| anchor | positive | source |
|---|---|---|
Finding maximum subarray sum | Question: Looking for feedback on a problem I solved in C++. It is a LeetCode problem, and I used divide and conquer to solve it.
53. Maximum Subarray
Given an integer array nums, find the contiguous subarray (containing at least one number) which has the largest sum and return its sum.
Example:
Input: [-2,1,-3,4,-1,2,1,-5,4],
Output: 6
Explanation: [4,-1,2,1] has the largest sum = 6.
Follow up:
If you have figured out the \$\mathcal{O}(n)\$ solution, try coding another solution using the divide and conquer approach, which is more subtle.
I am looking for feedback in terms of logic and also the implementation. If I search for a C++ solution online, I only see the problem solved using arrays, whereas I have solved it using a vector while utilizing iterators.
Any feedback would be great.
Thanks!
int FindMaximumSubarray(const vector<int> &vec) {
    if (vec.size() == 1) {
        return vec.at(0);
    }
    int midIndex = vec.size() / 2;
    vector<int> leftArray(vec.begin(), vec.begin() + midIndex);
    vector<int> rightArray(vec.begin() + midIndex, vec.end());
    int maximumSumLeftSubarray = FindMaximumSubarray(leftArray);
    int maximumSumRightSubarray = FindMaximumSubarray(rightArray);
    int maximumSumCrossingSubarray = FindMaximumSubarrayCrossing(vec);
    return FindMaximumNumber(maximumSumLeftSubarray,
                             maximumSumRightSubarray,
                             maximumSumCrossingSubarray);
}

int FindMaximumSubarrayCrossing(const vector<int> &vec) {
    int midIndex = vec.size() / 2, leftSum = INT_MIN, rightSum = INT_MIN, sum = 0;
    for (auto itr = vec.rbegin() + midIndex; itr != vec.rend(); ++itr) {
        sum += *itr;
        if (sum > leftSum) leftSum = sum;
    }
    sum = 0;
    for (auto itr = vec.begin() + midIndex + 1; itr != vec.end(); ++itr) {
        sum += *itr;
        if (sum > rightSum) rightSum = sum;
    }
    if (leftSum == INT_MIN || rightSum == INT_MIN) {
        return (leftSum == INT_MIN) ? rightSum : leftSum;
    }
    return (leftSum + rightSum);
}

int FindMaximumNumber(const int &a, const int &b, const int &c) {
    if (a >= b && a >= c) return a;
    if (b >= a && b >= c) return b;
    if (c >= a && c >= b) return c;
}
Answer: You're creating a bunch of unnecessary vector copies. Try passing iterators into FindMaximumSubarray instead of a vector.
You can find the max of an initializer list of numbers using std::max.
You don't need to pass ints as const refs.
Are you sure this is more performant than a linear solution? What is your reasoning? Can we see your linear version?
Your code looks like it might have potential overflow errors. Maybe that's not important. | {
"domain": "codereview.stackexchange",
"id": 37081,
"tags": "c++"
} |
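The refactor suggested in the answer could look roughly like the sketch below (a minimal illustration, not the original poster's code; the function and variable names are my own). It passes a half-open iterator range instead of copying sub-vectors, and uses `std::max` with an initializer list in place of `FindMaximumNumber`. Since both halves of the crossing computation are non-empty, the `INT_MIN` special case also disappears:

```cpp
#include <algorithm>
#include <cassert>
#include <climits>
#include <vector>

using Iter = std::vector<int>::const_iterator;

// Largest sum of a subarray that straddles mid: a non-empty suffix of
// [first, mid) plus a non-empty prefix of [mid, last).
int MaxCrossingSum(Iter first, Iter mid, Iter last) {
    int bestLeft = INT_MIN, sum = 0;
    for (Iter it = mid; it != first; ) {
        sum += *--it;                        // scan left half right-to-left
        bestLeft = std::max(bestLeft, sum);
    }
    int bestRight = INT_MIN;
    sum = 0;
    for (Iter it = mid; it != last; ++it) {  // scan right half left-to-right
        sum += *it;
        bestRight = std::max(bestRight, sum);
    }
    return bestLeft + bestRight;
}

// Divide and conquer over [first, last); no sub-vector copies are made.
int MaxSubarraySum(Iter first, Iter last) {
    if (last - first == 1) return *first;
    Iter mid = first + (last - first) / 2;
    return std::max({MaxSubarraySum(first, mid),
                     MaxSubarraySum(mid, last),
                     MaxCrossingSum(first, mid, last)});
}
```

Called as `MaxSubarraySum(vec.begin(), vec.end())`; like the original, it assumes a non-empty input and can still overflow `int` for large sums.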
Water world in gazebo for surface vehicle | Question:
I want to create a water world in Gazebo. I tried to use the buoyancy plugin and the drag plugin for it, but the buoyancy plugin acts on the entire world. I want buoyancy only up to a certain water level. How can this be done?
Originally posted by uchiha_saail on Gazebo Answers with karma: 3 on 2019-09-01
Post score: 0
Answer:
You can take a look at the VRX project, where Gazebo is used to simulate a maritime environment. There is a buoyancy plugin that takes into account the level of the water.
https://bitbucket.org/osrf/vrx
Originally posted by Carlos Agüero with karma: 626 on 2019-09-03
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by uchiha_saail on 2019-09-05:
I have gone through the VRX plugin, but I just want a simple buoyancy plugin up to the level of still water, with the drag force of the water acting on it. | {
"domain": "robotics.stackexchange",
"id": 4431,
"tags": "gazebo"
} |
Queries regarding Light | Question:
In the given picture, why has light been depicted this way? We know that light travels and propagates in a straight line in a given medium having the same refractive index throughout.
When we refer to the wavelength of light, do we refer to the wavelength of the wave-like pattern created by the oscillation of the electric field or of the magnetic field?
Answer: 1.) That rough sketch represents the amplitude of the electric or magnetic field, not the path the light follows.
2.) As the image shows, both fields have the same wavelength. | {
"domain": "physics.stackexchange",
"id": 94071,
"tags": "electromagnetic-radiation"
} |
Making some columns of a dataframe numeric | Question: I have two dataframes
> head(a[1:4,1:4])
Tumor_Sample_Barcode Chromosome Position End_Position
1: A1 chr4 90169866 90169866
2: A1 chr11 60235747 60235747
3: A1 chr1 983023 983023
4: A1 chr1 11346060 11346060
>
> head(c)
Chromosome Position VAF
1: chrM 6691 0.610284
2: chrM 14503 0.693325
3: chr1 31412236 0.645161
4: chr1 55693305 0.602941
5: chr1 69963412 0.709302
6: chr1 72608266 0.720000
>
> str(c)
Classes ‘data.table’ and 'data.frame': 44175 obs. of 3 variables:
$ Chromosome: chr "chrM" "chrM" "chr1" "chr1" ...
$ Position : num 6691 14503 31412236 55693305 69963412 ...
$ VAF : num 0.61 0.693 0.645 0.603 0.709 ...
- attr(*, ".internal.selfref")=<externalptr>
I want to merge them like this, but it complains about incompatible join types:
> a=a[c, on = c("Chromosome","Position"), VAF := i.VAF]
Error in bmerge(i, x, leftcols, rightcols, roll, rollends, nomatch, mult, :
Incompatible join types: x.Position (character) and i.Position (integer)
I have tried things like the following:
> c=c[!is.na(as.numeric(as.character(c$Position))), ]
>
> c=c[!is.na(as.numeric(as.character(c$VAF))), ]
>
> c$VAF <- as.numeric(as.character(c$VAF))
c$Position <- as.numeric(as.character(c$Position))
but none of it works.
Answer: Your issue seems to be that the column Position is a character column in a. You should try:
library(data.table)
a$Position <- as.numeric(a$Position)
a[c, on = c("Chromosome","Position"), VAF := i.VAF] | {
"domain": "bioinformatics.stackexchange",
"id": 1223,
"tags": "r"
} |
Cylinder on the bed of a truck, rolling without slipping | Question:
I tried to solve a physics problem that involves a cylinder on an accelerating truck. I'm supposed to find the acceleration of the center of mass of the cylinder, given the acceleration of the truck and the radius and mass of the cylinder. The cylinder rolls without slipping.
In the solution it is said that as the rolling occurs without slipping, the point on the cylinder that is in contact with the truck accelerates at the same rate as the truck. Why is this true? I can understand that their speed should be the same as they are momentarily in contact, but why their accelerations?
I've been taught that if an object is under forces, regardless of the positions of the forces wrt. the center of mass, the acceleration of the COM is the sum of these forces. So if one point on the cylinder accelerates at the same rate as the truck, why doesn't the COM of the cylinder accelerate also at the same rate as the truck?
Answer: The place to start on this problem is with the kinematics. Let v represent the velocity of the CM of the cylinder, and let $\omega$ represent the counterclockwise rate of rotation of the cylinder. Then the tangential velocity at the bottom of the cylinder is $$v_T=v+\omega R$$ where R is the radius of the cylinder. Since the cylinder does not slip relative to the truck bed, this tangential velocity of the cylinder must match the velocity of the truck bed at all times. From this it follows that the acceleration of the cylinder a, the angular acceleration of the cylinder $\alpha$, and the acceleration of the truck $a_T$ must be related by $$a+\alpha R=a_T$$
Now for the mechanics: If F represents the forward frictional force exerted by the truck bed on the cylinder, F is responsible for both the linear acceleration of the center of mass and also for the angular acceleration. Therefore, $$a=\frac{F}{m}$$and $$\alpha=\frac{FR}{\frac{1}{2}mR^2}$$Combining the above equations, we have $$\frac{3F}{m}=a_T$$So, the tangential force is: $$F=m\frac{a_T}{3}$$ From this, it follows that the acceleration of the center of mass is $$a=\frac{a_T}{3}$$ | {
"domain": "physics.stackexchange",
"id": 87597,
"tags": "newtonian-mechanics, rotational-dynamics"
} |
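The relations derived in the answer can be sanity-checked with numbers (my own sketch, not from the original post; the values of m, R and a_T are arbitrary illustrative choices, and a uniform solid cylinder with moment of inertia (1/2)mR² is assumed):

```cpp
#include <cassert>
#include <cmath>

// Friction force from the answer: F = m * aT / 3.
double frictionForce(double m, double aT) { return m * aT / 3.0; }

// Linear acceleration of the center of mass: a = F / m.
double cmAcceleration(double m, double aT) { return frictionForce(m, aT) / m; }

// Angular acceleration: alpha = F * R / ((1/2) m R^2).
double angularAcceleration(double m, double R, double aT) {
    return frictionForce(m, aT) * R / (0.5 * m * R * R);
}
```

For any choice of values, a + alpha·R reproduces a_T (the no-slip constraint) and a comes out as one third of the truck's acceleration.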
Is it possible to reconstruct an MSA with PSSM? | Question: I am thinking of reconstructing a multiple sequence alignment (MSA) of protein sequences from a position-specific scoring matrix (PSSM).
Is it possible? I suppose co-evolution information is lost in PSSM so maybe it is difficult?
Answer: In short, I'm afraid it isn't possible, because you lose the per-haplotype linkage, which is the core information and vital for any population-genetics or phylogenetic analysis.
A position-specific scoring matrix gives, for each alignment position, the frequency of each residue, e.g. A .60, C .20, G .20, T 0.
I tried to reproduce a position scoring matrix here, but the formatting didn't work out; in any case, the allele frequencies for each amino acid are given for each alignment position.
You could produce a consensus sequence which, albeit hypothetical, might be of some use in understanding the generic behaviour of all the sequences in your sample.
Here's a cartoon describing linkage and its importance.
For a multiple sequence alignment, of course, AB and ab are amino acid residues within the same locus. A PSSM could be used to construct a matrix to analyse an established alignment, but that's a fairly complicated phylogenetics calculation (and a separate question). | {
"domain": "bioinformatics.stackexchange",
"id": 2120,
"tags": "phylogenetics, proteins, phylogeny, pssm"
} |
Heading estimation with GPS heading | Question:
I'm tuning a Kalman filter for position estimation of an outdoor robot, and I'm seeing significant compass interference, which is producing large errors in the final position estimate.
My other sensors are: 3-axis gyro, 3-axis accel, GPS and a wheel speed sensor.
I'm considering an approach where I estimate heading based mostly on GPS heading while my vehicle is moving, instead of relying heavily on the compass for heading.
Has anyone else tried this, either with one of the existing ROS kalman filter nodes or with a custom filter?
Originally posted by ahendrix on ROS Answers with karma: 47576 on 2015-06-18
Post score: 1
Answer:
I haven't tried that exactly, but I have worked with multiple sources of heading information using robot_localization. You'd have to make sure your GPS driver (or some other node) converts your GPS heading data into a PoseWithCovarianceStamped message. You may also want to either stop sending GPS-sourced heading messages when the vehicle is at rest, or dynamically modify the covariances of the heading sensors based on your current state (moving, turning, resting, etc.). I'd also suggest turning relative mode on for both heading sources.
What is causing the compass interference? Is it distortions from the metal and electronics on board your robot, or is it due to external sources?
Originally posted by Tom Moore with karma: 13689 on 2015-06-18
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by ahendrix on 2015-06-18:
OK, I'll give that a try. My compass error is coming from external sources, so there isn't much I can do about it. (I've already calibrated out the hard-iron interference from my chassis)
Comment by M@t on 2016-08-08:
For reference, I calculate the heading from sequential GPS position measurements (and use a rolling average so it's only valid in straight lines), and use this to check the IMU heading. Compared to GPS, my IMU (built into the CR Jackal) will naturally drift by 3-7 degrees.
Comment by ahendrix on 2016-08-08:
I have a GPS which provides heading directly as part of the GPRMC message.
Comment by tomy983 on 2022-11-13:
Hi ahendrix, I have the same problem as yours. My IMU works quite well most of the time, but in an area where I test my robot the magnetometer goes crazy, also causing crashes. I have mounted a ZED-F9P, which can output a heading and its precision. When the robot is stationary, the heading is not updated and the precision goes to 180 degrees... so this part is kind of sorted... How did you go about the direction of movement? I mean, it's probably all good when driving straight ahead or through smooth turns, but what about backwards maneuvers? | {
"domain": "robotics.stackexchange",
"id": 21960,
"tags": "navigation, ekf, robot-localization"
} |
Semiconductors, Solid-State Physics | Question: We know that conductors conduct because their valence energy band is "half" full, and k (the "wave vector") can increase, so under the influence of an electric field the electrons can "move". Similarly, insulators won't conduct because their valence band is full and we have a "big" energy gap between the valence and conductivity band.
My question: why, under the influence of an electric field, can't electrons "jump" onto the next energy band (the conductivity band), where their k ("wave vector") can increase "more freely" and therefore conduct electricity?
Answer: You said it here:
we have a "big" energy gap between the valence and conductivity band.
If you supply enough energy, electrons will jump to the conduction band (become excited). Semiconductors and insulators do not differ in kind: a material is simply called an insulator when the gap is big, and a semiconductor when it is small.
Semiconducting materials may react with sunlight (the photovoltaic effect in solar cells), while insulators (like air) become conducting when you see a lightning flash.
"domain": "physics.stackexchange",
"id": 29050,
"tags": "solid-state-physics, semiconductor-physics, conductors, insulators"
} |
Need a malleable solid which dissolves in HCl, acetone, or water | Question: I need to temporarily plug the end of a 3 mm ID copper tube with some sort of malleable solid solute that I can dissolve at a later point by flushing with a solvent/acid via syringe. I need to keep the area at around room temperature, so I cannot use dry ice or excessive heat. I'm using it to prevent liquid silicone from seeping up the end of a tiny copper tube while I seal the tube, so it needs to be relatively inert to both the copper tube and the silicone.
I've tried using Styrofoam and acetone (which is almost ideal), but this leaves behind a residue of slimy gunk that is difficult to remove, and I need the tube to be clear of foreign residues. Low melting point solid materials may be possible to use, but they can't leave behind any residue for my purposes. I can obtain any special chemicals through my university.
Answer: If the cavity into which you insert the silicone is not airtight, I would use an air pump to blow through the tube.
The easiest chemical I could imagine using is a low-melting-point salt such as ammonium acetate. If the tube is removable, you dip it in the liquid, and upon solidification you can remove the excess. It's water soluble.
Sugar toffee will have a similar behavior and is malleable at a certain proportion of water.
Plumbers use bread. | {
"domain": "chemistry.stackexchange",
"id": 10219,
"tags": "physical-chemistry, acid-base, solubility"
} |
Square bracket notation for dimensions and units: usage and conventions | Question: One of the most useful tools in dimensional analysis is the use of square brackets around some physical quantity $q$ to denote its dimension as
$$[q].$$
However, the precise meaning of this symbol varies from source to source; there are a few possible interpretations and few strict guidelines. What conventions are there, who uses them, and when am I obliged to follow them?
Answer: I had an extensive look around, and I turned up four conventions. This included a short poll of google, other questions on this and other sites, and multiple standards documents. (I make no claim of exhaustiveness or infallibility, by the way.)
Using $[q]$ to denote
commensurability
as an equivalence relation.
That is, if $q$ and $p$ have the same physical dimension $Q$, one might write $$[q]=[p]=[Q],$$ but no bracketed quantity is ever shown equal to an unbracketed symbol. Thus, if $v$ is a speed one might write $[v]=[L]/[T]$ or $[v]=[L/T]$ or $[v]=[L\,T^{-1}]$ or some equivalent construct. You can see $L$ and $T$ as denoting the dimension or just "some length" and "some time". To see how you would work without evaluating braces, here is a proof that the
fine structure constant is dimensionless:
$$
[\alpha]=\left[\frac{e^2/4\pi\epsilon_0}{\hbar c}\right]=\frac{[F\,r^2]}{[E/\omega][r/t]}
=\frac{[F r][\omega t]}{[E]}=\frac{[E]}{[E]}[1]=[1]
,$$
so $\alpha$ and $1$ are commensurable. Some examples are
this,
this,
this, or this.
Using $[q]$ to denote the dimensions of a quantity. Thus if the physical quantity $q$ has dimension $Q$, one writes $$[q]=Q.$$ A velocity would then be written as $[v]=L\,T^{-1}$ or its equivalents. This seems to be the leading candidate on Google, closely followed by the convention 1. Some examples are
this,
this,
this,
this, and this. This is my personal favourite, as I find that it permits the most flexibility without horribly formalizing the whole business (though I will often skip the actual evaluation of the braces, essentially using convention 1).
Using $[q]$ to denote the units of a quantity. Here if $q$ can be written as a multiple of some unit $\text q$, you write $$[q]=\text q.$$ This is contingent on what unit system you choose but different units for the same dimension are of course equivalent. When this approach is used, the notation $\{q\}=q/[q]$ is sometimes used to denote the purely numerical value of the quantity. A speed would be written, for example, as $[v]=\text m\,\text s^{-1}$. This use is endorsed by the
NIST
Guide to the SI,
section 7.1,
the
IUPAC guide
Quantities, units and symbols in physical chemistry, the IUPAP guide Symbols, units, nomenclature and fundamental constants in physics, as well as the
ISO standard
ISO 80000-1:2009, section 3.20. (That document is very paywalled, but chapters 0-3 are available for free preview
here.)
Google results seem relatively scarce, with
this and
this
as examples, although that could simply be poor representation. (There is also
this document, which uses the notation $[\text W]=[\text V][\text A]$, but I think this is quite uncommon as well as not very useful.)
Using $\operatorname{dim}(q)$ to denote the dimensions of a quantity. This is the notation set as standard by the
Bureau International des Poids et Mesures
in the
SI Brochure
(8th edition, chapter 1.3, p. 105). This also sets roman sans-serif as the standard for physical dimensions, so $\mathsf{Q}$ would be the dimension of $q$ and you write $$\operatorname{dim}(q)=\mathsf Q.$$ (To typeset roman sans-serif in TeX or MathJax, use \mathsf; note that this is distinct from \operatorname, which is used for $\operatorname{dim}$ and would produce $\operatorname{Q}$ through \operatorname{Q}.)
A real-world usage example is thus $\operatorname{dim}(v)=\mathsf L\,\mathsf T^{-1}$ for a velocity.
This use is set as standard by
ISO 80000-1:2009, section 3.7, and it is also endorsed by the NIST Guide to the SI,
section 7.14.
(NIST also reproduces the BIPM text in p. 16 of
The International System of Units.)
Examples of this online are
this,
this,
this and
this; I note, though, that most examples I found are technical, while pedagogical examples tended to use conventions 1 and 2. (This also feels less common, but that's hard to judge.)
I also find it important to add that few academic journals impose standards in this area. As a working physicist in academia, the style guidance of one's chosen journal is often the only style standard one is really obliged to follow. The style manuals of the American Physical Society, the Institute of Physics, Reviews of Modern Physics, Nature Physics and several Elsevier journals have no mention of which convention should be used in their publications.
As was made clear in Should we necessarily express the dimensions of a physical quantity within square brackets?, the choice of what the symbol $[q]$ means is entirely a matter of convention. The most important thing is that your usage is consistent. Do not jump conventions within a document. If your work is closely allied to other resources (e.g. textbooks) that use a particular convention, it is best to stick to that, to avoid confusing your students. If you are presenting an exam, use the notations used in your course to avoid confusing your examiner, or - at the very least - define all non-standard notation you use.
So, what convention should you use? There is really no requirement to use any one of the above (and you can even make up your own notation, as long as you define it appropriately and don't overdo it). This is really less of an issue than it looks, as there's actually rather rarely a need to use this notation in print except in pedagogic settings. (That's not to say that professional physicists don't use it in practice: we do use it, often, in everyday life, but it's mostly informal work used on the side to keep calculations straight or as exploratory scaling arguments when starting work on a problem, for example.)
If your work is a commercial report, or similar document, and it could potentially have legal repercussions, then you should check whether there is a legal standard you should be using, which will then probably be conventions 3 and 4. Academically, you are typically free to choose the conventions you find most convenient as long as you use them properly and you avoid conflicts with other allied resources. If you are publishing in a journal or as part of a bigger work, you should check if they provide style guidance on this, though as I said journals rarely take stances on this. (You should really be reading the style guidance anyway as part of your submission process, though.) For your informal work, you should use whatever you're most comfortable with!
Finally, if you have questions about the typesetting of these notations in LaTeX, you should go to How should I typeset the physical dimensions of quantities? on TeX.SE. | {
"domain": "physics.stackexchange",
"id": 9528,
"tags": "conventions, units, notation, dimensional-analysis"
} |
Finding area and perimeter of rectangles and circles that are instances of interface Region | Question: The missing code in task 1 was the methods area(), perimeter() and toString().
In task 2 I know that using a built-in collection such as ArrayList would be more efficient and simpler, so any new ideas and implementations are appreciated.
Make the classes Circle and Rectangle complete: write the missing code.
A static method selectRectangle accepts an array of regions of type Region and returns an array that contains only those that are of type Rectangle. Create that method.
Create an array that contains both circles (objects of type Circle) and rectangles (objects of type Rectangle). Write code that determines and shows the perimeter and area of these regions. Call the method selectRectangles with the created array as argument.
public interface Region {
    double area();
    double perimeter();
}

class Circle implements Region {
    private double radius;

    public Circle(double radius) {
        this.radius = radius;
    }

    @Override
    public double area() {
        return Math.PI * radius;
    }

    @Override
    public double perimeter() {
        return Math.PI * 2 * radius;
    }
}

class Rectangle implements Region {
    private double with;
    private double length;

    public Rectangle(double with, double length) {
        this.length = length;
        this.with = with;
    }

    @Override
    public double area() {
        return length * with;
    }

    @Override
    public double perimeter() {
        return 2 * (length + with);
    }

    public String toString() {
        return "<" + this.with + this.with + this.area() + this.perimeter() + ">";
    }

    public static Rectangle[] selectRectangle(Region[] region) {
        int countRect = 0;
        for (int i = 0; i < region.length; i++) {
            if (region[i] instanceof Rectangle) {
                countRect++;
            }
        }
        Rectangle[] rect = new Rectangle[countRect];
        int companionVar = 0;
        for (int i = 0; i < region.length; i++) {
            if (region[i] instanceof Rectangle) {
                rect[companionVar++] = (Rectangle) region[i];
            }
        }
        return rect;
    }

    public static void main(String[] args) {
        Region[] region = {new Circle(5),
                           new Rectangle(2, 4),
                           new Circle(2),
                           new Rectangle(7, 9),
                           new Circle(6)
        };
        for (int i = 0; i < region.length; i++) {
            System.out.println(region[i].area() + region[i].perimeter());
        }
        Rectangle[] rectangle = selectRectangle(region);
    }
}
Answer: In my view, you cannot implement area() and perimeter() any more clearly or simply.
In the toString() method I would separate the values somehow, and also not print with twice while omitting length entirely.
Here is my suggestion:
public String toString() {
    return "<" + with + " x " + length + ", area:" + this.area() + " perimeter: " + perimeter() + ">";
}
In Rectangle[] selectRectangle(Region[] region) I would use an ArrayList, which avoids the extra loop that counts the number of Rectangles.
public static Rectangle[] selectRectangle(Region[] region) {
    ArrayList<Rectangle> result = new ArrayList<>();
    final int lastIndex = region.length;
    for (int i = 0; i < lastIndex; i++) {
        if (region[i] instanceof Rectangle) {
            result.add((Rectangle) region[i]);
        }
    }
    return result.toArray(new Rectangle[result.size()]);
} | {
"domain": "codereview.stackexchange",
"id": 25087,
"tags": "java, programming-challenge"
} |
build VS build_depend | Question:
Hello,
I saw in different tutorials two different ways to add dependencies in package.xml:
<build_depend>package</build_depend>
<build_export_depend>package</build_export_depend>
<exec_depend>package</exec_depend>
and
<build>package</build>
I would like to know the difference, and which one I should use.
Thank you.
Originally posted by Mickael on ROS Answers with karma: 3 on 2020-05-09
Post score: 0
Answer:
I would like to know the difference, and which one I should use.
The difference is that build_depend et al. are legal values, while build is not a legal value for an element in a package manifest.
Perhaps the author of the "different tutorials" you mention (please always link to what you mention, we cannot guess) intended to actually write depend, as that would be a legal value.
See REP-149: Package Manifest Format Three Specification - Dependency tags for allowed elements and what their purpose is.
Originally posted by gvdhoorn with karma: 86574 on 2020-05-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 34931,
"tags": "ros, package.xml, build"
} |
Complexity of non-commutative range multiplication operation | Question: Let $G$ be a non-commutative group, such as $\operatorname{GL}_k(\mathbb F_p)$. Given a list $\vec{a}=(a_1, \cdots, a_n) \in G^n$, we use $(p, q, x)$ to denote the operation that maps $\vec{a}$ to $(a_1, \cdots, a_{p-1}, a_p x, \cdots, a_q x, a_{q+1}, \cdots, a_n)$. Do we have an algorithm running in time complexity $O(n^{1+\epsilon})$, such that when input a sequence of operations $(p_i, q_i, x_i)_{i \leqslant l}$ where $l = O(n), p_i \leqslant q_i \leqslant n$, it computes the result of these operations on list $(1, 1, \cdots, 1)$?
When the group operation is commutative, this can be computed using a textbook segment tree or Fenwick tree, where each operation takes $O(\log n)$ time, and all $l$ operations take $O(n \log n)$ time.
Answer: Yes, it can be done in $O(n \log^2 n)$ time. Build a balanced binary tree over the $n$ leaves. Store a group element in each node. Initialize the tree by storing the element $1$ in each node.
Identify each leaf with an index $j$ in the range $1 \le j \le n$. The idea is that, at any point during the execution of the algorithm, the value of the $j$th list element will be $g_1 g_2 \dots g_k$, where $g_1,\dots,g_k$ are the group elements obtained by traversing the tree starting at leaf $j$ and proceeding upwards until you reach the root.
Define the pushdown($v$) operation as follows: given an internal node $v$ of the tree, let $\ell,r$ be $v$'s left and right children, and $g_v,g_\ell,g_r$ be the group elements stored at $v,\ell,r$. Then we replace $g_v,g_\ell,g_r$ with $1,g_\ell g_v, g_r g_v$. This takes $O(1)$ time.
Define the megapush($w$) operation as follows: we iterate through the nodes on the path from the root to $w$, executing pushdown($v$) on each such node $v$ (starting with the root first and proceeding downwards). This takes $O(\log n)$ time, as the height of the tree is $O(\log n)$, since the tree is balanced.
Identify each internal node with a range of indices, namely, the range of indices associated with the leaves that are descendants of that node. Any range of indices $[\ell,u]$ can be written as the disjoint union of $O(\log n)$ ranges $[\ell_i,u_i]$, such that each $[\ell_i,u_i]$ is the range associated with a node of the tree.
To handle the operation $(p,q,x)$, express the range $[p,q+1]$ as a disjoint union of $O(\log n)$ ranges $[\ell_i,u_i]$ as above. For each node $v$ associated with one of these ranges, first execute megapush($v$), then multiply the group element stored at $v$ by $x$. In other words, if $g_v$ was previously stored at that node (after the megapush), replace it with $g_v x$. This takes $O(\log^2 n)$ time, since we call megapush on $O(\log n)$ nodes.
Finally, after all of these operations, you can traverse the nodes of the tree in pre-order, executing pushdown($v$) on each node $v$. (In other words, first you execute pushdown on the root, then on its two children, then on its four grandchildren, and so on.)
Then you can read off the results, as the leaf associated with index $j$ will store the value of the $j$th list item after executing all operations.
Each operation takes $O(\log^2 n)$ time ($O(\log n)$ time to execute megapush on each of $O(\log n)$ nodes), and there are at most $n$ operations, so the operations take $O(n \log^2 n)$ time. All other operations can be done in $O(n)$ time.
Therefore, the total running time is $O(n \log^2 n) = O(n^{1+o(1)})$. | {
"domain": "cs.stackexchange",
"id": 21563,
"tags": "algorithms, data-structures"
} |
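The scheme in the answer can be sketched as follows (my own illustrative implementation, not from the original post). To keep the non-commutativity visible and easy to test, the group is replaced here by the free monoid of strings under concatenation, so right-multiplying a range by x just appends x to each element; for a group such as $\operatorname{GL}_k(\mathbb F_p)$, each `+=` below would be an O(1) matrix multiplication instead:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Lazy tree over a non-commutative monoid (strings under concatenation,
// identity ""). g[v] holds a pending right-multiplier for v's whole subtree;
// the value of leaf j is the product of the g's on the path leaf -> root.
struct RangeMultiplier {
    int n;
    std::vector<std::string> g;
    explicit RangeMultiplier(int n) : n(n), g(4 * n) {}

    // pushdown(v): move v's element onto its two children.
    void pushdown(int v) {
        g[2 * v] += g[v];       // child first, then parent: g_child * g_v
        g[2 * v + 1] += g[v];
        g[v].clear();           // back to the identity
    }

    // Operation (p, q, x): right-multiply elements p..q (1-based) by x.
    void apply(int p, int q, const std::string& x) { apply(1, 1, n, p, q, x); }

    // The recursion performs the "megapush" implicitly: every partially
    // covered node on the way down is pushed before its children are visited,
    // so pending multipliers are always applied in operation order.
    void apply(int v, int lo, int hi, int p, int q, const std::string& x) {
        if (q < lo || hi < p) return;
        if (p <= lo && hi <= q) { g[v] += x; return; }  // fully covered node
        pushdown(v);
        int mid = (lo + hi) / 2;
        apply(2 * v, lo, mid, p, q, x);
        apply(2 * v + 1, mid + 1, hi, p, q, x);
    }

    // Final pre-order pushdown; results are then read off the leaves.
    std::vector<std::string> finish() {
        std::vector<std::string> out(n);
        collect(1, 1, n, out);
        return out;
    }
    void collect(int v, int lo, int hi, std::vector<std::string>& out) {
        if (lo == hi) { out[lo - 1] = g[v]; return; }
        pushdown(v);
        int mid = (lo + hi) / 2;
        collect(2 * v, lo, mid, out);
        collect(2 * v + 1, mid + 1, hi, out);
    }
};
```

Note that this recursive formulation folds the megapush into the descent, touching only $O(\log n)$ nodes per operation; the $O(\log^2 n)$ figure in the answer comes from making the megapush explicit for each covering node.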
The maximum depth possible on quantum computers | Question: I hope you don't mind me having two questions.
Firstly, I was running a Qiskit HHL simulation on a 12x12 matrix and a 12x1 vector, leading to a 16x16 matrix after expansion, and it resulted in a circuit width of 10 qubits and depth of 198 gates.
What is the maximum depth possible on a quantum computer?
Secondly, on a smaller 2x2 HHL problem the depth is 326 with a width of 7 qubits. Are my results wrong? It seems odd for the larger problem to have a lower depth than such a small one.
[1] https://qiskit.org/textbook/ch-applications/hhl_tutorial.html#implementationsim
Answer: This is quite a broad question. In fact it seems that you have 2 questions:
How can a smaller (in terms of size) matrix result in a longer quantum circuit?
What is the maximum depth a current quantum computer can execute (reliably)?
About question 1, the number of quantum gates and the depth of the generated quantum circuit depend a lot on the matrix $A$ of your linear system, on the method used to implement the evolution $e^{-iAt}$, and on how you "load" the right-hand side $b$ into a quantum register.
Efficient methods to construct the quantum circuit that implements $e^{-iAt}$ exist when $A$ satisfies some properties (like sparsity or locality). But here, efficient does not mean NISQ-compliant; it only means that the circuits generated by the method have a number of quantum gates that scales well with the size of the matrix. Some examples of generic methods can be found here, and an example of a hand-crafted method for specific matrices has been written here.
Another point that might greatly impact the final depth of the circuit is the encoding of $b$ into a quantum register.
It is not possible to know if your issue is caused by one of the previous points or not without the actual matrix and right-hand side you used.
About your second question, have a look at this answer. Do not use the numbers in it as they are probably outdated, but you can use the method with up-to-date error rates.
The short answer is: for most quantum circuits, depth is not the important figure; CNOT count and CNOT error rates seem to have a greater impact. | {
"domain": "quantumcomputing.stackexchange",
"id": 1640,
"tags": "qiskit, hhl-algorithm"
} |
Is Feynman's explanation of how the moon stays in orbit wrong? | Question: Yesterday, I understood what it means to say that the moon is constantly falling (from a lecture by Richard Feynman). In the picture below there is the moon in green which is orbiting the earth in grey. Now the moon wants to go at a tangent and travel along the arrow coming out of it. Say after one second it arrives at the red disc. Due to gravity it falls down toward the earth and ends up at the blue disc. The amount that it falls makes it reach the orbital path. So the moon is constantly falling into the orbital path, which is what makes it orbit.
The trouble I'm having is: shouldn't the amount of "fall" travelled by the moon increase over time? The moon's speed toward the earth accelerates but its tangential velocity is constant. So how can the two velocities stay in balance? This model assumes that the moon will always fall the same distance every second.
So is the model wrong or am I missing something?
Extra points to whoever explains: how come when you do the calculation that Feynman does in the lecture, to find the acceleration due to gravity on earth's surface, you get half the acceleration you're supposed to get (Feynman says that the acceleration is $16 ~\mathrm{ft}/\mathrm{s}^2$, but it's actually twice that).
Answer: What's actually happening is something more like this:
Here, $x_0$ and $v_0$ are the initial position and velocity of the moon, $a_0$ is the acceleration experienced by the moon due to gravity at $x_0$, and $\Delta t$ is a small time step.
In the absence of gravity, the moon would travel at the constant velocity $v_0$, and would thus move a distance of $v_0 \Delta t$ during the first time step, as shown by the arrow from the green circle to the red one. However, as it moves, the moon is also falling under gravity. Thus, the actual distance it travels, assuming the gravitational acceleration stays approximately constant, is $v_0 \Delta t + \frac12 a_0 \Delta t^2$ plus some higher-order terms caused by the change in the acceleration over time, which I'll neglect.
However, the moon's velocity is also changing due to gravity. Assuming that the change in the gravitational acceleration is approximately linear, the new velocity of the moon, when it's at the blue circle marking its new position $x_1$ after the first time step, is $v_1 = v_0 + \frac12(a_0 + a_1)\Delta t$. Thus, after the first time step, the moon is no longer moving horizontally towards the gray circle, but again along the circle's tangent towards the spot marked with the second red circle.
Over the second time step, the moon again starts off moving towards the next red circle, but falls down to the blue circle due to gravity. In the process, its velocity also changes, so that it's now moving towards the third red circle, and so on.
The key thing to note is that, as the moon moves along its circular path, the acceleration due to gravity is always orthogonal to the moon's velocity. Thus, while the moon's velocity vector changes, its magnitude does not.
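A quick numerical check of this claim, stepping position and velocity exactly as described above (a sketch in toy units, with the earth's $GM$, the orbital radius, and the orbital speed all set to 1):

```python
import math

def accel(x, y):
    # inverse-square gravity toward the origin, with GM = 1
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

# start on a circular orbit: position (1, 0), tangential velocity (0, 1)
x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
ax, ay = accel(x, y)
dt = 0.01
for _ in range(1000):
    # move using the current velocity plus the (1/2) a dt^2 "fall"
    x += vx * dt + 0.5 * ax * dt * dt
    y += vy * dt + 0.5 * ay * dt * dt
    nax, nay = accel(x, y)
    # update the velocity with the average of old and new acceleration
    vx += 0.5 * (ax + nax) * dt
    vy += 0.5 * (ay + nay) * dt
    ax, ay = nax, nay

speed = math.hypot(vx, vy)
radius = math.hypot(x, y)
```

After more than a full revolution, both the speed and the orbital radius stay very close to 1: the velocity vector turns, but its magnitude doesn't change.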
Ps. Of course, the picture I drew and described above, with its discrete time steps, is just an approximation of the true physics, where the position, velocity and acceleration of the moon all change continuously over time. While it is indeed a valid approximation, in the sense that we recover the correct differential equations of motion from it if we take the limit as $\Delta t$ tends towards zero, it's in that sense no more or less valid than any other such approximation, of which there are infinitely many.
However, I didn't just pull the particular approximation I showed above out of a hat. I chose it because it actually corresponds to a very nice method of numerically solving such equations of motion, known as the velocity Verlet method. The neat thing about the Verlet method is that it's a symplectic integrator, meaning that it conserves a quantity approximating the total energy of the system. In particular, this means that, if we use the velocity Verlet approximation to simulate the motion of the moon, it actually will stay in a stable orbit even if the time step is rather large, as it is in the picture above. | {
"domain": "physics.stackexchange",
"id": 58753,
"tags": "newtonian-gravity, orbital-motion"
} |
Would a universe with special-relativistic gravity make sense? | Question: Coulomb's law in electrostatics in analogous with Newtonian gravity. It's pretty clear neither of these can be used in a universe that obeys special relativity. They both must be modified to avoid instantaneous communication (see this question) among other problems. For electromagnetism, we get maxwells equations. For gravity, we have the analogous gravitoelectromagnitism and gravitational waves. However, gravity also causes spacetime curvature.
But what would a "special relativistic" theory of gravity with flat spacetime look like?
Like our universe, Newtonian gravity would still be accurate for describing the solar system. Mercury's orbit would still precess due to special relativity, though maybe at a different rate; it would be the same as if we placed a charged test particle in a high-speed orbit around an electrostatic potential well.
Like our universe, the speed of gravity would equal the speed of light. Gravitational waves would be emitted by orbiting objects and cause the orbits to shrink over time. Since all forms of energy gravitate, the waves would still exert a force on matter, which would change the distance between two mirrors, and thus the waves would still be detectable.
Like in our universe, we still would get gravitational lensing.
Kinetic energy would still contribute to a star-cluster's total gravitational field. Also, an object in a gravity well would contribute less to the gravity felt by a distant observer.
The crucial difference, however, is that "special relativistic gravity" wouldn't have gravitational time dilation. There would be no black holes.
Besides the difficulty in setting up a big-bang/cosmology, what are the fundamental problems with a universe like this? My reasoning is as follows: Suppose an object with mass m resting in a deep gravity well (i.e. the center of a very dense globular cluster) were to convert its entire mass into a spherical pulse of light (an idealized matter-antimatter reaction), emitting m of energy to a nearby observer. Suppose it fell in from infinity. Since it gave up potential energy V to come to rest in the well, to conserve energy we must have m + E = V, where E is the light energy emitted to infinity. This necessitates gravitational redshift because E < m. Red-shifting without time dilation would mean that distant observers see each photon have less energy but the timing of any light pulses arriving is not slowed down (this is different from redshift in our universe). Although unusual, this doesn't seem to make for an obvious contradiction; it would be analogous to firing ultrarelativistic electrons out of an electrostatic well. Is there a nice argument to show such a universe couldn't exist?
Answer:
The crucial difference, however, is that "special relativistic gravity" wouldn't have gravitational time dilation.
Note that the equivalence principle implies gravitational time dilation, so if you don't have gravitational time dilation, then you have to violate the equivalence principle somehow.
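A one-paragraph sketch of that implication (idealized, to first order in $gh/c^2$): in a rocket accelerating at $g$, light emitted from the floor takes roughly $h/c$ to climb a height $h$, by which time the receiver has gained speed $\Delta v = gh/c$ away from the source, so it sees a Doppler redshift. The equivalence principle then demands the same shift in a uniform gravitational field, which is just a difference in clock rates:

```latex
\frac{\Delta\nu}{\nu} \;\approx\; -\frac{\Delta v}{c} \;=\; -\frac{gh}{c^{2}}
\qquad\Longrightarrow\qquad
\frac{\mathrm{d}\tau_{\text{high}}}{\mathrm{d}\tau_{\text{low}}} \;\approx\; 1 + \frac{gh}{c^{2}}
```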
But basically, what you suggest is the obvious first thing to try, and it's what Einstein tried first, ca. 1906. See Weinstein, "Einstein's Pathway to the Equivalence Principle 1905-1907," http://arxiv.org/abs/1208.5137 . She gives a translation of a lecture Einstein gave in 1933:
... I attempted to treat the law of gravity within the framework of the special theory of relativity.
Like most writers at the time, I tried to establish a field-law for gravitation, since it was no longer possible to introduce direct action at a distance, at least in any natural way, because of the abolition of the notion of absolute simultaneity.
The simplest thing was, of course, to retain the Laplacian scalar potential of gravity, and to complete the equation of Poisson in an obvious way by a term differentiated with respect to time in such a way, so that the special theory of relativity was satisfied. Also the law of motion of the mass point in a gravitational field had to be adapted to the special theory of relativity. The path here was less clearly marked out, since the inertial mass of a body could depend on the gravitational potential. In fact, this was to be expected on account of the inertia of energy.
These investigations, however, led to a result which raised my strong suspicions. According to classical mechanics, the vertical acceleration of a body in the vertical gravitational field is independent of the horizontal component of its velocity ... But according to the theory I tried, the acceleration of a falling body was not independent of its horizontal velocity, or the internal energy of the system.
This led him to the equivalence principle and its implication of gravitational time dilation, which he published in 1907. | {
"domain": "physics.stackexchange",
"id": 57452,
"tags": "special-relativity, newtonian-gravity"
} |
Reflection of light from a plane surface | Question: Why do we witness scattering of light from mobile screen even when we have a plane screen to reflect light falling from any source of light (tubelight) onto the screen?
Answer: I can't be sure without seeing the phenomenon, but believe that the colors that you are seeing do not originate in the glass window of your mobile phone. I think that they are caused by diffraction from the structures under the glass: the pixels of the display.
I will comment that dispersion of white light into colors does occur in a parallel slab. It is difficult to detect because all of the colors exit parallel to each other, but offset by a tiny amount. To the eye it probably appears that no dispersion is taking place. | {
"domain": "physics.stackexchange",
"id": 69438,
"tags": "optics, reflection"
} |
Optical Vortex Generation Using a Spiral Phase Plate (SPP) | Question: I know it is possible to generate an optical vortex using an SPP. The vortex will have a twist direction following the sign of its topological charge integer $\ell$. But how is the sign of $\ell$ assigned? Is it dependent on the SPP geometry? Does it mean that if I want to invert the sign of $\ell$ I should use an SPP with reversed geometry (reversed spiraling direction)?
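For intuition, here is a toy numerical sketch (idealized thin plate; the function name is hypothetical): an SPP imprints the transmission phase $e^{i\ell\varphi}$, and mirroring the plate's geometry (flipping its handedness) is equivalent to flipping the sign of $\ell$.

```python
import cmath
import math

def spp_phase(x, y, ell):
    # transmission factor of an idealized spiral phase plate: exp(i * ell * phi)
    phi = math.atan2(y, x)          # azimuthal angle around the beam axis
    return cmath.exp(1j * ell * phi)

# A mirror image of the plate (y -> -y) acts like a plate of opposite charge.
same = cmath.isclose(spp_phase(0.3, 0.7, 3), spp_phase(0.3, -0.7, -3))
```

The transmitted field has unit magnitude everywhere; only the phase winds, $\ell$ times per turn, with the winding sense set by the plate's handedness.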
Thanks for answering
Answer: Yes, the sign of the vortex charge depends on the SPP spiral's handedness. | {
"domain": "physics.stackexchange",
"id": 81240,
"tags": "optics, electromagnetic-radiation, vortex"
} |
Formal theory about explaining algorithms | Question: There is a lot of algorithms written in formal languages, but I have never seen any formal system which target is to explain or give a rationale behind an algorithm. It seems that when constructing examples, authors have to create both interesting, somewhat random, and small samples. I think this task could be formalized to some degree.
I wonder if there is such theory or maybe attempts to formalize explanations?
Edit: I was looking for a theory that describes how to teach other people an algorithm. As mentioned by jmite, it is possible to create a self-explaining algorithm using dependent types to solve this problem.
Answer: This might not be what you're looking for, but I think this is somewhat covered by the theory of dependent types, specifically intrinsically typed data structures.
The idea is that, instead of having an algorithm along with a proof of correctness, you start with a type that describes the properties of a solution. Then you simply write a program of that type, and you are guaranteed that it is correct.
Does this mean that you get correctness for free? Certainly not. But now the "why" at each stage is clear. The presentation of the algorithm and the explanation are one and the same. The notion of correctness is formalized, and baked into the language itself.
For intros on this, see:
https://cs.ru.nl/~wouters/Publications/ThePowerOfPi.pdf
http://homepages.inf.ed.ac.uk/wadler/papers/propositions-as-types/propositions-as-types.pdf
https://www.manning.com/books/type-driven-development-with-idris
https://mitpress.mit.edu/books/little-typer | {
"domain": "cstheory.stackexchange",
"id": 5079,
"tags": "soft-question, advice-request"
} |
Skeleton tracking using openni_tracker | Question:
I am trying to use openni_tracker to track skeletons. After running the command rosrun openni_tracker openni_tracker . the Kinect is able to detect new users but nothing happens after that.
Can anyone please tell me how I'm supposed to track skeletons with it?
Originally posted by fundamentals on ROS Answers with karma: 11 on 2017-01-20
Post score: 0
Answer:
The openni_tracker node publishes new users as frames. The node generates a frame for each limb (hand, arm, head, leg, and so on) of the detected person. You can check the result in RViz by adding a tf display and setting the Kinect's frame as the fixed frame.
Originally posted by Chaos with karma: 396 on 2017-01-20
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by fundamentals on 2017-01-20:
To visualize this on Rviz do I just run Rviz normally using the command rosrun rviz rviz ? @Chaos
Comment by Chaos on 2017-01-23:
You need also to configure RVIZ to visualize frames. Press the add button on the bottom left, search for "tf" and add it. Make sure you set the right fixed frame. | {
"domain": "robotics.stackexchange",
"id": 26781,
"tags": "openi-tracker"
} |
install sound_play in fuerte | Question:
I'm trying to use sound_play package in fuerte.
However, I cannot find the sound_play package in my desktop-full ros fuerte. Do anyone know how to install the sound_play package?
Originally posted by ldsrogan on ROS Answers with karma: 1 on 2013-10-23
Post score: 0
Answer:
As you can see here: http://wiki.ros.org/sound_play
Sound_play is part of the audio_common group and can be installed for fuerte via
apt-get install ros-fuerte-audio-common
Originally posted by KruseT with karma: 7848 on 2013-10-25
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 15947,
"tags": "ros, ros-fuerte, sound-play"
} |
Work done by spring when compressed by a ball | Question: Let us assume a vertical spring is fixed to the ceiling. A ball is thrown upward and compresses the spring. What is the work done by the spring in this case?
My understanding uses the work-energy theorem. Let $v$ be the velocity of the ball when it is just about to hit the spring and let the spring be compressed by $x$. Invoking the work-energy theorem, we get $W_{\mathrm{spring}}+W_{\mathrm{gravity}}=-\frac{1}{2}mv^2$, and if the mass is $m$, the equation becomes $W_{\mathrm{spring}}=\frac{1}{2}mv^2-mgx$. But according to a book, the answer was given as just $\frac{1}{2}mv^2$. I don't understand why that's the case. Is there any flaw in my understanding? If so, please enlighten me.
Answer: The question is not clear enough as to whether or not to include the gravitational force but even then I think that the book answer is incorrect.
Consider the system as the ball alone, then there are two external forces acting on the ball; the downward force due to the spring and the downward force due to the gravitational attraction of the Earth, magnitude $mg$.
The work done by the gravitational force is $-mgx$ (negative because force is downwards and displacement is upwards).
Let the work done by the spring (on the ball) be $W_{\rm spring}$ and the change in the kinetic energy of the ball is $0-\frac 12 mv^2 = -\frac 12 mv^2$.
Applying the work-energy theorem gives $W_{\rm spring}+(-mgx) = -\frac 12mv^2\Rightarrow W_{\rm spring}=+mgx-\frac 12mv^2$
Note the signs of the terms on the right-hand side of the equation.
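Plugging in some hypothetical numbers makes the signs concrete (mass, speed, and compression chosen arbitrarily):

```python
m, g = 0.5, 9.8      # hypothetical ball mass (kg) and gravitational acceleration
v, x = 2.0, 0.05     # speed on reaching the spring (m/s) and compression (m)

w_gravity = -m * g * x            # downward force, upward displacement
delta_ke = 0.0 - 0.5 * m * v**2   # the ball is momentarily at rest at full compression
w_spring = delta_ke - w_gravity   # work-energy theorem: W_spring + W_gravity = dKE
```

Here $W_{\rm spring} = mgx - \frac 12 mv^2 \approx -0.76\ \mathrm{J}$: negative, as it must be, since the spring pushes down while the ball moves up.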
With no gravitational force the work done by the spring $(-\frac 12 mv^2)$ is negative because the force exerted by the spring on the ball is downwards and the displacement of the ball is upwards. Put another way work is being done by the ball on the spring.
The positive sign in $+mgx$ might seem strange but what you must remember is that the work done by the spring is negative which means that the ball has done positive work on the spring and increases the elastic potential energy stored in the spring.
So the $+mgx$ term will make the work done by the spring less negative and so the increase in the stored elastic potential energy will be less. | {
"domain": "physics.stackexchange",
"id": 91544,
"tags": "work"
} |
Does conservation of linear momentum hold for a ball hitting a wall and changing direction? | Question: The conservation of linear momentum essentially says:
$$\sum m \vec{v_1}=\sum m \vec{v_2}$$
But if I take the absolute value of both sides, and drop summation, is it also equivalent to say that:
$$mv = mv$$
Here is where my confusion would take place:
If a ball is travelling 45 degrees to the horizontal at 3 m/s and collides with a wall (that doesn't move at all), then travels in the +x axis, what speed is it moving in?
I know $mv_1=mv_2$ for the ball, because the velocity of the wall is 0 in both cases.
So is it correct to say that momentum is only conserved in the x axis, making the velocity of the ball in the x axis $$m * 3 * \sin45 = m * v_2 \implies v = 3 * \sin45$$
Or is it rather $$m * 3 = m * v \implies v = 3$$
Answer: The steps in the OP's suggested solution are deeply flawed.
First: you cannot take the absolute values of the two sides of a vector equation. Momentum has both size and direction, and both must be taken into account when doing an addition. Would you examine your bank statement for the month while treating deposits and withdrawals as if they were the same? Is a credit balance the same as a debit balance?
Secondly: momentum is only conserved for a system in the absence of any external force. if you treat the ball as your system, the force exerted by the wall on the ball constitutes an external force for this system, and thus conservation of momentum for the ball simply does not apply.
What if you consider the ball plus wall as the system? Then the forces are internal, and conservation of momentum applies. However, you must also assume that the wall recoils, however slightly, from the collision with the ball. If the ground exerts any horizontal force on the wall, then you have an external force again, and no conservation of momentum.
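A sketch of that bookkeeping (hypothetical masses, 1-D elastic collision): solving conservation of momentum and kinetic energy for the final velocities and letting the wall's mass grow shows the ball's speed returning to its initial value.

```python
def elastic_1d(m1, v1, m2, v2=0.0):
    # final velocities of a 1-D elastic collision (conserves momentum and KE)
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m_ball, m_wall = 1.0, 1e12   # a very heavy wall stands in for "infinite" mass
ball_v, wall_v = elastic_1d(m_ball, 3.0, m_wall)
# momentum is conserved once the wall's tiny recoil is included
total_p = m_ball * ball_v + m_wall * wall_v
```

The ball rebounds at essentially $-3\ \mathrm{m/s}$ while the wall recoils imperceptibly, yet the wall's near-zero velocity times its enormous mass carries exactly the momentum the ball gave up.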
The simplest method is to assume a collision between a ball, mass $m_b$ and wall, mass $m_w$. Allow them to collide elastically, and conserve both kinetic energy and linear momentum. Keep track of the positive and negative signs on the velocities. Finally, see what happens to the final velocity of the ball, as the mass of the wall tends to infinity. | {
"domain": "physics.stackexchange",
"id": 20140,
"tags": "newtonian-mechanics"
} |
What exactly is heat? | Question: Is it energy?
Is it energy per unit volume?
Is it energy per unit time i.e power?
What is it?
Answer: I'll try to give an answer in purely classical thermodynamics.
Summary
Heat is a way of accounting for energy transfer between thermodynamic systems. Whatever energy is not transferred as work is transferred as heat. If you observe a thermodynamic process and calculate that system A lost $Q$ calories of heat, this means that if the environment around system A were replaced with $Q$ grams of water at $14\,^{\circ}\mathrm{C}$ and the process were repeated, the temperature of that water would rise to $15\,^{\circ}\mathrm{C}$.
Energy
Energy is a number associated with the state of a system. It can be calculated if you give the state variables - things like mass, temperature, chemical composition, pressure, and volume. (These state variables are not all independent, so you only need to give some combination of them.)
Sometimes the energy can be accounted very simply. For an ideal gas, the energy is simply proportional to the temperature, number of molecules, and number of dimensions. For a system with interesting chemistry, internal stresses and deformation, gravitational potential, etc. the energy may be more complicated. Essentially, we get to invent the formulas for energy that are most useful to us.
There's a nice overview of energy in The Feynman Lectures, here. For a more theoretical point of view on where these energy formulas come from, see Lubos Motl's answer here.
Energy Conservation
As long as we make the right definitions of energy, it turns out that energy is conserved.
Suppose we have an isolated system. If it is not in equilibrium, its state may change. Energy conservation means that at the end of the change, the new state will have the same energy. (For this reason, energy is often treated as a constraint. For example, an isolated system will maximize its entropy subject to the constraint that energy is conserved.)
This leaves the question of what an isolated system is. If we take another system (the environment) and keep it around the isolated system, we find no observable changes in the environment as the state of the isolated system changes. For example, changes in an isolated system cannot change the temperature, pressure, or volume of the environment. Practically, an isolated system should have no physical mechanisms for interacting with the rest of the universe. Matter and radiation cannot leave or enter, and there can be no heat conduction (I'm jumping the gun on that last one, of course, but take "heat conduction" as a rough term for now). A perfectly isolated system is an idealization only.
Next we observe systems A and B interacting. Before the interaction, A has 100 joules of energy. After interacting, A has 90 joules of energy, so it has lost 10 joules. Energy conservation says that if we measure the energy in system B before and after the interaction, we will always find that system B has gained 10 joules of energy. In general, system B will always gain exactly however much system A loses, so the total amount is constant.
There are nuances and caveats to energy conservation. See this question, for example.
Work
Work is defined by
$$\textrm{d}W = P\textrm{d}V$$
$P$ is pressure; $V$ is volume, and it is fairly easy to give operational definitions of both.
Using this equation, we must ensure that $P$ is the pressure the environment exerts on the system. For example, if we took a balloon into outer space, it would begin expanding. However, it would do no work because the pressure on the balloon is zero. However, if the balloon expands on Earth, it does work given by the product of its volume change and the atmospheric pressure.
That example treats the entire balloon as the system. Instead, we might think of only the air inside the balloon as a system. Its environment is the rubber of the balloon. Then, as the balloon expands in outer space, the air inside does work against the pressure from the elastic balloon.
I wrote more about work in this answer.
Adiabatic Processes
Work and energy, as described so far, are independent ideas. It turns out that in certain circumstances, they are intimately related.
For some systems, we find that the decrease in energy of the system is exactly the same as the work it does. For example, if we took that balloon in space and watched it expand, the air in the balloon would wind up losing energy as it expanded. We'd know because we measure the temperature, pressure, and volume of the air before and after the expansion and calculate the energy change from a formula.
Meanwhile, the air would have done work on the balloon. We can calculate this work by measuring the pressure the balloon exerts on the air and multiplying by the volume change (or integrating if the pressure isn't constant).
Remarkably, we could find that these two numbers, the work and the energy change, always turned out to be exactly the same except for a minus sign. Such a process is called adiabatic.
In reality, adiabatic processes are approximations. They work best with systems that are almost isolated, but have a limited way of interacting with the environment, or else occur too quickly for interactions beside pressure-volume ones to be important.
In our balloon, the expansion might fail to be adiabatic due to radiation or conduction between the balloon and the air. If the balloon were a perfect insulator and perfectly white, we'd expect the process to be adiabatic.
Sound waves propagate essentially adiabatically, not because there are no mechanisms for one little mass of air to interact with nearby ones, but because those mechanisms (diffusion, convection, etc.) are too slow to operate on the time scale of the period of a sound wave (about a thousandth of a second).
This leads us to thinking of work in a new way. In adiabatic processes, work is the exchange of energy from one system to another. Work is still calculated from $P\textrm{d}V$, but once we calculate the work, we know the energy change.
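A numerical sketch of that statement for a monatomic ideal gas (hypothetical initial state; $PV^{\gamma}$ held constant along the process, internal energy $E = \frac32 PV$): the work the gas does while expanding adiabatically equals its drop in energy.

```python
GAMMA = 5.0 / 3.0                       # monatomic ideal gas
P1, V1, V2 = 101325.0, 1.0, 2.0         # hypothetical initial state, final volume
K = P1 * V1**GAMMA                      # P V^gamma is constant on an adiabat

def pressure(V):
    return K / V**GAMMA

# numerically integrate the work, W = integral of P dV (trapezoid rule)
n = 10000
dV = (V2 - V1) / n
work = sum(0.5 * (pressure(V1 + i * dV) + pressure(V1 + (i + 1) * dV)) * dV
           for i in range(n))

energy_drop = 1.5 * P1 * V1 - 1.5 * pressure(V2) * V2   # E1 - E2
```

The two numbers agree to numerical precision; in a non-adiabatic process, the mismatch between them is exactly what gets the name "heat".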
Heat
Real processes are not adiabatic. Some are close, but others are not close at all. For example, if I put a pot of water on the stove and turn on the burner, the water's volume hardly changes at all, so the work done as the water heats is nearly zero, and what work is done by the water is positive, meaning the water should lose energy.
The water actually gains a great deal of energy, though, which we can discover by observing the temperature change and using a formula for energy that involves temperature. Energy got into the pot, but not by work.
This means that work is not a sufficient concept for describing energy transfer. We invent a new, blanket term for energy transfer that is not done by work. That term is "heat".
Heat is simply any energy transferred between two systems by means aside from work. The energy entering the boiling pot is entering by heat. This leads to the thermodynamic equation
$$\textrm{d}E = -\textrm{d}W + \textrm{d}Q$$
$E$ is energy, $W$ work, and $Q$ heat. The minus sign is a convention. It says the if a system does work, it loses energy, but if it receives heat, it gains energy.
Interpreting Heat
I used to be very confused about heat because it felt like something of a deus ex machina to say, "all the leftover energy must be heat". What does it mean to say something has "lost 30 calories through heat"? How can you look at it and tell? Pressure, temperature, volume are all defined in terms of very definite, concrete things, and work is defined in terms of pressure and volume. Heat seems too abstract by comparison.
One way to get a handle on heat, as well as review everything so far, is to look at the experiments of James Joule. Joule put a paddle wheel in a tub of water, connected the wheel to a weight so that the weight would drive the wheel around, and let the weight fall. Wikipedia has a picture of the setup (not reproduced here).
As the weight fell, it did work on the water; at any given moment, there was some pressure on the paddles, and they were sweeping out a volume proportional to their area and speed. Joule assumed that all the energy transferred to the water was transferred by work.
The weights lost energy as they fell because their gravitational potential energy went down. Assuming energy is conserved, Joule could then find how much energy went into the water. He also measured the temperature of the water. This allowed him to find how the energy of water changes as its temperature changes.
Next suppose Joule started heating the water with a fire. This time the energy is transferred as heat, but if he raises the temperature of the water over exactly the same range as in the work experiment, then the heat transfer in this trial must be the same as the work done in the previous one. So we now have an idea of what heat does in terms of work. Joule found that it takes 4.2 joules of work to raise the temperature of one gram of water from $14\,^{\circ}\mathrm{C}$ to $15\,^{\circ}\mathrm{C}$. If you have more water than that, it takes more work proportionally. 4.2 joules is called one calorie.
At last we can give a physical interpretation to heat. Think of some generic thermodynamic process. Imagine it happening in a piston so that we can easily track the pressure and volume. We measure the energy change and the work during the process. Then we attribute any missing energy transfer to heat, and say "the system gave up 1000 joules (or 239 calories) of heat". This means that if we took the piston and surrounded it with 239 grams of water at $14\,^{\circ}\mathrm{C}$, then did exactly the same process, the water temperature would rise to $15\,^{\circ}\mathrm{C}$.
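The same arithmetic, as a short sketch (using Joule's rounded 4.2 J/cal figure):

```python
JOULES_PER_CALORIE = 4.2

def water_temp_rise(heat_joules, grams):
    # temperature rise (in degrees C) of water absorbing the given heat
    return heat_joules / (JOULES_PER_CALORIE * grams)

# 1000 J of heat given up by the piston warms 239 g of water by about 1 degree C
rise = water_temp_rise(1000.0, 239.0)
```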
Misconceptions
What I discussed in this post is the first law of thermodynamics - energy conservation. Students frequently get confused about what heat is because they mix up its definition with the role it plays in the second law of thermodynamics, which I didn't discuss here. This section is intended to point out that some commonly-said things about heat are either loose use of language (which is okay as long as everyone understands what's being said), or correct use of heat, but not directly a discussion of what heat is.
Things do not have a certain amount of heat sitting inside them. Imagine a house with a front door and a back door. People can come and go through either door. If you're watching the house, you might say "the house lost 3 back-door people today". Of course, the people in the house are just people. The door only describes how they left. Similarly, energy is just energy. "Work" and "heat" describe what mechanism it used to leave or enter the system. (Note that energy itself is not a thing like people, only a number calculated from the state, so the analogy only stretches so far.)
We frequently say that energy is "lost to heat". For example, if you hit the brakes on your car, all the kinetic energy seems to disappear. We notice that the brake pads, the rubber in the tires, and the road all get a little hotter, and we say "the kinetic energy of the car was turned into heat." This is imprecise. It's a colloquialism for saying, "the kinetic energy of the car was transferred as heat into the brake pads, rubber, and road, where it now exists as thermal energy."
Heat is not the same as temperature. Temperature is what you measure with a thermometer. When heat is transferred into a system, its temperature will increase, but its temperature can also increase because you do work on it.
The relationship between heat and temperature involves a new state variable, entropy, and is described by the second law of thermodynamics. Statements such as "heat flows spontaneously from hot bodies to cold bodies" are describing this second law of thermodynamics, and are really statements about how to use heat along with certain state variables to decide whether or not a given process is spontaneous; they aren't directly statements about what heat is.
Heat is not "low quality energy" because it is not energy. Such statements are, again, discussion of the second law of thermodynamics.
Reference
This post is based on what I remember from the first couple of chapters in Enrico Fermi's Thermodynamics. | {
"domain": "physics.stackexchange",
"id": 837,
"tags": "thermodynamics, heat, temperature"
} |
Simulating DNA sequences in R with a given value of $\theta = 4N_{e}\mu$ | Question: This might not be the most appropriate site to be asking such a question, but perhaps someone has a solution.
My question is: is there an R package or function for simulation of DNA sequences of a given basepair length generated according to a prespecified value of the population mutation rate $\theta = 4N_{e}\mu$, where $N_{e}$ is the effective population size and $\mu$ is the per-generation mutation rate.
I have real sequence data from GenBank, but I would also like to simulate some random DNA sequences according to a coalescent process.
Essentially the function would output the generated aligned DNA sequences as a FASTA file.
I have been rather unsuccessful in tracking down a suitable package, as most only output generated phylogenies.
Answer: There are a number of tools, but I don't know of any that come with an R library. They are usually all called from the command line. I don't know of any that produce FASTA files either. SimBit (my own software) can produce VCF files, which can easily be converted into FASTA with PGDspider.
However, if your simulations are this simple, you might not need these individual-based programs. Just write the code yourself. It might only take a few tens of lines. Here is a simple example:
freq = 0.1 # initial frequency
N = 1000 # Constant population size
mu = 1e-7 # mutation rate
nbGenerations = 1000 # number of generations to simulate
for (generation in 1:nbGenerations)
{
# Drift
freq = rbinom(1,N,freq) / N
# Mutations
oneWayMutations = rbinom(1,N*freq,mu) / N
otherWayMutations = rbinom(1,N*(1-freq),mu) / N
freq = freq - oneWayMutations + otherWayMutations
}
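If it helps to have the same loop outside R, here is a Python transcription (my own sketch, not part of the original answer; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(42)  # seeded for reproducibility

freq = 0.1            # initial allele frequency
N = 1000              # constant population size
mu = 1e-7             # per-generation mutation rate
n_generations = 1000  # number of generations to simulate

for _ in range(n_generations):
    # Drift: binomial sampling of allele copies in the next generation
    freq = rng.binomial(N, freq) / N
    # Mutations in each direction
    one_way = rng.binomial(int(N * freq), mu) / N
    other_way = rng.binomial(int(N * (1 - freq)), mu) / N
    freq = freq - one_way + other_way
```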
If you need something more complex (such as several loci), then you should probably use one of the existing individual-based programs. See the post Sequence evolution simulation tool | {
"domain": "biology.stackexchange",
"id": 8927,
"tags": "population-genetics"
} |
Symfony REST endpoint to get a number of products for one user | Question: I want to improve the quality of this Symfony REST endpoint (SOLID principles, KISS, best practices...). Can you review my code please?
Symfony controller function that returns a JSON list of products
/**
* @Route("/api/products/my_list/{number}", methods={"GET"})
* @param Security $security
* @param BeamyAPI $beamyAPI
* @param Request $request
* @param ProductService $productService
* @return string
*/
public function myList(
Security $security,
BeamyAPI $beamyAPI,
Request $request,
ProductService $productService,
int $number = 20
)
{
$productListUser = $this->em->getRepository(ProductAdmin::class)
->findProductsUser(
$security->getUser(),
$number
);
return new JsonResponse(
$productListUser,
Response::HTTP_OK
);
}
Trait to format array response
Trait ArrayFormat
{
/**
* format array for user product endpoint
*
* @param array $data
* @return array
*/
public function formatUserProduct($product) : array
{
return [
'id' => $product['id'],
'name' => $product['name'],
'notifications' => '',
'logo' => [
'contentUrl' => $product['logo']['contentUrl']
]
];
}
}
Repository method that gets the list of products for one user
/**
* get product list of user
*
* @param UserInterface $user
* @param integer $number
* @return array|null
*/
public function findProductsUser(UserInterface $user, int $number) :?array
{
$listProductUser = $this->em->getRepository(ProductAdmin::class)->findBy(
['user' => $user],
['product' => 'ASC'],
$number
);
$res = [];
array_walk($listProductUser, function(&$productUser){
$product = $this->productService->getProductInfo($productUser->getProduct());
$productUser = ArrayFormat::formatUserProduct($product);
});
return $listProductUser;
}
Thanks
Answer: $productListUser = $this->em->getRepository(ProductAdmin::class)
->findProductsUser(...
Then
public function findProductsUser(UserInterface $user, int $number) :?array
{
$listProductUser = $this->em->getRepository(ProductAdmin::class)->findBy(...
It seems to me that findProductsUser is a method of whatever is returned by $em->getRepository(ProductAdmin::class), so why is that method retrieving itself from the entity manager? Shouldn't it be just $this->findBy(...)?
Further, the responsibility of formatting the entity should not belong to the repository. You need to either wrap the repository with the formatter into another class, or do it in the controller. Also there is no reason why the formatter must be a trait. Make it a service and inject it into the entire controller through the constructor if all methods of the controller need it, or have it injected in an action method, like you do with Security, BeamyAPI, etc... | {
"domain": "codereview.stackexchange",
"id": 37451,
"tags": "php, rest, doctrine, symfony4"
} |
Two different definitions of ladder operator for Harmonic Oscillators | Question: As it happened, I accidentally referred to two different editions of Introduction to QM by Griffiths. In the second chapter, while defining the ladder operator for harmonic oscillators, he used different terms. Now different definitions mean that their commutator change. Which of the two operators should I use? Also, why is there ambiguity in the definition of the same?
$$a_{\pm} \equiv \frac 1 {\sqrt{2m}}\bigg( \frac {\hbar}{i} \frac {d}{dx} \pm im \omega x\bigg)$$
$$a_{\pm} \equiv \frac 1 {\sqrt{2\hbar m \omega}}\bigg(\mp ip + m \omega x\bigg)$$
Answer: Your second equation is the standard definition for $a$ ($a_-$) and $a^\dagger$ ($a_+$) as found e.g. on Wikipedia. Note that the ladder operators here are dimensionless.
Griffiths is doing it slightly differently. I'm not sure why, but if you continue consistently with this alternative definition, you will get to the same correct results of course. For example, Griffiths (2.45) is
$$ [a_-, a_+] = \hbar\omega , $$
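A quick check of this commutator (my own sketch, using Griffiths' first definition with $p = \frac{\hbar}{i}\frac{d}{dx}$ and $[x,p] = i\hbar$):

$$ [a_-, a_+] = \frac{1}{2m}\,\big[\,p - im\omega x,\; p + im\omega x\,\big] = \frac{1}{2m}\left(-2im\omega\,[x,p]\right) = \frac{-2im\omega\,(i\hbar)}{2m} = \hbar\omega, $$

consistent with Griffiths (2.45).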
compare with the standard $[a, a^\dagger] = 1$. | {
"domain": "physics.stackexchange",
"id": 53280,
"tags": "quantum-mechanics, operators, harmonic-oscillator"
} |
Polymer Slurry used in boring foundation holes? | Question: I watched a video on the construction of the Burj Khalifa in Dubai. The construction engineers explained that they're using a special type of protection, known as cathodic protection, to guard against corrosion from the local groundwater. They mentioned a compound named "polymer slurry" (what's that?). They're using viscous polymer slurry as their basic coat before pouring the high-density, low-permeability concrete in order to create a "GOOD FOUNDATION". What is it made of? (Simply any polymer?) And what does it do? Does it actually repel water? Many organic compounds repel water... then why this one?
Answer: After reading the link provided in the question, and doing a little googling, of "polymer slurry" and "foundation", I found something useful.
Polymer slurry seems to be made of one or more hydrophilic superabsorbent polymers, such as polyethylene glycol, polyvinyl alcohol, or carboxymethylcellulose that produce a thick viscous slurry when mixed with water. It does not repel water so much as the opposite. It is so hydrophilic that it absorbs water before the water can reach the concrete or the steel. Then this slurry becomes thicker the more water it absorbs so that the rate of diffusion of water through the slurry decreases. Ultimately, this action creates a hydrogel, which may be quite strong, despite being mostly water, due to extensive hydrogen bond crosslinks.
Other applications of polymer slurry seem to be in the drilling industry to modulate the viscosity of drilling fluids. | {
"domain": "chemistry.stackexchange",
"id": 3560,
"tags": "polymers, viscosity"
} |
Find nth Fibonacci Number, using iteration and recursion | Question: I'm a beginner programmer and I came upon this problem which is to find the nth number in the Fibonacci series.
I used to solve the problem using a for loop; today I learned about recursion but there is a problem: when I pass 40 or 41 to the recursive function, it takes a bit of time to calculate it, while in the iterative method it would instantly give me the answers.
I have these questions:
Why do most people (on the Internet) recommend using recursion? Because it's simpler and easier to write the program? (Logically I thought that we should write it in a way that is fast and simple)
Here are the 2 methods that a beginner like me can handle writing at the moment. Is there a better way than these two methods? And are these methods complex?
Here is the recursive method:
#include <iostream>
using namespace std;
unsigned long long fibonacci(unsigned long long n);
unsigned long long fibonacci(unsigned long long n){
if(n<=1)
return n; //base case
return fibonacci(n-1) + fibonacci(n-2); //recursive case
}
int main(){
unsigned int a {};
cout << "Enter number: ";
cin >> a;
cout << fibonacci(a) << endl;
return 0;
}
And here is the iterative (looping) method:
#include <iostream>
using namespace std;
int main() {
int n{};
unsigned long long t1{0}, t2{1};
unsigned long long sum{};
cout << "Enter the number of terms: ";
cin >> n;
cout << "Fibonacci Series: ";
for (int i{2}; i < n; ++i) {
sum = t1 + t2;
t1 = t2;
t2 = sum;
}
cout << t2;
return 0;
}
Note: I know that using namespace std is a bad idea but I have also tried the same thing without the namespace and still I get the delay, so I did it here because it's easy to understand.
Edit1: First of all I would like to thank everyone who commented and answered my question...To be honest I didn't think that this question would bring a lot of attention to the community so I appreciate the time you put on this and it means a lot to me.
Edit2: Let me demonstrate some of the things that might have been a little odd to you.
Q: What do you mean by better when you say if there is a better way than these two methods?
A: By better, I meant that the code shall be simple and also takes less time to execute or perform the calculations
Q: What do you mean when you say most people (on the internet) use recursion?
A: I've seen code out there that uses recursion to solve problems. The two common problems I've seen are the Fibonacci sequence and the factorial of a number. For factorial, I have also tried both methods (iteration and recursion), but I didn't get a delay with either of them when I typed large numbers like 40 or 50. The common problem I saw with recursion is that eventually you may run out of memory, because the function calls itself many times, which results in a stack overflow (the stack, as a part of memory, gets filled). Although this problem doesn't occur in this example, it raises another question:
When should we use recursion?
Answer:
Why do most people (on the internet) recommend using recursion because it's simpler and easier to write the program? Logically I thought that we should write it in a way that is fast and simple.
This is a perceptive question. I wrote an article about exactly this topic in 2004, which you can read here:
https://docs.microsoft.com/en-us/archive/blogs/ericlippert/how-not-to-teach-recursion
Summing up: there are two good reasons to teach people to use recursion to solve Fib:
First, because it clearly illustrates what you have learned today. A naive translation of a recursive definition into a recursive function can often lead to poor performance. That's an important lesson for beginners to take away. (EXERCISE: How many additions does your naive recursive program execute for a given n? The answer may surprise you.)
Second, because the first lesson then gives us an opportunity to lead the beginner to learn how to write a recursive algorithm so that it performs well.
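To make that concrete, here is a sketch in Python rather than the question's C++ (my addition, not part of the original answer): memoization keeps the recursive shape but computes each value only once, and a small counter lets you check the exercise about addition counts empirically.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Same recursive definition, but each n is computed only once.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

def naive_additions(n):
    # Number of '+' operations the unmemoized recursion performs for fib(n).
    if n <= 1:
        return 0
    return naive_additions(n - 1) + naive_additions(n - 2) + 1
```

With memoization, fib(40) returns immediately. Note that naive_additions follows the same exponential recurrence as the naive program, so only evaluate it for small n; the pattern it reveals answers the exercise.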
Unfortunately, as you have discovered, a great many people on the internet have not internalized that teaching recursion via fib is solely useful as an illustration of bad uses of recursion and how to fix them, and not in itself an example of a good use of recursion.
It would be much better if people attempting to teach recursion did so by providing a mix of good and bad recursive algorithms, and taught how to spot and avoid the bad ones.
Is there a better way than these two methods?
Another lesson you'll quickly learn is that asking "which is better?" is a sure way to get back the reply: "can you describe a clear metric for betterness?"
So: can you describe a clear metric for betterness?
If the goal is to print out the nth fib number, you can do way "better" than either of your solutions:
unsigned long long fibs[] = { 1, 1, 2, 3, 5, 8, ... }
if (0 <= n && n < sizeof(fibs... blah blah blah))
cout << fibs[n];
Done. There are only so many fib numbers that fit into a long. You can look them up on the internet, copy them into your program, and you've got a short, fast fib program with no loops at all. That's "better" to me.
But remember, the point of this exercise is to teach you something about recursion, so by that metric my program is certainly not "better".
are these methods complex?
By "these methods" I think you mean "methods of writing recursively-stated algorithms into code other than naive recursion and unrolling the recursion into a loop".
That's a matter of opinion. Let me put it this way.
I work on a compiler team, and I interview a lot of people. My standard coding question involves writing a simple recursive algorithm on binary trees that is inefficient when written the naive way, but can be made efficient by making a few simple refactorings. If the candidate is unable to write that clear, straightforward, efficient code, that's an easy no-hire.
Where things get interesting is when I ask "suppose you had to remove the left-hand recursion from this tree traversal; how might you do it?"
There are standard techniques for removing recursions. Dynamic programming reduces recursions. You could make an explicit stack and use a loop. You could make the whole algorithm tail recursive and use a language that supports tailcalls. You could use a language with cocalls. You could use continuation passing style and build a "trampoline" execution loop.
Some of these techniques are, from the perspective of the novice, terribly complicated. I ask the question because I want to know what is in the developer's toolbox. | {
"domain": "codereview.stackexchange",
"id": 37563,
"tags": "c++, beginner, c++11, comparative-review, c++14"
} |
Two-Sided Frequency Spectrum | Question: I am trying to make FFT simulation in Matlab by generating noise added
two sinus waves in 60Hz and 100Hz.
After adding the noise to these signals, I applied the FFT, as shown in my MATLAB code below.
But I am having difficulty interpreting the FFT plot, which shows two spectral peaks in the diagram below.
Could you please explain how to interpret this spectrum?
Why do we see two peaks in the spectrum? How can we reduce the spectrum to show only the frequencies of 60 Hz and 100 Hz?
%%%% Noise_Added_Two_Sinus%%%
>> f1=60;
>> f2=100;
>> fs=512;
>> t=0:1/fs:2-1/fs;
>> x1=2.4*sin(2*pi*f1*t);
>> x2=0.96*sin(2*pi*f2*t);
>> y=x1+x2+randn(size(t));
>> F=fft(y);
>> plot(abs(F))
Answer: First of all, you should totally fix the $x$-axis :)
Now, as you only have two sine waves, you should expect to have peaks at their exact frequencies (that is, at $\pm 60\textrm{ Hz}$ and $\pm 100\textrm{ Hz}$). The FFT function in MATLAB gives you the Discrete Fourier Transform of your signal within $0$ and $F_s$, where $F_s$ is your sampling frequency -and given that you represent your signal with samples, even if you are not aware of it, you have one. Anyway, to make visualization easier, we focus on the interval $[-\frac{F_s}{2},\frac{F_s}{2}]$; that way, your spectrum will be centered around the $0$ frequency -use the function fftshift for that. So, now, once you fix the $x$-axis according to your sampling frequency, you will find that the peaks are at the frequencies you have defined for your sine waves.
When it comes to the little ripples in the lower part of your spectrum, that's just disturbance caused by the noise. Since you are adding noise to the signal with the randn function, your noise follows a Gaussian distribution, translating into white noise for your signal. That is why your entire spectrum shows signs of being slightly disturbed.
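The same experiment translated into Python/NumPy (my own sketch, with the noise omitted so the four peaks are exact; in MATLAB the analogous steps are fftshift and building a frequency vector from fs):

```python
import numpy as np

fs = 512                      # sampling frequency, Hz
t = np.arange(0, 2, 1 / fs)   # 2 seconds of samples (1024 points)
y = 2.4 * np.sin(2 * np.pi * 60 * t) + 0.96 * np.sin(2 * np.pi * 100 * t)

F = np.fft.fftshift(np.fft.fft(y))                         # spectrum centered on 0 Hz
freqs = np.fft.fftshift(np.fft.fftfreq(len(t), d=1 / fs))  # matching frequency axis

# With a correct frequency axis, the four largest bins sit at +/-60 Hz and +/-100 Hz
peak_freqs = freqs[np.argsort(np.abs(F))[-4:]]
```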
By the way, after fftshift, just get rid of the first half of your vector and you'll end up with positive frequencies. | {
"domain": "dsp.stackexchange",
"id": 4163,
"tags": "fft, frequency-spectrum, fourier"
} |
Is every open circuit a capacitor? | Question: I think that even open-ended wires can let AC current flow through them, just with a low capacitance. I also think an antenna could be a capacitor and open ended. Am I thinking correctly?
Answer: You are right, every circuit possesses some unintended capacitance, which is called "stray" capacitance. Whether or not it affects the operation of the circuit depends on the frequencies that the circuit is intended to operate at. The amount of stray capacitance that a circuit has is typically tiny, but at high enough frequencies even a very tiny amount of capacitance will couple parts of the circuit together and make it malfunction.
For example, the circuit inside a plug that connects together computer routers and switches in a big router farm has to operate at ultrahigh frequencies, at which two adjacent traces on a circuit board can present enough stray capacitance to stop the device from functioning.
And the capacitance between a long piece of wire strung between two trees as a shortwave transmitting antenna and the ground beneath it is enough to alter the resonant frequency of the antenna, which must be taken into account when designing and building the antenna. | {
"domain": "physics.stackexchange",
"id": 56535,
"tags": "electric-circuits, electric-current, capacitance, antennas"
} |
Why is pressure an intensive property? | Question: My teacher explained to me that volume is an extensive property because it is additive in nature. But he also told us that pressure is an intensive property. Now according to the gas law equation $PV=nRT$, pressure is dependent on volume. So shouldn't pressure be extensive as well?
Answer: From the ideal gas equation,
$$P=\frac{nRT}{V}$$
Now assuming the gas is uniformly distributed over space (it has constant density at a given temperature), halving the number of moles will also halve the volume. Essentially, if we divide the number of moles by any number, we end up dividing the volume by the same number at constant temperature. So it doesn't matter how many moles of gas you take at a given temperature: you will always end up with the same pressure.
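Written out, the cancellation is (with $V_m = V/n$ the molar volume, itself an intensive quantity):

$$ P = \frac{nRT}{V} = \frac{RT}{V/n} = \frac{RT}{V_m} $$

Scaling $n$ and $V$ together leaves $V_m$, and hence $P$, unchanged.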
You could also look at it this way: the ratio of two extensive quantities will always give an intensive quantity.
"domain": "physics.stackexchange",
"id": 26421,
"tags": "thermodynamics, pressure"
} |
Why does a NOx container get cold when NOx is released? And what is the relationship of the vapor pressure curve to the liquid-vapor dome? | Question: The temperature at which $\ce{NO_x}$ is in equilibrium between its liquid and vapor phases at 1 atm is about –84 °C. Does that mean $\ce{NO_x}$ exists as a liquid at that temperature? What about its vapor phase then, because its vapor and liquid phases should be at equilibrium, right?
Also, is that the reason why compressed $\ce{NO_x}$, when released from its closed and pressurized container (e.g. 50 bar) at 20 °C, becomes a gas while its temperature drops? Therefore, the compressed $\ce{NO_x}$ was in its liquid state at 50 bar, right?
If that's the case, then what is the relationship between the vapor pressure curve and the liquid-vapor dome? How would you use the saturated liquid-vapor tables to predict the vapor pressure for your substance? And which specific volume would you choose to define your substance: its liquid specific volume or its vapor specific volume?
Answer: The vapor pressure curve describes the relationship between temperature and pressure for which the pure substance can exist as liquid and vapor, together in equilibrium. So it is not necessarily all liquid or all vapor. Both can be present in different proportions. As far as specific volume is concerned, your tables should list both the vapor specific volume and the liquid specific volume. The specific volume of the combination of vapor and liquid is proportional to the mass fraction of each. (So you need to know the mass fractions of vapor and liquid). | {
"domain": "chemistry.stackexchange",
"id": 12570,
"tags": "physical-chemistry, thermodynamics, experimental-chemistry"
} |
Why the Double Covering? | Question: It is known mathematically that given a bilinear form $Q$ with signature $(p,q)$ then the group $Spin(p,q)$ is the double cover of the group $SO(p,q)$ associated to $Q$, and that $Pin(p,q)$ is the double cover of the group $O(p,q)$ associated to $Q$.
I understand that all the representations of the double cover are representations of the starting group, but not vice versa, due to the properties of projective representations and the equivalence of algebras.
This question arises since in physics we are interested to spinor-bundles, arising from $Spin(p,q)$ groups which covers the $SO(p,q)$-principal bundles.
Questions:
Why care at all about using the $Spin$-principal bundle instead of the usual orthonormal frame bundle?
If we are interested in using the most general representation, why then restrict to the double cover and not some bigger group?
Answer:
In quantum mechanics, we are interested in projective representations since physical states are rays in the Hilbert space, i.e. elements of a projective space rather than a vector space. However, all projective representations of $SO(p,q)$ are ordinary (linear) representations of $Spin(p,q)$.
In general, we are interested in universal covers of groups. However, $SO(p,q)$ is doubly-connected so its universal covering IS the double-cover. | {
"domain": "physics.stackexchange",
"id": 97539,
"tags": "differential-geometry, mathematical-physics, group-theory, representation-theory, spinors"
} |
Is length contraction in Special Relativity the same as the Doppler Effect? | Question: In my further reading of Special Relativity, the idea of length contraction when travelling at the speed of light is such that the length gets "squished" in the direction of travel.
This immediately made me think of the familiar Doppler Effect, with electromagnetic waves travelling at the speed of light, where their wavelength is shifted due to their velocity, i.e red shift.
How does Special Relativity and the Doppler Effect link? Is the Doppler Effect a Relativistic effect, as electromagnetic waves travel at the speed of light?
Answer: Length Contraction involves the apparent spatial separation of two parallel [timelike] worldlines, marking the ends of a stick. The contraction depends only on the relative-speed (but not direction) along the x-axis.
The Doppler Effect involves the apparent spatial separation of two parallel lightlike lines, marking the wavelength... the successive wavefronts of a light wave. The scaled wavelength depends on the relative-velocity, where the approaching case is different from the receding case.
Borrowing from my older reply to
Deriving Relativistic Doppler Effect through length contraction
In the source frame, imagine a ruler at rest with a marking at
$x=10$.
Interpret this as "where the source says the previous
wavefront is located when the source emits the next signal".
Note
that this marking has a worldline parallel to the source.
Although the "separation between these timelike-worldlines" is
equal to $\lambda_{source}$ in the source frame, these
timelike-worldlines are only indirectly related to the
source-wavelength [which is the "separation between the lightlike
signal-lines"]. In the receiver-frame the "separation between
these timelike-worldlines" is given by
$OX=\frac{\lambda}{\gamma}=\frac{10}{5/4}=8$, in accordance with
length-contraction. However, this is not the wavelength observed by the receiver--- the observed-wavelength is the
"separation between the lightlike signal-lines" given by $OW=20$
in the receiver frame ($OW = k\,\lambda_{source} = (2)(10) = 20$).
The point is: the observed-wavelength (separation between
lightlike signal-lines) doesn't directly involve
length-contraction (involving parallel timelike-lines).
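For the record, the specific numbers in this example are consistent with a recession speed of $\beta = 3/5$ (my inference from the quoted values, not stated explicitly above):

$$ \gamma = \frac{1}{\sqrt{1-\beta^2}} = \frac{1}{\sqrt{1 - 9/25}} = \frac{5}{4}, \qquad k = \sqrt{\frac{1+\beta}{1-\beta}} = \sqrt{\frac{8/5}{2/5}} = 2. $$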
The above spacetime diagram is drawn on rotated graph paper so that
it becomes easier to visualize the ticks along the various segments.
Finally....
The Doppler effect occurs for any wave motion... e.g. sound.
For sound waves in Galilean physics, the Doppler factors depend on the velocities of source, receiver, and wind.
However, in special relativity, the Doppler Effect depends only on the relative-velocity between source and receiver. | {
"domain": "physics.stackexchange",
"id": 48068,
"tags": "special-relativity, inertial-frames, observers, doppler-effect"
} |
Error Installing Gazebo on Ubuntu | Question:
rushabh@ubuntu:~$ sudo apt-get update
Ign http://archive.ubuntu.com quantal InRelease
Ign http://archive.ubuntu.com quantal-updates InRelease
Ign http://archive.ubuntu.com quantal-proposed InRelease
Ign http://archive.ubuntu.com quantal-backports InRelease
Hit http://archive.ubuntu.com quantal Release.gpg
Get:1 http://archive.ubuntu.com quantal-updates Release.gpg [933 B]
Get:2 http://archive.ubuntu.com quantal-proposed Release.gpg [933 B]
Hit http://archive.ubuntu.com quantal-backports Release.gpg
Hit http://archive.ubuntu.com quantal Release
Get:3 http://archive.ubuntu.com quantal-updates Release [49.6 kB]
Get:4 http://archive.ubuntu.com quantal-proposed Release [49.6 kB]
Ign http://security.ubuntu.com quantal-security InRelease
Hit http://archive.ubuntu.com quantal-backports Release
Hit http://archive.ubuntu.com quantal/multiverse amd64 Packages
Hit http://archive.ubuntu.com quantal/universe amd64 Packages
Hit http://archive.ubuntu.com quantal/restricted amd64 Packages
Hit http://archive.ubuntu.com quantal/multiverse Translation-en_GB
Hit http://archive.ubuntu.com quantal/multiverse Translation-en
Hit http://security.ubuntu.com quantal-security Release.gpg
Hit http://archive.ubuntu.com quantal/restricted Translation-en_GB
Hit http://archive.ubuntu.com quantal/restricted Translation-en
Hit http://archive.ubuntu.com quantal/universe Translation-en_GB
Ign http://packages.ros.org precise InRelease
Hit http://archive.ubuntu.com quantal/universe Translation-en
Get:5 http://archive.ubuntu.com quantal-updates/multiverse amd64 Packages [7,957 B]
Get:6 http://archive.ubuntu.com quantal-updates/universe amd64 Packages [159 kB]
Hit http://security.ubuntu.com quantal-security Release
Hit http://packages.osrfoundation.org precise InRelease
Hit http://security.ubuntu.com quantal-security/multiverse amd64 Packages
Hit http://packages.ros.org precise Release.gpg
Hit http://security.ubuntu.com quantal-security/universe amd64 Packages
Hit http://packages.osrfoundation.org precise/main amd64 Packages
Hit http://packages.ros.org precise Release
Hit http://security.ubuntu.com quantal-security/restricted amd64 Packages
Hit http://packages.ros.org precise/main amd64 Packages
Get:7 http://archive.ubuntu.com quantal-updates/restricted amd64 Packages [1,970 B]
Hit http://archive.ubuntu.com quantal-updates/multiverse Translation-en
Hit http://archive.ubuntu.com quantal-updates/restricted Translation-en
Hit http://archive.ubuntu.com quantal-updates/universe Translation-en
Hit http://security.ubuntu.com quantal-security/multiverse Translation-en
Get:8 http://archive.ubuntu.com quantal-proposed/multiverse amd64 Packages [14 B]
Get:9 http://archive.ubuntu.com quantal-proposed/universe amd64 Packages [13.3 kB]
Get:10 http://archive.ubuntu.com quantal-proposed/restricted amd64 Packages [1,937 B]
Hit http://archive.ubuntu.com quantal-proposed/multiverse Translation-en
Hit http://security.ubuntu.com quantal-security/restricted Translation-en
Hit http://archive.ubuntu.com quantal-proposed/restricted Translation-en
Hit http://archive.ubuntu.com quantal-proposed/universe Translation-en
Hit http://archive.ubuntu.com quantal-backports/multiverse amd64 Packages
Hit http://archive.ubuntu.com quantal-backports/universe amd64 Packages
Hit http://archive.ubuntu.com quantal-backports/restricted amd64 Packages
Hit http://archive.ubuntu.com quantal-backports/multiverse Translation-en
Hit http://archive.ubuntu.com quantal-backports/restricted Translation-en
Hit http://archive.ubuntu.com quantal-backports/universe Translation-en
Hit http://security.ubuntu.com quantal-security/universe Translation-en
Ign http://archive.ubuntu.com quantal-updates/multiverse Translation-en_GB
Ign http://archive.ubuntu.com quantal-updates/restricted Translation-en_GB
Ign http://archive.ubuntu.com quantal-updates/universe Translation-en_GB
Ign http://archive.ubuntu.com quantal-proposed/multiverse Translation-en_GB
Ign http://archive.ubuntu.com quantal-proposed/restricted Translation-en_GB
Ign http://archive.ubuntu.com quantal-proposed/universe Translation-en_GB
Ign http://archive.ubuntu.com quantal-backports/multiverse Translation-en_GB
Ign http://archive.ubuntu.com quantal-backports/restricted Translation-en_GB
Ign http://archive.ubuntu.com quantal-backports/universe Translation-en_GB
Ign http://packages.ros.org precise/main Translation-en_GB
Ign http://security.ubuntu.com quantal-security/multiverse Translation-en_GB
Ign http://packages.ros.org precise/main Translation-en
Ign http://security.ubuntu.com quantal-security/restricted Translation-en_GB
Ign http://packages.osrfoundation.org precise/main Translation-en_GB
Ign http://security.ubuntu.com quantal-security/universe Translation-en_GB
Ign http://packages.osrfoundation.org precise/main Translation-en
Fetched 285 kB in 3s (86.0 kB/s)
Reading package lists... Done
rushabh@ubuntu:~$ . /usr/share/gazebo/setup.sh
bash: /usr/share/gazebo/setup.sh: No such file or directory
rushabh@ubuntu:~$ echo "source /usr/share/gazebo/setup.sh" >> ~/.bashrc
rushabh@ubuntu:~$ source ~/.bashrc
bash: /usr/share/gazebo/setup.sh: No such file or directory
rushabh@ubuntu:~$ . /usr/share/gazebo/setup.sh
bash: /usr/share/gazebo/setup.sh: No such file or directory
rushabh@ubuntu:~$ gazebo
gazebo: command not found
rushabh@ubuntu:~$ sudo apt-get install gazebo
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
gazebo : Depends: libfreeimage3 but it is not going to be installed
Depends: libprotobuf-dev but it is not installable
Depends: freeglut3 but it is not installable
Depends: libcurl4-openssl-dev but it is not installable
Depends: libogre-dev but it is not going to be installed
Depends: ros-fuerte-urdfdom but it is not going to be installed
Depends: libboost-thread1.46.1 but it is not installable
Depends: libboost-signals1.46.1 but it is not installable
Depends: libboost-system1.46.1 but it is not installable
Depends: libboost-filesystem1.46.1 but it is not installable
Depends: libboost-program-options1.46.1 but it is not installable
Depends: libboost-regex1.46.1 but it is not installable
Depends: libboost-iostreams1.46.1 but it is not installable
Depends: robot-player but it is not going to be installed
Depends: libcegui-mk2-0.7.5 but it is not installable
Depends: libavformat53 but it is not installable
Depends: libavcodec53 but it is not installable
Depends: libswscale2 but it is not installable
E: Unable to correct problems, you have held broken packages.
Originally posted by rdd0101 on Gazebo Answers with karma: 1 on 2013-02-04
Post score: 0
Answer:
See question 1190
Originally posted by gerkey with karma: 1414 on 2013-02-05
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 3003,
"tags": "gazebo"
} |
Draw sine wave going around a circle | Question: I have used d3 to draw a sine wave going around a circle. This is the very first time I've used d3 or drawn a SVG and I'm fairly new to JS as well, so I don't know if I've overcomplicated it/if there is a simpler way to achieve this. I'd appreciate some feedback - especially if there's any way to make my code more concise.
See my codepen.
const svg = d3.select('svg');
const margin = { top: 50, right: 50, bottom: 50, left: 50 };
const width = +svg.attr('width') - margin.left - margin.right;
const height = +svg.attr('height') - margin.top - margin.bottom;
// content area of your visualization
const vis = svg
.append('g')
.attr('transform', `translate(${margin.left+width/2},${margin.top+height/2})`);
// show area inside of margins
const rect = vis
.append('rect')
.attr('class', 'content')
.attr('width', width)
.attr('height', height)
.attr('transform', `translate(${-width/2},${-height/2})`);
// show scales
const xScale = d3
.scaleLinear()
.domain([-100, 100])
.range([-width/2, width/2]);
const yScale = d3
.scaleLinear()
.domain([100, -100])
.range([-height/2, height/2]);
vis.append('g').call(d3.axisTop(xScale));
vis.append('g').call(d3.axisLeft(yScale));
// draw circle
const pi = Math.PI
const radius = 63.66
const circle = vis
.append('circle')
.style('stroke-dasharray', '3, 3')
.style('stroke', 'black')
.style("fill", "transparent")
.attr("r", xScale(radius))
.attr("cx", 0)
.attr("cy", 0)
// get coordinates for a sine wave
const getSineWave = ({
numWaves,
wavelength,
amplitude,
phase,
numPointsPerWave,
}) => {
return (
d3.range(numWaves*numPointsPerWave+1).map(function(k) {
const x = k * wavelength/numPointsPerWave
return [x, amplitude * Math.sin(phase + 2 * pi * x/wavelength)];
})
)
}
// transform a coordinate from linear space to circular space
const rotate = (cx, cy, x, y, radius) => {
const theta = x/radius,
sin = Math.sin(theta),
cos = Math.cos(theta),
nx = cx + (radius + y) * sin,
ny = cy + (radius + y) * cos
return [nx, ny];
}
// generate sine wave
const numWaves = 4
const amplitude = 10
const phase = pi/2
const circumference = 2 * pi * radius
const wavelength = circumference / numWaves
const numPointsPerWave = 4
const sineWave = getSineWave({
numWaves,
numPointsPerWave,
wavelength,
amplitude,
phase,
})
var rotatedSine = sineWave.map( d => {
const rotatedCoords = rotate(0, 0, d[0], d[1], radius)
return rotatedCoords
})
// remove the last point as it would overlap the first point of the circle
rotatedSine.pop()
// get Path commands for given coordinates
const getPath = d3.line()
.x(d => xScale(d[0]))
.y(d => yScale(d[1]))
.curve(d3.curveCardinalClosed)
// draw sine wave going around a circle
const wave = vis
.append('path')
.attr('d', getPath(rotatedSine))
.attr('fill', 'none')
.attr('stroke', 'black')
.attr('stroke-width', '1px')
svg {
background-color: steelblue;
}
.content {
fill: lightsteelblue;
}
<script src="https://d3js.org/d3.v4.js"></script>
<svg width="1000" height="1000"></svg>
Answer: Overall you have a good D3 code here, congrats (I'm fairly impressed with the questions I've seen here at CR lately, from people claiming "This is the very first time I've used d3 or drawn a SVG").
However, before sharing my proposed alternative, I'd like to tell you that, unfortunately, you're using the wrong tool for the task!
As you can see in my answer here, the problem is that D3 is designed to create visualizations based on data, normally qualitative or discrete quantitative data sets. According to Mike Bostock, D3 creator:
D3 is designed primarily for data visualization, mostly empirical datasets rather than continuous functions, and so there is no built-in method for generating abscissa values. (emphasis mine)
As you can see in your case, the line gets better if you push more data points into the array, increasing either of the two constants in...
d3.range(numWaves*numPointsPerWave+1)
In your particular case we can get a good line with numPointsPerWave = 10, which is not a big problem... however, the advice remains: D3 is not the correct tool here, you should look for a proper plotting library. As you can see in the linked answer above, in some situations we have to increase the data points a lot to have a good looking graph.
D3 radial line
All that being said, here is my proposed alternative: instead of all that complicated math and 2 functions to set the path's d attribute, use a D3 radial line generator.
In this answer I'll focus only on the use of the radial line generator, nothing more. I'm sure that other users will soon post answers regarding your JavaScript code (use of functions, constants, destructuring, currying etc...)
According to the API, d3.lineRadial():
Constructs a new radial line generator with the default settings. A radial line generator is equivalent to the standard Cartesian line generator, except the x and y accessors are replaced with angle and radius accessors. Radial lines are always positioned relative to ⟨0,0⟩; use a transform (see: SVG, Canvas) to change the origin.
So, all we need is the line generator...
const radialGenerator = d3.lineRadial()
.angle(d => d.angle)
.radius(d => d.radius)
.curve(d3.curveCardinalClosed);
And the adequate data:
const length = 100;
const amplitude = 20;
const radialScale = d3.scaleLinear()
.domain([0, length])
.range([0, Math.PI * 2]);
const data = d3.range(length).map(function(d) {
return {
angle: radialScale(d),
radius: xScale(radius) + Math.sin(d) * amplitude
}
});
Then, you append the path:
const wave = vis.append('path')
.attr('d', radialGenerator(data))
.attr('fill', 'none')
.attr('stroke', 'black')
.attr('stroke-width', '1px');
Have in mind that my function here is not as complex as yours, which accepts several different parameters: this answer is just to show you the existence of d3.lineRadial(), you can certainly improve it.
Here is the demo:
const svg = d3.select('svg');
const margin = {
top: 50,
right: 50,
bottom: 50,
left: 50
};
const width = +svg.attr('width') - margin.left - margin.right;
const height = +svg.attr('height') - margin.top - margin.bottom;
// content area of your visualization
const vis = svg.append('g')
.attr('transform', `translate(${margin.left+width/2},${margin.top+height/2})`);
// show area inside of margins
const rect = vis.append('rect')
.attr('class', 'content')
.attr('width', width)
.attr('height', height)
.attr('transform', `translate(${-width/2},${-height/2})`);
// show scales
const xScale = d3.scaleLinear()
.domain([-100, 100])
.range([-width / 2, width / 2]);
const yScale = d3.scaleLinear()
.domain([100, -100])
.range([-height / 2, height / 2]);
vis.append('g').call(d3.axisTop(xScale));
vis.append('g').call(d3.axisLeft(yScale));
// draw circle
const pi = Math.PI
const radius = 63.66
const circle = vis.append('circle')
.style('stroke-dasharray', '3, 3')
.style('stroke', 'black')
.style("fill", "transparent")
.attr("r", xScale(radius))
.attr("cx", 0)
.attr("cy", 0);
const length = 100;
const amplitude = 20;
const radialGenerator = d3.lineRadial()
.angle(d => d.angle)
.radius(d => d.radius)
.curve(d3.curveCardinalClosed)
const radialScale = d3.scaleLinear()
.domain([0, length])
.range([0, Math.PI * 2]);
const data = d3.range(length).map(function(d) {
return {
angle: radialScale(d),
radius: xScale(radius) + Math.sin(d) * amplitude
}
});
const wave = vis.append('path')
.attr('d', radialGenerator(data))
.attr('fill', 'none')
.attr('stroke', 'black')
.attr('stroke-width', '1px')
svg {
background-color: steelblue;
}
.content {
fill: lightsteelblue;
}
<script src="https://d3js.org/d3.v5.min.js"></script>
<svg width="1000" height="1000"></svg> | {
"domain": "codereview.stackexchange",
"id": 31988,
"tags": "javascript, mathematics, ecmascript-6, d3.js, svg"
} |
Why was Heaviside perplexed by this property of the gravitational potential? | Question: When reading an article (by Heaviside, linked below), I came across this sentence:
For it must be confessed that the exhaustion of potential energy from
a universal medium is a very unintelligible and mysterious matter.
When matter is infinitely widely separated, and the forces are least,
the potential energy is at its greatest, and when the potential energy
is most exhausted, the forces are most energetic!
But I thought that the force only depends on the derivative of the potential at that position and not its actual value i.e. $F = - \frac{dU}{dx} $. So why is Heaviside making this point?
I guess the first sentence is referencing the ether, but I still don't understand why Heaviside is confused. What physical intuition led him to the idea that the relationship between gravitational force and potential energy is somehow 'backwards'?
Here is the article: https://sergf.ru/Heavisid.htm
Answer: There is an article about this subject on the website 'mathpages'. The article is titled Why Maxwell couldn't explain gravity
Heaviside's thinking was strongly influenced by Maxwell's work.
Incidentally, to understand Maxwell's perspective on electromagnetism it is very helpful to read the treatment On physical lines of force (1861) (available on wikisource) that preceded Maxwell's large work 'A Dynamical Theory of the Electromagnetic Field (1865)'
(Note: while Maxwell uses a mechanical model he is not committed to the implementation details. Rather: pushing for a detailed model allows him to set up mathematical expressions for various relations; relations that will be valid for any model of how electromagnetism works. That is, the mathematics of Maxwell's model transcends the implementation details.) | {
"domain": "physics.stackexchange",
"id": 98239,
"tags": "forces, newtonian-gravity, potential-energy, history"
} |
Conservation of energy in case of Exploding Projectile | Question: Consider the following problem:
A projectile of mass M explodes, while in flight, into three fragments. One fragment of mass $m_1 =\frac12 M$ travels in the original direction of the projectile. Another fragment of mass $m_2 =\frac16 M$ travels in the opposite direction and the third fragment of mass $m_3 =\frac13 M$ comes to rest.
The energy $E$, released in the explosion, is 5 times the kinetic energy of the projectile at explosion. What are the velocities of the fragments?
Let us try to apply conservation of energy at the point the projectile explodes.
Since all the fragmented particles will be at very same height at the instant of the explosion then there will be no difference in their potential energies. Only the kinetic energy of the projectile will affect how fast the particles move apart. So how could the loss be 5 times? I mean I can't understand conservation of energy in this case.
Answer: You must look at all forms of energy.
Just before the explosion, the projectile has gravitational potential energy GPE, kinetic energy KE, and also chemical potential energy CPE stored in the dynamite.
Just after the explosion the 3 fragments all have the same GPE as before. The CPE has disappeared in the explosion. As Jim says, we must assume that it is converted completely into KE, and none (or a negligible amount) into light, heat and sound. The total KE has increased by 5x what it was before the explosion, so the final KE is 6x the initial value. Another way of putting this is that the CPE was 5x the initial KE, which is just what is written in the question.
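Given that, the fragment speeds follow from conserving momentum and kinetic energy together. A sketch of the algebra (my own working, not part of the original answer), writing $v$ for the projectile's speed at the explosion and $v_1, v_2$ for the speeds of $m_1$ and $m_2$:

$$Mv = \tfrac{1}{2}Mv_1 - \tfrac{1}{6}Mv_2 \quad\Rightarrow\quad v_2 = 3v_1 - 6v$$

$$\tfrac{1}{2}\cdot\tfrac{1}{2}Mv_1^2 + \tfrac{1}{2}\cdot\tfrac{1}{6}Mv_2^2 = 6\cdot\tfrac{1}{2}Mv^2 \quad\Rightarrow\quad 3v_1^2 + v_2^2 = 36v^2$$

Substituting the first into the second gives $3v_1^2 + (3v_1-6v)^2 = 36v^2$, i.e. $12v_1^2 = 36v_1v$, so $v_1 = 3v$ and hence $v_2 = 3v$: both moving fragments travel at three times the projectile's speed, one forward and one backward.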
Momentum must also be conserved. | {
"domain": "physics.stackexchange",
"id": 32474,
"tags": "homework-and-exercises, newtonian-mechanics, momentum, conservation-laws, projectile"
} |
Prove not context free | Question: How can we prove that:
$$
L = \{ w_1\#w_2 \mid w_1 \in w_2;\; |w_2| > |w_1|;\; w_1 , w_2 \in \{0, 1\}^*\}
$$
is not context-free?
The language defines $w_1$ as a sub-string of $w_2$, and they are separated by a $\#$. This is easy with the CFG pumping-lemma for a slightly different language with $|w_2| \ge |w_1|$ by using the special case of $|w_2| = |w_1|$ (i.e. $w_1 = w_2$).
But here, $w_1$ is a proper sub-string of $w_2$ so I can't do the same. I fail to push the string out since we can always pump, for example the first symbol of $w_2$.
Answer: For large enough $p$, consider the word $w = 0^{p+1}1^{p+1}\#0^{p+1}1^{p+2} \in L$. Mark the part $1^{p+1}\#0^{p+1}$. According to Ogden's lemma, we can write $w = uxyzv$ such that $xyz$ contains at least $1$ and at most $p$ marked symbols, and $ux^iyz^iv \in L$ for all $i \geq 0$. The pumped part $xyz$ cannot lie all to the right of $\#$ since then pumping it down would result in a word not in the language (here we crucially use the fact that only the $0^{p+1}$ part to the right of $\#$ is marked). It also cannot lie all to the left of $\#$ since then pumping it up would result in a word not in the language. It follows that the part of $xyz$ to the right of $\#$ is of the form $0^k$, and the part of $xyz$ to the left of $\#$ is of the form $1^\ell$ (otherwise, there would be more than $p$ marked symbols). However, pumping up, the resulting word is not in the language. This contradiction shows that $L$ is not context-free. | {
"domain": "cs.stackexchange",
"id": 2753,
"tags": "formal-languages, context-free, proof-techniques, pumping-lemma"
} |
Why are most discovered exoplanets heavier than Earth? | Question: Looking at all discovered exoplanets (4393 exoplanets), I found than only 17 of them (less than one percent!) have masses less or equal to Earth's mass. Why so?
Is it because it is very difficult to discover an exoplanet of a low mass?
Is it because of the mass distribution, so that Earth-mass planets are very rare?
Is it because of the some other physical limitations?
According to Wikipedia:
The minimum mass/size required for an extrasolar object to be considered a planet should be the same as that used in our Solar System.
From another article:
A dwarf planet, by definition, is not massive enough to have gravitationally cleared its neighbouring region of planetesimals: it is not known quite how large a planet must be before it can effectively clear its neighbourhood, but one tenth of the Earth's mass is certainly sufficient.
So, where are all these planets that are lighter than Earth? Personally, I suspect that it's very difficult to detect these (relatively) low-mass planets. If so, are there any theoretical limitations that prevent formation of low-mass planets?
Note 1: most of the planets (around 70%) from the mentioned catalog do not have masses (i.e. there's no estimate for the mass of a planet). Most of the rest have $\sin i$ mass estimates. That might be one of the reasons.
Answer: There are a number of methods of detecting exoplanets, but all of them favour detection of larger planets over smaller ones, albeit for slightly different definitions of large:
Radial velocity measurement — this detects the small movement of the star towards and away from us as the planet and the star orbit their mutual barycenter. This movement is fastest when the planet is massive (so the barycenter is further from the center of the star) and close to the star (so the orbital velocity is highest). It also needs the planet's orbit not to be "face-on" to the Earth. This method produces measurements of $mass \times \sin(i)$, since a more massive planet in a less inclined orbit produces the same motion as a less massive planet in a more inclined orbit.
Transverse displacement — this detects the small movement of the star from side to side (against the background of distant stars) as the planet and the star orbit their mutual barycenter. The displacement is largest when the planet is massive and far from the star (although distant planets require observation over a long period of time). It works best on stars close to us.
Transit — this detects the tiny reduction in the brightness of the star when the planet moves between us and the star. It is more likely to detect large planets, and more likely to notice if the orbital period of the planet is fairly small. | {
"domain": "astronomy.stackexchange",
"id": 5164,
"tags": "planet, exoplanet, mass, planetary-formation, earth-like-planet"
} |
Heavy loss and inaccurate answer in pytorch | Question: As my first AI model I have decided to make an AI model to predict multiplication of two numbers EX - [2,4] = [8]. I wrote the following code, but the loss is very high, around thousands, and it's very inaccurate. How do I make it more accurate?
import torch
import torch.nn as nn
import torch.nn.functional as F
data = torch.tensor([[2,4],[3,6],[3,3],[4,4],[100,5]],dtype=torch.float)
values = torch.tensor([[8],[18],[9],[16],[500]],dtype=torch.float)
lossfun = torch.nn.MSELoss()
model=Net()
optim = torch.optim.Adam(model.parameters(),lr=0.5)
class Net(nn.Module):
def __init__(self):
super(Net,self).__init__();
self.fc1 = nn.Linear(in_features=2,out_features=3)
self.fc2 = nn.Linear(in_features=3,out_features=6)
self.out = nn.Linear(in_features=6,out_features=1)
def forward(self,x):
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
x = self.out(x)
return x
for epoch in range(1000):
y_pred=model.forward(data)
loss = lossfun(y_pred,values)
print(loss.item())
loss.backward()
optim.step()
Note: I am a newbie in AI and ML.
Answer: There are a few things you could do to improve this NN, but are probably worth covering in different questions.
Your main problem though is that you forgot to reset the gradient after each training batch. You need to call optim.zero_grad() in order to do this, at the start of each training loop. Otherwise, using PyTorch, the gradient values keep accumulating inside the model's training data (sometimes you want this effect if you are adding gradients from multiple sources, that's why PyTorch is not clearing them automatically for you).
In addition, a learning rate of 0.5 is very high for the Adam optimiser - it is very common to leave it at the default value because Adam is an adaptive optimiser that will adjust step sizes depending on gradients seen so far.
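To see why the reset matters, here is a library-free toy (entirely my own; the objective $f(w) = (w-3)^2$ and the learning rate are illustrative choices, not from your code) contrasting plain gradient descent with a loop that lets the gradient accumulate the way PyTorch does when zero_grad() is never called:

```python
# Toy illustration of gradient accumulation (no PyTorch needed).
# f(w) = (w - 3)^2, so f'(w) = 2(w - 3); the minimum is at w = 3.
def train(reset_grad):
    w, g, lr = 0.0, 0.0, 0.1
    for _ in range(100):
        if reset_grad:
            g = 0.0          # analogous to optim.zero_grad()
        g += 2 * (w - 3)     # backward() ADDS to the stored gradient
        w -= lr * g          # analogous to optim.step()
    return w

print(train(reset_grad=True))   # converges to ~3.0
print(train(reset_grad=False))  # keeps oscillating, never settles at 3.0
```

With resetting, the loop settles at the minimum; without it, the accumulated gradient acts like undamped momentum and the iterate oscillates around the minimum indefinitely.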
Here is a working version of your code:
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net,self).__init__();
self.fc1 = nn.Linear(in_features=2,out_features=3)
self.fc2 = nn.Linear(in_features=3,out_features=6)
self.out = nn.Linear(in_features=6,out_features=1)
def forward(self,x):
x = self.fc1(x)
x = torch.relu(x)
x = self.fc2(x)
x = torch.relu(x)
x = self.out(x)
return x
data = torch.tensor([[2,4],[3,6],[3,3],[4,4],[100,5]],dtype=torch.float)
values = torch.tensor([[8],[18],[9],[16],[500]],dtype=torch.float)
lossfun = torch.nn.MSELoss()
model=Net()
optim = torch.optim.Adam(model.parameters(),lr=0.001)
for epoch in range(20000):
optim.zero_grad()
y_pred=model.forward(data)
loss = lossfun(y_pred,values)
if (epoch % 1000 == 0):
print(loss.item())
loss.backward()
optim.step()
This version can be tweaked to quite easily reach 0 loss for your data set.
This has not really learned how to multiply two values. The approximation to multiplying will be very weak as there is very little data. However, playing with some very basic data and a simple NN is a first step towards understanding details like this . . . | {
"domain": "ai.stackexchange",
"id": 1048,
"tags": "objective-functions, pytorch"
} |
How to calculate the concentration of NO at equilibrium? | Question:
$\ce{N2(g) + O2(g) <=> 2NO(g)}$, $K_c = 4.1 \cdot 10^{-4} (T=2000~^\circ\mathrm{C})$
What is $\ce{[NO]}$ when a mixture of $0.20~\mathrm{mol}$ $\ce{N2(g)}$ and $0.15~\mathrm{mol}$ $\ce{O2(g)}$ reach equilibrium in a $1.0~\mathrm{L}$ container at $T=2000~^\circ\mathrm{C}$?
When $K$ is of the order of $10^{-4}$, you can assume the change is negligible, and just using the concentrations from the given mole amounts and the container volume, I get the right answer. When plugging back in, to see if it matches the given $K_c$, it's right:
$$((0.20~\mathrm{M}~\ce{N2})(0.15~\mathrm{M}~\ce{O2}) \cdot 4.1\cdot10^{-4} K_c) = x^2.$$ Ans: $\ce{[NO]} = 0.0035~\mathrm{M}$.
From what I've read and heard you should be able to do it 'the hard' way, using the quadratic formula, like you would if $K_c$ weren't of the order of $10^{-4}$ and small enough to neglect the change. But I can't seem to get the right answer trying to do it that way.
Rearranging everything, the equation I get to plug into the quadratic formula is
$$x^2 - 0.35x + 0.03,$$ then I get $0.2$ and $0.15$ as the answers. $0.2$ is too high: subtracting it from the initial $0.15~\mathrm{M}$ of $\ce{O2}$ wouldn't be right. So using the $0.15$ as $x$, the change in $\mathrm{M}$, and subtracting that from the initial concentrations I have, then plugging that into the equation for $K_c$ (I even tried rearranging it in case I had something mixed up), I don't get anything close to the given $K_c$.
Answer: $K_c = \frac{[\ce{NO}]^2}{[\ce{N2}][\ce{O2}]}$
Let x = [NO]
$\frac{x^2}{(0.20M-0.5x)(0.15M-0.5x)} = 0.00041$
$x^2 = 0.00041(0.25x^2 -0.175xM + 0.030M^2)$
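Collecting terms gives a quadratic that can also be solved numerically; this is my own sketch using only Python's standard library, and it recovers the approximate answer:

```python
import math

Kc = 4.1e-4
# From Kc = x^2 / ((0.20 - 0.5x)(0.15 - 0.5x)),
# collect terms into a*x^2 + b*x + c = 0 (keeping the tiny x^2 correction):
a = 1 - Kc * 0.25
b = Kc * 0.175
c = -Kc * 0.03

# The physically meaningful root is the positive one:
x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
print(round(x, 4))  # 0.0035
```

This agrees with the small-$x$ approximation $\ce{[NO]} = 0.0035~\mathrm{M}$, confirming that neglecting the change was justified here.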
$x^2 + 0.000072xM -0.0000123M^2 = 0$ | {
"domain": "chemistry.stackexchange",
"id": 2883,
"tags": "equilibrium, concentration"
} |
Efficiency of an electric motor? | Question: An electric motor runs off a 12V d.c. supply and has an overall efficiency of 75%. Calculate how much electric charge will pass through the motor when it does 90J of work.
Can someone tell me where I went wrong?
What I did:
First of all, I don't know what "when it does 90J of work" means so I just assumed that because the efficiency is 75%, the "good" work it does will be 0.75*90=67.5J. Therefore, Q=W/V=67.5/12=5.625=5.6C to 2 s.f.
The answer at the back is given as 10C but I'm not sure what I did wrong. However, I did figure out that if you divide by 0.75 (i.e. 90/0.75) instead of multiplying by 0.75 like I did, you get the right answer.
Can someone run me through in detail how I should think about problems like this? I wouldn't mind some background regarding efficiency as well since I don't know what it is.
Thanks!
Answer: The "work" is the useful rotational pull you get from the motor, and excludes any wasted heat. If only 75% of the energy going in comes out as work, then you need to put in 90/0.75J of energy to get 90J of work out. | {
"domain": "physics.stackexchange",
"id": 21971,
"tags": "electricity, charge, work, voltage"
} |
Comparing gene abundances between metagenomes | Question: My workflow until now:
Find fragments of a marker gene in unassembled metagenomes > download and assemble metagenomes > recover the gene neighborhood / gene set of interest
Right now I have a rough estimate of how abundant these genes are by the 'depth' noted on the assembled contigs (I use MEGAHIT for assembly). I was wondering if there's a more thorough/proper way to do this. I would like to compare the abundance of specific genes between a) samples in the same study, and b) different studies. I imagine that the size of the individual metagenomes should be considered in both cases, but point b) might add additional difficulties such as different sequencing techniques. I would appreciate your insights.
Answer: I would avoid using assemblies to answer this question, as there's no guarantee that you will be able to assemble your genes of interest; you can however estimate their abundance even if they are relatively rare.
I understand your question as being one of estimating the abundance of either some specific genes (e.g. butyrate metabolism genes) or all genes in a microbial community across multiple samples for comparative purposes. In other words, not 16S or marker gene analysis for the purposes of estimating organismal abundance, which is a rather different problem (though in that case I would still not use an assembly).
A more standard workflow is:
align metagenomic reads against some existing database of genes annotated appropriately.
estimate the number of reads aligning against some gene or ortholog in some way (using for example KEGG Orthology or similar).
use counts from (2) as input to some statistical procedure, possibly summarizing across functional categories.
Some examples of how this has been done are here, here, here. I am sure that there are more recent/relevant references but I haven't been following the field closely in the last few years. | {
"domain": "bioinformatics.stackexchange",
"id": 1479,
"tags": "gene, assembly, metagenome"
} |
Can DFT magnitude be used to identify repeating patterns in an Image? | Question: Given the DFT magnitude vector of an 1-D image, I want to understand if we can calculate the size and pitch of repeating patterns in the image. Is this possible? I took a few test images and calculated DFT magnitude using openCV https://docs.opencv.org/3.4/d8/d01/tutorial_discrete_fourier_transform.html. Then I tried to calculate the repeating pattern size from the DFT magnitude. Here is an example. "1-D Image row" below shows the grayscale values of the 8 pixels in the image.
1-D Image 0 128 0 128 0 128 0 128
Magnitude 1 0 0 0 1 0 0 0
k 0 1 2 3 4 -3 -2 -1
Based on few other posts, my understanding of the index (k) in the DFT magnitude vector is as follows: k represents the maximum number of cycles of a pattern of specific size (size= Sample Size in pixels (N)/k), that can exist in the image. For example, for K=4 and N=8, pattern width has to be 8/4 = 2 pixels and there is maximum 4 cycles possible. In the above example, I see 0, 128, 0 pattern of width 2 pixels repeated 3 times and hence K=4 has the strongest magnitude.
K=0 corresponds to DC frequency. The magnitude vector repeats after K=N/2 and hence I marked those indices as -3, -2, -1. I do not use these values presently.
However, the above assumption of k does not hold for another image.
1-D Image 0 0 128 0 128 0 0 0
Magnitude 1 0.94 0 0.94 1 0.94 0 0.94
K 0 1 2 3 4 -3 -2 -1
I see only 0-128-0 pattern repeating twice and no other repeating patterns. But we still have strong magnitude for K=1 and K=3? Why is this?
Can someone explain what K captures and given DFT magnitude can it be used to identify repeating patterns in the image?
Answer: Yes, the DFT magnitude can reveal repetition patterns in an image. I provide an intuitive understanding for the results returned by the DFT which will hopefully make the interpretation of the DFT magnitudes clearer with regards to periodicity.
The DFT when given in its common form as follows:
$$X[k] = \sum_{n=0}^{N-1}x[n]e^{-j 2 \pi n k /N} \tag{1} \label{1}$$
Would have the following results for the OP's test waveforms. Since the magnitude is only of interest, what is given below is $|X[k]|$ as returned by the absolute value of the fft function in MATLAB, Octave and Python's scipy.fft:
abs(fft([0, 128, 0, 128, 0, 128, 0, 128]))
> [512, 0, 0, 0, 512, 0, 0, 0]
This matches the OP's intuition that the time domain is repeating 4 times over the duration of the input, resulting in a DC offset as indicated by the first bin at $k=0$ and a frequency of 4 cycles over the record (equivalently $4/N = 1/2$ cycle/sample) as indicated by the value at $k=4$. Due to mathematical equivalence with periodicity in the Fourier Transform (what I refer to as implied periodicity), this is also $-4$ cycles over the record, since as the OP has shown we can count down negatively from the last sample to indicate positive or negative rotations. I'll explain that little bombshell about negative frequencies later, but first let's see what the result for the second example is and how we interpret it:
abs(fft([0, 0, 128, 0, 128, 0, 0, 0]))
> [256.0000, 181.0193, 0, 181.0193, 256.0000, 181.0193, 0, 181.0193]
Unlike the first case which resulted in a minimum solution of $k=0$ as DC and one other frequency bin $k=4$, this one consists of all the frequency bins except $k=2$ and $k=6$.
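Both magnitude vectors can also be reproduced without MATLAB by evaluating equation \ref{1} directly in pure Python (a sketch of mine, not part of the original answer):

```python
import cmath

def dft_mag(x):
    """Magnitude of the DFT of x, computed directly from the definition."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N)
                    for n in range(N)))
            for k in range(N)]

print([round(m, 4) for m in dft_mag([0, 128, 0, 128, 0, 128, 0, 128])])
# [512.0, 0.0, 0.0, 0.0, 512.0, 0.0, 0.0, 0.0]
print([round(m, 4) for m in dft_mag([0, 0, 128, 0, 128, 0, 0, 0])])
# [256.0, 181.0193, 0.0, 181.0193, 256.0, 181.0193, 0.0, 181.0193]
```

(Rounding suppresses the ~$10^{-13}$ floating point residue in the bins that are exactly zero.)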
The intuition on this is gained in first reviewing the inverse DFT, which is the time domain reconstruction given as follows:
$$x[n] = \frac{1}{N}\sum_{k=0}^{N-1}X[k]e^{j 2 \pi n k /N} \tag{2} \label{2}$$
I find a lot of insight from the continuous time Fourier Series Expansion (FSE) and its reconstruction, where we recall from that theory (thanks to Joseph Fourier's publication from 1822!) that any continuous time function can be decomposed into a sum of spinning phasors. Yes "spinning phasors"; I recommend starting with that instead of sinusoids if you really want to dive into the weeds of DSP. So instead of picturing sinusoids, and then with that having to convert the nice single spinning phasor with magnitude and phase as $Ke^{j\phi} = K\angle \phi$ into a more complicated expression with a sine and cosine as $\frac{K}{2} \cos(\phi)+ j\frac{K}{2} \sin(\phi)$, just stick with the phasor and picture a bicycle wheel spinning. Then we can easily comprehend the notion of positive and negative frequencies in terms of direction of rotation, and readily understand the equations for the DFT and inverse DFT as written above directly. (And this applies to the intuition that can be obtained with many more operations of time or space and frequency in signal processing.) The FSE decomposes the finite duration time domain function (or similarly an infinite duration periodic function) into a series of spinning phasors, each with a constant magnitude and starting phase in time. The fundamental frequency is $1/T$, where $T$ is the total length in time for the finite duration case, or the period for the periodic waveform case, and the only frequencies that exist will be integer multiples of the fundamental frequency (this makes sense as, in the case of a periodic waveform extending to infinity, these are the only solutions that will periodically repeat in the summation). The discrete case as we use for the DFT and inverse DFT is the same thing, only sampled in time and sampled in frequency.
With that we see from equation \ref{2} that the time domain reconstruction is a sum of spinning phasors, each with a magnitude and starting phase as given by the complex value $X[k]$ for each $k$, and each spins at a multiple of the fundamental frequency as $k \omega_o$ where $\omega_o=2 \pi n/N$. Note that the sum of spinning phasors on the complex plane in the time domain is accomplished by placing each phasor on the end of the previous in the sum, with the end point as the total summation at any given point in time.
Here is an animation as a continuous time interpretation of the Discrete Fourier Transform demonstrating the result of the OP's first case, which resulted in a time domain reconstruction given by just two phasors each with magnitude $512/N = 512/8 = 64$ as $64 + 64e^{j4\omega_o}$, where I use $\omega_o = 2\pi n/N$ to represent the fundamental frequency.
I note that the frequency domain magnitude shown has been scaled by $\frac{1}{N}$ to represent the magnitude of the phasors for the time domain reconstruction (as given by equation \ref{2}). So in this graphic we see the "spinning phasors" in the IQ Phasor diagram, where the DC component is fixed with time, as DC, so does not spin, and then the one rotating is spinning at four times the rate of the fundamental frequency. The fundamental frequency as well as all the other components has a magnitude of 0 in this case. The magnitude and phase of this complex result is shown in the time domain on the left hand side, consistent with the distance from the end point of the sum of the two phasors at any point in time.
With that here is the same plot using the OP's second case:
Note here in the time domain, at the discrete sample times used, the waveform is always zero except at $n=2$ and $n=4$. However in order to represent this as "spinning phasors" using the recipe as stated (a fundamental frequency at the inverse of the total time duration, which here would be $1/N$ cycles/sample, and only integer multiples of that frequency), several frequency components are required such that the total sum will result in the time domain samples.
So with that understanding, and to answer the OP's question, note the comparison of the OP's two waveforms when each is continued periodically in time:
[0, 128, 0, 128, 0, 128, 0, 128, 0, 128, 0, 128, 0, 128, 0, 128, ...]
[0, 0, 128, 0, 128, 0, 0, 0, 0, 0, 128, 0, 128, 0, 0, 0, ...]
The first pattern repeats without variation while in the second case there is repetition and skips, which results in many more frequency components. Those frequency components are the discrete frequencies (as the rate of rotation of each spinning phasor) that represent the time domain waveform as given, at those sample locations, with their respective repetition rates.
This and further details of the DFT and its interpretations are explained in more detail in this post. | {
"domain": "dsp.stackexchange",
"id": 11863,
"tags": "image-processing, fourier-transform, dft, opencv, image-segmentation"
} |
Conservation of energy and angular momentum | Question: I'm writing a java program to simulate the solar system. All planets are modelled as point masses. How do I check if my solar system is conserving energy? I'm not sure how to calculate the energy of the system at the start of the simulation, let alone at the end!
And given that my model is with point masses, presumably I can't calculate angular momentum at all?
Answer: The conserved quantities in your problem are: total mechanic energy $T+U$, total linear momentum $\vec P$, and total angular momentum $\vec L$ of the $N$ point masses.
They are defined as:
$$E=T+U \quad\text{ total mechanical energy}$$
$$U=-G\sum_{i=1}^N \sum_{j>i}^N \frac{m_i m_j}{|\vec r_i -\vec r_j|}\quad\text{ total potential energy (each pair counted once)}$$
$$T=\frac12\sum_i^N m_i v_i^2\quad\text{ total kinetic energy}$$
$$\vec P=\sum_i^N m_i \vec v_i \quad\text{ total linear momentum}$$
$$\vec L=\sum_i^N m_i (\vec r_i\times \vec v_i) \quad\text{ total angular momentum}$$
where $G$ is the gravitational constant (the gravitational force between 2 masses is $F=G \frac{m_i m_j}{|\vec r_i -\vec r_j|^2}$).
Note that:
$m_i$, $\vec r_i$, and $\vec v_i$ are the mass, the position, and the velocity of the point mass $i$.
only the total mechanical energy $E=T+U$ is conserved, while the total kinetic and total potential energies are not conserved separately;
the total linear and angular momenta are vectors, while total energies are scalars;
you must also include the sun in the sums over point masses $i=1,...,N$ and $j=1,...,N$ in the definition of the energies and momenta.
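The OP's simulator is in Java, but the bookkeeping is language-independent; here is a minimal Python sketch of these formulas (the potential-energy loop visits each unordered pair once, which avoids double counting and the divergent $i=j$ term). The two-body configuration at the end is invented purely for illustration:

```python
from itertools import combinations

def kinetic(masses, vel):
    # T = 1/2 * sum_i m_i |v_i|^2
    return 0.5 * sum(m * sum(c * c for c in v) for m, v in zip(masses, vel))

def potential(G, masses, pos):
    # U = -G * sum over unordered pairs of m_i m_j / |r_i - r_j|
    U = 0.0
    for i, j in combinations(range(len(masses)), 2):
        d = sum((a - b) ** 2 for a, b in zip(pos[i], pos[j])) ** 0.5
        U -= G * masses[i] * masses[j] / d
    return U

def energy(G, masses, pos, vel):
    return kinetic(masses, vel) + potential(G, masses, pos)

def momentum(masses, vel):
    # P = sum_i m_i v_i, componentwise
    return [sum(m * v[k] for m, v in zip(masses, vel)) for k in range(3)]

def angular_momentum(masses, pos, vel):
    # L = sum_i m_i (r_i x v_i)
    L = [0.0, 0.0, 0.0]
    for m, r, v in zip(masses, pos, vel):
        L[0] += m * (r[1] * v[2] - r[2] * v[1])
        L[1] += m * (r[2] * v[0] - r[0] * v[2])
        L[2] += m * (r[0] * v[1] - r[1] * v[0])
    return L

# Two unit masses in units where G = 1: one at the origin, one at (1, 0, 0),
# moving oppositely so the total linear momentum vanishes.
masses = [1.0, 1.0]
pos = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
vel = [(0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]

print(energy(1.0, masses, pos, vel))       # T = 1, U = -1, so E = 0.0
print(momentum(masses, vel))               # [0.0, 0.0, 0.0]
print(angular_momentum(masses, pos, vel))  # [0.0, 0.0, -1.0]
```

Log these three quantities every few integration steps; with an exact integrator they would stay constant, so their drift measures the accumulated numerical error.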
A point mass has a well defined angular momentum $m (\vec r \times \vec v)$ which is non-zero if the mass is rotating around another mass (or any point in the space). | {
"domain": "physics.stackexchange",
"id": 26735,
"tags": "newtonian-mechanics, simulations, solar-system"
} |
MergeSort w/ Custom Functions | Question: Please review my implementation of mergesort:
mergesort' :: (Ord a) => [a] -> [a]
mergesort' [] = []
mergesort' xs = merge' (sort' fst', sort' snd')
where fst' = take half xs
snd' = drop half xs
half = len `div` 2
len = length xs
merge' :: (Ord a) => ([a], [a]) -> [a]
merge' ([], []) = []
merge' ([], ys) = ys
merge' (xs, []) = xs
merge' (x:xs, y:ys) = if x < y then x : merge' (xs, y:ys) else y : merge'(x:xs, ys)
sort' :: (Ord a) => [a] -> [a]
sort' [] = []
sort' (x:xs) = m : sort' rest
where m = foldl (\acc x -> if x < acc then x else acc) x xs
rest = filterOne' (/= m) (x:xs)
filterOne' :: (a -> Bool) -> [a] -> [a]
filterOne' _ [] = []
filterOne' f (x:xs) = if not $ f x then xs else x : filterOne' f xs
If it doesn't adhere to the mergesort algorithm, please let me know.
Also, is there a Haskell library function for my filterOne'?
Answer: This isn't mergesort! sort' is a different sorting algorithm entirely, so you end up only performing one merge step before kicking it over to (I believe) selection sort.
First let's address the easy stylistic issues so we have a good base to work from. The primary issue is that the argument passed to merge' is a tuple, which is unnecessary, uncurried, and not very Haskell-y. I'll also drop the extra apostrophes where there isn't an actual naming conflict with the Prelude. We use as-patterns to avoid reconstructing patterns we just deconstructed. And let's just toss sort' and filterOne', they're not part of the solution!
mergesort :: (Ord a) => [a] -> [a]
mergesort [] = []
mergesort xs = merge (??? left) (??? right)
where half = (length xs) `div` 2
left = take half xs
right = drop half xs
merge :: (Ord a) => [a] -> [a] -> [a]
merge [] [] = []
merge xs [] = xs
merge [] ys = ys
merge xss@(x:xs) yss@(y:ys) = if x <= y then x : merge xs yss else y : merge xss ys
-- ^ Less than *or equal* so our sort is stable
One question is what goes in for the ??? where we used to call sort'? Well, mergesort!
One more complex issue of style is the usage of if in merge. Top-level ifs are better represented as guards in Haskell, it's easier to extend guards and their usage is more idiomatic.
merge xss@(x:xs) yss@(y:ys) | x <= y = x : merge xs yss
| otherwise = y : merge xss ys
One small change we can make is adding another base case to mergesort to account for lists of length 1, which are by definition sorted.
mergesort xs@[x] = xs
There's only one more improvement I'd make to this, and that's to call splitAt rather than take and drop in mergesort. Using both take and drop leads to walking half the list twice in each recursive step where we can walk it only once by doing some more bookkeeping. splitAt takes an index to split a list at, and returns the prefix and remainder just like using take and drop separately would.
So putting all of that together, here's the final version.
mergesort :: (Ord a) => [a] -> [a]
mergesort [] = []
mergesort xs@[x] = xs
mergesort xs = merge (mergesort left) (mergesort right)
where half = (length xs) `div` 2
(left, right) = splitAt half xs
merge :: (Ord a) => [a] -> [a] -> [a]
merge [] [] = []
merge xs [] = xs
merge [] ys = ys
merge xss@(x:xs) yss@(y:ys) | x <= y = x : merge xs yss
| otherwise = y : merge xss ys | {
"domain": "codereview.stackexchange",
"id": 7439,
"tags": "haskell, reinventing-the-wheel, mergesort"
} |
How possible is liberal human flight using a flying superconductor bed? | Question: I saw a video by Michio Kaku, showcasing levitation of a small object with a superconductor cooled by liquid nitrogen. It's the same technology used by some trains.
Hypothetically, how possible is it to achieve free human flight by some way of:
Putting a thin, light (say 5kg) bed, under a human in a fly suit. The bed is coated with the magnetic material of one polarity.
This bed has some sort of continuous thrust mechanism, with the required thrust to keep only it afloat & to fly it around. (It's very light due to carbon nanotubes, high technology, etc.)
On top of this bed is a freely floating human in a suit made of the opposing superconductor (or however this works)
So essentially the bed is what's really flying around and has tiny thrust requirements because of its low weight, and the human is levitated at essentially minimal cost (due to the superconductor)
Answer: There's a lovely children's novel in which much the same idea is employed for building a "perpetumobile", something that cannot be possible for all we know about physics.
The easiest law stating that it can't work is Newton's third law: for every force (like the one that levitates the human) there must be an equal and opposite counter force, in this case upon the bed, pressing it downwards – because what the human's suit interacts with, magnetically, is the bed. In effect, this amounts to exactly the same as if it were an ordinary lightweight bed without any superconductors at all: the bed plus human weighs as much as a bed and a human together, inevitably. Making the bed extremely light still leaves the human's weight to be supported by the thrust mechanism.
Note that the superconductor-floating is in principle not even so different from simply lying on the bed via old-fashioned "contact repulsion": on a subatomic scale, there is no such thing as contact: atoms aren't solid balls with a hard edge, but combinations of multiple particles swirling around in various quantum states. Rather, there is repulsion between the particles; most importantly Pauli repulsion (that can only be observed on quantum scales), but also electrodynamic interactions, which is basically the same as with superconductors and magnets. | {
"domain": "physics.stackexchange",
"id": 9444,
"tags": "superconductivity, levitation"
} |
PreciseQMA = PreciseBQP gives PP = PSPACE | Question: $\text{PreciseBQP}$ is defined as $\text{BQP}$ with inverse exponentially close completeness and soundness bounds (for a better definition, see Section 3.1 here, in the paper by Gharibian et al). Similarly, $\text{PreciseQMA}$ is $\text{QMA}$ with inverse exponentially close completeness and soundness gaps (for more details, see Remark 5 here, in the paper by Fefferman and Lin). It is known that
\begin{equation}
\text{PreciseBQP} = \text{PP}, \\
\text{PreciseQMA} = \text{PSPACE}.
\end{equation}
For more details, see Figure 1 here (paper by Deshpande et al). However, it seems obvious to me that
\begin{equation}
\text{PreciseQMA} \subseteq \text{PreciseBQP}.
\end{equation}
This is because one can simply replace the quantum witness given by the prover by the maximally mixed state, similar to the trick in Theorem 3.6 here, in the paper by Marriott and Watrous. We take an inverse exponential hit in the completeness and soundness bounds, but since the completeness and soundness bounds are inverse exponentially close in $\text{PreciseBQP}$ to begin with, we do not care.
However, if this is true, then it implies $\text{PP} = \text{PSPACE}$.
What am I missing?
Answer: I guess a similar argument used in $\big[$ Marriott, Watrous $\big]$ [1] to prove QMA$_{log}$ $\subseteq$ BQP, and in $\big[$ Fefferman, Lin $\big]$ [2] to prove QMA$_{exp}$ $\subseteq$ PSPACE, does not carry over, since for L $\in$ QMA$_{exp}$ = PreciseQMA you get
$ x \in $ L $ \implies \text{tr}[Q_x] \geq c $
$ x \notin $ L $ \implies \text{tr}[Q_x] \leq 2^m s $
for $ c - s = \frac{1}{exp} $, and so using the totally mixed state $2^{-m} \mathbb{I}_m $ you accept with probability $ \text{tr}[2^{-m} Q_x] $, which does not yield any meaningful bound.
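To spell out why the bound is not meaningful (my own elaboration, not part of the original answer): with the maximally mixed witness the acceptance probability is $2^{-m}\,\text{tr}[Q_x]$, so
$$ x \in L \implies \Pr[\text{accept}] \geq 2^{-m} c, \qquad x \notin L \implies \Pr[\text{accept}] \leq s. $$
Since $c \leq s + 2^{-\text{poly}}$, the first bound is in general below $s$ itself, so no completeness/soundness gap survives at all, not even an inverse-exponential one.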
Both papers essentially apply an amplification procedure before the argument. | {
"domain": "quantumcomputing.stackexchange",
"id": 2072,
"tags": "quantum-algorithms, complexity-theory, computational-models"
} |
rosdep init and rosdep update? | Question:
I would like to know if rosdep init and rosdep update that we perform during ROS installation make any changes outside the ROS environment? like does it update or modify the debian packages or any other non-ROS components? Thank you.
Originally posted by sam26 on ROS Answers with karma: 231 on 2017-02-21
Post score: 2
Answer:
Well, rosdep init puts a file in /etc/ros/rosdep/, if this is "outside of the ROS environment" for you.
rosdep update fetches the new rosdep definition files and stores them somewhere too.
Other than that, rosdep does nothing as long as you don't do a rosdep install <PKG>. This will then install the required dependencies that are specified in the package. This can be other released ROS packages or third party packages as debian pkgs or from any other package manager, e.g. pip.
Originally posted by mgruhler with karma: 12390 on 2017-02-21
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by sam26 on 2017-02-21:
Thank you. But does rosdep update updates the package list of other package managers too or does it just update the available ROS package list?
Comment by gvdhoorn on 2017-02-21:
@sam26: it does not install any package or change anything besides the files that @mig already mentioned (ie: /etc/ros/rosdep) and a cache in your $HOME.
Comment by mgruhler on 2017-02-21:
Just the rosdep definitions. Those are "remappings" pointing to specific, distro-dependent versions. They are defined, e.g. here. Thus, the rosdep key ace points to ace for Arch and libace-dev for Debian...
Comment by sam26 on 2017-02-21:
Thank you!
Comment by sam26 on 2017-02-22:
@mig @gvdhoorn But when I look at the sources that the rosdep sources.list contains, it points to packages that are present in universe and multiverse as well (non-ROS packages). So rosdep update would update these non-ROS packages too, right (just like apt-get update)?
Comment by gvdhoorn on 2017-02-22:
rosdep only uses a 20-default.list in /etc/ros/rosdep/sources.list.d, and that does not contain any mention of universe or multiverse.
But in any case: rosdep itself will never install or upgrade anything, that is the pkg mgrs job, only after you accept what it wants to do. | {
"domain": "robotics.stackexchange",
"id": 27078,
"tags": "rosdep"
} |
*** stack smashing detected ***: /opt/ros/kinetic/lib/moveit_ros_move_group/move_group terminated | Question:
When I add a collision object to move_group and set a goal to plan, this error happens. What does it mean?
this error followed
[move_group-3] process has died [pid 3617, exit code -6, cmd /opt/ros/kinetic/lib/moveit_ros_move_group/move_group --debug __name:=move_group __log:=/home/jwx/.ros/log/07137956-8d97-11ea-b28a-c8d3ff1c1134/move_group-3.log].
log file: /home/jwx/.ros/log/07137956-8d97-11ea-b28a-c8d3ff1c1134/move_group-3*.log
There must be something wrong with my MoveIt collision-avoidance planning, because the move_group also crashed when I ran the official tutorial.
Originally posted by Jiawenxing on ROS Answers with karma: 86 on 2020-05-03
Post score: 0
Answer:
Finally, I figured out this issue. The reason I got this problem is that I had installed libccd separately, and it conflicted with the internal library.
So I just uninstalled that library.
Originally posted by Jiawenxing with karma: 86 on 2020-05-04
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 34893,
"tags": "ros-kinetic"
} |
What is photon spin and polarization? | Question: I have literally googled for hours on this subject. I am very curious on what spin really is and how polarization works for photons.
I’ve read that photons have a spin of 1 (no idea what that means or how an electron can have a 1/2 spin…).
I’ve read two completely different definitions of photon spin: 1 says that a photon will spin parallel or antiparallel with respect to its direction of propagation. 2 says that it spins counter-clockwise or clockwise (left or right) with respect to its propagation direction. Which of these two is true?
I’ve read that linear polarization is horizontal or vertical, but can’t it be at any angle? The electric field could oscillate at any angle, can’t it?
Also, I’ve read that linear polarization requires at least two photons with opposite spins, I don’t know if that’s true or not. Is circular polarization even real or is it made of a bunch of out of phase linearly-polarized photons that make it appear to spin on a graph?
In short: I’m not smart and I’m finding so many complicated and conflicting answers online. I’ve also had someone tell me that a photon doesn’t have an electric field and magnetic field, even though my research shows the complete opposite.
Please help me to understand. Please treat me like Homer Simpson or Peter Griffin :) Explain it as you would to him.
Thank you so much.
Answer:
I’ve read that photons have a spin of 1 (no idea what that means or how an electron can have a 1/2 spin…).
There is no easy way to explain this one. The spin expresses several things, but one is a limit on the maximum angular momentum that a plane wave can have. The limit can be zero (no angular momentum, spin-0), or half the energy divided by the wavelength (spin ½), or the energy divided by the wavelength (spin 1), etc., and for subtle reasons, intermediate ratios aren't possible. This property of waves is not quantum-mechanical, though people sometimes say it is. See this answer which is somewhat technical (undergraduate-physics-major level).
When you add quantum mechanics, you get particle-like behavior of the field, and each particle carries a certain amount of the angular momentum.
I’ve read two completely different definitions of photon spin: 1 says that a photon will spin parallel or antiparallel with respect to its direction of propagation. 2 says that it spins counter-clockwise or clockwise (left or right) with respect to its propagation direction. Which of these two is true?
The second one is better. The first one is saying that the axis of the rotation is parallel to the direction of propagation of the field. The angular momentum of the light is a vector pointing along the axis, and by an arbitrary convention, one direction of rotation (clockwise or counterclockwise) is assigned to each direction on the axis. So one circular polarization has an angular momentum pointing in the direction of propagation and for the other it points opposite the direction of propagation.
I’ve read that linear polarization is horizontal or vertical, but can’t it be at any angle?
Yes, it can be at any angle. There are also elliptical polarizations, which are intermediate between linear and circular, and can also be tilted at any angle.
What your sources may have been saying is that from sums of two polarization directions, such as horizontal and vertical, you can construct any polarization direction. If you add them in equal amounts with the same phase (so they reach zero at the same time), you get 45° diagonal polarization. If you add them out of phase by the right amount, you get circular polarization:
and if you add them out of phase by a different amount, you get an elliptical polarization.
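A small numeric sketch (my own, in Python) makes the phase story concrete: adding equal horizontal and vertical components in phase gives a field whose magnitude oscillates along a 45° line, while a quarter-cycle phase shift gives a field tip that traces a circle at constant magnitude:

```python
import math

def field(phase_shift, t):
    # Equal-amplitude horizontal and vertical components of the E field
    Ex = math.cos(t)
    Ey = math.cos(t - phase_shift)
    return Ex, Ey

ts = [2 * math.pi * k / 360 for k in range(360)]  # one full cycle

# In phase: the tip of E moves back and forth along a 45-degree line
mags_linear = [math.hypot(*field(0.0, t)) for t in ts]
# Quarter-wave (90 degree) shift: the tip of E traces a circle
mags_circular = [math.hypot(*field(math.pi / 2, t)) for t in ts]

print(max(mags_linear) - min(mags_linear))      # large: magnitude swings between 0 and sqrt(2)
print(max(mags_circular) - min(mags_circular))  # essentially zero: constant magnitude
```

The constant-magnitude case is exactly circular polarization; intermediate phase shifts give the elliptical cases mentioned above.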
Also, I’ve read that linear polarization requires at least two photons with opposite spins
That's wrong. A single photon can be linearly or elliptically polarized.
I’ve also had someone tell me that a photon doesn’t have an electric field and magnetic field
Photons are the electromagnetic field. They don't have a field in the way that, say, electrons do. Field lines don't end on photons. The photons just are the field. | {
"domain": "physics.stackexchange",
"id": 90911,
"tags": "particle-physics, photons, quantum-spin, polarization"
} |
Subset sum problem, given that a valid subset exists | Question: I have a problem at work. I need to find a subset of a set of positive integers that sums to a certain value. I know there is a subset but I need to find it. Is this new problem the same as the subset sum problem?
Answer: This problem is as hard as the primary subset sum problem.
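Conversely, note that when the integers are small the search version is easy in practice: a standard pseudo-polynomial dynamic program (sketched here in Python, my own illustration) reconstructs a subset whenever one exists. Its running time grows with the number of reachable sums, so it is still exponential in the input's bit length and does not conflict with the hardness argument.

```python
def find_subset(nums, target):
    # parent[s] = (previous_sum, index_of_element_used), or None for the empty sum
    parent = {0: None}
    for idx, x in enumerate(nums):
        # iterate over a snapshot so each element is used at most once
        for s in list(parent):
            if s + x not in parent:
                parent[s + x] = (s, idx)
    if target not in parent:
        return None  # no subset sums to target
    # walk the parent chain back to 0 to recover the subset
    subset, s = [], target
    while parent[s] is not None:
        prev, idx = parent[s]
        subset.append(nums[idx])
        s = prev
    return subset

print(find_subset([3, 34, 4, 12, 5, 2], 9))  # some subset summing to 9
print(find_subset([1, 2], 5))                # None
```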
Suppose you have a polynomial time algorithm (say the time with input size $n$ is bounded by a polynomial $f(n)$) for this problem, then for any instance of size $n$ of the primary subset sum problem, you can run this algorithm directly with at most $f(n)$ time. If the instance indeed has a solution, this algorithm will return a valid solution. Otherwise, this algorithm will return nothing, a meaningless string, or a wrong solution, etc. Anyway, the algorithm returns a valid solution if and only if the instance has a valid solution, and you can check in polynomial time whether this algorithm returns a valid solution. This results in a polynomial time algorithm for the primary subset sum problem. | {
"domain": "cs.stackexchange",
"id": 12645,
"tags": "algorithms, complexity-theory, subset-sum"
} |
If the language $A$ is decidable and the language $B$ is recognizable, then the language $A \cap B$ is recognizable? | Question: I am discussing with a friend the following question:
If the language $A$ is decidable and the language $B$ is recognizable, then the language $A \cap B$ is recognizable?
I believe it is.
My point is that if the machine that recognizes $B$ never halts on some string, then the bigger machine will never know what to answer.
The image makes my point.
But my friend says that $A \cap B$ is a subset of $A$, so it is decidable.
Answer: You are correct, without further information we can only say that $A\cap B$ is recognizable.
A simple example would be to let $B$ be any recognizable (but not decidable) language - the Halting Problem for example, if you want to be melodramatic, and let $A = \Sigma^{\ast}$. Then $A \cap B = B$, so clearly can't be decidable.
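The recognizer for $A \cap B$ is straightforward to describe: run the decider for $A$ first (it always halts), and only then hand the string to the recognizer for $B$; if the latter diverges, so does the whole procedure, which is permitted for a recognizer but fatal for a decider. A toy Python sketch with total stand-in functions (real recognizers may of course diverge, which is exactly the point):

```python
def recognize_intersection(decide_A, recognize_B, w):
    # decide_A always halts; recognize_B may in general loop forever
    # on non-members. If it loops, so do we -- allowed for a recognizer.
    if not decide_A(w):
        return False
    return recognize_B(w)

# Stand-ins: A = strings of even length (decidable),
# B = strings containing "ab" (modelled here by a total function).
decide_A = lambda w: len(w) % 2 == 0
recognize_B = lambda w: "ab" in w

print(recognize_intersection(decide_A, recognize_B, "ab"))   # True
print(recognize_intersection(decide_A, recognize_B, "ba"))   # False
print(recognize_intersection(decide_A, recognize_B, "abc"))  # False (odd length)
```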
It is of course possible that the intersection happens to be decidable, (for example if $A \cap B = \emptyset$ is a trivial example), but this may not be true in general. | {
"domain": "cs.stackexchange",
"id": 4468,
"tags": "formal-languages, computability"
} |
MVP Passive-View | Question: First of all, I'm really new to the MVP design pattern (Passive-View) and I've been trying to implement it in a WinForms application.
Before I'm going any further, I'd be very happy if you guys can give me feedback on my current work.
Model:
public class PersonModel
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Gender { get; set; }
public PersonModel(string firstName, string lastName, string gender)
{
FirstName = firstName;
LastName = lastName;
Gender = gender;
}
}
Presenter:
public class ManagePersonPresenter
{
private readonly IManagePersonFormView _view;
private readonly List<PersonModel> _models;
public ManagePersonPresenter(IManagePersonFormView view)
{
_view = view;
_models = new List<PersonModel>();
Initialize();
}
private void Initialize()
{
_view.AddButtonEnabled = false;
_view.InputGenderMale = true;
}
public void OnSaveButtonClicked()
{
var person = new PersonModel(_view.InputFirstName, _view.InputLastName, GetGender());
_view.AddButtonEnabled = false;
_view.InputFirstName = null;
_view.InputLastName = null;
_models.Add(person);
_view.ShowMessage("Successfully added person '" + person.FirstName + @"'.");
RefreshTable();
}
public void OnTextChanged()
{
if (_view.InputFirstName == string.Empty || _view.InputLastName == string.Empty)
{
_view.AddButtonEnabled = false;
}
else
{
_view.AddButtonEnabled = true;
}
}
public void RefreshTable()
{
var dt = new DataTable();
dt.Columns.Add("First name");
dt.Columns.Add("Last name");
dt.Columns.Add("Gender");
foreach (var person in _models)
{
dt.Rows.Add(person.FirstName, person.LastName, person.Gender);
}
_view.DtPersons = dt;
}
private string GetGender()
{
return _view.InputGenderMale ? "Male" : "Female";
}
}
View:
public interface IManagePersonFormView
{
string InputFirstName { get; set; }
string InputLastName { get; set; }
bool InputGenderMale { get; set; }
bool InputGenderFemale { get; set; }
DataTable DtPersons { set; }
bool AddButtonEnabled { get; set; }
void ShowMessage(string message);
}
The Form:
public partial class FrmManagePersons : Form, IManagePersonFormView
{
private readonly ManagePersonPresenter _presenter;
public FrmManagePersons()
{
InitializeComponent();
_presenter = new ManagePersonPresenter(this);
}
public string InputFirstName
{
get => txtBoxFirstName.Text;
set => txtBoxFirstName.Text = value;
}
public string InputLastName
{
get => txtBoxLastName.Text;
set => txtBoxLastName.Text = value;
}
public bool AddButtonEnabled
{
get => btnSavePerson.Enabled;
set => btnSavePerson.Enabled = value;
}
public bool InputGenderMale
{
get => rdBtnGenderMale.Checked;
set => rdBtnGenderMale.Checked = value;
}
public bool InputGenderFemale
{
get => rdBtnGenderFemale.Checked;
set => rdBtnGenderFemale.Checked = value;
}
public DataTable DtPersons
{
set => dtGridPersons.DataSource = value;
}
public void ShowMessage(string message)
{
MessageBox.Show(message, @"Information", MessageBoxButtons.OK, MessageBoxIcon.Information);
}
private void btnSavePerson_Click(object sender, EventArgs e)
{
_presenter.OnSaveButtonClicked();
}
private void txtBoxFirstName_TextChanged(object sender, EventArgs e)
{
_presenter.OnTextChanged();
}
private void txtBoxLastName_TextChanged(object sender, EventArgs e)
{
_presenter.OnTextChanged();
}
private void dtGridPersons_SelectionChanged(object sender, EventArgs e)
{
dtGridPersons.ClearSelection();
}
private void btnRefreshTable_Click(object sender, EventArgs e)
{
_presenter.RefreshTable();
}
}
Answer: Your view is calling methods on your presenter. The view shouldn’t know the presenter exists. Instead of calling the presenter directly, the view should raise events that the presenter reacts to. The difference seems trivial for what you have here, but it can make quite a large difference on a larger more complex system.
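A language-agnostic sketch of that decoupling (in Python for brevity; the names `Event`, `View`, and `Presenter` are illustrative, not taken from the original C# code) might look like this, with the view exposing events it fires without knowing who listens:

```python
class Event:
    """Minimal multicast event: the view exposes these, the presenter subscribes."""
    def __init__(self):
        self._handlers = []
    def subscribe(self, handler):
        self._handlers.append(handler)
    def fire(self, *args):
        for h in self._handlers:
            h(*args)

class View:
    def __init__(self):
        self.save_clicked = Event()  # the view knows nothing about who listens
        self.message = None
    def click_save(self):            # would be wired to the button in a real UI
        self.save_clicked.fire()

class Presenter:
    def __init__(self, view):
        self._view = view
        view.save_clicked.subscribe(self.on_save)  # presenter attaches itself
    def on_save(self):
        self._view.message = "saved"

view = View()
presenter = Presenter(view)
view.click_save()
print(view.message)  # "saved"
```

The view still raises `save_clicked` even if no presenter is attached, so it can be tested and reused independently.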
Oh, and I should mention a few other things.
Gender isn’t a binary option.
Names are far more complicated than First & Last. | {
"domain": "codereview.stackexchange",
"id": 30023,
"tags": "c#, design-patterns, .net, winforms, mvp"
} |
Is there a website that shows population sizes? | Question: I'm looking for a website that shows the population sizes of a species (doesn't matter which) as a function of time at a geographic coordinate. Is there a government website or other free database with such information?
Answer: You can access the Imperial College global population dynamics database. They will have time series data at specific locations. http://www3.imperial.ac.uk/cpb/databases/gpdd
There is a sister database as well that might be useful. http://lits.bio.ic.ac.uk:8080/litsproject/
These contain several hundred time series, and you can see a paper that used them here: http://onlinelibrary.wiley.com/doi/10.1111/j.1461-0248.2011.01702.x/abstract | {
"domain": "biology.stackexchange",
"id": 1398,
"tags": "ecology, population-biology"
} |
Create automata from non regular grammar | Question: I have two grammars:
L → ε | aLcLc
L → ε | aLcLc | LL
These two grammars are equivalent, but the first one is regular, so it produces a regular language and a finite state automaton. The second one is not regular, but it might still produce a regular language.
To prove it, I want to create two different automata: the first one should be a correct automaton, and if the second one can't be created then the language is not regular. Are all these statements correct?
If so, can someone help me build these two automata? Thank you!
Answer: Your first language isn't regular. Here is a simple way of showing this. Consider all words in your language of the form $a^*c^*$; if your language were regular, then so would be that language. However, the new language is generated by the grammar $L \to \epsilon \mid aLcc$ (this requires an argument), and so is $\{a^n (cc)^n : n \geq 0\}$, which is classically known to be irregular.
With more effort, you can show that your first language consists of all words $w$ over $\{a,c\}$ in which every nonempty proper prefix $x$ satisfies $2\#_a(x) > \#_c(x)$, and furthermore $2\#_a(w) = \#_c(w)$ (each $a$ is eventually matched by two $c$'s, as the intersection with $a^*c^*$ suggests).
An identical argument works for your second language. With more effort, you can show that it consists of all words $w$ in which every prefix $x$ satisfies $2\#_a(x) \geq \#_c(x)$, and furthermore $2\#_a(w) = \#_c(w)$.
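As a sanity check, the prefix-count characterization of the second language can be tested directly (my own quick script; the counts are oriented so that each $a$ is eventually matched by two $c$'s, consistent with the $\{a^n (cc)^n\}$ intersection above, and the example words are hand-derived from the grammar L → ε | aLcLc | LL):

```python
def in_L2(w):
    # every prefix must satisfy 2*#a >= #c, and overall 2*#a == #c
    a = c = 0
    for ch in w:
        if ch == 'a':
            a += 1
        elif ch == 'c':
            c += 1
        else:
            return False  # alphabet is {a, c}
        if c > 2 * a:
            return False  # prefix condition violated
    return c == 2 * a

print(in_L2(""))        # True  (L -> eps)
print(in_L2("acc"))     # True  (L -> aLcLc with both L -> eps)
print(in_L2("accacc"))  # True  (L -> LL)
print(in_L2("cca"))     # False (prefix "c" already violates the bound)
```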
Summarizing, both of your languages are irregular, and this can be shown by intersecting them with $a^*c^*$. | {
"domain": "cs.stackexchange",
"id": 14587,
"tags": "automata, regular-languages, finite-automata, formal-grammars, regular-expressions"
} |
Work done against an internal force | Question: The internal force can be due to electric field or gravitational field. Can "work done against gravity" be negative? Or do we not use the word "against" when this work comes out negative?
Answer: Let's say that gravity acts downward. Then work is done against it when a force displaces the object upwards. You say, however, that the work done against gravity is negative, which means that the force displaced the object downwards.
Imagine a man freely falling with a jetpack on his back accelerating him downwards. The work done by the jetpack against gravity will be negative. | {
"domain": "physics.stackexchange",
"id": 48635,
"tags": "work"
} |
Grouping array elements into batches of at most three | Question: const items = ['a', 'b', 'c', 'd']
const reduced = items.reduce((acc, cur, index) => {
const arrayIndex = Math.ceil((index + 1) / 3) - 1
if (acc[arrayIndex]) {
acc[arrayIndex].push(cur)
} else {
acc.push([cur])
}
return acc
}, [])
I'm taking an array of items, batching them into arrays of three at most and returning them as arrays of array. Here reduced yields the correct structure of [["a", "b", "c"], ["d"]]. How do I accomplish this without the if statement (which mutates the accumulated value directly)?
Answer: If you are not beholden to using reduce, you could chunk your array like this.
let chunked = [];
for (let i = 0; i < items.length; i = i + 3) {
chunked.push(items.slice(i, i + 3));
}
Just for kicks, I implemented a jsFiddle to understand the performance of Array.reduce() (as in the original post) compared to the for loop I suggested, and was somewhat surprised to find the for loop to be on the order of 50% slower than reduce.
When testing for larger arrays (100K items for example) performance became similar between the two solutions.
Regardless, it is probably very much in the territory of micro-optimization to consider one approach vs. the other based on performance testing alone. I would still prefer the simpler for loop code from an code management standpoint unless I knew I was going to be running this code at very high frequency in my application. However if the majority of the surrounding application was using more of a functional programming style, I would be happy to use reduce as well. | {
"domain": "codereview.stackexchange",
"id": 27845,
"tags": "javascript, functional-programming"
} |
Machine Language Simulator | Question: Simple Machine Translator (SML) is a simulator that executes code written in hexadecimal. It supports features such as read, write, add, subtract, and many more. My previous question concerning this minimal exercise can be found here for those who want to follow up.
I made a lot of changes, restructured, and moved things around, and would appreciate a review.
SML.h
#ifndef SML_SML_H_
#define SML_SML_H_
#include "evaluator.h"
#include <string>
constexpr size_word register_max_size = 6;
enum REGISTERS
{
ACCUMULATOR = 0,
INSTRUCTION_COUNTER = 1,
TEMPORARY_COUNTER = 2,
INSTRUCTION_REGISTER = 3,
OPERATION_CODE = 4,
OPERAND = 5
};
class SML
{
friend void swap( SML &lhs, SML &rhs );
friend class Evaluator;
public:
SML() = default;
SML( const int memory_size, const int word_lower_lim, const int word_upper_lim );
SML( const SML &s );
const SML& operator=( const SML s );
SML( SML &&s );
~SML();
void display_welcome_message() const;
void load_program();
void execute();
private:
size_word registers[ register_max_size ];
std::string temp_str; // holds the string before it is written into the memory
bool debug;
static const size_word read_ = 0xA; // Read a word(int) from the keyboard into a specific location in memory
static const size_word write_ = 0xB; // Write a word(int) from a specific location in memory to the screen
static const size_word read_str_ = 0xC; // Read a word(string) from the keyboard into a specific location in memory
static const size_word write_str_ = 0xD; // Write a word(string) from a specific location in memory to the screen
static const size_word load_ = 0x14; // Load a word from a specific location in memory to the accumulator
static const size_word store_ = 0x15; // Store a word from the accumulator into a specific location in memory
static const size_word add_ = 0x1E; /* Add a word from a specific location in memory to the word in the accumulator; store the
result in the accumulator */
static const size_word subtract_ = 0x1F;
static const size_word multiply_ = 0x20;
static const size_word divide_ = 0x21;
static const size_word modulo_ = 0x22;
static const size_word branch_ = 0x28; // Branch to a specific location in the memory
static const size_word branchneg_ = 0x29; // Branch if accumulator is negative
static const size_word branchzero_ = 0x2A; // Branch if accumulator is zero
static const size_word halt_ = 0x2B; // Halt the program when a task is completed
static const size_word newline_ = 0x32; // Insert a new line
static const size_word end_ = -0x1869F; // End the program execution
static const size_word sml_debug_ = 0x2C; // SML debug ( 1 to turn on, 0 to turn off )
size_word word_lower_limit; /* A word should not exceed */
size_word word_upper_limit; /* this limits */
size_word memory_size;
size_word *memory = nullptr;
void set_registers();
void memory_dump() const;
};
#endif
SML.cpp
#include "sml.h"
#include "evaluator.h"
#include <iostream>
#include <iomanip>
#include <algorithm>
SML::SML( const int mem_size, const int word_lower_lim, const int word_upper_lim )
: debug( false ), word_lower_limit( word_lower_lim ),
word_upper_limit( word_upper_lim ), memory_size( mem_size )
{
set_registers();
memory = new size_word[ memory_size ];
}
void SML::set_registers()
{
registers[ static_cast<unsigned>( ACCUMULATOR ) ] = 0;
registers[ static_cast<unsigned>( INSTRUCTION_COUNTER ) ] = 0;
registers[ static_cast<unsigned>( TEMPORARY_COUNTER ) ] = 0;
registers[ static_cast<unsigned>( INSTRUCTION_REGISTER ) ] = 0;
registers[ static_cast<unsigned>( OPERATION_CODE ) ] = 0;
registers[ static_cast<unsigned>( OPERAND ) ] = 0;
}
SML::SML( const SML &s )
{
temp_str = s.temp_str;
debug = s.debug;
word_lower_limit = s.word_lower_limit;
word_upper_limit = s.word_upper_limit;
std::copy( std::cbegin( s.registers ), std::cend( s.registers ), registers );
memory_size = s.memory_size;
memory = new size_word[ memory_size ];
std::copy( s.memory, s.memory + s.memory_size, memory );
}
SML::SML( SML &&s )
{
swap( *this, s );
memory = new size_word[ memory_size ];
std::move( s.memory, s.memory + s.memory_size, memory );
}
const SML& SML::operator=( SML s )
{
swap( *this, s );
memory = new size_word[ memory_size ];
std::move( s.memory, s.memory + s.memory_size, memory );
return *this;
}
void swap( SML &lhs, SML &rhs )
{
using std::swap;
swap( lhs.temp_str, rhs.temp_str );
swap( lhs.debug, rhs.debug );
swap( lhs.word_lower_limit, rhs.word_lower_limit );
swap( lhs.word_upper_limit, rhs.word_upper_limit );
swap( lhs.memory_size, rhs.memory_size );
swap( lhs.registers, rhs.registers );
}
void SML::display_welcome_message() const
{
std::cout << "***" << " WELCOME TO SIMPLETRON! " << "***\n\n";
std::cout << std::setw( 5 ) << std::left << "***"
<< "Please enter your program one instruction"
<< std::setw( 5 ) << std::right << "***\n";
std::cout << std::setw( 5 ) << std::left << "***"
<< "(or data word) at a time. I will type the"
<< std::setw( 5 ) << std::right << "***\n";
std::cout << std::setw( 5 ) << std::left << "***"
<< "location number and a question mark (?)."
<< std::setw( 6 ) << std::right << "***\n";
std::cout << std::setw( 5 ) << std::left << "***"
<< "You then type the word for that location"
<< std::setw( 6 ) << std::right << "***\n";
std::cout << std::setw( 5 ) << std::left << "***"
<< "Type the sentinel -0x1869F to stop entering"
<< std::setw( 5 ) << std::right << "***\n";
std::cout << std::setw( 5 ) << std::left << "***"
<< "your program"
<< std::setw( 5 ) << std::right << "***";
std::cout << "\n\n" << std::flush;
}
void SML::load_program()
{
size_word &ins_cnt = registers[ static_cast<unsigned>( INSTRUCTION_COUNTER ) ];
size_word temp;
while( ins_cnt != memory_size )
{
std::cout << std::setw( 2 ) << std::setfill( '0' )
<< ins_cnt << " ? ";
std::cin >> std::hex >> temp;
if( temp == end_ ) {
break;
}
if( temp >= word_lower_limit && temp < word_upper_limit )
memory[ ins_cnt++ ] = temp;
else
continue;
}
ins_cnt = 0;
std::cout << std::setfill( ' ' );
std::cout << std::setw( 5 ) << std::left << "***"
<< "Program loaded into memory"
<< std::setw( 5 ) << std::right << "***\n";
std::cout << std::setw( 5 ) << std::left << "***"
<< "Program execution starts..."
<< std::setw( 5 ) << std::right << "***\n";
execute();
std::cout << std::endl;
}
void SML::execute()
{
int divisor;
size_word &ins_cnt = registers[ static_cast<unsigned>( INSTRUCTION_COUNTER ) ];
size_word &ins_reg = registers[ static_cast<unsigned>( INSTRUCTION_REGISTER ) ];
while( memory[ ins_cnt ] != 0 )
{
ins_reg = memory[ ins_cnt++ ];
if( ins_reg < 1000 ) divisor = 0x10;
else if( ins_reg >= 1000 && ins_reg < 10000 ) divisor = 0x100;
else if( ins_reg >= 10000 && ins_reg < 100000 ) divisor = 0x1000;
Evaluator eval( *this ); // create an instance of evaluator
try
{
if( eval.evaluate( *this, ins_reg, divisor ) == 0 )
break;
}
catch ( std::invalid_argument &e )
{
std::cout << e.what() << "\n";
}
if( debug )
memory_dump();
}
}
void SML::memory_dump() const
{
std::cout << "\nREGISTERS:\n";
std::cout << std::setw( 25 ) << std::left << std::setfill( ' ' ) << "accumulator" << std::showpos
<< std::setw( 5 ) << std::setfill( '0' ) << std::internal << registers[ 0 ] << '\n';
std::cout << std::setw( 28 ) << std::left << std::setfill( ' ' )
<< "instruction counter" << std::noshowpos << std::setfill( '0' )
<< std::right << std::setw( 2 ) << registers[ 1 ] << '\n';
std::cout << std::setw( 25 ) << std::left << std::setfill( ' ' )
<< "instruction register" << std::showpos << std::setw( 5 ) << std::setfill( '0' )
<< std::internal << registers[ 3 ] << '\n';
std::cout << std::setw( 28 ) << std::left << std::setfill( ' ' )
<< "operation code" << std::noshowpos << std::setfill( '0' )
<< std::right << std::setw( 2 ) << registers[ 4 ] << '\n';
std::cout << std::setw( 28 ) << std::left << std::setfill( ' ' )
<< "operand" << std::noshowpos << std::setfill( '0' )
<< std::right << std::setw( 2 ) << registers[ 5 ] << '\n';
std::cout << "\n\nMEMORY:\n";
std::cout << " ";
for( int i = 0; i != 10; ++i )
std::cout << std::setw( 6 ) << std::setfill( ' ') << std::right << i;
for( size_word i = 0; i != memory_size; ++i )
{
if( i % 10 == 0 )
std::cout << "\n" << std::setw( 3 ) << std::setfill( ' ' ) << i << " ";
std::cout << std::setw( 5 ) << std::setfill( '0' ) << std::showpos << std::internal << memory[ i ] << " ";
}
std::cout << std::endl;
}
SML::~SML()
{
// resets all the registers
set_registers();
// free the memory
delete [] memory;
}
Evaluator.h
#ifndef SML_EVALUATOR_H_
#define SML_EVALUATOR_H_
#include <iostream>
#include <stdint.h>
typedef int32_t size_word;
constexpr size_word instruction_max_sixe = 70;
class SML;
class Evaluator
{
public:
Evaluator() = default;
Evaluator( const SML & );
int evaluate( SML &s, const int ins_reg, const int divisor );
private:
void read( SML &s, const int opr );
void write( SML &s, const int opr );
void read_str( SML &s, const int opr );
void write_str( SML &s, const int opr );
void load( SML &s, const int opr );
void store( SML &s, const int opr );
void add( SML &s, const int opr );
void subtract( SML &s, const int opr );
void multiply( SML &s, const int opr );
void divide( SML &s, const int opr );
void modulo( SML &s, const int opr );
void branch( SML &s, const int opr );
void branchneg( SML &s, const int opr );
void branchzero( SML &s, const int opr );
void newline( SML &s, const int opr );
void smldebug( SML &s, const int opr );
bool division_by_zero( SML &s, const int opr );
void (Evaluator::*instruction_set[ instruction_max_sixe ])( SML &, int );
};
#endif
Evaluator.cpp
#include "evaluator.h"
#include "sml.h"
Evaluator::Evaluator( const SML &s )
{
instruction_set[ s.read_ ] = &Evaluator::read;
instruction_set[ s.write_ ] = &Evaluator::write;
instruction_set[ s.read_str_ ] = &Evaluator::read_str;
instruction_set[ s.write_str_ ] = &Evaluator::write_str;
instruction_set[ s.load_ ] = &Evaluator::load;
instruction_set[ s.store_ ] = &Evaluator::store;
instruction_set[ s.add_ ] = &Evaluator::add;
instruction_set[ s.subtract_ ] = &Evaluator::subtract;
instruction_set[ s.multiply_ ] = &Evaluator::multiply;
instruction_set[ s.divide_ ] = &Evaluator::divide;
instruction_set[ s.modulo_ ] = &Evaluator::modulo;
instruction_set[ s.branch_ ] = &Evaluator::branch;
instruction_set[ s.branchneg_ ] = &Evaluator::branchneg;
instruction_set[ s.branchzero_ ] = &Evaluator::branchzero;
instruction_set[ s.newline_ ] = &Evaluator::newline;
instruction_set[ s.sml_debug_ ] = &Evaluator::smldebug;
}
int Evaluator::evaluate( SML &s, const int ins_reg, const int divisor)
{
size_word &opr_code = s.registers[ static_cast<unsigned>( OPERATION_CODE ) ];
size_word &opr = s.registers[ static_cast<unsigned>( OPERAND ) ];
opr_code = ins_reg / divisor;
opr = ins_reg % divisor;
if( opr_code == s.halt_ )
return 0;
else
(this->*(instruction_set[ opr_code ]))( s, opr );
return 1;
}
void Evaluator::read( SML &s, const int opr )
{
std::cin >> s.memory[ opr ];
}
void Evaluator::write( SML &s, const int opr )
{
std::cout << s.memory[ opr ];
}
void Evaluator::read_str( SML &s, const int opr )
{
std::cin >> s.temp_str;
s.memory[ opr ] = s.temp_str.size();
for( std::string::size_type i = 1; i != s.temp_str.size() + 1; ++i )
s.memory[ opr + i ] = int( s.temp_str[ i - 1 ] );
}
void Evaluator::write_str( SML &s, const int opr )
{
for( int i = 0; i != s.memory[ opr ] + 1; ++i )
std::cout << char( s.memory[ opr + i ]);
}
void Evaluator::load( SML &s, const int opr )
{
size_word &accumulator = s.registers[ static_cast<unsigned>( ACCUMULATOR ) ];
accumulator = s.memory[ opr ];
}
void Evaluator::store( SML &s, const int opr )
{
size_word &accumulator = s.registers[ static_cast<unsigned>( ACCUMULATOR ) ];
s.memory[ opr ] = accumulator;
}
void Evaluator::add( SML &s, const int opr )
{
size_word &accumulator = s.registers[ static_cast<unsigned>( ACCUMULATOR ) ];
accumulator += s.memory[ opr ];
}
void Evaluator::subtract( SML &s, const int opr )
{
size_word &accumulator = s.registers[ static_cast<unsigned>( ACCUMULATOR ) ];
accumulator -= s.memory[ opr ];
}
void Evaluator::multiply( SML &s, const int opr )
{
size_word &accumulator = s.registers[ static_cast<unsigned>( ACCUMULATOR ) ];
accumulator *= s.memory[ opr ];
}
void Evaluator::divide( SML &s, const int opr )
{
if( division_by_zero( s, opr ) )
throw std::invalid_argument( "Division by zero: Program terminated abnormally." );
size_word &accumulator = s.registers[ static_cast<unsigned>( ACCUMULATOR ) ];
accumulator /= s.memory[ opr ];
}
void Evaluator::modulo( SML &s, const int opr )
{
if( division_by_zero( s, opr ) )
throw std::invalid_argument( "Division by zero: Program terminated abnormally." );
size_word &accumulator = s.registers[ static_cast<unsigned>( ACCUMULATOR ) ];
accumulator %= s.memory[ opr ];
}
bool Evaluator::division_by_zero( SML &s, const int opr )
{
return ( s.memory[ opr ] == 0 );
}
void Evaluator::branchneg( SML &s, const int opr )
{
size_word &accumulator = s.registers[ static_cast<unsigned>( ACCUMULATOR ) ];
if( accumulator < 0 )
branch( s, opr );
}
void Evaluator::branchzero( SML &s, const int opr )
{
size_word &accumulator = s.registers[ static_cast<unsigned>( ACCUMULATOR ) ];
if( accumulator == 0 )
branch( s, opr );
}
void Evaluator::branch( SML &s, const int opr )
{
size_word &ins_cnt = s.registers[ static_cast<unsigned>( INSTRUCTION_COUNTER ) ];
s.registers[ static_cast<unsigned>( TEMPORARY_COUNTER ) ] = ins_cnt;
ins_cnt = opr;
s.execute();
ins_cnt = s.registers[ static_cast<unsigned>( TEMPORARY_COUNTER ) ];
}
void Evaluator::newline( SML &s, const int opr )
{
std::cout << '\n' << std::flush;
}
void Evaluator::smldebug( SML &s, const int opr )
{
if ( opr == 1 ) s.debug = true;
else if ( opr == 0 ) s.debug = false;
}
main.cpp
#include "sml.h"
int main()
{
SML sml(1000, -999999, 999999 );
sml.display_welcome_message();
sml.load_program();
}
Below are instructions written to test the machine
Tests
0xA60 // read a value and store it at address 60 (index 96 decimal in the array)
0xA61 // read another value and store in address 61
0x1460 // load the value stored at address 60 into the accumulator
0x1e61 // add the value stored in address 61 to the accumulator
0x320 // print a newline
0x1562 // store the value in the accumulator at address 62
0xb62 // write the value in address 62 to the screen
0x320 // print a newline
0xc67 // read a string and store its size at address 67; the characters are stored from 68 onward
0xd67 // write the characters to screen
0x2c1 // turn on debug
-0x1869f // sentinel: end program entry and start execution
Answer: Just a few things
Use Raw string literals
std::cout << "***" << " WELCOME TO SIMPLETRON! " << "***\n\n";
std::cout << std::setw(5) << std::left << "***"
<< "Please enter your program one instruction"
<< std::setw(5) << std::right << "***\n";
std::cout << std::setw(5) << std::left << "***"
<< "(or data word) at a time. I will type the"
<< std::setw(5) << std::right << "***\n";
std::cout << std::setw(5) << std::left << "***"
<< "location number and a question mark (?)."
<< std::setw(6) << std::right << "***\n";
std::cout << std::setw(5) << std::left << "***"
<< "You then type the word for that location"
<< std::setw(6) << std::right << "***\n";
std::cout << std::setw(5) << std::left << "***"
<< "Type the sentinel -0x1869F to stop entering"
<< std::setw(5) << std::right << "***\n";
std::cout << std::setw(5) << std::left << "***"
<< "your program"
<< std::setw(5) << std::right << "***";
std::cout << "\n\n" << std::flush;
This can get extremely difficult to maintain. You can simply use raw string literals to make your life easier:
const char* welcome_msg = R"""(
*** WELCOME TO SIMPLETRON! ***
*** Please enter your program one instruction ***
*** (or data word) at a time. I will type the ***
*** location number and a question mark (?). ***
*** You then type the word for that location ***
*** Type the sentinel -0x1869F to stop entering ***
*** your program ***
)""";
std::cout << welcome_msg;
Simplify
registers[static_cast<unsigned>(ACCUMULATOR)] = 0;
registers[static_cast<unsigned>(INSTRUCTION_COUNTER)] = 0;
registers[static_cast<unsigned>(TEMPORARY_COUNTER)] = 0;
registers[static_cast<unsigned>(INSTRUCTION_REGISTER)] = 0;
registers[static_cast<unsigned>(OPERATION_CODE)] = 0;
registers[static_cast<unsigned>(OPERAND)] = 0;
Instead of casting it to unsigned every time you use something from the enum, why not declare it unsigned first?
enum REGISTERS : unsigned
{
ACCUMULATOR = 0,
INSTRUCTION_COUNTER = 1,
TEMPORARY_COUNTER = 2,
INSTRUCTION_REGISTER = 3,
OPERATION_CODE = 4,
OPERAND = 5
};
Also, you don't have to specify the values here, since enumerators are numbered consecutively from 0 by default. That means this is the same as
enum REGISTERS : unsigned
{
ACCUMULATOR,
INSTRUCTION_COUNTER ,
TEMPORARY_COUNTER,
INSTRUCTION_REGISTER,
OPERATION_CODE,
OPERAND
};
Use a loop
registers[ACCUMULATOR] = 0;
registers[INSTRUCTION_COUNTER] = 0;
registers[TEMPORARY_COUNTER] = 0;
registers[INSTRUCTION_REGISTER] = 0;
registers[OPERATION_CODE] = 0;
registers[OPERAND] = 0;
Take advantage of the fact that these are all numbered consecutively from 0 to 5.
for (int i = ACCUMULATOR; i <= OPERAND; i++)
registers[i] = 0;
Comparing size_t and int32_t
int32_t has a fixed width of 32 bits.
size_t is either 32 or 64 bits, depending on the platform.
Mixing the two freely can sometimes be dangerous.
s.memory[opr] = s.temp_str.size();
// int32_t = size_t: a potentially narrowing assignment
If the size_t value exceeds the maximum of int32_t (highly unlikely, but possible), you get overflow! What I like to do is keep a custom macro like _DEBUG_, and then use #ifdef to check for this.
#ifdef _DEBUG_
if ( s.temp_str.size() > INT32_MAX ) // handle it here
#endif // _DEBUG_ | {
"domain": "codereview.stackexchange",
"id": 39930,
"tags": "c++, object-oriented"
} |
Unable to communicate to some of ROS services | Question:
I am trying to call ros services but I am unable to call some rosservices...
It works for /rosout/get_loggers but doesn't work with gazebo related services..
rosservice list shows all services.
(I intentionally added spaces between rosrpc and localhost because the system thinks I am posting some URL links and I don't have enough karma)
$ rosservice uri /rosout/get_loggers
rosrpc: // localhost:59423
$ rosservice type /rosout/get_loggers
roscpp/GetLoggers
$ rosservice uri /gazebo/unpause_physics
rosrpc : // localhost:41557
$ rosservice type /gazebo/unpause_physics
ERROR: Unable to communicate with service [/gazebo/unpause_physics], address [rosrpc : // localhost:41557]
$ rosservice call /gazebo/unpause_physics
ERROR: Unable to communicate with service [/gazebo/unpause_physics], address [rosrpc : // localhost:41557]
I read about network setup but still not able to figure out what is going on..
Please help.
$ echo $ROS_MASTER_URI
http : // localhost:11311
$ echo $ROS_HOSTNAME
localhost
Originally posted by jys on ROS Answers with karma: 212 on 2012-11-15
Post score: 0
Original comments
Comment by jys on 2012-11-16:
rosservice list shows all. Fixing /etc/hosts didn't help.
Comment by Lorenz on 2012-11-18:
Maybe a stupid question but is gazebo running? What's the output of rosservice info /gazebo/unpause_physics?
Comment by jys on 2012-11-19:
I get same error.. Gazebo is running.. It is in "paused" state and GUI botton for "resume" doesn't respond..
Node: /gazebo
URI: rosrpc://localhost:36586
ERROR: Unable to communicate with service [/gazebo/unpause_physics], address [rosrpc://localhost:36586]
Comment by Ruben Alves on 2022-02-03:
I was using Husarnet to connect to a remote robot (ROSBot) and could list the services but not call them. In the end, the problem is that I had ROS_IP=127.0.0.1 on my computer and on the robot. After unsetting ROS_IP in my pc and in the robot, it connected fine. Of course, in order for it to work with Husarnet, I also had to set
ROS_IPV6=on
ROS_MASTER_URI=http://master:11311
ROS_HOSTNAME=master # in the robot
And in my computer, I had to set the variables in the same way, but a different ROS_HOSTNAME, of course: ROS_HOSTNAME=myComputerNameHere
Answer:
Does rosservice list show the services?
Also I had some trouble with some entries in the /etc/hosts file since the entries there usually overwrite the information from the DNS/DHCP Server. If you have some entries there, you can make a backup of your /etc/hosts file and delete every entry except the following:
127.0.0.1 localhost
At least this was the cause for a similar problem in a network with non-static IP addresses....
Originally posted by michikarg with karma: 2108 on 2012-11-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by jys on 2012-11-19:
rosservice list shows all. Fixing /etc/hosts didn't help.
Comment by fbelmonteklein on 2021-02-18:
So, it turns out that ROS registers host names using the environment variables of the terminal in which the roslaunch/rosrun command was run. I managed to solve it by executing export ROS_HOSTNAME=xxx.Home, where xxx is the name of the PC that was providing the service. This name (with the Home suffix) was provided by the router, so there was no need to add it to /etc/hosts.
That said, changing etc/hosts is a more permanent and possibly better solution. | {
"domain": "robotics.stackexchange",
"id": 11766,
"tags": "ros, gazebo, network, services, pr2"
} |
How does vasoconstriction retain heat? | Question: I am reading on vasoconstriction on wikipedia. The article states that:
When blood vessels constrict, the flow of blood is restricted or decreased, thus retaining body heat or increasing vascular resistance.
Why/How exactly is the heat retained when blood flow is decreased? Isn't it the case that when things slow down they generally get colder?
Answer: Think of the body as a heat-sink. We can let off heat by using our fluid exchanges – namely, blood and sweat. When we exercise, sweat cools us off through evaporative cooling. When we’re not exercising or in warm weather, we’re generally not sweating, but we’re still exchanging heat with our surroundings – after all, the metabolism is still active, we’re still “burning energy,” and our skin never reaches air temperature nor does our core cool to the temperature of our skin (so some kind of heat exchange has to be happening). How can that be, when there’s no obvious mechanism to carry away our heat as in the case of sweating?
The answer to this lies at the heart of your question. Human thermoregulation is largely thanks to our blood flow and vasodilation:
“Skin blood flow in adult human thermoregulation: how it works, when it does not, and why.” Charkoudian. N. Mayo Clin Proc. 2003.
Cutaneous sympathetic vasoconstrictor and vasodilator systems also participate in baroreflex control of blood pressure; this is particularly important during heat stress, when such a large percentage of cardiac output is directed to the skin. Local thermal control of cutaneous blood vessels also contributes importantly—local warming of the skin can cause maximal vasodilation in healthy humans and includes roles for both local sensory nerves and nitric oxide. Local cooling of the skin can decrease skin blood flow to minimal levels.
In essence, your core “wants” to stay warm. If the air around your skin is comparatively warmer, cutaneous blood vessels will dilate to accommodate a higher flowrate, allowing more blood to engage in heat transfer. The opposite is true for air that is comparatively colder, and the relationships between cutaneous blood flow, sweating, and local skin cooling can be tested experimentally:
“Skin blood flow and local temperature independently modify sweat rate during passive heat stress in humans.” Wingo et al. J Appl Physiol. 2010.
In protocol I, two sites received norepinephrine to reduce skin blood flow, while two sites received Ringer solution (control). All sites were maintained at 34°C. In protocol II, all sites received 28 mM sodium nitroprusside to equalize skin blood flow between sites before local cooling to 20°C (2 sites) or maintenance at 34°C (2 sites). In both protocols, individuals were then passively heated to increase core temperature ∼1°C. Both decreased skin blood flow and decreased local temperature attenuated the slope of the SR to mean body temperature relationship (2.0 ± 1.2 vs. 1.0 ± 0.7 mg·cm−2·min−1·°C−1 for the effect of decreased skin blood flow, P = 0.01; 1.2 ± 0.9 vs. 0.07 ± 0.05 mg·cm−2·min−1·°C−1 for the effect of decreased local temperature, P = 0.02). Furthermore, local cooling delayed the onset of sweating (mean body temperature of 37.5 ± 0.4 vs. 37.6 ± 0.4°C, P = 0.03). These data demonstrate that local cooling attenuates sweating by independent effects of decreased skin blood flow and decreased local skin temperature. | {
"domain": "biology.stackexchange",
"id": 9054,
"tags": "cardiology, blood-circulation"
} |
How is potential difference defined across a resistor with time varying current | Question: From this discussion How can we define a potential for a moving charge? we know that we cannot define a scalar potential (as in electrostatics) in the case of moving charges as described by G. Smith:
You cannot describe the electromagnetic field of a moving charge as the gradient of a potential. If you could, the curl of the electric field would be zero, which would imply that the time derivative of the magnetic field would be zero. This is clearly false
How then can we define a potential difference across a resistor with time varying current?
Answer:
How then can we define a potential difference across a resistor with time varying current?
Basically we just assume that we can.
Circuit theory is an approximation to Maxwell’s equations which relies on three assumptions:
the distances are small enough and the time scales large enough that we can treat electromagnetic effects as instantaneous rather than propagating at c.
there is no net charge on any component.
there is no magnetic flux outside of any component.
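In symbols, this is the standard potential decomposition (a sketch, not part of the original answer): the general field is

$$\vec E = -\nabla\varphi - \frac{\partial \vec A}{\partial t}$$

and under the three assumptions the $\partial\vec A/\partial t$ term is negligible, leaving $\vec E \approx -\nabla\varphi$. The potential difference across the resistor, $V = \int_a^b \vec E\cdot d\vec l$, is then (approximately) path-independent and therefore well defined.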
With those three assumptions the vector potential from Maxwell’s equations becomes zero and only the scalar potential remains. And also the Coulomb gauge can be used (up to a simple additive constant) for that potential. While the statement by G. Smith is absolutely correct, the errors introduced by simply using the Coulomb potential anyway go to zero as deviations from these assumptions go to zero. Thus circuit theory is a well defined approximation to Maxwell’s equations and in that approximation the potential difference across a resistor is well defined. | {
"domain": "physics.stackexchange",
"id": 72537,
"tags": "electromagnetism, electric-circuits, electric-current, vector-fields"
} |
ROS Answers SE migration: Odometer Code | Question:
hello,
I'm implementing Odometer in the navigation stack using this:
http://wiki.ros.org/navigation/Tutorials/RobotSetup/Odom
What I understood is that in order to make this code really work on my robot then i'll have to change some stuff.
for example:
16 double vx = 0.1;
17 double vy = -0.1;
18 double vth = 0.1;
I changed those values manually because I don't have a driver. I got help from this:
http://www.geology.smu.edu/~dpa-www/robots/doc/odometry.txt
However, my question is what else should i change?
thank you,
Originally posted by maha on ROS Answers with karma: 27 on 2015-04-02
Post score: 1
Answer:
This is just dummy code showing how to fill such messages. What you have to change for your robot depends on your robot's data. There is no general answer for that.
Originally posted by dornhege with karma: 31395 on 2015-04-02
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by maha on 2015-04-07:
Can you explain further or provide a source explaining the code? I don't understand what's on the website; it is not enough.
Comment by dornhege on 2015-04-08:
Odometry only makes sense, when it comes to a robot. So, at least the velocity values or pose estimates needs to be robot data. Unless you have that it doesn't make sense to publish odometry. If you want more detailed information, you'll have to provide what data you can get out of your robot.
Comment by maha on 2015-04-08:
I can get anything out of my robot. I already provided the velocities; how can I get the pose estimates?
Comment by dornhege on 2015-04-08:
In this case the code should already work. You only need to continuously update these values. It integrates velocities to get pose estimates in ll 31-34. If you have direct pose estimates, e.g., from encoders, your code can also use these. | {
"domain": "robotics.stackexchange",
"id": 21335,
"tags": "ros"
} |
If we are in the car, we see the trees on the street are moving backwards. Do the trees have kinetic energy? | Question: I copied this question from the internet. I thought about it but I can't make out the meaning of the statement.
Answer: Yes, the trees have kinetic energy in the car's frame of reference. Kinetic energy is a frame-dependent quantity (i.e., it is not Lorentz invariant). For example, a tree of mass $m$ at rest in the ground frame has $K = 0$ there, but in the car's frame it moves backwards at the car's speed $v$, so $K = \frac{1}{2}mv^2$. | {
"domain": "physics.stackexchange",
"id": 38419,
"tags": "newtonian-mechanics, energy, kinematics, inertial-frames, observers"
} |
Parsing values from string into struct using match in Julia | Question: My goal
I am parsing from a string which contains token:value pairs into a type.
Example:
mutable struct Foo
bar
baz
qux
end
function parse(str::AbstractString)::Foo
f = Foo()
bar_pattern = r"bar:(\w*)"
baz_pattern = r"baz:(\d*)"
qux_pattern = r"qux:(\w*)"
f.bar = match(bar_pattern, str)[1]
f.baz = match(baz_pattern, str)[1]
f.qux = match(qux_pattern, str)[1]
return f
end
Problem
This works but only if the patterns are actually present. When match can't find the pattern, it returns nothing, which of course can't be indexed [1] or accessed with captures. The result is an error.
I want the fields of the returned struct to either get the matched result (the "capture") directly, or remain empty or set to nothing, should the match be unable to find the pattern.
I could do something like this:
function safeparse(str::AbstractString)::Foo
f = Foo()
bar_pattern = r"bar:(\w*)"
baz_pattern = r"baz:(\d*)"
qux_pattern = r"qux:(\w*)"
if !isnothing(match(bar_pattern, str))
f.bar = match(bar_pattern, str)[1]
end
if !isnothing(match(baz_pattern, str))
f.baz = match(baz_pattern, str)[1]
end
if !isnothing(match(qux_pattern, str))
f.qux = match(qux_pattern, str)[1]
end
return f
end
But that approach seems ugly and becomes verbose very quick if more/new patterns are introduced.
Question
Is there a nicer but readable way to achieve this?
Preferably without combining/changing the regex patterns or too much regex magic, however I am
open to that route too if it is the only nice (less verbose) way. I am of course also open to general tips.
To keep things simple, just assume that the patterns my example is looking for only appear 0 or 1 times. However if the only way to make this nicer involves writing another function like safematch which does the check for nothing and returns the captured value or nothing, I would want that to also work with multiple matches somehow and stay a bit more general.
Answer: The easiest thing I can think of is to use eachmatch -- when there is no match at all, it returns an empty iterator:
julia> collect(eachmatch(r"(bar:(\w*))", "bla bar:werfsd 1223 bar:skdf"))
2-element Array{RegexMatch,1}:
RegexMatch("bar:werfsd", 1="bar:werfsd", 2="werfsd")
RegexMatch("bar:skdf", 1="bar:skdf", 2="skdf")
julia> collect(eachmatch(r"(bar:(\w*))", "bla "))
RegexMatch[]
Then you can simply combine those into a dictionary, for example:
julia> Dict(re => [m[1] for m in eachmatch(re, s)] for re in patterns)
Dict{Regex,Array{T,1} where T} with 3 entries:
r"qux:(\w*)" => Union{Nothing, SubString{String}}[]
r"baz:(\d*)" => SubString{String}["33"]
r"bar:(\w*)" => SubString{String}["werfsd", "skdf"]
Without further information about how you want to organize your data structure when more than one value occurs, I can't really say more. Perhaps you can make use of merge.
Give types to struct fields, and don't use mutable structs unless necessary. Without knowing more, I suggest
struct Foo
bar::Union{String, Nothing}
baz::Union{String, Nothing}
qux::Union{String, Nothing}
end
function Foo(;bar=nothing, baz=nothing, qux=nothing)
return Foo(convert(Union{String, Nothing}, bar),
convert(Union{String, Nothing}, baz),
convert(Union{String, Nothing}, qux))
end
which also takes care of converting the SubString from the regex match, in case this is relevant. | {
"domain": "codereview.stackexchange",
"id": 40264,
"tags": "parsing, regex, julia"
} |
Which Enzymes Catalyse the Deacetylation of Drugs in the Human Body? | Question: If you would like more specifics seeing how I realise that this question is very broad and may be difficult to answer in general then hopefully the following will help you out:
I am particularly interested in acetyl groups bound by carbon single bonds
Drug metabolism in the liver particularly interests me
The drug paracetamol's (acetaminophen) deacetylation to p-aminophenol is of particular interest to me.
Answer: Acetyl esters can be deacetylated by carboxylesterases.
In the case of an acyl/aryl group attached to an acetate group (e.g., benzoate), I guess it will first undergo ring opening and then beta oxidation (as in the case of tyrosine).
Paracetamol -> p-aminophenol is a deamidation (I am not sure, but enzymes like ornithine decarboxylase can do that job... just a guess). | {
"domain": "biology.stackexchange",
"id": 961,
"tags": "pharmacology, metabolism"
} |
Is it a calculation trick to make the solution easier or is it a necessary step? | Question: I was solving an electrostatics problem in which I had to calculate the volume charge density by using the equation
$\rho=\epsilon_0\nabla\cdot\vec E$
$\vec E=A\left(\frac{\lambda e^{-\lambda r}}{r}+\frac{e^{-\lambda r}}{r^2}\right)\hat r$
I got the divergence as
$\nabla\cdot\vec E=\frac{1}{r^2}\frac{d}{dr}(r^2E_r)=-\frac{A\lambda^2e^{-\lambda r}}{r}$
The answer is $\rho=\epsilon_0 A\left[4\pi\delta^3(\vec r)-\frac{\lambda^2e^{-\lambda r}}{r}\right]$
I have the solution manual too, where it is shown how the answer can be obtained, but I don't understand whether this is just one way to get the answer or the only way. In the method they used, they separated the $\vec E$ function into $fA$, where $f$ is a scalar function and $A$ is the vector. That is, $\vec E=A\,e^{-\lambda r}(1+\lambda r)\,\frac{\hat r}{r^2}$
So $f$ is $A\,e^{-\lambda r}(1+\lambda r)$ and $A$ is $\frac{\hat r}{r^2}$
Why was this splitting into $f$ and $A$ necessary? Why was $A$ taken as $\hat r/r^2$? What is wrong with considering the whole thing as $E_r$ and calculating the divergence? I am aware of the Dirac delta function, but I don't see why it is used here, because the function here is not just $1/r^2$; there is an $e^{-\lambda r}$ too. I want some clarity regarding the calculation of the divergence and for what functions we use the Dirac delta.
Answer: Is it necessary to split up the problem this way? No. You can always factor terms however you want and you should always get the same answer.
So why would you choose to factor it this way. Well firstly we need to think about what is "the difficult bit" here. Now taking the divergence of an analytic function is easy. We just have to plug into a formula and differentiate. The subtle thing is what happens when $r \rightarrow 0$ and the electric field blows up.
So now why would we choose to write this as $fA$ with $A = \frac{\hat{r}}{r^2}$? Well, the thing to spot here is that $f$ is a nice, well-behaved function with no singularities, and we already know the divergence of $A$ as $r \rightarrow 0$; $A$ is just the Coulomb field. So we have isolated the difficult bit of the problem and turned it into a problem we already understand. From here it is just a case of applying the equivalent of the product rule for the divergence.
$$
\nabla\cdot fA = f\nabla\cdot A + A \cdot \nabla f
$$
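As a quick sanity check of the $r>0$ piece, you can differentiate the radial component numerically. This is only a sketch — the values of $A$, $\lambda$ and $\epsilon_0$ are arbitrary — and it cannot see the delta-function contribution at the origin:

```python
import math

def E_r(r, A=1.0, lam=2.0, eps0=1.0):
    # Radial field from the problem; A, lam, eps0 chosen arbitrarily
    return eps0 * A * math.exp(-lam * r) * (1 + lam * r) / r**2

def div_E(r, h=1e-6):
    # (1/r^2) d/dr (r^2 E_r), central difference; valid only for r > 0
    f = lambda x: x**2 * E_r(x)
    return (f(r + h) - f(r - h)) / (2 * h) / r**2

# Compare against the closed form -eps0*A*lam^2*exp(-lam*r)/r at r = 0.7
r0 = 0.7
analytic = -(2.0**2) * math.exp(-2.0 * r0) / r0
assert abs(div_E(r0) - analytic) < 1e-5
```

The finite-difference value agrees with the $-\epsilon_0 A\lambda^2 e^{-\lambda r}/r$ expression away from the origin; only the $4\pi\delta^3(\vec r)$ piece requires the factoring argument above.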
Why doesn't the exponential screw up the delta function? Well, in a sense, the delta function only really cares about the behavior of $\vec{E}$ for small $r$ (where the singularity occurs), and for $r \ll \frac{1}{\lambda}$, $e^{-\lambda r}\approx 1$, so it does not matter. Also notice that
$$
f(r)\delta(\vec{r}) = f(0)\delta(\vec{r})
$$
as the $\delta$ is $0$ for $r\ne 0$ | {
"domain": "physics.stackexchange",
"id": 70335,
"tags": "homework-and-exercises, electrostatics"
} |
Can the polarity of an electromagnet be the same at both ends? | Question: I came across this question:
The question asks to determine the polarity of the ends of the electromagnet.
On applying the clock rule, the polarity of end A comes out to be South, while that of end B also comes out to be South. Am I doing something wrong, or is the question itself wrong?
Also, I would like to know whether this is possible practically or not.
Thanks for helping.
Answer: They have the same polarity; the north pole is actually at the top. It is not the same as bending a permanent magnet, because the wire is not a uniformly bent spiral: the way the wiring is made at the top reverses the polarity. | {
"domain": "physics.stackexchange",
"id": 99228,
"tags": "electromagnetism"
} |
Dominant parameters in the dynamical response of a linear oscillator | Question: I am working with a very simple 3-DOF damped LTI spring-mass system. As an exercise I am altering two parameters simultaneously (stiffness and damping coefficients connecting mass 1 to mass 2) to see which combination minimizes:
Settling time
Overshoot
The objective is to determine which parameter has the greater effect on the performance, either the stiffness or the damping.
My question is: how can I determine whether changing the stiffness or the damping has more of an effect on the aforementioned criteria? Is there a way to quantify how each contributes to the performance?
Answer: Equations of Motion
From what I understand from the question and your comments, you can write the equations of motion of the 3-dof oscillator in the form:
$$M \frac{d^{2}x}{dt^{2}} + C \frac{dx}{dt} + K x = f$$
where the vector $x$ represents the displacements, $M$, $C$ and $K$ are the mass, damping and stiffness matrix respectively, and $f$ is some forcing vector (e.g. impact, step function, etc.). Therefore, this type of equation may be classified as a Linear Time-Invariant (LTI) system.
Damping models
There are several ways to solve this type of LTI system, for instance numerical time integration, and, as mentioned in the response by @am304, there are different models to represent the damping matrix $C$.
Rayleigh damping is indeed one form:
$$C = aM + bK$$
where parameters $a$ and $b$ are chosen arbitrarily. You could also use modal damping, which is similar and may be written in the form:
$$V^T C V = 2 x_i W$$
where $W$ is a diagonal matrix containing the linear eigenvalues of your system, $V$ the associated eigenvectors and $x_i$ is the damping coefficient (which may vary from mode to mode but can be considered as a single constant as a first approximation). The main advantage of this method is that the damping carries with it the inherent dynamics of your linear system and is diagonal in the modal domain.
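To make the connection between the damping model and the modal quantities concrete, here is a small sketch with a hypothetical 3-DOF chain; the matrix values and the Rayleigh coefficients $a$, $b$ are made up for illustration:

```python
import numpy as np

# Hypothetical 3-DOF chain (wall-m1-m2-m3), equal masses and springs
m, k = 1.0, 100.0
M = m * np.eye(3)
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

# Undamped natural frequencies from K v = w^2 M v (M = m*I here, so eigh works)
w2, V = np.linalg.eigh(K / m)
wn = np.sqrt(w2)

# Rayleigh damping C = a*M + b*K is diagonalised by the same modes and gives
# modal damping ratios  zeta_i = (a / wn_i + b * wn_i) / 2
a, b = 0.5, 0.002
C = a * M + b * K
zeta = 0.5 * (a / wn + b * wn)

# V^T C V comes out diagonal with entries 2 * m * zeta_i * wn_i
modal_C = V.T @ C @ V
```

This shows directly why stiffness and damping are entangled in the performance: changing $K$ moves the natural frequencies, which in turn changes every modal damping ratio even with $a$ and $b$ held fixed.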
Performance
In terms of performance, the overshoot and the settling time will both depend on the damping term as well as the stiffness, since the two quantities are related in the two modeling strategies proposed here. However, for a given stiffness matrix, the solution will reach a steady state with a specific amplitude, and the overshoot and settling time will depend solely on $x_i$, as shown in the picture below (last mass displacements for $x_i=0.2$ (blue), $x_i=0.5$ (red) and $x_i=0.8$ (black)). | {
"domain": "engineering.stackexchange",
"id": 213,
"tags": "dynamics, springs, vibration, stiffness"
} |
About independent sets in triangle-free graphs | Question: From Wikipedia:
An independent set of $\sqrt{n}$ vertices in an $n$-vertex triangle-free graph is easy to find: either there is a vertex with more than $\sqrt{n}$ neighbors (in which case those neighbors are an independent set) or all vertices have less than $\sqrt{n}$ neighbors (in which case any maximal independent set must have at least $\sqrt{n}$ vertices)
Why does any maximal stable set have at least $\sqrt{n}$ vertices?
Thanks.
Answer: The point is that the graph is triangle free, so if $j$ and $k$ are both neighbours of $i$, then the edge $\{j,k\}$ is not in the graph. This means that $N_i = \{j : \text{$i$ is a neighbour of $j$}\}$ is an independent set for any $i$.
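The two cases of this argument translate directly into a short procedure. The sketch below assumes the graph is given as a dict of neighbour sets; the function name and representation are illustrative:

```python
import math

def sqrt_independent_set(adj):
    """Find an independent set of size >= isqrt(n) in a graph
    assumed triangle-free; adj maps each vertex to its neighbour set."""
    n = len(adj)
    s = math.isqrt(n)  # floor(sqrt(n)) is enough for this sketch
    # Case 1: some vertex has >= s neighbours; triangle-freeness makes
    # its whole neighbourhood an independent set.
    for v, nbrs in adj.items():
        if len(nbrs) >= s:
            return set(nbrs)
    # Case 2: all degrees < s, so any maximal independent set is large;
    # build one greedily by taking a vertex and discarding its neighbours.
    remaining, indep = set(adj), set()
    while remaining:
        v = remaining.pop()
        indep.add(v)
        remaining -= adj[v]
    return indep

# 5-cycle (triangle-free): expect an independent set of size >= isqrt(5) = 2
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
```

In case 2, each chosen vertex eliminates itself plus fewer than $s$ neighbours, so the greedy loop runs at least $n/s \approx \sqrt{n}$ times.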
The other fact being used is that a graph with $n$ vertices of maximum degree $\Delta$ has an independent set of size at least $\lfloor n/(\Delta+1)\rfloor$. | {
"domain": "cs.stackexchange",
"id": 9524,
"tags": "graphs"
} |
Why does my book consider moment of inertia as a scalar when it is a tensor? | Question: I found in the internet that the moment of inertia of a rotating body is a tensor quantity. But in my book it is considered as a scalar quantity. Won't doing this give wrong results? So how does it work?
Can MOI be considered as a scalar quantity in special cases only? What are the limitations of considering this as a scalar? What are some situations in which considering MOI as scalar quantity will be wrong and I will have to consider it as a tensor?
Answer: It can be treated as a scalar when the body is constrained to rotate about a single fixed axis. In this case you have a line of points through the body that are fixed and do not move; all other points move around this axis. Because the body is rigid, every point exhibits circular motion around this axis with the same infinitesimal angular displacement, so the body's rotational motion is now one-dimensional. This is how most lower-level undergraduate or high-school books introduce the topic, since tensors and matrices might be beyond the level of the student; it makes learning the basic concepts a bit easier. In upper-level courses and texts the tensor treatment is standard, with the constrained case as a special case. | {
"domain": "physics.stackexchange",
"id": 67178,
"tags": "rotational-dynamics, moment-of-inertia"
} |
Evaluating the modulus squared of a spinor chain with different number of spinor and anti-spinors | Question: I want to evaluate the interference between diagrams in a BSM model whose relevant part of the contributions are
\begin{equation}
\begin{split}
A&=[\bar{u}_e(k_2)v_e(k_3)] [\bar{u}_e(k_1)u_\mu(p_1)] \\
B&=[\bar{u}_e(k_1)\gamma_\mu v_e(k_2)] [\bar{v}_\mu(p_1)\gamma^\mu v_e(k_3)]
\end{split}
\end{equation}
Using the polarization sum method, $AB^*+A^*B$ would be something like (omitting spin indices)
\begin{equation}
\frac{1}{4}\sum \text{tr}[(\displaystyle{\not}{k_1}+m_e)u_\mu(p_1) \bar{u}_e(k_2)(\displaystyle{\not}{k_3}-m_e)\gamma^\mu v_\mu(p_1)\bar{v}_e(k_2) \gamma_\mu]+\text{Complex Conjugate},
\end{equation}
using $\sum u(p,m)\bar{u}(p,m)=\displaystyle{\not}{p}+m$ and $\sum v(p,m)\bar{v}(p,m)=\displaystyle{\not}{p}-m$
I don't see how to eliminate the remaining spinors in the above equation and would guess the result is zero, but I checked computationally and the result seems to be
\begin{equation}
\text{tr}[(\displaystyle{\not}{p_1}+m_\mu)\gamma^\mu (-\displaystyle{\not}{k_3}-m_e) (\displaystyle{\not}{k_2}-m_e)\gamma_\mu (\displaystyle{\not}{k_1}+m_e)]
\end{equation}
I have also tried to figure this out writing the spinor indices explicitly, but it did not help me.
How do I carry out this calculation? Would I need to use crossing? Or is the computational result itself wrong? Details are greatly appreciated.
Answer: I figured it out: the trick is simply to transpose the subchains (which are numbers) and use $u=v^c=C\bar{v}^T$ (which implies $\bar{v}^T=-Cu$ and $v^TC=\bar{u}$) and $C^{-1}\gamma_\mu^T C=-\gamma_\mu$. | {
"domain": "physics.stackexchange",
"id": 71600,
"tags": "dirac-equation, spinors, dirac-matrices, trace"
} |
Physics simulation software to perform this very specific experiment | Question: I need a physics simulation software that allows me to perform the following experiment:
1. Create a frictionless ramp/terrain defined by a parametric function;
2. Create a ball in an arbitrary position;
3. Visualize it falling down while it is affected by gravity and normal forces.
As there are presumably many options, the best would be software in which I could set up this experiment as quickly as possible.
Answer: Open source physics engines Bullet and ODE can both do what you want. Both have example code and well-documented APIs and are cross-platform. You could take one of the demos and modify it according to your needs. One of the ODE demos is a ball rolling down a surface and then launching from a ramp and falling through a hoop. That'd be a good starting place. However, there will definitely be a learning curve to the API. Still, I think it'll be faster than coding your own from scratch even if it has more features than you need. | {
"domain": "physics.stackexchange",
"id": 2139,
"tags": "simulations, software"
} |
Why does thin metal foil not break like a metal stick? | Question: Consider a metal stick, say iron or aluminum. From experience, even if it is resilient, if you bend it forward and backward a couple of times, it breaks.
However, consider a thin iron or aluminum foil. From experience, we know that it can be bent forward and backward almost as many times as we like.
How can this be explained in solid-state physics? Why does the thin foil seem so much more deformable than the stick? (Does it have anything to do with the metallic bond being weak in the normal direction?) Why doesn't the thin foil break?
Answer: Almost all solid metals are made up of individual small crystals called grains. A small stretching movement will simply stretch the crystal lattice of each grain a little, so the whole thing bends.
When you flex thin foil, it is so thin that the stretching distance is small and the grains can deform to match.
But with a thicker rod, the stretching is much bigger and the stress force it creates in the material is much higher. The outermost grain boundaries (the furthest stretched) will begin to pull apart, creating surface cracks in the metal. Each time you flex it, these cracks grow until they pass right through and the thing snaps in two. If you look closely at such a "fatigue" fracture with a magnifying glass, you can sometimes see the individual crystals forming a rough surface. Or, sometimes you can see the individual "waves" as the crack progressed at each stress peak.
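A rough way to quantify "the stretching distance is small" is the outer-fibre bending strain from small-deflection beam theory, $\epsilon \approx t/2R$ for thickness $t$ bent to radius of curvature $R$. The numbers below are illustrative, not from the question:

```python
# Outer-fibre bending strain for thickness t bent to radius of curvature R:
#   epsilon = t / (2 * R)     (small-deflection beam theory)
def surface_strain(thickness, bend_radius):
    return thickness / (2.0 * bend_radius)

# Illustrative numbers: 16 um foil folded to a 2 mm radius vs a 6 mm rod
# bent to a 50 mm radius
foil = surface_strain(16e-6, 2e-3)   # 0.004, i.e. ~0.4 % strain
rod  = surface_strain(6e-3, 50e-3)   # 0.06,  i.e. ~6 % strain
```

With these (made-up but plausible) dimensions the foil's surface strain stays near the elastic range, while the rod's far exceeds it, which is why the rod's grain boundaries crack and the foil's do not.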
The formation and behaviour of these grains, and the factors which control them, is the principal phenomenon studied by metallurgists. | {
"domain": "physics.stackexchange",
"id": 73238,
"tags": "solid-state-physics"
} |
Dark energy time dilation | Question: I have read here that dark energy is somewhat like negative mass. Here Wikipedia states:
... that the cosmological constant required that 'empty space takes the role of gravitating negative masses which are distributed all over the interstellar space'.
If you take the equation for spherically symmetric time dilation, $t_0 = t_f \sqrt{1 - \frac{2GM}{rc^2}}$, and substitute a negative mass, the factor under the square root exceeds 1, so $t_0 > t_f$, suggesting that time moves faster near gravitating negative masses.
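Plugging numbers into that expression shows the sign flip explicitly; the solar-mass values here are purely illustrative:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m s^-1

def dilation_factor(M, r):
    # sqrt(1 - 2GM/(r c^2)) from the Schwarzschild exterior metric
    return math.sqrt(1 - 2 * G * M / (r * c**2))

# One solar mass at 1 AU, with positive and (hypothetical) negative sign
M_sun, r = 1.989e30, 1.496e11
slow = dilation_factor(+M_sun, r)   # slightly below 1
fast = dilation_factor(-M_sun, r)   # slightly above 1
```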
Is this consistent with observations (say, are photons traveling through the intergalactic medium 'time-accelerated' [I have never heard of such a thing, so this must be wrong]) or is dark energy not a gravitating negative mass, or am I looking at this all wrong?
Answer: The Wikipedia article is inaccurate. Dark energy doesn't act like negative mass. It acts more like a combination of some mass and some negative pressure.
It's not accurate to describe that equation from the WP article as "the" equation for time dilation. It applies to a spherically symmetric gravitational field, in vacuum. That's not what cosmological models look like.
The concept of gravitational time dilation only means anything in a static gravitational field. It isn't a well-defined concept in a cosmological model, which isn't static. | {
"domain": "physics.stackexchange",
"id": 61504,
"tags": "time-dilation, dark-energy"
} |
Calculate average over a set interval of datapoints in Python | Question: I am trying to analyse my running data, which comes in GPX form; importing and writing the data works wonderfully.
Now I want to average the pace I calculated between two points over a certain distance (for example 1000 m) and see the result as a list for each individual kilometer. If the last kilometer is not complete at the end, I want to calculate the pace for that final segment as well.
The function has the following input:
average_over_distance(input_datalist, input_distancelist, average_distance)
input_datalist = [0, 5.3, 4.3, 4.6 ... ]
input_distancelist = [0, 4.5, 7.8, 12.3 ... ]
average_distance = 1000
Where input_datalist is the pace at each datapoint and input_distancelist is the number of meters covered from the beginning to that datapoint. I have already written some code which works, but it is surprisingly slow and has a minor bug: the last item always needs to be removed, as it is always zero.
If someone could take a look at it and suggest some improvements (especially in terms of performance), that would be very helpful.
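A minimal sketch of the behaviour described above (not the actual code under review; treating a point exactly on the boundary as starting the next segment is one possible choice):

```python
def average_over_distance(input_datalist, input_distancelist, average_distance):
    """Average pace over consecutive windows of average_distance metres;
    a trailing partial segment is averaged as well."""
    averages, bucket, next_cut = [], [], average_distance
    for pace, dist in zip(input_datalist, input_distancelist):
        if dist >= next_cut and bucket:
            # Close the current kilometre and start the next one
            averages.append(sum(bucket) / len(bucket))
            bucket = []
            next_cut += average_distance
        bucket.append(pace)
    if bucket:  # unfinished last kilometre
        averages.append(sum(bucket) / len(bucket))
    return averages

# Toy data: two points per kilometre
print(average_over_distance([0, 2, 4, 6], [0, 500, 1000, 1500], 1000))
# → [1.0, 5.0]
```

This single pass avoids repeatedly rescanning the lists, which is the usual cause of slowness in segment-averaging code of this kind.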
Typically, the files contain between 800 and 3000 datapoints. For example, for a run of roughly 6 km ("runtastic_20191117_1510"), the input data is:
input_datalist = [0, 8.154297259068265, 7.699997924184752, 4.1290248509679355, 5.233554408345463, 3.736102600544462, 4.330745941935702, 5.230497758127484, 5.121644868994811, 4.015323039710534, 4.726542792381121, 3.740745581335375, 3.4118823620200422, 3.987048386430994, 3.7203695740533966, 4.410022185697186, 5.2436728280589415, 3.7116435736709072, 4.092616264885011, 4.223133016298036, 3.6989551972321575, 3.933683921646469, 5.299174480076175, 5.454088851201578, 4.885582841700339, 4.322790700547173, 5.077202464669111, 4.468306638956414, 5.10435062743631, 4.611544695366803, 4.276830319696586, 4.088270186235909, 3.9011269060681983, 5.041657604796356, 4.277273006103617, 4.406055316530598, 4.557975009670055, 4.406041751786603, 4.27732553606914, 4.745072973440513, 4.710883275762832, 4.640505059675511, 5.06935043569426, 4.490555199849174, 4.572841974069774, 5.045738092463033, 4.205832799143788, 4.505775599108849, 4.28371867045946, 4.263979223210943, 4.7588714454922725, 4.91203496026689, 4.696805096033936, 4.162071809512042, 4.535661818149418, 4.171066245852658, 4.143397024221024, 4.640261191080346, 4.399112208437336, 5.685874427165761, 4.505487986404551, 4.720550515635307, 4.846515601858323, 5.298586073902652, 4.696656283201031, 4.441343777394891, 5.337965772042491, 4.48485689638503, 5.769112947432504, 5.319572026762231, 5.116981287986407, 5.22259033813319, 4.853202942230398, 4.448933861887161, 4.789873410985746, 5.672882596467915, 5.5417906688838166, 6.457016910025886, 6.786483505021053, 5.198308170917069, 5.092476666244364, 4.111794944472009, 4.62311212962504, 4.936303165196316, 4.2158075941616495, 5.0052319036352255, 6.027735861446753, 4.451769206323267, 4.348499095726568, 4.716995991496699, 4.739598104126194, 3.803071758523529, 3.8735972911018113, 4.655091267145741, 5.07588762499882, 5.502452004107536, 4.90400116659635, 5.053913295715235, 4.58068413622591, 4.816772523066801, 5.8804125927899475, 4.6149157805783245, 4.633021535424502, 4.694860556377299, 4.162950942798815, 
4.143781839386801, 4.210068875540942, 4.466337101965024, 4.806729632448732, 4.497942736910237, 4.936378303939041, 4.767575030726392, 4.467464723983651, 4.754562019764752, 4.434949315331764, 3.7991288006306982, 4.356167584044449, 4.899659810097845, 4.824108634407948, 4.634182253538913, 4.9030628045839, 4.820284931682572, 4.542769711287221, 5.007690356986818, 5.295531279293059, 4.501087763083377, 4.488216232376019, 3.639894520745288, 4.905012199105443, 5.343166287117647, 4.4175484463040045, 4.525020534675778, 4.670701777770286, 3.941406726590055, 4.655011763949892, 6.140477088073999, 7.0078624375573995, 5.545091724531036, 5.288132248589611, 4.8816938060736526, 4.419245209980266, 4.7858786837636815, 4.956721947840855, 3.8419543236293525, 4.584358093975693, 3.6157730397543, 3.679531116433433, 3.8551332359568007, 5.01671434928462, 6.037315646638342, 4.200679939449504, 5.553716753945233, 4.8052903165724645, 5.62792540716035, 4.496031477404894, 3.3585767502392505, 4.426144671921158, 4.16635108522084, 4.543574826368433, 4.023584939710059, 4.4773033732429965, 4.056618759180805, 4.288378568522181, 4.016701064525987, 3.6309957856654416, 4.68751694219872, 4.200361225258804, 4.4642917645643445, 4.611021161957354, 4.608897935198266, 4.375182381141827, 4.3219335803397305, 4.981445645654108, 4.833516446559961, 4.336397398785247, 4.203682658060733, 3.784297284926399, 4.424894795321563, 4.088044358642628, 5.20713570393957, 6.0627377944839695, 3.735901221369735, 3.772403843344582, 4.9605949817268975, 6.184966776447468, 5.575863638814157, 3.965755781299144, 5.054754982298068, 5.055491060388767, 4.734832940836844, 4.6680405144198405, 4.518231790785486, 5.352054360203661, 3.933386810266203, 4.073500276584424, 3.6496806936814843, 4.160556562775662, 4.231158566179572, 4.126957725155156, 5.593731232209662, 6.557381777970839, 4.642808962883382, 3.955271250935829, 4.328041504215481, 3.8714835731936508, 4.617792757235988, 4.316975631278537, 4.639702196406146, 3.816561586325325, 
4.071972083203815, 4.105020127529573, 5.418019364469931, 5.979515822345107, 4.572346964592393, 5.072186750588375, 4.605953060201322, 4.52944254464933, 5.45304404918759, 4.214181255790554, 5.065635658544667, 4.646521009104833, 3.766190012573553, 4.747353072646974, 3.9823024273203877, 4.008735954584001, 4.498706648060665, 4.335661856552447, 5.4356808518186295, 5.975294162959008, 6.343006329478395, 4.713236251049612, 6.188658480557391, 5.124416742172999, 5.619545161380488, 4.049275101887822, 4.454425726867608, 4.429777695304846, 4.573400986105693, 3.7655872110826647, 6.033389303976118, 5.409110541159138, 4.522967609924991, 3.8649484408364314, 3.547691073057468, 3.8216406348013447, 3.7081335110051663, 5.148494271933852, 5.0951639784476335, 5.1868397841911404, 3.915056916229462, 4.406991109627597, 3.9763203810458996, 4.28549379742172, 4.054115252910068, 4.776024139760099, 5.520744481502811, 4.888183596040763, 4.124802660905125, 4.864276535037368, 5.177312990215, 4.86043806706891, 4.135061226941825, 3.570930298137865, 3.700051516827064, 4.826854581084116, 4.878739859154471, 3.926577500254844, 4.101678697318054, 5.533643097300404, 5.603894439072506, 5.5836491017136645, 4.132822241747506, 4.327310749268595, 4.901185832428964, 4.404933049474804, 4.353807464674405, 4.005766544385017, 4.506879230892712, 4.906734585665481, 5.123709595337477, 3.9234155679802045, 4.188877024821228, 4.932978861075624, 4.727035002624785, 4.6932702535094615, 5.883208720833115, 5.60566192288835, 4.560563955425492, 5.669761784806086, 5.275740020308531, 3.8030108229859043, 3.262000519242214, 4.002520224062654, 4.718094076517959, 4.478663977565892, 4.390544097105828, 4.19771623465651, 4.5480069304831785, 4.594856759883293, 3.8475266675044115, 4.929983554415432, 4.382543120384308, 3.6614842492966537, 4.101811441660566, 4.5657701468204115, 4.258297327206018, 6.2327786523192685, 6.141584317657115, 4.330490431701469, 3.6629381384362287, 3.6957939483910436, 5.121599558473292, 6.042316399492867, 
4.75216860990142, 5.978575732467344, 6.00395603245871, 8.042208384599991, 5.1021107749039025, 4.517556660128847, 4.935516587555954, 5.632047276257036, 6.423600308741078, 6.00440093918871, 4.845324684406579, 5.055066910344564, 4.9389227035720875, 4.680938244648393, 4.9204120831447336, 4.678421499818818, 4.177438981630106, 4.353078428831223, 4.250867676745693, 4.968408209906815, 4.407893575868392, 4.216143959716419, 4.073363509236613, 3.5906037538507043, 3.691974738042348, 5.036374082670621, 5.303505234368339, 4.595696918891344, 4.1300635347916135, 4.578008133895794, 4.8718892710481585, 4.910747946403624, 3.906923185712786, 3.7330877569388266, 3.897689856046975, 5.802573172588164, 4.008559710782361, 4.392256205143359, 4.748812027698272, 4.07431196233908, 5.260360061285357, 9.359585548001148, 13.125105261196474, 5.380398263912016, 4.130980041632252, 3.2739945880797694, 3.928870358141829, 4.365717934322635, 4.310671755409893, 4.29005790128022, 4.707968241314882, 6.345982614926924, 6.131622489289469, 5.076782565165626, 4.865378764540353, 4.119554065730303, 4.124088634945004, 5.238130249873582, 5.579928645245936, 5.522380159660524, 5.121673937886574, 5.900959324337007, 4.858015036866666, 5.187047039414815, 4.879726213161024, 4.091664395550873, 4.837767172037724, 4.257922821251948, 4.67990874895055, 5.406604825715869, 5.579957583303129, 4.6204975750040305, 5.238908190956787, 4.6220244661279235, 4.865310773483241, 5.3436658704418, 4.7078411709655885, 5.522251283593757, 5.809488123591483, 5.857531981615105, 5.414726666730081, 5.08323718554349, 5.945605891365354, 5.607541690637272, 4.133746420689716, 4.135401205340503, 5.224387695688467, 5.228290701546441, 5.538561186853402, 5.446367710527512, 4.6423440977698816, 4.853491793936539, 4.607173624287225, 6.040282575425058, 5.186394503344894, 4.849786217055432, 4.8537653697490395, 4.776660374217542, 5.032812532554486, 5.53873330719436, 5.34336505487213, 4.322361773758633, 4.360275788289467, 4.6084726720321365, 4.769621387307082, 
4.134098145663927, 4.901530424054935, 5.152005831962173, 5.487658037299265, 4.719588500446772, 4.870480538873378, 4.902862677085294, 4.620710345607096, 5.2355052862068945, 5.199863591781361, 5.766644251221267, 7.598788637776362, 7.133574355872142, 5.964849017616679, 5.598263679775067, 5.612157326648779, 4.8704106527246545, 4.532958071342506, 4.13961046938866, 6.027068718261925, 5.237024579091238, 5.102538323259633, 5.210318277513763, 3.778892100015756, 5.456404694376721, 4.8019619965987985, 4.878191938028389, 6.548044633747711, 5.768001662343498, 5.045591480188853, 4.720907855592685, 9.172196383817367, 5.2545597237758095, 4.681417777704229, 5.971315496784966, 3.684618597555957, 4.801114328452435, 4.680478088477566, 5.555867415427176, 5.27830765541006, 4.372746909313776, 4.125134248904619, 4.949176280889171, 5.195891869097251, 4.573216598940454, 9.425778187394444, 14.530948232708237, 15.593075826191377, 6.853818862628895, 7.0628011743550605, 6.877345450766101, 4.419415731769701, 4.365855944753687, 4.914215523401811, 5.890169505133561, 6.696349144225213, 5.455641272726224, 4.491935561060528, 5.045477445700691, 4.366653603581333, 4.667168036633596, 5.025731647884249, 5.452353090157114, 5.646332078214436, 5.675451738912615, 4.926132948232751, 4.853437603310502, 5.8863644009082, 6.872906850875094, 5.061691654041292, 4.020451529830195, 4.759392869964344, 4.642903720505812, 4.16820169937677, 3.8732898979294474, 4.591339181786737, 4.77128676701023, 4.394301688749651, 3.5827652905733913, 3.596804035306872, 3.6349118606187116, 4.187996663638694, 4.59958514622361, 4.467786796658763, 5.33258780981264, 5.446587239227867, 5.400921522256128, 4.994677705240711, 5.280729520522615, 5.596505350164431, 5.480381222725426, 5.276506641778495, 4.927198227898725, 5.5007019344073, 6.3927226627934415, 4.893321707184026, 5.469610714372455, 5.480731726644543, 5.809071390656785, 5.344346764147256, 8.784779754402654, 5.64251223436406, 5.200250666831702, 4.813288290089864, 4.204272390177281, 
4.597767286104216, 5.542276227351737, 5.159232715823571, 4.97802276508083, 5.168328146529712, 5.869873943518415, 4.7577037646608495, 4.5594698514444385, 4.6486154469728485, 4.361110487388936, 4.445216263055807, 4.735657362747071, 5.855672069800528, 6.711408602885964, 6.277830805653751, 8.069971890956039, 6.523580156999423, 5.352097060441109, 4.5193744659476565, 4.670852116926712, 4.31749071218104, 5.9038388477506745, 5.156765629067522, 5.972383637211377, 4.863313642890077, 4.741459973381345, 5.013307214865355, 3.756380102752061, 3.6573548432696996, 3.9542882382964577, 4.846262971576997, 4.628820794382232, 4.634623072909339, 6.663578270926263, 5.1653246237832455, 4.057144047002918, 4.441839529076168, 4.847284940314969, 6.549024398459078, 5.516858945171008, 5.050030579321399, 7.508700305495534, 5.289871902784273, 7.261676530493964, 6.273114030218404, 5.891465813903992, 5.754975587418065, 5.611277253106119, 3.974445324538755, 3.866515171739321, 3.8148723957045867, 4.331822088552902, 4.236060975349821, 3.8929893685590025, 4.273716116286058, 5.122359735401137, 5.3881856603291105, 6.168689308793071, 7.496764045470881, 5.355983664117728, 4.457117100223586, 4.519838111870202, 4.76018042036791, 5.296781862809513, 5.209190890406699, 6.780529324328974, 5.735678685893203, 4.885659099536959, 4.295766299476571, 5.461881703955669, 4.234657086287799, 4.104718261660021, 5.047180653459006, 4.093515224120995, 3.5354341296045817, 4.024620321747599, 4.588147410590329, 5.051399703901492, 5.247352666742633, 4.684998126730578, 4.992099870242227, 4.61345790705681, 5.014806567272018, 4.618044173779868, 5.65158462801022, 5.548525993892823, 5.646393201598026, 6.159491988943552, 4.789419901289577, 4.6736512810108275, 4.363105511327151, 4.739391667120165, 4.897441960035855, 5.461516912637022, 5.243873127366143, 5.125065202894813, 4.351851614609344, 5.3655333223312445, 4.243655718883197, 3.7754875890613078, 4.719243395600352, 3.8813961449106613, 4.218969588152745, 4.257677718939865, 
6.103674486703634, 7.791663380655756, 10.464238426233472, 8.594965185662373, 6.75186715111778, 6.031324685806287, 6.145523302453469, 6.75140945728926, 8.154059132392755, 4.106360394477029, 4.709620377998221, 6.028553163902719, 4.097065935296995, 5.0426994603803985, 3.2005053600326656, 4.092894804323027, 3.842959298407252, 5.233649653441752, 5.620989567976742, 6.240709912225148, 5.402477404353037, 4.949452795156887, 6.66384694626282, 6.878195322360906, 5.223144951942625, 4.420158019516105, 5.5748343085937835, 5.351436065486316, 2.803141976997137, 2.853987131694524, 3.628310473460109, 3.2999520776521822, 3.072156740561856, 4.022257331264973, 4.246632301136313, 3.2032110524464987, 3.890465629743822, 5.516141415478659, 4.606479518381352, 4.617117217288059, 5.2150682654367415, 4.3625251186388665, 4.610699586240319, 4.339292661652301, 5.338621562017436, 6.245388897091721, 5.252203657075473, 4.663563296139489, 5.613802045800598, 4.457795237559685, 4.511612941820016, 7.5219824401326605, 9.17922567246037, 6.09564157037004, 5.575136863116195, 8.148782813497581, 10.337371026863684, 11.397323448389315, 13.452285184002244, 19.5321880505188, 9.104925253521232, 5.54880509500184, 5.297901295579736, 5.988140824070068, 5.712105550155605, 6.082955341897327, 5.514548951533992, 5.464177314396885, 5.965901886583051, 5.349768122904307, 5.455309858192952, 5.298726867437543, 5.27469020516504, 4.649991348964344, 4.657421878443319, 5.590467970089882, 4.839346013516856, 4.563686472478843, 4.460033351195948, 4.796752890700532, 4.936679285281038, 4.9476732954772, 6.1392458938730945, 6.376675176853506, 7.1893167742280895, 7.830862993334671, 7.201136642310429, 8.819884427914946, 14.75778834627489, 10.720082995027086, 13.683835023100594, 9.083588248964963, 6.438601514845196, 5.770584825015614, 5.971641604445405, 5.947097347049473, 5.8707122742282944, 5.913629995689299, 8.46564700833827, 4.691928546549967, 4.93123094432014, 6.5383321864315365, 7.965339881360501, 9.553183457183184, 
5.975672597974902, 7.285598168392091, 6.283135292771771, 5.087462116591565, 4.299544221293388, 4.452401951635002, 4.928524696311035, 4.3192558539968635, 5.270743879168626, 4.434409547593456, 6.315079614095595, 7.744781172450392, 8.08815578671949, 8.311096077776098, 7.919508946141913, 7.023719658943078, 4.2348297914384405, 5.297799056675912, 5.433120928375736, 6.883671910111391, 6.1792032956470955, 13.600344019383114, 6.080828732734664, 6.3592677654039225, 5.167408049888382, 5.908986243562096, 6.049967830296147, 5.389658284884827, 5.1795190101266915, 5.225140621772348, 4.885351697184914, 5.609925213863514, 6.063650915495623, 6.064491470070761, 5.417789742021097, 6.642258468903965, 4.4894790182232285, 5.328546012050812, 5.071869409181481, 4.934339573713094, 4.860577415921298, 6.361065045926173, 6.190369858616081, 7.752494922626216, 6.127335665675411, 5.952050620062956, 6.068822834066873, 4.265677036952063, 5.22676808371028, 4.947280614722526, 6.131532273754776, 5.640349902998796, 4.9605101666572144, 5.678809739479061, 4.719745254190277, 5.8758917407168365, 5.913293582954954, 5.485515502725772, 5.827580748165716, 5.104395243299189, 6.156313145512767, 5.704854390030226, 5.543713820717527, 6.963449964818909, 5.42907727323174, 4.9077764911701145, 5.223678986534636, 4.959746320310088, 5.887911278350437, 6.204625494405663, 4.813010715023378, 5.8089148090748335, 4.269984326680447, 4.503035786380268, 5.164840548106424, 4.593358661615181, 4.537538610033516, 4.23346846152204, 4.599166460471844, 4.658141927517007, 4.874112991746525, 6.166375609977758, 10.015845685546097, 6.486143999799107, 5.53486556416923, 5.270831174465323, 4.915667196745372, 6.0476493575518235, 6.313789492784983, 4.394310739262499, 4.966348552199283, 5.477472022486794, 4.400604850729746, 5.013639016806944, 5.014571233503277, 4.959505331690136, 4.5459152890121555, 4.760725343443551, 4.389542691319214, 4.5899672203500135, 4.871417868900642, 4.417547567530365, 5.431557397676423, 5.0063916174177425, 
4.677939660871351, 4.912490579012375, 4.704522676025109, 4.676469616096843, 4.564527258579771, 4.31847476550115, 4.884535624222552, 4.361030312218816, 4.1987006350092715, 4.488650239277422, 4.964672710238733, 7.1043549698253825, 10.07783045950558, 13.138767501869758, 10.456322266224637, 9.166881106821982, 0]
input_distancelist = [0.0, 4.087824158760445, 8.416829654805763, 16.489760803553693, 22.858918498786547, 31.78087190779509, 39.47777534665958, 45.85065511351118, 52.35898089281798, 60.66051304942781, 67.71288405240794, 76.62376361065739, 86.39354163829078, 94.75394515051694, 103.71362850181802, 111.27216946368478, 117.62903695453325, 126.60978434358435, 134.75453361297863, 142.64756781405455, 151.65912149772066, 160.13294247502517, 166.42323041679043, 172.53485313558602, 179.35764878011392, 187.0687168475811, 193.63401211900094, 201.09395970691278, 207.62433658857995, 214.85257257788373, 222.64650659949302, 230.7999142303188, 239.3444538302696, 245.9560359618332, 253.74916333195128, 261.3145094156179, 268.62769855391167, 276.1930679287745, 283.98609959125736, 291.0109300880504, 298.08674398155756, 305.26987015094574, 311.8453345533723, 319.2683216562706, 326.5577345689296, 333.1639699149783, 341.089471218815, 348.4873836536744, 356.2687847907752, 364.0862087481631, 371.0906705654512, 377.8767243299224, 384.97374723203956, 392.9825790972546, 400.3317454746838, 408.32330719262035, 416.3682358196568, 423.5517394977272, 431.1290259487073, 436.99150739467007, 444.3898920832804, 451.45121538083384, 458.3290088455969, 464.6199953216893, 471.717243091887, 479.2224791204489, 485.4670551924642, 492.899473685059, 498.6773694615957, 504.9435377107775, 511.4577951327448, 517.8403239499004, 524.7086403442719, 532.201072097167, 539.160198252126, 545.0361057412193, 551.05100874369, 556.2133513235237, 561.1250752389611, 567.5374179161081, 574.0830214638444, 582.1897811110265, 589.3999314164306, 596.1526231004325, 604.0593723030042, 610.7190703899448, 616.2490627617958, 623.7367225636532, 631.4022026751912, 638.4688470805338, 645.5017921955034, 654.2666368920637, 662.8719024445528, 670.0325210920518, 676.5995170139406, 682.6574223456819, 689.4545931050169, 696.0501422137825, 703.3270756023389, 710.2473387398869, 715.9158752256096, 723.1388311620137, 730.3335599320968, 
737.4335223128868, 745.4406628704747, 753.4848443996515, 761.4023712427368, 768.8656084709943, 775.8003303884307, 783.2111257860241, 789.9637146843942, 796.9553893027283, 804.4167427532398, 811.4275532537163, 818.9436105890502, 827.7175519536598, 835.3695379459859, 842.1727313564011, 849.082470721635, 856.2753974365647, 863.0738690576441, 869.9890895904058, 877.326757016186, 883.9831856204079, 890.2778011321084, 897.6834184157178, 905.1102738984999, 914.2680484283256, 921.0638181390335, 927.302316351919, 934.8479796849883, 942.2144287602789, 949.3511150484037, 957.8083323937884, 964.9690733378777, 970.3975331880208, 975.1540953537805, 981.1654176166511, 987.4688404068477, 994.297071479403, 1001.8398376644643, 1008.8047725355505, 1015.5296470966207, 1024.2057870310082, 1031.476888600743, 1040.695756313634, 1049.7548815876291, 1058.4013618240147, 1065.045816941952, 1070.5670345300855, 1078.5022578305548, 1084.504244393815, 1091.4410434501328, 1097.363889067618, 1104.777834788606, 1114.7026737344358, 1122.2336822766545, 1130.2342882389257, 1137.5706554403384, 1145.8551414976243, 1153.300098998726, 1161.5171229075897, 1169.2900685094562, 1177.5887526280533, 1186.7689707557988, 1193.8800561650598, 1201.8158815728566, 1209.2825381202442, 1216.5115948004316, 1223.7439817619145, 1231.3627117780638, 1239.0753090944513, 1245.7668070459374, 1252.6630975006967, 1260.349969866839, 1268.2795249842825, 1277.0878534066046, 1284.6209891929236, 1292.7748472260105, 1299.176319211485, 1304.674385343202, 1313.5968196791698, 1322.4329185481022, 1329.1525425964517, 1334.5419544620527, 1340.5201016142348, 1348.925393107604, 1355.5198439662643, 1362.1133346747158, 1369.1533577863743, 1376.2941127199197, 1383.6716300490025, 1389.8997680853406, 1394.1369986122663, 1402.3199692935111, 1411.453188369646, 1419.4649369975978, 1427.3429998997478, 1435.4199746480365, 1441.379026298819, 1446.4623559663573, 1453.6419176497723, 1462.0694896716523, 1469.7712026290874, 1478.3811664073494, 
1485.5996222986626, 1493.321077358964, 1500.5054465107498, 1509.2393114253803, 1517.4253531398674, 1525.5454919693434, 1531.6978017141046, 1537.2723890759723, 1544.5625911519437, 1551.1343786220061, 1558.3713897025916, 1565.7306470515364, 1571.843440755871, 1579.753241333986, 1586.3335277146512, 1593.5073537380388, 1602.3580312721115, 1609.3794878234246, 1617.7498549519107, 1626.0650280416266, 1633.4745650354876, 1641.1627414745603, 1647.2950612781008, 1652.8735871923507, 1658.1287188447377, 1665.2010003001624, 1670.5871972341272, 1677.092002563289, 1683.0236807287056, 1691.2556068129059, 1698.7388011408027, 1706.2636332263187, 1713.5521551456961, 1722.4042495110127, 1727.9290601363384, 1734.091502756006, 1741.4612953806952, 1750.085817503976, 1759.481599900724, 1768.203857307605, 1777.1931057352588, 1783.6674905889129, 1790.2096418276822, 1796.6361625455438, 1805.150300154924, 1812.714039791365, 1821.0969994472882, 1828.8751773910733, 1837.0972754948227, 1844.0765813688938, 1850.1144143903307, 1856.9335799716587, 1865.0147746550472, 1871.8674552321552, 1878.305801416041, 1885.163893808819, 1893.2250400661346, 1902.559675735365, 1911.568559309483, 1918.4743677968493, 1925.3067331812808, 1933.7958902986866, 1941.922644194629, 1947.9464033614597, 1953.8946476932801, 1959.8644593211461, 1967.9299727605141, 1975.6329863097415, 1982.434061492972, 1990.001335038096, 1997.657469032029, 2005.9788060255833, 2013.37490688066, 2020.168291106233, 2026.6739941936341, 2035.1699928345565, 2043.1275750542818, 2049.884817335686, 2056.936453999012, 2064.0388221842677, 2069.7046645697837, 2075.6510334004174, 2082.960070979789, 2088.8392127494944, 2095.1574417003185, 2103.922426835744, 2114.1411049557205, 2122.4691911282616, 2129.5341908490673, 2136.976886595009, 2144.5689600592764, 2152.509785848817, 2159.8390036452797, 2167.0934916685756, 2175.7570660148854, 2182.5184137892566, 2190.124347699705, 2199.228123945564, 2207.3546148404844, 2214.6553181745203, 2222.483173012267, 
2227.8312424998508, 2233.258723685365, 2240.9560812616146, 2250.0562440472095, 2259.0755058888617, 2265.5838892469837, 2271.100537355006, 2278.114878820896, 2283.6903427482516, 2289.2422377243515, 2293.3870362348157, 2299.9202799850873, 2307.2988998544356, 2314.052667720879, 2319.9711786456464, 2325.1603765613345, 2330.711860159936, 2337.591344096401, 2344.185388037762, 2350.934498184642, 2358.055577675798, 2364.8300780164336, 2371.9549882740116, 2379.934358763901, 2387.5917749759683, 2395.433311308817, 2402.142368198882, 2409.704559245377, 2417.6106776445163, 2425.793923077375, 2435.0774128797752, 2444.1060048118557, 2450.724522973171, 2457.0096743702743, 2464.2628361695897, 2472.3337370292334, 2479.6149240342393, 2486.4568967226214, 2493.2447289830297, 2501.7765919731264, 2510.705750756328, 2519.2578250784086, 2525.002402930663, 2533.317941612467, 2540.907055675144, 2547.926355057581, 2556.1076955247445, 2562.4443973749953, 2566.0058086321906, 2568.5454708040347, 2574.740799048955, 2582.8099092836687, 2592.991151932846, 2601.475354844825, 2609.110601517834, 2616.843348343546, 2624.6132512447134, 2631.693446272542, 2636.946113251952, 2642.3824122725714, 2648.948250558034, 2655.7993786914226, 2663.8908693725684, 2671.973463197271, 2678.3370570262214, 2684.310849067762, 2690.346893738649, 2696.855182578897, 2702.5039816132944, 2709.3654946140514, 2715.7917585519035, 2722.6227428893917, 2730.7693869224604, 2737.659617921174, 2745.4881612585405, 2752.6108072590223, 2758.7761058905753, 2764.749866951605, 2771.9640971924473, 2778.3267460723937, 2785.538593083939, 2792.389816959504, 2798.6277319305195, 2805.7081180614086, 2811.744303599065, 2817.4820437620874, 2823.172722607368, 2829.328773574473, 2835.8862746543227, 2841.4926558949196, 2847.43703137571, 2855.500741613594, 2863.561225150303, 2869.9415581720737, 2876.317128169257, 2882.335538404113, 2888.455825374253, 2895.636105989244, 2902.5040136212247, 2909.7391074231195, 2915.2576130422026, 2921.684685512651, 
2928.5578407078447, 2935.4253612396533, 2942.403737494589, 2949.026939355918, 2955.045162564108, 2961.2834287111946, 2968.995261981538, 2976.640038356202, 2983.8730927118513, 2990.8617676247154, 2998.9247918101223, 3005.7253888582904, 3012.195360830476, 3018.2695974368335, 3025.332360076029, 3032.176311728702, 3038.975060853193, 3046.188958899199, 3052.5557432898386, 3058.9661678605944, 3064.746537150112, 3069.1332012517587, 3073.80594056454, 3079.394235127384, 3085.348462230648, 3091.287948847506, 3098.131998704921, 3105.4855485963412, 3113.5378360228574, 3119.0684405159395, 3125.4333778634586, 3131.9660741846965, 3138.363636014497, 3147.184563537988, 3153.2935923216646, 3160.235199389139, 3167.0683321885026, 3172.1589103989104, 3177.9379193667737, 3184.5443466735055, 3191.605135477897, 3195.239306829935, 3201.5830035591785, 3208.7033536146864, 3214.2855964808155, 3223.3322135052867, 3230.275046158905, 3237.3968257513343, 3243.3964889615786, 3249.7116444071253, 3257.334617797681, 3265.4151628956215, 3272.1502904080617, 3278.5656150863106, 3285.8544308710375, 3289.390832075334, 3291.6847865234204, 3293.822487494344, 3298.6859561680035, 3303.4055189180303, 3308.2523502386266, 3315.794825388387, 3323.4298307012373, 3330.212873320064, 3335.872020022766, 3340.8498568433083, 3346.9597404787983, 3354.3804465152543, 3360.9870231360132, 3368.6206337585722, 3375.762723580857, 3382.3952570443907, 3388.508825403254, 3394.4123629337414, 3400.2856105388246, 3407.0522434502022, 3413.9202277651952, 3419.583032692493, 3424.432994150808, 3431.0184078052966, 3439.3093505230518, 3446.313044953006, 3453.492460107818, 3461.4895139304094, 3470.0954624165147, 3477.355508345067, 3484.341743916529, 3491.9273253568463, 3501.2311258098193, 3510.49861244688, 3519.6689402599072, 3527.6281952475583, 3534.875225636962, 3542.336041215329, 3548.586914997847, 3554.706955285324, 3560.878741562161, 3567.552512183708, 3573.864771352406, 3579.820869175551, 3585.903171113801, 3592.220482091751, 
3598.985652030438, 3605.045484708306, 3610.259747095651, 3617.0717523812214, 3623.166031311483, 3629.2479442741783, 3634.986096052954, 3641.223216285674, 3645.0176578385913, 3650.92519192072, 3657.335139338436, 3664.2604119014704, 3672.1888547424273, 3679.438750454652, 3685.4531264918533, 3691.914035539646, 3698.610134554531, 3705.0596734360192, 3710.73838709394, 3717.7445680127535, 3725.055359490577, 3732.2259533402016, 3739.8692665352523, 3747.367964317488, 3754.4067618444265, 3760.099248195437, 3765.065915449015, 3770.3756051117743, 3774.5061440756945, 3779.615812774648, 3785.843901121496, 3793.219553124775, 3800.3560097067625, 3808.0765435898556, 3813.7225874933124, 3820.186587552854, 3825.7678320539017, 3832.6218693998303, 3839.652052828662, 3846.301023638688, 3855.1748150073818, 3864.2888700449107, 3872.7185371112887, 3879.596689107751, 3886.7979472281318, 3893.9901897921877, 3898.9925071625366, 3905.445796309138, 3913.661756340914, 3921.1661547135104, 3928.0428565669745, 3933.132673203049, 3939.174758684047, 3945.775378781245, 3950.2146734630337, 3956.516023277533, 3961.106331478863, 3966.4200135150513, 3972.0779150265334, 3977.8700044675747, 3983.8104226343803, 3992.1973371875956, 4000.818364612231, 4009.5560968023997, 4017.2510881136586, 4025.1200337042214, 4033.6824340032163, 4041.482047366937, 4047.989464857154, 4054.175839164697, 4059.579472199971, 4064.0258350748263, 4070.249403969677, 4077.7280796660507, 4085.102975073431, 4092.105510773697, 4098.398640110848, 4104.797586518191, 4109.713623561749, 4115.525199690537, 4122.347888841422, 4130.10746676292, 4136.210369597522, 4144.081923929813, 4152.202659923969, 4158.807007107478, 4166.949967743588, 4176.3783242371255, 4184.660679010621, 4191.925775431977, 4198.524606505984, 4204.877016082835, 4211.991924655188, 4218.66914149775, 4225.894379920831, 4232.541362787702, 4239.759425690303, 4245.6574765074365, 4251.665078052428, 4257.56855167589, 4262.980253386546, 4269.940038500471, 4277.0722208840325, 
4284.712039184735, 4291.745290638626, 4298.551564935563, 4304.654875401599, 4311.011500080289, 4317.51548237392, 4325.175057259569, 4331.387549351381, 4339.242412125712, 4348.071293838873, 4355.134572957812, 4363.722548028855, 4371.6233713547535, 4379.452365358939, 4384.913556548925, 4389.191633219446, 4392.377085612214, 4396.255325094666, 4401.192231077453, 4406.718932933146, 4412.142935364318, 4417.080176031501, 4421.168119568838, 4429.285608082064, 4436.363319374688, 4441.892562034921, 4450.028465596091, 4456.638681730372, 4467.053703604603, 4475.197898588096, 4483.8717696195035, 4490.240811404983, 4496.17096532721, 4501.512238003262, 4507.682246841518, 4514.41699807825, 4519.419113763399, 4524.265346208263, 4530.647197303729, 4538.188405827532, 4544.167656777098, 4550.396514400627, 4562.287932533161, 4573.967499326983, 4583.154511733213, 4593.255668522451, 4604.105809367556, 4612.3930298480145, 4620.242386931371, 4630.648611431588, 4639.216566145926, 4645.259437570468, 4652.495621558766, 4659.715133597257, 4666.106868400471, 4673.74770310671, 4680.977263981335, 4688.659007512467, 4694.902816508124, 4700.240087554424, 4706.586629979226, 4713.734240339941, 4719.67198681931, 4727.149524830137, 4734.537865488608, 4738.9693213712335, 4742.600709739137, 4748.069097758433, 4754.048024222796, 4758.138614694556, 4761.363161172254, 4764.287824448178, 4766.765718194661, 4768.472302897158, 4772.1333250807775, 4778.140624447511, 4784.432424062794, 4789.998982083498, 4795.834541850496, 4801.314334395371, 4807.358950850187, 4813.459289732951, 4819.046598067387, 4825.277397714688, 4831.387652530603, 4837.678471847632, 4843.99795830784, 4851.166430425698, 4858.323465842141, 4864.2859959043235, 4871.17397895828, 4878.478015625643, 4885.951801292379, 4892.9009467023825, 4899.6531239063415, 4906.39029739025, 4911.819845890108, 4917.047230445359, 4921.6837396408955, 4925.940401181482, 4930.56930006079, 4934.3486390765975, 4936.607333438598, 4939.716762101491, 4942.152726381758, 
4945.822348167744, 4950.999455845006, 4956.775877879188, 4962.357815902481, 4967.962791134621, 4973.640693879005, 4979.2773896745875, 4983.214871747783, 4990.319270930896, 4997.078908374307, 5002.177048447781, 5006.361845811085, 5009.851084082571, 5015.429256712817, 5020.004493030124, 5025.309700029515, 5031.861755370551, 5039.614515109188, 5047.101110815375, 5053.864459968983, 5061.581838716641, 5067.906056720303, 5075.423028929472, 5080.70139996678, 5085.005373518919, 5089.126626152068, 5093.137328593887, 5097.346343768947, 5102.092167200395, 5109.963400514358, 5116.255321551138, 5122.390530719636, 5127.232907545349, 5132.627346235659, 5135.078264630505, 5140.559973588018, 5145.801667217381, 5152.25235448874, 5157.893480048924, 5163.403151218812, 5169.587835213865, 5176.023439228092, 5182.402852864241, 5189.225971321105, 5195.167821178739, 5200.665059360467, 5206.161535610428, 5212.314106108835, 5217.332479526459, 5224.757246007833, 5231.012861190276, 5237.585059850004, 5244.3404387280425, 5251.1983345048175, 5256.438547125294, 5261.823255001341, 5266.122946089954, 5271.563048469208, 5277.163359263101, 5282.655912637079, 5290.470225123603, 5296.8476523978725, 5303.585360631793, 5309.02173963872, 5314.931538482158, 5321.651277423001, 5327.521052052287, 5334.5835801206695, 5340.256477921415, 5345.893494393828, 5351.970103477654, 5357.690029940076, 5364.220349741838, 5369.634845812043, 5375.477822872349, 5381.490639264271, 5386.277538502685, 5392.417317263559, 5399.20925927504, 5405.590457932012, 5412.311231774181, 5417.972548964183, 5423.344885032394, 5430.27055698847, 5436.008863441382, 5443.815293344428, 5451.217706941529, 5457.671600923901, 5464.928454957941, 5472.2745816133975, 5480.148346030518, 5487.396036154331, 5494.551965246234, 5501.390816415622, 5506.796476959458, 5510.124536753893, 5515.263696990602, 5521.286125710302, 5527.610238972517, 5534.391278452856, 5539.903061852088, 5545.182511439602, 5552.768077256673, 5559.479916545232, 5565.565448921571, 
5573.140165230809, 5579.78869601286, 5586.435990822559, 5593.1570912356065, 5600.489681311339, 5607.4914154859935, 5615.085220963556, 5622.347436951842, 5629.1900717309545, 5636.735736565067, 5642.8727118177985, 5649.530867208073, 5656.6565113484085, 5663.441935726776, 5670.527316248631, 5677.655200329947, 5684.957891593467, 5692.676666194079, 5699.5009246083355, 5707.144378321542, 5715.083342356128, 5722.509479737396, 5729.223584629867, 5733.915542376195, 5737.223132582963, 5739.760153907725, 5742.948017908532, 5746.584296479717, 5746.584296479717]
average_distance = 1000
Running the function leads to a resulting average pace of
average_pace = average_over_distance(input_datalist, input_distancelist, average_distance)
average_pace = [4.715677705829617, 4.596201931814222, 4.924473000260758, 5.385001220009992, 5.695676885416609, 5.730593560295301]
# import numpy
import numpy as np
import math  # find_nearest below uses math.fabs
# defined a function to find the nearest value to a given one
def find_nearest(array, value):
    idx = np.searchsorted(array, value, side="left")
    if idx > 0 and (idx == len(array) or math.fabs(value - array[idx-1]) < math.fabs(value - array[idx])):
        return array[idx-1]
    else:
        return array[idx]
# defined a function to calculate the average between two points of a list
def mean(numbers):
return float(sum(numbers)) / max(len(numbers), 1)
# defined main function
def average_over_distance(input_datalist, input_distancelist, average_distance):
    output_average_list = []
    ratio_steps = int(input_distancelist[-1]/average_distance)
    distance_steps = range(average_distance,(ratio_steps+2)*average_distance,average_distance)
    for index1, elements1 in enumerate(input_distancelist):
        for index2, elements2 in enumerate(distance_steps):
            if elements1 == find_nearest(input_distancelist,elements2):
                if index2 == 0:
                    cutoff_index_start = 0
                    cutoff_index_end = index1
                else:
                    cutoff_index_start = cutoff_index_end+1
                    cutoff_index_end = index1
                output_average_list.append(mean(input_datalist[cutoff_index_start:cutoff_index_end]))
    # TODO: The last value will always be zero... Reason unclear... going from "distance_steps = range(average_distance,(ratio_steps+1)*average_distance,average_distance)" to "distance_steps = range(average_distance,(ratio_steps+2)*average_distance,average_distance)" does not help
    del output_average_list[-1]
    return output_average_list
Answer: There are some logical errors with this function as well as inefficient techniques.
cutoff_index_start should not be cutoff_index_end + 1; rather, it should be just cutoff_index_end. Python slicing excludes the element at the upper-bound endpoint, so your original code skips the element at each index1.
The double for loop is unnecessary since you know the index at which your intervals are occurring. No need to check against every element in input_distancelist.
np.searchsorted can take an array of values to insert.
With all those points, the code simply becomes:
def average_over_distance(input_datalist, input_distancelist, average_distance):
    output_average_list = []
    ratio_steps = int(input_distancelist[-1] / average_distance)
    distance_steps = range(0, (ratio_steps + 2) * average_distance, average_distance)
    breaks = np.searchsorted(input_distancelist, distance_steps)
    for index2, elements2 in enumerate(distance_steps[1:]):
        cutoff_index_start = breaks[index2]
        cutoff_index_end = breaks[index2 + 1]
        output_average_list.append(np.mean(input_datalist[cutoff_index_start:cutoff_index_end]))
    return output_average_list
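To make point 3 concrete, here is a minimal sketch with made-up numbers (not the poster's data): one vectorized np.searchsorted call over the whole list of step values returns every interval boundary at once, and those boundaries drive the slice-and-average step.

```python
import numpy as np

# Made-up, sorted "distance" samples with matching "data" values
distances = np.array([0.0, 0.4, 0.9, 1.6, 2.1, 2.8, 3.3])
data = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 14.0, 13.0])

# One call handles the whole array of step values
steps = [0, 1, 2, 3, 4]
breaks = np.searchsorted(distances, steps)
print(breaks)  # [0 3 4 6 7]

# Average over each [start, end) interval, as in the rewritten function
averages = [float(np.mean(data[breaks[i]:breaks[i + 1]]))
            for i in range(len(breaks) - 1)]
print(averages)  # [11.0, 13.0, 13.0, 13.0]
```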
You will notice slightly different results; this is due to NumPy's float representation and the logical error I pointed out in #1 above.
"domain": "codereview.stackexchange",
"id": 37000,
"tags": "python, python-3.x"
} |
Issue regarding game velocity | Question: So, this is a thing. I've got a simple little 2D game where a character can run left and right and jump. My problem here is that I've got two velocities, xVel and yVel, that I need to increment as the character runs in a certain direction, and zero out when the player stops moving. Right now, I'm only focusing on xVel. So we have the player class, which contains the movement methods. I've got two booleans to determine when to increment the velocities, goingRight and goingLeft. When going left, goingRight is false and goingLeft is true, and vice versa. The only problem is that when the player doesn't move, he shouldn't have any velocity. But there isn't a way to determine when the player isn't moving.
Now, the above is my setup for velocities and such so that the player can run into a jump and move in the air. I'm getting confused just trying to think about it, and overall, it just feels disorganized. Am I doing this right, and if so, what can I do to improve it? If not, what would be the best approach for adjusting these velocities and making player movement more smooth. I'll post both classes for the game below so that you can analyze what I'm doing wrong. I don't necessarily need code corrected, but I would like a solid understanding and explanation of what I can do to improve the continuity of the game's movement.
Main class:
import com.hasherr.platformer.entity.Player;
import org.lwjgl.LWJGLException;
import org.lwjgl.input.Keyboard;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import static org.lwjgl.opengl.GL11.*;

public class Main {
    private void display() {
        try {
            Display.setDisplayMode(new DisplayMode(1000, 550));
            Display.setTitle("Unnamed Platformer Game");
            Display.create();
        } catch (LWJGLException e) {
            e.printStackTrace();
            System.exit(0);
        }
        // OpenGL
        while (!Display.isCloseRequested()) {
            Display.update();
            Display.sync(60); // sync to 60 fps
            initGL();
            player.update();
            handleKeyboardInput();
        }
        Display.destroy();
    }

    private void handleKeyboardInput() {
        if (!player.goingLeft && !player.goingRight) {
            player.xVel = 0;
        }
        if (Keyboard.isKeyDown(Keyboard.KEY_D)) {
            player.moveRight();
        } else if (Keyboard.isKeyDown(Keyboard.KEY_A)) {
            player.moveLeft();
        } else if (Keyboard.isKeyDown(Keyboard.KEY_SPACE)) {
            player.jump();
        }
    }

    private void initGL() {
        // initial OpenGL items for 2D rendering
        glClear(GL_COLOR_BUFFER_BIT);
        glEnable(GL_TEXTURE_2D);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glOrtho(0, 1000, 0, 550, 1, -1);
        // start rendering player image
        player.grabTexture().bind();
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0);
        glVertex2f(player.xPos, player.yPos);
        glTexCoord2f(1, 0);
        glVertex2f(player.xPos + 150, player.yPos);
        glTexCoord2f(1, 1);
        glVertex2f(player.xPos + 150, player.yPos + 150);
        glTexCoord2f(0, 1);
        glVertex2f(player.xPos, player.yPos + 150);
        glEnd(); // stop rendering this image
    }

    Player player = new Player();

    public static void main(String[] args) {
        Main main = new Main();
        main.display();
    }
}
Player class:
import java.io.IOException;
import org.newdawn.slick.opengl.Texture;
import org.newdawn.slick.opengl.TextureLoader;
import org.newdawn.slick.util.ResourceLoader;

public class Player {
    public Texture playerTexture;

    // Positions & speed
    public float xPos = 20.0f; // This is initial
    public float yPos = 0.0f; // Same as above.
    public float xVel, yVel;
    public static int gravityForce = 6;
    public static int jumpVelocity = 100;
    private static int moveSpeed = 15;
    public boolean isSupported = true; // Once again, initial value.
    public boolean goingRight, goingLeft;

    // movement methods
    public void update() {
        applyGravity();
        checkForSupport();
    }

    private void checkForSupport() {
        if (yPos == 0) {
            isSupported = true;
        } else if (yPos > 0 /* and is not on a platform */) {
            isSupported = false;
        }
    }

    public Texture grabTexture() {
        try {
            playerTexture = TextureLoader.getTexture("PNG", ResourceLoader
                    .getResourceAsStream("resources/test_char.png"));
        } catch (IOException e) {
            e.printStackTrace();
        }
        return playerTexture;
    }

    private void applyGravity() {
        if (!isSupported) {
            yPos -= gravityForce;
            if (yPos < 0) {
                yPos = 0;
            }
        }
    }

    private void printPos(String moveMethod) {
        System.out.println(moveMethod + " X: " + xPos + " Y: " + yPos
                + " Left: " + goingLeft + " Right: " + goingRight);
    }

    // movement methods
    public void moveRight() {
        xPos += moveSpeed;
        goingRight = true;
        goingLeft = false;
        printPos("Moving right!");
    }

    public void moveLeft() {
        xPos -= moveSpeed;
        goingRight = false;
        goingLeft = true;
        printPos("Moving left!");
    }

    public void jump() {
        if (isSupported) {
            yPos += jumpVelocity;
        }
    }

    public void shoot() {
        // do shooty stuff here
    }
}
Answer: The movement of your character is going to feel a bit unnatural in this setup. When you hit the left key, the character will immediately start moving left at full speed; as soon as you stop hitting the key, the character will stop dead. I suggest modeling the character's velocity as well as its position, so that e.g. while holding on the left key, the character's velocity decreases steadily until it reaches a minimum. Likewise, modeling gravity as a constant velocity instead of an acceleration will lead to unnatural looking jumps.
As for the goingLeft and goingRight bools, they seem like an awkward way to manage state, and it will require great care as your control scheme becomes more complex to maintain a consistent state. If you need to maintain this information beyond (or instead of) the velocity I suggest, consider an enum with the values XMOTION_LEFT, XMOTION_STOPPED, XMOTION_RIGHT, or similarly descriptive names. | {
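The suggested model can be sketched language-neutrally. The snippet below (Python, with illustrative constants that are not from the poster's code) shows per-frame acceleration while a key is held and friction-style decay once it is released, so the character ramps up and coasts to a stop instead of starting and stopping dead.

```python
# Illustrative constants only; not taken from the poster's Java code.
MAX_SPEED = 15.0   # top horizontal speed
ACCEL = 2.0        # speed gained per frame while a key is held
FRICTION = 1.0     # speed lost per frame while no key is held

def step_x_velocity(x_vel, left_held, right_held):
    """Advance the horizontal velocity by one frame."""
    if right_held:
        x_vel = min(x_vel + ACCEL, MAX_SPEED)
    elif left_held:
        x_vel = max(x_vel - ACCEL, -MAX_SPEED)
    else:
        # Decay toward zero instead of stopping dead
        if x_vel > 0:
            x_vel = max(x_vel - FRICTION, 0.0)
        else:
            x_vel = min(x_vel + FRICTION, 0.0)
    return x_vel

# Ramp up while "right" is held, then coast to a stop after release
v = 0.0
for _ in range(3):
    v = step_x_velocity(v, False, True)
print(v)  # 6.0
for _ in range(6):
    v = step_x_velocity(v, False, False)
print(v)  # 0.0
```

Each frame the position would then advance by `xPos += x_vel`, which also removes the need for the goingLeft/goingRight booleans: "not moving" is simply `x_vel == 0`.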
"domain": "codereview.stackexchange",
"id": 4161,
"tags": "java, opengl, physics"
} |
[Solved] Is clearing_rotation_allowed parameter actually working? | Question:
Hello all,
I'm playing with different combinations of recovery behaviors. In some cases, making the robot spin to recover is not an option, so the clearing_rotation_allowed parameter is really helpful. But it doesn't seem to prevent the robot from making in-place rotations when I keep all other recovery-related parameters at their default values.
Has anyone else seen this? Or am I doing something wrong?
Originally posted by jorge on ROS Answers with karma: 2284 on 2013-04-29
Post score: 2
Original comments
Comment by jorge on 2013-05-06:
Btw, I created an issue (#38)
Answer:
It's a misspelling in the code. Until the fix is released, renaming clearing_rotation_allowed to clearing_roatation_allowed should work.
Originally posted by jorge with karma: 2284 on 2013-05-11
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by 2ROS0 on 2014-08-05:
This is now fixed, just in case anyone else is wondering. | {
"domain": "robotics.stackexchange",
"id": 14002,
"tags": "navigation, move-base"
} |
Unable to subscribe to topics published by Husky | Question:
I'm fairly confident this is a networking issue, but not sure exactly how to fix it. If I publish to a topic myself on the Husky, I can subscribe to it from a client. However, I can't subscribe to topics that the Husky creates during startup. For instance, when ssh'd into the Husky, I can use rostopic hz /imu/data and I see that the data is being published. However, if I do the same on the client, I don't get any information.
On the client (~/.zshrc), I have set the following information:
. /opt/ros/melodic/setup.zsh
export ROS_IP=10.10.10.115
export ROS_MASTER_URI=http://10.10.10.111:11311
On the Husky (~/.bashrc) I have set the following:
. /opt/ros/melodic/setup.bash
export ROS_IP=10.10.10.111
In the Husky's /etc/ros/setup.bash, I have the following:
# Mark location of self so that robot_upstart knows where to find the setup file.
export ROBOT_SETUP=/etc/ros/setup.bash
# Setup robot upstart jobs to use the IP from the network bridge.
# export ROBOT_NETWORK=br0
# Insert extra platform-level environment variables here. The six hashes below are a marker
# for scripts to insert to this file.
######
export LCM_DEFAULT_URL=udpm://239.255.76.67:7667?ttl=5
export HUSKY_IMU_XYZ='0 -0.15 0.065'
export HUSKY_IMU_RPY='3.1415 0 0'
export HUSKY_LASER_ENABLE=1
# Pass through to the main ROS workspace of the system.
source /opt/ros/melodic/setup.bash
# I added this in an attempt to make things work
export ROS_IP=10.10.10.111
source /home/administrator/startup_ws/devel/setup.bash
export HUSKY_LOGITECH=1
export HUSKY_JOY_DEVICE=/dev/input/js0
export HUSKY_GAZEBO_DESCRIPTION=$(rospack find husky_gazebo)/urdf/description.gazebo.xacro
Here are the values of various environment variables:
eric@cpr-mic09:~$ echo $ROBOT_NETWORK
eric@cpr-mic09:~$ echo $ROS_IP
10.10.10.111
eric@cpr-mic09:~$ echo $ROS_HOSTNAME
I would really appreciate any advice. Thank you!
Originally posted by EricW on ROS Answers with karma: 15 on 2021-02-03
Post score: 0
Answer:
Like many people I struggle with this myself several times, so I eventually created a gist for setting things up correctly:
https://gist.github.com/chfritz/8c2adab45a94e091be77c55b0432ad2e
The key is to know under what name the ROS clients announce themselves to the ROS master, and making sure that all clients that want to talk to each other can resolve those names respectively.
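That resolution check can also be scripted. A small Python sketch (the hostnames below are examples from this thread; substitute whatever rosnode list -a reports on your system):

```python
import socket

def resolvable(name):
    """True if this machine can resolve `name` to an IP address."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

# Substitute the hostnames your own `rosnode list -a` reports:
for name in ["cpr-mic09", "cpr-mic09.local", "10.10.10.111"]:
    print(name, "resolvable:", resolvable(name))
```

Any name that prints False here must be made resolvable (e.g. via /etc/hosts) before topic data will flow to that client.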
Originally posted by chfritz with karma: 553 on 2021-02-03
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by EricW on 2021-02-03:
Thanks so much! I'll try this out and hopefully, it fixes the issue
Comment by EricW on 2021-02-03:
Thanks for the script! It definitely is a lot cleaner than what I was using, but it's still not working for me. I'm in the same situation I was in before. I can see the ROS topics that are being published, but I can't actually get any of the messages on the client (rostopic hz /imu/data says no messages are coming), while when ssh'd into the Husky, I do get messages. I think this is an issue with the Husky's setup.
Comment by chfritz on 2021-02-03:
Have you already tried the steps described in the Troubleshooting comment below the gist?
Comment by EricW on 2021-02-03:
Sorry, should have tried those and not assumed it was Husky specific. I think I'm falling into the second troubleshooting scenario you described (not able to see results of rostopic echo /rosout on the client). I ran rosnode list -a and I get http://cpr-mic09:37661/ for /rosout both while ssh'd into the Husky and while on the client. I'm not really sure how to find out what hostname the topics are actually being published on if those are matching up
Comment by chfritz on 2021-02-03:
can you resolve cpr-mic09 from the client, i.e., what happens when you ping that from the client?
Comment by EricW on 2021-02-03:
If I run ping cpr-mic09, I get cpr-mic09: Name or service not known, but if I do ping cpr-mic09.local, it works.
Comment by EricW on 2021-02-03:
We use a SSH configuration to connect to the robot (we call it val). Not sure if this could be affecting things:
Host val
ForwardAgent yes
ForwardX11 yes
User eric
Hostname cpr-mic09.local
Comment by chfritz on 2021-02-03:
Well there is your problem. Like the Troubleshooting says, you have to be able to resolve all names listed by rosnode list -a. So either change the ROS_HOSTNAME used by the robot, or add cpr-mic09.local to /etc/hosts on the client.
Comment by EricW on 2021-02-04:
Thank you so much! I added 10.10.10.111 cpr-mic09 to /etc/hosts and now it works perfectly! Sorry I didn't get it from your troubleshooting instructions. I needed to lookup what /etc/hosts was actually doing before I realized what was happening | {
"domain": "robotics.stackexchange",
"id": 36042,
"tags": "ros-melodic, husky"
} |
Non Scaled New Actual Data | Question: I am new to Machine Learning and I have a conceptual question.
I have a scaled dataset (scikit-learn and pandas).
After training/testing my algo, I will make new predictions using new actual data which will not be scaled or normalized.
Will this discrepancy be a problem? If so, how should I resolve it?
Best,
Answer: You should save the scaler params used to fit the training set and use the same ones to transform all other data used with the model from then on - whether CV, test or new unseen data.
After training/testing my algo, I will make new predictions using new actual data which will not be scaled or normalized.
No, that won't work. Once you add scaling/normalisation to the training pipeline, the exact same scaling (the same scaling params, not re-calculated ones) should be applied to all input features.
The scikit-learn scalers like e.g. StandardScaler have two key methods:
fit should be applied to your training data
transform should be applied after fit, and should be used on every data set to normalise model inputs.
fit_transform can be used on the training data only to do both in a single step.
If you need to do the training and predictions in different processes (maybe live predictions are on different devices for instance), then you need to save and restore the scaling params. One basic, simple way to do this is using pickle e.g. pickle.dump( min_max_scaler, open( "scaler.p", "wb" ) ) to save to a file and min_max_scaler = pickle.load( open( "scaler.p", "rb" ) ) to load it back. | {
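A minimal end-to-end sketch of that save/restore workflow, using scikit-learn's StandardScaler (the numbers and the file name are arbitrary):

```python
import pickle

import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy training data; values are arbitrary
X_train = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit on training data only

# Persist the fitted scaler alongside the trained model
with open("scaler.p", "wb") as f:
    pickle.dump(scaler, f)

# Later, possibly in another process: restore and reuse the SAME params
with open("scaler.p", "rb") as f:
    scaler = pickle.load(f)

X_new = np.array([[2.5, 350.0]])        # new, unscaled actual data
X_new_scaled = scaler.transform(X_new)  # transform only; never re-fit here
```

The key point is that the restored scaler carries the training set's mean and variance, so new actual data is mapped into exactly the same feature space the model was trained on.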
"domain": "datascience.stackexchange",
"id": 2064,
"tags": "machine-learning, python, scikit-learn, pandas, feature-scaling"
} |
Why can't $\rm N_2$ TEA lasers produce optical breakdown? (sparks in the air) | Question: I've been googling around and I can't find any direct explanation of why N2 TEA lasers are never shown to be used to demonstrate optical(dielectric?) breakdown or sparks in the air.
Would someone please help me understand what it is about N2 TEA lasers that make optical breakdown difficult/impossible?
I appreciate your help.
Answer: The explanation is simpler than you think.
Just like CO2 lasers do not produce a visual breakdown of atmospheric air (or most other IR lasers), TEA lasers operate in the ultraviolet range: 337.1 nm.
This answers in theory why weak lasers would not breakdown air, but why don't strong lasers break down air?
This is because of limitations of electrical properties of the materials used in construction of the laser. There are major two factors that impact the power of a TEA laser:
The atmospheric makeup of the resonator cavity
The dE/dt of the resonator cavity
Paradoxically, the same factors that would increase the power of a TEA laser via changing the atmospheric makeup also force a decrease in dE/dt. This limits the practical power for any given length of laser cavity, and thus many TEA lasers could not achieve atmospheric breakdown under perfect conditions.
This is in combination with the fact that the coherence of TEA lasers is, frankly, doodoo, and with all other factors, it comes down to the simple fact:
The power per area of a TEA laser beam does not exceed the requirement to break down air. It is possible in principle, but the construction required may be hundreds of feet in length to garner enough power, and that is impractical to put under a light N2 vacuum.
"domain": "physics.stackexchange",
"id": 70143,
"tags": "optics, laser"
} |
Looking for a strong Phd Topic in Predictive Analytics in the context of Big Data | Question: I'm going to start a Computer Science phd this year and for that I need a research topic. I am interested in Predictive Analytics in the context of Big Data. I am interested by the area of Education (MOOCs, Online courses...). In that field, what are the unexplored areas that can help me choose a strong topic? Thanks.
Answer: As a fellow CS Ph.D. defending my dissertation in a Big Data-esque topic this year (I started in 2012), the best piece of material I can give you is in a link.
This is an article written by two Ph.D.s from MIT who have talked about Big Data and MOOCs. Probably, you will find this a good starting point. BTW, along this note, if you really want to come up with a valid topic (that a committee and your adviser will let you propose, research and defend) you need to read LOTS and LOTS of papers. The majority of Ph.D. students make the fatal error of thinking that some 'idea' they have is new, when it's not and has already been done. You'll have to do something truly original to earn your Ph.D. Rather than actually focus on forming an idea right now, you should do a good literature survey and the ideas will 'suggest themselves'. Good luck! It's an exciting time for you. | {
"domain": "datascience.stackexchange",
"id": 2783,
"tags": "machine-learning, bigdata, data-mining, statistics, predictive-modeling"
} |
Wood: A Naturally Occurring Composite Material? | Question: In materials science texts, I see wood used an example of a naturally occurring composite material. One of the main components of wood is cellulose, which is a polymer. But what other component makes it a composite?
Thanks for any clarification.
Answer: The two components of wood-as-a-composite are cellulose fibers and lignin, the resin in which the cellulose fibers are embedded. Cellulose furnishes strength in tension and the resin furnishes strength in shear.
"domain": "physics.stackexchange",
"id": 56607,
"tags": "material-science"
} |
Understanding the Coulomb term in the semi-empirical mass formula | Question: Here's a passage I am not understanding:
The tendency to an excess of neutrons at large mass numbers is a Coulomb repulsion effect. Because a given nucleon interacts with only a small number of its neighbours through the strong force, the amount of energy tied up in strong-force bonds between nucleons increases just in proportion to $A$.
I'm having difficulty in understanding as to why the amount of energy tied up in strong force bonds is proportional to $A$ (Atomic mass number).
Answer: The semi-empirical mass formula you are talking about gets its name from the fact that parts of it are obtained empirically.
As for the first term, Weizsäcker observed that the binding energy per nucleon ($E_B / A$) was approximately constant for large nuclei. Therefore he concluded that
$$ \frac{E_B}{A} \sim const. \quad\Rightarrow\quad E_B \sim const. A $$ | {
"domain": "physics.stackexchange",
"id": 64706,
"tags": "nuclear-physics, coulombs-law, strong-force"
} |
Retrogradation movement of planet Mars relatively from Earth by Copernic | Question: I have a simple question : on the following figure:
I don't understand, on the right figure, why there is a progressive shift to the right when we go from step 1) to step 9). I guess there is an angle between the orbital plane of Mars and the orbital plane of Earth. Otherwise, we couldn't clearly see the retrograde movement if the 2 planes were identical, could we? We would just see a simple line drawn in the sky, which would be the projection of the curve (on the right figure) onto a single Oy axis.
Could anyone explain to me whether this difference between the two orbital planes is the cause of the shape of this curve (which is also due to the relative position of Mars from Earth)?
Any help is welcome.
Answer: You are right. If Mars orbited in exactly the same plane as the Earth, instead of an S or a loop, we would see Mars moving prograde relative to the stars along the ecliptic, then slowing and stopping, moving retrograde for a few months as Earth overtakes it, still on the ecliptic, then moving prograde again.
But Mars doesn't orbit in the same plane, so it has some motion perpendicular to the ecliptic. When these motions are combined, the usual effect is a "loop" or sometimes an "S".
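The combined in-plane and out-of-plane motion can be illustrated with a toy model (a sketch with assumed round numbers - circular orbits, Mars at 1.524 AU with a 686.98-day period and 1.85° inclination - not part of the original answer): the geocentric ecliptic latitude of Mars wanders above and below the ecliptic, which is what opens the retrograde track into a loop or an "S".

```python
import math

# Toy model: circular, coplanar Earth orbit (1 AU, 365.25 d) and an inclined
# circular Mars orbit (1.524 AU, 686.98 d, 1.85 deg). All values are assumed.
def geocentric_lat(t_days):
    we = 2 * math.pi / 365.25          # Earth's angular rate, rad/day
    wm = 2 * math.pi / 686.98          # Mars's angular rate, rad/day
    inc = math.radians(1.85)           # inclination of Mars's orbit
    ex, ey = math.cos(we * t_days), math.sin(we * t_days)
    mx = 1.524 * math.cos(wm * t_days)
    my = 1.524 * math.sin(wm * t_days) * math.cos(inc)
    mz = 1.524 * math.sin(wm * t_days) * math.sin(inc)
    # Ecliptic latitude of Mars as seen from Earth (Earth sits in z = 0)
    return math.degrees(math.atan2(mz, math.hypot(mx - ex, my - ey)))

lats = [geocentric_lat(t) for t in range(0, 687, 30)]
print(min(lats), max(lats))  # Mars visibly wanders off the ecliptic
```

If the inclination is set to zero, every latitude comes out exactly 0 and the apparent path collapses onto the ecliptic, as described above.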
(note that the axial tilt of the Earth is not relevant here) | {
"domain": "astronomy.stackexchange",
"id": 4577,
"tags": "earth, mars, near-earth-object"
} |
rosrun rqt_graph rqt_graph (Import error: no module named rospkg) (Is this related to anaconda?) | Question:
The problem started when I executed rosrun as mentioned in the question; then I installed rospkg using the command
sudo apt-get install python-rospkg
and rospkg got installed in /usr/lib/python2.7/dist-packages/rospkg, but after that the import error was still showing. I am also using anaconda on my laptop, and python2.7 is the default for anaconda. One interesting point: I installed rospkg for python2.7, but when I import rospkg in python3.4, it works.
Originally posted by NEO on ROS Answers with karma: 1 on 2016-09-07
Post score: 0
Original comments
Comment by Dirk Thomas on 2016-09-07:
Please add anaconda to the title and tags since the problem is very likely related to that.
Comment by SL Remy on 2016-09-08:
What are the permissions on /usr/lib/python2.7/dist-packages/rospkg ?
Comment by NEO on 2016-09-08:
default for all user
Comment by SL Remy on 2016-09-09:
I've had a case where I was not able to read some of the directories in dist-packages so when I did an import of that module, python was unable to do so. This is the reason why I was asking about the permissions.
Comment by SL Remy on 2016-09-09:
I also notice that on my machine, python2 and python2.7 somehow do different things.... So python -c "import rospkg" works, but python2.7 -c "import rospkg" does not... (just in case this isn't weird enough!)
Answer:
This error occurs because only Python packages in /opt are added to the PYTHONPATH environment variable. I was reading about python-rospkg and found that this package is installed under /usr. You only have to add the line "export PYTHONPATH=$PYTHONPATH:/usr/lib/python2.7/dist-packages" to your .bashrc file. In my case, that was the solution that fixed the error.
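A minimal shell sketch of the fix (the path is taken from the answer above; verify it matches your installation before making the change permanent in ~/.bashrc):

```shell
# Extend PYTHONPATH for the current shell session
export PYTHONPATH="$PYTHONPATH:/usr/lib/python2.7/dist-packages"

# Confirm the path is present; if so, append the same export line to ~/.bashrc
echo "$PYTHONPATH" | grep -q "/usr/lib/python2.7/dist-packages" && echo "path added"
```

After re-sourcing .bashrc, `python -c "import rospkg"` should succeed as long as the interpreter being run is the one the package was installed for.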
Originally posted by JuanmaOnse with karma: 16 on 2016-12-06
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by adayoegi on 2018-02-14:
It works, thanks a lot! | {
"domain": "robotics.stackexchange",
"id": 25703,
"tags": "ros, rosrun, anaconda"
} |
Klein-Gordon quantization and SHO analogy | Question: I understand that the procedure to quantize the Klein-Gordon field is to manipulate it in such a way as to bring out the simple harmonic oscillator behavior of the field. This is done by Fourier transforming the space variable of the field $\phi\left(\vec{x},t\right)$ and plugging back into KG's equation.
The result of this is to obtain a SHO equation of motion for each mode,
$$
\left(\frac{d^2}{dt^2}+\omega_p^2\right)\phi\left(\vec{p},t\right)=0.
$$
The conjugate momentum given by $\pi\left(\vec{p},t\right)=\dot{\phi}\left(\vec{p},t\right)$ is also the Fourier transform of the space variable of the conjugate momentum $\pi\left(\vec{x},t\right)=\dot{\phi}\left(\vec{x},t\right)$.
Now, to quantize the SHO in non-relativistic quantum mechanics, we impose commutation relations. Since it is the modes that behave like oscillators, we should impose
$$
\left[\phi\left(\vec{p},t\right),\pi\left(\vec{p}',t\right)\right]=i\hbar\delta\left(\vec{p}-\vec{p}'\right).
$$
But this is not what's done in textbooks. The commutation relations are instead imposed on the actual fields
$$
\left[\phi\left(\vec{x},t\right),\pi\left(\vec{x}',t\right)\right]=i\hbar\delta\left(\vec{x}-\vec{x}'\right),
$$
which in turn implies,
$$
\left[\phi\left(\vec{p},t\right),\pi\left(\vec{p}',t\right)\right]=i\hbar\left(2\pi\right)^3\delta\left(\vec{p}+\vec{p}'\right).
$$
The factor $\left(2\pi\right)^3$ could be included by convention in the first commutation relation. However, the plus sign is what is bugging me. This, of course, also changes the commutation relation between the ladder operators,
$$
\left[a_\vec{p},a^\dagger_{\vec{p}'}\right]=\left(2\pi\right)^3\delta\left(\vec{p}+\vec{p}'\right)
$$
Is it just a convention which doesn't affect the physics or does it have deeper implications?
Thanks
Answer: My confusion between the coupled HOs and Klein-Gordon quantization was due to the following.
In coupled HOs we begin, for example, with the Lagrangian
$$
L=\sum_{n=0}^{N+1}\left[\frac{1}{2}m\dot{q}_n^2-\frac{k}{2}\left(q_{n+1}-q_{n}\right)^2\right]
$$
with $q_0=q_{N+1}=0$, and then proceed to uncouple the EOM with the variable transformation $q_n\left(t\right)=\sum_{j=0}^{N+1}Q_j\left(t\right)\sin{n p_j}$ to obtain
$$
L=\sum_{n=0}^{N+1}\left[\frac{1}{2}m\dot{Q}_n^2-\frac{1}{2}m \omega_n^2Q^2_n\right].
$$
In analogy to the SHO we now can impose the commutation relations on the mode coordinates:
$$
\left[Q_m,P_n\right]=i\delta_{mn},
$$
and
$$
\left[Q_m,Q_n\right]=\left[P_m,P_n\right]=0.
$$
In the same way, with KG, we would do a change of variables using the Fourier transform
$$
\phi\left(\vec{x},t\right)=\int\frac{d^3p}{\left(2\pi\right)^3}\phi\left(\vec{p},t\right)e^{-i\vec{p}\cdot\vec{x}},
$$
and get the uncoupled Lagrangian
$$
L=\frac{1}{2}\dot{\phi}^2-\frac{1}{2}\omega^2_p\phi^2.
$$
where now $\phi=\phi\left(\vec{p},t\right)$. Immediate analogy would lead to
$$
\left[\phi\left(\vec{p},t\right),\pi\left(\vec{p}',t\right)\right]=i\left(2\pi\right)^3\delta\left(\vec{p}-\vec{p}'\right).
$$
and this would lead to the inconsistencies I mentioned above.
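(A quick numerical aside, not part of the original answer: the Fourier modes of a real field are not independent real variables - the discrete transform of a real array obeys $\phi(-p) = \phi(p)^*$, which is the root of the sign issue discussed next. A minimal check:)

```python
import numpy as np

# The Fourier modes of a *real* field satisfy phi(-p) = conj(phi(p)).
rng = np.random.default_rng(0)
phi_x = rng.standard_normal(16)   # a real "field" sampled at 16 points
phi_p = np.fft.fft(phi_x)         # its complex Fourier modes

# Compare the mode at -p (index -k mod N) with the conjugate of the mode at +p
symmetric = all(np.allclose(phi_p[-k % 16], np.conj(phi_p[k])) for k in range(16))
print(symmetric)
```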
The problem is that this commutation relation is wrong. The reason being that the original variable transformations from $q\rightarrow Q$ made the $Q$'s real while the modes of the field $\phi\left(\vec{p},t\right)$ are not. Thus the right commutation relations are
$$
\left[\phi\left(\vec{p},t\right),\pi^\dagger\left(\vec{p}',t\right)\right]=i\left(2\pi\right)^3\delta\left(\vec{p}-\vec{p}'\right).
$$
This corrects the sign of all other commutation relations. Btw, this is supported by the fact that the Lagrangian should be a real function and thus the uncoupled Lagrangian for the field should actually read
$$
L=\frac{1}{2}\dot{\phi}\dot{\phi}^\dagger-\frac{1}{2}\omega^2_p\phi\phi^\dagger,
$$
where $\phi=\phi\left(\vec{p},t\right)$. | {
"domain": "physics.stackexchange",
"id": 52475,
"tags": "quantum-field-theory, second-quantization, klein-gordon-equation"
} |
What is the difference between the balance of linear momentum and Cauchy's momentum equation? | Question: I am currently working on a presentation about Cauchy's momentum equation or (also known as?) Cauchy's first law. I need to base my presentation on an equation given by my professor: $ \rho \ddot{u} - \nabla \cdot P = f$. Where $\rho$ is the mass density, $P$ the first Piola-Kirchhoff stress tensor, $f$ an external volume force and $u$ the displacement field, so $\ddot{u}$ is the acceleration. Online I found https://en.wikiversity.org/wiki/Continuum_mechanics/Balance_of_linear_momentum this on Wikiversity about the balance of linear momentum. That looks pretty similar to me, except that it uses the body force density, whatever that might be, and the Cauchy stress tensor. Since I study math and not physics I have a pretty hard time dealing with those equations. My task is to derive the equation $ \rho \ddot{u} - \nabla \cdot P = f$ and talk about an inverse problem where it can be used. If someone could explain to me the connection and/or difference between the balance of linear momentum and Cauchy's equation of motion, that would be super helpful. Thank you a lot already.
Answer: Continuum mechanics can be described with different descriptions, using different sets of coordinates:
Lagrangian description, with fields representing physical quantities expressed as a function of the reference space coordinates (that can be interpreted as labels associated to material points: constant reference coordinates, same material point) and time as independent variables, $f^0(\mathbf{r_0},t)$;
Eulerian description, with fields representing physical quantities expressed as a function of the physical space coordinates and time as independent variables, $f(\mathbf{r},t)$;
arbitrary description.
It looks like you're trying to find the balance equations of a continuous medium using reference coordinates when you talk about Cauchy equation.
Namely, starting from the integral equation of mass and linear momentum for a material volume, expressed in physical space
$\dfrac{d}{dt}\displaystyle \int_{V} \rho = 0$
$\dfrac{d}{dt}\displaystyle \int_{V} \rho \mathbf{u} = \int_V \rho \mathbf{g} + \oint_{\partial V} \mathbf{t_n} = \int_V \rho \mathbf{g} + \oint_{\partial V} \mathbf{\hat{n}} \cdot \mathbb{T} = \int_V \rho \mathbf{g} + \int_{V} \nabla \cdot \mathbb{T} $,
being $\mathbb{T}$ Cauchy stress tensor. Changing coordinates from physical to reference space coordinates, it's possible to recast the integral equations as
$ \displaystyle \int_{V^0} \dfrac{\partial }{\partial t}\bigg|_{\mathbf{r_0}} ( \rho J ) = 0$
$\displaystyle \int_{V^0} (\rho J) \dfrac{\partial }{\partial t}\bigg|_{\mathbf{r_0}} \mathbf{u} = \int_{V^0} \rho J \mathbf{g} + \oint_{\partial V^0} \mathbf{\hat{n}^0} \cdot \mathbb{P} = \int_{V^0} \rho J \mathbf{g} + \int_{V^0} \nabla_0 \cdot \mathbb{P} $,
having used Nanson's formula for the transformation of the surface integral, being $\mathbb{P}$ the nominal (first Piola-Kirchhoff) stress tensor, $\nabla_0 \cdot$ the divergence in the reference space, and $J$ the determinant of the gradient of the transformation from the reference to the physical coordinates, and exploiting the mass conservation $\frac{\partial}{\partial t} \big|_{\mathbf{r_0}} (\rho J) = 0$ to write $\rho(\mathbf{r_0},t) J(\mathbf{r_0},t) = \overline{\rho}(\mathbf{r_0})$ constant.
Thus, remembering that the acceleration field of a material particle is $\mathbf{a} = \dfrac{\partial^2}{\partial t^2}\big|_{\mathbf{r_0}} \mathbf{u}$, the differential equations read either
using reference coordinates:
$\rho J = \overline{\rho}$
$\overline{\rho} \mathbf{a} = \overline{\rho} \mathbf{g} + \nabla_0 \cdot \mathbb{P}$
using physical coordinates (in convective form):
$D_t \rho = - \rho \nabla \cdot \mathbf{u}$
$\rho D_t \mathbf{u} = \rho \mathbf{g} + \nabla \cdot \mathbb{T}$. | {
"domain": "physics.stackexchange",
"id": 92511,
"tags": "continuum-mechanics"
} |
How does the synaptic cleft exist? | Question: I'm not asking why the synaptic cleft exists, i.e. what function it holds, rather how.
So I know that the neurotransmitter diffuses across it, it is 20-40 nm wide and contains basal lamina (in NMJs at least), but I cannot find any allusion as to what causes the gap; how the two membranes don't just touch, and how so consistent a distance is created across neurons.
Google has so far failed me on this one, though perhaps I am not searching well enough, any help would be greatly appreciated.
Answer: There are membrane proteins that act as structural components of the gap (i.e. the synapses aren't just floating there, they are anchored to each other via membrane proteins).
https://en.wikipedia.org/wiki/Neuroligin
https://en.wikipedia.org/wiki/Neurexin
The most common examples are neurexin (expressed on the pre-synaptic terminal) and neuroligin (expressed on the post-synaptic terminal). Not surprisingly, these proteins are involved in synapse formation in the developing brain, as well as the adult brain (in plastic processes where new synapses are formed). To a first approximation, as synapses are growing, their membrane proteins bind their partners and become anchored. This also leads to intracellular signalling and further maturation of the synapse. That's how synapses know where to attach.
"domain": "biology.stackexchange",
"id": 5374,
"tags": "neuroscience, synapses"
} |
How do experimentalists find their nanoscale creations? | Question: I am reading up on hyperbolic metamaterials and investigating processes like sputtering and electron deposition to create substrates only nanometers thick. For reference, see this paper by Lu et al from 2013: http://www.nature.com/nnano/journal/v9/n1/abs/nnano.2013.276.html. My question may be somewhat naive, but once they have created the material, how do they actually know where it is and how do they maneuver it to complete their experiment? The structures may only have dimensions of up to 100 nanometers, and this seems far too small for a pair of tweezers!
Answer: Although the exact methods can vary from experiment to experiment, the solutions I'm familiar with tend to rely on probabilistic methods.
For example, let's say you figured out how to create nanodots and now you want to examine them under a scanning tunneling microscope for whatever reason. Generally what you would do is create a large number of nanodots (not just one), get them into a solution, purify them to get rid of most contaminants, then get them on a substrate. If the concentration is large enough, it doesn't matter where you start looking on the substrate; sooner or later you bump into one of them (the higher the concentration, the sooner you will find one).
There are some tricks to increase the probability of finding your nanoscale creations. One of these is to grow them at specific points on a substrate and make sure your creations are sticking to those points. For example, this article describes a method to measure the conductivity of single-walled carbon nanotubes. They first put a cobalt thin-film catalyst onto the substrate where they want the nanotubes to grow. Then, after growing the tubes, they deposit electrodes across them. The way this is done practically is to put a large number of catalyst pads on a surface, then right after the growing process deposit wires next to each and every one of the catalyst pads in a predetermined pattern. At this point you don't know and don't really care where the nanotubes are exactly; you just have to know where they are probably going to end up. If you have enough catalyst pads and wires at the right distance from each and every one of them, then some of the wires are bound to connect to one and exactly one of the nanotubes. Now it's just a question of scanning through the substrate to find one, but since you created the substrate, you know where to look, and you can even deposit markers on it to help you navigate it like a map.
You can also imagine a variation of this, where you grow your nano-structures in a test tube, then put sticky pads on a substrate that your nano-structures like to stick to. Submerge the substrate into a solution of your nanocreations for a while, and when you pull it out one or two of them are bound to stick to each of those pads -- no tweezers required. A great candidate for such a glue is DNA, since you can code what it sticks to. (You can read more about this here.)
"domain": "physics.stackexchange",
"id": 34802,
"tags": "condensed-matter, experimental-physics, solid-state-physics, experimental-technique"
} |
Mechanism of arene side chain oxidation by permanganate | Question: When treated with hot, concentrated acidic $\ce{KMnO4}$, arenes are oxidised to the corresponding carboxylic acids. For example, toluene is oxidised to benzoic acid.
I've tried to examine how this happens, using the mechanism of oxidation of double bonds via cyclic intermediate as a reference, but I can't manage to cook up a satisfactory one.
In an older book, I have read that there is no (known) mechanism for many organic oxidation reactions. I'm inclined to think that this may have changed.
So, is there a mechanism for this? If so, what is it?
If not, what are the hurdles in finding this mechanism? For example, what problems are there with other proposed mechanisms (if they exist)?
Answer: Some general information on side-chain oxidation in alkylbenzenes is available at Chemguide:
An alkylbenzene is simply a benzene ring with an alkyl group attached to it. Methylbenzene is the simplest alkylbenzene.
Alkyl groups are usually fairly resistant to oxidation. However, when they are attached to a benzene ring, they are easily oxidised by an alkaline solution of potassium manganate(VII) (potassium permanganate).
Methylbenzene is heated under reflux with a solution of potassium manganate(VII) made alkaline with sodium carbonate. The purple colour of the potassium manganate(VII) is eventually replaced by a dark brown precipitate of manganese(IV) oxide.
The mixture is finally acidified with dilute sulfuric acid.
Overall, the methylbenzene is oxidised to benzoic acid.
Interestingly, any alkyl group is oxidised back to a -COOH group on the ring under these conditions. So, for example, propylbenzene is also oxidised to benzoic acid.
Regarding the mechanism, a Ph.D. student at the University of British Columbia did his doctorate on the mechanisms of permanganate oxidation of various organic substrates.1 Quoting from the abstract:
It was found that the most vigorous oxidant was the permanganyl ion ($\ce{MnO3+}$), with some contributing oxidation by both permanganic acid ($\ce{HMnO4}$) and permanganate ion ($\ce{MnO4-}$) in the case of easily oxidized compounds such as alcohols, aldehydes, or enols.
The oxidation of toluene to benzoic acid was one of the reactions investigated, and a proposed reaction mechanism (on pp 137–8) was as follows. In the slow step, the active oxidant $\ce{MnO3+}$ abstracts a benzylic hydrogen from the organic substrate.
$$\begin{align}
\ce{2H+ + MnO4- &<=> MnO3+ + H2O} & &\text{(fast)} \\
\ce{MnO3+ + PhCR2H &-> [PhCR2^. + HMnO3+]} & &\text{(slow)} \\
\ce{[PhCR2^. + HMnO3+] &-> PhCR2OH + Mn^V} & &\text{(fast)} \\
\ce{PhCR2OH + Mn^{VII} &-> aldehyde or ketone} & &\text{(fast)} \\
\ce{aldehyde + Mn^{VII} &-> benzoic acid} & &\text{(fast)} \\
\ce{ketone + Mn^{VII} &-> benzoic acid} & &\text{(slow)} \\
\ce{5 Mn^V &-> 2Mn^{II} + 3Mn^{VII}} & &\text{(fast)}
\end{align}$$
The abstraction of a benzylic hydrogen atom is consistent with the fact that arenes with no benzylic hydrogens, such as tert-butylbenzene, do not get oxidised.
Reference
Spitzer, U. A. The Mechanism of Permanganate Oxidation of Alkanes, Arenes and Related Compounds. Ph.D. Thesis, The University of British Columbia, November 1972. DOI: 10.14288/1.0060242. | {
"domain": "chemistry.stackexchange",
"id": 4,
"tags": "organic-chemistry, reaction-mechanism, aromatic-compounds, organic-oxidation"
} |
How to separate Transient and Steady-State Expression from Periodic Summation Response? | Question: Background
My question comes from here; it's about the response of a 1st-order RC low-pass filter to an arbitrary periodic input.
How to determine the transient response of a circuit to causal periodic inputs?
Problem
Suppose if I have input signal with period of $T = 10s$
$\displaystyle u(t) = 2t (\theta(t) - \theta(t - 5)) + 0 (\theta(t - 5) - \theta(t - 10) )$
Repeating this over all periods for t from 0 to infinity gives a factor $1-e^{-sT}$ in the denominator of $U(s)$.
and then the system transfer function
$ H(s) = \dfrac{1/sC}{R + 1/sC} $
In order to find its output we need $F(s)$, the Laplace transform of the system's response to a single period of the input. Take its inverse transform, multiply by the unit step (since both the signal and the system are causal), and time-shift it by $nT$.
$ F(s) = H(s) U(s) $
$ \displaystyle \begin{align} f(t-nT) \theta(t-nT) &= 2\ \theta(t - nT) \left( RC (e^{-(t-nT)/(RC)} -1) + (t - nT)\right) \\ &+ 2\ \theta(t-5 - nT) \left( (5 - RC) e^{-(t-5-nT)/(RC)} + RC - (t - nT) \right) \end{align}$
Then from periodic summation properties of laplace transform we get
$ \displaystyle y(t) = \mathcal{L}^{-1}\left[\frac{1}{1-e^{-sT}} F(s) \right] = \sum_{n=0}^{\infty} f(t-nT) \theta(t-nT) $
Assume $R = 50\,\mathrm{k\Omega}$ and $C = 100\,\mathrm{\mu F}$, giving a time constant of $5\,s$. This is the plot
Sum[2UnitStep[t-10n] ( 5 ( e^(-(t-10n)/(5) ) - 1) + t - 10n ) + 2UnitStep[t - 5 -10n] ( (5-5) e^(-(t - 5 - 10n)/(5) ) + 5 - (t - 10n) ), {n, 0, 5}]
Now I change the time constant to $20s$. This is the plot if we sum from n = 0 to 20.
Sum[2UnitStep[t-10n] ( 20 ( e^(-(t-10n)/(20) ) - 1) + t - 10n ) + 2UnitStep[t - 5 -10n] ( (5-20) e^(-(t - 5 - 10n)/(20) ) + 20 - (t - 10n) ), {n, 0, 20}]
Question
How to separate its transient and steady state response? Such as,
$ \displaystyle y(t) = y_{tr}(t) + y_{ss}(t) $
At what time t does the transient vanish?
How many period T of input signal does it take for it to be vanished?
I couldn't find an approach, since it's difficult to tell when the overlapping magnitudes become steady. That depends on the time constant, and each summation term only decays asymptotically to 0 as t goes to infinity.
Analytic expression is what I hope for.
Answer: Let $x_0(t)$ be the part of the input signal $x(t)$ in the interval $[0,T]$:
$$x_0(t)=\begin{cases}x(t),&t\in [0,T]\\0,&\textrm{otherwise}\end{cases}$$
The pseudo-periodic input signal is then given by
$$x(t)=\sum_{n=0}^{\infty}x_0(t-nT)\tag{1}$$
If $h(t)$ is the impulse response of a causal and stable LTI system, and if $y_0(t)=(x_0\star h)(t)$ is its response to $x_0(t)$, the response to $x(t)$ is given by
$$y(t)=\sum_{n=0}^{\infty}y_0(t-nT)\tag{2}$$
Since $x(t)$ starts at $t=0$, the response $y(t)$ is composed of a steady-state component and of a transient component. The latter decays to zero because we have assumed that the LTI system is stable. The steady-state response is the response that would be observed if the input were periodic, i.e., if it had been switched on at $t=-\infty$. Consequently, the steady-state response is
$$y_s(t)=u(t)\sum_{n=-\infty}^{\infty}y_0(t-nT)\tag{3}$$
where $u(t)$ denotes the unit step function. From $(2)$ and $(3)$, the transient response must be
$$y_t(t)=-u(t)\sum_{n=-\infty}^{-1}y_0(t-nT)\tag{4}$$
such that
$$y(t)=y_s(t)+y_t(t)\tag{5}$$
holds.
Note that in $(3)$ and $(4)$ we don't actually need to evaluate infinite sums. For all practical purposes, the number of relevant past periods (i.e., the number of negative indices $n$) can be chosen to correspond to a few time constants of the system.
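As a numerical sketch of eqs. (2)-(4), using the question's values $T=10\,$s and $\tau=RC=5\,$s (the time grid and the truncation at six past periods are my assumptions):

```python
import numpy as np

T, tau, dt = 10.0, 5.0, 0.01          # period, time constant, grid step
t = np.arange(0.0, 8 * T, dt)

def y0(t):
    # Closed-form single-period response f(t)*theta(t) from the question,
    # with R C = tau.
    a = np.where(t >= 0, 2 * (tau * (np.exp(-t / tau) - 1) + t), 0.0)
    b = np.where(t >= 5, 2 * ((5 - tau) * np.exp(-(t - 5) / tau) + tau - t), 0.0)
    return a + b

y    = sum(y0(t - n * T) for n in range(0, 9))    # eq. (2): causal response
y_ss = sum(y0(t - n * T) for n in range(-6, 9))   # eq. (3): add past periods
y_tr = y - y_ss                                   # eq. (4): transient part

print(float(y_tr[0]), float(y_tr[-1]))  # large near t=0, negligible by t=80 s
```

Truncating the steady-state sum at six past periods corresponds to $12\tau$ here, which is far more than enough for the exponential tails to die out.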
For the input signal and the LTI system given in the question (and with time constants $\tau=RC=20$ and $\tau=RC=5$, respectively), we obtain the following decompositions of the output signal: | {
"domain": "dsp.stackexchange",
"id": 12085,
"tags": "signal-analysis, continuous-signals, laplace-transform, periodic"
} |
What is the distance from Alpha Centauri to Barnard's Star? | Question: Alpha Centauri AB is the closest star system to Earth (4.366 ly), followed closely by Barnard's star (5.988 ly). The closest star system to Alpha Centauri is Luhman 16 (3.8 ly from α Cen). So I am wondering, what is the distance from α Centauri AB to Barnard's star, and more generally, from one star to a different one (all below 10 pc)?
Answer: To find the distance from one star to another, we need three things for both of the stars: their right ascensions, declinations, and the distance from Earth to those stars.
So, let's get those things:
From the Wikipedia page on Alpha Centauri:
$RA = 14^h\:39^m\:36.49400^s$
$DEC = -60^{\circ}\:50'\:0.23737''$
$R = 4.37\:\rm{ly}$ (you gave 4.366, some other sources give 4.367... I'm going to stick with 4.37)
and for Barnard's Star:
$RA = 17^h\: 57^m\: 48.49303^s$
$DEC = +04^{\circ}\: 41'\: 36.2072''$
$R = 5.958 \: \rm{ly}$ (again, you gave a slightly different value, I'm sticking with Wikipedia for now)
where RA is right ascension, DEC is declination, and R is radial distance from Earth to the target star.
Now, by themselves, it is relatively difficult for us to obtain an actual distance. What I would do is convert these to rectangular coordinates, and then it's a matter of using the 3-d distance formula.
First, however, we need to convert RA and DEC into units like radians or degrees.
For right ascension, we can use the general formula:
$degrees = 15 (h + \dfrac{m}{60} + \dfrac{s}{3600})$
and for declination:
$degrees = deg + \dfrac{m}{60} + \dfrac{s}{3600}$
(when the declination is negative, however, multiply all terms in the formula by -1)
So, for Alpha Centauri AB, we have:
$RA = 15 (14 + \dfrac{39}{60} + \dfrac{36.49400}{3600}) \approx 219.902^{\circ}$
$DEC = -1 (60 + \dfrac{50}{60} + \dfrac{0.23737}{3600}) \approx -60.833^{\circ}$
and for Barnard's Star, we have:
$RA = 15 (17 + \dfrac{57}{60} + \dfrac{48.49303}{3600}) \approx 269.452^{\circ}$
$DEC = 04 + \dfrac{41}{60} + \dfrac{36.2072}{3600} \approx 4.693^{\circ}$
Now, to convert from spherical to rectangular coordinates, we have to define which of RA, DEC, and R can be assigned to $r$, $\theta$, and $\phi$. R should be $r$ - that's pretty straightforward. Since RA can be thought of as "celestial longitude", we'll assign it to $\theta$, and thus declination will be $\phi$.
To clarify, I'm defining $\phi$ as the angle from the xy-plane - so a $\phi$ of $\dfrac{\pi}{2}$ would mean pointing straight upwards. I know some sources define $\phi$ as the angle complementary to that angle (so, $\dfrac{\pi}{2}$ minus the angle from the xy-plane), but for astronomical purposes, I think the definition I'm using is more intuitive and easier to work with.
We can then use the conversions:
$x = r\cos{\theta}\cos{\phi}$
$y = r\sin{\theta}\cos{\phi}$
$z = r\sin{\phi}$
So, for Alpha Centauri AB:
$x = 4.37 \cos{219.902^{\circ}} \cos{−60.833^{\circ}} \approx -1.634\: \rm{ly}$
$y = 4.37 \sin{219.902^{\circ}} \cos{−60.833^{\circ}} \approx -1.366\: \rm{ly}$
$z = 4.37 \sin{−60.833^{\circ}} \approx -3.816\: \rm{ly}$
and for Barnard's Star:
$x = 5.958 \cos{269.452^{\circ}} \cos{4.693^{\circ}} \approx -0.057\: \rm{ly}$
$y = 5.958 \sin{269.452^{\circ}} \cos{4.693^{\circ}} \approx -5.938\: \rm{ly}$
$z = 5.958 \sin{4.693^{\circ}} \approx 0.487\: \rm{ly}$
And now, finally, we can use the distance formula for 3-d:
$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2}$
So, the distance between Alpha Centauri AB and Barnard's Star is:
$d = \sqrt{(-1.634 + 0.057)^2 + (-1.366 + 5.938)^2 + (-3.816 - 0.487)^2} \approx\mathbf{6.473\,ly}$
Well, that was certainly tedious - but it's a process that you can standardize to pretty much any star, or really, any two astronomical objects:
First, convert RA and DEC to degrees.
Second, assign R, RA, and DEC to the spherical coordinates $r$, $\theta$, and $\phi$.
Third, convert spherical coordinates to rectangular coordinates.
Lastly, use the distance formula with the two sets of $x$, $y$, and $z$ coordinates.
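The steps above can be collected into a short script (a sketch: the function names are mine, and the inputs are the RA/DEC/distance values computed above):

```python
import math

def equatorial_to_cartesian(ra_deg, dec_deg, r):
    """Steps 2-3: spherical (RA, DEC, distance) -> rectangular coordinates."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (r * math.cos(ra) * math.cos(dec),
            r * math.sin(ra) * math.cos(dec),
            r * math.sin(dec))

def star_distance(star_a, star_b):
    """Step 4: 3-d distance between two (RA, DEC, r) triples."""
    return math.dist(equatorial_to_cartesian(*star_a),
                     equatorial_to_cartesian(*star_b))

# Step 1 done by hand: RA = 15*(h + m/60 + s/3600), DEC = deg + m/60 + s/3600
alpha_cen = (219.902, -60.833, 4.37)    # distances in light-years
barnard   = (269.452,   4.693, 5.958)
print(round(star_distance(alpha_cen, barnard), 2))  # ~6.47 ly
```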
Hope this helps. :) | {
"domain": "astronomy.stackexchange",
"id": 5991,
"tags": "star, distances"
} |
Does the energy density of space cause gravitational attraction beyond what would be computed from an object's mass alone? | Question: I am starting from the assumption that the gravitational warping of spacetime increases its volume, so a spherical region of space with a fixed surface area would be able to fit a larger number of telephone booths inside it if it contained a large mass than it would without the mass. John Rennie's answer to this question: When a massive object warps the space around it, does the amount of space expand? and some of the answers to this question: Does curved spacetime change the volume of the space? give me the impression that gravitational warping does increase volume, but some of the other answers I have seen suggest otherwise.
A second assumption I am using is that energy exerts a gravitational pull and empty space has some intrinsic energy embedded within it. I am basing this on what I have read about kugelblitz black holes, and the vacuum energy of space.
So, when we examine a region of space defined by a surface area around a large mass, does that region have more gravity than what would be attributed to the mass alone due to the relatively larger volume of space inside the region created by the mass's warping of spacetime?
Answer:
Does the energy density of space cause gravitational attraction beyond what would be computed from an object's mass alone?
No. A positive cosmological constant (equivalent to positive dark energy) reduces gravitational attraction. It has a repulsive effect.
From the deSitter-Schwarzschild metric, the Newtonian potential of a mass $M$ in the presence of a cosmological constant $\Lambda$ is
$$\varphi=-\frac{M}{r}-\frac{\Lambda r^2}{6}$$
and the gravitational field is
$$-\nabla\varphi=\left(-\frac{M}{r^2}+\frac{\Lambda r}{3}\right)\hat r$$
in geometrical units where $G=c=1$.
At the surface of the Earth, the cosmological constant reduces gravitational acceleration by the unmeasurable amount of about $1$ part in $10^{30}$.
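That figure can be checked with a quick computation (a sketch: the value $\Lambda \approx 1.1\times10^{-52}\ \mathrm{m^{-2}}$ and the Earth data are assumed inputs, not from the original answer):

```python
# Ratio of the repulsive Lambda term to the Newtonian term of -grad(phi),
# evaluated at the Earth's surface in geometrical units (G = c = 1).
G, c = 6.674e-11, 2.998e8
Lambda = 1.1e-52                  # cosmological constant, 1/m^2 (assumed)
M = G * 5.972e24 / c**2           # Earth's mass expressed in meters
r = 6.371e6                       # Earth's radius, m

newton_term = M / r**2            # attractive term M/r^2
lambda_term = Lambda * r / 3      # repulsive term (Lambda r)/3
ratio = lambda_term / newton_term
print(ratio)                      # ~2e-30, i.e. about 1 part in 10^30
```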
The repulsive effect of the cosmological constant / dark energy is what is causing the observed acceleration of the expansion of the universe, according to the current Lambda-CDM model of cosmology.
In General Relativity, both energy density and pressure cause spacetime curvature. Lorentz invariance requires a positive energy density of the vacuum to be accompanied by a negative pressure of the vacuum. The antigravity of the negative pressure dominates the gravity of the positive energy density, causing repulsion. | {
"domain": "physics.stackexchange",
"id": 72642,
"tags": "general-relativity, gravity"
} |
What causes a rainbow, its colours and its shape? | Question: What is the cause of rainbows? Do they appear due to rainfall or some other natural phenomenon? What makes it form a semi-circle in the atmosphere, and what produces its colours?
Answer: Short answer: A rainbow is formed when light enters a drop of water or an ice crystal and gets refracted and reflected back into an observer's eye.
Longer Answer: Light travels at different velocities depending on the media involved; it travels slower through water than it does through air, for instance. As light enters a raindrop or ice crystal, it first gets refracted. However, not all the frequencies of light get refracted at the same angle. The colors with the shorter wavelengths (e.g. blue, indigo, and violet) get refracted more than the longer wavelengths (red). This spreads the colors out just like passing through a prism. The light then gets reflected off of the back of the raindrop, gets refracted again as it passes out of the raindrop, and travels to the observer's eye.
The reason it is circular is that the light refracted and reflected back to the observer does so at a specific range of angles, about 40 to 42 degrees.
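That 40-42 degree range can be reproduced from Snell's law alone: the total deviation of a ray that undergoes one internal reflection has a minimum, and the rainbow angle is 180 degrees minus that minimum deviation. The sketch below is my own addition, assuming refractive indices of water of roughly 1.331 (red) and 1.343 (violet); the function name and the grid search are illustrative choices:

```python
import numpy as np

def rainbow_angle(n):
    """Rainbow angle (degrees) for one internal reflection in a drop of index n.

    Deviation at incidence angle i: D(i) = 2*i - 4*r + 180 deg,
    where sin(i) = n*sin(r) (Snell's law). The rainbow sits at min(D).
    """
    i = np.linspace(0.01, np.pi / 2 - 0.01, 100_000)  # incidence angles (rad)
    r = np.arcsin(np.sin(i) / n)                      # refraction angle inside drop
    D = np.degrees(2 * i - 4 * r) + 180               # total deviation (deg)
    return 180 - D.min()

red = rainbow_angle(1.331)     # assumed index for red light in water
violet = rainbow_angle(1.343)  # assumed index for violet light
print(f"red: {red:.1f} deg, violet: {violet:.1f} deg")
```

Red comes out near 42 degrees and violet near 40.6, which is why red sits on the outside of the bow.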
In order to see a rainbow, the sun (or other light source) must be directly behind the observer. A straight line is formed by the light source, the observer's eyes, and the center of the rainbow. All rainbows are actually circles. We usually see it as an arc because the earth intercepts the circle. (BTW, because of this, no two people, even standing shoulder to shoulder, can see the same exact rainbow. Each rainbow is being created by different raindrops.) If you're ever in an airplane passing over a cloud with the sun above you (as opposed to near the horizon), you may see a complete rainbow. It's pretty awesome. Here's an illustration: | {
"domain": "earthscience.stackexchange",
"id": 717,
"tags": "meteorology, atmosphere"
} |
What experimental proof of quantum superposition do we have? | Question: My question is both naive and subtle. Naive because I don't know much more than the layman about physics and in particular quantum physics. Subtle because physics is an attempt to model the world, and as a computer scientist, with a strong interest in machine learning but also formal logic and models, a model is just that, a model. Not necessarily reality. It is not because a model fits reality that the model is the truth about reality.
From my understanding, we know that:
the quantum physics model has not been contradicted on the notion of superposition that it introduces
there are experiments that can be explained using the quantum model where the classical model fails
Am I correct to say that this only proves that the classical model is just that, a model, and therefore incomplete? And that for those cases quantum mechanics has better predictive power?
Now the question: Has it been somehow proven that a physical entity can at some point in time and space have a dual state (independently of the model)? Or is it only that quantum mechanics is the only known model that allows us to explain things we otherwise couldn't?
I would like to know if objects of our world can be in two states at the same time or that it is just more practical for predictability purposes to model things this way.
Answer: "Being in superposition" is not an objective property of a quantum mechanical state. Quantum mechanical states live in a Hilbert space, where, since it is a vector space, every state can be expressed as the sum of other states. That is what we mean by "superposition": The sum of two states is again a state.
But as long as you don't choose a basis of this vector space as your reference for what an "unsuperposed" state is, asking whether a state is "in superposition" doesn't make any sense. A state of definite position is not a superposition of other states of definite position, but it is an infinite superposition of states of definite momentum. Every state is a superposition of states that belong to a basis where it itself isn't a basis vector.
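The basis-dependence described here can be made concrete with a two-level (spin-1/2) toy example. This numerical sketch and its variable names are my own, not part of the original answer; `plus_x` plays the role of a state that is a single basis vector in one basis while being an equal-weight superposition in another:

```python
import numpy as np

# z-basis: |0> = (1, 0), |1> = (0, 1)
up_z = np.array([1.0, 0.0])
down_z = np.array([0.0, 1.0])

# x-basis: |+> and |-> are built from the z-basis vectors
plus_x = (up_z + down_z) / np.sqrt(2)
minus_x = (up_z - down_z) / np.sqrt(2)

# |+> expanded in the x-basis is a single basis vector ("unsuperposed" there)...
coeffs_in_x = [plus_x @ plus_x, minus_x @ plus_x]
# ...but expanded in the z-basis it is an equal-weight superposition:
coeffs_in_z = [up_z @ plus_x, down_z @ plus_x]

print("x-basis coefficients:", np.round(coeffs_in_x, 3))
print("z-basis coefficients:", np.round(coeffs_in_z, 3))
```

The same vector yields coefficients (1, 0) in one basis and (0.707, 0.707) in the other, so "is it a superposition?" has no basis-independent answer.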
So "quantum superposition" is not some sort of isolated postulate of quantum mechanics; it is built right into the basic mathematical structure of the space of states. You cannot remove "superposition" from this formulation of the theory any more than you can remove real numbers from classical mechanics. So there is no experimental test like "Quantum mechanics with superposition" vs. "Quantum mechanics without superposition" where you could compare the predictions of two well-defined theories.
Also, note that this superposition really is about a technical property in the mathematical formalism: Our ability to form sums of states. The formalism itself makes no direct claim about how you should think about this, and indeed different quantum interpretations may disagree whether "the object is in both states at once" is really the correct natural language interpretation of the mathematical fact in the formalism. But since (most) quantum interpretations do not change the experimental predictions of quantum mechanics, none of these different ontologies of quantum superposition can be experimentally tested.
Therefore, every experimental test of quantum mechanics is an experimental test of "quantum superposition", if you so wish. The notion of superposition cannot be separated from the rest of quantum mechanics, it is too fundamental for that. Whether that "really means" an object "is in two states at the same time" is not a question physics can answer. | {
"domain": "physics.stackexchange",
"id": 98612,
"tags": "quantum-mechanics, hilbert-space, quantum-interpretations, superposition, schroedingers-cat"
} |
What methods exist to calculate the density of states in the continuum of a molecule? | Question: Say I have an arbitrary molecule in the Born-Oppenheimer approximation, and furthermore say that I can approximate the molecule as having only one active electron. What methods exist to calculate the density of states as a function of the energy of an electron in the continuum (that is, with positive energy)?
Answer: The continuum states are different in several aspects.
First, there are a countably infinite number of bound molecular states vs an uncountably infinite number of states in any finite range of the continuum. Thus, the ratio of the total number of bound states to the number of states in even a small range at the start of the continuum is effectively zero. So in any normalization in which the density of states in the continuum is finite, the density below the first ionization potential is effectively zero.
Second, the continuum states are free, and therefore at infinity the energy eigenstates will resemble plane waves. Since these states cover all space, the potential from the molecule has a negligible effect. Consider calculating $\langle \phi| V |\phi\rangle$ where $V$ is the potential due to the molecular nuclei and bound electrons. Since $\phi$ is spread over all space, this is like asking what happens if, when calculating $\langle\phi|\phi\rangle$, we integrate only over a finite region. Compared to the infinite size of space, this is truly negligible.
Therefore the density of states in the continuum after the first ionization energy (and before the second one) can be approximated by:
$$D(E) \propto \sqrt{E-E_0}$$
where $E_0$ is the ionization edge.
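As a quick numerical illustration of this square-root scaling, one can count 3D particle-in-a-box plane-wave states and fit the exponent of their density. This sketch is my own addition (grid size and binning are arbitrary choices) and assumes the free electron is well modeled by box states:

```python
import numpy as np

# Particle-in-a-box states have E = nx^2 + ny^2 + nz^2 in units of h^2/(8 m L^2).
# Their density of states should approach D(E) ~ sqrt(E) for large E.
nmax = 60
n = np.arange(1, nmax + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
E = (nx**2 + ny**2 + nz**2).ravel().astype(float)
E = E[E < nmax**2]  # keep only energies where the spherical octant is complete

# Equal-width bins, so counts per bin are proportional to D(E)
counts, edges = np.histogram(E, bins=40)
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit log D(E) = a*log(E) + const; the sqrt law predicts a = 1/2
a, _ = np.polyfit(np.log(centers), np.log(counts), 1)
print(f"fitted exponent: {a:.2f}  (sqrt law predicts 0.50)")
```

The fitted exponent comes out close to 1/2, with small deviations from finite-lattice surface corrections.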
The x-ray absorption of molecules, or x-ray photoelectron spectroscopy (XPS), is probably the closest you'll have to experimental measurements which you can try to work backwards to check any density of state calculations. Here is a graph of x-ray absorption of common gas atoms/molecules which shows the strong effects of the ionization edge. Note that the edges there are showing when a new atomic/molecular energy level is now able to reach the continuum, so this is demonstrating the strong edge between continuum vs bound states.
Here is a paper looking at XPS for solids and trying to work backwards to the bound state levels. They need to know the density of states in the continuum to do this, and comment:
"... the final state electrons are ~ 1250 eV into the continuum and the lattice potential affects them very little. Therefore, the appropriate final state density will be proportional simply to $\epsilon^{1/2}$" (where $\epsilon$ is the energy of the free electron)
Beyond this initial approximation $\sqrt{E-E_0}$, there will also be structure in the density of states due to combinatorics of arranging energy in the bound states. Further combinatorics and tradeoffs of energy between the free electrons occurs at the second ionization energy, and so on.
The main point is that the density of states in the continuum should be completely determined by calculating the ionization energies, and the energy levels of the bound states of the remaining ion. The energy levels of the free electron and ion can be considered separately.
Update:
Here is an x-ray absorption overview, which also shows some fine structure within ~50 eV of an edge. Absorption is mostly dominated by single transitions, and the more electrons that change energy level, the more suppressed the matrix element usually is, so this type of structure in the absorption spectrum is likely showing more about the bound-state density than the final state. | {
"domain": "physics.stackexchange",
"id": 17580,
"tags": "quantum-mechanics, molecules, density-of-states"
} |
Mass Moment of Inertia of combined objects | Question:
I have a small circular ring placed on a circular disc. The outer radius of the ring is the same as the radius of the disc. I have worked out the difference in moment of inertia of the combined shape to be 1/2 * M(ring) * {R1^2 + R2^2}. Is this correct?
Worked out as follows :-
For disc--> I(1) = I(disk) = 1/2 * M(disc) * R(disk)^2
For ring--> I(ring) = 1/2 * M(ring) * {R1^2 + R2^2}
R1= inner radius of ring
R2 = outer radius of ring
Total MOI of combined shape -->
I(2) = I(disk) + I(ring)
What I want to find is I(2) - I(1) = 1/2 * M(ring) {R1^2 + R2^2}
Is this method correct??
Note: 1. This is for the Torsion Pendulum experiment
2. The minor shapes like nuts and bolts are neglected.
Answer: $I_\text{ring} = I_\text{disk}(R_2) - I_\text{disk}(R_1)$.
The trick is figuring out the mass.
The mass of an $R_2$-sized disk with the same areal density as the ring would be $M_{R_2} = M\,\dfrac{\pi R_2^2}{\pi R_2^2 - \pi R_1^2}$
and the mass of the $R_1$-sized disk would be $M_{R_1} = M_{R_2}\,\dfrac{\pi R_1^2}{\pi R_2^2}$.
So $I_\text{ring} = \dfrac{1}{2} M_{R_2} R_2^2 - \dfrac{1}{2} M_{R_1} R_1^2 = \dfrac{1}{2}\,\dfrac{M\left(\pi R_2^4 - \pi R_1^4\right)}{\pi R_2^2 - \pi R_1^2}$
I guess the pis can come out:
$= \dfrac{1}{2} M\,\dfrac{R_2^4 - R_1^4}{R_2^2 - R_1^2}$
Ugh, this means:
$= \dfrac{1}{2} M\,\dfrac{\left(R_2^2 + R_1^2\right)\left(R_2^2 - R_1^2\right)}{R_2^2 - R_1^2} = \dfrac{1}{2} M\left(R_2^2 + R_1^2\right)$
So, yep. | {
"domain": "physics.stackexchange",
"id": 20035,
"tags": "homework-and-exercises, moment-of-inertia"
} |
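The closed form $\tfrac12 M(R_1^2 + R_2^2)$ for a uniform ring in the moment-of-inertia answer above can also be checked numerically by sampling points uniformly over the annulus. This Monte Carlo sketch is my own addition; the mass and radii values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

M, R1, R2 = 2.5, 0.3, 1.0  # hypothetical ring mass, inner and outer radii

# Sample radii uniformly over the annulus area R1 <= r <= R2.
# For uniform areal density, the CDF in r is (r^2 - R1^2)/(R2^2 - R1^2),
# so r = sqrt(R1^2 + u*(R2^2 - R1^2)) with u uniform on [0, 1).
u = rng.random(1_000_000)
r = np.sqrt(R1**2 + u * (R2**2 - R1**2))

I_mc = M * np.mean(r**2)                  # I = integral of r^2 dm = M * <r^2>
I_exact = 0.5 * M * (R1**2 + R2**2)       # closed form derived above

print(f"Monte Carlo: {I_mc:.4f}, closed form: {I_exact:.4f}")
```

The two agree to Monte Carlo precision, since for this sampling $\langle r^2\rangle = (R_1^2 + R_2^2)/2$ exactly in expectation.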