anchor | positive | source |
|---|---|---|
apt update fails/ cannot install pkgs even after updating keys | Question:
While trying to run: apt update I encountered the same problem described in https://answers.ros.org/question/325039/apt-update-fails-cannot-install-pkgs-key-not-working/
After following what @gvdhoorn suggested and trying to run apt update again, I get the following error:
Get:31 http://packages.ros.org/ros/ubuntu xenial/main amd64 Packages [3.150 kB]
Err:31 http://packages.ros.org/ros/ubuntu xenial/main amd64 Packages
Writing more data than expected (3154295 > 3149706) [IP: 64.50.236.52 80]
I tried this on two different PCs with Ubuntu 16.04 and ROS Kinetic installed. Is this related to the repository having trouble, or is something wrong on my machine? Does anyone else have the same problem?
Originally posted by matteolucchi on ROS Answers with karma: 26 on 2019-07-25
Post score: 0
Answer:
What solved the issue was to run sudo apt-get clean before running sudo apt-get update
Originally posted by matteolucchi with karma: 26 on 2019-07-26
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by duck-development on 2019-07-26:
So if your problem is solved, just close the question. | {
"domain": "robotics.stackexchange",
"id": 33512,
"tags": "ros, apt, ros-kinetic, update"
} |
Time difference in two clocks | Question: Two identical (and synchronized) clocks C1 & C2, are at same location A. Clock C2 is then moved from location A to another location B which is distance D away from A. Thereafter both the clocks remain at their respective locations, i.e. in same frame of reference.
Q1. As I understand, C1 at A & C2 at B, may give the different time readings, but will the rate of change of time be same in both the clocks? A Yes/No answer will do.
Q2. If the answer to Q1 is Yes then, just because of displacement of clock C2 from point A to point B, what is the equation for the difference in the time between the two clocks. If it depends upon how C2 was moved from A to B, please assume the simplest case because for the purpose of my question, it does not matter how it was moved (slowly, or fast..), so if it helps, you may consider a straight line move from A to B at a constant speed. Obviously, it needs to accelerate from A and then needs to stop at B.
Answer: Q1: Yes. The clocks end up at rest relative to one another and therefore (in flat spacetime) tick at the same rate.
Q2: the time difference completely depends on path used. The difference can be made as close to $0$ as you like by moving Clock C2 suitably slowly (but not exactly $0$ and of course never less than $0$). The difference can be made as close to $\infty$ as you like by sending C2 on a near-$c$ joyride for a long time.
For the specific case of Clock C2 moving at constant velocity, covering distance $D$ in time $T,$ the time difference (amount by which C1 is ahead of C2) is $\Delta t=T-\sqrt{T^2-(D/c)^2}$ (draw the spacetime diagram and compare the lengths of paths—thinking explicitly about acceleration is unnecessary). Again, $\Delta t$ can be made as tiny as you want as long as you're willing to move C2 slowly enough: $\lim_{T\to\infty}\Delta t=0.$ The maximum $\Delta t$ achievable with this straight path is however limited to $\Delta t<D/c$ due to the condition $D/T<c,$ since the clock can't travel faster than light. | {
"domain": "physics.stackexchange",
"id": 92259,
"tags": "special-relativity, speed-of-light"
} |
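The constant-velocity formula in the answer above can be checked numerically. This is a small illustrative sketch (the distances and times are arbitrary choices of mine, not from the question):

```python
import math

def clock_offset(D, T, c=299_792_458.0):
    """Amount by which C1 ends up ahead of C2 after C2 covers
    distance D (metres) in coordinate time T (seconds) at constant speed."""
    return T - math.sqrt(T**2 - (D / c)**2)

# Slow trip: 1000 km over one day -> the offset is a few tens of picoseconds.
slow = clock_offset(1e6, 86400.0)
# Fast trip: the same distance in 0.01 s (about c/3) -> offset nears the D/c bound.
fast = clock_offset(1e6, 0.01)
```

The two evaluations illustrate the answer's limits: the offset can be made arbitrarily small by moving slowly, and for this straight path it always stays below $D/c$.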
Listing employees whose employee number is divisible by some given number | Question: Here I want to improve the code in addListOfEmployees and mainMethod. Is there a way to do so?
import java.util.LinkedList;
import java.util.List;
import java.util.Scanner;
import java.util.stream.Collectors;
public class AddEmployees extends Employee {
public static List<Employee> addListOfEmployees(long upperLimit) {
List<Employee> list = new LinkedList<>();
for (long i = 0; i < upperLimit; i++) {
list.add(new Employee(i));
}
return list;
}
public static List<Employee> getAllNumbersDividedByProvidedNumber(long upperLimit, int divideBy) {
return AddEmployees.addListOfEmployees(upperLimit).parallelStream()
.filter(Employee -> Employee.getNumber() % divideBy == 0).collect(Collectors.toList());
}
public static void mainMethod() {
Scanner sc = null;
try {
sc = new Scanner(System.in);
System.out.println("enter upper limit as lower is zero");
long upperLimit = sc.nextInt();
System.out.println("enter the number you want to divide with");
int divideBy = sc.nextInt();
for (Employee e : AddEmployees.getAllNumbersDividedByProvidedNumber(upperLimit, divideBy)) {
System.out.println(e.getNumber());
}
} catch (Exception ex) {
ex.printStackTrace(System.out);
} finally {
if (sc != null)
sc.close();
}
}
public static void main(String[] args) {
mainMethod();
}
}
Employee Class
package com.lambada.filter_predicatemethod;
public class Employee {
private long number;
public Employee() {
}
public Employee(long number) {
this.number = number;
}
public long getNumber() {
return number;
}
}
Edit
I think this achieves the same result, but I'm not sure this much chaining is good style:
public static List<Employee> getEmployeeDividedBy(long upperLimit, long dividedby) {
return LongStream.range(0, upperLimit).mapToObj(Employee::new).collect(Collectors.toList()).stream()
.filter(Employee -> Employee.getNumber() % dividedby == 0).collect(Collectors.toList());
}
Answer: Yes, there is.
AddEmployee.java:
private AddEmployees() {}
public static List<Employee> createListOfEmployees(long upperLimit) {
return LongStream.range(0, upperLimit)
.mapToObj(Employee::new)
.collect(Collectors.toList());
}
public static List<Employee> getAllNumbersDividedByProvidedNumber(long upperLimit, int divideBy) {
return createListOfEmployees(upperLimit).parallelStream()
.filter(Employee -> Employee.getNumber() % divideBy == 0).collect(Collectors.toList());
}
Note that the constructor is declared private since it is useless in this context. Also, addListOfEmployees is not the best possible name; I used createListOfEmployees instead.
Also, since you are printing employees, it would be very nice if Employee overrode the toString method.
All in all, I had this in mind:
Employee.java:
public class Employee {
private final long number;
public Employee() {
this(0L);
}
public Employee(long number) {
this.number = number;
}
public long getNumber() {
return number;
}
@Override
public String toString() {
return "[Employee ID: " + number + "]";
}
}
AddEmployees.java:
import java.util.List;
import java.util.Scanner;
import java.util.stream.Collectors;
import java.util.stream.LongStream;
public class AddEmployees extends Employee {
private AddEmployees() {}
public static List<Employee> createListOfEmployees(long upperLimit) {
return LongStream.range(0, upperLimit)
.mapToObj(Employee::new)
.collect(Collectors.toList());
}
public static List<Employee> getAllNumbersDividedByProvidedNumber(long upperLimit, int divideBy) {
return createListOfEmployees(upperLimit).parallelStream()
.filter(Employee -> Employee.getNumber() % divideBy == 0).collect(Collectors.toList());
}
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
System.out.println("Enter the number of employees: ");
int numberOfEmployees = sc.nextInt();
System.out.print("Enter the divisor: ");
int divisor = sc.nextInt();
getAllNumbersDividedByProvidedNumber(numberOfEmployees, divisor)
.forEach(System.out::println);
}
}
Additional advice
You can shorten your getEmployeeDividedBy a bit:
public static List<Employee> getEmployeeDividedBy(long upperLimit, long dividedby) {
return LongStream.range(0, upperLimit)
.filter(l -> l % dividedby == 0)
.mapToObj(Employee::new)
.collect(Collectors.toList());
}
Note that I removed .collect(Collectors.toList()).stream() since it is not required. Also, I first filter, and map only after that since this prunes away some pointless computation.
Hope that helps. | {
"domain": "codereview.stackexchange",
"id": 20552,
"tags": "java"
} |
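The filter-before-map advice at the end of the answer above generalizes beyond Java streams. This is an illustrative Python sketch of mine (not part of the review) showing the same ordering idea:

```python
def employees_divisible(upper_limit, divisor):
    """Filter the raw numbers first; build an object only for each survivor."""
    return [{"number": n} for n in range(upper_limit) if n % divisor == 0]

# Filtering first constructs only upper_limit // divisor + 1 objects instead of
# upper_limit, mirroring the LongStream version that filters before mapToObj.
```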
Given points on a 2D plane, find line that passes through the most points | Question: Could someone give some feedback on the following code from the perspective of OO design and coding style:
Question: Given n points on a 2D plane, find the Line with maximum number of points that lie on it. Similar to https://leetcode.com/problems/max-points-on-a-line/description/. I was given Point interface, empty Line class, empty Solution class and was asked to fill in some functions.
My code:
public interface Point
{
public double getX();
public double getY();
}
public class Line
{ // added by myself.
private Point p1, p2;
public Line(Point p1, Point p2) {
this.p1 = p1;
this.p2 = p2;
}
}
public class Solution
{
// Write the function here
public Line maxPointsLine(Point[] points) {
    if(points == null || points.length == 0) return null;
    if(points.length == 1) return new Line(points[0], points[0]);
    Map<Double, Integer> map = new HashMap<>();// O(n), n = points.length. n-1
    Line result = null;
    int maxPoints = 0;
    for(int i = 0; i < points.length; i++) { // One Point
        map.clear();
        int overlap = 0;
        int countSameX = 1;
        int sameXIndex = i;
        for(int j = i+1; j < points.length; j++) { // the second Point
            double x = points[j].getX() - points[i].getX(); // difference in x
            double y = points[j].getY() - points[i].getY(); // difference in y
            if(x == 0 && y == 0) {
                overlap++;
            }
            if(points[j].getX() == points[i].getX()) {// slope is infi,
                // slope (1) finite, (2) infinite,
                countSameX++;
                sameXIndex = j;
                continue;
            }
            double slope = y/x;
            if(map.containsKey(slope)) map.put(slope, map.get(slope)+1);
            else map.put(slope, 2);
            if(map.get(slope)+overlap > maxPoints) { // each line slope and points[i]
                // update result and maxPoints
                maxPoints = map.get(slope)+overlap;
                result = new Line(points[i], points[j]);
            }
        }
        if(countSameX > maxPoints) { // line parallel to Y coordinate
            // update result and maxPoints
            maxPoints = countSameX;
            result = new Line(points[i], points[sameXIndex]);
        }
    }
    return result;
}
}
In the above codes, I added some comments based on questions I was asked.
(I did this coding question long ago. just came into my mind. ) In fact, the interviewer told me that didn’t have good coding style and lack of OO design. suggestions are welcomed for me to improve myself. Thanks.
By the way, as for coding style, there are plugins following standards like https://google.github.io/styleguide/javaguide.html. I just did not fix coding style issues manually due to lack of time.
As for OO design, I was given empty classes. I just added some functions to solve problem I was given.
I really did not find any big issues in my codes. Confused and frustrated.
Answer:
Disclaimer
I'm a C# dev, not a Java dev. It's possible I'll make minor syntax mistakes in this answer, but the point of the answer is to address the intent of OOP, not the exact syntax of Java.
Adherence to OOP - part 1
Other than having defined a Line and Point class, there is no OOP approach in this solution. This is the biggest red flag:
public class Solution
If your solution is implemented in a single class, then it's not really making the best of an OOP approach. You'd expect an OOP solution to be a collection of classes.
When you start with OOP, the first thing you should do is split responsibilities. As you have put everything into a single class with a single method; that implies that you think there is only a single responsibility. I disagree. There are several independent parts of logic needed for this problem:
Defining a line by supplying two points (point A and point B => line AB)
Checking if another point is on the same line (C and line AB => boolean)
Tracking which points are all on the same line.
Testing the entire collection of points to find the line with the highest point count.
4 responsibilities suggests that you are likely going to need at least 4 classes, each with a single responsibility.
Note: That's a theoretical estimate. The interview question is simple for the sake of example, so many of these can arguably be condensed. I can definitely see an argument for combining the last two bullet points.
We'll get back to this point when we address the math itself.
Coding style
I'm having trouble following your approach. I've grown a bit rusty in math over the years, but I feel like the bigger issue here is your coding style. It's horribly unreadable. Some examples:
There is one empty line in your entire method.
This doesn't really make things easily readable. Line breaks don't have an effect on program performance, so use them freely. Separate smaller logical "chapters", such as declarations, initializations, and flow logic (if, for, etc.)
In your defense, the style guide you referenced also seems to avoid line breaks. However, their code usually consists of very terse and very simple examples.
As you can see, "real" code isn't as neat as example code. You need to pay more attention to readability if the code becomes more complex (alphanumerically, not necessarily logically).
Your comments aren't always helpful
Some examples:
Map<Double, Integer> map = new HashMap<>();// O(n), n = points.length. n-1
if(points[j].getX()==points[i].getX()) {// slope is infi,
// slope (1) finite, (2) infinite,
At a guess, I'd say you wrote these comments for yourself during development. And that's fine, I do that too.
But try to put yourself in my shoes now. I'm looking at your code, and all I see is a bunch of complicated mathematical logic. There is no explanation of the intention of the algorithm, and the existing comments make things more confusing instead of explaining it.
This doesn't need to be done via comments (external documentation takes more effort but can be as helpful); but comments are an easy first attempt at documenting the code.
You have defined a Line class, but you are not using it until after you've established that a given pair of points defines the line with the most points on it.
That's putting the cart before the horse. You should be using the Line class to handle the logic of checking if a point is on it.
You've basically "faked" OOP by putting the result of your (non-OOP) method in a class, while leaving the method devoid of any real OOP principles.
The math
I can see hints that you understand the mathematical side of things. You've e.g. avoided the symmetry trap (checking A,B and then B,A).
The slope calculation is a clever approach. I would've solved it differently, i.e. by calculating the equation of the AB line (y = ax + b, you can essentially define a line by storing the values of a and b), and then testing to see if other points fit the equation.
I'm going to use your approach (slope calculation), but I'm reshuffling the approach. I'm trying to focus on easy OOP principles.
Adherence to OOP - part 2
Here's my countersuggestion for a better OOP approach:
Take two points (the same way you did it, avoiding symmetrical operations)
Define a Line based on these two points.
Store the two points in a list (instead of two separate fields)
For all remaining points, check if they are on the same line.
If so, then add this point to the point list in the Line.
Store the lines for later retrieval
At the end, take the line with the longest list of points.
You'll find that the underlying mathematics are exactly the same as yours, but the order of operations is changed to a more segregated design:
We contain line-specific logic in the Line class.
We can separate the overarching logic (starting from a given pair AB) from the inner logic (checking all other points to see if they are on AB).
A quick implementation example:
Take two points (the same way you did it, avoiding symmetrical operations).
List<Line> listOfLines = new List<Line>();
for(int a = 0; a < points.length; a++) //point A
{
for(int b = a+1; b < points.length; b++) // point B
{
Define a Line based on these two points.
var lineAB = new Line(points[a], points[b]);
Store the lines for later retrieval
listOfLines.Add(lineAB);
Note the changed Line class:
public class Line
{
public List<Point> Points;
public Line(Point a, Point b) {
this.Points = new List<Point>();
this.Points.Add(a);
this.Points.Add(b);
}
}
For all remaining points, check if they are on the same line. If so, then add this point to the point list in the Line.
for(int c = b+1 ; c < points.length ; c++) //point C
{
if(lineAB.ContainsPoint(points[c]))
{
lineAB.Points.Add(points[c]);
}
}
Note the methods in the Line class:
public bool ContainsPoint(Point c)
{
var a = this.Points[0];
var b = this.Points[1];
//If all X values are equal, they are on the same line:
if(a.GetX() == b.GetX() && a.GetX() == c.GetX())
return true;
//If all Y values are equal, they are on the same line:
if(a.GetY() == b.GetY() && a.GetY() == c.GetY())
return true;
//If AB and AC have the same slope, they are on the same line:
return CalculateSlope(a,b) == CalculateSlope(a,c);
}
private double CalculateSlope(Point p1, Point p2)
{
double xDiff = p2.GetX() - p1.GetX();
double yDiff = p2.GetY() - p1.GetY();
return yDiff / xDiff;
}
At the end, take the line with the longest list of points:
Line lineWithMostPoints = null;
for (Line l : listOfLines) {
if (
lineWithMostPoints == null
|| l.Points.Length > lineWithMostPoints.Points.Length
)
lineWithMostPoints = l;
}
return lineWithMostPoints;
Summary of changes
This is just a quick-fire list of why this reworked example uses a lot more OOP:
Most calculation logic operates in scope of the smaller possible module, e.g. two points, or a line and a point. During these calculations, there is no additional hassle from having to use collections, thus unburdening the code complexity.
The calculation of slopes is a separate method.
The logic (which checks if point C lies on line AB) is separated, and much more intuitive (compared to storing int countSameX and Map<Double, Integer> map and then using those to calculate your result).
Comments explain why certain code exists (ContainsPoint() is a good example of this) and help with readability.
Variable names have been slightly improved. Yours weren't too bad, but condensing everything into the same method caused you to suffer from persistent variable names. E.g. notice how ContainsPoint() refers to A,B,C as specific point with specific meanings, but the underlying CalculateSlope() method simply uses P1 and P2 because it doesn't care about whether you passed A, B or C. Using separate methods gives you the option to rename parameters to better suit the scope of the current method.
Overall, the code is more self-documenting than before. Though more verbose, it's easier to understand the intention of a method by looking at its body.
Notice that it's not just a matter of comments and variable names. The method names of the additional methods also help to divide the logic into chapters. I did not need to use comments to explain the CalculateSlope() method, as its name inherently reveals its purpose.
Improvements to still add
Ideally, you may want to make the Points in the Line class private, and add specific methods to add extra points to it and retrieve its length.
You could change ContainsPoint to immediately add the point if it fits on the line. Rename the method accordingly, if you do this!
Even though you're skipping the simple symmetry pitfall (AB,BA), there is a similar pitfall for lines with already established points (e.g. ABDE, DE).
Let's say you first check line AB. As it turns out, C is not on AB, D and E are both on AB.
A few iterations later, you're checking line BD. Based on your earlier calculations, you already know that E is on BD. You don't really need to check that again; but the current code still does it anyway.
This is a minor performance thing, but its importance can increase as the collection of points increases.
Ideally, whenever you start checking a new line (e.g. DE), you should first check your existing results for a line which contains both these points (e.g. line AB contains points A,B,D,E so it contains information that is relevant to you now). Instead of creating a line DE, you could instead skip the calculation because AB(+DE) is already more complete than the new line DE will ever be (as it won't check A and B again, we already passed those).
A similar performance gain can be made by stopping the iteration early.
Let's say you have 26 points (A to Z). You're now starting on the calculations for lines where the first point is X.
At best, these calculations can only yield a maximum of 3 points (X,Y,Z), since the calculations never check the other points again.
Suppose that you already have a line in your collection which has 5 points on it (ABCDE). There's no point to doing further calculations, because even if every remaining point is on the same line (XYZ), it'll still contain less points than the existing line with ABCDE.
Similarly, if you find a line with more than half of the point list on it (and you've tested all points for compatibility with this line), you can be sure that this is the longest line possible, since it's impossible for two distinct lines to share more than a single point.
Most of these improvement would make the existing code more complex, which would suggest either separating it further, or documenting it better. Or, preferably, both. | {
"domain": "codereview.stackexchange",
"id": 29699,
"tags": "java, object-oriented, interview-questions, computational-geometry"
} |
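For comparison, the slope-counting algorithm from the question above can be written quite compactly. This is an illustrative Python sketch of mine (not the Java under review), returning just the best count and using exact fractions as map keys to sidestep floating-point slope comparisons; it assumes integer coordinates:

```python
from collections import defaultdict
from fractions import Fraction

def max_points_on_a_line(points):
    """Return the size of the largest collinear subset (O(n^2) slope counting).

    points: list of (x, y) tuples with integer coordinates.
    Slopes are stored as exact Fractions; vertical lines use the key 'inf'.
    """
    best = 0
    n = len(points)
    for i in range(n):
        slopes = defaultdict(int)
        overlap = 0
        x1, y1 = points[i]
        for j in range(i + 1, n):
            x2, y2 = points[j]
            if (x1, y1) == (x2, y2):
                overlap += 1          # duplicate of the anchor point
            elif x1 == x2:
                slopes['inf'] += 1    # vertical line through the anchor
            else:
                slopes[Fraction(y2 - y1, x2 - x1)] += 1
        local = max(slopes.values(), default=0)
        best = max(best, local + overlap + 1)  # +1 counts the anchor itself
    return best
```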
Overriding GetHashCode and Equals | Question: I'm creating a class to wrap a list of Mask and I'd like to know if I'm overriding the GetHashCode() and Equals() correctly.
Mask is a wrapper for a List of Points. It does implement IEnumerable<Point>.
public class MaskCollection : IEnumerable<Mask>
{
#region Variables
private List<Mask> masks;
public int Count { get { return masks.Count; } }
#endregion
#region Overrides
public override int GetHashCode()
{
int hash = 0;
foreach(Mask mask in masks)
{
hash ^= mask.Center.GetHashCode();
}
return hash;
}
public override bool Equals(object obj)
{
if (ReferenceEquals(this, obj)) return true;
if (!(obj is MaskCollection)) return false;
MaskCollection other = (MaskCollection)obj;
if (this.Count != other.Count) return false;
HashSet<Mask> uniqueMasks = new HashSet<Mask>(masks);
foreach(Mask mask in other)
{
if (!uniqueMasks.Contains(mask)) return false;
}
return true;
}
#endregion
}
And if I am overriding them correctly, can this code be improved? My Equals() looks really slow, if they actually are equal.
Answer: This implementation of Equals violates the contract of Equals.
From MSDN:
The following statements must be true for all implementations of the
Equals(Object) method. In the list, x, y, and z represent object
references that are not null.
...
x.Equals(y) returns the same value as y.Equals(x).
The problem is with this code
HashSet<Mask> uniqueMasks = new HashSet<Mask>(masks);
foreach(Mask mask in other)
{
if (!uniqueMasks.Contains(mask)) return false;
}
return true;
As it's possible that all masks in other are in uniqueMasks, but uniqueMasks contains a mask not in other (i.e. other \$\subsetneq\$ uniqueMasks).
For example, the following code will print False True.
var x = new MaskCollection(new[] { new Mask(0), new Mask(0) });
var y = new MaskCollection(new[] { new Mask(0), new Mask(1) });
Console.WriteLine(x.Equals(y));
Console.WriteLine(y.Equals(x));
Given this implementation of Mask
public class Mask : IEquatable<Mask>
{
private readonly int center;
public Mask(int center)
{
this.center = center;
}
public int Center
{
get { return this.center; }
}
public bool Equals(Mask other)
{
return this.center == other.center;
}
public override int GetHashCode()
{
return this.center.GetHashCode();
}
}
You could write
return new HashSet<Mask>(masks).SetEquals(other);
but I would probably recommend not overriding GetHashCode and Equals for this class, if you don't have a good reason to do so. | {
"domain": "codereview.stackexchange",
"id": 12043,
"tags": "c#, overloading"
} |
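The asymmetry in the answer above can be reproduced without any C#. In this Python sketch (my own restatement of the reviewed Equals logic, with plain integers standing in for Mask objects), the set-containment check breaks symmetry as soon as duplicates are involved:

```python
def naive_equals(a, b):
    """The reviewed Equals logic: same length, and every element of b is in set(a)."""
    if len(a) != len(b):
        return False
    unique = set(a)          # duplicates in a collapse here
    return all(m in unique for m in b)

x = [0, 0]   # stands in for MaskCollection(Mask(0), Mask(0))
y = [0, 1]   # stands in for MaskCollection(Mask(0), Mask(1))
# naive_equals(x, y) -> False, but naive_equals(y, x) -> True: symmetry is broken.
```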
Simple question about change of coordinates | Question: Suppose we have two coordinate systems (Cartesian and spherical)
$$x^{\mu} = (t,x,y,z)$$
$$x'^{\mu'} = (t',r,\theta,\phi)$$
where $r= \sqrt{x^2 + y^2 + z^2} , \theta = \cos^{-1}(z/r), \phi = \tan^{-1} (y/x)$. My question is, in general, what are the components of a vector $A_{\mu} = (A_t,A_x,A_y,A_z)_{\mu}$ in the primed coordinates? From GR, I believe the answer is $A'_{\mu'} = (A_{t'},A_{r},A_{\theta},A_{\phi})_{\mu'} = \frac{\partial x^{\mu}}{\partial x'^{\mu'}} A_{\mu}$, with the inverse matrix used for upper-index vectors.
If this is the case, in particular it should work for position vectors. That is, $x'^{\mu'} = \frac{\partial x'^{\mu'}}{\partial x^{\mu}} x^{\mu}$. However, applying this transformation gives $x'^{\mu'} = (t',r,0,0)$, not $(t',r,\theta,\phi)$. Am I doing something wrong?
Edit: The second paragraph incorrectly applies the formula I've cited, as pointed out by mike stone.
As for the first question, since we have $x'_r = \sqrt{x_1^2 +x_2^2 + x_3^2}, x'_{\theta} =\cos^{-1}(x_3/x'_r)$,$x'_{\phi} = \tan^{-1}(x_2/ x_1)$, does it follow for any vector $A'_{\mu}$ (for instance, the EM gauge field) that $A'_r = \sqrt{A_1^2 + A_2^2 + A_3^2}$, $A'_{\theta} = \cos^{-1}(A_3/ A'_r)$, and $A'_{\phi} = \tan^{-1}(A_2/A_1)$?
Answer: Your transformation matrix:
I will ignore the "t" coordinate
\begin{align*}
&\text{The position vector for a sphere is: } \\
&\vec{R_s}=
\begin{bmatrix}
x \\
y \\
z \\
\end{bmatrix}=
\left[ \begin {array}{c} r\cos \left( \vartheta \right) \cos \left(
\varphi \right) \\ r\cos \left( \vartheta \right)
\sin \left( \varphi \right) \\ r\sin \left(
\vartheta \right) \end {array} \right]&(1)
\\\\
&\text{we can now calculate the transformation matrix $R$:}\\\\
&R=J\,H^{-1}\\
&\text{$J$ is the Jacobi matrix }\quad\,, J=\frac{\partial\vec{R_s}}{\partial\vec{q}}\quad \text{with:}\\
&\vec{q}=\begin{bmatrix}
r \\
\varphi \\
\vartheta \\
\end{bmatrix}\quad, H=\sqrt{G_{ii}}\,,H_{ij}=0\quad\text{and } G=J^{T}\,J\quad\text{the metric.}\\\\
&\Rightarrow\\\\
&R=\left[ \begin {array}{ccc} \cos \left( \varphi \right) \cos \left(
\vartheta \right) &-\sin \left( \varphi \right) &-\cos \left(
\varphi \right) \sin \left( \vartheta \right) \\
\sin \left( \varphi \right) \cos \left( \vartheta \right) &\cos
\left( \varphi \right) &-\sin \left( \varphi \right) \sin \left(
\vartheta \right) \\ \sin \left( \vartheta
\right) &0&\cos \left( \vartheta \right) \end {array} \right]&(2)
\end{align*}
\begin{align*}
&\text{We can solve equation (1) for $r\,,\varphi$ and $\vartheta$}\\\\
&r=\sqrt{x^2+y^2+z^2}\\
&\varphi=\arctan\left(\frac{y}{x}\right)\\
&\vartheta=\arctan\left(\frac{z}{\sqrt{x^2+y^2}}\right)\\\\
&\text{and with equation (2):}\\\\
&R= \left[ \begin {array}{ccc} {\frac {xz}{\sqrt {{y}^{2}+{x}^{2}}r}}&-{
\frac {y}{\sqrt {{y}^{2}+{x}^{2}}}}&-{\frac {x}{r}}
\\ {\frac {yz}{\sqrt {{y}^{2}+{x}^{2}}r}}&{\frac {x}
{\sqrt {{y}^{2}+{x}^{2}}}}&-{\frac {y}{r}}\\ {\frac
{\sqrt {{y}^{2}+{x}^{2}}}{r}}&0&{\frac {z}{r}}\end {array} \right]
&(3)\\\\
&\text{The components of a vector can be transformed either with equation (2) or with equation (3) }
\end{align*} | {
"domain": "physics.stackexchange",
"id": 52154,
"tags": "general-relativity, differential-geometry, coordinate-systems"
} |
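As a quick numerical sanity check of the answer above (an illustrative sketch in plain Python, not from the original), the matrix $R$ of equation (2) is orthogonal, as a transformation between two orthonormal frames must be. Note the answer measures $\vartheta$ from the x-y plane (latitude):

```python
import math

def rotation_matrix(phi, theta):
    """R from equation (2); theta is measured from the x-y plane (latitude)."""
    c, s = math.cos, math.sin
    return [
        [c(phi) * c(theta), -s(phi), -c(phi) * s(theta)],
        [s(phi) * c(theta),  c(phi), -s(phi) * s(theta)],
        [s(theta),           0.0,     c(theta)],
    ]

def is_orthogonal(R, tol=1e-12):
    """Check R^T R = I, i.e. R preserves lengths and angles of components."""
    return all(
        abs(sum(R[k][i] * R[k][j] for k in range(3)) - (i == j)) < tol
        for i in range(3) for j in range(3)
    )
```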
What is the maximum dynamic range of an N bit signal? | Question: In this presentation on slide 15 there is a corollary:
Rule of thumb: the dark noise must be larger than 0.5
Corollary: With a N bit digital signal you can deliver no more*) than
N+1 bit dynamic range.
*) You can if you use loss-less compression
They give an example:
Example : A 102f camera with 11 bit dynamic range will deliver only 9
bit in Mono8 mode. Use Mono16!
Why N+1?
How can an 8 bit signal deliver 9 bit dynamic range?
Answer: From taking a look at the presentation, it seems that they might be using a slightly different definition of dynamic range than is typical. Usually, as in the Wikipedia article on the topic, dynamic range is defined as:
$$
\text{dynamic range} = \frac{\text{largest possible representable value}}{\text{smallest possible representable value}}
$$
For an $N$-bit (unsigned) signal, this is equal to:
$$
\text{dynamic range} = \frac{2^N-1}{1} = 2^N-1
$$
However, their discussion of dynamic range is interspersed with discussion of quantization noise. Therefore, I posit that they instead define dynamic range as:
$$
\text{dynamic range} = \frac{\text{largest possible representable value}}{\text{largest possible quantization error}}
$$
For a uniformly-quantized quantity such as this, the maximum quantization error is equal to half of one bit. That leads to a dynamic range of:
$$
\text{dynamic range} = \frac{2^N-1}{0.5} = 2(2^N-1)
$$
The extra factor of 2 gives you an approximate increase of 1 bit in "dynamic range" when measured this way. I assume that's what is meant by an $N$-bit digitized signal providing $N+1$ bits of dynamic range. | {
"domain": "dsp.stackexchange",
"id": 1706,
"tags": "image-processing"
} |
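The two definitions of dynamic range in the answer above can be compared directly. An illustrative Python sketch (the function names are my own):

```python
import math

def dynamic_range_bits_conventional(n_bits):
    """Largest over smallest representable value, expressed in bits."""
    return math.log2(2**n_bits - 1)

def dynamic_range_bits_vs_quant_error(n_bits):
    """Largest value over the half-LSB quantization error, in bits."""
    return math.log2((2**n_bits - 1) / 0.5)

# For N = 8 the first gives ~8 bits and the second ~9 bits -- the "N+1" rule.
```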
Why should the baseline's prediction be near zero, according to the Integrated Gradients paper? | Question: I am trying to understand Intagrated Gradients, but have difficulty in understanding the authors' claim (in section 3, page 3):
For most deep networks, it is possible to choose a baseline such that the prediction at the baseline is near zero ($F(x') \approx 0$). (For image models, the black image baseline indeed satisfies this property.)
They are talking about a function $F : R^n \rightarrow [0, 1]$ (in 2nd paragraph of section 3), and if you consider a deep learning classification model, the final layer would be a softmax layer. Then, I suspect for image models, the prediction at the baseline should be close to $1/k$, where $k$ is the number of categories. For CIFAR10 and MNIST, this would equal to $1/10$, which is not very close to $0$. I have a binary classification model on which I am interested in applying the Integrated Gradients algorithm. Can the baseline output of $0.5$ be a problem?
Another related question is, why did they choose a black image as the baseline in the first place? The parameters in image classification models (in a convolution layer) are typically initialized around $0$, and the input is also normalized. Therefore, image classification models do not really care about the sign of inputs. I mean we could multiply all the training and test inputs with $-1$, and the model would learn the task equivalently. I guess I can find other neutral images other than a black one. I suppose we could choose a white image as the baseline, or maybe the baseline should be all zero after normalization?
Answer: You are right that the baseline score is near zero only when there are a large number of label classes, i.e., when k is large. We should have qualified this line in the paper more carefully.
In this sense, formally, the technique explains the *difference* between the prediction at the input and the prediction at the baseline, as is made clear elsewhere in the paper (see Remark 1 and Proposition 1, for instance). | {
"domain": "ai.stackexchange",
"id": 2164,
"tags": "deep-learning, image-recognition, papers"
} |
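For readers who want to experiment with baselines, here is a minimal, dependency-free sketch of integrated gradients (my own illustration, not the paper's code: finite-difference gradients stand in for backprop, and the toy function and values are arbitrary):

```python
def integrated_gradients(f, x, baseline, steps=100, eps=1e-6):
    """Riemann-sum approximation of integrated gradients along the
    straight-line path from baseline to x, using finite-difference grads."""
    n = len(x)
    total = [0.0] * n
    for k in range(1, steps + 1):
        alpha = k / steps
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        for i in range(n):
            bumped = list(point)
            bumped[i] += eps
            total[i] += (f(bumped) - f(point)) / eps
    return [(x[i] - baseline[i]) * total[i] / steps for i in range(n)]

# The completeness axiom: attributions sum to f(x) - f(baseline),
# i.e. the method explains the *difference* from the baseline prediction.
f = lambda v: 2.0 * v[0] + v[1] * v[1]   # a toy "model"
x, base = [1.0, 2.0], [0.0, 0.0]
attr = integrated_gradients(f, x, base)
```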
Help me identify a cluster | Question: As I was browsing NASA's Eyes a few days ago, I saved a screenshot, but the text in it is blurry — something like "supher-485"; the first letter looks like an S and it ends with "er-485", as far as I can read.
It kind of looked like the cosmic web, or Indra's net, so I saved the cool screenshot.
Can anyone help me identify this cluster?
Answer: Since this appears to be a screenshot of Eyes on Exoplanets, I searched for "exoplanet 485" and found Kepler-485 b.
With the colors exaggerated, the label area could be "Kepler-485" in yellow overwritten with "Sun" in orange.
If this is correct, it's not a cluster, just a star with a planet, plus some artifacts from the instrument which took the image. | {
"domain": "astronomy.stackexchange",
"id": 2898,
"tags": "exoplanet, globular-clusters, star-cluster"
} |
GA puzzle solver stuck at local maximum | Question: I have a jigsaw-type problem with 192 pieces for which I am trying to find solutions. I have written a GA which starts from a random allocation, then 'crosses' by taking rectangular blocks from one solution and filling in from another, and 'mutates' by switching pieces at random. Through a complex procedure I can evaluate a solution and calculate a score.
I have a population size of 1000 and take 40% of the first parent in the cross and mutate up to 4% of the pieces in each solution generation. I select the parents with a weighted selection and do not allow any exact duplicates in my population.
The problem I am having is that the algorithm gets stuck at about 70% score, with (I assume) a population full of near identical solutions.
What can I do to improve the performance of the algorithm, am I making any glaring mistakes?
Should I be mutating more, or directing the mutation more towards poor scoring pieces?
Should I have a larger population, or run several populations in parallel and share high scoring solutions between them?
Update
I have replaced my roulette with a tournament selection (with 4 solutions picked at random and the best selected) and implemented mutations up to 4 times on each new child in the generation. I rearranged the process so that I cull half the population, then add some new random results then repopulate with crossovers.
Unfortunately, I seem to get a marginal performance increase, but I still seem to be converging at around 70% score.
Does anyone have any other ideas of things I can tweak to try and break through this barrier?
Answer: 1000 individuals is typically a fairly large population, so lacking further information, you're probably fine there.
You don't say how quickly your algorithm is converging or exactly how you calculate the weights for weighted selection, but if you're doing the typical roulette-wheel or fitness-proportionate selection, it's known that that method produces a large amount of selection pressure, particularly early in the run. What happens is that most early solutions are terrible, so the first mediocre thing it finds gets a much higher proportion of the total fitness than it really deserves and quickly swamps the population. The easiest way to avoid that is to use something with a more controllable selection pressure, like binary tournament selection or rank-biased selection.
For mutation, the general rule of thumb has always been to mutate each allele independently with probability 1/n. Like any heuristic, that's only a starting point, and you should let experiment guide you in making any adjustments, but if I'm understanding you correctly, you have 192 alleles, so a 4% mutation rate gives an expectation of about 8 mutations per individual, which I would guess is too high.
Or do you mean you make a single mutation to 4% of the generated offspring? If that's the case, it's probably too low. I'd start with a method that mutates every single individual by a randomly selected amount, with the expected amount being one flip. Some individuals will get two or three flips, others will not be altered at all.
The third idea, having multiple populations and sharing between them goes by the name of "island model" GAs, and has been known to work pretty well for some problems. However, I think in your case there are some issues with the way the underlying algorithm is working that need to be addressed before moving to a parallel model that will only make it more difficult to tease out the dynamics of what's going on.
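As a concrete illustration of the selection and mutation suggestions above, here is a minimal Python sketch of k-way tournament selection and an expected-one-swap mutation for a permutation encoding. The piece representation and fitness function are placeholders, not taken from the question.

```python
import random

def tournament_select(population, fitness, k=2):
    # Pick k individuals uniformly at random; the fittest one wins.
    # Selection pressure is controlled by k (k=2 is binary tournament).
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)

def mutate(individual, rate=None):
    # Swap mutation for a permutation encoding (jigsaw piece placement):
    # each position triggers a swap with probability 1/n, so the expected
    # number of swaps per offspring is about one.
    child = list(individual)
    n = len(child)
    rate = 1.0 / n if rate is None else rate
    for i in range(n):
        if random.random() < rate:
            j = random.randrange(n)
            child[i], child[j] = child[j], child[i]
    return child
```

With k=2 you get classic binary tournament; raising k raises the selection pressure in a controlled, tunable way, unlike roulette-wheel selection.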
Without more details about exactly how you're encoding your solutions, how your crossover and mutation operators work, etc., we're kind of limited to general statements such as those I just made. Feel free to post any clarifications though and I'll take another run at helping understand what might be going on. | {
"domain": "cstheory.stackexchange",
"id": 1146,
"tags": "ne.neural-evol, genetic-algorithms"
} |
How tight is the XOR lemma? | Question: The XOR lemma states that if you have a distribution $D$ on $\{0,1\}^n$, and all the Fourier coefficients of $2^n D$ are small, then it is close in $L_1$ to the uniform distribution. Specifically, suppose all Fourier coefficients are at most $\epsilon$.
The argument is a simple application of Cauchy-Schwarz together with the fact that the Fourier transform preserves the $L_2$ norm, which yields:
$$|U-D|_1 \leq 2^{n/2} \epsilon$$
How tight is this? I obviously care about the exponent of $2$, not a constant outside.
I was certain a 'probabilistic' $D$ would show tightness, but that doesn't seem to be the case. Below is an explanation:
Consider generating $D$ as follows: let $D(x)$ be $\frac{1}{2^n} + z(x)\frac{\epsilon}{2^{n/2}}$ where $z(x)$ is iid on $\{-1,1\}$.
Even if we ignore that this is probably not a distribution (I'm trying to make the point here that a probabilistic construction seems to fail, so that's okay), this can't be turned into a tight example: since there are so many Fourier coefficients, one of them will correlate significantly more in signs with our $z$ than the random $2^{n/2}$, and then we'll get more than $2^{n/2} \frac{\epsilon}{2^{n/2}}$.
Thus tightness, if it exists, needs to come from a controlled pseudorandom construction.
EDIT-
As Clement posted in a comment, bent functions are apparently a thing, and they clearly settle our tightness question.
First notice that if $f,g$ are each bent functions on a different set of variables, then so is $fg$. Thus if there is one for $m,n$ variables, there is one for $m+n$.
Now consider $1/2-1/2x+1/2y+1/2xy$; it's a bent function in two variables, so tensoring gives a bent function for any even $n$.
Answer: Instead of choosing $z\in\{-1,1\}^{2^n}$ uniformly at random, you may want to look instead at more structured (yet "pseudo-random"-ish) functions such as bent functions:
Definition. A Boolean function $f\colon\{-1,1\}^n\to\{-1,1\}$ is called bent if $|\hat{f}(S)|=2^{-n/2}$ for all $S\subseteq [n]$.
Such functions are known to exist; see, e.g., Chapter 6.3 of Ryan O'Donnell's Analysis of Boolean functions (2014) (or the online version). | {
"domain": "cstheory.stackexchange",
"id": 4826,
"tags": "co.combinatorics, pr.probability, boolean-functions, pseudorandomness"
} |
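The claims in the answer above (the two-variable function is bent, and tensoring bent functions on disjoint variables stays bent) are easy to check by brute force; a small Python sketch computing the Fourier coefficients directly from their definition:

```python
from itertools import product

def fourier_coeffs(f, n):
    # hat f(S) = 2^{-n} * sum_{x in {-1,1}^n} f(x) * prod_{i in S} x_i
    coeffs = {}
    for S in product([0, 1], repeat=n):
        total = 0.0
        for x in product([-1, 1], repeat=n):
            chi = 1
            for i in range(n):
                if S[i]:
                    chi *= x[i]
            total += f(x) * chi
        coeffs[S] = total / 2 ** n
    return coeffs

# The two-variable function from the answer, written in its Fourier expansion:
# f(x, y) = 1/2 - x/2 + y/2 + xy/2  (it is {-1,1}-valued on {-1,1}^2).
f2 = lambda x: 0.5 - 0.5 * x[0] + 0.5 * x[1] + 0.5 * x[0] * x[1]

# Tensoring two copies on disjoint variable sets gives a 4-variable function.
f4 = lambda x: f2(x[:2]) * f2(x[2:])
```

All coefficients of f2 have magnitude $2^{-1}$ and all coefficients of f4 have magnitude $2^{-2}$, i.e. both are bent per the definition quoted above.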
Node.js chat service | Question: I've built a simple chat server/client on Node.js and socket.io that I would like reviewed. My main concern is making the chat.js (client) running as cleanly as possible (OO) and streamlining data back and forth so I can add client monitoring (updated list of connected users) features down the line without issue.
Aside from that, I'm open to any critique/feedback about these sections of code on best practices or design patterns. This project is mostly for learning an unfamiliar technology (socket.io and, somewhat, node.js).
Here is the server code:
server.js:
var express = require('express');
var app = express();
var PORT = 7029;
var APPNAME = "Hermes";
//Set Views
app.set('views', __dirname + '/templates');
app.set('view engine', 'jade');
app.engine('jade', require('jade').__express);
app.get('/', function(req,res) {
res.render('page');
});
//Setup server
app.use(express.static(__dirname + '/public'));
var io = require('socket.io').listen(app.listen(PORT));
io.set('transports', ['xhr-polling']);
console.log('Listening on port ' + PORT);
//Event Handlers
io.sockets.on('connection', function(socket) {
welcome = new Message("Welcome to the ZeroDae chat service.", APPNAME);
socket.emit('message', welcome );
socket.on('send', function(data) {
message = new Message(data.content, data.user);
io.sockets.emit('message', message);
});
socket.on('join', function(data) {
join = new Message(data + " has joined the chat.", APPNAME);
io.sockets.emit('message', join);
});
socket.on('disconnect', function() {
leaving = new Message("Someone has left.", APPNAME);
io.sockets.emit('message', leaving);
});
});
function Message(content, user) {
this.content = content;
this.user = user;
}
Here is the chat client:
chat.js:
$(document).ready(function () {
var messages = [];
var socket = io.connect('http://chat.zerodaedalus.com');
var field = $('#field');
var send = $('#send');
var content = $('#content');
var current = $('#current');
var warning = $('#browserWarning');
var user = getUser();
var username = $('#username');
var update = $('#update');
var id;
function User() {
}
//Hide warning
warning.hide();
//set username and display it on screen
username.val(user);
currentUser();
//event handlers
send.click(submit);
update.click(setUser);
field.keyup(function (e) {
if(e.which === 13) {
submit();
}
});
username.keyup(function (e) {
if(e.which === 13) {
setUser();
}
});
socket.on('message', function(data) {
if(data.content) {
if(data.user === 'Hermes')
classes = 'message text-muted ';
else
classes = 'message ';
content
.append('<p></p>')
.find('p:last-child')
.addClass(classes + id)
.append('<span></span>')
.find('span:last-child')
.addClass('user')
.text(data.user + ': ')
.closest('.message')
.append('<span></span>')
.find('span:last-child')
.addClass('content')
.text(data.content)
}
else {
console.log('There is a problem:', data);
}
});
function getUser() {
if(typeof(Storage) !== "undefined") {
if(localStorage.getItem('username'))
name = localStorage.getItem('username');
else
name = generateUsername();
}
else {
warning.show();
}
return name;
}
function setUser() {
localStorage.setItem('username', username.val() );
if(username.val() === '')
user = generateUsername();
else
user = username.val();
currentUser();
}
function generateUsername() {
return 'zd-' + Math.floor((Math.random() * 729) + 1);
}
function currentUser() {
current.html(' (Currently: ' + user + ')');
socket.emit('join', user);
console.log(user);
}
function submit() {
var text = $('#field').val();
socket.emit('send', { content: text, user: user });
field.val('');
content.animate({scrollTop: content.prop('scrollHeight')});
}
});
GitHub
Demo
Answer: From a once over:
There is no need to set views or view engine for the server part, you are not using it
Be careful about polluting the global namespace, variables like welcome, join and leaving should be declared with var
I would probably call APPNAME -> APPNICK, this tells me that this will be used for chat messages
From a memory perspective, you are creating a fresh set of functions on each connection. I would declare the 3 event handlers as functions outside of your connection handler, and then assign them
function onSend(data) {
message = new Message(data.content, data.user);
io.sockets.emit('message', message);
}
function onJoin(data) {
join = new Message(data + " has joined the chat.", APPNAME);
io.sockets.emit('message', join);
}
function onDisconnect() {
leaving = new Message("Someone has left.", APPNAME);
io.sockets.emit('message', leaving);
}
//Event Handlers
io.sockets.on('connection', function(socket) {
welcome = new Message("Welcome to the ZeroDae chat service.", APPNAME);
socket.emit('message', welcome );
socket.on('send', onSend);
socket.on('join', onJoin);
socket.on('disconnect', onDisconnect);
});
From a readability and annoyance perspective I would modify Message so that the user gets defaulted to APPNAME or APPNICK
function Message(content, user) {
this.content = content;
this.user = user || APPNICK;
}
This way you can simply
function onJoin(data) {
join = new Message(data + " has joined the chat." );
io.sockets.emit('message', join);
}
Then from a DRY perspective, you are repeating io.sockets.emit('message', ...) a ton of times. What if you made the sending of a Message a function of it?
Message.prototype.send = function(){
io.sockets.emit('message', this);
}
Then your 3 handlers would look like this:
function onSend(data) {
new Message(data.content, data.user).send();
}
function onJoin(data) {
new Message(data + " has joined the chat.").send();
}
function onDisconnect() {
new Message("Someone has left.").send();
}
The next thing that would bother me is the repeated new, but I will stop here ;)
For the client part
13 should be a constant var ENTER = 13 with if(e.which === ENTER ) {
This :
content
.append('<p></p>')
.find('p:last-child')
.addClass(classes + id)
.append('<span></span>')
.find('span:last-child')
.addClass('user')
.text(data.user + ': ')
.closest('.message')
.append('<span></span>')
.find('span:last-child')
.addClass('content')
.text(data.content)
looks impressive, but is definitely not the most performant piece of code; it is fast enough, though, so I would leave it alone.
console.log('There is a problem:', data); <- Problems happen with chat, I would make this part of the DOM, users will thank you for it
Fun piece of code, easy to follow. | {
"domain": "codereview.stackexchange",
"id": 7582,
"tags": "javascript, jquery, node.js, chat, socket.io"
} |
Newton cannonball threshold velocity | Question: In the classical "Newton cannonball experiment", it is known that as the velocity of the "horizontally" fired cannonball increases, the orbit changes, after an initial threshold is passed, from a circular closed orbit to an elliptical closed one, then to an open parabolic, and on to an open hyperbolic.
But where can i find a detailed analysis of this procedure?
I'm particularly puzzled by the threshold value at which the orbit changes from
a closed ellipse to an open parabola. It seems amazing that by a continuous variation of the initial velocity, a closed ellipse suddenly becomes an open parabola!
Answer: You are essentially talking about Kepler's problem. Given 'horizontal' initial velocity $v_{0}$ at radius $R$, you can calculate both the energy and the angular momentum of the orbit
$$L=mv_{0}R$$
$$E=-\frac{GMm}{R}+\frac{1}{2}mv_{0}^2$$
Here $m$ is the mass of the particle and $M$ is the mass of earth. It can be shown that this type of motion possesses an effective potential
$$V_{\rm eff}(r)=\frac{L^2}{2mr^2}-\frac{GMm}{r}$$
such that the total energy satisfies
$$E=\frac{1}{2}m\dot{r}^2+V_{\rm eff}(r)$$
When looking at this potential, which describes effective 1D motion in the radial direction, one can observe the different regimes you've described.
In particular, the case of parabolic orbit corresponds to $E=0$, which after solving gives you the threshold velocity
$$v_{\rm th}=\sqrt{\frac{2GM}{R}}$$ | {
"domain": "physics.stackexchange",
"id": 44679,
"tags": "orbital-motion"
} |
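To put numbers on the threshold formula in the answer above: for a launch from Earth's surface, the parabolic threshold speed is exactly $\sqrt{2}$ times the circular-orbit speed. The constants below are standard values, assumed here rather than given in the answer.

```python
import math

# Assumed standard values for Earth:
G = 6.674e-11        # m^3 kg^-1 s^-2
M = 5.972e24         # kg
R = 6.371e6          # m  (launch radius ~ Earth's surface)

v_circ = math.sqrt(G * M / R)      # circular orbit: E < 0, eccentricity 0
v_th   = math.sqrt(2 * G * M / R)  # parabolic escape threshold: E = 0

# For v_circ < v0 < v_th the orbit is a bound ellipse (E < 0);
# at exactly v_th it is a parabola; above it, a hyperbola (E > 0).
```

This gives roughly 7.9 km/s for the circular orbit and 11.2 km/s for the parabolic threshold, the familiar escape velocity.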
RectifyNodelet was not subscribing to image_mono | Question:
I am trying to customize the image_proc node to add a time stamp on each input frame. The output from mono was perfectly fine: the image from mono has the time stamp on it.
But the RectifyNodelet was not able to subscribe to it.
I have converted raw_msg to a cv::Mat and then used putText to stamp the arrival time on it. Then I converted it back to a sensor_msgs message using CvImage().toImageMsg().
Could anybody please let me know what might be the reason.
Originally posted by mkreddy477 on ROS Answers with karma: 58 on 2017-02-17
Post score: 0
Answer:
It was a sync problem. I was able to solve the problem by simply passing the header of raw_msg to the CvImage(). Previously I was not passing the default header.
Originally posted by mkreddy477 with karma: 58 on 2017-02-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 27044,
"tags": "ros"
} |
What distribution is the easiest to compress? | Question: I'm currently playing around with some compression algorithms and I'm asking myself if there is a type of data distribution / noise distribution that is easier to target with quantization (meaning less distortion at the same rate). To my understanding, i.i.d. Gaussian is the upper bound on "compression difficulty".
Are there distributions that are especially efficient to compress, maybe via quantizers designed for those (I know about Lloyd-Max, but I'm looking for something that works generally well with a known distribution and doesn't need to be optimized samplewise)? Does it make sense to transform the data into such a distribution prior to quantizing it?
Answer:
I'm currently playing around with some compression algorithms and I'm asking myself if there is a type of data distribution / noise distribution that is easier to target with quantization (meaning less distortion at the same rate). To my understanding, i.i.d. Gaussian is the upper bound on "compression difficulty".
Fat32's answer and your distortion aspect are on-spot here:
The continuous Gaussian distribution is the distribution that maximizes differential entropy.
But things get a bit hairy in your general problem:
a type of data distribution / noise distribution that is easier to target with quantization (meaning less distortions at same rate).
discrete input, lossless compression
So, I'd argue that a data distribution is already given in discrete quantities – and for discrete sources it's trivial to show that the minimum entropy is 0 (discrete information is non-negative, and hence so is its expectation, the entropy), and can be realized by a source where all but one element have probability 0 (and thus, the remaining element probability 1), since then the expectation value of probability-weighed informations collapses to the information in the only occurring value, and $\log_2(P=1)\equiv 0$.
Hence, the source that always gives the same value has 0 bit of information, and can be compressed the best. By simply ignoring the source. As it gives no information.
Now assume you've got a discrete source of information that you want to compress losslessly. Can you map that to a different distribution that has lower entropy without losing info? No, you can't.
Either you find a mapping that combines multiple input values into one output value, which means you lose information, and hence are lossy,
or you only increase the entropy (or, best case, you just keep the same entropy), by splitting events that are identical into multiple bins, thus approximating the discrete uniform source more, which has maximum entropy.
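The two extremes invoked here, the zero-entropy constant source and the maximum-entropy uniform source, can be checked with a few lines of Python; the four-symbol alphabet is just an illustrative choice:

```python
import math

def entropy_bits(p):
    # H(X) = -sum_i p_i * log2(p_i), with the convention 0*log2(0) = 0.
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

degenerate = [1.0, 0.0, 0.0, 0.0]   # always emits the same symbol: 0 bits
uniform    = [0.25] * 4             # maximum entropy for 4 symbols: 2 bits
```

Every other distribution over the same alphabet lands strictly between these two bounds.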
So, losslessly, you can't do anything to improve your source if it's i.i.d.; compression that works better than a plain Huffman codec on average (for example, lossless audio codecs) uses the fact that observations aren't independent; no matter what you do, your losslessly compressed data still has at least as many bits as there was entropy in the source (that's a fundamental result of basic information theory).
Now, the more interesting case is that of noise, or signal, in the continuous case.
continuous source, quantization
The problem you're describing, *how much information do I lose when I subject a source of information to a specific quantization" is called Rate-Distortion Theory. It's a fun field full of fun integrals!
So, let's start with a short consideration of what quantization is:
It takes a continuous source and maps it to a discrete one. Quite intuitively, this is a lossy process (or your source wasn't really continuous from the start): the probability of any one of the continuously possible values is (surely) 0 – thus, the information in the event "value $v$ has been observed" is $-\log_2\left(P_X(v)\right)=-\log_2(0)$, and that is unbounded (the event "has infinite information", but I hesitate to say that, because "infinite" is not something you should quantify things with). Thus, the continuous source has unbounded entropy in the discrete information theory sense – and since any quantization in this world can only give you a finite number of bits, you're bound to lose some of the original information. (I could've explained that more vividly – how do you quantize $e$, $3$ and $\pi$ with the same quantizer and still "hit" all three values exactly?)
So, the question is, what continuous distribution $F_X$ of the source $X$ suffers the least distortion when being sampled (by a quantization function $Q: \{x\in X\}\mapsto \{y\in Y\}$ with $Q^{-1}(y)=x\;\forall y\in Y$, i.e. a quantization that leaves the values it can "exactly" produce untouched) with a fixed bit depth $r$, giving us the discrete source $Y$?
Now, following the usual Rate-Distortion theory approach, we should first define a distortion function $D$ that tells us how "far" the quantized value $x$ is from the input value $y$, but we don't have to: any reasonable metric would have the property of being $D=0$ for $y=x$, and always $D\ge0$.
Hence, the infimum of the distortion would occur under optimal distribution $\tilde F_X$ of $X$ (let's assume said distribution is differentiable to its density $\tilde f$, else things just get uglier):
\begin{align}
d &= \inf_{F_X} E\left[D(X,Y=Q(X))\right]\\
&= \inf_{F_X} \int\limits_X f_X(x)D(x,y=Q(x))\,\mathrm{d}x \\
&= \int\limits_X \tilde f_X(x)D(x,Q(x))\,\mathrm{d}x \\
&= \int\limits_{\left\{x\in X| Q(x)=x\right\}} \tilde f_X(x)D(x,Q(x))\,\mathrm{d}x +
\int\limits_{X \backslash \left\{x\in X| Q(x)=x\right\}} \tilde f_X(x)D(x,Q(x))\,\mathrm{d}x \\
&= \int\limits_{X \backslash \left\{x\in X| Q(x)=x\right\}} \tilde f_X(x)D(x,Q(x))\,\mathrm{d}x \\
&= 0 \\
&\iff \tilde f_X(x) = 0\; \forall x\notin Y\\
&\implies \tilde F_X = F_Y
\end{align}
So, this might not be a surprising result, but it's quite fundamental to understand: No matter how you (sensibly) define distortion/loss, the closer you get to the output distribution of your quantizer, the less distortion you see.
Optimizing for something better than just transmitting the least bits possible
With that result, we can move on: we already know the optimally compressible output distribution $F_Y$, namely the constant output mentioned above, so the optimal continuous source distribution in terms of compression is just the same.
Assuming you really want your quantizer to give you an $r$-bit observation of reality, you'd build it to have $2^r$ steps of equal probability, and since "wasting" steps is a bad idea, we just pick $r$ to be an integer.
Hence, you'd look at the source distribution and find a continuous mapping so that the output of $Q$ is discretely uniform.
That means you'd just find a mapping so that the density mass in each of the quantization bins is equal.
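A quick numerical sketch of this equal-mass idea (the exponential source and the 3-bit depth are illustrative assumptions, not from the question): mapping each sample through the source's CDF produces a uniform variable on $[0,1)$, so a plain uniform quantizer applied afterwards has equal-probability bins.

```python
import math
import random

def cdf_exponential(x, lam=1.0):
    # CDF of an Exp(lam) source: F_X(x) = 1 - exp(-lam * x).
    return 1.0 - math.exp(-lam * x)

def quantize_uniform(u, r):
    # r-bit uniform quantizer on [0, 1): 2**r equal-width bins.
    return min(int(u * 2 ** r), 2 ** r - 1)

# Mapping x -> F_X(x) turns the source into a uniform variable on [0, 1),
# so every bin of the uniform quantizer carries (nearly) equal probability mass.
random.seed(1)
bins = [0] * 8
for _ in range(200000):
    x = random.expovariate(1.0)
    bins[quantize_uniform(cdf_exponential(x), r=3)] += 1
```

Each of the 8 bins ends up with roughly 200000/8 = 25000 samples, i.e. the companded source feeds the uniform quantizer equal-probability steps.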
In the uniform step width quantization ("linear ADC"), that means finding the inverse of $F_X(x)$. | {
"domain": "dsp.stackexchange",
"id": 6538,
"tags": "compression, quantization"
} |
Do packages have to be in the catkin source space? | Question:
Hi there! Been trying out ROS and I'm simply amazed just how detailed the tutorials have been. One thing that's been puzzling me though is that catkin_make (and in my case, catkin build) seems to also look into the /opt/ros/kinetic/share folder on top of the source space. What is the reason for this? Is the latter space exclusively for packages I write while the former is for packages developed by other parties?
Many thanks.
Originally posted by ServoRen on ROS Answers with karma: 1 on 2017-10-09
Post score: 0
Answer:
You can have the concept of overlays such that you can layer new versions of packages on top of other ones and leverage shared installations such as the binaries.
There's a tutorial on this at: http://wiki.ros.org/catkin/Tutorials/workspace_overlaying
Originally posted by tfoote with karma: 58457 on 2017-10-09
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2017-10-10:
@tfoote: so the answer is essentially 'yes' then? :) (an overlay just adds a 'new' source space).
And extending ROS_PACKAGE_PATH manually would also be akin to adding a new source space, but it the build artefacts will not end up in a corresponding build/devel space, but of the workspace ..
Comment by gvdhoorn on 2017-10-10:
.. where catkin_make / catkin build was invoked.
@ServoRen: as ROS (catkin) pkgs are just CMake projects, you could actually build without Catkin. That would remove the 'workspace' concept. It's not often done though.
Comment by tfoote on 2017-10-10:
Yes, if you're talking about packages that you want to build, they have to be in the source space. The source space is specifically a subset of the current workspace and does not include the other paths http://wiki.ros.org/catkin/workspaces#Source_Space
Comment by ServoRen on 2017-10-11:
I see. So let's say I want to share my work and it uses both packages I wrote and not-so-common packages by other people, it would be best to place both categories of packages in the source space ..
Comment by ServoRen on 2017-10-11:
.. More common packages like geometry_msgs or roscpp can be assumed to be taken from the /opt/ros//share folder. Is this right?
Thanks so much!
Comment by tfoote on 2017-10-11:
That cannot be assumed, you should provide instructions on how to install your depedencies. There are tools like rosdep to automate this based on the contents of the source space and the dependencies in their package.xml files.
Comment by ServoRen on 2017-10-13:
I see. Thanks for the input. | {
"domain": "robotics.stackexchange",
"id": 29035,
"tags": "ros, packages"
} |
How to explain get_weights with an autoencoder in Keras? | Question: I built an autoencoder model of three layers: 9-5-9.
Input dim = 9, encoder dim = 5, output dim = 9
When I get the model weights,
weight1=autoencoder.layers[1].get_weights()
weight2=autoencoder.layers[2].get_weights()
print(weight1)
[array([[ 0.0023533 , -0.02289476, -0.01658 , 0.03487475, -0.38416424],
[ 0.00594878, 0.01835718, 0.01768207, 0.04458401, 0.10922299],
[ 0.03288281, 0.22234452, 0.04393397, -0.14807932, 0.04412287],
[ 0.16347113, 0.02014653, -0.05967368, -0.09127634, 0.9797626 ],
[-0.0901033 , 0.1602385 , -0.16297013, 0.43326673, -0.2514738 ],
[ 0.00272129, -0.00525797, 0.01420719, -0.04066049, -0.01261563],
[ 0.40665478, -0.07740633, -0.02576585, 0.0406443 , -0.218632 ],
[-0.00641229, 0.08050939, -0.02497054, -0.12399215, 0.10901988],
[-0.14366671, 0.02168852, 0.19099002, -0.10509221, -0.4306924 ]],
dtype=float32), array([-0.0133634 , 0.14412224, 0.13419336, -0.32834613, -0.31566525],
dtype=float32)]
There are two arrays in weight1.
I know what the first array means, but how do I explain the second array?
Answer: Does the layer contain two matrices, one for the actual weights and one for the biases?
There could be one bias value for each of the columns in your weight matrix, depending on how you built your model. | {
"domain": "datascience.stackexchange",
"id": 5060,
"tags": "python, keras, autoencoder"
} |
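To make the two arrays in the question above concrete: for a 9 → 5 Dense layer, get_weights() returns the kernel (shape 9×5, the first array printed) and the bias vector (shape 5, the second array). A plain-Python sketch of the computation such a layer performs (not Keras itself):

```python
def dense_forward(x, W, b):
    # A Dense layer computes y_j = sum_i x_i * W[i][j] + b[j].
    # For a 9 -> 5 layer, W has shape (9, 5) and b has shape (5,):
    # exactly the two arrays get_weights() returns (kernel, then bias).
    n_in, n_out = len(W), len(W[0])
    return [sum(x[i] * W[i][j] for i in range(n_in)) + b[j]
            for j in range(n_out)]
```

Each of the 5 bias entries is added to the corresponding output column, which is why the second array has one value per unit in the layer.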
What is the functionality of --pause flag in rosbag play cmd? | Question:
I came across an application where the --pause flag argument was used in the rosbag play command. I checked the wiki; it says it starts the rosbag in paused mode. I was wondering what paused mode means.
Originally posted by Shiva_uchiha on ROS Answers with karma: 117 on 2020-09-22
Post score: 0
Original comments
Comment by Solrac3589 on 2020-09-23:
a few lines before it says:
Additionally, during playing, you can pause at any time by pressing space. When paused, you can step through messages by pressing s.
I suppose it will be that
Comment by Shiva_uchiha on 2020-09-23:
got it (Y)
Answer:
It means the replay is paused. So it does not publish anything. You have to unpause it (in the terminal by pressing the space key, usually) to have it play...
Originally posted by mgruhler with karma: 12390 on 2020-09-23
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 35565,
"tags": "ros-melodic, rosbag"
} |
What is the meaning of inconsistent in machine learning, and why is 1NN well known to be inconsistent? | Question: I really need help understanding the meaning of consistency in machine learning, why it's important, and why 1NN is considered to be inconsistent.
Answer: Consistency of any algorithm in machine learning or statistics means that, assuming you train on an infinite amount of data, your algorithm will converge to the true value of your estimate. Meaning that if you feed infinite datasets into the algorithm, your excess error over the best achievable (Bayes) error will converge to 0
Now regarding 1NN:
1NN has an important property: when fed infinitely many training examples, the error rate of 1NN converges to a limit that can be as large as twice the Bayes error, and it stays strictly above the Bayes error whenever the Bayes error is nonzero. Now if you take the definition above and match it against 1NN's property, you'll see the reason why it's inconsistent.
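A tiny Python sketch using the Cover-Hart asymptotic 1NN error $E[2\eta(x)(1-\eta(x))]$ for a binary problem with constant posterior $\eta$ (an illustrative assumption): the limiting 1NN error sits strictly between the Bayes error and twice the Bayes error, so the excess error never vanishes.

```python
def bayes_risk(eta):
    # Binary problem with constant posterior P(Y=1 | x) = eta everywhere:
    # the Bayes rule always predicts the majority class.
    return min(eta, 1 - eta)

def asymptotic_1nn_risk(eta):
    # Cover-Hart: with infinitely many samples the 1NN error tends to
    # E[2*eta(x)*(1 - eta(x))]; eta is constant here, so no expectation needed.
    return 2 * eta * (1 - eta)
```

For eta = 0.1 the Bayes risk is 0.1 while the limiting 1NN risk is 0.18: no amount of extra data closes that gap, which is exactly the inconsistency.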
See this paper for a proof of the property: http://cseweb.ucsd.edu/~elkan/151/nearestn.pdf | {
"domain": "datascience.stackexchange",
"id": 6086,
"tags": "machine-learning"
} |
About the stress generated inside an object | Question:
Case 1 is the case where the external force F acts at a single point of the object, along a line passing through the center of mass; Case 2 is the case where the external force F acts evenly over a surface.
In both cases, the center of mass will move at the same acceleration.
What I'm curious about is the stress that occurs inside the object (in particular, does a shearing force occur)?
Intuitively, I don't think a shear force will be generated in Case 2. In Case 1, I think a shear force will be generated by inertia in the parts away from the point of action of the force. Is my guess correct?
Answer: I'm sure you're right. Just suppose that the top and bottom layers of your cuboid were only very weakly attached to the central part. They'd be left behind when the force is applied to the central part! | {
"domain": "physics.stackexchange",
"id": 93976,
"tags": "newtonian-mechanics, stress-strain"
} |
Qualitative Analysis- Distinguishing between two ions or two salts | Question: I know my question is going to be quite below the level of this site, but I am a first-year university student and I can't understand anything.
I am going to give an example from the book: distinguish between $\ce{AgI}$ and $\ce{AgCl}$. The answer sheet tells me to add $\ce{NH3}$; only $\ce{AgCl}$ dissolves. I don't even know why $\ce{AgI}$ precipitates while $\ce{AgCl}$ dissolves.
Or there is another question: distinguish between $\ce{Ag+}$ and $\ce{[Ag(NH3)2]+}$. The answer sheet tells me to add $\ce{NaCl}$; only $\ce{Ag+}$ gives a precipitate. Why doesn't $\ce{[Ag(NH3)2]+}$ also give a precipitate?
How do I know which cations or anions or salts are soluble in which solvent? What should I look when I am trying to distinguish ions or salts? Or how do I know if there is no reaction? I know these questions are too general but I am having a hard time understanding this class.
You can either answer the questions from the book (which I failed to understand) or help me to understand the basis of the subject by answering others. Or both. I would really appreciate any kind of answers coming from you.
Answer: Answering this question requires a preliminary discussion of the solubility product constants of three silver halides and the formation constants of three silver complexes. First, note that $\ce{AgCl}$, $\ce{AgBr}$, and $\ce{AgI}$ are all insoluble in water, but insoluble is a relative term in the end. Their respective solubility equilibria and solubility product constants, i.e., equilibrium constants for their dissolution, are:
$$\ce{AgCl(s) <=> Ag^+ (aq) + Cl^- (aq) \quad $K_\mathrm{sp(1)} = \pu{1.8E-10}$ \tag 1}$$
$$\ce{AgBr(s) <=> Ag^+ (aq) + Br^- (aq) \quad $K_\mathrm{sp(2)} = \pu{5.4E-13}$ \tag 2}$$
$$\ce{AgI(s) <=> Ag^+ (aq) + I^- (aq) \quad $K_\mathrm{sp(3)} = \pu{8.3E-17}$ \tag 3}$$
Silver ions also form complexes with ammonia, thiosulfate ion, and cyanide ion. The formation equilibria and associated equilibrium constants are as follows:
$$\ce{Ag^+ (aq) + 2 NH3 (aq) <=> [Ag(NH_3)_2]^+ (aq) \quad $K_\mathrm{f(4)} = \pu{1.6E7}$ \tag 4}$$
$$\ce{Ag^+ (aq) + 2 S2O_3^2- (aq) <=> [Ag(S2O3)2]^3- (aq) \quad $K_\mathrm{f(5)} = \pu{2.0E13}$ \tag 5}$$
$$\ce{Ag^+ (aq) + 2 CN^- (aq) <=> [Ag(CN)2]^- (aq) \quad $K_\mathrm{f(6)} = \pu{1.0E21}$ \tag 6}$$
Now consider the following sequence of aqueous solution additions:
An aqueous $\ce{NaCl}$ solution is added to an aqueous solution of $\ce{AgNO3}$. Then $\ce{AgCl}$ precipitates, as per equilibrium (1).
Next concentrated ammonia is added in excess, i.e., well above the 2 to 1 stoichiometry of equilibrium (4). Then the following equilibrium, obtained by adding equilibria (1) and (4), occurs:
$$\ce{AgCl (s) + 2 NH3 (aq) <=> [Ag(NH3)2]^+ (aq) + Cl^- (aq) \quad $K_\mathrm{sp(1)} K_\mathrm{f(4)} = \pu{2.9E-3}$ \tag 7}$$
Although the equilibrium constant is less than unity, adding concentrated ammonia in excess (in a hood!) results in all of the $\ce{AgCl}$ dissolving: the equilibrium is driven to the right.
Next $\ce{NaBr}$ is added. This results in precipitation of $\ce{AgBr}$ via the following equilibrium:
$$\ce{[Ag(NH_3)_2]^+ (aq) + Br^- (aq) <=> AgBr (s) + 2 NH3 (aq) \quad $1/(K_\mathrm{sp(2)} K_\mathrm{f(4)}) = \pu{1.2E5}$ \tag 8}$$
This equilibrium is simply the reverse of the addition of equilibria (2) and (4).
Next excess sodium thiosulfate ($\ce{Na2S2O3}$) is added. Then $\ce{AgBr}$ dissolves as per the following equilibrium, obtained by adding equilibria (2) and (5):
$$\ce{AgBr (s) + 2 S2O3^2- (aq) <=> [Ag(S2O3)2]^3- (aq) + Br^- (aq) \quad $K_\mathrm{sp(2)} K_\mathrm{f(5)} = \pu{10.8}$ \tag 9}$$
Next $\ce{KI}$ is added. This results in precipitation of $\ce{AgI}$ via the following equilibrium:
$$\ce{[Ag(S2O3)2]^3- (aq) + I^- (aq) <=> AgI (s) + 2 S2O3^2- (aq) \quad $1/(K_\mathrm{sp(3)} K_\mathrm{f(5)}) = \pu{6.0E2}$ \tag{10} }$$
This equilibrium is simply the reverse of the addition of equilibria (3) and (5).
Lastly, $\ce{KCN}$ is added: in a hood, with proper safety precautions! Then $\ce{AgI}$ dissolves as per the following equilibrium, obtained by adding equilibria (3) and (6):
$$\ce{AgI (s) + 2 CN^- (aq) <=> [Ag(CN)2]^- (aq) + I^- (aq) \quad $K_\mathrm{sp(3)} K_\mathrm{f(6)} = \pu{8.3E4}$ \tag{11} }$$
When I did this demo in lectures, I skipped the last step in order to avoid dealing with the hazardous waste issue that cyanide solutions present.
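As a quick arithmetic check, the combined constants quoted in equilibria (7) through (11) follow directly from multiplying (or inverting the product of) the $K_\mathrm{sp}$ and $K_\mathrm{f}$ values listed above; a sketch in Python:

```python
# solubility products and formation constants as quoted above
ksp = {"AgCl": 1.8e-10, "AgBr": 5.4e-13, "AgI": 8.3e-17}
kf = {"Ag(NH3)2+": 1.6e7, "Ag(S2O3)2^3-": 2.0e13, "Ag(CN)2-": 1.0e21}

print(ksp["AgCl"] * kf["Ag(NH3)2+"])          # eq (7):  ~2.9e-3
print(1 / (ksp["AgBr"] * kf["Ag(NH3)2+"]))    # eq (8):  ~1.2e5
print(ksp["AgBr"] * kf["Ag(S2O3)2^3-"])       # eq (9):  ~10.8
print(1 / (ksp["AgI"] * kf["Ag(S2O3)2^3-"]))  # eq (10): ~6.0e2
print(ksp["AgI"] * kf["Ag(CN)2-"])            # eq (11): ~8.3e4
```

All five reproduce the quoted constants to the two significant figures given.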
So now, at long last, the OP's questions:
Distinguish between $\ce{AgCl}$ and $\ce{AgI}$ by adding ammonia.
From equilibrium (7), $\ce{AgCl}$ can be dissolved if an excess of concentrated ammonia is added. Trust me: this should be done in a hood! But, comparing equilibria (1) and (7), $\ce{AgI}$ is more than a million times less soluble than $\ce{AgCl}$, so no amount of concentrated ammonia will significantly dissolve $\ce{AgI}$.
The OP's second question involves distinguishing between silver ion and the silver ammonia complex.
The OP's answer sheet claims that adding $\ce{NaCl}$ should result in only free silver ion giving a precipitate. At first glance, equilibrium (7) run in reverse suggests that a sufficiently high chloride ion concentration should precipitate $\ce{AgCl}$ even from the ammonia complex. However, silver also forms $\ce{[AgCl2]^-}$, $\ce{[AgCl3]^2-}$, and $\ce{[AgCl4]^3-}$ when the chloride concentration is high, so adding chloride will not cause $\ce{AgCl}$ to precipitate from the complex, in agreement with the OP's answer sheet. See here as well.
Last thought: I have done this demo in lecture, minus the last cyanide step, and I think the OP should not be expected to know all this stuff, especially if an exam is involved. The entire point of this kind of problem is to show that we can control equilibria to our advantage in, e.g., gold extraction via the cyanide process, and insoluble is a relative term.
Source of all solubility product constants and formation constants:
Daniel C. Harris, Appendix I In Quantitative Chemical Analysis; 7th Ed.; W. H. Freeman & Company: New York, NY, 2007 (ISBN: 0-7167-7041-5; ISBN-13: 9780716770411). | {
"domain": "chemistry.stackexchange",
"id": 13904,
"tags": "inorganic-chemistry, solubility, analytical-chemistry, precipitation"
} |
Expanding a metric tensor in a Taylor series around another metric tensor | Question: I am doing problem 5.11 in Guidry, which asks the following:
using $$g_{\mu \nu}(x) = \frac{\partial x'^{\alpha}}{\partial x^{\mu}} \frac{\partial x'^{\beta}}{\partial x^{\nu}} g_{\alpha \beta}(x')$$ and the transformation $$x^{\mu} \rightarrow x'^{\mu} = x^{\mu} + \epsilon K^{\mu}$$ to show that to first order, $$g_{\mu \nu}(x) = (\delta^{\alpha}_{\mu}+\epsilon\partial _{\mu}K^{\alpha})(\delta^{\beta}_{\nu}+\epsilon\partial _{\nu}K^{\beta})g_{\alpha \beta}(x').$$ Then, by expanding $g_{\alpha \beta}(x')$ in a Taylor series around $g_{\alpha \beta}(x)$ show that to order $\epsilon$ the above equation implies that $$\partial _{\nu}K_{\mu} + \partial _{\mu}K_{\nu} + K^{\gamma}\partial_{\gamma}g_{\mu \nu} = 0.$$ The part that I am stuck at is expanding $g_{\alpha \beta}(x')$ in a Taylor series. The solution set says it should be $$g_{\alpha \beta}(x) + \frac{\partial g_{\alpha \beta}}{\partial x^{\gamma}} \vert_x\Delta x^{\gamma} = g_{\alpha \beta}(x')$$ and that $$\Delta x^{\gamma} = \epsilon K^{\gamma}.$$ However, I do not understand why $\Delta x^{\gamma} = \epsilon K^{\gamma}$ and how to interpret $x^{\gamma}$ here.
Answer: $\Delta x^\gamma = x'^\gamma - x^\gamma$. Here $\gamma$ is just an index. | {
"domain": "physics.stackexchange",
"id": 89088,
"tags": "general-relativity, differential-geometry, metric-tensor, coordinate-systems, vector-fields"
} |
How to check if $m$ numbers in a sequence satisfy a condition, such that all these numbers are spaced apart by at least $k$? | Question: Suppose we have a sequence $s$ with $n$ elements from $s[1..n]$. I want to check if there exists $m \leq n$ elements in this sequence that each satisfy some simple condition (that can be tested in $O(1)$ for each $s[i]$), such that all of these elements are spaced apart by at least $k$. Is there an algorithm that can achieve this in $O(n)$ time?
For example, suppose we have $s=[3, 4, 13, 2, 6, 4, 1, 9, 11, 5]$ and we are trying to verify if there exists $m=3$ elements that satisfy the condition $s[i] \leq 4$ and are spaced apart by at least $k=2$. This is true because $s[1] = 3, s[4] = 2, s[7] = 1$ and $s[1], s[4], s[7]$ are spaced apart by at least $2$.
A naive solution that I thought of involved iterating through the sequence and recording each $s[i]$ that satisfies the condition, and then for each $s[i]$, trying to find $m-1$ others that don't violate the spacing requirement. I'm not sure how I can do this in $O(n)$ time. Is it possible?
Answer: As Yuval Filmus said in the comment, you can solve the optimization version (finding maximum viable $m$, which is a stronger version) by dynamic programming.
Let $m_i$ be the maximum viable elements of the sub-problem on $s[1..i]$. Consider the sub-problem on $s[1..i]$. An optimal solution has two choices.
It excludes $s[i]$, then it must choose an optimal solution of the sub-problem on $s[1..(i-1)]$.
It includes $s[i]$, then it cannot include $s[i-1],\ldots,s[i-k+1]$, and must choose an optimal solution of the sub-problem on $s[1..(i-k)]$.
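In code, the two choices above give a one-pass DP; a sketch (the function name and the callable condition argument are my own choices for illustration):

```python
def max_spaced(s, k, cond):
    """Max number of elements satisfying cond whose (1-based) indices
    differ by at least k, via the recursion described above."""
    n = len(s)
    m = [0] * (n + 1)                       # m[i] = answer for the prefix s[1..i]
    for i in range(1, n + 1):
        m[i] = m[i - 1]                     # choice 1: exclude s[i]
        if cond(s[i - 1]):                  # choice 2: include s[i]
            m[i] = max(m[i], m[max(i - k, 0)] + 1)
    return m[n]

s = [3, 4, 13, 2, 6, 4, 1, 9, 11, 5]
print(max_spaced(s, 2, lambda x: x <= 4))   # 3, matching the example in the question
```

The original existence question for a given $m$ is then just `max_spaced(s, k, cond) >= m`, still in $O(n)$ time.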
Now you can write a recursion formula for $m_i$, and compute $m_1,\ldots,m_n$ in order in $O(n)$ time. | {
"domain": "cs.stackexchange",
"id": 11234,
"tags": "algorithms, arrays, subsequences"
} |
Volume change inside a balloon upon decreasing the outer pressure | Question:
A balloon is filled with hydrogen at room temperature. It will burst
if pressure exceeds 0.2 bar. If at 1 bar pressure the gas occupies
2.27 L volume, up to what volume can the balloon be expanded?
Now I can easily figure out that the pressure is 0.2 bar at a volume of 11.35 L. However, when I check the answer in my book, it says that the volume of the balloon should be less than 11.35 L.
However, if you decrease the volume from 11.35 L, wouldn't the pressure become greater than 0.2 bar, hence bursting the balloon?
Answer: The relationship between pressure and volume is given by Boyle's Law.
$$\mathrm{P}_1\mathrm{V}_1 = \mathrm{P}_2\mathrm{V}_2$$
P (bar)   V (L)
0.199     11.41
0.20      11.35
0.201     11.29
1.0        2.27
So yes, you're right. Using Boyle's Law, the volume should be more than 11.35L so the pressure stays below 0.20 bar.
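The table above is just Boyle's Law rearranged to $V_2 = P_1 V_1 / P_2$; a quick check in Python:

```python
p1, v1 = 1.0, 2.27        # starting pressure (bar) and volume (L)
for p2 in (0.199, 0.20, 0.201, 1.0):
    v2 = p1 * v1 / p2     # Boyle's Law: P1 V1 = P2 V2
    print(f"P = {p2:5.3f} bar  ->  V = {v2:5.2f} L")
# P = 0.200 bar gives V = 11.35 L, as in the table
```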
However this doesn't make real world sense. When you blow up a balloon, as the pressure increases the volume increases.
Edit - Thanks @Mithoron, here is a version of the problem that works...
A balloon is filled with hydrogen at room temperature (25 C) and pressure (1.00 bar) to a volume of 2.27 L in a chamber. The pressure in the chamber is then reduced isothermally. The balloon will burst when the pressure in the chamber is reduced to 0.200 bar. What will the volume of the balloon be just as it explodes? | {
"domain": "chemistry.stackexchange",
"id": 5308,
"tags": "homework, gas-laws, erratum"
} |
Concept of potential difference | Question:
work done = charge x potential difference
Let's suppose you have two unit charges. They flow across two conductors of identical dimensions. One flows through it in time $t_1$ and the other flows in time $t_2$. Where $t_2 > t_1$. How will you compare the potential differences across the two identical conductors i.e which one is greater and which one is smaller? Please compare the potential differences in terms of work done by the two unit charges while flowing through the conductors in the given amount of time?
Answer: $\Delta V_1 < \Delta V_2$
Work also equals $\Delta K.$ (Actually $q\Delta V = -W$) Assuming they have identical masses and initial velocities, you can use the
following reasoning $$ t_1 < t_2 \implies \Delta K_1 > \Delta K_2 \implies \Delta U_1 < \Delta U_2 \implies q \space\Delta V_1 < q \space \Delta V_2 \implies \Delta V_1 < \Delta V_2$$ | {
"domain": "physics.stackexchange",
"id": 44717,
"tags": "electrostatics, charge, work, potential, potential-energy"
} |
Which kind of electrical brain activity is associated with consciousness and why? | Question: According to this article The ethical brain
At the end of the week 5 into the 6 (42-43 days) the first electrical brain activity occurs in a pre-born developing human.
And according to the same article
This activity, however, is not coherent activity of the kind that
underlies human consciousness, or even the coherent activity seen in a
shrimp's nervous system
My question is, which kind of electrical brain activity is associated with consciouness and why?
Answer: Gamma band oscillations (GBO) (Wikipedia) (NCBI) are 30-90 Hz electrical waves generated by the brain and are thought to possibly be associated with cognition and consciousness (Panagiotaropoulos, 2012). Some evidence for this putative relationship can be seen with experiments such as pre-pulse inhibition (PPI), which can describe how our sensations are interpreted by the brain. Also, PPI may be aberrant in Schizophrenic patients (who have altered cognition). Changes to PPI are associated with aberrant GBO, which could imply an association between GBO and an individual's interpretation of stimuli, their cognition and by extension consciousness.
But of course, the counter-argument is that the intrinsic activity of the brain isn't changed, merely how the brain processes stimuli. However, all forms of measuring cognition that I've studied involve investigating responses to external stimuli, so as far as I know, this is a universal problem. Furthermore, there's a lot we still don't know about consciousness and no model or measurement is perfect. This article gives quite a nice overview on neural correlates of consciousness. | {
"domain": "biology.stackexchange",
"id": 8610,
"tags": "human-biology, neuroscience, brain, electrophysiology"
} |
$T$-matrix approach to solve 1D Schrödinger equation for an incident wave on a double stepped potential; am I on the right track? What's next? | Question: Introduction/background
I'm looking to solve for the transmitted and reflected probabilities for a 1D QM wave packet on a potential step of finite thickness and arbitrary shape. To do that I'm going to use the script and technique in Jake VanderPlas' blog post Animating the Schrödinger Equation, which uses a split-step Fourier method and a Gaussian wave packet rather than a single $k_0$.
Results of a quick hack/test below show an attempt at an "antireflection coating" with two steps roughly $\lambda / 4$ apart in a rough analogy to an index matching layer in optics. The quick test of a double step does demonstrate a reduced reflection, which encourages me to continue.
above: before, and below: after scattering. left: single step, right: double step of same final height. click individual images for full size.
However, to check my calculation (I'm needing to modify a few things, including narrowing the width of the wave packet in $k$-space) I need an analytical solution to compare it with.
I'm not a quantum mechanic (it's been more than several decades) but from One-dimensional Schrödinger Equation: Transmission/Reflection from a step (and several other places) I see that if the incident wave function on a single step from $V=0$ to $V=V_0$ were $\exp(ik_o x)$ then the solution will be $\exp(ik_o x) + r\exp(-ik_o x)$ on the left and $t\exp(ik_o x)$ on the right where
$$r = \frac{k-k'}{k+k'}$$
$$t = \frac{2k}{k+k'}$$
where
$$k = \sqrt{2 M E}$$
$$k' = \sqrt{2 M (E-V_0)}$$
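As a quick numerical sanity check of these step formulas (a sketch, keeping the question's units), probability conservation requires $|r|^2 + (k'/k)\,|t|^2 = 1$, the $k'/k$ factor coming from the different flux on the transmitted side:

```python
import numpy as np

M = 1.0
E, V0 = 2.0, 1.0                  # choose E > V0 so both waves propagate
k = np.sqrt(2 * M * E)
kp = np.sqrt(2 * M * (E - V0))

r = (k - kp) / (k + kp)
t = 2 * k / (k + kp)

print(abs(r)**2 + (kp / k) * abs(t)**2)   # ~1.0: probability is conserved
```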
I am wondering if I can use the Transfer matrix or $T$-matrix approach to solving the arbitrary two-step problem. I'd done this a long time ago for a thin film optics application, there's a matrix for each interface and one for the drift space between the two transitions.
In this chapter titled Transfer Matrix Equation 1.59 gives an expression for a transfer matrix $\mathbf{M}$ as a function of $r, t, r', t'$ which I believe are the reflection and transmission coefficients from the right (as shown above) and from the left, respectively.
$$
\mathbf{M_S} = \begin{pmatrix}
t' - \frac{r r'}{t} & \frac{r}{t} \\
-\frac{r'}{t} & \frac{1}{t} \\
\end{pmatrix}
$$
Question
Have I got it right so far? If so, what would the matrix for the drift space between the two steps look like perhaps something of the following form for a distance $d$ between the two steps?
$$
\mathbf{M_D} = \begin{pmatrix}
\exp(ik'd) & 0 \\
0 & \exp(-ik'd) \\
\end{pmatrix}
$$
Given the product of three matrices $\mathbf{M_{S1}} \ \mathbf{M_{D12}} \ \mathbf{M_{S2}}$ for the first step, the drift, and the second step, how would I use that to get the final amplitudes of reflected wave to the left and the transmitted wave to the right?
Answer: Theoretically everything seems right. I would be very careful about the order of $M_s$; it seems the reference you cite uses an unconventional notation for the transfer matrix. In your case of a double barrier, the analytical form of $r$ and $t$ is easy; however, for some more complex combinations of rectangular potentials you will get a lot of r, t, r', t' in every region. If you are not interested in what happens between the regions, there is a good solution that I have studied recently.
Short derivation of Transfer Matrix
In this section I will layout my convention for transfer matrix for a simple finite rectangular potential. This can be easily extended to any combination of rectangular potential. Consider following potential,
The wavefunction after solving the Schrodinger's equation at every region looks like
\begin{equation}
\begin{cases}
\psi_1(x)=\alpha e^{ik_1 x}+\beta e^{-ik_1 x} & x< -\frac{a}{2} \\
\psi_2(x)=C e^{i k_2x}+D e^{-i k_2x} & -\frac{a}{2}< x< \frac{a}{2} \\
\psi_3(x)=\gamma e^{ik_1 x}+\delta e^{-ik_1 x} & \frac{a}{2}< x
\end{cases}
\end{equation}
where $k_1=\sqrt{\frac{2mE}{\hbar^2}}$ and $k_2=\sqrt{\frac{2m(E-V_0)}{\hbar^2}}$.
Now using the boundary condition to find the coefficients,
\begin{align}
\label{1.4a}
\psi_1(-a/2)&=\psi_2(-a/2)\\
\label{1.4b}
\psi'_1(-a/2)&=\psi'_2(-a/2)\\
\label{1.4c}
\psi_2(a/2)&=\psi_3(a/2)\\
\label{1.4d}
\psi'_2(a/2)&=\psi'_3(a/2)
\end{align}
The above equations after putting the appropriate wavefunctions
\begin{align}
\alpha e^{-ik_1 a/2} + \beta e^{ik_1 a/2} &= Ce^{-i k_2 a/2} + De^{i k_2 a/2}\\
ik_1 \alpha e^{-ik_1 a/2} - ik_1 \beta e^{ik_1 a/2} &= i k_2 Ce^{-i k_2 a/2} - i k_2 De^{i k_2 a/2}
\end{align}
In the matrix form it can be written as,
\begin{equation}
\begin{pmatrix}
e^{-ik_1 a/2} & e^{ik_1 a/2}\\
ik_1 e^{-ik_1 a/2} & -ik_1 e^{ik_1 a/2}
\end{pmatrix}
\begin{pmatrix}
\alpha\\ \beta
\end{pmatrix} =
\begin{pmatrix}
e^{-i k_2 a/2} & e^{i k_2 a/2}\\
i k_2 e^{-i k_2 a/2} & -i k_2 e^{i k_2 a/2}
\end{pmatrix}
\begin{pmatrix}
C\\D
\end{pmatrix}
\end{equation}
\begin{equation}
\begin{pmatrix}
1 & 1\\
ik_1 & -ik_1
\end{pmatrix}
\begin{pmatrix}
e^{-ik_1 a/2} & 0 \\
0 & e^{ik_1 a/2}
\end{pmatrix}
\begin{pmatrix}
\alpha \\ \beta
\end{pmatrix} =
\begin{pmatrix}
1 & 1\\
i k_2 & -i k_2
\end{pmatrix}
\begin{pmatrix}
e^{-i k_2 a/2} & 0 \\
0 & e^{i k_2 a/2}
\end{pmatrix}
\begin{pmatrix}
C\\D
\end{pmatrix}
\end{equation}
Similarly for next boundary,
\begin{equation}
\begin{pmatrix}
1 & 1\\
i k_2 & -i k_2
\end{pmatrix}
\begin{pmatrix}
e^{i k_2 a/2} & 0 \\
0 & e^{-i k_2 a/2}
\end{pmatrix}
\begin{pmatrix}
C\\D
\end{pmatrix} =
\begin{pmatrix}
1 & 1\\
ik_1 & -ik_1
\end{pmatrix}
\begin{pmatrix}
e^{ik_1 a/2} & 0 \\
0 & e^{-ik_1 a/2}
\end{pmatrix}
\begin{pmatrix}
\gamma \\ \delta
\end{pmatrix}
\end{equation}
Now define
\begin{equation}
P_1 = \begin{pmatrix}
e^{-i k_1 a/2} & 0 \\
0 & e^{i k_1 a/2}
\end{pmatrix}\\
\end{equation}
\begin{equation}
P_2 = \begin{pmatrix}
e^{-i k_2 a/2} & 0 \\
0 & e^{i k_2 a/2}
\end{pmatrix}\\
\end{equation}
\begin{align}
D_1 &= \begin{pmatrix}
1 & 1\\
ik_1 & -ik_1
\end{pmatrix}\\
D_2 &= \begin{pmatrix}
1 & 1\\
i k_2 & -i k_2
\end{pmatrix}
\end{align}
\begin{align}
\label{1.10}
\notag \begin{pmatrix}
\alpha\\ \beta
\end{pmatrix}
&= P_1^{-1}D_1^{-1}D_2P_2P_2D_2^{-1}D_1P_1^{-1}
\begin{pmatrix}
\gamma \\ \delta
\end{pmatrix}\\
&= \underbrace{\begin{pmatrix}
t_{11} & t_{12}\\
t_{21} & t_{22}
\end{pmatrix} }_{= T}
\begin{pmatrix}
\gamma \\ \delta
\end{pmatrix}
\end{align}
Rearranging the terms, we get
\begin{equation}
\begin{pmatrix}
\gamma\\\beta
\end{pmatrix}
= \frac{1}{t_{11}}\begin{pmatrix}
1 & -t_{12}\\
t_{21} & |T|
\end{pmatrix}
\begin{pmatrix}
\alpha\\\delta
\end{pmatrix}
\end{equation}
Now, if we consider that the wave is coming from the left only, then putting $\delta=0$, we can find the transmission amplitude as $t=\frac{\gamma}{\alpha}$ and the reflection amplitude as $r=\frac{\beta}{\alpha}$. Thus you can find the values of $t_{ij}$ and get the T matrix in terms of r and t. This should give you back the $M_s$ that you wrote in the introduction. Note that it may not be exactly as you wrote it, since the arrangement of the amplitudes in your reference's convention differs from mine.
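To make this concrete, here is a sketch in Python/NumPy that builds the matrices of the derivation above for a single rectangular barrier and extracts r and t. The function names are my own, and I check the result against the standard textbook transmission formula for $E > V_0$:

```python
import numpy as np

hbar = m = 1.0

def D(k):
    # matches psi and psi' across an interface
    return np.array([[1, 1], [1j * k, -1j * k]], dtype=complex)

def P(k, a):
    # half-width phase matrix, diag(e^{-ika/2}, e^{+ika/2})
    return np.diag([np.exp(-1j * k * a / 2), np.exp(1j * k * a / 2)])

def barrier_rt(E, V0, a):
    k1 = np.sqrt(2 * m * E) / hbar
    k2 = np.sqrt(2 * m * (E - V0 + 0j)) / hbar     # complex sqrt covers E < V0
    inv = np.linalg.inv
    # T = P1^{-1} D1^{-1} D2 P2 P2 D2^{-1} D1 P1^{-1}, as derived above
    T = (inv(P(k1, a)) @ inv(D(k1)) @ D(k2) @ P(k2, a) @ P(k2, a)
         @ inv(D(k2)) @ D(k1) @ inv(P(k1, a)))
    return T[1, 0] / T[0, 0], 1 / T[0, 0]          # r, t

E, V0, a = 2.0, 1.0, 1.5
r, t = barrier_rt(E, V0, a)
k2 = np.sqrt(2 * m * (E - V0))
T_exact = 1 / (1 + V0**2 * np.sin(k2 * a)**2 / (4 * E * (E - V0)))
print(abs(t)**2)                 # agrees with T_exact
print(abs(r)**2 + abs(t)**2)     # ~1.0: flux conserved (same k on both sides)
```

Setting $V_0 = 0$ collapses the product to the identity ($t = 1$, $r = 0$), which is a useful quick check of the matrix ordering.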
My argument
One of the conveniences of doing this kind of calculation is that you need not worry about what is going on in the region inside the potential. For any potential that is a combination of rectangular pieces, you just need to carefully calculate the T matrix (depending on the boundaries and regions it can be a product of several matrices, but this is much easier than solving for the reflection and transmission coefficients at every boundary), and you will obtain the reflection and transmission amplitudes. If you want to find the wavefunction, just plug the amplitudes into the wavefunction in the first equation. You need not even find $k_2$ unless you require the wavefunction inside the potential, since $k_2^2 = k_1^2 - \frac{2mV_0}{\hbar^2}$.
I hope this helps you go beyond simple potential barriers. If there are any typos or suggestions you want to add, feel free to critique. | {
"domain": "physics.stackexchange",
"id": 83853,
"tags": "quantum-mechanics, schroedinger-equation, scattering, s-matrix-theory"
} |
Algorithm to say if win or lose using player's rating and randomness | Question: I wrote a small simulator to understand the Elo rating system. I can create players, matches and tournaments, and I want to be able to predict the match ending depending on the rating of each player and some randomness. At the beginning the ratings are all around the same values (say 600) and it appears to stay between 400 and 1600. This is what I've tried :
p1 and p2 are instances of Player, rating is an integer attribute.
# method of a Match-like class; assumes `import random` at module level
def random_win(self):
q = float(self.p2.rating)/float(self.p1.rating)
if q>2:
self.win_2()
elif q<0.5:
self.win_1()
else:
d = float(self.p2.rating)-float(self.p1.rating)
lim = max(self.p1.rating, self.p2.rating)/2
r = random.randint(-lim, lim)
if r > d:
self.win_1()
else:
self.win_2()
I'd like to know if there are some "official" ways of doing this, this custom one works quite fine but I think it can be improved or at least simplified. Any ideas ? The best would be to get something closer the World Chess Ratings.
Once the wins and losses are calculated and stored into the Player as a list attribute, I calculate the new rating for each player using this algorithm.
Answer: World Chess Ratings are going to be skewed: people that lose are less likely to play than people who win, so it's quite possible that everyone who went to a single tournament and got wrecked is not going to play in tournaments anymore. And Elo is a measurement of skill, not skill itself, so of course it won't go past a certain value. A good chess player who played 1 competitive match has a lower rating than his playing strength. So even though his rating is lower, he'll win more than his rating suggests, that's why his rating goes up.
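For reference, the "official" Elo machinery is just an expected score plus a rating update; a minimal sketch (function names, K-factor, and the skill numbers are my own choices) that separates hidden playing strength from visible rating, as the answer suggests:

```python
import random

def expected(r_a, r_b):
    # probability of A beating B implied by the ratings (standard Elo curve)
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, score_a, k=32):
    # score_a is 1 for an A win, 0 for a loss; total rating is conserved
    e = expected(r_a, r_b)
    return r_a + k * (score_a - e), r_b + k * ((1 - score_a) - (1 - e))

random.seed(0)
skill = {"p1": 1800.0, "p2": 1400.0}     # hidden true strengths
rating = {"p1": 1000.0, "p2": 1000.0}    # everyone starts equal
for _ in range(500):
    # outcome is driven by skill; the ratings merely observe it
    p1_wins = random.random() < expected(skill["p1"], skill["p2"])
    rating["p1"], rating["p2"] = update(rating["p1"], rating["p2"], float(p1_wins))
print(rating)   # p1 drifts well above p2, reflecting the skill gap
```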
As such, your simulation is going to keep giving skewed results until you give each player a "skill" attribute which determines their real playing strength. That means each player would have a rating (which starts at 1000) and a skill (which starts at a randomized value between 0 and 2500 or so, where the distribution is, say, Gaussian), and where you used to compare rating, you compare skill instead. This should give you a more realistic simulation of Elo rating, but you'll still be assuming that players all play equally as much (which is not the case in real life). | {
"domain": "codereview.stackexchange",
"id": 20857,
"tags": "python, performance, python-2.x, chess, battle-simulation"
} |
Basis independence in Quantum Mechanics | Question: The idea that the state of a system does not depend on the basis that we choose to represent it in, has always puzzled me. Physically I can imagine that the basis ought to just yield an equivalent representation of the system, but I cannot convince myself of this independence. I apologize for the naive character of this post.
For example, in most QM books one frequently comes across statements such as: "The state of the system is represented by a vector (or ket) $|\psi\rangle$ in the corresponding Hilbert space", or that a "vector is not the same thing as the set of its components in a basis". Physically this means that e.g. the momentum $\mathbf{p}$ of a system should not depend on how we choose to orient the axes of our basis.
It is a very abstract idea that the state is just a vector lying in the entire Hilbert space (spanned by the set of all the common eigenvectors of the observables describing the system). How can I write or speak of this $|\psi\rangle$ without having a basis to represent it in. As soon as I want to say anything about this $\psi$, e.g. its norm, I will need its components in some basis, to compute $\sqrt{\langle \psi|\psi \rangle}.$
So what do I know about $|\psi\rangle$ without a chosen basis? How can I express that knowledge (of $\psi$) without a basis?
Is this independence better illustrated when one considers the fact that the set of eigenvalues for any chosen observable of the system, are the same regardless of the chosen basis?
Why it seems so difficult to imagine vector spaces, or vectors lying in abstract high-dimensional spaces ($|\psi\rangle \in \mathcal{H}$), without a basis? In what sense do we mean that a vector is more than just its components $(\langle v_1|\psi\rangle, \langle v_2|\psi\rangle, \dots)$?
But as soon as I want to compute overlaps such as $\langle v_1|\psi\rangle$ or norms $||\psi||$ I need the components of $|\psi\rangle$ in some basis. So how can I convince myself that no matter what basis I choose, this abstract $|\psi\rangle \in \mathcal{H}$ will not depend on it?
Finally, how should I interpret $|\psi\rangle \in \mathcal{H}$ without necessarily having an observable in mind? (i.e. the general statement that the state of the system lies in the Hilbert space).
Answer: The state is a vector in the Hilbert space of the Hamiltonian, which gives it a natural basis in terms of the eigenvectors; distinct eigenvalues then exist in distinct (orthogonal) subspaces - for degenerate values the subspaces are larger, but they are still distinct from all others. Clearly this situation gives many advantages in analysis.
However, this representation, though natural, and dependent only upon the spectral decomposition of the Hamiltonian, is not unique. The experimentalist will prefer to establish a basis which corresponds to his experimental setup! Choosing a different basis will change all of the coordinates, but does not change the states.
To make this clear recall that the state vectors of our Hilbert space can also be viewed a as rays, which are similar to the geometric rays of Euclidean geometry. Imagine the ray first, then superimpose a grid upon it - as you rotate the grid about the origin of the ray the intersections of the grid with the ray define the coordinates for that basis. The coordinates change, but the ray doesn't change. For ordinary geometry the relationships between two rays (angles, projections) don't change, so it is clear that some relationships between the coordinates are fixed - these are properties of a metric space.
Our Hilbert space is a normed space - distances don't mean anything, but a normalized state vector always has a length of 1, even when you change the basis; nor does the length change under unitary operators - hence their name.
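The claim that norms and overlaps survive a change of basis is easy to check numerically; a sketch with a random unitary (a hypothetical 3-dimensional example):

```python
import numpy as np

rng = np.random.default_rng(1)

# a random normalized state and another random vector
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
v = rng.normal(size=3) + 1j * rng.normal(size=3)

# a random unitary = change of orthonormal basis (QR of a random complex matrix)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

phi = Q @ psi                                    # same ket, new coordinates
print(np.linalg.norm(psi), np.linalg.norm(phi))  # both 1.0 (up to rounding)
print(np.vdot(psi, v), np.vdot(Q @ psi, Q @ v))  # overlaps are unchanged too
```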
All of this becomes clear in a good linear algebra course. | {
"domain": "physics.stackexchange",
"id": 28296,
"tags": "quantum-mechanics, hilbert-space, vectors"
} |
What is the relationship between driving point resistance and Thevenin resistance? | Question: Thevenin Theorem
A linear, active, resistive network which contains one or more voltage and current sources, can be replaced by a single voltage source $V_T$ (the Thevenin voltage) and a series resistance $R_T$ (the Thevenin resistance).
However, when the entire network has only a single voltage source $V_s$, then the rest of the network is passive; if current through this source is $I_s$ the driving point resistance is defined as,
$$
R_{\text{dp}} := \frac {V_s}{I_s}
$$
Now my simple question, can't we say that in this scenario,$R_{\text{dp}}$ is $R_T$ and $V_s$ is $V_T$?
Thanks.
Answer: For an a.c. circuit
(this also applies to ideal d.c. circuits without any reactance $X$):
Since the actual circuit and the simplified Thevenin circuit (one source with one ohmic resistor load in series, replacing the real parts of the possibly complex discrete impedance components and ohmic resistor components of the actual a.c. circuit, e.g. coils, capacitors, resistors, etc., together with the algebraic sum of all the active sources) are equivalent with regard to the real power consumption in your circuit, then for an actual circuit having one source and multiple impedance components and/or ohmic resistors (in any arbitrary series or parallel combination),
$$
|Z|=\sqrt{Z Z^{*}}=\sqrt{R^{2}+X^{2}} = R_{\text{dp}} = \frac {V_s}{I_s} = \frac {V_{Th}}{I_{Th}} = R_{\text{Th}}
$$
where $R$ is the net ohmic resistance and $X$ the net reactance of all passive circuit components for a single source, multiple mixed ohmic and reactive components circuit. $V_{s}$ and $I_{s}$ are the RMS values of source voltage and source current in the a.c. actual circuit.
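A small numerical illustration of $|Z| = \sqrt{R^2 + X^2}$ and $R_{\text{dp}} = V_s/I_s$ for a series R-C branch (component values are made up for the example):

```python
import numpy as np

R, C = 50.0, 1.0e-6                 # ohms, farads (hypothetical values)
w = 2 * np.pi * 1.0e3               # angular frequency at 1 kHz
Z = R + 1 / (1j * w * C)            # complex impedance of the series R-C
X = Z.imag                          # net reactance

assert np.isclose(abs(Z), np.hypot(R, X))   # |Z| = sqrt(R^2 + X^2)

V_rms = 10.0
I_rms = V_rms / abs(Z)
print(abs(Z), V_rms / I_rms)        # the driving-point ratio reproduces |Z|
```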
Of course, as described in a previous answer, it also depends on where you assign the output of your circuit, since $R_{\text{Th}}$ will change if your assigned output is not in parallel with the source at the end of the actual circuit. Thevenin describes a power-consumption-equivalent circuit that depends on where you sample the power in the circuit. If the output is not as described above, then obviously $R_{\text{dp}} \neq R_{\text{Th}}.$
Also, notice that in this case, $V_{s}$ and $I_{s}$ are not equal to $V_{Th}$ and $I_{Th}$. | {
"domain": "physics.stackexchange",
"id": 86076,
"tags": "electric-circuits, electrical-resistance, electrical-engineering"
} |
Replace outside comma with newline | Question: I'm looking for a better alternative to replace insert + gsub.
hobbies = "Hobbies: \nsports (basketball, gym), foods (hamburger, steak, pasta), reading, movie, justfortesting((test1, test2), test3)"
print "\n\nBefore: \n\n" + hobbies
counter = 0
for i in 0...hobbies.length do
case hobbies[i]
when '('; counter += 1
when ')'; counter -= 1
when ','; hobbies.insert i + 1, "\n" if counter == 0
end
end
counter = nil
hobbies.gsub!(",\n", "\n")
hobbies.gsub!("\n ", "\n")
print "\n\nAfter: \n\n" + hobbies
Output:
Before:
Hobbies:
sports (basketball, gym), foods (hamburger, steak, pasta), reading, movie, justfortesting((test1, test2), test3)
After:
Hobbies:
sports (basketball, gym)
foods (hamburger, steak, pasta)
reading
movie
justfortesting((test1, test2), test3)
Answer: How about:
hobbies = "Hobbies: \nsports (basketball, gym), foods (hamburger, steak, pasta), reading, movie, justfortesting((test1, test2), test3)"
print "\n\nBefore: \n\n" + hobbies
counter = 0
hobbies = hobbies.chars.map do |char|
next "\n" if char == ',' and counter == 0
counter += 1 if char == '('
counter -= 1 if char == ')'
char
end.join
counter = nil
print "\n\nAfter: \n\n"
hobbies.each_line do |line|
puts line.lstrip
end | {
"domain": "codereview.stackexchange",
"id": 8006,
"tags": "ruby"
} |
Why are waveguide fields not infinite? | Question: I was reading The Feynman Lectures on Physics, Volume II, when I got to the section on waveguides. Toward the end, Feynman uses the concept of an infinite number of alternating line sources to explain attenuation and propagation in the guide.
While I very much enjoyed this point of view, one problem did stick out in my mind. If we have an infinite number of sources, I agree the beam will zero out everywhere except at the special angles for which all sources constructively interfere (far from the sources). However, at this angle, wouldn't an infinite number of positively reinforcing waves produce an infinite field? Do the further sources produce weaker fields, making the sum finite?
Even if that's true, what if we consider the problem in terms of bouncing plane waves? Won't the transverse field components be infinite in extent and constant down the guide, setting up a standing wave pattern. But, since in an infinitely long guide there will be infinite in-phase reflections, shouldn't the resulting fields be infinite?
When we mathematically solve for the fields however, they seem to turn out perfectly finite, even in ideal conductors.
Where's the discrepancy? Any help would be greatly appreciated!
Answer: You're presumably referring to this construction:
Fig. 24–15. The line source $S_0$ between the conducting plane walls $W_1$ and $W_2$. The walls can be replaced by the infinite sequence of image sources
This does indeed produce a field which has infinite energy density, or more precisely, if you draw a plane parallel to the source wires then it has infinite power flowing through it. However, this is because you have an infinite number of copies of the waveguide you're trying to describe: each waveguide copy carries a finite amount of power (the same for all copies) and it becomes infinite when you count an infinite number of them.
There's nothing in the infinite-source case, though, which really requires that the fields themselves be infinite. (In fact, you know they will be regular inside each copy of the waveguide, because you can solve it for the actual waveguide (you've already done this) and show that the fields are finite and regular.)
Keep in mind that in the construction we're replacing the waveguide walls with the infinite collection of sources, so that there are no reflections in the infinite-sources case. This is the usual case with the method of images - we choose the image sources so that the fields will cancel out in the plates, and then we remove the plates. This is what seems to be tripping you up when you say things like
But, since in an infinitely long guide there will be infinite in-phase reflections, shouldn't the resulting fields be infinite?
There's no reflections - simply fields travelling through and interfering. The fields from $S_0$ will thin out (as they're no longer contained inside the waveguide) and their energy will be replaced by the fields from all the other sources. | {
"domain": "physics.stackexchange",
"id": 26738,
"tags": "electromagnetism, waveguide"
} |
Experimental samples with rare earth metal | Question: Many experiments, such as optical and superconductivity experiments, use samples that involve rare earth metals and transition metals. Why are they used so often? Are the main reasons:
They have the required electronic structure in its d- and f- orbital so that we can create the required spin-spin interaction.
They provide the required structure of the crystal
They provide the required energy band gap
Any elaboration?
Edit: Here the samples I mean are the usual YBCO, or compounds from some papers, say Gd5Si2Ge2, that I do not have any idea about. I am also curious why they decide to investigate a particular compound in the first place.
Answer: It's the open d- and f-shells that make for very interesting physics, because they can introduce effects that dramatically depend on doping. Look up "Strongly Correlated Materials" for an overview.
"domain": "physics.stackexchange",
"id": 140,
"tags": "solid-state-physics, experimental-physics"
} |
Can a $CX_{1,2}\cdot CX_{2,1}$ be synthesised to some $CU$ plus local gates? | Question:
Can the above circuit be synthesised to an operation where there is only one control qubit? I.e. a controlled-unitary gate, possibly surrounded by local gates.
Answer: It can't.
The easiest way to determine this for an arbitrary operation is to compute its KAK decomposition. Only operations that have at most one non-zero interaction coefficient can be performed by at most one controlled operation. If you're limited to CNOT or CZ (as opposed to the full range of CPHASE(t)) then the non-zero coefficient must be maximum ($\pi/4$).
You can use cirq to compute KAK decompositions, and confirm that CNOT*NOTC has two non-zero interaction coefficients and therefore requires multiple controlled gates:
import cirq
a, b = cirq.LineQubit.range(2)
print(cirq.kak_decomposition(cirq.Circuit(
cirq.CX(a, b),
cirq.CX(b, a),
)))
KAK {
xyz*(4/π): 1, 1, 0
before: (0.826*π around -0.679*X+0.281*Y+0.679*Z) ⊗ (0.75*π around Z)
after: (-0.25*π around Z) ⊗ (-0.826*π around 0.679*X+0.281*Y-0.679*Z)
}
The line xyz*(4/π): 1, 1, 0 is the interaction coefficients. | {
"domain": "quantumcomputing.stackexchange",
"id": 4769,
"tags": "quantum-circuit, gate-synthesis"
} |
How to calculate the temperature of Earth's surface | Question: In a dissertation, I read about the greenhouse effect, which makes the surface warmer. It says, I quote:
The greenhouse effect results in an average surface temperature of approx. $14^\circ\ \rm C$, compared to $-19^\circ\ \rm C$ if the atmosphere would have been transparent to thermal radiation.
Is there some simple way to estimate the average temperature of Earth's surface using some parameters like sunlight intensity?
Answer: An easy calculation is to start with the solar constant, the power (energy per unit time) per unit area delivered by solar radiation at a distance of one astronomical unit. This is 1.361 kilowatts per square meter. The surface area of the Earth is $4\pi R^2$, where $R$ is the radius of the Earth, while the cross section of the Earth to solar radiation is $\pi R^2$. Thus, averaged over its surface, the Earth receives 1/4 of that solar constant.
Assume a planet with an atmosphere that is transparent in the thermal infrared, with the same albedo as that of the Earth (0.306), rotating rapidly like the Earth, and orbiting at the same distance from the Sun as the Earth. The effective temperature of this planet is given by the Stefan Boltzmann law:
$$T = \left(\frac{(1-\alpha)\,I_\text{sc}}{4\sigma}\right)^{1/4}$$
where
$\alpha$ is the albedo (0.306),
$I_\text{sc}$ is the solar constant ($1.361\ \mathrm{kW/m^2}$),
$\sigma$ is the Stefan–Boltzmann constant ($5.6704\times10^{-8}\ \mathrm{W/m^2/K^4}$), and
the factor of 1/4 arises from the fact that the Earth is a rapidly rotating spherical object.
The result is -19 °C. | {
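As a quick numerical check of the formula above, using the values listed in the answer:

```python
albedo = 0.306     # Earth's Bond albedo
I_sc = 1361.0      # solar constant, W/m^2
sigma = 5.6704e-8  # Stefan-Boltzmann constant, W/m^2/K^4

# Effective temperature of a rapidly rotating planet with a
# thermally transparent atmosphere
T = ((1 - albedo) * I_sc / (4 * sigma)) ** 0.25
print(round(T - 273.15))  # -> -19 (degrees Celsius)
```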
"domain": "earthscience.stackexchange",
"id": 742,
"tags": "meteorology, atmosphere, climate, climate-change, greenhouse-gases"
} |
Why am I getting a nullptr from my bag file? | Question:
I am playing back a bagfile and I get a nullptr when I should be getting actual data.
Below is a screenshot of the bagfile being viewed from RQT. There is only one message called "decomposition/sectors" and I am certain that it should have data. Yet, when I am playing back the bag file in C++ code for a unit test, I get a nullptr for this particular item. Why am I getting this? The other messages will be properly read and instantiated, but not this one.
Here is a snippet of code which shows how I am reading this:
void CoverageReporterTester::SendMessage(const rosbag::MessageInstance& bag_msg)
{
const std::string topic_name = bag_msg.getTopic();
if (topic_name == avidbots_topics::decomposition_sectors)
{
message_stack::sector_array msg1 = *bag_msg.instantiate<message_stack::sector_array>();
SendMessage(topic_name, msg1);
}
}
/* Read bag file */
rosbag::Bag bag_file;
std::string bag_dir = ros::package::getPath("coverage_reporter") +
"/test/data/bag_files/coverage_reporter.bag";
bag_file.open(bag_dir, rosbag::bagmode::Read);
/* List of topics we are interested in */
std::vector<std::string> topics;
topics.push_back(topics::decomposition_sectors);
topics.push_back(avidbots_topics::static_3d_costmap_topic);
rosbag::View view(bag_file, rosbag::TopicQuery(topics));
/* Loop through each message in the bag file which is associated with one of our topics listed above */
foreach(rosbag::MessageInstance const bag_msg, view)
{
coverage_reporter.SendMessage(bag_msg);
}
Originally posted by cpagravel on ROS Answers with karma: 156 on 2017-03-31
Post score: 0
Answer:
I found the answer: the message definition had changed since the bagfile was created. If there are any other reasons, please feel free to leave additional answers so others can use this as a reference.
Originally posted by cpagravel with karma: 156 on 2017-03-31
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by ahendrix on 2017-03-31:
Yep; instantiate returns a null pointer if it can't convert the stored message into the type you've given. This can happen if the message definition has changed, or if your code uses the wrong message type. | {
"domain": "robotics.stackexchange",
"id": 27485,
"tags": "rosbag, roscpp"
} |
Absorption coefficient for liquid water | Question: Wikipedia has this plot of the absorption spectrum of liquid water:
The vertical axis seems weird to me. The Beer-Lambert Law says $T=10^{-Ax}$ where $T$ is the fraction of light transmitted, $A$ is the absorption coefficient, and $x$ is the optical distance through the material. If we take the values listed on the vertical axis to be $A$ as I've described it here, the transmittance ranges from 80% through a 1 nm (!) thick sample at 80 nm wavelength ($10^{-10^8*10^{-9}}$) to 80% through a 10 m thick sample at 500 nm wavelength ($10^{-10^{-2}*10}$).
Alternatively phrased, the absorption coefficient at 2 um is $10^4\ \mathrm{m^{-1}}$. Therefore the transmittance would be $10^{-10^4*0.01}$ through a 1 cm sample. That would be one in $10^{100}$ photons transmitted through the sample. Given that IR spectroscopy is possible, that sure doesn't seem right to me.
Does the transmission of light through water really vary by 10 orders of magnitude or am I misreading this?
Answer: The plot is from Wikipedia, where important/helpful links to the source information can be found, and is worth a look.
A similar and more detailed plot can be found on this page of Martin Chaplin's famous site on all things water. I can't recommend that site highly enough! The plot is shown below, and note that it's using centimeters instead of meters.
This section in Wikipedia gives us a clue. It says
Absorption coefficients for 200 nm and 900 nm are almost equal at 6.9 m−1 (attenuation length of 14.5 cm).
Those numbers are reciprocal, and reading the links shows the terms refer to $1/e$ lengths, natural units.
Both this plot and Wikipedia's are intended to give a global electromagnetic perspective rather than detailed quantitative information. I believe there really is a peak near 2 microns where a 1 cm cuvette of pure water would be nearly opaque. Remember it's using natural units (base $e$), not base 10, so yes, if you sit on that peak, using:
$$T = \exp\left(-h\,(\mathrm{cm}) \times 100\,(1/\mathrm{cm})\right)$$
and plugged in a 1 cm optical path, you'd get a transmission of 4E-44, or basically zero. You may find that there are different cuvettes with substantially reduced optical paths that can be used in this case. A sub-1 millimeter path length cuvette might do better.
However, a little bit above or below 2 microns, the attenuation drops to only 10 per centimeter, which gives a transmission of about 5E-05 for a standard 1 centimeter path length cuvette, and a quality instrument would have no problem measuring that with a suitable detector and integration time.
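The two transmission figures quoted above can be reproduced directly (attenuation in natural units, path length 1 cm):

```python
import math

# On the ~2 micron peak: attenuation ~100 per cm over a 1 cm path
T_peak = math.exp(-100 * 1.0)
print(T_peak)  # ~4e-44, effectively opaque

# Slightly off the peak: attenuation ~10 per cm over a 1 cm path
T_off = math.exp(-10 * 1.0)
print(T_off)   # ~5e-05, measurable with a good detector
```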
If you're interested in going farther, I'd recommend you find a plot that is not drawn with such thick lines, or find tabulated data and plot it in detail by yourself. There are some numbers, just for example, in this 1975 paper, but the hard short-wavelength cutoff at 2 microns may obscure the peak there. However, let's look at the properties near 3 microns.
At 2.959 microns, $n, k = 1.329, 0.292$. Putting that into here
$$\frac{E}{E_0} \ = \ exp\left(j \left(n + jk\right) \frac{2\pi}{\lambda} x \right) $$
puts the electric field attenuation over 1 centimeter at $\exp(-6200)$. You need to square to get transmitted intensity, so doubling the argument gives 12,400/cm or 1.24/um, which very nicely matches the "3 micron peak" in the plot, especially the inset in red (see below). This is really a mixture of states in the circa 3400 cm${}^{-1}$ water vibrational spectrum. Also, when you read, make sure you keep track of the isotopic mixture being discussed. H2O and HDO differ profoundly in their infrared optical properties.
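The conversion from the extinction coefficient $k$ to the intensity attenuation can be checked numerically:

```python
import math

lam = 2.959e-6  # wavelength in meters
k = 0.292       # imaginary part of the refractive index at this wavelength

# The field decays as exp(-(2*pi*k/lambda) * x); intensity is the field
# squared, so the intensity attenuation coefficient is twice that.
alpha = 2 * (2 * math.pi * k / lam)  # per meter
print(alpha / 100)   # per centimeter, ~12400
print(alpha * 1e-6)  # per micron, ~1.24
```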
All is well, and water continues to be amazing, profound stuff! | {
"domain": "chemistry.stackexchange",
"id": 8239,
"tags": "physical-chemistry, water, spectroscopy, spectrophotometry"
} |
How to display CV_32FC2 images in ROS | Question:
I'm trying to use OpenCV to calculate real-time optical flow and display it in ROS. The task, using OpenCV's provided functionality, seems fairly easy. I've used uvc_camera and cv_bridge to capture images with ROS and convert them to cv::Mat. Then, I'm using calcOpticalFlowFarneback to calculate optical flow, which returns a cv::Mat optical flow image of type CV_32FC2.
The code I'm using to publish the image looks like this:
cv_bridge::CvImage flowImage;
sensor_msgs::ImagePtr flowImagePtr;
if (!_flow.empty()) {
flowImage.header = cvImagePtr_->header; //copy header from input image
flowImage.encoding = sensor_msgs::image_encodings::TYPE_32FC2;
flowImage.image = _flow; //_flow is calculated elsewhere
flowImagePtr = flowImage.toImageMsg();
publishCvImage(flowImagePtr); //uses a publisher to publish image
}
The code works fine, but ROS is unable to display floating point images (I tried both image_view and rviz). What is the right way to convert those images to a displayable form?
Cheers, Tom.
Edit:
Eventually I'm using these two methods to visualize the results (the code below is based on this):
cvImagePtr = cv_bridge::toCvCopy(image, sensor_msgs::image_encodings::RGB8);
flowImage = cvImagePtr->image; //original RGB captured image, needs to be converted to MONO8 for cv::calcOpticalFlowFarneback
cv::Mat velx = cv::Mat(flowImage.size(), CV_32FC1);
cv::Mat vely = cv::Mat(flowImage.size(), CV_32FC1);
cv::Mat flowMatrix; //outcome of cv::calcOpticalFlowFarneback
void OpticalFlow::extractFlowFieldFarneback() {
for (int row = 0; row < flowMatrix.rows; row++) {
float *vx = (float*)(velx.data + velx.step * row);
float *vy = (float*)(vely.data + vely.step * row);
const float* f = (const float*)(flowMatrix.data + flowMatrix.step * row);
for (int col = 0; col < flowMatrix.cols; col++) {
vx[col] = f[2*col];
vy[col] = f[2*col+1];
}
}
}
void OpticalFlow::overlayFlowFieldFarneback() {
if (!velx.empty() && !vely.empty()) {
for (int row = 0; row < flowImage.rows; row++) {
float *vx = (float*)(velx.data + velx.step * row);
float *vy = (float*)(vely.data + vely.step * row);
for (int col = 0; col < flowImage.cols; col++) {
double length = sqrt(vx[col]*vx[col] + vy[col]*vy[col]);
if (length > 1) {
cv::Point2i p1(col, row);
cv::Point2i p2(col + (int)vx[col], row + (int)vy[col]);
cv::line(flowImage, p1, p2, cv::Scalar(255, 0, 0, 0));
}
}
}
}
}
Originally posted by tom on ROS Answers with karma: 1079 on 2011-05-25
Post score: 1
Answer:
If you just want to visually inspect it, you could scale the value range according to the maximum value:
void depthToCV8UC1(const cv::Mat& float_img, cv::Mat& mono8_img){
//Process images
if(mono8_img.rows != float_img.rows || mono8_img.cols != float_img.cols){
mono8_img = cv::Mat(float_img.size(), CV_8UC1);}
//The following doesn't work if there are NaNs
double minVal, maxVal;
minMaxLoc(float_img, &minVal, &maxVal);
ROS_DEBUG("Minimum/Maximum Depth in current image: %f/%f", minVal, maxVal);
cv::convertScaleAbs(float_img, mono8_img, 100, 0.0);
}
Originally posted by Felix Endres with karma: 6468 on 2011-05-25
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Felix Endres on 2011-05-25:
Ah, that is because you have channels for x and y. I don't know any simple way to get that visualized, sorry
Comment by tom on 2011-05-25:
Thanks Felix, the idea is ok, but I get the error: OpenCV Error: Assertion failed (img.channels() == 1) in minMaxLoc. What I'd like to achieve the most, is to obtain nice real-time images with arrows showing temporary optical-flow vectors overlayed over the image - is there a simple way? | {
"domain": "robotics.stackexchange",
"id": 5668,
"tags": "opencv, rviz, image-view"
} |
Self-energy in two scalar Yukawa interaction | Question:
Considering the Lagrangian of two scalar fields in $d=4$:
$$\mathcal{L}=\frac{1}{2}(\partial\phi)^2-\frac{1}{2}m^2\phi^2+\frac{1}{2}(\partial\chi)^2-\frac{1}{2}M^2\chi^2-g\phi^2\chi$$
What would be the self-energy (diagrams, first order) for the $\chi$-particle?
The second particle confuses me somehow.
Answer: The $\chi$ particle seems to be interacting with the $\phi$ particle only in the $-g\phi^2\chi$ term, so the 1-loop correction to the $\chi$ propagator would be a loop of 2 $\phi$ particles. The first 4 terms are the kinetic and mass terms of the particles, while the rest (in this case 1) are the interaction terms - they tell you what type of particles (and how many of each) can appear in any given vertex (only 1 in this case), as well as the coupling constant (in this case, the coupling constant is $g$ and has dimension of mass$^1$).
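A sketch of the corresponding amplitude may help. Hedging on conventions (I assume a vertex factor of $-2ig$, the factor 2 coming from the two identical $\phi$ legs, and a symmetry factor of $1/2$ for the loop), the one-loop $\chi$ self-energy reads

$$-i\Sigma_\chi(p^2) = \frac{(-2ig)^2}{2}\int\frac{d^4k}{(2\pi)^4}\,\frac{i}{k^2-m^2+i\epsilon}\,\frac{i}{(p-k)^2-m^2+i\epsilon},$$

i.e. a single loop of two $\phi$ propagators attached to the external $\chi$ lines.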
"domain": "physics.stackexchange",
"id": 59608,
"tags": "quantum-field-theory, lagrangian-formalism, feynman-diagrams, interactions, self-energy"
} |
What is the major product obtained on acidification of substituted epoxide? | Question: The problem
Source: MS Couhan (Problems in Organic Chemistry. Chapter: Alcohols, phenols and ethers).
My Thoughts
If I proceed along path 1, $\ce{1a}$ is formed. It is unstable due to the inductive withdrawing effect of the $\ce{C=O}$ neighboring it. Eventually $\ce{2}$ is obtained.
However, along path 2, the relatively more stable $\ce{3a}$ (compared to $\ce{1a}$) is formed. $\ce{3a}$ on ring expansion gives $\ce{3b}$, leading to $\ce{4}$ ($\ce{3b}$ is relatively unstable compared to $\ce{1b}$).
My question
Which of the two paths should I choose that would lead to a major product ?
Answer: The reaction of epoxyketone 1 affords diketone 2 under photochemical and thermal conditions ostensibly through the diradical.1 Boron trifluoride provides the diketone 3 through acyl bond migration and not via alkyl group migration as in your intermediate 3a. I have not located a proton-catalyzed reaction. Unfortunately, diketone 3 was not one of your choices.
1) John R. Williams, George M. Sarkisian, James Quigley, Aaron Hasiuk, and Ruth VanderVennen, J. Org. Chem., 1974, 39, 1028. | {
"domain": "chemistry.stackexchange",
"id": 12343,
"tags": "reaction-mechanism"
} |
How does the voltage affect current? | Question: The way I see it, volts are ways of measuring the electric potential of an electron in a circuit (I'm assuming from one end of the battery to the other, but I could be wrong). So 1 coulomb of charge would have 8 joules of energy on the negative side of the battery and none on the positive side if it were an 8 volt battery.
If this is correct, how would volts affect the current (assuming the resistance is constant)? As in, do volts help "push" charge through a circuit? Are they both the potential energy of a coulomb and the "push" of the current?
Answer:
As in, do volts help "push" charge through a circuit?
In a resistive circuit or through a resistor circuit element, yes. But be aware that not everything behaves like a resistor. For example, a DC voltage won’t push current through a capacitor regardless of its strength (up to dielectric breakdown, of course)
Are they both the potential energy of a coulomb and the "push" of the current?
For a resistor this is approximately correct, except that it is the change in voltage rather than the voltage itself. Or in other words, as long as you are thinking of voltage as a difference between two points instead of an absolute number at each point. | {
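To make the resistor/capacitor contrast concrete, here is a minimal sketch (component values are arbitrary):

```python
R = 100.0  # resistance in ohms (arbitrary value)

# For a resistor, the current is set by the voltage *difference* (Ohm's law):
currents = [dV / R for dV in (1.0, 2.0, 4.0)]
print(currents)  # doubling the push doubles the current

# For an ideal capacitor, I = C * dV/dt, so a steady DC voltage
# (dV/dt = 0) drives no current, no matter how large the voltage is.
C = 1e-6       # capacitance in farads (arbitrary value)
I_cap = C * 0.0
print(I_cap)   # 0.0
```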
"domain": "physics.stackexchange",
"id": 79342,
"tags": "electric-circuits, electric-current, electrical-resistance, voltage"
} |
Centrifuge Artificial Gravity? | Question: I just saw the following video: NASA centrifuge. In the video, a person is walking on a huge spinning centrifuge.
But his weight is still acting downwards and there is no supporting force which will keep him from falling vertically to the ground. So why doesn't the person fall down?
Answer: If you look carefully, you'll notice wires attached to the person, as well as a boom above him that follows him as he walks. The wires are more apparent in the closeup (starting at 0:24), especially if you look at the shadow. This apparatus is what supports him against the force of gravity. | {
"domain": "physics.stackexchange",
"id": 53903,
"tags": "newtonian-mechanics, forces, classical-mechanics, centripetal-force, inertia"
} |
What's the influence of a tilted orbital plane, when observing an exoplanet transit? | Question: I made an illustration to explain what I mean: If we assume we have two similar planet/star systems (similar in size/mass/period...) but tilted differently relative to us.
How can we predict that differences in dimming measurements (time & intensity) are caused by the tilt of the orbital plane (and not by the diameter of the exoplanet)?
Answer: The depth of the transit tells you the size of the planet (relative to the star) in both cases.
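As a rough numerical illustration (ignoring limb darkening), the depth is approximately $(R_p/R_\star)^2$; for hypothetical Jupiter-around-Sun values:

```python
R_p = 7.1492e7    # Jupiter's equatorial radius, m
R_star = 6.957e8  # Sun's radius, m

# Fraction of the stellar disk blocked by the planet
depth = (R_p / R_star) ** 2
print(depth)  # ~0.0106, i.e. about a 1% dimming
```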
The difference in total eclipse width tells you the impact parameter (how far from the centre of the disk, the transit crosses). This in turn depends on the orbital inclination and the ratio of the stellar radius to the radius of the orbit.
A second constraint on inclination and the ratio of stellar radius to orbital radius comes from the ratio of total transit duration to the time when the planet is fully eclipsing the star. A more inclined transit has a shorter period of total eclipse but longer ingress and egress.
To get a full solution then requires some estimate of the stellar mass, which leads via Kepler's third law to the orbital radius.
There are also some important additional complications due to limb darkening and eccentricity that mean, in practice, this problem is not solved analytically, but by fitting a detailed model to the light curve. For instance, the transit depth becomes smaller for transits that don't cross the centre of the stellar disk, because of limb darkening.
The best picture I can find is copyrighted material, but you can find it here from Carole Haswell's book on Transiting Exoplanets. The inclinations range from 90 degrees to 80 degrees. | {
"domain": "astronomy.stackexchange",
"id": 2207,
"tags": "exoplanet, planetary-transits"
} |
Why do fields have to form a representation of the Lorentz group? | Question: It is often claimed in quantum field theory texts that to have a sensible Lorentz invariant theory, the fields introduced must be in representations of the Lorentz group. This fact has always seemed obvious to me until recently when I pondered on it and realised I couldn't convince myself of it.
For example, I could define a field $\psi$, which is a scalar under rotations and gains a factor of $e^{\vec{v} \cdot \vec{s}}$ under boosts, where $\vec{s}$ is the vector that points from my house to my local grocery store (potentially time dependent), as well a field $\phi$ which transforms the same way but with a minus sign in the exponent. Then
$$\int d^4x \psi \phi$$
is a Lorentz invariant action, though it is clearly absurd. My question is, why am I allowed to have scalars, and Dirac spinors, but not grocerystoreions?
Answer: The physical symmetries of a theory are defined by the action. (Here I'm thinking of classical field theory and ignoring the possibility of quantum anomalies). The logic is that the dynamics of a system are determined by the action; a symmetry is a transformation of the degrees of freedom that leaves the system invariant; therefore, a symmetry is a transformation of the fields that leaves the action invariant [as pointed out in the comments, the action need not actually be strictly invariant to give the same dynamics; for example, if a transformation simply rescales the action by an overall constant, then you will get the same equations of motion and therefore the same dynamics, at least classically. I will use the phrase "leave the action invariant" somewhat loosely to mean "gives you an identical physical system"]. Any transformation of the fields you come up with that leaves the action invariant is a symmetry of the theory.
In your case, the action has a Lorentz symmetry under which $\phi$ and $\psi$ transform as scalars in the usual way. In general, if you're able to construct a Lorentz invariant action, then you'd expect the individual pieces of the action must transform in some representation, in order to combine to form a scalar. Having said that, the representation can be "non-obvious" (more precisely, nonlinearly realized), if the symmetry is spontaneously broken. In the 1860s wasn't obvious that it would be helpful to formulate Maxwell's equations in terms of fields that transform nicely under Lorentz transformations, given that the world we live in day-to-day apparently has a preferred frame where the Sun is stationary.
Now, you're right that often the above story is usually presented "backwards," where we start with field transformations, and then build a Lagrangian from there. That is because we are usually not in the position of knowing the fundamental Lagrangian and deriving the symmetries (the logical order of operations), but instead we have some idea what the degrees of freedom and symmetries are and we are guessing at what the underlying Lagrangian is (illogical but that's how science works!).
Now, how can we interpret your "symmetry"? I've said the action determines what the symmetries are, and your transformation is apparently a symmetry of the action. In fact, your action does have a kind of scaling symmetry, where the fields transform as $\phi \rightarrow \lambda \phi$ and $\psi \rightarrow \lambda^{-1} \psi$ while the coordinates stay fixed, and the transformation you've defined is essentially a subgroup of the full symmetry group where you simultaneously do a Lorentz boost and a rescaling, with a related parameter. However, this scaling symmetry will not be present if you add normal kinetic terms for $\phi$ and $\psi$, like $(\partial \phi)^2$ and $(\partial \psi)^2$. (Unless you generalize the symmetry to allow the coordinates to rescale, like in the conformal group). | {
"domain": "physics.stackexchange",
"id": 99188,
"tags": "special-relativity, symmetry, field-theory, group-theory, representation-theory"
} |
Bulk Modulus, How are these formulas equivalent? | Question: Bulk Modulus is defined as
$$ B = \frac{VdP}{-dV}$$
Where $V$ is volume and $P$ is pressure. It is also defined as,
$$ B = \frac{\rho dP}{d\rho}$$
Where $\rho$ is density. Question: How are these equivalent?
If $V = \frac{m}{\rho}$, $dV = \frac{m}{d\rho}$
Substituting into the first equation of the bulk modulus, I get
$$ B = \frac{\frac{m}{\rho}dP}{\frac{m}{d\rho}}$$
or
$$ B = \frac{d\rho dP}{\rho} $$
Answer:
If $V = \frac{m}{\rho}$, $dV = \frac{m}{d\rho}$
This is the source of your error. You can re-write the above as $\rho V = m$, and this yields $\rho dV + Vd\rho = 0$, or $V \frac d {dV} = -\rho\frac d {d\rho}\,$ as a differential operator. This leads directly to the alternate form for the bulk modulus $B = \rho \frac{dP}{d\rho}$.
Being a bit more formal, from $B=-V\frac{dP}{dV}$, the chain rule dictates that $B=-V\frac{d\rho}{dV}\frac{dP}{d\rho}$. From $\rho V = m$, $V\frac{d\rho}{dV} + \rho = 0$, or $V\frac{d\rho}{dV} = -\rho$, once again yielding $B = \rho \frac{dP}{d\rho}$.
You need to be very careful when you use "physics math." It can get you in trouble. | {
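The equivalence can also be checked symbolically, e.g. with sympy, for a power-law equation of state $P = K\rho^\gamma$ (a sketch; the result holds for any smooth $P(\rho)$ at fixed mass):

```python
import sympy as sp

m, V, K, gamma, r = sp.symbols('m V K gamma r', positive=True)
rho = m / V          # density at fixed mass m
P = K * r**gamma     # sample equation of state P(rho)

# B = -V dP/dV, with P evaluated along rho = m/V
B_from_V = sp.simplify(-V * sp.diff(P.subs(r, rho), V))
# B = rho dP/drho, then expressed in terms of V
B_from_rho = sp.simplify((r * sp.diff(P, r)).subs(r, rho))

print(sp.simplify(B_from_V - B_from_rho))  # 0: the two definitions agree
```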
"domain": "physics.stackexchange",
"id": 15883,
"tags": "homework-and-exercises, fluid-dynamics"
} |
What's the difference and connection between symmetry breaking and anomaly? | Question: I'm just wondering what's the difference between symmetry breaking and anomaly.
From my understanding, symmetry breaking means: there is a symmetry in the action, but in the ground state of the theory (the minimum of the action), the symmetry becomes smaller; only a subgroup of the original symmetry survives.
While anomaly means that there is a symmetry in the classical theory ,but it's no longer a symmetry after quantization.
I think there should be some connections between them, but I don't know yet.
Answer: The answer to the linked question largely answers your question, but it assumes a bit. Contrast the following cases. I italicize your special interest parts, but they should be set in context.
Unbroken, not anomalous symmetry. Good symmetry, both classically and quantum, with visible and pleasing effects. Global or local (e.g. color).
Symmetry currents conserved.
Explicitly broken symmetry. Both classically and quantum, normally considered when the breaking (the non-vanishing divergence of currents) is small, e.g. the flavor SU(3) symmetry of light quarks.
Spontaneously broken symmetry. The current is conserved, but quantum collective effects, induced by the hamiltonian, alter the vacuum to a non-symmetric type, so the realization of the symmetry is peculiar, in the Nambu-Goldstone mode, with less facile effects, even though visible. The chiral symmetries of massless quarks in QCD are such. The local version underlies EW interactions.
Anomalous symmetry The current is conserved classically, but due to a property of the Dirac sea, develops a non-vanishing divergence proportional to $\hbar$ computable via fermion loop corrections to it. It is a property of the fermion sea/measure, not the vacuum altered by the hamiltonian. Globally, as in flavor chiral ones, they control important physics processes, like neutral pion decay, and are generally summarized by WZWN actions. Locally, they invalidate gauge invariance and render gauge theories inconsistent, so are avoided/cancelled (by the addition of suitable fermions) with extreme prejudice. | {
"domain": "physics.stackexchange",
"id": 67860,
"tags": "quantum-field-theory, lagrangian-formalism, symmetry-breaking, quantum-anomalies"
} |
In the following reaction, will the ratio between substances be 3:1? | Question: In the reaction
$$\ce{3 C2H2(g) <=> C6H6(g)}$$
will the ratio of $\ce{[C2H2(g)]}$ to $\ce{[C6H6(g)]}$ at equilibrium be 3:1?
How is the mole ratio related to the equilibrium (or lack thereof) of the reaction? How do I know if 3:1 is indeed the ratio?
Thanks.
Answer: No. The location of the equilibrium is dependent on many factors, most prominently temperature and pressure.
A chemical equilibrium appears to have a static composition, which is of course not true. The point in question is where the rates of the reaction are equal in both directions. The equilibrium can be described by the equilibrium constant $K$, which is defined as the product of the partial pressures $p_{\ce{B}}$, concentrations $c_{\ce{B}}$, [...], chemical activities $a_{\ce{B}}$, in general $x_{\ce{B}}$, of the reactants and products, each raised to the power of its stoichiometric number $\nu_{\ce{B}}$.
$$ K_x = \prod_{\ce{B}}(x_{\ce{B}})^{\nu_{\ce{B}}}$$
The standard equilibrium constant $K^\circ$ is given through the standard reaction Gibbs energy $\Delta_r G^\circ$ and the thermodynamic temperature $T$, $\mathcal{R}$ is the gas constant.
$$K^\circ = \exp\left\{\frac{-\Delta_r G^\circ}{\mathcal{R}\cdot T}\right\}$$
Since all equilibrium constants are proportional to each other, it follows for constant temperature, $T = \mathrm{const.}$, that the location depends on the Gibbs energy:
$$ \prod_{\ce{B}}(x_{\ce{B}})^{\nu_{\ce{B}}} \propto \exp\left\{-\Delta_r G^\circ\right\}$$
In your case $$\ce{3 C2H2(g) <=> C6H6(g)}$$ you would probably choose a pressure based approach $$K_p = \frac{p(\ce{C6H6})}{p^3(\ce{C2H2})} \propto \exp\left\{-\Delta_r G^\circ\right\}$$
And you can see the temperature and pressure dependency in the original definition:
\begin{align}
G(p,T) &= H - TS\\
H(p,T) &= U + pV\\
\end{align}
So you will only know the product ratio by very accurate calculations or empirical measurement.
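As a numerical illustration of $K^\circ = \exp\{-\Delta_r G^\circ/\mathcal{R}T\}$, with a purely hypothetical value of $\Delta_r G^\circ$ (not the real value for this reaction):

```python
import math

R = 8.314       # gas constant, J/(mol K)
T = 298.15      # temperature, K
dG = -20_000.0  # hypothetical standard reaction Gibbs energy, J/mol

K = math.exp(-dG / (R * T))
print(K)  # ~3.2e3: a negative Gibbs energy gives K > 1, favoring products
```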
For the system in question, I believe it has been done and the results may be found in W. J. Taylor, et. al., J. Res. Natl. Bur. Stand., Vol. 37, No. 2, p. 95. | {
"domain": "chemistry.stackexchange",
"id": 2288,
"tags": "equilibrium"
} |
What is the proper way to model diffusion in inhomogeneous media (Fokker-Planck or Fick's law) and why? | Question: I'm quite confused with the following problem. Normally a one-dimensional Fokker-Planck equation is written in the following form:
$$\frac{\partial \psi}{\partial t}=-\frac{\partial}{\partial x}(F\psi)+\frac{\partial^2}{\partial x^2}(D\psi)$$
While traditional convection-diffusion equation without sources has the form:
$$\frac{\partial \psi}{\partial t}=-\frac{\partial}{\partial x}(F\psi)+\frac{\partial}{\partial x}(D\frac{\partial \psi}{\partial x})$$
Considering non-constant diffusion $D=D(x,t)$, these equations differ significantly, which looks surprising, because they should fit the same problems interchangeably (e.g. here). Is there any profound reason/physical explanation for such a difference?
Or more straightforwardly: both equations are supposed to describe the evolution of $\psi$ with given $F(x,t)$ and $D(x,t)$. Suppose I have my distribution of something $\psi$ and corresponding coefficients; how can I then decide which form of the equation I should use?
P.S. When one writes down a Langevin equation for Brownian motion with non-constant diffusion, there appears a so-called noise-induced drift term, and the corresponding Fokker-Planck equation then has the form of the convection-diffusion equation that I referred to earlier... meaning that the "classical" F-P equation is then suitable only for constant diffusion, which seems totally incorrect... eventually I got lost completely.
Answer: It is a sticky question, and as van Kampen puts it, " no universal form of the diffusion equation exists, but each system has to be studied individually." https://link.springer.com/article/10.1007/BF01304217
(Unfortunately, I don't have full access to his paper, but you might be able to get it through your library.)
Now, the main reason the question is sticky is that it exposes an ambiguity in the Langevin description. In the Wikipedia article you link to, it says that for an Itô process whose Langevin equation reads
$$
dX_t = \mu(X_t,t)dt+\sigma(X_t,t)dW_t,
$$
then the respective Fokker-Planck equation is
$$
\frac{\partial{p}}{\partial{t}}=-\frac{\partial{\left[\mu p\right]}}{\partial x}
+\frac{1}{2}\frac{\partial^2\left[\sigma^2p\right]}{\partial x^2}
$$
where $\sigma^2/2=D$.
Notice that they distinguished that it is an Itô process. If it had been a Stratonovich process, i.e.
$$
dX_t = \mu(X_t,t)dt+\sigma(X_t,t)\circ dW_t,
$$
the Fokker-Planck equation would read
$$
\frac{\partial{p}}{\partial{t}}=-\frac{\partial{\left[\mu p\right]}}{\partial x}
+\frac{1}{2}\frac{\partial}{\partial x}\left[\sigma\frac{\partial}
{\partial x}\left(\sigma p\right)\right].
$$
So now there are two different Fokker-Planck equations in addition to Fick's second law? What gives?
The issue is that when you write down the Langevin equation, a spatially dependent $\sigma$ causes the noise term to have a non-linear influence on the position. In the Itô picture, the noise is treated as if it were kicking the Brownian particle at the beginning of each time interval $\Delta t$. In the Stratonovich convention, the noise is averaged between the endpoints of the time interval. Depending on whether you integrate using the Stratonovich convention or the Itô one, you get different results. There is also another convention called the isothermal convention, and this gives a Fokker-Planck equation that looks a bit closer to Fick's law. Here are a few references, which you should be able to access:
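To see the difference concretely, here is a minimal numerical sketch (Python; the choice $\sigma(x) = 1 + 0.1x$ and all parameters are illustrative, not taken from any of the references). It integrates the same noise realization twice: once as a plain Itô process with zero drift, and once with the Stratonovich-to-Itô correction term $\sigma\sigma'/2$ (the "noise-induced drift") added. Only the corrected ensemble develops a net drift:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(x):
    return 1.0 + 0.1 * x      # state-dependent noise amplitude (illustrative)

def dsigma(x):
    return 0.1                # d(sigma)/dx

n_paths, n_steps, dt = 20000, 100, 0.01
x_ito = np.zeros(n_paths)     # dX = sigma(X) dW read in the Ito sense
x_str = np.zeros(n_paths)     # the same SDE read in the Stratonovich sense

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)   # shared noise increments
    # Stratonovich = Ito plus the noise-induced drift sigma * sigma' / 2
    x_str = x_str + 0.5 * sigma(x_str) * dsigma(x_str) * dt + sigma(x_str) * dW
    x_ito = x_ito + sigma(x_ito) * dW

# The Ito ensemble is a martingale, so its mean stays near zero; the
# Stratonovich ensemble drifts at rate ~ sigma*sigma'/2 = 0.05 near x = 0.
print(x_ito.mean(), x_str.mean())
```

With these parameters the Itô mean stays near zero while the Stratonovich mean ends near $\sigma\sigma'/2 \cdot T \approx 0.05$, which is precisely the extra drift term that distinguishes the two Fokker-Planck equations above.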
http://www.bgu.ac.il/~ofarago/shakedthesis.pdf
and
https://arxiv.org/pdf/1402.4598.pdf
Cheers! | {
"domain": "physics.stackexchange",
"id": 54279,
"tags": "statistical-mechanics, probability, diffusion, convection, stochastic-processes"
} |
Sluggish Loop Transferring Dictionary Data to Spreadsheet | Question: I've added a timer to my code and the bottleneck comes when I'm looping through 47 rows and inputting data from a dictionary that I previously loaded with values.
Since I use these files for a lot of different things, I've set up public variables to avoid re-declaring them in every new procedure.
So my question is: is there a quicker way to pull data from the dictionary based on criteria in specific cells? The line directly below is repeated 8 times for the 8 different columns being populated with dictionary data; each column takes 0.20 seconds to complete, so 1.6 seconds per iteration of the w loop (orderStart to orderEnd).
Cells(w, OF_clearanceColumn1) = stationclearanceData(Cells(w, OF_stationColumn).Value & Cells(orderStart - 1, OF_clearanceColumn1).Value)
Timer stats:
3.4453125 Open client database, determine which account we're on
3.484375 Find last 4 weeks and order range
7.31640625 Adding columns and formating
7.61328125 loop through last 4 weeks, add station clearance and P&L data to dictionary
7.6484375 find range of cumulative, add clearance and P&L for 2019
100.90234375 adding data from dictionary to order file
NEW TIMER STATS, THANKS TO AJD AND MOVING DATA FROM DICTIONARY TO ARRAY TO RANGE.
1.71875 Open client database, determine which account we're on
1.75 Find last 4 weeks and order range
5.3203125 Adding columns and formating
5.6171875 loop through last 4 weeks, add station clearance and P&L data to dictionary
5.6640625 find range of cumulative, add clearance and P&L for 2019
7.6171875 adding data from dictionary to order file
Anyway, below is the code...
Sub Orders_Historicals_autofilterdict()
Dim start As Double
start = Timer
''--------------------------------------
''Static Variables
''--------------------------------------
Call DefinedVariables
Dim orderFile As Variant
Dim orderStart As Long
Dim orderEnd As Long
Dim clientdataFile As Variant
Dim internalFile As Variant
Dim dateStart As Long
Dim stationStart As Long
Dim stationEnd As Long
Dim currentStation As String
Dim currentWeek As String
Dim dictData As New Scripting.Dictionary
Dim stationclearanceData As New Scripting.Dictionary
Dim stationplData As New Scripting.Dictionary
Dim key As Variant
Dim fileOnly As String
Dim networkOnly As String
Dim i As Long
Dim w As Long
Dim t As Long
Dim plTotal As Long
Dim clearTotal As Long
Dim stationHash As String
''--------------------------------------
''Dictionary the Order Abbreviations
''--------------------------------------
Application.ScreenUpdating = False
Set orderFile = ActiveWorkbook.ActiveSheet
Workbooks.Open clientdataLocation
Set clientdataFile = ActiveWorkbook.Sheets(dan_location) '/ Change sheet when using on different computer
clientdataFile.Activate
For i = 1 To Cells(Rows.count, 1).End(xlUp).row
If dictData.Exists(Cells(i, clientOrder).Value) Then
Else: dictData.Add Cells(i, clientOrder).Value, i
End If
Next
''--------------------------------------
''Determine Account/Network & Open Internal Associated with Order
''--------------------------------------
orderFile.Activate
fileOnly = ActiveWorkbook.Name
fileOnly = Left(fileOnly, InStr(fileOnly, ".") - 1)
If InStr(fileOnly, 2) > 0 Or InStr(fileOnly, 3) > 0 Then
fileOnly = Left(fileOnly, Len(fileOnly) - 1)
End If
networkOnly = ActiveWorkbook.Name
networkOnly = Mid(networkOnly, InStr(networkOnly, "IO.") + 3)
networkOnly = Left(networkOnly, InStr(networkOnly, ".") - 1)
Workbooks.Open Filename:=clientdataFile.Cells(dictData(fileOnly), clientInternal).Value
Set internalFile = ActiveWorkbook
internalFile.Sheets(WT_newWeek).Activate
Debug.Print Timer - start & " Open client database, determine which account we're on"
''--------------------------------------
''Find Last 4 Dates & Column Header for Orders
''--------------------------------------
For i = 1 To 700
If Cells(i, 1) = WT_newWeek Then
dateStart = i
ElseIf Cells(i, 1) = "Station" Then
stationStart = i + 1
Exit For
End If
Next
For i = stationStart To 700
If Cells(i, 1).Value = Cells(stationStart - 2, 1).Value & " Total" Then
stationEnd = i - 1
Exit For
End If
Next
orderFile.Activate
For i = 1 To 700
If Cells(i, 1) = "Station" Then
orderStart = i + 1
Exit For
End If
Next
For i = orderStart To 700
If Len(Cells(i, 1)) = 0 And Len(Cells(i - 1, 1)) = 0 And Len(Cells(i - 2, 1)) = 0 Then
orderEnd = i - 3
Exit For
End If
Next
Debug.Print Timer - start & " Find last 4 weeks and order range"
''--------------------------------------
''Add Dates to Order Header and Formatting
''--------------------------------------
Cells(orderStart - 1, OF_buyAlgoColumn) = "Algorithm Recommendation"
Cells(orderStart - 1, OF_totalplColumn) = "Total P&L"
Cells(orderStart - 1, OF_totalclearanceColumn) = "Total Clearance %"
Cells(orderStart - 1, OF_clearanceColumn1) = internalFile.Sheets(WT_newWeek).Cells(dateStart, 1)
Cells(orderStart - 1, OF_clearanceColumn2) = internalFile.Sheets(WT_newWeek).Cells(dateStart - 1, 1)
Cells(orderStart - 1, OF_clearanceColumn3) = internalFile.Sheets(WT_newWeek).Cells(dateStart - 2, 1)
Cells(orderStart - 1, OF_clearanceColumn4) = internalFile.Sheets(WT_newWeek).Cells(dateStart - 3, 1)
Cells(orderStart - 1, OF_plColumn1) = internalFile.Sheets(WT_newWeek).Cells(dateStart, 1)
Cells(orderStart - 1, OF_plColumn2) = internalFile.Sheets(WT_newWeek).Cells(dateStart - 1, 1)
Cells(orderStart - 1, OF_plColumn3) = internalFile.Sheets(WT_newWeek).Cells(dateStart - 2, 1)
Cells(orderStart - 1, OF_plColumn4) = internalFile.Sheets(WT_newWeek).Cells(dateStart - 3, 1)
Range(Cells(orderStart - 2, OF_clearanceColumn1), Cells(orderStart - 2, OF_clearanceColumn4)) = "Clearance"
Range(Cells(orderStart - 2, OF_plColumn1), Cells(orderStart - 2, OF_plColumn4)) = "P&L"
Cells(orderStart - 1, OF_stationColumn).Copy
Range(Cells(orderStart - 1, OF_buyAlgoColumn), Cells(orderStart - 1, OF_plColumn4)).PasteSpecial xlPasteFormats
Cells(orderStart, OF_stationColumn).Copy
Range(Cells(orderStart - 2, OF_clearanceColumn1), Cells(orderStart - 2, OF_plColumn4)).PasteSpecial xlPasteFormats
Range(Cells(orderStart - 2, OF_buyAlgoColumn), Cells(orderEnd, OF_plColumn4)).HorizontalAlignment = xlCenter
Cells(orderStart, OF_stationColumn).Copy
Range(Cells(orderStart, OF_buyAlgoColumn), Cells(orderEnd, OF_plColumn4)).PasteSpecial xlPasteFormats
Cells(orderStart, OF_totalColumn).Copy
Range(Cells(orderStart, OF_plColumn1), Cells(orderEnd, OF_plColumn4)).PasteSpecial xlPasteFormats
Range(Cells(orderStart, OF_totalplColumn), Cells(orderEnd, OF_totalplColumn)).PasteSpecial xlPasteFormats
Range(Cells(orderStart, OF_totalclearanceColumn), Cells(orderEnd, OF_clearanceColumn4)).NumberFormat = "0%"
Range(Cells(orderStart - 2, OF_buyAlgoColumn), Cells(orderEnd, OF_plColumn4)).FormatConditions.Delete
Range(Columns(OF_buyAlgoColumn), Columns(OF_plColumn4)).AutoFit
Debug.Print Timer - start & " Adding columns and formating"
''--------------------------------------
''Add Clearance and P&L by Date to Dictionary
''--------------------------------------
For i = OF_clearanceColumn1 To OF_clearanceColumn4
currentWeek = Cells(orderStart - 1, i).Value
internalFile.Sheets(currentWeek).Activate
For t = 1 To 700
If Cells(t, 1) = "Station" Then
stationStart = t + 1
Exit For
End If
Next
For t = stationStart To 700
If Cells(t, 1).Value = Cells(stationStart - 2, 1).Value & " Total" Then
stationEnd = i - 1
Exit For
End If
If stationclearanceData.Exists(Cells(t, WT_stationColumn).Value & currentWeek) Then
Else:
On Error Resume Next
stationclearanceData.Add Cells(t, WT_stationColumn).Value & currentWeek, Cells(t, WT_mediaactColumn).Value / Cells(t, WT_mediaestColumn).Value
stationplData.Add Cells(t, WT_stationColumn).Value & currentWeek, Cells(t, WT_profitColumn).Value
End If
Next
orderFile.Activate
Next
Debug.Print Timer - start & " loop through last 4 weeks, add station clearance and P&L data to dictionary"
''--------------------------------------
''Add Cumulative Clearance and P&L to Dictionary
''--------------------------------------
internalFile.Sheets("Cumulative").Activate
For t = 5 To 70000
If Cells(t, 1) = "" And Cells(t + 1, 1) = "" And Cells(t + 2, 1) = "" Then
stationEnd = t + 1
Exit For
End If
Next
For t = 5 To stationEnd
If Cells(t, CT_yearColumn) = 2019 Then
If stationclearanceData.Exists(Cells(t, CT_hashColumn).Value) Then
Else:
On Error Resume Next
stationclearanceData.Add Cells(t, CT_hashColumn).Value, Cells(t, CT_clearanceColumn).Value
stationplData.Add Cells(t, CT_hashColumn).Value, Cells(t, CT_invoiceColumn).Value - Cells(t, CT_actcostColumn).Value
End If
End If
Next
Debug.Print Timer - start & " find range of cumulative, add clearance and P&L for 2019"
orderFile.Activate
''--------------------------------------
''Loop Through Stations on Order File and Update Based on Dictionary Values
''--------------------------------------
For w = orderStart To orderEnd
If Cells(w, OF_stationColumn) <> "" Then
If Cells(w, OF_stationColumn) <> Cells(w - 1, OF_stationColumn) Then
stationHash = Cells(w, OF_stationColumn).Value & " " & Cells(w, OF_trafficColumn).Value & " Total"
On Error Resume Next
Cells(w, OF_clearanceColumn1) = stationclearanceData(Cells(w, OF_stationColumn).Value & Cells(orderStart - 1, OF_clearanceColumn1).Value)
Cells(w, OF_clearanceColumn2) = stationclearanceData(Cells(w, OF_stationColumn).Value & Cells(orderStart - 1, OF_clearanceColumn2).Value)
Cells(w, OF_clearanceColumn3) = stationclearanceData(Cells(w, OF_stationColumn).Value & Cells(orderStart - 1, OF_clearanceColumn3).Value)
Cells(w, OF_clearanceColumn4) = stationclearanceData(Cells(w, OF_stationColumn).Value & Cells(orderStart - 1, OF_clearanceColumn4).Value)
Cells(w, OF_plColumn1) = stationplData(Cells(w, OF_stationColumn).Value & Cells(orderStart - 1, OF_plColumn1).Value)
Cells(w, OF_plColumn2) = stationplData(Cells(w, OF_stationColumn).Value & Cells(orderStart - 1, OF_plColumn2).Value)
Cells(w, OF_plColumn3) = stationplData(Cells(w, OF_stationColumn).Value & Cells(orderStart - 1, OF_plColumn3).Value)
Cells(w, OF_plColumn4) = stationplData(Cells(w, OF_stationColumn).Value & Cells(orderStart - 1, OF_plColumn4).Value)
Cells(w, OF_totalplColumn) = stationplData(stationHash)
Cells(w, OF_totalclearanceColumn) = stationclearanceData(stationHash)
End If
End If
Next
Debug.Print Timer - start & " adding data from dictionary to order file"
clientdataFile.Activate
ActiveWorkbook.Close saveChanges:=False
Application.ScreenUpdating = True
Range(Cells(orderStart - 2, OF_buyAlgoColumn), Cells(orderEnd, OF_plColumn4)).HorizontalAlignment = xlCenter
MsgBox ("Buy Algorithm Complete")
End Sub
Answer: Obligatory message: Please ensure you use Option Explicit at the start of every module.
Code readability
The first thing that hits me is a wall of declarations. That and the double spacing make it hard to review this code. Are all the variables used? I know that there are some variables in there that are not in that wall of declarations.
You are also happy to spread the lines out, but then use "scrunching" techniques such as Else: dictData.Add Cells(i, clientOrder).Value, i
Some of the code here can be broken into logic chunks - either as subroutines or functions. Remember, you can pass parameters to these routines!
DefinedVariables?
I don't know what DefinedVariables does.
Call is deprecated. You just use
DefinedVariables
instead of
Call DefinedVariables
Activethingies
You use the active workbook (explicitly and implicitly), active sheet (explicitly and implicitly) and active cell/range (implicitly) a lot. In reality, you can never be sure what the active book, sheet or cell is; you just don't know whether something has changed the focus outside of your macro.
There are some occasions within Excel VBA where immediately grabbing the active object is necessary (e.g. when copying a sheet), but for pretty much all cases you can explicitly qualify the object you are using to prevent the code being hijacked by something that is on screen.
Having said that, activating something while screen updating is off is a null activity.
Object typing
You declare variables as Variant, but then use them for objects
Dim clientdataFile As Variant
Set clientdataFile = ActiveWorkbook.Sheets(dan_location) '/ Change sheet when using on different computer
If you are going to use it for worksheets, then declare it as such!
Dim clientdataFile As Worksheet
Strange use of inbuilt functions
InStr(fileOnly, 2) is not how InStr is supposed to be used. I suspect that this code does not work as intended - have you checked this?
Use Arrays instead of looping through cells
There have been numerous discussions in these hallowed halls about the performance hit of switching between the Excel Model and the VBA model. And every loop that calls a range or a cell performs that switch.
The best option is to put the range into an array instead of looping.
The use of a Do While loop is neater than an arbitrary For i = loop; the exit conditions are more explicitly stated than with a hidden Exit For. The latter is technically correct, but harder to maintain.
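For illustration only (sketched in Python rather than VBA, with invented station/week keys and values), the pattern that produces the speed-up is: one bulk read, all dictionary lookups in memory, one bulk write. In VBA the two bulk steps are `someArray = someRange.Value` and `someRange.Value = someArray`.

```python
# Hypothetical lookup table, mirroring stationclearanceData's "station & week" keys
clearance = {"WXYZ2019-W01": 0.91, "WXYZ2019-W02": 0.88}

weeks = ["2019-W01", "2019-W02"]   # header row, read once
rows = [["WXYZ"], ["WXYZ"]]        # station column, fetched in one bulk read

# Every lookup happens in memory; the finished block is written back in one go,
# instead of paying one sheet round-trip per cell.
out = [[clearance.get(row[0] + week, "") for week in weeks] for row in rows]
print(out)
```

The same structure maps one-to-one onto the array-based rewrite below.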
Use Excel functionality
Excel has named ranges. This can be exploited to simplify code. You don't have to declare static variables which hold column numbers if you can use the named ranges.
Magic numbers
You have some magic numbers in the code. What is the significance of 700 or 70000? How are you going to manage the code if these change - how will you ensure you have got every copy of them?
Also, what happens to stationStart or stationEnd if you go through the loops and do not find the relevant cell? Currently they stay at 0.
What does this look like?
Putting most of what I said above into practice gives the following code. This is not tested and I have not moved all the declarations to where they are supposed to be. I have also found a couple left over!
Sub Orders_Historicals_autofilterdict2()
Dim start As Double
start = Timer
''--------------------------------------
''Static Variables
''--------------------------------------
DefinedVariables
Dim currentStation As String
Dim currentWeek As String
Dim stationclearanceData As New Scripting.Dictionary
Dim stationplData As New Scripting.Dictionary
Dim key As Variant
Dim i As Long
Dim w As Long
Dim plTotal As Long
Dim clearTotal As Long
Dim stationHash As String
''--------------------------------------
''Dictionary the Order Abbreviations
''--------------------------------------
Application.ScreenUpdating = False
Dim orderFile As Worksheet ' notVariant
Dim clientdataFile As Worksheet 'Variant
Dim clientdataBook As Workbook ' I added this
Dim dictData As New Scripting.Dictionary
Set orderFile = ActiveWorkbook.ActiveSheet ' consider putting that orderBook variable in, because this gets used a few times later.
Set clientdataBook = Workbooks.Open(clientdataLocation) 'clientdataLocation is undeclared? What happens if this is null?
Set clientdataFile = clientdataBook.Sheets(dan_location) '/ Change sheet when using on different computer
With clientdataFile ' not activate! Now the following code is fully qualified.
For i = 1 To .Cells(.Rows.Count, 1).End(xlUp).Row
If dictData.Exists(.Cells(i, clientOrder).Value) Then ' This could be "If Not dictData etc."
Else
dictData.Add .Cells(i, clientOrder).Value, i
End If
Next
End With
''--------------------------------------
''Determine Account/Network & Open Internal Associated with Order
''--------------------------------------
Dim fileOnly As String
Dim networkOnly As String
Dim internalBook As Workbook ' I added this
fileOnly = orderFile.Parent.Name ' no need to activate
fileOnly = Left(fileOnly, InStr(fileOnly, ".") - 1)
If InStr(fileOnly, 2) > 0 Or InStr(fileOnly, 3) > 0 Then '' Does this actually work?
fileOnly = Left(fileOnly, Len(fileOnly) - 1)
End If
networkOnly = orderFile.Parent.Name ' at this point, you have already lost track of what is supposed to be active.
networkOnly = Mid(networkOnly, InStr(networkOnly, "IO.") + 3)
networkOnly = Left(networkOnly, InStr(networkOnly, ".") - 1)
Dim internalFile As Workbook
Set internalFile = Workbooks.Open(Filename:=clientdataFile.Cells(dictData(fileOnly), clientInternal).Value)
Debug.Print Timer - start & " Open client database, determine which account we're on"
''--------------------------------------
''Find Last 4 Dates & Column Header for Orders
''--------------------------------------
Dim dateStart As Long
Dim stationStart As Long
Dim stationEnd As Long
Dim orderStart As Long
Dim orderEnd As Long
Dim findStationArray As Variant ' I added the next 4
Dim startFound As Boolean
Dim endFound As Boolean
Dim stationStartValue As String ' assumption here
With internalFile.Sheets(WT_newWeek) ' no need to Activate!
findStationArray = .Range("A1:A700").Value
i = LBound(findStationArray, 1)
While i <= UBound(findStationArray, 1) And Not startFound ' And, not Or, so the loop can terminate
Select Case .Cells(i, 1).Value
Case WT_newWeek
dateStart = i
Case "Station"
If Not startFound Then
stationStart = i + 1
startFound = True
End If
End Select
i = i + 1
Wend
stationStartValue = .Cells(stationStart - 2, 1).Value & " Total" ' do this only once, not 700 times
While i <= UBound(findStationArray, 1) And Not endFound ' And, not Or, so the loop can terminate
endFound = (.Cells(i, 1).Value = stationStartValue)
If endFound Then stationEnd = i - 1
i = i + 1
Wend
End With
With orderFile ' again - do not .Activate
findStationArray = .Range("A1:A700").Value
i = LBound(findStationArray, 1)
startFound = False ' reset before reusing the flag
While i <= UBound(findStationArray, 1) And Not startFound
startFound = (.Cells(i, 1).Value = "Station")
If startFound Then orderStart = i + 1
i = i + 1
Wend
endFound = False ' reset before reusing the flag
While i <= UBound(findStationArray, 1) And Not endFound
endFound = (Len(.Cells(i, 1)) = 0 And Len(.Cells(i - 1, 1)) = 0 And Len(.Cells(i - 2, 1)) = 0)
If endFound Then orderEnd = i - 3
i = i + 1
Wend
End With
Debug.Print Timer - start & " Find last 4 weeks and order range"
''--------------------------------------
''Add Dates to Order Header and Formatting
''--------------------------------------
With orderFile ' assumption here - have we lost track of what is active yet?
.Cells(orderStart - 1, OF_buyAlgoColumn) = "Algorithm Recommendation"
.Cells(orderStart - 1, OF_totalplColumn) = "Total P&L"
.Cells(orderStart - 1, OF_totalclearanceColumn) = "Total Clearance %"
.Cells(orderStart - 1, OF_clearanceColumn1) = internalFile.Sheets(WT_newWeek).Cells(dateStart, 1)
.Cells(orderStart - 1, OF_clearanceColumn2) = internalFile.Sheets(WT_newWeek).Cells(dateStart - 1, 1)
.Cells(orderStart - 1, OF_clearanceColumn3) = internalFile.Sheets(WT_newWeek).Cells(dateStart - 2, 1)
.Cells(orderStart - 1, OF_clearanceColumn4) = internalFile.Sheets(WT_newWeek).Cells(dateStart - 3, 1)
.Cells(orderStart - 1, OF_plColumn1) = internalFile.Sheets(WT_newWeek).Cells(dateStart, 1)
.Cells(orderStart - 1, OF_plColumn2) = internalFile.Sheets(WT_newWeek).Cells(dateStart - 1, 1)
.Cells(orderStart - 1, OF_plColumn3) = internalFile.Sheets(WT_newWeek).Cells(dateStart - 2, 1)
.Cells(orderStart - 1, OF_plColumn4) = internalFile.Sheets(WT_newWeek).Cells(dateStart - 3, 1)
.Range(.Cells(orderStart - 2, OF_clearanceColumn1), .Cells(orderStart - 2, OF_clearanceColumn4)) = "Clearance"
.Range(.Cells(orderStart - 2, OF_plColumn1), .Cells(orderStart - 2, OF_plColumn4)) = "P&L"
.Cells(orderStart - 1, OF_stationColumn).Copy
.Range(.Cells(orderStart - 1, OF_buyAlgoColumn), .Cells(orderStart - 1, OF_plColumn4)).PasteSpecial xlPasteFormats
.Cells(orderStart, OF_stationColumn).Copy
.Range(.Cells(orderStart - 2, OF_clearanceColumn1), .Cells(orderStart - 2, OF_plColumn4)).PasteSpecial xlPasteFormats
.Range(.Cells(orderStart - 2, OF_buyAlgoColumn), .Cells(orderEnd, OF_plColumn4)).HorizontalAlignment = xlCenter
.Cells(orderStart, OF_stationColumn).Copy
.Range(.Cells(orderStart, OF_buyAlgoColumn), .Cells(orderEnd, OF_plColumn4)).PasteSpecial xlPasteFormats
.Cells(orderStart, OF_totalColumn).Copy
.Range(.Cells(orderStart, OF_plColumn1), .Cells(orderEnd, OF_plColumn4)).PasteSpecial xlPasteFormats
.Range(.Cells(orderStart, OF_totalplColumn), .Cells(orderEnd, OF_totalplColumn)).PasteSpecial xlPasteFormats
.Range(.Cells(orderStart, OF_totalclearanceColumn), .Cells(orderEnd, OF_clearanceColumn4)).NumberFormat = "0%"
.Range(.Cells(orderStart - 2, OF_buyAlgoColumn), .Cells(orderEnd, OF_plColumn4)).FormatConditions.Delete
.Range(.Columns(OF_buyAlgoColumn), .Columns(OF_plColumn4)).AutoFit
End With
Debug.Print Timer - start & " Adding columns and formating"
''--------------------------------------
''Add Clearance and P&L by Date to Dictionary
''--------------------------------------
Dim t As Long
For i = OF_clearanceColumn1 To OF_clearanceColumn4
currentWeek = orderFile.Cells(orderStart - 1, i).Value
With internalFile.Sheets(currentWeek)
findStationArray = .Range("A1:A700").Value
t = LBound(findStationArray, 1)
startFound = False ' reset before reusing the flag
While t <= UBound(findStationArray, 1) And Not startFound
startFound = (.Cells(t, 1).Value = "Station")
If startFound Then stationStart = t + 1
t = t + 1
Wend
stationStartValue = .Cells(stationStart - 2, 1).Value & " Total" ' do this only once, not 700 times
endFound = False ' reset before reusing the flag
While t <= UBound(findStationArray, 1) And Not endFound
endFound = (.Cells(t, 1).Value = stationStartValue)
If endFound Then
stationEnd = i - 1 ' is this meant to be "i" or "t" ?
Else
If stationclearanceData.Exists(.Cells(t, WT_stationColumn).Value & currentWeek) Then
Else
On Error Resume Next ' I assume you want to fail silently. Otherwise this is dangerous
stationclearanceData.Add .Cells(t, WT_stationColumn).Value & currentWeek, .Cells(t, WT_mediaactColumn).Value / .Cells(t, WT_mediaestColumn).Value
stationplData.Add .Cells(t, WT_stationColumn).Value & currentWeek, .Cells(t, WT_profitColumn).Value
On Error GoTo 0 ' stop the error hiding - otherwise you will not pick up any errors later in the code
End If
End If
t = t + 1 ' the loop variable here is t, not i
Wend
End With
Next i
Debug.Print Timer - start & " loop through last 4 weeks, add station clearance and P&L data to dictionary"
''--------------------------------------
''Add Cumulative Clearance and P&L to Dictionary
''--------------------------------------
Dim cumulativeSheet As Worksheet
With internalFile.Sheets("Cumulative") ' again, no need to .Activate
findStationArray = .Range("A5:A70000").Value
t = LBound(findStationArray, 1)
endFound = False ' reset before reusing the flag
While t <= UBound(findStationArray, 1) - 2 And Not endFound
endFound = (findStationArray(t, 1) = "" And findStationArray(t + 1, 1) = "" And findStationArray(t + 2, 1) = "")
If endFound Then stationEnd = t + 5 ' array index t maps to worksheet row t + 4; stationEnd is needed in the loop below
t = t + 1
Wend
For t = 5 To stationEnd
If .Cells(t, CT_yearColumn) = 2019 Then
If stationclearanceData.Exists(.Cells(t, CT_hashColumn).Value) Then
Else
On Error Resume Next
stationclearanceData.Add .Cells(t, CT_hashColumn).Value, .Cells(t, CT_clearanceColumn).Value
stationplData.Add .Cells(t, CT_hashColumn).Value, .Cells(t, CT_invoiceColumn).Value - .Cells(t, CT_actcostColumn).Value
On Error GoTo 0 ' stop the error hiding - otherwise you will not pick up any errors later in the code
End If
End If
Next
End With ' this With block was never closed - End With is required here
Debug.Print Timer - start & " find range of cumulative, add clearance and P&L for 2019"
''--------------------------------------
''Loop Through Stations on Order File and Update Based on Dictionary Values
''--------------------------------------
' **** The changes here are for better performance.
Dim stationValues As Variant
Dim trafficValues As Variant
Dim totalPLValues As Variant
Dim totalClearanceValues As Variant
Dim clearance1Values As Variant ' if these are contiguous columns then this could be handled as a two dimensional array.
Dim clearance2Values As Variant
Dim clearance3Values As Variant
Dim clearance4Values As Variant
Dim pl1Values As Variant ' Ditto
Dim pl2Values As Variant
Dim pl3Values As Variant
Dim pl4Values As Variant
Dim clearanceValue1 As String
Dim clearanceValue2 As String
Dim clearanceValue3 As String
Dim clearanceValue4 As String
Dim plValue1 As String
Dim plValue2 As String
Dim plValue3 As String
Dim plValue4 As String
With orderFile '.Activate
stationValues = .Range(.Cells(orderStart - 1, OF_stationColumn), .Cells(orderEnd, OF_stationColumn)).Value ' use arrays instead of calling excel ranges
trafficValues = .Range(.Cells(orderStart - 1, OF_trafficColumn), .Cells(orderEnd, OF_trafficColumn)).Value ' use arrays instead of calling excel ranges
totalPLValues = .Range(.Cells(orderStart - 1, OF_totalplColumn), .Cells(orderEnd, OF_totalplColumn)).Value
totalClearanceValues = .Range(.Cells(orderStart - 1, OF_totalclearanceColumn), .Cells(orderEnd, OF_totalclearanceColumn)).Value
clearance1Values = .Range(.Cells(orderStart - 1, OF_clearanceColumn1), .Cells(orderEnd, OF_clearanceColumn1)).Value
clearance2Values = .Range(.Cells(orderStart - 1, OF_clearanceColumn2), .Cells(orderEnd, OF_clearanceColumn2)).Value
clearance3Values = .Range(.Cells(orderStart - 1, OF_clearanceColumn3), .Cells(orderEnd, OF_clearanceColumn3)).Value
clearance4Values = .Range(.Cells(orderStart - 1, OF_clearanceColumn4), .Cells(orderEnd, OF_clearanceColumn4)).Value
pl1Values = .Range(.Cells(orderStart - 1, OF_plColumn1), .Cells(orderEnd, OF_plColumn1)).Value
pl2Values = .Range(.Cells(orderStart - 1, OF_plColumn2), .Cells(orderEnd, OF_plColumn2)).Value
pl3Values = .Range(.Cells(orderStart - 1, OF_plColumn3), .Cells(orderEnd, OF_plColumn3)).Value
pl4Values = .Range(.Cells(orderStart - 1, OF_plColumn4), .Cells(orderEnd, OF_plColumn4)).Value
clearanceValue1 = .Cells(orderStart - 1, OF_clearanceColumn1).Value ' evaluate these only once, instead of every time in the loop
clearanceValue2 = .Cells(orderStart - 1, OF_clearanceColumn2).Value
clearanceValue3 = .Cells(orderStart - 1, OF_clearanceColumn3).Value
clearanceValue4 = .Cells(orderStart - 1, OF_clearanceColumn4).Value
plValue1 = .Cells(orderStart - 1, OF_plColumn1).Value
plValue2 = .Cells(orderStart - 1, OF_plColumn2).Value
plValue3 = .Cells(orderStart - 1, OF_plColumn3).Value
plValue4 = .Cells(orderStart - 1, OF_plColumn4).Value
For w = LBound(stationValues) + 1 To UBound(stationValues) 'orderStart To orderEnd
If stationValues(w, 1) <> "" Then
If stationValues(w, 1) <> stationValues(w - 1, 1) Then
stationHash = stationValues(w, 1) & " " & trafficValues(w, 1) & " Total" ' traffic column, as in the original code
' On Error Resume Next ' don't hide errors - what is the issue here?
clearance1Values(w, 1) = stationclearanceData(stationValues(w, 1) & clearanceValue1)
clearance2Values(w, 1) = stationclearanceData(stationValues(w, 1) & clearanceValue2)
clearance3Values(w, 1) = stationclearanceData(stationValues(w, 1) & clearanceValue3)
clearance4Values(w, 1) = stationclearanceData(stationValues(w, 1) & clearanceValue4)
pl1Values(w, 1) = stationplData(stationValues(w, 1) & plValue1)
pl2Values(w, 1) = stationplData(stationValues(w, 1) & plValue2)
pl3Values(w, 1) = stationplData(stationValues(w, 1) & plValue3)
pl4Values(w, 1) = stationplData(stationValues(w, 1) & plValue4)
totalPLValues(w, 1) = stationplData(stationHash)
totalClearanceValues(w, 1) = stationclearanceData(stationHash)
End If
End If
Next
' return the changed arrays to the ranges.
.Range(.Cells(orderStart - 1, OF_totalplColumn), .Cells(orderEnd, OF_totalplColumn)).Value = totalPLValues
.Range(.Cells(orderStart - 1, OF_totalclearanceColumn), .Cells(orderEnd, OF_totalclearanceColumn)).Value = totalClearanceValues
.Range(.Cells(orderStart - 1, OF_clearanceColumn1), .Cells(orderEnd, OF_clearanceColumn1)).Value = clearance1Values
.Range(.Cells(orderStart - 1, OF_clearanceColumn2), .Cells(orderEnd, OF_clearanceColumn2)).Value = clearance2Values
.Range(.Cells(orderStart - 1, OF_clearanceColumn3), .Cells(orderEnd, OF_clearanceColumn3)).Value = clearance3Values
.Range(.Cells(orderStart - 1, OF_clearanceColumn4), .Cells(orderEnd, OF_clearanceColumn4)).Value = clearance4Values
.Range(.Cells(orderStart - 1, OF_plColumn1), .Cells(orderEnd, OF_plColumn1)).Value = pl1Values
.Range(.Cells(orderStart - 1, OF_plColumn2), .Cells(orderEnd, OF_plColumn2)).Value = pl2Values
.Range(.Cells(orderStart - 1, OF_plColumn3), .Cells(orderEnd, OF_plColumn3)).Value = pl3Values
.Range(.Cells(orderStart - 1, OF_plColumn4), .Cells(orderEnd, OF_plColumn4)).Value = pl4Values
End With
Debug.Print Timer - start & " adding data from dictionary to order file"
clientdataBook.Close saveChanges:=False
Application.ScreenUpdating = True
' lost track of what is supposed to be active yet?
orderFile.Range(orderFile.Cells(orderStart - 2, OF_buyAlgoColumn), orderFile.Cells(orderEnd, OF_plColumn4)).HorizontalAlignment = xlCenter
MsgBox ("Buy Algorithm Complete")
End Sub | {
"domain": "codereview.stackexchange",
"id": 33973,
"tags": "performance, vba, excel, hash-map"
} |
What happens in an exothermic reaction at the atomic level? | Question: Exothermic Reactions are those chemical reactions in which heat is released.
How does this happen?
What I mean is where does the heat energy come from? Which form of energy is converted to heat energy? What makes some reactions release heat? What affects the amount of heat energy released?
Thanks!
Answer:
What happens in an exothermic reaction at the atomic level?
In a chemical reaction, bonds are broken and bonds are formed. For example, when elemental oxygen and hydrogen form water, we are breaking bonds in the elements and making heteroatomic bonds connecting hydrogen and oxygen atoms:
$$\ce{O2 + 2H2-> 2H2O}$$
To emphasize the bonds, we could also write:
$$\ce{O=O + H-H + H-H -> H-O-H + H-O-H}$$
In a thought experiment, we could first separate the elements into "isolated" atoms, and then recombine them to form water. Bond dissociation energies estimate how much energy it takes to separate bound atoms into "isolated" atoms. In the case of the example reaction, it takes less energy to break all the bonds of the reactants (one double bond connecting oxygen atoms and two single bonds connecting hydrogen atoms) than to break all the bonds of the products (four single bonds between oxygen and hydrogen atoms). As a result, there is an excess of energy that becomes available, resulting in an exothermic reaction.
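To make the bookkeeping explicit, here is a back-of-the-envelope estimate using tabulated average bond enthalpies (approximate textbook values; exact numbers vary slightly between tables). The sketch below, in Python, just does the arithmetic for $\ce{2H2 + O2 -> 2H2O}$:

```python
# Approximate average bond enthalpies in kJ/mol (values differ slightly by table)
bond_energy = {"H-H": 436, "O=O": 498, "O-H": 463}

# 2 H2 + O2 -> 2 H2O
energy_to_break = 2 * bond_energy["H-H"] + bond_energy["O=O"]  # reactant bonds
energy_released = 4 * bond_energy["O-H"]                       # product bonds

delta_h = energy_to_break - energy_released  # negative => exothermic
print(delta_h)  # kJ per 2 mol of water formed
```

This gives about $-482\ \mathrm{kJ}$ for two moles of water, close to the experimental reaction enthalpy of roughly $-484\ \mathrm{kJ}$: the products' bonds are stronger overall, and the excess energy appears as heat.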
What affects the amount of heat energy released?
The reaction follows a more subtle path than completely ripping molecules apart into atoms and then making new molecules. However, because enthalpy is a state function, the result in terms of reaction enthalpy is the same. So the only thing that affects the reaction enthalpy is the set of reactants and products.
The amount of heat depends on how you set up the reaction. If the temperature before and after the reaction is the same, if the pressure is constant and there is no non-PV work, it is equal to the reaction enthalpy. If the reaction is used to do work (electrochemical cell, steam engine etc), if it is a photochemical reaction, if it gives off light, or if any other of the conditions given in the previous sentence are not met, that will affect the amount of heat transferred.
Which form of energy is converted to heat energy?
The energy "comes from" the different energy of electronic states in the reactants and products. You can call this chemical energy, or a combination of kinetic and potential energy of the electrons. We see that electrons going from a higher energy state to a lower energy state give off energy when considering atomic spectra (sodium giving off a yellow light in the Bunsen burner, for example). In the case of reactions, electronic states are different between reactants and products not because they are in excited states, but because of the different configuration of nuclei giving rise to different electronic states.
How does this happen?
Unless the reaction gives off energy directly (by emitting electromagnetic waves - rather rare), the available energy gets transformed into kinetic energy of the nuclei (e.g. molecular vibration), and then is transferred to the surrounding by the common ways of heat transfer until the system is at thermal equilibrium again. If the reaction is endothermic, some of the activation energy (provided by molecular collisions) is not "returned to the pool" but instead used to get the electrons into their higher energy states in the products (compared to the reactants), leading to a net heat transfer from surroundings to the chemical reaction. | {
"domain": "chemistry.stackexchange",
"id": 13698,
"tags": "thermodynamics"
} |
Band Structure and Carrier Recombination/Generation | Question: So i've been a bit confused, looking at PN junction, semiconductors and the like (trying to nail down how exactly semiconductors work, transistors and such). I've read the wiki on band structure (“holes” and free electrons).
Apparently the big part is the band theory, and how the energy levels work. This is what I have so far (correct me if I’m wrong):
Materials/atoms have energy levels and bands based on these energy levels.
The forbidden band (I’m assuming closest to the atom): no free electrons are allowed in or out.
Above that is the valence band, which can force free electrons out with some energy leaving behind a “hole”. The valence band is usually pretty well occupied (Does this mean it might have some “unoccupied holes” in it?).
Electrons in the conduction band, I’m assuming, take very little energy to move out of the atom (or are already leaving the atom).
The band gap determines how much energy it takes to remove an electron from the valence band to the conduction band (Insulators have a big gap, semiconductors’ gap is small. But does it, maybe, grow larger with heat?).
So, I have a few questions:
When an electron leaves a valence band, does it leave behind an actual hole, or is that just conceptual? What determines how “filled” a band is? How come electrons cannot just slide on into a band?
When an electron reaches the conduction band, is it “free” from the atom? Or just takes very little energy to get discharged?
When we refer to charge carriers, what is this? A Femi Gas? Are there holes in this gas that can move? Or do just electrons move in and out through the gas? I’m guessing the concept of “holes” has me slightly confused as I don’t understand how a hole can “move”.
Answer: One could write a novel about those questions... I'll try to nail down the most important facts.
Regarding what you figured out so far:
Basically correct. I would say: Every system of atoms has a quantum mechanical ground state. You can approximately assign an energy to each of the electrons (depending on the approximation you are using, e.g. Hartree-Fock or density-functional theory).
Bands are a fancy way of plotting those levels in case of a periodic crystal lattice. The k-axis, called crystal momentum, should only be understood as a quantum number or an index. It is NOT the momentum.
The forbidden band is not an actual band - it marks the absence of bands. That's why you call it band gap. The band gap/forbidden band is between the valence bands (=lower, filled bands) and the conduction bands (=upper, not filled bands). The band gap may also be nonexistent (metals).
It's not "closest to the atom", and there are no electrons there because there are no states that they can occupy (it's a gap).
The valence band is essentially fully occupied. This implies (nontrivially) that for every electron moving in one direction, there is an electron moving in the exact other direction. Therefore, there is no conduction. If an electron jumps (for whatever reason) into the conduction band, it doesn't have the aforementioned partner there - thus, it conducts. The same goes for the hole it leaves behind. One can show that a single "absent electron" behaves like a positive charge governed by the same equations as the electron. That's what you call a hole.
It's below the band gap (if the latter one even exists).
Essentially, all electrons of the atom can move throughout the material. The probability that this happens is not equal for all electrons, though. So it doesn't really take them any energy.
That's correct. I'm not aware of a temperature dependence of the band gap. Temperature makes it easier to jump over the gap, though. (Edit by @lemon: the band gap actually decreases almost linearly with increasing temperature (at least for silicon and germanium))
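(As a rough illustration of that temperature dependence: the empirical Varshni relation $E_g(T) = E_g(0) - \alpha T^2/(T+\beta)$, evaluated with commonly quoted silicon parameters, reproduces the shrinking gap. The numbers are illustrative, not authoritative.)

```python
# Empirical Varshni relation for a temperature-dependent band gap:
#   Eg(T) = Eg(0) - alpha * T**2 / (T + beta)
# Parameters are commonly quoted values for silicon (illustrative only).
def band_gap_si(temperature_k):
    eg0, alpha, beta = 1.17, 4.73e-4, 636.0  # eV, eV/K, K
    return eg0 - alpha * temperature_k**2 / (temperature_k + beta)

print(band_gap_si(0.0))    # 1.17 eV at absolute zero
print(band_gap_si(300.0))  # ~1.12 eV at room temperature: a smaller gap
```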
Concerning the questions:
Like I mentioned before, if you remove an electron from a band, it leaves behind a quasiparticle that acts like an electron with the opposite charge. This is what you call a hole.
One band can always hold 2 electrons per crystal unit cell. If a crystal has 8 atoms per unit cell, there will be four filled bands. This comes from the Pauli principle which states that a quantum mechanical state can only be occupied by 1 electron, or 2, if you count spin degeneracy. When a state in the band structure is occupied by 2 electrons, there can't be another one there. The states will be filled from bottom to top (concerning energy). The energy of the topmost filled state is called Fermi energy.
All the electrons in the system are free to move, in principle. The problem is, as I explained in 3), that a full band does not conduct. Only when an electron "jumps" into the conduction band can it conduct (and the hole it leaves behind will also conduct).
Every electron can move. Holes can move as well as electrons. Mind though, that a hole is only a missing electron which behaves equally to an electron with opposite charge (imagine that you have 100 people in a room, and everyone has a ball. Nothing will ever change. If you take away one ball, the person without a ball can be given a ball from the person next to him, and it will be as if the "hole" moved).
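The band-filling arithmetic above (two electrons per band per unit cell) can be made explicit. The one-relevant-electron-per-atom assumption below matches the 8-atoms-per-cell example given:

```python
# Each band holds 2 electrons per unit cell (spin up + spin down), so the
# number of completely filled bands is electrons_per_cell // 2.
# Assumption: one relevant electron per atom, as in the example above.
def filled_bands(atoms_per_cell, electrons_per_atom=1):
    electrons_per_cell = atoms_per_cell * electrons_per_atom
    return electrons_per_cell // 2  # topmost band may be only partly filled

print(filled_bands(8))  # 4 filled bands, as stated above
```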
The picture illustrates the concept of a band structure quite nicely:
The k axis (horizontal axis) is the k vector, it's just a quantum number/an index. I won't go into detail about that (look into the Bloch theorem if you want to know more).
There are some energies below the pictures which "belong" to the core electrons (1s). Their probability of moving between the atoms is very small, and the energy needed to get them to the valence band is very high (so they can't get up).
Every point in the diagram that belongs to a solid line marks a quantum state. The white spaces between don't have states. Only the solid lines.
The gray regions are the forbidden regions = band gaps. As you can see, there are no bands there.
The dashed line marks the Fermi energy, the highest occupied energy. The white area below marks the energy region where there are occupied states (=solid lines). This area is full of electrons, it's the valence band.
The bands in the top white area are the conduction states. If an electron doesn't jump up from the valence band, there is no electron up there.
Mind that "band" can mean "one solid line" as well and "a bunch of solid lines". The conduction band and the valence band are actually a bunch of bands (=a bunch of solid lines). | {
"domain": "physics.stackexchange",
"id": 6161,
"tags": "quantum-mechanics, condensed-matter, electronic-band-theory, carrier-particles"
} |
error building a package with rosjava | Question:
Hi,
I'm trying to rosmake a package using the rosjava.mk as described here.
Everything works fine until I add a dependency on another (self created) package.
No problems occur with dependencies on core packages (like std_msgs) or if I don't use the rosjava.mk.
I get an exception during the run of generate_properties.py because the script is trying to locate the "installation of stack None":
[ rosmake ] All 18 linesest: 0.3 sec ]
{-------------------------------------------------------------------------------
rosrun rosjava_bootstrap generate_properties.py test > ros.properties
Traceback (most recent call last):
File "/opt/ros/electric/stacks/rosjava_core/rosjava_bootstrap/scripts/generate_properties.py", line 174, in <module>
generate_properties_main()
File "/opt/ros/electric/stacks/rosjava_core/rosjava_bootstrap/scripts/generate_properties.py", line 171, in generate_properties_main
generate_ros_properties(package)
File "/opt/ros/electric/stacks/rosjava_core/rosjava_bootstrap/scripts/generate_properties.py", line 124, in generate_ros_properties
props['ros.pkg.%s.version'%(p)] = get_package_version(p)
File "/opt/ros/electric/stacks/rosjava_core/rosjava_bootstrap/scripts/generate_properties.py", line 112, in get_package_version
return get_stack_version_cached(s)
File "/opt/ros/electric/stacks/rosjava_core/rosjava_bootstrap/scripts/generate_properties.py", line 106, in get_stack_version_cached
_stack_version_cache[s] = val = roslib.stacks.get_stack_version(s)
File "/opt/ros/electric/ros/core/roslib/src/roslib/stacks.py", line 310, in get_stack_version
return get_stack_version_by_dir(get_stack_dir(stack, env=env))
File "/opt/ros/electric/ros/core/roslib/src/roslib/stacks.py", line 168, in get_stack_dir
raise InvalidROSStackException("Cannot location installation of stack %s. ROS_ROOT[%s] ROS_PACKAGE_PATH[%s]"%(stack, env[ROS_ROOT], env.get(ROS_PACKAGE_PATH, '')))
roslib.stacks.InvalidROSStackException: Cannot location installation of stack None.
I'm using the newest ros-electric-rosjava-core (0.1.0-s1313710013~maverick).
Thanks!
Originally posted by martin on ROS Answers with karma: 3 on 2011-10-06
Post score: 0
Answer:
I believe that's fixed at head. You can try installing rosjava from source to fix the issue. Or, you can put your new package in a stack.
Originally posted by damonkohler with karma: 3838 on 2011-10-08
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by martin on 2011-10-13:
Putting it in a package worked (didn't try installing it from source). Thanks! | {
"domain": "robotics.stackexchange",
"id": 6885,
"tags": "rosmake, rosjava"
} |
Circular motion and friction | Question: This is a solved example in my text book"Engineering mechanics dynamics. R.C.Hibbeler" :
The 3 kg disk is attached to the end of a cord. The other end of the cord is attached to a ball and socket joint located at the center of the platform. If the platform rotates rapidly, and the disk is placed on it and released from rest, determine the time it takes for the disk to reach a speed great enough to break the cord. The maximum tension the cord can sustain is 100 N, and the coefficient of kinetic friction between the disk and the platform is 0.1.
When the free body diagram of the particle was drawn, the friction force was considered in the tangential direction, but it wasn't considered in the normal direction (only tension was considered). Why?
Also, it is written that "the frictional force has a sense of direction that opposes the relative motion of the disk with respect to the platform". I don't see how this is true. I was expecting the friction force to act in the same direction as the relative motion of the disk with respect to the platform, since this force causes the speed of the disk to increase.
Answer: Circular motion requires a centripetal force pointing to the center of the circle that the disk is moving around, and this is provided by the tension in the cable. In addition, the disk is slipping on the turntable, and the kinetic friction force is trying to stop this slippage, so the disk is getting accelerated in a tangential direction, as shown in the diagram. Your problem requires that you use Newton's 2nd law to determine the tangential acceleration of the disk. This will allow you to calculate the tangential velocity, and angular velocity, of the disk with respect to time. Once you know this, you can determine how long the disk will remain on the turntable, because the string breaks when the centripetal force requirement exceeds the breaking point of the string. | {
"domain": "physics.stackexchange",
"id": 41755,
"tags": "homework-and-exercises, newtonian-mechanics, kinematics, angular-velocity"
} |
The throughput of the ALOHA protocol if the Binomial distribution was used | Question: In all the examples of ALOHA I've seen, the Poisson distribution is used. Theoretically, how could the throughput be calculated if a Binomial distribution was used instead?
For example, in the case that the offered load follows a Binomial distribution with probability $p$, I know that the probability of $k$ attempts within $t$ time slots is as follows:
$$\Pr\big[k \text{ attempts within $t$ time slots}\big] = \binom{t}{k}p^k(1-p)^{t-k}\,.$$
I know that the throughput $S$ is equal to $G$ (the average offered load) multiplied by the probability of success:
$$S = G\cdot\Pr\big[\text{successful}\big]\,,$$
but what would this be in the case that a binomial distribution is used?
Answer: I don't think it makes sense to use the binomial distribution.
The point of using the Poisson distribution is that network nodes are assumed to want to transmit with an exponential distribution at some rate. This is a reasonable model for objects that randomly emit stuff – for example radioactive decays follow this distribution. Given a bunch of things emitting stuff at intervals sampled from the exponential distribution, the number of emissions in some time interval is given by a Poisson distribution. In other words, the Poisson distribution arises naturally from a model of the individual nodes. A particular property of the Poisson distribution is that it has infinite support: it assigns a probability to the event that you have a million transmission attempts within ten time slots. Another feature of using exponential distributions is that it allows one to model the back-off time after a collision.
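Numerically, the per-slot binomial success probability (each of $n$ nodes transmits in a slot with probability $p$) converges to the Poisson slotted-ALOHA throughput $S = Ge^{-G}$ as $n$ grows with $G = np$ held fixed, which is exactly why the Poisson model is the standard one:

```python
import math

# A slot succeeds iff exactly one of n nodes transmits (probability p each).
def throughput_binomial(n, p):
    return n * p * (1 - p) ** (n - 1)

# Classical slotted-ALOHA result with Poisson-distributed attempts.
def throughput_poisson(G):
    return G * math.exp(-G)

G = 1.0  # the offered load that maximizes slotted-ALOHA throughput
for n in (10, 100, 10000):
    print(n, throughput_binomial(n, G / n))  # decreases toward 1/e ~ 0.368
print(throughput_poisson(G))                 # 0.3678...
```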
In contrast, using a binomial distribution doesn't come from any natural model of the individual nodes' behaviour. It only allows you to model "were there at least $n$ transmissions attempts during this time slot?" for some $n$ of your choosing. It doesn't seem able to deal with back-off and it doesn't seem able to deal with multiple collisions within a single time slot. | {
"domain": "cs.stackexchange",
"id": 7579,
"tags": "probability-theory, computer-networks, communication-protocols, protocols"
} |
Path integral on matrix model | Question: I was looking at a 0-dimensional matrix model, where the variables are $N \times N$ Hermitian matrices. It had a gauge symmetry, e.g. $U(N)$, and in the path integral the Faddeev-Popov trick was used. So instead of integrating over the set of matrices, one was reduced to integrating over the $N$ (real) eigenvalues. But when I tried counting the number of degrees of freedom, I was puzzled:
$N$ by $N$ Hermitian matrices have $N^2$ real independent variables ($N(N-1)$ real values for the off-diagonal triangular part, plus $N$ real variables for the diagonal entries, giving a total of $N^2$)
$U(N)$ has real dimension $N^2$, classic result from group theory
At the end, I arrive at $N^2 - N^2 = 0$ variables instead of $N$ real eigenvalues.
I tried seeing it from another angle but I can't derive what I'd like to. What am I thinking/doing wrong?
Answer: Your $N^2-N^2$ calculation is naive, well, it is incorrect because not all $N^2$ generators of $U(N)$ are changing the Hermitian matrix $M$. If a generic Hermitian matrix $M$ is transformed to
$$ M\to U M U^{-1} ,\quad U U^\dagger = 1,$$
then $N$ directions in $U$ i.e. in $U(N)$ don't change $M$ at all. This is easily seen in the basis in which $M$ is diagonal: the matrices
$$ U = {\rm diag} (e^{i\alpha_1},\dots , e^{i\alpha_N})$$
don't change $M$ at all. So the right difference is $N^2 - (N^2-N) = N$. | {
"domain": "physics.stackexchange",
"id": 4929,
"tags": "gauge-theory, path-integral, degrees-of-freedom, brst, matrix-model"
} |
Plotting some displays from a weather URL | Question: I've got some code that plots some displays from a weather URL and locations in a CSV file. I'd like to see if anyone can make the code more efficient, the code runs fine without any errors and I'm aware you won't be able to test it, but if anyone could take a look at making a class function or breaking up the functions so they can be handled better I'd appreciate it. This is the first piece of code I've ever written so I'm sorry about how it looks.
import matplotlib.pyplot as plt
import numpy as np
import UKMap
import csv
import requests
import bs4
import datetime as dt
def windy (open_file, close_file):
with open(open_file, 'r') as csvfile, open(close_file, 'w') as csvoutput:
towns_csv = csv.reader(csvfile, dialect='excel') # csv reader
writer = csv.writer(csvoutput, lineterminator='\n') # csv writer
for rows in towns_csv:
x = float(rows[2]) # gets x axis
y = float(rows[1]) # gets y axis
url = "http://api.met.no/weatherapi/locationforecast/1.9/?{0};{1}" # start of url string
lat = "lat={}".format(y) # creates the latitude part of the url string
lon = "lon={}".format(x) # creates the longitude part of the url string
text = url.format(lat, lon) # combines the strings together to create a new url
response = requests.get(text).text # get the url into text format
winds= bs4.BeautifulSoup(response, "xml")
# uses BeautifulSoup to make an xml file
wind_all = winds.find_all("windSpeed") # finds the "windSpeed" element
speeds = wind_all[0].get("mps") # finds the first "mps" attribute
wind_dir = winds.find_all("windDirection") # finds the "windDirection" element
wind_dirs = wind_dir[0].get("deg") # finds the "deg" attribute
rows.append(speeds) # append speed value
rows.append(wind_dirs) # append wind value
writer.writerow(rows)
def speed(file):
with open(file) as latloncsv:
towns_csv = csv.reader(latloncsv, dialect='excel')
datestring = dt.datetime.strftime(dt.datetime.now(), '%Y-%m-%d')
for rows in towns_csv:
x = float(rows[2]) # x co-ordinates
y = float(rows[1]) # y co-ordinates
u = float(rows[3]) # wind speed
v = float(rows[4]) # wind direction
# plots a scatter of the x and y co-ordinates using the wind direction
#to orientate and the speed to adjust the colour
plt.scatter(x, y, marker =(3,0,v), c=u, vmin=0, vmax =10,
cmap='jet', s=50, edgecolors='none')
cbar = plt.colorbar(shrink = .5) # print colourbar
cbar.set_label('Wind Speed (mps)') # labels the colourbar
UKMap.UKMap() # prints basemap
plt.savefig('Speed_Direction' + datestring + '.jpg')# save plot name
plt.show()
def get_data(open_file, save_file):
with open(open_file, 'r') as csvfile, open(save_file, 'w') as csvoutput:
towns_csv = csv.reader(csvfile, dialect='excel') # csv reader
writer = csv.writer(csvoutput, lineterminator='\n') # csv writer
for rows in towns_csv:
x = float(rows[2]) # gets x axis
y = float(rows[1]) # gets y axis
url = "http://api.met.no/weatherapi/locationforecast/1.9/?{0};{1}" # start of url string
lat = "lat={}".format(y) # creates the latitude part of the url string
lon = "lon={}".format(x) # creates the longitude part of the url string
text = url.format(lat, lon) # combines the strings together to create a new url
response = requests.get(text).text # get the url into text format
temps= bs4.BeautifulSoup(response, "xml")
# uses BeautifulSoup to make an xml file
temp_all = temps.find_all("precipitation") # finds the "precipitation" element
windy = temp_all[0].get("value") # gets the first "value" attribute
rows.append(windy) # append precipitation value
writer.writerow(rows)
return(rows)
def rainfall():
with open('rain1.csv') as latloncsv:
towns_csv = csv.reader(latloncsv, dialect='excel')
datestring = dt.datetime.strftime(dt.datetime.now(), '%Y-%m-%d')
for rows in towns_csv:
x = float(rows[2]) # x co-ordinates
y = float(rows[1]) # y co-ordinates
u = float(rows[3]) # rainfal
volume = (u)**3
plt.scatter(x, y, c=u, marker ='*', vmin=0, vmax =12, cmap='cool_r',
s=volume, facecolor='none', edgecolors='none')
cbar = plt.colorbar(shrink = .5) # print colourbar
cbar.set_label('Precipitation (mm)') # labels the colourbar
UKMap.UKMap() # prints basemap
plt.savefig('Rainfall' + datestring + '.jpg')# save plot name
plt.show()
Answer: Imports
From documentation:
Imports should be grouped in the following order:
standard library imports
related third party imports
local application/library specific imports
You should put a blank line between each group of imports.
So, your imports will look like this:
import bs4
import csv
import datetime as dt
import matplotlib.pyplot as plt
import numpy as np
import requests
import UKMap
Styling
You don't need any space after a functions' name:
def windy (open_file, close_file)
^
|____ not needed
More, you should have two newlines between functions:
...
import UKMap
# -- here, you should have two lines
def windy (open_file, close_file):
...
Around keywords/arguments, you should not have spaces:
cbar = plt.colorbar(shrink = .5)
^ ^
|_|
|______ not needed
Limit all lines to a maximum of 79 characters to improve readability. (I personally like to stick with PyCharm's convention of 120 characters length). The preferred way of wrapping long lines is by using Python's implied line continuation inside parentheses, brackets and braces. Long lines can be broken over multiple lines by wrapping expressions in parentheses. These should be used in preference to using a backslash for line continuation.
After each , you should have a space:
plt.scatter(x, y, marker =(3,0,v),
Should be:
plt.scatter(x, y, marker=(3, 0, v),
Comments
You have way too many comments. Inline comments are unnecessary and in fact distracting if they state the obvious. Rather than commenting each line of your code, you might want to document your functions by adding some docstrings and in addition, you might try to give your variables better names.
More, when you add a comment, don't use different styles:
# plots a scatter of the x ...
#to orientate and ...
UKMap.UKMap() # prints basemap
You should usually have two spaces between your last Python statement and the #, and one space after the #.
The right way to do it:
statement 1 # comment
Code improvements
When you're opening a file for reading, you can do:
with open(open_file) as csvfile ...
...
This will open the file in reading mode by default.
DRY(Don't repeat yourself)
Your code is really messy. I don't know how you're gonna use all of these functions. You didn't give us any hint.
You do repeat some lines of code multiple times, and this asks for a better structure of your code. Let's see how / what can we do.
What I'd personally do is move your reader / writer into two separate functions:
def csv_reader(csvfile):
with open(csvfile) as towns_csv:
return csv.reader(towns_csv, dialect='excel')
def csv_writer(csvfile):
with open(csvfile, 'w') as writer:
return csv.writer(writer, lineterminator='\n')
Then, I'd get rid of some useless variables. First, your windy function might look like this:
def windy(csvfile_in, csvfile_out):
"""
Your docstring here
"""
reader = csv_reader(csvfile_in)
writer = csv_writer(csvfile_out)
for rows in reader:
url_text = "http://api.met.no/weatherapi/locationforecast/1.9/?lat={};lon={}".format(float(rows[1]),
float(rows[2]))
response = requests.get(url_text).text
winds = bs4.BeautifulSoup(response, "xml")
wind_all, wind_dir = winds.find_all("windSpeed"), winds.find_all("windDirection")
speeds, wind_dirs = wind_all[0].get("mps"), wind_dir[0].get("deg")
rows.append(speeds)
rows.append(wind_dirs)
writer.writerow(rows)
Then, doing the same as above for speed function, we get:
def speed(csvfile_in):
"""
Your docstring here
"""
reader = csv_reader(csvfile_in)
datestring = dt.datetime.strftime(dt.datetime.now(), '%Y-%m-%d')
for rows in reader:
plt.scatter(float(rows[2]),
float(rows[1]),
marker=(3, 0, float(rows[4])),
c=float(rows[3]),
vmin=0,
vmax=10,
cmap='jet',
s=50,
edgecolors='none')
cbar = plt.colorbar(shrink=.5)
cbar.set_label('Wind Speed (mps)')
UKMap.UKMap()
plt.savefig('Speed_Direction{}.jpg'.format(datestring))
plt.show()
Moving forward to get_data():
def get_data(csvfile_in, csvfile_out):
"""
Your docstring here
"""
reader = csv_reader(csvfile_in)
writer = csv.writer(csvfile_out)
for rows in reader:
url_text = "http://api.met.no/weatherapi/locationforecast/1.9/?lat={};lon={}".format(float(rows[1]),
float(rows[2]))
response = requests.get(url_text).text
temps = bs4.BeautifulSoup(response, "xml")
temp_all = temps.find_all("precipitation")
windy = temp_all[0].get("value")
rows.append(windy)
writer.writerow(rows)
return rows
And last but not least, rainfall():
def rainfall(csvfile_in):
"""
Your docstring here
"""
reader = csv_reader(csvfile_in)
datestring = dt.datetime.strftime(dt.datetime.now(), '%Y-%m-%d')
for rows in reader:
plt.scatter(float(rows[2]),
float(rows[1]),
c=float(rows[3]),
marker='*',
vmin=0,
vmax=12,
cmap='cool_r',
s=float(rows[3]) ** 3,
facecolor='none',
edgecolors='none')
cbar = plt.colorbar(shrink=.5)
cbar.set_label('Precipitation (mm)')
UKMap.UKMap()
plt.savefig('Rainfall{}.jpg'.format(datestring))
plt.show()
There're many ways of rewriting your code so that you don't have to repeat yourself, but as much as my time allowed, I came up with all of the above.
The final code:
import bs4
import csv
import datetime as dt
import matplotlib.pyplot as plt
import numpy as np
import requests
import UKMap
def csv_reader(csvfile_in):
with open(csvfile_in) as towns_csv:
return csv.reader(towns_csv, dialect='excel')
def csv_writer(csvfile_out):
with open(csvfile_out, 'w') as writer:
return csv.writer(writer, lineterminator='\n')
def windy(csvfile_in, csvfile_out):
"""
Your docstring here
"""
reader = csv_reader(csvfile_in)
writer = csv_writer(csvfile_out)
for rows in reader:
url_text = "http://api.met.no/weatherapi/locationforecast/1.9/?lat={};lon={}".format(float(rows[1]),
float(rows[2]))
response = requests.get(url_text).text
winds = bs4.BeautifulSoup(response, "xml")
wind_all, wind_dir = winds.find_all("windSpeed"), winds.find_all("windDirection")
speeds, wind_dirs = wind_all[0].get("mps"), wind_dir[0].get("deg")
rows.append(speeds)
rows.append(wind_dirs)
writer.writerow(rows)
def speed(csvfile_in):
"""
Your docstring here
"""
reader = csv_reader(csvfile_in)
datestring = dt.datetime.strftime(dt.datetime.now(), '%Y-%m-%d')
for rows in reader:
plt.scatter(float(rows[2]),
float(rows[1]),
marker=(3, 0, float(rows[4])),
c=float(rows[3]),
vmin=0,
vmax=10,
cmap='jet',
s=50,
edgecolors='none')
cbar = plt.colorbar(shrink=.5)
cbar.set_label('Wind Speed (mps)')
UKMap.UKMap()
plt.savefig('Speed_Direction{}.jpg'.format(datestring))
plt.show()
def get_data(csvfile_in, csvfile_out):
"""
Your docstring here
"""
reader = csv_reader(csvfile_in)
writer = csv_writer(csvfile_out)
for rows in reader:
url_text = "http://api.met.no/weatherapi/locationforecast/1.9/?lat={};lon={}".format(float(rows[1]),
float(rows[2]))
response = requests.get(url_text).text
temps = bs4.BeautifulSoup(response, "xml")
temp_all = temps.find_all("precipitation")
windy = temp_all[0].get("value")
rows.append(windy)
writer.writerow(rows)
return rows
def rainfall(csvfile_in):
"""
Your docstring here
"""
reader = csv_reader(csvfile_in)
datestring = dt.datetime.strftime(dt.datetime.now(), '%Y-%m-%d')
for rows in reader:
plt.scatter(float(rows[2]),
float(rows[1]),
c=float(rows[3]),
marker='*',
vmin=0,
vmax=12,
cmap='cool_r',
s=float(rows[3]) ** 3,
facecolor='none',
edgecolors='none')
cbar = plt.colorbar(shrink=.5)
cbar.set_label('Precipitation (mm)')
UKMap.UKMap()
plt.savefig('Rainfall{}.jpg'.format(datestring))
plt.show()
I know my code can be even more DRYed but I'll let other reviewers handle what I've missed | {
"domain": "codereview.stackexchange",
"id": 23707,
"tags": "python, csv, beautifulsoup, matplotlib"
} |
A LinkedList implementation in Python | Question: class Link:
def __init__(self, value):
self.val = value
self.next = None
def __repr__(self):
return f"{self.val} "
def __str__(self):
return self.__repr__()
# TODO : Implement Non-recursive solutions and string representations in recursive solutions.
class LinkedList:
def __init__(self):
self.head = None
self.tail = None
self.length = 0
def __str__(self):
curr = self.head
linklist = ""
while curr:
linklist += str(curr) + ' '
curr = curr.next
return 'Linklist : ' + linklist
def __len__(self):
curr = self.head
next = self.head
size = 0
while curr and next:
if not next.next:
size += 1
return size
else:
size += 2
curr = curr.next
next = next.next.next
return size
def insert(self, key):
if not self.head:
self.head = Link(key)
self.tail = self.head
else:
node = Link(key)
self.tail.next = node
self.tail = node
self.length += 1
def delete(self, key):
if not self.head:
return False
elif self.head.val == key:
self.head = self.head.next
else:
curr = self.head
prev = None
while curr and curr.val != key:
prev = curr
curr = curr.next
if curr:
prev.next = curr.next
self.length -= 1
return True
return False
def print_reverse(self, node):
if node:
self.print_reverse(node.next)
print(str(node))
def do_reverse(self):
prev = None
curr = self.head
n = None
while curr:
n = curr.next
curr.next = prev
prev = curr
curr = n
self.head = prev
def delete_a_node_pointer(self):
pass
def find_middle_element(self):
curr = self.head
next = self.head
while curr and next:
if not next.next:
return curr.val,
elif next.next and not next.next.next:
return curr.val, curr.next.val
curr = curr.next
next = next.next.next
def hascycle(self):
slow = self.head
fast = self.head
while slow and fast and fast.next:
slow = slow.next
fast = fast.next.next
if fast is slow:
return True
return False
def create_a_cycle(self):
index = 0
curr = self.head
while curr:
if index == 4:
break
curr = curr.next
index += 1
self.tail.next = curr
def find_start_of_the_cycle(self):
slow = self.head
fast = self.head
cycle = False
while slow and fast and fast.next:
slow = slow.next
fast = fast.next.next
if fast is slow:
cycle = True
break
if cycle:
curr = self.head
while curr and fast and curr is not fast:
curr = curr.next
fast = fast.next
return curr
else:
return None
def removeNthFromEnd(self, n):
pass
def test_main():
linklist = LinkedList()
linklist.insert(2)
# linklist.insert(4)
# linklist.insert(3)
# linklist.insert(-3)
#linklist.insert(-92)
print(str(linklist))
# linklist.delete(2)
# linklist.delete(3)
# linklist.delete(4)
# Don't print once the list has a cycle as it will loop for forever
#linklist.create_a_cycle()
# print(str(linklist))
print('HasCycle : ' , linklist.hascycle())
print('Cycle : ', str(linklist.tail), '->', linklist.find_start_of_the_cycle())
print('Middle Element : ', linklist.find_middle_element())
linklist.do_reverse()
print('Reversed', str(linklist))
print('Length LinkedList : ', str(len(linklist)))
if __name__ == '__main__':
test_main()
Answer: A few things I'd add to @greybeard's answer (though I'll first emphasize: Add Docstrings):
It's a LinkedList, I'd expect to be able to iterate over it. There should be an __iter__ function here.
You rely heavily on knowing what your values are (head and tail), so you should hide them as private variables (_head and _tail) to indicate that external code should not access or modify them.
You keep a length attribute, but when __len__ is called, you go through the expensive task of re-computing this anyway. If you trust this value, return it; if not, then don't bother keeping it.
You have functions to detect and manage cycles in your linked list, but many of your other functions (including __len__) don't check if they're trapped in one. This creates a ripe field for getting code locked in an infinite loop.
print_reverse relies on recursion, which won't work for lists of more than a few thousand items.
do_reverse is really vague, but seems to reverse the list; in Python, this is usually defined as __reversed__
delete_a_node_pointer... does nothing, throws no errors, and takes no argument. Delete this, or at least raise NotImplementedError()
create_a_cycle goes to element 4... for no explicable reason. This should be an argument.
You support creating a cycle mid-list (that is, tail points to somewhere in the middle of the list), but then elsewhere in your code treat tail.next as properly pointing to head (particularly in insert... and it should be used in delete, that's probably a bug that it's not there). Either keep your linked list as a single ring or support middle-cycles, you can't really do both. | {
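A minimal sketch of the cycle-safe __iter__ suggested above, written against a stripped-down stand-in class rather than the full code under review (the _head/_tail names follow the private-variable suggestion; everything here is illustrative). The iterator stops either at the end of the list or when it wraps back around to the head, so a closed ring cannot trap callers in an infinite loop:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class RingSafeLinkedList:
    """Illustrative stand-in, not the class under review."""
    def __init__(self):
        self._head = None
        self._tail = None

    def insert(self, value):
        node = Node(value)
        if self._head is None:
            self._head = node
        else:
            self._tail.next = node
        self._tail = node

    def __iter__(self):
        curr = self._head
        while curr is not None:
            yield curr.value
            curr = curr.next
            if curr is self._head:
                break  # closed ring: we are back at the start
```

With an iterator like this, __len__ can simply be sum(1 for _ in self), and it terminates even when _tail.next points back at _head.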
"domain": "codereview.stackexchange",
"id": 33588,
"tags": "python, algorithm, linked-list, reinventing-the-wheel"
} |
Is the given grammar LL(2)? | Question: http://dickgrune.com/Books/PTAPG_1st_Edition/BookBody.pdf
The book by Grune and Jacobs presents an example of a grammar that is $LL(k+1)$ but not $LL(k)$.
The example is $S \to a^k b \mid a^k a$.
The grammar of this type is $LL(k+1)$ but not $LL(k)$.
I have an example based on the grammar shown. Is this also $LL(2)$?
$S \to cca \mid ccb$
Based on the information above, I just want to confirm: is this grammar also $LL(2)$ but not $LL(1)$?
Answer: It is neither LL(2) nor LL(1). Read the Grune and Jacobs argument carefully. What is the k in this example?
If the first two characters of the input are "cc", would you know which production to apply? So how can it be LL(2)? | {
"domain": "cs.stackexchange",
"id": 7935,
"tags": "automata, formal-grammars, compilers, parsers"
} |
Pressure of a gas | Question: In a recent physics class, I was told that for a gas enclosed in a closed vessel, the pressure of the gas is variable but the volume and amount remain constant if we heat the gas. I understood this simply: the molecules can't escape out of a closed vessel and still have the same free space for moving.
Then I was told that if we consider gas in an open vessel and heat it, its volume and pressure will remain constant; since its molecules may escape, the amount of gas cannot be constant when the other conditions change. Now I have the following doubts about this statement:
Why will pressure remain constant if we heat the open container? The molecules will start moving more swiftly and will therefore exert more pressure on the walls.
Why will volume be constant for the gas in an open vessel? Volume is the free space available to the gas for motion, and in an open vessel the whole universe is available to the gas for motion.
Please elaborate on above points.
Answer: The air pressure inside a closed container may increase or decrease with temperature changes over time. The volume will remain the same as long as the container is rigid and does not change size. The amount of gas does not change in the closed container, as the amount of gas molecules, its mass, does not change. As you say you understand this part, I will move on to your questions.
Question 1: When you heat an open container the pressure remains the same because the excited molecules may bounce off of the walls of the container as you assumed; however, they do not bounce off of the open top, so gas molecules can escape into the atmosphere, allowing for no pressure increase.
Question 2: The volume of gas IN the open container remains the same. Gas molecules may enter or exit the open container, but the volume of gas inside the container at different moments will remain the same. For example, a 1 liter open container will always have 1 liter of volume considered to be inside of it.
"domain": "physics.stackexchange",
"id": 79490,
"tags": "thermodynamics, pressure, ideal-gas"
} |
Convert Time Stamp from rosbag(Topic) to Understandable Format Y/M/D H:M:S | Question:
Hi, would someone know what the following TimeStamp means, and how to convert it to seconds or another understandable format such as year/month/day Hour:Minutes:Seconds?
I have a car-robot that has driven 80 meters in about 1 min of simulation. However when I recorded the rosbag I had the:
First Timestamp =1,63277521885359E+018
and the
Last TimeStamp from this topic (odom) = 1,63277529735619E+018
I do not know to convert this data to an understandable format such as Y/M/D H:M:S
Some tool makes the trick? Or some Python Library such as DateTime (how to specific use, suitable class to apply?)
I have done some research here: https://discourse.ros.org/t/timestamps-and-rosbags-discussing-an-alternative-to-clock-and-use-sim-time/3238/9, but these steps seem too complex: http://wiki.ros.org/rosbag/Cookbook
In addition, I got another column in my rosbag called field.header.sec
that intuitively should contain the seconds. But assessing this data, the first and last values are:
T0 = 7348
TF = 8036
Subtracting these values I have 688 seconds = 11 minutes. And this is Not realistic because I have killed the simulation and the rosbag after 2 minutes( max) and the car in the simulation achieved the goal after 1 min, not 11 minutes....
I wonder how to understand and convert the TimeStamp extracted from rosbags.
If someone could help me, I would be very grateful
Beginning of csv file (localizer or odom topic)
https://aws1.discourse-cdn.com/business7/uploads/ros/original/2X/f/f3cb0d690ea58fdb73a36d2d14afdd64462c9a58.png
End of csv file
https://aws1.discourse-cdn.com/business7/uploads/ros/original/2X/7/7773cabae44ccebc654301fcf9877eada01dde44.png
Originally posted by Vini71 on ROS Answers with karma: 266 on 2021-10-14
Post score: 0
Original comments
Comment by gvdhoorn on 2021-10-14:
Did you cross-post this to ROS Discourse here?
Comment by Vini71 on 2021-10-14:
yes sorry I am trying to get as much ideas as possible to help on this issue.
Comment by gvdhoorn on 2021-10-14:
That's not how it works, and is actually pretty annoying. At best it leads to duplicated answers. In most cases it leads to wasted effort.
And it's not such a special problem. The Q&As I've linked in my answer already discuss this, and they're years old.
Comment by Vini71 on 2021-10-14:
Ok....I respect your point of view...because actually there are different channels, different experts that will visualize...so..., I think different..but ok.
Could you take a look at my own answer below, please? I am trying to figure out what is wrong with the code.
Comment by gvdhoorn on 2021-10-14:
It's not just my point of view: https://en.wikipedia.org/wiki/Crossposting
Comment by Vini71 on 2021-10-14:
Hummm ok I got the idea @gvdhoorn, I have tooken a quick read...so much rules to be aware...not easy. But thanks to share these rules to me. I will try to follow all of them. Additionally let me ask you, I have edited the question on ROS Discourse, and instead put the whole question as here I have just put this link of ROS Answers for other ROS users visualize. Would this be the right manner or not? I mean if I wish that some question be visualized by different communities...which is the right way?? And thanks again for your answer!! It saved my time!
Answer:
I have a car-robot that has driven 80 meters in about 1 min of simulation. However when I recorded the rosbag I had the: First Timestamp =1,63277521885359E+018 and the Last TimeStamp from this topic (odom) = 1,63277529735619E+018
I do not know to convert this data to an understandable format such as Y/M/D H:M:S Some tool makes the trick?
Afaik, this is just a Unix Epoch stamp. See #q273953 or #q296045 for instance.
If you run rosbag info /path/to/your.bag, it should show you something like this (from here):
start: Jun 17 2010 14:24:58.83 (1276809898.83)
end: Jun 17 2010 14:25:00.01 (1276809900.01)
note the number in brackets: that's the Unix Epoch stamp. The 'human readable' version is printed before it.
In addition, I got another column in my rosbag called field.header.sec that intuitively should return the seconds.
field.header.seq is not a timestamp. It is the sequence ID field. That's just a message counter.
Refer to std_msgs/Header:
# sequence ID: consecutively increasing ID
uint32 seq
Edit:
but for the data I have, that is
unix_epoch_timestamp_t0 Out[29]: 1632775218853592670
and unix_epoch_timestamp_tf Out[30]: 1632775297356186066
1632775218853592670 is not a valid Unix Epoch stamp. Or at least, not for the functions you are using, as they will expect a resolution of seconds. Note the warning epochconverter.com gives you:
Assuming that this timestamp is in nanoseconds
You're likely concatenating the seconds and the nanoseconds from the std_msgs/Header columns in your .csv as strings, and then trying to interpret that as a single number. That doesn't work.
You can probably pass time.localtime(..) a float, in the form of seconds.nanoseconds:
>>> time.strftime("%a, %d %b %Y %H:%M:%S +0000", time.localtime(1632775297.356186066))
'Mon, 27 Sep 2021 22:41:37 +0000'
Note that your format string doesn't allow for showing the nanosecond precision of the stamp, and Python renders this in my local timezone (UTC+2). I'm also not sure hard-coding a +0000 for the time-zone is a good idea.
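For completeness, a minimal Python sketch of the conversion, assuming (as above) that the concatenated stamp is an integer count of nanoseconds since the Unix epoch; the function name here is my own, not part of any ROS API:

```python
from datetime import datetime, timezone

def ns_stamp_to_datetime(stamp_ns):
    """Interpret an integer nanosecond Unix-epoch stamp as a UTC datetime."""
    return datetime.fromtimestamp(stamp_ns / 1e9, tz=timezone.utc)

t0 = ns_stamp_to_datetime(1632775218853592670)
tf = ns_stamp_to_datetime(1632775297356186066)
print(t0.strftime("%Y/%m/%d %H:%M:%S"))  # first odom message, in UTC
print((tf - t0).total_seconds())         # elapsed wall time, about 78.5 s
```

Note that the elapsed time between the first and last stamps comes out to roughly 78.5 seconds, which matches the one-to-two-minute run described in the question far better than the 688 "seconds" read from the seq column.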
Originally posted by gvdhoorn with karma: 86574 on 2021-10-14
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 37019,
"tags": "ros-melodic, rosbag"
} |
Lower bound for finding majority element in a sorted array | Question: Suppose $A$ is a sorted array with $n$ elements. I want to know whether we can determine if there are majority elements in $A$ with time complexity $O(1)$.
Recall that a majority element of $A$ is an element which appears over $n/2$ times in $A$.
I can find a $O(\log(n))$ algorithm to solve this problem. But I'm not sure whether this can be done in $O(1)$. I was trying to use the adversary method, but I don't know how to construct it.
Answer: Any algorithm would need $\Omega(\log n)$ queries.
To see this, define $f(k)$ to be the number of queries needed for deciding whether an element $x$ appears at least $a$ times in a sorted array $A$. We assume that $x$ appears in $A[m],A[m+1],\dots,A[M]$, and that $k\triangleq\min\{a, m-1, n-M\}$.
Notice that in these notations we are looking to bound $f(\lfloor n/2 \rfloor)$.
Claim: $f(k)\ge \log k$.
Proof:
Consider the first query made by the algorithm.
If it was done within radius $k/2$ of the interval (i.e. somewhere in $[m-k/2,\ldots,M+k/2]$) and found $x$ at that spot, then we still need at least $f(k/2)$ queries to decide the problem.
If it was done outside that radius, consider the case where the queried cell did not contain $x$. Once again, this leaves us with at least $f(k/2)$ queries to be made.
Continue with induction and you get the $\Omega(\log n)$ bound. | {
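For contrast with this lower bound, the $O(\log n)$ upper bound mentioned in the question can be sketched as follows (one standard approach, not necessarily the asker's): in a sorted array, any majority element must occupy the middle cell, so two binary searches around that single candidate settle the question.

```python
from bisect import bisect_left, bisect_right

def has_majority(a):
    """Return True iff some value fills more than half of the sorted list a."""
    n = len(a)
    if n == 0:
        return False
    candidate = a[n // 2]              # a majority element must cover the middle
    first = bisect_left(a, candidate)  # index of first occurrence
    last = bisect_right(a, candidate)  # one past the last occurrence
    return last - first > n // 2
```

Each bisect probes the array $O(\log n)$ times, so this matches the bound above up to constants.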
"domain": "cs.stackexchange",
"id": 4406,
"tags": "algorithms, complexity-theory, lower-bounds, constant-time"
} |
Bubble Sort in Forth (for strings) | Question: I wrote the following Forth code for sorting a string with the Bubble Sort method.
It looks nice to my eyes, but I'd like your experienced opinion and any comments about the code you might have.
In the compare-and-swap-next word, is using the return stack to save base address of string ok?
the bubblesort word uses 2 pick, which is not so bad? A previous version had 5 pick (!). Anyway, is 2 pick fine (don't overthink it), or should I try some more refactoring?
How would I go about adding a check for any swaps in each round and terminate the sort early? A variable? A stack cell (on TOS)? Rethink all of the implementation?
: compare-and-swap-next ( string i -- )
2dup + dup >r c@ rot rot 1 + + c@ 2dup >
if r@ c! r> 1 + c!
else r> drop 2drop
then ;
: bubblesort ( string len -- string len )
dup 1 -
begin dup 0>
while dup 0
do 2 pick i compare-and-swap-next
loop
1 -
repeat
drop ;
\ s" abracadabra" bubblesort \ cr type
\ s" The quick brown fox" bubblesort \ cr type
\ s" a quick brown fox jumps over the lazy dog." bubblesort \ cr type
Code available on github
Nitpicks welcome! Pedantism welcome!
Thank you!
Answer: To address your immediate concerns,
Using the return stack for storing your temporaries is a perfectly valid technique.
pick is always frown upon (as well as tuck, roll, and friends). It seems that the len parameter to bubblesort does not participate in computation - except the very beginning - and mostly just stays in the way. Consider
: bubblesort
dup >r 1-
....
and use over instead of 2 pick (don't forget to r> the length at the end).
I prefer a slightly different formatting of conditional. Consider
2dup > if
r@ c! r> 1 + c! else
r> drop 2drop then ;
Same for the loops. Consider
: bubblesort ( string len -- string len )
dup 1 - begin
dup 0> while
dup 0 do
2 pick i compare-and-swap-next
loop
1 -
repeat
drop ;
Keeping control words together with their respective conditions/actions looks more Forthy for me.
r> drop is also known as rdrop.
rot rot is also known as -rot. | {
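On the third question (stopping early once a pass performs no swaps), the usual pattern is a flag set inside the inner loop; here it is as a Python sketch, since the flag logic is the part to port back to Forth:

```python
def bubblesort(chars):
    """Sort a list in place, stopping once a full pass swaps nothing."""
    n = len(chars)
    for limit in range(n - 1, 0, -1):
        swapped = False
        for i in range(limit):
            if chars[i] > chars[i + 1]:
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
                swapped = True
        if not swapped:
            break  # already sorted: skip the remaining passes
    return chars
```

In Forth, one option is to keep the flag on the stack: push a false before the inner do loop, have the compare-and-swap word or a true into it when it swaps, and test it after loop. A variable works just as well and may be easier to read; which is cleaner is a judgment call.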
"domain": "codereview.stackexchange",
"id": 38760,
"tags": "beginner, algorithm, sorting, forth"
} |
Projection Operator in qiskit.opflow | Question: I am looking for a way to implement a projection operator like $\mid +\rangle\langle+\mid$ in qiskits opflow module. I tried Plus @ ~Plus but this gives me an error message. There must be a direct implementation of projection operators right? I can't use the operators in qiskit.quantum_info for what I want to do, except if there is a way to convert those operators to the ones in opflow.
Edit:
The error message I get from (Plus)@(~Plus) is ValueError: Composition with a Statefunctions in the first operand is not defined.
The reason I think I can't use qiskit.quantum_info's operators is that I want to compute the spectrum using qiskit.algorithms.NumPyEigensolver, and this does not seem to work with a quantum_info operator:
>>>plus = Statevector([1/np.sqrt(2),1/np.sqrt(2)]).to_operator()
>>>plus
Operator([[0.5+0.j, 0.5+0.j],
[0.5+0.j, 0.5+0.j]],
input_dims=(2,), output_dims=(2,))
>>>solver = NumPyEigensolver()
>>>spectrum = solver.compute_eigenvalues(plus)
AttributeError: 'Operator' object has no attribute 'to_spmatrix'
I am generally confused about the connection and difference between qiskit.quantum_info and qiskit.opflow because they seem to me to have a large overlap. Is one of those an older module and should not be used anymore? Is there a general way to convert objects like operators from one to the other?
Answer: You can convert a qiskit.quantum_info operator to a qiskit.opflow operator easily by passing it as a parameter to PrimitiveOp constructor:
from qiskit.opflow import PrimitiveOp
from qiskit.quantum_info import Statevector
# |+>
sv = Statevector.from_label('+')
# |+><+|
proj = sv.to_operator()
# Convert to opflow operator:
op = PrimitiveOp(proj) | {
"domain": "quantumcomputing.stackexchange",
"id": 4305,
"tags": "qiskit, programming"
} |
Is there a name for this graph transformation (remove a 'thru' node) | Question: I have a directed graph, eg:
$$A\to B\to C$$
$A$ and $C$ may have multiple in- and out-edges, but B has exactly one in- and one out-edge.
I want to remove node $B$ and replace the edges $(A,B)$ and $(B,C)$ with a single edge $(A,C)$.
Is there a name for this operation? In particular a name or algorithm for finding all nodes such as $B$ and removing them from the graph.
Answer: You're contracting the edge $(A,B)$ (or, equivalently, $(B,C)$).
To contract an edge $(x,y)$ is to delete the vertex $y$ and make all its neighbours except $x$ itself be neighbours of $x$ instead. | {
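A small sketch of the clean-up pass the question asks about, using a plain adjacency-set representation (the name smooth and the representation are illustrative choices, not from any particular library): repeatedly find a node with exactly one in-edge and one out-edge and contract it away.

```python
def smooth(graph):
    """graph: dict mapping node -> set of successors. Mutates and returns it.

    Repeatedly removes any node b with exactly one in-edge (a, b) and one
    out-edge (b, c), replacing the pair with the single edge (a, c).
    """
    changed = True
    while changed:
        changed = False
        # Recompute in-degrees each round; they change after every contraction.
        indeg = {v: 0 for v in graph}
        for succs in graph.values():
            for v in succs:
                indeg[v] += 1
        for b in list(graph):
            if indeg[b] == 1 and len(graph[b]) == 1:
                (c,) = graph[b]
                a = next(u for u, succs in graph.items() if b in succs)
                if a == b or c == b:
                    continue  # self-loop: not a 'thru' node
                graph[a].discard(b)
                graph[a].add(c)
                del graph[b]
                changed = True
                break  # degrees are stale now; restart the scan
    return graph
```

Restarting the scan after each contraction keeps the sketch simple at the cost of extra passes; a production version would maintain the in-degree counts incrementally.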
"domain": "cs.stackexchange",
"id": 7806,
"tags": "graphs"
} |
Robots files generation - simplifying foreach code | Question: I'm trying to figure out if there's a way I can simplify this code which is used to generate a robots.txt file with numerous rules. Different files/folders are separated in separate arrays because they're applied differently but in some cases they are applied the same. In these cases, rather than write out a separate foreach for each one, this is what I'm trying to simplify:
<?php
public function generateRobotsFile($rbfile, $smfile) {
$tab = array();
$tab['SemBot'] = array('Googlebot',"bingbot\nCrawl-delay: 5",'MSNBot');
$tab['Lang'] = array('eb');
$tab['Files'] = array('address.php','cart.php');
$tab['Folder'] = array('classes','docs','themes');
fwrite($writeFd, "\nUser-agent: *\n");
foreach ($tab['Lang'] as $Lang) {
fwrite($writeFd, 'Disallow: ' . __PS_BASE_URI__ . $Lang . "\n");
}
foreach ($tab['Folder'] as $Folder) {
fwrite($writeFd, 'Disallow: ' . __PS_BASE_URI__ . $Folder . "\n");
}
foreach ($tab['SemBot'] as $SemBot) {
fwrite($writeFd, 'User-agent: ' . $SemBot . "\n");
foreach ($tab['Files'] as $Files) {
fwrite($writeFd, 'Disallow: ' . __PS_BASE_URI__ . $Files . "\n");
}
foreach ($tab['Folder'] as $Folder) {
fwrite($writeFd, 'Disallow: ' . __PS_BASE_URI__ . $Folder . "/\n");
}
}
}
The arrays must remain separated but in some cases they are looped through the same disallow rule so I'm reaching for a foreach array in array but haven't quite grasped it yet.
Answer: If number of lines is your primary concern, then really all we can do with this is merge your array creation lines, and use array_merge() to create temporary arrays which you can loop around, like so:
public function generateRobotsFile($rbfile, $smfile)
{
$tab = array('SemBot' => array('Googlebot', "bingbot\nCrawl-delay: 5", 'MSNBot'),
'Lang' => array('eb'),
'Files' => array('address.php', 'cart.php'),
'Folder' => array('classes', 'docs', 'themes'));
fwrite($writeFd, "\nUser-agent: *\n");
foreach(array_merge($tab['Lang'], $tab['Folder']) as $disallow)
{
fwrite($writeFd, 'Disallow: ' . __PS_BASE_URI__ . $disallow . "\n");
}
foreach($tab['SemBot'] as $SemBot)
{
fwrite($writeFd, "User-agent: " . $SemBot . "\n");
foreach(array_merge($tab['Files'], $tab['Folder']) as $disallow)
{
fwrite($writeFd, 'Disallow: ' . __PS_BASE_URI__ . $disallow . "\n");
}
}
} | {
"domain": "codereview.stackexchange",
"id": 7739,
"tags": "php, array"
} |
What are the assumptions made in ideal MHD? | Question: So my understanding is that in ideal magnetohydrodynamics (MHD), we assume a conductivity of infinity, and that makes the electric field in a comoving frame equal to zero. As I was watching one derivation of the equations of ideal MHD, I also noticed that the conservative energy PDE was treated as adiabatic, e.g., excluding terms for heat transfer.
So my question is this: Is the adiabatic treatment of the fluid part of the assumptions of idealization of MHD? Or was this probably just a simplification of the individual deriving the equations? Are there any other assumptions we make for ideal MHD? Commenting on how good these (or any other) assumptions are would also be appreciated.
Answer: Ideal magnetohydrodynamics (MHD) means isentropic. No entropy is produced. So dissipative heat transfer terms are excluded. The condition that the electric field vanish in the rest frame is a consequence of this since otherwise entropy would be produced due to Ohm's law.
A (relativistic) demonstration of this is given in Harris, Phys. Rev. 108, 1357 (1957). And it is also summarized in section II.B of arXiv:1412.3135. | {
"domain": "physics.stackexchange",
"id": 84899,
"tags": "electromagnetism, fluid-dynamics, magnetohydrodynamics"
} |
Prettify JSON class | Question: It's not much, but I tried employing some of the things I learned yet never really got to use, since that kind of code isn't really needed where I work (for the most part). I tried making it as much C++11 as I could. I would like to know if I messed up somewhere, maybe something could have been done better, or maybe you got some tips to improve readability?
#include <string>
#include <regex>
#include <vector>
class JSONPretify : public std::string{
public:
JSONPretify(std::string j){
this->assign(j);
pretify();
};
JSONPretify(std::string j, bool colon_space){
this->assign(j);
pretify();
if(colon_space)
insertColonSpaces();
};
private:
void pretify(){
std::regex var = std::regex(R"((\".+?\".*?(?=\{|\[|\,|\]|\}))|(\d+?))");
long it = 0;
int depth = 0;
while(it < this->size() && it != -1){
regex_pos pos_tab = findRegexFirstPosition(it, var);
long pos_comma = this->find(",", it);
long pos_obj_start = this->find("{", it);
long pos_obj_end = this->find("}", it);
long pos_array_start = this->find("[", it);
long pos_array_end = this->find("]", it);
long old_it = it;
unsigned long work_with = find_lowest(std::vector<long>{pos_tab.pos, pos_comma, pos_obj_start, pos_obj_end,pos_array_start,pos_array_end});
switch(work_with){
case(TAB):{
std::string insert = generateSpaces(depth);
this->insert(pos_tab.pos, insert);
it = pos_tab.pos+insert.size()+pos_tab.length;
break;
}
case(COMMA):{
std::string insert = "\n";
this->insert(pos_comma+1, insert);
it = pos_comma+1;
break;
}
case(OBJ_START):{
std::string insert = "\n";
this->insert(pos_obj_start+1, insert);
it = pos_obj_start+insert.size();
depth+=1;
if(pos_obj_start-1 < 0 || pos_obj_start > this->size()) continue;
if(this->at(pos_obj_start-1) != ':'){
std::string extra = generateSpaces(depth-1);
this->insert(pos_obj_start, extra);
it+=extra.size();
}
break;
}
case(OBJ_END):{
std::string insert = "\n"+generateSpaces(depth-1);
this->insert(pos_obj_end, insert);
depth-=1;
it = pos_obj_end+insert.size()+1;
break;
}
case(ARRAY_START):{
depth+=1;
std::string insert = "\n";
this->insert(pos_array_start+1,insert);
it=pos_array_start+insert.size();
break;
}
case(ARRAY_END):{
depth-=1;
std::string insert = "\n"+generateSpaces(depth);
this->insert(pos_array_end,insert);
it=pos_array_end+insert.size()+1;
break;
}
default:{
break;
}
};
if(it == old_it)
break;
}
};
void insertColonSpaces(){
long pos = 0;
while(pos < this->size() && pos != -1){
pos = this->find(":", pos);
if(pos == -1 || pos >= this->size()) break;
this->replace(pos,1, " : ");
pos+=3;
}
}
struct regex_pos{
long pos;
long length;
};
std::string generateSpaces(int l){
std::string r="";
for(int i = 0; i < l; i++){
r+= " ";
}
return r;
}
regex_pos findRegexFirstPosition(long start_pos, std::regex rx){
long at = -1;
long l = 0;
std::string ss(this->begin()+start_pos, this->end());
std::smatch m;
std::regex_search ( ss, m, rx );
for (unsigned i=0; i<m.size(); ++i) {
at = m.position(i);
l = m[i].str().size();
break;
}
if(at != -1) at += start_pos;
return {at,l};
}
template<typename T>
unsigned long find_lowest(std::vector<T> outof){
unsigned long lowest_it = 0;
for(unsigned i = 0; i < outof.size(); i++){
if((outof[i] < outof[lowest_it] && outof[i] != -1) || (outof[lowest_it] == -1 && outof[i] != -1)){
lowest_it = i;
}
}
if(outof[lowest_it] == -1)
lowest_it = outof.size()+1;
return lowest_it;
}
enum positions{
TAB = 0,
COMMA = 1,
OBJ_START = 2,
OBJ_END = 3,
ARRAY_START = 4,
ARRAY_END = 5
};
};
Github link
Answer: Thanks for editing the question. I have a few ideas how you might want to polish your code.
std::string base class
It caught my eye that JSONPretify is derived from std::string. Generaly speaking STL containers are not meant for such use case. Specificaly it is a good idea if you are deriving from a base class to check that it has virtual destructor. std::string does not have virtual destructor.
See e. g.
https://www.securecoding.cert.org/confluence/display/cplusplus/OOP52-CPP.+Do+not+delete+a+polymorphic+object+without+a+virtual+destructor
interface
I would go even further and suggest that as you don't need to model any state, keep any data or invariant simple function might be better interface.
std::string pretify(const std::string& j, bool colon_space = false);
separation of interface and implementation
In order to be able to hide all hairy details from users of your code you might separate it into interface and implementation. The most common form is an interface-only header file (e.g. prettify.hpp) and an implementation source file (e.g. prettify.cpp). You then might leave all definitions and implementation details to prettify.cpp. To hide them from the rest of your code (even code that you are only linking to) you might use either anonymous namespaces or internal-linkage functions (surprisingly, this is the other meaning of the keyword static).
http://en.cppreference.com/w/cpp/language/namespace
http://en.cppreference.com/w/cpp/language/storage_duration
find_lowest
I would try to avoid implementation of this algorithm and use std::string::find_first_of() and/or std::min_element().
If you decide to stick with it then you still might simplify it by not making it a template, as there is just a single call to it. You also probably don't want to copy the argument vector, so a reference might be more appropriate:
unsigned long find_lowest(const std::vector<long>& outof){
work_with
I would recommend using a scoped enum (a great C++11 extension) and distinguishing between such enums and integers.
See http://en.cppreference.com/w/cpp/language/enum [Scoped enumerations]
All it takes is new positions definition:
enum class positions{
and change to values usage:
case(positions::TAB):{
I am perplexed by this
positions work_with = find_lowest(std::vector<long>{pos_tab.pos, pos_comma, pos_obj_start, pos_obj_end,pos_array_start,pos_array_end});
because you are assigning position to work_with but checking content in switch. Is it correct?
variables
This is kind of subjective opinion but omitting some helper variables might increase readability.
case(COMMA):{
std::string insert = "\n";
this->insert(pos_comma+1, insert);
it = pos_comma+1;
break;
}
shortened to
case(COMMA):{
this->insert(pos_comma+1, "\n");
it = pos_comma+1;
break;
}
For those variables that you create and don't intend to change I would definitely use const to let know the compiler about your intention and let it actually check that you don't accidentally violate that.
const std::regex var = std::regex(R"((\".+?\".*?(?=\{|\[|\,|\]|\}))|(\d+?))");
const regex_pos pos_tab = findRegexFirstPosition(it, var);
const std::string insert = generateSpaces(depth);
insertColonSpaces
Basically you are replacing one string with another.
This question might give some hints (e. g. boost::algorithm::replace_all_copy or using std::regex).
https://stackoverflow.com/questions/5343190/how-do-i-replace-all-instances-of-a-string-with-another-string
generateSpaces
Unless I have overlooked something it is the same as
return std::string(l * 4, ' ');
Check std::string "fill" constructor here:
http://www.cplusplus.com/reference/string/string/string/
for() { break; }
This loop
for (unsigned i=0; i<m.size(); ++i) {
at = m.position(i);
l = m[i].str().size();
break;
}
looks more like a simple condition
if ( m.size() > 0 ) {
at = m.position(0);
l = m[0].str().size();
} | {
"domain": "codereview.stackexchange",
"id": 20628,
"tags": "c++, c++11, json, formatting"
} |
Accessing 3D map data saved in bt file/formats to save octomap | Question:
I have been able to build a 3D map using octomap and have saved the same as a .bt file. I have to work on a localisation algorithm which requires the coordinates of free and occupied cells(especially the boundary cells of any occupied region) of the saved map.
Is it possible to extract the information like coordinates and no. of free and occupied cells from a bt file? If yes, how? If not, then what should be the accessible file format and how should I save my map in that format?
I am using ROS electric in Ubuntu 11.10. Please help.
Originally posted by shubh991 on ROS Answers with karma: 105 on 2012-06-25
Post score: 4
Original comments
Comment by ChickenSoup on 2012-07-04:
@shubh991 How do you save the map into a .bt file when octomap_server is running?
Comment by ChickenSoup on 2012-07-04:
ok found it : rosrun octomap_server octomap_saver map.bt
Answer:
bt ("Bonsai Tree") files are a very compact binary serialized representation of OctoMap octree, and thus not directly human readable.
You can easily write a program that reads in the .bt, creates an OcTree from it and then traverse the full tree with an octree leaf_iterator to do whatever you want with the coordinates or voxels. For details see the doxygen documentation of OctoMap: http://octomap.sourceforge.net/doxygen/index.html
You can also have a look at the file "bt2vrml.cpp" in your octomap package. It reads in a .bt file and creates a VRML file of the occupied voxels (which will result in a quite large file).
Originally posted by AHornung with karma: 5904 on 2012-06-26
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by shubh991 on 2012-06-26:
thanx man...that seems helpful to proceed with. | {
"domain": "robotics.stackexchange",
"id": 9941,
"tags": "ros, octomap, octomap-mapping"
} |
Why do we care about the multiplicity of poles and zeros in rational Z-transforms? | Question: In the DSP class I'm taking, a lot of the questions ask me to list the multiplicity of a pole or zero in a rational Z-transform. For example, the multiplicity of the zero at $z=1$ in :$\frac{(z-1)^2}{(z-5)}$ is $2$.
What knowledge do we gain by knowing the multiplicity of some pole or zero?
Of course, if we know the multiplicity of all the poles and zeros, we can describe the z-transform of a rational function, but this is not a very satisfying answer.
Answer: Multiplicity has important relevance when it comes to filter design. For example, multiplicity decides how sharp your attenuation is near zeros. Take two simple high-pass filters as an example:
$$
H_1(z) = 1-z^{-1}\\
H_2(z)=(1-z^{-1})^2=1-2z^{-1}+z^{-2}
$$
The transfer function $H_1(z)$ has only one zero at $z=1$ but $H_2(z)$ has two zeros at $z=1$. This means that the Fourier transform $H_2(e^{j\omega})$ will provide better attenuation at $\omega=0$.
This is shown by a simple MATLAB example as below
h1=[1 -1];
h2 = [1 -2 1];
[H1,w]=freqz(h1);
[H2,w]=freqz(h2);
figure(1)
plot(w,20*log10(abs(H1)))
hold on
plot(w,20*log10(abs(H2)))
legend('H1','H2') | {
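For readers without MATLAB, the same comparison can be sketched in plain Python using only the standard library: evaluate both frequency responses near $\omega=0$ and note that the double zero's magnitude is exactly the square of the single zero's, hence the sharper attenuation.

```python
import cmath

def H1(w):
    """Single zero at z = 1: H1(z) = 1 - z^-1, evaluated on the unit circle."""
    return 1 - cmath.exp(-1j * w)

def H2(w):
    """Double zero at z = 1: H2(z) = (1 - z^-1)^2."""
    return (1 - cmath.exp(-1j * w)) ** 2

for w in (0.1, 0.01, 0.001):
    print(f"w={w}: |H1|={abs(H1(w)):.2e}  |H2|={abs(H2(w)):.2e}")
```

Near $\omega=0$, $|H_1|$ shrinks roughly like $\omega$ while $|H_2|$ shrinks like $\omega^2$, which is exactly the extra attenuation the higher multiplicity buys.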
"domain": "dsp.stackexchange",
"id": 8663,
"tags": "discrete-signals, z-transform"
} |
where can I download the API documentation of ROS | Question:
I just want to download the API documentation of ROS...
Originally posted by C.J. on ROS Answers with karma: 1 on 2013-03-28
Post score: 0
Original comments
Comment by hiranya on 2013-03-28:
I think it is always updating, so it's better to refer to the online version (fuerte, groovy). You can get a summarized version from here.
Answer:
A ROS API documentation per se doesn't exist.
ROS is composed of many, many different modules (I suggest you check the original PDF), each of which does a very small part of the work.
The nearest thing to a baseline ROS API is roscpp which is the foundation of ROS.
In this page (as on most other pages) on the top right you'll find a link Code API which will take you to the relevant doxygen.
Good luck learning ROS!
Originally posted by Claudio with karma: 859 on 2013-03-28
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by C.J. on 2013-03-28:
thank you !!! | {
"domain": "robotics.stackexchange",
"id": 13579,
"tags": "ros"
} |
Why is there so much methane in space? | Question: In school I learned that methane is organic matter and that it is a possible product of crude oil refinement. However, recently I read that Uranus has huge methane storms and the Saturn moon Titan even has lakes consisting only of methane or ethane.
Now, I'm wondering, how does all that methane get out there?
Is it the result of organic processes or is it created in the cores of suns like other elements? Do we even know?
Methane is also a pretty complex material. Do we know sources of more complex molecules other than methane in such quantities? Except life on earth?
Answer: Often for molecules to form in interstellar space, dust is used as a catalyst. The reason is that in typical interstellar environments, densities are so immensely low that even for just two atoms to meet, the probability is so small that formation time scales are very long. For 3+ atoms, the chance decreases rapidly. Instead, an atom can stick to a dust grain and wait for ages until other atoms stick. The atoms slowly "crawl" around on the surface of the dust grain, eventually meet and make bonds. If the formation of a bond releases energy (is exothermic), the molecule can be ejected from the grain surface. This process is called adsorption.
The dust also helps shielding the molecules from stellar radiation which would otherwise easily destroy them. This is why molecular clouds are also very dusty. But actually UV irradiation helps with the formation of very complex molecules by ionizing less complex molecules that subsequently can make bonds with other atoms and molecules.
If the environments are extremely dense, as when a star dies and ejects its gas either as a supernova or a planetary nebula, molecules can also form. This is probably also how the dust itself is formed, although it might also be formed later on (this is currently debated; the problem is that supernovae are so powerful that their shock waves tend to destroy dust shortly after it forms, and planetary nebulae are created from stars that live so long that they can't explain the abundance of dust in the very early Universe where they wouldn't have had the time to live their lives).
As Stan Liou and LocalFluff says, methane is actually an "easy" molecule to form, both because it's rather simple, and because its constituents, hydrogen and carbon, are the most and the fourth most abundant elements in the interstellar medium, respectively.
In fact, far more complex molecules are regularly found in interstellar space, as can be seen on this list. | {
"domain": "astronomy.stackexchange",
"id": 796,
"tags": "abiogenesis, astrochemistry"
} |
Why is $(\log(n))^{99} = o(n^{\frac{1}{99}})$ | Question: I am trying to find out why $(\log(n))^{99} = o(n^{\frac{1}{99}})$. I tried to evaluate the following limit, expecting it to be zero.
$$
\lim_{n \to \infty} \frac{ (\log(n))^{99} }{n^{\frac{1}{99}}}
$$
But I'm not sure how I can reduce this expression.
Answer: $\qquad \begin{align}
\lim_{x \to \infty} \frac{ (\log(x))^{99} }{x^{\frac{1}{99}}}
&= \lim_{x \to \infty} \frac{ (99^2)(\log(x))^{98} }{x^{\frac{1}{99}}} \\
&= \lim_{x \to \infty} \frac{ (99^3) \times 98(\log(x))^{97} }{x^{\frac{1}{99}}} \\
&\vdots \\
&= \lim_{x \to \infty} \frac{ (99^{99})\times 99! }{x^{\frac{1}{99}}} \\
&= 0
\end{align}$
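As a numerical sanity check, here is a small Python sketch I added; it works with the logarithm of the ratio at $x = e^t$ to avoid floating-point overflow, and shows how slowly the ratio dies off:

```python
import math

# ln of the ratio (ln x)^99 / x^(1/99), evaluated at x = e^t:
# ln(ratio) = 99*ln(t) - t/99.  It peaks near t = 99^2 = 9801
# and only afterwards heads to -infinity, i.e. the ratio -> 0.
def log_ratio(t):
    return 99 * math.log(t) - t / 99

print(log_ratio(10))      # positive: the ratio is still enormous
print(log_ratio(9801))    # near the maximum
print(log_ratio(10**7))   # hugely negative: the ratio has collapsed toward 0
```

The convergence is so slow that evaluating the ratio itself at any representable floating-point $x$ would still give a huge number, which is why the check is done in log space.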
I used L'Hôpital's rule in each step, assuming the natural logarithm. | {
"domain": "cs.stackexchange",
"id": 1049,
"tags": "asymptotics, landau-notation"
} |
Ambiguity with definitions of vector potential | Question: In one of my books (the great Baez & Muniain's "Gauge fields, knots and gravity"), the vector potential is defined as an $End(E)$-valued 1-form, with $End(E)$ the endomorphisms of the fiber $E$. So, with $e_i$ as a basis of sections in $E$, the basis of sections in $End(E)$ is $e_i \otimes e^j$, then $$A(x) = A_{\mu j}^{i} (e_i \otimes e^j)\wedge dx^{\mu}.$$ Fine. The covariant derivative of a section is then:
$$(D_{\mu} s)^i = \partial_{\mu}s^i + A_{\mu j}^{i}s^j$$
But in QFT books I find that vector potential is defined as $A(x) = A_{\mu}^{i} \tau_{i} dx^{\mu}$, with $\tau$ the generator of the group (that is, basis of sections), and covariant derivative as:
$$(D_{\mu} s)^i = \partial_{\mu}s^i + \tau^{ai}{}_{j}A^{a}_{\mu}s^j$$
(summation in repeated indexes). I don't know how to reconcile these two definitions. The vector potential should have 3 indexes: one for the spacetime (base manifold) and two for the $End(E)$, so why are QFT books showing a vector potential with one spacetime and only one index for the fiber?
Answer: This is because the generators $\tau_a$ of the groups are themselves matrices, they carry two indices: $\tau_a = (\tau_a)_j^i (e_i \otimes e^j)$. In other words, what you call $$A_{\mu j}^i (e_i \otimes e^j)$$ in the first part of your question should be identified with $$A_\mu^a \tau_a$$ in the second part. The transition between the two expressions can be made as follows: $$A_\mu = A_{\mu j}^i (e_i \otimes e^j)= A_\mu^a (\tau_a)_j^i (e_i \otimes e^j) \, . $$
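As a concrete bookkeeping check, here is a small pure-Python sketch; choosing the Pauli matrices as the generators and random one-index components are both illustrative assumptions:

```python
import random

# (tau_a)^i_j: here the Pauli matrices, an illustrative choice of generators
tau = [
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
]

# hypothetical one-fiber-index components A_mu^a (mu = 0..3, a = 1..3)
A = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]

# three-index components: A_mu{}^i{}_j = sum_a A_mu^a (tau_a)^i_j
A3 = [[[sum(A[mu][a] * tau[a][i][j] for a in range(3))
        for j in range(2)]
       for i in range(2)]
      for mu in range(4)]

# one spacetime index and two fiber indices, as in the three-index notation:
print(len(A3), len(A3[0]), len(A3[0][0]))  # 4 2 2
```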
To summarize, you can describe your gauge field with two or three indices, the relation between the two notations being $$A_{\mu j}^i = A_\mu^a (\tau_a)_j^i \, . $$ | {
"domain": "physics.stackexchange",
"id": 42736,
"tags": "differential-geometry, gauge-theory, field-theory, lie-algebra, yang-mills"
} |
Why is the speed of light 299,792,458 meters/sec? | Question: Ok, I am majoring in physics (4th year) and I never understood this fundamental (kinda) question. Maybe I haven't explored it enough.
For example, why does it take 8min20sec for the light from the sun to get to us?
I know the answer to this question on a 'surface' level. The sun is 1 AU away, c = 3E8 m/s, and t = d/v gives approx 8 min 20 sec.
my question is on a deeper level.
Say you could "ride a photon" (I KNOW THIS IS IMPOSSIBLE, but just say you could). Or a better question: what does a photon experience? The photon, as per my understanding, would leave the sun and (if it is on the right trajectory) hit the Earth instantaneously. A photon leaving Alpha Centauri would see the universe all at once, in an infinitesimally small unit of time (if directed out to space).
If a photon sees everything all at once, why do we perceive it to have a speed? I am sure this has something to do with frames of reference, special relatively, Lorentz transforms? but just seems strange. why is the speed of light finite to us... if it was infinite would this be problematic?
Answer: Speed of light being finite is one of the fundamentals of our Universe.
If it were infinite, this would have major implications for causality.
Besides, in non-quantum physics, light is just an electromagnetic wave. The electromagnetic field is described by Maxwell's equations, which predict that the speed $c$ of electromagnetic waves propagating through the vacuum depends on the dielectric permittivity $ε_0$ and the magnetic permeability $μ_0$ by the equation $c = {1\over\sqrt{ε_{0}μ_{0}}}$, so you cannot have an infinite speed of light unless the electric permittivity or the magnetic permeability were zero, which in turn would do all sorts of odd things to electromagnetic attraction (and thus to the existence of matter beyond elementary particles). | {
"domain": "astronomy.stackexchange",
"id": 263,
"tags": "light, speed"
} |
Negative sign in rotation operator again | Question: In Wikipedia's page on the rotation operator, section "In relation to the orbital angular momentum", they write
$$
R(z,\varphi) = \exp((-i/\hbar)\, \varphi L_z)
$$
where $\varphi$ is the angle being rotated through. My Schaum's textbook also has the negative sign.
However, this website does not have the negative sign. Its argument for deriving the rotation operator uses Taylor series. They say if you write
$ e^{iL_z \varphi / \hbar} \cdot f(\theta_0, \phi_0, r_0)$ (i.e. the operator acting on a particular point of the function $f$) and expand in Taylor series, you get:
\begin{align}
e^{iL_z \varphi / \hbar} \cdot f(\theta_0, \phi_0, r_0)&=1 \cdot f(\theta_0, \phi_0, r_0) \\
&+ (1/1!) \varphi^1 (d/d\phi) \cdot f(\theta_0, \phi_0, r_0) \\
&+ (1/2!) \varphi^2 (d^2/d\phi^2) \cdot f(\theta_0, \phi_0, r_0) \\
&+ (1/3!) \varphi^3 (d^3/d\phi^3) \cdot f(\theta_0, \phi_0, r_0) \\
&+ (1/4!) \varphi^4 (d^4/d\phi^4) \cdot f(\theta_0, \phi_0, r_0) \\
&+ \cdots
\end{align}
which is the Taylor expansion for $ f(\theta_0, \phi_0 + \varphi, r_0) $,
so the expression $ e^{iL_z \varphi / \hbar} $ has successfully rotated the function's point through an angle $\varphi$.
That makes sense to me. But if you tried it with the negative sign as Wikipedia
and Schaum do, the expression is instead:
\begin{align}
e^{-iL_z \varphi / \hbar} \cdot f(\theta_0, \phi_0, r_0)&=1 \cdot f(\theta_0, \phi_0, r_0) \\
&+ (1/1!) (-\varphi)^1 (d/d\phi) \cdot f(\theta_0, \phi_0, r_0) \\
&+ (1/2!) (-\varphi)^2 (d^2/d\phi^2) \cdot f(\theta_0, \phi_0, r_0) \\
&+ (1/3!) (-\varphi)^3 (d^3/d\phi^3) \cdot f(\theta_0, \phi_0, r_0) \\
&+ (1/4!) (-\varphi)^4 (d^4/d\phi^4) \cdot f(\theta_0, \phi_0, r_0) \\
&+ \cdots
\end{align}
the Taylor expression for $ f(\theta_0, \phi_0 - \varphi, r_0) $. So that doesn't give you a rotation by $\varphi$ but by $-\varphi$, right?
So are the Wikipedia and Schaum formulation of the rotation operator wrong?
--
The linked question by Omry did not answer this question; the answer there suggested that it was okay to have the negative sign.
Answer: Let's make this concrete by using a $\text{spin-}\frac 1 2$ state in the $z$ direction, where we get to use the Pauli matrices, usually written as$$\sigma_x = \left[\begin{array}{cc}0&1\\1&0\end{array}\right]; ~~~\sigma_y = \left[\begin{array}{cc}0&-i\\i&0\end{array}\right]; ~~~\sigma_z = \left[\begin{array}{cc}1&0\\0&-1\end{array}\right]$$In this convention, the unit vector in the $+x$ direction is $\sqrt{\frac 1 2} \left[\begin{array}{c}1\\1\end{array}\right]$ while the unit vector in the $+y$ direction is $\sqrt{\frac 1 2} \left[\begin{array}{c}1\\i\end{array}\right]$, these having eigenvalue $+1$ for those operators. Note that the question of which one is "+y" comes down to which square root of negative 1 we choose to be "i", or, equivalently, what convention you take on the $\sigma_y$ matrix. The $+i$ bottom-left convention, however, is attested by both Wikipedia and MathWorld, and I think it's probably also in Griffiths' Introduction to Quantum Mechanics, so let's go with this convention.
Now we expect a rotation by $+\pi/2$ around the $z$-axis will turn the $+x$ direction into $+y$. So we form the rotation matrix for both signs to see which is "right". This is a bit tricky as $e^{i\pi} = e^{-i\pi} = -1$, and multiplying a wavefunction by $-1$ (or any phase factor $e^{i\phi}$), in quantum mechanics, doesn't change any observables: $$\langle\psi|\hat A|\psi\rangle = \langle\psi|e^{-i\phi} \hat A e^{i\phi}|\psi\rangle = \langle e^{i\phi}\psi| \hat A | e^{i\phi} \psi\rangle = \langle\psi'|\hat A|\psi'\rangle.$$ So we see that in the following expression, we should not use the Pauli matrix $\sigma_z$ willy-nilly but we should instead use $\frac 1 2 \sigma_z$ for our $\mathbf n \cdot \mathbf J / \hbar$ term, since after the exponent reaches $\pi$ all of our observables are the same as they used to be and a $2\pi$ rotation has been performed. So we write $$R^\pm_z(\theta) = \exp(\pm ~ i~\frac \theta 2~\sigma_z)=\left[\begin{array}{cc}e^{\pm~i~\theta/2}&0\\0&e^{\mp~i~\theta/2}\end{array}\right]$$and we find that for a rotation by $\pi/2$ this becomes$$R^\pm_z(\pi/2) = \sqrt{\frac 1 2} \left[\begin{array}{cc}1 \pm i&0\\0&1\mp i\end{array}\right].$$Operating on our $+x$ vector gives: $$R^\pm ~\hat x = \frac 1 2 \left[\begin{array}{c}1\pm i\\1\mp i\end{array}\right] = \frac {1\pm i} 2~\left[\begin{array}{c}1\\(1\mp i)^2/2\end{array}\right] = e^{\pm~i~\pi/4}~\left[\begin{array}{c}1\\\mp i\end{array}\right]$$Again, that phase prefactor does not change any observables so it is irrelevant. Therefore we find out that the correct way to rotate the angle forward by an amount $\theta$ is to use the negative exponent:$$R_z(\theta) = \exp\left(-i~\theta~\frac{\sigma_z}{2}\right).$$Wikipedia and Schaum are therefore right, consistent with their conventions. It is possible that the PhysicsPages result is also consistent with its own conventions, but it would probably require that for them, $L_z = -\sigma_z/2$ or their $f$ has a special form or something. 
It's certainly not 100% obvious that they're doing the right thing. Otherwise they have the "+ is clockwise" rotation convention, which is totally fine in normal physical terms but it is not the right-hand rule that you learned in your undergraduate coursework. | {
"domain": "physics.stackexchange",
"id": 23340,
"tags": "quantum-mechanics, angular-momentum, rotation, conventions"
} |
Is the observation of an observer local in general relativity? | Question: Taking Schwarzschild spacetime as an example, an observer at infinity can observe events that happen in his neighbourhood at infinity and measure the corresponding physical quantities. I want to know whether the observer at infinity can observe events that happened at finite $r$.
Answer: No, the Schwarzschild observer is not a local observer. The observer at infinity is an idealised observer. An observer actually at an infinite distance would be useless because it would take infinite time for them to receive information from any finite $r$. To explain this observer, we need to invoke the concept of limits from calculus. We put the observer at a finite $r$ which is sufficiently far from $r=0$ that spacetime is almost flat, and then in the limit as $r\to\infty$ spacetime curvature approaches zero, and our finite observer approaches the observer at infinity.
However, the Schwarzschild observer does not directly observe events. Instead, they correlate events observed by local observers. A good description of this procedure is given in the Wikipedia article on Gullstrand–Painlevé coordinates. This article first explains Schwarzschild coordinates so that it can then describe how Gullstrand–Painlevé coordinates differ from them.
Schwarzschild coordinates
A Schwarzschild observer is a far observer or a bookkeeper. He does not directly make measurements of events that occur in different places. Instead, he is far away from the black hole and the events. Observers local to the events are enlisted to make measurements and send the results to him. The bookkeeper gathers and combines the reports from various places. The numbers in the reports are translated into data in Schwarzschild coordinates, which provide a systematic means of evaluating and describing the events globally. Thus, the physicist can compare and interpret the data intelligently. | {
"domain": "physics.stackexchange",
"id": 67816,
"tags": "general-relativity, black-holes, observers"
} |
Multi Agent Deep Reinforcement Learning for continuous and discrete action | Question: I am looking to have a cooperative multi agent reinforcement learning framework where one agent has a discrete action space and another agent has a continuous action space. Is there a way to do this as most papers I have seen will only handle one or the other.
Answer: Seems like what you are looking for is Parametrized RL to train an agent for a Parametrized Markov Decision Process. You can look up both terms to find courses/readings about it.
Anyway, one existing RL framework for it is the Multi-Pass Parametrized DQN (MP-DQN), which is proposed in this paper, and if you Google enough you can even find the author's dissertation on MP-DQN, whose preliminaries section might help you a lot. In short, the agent uses an Actor-Critic architecture in which the Actor predicts the continuous parameters of all actions given the current state, and the Critic predicts the action-value of each action given the state concatenated with the predicted values of the continuous parameters. The final discrete action is chosen by sampling over the action-values or using argmax.
In P-DQN, the state concatenated with all the continuous parameters' values is passed to the Critic. However, it has been shown that by passing all continuous parameters at once to the Critic, the unused actions' parameters will affect the gradient and thus the Actor's parameters, which is not desired. Therefore, in MP-DQN the parameters of each action are passed separately into the Critic (unrelated actions' parameters are zeroed). With good design, you can implement it so that all the parameters and the state are passed in a single pass (multi-pass), as shown in MP-DQN. Other alternatives for Parametrized RL are hybrid-PPO and parametrized DDPG.
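To make the multi-pass masking concrete, here is a minimal pure-Python sketch of how the per-action critic inputs can be assembled; the sizes and the random stand-ins for the actor's output are illustrative assumptions, not values from the paper:

```python
import random

# Hypothetical sizes: 3 discrete actions, each with a 2-dim continuous parameter.
N_ACTIONS, PARAM_DIM, STATE_DIM = 3, 2, 4

state = [random.random() for _ in range(STATE_DIM)]
# stand-in for the actor's output: one parameter vector per discrete action
all_params = [[random.random() for _ in range(PARAM_DIM)] for _ in range(N_ACTIONS)]

def critic_input(k):
    """Multi-pass trick: zero every action's parameters except action k,
    so unrelated parameters cannot contribute gradients through the critic."""
    masked = [p if a == k else [0.0] * PARAM_DIM
              for a, p in enumerate(all_params)]
    flat = [x for vec in masked for x in vec]
    return state + flat

inputs = [critic_input(k) for k in range(N_ACTIONS)]
print(len(inputs), len(inputs[0]))  # 3 passes, each of length 4 + 3*2 = 10
```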
Finally, based on my personal experiments, I still have difficulty controlling the scale of the predicted continuous parameter values so that they stay in a predefined range. There are 2 ways to do it: the first is using softmax, and the second is gradient inverting (changing the gradients' sign depending on the current predicted parameter values). However, I still eventually produced an Actor that outputs only extreme values (either the upper or lower bound) of the continuous parameters. | {
"domain": "ai.stackexchange",
"id": 3022,
"tags": "deep-learning, deep-rl"
} |
2,4 dihydroxy benzoate formation | Question: When resorcinol and potassium hydrogen carbonate are added to each other in acidic conditions together with water, 2,4-dihydroxybenzoate is formed.
What kind of reaction is this, and what is a possible mechanism for it?
Answer: This reaction has been studied in basic conditions by Barbarossa et al. [1] using $\ce{KOH}$, $\ce{KHCO3}$ and $\ce{K2CO3}$ under $\ce{CO2}$.
Two mechanisms are offered in the paper, the first one looks to be more relevant for the example the OP quoted.
References
Barbarossa, V.; Barzagli, F.; Mani, F.; Lai, S.; Vanga, G. The Chemistry of Resorcinol Carboxylation and Its Possible Application to the CO₂ Removal from Exhaust Gases. Journal of CO₂ Utilization 2015, 10, 50–59. https://doi.org/10.1016/j.jcou.2015.04.004. | {
"domain": "chemistry.stackexchange",
"id": 11653,
"tags": "organic-chemistry"
} |
Parse error at "BOOST_JOIN" | Question:
I use Ubuntu 16.04 and Kinetic Kame; when I try to build a package, it returns:
usr/include/boost/type_traits/detail/has_binary_operator.hp:50: Parse error at "BOOST_JOIN"
Does someone know something about it?
Originally posted by inflo on ROS Answers with karma: 406 on 2016-05-06
Post score: 9
Original comments
Comment by The Martin on 2016-06-01:
I have the same problem. According to this thread at stackoverflow it might be because of mismatching Boost and QT versions.
Comment by Markus Bader on 2016-06-27:
I have the same problem as well :-(
Comment by pmarinplaza on 2016-10-05:
Hello, Did you fix it?
Comment by zhaoyang on 2016-10-11:
i have the same problem when i try rgbdslam-v2.;-(
ubuntu 16.04
Answer:
I just fixed it by adding, as "The Martin" said:
In qnode.cpp:
#ifndef Q_MOC_RUN
#include <ros/ros.h>
#include "../include/testpackage/qnode.hpp"
#include <ros/network.h>
#endif
Originally posted by pmarinplaza with karma: 330 on 2016-10-05
This answer was ACCEPTED on the original site
Post score: 6
Original comments
Comment by bonzi on 2017-01-19:
Did you add a separate qnode.cpp file, or the file was exisiting, and you edited it??
Comment by Tones on 2017-07-18:
As an addition: The #ifndef Q_MOC_RUN .. #endif statement has to encapsulate all ros- and boost-related includes in your code. It should not encapsulate Qt or STL-includes. | {
"domain": "robotics.stackexchange",
"id": 24577,
"tags": "ros"
} |
Non-Perturbative effects QCD and the Standard Model? | Question: I read in an article that the Standard Model leaves unanswered questions about the non-perturbative effects of the QCD.
I have basic knowledge of perturbative and non-perturbative QCD. Could you please tell me what these effects and unanswered questions are?
Sorry in advance if my question is too broad.
Answer: It seems that one of the questions is the strong CP problem in QCD. It reflects the non-trivial vacuum structure of QCD, which is composed of different states labelled by a discrete number $n$ with weight $e^{i\theta n}$. This is reflected in the appearance of the term $S= \theta\int d^{4}x\, G_{\mu\nu}^{a}\tilde{G}^{\mu\nu}_{a}$, with $G_{\mu\nu}^{a}$ being the gluon field strength tensor and $\tilde{G}$ its dual. This term, being invisible within perturbation theory (since it can be expressed as the integral of a total derivative), manifests itself non-perturbatively. In particular, the electric dipole moment of the neutron is proportional to $\theta$.
From experiment we know that $|\theta| < 10^{-10}$. But this smallness isn't expected within QCD, in which all dimensionless quantities are predicted to be of order one...
"domain": "physics.stackexchange",
"id": 42962,
"tags": "particle-physics, standard-model, quantum-chromodynamics, instantons, non-perturbative"
} |
Energy Loss in Inelastic Collision | Question: If we take a tennis ball and drop it from a certain height above the ground on Earth, it collides inelastically, with the maximum height it can reach decreasing after each collision due to loss of energy to the atmosphere, to sound, and to changes in the vibrational energy of both the ball and the floor.
Now consider the same experiment in vacuum. The lost energy shouldn't escape to the atmosphere to heat it up, or to contribute to producing sound; it can only change the internal energies of the floor and the ball itself. Is the amount of energy loss the same? If yes, then can we say that dropping a ball in vacuum heats it up more than it could've heated up in an atmosphere full of air? This would be the case if the maximum possible height reached by the ball after each collision would remain the same.
Answer:
In an inelastic collision, does a ball get hotter if it is dropped in a vacuum?
Short answer: Yes, the ball gets hotter if it cannot lose energy to sound. All the kinetic energy lost to the ball's internal pressure wave during impact is converted into thermal energy. In the case of impact in an air environment, a portion of that pressure wave energy is lost to sound.
Elaboration
Impact: Upon impact, the kinetic energy of the ball is converted into the potential energy of lattice compression.
Elastic Collision: In a fully elastic collision, all of that lattice compression recoils, and repels the ball back upwards at the same speed.
Inelastic Collision: In an inelastic collision, a portion of the compressive energy is transmitted through the ball's material. In effect, the energy associated with the pressure wave has disconnected from the rebound kinetic energy of the ball, resulting in its reduced height on the bounce.
Reflection: The pressure wave travels through the material and strikes a surface on the other side of the ball. The pressure wave causes the surface of the material to first bulge outward and then retract. The recoil regenerates a pressure wave going the opposite direction, which will continue to a surface on the other side of the ball, where it will bounce, etc. This continues until the pressure wave disperses its cohort of oscillating kinetic-potential energy into thermal energetic disorder.
Sound Production: If air surrounds the ball, an increment of the pressure wave's energy is transmitted to the air when the pressure wave causes the surface to bulge and retract. This causes compression and decompression of the air surrounding the ball, producing the sound we hear.
Vacuum Reflection: When a vacuum surrounds the ball, the surface deforms outward and recoils back inward due to the impact of the pressure wave. But, all the energy striking the surface reflects back into the bulk material of the ball. No kinetic and potential energy is lost to the surrounding air.
Multiple Reflections: The pressure wave bounces from surface to surface. The waves produce complex surface distortions due to their variable path length between their points of reflection and their opposing spherical inner surface. The variable path length produces a dispersion of the pressure waves' energy, resulting in a rapid decay into random motion.
Thermal Energy Decay: The pressure waves eventually fully decay into randomly directed momentum, which is recognized as thermal energy.
1) The major decay/dispersion effect is due to the pressure waves' loss of coherence with each reflection. The variable path lengths between generation and reflection of each segment of the pressure wave result in a non-coherent phase relationship between the various segments of the pressure wave. That is, the pressure waves hit and reflect at different times, interfere, and quickly disperse the coherent pressure wave energy into random directions. The conversion of pressure wave energy into thermal energy occurs much faster than would be expected by the rate of loss of coherent to random energy in an elastic material. Note that in a resonant elastic structure, like a bell or gong, where reflections reinforce and stay coherent, the sound is heard much longer than in a non-resonant structure like a spherical ball.
2) A minor loss of pressure wave energy occurs in a metallic or stiff/elastic materials due to inelastic losses during inter-atomic collisions. This occurs due to the off-center force transmission during atom collision.
Conclusion: Vacuum vs. Air Thermal Energy Conversion: An increment of energy is lost to the creation of a pressure wave internal to a stiff-elastic ball after the impact of the ball-surface in an inelastic collision. The pressure wave will convert completely into thermal energy if it cannot lose energy to the production of air vibration/sound. | {
"domain": "physics.stackexchange",
"id": 59851,
"tags": "newtonian-mechanics, momentum, conservation-laws, collision"
} |
Enum to deserialize HTML sizes from JSON with serde | Question: I added an enum for my webscraper to deserialize data from a JSON field that represents an HTML image size, which can either be an unsigned int like 1080 or a string like "100%":
use serde::Deserialize;
use std::fmt::{Display, Formatter};
#[derive(Clone, Debug, Eq, PartialEq)]
pub enum HtmlSize {
String(String),
Int(u64),
}
impl Display for HtmlSize {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
Self::String(string) => write!(f, r#""{}""#, string),
Self::Int(int) => write!(f, "{}", int),
}
}
}
impl<'de> Deserialize<'de> for HtmlSize {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
deserializer.deserialize_any(HtmlSizeVisitor)
}
}
struct HtmlSizeVisitor;
impl<'de> serde::de::Visitor<'de> for HtmlSizeVisitor {
type Value = HtmlSize;
fn expecting(&self, formatter: &mut Formatter) -> std::fmt::Result {
write!(
formatter,
"an unsigned integer or a string representing a width"
)
}
fn visit_u64<E: serde::de::Error>(self, n: u64) -> Result<HtmlSize, E> {
Ok(HtmlSize::Int(n))
}
fn visit_str<E: serde::de::Error>(self, string: &str) -> Result<HtmlSize, E> {
Ok(HtmlSize::String(string.to_string()))
}
}
Example usage:
#[derive(Clone, Debug, Deserialize, Eq, PartialEq)]
pub struct ImageInfo {
id: String,
alt: String,
height: HtmlSize,
src: String,
width: HtmlSize,
caption: String,
credit: String,
}
I did not find anything readily available like this, so I implemented the above HtmlSize with its corresponding visitor. Is this a sensible implementation or did I reinvent the wheel? Can the implementation be improved?
Answer: You don't actually need to implement your own logic; you can use #[serde(untagged)]:
#[derive(Clone, Debug, Eq, PartialEq, Deserialize)]
#[serde(untagged)]
pub enum HtmlSize {
String(String),
Int(u64),
}
#[serde(untagged)] will deserialize into whichever enum variant matches the incoming data. | {
"domain": "codereview.stackexchange",
"id": 44831,
"tags": "json, rust, web-scraping"
} |
Error Handling When Using Dictionary | Question: I have set up a Dictionary that calls on a class to fill a DataGridView via SQL statements. The problem is that in two (out of 5) instances the value passed HAS to be an integer, but the value comes from a textbox, so it is being passed as a string. Before I set these Dictionaries up I was just using a switch statement that can be seen here. In my switch statement I simply added:
int n;
bool isNumber = int.TryParse(textBox.Text, out n);
if (!isNumber)
{
MessageBox.Show("Input must be an integer");
break;
}
to the two cases that needed the value to be an integer type.
Any idea on how I can add error handling to this? Preferably a message box that just says "value needs to be an integer".
private void findScriptsQueryButton_Click(object sender, EventArgs e)
{
SqlConnection connection = new SqlConnection();
string value = this.valueTextBox.Text;
connection.ConnectionString = GetSqlConnection[serverComboBox.SelectedItem.ToString()];
findScriptsDataGrid.DataSource = GetDataSource[findByComboBox.SelectedItem.ToString()](connection, value).Tables[0];
}
private static TableAdapters.FindScript dataSource = new TableAdapters.FindScript();
private static readonly Dictionary<String, Func<SqlConnection, String, DataSet>> GetDataSource = new Dictionary<String, Func<SqlConnection, String, DataSet>>()
{
{"Target VDN", dataSource.FillByTargetVDN},
{"Skill Group", dataSource.FillBySkillGroup},
{"Translation Route Pool", dataSource.FillByTranslationRoutePool},
{"Name", dataSource.FillByName},
{"Label", dataSource.FillByLabel}
};
private static readonly Dictionary<String, String> GetSqlConnection = new Dictionary<String, String>()
{
{"SERVER01", ConfigurationManager.ConnectionStrings["csS01"].ConnectionString},
{"SERVER02", ConfigurationManager.ConnectionStrings["csS02"].ConnectionString}
};
Answer: In your data structure include a validator (this should probably be a Predicate<String>); most of the predicates will just return true, but the two that require integers will have a nontrivial validator that checks for integer.
So I guess the type should be Dictionary&lt;String, Tuple&lt;Func&lt;SqlConnection, String, DataSet&gt;, Predicate&lt;String&gt;&gt;&gt;
"domain": "codereview.stackexchange",
"id": 8353,
"tags": "c#, beginner, winforms, hash-map, error-handling"
} |
What is the counting argument for the number of elementary operations required for a random function? | Question: What is the counting argument for the following statement (classical)?
"A random function on n bits requires $e^{\Omega(n)}$ elementary operations."
It appears in the introduction of PRL 116, 170502 (2016): Efficient Quantum Pseudorandomness.
Is it that since there are infinitely many n-bit boolean functions, implementing one such randomly chosen function using elementary operations would require an exponentially large number? (I'm assuming that elementary operations here mean two-bit universal gates.)
Also, why $\Omega(n)$ and not $O(n)$?
Thanks in advance.
Answer: We say that a function $f(n)$ is $O(n)$ if it is bounded above by a constant multiple of $n$ asymptotically, which is not to be confused with a function $f(n)$ being $\Omega(n)$, which means that $f(n)$ is bounded below by a constant multiple of $n$ asymptotically.
Also, there are $2^{2^n}$ boolean functions on $\{0,1\}^n$, since each boolean function $f:\{0,1\}^n\rightarrow \{0,1\}$ is in one-to-one correspondence with a subset $S$ of $\{0,1\}^n$ via the identification $f^{-1}(1)=S$, and $\{0,1\}^n$ has $2^n$ elements.
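For small $n$ the count can be checked by brute-force enumeration (a Python sketch): a boolean function is exactly a truth table, i.e. one output bit per input string, so there are $2^{2^n}$ of them:

```python
from itertools import product

def count_boolean_functions(n):
    """Enumerate all f: {0,1}^n -> {0,1}: one output bit per input string."""
    inputs = list(product([0, 1], repeat=n))            # the 2^n input strings
    tables = list(product([0, 1], repeat=len(inputs)))  # one truth table per function
    return len(tables)

for n in (1, 2, 3):
    print(n, count_boolean_functions(n))  # 4, 16, 256 — i.e. 2**(2**n)
```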
Specifying a typical such function therefore takes $2^n$ bits of description, so a circuit of elementary operations computing it must contain exponentially many gates. And indeed, like you said in the comments, $2^n=e^{n\ln 2}=e^{\Omega(n)}$. | {
"domain": "quantumcomputing.stackexchange",
"id": 2830,
"tags": "complexity-theory, classical-computing, random-quantum-circuit"
} |
Is there a maximum speed you can travel through time? If so, could you ever reach such a speed? | Question: I understand that everything in the universe travels at the speed of light through spacetime. As a layperson I'm going to refer to this velocity as the spacetime velocity.
Some examples..
Objects with mass have a spacetime velocity that point mostly in the time direction, but also in the 3 space directions.
Light has a spacetime velocity that points the maximum amount in the 3 space directions and minimum in the time direction.
Now, why I think the answer to my first question may be yes..
If an object had its spacetime velocity pointed the maximum in the time direction, wouldn't this object be traveling at the maximum speed something could travel through time?
If I'm thinking about it correct and there is in fact a speed limit in the time direction, is this speed limit of time also out of reach like the speed limit of space?
Or asked in another way, could something ever travel the maximum speed through time?
Answer: So the spacetime velocity refers to the 4-velocity. Most of our favorite 3D vectors have a 4-vector counterpart, and it is this 4-vector counterpart that Lorentz transformations act on between various inertial frames.
The most basic one is spatial separation:
$$ \vec r = (\Delta x,\Delta y, \Delta z)$$
Different coordinate systems have different values of the $\Delta$'s, but they all agree on the length:
$$||\vec r||^2 = \Delta x^2+\Delta y^2+ \Delta z^2$$
in 3D space. The 4-vector version is:
$$ r^{\mu} = (c\Delta t, \vec r)$$
All inertial frames agree on the so-called invariant interval:
$$\Delta s^2 = r^{\mu}r_{\mu} \equiv c^2\Delta t^2 - ||\vec r||^2 $$
If you differentiate a world-line defined by $r^{\mu}(\tau)$ with respect to $\tau$, you get the 4-velocity:
$$ \frac{dr^{\mu}}{d\tau} = v^{\mu} = (\gamma c, \gamma\vec v) $$
where $\gamma=1/\sqrt{1-(v/c)^2}$ is the usual Lorentz factor. Its magnitude is:
$$v^{\mu}v_{\mu} = \gamma^2c^2 - \gamma^2||\vec v||^2 = c^2$$
which is constant for all $\vec v$. This is the origin of "we all move through spacetime at the speed of light".
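(As a quick numerical sanity check of this invariance — added here for illustration, not part of the original answer — the norm works out to $c^2$ at any sub-luminal speed:)

```python
import math

c = 299_792_458.0  # speed of light in m/s

# gamma^2 * c^2 - gamma^2 * v^2 = c^2 for every sub-luminal speed v
for v in (0.0, 0.5 * c, 0.9999 * c):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    norm = gamma ** 2 * c ** 2 - gamma ** 2 * v ** 2
    assert math.isclose(norm, c ** 2, rel_tol=1e-9)
```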
In the rest frame, it is:
$$ v^{\mu} = (c, 0, 0,0) $$
That is, "we move through time at $c$".
Now there is no frame that moves at $c$, and we can't write down the 4-velocity of light because $\gamma \rightarrow \infty$, but as you approach $c$, you don't move through space only (your point 2).
We can rewrite it (in the $z$-direction) as
$$ v^{\mu} = \gamma c(1, 0, 0, \sqrt{1-1/{\gamma^2}}) \rightarrow \gamma c (1, 0, 0, 1) $$
so light moves through space and time equally. The scale factor $c\gamma$ diverges, which is why people say light doesn't move through time, and sees the Universe Lorentz contracted into a 2D plane, but that viewpoint gives no insight into relativity. It is a viewpoint from an inertial frame that does not exist.
As seen in the twin paradox, the maximal-proper-time path between two time-like ($\Delta s^2 > 0$) events is the inertial path in which both events occur at the same location. That way $\Delta r=0$, so that:
$$ \Delta s^2 = c^2\Delta t^2 $$
and $\Delta t^2$ is maximized. (Remember, all inertial frames see the same $\Delta s^2$.) | {
"domain": "physics.stackexchange",
"id": 79379,
"tags": "special-relativity, speed-of-light, thought-experiment"
} |
Invoking "make -j8 -l8" failed | Question:
Hi, I tried to make a move group interface in C++, but when I try to build the package I get all these errors using catkin_make. I'm using Kinetic on Ubuntu 16.04.
This code block was moved to the following github gist:
https://gist.github.com/answers-se-migration-openrobotics/a1de816ea92ad18729cc3966908ed920
Originally posted by Chrisch on ROS Answers with karma: 1 on 2017-10-22
Post score: 0
Answer:
Have you added a add_definitions(-std=c++11) statement somewhere in your CMakeLists.txt?
MoveIt needs C++11 enabled, otherwise you'll run into errors like these.
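For reference, a minimal sketch of where that statement goes (the package, target, and file names are placeholders, not taken from the original question):

```cmake
cmake_minimum_required(VERSION 2.8.3)
project(my_moveit_package)          # placeholder name

add_definitions(-std=c++11)         # MoveIt headers require C++11

find_package(catkin REQUIRED COMPONENTS moveit_ros_planning_interface roscpp)
catkin_package()
include_directories(${catkin_INCLUDE_DIRS})

add_executable(move_group_node src/move_group_node.cpp)
target_link_libraries(move_group_node ${catkin_LIBRARIES})
```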
Originally posted by gvdhoorn with karma: 86574 on 2017-10-23
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 29164,
"tags": "ros, moveit, catkin-make, move-group-interface"
} |
How is the oracle in Grover's search algorithm implemented? | Question: Grover's search algorithm provides a provable quadratic speed-up for unsorted database search.
The algorithm is usually expressed by the following quantum circuit:
In most representations, a crucial part of the protocol is the "oracle gate" $U_\omega$, which "magically" performs the operation $|x\rangle\mapsto(-1)^{f(x)}|x\rangle$.
It is however often left unsaid how difficult realizing such a gate would actually be.
Indeed, it could seem like this use of an "oracle" is just a way to sweep the difficulties under the carpet.
How do we know whether such an oracular operation is indeed realizable?
And if so, what is its complexity (for example in terms of complexity of gate decomposition)?
Answer: The function $f$ is simply an arbitrary boolean function of a bit string: $f\colon \{0,1\}^n \to \{0,1\}$. For applications to breaking cryptography, such as [1], [2], or [3], this is not actually a ‘database lookup’, which would necessitate storing the entire database as a quantum circuit somehow, but rather a function such as
\begin{equation*}
x \mapsto \begin{cases}
1, & \text{if $\operatorname{SHA-256}(x) = y$;} \\
0, & \text{otherwise,}
\end{cases}
\end{equation*}
for fixed $y$, which has no structure we can exploit for a classical search, unlike, say, the function
\begin{equation*}
x \mapsto \begin{cases}
1, & \text{if $2^x \equiv y \pmod{2^{2048} - 1942289}$}, \\
0, & \text{otherwise},
\end{cases}
\end{equation*}
which has structure that can be exploited to invert it faster even on a classical computer.
The question of the particular cost can't be answered in general because $f$ can be any circuit—it's just a matter of making a quantum circuit out of a classical circuit. But usually, as in the example above, the function $f$ is very cheap to evaluate on a classical computer, so it shouldn't pose a particularly onerous burden on a quantum computer for which everything else about Grover's algorithm is within your budget.
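To illustrate "making a quantum circuit out of a classical circuit" — this toy simulation is my addition, not part of the original answer, and builds the full oracle matrix, which is only feasible for a handful of qubits:

```python
import numpy as np

def bit_flip_oracle(f, n):
    """Unitary U_f |x>|b> = |x>|b XOR f(x)> as a 2^(n+1) x 2^(n+1) matrix.

    Any classical boolean function f can be lifted this way.
    """
    dim = 2 ** (n + 1)
    U = np.zeros((dim, dim))
    for x in range(2 ** n):
        for b in range(2):
            U[(x << 1) | (b ^ f(x)), (x << 1) | b] = 1.0
    return U

n = 3
f = lambda x: int(x == 5)   # toy stand-in for a search predicate
U = bit_flip_oracle(f, n)

# U_f is its own inverse: XOR-ing in f(x) twice is the identity.
assert np.allclose(U @ U, np.eye(2 ** (n + 1)))

# Applied to |x>|->, it produces the phase (-1)^f(x):
minus = np.array([1.0, -1.0]) / np.sqrt(2)
for x in range(2 ** n):
    ket_x = np.zeros(2 ** n)
    ket_x[x] = 1.0
    state = np.kron(ket_x, minus)
    assert np.allclose(U @ state, (-1) ** f(x) * state)
```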
The only general cost on top of $f$ is an extra conditional NOT gate $$C\colon \left|a\right> \left|b\right> \to \left|a\right> \left|a \oplus b\right>$$ where $\oplus$ is xor, and an extra ancillary qubit for it. In particular, if we have a circuit $$F\colon \left|x\right> \left|a\right> \lvert\text{junk}\rangle \mapsto \left|x\right> \left|a \oplus f(x)\right> \lvert\text{junk}'\rangle$$ built out of $C$ and the circuit for $f$, then if we apply it to $\left|x\right>$ together with an ancillary qubit initially in the state $\left|-\right> = H\left|1\right> = (1/\sqrt{2})(\left|0\right> - \left|1\right>)$ where $H$ is a Hadamard gate, then we get
\begin{align*}
F\left|x\right> \left|-\right> \lvert\text{junk}\rangle
&= \frac{1}{\sqrt{2}}\bigl(
F\left|x\right> \left|0\right> \lvert\text{junk}\rangle
- F\left|x\right> \left|1\right> \lvert\text{junk}\rangle
\bigr) \\
&= \frac{1}{\sqrt{2}}\bigl(
\left|x\right> \left|f(x)\right> \lvert\text{junk}'\rangle
- \left|x\right> \left|1 \oplus f(x)\right> \lvert\text{junk}'\rangle
\bigr).
\end{align*}
If $f(x) = 0$ then $1 \oplus f(x) = 1$, so by simplifying we obtain $$F\left|x\right> \left|-\right> \lvert\text{junk}\rangle = \left|x\right> \left|-\right> \lvert\text{junk}'\rangle,$$ whereas if $f(x) = 1$ then $1 \oplus f(x) = 0$, so $$F\left|x\right> \left|-\right> \lvert\text{junk}\rangle = -\left|x\right> \left|-\right> \lvert\text{junk}'\rangle,$$ and thus in general $$F\left|x\right> \left|-\right> \lvert\text{junk}\rangle = (-1)^{f(x)} \left|x\right> \left|-\right> \lvert\text{junk}'\rangle.$$ | {
"domain": "quantumcomputing.stackexchange",
"id": 43,
"tags": "quantum-algorithms, circuit-construction, grovers-algorithm, complexity-theory, oracles"
} |
What are the draw backs, if any, in using C and C++ together? Is doing so considered correct by the large? | Question: I have written code exclusively in both C and C++. I see clear advantages in both and therefore have recently began using them together. I have also read, what I would consider, outrageous comments claiming that to code in C is outright dumb, and it is the language of past generations. So, in terms of maintenance, acceptance, common practice and efficiency; is it something professionals on large scale projects see/do?
Here's an example snippet:
I obviously need to #include both <stdio.h> and <iostream>.
#define STRADD ", "
#include <stdio.h>
#include <stdlib.h>
#include <iostream>
#include <fstream>
#include <iomanip>
#include <string>
#include "Struct.h"
#include "Rates.h"
#include "Taxes.h"
using namespace std;
And later I utilize functions like...
void printHeading(FILE * fp)
{
fprintf(fp, "Employee Pay Reg Hrs Gross Fed SSI Net\n");
fprintf(fp, "Name Rate Ovt Hrs Pay State Defr Pay\n");
fprintf(fp, "==================================================================================\n");
return;
}
and..
void getEmpData(EmpRecord &e)
{
cout << "\n Enter the employee's first name: ";
getline(cin, e.firstname);
cout << " Enter the employee's last name: ";
getline(cin, e.lastname);
e.fullname = e.lastname + STRADD + e.firstname; //Fullname string creation
cout << " Enter the employee's hours worked: ";
cin >> e.hours;
while(e.hours < 0)
{
cout << " You did enter a valid amount of hours!\n";
cout << " Please try again: ";
cin >> e.hours;
}
cout << " Enter the employee's payrate: ";
cin >> e.rate;
while(e.rate < MINWAGE)
{
cout << " You did enter a valid hourly rate!\n";
cout << " Please try again: ";
cin >> e.rate;
}
cout << " Enter any amount to be tax deferred: ";
cin >> e.deferred;
while(e.deferred < 0)
{
cout << " You did enter a valid deferred amount!\n";
cout << " Please try again: ";
cin >> e.deferred;
}
cin.ignore(100, '\n');
return;
}
Thanks in advance!
Answer: You can't write exclusively in both C and C++.
You can write
exclusively in C
exclusively in C++
or using a combination of the two different languages.
to code in C is outright dumb, and it is the language of past generations.
C may be an old language but it is still heavily used and has its place.
It is practically the only glue language that is universal, so it is great for writing efficient modules for other languages or gluing languages together.
It is also great for low level coding where you want/need to get close to the hardware.
But there are downsides to writing in C (but saying it is dumb to do so is stretching it a bit). But you should be able to justify your choice of choosing C over an alternative.
So, in terms of maintenance, acceptance, common practice and efficiency; is it something professionals on large scale projects see/do?
Are there large code bases writing in C that need maintenance: Yes.
Is it common practice to use C: That is entirely dependent on what you are doing; it is impossible to generalize.
Personally I write exclusively in C++.
There is nothing I can do in C I can't do in C++ so I don't write C anymore (In fact I have dropped it from my resume (especially since I don't want to write C)). The advantage of C++ is I can get code written to as low a level as C but I also have higher level constructs (though not as high as modern scripting languages).
Code Review:
I see no advantage of using fprintf(). In situations like this the C++ std::ofstream is much more flexible:
printHeading(FILE * fp)
{
fprintf(fp, "Employee Pay Reg Hrs Gross Fed SSI Net\n");
// I would always use C++ stream.
// It can be more than just a file.
printHeading(std::ostream& stream)
{
stream << "Employee Pay Reg Hrs Gross Fed SSI Net\n";
Also if you are doing anything complex it is TYPE SAFE, unlike C code using fprintf(). This is one major area where C falls down, and it is the cause of some of the major bugs in C code.
You check for an invalid number. But you are not checking for completely invalid input. What happens if somebody typed "Fred"
cin >> e.hours;
while(e.hours < 0)
{
cout << " You did enter a valid amount of hours!\n";
cout << " Please try again: ";
cin >> e.hours;
}
You expect that there are never more than 100 bad characters in the input?
cin.ignore(100, '\n');
What happens if I accidentally paste in a paragraph of text.
And last but worst of all:
using namespace std;
Never do this. It is OK if you are writing a ten line toy project. But once you get past anything more than a toy it causes more problems (in name clashes) than it is worth. The reason the standard library is shortened to std:: is to make it easy and quick to type.
"domain": "codereview.stackexchange",
"id": 2115,
"tags": "c++, c"
} |
Building an IDE, block by -- er, mock by mock | Question: The opening sentence of an answer I received in my previous post snowballed, and led to completely ditching the previous approach. Mocking my IDE with a MockFactory worked ok, ...for some values of "ok" - the more components needed to be involved, the messier the setup code was getting.
So instead of manufacturing mocks, I thought I'd build them. Enter the MockVbeBuilder API:
Mock<VBE> mock = new MockVbeBuilder()
.ProjectBuilder("TestProject1", vbext_ProjectProtection.vbext_pp_none)
.AddComponent("TestModule1", vbext_ComponentType.vbext_ct_StdModule, contentForProject1Module1)
.AddComponent("TestModule2", vbext_ComponentType.vbext_ct_StdModule, contentForProject1Module2)
.UserFormBuilder("UserForm1", codeBehindForProject1UserForm1)
.AddControl("Button1")
.AddControl("Button2")
.MockProjectBuilder()
.AddComponent("TestClass1", vbext_ComponentType.vbext_ct_ClassModule, contentForProject1Class1)
.AddComponent("ThisWorkbook", vbext_ComponentType.vbext_ct_Document, contentForProject1ThisWorkbook)
.MockVbeBuilder()
.ProjectBuilder("TestProject2", vbext_ProjectProtection.vbext_pp_locked)
.AddComponent("TestClass1", vbext_ComponentType.vbext_ct_ClassModule, contentForProject2Class1)
.AddReference("TestProject1", "PathToProject1")
.MockVbeBuilder()
.Build();
I also exposed a "shortcut" builder, to mock the entire IDE out of just a single standard module:
VBComponent component;
Mock<VBE> mock = new MockVbeBuilder().BuildFromSingleStandardModule(content, out component);
This simplifies usage for the simpler tests that really only need a single code module with some test content.
Here's the MockVbeBuilder class:
namespace RubberduckTests.Mocks
{
/// <summary>
/// Builds a mock <see cref="VBE"/>.
/// </summary>
public class MockVbeBuilder
{
private readonly Mock<VBE> _vbe;
private Mock<VBProjects> _vbProjects;
private readonly ICollection<VBProject> _projects = new List<VBProject>();
private Mock<CodePanes> _vbCodePanes;
private readonly ICollection<CodePane> _codePanes = new List<CodePane>();
public MockVbeBuilder()
{
_vbe = CreateVbeMock();
}
/// <summary>
/// Adds a project to the mock VBE.
/// Use a <see cref="MockProjectBuilder"/> to build the <see cref="project"/>.
/// </summary>
/// <param name="project">A mock <see cref="VBProject"/>.</param>
/// <returns>Returns the <see cref="MockVbeBuilder"/> instance.</returns>
public MockVbeBuilder AddProject(Mock<VBProject> project)
{
project.SetupGet(m => m.VBE).Returns(_vbe.Object);
_projects.Add(project.Object);
foreach (var component in _projects.SelectMany(vbProject => vbProject.VBComponents.Cast<VBComponent>()))
{
_codePanes.Add(component.CodeModule.CodePane);
}
return this;
}
/// <summary>
/// Creates a <see cref="MockProjectBuilder"/> to build a new project.
/// </summary>
/// <param name="name">The name of the project to build.</param>
/// <param name="protection">A value that indicates whether the project is protected.</param>
public MockProjectBuilder ProjectBuilder(string name, vbext_ProjectProtection protection)
{
var result = new MockProjectBuilder(name, protection, () => _vbe.Object, this);
return result;
}
/// <summary>
/// Gets the mock <see cref="VBE"/> instance.
/// </summary>
public Mock<VBE> Build()
{
return _vbe;
}
/// <summary>
/// Gets a mock <see cref="VBE"/> instance,
/// containing a single "TestProject1" <see cref="VBProject"/>
/// and a single "TestModule1" <see cref="VBComponent"/>, with the specified <see cref="content"/>.
/// </summary>
/// <param name="content">The VBA code associated to the component.</param>
/// <param name="component">The created <see cref="VBComponent"/></param>
/// <returns></returns>
public Mock<VBE> BuildFromSingleStandardModule(string content, out VBComponent component)
{
var builder = ProjectBuilder("TestProject1", vbext_ProjectProtection.vbext_pp_none);
builder.AddComponent("TestModule1", vbext_ComponentType.vbext_ct_StdModule, content);
var project = builder.Build();
component = project.Object.VBComponents.Item(0);
return AddProject(project).Build();
}
private Mock<VBE> CreateVbeMock()
{
var vbe = new Mock<VBE>();
var windows = new MockWindowsCollection {VBE = vbe.Object};
vbe.Setup(m => m.Windows).Returns(windows);
vbe.SetupProperty(m => m.ActiveCodePane);
vbe.SetupProperty(m => m.ActiveVBProject);
vbe.SetupGet(m => m.SelectedVBComponent).Returns(() => vbe.Object.ActiveCodePane.CodeModule.Parent);
vbe.SetupGet(m => m.ActiveWindow).Returns(() => vbe.Object.ActiveCodePane.Window);
var mainWindow = new Mock<Window>();
mainWindow.Setup(m => m.HWnd).Returns(0);
vbe.SetupGet(m => m.MainWindow).Returns(mainWindow.Object);
_vbProjects = CreateProjectsMock();
vbe.SetupGet(m => m.VBProjects).Returns(() => _vbProjects.Object);
_vbCodePanes = CreateCodePanesMock();
vbe.SetupGet(m => m.CodePanes).Returns(() => _vbCodePanes.Object);
return vbe;
}
private Mock<VBProjects> CreateProjectsMock()
{
var result = new Mock<VBProjects>();
result.Setup(m => m.GetEnumerator()).Returns(_projects.GetEnumerator());
result.As<IEnumerable>().Setup(m => m.GetEnumerator()).Returns(_projects.GetEnumerator());
result.Setup(m => m.Item(It.IsAny<int>())).Returns<int>(value => _projects.ElementAt(value));
result.SetupGet(m => m.Count).Returns(_projects.Count);
return result;
}
private Mock<CodePanes> CreateCodePanesMock()
{
var result = new Mock<CodePanes>();
result.Setup(m => m.GetEnumerator()).Returns(_codePanes.GetEnumerator());
result.As<IEnumerable>().Setup(m => m.GetEnumerator()).Returns(_codePanes.GetEnumerator());
result.Setup(m => m.Item(It.IsAny<int>())).Returns<int>(value => _codePanes.ElementAt(value));
result.SetupGet(m => m.Count).Returns(_codePanes.Count);
return result;
}
}
}
The MockProjectBuilder class:
namespace RubberduckTests.Mocks
{
/// <summary>
/// Builds a mock <see cref="VBProject"/>.
/// </summary>
public class MockProjectBuilder
{
private readonly Func<VBE> _getVbe;
private readonly MockVbeBuilder _mockVbeBuilder;
private readonly Mock<VBProject> _project;
private readonly Mock<VBComponents> _vbComponents;
private readonly Mock<References> _vbReferences;
private readonly List<VBComponent> _components = new List<VBComponent>();
private readonly List<Reference> _references = new List<Reference>();
public MockProjectBuilder(string name, vbext_ProjectProtection protection, Func<VBE> getVbe, MockVbeBuilder mockVbeBuilder)
{
_getVbe = getVbe;
_mockVbeBuilder = mockVbeBuilder;
_project = CreateProjectMock(name, protection);
_vbComponents = CreateComponentsMock();
_project.SetupGet(m => m.VBComponents).Returns(_vbComponents.Object);
_vbReferences = CreateReferencesMock();
_project.SetupGet(m => m.References).Returns(_vbReferences.Object);
}
/// <summary>
/// Adds a new component to the project.
/// </summary>
/// <param name="name">The name of the new component.</param>
/// <param name="type">The type of component to create.</param>
/// <param name="content">The VBA code associated to the component.</param>
/// <returns>Returns the <see cref="MockProjectBuilder"/> instance.</returns>
public MockProjectBuilder AddComponent(string name, vbext_ComponentType type, string content)
{
var component = CreateComponentMock(name, type, content);
return AddComponent(component);
}
/// <summary>
/// Adds a new mock component to the project.
/// Use the <see cref="AddComponent(string,vbext_ComponentType,string)"/> overload to add module components.
/// Use this overload to add user forms created with a <see cref="MockUserFormBuilder"/> instance.
/// </summary>
/// <param name="component">The component to add.</param>
/// <returns>Returns the <see cref="MockProjectBuilder"/> instance.</returns>
public MockProjectBuilder AddComponent(Mock<VBComponent> component)
{
_components.Add(component.Object);
_getVbe().ActiveCodePane = component.Object.CodeModule.CodePane;
return this;
}
/// <summary>
/// Adds a mock reference to the project.
/// </summary>
/// <param name="name">The name of the referenced library.</param>
/// <param name="filePath">The path to the referenced library.</param>
/// <returns>Returns the <see cref="MockProjectBuilder"/> instance.</returns>
public MockProjectBuilder AddReference(string name, string filePath)
{
var reference = CreateReferenceMock(name, filePath);
_references.Add(reference.Object);
return this;
}
/// <summary>
/// Builds the project, adds it to the VBE,
/// and returns a <see cref="MockVbeBuilder"/>
/// to continue adding projects to the VBE.
/// </summary>
/// <returns></returns>
public MockVbeBuilder MockVbeBuilder()
{
_mockVbeBuilder.AddProject(Build());
return _mockVbeBuilder;
}
/// <summary>
/// Creates a <see cref="MockUserFormBuilder"/> to build a new form component.
/// </summary>
/// <param name="name">The name of the component.</param>
/// <param name="content">The VBA code associated to the component.</param>
public MockUserFormBuilder UserFormBuilder(string name, string content)
{
var component = CreateComponentMock(name, vbext_ComponentType.vbext_ct_MSForm, content);
return new MockUserFormBuilder(component, this);
}
/// <summary>
/// Gets the mock <see cref="VBProject"/> instance.
/// </summary>
public Mock<VBProject> Build()
{
return _project;
}
private Mock<VBProject> CreateProjectMock(string name, vbext_ProjectProtection protection)
{
var result = new Mock<VBProject>();
result.SetupProperty(m => m.Name, name);
result.SetupGet(m => m.Protection).Returns(() => protection);
result.SetupGet(m => m.VBE).Returns(_getVbe);
return result;
}
private Mock<VBComponents> CreateComponentsMock()
{
var result = new Mock<VBComponents>();
result.SetupGet(m => m.Parent).Returns(() => _project.Object);
result.SetupGet(m => m.VBE).Returns(_getVbe);
result.Setup(c => c.GetEnumerator()).Returns(() => _components.GetEnumerator());
result.As<IEnumerable>().Setup(c => c.GetEnumerator()).Returns(() => _components.GetEnumerator());
result.Setup(m => m.Item(It.IsAny<int>())).Returns<int>(index => _components.ElementAt(index));
result.Setup(m => m.Item(It.IsAny<string>())).Returns<string>(name => _components.Single(item => item.Name == name));
result.SetupGet(m => m.Count).Returns(_components.Count);
return result;
}
private Mock<References> CreateReferencesMock()
{
var result = new Mock<References>();
result.SetupGet(m => m.Parent).Returns(() => _project.Object);
result.SetupGet(m => m.VBE).Returns(_getVbe);
result.Setup(m => m.GetEnumerator()).Returns(() => _references.GetEnumerator());
result.As<IEnumerable>().Setup(m => m.GetEnumerator()).Returns(() => _references.GetEnumerator());
result.Setup(m => m.Item(It.IsAny<int>())).Returns<int>(index => _references.ElementAt(index));
result.SetupGet(m => m.Count).Returns(_references.Count);
return result;
}
private Mock<Reference> CreateReferenceMock(string name, string filePath)
{
var result = new Mock<Reference>();
result.SetupGet(m => m.VBE).Returns(_getVbe);
result.SetupGet(m => m.Collection).Returns(() => _vbReferences.Object);
result.SetupGet(m => m.Name).Returns(() => name);
result.SetupGet(m => m.FullPath).Returns(() => filePath);
return result;
}
private Mock<VBComponent> CreateComponentMock(string name, vbext_ComponentType type, string content)
{
var result = new Mock<VBComponent>();
result.SetupGet(m => m.VBE).Returns(_getVbe);
result.SetupGet(m => m.Collection).Returns(() => _vbComponents.Object);
result.SetupGet(m => m.Type).Returns(() => type);
result.SetupProperty(m => m.Name, name);
var module = CreateCodeModuleMock(name, content);
module.SetupGet(m => m.Parent).Returns(() => result.Object);
result.SetupGet(m => m.CodeModule).Returns(() => module.Object);
result.Setup(m => m.Activate());
return result;
}
private Mock<CodeModule> CreateCodeModuleMock(string name, string content)
{
var codePane = CreateCodePaneMock(name);
codePane.SetupGet(m => m.VBE).Returns(_getVbe);
var result = CreateCodeModuleMock(content);
result.SetupGet(m => m.VBE).Returns(_getVbe);
result.SetupGet(m => m.CodePane).Returns(() => codePane.Object);
codePane.SetupGet(m => m.CodeModule).Returns(() => result.Object);
return result;
}
private Mock<CodeModule> CreateCodeModuleMock(string content)
{
var lines = content.Split(new[] { Environment.NewLine }, StringSplitOptions.None).ToList();
var codeModule = new Mock<CodeModule>();
codeModule.SetupGet(c => c.CountOfLines).Returns(() => lines.Count);
// ReSharper disable once UseIndexedProperty
codeModule.Setup(m => m.get_Lines(It.IsAny<int>(), It.IsAny<int>()))
.Returns<int, int>((start, count) => String.Join(Environment.NewLine, lines.Skip(start - 1).Take(count)));
codeModule.Setup(m => m.ReplaceLine(It.IsAny<int>(), It.IsAny<string>()))
.Callback<int, string>((index, str) => lines[index - 1] = str);
codeModule.Setup(m => m.DeleteLines(It.IsAny<int>(), It.IsAny<int>()))
.Callback<int, int>((index, count) => lines.RemoveRange(index - 1, count));
codeModule.Setup(m => m.InsertLines(It.IsAny<int>(), It.IsAny<string>()))
.Callback<int, string>((index, newLine) => lines.Insert(index - 1, newLine));
return codeModule;
}
private Mock<CodePane> CreateCodePaneMock(string name)
{
var windows = _getVbe().Windows as MockWindowsCollection;
if (windows == null)
{
throw new InvalidOperationException("VBE.Windows collection must be a MockWindowsCollection object.");
}
var codePane = new Mock<CodePane>();
var window = windows.CreateWindow(name);
windows.Add(window);
codePane.Setup(p => p.SetSelection(It.IsAny<int>(), It.IsAny<int>(), It.IsAny<int>(), It.IsAny<int>()));
codePane.Setup(p => p.Show());
codePane.SetupGet(p => p.VBE).Returns(_getVbe);
codePane.SetupGet(p => p.Window).Returns(() => window);
return codePane;
}
}
}
And the MockUserFormBuilder class:
namespace RubberduckTests.Mocks
{
/// <summary>
/// Builds a mock <see cref="UserForm"/> component.
/// </summary>
public class MockUserFormBuilder
{
private readonly Mock<VBComponent> _component;
private readonly MockProjectBuilder _mockProjectBuilder;
private readonly Mock<Controls> _vbControls;
private readonly ICollection<Mock<Control>> _controls = new List<Mock<Control>>();
public MockUserFormBuilder(Mock<VBComponent> component, MockProjectBuilder mockProjectBuilder)
{
if (component.Object.Type != vbext_ComponentType.vbext_ct_MSForm)
{
throw new InvalidOperationException("Component type must be 'vbext_ComponentType.vbext_ct_MSForm'.");
}
_component = component;
_mockProjectBuilder = mockProjectBuilder;
_vbControls = CreateControlsMock();
}
/// <summary>
/// Adds a <see cref="Control"/> to the form.
/// </summary>
/// <param name="name">The name of the control to add.</param>
/// <returns></returns>
public MockUserFormBuilder AddControl(string name)
{
var control = new Mock<Control>();
control.SetupProperty(m => m.Name, name);
_controls.Add(control);
return this;
}
/// <summary>
/// Builds the UserForm, adds it to the project,
/// and returns a <see cref="MockProjectBuilder"/>
/// to continue adding components to the project.
/// </summary>
/// <returns></returns>
public MockProjectBuilder MockProjectBuilder()
{
_mockProjectBuilder.AddComponent(Build());
return _mockProjectBuilder;
}
/// <summary>
/// Gets the mock <see cref="UserForm"/> component.
/// </summary>
/// <returns></returns>
public Mock<VBComponent> Build()
{
var designer = CreateMockDesigner();
_component.SetupGet(m => m.Designer).Returns(() => designer);
return _component;
}
private Mock<UserForm> CreateMockDesigner()
{
var result = new Mock<UserForm>();
result.SetupGet(m => m.Controls).Returns(() => _vbControls.Object);
return result;
}
private Mock<Controls> CreateControlsMock()
{
var result = new Mock<Controls>();
result.Setup(m => m.GetEnumerator()).Returns(() => _controls.GetEnumerator());
result.As<IEnumerable>().Setup(m => m.GetEnumerator()).Returns(() => _controls.GetEnumerator());
result.Setup(m => m.Item(It.IsAny<int>())).Returns<int>(index => _controls.ElementAt(index).Object);
result.SetupGet(m => m.Count).Returns(_controls.Count);
return result;
}
}
}
I love the builder pattern, I find it's definitely the right tool for the job here... except I feel like I've somehow bastardized it, both with the "shortcut" builder method and with the "nested builders" that enable the fluent API, which require the calling test code to "walk back up" to the MockVbeBuilder instance to call Build on the right type, in order to get a Mock<VBE> instance.
On the other hand, using the, well, fluent fluent API or not is in the hands of the caller. Take the example mock setup I gave at the top of this post - it could just as well be built like this:
var vbeBuilder = new MockVbeBuilder();
var project1Builder = vbeBuilder.ProjectBuilder("TestProject1", vbext_ProjectProtection.vbext_pp_none)
.AddComponent("TestModule1", vbext_ComponentType.vbext_ct_StdModule, contentForProject1Module1)
.AddComponent("TestModule2", vbext_ComponentType.vbext_ct_StdModule, contentForProject1Module2)
.AddComponent("TestClass1", vbext_ComponentType.vbext_ct_ClassModule, contentForProject1Class1)
.AddComponent("ThisWorkbook", vbext_ComponentType.vbext_ct_Document, contentForProject1ThisWorkbook);
var form1 = project1Builder.UserFormBuilder("UserForm1", codeBehindForProject1UserForm1)
.AddControl("Button1")
.AddControl("Button2")
.Build();
project1Builder.AddComponent(form1);
var project1 = project1Builder.Build();
var project2 = vbeBuilder.ProjectBuilder("TestProject2", vbext_ProjectProtection.vbext_pp_locked)
.AddComponent("TestClass1", vbext_ComponentType.vbext_ct_ClassModule, contentForProject2Class1)
.AddReference("TestProject1", "PathToProject1")
.Build();
var mock = vbeBuilder.AddProject(project1)
.AddProject(project2)
.Build();
However that feels overly verbose. Is there a better way? Am I really abusing the builder pattern or I've done The Right Thing™?
Answer: It's very hard to follow the builder when it's jumping around like it is... I wonder whether you could achieve something like:
Mock<VBE> mock = new MockVbeBuilder()
.AddProject(settings =>
{
settings.Name = "";
settings.Protection = /* something... */;
settings.ComponentBuilder
.AddComponent("TestModule1",
vbext_ComponentType.vbext_ct_StdModule,
contentForProject1Module1)
.AddComponent("TestModule2",
vbext_ComponentType.vbext_ct_StdModule,
contentForProject1Module2);
})
.Build();
It's probably going to complicate the design a bit but I think the addition of some Settings type objects may help you in the long run.
I realise that the above isn't really a review of your code... I can whine about some of your naming if you'd like ;)
Naming
You're inconsistent with capitalisation on Vbe/VBE: the class is VBE but you CreateVbeMock. I'm guessing that's because VBE is a generated class?
The documentation for MockProjectBuilder (and what it does) is a lot more than just navigating to the project builder.
UserFormBuilder method doesn't quite follow the pattern elsewhere - should be MockUserFormBuilder.
Those are the only ones that I can see... Your code is getting too consistently good to pick out many things!
I wonder whether calling your methods that navigate to a parent builder ToXyzBuilder would make the code read a bit better? | {
"domain": "codereview.stackexchange",
"id": 15004,
"tags": "c#, design-patterns, rubberduck, mocks"
} |
Can Carbon Form bonds without Hybridization? | Question: Carbon has two electrons in its p orbital which should be able to form bonds, are there any examples in which this occurs instead of carbon hybridizing before bonding?
Answer: Hybridisation is a mathematical concept. It is one way of deriving chemical bonds from elements but not the only way. A different one (that computational chemistry typically uses) would be to just plant atoms at certain positions in space, and then mix their unhybridised atomic orbitals with each other to see what comes out. One can then back-calculate this to localise the molecule-spanning orbitals to single bonds and thereby deduce a hybridisation that would have worked to build up the molecule. Take the following example of methane (taken from Professor Klüfers’ internet scriptum for basic and inorganic chemistry at the university of Munich):
On the left you have four hydrogen s-orbitals, on the right you have the orbitals of carbon. As you see, there is no hybridisation assumed at all and the ground state is taken to be a fully populated $\mathrm{2s}$ orbital and three $\mathrm{2p}$ orbitals populated with two electrons among them for a total of $\mathrm{2s^2~2p^2}$. The hybrid orbitals are gained from symmetry considerations (in a tetrahedron, the p-orbitals transform as $\mathrm{t_1}$) and hydrogen orbitals were selected as was seen fit. The electrons are added a posteriori. Spin conservation would have produced a twice-excited state with very high total energy, so immediate spin pairing is assumed.
To localise the bonds, linearly combine all four orbitals. If you take every orbital with factor $+1$, you arrive at the $\ce{C-H}$ bond to the upper right hydrogen. By shifting around a factor of $-1$ among the three $\mathrm{t_1}$ molecular orbitals during linear combination, you can address every other $\ce{C-H}$ bond. The resulting bonding orbital looks like an $\mathrm{sp^3}$ orbital mixed with a hydrogen $\mathrm{s}$ orbital. | {
"domain": "chemistry.stackexchange",
"id": 4592,
"tags": "bond, hybridization, vsepr-theory"
} |
Please suggest me which RTOS can be integrated with ROS? | Question:
Hi,
I am working on a 7-axis robot in which each joint is controlled by a BLDC motor with the help of an Elmo Gold Whistle drive. The motor drive uses EtherCAT as a communication interface using SOEM drivers. Now, coming to the EtherCAT communication part, is it necessary to use an RTOS to avoid packet loss, or can the SOEM driver itself take care of transmission and packet loss? What is the role of the RTOS, if it is necessary to integrate one?
Originally posted by TraiBo on ROS Answers with karma: 146 on 2016-11-08
Post score: 2
Answer:
It is possible to use SOEM to control a number of EtherCAT slaves without an RTOS. This is done on the KUKA youbot for example:
https://github.com/youbot/youbot_driver
Since the Master is in control of the EtherCAT cycle, packet loss should not be a problem. However, on a general-purpose OS (non-RTOS), such as Ubuntu with a mainline kernel, you will experience control jitter. In some cases, the EtherCAT slaves expect a new control message every few milliseconds, and if they do not receive the message in time, they will shut down. This behavior is entirely slave-dependent.
Originally posted by Jan Carstensen with karma: 66 on 2017-10-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 26180,
"tags": "ros, robot"
} |
Calculate pairs in a Set ("Sherlock and Pairs" HackerRank challenge) | Question: I have started reading Clean Code and want to improve my coding practices. Here is my attempt at solving an online puzzle. Please review the code and let me know how I could have written it better in terms of logic, coding practices or any other advice you might have.
Problem Statement
Sherlock is given an array of N integers (\$A_0, A_1 \ldots A_{N-1}\$) by Watson. Now Watson asks Sherlock how many different pairs of indices \$i\$ and \$j\$ exist such that \$i\$ is not equal to \$j\$ but \$A_i\$ is equal to \$A_j\$.
That is, Sherlock has to count the total number of pairs of indices \$(i,j)\$ where \$A_i = A_j\$ and \$i \ne j\$.
Input Format
The first line contains \$T\$, the number of test cases. \$T\$ test cases follow.
Each test case consists of two lines; the first line contains an integer \$N\$, the size of array, while the next line contains N space separated integers.
Output Format
For each test case, print the required answer on a different line.
Constraints
\$1≤T≤10\$
\$1≤N≤10^5\$
\$1≤A[i]≤10^6\$
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace NoOfPairsInASet
{
class Solution
{
static Dictionary<Int64, Int64> CountOfElements = new Dictionary<Int64, Int64>();
static Int64 Sum = 0;
static Int64[] data;
static void Main(string[] args)
{
int TestCases = Convert.ToInt32(Console.ReadLine());
//int TestCases = 1;
for (int t = 0; t < TestCases; t++)
{
CalculatePairs();
Console.WriteLine(Sum);
}
}
private static void CalculatePairs()
{
int NoOfElements = Convert.ToInt32(Console.ReadLine());
string Elements = Console.ReadLine();
//int NoOfElements = 3;
//string Elements = "1 1 2 1 2";
string[] ArrayOfElements = Elements.Split(' ');
data = new Int64[ArrayOfElements.Length];
ConvertStringToInt(ArrayOfElements);
GenerateDictionary();
CalculateSum();
}
private static void ConvertStringToInt(string[] ArrayOfElements)
{
for (Int64 i = 0; i < ArrayOfElements.Length; i++)
{
data[i] = Convert.ToInt64(ArrayOfElements[i]);
}
}
private static void GenerateDictionary()
{
CountOfElements.Clear();
int DEFAULTCOUNT = 1;
for (Int64 i = 0; i < data.Length; i++)
{
if (CountOfElements.ContainsKey(data[i]) == true)
{
IncrementCounter(i);
}
else
{
CountOfElements.Add(data[i], DEFAULTCOUNT);
}
}
}
private static void IncrementCounter(Int64 i)
{
Int64 count = CountOfElements[(data[i])];
count++;
CountOfElements[(data[i])] = count;
}
private static void CalculateSum()
{
Sum = 0;
foreach (KeyValuePair<Int64, Int64> item in CountOfElements)
{
if (item.Value > 1)
{
Sum += CalculatePermutationOfValue(item.Value, 2);
}
}
}
private static Int64 CalculatePermutationOfValue(Int64 n, Int64 r)
{
//Int64 P = factorial(n) / factorial(n - r); // with r = 2, this simplifies to n*n-1
Int64 P = n * (n - 1);
return P;
}
//private static Int64 factorial(Int64 n)
//{
// Int64 fact = 1;
// for (Int64 i = n; i >= 1; i--)
// {
// fact = fact * i;
// }
// return fact;
//}
}
}
I have kept the test cases commented out. Is that okay?
Also, I have still kept the factorial function as it can be used in the future. Should I remove the factorial function or can I keep it commented?
Answer: Dead code
Kill it. If you have some commented out code that you aren't using just delete it. It adds noise to the reader and doesn't add anything to the code. Not to mention factorial is an incredibly simple thing to add back in:
public int Factorial(int number)
{
return number == 0 ? 1 : Enumerable.Range(1, number).Aggregate((i, j) => i * j);
}
Note from the above that, like you, I prefer non-recursive solutions where possible.
You also have dead code here:
private static Int64 CalculatePermutationOfValue(Int64 n, Int64 r)
{
//Int64 P = factorial(n) / factorial(n - r); // with r = 2, this simplifies to n*n-1
Int64 P = n * (n - 1);
return P;
}
Remove the r parameter and delete the commented code. Also, don't bother with P = ... just return n * (n - 1);.
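For reference, the whole counting step (tally occurrences, then add n * (n - 1) ordered pairs per value) is compact enough to sketch outside C#. A hypothetical Python rendering, purely to show the shape of the algorithm:

```python
from collections import Counter

def count_equal_pairs(values):
    # For each value occurring n times there are n * (n - 1) ordered
    # index pairs (i, j) with i != j and A[i] == A[j].
    return sum(n * (n - 1) for n in Counter(values).values())

print(count_equal_pairs([1, 1, 2, 1, 2]))  # 3*2 + 2*1 = 8
```

The sample input matches the commented-out test data "1 1 2 1 2" in the question: the value 1 occurs three times (6 ordered pairs) and 2 occurs twice (2 ordered pairs).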
Naming
You've used a sensible name for namespace and class - kudos!
Prefer long to Int64 - it looks nicer and is more standard.
All local variables should be named in camelCase P -> p.
Quite a few of your names are quite good - but can be tightened up in places: try to be as descriptive as you can.
Other stuff
Everything is static - not necessarily a problem but some people get upset about that sort of thing.
This function can be massively simplified (I think):
private static void IncrementCounter(Int64 i)
{
Int64 count = CountOfElements[(data[i])];
count++;
CountOfElements[(data[i])] = count;
}
To
CountOfElements[(data[i])]++;
You should rely on fields less and pass things through parameters. Side effects suck when you're trying to follow execution. E.g.
private static void GenerateDictionary()
It returns void but it generates a dictionary? Now I have to dig into the code to find where my dictionary has been generated.
You should write instructions out to the Console. Users don't like spamming the keyboard hoping to do the right thing(TM).
Never trust a user to input a sensible value:
int TestCases = Convert.ToInt32(Console.ReadLine());
Should be more along the lines of:
var numberOfTestCases = 0;
// This only works if MinimumNumberOfTestCases >= 1
while (numberOfTestCases < MinimumNumberOfTestCases || numberOfTestCases > MaximumNumberOfTestCases)
{
Console.WriteLine(UserMessages.PromptForNumberOfTestCases);
int.TryParse(Console.ReadLine(), out numberOfTestCases);
}
Note that I've pretended that you have a class called UserMessages with a property/field containing the text to prompt to a user. I've also assumed that you have created well named constants for the min/max number of test cases.
Linq
You should definitely learn some LINQ (Language Integrated Query). For example, your ConvertStringToInt becomes redundant. I didn't mention it earlier but that's also a poorly named function. It converts the items of a string array to ints - that isn't clear from the name.
Anyway, with LINQ:
ArrayOfElements.Select(item => int.Parse(item)).ToArray();
Notice that I'm just doing a straight forward parse - if the item isn't a valid integer we can't continue so might as well fail quickly. | {
"domain": "codereview.stackexchange",
"id": 14991,
"tags": "c#, programming-challenge, set"
} |
apt-get vs. svn for installing stacks/packages | Question:
This is probably a bad question but I was wondering if there are any advantages to using one over the other (or even git or rosinstall)?
For example which is better out of the following two methods:
sudo apt-get install ros-groovy-laser-drivers
svn co https://code.ros.org/svn/ros-pkg/stacks/laser_drivers/trunk
best regards
Originally posted by Abbi on ROS Answers with karma: 11 on 2013-06-06
Post score: 0
Answer:
The first one is the easiest method but the software version can be old. With the second method you get the most recent version, but you have to compile it yourself, which is more work.
With the second method you also have to check for updates and recompile yourself, while with the first method you can just run apt-get upgrade.
Originally posted by davinci with karma: 2573 on 2013-06-06
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Bill Smart on 2013-06-07:
It really depends on what you want to do, and what your comfort level with installing from source is. My default is to use apt-get unless I have a compelling reason to install from source, such as needing the very latest version of a package, for the reasons outlined in this answer. | {
"domain": "robotics.stackexchange",
"id": 14465,
"tags": "ros"
} |
Is the sum of 2 numbers in list k | Question: I got a coding problem to solve and was wondering if I got a commonly acceptable and efficient/clean solution. If there is a better way I'd gladly hear it.
given a list with numbers return if the sum of 2 elements in the list = k
def is_sum_of_2nums_in_list_k(list, k):
for i in list:
if k - i in list:
return True
return False
They also mentioned you get bonus points if you do it in one 'pass'. What does this mean?
Answer: You are using one explicit pass over the list (for i in list), and one implicit pass over the list (k - i in list). This means your algorithm is \$O(N^2)\$.
Note: You can reduce your implementation to one line:
return any(k - i in list for i in list)
But you have a bug. is_sum_of_2nums_in_list_k([5], 10) returns True.
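One way to write that corrected single-pass version is to test each element against only the elements seen before it, so a hit always involves two distinct indices. A sketch (one possible implementation, not the only one):

```python
def is_sum_of_2nums_in_list_k(nums, k):
    seen = set()
    for i in nums:
        if k - i in seen:   # complement came from an earlier, distinct index
            return True
        seen.add(i)
    return False

print(is_sum_of_2nums_in_list_k([5], 10))     # False -- the bug is gone
print(is_sum_of_2nums_in_list_k([5, 5], 10))  # True
```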
A one pass algorithm
starts with an empty set,
takes each number in the list,
tests if k-i is in the set,
return true if it is,
adds i to the set.
returns false | {
"domain": "codereview.stackexchange",
"id": 33500,
"tags": "python, python-3.x, interview-questions, k-sum"
} |
Which combinations of orbitals lead to pi bonds according to Valence Bond Theory? | Question:
How many of the following combination of the orbitals will lead to formation of $\pi$-bonds with $z$ axis being the internuclear axis:
$$p_x+p_x,\,p_z+p_z,\,p_y+p_y,\,d_{zx}+p_x,\,d_{zy}+p_y,\,s+p_y,\,d_{yz}+d_{zy},\,d_{zx}+d_{zx},\,d_{z^2}+s$$
My attempt/Understanding:
According to Valence Bond Theory, any orbital having any relations with $z$-axis or $z$ related planes should form $\sigma$-bonds. This is because $z$ is internuclear axis and these orbitals will have direct overlapping, forming $\sigma$-bonds.
Rest orbitals will form $\pi$-bonds by lateral overlapping, so I counted them as answer $(4)$:
$$p_x+p_x,\,p_y+p_y,\,s+p_y,\,d_{xy}+d_{xy}$$
But the given answer is $(6)$, and it is not mentioned which orbital combinations are counted. So, please tell me which combinations should be counted, and point out any flaws in my understanding.
Answer: Here are the requirements for a $\pi$ bond:
Each contributing orbital has to have exactly one nodal plane containing the bond axis. There can be other nodal planes that might cut through the axis, but don't contain it so those planes don't count. Two nodal planes containing the axis, as with a $d_{xy}$ orbital, also fail because that's too many.
The nodal planes identified in (1) that contain the bond axis must line up, so they end up as a single nodal plane going through the bond. Draw, let us say, the $p_x+p_x$ pair (for which it is easy to see that a $\pi$ bond forms) to see what I mean.
If you sketch the orbital combinations for each of the nine given cases, you see immediately that three fail test (1) (the $s$ orbital is a no-go for a nodal plane, and $p_z$ has its nodal plane oriented wrong), but the others pass both tests (1) and (2), and they are the six combinations identified by the book.
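The two tests can even be checked mechanically. In the sketch below (hypothetical Python bookkeeping, with the $z$ axis as the bond axis), each orbital is tagged with the set of its nodal planes that contain the $z$ axis; a $\pi$ overlap requires exactly one such plane on each orbital, and the planes must coincide. Note that $d_{yz}$ and $d_{zy}$ are the same orbital, so that combination is encoded as the orbital paired with itself.

```python
# Nodal planes containing the z (bond) axis, per orbital (hypothetical encoding).
planes = {
    "s": set(), "pz": set(), "dz2": set(),  # zero such planes: sigma-type
    "px": {"yz"}, "dzx": {"yz"},            # one plane (the yz plane)
    "py": {"xz"}, "dzy": {"xz"},            # one plane (the xz plane)
    "dxy": {"xz", "yz"},                    # two planes: delta-type, not pi
}

def forms_pi(a, b):
    # Test (1): exactly one nodal plane through the axis on each orbital.
    # Test (2): those planes must line up.
    return len(planes[a]) == 1 and planes[a] == planes[b]

combos = [("px", "px"), ("pz", "pz"), ("py", "py"),
          ("dzx", "px"), ("dzy", "py"), ("s", "py"),
          ("dzy", "dzy"), ("dzx", "dzx"), ("dz2", "s")]
print(sum(forms_pi(a, b) for a, b in combos))   # 6
```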
The hidden lesson here is that $d$ orbitals can form $\pi$ bonds too. You will see them do so when you study transition-metal complexes. | {
"domain": "chemistry.stackexchange",
"id": 15028,
"tags": "valence-bond-theory"
} |
Send command and get response from Windows CMD prompt silently - follow-up | Question: This post is an updated version of the original post, with the code examples below reflecting changes in response to very detailed inputs from user @pacmaninbw.
The updated source examples below also include the addition of a function that is designed to accommodate commands that do not return a response. The comment block describes why this function addition is necessary, and its usage.
I am interested in getting feedback with emphasis on the same things as before, so will repeat the preface of the previous post below.
The need:
I needed a method to programmatically send commands to the Windows 7 CMD prompt, and return the response without seeing a console popup, in multiple applications.
The design:
The environment, in addition to the Windows 7 OS, is an ANSI C (C99) compiler from National Instruments, and the Microsoft Windows Driver Kit for Windows 8.1. Among the design goals was to present a very small API, including well documented and straightforward usage instructions. The result is two exported functions. Descriptions for each are provided in their respective comment blocks. In its provided form, it is intended to be built as a DLL. The only header files used in this library are windows.h and stdlib.h.
For review consideration:
The code posted is complete, and I have tested it, but I am new to using pipes to stdin and stdout, as well as using Windows methods for CreateProcess(...). Also, because the size requirements of the response buffer cannot be known at compile time, the code includes the ability to grow the response buffer as needed during run-time. For example, I have used this code to recursively read directories using dir /s from all locations except the c:\ directory with the following command:
cd c:\dev && dir /s // approximately 1.8Mbyte buffer is returned on my system
I would especially appreciate feedback focused on the following:
Pipe creation and usage
CreateProcess usage
Method for dynamically growing response buffer (very interested in feedback on this)
Handling embedded null bytes in content returned from ReadFile function. (Credit to @chux, this is a newly discovered deficiency)
Usage example:
#include <stdio.h> // printf()
#include <stdlib.h> // NULL
#include "cmd_rsp.h"
#define BUF_SIZE 100
int main(void)
{
char *buf = NULL;
/// test cmd_rsp
buf = calloc(BUF_SIZE, 1);
if(!buf)return 0;
if (!cmd_rsp("dir /s", &buf, BUF_SIZE))
{
printf("%s", buf);
}
else
{
printf("failed to send command.\n");
}
free(buf);
/// test cmd_no_rsp
buf = calloc(BUF_SIZE, 1);
if(!buf)return 0;
if (!cmd_no_rsp("dir /s", &buf, BUF_SIZE))
{
printf("success.\n"); // function provides no response
}
else
{
printf("failed to send command.\n");
}
free(buf);
return 0;
}
cmd_rsp.h
#ifndef CMD_RSP_H
#define CMD_RSP_H
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
//
// Prototype: int __declspec(dllexport) cmd_rsp(const char *command, char **chunk, unsigned int chunk_size)
//
// Description: Executes any command that can be executed in a Windows cmd prompt and returns
// the response via auto-resizing buffer.
/// Note: this function will hang for executables or processes that run and exit
/// without ever writing to stdout.
/// The hang occurs during the call to the ReadFile() function.
//
// Inputs: const char *command - string containing complete command to be sent
// char **chunk - initialized pointer to char array to return results
// size_t size - Initial memory size in bytes char **chunk was initialized to.
//
// Return: 0 for success
// -1 for failure
//
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
int __declspec(dllexport) cmd_rsp(const char *command, char **chunk, unsigned int chunk_size);
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
//
// Prototype: int __declspec(dllexport) cmd_no_rsp(const char *command)
//
// Description: Variation of cmd_rsp that does not wait for a response. This is useful for
// executables or processes that run and exit without ever sending a response to stdout,
// causing cmd_rsp to hang during the call to the ReadFile() function.
//
// Inputs: const char *command - string containing complete command to be sent
//
// Return: 0 for success
// -1 for failure
//
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
int __declspec(dllexport) cmd_no_rsp(const char *command);
#endif
cmd_rsp.c
#include <windows.h>
#include <stdlib.h> // calloc, realloc & free
#include "cmd_rsp.h"
#define BUFSIZE 1000
typedef struct {
/* child process's STDIN is the user input or data entered into the child process - READ */
void * in_pipe_read;
/* child process's STDIN is the user input or data entered into the child process - WRITE */
void * in_pipe_write;
/* child process's STDOUT is the program output or data that child process returns - READ */
void * out_pipe_read;
/* child process's STDOUT is the program output or data that child process returns - WRITE */
void * out_pipe_write;
}IO_PIPES;
// Private prototypes
static int CreateChildProcess(const char *cmd, IO_PIPES *io);
static int CreateChildProcessNoStdOut(const char *cmd, IO_PIPES *io);
static int ReadFromPipe(char **rsp, unsigned int size, IO_PIPES *io);
static char * ReSizeBuffer(char **str, unsigned int size);
static void SetupSecurityAttributes(SECURITY_ATTRIBUTES *saAttr);
static void SetupStartUpInfo(STARTUPINFO *siStartInfo, IO_PIPES *io);
static int SetupChildIoPipes(IO_PIPES *io, SECURITY_ATTRIBUTES *saAttr);
int __stdcall DllMain (HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
switch (fdwReason)
{
case DLL_PROCESS_ATTACH:
/* Respond to DLL loading by initializing the RTE */
if (InitCVIRTE (hinstDLL, 0, 0) == 0) return 0;
break;
case DLL_PROCESS_DETACH:
/* Respond to DLL unloading by closing the RTE for its use */
if (!CVIRTEHasBeenDetached ()) CloseCVIRTE ();
break;
}
/* Return 1 to indicate successful initialization */
return 1;
}
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
//
// Prototype: int __declspec(dllexport) cmd_rsp(const char *command, char **chunk, unsigned int chunk_size)
//
// Description: Executes any command that can be executed in a Windows cmd prompt and returns
// the response via auto-resizing buffer.
//
// Inputs: const char *command - string containing complete command to be sent
// char **chunk - initialized pointer to char array to return results
// size_t chunk_size - Initial memory size in bytes of char **chunk.
//
// Return: 0 for success
// -1 for failure
//
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
int __declspec(dllexport) cmd_rsp(const char *command, char **chunk, unsigned int chunk_size)
{
SECURITY_ATTRIBUTES saAttr;
/// All commands that enter here must contain (and start with) the substring: "cmd.exe /c
/// /////////////////////////////////////////////////////////////////////////////////////////////////////////
/// char cmd[] = ("cmd.exe /c \"dir /s\""); /// KEEP this comment until format used for things like
/// directory command (i.e. two parts of syntax) is captured
/// /////////////////////////////////////////////////////////////////////////////////////////////////////////
const char rqdStr[] = {"cmd.exe /c "};
int len = (int)strlen(command);
char *Command = NULL;
int status = 0;
Command = calloc(len + sizeof(rqdStr), 1);
if(!Command) return -1;
strcat(Command, rqdStr);
strcat(Command, command);
SetupSecurityAttributes(&saAttr);
IO_PIPES io;
if(SetupChildIoPipes(&io, &saAttr) < 0) return -1;
//eg: CreateChildProcess("adb");
if(CreateChildProcess(Command, &io) == 0)
{
// Read from pipe that is the standard output for child process.
ReadFromPipe(chunk, chunk_size, &io);
status = 0;
}
else
{
status = -1;
}
free(Command);
return status;
}
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
//
// Prototype: int __declspec(dllexport) cmd_no_rsp(const char *command)
//
// Description: Variation of cmd_rsp that does not wait for a response. This is useful for
// executables or processes that run and exit without ever sending a response to stdout,
// causing cmd_rsp to hang during the call to the ReadFile() function.
//
// Inputs: const char *command - string containing complete command to be sent
//
// Return: 0 for success
// -1 for failure
//
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
int __declspec(dllexport) cmd_no_rsp(const char *command)
{
/// All commands that enter here must contain (and start with) the substring: "cmd.exe /c
/// /////////////////////////////////////////////////////////////////////////////////////////////////////////
/// char cmd[] = ("cmd.exe /c \"dir /s\""); /// KEEP this comment until format used for things like
/// directory command (i.e. two parts of syntax) is captured
/// /////////////////////////////////////////////////////////////////////////////////////////////////////////
SECURITY_ATTRIBUTES saAttr;
const char rqdStr[] = {"cmd.exe /c "};
int len = (int)strlen(command);
char *Command = NULL;
int status = 0;
Command = calloc(len + sizeof(rqdStr), 1);
if(!Command) return -1;
strcat(Command, rqdStr);
strcat(Command, command);
SetupSecurityAttributes(&saAttr);
IO_PIPES io;
if(SetupChildIoPipes(&io, &saAttr) < 0) return -1;
status = CreateChildProcessNoStdOut(Command, &io);
free(Command);
return status;
}
static int SetupChildIoPipes(IO_PIPES *io, SECURITY_ATTRIBUTES *saAttr)
{
//child process's STDOUT is the program output or data that child process returns
// Create a pipe for the child process's STDOUT.
if (!CreatePipe(&io->out_pipe_read, &io->out_pipe_write, saAttr, 0))
{
return -1;
}
// Ensure the read handle to the pipe for STDOUT is not inherited.
if (!SetHandleInformation(io->out_pipe_read, HANDLE_FLAG_INHERIT, 0))
{
return -1;
}
//child process's STDIN is the user input or data entered into the child process
// Create a pipe for the child process's STDIN.
if (!CreatePipe(&io->in_pipe_read, &io->in_pipe_write, saAttr, 0))
{
return -1;
}
// Ensure the write handle to the pipe for STDIN is not inherited.
if (!SetHandleInformation(io->in_pipe_write, HANDLE_FLAG_INHERIT, 0))
{
return -1;
}
return 0;
}
// Create a child process that uses the previously created pipes for STDIN and STDOUT.
static int CreateChildProcess(const char *cmd, IO_PIPES *io)
{
PROCESS_INFORMATION piProcInfo;
STARTUPINFO siStartInfo;
BOOL bSuccess = FALSE;
// Set up members of the PROCESS_INFORMATION structure.
ZeroMemory(&piProcInfo, sizeof(PROCESS_INFORMATION));
// Set up members of the STARTUPINFO structure.
// This structure specifies the STDIN and STDOUT handles for redirection.
ZeroMemory(&siStartInfo, sizeof(STARTUPINFO));
SetupStartUpInfo(&siStartInfo, io);
// Create the child process.
bSuccess = CreateProcess(NULL,
cmd, // command line
NULL, // process security attributes
NULL, // primary thread security attributes
TRUE, // handles are inherited
CREATE_NO_WINDOW, // creation flags
//CREATE_NEW_CONSOLE, // creation flags
NULL, // use parent's environment
NULL, // use parent's current directory
&siStartInfo, // STARTUPINFO pointer
&piProcInfo); // receives PROCESS_INFORMATION
// If an error occurs, exit the application.
if (!bSuccess)
{
return -1;
}
else
{
// Close handles to the child process and its primary thread.
CloseHandle(piProcInfo.hProcess);
CloseHandle(piProcInfo.hThread);
CloseHandle(io->out_pipe_write);
}
return 0;
}
// Create a child process that uses the previously created pipes for STDIN and STDOUT.
static int CreateChildProcessNoStdOut(const char *cmd, IO_PIPES *io)
{
PROCESS_INFORMATION piProcInfo;
STARTUPINFO siStartInfo;
BOOL bSuccess = FALSE;
// Set up members of the PROCESS_INFORMATION structure.
ZeroMemory(&piProcInfo, sizeof(PROCESS_INFORMATION));
// Set up members of the STARTUPINFO structure.
// This structure specifies the STDIN and STDOUT handles for redirection.
ZeroMemory(&siStartInfo, sizeof(STARTUPINFO));
SetupStartUpInfo(&siStartInfo, io);
// Create the child process.
bSuccess = CreateProcess(NULL,
cmd, // command line
NULL, // process security attributes
NULL, // primary thread security attributes
TRUE, // handles are inherited
CREATE_NO_WINDOW, // creation flags
//CREATE_NEW_CONSOLE, // creation flags
NULL, // use parent's environment
NULL, // use parent's current directory
&siStartInfo, // STARTUPINFO pointer
&piProcInfo); // receives PROCESS_INFORMATION
// If an error occurs, exit the application.
if (!bSuccess)
{
return -1;
}
else
{
// Close handles to the child process and its primary thread.
CloseHandle(piProcInfo.hProcess);
CloseHandle(piProcInfo.hThread);
CloseHandle(io->out_pipe_write);
}
return 0;
}
// Read output from the child process's pipe for STDOUT
// Grow the buffer as needed
// Stop when there is no more data.
static int ReadFromPipe(char **rsp, unsigned int size, IO_PIPES *io)
{
COMMTIMEOUTS ct;
int size_recv = 0;
unsigned int total_size = 0;
unsigned long dwRead;
BOOL bSuccess = TRUE;
char *accum;
char *tmp1 = NULL;
char *tmp2 = NULL;
//Set timeouts for stream
ct.ReadIntervalTimeout = 0;
ct.ReadTotalTimeoutMultiplier = 0;
ct.ReadTotalTimeoutConstant = 10;
ct.WriteTotalTimeoutConstant = 0;
ct.WriteTotalTimeoutMultiplier = 0;
SetCommTimeouts(io->out_pipe_read, &ct);
//This accumulates each read into one buffer,
//and copies back into rsp before leaving
accum = (char *)calloc(1, sizeof(char)); //grow buf as needed
if(!accum) return -1;
memset(*rsp, 0, size);
do
{
//Reads stream from child stdout
bSuccess = ReadFile(io->out_pipe_read, *rsp, size-1, &dwRead, NULL);
if (!bSuccess || dwRead == 0)
{
free(accum);
return 0;//successful - reading is done
}
(*rsp)[dwRead] = 0;
size_recv = (int)strlen(*rsp);
if(size_recv == 0)
{
//should not get here for streaming
(*rsp)[total_size]=0;
return total_size;
}
else
{
//New Chunk:
(*rsp)[size_recv]=0;
//capture increased byte count
total_size += size_recv+1;
//increase size of accumulator
tmp1 = ReSizeBuffer(&accum, total_size);
if(!tmp1)
{
free(accum);
strcpy(*rsp, "");
return -1;
}
accum = tmp1;
strcat(accum, *rsp);
if(total_size > (size - 1))
{ //need to grow buffer
tmp2 = ReSizeBuffer(&(*rsp), total_size+1);
if(!tmp2)
{
free(*rsp);
return -1;
}
*rsp = tmp2;
}
strcpy(*rsp, accum);//refresh rsp
}
}while(1);
}
// return '*str' after number of bytes realloc'ed to 'size'
static char * ReSizeBuffer(char **str, unsigned int size)
{
char *tmp=NULL;
if(!(*str)) return NULL;
if(size == 0)
{
free(*str);
return NULL;
}
tmp = (char *)realloc((char *)(*str), size);
if(!tmp)
{
free(*str);
return NULL;
}
*str = tmp;
return *str;
}
static void SetupSecurityAttributes(SECURITY_ATTRIBUTES *saAttr)
{
// Set the bInheritHandle flag so pipe handles are inherited.
saAttr->nLength = sizeof(SECURITY_ATTRIBUTES);
saAttr->bInheritHandle = TRUE;
saAttr->lpSecurityDescriptor = NULL;
}
static void SetupStartUpInfo(STARTUPINFO *siStartInfo, IO_PIPES *io)
{
siStartInfo->cb = sizeof(STARTUPINFO);
siStartInfo->hStdError = io->out_pipe_write;
siStartInfo->hStdOutput = io->out_pipe_write;
siStartInfo->hStdInput = io->in_pipe_read;
siStartInfo->dwFlags |= STARTF_USESTDHANDLES;
}
Answer: Handling embedded null bytes
ReadFile(, lpBuffer,,,) may read null characters into lpBuffer. Should this occur, much of code's use of str...() would suffer. Code instead needs to keep track of data as a read of "bytes" with a length and not as a string. I'd recommend forming a structure with members unsigned char data[BUF_SIZE] and DWORD sz or size_t sz. This affects code significantly. Effectively replace str...() calls with mem...() ones.
Minor: with using a "byte" buffer rather than a string, the buffer could start with NULL.
// char *accum;
// accum = (char *)calloc(1, sizeof(char));
char *accum = NULL;
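The practical consequence is easy to demonstrate outside C (a Python analogy, not part of the posted code): a child process may legitimately emit NUL bytes on stdout, and any strlen()-based handling silently drops everything after the first one, while byte-count bookkeeping keeps the whole payload.

```python
import subprocess, sys

# Hypothetical child process that writes a NUL byte in the middle of its output.
child = [sys.executable, "-c",
         "import sys; sys.stdout.buffer.write(b'before\\x00after')"]
out = subprocess.run(child, capture_output=True).stdout

print(len(out))                    # 12 -- the full payload, tracked by count
print(len(out.split(b"\x00")[0]))  # 6  -- what strlen()-style code would see
```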
Pipe creation and usage
Although there is much good error handling, cmd_rsp() fails to check the return value of ReadFromPipe(chunk, chunk_size, &io);.
// ReadFromPipe(chunk, chunk_size, &io);
if (ReadFromPipe(chunk, chunk_size, &io) == -1) TBD_Code();
Minor: Using sizeof(char) rather than sizeof *accum obliges a reviewer and maintainer to check the type of accum. In C, code can be simplified:
// accum = (char *)calloc(1, sizeof(char));
accum = calloc(1, sizeof *accum);
Minor: Unclear why code is using unsigned for array indexing rather than the idiomatic size_t. Later code quietly returns this as an int. I'd expect more care changing sign-ness. Else just use int.
// Hmmm
unsigned int total_size = 0;
int size_recv = 0;
CreateProcess usage
Memory Leak:
if(SetupChildIoPipes(&io, &saAttr) < 0) {
free(Command); // add
return -1;
}
Minor: no need for int cast of a value in the size_t range.
// int len = (int)strlen(command);
// Command = calloc(len + sizeof(rqdStr), 1);
Command = calloc(strlen(command) + sizeof(rqdStr), 1);
Method for dynamically growing response buffer
Good and proper function for ReSizeBuffer( ,size == 0);
Bug: when realloc() fails, ReSizeBuffer() and the calling code both free the same memory. Re-design idea: Let ReSizeBuffer() free the data and return a simple fail/success flag for the calling code to test. For the calling code to test NULL-ness is a problem as ReSizeBuffer( ,size == 0) returning NULL is O.K.
Unclear test: if(!(*str)) return NULL;. I would not expect disallowing resizing a buffer that originally pointed to NULL.
if(!(*str)) return NULL; // why?
if(!str) return NULL; // Was this wanted?
Cast not needed for a C compile. Is code also meant for C++?
// tmp = (char *)realloc((char *)(*str), size);
tmp = realloc(*str, size);
For me, I would use the form below and let it handle all edge cases of zeros, overflow, allocation success, free-ing, updates. Be prepared for large buffer needs.
// return 0 on success
int ReSizeBuffer(void **buf, size_t *current_size, int increment);
// or
int ReSizeBuffer(void **buf, size_t *current_size, size_t new_size);
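A related detail when being "prepared for large buffer needs": the posted ReadFromPipe() grows the buffer to exactly the byte count seen so far, so a long stream of chunks pays roughly quadratic copying cost, whereas a geometric (doubling) policy keeps total copy cost linear. A language-neutral sketch of that policy (Python, purely illustrative):

```python
def grown_capacity(capacity, needed):
    # Double until large enough; each byte is then copied O(1) times
    # on average over the whole stream (amortized analysis).
    while capacity < needed:
        capacity *= 2
    return capacity

cap, sizes = 16, []
for total_bytes in (10, 100, 1000, 1001, 5000):
    cap = grown_capacity(cap, total_bytes)
    sizes.append(cap)
print(sizes)   # [16, 128, 1024, 1024, 8192]
```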
Tidbits
Consider avoiding ! when things work. This is a small style issue - I find a ! or != more aligns with failure than success.
// if (!cmd_no_rsp("dir /s", &buf, BUF_SIZE)) {
// printf("success.\n");
if (cmd_no_rsp("dir /s", &buf, BUF_SIZE) == 0) {
printf("success.\n");
With the change to handling piped data as bytes rather than as a string, change variable names away from str... | {
"domain": "codereview.stackexchange",
"id": 29775,
"tags": "c, console, child-process, winapi"
} |
Multithreaded app for writing to a MySQL database | Question: I have a multi-threaded app (WCF) that writes to a MySQL db. I want to use MySQL's built in connection pooling abilities.
As far as I can tell, to do that, I should set MySqlConnectionStringBuilder.MinimumPoolSize to some value approximately equal to the number of threads I connect. Then Open/Close the connection for each call to the db. Is this correct? If not, what is the proper way to use pooling?
Here is the function I use to send data to the db. It gets called by many threads throughout the day. Hundreds, maybe thousands of times per day.
private static void execute(MySqlCommand cmd)
{
try
{
MySqlConnectionStringBuilder cnx = new MySqlConnectionStringBuilder();
cnx.Server = MySqlServer;
cnx.Database = Database;
cnx.UserID = UserId;
cnx.Password = Pass;
cnx.MinimumPoolSize = 100;
cmd.Connection = new MySqlConnection(cnx.ToString());
cmd.Connection.Open();
cmd.ExecuteNonQuery();
}
catch (MySqlException e)
{
System.Console.WriteLine(e.Message);
}
finally
{
if (null != cmd.Connection)
{
cmd.Connection.Close();
}
}
}
Answer: Yes, your code will use pooling.
You might, for the sake of server efficiency, use a smaller MinimumPoolSize (say, 20), and a larger MaximumPoolSize (something a little larger than your maximum number of threads).
You probably don't need all those connections all the time. If your threads do anything significant between their uses of the function you've shown, a significant number of your connections will be idle.
Unless you're sure you need all those connections all the time, you should reduce the RAM and thread burden on your server with a smaller MinimumPoolSize. | {
"domain": "codereview.stackexchange",
"id": 6991,
"tags": "c#, mysql, connection-pool"
} |
Model From World File Not Visible In Gazebo When Using Roslaunch | Question:
Hi all,
I'm having what appears to be a common problem for several users: using roslaunch to open a world file in Gazebo. Like others who have asked, I've carefully gone through several tutorials, including "Using roslaunch to start Gazebo, world files and URDF models".
In general everything works normally. I'm using OS X (El Capitan) with ROS Jade and Gazebo version 8.
Carefully following the tutorial listed above, I've created the packages and directory structure that is needed. I have a "TestRobot_ws" workspace that contains the packages "TestRobot_description" (with the directories "meshes" and "urdf") and TestRobot_gazebo" (with the directories "launch", "models" and "worlds"). The appropriate files are in each directory.
In a terminal I "source /opt/ros/jade/setup.bash". Then I cd into the TestRobot_ws directory and "source devel/setup.bash". Because the collada model for my world file is in the TestRobot_gazebo/models directory, I "export GAZEBO_MODEL_PATH=~/devel/TestRobot_ws/src/TestRobot_gazebo/models:$GAZEBO_MODEL_PATH".
If I cd into the TestRobot_gazebo directory and "gazebo worlds/TestRobot.world", gazebo will start my world file and display its collada file.
If I "roslaunch TestRobot_gazebo TestRobot.launch", Gazebo will start. My robot model is correctly spawned into Gazebo, but my world file is NOT correctly displayed. The collada model is not visible. However, if I look at the left pane of the Gazebo UI, in the "models" section of the "worlds" tab my collada model is listed (along with it's link), as well as the ground object and my spawned robot model.
I've spent several days trying many suggestions and tutorials. Nothing so far has worked. I'm wondering if it is a pathing issue. If I "echo $GAZEBO_MODEL_PATH" it returns ~devel/TestRobot_ws/src/TestRobot_gazebo/models: (which is correct). "ROSPACK LIST" returns TestRobot_description and TestRobot_gazebo, so it sees the packages.
In short: Gazebo will correctly launch and display the collada model in the world file. But using ROSLAUNCH and the launch file does not display the collada file (although it is listed in the "models" section of the "worlds" tab in the Gazebo UI left panel).
Thoughts/ideas/suggestions??? There is probably a simple answer, but I cannot figure it out. Below is the code for my WORLD file and LAUNCH file.
Any help would be GREATLY appreciated,
Thanks!
WORLD FILE (works correctly when launched from Gazebo)
<?xml version="1.0" ?>
<sdf version="1.4">
<world name="default">
<include>
<uri>model://ground_plane</uri>
</include>
<include>
<uri>model://sun</uri>
</include>
<!-- The collada model in the TestRobot_gazebo/models directory -->
<model name="Test_House_Model">
<pose>0 0 0 0 0 0</pose>
<static>true</static>
<link name="Test_House_Model_link">
<visual name="Test_House_Model_visual">
<geometry>
<mesh><uri>model://models/Test_House_Model.dae</uri></mesh>
</geometry>
</visual>
</link>
</model>
</world>
</sdf>
LAUNCH FILE (does not display the collada model when called from ROSLAUNCH, but does spawn the robot)
<launch>
<!-- launch an empty Gazebo world -->
<include file="$(find gazebo_ros)/launch/empty_world.launch">
<!-- insert the modified world with the collada model -->
<arg name="world_name" value="$(find TestRobot_gazebo)/worlds/TestRobot.world"/>
</include>
<!-- Spawn a robot into Gazebo -->
<node name="spawn_urdf" pkg="gazebo_ros" type="spawn_model" args="-file $(find TestRobot_description)/urdf/TestRobot.urdf -urdf -model TestRobot" />
</launch>
Originally posted by MikeC on ROS Answers with karma: 3 on 2016-12-28
Post score: 0
Answer:
Just something to check: ~devel/TestRobot_ws/src/TestRobot_gazebo/models is not a valid path (missing a /). Also: does $GAZEBO_MODEL_PATH really contain the ~? I would expect that to be resolved by the shell (ie: bash) to /home/$USER (or whatever the OSX equivalent is). Try $HOME.
Originally posted by gvdhoorn with karma: 86574 on 2016-12-29
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by MikeC on 2016-12-29:
Hi gvdhoorn,
It now works.
Here is the solution:
Taking your advice, I inserted **$HOME** into the path. So it becomes: export GAZEBO_MODEL_PATH=$HOME/devel/TestRobot_ws/src/TestRobot_gazebo/models:$GAZEBO_MODEL_PATH. (A trailing "/" does not matter.)
THANK YOU! THANK YOU! THANK YOU!
MikeC
Comment by gvdhoorn on 2016-12-29:
It's not the trailing '/' that matters, but the one between ~ and devel/TestRobot_ws/... It could be that things would've worked even with ~, if you'd inserted the / at that point.
But good to hear you got things to work.
Comment by MikeC on 2016-12-30:
Gvdhoorn, once again, you are correct. I tried it using "~/devel/..." and it also works. Sometimes we are so far into the forest that we cannot see the trees. Thank you again! | {
"domain": "robotics.stackexchange",
"id": 26594,
"tags": "ros"
} |
Is it possible to control phase of a light radiation? | Question: Is it possible to control phase of a light radiation?
Assume I have two point sources A and B. They emit spherical waves of a green light (or any other color).
Is it possible to make such physical system where the phase of B is shifted by some $\theta$ in comparison to A.
Answer: The challenge is to make two sources that are highly coherent but spatially separated. The easy ways to do that generally involve starting with a single source and splitting its output (for example with beam splitters, partially reflective mirrors, or fiber optic couplers) and routing the beams to different locations.
If creating your "two" point sources that way is acceptable in your scenario, then varying the phase of one or both sources is usually easily accomplished. For example, you can use moving mirrors to physically vary the length of the path from your beam splitter to one of the source locations, or you can vary the optical length of the path with something like a lithium niobate phase modulator. | {
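To put a number on the moving-mirror approach (a back-of-envelope sketch, assuming a medium of refractive index $n \approx 1$): if one beam's optical path is longer than the other's by $\Delta L$, the relative phase between the two sources is

```latex
% Relative phase from an optical path-length difference \Delta L,
% with vacuum wavelength \lambda and refractive index n of the medium
\theta = k\,\Delta L = \frac{2\pi n\,\Delta L}{\lambda}
```

So for green light ($\lambda \approx 532\,\text{nm}$), a path-length change of only about $266\,\text{nm}$ already shifts the phase by $\pi$, which is why both the moving-mirror and phase-modulator schemes need sub-micron path stability.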
"domain": "physics.stackexchange",
"id": 87072,
"tags": "optics, electromagnetic-radiation"
} |
What is the general formula for stacked DFT on the same signal? | Question: Let us assume a finite, aperiodic signal $x[n]$ to which we apply the DFT $M$ times in succession, as such:
DFT{DFT{DFT{...DFT{$x[n]$}...}}}
How can we calculate this directly instead of applying the DFT $M$ times?
Answer: Let $\text{DFT}_2 = \text{DFT}(\text{DFT}(...))$. Then,
$$
\begin{align}
\text{DFT}_2(x[n]) &= N \cdot x[-n] \\
\text{DFT}_3(x[n]) &= N \cdot \text{DFT}(x[-n]) \\
\text{DFT}_4(x[n]) &= N^2 \cdot x[n]
\end{align}
$$
and thus
$$
\text{DFT}_M(x[n]) =
N^{\lfloor M/2 \rfloor} \cdot \begin{cases}
\sum_{n=0}^{N-1} x[n\cdot (-1)^{\lfloor M/2 \rfloor}] e^{-j2 \pi k n /N}, & M=\text{odd} \\\
x[n\cdot(-1)^{\lfloor M/2 \rfloor}], & M=\text{even}
\end{cases}
$$
Note that $\text{DFT}_4(x[n])/N^2 = x[n]$.
Testing
import numpy as np
from numpy.fft import fft
def dft(x):
N = len(x)
out = np.zeros(N, dtype='complex128')
for k in range(N):
for n in range(N):
out[k] += x[n] * np.exp(-2j*np.pi * k * n / N)
return out
def dft_M(x, M=1):
N = len(x)
sign = (-1)**(M//2)
if sign == -1:
x_flip = np.zeros(N, dtype=x.dtype)
x_flip[0] = x[0]
x_flip[1:] = x[1:][::-1]
x = x_flip
if M % 2 == 0:
out = N**(M//2) * x
else:
out = N**(M//2) * dft(x)
return out
for N in (128, 129):
x = np.random.randn(N) + 1j*np.random.randn(N)
assert np.allclose(fft(x), dft(x))
assert np.allclose(fft(fft(x)), dft_M(x, 2))
assert np.allclose(fft(fft(fft(x))), dft_M(x, 3))
assert np.allclose(fft(fft(fft(fft(x)))), dft_M(x, 4)) | {
"domain": "dsp.stackexchange",
"id": 10374,
"tags": "dft"
} |