Assistance requested to refactor a section of TypeScript code for readability | Question: Code
if (source.length > 0) {
if (
source[0].mandatory === true &&
this.allInputs.documentsType !== "mandatory"
) {
item.mandatory = !item.mandatory;
} else if (
source[0].mandatory === false &&
this.allInputs.documentsType === "mandatory"
) {
item.mandatory = !item.mandatory;
}
} else {
if (
target[0].mandatory === true &&
this.allInputs.documentsType === "mandatory"
) {
item.mandatory = !item.mandatory;
} else if (
target[0].mandatory === false &&
this.allInputs.documentsType !== "mandatory"
) {
item.mandatory = !item.mandatory;
}
}
It all works; the problem I have is that this code is not very readable. Can someone please help me refactor it?
Thanks
Answer: I don't know much JavaScript or TypeScript, but this seems to be a basic problem.
So I will give it a try in a phased manner:
Level 1: Clean up by merging conditions
if (source.length > 0) {
if ((source[0].mandatory === true &&
this.allInputs.documentsType !== "mandatory") ||
(source[0].mandatory === false &&
this.allInputs.documentsType === "mandatory"))
{
item.mandatory = !item.mandatory;
}
} else {
if ((target[0].mandatory === true &&
this.allInputs.documentsType === "mandatory") ||
(target[0].mandatory === false &&
this.allInputs.documentsType !== "mandatory"))
{
item.mandatory = !item.mandatory;
}
}
Level 2: Merging more aggressively
if ((source.length > 0 && ((source[0].mandatory === true &&
    this.allInputs.documentsType !== "mandatory") ||
    (source[0].mandatory === false &&
    this.allInputs.documentsType === "mandatory"))) ||
    (source.length === 0 && ((target[0].mandatory === true &&
    this.allInputs.documentsType === "mandatory") ||
    (target[0].mandatory === false &&
    this.allInputs.documentsType !== "mandatory"))))
{
    item.mandatory = !item.mandatory;
}
Level 3: Using a shared variable to remove the repeated conditions
Edit: I would have liked to simplify this further, but this assumes source and target behave the same way, which they do not.
var obj = null;
if (source.length > 0) {
obj = source[0];
} else {
obj = target[0];
}
if ((obj.mandatory === true &&
this.allInputs.documentsType !== "mandatory") ||
(obj.mandatory === false &&
this.allInputs.documentsType === "mandatory"))
{
item.mandatory = !item.mandatory;
}
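Going one step beyond the answer above: each pair of conditions has an exclusive-or shape, so the flag should toggle exactly when `obj.mandatory` disagrees (for `source`) or agrees (for `target`) with `documentsType === "mandatory"`. A sketch of that idea; the standalone helper `shouldToggle` is my own name, not from the question:

```typescript
// Hypothetical helper illustrating the XOR shape of the original conditions.
// Parameter names mirror the question's code.
function shouldToggle(
    usingSource: boolean,  // true when source.length > 0
    mandatory: boolean,    // source[0].mandatory or target[0].mandatory
    documentsType: string
): boolean {
    const typeIsMandatory = documentsType === "mandatory";
    // For source the flag toggles when the two disagree;
    // for target, when they agree.
    return usingSource
        ? mandatory !== typeIsMandatory
        : mandatory === typeIsMandatory;
}
```

The call site then reduces to a single line, e.g. `if (shouldToggle(source.length > 0, obj.mandatory, this.allInputs.documentsType)) item.mandatory = !item.mandatory;`.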
I hope this makes clear what I am suggesting.
Edit: As Scott correctly pointed out, my last optimization wrongly treats source and target as the same; please pardon my mistake. | {
"domain": "codereview.stackexchange",
"id": 41396,
"tags": "javascript, typescript"
} |
A different interpretation of $E=mc^2$ but no idea what it might mean | Question: I wanted $E=mc^2$ to look like an 'inverse square' sort of a formula. So this is what I derived:
$E=mc^2$, so;
$m=E/c^2$,
assuming $E=E_1E_2$ (I am aware that when you decompose energy into two multipliers the units will be different, but in a purely mathematical sense there should be a way of doing it) and that there is a constant $A$. So;
$m=A(E_1E_2)/c^2$.
And I translated this into English as such: the mass between two energies is inversely proportional to the square of the speed of light between those two energies.
And if you use $E=(hc)/\lambda$ equation, the previous equation becomes:
$m=A(h_1/\lambda_1)(h_2/\lambda_2)$, (Maybe $h_1 = h_2$).
Do these mean anything to someone who actually knows physics? :)
Answer:
$E=E_1 E_2$ (I am aware that when you decompose energy into two
multipliers the units will be different but in purely mathematical
sense there should be a way of doing it)
Unfortunately, the fact that the unit $\rm J$ is not equal to the unit $\rm J^2$ is more than an inconvenience - it is a fatal flaw in your mathematics, and everything after this assumption is nonsense. In a purely mathematical sense, you can, of course, express any number as the product of two numbers, but you can't necessarily assign them any physical significance that seems convenient.
It's easier to see why this is so by using a quantity that you may have more intuition about - length:
I have a rectangular plot of land. I know that the diagonal measures 150 meters, but I want to know the area to find out how much corn seed I need to plant it.
Well, I know that the formula for the area of a rectangle is $A = s_1 s_2$, where $s_1$ and $s_2$ are the lengths of the two sides.
I also know that if I have a length $L$, then it should be possible to express it as two lengths $L = L_1 L_2$,
And I translated this into English as such: The diagonal between two lengths of a rectangle is equal to the product of the two lengths.
So $150~\rm m = 15~\rm m \cdot 10~\rm m$, therefore my plot must have an area of $150~\rm m^2$. (I am aware that the units will be different, but in a purely mathematical sense there should be a way of doing it)
Hopefully, it's obvious that this is nonsense - given the diagonal length, the area of that rectangle could be anything from $0~\rm m^2$ to $11,250~\rm m^2$. The mere fact that you can multiply two numbers to get a third number doesn't mean you can arbitrarily assign units, and from that deduce meaning.
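As a sanity check on that range (my own derivation, not part of the original answer): for a rectangle with sides $s_1$, $s_2$ and diagonal $d$, the Pythagorean theorem together with the AM-GM inequality gives

$$A = s_1 s_2 \le \frac{s_1^2 + s_2^2}{2} = \frac{d^2}{2} = \frac{(150~\rm m)^2}{2} = 11{,}250~\rm m^2,$$

with equality when the rectangle is a square, and $A \to 0$ as one side shrinks to zero.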
If $E = E_1 E_2$, then $E$, $E_1$, and $E_2$ cannot all have units of $\rm J$, not because I'm a stickler for notation, but because they cannot all be measurements of energy. Assuming they are will lead you to an incorrect conclusion. | {
"domain": "physics.stackexchange",
"id": 21662,
"tags": "energy, mass"
} |
What are some interesting applications of the skyline problem? | Question: You are given a set of $n$ rectangles in no particular order. They have varying widths and heights, but their bottom edges are collinear, so that they look like buildings on a skyline. For each rectangle, you’re given the $x$ position of the left edge, the $x$ position of the right edge, and the height. Your task is to draw an outline around the set of rectangles so that you can see what the skyline would look like when silhouetted at night.
Source
In practice, I imagine the drawing can take different forms; for example, if a building is described by a triple (leftX, height, rightX), then the output can be a left-to-right ordered list $\{x_{1},h_{1},x_{2},h_{2},x_{3},h_{3},x_{4},h_{4},...\}$ where each $x_{i}$ represents a vertical edge and each $h_{i}$ represents a horizontal edge.
This problem is clearly defined and there are some quite efficient algorithmic solutions; however, I could not find the motivation for this problem anywhere. Why would anyone want to solve it? Are there any interesting applications where being able to solve this problem efficiently is important?
Answer: The "skyline problem" is an example of finding the Pareto front of a set of points in 2-D space. More generally it's an example of multi-objective optimization which has many applications.
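To make the connection concrete, here is a minimal sketch of the skyline computation itself (my own illustration, not from the answer), producing the alternating x/height list described in the question:

```typescript
// Sketch of a skyline sweep: sort the building edges by x, apply all
// edges at each x, and emit a point whenever the maximum active height
// changes. O(n^2) worst case because of the linear max/removal; a
// priority queue would make it O(n log n).
type Building = [number, number, number]; // [leftX, height, rightX]

function skyline(buildings: Building[]): number[] {
    // Edge events: a negative height marks a left (opening) edge.
    const events: [number, number][] = [];
    for (const [l, h, r] of buildings) {
        events.push([l, -h], [r, h]);
    }
    events.sort((a, b) => a[0] - b[0]);

    const active: number[] = [0]; // heights currently "in force"
    const outline: number[] = [];
    let prev = 0;
    let i = 0;
    while (i < events.length) {
        const x = events[i][0];
        // Apply every edge at this x before sampling the height.
        while (i < events.length && events[i][0] === x) {
            const h = events[i][1];
            if (h < 0) active.push(-h);
            else active.splice(active.indexOf(h), 1);
            i++;
        }
        const cur = Math.max(...active);
        if (cur !== prev) {
            outline.push(x, cur); // vertical edge at x, then horizontal at cur
            prev = cur;
        }
    }
    return outline;
}
```

For example, `skyline([[1, 3, 3], [2, 4, 6], [7, 2, 9]])` yields `[1, 3, 2, 4, 6, 0, 7, 2, 9, 0]`.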
You might find this thread relevant. | {
"domain": "cs.stackexchange",
"id": 4464,
"tags": "reference-request, computational-geometry, graphics, applied-theory"
} |
Why is the direction of cathode rays independent of the position of anode? | Question: The following statement is from the book Concepts of Physics Volume 2, by Dr. H.C. Verma, chapter 41 - "Electric Current through Gases", topic "Cathode Rays", page 343:
Cathode rays are emitted normally from the cathode surface. Their direction is independent of the position of anode.
I found the explanation for the first sentence in the section on perpendicular emission of the Wikipedia article on the Crookes tube. However, I don't understand the second sentence in the quoted statement. It's said that the direction of the cathode rays is independent of the position of the anode. Initially, I thought this statement was incorrect, as the negatively charged stream, the cathode rays, must get deflected towards the positively charged anode. But after seeing the experimental setup of a Crookes tube (the one with the conical flask), it seems the direction of cathode rays is indeed independent of the position of the anode:
Image cropped from Crookes tube - Wikipedia
One possible explanation I came up with is: the negatively charged electrons from the cathode experience a strong repulsive force from the cathode, due to which they gain very high speeds. Because of this they are not significantly deflected by the anode. But if this were the case, I wonder why we must have an anode inside the discharge tube at all, rather than carrying on with only the cathode. So I think this kind of explanation is incorrect.
To put my question in a nutshell:
Why is the direction of cathode rays independent of the position of anode?
Answer: The electric field created by the anode is too weak to significantly influence the emission of the electrons from the cathode, which is mainly due to thermal effects, hits by the gas molecules, electron repulsion among themselves, etc.
However, once the electrons are released from the cathode, they do feel the anode's field and start accelerating towards the anode. The anode is necessary in order to drive the electric current by removing the electrons from the tube. Otherwise, a negative electron cloud would accumulate in the tube and prevent further electron emission. | {
"domain": "physics.stackexchange",
"id": 72989,
"tags": "electricity, electrons, plasma-physics, air"
} |
Analyzing Minesweeper Probabilities | Question: Calculating probabilities in Minesweeper might sound like an easy task, but I've seen so many probability calculators that are either incorrect, horribly slow, or have ugly code (or all three), so I have to share my code for this.
This code is used within my Minesweeper Flags online game by the AIs AI_Hard, AI_Extreme3 and AI_Nightmare.
This code is written in Java 6, but don't let that scare you away. It would not be too difficult to update to more modern Java versions.
So, how does it work?
Let's say this board has 6 mines. A simple approach would be to recursively place all 6 mines and count the number of combinations where each field has a mine. Although that would work for a small board like this, for a bigger board with the same pattern and 51 mines, that's not optimal.
So, what if you just place the mines around the 1 and the 3 and use combinatorics for the rest of the board where we don't have any clues? That would help with the bigger board with the same pattern, but what if you have plenty of clues distributed around the board like this, that's where things are starting to get too complex and too slow for most algorithms.
### My approach
I will explain my approach by guiding you through a manual calculation of the 4x4 example board above.
Board representation. This board can be represented like:
abcd
e13f
ghij
klmn
Rules. So we have a 1 and a 3 and we know there are 6 mines in total on the board. This can be represented as:
a+b+c+e+g+h+i = 1
b+c+h+i+d+f+j = 3
a+b+c+d+e+f+g+h+i+j+k+l+m+n = 6
This is what I call rules (the FieldRule class in my code)
Field Groups. By grouping fields into which rules they are in, it can be refactored into:
(a+e+g) + (b+c+h+i) = 1
(b+c+h+i) + (d+f+j) = 3
(a+e+g) + (b+c+h+i) + (d+f+j) + (k+l+m+n) = 6
These groups I call Field Groups (The FieldGroup class in the code)
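The grouping step can be sketched independently of the Java classes. This is my own illustration (not the FieldGroupSplit code): fields that appear in exactly the same set of rules end up in the same group:

```typescript
// Partition fields into groups keyed by which rules contain them.
// Each rule is just the list of field names it constrains.
function groupFields(rules: string[][]): Map<string, string[]> {
    const groups = new Map<string, string[]>();
    const allFields = new Set(rules.flat());
    for (const f of allFields) {
        // Key = membership bitmask across the rules, e.g. "101".
        const key = rules.map(r => (r.includes(f) ? "1" : "0")).join("");
        if (!groups.has(key)) groups.set(key, []);
        groups.get(key)!.push(f);
    }
    return groups;
}
```

Fed the three rules of the 4x4 example, this yields exactly the four groups above: (a+e+g), (b+c+h+i), (d+f+j), and (k+l+m+n).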
The RootAnalyzeImpl class stores a collection of rules, and when it is getting solved it begins by splitting the fields into groups, then creates a GameAnalyze object to do the rest of the work.
GameAnalyze. It starts by trying to simplify things (we'll come to that later); when it can't do so any more, it picks a group and assigns values to it. Here I pick the (a+e+g) group. I find that it's best to start with a small group.
(a+e+g) = 0 is chosen and a new instance of GameAnalyze is created, which adds (a+e+g) = 0 to its knownValues.
Simplify (FieldRule.simplify method). Now we remove groups with a known value and try to deduce new known values for groups.
(a+e+g) + (b+c+h+i) = 1
(a+e+g) is known, so (b+c+h+i) = 1 remains which makes the rule solved. (b+c+h+i) = 1 is added to knownValues. Next rule:
(b+c+h+i) + (d+f+j) = 3
(b+c+h+i) = 1 is known so we have left (d+f+j) = 2, making also this rule solved and another FieldGroup known. Last rule:
(a+e+g) + (b+c+h+i) + (d+f+j) + (k+l+m+n) = 6
The only unknown remaining here is (k+l+m+n) which after removing the other groups has to have the value 3, because (a+e+g) + (b+c+h+i) + (d+f+j) = 0 + 1 + 2.
Solution. So what we know is:
(a+e+g) = 0
(b+c+h+i) = 1
(d+f+j) = 2
(k+l+m+n) = 3
As all rules have been solved and all groups have a value, this is known as a solution (Solution class).
Doing the same for (a+e+g) = 1 leads, after simplification to another solution:
(a+e+g) = 1
(b+c+h+i) = 0
(d+f+j) = 3
(k+l+m+n) = 6 - 3 - 1 = 2
Solution combinations. Now we have two solutions where all the groups have values. When a solution is created, it calculates the combinations possible for that rule. This is done by using nCr (Binomial coefficient).
For the first solution we have:
(a+e+g) = 0 --> 3 nCr 0 = 1 combination
(b+c+h+i) = 1 --> 4 nCr 1 = 4 combinations
(d+f+j) = 2 --> 3 nCr 2 = 3 combinations
(k+l+m+n) = 3 --> 4 nCr 3 = 4 combinations
Multiplying these combinations we get 1*4*3*4 = 48 combinations for this solution.
As for the other solution:
(a+e+g) = 1 --> 3 nCr 1 = 3
(b+c+h+i) = 0 --> 4 nCr 0 = 1
(d+f+j) = 3 --> 3 nCr 3 = 1
(k+l+m+n) = 2 --> 4 nCr 2 = 6
3*1*1*6 = 18 combinations.
So a total of 48 + 18 = 66 combinations.
Probabilities. The total combinations where a field in the (k+l+m+n) group is a mine is:
In the first solution: 3 mines, 4 fields, 48 combinations for the solution.
In the second solution: 2 mines, 4 fields, 18 combinations for the solution.
\$3/4 * 48 + 2/4 * 18 = 45\$
To calculate the probability we take this value divided by the total combinations of the entire board and we get: \$45 / 66 = 0.681818181818\$
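The arithmetic of this walkthrough can be checked with a short sketch (my own illustration, not the Java library below):

```typescript
// Recompute the 4x4 example: combinations per solution via nCr, then the
// mine probability for a field in the (k+l+m+n) group.
function nCr(n: number, r: number): number {
    if (r < 0 || r > n) return 0;
    let result = 1;
    for (let i = 0; i < r; i++) {
        result = (result * (n - i)) / (i + 1);
    }
    return result;
}

// Each solution assigns [mines, groupSize] to the four field groups,
// in the order (a+e+g), (b+c+h+i), (d+f+j), (k+l+m+n).
type Assignment = [number, number][];
const solutions: Assignment[] = [
    [[0, 3], [1, 4], [2, 3], [3, 4]],
    [[1, 3], [0, 4], [3, 3], [2, 4]],
];

const combos = solutions.map(s =>
    s.reduce((acc, [mines, size]) => acc * nCr(size, mines), 1));
const total = combos.reduce((a, b) => a + b, 0); // 48 + 18 = 66

// Probability that a given field in group 3, (k+l+m+n), is a mine:
const pMine = solutions.reduce((acc, s, i) => {
    const [mines, size] = s[3];
    return acc + (mines / size) * combos[i];
}, 0) / total; // 45 / 66 ≈ 0.6818
```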
Common problems in other algorithms:
They treat the "global rule" in a special way, instead of treating it just like another rule
They treat fields individually instead of bunching them up into FieldGroups
This leads to most algorithms being unable to solve The Super board of death in reasonable time. My approach? About four seconds. (I'm not kidding!)
### Class Summary
Not included here, some of them will be posted for review separately
Combinatorics.java: Contains methods for combinatorics.
FieldGroupSplit.java: A static method and a class to store the result for separating field groups.
RuntimeTimeoutException.java: An exception extending RuntimeException
RootAnalyze.java: Just an interface that RootAnalyzeImpl implements.
SimplifyResult.java: Enum for the result of FieldRule.simplify
SolvedCallback.java: Interface for letting GameAnalyze inform whenever it has found a solution
Included below
FieldGroup.java: A collection of fields. As the field type is generic, it can be MinesweeperField, String, or whatever.
FieldRule.java: A rule, consisting of a number of FieldGroups that equals a number
GroupValues.java: For assigning values to FieldGroups. Map<FieldGroup, Integer>
RootAnalyzeImpl.java: Where it all begins. Contains a set of rules that should be solved. Also used to access the results when solve is completed.
GameAnalyze.java: For branching and recursively solving and trying values to groups.
Solution.java: Stores a way of assigning all the groups.
All the code can be found at http://github.com/Zomis/Minesweeper-Analyze
# Code
FieldGroup.java: (51 lines, 1158 bytes)
/**
* A group of fields that have common rules
*
* @author Simon Forsberg
* @param <T> The field type
*/
public class FieldGroup<T> extends ArrayList<T> {
private static final long serialVersionUID = 4172065050118874050L;
private double probability = 0;
private int solutionsKnown = 0;
public FieldGroup(Collection<T> fields) {
super(fields);
}
public double getProbability() {
return this.probability;
}
public int getSolutionsKnown() {
return this.solutionsKnown;
}
void informAboutSolution(int rValue, Solution<T> solution, double total) {
if (rValue == 0)
return;
this.probability = this.probability + solution.nCr() / total * rValue / this.size();
this.solutionsKnown++;
}
public String toString() {
if (this.size() > 8) {
return "(" + this.size() + " FIELDS)";
}
StringBuilder str = new StringBuilder();
for (T field : this) {
if (str.length() > 0)
str.append(" + ");
str.append(field);
}
return "(" + str.toString() + ")";
}
}
FieldRule.java: (201 lines, 5326 bytes)
/**
* A constraint of a number of fields or {@link FieldGroup}s that should have a specific sum
*
* @author Simon Forsberg
* @param <T> Field type
*/
public class FieldRule<T> {
private final T cause;
private final List<FieldGroup<T>> fields;
private int result = 0;
/**
* Create a copy of an existing rule.
*
* @param copyFrom Rule to copy
*/
public FieldRule(FieldRule<T> copyFrom) {
this.fields = new ArrayList<FieldGroup<T>>(copyFrom.fields); // Deep copy? Probably not. FieldGroup don't change much.
this.result = copyFrom.result;
this.cause = copyFrom.cause;
}
/**
* Create a rule from a list of fields and a result (create a new FieldGroup for it)
*
* @param cause The reason for why this rule is added (optional, may be null)
* @param rule Fields that this rule applies to
* @param result The value that should be forced for the fields
*/
public FieldRule(T cause, Collection<T> rule, int result) {
this.fields = new ArrayList<FieldGroup<T>>();
this.fields.add(new FieldGroup<T>(rule));
this.result = result;
this.cause = cause;
}
FieldRule(T cause, FieldGroup<T> group, int result) {
this.cause = cause;
this.fields = new ArrayList<FieldGroup<T>>();
this.fields.add(group);
this.result = result;
}
boolean checkIntersection(FieldRule<T> rule) {
if (rule == this)
return false;
List<FieldGroup<T>> fieldsCopy = new ArrayList<FieldGroup<T>>(fields);
List<FieldGroup<T>> ruleFieldsCopy = new ArrayList<FieldGroup<T>>(rule.fields);
for (FieldGroup<T> groupA : fieldsCopy) {
for (FieldGroup<T> groupB : ruleFieldsCopy) {
if (groupA == groupB)
continue;
FieldGroupSplit<T> splitResult = FieldGroupSplit.split(groupA, groupB);
if (splitResult == null)
continue; // nothing to split
FieldGroup<T> both = splitResult.getBoth();
FieldGroup<T> onlyA = splitResult.getOnlyA();
FieldGroup<T> onlyB = splitResult.getOnlyB();
this.fields.remove(groupA);
this.fields.add(both);
if (!onlyA.isEmpty()) {
this.fields.add(onlyA);
}
rule.fields.remove(groupB);
rule.fields.add(both);
if (!onlyB.isEmpty()) {
rule.fields.add(onlyB);
}
return true;
}
}
return false;
}
public T getCause() {
return this.cause;
}
public Collection<FieldGroup<T>> getFieldGroups() {
return new ArrayList<FieldGroup<T>>(this.fields);
}
public int getFieldsCountInGroups() {
int fieldsCounter = 0;
for (FieldGroup<T> group : fields) {
fieldsCounter += group.size();
}
return fieldsCounter;
}
public int getResult() {
return this.result;
}
public FieldGroup<T> getSmallestFieldGroup() {
if (this.fields.isEmpty())
return null;
FieldGroup<T> result = this.fields.get(0);
for (FieldGroup<T> group : this.fields) {
if (group.size() < result.size()) {
result = group;
}
}
return result;
}
public boolean isEmpty () {
return fields.isEmpty() && result == 0;
}
public double nCr() {
if (this.fields.size() != 1)
throw new IllegalStateException("Rule has more than one group.");
return Combinatorics.nCr(this.getFieldsCountInGroups(), this.result);
}
public SimplifyResult simplify(Map<FieldGroup<T>, Integer> knownValues) {
if (this.isEmpty()) {
return SimplifyResult.NO_EFFECT;
}
Iterator<FieldGroup<T>> it = fields.iterator();
int totalCount = 0;
while (it.hasNext()) {
FieldGroup<T> group = it.next();
Integer known = knownValues.get(group);
if (known != null) {
it.remove();
result -= known;
}
else totalCount += group.size();
}
// a + b + c = -2 is not a valid rule.
if (result < 0) {
return SimplifyResult.FAILED_NEGATIVE_RESULT;
}
// a + b = 42 is not a valid rule
if (result > totalCount) {
return SimplifyResult.FAILED_TOO_BIG_RESULT;
}
// (a + b) = 1 or (a + b) = 0 would give a value to the (a + b) group and simplify things.
if (fields.size() == 1) {
knownValues.put(fields.get(0), result);
fields.clear();
result = 0;
return SimplifyResult.SIMPLIFIED;
}
// (a + b) + (c + d) = 0 would give the value 0 to all field groups and simplify things
if (result == 0) {
for (FieldGroup<T> field : fields) {
knownValues.put(field, 0);
}
fields.clear();
result = 0;
return SimplifyResult.SIMPLIFIED;
}
// (a + b) + (c + d) = 4 would give the value {Group.SIZE} to all Groups.
if (totalCount == result) {
for (FieldGroup<T> field : fields) {
knownValues.put(field, result * field.size() / totalCount);
}
return SimplifyResult.SIMPLIFIED;
}
return SimplifyResult.NO_EFFECT;
}
@Override
public String toString() {
StringBuilder rule = new StringBuilder();
for (FieldGroup<T> field : this.fields) {
if (rule.length() > 0) {
rule.append(" + ");
}
rule.append(field.toString());
}
rule.append(" = ");
rule.append(result);
return rule.toString();
}
}
GameAnalyze.java: (85 lines, 2276 bytes)
public class GameAnalyze<T> {
private final SolvedCallback<T> callback;
private final GroupValues<T> knownValues;
private final List<FieldRule<T>> rules;
GameAnalyze(GroupValues<T> knownValues, List<FieldRule<T>> unsolvedRules, SolvedCallback<T> callback) {
this.knownValues = knownValues == null ? new GroupValues<T>() : new GroupValues<T>(knownValues);
this.rules = unsolvedRules;
this.callback = callback;
}
private void removeEmptyRules() {
Iterator<FieldRule<T>> it = rules.iterator();
while (it.hasNext()) {
if (it.next().isEmpty())
it.remove();
}
}
private boolean simplifyRules() {
boolean simplifyPerformed = true;
while (simplifyPerformed) {
simplifyPerformed = false;
for (FieldRule<T> ruleSimplify : rules) {
SimplifyResult simplifyResult = ruleSimplify.simplify(knownValues);
if (simplifyResult == SimplifyResult.SIMPLIFIED) {
simplifyPerformed = true;
}
else if (simplifyResult.isFailure()) {
return false;
}
}
}
return true;
}
void solve() {
if (Thread.interrupted())
throw new RuntimeTimeoutException();
if (!this.simplifyRules()) {
return;
}
this.removeEmptyRules();
this.solveRules();
if (this.rules.isEmpty()) {
callback.solved(Solution.createSolution(this.knownValues));
}
}
private void solveRules() {
if (this.rules.isEmpty())
return;
FieldGroup<T> chosenGroup = this.rules.get(0).getSmallestFieldGroup();
if (chosenGroup == null) {
throw new IllegalStateException("Chosen group is null.");
}
if (chosenGroup.size() == 0) {
throw new IllegalStateException("Chosen group is empty. " + chosenGroup);
}
for (int i = 0; i <= chosenGroup.size(); i++) {
GroupValues<T> mapCopy = new GroupValues<T>(this.knownValues);
mapCopy.put(chosenGroup, i);
List<FieldRule<T>> rulesCopy = new ArrayList<FieldRule<T>>(); // deep copy!
for (FieldRule<T> rule : this.rules) {
rulesCopy.add(new FieldRule<T>(rule));
}
new GameAnalyze<T>(mapCopy, rulesCopy, this.callback).solve();
}
}
}
GroupValues.java: (32 lines, 687 bytes)
public class GroupValues<T> extends HashMap<FieldGroup<T>, Integer> {
private static final long serialVersionUID = -107328884258597555L;
private int bufferedHash = 0;
public GroupValues(GroupValues<T> values) {
super(values);
}
public GroupValues() {
super();
}
}
RootAnalyzeImpl.java: (267 lines, 7690 bytes)
public class RootAnalyzeImpl<T> implements SolvedCallback<T>, RootAnalyze<T> {
private final List<FieldGroup<T>> groups = new ArrayList<FieldGroup<T>>();
private final List<FieldRule<T>> originalRules = new ArrayList<FieldRule<T>>();
private final List<FieldRule<T>> rules = new ArrayList<FieldRule<T>>();
private final List<Solution<T>> solutions = new ArrayList<Solution<T>>();
private double total;
private boolean solved = false;
@Override
public double getTotal() {
return this.total;
}
private RootAnalyzeImpl(Solution<T> known) {
for (Entry<FieldGroup<T>, Integer> sol : known.getSetGroupValues().entrySet()) {
this.rules.add(new FieldRule<T>(null, sol.getKey(), sol.getValue()));
}
}
public RootAnalyzeImpl() {}
public void addRule(FieldRule<T> rule) {
this.rules.add(rule);
}
/**
* Get the list of simplified rules used to perform the analyze
*
* @return List of simplified rules
*/
@Override
public List<FieldRule<T>> getRules() {
return new ArrayList<FieldRule<T>>(this.rules);
}
@Override
public FieldGroup<T> getGroupFor(T field) {
for (FieldGroup<T> group : this.groups) {
if (group.contains(field)) {
return group;
}
}
return null;
}
/**
* Return a random solution that satisfies all the rules
*
* @param random Random object to perform the randomization
* @return A list of fields randomly selected that is guaranteed to be a solution to the constraints
*
*/
@Override
public List<T> randomSolution(Random random) {
if (random == null) {
throw new IllegalArgumentException("Random object cannot be null");
}
List<Solution<T>> solutions = new LinkedList<Solution<T>>(this.solutions);
if (this.getTotal() == 0) {
throw new IllegalStateException("Analyze has 0 combinations: " + this);
}
double rand = random.nextDouble() * this.getTotal();
Solution<T> theSolution = null;
while (rand > 0) {
if (solutions.isEmpty()) {
throw new IllegalStateException("Solutions is suddenly empty. (This should not happen)");
}
theSolution = solutions.get(0);
rand -= theSolution.nCr();
solutions.remove(0);
}
return theSolution.getRandomSolution(random);
}
private RootAnalyzeImpl<T> solutionToNewAnalyze(Solution<T> solution, List<FieldRule<T>> extraRules) {
Collection<FieldRule<T>> newRules = new ArrayList<FieldRule<T>>();
for (FieldRule<T> rule : extraRules) {
// Create new rules, because the older ones may have been simplified already.
newRules.add(new FieldRule<T>(rule));
}
RootAnalyzeImpl<T> newRoot = new RootAnalyzeImpl<T>(solution);
newRoot.rules.addAll(newRules);
return newRoot;
}
@Override
public RootAnalyze<T> cloneAddSolve(List<FieldRule<T>> extraRules) {
List<FieldRule<T>> newRules = this.getOriginalRules();
newRules.addAll(extraRules);
RootAnalyzeImpl<T> copy = new RootAnalyzeImpl<T>();
for (FieldRule<T> rule : newRules) {
copy.addRule(new FieldRule<T>(rule));
}
copy.solve();
return copy;
}
/**
* Get the list of the original, non-simplified, rules
*
* @return The original rule list
*/
@Override
public List<FieldRule<T>> getOriginalRules() {
return this.originalRules.isEmpty() ? this.getRules() : new ArrayList<FieldRule<T>>(this.originalRules);
}
private double getTotalWith(List<FieldRule<T>> extraRules) {
if (!this.solved)
throw new IllegalStateException("Analyze is not solved");
double total = 0;
for (Solution<T> solution : this.getSolutions()) {
RootAnalyzeImpl<T> root = this.solutionToNewAnalyze(solution, extraRules);
root.solve();
total += root.getTotal();
}
return total;
}
@Override
public double getProbabilityOf(List<FieldRule<T>> extraRules) {
if (!this.solved)
throw new IllegalStateException("Analyze is not solved");
return this.getTotalWith(extraRules) / this.getTotal();
}
@Override
public List<Solution<T>> getSolutions() {
if (!this.solved)
throw new IllegalStateException("Analyze is not solved");
return new ArrayList<Solution<T>>(this.solutions);
}
/**
* Separate fields into field groups. Example <code>a + b + c = 2</code> and <code>b + c + d = 1</code> becomes <code>(a) + (b + c) = 2</code> and <code>(b + c) + (d) = 1</code>. This method is called automatically when calling {@link #solve()}
*/
public void splitFieldRules() {
if (rules.size() <= 1)
return;
boolean splitPerformed = true;
while (splitPerformed) {
splitPerformed = false;
for (FieldRule<T> a : rules) {
for (FieldRule<T> b : rules) {
boolean result = a.checkIntersection(b);
if (result) {
splitPerformed = true;
}
}
}
}
}
public void solve() {
if (this.solved) {
throw new IllegalStateException("Analyze has already been solved");
}
List<FieldRule<T>> original = new ArrayList<FieldRule<T>>(this.rules.size());
for (FieldRule<T> rule : this.rules) {
original.add(new FieldRule<T>(rule));
}
this.originalRules.addAll(original);
this.splitFieldRules();
this.total = 0;
new GameAnalyze<T>(null, rules, this).solve();
for (Solution<T> solution : this.solutions) {
solution.setTotal(total);
}
if (!this.solutions.isEmpty()) {
for (FieldGroup<T> group : this.solutions.get(0).getSetGroupValues().keySet()) {
// All solutions should contain the same fieldgroups.
groups.add(group);
}
}
this.solved = true;
}
@Override
public List<FieldGroup<T>> getGroups() {
if (!this.solved) {
Set<FieldGroup<T>> agroups = new HashSet<FieldGroup<T>>();
for (FieldRule<T> rule : this.getRules()) {
agroups.addAll(rule.getFieldGroups());
}
return new ArrayList<FieldGroup<T>>(agroups);
}
List<FieldGroup<T>> grps = new ArrayList<FieldGroup<T>>(this.groups);
Iterator<FieldGroup<T>> it = grps.iterator();
while (it.hasNext()) {
// remove empty fieldgroups
if (it.next().isEmpty()) {
it.remove();
}
}
return grps;
}
@Override
public List<T> getFields() {
if (!this.solved) {
throw new IllegalStateException("Analyze is not solved");
}
List<T> allFields = new ArrayList<T>();
for (FieldGroup<T> group : this.getGroups()) {
allFields.addAll(group);
}
return allFields;
}
@Override
public void solved(Solution<T> solved) {
this.solutions.add(solved);
this.total += solved.nCr();
}
@Override
public List<T> getSolution(double solution) {
if (Math.rint(solution) != solution || solution < 0 || solution >= this.getTotal()) {
throw new IllegalArgumentException("solution must be an integer between 0 and total (" + this.getTotal() + ")");
}
if (solutions.isEmpty()) {
throw new IllegalStateException("There are no solutions.");
}
List<Solution<T>> solutions = new ArrayList<Solution<T>>(this.solutions);
Solution<T> theSolution = solutions.get(0);
while (solution > theSolution.nCr()) {
solution -= theSolution.nCr();
solutions.remove(0);
theSolution = solutions.get(0);
}
return theSolution.getCombination(solution);
}
@Override
public Iterable<Solution<T>> getSolutionIteration() {
return this.solutions;
}
}
Solution.java: (135 lines, 3778 bytes)
/**
* Represents a solution for a Minesweeper analyze.
*
* @author Simon Forsberg
* @param <T>
*/
public class Solution<T> {
public static <T> Solution<T> createSolution(GroupValues<T> values) {
return new Solution<T>(values).nCrPerform();
}
private static <T> double nCr(Entry<FieldGroup<T>, Integer> rule) {
return Combinatorics.nCr(rule.getKey().size(), rule.getValue());
}
private double mapTotal;
private double nCrValue;
private final GroupValues<T> setGroupValues;
private Solution(GroupValues<T> values) {
this.setGroupValues = values;
}
private List<T> combination(List<Entry<FieldGroup<T>, Integer>> grpValues, double combination) {
if (grpValues.isEmpty()) {
return new LinkedList<T>();
}
grpValues = new LinkedList<Entry<FieldGroup<T>, Integer>>(grpValues);
Entry<FieldGroup<T>, Integer> first = grpValues.remove(0);
double remaining = 1;
for (Entry<FieldGroup<T>, Integer> fr : grpValues) {
remaining = remaining * nCr(fr);
}
double fncr = nCr(first);
if (combination >= remaining * fncr) {
throw new IllegalArgumentException("Not enough combinations. " + combination + " max is " + (remaining * fncr));
}
double combo = combination % fncr;
List<T> list = Combinatorics.listCombination(combo, first.getValue(), first.getKey());
if (!grpValues.isEmpty()) {
List<T> recursive = combination(grpValues, Math.floor(combination / fncr));
if (recursive == null) {
return null;
}
list.addAll(recursive);
}
return list;
}
public Solution<T> copyWithoutNCRData() {
return new Solution<T>(this.setGroupValues);
}
public List<T> getCombination(double combinationIndex) {
return combination(new LinkedList<Map.Entry<FieldGroup<T>,Integer>>(this.setGroupValues.entrySet()), combinationIndex);
}
public double getCombinations() {
return this.nCrValue;
}
public double getProbability() {
if (this.mapTotal == 0)
throw new IllegalStateException("The total number of solutions on map is unknown");
return this.nCrValue / this.mapTotal;
}
public List<T> getRandomSolution(Random random) {
List<T> result = new ArrayList<T>();
for (Entry<FieldGroup<T>, Integer> ee : this.setGroupValues.entrySet()) {
List<T> group = new ArrayList<T>(ee.getKey());
Collections.shuffle(group, random);
for (int i = 0; i < ee.getValue(); i++) {
result.add(group.remove(0));
}
}
return result;
}
public GroupValues<T> getSetGroupValues() {
return new GroupValues<T>(setGroupValues);
}
public double nCr() {
return this.nCrValue;
}
private Solution<T> nCrPerform() {
double result = 1;
for (Entry<FieldGroup<T>, Integer> ee : this.setGroupValues.entrySet()) {
result = result * Combinatorics.nCr(ee.getKey().size(), ee.getValue());
}
this.nCrValue = result;
return this;
}
void setTotal(double total) {
this.mapTotal = total;
for (Entry<FieldGroup<T>, Integer> ee : this.setGroupValues.entrySet()) {
ee.getKey().informAboutSolution(ee.getValue(), this, total);
}
}
@Override
public String toString() {
StringBuilder str = new StringBuilder();
for (Entry<FieldGroup<T>, Integer> ee : this.setGroupValues.entrySet()) {
str.append(ee.getKey() + " = " + ee.getValue() + ", ");
}
str.append(this.nCrValue + " combinations (" + this.getProbability() + ")");
return str.toString();
}
}
#Usage / Test
Tests and usage can be found on GitHub. Especially see General2DTest.
#Questions
Even though this code is quite fast already, can it be made even faster? (Polynomial time anyone?)
Does another implementation of this exist? Can any libraries be used to calculate this?
Besides that, any general comments about this code and/or this approach?
Answer: GroupValues
GroupValues doesn't seem to serve much of a purpose beyond what you get from Map; it adds no functionality beyond an unused field. In practice, all I think it is achieving is obscuring what is actually going on.
FieldGroup
That you are extending ArrayList, rather than composing with it, is a code smell. I find myself looking at informAboutSolution. There's a calculation there that doesn't make any sense if the list is empty. Furthermore, because you are publicly a list, anybody can come along and remove all of your entries, or insert more entries, or do all manner of silliness that would screw up your calculation.
I think you should tease out the running tally aspect of what's going on here. At a minimum....
void informAboutSolution(int rValue, Solution<T> solution, double total) {
// codestyle: always use braces
if (rValue == 0) {
return;
}
double probabilityChange = solution.nCr() / total * rValue / fields.size();
this.runningTally.update(probabilityChange);
}
This really needs better variable names - what are you really doing here? You are computing the probability that any cell in this group has a bomb, given a solution which allocates some number of bombs to the group.
class RunningTally {
private double probability = 0;
private int solutionsKnown = 0;
public double getProbability() {
return this.probability;
}
public int getSolutionsKnown() {
return this.solutionsKnown;
}
public void update(double change) {
probability += change;
solutionsKnown++;
}
}
void informAboutSolution(int bombsAllocated, Solution<T> solution, double total) {
// codestyle: always use braces
if (bombsAllocated == 0) {
return;
}
int cellsAvailable = fields.size();
double probabilityBombed = (double) bombsAllocated / cellsAvailable; // cast avoids integer division
double solutionPercentage = solution.nCr() / total;
double probabilityChange = solutionPercentage * probabilityBombed;
runningTally.update(probabilityChange);
}
Allocating objects and doing work together is a code smell -- not necessarily wrong, but definitely suspicious. If you are concerned enough with string concatenation to think that a StringBuilder is a good idea, it probably makes sense to allow multiple objects to share the same StringBuilder.
void writeTo(StringBuilder str) {
final String START_OBJECT = "(";
final String END_OBJECT = ")";
final String SEPARATOR = " + ";
str.append(START_OBJECT);
if (fields.size() > 8) {
str.append(fields.size());
str.append(" FIELDS");
} else {
final int cursor = str.length();
for (T field : fields) {
// This is really a two state FSM...
if (str.length() > cursor) {
str.append(SEPARATOR);
}
str.append(field);
}
}
str.append(END_OBJECT);
}
GameAnalyze
Don't care for the name much - class names are usually nouns, so maybe GameAnalyzer or GameAnalysis.
void solve() {
if (Thread.interrupted())
throw new RuntimeTimeoutException();
if (!this.simplifyRules()) {
return;
}
this.removeEmptyRules();
this.solveRules();
if (this.rules.isEmpty()) {
callback.solved(Solution.createSolution(this.knownValues));
}
}
Major props for checking for interruption. Because it's not obvious that solve() is recursive, it looks as though the interrupt check is in the wrong place. It might make more sense to put that check within solveRules.
I don't like cancelling the solve by throwing a runtime exception, and don't understand why you would choose to implement a unique RuntimeException for it at that. If you feel like you cannot throw a checked exception here, perhaps you should have a state object that various bits use to decide if they should continue processing.
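One way to realise that "state object" idea is a small cancellation token that the recursive solve polls, unwinding normally instead of throwing. This is my own sketch, not code from the project; the class and method names are invented:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A minimal "state object" for cooperative cancellation: the recursive
// solver polls the token and unwinds quietly instead of throwing a
// custom RuntimeException.
public class CancelToken {
    private final AtomicBoolean cancelled = new AtomicBoolean(false);

    public void cancel() {
        cancelled.set(true);
    }

    public boolean isCancelled() {
        return cancelled.get();
    }

    // Example of a solver step that checks the token at each level of
    // recursion; returns how many levels it actually completed.
    public static int solveLevels(CancelToken token, int depth, int cancelAfter) {
        int completed = 0;
        for (int level = 0; level < depth; level++) {
            if (token.isCancelled()) {
                return completed; // unwind quietly, no exception needed
            }
            completed++;
            if (completed == cancelAfter) {
                token.cancel(); // stands in for an external timeout firing
            }
        }
        return completed;
    }
}
```

The token could be handed to each recursive GameAnalyze instance, and a timeout thread would call cancel() instead of interrupting.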
It's not at all clear to me that Solution.createSolution is in the right place -- it seems more flexible to pass the known values to a callback that knows how to apply the solution.
One aspect of the recursive approach that I really don't like is you are missing the opportunity to solve positions in parallel. At a minimum, I would want to consider using a different core to create Solutions than to decompose the problem. Without profiling, it's difficult to say if this would make a difference, but your current approach doesn't support that refactoring. For example, consider a listener that takes knownValues; you could have one version that publishes solutions in line, and an alternative that puts the knownValues into a queue, where another thread will pick them up to convert them.
Solution
Solution is trying to be two different things at once - it's the calculator, and the solution. Those two bits should be teased apart, which will make everything that is going on in there much clearer.
The use of Random in Solution seems really strange; you might look to squashed encoding, which would essentially give you a deterministic ordering of the possible "random" solutions available.
RootAnalyzeImpl
There are an awful lot of functions in here that are throwing IllegalStateExceptions. That suggests to me that you have one or more classes hiding in here: one which composes the solved state, and another the unsolved state, which have different verbs associated with them.
This is diseased:
if (Math.rint(solution) != solution || solution < 0 || solution >= this.getTotal()) {
throw new IllegalArgumentException("solution must be an integer between 0 and total (" + this.getTotal() + ")");
}
If solution must be an integer, why on earth does it have type double? You seem to be using a combinatorics library that produces non-integer values, and then letting that decision contaminate everything else. Surely it makes more sense to treat your combinatorics problems as integer (or long) based, and firewall away any libraries that don't agree?
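To make that last point concrete, here is one hedged sketch of an exact, integer-based nCr using BigInteger; the class and method names are mine, not part of the reviewed code:

```java
import java.math.BigInteger;

public class ExactCombinatorics {
    // Exact n-choose-r using BigInteger, multiplying and dividing
    // incrementally so intermediate values stay as small as possible.
    public static BigInteger nCr(int n, int r) {
        if (r < 0 || r > n) {
            return BigInteger.ZERO;
        }
        r = Math.min(r, n - r); // nCr(n, r) == nCr(n, n - r)
        BigInteger result = BigInteger.ONE;
        for (int i = 1; i <= r; i++) {
            // Each intermediate result is itself a binomial coefficient,
            // so the division below is always exact.
            result = result.multiply(BigInteger.valueOf(n - r + i))
                           .divide(BigInteger.valueOf(i));
        }
        return result;
    }
}
```

With exact arithmetic the "integer between 0 and total" check becomes a type constraint rather than a Math.rint guard.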
"domain": "codereview.stackexchange",
"id": 8200,
"tags": "java, performance, algorithm, combinatorics, minesweeper"
} |
Is there a running hash algorithm that can efficiently handle arbitrary updates to a file's contents? | Question: This question is about file-hashing/fingerprinting algorithms (similar to SHA-1 and MD5 and so on). Those algorithms are handy because they give you a small (and fixed-sized) hash code for any file, which can be later used to efficiently determine if that file is different from another file, and (if we are willing to ignore the unlikely possibility of hash-collisions) also whether two files are the same.
One small downside to computing the hash/fingerprint for a file is that you have to read the entire file in order to do so; if the file is very large (e.g. gigabytes or more) this can be an expensive operation.
A good way to avoid that expense is to compute the file's hash code as you are writing it to the disk, and store the hash code with the file. You can even resume updating the hash code later on, if/when you append more bytes to the end of the file, and (assuming no bugs or filesystem corruption) you'll have the file's fingerprint/hash cheaply available to you at all times.
However, the above "update as you go" approach seems like it may break down if you want to update the file in other ways besides appending -- in particular, if you want to truncate the file, or overwrite some existing bytes within the file with new values, you might have to then re-read the entire file in order to update the fingerprint/hash to the appropriate new value.
My question is, is there a type of hashing/fingerprinting algorithm that can efficiently handle file-truncation and byte-overwrite operations, and still provides reasonably good-quality hashing/fingerprinting? ("efficiently" in this case means that one could perform one of these operations on the file and then compute the correct new hash/fingerprint without having to re-read other parts of the file)
Answer: One way to achieve it is to keep a dynamic balanced tree augmented with hashes. This will give you logarithmic slowdown for overwrites and truncation, and also will work with any hashing algorithm.
Let the file be partitioned into blocks of fixed size, let $H$ be a hash function, and let $h_t$ be the value stored in a node $t$. If $t$ is a leaf, then $h_t$ is the hash of the corresponding block. If $r$ is the parent of $a$ and $b$, then $h_r = H(h_a, h_b)$.
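To make the tree idea concrete, here is a minimal sketch in Java (my own construction, not part of the original answer), restricted for simplicity to a fixed, power-of-two number of equal-sized blocks; a full solution would use a dynamic balanced tree so that appends and truncation are also logarithmic:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Minimal hash tree over a fixed number of equal-sized blocks.
// Overwriting one block only rehashes O(log n) nodes on the root path.
public class HashTree {
    private final byte[][] nodes; // 1-based heap layout: nodes[1] is the root
    private final int leafCount;
    private final MessageDigest digest;

    public HashTree(byte[][] blocks) {
        this.leafCount = blocks.length; // assumed to be a power of two
        try {
            this.digest = MessageDigest.getInstance("SHA-256");
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 is guaranteed in the JDK", e);
        }
        this.nodes = new byte[2 * leafCount][];
        for (int i = 0; i < leafCount; i++) {
            nodes[leafCount + i] = digest.digest(blocks[i]); // leaf hashes
        }
        for (int i = leafCount - 1; i >= 1; i--) {
            nodes[i] = hashPair(nodes[2 * i], nodes[2 * i + 1]); // internal nodes
        }
    }

    // Overwrite block blockIndex and rehash only the ancestors of its leaf.
    public void updateBlock(int blockIndex, byte[] newContent) {
        int node = leafCount + blockIndex;
        nodes[node] = digest.digest(newContent);
        for (node /= 2; node >= 1; node /= 2) {
            nodes[node] = hashPair(nodes[2 * node], nodes[2 * node + 1]);
        }
    }

    public byte[] rootHash() {
        return nodes[1].clone();
    }

    private byte[] hashPair(byte[] left, byte[] right) {
        digest.reset();
        digest.update(left);
        digest.update(right);
        return digest.digest();
    }
}
```

Overwriting one block costs one leaf rehash plus O(log n) internal rehashes, instead of re-reading the whole file.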
"domain": "cs.stackexchange",
"id": 14882,
"tags": "efficiency, hashing"
} |
Fetching a post from the database based on a query parameter | Question: I get an id via Get request to fetch an object from the database.
I have used mysqli prepared statements to avoid any security problems. As I am new to these stuff I would like a confirmation that implemented the logic correctly and safe.
User goes to
website.com/post.php?post=130
In post.php i do the following:
Check if user sent a value in "post" parameter, otherwise redirect to the start page:
if(strlen($_GET["post"]) == 0){
header("Location: index.php");
exit();
}
then i use the paramter to fetch the object from the database:
$requestedID = $_GET["post"];
if(is_numeric($requestedID)){
if($stmt = $mysqli->prepare("SELECT * FROM posts WHERE id =?")) {
$stmt->bind_param('s', $requestedID);
$stmt->execute();
$result = $stmt->get_result();
if($result->num_rows > 0) {
$row = $result->fetch_assoc();
// logic for the content
}else{
header("Location: index.php");
exit();
}
} else {
header("Location: index.php");
exit();
}
}else {
header("Location: index.php");
exit();
}
Is my mysqli code secure? Is my handling of GET parameter secure enough? Do you see anything that concerns you?
Answer: There are two things that jump out at me:
You assume $_GET['post'] exists.
If someone just goes to post.php without specifying a parameter, $_GET['post'] is not set -- and any attempt to read it would trigger a notice like "Undefined index: post". Checking the length won't make that notice go away; even the act of getting the length starts with the assumption that your parameter exists.
You really ought to put that variable into a known state before you make any assumptions about it. It might be as simple as, say:
$requestedID = isset($_GET['post']) ? $_GET['post'] : null;
You don't even need the strlen check; null and '' aren't numeric, so they'd both fail the is_numeric check.
(Course, the bigger issue is that you have error reporting set to ignore notices. You should generally be developing with error reporting turned all the way up, even if you turn it back down for production.)
You repeat the same error-case code three times. If you ever want to change how you handle nonexistent or unspecified post IDs, you have to change the code in three places.
That could be fixed by changing the flow a bit.
$requestedID = isset($_GET["post"]) ? $_GET['post'] : null;
$row = null;
if (is_numeric($requestedID)) {
if ($stmt = $mysqli->prepare("SELECT * FROM posts WHERE id =?")) {
$stmt->bind_param('s', $requestedID);
$stmt->execute();
$result = $stmt->get_result();
# oh...this too. $result can be false if something went wrong.
if ($result && $result->num_rows > 0) {
$row = $result->fetch_assoc();
}
}
}
// $row is only set (and truthy!) if you succeeded
if ($row) {
// logic for the content
}
else {
header("Location: index.php");
// exit(); not necessary if this is the last line of the script
} | {
"domain": "codereview.stackexchange",
"id": 23725,
"tags": "php, security, mysqli"
} |
Which areas of chemistry require calculus? | Question: I didn't have much chemistry in school/college, and am now building up my knowledge again. I am studying maths in parallel, building up to calculus.
I noticed that the introductory chemistry books (even at the college level) don't use any calculus. However, more advanced books do tend to use calculus.
Can anyone tell me which areas of chemistry require (or make use of) calculus?
Answer: Analytical chemistry : To predict, for example, the $\ce{pH}$, or which moieties will complex, and also to infer statistics thereof.
Electrochemistry : The Nernst-Planck equation is challenging. But just to calculate the concentration of some things, taking care of different kinds of reactants, you'll need to be good at calculus, depending on what you do. Nyquist plots are not really difficult to use, but the theory behind them is quite complex (it's not a pun) if you look at the Nyquist stability criterion.
Organometallic Chemistry : To calculate the oxidation state of a metal, for example, or a TOF (turnover frequency) or a TON (turnover number); this also relates to catalysis.
Thermodynamics : You need good math skills to do thermodynamics, and also for mass transfer, heat transfer, and everything close to process chemistry, such as the McCabe-Thiele method for distillation; this is not hard, but you do need calculus.
Quantum Chemistry : Here comes the hardest part (as far as I know). You need to be very good at linear algebra, able to solve differential equations, have some skill in analysis to calculate integrals, and so on.
Kinetics : Calculus here is not really hard, but sometimes really weird. Some skill in algebra also helps, as does being able to solve differential equations.
If you're picky, you may find other parts of chemistry in which calculus is used, but the main ones are in this post, especially quantum chemistry and process chemistry.
Finally, it also depends on your education or experience level: the higher the level you study at, the better your calculus skills will need to be. You'll need to be critical and to know the theory behind a hypothesis, so you can verify whether what the software tells you can be considered true or false.
"domain": "chemistry.stackexchange",
"id": 6549,
"tags": "theoretical-chemistry"
} |
How does ROS2 select the default QoS profile? | Question:
I have a ROS2 based python code, which mentions nothing about the QoS Settings. I assume that it must be taking some default profile.
How can I view this default profile ? I mean where is the source of this profile ? Is it xml based ? Can I also edit it ?
Originally posted by aks on ROS Answers with karma: 667 on 2018-07-25
Post score: 0
Answer:
The defaults are defined here in rmw, and are described here. The default profiles are wrapped accordingly in rclpy (example), or you can customise the QoS settings like this. Some RMW implementations support xml file configuration in which case the rmw_qos_profile_system_default profile can be used to pass through to that configuration.
Originally posted by dhood with karma: 621 on 2018-07-25
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by aks on 2018-07-25:
Thanks for the quick explanation @dhood.
One question : If nothing at all is mentioned in the code (about QoS settings), e.g. talker.py, does it always use the default QoS, or does it mean that it doesn't use any QoS in this case?
Comment by dhood on 2018-07-25:
without it being specified, pub/sub will use the profile rmw_qos_profile_default, services will use rmw_qos_profile_services_default, etc.
Comment by morten on 2021-10-22:
Which does rviz use? I am assuming rmw_qos_profile_default. I have a lidar, for example, where the points are invisible unless I manually change the QoS to best effort.
"domain": "robotics.stackexchange",
"id": 31358,
"tags": "ros, ros2, xml, dds"
} |
First Time Programming Sockets | Question: The following code was my first attempt at creating and manipulating socket variables. I compiled and ran the code through cmd.
What are the most common ways to organize methods in both a server and client program?
Client program:
import java.util.Scanner;
import java.io.InputStreamReader;
import java.io.BufferedWriter;
import java.io.PrintWriter;
import java.io.OutputStreamWriter;
import java.io.PrintStream;
import java.io.IOException;
import java.net.Socket;
public class Client_Example_Bank {
public static void main (String[] args) throws IOException {
// user connect / disconnect int
int conInt = 1;
int complInt = 1;
int money = 0;
int sTalk = 1;
// Scanner is needed to accept data from the user.
Scanner scanUI = new Scanner(System.in);
// a socket which will direct the client to the server using the name or IP address and the port number that the server will also use.
Socket s = new Socket("DESKTOP-1HPRSDS", 8901);
// A second scanner will collect data from the server.
Scanner scanSI = new Scanner(s.getInputStream());
// Create a printstream object to pass data to the server.
PrintStream ps = new PrintStream(s.getOutputStream());
// initialize variables for data sent between client and server
String clientString = " ";
String serverString = " ";
try {
// main while loop running while the connection is still in progress
while (conInt == 1){
// while the server is talking
while (sTalk == 1){
serverString = scanSI.next();
// a is the character I chose to let the client know when it is time to respond
// if not a the data being sent needs to be displayed on the client side.
if (serverString.equalsIgnoreCase("a")){
sTalk = 0;
}
else {
System.out.println(serverString);
}
}
// clear the scanner after all information has been sent and the server has indicated with
// the character a that it is waiting on input.
scanSI = new Scanner(s.getInputStream());
// while the server is not talking and is waiting for the client to give info
while (sTalk == 0){
System.out.println("waiting for user input...");
clientString = scanUI.next();
ps.println(clientString);
sTalk = 1;
}
}
}catch (Exception e){
System.out.println("Goodbye");
s.close();
}
}
}
Server Program:
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.PrintWriter;
import java.io.IOException;
import java.net.Socket;
import java.net.ServerSocket;
import java.util.Date;
import java.io.InputStreamReader;
import java.util.Scanner;
import java.io.PrintStream;
public class Server_Example_Bank {
public static void main (String[] args) throws IOException {
// variables for while loops
int conInt = 1;
int clientTalk = 0;
int serverTalk = 0;
int step = 0;
int money = 0;
int moneyChange = 0;
// create a server socket so that clients can connect through the servers specified port
ServerSocket ss = new ServerSocket(8901);
// this will accept incoming connections
Socket cs = ss.accept();
// collects data that a client is trying to pass to server
Scanner sc = new Scanner (cs.getInputStream());
// declare variable to hold client data
String clientString = " ";
// A variable to send data back to the client
PrintStream p = new PrintStream(cs.getOutputStream());
// listening and talking method for client/server with bank processes
do {
// while client is not talking or when app is started
while (clientTalk == 0){
mainMenu(p);
clientTalk = 1;
}
// while client is talking take user input and put it into a string variable
while (clientTalk == 1){
System.out.println("Waiting.for.user.input...");
clientString = sc.next();
System.out.println("user.input.=."+clientString);
clientTalk = 2;
}
// if user input is b (stands for balance)
if (clientString.equalsIgnoreCase("b")){
p.println(" ");
p.println("Your.Balance.Is: "+money);
p.println(" ");
}
// if user input is d (stands for deposit)
if (clientString.equalsIgnoreCase("d")){
p.println(" ");
p.println("Deposit.ammount:");
p.println("a");
clientString = sc.next();
// parseInt from the string of user input to make sure the value is an integer
moneyChange = Integer.parseInt(clientString);
money = money + moneyChange;
clientString = "";
p.println("Your.new.total.is: "+money);
p.println(" ");
}
// if user input is w (stands for withdraw)
if (clientString.equalsIgnoreCase("w")){
p.println(" ");
p.println("Enter.Withdraw.Ammount");
p.println("a");
clientString = sc.next();
// parseInt from the string of user input to make sure the value is an integer
moneyChange = Integer.parseInt(clientString);
// if there is not enough in the account to make the withdraw
if ((money - moneyChange)< 0){
p.println("Not.Enough.Funds");
}
else {
money = money - moneyChange;
p.println("Your.new.total.is: "+money);
}
p.println(" ");
}
// if user has completed their transaction but is on the main menu
if (clientString.equalsIgnoreCase("t")){
conInt = 1;
}
// clear client string
clientString = "";
// this starts the second conversation between the client and server to find out if the
// user wants to complete another transaction in their account now that one has been completed
while (clientTalk == 2){
p.println(" ");
p.println("Did.you.want.to.make.another.transaction?.y.or.n");
p.println("a");
clientString = sc.next();
System.out.println("User entered "+clientString);
if (clientString.equalsIgnoreCase("n")){
closeConnection(cs);
}
else if (clientString.equalsIgnoreCase("y")){
conInt = 0;
clientTalk = 0;
}
else {
p.println("Invalid Entry");
}
p.println(" ");
}
// clear client string
clientString = "";
}while(conInt == 0);
System.out.println("Disconnecting from client");
closeConnection(cs);
}
// close method
public static void closeConnection(Socket skt){
try {
skt.close();
}catch(Exception e){
System.out.println("Client Disconnected");
}
}
// main menu method
public static void mainMenu(PrintStream pS){
pS.println(" ");
pS.println(" ");
pS.println("Welcome.to.your.bank.account.");
pS.println("What.would.you.like.to.do.today?");
pS.println(" ");
pS.println("View.Balance: type.b");
pS.println("Deposit.Funds: type.d");
pS.println("Withdraw.Funds: type.w");
pS.println("Transaction.Complete: type.t");
pS.println(" ");
pS.println("a");
}
}
Answer: You appear to be new to Java. For one thing, you are using integers to store ones and zeros to be used as flags. Java has a boolean type to handle that. If you need to store more than true or false, then you can use an Enum to signify the separate "states" you wish to convey.
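As an illustration of the Enum suggestion, here is a hedged sketch (the names are mine, not from the original code) of the server's implicit 0/1/2 clientTalk states as an enum:

```java
// Replacing the 0/1/2 "clientTalk" int with an enum makes each state
// self-describing and lets the compiler reject meaningless values.
public class ConversationState {

    public enum Turn {
        SERVER_SHOWS_MENU,
        WAITING_FOR_CLIENT,
        SERVER_RESPONDS
    }

    // The implicit state machine from the server loop, made explicit.
    public static Turn next(Turn current) {
        switch (current) {
            case SERVER_SHOWS_MENU:
                return Turn.WAITING_FOR_CLIENT;
            case WAITING_FOR_CLIENT:
                return Turn.SERVER_RESPONDS;
            default:
                return Turn.SERVER_SHOWS_MENU;
        }
    }
}
```

A switch over such an enum also replaces the chain of `if (clientTalk == ...)` comparisons with names a reader can follow.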
You might consider using separate threads for reading and writing to and from the server. However, your current design does not really need this, since it's a simple back and forth of messages in pre-ordained sequence. More complicated scenarios could be handled by having separate threads, one reading from the user, another sending messages to server, a third reading messages from server.
Your while (sTalk == 0) loop should not be a loop. In fact, it is a common pattern in your code for you to put in loops that never actually loop. Most of your "while" loops do not need to be there, none of them will ever repeat the loop. They are really just "if" statements.
You use the Scanner class; a more robust solution would use a messaging library like protobuf, or JSON, or HTTP, or use some sort of protocol to organize the data being sent into messages. The three I listed are popular, but the basic idea is that you define what a "message" looks like and you always send data as a "message". That way you would not be making assumptions about what is being sent over the socket (i.e. magically knowing when to expect a string or an int or whatever). A message would have a format, perhaps a header and a length, and you would read each message all together as a unit. You would also not need the "magic" character 'a' to have a special meaning. Having a protocol defined allows you to avoid making such assumptions. JSON is a message type that does not even require a header; you might look into that one, it is a very simple protocol and very popular these days.
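As a sketch of the simplest such protocol, here is a length-prefixed framing in Java (my own example; the class name and helpers are invented): each message is a 4-byte length header followed by that many payload bytes, so the reader never guesses where a message ends:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class MessageFraming {

    // Header: 4-byte big-endian payload length, then the UTF-8 payload.
    public static void writeMessage(DataOutputStream out, String message) throws IOException {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length);
        out.write(payload);
    }

    public static String readMessage(DataInputStream in) throws IOException {
        int length = in.readInt();
        byte[] payload = new byte[length];
        in.readFully(payload); // blocks until the whole message has arrived
        return new String(payload, StandardCharsets.UTF_8);
    }

    // Demo helper: round-trips messages through an in-memory buffer.
    public static String[] roundTrip(String... messages) {
        try {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buffer);
            for (String message : messages) {
                writeMessage(out, message);
            }
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer.toByteArray()));
            String[] result = new String[messages.length];
            for (int i = 0; i < result.length; i++) {
                result[i] = readMessage(in);
            }
            return result;
        } catch (IOException e) {
            throw new IllegalStateException(e); // cannot happen with in-memory streams
        }
    }
}
```

Over a real socket you would wrap socket.getOutputStream() and socket.getInputStream() the same way; readFully is exactly the "read all the bytes for a single message" step.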
You should probably not catch "Exception", catch "IOException" or whatever is specifically thrown, and you should have try/catch around the different places you do I/O to be able to differentiate the different issues that occur.
Overall, I would say that your code is long-winded and not broken up enough into individual pieces. It also makes too many assumptions about the ordering regarding who says what and when.
A simple client design would simply be like this:
while (true) {
read from client a command
based on what the command is, compose the command into a message for the server
send the full message to server as a sequence of bytes to the socket
read a single response message from server, read all the bytes for a single message
output the result to user
}
server:
while(true) {
read all the bytes for a single message from client
do whatever based on that message
send a single response message to client
}
You would not have all those extra while loops all over your code. Each message would be a header indicating the number of bytes in the message and the content, or some other protocol.
Each one of those steps would be a separate function. So your main loop would be about 10 lines of code (like my pseudo-code), and would be easy to read and understand. | {
"domain": "codereview.stackexchange",
"id": 30037,
"tags": "java, networking, socket, server, client"
} |
When does the bulk of a solid end and the surface begin? | Question: Working on nanocrystal materials, I was curious to calculate the ratio of surface atoms to bulk atoms. However, when I provided this ratio to a professor, he said it's common to assume a surface to be 5-10 lattice constants thick. It was a passing comment, but I'm curious if anyone has any insights on what delineates the bulk of a crystal from the surface of a crystal?
Answer: 5-10 lattice constants is just a rule-of-thumb. Surface effects decay into the material - i.e., surface states have finite extension, surface charge is being screened, etc. Most of these are associated with some exponential scale of decay, $\sim e^{-x/l}$, where $l$ can be taken as the surface thickness.
As a simple example, one could mention skin effect in metals - the electric field within the bulk of the metal is screened, whereas the surface effects are limited to the skin depth. | {
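As a concrete illustration of such a decay scale (my addition, the standard textbook result rather than something stated in the original answer), the field inside a good conductor falls off as $E(x) = E_0\, e^{-x/\delta}$ with skin depth

```latex
\delta = \sqrt{\frac{2}{\mu \sigma \omega}}
```

where $\mu$ is the permeability, $\sigma$ the conductivity, and $\omega$ the angular frequency of the field; $\delta$ plays exactly the role of the surface thickness $l$ above.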
"domain": "physics.stackexchange",
"id": 92385,
"tags": "condensed-matter, solid-state-physics"
} |
Cast a raw map to a generic map using a method, cleanly and safely in a fail early manner | Question: Casting, instanceof, and @SuppressWarnings("unchecked") are noisy. It would be nice to stuff them down into a method where they won't need to be looked at. CheckedCast.castToMapOf() is an attempt to do that.
castToMapOf() is making some assumptions:
(1) The map can't be trusted to be homogeneous
(2) Redesigning to avoid need for casting or instanceof is not viable
(3) Ensuring type safety in a fail-early manner is more important than the performance hit
(4) Returning Map<String, String> is sufficient (rather than returning HashMap<String, String>)
(5) The key and value type args are not generic (like HashMap<ArrayList<String>, String>)
(1), (2) and (3) are symptoms of my work environment, beyond my control. (4) and (5) are compromises I've made because I haven't found good ways to overcome them yet.
(4) is difficult to overcome because even if HashMap.class were passed into a Class<M>, I haven't been able to figure out how to return an M<K, V>. So I return a Map<K, V>.
(5) Is probably an inherent limitation of using Class<T>. I'd love to hear alternative ideas.
Despite those limitations can you see any problems with this java 1.5 code? Am I making any assumptions I haven't identified? Is there a better way to do this? If I'm reinventing the wheel please point me to the wheel. :)
Usage code block:
public class CheckedCast {
public static final String LS = System.getProperty("line.separator");
public static void main(String[] args) {
// -- Raw maps -- //
Map heterogeneousMap = new HashMap();
heterogeneousMap.put("Hmm", "Well");
heterogeneousMap.put(1, 2);
Map homogeneousMap = new HashMap();
homogeneousMap.put("Hmm", "Well");
// -- Attempts to make generic -- //
//Unsafe, will fail later when accessing 2nd entry
@SuppressWarnings("unchecked") //Doesn't check if map contains only Strings
Map<String, String> simpleCastOfHeteroMap =
(Map<String, String>) heterogeneousMap;
//Happens to be safe. Does nothing to prove claim to be homogeneous.
@SuppressWarnings("unchecked") //Doesn't check if map contains only Strings
Map<String, String> simpleCastOfHomoMap =
(Map<String, String>) homogeneousMap;
//Succeeds properly after checking each item is an instance of a String
Map<String, String> checkedCastOfHomoMap =
castToMapOf(String.class, String.class, homogeneousMap);
//Properly throws ClassCastException
Map<String, String> checkedCastOfHeteroMap =
castToMapOf(String.class, String.class, heterogeneousMap);
//Exception in thread "main" java.lang.ClassCastException:
//Expected: java.lang.String
//Was: java.lang.Integer
//Value: 1
// at checkedcast.CheckedCast.checkCast(CheckedCast.java:14)
// at checkedcast.CheckedCast.castToMapOf(CheckedCast.java:36)
// at checkedcast.CheckedCast.main(CheckedCast.java:96)
}
Methods code block:
/** Check all contained items are claimed types and fail early if they aren't */
public static <K, V> Map<K, V> castToMapOf(
Class<K> clazzK,
Class<V> clazzV,
Map<?, ?> map) {
for ( Map.Entry<?, ?> e: map.entrySet() ) {
checkCast( clazzK, e.getKey() );
checkCast( clazzV, e.getValue() );
}
@SuppressWarnings("unchecked")
Map<K, V> result = (Map<K, V>) map;
return result;
}
/** Check if cast would work */
public static <T> void checkCast(Class<T> clazz, Object obj) {
if ( !clazz.isInstance(obj) ) {
throw new ClassCastException(
LS + "Expected: " + clazz.getName() +
LS + "Was: " + obj.getClass().getName() +
LS + "Value: " + obj
);
}
}
Some reading I found helpful:
Generic factory with unknown implementation classes
Generic And Parameterized Types
I'm also wondering if a TypeReference / super type tokens might help with (4) and (5) and be a better way to approach this problem. If you think so please post an example.
Answer: You should really rewrite the main method as proper unit tests, with separate test cases, for example:
private static void iterateMapKeysAsStrings(Map<String, ?> map) {
for (String key : map.keySet()) {
// nothing to do, invalid cast will be triggered for wrong type
}
}
@Test(expected = ClassCastException.class)
public void testHeterogeneousMap() {
Map heterogeneousMap = new HashMap();
heterogeneousMap.put("Hmm", "Well");
heterogeneousMap.put(1, 2);
//Unsafe, will fail later when accessing 2nd entry
//Doesn't check if map contains only Strings
@SuppressWarnings("unchecked")
Map<String, String> map = (Map<String, String>) heterogeneousMap;
iterateMapKeysAsStrings(map);
}
@Test
public void testHomogeneousMap() {
Map homogeneousMap = new HashMap();
homogeneousMap.put("Hmm", "Well");
//Happens to be safe. Does nothing to prove claim to be homogeneous.
//Doesn't check if map contains only Strings
Map<String, String> simpleCastMap = (Map<String, String>) homogeneousMap;
iterateMapKeysAsStrings(simpleCastMap);
//Succeeds properly after checking each item is an instance of a String
Map<String, String> safeCastMap = castToMapOf(String.class, String.class, homogeneousMap);
iterateMapKeysAsStrings(safeCastMap);
}
I added a helper method iterateMapKeysAsStrings to trigger ClassCastException after an unsafe cast.
For testing an invalid cast with castToMapOf, you don't need to save the result in a variable:
@Test(expected = ClassCastException.class)
public void testInvalidCast() {
Map heterogeneousMap = new HashMap();
heterogeneousMap.put("Hmm", "Well");
heterogeneousMap.put(1, 2);
//Properly throws ClassCastException
castToMapOf(String.class, String.class, heterogeneousMap);
}
I couldn't improve the implementation of castToMapOf. I tried a few things, but they didn't work out. The unit tests helped a lot in this: you either get "all green" results, or if something fails, you can pinpoint the problem, without having to read everything including the successful cases.
In comments you wrote that you like the detailed text in the ClassCastException in the checkCast method. I don't really see how this message matters. I understand that if you "test" your code with your original main method, it's easy to read. But with unit tests you don't have to read at all. I think less code is generally better: you could use a shorter message with the LS variable, without the System.getProperty("line.separator"). | {
"domain": "codereview.stackexchange",
"id": 9378,
"tags": "java, generics, casting"
} |
Optimize/ refactor program that highlights differences between two text collections | Question: I made a simple comparison window in WPF that has two RichTextBoxes highlighting the differences (in red) of two StringCollections.
I knocked the code up in about 5 minutes and, having gone back to look at it, it appears pretty shoddy. I've included comments to show my intentions.
private static void DifferentiateText(RichTextBox richTextBox1, RichTextBox richTextBox2, StringCollection text1,
StringCollection text2, SolidColorBrush brushColour)
{
var text1Length = text1.Count;
var text2Length = text2.Count;
//Loop through text1 array
for (var i = 0; i < text1Length; i++)
{
var text1Line = text1[i] + "\r\n";
//If we're within text1 and text2 array boundaries, then compare their values
if (i < text2Length)
{
var text2Line = text2[i] + "\r\n";
//If lines aren't the same, apply brush
if (text1Line != text2Line)
{
AssignTextToTextBox(richTextBox1, text1Line, brushColour);
AssignTextToTextBox(richTextBox2, text2Line, brushColour);
}
else //Output with default brush
{
AssignTextToTextBox(richTextBox1, text1Line);
AssignTextToTextBox(richTextBox2, text2Line);
}
}
else //At the end of text2 array, so just output the rest of text1
{
AssignTextToTextBox(richTextBox1, text1Line, brushColour);
}
//If we're at the end of text1 array but not at the end of text2 array, then just output the rest of text2
if (i == text1Length - 1 && text1Length < text2Length)
{
for (; i < text2Length; i++)
{
var text2Line = text2[i] + "\r\n";
AssignTextToTextBox(richTextBox2, text2Line, brushColour);
}
}
}
}
private static void AssignTextToTextBox(RichTextBox richTextBox, string text, SolidColorBrush brushColour = null)
{
var textRange = new TextRange(richTextBox.Document.ContentEnd, richTextBox.Document.ContentEnd)
{
Text = text
};
if(brushColour != null)
textRange.ApplyPropertyValue(TextElement.ForegroundProperty, brushColour);
}
I don't particularly like how AssignTextToTextBox is used 6 times in the first for loop when really it will only be called a maximum of two times.
Is there any way to optimize/ refactor/ make-less-shoddy this code?
Answer: Since you're using WPF, you should really consider implementing MVVM.
Wouldn't Environment.NewLine be better than "\r\n"?
Also, why do you add the new line at those times? That way you're introducing several points where you could forget to do so. Wouldn't it be better to apply this only in AssignTextToTextBox(), e.g. Text = text + Environment.NewLine?
Comments should explain why something was implemented in that way, not what the code is doing. For example: //If lines aren't the same, apply brush doesn't tell me anything the code isn't showing me already. If you need comments to explain what your code does, it's a sign you need to re-write the code.
I don't particularly see the point of text1Length and text2Length, just use text1.Count and text2.Count directly. But more importantly, why are you using StringCollection when List<string> is available?
If you apply these changes, I think your logic can be reduced to this:
var maximum = (list1.Count > list2.Count) ? list1.Count : list2.Count;
for (var i = 0; i < maximum; i++)
{
var text1Line = (i >= list1.Count) ? string.Empty : list1[i];
var text2Line = (i >= list2.Count) ? string.Empty : list2[i];
var applicableBrushColour = (text1Line == text2Line) ? null : brushColour;
AssignTextToTextBox(richTextBox1, text1Line, applicableBrushColour);
AssignTextToTextBox(richTextBox2, text2Line, applicableBrushColour);
}
You could even move (i >= list1.Count) ? string.Empty : list1[i]; to a separate method, considering this logic is repeated; perhaps even to an extension method? | {
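As an aside for readers translating this to other languages: the padding trick above (substituting an empty string once one list runs out) is exactly a zip-with-padding. A minimal sketch in Python, illustrative only and not part of the C# under review:

```python
from itertools import zip_longest

def diff_lines(lines1, lines2):
    """Pair up lines, padding the shorter list with empty strings,
    and flag each pair that differs (i.e. should be highlighted)."""
    return [(a, b, a != b)
            for a, b in zip_longest(lines1, lines2, fillvalue="")]

pairs = diff_lines(["x", "y"], ["x", "z", "w"])
# -> [("x", "x", False), ("y", "z", True), ("", "w", True)]
```

The third element of each tuple plays the role of `applicableBrushColour != null` in the refactored loop.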
"domain": "codereview.stackexchange",
"id": 18394,
"tags": "c#, wpf"
} |
How can you determine if a process is possible or impossible by using the 1st and the 2nd Laws of Thermodynamics? | Question: How can you determine if a process is possible or impossible by using the 1st and the 2nd Laws of Thermodynamics?
Answer: $\bf{Impossible}:$ A process is $impossible$ between two equilibrium states if at the beginning of the process the system and its surroundings have total entropy $S_0$, at the end of the process the total entropy is $S_1$, $and$ $S_0 > S_1$.
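In symbols, with $S_{\rm tot}$ the total entropy of system plus surroundings, the statement reads:

$$\Delta S_{\rm tot} = S_1 - S_0 < 0 \quad \Longrightarrow \quad \text{the process is impossible.}$$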
three comments:
between two equilibrium states: if you can define non-equilibrium entropy, then there are possible generalizations that include entropy rates and non-equilibrium processes, centered around the idea that $internal$ entropy generation is always positive.
possible vs. impossible: One can reformulate the above negative statement of "impossibility"
to a positive one but not easily because, in general, not all
processes are possible just because the final entropy is larger than
the initial one. Ashes do not turn to gold just because the latter
might have higher entropy, there are also other conservation laws besides energy.
There are other, sometimes more convenient, formulations of the $impossibility$ statement. For example:
(a) the entropy of an *isolated* system cannot decrease, therefore in equilibrium the entropy is maximum
(b) the internal energy of a constant entropy system cannot increase and it is at a *minimum*
(c) if a system can exchange heat only at a fixed given temperature, then in equilibrium its free energy must be minimum.
There are many other similar variations, but they are all essentially equivalent $impossibility$ statements.
"domain": "physics.stackexchange",
"id": 70504,
"tags": "thermodynamics, entropy"
} |
How do ceiling fans cool or heat? | Question: Why does a ceiling fan blowing air downward cool in the summertime:
whereas a ceiling fan sucking air upwards heats in the wintertime:
?
Why isn't it the other way around?
Answer: In summertime, the ceiling fan blows air downwards and cools down your body using the wind chill effect.
In wintertime, if you have an active heating system at home, it will heat up the air in the room. Hotter air moves up and accumulates near the ceiling, while colder air stays down. A ceiling fan in the reverse direction moves cold air up and pushes hot air downwards to people's level. It could do this in the forward direction too, but then you would feel the wind chill effect again.
Why does the chill effect only occur in summertime mode?
Consider this fan :
You can try it yourself: If you stand in front of the fan you will feel a strong air stream, if you stand behind you will feel a much weaker stream. This is because the fan collects air from all directions in the back and blows it in one direction in the front.
So, in wintertime mode, the wind speed above the fan will be higher than that below; there will be only a weak stream of air at your level, so you will not feel the chill effect.
"domain": "physics.stackexchange",
"id": 30648,
"tags": "thermodynamics, fluid-dynamics, everyday-life, cooling, fan"
} |
Memoization helper | Question: Please review:
/// Fetch a value from the `map` or create a new one from the `fun` (memoization).
/// Example: \code
/// flat_map<string, Frople> fcache;
/// Frople& frople = getOrElseUpdate (fcache, id, [&]() {return Frople (id);});
/// \endcode
template<typename M, typename K, typename F>
inline auto& getOrElseUpdate (M& map, const K& key, F fun) {
auto it = map.find (key);
if (it != map.end()) return it->second;
return map.emplace (std::make_pair (key, fun())) .first->second;
}
Named after the Scala method.
The function is used to simplify the common map operations of checking whether there is a value in a map and generating the value if it isn't there, allowing one to use a simple-to-read one-liner instead of repeating these common but hard-to-read operations in the code.
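For context on the pattern itself (a cross-language aside, not part of the C++ under review): the same get-or-compute semantics are easy to state in Python, which makes the intent explicit, since the factory runs only on a cache miss:

```python
def get_or_else_update(cache, key, fun):
    """Return cache[key], computing and storing fun() only on a miss."""
    if key in cache:
        return cache[key]
    value = fun()       # the factory is evaluated lazily, on a miss only
    cache[key] = value
    return value

calls = []
cache = {}
factory = lambda: calls.append("miss") or 42  # append returns None, so -> 42
assert get_or_else_update(cache, "a", factory) == 42
assert get_or_else_update(cache, "a", factory) == 42  # cache hit
assert len(calls) == 1  # the factory ran exactly once
```

Note that Python's dict.setdefault is close but evaluates its default argument eagerly, so the explicit miss-check above mirrors the C++ semantics more faithfully.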
P.S. Incorporating bits of advice from @vnp's and @Loki's reviews:
template<typename C, typename F>
inline auto getOrElseUpdate (C& map, typename C::key_type const& key, F funct) ->decltype (map[key]) {
auto it = map.find (key);
if (it != map.end()) return it->second;
return map.emplace (key, funct()) .first->second;
}
Answer: Well, the overall impression is that the code is very condensed. Some white space may help.
I am going to have to disagree with @vnp about the type names.
It's pretty common to use short template type names. Common conventions: T generic type, C container, K key, F functor. If you have a lot of them then, fine, give them more descriptive names, but the type should be generic. I don't want to impose my thoughts on the user of the type (let the compiler impose its rules on the user). I would not want to limit the use of this to maps if it can generically be applied to other container types. As long as the parameter and variable names are descriptive, the type names don't need to be.
The Key type is unnecessary. This information can be extracted from the container.
Don't assume the internal value is std::pair; use the container to tell you its internal value type, C::value_type (yes, it's a std::pair for std::map, but don't lock yourself to this type if you don't need to). But since you are using emplace() you should not be passing the internal object (that's really for insert()). You can just pass the parameters used to construct the internal type.
Don't like the emplace and return of a part of the result in a single line.
template<typename C, typename F>
inline auto getOrElseUpdate(C& associativeCont, typename C::key_type const& key, F funct) -> decltype (associativeCont[key])
{
auto it = associativeCont.find(key);
if (it != associativeCont.end())
{
    return it->second;
}
// pair<iterator, bool>
// iterator points at inserted value
// bool indicates success. Since
// we already checked for an existing value it will
// always succeed so no need to re-check.
auto inserted = associativeCont.emplace(key, funct());
// Return the value that was inserted.
return inserted.first->second;
} | {
"domain": "codereview.stackexchange",
"id": 8071,
"tags": "c++, c++11, c++14, memoization"
} |
Are "computable problem" and "computable function" the same thing? | Question: I'm confused by the use of the expressions "computable problem" and "computable function" in the context of computability theory. Are they refer to the same thing or are there differences?
Answer: They're basically the same, in informal usage. They're not necessarily exactly 100% identical.
A computable function is often defined to be a function $f:\mathbb{N}^k \to \mathbb{N}$, or a function $f:\mathbb{N} \to \{0,1\}$, or a function $f:\{0,1\}^* \to \{0,1\}$, that can be computed by some algorithm (e.g., a Turing machine that always halts). These definitions are all basically equivalent, up to encoding of the inputs and outputs.
A decidable language is a set $L \subseteq \{0,1\}^*$ that can be decided ("computed") by some algorithm. It is also sometimes called a recursive language.
These are basically equivalent: for instance, any function $f:\{0,1\}^* \to \{0,1\}$ corresponds to a language $L=\{x \mid f(x)=1\}$, and $f$ is computable iff $L$ is decidable; and conversely the language $L$ corresponds to the function $f(x) = 1$ if $x \in L$, $f(x) = 0$ otherwise.
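To make the correspondence in the last paragraph concrete, here is a small sketch in Python (an even-parity decider, chosen purely as an illustration):

```python
def f(x: str) -> int:
    """A (trivially) computable function f: {0,1}* -> {0,1}:
    f(x) = 1 iff the bit string x has an even number of 1s."""
    return 1 if x.count("1") % 2 == 0 else 0

def in_L(x: str) -> bool:
    """Membership in the corresponding language L = { x | f(x) = 1 }."""
    return f(x) == 1

# f is computable iff L is decidable: deciding L *is* computing f.
assert f("1001") == 1 and in_L("1001")
assert f("100") == 0 and not in_L("100")
```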
A "computable problem" is a slightly more informal term, and probably refers to a problem that can be formulated as either a function or a language, and where that function/language is computable/decidable.
Introductory textbooks may choose one of these formalisms and definitions and then work with it in a precise and rigorous and careful way, carefully distinguishing these concepts. Professionals who are experienced in this field might speak a bit more informally and not bother to distinguish between these concepts, as they are after all basically equivalent.
That is my impression about how those terms are normally used, in the absence of any context. Looking at the specific context might influence the interpretation of those terms. | {
"domain": "cs.stackexchange",
"id": 21226,
"tags": "computability"
} |
How to perform model fitting for system identification | Question: I am having a really hard time in understanding how to formulate a model say linear AR model to represent a communication channel or maybe any motion. I have the experimental data representing the kinematics of robot movement. In the case of a communication channel, it will have a medium, transmitter and receiver. So, will the medium be represented by AR model or do we represent the transmitter and receiver by these models? How to decide the order of the system to be chosen initially. I am not from the area of signal processing but need to use this for a part of my research work. I have gone through literature review and the background of Kalman filter, Least square method, recursive least square for estimation purpose. But I cannot find a good example which shows how the parameters are estimated from the time series or motion data or speech modeling. Can somebody explain or point out the starting point and how to go about it? I know this is a very broad topic but an explaination with a small example will be really an eye opener for a beginner.
Answer: In any signal processing problem, there are usually two components: the signal model and the channel model.
The signal model is the mathematical description of how your ideal signal, call it $s(t)$ is generated.
The channel model is the mathematical description of how your channel corrupts or alters the signal.
One aim in telecommunications is to look at the received signal $r(t)$ --- $s(t)$ corrupted by the channel --- and try to undo the effects of the channel.
For example, suppose
$$
s(t) = A \sin(\omega_0 t + \phi)
$$
and suppose the channel is:
$$
r(t) = h(t)*s(t) + n(t)
$$
where $h(t)$ is the impulse response of the channel, and $n(t)$ is some additive noise.
In this context, your "AR" filter is the $h(t)$.
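A numerical sketch of the two models (all concrete numbers below, amplitude, frequency, the FIR taps, are made up for illustration; estimating $h$ from $s$ and $r$ is then the system-identification problem the question asks about):

```python
import math, random

random.seed(0)

# Signal model: s(t) = A sin(w0 t + phi), sampled at 1 kHz for 1 second
t = [k * 1e-3 for k in range(1000)]
s = [2.0 * math.sin(2 * math.pi * 5.0 * tk + 0.3) for tk in t]

# Channel model: r = h * s + n  (FIR convolution plus additive noise)
h = [1.0, 0.5, 0.25]  # illustrative FIR "channel" impulse response

def convolve(x, h):
    """Causal FIR filtering; output truncated to len(x) samples."""
    return [sum(h[j] * x[k - j] for j in range(len(h)) if 0 <= k - j < len(x))
            for k in range(len(x))]

n = [0.05 * random.gauss(0.0, 1.0) for _ in t]   # additive white noise
r = [c + nk for c, nk in zip(convolve(s, h), n)]  # the received signal
```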
If the context is in robotics or kinematics, then things get a little more complicated.
Did you open the "Algorithm" tab on the link in your comment? It's pretty straightforward:
The polyfit MATLAB file forms the Vandermonde matrix, $\bf V$, whose elements are powers of $x$:
$v_{i,j} = x_i^{n - j}$
It then uses the backslash operator, $\backslash $, to solve the least squares problem ${\bf V} p \approx y$. | {
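A runnable sketch of that algorithm (pure Python, for illustration; the sample data below are taken from $x^2 + x + 1$, and note that polyfit itself uses a proper QR-based solver rather than the naive normal equations used here):

```python
def vander(xs, n):
    """Vandermonde matrix V[i][j] = xs[i] ** (n - j), as in polyfit."""
    return [[x ** (n - j) for j in range(n + 1)] for x in xs]

def lstsq(V, y):
    """Solve the least-squares problem V p ~= y via the normal
    equations (V^T V) p = V^T y with naive Gaussian elimination.
    Fine for a tiny demo; numerically poor for real work."""
    m, cols = len(V), len(V[0])
    A = [[sum(V[k][i] * V[k][j] for k in range(m)) for j in range(cols)]
         for i in range(cols)]
    b = [sum(V[k][i] * y[k] for k in range(m)) for i in range(cols)]
    for i in range(cols):                      # forward elimination
        piv = A[i][i]
        for r in range(i + 1, cols):
            f = A[r][i] / piv
            for c in range(i, cols):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    p = [0.0] * cols                           # back substitution
    for i in reversed(range(cols)):
        p[i] = (b[i] - sum(A[i][j] * p[j] for j in range(i + 1, cols))) / A[i][i]
    return p

# y sampled from x^2 + x + 1, so the fit recovers p ~= [1, 1, 1]
xs, y = [0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 7.0, 13.0]
p = lstsq(vander(xs, 2), y)
```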
"domain": "dsp.stackexchange",
"id": 702,
"tags": "linear-systems, autoregressive-model, system-identification"
} |
Unable to comunicate ROS - Matlab | Question:
Dear ROS community!
I have a problem... I configured my MATLAB environment as the ROS master and I want to connect an external Linux machine.
I adjusted ROS_MASTER_URI on the Linux machine, and after that I started the MATLAB environment using rosinit. MATLAB sees the Linux machine's topics and nodes; however, the Linux machine doesn't see the MATLAB master... The message is "ERROR: Unable to communicate with master" when typing rosnode list on the Linux machine. Could you give me a direction to solve the problem?
Originally posted by Dornier on ROS Answers with karma: 1 on 2020-12-06
Post score: 0
Answer:
In addition to ROS_MASTER_URI, you must also set ROS_IP ( or ROS_HOSTNAME ).
http://wiki.ros.org/ROS/EnvironmentVariables#ROS_IP.2FROS_HOSTNAME
Originally posted by miura with karma: 1908 on 2020-12-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Dornier on 2021-01-12:
You are right! I've solved the issue. Thank you!
"domain": "robotics.stackexchange",
"id": 35840,
"tags": "matlab, ros-melodic, master, network, ros-master-uri"
} |
How to select the Pilz Motion Planner? | Question:
Hello,
i use MoveIt with ROS Melodic and Ubuntu 18.04.
At first I installed MoveIt from binaries. But if I got it right, the Pilz planner isn't part of that installation.
So I built MoveIt from source without problems, in another workspace (does this matter?).
But how can I select this planner? If I change the pipeline in my move_group.launch, I get an error:
MoveGroup running was unable to load pilz_industrial_motion_planner::CommandPlanner
I downloaded the pilz_industrial_motion_planner_planning_pipeline.launch.xml into my launch directory, but I don't know what I have to do in addition to get things to work.
Thanks for your help!
Originally posted by anonymous74883 on ROS Answers with karma: 13 on 2021-06-24
Post score: 0
Answer:
There is an apt package that contains the pilz motion planner. Install it with the following command:
sudo apt-get install ros-melodic-pilz-industrial-motion-planner
Originally posted by pvl with karma: 111 on 2021-06-24
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by anonymous74883 on 2021-06-24:
That makes life easier. I will try it. Thanks a lot!!
Comment by pvl on 2021-06-24:
You are welcome ;). Let me know if it works and if it does please accept the answer to let others know that it works. | {
"domain": "robotics.stackexchange",
"id": 36572,
"tags": "moveit, ros-melodic"
} |
Has an artificial symbiotic relationship ever been created? | Question: Have 2 organisms ever been introduced to create a symbiotic relationship that doesn't occur in their natural environment?
Answer: Here are some examples:
symbiosis between genetically modified yeast cell populations (Shou W et al. 2007)
symbiosis between green algae and embryonic chick connective tissue (Buchsbaum R et al. 1934)
symbiosis between EcoBot II and microbial fuel cells (Ieropoulos, Ioannis, et al. 2005) | {
"domain": "biology.stackexchange",
"id": 2379,
"tags": "cell-biology, zoology, mycology, parasitology, symbiosis"
} |
Can Aluminum Hydroxide (Al(OH)3) be heated to its melting temperature before decomposing? | Question: On the wikipedia page for aluminum hydroxide the listed melting point is 300 °C (572 °F), but in the same page it states that aluminum hydroxide decomposes at only 180 °C (356 °F). Can aluminum hydroxide be melted without decomposing?
Answer: TL;DR: Aluminium oxide (sometimes known as alumina) is made by heating aluminium hydroxide to a temperature of about 1100-1200 °C. (Chemguide)
$$\ce{2Al(OH)3 ->[\Delta] Al2O3 + 3H2O}$$
As @andselisk said, the temperature Wikipedia mentions is aluminium hydroxide mixture in a fire retardant which might have other compounds. Decomposition of pure aluminium hydroxide has been studied in various papers and its decomposition reaction mechanism, temperature, reaction kinetics has been exhaustively studied.
A paper examines$^{[1]}$ the decomposition of aluminium hydroxide at temperatures of 973-1123 K (700-850 °C).
Another paper$^{[2]}$ shows aluminium hydroxide being heated from room temperature to 1200 K (926.85 °C), where the exact temperatures of the decomposition products, i.e. $\ce{Al2O3}$, are noted:
[...] we find the first and the smallest endothermic peak at 519
K, which is due to the partial dehydroxylation of gibbsite
($\ce{Al(OH)3}$) and formation of boehmite ($\ce{AlOOH}$). Endothermic
peak at 585 K corresponds to two processes: (i) transformation of
gibbsite to phase $\ce{χ-Al2O3}$ and (ii) additional conversion of
gibbsite to boehmite. The first process is in accordance with the
results obtained. Another endothermic peak at 815 K is due to
decomposition of boehmite and formation of alumina $\ce{γ-Al2O3}$. The
obtained TG curves show clearly three steps of weight loss. The weight
loss in the first step (about 5 wt.%) is due to the partial
transformation of gibbsite to boehmite; the second step (about 25
wt.%) corresponds to decomposition of gibbsite to boehmite, otherwise
to $\ce{χ-Al2O3}$. The last step of around 3 wt.% relates to the
formation of $\ce{γ-Al2O3}$. Further observations show that the total weight
loss is equal to 33 wt.%. All these transformations of gibbsite and
the chemical composition are confirmed by the XRD analysis.
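Condensed into one line (temperatures as reported in the excerpt; the parallel 585 K conversion of gibbsite to $\ce{χ-Al2O3}$ is omitted for simplicity):

$$\ce{Al(OH)3 ->[{\sim}519~K] AlOOH ->[{\sim}815~K] γ-Al2O3}$$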
References
Chemical kinetics and reaction mechanism of thermal decomposition of aluminum hydroxide and magnesium hydroxide at high temperature (973-1123 K) by Ienwhei Chen, Shuh Kwei Hwang, and Shyan Chen, Ind. Eng. Chem. Res. 1989, 28, 6, 738–742 DOI: https://doi.org/10.1021/ie00090a015
http://przyrbwn.icm.edu.pl/APP/PDF/131/a131z3p62.pdf
Bhattacharya, Indra & Das, S. & Mukherjee, P. & Paul, Subir & Mitra, P.. (2004). Thermal Decomposition of Precipitated Fine Aluminium Trihydroxide. Scandinavian Journal of Metallurgy. 33. 211 - 219. 10.1111/j.1600-0692.2004.00686.x.(DOI) (Here aluminium hydroxide is subjected to dehydration upto a temperature of 1440 °C) | {
"domain": "chemistry.stackexchange",
"id": 13970,
"tags": "melting-point, decomposition"
} |
Has anyone integrated ROS with Unity 3d? | Question:
You know: www.unity3d.com, runs Mono. Prefer Windows, but understand that Mac might be closer to supportable.
Originally posted by daveayyyy on ROS Answers with karma: 51 on 2014-01-28
Post score: 5
Original comments
Comment by bchr on 2014-01-28:
What kind of integration are you talking about?
Comment by daveayyyy on 2014-01-28:
Using C# to talk to native, DLL, or websocket, for example. Want to be able to interact with the 3D model of my building to tell robot where to go, have robot tell me where it is and reflect that in the model.
Comment by Wolf on 2014-01-28:
I integrated untiy via DLL + socket conection to player/stage 3 years ago . Was quite a bit of work. Haven't heard any thing about ROS+Unity, would be awesome, though.
Comment by Kadir Firat Uyanik on 2015-04-06:
Is there anyone who has got his/her hands dirty on that (viz. ROS+Unity3D) ?
Comment by pavan on 2016-06-17:
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7392488
These guys have done quite a bit in making unity and ROS speak. But sadly, they say nothing about their code.
Answer:
rosbridge is a great way to create a WebSocket which can stream data via a RESTful API. You can use rosbridge to get topics from ROS into Unity. I've done this myself, but one caveat is that PointCloud2 topics are bugged in Indigo to the point that streaming doesn't work for those types.
Here's a great example to get you started with streaming topics form a kinect: http://wiki.ros.org/ros3djs/Tutorials/Point%20Cloud%20Streaming%20from%20a%20Kinect
Also, to ingest the data in Unity3D you merely point it at the web service.
Originally posted by jacksonkr_ with karma: 396 on 2016-11-09
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 16807,
"tags": "ros, integration"
} |
Parsing one UserData type to a preview | Question: I am currently developing a web app and am using Spring as a Backend. I have some user data that is provided by one of our services and I need to display it in my frontend.
The subject of the code review is the part where I process the data in my backend to create a simpler data type that I can directly use in the frontend. This way I don't have to mess with the incoming data in my frontend app and can take the data I want to display directly from this type.
This is the parser I wrote for this task. I tried some stuff that was relatively unknown to me before, e.g. this return switch thingy. Any feedback is greatly appreciated!
@Service
public class UserDataParser {
public ProfilePreviewData toProfilePreviewData(ExtendedUserData user) {
ProfilePreviewData profilePreviewData = new ProfilePreviewData();
profilePreviewData.setAccountId(user.getId());
profilePreviewData.setAccountName(user.getName());
profilePreviewData.setAccounts(getAccountPreviewData(user));
return profilePreviewData;
}
private List<AccountPreviewData> getAccountPreviewData(ExtendedUserData user) {
List<AccountPreviewData> accountsPreviewData = new ArrayList<>();
for (AccountData account : user.getAccounts()) {
if (!account.getPublish()) continue;
accountsPreviewData.add(getMinecraftAccountPreviewData(account));
}
for (ExternalAccountData externalAccount : user.getExternalAccounts()) {
if (!externalAccount.getPublish()) continue;
accountsPreviewData.add(getExternalAccountPreviewData(externalAccount));
}
return accountsPreviewData;
}
private AccountPreviewData getMinecraftAccountPreviewData(AccountData account) {
AccountPreviewData accountPreviewData = new AccountPreviewData();
accountPreviewData.setAccountType(AccountType.MINECRAFT);
accountPreviewData.setDisplayName(account.getName());
accountPreviewData.setUniqueIdentifier(account.getUuid().toString());
return accountPreviewData;
}
private AccountPreviewData getExternalAccountPreviewData(ExternalAccountData accountData) {
AccountPreviewData accountPreviewData = new AccountPreviewData();
AccountType type = getTypeOfExternalAccount(accountData.getTypeName());
accountPreviewData.setAccountType(type);
accountPreviewData.setDisplayName(accountData.getAccountId());
accountPreviewData.setUniqueIdentifier(accountData.getName());
return accountPreviewData;
}
private AccountType getTypeOfExternalAccount(String typeName) {
return switch (typeName) {
case "Discord" -> AccountType.DISCORD;
case "TeamSpeak" -> AccountType.TEAMSPEAK;
default -> AccountType.UNKNOWN;
};
}
}
Answer: Naming
The class name is a bit abstract. I prefer naming services based on the concrete things they provide instead of what they consume. You of course follow the naming conventions of your organization, but to me a Parser converts transportation data format, such as an XML or JSON document, to a Java object. I prefer to use Mapper suffix for classes that convert between different type of Java objects and I use the result object type as the mapper name. So my choice for naming this class would be ProfilePreviewDataMapper.
When your mapper is named after the result type, you don't need to repeat it in the method name, since the information is in the class name. The method becomes ProfilePreviewDataMapper.from(ExtendedUserData extendedUserData). Also by following this naming style, you automatically limit the scope of each class to it's return type and "accidentally" adding more and more responsibilities becomes harder, because you have to figure out different method names that break the pattern. It's easier to keep classes small and manageable.
In Java, get is a special prefix for a method, marking an accessor to a field. Getters are not expected to create new objects. Using the get prefix in a method that accepts parameters and creates objects breaks one of the most widely used Java idioms, so I would recommend against that. If you want to follow your class structure, use something like createAccountPreviewData instead, as that prefix immediately signals that something new is being returned instead of a reference to an existing object. Also, use the same prefix for all methods. Right now you have both to and get prefixes in methods that perform similar tasks.
Structure
Consider the testability of your class. What data do you need to set up in order to unit test the getMinecraftAccountPreviewData method? I would take the three private methods and extract them into standalone mappers. The getAccountPreviewData method would become AccountPreviewDataMapper.from(ExtendedUserData extendedUserData) which gets injected into ProfilePreviewDataMapper. Apply same pattern to getMinecraftAccountPreviewData and getExternalAccountPreviewData and then you can create individual unit tests for each mapper without having to set up a lot of unnecessary data for each of them.
Instead of implemeting the mappers straight up as ProfilePreviewDataMapper etc, make them interfaces instead and write the implementation in ProfilePreviewDataMapperImpl.
You have now implemented the S, O, L and D of the SOLID principles.
Style
I find this "if not something, skip" logic unnecessarily difficult to follow and not using curly braces only adds to it. I would replace
for (AccountData account : user.getAccounts()) {
if (!account.getPublish()) continue;
accountsPreviewData.add(getMinecraftAccountPreviewData(account));
}
with
for (AccountData account : user.getAccounts()) {
if (account.getPublish()) {
accountsPreviewData.add(getMinecraftAccountPreviewData(account));
}
} | {
"domain": "codereview.stackexchange",
"id": 44668,
"tags": "java, parsing"
} |
Is the $i$ in QM a time component in disguise? | Question: In SR, it is possible to replace the Minkowski metric $\eta_{\mu\nu}$ with a (pseudo) euclidean metric $\delta_{\mu\nu}$ provided that time is measured in imaginary units.
I was wondering if the same trick can be used to get rid of complex numbers in QM.
The answer I gave myself is "No, without changing the physics". In fact, the usual commutator of a particle:
$$
[q^i, p_j] = i \delta^i_j
$$
becomes:
$$
[q^\mu,t_{\nu\rho}] = C_\rho \delta^\mu_\nu
$$
where $C_\rho$ is real and $t_{i0}=p_i$ in the rest frame:
$$
[q^i,t_{j0}] = \delta^i_j
$$
Now, my questions are:
Is my reasoning correct or does this lead to any inconsistencies?
If it does not, is experimental accuracy enough to distinguish between the two "theories"?
If this is ruled out neither by theoretical arguments nor by experimental evidence, I guess I'm not the first one to have this idea: could you please provide any pointers?
Answer: You may redefine various quantities according to $p\to ip_{\rm yours}$ and things like that but that clearly doesn't change physics, just conventions. For example, the mixed-signature spacetime may indeed be emulated by having the ${+}{+}{+}{+}$ signature but with one (or three) components pure imaginary, and Einstein actually favored this convention at some moment.
However, no QM theory can ever get rid of $i$. It appears in
$$ [x,p]=i\hbar $$
The commutator of two Hermitian operators is unavoidably anti-Hermitian, i.e. $i$ times a Hermitian operator. It follows from $(xp)^\dagger=p^\dagger x^\dagger$ i.e. $[x,p]^\dagger = -[x^\dagger,p^\dagger]$.
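Spelling that out, using only $x^\dagger = x$ and $p^\dagger = p$:

$$[x,p]^\dagger = (xp - px)^\dagger = p^\dagger x^\dagger - x^\dagger p^\dagger = px - xp = -[x,p],$$

so if $[x,p] = c\,\mathbb{1}$ for a number $c$, then $\bar{c} = -c$, which forces $c$ to be pure imaginary.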
Also, in Schrödinger's or Heisenberg's equations, there has to be an $i$ because the probability-preserving operators are unitary, and unitary operators are $\exp(iH)$ where $H$ is Hermitian.
Equivalently, $i$ must appear in the exponent $\exp(iS/\hbar)$ of Feynman's path integral.
Every big modification of this kind either produces a totally inconsistent theory, or a totally equivalent theory to quantum mechanics. | {
"domain": "physics.stackexchange",
"id": 30988,
"tags": "quantum-mechanics, special-relativity, complex-numbers"
} |
SICP - exercise 2.5 - representing pairs of nonnegative integers using only numbers and arithmetic operations | Question: From SICP
Exercise 2.5: Show that we can represent pairs of nonnegative integers using only numbers and arithmetic operations if we represent the pair a and b as the integer that is the product 2^a * 3^b. Give the corresponding definitions of the procedures cons, car, and cdr.
Please review my code.
(define (cons x y)
(* (expt 2 x) (expt 3 y)))
I created repeat-divide to find x and y.
(define (repeat-divide x y)
(if (> (remainder x y) 0)
0
(+ 1 (repeat-divide (/ x y) y))))
Selectors
(define (car p) (repeat-divide p 2))
(define (cdr p) (repeat-divide p 3))
I am not mathematically inclined, so I feel this solution might be extremely inefficient. How can I make this code better and more efficient?
Answer: Well, this is going to be inefficient no matter what; it's just that way. We aren't going to improve the built-in expt, so that leaves repeat-divide for improvement. Let's call it de-factor.
So first I thought something like
(define (de-factor p n)
(let* ((max-expt (ceiling (/ (log p) (log n))))
;;log of p in base n
(gcd-of (gcd p (expt n max-expt)))
(res (/ (log gcd-of) (log n))))
res))
which works so long as the internal log is accurate enough to return the closest integer. It runs into significant rounding errors for some x and y less than a hundred. If you built a very large log table you could solve in log time, so long as you could accurately cast from your log table to the input x and y (gcd runs in log time). Of course, in that case the memory overhead of such a table or map would be huge, so not really that great of an optimization.
(define (de-factor p n)
(let* ((rough-max-expt (let loop ((x 1))
(if (>= (expt n x) p)
x
(loop (* 2 x)))))
;;can't use log trick, returns inexact math
(gcd-of (gcd p (expt n rough-max-expt))))
(x-to-what-y n gcd-of)))
(define (x-to-what-y x n) ;return y such that (= n (expt x y)) is #t
(if (= n 1)
0
(let loop ((y 1) (step 1) (acc x) (narrow? #f))
(cond ((= acc n) y)
((< step 1) (error x step acc narrow?))
(else (let ((next (* acc (expt x step))))
(if (<= next n)
(loop (+ y step)
(if narrow? (/ step 2) (* step 2))
next
narrow?)
(loop y
(/ step 2)
acc
#t))))))))
Minimal changes to the rest
(define (cons x y)
(* (expt 2 x) (expt 3 y)))
(define (car p) (de-factor p 2))
(define (cdr p) (de-factor p 3))
As far as performance goes, it's still worse than linear because of the exact bignum math. (cdr (cons 12345 54321)) returns 12345 in a few seconds, but (cdr (cons 123456 654321)) takes about a minute and a half.
When I tried your version, (cdr (cons 12345 54321)) ran out of memory; with a basic tail-recursion optimization, it took about 10 seconds, and (cdr (cons 123456 654321)) about half an hour. So my code is an optimization, but it's just the nature of numbers this big that they are very inefficient to do a lot of math on, and even more so the bigger they get. You would never do something this weird in production code.
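Out of curiosity, the same encoding can be sketched in Python (my own illustration; the exercise itself of course asks for Scheme), with count_factor playing the role of repeat-divide:

```python
# 2^a * 3^b pair encoding; count_factor counts how many times n divides p,
# which is exactly what repeat-divide does.
def cons(a, b):
    return 2 ** a * 3 ** b

def count_factor(p, n):
    count = 0
    while p % n == 0:
        count += 1
        p //= n
    return count

def car(p):
    return count_factor(p, 2)

def cdr(p):
    return count_factor(p, 3)

assert car(cons(5, 7)) == 5 and cdr(cons(5, 7)) == 7
```

The performance characteristics are the same as in Scheme: the cost is dominated by exact bignum arithmetic on a number with thousands of digits.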
Tail optimization for benchmarking
(define (repeat-divide x y)
(let loop ((acc 0) (x x))
(if (> (remainder x y) 0)
acc
(loop (+ 1 acc) (/ x y))))) | {
"domain": "codereview.stackexchange",
"id": 17706,
"tags": "performance, beginner, lisp, scheme, sicp"
} |
Phase of wave is constant | Question: The general equation of a wave is $y(x,t) = A\sin(\omega t - kx)$. We define the velocity of the wave by using the fact that $\omega t - kx$ is constant and then differentiating with respect to time, but it doesn't seem plausible that $\omega t - kx$ is constant. Please explain this.
Answer: If $\omega t - kx$ is constant then $y(x,t)$ is constant.
How do you measure the speed of a wave?
You "measure" the distance a trough or crest or a fixed displacement of the wave, $y(x,t)$, moves in unit time and that is the speed of the wave.
So you have $x$ increasing by $\Delta x$ whilst $t$ is also increasing by $\Delta t$, keeping $\omega t - kx$ constant, and you find the speed of the wave, $\dfrac{\Delta x}{\Delta t}$.
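As a quick numerical sanity check (my own illustration, not part of the original answer), one can track the position of a crest of $y = A\sin(\omega t - kx)$ on a grid and confirm that it moves at $\omega/k$:

```python
import math

# Track a crest (a point of constant phase) of y = A sin(wt - kx)
# and confirm that it moves at speed w/k.  Numbers are arbitrary.
w, k = 3.0, 1.5   # expected speed w/k = 2.0

def crest_position(t, lo=0.0, hi=4.0, n=40001):
    # brute-force grid search for the x maximizing sin(w*t - k*x);
    # the window [0, 4] contains exactly one crest for these parameters
    best_x, best_y = lo, -2.0
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        y = math.sin(w * t - k * x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x

dt = 0.01
speed = (crest_position(dt) - crest_position(0.0)) / dt
assert abs(speed - w / k) < 0.05
```

The crest position shifts by $\Delta x = (\omega/k)\,\Delta t$ between the two snapshots, matching the algebraic result.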
$\omega t - kx = \text{constant} \Rightarrow \omega \Delta t - k\Delta x = 0 \Rightarrow \text{speed of wave} = \dfrac {\Delta x}{\Delta t} = \dfrac {\omega}{k}$ | {
"domain": "physics.stackexchange",
"id": 46364,
"tags": "waves"
} |
The Number of Paths in a Directed Graph | Question: Suppose I have a directed graph $G = (V,E)$. Suppose that $v_1$ and $v_2$ are two nodes in the graph. Am I correct the number of simple paths (that is, it has no cycles) from $v_1$ to $v_2$ is $O(E)$? Is it true for the special case of directed acyclic graphs?
Bob
Answer: That's not true even for DAGs: consider the following, with all edges directed left-to-right:
o o o ... o
/ \ / \ / \
x o o ... y
\ / \ / \ /
o o o ... o
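A small sketch (my own, not from the answer) that builds this chain of diamonds and counts the simple paths confirms the exponential growth:

```python
# Build the "chain of diamonds" DAG drawn above and count simple
# x -> y paths: each diamond contributes 4 edges and doubles the count.
def diamond_chain(n):
    g = {}
    for i in range(n):
        g[i] = [(i, 'top'), (i, 'bot')]   # node i splits in two...
        g[(i, 'top')] = [i + 1]           # ...and both branches rejoin
        g[(i, 'bot')] = [i + 1]
    return g

def count_paths(g, u, target):
    # in a DAG, simple paths are just paths; count them recursively
    if u == target:
        return 1
    return sum(count_paths(g, v, target) for v in g.get(u, []))

for n in range(1, 8):
    g = diamond_chain(n)
    edges = sum(len(v) for v in g.values())        # 4n edges
    assert count_paths(g, 0, n) == 2 ** (edges // 4) == 2 ** n
```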
There are $2^{|E|/4}$ paths from $x$ to $y$. In a graph with cycles, it can be even worse: consider a clique. | {
"domain": "cs.stackexchange",
"id": 10912,
"tags": "algorithms, graphs"
} |
How do the electrons "know" where to go when grounded in this simple lightbulb example? | Question: A rookie question but I'm reading a chapter in a book (Code: The Hidden Language of Computer Hardware and Software) that discusses a simple electricity/lightbulb model (pictured below)
The book says that the electrons from the negative terminal of the battery go into the earth and then electrons come out of the earth at your friend's house, go through the lightbulb and wire, the switch at your house, and then the positive terminal of the battery.
My question is: How do the electrons exiting the negative terminal "know" where to go (the lightbulb)? Are these just totally different electrons? If so, then what's the point of the battery?
I know it's a beginner question but I just couldn't figure it out!
Thanks!
Answer:
Are these just totally different electrons?
Yes. The electrons actually don't drift far. Typically only a few millimeters a second, if I remember correctly. Compare it with a traffic queue; when the first one moves, he leaves space for the next, who then moves and leaves space for the next and so on. The motion progresses through the whole queue (wire), even though the individual cars (electrons) don't move much.
New electrons enter at the other end, since they see a lack of electrons there, now that the others have all moved a step forward. A spot lacking electrons is a spot with slightly less negative charge, which repels electrons less than every other place. So new electrons will quickly move there.
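The scale of this drift can be estimated with the standard relation $v = I/(neA)$; here is a minimal sketch with illustrative textbook numbers (none of these values come from the answer itself):

```python
# Rough drift-velocity estimate: v = I / (n * e * A)
# for a 1 mm^2 copper wire carrying 1 A.
I = 1.0          # current in amperes
n = 8.5e28       # free electrons per cubic metre in copper (textbook value)
e = 1.602e-19    # elementary charge in coulombs
A = 1e-6         # cross-sectional area in square metres (1 mm^2)

v = I / (n * e * A)   # drift speed in metres per second
# comes out to a small fraction of a millimetre per second:
# the electrons themselves barely move at all
assert 1e-5 < v < 1e-3
```

Whatever the exact figure, the point stands: the drift is tiny compared with the near-instantaneous propagation of the "queue" motion along the wire.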
If so, then what's the point of the battery?
We just described how electrons move. But why they move is another question. Something must "pull" in the electrons, otherwise they wouldn't want to move at all.
Since we know that negative charge is attracted by positive, we can create such "pull" by placing a positive charge. That's the battery's positive end. Electrons are drawn towards it, and that makes the drift. That causes the current.
As electrons keep arriving at the positive end, they would pile up. They would accumulate a bigger and bigger negative electric field, which would repel and soon prevent any further electrons from arriving. The current would stop again. So the battery has to carry them away.
Inside the battery some complex chemistry takes the incoming charges and carries them to the negative end. They don't want to be here, because they are repelled from something negative. So they drift away into the ground, making space for others.
A battery is like a pump in a water pipe that keeps the flow going. | {
"domain": "physics.stackexchange",
"id": 37553,
"tags": "electric-circuits, electrons, electric-current, electrical-resistance, voltage"
} |
Tokenize a binary stream | Question: This is my code to extract values from a binary stream. I am fairly happy with it, the only thing I can think should change is the fact it throws if there is an error. The only option for failure is boost::optional (limited to C++11 by target platform).
struct Token
{
enum TokenClass
{
BYTE, WORD, INT, SZCHAR, DATA
};
TokenClass type;
boost::variant<uint8_t, uint16_t, uint32_t, std::string, std::vector<uint8_t>> value;
Token( TokenClass c, uint8_t b ) : type(c), value(b) { }
Token( TokenClass c, uint16_t w ) : type(c), value(w) { }
Token( TokenClass c, uint32_t i ) : type(c), value(i) { }
Token( TokenClass c, std::string s ) : type(c), value(std::move(s)) { }
Token( TokenClass c, std::vector<uint8_t> d ) : type(c), value(std::move(d)) { }
};
// tokenize a byte stream. returns a vector of tokens that can be tested using validate
std::vector<Token> tokenize( const std::vector<uint8_t>& message_data, const std::vector<Token::TokenClass>& expects )
{
std::vector<Token> ts;
auto it = std::begin( message_data );
for ( auto&& expect : expects )
{
switch (expect)
{
case Token::BYTE:
{
if (it == std::end( message_data ))
{
std::stringstream ss;
ss << "not enough data left in stream for byte";
throw std::out_of_range(ss.str());
}
ts.emplace_back( Token::BYTE, *it );
}
break;
case Token::WORD:
{
if (std::distance(it, std::end( message_data )) < 2)
{
std::stringstream ss;
ss << "not enough data left in stream for word";
throw std::out_of_range(ss.str());
}
uint16_t v = (*it++ << 8) + *it;
ts.emplace_back( Token::WORD, v );
}
break;
case Token::INT:
{
if (std::distance(it, std::end( message_data )) < 4)
{
std::stringstream ss;
ss << "not enough data left in stream for dword";
throw std::out_of_range(ss.str());
}
uint32_t v = (*it++ << 24) + (*it++ << 16) + (*it++ << 8) + *it;
ts.emplace_back( Token::INT, v );
}
break;
case Token::SZCHAR:
{
auto pos = std::find(std::begin(message_data), std::end(message_data), '\0');
if (pos == std::end( message_data ))
{
std::stringstream ss;
ss << "no terminating null byte found in stream";
throw std::out_of_range(ss.str());
}
auto e = it;
for( ; e != pos and std::isprint(*e); e++ )
;
ts.emplace_back( Token::SZCHAR, std::string(it,e) );
it = e;
}
break;
default:
break;
}
it++;
}
// collect the rest of the message and place in a data token
if (it != std::end(message_data))
{
ts.emplace_back( Token::DATA, std::vector<uint8_t>(it,std::end(message_data)) );
}
return ts;
}
Usage is fairly simple. You provide an expected input sequence (for headers, etc.) and then call tokenize to parse the data. Leftover bytes at the end of the stream are placed into a DATA token for further parsing.
For example:
std::vector<Token::TokenClass> expects{ Token::SZCHAR, Token::WORD, Token::BYTE };
std::vector<uint8_t> input{ 'A', 'B', 'C', '\0', 0x1, 0xff, 'S', 0x12, 0x34 };
auto ts = tokenize( input, expects );
This takes a stream (input) and the tokens (expects) and the results are a null-terminated string ('ABC'), a word (0x1ff), a byte ('S'), and data (0x12, 0x34).
The next thing this needs is a validation function. You can have fixed values (for magic bytes that signify the start of headers, etc.), a range of values where anything outside that range is an error (i.e. a percentage), or a series of fixed values (usually signifying an action) where anything it does not understand is an error.
Answer: You force the user code to pass in a vector as the raw data. This is not always how data is read in. Provide an overload that takes in a begin and end pair of char* iterators and forward the vector overload to that.
memcpy is faster than scanning for a null byte. So consider changing the format to use a prefix length instead of the null terminator.
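To illustrate the length-prefix idea, here is a sketch (in Python for brevity; the 2-byte big-endian length and the helper name are my own assumptions, not part of the original format):

```python
import struct

# Length-prefixed string token: a 2-byte big-endian length followed by
# the bytes themselves, so the reader never scans for a terminator.
def read_string(buf, offset):
    (length,) = struct.unpack_from(">H", buf, offset)
    start = offset + 2
    if start + length > len(buf):
        raise ValueError("not enough data left in stream for string")
    return buf[start:start + length].decode("ascii"), start + length

data = b"\x00\x03ABC\x01\xff"
s, next_offset = read_string(data, 0)
assert s == "ABC" and next_offset == 5
```

The same pattern carries over to the C++ tokenizer: read the length, bounds-check once, then copy the payload in one block instead of scanning byte by byte.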
If the format is fixed then you will want to add the values of the enums explicitly. That way they don't depend on the declaration order and it's clear that that the values are chosen for a reason.
Exceptions are frankly fine. One thing I see is that you only throw an exception where more data could create a correct parse (either directly or by adding a null terminator). So you can add the remaining unparsed data to the last DATA token; that way you can partially parse the data and resume parsing when more data comes in. | {
"domain": "codereview.stackexchange",
"id": 26434,
"tags": "c++, c++11, parsing"
} |
Functional force in raising a block upright | Question:
Here $5$ blocks of equal dimensions are placed one on another in two ways:
The $1^{st}$ way is by placing one upon another individually and the $2^{nd}$ is done by considering the $5$ blocks as one object and then making it upright. We have been asked to find which way is more favourable.
When I solved the question, I did it by calculating the work done in both processes. Obviously the work done in both processes was equal, since they were just the definitions of work done using the average displacement and using the displacement of the center of mass.
However, the book did one more thing apart from that: they calculated the force needed to raise the blocks. That's the part where I am fuzzy. I didn't understand what they meant by average force (is that even a thing?) in the "$1^{st}$" step, which is $\frac{mg(0+a+2a+3a+4a)}{10a}$. Secondly, I see they used integration for the second process without any details (how is a novice student supposed to figure all this out?). The division by $\frac{\pi}{2}$ also got over my head.
So, I request the physics lovers to kindly explain the concepts used in this problem, since I never came across them till this date (NO NEED TO DO CALCULATIONS). I just need to understand what they basically did and what importance the divisions by $10a$ and $\frac{\pi}{2}$ hold here.
Answer:
I didn't understand what they meant by average force(is that even a thing?) in the "1st" step which is $\frac{mg(0+a+2a+3a+4a)}{10a}$
From the images you have provided, the average force seems to be the total work done divided by the total displacement. Let's understand with an example:
Suppose you are moving $n$ bodies of mass $m_1$, $m_2$, $m_3$, ..., $m_n$ by the distances $x_1$, $x_2$, $x_3$, ..., $x_n$ with forces $F_1$, $F_2$, $F_3$, ..., $F_n$ respectively; then the average force will be $$F_{av} = \frac{F_1x_1 + F_2x_2 + F_3x_3 + ... + F_nx_n}{x_1+x_2+x_3+...+x_n}$$ and the same holds in your case. Hope you can understand.
Secondly i see they used integration for the second process without any details(how is a novice student supposed to figure all this out?). The division by $\frac{\pi}{2} $ also got over my head.
If you are not given any details then you need to obtain them and that's physics.
Suppose you have joined all the boxes. Then the mass of the body will be $5m$ and its weight $5mg$, directed downward. To make it upright you need to apply a force perpendicular to the surface of the body. If we resolve the weight into components along the body and perpendicular to the body, the perpendicular component is $5mg\cos{\theta}$, where $\theta$ is the angle of the body with the horizontal. When the body is lying on the ground, $\theta = 0°$, and as you make it upright $\theta$ increases to $90°$; thus the limits of integration are $0$ to $\frac{\pi}{2}$.
Hope it helps. ;)
Edit:
Sorry to inform you, but I have no examples of average force, as I haven't read about it anywhere; I only analysed this from the images you gave above.
Now for the integration process
I don't understand on which point the weight is acting,is it on the center of mass or blocks
The weight is acting on the centre of mass of the system (tower) of blocks.
I will explain this easier way but if you want the harder way then inform here.
So to get the tower of 5 blocks upright you need to apply the force $F = 5mg\cos{\theta}$, as explained in the image you have provided. And to get the total work done you need to sum this force over every angle between $0$ and $90°$, which we get by integration: $$W_{\text{total}} = \int_0^{\pi/2}{5mg\cos{\theta}\,d\theta}$$ $$W_{\text{total}} = 5mg\int_0^{\pi/2}{\cos{\theta}\,d\theta}$$ The rest is solved in the image, and if you still can't get it, use an integral calculator that shows steps.
Now, the reason we are dividing the integral by $\pi/2$ is the definition of average force stated before in my answer, except that here we substitute angular displacement for linear displacement (the height in method 1): you are moving the tower from $0$ to $\pi/2$, so the angular displacement is $\pi/2$.
Hope you can understand now as this answer is getting too long. :)
Edit_2:
Are we applying force only on the center of mass and not on any other point?
Yes, you can apply the force at any other point, but in calculating the work done we take the force needed by the body: for example, here we need the force $5mg\cos{\theta}$ to make the block upright. So if you are applying the force $F_t$ at a point $r$ units up from the pivot of the block (which is the point of contact of the block with the ground), then $$\text{Torque} = T = F_tr = 5mg\cos{\theta}$$ Now the force applied by you depends on the point where you apply it, but the force needed by the body is the same, i.e. $5mg\cos{\theta}$. So if you apply the force very near the pivot, then $r$ will be small and you will need to apply more force, and if you apply the force farther from the pivot you will need less force to obtain the same $5mg\cos{\theta}$. I think this answers all three questions you asked about the force.
Now for the second question:
In physics, work done is a scalar quantity, that is, it doesn't have a direction, because it is the dot product of two vectors, force and displacement; this is the basic concept of work done. So if work done is a scalar quantity, then we don't need to worry about whether the force is angular or linear (as I think it), because we only need the magnitude of the force.
Edit_3:
The reason $5mg$ is here is that it is the weight of the block. And the force applied by you exactly cancels the weight, because you apply exactly that amount of force; if you apply less force then the block will not lift. To lift the block up you need to apply the force perpendicular to the surface of the block, which is $5mg\cos{\theta}$, and that is what the torque $F_tr$ we apply provides. Also, the point of contact with the ground is the point or surface of the block which rests on the ground. | {
"domain": "physics.stackexchange",
"id": 83546,
"tags": "homework-and-exercises, newtonian-mechanics, potential-energy"
} |
ProjectQ - Error messages | Question: How can we get rid of the following runtime error:
Traceback (most recent call last):
File "C:\Users\Marija\Anaconda3\lib\site-packages\projectq\types\_qubit.py", line 135, in __del__
self.engine.deallocate_qubit(weak_copy)
File "C:\Users\Marija\Anaconda3\lib\site-packages\projectq\cengines\_basics.py", line 153, in deallocate_qubit
tags=[DirtyQubitTag()] if is_dirty else [])])
File "C:\Users\Marija\Anaconda3\lib\site-packages\projectq\cengines\_main.py", line 288, in send
raise compact_exception # use verbose=True for more info
RuntimeError: Qubit has not been measured / uncomputed. Cannot access its classical value and/or deallocate a qubit in superposition!
raised in:
' File "C:\\Users\\Marija\\Anaconda3\\lib\\site-packages\\projectq\\backends\\_sim\\_pysim.py", line 139, in get_classical_value'
' raise RuntimeError("Qubit has not been measured / "'
Answer: By measuring all the qubits at the end of the code, I avoided the above-mentioned error message. | {
"domain": "quantumcomputing.stackexchange",
"id": 2312,
"tags": "programming"
} |
Is the emptiness problem for PEGs decidable? | Question: The emptiness problem for Context Free Grammars is decidable. Does the same hold for Parsing Expression Grammars (PEGs)? That is, is it decidable given a PEG $G$ to find whether $L(G) = \emptyset$ or not.
My intuition says no, as PEGs are closed under intersection, allowing one to construct hard instances via intersection, but I haven't given the proof a lot more thought. The classic reduction from the PCP doesn't work, I believe - at least, simply replacing choice with ordered choice doesn't - as this changes what languages are accepted.
Answer: I should have done my research better before asking. The original paper introducing PEGs (Parsing Expression Grammars: A Recognition-Based Syntactic Foundation, by Bryan Ford) actually contains a proof that emptiness is undecidable, indeed using the Post correspondence problem. | {
"domain": "cs.stackexchange",
"id": 18850,
"tags": "formal-grammars"
} |
Converting a non-stationary random process into a WSS process by adding a random phase | Question: Here is an example where this method has been implemented.
We were trying to calculate the spectrum of a transmitted signal (random signal/weighted pulse).
The autocorrelation function of the pulse, which is not WSS because it depends on t:
We associated a random variable (the delay), with a uniform PDF, with the signal:
The PDF of the delay/the variable is:
I was unable to grasp the concept behind it; probably my basics are a bit rusty.
By simply associating a random variable (with a uniform PDF), how can we make any random process a wide-sense stationary process?
What's the concept behind it?
Answer: It is not true that you can convert any non-stationary random process into a WSS random process by just adding a random phase. What is true is that you can make a (wide-sense) cyclostationary process WSS by adding a random phase. A common example is the PAM process mentioned in your question:
$$x(t)=\sum_ka_kp(t-kT)\tag{1}$$
The discrete values $a_k$ in $(1)$ are assumed to be random. The process $x(t)$ can be shown to be cyclostationary. The modified process
$$\tilde{x}(t)=\sum_ka_kp(t-kT+\theta)\tag{2}$$
where $\theta$ is a random variable uniformly distributed on $[0,T]$ is WSS. This random phase reflects our uncertainty about the phase of the signal, i.e., about the origin of the time axis. In this sense, it is physically motivated, and not just a nice "trick" enabling us to compute the power spectrum of $(2)$. | {
"domain": "dsp.stackexchange",
"id": 7037,
"tags": "digital-communications, power-spectral-density, random-process, stationary"
} |
Stabilizing forces between the protein sequences? | Question: We know that protein structures from secondary to quaternary are maintained by noncovalent or weak interactions, including electrostatic interactions, van der Waals forces, and hydrogen bonding. What is the significance of these weak interactions? Why are they preferred by nature rather than strong covalent interactions?
Answer: Proteins often undergo conformational changes. The activation energy required to undergo a conformational change in a protein with only covalent interactions would be much higher compared to that of a protein with weaker bonding forces. An example of a conformational change seen in enzymes is the change in conformation during allosteric regulation, in which an effector molecule binds a site on the enzyme distant from the enzymes active site. The binding of the effector molecule induces a conformational change that exposes the active site of the enzyme. Another example of conformational change is the change in conformation seen in Toll-Like-Receptors (TLRs). Additionally, many proteins change their conformation in response to pH. These changes would likely not occur if the protein only contained covalent bonds, and the ability of proteins to undergo such changes makes them responsive to the environment, as well as allowing them to carry out their enzymatic actions, if such actions exist for that particular protein.
Here is the wiki on allosteric regulation https://en.wikipedia.org/wiki/Allosteric_regulation
Here is a figure on conformational changes in TLRs http://www.nature.com/ni/journal/v8/n7/fig_tab/ni0707-675_F1.html
And here is the wiki on activation energy, a concept used in chemistry (see section on catalysis and Gibbs free energy) https://en.wikipedia.org/wiki/Activation_energy | {
"domain": "biology.stackexchange",
"id": 4943,
"tags": "biochemistry, proteins, protein-folding, protein-interaction, protein-structure"
} |
Print the Camera parameters from the file created after Camera Calibration | Question:
I have calibrated my monocular camera using the ROS Tutorial.
After calibrating the camera, a .yaml file got created which contains the various parameters of the camera.
Task: I want to print the coordinates of the center which are stored in the .yaml file. I think that I can do so by subscribing to the camera parameter info, but I don't know how.
Originally posted by rosqueries on ROS Answers with karma: 1 on 2014-03-16
Post score: 0
Answer:
rostopic echo /camera/camera_info
Change 'camera' to your camera name. This is really basic tutorial level stuff btw.
Originally posted by davinci with karma: 2573 on 2014-03-16
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by rosqueries on 2014-03-16:
i want to get the parameters in my program. For example i want to print them using cout<<focal_lenght<<center_x<<center_y;
Comment by rosqueries on 2014-03-16:
So, how do i get them into variable in C++ program
Comment by davinci on 2014-03-17:
See the publish / subscriber tutorial for creating a subscriber to a topic. Change the topic to the camera info topic. | {
"domain": "robotics.stackexchange",
"id": 17304,
"tags": "calibration, camera-calibration"
} |
why does subscriber not automatically subscribe to a topic when publisher suddenly starts publishing? | Question:
Hi all,
I am having a basic issue with publishing and subscribing to topics. Before I get into the core of the problem, I would like to ask a very simple question with respect to the "beginner_tutorial" package. In this package, there exist the "talker" and the "listener" nodes. When I run the talker node followed by the listener node, the listener subscribes to the topic published by the talker. However, if I run the listener node first it does not subscribe (which is correct, as the talker is not publishing). But why does it not start subscribing as soon as I run the talker node a while later? Basically, why does the subscriber not automatically notice when the talker starts publishing and latch on to it, if I first run the subscriber followed by the publisher node?
Thanks a lot.
Originally posted by Ashesh Goswami on ROS Answers with karma: 36 on 2014-08-06
Post score: 0
Answer:
When a new publisher becomes active:
It informs the ROS master.
The ROS master sends a callback to all of the nodes that are subscribed to that topic, informing them that there is a new publisher, and giving its address.
The subscribers then use this address to open a new connection directly to the publisher so that they can receive messages.
messages are then passed over this TCP socket from publisher to subscriber
All of the steps in this process take a nonzero amount of time. The end result is usually that the first one or two messages sent by the publisher are not seen by subscribers.
This process can also fail if the master cannot contact the subscriber for some reason (for example, it can't resolve the hostname that the subscriber is running on), or if the subscriber cannot contact the publisher for some reason (again, hostname resolution is often the problem here).
Originally posted by ahendrix with karma: 47576 on 2014-08-06
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by Ashesh Goswami on 2014-08-06:
I see..so does that mean that the publisher needs to be publishing for the subscriber to subscribe to the topic being published by the publisher? In other words, the talker node needs to be running and publishing before I can run the listener node? thanks a lot
Comment by ahendrix on 2014-08-06:
No. You can start the publisher and the subscriber in any order.
Comment by Ashesh Goswami on 2014-08-06:
but thats the problem I am facing. If I start the talker first it works but if I start the talker(publisher) after the subscriber node, it does not. I am trying this with the default beginner_tutorial package. What do you think might be the issue? thanks
Comment by ahendrix on 2014-08-06:
"This process can also fail if the master cannot contact the subscriber for some reason (for example, it can't resolve the hostname that the subscriber is running on), or if the subscriber cannot contact the publisher for some reason (again, hostname resolution is often the problem here)."
Comment by Ashesh Goswami on 2014-08-06:
my bad I missed that part earlier..thanks!! | {
"domain": "robotics.stackexchange",
"id": 18937,
"tags": "ros, publisher"
} |
How to turing reduce equivalent languages $Q$ to infinite language $I$ | Question: Given two languages:
$Q= \{(\langle M_1 \rangle , \langle M_2 \rangle ) \mid L(M_1) = L(M_2)\}$
$I= \{\langle M \rangle \mid \;\vert L(M) \vert = \infty \}$
I'm trying to Turing reduce $Q$ to $I$ ($Q \le_T I$), not the other way around, as solved here.
Any ideas on how to solve this? What exactly will the Turing machine do here and which part is getting solved by the mysterious Oracle?
Answer: Given $M_1$ and $M_2$ you construct a machine $M$ that on input $w$ does the following:
it runs $M_1$ on each of the first $w$ inputs ($\epsilon$, 0, 1, 00, 01, ...) for $w$ steps each (treating the input string $w$ as a number).
if $M_1$ accepts some input, you run $M_2$ on the same input and check it accepts too. (note, $M_2$ may not halt!)
If $M_1$ rejects some input, you run $M_2$ for $w$ steps and verify it doesn't accept during that time.
you do the same with $M_2$: run $w$ steps on each of the first $w$ inputs, and verify everything works, or reject otherwise
if all checks pass - accept. Otherwise reject.
The idea is the following: as long as $M_1$ and $M_2$ behave the same, you will keep accepting all $w$'s; but as soon as you find a difference, you will reject that $w$ and all inputs $w'>w$, and thus the accepted language becomes finite. You should be careful because the machines may not halt. For instance, $M_1$ may reject some input but $M_2$ won't halt on it -- still, they both "reject" it, and this case should be carefully analyzed. | {
"domain": "cs.stackexchange",
"id": 5046,
"tags": "complexity-theory, turing-machines, reductions"
} |
Why should AAS use element lamps? | Question: AAS (Atomic Absorption Spectroscopy) is a quantitative analytical technique used to measure very small concentrations of ions in substances. The main idea is that the sample is atomised in a flame and then light is shot through the atomised sample and the absorbance is measured.
However, I don't get why they use an atomic emission lamp (a lamp made of only one element). What's wrong with using a normal incandescent lamp or some lamp that produces all the wavelengths at once (it would save you having to change the lamp for every test)? I've heard that multi-element lamps make the machine less sensitive due to noise, but I don't understand how. If we tune the monochromator to the particular wavelength we want, shouldn't it be the same as using an atomic emission lamp?
Answer: Absorption bands are very small: smaller than the band which a typical monochromator is capable of isolating.
If a continuum light source was employed, the absorbed band would be much smaller than the band which would be isolated by the monochromator and which would be read by the detector. This implies a low SNR (signal-to-noise ratio), and a very poor resolution.
Using a single-element lamp (or a lamp with combined elements but distinct emission frequencies) avoids the problem: a monochromator is well capable of isolating the different frequencies, and absorption, which occurs in a narrow band of frequencies, occurs in the same narrow band as the light from the source. This implies a much higher SNR and a much better resolution.
To make an easy to conceive example, let's say that the effect of absorption by the sample is the effect of a star shining in the sky: using a continuum light source implies watching it through the midday sky. A HCL lamp would be a midnight sky, in the metaphor.
Note that, aside from what textbooks say, continuum souce AAS (HR-CS AAS) does exist, but it requires, as expected, a sophisticated monochromator (high resolution monochromator, with a resolution of a few pm!) | {
"domain": "chemistry.stackexchange",
"id": 12058,
"tags": "analytical-chemistry, spectroscopy"
} |
What is the difference between Curie's law and Curie-Weiss law? | Question: Curie's Law:
$$\chi_m = \frac{C}{T}$$
Curie-Weiss law:
$$\chi_m = \frac{C}{T-T_c}$$
($C$ is the Curie constant and $T_c$ is the Curie temperature.)
Answer: Curie's law is:
$$ \chi_m = \frac{C}{T} = \frac{M\mu_0}{B} \tag{1} $$
where $B$ is the applied field. Weiss's modification was to say that the magnetization responds, via Curie's law, to the effective field $B + \lambda M$ rather than to $B$ alone:
$$ \frac{M\mu_0}{B + \lambda M} = \frac{C}{T} \tag{2} $$
Solving this for $M$ gives:
$$ M\mu_0 T = C(B + \lambda M) \quad\Rightarrow\quad M = \frac{CB}{\mu_0 T - \lambda C} \tag{3} $$
so the susceptibility with respect to the applied field is:
$$ \chi_m = \frac{M\mu_0}{B} = \frac{C\mu_0}{\mu_0 T - \lambda C} = \frac{C}{T - \lambda C/\mu_0} \tag{4} $$
$$ \chi_m = \frac{C}{T - T_c} \tag{5} $$
where we define the constant $T_c$ as $T_c = \lambda C/\mu_0$. | {
"domain": "physics.stackexchange",
"id": 99129,
"tags": "ferromagnetism"
} |
Why can't we measure the barrier potential on a diode? | Question: I have read a lot of answers but I still do not understand: the potential barrier is real (e.g. the left side of an otherwise open junction has a higher potential than its right side), so why can't I measure it with a voltmeter?
Often it is argued that the two metal-semiconductor junctions formed by the probe tips create another potential which exactly cancels out the barrier of the junction.
I believe that this is the case, but why is the cancellation exact?
Would it mean that if I connect two metallic caps to both ends of the junction, without connecting them via a voltmeter, both metallic ends now have the same potential? This sounds strange. Or is a closed electrical loop needed? This is also somewhat "absurd", because modern voltmeters are nearly ideal in terms of resistance and there is no current flowing.
I would like to learn where exactly my interpretation of how things work there is wrong. There must be some missing key fact in my considerations.
Btw, the energetic argument is also not really satisfying because, although true, it doesn't explain the details on a microscopic level.
Answer: The electrons in the conduction band on the n-side of the junction can lower their energy by occupying the available states of the valence band on the p-side.
But in the equilibrium situation, other electrons have already migrated, creating an E-field in the region. The work done (in the equilibrium state) against the E-field to migrate equals the energy decrease from changing the quantum state. This means the Fermi level is the same on both sides of the junction.
The excess electrons on the p-side of the junction are attracted by the ions of the n-side. It is like a spring hanging vertically with a mass attached: there is an upward force due to the spring that is balanced by the gravitational force.
If a conductor (or a voltmeter) connects the n and p terminals, nothing happens. The probability that an electron moves in one direction is the same as in the opposite one. | {
"domain": "physics.stackexchange",
"id": 77414,
"tags": "semiconductor-physics, chemical-potential"
} |
Get spectral picture from a wavelet transform | Question: According to my previous question, I have changed the generate command to:
y=generate1(100,1000,1);
and got the following picture:
Now I want to test a wavelet transform on the same signal, which of course exists in Matlab, but there is one thing I should clarify: generally, as I understand it, large scale values correspond to small frequencies and small scale values to large frequencies. So, how should I determine whether the given frequencies are small or large? Also, which wavelet transform should I use?
For example:
m=cwt(y,1:100,'sym2');
and plot it
plot(m)
How should I read the second picture? What kind of information can I get?
Answer: Try this one:
m=cwt(y,1:100,'sym2','plot');
colormap(pink)
Result (btw, I cannot reproduce your curve in the time domain with the function generate1):
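To answer the question of how to tell whether a given scale corresponds to small or large frequencies, each scale can be mapped to a pseudo-frequency, which is what MATLAB's scal2frq does. A minimal pure-Python sketch of the same formula, assuming (illustratively) a sym2 center frequency of about 0.6667 Hz and a 1 kHz sampling rate:

```python
# Hedged sketch: map CWT scales to pseudo-frequencies, the pure-Python
# analogue of MATLAB's scal2frq(a, 'sym2', delta). Assumptions: the sym2
# center frequency is taken as ~0.6667 Hz (the value centfrq typically
# reports) and the sampling rate as 1 kHz; treat both as illustrative.

def scale_to_pseudo_freq(scale, center_freq=0.6667, sampling_period=0.001):
    """Pseudo-frequency (Hz) associated with a CWT scale."""
    return center_freq / (scale * sampling_period)

# Small scales -> high frequencies, large scales -> low frequencies:
for a in (1, 10, 100):
    print(a, round(scale_to_pseudo_freq(a), 1))
```

With these assumptions, the scales 1:100 used in the cwt call above span roughly 667 Hz down to about 6.7 Hz, so the lowest scales probe the high-frequency content.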
Low scale values compress the wavelet and correlate better with high frequencies, while high scale values stretch the wavelet and correlate better with the low-frequency content of the signal. | {
"domain": "dsp.stackexchange",
"id": 1422,
"tags": "fourier-transform, wavelet, power-spectral-density"
} |
Edge Detection on a Color Image | Question: I understand the process of using a Sobel kernel for edge detection in greyscale images. The input is a greyscale image, and the output is a greyscale image. I'm having trouble, however, figuring out how to apply the Sobel kernel to a color image.
Do I find the color gradient for each color channel (R, G, and B) then average each value to get the final color gradient value? Or do I average the color channels then find the color gradient of that image?
My goal is to render a raytraced image, find the edges, then re-render only the high-contrast regions with anti-aliasing and leave the low-contrast regions alone.
Answer: Finding edges in a color image can be done by decomposing the image into its channels, finding the gradients separately and fusing them somehow. However, such an approach doesn't incorporate the color components in a joint model. Luckily, there is a better way to do this, which is the structure tensor representation.
The color structure tensor describes the bi-dimensional first order differential structure at a certain point in the image. It is specified by:
$
S=
\left[ {\begin{array}{cc}
R_x^2+G_x^2+B_x^2 & R_xR_y+G_xG_y+B_xB_y \\
R_xR_y+G_xG_y+B_xB_y & R_y^2+G_y^2+B_y^2 \\
\end{array} } \right]
$
where subscripts denote the partial derivatives. This is a more precise description of the local gradients. Eigen-decomposition is then applied to the structure tensor matrix $S$ to form the eigenvalues and eigenvectors. The larger eigenvalue shows the strength of the local image edges (gradient magnitude) and the corresponding eigenvector points across the edge (in gradient direction). In other words, eigenvalues encode the gradient magnitude while the eigenvectors contain the gradient orientation information. Note that the eigenvectors and eigenvalues can be computed analytically for a structure tensor.
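As a concrete illustration, the eigenvalues and orientation of the 2x2 tensor above can be computed in closed form at a single pixel. A hedged pure-Python sketch (the channel gradient values are made-up numbers; real code would evaluate smoothed derivatives at every pixel):

```python
import math

# Hedged sketch: closed-form eigen-decomposition of the 2x2 color structure
# tensor S at one pixel, given per-channel partial derivatives Rx, Ry, etc.

def structure_tensor_eigen(Rx, Ry, Gx, Gy, Bx, By):
    # Entries of S, matching the matrix above
    sxx = Rx * Rx + Gx * Gx + Bx * Bx
    sxy = Rx * Ry + Gx * Gy + Bx * By
    syy = Ry * Ry + Gy * Gy + By * By
    # Eigenvalues of a symmetric 2x2 matrix via trace/determinant
    tr = sxx + syy
    det = sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc  # lam1 >= lam2 >= 0
    # Orientation of the dominant eigenvector (gradient direction)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return lam1, lam2, theta

# A vertical edge: strong x-gradients in all channels, no y-gradients
lam1, lam2, theta = structure_tensor_eigen(3, 0, 2, 0, 1, 0)
print(lam1, lam2, theta)  # 14.0 0.0 0.0 -> strong edge, gradient along x
```

The larger eigenvalue (lam1) gives the edge strength; thresholding it yields an edge map.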
There are many other use cases of the structure tensor, so it is good to know. Last but not least, here is MATLAB code that uses this information to compute the gradients:
http://www.mathworks.com/matlabcentral/fileexchange/28114-fast-edges-of-a-color-image-actual-color-not-converting-to-grayscale/content/coloredges.m
After obtaining the gradients, you could further carry on your edge detection in a standard fashion (canny etc.). | {
"domain": "dsp.stackexchange",
"id": 1671,
"tags": "image-processing, computer-vision, edge-detection"
} |
What roadblocks are there to HSA becoming standard, similar to floating point units becoming standard? | Question: I remember when my dad explained to me for the first time how a certain model of computer he had came with a "math coprocessor" which made certain math operations much faster than if they were done on the main CPU without it. That feels a lot like the situation we are in with GPUs today.
If I understand correctly, when Intel introduced the x87 architecture they added instructions to x86 that would shunt a floating-point operation to the x87 coprocessor if present, or run a software version of the floating-point operation if it wasn't. Why isn't GPU compute programming like that? As I understand it, GPU compute is explicit: you have to program for it or for the CPU. You decide as a programmer; it isn't up to the compiler and runtime like Float used to be.
Now that most consumer processors (Ryzen aside) across the board (including smartphone Arm chips and even consoles) are SoCs that include CPUs and GPUs on the same die with shared main memory, what is holding the industry back from adopting some standard form of addressing the GPU compute units built into their SoCs, much like floating-point operation support is now standard in every modern language/compiler?
In short, why can't I write something like the code below and expect a standard compiler to decide whether it should compile it linearly for a CPU, with SIMD operations like AVX or NEON, or on the GPU if it is available? (Please forgive the terrible example; I'm no expert on what sort of code would normally go on a GPU, hence the question. Feel free to edit the example to be more obvious if you have an idea for better syntax.)
for (int i = 0; i < size; i += PLATFORM_WIDTH)
{
// + and = are aware of PLATFORM_WIDTH and add operand2 to PLATFORM_WIDTH
// number of elements of operand_arr starting at index i.
// PLATFORM_WIDTH is a number determined by the compiler or maybe
// at runtime after determining where the code will run.
result_arr[i] = operand_arr[i] + operand2;
}
I am aware of several ways to program for a GPU, including CUDA and OpenCL, that are aimed at working with dedicated GPUs that use memory separate from the CPU's memory. I'm not talking about that. I can imagine a few challenges with doing what I'm describing there due to the disconnected nature of that sort of GPU, which requires explicit programming. I'm referring solely to the SoCs with an integrated GPU like I described above.
I also understand that GPU compute is very different from standard CPU compute (being massively parallel), but floating-point calculations are also very different from integer calculations, and they were integrated into the CPU (and GPU...). It just feels natural for certain operations to be pushed to the GPU where possible, like Floats were pushed to the 'Math coprocessor' of yore.
So why hasn't it happened? Lack of standardization? Lack of wide industry interest? Or are SoCs with both CPUs and GPUs still too new and is it just a matter of time? (I am aware of the HSA foundation and their efforts. Are they just too new and haven't caught on yet?)
(To be fair, even SIMD doesn't seem to have reached the level of standard support in languages that Float has, so maybe a better question is why SIMD in general hasn't reached that level of support yet, GPUs included.)
Answer: A couple issues come to mind:
Synchronization/Communication overhead
In order to seamlessly transition from CPU to GPU code you need to communicate with the GPU. The GPU additionally has to be available (i.e., not rendering the screen), and all instructions on the CPU side of things need to retire/finish executing. Additionally you need to make sure that any pending writes have reached L3 cache/main memory, so that the GPU sees the writes. As a result a transition to GPU code is quite expensive, especially if the GPU is doing something latency sensitive (like rendering the next frame of something) and you need to wait for that process/task/thread/whatever to finish. Similarly, returning back to the CPU is also expensive.
In addition you have to handle what happens if multiple CPU cores start fighting over the GPU.
Differing Memory Performance Needs
GPUs typically require high bandwidth memory, but low latency is not as important, while CPUs are typically more sensitive to latency.
Low performance GPUs can and do use main memory, but if you wanted a high performance GPU built into the CPU you would potentially need two different types of memory. At which point there isn't much advantage to having everything on one chip, since all that does is make cooling harder.
Inertia/Dev Infrastructure
SIMD has compiler support right now and lots of work put into it. Simple GPU-style workloads like dot products are already memory bound on a CPU anyway, so existing CPU+GPU combos would not benefit.
Could just have lots of SIMD
Not much more to say beyond the heading. SIMD + many cores + lots of execution units would give you a more GPU-like CPU. Add better SMT for a bonus. See Xeon Phi for a real-world implementation of this concept. Though one thing worth mentioning is that silicon spent on more GPU-style features is silicon not spent on branch prediction etc.
Edit:
Another thing that comes to mind is there are broadly speaking three reasons to have a GPU.
Just want to browse the web, display Netflix etc. For this use case existing CPU and GPU performance/architecture is more than sufficient.
Want to play high-end video games etc. The existing architecture has a lot of momentum behind it, and I'm not convinced gaming CPU workloads really need better SIMD performance so much as better caches/branch prediction etc., though I don't really know. However, the GPU is likely already busy, so it might not be the best idea to shift even more work to the CPU.
HPC applications. Custom hardware like Xeon Phi is available for people who need a more GPU like CPU. | {
"domain": "cs.stackexchange",
"id": 16937,
"tags": "computer-architecture, cpu"
} |
Unable to connect Groovy-raspbian with arduino Mega ADK & Leonardo | Question:
Good afternoon. I'm trying to follow the Hello World example in the rosserial tutorials, but I'm getting the same error with both Arduinos:
[ERROR] [WallTime: 1390504719.732367] Unable to sync with device; possible link problem or link software version mismatch such as hydro rosserial_python with groovy Arduino
I have looked for a fix but have not found anything that helps.
Could anyone help me, please?
Originally posted by 4LV4R0 on ROS Answers with karma: 70 on 2014-01-23
Post score: 0
Original comments
Comment by tonybaltovski on 2014-01-23:
Have you updated rosserial recently?
Comment by 4LV4R0 on 2014-01-23:
We work with many different Raspberry Pis. This one was set up a few days ago, less than a week. We followed the tutorial as always. Problems with new versions, maybe? Thanks for your interest.
Comment by 4LV4R0 on 2014-01-26:
Thank you for your interest. I'll let you know tomorrow. We're trying it on the student's Raspberry Pi. If not, we'll start from the beginning again.
EDIT: Remaking ros_lib was the solution for us in this case. Thank you so much.
Answer:
Try remaking your ros_lib.
Originally posted by tonybaltovski with karma: 2549 on 2014-01-24
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 16748,
"tags": "arduino, rosserial, ros-groovy"
} |
U- or RO-method for Singlet-Triplet Gap? | Question: When I want to compare the energy of two different spin states of the same molecule, i.e. the singlet-triplet gap, is it better to use the unrestricted or the restricted open-shell formalism?
Furthermore, do I need to calculate the singlet state with the restricted open/unrestricted method as well? At least within Gaussian I did not notice any energy difference among them. (Which sounds logical to me: if the ground state is a singlet, the open-shell calculation will also find that the best solution is to have the same orbitals for alpha and beta.)
Answer: To expand on user1420303's answer a bit:
When I want to compare the energy of two different spin states of the same molecule, i.e. the singlet-triplet gap, is it better to use the unrestricted or the restricted open-shell formalism?
It depends.
The unrestricted formalism will almost always give lower absolute electronic energies (i.e., closer to the "real" value) than restricted-open will, due to the greater variational flexibility introduced by the separate spin-up and spin-down orbital sets. So, from a purely energetic perspective, yes, unrestricted is likely preferred.
If you're interested in other properties, though, the spin contamination introduced by the unrestricted formalism may be problematic. Of course, in systems with sufficient static correlation (viz., a degenerate or near-degenerate ground state) to result in spin contamination (radicals, bond-breaking states, transition metals, etc.), single-reference methods like Hartree-Fock and DFT may not give reliable results anyways.
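In practice, spin contamination is usually judged by how far the computed ⟨S²⟩ deviates from the exact value s(s+1). A minimal hedged sketch (the "computed" value below is invented purely for illustration):

```python
# Hedged sketch: spin contamination diagnostic. The exact <S^2> is s(s+1)
# with s = (M - 1)/2 for spin multiplicity M; contamination is the deviation
# of the value a UHF/UKS job reports. The 2.08 used below is a made-up number.

def exact_s2(multiplicity):
    s = (multiplicity - 1) / 2
    return s * (s + 1)

def contamination(computed_s2, multiplicity):
    return computed_s2 - exact_s2(multiplicity)

print(exact_s2(1), exact_s2(3))          # 0.0 2.0 (singlet, triplet)
print(round(contamination(2.08, 3), 2))  # hypothetical UHF triplet: 0.08
```

A common rule of thumb is to distrust single-reference results when the deviation grows to an appreciable fraction of s(s+1).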
Furthermore, do I need to calculate the singlet state with the restricted open/unrestricted method as well?
It depends.
For most 'non-exotic' organic systems, there is no need to calculate the singlet state using an unrestricted method since, as you note, the results will often be indistinguishable. For systems with static correlation, though, if you still want to try to use a single-reference method, you will need to explore the unrestricted orbital space, as there may be a lower-energy solution where the alpha and beta orbital compositions appreciably differ. The primary method I'm aware of for such exploration is the "broken-symmetry" method. Two papers I know of that discuss the method, albeit not in extensive detail, are the following reviews by Frank Neese:
F. Neese. "Prediction of molecular properties and molecular spectroscopy with density functional theory: From fundamental theory to exchange-coupling." Coord Chem Rev 253: 526, 2009. doi:10.1016/j.ccr.2008.05.014
F. Neese. "A critical evaluation of DFT, including time-dependent DFT, applied to bioinorganic chemistry." J Biol Inorg Chem 11: 702, 2006. doi:10.1007/s00775-006-0138-1
The method is integrated into the current version of ORCA (see Section 5.9.10 of the v3.0.3 ORCA manual), and tips for running such calculations are available at the ORCA Input Library. Other software packages likely include this capability; search their manuals for more information. | {
"domain": "chemistry.stackexchange",
"id": 5397,
"tags": "computational-chemistry"
} |
Electric field due to infinitely charged plane does not satisfy boundary conditions | Question: The field at the surface of a conductor is always $\frac{\sigma}{\epsilon_0}$.
An infinite conducting plane has two faces, each with a surface charge density $\sigma$.
The field at the surface is $\frac{\sigma}{\epsilon_0}\hat{y}$, then
zero inside the plate and then $\frac{-\sigma}{\epsilon_0}\hat{y}$.
This satisfies boundary conditions as the value of the field goes down
by $\frac{\sigma}{\epsilon_0}$ at every interface with a free charge density $\sigma$.
Fair enough.
But when we have an infinite plane sheet of charge, the field is $\frac{\sigma}{2\epsilon_0}$. Obviously, this cannot be a conducting surface, which means it must be a dielectric and that this charge is bound. In this case, the electrostatic boundary conditions say that the field below the sheet must be the same. But it goes down by $\frac{\sigma}{\epsilon_0}$.
Now my question, why is this so?
I think that there might be another field inside the infinite plate that satisfies boundary conditions, but I can't think of what kind of field this might be or what its magnitude could be. Some help?
Answer: A sheet of charge is neither a conductor nor a dielectric. The charges are imagined to be fixed, and there is nothing to polarize. Sometimes a sheet of charge is just a sheet of charge.
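A quick numerical check that crossing a sheet of surface charge density σ changes the normal field by σ/ε0 in both pictures; a hedged sketch in SI units with an arbitrary σ:

```python
# Hedged numerical check (SI units, arbitrary sigma): crossing a sheet of
# surface charge density sigma changes the normal E-field by sigma/eps0 in
# both the isolated-sheet and conducting-slab pictures.

eps0 = 8.854e-12   # F/m
sigma = 1e-6       # C/m^2, arbitrary

# Isolated sheet: field sigma/(2 eps0) pointing away from it on each side
E_above = +sigma / (2 * eps0)
E_below = -sigma / (2 * eps0)
jump_sheet = abs(E_above - E_below)

# Conducting slab face: sigma/eps0 just outside, zero inside
E_out, E_in = sigma / eps0, 0.0
jump_slab = abs(E_out - E_in)

print(jump_sheet - sigma / eps0, jump_slab - sigma / eps0)  # both ~0
```

The jump is identical in the two cases; only the fields on either side of the interface differ.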
The boundary conditions are found from Gauss's Law. The electric field on crossing a sheet of charge changes by $\sigma/\epsilon_0$. This condition is satisfied in both of your examples, the conducting slab and the charged sheet. | {
"domain": "physics.stackexchange",
"id": 39181,
"tags": "electrostatics, electric-fields, boundary-conditions, dielectric"
} |
Connect Four - Console Application | Question: I have attempted to make the classic game Connect Four in C++, and I would love some feedback on this project. I have done everything myself without using the help of a tutorial or guide, so sorry if there are some bad practices here.
Main.cpp
// Connect4V2.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "Board.h"
#include "Player.h"
#include "GameLogic.h"
#include <iostream>
#include <vector>
/*
AI
Search for empty blocks with block filled under
Make data structure with these blocks
randomly choose a block from that data structure to spawn on
*/
int main()
{
Board board;
Player player1, player2;
GameLogic gameLogic;
gameLogic.game(board, player1, player2, gameLogic);
char c;
std::cin >> c;
return 0;
}
GameLogic.h
#pragma once
#include <string>
class Board;
class Player;
class GameLogic
{
private:
char m_Turn = X;
bool m_FoundWinner = false;
char m_Winner = ' ';
void allocateFirstTurns(Player& player1, Player& player2, char choice);
void decideFirstTurn(Player& player1, Player& player2);
void gameRound(Board& board, Player& player1, Player& player2, GameLogic& gameLogic);
void changeTurn(char turn);
public:
static const int ROWS = 9;
static const int COLUMNS = 9;
static const int WINNING_ROWS = 4;
static const char X = 'X';
static const char O = 'O';
static const char EMPTY = ' ';
void game(Board& board, Player& player1, Player& player2, GameLogic& gamelogic);
void setWinner(char gamePiece);
};
GameLogic.cpp
#include "stdafx.h"
#include "GameLogic.h"
#include "Board.h"
#include "Player.h"
void GameLogic::allocateFirstTurns(Player & player1, Player & player2, char choice)
{
if (choice == 'y')
{
player1.setGamePiece(X);
player2.setGamePiece(O);
}
if (choice == 'n')
{
player1.setGamePiece(O);
player2.setGamePiece(X);
}
}
void GameLogic::decideFirstTurn(Player& player1, Player& player2)
{
bool chosen = false;
char choice = ' ';
while (!chosen)
{
std::cout << "Would you like to go first? 'y' - Yes. 'n' - No.\n";
std::cin >> choice;
if (choice == 'y' || choice == 'n')
chosen = true;
}
allocateFirstTurns(player1, player2, choice);
}
void GameLogic::game(Board & board, Player& player1, Player& player2, GameLogic& gameLogic)
{
char turn = X; //First turn
decideFirstTurn(player1, player2);
while (!m_FoundWinner)
{
gameRound(board, player1, player2, gameLogic);
}
board.displayBoard();
//Announce winner
std::cout << "Winner: " << m_Winner << "\n";
if (m_Winner == player1.getGamePiece())
std::cout << "Player 1 wins the game.\n";
if (m_Winner == player2.getGamePiece())
std::cout << "Player2 wins the game.\n";
}
void GameLogic::gameRound(Board & board, Player & player1, Player & player2, GameLogic& gameLogic)
{
while (!m_FoundWinner)
{
if (m_Turn == player1.getGamePiece())
{
board.displayBoard();
std::cout << "Player 1 move.";
player1.move(board);
m_FoundWinner = board.checkForWinner(gameLogic, player1.getGamePiece());
changeTurn(player1.getGamePiece());
}
else
{
board.displayBoard();
std::cout << "Player 2 move.";
player2.move(board);
m_FoundWinner = board.checkForWinner(gameLogic, player2.getGamePiece());
changeTurn(player2.getGamePiece());
}
}
}
void GameLogic::changeTurn(char turn)
{
if (turn == X)
m_Turn = O;
else
m_Turn = X;
}
void GameLogic::setWinner(char gamePiece)
{
m_Winner = gamePiece;
}
Board.h
#pragma once
#include <vector>
#include <iostream>
class GameLogic;
class Board
{
private:
std::vector<std::vector<char>> m_Board;
std::string m_SrchHorizontal = "Horizontal";
std::string m_SrchVertical = "Vertical";
std::string m_SrchRightDiagonal = "DiagonalRight";
std::string m_SrchLeftDiagonal = "DiagonalLeft";
void initBoard();
void searchForWinner(GameLogic& gameLogic, char playerPiece, std::string searchDirection, bool& foundWinner);
public:
Board();
char getBoardPosition(int row, int col) const;
void displayBoard();
bool isMoveLegal(int row, int col);
void addPlayerPiece(int row, int col, char playerPiece);
bool checkForWinner(GameLogic& gameLogic, char playerPiece);
};
Board.cpp
#include "stdafx.h"
#include "Board.h"
#include "GameLogic.h"
Board::Board()
{
initBoard();
}
void Board::initBoard()
{
std::vector<char> tempBoard;
for (int i = 0; i < GameLogic::ROWS; i++)
{
tempBoard.push_back(GameLogic::EMPTY);
}
for (int i = 0; i < GameLogic::ROWS; i++)
{
m_Board.push_back(tempBoard);
}
}
void Board::displayBoard()
{
std::cout << "\n";
for (int row = 0; row < GameLogic::ROWS - 1; row++)
{
std::cout << "\t"; //std::cout << "1 2 3 4 5 6";
//std::cout << "\nrow";
for (int col = 0; col < GameLogic::COLUMNS - 1; col++)
{
if(col != 0)
std::cout << "|" << m_Board[row][col] << "|";
}
std::cout << "\n";
}
}
bool Board::isMoveLegal(int row, int col)
{
if (m_Board[row][col] == GameLogic::EMPTY) //If square player wants to move to is empty
{
if (row == GameLogic::ROWS - 2 && m_Board[row][col] == GameLogic::EMPTY) //If square player wants to move to is on the bottom row.
{
return true;
}
else
{
int tempRow = row;
tempRow++;
if (m_Board[tempRow][col] != GameLogic::EMPTY)
{
return true;
}
else
{
std::cout << "You cannot move here.\n";
return false;
}
}
}
else
{
std::cout << "You cannot move here.\n";
return false;
}
}
void Board::addPlayerPiece(int row, int col, char playerPiece)
{
m_Board[row][col] = playerPiece;
}
char Board::getBoardPosition(int row, int col) const
{
return m_Board[row][col];
}
bool Board::checkForWinner(GameLogic& gameLogic, char playerPiece)
{
bool foundWinner = false;
searchForWinner(gameLogic, playerPiece, m_SrchHorizontal, foundWinner);
searchForWinner(gameLogic, playerPiece, m_SrchVertical, foundWinner);
searchForWinner(gameLogic, playerPiece, m_SrchRightDiagonal, foundWinner);
searchForWinner(gameLogic, playerPiece, m_SrchLeftDiagonal, foundWinner);
return foundWinner;
}
void Board::searchForWinner(GameLogic& gameLogic, char playerPiece, std::string searchDirection, bool& foundWinner)
{
if (!foundWinner)
{
int i = 0;
for (int row = 0; row < GameLogic::ROWS; row++)
{
for (int col = 0; col < GameLogic::COLUMNS; col++)
{
while (m_Board[row][col] == playerPiece && !foundWinner)
{
i++; //Counts blocks
gameLogic.setWinner(playerPiece);
if (searchDirection == m_SrchHorizontal)
col++;
if (searchDirection == m_SrchVertical)
row++;
if (searchDirection == m_SrchRightDiagonal)
{
row++;
col++;
}
if (searchDirection == m_SrchLeftDiagonal)
{
row++;
col--;
}
if (i == GameLogic::WINNING_ROWS)
{
foundWinner = true;
}
}
i = 0;
}
}
}
else
{
return;
}
}
Player.h
#pragma once
class GameLogic;
class Board;
class Player
{
private:
char m_GamePiece = ' ';
int m_Score = 0;
int getRowPosition(const Board& board) const;
int getColPosition(const Board& board) const;
public:
void move(Board& board);
void setGamePiece(const char gamePiece);
char getGamePiece() const;
};
Player.cpp
#include "stdafx.h"
#include "Player.h"
#include "Board.h"
#include "GameLogic.h"
int Player::getRowPosition(const Board& board) const
{
int row = 0;
bool positionAllowed = false;
while (!positionAllowed)
{
std::cout << "Enter row.\n";
std::cin >> row;
if (row >= 1 && row < GameLogic::ROWS - 1)
positionAllowed = true;
else
std::cout << "Position out of bounds. Please enter again.\n";
}
return row;
}
int Player::getColPosition(const Board& board) const
{
int row = 0;
bool positionAllowed = false;
while (!positionAllowed)
{
std::cout << "Enter column.\n";
std::cin >> row;
if (row >= 1 && row < GameLogic::COLUMNS - 1)
positionAllowed = true;
else
std::cout << "Position out of bounds. Please enter again.\n";
}
return row;
}
void Player::move(Board& board)
{
int row = 0;
int col = 0;
bool moveComplete = false;
while (!moveComplete)
{
row = getRowPosition(board);
col = getColPosition(board);
if (board.isMoveLegal(row, col))
{
board.addPlayerPiece(row, col, getGamePiece());
moveComplete = true;
}
}
}
void Player::setGamePiece(const char gamePiece)
{
m_GamePiece = gamePiece;
}
char Player::getGamePiece() const
{
return m_GamePiece;
}
Answer: Overall, you've done a pretty good job. There's always room for improvement, though...
Compiling with extra warnings
It's always a good idea to compile with as strict a level of compiler warnings as you can. Given the usage of stdafx.h, you're likely using Visual Studio, so this'll be under C/C++ -> General -> Warning Level. I'm using GCC to compile everything, so this is simply -Wall -Wextra. This shows up a few things:
Player.cpp:6:5: warning: unused parameter ‘board’ [-Wunused-parameter]
int Player::getRowPosition(const Board& board) const
^
Player.cpp:23:5: warning: unused parameter ‘board’ [-Wunused-parameter]
int Player::getColPosition(const Board& board) const
^
GameLogic.cpp: In member function ‘void GameLogic::game(Board&, Player&, Player&, GameLogic&)’:
GameLogic.cpp:42:10: warning: unused variable ‘turn’ [-Wunused-variable]
char turn = X; //First turn
These are pretty mild as far as warnings go - nothing that's actually dangerous. However, passing in a parameter that you don't use can be confusing for the person reading your code, so you should try and fix these up.
Stringly-typed functions and reference parameters
In your searchForWinner function, you're passing in a std::string searchDirection, which is limited to a few different options. If you ever have a limited number of (distinct) options, an enum (or enum class) is a better choice:
enum class SearchDirection
{
Horizontal, Vertical, DiagonalLeft, DiagonalRight
};
Passing a boolean reference parameter (bool& foundWinner) is also a bit of a code-smell. Much better would be to return a boolean. This would make your searchForWinner function look like:
bool searchForWinner(GameLogic& gameLogic, char playerPiece, SearchDirection searchDirection);
Your checkForWinner function could then look like:
bool Board::checkForWinner(GameLogic& gameLogic, char playerPiece)
{
return searchForWinner(gameLogic, playerPiece, SearchDirection::Horizontal) ||
searchForWinner(gameLogic, playerPiece, SearchDirection::Vertical) ||
searchForWinner(gameLogic, playerPiece, SearchDirection::DiagonalRight) ||
searchForWinner(gameLogic, playerPiece, SearchDirection::DiagonalLeft);
}
Code simplifications
initBoard() (which actually has a bug that just hasn't manifested itself because ROWS == COLUMNS currently in your code) can be simplified. std::vector has a constructor that takes a count and a value to initialize with:
void Board::initBoard()
{
const static std::vector<char> row(GameLogic::COLUMNS, GameLogic::EMPTY);
m_Board = std::vector<std::vector<char>>(GameLogic::ROWS, row);
}
You could even do away with this function entirely, and simply do this in the constructor via an initializer-list:
Board::Board()
: m_Board(GameLogic::ROWS, std::vector<char>(GameLogic::COLUMNS, GameLogic::EMPTY))
{ }
Your getRowPosition and getColPosition have exactly the same code (including the variable being named row in both!). Whenever you have copy-paste code like this, you should refactor it into a different method:
int Player::getPosition(const std::string& which, int size) const
{
int pos = 0;
bool positionAllowed = false;
while (!positionAllowed)
{
std::cout << "Enter " << which << ".\n";
std::cin >> pos;
if (pos >= 1 && pos < size - 1)
positionAllowed = true;
else
std::cout << "Position out of bounds. Please enter again.\n";
}
return pos;
}
You can then change getRowPosition and getColumnPosition to:
int Player::getRowPosition() const
{
return getPosition("row", GameLogic::ROWS);
}
int Player::getColumnPosition() const
{
return getPosition("column", GameLogic::COLUMNS);
}
There's a bit of confusion regarding the 0-based vs 1-based coordinates you are using. You try to correct for this initially (but only for rows?) with:
if (row == GameLogic::ROWS - 2 && ...)
This looks far more complicated than it needs to be. You've done your input checking in getRowPosition and getColPosition. I'd simply convert them to the equivalent 0-based array offsets (by subtracting 1), so move then looks like:
row = getRowPosition();
col = getColumnPosition();
if(board.isMoveLegal(row - 1, col - 1)) {
...
} | {
"domain": "codereview.stackexchange",
"id": 18148,
"tags": "c++, object-oriented, console"
} |
Find the distance that the ball reaches after breaking the rope | Question:
A ball attached to a rope of $1\,\mathrm{m}$ radius describes circles with a frequency of $f=10\,\mathrm{s^{-1}}$ in a horizontal plane at a height of $3\,\mathrm{m}$ above the floor. If at a certain moment the rope breaks, find:
a) The distance that the ball reaches right after the rope is broken.
Attempt. I've first drawn a diagram, but I'm unsure whether I am right about what is happening. In any case, I tried going with $\omega=2\pi f\implies \omega=20\pi\ rad/s$, hence the velocity after it breaks, I'm assuming, is found by $V=\omega\cdot r=20m/s$. From this moment my confusion comes in, because I don't know if I should treat the movement after the rope breaks as projectile motion; I've tried that, but I don't know the angle with the horizontal and I can't seem to get anywhere. I'm stuck.
Answer: The situation is something like this:
So here, according to the question, the radius of the rope is 1 m and the height of the circular motion is 3 m above the floor.
As you have written about your attempt, you have correctly figured out the angular velocity $\omega$ of the ball but the velocity of the ball must be $v = r\omega$ = (1$m$)*(20$\pi$ $rad/s$) = 20$\pi$ $m/s$
Then, after the rope breaks, use the concept of horizontal projectile motion as given in the below figure:
where your initial height is 3 m, the angle of projection is 0 (the ball is launched horizontally) and the speed of the projectile is 20$\pi$ m/s.
So by applying the equations:$$x = v_{0x}t$$
and $$y = y_0+v_{0y}t+\frac{1}{2}at^2$$
along the horizontal and vertical directions respectively, where $v_{0y} = 0$ (as the initial velocity is horizontal: there is no vertical component of the initial velocity), you must get the answer.
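Plugging in the numbers is a quick sanity check; a hedged sketch of the recipe above, taking g = 9.8 m/s² and measuring the horizontal distance from the point directly below release:

```python
import math

# Hedged numeric sketch: fall time from the y-equation with v0y = 0, then
# horizontal distance x = v0x * t, measured from the release point.

g, h = 9.8, 3.0          # m/s^2, m
v = 20 * math.pi         # tangential speed at release, m/s

t = math.sqrt(2 * h / g) # from 0 = h - (1/2) g t^2
x = v * t
print(round(t, 3), round(x, 1))  # ~0.782 s, ~49.2 m
```

So the ball lands roughly 49 m away, which makes clear how much the missing factor of $\pi$ in the original attempt matters.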
Hope it helps you. | {
"domain": "physics.stackexchange",
"id": 76183,
"tags": "homework-and-exercises, newtonian-mechanics, projectile"
} |
How to model the travel time of tsunami waves generated by volcanic explosion? | Question: I am trying to understand how can I model the travel time of tsunami waves generated by volcanic explosions? I want to analyze the 2022 Tonga tsunami. I have been using the UNESCO CoMIT software with tsunamis generated by underwater earthquakes but not with volcanic eruptions, so any suggestions would be great.
Answer: Tsunamis are shallow waves in the sense that they have wavelengths that typically exceed hundreds of kilometers. This is far greater than the depth of any of the oceans, which means that tsunamis "feel" the bottom of the ocean regardless of the depth of the water. The velocity of a shallow wave is $v\approx\sqrt{g d}$ where $g$ is the local acceleration due to gravitation (including fictitious centrifugal acceleration; $9.80665\,\text{m}/\text{s}^2$ is a reasonable worldwide approximation) and $d$ is the local depth.
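As a rough illustration of that formula (a hedged sketch only: real travel-time tools such as ComMIT integrate the local speed over detailed bathymetry, and the depth and distance below are illustrative, not specific to the Tonga event):

```python
import math

G = 9.80665  # m/s^2, worldwide-average gravitational acceleration

def shallow_wave_speed(depth_m: float) -> float:
    """Tsunami (shallow-water) phase speed: v ~ sqrt(g * d)."""
    return math.sqrt(G * depth_m)

# An open-ocean depth of ~4000 m gives jetliner-like speeds
v = shallow_wave_speed(4000.0)
print(f"speed at 4000 m depth: {v:.0f} m/s (~{v * 3.6:.0f} km/h)")

# Crude travel-time estimate over a 2000 km path of constant depth
distance_m = 2_000_000.0
hours = distance_m / v / 3600
print(f"travel time: {hours:.1f} h")
```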
You will need a model of ocean depth as a function of location, the more detailed the better. | {
"domain": "earthscience.stackexchange",
"id": 2705,
"tags": "oceanography, earthquakes, volcanic-eruption, tsunami"
} |
Transformers in parallel/series | Question: I ran across the diagram below in the mindtomachine blog post describing a homebrew arc welder. It shows two transformers with inputs in parallel and outputs in series. Is there some advantage to this arrangement rather than a single transformer with a doubled turns ratio?
Perhaps it's just what he had at hand, but I wanted to confirm with someone having EE background.
P.S. I'm not planning on building this, just curious.
Answer: I would suggest that this is a case where they had used two old high-current-output transformers with the outputs in series to double the voltage, i.e. just using what was available (second-hand) instead of buying new: a good example of re-purposing.
"domain": "engineering.stackexchange",
"id": 1560,
"tags": "electrical-engineering, power-electronics"
} |
getting click position from pcl_visualizer | Question:
Hi,
I wanted to get the coordinates where a user clicked inside the pcl_visualizer. Preferably, I would like to get the equation of the 3D line that corresponds to back-projection of the point which was clicked.
Thanks,
Abhishek
Originally posted by aa755 on ROS Answers with karma: 61 on 2011-04-20
Post score: 0
Original comments
Comment by AHornung on 2011-04-21:
You would probably get the best answers for this on the PCL mailing list.
Answer:
Unfortunately you can't get the coordinate of a single point using pcd_viewer in pcl_visualization.
Originally posted by kwc with karma: 12244 on 2011-06-17
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 5417,
"tags": "pcl"
} |
Complex Conditionals in a Strategy Game | Question: I have been looking at this method for a long time and I have finally refactored it into something that I believe is much more readable. I am not totally happy with it though so I would like to get other opinions about it. There are a great many questions about complex if-else structures here on Code Review, so I am sorry to be adding to it. Hopefully this one will be interesting, at least.
This method is very important to my Game as a whole. It checks to make sure that a Job is valid for a particular Floor in the tower. It is called when the User or AI selects a job to add to the JobQueue, and also again when the JobQueue tries to start the job. The problem is that the construction rules are different for floors that are above ground compared to those that are below ground.
In the past, I had a switch statement for the jobType, and then an if-else statement inside of that to check if the floor was above or below ground. Then inside of that were the necessary cases for all of the jobTypes. The downsides to this were that some of the code was repeated, everything was happening inside one method, and it was not totally clear what was happening.
I have refactored this down to a method that calls two other methods. It always calls the first method, and then it calls one of two additional methods depending on whether the job rules are different for above and below ground for that particular job. I am primarily concerned with the clarity and readability of this code, especially whether or not there are enough comments. If there are better ways to do this I would love to hear that also.
-(BOOL) checkIfFloor:(int)floorNumber isValidForJob:(JobType)jobType {
DTTowerFloor *floor = [self floorFromFloorList:floorNumber];
if (![self checkEarlyReturnConditionsForFloor:floor jobType:jobType]) {
return NO;
}
if ([self isJobSimpleJob:jobType]) {
//these conditions are the same for above and below ground
if (![self checkIfFloor:floor isValidForSimpleJob:jobType]) {
return NO;
}
} else {
//these conditions are different for above and below ground
BOOL aboveGround = floor.floorNumber >= 0;
if (![self checkIfFloor:floor isValidForJob:jobType aboveGround:aboveGround]) {
return NO;
}
}
return YES;
}
-(BOOL) checkEarlyReturnConditionsForFloor:(DTTowerFloor *)floor jobType:(JobType)jobType {
//check these conditions first for an early return
if (floor.isRevealed == NO) {
return NO;
}
if ([floor alreadyHasJobOfType:jobType]) {
return NO;
}
if (floor.floorState == FloorUnderAttack && jobType != JobTypeFightEnemy) {
return NO;
}
if (jobType == JobTypeFightEnemy && (floor.enemies.count == 0 && !floor.enemyCave)) {
return NO;
}
if (floor.floorState == FloorDestroyed) {
if (!(jobType == JobTypeCleaningJob || jobType == JobTypeFightEnemy)) {
return NO;
}
}
return YES;
}
-(BOOL) isJobSimpleJob:(JobType)jobType {
return (!(jobType == LadderJob || jobType == BottomBuildJob || jobType == WallBuildJob));
}
-(BOOL) checkIfFloor:(DTTowerFloor *)floor isValidForSimpleJob:(JobType)jobType {
switch (jobType) {
//simple jobs
case JobTypeMining:
if (floor.floorBuildState & FloorHasLadder && (int)floor.groundBlocks.count > 0) {
return YES;
}
break;
case JobTypeHaulItem:
if (floor.itemsNeedingHauling.count > 0 && (int)floor.groundBlocks.count <= 0) {
return YES;
}
break;
//building jobs
case RoomBuildJob:
{
NSInteger requiredBuildings = (FloorHasBottom | FloorHasWalls);
if ((floor.floorBuildState & requiredBuildings) == requiredBuildings && ((floor.floorBuildState & FloorHasRoom) == 0)) {
return YES;
}
break;
}
case SuperiorWallBuildJob:
{
NSInteger requiredBuildings = (FloorHasBottom | FloorHasWalls);
if ((floor.floorBuildState & requiredBuildings) == requiredBuildings) {
return YES;
}
break;
}
//enemies and animals
case JobTypeFightEnemy:
if (floor.floorState == FloorUnderAttack || floor.floorState == FloorDestroyed) {
return YES;
}
break;
case JobTypeCleaningJob:
if (floor.floorState == FloorDestroyed && floor.enemies.count == 0 && !floor.enemyCave) {
return YES;
}
break;
case JobTypeHaulAnimalToPasture:
case JobTypeHaulAnimalToSlaughterhouse:
if (floor.animals.count > 0) {
return YES;
}
break;
case JobTypeBreedAnimals:
if (floor.animals.count > 1) {
return YES;
}
break;
default:
break;
}
return NO;
}
-(BOOL) checkIfFloor:(DTTowerFloor *)floor isValidForJob:(JobType)jobType aboveGround:(BOOL)aboveGround {
switch (jobType) {
//building jobs
case LadderJob:
if (aboveGround) {
if (((floor.floorBuildState & FloorHasLadder) == 0) && (floor.floorBuildState & FloorHasBottom)) {
return YES;
}
} else {
if ((floor.floorBuildState & FloorHasLadder) == 0) {
return YES;
}
}
break;
case BottomBuildJob:
if (aboveGround) {
NSNumber *oneFloorBelowNum = [NSNumber numberWithInt:floor.floorNumber - 1];
DTTowerFloor *oneFloorBelow = [self.towerDict objectForKey:oneFloorBelowNum];
if (((floor.floorBuildState & FloorHasBottom) == 0) && (int)floor.groundBlocks.count <= 0 && (oneFloorBelow.floorBuildState & FloorHasWalls)) {
return YES;
}
} else {
NSInteger requiredBuildings = (FloorHasLadder | FloorHasWalls);
if (((floor.floorBuildState & requiredBuildings) == requiredBuildings) && ((floor.floorBuildState & FloorHasBottom)== 0)) {
return YES;
}
}
break;
case WallBuildJob:
if (aboveGround) {
NSInteger requiredBuildings = (FloorHasBottom | FloorHasLadder);
if (((floor.floorBuildState & requiredBuildings) == requiredBuildings) && ((floor.floorBuildState & FloorHasWalls) == 0)) {
return YES;
}
} else {
if ((floor.floorBuildState & FloorHasLadder) && ((floor.floorBuildState & FloorHasWalls) == 0) && (int)floor.groundBlocks.count <= 0) {
return YES;
}
}
break;
default:
break;
}
return NO;
}
I should note that when it comes to bitwise conditionals such as (floor.floorBuildState & FloorHasLadder) and ((floor.floorBuildState & FloorHasWalls) == 0), my understanding is that you should not pass those back directly, such as return (floor.floorBuildState & FloorHasLadder), but should instead return YES or NO, because they are not pure bools and can cause problems when passed around directly. If this is incorrect, please let me know.
Also if you see any cases that do not have the JobType at the start of the enum field name, I just have not changed all of them yet. I will definitely be getting around to that eventually.
Answer:
I should note that when it comes to bitwise conditionals such as
(floor.floorBuildState & FloorHasLadder) and ((floor.floorBuildState &
FloorHasWalls) == 0), my understanding is that you should not pass
those back directly such as return (floor.floorBuildState &
FloorHasLadder) but should instead return YES or NO, because they are
not pure bools and can cause problems when passed around directly. If
this is incorrect, please let me know.
In general, this is mostly correct.
Technically, 0 is false, and everything else should be regarded as true.
Suppose we have this example:
int x = 2;
if (x) {
foo();
}
if (x == YES) {
bar();
}
In this example, foo() will execute, but bar() will not. Now, for starters, this is why we don't write == YES.
But people are foolish, and will.
floor.floorBuildState & FloorHasLadder will only ever result in being equal to 1 or 0 if FloorHasLadder is equal to 1. And chances are its actual value could change; plus, for any other value in this enum, it simply won't work.
So we need to make sure we're returning 1 or 0 from a method that claims to return a BOOL. Bitwise operations make no such guarantee.
LOGICAL operators, however, do make that guarantee.
So in the example of (floor.floorBuildState & FloorHasWalls) == 0, we'd be returning the result of the logical == operator, this is guaranteed to be either 1 or 0, YES or NO. Returning this value is perfectly fine.
We could force a logical operator onto everything by appending an && YES to everything, which will force everything to evaluate to either YES or NO, an expected return value... but this seems more likely to confuse maintainers.
Moreover, sometimes doing the explicit return and sometimes relying on the == 0 return will show inconsistency, which might encourage a foolish maintainer to make a poor decision to "fix" something that's not broken for the sake of consistency.
So with all that said, I think it's probably best to stick with the explicit returns.
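The hazard is easy to reproduce in any C-family language; here is the same trap sketched in Python, where True compares equal to 1 (the flag values below are hypothetical, invented just for the demonstration):

```python
# Hypothetical bit flags, mirroring the FloorHas... enum from the question
FloorHasBottom = 1 << 0  # 1
FloorHasLadder = 1 << 1  # 2
FloorHasWalls  = 1 << 2  # 4

state = FloorHasBottom | FloorHasLadder  # 3

raw = state & FloorHasLadder  # bitwise AND preserves the flag's bit value
print(raw)            # 2 -- truthy, but NOT equal to 1
print(bool(raw))      # True
print(raw == True)    # False: the "== YES" trap, since True == 1

# Forcing a genuine boolean, analogous to explicitly returning YES/NO:
has_ladder = (state & FloorHasLadder) != 0
print(has_ladder == True)  # True
```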
One other thing I do want to point out is the fact that when you return out of a switch statement, the break is unnecessary.
In the case of both of your switch statements, you're always entering an if condition based on a logical operator, or break-ing the switch down to a final return NO;. Here, we can simply just return the result of the logic operator, get rid of all the breaks, get rid of the return NO at the bottom, and set the default case to return NO; | {
"domain": "codereview.stackexchange",
"id": 9490,
"tags": "game, objective-c"
} |
Reservoir Sampling of an enumerable collection of unknown size | Question: I have an enumerable source whose size I do not know. I want to select a random element from it, without holding the whole collection in memory, and with uniform distribution. I read about the reservoir sampling algorithm and created a simple implementation for the very basic case where the reservoir size is 1, which is what I want.
Basically the idea is that if an item appears in position n (starting at 1), and a random roll returns less than 1/n, I pick that value as the current selected value.
static T Random<T>(IEnumerable<T> source, Random random)
{
using var e = source.GetEnumerator();
if(!e.MoveNext())
return default;
var result = e.Current;
var count = 1;
while(e.MoveNext())
{
if(random.NextDouble() < 1.0 / ++count)
result = e.Current;
}
return result;
}
I created a demo project and the results seem to add up.
https://dotnetfiddle.net/0iXEck
public static void Main()
{
var size = 1000;
var tries = size * 100;
var random = new Random();
var array = Enumerable.Range(0, size).ToArray();
var histogram = array.ToDictionary(k => k, v => 0);
for(var i=0; i<tries; i++)
histogram[Random(array, random)]++;
foreach(var kv in histogram.OrderBy(kv => kv.Key))
Console.WriteLine($"{kv.Key,5}: {kv.Value}");
}
I wonder if I am missing anything, because probability knowledge is not my strongest point. Is this still the "reservoir sampling algorithm", or does it have another name?
Answer: Let me try another answer.
Background info: Wikipedia Reservoir Sampling.
The intro header to the Wikipedia link states:
The size of the population n is not known to the algorithm and is
typically too large for all n items to fit into main memory.
But consider you have a dictionary to hold the results for all n items in your main memory. And each example they provide has it returning a complete array, so it all has to fit into memory somehow.
You could make your life easier if your method accepted an IList<T> since that would return a known Count. Consider that each individual call to your method will enumerate over EACH element in the collection. Since you pass in a 1000-element array, all 1000 elements are enumerated for one call, and you make 100,000 calls. I am just saying that after the first call has been made, the size of the collection is no longer unknown; rather, it is not remembered between invocations.
But that is not what you asked for. My answer continues with IEnumerable<T> of unknown length.
Typically, one does not pass in the Random instance, but rather relies upon the method to have one handy.
The method name Random<T> is a poor name producing much confusion with the Random class. I suggest it be GetReservoirSample<T>.
For C#, the Code Review community strongly supports using braces to avoid one-liners.
There is no reason to OrderBy Key on histogram since it was created in Key order.
A quick reworking of your method, which includes Random as an optional parameter, would become:
private static Random _random = new Random();
public static T GetReservoirSample1<T>(IEnumerable<T> source, Random random = null)
{
T result = default;
int count = 0;
if (random == null)
{
random = _random;
}
IEnumerator<T> e = source.GetEnumerator();
// This enumerates over the entire collection!!!
while (e.MoveNext())
{
if (random.NextDouble() < (1.0 / ++count))
{
result = e.Current;
}
}
return result;
}
But that can be simplified by getting rid of the Random parameter. And since you are iterating over all elements, there is no need for the explicit enumerator; you may use a foreach instead, yielding smaller code.
public static T GetReservoirSample2<T>(IEnumerable<T> source)
{
T result = default;
int count = 0;
// This enumerates over the entire collection!!!
foreach (var element in source)
{
if (_random.NextDouble() < (1.0 / ++count))
{
result = element;
}
}
return result;
}
IList versus IEnumerable
Finally, if you ever decide to change source to be an IList<T>, there is this version:
static T GetReservoirSample3<T>(IList<T> source)
{
T result = source.Count > 0 ? source[0] : default;
// This enumerates over the entire collection MINUS ONE!!!
for (int i = 1; i < source.Count; i++)
{
if (_random.NextDouble() < (1.0 / (i + 1.0)))
{
result = source[i];
}
}
return result;
}
UPDATED
In my original example using IList<T>, there is no performance benefit. Each example must enumerate over the full collection. However, with an IList you can get a performance boost. Essentially, your method remembers the last element that satisfies the condition, but it must continue to enumerate over the remainder of the collection.
With a list, you can process the collection backwards and immediately return on the first item that satisfies the condition:
static T GetReservoirSample3B<T>(IList<T> source)
{
for (int i = source.Count - 1; i > 0; i--)
{
if (_random.NextDouble() < (1.0 / (i + 1.0)))
{
return source[i];
}
}
return source[0];
}
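Whichever direction the scan runs, the $k=1$ selection should come out uniform: index $i$ (1-based) survives the forward scan with probability $\frac{1}{i}\prod_{j>i}\left(1-\frac{1}{j}\right)=\frac{1}{n}$. A quick empirical check, written here as a Python analogue of the forward scan rather than a literal translation of the C# above:

```python
import random
from collections import Counter

def reservoir_sample_one(source, rng):
    """k = 1 reservoir sampling over an iterable of unknown length."""
    result = None
    for count, element in enumerate(source, start=1):
        # Keep the current element with probability 1/count
        if rng.random() < 1.0 / count:
            result = element
    return result

rng = random.Random(42)  # fixed seed so the experiment is repeatable
size, tries = 10, 100_000
hist = Counter(reservoir_sample_one(range(size), rng) for _ in range(tries))

expected = tries / size  # 10,000 hits per value if the sampler is uniform
worst = max(abs(c - expected) for c in hist.values())
print(f"expected ~{expected:.0f} per value, worst deviation {worst:.0f}")
```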
Also, I wanted to see the min and max values, so I altered Main for my own purposes. I share it here:
public static void Main()
{
var size = 1000;
var tries = size * 100;
var array = Enumerable.Range(0, size).ToArray();
var histogram = array.ToDictionary(k => k, v => 0);
var random = new Random();
for (var i = 0; i < tries; i++)
{
histogram[GetReservoirSample3B(array)]++;
}
var min = int.MaxValue;
var max = int.MinValue;
foreach (var kv in histogram)
{
if (kv.Value < min)
{
min = kv.Value;
}
if (kv.Value > max)
{
max = kv.Value;
}
}
foreach (var kv in histogram)
{
var extra = (kv.Value == min) ? "\t** MINIMUM ** " : (kv.Value == max) ? "\t** MAXIMUM **" : "";
Console.WriteLine($"{kv.Key,5}: {kv.Value}{extra}");
}
}
All that said, I do not know if you are correctly implementing a reservoir sampling. From what I read, it expects you to return a sample subset of k elements, where k is less than or equal to the source collection size n. Since you are returning a single element, that is, k == 1, this produces the same distribution as:
static T GetReservoirSample3C<T>(IList<T> source) => source[_random.Next(source.Count)]; | {
"domain": "codereview.stackexchange",
"id": 41431,
"tags": "c#, random"
} |
Perturbation of a Schwarzschild Black Hole | Question: Suppose we have a perfect Schwarzschild black hole (uncharged and stationary), and we "perturb" the black hole by dropping in some small object. For simplicity, "dropping" means sending the object on a straight inward trajectory near the speed of light.
Clearly the falling object will cause some small (time-dependent) curvature of space due to its mass and trajectory, and in particular, once it passes the event horizon, the object will cause some perturbation to the null surface (horizon) surrounding the singularity (intuitively I would think these would resemble waves or ripples), analogously to how a pebble dropped in a pond causes ripples along the surface.
Is there any way to calculate (i.e. approximate numerically) the effect of such a perturbation of the metric surrounding the black hole, and specifically to calculate the "wobbling" of the null surface as a result of the perturbation, maybe something analogous to quantum perturbation theory?
Or more broadly, does anyone know of any papers or relevant articles about a problem such as this?
Answer: Your intuitive picture is basically correct. If you perturb a black hole it will respond by "ringing". However, due to the emission of gravitational waves and because you have to impose ingoing boundary conditions at the black hole horizon, the black hole will not ring with normal-modes, but with quasi-normal modes (QNMs), i.e., with damped oscillations. These oscillations depend on the black hole parameters (mass, charge, angular momentum), and are therefore a characteristic feature for a given black hole.
Historically, the field of black hole perturbations was pioneered by Regge and Wheeler in the 1950s.
For a review article see gr-qc/9909058
For the specific case of the Schwarzschild black hole there is a very nice analytical calculation of the asymptotic QNM spectrum in the limit of high damping by Lubos Motl, see here. See also his paper with Andy Neitzke for a generalization.
Otherwise usually you have to rely on numerical calculations to extract the QNMs. | {
"domain": "physics.stackexchange",
"id": 814,
"tags": "general-relativity"
} |
What does this vector represent? | Question: Let $p$ be the position vector from the origin of frame {s} (i.e. the inertial frame) to the origin of the body frame {b}. Take a look at the following picture: the vectors $\omega_b,v_b$ represent the angular and linear velocities of frame {b} attached to the moving robot, expressed in the body frame. The vector $\omega_s$ is the angular velocity of frame {b} expressed in the inertial frame {s}. Surprisingly, the vector $v_s$ is not the linear velocity of the body frame's origin expressed in the inertial frame {s} (i.e. $\dot{p} \neq v_s$). The actual formula is
$$
\dot{p} = v_s + \dot{R}R^T p
$$
The notation confuses me. In the book I'm reading, it is
the physical meaning of $v_s$ can now be inferred: imagining the moving
body to be infinitely large, $v_s$ is the instantaneous velocity of the
point on this body currently at the fixed-frame origin, expressed in {s}.
Could anyone provide a different explanation of what exactly this vector means? Why do we need it if it is not the linear velocity of the origin of frame {b} expressed in the inertial frame {s}?
Reference: Modern Robotics Mechanics, Planning, and Control
Answer: Anything to do with rotations is often difficult and usually counter-intuitive.
I have used the symbols that you have quoted but simplified the diagram a little.
The top image is the initial condition with frame{s} at position $K$ and frame{b} at position $L$.
The displacement from $K$ to $L$ is $\vec p$.
I have introduced another set of coordinate axes attached to the "extended" robot and positioned at $K$, and called the unit vectors $\hat X_{\rm b}$ and $\hat Y_{\rm b}$.
A key idea is that the two sets of axes attached to the robot do not move relative to one another as the robot is rigid.
Now let the robot move with both a rotation and a translation relative to frame{s} to a new position in a time $\Delta t$ with the two sets of axes attached to the robot moving to positions $K'$ and $L'$ as shown in the lower figure.
The displacement from position $L$ to position $L'$ is $\Delta \vec p$ and the displacement from position $K$ to position $K'$ is $\Delta \vec q$.
Relative to frame {s}, the frame attached to the left-hand side of the robot has undergone a displacement of $\Delta \vec q$ in a time of $\Delta t$, so $\vec v_{\rm s} = \dfrac{\Delta \vec q}{\Delta t}$, and this must also be the velocity of the robot frame at position $L'$ as the robot is rigid.
From the diagram it can be seen that $\vec v_{\rm s} \ne \dfrac{\Delta \vec p}{\Delta t}$. | {
"domain": "physics.stackexchange",
"id": 85084,
"tags": "classical-mechanics, kinematics"
} |
Newton's third law of motion versus Work | Question: Newton's third law of motion says that "To every action, there is always an equal and opposite reaction". Vector algebra tells us that if two vectors of the same nature have equal magnitude but opposite direction, say $\vec{A}$ and $\vec{B}$, then the resultant vanishes:
$$\vec{A} = -\vec{B} \implies \vec{A} + \vec{B} = \vec{0}$$
My question is: if I carry an object upward, then its reactive force must be acting against my force, and according to Newton's third law of motion it should be equal to my force; by the above equation both must cancel each other. How then am I able to do work on the object by moving it upward?
Answer: This is a common misconception. When you apply a force upward on the object, the "reaction" force in Newton's 3rd Law is NOT the force of gravity down on the object; they do not have to be equal, and as you said, cannot be equal if you are to accelerate the object upwards.
It is just a confusing coincidence that the force of gravity kind of looks like a reaction force, but it's not. One key giveaway that you have not identified a "action/reaction pair" is that both of the forces are on the same object - the force of your hand on the object, and the force of gravity on the object. Those can't possibly be a "3rd law pair".
The more accurate way of saying Newton's 3rd Law is this:
"If object A puts a force on object B, object B puts an equal and opposite force on object A, and the forces are the same type, and occur at the same time".
In your situation, those two forces are part of two separate action/reaction pairs;
the reaction force to you pushing up on the object is the object pushing down on you
the force of gravity on the object (from the Earth) is a reaction force to the force of gravity on the Earth from the object.
You can put an arbitrarily large amount of force on the object, and the force of gravity opposing you on the object will remain the same. On the other hand, the reaction force from the object pressing on you will become equally arbitrarily large. Luckily, you plus whatever you're standing on (probably the Earth) is way more massive than the object is, so the reaction force produces a negligible acceleration on you & the Earth, but you produce a large acceleration for the object (F = m a). | {
"domain": "physics.stackexchange",
"id": 19970,
"tags": "newtonian-mechanics, vectors, work"
} |
Difference Between Two Forms of Equations of Auto Regressive (AR) Model | Question: I found equation 1 for the autoregressive model in various books and articles, but I also found equation 2 for the AR model. I understand the physical meaning of the equation, but the two different equations confused me a lot.
Can anyone tell me what the difference between them is and where each should be used? I want to use the AR equation for forecasting, so which one should I use and why? The two equations should yield different results because of the negative sign (which I think should just flip the actual signal).
Answer: Auto Regressive Model means the current output is a linear combination of previous outputs and driving noise:
$$ y \left[ n \right] = \sum_{k = 1}^{p} {a}_{k} y \left[ n - k \right] + v \left[ n \right] $$
As you can see, the current value $ y \left[ n \right] $ depends on the $ p $ values before it.
The parameter $ p $ is the order of the model while $ v \left[ n \right] $ is the driving IID white noise.
Usually if we want to estimate the parameters of the model we have 2 main cases:
The Model Order $ p $ Is Known
In this case we need to estimate $ p $ parameters $ \left\{ {a}_{k} \right\}_{k = 1}^{p} $ from a set of $ m \geq p $ samples $ \left\{ y \left[ n \right] \right\}_{n = 1}^{m} $. The classic way to do so is using the Yule-Walker normal equations.
The Model Order $ p $ Is Unknown
If $ p $ is unknown, the intuitive approach is to estimate it and then go back to case (1). One could estimate it by running over a grid of options (try model order 3, 4, 5, 6, ... and choose the one which works best in MSE), yet this might overfit. A classic approach to evaluating the grid is the Akaike Information Criterion (AIC).
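For the known-order case with $p = 1$, the Yule-Walker equations collapse to a single ratio of sample autocorrelations, $\hat a_1 = \hat r(1)/\hat r(0)$. A minimal pure-Python sketch (the parameter values are illustrative, not taken from the question):

```python
import random

def simulate_ar1(a, n, rng):
    """Generate y[n] = a*y[n-1] + v[n] with v ~ N(0, 1) white noise."""
    y = [0.0]
    for _ in range(n - 1):
        y.append(a * y[-1] + rng.gauss(0.0, 1.0))
    return y

def yule_walker_ar1(y):
    """Order-1 Yule-Walker estimate: a_hat = r(1) / r(0)."""
    n = len(y)
    r0 = sum(v * v for v in y) / n                       # lag-0 autocorrelation
    r1 = sum(y[i] * y[i + 1] for i in range(n - 1)) / n  # lag-1 autocorrelation
    return r1 / r0

rng = random.Random(0)
a_true = 0.7
a_hat = yule_walker_ar1(simulate_ar1(a_true, 50_000, rng))
print(f"true a = {a_true}, Yule-Walker estimate = {a_hat:.3f}")
```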
In your equations:
The connection is between the output and a different signal, which means it is not an AR model.
If you tell us where you got those forms, we might be able to clear things up.
Welcome to DSP! | {
"domain": "dsp.stackexchange",
"id": 6635,
"tags": "discrete-signals, linear-systems, autoregressive-model"
} |
Looking for reference material in quantum algebra | Question: I want to reconstruct some problems, closely aligned with information retrieval in the form of quantum algebra, to take my study deeper. But there are very limited books available in the Indian market. Can anyone suggest some reference material about quantum algebra?
Answer: A great reference on the subject matter is Quantum Groups and Their Representations. The authors, Klimyk and Schmüdgen cover a variety of areas. For instance, quantized universal enveloping algebras, quantized algebras of functions, q-oscillator algebras, their representations and corepresentations, and noncommutative differential calculus, are among the many topics explored in the book with potential physical and mathematical applications in mind.
The basic quantum groups, quantum algebras and their representations are given in detail with the necessary proofs and presented with explicit formulae. A number of topics and results from the more advanced general theory are developed and discussed as well, making it a very deep and cohesive presentation. | {
"domain": "physics.stackexchange",
"id": 91161,
"tags": "quantum-mechanics, quantum-information, resource-recommendations"
} |
Least force required | Question:
Relying on my past knowledge of how to attack the problem, I should use the equation
Moment of B about AC $= \left(\vec{r}_{AB} \times \vec{B}\right) \cdot \hat{n}_{AC}$
To find the least force on B.
I just don't know what position vector I should use for B. I am fairly certain that I should not use point A or C as a reference to get its position vector. That's what I believe. Or should I use one of them? Or is the way I know not the right way of solving? Is there any other way?
Sharing what I've done so far: I tried solving for the midpoint of line AC, thinking force B could act from point B to the midpoint of line AC. Say I've got my position vector for force B; I continued solving for $r\times B$, then the dot product of the answer and the unit vector along AC, and I got force B as 28 lb. Since my process is unclear, surely my answer was wrong too.
Answer: You need to compute the length of the moment arm from $B$ to $\overline{AC}$
That is the height of the triangle, and can be computed directly from tuples as $\frac{\|\overline{AC} \times \overline{AB} \|}{\|\overline{AC}\|}$
So $\overline{AC}\times\overline{AB}= \{72,108,54\}\qquad\|\overline{AC}\times\overline{AB}\|= 140.58\,\text{in}^2$
and $\overline{AC}=\{-9,6,0\}\qquad\|\overline{AC}\|=10.82\,\text{in}$
$140.58\,\text{in}^2\,/\,10.82\,\text{in}=13\,\text{in}$
$260\, in\, lbf/13\, in = 20\, lbf$ | {
"domain": "engineering.stackexchange",
"id": 3783,
"tags": "statics"
} |
FIR Filter 3dB Frequency | Question: I am new to signal processing and I want to calculate the 3 dB cut-off frequency of this FIR filter, which has the following transfer function:
For the evaluation I need to replace $z$ with $e^{j\Omega}$ and set the squared magnitude equal to $1/2$.
But the problem is that when I try to solve this for omega, zero is the result. My question is: what am I doing wrong here, or is something missing?
Answer: There are several problems with your approach:
The expression $\displaystyle\frac{1+z^{-1}}{2}$ is the transfer function $H(z)$, NOT its magnitude $|H(z)|$.
Consequently, the expression $\displaystyle \frac{1+e^{-j\Omega}}{2}$ is the complex frequency response $H(e^{j\Omega})$, not its magnitude.
So now you have to compute the squared magnitude of $H(e^{j\Omega})$ and set it equal to $\frac12$. This is a basic exercise in complex numbers, so I trust that you can take it from here.
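If you want to verify your algebra numerically once you have worked it through (fair warning: this sketch evaluates the response directly, so it also confirms the cutoff you should arrive at):

```python
import cmath
import math

def H(omega: float) -> complex:
    """Frequency response of H(z) = (1 + z^-1)/2 evaluated at z = e^{j*omega}."""
    return (1 + cmath.exp(-1j * omega)) / 2

def mag2(omega: float) -> float:
    """Squared magnitude |H(e^{j*omega})|^2."""
    return abs(H(omega)) ** 2

print(mag2(0.0))          # 1.0 at DC (the filter is a lowpass averager)
print(mag2(math.pi / 2))  # ~0.5: the half-power (3 dB) point
print(mag2(math.pi))      # ~0.0 at Nyquist
```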
For some magical reason, the right half of your last equation is actually the final correct equation, even though everything before it is wrong. If you think about it for a while, you should figure out that the solution is given by
$$\cos(\Omega_c)=0\tag{1}$$
which certainly does not imply that $\Omega_c=0$. What is it that it does imply? | {
"domain": "dsp.stackexchange",
"id": 9712,
"tags": "lowpass-filter, finite-impulse-response, frequency-response"
} |
What is the emf in this circuit? | Question: There is an infinite solenoid with radius $r$ inside the first loop powered by a current that changes over time so that the magnitude of the magnetic field inside the solenoid is $B(t)$. According to Faraday's law this generates an emf in the circuit, given by:
\begin{equation}
\mathcal{E}=-\frac{d\Phi}{dt}=-\pi r^2\frac{dB}{dt}
\end{equation}
This is equivalent to an emf generator. My guess is that the correct configuration is
where the cell provides an emf $\mathcal{E}$ as above. In a book I have, though, the solenoid is considered equivalent to the following configuration:
where each cell provides an emf $\mathcal{E}'=\frac{\mathcal{E}}{4}$. This situation is clearly different and yields different results for the currents running in the loops. Actually, the left loop is no different mathematically, while the second loop is, because of the generator in the common branch.
What is the correct configuration and why?
I thought that Faraday law should apply both to the left loop and to the big loop including the right and the left loop, this is why the first configuration looks fine to me.
EDIT: here is the figure in the textbook (read comments in the accepted answer for more information). Sorry for the quality.
Answer: You are correct. The emf is present in the left hand loop and the outer loop, because the changing flux is linked with both these loops. There is no net emf in the right hand loop, because no (changing) flux is linked with it.
It is therefore fine, for purposes of circuit theory, to represent the emf as if it were concentrated in the place you have shown (or in the left hand vertical line of the circuit or the left hand of the bottom line).
[We can understand where the textbook writers were coming from. They see the left hand loop surrounding the solenoid and believe that there ought to be emfs in all four sides of the square, including in the wire containing R2. They are forgetting that, applying the same reasoning to the outer loop, there ought also to be a 'downward' emf in R3, as well as emfs in other parts of the loop. So the emf they have shown in the wire containing R2 is not, on this viewpoint, the only emf in the right hand loop – and we know that the resultant emf in the right hand loop is zero. In general it is unsafe to argue in terms of emfs other than resultant emfs in loops.]
Which book gave you the other answer? | {
"domain": "physics.stackexchange",
"id": 82412,
"tags": "magnetic-fields, electric-circuits, electromagnetic-induction"
} |
How to find the torque required to rotate wooden slab | Question: How can I find the Torque needed to rotate a wooden slab with no weight applied on either end of the slab if I have a servo motor placed at the middle of the slab?
Answer: Torque is equal to angular acceleration times the moment of inertia of your board, idealized to $\frac{1}{12}mL^2$.
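A rough numerical sketch of that formula; all the numbers here (mass, length, target acceleration) are made up for illustration:

```python
# tau = I * alpha, with I = (1/12) * m * L**2 for a thin slab spun about its center.

def required_torque(mass_kg, length_m, alpha_rad_s2):
    moment_of_inertia = mass_kg * length_m ** 2 / 12.0  # kg*m^2
    return moment_of_inertia * alpha_rad_s2             # N*m

torque = required_torque(0.5, 0.3, 2.0)  # 0.5 kg slab, 0.3 m long, 2 rad/s^2
print(torque)  # ~0.0075 N*m
```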
There are interesting points to check for though.
With a small defect in symmetry your board can turn into a propeller and create lift.
If the face thickness is not insignificant the whirling of the air current can become very complex. | {
"domain": "engineering.stackexchange",
"id": 4419,
"tags": "electrical-engineering, motors, electrical"
} |
Why does the lowering operator applied to lowest state have to be 0? | Question: When solving a problem in QM with raising and lowering operators ($\hat{a},\hat{a}^\dagger,L_{\pm},..$) it is often assumed that:
$$ L_-|\Omega\rangle=0 $$
Why is this assumed?
Couldn't the result of applying the lowering operator to the lowest eigenstate give a mathematical function without physical meaning?
Or the same for applying the raising operator to the eigenstate with the highest eigenvalue (say energy, angular momentum, etc) ?
I am looking for a formal way of thinking about this.
Answer: Harmonic oscillator
For the harmonic oscillator ladder operators the reason is the following:
A physical system where there is no lower bound on the energy is unstable.${}^{1}$ This is why in QM we always assume that there is one state which attains the smallest value of the energy: $|\Omega\rangle$.
As a consequence of the commutation relations, the energy of $\hat{a}|\Omega\rangle$ is one unit of $\omega \hbar$ less than that of $|\Omega\rangle$.
$$
H \,\hat{a}|\Omega\rangle = (E_{\mathrm{min}} -\hbar\omega)\,\hat{a}|\Omega\rangle\,.
$$
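This follows from the commutator $[H,\hat{a}]=-\hbar\omega\,\hat{a}$ together with $H|\Omega\rangle=E_{\mathrm{min}}|\Omega\rangle$:

$$
H\,\hat{a}|\Omega\rangle = \hat{a}H|\Omega\rangle + [H,\hat{a}]|\Omega\rangle = (E_{\mathrm{min}}-\hbar\omega)\,\hat{a}|\Omega\rangle\,.
$$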
So the only way out is that $\hat{a}|\Omega\rangle$ is not an eigenstate, therefore it has to be the zero state (not $0$ as a number but the null element of the Hilbert space).
Angular momentum
For the angular momentum the reason is the following:
We want to build finite dimensional representations. For the same reasoning as above the eigenvalue under $\hat{L}_z$ of $\hat{L}_-|\Omega\rangle$ is $\hbar$ less than that of $|\Omega\rangle$. Since we are diagonalizing an Hermitian operator ($\hat{L}_z$), then $\hat{L}_-|\Omega\rangle$ is orthogonal to $|\Omega\rangle$, so it's either a new state or the zero state.
If it's not the zero state then this procedure never ends and the representation becomes infinite dimensional. There is nothing wrong with infinite dimensional representations (they do exist), but particles typically transform under finite dimensional ones.
Note: the two arguments are really the same argument, but I wanted to emphasize the fact that the crucial point (boundedness of energies vs. finite dimension) is different.
$\quad{}^1$ Having negative energies is not problematic per se. The theory is fine as long as there is a lower bound. Naturally we can always change the offset so that this lower bound is zero | {
"domain": "physics.stackexchange",
"id": 60592,
"tags": "quantum-mechanics, hilbert-space, operators, vacuum"
} |
How does this measurement in the Hadamard basis look like? | Question: I am reading this paper by Mahadev. In going from (19) to (20) the author does a Hadamard measurement on two registers. I don't understand what exactly the Hadamard measurement does.
The (simplified) claim is as follows. Let's start with the following state
$$\vert 0\rangle\vert x_0\rangle + \vert1\rangle\vert x_1\rangle.$$
If one applies the Hadamard measurement to the second register, one obtains
$$\vert 0\rangle + (-1)^{d\cdot(x_0\oplus x_1)}\vert 1\rangle,$$
where $d$ is the "measurement outcome".
I've removed some other terms that exist in the original paper but I believe the above is correct. Is there a way to see why this claim holds? I see it for the case where $x_0$ and $x_1$ are bits but the argument is for bit strings too and I'm not sure why.
For the general case, is $|d|=|x_i|$ i.e. if $x_i$ are $k$-bit strings, then we have $k$ possible measurement outcomes?
If 1. is true, why does the phase $d\cdot(x_0\oplus x_1)$ emerge?
This is also discussed in this video.
Answer: Answer refined based on updated question
Your updated question boils down to something like “if we take a Hadamard transform of a superposition of two basis states and measure, why is our string related to the bitwise XOR of the two basis states?”
But this much has been known since Simon's algorithm. Simon proposed a very restrictive promise on his oracle $U_f$, while the more modern tests use a trapdoor claw-free function - but either way it's the same idea. The Hadamard test reveals XOR patterns hidden in the output of a function.
For example, if you have a wavefunction $|\psi\rangle$ on a register of $k$ qubits that's a superposition of two random basis states with $x_i\in\{0,1\}^k$:
$$|\psi\rangle=\frac{1}{\sqrt 2}\big(|x_0\rangle+|x_1\rangle\big),$$
then taking the Hadamard transform of each qubit in $|x_i\rangle$ and measuring each qubit in the computational basis will return a random $k$-length string $d$ that with high probability is not $\bf 0$ and further satisfies the equation:
$$d\cdot(x_0\oplus x_1)=0.$$
As a Boolean equation this says that if we (1) take the bitwise XOR of the two strings $x_0$ and $x_1$, (2) take the bitwise AND of the string $d$ with this XOR, and (3) find the parity of the number of $1$'s in the AND of the above, the parity will always be $0$. This is because the phase always gets kicked back and the Hadamard transform effectively takes the Fourier transform over the Boolean cube.
To answer your specific questions, if $x_i$ are $k$-bit strings then we have $2^{k-1}$ possible random outcomes for our measured string $d$ and without something like the Simon promise then half of the possible strings satisfy the Boolean equation. The phase emerges as a kick-back via the Hadamard transform.
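Not from the paper, but the claim is easy to brute-force for small $k$; this sketch enumerates the amplitudes of $H^{\otimes k}\big(|x_0\rangle+|x_1\rangle\big)/\sqrt2$ directly (the strings `x0`, `x1` are arbitrary examples):

```python
# Brute-force check: measuring the Hadamard transform of (|x0> + |x1>)/sqrt(2)
# only ever yields strings d with d.(x0 XOR x1) = 0 (mod 2), and for nonzero
# x0 XOR x1 there are 2^(k-1) such strings.

def dot_mod2(a, b):
    return sum(x & y for x, y in zip(a, b)) % 2

def hadamard_outcome_support(x0, x1):
    k = len(x0)
    support = []
    for m in range(2 ** k):
        d = [(m >> i) & 1 for i in range(k)]
        # amplitude at |d> is proportional to (-1)^(d.x0) + (-1)^(d.x1)
        amp = (-1) ** dot_mod2(d, x0) + (-1) ** dot_mod2(d, x1)
        if amp != 0:
            support.append(d)
    return support

x0, x1 = [1, 0, 1, 1], [0, 1, 1, 0]
x_xor = [a ^ b for a, b in zip(x0, x1)]
outcomes = hadamard_outcome_support(x0, x1)
print(len(outcomes))                                   # 8 = 2^(k-1) for k = 4
print(all(dot_mod2(d, x_xor) == 0 for d in outcomes))  # True
```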
Here, I concatenate your first qubit with the rest of the register $x_i$ but the argument carries through if you were to Hadamard even the first qubit. | {
"domain": "quantumcomputing.stackexchange",
"id": 5305,
"tags": "quantum-state, measurement, hadamard"
} |
Does moving something horizontally in gravity do no work? | Question:
Bill’s job is to lift bags of flour and place them in the back of a
truck, which is parked next to him. Sally is loading the same bags of
flour into a similar truck that is located 10 m away. Sally wants a
raise because she says she is doing more work than Bill. Does the
physics definition of work support her claim?
Attempt: By the definition, Work is Force multiplied by the Displacement in the direction of the force. Sally does the same amount of Work when she lifts the bag. But, when she carries the bag for 10 m to the truck, there is no force exerted on the bag in the direction of the truck. Therefore, she does the same amount of Work. Is my reasoning correct? Why is the Force exerted in the direction of the truck zero?
Answer: Yes, your reasoning is correct.
You could also reason this way: the work done by Bill and Sally is turned into energy. In both cases the final energy - potential energy of the bag - is same for both of them.
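A tiny numerical sketch of that bookkeeping, with made-up numbers (a 10 kg bag lifted 1 m, then carried 10 m sideways):

```python
# Work is the dot product of force and displacement, here in 2D (x, y).
g = 9.8  # m/s^2

def work(force, displacement):
    return sum(f * d for f, d in zip(force, displacement))

lift  = work((0.0, 10 * g), (0.0, 1.0))   # upward force through upward displacement
carry = work((0.0, 10 * g), (10.0, 0.0))  # upward force through horizontal displacement

print(lift)   # ~98 J, same for Bill and Sally
print(carry)  # 0.0 J, the horizontal carry adds nothing
```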
Edit: after editing, you also asked: "Why does the force exerted in the direction of the truck equal zero?"
Let's start with the reasonable assumption that the bag is carried toward the truck with constant velocity. In the case of constant velocity, according to Newton's first law, the net force equals zero. There are only two forces acting on the bag: the force of gravity (vertically down), and the force of Sally (vertically up). Therefore there is no horizontal force in the direction of the truck. | {
"domain": "physics.stackexchange",
"id": 19860,
"tags": "homework-and-exercises, newtonian-mechanics, work"
} |
Deadlock prevention by killing a victim | Question: In our operating systems class, the teacher told us about deadlock prevention methods. One of them is to select as a victim the process which requires the most CPU time or a lot of resources, and remove this process. The next time the algorithm is called, it may be possible that the same process again becomes the victim. This causes starvation.
I'm confused: if the victim process is terminated, then how can it again become part of the deadlock?
Answer: The idea is that the process is not terminated: only the current operation is cancelled. The process reacts to the cancellation by trying again. In this model, it is assumed that each process runs something like the following pseudocode:
begin_transaction:
try:
acquire(lock_1);
acquire(lock_2);
…
release(lock_2);
release(lock_1);
with Cancelled:
goto begin_transaction
Without any form of deadlock prevention, the processes would get stuck forever. Cancelling one of the processes (in a way that releases its locks) prevents that.
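The victim-selection side can be sketched as a wait-for-graph check. This is a toy model of my own, not a real scheduler; the graph below is a made-up example:

```python
# Detect a cycle in a wait-for graph and cancel one process in it
# (here simply the one with the greatest name; real systems weigh rollback cost).

def find_cycle(wait_for):
    """Return some cycle in the wait-for graph as a list of processes, or None."""
    for start in wait_for:
        path, node = [], start
        while node is not None and node not in path:
            path.append(node)
            node = wait_for.get(node)
        if node is not None:          # walked into an already-visited node: cycle
            return path[path.index(node):]
    return None

# P1 waits on P2, P2 on P3, P3 on P1 (deadlock); P4 waits on P1 but is outside the cycle.
wait_for = {"P1": "P2", "P2": "P3", "P3": "P1", "P4": "P1"}
cycle = find_cycle(wait_for)
victim = max(cycle)                   # pick a victim from the cycle
del wait_for[victim]                  # cancel its pending request, breaking the cycle
print(cycle, victim, find_cycle(wait_for))  # ['P1', 'P2', 'P3'] P3 None
```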
This form of deadlock prevention only works if processes are written in the expected way and can cope with cancellation. It does not generalize to arbitrary programs. | {
"domain": "cs.stackexchange",
"id": 3008,
"tags": "concurrency, deadlocks"
} |
Problem with rostopic | Question:
I've created my own message type, and have a C++ program generating messages. When I attempt to listen to the messages using rostopic echo on my PC, I get perfect values.
Then I tried to enter the Husky A200 by ssh-ing in and then executing a subscriber program which subscribes to my topic.
ssh administrator@cpr-sal-01
Then typed password and entered into Husky
roscd face_subscriber (where I have my subscriber code)
rosrun face_subscriber face_subscriber (my face_subscriber code)
The problem is that my topic is not found in the rostopic list of the Husky. This subscriber code works perfectly on my PC. What should I do to make my topic appear in the rostopic list of the Husky?
PS - I have ROS Electric running on my system and the Husky has Fuerte running on it.
Thanks in advance
Originally posted by Arjun PE on ROS Answers with karma: 18 on 2012-09-03
Post score: 0
Answer:
Seems like your network isn't set up properly. Before you run your own code, you should try with rostopic echo to eliminate error sources.
Where is your core running? Can you see other topics from the remote machine? Are the ROS_MASTER_URI and possibly ROS_IP/HOSTNAME set correctly? If that all works, your topics should also come up.
Originally posted by dornhege with karma: 31395 on 2012-09-03
This answer was ACCEPTED on the original site
Post score: 5 | {
"domain": "robotics.stackexchange",
"id": 10868,
"tags": "c++, message, ros-electric, husky, ros-fuerte"
} |
Where can I find a collection of density functional parameterizations? | Question: I want to implement a solution to the electronic Schroedinger equation using DFT. I know there are many, many functionals to choose from but don't know where to start. I'd like to start with a couple LDA and GGA functionals, is there a good place where I can look up some parameterization, or even the tabulated data itself to form my own parameterization?
Answer: Unless you are doing it just for fun and curiosity about how the thing works, you are better off with some existing library. The most prominent is libxc, offering the lowest-level interface, i.e. given the density (and possibly its gradient) at a series of points, it returns the energy. What is left for you is the construction of the grid and optimization of the density, i.e. you can go for the real Hohenberg-Kohn DFT, not just Kohn-Sham. For usage examples (of the really nice and simple interface) see http://www.tddft.org/programs/octopus/wiki/index.php/Libxc:manual
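If you do want one concrete closed-form parameterization to start from before wiring up libxc, the Slater (Dirac) LDA exchange is the usual first example. This sketch is mine, not part of the original answer:

```python
# Slater (Dirac) LDA exchange: e_x(rho) = -(3/4) * (3/pi)**(1/3) * rho**(1/3)
# per particle (atomic units). A closed form, so no tabulated data is needed.
import math

C_X = (3.0 / 4.0) * (3.0 / math.pi) ** (1.0 / 3.0)

def lda_exchange_energy_density(rho):
    """Exchange energy per particle (hartree) of a uniform electron gas of density rho."""
    return -C_X * rho ** (1.0 / 3.0)

print(lda_exchange_energy_density(1.0))  # about -0.7386
```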
And most importantly, you can find definitions of all the functionals in the list of implemented functionals | {
"domain": "chemistry.stackexchange",
"id": 5100,
"tags": "electronic-configuration, density-functional-theory"
} |
How are synaptic vesicles brought to the synapse? | Question: I'm reading about how synaptobrevin is used to identify synaptic vesicles for tethering near the synaptic cleft. Since neurons have a synapse and dendrites, I'd like to know how exactly the vesicles get moved from the Golgi to the synapse.
As far as I can tell, they are transported along microtubules by dyneins as opposed to floating freely in the cytosol (correct?). If so, how do the dyneins identify synaptic vesicles - would they form a SNARE complex that gets released near the synapse or by some other means?
Answer: You are looking for a review on vesicle cargoing along the cytoskeleton. This open access article is the most recent I found on the subject.
From the abstract:
How synaptic cargos achieve specificity, directionality and timing of transport is a developing area of investigation. Recent studies demonstrate that the docking of motors to their cargos is a key control point. Moreover, precise spatial and temporal regulation of motor-cargo interactions is important for transport specificity and cargo recruitment. Local signaling pathways Ca2+ influx, CaMKII signaling and Rab GTPase activity regulate motor activity and cargo release at synaptic locations.
M. A. Schlager, C. C. Hoogenraad: Basic mechanisms for recognition and transport of synaptic cargos. In: Molecular brain. Vol. 2, 2009, p. 25. DOI:10.1186/1756-6606-2-25. PMID 19653898. PMC 2732917. | {
"domain": "biology.stackexchange",
"id": 499,
"tags": "neuroscience, cell-biology, synapses, intracellular-transport"
} |
A question from a ROS noobie. Error in talker & listener configuration | Question:
I have been learning ROS from the Mastering ROS for Robotics Programming book since yesterday. Actually I am stuck configuring the talker & listener.
The configuration & errors are as follows. I need some help to clear the error.
ROS Indigo
Ubuntu 14.04
CATKIN_MAKE:
roos@roos-Inspiron-1520:~$ cd catkin_ws
roos@roos-Inspiron-1520:~/catkin_ws$ catkin_make mastering_ros_demo_pkg
Base path: /home/roos/catkin_ws
Source space: /home/roos/catkin_ws/src
Build space: /home/roos/catkin_ws/build
Devel space: /home/roos/catkin_ws/devel
Install space: /home/roos/catkin_ws/install
####
#### Running command: "make cmake_check_build_system" in "/home/roos/catkin_ws/build"
####
####
#### Running command: "make mastering_ros_demo_pkg -j2 -l2" in "/home/roos/catkin_ws/build"
####
roos@roos-Inspiron-1520:~/catkin_ws$ roscore
... logging to /home/roos/.ros/log/58f9d6ee-6de2-11e6-aed6-001d09bb3726/roslaunch-roos-Inspiron-1520-17202.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://roos-Inspiron-1520:38168/
ros_comm version 1.11.20
SUMMARY
========
PARAMETERS
* /rosdistro: indigo
* /rosversion: 1.11.20
NODES
auto-starting new master
process[master]: started with pid [17214]
ROS_MASTER_URI=http://roos-Inspiron-1520:11311/
setting /run_id to 58f9d6ee-6de2-11e6-aed6-001d09bb3726
process[rosout-1]: started with pid [17227]
started core service [/rosout]
WHILE RUNNING TALKER & LISTENER I AM GETTING THE FOLLOWING ERROR:
roos@roos-Inspiron-1520:~$ rosrun mastering_ros_demo_pkg demo_topic_publisher
[rospack] Error: package 'mastering_ros_demo_pkg' not found
roos@roos-Inspiron-1520:~$
roos@roos-Inspiron-1520:~$ rosrun mastering_ros_demo_pkg demo_topic_subscriber
[rospack] Error: package 'mastering_ros_demo_pkg' not found
roos@roos-Inspiron-1520:~$
Originally posted by Robotfreak on ROS Answers with karma: 1 on 2016-08-29
Post score: 0
Answer:
Did you add the workspace to your ROS environment?
When you open a new terminal, try this first before running rosrun :
$ source ~/catkin_ws/devel/setup.bash
Originally posted by bhavyadoshi26 with karma: 95 on 2016-08-29
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 25636,
"tags": "ros, talker"
} |
Searching in a Binary Search Tree | Question: I'm studying Binary Search Trees (BST) and I would like to verify that my understanding of BSTs is correct.
For example, let S = [17, -10, 7, 19, 21, 23, -13, 31, 59].
Binary Search Tree for S, with S1 as root:
How many comparisons are needed to search this tree for the presence of each of the elements in I = {-13, 31}?
Search for -13:
Since -13 is less than 17: Go to left Child
Comparisons = 1
Since -13 is less than -10: Go to left Child
Comparisons = 2
Item (-13) found with two comparisons.
Search for 31:
Since 31 is greater than 17: Go to right Child
Comparisons = 1
Since 31 is greater than 21: Go to right Child
Comparisons = 2
Since 31 is greater than 23: Go to right Child
Comparisons = 3
Item (31) found with three comparisons.
Total number of comparisons to find the presence of each item in I: 5.
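A quick script (mine, for checking) that builds the tree by inserting S in order and counts one comparison per node descended through; note that the count for 31 depends on the exact shape of the tree in the figure, which may differ from the insertion-order tree built here:

```python
# Count comparisons spent descending a BST while searching for a key.

def insert(tree, key):
    if tree is None:
        return {"key": key, "left": None, "right": None}
    side = "left" if key < tree["key"] else "right"
    tree[side] = insert(tree[side], key)
    return tree

def comparisons_to_find(tree, key):
    count = 0
    while tree is not None and tree["key"] != key:
        count += 1
        tree = tree["left"] if key < tree["key"] else tree["right"]
    return count

S = [17, -10, 7, 19, 21, 23, -13, 31, 59]
root = None
for k in S:
    root = insert(root, k)

print(comparisons_to_find(root, -13))  # 2 (17 -> -10 -> -13)
print(comparisons_to_find(root, 31))   # 4 here (17 -> 19 -> 21 -> 23 -> 31)
```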
Is my understanding correct?
Answer: Yes... You can also practice with an implementation of your own. I have created such an implementation and a good test is to compile this with different optimization levels in gcc:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
typedef struct node {
int node_id;
int data;
char *name;
struct node *left;
struct node *right;
} node;
node *newNode(int data, char *name, int node_id) {
node *new_node = malloc(sizeof(node));
new_node->data = data;
new_node->name = name;
new_node->node_id = node_id;
new_node->right = new_node->left = NULL;
return new_node;
}
node *insert_node(node *root, int data, int node_id, char *name) {
if (root == NULL)
return newNode(data, name, node_id);
else {
node *cur;
if (node_id < root->node_id) {
cur = insert_node(root->left, data, node_id, name);
root->left = cur;
} else if (node_id > root->node_id) {
cur = insert_node(root->right, data, node_id, name);
root->right = cur;
}
}
return root;
}
node *find_node_data(node *root, int node_id) {
if (root == NULL)
return NULL;
else if (root->node_id == node_id)
return root;
else if (root->node_id > node_id)
return find_node_data(root->left, node_id);
else
return find_node_data(root->right, node_id);
}
void print(node *np) {
if (np) {
print(np->left);
printf("(%d, %d, %s)", np->node_id, np->data, np->name);
print(np->right);
}
}
int main() {
int T = 1000; //test case 1000 nodes
int data, r2, r;
int node_id = 0;
//printf("Input number of nodes:");
//scanf("%d", &T);
node *root = NULL;
srand(time(NULL));
while (T-- > 0) {
//scanf("%d %d", &data, &node_id);
r = (2 + T) * rand() % 100; //node id
r2 = (2 + T) * (rand() % 100); // data
printf("Input data. %d:\n", r2);
printf("node id. %d:\n", r);
root = insert_node(root, r2, r, "foobar");
}
print(root);
// printf("\n");
// printf("Find data at node:");
// scanf("%d", &T);
print(find_node_data(NULL, node_id));
T = r;
printf("node data %d %s", find_node_data(root, r)->data, find_node_data(root, r)->name);
root = NULL;
// find_node_data(root, T)->data;
print(root);
return 0;
} | {
"domain": "cs.stackexchange",
"id": 12556,
"tags": "binary-trees, binary-search-trees, binary-search"
} |
Inserting two rows that are almost identical to each other | Question: I'm trying to make test data for an application I'm writing, and some of my test data is very repetitive, involving inserting a row and then inserting another row that is identical except for one column. As such, I have very repetitive INSERT statements that make the file longer than I think it needs to be.
INSERT INTO [dbo].[InvoiceItem] (invoice_item_name, amount, invoice_id) VALUES
('TUITION',
(SELECT tuition_amount FROM [dbo].[Grade] WHERE grade_id =
(SELECT grade_id FROM [dbo].[Student] WHERE first_name = 'Murdoch')),
(SELECT invoice_id FROM [dbo].[Invoice] WHERE student_id =
(SELECT student_id FROM [dbo].[Student] WHERE first_name = 'Murdoch' AND invoice_date = '2019-09-05'))),
('TUITION_SERVICE_FEE',
(SELECT tuition_amount FROM [dbo].[Grade] WHERE grade_id =
(SELECT grade_id FROM [dbo].[Student] WHERE first_name = 'Murdoch')) * 0.03,
(SELECT invoice_id FROM [dbo].[Invoice] WHERE student_id =
(SELECT student_id FROM [dbo].[Student] WHERE first_name = 'Murdoch' AND invoice_date = '2019-09-05'))),
('TUITION',
(SELECT tuition_amount FROM [dbo].[Grade] WHERE grade_id =
(SELECT grade_id FROM [dbo].[Student] WHERE first_name = 'Hartwell')),
(SELECT invoice_id FROM [dbo].[Invoice] WHERE student_id =
(SELECT student_id FROM [dbo].[Student] WHERE first_name = 'Hartwell' AND invoice_date = '2019-09-05'))),
('TUITION_SERVICE_FEE',
(SELECT tuition_amount FROM [dbo].[Grade] WHERE grade_id =
(SELECT grade_id FROM [dbo].[Student] WHERE first_name = 'Hartwell')) * 0.03,
(SELECT invoice_id FROM [dbo].[Invoice] WHERE student_id =
(SELECT student_id FROM [dbo].[Student] WHERE first_name = 'Hartwell' AND invoice_date = '2019-09-05'))),
Is there a way to clean this up by any chance?
Answer: You should create a Stored Procedure that does this with a parameter called @FirstName.
It would come out way cleaner.
Here is what the CREATE would look like. I added in the @InvoiceDate parameter as well, thinking that it was also something that was not static in the insert statement.
CREATE PROCEDURE Sproc_Name
@FirstName nvarchar(50)
,@InvoiceDate DateTime
AS
INSERT INTO [dbo].[InvoiceItem] (invoice_item_name, amount, invoice_id) VALUES
('TUITION',
(SELECT tuition_amount FROM [dbo].[Grade] WHERE grade_id =
(SELECT grade_id FROM [dbo].[Student] WHERE first_name = @FirstName)),
(SELECT invoice_id FROM [dbo].[Invoice] WHERE student_id =
(SELECT student_id FROM [dbo].[Student] WHERE first_name = @FirstName AND invoice_date = @InvoiceDate))),
('TUITION_SERVICE_FEE',
(SELECT tuition_amount FROM [dbo].[Grade] WHERE grade_id =
(SELECT grade_id FROM [dbo].[Student] WHERE first_name = @FirstName)) * 0.03,
(SELECT invoice_id FROM [dbo].[Invoice] WHERE student_id =
(SELECT student_id FROM [dbo].[Student] WHERE first_name = @FirstName AND invoice_date = @InvoiceDate)))
GO
and then you would call it like this
EXECUTE Sproc_Name @FirstName = 'Murdoch', @InvoiceDate = CONVERT(date, GETDATE())
-- CONVERT(date, GETDATE()) gives you today's date, e.g. 2018-04-05
I just used GETDATE() for illustrative purposes; you can put in a string like '2018-04-05' and it should work as well.
"domain": "codereview.stackexchange",
"id": 30092,
"tags": "sql, sql-server, database"
} |
Does acceleration in free fall disprove Newtonian mechanics? | Question: In Newtonian mechanics, how come the net force on an object in free fall is zero, yet it is accelerating??
If F=(a)(m), and F=0, it follows that: a must equal 0
But in reality a falling object will accelerate with a = 9.8 m/s^2.
Since a=/= 0, then either F =/= 0 or F =/= (a)(m).
And because we know that for an object in free fall F = 0, then F =/= a m (i.e. Newton's second law does not apply)
Answer: @annav is correct, of course, but I think maybe missing the OP's intuitive reasoning that's suggesting "no force" to him:
(a) Suppose you're sitting in an accelerating automobile. Then you >>feel<< a force that's responsible for accelerating you forwards.
(b) But suppose instead you're in free fall, in a gravitational field, accelerating downwards. Then you feel absolutely nothing, no force whatsoever. So, yes, there must be a force, but >>where is it???<<, so to speak.
Answer: (a) You feel a force while sitting in an accelerating car because the force acts directly on your back, but not directly on the rest of your body. The carseat presses your back, then your back presses on your internal organs, etc. And it's all this pressing of one part of your body on another part that you're feeling. (This is called a contact force -- one object pushing on another that it's in contact with.)
(b) But in a gravitational field, gravity acts >>directly on every part<< of your body simultaneously. Even on every cell, every molecule, of every internal organ. So no one part of your body is pressing against any other part. All your "parts" are moving together, in unison, so to speak. And so you >>feel nothing<< directly. But there nevertheless is an overall force, gravity, just like everybody else already explained. (And note that gravity is not a contact force -- it acts at a distance, without being in contact with the objects it's acting on. And that's why it can directly affect your internal organs, which the accelerating car can't do.)
Note that prior to Newton's discovery/explanation of gravity, everybody believed that all forces were contact forces. Nobody ever imagined that one body could exert a force on another body without being directly in contact with it. And at first blush, that indeed sounds pretty reasonable. So Newton's genius was not only explaining gravity, but also conjuring up the almost unimaginable idea of force-at-a-distance in the first place.
So your intuition is quite understandable -- you don't feel a force, so how can there possibly be a force??? Action(force)-at-a-distance is the answer. But don't feel too bad -- it took Isaac Newton to figure that out. | {
"domain": "physics.stackexchange",
"id": 48377,
"tags": "newtonian-mechanics, newtonian-gravity, reference-frames, acceleration, free-fall"
} |
AsyncTask alternative with Runnable and CountdownLatch | Question: What is wrong with this pattern which emulates AsyncTask functionality?:
// on the UI thread
final CountDownLatch latch = new CountDownLatch(1);
onPreexecuteWrapper();
new Thread(new Runnable() {
    @Override
    public void run() {
        doInBackgroundWrapper();
        latch.countDown();
    }
}).start();
new Thread(new Runnable() {
@Override
public void run() {
try {
latch.await();
} catch (InterruptedException e) {
Logger.error(TAG, "Not good.");
}
context.runOnUiThread(new Runnable() {
@Override
public void run() {
onPostExecuteWrapper();
}
});
}
}).start();
Here's what I think is wrong:
Two threads -> more resources (I didn't compare actually, just guessing).
No progress update functionality.
Appears to be stinky.
Answer: First thing that is wrong is that you are reinventing a wheel.
Second, why not ditch the CountDownLatch and do context.runOnUiThread directly in the first runnable instead?
// on the UI thread
onPreexecuteWrapper();
new Thread(new Runnable() {
@Override
public void run() {
doInBackgroundWrapper();
context.runOnUiThread(new Runnable() {
@Override
public void run() {
onPostExecuteWrapper();
}
});
}
}).start();
Last thing is that it's better to use an Executor and ThreadPool than to spawn new threads. | {
"domain": "codereview.stackexchange",
"id": 14347,
"tags": "java, multithreading, android"
} |
What's the difference between prions and prion-like proteins? | Question: If I added a prion domain to a protein, does that make the protein a prion-like protein or would it be considered a prion at that point?
I'm trying to understand what prions are, how they aggregate and what macroscopic shapes they conform to (i.e. a ring vs smaller micelles).
Thank you very much!
Answer: My understanding is that the principal quality of a prion is that it behaves like a prion in cells; the logic is a little circular.
A few of these criteria for acting like a prion (in yeast, where prions are most common):
Reproduces itself by "templating" (refolding of non-prion versions of itself into prion versions)
Sensitive to protein denaturing agents (which get rid of the prion protein conformation).
Dependence on protein chaperones which reproduce prions by releasing prion oligomers from larger prion aggregates. These oligomers go on to "seed" new prion aggregates.
Forms insoluble protein aggregates consisting of many copies of the prion protein.
An important subtlety here is that proteins with identical amino acid sequence can be both prion and non-prion in the same cell. It is a biochemical consequence of a specific protein structure, rather than simply a feature of the protein sequence. Thus, a prion domain confers the ability to become a prion, not necessarily "prion-ness" itself. This Scientific American article might be helpful regarding this point.
As an example of how these criteria are applied, this paper uses some biochemical and genetic methods to screen for the presence of new prions in wild yeast.
As a review of how prions oligomerize and form their classic aggregates, this paper might be helpful.
It is certainly true that creating fusion proteins with candidate prion domains is a common way of confirming that those domains are competent to become prions (see e.g. this paper). But there is usually a higher standard of evidence to show that some protein actually is a prion- usually by demonstrating the existence of heritable protein aggregates mentioned in point (4).
Hope that helps. | {
"domain": "biology.stackexchange",
"id": 7245,
"tags": "biochemistry, proteins, synthetic-biology, prion"
} |
Could not load controller 'joint_state_controller' because controller type 'joint_state_controller/JointStateController' does not exist | Question:
Hello world,
I am running into the issue of not being able to load my controllers and I am getting this error:
[ERROR] [1562494434.299706787, 0.160000000]: Could not load controller 'gimbal_r_joint_position_controller' because the type was not specified. Did you load the controller configuration on the parameter server (namespace: '/firefly/vi_sensor/gimbal_r_joint_position_controller')?
[ERROR] [1562494435.300931, 1.160000]: Failed to load gimbal_r_joint_position_controller
[INFO] [1562494435.301550, 1.160000]: Loading controller: gimbal_p_joint_position_controller
[ERROR] [1562494435.304706403, 1.170000000]: Could not load controller 'gimbal_p_joint_position_controller' because the type was not specified. Did you load the controller configuration on the parameter server (namespace: '/firefly/vi_sensor/gimbal_p_joint_position_controller')?
[ERROR] [1562494436.305523, 2.170000]: Failed to load gimbal_p_joint_position_controller
[INFO] [1562494436.306111, 2.170000]: Loading controller: gimbal_y_joint_position_controller
[ERROR] [1562494436.308813421, 2.170000000]: Could not load controller 'gimbal_y_joint_position_controller' because the type was not specified. Did you load the controller configuration on the parameter server (namespace: '/firefly/vi_sensor/gimbal_y_joint_position_controller')?
[ERROR] [1562494437.310188, 3.170000]: Failed to load gimbal_y_joint_position_controller
[INFO] [1562494437.310789, 3.170000]: Loading controller: joint_state_controller
[ERROR] [1562494437.313623297, 3.170000000]: Could not load controller 'joint_state_controller' because the type was not specified. Did you load the controller configuration on the parameter server (namespace: '/firefly/vi_sensor/joint_state_controller')?
[ERROR] [1562494438.314230, 4.080000]: Failed to load joint_state_controller
I get this error for each of the controllers that I am trying to load.
I am able to see that the type is loaded to the param server under the correct namespace when I run "rosparam list":
> j*******@j********:~/rotors_ws$ rosparam list /vi_sensor
> /vi_sensor/gimbal_p_joint_position_controller/joint
> /vi_sensor/gimbal_p_joint_position_controller/pid/d
> /vi_sensor/gimbal_p_joint_position_controller/pid/i
> /vi_sensor/gimbal_p_joint_position_controller/pid/p
> /vi_sensor/gimbal_p_joint_position_controller/type
> /vi_sensor/gimbal_r_joint_position_controller/joint
> /vi_sensor/gimbal_r_joint_position_controller/pid/d
> /vi_sensor/gimbal_r_joint_position_controller/pid/i
> /vi_sensor/gimbal_r_joint_position_controller/pid/p
> /vi_sensor/gimbal_r_joint_position_controller/type
> /vi_sensor/gimbal_y_joint_position_controller/joint
> /vi_sensor/gimbal_y_joint_position_controller/pid/d
> /vi_sensor/gimbal_y_joint_position_controller/pid/i
> /vi_sensor/gimbal_y_joint_position_controller/pid/p
> /vi_sensor/gimbal_y_joint_position_controller/type
> /vi_sensor/joint_state_controller/publish_rate
> /vi_sensor/joint_state_controller/type
Here is my .yaml file.
vi_sensor:
# Publish all joint states -----------------------------------
joint_state_controller:
type: joint_state_controller/JointStateController
publish_rate: 50
# Position Controllers ---------------------------------------
gimbal_r_joint_position_controller:
type: effort_controllers/JointPositionController
joint: gimbal_r_joint
pid: {p: 100.0, i: 1.0, d: 25.0}
gimbal_p_joint_position_controller:
type: effort_controllers/JointPositionController
joint: gimbal_p_joint
pid: {p: 100.0, i: 1.0, d: 13.0}
gimbal_y_joint_position_controller:
type: effort_controllers/JointPositionController
joint: gimbal_y_joint
pid: {p: 100.0, i: 1.0, d: 13.0}
The file is loaded by the following line:
<rosparam file="$(find rotors_gazebo)/config/gimbal_control.yaml" command="load" ns="/firefly/vi_sensor"/>
I have tried many many things to help make this gimbal work, but this is what is stopping me from moving forward. I am able to make the gimbal work when it is not attached to a drone, so I know that it works correctly; however, when it is attached to the drone I run into this error. I am guessing it is a namespace issue, but I am not sure what it is.
Any help is appreciated. Thank you!!
Originally posted by crazy_computer on ROS Answers with karma: 36 on 2019-07-06
Post score: 1
Original comments
Comment by jayess on 2019-07-06:
Can you please edit your question using the preformatted text (101010) button (which is for code/terminal output/etc.) instead of using the quotation (") button, which is for quoting? It'll make your question easier to read.
Comment by crazy_computer on 2019-07-07:
Just made those edits, thank you!
Answer:
So, I ended up finding the issue. As suspected, it was a namespace issue in loading the .yaml file to the param server. If you have this issue in the future, please pay close attention to how you are using this line:
<rosparam file="$(find rotors_gazebo)/config/gimbal_control.yaml" command="load" ns="/firefly/vi_sensor"/>
When you define a namespace in that line, it ends up being appended onto the namespace you are already in.
This is why I ended up with my namespace being something like "/firefly/vi_sensor/vi_sensor", and because of this the yaml file was being loaded, just to a different location than the one the controllers looked in.
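To make the pitfall concrete (an illustrative sketch, not the poster's final file): the yaml shown in the question already nests everything under a vi_sensor: key, so loading it into a namespace that itself ends in vi_sensor yields the doubled /firefly/vi_sensor/vi_sensor. Keeping the yaml as-is, one way to load it without the duplication would be:

```xml
<!-- Hypothetical corrected load line: the vi_sensor level comes from the
     yaml file itself, so it is not repeated in the ns attribute. -->
<rosparam file="$(find rotors_gazebo)/config/gimbal_control.yaml"
          command="load" ns="/firefly" />
```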
Originally posted by crazy_computer with karma: 36 on 2019-07-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by PumpkinIcedTea on 2022-01-13:
Hi may I know how you solved this problem ? | {
"domain": "robotics.stackexchange",
"id": 33355,
"tags": "gazebo, ros-kinetic"
} |
visualizing gazebo model in rviz | Question:
Is it possible to visualize the kitchen model, which originally comes from the Gazebo model library, in RViz? Any suggestions?
Thanks a lot.
Originally posted by crazymumu on ROS Answers with karma: 214 on 2015-09-08
Post score: 0
Original comments
Comment by Dinl on 2015-09-12:
One question: how could you import the color into Gazebo? I have a .obj model with textures defined in a .mtl file; I convert it into a .dae COLLADA file, but Gazebo doesn't load the color.
Answer:
Yes, it is. The kitchen model is a single .dae (COLLADA) file that is available here in the gazebo_models repo.
The easiest way to use it in rviz is probably to copy the model into a ROS package and then publish it as a rviz Marker by using the Mesh_Resource approach.
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2015-09-09
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 22590,
"tags": "ros"
} |
undefined reference to 'ros::init' | Question:
Hi,
I am making a service which takes two double arrays as input and returns another double array. Below is the sample server.cpp file-
#include <ros/ros.h>
#include <kinematics/test_service.h>
bool service_callback(kinematics::test_service::Request &req, kinematics::test_service::Response &res) {
double x1 = req.param_1[0];
double y1 = req.param_1[1];
double z1 = req.param_1[2];
double x2 = req.param_2[0];
double y2 = req.param_2[1];
double z2 = req.param_2[2];
ROS_INFO_STREAM("Pose {" << x1 << ", " << y1 << ", " << z1 << "}");
res.param_3.resize(3);
res.param_3.at(0) = x1 + x2;
res.param_3.at(1) = y1 + y2;
res.param_3.at(2) = z1 + z2;
return true;
}
int main(int argc, char **argv) {
ros::init(argc, argv, "test_service_node");
ros::NodeHandle nh;
ros::ServiceServer service = nh.advertiseService("test_service", service_callback);
ros::spin();
return 0;
}
Below is the sample client.cpp file-
#include <ros/ros.h>
#include <kinematics/test_service.h>
int main(int argc, char **argv) {
ros::init(argc, argv, "service_node_test");
ros::NodeHandle nh;
ros::ServiceClient client =
nh.serviceClient<kinematics::test_service>("test_service");
std::vector<double> param_1;
param_1.push_back(0.8047);
param_1.push_back(0.1436);
param_1.push_back(0.3308);
std::vector<double> param_2;
param_2.push_back(0.7966);
param_2.push_back(0.1354);
param_2.push_back(0.3337);
kinematics::test_service service;
service.request.param_1 = param_1;
service.request.param_2 = param_2;
if (client.call(service)) {
ROS_INFO_STREAM("Client Response " << service.response.param_3.size());
} else {
ROS_ERROR("Failed to call test_service");
return -1;
}
return 0;
}
The CMakeLists.txt file has following content-
cmake_minimum_required(VERSION 2.8.3)
project(service_test)
find_package(catkin REQUIRED COMPONENTS)
catkin_package(
LIBRARIES
CATKIN_DEPENDS
)
include_directories(include
${catkin_INCLUDE_DIRS}
)
add_executable(server src/server.cpp)
add_executable(client src/client.cpp)
target_link_libraries(server ${catkin_LIBRARIES})
target_link_libraries(client ${catkin_LIBRARIES})
install(TARGETS server
ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
install(TARGETS client
ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
install(DIRECTORY launch/
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}/launch
PATTERN ".svn" EXCLUDE)
While compiling the above code using catkin_make, it is throwing following error-
server.cpp:(.text+0x61): undefined reference to `ros::init(int&, char**, std::string const&, unsigned int)'
server.cpp:(.text+0xbd): undefined reference to `ros::NodeHandle::NodeHandle(std::string const&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > > const&)'
server.cpp:(.text+0x14a): undefined reference to `ros::spin()'
server.cpp:(.text+0x15e): undefined reference to `ros::ServiceServer::~ServiceServer()'
How to solve this error?
Originally posted by ravijoshi on ROS Answers with karma: 1744 on 2016-06-06
Post score: 1
Answer:
You're not linking against roscpp, which is the library that provides those symbols.
How to solve this error?
Link against roscpp. Add it to the find_package(catkin COMPONENTS ..) statement, and add it to your catkin_package(.. CATKIN_DEPENDS ..) list.
I'd also make your CMakeLists.txt search for your kinematics package, as you use that in your service server definitions.
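Sketched concretely (a hedged fragment, not a drop-in file; verify the package names against your own tree), the CMakeLists.txt changes would look something like:

```cmake
# Hypothetical fragment: declare roscpp (and the kinematics package that
# provides the generated service headers) as catkin components.
find_package(catkin REQUIRED COMPONENTS roscpp kinematics)

catkin_package(
  CATKIN_DEPENDS roscpp kinematics
)

# Ensure the kinematics service headers are generated before these targets build:
add_dependencies(server ${catkin_EXPORTED_TARGETS})
add_dependencies(client ${catkin_EXPORTED_TARGETS})
```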
Edit: this is also discussed in the Writing a Simple Publisher and Subscriber (C++) - Building your nodes tutorial, and see the catkin documentation » How to do common tasks » Package format 2 (recommended) » C++ catkin library dependencies for how to update your CMakeLists.txt and package manifest.
Note also that this is not a ROS specific problem, but a standard C++ linking error, and that it has been asked many times before.
Originally posted by gvdhoorn with karma: 86574 on 2016-06-06
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by ravijoshi on 2016-06-06:
Thanks . It worked.
Comment by skr_robo on 2016-07-14:
I am facing a similar issue. I edited the CMakeLists.txt as suggested above, but I also had to edit the package.xml to add roscpp as a run and build dependency. Is this the right way to do it, or does linking roscpp require me to add any additional #include statements in the source file?
Comment by gvdhoorn on 2016-07-14:
But I also had to edit the package.xml [..]
Yes, that is what I meant with "update your [..] package manifest": package.xml is the manifest.
And, no. Provided you are not using anything not already included by ros.h, you don't need to add any additional #includes.
Comment by skr_robo on 2016-07-15:
Thank You. | {
"domain": "robotics.stackexchange",
"id": 24824,
"tags": "ros-indigo"
} |
Content filtering of webpage | Question: This is code for filtering the data of a webpage, for the web crawler I made for my project. I know Python scripts can be slower than other languages, but this takes a lot of time even when processing a single page.
I don't want to use any other external libraries for filtering content. Is there any way my current code can be improved to be cleaner and faster?
# -*- coding: utf-8 -*-
import urllib
url='http://designingadam2.wordpress.com'
def content(page,url):#FILTERS THE CONTENT OF THE REMAINING PORTION
flg=0
#REMOVES &nsbp LIKE CHARACTERS
while page.find("&",flg)!=-1:
page.replace(' ','')
start=page.find("&",flg)
end=page.find(";",start+1)
if (end-start)<10: #USED IF HERE TO CNFRM TAGS-->REMOVE IF NOT NEEDED
pageO=page[:start]
pageT=page[end+1:]
page=pageO+pageT
flg=start+1#TO CONTINUE FROM NEXT POS
else:
flg+=1
flg=0
#REMOVES CONTENT BETWEEN SCRIPT TAGS
while page.find("<script",flg)!=-1:
start=page.find("<script",flg)
end=page.find("</script>",flg)
end=end+9
i,k=0,end-start
page=list(page)
while i<k:
page.pop(start)
i=i+1
page=''.join(page)
flg=start
#REMOVES CONTENT BETWEEN STYLE TAGS
flg=0
while page.find("<style",flg)!=-1:
start=page.find("<style",flg)
end=page.find("</style>",flg)
end=end+9
i,k=0,end-start
page=list(page)
while i<k:
page.pop(start)
i=i+1
page=''.join(page)
flg=start
#REMOVES THE TAGS
s_list = list(page)
i,j = 0,0
while i < len(s_list):
# find the <
if s_list[i] == '<':
while s_list[i] != '>':# and i!=(len(s_list)-1):
# remove everything between the < and the >
s_list.pop(i)
# make sure we get rid of the > to
s_list.pop(i)
else:
i=i+1
#-------------------------------------------------------------------
#REMOVES WHITESPACES
s_list="".join(s_list)
lst=s_list.split()
#CONVERT TO LOWERCASE
i=0
while i<len(lst):
lst[i]=lst[i].lower()
i+=1
#REMOVES DUPLICATES
lst=list(set(lst))
#REMOVE COMMON WORDS
phrase=['to','a','an','the',\
'for','from','that','their',\
'i','my','your','you','mine',\
'we','okay','yes','no','as',\
'if','but','why','can','now',\
'are','is','also',',','.',';',\
':','?','|','/','\n','\t']
i=0
while i<len(lst):
if lst[i] in phrase:
lst.pop(i)
else:
i+=1
print lst
print len(lst)
def pageContent(url):#EXTRACTS HTML CODE
f = urllib.urlopen(url)
page = f.read()
f.close()
#page=page.replace(u'\xa0', ' ').encode('utf-8','ignore')
return page
page=pageContent(url)
content(page,url)
Please mind the comments. I left them in so they could be of some help.
Answer: Per my comment, follow the style guide - for example, there should be whitespace around = when assigning, and after commas:
i, k = 0, end - start
content is not a good name for your function. You should be more descriptive of what it actually does (perhaps filter_content?) and add a docstring providing more information. Throughout your code there are temporary variables with cryptic names (s_list? lst?) that could be changed to make things much clearer - I was wondering why flg isn't Boolean, and it turns out that it isn't actually a flag.
Your approach to removing HTML tags (picking through the whole page character by character) is particularly prone to error; what if one of the attributes within a tag contains '>'? For a good standard library solution, see here.
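For reference, the standard library solution alluded to is HTMLParser (html.parser in Python 3). A minimal sketch of tag stripping with it, shown here in Python 3 syntax, unlike the Python 2 code above:

```python
from html.parser import HTMLParser  # module is named HTMLParser in Python 2


class TagStripper(HTMLParser):
    """Collects only the text between tags; entities like &amp; are
    decoded by the parser instead of being sliced out by hand."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

    def text(self):
        return "".join(self.chunks)


stripper = TagStripper()
stripper.feed("<p>Hello <b>world</b> &amp; friends</p>")
print(stripper.text())  # -> Hello world & friends
```

This sidesteps both the manual tag slicing and the `&...;` removal loop in one pass.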
The conversion to lowercase is, frankly, ludicrous:
i=0
while i<len(lst):
lst[i]=lst[i].lower()
i+=1
you had the whole string (called, confusingly, s_list) to hand just two lines beforehand, and
s_list = s_list.lower()
is so much simpler.
As you're making a set to remove duplication:
lst=list(set(lst))
why not keep the set, instead of converting back to list, and use it to do the filtering, too? For example, use set.difference_update:
>>> words = set('this is a sentence to filter'.split())
>>> words.difference_update(['a', 'to', 'this', 'is'])
>>> words
set(['sentence', 'filter'])
Your other function could be simplified significantly:
import contextlib

def page_content(url):
    # urllib.urlopen's result is not a context manager in Python 2,
    # so contextlib.closing supplies the with-statement cleanup:
    with contextlib.closing(urllib.urlopen(url)) as f:
        return f.read() | {
"domain": "codereview.stackexchange",
"id": 8966,
"tags": "python, optimization, html, python-2.x"
} |
Why do we care about the canonical commutation relations? | Question: Suppose $\hat{x}$ and $\hat{p}$ are the position and momentum operators, it can be shown that
$$[\hat{x}, \hat{p}] = i\hbar\mathbb{I}.$$
The Stone-von Neumann theorem tells us that that the above is unique up to unitary equivalence.
I am unclear on the significance of the canonical commutation relations shown above. My current interpretation of commutators is, informally speaking, that they measure the extent to which two operators commute. What further information does the canonical commutation relation give us, and why is its uniqueness up to unitary equivalence such a big deal?
Answer: There's more to it, and the deeper content it encodes is related to symmetries and their associated conserved quantities. Let us start with the classical theory, to see that this is indeed already present there. In Classical Mechanics we may formulate our theory in the Hamiltonian Formalism. In that case we have a phase space $(\Gamma,\Omega)$ where $\Gamma$ is a space, which in basic mechanics courses is usually described as the space of pairs $(q^i,p_i)$ of position and momenta, and where $\Omega$ is an object called sympletic form.
The sympletic form gives rise to an operation among functions on $\Gamma$ called the Poisson bracket $\{,\}$. The Poisson bracket between position and momenta obey $$\{q^i,p_j\}=\delta^i_{\phantom i j}\tag{1}\label{ccr}.$$
Now, you might be aware of a result known as Noether's theorem which puts in correspondence symmetries and conservation laws. In the Hamiltonian Formalism it can be phrased as follows. For a given symmetry we have a function in $\Gamma$, called its Hamiltonian charge $Q$, which has the property that $$\{Q,f\}=-\delta_Q f\tag{2}$$
where $\delta_Q$ is the variation of the observable according to the symmetry corresponding to $Q$.
Now let us consider translations. Consider a translation by $\epsilon^i$ so that the coordinates get transformed as $q^i\to q^i+\epsilon^i$. We will have $\delta q^i = \epsilon^i$. In that regard, observe that if we define $Q = \epsilon^i p_i$ we have $$\{ Q,q^i\}=\epsilon^j\{p_j,q^i\}=-\epsilon^i = -\delta_Q q^i\tag{3}.$$
Observe that (1) has been used in the second equality. What this tells is that (1) is the statement that momentum is the generator of translations, or else that momentum is the Hamiltonian charge associated to translations. In particular, momentum in the $i$-th direction generates translations in the $i$-th direction, that is the content of (1).
This then naturally generalizes to Quantum Mechanics. And it is not so surprising that it happens, since we know that the correspondence principle gives the quantization rule $[] \leftrightarrow i\{\}$. In that setting, the Canonical Commutation Relations are just saying that momentum should be the generator of translations.
Obviously, the whole analysis of symmetries that I have outlined above in Classical Mechanics can be made in a self-contained manner in Quantum Mechanics. I only did it in Classical Mechanics to show you that there is a classical version of the story, which may be easier to understand first.
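As a compact check of that quantum statement (a standard derivation, sketched with the same conventions as above): build the unitary translation operator from $\hat p$ and let it act on $\hat x$,
$$\hat T(a) \equiv e^{-ia\hat p/\hbar}, \qquad \hat T(a)^{\dagger}\,\hat x\,\hat T(a) = \hat x + \frac{ia}{\hbar}\,[\hat p,\hat x] + \frac{1}{2!}\left(\frac{ia}{\hbar}\right)^{2}[\hat p,[\hat p,\hat x]] + \cdots = \hat x + a\,\mathbb{I},$$
by the Baker-Campbell-Hausdorff expansion $e^{A}Be^{-A}=B+[A,B]+\tfrac{1}{2!}[A,[A,B]]+\cdots$. The CCR $[\hat x,\hat p]=i\hbar\,\mathbb{I}$ makes the first commutator equal to $a\,\mathbb{I}$ and every higher nested commutator vanish, so $\hat T(a)$ indeed translates position by $a$.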
In summary, commutation relations often encode symmetry statements and their associated charges, and the CCR is just one example of that. | {
"domain": "physics.stackexchange",
"id": 92472,
"tags": "quantum-mechanics, operators, commutator, phase-space"
} |
Why is the Turtlebot3_navigation2 launch file different from GitHub? | Question:
I'm using ROS 2 Foxy on Ubuntu 20.04.
When I check the turtlebot3_navigation2 launch file by typing the command
vi /opt/ros/foxy/share/turtlebot3_navigation2/launch/navigation2.launch.py
I find this code: PythonLaunchDescriptionSource([nav2_launch_file_dir, '/nav2_bringup_launch.py'])
which is different from the GitHub turtlebot3_navigation2 foxy-devel branch:
PythonLaunchDescriptionSource([nav2_launch_file_dir, '/bringup_launch.py'])
Here is the link: github turtlebot3_navigation2
I have already checked that my ROS version is Foxy, and I used sudo apt install ros-foxy-turtlebot3 to install.
Is there any way to inspect the git version of my turtlebot3_navigation2 in the path /opt/ros/foxy/share/turtlebot3_navigation2?
Why does this problem occur?
Thanks for the help.
Originally posted by SteveYK on ROS Answers with karma: 3 on 2020-12-16
Post score: 0
Answer:
According to http://repo.ros2.org/status_page/ros_foxy_default.html?q=turtlebot3_navigation2 the latest released version (as of writing this) is 2.1.0. Comparing that with foxy-devel on GitHub, I see lots of commits (19 at the moment):
https://github.com/ROBOTIS-GIT/turtlebot3/compare/2.1.0...foxy-devel
Including the commit that changed the line you are referencing:
https://github.com/ROBOTIS-GIT/turtlebot3/commit/f2209819bb8109cd77f15b903690e12355322732
What this means is that they have committed these changes to foxy-devel intending for them to be part of foxy eventually, but they have not done a release for foxy since then and therefore the changes you see on GitHub are not reflected in the latest foxy binaries. If you want to compare what you have from your binary with the source code, use one of these two links:
2.1.0 tag on the main repository:
https://github.com/ROBOTIS-GIT/turtlebot3/tree/2.1.0
or the release repository (used by our build farm to build packages):
https://github.com/robotis-ros2-release/turtlebot3-release/tree/release/foxy/turtlebot3_navigation2
I recommend the first one, but it's possible there are patches included in the second one that aren't in the upstream source. In this case I still recommend the first link.
Originally posted by William with karma: 17335 on 2020-12-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by SteveYK on 2020-12-20:
Thanks, I think it solved my problem. | {
"domain": "robotics.stackexchange",
"id": 35884,
"tags": "ros2, roslaunch"
} |
Periodic multi-layer scattering of neutrons | Question: I am trying to understand the reflectivity plot on slide 26 of Neutron optics, Soldner lecture.
1. Is the peak from $\theta$=0.0 to 0.4 due to total external reflection from the first upper surface?
2. There is another peak at $\theta$=1.0. Is it because of Bragg interference? (As given in slide 23 of Stewart's lectures.)
3. Why are there alternating peaks between $\theta$=0.4 and 1.0?
Answer: Here are the answers:
Yes.
Yes... but it is better called a Darwin plateau.
Those are fine structure arising from multiple-wave interference (and are not seen experimentally); see page 16 of "Analysis and design of multilayer structures for neutron monochromators and supermirrors", S. Masalovich. | {
"domain": "physics.stackexchange",
"id": 44359,
"tags": "quantum-mechanics, optics, particle-physics, large-hadron-collider, optical-materials"
} |
boolean circuit to decide if there's a path of at most $k$ edges from $u$ to $v$ in graph $G$ | Question:
Let $G$ be an undirected graph and $k\in\mathbb{N}$. Prove that there's a circuit of depth $2$, size $n^{O(k)}$ and unlimited fan-in, which gets $\langle G,v,u\rangle$ as input and checks if there's a path from $u$ to $v$ of at most $k$ edges.
So I'm familiar with Warshall's algorithm and I think I should utilize it here. The problem is that it seems impossible to do that with a depth-$2$ circuit.
I'd be glad if you could direct me what to do.
Thanks
Answer: Since you are allowed an $n^{O(k)}$ size circuit, you can simply enumerate all candidate paths originating at $u$. The first level will contain $n^{O(k)}$ AND gates, one for every possible sequence of vertices $v_1=u,v_2,...,v_l=v$ with $l\le k+1$ (i.e., at most $k$ edges); each such gate checks that every consecutive pair in its sequence is an edge of $G$. In the second level you feed the outputs of all these AND gates into a single OR gate. | {
"domain": "cs.stackexchange",
"id": 8943,
"tags": "algorithms, complexity-theory, circuits"
} |
Is there a better way to output a javascript array from ASP.net? | Question: I often run into the problem of producing a javascript array on an ASP.net page from an IEnumerable and I was wondering if there was an easier or clearer way to do it than
<% bool firstItem = true;
foreach(var item in items){
if(firstItem)
{
firstItem = false;
}
else
{%>
,
<%}%>
'<%:item%>'
<%}%>
The whole thing is made much more verbose because of IE's inability to handle a hanging comma in an array or object.
Answer: With a bit of help from System.Linq this becomes quite easy.
var array = [ <%=
string.Join(",", items.Select(v => "'" + v.ToString() + "'").ToArray())
%> ]; | {
"domain": "codereview.stackexchange",
"id": 95,
"tags": "c#, javascript, asp.net"
} |
using a cast in an overridden equals method, should I use generics and if so, how? | Question: Here is my issue when trying to override my equals method.
this is what I have currently
@Override
public boolean equals(Object o) {
if (this.name.equals(((Animal) o).getName())) {
System.out.println("True");
return true;
} else {
System.out.println("False");
return false;
}
}
I'm trying to compare the names of objects which extend the Animal class. Is there a better way to write this equals method without the explicit cast to compare the names of each object? I was messing around with generics but just couldn't get it to work properly. Below is what I attempted with generics, but when I tried to pass an object in the parentheses it just used the super equals method. (The get method literally just returns the name.)
public boolean equals(Class<? extends Animal>o) {
if (this.name.equals(o.getName())) {
System.out.println("True");
return true;
} else {
System.out.println("False");
return false;
}
}
any help is really appreciated.
Answer: You could try something like the following in The Animal Class:
@Override
public boolean equals(Object obj) {
    if (obj == null) { // note: this can never be null inside an instance method
        return false;
    }
    if (!(obj instanceof Animal)) {
        return false; // a non-Animal can never equal an Animal
    }
    Animal animal = (Animal) obj;
    if (this.name.equals(animal.getName())) {
        return true;
    }
    return super.equals(obj);
}
Now if you have a Cat and a Dog which extend Animal
and you check
private Cat cat = new Cat("Pipsy");
private Dog dog = new Dog("Pupsy");
then cat.equals(dog) will return false, because the comparison is by name and the names differ.
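One caveat worth adding: whenever equals is overridden, hashCode should be overridden consistently, or hash-based collections (HashSet, HashMap) will treat equal animals as distinct. A self-contained sketch (this Animal is a minimal stand-in, not the asker's actual class):

```java
// Stand-in for the asker's class: equality and hashing by name only.
class Animal {
    private final String name;

    Animal(String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }

    @Override
    public boolean equals(Object obj) {
        if (!(obj instanceof Animal)) {
            return false; // also covers obj == null
        }
        return name.equals(((Animal) obj).getName());
    }

    @Override
    public int hashCode() {
        return name.hashCode(); // equal names must hash alike
    }

    public static void main(String[] args) {
        System.out.println(new Animal("Pipsy").equals(new Animal("Pipsy"))); // true
        System.out.println(new Animal("Pipsy").equals(new Animal("Pupsy"))); // false
    }
}
```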
I do not think you have to use Generics, Object is generic enough :). Hope this helped | {
"domain": "codereview.stackexchange",
"id": 36141,
"tags": "java, generics"
} |
Call to external web service | Question: The architecture shows a call to an external web service, which I call. I then expose the data via WCF so it can be called from another WCF client. But I really don't like how it's done: interfaces, contracts, an implementation in the BLO and implementations in the DAL.
The same method is copied and pasted 7 times, each copy just calling another, identical method.
P.S.: I've changed the method names between the image and the code. In the code I've called the main duplicated method GetExampleData / GetExampleDataInfo.
namespace .Risk.WCF.Services
{
public static class LeagueServerSingletonContainer
{
private static volatile IWindsorContainer serverInstance;
private static readonly object serverSyncRoot = new Object();
public static IWindsorContainer ServerContainer
{
get
{
if (serverInstance == null)
{
lock (serverSyncRoot)
{
if (serverInstance == null) // double-check
{
IWindsorContainer serverContainer = SingletonContainerFactory.ConfigureContainer(
"LeagueServerWindsor.config")
.Install(new LeagueWcfClientsWindsorInstaller());
serverInstance = serverContainer;
}
}
}
return serverInstance;
}
}
}
}
namespace .Example.WCF.Services
{
public class ExampleService : IWCFService, IExampleService
{
public ExampleApiResponse<ExampleDataResponse> GetExampleData(int targetId, DateTime? dateFrom,
DateTime? dateTo)
{
var apiSessionKey = InitSessionKeyInformation();
return ExampleSingleton.Example()
.GetExampleDataInfo(apiSessionKey, targetId, dateFrom, dateTo);
}
}
}
namespace .Example.WCF.Services.Contracts
{
public static class ExampleServerContainer
{
private static volatile IWindsorContainer serverInstance;
private static readonly object serverSyncRoot = new Object();
public static IWindsorContainer ServerContainer
{
get
{
if (serverInstance == null)
{
lock (serverSyncRoot)
{
if (serverInstance == null) // double-check
{
IWindsorContainer serverContainer =
new WindsorContainer().Install(new ExampleWcfClientsWindsorInstaller());
serverInstance = serverContainer;
}
}
}
return serverInstance;
}
}
}
}
namespace .Example.WCF.Services.Contracts
{
public class ExampleWcfClientsWindsorInstaller : IWindsorInstaller
{
public void Install(IWindsorContainer container, IConfigurationStore store)
{
container.Install
(
new WcfClientInstaller<IExampleService>(
"ExampleServiceEndpoint",
Config.ExampleService.ExampleServiceConnectionPoolCapacity,
(int) Config.ExampleService.ExampleServiceAcquireChannelTimeout.TotalMilliseconds,
"ExampleServiceWcfClient")
);
}
}
}
namespace .Example.WCF.Services.Contracts
{
[ServiceContract]
public interface IExampleService
{
[OperationContract]
[FaultContract(typeof (TrackedFault))]
ExampleApiResponse<ExampleDataResponse> GetExampleData(int targetId, DateTime? dateFrom,
DateTime? dateTo);
}
}
namespace .Example.BLL
{
public class ExampleBLO : IExampleBLO
{
private static readonly Object _apiTokenLockObj = new object();
private readonly ILog log = LogManager.GetLogger(typeof (ExampleBLO));
private string ExampleApiSessionToken;
private Timer ExampleKeepAliveTimer;
public ExampleApiResponse<ExampleDataResponse> GetExampleDataInfo(int targetId, DateTime? dateFrom,
DateTime? dateTo)
{
return ExampleDAOSingleton.Example()
.GetExampleDataInfo(ExampleApiSessionToken, targetId, dateFrom, dateTo, _apiSessionKey.ApiUrlv2);
}
private void ClearApiToken()
{
log.Info("Clearing APIToken.");
ExampleKeepAliveTimer.Stop();
lock (_apiTokenLockObj)
{
ExampleApiSessionToken = null;
}
}
}
}
namespace .Example.BLL
{
public class ExampleSingleton
{
private static volatile IExampleBLO _ExampleBLO;
private static readonly object _ExampleBLOSyncRoot = new Object();
/// <summary>
/// Returns the static instance of the object used to access third-party data
/// </summary>
public static IExampleBLO Example()
{
if (_ExampleBLO == null)
{
lock (_ExampleBLOSyncRoot)
{
if (_ExampleBLO == null) // double-check
_ExampleBLO = new ExampleBLO();
}
}
return _ExampleBLO;
}
}
}
namespace .Example.BLL.Interfaces
{
public interface IExampleBLO
{
ExampleApiResponse<ExampleDataResponse> GetExampleDataInfo(int targetId, DateTime? dateFrom,
DateTime? dateTo);
}
}
namespace .Example.DAL
{
internal class ExampleDAO : AbstractDAO, IExampleDAO
{
private readonly ILog log = LogManager.GetLogger(typeof (ExampleDAO));
public ExampleApiResponse<ExampleDataResponse> GetExampleDataInfo(string authenticationToken,
int targetId, DateTime? dateFrom, DateTime? dateTo, string apiUrlv2)
{
string url;
if (dateFrom == null && dateTo == null)
{
url =
HttpUtility.UrlPathEncode(
string.Format(
@"{0}?method=information::get_earnings_market&token={1}&target_id={2}&extended=true",
apiUrlv2, authenticationToken, targetId));
}
else
{
url =
HttpUtility.UrlPathEncode(
string.Format(
@"{0}?method=information::get_earnings_market&token={1}&target_id={2}&datefrom={3}&dateto={4}&extended=true",
apiUrlv2, authenticationToken, targetId, dateFrom.Value.ToString("yyyy-MM-dd HH:mm:ss"),
dateTo.Value.ToString("yyyy-MM-dd HH:mm:ss")));
}
ExampleApiResponse<ExampleDataResponse> response = CallApiMethod<ExampleDataResponse>(url);
return response;
}
private ExampleApiResponse<T> CallApiMethod<T>(string url) where T : ExampleBaseResponse
{
log.DebugFormat("Contacting API with request url : {0}", url);
ExampleApiResponse<T> response = null;
var serializer = new XmlSerializer(typeof (ExampleApiResponse<T>));
try
{
using (var client = new WebClient())
{
using (Stream stream = client.OpenRead(url))
{
client.Encoding = Encoding.UTF8;
if (stream != null)
{
response = (ExampleApiResponse<T>) serializer.Deserialize(stream);
}
if (response != null && response.Response.ParseResult() != Results.Success)
{
log.WarnFormat("Error in API response. Request url: {0}", url);
}
}
}
}
catch (Exception)
{
log.WarnFormat("Error in API call with request url: {0}", url);
throw;
}
return response;
}
}
}
namespace .Example.DAL
{
public class ExampleDAOSingleton
{
private static volatile IExampleDAO _ExampleDAO;
private static readonly object _ExampleDAOSyncRoot = new Object();
/// <summary>
/// Returns the static instance of the object used to access third-party data
/// </summary>
public static IExampleDAO Example()
{
if (_ExampleDAO == null)
{
lock (_ExampleDAOSyncRoot)
{
if (_ExampleDAO == null) // double-check
_ExampleDAO = new ExampleDAO();
}
}
return _ExampleDAO;
}
}
}
namespace .Example.DAL.Interfaces
{
public interface IExampleDAO
{
ExampleApiResponse<ExampleDataResponse> GetExampleDataInfo(string authenticationToken, int targetId,
DateTime? dateFrom, DateTime? dateTo, string apiUrlv2);
}
}
Answer: You've turned your IoC container into a Service Locator (happens when you pass the IoC container around as a dependency)!
public void Install(IWindsorContainer container, IConfigurationStore store)
^^^^^^^^^^^^^^^^^^^^^^^^^^^ here!
Advantages
The "service locator" can act as a simple run-time linker. This allows code to be added at run-time without re-compiling the application, and in some cases without having to even restart it.
Applications can optimize themselves at run-time by selectively adding and removing items from the service locator. For example, an application can detect that it has a better library for reading JPG images available than the default one, and alter the registry accordingly.
Large sections of a library or application can be completely separated. The only link between them becomes the registry.
But this comes at a price:
Disadvantages
Things placed in the registry are effectively black boxes with regards to the rest of the system. This makes it harder to detect and recover from their errors, and may make the system as a whole less reliable.
The registry must be unique, which can make it a bottleneck for concurrent applications.
The registry can be a serious security vulnerability, because it allows outsiders to inject code right into an application.
The registry hides the class' dependencies, causing run-time errors instead of compile-time errors when dependencies are missing.
The registry makes the code more difficult to maintain (opposed to using Dependency injection), because it becomes unclear when you would be introducing a breaking change
The registry makes code harder to test, since all tests need to interact with the same global service locator class to set the fake dependencies of a class under test.
(http://en.wikipedia.org/wiki/Service_locator_pattern)
By doing this, you've introduced tight coupling between ExampleService and ExampleSingleton, which has become an ambient context, a DI/IoC anti-pattern that you should strive to eliminate.
The solution is to inject the dependencies through the class's constructor, ideally as an interface (or another abstraction; an abstract class works just as well).
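The shape of constructor injection is language-agnostic, so here is a minimal sketch of the idea in Python rather than C# (the class and method names are made up for illustration, not taken from the code above): the dependency arrives from outside, which is exactly what makes the class testable with a fake.

```python
class FakeDao:
    """Stand-in double for the data-access object (names are illustrative)."""
    def get_data(self):
        return "fake data"

class ExampleService:
    def __init__(self, dao):
        # The dependency arrives through the constructor; the class never
        # reaches out to a global singleton or service locator.
        self._dao = dao

    def run(self):
        return self._dao.get_data()

# The "composition root" (your IoC container in real code) wires it up:
service = ExampleService(FakeDao())
assert service.run() == "fake data"   # trivially testable with a fake
```

In a Windsor-based C# solution the same wiring would happen in the installer, with the container, not the classes, deciding which concrete implementation satisfies each constructor parameter.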
Your architecture relies heavily on static classes and the Singleton [anti-]pattern, which is making your solution essentially untestable, as far as I can tell. | {
"domain": "codereview.stackexchange",
"id": 6268,
"tags": "c#, web-services, wcf"
} |
2DOF arm with quick movement: Stepper, servo, or DC motor? | Question: To make a two degree of freedom arm capable of speeds needed to play air hockey, how should the arm be controlled? I'm wondering about speed vs accuracy.
The following image is from a project involving a 2DOF drawing arm:
For this air hockey project, the general mechanics of that arm seem appropriate. It can move the "hand" around a 2D plane. I also like how both motors are at the base of the arm, instead of having the 2nd motor at the elbow (which adds weight to the arm and slows it).
However, the drawing robot is slow.
Steppers:
I understand that in general, steppers are 'slow'.
But, steppers have accurate positioning, which is very important in a game of air hockey. I also know that the steppers can be geared-up, but then they lose torque. The steppers would need to handle the inertia of quick, back-and-forth arm movement.
Hobby servos:
I've used small servos for other projects, but the accuracy was poor. Most don't have a way to read the angle externally, and those that do have noisy signals. I'm not sure how strong the hobby servos are, or whether they can be accurate enough for this project. I understand that using digital servos improves dead-band issues.
DC motors with external feedback:
The only other method I can think of for controlling the arm would be to use DC motors with sensors such as rotary encoders. I think I would need an absolute rotary encoder. But they seem to be around $50-$1000, which is a bit much for a solution I'm not sure will even work out. Perhaps there are cheaper solutions to motor angle measurement. I could just use a potentiometer, but then I'm worried about noise again.
It's worth noting that I don't know of any easy or affordable way to design my own drivetrain. All I have is a drill, and I don't know how I would mount shafts/bearings and such, even if the gears themselves were affordable.
This means that if I need to gear-up or down, I don't think I can unless it's cheap and involves simple tools.
So for the arm: DC motors with external feedback, servos, steppers, something else?...
Which method would be the best in terms of speed, and which for accuracy? I'm wondering which would cost less as well, but I understand that is a grey area.
I'm leaning towards servos out of simplicity. I may try digital servos with ball bearings, in the hope that they will move quickly enough while being strong enough to handle the inertia of the arm.
(Note that a 2DOF arm is desired, and not something else like a 3D-printer x-y belt system.)
Answer: Servo mechanisms are not just the "popular" RC servos. A servo is a closed-loop control, not only for motors. Even steppers may or may not be servo motors. RC servos are in most cases a brushed DC motor with a reduction gear, a potentiometer for position feedback, and electronic control.
This is a common RC servo. The fact that it has no ball bearings does not directly lead to bad precision; there are plenty of high-precision products that don't use ball bearings, and they are better lubricated and stronger.
By Gophi (Own work) [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY-SA 4.0-3.0-2.5-2.0-1.0 (http://creativecommons.org/licenses/by-sa/4.0-3.0-2.5-2.0-1.0)], via Wikimedia Commons
So you can use a brushed DC, brushless, or stepper motor, or an AC induction motor, and still add a servo mechanism (closed loop).
But what is common among starting designers of robotic arms is a big interest in motors and actuators, and little research into the mechanics itself. If the structure is too flexible, using high-precision actuators will not help much.
So you need to design the arm itself to be strong under the weight, forces, acceleration and deceleration (with or without ball bearings), and then, with the dimensions known, you can start selecting the actuators.
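To make the gearbox bookkeeping concrete: an ideal reduction of ratio N divides speed by N and multiplies torque by N, with power conserved. The motor numbers below are hypothetical, purely for illustration:

```python
import math

# Hypothetical motor numbers, purely for illustration:
motor_speed_rpm = 6000.0   # high-speed, low-torque motor
motor_torque_nm = 0.05
ratio = 10.0               # 10:1 reduction

# An ideal reduction divides speed and multiplies torque by the ratio.
output_speed_rpm = motor_speed_rpm / ratio    # 600 rpm at the joint
output_torque_nm = motor_torque_nm * ratio    # 0.5 N*m at the joint

def power_w(torque_nm, speed_rpm):
    # Mechanical power P = torque * angular speed (rad/s)
    return torque_nm * speed_rpm * 2 * math.pi / 60

# Power is conserved through an ideal (lossless) gearbox:
assert abs(power_w(motor_torque_nm, motor_speed_rpm)
           - power_w(output_torque_nm, output_speed_rpm)) < 1e-9
```

A real gearbox loses some power to friction and adds backlash, which matters for the positioning accuracy discussed in the question.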
Roughly speaking, the bigger the number of poles, the more torque the motor will have and the lower its speed (this varies a lot). The point is: if the motor is directly driving the arm, you want high torque and low speed. If you use gearing, you can use a low-pole, high-speed, medium-to-high-torque motor and have the gearbox match the speed and torque to your application. | {
"domain": "robotics.stackexchange",
"id": 1292,
"tags": "robotic-arm, kinematics, servomotor, stepper-motor, rcservo"
} |
Show that $Y \subseteq A^*$ is decidable | Question: Let $A$ be a nonempty alphabet, $X \subseteq A^*$ a decidable set, and $Y \subseteq A^*$
be a semi-decidable set. We assume that $Y \subseteq X$ and that $X \setminus Y \subseteq A^*$ is semi-decidable. Show that the set $Y \subseteq A^*$ is then decidable.
I am looking for some ideas where to start this problem. I think we need 2 TM's which check each input simultaneously. But some further ideas where to start would be great!
Answer: Let $x \in A^*$ be your input string. We can check that $x \in X$ and, if that is not the case, reject immediately (since we know that $x \not\in X \supseteq Y$).
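The full decision procedure — a total pre-check against $X$, then a dovetailed (interleaved) simulation of semi-deciders for $Y$ and $X \setminus Y$ — can be sketched in Python. The `*_accepts_within(x, n)` functions are stand-ins for running a Turing machine on $x$ for at most $n$ steps; since every $x \in X$ lies in $Y$ or in $X \setminus Y$, one of them eventually accepts, so the loop terminates on every input.

```python
def decide_Y(x, in_X, Y_accepts_within, XmY_accepts_within):
    """in_X: total decider for X.
    *_accepts_within(x, n): True iff that semi-decider accepts x within n steps."""
    if not in_X(x):
        return False          # x not in X, hence not in Y (Y is a subset of X)
    n = 1
    while True:               # dovetail: give both machines n steps, then n+1, ...
        if Y_accepts_within(x, n):
            return True
        if XmY_accepts_within(x, n):
            return False
        n += 1

# Toy instantiation: A = {a, b}, X = all strings, Y = strings of even length.
in_X = lambda x: True
Y_within = lambda x, n: len(x) % 2 == 0 and n >= len(x)    # "slow" accepter for Y
XmY_within = lambda x, n: len(x) % 2 == 1 and n >= len(x)  # accepter for X \ Y
```

The step bound is what makes the interleaving safe: neither machine is ever allowed to run forever before the other gets a turn.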
Now, we know that there exists a Turing machine $T_1$ that accepts $Y$. Moreover, there exists a Turing machine $T_2$ that accepts $X \setminus Y$. Interleave the execution of $T_1(x)$ and $T_2(x)$. Eventually, one of $T_1(x)$ and $T_2(x)$ must halt and accept, since $x \in X = Y \cup (X \setminus Y)$. If $T_1(x)$ accepts, then accept. If $T_2(x)$ accepts, then reject. | {
"domain": "cs.stackexchange",
"id": 19582,
"tags": "decidability"
} |
Why can't the total angular momentum of a composite system be less than zero? | Question: After reading the following excerpt from Griffiths' "Introduction to Quantum Mechanics", in his discussion of the addition of angular momentum, I'm a bit confused as to why the total spin (and I suppose angular momentum) of two particles can't be less than zero.
In his example of a particle with spin 3/2 and another with spin 2, he declares that the total spin can range from 7/2 to 1/2. Why can't we find both of the particles with a -ve spin, so that they're parallel but "negatively"? Is this because both of them being aligned in the -ve direction is just the same as both being aligned in the +ve direction, and there's no clear way of distinguishing the two cases?
Answer: The magnitude of a vector, $\left|\vec v\right|^2$, is always nonnegative.
For angular momenta in quantum mechanics we have the nontrivial result that the orbital angular momentum magnitude operator $\hat L{}^2$ has eigenvalues $L(L+1)\hbar^2$ for nonnegative integer $L$. This result is buried in some hocus-pocus about how the Legendre polynomials turn into the spherical harmonics.
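As a bookkeeping check on the range quoted in the question (total spin running from 7/2 down to 1/2 in integer steps when spin 3/2 is combined with spin 2), the state counts on both sides of the decomposition must match, and they do:

```python
from fractions import Fraction

s1, s2 = Fraction(3, 2), Fraction(2)

# Dimension of the product space: (2*s1 + 1) * (2*s2 + 1) = 4 * 5 = 20
product_dim = (2 * s1 + 1) * (2 * s2 + 1)

# Allowed total spins run from |s1 - s2| up to s1 + s2 in integer steps.
s = abs(s1 - s2)
spins, total_dim = [], 0
while s <= s1 + s2:
    spins.append(s)
    total_dim += 2 * s + 1   # each total spin s contributes 2s + 1 states
    s += 1

assert spins[0] == Fraction(1, 2) and spins[-1] == Fraction(7, 2)
assert total_dim == product_dim == 20
```

Note that no negative total spin appears: $s$ labels the magnitude (eigenvalue $s(s+1)\hbar^2$ of $\hat S^2$), while the orientation lives in the $m$ quantum numbers, which do run negative.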
Spin is trickier because the four-state particle-antiparticle spinor arises from the Dirac equation, also in a nontrivial way. (Dirac’s original paper is a good read.) At the level of Griffiths, spin angular momentum is an assumption to be clarified later. However, you can probably rearrange the definition
$$
\hat S{}^2 = \hat S_x{}^2 + \hat S_y{}^2 + \hat S_z{}^2
$$
into a combination of $S_z$, $S_\pm$, and their commutators in a way that might convince you the eigenvalues of $S^2$ depend on the magnitude, but not the sign, of the eigenvalues of $S_z$ for which the ladder operators terminate. | {
"domain": "physics.stackexchange",
"id": 87556,
"tags": "quantum-mechanics, angular-momentum, quantum-spin"
} |
Energy not conserved with acceleration by a constant force | Question: Suppose I have an object with mass $m$ in vacuum that I propel by applying a constant force, $F$, on it, with a rocket engine that I supply a constant amount of energy, $\frac{\delta E_{supply}}{\delta t}$, to.
Then the object's acceleration is given by $F = ma \implies a = \frac{F}{m}$. Thus, it will have a constant increase in velocity, $\frac{\delta v}{\delta t} = a$.
But, $E = \frac{mv^2}{2}$, and so the object's increase in energy equals $\frac{\delta E_{object}}{\delta t} = \frac{m}{2} \frac{\delta}{\delta t} (v^2) = mv \frac{\delta v}{\delta t} = mva$.
This energy increase, $\frac{\delta E_{object}}{\delta t}$, is not constant, but $\frac{\delta E_{supply}}{\delta t}$ is and so the principle of conservation of energy is violated. Where is my error?
EDIT:
Many of you point out that the problem lies in how I assume a rocket will convert my energy to a constant force (@BMS for example says that $F = \frac{dK}{dx}$ instead of $F = \frac{dK}{dt}$), but there is one thing about that I don't understand:
Imagine that instead of a rocket, I propel my object with an electron accelerator. I accelerate my electrons over an electric potential of $V$ volts and aim the beam opposite to the direction of desired travel. This will consume a constant amount of power since $P = IV$ (power equals current times voltage and the rate of electron ejection (current) can be kept constant).
This should mean that all my electrons are ejected with the same velocity, $V_e$, relative to my object at the time of ejection, shouldn't it? (or does $F = \frac{dK}{dx}$ make this statement invalid?) And since the electrons should require a known force to be accelerated to $V_e$ and because of Newton's third law, every force has an equal and opposite force, shouldn't this apply a constant force on my object?
Answer: One mistake you are making is equating a constant power $\frac{\delta E}{\delta t}$ with a constant force. One does not imply the other.
I'm going to switch to a more standard notation and use $K$ for kinetic energy. And let's say $U=\text{const}$ to keep things simple in our system. Finally, let's imagine there's only this one force $F$ on this constant-mass object.
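This distinction is easy to check numerically. The sketch below (a unit mass pushed from rest by a unit force, straightforward kinematics rather than the rocket problem itself) shows that the work done per unit distance, $dK/dx$, stays equal to $F$, while the power delivered, $dK/dt = Fv$, grows with the velocity:

```python
m, F = 1.0, 1.0        # unit mass, constant unit force
a = F / m

def v(t):              # velocity under constant acceleration from rest
    return a * t

def K(t):              # kinetic energy K = m v^2 / 2
    return 0.5 * m * v(t) ** 2

dt = 1e-6
for t in (1.0, 2.0, 3.0):
    power = (K(t + dt) - K(t)) / dt            # numerical dK/dt
    work_per_dist = (K(t + dt) - K(t)) / (v(t) * dt)   # numerical dK/dx
    assert abs(power - F * v(t)) < 1e-3        # dK/dt = F*v, grows linearly in t
    assert abs(work_per_dist - F) < 1e-3       # dK/dx = F, constant
```

So a device delivering constant power cannot be exerting a constant force once the object is moving.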
One of the many relations between force and energy is $F=dK/dx$. Note the distinction between that and the quantity $dK/dt$. One outcome of this distinction is that a constant force will supply a non-constant power, and vice versa. Weird, yeah, but oh well. It can be explained by noting for a constant force, the object will move a greater distance in each successive time interval, causing a greater amount of work $F\,dx$ to be done. | {
"domain": "physics.stackexchange",
"id": 16211,
"tags": "newtonian-mechanics, energy-conservation"
} |
Structuring a user authentication system | Question: I'm new to the MVC model. This is where I started: User authentication system. After questioning, reading and thinking a lot, I was able to write a small framework (don't know if 'framework' is the right word).
I will try to explain everything as simple as possible. I hope you could tell me what I'm doing wrong and what not.
The site structure (folders are bold):
index.php
/modules
/javascript
/images
/css
/applications
config.php
Router.class.php
Registry.class.php
/model
/entities
User.class.php
Connection.class.php
/mappers
UserMapper.class.php
/services
AuthService.class.php
RegistrerService.class.php
/controller
HomeController.class.php
AuthController.class.php
RegisterController.class.php
BaseController.class.php (gets extended by all controllers)
/view
View.class.php
/templates
Is this a good site structure? Do you have any tips/suggestions?
Here are some important scripts that show how everything works:
index.php
<?php
require_once('application/config.php');
$registry = new Registry();
$router = new Router($registry);
$view = new View($registry);
$router->setPath('application/controller');
$registry->con = Connection::get();
$registry->router = $router;
$registry->view = $view;
//Load page
$router->load();
?>
<a href="index.php?route=login">Login</a> <a href="index.php?route=register">Register</a>
config.php
error_reporting(E_ALL);
session_start();
require_once('application/Registry.class.php');
require_once('application/Router.class.php');
//Autoload classes
function __autoload($className) {
if (file_exists($class = 'application/model/entities/' . $className . '.class.php')) { //Entities
require_once($class);
}
else if (file_exists($class = 'application/model/mappers/' . $className . '.class.php')) { //Mappers
require_once($class);
}
else if (file_exists($class = 'application/model/services/' . $className . '.class.php')) { //Services
require_once($class);
}
else if (file_exists($class = 'application/controller/' . $className . '.class.php')) { //Controllers
require_once($class);
}
else if (file_exists($class = 'application/view/' . $className . '.class.php')) { //View
require_once($class);
}
}
Registry.class.php
<?php
class Registry {
private $vars = array();
public function __set($index, $value) {
$this->vars[$index] = $value;
}
public function __get($index) {
return $this->vars[$index];
}
}
Router.class.php
This script calls the correct controller (and action). index.php?route=auth/index will call the index() function of AuthController.class.
AuthController.class.php
<?php
class AuthController extends BaseController {
public function index() {
//Create new authenticationService and pass connection variable
$authService = new AuthService($this->registry->con);
if($authService->login('test@test.com', 'testPass')) {
$notice = 'Test logged In';
} else {
$notice = 'Not logged in';
}
$this->registry->view->heading = 'Login';
$this->registry->view->notice = $notice;
//Show the template 'test.php'
$this->registry->view->show('login');
}
}
View.class.php
I will only show the show() function of this class because it is the most important. It loads the correct template.
public function show($templateName) {
$template = 'application/view/templates/' . $templateName . '.php';
if (!file_exists($template)) {
die('Template not found');
}
//Set template variables
foreach ($this->vars as $key => $value) {
$$key = $value;
}
//Include template file
include($template);
}
login.php
This file is within the templates folder:
<h1><?php echo $heading; ?></h1>
<?php echo $notice; ?>
Am I doing it right (according to the MVC model)?
Answer:
Is this a good site structure? Do you have any tips/suggestions?
You might wanna create a new folder public / public_html to store all the application's public files for the web (javascripts, stylesheets, images, front-end controller, etc). Then you can set this folder as your web root so that the rest of your application code remains private. It also has the benefit of when the PHP module fails to load, only your front-end controller's code will be exposed. The rest will remain hidden.
I don't see a reason why you wouldn't merge config.php with index.php. They essentially do the same thing: bootstrapping the "framework".
May I suggest you to make use of a namespace based auto loader (e.g. the PSR-0 autoloader).
With this you don't have to keep adding new folders to your auto loader (referring to the long chain of else ifs in your current auto loader). Instead, you declare them via namespaces in your classes which makes it more efficient and handy. Additionally it prevents conflict between duplicate class names from different namespaces.
Notes
Your controllers are tightly coupled to their dependencies. I suggest you make use of Dependency Injection to couple them loosely. It also makes for easier testing.
Your Registry class introduces global state which goes against the principles of OOP and is considered an anti-pattern.
HTML code in the index.php bootstrap file is a big no no.
<a href="index.php?route=login">Login</a> <a href="index.php?route=register">Register</a>
Same goes for:
if($authService->login('test@test.com', 'testPass')) {
$notice = 'Test logged In'; //<---
} else {
$notice = 'Not logged in'; //<---
}
In the MVC pattern, all View-related logic goes in its corresponding View class, and HTML in the template.
In the show() method of your View class you're doing:
foreach ($this->vars as $key => $value) {
$$key = $value;
}
Fortunately, extract() gets the job done with less code: extract($this->vars);.
Am I doing it right (according to the MVC model)?
The MVC pattern is all about separation of concerns. You've almost got that right. Make sure every type of logic resides in the appropriate layer, and you will be doing it right.
As far as the rest of your code (the "framework"), that's just detail that has nothing to do with the MVC pattern directly. The only thing it serves is to bootstrap itself so that you can utilize the MVC pattern in an easy, efficient manner. | {
"domain": "codereview.stackexchange",
"id": 9450,
"tags": "php, classes, mvc, authentication"
} |
Printing the number of ways a message can be decoded | Question: My solution below to the following problem exceeds the maximum recursion depth. I am looking for any improvements which I can make.
Problem
Alice and Bob need to send secret messages to each other and are
discussing ways to encode their messages:
Alice: “Let’s just use a very simple code: We’ll assign ‘A’ the code
word 1, ‘B’ will be 2, and so on down to ‘Z’ being assigned 26.”
Bob: “That’s a stupid code, Alice. Suppose I send you the word ‘BEAN’
encoded as 25114. You could decode that in many different ways!”
Alice: “Sure you could, but what words would you get? Other than
‘BEAN’, you’d get ‘BEAAD’, ‘YAAD’, ‘YAN’, ‘YKD’ and ‘BEKD’. I think
you would be able to figure out the correct decoding. And why would
you send me the word ‘BEAN’ anyway?” Bob: “OK, maybe that’s a bad
example, but I bet you that if you got a string of length 5000 there
would be tons of different decodings and with that many you would find
at least two different ones that would make sense.” Alice: “How many
different decodings?” Bob: “Jillions!”
For some reason, Alice is still unconvinced by Bob’s argument, so she
requires a program that will determine how many decodings there can be
for a given string using her code.
Input
Input will consist of multiple input sets. Each set will consist of a single line of at most 5000 digits representing a valid encryption (for example, no line will begin with a 0). There will be no spaces between the digits. An input line of ‘0’ will terminate the input and should not be processed.
Output
For each input set, output the number of possible decodings for the input string. All answers will be within the range of a 64 bit signed integer.
Example
Input:
25114
1111111111
3333333333
0
Output:
6
89
1
The following is my solution -
def helper_dp(input_val, k, memo):
if k == 0:
return 1
s = len(input_val) - k
if input_val[s] == '0':
return 0
if memo[k] != -1:
return memo[k]
result = helper_dp(input_val, k - 1, memo)
if k >= 2 and int(input_val[s:s+2]) <= 26:
result += helper_dp(input_val, k - 2, memo)
memo[k] = result
return memo[k]
def num_ways_dp(input_num, input_len):
memo = [-1] * (len(input_num) + 1)
return helper_dp(input_num, input_len, memo)
if __name__ == '__main__':
number = input()
while number != '0':
print(num_ways_dp(number, len(number)))
number = input()
Problem Link - https://www.spoj.com/problems/ACODE/
Answer: Your code performs one recursion per character and recursion depth is limited. For long inputs, your code will raise RecursionError: maximum recursion depth exceeded. Replacing recursion with an explicit loop solves the issue.
Also, if you start the calculation at k = 1 with memo[-1] and memo[0] properly initialized and then work upwards to k = len(s), the code will become significantly simpler. When calculating memo[k], you only ever access memo[k-2] and memo[k-1], so you can store only those. I renamed them old and new. Here is how it could look:
def num_ways_dp(s):
old = new = 1
for i in range(len(s)-1):
if int(s[i:i+2]) > 26: old = 0 # no two-digit solutions
if s[i+1] == '0': new = 0 # no one-digit solutions
(old, new) = (new, old + new)
return new
if __name__ == '__main__':
number = input()
while number != '0':
print(num_ways_dp(number))
number = input() | {
"domain": "codereview.stackexchange",
"id": 37126,
"tags": "python, performance, python-3.x, recursion, dynamic-programming"
} |
About time and time dilation | Question: This question is related to this answer of John Rennie.
He says:
The length of the red line is the same in both figure 1 and figure 2
I guess his meaning of red line is the space-time distance traveled by the same particle between points (space-time states) A and B. It makes sense.
Then he adds:
For special relativity we need to extend this idea to include all three spatial dimensions plus time. There are various ways to write the line element for special relativity and for the purposes of this article I’m going to write it as:
$$\mathrm ds^2 = - c^2\mathrm dt^2 + \mathrm dx^2 + \mathrm dy^2 +\mathrm dz^2$$...
note that we can’t just add time to distance because they have different units - seconds and metres - so we multiply time by the speed of light $c$ so the product $ct$ has units of metres.
I think this (they have different units) is not a good reason, because he is talking about a fundamental, basic concept of physics. He could just as easily say: "so, we discovered that our understanding of space and time was wrong, and now we have space-time with four new coordinates $(t,x,y,z)$ whose unit is a new unit we name (for example) $\mathrm {st}$. $\mathrm {st}$ is a new unit, neither meter nor second, and from now on meter and second are meaningless".
And then, he could write:
$$\mathrm ds^2 = \mathrm dt^2 + \mathrm dx^2 + \mathrm dy^2 +\mathrm dz^2$$
But I guess his true reason of that formula is the results of the experiments not units difference.
Why does he use the speed of light $c$ in that formula? Is this obtained from his strong (at least I think that principle is very strong) principle (The length of the red line is the same in both figure 1 and figure 2)?
Why do we need to improve our physics?
I think because we want our physics to match the results of the experiments. But there are some problems here:
3.1. How are we sure that what we measure in the experiments is the same thing that we want to (or must) measure? For example, consider speed. How are we sure that we exactly measure $\frac{\mathrm dx}{\mathrm dt}$? Because, as far as I know, measurement needs a time interval, while the speed $v=\frac{\mathrm dx}{\mathrm dt}$ is defined at a single instant of time.
3.2. Assuming we must improve the physics, why should we change the definition of an undefined concept? (As far as I know, time is an undefined concept, like point in geometry.) Why do we not instead change the definitions we made ourselves, like velocity, kinetic energy, etc.? I think talking about time dilation is completely similar to talking about the size of points on the plane. It is like saying that some points on the plane are bigger than others!
Maybe it is true (maybe some points are in fact bigger than others), but I think we cannot discover it, because we don't know what a point is (it is an undefined concept). As far as I remember, I have learnt that we have some undefined concepts and we define other concepts with their help, but we can never define them themselves, because if we could, they wouldn't be called undefined. Can someone please define point for me? If he/she can, I will prove that there is no time dilation!
Note that "A plane is created by points" is not a definition of point, because I will immediately ask "What is a plane?" And so on.
Answer:
He says: The length of the red line is the same in both figure 1 and figure 2. I guess his meaning of red line is the space-time distance travelled by the same particle between points (space-time states) A and B. It makes sense.
Actually, it doesn't. Because there is no motion in spacetime. See relativist Ben Crowell saying so here: "Objects don't move through spacetime. Objects move through space". The spacetime interval is the same because time is a cumulative measure of local motion, and when you move fast through space your local motion is of necessity reduced. Otherwise the local motion plus macroscopic motion would exceed c, which can't happen because of the wave nature of matter.
Then he adds: "For special relativity we need to extend this idea to include all three spatial dimensions plus time. There are various ways to write the line element for special relativity and for the purposes of this article I’m going to write it as: $\mathrm ds^2 = - c^2\mathrm dt^2 + \mathrm dx^2 + \mathrm dy^2 +\mathrm dz^2$.
Yes, note the minus sign, and see Einstein's derivation.
I think this (they have different units) is not a good reason. Because he is talking about fundamental basic concept of the physics. He could easily say that "so, we discovered that our understanding about space and time was wrong and now we have space-time with four new coordinates $(t,x,y,z)$ and the unit of them are the new unit (for example) we name it $\mathrm {st}$. $\mathrm {st}$ is a new unit not meter nor second and after now meter and second are meaningless".
You're right. Because what we're really dealing with isn't length or time, it's motion. We define our second and our metre using the motion of light.
And then, he could write: $\mathrm ds^2 = \mathrm dt^2 + \mathrm dx^2 + \mathrm dy^2 +\mathrm dz^2$
You missed the minus sign out. That apart the expression is correct, but it's only really a restatement of Pythagoras's theorem. See the simple inference of time dilation on Wikipedia.
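The Pythagorean light-clock inference mentioned here is short enough to check numerically: in the frame where the clock moves, the light pulse traverses the hypotenuse, so $(c\,t')^2 = L^2 + (v\,t')^2$, which gives $t' = \gamma\, t$ with $\gamma = 1/\sqrt{1 - v^2/c^2}$.

```python
import math

c = 1.0
v = 0.6 * c          # clock moving at 0.6 c
L = 1.0              # mirror separation: one tick takes t = L/c in the rest frame

t_rest = L / c
# Solve (c*t')**2 = L**2 + (v*t')**2 for the moving-frame tick duration t':
t_moving = L / math.sqrt(c**2 - v**2)

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
assert abs(t_moving - gamma * t_rest) < 1e-12   # t' = gamma * t; gamma = 1.25 here
```

The minus sign in the line element is exactly what this geometry encodes: time and space enter the interval with opposite signs.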
Why does he use the speed of light $c$ in that formula? Is this obtained from his strong (at least I think that principle is very strong) principle (The length of the red line is the same in both figure 1 and figure 2)?
The speed of light is in the expression because "A light-signal, which is proceeding along the positive axis of x, is transmitted according to the equation x = ct". See Einstein's derivation.
How are we sure that what we measure in the experiments, is the same thing that we want to (or we must) measure?
Because of pair production and electron diffraction and the wave nature of matter. Hence matter behaves like the light between the parallel mirrors.
Assuming we must improve the physics, why should we change the definition of an undefined concept? (As far as I know, time is an undefined concept
It isn't. See what I said about it here. As Einstein said, time is what clocks measure. And if you take a look at what a clock actually does, if you open up a clock and take a cold scientific look at the empirical evidence, you will see cogs turning or a crystal oscillating. You will see that the clock features some kind of regular cyclical motion along with something like gears or a counting device, and it gives some kind of cumulative display of the thing we call "the time". It's that simple.
I think talking about time dilation is completely similar to talking about size of points on the plane. It is similar to that we say some points on the plane are bigger than the others!
It isn't I'm afraid Lucas. Time dilation is just a reduced rate of local motion. Again, it's very simple. | {
"domain": "physics.stackexchange",
"id": 32217,
"tags": "spacetime, time, time-dilation, laws-of-physics"
} |
How to define quantum Turing machines? | Question: In quantum computation, what is the equivalent model of a Turing machine?
It is quite clear to me how quantum circuits can be constructed out of quantum gates, but how can we define a quantum Turing machine (QTM) that can actually benefit from quantum effects, namely, operate on high-dimensional systems?
Answer: (Note: the full description is a bit complex and has several subtleties which I preferred to ignore. The following is merely the high-level idea of the QTM model.)
When defining a Quantum Turing machine (QTM), one would like to have a simple model, similar to the classical TM (that is, a finite state machine plus an infinite tape), but allow the new model the advantage of quantum mechanics.
Similarly to the classical model, QTM has:
$Q=\{q_0,q_1,..\}$ - a finite set of states. Let $q_0$ be an initial state.
$\Sigma=\{\sigma_0,\sigma_1,...\}$, $\Gamma=\{\gamma_0,..\}$ - the input and working alphabets, respectively
an infinite tape and a single "head".
However, when defining the transition function, one should recall that any quantum computation must be reversible.
Recall that a configuration of TM is the tuple $C=(q,T,i)$ denoting that the TM is at state $q\in Q$, the tape contains $T\in \Gamma^*$ and the head points to the $i$th cell of the tape.
Since, at any given time, the tape contains only a finite number of non-blank cells, we define the (quantum) state of the QTM as
a unit vector in the Hilbert space $\mathcal{H}$ generated by the configuration space $Q\times\Gamma^*\times \mathbb{Z}$. The specific configuration $C=(q,T,i)$ is represented as the state $$|C\rangle = |q\rangle |T\rangle |i\rangle.$$
(Remark: therefore, every cell of the tape is a $|\Gamma|$-dimensional Hilbert space.)
The QTM is initialized to the state $|\psi(0)\rangle = |q_0\rangle |T_0\rangle |1\rangle$, where $T_0\in \Gamma^*$ is the concatenation of the input $x\in\Sigma^*$ with as many "blanks" as needed (there is a subtlety in determining the maximal length, but I ignore it).
At each time step, the state of the QTM evolves according to some unitary $U$
$$|\psi(i+1)\rangle = U|\psi(i)\rangle$$
Note that the state at any time $n$ is given by $|\psi(n)\rangle = U^n|\psi(0)\rangle$. $U$ can be any unitary that "changes" the tape only where the head is located and moves the head one step to the right or left. That is, $\langle q',T',i'|U|q,T,i\rangle$ is zero unless $i'= i \pm 1$ and $T'$ differs from $T$ only at position $i$.
At the end of the computation (when the QTM reaches a state $q_f$) the tape is being measured (using, say, the computational basis).
The interesting thing to notice is that after each "step" the QTM's state is a superposition of possible configurations, which gives the QTM its "quantum" advantage.
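This last point can be made concrete with a toy sketch (nothing like the full formalism of the papers cited below, and the chosen update is purely illustrative): represent the state as a map from configurations $(q, T, i)$ to amplitudes, and let one step apply a Hadamard-style branching on the scanned cell while moving the head right. The step is unitary on this toy subspace, so the norm stays 1 while the number of superposed configurations grows.

```python
import math

H = 1 / math.sqrt(2)

def step(state):
    """One toy QTM step: Hadamard-branch on the scanned cell, move head right."""
    new = {}
    for (q, tape, i), amp in state.items():
        # Columns of the Hadamard matrix: '0' -> (|0> + |1>)/sqrt(2),
        #                                 '1' -> (|0> - |1>)/sqrt(2).
        branches = ((('0', H), ('1', H)) if tape[i] == '0'
                    else (('0', H), ('1', -H)))
        for sym, factor in branches:
            cfg = (q, tape[:i] + sym + tape[i+1:], i + 1)
            new[cfg] = new.get(cfg, 0.0) + amp * factor
    return {c: a for c, a in new.items() if abs(a) > 1e-12}

state = {('q0', '00', 0): 1.0}          # one classical starting configuration
state = step(state)                     # now 2 configurations in superposition
norm = sum(abs(a) ** 2 for a in state.values())
assert len(state) == 2 and abs(norm - 1.0) < 1e-12
```

Running `step` again yields 4 configurations, still with total norm 1; a measurement at the end would pick one configuration with probability equal to its squared amplitude.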
The answer is based on Masanao Ozawa, On the Halting Problem for Quantum Turing Machines.
See also David Deutsch, Quantum theory, the Church-Turing principle and the universal quantum computer. | {
"domain": "cs.stackexchange",
"id": 20,
"tags": "quantum-computing, turing-machines, computation-models"
} |
Proof of equivalence of L0 and language accepted by made up machine | Question: I have a made-up machine which has the same definition as a Turing machine, except for its transition function and computation step. What would be the approach to proving that the class of languages accepted by this machine is (or is not) equal to L0 (the recursively enumerable languages)?
Answer: You need to show that any language that is accepted by a Turing machine is accepted by one of your machines. So, start with an arbitrary Turing machine $M$ and show that one of your machines can accept $L(M)$, e.g., by showing how to directly simulate the operation of $M$. | {
"domain": "cs.stackexchange",
"id": 10361,
"tags": "computability, turing-machines"
} |
Why was the Planck constant $h$ fixed to be exactly $6.62607015\times10^{−34}\text{Js}$ and not some other value? | Question: So apparently in May, all of the SI base units were redefined to be relative to the Planck Constant $h$, instead of relying on physical objects like the Kilogram Prototype in a Paris Basement. Planck's constant was defined as
$$6.62607015\times10^{−34}\text{Js}$$
(It's not a measured value anymore. It is the value)
My question is: Why was this exact number chosen? In 2010, the measured value of $h$ was
$$6.626\mathbf{\color{blue}{06957}}\times10^{−34}\text{Js}$$
So why did they choose this arbitrary value of
$$6.626\mathbf{\color{blue}{07015}}\times10^{−34}$$
instead of something more exact like
$$6.626\mathbf{\color{blue}{07}}\times10^{-34}$$
That would have been within the margin of error so it could have worked just fine.
Answer: There are many, many instruments which are calibrated using the old definition of the kilogramme - the International Prototype Kilogramme (IPK) made of a platinum-iridium alloy.
So one kilogramme measured using the new definition had to be as close as possible to one kilogramme using old definition so as not to have to recalibrate all instruments which relied on the old definition of the kilogramme.
Using the old definition of the kilogramme (IPK) the numerical value of Planck’s constant was measured as accurately as possible using the Kibble (watt) balance and the X-ray crystal density method.
The two values that you quoted $6.62606957 \times 10^{−34}\, \rm kg\,m^2s^{-1}$ and later $ 6.62607015 \times 10^{−34}\, \rm kg\,m^2s^{-1}$ were the results of such measurements of Planck's constant.
As of 20 May 2019 the determination/definition was turned on its head with the value of Planck’s constant defined as $ 6.62607015 \times 10^{−34}\, \rm kg\,m^2s^{-1}$ and the IPK (made of a platinum-iridium alloy) having a measured value of one kilogramme to within one part in $10$ billion.
On page 131 of the BIPM brochure on the SI system of units it states:
The number chosen for the numerical value of the Planck constant in this definition is such that at the time of its adoption, the kilogram was equal to the mass of the international
prototype, m(K) = 1 kg, with a relative standard uncertainty of $1 \times 10^{-8}$, which was the standard uncertainty of the combined best estimates of the value of the Planck constant at that time.
In future the new definition of one kilogramme via the defined exact value of Planck's constant will enable measurements to be made to see by how much the masses of the IPK and its daughters change with time.
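For scale, here is a quick back-of-the-envelope check (my own arithmetic, not from the brochure or the original answer): the relative difference between the 2010 measured value quoted in the question and the value fixed in 2019 is about $9\times10^{-8}$, i.e. a few parts in $10^8$.

```python
h_2010 = 6.62606957e-34   # measured value (J s), as quoted in the question
h_2019 = 6.62607015e-34   # exact defined value since 20 May 2019 (J s)

rel = (h_2019 - h_2010) / h_2010   # relative shift between the two values
print(f"{rel:.2e}")                # prints 8.75e-08
```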
---
"why did they choose this arbitrary value of $6.62607015\times10^{−34}$?"
This value was chosen to make one kilogramme using the old definition (IPK) as close as possible to one kilogramme using the new definition (via Planck's constant).
"instead of something more exact like $6.62607\times10^{-34}$?"
This would have required the recalibration of many, many (accurate) instruments. | {
"domain": "physics.stackexchange",
"id": 62823,
"tags": "physical-constants, si-units, metrology"
} |
Calculator using PHP with logging to MySQL database | Question: I have made a calculator in PHP and I need some feedback in terms of security and performance or any other general improvements.
You can download a zip file with all the contents from this link if you wish to try it out for yourself! I rewrote the source and comments from Norwegian to English, except for the .css file. I also put in an export of the MySQL database I used, so it's possible to test the database if necessary.
The calculator also contains some "easter eggs". This is a school project of mine, and the teacher said that those would add a bonus. There are currently two:
Expand the log 5 times in a row (decreased for less clickyness)
Make a calculation that results in 1337
My questions:
Can something be improved with the filter function (security-wise)?
Are there other practices for making a PHP calculator with support for cos, sin and tan?
Are there any security flaws with the MySQL connection?
Are there any major performance issues?
index.php
<?php
session_start(); //start use of sessions on this document
//This is a function that gets the result of the Resultval session, if not set it returns 0
function getResIfSet() {
if (isset($_SESSION["Resultval"])) {
echo $_SESSION["Resultval"];
} else {
echo 0;
}
}
?>
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8"> <!--Set charset to UTF-8-->
<title>Net-Calculator</title> <!--Set title of window-->
<link rel="stylesheet" type="text/css" href="style_eng.css"> <!--Link to external stylesheet: style.css-->
<!--Import JQuery-->
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <!--Import the jQuery library for functions.. etc-->
<!--Start javascript with JQuery-->
<script>
//This functions just fades in the .box class when the page loads
$(function() {
$(".box").hide().fadeIn(500);
});
//Once document is ready..
$(document).ready(function() {
var position = false; //Determines if the log is expanded or not
var clicks = 0; //Keeps a hold of how many times the #log_box is clicked (Easter egg)
var res = "<?php getResIfSet(); ?>"; //The result value if user has calculated something (gets from php session *see top of document*)
//if the result equals 1337 (Easter egg2)
if (res == "1337") {
$(".mainwindow").animate({top: '-2000px'}, 2000); //Make .mainwindow float far up
$("#elite").fadeIn(1000); //Fade in the #elite (img)
$("#elite_text").fadeIn(3000); //Fade in the #elite_text (paragaph)
}
//If the #log_box div is clicked..
$("#log_box").click(function() {
//check the position value and expand/retract based on that
if (position === false) {
//make #log_box to auto height with animate.. i found this on the net, not that sure how it works tbh :D
$(this).animate({height: $("#log_box").get(0).scrollHeight});
position = true; //invert the position variable
//This is the "OM NOM NOM" easter egg.. if the #log_box is expanded 5 times it turns it into a face!
if (clicks === 5) {
$(".mainwindow").fadeOut(5000);
$("h1").html("0 0");
$("#log_box").html("'OM NOM NOM!'<br><br><br><br><br><br><br>");
$("form").html(" ");
$(".box").css("display", "none");
} else {
//if clicks isnt 5.. increase it each time the #log_box expands
clicks++;
}
} else {
//make the #log_box to a single line only (compact)
$(this).animate({height: '20px'});
position = false; //invert the position variable
}
});
});
</script>
</head>
<body>
<div id="mainwrap"> <!--MainWrap just centers the content on the webpage-->
<div id="wrap2"> <!--This is an additional div around the main wrap the 1337 image gets put in (Easter egg)-->
<img src="1337.png" id="elite"> <!--Image gets loaded in, but is set to display:none in css by default unless 1337 is the result (Above)-->
<p id="elite_text">Click <a href="index_eng.php">here</a> to go back!</p> <!--Just a simple paragraph for the user to reload the page-->
<div class="mainwindow"> <!--This is the div where the "calculator" part is-->
<h1>Net Calculator</h1> <!--Create a text logo-->
<form action="calculator.php" method="post"> <!--Create the form to send data to calculator.php with the method post-->
<input id="input_bar" type="text" name="Input" autofocus> <!--Make an text field with autofocus on, so the user dont need to click it-->
<input id="submit_btn" type="submit" value="Calculate!"> <!--Finally a submit button-->
</form>
<?php
//This section prints out the "boxes" with errors, answers or warnings
//Check if Result session and Type session is set.. if they are, print out a box with the Result.. and Type defines the color of the border.
if (isset($_SESSION["Result"], $_SESSION["Type"]) && !empty($_SESSION["Result"]) && !empty($_SESSION["Type"])) {
//Example: <div class="box answer"><h3> 3 + 3 = 6 </h3></div>
//Note: Type defines the border of the box
echo '<div class="box ' . $_SESSION["Type"] . '"><h3>' . $_SESSION["Result"] . '</h3></div>';
//Clear the used sessions (also Resultval for 1337 easter egg) so it wont carry over if the phage is refreshed
unset($_SESSION["Result"]);
unset($_SESSION["Resultval"]);
unset($_SESSION["Type"]);
}
//This next if prints out a warning to the user, used to notify mysql database errors, but the calculator still works.. so it needs an extra box.
//Check if the warning session is set
if (isset($_SESSION["Warn"]) && !empty($_SESSION["Warn"])) {
//This box doesen't have changing borders, so its set to yellow by default, but the warning message changes
echo '<div class="box warn"><h3>' . $_SESSION["Warn"] . '</h3></div>';
//then clear the used session once done.
unset($_SESSION["Warn"]);
}
?>
<!--This is a work in progress (WIP)-->
<div id="log_box">
<h3>Log (Click me!)</h3>
Just static text for now, but i plan to add this in the future.<br>
3*3 = 15<br>
15-3 = 13<br>
20/5 = 4<br>
(30*2)/3 = 20<br>
10+10 = 20<br>
<br>
The log will only be local based on sessions, since the database is just to log whats happening..<br>
Imagine if everyone saw what other users were doing.. Thats why this is gonna be local only.
</div>
</div>
</div>
</div>
</body>
</html>
calculator.php
<?php
//sets result session to the input of this function
function setResult($resultString) {
$_SESSION["Result"] = $resultString;
}
//sets the warn session which indicates a warning (see index.php)
function setWarning($toWarn) {
$_SESSION["Warn"] = $toWarn;
}
//sets the type of box to display (basicly color of the border)
function setBox($boxType) {
//two things work here; answer and error
$_SESSION['Type'] = $boxType;
}
//returns the user to the starts and ends the document
function returnToStart() {
header("Location: index_eng.php"); //navigates the user back to index.php
die(); //end the document so it wont continue to load..
}
//get the input from the form index.php sent to this document.. if its empty.. set an error (bellow)
function getInput() {
//if input session isnt set to anything...
if (!empty($_POST["Input"]) && isset($_POST["Input"])) {
return $_POST['Input']; //return the value if it is set
} else {
//if not..
setResult("Error: Input is empty!"); //set the result to the error message
setBox("error"); //set the box type to error (red border)
returnToStart(); //and finally return the user back to index.php
}
}
//filter the user input and error out if it doesent contain what it is supposed to contain
function filter($toFilter) {
//first, remove all semicolons and whitespaces in the user input
$toFilter = preg_replace('/[; ]+/', '', $toFilter);
//then error out if input contains any other characters except those needed for sin(), cos() and tan()
//this is to minimize harmful functions that you can write into the eval() function
if (preg_match('/[bdefghjklmpqruvwxyzBDEFGHJKLMPQRUVWXYZ]+/', $toFilter)) {
setResult("Error: Calculation contains unknown text. "); //set the result to the error message
setBox("error"); //set the box type to error (red border)
returnToStart(); //and finally return the user back to index.php
}
return $toFilter; //return the filtered input
}
//checks if the result string still contains text of any kind.. it it does thats an error..
function checkResult($toCheck) {
if (preg_match('/[a-zA-Z]+/', $toCheck)) {
return true;
} else {
return false;
}
}
//log the result to the database
function logResult($input_filter, $result) {
$servername = "127.0.0.1"; //server adress
$username = "usernamehere"; //database username
$password = "passwordhere"; //and password
//create connection
$conn = new mysqli($servername, $username, $password);
//check if the connection was sucsessfull..
if ($conn->connect_error) {
setWarning("Warning: Could not connect to database."); //write a warning if it couldnt connect to the database
} else {
//create a query string that inserts values into the database kalk_log at table log
$query = "INSERT INTO `calc_log`.`log` (`Calculation`, `Result`, `FromIP`) "
. "VALUES ('$input_filter', '$result', '" . $_SERVER['REMOTE_ADDR'] . "');";
//run the query against the conenction..if it fails, display a message indicating it failed
if ($conn->query($query) === FALSE) {
setWarning("Warning: Couldn't put calculation into database."); //set warning message, no need for box type since its always yellow by default
$conn->close(); //finally close the connection!
}
}
$conn->close(); //close connection if it still isnt
}
//this is a unfinished function, supposed to log to a session and display it on the local log (the one with click me)
function localLog($calculation, $result){
//TODO: do stuff here
}
//The main method for solving a calculation (calls the other methods)
function solve() {
echo 'Calculating...'; //just a indication that the process has started
$input_raw = getInput(); //get the input and put it into a local variable
$input_filter = filter($input_raw); //filter the input and assign it to new variable
$result = eval('return ' . $input_filter . ';'); //calculate using eval and input and put it to a variable
//if the result still contains text.. dont display it and error out
if (checkResult($result)) {
setResult("Error: Result contains unknown text."); //write error message
setBox("error"); //set the box to type error
returnToStart(); //return the user to the start
}
if (!empty($result) || $result === 0) { //checks if the result is empty (which means error from eval) i use the or (||) to still display even if the actual calculation is 0 (as in 51-51)
$_SESSION["Resultval"] = $result;
setResult($input_filter . ' = <u>' . $result . '</u>'); //set result session to what it returns with some html tags to format it correctly
setBox("answer");
//You can comment out logResult to supress the warnings on the website
logResult($input_filter, $result); //log the result to the database using the input and result, rest it figures out itself.
localLog($input_filter, $result); //log to a session to display on the webpage (Work In Progress)
} else {
setResult('Error: Something went wrong when solving.'); //if the result is empty.. set an error message as the result
setBox("error"); //set box type to error..
}
returnToStart(); //and return to the start..
}
session_start(); //start use of sessions in the document
solve(); //call the main method (solve) and figure out the calculation from the user
style.css
#mainwrap{
min-width: 300px;
max-width: 500px;
margin-left: auto;
margin-right: auto;
margin-top: 50px;
}
body{
background: url(bakgrunn.jpg) no-repeat center center fixed;
-webkit-background-size: cover;
-moz-background-size: cover;
-o-background-size: cover;
background-size: cover;
}
.mainwindow{
position: relative;
background-color: #dfdfdf;
border: gray solid 3px;
border-radius: 15px;
padding: 15px;
text-align: center;
margin-bottom: 10px;
font-family: cursive;
}
.box{
background-color: whitesmoke;
border-radius: 15px;
padding: 15px;
text-align: center;
min-height: 30px;
font-family: sans-serif;
margin-bottom: 20px;
}
#log_box{
background-color: whitesmoke;
border: gray solid 3px;
border-radius: 15px;
padding: 15px;
text-align: center;
font-family: sans-serif;
overflow: hidden;
height: 20px;
}
.error{
border: red solid 3px;
}
.warn{
border: yellow solid 3px;
}
.answer{
border: greenyellow solid 3px;
}
#input_bar{
width: 75%;
border: black solid 1px;
border-radius: 5px;
}
#submit_btn{
width: 75px;
margin-top: 10px;
margin-bottom: 25px;
border: black solid 1px;
background-color: white;
border-radius: 5px;
}
#submit_btn:hover{
background-color: lightcoral;
}
h1{
color: black;
}
h3{
padding:0;
margin:0;
margin-bottom: 15px;
}
table{
width: 100%;
border-collapse: collapse;
border: grey solid 2px;
border-radius: 15px;
margin-top: 15px;
}
th,td{
border: gray solid 2px;
width: 50%;
padding: 7px;
}
#elite{
display:none;
position: absolute;
width: 100%;
height: 300px;
left: 0px;
z-index: -1;
}
#elite_text{
display:none;
position: relative;
top: 300px;
color: red;
text-align: center;
width: 100%;
}
#wrap2{
position:absolute;
width: 500px;
height: auto;
}
Answer: Search for ### and you will find my comments in your code
<?php
//sets result session to the input of this function
function setResult($resultString) {
$_SESSION["Result"] = $resultString;
}
//sets the warn session which indicates a warning (see index.php)
function setWarning($toWarn) {
$_SESSION["Warn"] = $toWarn;
}
//sets the type of box to display (basicly color of the border)
function setBox($boxType) {
//two things work here; answer and error
$_SESSION['Type'] = $boxType;
}
//returns the user to the starts and ends the document
function returnToStart() {
header("Location: index_eng.php"); //navigates the user back to index.php
// ### die indicates failure, use exit instead
// ### die(); //end the document so it wont continue to load..
exit;
}
//get the input from the form index.php sent to this document.. if its empty.. set an error (bellow)
// ### think about your function names, a function called getInput that redirects to the start isn't really getting input is it?
// ### perhaps a better name, getInputOrReturnToStart()
function getInput() {
//if input session isnt set to anything...
// ### if (!empty($_POST["Input"]) && isset($_POST["Input"])) {
// ### empty tests for isset as well
if (!empty($_POST["Input"])) {
return $_POST['Input']; //return the value if it is set
} else {
//if not..
setResult("Error: Input is empty!"); //set the result to the error message
setBox("error"); //set the box type to error (red border)
returnToStart(); //and finally return the user back to index.php
}
}
//filter the user input and error out if it doesent contain what it is supposed to contain
function filter($toFilter) {
//first, remove all semicolons and whitespaces in the user input
$toFilter = preg_replace('/[; ]+/', '', $toFilter);
//then error out if input contains any other characters except those needed for sin(), cos() and tan()
//this is to minimize harmful functions that you can write into the eval() function
if (preg_match('/[bdefghjklmpqruvwxyzBDEFGHJKLMPQRUVWXYZ]+/', $toFilter)) {
setResult("Error: Calculation contains unknown text. "); //set the result to the error message
setBox("error"); //set the box type to error (red border)
returnToStart(); //and finally return the user back to index.php
}
return $toFilter; //return the filtered input
}
//checks if the result string still contains text of any kind.. it it does thats an error..
function checkResult($toCheck) {
// ### simplify code
// if (preg_match('/[a-zA-Z]+/', $toCheck)) {
// return true;
// } else {
// return false;
// }
return preg_match('/[a-zA-Z]+/', $toCheck);
}
//log the result to the database
function logResult($input_filter, $result) {
// ### i realise this is a simple project, but it is best to store config data in constants at start of script
// ### or in a separate file, not in the middle of your code
$servername = "127.0.0.1"; //server adress
$username = "usernamehere"; //database username
$password = "passwordhere"; //and password
//create connection
$conn = new mysqli($servername, $username, $password);
//check if the connection was sucsessfull..
if ($conn->connect_error) {
setWarning("Warning: Could not connect to database."); //write a warning if it couldnt connect to the database
} else {
// ### always escape sql parameters, as external parameters could have been interfered with
// ### what if $_SERVER['REMOTE_ADDR'] is not set?
$remote_addr = $conn->real_escape_string($_SERVER['REMOTE_ADDR']);
//create a query string that inserts values into the database kalk_log at table log
$query = "INSERT INTO `calc_log`.`log` (`Calculation`, `Result`, `FromIP`) "
. "VALUES ('$input_filter', '$result', '" . $remote_addr . "');";
//run the query against the conenction..if it fails, display a message indicating it failed
if ($conn->query($query) === FALSE) {
setWarning("Warning: Couldn't put calculation into database."); //set warning message, no need for box type since its always yellow by default
$conn->close(); //finally close the connection!
}
}
// ### if there is a conn error you are trying to close a connection that was never opened?
// ### Warning: mysqli::close(): Couldn't fetch mysqli
$conn->close(); //close connection if it still isnt
}
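// ### as a sketch of an alternative (not tested here): a prepared statement lets MySQL
// ### handle quoting itself, so neither the calculation, the result nor REMOTE_ADDR
// ### needs manual escaping at all
function logResultPrepared($input_filter, $result) {
    $conn = new mysqli("127.0.0.1", "usernamehere", "passwordhere");
    if ($conn->connect_error) {
        setWarning("Warning: Could not connect to database.");
        return;
    }
    $remote_addr = isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '';
    $stmt = $conn->prepare("INSERT INTO `calc_log`.`log` (`Calculation`, `Result`, `FromIP`) VALUES (?, ?, ?)");
    if ($stmt === false
            || !$stmt->bind_param("sss", $input_filter, $result, $remote_addr)
            || !$stmt->execute()) {
        setWarning("Warning: Couldn't put calculation into database.");
    }
    $conn->close();
}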
//this is a unfinished function, supposed to log to a session and display it on the local log (the one with click me)
function localLog($calculation, $result){
//TODO: do stuff here
}
//The main method for solving a calculation (calls the other methods)
function solve() {
echo 'Calculating...'; //just a indication that the process has started
$input_raw = getInput(); //get the input and put it into a local variable
$input_filter = filter($input_raw); //filter the input and assign it to new variable
$result = eval('return ' . $input_filter . ';'); //calculate using eval and input and put it to a variable
//if the result still contains text.. dont display it and error out
if (checkResult($result)) {
setResult("Error: Result contains unknown text."); //write error message
setBox("error"); //set the box to type error
returnToStart(); //return the user to the start
}
if (!empty($result) || $result === 0) { //checks if the result is empty (which means error from eval) i use the or (||) to still display even if the actual calculation is 0 (as in 51-51)
// ### you have set functions for every other use of $_SESSION except this one??
$_SESSION["Resultval"] = $result;
setResult($input_filter . ' = <u>' . $result . '</u>'); //set result session to what it returns with some html tags to format it correctly
setBox("answer");
//You can comment out logResult to supress the warnings on the website
logResult($input_filter, $result); //log the result to the database using the input and result, rest it figures out itself.
localLog($input_filter, $result); //log to a session to display on the webpage (Work In Progress)
} else {
setResult('Error: Something went wrong when solving.'); //if the result is empty.. set an error message as the result
setBox("error"); //set box type to error..
}
returnToStart(); //and return to the start..
}
session_start(); //start use of sessions in the document
solve(); //call the main method (solve) and figure out the calculation from the user | {
"domain": "codereview.stackexchange",
"id": 10681,
"tags": "javascript, php, html, mysql, css"
} |
How to obtain an impulse response from an exponential sine sweep (23kHz/50kHz) | Question: I am currently working on my master thesis and have so far generated a broadband ESS to obtain frequency response and impulse response.
To generate the ESS as an excitation signal I use the following formula:
$$x(t) = \sin\!\left(\frac{2\pi f_1 T}{\ln(f_2/f_1)}\left(e^{\frac{t}{T}\ln(f_2/f_1)} - 1\right)\right)$$
See: Calculating the inverse filter for the (exponential) sine sweep Method
For the broadband signal, I chose f1=10Hz and f2=23kHz.
After recording the excitation signal, I get my spectrum via complex division and with subsequent IFFT my impulse response.
Here everything still works great (see picture ESS20k). But for my subwoofer I want to create an excitation signal with f2 = 500 Hz, and there are errors in the spectrum as well as in the impulse response (see picture ESS500Fehler).
I know that it has something to do with the frequency spectrum above 500Hz. If I set this above 500Hz to -150dB, I at least get a recognizable impulse response. However, this leads to problems as soon as you move the measurement window (I have done this so far to measure the distance). If anyone could help me even with a new way of thinking, I would be very grateful.
Answer: That's a typical problem with sweep measurements and complex division. You often have some amount of noise in your measurement, so your acquired signal $y$ becomes
$$y[n] = x[n]*h[n]+q[n]$$
where $x$ is your excitation, $h$ the impulse response, and $q$ some additive noise. With spectral division your transfer function estimate becomes
$$H_{est}[k] = \frac{Y[k]}{X[k]} = H[k]+\frac{Q[k]}{X[k]}$$
That means your noise spectrum is amplified by the inverse of the excitation sine. For a limited range sweep the energy outside the sweep-range is very low, so you get a lot of noise amplification. Since the inverse DFT requires information at ALL frequencies, that amplified noise messes up your impulse response.
One possible workaround would be to calculate the impulse response in the time domain using an LMS or least squared algorithm. You can also try inverting the excitation, band-pass filtering the inverted impulse response and than convolving with the acquisition.
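Another common workaround is to regularize the division itself. The sketch below is my own illustration (made-up sweep parameters and a toy impulse response, not the poster's setup) of the band-limited case: naive division $Y[k]/X[k]$ amplifies noise wherever $|X[k]|$ is tiny, while a Tikhonov-style estimate $Y X^{*}/(|X|^2 + \epsilon^2)$ stays usable.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 1200, 2.0                          # sample rate (Hz) and sweep length (s)
t = np.arange(int(fs * dur)) / fs
f1, f2 = 10.0, 500.0                         # band-limited sweep, subwoofer-style
L = dur / np.log(f2 / f1)
x = np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1))  # exponential sine sweep

h = np.zeros(64)
h[0], h[10] = 1.0, 0.5                       # toy IR: direct path plus one reflection
N = len(x)
y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h, n=N), n=N)  # circular convolution
y += 1e-3 * rng.standard_normal(N)           # additive measurement noise q[n]

X, Y = np.fft.rfft(x), np.fft.rfft(y)
H_naive = Y / X                              # blows up outside the sweep band
eps = 1e-2 * np.abs(X).max()
H_reg = Y * np.conj(X) / (np.abs(X) ** 2 + eps ** 2)  # regularized division
h_est = np.fft.irfft(H_reg, n=N)             # band-limited estimate of h
```

The estimate is a band-pass-filtered version of the true impulse response, so the peaks are slightly attenuated, but the out-of-band noise no longer swamps them.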
Alternatively you can construct an excitation that has "sufficient" energy at all frequencies, i.e. $X[k] \gg Q[k]$. That can be done with sweeps but it's tricky. It's much easier if you use a pseudo-random noise, where you can directly optimize the excitation spectrum based on your noise spectrum (which is typically brown or pink) and your requirements. This also has the added benefit that pseudo-random noise reacts more benignly to non-linear distortions. Sweeps are sensitive to the 2nd and 3rd order distortions of a typical electromagnetic driver: the noise energy tends to concentrate in time, so you can end up with "ghost reflections". | {
"domain": "dsp.stackexchange",
"id": 12256,
"tags": "impulse-response, audio-processing, sweep"
} |
Thomas precession and Lorentz group | Question: I've recently learned about the Thomas precession. To summarize shallowly: Thomas precession takes place when you have a boost from a first system to a second system, and then from the second system another non-collinear boost to a third system. If you now look at the relation between the first and the third system, you get not just a different boost but also a rotation as part of the transformation. Mathematically we could say
$$
\Lambda_{2 \rightarrow 3} \Lambda_{1 \rightarrow 2} \neq \Lambda_{1 \rightarrow 3}
$$
where $\Lambda$ denotes a boost transformation and its subscript denotes from which to which system the transformation goes. It can be shown, rather, that
$$
\Lambda_{2 \rightarrow 3} \Lambda_{1 \rightarrow 2} = R \Lambda \tag{1}
$$
where $R$ denotes a rotation. This is all well and fine; the problem I have is that I've always believed Lorentz transformations $\Lambda$ form a (Lorentz) group.
But if relation (1) holds, this would seem to break the definition of a group; specifically, two elements of the group composed with each other would produce an object that is not part of the group, violating closure.
Could anyone please clear this up for me? Are perhaps rotations also part of the Lorentz group?
Links:
Thomas precession: https://en.wikipedia.org/wiki/Thomas_precession
Groups: https://mathworld.wolfram.com/Group.html
Answer: Rotations are part of the Lorentz group, which is true to its name and really is a group (this is not always the case in physics; e.g., the renormalization group is not a group at all).
In fact, a boost can be thought of as a rotation in the plane spanned by one spatial direction and the time axis, with an imaginary angle.
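This closure behaviour can be checked numerically. Below is a small sketch (my own illustration, not from the original answer) in 2+1 dimensions: composing two non-collinear boosts gives a matrix that still satisfies the Lorentz condition $\Lambda^T \eta \Lambda = \eta$, but is no longer symmetric, whereas a pure boost matrix always is; the non-symmetric part is exactly the Thomas/Wigner rotation.

```python
import numpy as np

def boost(vx, vy):
    """Pure Lorentz boost in 2+1 dimensions (coordinates t, x, y; c = 1)."""
    v2 = vx * vx + vy * vy
    if v2 == 0.0:
        return np.eye(3)
    g = 1.0 / np.sqrt(1.0 - v2)  # Lorentz factor gamma
    return np.array([
        [g,       -g * vx,                     -g * vy],
        [-g * vx, 1 + (g - 1) * vx * vx / v2,  (g - 1) * vx * vy / v2],
        [-g * vy, (g - 1) * vx * vy / v2,      1 + (g - 1) * vy * vy / v2],
    ])

eta = np.diag([1.0, -1.0, -1.0])         # Minkowski metric
L = boost(0.0, 0.5) @ boost(0.5, 0.0)    # boost along x, then a boost along y

print(np.allclose(L.T @ eta @ L, eta))   # True: still a Lorentz transformation
print(np.allclose(L, L.T))               # False: not symmetric, so not a pure boost
```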
Rotations form a subgroup of the Lorentz group, but boosts do not form such a subgroup, exactly due to Thomas precession (boosts along one specific axis do form a subgroup). | {
"domain": "physics.stackexchange",
"id": 91333,
"tags": "general-relativity, special-relativity, group-theory"
} |