| anchor | positive | source |
|---|---|---|
Why does iRobot not sell the Create in Europe? | Question: I'm trying to find a good beginners platform to use ROS with, and I came across the iRobot Create. To my surprise, they do not sell in Europe. Why is that?
Answer: I believe the Create was not sold in Europe because it is not RoHS compliant.
However, every Roomba can be hacked. You can communicate with the robot via the "Open Interface", sometimes called the "Serial Control Interface". There is lots of information about this on the web. See here for example: http://www.irobot.com/filelibrary/pdfs/hrd/create/Create%20Open%20Interface_v2.pdf
Note that you will need to either make or buy a cable to connect to the 7 pin mini-DIN port on the robot. Every Roomba has this. Sometimes it is under the faceplate though.
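As a loose illustration of what talking to that port involves, the sketch below frames Open Interface commands as raw bytes. The opcode values (128 = Start, 131 = Safe, 137 = Drive) and the "straight" radius are my reading of iRobot's OI spec and should be double-checked against the linked PDF before use:

```python
# Hypothetical sketch of framing iRobot Open Interface commands as raw
# bytes for the 7-pin mini-DIN serial link. Opcodes are assumptions taken
# from the OI spec; verify them against the linked PDF.
import struct

START, SAFE, DRIVE = 128, 131, 137  # assumed OI opcodes

def drive_command(velocity_mm_s: int, radius_mm: int) -> bytes:
    """DRIVE takes two signed 16-bit big-endian values: velocity and radius."""
    return struct.pack('>Bhh', DRIVE, velocity_mm_s, radius_mm)

# Enter Safe mode, then drive straight (the spec uses 0x8000 as the
# special "straight" radius, i.e. -32768 as a signed 16-bit value).
wake_up = bytes([START, SAFE])
forward = drive_command(200, -32768)
```

These byte strings would then be written to the serial port (for example with pyserial's `Serial.write`) at whatever baud rate your particular Roomba model expects.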
Most libraries you can find online communicate with the Roomba via the interface outlined above and do not use the "Command Module" that was unique to the Create. | {
"domain": "robotics.stackexchange",
"id": 359,
"tags": "mobile-robot, ros, irobot-create"
} |
Undecidable among these for turing machine | Question: Below are two questions I found in a Theory of Computation book but couldn't find the correct answers; can anyone please give the correct answers with explanations?
It is undecidable, whether
an arbitrary Turing machine (TM) has 15 states
an arbitrary TM halts after 10 steps
an arbitrary TM ever prints a specific letter
an arbitrary TM accepts a string w in 5 steps
Which one of the following is not decidable?
given a TM M, a string s and an integer k, M accepts s within k steps
equivalence of two given TMs
language accepted by a given DFSA (deterministic finite state automaton) is nonempty
language accepted by a CFG (context-free grammar) is nonempty
Update: In the first question I think 1.2 is right, because halting is undecidable for Turing machines, but I'm not sure whether the remaining options are decidable or not.
In the second question I think 2 is right, but I'm not sure about the decidability of non-emptiness for CFGs and DFSAs.
Answer:
(1.1) Is it undecidable whether an arbitrary Turing machine(TM) has 15 states?
No, this is a decidable problem. Given a TM in a suitable encoding, it is fairly straightforward to determine how many states the TM has. Consider any common encoding, or define a reasonable one yourself, and then describe an algorithm that answers the question using the encoding. Hint: you can make your algorithm really simple if you define the encoding such that the first part of the encoding is an encoded list of states that are part of the TM.
(1.2) Is it undecidable whether an arbitrary TM halts after 10 steps?
I assume what you mean by this is the following: is it undecidable whether an arbitrary TM halts after exactly 10 steps for some input? The answer is that, no, this is also decidable. Consider all $|\Sigma|^{10}$ possible contents of the first $10$ cells of the tape (within 10 steps the machine can only read those cells). For each of these configurations, execute 10 steps according to the TM's transition table (for nondeterministic TMs, this includes all possible paths of length 10). If one of the paths halts after exactly 10 steps, output yes; otherwise, output no.
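The bounded simulation in (1.2) can be sketched in Python for the deterministic case (a hypothetical encoding of my own: the transition table maps (state, symbol) to (state, symbol, move), a missing entry means "halt", and leaving the 10-cell window is treated as halting for simplicity):

```python
from itertools import product

def halts_in_exactly(tm, start, sigma, steps=10):
    """Return True iff the TM halts after exactly `steps` steps on some input.

    `tm` maps (state, symbol) -> (state, symbol, move); a missing entry
    means the machine halts. Only the first `steps` tape cells matter,
    since the head cannot read beyond them in `steps` steps; leaving the
    window is treated as halting here, for simplicity.
    """
    for cells in product(sigma, repeat=steps):
        tape, state, head = list(cells), start, 0
        for t in range(steps + 1):
            in_range = 0 <= head < len(tape)
            if not in_range or (state, tape[head]) not in tm:
                if t == steps:      # halted after exactly `steps` steps
                    return True
                break               # halted too early: try the next input
            state, tape[head], move = tm[(state, tape[head])]
            head += move
    return False
```

A nondeterministic version would branch on every applicable transition instead of following the single one; the search space stays finite, which is the whole point of the argument.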
(1.3) Is it undecidable whether an arbitrary TM ever prints a specific letter?
This one is actually undecidable. Suppose it were decidable. Take an arbitrary TM. Construct a new TM with a new alphabet symbol not in the alphabet of the original TM. Replace all transitions in the original TM which cause the machine to halt with transitions which cause the machine to halt and write this new symbol to the tape. By the assumption, the problem of whether the new TM ever prints this specific symbol is decidable; however, solving this problem would give us a way to solve the halting problem for the original TM, since the new machine prints the symbol if and only if the original one halts. Since the halting problem is undecidable, we have a contradiction; hence, this problem is undecidable.
(1.4) Is it undecidable whether an arbitrary TM accepts a string w in 5 steps?
This is clearly decidable. Given a TM and a string, write the string on the tape and execute five steps of the TM according to its transition table.
Which one of the following is not decidable?
given a TM M, a string s and an integer k, M accepts s within k steps
equivalence of two given TMs
language accepted by a given DFSA (deterministic finite state automaton) is nonempty
language accepted by a CFG (context-free grammar) is nonempty
(2.1.) is definitely decidable as per the argument in (1.4).
(2.2.) is ambiguous; if we mean syntactically equivalent, then this should be decidable given a suitable encoding. If it means semantically equivalent, i.e., they decide/accept the same language, then this is definitely undecidable.
(2.3.) is definitely decidable, since you can always minimize the DFA and see whether it's a single state with no accepting states.
(2.4.) is definitely decidable. Begin marking symbols (terminal/nonterminal) if they lead to a string of only terminal symbols. First, mark all terminal symbols. Next, mark nonterminals which have a production consisting only of terminal symbols. Iteratively, mark unmarked nonterminals which have a production consisting only of terminals or already-marked nonterminals. Continue until you complete an iteration without marking any new nonterminals. If the start symbol is marked, then it leads to a string of terminals, which is a string generated by the grammar, so its language is non-empty. Otherwise, the start symbol doesn't lead to a string of all terminals, so its language is empty.
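The marking procedure in (2.4) translates almost line for line into Python (a sketch with a hypothetical grammar encoding: anything that is not a key of `productions` is treated as a terminal):

```python
def cfg_nonempty(productions, start):
    """Marking algorithm for CFG non-emptiness.

    `productions` maps each nonterminal to a list of right-hand sides,
    each right-hand side being a list of symbols; a symbol counts as a
    terminal iff it is not a key of `productions`.
    """
    marked = set()
    changed = True
    while changed:
        changed = False
        for nt, rhss in productions.items():
            if nt in marked:
                continue
            for rhs in rhss:
                # Every symbol is a terminal or an already-marked nonterminal.
                if all(s not in productions or s in marked for s in rhs):
                    marked.add(nt)
                    changed = True
                    break
    return start in marked
```

Note that an empty right-hand side (an epsilon production) marks its nonterminal immediately, which is correct: the empty string is then in the language.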
By process of elimination, the answer to question (2) must be (2.2) and the interpretation given to "equivalent" must be "decides/accepts the same language." | {
"domain": "cs.stackexchange",
"id": 1802,
"tags": "formal-languages, turing-machines, undecidability"
} |
Why strong electric field leads to non-Ohmic behavior? | Question: Homogeneous conductors like silver, and semiconductors like pure germanium or germanium containing impurities, obey Ohm's law within some range of electric field values.
But if the field becomes too strong, there is a departure from Ohm's law in all cases. Why?
Answer: There are many different mechanisms causing non-ohmic charge transport in materials. Here are just a few examples:
In metals, the (steady-state) resistance typically increases with the applied voltage (or current), as a result of the Joule heating caused by the current. The reason for the higher resistance at high temperatures is usually the more frequent interactions of electrons with the vibrating lattice of metal ions. This is also often called phonon scattering.
In semiconductors, as the material gets hotter, the resistance can actually decrease. This is because in the intrinsic regime, the carrier concentration increases with temperature.
In semiconductors, as the E-field (which depends on the voltage) is increased beyond the point where the electron energies in between collisions exceed the energy required to excite high-energy lattice vibration modes (i.e. emit optical phonons), electrons undergo more lattice scattering, which increases the resistance. This effect is called velocity saturation.
In certain semiconductors such as some III-V's, which have accessible conduction band valleys that are higher in energy than the one normally populated in equilibrium, the electric field can provide this energy, moving electrons to these upper valleys. These valleys usually have a higher effective mass than at the conduction band edge, and hence a lower average group velocity. This causes the resistance to increase with an increasing E-field. This effect is more gradual than the previous one, so the average carrier velocity can actually have an overshoot that peaks at a certain E-field.
It can happen, often in insulators or intrinsic semiconductors, that the charge injected between two contacts results in a space-charge not neutralized by the material. The I-V relation in this case is given by the Mott-Gurney law. The resulting current is called space charge limited current, and varies superlinearly with voltage, corresponding to a static resistance that decreases with voltage.
There is also a host of interface-related effects resulting in non-ohmic behavior I won't even go into here, because I'm assuming you are asking about the resistance of a material and not a combination of materials. | {
"domain": "physics.stackexchange",
"id": 60974,
"tags": "electric-current, electrical-resistance, voltage, conductors, approximations"
} |
A Groovy Election | Question: Implementing the July 2015 Community Challenge seemed relevant considering it is election time.
I decided to go with the strategy pattern as this can be implemented in many different ways. I did my own implementation of this first, and then later also added a "PascalElection" strategy, as that code was translated and adapted from a Pascal implementation (not included for review here)
Overview of code:
ElectionStrategy: Interface for the Strategy Pattern
Election: Class to hold data about the votes and nominees. Also contains a static method to perform an election.
ElectionResult: (static class within Election) Represents the result of an election.
CandidateState: (static class within Election) Represents several states a candidate can be in (my strategy only uses three of them, the PascalElection strategy uses all of them).
SimonElection: My own implementation of the election strategy
Round: As the STV voting system is an iterative one, this represents a single 'round' in the voting. This can be used to plot a graph over the results. (The code to produce the graphs is not included in this post)
This project is also available on GitHub: Zomis/StackSTV
Example graphs showing election results:
My strategy:
The pascal strategy:
Code
ElectionStrategy:
interface ElectionStrategy {
Election.ElectionResult elect(Election election)
}
Election:
class Election {
final List<Candidate> candidates = new ArrayList<>()
final List<Vote> votes = new ArrayList<>()
int availablePositions
int maxChoices
private Election(int availablePositions) {
this.availablePositions = availablePositions
}
void addVote(Vote vote) {
this.votes << vote
this.maxChoices = Math.max(maxChoices, vote.preferences.length)
}
void addCandidate(String name) {
this.candidates.add(new Candidate(name: name))
}
double calculateQuota(double excess) {
(votes.size() - excess) / (availablePositions + 1)
}
static class ElectionResult {
List<Round> rounds
List<Candidate> candidateResults
List<Candidate> getCandidates(CandidateState state) {
candidateResults.stream()
.filter({it.state == state})
.collect(Collectors.toList())
}
}
ElectionResult elect(ElectionStrategy strategy) {
strategy.elect(this)
}
static enum CandidateState {
HOPEFUL, EXCLUDED, ALMOST, NEWLY_ELECTED, ELECTED
}
@ToString(includeNames = true, includePackage = false)
static class Candidate {
String name
double weighting = 1
double votes
CandidateState state = CandidateState.HOPEFUL
Candidate copy() {
new Candidate(name: name, weighting: weighting, votes: votes, state: state)
}
}
@ToString
static class Vote {
int numVotes
Candidate[] preferences
static Vote fromLine(String line, Election election) {
String[] data = line.split()
Vote vote = new Vote()
vote.numVotes = data[0] as int
int candidateVotes = data.length - 2
vote.preferences = new Candidate[candidateVotes]
for (int i = 0; i < vote.preferences.length; i++) {
int candidate = data[i + 1] as int
if (candidate > 0) {
vote.preferences[i] = election.candidates.get(candidate - 1)
}
}
vote
}
void distribute(Round round) {
double remaining = numVotes
int choiceIndex = 0
preferences.eachWithIndex { Candidate entry, int i ->
if (entry) {
double myScore = remaining * entry.weighting
entry.votes += myScore
remaining -= myScore
round.usedVotes[choiceIndex++] += myScore
}
}
round.excess += remaining
}
}
static final ElectionResult fromURL(URL url, ElectionStrategy strategy) {
BufferedReader reader = url.newReader()
String[] head = reader.readLine().split()
int candidates = head[0] as int
Election stv = new Election(head[1] as int)
for (int i = 0; i < candidates; i++) {
stv.addCandidate("Candidate $i") // use a temporary name at first. real names are at the end of the file
}
String line = reader.readLine();
while (line != '0') {
Vote vote = Vote.fromLine(line, stv)
stv.addVote(vote)
line = reader.readLine();
}
for (int i = 0; i < candidates; i++) {
String name = reader.readLine()
stv.candidates.get(i).name = name
}
stv.elect(strategy)
}
}
SimonElection:
class SimonElection implements ElectionStrategy {
@Override
Election.ElectionResult elect(Election election) {
List<Round> rounds = new ArrayList<>()
int electedCount = 0
int roundsCount = 0
double previousExcess = 0
while (electedCount < election.availablePositions) {
Round round = new Round(roundsCount, election.maxChoices)
rounds << round
double roundQuota = election.calculateQuota(previousExcess)
roundsCount++
round.quota = roundQuota
election.candidates*.votes = 0
election.votes*.distribute(round)
List<Election.Candidate> elected = election.candidates.stream()
.filter({candidate -> candidate.votes > roundQuota})
.collect(Collectors.toList())
elected.each {
if (it.state != Election.CandidateState.ELECTED) {
electedCount++
}
it.state = Election.CandidateState.ELECTED
it.weighting *= roundQuota / it.votes
}
if (elected.isEmpty()) {
Election.Candidate loser = election.candidates.stream()
.filter({it.state == Election.CandidateState.HOPEFUL})
.min(Comparator.comparingDouble({it.votes})).get()
loser.state = Election.CandidateState.EXCLUDED
loser.weighting = 0
}
round.candidates = election.candidates.collect {it.copy()}
previousExcess = round.excess
}
new Election.ElectionResult(rounds: rounds, candidateResults: election.candidates)
}
}
Round:
@ToString
class Round {
int round
List<Election.Candidate> candidates = new ArrayList<>()
double quota
double[] usedVotes
double excess
Round(int round, int maxChoices) {
this.round = round
this.usedVotes = new double[maxChoices]
}
}
Running the code
Tests are available on the GitHub repository. I am currently only running the code in a test, using election data from the most recent Stack Overflow election.
Primary Concerns
I'm mostly interested in the way I'm using Groovy, and what Groovy things I could do instead of, or in addition to(?), using Java 8. Should I use some Java stuff instead of some Groovy stuff, or should I use more Groovy instead of Java?
Any other comments welcome.
Answer: Being biased toward Groovy, I say do more Groovy stuff :)
There are a number of things you can do to make your code more Grooooooovy.
def is your friend
The def keyword makes it a cinch to declare variables and it makes your declarations easier on the eyes.
// Eeewwww
String s1 = 'hello'
double d1 = Math.floor(10.34)
ArrayList<Integer> l1 = new ArrayList<Integer>()
l1.add(1)
l1.add(2)
l1.add(3)
// Much better.
def s2 = 'hello'
def d2 = Math.floor(10.34)
def l2 = [1, 2, 3]
// It's all the same
assert s1 == s2 // == calls me.equals(other)
assert s1.is(s2) // me.is(other) is Java's reference equality
assert d1 == d2
assert l1 == l2
assert s1.class == s2.class
assert d1.class == d2.class
assert l1.class == l2.class
The code above also demonstrates that Groovy determines identity differently than Java. Also notice that primitives are auto-boxed. In fact, Groovy doesn't have primitives. Everything is an Object.
for loops are basically pointless...
In Groovy code for loops are rare because there are much better ways of accomplishing the same thing.
// This is the same...
for (int i = 0; i < candidates; i++) {
String name = reader.readLine()
stv.candidates.get(i).name = name
}
// ...as this, which uses Number.times(Closure)...
candidates.times {
String name = reader.readLine()
stv.candidates.get(it).name = name
}
// ...and as this, which uses Range.each(Closure).
(0..<candidates).each {
String name = reader.readLine()
stv.candidates.get(it).name = name
}
Not only are these constructs pleasant to work with, they eliminate the possibility of handling the incrementation incorrectly. If it can't be touched, it can't be broken.
...and so are Java Streams
Groovy enhances Java Collections in such a powerful way that it makes Java 8 Streams look like Fortran (as long as you don't need the laziness provided by Java 8 Streams).
Java 8 Streams
candidateResults.stream()
.filter({it.state == state})
.collect(Collectors.toList())
election.candidates.stream()
.filter({candidate -> candidate.votes > roundQuota})
.collect(Collectors.toList())
election.candidates.stream()
.filter({it.state == Election.CandidateState.HOPEFUL})
.min(Comparator.comparingDouble({it.votes})).get()
the Groovy way
candidateResults.findAll {it.state == state}
election.candidates
.findAll {candidate -> candidate.votes > roundQuota}
election.candidates
.findAll {it.state == Election.CandidateState.HOPEFUL}
.min {it.votes}
multiple classes per file
You can place multiple Groovy classes in the same *.groovy file. No more static inner classes :)
roll your own enhancements
Just as Groovy enhances Java through its GDK, you can enhance Java and Groovy classes through meta-programming. Here's an example of an enhancement I made to Election.fromURL():
before
/* Iterates through two sections of the file.
* The first section is handled with the 'while loop',
* and the second in the 'for loop'
*/
while (line != '0') {
Vote vote = Vote.fromLine(line, stv)
stv.addVote(vote)
line = reader.readLine()
}
for (int i = 0; i < candidates; i++) {
String name = reader.readLine()
stv.candidates.get(i).name = name
}
after
/* Also iterates through two sections of the file,
* but by using an Iterator.while(Closure condition) method
* added through meta-programming.
*
* The first section is handled with the 'while()/call() loop',
* and the second in the 'upto() loop'
*/
use(IteratorCategory) {
reader.iterator().while { line -> line != '0' }.call { line ->
stv << Vote.fromLine(line, stv)
}.upto(candidates) { line, i ->
stv.candidates.get(i).name = line
}
}
I created this construct because it makes it easier to see the intention of the code.
explanation
The added method Iterator.while(Closure) expects a Closure which when called returns a value that can be evaluated by the Groovy Truth. The value the Closure returns is used to determine whether to continue iterating or not.
The Iterator.while(Closure) method returns yet another Closure. This Closure initiates the iteration when called. It expects yet a third Closure, which is called with each element provided by the iterator, until the iteration aborts.
Finally, when the iteration completes, the Iterator is returned, ready for additional iterating.
Iterator.while(Closure) (and Iterator.upto(Integer, Closure)) are made possible by Groovy's meta-programming. In this case, implemented by the Groovy Category shown below:
package net.zomis.meta
/*
* Adds useful methods to Iterator.
*/
@groovy.lang.Category(Iterator)
class IteratorCategory {
/*
* Returns a Closure which iterates while the condition Closure
* evaluates to true. The returned Closure expects another Closure,
* an action closure, as its single argument.
* This 'action' Closure is called during each iteration and is passed
* the Iterator.next() value.
* When the iteration is complete, the Iterator is returned.
*
* Example usage:
* use(IteratorCategory) {
* def iter = reader.iterator().while { it != 'end' }.call { println it }
* }
*
* @param condition Closure to evaluate on each iteration.
* @return a Closure
*/
Closure 'while'(Closure condition) {
{Iterator iter, Closure closure ->
while(iter.hasNext()) {
def it = iter.next()
if(condition(it)) {
closure(it)
} else {
break
}
}
return iter
}.curry(this)
}
/*
* Similar to Number.upto(Number, Closure), executes the Closure
* UP TO a specified number of times. However, instead of returning
* the Closure's return value, it returns the Iterator where
* it left off.
*
* Example usage:
* use(IteratorCategory) {
* def iter = reader.iterator().upto(5) {it, i -> println "$i - $it" }
* }
*
* @param to number of times to iterate
* @param closure to execute. Called with Iterator.next() and index.
* @return Iterator
*/
Iterator upto(int to, Closure closure) {
int i = 0
while(this.hasNext() && i < to) {
closure this.next(), i
i++
}
return this
}
}
all done
I hope this helps you make your Groovy code more... Groooovy.
Check out most of StackSTV already Groovy-fied right here. | {
"domain": "codereview.stackexchange",
"id": 15337,
"tags": "community-challenge, groovy"
} |
Kendall rank correlation coefficient's p-value | Question: I'm trying to compute a p-value for a two tailed test following Wikipedia formula which indicates that:
one computes Z, and finds the cumulative probability for a standard
normal distribution at -|Z|. For a 2-tailed test, multiply that number
by two to obtain the p-value
I'm using this Rust library, which computes the Tau value, and then you can get the significance from this source code.
The problem is that this calculator (with default values) gives a 2-sided p-value = 0.0389842391014099, which is far from the p-value I'm getting. The steps I'm following are these:
Compute Tau
Compute the statistical significance Z with significance = kendall::significance(tau, x.len())
Get the CDF of the Gaussian distribution with sigma = 1 using this GSL library's function: cdf = gaussian_P(-significance.abs(), 1.0)
Multiply that value by 2
I'm getting a very different value: 0.011946505026920469. I don't understand what I'm missing. Perhaps it's a misunderstanding of the Gaussian distribution and its sigma parameter. Any kind of help would be really appreciated.
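As a cross-check of the steps above, the whole computation can be reproduced with only the Python standard library (an assumption: this is tau-a with no tie correction, whereas the Rust crate may apply tie corrections; the helper names are mine):

```python
# Pure-Python cross-check: tau, the normal-approximation Z, and the
# two-tailed p-value, using only the standard library.
from math import sqrt, erfc

def kendall_tau(x, y):
    # Tau-a: (concordant - discordant) / number of pairs, no tie handling.
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            a = (x[i] - x[j]) * (y[i] - y[j])
            s += (a > 0) - (a < 0)
    return s / (n * (n - 1) / 2)

def z_score(tau, n):
    # Normal approximation to tau's null distribution (the Z in the question).
    return 3 * tau * sqrt(n * (n - 1)) / sqrt(2 * (2 * n + 5))

def two_sided_p(tau, n):
    z = z_score(tau, n)
    # Standard-normal CDF at -|z|, doubled: 2 * Phi(-|z|) == erfc(|z| / sqrt(2)).
    return erfc(abs(z) / sqrt(2))
```

Also note that for small n an online calculator may use an exact permutation distribution rather than this normal approximation, which alone can explain part of a discrepancy.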
Answer: It looks like the issue is that you are using the wrong function to compute the cumulative distribution function (CDF) of the normal distribution. The function you are using, gaussian_P(), computes the CDF of the standard normal distribution, with mean 0 and standard deviation 1. However, the Z-score you are using (the output of kendall::significance()) is not necessarily normally distributed with mean 0 and standard deviation 1.
To compute the p-value for your two-tailed test, you need to use the CDF of the normal distribution with mean 0 and the same standard deviation as your Z-score. In the GSL library, this function is called gsl_cdf_gaussian_P(). You can use this function to compute the CDF of the normal distribution with the same standard deviation as your Z-score, and then multiply the result by 2 to get the p-value for your two-tailed test.
For example, if your Z-score has a standard deviation of 1.5, you could compute the p-value as follows:
let z = kendall::significance(tau, x.len());
let p_value = 2.0 * gsl_cdf_gaussian_P(-z.abs(), 1.5);
This should give you the correct p-value for your two-tailed tests. | {
"domain": "datascience.stackexchange",
"id": 11901,
"tags": "correlation, kendalls-tau-coefficient"
} |
The metric exterior of a massive object | Question: The only condition apart from perfect spherical symmetry that is required for the retrieval of the Schwarzschild metric $g_{ik}$ is actually ($R_{ik}$ being the Ricci tensor, the contraction of the curvature tensor):
$$R_{ik}=0$$
But the Schwarzschild metric is indiscriminately used also for the outside of a massive object like a star or planet. In that case the retrieval of the metric should be formulated together with a boundary condition that is given by components of the energy-momentum tensor $T_{ik}$ ($\kappa = \frac{8\pi G}{c^4}$, $G$ being the gravitational constant):
$$R_{ik}=0\,\,\,\text{for}\,\,\, r>r_{star}\,\,\,\text{and}\,\,\, R_{ik}\big|_{r_{star}} = \kappa \left(T_{ik} -\tfrac{1}{2}g_{ik}g^{mn}T_{mn}\right).$$
In the simplest case of a pure dust of mass density $\rho$ we would have $T_{00}=\rho c^2$ and $T_{ik}=0$ for $i$, $k \neq 0$.
Why is this boundary condition usually neglected? Is $\rho c^2$ too small (for dust this might indeed be the case) to have an appreciable effect on the metric, and under which conditions would that change, i.e. how massive must the object be to generate a boundary condition that cannot be neglected?
Answer: Are you familiar with the Interior Schwarzschild Metric? As far as I can tell it addresses all your queries. | {
"domain": "physics.stackexchange",
"id": 78799,
"tags": "general-relativity, metric-tensor, boundary-conditions, stars"
} |
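For reference, the object named in the answer above — the interior Schwarzschild metric for a static, uniform-density sphere of radius $R$ and Schwarzschild radius $r_s$ — takes the form below (quoted from memory of the standard textbook result, so worth double-checking against a GR text):

```latex
ds^2 = -\left(\tfrac{3}{2}\sqrt{1-\tfrac{r_s}{R}}
       -\tfrac{1}{2}\sqrt{1-\tfrac{r_s r^2}{R^3}}\,\right)^{2} c^2\, dt^2
       +\left(1-\tfrac{r_s r^2}{R^3}\right)^{-1} dr^2
       + r^2 \left(d\theta^2+\sin^2\theta\, d\varphi^2\right)
```

At $r = R$ both metric functions reduce to $1 - r_s/R$, matching the exterior Schwarzschild solution; this junction condition at the surface is how the interior source enters, rather than through an explicit boundary value of $R_{ik}$.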
Why do most laser beams have a Gaussian intensity profile? | Question: Why do most laser beams have a Gaussian intensity profile?
Are there other types of profiles?
Can we say that the optical pulse is generated by using a Gaussian function?
Answer: While most lasers generate Gaussian beams, for reasons well outlined by Massimo Ortolano in his answer, this is not the only possibility.
Two other kinds of laser profiles that have applications in optical laboratories are, for example, Hermite-Gaussian and Laguerre-Gaussian beams.
The latter are in particular very interesting, as laser light with a Laguerre-Gaussian amplitude distribution happens to have a well-defined orbital angular momentum, as first observed by L. Allen in 1992.
Quoting from Allen's paper: The transverse amplitude distribution of laser light is usually described in terms of a product of Hermite polynomials $H_n(x)H_m(y)$ and associated with TEM$_{nmq}$ modes. Laguerre polynomial distributions of amplitude, TEM$_{plq}$ modes, are also possible but occur less often in actual lasers.
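For concreteness, the fundamental (TEM$_{00}$) Gaussian profile that most lasers emit is simply $I(r) = I_0\, e^{-2r^2/w^2}$, which can be written down directly:

```python
# Radial intensity of a fundamental Gaussian (TEM00) beam with waist w:
#   I(r) = I0 * exp(-2 r^2 / w^2)
from math import exp

def gaussian_intensity(r: float, w: float, I0: float = 1.0) -> float:
    return I0 * exp(-2.0 * r ** 2 / w ** 2)
```

At $r = w$ the intensity has fallen to $e^{-2} \approx 13.5\%$ of its peak, which is the usual $1/e^2$ definition of the beam radius.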
Even when the laser itself can only generate a fundamental Gaussian profile, the spatial profile of the light is (relatively) easily modified, for example with SLMs, and made into any profile one wants. | {
"domain": "physics.stackexchange",
"id": 33580,
"tags": "optics, laser"
} |
The dangling chain - help with derivation | Question: I am trying to understand the derivation of the normal modes of a dangling chain given here [pdf].
The author considers a chain with density per-unit-length $\rho$ hanging from a fixed point and adopts a coordinate system with the $x$ axis pointing vertically upwards from the chain's loose end in its equilibrium position and the $u$ axis pointing to the right. The horizontal displacements are considered to be small so that distances along the chain can be approximated as distances along $x$.
The tension in the chain at position $x$ is $w(x) = \rho g x$ and the accelerating force is due to the difference in the horizontal components of the tension at the ends of a small interval of chain, $\Delta x$.
If the segment at $x$ is displaced from the vertical by an angle $\alpha$, the horizontal component of the tension is $F(x) = w(x)\sin\alpha \approx w u_x$. I think I get this – if the notation $u_x$ means $\mathrm{d}u/\mathrm{d}x$. The author then says:
The difference in force between the points on the change[sic – I think "chain" is meant] at $x$ and $x+\Delta x$ is thus $\Delta F = \Delta x(w u_x)_x$.
This bit I don't get. How does $\Delta F = w(x+\Delta x) u_x - w(x)u_x$ become the above expression?
Answer: Using Taylor expansion,
$$w(x+\Delta x)=w(x)+w_x(x) \Delta x+O(\Delta x^2)$$
$$u_x(x+\Delta x)=u_x(x)+u_{xx}(x) \Delta x+O(\Delta x^2)$$
Replace the above in the following,
$$w(x+\Delta x) u_x(x+\Delta x) - w(x) u_x(x) \approx \Delta x w_x(x) u_x(x) + \Delta x w(x) u_{xx}(x) = \Delta x (w u_x)_x$$ | {
"domain": "physics.stackexchange",
"id": 47120,
"tags": "homework-and-exercises, newtonian-mechanics, forces, string, continuum-mechanics"
} |
Why can we allow the speed of light being infinite in case of Surface Plasmons? | Question: I have a problem understanding these sentences:
We have indicated in the opening paragraph of the Introduction that surface plasmon polaritons are solutions
of Maxwell’s equations in which the effects of retardation—the finiteness of the speed of light—are
taken into account. An important subclass of surface plasmon polaritons are surface plasmons. These can
be viewed as the limiting case of surface plasmon polaritons when the speed of light is allowed to become
infinitely large.
Nano-optics of surface plasmon polaritons (1.1.2) - Anatoly Zayats et.al.
How can we allow c be infinite? Why is it correct in case of surface plasmons?
The article is available online, easy to find with Google Scholar
Answer: To get a better understanding of what is going on, take a look at the plot below, also linked here: http://en.wikipedia.org/wiki/File:Dispersion_Relationship.gif
What the author meant by "letting the speed of light go to infinity" is that we let the slope of the blue line become infinite. In that case, the solid red line would not curve as shown below, but would look much more like a step function. The same step-function appearance can also be obtained by taking asymptotically large values of $k_x$, i.e. if one plots much larger values of $k_x$ below (instead of 0-3, one plots from 0-1000 for instance).
What I am trying to say is that one should not really think of taking the speed of light to be infinite (which is physically unreasonable, of course), but of taking the large-$k_x$ limit, in which you can think of the surface plasmon polariton as consisting solely of a surface plasmon (i.e. there is no mixing between light and the surface plasmon at these large values of $k_x$).
This is precisely why inelastic electron scattering, which usually has a poor momentum resolution (at least compared to these effects), is unable to probe the surface plasmon polariton limit. It averages over a large portion of the $k_x$ axis above and effectively measures only the surface plasmon. Light, on the other hand, cannot couple to the surface plasmon polariton directly, as it doesn't have enough momentum to transfer to the SPP. Therefore, one has to come up with clever ways, for example making a grating out of the material one wishes to probe, to get light to couple to the SPP. | {
"domain": "physics.stackexchange",
"id": 15226,
"tags": "electromagnetism, solid-state-physics, quantum-electrodynamics, classical-electrodynamics, plasmon"
} |
Multiunit Auction | Question: Consider a multiunit auction (as defined in Introduction to Mechanism
Design by Noam Nisan), where $k$ identical units of some good are sold in an auction (with $k < n$). In the simple case each bidder is interested in only a single unit. In this case $A = \{S\text{-wins} \mid S \subset I, |S| = k\}$, and a bidder's valuation $v_i$ gives some fixed value $v^*$ if $i$ gets an item, i.e. $v_i(S) = v^*$ if $i \in S$ and $v_i(S) = 0$ otherwise.
Let us consider possible solutions:
The first incentive-compatible solution looks like a generalization of the Vickrey auction. Maximizing social welfare means allocating the items to the $k$ highest bidders, and in the VCG mechanism with the pivot rule, each of them should pay the $(k+1)$-st highest offered price. It is obviously incentive compatible, as a generalization of the Vickrey auction.
But what if the bidder with the highest bid pays the price equal to the second-highest bid, the bidder with the second-highest bid pays the price equal to the third-highest bid, and so on? Obviously the winners' utilities will be much lower than in the first solution, but the more interesting question is whether this second solution is incentive compatible.
Answer: This is not quite the same as the usual generalized-second-price-auction setting because you are assuming the items are identical, whereas in a GSP setting there is an order on the items (slot A is better than slot B is better than ...).
But anyway, your auction is not dominant-strategy-incentive compatible. Suppose the bidders' true valuations are $v_1 > v_2 > \dots > v_k > \dots > v_n$, and suppose every bidder reports her valuation truthfully.
The first bidder wins an item and pays $v_2$. If she had bid $v_k + \epsilon$, she would have still won an item, but only paid $v_k$. So she would have preferred to lie and bid $v_k + \epsilon$. | {
"domain": "cstheory.stackexchange",
"id": 2102,
"tags": "gt.game-theory"
} |
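The deviation described in the auction answer above can be verified with a small simulation (my own encoding; it assumes quasilinear utility, i.e. value minus payment, and ignores ties):

```python
# Toy check that the descending-payment multiunit auction is not truthful.
def utility(bids, k, bidder, value):
    """Utility of `bidder`: the k highest bids win, and the winner
    ranked j (0-based) pays the next bid down, i.e. the bid ranked j+1."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    for rank, i in enumerate(order[:k]):
        if i == bidder:
            return value - bids[order[rank + 1]]  # pay the next bid down
    return 0.0

values = [10, 8, 6, 4, 2]   # true values v_1 > ... > v_n; selling k = 3 units
truthful  = utility(values, 3, 0, values[0])              # bid 10: pay v_2 = 8
deviation = utility([6.01, 8, 6, 4, 2], 3, 0, values[0])  # bid v_k + eps: pay v_k = 6
```

Here `deviation > truthful` (utility 4 versus 2), so truthful bidding is not a dominant strategy, matching the answer's argument (which implicitly needs $k \ge 3$ so that $v_k < v_2$).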
Why is the Pegasus launched from a subsonic airplane? | Question: Considering that the reason typically given for launching spacecraft from sea level as opposed to mountains is that the limiting factor is velocity, not altitude, why isn't the Pegasus rocket launched from a supersonic airplane?
At ~50,000 lbs, a B-1 could take two of them to Mach 2, even if the plane had to be modified as is the case with the shuttle carriers. Other planes could do, especially the easily-available F-111 or some old Russian hardware such as the Tu-22 or Tu-160. Considering that NASA now launches humans on Russian hardware this is not entirely infeasible.
Answer: The "Stargazer" L-1011 Lockheed TriStar aircraft also serves as a mobile lab in addition to being a launch vehicle. It is large enough to support future launch vehicles and also supplies a lot of in-flight testing/monitoring. All that equipment takes space which you wouldn't have on a small craft like an F-111, etc. Separation at lower speeds is also safer for the flight crew.
Since the launch vehicle is used over and over and does the bulk of the fuel burning in the overall mission profile, for practical reasons it was better to let the rocket provide the extra delta-v. The aircraft only gets it to about 3% (600 mph) of orbital velocity (20,000 mph), and going supersonic (say Mach 2.5) would only get it to about 8-10% (1900 mph). However, over all the time the aircraft would be in flight across the many launches, it would be burning a lot more fuel for that small benefit, even if they could fit all their supporting test hardware (and no in-flight support crew) onto something smaller. | {
"domain": "physics.stackexchange",
"id": 8410,
"tags": "orbital-motion, rocket-science, popular-science"
} |
Normalizing the solution to free particle Schrödinger equation | Question: I have the one dimensional free particle Schrödinger equation
$$i\hbar \frac{\partial}{\partial t} \Psi (x,t) = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \Psi (x,t), \tag{1}$$
with general solution
$$\psi(x,t) = A e^{i(kx-\omega t)} + B e^{i(-kx-\omega t)}. \tag{2}$$
I'd expect that the solution is normalized:
$$ \int_{-\infty}^\infty |\psi(x,t)|^2 dx = 1. \tag{3}$$
But
$$|\psi(x,t)|^2 = \psi(x,t)\psi^* (x,t) = A^2 + B^2 + AB ( e^{2ikx} + e^{-2ikx} ), \tag{4}$$
and the integral diverges:
$$ \int_{-\infty}^\infty |\psi(x,t)|^2 dx = \frac{AB}{2ik} (e^{2ikx} - e^{-2ikx})\biggl|_{-\infty}^\infty + (A^2+B^2) x\biggl|_{-\infty}^\infty. \tag{5}$$
What is the reason for this? Can it be corrected?
Answer: Schroedinger's equations may have both normalizable and non-normalizable solutions. The function
$$
\psi_k(x,t) = A e^{i(kx-\omega t)} + B e^{i(-kx-\omega t)}. \tag{2}
$$
is a solution of the free-particle Schroedinger equation for any real $k$, with $\omega = \hbar k^2/2m$.
As a rule, if the equation has a class of solutions parameterized by continuous parameter ($k$), these solutions are not normalizable to infinite space.
One purpose of the wave function is to calculate the probability of a configuration via the Born rule; the probability that the particle described by $\psi$ has $x$ in the interval $(a,b)$ of the line is
$$
\int_a^b|\psi(x)|^2\,dx.
$$
For this to work, $\psi$ has to be such that it has finite integral
$$
\int_S |\psi(x)|^2\,dx
$$
where $S$ is region where it does not vanish.
Plane wave (or sum of such waves) cannot be normalized for $S=\mathbb R$ (or higher-dimensional versions of whole infinite space), but it can be normalized for finite intervals (or regions of configuration space which similarly have finite volume).
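A quick numerical check of this (with assumed values $A = B = 1$, $k = 3$): the norm over a box of length $L$ is finite for every finite $L$, but it grows linearly with $L$, so no choice of constants normalizes the plane wave on the whole line.

```python
import numpy as np

A, B, k = 1.0, 1.0, 3.0
norms = {}
for L in (10.0, 100.0, 1000.0):
    x, dx = np.linspace(-L / 2, L / 2, 200001, retstep=True)
    psi = A * np.exp(1j * k * x) + B * np.exp(-1j * k * x)
    norms[L] = np.sum(np.abs(psi) ** 2) * dx   # Riemann sum for the box norm
    print(L, norms[L])   # ~ (A^2 + B^2) * L plus a bounded oscillating term
```

The printed norms track $2L$ up to a bounded cross term of size at most $2AB/k$, illustrating the linear divergence in equation (5).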
People deal with this situation in several ways:
- instead of $\mathbb R$, they describe the system by functions that are limited to an imaginary finitely-sized box, so all regular functions are normalizable (delta distributions will remain non-normalizable even there). The exact size of the box is assumed to be very large but it is almost never fixed to a definite value, because it is assumed that as the box gets expanded to a greater size, its influence on the result becomes negligible;
- retain infinite space, but use only normalizable functions to calculate probability (never use a non-normalizable function with the Born rule);
- retain infinite space, retain plane waves, use the Dirac formalism and be aware of its drawbacks. Never work with $\langle x|x\rangle$ as with something sensible, do not think $|x\rangle$, $|p\rangle$ represent physical states (people call them states to simplify the language), mind that $\langle x|$ is a linear functional that is introduced to act on some ket, not a replacement notation for $\psi^*$. | {
"domain": "physics.stackexchange",
"id": 19841,
"tags": "quantum-mechanics, homework-and-exercises, wavefunction, schroedinger-equation, normalization"
} |
After choosing top models in classification? Can I apply it on the rest of my dataset | Question: I am working with a corpus that has 5 datasets in product reviews (A, B, C, D and E), mine is a text classification problem and I need to find the best 5 top models in terms of classification performance (F1).
I started with collection A (the mp3 reviews), because it has the largest number of documents (900: yes, 750: no).
I trained on the data using 10-fold CV with different algorithms and pre-processing tasks, and got the weighted results for all experiments.
I chose the top 5 models and I want to apply them to the rest of the corpus: B, C, D and E (other products' reviews).
My plan is to run 10-fold CV and get the results for all the collections and compute the micro-average for precision, recall and F1.
Is this the right way to choose a model for a large collection?
Answer: This is an interesting question.
In general the split of data is about the underlying distribution. That means you split a dataset into train-test sets in a way that a random train-test split does not dramatically affect the distribution. But splitting based on topics is not random!
Especially in your case, you are talking about text, in which the distribution is super sensitive to the domain, i.e. if you collect the commentary of 1000 football games and the narrations of 1000 documentary movies about wild life, you will see that they are literally two different things. The conceptual difference between products most likely affects the distribution of words/terms/phrases, therefore the model trained on reviews of mp3s MUST NOT be validated on reviews of football shoes!
In your case, I would say the train-test split (CV folds) should be done on whole data together so that you maintain the original topology of word distribution (topology here is not a Math term but I simply mean the shape of distribution).
In this case if you do Topic Modeling on the whole training data you simply see 5 different product topics. Or if you use word2vec or doc2vec you hopefully see 5 different clusters. Then you can run your models in this setting for evaluation.
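As a sketch of that setup (the product labels and review counts below are made up for illustration), a stratified split keeps every product's share of reviews the same in the train and test folds, so the pooled word distribution is preserved in both:

```python
import random
from collections import Counter, defaultdict

def stratified_split(samples, labels, test_frac=0.2, seed=0):
    """Split so each label keeps the same proportion in both folds."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for s, y in zip(samples, labels):
        by_label[y].append(s)
    train, test = [], []
    for y, group in by_label.items():
        rng.shuffle(group)
        cut = int(len(group) * test_frac)
        test += [(s, y) for s in group[:cut]]
        train += [(s, y) for s in group[cut:]]
    return train, test

# 5 hypothetical products (A..E) with different review counts
labels = ["A"] * 900 + ["B"] * 400 + ["C"] * 300 + ["D"] * 200 + ["E"] * 100
samples = list(range(len(labels)))
train, test = stratified_split(samples, labels)
print(Counter(y for _, y in test))   # 20% of each product: 180/80/60/40/20
```

The same idea is what `StratifiedKFold` does in libraries like scikit-learn; this stdlib version just makes the mechanism explicit.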
NOTE:
If the size of classes are very different you need to come up with some solution for small classes. If it was the case, just drop me a line in comments and we can discuss solutions.
Good Luck :) | {
"domain": "datascience.stackexchange",
"id": 2325,
"tags": "classification, text-mining, model-selection"
} |
Unlabelled pins on this L293D dual DC motor driver | Question: I've got this board from a local store. It's an L293D based dual DC motor driver. All the pins are labelled except the 5 on the bottom as shown in the picture below. Couldn't find any datasheet or manual for using this board.
I've connected the board's Vcc to Arduino's 5V and the board's ground to Arduino's GND. Then put IN1 to HIGH and IN2 to LOW, and got a voltage of 4.3 V across motor A's terminals.
Now what I want to do is use an external power source (11.1V battery) for the motor terminals, NOT the Arduino's 5V.
Are there any datasheet/manual for this board?
How do I use external power source with this board?
What are the pin-out/labels for those 5 pins annotated with a red box in the picture above?
Answer: A Google Image search on L293D board suggests that this is a widely available board manufactured by (or at least designed by) Deek-Robot.
The 5 pin connector mostly duplicates pins on the 6 pin green terminal block:
I suspect it is mostly useful for a quick motor connection when testing, whereas the green connector is used for a final motor connection.
Note that the Deek-Robot product page includes a link to the (very simple) circuit diagram and a link to the L293D datasheet.
Also, note that this chip can get very hot when pushed close to its limits. 4 of the 16 pins are designed to be heat sinks to the ground plane, but it looks like this board doesn't have a very good thermal connection, so you may want to consider attaching a chip heatsink to the top side of the chip with thermal glue if it gets too hot. I've even known people to solder the heatsink to the chip via the heatsink pins, as the plastic chip packaging is quite thermally insulating. | {
"domain": "robotics.stackexchange",
"id": 1993,
"tags": "arduino, motor, driver, identification"
} |
Can cancer cells in the same person, organ, and origin have different DNA? | Question: Is it possible for cells from the same tumor to have different genetic material, and if so, to what degree is it possible (how fast do they mutate) ?
Answer: Cancer cells and normal cells differ at the genetic level, but they share the same genetic background, so they do not have different DNA in the sense of two different people. They have to be different, since cancer cells have to accumulate mutations in a number of genes to become a cancer cell which can survive and will not be directed into apoptosis. These are genes which control the cell cycle, certain growth factors, tumor suppressor genes, cell signalling and so on. In all these genes there need to be either activating or deactivating mutations present. Most of these mutations are point mutations (single nucleotide polymorphisms, SNPs), where only one base is altered to achieve a mutation. An example of such a mutation would be the mutation in the B-RAF kinase, which is involved in signaling and activating genes, where a point mutation exchanges valine 600 for glutamic acid (V600E, for reference see here). These are relatively small differences between these two cells. The Cancer Genome Project aims to sequence cells from a cancer and also normal cells from the same individual and then do a comparison. Later-stage tumors often tend toward genetic instabilities where complete regions of the genome are duplicated, inverted or deleted; see here for a review.
A more basic overview is given by the Ebook "Essentials of Cell Biology" by the Nature Group, which has an overview of cancer cells.
Regarding the mutation rate, this is hard to estimate. There are a few papers available here, but they focus on the mutation rate in germ cells, so these are mutations that are transmitted to the next generation. The estimates there are between 70 and 100 mutations per generation per individual, depending on the research method. You can find a nicely explained blog article here, which gives a number of original references, too.
The mutation rate in cancer is a different thing, since this depends on which genes are mutated. The mutation rate for cancer has to be higher for the cancer cells to be able to collect the necessary transformations into a cancer cell, but these mutations seem to occur only in a few hotspot regions of the genome (further references can be found in this article called "The causes and consequences of genetic heterogeneity in cancer evolution" and in this one called "Emerging patterns of somatic mutations in cancer"). There are two articles called "The mutation rate and cancer" which are interesting in this context. They can be found here and here. As always, if you have problems with getting the articles, let me know, I can help there. | {
"domain": "biology.stackexchange",
"id": 7709,
"tags": "cancer, mutations, dna-damage"
} |
Quantum Number of a Tennis Ball | Question: A tennis player has a tennis ball container with a single ball in it (it normally holds three). He shakes the tennis ball horizontally back and forth, so that the ball bounces between the two ends. We model the tennis ball as a quantum particle in a box.
The questions: what is the quantum number n for this ball? If the ball were to absorb a photon and jump to the next energy level, what should the energy (in eV) of that photon be?
For both of these questions, I am confused about how to apply quantum mechanical principles to the tennis ball. For the former, I suppose that a quantum number of n would make sense...if we model it as a particle in the box, the probability distribution across the container would match that of a particle at the n=2 energy level (remember--we are moving back and forth, and therefore the particle would be most likely to be at one of the ends and not in the middle). Would this be the correct reasoning? Is there something else I'm missing? For the latter question, I would use
$$E_n=\frac{n^2\hbar^2\pi^2}{2ma^2}$$
where m is the mass of the tennis ball and a is the length of our container. Say that we are now in the $n=2$ energy level. To go to $n=3$, we have to apply an energy of $\Delta E=E_3-E_2$ to make that jump. Is this the correct procedure for both of these questions? I'm just having a hard time applying quantum mechanical thinking to these macroscopic objects.
Thank you in advance.
Answer: You are right that the question is making a very crude model of a tennis ball. However, it does capture some important qualitative features of the classical limit that are worth understanding.
The intuition that you are supposed to get is that the classical limit corresponds to large $n$. The key thing to look at is $\Delta E_n/E_n$, the fractional difference in energy between two adjacent energy levels:
\begin{equation}
\frac{\Delta E_n}{E_n} = \frac{(n+1)^2-n^2}{n^2} = \frac{2n +1}{n^2}
\end{equation}
When $n$ is small, the jump to the next energy level is relatively large, and you notice that the tennis ball is quantum mechanical. The tennis ball is not free to move freely because the energy levels of this bound system are discrete, and it can't bounce around however it likes.
When $n$ is huge, the jump to the next energy level is very small (it scales like $2/n$). In that case you can approximate the energy levels as continuous, and you don't notice quantum behavior at all.
Based on all of this, you should notice that $n=2$ is probably not a good guess for the energy level of a tennis ball.
Indeed, taking $a=10 {\rm cm}$ and $m=10 {\rm g}$, I find
\begin{equation}
E_1 = \frac{\hbar^2 \pi^2}{2 m a^2} \approx 10^{-64} {\rm J}
\end{equation}
which is muuuuuch less than the average kinetic energy of a tennis ball! And $E_2$ is only a factor of 4 bigger than this, which is really no better
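To see just how large the actual $n$ is, here is a quick back-of-the-envelope computation (the shake speed of 1 m/s is an assumed number, not from the problem):

```python
import math

hbar = 1.054571817e-34            # J*s
m, a, v = 0.010, 0.10, 1.0        # 10 g ball, 10 cm box, ~1 m/s shake
E1 = hbar**2 * math.pi**2 / (2 * m * a**2)   # ground-state energy
E = 0.5 * m * v**2                # classical kinetic energy of the ball
n = math.sqrt(E / E1)             # level with E_n = n^2 * E_1 equal to E
print(E1, n)                      # E1 ~ 5e-64 J, n ~ 3e30
```

An $n$ of order $10^{30}$ is exactly the "huge $n$" classical regime described above, where the fractional level spacing $2/n$ is utterly unobservable.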
Some parting thoughts:
(1) If it bothers you that a tennis ball is a system with many internal degrees of freedom, but here we are modeling it as a single quantum particle with no structure, well congrats because that is definitely a very crude model. However we can say we are only looking at the center of mass of the tennis ball: quantum mechanics allows us to separate out the center of mass for special treatment in a similar way as occurs in classical mechanics.
(2) I said the limit $n\rightarrow \infty$ is the classical limit. We can also phrase it as $\hbar\rightarrow 0$. To see this, note that $E_1\rightarrow 0$ as $\hbar\rightarrow 0$, so any particle with finite energy must have $n\rightarrow \infty$ to compensate. | {
"domain": "physics.stackexchange",
"id": 9521,
"tags": "quantum-mechanics, homework-and-exercises"
} |
Data processing inequality for interaction information | Question: The interaction information is defined as $I(X;Y)-I(X;Y|Z)$. Let $Z-(X, Y) -(X', Y')$ be a Markov chain. Is there an inequality similar to the data processing inequality, relating $I(X';Y')-I(X';Y'|Z)$ to $I(X;Y)-I(X;Y|Z)$? Thanks in advance.
Answer: Since the interaction information can be either negative or positive, and a Markov chain that erases everything can be used to take the interaction information to $0$, without further conditions the answer is "no". | {
"domain": "cstheory.stackexchange",
"id": 5046,
"tags": "it.information-theory"
} |
Calculating start time and end time of jobs in a Dataproc cluster | Question: I have the below function get_status_time which calculates the start time and the end time of the spark job which has already completed its run (status could be either fail or pass).
It's working, but the function is too complex; I want to fine-tune it to reduce cognitive complexity.
def run_sys_command(cmd):
    try:
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True,
                                shell=True)
        s_output, s_err = proc.communicate()
        s_return = proc.returncode
        return s_return, s_output, s_err
    except Exception as e:
        logger.exception(e, stack_info=True)
def get_status_time(app_name):
    """returns the start and end time of the spark job running on dataproc cluster
    Args:
        app_name (str): application name
        region (str): region name
    Returns:
        list: [status_value, start_time, end_time]
    """
    try:
        end_time = get_today_dt()
        logger.info(f"the end_time is {end_time}")
        app_id = app_name
        cmd = "gcloud dataproc jobs describe {} --region={}".format(app_id, region)
        (ret, out, err) = run_sys_command(cmd)
        logger.info(f"return code :{ret} , out :{out} , err :{err}")
        split_out = out.split("\n")
        logger.info(f"the value of split_out is {split_out}")
        logical_bool, status_value, start_time = False, "UNACCESSED", ""
        try:
            matches = split_out.index(" state: ERROR")
            print(f"matches are {matches}")
        except Exception as e:
            matches = 0
        if matches == 0:
            # Grab status
            for line in split_out:
                if logical_bool == False:
                    if "status:" in line:
                        logical_bool = True
                elif logical_bool == True:
                    status_value = line
                    break
        else:
            status_value = "FAILED"
        # Grab start_time
        logical_bool = False
        for line in split_out:
            if logical_bool == False:
                if "state: RUNNING" in line:
                    logical_bool = True
            elif logical_bool == True:
                start_time = line.replace("stateStartTime:", "").strip(" `'\n")
                logger.info(f"START TIME AFTER STRIP: {start_time}")
                start_time = datetime.strptime(start_time, "%Y-%m-%dT%H:%M:%S.%fZ").strftime("%Y-%m-%d %H:%M:%S")
                break
        status_value = status_value.replace("state:", "").strip()
        if status_value == "DONE":
            status_value = "SUCCEEDED"
        return [status_value, start_time, end_time]
    except Exception as e:
        logger.error(e, stack_info=True, exc_info=True)

if __name__ == "__main__":
    get_status_time('data-pipeline','us-east4')
Answer: Factor repeated line-handling logic out to a utility function. Most of the
complexity in your current implementation stems from the need to (1) examine a
list of lines, (2) find a line that meets some condition, and (3) grab the next
line. You need to do that in two places, and both times you try to achieve it
within the confines of a regular for-loop using a boolean flag variable to
manage state. Any time you find yourself doing something moderately complex
more than once, consider writing a function. Even if the function mimicked your
current approach, simply factoring out that behavior would be a noteworthy
improvement. But I think there's a somewhat more intuitive way to implement the
behavior using a while-true loop and Python's next() function:
def get_line_after(lines, predicate):
    it = iter(lines)
    while True:
        line = next(it, None)
        if line is None:
            return None
        elif predicate(line):
            return next(it, None)
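A quick self-contained check of this helper on some hypothetical gcloud-style output (the function is repeated here so the snippet runs on its own):

```python
def get_line_after(lines, predicate):
    it = iter(lines)
    while True:
        line = next(it, None)
        if line is None:
            return None
        elif predicate(line):
            return next(it, None)

lines = ["placement:", "  clusterName: demo", "status:", "  state: DONE"]
print(get_line_after(lines, lambda l: "status:" in l))   # "  state: DONE"
print(get_line_after(lines, lambda l: "nope" in l))      # None
```

Note the two `None` cases: no line matches the predicate, or the match is the last line and has nothing after it.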
Factor grubby parsing details out to utility functions. The other
complexity in the current implementation
involves the parsing of
information from the output of a subprocess call. You can simplify the
primary function by shifting those annoying details elsewhere. This strategy
doesn't really reduce the amount of code (it increases it slightly), but it
increases clarity in two ways, detailed in the ensuing points.
The primary function becomes small and clear. It acquires a routine,
step-by-step quality. Its job is to delegate tasks to others and to log the
results. A few other notes: (1) the region variable was undefined in your
code, so I've added it to the function signature here; (2) for brevity here,
I've omitted logging calls; (3) restrict try-except handling to the things that
can fail beyond your control (parsing code typically does not meet that test);
(4) your code is unclear about how to respond to a failure of the subprocess
call; and (5) because we don't have your data, we can't run the code, so there
might be typos or errors in my code suggestions.
def get_status_time(app_name, region):
    end_time = get_today_dt()
    cmd = 'gcloud dataproc jobs describe {} --region={}'.format(app_name, region)
    try:
        ret, out, err = run_sys_command(cmd)
    except Exception as e:
        # Return, raise, or have run_sys_command() log its own exception
        # and then return data, letting callers decide what to do rather
        # than requiring them to handle exceptions as well.
        return
    lines = out.split('\n')
    status_value = get_status_value(lines)
    start_time = get_start_time(lines)
    return [status_value, start_time, end_time]
The utility functions become sharply focused. Although the utility
functions are still a bit tedious, at least they are focused on very narrow
parts of the problem. They are also easy to unit-test and debug in isolation
from the other machinery of the program. Finally, because of their narrow
focus, they tend to require the spawning of fewer intermediate variable names:
instead, their job is just to return an answer. Under
most circumstances, I would not do any logging in these
functions. An additional improvement you
could make is to convert some of the magic values (eg, the status values and
datetime formats) into named constants.
def get_status_value(lines):
    if ' state: ERROR' in lines:
        return 'FAILED'
    else:
        predicate = lambda line: 'status:' in line
        line = get_line_after(lines, predicate)
        if line is None:
            return 'UNACCESSED'
        else:
            sval = line.replace('state:', '').strip()
            return 'SUCCEEDED' if sval == 'DONE' else sval
def get_start_time(lines):
    predicate = lambda line: 'state: RUNNING' in line
    line = get_line_after(lines, predicate)
    if line is None:
        return ''
    else:
        line = line.replace('stateStartTime:', "").strip(" `'\n")
        dt = datetime.strptime(line, '%Y-%m-%dT%H:%M:%S.%fZ')
        return dt.strftime('%Y-%m-%d %H:%M:%S') | {
"domain": "codereview.stackexchange",
"id": 42975,
"tags": "python, python-3.x, google-bigquery"
} |
Pressure at the outlet of a container full of water | Question:
Find the speed of the water coming out of the container in the given figure when pressure at $P_0$ is $1\ \ \mathrm{atm}$.
Solution in the book :-
By equation of continuity,
$Av_0 = av$, where $A$ and $v_0$ are the area and speed at the top of the tank (and $a$, $v$ at the hole).
$v_0 = av/A$
Since $A \gg a \ \ \therefore v_0 \approx 0$
$\color{#A28}{P_0 = P ,\text{Because both are exposed to air}}$
By Bernoulli's equation,
$\Delta p + \frac12 \rho \Delta v^2 + \rho g \Delta h = 0$
$0 + \frac12 \rho v^2 - \rho g h = 0$
$v = \sqrt{2gh}$
In my attempt, I did not take $P = P_0$; instead I took $P = P_0 + \rho_{\text{water}}\, g\,(0 - (-h)) = P_0 + \rho gh$, for which I got a different answer, which I know for sure is wrong.
I did not get the part in purple; I am heavily confused about why we have to take those two pressures equal just because both are exposed to air.
With height the pressure increases.
The water inside is pushing water near hole out so that pressure should also be taken in account.
Why we neglected these two factors is beyond my understanding, please help.
Answer: The pressure inside the container, far from the hole, is $P_0+\rho g h$. But the pressure at the hole is atmospheric, because (as the solution says) the water here is exposed to the atmosphere. The pressure in the liquid changes from $P_0+\rho g h$ in the region surrounding the hole to $P_0$ at the hole. If the hole is small, this region of change is small.
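As a quick numeric sketch (the depth $h = 1\ \mathrm{m}$ is an assumed value, not from the problem), the resulting efflux speed is modest:

```python
import math

g, h = 9.81, 1.0                  # g in m/s^2, 1 m of water above the hole
v = math.sqrt(2 * g * h)          # Torricelli's result from Bernoulli's equation
print(round(v, 2), "m/s")         # 4.43 m/s
```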
If there were not such a pressure difference the water would not be pushed sideways out of the hole. | {
"domain": "physics.stackexchange",
"id": 37000,
"tags": "fluid-dynamics, fluid-statics"
} |
Why doesn't class weight resolve the imbalanced classification problem? | Question: I know that in imbalanced classification, the classifier tends to predict all the test labels as the larger class's label, but if we use class weights in the loss function, it would be reasonable to expect the problem to be solved. So why do we need approaches like downsampling or upsampling for the imbalanced classification problem?
Answer: Class weights do help with the imbalance problem ("resolve" seems too much), but upsampling has a certain advantage over them.
If you think about it, downsampling/upsampling the number of samples in each class to balance the dataset is almost exactly the same as using class weights.
For example, say you have a dataset containing 3 samples divided into 2 classes, and you are training with an MSE loss.
You can choose to upsample the number of samples from class B, which will get you the following cost function over a single epoch:
Here is where the small difference comes into effect, if you are training in a batch gradient descend (a single weights update per epoch), the prediction for the 2 identical B1 samples will be the same, so the loss function can be written as:
Which is exactly the same as using a weighted loss function. However if you are using a mini-batch gradient descend (as most model our days), the 2 different samples may appear in different mini-batches, and so the predictions for them at the same epoch won't be the same (because one of them will pass thru a model that was already updated once).
This is a small difference but sometimes it is important. It means that with weighted classes, the effective learning rate varies between mini-batches. In some cases that can make learning unstable. So, when possible, upsampling is the better approach (practically it produces slightly better results).
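Here is a tiny numeric illustration of the full-batch equivalence (the predictions and labels are made-up numbers): duplicating sample B1 gives exactly the same MSE cost as weighting its term by 2.

```python
def mse(p, y):
    return (p - y) ** 2

# Predictions for samples A1, A2 (class A, label 0) and B1 (class B, label 1)
pA1, pA2, pB1 = 0.2, 0.4, 0.9
yA, yB = 0.0, 1.0

# Upsampled batch: A1, A2, B1, B1
upsampled = (mse(pA1, yA) + mse(pA2, yA) + mse(pB1, yB) + mse(pB1, yB)) / 4
# Weighted batch: the class-B term carries weight 2
weighted = (mse(pA1, yA) + mse(pA2, yA) + 2 * mse(pB1, yB)) / 4
print(abs(upsampled - weighted) < 1e-12)   # True
```

The two costs only diverge once mini-batching splits the duplicates across different updates, which is exactly the difference discussed above.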
The problem is that you can't always upsample/downsample without any worry. If we go back to our example but this time we have a dataset of 5 samples, divided into 2 classes:
Downsampling is problematic - which of the 3 A class samples do we ignore?
Upsampling is problematic - which of the 2 B class samples do we duplicate?
It can be solved by randomly choosing which samples to keep/ignore at each epoch; however, with very big datasets this can lead to slower processing... So weighted classes are still a valid option.
"domain": "datascience.stackexchange",
"id": 4492,
"tags": "classification, class-imbalance, weighted-data"
} |
Question about Difference of Gaussian (DoG) algorithm | Question: I am recently learning about Computer Vision and I am having trouble understanding the Difference of Gaussian (DoG) algorithm. I get how the algorithm works at a high level, but I am trying to implement my own and I am confused about some steps.
For instance, I am trying to create 5 blur levels for each octave, and I am confused about which filter and sigma value to apply to which image. Using Matlab, for the first octave, I created a filter and applied it:
sigma = 0.5;
gauss = fspecial('gaussian', [5 5], sigma);
blur1 = imfilter(img, gauss, 'replicate');
dog1 = img - blur1;
%Next level
blur2 = imfilter(blur1, gauss, 'replicate');
dog2 = blur1 - blur2;
I am not so sure if this is how I need to apply it. Do I apply the Gaussian filter to the previously filtered image? I also saw code using k*sigma. I am not sure what k means and how to apply it. Oh, and what value should I use for sigma? Is it in the [0, 1] range or can it be bigger than that? Could someone help me with this? Thank you very much.
Thank you so much!
Answer: Difference of Gaussian is the difference in the output of two Gaussian filters with different blur amounts (sigma).
Sigma is the standard deviation of the Gaussian filter: a bigger sigma gives you a bigger amount of blurring. A good way to think about it is that a Gaussian filter with standard deviation sigma is very roughly like averaging over a window about 3 x sigma samples wide (or 3 x 3 in an image)
e.g. from wikipedia:
Very important: when making a Gaussian filter in MATLAB, make sure the size of the filter is at least 6 x sigma. In your above code you have 5 x 5, which is fine for sigma = 0.5, but for sigma = 1 you would want 6 x 6 or bigger.
The k is simply a multiplier for sigma.
e.g.
sigma = 0.5;
gauss1 = fspecial('gaussian', round([10*sigma 10*sigma]), sigma);
sigma = 1;
gauss2 = fspecial('gaussian', round([10*sigma 10*sigma]), sigma);
blur1 = imfilter(img, gauss1, 'replicate', 'same');
blur2 = imfilter(img, gauss2, 'replicate', 'same');
dog2 = blur1 - blur2;
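For readers outside MATLAB, here is the same idea sketched in NumPy (a 1-D impulse is assumed as input, and k is the per-level sigma multiplier as in the answer above):

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma) + 1          # total width comfortably > 6 * sigma
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()                   # normalize so blurring preserves sums

sig, k = 1.0, 2 ** 0.5                   # k multiplies sigma between levels
signal = np.zeros(101)
signal[50] = 1.0                         # impulse input
blur1 = np.convolve(signal, gaussian_kernel(sig), mode="same")
blur2 = np.convolve(signal, gaussian_kernel(k * sig), mode="same")
dog = blur1 - blur2                      # band-pass response centered at 50
print(round(float(dog[50]), 3))          # positive peak at the impulse
```

Because both kernels are normalized, the DoG response sums to (numerically) zero, which is why DoG acts as a band-pass filter that suppresses flat regions.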
A more complete code example that allows you to set the number of octaves and the steps per octave:
%% Filter using DoG
stepsPerOctave = 5;
octaves = 4;
mult = nthroot(2,stepsPerOctave);
% Create blurry images
sigma = 0.5;
kernelSize = [10*sigma*2^(octaves),10*sigma*2^(octaves)]
for k = 1:octaves*stepsPerOctave+1
disp(['Sigma is ' num2str(sigma)]);
gauss = fspecial('gaussian', kernelSize, sigma);
blur(:,:,k) = imfilter(I, gauss, 'replicate', 'same');
imagesc(blur(:,:,k)); colorbar; title(['Gaussian ' num2str(k)]); pause;
sigma = sigma * mult;
end
% Create DoG
for k = 1:octaves*stepsPerOctave
dog(:,:,k) = blur(:,:,k+1) - blur(:,:,k);
imagesc(dog(:,:,k)); colorbar; title(['DoG ' num2str(k)]); pause;
end | {
"domain": "dsp.stackexchange",
"id": 2906,
"tags": "matlab, computer-vision, gaussian"
} |
Does this dimensionless quantity have a name? | Question: When studying creeping flows, a common choice for a characteristic pressure scale is $$p_0 = \frac{\mu_0 U_0}{L_0},$$ where $\mu_0$ is a reference dynamic viscosity, $U_0$ is a reference velocity and $L_0$ is a reference length.
This leads me to think about the dimensionless quantity $$\frac{pL}{\mu U}.$$ Does it (or the inverse of it) have a specific name?
Answer: I am unaware of any dimensionless quantity that this represents, and Wikipedia seems to agree. However, what you have defined there is the ratio of the pressure stress to the viscous stress, and this is in some sense similar to the Bingham Number (yield stress to viscous stress).
So what would your number mean? Let's go ahead and call it the Toliveira number for the time being. When $To \gg 1$, the pressure stress is much more important than the viscous stress. This means that dilation is a bigger effect than shear (remember, pressure stress is the trace of the stress tensor). So a fluid packet is growing or shrinking isotropically much more than it is deforming under shear.
The inverse of this, $To \ll 1$ implies that the volume of the fluid element is not changing nearly as much as the shape of the fluid element is changing. Shear dominates dilation.
I could imagine this number maybe being important defining regimes where volumetric heat release is important (combustion for example) or regions where compression is large but viscous forces are small (shocks). But I have a feeling there are numbers more directly on-point in those cases. I'll keep digging.
After some more digging, I did find one number that is kind of close. The Poiseuille Number is defined as:
$$ P = -\frac{d p}{dx} \frac{L^2}{2\mu U} $$
This relates the pressure gradient in a laminar duct to the viscous forces in the duct. For an incompressible flow, the pressure doesn't mean much anymore but the pressure gradient is important as this is what drives flows. In your comment, you asked if the number you gave has any meaning in an incompressible flow. I am fairly certain the answer is no because absolute pressure is not what matters, but pressure gradients do matter. | {
"domain": "physics.stackexchange",
"id": 26447,
"tags": "fluid-dynamics, dimensional-analysis, navier-stokes"
} |
Do we need both path compression and union by rank together in disjoint set data structure? | Question: I was studying disjoint set data structure. I studied path compression and union by rank. Initially all the elements are single in their own set and by performing unions we can combine different sets.
Now since we are performing union by rank, the height of the resulting tree is always minimal. At this point I think that we might not need path compression at all. Am I right? If I am wrong, please explain with an example.
Answer: No, you are not right, at least not if you want the fastest possible running times. Using both is faster than using just one of them alone. Using just union by rank gives you an $O(\lg n)$ worst-case running time per operation, whereas using both union by rank and path compression gives you an $O(\alpha(n))$ amortized running time per operation, which is asymptotically faster. Here $n$ is the number of elements; the amortized bound holds over any sequence of $m$ operations performed on them. See https://en.wikipedia.org/wiki/Disjoint-set_data_structure#Time_complexity.
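For concreteness, here is a minimal sketch (my own, not from the linked reference) of a disjoint-set forest using both heuristics; union by rank bounds the tree height up front, while find flattens paths as a side effect:

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))  # every element starts as its own root
        self.rank = [0] * n           # rank = upper bound on subtree height

    def find(self, x):
        # Path compression: point every node on the path straight at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```

Even with union by rank keeping the trees shallow, each find call still flattens its path, and it is this combination that brings the amortized cost down to $O(\alpha(n))$.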
"domain": "cs.stackexchange",
"id": 11556,
"tags": "data-structures"
} |
Hydrolysis of transition metals' halides? | Question: So, it's a relatively common known solubility rule that any hydroxide with a cation not in the first two groups is basically insoluble. So supposing we have a transition or post-transition metal $M$ and say it forms a charge of $n+$ (so it's cation is $\ce{M^{n+}}$) why would $\ce{MX_n}$ (where $\ce{X}$ is a halide) remain in solution? Why doesn't the following reaction occur?
$$
\ce{MX_n(aq)} + n\ \ce{H_2O(l)} \rightarrow \ce{M(OH)_n(s)} + n\ \ce{HX(aq)}
$$
My thought was that perhaps this reaction does occur, but just very slowly and not fast enough for the $\ce{M(OH)_n}$ to precipitate in noticeable amounts. However, I don't really know and this is just a guess. Could someone explain to me why this doesn't happen?
Answer: $\ce{MCl_n(aq)} + n\ \ce{H2O(l)} \rightarrow \ce{M(OH)n(s)} + n\ \ce{HCl(aq)}$ does tend to go to the right. "Basically insoluble" is not quantitative enough to generalize. If the hydroxide is very insoluble, the acid produced by hydrolysis may not keep it in solution. But if the hydroxide is even only slightly soluble, it may not precipitate out. The acidity of the solution is a subtle indicator that the reaction has shifted a little bit to the right. The reaction isn't slower, just not always so extreme.
If you put $\ce{SiCl4}$ in water, it gives $\ce{HCl}$ + silica gel because $\ce{SiO2}$ (or "$\ce{Si(OH)4}$") is so insoluble.
If you put $\ce{MgCl2}$ in water, no precipitate appears because $\ce{Mg(OH)2}$ is slightly soluble (~2 mg/L), and the $\ce{HCl}$ produced by the hydrolysis reaction is acidic enough so that if a significant amount of $\ce{Mg(OH)2}$ were produced, it would just redissolve. $\ce{Mg(OH)2}$ gives a pH of about 10 in water, so at pH's lower than 10, $\ce{Mg(OH)2}$ doesn't precipitate out. BTW, $\ce{MgCl2}$ is slightly acidic because of the hydrolysis reaction.
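A quick back-of-the-envelope check of the pH 10 figure above (my own sketch; the molar mass of $\ce{Mg(OH)2}$, about 58.3 g/mol, and the assumption of complete dissociation are mine):

```python
import math

solubility_g_per_L = 0.002   # ~2 mg/L, the figure quoted above
molar_mass_g_per_mol = 58.3  # Mg(OH)2, assumed value

conc = solubility_g_per_L / molar_mass_g_per_mol  # mol/L of dissolved Mg(OH)2
oh = 2 * conc                 # each formula unit releases two OH-
pH = 14 - (-math.log10(oh))   # pH = 14 - pOH
print(round(pH, 1))
```

This lands close to 10, consistent with the claim that a saturated $\ce{Mg(OH)2}$ solution sits around pH 10.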
$\ce{AlCl3}$ is complicated because of ionic complexes, but in water it hydrolyzes to give a very acidic solution and some aluminum hydroxide complexes or gels which are of varying solubility. $\ce{AlCl3}$ has been used as a water treatment because it produces flocculent precipitates that carry down impurities.
$\ce{NiCl2}$ does not precipitate in water because $\ce{Ni(OH)2}$ is even more soluble than $\ce{Mg(OH)2}$.
$\ce{FeCl3}$ is quite acidic in water. You might expect the commercial 40% solution to be a cloudy product because of the production of acid, but it is a clear, deep red color. Oh, but when it is diluted and the pH rises because the acid is diluted, it precipitates in flocs like $\ce{AlCl3}$. It is used to clarify water by precipitating dirt and dust. | {
"domain": "chemistry.stackexchange",
"id": 9587,
"tags": "acid-base, transition-metals, precipitation, hydrolysis"
} |
Relating the variance of the current operator to measurements | Question: (EDIT: Thanks to Nathaniel's comments, I have altered the question to reflect the bits that I am still confused about.)
This is a general conceptual question, but for definiteness' sake, imagine a quantum dot sandwiched between two macroscopic metal leads with different chemical potentials. The chemical potential difference drives a current of electrons that flow through the quantum dot from one lead to another. Conservation of charge leads to the continuity equation for the charge density operator $\hat{\rho}$:
$$ \frac{ \mathrm{d} \hat{\rho}}{\mathrm{d}t} = \mathrm{i}[\hat{H},\hat{\rho}] = - \nabla \hat{j}. $$
Given a Hamiltonian $\hat{H}$, one can in principle use the above formula to calculate the form of the current operator $\hat{j}$, whose expectation value gives the number of electrons that pass from one reservoir to the other per unit time. The expectation value of the current is independent of time in the steady state. The operational procedure to measure this expectation value is clear: sit there and count the number of electrons $n_i$ that pass through the quantum dot in time $t$, then repeat this procedure $M$ times, giving
$$ \langle \hat{j} \rangle \approx \frac{1}{M}\sum\limits_i^M \frac{n_i}{t}. $$
The angle brackets on the left mean quantum mechanical average: $\langle \hat{j} \rangle = \mathrm{Tr}(\hat{\chi} \hat{j})$, where $\hat{\chi}$ is the density operator describing the quantum dot. (I am assuming that conceptual issues relating to quantum measurement are unrelated to this problem -- although please tell me if I'm wrong -- because a similar question can be posed for a classical stochastic system.)
Now, each measurement $n_i$ will not be exactly $\langle \hat{j} \rangle t$ due to fluctuations of the current. One can write down the variance of the current
$$(\Delta j)^2 = \langle \hat{j}^2\rangle - \langle \hat{j} \rangle^2. $$
(As Nathaniel pointed out, the calculated variance in the current depends on the choice of time units.) However, the quantity you actually measure is the following:
$$ (\Delta n)^2 = \overline{n^2} - \overline{n}^2, $$
where the overline means the average over the $M$ realisations of a measurement of $n_i$ electrons hopping between the reservoirs in time $t$, i.e. $\overline{n} = \frac{1}{M}\sum_i n_i$.
My confusion relates to the fact that the quantity $\Delta n(t)$ must depend on the measuring time $t$. You can see this easily by considering the limit $t\to\infty$: if you watch and wait for long enough then the fluctuations will average to zero and every measurement $n_i$ that you make will be exactly the expected value. On the other hand, (I think) $\Delta j$ is the expected RMS fluctuation over a single unit of time, i.e. $\Delta j = \Delta n(1)$. Is there a simple relationship between $\Delta j$ and $\Delta n(t)$ measured over arbitrary times?
Answer: Current fluctuations are notoriously difficult to calculate and work with. There is no simple relation between the moments of the current and corresponding density. Correlations, autocorrelations, etc. spoil any chance of a simple relation in general.
There is a useful method called full counting statistics (for a review see "Nonequilibrium fluctuations, fluctuation theorems, and counting statistics in quantum systems" by Esposito, Harbola and Mukamel, Rev. Mod. Phys. 81, 1665–1702 (2009), arXiv:0811.3717) which helps us calculate the current distribution.
Sorry I cannot give a better answer, but this is a very broad field of theoretical and experimental research in out-of-equilibrium quantum statistical physics.
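Though no simple relation exists in general, one toy case illustrates the measuring-time dependence raised in the question: for completely uncorrelated (Poissonian) transport, $\mathrm{Var}\,n(t)=\bar n=\lambda t$, so the relative fluctuation $\Delta n(t)/\bar n$ falls off as $1/\sqrt{t}$. A simulation sketch (not part of the original answer; the rate $\lambda$ is arbitrary, and real transport statistics are generally not Poissonian):

```python
import random

random.seed(0)
RATE = 50.0  # mean transfer events per unit time (arbitrary choice)

def counting_stats(t, trials=3000):
    """Mean and variance of the number of Poisson events in a window of length t."""
    counts = []
    for _ in range(trials):
        n, elapsed = 0, 0.0
        while True:
            elapsed += random.expovariate(RATE)  # exponential waiting times
            if elapsed > t:
                break
            n += 1
        counts.append(n)
    mean = sum(counts) / float(trials)
    var = sum((c - mean) ** 2 for c in counts) / trials
    return mean, var

for t in (1.0, 10.0):
    mean, var = counting_stats(t)
    print(t, mean, var, var ** 0.5 / mean)  # relative fluctuation shrinks with t
```

In this uncorrelated case $\Delta n(t) \approx \sqrt{\lambda t}$, so longer counting windows give ever smaller relative scatter, exactly the limit described in the question.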
Here is a paper which measures the full counting statistics for a quantum dot, which you mentioned:
S. Gustavsson et al. Counting Statistics of Single Electron Transport in a Quantum Dot. Phys. Rev. Lett. 96 no. 7, 076605 (2006). arXiv:cond-mat/0510269 [cond-mat.mes-hall]. | {
"domain": "physics.stackexchange",
"id": 8626,
"tags": "quantum-mechanics, statistical-mechanics, electric-current, non-equilibrium"
} |
From where should I start Machine Learning? | Question: I want to know how to start learning Machine Learning from scratch. Also, which language is best for implementing its algorithms or developing future applications based on it? Thanks!
Answer: I am a book person so I would recommend one of the following books:
Elements of Statistical Learning (Hastie and Tibshirani).
Pattern Recognition and Machine Learning (Bishop).
The first book is available as a free download from the authors' website. You may download and start reading it. You will get an idea about your deficiencies. If it's too difficult, then you need to improve your statistics and linear algebra skills. For Linear Algebra I recommend:
Linear Algebra and its Applications (David Lay).
For statistics I like:
Discovering Statistics (Andy Fields).
Stay away from the recipe books if your aims are long-term. | {
"domain": "datascience.stackexchange",
"id": 737,
"tags": "machine-learning, programming"
} |
Login modal with MVVM pattern | Question: I would like some feedback on whether this follows MVVM principles well.
The MainPage has a Login button that opens the login modal page. Upon successful login, the LoginPageViewModel will raise an event which is handled in the parent viewmodel.
I use Action objects to show/hide the modals. The Actions are defined in a page's codebehind and invoked in its viewmodel.
MainPage.xaml
<Button Command="{Binding ShowLoginModalCommand}" Text="Login"/>
MainPage.xaml.cs
public MainPage()
{
var vm = new MainPageViewModel();
vm.ShowLoginModal += (lvm) => Navigation.PushModalAsync(new LoginPage(lvm));
vm.HideLoginModal += () => Navigation.PopModalAsync();
BindingContext = vm;
InitializeComponent();
}
MainViewModel.cs
public Action<LoginViewModel> ShowLoginModal;
public Action HideLoginModal;
public ICommand ShowLoginModalCommand => new Command(() =>
{
var lvm = new LoginViewModel();
lvm.LoginSucceeded += (se, ev) =>
{
this.Customer = ev.Customer;
HideLoginModal();
};
ShowLoginModal(lvm);
});
public Customer Customer
{
get => _customer;
set
{
_customer = value;
OnPropertyChanged(nameof(Customer));
}
}
LoginPage.xaml
<Button Command="{Binding LoginCommand}" Text="Login"/>
LoginViewModel.cs
public ICommand LoginCommand => new Command(async () =>
{
var customer = await VerifyLogin(); // Assume valid login.
LoginSucceeded(this, new LoginSuccessEventArgs(customer));
});
public class LoginSuccessEventArgs : EventArgs
{
public Customer Customer { get; set; }
public LoginSuccessEventArgs(Customer customer) => Customer = customer;
}
public event EventHandler<LoginSuccessEventArgs> LoginSucceeded;
Answer: Generally speaking I find this a good implementation and like the simplicity of the approach. I would point out some problems I found, though.
Inconsistent MVVM style
You are using two different ways of instantiating views and viewmodels and linking them together, an inconsistency that could get bigger and messier as the program becomes more complex.
In the MainPage you're creating the viewmodel in the view's constructor, and assigning it to the BindingContext right away. But in the LoginPage you create the viewmodel in the handler of the main page and pass it to the constructor of the view (not shown here, but I guess it also assigns it to the BindingContext).
Especially for bigger applications, it's important to clearly define one style, either view first or viewmodel first, and live with it (I don't even know if it's called "style", but it sounds like that to me). A more concerning problem is that when VMs begin to take dependencies on services and other stuff, you most likely will want to use an IoC container for injecting them, and then it becomes more important to clearly define who'll create viewmodels and how. Personally, I like the viewmodel-first approach, in which viewmodels create other viewmodels and are passed to the views.
MainPage viewmodel
Why are ShowLoginModal and HideLoginModal plain public fields? In situations like this, I would expect them to be normal events, raised from the viewmodel and handled in the view. This way feels a bit non-standard.
In the ShowLoginModalCommand method, the viewmodel creates the login viewmodel directly. Maybe it's unnecessary now, but consider using an IoC container or another factory to create it, as it generally becomes harder to create viewmodels inline once they take extra dependencies.
The Customer setter doesn't need to be public; a private one will do. You only want to set the current user from the login and nowhere else, so make that crystal clear.
The invocations of the ShowLoginModal and HideLoginModal delegates don't contain a null validation, and will result in a NullReferenceException if no one subscribed to them. A nice shortcut to call them safely is the null-conditional invocation: ShowLoginModal?.Invoke(lvm);
LoginPage viewmodel
Beware of the async reentrancy in the LoginCommand method. The VerifyLogin call inside it could potentially be a slow one, but since it's an await call, the UI will remain responsive. That means the user could do other things in the meanwhile, including changing the user/password or even click the login button again! That could have many unanticipated effects. I would counter it by disabling the UI while the verification takes place, using something like this in the VM:
public bool EnableForm { get; set; }
public ICommand LoginCommand => new Command(async () =>
{
//Disable UI before doing anything
EnableForm = false;
OnPropertyChanged(nameof(EnableForm));
var customer = await VerifyLogin(); // Assume valid login.
LoginSucceeded(this, new LoginSuccessEventArgs(customer));
});
And in the view bind the controls to that property:
<Button Command="{Binding LoginCommand}" Text="Login" IsEnabled="{Binding EnableForm}"/>
In the LoginSuccessEventArgs the Customer property doesn't need to be read-write. Once again, since you're just conveying a value, there is no benefit in changing it after creation. Also, the constructor should check the customer passed in for null.
Possibly it's not shown here, or it's just a work-in-progress thing, but there is no handling of the login-failure case, which should just re-enable the controls (following my suggestion above), show some error message to the user, and not raise the LoginSucceeded event.
Like in the main page, the raising of the event is not null-checked and will blow up if nobody happens to be subscribed.
The call to the VerifyLogin method, although maybe simplified here, could imply a lot of work under the cover. That's the kind of thing that's better delegated to a service to do all the heavy work, and just have the presentation layer handle the visual thing. I would inject it in the constructor and call it passing the user/password entered, and it would return success or failure. Something along the lines of this:
private readonly ILoginService _loginService;
public LoginViewModel(ILoginService loginService)
{
if(loginService == null) throw new ArgumentNullException(nameof(loginService));
_loginService = loginService;
}
public ICommand LoginCommand => new Command(async () =>
{
var customer = await _loginService.VerifyLogin(Username, Password);
if(customer == null)
{
// show some error message here
}
else
{
LoginSucceeded(this, new LoginSuccessEventArgs(customer));
}
}); | {
"domain": "codereview.stackexchange",
"id": 30081,
"tags": "c#, authentication, mvvm, xaml, xamarin"
} |
Time travel (Velocity and Mass) | Question: I learned that if I move at a high velocity, and my watch and my home clock both show 12:00 before the trip, then when I come back my watch might show 12:05 while the home clock shows 12:10.
I also hear about the same thing when we are near a massive object.
Is there a formula about this?
For example if I travel 1h at 100km/h, how many times will I be younger than the rest of the world?
Answer: There most certainly are formulas for these things! I don't know precisely what your background is but I'll try to explain it in very elementary terms. I'll discuss the case where there is no effect of gravity (or when this is negligible), since if gravity is incorporated the story becomes a bit more complicated.
So say we neglect gravity, and we consider two observers $\mathcal O_1$ and $\mathcal O_2$, where observer $\mathcal O_1$ is at rest, but the other observer may move as she likes. Both observers have clocks, and there is a relatively simple formula that relates the elapsed times $\Delta t$ and $\Delta \tau$ on the two clocks. We consider the situation in which the two observers are initially at the same location. Their clocks both show an elapsed time of zero at that point. Let $v(t)$ be the function that gives the velocity $v$ of observer $\mathcal O_2$ at each time $t$ on the clock of the stationary observer $\mathcal O_1$. (For simplicity we consider only motion in the $x$-direction.)
If observer $\mathcal O_1$ now measures a time $\Delta t$ on her clock, then observer $\mathcal O_2$ will measure a time
\begin{align}
\Delta \tau = \int_{0}^{\Delta t} \sqrt{1-\frac{v(t)^2}{c^2}}\, \text{d}t, \tag{1}
\end{align}
where $c$ denotes the speed of light. First of all, let's check if this makes sense. Suppose observer $\mathcal O_2$ does not move at all! Then $v(t)=0$ and hence
\begin{align}
\Delta\tau = \int_0^{\Delta t}1\,\text{d}t = \Delta t
\end{align}
and so both stationary observers measure the same elapsed time, as expected. So far the formula seems to work!
Now let's consider a different example: observer $\mathcal O_2$ moves with constant velocity $v$ for a time $\Delta t/2$, and then back with velocity $-v$, again for a time $\Delta t/2$, making for a total time $\Delta t$ (on the clock of observer $\mathcal O_1$, as indicated by the use of $t$ instead of $\tau$), after which the observers meet again. Then, since $v^2=(-v)^2$, the integral $(1)$ becomes
\begin{align}
\Delta \tau = \int_{0}^{\Delta t} \sqrt{1-\frac{v(t)^2}{c^2}}\, \text{d}t = \int_{0}^{\Delta t} \sqrt{1-\frac{v^2}{c^2}}\, \text{d}t = \sqrt{1-\frac{v^2}{c^2}}\int_{0}^{\Delta t} \, \text{d}t = \sqrt{1-\frac{v^2}{c^2}}\Delta t.
\end{align}
Let's see what we can learn from this. Since $v$ must always be smaller than the speed of light, $v^2/c^2$ is smaller than 1. Also, because of the squares, $v^2/c^2$ is always positive. Hence $v^2/c^2$ lies between $0$ and $1$, which means that $1-v^2/c^2$ also lies between $0$ and $1$. Finally then, the multiplication factor $\sqrt{1-v^2/c^2}$ lies between $0$ and $1$, and therefore the formula above tells us that $\Delta\tau$ is always smaller than or equal to $\Delta t$. In other words:
The moving observer experiences less time then the observer which is at rest.
To get a feeling for the actual numbers, suppose observer $\mathcal O_2$ moves at a velocity of $7000$ km/h, which, according to a little searching on the internet, seems to be about the top speed that is reached by the fastest fighter jets today. Then doing the calculation shows that
\begin{align}
\Delta \tau \approx 0.99999999997 \Delta t,
\end{align}
which is an extremely small difference in elapsed times. (It is a difference of about 1 second on a total time of 2000 years.) For normal everyday speeds the difference is even a lot smaller than this, so definitely not noticeable. On the other hand, for a hypothetical spaceship that moves at a velocity of $90\%$ of the speed of light, i.e., $v=0.90\,c$, we get
\begin{align}
\Delta \tau \approx 0.44 \Delta t,
\end{align}
so at that speed the stationary observer ages approximately twice as fast as the moving observer. | {
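To put numbers on this, here is a quick sketch (my own, not part of the answer) evaluating the constant-velocity formula above, including the 100 km/h case from the question:

```python
import math

c = 299792458.0  # speed of light in m/s

def elapsed_ratio(v):
    """Delta_tau / Delta_t for motion at constant speed v (in m/s)."""
    return math.sqrt(1.0 - (v / c) ** 2)

# The 100 km/h case from the question: the deviation from 1 is a few
# parts in 10^15, i.e. roughly 15 picoseconds over a one-hour trip.
v_car = 100.0 / 3.6
print(1.0 - elapsed_ratio(v_car))

# The 90%-of-light-speed spaceship from the answer:
print(elapsed_ratio(0.9 * c))
```

So for everyday speeds the effect is immeasurably small, while at relativistic speeds it becomes dramatic, exactly as the worked examples above show.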
"domain": "physics.stackexchange",
"id": 46147,
"tags": "reference-frames, time, relativity, time-dilation, observers"
} |
Doubts about the Einstein "way" | Question: I followed the whole Einstein/Schwarzschild derivation, and the very first thing I don't like in it is that after emphasizing the equivalence-principle requirement, Einstein skips it as a boundary condition, and assumes instead that the Ricci curvature is div(grad(relativistic potential)); and that its being ZERO was sufficient to define this potential!
The "div(grad(potential))=0" sufficiently defines potential only in flat space!
Gravitational acceleration, to be a grad(potential) in curved space, MUST be defined/measured in local (curved) space!
At this point, the analogy acceleration <-> curvature_of_geodesic is incorrect, because the curvature of the geodesic IS defined/calculated in map space.
Is there any method of quantitative comparison between acceleration and gravitational field, and hence, any demonstration that Schwarzschild's solution satisfies the equivalence principle?
Answer: Let's start from the Schwarzschild (SM) metric:
$$ds^2 = c^2 dt^2 \left( 1+ \frac{2U}{c^2}\right) - dr^2 \frac{1}{1+ \frac{2U}{c^2}} -r^2 d\Omega^2$$
where $U = -\frac{GM}{r}$ is the scalar potential with $M$ as the mass of the gravitating body. In the following we can take into account that the ratio of the scalar potential to the square of the velocity of light is extraordinarily small for systems where Newton's gravitational law has been checked for correctness, i.e. systems like the surroundings of the earth or sun. For heavier & smaller objects (white dwarfs, neutron stars, black holes etc.) it's better to use Einstein's gravitation theory right away. So this means that the following approximation would not be valid around these extremely heavy stellar-like objects mentioned above.
$$ \frac{U}{c^2} \approx \begin{cases} 10^{-9} & \text{on the earth's surface} \\ 10^{-6} &\text{on the sun's surface} \\ 10^{-4} &\text{on the surface of a white dwarf}\\
10^{-1} & \text{on the surface of a neutron star}\end{cases}$$
Around the sun, for instance, $U/c^2 \approx 10^{-6}$, which permits a couple of approximations. We can thus write the SM metric:
$$ds^2 = c^2 dt^2 \left( 1+ \frac{2U}{c^2}\right) - dr^2 (1- \frac{2U}{c^2}) -r^2 d\Omega^2$$
This will also allow us to develop the metric tensor's components $g_{ik}$ in the following way ($\eta_{ik}=diag(1,-1,-1,-1)$ is the Minkowski tensor)
$$g_{ik} \approx\eta_{ik} + 2\psi_{ik}$$
and as the $\psi_{ik}$ are really small, we can neglect all terms of order higher than one (i.e. the quadratic terms) in the Ricci tensor:
$$R_{ik} = \psi_{ik,l}^{\quad l} +\psi^l_{l,ik} - \psi^{l}_{k,li} -\psi^l_{i,kl}$$
Due to some freedom in the choice of coordinates this expression can be further simplified, so at the end we get:
$$R_{ik} =\psi_{ik,l}^{\quad l} \quad \text{or simply }\quad R_{ik} =\Box \psi_{ik}$$
Then starting from Einstein's equations $R_{ik} =\kappa(T_{ik} -\frac{1}{2}g_{ik} T^l_l)$ without any approximation and using the mentioned decomposition of the metric tensor above we get:
$$\Box \psi_{ik} = \kappa (T_{ik} -\frac{1}{2}\eta_{ik} T^l_l)$$.
Considering the Schwarzschild case we can approximate the energy-momentum tensor as $T_{ik}=diag(\rho c^2,p,p,p)\approx diag(\rho c^2,0,0,0)$ so that $T^l_l = \rho c^2$.
We assume that for a massive object there is no pressure, or that the pressure is much smaller than the energy density $\rho c^2$ so that it can be neglected (the $c^2$ factor makes the first diagonal element very large compared to the other elements).
Yes indeed we will limit our approximation to the $00$ component of the $\psi$-tensor, but if desired the other components can also be computed.
We take from the SM-metric $\frac{U}{c^2} =\psi_{00}$. And we get:
$$\Box \psi_{00} = \kappa \left(T_{00}- \frac{1}{2}\eta_{00} T^l_l\right) = \frac{\kappa}{2}\rho c^2 = 4\pi \frac{G}{c^2} \rho,$$ where $\kappa = 8\pi G/c^4$ is Einstein's gravitational constant.
Under the further assumption that the bodies of the system hardly move, so that the time derivative of $\psi_{00}$ is zero, we get:
$$\Delta U = 4\pi G \rho$$
Yes, in outer space, i.e. outside the massive body, $\rho=0$, and so we get there $\Delta U =0$. This is nothing new; potential theory from the $19^{th}$ century already tells us this. It actually corresponds to $R_{ik}=0$. Naively we could conclude in both cases that the field is zero, but this would only be true if the boundary conditions required $\rho=0$ in the whole space up to infinity in both cases. For the SM case this is not so, so in both cases -- Newton's and Einstein's -- the solution will be non-zero. It is well known that $\Delta U=0$ is fulfilled by harmonic functions that are non-zero (they have to be adapted to the given boundary conditions). So there is no problem at all.
For the implication of the equivalence principle one has to do a similar computation for the geodesic equation (the dot denotes differentiation with respect to $s$):
$$ \ddot{x}^i = -\Gamma^i_{kl} \dot{x}^k \dot{x}^l $$
This equation takes the equivalence principle (EP) into account: the inertial mass, which would appear as a factor on the lhs, and the gravitational mass, which would appear as a factor on the rhs, have already been cancelled out because both are the same (EP).
We assume here that the velocities of the concerned bodies are small, so we can approximate $\dot{x}^i \approx (1,\vec{0})$. We will now use Greek indices for the space coordinates, $\alpha = 1,2,3$.
$$\frac{d^2 x^\alpha}{ds^2} \approx \frac{d^2 x^\alpha}{c^2 dt^2} = -\Gamma^\alpha_{kl} \frac{dx^k}{ds}\frac{dx^l}{ds} \approx -\Gamma^\alpha_{00}$$
Computing the Christoffel-symbols we get:
$$\Gamma^\alpha_{00} = \psi_{00,\alpha} -2\psi_{0 \alpha,0}$$
Assuming that the gravitational field is stationary, we can make an appropriate choice of coordinates in which the mixed components $g_{0\alpha}$ of the metric (and therefore also $\psi_{0\alpha}$) vanish. We are then left with only one term on the rhs of the Christoffel-symbol computation. So we get for the geodesic equation:
$$ \frac{d^2 x^\alpha}{c^2\, dt^2} = -\nabla \psi_{00}$$
i.e. we get the Newton's law:
$$\frac{d^2\vec{x}}{dt^2} = -\nabla U$$
or compared with what you might expect:
$$ m_\text{inertial} \frac{d^2\vec{x}}{dt^2} = - m_G\nabla U$$
One does not actually get this last form ($m_G$ is the gravitational mass, $m_\text{inertial}$ the inertial mass), because the EP is used right from the beginning, which is what allows the masses to be cancelled out.
So I think, the derivation is solid and the used approximations are valid.
I haven't given the definitions of all the symbols in the Ricci tensor, Christoffel symbols, etc. due to the length of the answer. They can be found on Wikipedia, for instance. Or just ask for them.
In Einstein's theory there are 10 components of the metric, whereas in Newton's theory there is only 1. Using the approximation demonstrated above, the other components also fulfill similar equations, but compared with $\psi_{00}$ they are much smaller, as the table of $U/c^2$ values above suggests.
"domain": "physics.stackexchange",
"id": 80627,
"tags": "general-relativity, geodesics, equivalence-principle"
} |
How can I text-mine full-text articles from PubMed? | Question: I want to do a text mining study using full-text versions of articles I find on PubMed. My intended search protocol will be roughly as follows:
Search PubMed using a gene name (and any alternate names) as the query, all matching papers are subjected to Step 2; my understanding is that this will return articles that mention the gene in their abstract
Search full-text for any and all matches from a list of keywords; assign each paper a score based on the number of matching keywords; any papers that match have to be read by a human but the most relevant papers will have a higher score and get read first
The two-step search needs to be repeated many times with different genes in Step1 so an automated approach is probably worth the time it will take to develop.
I know enough about programming that I could write a script to do Step2 if I had the paper as a plain-text document (I program in Perl but I also know a little Python) but I have no idea how I could automate the process of searching for papers, downloading them, converting them to plain-text documents that my program could work on.
I considered posting this in StackOverflow but have opted for this site because I have not ruled out the possibility that this can be done without doing my programming.
UPDATE: I have found one tool that might be very useful for exactly this problem. Unfortunately, I am not in a position to ask for a free trial so I cannot evaluate it. Even if it is an appropriate tool, I will most likely not be able to use it for my study.
Answer: Below is a Python script that might help you to get started (apologies if it fails the Pythonic test - it works!). It uses the Entrez part of the Biopython library. The script sets up a query, in this case yeast AND Saccharomyces against the pmc database. Also note that this script uses the 2 step process that NCBI likes you to use - the first part of the fetchByQuery function gets a set of results then the second part uses those results to actually obtain the data.
The output is an xml file which you get to parse with your favourite tools. In your case you will need to get out the text sections and do your token analysis. If you use Python for that I recommend the Natural Language Toolkit (NLTK).
In your case you could just set up search terms as a Python list and loop through writing each dataset to a file named from the search term.
from Bio import Entrez

def fetchByQuery(query, days):
    Entrez.email = "xxx"  # you must give NCBI an email address
    searchHandle = Entrez.esearch(db="pmc", reldate=days, term=query, usehistory="y")
    searchResults = Entrez.read(searchHandle)
    searchHandle.close()
    webEnv = searchResults["WebEnv"]
    queryKey = searchResults["QueryKey"]
    try:
        fetchHandle = Entrez.efetch(db="pmc", retmax=100, retmode="xml", webenv=webEnv, query_key=queryKey)
        data = fetchHandle.read()
        fetchHandle.close()
        return data
    except:
        return None

days = 100  # looking for papers in the last 100 days
termList = ["yeast", "Saccharomyces"]
query = " AND ".join(termList)
xml_data = fetchByQuery(query, days)
if xml_data == None:
    print 80*"*" + "\n"
    print "This search returned no hits"
else:
    f = open("pmcXml.txt", "w")
    f.write(xml_data)
    f.close()
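The Step-2 keyword scoring from the question is not covered by the script above; a minimal sketch (Python 3, with a placeholder keyword list) could look like:

```python
import re

def keyword_score(text, keywords):
    """Count how many of the keywords occur at least once in the text."""
    score = 0
    for kw in keywords:
        # Whole-word, case-insensitive match so e.g. 'rad' cannot hit 'radiation'.
        if re.search(r"\b" + re.escape(kw) + r"\b", text, re.IGNORECASE):
            score += 1
    return score

keywords = ["phosphorylation", "kinase", "knockout"]  # placeholder list
sample = "The kinase activity increased after knockout of the gene."
print(keyword_score(sample, keywords))
```

Feeding it the text extracted from each downloaded article gives the per-paper score used to rank papers for human reading.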
"domain": "biology.stackexchange",
"id": 2802,
"tags": "bioinformatics, literature"
} |
GW190521 black hole merger total mass calculation and missing mass, how does this happen? | Question: I have just read an article about that black hole merger event (it's in Italian):
Sette miliardi di anni fa, due mostri si unirono ("Seven billion years ago, two monsters merged")
What made me curious is that the article says that a 66 solar mass black hole merged with an 85 solar mass black hole to form a 142 solar mass black hole.
If the data are correct, that suggests to me that during a merger event, matter inside a black hole is ejected (66+85=151).
Is that (mass ejection) possible? I have always thought that escape from the inside of a black hole is impossible.
I suppose it is somewhat possible if we think about Hawking radiation, but due to my amateur knowledge level I have not been able to dig into this very much.
Answer: I remember the first time I'd read about a gravitational wave detection and the resulting mass I also stopped when I realized the numbers didn't add up. At first it's shocking. While we might be aware that very tiny changes in mass are associated with huge amounts of energy release, it's absolutely amazing to think of a mass the size of our Sun radiated as energy!
But that's exactly what happens in the mergers of black holes. The fraction of mass converted to energy is much smaller when two normal stars merge; gravitational waves are produced but are small in comparison. But when the objects are extremely dense, so that they can orbit very close before the merger, as in the case of neutron stars and black holes, those gravitational waves are powerful and a significant fraction of that mass will radiate away as ripples in space itself.
Answer(s) to Fraction of initial mass lost (radiated) by neutron star mergers compared to black hole mergers? indicate that this fraction can be up to several percent in these cases.
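A quick sketch of the bookkeeping for the numbers in the question (the solar-mass and speed-of-light constants are standard values I am supplying, not figures from the article):

```python
M_SUN = 1.989e30  # kg, one solar mass
C = 2.998e8       # m/s, speed of light

m1, m2, m_final = 85.0, 66.0, 142.0  # solar masses, from the question
radiated = m1 + m2 - m_final         # mass carried away by gravitational waves
fraction = radiated / (m1 + m2)      # ~6% of the initial mass
energy_joules = radiated * M_SUN * C ** 2  # E = m c^2

print(radiated, round(fraction, 3), energy_joules)
```

So about 9 solar masses, roughly 6% of the initial total, was radiated away, consistent with the several-percent figure quoted above.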
We can convert an absurdly tiny amount of those distortion waves back into other forms of energy as a thought experiment, though not practical. For more on that, see answers to
Does a gravitational wave loses energy over distance?
Transfer of energy from gravity back to other “more familiar” forms of energy?
Does shortening the path length of an excited etalon do work? What about LIGO? | {
"domain": "astronomy.stackexchange",
"id": 4774,
"tags": "black-hole, gravitational-waves, mass"
} |
iRobot Create 2: Encoder Counts | Question: This post follows on from an earlier post (iRobot Create 2: Angle Measurement). I have been trying to use the wheel encoders to calculate the angle of the Create 2. I am using an Arduino Uno to interface with the robot.
I use the following code to obtain the encoder values. A serial monitor is used to view the encoder counts.
void updateSensors() {
Roomba.write(byte(149)); // request encoder counts
Roomba.write(byte(2));
Roomba.write(byte(43));
Roomba.write(byte(44));
delay(100); // wait for sensors
int i=0;
while(Roomba.available()) {
sensorbytes[i++] = Roomba.read(); //read values into signed char array
}
//merge upper and lower bytes
right_encoder=(int)(sensorbytes[2] << 8)|(int)(sensorbytes[3]&0xFF);
left_encoder=int((sensorbytes[0] << 8))|(int(sensorbytes[1])&0xFF);
angle=((right_encoder*72*3.14/508.8)-(left_encoder*72*3.14/508.8))/235;
}
The code above prints out the encoder counts; however, when the wheels are spun backwards, the count increases and will never decrement. Tethered connection to the Create 2 using RealTerm exhibits the same behavior; this suggests that the encoders do not keep track of the direction of the spin. Is this true?
Answer: This is true. The encoders on the Create are square-wave, not quadrature. Therefore, they rely on the commanded direction to figure out which way the wheel is spinning. When driving forward, they count up, and when driving backward they count down as expected. But if you move the wheel when the robot is not moving, or move the wheel opposite the commanded direction, you will get incorrect results. The latest OI Spec available at www.irobot.com/create has a small discussion of this in the section for Opcode 43 on page 30. This was only recently updated, and contains a number of notes regarding gotchas like this. So you may want to re-download. | {
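To illustrate the byte handling involved, here is a small Python sketch (function names are illustrative, not from the OI spec) of merging the high/low encoder bytes into a signed 16-bit count and taking differences that survive the counter's 16-bit wrap-around:

```python
def to_signed16(high, low):
    """Merge high/low encoder bytes into a signed 16-bit count."""
    v = ((high & 0xFF) << 8) | (low & 0xFF)
    return v - 0x10000 if v >= 0x8000 else v

def count_delta(prev, curr):
    """Signed difference between two raw counts, handling 16-bit wrap-around."""
    return ((curr - prev + 0x8000) % 0x10000) - 0x8000

print(to_signed16(0xFF, 0xFE))   # -2 (count went below zero driving backward)
print(count_delta(65530, 4))     # 10 (counter wrapped past 65535)
```

Because the encoders are square-wave, the sign of the counts already reflects the commanded direction; the sketch only shows how to decode them without losing that sign.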
"domain": "robotics.stackexchange",
"id": 681,
"tags": "irobot-create, roomba"
} |
move_base global planner plans outside static map | Question:
Hi,
The default global planner plans outside the static map, i.e. through unexplored area. Is there a parameter I have missed?
planner_params.yaml
base_local_planner: dwa_local_planner/DWAPlannerROS
recovery_behaviors: []
planner_frequency: 0
planner_patience: 1000.0
controller_patience: 1000.0
recovery_behavior_enabled: false
clearing_rotation_allowed: false
controller_frequency: 20.0
global_costmap_params.yaml
global_costmap:
global_frame: /map
robot_base_frame: base_link
update_frequency: 1.0
static_map: true
Unfortunately, during mapping not all parts of the room were bounded by a wall, i.e. black cells in the occupancy map. The planner found a path that goes outside the static map from one end and then back into the other side of the room.
Originally posted by aswin on ROS Answers with karma: 528 on 2014-06-20
Post score: 0
Answer:
It depends on which global planner you're using, but in navfn, the parameter is allow_unknown
Originally posted by David Lu with karma: 10932 on 2014-06-20
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by aswin on 2014-07-02:
I am using navfn and I set it to allow_unknown as false. However, it still plans outside static map. I verified this parameter by printing it in initialize() and also viewing the potential that goes outside the static map.
Comment by aswin on 2014-07-02:
Ok so when I use track_unknown: true and unknown_cost_value: 255 inside costmap_common.yaml, I can get what I want. If this is a necessary parameter, it must be added to the navigation tutorial
Comment by David Lu on 2014-07-03:
Great! Glad you figured it out. | {
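Putting together the parameters from the answer and the comments, a configuration along these lines keeps navfn out of unknown space (a sketch; the exact key names can vary between navigation releases, e.g. the first key is spelled `track_unknown_space` in newer costmap_2d versions):

```yaml
# costmap_common.yaml
track_unknown: true        # treat unexplored cells as unknown rather than free
unknown_cost_value: 255    # map value interpreted as "unknown"

# navfn global planner namespace
NavfnROS:
  allow_unknown: false     # do not plan through unknown cells
```

Note that `allow_unknown: false` only has an effect if the costmap actually marks cells as unknown, which is why both pieces were needed.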
"domain": "robotics.stackexchange",
"id": 18330,
"tags": "ros, navigation, move-base, base-global-planner"
} |
In LPA*, how are predecessors/successors of a vertex defined? | Question: While trying to implement LPA* (mostly based on its description in the same authors’ paper on its derivative D*Lite), I noticed it mentions predecessors and successors of a vertex without giving a full explanation.
I understand that $Pred(u)$ is the set of all vertices from which an edge leads towards $u$, and $Succ(u)$ is the set of all vertices towards which an edge leads from $u$. Predecessors and successors must be immediate neighbors (thus e.g. a predecessor of a predecessor of $u$ is not necessarily a predecessor of $u$).
In practice, however, it is quite common to have edges that are not directional, i.e. can be traversed both ways (which is equivalent to the same two nodes being connected by two antiparallel directional edges). I infer that for two vertices $u$ and $v$ connected in this way, $u$ would be both a predecessor and a successor of $v$ (and vice versa). In other words, $Pred(u) \cap Succ(u)$ is not necessarily empty.
Am I correct to assume that predecessor and successor are defined only by the existence of an edge traversable in a given direction, and are specifically not related to start cost?
Answer: If you have an undirected graph, you can convert it to a directed graph by including edges in both directions: for each undirected edge $(u,v)$, add directed edges $u \to v$ and $v \to u$. So, if you have undirected edges, imagine applying this transformation and then running LPA* on the resulting graph. That should make it clear what to do.
Alternatively: in an undirected graph, Pred(u) is the set of all edges incident on $u$, and so is Succ(u) (there is no distinction between them in an undirected graph). So, yes, they can overlap.
The cost of the edges are irrelevant. | {
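The undirected-to-directed transformation can be sketched in a few lines (Python, illustrative):

```python
from collections import defaultdict

# Expand each undirected edge into two antiparallel directed edges; Pred and
# Succ then coincide for every vertex, as described above.
undirected = [("a", "b"), ("b", "c")]
pred, succ = defaultdict(set), defaultdict(set)
for u, v in undirected:
    for x, y in ((u, v), (v, u)):   # both traversal directions
        succ[x].add(y)
        pred[y].add(x)

print(sorted(pred["b"]), sorted(succ["b"]))  # ['a', 'c'] ['a', 'c']
```

For vertex `b`, the predecessor and successor sets are identical, confirming that their intersection need not be empty.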
"domain": "cs.stackexchange",
"id": 11459,
"tags": "algorithms, graphs, shortest-path"
} |
[ROS2] How to implement a sync service client in a node? | Question:
In ROS1, it was possible to make a (synchronous) service call from a callback.
In ROS2, in a derived node class, if I do the following:
create a service client using "create_client"
within a callback function (e.g. a timer callback) call async_send_request() on the client and then:
call get() on the future that is returned
It will block forever. Instead, I must register a callback when I call async_send_request() in order to handle the response.
An async design does have its advantages, so I am wondering, is it the intent of ROS2 to force asynchronous service message handling in a node or is there a way to do a sync call that I haven't uncovered yet?
Originally posted by mschickler on ROS Answers with karma: 95 on 2020-02-04
Post score: 4
Original comments
Comment by MrCheesecake on 2020-06-05:
Hi,
I'm having the same trouble. I want to not use spin_until_future_complete in my code, because I'm already spinning the node outside of my class.
I tried to use wait/wait_for/wait_until on the future, but they all never return and block forever as you described.
So the only way is to use a callback? Or have you found another possibility in the meantime?
Comment by jdlangs on 2020-06-08:
@MrCheesecake, see the answer I just posted about how you can get the service response in a synchronous way.
Comment by vissermatthijs on 2020-06-08:
This is a problem for me as well; I asked the same question on GitHub: https://github.com/ros2/rclpy/issues/567. Does somebody have a Python example? That would help me a lot
Comment by MrCheesecake on 2020-06-10:
@jdlangs thanks for your answer, do you have a example or explanation on how to use callbackgroups. There is not much about this topic out there. So when I use a callbackgroup of type Reentrant future.get() and future.wait() will work? Because now they don't (also with MultiThreadedExecutor) .
Comment by jdlangs on 2020-06-12:
@MrCheesecake I just edited my answer to include a full standalone example that demonstrates how to do it.
Answer:
You're right, this is something I would like as well. Dealing with all the async machinery for a simple sync application where I'm willing to wait makes the application-level code less clean.
I think your best bet right now is to create a sync_service wrapper to deal with all the async and waiting for you. Then you can use that in your code to interact with it like a synchronous operation.
Right now, that's not "batteries included".
Originally posted by stevemacenski with karma: 8272 on 2020-02-04
This answer was ACCEPTED on the original site
Post score: 1 | {
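The deadlock described in the question can be pictured with plain Python (a generic illustration, not rclpy code): the future is completed by a separate worker thread, the analogue of an executor spinning the node in its own thread. Blocking on the future from the very thread that would complete it, e.g. inside a callback run by a single-threaded executor, hangs forever; blocking from any other thread works:

```python
import concurrent.futures

# The worker thread plays the role of the spinning executor that completes futures.
executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def async_send_request(x):
    # Stand-in for an async service call; this "service" doubles its input.
    return executor.submit(lambda: x * 2)

def send_request_sync(x, timeout=1.0):
    # Safe only when called from a thread other than the one completing the future.
    return async_send_request(x).result(timeout=timeout)

print(send_request_sync(21))  # 42
```

This is the essence of the sync-wrapper idea from the answer: keep the executor spinning in its own thread, and block on the future elsewhere.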
"domain": "robotics.stackexchange",
"id": 34380,
"tags": "ros2"
} |
How can DNA replication result in hairpin structures? | Question: My professor said that one of the reasons SSB proteins are so important is to prevent the formation of hairpin structures. I can't see how or why DNA would form hairpin structures, and there's not much about it on the internet, so can anybody explain this hairpin thing and how SSB proteins prevent it from happening?
Answer: DNA hairpins are formed when two regions of the same single-stranded DNA are complementary in nucleotide sequence but run in opposite directions (as represented in the image below). These two sets of nucleotide sequences base-pair with each other by forming hydrogen bonds between adenine-thymine and guanine-cytosine respectively to form a hairpin loop. The same structures can be seen in the case of RNA.
(Figure via: https://brilliant.org/problems/dna-zipper/)
Single-stranded binding (SSB) proteins bind to single-stranded DNA and prevent the breakdown of newly synthesized DNA by nucleases; they also remove secondary structures of the DNA strand, such as hairpin loops, so that other enzymes can bind to the DNA strand and operate properly. As represented in the figure below, SSB proteins bind to ssDNA through electrostatic interactions and prevent bond formation within the nucleotides of a single DNA strand, thus preventing the formation of hairpin loops in DNA.
(Figure via: http://helicase.pbworks.com/w/page/17605582/Amanda-Kinney)
(via: https://proteopedia.org/wiki/index.php/Single_stranded_binding_protein) | {
"domain": "biology.stackexchange",
"id": 10519,
"tags": "dna, dna-sequencing, dna-replication, dna-damage"
} |
Open source code for the maths behind a heliostat? | Question: Theoretically, using a Raspberry Pi, (at least) one mirror, and two motors, one should be able to build a heliostat, i.e. a device which redirects sunlight to a fixed spot, like a shrub in the shadow of a building.
I am now searching for heliostat (open) source code, ideally in python, hopefully with enough comments. Also: Is my following rough approach correct?
1. We need to know the exact geographical location of the mirror in terms of longitude and latitude. For simplicity, we assume that the mirror itself always has an obstacle-free view of the sun.
2. For a given time, we can use celestial mechanics to calculate the path of the sun across the sky.
3. Using the reflection law from geometrical optics, we can determine the orientation of the mirror, since we know the vector from the mirror position to the spot we want to direct light to.
That sounds simple enough, at least in theory. I read that for step 2, many use precalculated tables. Why? Is it numerically so challenging?
References
https://www.sunearthtools.com/dp/tools/pos_sun.php
Jon Henley: Rjukan sun: the Norwegian town that does it with mirrors
Renard Bleu: ICARUS: the Analog Heliostat
Raspberry Forum: 2 axis Heliostat
Open Source Sun Tracking Skylight
Sadly mostly offline, but everybody is linking there: http://www.cerebralmeltdown.com/
Answer: Assuming you know Sun Altitude/Elevation and Azimuth at a given location on Earth (you can calculate it using any astronomy library, such as suncalc.js), and the Altitude/Elevation and Azimuth of target w.r.t mirror, the mirror must point toward this direction:
mirrorAz = TargetAz + (SunAz - TargetAz) / 2
mirrorAlt = TargetAlt + (SunAlt - TargetAlt) / 2 | {
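In vector form (which avoids azimuth wrap-around issues), the mirror normal is the normalized bisector of the unit vectors toward the Sun and toward the target. A Python sketch, assuming a particular (altitude, azimuth) to direction-vector convention:

```python
import math

def unit(alt_deg, az_deg):
    """Unit direction vector from altitude/azimuth in degrees (ENU-style)."""
    a, z = math.radians(alt_deg), math.radians(az_deg)
    return (math.cos(a) * math.sin(z), math.cos(a) * math.cos(z), math.sin(a))

def mirror_normal(sun_alt, sun_az, tgt_alt, tgt_az):
    s, t = unit(sun_alt, sun_az), unit(tgt_alt, tgt_az)
    n = [si + ti for si, ti in zip(s, t)]        # bisector of the two rays
    mag = math.sqrt(sum(c * c for c in n))
    return tuple(c / mag for c in n)

# Sun 60 deg up due south, target on the southern horizon: the normal should
# point south at 30 deg altitude, matching the half-angle formulas above.
n = mirror_normal(60, 180, 0, 180)
print(round(math.degrees(math.asin(n[2])), 6))  # 30.0
```

For coplanar Sun, mirror and target this reduces exactly to the half-angle formulas given in the answer.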
"domain": "astronomy.stackexchange",
"id": 6863,
"tags": "the-sun, python, celestial-mechanics, mathematics"
} |
Siamese Network - Sigmoid function to compute similarity score | Question: I am referring to siamese neural networks introduced in this paper by G. Koch et al.
The siamese net computes 2 embeddings, then calculates the absolute value of the L1 distance, which would be a value in [0, +inf). Then the sigmoid activation function is applied to this non-negative input, so the output afterwards would be in [0.5, 1), right?
So, if two images are from the same class, your desired L1 distance should be close to 0, thus the sigmoid output should be close to 0.5, but the label given to it is 1 (same class); if two images are from different classes, your expected L1 distance should be very large, thus the sigmoid output should be close to 1, but the label given to it is 0 (diff. class).
How does the use of a sigmoid function in order to compute the similarity score (0 dissimilar, 1 similar) in a siamese neural network make sense here?
Answer: I would like to augment the answer of @Shubham Panchal, since I feel the real issue is still not made explicit.
1.) $\alpha$ could also contain negative entries so that the sigmoid function maps to $(0,1)$.
2.) @Stefan J, I think you are absolutely correct: two identical embedding vectors would be mapped to $0.5$ while two vectors that differ would be mapped to (depending on $\alpha$) values towards $1$ or $0$, which is not what we want!
@Shubham Panchal mentions the Dense layer and provides a link to an implementation, which is correct.
Now to make it very clear and short, in the paper they forgot to mention that there is a bias!
So it should be $p = \sigma(b+ \sum_{j}\alpha_{j}|h_{1,L-1}^{(j)} - h_{2,L-1}^{(j)}|)$.
Let $\hat{h} := \begin{pmatrix}\hat{h}_{1} & \ldots & \hat{h}_{n}\end{pmatrix}^{T}$, where $\hat{h}_{j}:= |h_{1,L-1}^{(j)} - h_{2,L-1}^{(j)}|$.
Then we know that $\hat{h}_{i} \geq 0$ for all $i$.
If you consider now the classification problem geometrically, then $\alpha$ defines a hyperplane that is used to separate vectors $\hat{h}$ close to the origin from vectors $\hat{h}$ further away from the origin. Note that for $\alpha = 1$, we have $\sum_{j}\alpha_{j}|h_{1,L-1}^{(j)} - h_{2,L-1}^{(j)}| = ||\hat{h}||_{1}$. Using $\alpha$ results thus in a weighting of the standard $1$-norm, $\sum_{j}\alpha_{j}|\hat{h}^{(j)}|$.
Already for $n=2$ you can see that you can have two classes where the hyperplane must not go through the origin. For example, let's say two images belong together, if $\hat{h}_{1} \leq c_{1}$ and $\hat{h}_{2} \leq c_{2}$. Now you can not separate those points from points with $\hat{h}_{1} > c_{1}$ or $\hat{h}_{2}> c_{2}$ using a hyperplane that contains the origin. Therefore, a bias is necessary.
Using the Dense layer in Tensorflow will use a bias by default, though, which is why the presented code is correct. | {
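A tiny numeric illustration of why the bias matters (all values are made up):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

h_same = [0.0, 0.0]    # |h1 - h2| for (nearly) identical embeddings
h_diff = [3.0, 4.0]    # large component-wise distances

alpha = [-1.0, -1.0]   # learned weights; negative entries are allowed
b = 4.0                # bias shifts the decision boundary off the origin

def p(h):
    return sigmoid(b + sum(a * x for a, x in zip(alpha, h)))

print(round(p(h_same), 3))  # 0.982 -> label 1 (same class)
print(round(p(h_diff), 3))  # 0.047 -> label 0 (different class)
```

With negative weights and a positive bias, identical embeddings map close to 1 and distant ones close to 0, resolving the apparent [0.5, 1) restriction in the question.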
"domain": "datascience.stackexchange",
"id": 8402,
"tags": "neural-network, siamese-networks"
} |
Does the constant Q transform have the linearity property in the transformed domain? | Question: With the FFT, I sometimes take advantage of the fact that I can pre-calculate a signal's Fourier transform and then add the noise in the spectral domain:
$$ \mathscr{F} (x[n] + h[n]) = \mathscr{F}(x[n]) + \mathscr{F}(h[n]) $$
This is useful when we want to discard the original signal but still want to add noise to it via its spectrum. If we know the spectrum of the noise beforehand, this is also computationally faster (just adding the two spectra together).
Does the same relation hold for constant Q transforms?
Answer: It’s equivalent to a bank of linear filters, so yes it is linearly additive because each component filter is linearly additive with respect to its inputs/outputs. | {
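A quick numeric sanity check (Python; a toy kernel bank stands in for the CQT's filters, since any transform computed as fixed inner products against kernels is additive):

```python
import cmath

N = 16
# Toy bank of fixed complex sinusoid kernels; a real CQT uses logarithmically
# spaced, windowed kernels, but additivity only needs linearity of each filter.
kernels = [[cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)]
           for k in (1, 2, 4)]

def transform(x):
    return [sum(kn * xn for kn, xn in zip(kern, x)) for kern in kernels]

x = [float(n % 3) for n in range(N)]
h = [float((7 * n) % 5) for n in range(N)]

lhs = transform([a + b for a, b in zip(x, h)])          # T(x + h)
rhs = [a + b for a, b in zip(transform(x), transform(h))]  # T(x) + T(h)
print(max(abs(a - b) for a, b in zip(lhs, rhs)) < 1e-9)  # True
```

So pre-computing the transform of the noise and adding it in the transformed domain works for the CQT just as for the DFT.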
"domain": "dsp.stackexchange",
"id": 8098,
"tags": "fourier-transform, linear-systems, transform"
} |
Three body force | Question: Reading on Wikipedia about Faddeev equations, I've come across the notion of a three-body force. The article which describes this type of force doesn't give me any understanding of what it is about.
My questions are: does every three-body problem have such a force? Or are, e.g., classical celestial three-body problems free of it? If it's not in every three-body problem, then what classes of problems do have it?
Also, does such a force violate the principle of superposition for forces?
Answer: Please do however remember the following line from the link you yourself have cited:
In general, if the behaviour of a system of more than two objects cannot be described by the two-body interactions between all possible pairs, as a first approximation, the deviation is mainly due to a three-body force.
Hence, it can be seen that, initially, when people were thinking of many-body problems, they encountered terms in the mathematical formulation which they later termed many-body forces. Incidentally, these were discovered in strong interactions and are a result of gluon mediation. On the celestial scale, hence, you would need a similar mediating phenomenon/theory to explain any physically valid three-body force.
Also related: N-body forces in classical mechanics | {
"domain": "physics.stackexchange",
"id": 11096,
"tags": "three-body-problem"
} |
Moment of inertia taken at different positions | Question:
What is the difference between (a) principal axes of inertia and principal moments of inertia taken at the center of mass, (b) moments of inertia taken at the center of mass and aligned with the output coordinate system, and (c) moments of inertia taken at the output coordinate system?
And why is the moment of inertia taken at the center of mass and aligned with the output coordinate system the one used in a URDF in ROS?
Principal axes of inertia and principal moments of inertia: ( grams * square millimeters )
Taken at the center of mass.
Ix = ( 0.89, 0.45, 0.00) Px = 177217.10
Iy = (-0.45, 0.89, -0.01) Py = 632385.17
Iz = ( 0.00, 0.00, 1.00) Pz = 780108.86
Moments of inertia: ( grams * square millimeters )
Taken at the center of mass and aligned with the output coordinate system.
Lxx = 268506.70 Lxy = 182259.67 Lxz = -4.63
Lyx = 182259.67 Lyy = 541099.71 Lyz = -851.31
Lzx = -4.63 Lzy = -851.31 Lzz = 780104.72
Moments of inertia: ( grams * square millimeters )
Taken at the output coordinate system.
Ixx = 364622.20 Ixy = -112574.17 Ixz = -2115.49
Iyx = -112574.17 Iyy = 1445552.39 Iyz = -163.21
Izx = -2115.49 Izy = -163.21 Izz = 1780663.05
Originally posted by dinesh on ROS Answers with karma: 932 on 2018-05-28
Post score: 0
Answer:
An inertia tensor matrix can be expressed in any coordinate system, and expresses the torque required to rotate the object about the origin of that coordinate system. All inertia tensors are diagonalizable, i.e. a rotation exists that will bring all their off-diagonal values to zero; the tensor is then expressed in its principal axes. The first section in your post above is this rotation matrix and a vector of the principal moments; multiplying these together reproduces the inertia tensor matrix.
I'd assume the different forms of the inertia tensor shown above are used at different times to simplify the equations of motion. They can all be calculated from each other if the relevant transformations are known, but it is simpler to have the values to hand.
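Since the three forms describe the same rigid body, rotation-invariant quantities must agree between them. Checking the trace and determinant of the COM-aligned tensor from the post against the principal moments (pure Python; values copied from above):

```python
# COM-aligned inertia tensor L and principal moments P, as given in the post.
L = [[268506.70, 182259.67, -4.63],
     [182259.67, 541099.71, -851.31],
     [-4.63, -851.31, 780104.72]]
P = [177217.10, 632385.17, 780108.86]   # Px, Py, Pz

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

trace_L = L[0][0] + L[1][1] + L[2][2]
print(abs(trace_L - sum(P)) < 1e-3)                          # traces agree
print(abs(det3(L) - P[0] * P[1] * P[2]) < 1e-3 * det3(L))    # determinants agree
```

Both invariants match (to rounding), confirming that the principal moments are just the eigenvalues of the COM-aligned tensor. The tensor at the output coordinate system differs from these by the parallel-axis theorem, which needs the mass and COM offset as well.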
Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-05-28
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 30914,
"tags": "urdf, xacro, ros-kinetic"
} |
What is Dissipative Control? | Question: I am reading an article that says:
stabilize the multi-vehicle system to one of its local minima via dissipative control
And other that deals with dissipative system:
(PID) controllers is designed to make the closed-loop linear system asymptotically stable and strictly quadratic dissipative
Question: What exactly is dissipative control or quadratic dissipative?
Answer: The intuitive idea is that a dissipative system cannot store more energy than what was initially stored plus what is supplied during an experiment, which is schematically depicted below.
This figure is adapted from: http://www.eeci-institute.eu/pdf/M012/lec2.pdf
So we write that a system $\dot{x} = f(x,u)$, $y = g(x,u)$ is dissipative with respect to the supply rate $s(u,y)$ if there exists a storage function $V:\mathbb{R}^n\to\mathbb{R}$ such that the dissipation inequality
$$V\big(x(t_1)\big) \leq V\big(x(t_0)\big) + \int_{t_0}^{t_1} s\big(u(t),y(t)\big) \;\mathrm{d}t$$
holds for all system trajectories and for all $t_0 < t_1$.
We call it quadratic dissipative if the storage function is a quadratic function, e.g. $V(x) = x^\top P x$.
Thus dissipative control is a controller such that the closed-loop system is dissipative with respect to the in- and output of the closed loop system. | {
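As a concrete check, take the scalar system $\dot{x} = -x + u$, $y = x$ (an RC circuit) with quadratic storage $V(x) = \tfrac{1}{2}x^2$ and supply rate $s(u,y) = uy$. Then $\dot V = -x^2 + uy \leq s(u,y)$, so the system is quadratically dissipative; a simple Euler simulation confirms the dissipation inequality (Python, illustrative):

```python
# Euler-simulate x' = -x + u, y = x, accumulating the supplied "energy"
# (integral of u*y); then V(x1) <= V(x0) + supplied must hold.
dt = 1e-4
x = 0.5
V0 = 0.5 * x * x
supplied = 0.0
for i in range(100_000):
    u = 1.0 if i < 50_000 else 0.0   # arbitrary input profile
    supplied += u * x * dt           # supply rate s(u, y) = u * y
    x += (-x + u) * dt
V1 = 0.5 * x * x
print(V1 <= V0 + supplied)  # True
```

The slack in the inequality is the energy dissipated in the "resistor", the $-x^2$ term.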
"domain": "engineering.stackexchange",
"id": 3323,
"tags": "control-engineering, pid-control"
} |
Configuration corresponding to lowest potential energy | Question:
Figure shows a small magnetised needle P placed at a point O. The arrow shows the direction of its magnetic moment. The other arrows show different positions (and orientations of the magnetic moment) of another identical magnetised needle.
I am asked to find the configuration corresponding to the lowest potential energy among all the configurations shown.
This is as far as I could get:
Since the direction of the magnetic moment of the needle placed at d is visible, I imagine a small magnet at that place having its north pole at the head of the arrow; with that I get an idea of the magnetic field lines.
Now I know that no torque is applied when the external magnetic field lines are along the direction of the magnetic moment. Therefore, the magnetised needle placed at Q3 or Q6 is in stable equilibrium, but the answer in my book says that placing the second magnetic needle at Q6 gives the lowest potential energy. Please explain where I am going wrong.
Answer: Given a magnetic moment, you can roughly imagine the current generating it as a circular current loop with its axis parallel to the moment and the right-hand rule determining the flow of current. This in turn gives you some idea of the magnetic field.
The potential energy is $-\vec{\mu}\cdot\vec{B}$. This achieves a minimum when the dot product is at a positive maximum, i.e. when the field and the moment are aligned. You get zero potential if they are perpendicular and a maximum when they point in exactly opposite directions.
The moments are parallel to the field only at Q3 and Q6. Wherever the field is stronger between the two will be the minimum energy. That should be Q6 for a realistic dipole since it's closer to either current source. I'd have to crunch the numbers to verify that for an ideal dipole. | {
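Crunching rough numbers for an ideal dipole (Python, illustrative; geometry assumed, since the figure is not reproduced here): at the same distance $r$, the on-axis field of a dipole is twice the equatorial field, so a second moment aligned with the local field has twice as negative an energy on the axis:

```python
# U = -mu * B for an aligned moment; k = mu0/(4*pi), all scales set to 1.
k, m, mu, r = 1.0, 1.0, 1.0, 1.0
B_axial = 2 * k * m / r**3       # field magnitude on the dipole axis
B_equatorial = k * m / r**3      # field magnitude in the equatorial plane
U_axial = -mu * B_axial
U_equatorial = -mu * B_equatorial
print(U_axial, U_equatorial)     # -2.0 -1.0 -> the axial position wins
```

So of two aligned positions, the one in the stronger field (here the axial one, and in the problem the closer position Q6) has the lower potential energy.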
"domain": "physics.stackexchange",
"id": 83747,
"tags": "homework-and-exercises, magnetic-fields, potential-energy, magnetic-moment"
} |
Menu driven program to represent polynomials as a data structure using arrays | Question:
Write a menu-driven program to represent polynomials as a data structure using arrays, and write functions to add, subtract and multiply two polynomials; multiply a polynomial with a constant; find whether a polynomial is a “zero” polynomial; and return the degree of the polynomial. Assume that a new polynomial is created after each operation.
I have used two arrays: one for storing the terms of all the polynomials and another to store the beginning and ending indices of each polynomial.
Only the non-zero terms of the polynomials are stored, except for the zero polynomial, which I have represented as a single term having coefficient zero and exponent -1. The terms are stored in decreasing order of their exponents.
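For illustration only, the layout can be pictured like this (a Python sketch of the representation; the actual implementation below is in C):

```python
# All terms live in one flat array; each polynomial is a (start, end) index
# pair into it. Here: 3x^2 + 1 followed by the zero polynomial (coef 0, exp -1).
terms = [(3.0, 2), (1.0, 0),   # polynomial 0: 3x^2 + 1
         (0.0, -1)]            # polynomial 1: the zero polynomial
polys = [(0, 1), (2, 2)]       # (start, end) indices into terms

start, end = polys[0]
degree = terms[start][1]       # terms are sorted, so the first has the max exponent
print(degree)  # 2
```

This is why `get_deg` below can simply read the exponent of the term at `start`.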
polynomial.h
#ifndef POLYNOMIAL_H_
#define POLYNOMIAL_H_
// all polynomials are stored in one array
// holds the current available index
extern int avail;
// reads a polynomial from the user and stores it
//
// Terms will be entered in decreasing order of exponents.
// Each term will be entered as the coefficient and the exponent respectively, separated by a space.
// Zero polynomial will consist of one term with coefficient 0 and exponent -1.
// 0 0 will be entered after the last term to terminate the input.
void read_poly();
// prints the polynomial stored at poly_idx
void print_poly(int poly_idx);
// returns whether the polynomial stored at poly_idx is a zero polynomial or not
int is_zero(int poly_idx);
// returns the degree of a polynomial stored at poly_idx
int get_deg(int poly_idx);
// multiplies the polynomial stored at poly_idx and stores the result
void mult_poly_const(int poly_idx, double k);
// adds the polynomials stored at poly1_idx and poly2_idx and stores the result
void add_poly(int poly1_idx, int poly2_idx);
// subtracts the polynomial stored at poly2_idx from that stored at poly1_idx
// and stores the result
void sub_poly(int poly1_idx, int poly2_idx);
// multiplies the polynomials stored at poly1_idx and poly2_idx and stores the result
void mult_poly(int poly1_idx, int poly2_idx);
#endif
polynomial.c
#include <stdio.h>
#include <limits.h>
#include <math.h>
#include "polynomial.h"
#define MAX_NO 100
#define MAX_TERMS 10000
#define EPS 0.0000005
struct term {
double coef;
int expon;
};
// For each polynomial, terms are stored in decreasing order of their exponents.
// All terms have nonzero coefficients, except for a zero polynomial
// which has a single term with coefficient zero and exponent -1.
// Stores the terms of all the polynomials
struct term terms[MAX_TERMS];
struct polynomial {
// holds the starting index of the polynomial in the array terms
int start;
// holds the ending index of the polynomial in the array terms
int end;
};
// Stores all the polynomials
struct polynomial polynomials[MAX_NO];
int avail = 0;
void read_poly()
{
if (avail == 0) {
polynomials[avail].start = 0;
} else {
polynomials[avail].start = polynomials[avail - 1].end + 1;
}
double coef;
int expon;
int i = polynomials[avail].start;
while (1) {
scanf("%lf%d", &coef, &expon);
// To terminate the input
if (coef == 0 && expon == 0) {
break;
}
terms[i].coef = coef;
terms[i].expon = expon;
i++;
}
polynomials[avail].end = i - 1;
avail++;
}
void print_poly(int poly_idx)
{
for (int i = polynomials[poly_idx].start; i <= polynomials[poly_idx].end; i++) {
if (i != polynomials[poly_idx].start) {
printf("+ ");
}
if (terms[i].expon != 0) {
printf("%lf * x ^ %d ", terms[i].coef, terms[i].expon);
} else {
// if the exponent is zero just the coefficient is printed
printf("%lf ", terms[i].coef);
}
}
printf("\n");
}
int is_zero(int poly_idx)
{
if (polynomials[poly_idx].start == polynomials[poly_idx].end && fabs(terms[polynomials[poly_idx].start].coef) <= EPS) {
return 1;
}
return 0;
}
int get_deg(int poly_idx)
{
if (is_zero(poly_idx)) {
return INT_MIN;
} else {
return terms[polynomials[poly_idx].start].expon;
}
}
// Enters a zero polynomial at the index idx
void enter_zero_poly(int idx)
{
if (idx == 0) {
polynomials[idx].start = polynomials[idx].end = 0;
} else {
polynomials[idx].start = polynomials[idx].end = polynomials[idx - 1].end + 1;
}
terms[polynomials[idx].start].coef = 0;
terms[polynomials[idx].start].expon = -1;
}
void mult_poly_const(int poly_idx, double k)
{
if (k == 0) {
enter_zero_poly(avail);
avail++;
return;
}
polynomials[avail].start = polynomials[avail - 1].end + 1;
int res_term_idx = polynomials[avail].start;
for (int i = polynomials[poly_idx].start; i <= polynomials[poly_idx].end; i++) {
terms[res_term_idx].coef = k * terms[i].coef;
terms[res_term_idx].expon = terms[i].expon;
res_term_idx++;
}
polynomials[avail].end = res_term_idx - 1;
avail++;
}
void add_poly(int poly1_idx, int poly2_idx)
{
polynomials[avail].start = polynomials[avail - 1].end + 1;
// stores the answer if the answer is not a zero polynomial
int i = polynomials[poly1_idx].start;
int j = polynomials[poly2_idx].start;
int k = polynomials[avail].start;
while (i <= polynomials[poly1_idx].end || j <= polynomials[poly2_idx].end) {
// if any term of either of the two polynomials is a zero term
// then it is not processed
// (needed if at least one of them is a zero polynomial)
if (i <= polynomials[poly1_idx].end && terms[i].coef == 0) {
i++;
continue;
}
if (j <= polynomials[poly2_idx].end && terms[j].coef == 0) {
j++;
continue;
}
if (i > polynomials[poly1_idx].end) {
terms[k] = terms[j];
j++;
k++;
} else if (j > polynomials[poly2_idx].end) {
terms[k] = terms[i];
i++;
k++;
} else if (terms[i].expon > terms[j].expon) {
terms[k] = terms[i];
i++;
k++;
} else if (terms[i].expon < terms[j].expon) {
terms[k] = terms[j];
j++;
k++;
} else {
// the resulting term is stored only if it is not a zero term,
// since zero terms are not stored
if (terms[i].coef + terms[j].coef != 0) {
terms[k].expon = terms[i].expon;
terms[k].coef = terms[i].coef + terms[j].coef;
i++;
j++;
k++;
} else {
i++;
j++;
}
}
}
if (k == polynomials[avail].start) {
// If the answer is a zero polynomial
enter_zero_poly(avail);
} else {
polynomials[avail].end = k - 1;
}
avail++;
}
void sub_poly(int poly1_idx, int poly2_idx)
{
polynomials[avail].start = polynomials[avail - 1].end + 1;
// stores the answer if the answer is not a zero polynomial
int i = polynomials[poly1_idx].start;
int j = polynomials[poly2_idx].start;
int k = polynomials[avail].start;
while (i <= polynomials[poly1_idx].end || j <= polynomials[poly2_idx].end) {
// if any term of either of the two polynomials is a zero term
// then it is not processed
// (needed if at least one of them is a zero polynomial)
if (i <= polynomials[poly1_idx].end && terms[i].coef == 0) {
i++;
continue;
}
if (j <= polynomials[poly2_idx].end && terms[j].coef == 0) {
j++;
continue;
}
if (i > polynomials[poly1_idx].end) {
terms[k].expon = terms[j].expon;
terms[k].coef = -terms[j].coef;
j++;
k++;
} else if (j > polynomials[poly2_idx].end) {
terms[k] = terms[i];
i++;
k++;
} else if (terms[i].expon > terms[j].expon) {
terms[k] = terms[i];
i++;
k++;
} else if (terms[i].expon < terms[j].expon) {
terms[k].expon = terms[j].expon;
terms[k].coef = -terms[j].coef;
j++;
k++;
} else {
// the resulting term is stored only if it is not a zero term,
// since zero terms are not stored
if (terms[i].coef - terms[j].coef != 0) {
terms[k].expon = terms[i].expon;
terms[k].coef = terms[i].coef - terms[j].coef;
i++;
j++;
k++;
} else {
i++;
j++;
}
}
}
if (k == polynomials[avail].start) {
// If the answer is a zero polynomial
enter_zero_poly(avail);
} else {
polynomials[avail].end = k - 1;
}
avail++;
}
void mult_poly(int poly1_idx, int poly2_idx)
{
if (is_zero(poly1_idx) || is_zero(poly2_idx)) {
enter_zero_poly(avail);
avail++;
return;
}
polynomials[avail].start = polynomials[avail - 1].end + 1;
// Storing the terms of the answer in sorted order of their exponents
//
// Some zero terms which arise in the answer will also be stored
// and will be removed later.
int new_term_idx = polynomials[avail].start; // holds the index where a new term will be stored
for (int i = polynomials[poly1_idx].start; i <= polynomials[poly1_idx].end; i++) {
for (int j = polynomials[poly2_idx].start; j <= polynomials[poly2_idx].end; j++) {
struct term new = {terms[i].coef * terms[j].coef, terms[i].expon + terms[j].expon};
int k = polynomials[avail].start;
// find whether a term whose exponent is not greater than the new term's exponent exists,
// and locate the first such term if it does
while (k < new_term_idx && new.expon < terms[k].expon) {
k++;
}
if (k == new_term_idx) {
// if such a term does not exist
terms[k] = new;
new_term_idx = k + 1;
} else if (new.expon == terms[k].expon) {
terms[k].coef += new.coef;
} else {
// moving all the terms from the found term to one place right
// to make place for the new term
for (int mv_idx = new_term_idx - 1; mv_idx >= k; mv_idx--) {
terms[mv_idx + 1] = terms[mv_idx];
}
// entering the new term
terms[k] = new;
new_term_idx++;
}
}
}
// scanning for zero terms and removing them
for (int i = polynomials[avail].start; i < new_term_idx; i++) {
if (terms[i].coef == 0) {
// moving all terms after the current term one place left
// to overwrite the zero term
for (int mv_idx = i + 1; mv_idx < new_term_idx; mv_idx++) {
terms[mv_idx - 1] = terms[mv_idx];
}
new_term_idx--;
i--; // stay at this index: the term shifted in may also be zero
}
}
polynomials[avail].end = new_term_idx - 1;
avail++;
}
main.c
#include <stdio.h>
#include <limits.h>
#include "polynomial.h"
int main()
{
printf("\nNew polynomials are created for all options except 2, 3, 4.\n");
printf("They will be stored at the current index.\n\n");
while (1) {
// printing the options
printf("Current index = %d\n", avail);
printf("1. Enter a polynomial\n");
printf("2. Print a polynomial\n");
printf("3. Find whether a polynomial is a zero polynomial\n");
printf("4. Find the degree of a polynomial\n");
printf("5. Multiply a polynomial with a constant\n");
printf("6. Add two polynomials\n");
printf("7. Subtract two polynomials\n");
printf("8. Multiply two polynomials\n");
printf("Enter option(EOF to exit): ");
// getting the option from the user
int op;
if (scanf("%d", &op) == EOF) { // to stop if EOF is received
break;
}
printf("\n");
// executing the actions according to the option entered
switch(op) {
case 1: {
printf("Terms should be entered in decreasing order of exponents.\n");
printf("Each term should be entered as the coefficient and the exponent respectively, separated by a space.\n");
printf("Zero polynomial should consist of one term with coefficient 0 and exponent -1.\n");
printf("Enter the polynomial with each non-zero term in a line(Enter 0 0 in the next line after the last term):\n");
read_poly();
break;
}
case 2: {
printf("Enter the polynomial index: ");
int poly_idx;
scanf("%d", &poly_idx);
print_poly(poly_idx);
break;
}
case 3: {
printf("Enter the polynomial index: ");
int poly_idx;
scanf("%d", &poly_idx);
if (is_zero(poly_idx)) {
printf("It is a zero polynomial.\n");
} else {
printf("It is not a zero polynomial.\n");
}
break;
}
case 4: {
printf("Enter the polynomial index: ");
int poly_idx;
scanf("%d", &poly_idx);
int deg = get_deg(poly_idx);
if (deg == INT_MIN) { // for zero polynomial, degree is given as -inf
printf("-inf\n");
} else {
printf("%d\n", deg);
}
break;
}
case 5: {
printf("Enter the polynomial index and the constant respectively: ");
int poly_idx;
double k;
scanf("%d%lf", &poly_idx, &k);
mult_poly_const(poly_idx, k);
break;
}
case 6: {
printf("Enter the polynomial indices: ");
int poly1_idx, poly2_idx;
scanf("%d%d", &poly1_idx, &poly2_idx);
add_poly(poly1_idx, poly2_idx);
break;
}
case 7: {
printf("Enter the polynomial indices: ");
int poly1_idx, poly2_idx;
scanf("%d%d", &poly1_idx, &poly2_idx);
sub_poly(poly1_idx, poly2_idx);
break;
}
case 8: {
printf("Enter the polynomial indices: ");
int poly1_idx, poly2_idx;
scanf("%d%d", &poly1_idx, &poly2_idx);
mult_poly(poly1_idx, poly2_idx);
break;
}
default: {
printf("Invalid option\n");
break;
}
}
printf("\n");
}
printf("\n");
return 0;
}
The program works. This is the first time I have written a program of this size. Also it is the first time I have written a program which is not contained in one file. Is there any way I can improve it? Also I feel the polynomial.c file is too large. Should I try to break it up into smaller files?
Answer:
Don't repeat yourself
add_poly and sub_poly are practically identical except in a single line
terms[k].coef = terms[i].coef + terms[j].coef; // add
terms[k].coef = terms[i].coef - terms[j].coef; // sub
which strongly suggests that these functions should be unified. One possible approach is to express sub_poly in terms of add_poly. In pseudocode:
sub_poly(first, second)
{
negated = mul_poly_const(second, -1);
add_poly(first, negated);
}
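The pseudocode above can also be expressed directly in C. Here is a self-contained sketch of the same idea; note it is illustrative only: it uses a dense coefficient array and invented names (`dpoly`, `add_scaled`) rather than your packed term list, but the sign-parameter trick carries over unchanged.

```c
#define MAX_DEG 16

/* Dense toy representation (hypothetical, not the question's packed
 * term list): c[i] holds the coefficient of x^i. */
struct dpoly { double c[MAX_DEG + 1]; };

/* out = a + s*b.  With s = +1 this is addition, with s = -1 it is
 * subtraction, so both operations share a single loop. */
static void add_scaled(const struct dpoly *a, const struct dpoly *b,
                       double s, struct dpoly *out)
{
    for (int i = 0; i <= MAX_DEG; i++)
        out->c[i] = a->c[i] + s * b->c[i];
}

static void add_dpoly(const struct dpoly *a, const struct dpoly *b,
                      struct dpoly *out)
{
    add_scaled(a, b, 1.0, out);
}

static void sub_dpoly(const struct dpoly *a, const struct dpoly *b,
                      struct dpoly *out)
{
    add_scaled(a, b, -1.0, out);
}
```

In your sparse representation the helper would instead take the sign factor and apply it to `terms[j].coef` in the merge loop, collapsing `add_poly` and `sub_poly` into one function.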
Avoid naked loops
Most loops implement an important algorithm, and hence deserve a name - especially if you feel obliged to comment them. In your case, the zero-term removal is an obvious candidate to become a function. Notice that having and using such a function would immensely simplify the add_poly logic: just blindly add the polynomials term-wise, and remove zero terms in a second pass.
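A sketch of such a function for a sparse term array (the struct and the tolerance value here are illustrative, not your exact types). As a bonus, using a read index plus a write index turns the shift-everything-left removal, which is quadratic in the worst case, into a single O(n) pass:

```c
#include <math.h>

#define ZEPS 1e-9  /* tolerance; the value is illustrative, tune to your data */

struct zterm { double coef; int expon; };

/* Compact the term array in place, dropping (numerically) zero terms;
 * returns the new term count.  i reads, k writes: each surviving term
 * is copied at most once, so the whole pass is O(n). */
static int remove_zero_terms(struct zterm *t, int n)
{
    int k = 0;
    for (int i = 0; i < n; i++)
        if (fabs(t[i].coef) > ZEPS)
            t[k++] = t[i];
    return k;
}
```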
Comparing doubles
Comparing doubles for equality
if (terms[i].coef + terms[j].coef != 0)
almost never works. Usually the application defines some small \$\varepsilon\$, and considers doubles to be equal if they fall within \$\varepsilon\$ of each other. Maybe that's what the EPS was supposed to be (I didn't find it used anywhere). | {
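A minimal sketch of such an epsilon comparison (the tolerance value is just an example and should be tuned to the magnitudes of your coefficients; for widely varying magnitudes a relative tolerance is more robust):

```c
#include <math.h>
#include <stdbool.h>

#define EPS 1e-9  /* absolute tolerance; illustrative value */

/* Treat two doubles as equal when they differ by less than EPS. */
static bool nearly_equal(double a, double b)
{
    return fabs(a - b) < EPS;
}
```

The classic demonstration is that `0.1 + 0.2 == 0.3` is false in IEEE double arithmetic, while `nearly_equal(0.1 + 0.2, 0.3)` is true.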
"domain": "codereview.stackexchange",
"id": 15435,
"tags": "c, array, mathematics"
} |
proving that a log gabor filter has 0 DC offset | Question: I have read somewhere online that the log gabor filter has an advantage over the gabor filter, in the sense that it has 0 DC component. How do you prove this property mathematically? Thanks in advance.
Answer: Log-Gabor filters are defined similarly to Gabor filters in the sense that their envelope consists of a Gaussian in Fourier space. This is advantageous because it makes them optimal with respect to the compromise between localization (in space) and detection (of the mean frequency).
The difference is that log-gabor (as their name implies) are defined in the log-space frequency domain:
This makes sense as this relevant feature (frequency) may for some applications be better optimized when the precision in frequency is proportional to the mean frequency.
This has the drawback that they have no simple analytical formulation in the space domain, unlike simple Gabor filters.
In perception, this is very advantageous: for instance, the human ear is in a large range sensitive to the relative increments of frequency (this is called the Fechner-Weber rule). This is also used in vision; see for instance the filters obtained in this computational neuroscience paper mimicking the receptive field of simple cells in the primary visual cortex:
To specifically answer your question, log-Gabor filters have indeed the property of having a zero DC offset:
To show this, consider that this offset is the value of the spectrum at the origin (at zero frequency). In log-frequencies, the DC corresponds to the limit of the log function at zero, that is, minus infinity. Necessarily, this implies that this value is zero for any finite bandwidth of the filter. In (simplified) mathematical terms, the envelope is:
$$G(f) \propto \exp\left( - \log(f)^2 / (2 B^2) \right)$$
so that for $B>0$, we have $G(0) \propto \lim_{f \rightarrow 0^+} \exp( - \log(f)^2 / (2 B^2)) = 0$.
This is relevant in perception. Indeed, remember that the DC component represents the average luminance value in the image, that is, some global configuration (daylight vs nightlight, some specific exposure setting in your camera/eyeball) more relevant to specific tasks such as keeping the circadian rhythm (= syncing with the day rhythm). When doing a perceptually relevant task, such as detecting the shape of an object or tracking the motion of an object, these features are independent of the global configuration: e.g. a cat at night has the same shape as a cat during the day!
"domain": "dsp.stackexchange",
"id": 1490,
"tags": "phase, local-features, gabor"
} |
Using latest (GitHub) version of Qiskit as Python library | Question: I just started working in Qiskit and want to use some of the new functions available to Qiskit on Github. Unfortunately, I don't know how to implement the latest GitHub version into my Anaconda distribution of Python.
Anaconda uses an older Qiskit library which I installed using pip. I would now like to git clone the latest version of Qiskit into that location. How can I do this? When I just type git clone https://github.com/Qiskit or something similar, it doesn't work.
Answer: Yes, installing Qiskit through pip install will install the latest stable version of Qiskit into your environment. So do note that the version you want to clone may not work with existing Jupyter notebooks and other tutorials, since that code has not yet been officially released.
That being said, cloning the repo is very easy. The URL that you were trying to clone, https://github.com/Qiskit, is the Qiskit Github user account page. That is why you were receiving an error. The repo that you are looking to clone would be qiskit-terra. Once you are there, you can follow this article on cloning github repositories.
Once you have cloned the repo, you can follow this guide on how to install Qiskit Terra from the source code you just cloned. | {
"domain": "quantumcomputing.stackexchange",
"id": 686,
"tags": "programming, qiskit"
} |
Lower bound of complexity of simple problem | Question: Consider the simple problem:
Given a list of objects L of length n, and an object O, determine if O is in L.
It is intuitive that there cannot exist an algorithm a with worst-case time complexity less than O(n) to correctly solve this problem. But without considering individual algorithms, how can one mathematically prove that no faster algorithm exists?
Answer: The informal argument is that you need at least $n$ steps to read all the input, assuming you can read at most one element of the input per "step". Of course, you might state that maybe there's an algorithm that doesn't need to read all the input. The argument against that is an adversarial argument which produces an input where such an algorithm will get the wrong answer.
Let $x$ be an input of length $n$ that doesn't contain $O$, and $A$ an algorithm that completes in less than $n$ steps correctly reporting that the input does not contain $O$. There is necessarily an element of the list that $A$ did not look at. Pick one and call that the $i$th element. Running $A$ on $x'$ which is identical to $x$ except that the $i$th element is $O$ then produces the incorrect result. Since $A$ is deterministic and all the inputs $A$ actually reads are the same, then $A$ must behave in the same manner and so it will incorrectly report that the list does not contain $O$. | {
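A small C illustration of the adversarial argument (all names here are invented for the sketch): a procedure that skips one position stands in for any deterministic algorithm taking fewer than $n$ steps, and flipping the skipped position changes the correct answer without changing anything the procedure reads.

```c
#include <stdbool.h>

#define N 8

/* Stand-in for any deterministic algorithm that inspects fewer than N
 * positions: this one never reads list[N-1]. */
static bool skips_last(const int *list, int target)
{
    for (int i = 0; i < N - 1; i++)   /* list[N-1] is never read */
        if (list[i] == target)
            return true;
    return false;
}
```

On an input $x$ of all zeros, the answer "not present" is correct. On $x'$, identical to $x$ except at the unread position (where the adversary plants the target), the procedure reads exactly the same $N-1$ values, so it must return the same answer, which is now wrong.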
"domain": "cs.stackexchange",
"id": 10591,
"tags": "complexity-theory, time-complexity, proof-techniques"
} |
Can random forest algorithm provide customer churn prediction probability at each customer instead at class level? | Question: I have customer training data set from telecom industry along with its test data set containing churn values 0 & 1 for each customer. I also have customer data set whose churn value is to be predicted ie 0 & 1. It is also required to get the churn prediction probability at individual customer level so that they can be arranged in descending order of the propensity to churn
For brevity, showing limited features
cust_train.xls
cust_id Account Length VMail Message Day Mins Eve Mins Night Mins Intl Mins CustServ Calls
cust_train_output.xls
cust_id churn (0/1)
I want to know if it is possible to get the churn prediction probability at individual customer level & how by random forest algorithm rather than class level provided by:
predict_proba(X) => Predict class probabilities for X.
Goal is to arrange the customer in descending order of the propensity to churn.
Alternatively, is this possible with a Logistic Regression Model?
Thanks
Answer:
I want to know if it is possible to get the churn prediction probability at individual customer level & how by random forest algorithm rather than class level provided by: predict_proba(X) => Predict class probabilities for X.
I think the predict_proba(X) is actually the correct method to accomplish your task. For each instance (person) the function will display the probability for each class label. So, if you know which class is churn (in the example below, churn is class 0, i.e. the first column), you just need to slice the results on that column. For example:
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from tabulate import tabulate
X, y = make_classification(n_samples=1000, n_features=50,
n_informative=2, n_redundant=0,
random_state=1, shuffle=False)
clf = RandomForestClassifier(n_estimators=100, max_depth=2,
random_state=0)
clf.fit(X, y)
train_class_probability = clf.predict_proba(X)
print(tabulate(train_class_probability[0:4], ['Churn', 'No Churn']))
NewData, y = make_classification(n_samples=10, n_features=50,
n_informative=2, n_redundant=0,
random_state=1000, shuffle=False)
class_prediction = clf.predict(NewData)
class_probability = clf.predict_proba(NewData)
print(tabulate(class_prediction.reshape(-1, 1), ['0 - Churn / 1 - No-Churn']))
print(tabulate(class_probability, ['Churn', 'No Churn']))
# Just print the first column, Churn
print(tabulate(class_probability[:, 0].reshape(-1, 1), ['Probability of Churn']))
Output will be:
Class Probabilities of the Training Data (First four instances)
Churn No Churn
-------- ----------
0.595298 0.404702
0.620975 0.379025
0.601251 0.398749
0.610663 0.389337
Class Prediction on New Data
0 - Churn / 1 - No-Churn
--------------------------
0
0
0
1
1
1
1
0
1
1
Class Probability Prediction on New Data
Churn No Churn
-------- ----------
0.553698 0.446302
0.602109 0.397891
0.587715 0.412285
0.423982 0.576018
0.419588 0.580412
0.419984 0.580016
0.369798 0.630202
0.572373 0.427627
0.414734 0.585266
0.40422 0.59578
Probability of Churn on New Data
Probability of Churn
----------------------
0.553698
0.602109
0.587715 <- new customer #3
0.423982 <- new customer #4
0.419588
0.419984
0.369798
0.572373
0.414734
0.40422
So, the probability that "new customer #3" will churn is ~59% and the probability that "new customer #4" will churn is ~42%.
Hopefully, I have understood your question.
HTH | {
"domain": "datascience.stackexchange",
"id": 4177,
"tags": "predictive-modeling, random-forest, machine-learning-model"
} |
Does $\delta^+(G)+\delta^-(G) \geq n$ imply strong connectivity? | Question: Denote by $\delta^+(G)$ the minimal out degree in $G$, and by $\delta^-(G)$ the minimal in-degree.
In a related question, I've mentioned the Ghouila-Houri extension of Dirac's theorem on Hamiltonian cycles, which suggests that if $\delta^+(G),\delta^-(G) \geq \frac{n}{2}$ then G is Hamiltonian.
In his comment, Saeed have commented on a different extension that seems stronger, except it requires the graph to be strongly connected.
The strong connectivity was proven redundant for the Ghouila-Houri theorem about 30 years after it was first published, and I was wondering if the same holds for the extension Saeed presented.
So the question is:
Who proved (can anyone find the reference) that $\delta^+(G)+\delta^-(G) \geq n$ implies $G$ is Hamiltonian, given that $G$ is strongly connected?
Is the strong connectivity redundant here as well, i.e. Does $\delta^+(G)+\delta^-(G) \geq n$ imply strong connectivity?
(Note that while the graph obviously has to be strongly connected for it to be Hamiltonian, I'm asking whether this condition is implied by the degree conditions).
Answer: The variation that I suggested was actually a slightly different variation of Woodall's theorem. Perhaps I saw it in Bang-Jensen and Gutin's book. At the time I wrote the comment I didn't check the book for correctness, so to be safe I wrote that the graph should be strongly connected. BTW, that statement holds because it can be interpreted as a special case of Woodall's theorem. In addition, it does not need the strong connectivity requirement.
This is Theorem 6.4.6 from Bang-Jensen and Gutin's book:
Let $D$ be a digraph of order $n\ge 2$. If $d^+(x)+d^-(y) \ge n$ for all pairs of vertices $x$ and $y$ such that there is no arc
from $x$ to $y$, then $D$ is hamiltonian.
That means the answer to the second part of your question is also Yes.
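As a sketch of why the degree condition alone already forces strong connectivity (my own argument, with $N^+$ and $N^-$ denoting out- and in-neighbourhoods); in fact the digraph has diameter at most $2$:

```latex
\textbf{Claim.} If $\delta^+(G)+\delta^-(G)\ge n$, then $G$ is strongly connected.

\emph{Sketch.} Take any ordered pair of distinct vertices $x,y$ and suppose
there is no arc $x\to y$ (otherwise we are done). Since $G$ has no loops,
$N^+(x)\subseteq V\setminus\{x,y\}$ and $N^-(y)\subseteq V\setminus\{x,y\}$,
two subsets of a set of size $n-2$. But
\[
  |N^+(x)|+|N^-(y)| \;\ge\; \delta^+(G)+\delta^-(G) \;\ge\; n \;>\; n-2,
\]
so the two sets intersect in some vertex $z$, giving the path $x\to z\to y$.
Hence every vertex reaches every other in at most two steps.
```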
There was a doubt about whether $n$ is a tight bound or not; here I try to answer it. We cannot reduce the requirement of at least $n$ to some $k<n$: consider the following graph. $a,b,c$ form a bidirected triangle, and $e,d$ form a bidirected $K_2$. If a hamiltonian cycle starts at $e$, it cannot go to $d$ in the next move, because the only way out of $d$ is through $b$, but $b$ is the only way back to $e$. On the other hand, after $e$ the hamiltonian cycle cannot go to $c$, because then the only way back to $e,d$ is going directly to $d$ so as to use $b$ in the next moves, and again we are in the previous position. Also, from the picture it is clear that every vertex has in- and out-degree at least $2$, so the sum of every two arbitrary in/out degrees is at least $4 = 5-1 = n-1$. We can extend this sort of graph to arbitrary $n$.
P.S.1: Of course, the aforementioned theorem holds for simple digraphs, i.e., digraphs without loops or parallel arcs.
P.S.2: I don't have a good TeX tool right now, so the image is not good.
"domain": "cstheory.stackexchange",
"id": 2673,
"tags": "reference-request, graph-theory"
} |
Bird identification- Crane | Question: What species of crane is this?
This afternoon, I found this crane among a group of white cranes that were moving around a herd of buffaloes. I live in West Bengal, India, if it matters.
Is it an eastern cattle egret, Bubulcus ibis coromandus, in breeding plumage?
Answer: Yes, this is a cattle egret displaying its breeding plumage.
The cattle egret (Bubulcus ibis) is a cosmopolitan species of heron (family Ardeidae) found in the tropics, subtropics and warm temperate zones.
Specifically, this is the eastern subspecies, B. ibis coromandus.
B. ibis coromandus [source: Wikimedia Commons]
B. ibis ibis © Larry Thompson, 2007-2015
The cattle egret is the only member of the monotypic genus Bubulcus, although some authorities regard two of its subspecies as full species, the western cattle egret and the eastern cattle egret.
The eastern subspecies B. ibis coromandus, described by Pieter Boddaert in 1783, breeds in Asia and Australasia, and the western subspecies (B. ibis ibis) occupies the rest of the species range, including the Americas.
The eastern subspecies (B. ibis coromandus) differs from the nominate subspecies in breeding plumage in that the buff color on its head extends to the cheeks and throat, and the plumes are more golden in color.
B. ibis usually feeds in seasonally inundated grasslands, pastures, farmlands, wetlands and rice paddies. Their name comes from their tendency to often accompany cattle or other large mammals in these areas, where they catch insects attracted to and small vertebrates disturbed by these animals.
Originally native to parts of Asia, Africa and Europe, B. ibis has undergone a rapid expansion in its distribution and successfully colonized much of the rest of the world in the last century.
[Source: Discover Life]
Major Source: Wikipedia | {
"domain": "biology.stackexchange",
"id": 6759,
"tags": "species-identification, zoology, ornithology"
} |
Equation of light beam through a dielectric block | Question: Suppose we have a light beam $E^{(in)}(x,y,z=0)$.
Using the Fresnel-Huygens propagator, the light field travelling through air at any $z$ value from the transmitter is given by:
$$E_r (x,y,z)=\int d\xi\, d\eta\, E_r (x-\xi,y-\eta,0) \exp{\left(\frac{i\pi}{\lambda z} (\xi^2+\eta^2)\right)}$$
What would the equation of the light be if the light passes through a dielectric block (wall) and exits it? Is the equation of the light a distance $d$ away from the block the same as the one above?
Answer: Small caveat: your propagator is valid only for monochromatic waves. Furthermore, you are using the paraxial approximation with the optic axis along $z$. Btw, you forgot the normalization factor, which is why your formula is not dimensionally correct. Check out for example Thorne and Blandford's Modern Classical Physics (eq. 8.28).
You didn't specify the setup, but I'll assume the dielectric wall to be perpendicular to the optic axis. Up to a translation, I'll assume that it extends between the planes $z=\pm d_w/2$, with $d_w$ the thickness of the wall. In a similar fashion, I'm guessing you are looking for the propagator from the plane $z=z_i<-d_w/2$ to $z=z_f>d_w/2$. I'll assume that the wall is made of a linear isotropic dielectric.
The propagation of the field can be decomposed into $5$ steps:
Going from $z=z_i$ to $z=-d_w/2$ through the air
Crossing the air-wall boundary from $z=-d_w/2^-$ to $z=-d_w/2^+$
Going from $z=-d_w/2$ to $z=d_w/2$ through the wall
Crossing the wall-air boundary from $z=d_w/2^-$ to $z=d_w/2^+$
Going from $z=d_w/2$ to $z=z_f$ through the air
And you just need to convolve the corresponding propagators. Actually, steps $2.$ and $4.$ perfectly cancel, and you can regroup $1.$ and $5.$ to get an effective propagation of distance $d_a=z_f-z_i-d_w$ through the air. Finally, the propagation through the dielectric is the same as through the air, up to a change of wavelength $\lambda'$.
The total propagator is therefore a convolution of two Gaussians which is still Gaussian. The prefactor is simply the inverse of the sums of the inverse of the prefactors.
In the case of a lossless dielectric, $\lambda'$ is real and can be written $\lambda'=\lambda/n$ with $n$ the relative refractive index of the wall with respect to the air. The propagator is then exactly the same as your formula, only you replace the real distance $D=d_a+d_w$ by an effective distance:
$$
\tilde D =d_a+\frac{d_w}{n}
$$
(Note that this is not the optical path length). However, since you are talking of a wall, there probably is some opacity. The result however is unchanged: you need only give $n$ an imaginary part and separate the imaginary Gaussian kernel (like the one you had) from the real one (like a heat kernel, representing attenuation).
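As a quick one-dimensional check of this effective distance (my own sketch, dropping all normalization factors; $d_a$ denotes the total distance in air and $d_w$ the wall thickness): the paraxial kernel is a quadratic-phase chirp, and the convolution of two chirps is again a chirp.

```latex
% Paraxial kernel over a distance d in a medium of wavelength \lambda:
K_d(\xi) \;\propto\; e^{\,i\pi\xi^2/(\lambda d)},
\qquad
\bigl(e^{\,ia\xi^2} * e^{\,ib\xi^2}\bigr)(\xi) \;\propto\; e^{\,i\frac{ab}{a+b}\xi^2}.
% Air: a = \pi/(\lambda d_a).  Dielectric, \lambda' = \lambda/n: b = n\pi/(\lambda d_w).
\frac{ab}{a+b}
  \;=\; \frac{\pi}{\lambda}\,\frac{1}{d_a + d_w/n}
\quad\Longrightarrow\quad
\tilde D \;=\; d_a + \frac{d_w}{n}.
```

(The chirp-convolution identity follows by completing the square in the convolution integral; the Gaussian integral only contributes a constant prefactor.)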
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 92916,
"tags": "electromagnetism, electromagnetic-radiation, visible-light, dielectric"
} |
Is BF3 an electrophile? | Question: Boron has an empty $2p$ orbital.
But is it an electrophile?
I know it is a Lewis acid.
Answer: Lewis acids are by definition electrophiles. Electrophiles love electrons, or negative charge. Boron has an empty 2p orbital and there exists a strong partial positive charge on the boron due to the extremely electronegative fluorine atoms covalently bound to boron. This strong partial positive character, coupled with a vacant orbital, makes BF3 a potent Lewis acid and thus an electrophile.
That being said, note that acidity and basicity are both thermodynamic properties, while electrophilicity and nucleophilicity are both kinetic properties. That is, while you can label a Lewis acid an electrophile, it would be incorrect to say, for example, that because a certain Lewis acid is a "strong" Lewis acid, then it must be a strong electrophile.
Formation Constants; Lewis Acidity | {
"domain": "chemistry.stackexchange",
"id": 1671,
"tags": "acid-base, orbitals"
} |
openni_launch for kinect xbox 360 -1414 | Question:
So... I don't know what's going on.
I'm using a Kinect for Xbox 360, model 1414.
I've succeeded in running the NITE example (nite/NITE-Bin-Dev-Linux-x64-v1.5.2.23/Samples/Bin), but when I tried connecting it with ROS via roslaunch openni_launch openni.launch, this showed up:
process[camera_base_link2-23]: started with pid [15373]
process[camera_base_link3-24]: started with pid [15374]
terminate called after throwing an instance of 'openni_wrapper::OpenNIException'
what(): unsigned int openni_wrapper::OpenNIDriver::updateDeviceList() @ /home/adelleodel/ros/src/openni_camera/src/openni_driver.cpp @ 125 : enumerating image nodes failed. Reason: One or more of the following nodes could not be enumerated:
Device: PrimeSense/SensorV2/5.1.0.41: The device is not connected!
Image: PrimeSense/SensorKinect/5.1.2.1: Failed to set USB interface!
Device: PrimeSense/SensorV2/5.1.0.41: The device is not connected!
[FATAL] [1454896895.994153086]: Service call failed!
[FATAL] [1454896895.994256906]: Service call failed!
[FATAL] [1454896895.994619240]: Service call failed!
There are several drivers for PrimeSense... and I just downloaded them all. Was I wrong to install all of them?
It seems that my driver doesn't fit, but then I wonder why I can run the example.
Originally posted by adelleodel on ROS Answers with karma: 76 on 2016-02-07
Post score: 0
Answer:
It appears your device is not connected. Have you given permissions for the USB port being used for the kinect?
Originally posted by stevemacenski with karma: 8272 on 2016-02-13
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by adelleodel on 2016-02-14:
how to do it?
Comment by stevemacenski on 2016-02-14:
Try chmod 777 /dev/ttyUSB0, this should give Ubuntu read and write permissions on all accounts to access USB ports. It expires after some time / terminal closing or something, so if it works now and doesn't later, just try it again | {
"domain": "robotics.stackexchange",
"id": 23675,
"tags": "openni-launch"
} |
Bug in ros::NodeHandle::searchParam()? | Question:
Hi all,
according to my understanding of searchParam(), the second if condition in the following code snippet should never be true (probably except for some rare race condition cases which shouldn't happen in my case):
std::string full_param_name, param_name = "a/b";
if (nh.searchParam(param_name, full_param_name)) {
if (!nh.getParam(full_param_name, result)) {
// this should never be reached (but it is reached) ... ?!
}
}
In particular, see also this ROS tutorial (wiki).
However, in my case I have /a/b on the parameter server (note the leading / -> global) and param_name is a/b (note the / in between: is this allowed?). searchParam() returns true in that case but getParam() returns false. The ros::NodeHandle nh is initialized with ~ (and, thus, points to the node's private namespace). Interestingly, full_param_name is (erroneously) set to $fully_qualified_namespace_of_my_node/a/b (which doesn't exist).
Is this a bug or am I doing something wrong? Even if "nested keys" (= the / in between a and b) are not allowed (are they?), I would assume that searchParam() returns false ... ?
Thanks!
EDIT: here is a minimum working example that demonstrates the issue:
#include <ros/ros.h>
int main(int argc, char **argv)
{
ros::init(argc, argv, "test_node");
std::string full_param_name, param_name = "a/b", result;
ros::NodeHandle nh("~");
ros::spinOnce(); // (probably not needed)
if (nh.searchParam(param_name, full_param_name)) {
if (!nh.getParam(full_param_name, result)) {
ROS_FATAL("THIS SHOULD NEVER BE PRINTED ... ?! (%s)", full_param_name.c_str());
} else {
ROS_INFO("Okay, parameter found: '%s' = '%s'.", param_name.c_str(), result.c_str());
}
} else {
ROS_INFO("Okay, parameter '%s' not found on the ROS parameter server.", param_name.c_str());
}
ros::shutdown(); // (probably not needed)
return 0;
}
Output:
[FATAL] [1507900407.659972647] [/my_namespace/my_node | 7f61d64c4780]: THIS SHOULD NEVER BE PRINTED ... ?! (/my_namespace/my_node/a/b)
Note that rosparam list | grep "a/b" returns /a/b.
EDIT2: since I am successfully using "nested keys" in my code (at other locations/contexts), I assume that they are allowed.
EDIT3 (@Dirk Thomas): thanks for your assistance / thoughts! I think that I've tracked down the problem correctly now. Your link was somewhat helpful. For instance, assume the following is stored on the parameter server
/a/b: 42
/a/c: 'cannot be found with searchParam using private ns'
/node_namespace/node_name/a/b: 41
and I ask for searchParam(nh_private, "a/b", x) then x=/node_namespace/node_name/a/b (with value 41) and true is returned (all fine). However, if I ask for searchParam(nh_private, "a/c", x) then x=/node_namespace/node_name/a/c and true (sic!) is returned but that parameter does not exist (and I don't understand why the docblock states "[...] (yet)"). IMHO, this behavior of searchParam() makes absolutely no sense and I would consider this to be a bug.
More specifically, my concerns are:
searchParam() returns true although nothing was found (/node_namespace/node_name/a/c).
searchParam() is said "to search up the tree for a parameter" but it returns an invalid key although there is a (here: global) parameter (/a/c) matching the given partial key (a/c).
What do you think? ;-)
Originally posted by CodeFinder on ROS Answers with karma: 100 on 2017-10-10
Post score: 1
Original comments
Comment by gvdhoorn on 2017-10-13:
If you feel this is a bug (and you already have a MWE), then I believe it would be better to report this over at ros/ros_comm/issues.
Edit: ah. I see you already did that: ros/ros_comm/issues/1187.
Comment by CodeFinder on 2017-10-13:
Yes. ;-) Thanks for referencing the issue, forgot that.
Comment by Dirk Thomas on 2017-10-17:
Well, the actual behavior seems to match what is written in the docblock (https://github.com/ros/ros_comm/blob/38799662b45a42e118762865fac854317b0924a0/tools/rosmaster/src/rosmaster/paramserver.py#L82-L103). So I assume the behavior is intentionally this way and I don't think this is a bug.
Comment by CodeFinder on 2017-10-18:
Hm, I disagree since you are referring to the Python docs but I am using roscpp where it is stated that it "returns true if the parameter was found, false otherwise". Clearly, this isn't true. Anyway, I'll write my own version of searchParam().
Comment by Dirk Thomas on 2017-10-18:
If you think the implementation in both languages is different from each other then please consider to provide a PR which makes them consistent rather than "writing your own".
Comment by CodeFinder on 2017-10-18:
At least the docs are not consistent, don't you think so?
Comment by Dirk Thomas on 2017-10-18:
I would interpret it that the Python doc is just more precise where the C++ doc is not clear about the (partial) match.
Comment by nckswt on 2018-03-06:
Just noting that I've recently come across the same problem: ros::param::search returns true and gives a result path to a parameter that doesn't exist. For example, if I have a param at /a/b/c/d, and I call ros::param::search("a","c/d", res), the return value is true and res is a/c/d.
Comment by nckswt on 2018-03-06:
From what I can tell, this isn't an issue with either RosCpp or RosPy client library, but a problem with the payload returned from the XmlRpc call to the parameter server. I haven't been able to find the "searchParam" remote API method implementation from the parameter server, though.
Answer:
You might want to print the value of full_param_name after getting it with searchParam before calling getParam.
The API docs describe the second argument of searchParam with:
[out] result the found value (if any)
Update:
The docblock is kind of misleading. The result is actually a parameter key and not a value.
When I run your code snippet as-is I get:
no parameter set: Okay, parameter 'a/b' not found on the ROS parameter server.
after rosparam set /a/b global_value: Okay, parameter found: 'a/b' = 'global_value'.
after rosparam set /test_node/a/b node_value: Okay, parameter found: 'a/b' = 'node_value'.
After calling rosparam delete /test_node/a/b I do get THIS SHOULD NEVER BE PRINTED ... ?! (/test_node/a/b).
Maybe you can post the exact steps you are performing when running into the problem.
May this docblock clarifies the behavior of searchParam which seems to be the rational for the case above: https://github.com/ros/ros_comm/blob/38799662b45a42e118762865fac854317b0924a0/tools/rosmaster/src/rosmaster/paramserver.py#L82-L103
Originally posted by Dirk Thomas with karma: 16276 on 2017-10-13
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by CodeFinder on 2017-10-14:
Thank you very much! That did the trick, sorry for the noise!
Minor follow-up question: it is possible to use searchParam() with types other than std::string? There just seems to be a string version of that method.
Comment by CodeFinder on 2017-10-16:
@Dirk Thomas: For some reason, I am unable to reproduce this on my primary dev machine: if I print the value of full_param_name, I get 'a/b' = '/a/b' (the value is 'test_value'). Also, the ROS wiki exactly states that searchParam() + getParam() is the way to go. Is that an error in the wiki?
Comment by Dirk Thomas on 2017-10-16:
With what other parameter type do you want to call searchParam()? char* should work too using implicit conversion.
Comment by CodeFinder on 2017-10-17:
To your last comment: before you edited your answer, you've written that searchParam() returns the actual ROS parameter server value in its (last) parameter result which obviously isn't the case (see also wiki link in my initial post). My comment just referred to that statement and ...
Comment by CodeFinder on 2017-10-17:
... doesn't make any sense after your edit/update. With respect to your edit, I've updated my initial post. | {
"domain": "robotics.stackexchange",
"id": 29048,
"tags": "c++, parameter, getparam, server, nodehandle"
} |
"Bad" behavior of propagator in $O(N)$ model | Question: In Polyakov's book about gauge fields & strings, in the chapter devoted to the non-linear sigma model he emphasizes a problem with the large $N$ expansion of this model. The Lagrangian of the 2D model is
$$\frac{1}{2g^2}(\partial_{\mu}{\bf n})^2$$
with constraint ${\bf n}^2=1$. It is possible to add a term into the action which explicitly contains the constraint by introducing an additional field $\lambda(x)$. Then, one can integrate out the fields ${\bf n}$ (the path integral over these fields is Gaussian) and obtain an effective action in terms of the field $\lambda$. Now it is time to take the limit $N\rightarrow\infty$. It is possible to find the saddle point of this effective action and see that it corresponds to $\lambda=m^2$, $m>0$ (up to sign or $i$). Then, we can investigate fluctuations near the saddle point as $m^2+\alpha(x)$, where $\alpha$ is the fluctuation.
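Schematically (normalizations vary between references, so this is the standard large-$N$ bookkeeping rather than a verbatim quote from Polyakov), integrating out the $N$ components of ${\bf n}$ produces an effective action for $\lambda$, whose saddle-point condition is the gap equation that fixes $m^2$:

$$S_{\text{eff}}[\lambda] = \frac{N}{2}\,\mathrm{Tr}\ln\left(-\partial^2 + \lambda\right) - \frac{1}{2g^2}\int d^2x\,\lambda(x), \qquad \frac{1}{g^2} = N\int^{\Lambda}\frac{d^2q}{(2\pi)^2}\,\frac{1}{q^2+m^2} = \frac{N}{4\pi}\ln\frac{\Lambda^2}{m^2},$$

so the mass $m^2 = \Lambda^2 e^{-4\pi/(Ng^2)}$ is generated dynamically.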
After all calculations, we can compute all the correlation functions of the initial model in terms of the effective action. Near the end of this chapter, he says that the propagator of the $\alpha(x)$ field in the effective action has bad behavior,
$$D(q^2)\rightarrow q^2/\ln(q^2/m^2), \quad q^2\rightarrow\infty,$$
and then says that if we do not impose the constraint $n_in^i=1$ everything will be OK. Instead of the constraint ${\bf n}^2=1$, it is possible to introduce a quartic term into the initial action and avoid the "bad behavior" problem.
Why does the constraint $n_in^i=1$ create a problem with the propagator behaviour at large momenta? Can somebody clarify this point?
Answer: If you do not impose the constraint $\boldsymbol n^2=1$ the system is linear, i.e., free, which means that the propagator is just
$$
D(q^2)=\frac{1}{q^2+m^2}
$$
which has a nice UV behaviour. Recall that non-linearities, i.e., interactions, come from the metric $g_{\mu\nu}(\boldsymbol n)$. If you do not impose the constraint, the manifold is flat, and so $g_{\mu\nu}(\boldsymbol n)=\delta_{\mu\nu}$, which means that the Lagrangian is just $L=\frac12 \boldsymbol n\cdot\partial^2\boldsymbol n$, which is gaussian. | {
"domain": "physics.stackexchange",
"id": 62950,
"tags": "quantum-field-theory, propagator, regularization, sigma-models"
} |
Is it possible to turn modular multiplication into an in-place operation? | Question: I began to work on the implementation of Shor's algorithm with a custom value for the modulo. Although some questions have already been asked about it here, I haven't managed to get a complete example, or at least a satisfying idea of what I should do regarding the U matrix performing the modular multiplication or exponentiation.
I implemented a circuit to perform the classical-quantum operation a * b % m, where a and m are classical and b is in a quantum register. It requires 2n+2 qubits, where n is the number of bits required to represent m. To operate, it applies a shift-and-add approach with a modular add operator, and at each step an external classical modulo is applied to the shifted value of a before the addition.
The problem is that this circuit performs the operation (b,0,0)->(b,a*b%m,0) (the last value being the ancilla qubits). However, I think here b can be seen as a dirty register, because when using Shor's algorithm, we would need to get rid of it to apply the multiplication several times. I guess the ideal operation would then be (b,0,0)->(a*b%m,0,0). This is probably in general impossible, because modular multiplication is not always reversible (e.g. x*4%8=0 can lead to x=2, x=4 or x=6).
Then my question is: is my circuit completely useless for building Shor's algorithm? If so, what should I do instead?
Answer: Okay so the paper epelaaez mentioned answers the question.
The answer it gives is : no, the modular multiplication is not useless. It is even very close to what is needed.
Having the operation $MultMod(a,mod)$, the plan is the following:
1. Perform (b,0,0)->(b,a*b%m,0) by applying $MultMod(a,mod)$ to b
2. Perform (b,a*b%m,0)->(a*b%m,b,0) with some SWAPs
3. Perform (a*b%m,b,0)->(a*b%m,0,0) by applying the adjoint of $MultMod(a^{-1},mod)$, where $a^{-1}$ is the modular inverse of a modulo mod, which can be efficiently computed classically. The adjoint simply changes the addition of $a*b\%mod$ to the second register into a subtraction. This way the second register becomes $b-(a*b\%mod)*a^{-1}\%mod = b-b = 0$.
About the remark that some modular multiplications are not reversible: I think it is a relevant point; however, the algorithm moves the problem to the classical part. Indeed, numbers whose modular multiplication is not reversible don't have a modular inverse, that is, they are not coprime with the modulus, and so step 3 can't be executed. As Shor's algorithm checks that a and N (the modulus) are coprime, the problem can't happen.
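The uncompute trick in step 3 can be sanity-checked purely classically (illustrative values; `pow(a, -1, mod)` requires Python 3.8+):

```python
# When gcd(a, mod) = 1, b -> a*b % mod is a bijection, and the classically
# computed inverse a^{-1} undoes it exactly -- which is why the adjoint
# MultMod(a^{-1}, mod) step can clean up the dirty register.
a, mod = 7, 15
a_inv = pow(a, -1, mod)  # modular inverse of a modulo mod

products = [(a * b) % mod for b in range(mod)]    # what the multiplier writes
restored = [(p * a_inv) % mod for p in products]  # what the adjoint step undoes
```

Since `restored` equals the original list of inputs, no information about b is left behind.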
"domain": "quantumcomputing.stackexchange",
"id": 3767,
"tags": "shors-algorithm"
} |
subscribe to topic of laser in standalone version of gazebo | Question:
Hi all,
I installed the standalone version of Gazebo and put a simulated pioneer3at with a laser scanner in the environment, and I can get the laser data in a shell with this command
gztopic echo /gazebo/default/pioneer3at/hokuyo/link/laser/scan
but I want to subscribe to this topic like this; can anyone help me?
Originally posted by Vahid on Gazebo Answers with karma: 91 on 2013-05-09
Post score: 0
Answer:
Follow the tutorial in the link and instead of subscribing to ~/world_stats, replace the string with topic name you posted.
Here is an example of subscribing to Gazebo laser topic.
Originally posted by iche033 with karma: 1018 on 2013-05-09
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Vahid on 2013-05-10:
Thank you so much. I modified the listener to get the laser data; I put it here.
"domain": "robotics.stackexchange",
"id": 3277,
"tags": "gazebo"
} |
Is there an existing model for a ball on beam system? | Question:
Hello everyone, I have a couple of hours next week that I was planning to spend with our favorite simulation framework.
Before I started though, I wanted to check if there already was an existing model of a ball on beam, or a ball on plate system.
If not, I was going to follow Building a Visual Robot Model with URDF from Scratch assuming that still is how things are built in Gazebo these days.
Originally posted by SL Remy on Gazebo Answers with karma: 319 on 2015-12-08
Post score: 0
Original comments
Comment by markus on 2018-04-23:
Hey SL REmy, did you build up a ball on plate system in gazebo and would you like to share it??! 2f4yor@gmail.com
Answer:
I don't know of an existing beam-on-ball system, or ball on plate system.
You can use URDF to create such a system, but I would recommend using SDF (here is a relevant tutorial). With SDF you can contribute your model into Gazebo's database, and SDF is the default format for Gazebo.
Originally posted by nkoenig with karma: 7676 on 2015-12-08
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 3839,
"tags": "control"
} |
Why is Energy change occurring during the reaction at constant temperature and constant volume given by internal energy change? | Question: When volume and temperature are kept constant, shouldn't internal energy remain constant (as it's a state function depending on state variables)? When heat is supplied, why does the internal energy increase if state variables are kept constant?
Answer: For a system likely to be the seat of a chemical reaction, the variables of state are not limited to the temperature and the volume: it is necessary to add the extent of reaction. | {
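In symbols (a standard formulation added for clarity, not part of the original answer): with the extent of reaction $\xi$ as a third state variable, the internal energy is $U = U(T, V, \xi)$, so holding $T$ and $V$ fixed does not fix $U$:

$$dU = \left(\frac{\partial U}{\partial T}\right)_{V,\xi} dT + \left(\frac{\partial U}{\partial V}\right)_{T,\xi} dV + \left(\frac{\partial U}{\partial \xi}\right)_{T,V} d\xi \quad\Longrightarrow\quad dU\Big|_{T,V} = \left(\frac{\partial U}{\partial \xi}\right)_{T,V} d\xi \neq 0 .$$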
"domain": "physics.stackexchange",
"id": 84664,
"tags": "thermodynamics, energy, energy-conservation, phase-transition, physical-chemistry"
} |
[PCL_tutorial] No tf data.Fixed Frame camera_depth_frame does not exist | Question:
I followed this tutorial 4.1 (http://wiki.ros.org/pcl/Tutorials) and ran these with kinect v2
$ rosrun my_pcl_tutorial example input:=/narrow_stereo_textured/points2
$ rosrun rviz rviz
After that I selected camera_depth_frame for the Fixed Frame and select output for the PointCloud2 topic.
But I could not see anything and rviz said "No tf data.Fixed Frame camera_depth_frame does not exist"
How can I get downsampled images? thank you.
Originally posted by student_ja on ROS Answers with karma: 1 on 2016-06-23
Post score: 0
Answer:
I have not gone through the tutorials, but can you run the rosrun tf view_frames command and see if the frame camera_depth_frame exists in the tf tree? If not, then your program is not able to publish the tf data.
Originally posted by sar1994 with karma: 46 on 2016-06-25
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 25032,
"tags": "ros, rviz, fixed-frame, tutorial"
} |
Time dilation confusion | Question: I'm just starting to learn about special relativity, and I'm a little bit confused about something. Take the example of an observer in $S$ on the ground observing a train move at constant velocity $v$ relative to $S$; an observer in $S'$ is on the train, and this observer in $S'$ flashes a light that reflects from the ceiling and returns to him in a time $t'$ that he measures.
I know that in general, if two frames $S$ and $S'$ are in relative uniform motion with respect to each other, and an observer in $S$ can see the clock of an observer $S'$ and the observer in $S'$ can see the clock of the observer $S$, then the observer in $S$ will see the clock of the observer in $S'$ run slower than his own, and vice versa. But I also know that $S'$ time is proper time, and so $t'\leq t$. This seems very strange to me. Observer $S$ sees his clock running faster, so intuitively, observer $S$ expects to measure less time for the event, but the time he measures for the light to return is no less than the time that observer $S'$ measures.
Am I understanding this correctly?
Answer: I agree with your statements up through the claim that $t'<t$. That's all fine.
Here I think is the issue you're running into:
The quantity $t'$ in the relationship above represents the time interval as measured in frame $S'$. It does not represent the number of ticks by the moving clock as measured in frame $S$. That's a subtle but important distinction, read it again.
I think this latter (incorrect) reasoning is leading to your apparent contradiction. | {
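For concreteness, the light-clock algebra behind $t' \le t$ (assuming a clock of height $L$ on the train): in $S'$ the light goes straight up and down, while in $S$ it traverses the hypotenuse of a moving triangle:

$$t' = \frac{2L}{c}, \qquad \left(\frac{ct}{2}\right)^2 = L^2 + \left(\frac{vt}{2}\right)^2 \;\;\Longrightarrow\;\; t = \frac{2L/c}{\sqrt{1 - v^2/c^2}} = \gamma\, t' \;\ge\; t' .$$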
"domain": "physics.stackexchange",
"id": 16199,
"tags": "special-relativity, time-dilation"
} |
What counts as an "ancilla" qubit? | Question: I am getting confused about the meaning of the term "ancilla" qubit. Its use seems to vary a lot in different situations. I have read (in numerous places) that an ancilla is a constant input - but in nearly all of the algorithms I know (Simon's, Grover's, Deutsch etc.) all the qubits are of constant input and therefore would be considered ancillae. Given that this does not seem to be the case - what is the general meaning of an "ancilla" qubit in quantum computers?
Answer: The general meaning of ancilla in ancilla qubit is auxiliary. In particular, when people write about "constant input" what they mean is that, for a given algorithm (which has a purpose, such as finding the prime factors of an input number, or effecting a simple arithmetic operation between two input numbers), the value of the ancilla qubits will be independent of the value of the input.
Probably your confusion arises because some algorithms study a function, employing a constant input, rather than study an input, using a constant function. Maybe in these cases the term ancilla qubit makes less sense, since, as you point out, all input qubits are constant and act as ancillae. | {
"domain": "quantumcomputing.stackexchange",
"id": 123,
"tags": "terminology-and-notation"
} |
Question about the most probable decay mode | Question: I want to understand which of the following possible decay modes for the B$^+$ meson is most probable:
$$ B^+ \rightarrow \tau^+\nu_\tau $$
$$ B^+ \rightarrow \mu^+\nu_\mu $$
$$ B^+ \rightarrow e^+\nu_e $$
I think that it would be the electron as this is a fundamental particle and has the least mass, but I'm not sure and this isn't sufficient explanation. Any help would be appreciated, thanks!
Answer:
I think that it would be the electron as this is a fundamental particle and has the least mass
Both are true for the electron, and what is missing in the justification is that in the other decays the fundamental particles belong to the same family, leptons, and the interaction entering the calculations of decay is the weak interaction in all three.
Then one can use the fact of the smaller mass to decide that the electron decay is more probable, using the phase space argument as suggested in the comment. | {
"domain": "physics.stackexchange",
"id": 67620,
"tags": "particle-physics, mesons"
} |
How does Weinberg's definition of particle states from standard momentum work? | Question: In his first volume, part 2.5, Weinberg defines one-particle states $\Psi_{p,\sigma}$ ($p$ is the momentum and $\sigma$ another quantum number) in the following way:
Choose a Standard momentum $k$
Find a Lorentz transformation $ L(p) $ such that $p^\mu = L^\mu _\nu k^\nu $
Define $\Psi_{p,\sigma} := N(p)\, U(L(p))\, \Psi_{k,\sigma}$ where $N(p)$ is a normalization factor and $U(L(p))$ the unitary representation of $L(p)$
However, still according to Weinberg we have $U(L(p))\Psi_{k,\sigma} = \sum_{\sigma'} C_{\sigma'\sigma}(L(p),k)\,\Psi_{p,\sigma'}$ with $C_{\sigma'\sigma}(L(p),k) \in \mathbb{C}$. It means that $\Psi_{p,\sigma}$ is not an eigenvector of $\hat{\sigma}$ (the observable corresponding to the quantum number $\sigma$) with eigenvalue $\sigma$ but a linear combination of eigenvectors. Because of this, I don't understand the definition in point 3. The only way out of this mess would be if $L(p)$ was chosen specifically so that the $\sigma$'s don't mix up.
I know it has been answered here but I am not convinced. It doesn't make sense to me to be able to simply redefine something that has a physical meaning. It is not a phase.
Answer: I believe 3. is literally a definition. That is, while the unitary representation of an arbitrary homogeneous Lorentz transformation $U(\Lambda)$ acts as:
$$U(\Lambda) \Psi_{p, \sigma} = \sum_{\sigma'} C_{\sigma' \sigma}(\Lambda, p)\Psi_{\Lambda p, \sigma'} \tag{2.5.3},$$
Weinberg defines a particular (he says that he makes a "standard" choice) sort of homogeneous Lorentz transformation $L(p)$ which specifically satisfies 3. as written in your question.
In particular, Weinberg later shows an explicit form for $L(p)$ for mass positive-definite particles on pg. 68, equation 2.5.24. I assume he does the same for massless particles. The following so-called "convenient" form of $L(p)$ as defined by Weinberg is given below:
$$L^i_k(p) = \delta_{ik}+(\gamma-1)\hat{p}_i\hat{p}_k, \tag{2.5.24}$$
which is certainly a more particular form than that of (2.5.3). | {
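For completeness (standard boost components, stated here in the usual parametrization rather than quoted from the book): with $\gamma = \sqrt{{\bf p}^2 + m^2}/m$, the remaining entries of the standard boost taking $k=(m,0,0,0)$ to $p$ are

$$L^i{}_0(p) = L^0{}_i(p) = \hat{p}_i\,\sqrt{\gamma^2 - 1}, \qquad L^0{}_0(p) = \gamma .$$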
"domain": "physics.stackexchange",
"id": 97510,
"tags": "quantum-field-theory, special-relativity, hilbert-space, quantum-states, elementary-particles"
} |
FFT on impulse response: phase doesn't look right, Python | Question: I want to analyze an arbitrary impulse response. Basically, a list of tap coefficients. In general, "tap1" will give me a BW limitation. I want to add more taps and see what it does to the FFT. I have two cases:
The "cursor" is the first item in the list.
I think the amplitude graph is correct, but something doesn't look right in the simple case: Why does the phase go back to zero?
I add 100 zeros before the cursor: I get the same amplitude response, but the phase response is oscillating.
I know I am missing something basic here
Python code:
import numpy as np
import pandas as pd
import plotly.express as px
def fft_on_signal(signal):
fft = pd.DataFrame()
n = len(signal)
fft['fft'] = np.fft.rfft(signal)
fft['radians'] = np.angle(fft.fft)
fft["amp"] = fft.fft.abs()
fft['freq'] = np.fft.rfftfreq(n=n, d=1 / n)
return fft
data=[1,0.2]+[0]*99 ##adding tap1 for BW limitation
fig=px.line(data, height=200, width=350).update_traces(line_shape='hvh')
fig.show()
fig=px.line(fft_on_signal(data), x='freq', y='amp', height=400, width=600)
fig.show()
fig=px.line(fft_on_signal(data), x='freq', y='radians', height=400, width=600)
fig.show()
data=100*[0]+[1,0.2]+[0]*99 ##adding 100 zeros before the cursor
fig=px.line(data, height=200, width=350).update_traces(line_shape='hvh')
fig.show()
fig=px.line(fft_on_signal(data), x='freq', y='amp', height=400, width=600)
fig.show()
fig=px.line(fft_on_signal(data), x='freq', y='radians', height=400, width=600)
fig.show()
Answer:
The "cursor" is the first item in the list. I think the amplitude graph is correct, but something doesn't look right in the simple case: Why does the phase go back to zero?
You are not showing the picture so we can't really tell. My guess is that it's phase wrapping. The phase is periodic with period $2\pi$. If you want a continuous phase graph you need to "unwrap" it. See for example: https://ccrma.stanford.edu/~jos/fp/Phase_Unwrapping.html
I add 100 zeros before the cursor: I get the same amplitude response, but the phase response is oscillating.
Of course you do. Adding 100 zeros in front of the signal is the same as delaying it by 100 samples. The Fourier Transform of a delay of $n$ samples is
$$H(\omega) = e^{-j\omega nT}$$
where T is the sample period. So you add a steep linear phase to the signal and due to the phase wrapping it turns into a sawtooth like shape. | {
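Both points can be checked numerically with a plain DFT (standard library only, no NumPy; the frame length and delay below are arbitrary illustration values):

```python
import cmath
import math

def dft(x):
    """Plain O(N^2) DFT, enough to illustrate the phase of a delay."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# an impulse delayed by 3 samples in a length-16 frame
N, delay = 16, 3
x = [0.0] * N
x[delay] = 1.0
X = dft(x)

# every bin has magnitude 1 and phase -2*pi*k*delay/N: a steep linear phase.
# np.angle would report it wrapped into (-pi, pi], hence the sawtooth look.
expected = [cmath.exp(-2j * math.pi * k * delay / N) for k in range(N)]
```

The larger the delay, the steeper the phase ramp and the faster the wrapped phase oscillates, matching the behavior seen after prepending 100 zeros.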
"domain": "dsp.stackexchange",
"id": 10441,
"tags": "fft, python, impulse-response"
} |
At creating a ROS package by hand | Question:
In tutorials, creating a ROS package by hand
To see one example of why specifying these dependencies is useful, try executing the following rospack commands:
rospack export --lang=cpp --attrib=cflags foobar
rospack export --lang=cpp --attrib=lflags foobar
What are "--attrib", "cflags" and "lflags"?
Originally posted by rosmaker on ROS Answers with karma: 51 on 2012-05-31
Post score: 0
Answer:
In manifest files of packages that export libraries, you can see something like this (taken from roscpp's manifest file):
<export>
<cpp cflags="-I${prefix}/include `rosboost-cfg --cflags`" lflags="-Wl,-rpath,${prefix}/lib -L${prefix}/lib -lros `rosboost-cfg --lflags thr
</export>
With rospack export --lang=cpp --attrib=cflags roscpp you'll get the value of the attribute cflags as specified in roscpp's manifest.xml. cflags exports are used by CMake to build up all compiler flags for building a package, lflags are used for linker flags respectively.
Originally posted by Lorenz with karma: 22731 on 2012-06-01
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 9622,
"tags": "ros, package"
} |
Should the input of a Kalman filter always be a signal and its derivative? | Question: I always see the Kalman filter used with such input data. For example, the inputs are commonly a position and the correspondent velocity:
$$
(x, \dfrac{dx}{dt})
$$
In my case, I only have 2D positions and angles at each sample time:
$$
P_i(x_i, y_i) \qquad \text{and} \qquad (\alpha_1, \alpha_2, \alpha_3)
$$
Should I compute velocities for each point and for each angle to be able to fit the Kalman framework?
Answer: A state variable and its derivative are often included as inputs to a Kalman filter, but this is not required. The essence of the Kalman framework is that the system in question has some internal state that you are trying to estimate. You estimate those state variables based on your measurements of that system's observables over time. In many cases, you can't directly measure the state that you're interested in estimating, but if you know a relationship between your measurements and the internal state variables, you can use the Kalman framework for your problem.
There is a good example of this on the Wikipedia page. In that example, 1-dimensional linear motion of an object is considered. The object's state variables consist of its position versus time and its velocity on the one-dimensional line of movement. The example assumes that the only observable is the object's position versus time; its velocity is not observed directly. Therefore, the filter structure "infers" the velocity estimate based on the position measurements and the known relationship between velocity and position (i.e. $\dot{x_k} \approx \frac{(x_k - x_{k-1})}{\Delta t}$ if acceleration is assumed to be slowly-varying). | {
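A minimal sketch of exactly that setup (constant-velocity model, position-only measurements; the process and measurement noise values are illustrative assumptions, and the 2x2 matrix algebra is written out by hand to stay dependency-free):

```python
# 1-D constant-velocity Kalman filter with state x = [position, velocity]
# and measurement model z = H x + noise, where H = [1, 0] (position only).

def kalman_1d_cv(measurements, dt=1.0, q=1e-4, r=0.25):
    x = [measurements[0], 0.0]    # state estimate [pos, vel]
    P = [[1.0, 0.0], [0.0, 1.0]]  # estimate covariance
    for z in measurements[1:]:
        # predict with F = [[1, dt], [0, 1]]; P <- F P F^T + q*I (simplified Q)
        x = [x[0] + dt * x[1], x[1]]
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + q
        P = [[p00, p01], [p10, p11]]
        # update with the scalar position measurement
        S = P[0][0] + r                      # innovation variance
        K = [P[0][0] / S, P[1][0] / S]       # Kalman gain
        y = z - x[0]                         # innovation
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x

# feed a noiseless ramp: the filter infers a velocity of ~0.1 per step
# even though velocity is never measured directly
pos_est, vel_est = kalman_1d_cv([0.1 * k for k in range(50)])
```

The same pattern extends to the question's case: stack the 2D position and the three angles into one state vector, let the filter carry their derivatives internally, and measure only the positions and angles.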
"domain": "dsp.stackexchange",
"id": 2394,
"tags": "filters, adaptive-filters, kalman-filters"
} |
How do you determine the handedness of the polarization vector of a beam of light? | Question: The Wikipedia page on circular polarization says,
[For] polarization [as] defined from the point of view of the source...left- or right-handedness is determined by pointing one's left or right thumb away from the source, in the same direction that the wave is propagating, and matching the curling of one's fingers to the direction of the temporal rotation of the field at a given point in space. When determining if the wave is clockwise or anti-clockwise circularly polarized, one again takes the point of view of the source, and while looking away from the source and in the same direction of the wave's propagation, one observes the direction of the field's spatial rotation.
What I don't get is,
How do I even know what the direction of wave propagation is if I only have an expression for the E field and not the B field? I know it's perpendicular to the E field but there are two ways for that to be true.
Even if I know the direction of propagation and so I apply the RHR to the cross product of the wave propagation vector and the E field vector (which seems to be what the Wiki quote is describing), that would just give me the direction the cross product would point in -- how does that then translate into CW or CCW rotation?
Answer:
How do I even know what the direction of wave propagation is if I only have an expression for the E field and not the B field?
If you have, say,
$$\vec{E} = \hat{x}E_0 \cos\left(\omega t - kz\right)$$
you can ignore all the prefactors and just look at the argument of the cosine. If $t$ increases, then to stay at the same point on the wave, $z$ must also increase. Therefore we have a wave propagating in the $+z$ direction.
A wave propagating the other way would have $\ldots\cos\left(\omega t + kz\right)$
If you are working in exponential notation, then it's the same. Ignore the pre-factors and just focus on the exponent in $\hat{x}E_0 e^{i(\omega t \pm kz)}$.
.. how does that then translate into CW or CCW rotation?
In a circularly polarized wave you're going to have something like
$$\vec{E} = \hat{x}E_0 \cos\left(\omega t - kz\right) + \hat{y}E_0 \cos\left(\omega t - kz \pm \pi/2\right)$$
If the $\hat{y}$ component is lagging the $\hat{x}$ component you have a RHC polarization, and if the $\hat{y}$ component is leading you have LHC polarization.
I apply the RHR to the cross product of the wave propagation vector and the E field vector (which seems to be what the Wiki quote is describing)
I think you're misunderstanding this.
The "temporal rotation of the field" is not the same as the field vector itself.
Visualize the E vector spinning around the axis of propagation. Your fingers should follow that spin (with your thumb pointed along the axis). That's not the same as taking the cross product of the E field (at some specific point in time) and the propagation direction.
If you can do this with your right hand, it's RHC polarization, and if you can do it with your left hand it's LHC polarization. | {
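The "spinning E vector" picture can be sanity-checked numerically (a sketch; mapping the rotation sense onto RHC/LHC still depends on the convention adopted above):

```python
import math

# At a fixed point in space, sample E(t) = (cos wt, cos(wt + phi)) and look at
# the sign of the z-component of E(t) x E(t + dt): positive means the tip of E
# rotates counterclockwise in the x-y plane, negative means clockwise.

def rotation_sense(phi, t=0.3, dt=1e-3):
    ex0, ey0 = math.cos(t), math.cos(t + phi)
    ex1, ey1 = math.cos(t + dt), math.cos(t + dt + phi)
    cross_z = ex0 * ey1 - ey0 * ex1
    return "ccw" if cross_z > 0 else "cw"

lagging = rotation_sense(-math.pi / 2)   # y component lags x
leading = rotation_sense(+math.pi / 2)   # y component leads x
```

Flipping the sign of the relative phase flips the rotation sense, which is the algebraic content of the lagging-vs-leading rule in the answer.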
"domain": "physics.stackexchange",
"id": 93756,
"tags": "electromagnetism, optics, electromagnetic-radiation, polarization, mathematics"
} |
Is there a program to solve a metric TSP for 80 edges at optimum? | Question: I'm going to use the Christofides heuristic algorithm in order to solve a TSP with about 80 edges. Eventually I should have a solution that is within a factor 1.5 of the optimum.
But when I'm finished, I'd like to check my solution, and I don't know how. So I thought about using a computer program to find the optimal solution to see if my solution is within the 3/2 range.
I am not quite sure if this is really possible or how long it might take. If it would take less than a month, I think it would be worth a try.
Answer: It should be no problem to solve this instance with an integer linear programming based approach. You could try the online interface to the Concorde TSP solver. | {
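As a side note (not from the original answer): for much smaller instances you can compute the exact optimum yourself with the Held-Karp dynamic program. It is exponential in the number of nodes, so it is nowhere near feasible for 80, but it is handy for validating a Christofides implementation on toy cases:

```python
from itertools import combinations
from math import dist as euclid  # Python 3.8+

def held_karp(d):
    """Exact TSP tour length via Held-Karp DP; feasible only for small n."""
    n = len(d)
    # C[(S, j)]: cheapest path that starts at node 0, visits bitmask S, ends at j
    C = {(1 << j, j): d[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            bits = 0
            for j in S:
                bits |= 1 << j
            for j in S:
                prev = bits & ~(1 << j)
                C[(bits, j)] = min(C[(prev, k)] + d[k][j] for k in S if k != j)
    full = (1 << n) - 2  # every node except the start
    return min(C[(full, j)] + d[j][0] for j in range(1, n))

# unit square: the optimal tour walks the perimeter, length 4
points = [(0, 0), (1, 0), (1, 1), (0, 1)]
d = [[euclid(p, q) for q in points] for p in points]
optimum = held_karp(d)
```

For the full 80-node instance, an ILP solver such as Concorde (as suggested above) is the practical route.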
"domain": "cs.stackexchange",
"id": 1432,
"tags": "complexity-theory, graphs"
} |
Determine spectral type of star from its properties | Question: How can I determine spectral type of star, if I know all its another properties? For Example, Rigel A has spectral type B8Ia.
Rigel has temperature 12 500 K. According to Harvard spectral classification, spectral class of this star is B (10 000-30 000 K) and subclass is 8 (12 000-14 000 K), so B8.
Are there any exact boundaries for the Yerkes spectral classification as well? How can I tell if a star is a supergiant (I), a bright giant (II) or a giant (III)? And then, how can I tell if it is bright (a), normal (ab) or faint (b)?
Answer: There are no exact boundaries in temperature, luminosity, surface gravity etc. for spectral classes because the classification system works in a different way - it is fundamentally an empirical system, with classification based only on the appearance of features in the spectra.
The Yerkes or Morgan-Keenan (MK) system is based only on a set of standard stars and their spectra, and the type of any other given star (that is not one of those standards) is based only on where that star’s spectral lines fit into the two-dimensional matrix of standards. The introduction of this Annual Reviews article by Morgan and Keenan spells out the principle.
Of course those lines reflect physical effects like temperature or luminosity (really primarily surface gravity), but those physical properties are not the basis of the classification. And there are other factors that can change the underlying properties while preserving the appearance of the spectral lines. For example in the intro of this paper, Gray discusses how stars of the same luminosity class can have different luminosities due to rotation.
Divorcing the classification from the physical properties is an interesting philosophical approach, but one that has stood the test of time even as our understanding of stars and their physical properties has improved in the past 70 years. If you have time, I encourage you to read the first couple of pages of the review article linked above - the quote from Dimitri Mihalas (who was a leading theorist on stellar atmospheres) is especially interesting, I think. | {
"domain": "astronomy.stackexchange",
"id": 4734,
"tags": "star, spectral-type"
} |
Pulling people out of a database | Question: I have this class that pulls people out of a database. I want it to be a little prettier and function better.
<?php
class comps extends db_config {
public $output = array();
private $arr = array();
public $area;
public $me;
public $options = array();
public function __construct() {
$this->pdo = new PDO('mysql:host='.$this->_host.';dbname='.$this->_db.';charset='.$this->_charset, $this->_user, $this->_pass);
}
public function __toString() {
return var_dump($this->result());
}
public function result() {
return $this->output;
}
public function setoption($option, $value) {
array_push($this->options, array($option=>$value));
}
public function run() {
$where = array();
$limit = '';
$showMe = '';
if (isset($this->options['admin'])) {
$where[] = "admin = '".$this->options['admin']."'";
}
if (isset($this->options['area'])) {
$where[] = "area = '".$this->area."'";
}
if (isset($this->options['showMe']) && $this->options['showMe'] = true) {
$showMe = "AND missionary <> '".$this->me."'";
}
if (isset($this->options['limit'])) {
$limit = " LIMIT ".$this->options['admin'];
}
if (count($where) > 0) {
$where = implode(' AND ', $where);
} else {
$where = '';
}
$area = isset($this->area) ? $this->area : '';
if ($area != '') {
$sth = $this->pdo->prepare("SELECT missionary FROM missionary_areas WHERE area_uid = :area AND missionary_released = '0' ".$showMe);
$sth->bindParam(':area', $area);
$sth->execute();
$sth->setFetchMode(PDO::FETCH_OBJ);
while ($mis = $sth->fetch()) {
$sth2 = $this->pdo->prepare("SELECT * FROM missionarios WHERE ".$where." mid = :mid LIMIT 1");
$sth2->bindParam(':mid', $mis->missionary);
$sth2->execute();
$this->arr[] = $sth2->fetch(PDO::FETCH_ASSOC);
}
foreach($this->arr as $a) {
array_push($this->output, $a);
}
} else {
$sth = $this->pdo->prepare("SELECT * FROM missionarios ".$where.$limit);
$sth->execute();
$res = $sth->fetchAll(PDO::FETCH_ASSOC);
foreach($res as $a) {
array_push($this->output, $a);
}
}
}
}
Answer: Here's the code smells:
The new keyword in a class definition
A result() method and $output attribute for data created by single method
A very long, ambiguous method: run
Two mostly-different SQL statements separated by an if-else
And here's what I'd do to eliminate them:
Move the PDO object to a constructor argument (see below) so you're not violating the Law of Demeter (see #1 and 2 here)
Remove result() altogether and simply return the result from the method in which it's built.
Break this into smaller, more meaningful methods based on responsibility (see #4 below)
Create getMissionaryByArea() and getAllMissionaries() methods with optional parameters
Here's what my code might look like:
<?php
/**
* This class name is ambiguous.
* I'm not sure what it means, so I cannot suggest a better name.
*/
class comps {
public function __construct(PDO $pdo) {
$this->pdo = $pdo;
}
public function getMissionariesByArea($area, $admin_id=false, $omit_missionary_id=false, $limit=100) {
$sql = <<<'ENDSQL'
SELECT m.*
FROM missionarios m
JOIN missionary_areas ma ON ma.missionary = m.mid
WHERE ma.area_uid = :area
AND ma.missionary_released = '0'
ENDSQL;
$params = [
'area' => $area,
];
if ($omit_missionary_id) {
$sql .= ' AND ma.missionary <> :omit_missionary_id';
$params['omit_missionary_id'] = $omit_missionary_id;
}
if ($admin_id) {
$sql .= ' AND m.admin = :admin_id';
$params['admin_id'] = $admin_id;
}
// LIMIT must come last, after every appended condition,
// and must be bound as an integer for native prepares
$sql .= ' LIMIT :limit';
$sth = $this->pdo->prepare($sql);
foreach ($params as $name => $value) {
$sth->bindValue(':' . $name, $value);
}
$sth->bindValue(':limit', (int) $limit, PDO::PARAM_INT);
$sth->execute();
return $sth->fetchAll();
}
public function getAllMissionaries($admin_id=false, $limit=100) {
$sql = 'SELECT * FROM missionarios';
if ($admin_id) {
$sql .= ' WHERE admin = :admin_id';
}
$sql .= ' LIMIT :limit';
$sth = $this->pdo->prepare($sql);
if ($admin_id) {
$sth->bindValue(':admin_id', $admin_id);
}
$sth->bindValue(':limit', (int) $limit, PDO::PARAM_INT);
$sth->execute();
return $sth->fetchAll();
}
}
I can revise if I've missed something in my refactoring. | {
"domain": "codereview.stackexchange",
"id": 13282,
"tags": "php, pdo"
} |
Questions on Carnot's theorem | Question: This article on Carnot's theorem states that
All heat engines between two heat reservoirs are less efficient than a Carnot heat engine operating between the same reservoirs.
However, it only proves that no heat engine can be more efficient than the Carnot heat engine (using a proof which Sal Khan also uses), and it proves that no irreversible engine is more efficient than a Carnot heat engine. It establishes in the former proof that
All reversible engines that operate between the same two heat reservoirs have the same efficiency.
and in the latter proof
No irreversible engine is more efficient than the Carnot engine operating between the same two reservoirs.
These proofs derive the result that the Carnot engine is the engine with optimal efficiency, which is in accordance with my textbook being used for self study (Resnick and Halliday 10th edition, though I haven't referenced Callen's book which is more advanced since I wish to brush up on some mathematics skills). However, I have two questions predicated on the same premise:
Premise: The article claims that all heat engines are less efficient than a Carnot engine.
#1: Why then can't an irreversible engine be as efficient as a Carnot engine (for this doesn't violate the result of the latter proof, which simply states that it cannot be more efficient)?
#2: The article only proves that all reversible engines have the same efficiency as the Carnot engine. Why then couldn't a different design for a reversible engine be constructed with such an efficiency (what is the justification for the uniqueness of the Carnot cycle)?
If Callen deals with this, citations are appreciated. Also, any links including references to the early thermodynamicists (Clausius, Gibbs, Maxwell, etc.) are appreciated. Even though Carnot worked under the caloric theory of heat, links to his work and his reasoning are also appreciated.
Answer: The proof behind Carnot's upper limit posed on the efficiency of heat engines is more robust than this. The quotes you've pasted are among the various statements of the second law of thermodynamics. Here I'll sketch for you some of the ideas of the proof, mainly to show where these formulations (related to Carnot) of the second principle come from. Inevitably I'll repeat most of the things you probably already know, but they're repeated more for discussion purposes. Towards the end I'll touch on your two main questions more closely.
The main question raised by Carnot was basically this: from the second principle we already know that it is impossible to build a thermal engine working only with a single heat bath. So then, the question became: what is the maximal amount of work that we can achieve with a thermal engine working reversibly between two heat baths, with which it can exchange heat?
Now from a purely schematic point of view, we know that one such engine should be described by a thermodynamic cycle maximizing the area enclosed by the cycle in the PV diagram! So Carnot set out to come up with a thermodynamic cycle that satisfies this. Remember that the net useful work provided by the system is equal to the area enclosed by one closed cycle, so intuitively we already have an idea of the type of expansions and compressions the cycle should be made of: i.e. the cost of compression being minimised by compressing at cold (lowest $T$), and the expansion yielding the maximum amount of work by expanding at hot (highest $T$), hence the choices of the main two reversible (no form of loss whatsoever, no entropy production) isothermal compression and expansion parts of the Carnot cycle. Here's the PV diagram taken from wikipedia ("Carnot cycle p-V diagram". Licensed under CC BY-SA 3.0 via Commons):
A quick reminder of each step involved along with the work/heat provided to outside (minus sign) or received (plus sign):
1 to 2: reversible isothermal expansion, $T=T_1,$ $V_1 \to V_2,$ $W_1 = -RT_1 \ln{V_2/V_1}$ and $Q_1 = RT_1 \ln{V_2/V_1},$ no internal energy change, $Q=-W$
2 to 3: cooling the working agent via a reversible adiabatic expansion, internal energy reduced only via work, $Q_2=0,$ $V_2 \to V_3,$ $T_1 \to T_2$ and $W_2 = C_v (T_2-T_1)$
3 to 4: reversible isothermal compression at cold $T=T_2,$ $V_3 \to V_4,$ $W_3 = -RT_2 \ln{V_4/V_3}$ and $Q_3 = RT_2 \ln{V_4/V_3},$
4 to 1: heating via a reversible adiabatic compression: $T_2 \to T_1,$ $V_4 \to V_1,$ $Q_4 = 0$ and $W_4 = C_v (T_1-T_2)$
All the ingredients there to calculate the thermal efficiency $\eta_{Carnot}$, given by the net work provided by the system to environment divided by the total received heat during one cycle (important idea is to use the adiabatic transformations to express $W$ in terms of $T_{1,2}$ and $V_{1,2}$). Once done, it should be a unique function of the two heat bath temperatures (in Kelvin):
$$
\eta_{Carnot} = \frac{T_1-T_2}{T_1}=1-\frac{T_2}{T_1}
$$
To see why any irreversible engine would have a lower efficiency, replace any of the 4 steps of the cycle by an irreversible one and re-calculate the efficiency, e.g. let's replace the 2 to 3 adiabatic expansion by an irreversible process, for our purposes a simple free expansion (see Gay-Lussac) will do, during which no work is done and $T_1$ remains constant, this process is immediately followed by another irreversible process, corresponding to the heat exchange (hence the irreversibility) with the cold bath as soon as contact is established, to reach $T_2.$ If all other 3 steps are left unaffected (i.e. reversible), the efficiency becomes:
$$
\eta = \eta_{Carnot}-\frac{C_V (T_1-T_2)}{RT_1 \ln{V_2/V_1}} < \eta_{Carnot}
$$
To further convince yourself, you can repeat the calculation for the replacement of any of the other 3 steps, by an irreversible one, and you will always find $\eta < \eta_{Carnot}.$ If you prefer, in terms of entropy, any irreversible process can be shown to have less heat flow to the system during an expansion and more heat flow out of the system during a compression, which simply means more entropy is given to environment than received from it, which consequently transforms the Clausius theorem into an inequality, i.e.
$$
\oint \frac{dQ}{T} \leq 0
$$
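The reversible-versus-irreversible comparison above is easy to check numerically. In this sketch, every number (one mole of a monatomic ideal gas, the bath temperatures, the expansion ratio) is an illustrative assumption, not a value from the text:

```python
from math import log

# Illustrative values (assumed for this sketch): one mole of a monatomic
# ideal gas, freely chosen bath temperatures and isothermal expansion ratio.
R = 8.314              # gas constant, J/(mol K)
Cv = 1.5 * R           # heat capacity at constant volume, monatomic ideal gas
T1, T2 = 500.0, 450.0  # hot and cold bath temperatures, K
ratio = 10.0           # V2 / V1 for the isothermal expansion

eta_carnot = 1 - T2 / T1
# Step 2->3 replaced by a free expansion plus irreversible cooling,
# per the formula derived above:
eta_irrev = eta_carnot - Cv * (T1 - T2) / (R * T1 * log(ratio))

print(eta_carnot, eta_irrev)  # the irreversible cycle is strictly less efficient
```

With these numbers $\eta_{Carnot} = 0.1$ while the irreversible variant lands near $0.035$, in line with the inequality above.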
Regarding your second question, the key idea is that no other engine can yield a greater efficiency than Carnot's (which conceptually we now expect to be true, remember the earlier points on how to build a cycle that maximizes $\eta$), but this does not mean that other engines with reversible transformations cannot yield the same efficiency, take e.g. the Stirling engine (a 4 step cycle again):
For the Stirling motor you can show that the efficiency is:
\begin{align*}
\eta_{Stirling} &= \frac{R(T_1-T_2)\ln{V_2/V_1}}{RT_1 \ln{V_2/V_1}} \\
&= 1-\frac{T_2}{T_1} = \eta_{Carnot}
\end{align*}
which should convince you that Carnot's cycle is not unique. To sum up, all this leads yet to another statement of the second principle of thermodynamics: There's no machine performing a cyclic process with an efficiency greater than $\eta_{Carnot}.$ | {
"domain": "physics.stackexchange",
"id": 24411,
"tags": "thermodynamics, heat-engine, carnot-cycle"
} |
Protons and Neutrons Overshoot Actual Mass? | Question: When I add up the mass of 6 protons and 6 neutrons in amu, I get a mass that is greater than the mass of carbon. I thought that it should be the other way around, because I have not including binding energy when I add up the mass of the protons and neutrons.
Proton: 1.007276466812 u
Neutron: 1.00866491600 u
6(1.007276466812)+6(1.00866491600)=12.0956
Carbon: 12 u
Why is this so?
Answer: You are getting the right thing. This is the binding energy formula.
$$E_{\text{binding}} = (M_{\text{constituents}}-M_{\text{BoundState}})c^2$$
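Plugging the question's numbers into this formula gives the binding energy directly. A quick sketch (note it ignores the 6 electron masses included in the atomic mass of carbon-12, which is why the result lands a few MeV below the tabulated ~92.2 MeV nuclear binding energy):

```python
m_p = 1.007276466812   # proton mass, u (from the question)
m_n = 1.00866491600    # neutron mass, u (from the question)
m_C12 = 12.0           # carbon-12 mass, u (exact by definition of the amu)
u_to_MeV = 931.494     # energy equivalent of 1 u, in MeV

delta_m = 6 * m_p + 6 * m_n - m_C12   # mass defect, u
E_binding = delta_m * u_to_MeV        # binding energy, MeV

print(delta_m, E_binding)   # ~0.0956 u, ~89.1 MeV
```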
When the constituents come together to form a bound state the total mass is lowered not raised. Binding energy is the energy corresponding to the mass lost by the constituents as a result of them entering the bound state. | {
"domain": "physics.stackexchange",
"id": 21584,
"tags": "nuclear-physics, binding-energy"
} |
Quiz component with different but similar data models | Question: I have a component with this template:
<ion-card *ngIf="isActive">
<ion-card-header>{{question.question}}</ion-card-header>
<ion-grid>
<ion-row>
<ion-col *ngFor="let answer of question.answers">
<button ion-button outline (click)="answer.action()">{{answer.text}}</button>
</ion-col>
</ion-row>
</ion-grid>
</ion-card>
And the code:
@Component({
selector: 'extra-card',
templateUrl: 'extra-card.html'
})
export class ExtraCard {
public static readonly EXTRA_CARD_TYPE_1: string = "type_1";
public static readonly EXTRA_CARD_TYPE_2: string = "type_2";
@Input() type: string = ExtraCard.EXTRA_CARD_TYPE_1;
isActive: boolean = true;
question;
constructor(
private ga: GoogleAnalytics,
private socialSharing: SocialSharing,
private appRateService: AppRateService
) {}
ngOnInit() {
switch (this.type) {
case ExtraCard.EXTRA_CARD_TYPE_1:
this.makeExtraCardType1();
break;
case ExtraCard.EXTRA_CARD_TYPE_2:
this.makeExtraCardType2();
break;
}
}
private makeExtraCardType1() {
this.question = {
question: "Some question",
answers: [
{text: "Answer 1", action: () => {this.action1Type1();}},
{text: "Answer 2", action: () => {this.action2Type1();}},
{text: "Answer 3", action: () => {this.action3Type1();}}
]
};
}
private makeExtraCardType2() {
this.question = {
question: "Some question",
answers: [
{text: "Answer 1", action: () => {this.action1Type2();}},
{text: "Answer 2", action: () => {this.action2Type2();}},
]
};
}
...
}
What does this code do?
A sequence of questions is shown to the user. Each question has several options. In the end of the sequence we do some action and skip this card.
What is the problem?
In this example I have 2 types of extra cards, but I want to have more (5, 10, 20, etc). In this case my component code will be growing too quickly.
What I want
I want to separate the logic of different question sequences. But I'm facing a problem with dependency injection for some unique actions in sequences. And I want to have a single component and use it like this:
<extra-card [type]="type_1"></extra-card>
Also, I want to avoid excessive dependency injections in the base component (in case with passing DI instances to question model).
Answer: Not the Strategy yet
I'd try to not have a super huge ExtraCard component class if possible. The idea is to encapsulate the data in a Card interface/classes:
export interface Card {
context: Context;
question: string;
answers: Answer[];
}
export interface Answer {
text: string;
action: () => void;
}
export class Card1 implements Card {
constructor(public context: Context) { }
question = "Some card";
answers = [
{ text: "Answer 1", action: () => { console.info("Q1-A1"); this.context.ga.doSomething(); } },
{ text: "Answer 2", action: () => { console.info("Q1-A2"); } },
{ text: "Answer 3", action: () => { console.info("Q1-A3"); } }
];
}
export class Card2 implements Card {
constructor(public context: Context) { }
question = "Some card";
answers = [
{ text: "Answer 1", action: () => { console.info("Q2-A1"); } },
{ text: "Answer 2", action: () => { console.info("Q2-A2"); } }
];
}
Please notice two things here:
the Card is now responsible for holding the data related to the specific Question (both, question and the list of answer with corresponding behavior). This is not really the "Strategy Pattern" yet, but it is a first big step towards it. In Strategy pattern you normally do not mix the data and behavior in the same class, but rather separate them.
We declare a context: Context property that will hold the important dependencies (aka Services) that are required for a given Card to do it's work in answers' actions.
This allows the card classes to stay regular TypeScript classes instead of being declared as services. This also means that the Context with the services should be provided from the outside. In TypeScript we can use the constructor (public context: Context) to enforce client pass in the context.
The Context, by the way can look something like this:
export class Context {
ga: GoogleAnalytics;
socialSharing: SocialSharing;
appRateService: AppRateService;
// Other services/dependencies...
}
Now we need to change the ExtraCard component as well as it consumer.
Why not make the ExtraCard as dumb as we can? Here's what it could look like. Notice that the component now expects the entire Card to be passed in via @Input() card. It will also notify the consumer about receiving an answer (see EventEmitter).
<div *ngIf="isActive">
<div>{{card.question}}</div>
<div *ngFor="let answer of card.answers">
<div (click)="triggerAction(answer)">{{ answer.text }}</div>
</div>
</div>
import { Component, EventEmitter, Input, Output } from '@angular/core';
@Component({
selector: 'extra-card',
templateUrl: 'extra-card.component.html'
})
export class ExtraCard {
@Input() isActive: boolean = true;
@Output() answerReceived = new EventEmitter<void>();
@Input() card: Card;
triggerAction(answer: Answer): void {
answer.action();
this.answerReceived.emit();
}
}
The last part is your consumer component. It orchestrates Card objects creation and iteration.
The collection of allCards is created in ngOnInit rather than the constructor, and it defines the number, kind, and order of the Cards. The Consumer's constructor is used to inject the dependencies and prepare a sharedContext that is then passed to the Cards (remember, they now require it in their constructors).
answerReceived() method is used to react to the corresponding event. In my code I assumed that we only need to iterate through the questions linearly (first to last). If that is not the case (e.g. you may need to jump from Card1 to Card3 if a specific Answer was selected), then ExtraCard would need to emit event with some data. For example, answerReceived = new EventEmitter<number>() and this.answerReceived.emit(answer.id). Then the answerId could be used in Consumer for implementing more complex iteration logic.
<div *ngFor="let card of allCards; let cardIndex = index">
<extra-card
[isActive]="isActive(card)"
[card]="card"
(answerReceived)="answerReceived(cardIndex)"></extra-card>
</div>
import { Component } from '@angular/core';
import { Card1, Card2, Context, Card } from './extra-card.component';
@Component({
selector: 'consumer',
templateUrl: 'consumer.component.html'
})
export class Consumer {
private sharedContext: Context;
allCards: Card[] = [];
activeCard: Card;
constructor(ga: GoogleAnalytics, appRateService: AppRateService, socialSharing: SocialSharing) {
this.sharedContext = {
appRateService,
ga,
socialSharing
};
}
ngOnInit(): void {
this.allCards = [
new Card1(this.sharedContext),
new Card2(this.sharedContext),
// ...
];
this.activeCard = this.allCards[0];
}
isActive(card: Card): boolean {
return this.activeCard === card;
}
answerReceived(cardIndex: number): void {
const card = this.allCards[cardIndex + 1];
if (card != null) {
this.activeCard = card;
} else {
// TODO Handle card enumeration complete event.
}
}
}
Disclaimer
This is by no means the best approach I recommend to use generally, but I think it's better what you have now. Especially, since you identified a very important problem of Card/Question quantity growth. My solution is much more scalable, but it too has its limits.
If you apply this for now, and see some other issues coming, feel free to post a follow-up question with a specific ask.
I hope this helps. | {
"domain": "codereview.stackexchange",
"id": 28780,
"tags": "quiz, typescript, angular-2+"
} |
C++ variadic universal template for unknown types, used to handle multiple network protocols | Question: I am creating a template function with variadic arguments to handle specific classes that have some interface, method, member or whatever is specialized in a specialization area. However, I came up with a solution that handles even types that are not supported, thus avoiding exceptions, polymorphism, virtual functions, RTTI, etc.
I'd like to hear suggestions and also have someone take a look at the implementation.
The example below shows a simple parsing of known network protocols (pseudo logic) that can handle all relevant types; if non-specialized classes are passed, they are simply ignored, as happens here.
#include <iostream>
#include <cstring>
template<typename T>
struct is_validator
{
static const bool value = false;
};
struct Null {
Null(const std::string& res[[maybe_unused]]) {}
bool valid() const {
return false;
}
};
struct RtpRFC
{
std::string m_res;
RtpRFC(const std::string& res) : m_res{res}{}
bool valid() const {
if (!strcmp(m_res.c_str(), "rtp"))
return true;
return false;
}
};
struct RtspRFC
{
public:
std::string m_res;
RtspRFC(const std::string& res) : m_res{res}{}
bool valid() const {
if (!strcmp(m_res.c_str(),"rtsp"))
return true;
return false;
}
};
struct StunRFC
{
std::string m_res;
StunRFC(const std::string& res) : m_res{res}{}
bool valid() const {
if (!strcmp("stun", m_res.c_str()))
return true;
return false;
}
};
struct NonValid
{
};
template<>
struct is_validator<Null>
{
static const bool value = true;
};
template<>
struct is_validator<RtpRFC>
{
static const bool value = true;
};
template<>
struct is_validator<StunRFC>
{
static const bool value = true;
};
template<>
struct is_validator<RtspRFC>
{
static const bool value = true;
};
/*terminator*/
bool VParse(...) {
return false;
}
template <class T,
//typename std::enable_if<is_validator<T>::value>::type,
typename...ARgs>
bool VParse(T type, ARgs&&... FArgs)
{
if constexpr (is_validator<T>::value) {
if (type.valid()) {
return true;
}
else {
return VParse(std::forward<ARgs>(FArgs)...);
}
}
return VParse(std::forward<ARgs>(FArgs)...);
}
int main(void)
{
std::string ret{};
const std::string someNetworkData[10] = {
"stun", "rtp", "stun", "stun", "rtsp", "rtp", "http", "http2", "udp"
};
for(int i=0; i < 10; i++) {
auto res = someNetworkData[i];
bool valid = VParse(
10,
"test",
NonValid{},
RtpRFC{res} ,
RtspRFC{res},
StunRFC{res},
Null{res}, //dummies
Null{res},
Null{res}
);
if (valid) {
ret += res;
ret += "|";
}
}
std::cout << ret;
return 0;
}
Answer: Avoid C string functions if possible
There is no need to use strcmp(), especially not if you are working with std::strings to begin with. For example, you can just write:
bool valid() const {
return m_res == "rtp";
}
Also make sure then to #include <string> instead of #include <cstring>.
Simplifying the code
Since your code needs C++17 anyway, you can simplify VParse() significantly by using fold expressions and a helper function, like so:
template <typename T>
bool is_valid(T&& type)
{
// decay T so that lvalue arguments (deduced as U&) still match the trait specializations
if constexpr (is_validator<std::decay_t<T>>::value) {
return type.valid();
}
return false;
}
template <typename... Args>
bool VParse(Args&&... args)
{
return (is_valid(std::forward<Args>(args)) || ...);
}
This splits the code into one self-contained part that checks a single argument, and another part that just iterates over all arguments.
Should you allow types that are not validators?
I think it's risky to have your function check whether is_validator<T>::value holds and silently ignore the argument if not. It's quite easy to make a typo somewhere and turn a valid validator into something that's not, and your code will then happily accept that, instead of letting the compiler catch the error. So I would recommend using SFINAE (like the constraint you commented out), or concepts if you can use C++20, to limit the accepted types, unless you really have a situation where it would be better to allow arbitrary types and ignore those that you can't use.
Alternatives
I assume the above code is just a toy example, but consider what happens if the list of validators grows a lot more than you have in your example. VParse() will have to call each validator in turn before one returns true. This can be expensive. If it is just matching strings, then you could just use a std::unordered_set<std::string> of valid types and check if a given protocol name is in that set. | {
"domain": "codereview.stackexchange",
"id": 43267,
"tags": "c++, template-meta-programming, variadic"
} |
Cooking with sawdust | Question: In a book about post-war Japan (Embracing Defeat, Dower) the author mentions a process for making sawdust at least partially edible, so it could be used in recipes in a 1:4 ratio with flour for cooking. The author says the sawdust was "fermented," as I recall.
My question is whether this is anything more than marginally possible. After all, kimchee, yogurt, beer, and other fermented foods aren't necessarily easier to digest. Is there a fermentation process involving bacteria that break down cellulose that might make wood even slightly nutritious?
My impression is that in forests the process of breaking down wood does not initially involve bacteria but other microorganisms; on the other hand, ruminants such as cows use bacteria to help them metabolize.
Thanks for any insight.
Answer: Yes. I think it would certainly be possible for raw sawdust to be made digestible by humans through some sort of fermentation process with some kind of microbe or another.
Pickling (as is done with kimchee and cucumbers and lots of other fruits) uses naturally occurring bacteria that are artificially selected for by using a strong brine solution so that most microbes die but the ones that naturally survive in strong brine also convert the organic materials they find (raw cucumber or cabbage) into different organic materials (pickled cucumber or pickled cabbage in the case of kimchee).
If you could find a microbe that could digest wood (there are many I'm sure; the gut bacteria in termites provide evidence of at least one or some), and if you could find a way to select for that particular microbe, then fermenting sawdust for nutritious human consumption becomes feasible. The only remaining problem is finding that particular microbe whose excretions were digestible to humans. | {
"domain": "biology.stackexchange",
"id": 767,
"tags": "metabolism, digestive-system, food"
} |
What are the disadvantages of free-flow intersections? | Question: I've recently learned about free-flow intersections. In particular, the DCMI, and the Stack interchanges.
What are the disadvantages of using them, compared to the "normal" signaled intersections? do they cost significantly more?
Answer: Free flow intersections take up a lot more real estate due to the wider curves needed for the higher speeds. Every time traffic crosses the roads need to be put onto a different grade. This means bridges and ramps to lift the traffic to that grade and back down again. Ramps (and the levies they are built on) also take up space and moves interaction with the main roads away from the actual intersection.
If traffic flow is not heavy then a simple priority regulated diamond interchange with just single lane traffic all around is more than enough. With a little forethought adding a third lane on the bridge to allow for left-turning traffic to not block other traffic will help flow for not that much extra cost. | {
"domain": "engineering.stackexchange",
"id": 1681,
"tags": "highway-engineering, traffic-intersections, traffic-light"
} |
Vector potential of wavefunction in ring geometry | Question: Assume that we have a wavefunction on a ring geometry of length L with a solenoid inside (like the Aharonov-Bohm experiment). We can change the magnetic field $B$ inside the solenoid continuously just by increasing the current.
Also assume that we have just solved the Schrödinger equation for the $B=0$ case, so we know the wavefunction $\psi(x)$. Since this is a ring geometry, this wavefunction satisfies the boundary condition
$$ \psi(x) = \psi(x+L) $$
where L is the perimeter of the ring.
Now we increase B, so the Hamiltonian of this system includes the vector potential $A$. When we include the vector potential in our Hamiltonian, we can easily get the wavefunction by multiplying the $A=0$ wavefunction by a phase factor, as follows
$$ \psi(x) \rightarrow e^{i\frac{q}{\hbar} \int {A \cdot dx }} \psi(x) $$
But since we have ring geometry, we require this wavefunction also satisfies the periodic boundary condition
$$ \psi(0) = e^{i\frac{q}{\hbar} \int_0^L {A \cdot dx }} \psi(L) = e^{i\frac{q}{\hbar} {A L }} \psi(0) $$
and if $\psi(0) \neq 0$, we need
$$ A = n\frac{\hbar2\pi}{qL}$$
where n is an integer.
But this means that the vector potential A, and so the magnetic field, must be quantized. But as I mentioned earlier, the solenoid can have a continuous magnetic field. So I arrive at a contradiction here. What is wrong with this argument?
Answer: There is nothing wrong with your argument. If you have a geometry where there is truly no magnetic field where the ring is, then the flux through the ring is an integer multiple of the magnetic flux quantum. If you try to make the flux non-integer, the ring geometry will create a screening current that either adds or subtracts to the flux in such a way that the total flux is still an integer multiple.
A real-world example of such a geometry is a superconducting loop, and a real-world application is the precise measurement of magnetic fields by SQUIDs. | {
"domain": "physics.stackexchange",
"id": 49187,
"tags": "quantum-mechanics"
} |
how to add sonar sensor to pioneer3at robot in standalone version of gazebo | Question:
Hi all,
I want to add a sonar sensor to the pioneer3at robot in the standalone version of Gazebo. I don't want to use ROS; I just want to work with the standalone version of Gazebo to test my navigation algorithm. How can I add a sonar sensor to my simulated pioneer3at robot, and how can I get its data? Can anyone help me?
Originally posted by Vahid on Gazebo Answers with karma: 91 on 2013-05-10
Post score: 0
Answer:
There isn't a sonar sensor in Gazebo yet. It's in the todo list
Originally posted by iche033 with karma: 1018 on 2013-05-11
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Kurt Leucht on 2014-07-03:
Is it done yet? Gazebo status says it's done in 1.9 but I don't see it in any tutorials or examples. http://gazebosim.org/#status | {
"domain": "robotics.stackexchange",
"id": 3282,
"tags": "gazebo"
} |
How to determine the minimum force in these questions? | Question: Take a look at this example
The author mentioned that the shortest path when the angle is 90 which is clearly obvious. Now take a look at the following problem from the same book
The author has taken another approach for a similar problem and claims the minimum magnitude occurs when $F_B$ is perpendicular to $F_A$, which yields an angle of 70. See the solution below
Which approach is correct?
Answer: 1) For the first problem we want the vertical component to be $0$ and we don't care what the horizontal component of the resultant vector actually is. Therefore we just need to focus on
$$F_1\sin60^\circ=F_2\sin\theta$$
Or to get in a form to minimize:
$$F_2=\frac{F_1\sin60^\circ}{\sin\theta}$$
Since $F_1$ is fixed, the only variable quantity is $\theta$, so we are good to go. You can use calculus, or you can reason that to minimize $F_2$ with respect to $\theta$ we need the denominator to be as large as possible, and $F_2$ needs to have some downward component. Therefore, it makes sense that $\theta=90^\circ$ minimizes $F_2$. I'm not covering this to answer the first problem, since the solution is given in the question, but we can approach the second problem in a similar way to see what the difference is.
2) So, for the second problem we want the vertical component to be $0$, but now we do care about what the resultant magnitude is. Furthermore, we can control what $F_A$ (analogous to $F_1$) is now. Setting up the problem like the first one requires more work than last time, but it isn't too bad:
$$F_A\sin20^\circ=F_B\sin\theta$$
$$F_A\cos20^\circ+F_B\cos\theta=F_R$$
We can combine these to get:
$$F_B\cot20^\circ\sin\theta+F_B\cos\theta=F_R$$
Or
$$F_B=\frac{F_R}{\cot20^\circ\sin\theta+\cos\theta}$$
Since $F_R$ is fixed, the only variable quantity is $\theta$, so we are good to go. Using calculus$^*$, you can show that you get a minimum when
$$\sin\theta-\cot20^\circ\cos\theta=0$$
Or
$$\tan\theta=\cot20^\circ$$
This is only true if $\theta + 20^\circ=90^\circ$. Therefore, $F_A$ must be perpendicular to $F_B$ (since $\theta$ is positive in the clockwise direction, whereas the $20^\circ$ is measured in the counter-clockwise direction).
And hopefully now you can see the difference between the two approaches comes from different constraints on the system. In the first case, the resultant horizontal component was free, but $F_1$ was fixed. In the second case we had the requirement that the resultant horizontal component be a set value, but $F_A$ was now free. Note that if we had specified $F_A$ as well as the resultant horizontal force that there would just be a single possibility for what $F_2$ and $\theta$ could be.
To be more general, the vector addition figures provided by the solutions are nice, but they require some intuition and maybe even some hindsight to produce. The methods outlined here are a little more practical in terms of steps to follow. Just express what you are trying to minimize in terms of only fixed values and the variable that you are minimizing with respect to. Then you are good to go to find what value of the variable minimizes your quantity of interest.
$^*$ For those who do not know calculus we can still do some reasoning like we did for the first problem. We want to maximize our denominator. Of course, this also means maximizing a constant multiple of the denominator. We can then cleverly choose to maximize the denominator multiplied by $\sin20^\circ$. This is equal to
$$\cos20^\circ\sin\theta+\sin20^\circ\cos\theta$$
Using a useful trig identity this is equal to
$$\sin(20^\circ+\theta)$$
Since the sine function is at a maximum when its argument is $90^\circ$ it must be that the forces are perpendicular (usually perpendicular would mean a difference of $90^\circ$, but keep in mind that $\theta>0$ is measured moving clockwise, whereas the $20^\circ$ is measured moving counter-clockwise).
"domain": "physics.stackexchange",
"id": 60385,
"tags": "homework-and-exercises, newtonian-mechanics, forces, vectors, geometry"
} |
Conservation of information and determinism? | Question: I'm having a hard time wrapping my head around the conservation of information principle as formulated by Susskind and others. From the most upvoted answers to this post, it seems that the principle of conservation of information is a consequence of the reversibility (unitarity) of physical processes.
Reversibility implies determinism: Reversibility means that we have a one to one correspondence between a past state and a future state, and so given complete knowledge of the current state of the universe, we should be able to predict all future states of the universe (Laplace's famous demon).
But hasn't this type of determinism been completely refuted by Quantum Mechanics, the uncertainty principle and the probabilistic outcome of measurement?
Isn't the whole point of Quantum Mechanics that this type of determinism no longer holds?
Moreover, David Wolpert proved that even in a classical, non-chaotic universe, the presence of devices that perform observation and prediction makes Laplace style determinism impossible. Doesn't Wolpert's result contradict the conservation of information as well?
So to summarize my question: How is the conservation of information compatible with the established non-determinism of the universe?
Answer: The short answer to this question is that the Schrödinger equation is deterministic and time reversible up to the point of a measurement. Determinism says that given an initial state of a system and the laws of physics you can calculate what the state of the system will be after any arbitrary amount of time (including a positive or negative amount of time). Classically, the deterministic laws of motion are given by Newton's force laws, the Euler-Lagrange equation, and the Hamiltonian. In quantum mechanics, the law that governs the time evolution of a system is the Schrödinger equation. It shows that quantum states are time reversible up until the point of a measurement, at which point the wave function collapses and it is no longer possible to apply a unitary that will tell you what the state was before, deterministically. However, it should be noted that many-worlds interpreters who don't believe that measurements are indeterministic don't agree with this statement; they think that even measurements are deterministic in the grand scheme of quantum mechanics. To quote Scott Aaronson:
Reversibility has been a central concept in physics since Galileo and Newton. Quantum mechanics says that the only exception to reversibility is when you take a measurement, and the Many-Worlders say not even that is an exception.
The reason that people are loose with the phrasing “information is always conserved” is because the “up until a measurement” is taken for granted as background knowledge. In general, the first things you learn about in a quantum mechanics class or textbook is what a superposition is, the Heisenberg uncertainty principle and then the Schrödinger equation.
For an explanation of the Schrödinger equation from Wolfram:
The Schrödinger equation is the fundamental equation of physics for describing quantum mechanical behavior. It is also often called the Schrödinger wave equation, and is a partial differential equation that describes how the wavefunction of a physical system evolves over time.
The Schrödinger equation explains how quantum states develop from one state to another. This evolution is completely deterministic and time reversible. Remember that a quantum state is described by a wave function $|\psi\rangle$, which is a collection of probability amplitudes. The Schrödinger equation states that any given wave function $|\psi_{t_0}\rangle$ at moment $t_0$ will evolve to become $|\psi_{t_1}\rangle$ at time $t_1$ unless a measurement is made before $t_1$. Because the process is deterministic and time reversible, given $|\psi_{t_1}\rangle$ we can use the equation to calculate what $|\psi_{t_0}\rangle$ is equal to.
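As a sketch of that claim (standard textbook form, assuming a time-independent Hamiltonian $H$; not from the source): the Schrödinger equation generates a unitary map between the two times, and unitaries are invertible:

```latex
i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = H\,|\psi(t)\rangle
\quad\Longrightarrow\quad
|\psi_{t_1}\rangle = U(t_1,t_0)\,|\psi_{t_0}\rangle,
\qquad U(t_1,t_0) = e^{-iH(t_1-t_0)/\hbar}.
```

Since $U^\dagger U = I$, applying $U^\dagger(t_1,t_0)$ to $|\psi_{t_1}\rangle$ recovers $|\psi_{t_0}\rangle$ exactly, which is the time reversibility described above.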
If the electron is in a superposition then the wave function will be so:
$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ where $\alpha$ and $\beta$ are equal to $\frac{1}{\sqrt{2}}$.
The state of an electron that is spin up is $|\psi\rangle = 1|1\rangle$. Clearly, a quantum state that is in a superposition of some observables is a valid ontological object. It behaves in a way completely different than an object that is collapsed into only one of the possibilities via a measurement. The problem of measurements, what they are and what constitutes one, is central to the interpretations of quantum mechanics. The most common view is that a measurement is made when the wave function collapses into one of its eigenstates. The Schrödinger equation provides a deterministic description of a state up to the point of a measurement.
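As a small worked check (the standard Born rule, not taken from the source): for the superposition written above, the probabilities of the two measurement outcomes are

```latex
P(0) = |\alpha|^2 = \left(\tfrac{1}{\sqrt{2}}\right)^{2} = \tfrac{1}{2},
\qquad
P(1) = |\beta|^2 = \tfrac{1}{2},
\qquad
|\alpha|^2 + |\beta|^2 = 1.
```

The normalization condition in the last term is what makes the amplitudes a valid probability distribution.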
Information, as defined by Susskind here, is always conserved up to the point of a measurement. This is because the Schrödinger equation describes the evolution of a quantum state deterministically up until a measurement.
The black hole information paradox can be succinctly stated as this:
Quantum states evolve unitarily governed by the Schrödinger equation. However, when a particle passes through the event horizon of a black hole and is later radiated out via Hawking radiation it is no longer in a pure quantum state (meaning a measurement was made). A measurement could not have been made because the equivalency principle of general relativity assures us that there is nothing special going on at the event horizon. How can all of this be true?
This paradox would not be a paradox if the laws of quantum mechanics didn't give a unitary, deterministic, evolution for quantum states up to a measurement. The reason being, if measurements are the only time unitarity breaks down and the equivalency principle tells us a measurement cannot be happening at the horizon of a black hole, how can unitarity break down and cause the Hawking radiation to be thermal and therefore uncorrelated with the in-falling information? Scott Aaronson gave a talk about quantum information theory and its application to this paradox as well as quantum public key cryptography. In it he explains
The Second Law says that entropy never decreases, and thus the whole universe is undergoing a mixing process (even though the microscopic laws are reversible).
[After having described how black holes seem to destroy information in contradiction to the second law] This means that, when bits of information are thrown into a black hole, the bits seem to disappear from the universe, thus violating the Second Law.
So let’s come back to Alice. What does she see? Suppose she knows the complete quantum state $|\psi\rangle$ (we’ll assume for simplicity that it’s pure) of all the infalling matter. Then, after collapse to a black hole and Hawking evaporation, what’s come out is thermal radiation in a mixed state $\rho$. This is a problem. We’d like to think of the laws of physics as just applying one huge unitary transformation to the quantum state of the world. But there’s no unitary U that can be applied to a pure state $|\psi\rangle$ to get a mixed state $\rho$. Hawking proposed that black holes were simply a case where unitarity broke down, and pure states evolved into mixed states. That is, he again thought that black holes were exceptions to the laws that hold everywhere else.
The information paradox was considered to be solved via Susskind's proposal of black hole complementarity and the holographic principle. Later AMPS showed that the solution is not as simple as it was stated and further work needs to be done. Currently the field of physics is engaged in an amazingly beautiful collection of ideas and solutions being proposed to solve the black hole information paradox as well as the AMPS paradox. At the heart of all of these proposals, however, is the belief that information is conserved up to the point of a measurement.
"domain": "physics.stackexchange",
"id": 59391,
"tags": "conservation-laws, information, determinism"
} |
Inherited Code - Worksheet_Change Event Code | Question: I inherited this code and have to fix it. It does work and I know I can refactor the code using If Not Intersect(Target, Me.Range()) Is Nothing Then syntax. I am wondering if using a function to pass the cell references in this case would be best, but im not really familiar on working with functions yet and would like some input or guidance on best practice with the code below. Please note I am well aware of the usage of select within this code block, but the original author wants me to keep the select to move the active cell based on selections made in the worksheet.
Private Sub Worksheet_Change(ByVal Target As Range)
Application.ScreenUpdating = False
Application.EnableEvents = False
Dim wb As Workbook: Set wb = Application.ThisWorkbook
Dim wsDE As Worksheet: Set wsDE = wb.Sheets("Data Entry")
Dim Unique_Identifier As String
Dim Wire_Type As String
With wsDE
Select Case Target.Address
Case Is = "$B$4": Hide_All
Select Case Range("B4")
Case Is <> ""
Range("A100:A199").EntireRow.Hidden = False
Range("B101").Select
Sheet5.Visible = xlSheetVisible 'Confirmation-Incoming
Range("B5") = ""
Case Else: Range("B5").Select
End Select
Case Is = "$B$5": Hide_All
Select Case Range("B5")
Case Is <> ""
Range("A200:A211").EntireRow.Hidden = False
Range("A216:A227").EntireRow.Hidden = False
Range("B201").Select
With ThisWorkbook
Sheet7.Visible = xlSheetVisible 'Checklist
Sheet4.Visible = xlSheetVisible 'Confirmation-Outgoing-1
Sheet2.Visible = xlSheetVisible 'Wire Transfer Request-1
End With
Select Case Range("B5")
Case Is > 1
Range("A200:A299").EntireRow.Hidden = False
Unique_Identifier = Range("B5").Value
Wire_Type = "Deposit/Loan"
Call Find_Recurring(Unique_Identifier, Wire_Type)
End Select
Case Else: Range("B6").Select
End Select
Case Is = "$B$6": Hide_All
Select Case Range("B6")
Case Is <> ""
Range("A300:A312").EntireRow.Hidden = False
Range("A316:A330").EntireRow.Hidden = False
Range("B301").Select
With ThisWorkbook
Sheet3.Visible = xlSheetVisible 'Checklist-Loan Closing
Sheet12.Visible = xlSheetVisible 'Confirmation-Outgoing-2
Sheet11.Visible = xlSheetVisible 'Wire Transfer Request-2
End With
Case Else: Range("B7").Select
End Select
Case Is = "$B$7": Hide_All
Select Case Range("B7")
Case Is <> ""
Range("A400:A411").EntireRow.Hidden = False
Range("A414:A499").EntireRow.Hidden = False
Range("B401").Select
With ThisWorkbook
Sheet9.Visible = xlSheetVisible 'Checklist-Cash Management
Sheet14.Visible = xlSheetVisible 'Confirmation-Outgoing-3
End With
Case Else: Range("B8").Select
End Select
Case Is = "$B$8": Hide_All
Select Case Range("B8")
Case Is <> ""
Range("A500:A599").EntireRow.Hidden = False
Range("B501").Select
With ThisWorkbook
Sheet13.Visible = xlSheetVisible 'Wire Transfer Request - Brokered-Internet
End With
Case Else: Range("B9").Select
End Select
Case Is = "$B$9": Hide_All
Select Case Range("B9")
Case Is <> ""
Range("A600:A610").EntireRow.Hidden = False
Range("B601").Select
Sheet8.Visible = xlSheetVisible 'Checklist-Internal
Select Case Range("B9")
Case Is > 1
Range("A600:A699").EntireRow.Hidden = False
Unique_Identifier = Range("B9").Value
Wire_Type = "Internal"
Call Find_Recurring(Unique_Identifier, Wire_Type)
End Select
Case Else: Range("B10").Select
End Select
Case Is = "$B$10": Hide_All
Select Case Range("B10")
Case Is <> ""
Sheet6.Visible = xlSheetVisible 'Wire Transfer Agreement
Sheets("Wire Transfer Agreement").Visible = True
Range("A5000:A5099").EntireRow.Hidden = False
Range("A5005:A5011").EntireRow.Hidden = True
Range("B5001").Select
Case Else: Range("B11").Select
End Select
Case Is = "$B$11": Hide_All
Select Case Range("B11")
Case Is <> ""
' Sheets("Recurring Wire Transfer Request").Visible = True
Sheet18.Visible = xlSheetVisible 'Recurring Wire Transfer Request
Range("A5100:A5118").EntireRow.Hidden = False
Range("A5111:A5114").EntireRow.Hidden = True
Range("B5101").Select
Case Else: Range("B11").Select
End Select
'Wires from Deposit Account or Loan (Post-Closing) Section
Case Is = "$B$205"
Select Case LCase(Range("B205"))
Case Is = "yes"
Range("A212:A215").EntireRow.Hidden = False
Case Else
Range("A212:A215").EntireRow.Hidden = True
Range("B206").Select
End Select
Case Is = "$B$227"
Select Case LCase(Range("B227"))
Case Is = "domestic"
Range("A222:A243").EntireRow.Hidden = False
Range("A267:A299").EntireRow.Hidden = False
Range("A244:A266").EntireRow.Hidden = True
Range("B229").Select
Case Is = "international"
Range("A244:A299").EntireRow.Hidden = False
Range("A228:A243").EntireRow.Hidden = True
Range("B245").Select
Case Is <> "international", "domestic"
Range("A228:A299").EntireRow.Hidden = True
Range("B227").Select
End Select
Case Is = "$B$269"
Select Case LCase(Range("B269"))
Case Is = "yes"
Sheets("Wire Transfer Agreement").Visible = True
Range("A5000:A5099").EntireRow.Hidden = False
Range("B282:B299").EntireRow.Hidden = True
Application.ScreenUpdating = True
Range("B5001").Select
Case Else
Sheets("Wire Transfer Agreement").Visible = False
Range("A5000:A5099").EntireRow.Hidden = True
Range("B281:B299").EntireRow.Hidden = False
Range("B270").Select
End Select
'Loan-Closing Wires Section
Case Is = "$B$306"
Select Case LCase(Range("B306"))
Case Is = "yes"
Range("A313:A316,A331").EntireRow.Hidden = False
Case Else
Range("A313:A316").EntireRow.Hidden = True
Range("A331").EntireRow.Hidden = False
Range("B307").Select
End Select
Case Is = "$B$331"
Select Case LCase(Range("B331"))
Case Is = "domestic"
Range("A332:A347").EntireRow.Hidden = False
Range("A370:A399").EntireRow.Hidden = False
Range("A348:A369").EntireRow.Hidden = True
Range("B331").Select
Case Is = "international"
Range("A347:A399").EntireRow.Hidden = False
Range("A332:A346").EntireRow.Hidden = True
Range("B349").Select
Case Is <> "domestic", "international"
Range("A332:A399").EntireRow.Hidden = True
Range("B331").Select
End Select
Case Is = "$B$373"
Select Case LCase(Range("B373"))
Case Is = "yes"
Sheets("Wire Transfer Agreement").Visible = True
Range("A5000:A5099").EntireRow.Hidden = False
Range("B383:B399").EntireRow.Hidden = True
Application.ScreenUpdating = True
Range("B5001").Select
Case Else
Sheets("Wire Transfer Agreement").Visible = False
Range("A5000:A5099").EntireRow.Hidden = True
Range("B383:B399").EntireRow.Hidden = False
Range("B374").Select
End Select
'Cash Management Wires Section
Case Is = "$B$406"
Select Case LCase(Range("B406"))
Case Is = "yes"
Range("A412:A413").EntireRow.Hidden = False
Case Else
Range("A412:A413").EntireRow.Hidden = True
Range("B407").Select
End Select
Case Is = "$B$425"
Select Case LCase(Range("B425"))
Case Is = "yes"
Range("A430:A431").EntireRow.Hidden = False
Case Else
Range("A430:A431").EntireRow.Hidden = True
Range("B426").Select
End Select
'Internal Foresight Wires Section
Case Is = "$B$610"
Select Case LCase(Range("B610"))
Case Is = "domestic"
Range("A611:A625").EntireRow.Hidden = False
Range("A648:A699").EntireRow.Hidden = False
Range("A626:A647").EntireRow.Hidden = True
Range("B612").Select
Case Is = "international"
Range("A626:A699").EntireRow.Hidden = False
Range("A611:A625").EntireRow.Hidden = True
Range("B627").Select
Case Is <> "international", "domestic"
Range("A611:A699").EntireRow.Hidden = True
Range("B610").Select
End Select
'Wire Transfer Agreement Section
Case Is = "$B$5004"
Range("A5005:A5011").EntireRow.Hidden = True
Range("B5004").Select
Select Case LCase(Range("B5004"))
Case Is = "entity"
Range("A5007:A5011").EntireRow.Hidden = False
Range("B5007").Select
Case Is = "individual(s)"
Range("A5005:A5006").EntireRow.Hidden = False
Range("B5005").Select
End Select
'Recurring Wire Transfer Request Section
Case Is = "$B$5104"
Range("A5111:A5114").EntireRow.Hidden = True
Range("B5105").Select
Select Case LCase(Range("B5104"))
Case Is = "yes"
Range("A5111:A5114").EntireRow.Hidden = False
Range("B5105").Select
Case Is = "no"
Range("A5111:A5114").EntireRow.Hidden = True
Range("B5105").Select
End Select
Case Is = "$B$5118"
Select Case LCase(Range("B5118"))
Case Is = "domestic"
Range("A5119:A5131").EntireRow.Hidden = False
Range("A5132:A5199").EntireRow.Hidden = True
Range("A5150").EntireRow.Hidden = False
Range("B5120").Select
Case Is = "international"
Range("A5119:A5131").EntireRow.Hidden = True
Range("A5132:A5149").EntireRow.Hidden = False
Range("A5151:A5199").EntireRow.Hidden = True
Range("B5133").Select
Case Is <> "international", "domestic"
Range("A5119:A5199").EntireRow.Hidden = True
Range("B5118").Select
End Select
End Select
End With
'CIF Calls
If Not Intersect(Target, Range("B103")) Is Nothing Then CIFIncoming
If Not Intersect(Target, Range("B206")) Is Nothing Then CIFOutD
If Not Intersect(Target, Range("B307")) Is Nothing Then CIFOutL
If Not Intersect(Target, Range("B407")) Is Nothing Then CIFOutCM
If Not Intersect(Target, Range("B506")) Is Nothing Then CIFBrokered
Application.ScreenUpdating = True
Application.EnableEvents = True
End Sub
Answer: As AJD pointed out, using Named Ranges will make the code easier to understand, read, write and modify. The same logic can be applied to worksheet code names.
Here are the names that I used when refactoring the code:
Sheets("Wire Transfer Agreement") -> wsWTA
Sheet2 -> wsWireTransferRequest1
Sheet3 -> wsChecklistLoanClosing
Sheet4 -> wsConfirmationOutgoing1
Sheet5 -> wsConfirmationIncoming
Sheet7 -> wsChecklist
Sheet6 -> wsWireTransferAgreement
Sheet8 -> wsChecklistInternal
Sheet9 -> wsChecklistCashManagement
Sheet11 -> wsWireTransferRequest2
Sheet12 -> wsConfirmationOutgoing2
Sheet13 -> wsWTRBrokeredInternet
Sheet14 -> wsConfirmationOutgoing3
Sheet18 -> wsRecurringWTR
Nested Select statements are particularly hard to read. Normally I would alternate Select with If..ElseIf..Else statements, but the procedure is entirely too long; so I recommend writing a subroutine for each Case of the top level Select statement (see code below).
I only use Range.EntireRow when working with Range variables (e.g. Target.EntireRow). Using Rows() directly will make your code more condensed and the extra whitespace will make it easier to read.
Before
Range("A200:A211").EntireRow.Hidden = False
After
Rows("200:211").Hidden = False
Application.ScreenUpdating = True is no longer required. ScreenUpdating now resumes after the code has finished executing.
Refactored Code
Private Sub Worksheet_Change(ByVal Target As Range)
Application.ScreenUpdating = False
Application.EnableEvents = False
Dim Unique_Identifier As String
Dim Wire_Type As String
Select Case Target.Address
Case Is = "$B$4"
EntryB4
Case Is = "$B$5"
EntryB5
Case Is = "$B$6"
EntryB6
Case Is = "$B$7"
EntryB7
Case Is = "$B$8"
EntryB8
Case Is = "$B$9"
EntryB9
Case Is = "$B$10"
EntryB10
Case Is = "$B$11"
EntryB11
Rem Wires from Deposit Account or Loan (Post-Closing) Section
Case Is = "$B$205"
EntryB205
Case Is = "$B$227"
EntryB227
Case Is = "$B$269"
EntryB269
Rem Loan-Closing Wires Section
Case Is = "$B$306"
EntryB306
Case Is = "$B$331"
EntryB331
Case Is = "$B$373"
EntryB373
Rem Cash Management Wires Section
Case Is = "$B$406"
EntryB406
Case Is = "$B$425"
EntryB425
Rem Internal Foresight Wires Section
Case Is = "$B$610"
EntryB610
Rem Wire Transfer Agreement Section
Case Is = "$B$5004"
EntryB5004
Rem Recurring Wire Transfer Request Section
Case Is = "$B$5104"
EntryB5104
Case Is = "$B$5118"
EntryB5118
End Select
Rem CIF Calls
If Not Intersect(Target, Range("B103")) Is Nothing Then CIFIncoming
If Not Intersect(Target, Range("B206")) Is Nothing Then CIFOutD
If Not Intersect(Target, Range("B307")) Is Nothing Then CIFOutL
If Not Intersect(Target, Range("B407")) Is Nothing Then CIFOutCM
If Not Intersect(Target, Range("B506")) Is Nothing Then CIFBrokered
Application.EnableEvents = True
End Sub
Sub EntryB4()
With wsDataEntry
.Activate
Hide_All
Select Case .Range("B4")
Case Is <> ""
.Rows("100:199").Hidden = False
.Range("B101").Select
wsConfirmationIncoming.Visible = xlSheetVisible
.Range("B5") = ""
Case Else
.Range("B5").Select
End Select
End With
End Sub
Sub EntryB5()
Dim Unique_Identifier As String
Dim Wire_Type As String
With wsDataEntry
Hide_All
.Activate
Select Case .Range("B5")
Case Is <> ""
.Rows("200:211").Hidden = False
.Rows("216:227").Hidden = False
.Range("B201").Select
With ThisWorkbook
wsChecklist.Visible = xlSheetVisible
wsConfirmationOutgoing1.Visible = xlSheetVisible
wsWireTransferRequest1.Visible = xlSheetVisible
End With
Select Case .Range("B5")
Case Is > 1
.Rows("200:299").Hidden = False
Unique_Identifier = .Range("B5").Value
Wire_Type = "Deposit/Loan"
Call Find_Recurring(Unique_Identifier, Wire_Type)
End Select
Case Else: .Range("B6").Select
End Select
End With
End Sub
Sub EntryB6()
With wsDataEntry
Hide_All
.Activate
Select Case .Range("B6")
Case Is <> ""
.Rows("300:312").Hidden = False
.Rows("316:330").Hidden = False
.Range("B301").Select
With ThisWorkbook
wsChecklistLoanClosing.Visible = xlSheetVisible
wsConfirmationOutgoing2.Visible = xlSheetVisible
wsWireTransferRequest2.Visible = xlSheetVisible
End With
Case Else: .Range("B7").Select
End Select
End With
End Sub
Sub EntryB7()
With wsDataEntry
Hide_All
.Activate
Select Case .Range("B7")
Case Is <> ""
.Rows("400:411").Hidden = False
.Rows("414:499").Hidden = False
.Range("B401").Select
With ThisWorkbook
wsChecklistCashManagement.Visible = xlSheetVisible
wsConfirmationOutgoing3.Visible = xlSheetVisible
End With
Case Else: .Range("B8").Select
End Select
End With
End Sub
Sub EntryB8()
With wsDataEntry
Hide_All
.Activate
Select Case .Range("B8")
Case Is <> ""
.Rows("500:599").Hidden = False
.Range("B501").Select
With ThisWorkbook
wsWTRBrokeredInternet.Visible = xlSheetVisible
End With
Case Else: .Range("B9").Select
End Select
End With
End Sub
Sub EntryB9()
Dim Unique_Identifier As String
Dim Wire_Type As String
With wsDataEntry
Hide_All
.Activate
Select Case .Range("B9")
Case Is <> ""
.Rows("600:610").Hidden = False
.Range("B601").Select
wsChecklistInternal.Visible = xlSheetVisible
Select Case .Range("B9")
Case Is > 1
.Rows("600:699").Hidden = False
Unique_Identifier = .Range("B9").Value
Wire_Type = "Internal"
Call Find_Recurring(Unique_Identifier, Wire_Type)
End Select
Case Else: .Range("B10").Select
End Select
End With
End Sub
Sub EntryB10()
With wsDataEntry
Hide_All
.Activate
Select Case .Range("B10")
Case Is <> ""
wsWireTransferAgreement.Visible = xlSheetVisible
wsWTA.Visible = True
.Rows("5000:5099").Hidden = False
.Rows("5005:5011").Hidden = True
.Range("B5001").Select
Case Else: .Range("B11").Select
End Select
End With
End Sub
Sub EntryB11()
With wsDataEntry
Hide_All
.Activate
Select Case .Range("B11")
Case Is <> ""
wsRecurringWTR.Visible = xlSheetVisible
.Rows("5100:5118").Hidden = False
.Rows("5111:5114").Hidden = True
.Range("B5101").Select
Case Else: .Range("B11").Select
End Select
End With
End Sub
Rem Wires from Deposit Account or Loan (Post-Closing) Section
Sub EntryB205()
With wsDataEntry
.Activate
Select Case LCase(.Range("B205"))
Case Is = "yes"
.Rows("212:215").Hidden = False
Case Else
.Rows("212:215").Hidden = True
.Range("B206").Select
End Select
End With
End Sub
Sub EntryB227()
With wsDataEntry
.Activate
Select Case LCase(.Range("B227"))
Case Is = "domestic"
.Rows("222:243").Hidden = False
.Rows("267:299").Hidden = False
.Rows("244:266").Hidden = True
.Range("B229").Select
Case Is = "international"
.Rows("244:299").Hidden = False
.Rows("228:243").Hidden = True
.Range("B245").Select
Case Is <> "international", "domestic"
.Rows("228:299").Hidden = True
.Range("B227").Select
End Select
End With
End Sub
Sub EntryB269()
With wsDataEntry
.Activate
Select Case LCase(.Range("B269"))
Case Is = "yes"
wsWTA.Visible = True
.Rows("5000:5099").Hidden = False
.Rows("282:299").Hidden = True
.Range("B5001").Select
Case Else
wsWTA.Visible = False
.Rows("5000:5099").Hidden = True
.Rows("281:299").Hidden = False
.Range("B270").Select
End Select
End With
End Sub
Rem Loan-Closing Wires Section
Sub EntryB306()
With wsDataEntry
.Activate
Select Case LCase(.Range("B306"))
Case Is = "yes"
.Range("A313:A316,A331").EntireRow.Hidden = False
Case Else
.Rows("313:316").Hidden = True
.Rows(331).Hidden = False
.Range("B307").Select
End Select
End With
End Sub
Sub EntryB331()
With wsDataEntry
.Activate
Select Case LCase(.Range("B331"))
Case Is = "domestic"
.Rows("332:347").Hidden = False
.Rows("370:399").Hidden = False
.Rows("348:369").Hidden = True
.Range("B331").Select
Case Is = "international"
.Rows("347:399").Hidden = False
.Rows("332:346").Hidden = True
.Range("B349").Select
Case Is <> "domestic", "international"
.Rows("332:399").Hidden = True
.Range("B331").Select
End Select
End With
End Sub
Sub EntryB373()
With wsDataEntry
.Activate
Select Case LCase(.Range("B373"))
Case Is = "yes"
wsWTA.Visible = True
.Rows("5000:5099").Hidden = False
.Rows("383:399").Hidden = True
.Range("B5001").Select
Case Else
wsWTA.Visible = False
.Rows("5000:5099").Hidden = True
.Rows("383:399").Hidden = False
.Range("B374").Select
End Select
End With
End Sub
Rem Cash Management Wires Section
Sub EntryB406()
With wsDataEntry
.Activate
Select Case LCase(.Range("B406"))
Case Is = "yes"
.Rows("412:413").Hidden = False
Case Else
.Rows("412:413").Hidden = True
.Range("B407").Select
End Select
End With
End Sub
Sub EntryB425()
With wsDataEntry
.Activate
Select Case LCase(.Range("B425"))
Case Is = "yes"
.Rows("430:431").Hidden = False
Case Else
.Rows("430:431").Hidden = True
.Range("B426").Select
End Select
End With
End Sub
Rem Internal Foresight Wires Section
Sub EntryB610()
With wsDataEntry
.Activate
Select Case LCase(.Range("B610"))
Case Is = "domestic"
.Rows("611:625").Hidden = False
.Rows("648:699").Hidden = False
.Rows("626:647").Hidden = True
.Range("B612").Select
Case Is = "international"
.Rows("626:699").Hidden = False
.Rows("611:625").Hidden = True
.Range("B627").Select
Case Is <> "international", "domestic"
.Rows("611:699").Hidden = True
.Range("B610").Select
End Select
End With
End Sub
Rem Wire Transfer Agreement Section
Sub EntryB5004()
With wsDataEntry
.Activate
.Rows("5005:5011").Hidden = True
.Range("B5004").Select
Select Case LCase(.Range("B5004"))
Case Is = "entity"
.Rows("5007:5011").Hidden = False
.Range("B5007").Select
Case Is = "individual(s)"
.Rows("5005:5006").Hidden = False
.Range("B5005").Select
End Select
End With
End Sub
Rem Recurring Wire Transfer Request Section
Sub EntryB5104()
With wsDataEntry
.Activate
.Rows("5111:5114").Hidden = True
.Range("B5105").Select
Select Case LCase(.Range("B5104"))
Case Is = "yes"
.Rows("5111:5114").Hidden = False
.Range("B5105").Select
Case Is = "no"
.Rows("5111:5114").Hidden = True
.Range("B5105").Select
End Select
End With
End Sub
Sub EntryB5118()
With wsDataEntry
.Activate
Select Case LCase(.Range("B5118"))
Case Is = "domestic"
.Rows("5119:5131").Hidden = False
.Rows("5132:5199").Hidden = True
.Rows(5150).Hidden = False
.Range("B5120").Select
Case Is = "international"
.Rows("5119:5131").Hidden = True
.Rows("5132:5149").Hidden = False
.Rows("5151:5199").Hidden = True
.Range("B5133").Select
Case Is <> "international", "domestic"
.Rows("5119:5199").Hidden = True
.Range("B5118").Select
End Select
End With
End Sub | {
"domain": "codereview.stackexchange",
"id": 36694,
"tags": "vba, excel"
} |
Modifiable array-of-structures to represent devices | Question: I'm really only a tinkerer in Java, but for work I need to write in Java for a while. Most of my experience is in C / C++ / JavaScript.
Anyway, the program needs arrays of structured data. Then, based on runtime criteria, the program needs to be able to add to or modify part of the data. Later the program will use the modified data to accomplish its objectives.
Today I spent a while looking for ways to do this in Java. I was able to achieve the goal using ArrayLists of class objects.
import java.util.*;
import java.util.regex.*;
class BusConfig extends Object {
public int bus;
public int gpio;
public ArrayList<String> devices = new ArrayList<String> ( );
public BusConfig( int newBus, int newGPIO, String[] newDevices ) {
this.bus = newBus;
this.gpio = newGPIO;
this.devices = new ArrayList<String>( Arrays.asList( newDevices ) );
}
public String toString() {
return String.format( "{ bus %d, gpio %d, devices %s }", this.bus, this.gpio, this.devices.toString() );
}
}
class ConfigData {
public static void main(String[] args) {
final boolean IS_EXTENDED = false;
ArrayList<BusConfig> basicConfig = new ArrayList<BusConfig>();
basicConfig.add( new BusConfig( 0, -1, new String[] { "0xa6" } ) );
basicConfig.add( new BusConfig( 1, 35, new String[] { "0x80", "0xae", "0xe4" } ) );
basicConfig.add( new BusConfig( 2, 38, new String[] { "0x80", "0xae", "0xe4" } ) );
ArrayList<BusConfig> extendConfig = new ArrayList<BusConfig>();
extendConfig.add( new BusConfig( 8, -1, new String[] { "0xe8" } ) );
ArrayList<String> extendBus0Device = new ArrayList<String>( Arrays.asList( new String[] { "0xa8" } ) );
if( IS_EXTENDED ) {
basicConfig.get(0).devices.addAll(extendBus0Device);
basicConfig.addAll(extendConfig);
}
System.out.println( basicConfig );
}
}
The code is far more verbose than what I would write in either C or JavaScript. Is there a less verbose way to accomplish the same outcome in Java? Maybe another class, or a cleaner syntax for initializing structured data?
My team is not Java-savvy, so it is important that the code be easy to read for C/C++ programmers. The kinds of code changes that are most likely are adding and changing the BusConfig objects.
In JavaScript same code looks like this:
const IS_EXTENDED = true;
let basicConfig = [
{ bus: 0, gpio: -1, devices: [ "0xa6" ] },
{ bus: 1, gpio: 35, devices: [ "0x80", "0xae", "0xe4" ] },
{ bus: 3, gpio: 38, devices: [ "0x80", "0xae", "0xe4" ] },
];
let extendConfig = [
{ bus: 8, gpio: -1, devices: [ "0xe8" ] },
];
let extendBus0Device = [ "0xa8" ];
if( IS_EXTENDED ) {
basicConfig = basicConfig.concat( extendConfig );
basicConfig[0].devices = basicConfig[0].devices.concat( extendBus0Device );
}
console.log( JSON.stringify(basicConfig,null,4) );
Answer: Initializing variables that are set in the constructor is unnecessary. Working with the abstract List type instead of concrete ArrayList allows you to take full advantage of Arrays.asList without having to convert the result into an ArrayList every time. I personally dislike forcing constructor parameters to have different names than the fields they are assigned to. The this. reference is intended for making a distinction between the scope and I find it distracting when I have to read the code to figure out what parameter goes into which field.
public final List<String> devices;
public BusConfig(int bus, int gpio, String ... devices) {
this.bus = bus;
this.gpio = gpio;
this.devices = Arrays.asList(devices);
}
And then...
List<BusConfig> basicConfig = Arrays.asList(
new BusConfig(0, -1, "0xa6"),
new BusConfig(1, 35, "0x80", "0xae", "0xe4"),
new BusConfig(2, 38, "0x80", "0xae", "0xe4")
);
Whether that is easier to understand is subjective, but at least it is shorter and does not have too many unnecessary statements cluttering the code.
There are a few things that I would advise against; all fields being public and the devices-list being exposed and manipulated from outside of the BusConfig-class.
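As a hedged sketch of that last point (one common option, not the only one; the field names follow the question's BusConfig), the device list can stay private while an unmodifiable view is exposed:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch: keep the device list private and hand out an unmodifiable view,
// so callers can read the configuration but cannot mutate it behind the
// object's back.
final class BusConfig {
    public final int bus;
    public final int gpio;
    private final List<String> devices;

    public BusConfig(int bus, int gpio, String... devices) {
        this.bus = bus;
        this.gpio = gpio;
        this.devices = Arrays.asList(devices);
    }

    // Read-only view; add/remove on it throws UnsupportedOperationException.
    public List<String> getDevices() {
        return Collections.unmodifiableList(devices);
    }

    public static void main(String[] args) {
        BusConfig cfg = new BusConfig(0, -1, "0xa6");
        System.out.println(cfg.getDevices()); // prints [0xa6]
    }
}
```

Note the trade-off: if callers still need to extend the device list at runtime (as the IS_EXTENDED branch in the question does), BusConfig would instead have to expose an explicit mutation method, which keeps the change in one controlled place.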
Addendum: I should have read the JavaDoc... I did look into the code and confirmed that Arrays.asList(...) does return an ArrayList, but with closer inspection, I found out it's not the java.util.ArrayList but a specialized internal ArrayList that is completely different! There doesn't seem to be a common utility library like Apache Commons or Google Commons that would have such a utility, which makes me think that there is a reason for that. If you know why, please add a comment. Anyway, you need to write your own:
public class ArrayListUtils {
@SafeVarargs
@SuppressWarnings("varargs")
public static <T> ArrayList<T> asArrayList(T... elements) {
final ArrayList<T> list = new ArrayList<>(elements.length);
for (T e: elements) {
list.add(e);
}
return list;
}
}
and...
import static com.example.ArrayListUtils.asArrayList;
and instead of Arrays.asList(devices) just call
asArrayList(devices) | {
"domain": "codereview.stackexchange",
"id": 42886,
"tags": "java"
} |
Navigation costmap parameter unknown_cost_value | Question:
Hello
Does anyone know why the costmap_2d parameter "unknown_cost_value" cannot be set to 0 or lower? My understanding is that a costmap with static_map set to true will take in an occupancy_grid from the map topic. It then uses this unknown_cost_value to set all the corresponding cells with this value as unknown space in the newly created costmap. Since occupancy grids from the map topic use -1 for unknown space, 0 for free space, and 100 for occupied space, it seems I can't set the unknown space for the cost map to be the same as the unknown space in the map. It works fine if I set the unknown_cost_value to something larger than 0, to 100 for example, but this then sets the unknown space in the cost map to be equal to the occupied space in the map.
Originally posted by Roy89 on ROS Answers with karma: 133 on 2012-08-01
Post score: 1
Original comments
Comment by weiin on 2012-09-19:
The inconsistency is also noted here http://answers.ros.org/question/38765/occupancygrid-vs-costmap/
Answer:
I think I solved the problem by setting the unknown_cost_value to 255. The cost maps have a range of 0 to 255 so a value of -1 in the occupancy grid is set as 255 in the cost map. It seems to be setting the new cost maps unknown space the same as the occupancy grid from gmappings map which is what I wanted. Thanks me :)
Originally posted by Roy89 with karma: 133 on 2012-08-01
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Achim on 2012-08-29:
Does that mean you managed to use navfn with gmapping planing in unknown space? Or are you using your own stuff? I'm quite stuck here...
Comment by Roy89 on 2012-09-10:
I used gmapping and the navigation stack with navfn planner. There's is an allow_unknown parameter in navfn that should allow plans to traverse unknown space. I can't seem to get the planner to do so though :(
Comment by weiin on 2012-09-19:
I managed to get the planner to plan into unknown space with track_unknown_space: true
unknown_cost_value: 255
Comment by weiin on 2012-09-19:
These two parameters are set in costmap_common.yaml, global_costmap uses static_map: true, local_costmap uses static_map: false
Comment by Achim on 2012-09-22:
I have the same parameters, now. It works, but usually I have to start move_base two times, in order to get it working. Usually on the first run unknown cells still are obstacles. | {
"domain": "robotics.stackexchange",
"id": 10438,
"tags": "ros, navigation, exploration, costmap, parameter"
} |
Reset pose to reset velocities | Question:
If I were to tinker in the file reset_pose.py and add lines about resetting other states than just the positions, could I also zero out the velocities of the joint states? For example, when I move the waist joint to rotate the robot, it sways around due to the controller being underdamped. If I just hit 'reset pose', the robot might snap back to a certain position, but it won't have zero velocity at the joints. But if I published additional commands in the reset, I could take that out, right?
Originally posted by DRC_Justin on Gazebo Answers with karma: 61 on 2013-01-11
Post score: 0
Answer:
Sorry I am unclear what you are trying to achieve here.
But in general, reset_pose.py only works with atlas_position_controllers.launch by setting controllers desired joint positions. Once the published commands are received by individual position controllers, each controller will actuate the joints towards desired position.
The Reset Model Poses button in the GUI simply resets the Model pose, and leaves model joint configuration untouched, the model is "teleported" to its starting pose, but the dynamic properties (linear and angular velocities of the links) are unchanged.
Originally posted by hsu with karma: 1873 on 2013-01-11
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 2908,
"tags": "gazebo"
} |
Why is isoindole unstable? | Question: According to this paper, isoindole is unstable because the 1-carbon in the isoindolidine tautomer is very electron-deficient. I figured that the electron-deficiency of this carbon must be because it is bonded to an aromatic carbon and double bonded to a nitrogen, both of which atoms are electron-withdrawing.
By this logic, I would expect other carbons in this situation (C(aryl)-C=N) to also be very electron-deficient. However, aromatic Schiff bases seem pretty common, for example salen. What am I missing?
Answer: Kopecký et al. indeed utter the word unstable, but not in the sense of its recommended definitions, which were established later.
Although the parent compound, isoindole (I), has resisted isolation, the existence of this unstable species has been shown by trapping with dienophiles (i).
The problem here is that stability always requires a point of reference, which is not given in the quote above.
I would assume that the authors use unstable (or instability) as a way to say the compound cannot be isolated. They go on and justify why this compound behaves in this way, describing what I would say is its reactivity. While their argument is sound and I believe correct, the choice of their wording is a little bit ambiguous (in today's terms).
We clearly need to distinguish a compound's reactivity (towards a certain reaction) from its stability within a common set of parameters. As such the following statements are correct, but neither of them is a justification for the other.
2H-Isoindole is a less stable $\ce{C8NH7}$ isomer than 1H-indole.
2H-Isoindole is more reactive towards dienophiles than 1H-indole.
While statement one is easy enough to prove computationally, it is a lot harder to explain or analyse. Quick calculations at the DF-M06L/def2-TZVPP level of theory give the following hypothetical reaction energy:
$$\begin{align}
\ce{1$H$-indole &-> 2$H$-isoindole} & \Delta_\mathrm{r}G^\circ &= +38.6~\pu{kJ mol^-1}
\end{align}$$
Unfortunately Kopecký et al. offer no explanation for this, because they are more or less focused on explaining why it is "unstable", by which they most likely mean highly reactive or not isolatable. They are concerned with the reaction paths that lead away from this species and find that 1H-isoindole plays a big (probably the most important) role in this.
The [...] electron deficiency for position 1 of the isoindolenine [...] leads us to conclude that the reactive species responsible for the instability of this series of compounds probably is the isoindolenine [...].
(Note that the numbering differs from the IUPAC name and the position referred to as 1 is actually 3.)
I think this analysis is spot on a holds up to modern standards of computational methodology. The 1H-tautomer indeed opens up a very big variety of decomposition and reaction pathways, and it is readily accessible at room temperature (DF-M06L/def2-TZVPP):
$$\begin{align}
\ce{2$H$-isoindole &-> 1$H$-isoindole} & \Delta_\mathrm{r}G^\circ &= +3.4~\pu{kJ mol^-1}
\end{align}$$
The reason for this - and this is the point where Joule and Mills get it completely wrong - is the retention (not the creation) of the aromatic system in the benzene moiety. The chemical behaviour (reactivity towards cycloadditions) is not determined by the lack of a 'complete' benzene ring. But that is us slipping into statement 2, which we are not quite ready for yet.
Let us first dissect the latter statement in the same book (p.447), which is equally as wrong:
Isoindole, benzo[c]thiophene and isobenzofuran are much less stable than their isomers, indole, and benzo[b]thiophene and benzo[b]furan.
Although they offer no appreciable effort to prove this statement they are probably correct. At least the calculations for the $\ce{C8NH7}$ system confirm that. They even use the correct (or recommended) definition of stable in this case.
This is undoubtedly associated with their lower aromaticity, which can be appreciated qualitatively by noting that in these [c]-systems, the six-membered ring is not a complete benzenoid unit.
Again, they offer no proof for that other than a hand-wavy argument based on a single Lewis structure. If we include resonance in the mix, then we have an argument at least as reasonable as theirs to disprove it.
If we have a look at the molecular orbitals of the compounds, we will see that there is more to this than just one resonance description. I have pulled out two characteristic molecular orbitals for example, but the complete set can be found here. On the left side (top) we have 1H-indole and on the right side (bottom) we have 2H-isoindole. Apart from a light shift of the electron density due to the loosened symmetry in indole, the orbitals are pretty much the same.
When we take the 1H-isoindole isomer into the mix, we see that the aromatic system of the benzene ring is retained completely.
Aromaticity is still not a fully understood concept and as a result there is not really a rigorous concept that can be applied (see the definitions below). There will probably be debate about it for the next decades. (Hopefully my bounty will pay off and we can find out more in this question.)
As a result we don't really have a handle on judging whether or not 2H-isoindole has a lower aromaticity than 1H-indole. And according to Kopecký's paper, with available methods at this time, this cannot be backed up with theoretical results.
While I hoped to find some experimental evidence for the existence of isoindole due to the general progress within the last fifty years, I was still unable to find a single source.
I'll skip over the rest of the quote because the stated facts are certainly accurate, just the reasoning with the 'incomplete' benzene ring is false.
Let's look at statement two for a while.
The reactivity of 2H-isoindole towards dienophiles is very easily explained looking at the HOMO of the compound; for comparison the HOMO of butadiene on the right (bottom).
The very easy rationale is that the HOMO is of the right symmetry for a Diels-Alder reaction, which additionally retains the aromatic character of the benzene moiety.
Now a look at the HOMO (left/ top) of 1H-indole will tell us that the only viable attack can be carried out on the 4,7-positions, leaving a much smaller aromatic moiety, i.e. pyrrole, than isoindole. The only way for an inverse electron demand Diels-Alder reaction would also be in the benzene moiety as the LUMO (right/ bottom) shows.
This is why one molecule reacts readily, while another has to be forced (really, really hard) or reacts as the dienophile via 'the double bond in the pyrrole system'.
After all that information the question still remains, or rather the question has become
"Why is isoindole less stable than indole?"
As I said before, this is really not easy to interpret. In general, lifting symmetry restrictions results in a more stable configuration, as fewer boundary conditions equal more flexibility and that usually gives a lower energy.
One aspect is certainly also that the lone pair of the nitrogen will be better delocalised in indole as there are more carbon atoms in proximity. This is actually something visible in the molecular orbitals.
Apart from that, neither geometries differ much from idealised benzene ($\mathbf{d}(\ce{C-C})=138.7~\pu{pm}$) or pyrrole($\mathbf{d}(\ce{C-N})=136.7~\pu{pm}$, $\mathbf{d}(\ce{C-C'})=137.1~\pu{pm}$, $\mathbf{d}(\ce{C'-C'})=141.4~\pu{pm}$, picture), so that a significant distortion energy is probably not a large contributing factor.
Other than that, I don't know. I also cannot explain, why the other aromatic $\ce{C8NH7}$ isomers are even less stable than isoindole. That would probably require more extensive calculations and analyses.
\begin{array}{lrr}\hline
\text{Isomer}& \Delta E_\mathrm{el}/\pu{kJ mol^-1} & \Delta G/\pu{kJ mol^-1}\\\hline
\text{1$H$-indole} & 0.0 & 0.0\\
\text{2$H$-isoindole} & 37.3 & 38.6\\
\text{indolizine} & 52.0 & 50.6\\
\text{1$H$-cyclopenta[$b$]pyridine} & 83.4 & 82.4\\
\text{2$H$-cyclopenta[$c$]pyridine} & 79.7 & 79.1\\\hline
\end{array}
By this logic, I would expect other carbons in this situation, $\ce{C_{ar}-\color{\red}{C}=N}$, to also be very electron-deficient. However, aromatic Schiff bases seem pretty common, for example salen. What am I missing?
It is true that the carbon atoms in question will be very electron deficient, and they will probably be a good point of attack for nucleophiles. In the salen ligand the nitrogens are probably stabilised by intramolecular hydrogen bonds, and in complexes the entropy factor should make it very stable, too.
On another note, the isoindole moiety is also very, very common. Just think heme ligands and complexes.
One final thought: carbonic acid decomposes rapidly in water; however, it is seldom questioned that the molecule itself is stable under certain conditions.
My personally preferred definition of stable: a certain arrangement of atoms that forms a local minimum on a potential energy hypersurface, and therefore could theoretically be probed, is stable.
Helpful definitions from the IUPAC gold book
unstable
The opposite of stable, i.e. the chemical species concerned has a higher molar Gibbs energy than some assumed standard. The term should not be used in place of reactive or transient, although more reactive or transient species are frequently also more unstable. (Very unstable chemical species tend to undergo exothermic unimolecular decompositions. Variations in the structure of the related chemical species of this kind generally affect the energy of the transition states for these decompositions less than they affect the stability of the decomposing chemical species. Low stability may therefore parallel a relatively high rate of unimolecular decomposition.)
stable
As applied to chemical species, the term expresses a thermodynamic property, which is quantitatively measured by relative molar standard Gibbs energies. A chemical species A is more stable than its isomer B if $\Delta_\mathrm{r}G^\circ > 0$ for the (real or hypothetical) reaction $$\ce{A -> B},$$
under standard conditions. If for the two reactions:
\begin{align}
\ce{P &-> X + Y} & (\Delta_\mathrm{r}G_1^\circ)\\
\ce{Q &-> X + Z} & (\Delta_\mathrm{r}G_2^\circ)\\
\end{align}
$\Delta_\mathrm{r}G_1^\circ > \Delta_\mathrm{r}G_2^\circ$, P is more stable relative to the product Y than is Q relative to Z. Both in qualitative and quantitative usage the term stable is therefore always used in reference to some explicitly stated or implicitly assumed standard. The term should not be used as a synonym for unreactive or 'less reactive' since this confuses thermodynamics and kinetics. A relatively more stable chemical species may be more reactive than some reference species towards a given reaction partner.
aromatic
In the traditional sense, 'having a chemistry typified by benzene'.
A cyclically conjugated molecular entity with a stability (due to delocalization) significantly greater than that of a hypothetical localized structure (e.g. Kekulé structure) is said to possess aromatic character. If the structure is of higher energy (less stable) than such a hypothetical classical structure, the molecular entity is 'antiaromatic'. The most widely used method for determining aromaticity is the observation of diatropicity in the 1H NMR spectrum.
See also: Hückel (4n + 2) rule, Möbius aromaticity
The terms aromatic and antiaromatic have been extended to describe the stabilization or destabilization of transition states of pericyclic reactions. The hypothetical reference structure is here less clearly defined, and use of the term is based on application of the Hückel (4n + 2) rule and on consideration of the topology of orbital overlap in the transition state. Reactions of molecules in the ground state involving antiaromatic transition states proceed, if at all, much less easily than those involving aromatic transition states.
aromaticity
The concept of spatial and electronic structure of cyclic molecular systems displaying the effects of cyclic electron delocalization which provide for their enhanced thermodynamic stability (relative to acyclic structural analogues) and tendency to retain the structural type in the course of chemical transformations. A quantitative assessment of the degree of aromaticity is given by the value of the resonance energy. It may also be evaluated by the energies of relevant isodesmic and homodesmotic reactions. Along with energetic criteria of aromaticity, important and complementary are also a structural criterion (the lesser the alternation of bond lengths in the rings, the greater is the aromaticity of the molecule) and a magnetic criterion (existence of the diamagnetic ring current induced in a conjugated cyclic molecule by an external magnetic field and manifested by an exaltation and anisotropy of magnetic susceptibility). Although originally introduced for characterization of peculiar properties of cyclic conjugated hydrocarbons and their ions, the concept of aromaticity has been extended to their homoderivatives (see homoaromaticity), conjugated heterocyclic compounds (heteroaromaticity), saturated cyclic compounds (σ-aromaticity) as well as to three-dimensional organic and organometallic compounds (three-dimensional aromaticity). A common feature of the electronic structure inherent in all aromatic molecules is the close nature of their valence electron shells, i.e., double electron occupation of all bonding MOs with all antibonding and delocalized nonbonding MOs unfilled. The notion of aromaticity is applied also to transition states. | {
"domain": "chemistry.stackexchange",
"id": 7051,
"tags": "organic-chemistry, aromatic-compounds, stability"
} |
How can an inverted anharmonic potential $V(x)=-x^4$ have discrete bound states? | Question: I've been watching the lectures on mathematical physics by Carl Bender on youtube where he uses non-Hermitian Hamiltonian methods to prove that the inverted anharmonic potential $V(x)=-x^4$ has discrete bound states with positive energy. How can it be?
Answer: More generally, Carl Bender et al. are considering $PT$-symmetric Hamiltonians of the form
$$ H~=~ p^2 + x^2 (ix)^{\varepsilon}, \qquad \varepsilon\in\mathbb{R} ,$$
cf. e.g. Refs. 1-3. The Hamiltonian $H$ is not self-adjoint in the usual sense, but self-adjoint in a $PT$-symmetric sense. OP's case corresponds to $\varepsilon=2$. The trick is to analytically continue the wave function $\psi$ with real 1D position $x\in\mathbb{R}$ into the complex position plane $x\in\mathbb{C}$, and prescribe appropriate boundary behaviour in the complex position plane.
See e.g. Refs. 1-3 and references therein for further details and applications.
Note that Refs. 1-3 mainly discuss the point spectrum of the operator $H$.
References:
C.M. Bender, D.C. Brody, and H.F. Jones, Must a Hamiltonian be Hermitian?, arXiv:hep-th/0303005.
C.M. Bender, Introduction to $PT$-Symmetric Quantum Theory, arXiv:quant-ph/0501052.
C.M. Bender, D.W. Hook, and S.P. Klevansky, Negative-energy $PT$-symmetric Hamiltonians, arXiv:1203.6590. | {
"domain": "physics.stackexchange",
"id": 19375,
"tags": "quantum-mechanics, hamiltonian, eigenvalue, unitarity, anharmonic-oscillators"
} |
Where and how did computers help prove a theorem? | Question: The purposes of this question is to collect examples from theoretical computer science where the systematic use of computers was helpful
in building a conjecture that lead to a theorem,
falsifying a conjecture or proof approach,
constructing/verifying (parts of) a proof.
If you have a specific example, please describe how it was done. Perhaps this will help others use computers more effectively in their daily research (which still seems to be a fairly uncommon practice in TCS as of today).
(Flagged as community wiki, since there is no single "correct" answer.)
Answer: A very well-known example is the Four Color Theorem, originally proven by exhaustive checking. | {
"domain": "cstheory.stackexchange",
"id": 5207,
"tags": "proofs, survey, examples, proof-techniques"
} |
Conversion from assembly program to low-level machine language | Question: While studying compiler design through an online book from Google Books, referenced as Compiler Design by A.A. Puntambekar, I got stuck on a line. Actually, I am more curious to know the inner details.
The assembler converts the assembly-program to low-level machine
language using two passes. A pass means one complete scan of the input
program. The end of the second pass is the relocatable machine code.
Why are two passes needed for the conversion, and what phases (like lexical analysis, syntax analysis, etc.) are involved in converting assembly to machine code? I have very little idea about it.
If someone over here would like to describe those two-passes or link out to some good resources, I'd be thankful to him/her.
Answer: The first pass can't resolve any forward jumps. For example:
cmp r1, 0
bne label
add r2, r3, r4
label:
add r3, r4, r5
On the first pass, when the assembler gets to the bne label instruction, it doesn't know how far the branch needs to jump because it hasn't seen label yet. On the second pass, it knows where all the branch targets are located and therefore it can go ahead and generate the proper branch/jump instructions at that time. | {
"domain": "cs.stackexchange",
"id": 3673,
"tags": "compilers, code-generation"
} |
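The two-pass scheme described in the answer above can be sketched in a few lines. This is a hypothetical toy assembler (the one-word-per-instruction encoding and the `assemble` function are illustrative, not from the book): pass 1 only records label addresses, and pass 2 can then resolve the forward reference in `bne label`.

```python
def assemble(lines):
    """Toy two-pass assembler: every instruction occupies one word."""
    # Pass 1: one complete scan, recording the address of every label.
    symbols, addr = {}, 0
    for line in lines:
        line = line.strip()
        if line.endswith(":"):           # label definition, emits no code
            symbols[line[:-1]] = addr
        elif line:
            addr += 1
    # Pass 2: emit code; forward references like "bne label" now resolve.
    code = []
    for line in lines:
        line = line.strip()
        if not line or line.endswith(":"):
            continue
        op, *args = line.replace(",", " ").split()
        code.append((op, *[symbols.get(a, a) for a in args]))
    return code

prog = ["cmp r1, 0", "bne label", "add r2, r3, r4",
        "label:", "add r3, r4, r5"]
print(assemble(prog))   # the bne operand resolves to address 3
```

On the first scan the assembler cannot know where `label` ends up; after the scan it does, which is exactly why two passes are used.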
Uniqueness in the path integral vs canonical quantisation | Question: In quantum mechanics it is well known that if you have a Lagrangian $\mathcal{L}$ and you want to quantise it, there is no unique way of doing this. This is because when you construct the Hamiltonian $\mathcal{H}$ and try to promote observables to operators, canonical momenta and position don't commute.
However the other way of doing quantum mechanics is in terms of path integrals. If one has a classical Lagrangian $\mathcal{L}$ then you can write down the propagator as $\int \mathcal{D}[q] e^{i S[q]}$ where $S[q] = \int \mathcal{L} dt$ is the action. It would therefore seem like $\mathcal{L}$ uniquely specifies the appropriate quantum theory, at least for any measurement that only involves propagators.
So my question is: how can one have non-uniqueness for canonical quantisation but apparent uniqueness for path integrals if the two formulations are supposed to be equivalent? Is it that the propagators do not fully determine the theory? Does this then mean that there are some quantities that cannot, even in principle, be calculated with path integrals?
Answer:
In quantum mechanics it is well known that if you have a Lagrangian L and you want to quantise it, there is no unique way of doing this.
This is correct, however note that you also have the constraint that H needs to be Hermitian.
However the other way of doing quantum mechanics is in terms of path integrals. If one has a classical Lagrangian L then you can write down the propagator as $∫D[q]e^{iS[q]}$ where $S[q]=∫Ldt$ is the action. It would therefore seem like L uniquely specifies the appropriate quantum theory, at least for any measurement that only involves propagators.
This procedure is also not unique, but it is subtle to see why. It is true that the Lagrangian is uniquely defined, however path integrals can only be understood in discrete time.
The correct way to understand the path integral is to write it in discrete time. The continuous notation is only a shorthand and a practical way to write path integrals. Therefore one needs to define the Lagrangian in discrete time, where the definition is not unique. To see this, consider the discrete-time form of the path integral where the path is divided into infinitesimal segments starting from time $t$ and finishing at $t+\Delta t$. The question is then: how should one define the Lagrangian? Is it with respect to time $t$, $t+\Delta t$, or maybe an average? This arbitrary choice is directly connected to how you choose to quantize your Hamiltonian. In particular, one can show that in order to recover the Schrödinger equation, the Lagrangian has to be defined as $L(\dot{q}(t),\bar{q}(t))$.
Suppose you start with a Lagrangian that contains $q(t)\dot q(t)$. When you discretize this term there will be an ambiguity:
$$
q(t)\dot q(t)dt\qquad\rightarrow\qquad q_k (q_k-q_{k-1}) \qquad or \qquad (q_k-q_{k-1}) q_{k-1}
$$
or any superposition of the two. This is related to $[q,p]\neq 0$. | {
"domain": "physics.stackexchange",
"id": 58431,
"tags": "quantum-mechanics, path-integral, quantization"
} |
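The discretization ambiguity in the answer above can be made concrete numerically. On a path with Brownian scaling ($\Delta q \sim \sqrt{\Delta t}$, the kind of rough path that dominates the path integral), the two discretizations of $\int q\dot q\,dt$ differ by $\sum_k (\Delta q_k)^2 \approx T$, which does not vanish as the time slicing gets finer. A rough sketch (the function name and random-walk model are mine, for illustration only):

```python
import random

def discretization_gap(n, T=1.0, seed=0):
    """Difference between the two discretizations of the integral of
    q*qdot dt on a random path with Brownian scaling dq ~ sqrt(dt)."""
    random.seed(seed)
    dt = T / n
    q = [0.0]
    for _ in range(n):
        q.append(q[-1] + random.gauss(0.0, dt ** 0.5))  # dq ~ sqrt(dt)
    forward  = sum(q[k] * (q[k] - q[k - 1]) for k in range(1, n + 1))
    backward = sum((q[k] - q[k - 1]) * q[k - 1] for k in range(1, n + 1))
    return forward - backward        # equals the sum of (dq)^2

# The gap stays near T no matter how fine the time slicing is:
for n in (100, 10_000):
    print(discretization_gap(n))
```

For a smooth path the gap would shrink like $\Delta t$; the fact that it stays of order one here is the discrete-time face of $[q,p]\neq 0$.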
How does a buffer overrun corrupt the return address? | Question: I got stuck on the following note while reading a paper about computer security:
The stack mixes program data and control data
– by overrunning buffers on the stack we can corrupt the return addresses!
In fact, I didn't understand how a buffer overrun corrupts the return address within a stack frame. Also the use of the term buffers was not very clear to me - what do they mean?
Answer: When your code calls a function, the called function usually allocates memory for its variables, just before the memory where the return address is stored (the address of the caller).
Let's say you have an array int a[100], and a double variable x. The compiler may quite reasonably allocate memory for x either just before or just after the memory for a. In the first case, setting a[-1] = 100; a[-2] = 1000; will likely change x. In the latter case, setting a[100] = 100; a[101] = 1000; will do the same thing. Trying to store into an array using an index that is too small or too large is called a "buffer overflow".
Now if you store a[102] = 0; or a[103] = 0; then chances are that this overwrites the memory where the caller's return address was stored. And if the function tries to return, the processor will start running code at the address that you stored into a[102] and a[103], with all kinds of negative consequences. | {
"domain": "cs.stackexchange",
"id": 10625,
"tags": "operating-systems, memory-management, virtual-memory, security"
} |
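The corruption mechanism described in the answer above can be modeled without invoking real undefined behavior. Here a stack frame is just a flat list of memory cells, with the saved return address sitting directly past the local array (the layout and the 0x4008 address are illustrative assumptions):

```python
# Model of one stack frame as flat memory cells: the local buffer
# a[0..2] sits right below the slot holding the saved return address.
RET_SLOT = 3
stack = [0, 0, 0, 0x4008]       # a[0], a[1], a[2], caller's return address

def store(index, value):
    """a[index] = value, with no bounds checking — as in C."""
    stack[index] = value

store(1, 42)                    # in-bounds write: return address intact
assert stack[RET_SLOT] == 0x4008
store(3, 0xBEEF)                # "a[3]" overruns the buffer...
assert stack[RET_SLOT] == 0xBEEF    # ...and lands on the return slot
```

When this function "returns", the processor would jump to 0xBEEF instead of 0x4008 — exactly the mix of program data and control data the quoted note warns about.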
rviz ignoring transforms | Question:
With the lates update of Kinetic the display of laser scan data in some old bag files stopped working. I get the message
[ERROR] [1490090873.307973855]: Ignoring transform for child_frame_id "uav0/vicon_laser" from authority "unknown_publisher" because of an invalid quaternion in the transform (-0,018510 0,706860 -0,018510 0,706860)
in the terminal where rviz is started. This transform might be generated from a ros2 static transform but it worked before the latest update.
Is the quaternion invalid or is the check here buggy? And it did display properly before so can I disable this new check?
The launch file line generaing the transform that do now work is:
<node pkg="tf2_ros" type="static_transform_publisher" name="laser_frame_publisher" args="0.12 0 -0.1 -0.01851 0.70686 -0.01851 0.70686 /uav0/vicon /uav0/vicon_laser" />
Originally posted by tompe17 on ROS Answers with karma: 140 on 2017-03-21
Post score: 2
Original comments
Comment by William on 2017-03-21:
You cannot disable the check (what would it do with what it thinks is an invalid quaternion). This would not be in rviz, but rather in tf2 or tf. Do you mean tf2 or ros2? @tfoote that quaternion looks valid, do you know of recent changes that might affect this?
Answer:
We recently added this assertion in #196. Plugging the values into the calculator, the quaternion appears to be just outside the epsilon threshold of 10e-6.
0.018510^2 + 0.706860^2 + 0.018510^2 + 0.706860^2 = 0.9999873594
That's a larger error than I would expect for a quaternion based on floating point calculation errors. There's currently not a way to adjust or change this check.
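The calculation above is easy to reproduce. Assuming the assertion compares $|q\cdot q - 1|$ against its epsilon (the exact form used by tf2 may differ slightly), the posted values fail the check, and renormalizing them before publishing fixes it:

```python
import math

def norm_error(q):
    """|q·q − 1| for a quaternion given as (x, y, z, w)."""
    return abs(sum(c * c for c in q) - 1.0)

q = (-0.018510, 0.706860, -0.018510, 0.706860)
print(norm_error(q))                 # ~1.26e-5: above a 1e-6 threshold

norm = math.sqrt(sum(c * c for c in q))
q_fixed = tuple(c / norm for c in q)
print(norm_error(q_fixed) < 1e-6)    # True: renormalized quaternion passes
```

This also explains the truncation discussion below: printing a normalized quaternion to only five or six decimals introduces exactly this order of error.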
Originally posted by tfoote with karma: 58457 on 2017-03-21
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by tompe17 on 2017-03-21:
These numbers are computed by a program (Python or C++) and then printed. Then the numbers are taken from the printout to be arguments to the static transform program. So of course the whole double precision is not preserved because of truncation in the printing.
Comment by tompe17 on 2017-03-21:
I think it is unreasonable to expect the double precision to be kept since this small error will not influence the measurments at all from the laser. I cannot be the only person that have used a program to compute the quaternion for a simple rotation and put it into a static transform.
Comment by tompe17 on 2017-03-21:
OK, saw you said floating point precision but I think my argument is still valid.
Comment by William on 2017-03-21:
Can you post a bag file with the raw float values? Or do the calculations yourself and show them here, avoiding the truncation of precision due to printing? Ultimately, I think you'll want to raise this on the issue tracker of tf: https://github.com/ros/geometry2/pull/196
Comment by tompe17 on 2017-03-22:
I am not sure how I got the values I have above in the launch file line that starts the static transform publisher. One theory is that I computed it from the angles -90, -87, -90 and then just used 5 of the decimals that was printed. In Python more decimals are printed. Maybe C++ gives 5 decimals.
Comment by tompe17 on 2017-03-22:
Here is the Python calculation with the order messed up somewhat:
HPR: -90.0 -87.0 -90.0
[-0.70686447 0.0185099 -0.70686447 0.0185099 ]
Comment by tompe17 on 2017-03-22:
I added a comment on https://github.com/ros/geometry2/pull/196
Comment by wong_jowo on 2017-04-28:
I tried to publish a TF from an IMU via an Arduino. Here is the data: Q = 0.001404 -0.001587 -0.903198 0.42926, printed from an Arduino 2560, resulting in an epsilon of 3.52645890000502e-5, which is still larger than 10e-6, so rviz reports the error. How do we solve this issue? Otherwise Arduino devices are unusable | {
"domain": "robotics.stackexchange",
"id": 27376,
"tags": "rviz, transform"
} |
Why do we use dual space in some circumstances and inner product in others? | Question: In undergraduate linear algebra, the concept of a dot product, generalized to the inner product on an inner product space, is introduced fairly early as a way to multiply 2 vectors together to get a scalar.
As one continues through an undergraduate curriculum to physics, however, this notion gets largely replaced by that of a dual space, and the inner product of vectors becomes replaced by the notion of multiplying vectors with a member of their dual space.
This seems to me to be two realizations of the same motivation: multiplying 2 vectors together to get a scalar. I understand that there are formal mathematical differences between the two, and the Riesz representation theorem gives a map between them, but I'm struggling with why we need both to exist to begin with.
To help me understand the fundamental conceptual differences between the two, in physics, why does "multiplying 2 vectors together" take the form of the inner product in some circumstances, and dual spaces in others?
E.g., I've seen bra/ket multiplication represented both by an inner product and dual space formalism, and it seems the structures are somewhat redundant if they have the same utility.
General relativity seems to prefer using dual spaces, but it's not clear to me why the extra structure is necessary and, for example, why we couldn't inner product vectors at a point with other members of the same vector space to achieve the same physical results rather than formalizing the physics in a dual space construct.
I'm sure there are more examples, where physics needs a concept of "multiplying 2 vectors together" and the community decides to say the second operand lives in its own dual vector space, rather than having it "share" an inner product space with the first operand. Why is this natural? In what cases is one representation of this idea more "powerful" or "natural" than the other?
Answer: In Hilbert Spaces, you are correct that the Riesz Representation Theorem (RRT) allows you to use inner products and dual spaces interchangeably, however there are some reasons the dual space can be preferable, or even necessary.
Example 1: In Quantum Mechanics one can associate a pure state with an element $|\phi\rangle$ of a Hilbert Space $\mathcal H$, and its naturally corresponding yes/no observable with an element $\langle \phi |$ of the dual $\mathcal H^*$. The probability of a positive outcome for a measurement of $\langle \chi |$ in the state $|\psi\rangle$ is given by $|\langle\chi|\psi\rangle|^2$. Without referencing the dual, we could alternatively just use the symbols $\chi,\psi\in \mathcal H$, and the same calculation would be represented by using the inner product $|(\chi,\psi)|^2$. Now, however, we have to explain what this means since both of the symbols in the inner product are naturally interpreted as states, and there appear to be no measurements involved. In other words, we have two vectors which each have different physical interpretations and the use of the dual distinguishes them perfectly. Not to mention all the other symbolic advantages of the Dirac notation, which are mathematically grounded in the use of the dual space.
Example 2: In an infinite dimensional inner product space $V$ (that is not a Hilbert Space, so that we can't invoke the RRT) the dual $V^*$ can actually be larger than the original space, $V\subset V^*$ (in the sense of an isomorphic embedding). In Quantum Mechanics this fact is used to build something called a Rigged Hilbert Space in which certain states live in the dual but do not have counterparts in the inner product space itself. One can then only calculate probabilities by evaluating a dual element. (This is how Dirac Delta functions can be handled rigorously.)
Example 3: In relativity theory physical quantities are general tensors, which are defined properly as multilinear functionals of the form: $$\tau:V^*\times V^* \cdots \times V^* \times V\times V \cdots \times V \rightarrow \mathbb R$$Whilst a bilinear form corresponding to a 'dot product' of vectors is certainly a tensor (and an important one at that!), most tensors do not reduce to such simple rules. Sooner or later, you have to get used to working with duals. | {
"domain": "physics.stackexchange",
"id": 42491,
"tags": "mathematical-physics, hilbert-space, metric-tensor, mathematics, metric-space"
} |
What are ROS projects to practice? | Question:
Hello, I am using ROS on and off on average every 6 months.
To stay active however, I would like to do more projects in ROS and do more advanced stuff (since I always go back after a long break, I most often do rudimentary IPC things, not much with RVIZ, etc.).
So I wanted to know if you guys can recommend ROS projects that one can do to practice.
Something like these projects of this upcoming course: https://www.udacity.com/course/robotics-nanodegree--nd209
But without grading and free.
Basically I am looking for advanced tutorials beyond what the wiki offers or something like the Kaggle challenges for ROS.
I wouldn't even be opposed if you can refer me to a book that contains this. But I think all the ROS books around are either obsolete and/or very basic.
Originally posted by Borob on ROS Answers with karma: 111 on 2017-04-28
Post score: 0
Answer:
This is not a real answer, but frankly speaking: get yourself a robot and try to do something cool with it! ;-)
No really, I think at some point it does not make sense to make the tutorials more and more advanced. If you have reached that point, maybe it is time to get your hands dirty on real (or student) robotics projects. You will see that there is a lot of advanced stuff behind the tutorials and books.
Originally posted by Wolf with karma: 7555 on 2017-04-28
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 27747,
"tags": "ros, tutorials"
} |
Examples of $2^{\Theta(n^2)}\text{poly}(n)$-time algorithms | Question: What are notable examples of problems for which the best currently known algorithm has $2^{\Theta(n^2)}\text{poly}(n)$ running time ?
Answer: New answer:
The number of pseudoline arrangements is $2^{\Theta(n^2)}$ (http://page.math.tu-berlin.de/~felsner/Paper/numarr.pdf), which in turn can be used to bound the number of order types of $n$ points in the plane. Thus if you want to check some concrete conjecture on point configurations in the plane, you are going to get your desired running time. Examples of algorithms using this approach are here: http://www.ist.tugraz.at/aichholzer/research/rp/triangulations/ordertypes/. If you do not aggressively cut the search space for the specific conjecture you are checking, you would get the running time you want.
In a similar direction, checking whether a generic property holds for all binary $n \times n$ matrices would take this running time. In particular, this is the time it would take to verify a "generic" property of graphs over $n$ vertices. As a "silly" example, think about a Ramsey-type conjecture: every graph over $n$ vertices contains either a clique or an independent set of size $\Theta(\log n)$. Ha! This running time is explicitly mentioned in the Wikipedia page: http://en.wikipedia.org/wiki/Ramsey%27s_theorem#Ramsey_numbers.
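To make the "silly" example concrete, here is a brute-force checker in Python (my own illustration, not from the answer): it enumerates all $2^{\binom{n}{2}}$ graphs on $n$ labelled vertices, so for a property over $n \times n$ adjacency matrices the outer loop alone costs $2^{\Theta(n^2)}$.

```python
from itertools import combinations, product

def has_homogeneous_set(adj, n, k):
    """Does the graph with adjacency matrix adj contain a clique or an
    independent set on k vertices?"""
    for verts in combinations(range(n), k):
        pairs = [adj[u][v] for u, v in combinations(verts, 2)]
        if all(pairs) or not any(pairs):
            return True
    return False

def ramsey_check(n, k):
    """Brute-force the Ramsey-type claim over all 2^(n(n-1)/2) graphs on
    n labelled vertices -- the 2^Theta(n^2) enumeration described above."""
    edges = list(combinations(range(n), 2))
    for bits in product((0, 1), repeat=len(edges)):
        adj = [[0] * n for _ in range(n)]
        for (u, v), b in zip(edges, bits):
            adj[u][v] = adj[v][u] = b
        if not has_homogeneous_set(adj, n, k):
            return False  # found a counterexample graph
    return True

print(ramsey_check(5, 3))  # False: the 5-cycle has no triangle and no independent triple
print(ramsey_check(6, 3))  # True: R(3,3) = 6
```

For the actual conjecture one would take $k = \Theta(\log n)$; the point is only that the naive search space is $2^{\Theta(n^2)}$.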
Old answer:
Some variants of cylindrical algebraic decomposition, if my memory serves me right.
http://en.wikipedia.org/wiki/Cylindrical_algebraic_decomposition | {
"domain": "cstheory.stackexchange",
"id": 2804,
"tags": "ds.algorithms, exp-time-algorithms"
} |
Negative energy/mass bounds on de-Sitter spacetime | Question: There exists a Positive Energy theorem for General Relativity in Anti-de Sitter and asymptotically flat spacetimes, but there is no equivalent theorem for de Sitter spacetimes
Question: Is there a lower bound theorem on negative mass-energy density on de Sitter spacetimes?
The intuition says that the absence of a positive energy theorem in dS has to do with the fact that for small enough positive energy densities, the cosmological expansion beats the gravitational attraction, which means that positive energy densities need to exceed a threshold in order to behave attractively from far away. Is this intuition correct?
Answer: The positive energy theorem talks about the lower bound on the total energy/mass, like the ADM mass.
To be able to define such a concept of the total energy/mass in general relativity, one needs some asymptotic region respecting a time-translational symmetry. That's the region where the gravitational potential (something like the deviation of $g_{00}$ from the vacuum value) goes like $GM/r$.
Minkowski and anti de Sitter space have this global time-like Killing vector and the required asymptotic region where the ADM-like mass may be measured. However, de Sitter space doesn't have one.
So not only is there no positive energy theorem in de Sitter space; there is not even a well-defined notion of a conserved mass in that spacetime background! To understand all these things, one has to see why there is no nontrivial conserved energy/mass in cosmology or in general backgrounds of general relativity; see e.g.
http://motls.blogspot.com/2010/08/why-and-how-energy-is-not-conserved-in.html?m=1 | {
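For concreteness, the ADM-like mass the answer refers to can be written (schematically, in asymptotically flat coordinates with $h_{ij}=g_{ij}-\delta_{ij}$; this is the standard textbook form, added here for illustration):
$$M_{\text{ADM}} = \frac{1}{16\pi G}\lim_{r\to\infty}\oint_{S_r}\left(\partial_j h_{ij} - \partial_i h_{jj}\right)\,dS^i$$
The limit only makes sense when the metric approaches a fixed asymptotic form at large $r$, which is exactly the structure de Sitter space lacks.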
"domain": "physics.stackexchange",
"id": 15248,
"tags": "general-relativity, de-sitter-spacetime"
} |
Overfitting results with Random Forest Regression | Question: I have one image that contains for each pixel 4 different values.
I have used RF in order to see if I can predict the 4th value based on the other 3 values of each pixel. for that I have used python and scikit learn. first I have fit the model, and after validate it I used it to predict this image.
I was very happy and scared to see that I got very high accuracy for my model: 99.95%!
But then when I saw the resulting image, it absolutely wasn't 99.95% accuracy:
original image:
result image:
(I have marked the biggest and most visible difference.)
My question is: why would I get this high accuracy when the visualization shows very well that there is much less accuracy? I understand it might come from overfitting, but then how is this difference not detected?
edit:
Mean Absolute Error: 0.048246606512422616
Mean Squared Error: 0.00670919112477127
Root Mean Squared Error: 0.0819096522076078
Accuracy: 99.95175339348758
Answer: Where are you evaluating the performance of your algorithm?
Are you making a train/test split and evaluating on the test split? It might be that you overfitted your training set and you are just measuring the accuracy there.
If you have made the train/test split and the evaluation correctly, it could be that the images you are predicting do not have the same properties/configuration/topology as the ones you are training with.
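A minimal sketch of the suggested check, with synthetic data standing in for the per-pixel bands (this is an illustration only, not the asker's actual pipeline): score the model on held-out pixels it never saw during fitting, and compare against the training score.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the image: 3 input values per pixel, 1 target value.
rng = np.random.default_rng(0)
X = rng.random((5000, 3))
y = X[:, 0] + 2 * X[:, 1] - X[:, 2] + rng.normal(0, 0.05, 5000)

# Hold out pixels that the model never sees during fitting.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# A large gap between these two scores indicates overfitting;
# reporting only the training score gives the misleading 99.9%-style number.
print("train R^2:", r2_score(y_train, model.predict(X_train)))
print("test  R^2:", r2_score(y_test, model.predict(X_test)))
```

Even with a clean split, a model validated on one image can still fail on another image whose value distribution differs, which is the second failure mode described above.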
"domain": "datascience.stackexchange",
"id": 7741,
"tags": "machine-learning, python, scikit-learn, random-forest, accuracy"
} |
Are "TM M accepts some string of length greater than 100" and "TM M accepts some string of length at most 100" decidable? | Question: I have two questions as in the title:
TM M accepts some string of length greater than 100
TM M accepts some string of length at most 100
Since 1. is infinite, we can rephrase the question as "does TM accept a string", which is clearly undecidable, and there are many proofs for that.
The 2nd question, on the other hand, is finite, and all finite languages are decidable.
Is my reasoning correct? I can't find any confirmation of my theories.
Answer: I might be wrong (ignore this if you understand it correctly; I am a beginner myself), but I think you are misunderstanding Rice's Theorem. Broadly speaking, any non-trivial property of the languages recognized by Turing machines is not decidable. A property is trivial if it holds for all or for none of the languages recognized by Turing machines. There are various posts explaining the theorem; for example, I referred to the answer here: How to show that a function is not computable? while learning the theorem. (Skip to the end if you only need a hint.)
Coming to your case, the languages are $L_1=\{\langle M \rangle \;| \; \text{M accepts some string of length greater than 100} \}$ and $L_2=\{\langle M \rangle \;| \; \text{M accepts some string of length less than equal to 100} \}$
It is not difficult to see that in both cases you are trying to decide non-trivial properties, and thus both $L_1$ and $L_2$ are undecidable. As for your reasoning, it is flawed, as I commented: knowing that the size of a language is finite and being able to construct a Turing machine deciding it is different from being able to decide whether an arbitrary Turing machine recognizes a language of finite size.
On a side note, you could have approached the problem by first seeing that if both of the languages were decidable, the halting problem would become decidable, which we know is not true.
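To make the side note concrete, here is a toy sketch in Python of the standard reduction idea, with ordinary functions standing in for Turing machines (an illustration only — real TMs are of course not Python callables):

```python
def make_M_prime(M, w):
    """Given (a stand-in for) a machine M and a fixed input w, build M'
    that ignores its own input and simply runs M on w."""
    def M_prime(x):
        return M(w)  # M' accepts every x iff M accepts w
    return M_prime

# Toy "machine": accepts exactly the string "ab".
M = lambda s: s == "ab"

M_prime = make_M_prime(M, "ab")
# M' accepts the empty string (length 0 <= 100) iff M accepts w, so a
# decider for L_2 would decide the acceptance problem -- and padding the
# same trick gives the analogous reduction for L_1.
print(M_prime(""))  # True
```

The construction is effective (it only wraps M), which is what makes the reduction legitimate.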
"domain": "cs.stackexchange",
"id": 5559,
"tags": "turing-machines, undecidability, decision-problem"
} |
Gazebo 1.8 & DRCSim 2.6 doesn't see new DRC World models | Question:
After updating to the newest Gazebo & DRCSim I am encountering issues downloading the proper models from the models database for the DRC Worlds.
I have deleted the .gazebo/models directory. I have also tried updating the GAZEBO_MASTER_URI to point to the models_latest which allows me to download some, but not all, of the newest models. (e.g. It still can not find mud_box)
I'm not sure if this is something strange since I am behind a proxy server, but when I look at the URL with my browser everything looks okay. Is there somewhere that the contents of the online models database are cached? Is there some other error that I'm encountering? Is this a bug?
Screenshot Here: http://imgur.com/mZuxTb6
Originally posted by jhoare on Gazebo Answers with karma: 15 on 2013-05-18
Post score: 0
Answer:
There's a ticket for this.
Originally posted by ThomasK with karma: 508 on 2013-05-18
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 3297,
"tags": "gazebo"
} |
How to serve a map for pr2_2dnav | Question:
Hi, I'd like to serve a map to demonstrate pr2_2dnav with a simulated PR2.
I've read the following wiki.
http://www.ros.org/wiki/pr2_2dnav
When I tried to demonstrate pr2_2dnav, I used a map which I downloaded from wiki. (http://www.ros.org/wiki/slam_gmapping/Tutorials/MappingFromLoggedData)
I mean, I typed as following.
rosrun map_server map_server map.yaml
But, there is a problem.
The served map is 2d, so the environment in rviz is just a plane without any walls.
I'd like to bring up the 3D map,
make the simulated PR2 scan the walls virtually,
and make it move using the navigation stack in rviz.
How do I solve this?
Thanks in advance.
Originally posted by moyashi on ROS Answers with karma: 721 on 2012-06-25
Post score: 0
Answer:
You will have to set up a simulation environment that matches your map. I assume that you are using Gazebo for simulating the PR2. To run pr2_2dnav, it's probably easiest to first create a Gazebo world file with your 3D environment and then build a map from that.
For map building, check out this tutorial. For using pr2_2dnav check out this tutorial. It seems like both tutorials are pretty outdated: you will have to replace cturtle by fuerte or electric, and for the latter one you will have to check out and compile the stack wg_robots_gazebo by hand (repository URI: https://code.ros.org/svn/wg-ros-pkg/stacks/wg_robots_gazebo/trunk), but it should contain all the configuration files you need.
Originally posted by Lorenz with karma: 22731 on 2012-06-25
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9930,
"tags": "ros, navigation, rviz, stack"
} |