How many colors exist? | Question: How many "colors" exist?
Our perception:
As far as I know, colors are just different frequencies of light. According to Wikipedia, we can see wavelengths between about 380 nm and 740 nm. This means we can see light with a frequency from about $4.051 \cdot 10^{14}$ Hz to about $7.889 \cdot 10^{14}$ Hz. Is this correct? I don't know if time (and frequencies) are discrete or continuous values. If both are continuous, an uncountable number of "colors" would exist. If it is discrete, there might still exist no upper bound.
An upper bound? I found the article Orders of magnitude of frequencies. The Planck angular frequency seems to be by far higher than all other frequencies. Is this the highest frequency which is possible? Do higher frequencies make sense in physics?
Why do I ask this question: I am imagining the vector space $\mathbb{R}^4$ like the $\mathbb{R}^3$, but with colors. I need an infinite amount of colors if this should make sense. In fact the number has to be uncountable.
Answer: A human eye may only distinguish thousands or millions of colors – obviously, one can't give a precise figure because colors that are too close may be mistakenly identified, or the same colors may be mistakenly said to be different, and so on. The RGB colors of generic modern PC monitors, encoded in 24 bits like #003322, distinguish $2^{24}\sim 17,000,000$ colors.
If we neglect the imperfections of the human eyes, there are of course continuously many colors. Each frequency $f$ in the visible spectrum gives a different color. However, this counting really underestimates the actual number of colors: colors given by a unique frequency are just "monochromatic" colors or colors of "monochromatic" light.
We may also combine different frequencies – which is something totally different than adding the frequencies or taking the average of frequencies. In this more generous counting, there are $\infty^\infty$ colors of light where both the exponent and the base are "continuous" infinities.
If we forget about the visibility by the human eye, frequencies may be any real positive numbers. Well, if you're strict, there is an "academic" lower limit on the frequency, associated with an electromagnetic wave that is as long as the visible Universe. Lower frequencies really "don't make sense". But this is just an academic issue because no one will ever detect or talk about these extremely low frequencies, anyway.
On the other hand, there is no upper limit on the frequency. This is guaranteed by the principle of relativity: a photon may always be boosted to a higher frequency if we switch to another reference frame. The Planck frequency is a special value that may be constructed out of universal constants, and various "characteristic processes" in quantum gravity (in the rest frame of a material object such as the minimum-size black hole) may depend on this characteristic frequency. But the frequency of a single photon isn't tied to any rest frame, and it may be arbitrarily high. | {
"domain": "physics.stackexchange",
"id": 34840,
"tags": "electromagnetic-radiation, time, visible-light, frequency"
} |
An iterator returning all possible permutations of a list in Java | Question: I have this class that iterates over all permutations of an input list:
PermutationIterable.java:
package net.coderodde.util;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
/**
 * This class implements an {@code Iterable} returning all possible permutations
 * of a list.
 *
 * @author Rodion "rodde" Efremov
 * @version 1.6 (Feb 14, 2016)
 */
public class PermutationIterable<T> implements Iterable<List<T>> {

    final List<T> allElements = new ArrayList<>();

    public PermutationIterable(List<T> allElements) {
        this.allElements.addAll(allElements);
    }

    @Override
    public Iterator<List<T>> iterator() {
        return new PermutationIterator<>(allElements);
    }

    private static final class PermutationIterator<T>
            implements Iterator<List<T>> {

        private List<T> nextPermutation;
        private final List<T> allElements = new ArrayList<>();
        private int[] indices;

        PermutationIterator(List<T> allElements) {
            if (allElements.isEmpty()) {
                nextPermutation = null;
                return;
            }
            this.allElements.addAll(allElements);
            this.indices = new int[allElements.size()];
            for (int i = 0; i < indices.length; ++i) {
                indices[i] = i;
            }
            nextPermutation = new ArrayList<>(this.allElements);
        }

        @Override
        public boolean hasNext() {
            return nextPermutation != null;
        }

        @Override
        public List<T> next() {
            if (nextPermutation == null) {
                throw new NoSuchElementException("No permutations left.");
            }
            List<T> ret = nextPermutation;
            generateNextPermutation();
            return ret;
        }

        private void generateNextPermutation() {
            int i = indices.length - 2;
            while (i >= 0 && indices[i] > indices[i + 1]) {
                --i;
            }
            if (i == -1) {
                // No more new permutations.
                nextPermutation = null;
                return;
            }
            int j = i + 1;
            int min = indices[j];
            int minIndex = j;
            while (j < indices.length) {
                if (indices[i] < indices[j] && indices[j] < min) {
                    min = indices[j];
                    minIndex = j;
                }
                ++j;
            }
            swap(indices, i, minIndex);
            ++i;
            j = indices.length - 1;
            while (i < j) {
                swap(indices, i++, j--);
            }
            loadPermutation();
        }

        private void loadPermutation() {
            List<T> newPermutation = new ArrayList<>(indices.length);
            for (int i : indices) {
                newPermutation.add(allElements.get(i));
            }
            this.nextPermutation = newPermutation;
        }
    }

    private static void swap(int[] array, int a, int b) {
        int tmp = array[a];
        array[a] = array[b];
        array[b] = tmp;
    }

    public static void main(final String... args) {
        List<String> alphabet = Arrays.asList("A", "B", "C", "D");
        int row = 1;
        for (List<String> permutation : new PermutationIterable<>(alphabet)) {
            System.out.printf("%2d: %s\n", row++, permutation);
        }
    }
}
What can I improve here? Please, tell me anything that comes to mind.
Answer: Your solution looks good. It's a really challenging task, because you have to "flatten" an algorithm of a recursive nature: you have to break it apart into single steps and be able to resume at the last step made.
So there are little things that I would change:
I like separate classes instead of inner classes; they are handier in testing.
I would have separated the algorithm in "generateNextPermutation()" in an additional class.
I would follow the recursive nature of the problem and build a recursive structure. This is a slightly different approach.
I do not know exactly how the algorithm should behave if you provide an empty list. I assume that you will have one permutation as a result, so I adapted your algorithm to work with an empty array of indices.
You see that our solutions do not differ in structure. Only the algorithm to generate a new permutation is extracted in a class and reformulated.
I unpacked this riddle (I somehow liked this riddle) and I want to provide a solution that can use either your algorithm or mine. In the end they converged toward the same interface. I expected that, as I know there is only one structure that fits the problem best. We may not have found it, but both solutions have a "meeting point" where the structure is mostly the same AND the parts that may differ can work under the same abstraction. Sure, most of the structure was provided by the interfaces "Iterable" and "Iterator", but I found it interesting how the algorithms are interchangeable.
So first of all the interface for our algorithms:
public interface PermutationResolver<T> {
    List<T> resolvePermutation(List<T> base);
    boolean nextStep();
}
Your algorithm:
public class IndicesWalker<T> implements PermutationResolver<T> {

    private int[] indices;

    public IndicesWalker(int elements) {
        indices = new int[elements];
        for (int i = 0; i < indices.length; ++i) {
            indices[i] = i;
        }
    }

    @Override
    public boolean nextStep() {
        if (indices.length == 0) return false;
        int i = indices.length - 2;
        while (i >= 0 && indices[i] > indices[i + 1]) {
            --i;
        }
        if (i == -1) {
            // No more new permutations.
            return false;
        }
        int j = i + 1;
        int min = indices[j];
        int minIndex = j;
        while (j < indices.length) {
            if (indices[i] < indices[j] && indices[j] < min) {
                min = indices[j];
                minIndex = j;
            }
            ++j;
        }
        swap(indices, i, minIndex);
        ++i;
        j = indices.length - 1;
        while (i < j) {
            swap(indices, i++, j--);
        }
        return true;
    }

    @Override
    public List<T> resolvePermutation(List<T> base) {
        List<T> newPermutation = new ArrayList<>(indices.length);
        for (int i : indices) {
            newPermutation.add(base.get(i));
        }
        return newPermutation;
    }

    private void swap(int[] array, int a, int b) {
        int tmp = array[a];
        array[a] = array[b];
        array[b] = tmp;
    }
}
Mine:
public class RecursiveCounter<T> implements PermutationResolver<T> {

    private int i;
    private int max;
    private RecursiveCounter<T> nextState;

    public RecursiveCounter(int max) {
        this.max = max;
        this.i = 0;
        if (this.max > 1) {
            nextState = new RecursiveCounter<T>(max - 1);
        }
    }

    @Override
    public boolean nextStep() {
        boolean wasIncremented = true;
        if (nextState != null) {
            if (!nextState.nextStep()) {
                if (this.i == this.max - 1) {
                    this.i = 0;
                    wasIncremented = false;
                } else {
                    i++;
                }
            }
        } else {
            wasIncremented = false;
        }
        return wasIncremented;
    }

    private int getI(int level) {
        if (level == 0) {
            return this.i;
        } else {
            return nextState.getI(level - 1);
        }
    }

    @Override
    public List<T> resolvePermutation(List<T> base) {
        List<T> result = new ArrayList<>();
        List<T> work = new ArrayList<>(base);
        for (int i = 0; i < base.size(); i++) {
            result.add(work.remove(getI(i)));
        }
        return result;
    }
}
The iterator:
public class PermutationIterator<T> implements Iterator<List<T>> {

    private List<T> base;
    private PermutationResolver<T> permutationResolver;
    private List<T> next;

    public PermutationIterator(List<T> base, PermutationResolver<T> resolver) {
        this.base = base;
        this.next = new ArrayList<>(base);
        this.permutationResolver = resolver;
    }

    private List<T> generateNextPermutation(boolean isLast) {
        List<T> result = null;
        if (!isLast) {
            result = getPermutationResolver().resolvePermutation(base);
        }
        return result;
    }

    @Override
    public boolean hasNext() {
        return next != null;
    }

    @Override
    public List<T> next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        List<T> current = next;
        this.next = generateNextPermutation(!getPermutationResolver().nextStep());
        return current;
    }

    private PermutationResolver<T> getPermutationResolver() {
        return permutationResolver;
    }
}
The iterable:
public class PermutationIterable<T> implements Iterable<List<T>> {

    private List<T> base;
    private PermutationResolver<T> resolver;

    public PermutationIterable(List<T> base, PermutationResolver<T> resolver) {
        super();
        this.base = base;
        this.resolver = resolver;
    }

    @Override
    public Iterator<List<T>> iterator() {
        return new PermutationIterator<T>(base, resolver);
    }
}
Code in action with both algorithms:
public class Main {

    public static void main(String[] args) {
        List<String> base = Arrays.asList();

        PermutationIterable<String> permutationIterable1 =
                new PermutationIterable<String>(base, new IndicesWalker<String>(base.size()));
        for (List<String> permutation : permutationIterable1) {
            System.out.println(permutation);
        }

        PermutationIterable<String> permutationIterable2 =
                new PermutationIterable<String>(base, new RecursiveCounter<String>(base.size()));
        for (List<String> permutation : permutationIterable2) {
            System.out.println(permutation);
        }
    }
} | {
"domain": "codereview.stackexchange",
"id": 20365,
"tags": "java, algorithm, combinatorics, iterator"
} |
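The index juggling in generateNextPermutation() above is the classic next-lexicographic-permutation step: find the rightmost ascent, swap in the smallest larger element, then reverse the suffix. A minimal sketch of the same idea in Python (not the author's code; the names are illustrative):

```python
def next_permutation(a):
    """Advance list `a` to its next lexicographic permutation in place.
    Returns False when `a` was already the last (descending) permutation."""
    i = len(a) - 2
    while i >= 0 and a[i] >= a[i + 1]:   # find rightmost ascent a[i] < a[i+1]
        i -= 1
    if i < 0:
        return False                      # fully descending: no successor
    j = len(a) - 1
    while a[j] <= a[i]:                   # rightmost element greater than a[i]
        j -= 1
    a[i], a[j] = a[j], a[i]
    a[i + 1:] = reversed(a[i + 1:])       # make the suffix ascending again
    return True

indices = [0, 1, 2]
perms = [tuple(indices)]
while next_permutation(indices):
    perms.append(tuple(indices))
print(len(perms))  # 6 — all permutations of three indices, in order
```

The Java code's `indices` array plays the role of `a` here; each call yields the next ordering of list positions, which is then mapped onto the element list.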
Distance from Earth to Mars at time of November 8, 2022 lunar eclipse maximum | Question: I’m looking for the “exact” distance from the Earth to Mars, at the moment of the lunar eclipse maximum earlier this week (November 8, 2022). As many significant digits as you can muster.
Between the center of the Earth and the center of Mars would be fine, or similar.
Let’s say, at UTC, Nov 8 at 10:59:11.
Extra “points” if you provide information on how to find this distance with the date and time as a parameter.
Thanks.
—— edit/update (responding to a comment, asking for the reason for this question)
After taking handheld iphone photos of the November 8, 2022 lunar eclipse from my porch, I was moved to caption one of the images that began to show a touch of the northern lights as the moon dimmed. (resolutions are reduced to allow upload)
Answer: JPL Horizons makes this pretty easy. It gives 0.58950405608881 AU, or 88188551.559899346096867 km, or 54797825.42445825 miles.
I believe the ephemeris used by Horizons is accurate only to about 1 km, but it is the most accurate ephemeris publicly available. | {
"domain": "astronomy.stackexchange",
"id": 6676,
"tags": "distances, mars, lunar-eclipse"
} |
Title case a sentence function | Question: I am new to programming. I have written a solution in two different ways, but would like to know what is considered a better solution, and why.
Additionally, in terms of performance, why would one be considered better?
Solution 1:
function titleCase(str) {
    str = str.toLowerCase();
    str = str.split("");
    str[0] = str[0].toUpperCase();
    for (var i = 1; i < str.length; i++) {
        if (str[i + 1] == " ") {
            str[i + 2] = str[i + 2].toUpperCase();
        }
    }
    str = str.join("");
    return str;
}
Solution 2:
function titleCase(str) {
    str = str.toLowerCase();
    str = str.split(" ");
    str = str.map(function (val) {
        val = val.charAt(0).toUpperCase() + val.slice(1);
        return val;
    });
    str = str.join(" ");
    return str;
}
Answer: Given the two solutions, I think the second is stylistically a better way to do things in JavaScript, because solution 1 uses a for loop, which does not exploit JavaScript's strengths as a functional programming language. An alternative modification of solution 2 would be to make everything one statement and never change the value of str, but instead return the value:
function titleCase(str) {
    return str.toLowerCase()
        .split(" ")
        .map(function (v) { return v.charAt(0).toUpperCase() + v.slice(1); })
        .join(" ");
}
This modification means that the function will not mutate any variables. As for performance, the first solution runs slightly faster on my system using nodejs.
function time_it(fn, arg, n) {
    var time = Date.now();
    for (var i = 0; i < n; ++i) {
        fn(arg);
    }
    return Date.now() - time;
}
SENTANCE = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus quis augue consequat, ullamcorper ipsum in, scelerisque tortor. Nulla justo dolor, ultrices ac varius a, fringilla et nisi. Vestibulum tristique euismod turpis, sed fermentum nibh rutrum tempus. Fusce a metus tincidunt, convallis lectus sed, suscipit ipsum. Duis sagittis et dolor id dapibus. Morbi quam urna, tristique non bibendum sit amet, viverra eget magna. Duis felis nisi, sodales eu ante et, vulputate pellentesque sem. Integer luctus lacus blandit, euismod dui vel, ultrices nibh. Suspendisse potenti. Phasellus bibendum, quam sit amet vehicula dapibus, nunc augue blandit sapien, nec bibendum purus elit at turpis."
console.log("titleCase_1 (ms): " + time_it(titleCase_1,SENTANCE,100000))
console.log("titleCase_2 (ms): " + time_it(titleCase_2,SENTANCE,100000))
console.log("titleCase_3 (ms): " + time_it(titleCase_3,SENTANCE,100000))
The results
titleCase_1 (ms): 2688
titleCase_2 (ms): 4391
titleCase_3 (ms): 4364
This has to do with the line
val.charAt(0).toUpperCase() + val.slice(1);
and the fact that solution 1 doesn't make copies of the str in the loop like map does.
An argument in favor of the second solution is that its performance could be much better. Likely what is happening is that the map function runs serially, but in many functional languages the map function can be, or is, parallelized automatically. | {
"domain": "codereview.stackexchange",
"id": 17296,
"tags": "javascript, performance, beginner, strings, comparative-review"
} |
What is "Cresol Soap"? | Question: I recently came across a suggestion in an old (first published 1893) book that a solution of ~1% cresol soap in water could be used as a way to store a particular kind of root without allowing it to dry out or rot.
I've tried searching, but have been unable to determine what cresol soap might be. The closest I can find are several shady poorly-translated websites selling Lysol as cresol soap, or possibly a mixture of the two.
Does anyone have a clue what this book could be referring to?
Answer:
According to an 1894 tract with the title Merck's Market Report and Pharmaceutical Journal: An Independent Monthly Magazine Devoted to the Professional and Commercial Interests of the Druggist, Volume 3, "cresol soap" was also called "crelium" or "crelium soap".
The google n-gram viewer suggests that "cresol soap" is a historical term, whose usage began in the 1890s, peaked dramatically during World War I and its immediate aftermath, and declined substantially afterwards.
A 1915 British medical report argues for using cresol soap as a vermin control agent in troop populations. It gives a recipe as:
10 gallons of water
1.5 pounds of soft soap
Jeyes' fluid 1.5 ounces
This raises the question of what "Jeyes' fluid" is. Amazingly, Jeyes' Fluid is apparently still a commercial product, and contains a mixture of 4-chloro-m-cresol, isopropanol, terpineol, and "tar acids".
"Cresol soaps" are apparently still an item of commerce somewhere in the world, as a 2011 applications report form an analytical equipment vendor illustrates the analysis of a "cresol soap" sample, although a specific source is sadly not mentioned. The primary detected constituents were m-cresol and p-cresol.
I suspect there are many other recipes that involve mixing soap, water, and crude cresol preparations of various kinds. In the era of cresol soap's invention and popularity, coal was far more ubiquitous than now, and the cresol sources were probably crude fractions of distilled coal tar that were enriched in various isomers of cresol. In those days awareness of potential toxicity from cresols was also not as developed as now. The "tar acids" in Jeyes' fluid probably include a variety of cresol isomers. | {
"domain": "chemistry.stackexchange",
"id": 6799,
"tags": "nomenclature, identification"
} |
Comparison of semiconductors | Question: What quantitative measures could I use to compare semiconductors? I.e. what properties are useful when comparing semiconductors? That seems very non-specific, especially because which properties to compare will depend on what the semiconductor is to be used for. But that's the question as given. I thought bandgap energy, maybe breakdown electric field, but I can't really think of any others. Whether it's n or p type, maybe. Thanks for any suggestions, I appreciate it's fairly vague!
Answer: Important parameters for comparing semiconductors are, as you mentioned, the band gap energy $E_g$ and the doping concentrations $N_d$ or $N_a$. Also very important are the intrinsic carrier density $n_i$, the effective densities of states of the conduction and valence bands, and the electron and hole mobilities $\mu_n$ and $\mu_p$. | {
"domain": "physics.stackexchange",
"id": 34327,
"tags": "semiconductor-physics"
} |
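As a rough illustration of how the listed parameters interrelate, the intrinsic carrier density follows from the effective densities of states and the band gap. A sketch with common textbook values for silicon at room temperature (the numbers are standard approximations, not from the answer):

```python
import math

# Assumed textbook values for silicon at T = 300 K (illustrative only).
Nc = 2.8e19    # effective density of states, conduction band [cm^-3]
Nv = 1.04e19   # effective density of states, valence band [cm^-3]
Eg = 1.12      # band gap energy [eV]
kT = 0.02585   # thermal energy at 300 K [eV]

# n_i = sqrt(Nc * Nv) * exp(-Eg / (2 kT))
ni = math.sqrt(Nc * Nv) * math.exp(-Eg / (2 * kT))
print(f"n_i ~ {ni:.2e} cm^-3")  # on the order of 1e10 cm^-3
```

The same relation explains why wide-band-gap materials (large $E_g$) have dramatically lower intrinsic carrier densities, which is one reason these parameters make useful comparison axes.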
Returning latest database ID | Question: I want my method to use a finally block so I can be sure that my EF entity is disposed after usage, even if an exception is thrown, but I would also like to not handle exceptions in that method.
The current code works but is it possible to omit the catch{} block? Would an exception from the try{} block be re-thrown or would it be forgotten?
Is it necessary to set tm_entity to null in the finally block, or will Dispose take care of everything?
Any other suggestions are welcome!
/// <summary>
/// Returns ID of the latest result database.
/// </summary>
/// <param name="managementEntity">EF Test Management entity.</param>
/// <returns>Latest result database ID.</returns>
public static int GetLatestDatabaseIteration(TEST_MANAGEMENTEntities managementEntity)
{
    if (managementEntity == null)
    {
        throw new ArgumentNullException("Passed entity can not be null!.");
    }
    TEST_MANAGEMENTEntities tm_entity = managementEntity;
    int? latestDatabaseID = null;
    try
    {
        latestDatabaseID = (from trb in tm_entity.TestRunsDatabases
                            select trb.DatabaseID).Max();
        if (latestDatabaseID == null)
        {
            throw new Exception("Cannot evaluate latest Test Result Database ID.");
        }
    }
    catch (Exception ex) // Could I just omit this block?
    {
        throw ex;
    }
    finally
    {
        tm_entity.Dispose();
        tm_entity = null; // is it necessary?
    }
    return (int)latestDatabaseID;
}
EDIT:
After making suggested changes my method looks like this.
public static int GetLatestDatabaseIteration(TEST_MANAGEMENTEntities managementEntity)
{
    if (managementEntity == null)
    {
        throw new ArgumentNullException("Passed entity can not be null!.");
    }
    TEST_MANAGEMENTEntities tmEntity = managementEntity;
    int? latestDatabaseID = (from trb in tmEntity.TestRunsDatabases
                             select (int?)trb.DatabaseID).Max();
    if (latestDatabaseID == null)
    {
        throw new NoRecordsFoundException("Cannot evaluate latest Test Result Database ID.");
    }
    return (int)latestDatabaseID;
}
Changes I made:
I have renamed variables to match camelCase.
I have removed .Dispose on a passed reference as it would also Dispose object in the Caller method.
I have abandoned calling generic exception and replaced it with custom one that more accurately describes this exception.
Calling .Max() if query return null may cause problems, but I will check it with UnitTests soon.
Answer: You really shouldn't rethrow caught exceptions like that.
catch (Exception ex) // Could I just omit this block?
{
    throw ex;
}
This erases the call stack of the exception; instead, use
catch (Exception) // Could I just omit this block?
{
    throw;
}
However the point is moot, because the try-finally block is syntactically legal, but you should bear in mind a very major caveat (emphasis my own):
Within a handled exception, the associated finally block is guaranteed to be run. However, if the exception is unhandled, execution of the finally block is dependent on how the exception unwind operation is triggered. That, in turn, is dependent on how your computer is set up.
As such, it would be a good idea to make sure you catch any exceptions you can reasonably expect, but then that's just standard good practice.
As an aside, I really don't like your use of caps. Type names should be PascalCase, and you should avoid underscores in type names or variable names.
Lastly, you really should throw a subclass of Exception, and never the parent class. | {
"domain": "codereview.stackexchange",
"id": 10673,
"tags": "c#, entity-framework, error-handling"
} |
ModuleNotFoundError: No module named 'qiskit_aer' | Question: I am trying to perform some noise simulations using the Aer Provider. I read through a tutorials here, but I am not able to really implement the qiskit_aer. When I tried import qiskit_aer.noise as noise, it shows me the error message ModuleNotFoundError: No module named 'qiskit_aer'. I tried pip install qiskit_aer but it returns Requirement already satisfied. How can I implement this module? Thanks!
Answer: The namespace qiskit_aer was introduced in 0.11. Try pip install "qiskit-aer>=0.11.0" or pip install -U qiskit-aer. | {
"domain": "quantumcomputing.stackexchange",
"id": 4169,
"tags": "qiskit, programming"
} |
Hilbert space decomposition into irreps | Question: I'm currently following a course in representation theory for physicists, and I'm rather confused about irreps, and how they relate to states in Hilbert spaces.
First what I think I know:
If a representation $D: G \rightarrow L(V)$ of a group $G$ is reducible, then that means there exists a proper subspace of $V$, such that that for all $g$ in $G$ the action of $D(g)$ on any vector in the subspace is still in that subspace: that subspace is invariant under transformations induced by $D$.
Irreducible means not reducible: my interpretation is that an irrep is a representation restricted to its invariant subspace. In other words, an irrep $R$ acts only on the subspace that it leaves invariant. Is this a correct view?
Now, my confusion is the following:
Say we have a system invariant under the symmetries of a group $G$. If this group is finite then any rep $D$ of $G$ can be fully decomposed into irreps $R_i$. We could write any $D(g)$ as the following block diagonal matrix:
$D(g) = \left( \begin{array}{cccc}
R_1(g) & & & \\
& R_2(g) & & \\
& & \ddots & \\
& & & R_n(g)
\end{array} \right)$
I suppose the basis of this matrix is formed by vectors in the respective subspaces that are left invariant by $R_i(g), \forall g \in G$, but here is where I'm not clear on the meaning of it all. How does such a matrix transform states in the Hilbert space, when Hilbert space is infinite dimensional, and this rep $D$ isn't?
I've found a book that gives an example of parity symmetry, using $Z_2 = \{ e,p \}$.
The Hilbert space of any parity invariant system can be decomposed into states that behave like irreducible representations.
So we can choose a basis of Hilbert space consisting of such states, which I suppose would be the basis of the matrix $D(g)$ above? Then the Hilbert space is the union of all these invariant subspaces? In the case of parity there exist two irreps: the trivial one (symmetric) and the one that maps $p$ to $-1$ (anti-symmetric). I suppose this is also a choice of basis, but in this basis $D(g)$ is $2$-dimensional, so I don't understand how this could possibly work on the entire Hilbert space.
I apologize for the barrage of questions, but I honestly can't see the forest for the trees anymore.
Answer: Your understanding of reducible and irreducible representations is a little bit muddled. Let me try to clarify this a bit:
A reducible representation $D:G\to \text{GL}(V)$ is one that has a nontrivial invariant subspace $W$. That is, there exists a nonzero $W<V$ such that for all $g\in G$ and all $w\in W$, the action $D(g)w\in W$ remains in the subspace.
By contrast, an irreducible representation is one where no such subspace exists. That is, for any nonzero proper subspace $W$, there exist a $g\in G$ and a $w\in W$ such that $D(g)w \notin W$.
After that, the main source of your confusion, I think, is the fact that the invariant subspaces do not need to be finite-dimensional. This is why formulations of the type
$$D(g) = \left( \begin{array}{cccc}
R_1(g) & & & \\
& R_2(g) & & \\
& & \ddots & \\
& & & R_n(g)
\end{array} \right)$$
can be rather misleading. It is indeed possible to construct finite direct sums of vectors and of operators which are infinite-dimensional, and to represent them graphically using matrices; it's a little bit involved but I think it will help clarify the issue.
Consider, then, a vector space $V$ which is the direct sum of its subspaces $W_1,\ldots,W_n\leq V$. By definition, this means that for every $v\in V$ there exist unique vectors $w_j\in W_j$ such that $v=\sum_j w_j$. It is possible, in this case to represent $v$ using the notation
$$v = \left( \begin{array}{c}
w_1\\
\vdots \\
w_n
\end{array} \right).$$
However, it is important to note that the $w_j$ are not numbers; instead, they are vectors in as-yet-unspecified vector spaces $W_j$. Moreover, these could indeed be infinite-dimensional. (Indeed, if $V$ is infinite-dimensional then at least one of the $W_j$ needs to be.)
Linear transformations $T:V\to V$ can be treated similarly. For any $w_j\in W_j$, $T(w_j)$ is in $V$ which means that it can be decomposed as $T(w_j)=w'_1+\ldots+w'_n$, with each $w'_j\in W_j$. These new vectors are unique for each $w_j$, which means that they are functions of it, and it's easy to show that the dependence is linear. This allows us to get new sub-functions $T_{kj}:W_j\to W_k$, which have the property that for every $w_j\in W_j$
$$
T(w_j)=\sum_k T_{kj}(w_j).
$$
This then extends, by linearity, to the action of $T$ on a general vector $v=\sum_j w_j\in V$, which is then written
$$
T(v)=\sum_{k,j} T_{kj}(w_j).
$$
With this machinery in place, you can represent $T$ as a matrix,
$$T = \left( \begin{array}{cccc}
T_{11} & T_{12} & \cdots & T_{1n} \\
T_{21} & T_{22} & \cdots & T_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
T_{n1} & T_{n2} & \cdots & T_{nn}
\end{array} \right).$$
The advantage of this notation is that the matrix-vector product works perfectly:
$$T\, v = \left( \begin{array}{cccc}
T_{11} & T_{12} & \cdots & T_{1n} \\
T_{21} & T_{22} & \cdots & T_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
T_{n1} & T_{n2} & \cdots & T_{nn}
\end{array} \right)\left( \begin{array}{c}
w_1\\
\vdots \\
w_n
\end{array} \right).$$
So why have I gone to such lengths to define matrices? The important thing is that the submatrices need not be finite dimensional.
To bring things down to something more concrete, consider the specific case of parity on $L_2(\mathbb R)$. Here $L_2$ (dropping the $\mathbb R$) splits into an even and an odd component,
$$L_2=L_2^+\oplus L_2^-,$$
which is just the statement that every function $f$ can be seen as the sum of its even and odd parts $f_+$ and $f_-$, or in column vector notation
$$f=\begin{pmatrix}f_+\\f_-\end{pmatrix}.$$
Similarly, the parity operator splits into two constant parts, the identity $\mathbb I:L_2^+\to L_2^+$ on even functions, and minus the identity on odd functions, $-\mathbb I:L_2^-\to L_2^-$. In matrix notation,
$$
P=\begin{pmatrix}\mathbb I&0\\ 0&-\mathbb I\end{pmatrix},
$$
and
$$
Pf=\begin{pmatrix}\mathbb I&0\\ 0&-\mathbb I\end{pmatrix}
\begin{pmatrix}f_+\\f_-\end{pmatrix}
=\begin{pmatrix}f_+\\-f_-\end{pmatrix}.
$$
As before, the individual subrepresentations $R_j(g)=\pm\mathbb I$ are infinite-dimensional operators, and the fact that $D(g)$ is written as a matrix with finite rows and columns does not imply that it is finite-dimensional. This aspect of the discussion can get dropped from textbooks (and is never very prominent to begin with), so it's perfectly understandable to be confused about it.
I hope this helps clarify the issue but let me know if it doesn't. | {
"domain": "physics.stackexchange",
"id": 16604,
"tags": "hilbert-space, group-theory, group-representations, representation-theory"
} |
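The parity decomposition above can be checked numerically: sample a function on a grid symmetric about zero, split it into even and odd parts, and verify that the "parity operator" (grid reversal) acts as $+1$ on $L_2^+$ and $-1$ on $L_2^-$. A small pure-Python sketch:

```python
import math

# Sample f(x) = exp(x) on a grid symmetric about 0; reversing the grid
# plays the role of the parity operator P, since (Pf)(x) = f(-x).
xs = [i * 0.5 for i in range(-4, 5)]              # -2.0, -1.5, ..., 2.0
f = [math.exp(x) for x in xs]

f_rev = list(reversed(f))                          # P applied to f
f_plus = [(a + b) / 2 for a, b in zip(f, f_rev)]   # even part (lives in L2^+)
f_minus = [(a - b) / 2 for a, b in zip(f, f_rev)]  # odd part (lives in L2^-)

# P acts as +1 on the even part and as -1 on the odd part:
err_even = max(abs(p - q) for p, q in zip(reversed(f_plus), f_plus))
err_odd = max(abs(p + q) for p, q in zip(reversed(f_minus), f_minus))
print(err_even, err_odd)  # both 0 (up to rounding)
```

Here the even part of $e^x$ is of course $\cosh x$ and the odd part $\sinh x$; the block-diagonal action $P(f_+ + f_-) = f_+ - f_-$ is exactly what the $2\times 2$ operator matrix in the answer expresses, with each "entry" acting on an infinite-dimensional subspace.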
RVIZ: Efficient way to display large, incremental 2D images in 3D space | Question:
Hey guys,
as the title states, I am looking for an efficient way to display a large, incrementally growing, high resolution image in 3d space in RVIZ. Any suggestions?
My project partner and I are currently rendering every image on a separate rectangle, which seems to be working. However, it is eating the RAM for breakfast, which is not ideal.
Best regards,
Alex
Originally posted by Laxnpander on ROS Answers with karma: 31 on 2017-11-29
Post score: 0
Answer:
Take a look at https://github.com/lucasw/rviz_textured_quads (See also #q22730).
How high is your resolution and frame rate?
When you say incrementally growing do you mean that you'd like to transmit smaller images that update a larger image (like costmap updates) rather than retransmitting the entire image every frame? It probably wouldn't be too hard to make a version of rviz_textured_quads that can work like that.
Originally posted by lucasw with karma: 8729 on 2017-11-29
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Laxnpander on 2017-11-29:
Yes, that is what I mean! Resolution can be up to several megapixels I guess, but framerate is quite low. Maybe round about ~1 Hz. Graphics card is available.
Comment by lucasw on 2017-11-29:
Several megapixels at 1 fps doesn't sound too bad for rebroadcasting the full image, inefficient as it may be.
Comment by Laxnpander on 2017-11-29:
Ah, no that was a little misguiding. I have a stream of images that are mapped into a common reference plane. Every single image has 1 Mpix and comes with 1 FPS. The total map however is therefore at least 1 Mpix but can grow up to way beyond that.
Comment by lucasw on 2017-11-29:
I see- even with a costmap update style approach the rviz end might break down with such a huge texture. It would maybe need to be turned into tiles. | {
"domain": "robotics.stackexchange",
"id": 29476,
"tags": "ros, navigation, mapping, rviz, visualization"
} |
What is "noise" in observed data? | Question: I am reading Pattern Recognition and Machine Learning by Bishop, and in the chapter about probability, "noise in the observed data" is mentioned many times. I have read on the internet that noise refers to inaccuracy while reading data, but I am not sure whether that is correct. So, what actually is noise in observed data? And what are additive noise and Gaussian noise?
Answer: When you have sensors, the values you receive change even if the signal that was recorded didn't change. This is one example of noise.
When you have a model of the world, it abstracts from the real relationships by simplifying things which are not too important. To account for the simplification, you model the error as noise (e.g. in a Kalman filter).
But noise sources can be anything. For example, in an image classification problem, data compression can distort an image. Images can have different resolution; low resolution signals are harder to classify than high resolution figures. Aliasing effects can also distort images.
And what is additive noise?
Suppose your system equation is
$$z = H \cdot x$$
where $z \in \mathbb{R}^{n_m}$ is the observation, $x \in \mathbb{R}^{n_x}$ is the state you're interested in and $H \in \mathbb{R}^{n_m \times n_x}$ is a transformation matrix. Then the noise could interact with your system in any way, but most of the time it is logical and practical to assume that the noise is additive, meaning your model is
$$z = H \cdot x + r$$
where $r$ is sampled from a random variable of any distribution.
What is (additive) Gaussian noise?
$$r \sim \mathcal{N}(\mu, \sigma^2)$$
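To make this concrete, here is a minimal pure-Python sketch (an illustration, not from Bishop) of the scalar case: a fixed state $x$ observed through $z = Hx + r$ with zero-mean Gaussian noise. Because the noise is zero-mean, averaging many noisy observations recovers the noise-free value $Hx$:

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

x = 5.0      # true (hidden) state
H = 2.0      # scalar observation model: ideal measurement is H * x = 10.0
sigma = 0.5  # standard deviation of the additive Gaussian noise r

# Each observation: z = H * x + r, with r ~ N(0, sigma^2)
observations = [H * x + random.gauss(0.0, sigma) for _ in range(10_000)]

# The sensor readings change even though x never changed -- that's noise.
mean_z = statistics.fmean(observations)
spread = statistics.pstdev(observations)
print(f"mean of z: {mean_z:.2f} (ideal: {H * x})")
print(f"spread of z: {spread:.2f} (noise sigma: {sigma})")
```

The sample mean is close to the ideal measurement, while the sample spread is close to the noise's $\sigma$.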
See normal distribution | {
"domain": "datascience.stackexchange",
"id": 1349,
"tags": "machine-learning, data, probability"
} |
How are Interference Avoidance and Collision Avoidance different? | Question: While explaining a controller module named CollisionDetector, someone told me that it only checks self-interference and moves accordingly, without detecting collisions. To me both sound the same. How are they different?
Answer: One is internal, one is external.
Self-interference refers to instances where something like a robot arm (with many degrees of freedom) may attempt to move in a path that crosses part of its own body. Your CollisionDetector is likely keeping track of the joint angles in a planned arm movement and seeing if any of them result in such a condition. The robot doesn't need any information about its environment to do this, just a reliable measurement of its own position (and possibly the shape of whatever it may be gripping).
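A toy illustration of that idea (hypothetical code, not tied to any particular CollisionDetector): for a planar arm, forward kinematics turns the joint angles into link segments, and self-interference is just a segment-intersection test between non-adjacent links, with no sensing of the environment involved:

```python
import math

def forward_kinematics(angles, link_len=1.0):
    """Return joint positions of a planar serial arm.
    Each angle is relative to the previous link (radians)."""
    points = [(0.0, 0.0)]
    x = y = theta = 0.0
    for a in angles:
        theta += a
        x += link_len * math.cos(theta)
        y += link_len * math.sin(theta)
        points.append((x, y))
    return points

def segments_cross(p1, p2, p3, p4):
    """Proper intersection test for segments p1-p2 and p3-p4."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def self_interferes(angles):
    """True if any two non-adjacent links of the arm intersect."""
    pts = forward_kinematics(angles)
    links = list(zip(pts, pts[1:]))
    for i in range(len(links)):
        for j in range(i + 2, len(links)):  # adjacent links share a joint
            if segments_cross(*links[i], *links[j]):
                return True
    return False

# A gently curled arm does not touch itself...
print(self_interferes([0.0, 0.3, 0.3]))   # False
# ...but folding the last links back sharply makes link 3 cross link 1.
print(self_interferes([0.0, 2.8, 2.8]))   # True
```

Note that only the arm's own joint angles (its internal state) are needed, which is exactly the internal/external distinction above.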
"Collision Avoidance" usually refers to the detection of transient objects with externally-facing sensors. For example, a robot has planned a path based on a map of a room but other robots (or people) might be moving around in that same space; the robot detects them with some sort of sensor, and adjusts its planned path (or just stops) until the risk of collision has gone away. This is usually more difficult than detecting self-interference, because modeling the external environment is a bit more complicated than modeling a robot's internal state. | {
"domain": "robotics.stackexchange",
"id": 319,
"tags": "movement"
} |
Problem with using subscriber from initialize method | Question:
Hi. Sorry for all mistakes, English is not my native language. I am porting my project from Noetic to Foxy and I have the following code:
bool STMRobotHW::initialize(ros::NodeHandle nh)
{
...
hw_feedback_sub = nh.subscribe("hw_feedback", 1000, &STMRobotHW::feedbackCallback, this);
}
void STMRobotHW::feedbackCallback(const minicar_control::Feedback &fb_msg)
{
....
}
I have a hard time understanding how this subscribe call should look in Foxy, since there is no obj parameter for create_subscription in ROS 2, or at least I couldn't find one. I appreciate any help.
Originally posted by Edvard on ROS Answers with karma: 95 on 2022-10-20
Post score: 0
Original comments
Comment by ravijoshi on 2022-10-20:
Please look at Writing a simple publisher and subscriber (C++) tutorial suggested by @thejeeb. Furthermore, you can look at ros2/examples and ros2/demos
Comment by Edvard on 2022-10-21:
Thank you for help
Answer:
You need a Node to create a subscriber. See this tutorial for an example.
Originally posted by thejeeb with karma: 118 on 2022-10-20
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Edvard on 2022-10-21:
Thank you, for your help. I'll try it | {
"domain": "robotics.stackexchange",
"id": 38066,
"tags": "ros"
} |
How is thermodynamic entropy defined? What is its relationship to information entropy? | Question: I read that thermodynamic entropy is a measure of the number of microenergy states. What is the derivation for $S=k\log N$, where $k$ is Boltzmann constant, $N$ number of microenergy states.
How is the logarithmic measure justified?
Does thermodynamic entropy have anything to do with information entropy (defined by Shannon) used in information theory?
Answer: I think that the best way to justify the logarithm is that you want entropy to be an extensive quantity -- that is, if you have two non-interacting systems A and B, you want the entropy of the combined system to be
$$
S_{AB}=S_A+S_B.
$$
If the two systems have $N_A,N_B$ states each, then the combined system has $N_AN_B$ states. So to get additivity in the entropy, you need to take the log.
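A quick numerical check of that additivity (toy state counts chosen purely for illustration):

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def entropy(num_states):
    """S = k * log(N) for a system with N equally likely microstates."""
    return k * math.log(num_states)

# Two toy non-interacting systems:
N_A, N_B = 10**20, 10**30

S_A = entropy(N_A)
S_B = entropy(N_B)
S_AB = entropy(N_A * N_B)   # the combined system has N_A * N_B states

# The log turns the product of state counts into a sum of entropies.
print(math.isclose(S_AB, S_A + S_B))   # True
```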
You might wonder why it's so important that the entropy be extensive (i.e., additive). That's partly just history. Before people had worked out the microscopic basis for entropy, they'd worked out a lot of the theory on macroscopic thermodynamic grounds alone, and the quantity that they'd defined as entropy was additive.
Also, the number of states available to a macroscopic system tends to be absurdly, exponentially large, so if you don't take logarithms it's very inconvenient: who wants to be constantly dealing with numbers like $10^{10^{20}}$? | {
"domain": "physics.stackexchange",
"id": 39517,
"tags": "thermodynamics, entropy, information"
} |
Factored form vs partial fraction form? | Question: I have already understood partial fraction and here is link for my relevant DSP SE question
Finding inverse z transform for two sided ROC?
But now I want to know: is there any difference between the partial fraction form and the factored form in a signal processing context?
For example, I have a z-transform $$Y(z)=\frac{z^2-z}{z^2+1.3z+0.3}$$
a)What will be its partial fraction form?
b)What will be its factored form?
Answer: The partial fraction form helps in calculating the inverse z-transform, since we can invert each term of the expansion by inspection and hence obtain the inverse of the whole transfer function. The factored form directly gives the poles and zeros, by equating the denominator and numerator to zero respectively. For your example, the factored form is $$Y(z)=\frac{z(z-1)}{(z+1)(z+0.3)},$$ which shows zeros at $z=0,1$ and poles at $z=-1,-0.3$. Expanding $Y(z)/z$ in partial fractions then gives $$Y(z)=\frac{20}{7}\cdot\frac{z}{z+1}-\frac{13}{7}\cdot\frac{z}{z+0.3},$$ where the inverse of each term can be read off a z-transform table once the ROC is fixed. | {
"domain": "dsp.stackexchange",
"id": 8470,
"tags": "matlab, signal-analysis, z-transform"
} |
When considering Van der Waals forces, why do dipoles form? | Question: Imagine two atoms, and only consider the Van der Waals force. The electron cloud will jitter due to its quantum mechanical nature; some of these jitters form dipoles, some do not. However, on average they form dipoles. Why?
This question is equivalent to asking (although it is useful here to ask in another way): why is the potential energy of a dipole lower than a non-dipole?
What determines which electrons are going to move to the middle of the atoms to form a dipole, and which ones move to the outside of an atom (I assume that a dipole is formed like this $e^-..ion...........e^-..ion$ for the centre of positive charge shifted rightwards of the centre of negative charge): given that the jitters are instantaneous, and photons travel only at the speed of light, how are dipoles formed quickly enough before the next jitter?
Am I falling into a rut because I am not considering the wave nature of electrons?
Answer:
on average they form dipoles.
Not quite. On average they form the nice (electrically neutral) electron orbitals we know and love. However, when they happen to be in a dipole state, there is an attraction between the two atoms, and that is evidently more than enough to overcome the randomly occurring opposing dipole states. That's just mathematics.
why is the potential energy of a dipole lower than a non-dipole
It isn't. But quantum mechanics allows electrons to visit higher energy states, with a probability dependent on the energy difference.
What determines which electrons are going to move to the middle of the atoms to form a dipole, and which ones move to the outside of an atom
Nothing. The "movement" of electrons is just a random statistical fluctuation in the location of the electrons' centers of charge. Quantum-mechanical events like this are inherently nondeterministic. And strictly speaking, you can't tell electrons apart anyway. There is no "which electron," at one point in time you have one electron here and another there, the next they may have switched places. Or not, you can never know.
(I assume that a dipole is formed like this $e−..ion...........e−..ion$ for the centre of positive charge shifted rightwards of the centre of negative charge)
Don't think of those as being electrons, just the statistical average of the negative charge distribution.
given that the jitters are instantaneous, and photons travel only at the speed of light, how are dipoles formed quickly enough before the next jitter
First, they're not quite instantaneous (that would have them basically everywhere possible at once), and not quite jitters.
In summary, don't think of electrons as particles with a distinct position moving around, just think of them as waves, with a peak that occasionally forms, and that peak causes the electric arrangement of two nearby atoms to, on average, be mildly attractive. | {
"domain": "chemistry.stackexchange",
"id": 208,
"tags": "energy, intermolecular-forces"
} |
Is the theory of evolution being disproved by bats? | Question: For some species, Darwin's theory of evolution makes perfect sense. I can easily imagine how, for example, the giraffe evolved to its current appearance: natural selection favored individuals that could consume more vegetable food from trees using longer necks, some individuals were born with necks longer than average through pure genetic randomness, and the long-neck trait was propagated to descendant individuals through genetic inheritance. I have no problem understanding this kind of evolution.
Now let's have a look at the bat and its relatives. The bat is one of the few mammals that have anything to do with flying, and the only one that took flying to the bird level. Paleontologically, the first mammals date to the dinosaur era and initially looked similar to the present-day shrew (which looks much like a mouse). The question is: how in the world could prehistoric mouse-like creatures grow wings over time? It is impossible to believe that some mouse-like individuals got wing-like limbs by mutation and that the "wings" grew out accompanied by the knowledge of how the "wings" can actually be used. Ok, then maybe the first wings were tiny moth-sized wings that then grew larger? But where would natural selection come into play in this case? Such mouse-like individuals would have no advantage over their wingless relatives and thus would not be able to transfer those wing-growing genes to their descendants; quite the contrary, such individuals with useless mutations that interfere with their ability to walk would be suppressed by natural selection and therefore "weeded out".
So what is the story behind the bat's wings, and is Darwin's theory really able to support it?
Answer: Take a look at this little fellow:
It's a flying squirrel — a shy little nocturnal rodent which lives in trees and, despite its name, does not actually fly. It does, however, have a skin membrane called a patagium between its fore and hind limbs which allows it to glide from tree to tree and thus evade ground predators.
It's not hard to see how the flying squirrel's patagium may have evolved: after all, ordinary squirrels, to which the flying squirrel is indeed related, also spend most of their time in trees and avoid the ground, often performing quite impressive leaps to cross from one tree to another. With sufficient pressure to minimize time spent on the ground, any little morphological changes that allowed longer leaps would be favored by natural selection.
Indeed, there are plenty of other groups of mammals which have independently evolved very similar adaptations to gliding. Given how many small arboreal mammals there are, this is perhaps not surprising. What's special about bats is not the fact that they possess flight membranes — it's that they're the only group of mammals, so far, to have taken the next step to actual powered flight. | {
"domain": "biology.stackexchange",
"id": 494,
"tags": "evolution, genetics, zoology"
} |
What is the best way to learn about gene regulation? | Question: For those of you who already have a decent knowledge of how gene regulation works, how should someone new to the topic acquire a detailed overview?
Is there a particularly good and up-to-date resource?
Is it better to learn from the primary literature? If so, what are the main sub-topics within gene regulation to cover?
Answer: I would not learn from the primary literature; the terminology will likely be confusing, and there will be a tendency to conflate whatever the current methods are with the concepts.
I'd suggest going for a basic introductory course in the topic, or a textbook. Here is a resource from NHGRI which may be helpful as an introduction.
Here is an online textbook section that might handle the topic at a very basic level. Here is an online version of a traditional standard text at the undergraduate level.
Probably the best somewhat more technical introduction to the basic concepts, without a lot of the fuss, is Mark Ptashne's classic 1986 book on phage lambda.
I saw Ptashne give a talk in the mid 2010s in which he declared (somewhat pompously) that we still didn't understand anything in gene regulation other than phage lambda. No one contradicted him, and I don't think that anyone would contradict him if he said the same thing today. | {
"domain": "biology.stackexchange",
"id": 12415,
"tags": "gene-expression, book-recommendation, gene-regulation"
} |
Reverse direction of fillet | Question: How do you reverse the direction of a fillet like that highlighted below?
Answer: During the selection process, select the two faces (instead of selecting an edge).
This should create the desired outcome. | {
"domain": "engineering.stackexchange",
"id": 3949,
"tags": "solidworks"
} |
How is asteroidal rock formed? | Question: So when planets form, dust from the protoplanetary nebula gets collected by gravity and then heated and reformed under pressure until it forms dense masses of stuff which we call rock.
However, asteroids don't have nearly enough gravity to do this, but pictures of asteroids like Bennu still show sizeable boulders on their surface. I'm aware that some asteroids are formed from fragments knocked off actual planets, but isn't that a tiny proportion of the whole? I'd expect asteroids to be made out of fluffy dust lightly sintered together from vacuum welding, but that's not what we see. Why?
(Osiris-Rex's photo of Bennu's surface)
Answer: Another major theory regarding the early solar system is that there was a relative abundance of short-lived radioactive isotopes at the time the solar system formed. These short-lived isotopes would have contributed greatly to the heating of stony materials, thereby making it possible for rocky planetesimal-sized (~ 1km diameter) objects to form in conditions that only existed for an astronomically short period of time. Aluminum-26 (26Al), with a half-life of 717 thousand years, is the key suspect with regard to this heating.
There are lots of signs that 26Al did exist in the early stages of the solar system and did contribute significant heating to planetesimal-sized objects. For a while it was thought that these signs of 26Al meant that a nearby nova / supernova must have occurred right about at the time that our solar system started to form. Recent research indicates that a protostar might be able to produce 26Al by itself.
Once that period of early heating via short-lived radioactive isotopes ended, the resulting planetesimal-sized objects would have been subjected to collisions, and those collisions would have resulted in either larger or smaller objects, depending on collisional energy. The asteroid belt, being subject to perturbations from Jupiter, would have been subject to greater collisional energy than the inner solar system and thus a greater chance of objects breaking apart.
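A back-of-the-envelope illustration (not from the references below) of why that heating window is so short: with a 717 kyr half-life, the surviving fraction of 26Al is $N/N_0 = 2^{-t/t_{1/2}}$, which collapses within a few million years:

```python
t_half = 0.717  # half-life of Al-26, in Myr

def fraction_remaining(t_myr):
    """Fraction of the original Al-26 left after t_myr million years."""
    return 2 ** (-t_myr / t_half)

# Within a few million years almost all of the Al-26 (and hence its
# radiogenic heat output) is gone -- the heating window is brief.
for t in (1, 5, 10):
    print(f"after {t:2d} Myr: {fraction_remaining(t):.4f}")
```

After about 5 Myr, less than 1% of the isotope remains, so only planetesimals that formed early enough were melted by it.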
References:
Typhoon Lee, D. A. Papanastassiou, and G. J. Wasserburg. "Aluminum-26 in the early solar system-Fossil or fuel" The Astrophysical Journal 211 (1977): L107-L110
Using Aluminum-26 as a Clock for Early Solar System Events
Brandt AL Gaches, Stefanie Walch, Stella SR Offner, and Carsten Münker. "Aluminum-26 enrichment in the surface of protostellar disks due to protostellar cosmic rays" The Astrophysical Journal 898, no. 1 (2020): 79 | {
"domain": "astronomy.stackexchange",
"id": 6203,
"tags": "asteroids, planetary-formation"
} |
Sulfinate R/S configuration | Question: How to determine the configuration of the attached chiral compound? It can be represented with two Lewis structures: in the one to the left, the =O substituent has the highest priority, while in the right one, the -OMe substituent has the higher priority. In the former case the configuration would be R, in the latter S.
Answer: According to the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book), the trigonal pyramidal centre of the sulfinate group is converted to a tetrahedral centre using a phantom atom of low priority.
P-93.3.3.2 Trigonal pyramid
The configuration of molecules containing a trigonal pyramidal center (TPY-3) is described in a similar way to that of tetrahedral centers (T-4) described above in P-92 (see IR-9.3.4.3, ref. 12). The tetrahedral configuration is achieved by adding a phantom atom (0) to the central atom perpendicular to the base of the pyramid. (…)
Traditionally, sulfoxides have been considered as a tetrahedral system composed of a central atom, ligands, and a lone pair of electrons (or phantom atom). No polyhedral symbol is used.
P-93.3.4.1 The chirality symbols ‘R/S’.
The stereodescriptors ‘R’ and ‘S’ (as defined in P-92.2) are used to indicate the absolute configuration of a trigonal pyramidal system discussed in P-93.3.3.2 (see Rule IR-9.3.4.3, ref. 12). A phantom atom of low priority, and not a pair of electrons, is used to create the tetrahedral configuration permitting the use of ‘R/S’ stereodescriptors in the manner described for tetrahedral stereogenic centers. As no locants are present, the name following the stereodescriptor is placed in parentheses, or bracket, according to the required nesting order.
The examples given for this rule in the Blue Book include ethyl (R)-(4-nitrobenzene-1-sulfinate), which confirms that this rule is applicable to the similar compound methyl (S)-methanesulfinate, which is given in the question.
Notably, an equivalent rule also exists in the current version of the current version of Nomenclature of Inorganic Chemistry – IUPAC Recommendations 2005 (Red Book).
IR-9.3.4.3 The R/S convention for trigonal pyramidal centres
Molecules containing a trigonal pyramidal centre (TPY-3) may exist as a pair of stereoisomers. The configuration of this centre can be described in a similar way to that of a tetrahedral centre. This is achieved through notional placement of a ‘phantom atom’ of low priority in the coordination site that would create a tetrahedral centre from a trigonal pyramidal centre. The centre can then be identified as R or S by the methods described above.
The use of some bonding theories leads to the placement of a lone pair on a trigonal pyramidal centre. If this is done, the absolute configuration of the centre is also described by the R/S convention, in this case by placing the ‘phantom atom’ in the site that is occupied by the lone pair. Examples of this practice may be found in the description of absolute configurations for sulfoxides in which the alkyl substituents are different.
After placement of the phantom atom, the R/S convention for tetrahedral centres can be used. Rules for the tetrahedral configuration of elements other than carbon are given in Section P-93.2 of the Blue Book. In particular, the rule for compounds containing sulfur refers to the rule for similar phosphorus compounds.
P-93.2.5 Sulfates, sulfonates, and related compounds
Sulfates, sulfonates, and related anions are treated in the same way as phosphate anions (see P-93.2.4). (…) Sulfoxides are discussed in P-93.3.3.2.
The corresponding rule for compounds containing phosphorus reads as follows.
P-93.2.4 Phosphates, phosphonates, and related compounds
The ‘$\ce{P=O}$’ bond, as conventionally written in phosphates, phosphonates, and related compounds, is considered as a single bond, as there are already four atoms or groups in the tetrahedral configuration. Similarly, the formal arrangement of charges is not considered when determining the configuration of a chiral molecule. As the stereodescriptors ‘R’ and ‘S’ describe the entire structure, either a salt or an ester, the full name is placed in parentheses to denote the global configuration.
Therefore, the analogous ‘$\ce{S=O}$’ bond in the sulfinate group is considered only as a single bond since the tetrahedral centre of the sulfinate group with an attached phantom atom already has four groups. (Note that this is a deviation from the usual Sequence Rules for the Cahn-Ingold-Prelog (CIP) System. Using the Sequence Rules, double bonds are normally split into two bonds with duplicate representations of the atoms at the other end of the double bond.)
Thus, for the compound given in the question, the $\ce{-O-CH3}$ group ranks higher than the $\ce{=O}$ group. This leads to the configuration S and the systematic name methyl (S)-methanesulfinate. | {
"domain": "chemistry.stackexchange",
"id": 6745,
"tags": "nomenclature, stereochemistry, organosulfur-compounds"
} |
More efficient jQuery scripting when manipulating multiple elements with multiple CSS attributes | Question: I'm relatively new to JavaScript and jQuery so go easy on me. I'm creating a website where, upon jQuery document.ready, a set of basic animations is performed on different divs in the HTML markup. All divs have separate IDs, and I am storing all divs that share the same CSS property change in the same variable. Using these variables, I then run the function. This code works fine, but what would be a more effective way of writing it?
<script src="jquery-1.8.3.js" type="text/javascript" ></script>
<script type="text/javascript">
$(document).ready(function() {
function fader(){
var logofade = $('#portlogo, #toolslogo, #contactlogo, #portfoliolblw, #toolslblw, #contactlblw'),
homefade = $('#homelogo'),
homeline = $('#hline'),
uline = $('#upline'),
acrossline = $('#acrossline'),
glow = $('#logoglow');
logofade.fadeOut(0)
homefade.fadeOut(0).delay(300).fadeIn(100)
homeline.delay(100).animate({'width': '150px'}, 100)
uline.delay(200).animate({'height': '41px', 'top':'-30px'}, 100)
acrossline.delay(300).animate({'width': '825px'}, 100)
glow.fadeOut(0).delay(600).fadeIn(600);
}
fader()
});
function logochange() { $('#homelogo').delay(300).fadeIn(100);}
function logochange1() { $('#portlogo, #toolslogo, #contactlogo').fadeOut(100);}
function logochange2() { $('#portlogo').delay(300).fadeIn(100);}
function logochange3() { $('#toolslogo, #homelogo, #contactlogo').fadeOut(100);}
function logochange4() { $('#toolslogo').delay(300).fadeIn(100);}
function logochange5() { $('#portlogo, #homelogo, #contactlogo').fadeOut(100);}
function logochange6() { $('#contactlogo').delay(300).fadeIn(100);}
function logochange7() { $('#portlogo, #homelogo, #toolslogo').fadeOut(100);}
function homebtn() { $('#homelblw').fadeIn(0);}
function homebtn1() { $('#homelblw').fadeOut(0);}
function portbtn() { $('#portfoliolblw').fadeIn(0);}
function portbtn1() { $('#portfoliolblw').fadeOut(0);}
function toolsbtn() { $('#toolslblw').fadeIn(0);}
function toolsbtn1() { $('#toolslblw').fadeOut(0);}
function contactbtn() { $('#contactlblw').fadeIn(0);}
function contactbtn1() { $('#contactlblw').fadeOut(0);}
function hline1() {$('#hline').animate({'width': '150px'}, 100);}
function hline2() {$('#hline').animate({'width': '0px'}, 100);}
function pline1() {$('#pline').animate({'width': '150px'}, 100);}
function pline2() {$('#pline').animate({'width': '0px'}, 100);}
function tline1() {$('#tline').animate({'width': '150px'}, 100);}
function tline2() {$('#tline').animate({'width': '0px'}, 100);}
function cline1() {$('#cline').animate({'width': '150px'}, 100);}
function cline2() {$('#cline').animate({'width': '0px'}, 100);}
function upline1() {
$('#upline').animate({
'height': '-41px', 'top':'0px'
}, 0).delay(100).animate({
'height': '41px', 'top':'-30px'
}, 100);
}
function acrossline1() {
$('#acrossline').animate({
'width': '0px'
}, 0).delay(200).animate({
'width': '825px'
}, 100);
}
</script>
Answer: I would say two things could improve this considerably:
Use CSS classes for elements that share the same animations. This way you can just fetch all the elements that need to be animated with a single $() call, e.g. $('.animate')
Instead of using jQuery's animation methods, use CSS transitions. This will make your code simpler, and you know you're using the browser's native animation rendering.
Here's an example:
<div id="logo1" class="fade-out">Logo 1</div>
<div id="logo2" class="fade-out">Logo 2</div>
<div id="upline" class="expand-x">Some text</div>
<style>
.fade-out {
opacity: 1.0;
transition: opacity 0s .2s;
}
.fade-out.animate { opacity: 0; }
.expand-x {
width: 100px;
transition: width .1s .2s;
}
.expand-x.animate { width: 200px; }
</style>
<script>
$('.fade-out').addClass('animate');
$('.expand-x').addClass('animate');
</script>
You could simplify this even further by using a single CSS class for all elements that need animating. e.g. $('.needs-animation').addClass('animate');
Also, if there are any animations that are triggered by mousehover, you could do all the animation in CSS with the :hover pseudo-selector.
Finally, make sure the CSS transitions you use are compatible with all the browsers you're supporting. | {
"domain": "codereview.stackexchange",
"id": 3895,
"tags": "javascript, jquery, css"
} |
What do we mean by saying "VC dimension gives a LOOSE, not TIGHT bound"? | Question: From what I understand VC dimension is what establishes the feasibility of learning for infinite hypothesis sets, the only kind we would use in practice.
But the literature (i.e. Learning from Data) states that VC gives a loose bound, and that in real applications, learning models with lower VC dimension tend to generalize better than those with higher VC dimension. So, a good rule of thumb would be to require at least 10 times the VC dimension in training examples in order to get decent generalization.
I am having trouble interpreting what loose bound means. Is the VC generalization bound loose due to its universality? Meaning, its results apply to all hypothesis sets, learning algorithms, input spaces, probability distributions, and binary target functions.
Answer:
But, the literature (i.e. Learning from Data) states that VC gives a loose bound and that in real applications, learning models with lower VC dimension tend to generalize better than those with higher VC dimension.
It's true that people often use techniques such as regularisation to avoid over-parametrized models. However, I think it's dangerous to say that, in real applications, those models are really generalizing better, given that you're typically assessing their generalization ability by using a finite validation dataset (possibly selected in a biased way). Moreover, note that the validation dataset is often ignored in certain bounds and the only thing that is taken into account is the expected risk and the empirical risk (on the training data).
In any case, the $\mathcal{VC}$ bounds may be "loose" because
the $\mathcal{VC}$ dimension is often expressed with the big-$\mathcal{O}$ notation (e.g. see this answer)
the bounds often involve probabilities and uncertainties (e.g. see this other answer)
there aren't stricter bounds (i.e. no one found better bounds yet)
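To see numerically how loose this can be, here is a sketch using the standard VC generalization bound from Learning from Data, $E_{out} \le E_{in} + \sqrt{\tfrac{8}{N}\ln\tfrac{4\,m_\mathcal{H}(2N)}{\delta}}$, with the polynomial growth-function bound $m_\mathcal{H}(N) \le N^{d_{vc}} + 1$ (the parameter values below are chosen purely for illustration):

```python
import math

def vc_bound(n, d_vc, delta):
    """Generalization-gap bound: sqrt((8/N) * ln(4 * m_H(2N) / delta)),
    using the polynomial growth-function bound m_H(N) <= N^d_vc + 1."""
    m_H = (2 * n) ** d_vc + 1
    return math.sqrt((8 / n) * math.log(4 * m_H / delta))

# A modest hypothesis set (d_vc = 3), 1000 examples, 95% confidence:
gap = vc_bound(n=1000, d_vc=3, delta=0.05)
print(f"guaranteed gap <= {gap:.3f}")

# The guaranteed gap is roughly 0.47 -- nearly half the whole [0, 1]
# error range for a binary classifier, even though typical learners do
# far better in practice: loose indeed.
```

Pushing N up tightens the bound, which is why the "10x the VC dimension" heuristic is about sample size rather than the bound being tight.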
Is the VC generalization bound loose due to its universality? Meaning, its results apply to all hypothesis sets, learning algorithms, input spaces, probability distributions, and binary target functions.
I think the answer is yes. In fact, often, the specific tasks or learning algorithms are ignored in the analyses, but, in general, you may exploit your knowledge of the problem to improve e.g. the number of useful examples that the learning algorithm could use. | {
"domain": "ai.stackexchange",
"id": 1886,
"tags": "machine-learning, computational-learning-theory, vc-dimension, vc-theory"
} |
How does graph classification work with graph neural networks | Question: I am reading the paper The Graph Neural Network Model by Scarselli et al. I understand how node classification works. I am having trouble understanding how graph classification works however. In particular, in the section titled The Learning algorithm, the authors mention that
Learning in GNNs consists of estimating the parameter w such that φ_w
approximates the data in the learning data set
where qi is the number of supervised nodes in Gi. For graph focused
tasks, one special node is used for the target (qi = 1 holds), whereas for
node-focused tasks, in principle, the supervision can be performed on
every node.
The node focused task approach makes sense to me; you would essentially compare the ground truth to each output of the "local output function" for each node, and backprop accordingly. However, based on the above description, I do not understand what you would do to classify the graph as a whole, given its label. What do they mean by "one special node is used for the target (qi = 1 holds)"? Why are they talking about a "special node"? Why is there no mention of the graph's label? Isn't that what we want to predict?
EDIT:
After reading through the entire paper, and specifically looking at the Mutagenesis example, I got a better understanding of how graph classification works (as described in this paper at least). However, my understanding is still not complete. I will explain what I understand, and raise a followup question below.
As the text above suggests, a particular node in the graph is chosen (I believe this can be done at random), and it will be the only node in the graph that is "supervised." All other nodes will be unsupervised (so we will not make any predictions on those nodes). We choose the local output function in such a way as to have it output a number between -1 and 1 (although I'm unsure as to whether or not you could pick a function that instead outputs a number between 0 and 1. I believe you can, and it's just a matter of what activation function you would like to choose i.e sigmoid vs tanh in this example). If the output is < 0, we predict the graph has label -1, and 1 otherwise.
Now we just do what we did with node prediction, except we only backpropagate on this single node that we chose.
This, however, raised a followup question for me. If you are training on multiple graphs (for graph classification), each of which has a different connectivity (which is usually the case, and is the case in the Mutagenesis example), how do you backpropagate? Each graph (in this case, molecule) represents a different neural network ...
Answer: The equation you pointed out in the paper has posed the graph learning problem based only on the nodes of the graph. Therefore, to perform graph-level tasks like graph classification, one would need a 'special node' that represents the entire graph. This is all just to make the equation hold for graph-level tasks, which do not depend on just one node.
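As a rough illustration of the "special node" idea, here is a toy NumPy sketch (my own code, not the paper's actual model; the mean-aggregation rule and all names are my assumptions): one extra node is connected to every other node, a few rounds of neighbour averaging are run, and the graph-level representation is read off that node's state.

```python
import numpy as np

def graph_embed_with_special_node(A, X, W, steps=2):
    """A: adjacency (n x n), X: node features (n x d), W: shared weights (d x d)."""
    n = A.shape[0]
    # Append a special node linked to all others (so no node is isolated).
    A2 = np.ones((n + 1, n + 1)) - np.eye(n + 1)
    A2[:n, :n] = A
    X2 = np.vstack([X, np.zeros((1, X.shape[1]))])
    # Mean-aggregate neighbours, then a tanh update, for a few steps.
    deg = A2.sum(axis=1, keepdims=True)
    for _ in range(steps):
        X2 = np.tanh(((A2 @ X2) / deg) @ W)
    return X2[-1]  # the special node's state stands in for the whole graph

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # a tiny 3-node graph
X = rng.standard_normal((3, 4))
W = rng.standard_normal((4, 4)) * 0.1
print(graph_embed_with_special_node(A, X, W).shape)  # (4,)
```

A graph-level classifier would then put a loss on this single vector, which is exactly the "supervise only one node" scheme discussed above.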
The paper you are reading is quite old, and the newer GNN formulations are based on the Kipf GCN (https://arxiv.org/abs/1609.02907). It is much simpler to understand and implement, and has become the standard GCN. If you want to keep up with the current hot topic of GNNs, I would recommend having a look at Kipf's blog (https://tkipf.github.io/graph-convolutional-networks/) for a simple intuition for modern-day GCNs.
"domain": "datascience.stackexchange",
"id": 7537,
"tags": "neural-network, deep-learning, classification, graph-neural-network"
} |
Friction Behavior at corners | Question:
The question:
Find the range of value of $L/D$ such that the system remains in static equilibrium. And explain why such a range exists and not just a single value.
This is the question where I encountered a confusion. I know how friction acts on plane surfaces. But how do I analyze the friction in two places of the diagram marked in red. It's especially confusing to understand in which direction does the normal force act at a corner. Help anyone? It would be great if no one posted a complete solution to the problem (I would love to solve it myself), but instead if someone could provide an approach to solve such a case.
Answer: Friction always acts perpendicular to the normal force, and parallel to relative motion (but in an opposite sense).
In this case, I have noted the normal force vectors in green and the friction vectors in magenta.
As you can see, when an odd shape or a corner is in contact with a straight edge, the normal force is always perpendicular to that edge, regardless of which body the straight edge belongs to.
You can also slide the normal vectors along their line of action (contact normal line) until they meet
Where they meet is the center of rotation (shown as a blue arc arrow), and any force that goes through the center of rotation does zero work, which is the definition of a constraint force. Where they meet is also where the combined normal forces act through.
Also shown is gravity in cyan through the center of mass.
Also, slide the friction forces to where they meet to find where the combined friction forces act through.
Finally, the combined normal force (green) and combined friction force (magenta) must meet at a single point, along the line of action of gravity (cyan line) in order to counter-act it for static equilibrium.
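For reference, the conditions being built up geometrically here are the standard static-equilibrium equations together with the Coulomb friction bound (textbook statics, not specific to this particular problem):

```latex
\sum_i \vec{F}_i = \vec{0}, \qquad
\sum_i \vec{r}_i \times \vec{F}_i = \vec{0}, \qquad
\lvert f_j \rvert \le \mu_s N_j \quad \text{at each contact } j.
```

The torque balance may be taken about any convenient point; taking it about the centre of rotation makes the constraint forces drop out of the equation.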
Above I describe the geometry of statics, which is a beautiful subject, not taught in school. It helps in the visualization of forces and in understanding why they point in the direction they do.
"domain": "physics.stackexchange",
"id": 97056,
"tags": "homework-and-exercises, friction, home-experiment, statics"
} |
Directly integrating the Lagrangian for a simple harmonic oscillator | Question: I've just started studying Lagrangian mechanics and am wrestling with the concept of "action". In the case of a simple harmonic oscillator where $x(t)$ is the position of the mass, I understand that the Lagrangian is written down as $L = T - V$ (difference between kinetic and potential energy), and that one can use the Euler-Lagrange equation to obtain the $F = ma$ equation of motion.
But my question is what happens if you simply integrate the Lagrangian directly with respect to time? ie $$S = \int\left(\frac{1}{2}m\dot{x}^2 - \frac{1}{2}kx^2\right)dt.\tag{1}$$
Is there a way to directly integrate the RHS here to get some kind of expression for $S$? I'm wondering in particular about integrating $\dot{x}^2$ with respect to $t$, as I don't know how to approach that.
Answer:
Well, if we know the classical solution $q_{\rm cl}:[t_i,t_f] \to \mathbb{R}$ (which we do for the harmonic oscillator), we can plug it into the action functional $S[q]$ and obtain the on-shell action function $$\begin{align}S(q_f,t_f;q_i,t_i)~:=~&S[q_{\rm cl}]~=~\ldots\cr
~=~&\frac{m\omega}{2}\left((q_f^2+q_i^2)\cot(\omega\Delta t_{fi})-\frac{2q_fq_i}{\sin(\omega\Delta t_{fi})}\right),\cr
\Delta t_{fi}~:=~&t_f-t_i,\qquad \omega~:=~\sqrt{\frac{k}{m}},\end{align}$$ cf. e.g. this Phys.SE post.
Generically, we can only explicitly perform the integration in the action functional $S[q]$ if we know the explicit form of the (possibly virtual) path $q:[t_i,t_f] \to \mathbb{R}$, if that's what OP is asking. | {
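As a sanity check, one can integrate the Lagrangian numerically along the classical path and compare with the closed form above. A quick sketch (my own; the parameter values are arbitrary, not from the post):

```python
import numpy as np

# Arbitrary mass, spring constant, endpoints (my own choices).
m, k = 2.0, 3.0
omega = np.sqrt(k / m)
ti, tf, qi, qf = 0.0, 1.0, 0.5, -0.3
dt = tf - ti

# Classical SHO path with q(ti) = qi, q(tf) = qf.
def q(t):
    return (qi * np.sin(omega * (tf - t)) + qf * np.sin(omega * (t - ti))) / np.sin(omega * dt)

def qdot(t):
    return omega * (qf * np.cos(omega * (t - ti)) - qi * np.cos(omega * (tf - t))) / np.sin(omega * dt)

# Trapezoidal integration of L = T - V along the classical path.
t = np.linspace(ti, tf, 200001)
Lag = 0.5 * m * qdot(t) ** 2 - 0.5 * k * q(t) ** 2
S_numeric = np.sum(0.5 * (Lag[1:] + Lag[:-1]) * np.diff(t))

# Closed-form on-shell action quoted above.
S_closed = (m * omega / 2) * (
    (qf**2 + qi**2) * np.cos(omega * dt) / np.sin(omega * dt)
    - 2 * qf * qi / np.sin(omega * dt)
)
print(abs(S_numeric - S_closed) < 1e-8)  # True
```

The two agree to numerical precision, illustrating that the on-shell action really is what you get by plugging the classical solution into the integral.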
"domain": "physics.stackexchange",
"id": 100630,
"tags": "classical-mechanics, lagrangian-formalism, harmonic-oscillator, action"
} |
Set log levels in rosjava nodes | Question:
How can I set the default log level of rosjava nodes to something below INFO?
The Apache Commons 'Log' interface does not support this, but delegates the configuration to the underlying logging mechanism. I have tried putting the usual log4j.properties files into the root of my catkin package or into the rosjava subprojects contained therein, but that did not have any effect. My application calls rosjava from a Prolog shell that currently gets clogged with log messages at the INFO level, so I'd be happy about any hints.
Originally posted by moritz on ROS Answers with karma: 2673 on 2014-11-26
Post score: 0
Answer:
To configure logging, create a file like this one:
log-config.properties
# The following creates two handlers
handlers=java.util.logging.ConsoleHandler, java.util.logging.FileHandler
# Set the default logging level for the root logger
.level=ALL
# Log level for the org.ros package
org.ros.logging.level=ALL
# Set the handler logging levels
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.FileHandler.level=ALL
# Set the default formatter
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
# Specify the location and name of the log file
java.util.logging.FileHandler.pattern=/home/robot/test.log
later, in the execution:
java -Djava.util.logging.config.file=/home/robot/log-config.properties -jar rosjava-helloworld-0.1.0-SNAPSHOT-all.jar
Juan Antonio
Originally posted by Juan Antonio Breña Moral with karma: 274 on 2017-07-03
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 20171,
"tags": "ros, rosjava, logging"
} |
Formula to calculate confidence value in Adaboost | Question: I am coding an AdaBoostClassifier with the two-class variant of the SAMME algorithm. Here is the code.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def I(flag):
return 1 if flag else 0
def sign(x):
return abs(x)/x if x!=0 else 1
AdaBoost Class
class AdaBoost:
def __init__(self,n_estimators=50):
self.n_estimators = n_estimators
self.models = [None]*n_estimators
def fit(self,X,y):
X = np.float64(X)
N = len(y)
w = np.array([1/N for i in range(N)])
for m in range(self.n_estimators):
Gm = DecisionTreeClassifier(max_depth=1)\
.fit(X,y,sample_weight=w).predict
errM = sum([w[i]*I(y[i]!=Gm(X[i].reshape(1,-1))) \
for i in range(N)])/sum(w)
'''Confidence Value'''
#BetaM = (1/2)*(np.log((1-errM)/errM))
BetaM = np.log((1-errM)/errM)
w = [w[i]*np.exp(BetaM*I(y[i]!=Gm(X[i].reshape(1,-1))))\
for i in range(N)]
self.models[m] = (BetaM,Gm)
def predict(self,X):
y = 0
for m in range(self.n_estimators):
BetaM,Gm = self.models[m]
y += BetaM*Gm(X)
signA = np.vectorize(sign)
y = np.where(signA(y)==-1,-1,1)
return y
As far as I know, the formula for the confidence value is
From what I have read, the actual minimum occurs when c = 1/2, but for any value of c the classifier should produce the same result. However, when I code the class, the outputs for c = 1 and c = 1/2 come out different. Moreover, if I do not multiply by anything, i.e. c = 1, then the output of my classifier is better and produces results identical to the sklearn implementation of the AdaBoost classifier.
So why does multiplying by 1/2 give bad results?
Answer: Actually this equation is not quite right.
The actual minimum occurs at BetaM = 0.5 * log((1 - errM)/errM).
In the derivation this BetaM is the solution to the equation
Notice how in the function we are scaling the weights of wrong predictions up by exp(BetaM) and scaling down the weights of correct predictions by multiplying by exp(-BetaM).
But in the code that I have written (also in the sklearn implementation) only the wrong predictions are being scaled up. So to get the relative scaling correct we have to scale the wrong predictions by exp(2BetaM)
i.e.
And since the hypothesis function has the form sign(sum_m BetaM * Gm(x)),
any scalar multiple of BetaM will work fine, as all the predictions are scaled by the same amount. So for convenience in coding, BetaM is written as just log((1 - errM)/errM).
Note: You can perfectly use BetaM = 0.5 * log((1 - errM)/errM). But then you have to scale down the correct predictions too. If you do that, the code will give correct results.
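To see that the two parametrizations agree, here is a small numerical check (my own sketch, not from the original post): scaling only the wrong predictions by exp(2*beta) gives, after normalization, exactly the same weights as scaling wrong predictions up by exp(beta) and correct ones down by exp(-beta).

```python
import numpy as np

# Hypothetical weights and a mask of wrong predictions (illustrative only).
rng = np.random.default_rng(0)
w = rng.random(10)
wrong = rng.random(10) < 0.3
beta = 0.7

# Scheme 1: scale only the wrong predictions, by exp(2*beta).
w1 = w * np.exp(2 * beta * wrong)

# Scheme 2: scale wrong up by exp(beta) and correct down by exp(-beta).
w2 = w * np.exp(beta * np.where(wrong, 1.0, -1.0))

# Scheme 2 is scheme 1 times the constant exp(-beta), so after
# normalization the two weight vectors are identical.
w1 /= w1.sum()
w2 /= w2.sum()
print(np.allclose(w1, w2))  # True
```

This is why the code above (and sklearn's) can drop the 1/2 factor as long as correct predictions are left unscaled.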
"domain": "datascience.stackexchange",
"id": 7223,
"tags": "python, classification, scikit-learn, adaboost"
} |
Count the occurrence of nucleobases in DNA string | Question: Inspired by this meta question I decided to take a look at Rosalind. Their first challenge seemed easy enough:
An example of a length 21 DNA string (whose alphabet contains the symbols 'A', 'C', 'G', and 'T') is "ATGCTTCAGAAAGGTCTTACG."
Given: A DNA string s of length at most 1000 nt.
Return: Four integers (separated by spaces) counting the respective number of times that the symbols 'A', 'C', 'G', and 'T' occur in s.
Sample Dataset
AGCTTTTCATTCTGACTGCAACGGGCAATATGTCTCTGTGTGGATTAAAAAAAGAGTGTCTGATAGCAGC
Sample Output
20 12 17 21
Since I'm still on my quest to learn both regex and Ruby, I decided to go that route:
def countACGT(str)
list = [0,0,0,0]
str.scan(/A|C|G|T/) do |sub|
if sub == "A"
list[0] += 1
end
if sub == "C"
list[1] += 1
end
if sub == "G"
list[2] += 1
end
if sub == "T"
list[3] += 1
end
end
return list
end
I'm not a big fan of long if chains. Luckily, Ruby has a case statement as well:
def countACGT(str)
list = [0,0,0,0]
str.scan(/A|C|G|T/) do |sub|
puts case sub
when "A"
list[0] += 1
when "C"
list[1] += 1
when "G"
list[2] += 1
when "T"
list[3] += 1
end
end
return list
end
Both can be invoked like:
p countACGT("TCCCACTTCAGGGTCAGGGAGCTCCAAACTCTCTTTCTAGAGATGACAATCGAGAGTGAGATAAGGTGGATAGCAATCGTTATGGGATGTAAGCGCCAAGCGTTCGGGTAGCCCACGTTGCGGGCTAATCGCTAGGCTAGAACCTCTAAGCTGTACTTCTGTCAAAACGGAAAGAATCATACCGCACACCAACACTCGATGTAATGTAAGGATATCCTGTGCAGATGAGGTGCTTGGTACGCTAGATACTAGTATTACTAACACACAACATTACCGCCCAAGCGTGTCAGCCACGGACCAGATGACTCTTGCCGATTGAATACCTATCATCCTTACGGTCCGGAATCAGTATATCGCGTGCACAGTTACAGTGGTTAACTTGAGCTAGAGCAAGATAATGTGCGATCTGCGCACTCGGTGGGCTTGGATCACCCTACTTCCAATTGCCCGCGTATGATAGTTCCACCACTCACAAGTCTGTCATAGTGATTATCAAGAGTAGGCGTAGTGGGCACCCAAGAAATTAATGAATCTCACAGTCGAGTGTATCTTCGGCCATATCCCTACGGCAAATGGTCGCTCAGCTTGTCTCCGAGAGTTCGTTGGTTCAGAACCTCCGAAGGGTTGGGTGATTGTTGCGGCGCGCATGCGAGCTATGGTGGCTGTGTGTGGAGGTATTATCAGGGGAAATTTATTCCGAGTACTTGCTTGACTGCTCTTTTGTAAGCCGTTTGGGGTGCGTCCTCTGTATAGTCGTCGCCGCGAAGCCGATTCCCTCTAATCAAACACGCCAGAGGATCACTGGTCTTCTTCAAATTCCTGTATACCTCTGGCTAAATGACCCGACGGTACGAGCGTTTACTTCGAAGTG")
Yes, the dataset I had to solve was that long.
However, my interpreter suddenly feels the need to print the updated list[x] every time a new value gets assigned. This leads to a significant decrease in performance.
So I downloaded the official Ruby interpreter via the downloader (version 2.2.3 (x64)) and the exact same thing happens.
It almost looks like case is not the preferred way of doing this, but that's not intuitive.
I'm mainly looking for a definitive answer on which version I should stick with (and why) and general maintainability improvements. I'm perfectly aware regex may not be the optimal solution here, but I'd like to stick with it anyway.
Answer: Your code is fine and readable from a C/Java perspective. I don't think it's particularly idiomatic Ruby to use return statements, though. Just put list at the end on its own.
Why your case is slow
You have this extra puts here:
str.scan(/A|C|G|T/) do |sub|
puts case sub
^^^^
when "A"
...
You may want to get rid of that :) That's why your interpreter prints the list every time. You told it to...
Functionally better
But functionally, we can do way better. Ruby's enumerables have a group_by method:
def countACGT(str)
str.chars.group_by(&:chr)
end
That'll give you a hash from each key (nucleotide base) to the values in the collection (a list of each occurrence of each nucleotide base). All you have to do then is map it to just give you the size:
def countACGT(str)
str.chars.group_by(&:chr).map { |k, v| [k, v.size] }
end
That'll give you, for your example, the list:
[["T", 228], ["C", 209], ["A", 214], ["G", 220]]
If you want to get it in ACGT order like your original, you can just sort it and map off the key:
def countACGT(str)
str.chars.group_by(&:chr).sort.map{|k, v| v.size}
end | {
"domain": "codereview.stackexchange",
"id": 17680,
"tags": "programming-challenge, ruby, regex, comparative-review, bioinformatics"
} |
What is the generator of an anti-unitary operator? | Question: As the generator of a Unitary operator is a Hermitian operator, is the generator of an Anti-Unitary operator Anti-Hermitian?
Answer: I think you mean the following. Consider a (strongly continuous) one-parameter group of unitary operators $\mathbb R \ni t \mapsto U_t$. Then Stone's theorem implies that
$$U_t= e^{itA}$$
for some self-adjoint operator $A$. Similarly, let $\mathbb R \ni t \mapsto U_t$ be a (strongly continuous) one-parameter group of anti-unitary operators. Is there a corresponding version of Stone's theorem where
$$U_t= e^{itA}$$
for some anti-self-adjoint operator $A$?
The answer is negative, simply because there is no such thing as a one-parameter group of anti-unitary operators. Since $U_t = U_{t/2}U_{t/2}$, every $U_t$ must be linear even if $U_{t/2}$ is antilinear (the product of two antilinear operators is linear).
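To spell out the parenthetical claim: for antilinear operators $V$ and $W$, any $\lambda \in \mathbb{C}$ and any vector $\psi$,

```latex
(VW)(\lambda\psi) \;=\; V\!\left(\bar{\lambda}\, W\psi\right)
\;=\; \bar{\bar{\lambda}}\; V(W\psi)
\;=\; \lambda\,(VW)\psi ,
```

so the composition $VW$ is linear.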
This is the reason why antiunitary operators only describe discrete symmetries. | {
"domain": "physics.stackexchange",
"id": 34322,
"tags": "quantum-mechanics, operators, hilbert-space, complex-numbers"
} |
What is this spider, and what is this object that emerged from its rear end? | Question: I was observing this spider (caught in a Petri dish) under my microscope, when I noticed that a small, round object emerged from its rear end. It appeared to be black and white in appearance. I am guessing that it is an egg, but I am not sure whether it is one or not. The diameter of the object is approximately 300µm.
The spider seems to resemble Hasarius adansoni from photos:
The scale bars in the image are 5mm in length. The spider and the object are imaged together in the frame for scale.
Can anyone conclusively identify the spider and the object? The photo was taken in Singapore in November (although the seasons don't really matter in Singapore).
Answer: I do not know what species of spider this is, but I think the white ball is actually spider poop. Unfortunately a quick search did not return many references, but here is a picture for comparison.
Spiders produce uric acid, which is near-solid and excreted white; this is done to minimize water loss. The Malpighian tubules that produce it drain into a pouch attached to the digestive tract, the stercoral pocket. There the uric acid waste is combined and eliminated together with solid waste from the digestive tract. In your spider's case, this would be undigested insect parts (black) mixed with the uric acid (white).
"domain": "biology.stackexchange",
"id": 5843,
"tags": "species-identification, arachnology"
} |
Compare and find the best match from multiple lists | Question: Let's say I have n lists from different sources, each containing m possible locations of the user. I need to choose the most probable prediction of the user's location. My idea was to pick one location from each source so that those n locations are closest to each other. The average of those n locations will be my estimated user location.
My approach so far was to find every combination of n locations from those lists and choose the smallest cluster. This works, but for each prediction I have to iterate m^n times over all the possible combinations. The time complexity becomes too high if I have many sources with many possible locations. Is there a way to achieve what I want without iterating through every single combination, or is there any other algorithm that does something similar, even at the cost of reduced accuracy? Any help would be appreciated.
Answer: I don't know whether there is a polynomial-time algorithm for this problem that always gives the optimal solution. Below I show a candidate heuristic that might be useful in some settings. For notation, let $L[i,j]$ be the $j$th candidate location in the $i$th list.
Initialize a queue $Q$ with all of the $L[i,j]$ locations.
Repeat until $Q$ is empty:
Pop the first element of $Q$, and call it $\ell$.
For each $i$, let $\ell_i$ be the $L[i,j]$ that is nearest to $\ell$ (where $j$ ranges over all $m$ possibilities).
Let $\hat{\ell}$ be the average of the $\ell_i$'s.
Measure the "goodness" of $\hat{\ell}$ according your objective function (e.g., $\sum_i \|\hat{\ell}-\ell_i\|_2^2$, or whatever objective function you use).
If $\hat{\ell}$ is better than anything seen so far, remember it.
If $\hat{\ell}$ has not been previously added to $Q$, append $\hat{\ell}$ to $Q$.
Output the best location seen during this process.
If you want to spend more computation time in exchange for the possibility of finding more accurate solutions, at initialization you could also add all averages of all pairs of locations from $L$. | {
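The heuristic above can be sketched in a few lines (my own illustrative code; function and variable names are mine, not from the answer). `L[i][j]` is the j-th candidate location reported by source i, as a 2-D point.

```python
import numpy as np
from collections import deque

def objective(center, picks):
    # Sum of squared distances: one possible "goodness" measure.
    return sum(float(np.sum((center - p) ** 2)) for p in picks)

def estimate_location(L):
    queue = deque(np.asarray(p, dtype=float) for row in L for p in row)
    seen = set()
    best, best_cost = None, float("inf")
    while queue:
        ell = queue.popleft()
        # For each source i, pick the candidate nearest to ell.
        picks = [min(row, key=lambda p: float(np.sum((np.asarray(p) - ell) ** 2)))
                 for row in L]
        picks = [np.asarray(p, dtype=float) for p in picks]
        hat = np.mean(picks, axis=0)
        cost = objective(hat, picks)
        if cost < best_cost:
            best, best_cost = hat, cost
        key = tuple(np.round(hat, 9))
        if key not in seen:          # only enqueue averages we haven't tried yet
            seen.add(key)
            queue.append(hat)
    return best

# Three sources, two candidates each; one candidate per source agrees near (1, 1).
L = [[(1.0, 1.0), (10.0, 0.0)],
     [(1.1, 0.9), (-8.0, 5.0)],
     [(0.9, 1.1), (7.0, -6.0)]]
print(estimate_location(L))  # close to (1, 1)
```

The loop terminates because there are at most m^n distinct pick-combinations, hence at most m^n distinct averages that can be enqueued.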
"domain": "cs.stackexchange",
"id": 20615,
"tags": "algorithms"
} |
The role of dark matter in black holes and star formation | Question: In my understanding, there exists a critical mass that a star needs to reach in order for it to collapse into a black hole. Similarly, there is a certain critical density of gas needed for stars to form in the first place. However, a massive star or a massive amount of gas must have a relatively large gravitational field and will therefore interact with dark matter, attracting dark matter towards itself. This makes me wonder: doesn't the critical mass or critical density that we figured out before our knowledge of dark matter now need to take that into account?
OR
Shouldn't the critical mass and critical density resemble the following pseudo equations?
Critical mass (normal and dark) for a star to turn to black hole = A*normal matter mass + B*dark matter mass, where A and B are fractional amounts
Critical density for a cloud of gas (normal and dark) for star formation = C*normal matter mass + D*dark matter mass, where C and D are fractional amounts
So if the answer is yes, then what are the fractions, A, B, C, and D? Moreover, What kind of experiment or calculation would allow us to figure out these fractions?
Answer: Yes, you do need to add in the mass of dark matter if it's present, however on small scales the dark matter is almost uniformly distributed.
To see this, consider formation of the Solar System from the original dust cloud. If you take some test particle far from the Sun and let it fall towards the Sun it will accelerate towards the Sun, then pass it and head on out again. If every particle in the original dust cloud behaved this way the Solar System could never have formed since the dust cloud would simply oscillate about its centre of mass and stay the same overall size. The reason the Sun formed is that electrostatic interactions between the dust particles allowed the cloud to dissipate energy as heat and settle towards the centre.
Now you see why the Sun isn't full of dark matter. Dark matter only interacts by the weak and gravitational forces so the dark matter particles can't dissipate energy and can't settle into the Sun. Assuming there is dark matter in the Solar System it will be oscillating about the Sun. In principle weak interactions will eventually dissipate enough energy for the dark matter to become gravitationally bound within the Sun, but it's going to take a long time.
The average density of matter is astonishingly low. At present the total density is around 5 protons per cubic metre, so the dark matter density is only about 1 proton per cubic metre (and the average baryon density is 1 proton per five cubic metres!). There will be variations in the dark matter density caused by quantum fluctuations during inflation, and indeed these are thought to have been critical in seeding the formation of the first galaxies. However, on sub-galactic scales the dark matter density fluctuations are so small we can ignore them.
"domain": "physics.stackexchange",
"id": 3730,
"tags": "black-holes, dark-matter, stars, mass"
} |
What precisely leads planets like COCONUTS-2b to orbit so far away from their host stars, 6000 AU in its case? | Question: Taking our Solar System as an example, most gas giants formed relatively close in (a few AU) and drifted away, for reasons I don't recall from an explanation I read. Even orbiting a few AU away leaves Jupiter very cold. From what I've read, observing the bodies orbiting the Sun 40 or so AU away reveals they are very sparse in number, and small dwarf planets at best.
This too, is for a fairly large sun like ours, which I assume would have had a much larger protoplanetary disc compared to your average red dwarf star, which COCONUTS-2a is.
How would a planet, that too a gas giant of all things, find itself orbiting a very small star which would have a weaker gravitational pull compared to a Sun-like star like ours at a ludicrous distance of 6000 AU?
Would the gravitational pull not be very weak there for a star like COCONUTS-2a? Not to mention, how would a planet like 2b make it there? Drifting a few AU away is understandable and probable, but assuming it formed an AU at best from 2a, how would it end up at 6000 AU?
Answer: 2B or not 2b? That is the question.
The published paper - Zhang et al. (2021) - defines COCONUTS 2b as an exoplanet based upon the mass-ratio of 2b/2A, which is of order 0.02. I think this is a bit arbitrary and it just looks like a wide, low-mass binary system, with a secondary that is a low-mass brown dwarf ($\sim 10 M_{\rm Jupiter}$).
As the authors say, it is very unlikely that this object formed close to its primary star and was then ejected, because its current orbital binding energy is so small that it would imply very fine tuning of the ejection mechanism to have it now orbiting at 6500 au rather than being ejected entirely.
As for the origins, the authors say
However, given its wide orbital separation, COCONUTS-2b
probably formed in situ, like components in stellar binaries
via the gravitational collapse of molecular cloud [sic].
i.e. It forms from the fragmentation of a collapsing molecular cloud, just like other binary systems, not like the planets in our own Solar System and therefore should perhaps be COCONUTS 2B. | {
"domain": "astronomy.stackexchange",
"id": 5931,
"tags": "orbit, exoplanet"
} |
Why is my thick DC charging cable slower than a thinner one? (having the same power rating & adaptor) | Question: Does that mean a thick wire causes more current loss than a thinner one?
Answer: The thicker cable will have less power loss because its resistance will be less than that of the thinner cable. If there really is a difference between the charging rates it means there's something suspicious going on here. I suggest you do an experiment with the battery in your device discharged to exactly the same extent and then time the charging with the two different cables, and report the results here so we can study them. | {
"domain": "physics.stackexchange",
"id": 98373,
"tags": "electric-circuits, charge, electric-current, electronics, electrical-engineering"
} |
Why does the gas get cold when I spray it? | Question: When you spray gas from a compressed spray can, the gas gets very cold, even though the can is at room temperature.
I think when it goes from high pressure to a lower one it gets cold, right? But what is the reason behind that, exactly?
Answer: This is a very confused discussion. Gas being forced through a nozzle, after which it has a lower pressure, is an irreversible process in which the entropy increases. This has nothing to do with adiabatic expansion. It has everything to do with the Joule-Thomson effect.
The change in temperature following the drop in pressure behind the nozzle is proportional to the Joule-Thomson coefficient, which can be related to the (isobaric) heat capacity of the gas, its thermal expansion coefficient, and its temperature. This is a famous standard example in thermodynamics for deriving a nontrivial thermodynamic relation by using Maxwell relations, Jacobians, and whatnot. Interestingly, it is not certain that the temperature drops. For an ideal gas – which seems to be the only example discussed so far in this thread – it wouldn't, because the Joule-Thomson coefficient exactly vanishes. This is because the cooling results from the work which the gas does against its internal van der Waals cohesive forces, and there are no such forces in an ideal gas.
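For reference, the relation alluded to in the previous sentence is the standard expression for the Joule-Thomson coefficient:

```latex
\mu_{\mathrm{JT}} \;\equiv\; \left(\frac{\partial T}{\partial P}\right)_{\!H}
\;=\; \frac{V}{C_P}\,(\alpha T - 1),
\qquad
\alpha \;\equiv\; \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_{\!P}.
```

For an ideal gas $\alpha = 1/T$, so $\mu_{\mathrm{JT}} = 0$, consistent with the vanishing coefficient mentioned above.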
For a real gas cooling can happen, but only below the inversion temperature. For instance, the inversion temperature of oxygen is about $1040$ $K$, much higher than room temperature, so the JT expansion of oxygen will cool it. $\text{CO}_2$ has an even higher inversion temperature (about $2050$ $K$), so $\text{CO}_2$ fire extinguishers, which really just spray $\text{CO}_2$, end up spraying something that is very cold. Hydrogen, on the other hand, has an inversion temperature of about $220$ $K$, much smaller than room temperature, so the JT expansion of hydrogen actually increase its temperature. | {
"domain": "physics.stackexchange",
"id": 99904,
"tags": "thermodynamics, temperature, pressure"
} |
Why does steam in a teapot move to the largest opening? | Question: This morning I was making tea, and I took my teapot off of the stove and opened the spigot (where the front / whistle clasps onto) to let it cool. When I opened the spigot, steam started coming out as expected. However, when I opened the top of the teapot, a significantly larger opening, the steam began to pour out of the top and none of it (visibly) out of the spigot.
I assumed this is related to thermodynamics, but can't find an exact explanation. Is there some law that dictates that gases will always choose the path of least resistance? I know that's true of electrons, but I haven't found anything for gases.
Answer: It’s possible that your teakettle’s second opening makes it act as a chimney. The heat transfer might be more efficient if high-density cold air enters the small opening to replace the low-density hot air exiting the large opening.
There are certainly kettle geometries where you would get a chimney effect. For example, if you boiled water in your fireplace, opening the flue would make nearly all of the steam go up that actual chimney, to be replaced by air from your living room. (Pro tip: do not operate your fireplace with the flue closed.) There are also geometries where you would expect no chimney effect: for instance, if the two openings were symmetrical. Whether this effect holds in an “ordinary” teakettle seems like a computational question; there are lots of different versions of “ordinary.”
I’ve just tested this by boiling a kettle, holding a lit match at the whistle, then removing the lid with my other hand. With the kettle lid on, of course steam is blowing furiously out of the whistle and will blow the match out. With the kettle lid removed, it seems to me that the flame is pulled towards the hole, which supports the airflow-in hypothesis. This isn’t slam-dunk evidence — and now I’ve burned my hand trying to take a photo, so you’re either going to have to trust me or to get your own matches — but it’s promising. | {
"domain": "physics.stackexchange",
"id": 85926,
"tags": "thermodynamics, gas"
} |
When inserting into a binary tree, is there a universally agreed-upon place to insert the new node to minimize complexity? | Question: Do programmers (in real life) always insert at the top node, or somewhere else? Because in my textbook, CLRS, it is not made very clear; the insertion process can take a best case of O(1) if you insert at just the right place, or a worst case when you have to rebalance the entire tree.
So in practice is there a place that all new nodes are inserted?
Answer: There are several different concepts here. Let's start from the most general. A tree is a data structure that consists of a root and a collection of children, each of which is a tree. A node of the tree is either the root of the tree itself of a node of the children, i.e. it's the root, or the root of a child, or the root of a child's child, etc. The nodes of a tree can contain information of any type. Here are a few examples of trees:
One that you encounter everyday when working with computers is the filesystem¹: there's a root directory, subtrees are subdirectories, and there are subtrees which have no children and which are regular files; the children of a node are identified by a name (so it's not “first child”, “second child”, etc. but “child named hello.txt, child named pictures”, etc.).
An example from real life is the section structure of a book: there's the book, which is divided into chapters, which are divided into sections, which are divided into subsections, etc.
One type of tree data structure is the trie, which stores information indexed by a string. Each node has one child per character in the alphabet. The information corresponding to a string is located by starting from the root, going to the child corresponding to the first character of the string, then to the child of that child corresponding to the second character, etc. For example the string abc would be stored three levels deep, going down from the root to the a child, then to its b child, then to its c child.
You'll notice that in the filesystem and in the book sections example, information can be inserted anywhere. I can store my files in whichever directory I want; I can add a section to the chapter of my choice. In contrast, the structure of a trie is completely rigid: there's only a single place in the tree where abc can go.
A binary tree is a tree where each node happens to have at most two children. The two children are typically known as the left and right child. There is a common variation on this definition that defines a binary tree to have two types of nodes: internal nodes which must have exactly two children, and leaves which have no children.
A binary search tree is a type of tree data structure. Like a trie, it imposes a constraint on where the information must be stored. A trie indexes information by a string; a search tree indexes information that is ordered: it can be numbers, strings, etc. In a binary search tree, the data in the left subtree of a node is always smaller than the datum in the node itself, and the data in the right subtree of a node is always larger than the datum in the node.
If you try to add element into an existing search tree, there's a single way to do that without changing the current structure of the tree: going down from the root, take the left child if the node is larger than the element to add, take the right child if the node is smaller, and keep going until you get to a child that doesn't exist, and create that child.
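The insertion rule just described can be sketched in a few lines (my own illustrative code, not from the answer): descend left when the node is larger than the element, right when it is smaller, and create the missing child.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    # The single insertion position in an unbalanced BST.
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    elif value > root.value:
        root.right = insert(root.right, value)
    return root  # duplicates are silently ignored in this sketch

def inorder(root):
    # In-order traversal yields the stored data in sorted order.
    return [] if root is None else inorder(root.left) + [root.value] + inorder(root.right)

root = None
for v in [8, 4, 9, 2, 5]:
    root = insert(root, v)
print(inorder(root))  # [2, 4, 5, 8, 9]
```

Inserting in the order 8, 4, 9, 2, 5 reproduces the first of the three example trees below; a different insertion order gives a different (but equally valid) search tree over the same data.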
However, the structure of search trees is less rigid than the structure of tries: there are different search trees that contain the same data. For example, here are three binary search trees containing the integers 2, 4, 5, 8, 9 (there are of course many more):
8 4 2
/ \ / \ \
4 9 2 5 4
/ \ / \ \
2 5 8 9 5
\
9
/
8
In the worst case, inserting an element in a search tree requires traversing the whole height of the tree, which, as the last example above shows, can be as much as the number of elements of the tree.
The leeway in the tree structure allows one to reshape the tree when adding or removing elements from it. This is called balancing. Balancing is done in order to keep the height small, which keeps operations such as search, insertion and deletion efficient. There are several variants of balanced search trees, where the balancing operation is designed to keep the height of the tree logarithmic in the number of elements of the tree: $h = O(\log(n))$. The basic principle of a balanced search tree is to keep some information about the height of the subtree in each node; when the height difference between the two children of a node is too large, some nodes are moved from the deeper subtree to the shallower subtree to rebalance. The rebalancing operation costs $O(1)$ at each node and is performed over the path to the element that's being inserted or removed, so its cost over the whole tree is $O(h) = O(\log n)$. This way, search, insertion and deletion in a balanced search tree cost $O(h)$ to find the target location, plus $O(h)$ to rebalance, which is $O(h) + O(h) = O(\log n)$ total.
¹ In practice filesystems can be more complex than what I'm talking about here. Filesystems with hard links aren't even trees. | {
"domain": "cs.stackexchange",
"id": 3824,
"tags": "data-structures, binary-trees"
} |
Induced motional EMF where the wire is stationary and the field source is moving? | Question: If a wire is moving in a magnetic field $B$, there is a Lorentz force acting on both the positive and negative charges, separating them to create an electric field. This is a great explanation that aids my understanding of how the EMF ($-\epsilon$) is induced.
What if I changed the dynamics of this system, making the wire stationary $$\therefore v = 0$$
And moved the magnetic field source (solenoid, magnet, etc.) in the same direction as in the previous case (i.e. the same $v$). Will that change the direction of the Lorentz force acting on the charges? Or is it the same?
My initial assumption is that they are relative, leading to the same results. Yet I'm not so sure; when I take Lenz's law into account it confuses me further.
Answer: The principle of relativity: The laws of physics are the same in all inertial frames of reference. Since the Lorentz force is a valid law of physics, it will not change when we pass from one reference frame to another.
First frame, wire is moving. There is no $\mathbf E$ field. Lorentz force $\mathbf F = q\mathbf v\times\mathbf B$. Apparently, you were OK with this frame.
Second frame, you are moving with wire. If you transform the electromagnetic field to the new frame, you will get:
$ \mathbf E' = \mathbf v\times\mathbf B $, where $\mathbf v$ is the velocity of your frame of reference with respect to the first frame of reference (provided $\mathbf v$ is perpendicular to $\mathbf B$). And: $\mathbf B' = \mathbf B$.
It's wise to notice that these transformations of the electromagnetic field from one frame to another are Galilean. They are not relativistic. So, they are only valid when the velocity of your frame $v$ is far less than $c$. If you want, the complete relativistic transformations can be seen here.
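A quick numerical check that the force comes out the same in both frames under these transformations — my own sketch, with assumed values for $q$, $\mathbf v$ and $\mathbf B$:

```python
import numpy as np

q = 1.6e-19                      # charge in coulombs (assumed value)
v = np.array([10.0, 0.0, 0.0])   # wire velocity in the first frame (m/s)
B = np.array([0.0, 0.0, 0.5])    # magnetic field (T), perpendicular to v

# First frame: wire moves, no electric field
F1 = q * np.cross(v, B)

# Second frame: wire at rest (v' = 0); Galilean-transformed fields E' = v x B, B' = B
E_prime = np.cross(v, B)
v_prime = np.zeros(3)
F2 = q * (E_prime + np.cross(v_prime, B))

print(F1, F2)  # the same force in both frames
```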
Now, apply the Lorentz force to the fields in your frame:
$$
\mathbf F = q(\mathbf E' + \mathbf v'\times\mathbf B') = q\mathbf E'
$$
where $\mathbf v'$ is the velocity of the wire with respect to your frame of reference (i.e., zero; after all, you are moving with the wire in this frame, so your relative velocity is zero). | {
"domain": "physics.stackexchange",
"id": 96267,
"tags": "electromagnetism, electrostatics, electric-fields, relative-motion"
} |
Logistic regression threshold value | Question: How can I set the threshold value for the target variable? For example, if a target variable is chance_of_admit and it has values from 0 to 1, how can I pick a value so that I can convert it to 0's and 1's to perform logistic regression?
Answer: So there are two ways of doing this, IMHO:
By creating a well-balanced target variable by choosing the right threshold, as I suggested in the comments above. In doing so we are simply taking care of values which should be treated as positive but would otherwise become negative if we took a lower threshold.
By using the mean threshold, which will generate an imbalanced target variable; when you perform the modelling, you can look at the ROC and PRC curves and decide the threshold based on them. But keep in mind, it also depends on what kind of problem you are solving. | {
"domain": "datascience.stackexchange",
"id": 6489,
"tags": "r, regression, predictive-modeling, logistic-regression, data-science-model"
} |
How to subscribe to IMU data? Offboard control, PX4, MAVROS | Question:
Sorry I couldn't find out how to paste code properly. I'm trying to get IMU data from px4 through MAVROS. I am doing offboard control.
I referred from this website to make an IMU listener. (Writing a Simple Subscriber for IMU)
Although I am not sure how to find the types of the classes that are used in this code.
Right now, it isn't even going through chatterCallback even though I move the px4 around, changing the IMU values. Also, when I run rqt_graph, it shows imu_listener standing alone, not connected to MAVROS.
Can anyone give me advice on how to make this IMU listener work?
Thank you,
Sarah
#include "ros/ros.h"
#include "sensor_msgs/Imu.h"
#include "mavros_msgs/State.h"
void chatterCallback(const sensor_msgs::Imu::ConstPtr& msg)
{
    ROS_INFO("Hello");
    ROS_INFO("Imu Seq: [%d]", msg->header.seq);
    ROS_INFO("Imu Orientation x: [%f], y: [%f], z: [%f], w: [%f]", msg->orientation.x, msg->orientation.y, msg->orientation.z, msg->orientation.w);
}

mavros_msgs::State current_state;
void state_cb(const mavros_msgs::State::ConstPtr& msg){
    current_state = *msg;
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "imu_listener");
    ros::NodeHandle n;
    ros::Subscriber state_sub = n.subscribe<mavros_msgs::State>("mavros/state", 10, state_cb);
    ros::Subscriber sub = n.subscribe<sensor_msgs::Imu>("mavros/imu/data", 1000, chatterCallback);
    ros::Rate rate(20.0);
    while (ros::ok() && current_state.connected) {
        ros::spinOnce();
        rate.sleep();
    }
    for (int i = 100; ros::ok() && i > 0; --i) {
        ros::spinOnce();
        rate.sleep();
    }
    ros::spin();
    return 0;
}
Originally posted by SarahMars on ROS Answers with karma: 3 on 2017-12-04
Post score: 0
Original comments
Comment by jayess on 2017-12-04:
Welcome! In order to get your the code formatted correctly, you don't use the <code> or <pre> tags. What you do is paste your code, highlight it, then click the preformatted text (101010) button.
Comment by SarahMars on 2017-12-06:
Oh ok thank you for the information!
Answer:
This may be a hardware issue. Try publishing the topic yourself without the IMU and see if you can get the callback to execute. You can do this with the command line as seen in the wiki:
rostopic pub /mavros/imu/data sensor_msgs/Imu "header:
seq: 0
stamp: {secs: 0, nsecs: 0}
frame_id: ''
orientation: {x: 0.0, y: 0.0, z: 0.0, w: 0.0}
orientation_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
angular_velocity: {x: 0.0, y: 0.0, z: 0.0}
angular_velocity_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
linear_acceleration: {x: 0.0, y: 0.0, z: 0.0}
linear_acceleration_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]"
Now, the easy way to do this is to type rostopic pub /mavros/imu/data sensor_msgs/Imu in the terminal then hit the tab key twice and it'll fill in the rest for you. Next, hit Enter and it'll publish this message for you.
If your callback executes you'll see that it may be a hardware issue versus something being wrong with your code, or at least the callback portion of it. Although, if you're following the MAVROS PX4 ROS offboard tutorial you may be missing certain portions of code such as keeping the vehicle armed. If it doesn't receive any commands for a certain amount of time (faster than 2 Hz) then
the commander will fall back to the last mode the vehicle was in before entering Offboard mode
Update
You do not have to publish like this (from the terminal) to get your callbacks to execute. You do, however, need to have something publishing on that topic. What I mean is, a callback is only executed when it receives data. Therefore, if nothing is publishing then callbacks are not executed. That's why you weren't seeing your callback doing anything: there was nothing being published.
I highly recommend that you go through the tutorials to make sure that you understand how the publish and subscribe system works along with the other basics of ROS.
Originally posted by jayess with karma: 6155 on 2017-12-04
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by SarahMars on 2017-12-05:
Thank you so much for your reply. After I ran your command once in the terminal, it is working fine!
Do I have to publish the topic like this all the time if I want to access new data, or is there any other way doing this (like changing source files)?
Comment by SarahMars on 2017-12-05:
Also once you publish using "rostopic pub /mavros/imu/data sensor_msgs/Imu" it will continue to have settings for publishing the data, right?
Comment by jayess on 2017-12-05:
I'm glad that you know it works. If this solved your problem please click on the checkmark to set this answer as correct. | {
"domain": "robotics.stackexchange",
"id": 29525,
"tags": "ros, imu, px4, mavros"
} |
Phase conventions for phasors and Fourier transforms | Question: There is a natural isomorphism between the complex plane and the set of sine waves, but this isomorphism is ambiguous up to a rotation and/or flip of the plane. This ambiguity seems to be related to some of the differences in conventions about the Fourier transform and inverse Fourier transform.
In electrical applications, it's conventional that capacitive impedance is negative imaginary, and this requires that the cosine be represented by a point that lies counterclockwise from the one representing the sine. With this handedness, differentiation means multiplication by $i\omega$.
The convention I've been teaching is that $\sin\rightarrow1$ and $\cos\rightarrow i$, which is consistent with this. Call this convention A.
Looking at the form of Euler's equation, it does seem like it would be nicer to have $\cos\rightarrow1$. Sticking to the same handedness, we would then have to have $\sin\rightarrow -i$. Call this convention B.
Is this standardized? Is there a physicist's convention that is different from an electrical engineer's convention?
The WP article on the Fourier transform has some material that seems relevant, at "The reason for the negative sign convention in the exponent is that in electrical engineering, it is common..." This seems to imply that electrical engineers use convention B, but that other people use some other convention. Are the other people physicists? Some other kind of engineers? Do they use convention A?
Related:
https://math.stackexchange.com/a/2306802/13618
Fourier transform standard practice for physics (I think what I'm calling convention A is not describable in the $(a,b)$ parametrization defined in jgerber's answer.)
Answer: My response in the linked answer is to clarify the identification of $e^{i\omega t}$ as either a positive or negative frequency signal. The text in the Wikipedia article is related to exactly this. The convention that capacitive impedance is negative imaginary fixes this positive/negative frequency convention already. So both your A and B take the same convention for positive/negative frequency.
Regarding the main part of your question it sounds like you are asking about the definition of a phase factor (rather than a frequency scaling factor) in the Fourier transform. Suppose we define the Fourier transform
$$
\mathcal{FT}_{a,b,c}[f(t)](\omega) = \sqrt{\frac {|b|}{(2\pi)^{1-a}}}\int_{-\infty}^{+\infty} e^{+i b \omega t} e^{ic} f(t) dt
$$
And the inverse Fourier Transform
$$
\mathcal{FT}_{a,b,c}^{-1}[\tilde{f}(\omega)](t) = \sqrt{\frac{|b|}{(2\pi)^{1+a}}}\int_{-\infty}^{+\infty} e^{-i b \omega t}e^{-ic} \tilde{f}(\omega) d\omega
$$
Let
$$
\tilde{f}_{a,b,c}(\omega) = \mathcal{FT}_{a,b,c}[f(t)](\omega)
$$
$$
\check{f}_{a,b,c}(t) = \mathcal{FT}_{a,b,c}^{-1}[\tilde{f}_{a,b,c}(\omega)](t)
$$
Here, (compared to my answer in the linked question) I have introduced a phase factor $e^{ic}$ into the definition for the Fourier and inverse Fourier transform. As defined we will have that $\check{f}_{a,b,c}(t) = f(t)$ for any $(a,b,c)$.
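As a numerical sanity check of this round-trip identity (my own sketch, with arbitrarily chosen values of $(a,b,c)$, a Gaussian test signal, and simple quadrature on a truncated grid):

```python
import numpy as np

# Arbitrary convention parameters (assumed values for the check)
a, b, c = 0.3, 1.7, 0.9

t = np.linspace(-20, 20, 2001)       # time grid
w = np.linspace(-10, 10, 1001)       # frequency grid
dt, dw = t[1] - t[0], w[1] - w[0]

f = np.exp(-t**2 / 2)                # Gaussian test signal

# Forward: sqrt(|b|/(2pi)^(1-a)) * e^{ic} * integral of e^{+i b w t} f(t) dt
f_tilde = (np.sqrt(abs(b) / (2 * np.pi) ** (1 - a)) * np.exp(1j * c)
           * (np.exp(1j * b * np.outer(w, t)) * f).sum(axis=1) * dt)

# Inverse: sqrt(|b|/(2pi)^(1+a)) * e^{-ic} * integral of e^{-i b w t} f_tilde(w) dw,
# evaluated at a single point t0
t0 = 0.5
f_back = (np.sqrt(abs(b) / (2 * np.pi) ** (1 + a)) * np.exp(-1j * c)
          * (np.exp(-1j * b * w * t0) * f_tilde).sum() * dw)

print(abs(f_back - np.exp(-t0**2 / 2)))  # ~0: the round trip recovers f(t0)
```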
What I can tell you is that, as far as I'm aware, everyone (EE, signal processing, math, physics etc.) all take $c=0$. This coincides with your convention B. I believe your convention A corresponds to $c = + \frac{\pi}{2}$ so that $e^{ic} = i$. I've never seen someone use this for their definition of the Fourier transform. | {
"domain": "physics.stackexchange",
"id": 55309,
"tags": "conventions, fourier-transform"
} |
How to Plot Summation of Shifted impulses in MATLAB? | Question: I am trying to plot the following equation in MATLAB:
I am not sure how to write the following summation equation in MATLAB.
Answer: According to the definition of the unit impulse function $\delta(t)$, at $t = 0$ the function goes toward infinity. Actually, the unit impulse function is meaningful through its area, which is 1.
Additionally, MATLAB doesn't show the famous arrow notation of the unit impulse function. Instead, it shows an invisible line, which can be noticed where the function increases to infinity, i.e. the time value on the time axis where the function increases to infinity is blank. This can be seen in the figure.
On the other hand, the unit sample function $\delta[n]$ takes the value 1, which matches the area of the unit impulse function.
We can obtain the h[n] function by sampling the h(t) function. But what about those infinities?
So, let's take a gamble!
If we code the h(t) function in MATLAB, it will actually be a row vector. In this row vector, we can replace those infinities with ones in order to sample the function correctly. This process will require a for loop and an if statement.
The MATLAB source code for the complete process is given below:
t = -10: 1: 10; % Time range
Ts = 1; % Sampling period
syms k; % Symbol assignment for summation
function_1 = (t + 1) .* symsum(dirac(t - k), k, 1, 5); % Summation
subplot(2, 1, 1);
plot(t, function_1, "r", "LineWidth", 3);
xlabel("Time (s)");
ylabel("h(t)");
title("h(t) Time Function");
for a = 1: length(function_1) - 1 % For loop-if statement duo for replacing infinity results with unit sample function
if function_1(a) == Inf
function_1(a) = 1;
end
end
function_2 = (t + 1) .* function_1; % Complete function to be sampled
n = t / Ts; % Sample range
subplot(2, 1, 2);
stem(n, function_2, "ro", "LineWidth", 3);
xlabel("Samples [n]");
ylabel("h[n]");
title("h[n] Sample Function");
According to the figure, the sampling process has gone well, and it matches the analysis that I've done on paper.
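For comparison, here is the same discrete end result sketched in Python/NumPy (my own illustration, assuming as above that the sum runs over $k = 1,\dots,5$, so that $h[n] = n+1$ for $1 \le n \le 5$ and $0$ otherwise):

```python
import numpy as np

n = np.arange(-10, 11)                       # sample range, matching t = -10:1:10
h = np.where((n >= 1) & (n <= 5), n + 1, 0)  # h[n] = (n+1) * sum_k delta[n-k], k = 1..5

print(h[10:17].tolist())  # samples at n = 0..6 -> [0, 2, 3, 4, 5, 6, 0]
```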
Actually, there may be other approaches for replacing the infinities with ones, but this one does the trick well. | {
"domain": "dsp.stackexchange",
"id": 9975,
"tags": "matlab"
} |
What is the difference in time rates between the volumes where fusion takes place in a star vs outside observers like ships orbiting the star? | Question: I have found information suggesting the difference between the surface and the center isn't even worth worrying about, but since we needed GR to account for Mercury's orbit, I can't help but wonder about more substantial differences.
Answer: The GR corrections to Mercury's orbit are exceedingly tiny even though the mass of the sun is exceedingly large (on a human scale). The time dilation in the center of a star that is supporting fusion as determined by an outside observer at a safe distance can be calculated but it will be small. Wikipedia has a useful discussion of gravitational time dilation. | {
"domain": "physics.stackexchange",
"id": 90473,
"tags": "general-relativity, time-dilation, stars"
} |
DFT Functional Selection Criteria | Question: I have a very very general question:
In DFT functional selection, people mostly speak about the most recent ones. For example my professor always asks: "Which DFT functional did you select?" and if I say B3LYP, he says: "No! That's too old!" but if I answer M06, he says: "Hmm... sounds promising, that's a modern functional."
I think it is too naive to select functionals just based on their chronological order. I want to ask if there are any good and reliable criteria for functional selection. For example, a criterion that says: for an alkene with some specific characteristics, go for M06-L; for an alkane with other characteristics, use the B97xxx family; and so on.
Are there such criteria? I hope this topic can become a good guideline for future reference!
Answer: What's up with all that magic? (A chapter formerly known as Introduction)
The hunt for for the holy grail of density functional theory (DFT) has come a long way.[1] Becke states in the introduction of the cited paper:
Density-functional theory (DFT) is a subtle, seductive, provocative business. Its basic premise, that all the intricate motions and pair correlations in a many-electron system are somehow contained in the total electron density alone, is so compelling it can drive one mad.
I really like this description, it points out why we use and need DFT, and as it also points out the flaws, that every computational chemist has to deal with: How can something with such a simple approach be correct?
Something that is often forgotten about DFT is, that in principle it is correct. It's the implementations and approximations, that make it incorrect, but usable. Becke states this in the following quote:
Let us introduce the acronym DFA at this point for “density-functional approximation.” If you attend DFT meetings, you will know that Mel Levy often needs to remind us that DFT is exact. The failures we report at meetings and in papers are not failures of DFT, but failures of DFAs.
I sometimes hear that the abbreviation DFT is often used in the wrong context, since we are not talking about the theory itself any more, but about the implementations and approximations of it. One suggestion I heard was that it should rather be used as density functional technique.
With that in mind I would like to state that I absolutely agree with the previous answer by user1420303 and its subsequent comment by Geoff Hutchison. Since you asked for a somewhat more practical approach, I like to offer the advice I usually give to students new in the field.
Old is bad, isn't it?
Some of the functionals have now been around for about thirty years. That does not make them bad, maybe even the opposite. It shows that they are still applicable today, giving reasonable results. One of my personal favourites is the conjunction of Becke 1988 and Perdew 1986, often abbreviated as BP86.[2] It's a pure functional which is available in most modern quantum chemical packages.[3] It usually performs well enough for geometries and reasonably well for energies for simple systems, i.e. small organic molecules and reactions.
The magical functional B3LYP was one of the first hybrid functionals, and it was introduced by Gaussian's very own developers.[4] A lot of people were surprised how well it worked and it quickly became one of the most popular functionals of all time. It combines Becke's three parameter functional B3[5] with Lee, Yang and Parr's correlation functional.[6] But why are we surprised it works? The answer is quite simple, it was not fitted to anything. Frish et. al. just reworked the B3PW91 functional to use LYP instead of PW91. As a result, it heavily suffers or benefits from error compensation. Some even go as far as to say: “It is right for the wrong reasons.”[7-9] Is it a bad choice? No. It might not be the best choice, but as long as you know what you are doing and you know it is not failing for your system, it is a reasonable choice.
One functional is enough, is it?
Now that we established, that old functionals are not out of fashion, we should establish something very, very important: One is never enough.
There are a few things, where it is appropriate to do most of the work with one functional, but in these cases the observations have to be validated with other methods. Often it is best to work your way up Jacob's ladder.[10]
How do I start?
It really depends on your system and what you are looking for. You are trying to elucidate a reaction mechanism? Start with something very simple, to gain structures, many structures. Reaction mechanisms are often about the quantity of the different conformers and later about suitable initial structures for transition states. As this can get complex very fast, it's best to keep it simple. Semi-empirical methods and force fields can often shorten a long voyage. Then use something more robust for a first approach to energy barriers. I rely on BP86 for most of the heavy computing. As a modern alternative, another pure density functional, M06-L is quite a good choice, too.[11] Some of the popular quantum chemistry suites let you use density fitting procedures, which allow you to get even more out of the computer. Just to name a few, without any particular order: Gaussian, MolPro, Turbomole.
After you have developed a decent understanding of the various structures you obtained, you would probably want to take it up a notch. Now it really depends on what equipment you have at hand. How much can you afford? Ideally, more is better. At least you should check your results with a pure, a hybrid, and a meta-hybrid functional. But even that can sometimes be a stretch.[12]
If you are doing bonding analysis, elucidation of the electronic structure, conformation analysis, or you want to know more about the spectrum, you should try to use at least five different functionals, which you later also check versus ab initio approaches. Most of the time you do not have the hassle of dealing with hundreds of structures, so you should focus on getting the most accurate result. As a starting point I would still use a pure functional; the worst thing that could happen is probably that it reduces the time of subsequent optimisations. Work your way up Jacob's ladder, do what you can, take it to the max.[13]
But of course, keep in mind, that some functionals were designed for a specific purpose. You can see that in the Minnesota family of functionals. The basic one is M06-L, as previously stated a pure functional, with the sole purpose of giving fast results. M06 is probably the most robust functional in this family. It was designed for a wide range of applications and is best chosen when dealing with transition metals. M06-2X is designed for main group chemistry. It comes with somewhat built in non-covalent interactions and other features. This functional (like most other though) will fail horribly, if you have multi-reference character in your system. The M06-HF functional incorporates 100% Hartree-Fock exchange and was designed to accurately calculate time dependent DFT properties and spectra. It should be a good choice for charge transfer systems. See the original publication for a more detailed description.[14]
Then we have another popular functional: PBE.[15a] In this initial publication an exchange as well as a correlation functional was proposed, both pure density functionals, often used in conjunction.[15b] I don't know much about its usefulness, since I prefer another quite robust variation of it: PBE0, which is a hybrid functional.[15c,d] Because of its adiabatic connection formula, it is described by the authors as a non-empirical hybrid functional.[15d]
Over the years there have been various developments, some of them called improvements, but it often boils down to personal taste and applicability. For example, Handy and Cohen reintroduced the concept of left-right correlation into their OPTX functional and subsequently used it in combination with LYP, P86 and P91. Apparently, they work well and are now often used also as a reference for other density functionals. They went on and developed a functional analogous to B3LYP but outperforming it.[16]
But these were obviously not the only attempts. Xu and Goddard III extended the B3LYP scheme to include long range effects. They claim a good description of dipole moments, polarizabilities and accurate excitation energies.[17]
And with the last part in mind, it is also necessary to address long range corrections. Sometimes a system cannot be described accurately without them, sometimes they make the description worse. To name only one, CAM-B3LYP, which uses the coulomb attenuating method.[18] And there are a couple of more, and a couple of more to come, head on over to a similar question: What do short-range and long-range corrections mean in DFT methods?
As you can see, there is no universal choice, it depends on your budget and on the properties you are interested in. There are a couple of theoretical/ computational chemists on this platform. I like BP86 as a quick shot and answer questions relating to MO theory with it, shameless self-promotion: Rationalizing the Planarity of Formamide or Rationalising the order of reactivity of carbonyl compounds towards nucleophiles. And sometimes we have overachievers like LordStryker, that use a whole bunch of methods to make a point: Dipole moment of cis-2-butene.
So I picked a functional, what else?
You still have to pick a basis set. And even here you have to pick one that fits what you need. Since this answer is already way longer than I intended in the first place (Procrastination, yay!), I will keep it short(er).
There are a couple of universally applicable basis sets. The most famous is probably 6-31G*. This is a nice ancient basis set that is often used for its elegance and simplicity. Explaining how it was built is easier than for other basis sets. I personally prefer the Ahlrichs basis set def2-SVP, as it comes with a pre-defined auxiliary basis set suitable for density fitting (even in Gaussian).[19]
Worth mentioning is the Dunning basis set family cc-pVDZ, cc-pVTZ, ... . They were specifically designed to be used in correlated molecular calculations. They have been reworked and improved after its initial publication, to fit them to modern computational standards.[20]
The range of suitable basis sets is large, most of them are available through the basis set exchange portal for a variety of QC programs.
Sometimes an effective core potential can be used to reduce computational cost and is worth considering.
*Sigh* What else?
When you are done with that, consider dispersion corrections. The easiest way is to pick a functional that has already implemented this, but this is quite dependent on the program of your choice (although the main ones should have this by now, it's not something brand new). However, the standalone DFT-D3 program by Stefan Grimme's group can be obtained from his website.[21]
Still reading? Read more! (A chapter formerly known as Notes and References)
Axel D. Becke, J. Chem. Phys., 2014, 140, 18A301.
(a) A. D. Becke, Phys. Rev. A, 1988, 38, 3098-3100. (b) John P. Perdew, Phys. Rev. B, 1986, 33, 8822-8824.
Unfortunately this functional is not always implemented in the same way, although the differences are pretty small. It basically boils down as to which VWN variation is used in the local spin density approximation term. Also see S. H. Vosko, L. Wilk, and M. Nusair, Can. J. Phys., 1980, 58 (8), 1200-1211.
P. J. Stephens, F. J. Devlin, C. F. Chabalowski, and M. J. Frisch, J. Phys. Chem., 1994, 98 (45), 11623–11627.
Axel D. Becke, J. Chem. Phys., 1993, 93, 5648.
C. Lee, W. Yang, and R. G. Parr, Phys. Rev. B, 1988, 37, 785–789
Unfortunately the B3LYP functional suffers from the same problems that are mentioned in [3].
The failures of B3LYP are known and often well documented. Here are a few recent papers, but there are many, many more. (a) Holger Kruse, Lars Goerigk, and Stefan Grimme, J. Org. Chem., 2012, 77 (23), 10824–10834. (b) Joachim Paier, Martijn Marsman and Georg Kresse, J. Chem. Phys., 2007, 127, 024103. (c) Igor Ying Zhang, Jianming Wu and Xin Xu, Chem. Commun., 2010, 46, 3057-3070. (pdf via researchgate.net)
Just my two cents, that I am hiding in the footnotes: “Pretty please do not make this your first choice.”
John P. Perdew and Karla Schmidt, AIP Conf. Proc., 2001, 577, 1. (pdf via molphys.org)
Yan Zhao and Donald G. Truhlar, J. Chem. Phys., 2006, 125, 194101.
Note, that not always full recomputations of geometries are necessary for all different functionals you apply. Often single point energies can tell you quite much how good your original model performs. Keep the computations to what you can afford.
Don't use an overkill of methods, if you already have five functionals agreeing with each other and possibly with an MP2 calculation, you are pretty much done. What can the use of another five functionals tell you more?
Y. Zhao, N.E. Schultz, and D.G. Truhlar, Theor. Chem. Account, 2008, 120, 215–241.
(a) John P. Perdew, Kieron Burke, and Matthias Ernzerhof, Phys. Rev. Lett., 1996, 77, 3865. (b) The exchange functional was revised in Matthias Ernzerhof and John P. Perdew, J. Chem. Phys., 1998, 109, 3313. (c) Carlo Adamo and Vincenzo Barone, J. Chem. Phys., 1999, 110, 6158. (d) Kieron Burke, Matthias Ernzerhof, and John P. Perdew, Chem. Phys. Lett., 1997, 265, 115-120.
(a) N. C. Handy and A. J. Cohen, Mol. Phys., 2001, 99, 403-12. (b) A. J. Cohen and N. C. Handy, Mol. Phys., 2001, 99 607-15.
X. Xu and W. A. Goddard III, Proc. Natl. Acad. Sci. USA, 2004, 101, 2673-77.
T. Yanai, D. P. Tew, and N. C. Handy, Chem. Phys. Lett., 2004, 393, 51-57.
(a) Florian Weigend and Reinhart Ahlrichs, Phys. Chem. Chem. Phys., 2005, 7, 3297-3305. (b) Florian Weigend, Phys. Chem. Chem. Phys., 2006, 8, 1057-1065.
The point where the use of more basis functions does not effect the calculation. For the correlation consistent basis sets, see a comment by Ernest R. Davidson, Chem. Phys. Rev., 1996, 260, 514-518 and references therein. Also see Thom H. Dunning Jr, J. Chem. Phys., 1989, 90, 1007 as the original source.
DFT-D3 Website; Stefan Grimme, Jens Antony, Stephan Ehrlich, and Helge Krieg,
J. Chem. Phys., 2010, 132, 154104.
Have fun and good luck! | {
"domain": "chemistry.stackexchange",
"id": 2993,
"tags": "computational-chemistry, density-functional-theory"
} |
Conservation of Mechanical Energy in Collisions | Question: The Law of conservation of Mechanical Energy states that in a closed system with no non-conservative forces acting, the energy of the system will always remain constant. This makes sense for an elastic collision where Mechanical energy is conserved. However, if an inelastic or completely inelastic collision takes place in a closed system with no dissipative forces acting on it, how is mechanical energy not conserved?
Answer: Energy is conserved in inelastic collisions. Bulk kinetic energy is not conserved.
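A quick numerical illustration of that point (my own sketch, with made-up masses and velocities): in a perfectly inelastic collision momentum is conserved, but the bulk kinetic energy decreases, and the difference ends up as internal (thermal/strain) energy.

```python
m1, v1 = 2.0, 5.0   # moving mass (kg) and its velocity (m/s) -- assumed values
m2, v2 = 3.0, 0.0   # target mass, initially at rest

# Momentum conservation fixes the common final velocity after sticking together
v_final = (m1 * v1 + m2 * v2) / (m1 + m2)

ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after = 0.5 * (m1 + m2) * v_final**2
lost = ke_before - ke_after  # equals 0.5 * (m1*m2/(m1+m2)) * (v1 - v2)**2

print(v_final, lost)  # 2.0 m/s, 15.0 J converted to internal energy
```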
The sources I learned from never introduced a "Law of conservation of Mechanical Energy". I assume it applies in a restricted mechanics where thermalization is disallowed and all energy must be expressed in terms of macroscopic coordinates.
In that case the energy lost from (or added to) the kinetic channel must be hiding in strain potentials of some kind (elastic potential energy or some non-linear generalization). | {
"domain": "physics.stackexchange",
"id": 64123,
"tags": "newtonian-mechanics, energy-conservation, collision"
} |
Barrier Penetration in Spontaneous Symmetry Breaking | Question: In the discussion of spontaneous symmetry breaking in Weinberg, Chapter 19, Section 19.1, he says that the off-diagonal matrix elements between the two vacua $$|\mathrm{VAC},+\rangle \pm |\mathrm{VAC},-\rangle$$ are suppressed by a factor of $$\exp(-C\mathcal{V}).$$ For a scalar field theory ($-\frac{1}{2}(\partial \phi)^2 - V(\phi)$), by analogy to the wave-mechanical problem of barrier penetration we have $$C = \int_{-\bar{\phi}}^{\bar{\phi}} \sqrt{2V(\phi)}\, d\phi.$$ I am trying to understand how to get this result. Please let me know if you know how to obtain it mathematically.
Answer: Following Weinberg, we'll consider a QFT with a symmetry $\phi \rightarrow -\phi$.
We'll use the usual phi4 theory so common in describing such systems, described by the action,
\begin{equation}
\mathcal{S} = \int dt \, d^d x \left\{ \frac{1}{2}\left( \partial_t \phi \right)^2 - \frac{1}{2}\left( \nabla \phi \right)^2 - \frac{m^2}{2} \phi^2 - \frac{\lambda}{4!} \phi^4 \right\},
\end{equation}
and the path integral
\begin{equation}
\mathcal{Z} = \int \mathcal{D}\phi \, e^{i \mathcal{S}}.
\end{equation}
We'll be interested in $m^2 \ll 0$ where the symmetry-broken ground states are favored.
Unlike usual, however, we'll take this to be in a finite spatial volume, $V = L^d$, with periodic boundary conditions.
In this case, the Fourier transform is a function of discrete momenta $k_i=2 \pi n_i/L$.
To focus on the low-energy physics, it will be useful to concentrate on the zero-momentum mode,
\begin{equation}
\phi(x,t) = L^{-d/2} \varphi(t) + \sum_{k \neq 0} \tilde{\phi}(k,t),
\end{equation}
where the $k=0$ term in the mode expansion is equivalent to a spatial average over the field
\begin{equation}
\varphi(t) = L^{-d/2} \int d^d x \, \phi(x,t).
\end{equation}
The ground state(s) of this theory will have zero momentum, so we can study these by integrating out the finite-momentum modes:
\begin{equation}
\mathcal{Z} = \int \mathcal{D}\varphi \, e^{i \mathcal{S}_{eff}[\varphi]}, \qquad \mathcal{S}_{eff}[\varphi] = - i \log \int \mathcal{D}\tilde{\phi} \, e^{i \mathcal{S}}.
\end{equation}
If you follow this procedure, you'll find that the zero-mode action is
\begin{equation}
\mathcal{S}_{eff}[\varphi] = \int dt \left\{ \frac{1}{2}\left( \partial_t \varphi \right)^2 - \frac{m'^2}{2} \varphi^2 - \frac{\lambda'}{ L^d 4!} \varphi^4 \right\} + \cdots.
\end{equation}
Integrating out the $\tilde{\phi}$ will cause a shift in the coupling constants (which we denote by $m'^2$ and $\lambda'$ now), and generate an infinite number of extra terms, $\varphi^6$, $\varphi^8$, and higher time derivatives. We will ignore these terms, which don't change the physics (especially when we can appeal to perturbation theory).
The point of this procedure is that the resulting theory $\mathcal{Z}$ is given precisely by the Feynman path integral for a simple and familiar model in quantum mechanics. This path integral is that obtained from the Hamiltonian
\begin{equation}
H = \frac{p^2}{2} + \frac{m'^2}{2} x^2 + \frac{\lambda'}{L^d 4!} x^{4},
\end{equation}
with $[x,p] = i$. In the limit we are interested in, $m'^{2} \ll 0$, this is precisely the Hamiltonian of a non-relativistic particle in a one-dimensional double-well potential, and you can read off all of your knowledge from that problem (see previous posts here and here for example).
The exact eigenstates of the system are symmetric under $x \rightarrow -x$, with the ground state being the symmetric combination of the particle in both wells, and the first excited state is the antisymmetric combination. As Weinberg says in his Section 19.1, the energy difference between the states is (half of) the matrix element between these states.
If you prepare a particle in a state where it is initially in one of the two wells, with a position centered at $\pm x_0 = \pm\sqrt{-6L^{d}m'^2/\lambda'}$, this will be a linear combination of the ground state and the first excited state, so it will oscillate between the two wells with a period given by the inverse energy difference. There's an asymptotic expression for the energy difference given on Wikipedia. This period can alternatively be estimated by the WKB expression for tunneling between the wells:
\begin{equation}
T \sim \exp\left[ 2 \int_{-x_0}^{x_0} \sqrt{2 [V(x) - V(x_0)]} dx \right].
\end{equation}
Either way, we find that the two lowest ground states have a splitting proportional to $\exp(-CL^d)$, as stated by Weinberg.
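As a quick numerical illustration of that last estimate (with made-up values $m'^2=-1$, $\lambda'=6$ and the volume factor set to one, so that $x_0=1$; these numbers are not tied to anything physical), the exponent $C=\int_{-x_0}^{x_0}\sqrt{2[V(x)-V(x_0)]}\,dx$ can be computed by simple quadrature, and for this quartic well it also has a closed form to check against:

```python
import math

# Made-up parameters: m'^2 = -1, lambda' = 6, volume factor L^d = 1,
# so the minima sit at x0 = sqrt(-6 m'^2 / lambda') = 1.
m2, lam = -1.0, 6.0
x0 = math.sqrt(-6.0 * m2 / lam)

def V(x):
    return 0.5 * m2 * x**2 + lam / 24.0 * x**4

def wkb_exponent(V, x0, n=20_000):
    """Midpoint-rule estimate of C = int_{-x0}^{x0} sqrt(2 [V(x) - V(x0)]) dx."""
    h = 2.0 * x0 / n
    total = 0.0
    for i in range(n):
        x = -x0 + (i + 0.5) * h
        total += math.sqrt(max(2.0 * (V(x) - V(x0)), 0.0)) * h
    return total

# Here V(x) - V(x0) = (lam/24) * (x**2 - x0**2)**2, so the integral has the
# closed form sqrt(lam/12) * (4/3) * x0**3, which the quadrature reproduces.
closed_form = math.sqrt(lam / 12.0) * 4.0 * x0**3 / 3.0
print(wkb_exponent(V, x0), closed_form)
```

Restoring the volume factor then gives the stated $\exp(-CL^d)$ scaling of the splitting.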
You can go through this procedure for a more general field potential and see how it's not particular to phi4 theory. | {
"domain": "physics.stackexchange",
"id": 90237,
"tags": "quantum-field-theory, vacuum, symmetry-breaking, instantons"
} |
Slider PHP loop | Question: I am working on an image slider for a website; the user can add from 0 to 3 images to the slider. What kind of loop should I use to make this code more efficient and DRY? "data-slide-number" starts from 0 and changes based on the image. The images are not "required" so there won't always be 3 images.
<?php
// image urls
$pr_img_1 = first image url;
$pr_img_2 = second image url;
$pr_img_3 = third image url;
?>
<div class="carousel-inner">
    <?php if($pr_img_1) { ?>
        <div class="active item" data-slide-number="0">
            <img src="<?= esc_html( $pr_img_1 ); ?>">
        </div>
    <?php } ?>
    <?php if($pr_img_2) { ?>
        <div class="active item" data-slide-number="1">
            <img src="<?= esc_html( $pr_img_2 ); ?>">
        </div>
    <?php } ?>
    <?php if($pr_img_3) { ?>
        <div class="active item" data-slide-number="2">
            <img src="<?= esc_html( $pr_img_3 ); ?>">
        </div>
    <?php } ?>
</div><!-- Carousel nav -->
Answer: You can follow the below code.
<?php
$pr_imgs = array("first image url","second image url","third image url");
$i = 0;
foreach($pr_imgs as $pimg)
{
?>
    <div class="active item" data-slide-number="<?php echo $i; ?>">
        <img src="<?= esc_html( $pimg ); ?>">
    </div>
<?php
    $i++;
}
?>
Using foreach you don't have to take care of the number of image paths. | {
"domain": "codereview.stackexchange",
"id": 21001,
"tags": "php"
} |
Tracking Spacetime Events | Question: In the linked post: Liouville's Theorem For Spacetime, I indicated the need for tracking the evolution of spacetime events. Is it sufficient to track a spacetime event by placing a particle there with zero initial velocity? You would then identify the spacetime event with the location of the particle. It would need to be a particle with zero mass, so that it could respond to the changes in curvature as quickly as possible. This however runs into trouble with my condition of zero initial velocity.
This seems like a natural choice, since the only way you can measure a spacetime event is by placing a particle there. But without the notion of particles, the notion of spacetime events and how they "move" becomes ambiguous in my mind.
Answer: If a spacetime is globally hyperbolic, it can be foliated by timelike geodesics, i.e., it is possible that through each point of the spacetime passes a free particle such that no two such particles ever intersect (this can be shown by using the Hamiltonian flow of a Cauchy surface). It can indeed be a way to track spacetime events, although a more important method to do this in the practical case is the intersection of null geodesics with timelike curves - a light ray bouncing between different objects, as this is the most common way to define distances in relativity.
I'm not sure if the global hyperbolicity condition is necessary, but I feel like it might run into some difficulties for instance on the Carter spacetime. | {
"domain": "physics.stackexchange",
"id": 50044,
"tags": "general-relativity, spacetime, reference-frames, metric-tensor, coordinate-systems"
} |
Is there a "simple" way to tell if a stabilizer code is degenerate? | Question: Suppose I have a stabilizer code defined by $m$ independent Pauli strings. Is there a "simple" way to check if the code is degenerate or not?
As test cases the $[[5,1,3]]$ code is not degenerate, and Shor's $[[9,1,3]]$ code is.
Answer: Set of errors
It is somewhat imprecise to say that a quantum error correcting code is degenerate. Degeneracy means that the code assigns the same syndrome to two different errors, but that is of course true of every code$^1$. Therefore, one should specify the set of errors to be considered. Then the code $C$ is degenerate for the set of errors $\mathcal{E}$ if two distinct errors $e_1,e_2\in\mathcal{E}$ have the same syndrome. Often, $\mathcal{E}$ is implicitly taken to be the set of correctable errors of the code.
(Interestingly, it is impossible for a classical error correcting code to be degenerate for any set of correctable errors.)
Single-qubit errors
It is particularly easy to check if a stabilizer code is degenerate for the set of single-qubit $X$ and $Z$ errors. All we need to do is write down the check matrix $H=[H_Z|H_X]$ and see whether it has two identical columns. We can extend this to all single-qubit errors by expanding the check matrix to $H'=[H_Z|H_X|H_Y]$ where$^2$ $H_Y=H_Z+H_X$ and verifying that all columns of $H'$ are distinct.
For example, the check matrix for the $[[5,1,3]]$ code is
$$
\left[
\begin{array}{ccccc|ccccc}
0&1&1&0&0 & 1&0&0&1&0\\
0&0&1&1&0 & 0&1&0&0&1\\
0&0&0&1&1 & 1&0&1&0&0\\
1&0&0&0&1 & 0&1&0&1&0\\
\end{array}
\right]
$$
where all columns are distinct. Moreover, $H_Y=H_Z+H_X$ consists of distinct columns with Hamming weight three and four, so $H'$ has distinct columns, too. Therefore, this code is not degenerate for single-qubit errors.
On the other hand, the check matrix for the $[[9,1,3]]$ code is
$$
\left[
\begin{array}{ccccccccc|ccccccccc}
1&1&0& 0&0&0& 0&0&0 & 0&0&0& 0&0&0& 0&0&0\\
0&1&1& 0&0&0& 0&0&0 & 0&0&0& 0&0&0& 0&0&0\\
0&0&0& 1&1&0& 0&0&0 & 0&0&0& 0&0&0& 0&0&0\\
0&0&0& 0&1&1& 0&0&0 & 0&0&0& 0&0&0& 0&0&0\\
0&0&0& 0&0&0& 1&1&0 & 0&0&0& 0&0&0& 0&0&0\\
0&0&0& 0&0&0& 0&1&1 & 0&0&0& 0&0&0& 0&0&0\\
0&0&0& 0&0&0& 0&0&0 & 1&1&1& 1&1&1& 0&0&0\\
0&0&0& 0&0&0& 0&0&0 & 0&0&0& 1&1&1& 1&1&1\\
\end{array}
\right]
$$
where some columns, e.g. tenth and eleventh, are identical. Therefore, the $[[9,1,3]]$ code is degenerate.
The reason this procedure works is that the syndrome $S$ of an error given as a binary vector $e=[z_1,\dots,z_n,x_1,\dots,x_n]$ can be computed as
$$
S = H\Lambda e
$$
where $H$ is the check matrix and $\Lambda=\begin{bmatrix}0&I\\I&0\end{bmatrix}$ is a matrix that realizes the standard symplectic inner product in $\mathbb{Z}_2^{2n}$.
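A minimal sketch of the duplicate-column test in plain Python (the function name and matrix layout here are my own; each block is given as a list of rows):

```python
def is_degenerate_single_qubit(H_Z, H_X):
    """True iff the extended check matrix [H_Z | H_X | H_Y] has a repeated
    column, i.e. two distinct single-qubit errors share a syndrome."""
    n = len(H_Z[0])
    # H_Y = H_Z + H_X, entrywise modulo two.
    H_Y = [[(z + x) % 2 for z, x in zip(zr, xr)] for zr, xr in zip(H_Z, H_X)]
    cols = [tuple(row[j] for row in block)
            for block in (H_Z, H_X, H_Y) for j in range(n)]
    return len(set(cols)) < len(cols)

# The [[5,1,3]] check matrix from above (rows of H_Z, then rows of H_X):
H_Z5 = [[0, 1, 1, 0, 0], [0, 0, 1, 1, 0], [0, 0, 0, 1, 1], [1, 0, 0, 0, 1]]
H_X5 = [[1, 0, 0, 1, 0], [0, 1, 0, 0, 1], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0]]
print(is_degenerate_single_qubit(H_Z5, H_X5))  # False: not degenerate
```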
General procedure
The above procedure directly generalizes to more complicated sets of errors, but instead of comparing columns of the check matrix, we now need to compare their linear combinations.
The most general way to state the above procedures is the following simple algorithm
def is_degenerate(code, errors):
    syndromes = set()
    for e in errors:
        s = code.get_syndrome(e)
        if s in syndromes:
            return True
        syndromes.add(s)
    return False
$^1$ This is easiest to see for stabilizer codes. Take an error $e$ and a stabilizer $g$. Then $e\ne eg$, but $e$ and $eg$ have the same syndrome, because $g$'s syndrome is trivial.
$^2$ Addition in $H_Y=H_Z+H_X$ is modulo two. | {
"domain": "quantumcomputing.stackexchange",
"id": 3961,
"tags": "error-correction, stabilizer-code"
} |
Exception handling with null check in java | Question: //not an elf file. try PE parser
PE pe=PEParser.parse(path);
if(pe!=null)
{
    PESignature ps =pe.getSignature();
    if(ps==null||!ps.isValid())
    {
        //What is it?
        Toast.makeText(this,"The file seems that it is neither an Elf file or PE file!",3).show();
        throw new IOException(e);
    }
}
else
{
    //What is it?
    Toast.makeText(this,"The file seems that it is neither an Elf file or PE file!",3).show();
    throw new IOException(e);
}
How can I organize the above code, so that
//What is it?
Toast.makeText(this,"The file seems that it is neither an Elf file or PE file!",3).show();
throw new IOException(e);
Appears only once, or just be better-looking(easy to read)?
Summary
Please comment or advise on organizing the if statements.
Answer: Your business logic is getting muddled with your error-checking logic. I would recommend extracting your "null-to-Exception" conversions into different methods, so they don't appear here.
Ideally, PEParser.parse() and PE.getSignature() would not return null values, and would instead throw an exception directly if they encountered an issue. If you have the option to change these methods, I would do so. Otherwise, I would recommend wrapping them in methods that will convert null return values to exceptions, like so:
PE GetPEFromPath(String path)
{
    PE pe = PEParser.parse(path);
    if (pe == null)
    {
        throw new IOException();
    }
    return pe;
}

PESignature GetSignatureFromPE(PE pe)
{
    PESignature ps = pe.getSignature();
    if (ps == null || !ps.isValid())
    {
        throw new IOException();
    }
    return ps;
}
These methods should return a valid object, or throw an exception - no other options. This way, we can write our business logic in a try block with no null-check interruptions, and if something fails, we will move gracefully to the catch block.
Now, we can transform your given code snippet into something more readable:
//not an elf file. try PE parser
try
{
    PE pe = GetPEFromPath(path);
    PESignature ps = GetSignatureFromPE(pe);
}
catch (IOException e)
{
    Toast.makeText(this, "The file seems that it is neither an Elf file or PE file!", 3).show();
    throw e;
} | {
"domain": "codereview.stackexchange",
"id": 32233,
"tags": "java, error-handling, exception"
} |
Requiring at least one alldiff constraint to be satisfied converted to SAT | Question: For generating certain hard puzzles, I am trying to model a problem (ultimately) in SAT. I don't know how to do that, so I am starting with CSP because it's more expressive. In CSP, there is a global alldiff constraint, which requires all variables to take on different values from their domains. I have a set of alldiff constraints. However, out of this set, I only require at least one of them to be true. That is, not all of them have to be satisfied.
For concreteness, suppose we have 8 variables $x_1,\ldots,x_8$. Each takes values from the domain $\{0,1,2\}$. We want to satisfy the following formula:
$$\text{alldiff}(x_1,x_2,x_3) \lor \text{alldiff}(x_1,x_6,x_7) \lor \text{alldiff}(x_4,x_5,x_6) \lor \text{alldiff}(x_3,x_4,x_8).$$
We are happy if at least one of the 4 alldiff clauses is satisfied, e.g., if $x_1=0$, $x_2=1$, and $x_3=2$, we are already happy.
But how is this then modeled in SAT? Specifically, how do we write an alldiff constraint in SAT?
Answer: If you want to model an alldiff() constraint in SAT, there are several options. Here are two different options you can try:
One way is to expand $\text{alldiff}(x_1,\dots,x_n)$ into $n(n-1)/2$ inequality constraints: $(x_1 \ne x_2) \land (x_1 \ne x_3) \land \cdots$. Now you can express each inequality constraint $x_i \ne x_j$ on $b$-bit values in turn as a boolean formula: $(x_i[0] \ne x_j[0]) \lor (x_i[1] \ne x_j[1]) \lor \cdots$, where $x_i[k]$ is the $k$th bit of $x_i$. In this way each alldiff constraint on $n$ $b$-bit variables expands into a boolean formula of size $\Theta(n^2 b)$.
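To make the size of that expansion concrete, here is a small Python sketch that evaluates the expanded formula on concrete assignments rather than emitting clauses (the names are invented for illustration):

```python
from itertools import combinations

def alldiff_expanded(xs, b):
    """Evaluate the Theta(n^2 b) expansion of alldiff on b-bit values:
    AND over pairs (i, j) of OR over bits k of (x_i[k] != x_j[k])."""
    bit = lambda v, k: (v >> k) & 1
    return all(any(bit(xi, k) != bit(xj, k) for k in range(b))
               for xi, xj in combinations(xs, 2))

print(alldiff_expanded([0, 1, 2], 2))  # True: all values distinct
print(alldiff_expanded([0, 1, 1], 2))  # False: two values coincide
```

The outer disjunction over the four alldiff clauses in the question is then just an OR of four such formulas.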
Alternatively, a different way is to use a one-hot encoding. Suppose each $x_i$ needs to take values from some universe $U$ of $u$ values. Instead of encoding each $x_i$ as a $\lceil \lg u \rceil$-bit value, encode it as a $u$-bit boolean vector, where $x_i[k]=1$ if $x_i = k$ and $x_i[k]=0$ otherwise. Now an alldiff constraint on $n$ variables amounts to requiring that each variable's vector contains exactly one $1$ (so it encodes a single value) and that the $u$-bit vector obtained by OR-ing together the vectors of the $n$ variables in the alldiff constraint contains exactly $n$ ones. Thus, we obtain an $n$-out-of-$u$ constraint.
This $n$-out-of-$u$ constraint can be encoded in multiple ways; one way is to use a tree of half-adders to sum up these values. The tree will have $u$ leaves, and a $\lceil \lg u \rceil$-bit output at the root. You can find more about encoding cardinality constraints here: https://cs.stackexchange.com/a/6522/755.
These approaches give you a boolean formula, i.e., a boolean circuit; it's not in CNF form. So, strictly speaking, it's not an instance of SAT. However, there is a standard way to convert any such boolean formula to CNF form: you use the Tseitin transform (equivalently, you use the standard reduction from CircuitSAT to SAT) and then run a SAT solver on the result. There are several SAT front-ends that will do this work for you; STP is one I have used that is quite convenient, but there are others as well.
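The Tseitin idea can be illustrated on a single AND gate: introduce an auxiliary variable $t$ and add clauses forcing $t \leftrightarrow (a \wedge b)$. A toy sketch with a brute-force equivalence check (no SAT library involved; all names are made up):

```python
from itertools import product

def tseitin_and(a, b, t):
    """CNF clauses asserting t <-> (a AND b); literals are signed ints."""
    return [[-t, a], [-t, b], [t, -a, -b]]

def satisfies(clauses, assignment):
    # assignment maps variable number -> bool; a clause needs one true literal
    return all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in clauses)

# Brute-force check over all 8 assignments that the clauses hold exactly
# when t agrees with (a AND b):
ok = all(
    satisfies(tseitin_and(1, 2, 3), {1: a, 2: b, 3: t}) == (t == (a and b))
    for a, b, t in product([False, True], repeat=3)
)
print(ok)  # True
```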
This gives you at least a couple of options to consider for converting your problem to SAT. Which one is most efficient might depend on your specific problem and its parameters (the size of the domain, the number of variables in each alldiff constraint).
Also, if you have a lot of alldiff constraints, it seems likely that a CSP solver might do better than a SAT solver. CSP solvers are designed to handle alldiff constraints well, thus they might perform better than naively bitblasting to SAT, as SAT solvers won't be aware of that structure.
See also When to use SAT vs Constraint Satisfaction? and Converting (math) problems to SAT instances and https://cs.stackexchange.com/a/12153/755. | {
"domain": "cs.stackexchange",
"id": 4602,
"tags": "modelling, constraint-programming, constraint-satisfaction"
} |
Role of potassium hydroxide in alkaline battery? | Question: Is the role of potassium hydroxide ($\ce{KOH}$) in an alkaline battery to provide hydroxide for the reaction with zinc? Would the battery cease to work if $\ce{KOH}$ was removed and only $\ce{OH-}$ from autoionization ($\pu{10^{-7} M}$) remained?
Answer: The role of $\ce{KOH}$ is to provide enough ions for redox reactions. Remember that redox reactions come in pairs, for every reduction there must be oxidation. If $\ce{Zn}$ reacts with $\ce{OH-}$ to form a hydroxide, what would be a counter-reaction?
A battery also needs to have certain ionic conductivities to be functional. Could you ever get enough $\ce{OH-}$ ions through autoionization? | {
"domain": "chemistry.stackexchange",
"id": 14039,
"tags": "electrochemistry"
} |
Actor Network Target Value in A2C Reinforcement Learning | Question: In DQN, we use;
$Target = r+\gamma v(s')$ equation to train (fit) our network. It is easy to understand since we use the $Target$ value as the dependent variable like we do in supervised learning. I.e. we can use codes in python to train the model like,
model.fit(state,target, verbose = 0)
where $r$ and $v(s')$ can be found by model prediction.
When it comes to an A2C network, things become more complicated. Now we have two networks: Actor and Critic. It is said that the Critic network is not different from what is done in DQN. The only difference is that now we have only one output neuron in the network. So, similarly, we calculate the $Target = r+\gamma v(s')$ after acting via a sampled action from the $\pi(a|s)$ distribution. And we train the model with model.fit(state,target, verbose = 0) in python as well.
However, the Actor case is confusing. Now we have another neural network which takes states as input and gives probabilities as the output by using the softmax activation function. It is cool. But the point I am stuck on is the dependent variable used to adjust the Actor network. So in python,
model2.fit(state,?,verbose=0)
What is the ? value to "supervise" the network to adjust the weights? and WHY?
In several resources I found the Advantage value which is nothing but the $Target - V(s)$.
And also there is something called the actor loss, which is calculated by,
Why?
Thanks in advance!
Answer: The target will still be a form of return estimate ($V(s_t)$, $Q(a_t,s_t)$, advantage, n-step reward, etc.). For example, in your case, the $Q_w$ that the Critic estimated.
You will need to review a bit Policy Gradient methods in this order: PG Theorem, REINFORCE (Actor only method) then AC (Actor-Critic) and then A2C. I will give you a conceptual explanation, abstracted from the math behind. The general form of Policy Gradient is:
likelihood of the action given the state multiplied by a form of return: $\log\pi_{\mathbf{\theta}}(a_t|s_t)\cdot R_t$
What this tells us is: maximize the likelihood of a specific action multiplied by the return. In classification we know the correct class (action), but here this is not the case. The only learning signal comes from the reward. Therefore, multiplying the likelihood of an action by a form of return for selecting that action will decrease/increase the probability of selecting that action again in the current state. The PG theorem states that this is the direction to update the weights $\theta$ in order to maximize return.
In A2C, you can have various implementations: 2 separate networks or 1 network with 2 separate heads. Actor is responsible for learning the distribution of actions that maximize the return given state. Critic is responsible for estimating the return from current state (and action). Thus the loss function in A2C is usually the Policy Gradient loss plus the Mean Squared Error between expected return and observed return (plus entropy for exploration). Please refer to the original A2C paper for the equations. As you can see we have 2 main loss functions (one for each network (head) ).
The return, in any form, affects training in 2 ways:
Learning signal for the actor: Multiplies the likelihood of actions and by doing so increases/decreases the probability for selecting these actions. This will change the parameters of the policy in order for the policy to favor rewarding actions.
Learning signal for the critic: MSE target.
As you can see, in a loose sense you are again doing supervised learning for both Actor and Critic, but with a classification-style loss for the Actor and a regression loss for the Critic. The reason this works comes from the Policy Gradient theorem, which shows that in order to maximize the return following a parametrized policy we need to update the parameters in the direction that maximizes the PG loss function (for the Actor).
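A bare-bones numeric sketch of the two loss terms for a single transition (no networks or autograd; all numbers and names here are invented for illustration):

```python
import math

def a2c_losses(probs, action, reward, gamma, v_s, v_s_next):
    """One-step A2C loss terms. `probs` is the Actor's softmax output;
    the TD target r + gamma * V(s') plays the role of the return estimate.
    In a real implementation the advantage is treated as a constant
    (no gradient flows through it into the Actor)."""
    target = reward + gamma * v_s_next        # Critic's regression target
    advantage = target - v_s                  # return estimate minus baseline
    actor_loss = -math.log(probs[action]) * advantage  # policy-gradient loss
    critic_loss = (target - v_s) ** 2                  # MSE for the Critic
    return actor_loss, critic_loss

a_loss, c_loss = a2c_losses([0.5, 0.5], action=0, reward=1.0,
                            gamma=0.9, v_s=0.0, v_s_next=0.0)
print(a_loss, c_loss)  # -log(0.5) * 1 and 1.0 for these made-up numbers
```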
Please note that for various reasons using the raw return might not be that good an idea. That's why you will see the PG loss in various forms (e.g. policy likelihood multiplied by the advantage $A_t = R_t - V(s_t)$, by $Q(a_t,s_t)$, etc.) | {
"domain": "datascience.stackexchange",
"id": 9403,
"tags": "machine-learning, reinforcement-learning, actor-critic"
} |
Evaluating Polish Prefix Notation and Polish Postfix Notation | Question: In Polish postfix notation, the operators follow their operands. For example, to add 3 and 4 together, the expression is 3 4 + rather than 3 + 4. The conventional notation expression 3 − 4 + 5 becomes 3 4 − 5 + in Polish postfix notation: 4 is first subtracted from 3, then 5 is added to it.
Polish prefix notation, on the other hand, require that its operators precede the operands they work on. 3 + 4 would then be as + 3 4.
The algorithm used for parsing the postfix expression is simple:
while not end of file
    read token into var
    if (var is num)
        push var
    else if (var is operator)
        if (operands >= 2)
            pop into rhs
            pop into lhs
            push lhs operator rhs
        else
            throw exception
    else
        return fail
pop into var
return var
The code uses a stack to implement it. Prefix notation is parsed in the same manner, except that the tokens are reversed and the operands are swapped for each operation.
Parentheses are not supported.
Code:
#!/usr/bin/env python3
from typing import Union # For older Python versions.
OPCODES = {
    "+": lambda lhs, rhs: lhs + rhs,
    "-": lambda lhs, rhs: lhs - rhs,
    "*": lambda lhs, rhs: lhs * rhs,
    "/": lambda lhs, rhs: lhs / rhs,
    "//": lambda lhs, rhs: lhs // rhs,
    "%": lambda lhs, rhs: lhs % rhs,
    "^": lambda lhs, rhs: lhs ** rhs,
}
def _eval_op(
    lhs: Union[int, float], rhs: Union[int, float], opcode: str
) -> Union[int, float]:
    return OPCODES[opcode](lhs, rhs)
def eval_postfix_expr(expr: str) -> Union[int, float]:
    """
    Evaluate a postfix expression.

    Supported operators:
    - Addition (+)
    - Subtraction (-)
    - Multiplication (*)
    - Division (/)
    - Floor Division (//)
    - Modulo (%)
    - Exponentiation (^)

    Raises:
        ValueError: If 'expr' is an invalid postfix expression.
    """
    tokens = expr.split()
    if tokens[-1] not in OPCODES.keys() or not tokens[0].isdigit():
        raise ValueError("Invalid expression.")
    stack = []
    for tok in tokens:
        if tok.isdigit():
            stack.append(int(tok))
        elif tok in OPCODES.keys():
            if len(stack) >= 2:
                rhs = stack.pop()
                lhs = stack.pop()
                stack.append(_eval_op(lhs, rhs, tok))
            else:
                raise ValueError("Invalid expression.")
    return stack.pop()
def eval_prefix_expr(expr: str) -> Union[int, float]:
    """
    Evaluate a prefix expression.

    Supported operators:
    - Addition (+)
    - Subtraction (-)
    - Multiplication (*)
    - Division (/)
    - Floor Division (//)
    - Modulo (%)
    - Exponentiation (^)

    Raises:
        ValueError: If 'expr' is an invalid prefix expression.
    """
    tokens = expr.split()
    if tokens[0] not in OPCODES.keys() or not tokens[-1].isdigit():
        raise ValueError("Invalid expression.")
    tokens = reversed(tokens)
    stack = []
    for tok in tokens:
        if tok.isdigit():
            stack.append(int(tok))
        elif tok in OPCODES.keys():
            if len(stack) >= 2:
                lhs = stack.pop()
                rhs = stack.pop()
                stack.append(_eval_op(lhs, rhs, tok))
            else:
                raise ValueError("Invalid expression.")
    return stack.pop()
def run_tests(func, test_data) -> None:
    func_len = len(func.__name__)
    max_expr_len = max(len(repr(expr)) for expr, _ in test_data)
    for expr, res in test_data:
        func_name = func.__name__
        expr_repr = repr(expr)
        print(
            f"func: {func_name:<{func_len}}, expr: {expr_repr:<{max_expr_len}}",
            end="",
        )
        try:
            rv = func(expr)
            assert rv == res, f"Expected: {res}, Received: {rv}"
        except Exception as e:
            rv = e
        print(f", result: {repr(rv)}")
def main() -> None:
    post_test_data = [
        ("3 4 +", 7),
        ("3 4 + 5 4 ^ +", 632),
        ("+ 10a 29", 0),
        ("10 5 * 6 2 + /", 6.25),
        ("10 7 2 3 * + -", -3),
        ("icaoscasjcs", 0),
        ("* 10 5 * 7 3 + // 2 -", 3),
    ]
    pre_test_data = [
        ("+ 3 4", 7),
        ("+ ^ 5 4 + 3 4", 632),
        ("+ 10a 29", 0),
        ("/ * 10 5 + 6 2", 6.25),
        ("- 10 + 7 * 2 3", -3),
        ("icaoscasjcs", 0),
        ("1038 - // * 10 5 + 7 3 2", 3),
    ]
    run_tests(eval_postfix_expr, post_test_data)
    print()
    run_tests(eval_prefix_expr, pre_test_data)

if __name__ == "__main__":
    main()
Prints:
func: eval_postfix_expr, expr: '3 4 +' , result: 7
func: eval_postfix_expr, expr: '3 4 + 5 4 ^ +' , result: 632
func: eval_postfix_expr, expr: '+ 10a 29' , result: ValueError('Invalid expression.')
func: eval_postfix_expr, expr: '10 5 * 6 2 + /' , result: 6.25
func: eval_postfix_expr, expr: '10 7 2 3 * + -' , result: -3
func: eval_postfix_expr, expr: 'icaoscasjcs' , result: ValueError('Invalid expression.')
func: eval_postfix_expr, expr: '* 10 5 * 7 3 + // 2 -', result: ValueError('Invalid expression.')
func: eval_prefix_expr, expr: '+ 3 4' , result: 7
func: eval_prefix_expr, expr: '+ ^ 5 4 + 3 4' , result: 632
func: eval_prefix_expr, expr: '+ 10a 29' , result: ValueError('Invalid expression.')
func: eval_prefix_expr, expr: '/ * 10 5 + 6 2' , result: 6.25
func: eval_prefix_expr, expr: '- 10 + 7 * 2 3' , result: -3
func: eval_prefix_expr, expr: 'icaoscasjcs' , result: ValueError('Invalid expression.')
func: eval_prefix_expr, expr: '1038 - // * 10 5 + 7 3 2', result: ValueError('Invalid expression.')
Review Request:
There's some duplication in the docstrings. How can that be helped?
Are there any bugs in my code? Did I miss something?
I should have liked to implement eval_prefix_expr() with eval_postfix_expr(), but I did not see a way to swap the operands for each operator. Do you?
General coding comments, style, naming, et cetera.
Answer: operator module
OPCODES = {
"+": lambda lhs, rhs: lhs + rhs, ...
The lambdas are nice enough, they get the job done.
We could have just mentioned
operator.add,
floordiv, and so on.
ints are floats
def _eval_op(
lhs: Union[int, float], ...
Well, ok, clearly an int is not a float, nor does it inherit.
But rather than writing e.g. lhs: int | float,
Pep-484
explains that
when an argument is annotated as having type float, an argument of type int is acceptable
So prefer to shorten such annotations, as int is implicit.
short docstring
The docstring for eval_postfix_expr() is lovely.
You lamented the repetition of longish docstrings.
Maybe include the operators by reference?
That is, refer the interested reader
over to OPCODES, which could have its own docstring.
And then a one-liner of "+ - * / // % ^" suffices.
It goes on to comment on ValueError, suggesting
that's the only Bad Thing that could happen.
Maybe any error that gets raised is self explanatory,
and doesn't need a mention?
Or maybe enumerate the others here, such as ZeroDivisionError.
Too bad there's no i operator.
This turns out to be a very poor substitute:
print((-1) ** .5)
(6.123233995736766e-17+1j)
unary minus
if tok.isdigit():
It makes me sad that "3 -1 +" won't parse,
since "-".isdigit() == False
I agree with @vnp's EAFP suggestion.
Directly accepting an input operand of "-3.14" seems simple enough,
without making me divide 314 by 100.
silent error
elif tok in OPCODES.keys():
if len(stack) >= 2:
rhs = stack.pop()
lhs = stack.pop()
stack.append(_eval_op(lhs, rhs, tok))
else:
When presented with a short stack (bad input expression!)
we just silently move on without comment?!?
Worse, this could happen in the middle of an expression,
leading to a debugging nightmare.
(A dozen properly nested items, then short stack,
followed by another dozen properly nested items.
Where's the bad entry, we wonder?)
long stack
Evaluating "1 2 3 +" will return 5 but will leave 1 on the stack.
I feel this should be a fatal error reported to the user.
DRY
eval_prefix_expr() is Too Long.
It should reverse, and then call eval_postfix_expr().
Or both should call a common _helper().
I did not see a way to swap the operands for each operator.
Yeah, I see what you mean.
Define a postfix_swap(a, b) helper which does nothing
(identity function), and a prefix_swap(a, b) helper
which swaps its args, and pass in the swapper function
as part of the call.
Or maybe pass in (0, 1) and (1, 0) arg_selector tuples,
the indexes of a and b.
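One way that shared helper could look, using a boolean swap flag instead of selector tuples (a sketch with a minimal operator table, not the full OPCODES):

```python
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def _eval_tokens(tokens, swap):
    """Shared core: postfix and prefix differ only in token order and in
    which popped value becomes the left-hand operand."""
    stack = []
    for tok in tokens:
        if tok in OPS:
            if len(stack) < 2:
                raise ValueError("Invalid expression.")
            b, a = stack.pop(), stack.pop()
            lhs, rhs = (b, a) if swap else (a, b)
            stack.append(OPS[tok](lhs, rhs))
        else:
            stack.append(int(tok))  # raises ValueError on junk tokens
    if len(stack) != 1:
        raise ValueError("Invalid expression.")  # rejects a "long stack" too
    return stack.pop()

def eval_postfix(expr):
    return _eval_tokens(expr.split(), swap=False)

def eval_prefix(expr):
    return _eval_tokens(reversed(expr.split()), swap=True)

print(eval_postfix("10 7 2 3 * + -"))  # -3
print(eval_prefix("- 10 + 7 * 2 3"))   # -3
```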
simple tests
The custom run_tests() routine is nice.
We could tighten up a few things, like {func_name:<{func_len}}
is just {func_name}, and we could assign func_name just once
if we need it at all.
Consider phrasing this test in terms of from unittest import TestCase.
assert rv == res
This equality test works for now.
But soon you'll want self.assertAlmostEqual(), I think.
commutativity
I really do appreciate the emphasis on "simple" here.
post_test_data = [ ...
("10 5 * 6 2 + /", 6.25),
...
pre_test_data = [ ...
("/ * 10 5 + 6 2", 6.25),
Those say nearly the same thing.
But the one is not a reversal of the other.
In mathematics, addition over the reals is commutative and associative.
Similarly for multiplication.
In IEEE-754, addition is commutative but not associative, and the same holds for multiplication.
Order of evaluation matters.
We worry about catastrophic cancellation.
One way to avoid cumulative rounding errors in a long sum
is to order FP operands by magnitude and sum the little ones
before the big ones.
So consider ditching pre_test_data,
in favor of just reversing post_test_data.
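A tiny demonstration that grouping and ordering matter for IEEE-754 doubles (values chosen to force the effect):

```python
# Regrouping the same three addends changes the double-precision result:
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0: the exact cancellation happens first
print(a + (b + c))  # 0.0: the 1.0 is absorbed into -1e16 and lost

# Summing small magnitudes first preserves them:
xs = [1e16] + [1.0] * 64
naive = 0.0
for x in xs:
    naive += x
small_first = 0.0
for x in sorted(xs, key=abs):
    small_first += x
print(small_first - naive)  # 64.0: the naive order dropped every 1.0
```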
hypothesis
Consider writing a postfix_to_infix() converter,
whose parenthesized output you can just hand to eval().
Now you have an "oracle" that knows the right answer.
With that in hand you can invite
hypothesis
to dream up random expressions of limited length,
and then you verify that the evaluations match.
I predict you will learn new things about
your code and about FP.
This code achieves most of its design goals.
I would be willing to delegate or accept maintenance tasks on it. | {
"domain": "codereview.stackexchange",
"id": 45514,
"tags": "python, parsing, reinventing-the-wheel, math-expression-eval"
} |
rosmsg show unable to load message [std_msgs/] | Question: I am a newbie to ROS and know absolutely nothing about it. I am following this tutorial and at 5:33 he enters the following command rosmsg show std_msgs/ and when I do the same in the terminal i get the following error, and I am not sure what the problem is. I have tried looking for an answer in other places but I couldn't find anything. Any help would be much appreciated.
Error:
Unable to load msg [std_msgs/]: Cannot locate message [] in package [std_msgs] with paths [['/opt/ros/noetic/share/std_msgs/msg']]
Answer: The user in the video is pressing the tab key to get a list of possible completions of rosmsg show std_msgs/; you are pressing 'enter' and trying to execute that command, but it's invalid: it wants a specific message to show. (It'd be helpful if the video creator used a program to show which keys they were pressing on screen)
rosmsg show std_msgs/
don't press enter, press tab, then get this output:
std_msgs/Bool std_msgs/Empty std_msgs/Int16 std_msgs/Int8 std_msgs/UInt16 std_msgs/UInt8
std_msgs/Byte std_msgs/Float32 std_msgs/Int16MultiArray std_msgs/Int8MultiArray std_msgs/UInt16MultiArray std_msgs/UInt8MultiArray
std_msgs/ByteMultiArray std_msgs/Float32MultiArray std_msgs/Int32 std_msgs/MultiArrayDimension std_msgs/UInt32
std_msgs/Char std_msgs/Float64 std_msgs/Int32MultiArray std_msgs/MultiArrayLayout std_msgs/UInt32MultiArray
std_msgs/ColorRGBA std_msgs/Float64MultiArray std_msgs/Int64 std_msgs/String std_msgs/UInt64
std_msgs/Duration std_msgs/Header std_msgs/Int64MultiArray std_msgs/Time std_msgs/UInt64MultiArray
I usually do this to get a list of possible messages without using the tab completion:
rosmsg list | grep std_msgs | {
"domain": "robotics.stackexchange",
"id": 38833,
"tags": "ros, ros-noetic"
} |
properly implementing FFT in python problem | Question: I have a signal which is a sinusoidal wave with frequency 1 MHz plus a DC offset at 0.9 V.
The signal is sampled every T; the total length is 5e-6 sec, so for T=3.301028538570082e-09 I have 220 samples.
The sampling frequency is Fs=302935884.4722904 Hz.
The original sinusoid plot is shown below.
As you can see, the sinusoid's amplitude is 0.1 V.
But in the last two FFT plots there is no distinguishing the DC from my 1 MHz signal; also, the amplitudes do not match those in the analog picture.
Where did I go wrong?
The full Python code with the sample table is shown below.
Thanks.
Code:
# -*- coding: utf-8 -*-
"""
Created on Thu Mar 30 13:04:11 2023
@author: Asus
"""
from scipy.fftpack import fft
#import plotly
#import chart_studio.plotly as py
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.widgets import Cursor
#%matplotlib qt
dataset_fft=pd.read_table("sinus_1mhz.txt")
array_fft=dataset_fft.values
Ts=array_fft[4][0]-array_fft[3][0];
Fs=1/Ts #Hz
L=np.size(array_fft)
freq_vec=Fs*np.arange(0,1,1/220)
L=np.size(freq_vec)
fft_y=fft(array_fft[:,1],220)
fig=plt.figure()
ax=fig.subplots()
ax.grid()
cursor=Cursor(ax, horizOn=True,vertOn=True,useblit=True,color='r',linewidth =1)
#ax.plot(array_fft[:,0],array_fft[:,1])
ax.plot(freq_vec,abs(fft_y)/220)
plt.show()
sample table:
time V(n020)
0.000000000000000e+000 8.924428e-001
3.301028538570077e-009 8.922886e-001
6.602057077140153e-009 8.920012e-001
9.903085615710230e-009 8.915803e-001
1.320411415428031e-008 8.910261e-001
1.650514269285038e-008 8.903385e-001
1.980617123142046e-008 8.895176e-001
2.310719976999053e-008 8.885633e-001
3.310719976999053e-008 8.853104e-001
4.310719976999053e-008 8.813376e-001
5.310719976999053e-008 8.768258e-001
6.310719976999053e-008 8.719212e-001
7.310719976999052e-008 8.667433e-001
8.310719976999052e-008 8.613904e-001
9.310719976999052e-008 8.559445e-001
1.031071997699905e-007 8.504745e-001
1.231071997699905e-007 8.396893e-001
1.431071997699905e-007 8.294162e-001
1.631071997699906e-007 8.199491e-001
1.831071997699906e-007 8.115207e-001
2.131071997699906e-007 8.012237e-001
2.431071997699906e-007 7.941512e-001
2.731071997699906e-007 7.906163e-001
3.031071997699907e-007 7.907807e-001
3.331071997699907e-007 7.946613e-001
3.631071997699907e-007 8.021348e-001
3.931071997699908e-007 8.129446e-001
4.231071997699908e-007 8.267109e-001
4.531071997699908e-007 8.429437e-001
4.831071997699908e-007 8.610607e-001
5.131071997699908e-007 8.804088e-001
5.331071997699908e-007 8.936476e-001
5.631071997699909e-007 9.134864e-001
5.931071997699909e-007 9.326717e-001
6.231071997699909e-007 9.505267e-001
6.531071997699910e-007 9.664334e-001
6.831071997699910e-007 9.798563e-001
7.131071997699910e-007 9.903588e-001
7.431071997699911e-007 9.976145e-001
7.731071997699911e-007 1.001411e+000
8.031071997699911e-007 1.001653e+000
8.331071997699912e-007 9.983562e-001
8.531071997699912e-007 9.942504e-001
8.731071997699912e-007 9.886890e-001
8.931071997699912e-007 9.817557e-001
9.131071997699912e-007 9.735535e-001
9.331071997699913e-007 9.642041e-001
9.531071997699913e-007 9.538467e-001
9.731071997699911e-007 9.426368e-001
9.931071997699909e-007 9.307446e-001
1.013107199769991e-006 9.183531e-001
1.033107199769991e-006 9.056551e-001
1.053107199769990e-006 8.928512e-001
1.073107199769990e-006 8.801460e-001
1.093107199769990e-006 8.677447e-001
1.113107199769990e-006 8.558499e-001
1.133107199769990e-006 8.446571e-001
1.153107199769989e-006 8.343519e-001
1.173107199769989e-006 8.251061e-001
1.203107199769989e-006 8.135566e-001
1.233107199769989e-006 8.051701e-001
1.263107199769988e-006 8.002657e-001
1.293107199769988e-006 7.990336e-001
1.323107199769988e-006 8.015288e-001
1.353107199769987e-006 8.076692e-001
1.383107199769987e-006 8.172396e-001
1.413107199769987e-006 8.298985e-001
1.443107199769987e-006 8.451898e-001
1.473107199769986e-006 8.625590e-001
1.503107199769986e-006 8.813735e-001
1.523107199769986e-006 8.943822e-001
1.543107199769986e-006 9.075180e-001
1.573107199769985e-006 9.269971e-001
1.603107199769985e-006 9.455855e-001
1.633107199769985e-006 9.626255e-001
1.663107199769985e-006 9.775270e-001
1.693107199769984e-006 9.897886e-001
1.723107199769984e-006 9.990119e-001
1.753107199769984e-006 1.004910e+000
1.783107199769983e-006 1.007312e+000
1.813107199769983e-006 1.006159e+000
1.843107199769983e-006 1.001504e+000
1.873107199769983e-006 9.935084e-001
1.903107199769982e-006 9.824341e-001
1.933107199769982e-006 9.686427e-001
1.963107199769982e-006 9.525865e-001
1.993107199769981e-006 9.348007e-001
2.023107199769981e-006 9.158893e-001
2.053107199769981e-006 8.965079e-001
2.083107199769981e-006 8.773422e-001
2.113107199769980e-006 8.590813e-001
2.143107199769980e-006 8.423911e-001
2.173107199769980e-006 8.278856e-001
2.203107199769979e-006 8.161016e-001
2.233107199769979e-006 8.074761e-001
2.263107199769979e-006 8.023291e-001
2.293107199769978e-006 8.008513e-001
2.323107199769978e-006 8.030988e-001
2.353107199769978e-006 8.089899e-001
2.373107199769978e-006 8.148408e-001
2.393107199769978e-006 8.221229e-001
2.413107199769977e-006 8.307174e-001
2.433107199769977e-006 8.404839e-001
2.453107199769977e-006 8.512621e-001
2.473107199769977e-006 8.628752e-001
2.493107199769977e-006 8.751321e-001
2.513107199769976e-006 8.878315e-001
2.533107199769976e-006 9.007647e-001
2.553107199769976e-006 9.137201e-001
2.573107199769976e-006 9.264866e-001
2.593107199769976e-006 9.388579e-001
2.613107199769975e-006 9.506359e-001
2.633107199769975e-006 9.616345e-001
2.653107199769975e-006 9.716824e-001
2.673107199769975e-006 9.806260e-001
2.703107199769975e-006 9.916830e-001
2.733107199769974e-006 9.995958e-001
2.763107199769974e-006 1.004115e+000
2.793107199769974e-006 1.005104e+000
2.823107199769973e-006 1.002543e+000
2.853107199769973e-006 9.965196e-001
2.883107199769973e-006 9.872273e-001
2.913107199769973e-006 9.749626e-001
2.943107199769972e-006 9.601180e-001
2.973107199769972e-006 9.431753e-001
3.003107199769972e-006 9.246952e-001
3.033107199769971e-006 9.053026e-001
3.063107199769971e-006 8.856674e-001
3.093107199769971e-006 8.664808e-001
3.123107199769971e-006 8.484289e-001
3.153107199769970e-006 8.321643e-001
3.183107199769970e-006 8.182788e-001
3.213107199769970e-006 8.072786e-001
3.243107199769969e-006 7.995634e-001
3.273107199769969e-006 7.954113e-001
3.303107199769969e-006 7.949686e-001
3.333107199769969e-006 7.982451e-001
3.353107199769968e-006 8.024388e-001
3.373107199769968e-006 8.081621e-001
3.393107199769968e-006 8.153199e-001
3.413107199769968e-006 8.237936e-001
3.423107199769968e-006 8.284810e-001
3.433107199769968e-006 8.334432e-001
3.443107199769967e-006 8.386596e-001
3.453107199769967e-006 8.441087e-001
3.463107199769967e-006 8.497679e-001
3.473107199769967e-006 8.556136e-001
3.483107199769967e-006 8.616217e-001
3.493107199769967e-006 8.677673e-001
3.503107199769967e-006 8.740249e-001
3.513107199769967e-006 8.803685e-001
3.523107199769967e-006 8.867720e-001
3.533107199769967e-006 8.932088e-001
3.543107199769967e-006 8.996524e-001
3.553107199769966e-006 9.060764e-001
3.563107199769966e-006 9.124544e-001
3.573107199769966e-006 9.187604e-001
3.593107199769966e-006 9.310538e-001
3.613107199769966e-006 9.427583e-001
3.633107199769966e-006 9.536871e-001
3.663107199769965e-006 9.682541e-001
3.693107199769965e-006 9.801879e-001
3.723107199769965e-006 9.890857e-001
3.753107199769965e-006 9.946566e-001
3.783107199769964e-006 9.967242e-001
3.813107199769964e-006 9.952271e-001
3.843107199769964e-006 9.902158e-001
3.873107199769964e-006 9.818498e-001
3.903107199769963e-006 9.703925e-001
3.933107199769963e-006 9.562072e-001
3.963107199769963e-006 9.397500e-001
3.993107199769963e-006 9.215599e-001
4.023107199769962e-006 9.022462e-001
4.053107199769962e-006 8.824695e-001
4.083107199769962e-006 8.629198e-001
4.113107199769961e-006 8.442901e-001
4.143107199769961e-006 8.272491e-001
4.173107199769961e-006 8.124120e-001
4.203107199769961e-006 8.003163e-001
4.233107199769960e-006 7.913983e-001
4.263107199769960e-006 7.859771e-001
4.293107199769960e-006 7.842424e-001
4.323107199769959e-006 7.862490e-001
4.353107199769959e-006 7.919157e-001
4.373107199769959e-006 7.976262e-001
4.393107199769959e-006 8.047760e-001
4.413107199769959e-006 8.132467e-001
4.423107199769959e-006 8.179345e-001
4.433107199769958e-006 8.228985e-001
4.443107199769958e-006 8.281181e-001
4.453107199769958e-006 8.335719e-001
4.463107199769958e-006 8.392373e-001
4.473107199769958e-006 8.450908e-001
4.483107199769958e-006 8.511083e-001
4.493107199769958e-006 8.572647e-001
4.503107199769958e-006 8.635347e-001
4.513107199769958e-006 8.698925e-001
4.523107199769958e-006 8.763117e-001
4.533107199769957e-006 8.827658e-001
4.543107199769957e-006 8.892283e-001
4.553107199769957e-006 8.956727e-001
4.563107199769957e-006 9.020725e-001
4.573107199769957e-006 9.084017e-001
4.583107199769957e-006 9.146345e-001
4.603107199769957e-006 9.267105e-001
4.623107199769957e-006 9.381067e-001
4.643107199769956e-006 9.486422e-001
4.673107199769956e-006 9.624765e-001
4.703107199769956e-006 9.735320e-001
4.733107199769956e-006 9.814385e-001
4.763107199769955e-006 9.859397e-001
4.793107199769955e-006 9.868951e-001
4.823107199769955e-006 9.842795e-001
4.853107199769954e-006 9.781802e-001
4.883107199769954e-006 9.687927e-001
4.913107199769954e-006 9.564171e-001
4.943107199769954e-006 9.414513e-001
4.973107199769953e-006 9.243839e-001
4.993107199769953e-006 9.121249e-001
5.000000000000000e-006 9.077891e-001
Answer: The amplitudes have been distorted by the spectral leakage of the strong DC tone.
The "truth" for the expected levels on a dB scale (convenient for magnitude on spectrum plots) assuming the DC is 0.9V and the AC is 0.1 V peak would be:
DC: $20\log_{10}(0.9) = -0.92$ dB
AC (each of the two tones): $20\log_{10}(V_p/2) = 20\log_{10}(0.1/2) = -26$ dB
The DC tone over the finite duration of the FFT will result in a Sinc in frequency with the main lobe going to a null at $1/T$ where $T$ is the duration of the capture. Thus we can see how the subsequent side-lobes which only roll-off at a rate of $1/f$ could interfere (either swamp out, or modify the level of) higher frequency tones.
A couple suggestions can improve the visibility when using the FFT and plotting the magnitude spectrum:
Plot magnitude on a dB scale.
Zero pad the FFT to interpolate more samples in frequency (doesn't add any more info but can visually fill in details we don't otherwise see).
Increase frequency resolution by increasing the time duration of the signal (this will decrease the width of the main lobe of the Sinc, and therefore reduce all the sidelobes as well).
Use windowing to significantly reduce the spectral leakage sidelobes; this comes at the expense of decreased frequency resolution (a wider main lobe), but given the benefit of significantly smaller side-lobes, this is in most cases a good trade to make.
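Those suggestions can be sketched in Python (a sketch using a synthetic 0.9 V DC plus 0.1 V, 1 MHz signal with an assumed sample rate, not the OP's data file):

```python
import numpy as np

fs = 100e6                      # assumed sample rate (illustrative, not the OP's)
n = 4096
t = np.arange(n) / fs
x = 0.9 + 0.1 * np.sin(2 * np.pi * 1e6 * t)   # DC offset + 1 MHz tone

w = np.kaiser(n, 6.0)           # Kaiser window, beta = 6
cg = w.sum()                    # coherent gain of the window
nfft = 8 * n                    # zero-pad 8x to interpolate the spectrum
X = np.fft.rfft(x * w, nfft) / cg
mag_db = 20 * np.log10(np.abs(X) + 1e-30)
freqs = np.fft.rfftfreq(nfft, d=1 / fs)

# Expected levels: DC near 20*log10(0.9) ~ -0.92 dB,
# the tone near 20*log10(0.1/2) ~ -26 dB on the single-sided plot.
```

Dividing by the window's coherent gain is what restores the correct absolute levels after windowing.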
Below I show the result for the OP's waveform with plotting on a dB scale and zero padding to more samples, zooming in on the area of interest:
We think we see evidence of other frequencies (many in addition to the OP's 1 MHz), but this is all just spectral leakage of the DC tone! Below I plot the result when the signal is just a constant 0.9V DC using the same processing as above. We clearly see from this that our signal of interest is completely swamped and obscured by the spectral leakage:
The same plot after windowing with a Kaiser window (using $\beta=6$) and then properly compensating for the coherent gain of the window results in a much better picture of the spectral content. Here we see the accurate levels anticipated for both the DC and the sinusoidal signal, and it is also revealed that the actual frequency is slightly less than 1 MHz (which we can also make out in the OP's time-domain plot: the cycle has not completed at 5e-6 s, as we would otherwise expect with a true 1 MHz frequency).
And for comparison with the above, the processing of the DC-only signal when properly windowed appears as in the plot below, in which case we see that the additional spectral artifacts in the plot above are the other signal present in addition to the DC:
"domain": "dsp.stackexchange",
"id": 11912,
"tags": "fft, python"
} |
Two-phase locks: why is it better? | Question: I'm reading Arpaci's Operating Systems: Three Easy Pieces, the chapter on Locks.
At the end of the chapter, they present Two-phase locks (section 28.16). They say
A two-phase lock realizes that spinning can be useful, particularly if the lock is about to be released. So in the first phase, the lock spins for a while, hoping that it can acquire the lock.
I understand this may only be useful in a multiprocessor environment, is that right?
Also, it seems a little arbitrary to me to wait the first time and then go to sleep. I mean, I don't see why this would be such a great improvement over going directly to sleep in the case the lock is being held.
Is there anything I'm missing?
Thanks in advance!
Below is the whole paragraph on Two-phase locks:
One final note: the Linux approach has the flavor of an old approach that has been used on and off for years, going at least as far back to Dahm Locks in the early 1960’s [M82], and is now referred to as a two-phase lock. A two-phase lock realizes that spinning can be useful, particularly if the lock is about to be released. So in the first phase, the lock spins for a while, hoping that it can acquire the lock.
However, if the lock is not acquired during the first spin phase, a second phase is entered, where the caller is put to sleep, and only woken up when the lock becomes free later. The Linux lock above is a form of such a lock, but it only spins once; a generalization of this could spin in a loop for a fixed amount of time before using futex support to sleep.
Two-phase locks are yet another instance of a hybrid approach, where combining two good ideas may indeed yield a better one. Of course, whether it does depends strongly on many things, including the hardware environment, number of threads, and other workload details. As always, making a single general-purpose lock, good for all possible use cases, is quite a challenge.
Answer: To answer your first question: Spinlocks are only useful in a multiprocessing environment.
Spinning is waiting. If you're waiting for a lock to be released, and it must occur on the current CPU because there are no other CPUs, then a context switch must occur for the lock to be released. It follows that the current thread should just sleep, because it will need to anyway.
Of course, in a single-CPU environment, you can still spin-wait for other types of resource, such as a hardware device.
As Joppy indicated in the comment, putting a thread to sleep is a much heavier-weight operation than sitting in a loop. Whether or not it's worth spinning depends on a bunch of things, such as how expensive it is to sleep and context switch, and how long the lock is likely to be held.
There is a huge design space here. The default behaviour of Solaris kernel locks, for example, is to spin if the owner is running on another CPU, and sleep otherwise, the theory being that if the owner is making progress, it's more likely that the lock may be released soon. As a nice bonus, this automatically does the "right" thing on a single CPU. | {
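As a toy illustration of the spin-then-sleep idea (Python threads, not a real futex-based implementation; the spin count is arbitrary):

```python
import threading

def two_phase_acquire(lock, spin_limit=1000):
    # Phase 1: spin, betting that the current holder releases the lock soon.
    for _ in range(spin_limit):
        if lock.acquire(blocking=False):
            return
    # Phase 2: stop burning CPU and block; the OS wakes us when it's free.
    lock.acquire()
```

A real implementation would use an atomic compare-and-swap for the spin phase and a futex-style wait for the sleep phase; the point here is only the two-phase structure.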
"domain": "cs.stackexchange",
"id": 16994,
"tags": "concurrency"
} |
Madden-Julian Oscillation (MJO) - How to interpret the index? | Question: How can the Madden-Julian Oscillation (MJO) index be interpreted?
Let's suppose I have an MJO index value of 0.6 on a given day; what does this mean?
Does 0.6 represent a weak MJO?
Data from https://www.esrl.noaa.gov/psd/mjo/mjoindex/
Thanks
Answer: I prefer to use the BOM MJO index and the explanation provided over there -
When the index is within the centre circle the MJO is considered weak, meaning it is difficult to discern using the RMM methods. Outside of this circle the index is stronger and will usually move in an anti-clockwise direction as the MJO moves from west to east. For convenience, we define 8 different MJO phases in this diagram.
So in your case the signal value is 0.6, which means it is fairly weak in amplitude, as it is inside the circle. You also need to mention the phase from the MJO phase diagram. There are eight phases:
Phase 1 & 8 - Western Hemisphere And Africa
Phase 2 & 3 - Indian Ocean
Phase 4 & 5 - Maritime Continent
Phase 6 & 7 - Western Pacific.
Currently the signal is a weak one as seen in this phase diagram MJO Phase diagram
When the MJO signal is strong, its amplitude will be greater than 1 and the contour line will be outside the circle. It should be noted that the MJO is an empirical index consisting of the 850 hPa winds, OLR and 200 hPa winds.
MJO passage through phase 6 and 7 is always of global interest as the impact can be of planetary scale. Usually El Ninos are preceded by Westerly Wind Bursts and the forcing factor can be a MJO passage through phase 6 and 7.
One can look at the raw data of the signal here - RMM Index text. This provides the amplitude of the signal as well as the phase of the MJO.
Another version of the same can be seen here - MJO RMM index | {
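In that text file the amplitude is just the vector magnitude of the two RMM components, so a 0.6 reading is a point inside the unit circle (a sketch; the RMM1/RMM2 column names are assumed from the file format):

```python
import math

def mjo_amplitude(rmm1, rmm2):
    """RMM amplitude; values below 1 fall inside the weak-MJO circle."""
    return math.hypot(rmm1, rmm2)

# The phase (1-8) is then read off from which 45-degree octant the
# (RMM1, RMM2) point falls in on the Wheeler-Hendon phase diagram.
```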
"domain": "earthscience.stackexchange",
"id": 1726,
"tags": "meteorology, climate"
} |
Copying sections of one picture to another | Question: I have two solutions (in Jython) to take two pictures and copy and paste alternate sections of one picture to the other, like so:
for x in range (0,w):
for y in range (0,h):
sourcePx=getPixel(pic1,x,y)
targetPx=getPixel(pic2,x,y)
if (y>=0 and y<20) or (y>=40 and y<60)or (y>=80 and y<=100):
setColor(targetPx, getColor(sourcePx))
repaint (pic2)
for y in range(0,h):
leaf = (y>=0 and y<20) or (y>=40 and y<60) or (y>=80 and y<=100)
if leaf:
for x in range(width):
sourcePx = getPixel(pic1, x, y)
targetPx = getPixel(pic2, x, y)
setColor(targetPx, getColor(sourcePx))
repaint(pic2)
I have been trying to work this out and have two questions about this:
Which is the more efficient way to loop through the pixels and why?
Does it make any difference to efficiency (and if so how), creating the variable leaf and then using that in the if statement?
Answer: The second one is more efficient, simply because in the first one you are evaluating the if condition w * h times, but in the second you are evaluating it only h times.
It can also be made more readable and efficient just by using the following, which eliminates the leaf temporary. Readability doesn't need explaining.
for y in range(0,h):
if (20 > y >= 0) or (60 > y >= 40)or (100 >= y >= 80):
for x in range(width):
sourcePx = getPixel(pic1, x, y)
targetPx = getPixel(pic2, x, y)
setColor(targetPx, getColor(sourcePx))
repaint(pic2)
Update 1
As limelights has suggested, use of xrange is better than use of range in Python2. If your h or w are big then using range would return a list which occupies memory while use of xrange returns a generator which is much more memory efficient. Performance might suffer a bit but in most cases it is worth it. | {
"domain": "codereview.stackexchange",
"id": 4241,
"tags": "image, jython"
} |
Finding the Work done | Question: Find the work done for an object to slide down an inclined plane against the frictional force.
The answer my book shows is : $$ W = mg ({\mu}_k \cos \theta - \sin \theta) S $$ where S = displacement done by the object down the inclined plane and the other symbols have their usual meaning.
Well, when I saw the question first time, I thought we may find the Total Energy of the object at initial and final state and then find the change in Energies (Total) which will be equal to the Work done by the object. But, following this approach I came to the following expression (in bold) :
Total Energy at starting point, A = P.E = mgh
Total Energy at final point, B = K.E = $ \cfrac{1}{2} mv^2 = \cfrac{1}{2} m (2gS) = mgS$
So, Change in Energy of the object in sliding down the inclined plane from A to B is:
$$\textbf{Change in Energy} = mgS - mgh $$
The above expression looks far different from the answer given by book. Is my approach somewhere wrong? If it is right, then how can I simplify it further to get the answer similar to the book's one. Any help will be greatly appreciated.
Answer: Well first off, I think you forgot that $S$ was the displacement along the incline. The object will slide down from $y=h$ to $y=h-(S\sin(\theta))$ (change in vertical displacement).
The only work done here is by friction and by gravity. We know gravity is doing positive work and that friction is doing negative work.
The work done by friction is simply the friction force exerted on the object by the incline (normal force), given by
$$W_f=-mg\cos(\theta)\mu_k S$$
The work done by gravity on the object is simply given by
$$\Delta\,PE = mg\left(y-\left(y-(S\sin(\theta))\right)\right) = mg(S\sin(\theta))$$
Add these two together and voilà,
$$mgS\left(\sin(\theta)-\cos(\theta)\mu_k\right)$$
But this is the work done on the object, so to get the work done by the object just multiply the whole thing by $-1$. | {
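A quick numeric check of this bookkeeping (illustrative values, not from the original problem):

```python
import math

m, g, S = 2.0, 9.8, 3.0        # kg, m/s^2, m (illustrative values)
theta = math.radians(30)
mu_k = 0.2

w_gravity = m * g * S * math.sin(theta)            # positive work by gravity
w_friction = -m * g * math.cos(theta) * mu_k * S   # negative work by friction
w_on_object = w_gravity + w_friction

# Should match m g S (sin(theta) - mu_k cos(theta)) term by term.
closed_form = m * g * S * (math.sin(theta) - mu_k * math.cos(theta))
```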
"domain": "physics.stackexchange",
"id": 18073,
"tags": "homework-and-exercises, newtonian-mechanics, work"
} |
Is the Source Coding Theorem straightforward for uniformly distributed random variables? | Question: Shannon's source coding theorem states the following:
$n$ i.i.d. random variables $X_1,\dots,X_n$ each with entropy H(x) can be compressed into more than n⋅H(x) bits with negligible risk of information loss, as n→∞; conversely if they are compressed into fewer than n⋅H(x) bits it is virtually certain that information will be lost.
I was thinking about the easy case when the $X_1,\dots,X_n$ are uniformly distributed on the integers $[1,n]$. Now suppose I want to transmit a value of $(X_1,\dots,X_n)$.
Clearly, each $X_i$ has entropy $\log n$ and thus, if we were to send $o(n\log n)$ bits, we would lose information about some of the $X_i$, with high probability $1 - o(1)$.
The way to argue the theorem for this case is that $\log n$ bits suffice to reconstruct any $X_i$, and conversely, if you receive, for example, only $(1/4)n\log n$ bits, then you would need to correctly guess the remaining $(3/4)n\log n$ bits, which happens with probability of only $2^{-(3/4)n\log n}$.
Is my reasoning correct?
Answer: The theorem you quote is not stated formally, so it is in fact impossible to prove it. That said, the idea behind the source coding theorem is that a variable of entropy $H(X)$ behaves (roughly) as if it was uniformly distributed on $2^{H(X)}$ values. When the distribution of $X$ is uniform, the theorem becomes trivial, once stated formally. So it's a good idea for you to look up a formal statement of the theorem, and then verify that it indeed holds for uniformly distributed random variables. | {
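The entropy claim is easy to check numerically: a uniform variable over $n$ values has entropy exactly $\log_2 n$ bits (a quick sketch):

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 8
uniform = [1 / n] * n
# H(uniform over 8 values) = log2(8) = 3 bits
```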
"domain": "cs.stackexchange",
"id": 3547,
"tags": "probability-theory, information-theory, coding-theory"
} |
Density of dark matter along the galaxy | Question: I was reading these 2 interesting articles about dark matter inside the solar system:
Does Dark Matter affect the motion of the Solar System?
The Incredible Dark Matter Mystery: Why Astronomers Say it is
Missing in Action
But I can't figure if:
A) Dark matter doesn't affect the planetary motion because it can't radiate, and thus has large orbits and is therefore less dense in the inner region of galaxies and denser in the outer regions. Or,
B) Dark matter has an equally low density across the galaxy (compared with the higher density of baryonic mass inside the solar system), and its effects are only important if we sum all the matter in the vast space between stars.
I'm confused because I tend to think that if the dark matter were in a kind of shell, mainly outside the galaxy, this would help to spread the stars far from the galaxy center instead of containing everything in its place in the rotation. I can't find a distribution graph, by the way.
Excuse my simple English, and thanks in advance for your help!
Edit: I had heard a wrong idea in some documentaries, or I misinterpreted it, about a greater DM density in the outside region of the galaxies. Thanks to @KyleOman for pointing out DM is denser in the center.
Answer: Dark matter has a small/negligible influence in the Solar System because there isn't all that much of it in the Solar System, compared to say the mass of the Sun.
The NFW profile is the current default density profile for DM "haloes" (spherical-ish self-gravitating structures, such as the one in which the Milky Way resides). This is a fit to the density as a function of radius for dark matter haloes in cosmological simulations, and in a very broad sense it seems to work pretty well in the real Universe, most of the time (though see below about "cores"). The density is high in the centre, decreasing roughly as $r^{-1}$ in the inner parts and steepening to $r^{-3}$ further out. The formula will give you infinite density at zero radius - clearly this is unphysical, but the point is that the central density rises sharply toward the centre to some high value. Plugging in parameters to the NFW profile for a Milky Way sized galaxy and evaluating the density at $8\,{\rm kpc}$ (distance of the Sun from the centre of the galaxy), I get about $8\times10^{6}\,{\rm M}_\odot\,{\rm kpc^{-3}}$, or about $9\times10^{-19}\,{\rm M}_\odot\,{\rm AU}^{-3}$. The volume of the Solar System is say about $30000\,{\rm AU}^3$, so the DM is outgunned in mass by the Sun by a factor of $4\times10^{13}$. In perhaps more familiar units, my estimate gives $5\times10^{16}\,{\rm kg}$ of DM - compare that with some Solar System bodies and you'll find that it's something along the lines of a medium asteroid. And the DM is diffuse all over the Solar System, so it's even more insignificant than a medium asteroid, gravitationally speaking.
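For reference, the NFW form used for that kind of estimate is $\rho(r) = \rho_0 / [(r/r_s)(1 + r/r_s)^2]$ (a sketch with placeholder parameters, not a fit to the Milky Way):

```python
def nfw_density(r, rho0=1.0, rs=20.0):
    """NFW profile: rho0 / ((r/rs) * (1 + r/rs)**2). Units are placeholders."""
    x = r / rs
    return rho0 / (x * (1 + x) ** 2)
```

Well inside the scale radius rs the log-slope is near -1; far outside it approaches -3, so the density drops off steeply in the outskirts.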
So why is it such a big deal? Because space is big, all that interstellar space has similarly puny densities of DM, but there's so much space that it adds up to a lot of mass - typical estimates say that there should be about $10-100$ times more DM mass in the Milky Way than star and gas mass.
There are other density profiles that are proposed, e.g. Einasto profile, Di Cintio+2014 profile, and a handful of others. Qualitatively they're all fairly similar (except for "cored" profiles, for which I'll point you to a wiki article and, shamelessly, to my own work).
Just to cover all the points in your question, the DM distribution is certainly not a shell outside the galaxy - more like a cloud (denser in the middle) inside which the galaxy lives. And it must be diffuse - it cannot collapse to form dense structures like a DM star or a DM planet (provided something like the standard $\Lambda$CDM theory applies).
Please let me know if you'd like anything clarified, or if you have any followup questions post them and poke me, I'd be happy to have a look :) | {
"domain": "physics.stackexchange",
"id": 23739,
"tags": "dark-matter, galaxy-rotation-curve"
} |
Prime generator SPOJ problem in Python 3 | Question: I am trying to solve an SPOJ problem: print all prime numbers in a given range (as large as \$10^9\$). I have used the Sieve of Eratosthenes algorithm. But it is still slow when input is in range of \$10^4\$.
import math
no_of_cases = int(input())
for i in range(no_of_cases):
x = input().split(" ")
a = int(x[0])
b = int(x[1])
lis = set([p*i for p in range(2,b+1) for i in range(2,b+1)])
print (sorted(set(range(a,b+1))- lis))
I have already asked a similar question over here.
Answer: I'll convert your program to a primes_below function. (I don't take into account the lower bound.)
After doing this I got:
def primes_below(n):
lis = set([p * i for p in range(2, n + 1) for i in range(2, n + 1)])
return sorted(set(range(2, n + 1)) - lis)
First things first, you need a maximum of \$n ^ 2\$ amount of memory.
Why?
If we look at your algorithm what is \$i_{\text{max}}\$ and \$p_{\text{max}}\$?
Both are \$n\$. And so your current largest number is \$n^2\$.
You said:
I also tried using square root but that was throwing error.
This also wouldn't work correctly.
Lets go through every combination if we were to do that with 16.
2 * 2 = 4
2 * 3 = 6
2 * 4 = 8
3 * 2 = 6
3 * 3 = 9
3 * 4 = 12
4 * 2 = 8
4 * 3 = 12
4 * 4 = 16
This means that 4, 6, 8, 9, 12, 16 are not prime. We should know that the primes in this range are 2, 3, 5, 7, 11, 13.
And so 10, 14 and 15 would all be incorrectly reported as prime.
But looking at the above you should be able to see that there is no point on having \$3 * 2\$, \$4 * 2\$, \$4 * 3\$.
This is as you've already done those calculations and so you should start the range for \$i\$ at \$p\$.
And this is true for your current solution.
And so we know that we should at least use:
def primes_below(n):
lis = set([p * i for p in range(2, n + 1) for i in range(p, n + 1)])
return sorted(set(range(2, n + 1)) - lis)
Now we need to remove the creation of numbers greater than (or equal) \$n\$.
Rather than looking at some complex maths, we'll have a look at range.
range(stop)
range(start, stop[, step])
This is a versatile function to create lists containing arithmetic progressions.
It is most often used in for loops. The arguments must be plain integers.
If the step argument is omitted, it defaults to 1.
If the start argument is omitted, it defaults to 0.
The full form returns a list of plain integers [start, start + step, start + 2 * step, ...].
If step is positive, the last element is the largest start + i * step less than stop; if step is negative,
the last element is the smallest start + i * step greater than stop.
step must not be zero (or else ValueError is raised).
This means that you can use this instead of your multiplication!
From the above we know that the first number we use is \$p * p\$, this is as \$i\$ starts as \$p\$.
The largest number we want is also \$n\$.
But we don't want to remove all the numbers that are between \$p^2\$ and \$n\$,
and so we need that step.
That step will also be \$p\$.
Or in Python:
def primes_below(n):
lis = set([i for p in range(2, n) for i in range(p * p, n, p)])
return sorted(set(range(2, n)) - lis)
Before we say all is good, some maths!
What is the largest \$p\$ that will mean the second range contains a value?
If \$p^2 > n\$ then the second range will be empty, and so the largest \$p\$ follows the equation \$p^2 <= n\$.
And so \$p_{\text{max}} = \sqrt{n}\$.
And adding this to the range significantly improves performance.
def primes_below(n):
lis = set([i for p in range(2, int(n ** 0.5) + 1) for i in range(p * p, n, p)])
return sorted(set(range(2, n)) - lis)
This function is roughly ~3800 times faster than the original, at n = 5000.
Now you probably think it's good enough for this challenge, it's pretty fast!
But unfortunately no. Actually I thought I'd broke my machine when I ran it the first time, and I have 32GB of RAM!
I closed every app I had on my machine and the program still broke!
This means that you have to care about the memory and your current method doesn't.
Instead we need to limit the amount of memory to \$n\$.
To do this you can use lis = [True] * n.
And then you want to change an entry to False whenever you come across a multiple.
And you'll want to set indexes 0 and 1 to False at the beginning,
the same way you are at the moment.
This should get you something like:
def primes_below(n):
lis = [True] * n
lis[0] = lis[1] = False
for p in range(2, int(n ** 0.5) + 1):
for i in range(p * p, n, p):
lis[i] = False
return ...
Now you want to be able to return the numbers.
Python has enumerate which will give us the index and the value,
this will allow us to make a comprehension.
And so if the value is True add the index to the output.
Which will result in:
def primes_below(n):
lis = [True] * n
lis[0] = lis[1] = False
for p in range(2, int(n ** 0.5) + 1):
for i in range(p * p, n, p):
lis[i] = False
return (i for i, v in enumerate(lis) if v)
Since you don't want to display some numbers, you'll want to implement a lower bound. I'd do this outside of the function, as then if you ever need the sieve again you'll have a 'pure' one.
This can simply be another comprehension:
numbers = (i for i in primes_below(b + 1) if i >= a)
And will make usage simple:
no_of_cases = int(input())
for _ in range(no_of_cases):
a, b = map(int, input().split(" ")[:2])
    print([i for i in primes_below(b + 1) if i >= a])
This however is still too slow for my liking.
And so to make it roughly 3 times faster, at n = 10 ** 7, you can add a check that p is prime: if it is, remove its multiples as we were doing before; if not, go on to the next number.
This results in:
def primes_below2(n):
lis = [True] * n
lis[0] = lis[1] = False
for p in range(2, int(n ** 0.5) + 1):
if not lis[p]:
continue
for i in range(p * p, n, p):
lis[i] = False
return (i for i, d in enumerate(lis) if d)
This is roughly 11500 times faster than the original, at n = 5000.
And takes about 244 seconds, on my machine, to generate all the primes below 10 ** 9. | {
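As a quick sanity check (my own addition, not part of the original review), the final function can be compared against naive trial division for a small \$n\$:

```python
def primes_below2(n):
    # same sieve as above
    lis = [True] * n
    lis[0] = lis[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if not lis[p]:
            continue
        for i in range(p * p, n, p):
            lis[i] = False
    return (i for i, d in enumerate(lis) if d)

def is_prime(k):
    # naive trial division, used only for checking
    return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

assert list(primes_below2(100)) == [k for k in range(100) if is_prime(k)]
```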
"domain": "codereview.stackexchange",
"id": 19980,
"tags": "python, programming-challenge, python-3.x, primes, time-limit-exceeded"
} |
nao teleop and camera feed | Question:
Hi, I can't seem to get the Brown University Nao driver, nor the alufr (Freiburg) teleop package, to work. I'm using the new Nao H25 model, with the NaoQi library included in the Linux Choregraphe download (I believe it is version 1.10).
[alufr] What I am able to get is odometry and joint information, via running nao_ctrl/nao_sensors.py remotely. When I run nao_ctrl/nao_walker.py, the robot will respond to "stand up" and "joystick control enabled" buttons, but won't actually walk. I'm using a PS3 joystick, and I checked /joy topic and indeed the walk axes (specified in ps3joystick.yaml) are getting published.
[brown] When I run (locally on the robot) ./eyes, I am told "Could not get shared memory" (same issue as on [http://code.google.com/p/brown-ros-pkg/issues/detail?id=7]). When I run control.py, I get SOAP errors of the form "setVolume not a member of tts" and "setStiffness not a member of motion" ...
Thanks for any assistance!
Originally posted by Nick Armstrong-Crews on ROS Answers with karma: 481 on 2011-08-15
Post score: 0
Answer:
OK, got alufr joystick walking to work! I may not have been hitting the right button sequence; it seems the requirement is:
EnableControl button (robot speaks aloud "joystick control enabled")
InitPose button (robot stands up)
Walk around with joystick axes
I also may have been pointing to the wrong version of naoqi-sdk.
I am still having issues cross-compiling ROS for the nao... but I'll open a distinct ticket for that.
Thanks!
Originally posted by Nick Armstrong-Crews with karma: 481 on 2011-08-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 6418,
"tags": "nao"
} |
Relativistic Lorentz force law | Question: If we consider the the relativistic Lorentz force law:
$$\frac{d}{dt} (m\gamma \vec{u})=e(\vec{E}+\vec{u} \times \vec{B})$$
How can we deduce:
$$\frac{d}{dt} (m\gamma c^2)=e \vec{E} \cdot \vec{u}$$
Clearly dotting with $\vec{u}$ will give us the RHS.
Which leaves us:
$$\vec{u} \cdot \frac{d}{dt} (m\gamma \vec{u})=e \vec{u} \cdot \vec{E}$$
Could anyone help explain how to proceed and if this is the correct method?
EDIT: If it helps, here are the notes I'm working through:
http://www.maths.ox.ac.uk/system/files/coursematerial/2012/2393/8/WoodhouseLectures.pdf
Page 86, eq (178): the paragraph underneath states 'The first equation (which follows from the second)'; this is what I'm trying to prove (a warning: the notes are riddled with errors).
Answer: Let's set $c=1$ for simplicity.
Using your observations, it suffices to show that (just combine the second and third equations you write down)
$$
\dot \gamma = \vec u \cdot \frac{d}{dt}(\gamma \vec u).
$$
To prove this, the following facts are useful:
$$
\dot \gamma = \gamma^3\vec u \cdot\dot{\vec u}, \qquad \gamma^2\vec u^2 +1 = \gamma^2.
$$
Now just compute
\begin{align}
\vec u \cdot \frac{d}{dt}(\gamma \vec u)
&=\vec u \cdot (\dot \gamma \vec u + \gamma \dot{\vec u}) \\
&=\vec u \cdot (\gamma^3(\vec u \cdot \dot{\vec u})\vec u + \gamma \dot{\vec u}) \\
&= \gamma \vec u \cdot \dot{\vec u}(\gamma^2 \vec u^2 + 1) \\
&= \gamma^3\vec u \cdot\dot {\vec u} \\
&= \dot \gamma
\end{align} | {
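The two auxiliary facts, and the final identity, can also be checked symbolically. A small sympy sketch (my own check, with an arbitrary subluminal velocity profile and $c=1$):

```python
import sympy as sp

t = sp.symbols('t')
# an arbitrary smooth velocity with |u| < 1
u = sp.Matrix([sp.sin(t) / 3, sp.cos(t) / 4, sp.Rational(1, 5)])
gamma = 1 / sp.sqrt(1 - u.dot(u))
udot = u.diff(t)

# fact 1: d(gamma)/dt = gamma^3 (u . udot)
assert (gamma.diff(t) - gamma**3 * u.dot(udot)).equals(0)
# fact 2: gamma^2 u^2 + 1 = gamma^2
assert (gamma**2 * u.dot(u) + 1 - gamma**2).equals(0)
# the identity to show: u . d/dt(gamma u) = d(gamma)/dt
assert (u.dot((gamma * u).diff(t)) - gamma.diff(t)).equals(0)
```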
"domain": "physics.stackexchange",
"id": 7105,
"tags": "electromagnetism, special-relativity, forces"
} |
Does the commutator of anything with itself not vanish? | Question: In a quantum mechanics exam one question was to write the commutator of a couple of operators. Everybody got points taken away since they did not write $[Q_i, Q_i] = 0$ for all the operators $Q_i$ in question. They said that they had to require this since there is something in QFT which will not make those commutators vanish.
What are they talking about? Can anything not commute with itself?
Answer: I) Yes, they are probably referring to the fact that a Grassmann-odd operator need not (super)commute with itself. Take e.g. the first-order Grassmann-odd differential operator
$$\tag{1} D~:=~\frac{d}{d\theta}+ \theta\frac{d}{dt}. $$
In eq. (1) $t$ is a Grassmann-even variable and $\theta$ is a Grassmann-odd variable, which (super)commute
$$\tag{2} [t,t]_{SC}~=~0, \qquad [t,\theta]_{SC}~=~0, \qquad [\theta,\theta]_{SC}~=~2\theta^2~=~0.$$
In eq. (2) the bracket $[\cdot,\cdot]_{SC}$ denotes the super-commutator
$$\tag{3} [A,B]_{SC}~:=~ AB-(-1)^{|A|~|B|}BA. $$
The supercommutator is the appropriate$^1$ generalization of the notion of a commutator to superalgebras.
The super-commutator of $D$ operator (1) with itself is not zero:
$$\tag{4} [D,D]_{SC}~=~ 2D^2~=~2\frac{d}{dt}\neq 0 .$$
II) More generally, the fact that a Grassmann-odd operator (super)commutes with itself is a non-trivial condition, which encodes non-trivial information about the theory. This is used e.g. in supersymmetry and in BRST formulations.
On the other hand, the super-commutator of an arbitrary Grassmann-even operator with itself is automatically zero.
--
$^1$ One may wonder why one uses the supercommutator $[\cdot,\cdot]_{SC}$ rather than the ordinary commutator
$$\tag{5} [A,B]_{C}~:=~ AB-BA $$
in superalgebras? The commutator (5) satisfies a Jacobi identity, and the supercommutator (3) satisfies a super Jacobi identity, so that's a tie. :)
One physical motivation comes from canonical quantization: As is well-known, quantum mechanically, two Grassmann-graded operators may fail to commute or fail to supercommute. However classically ($\equiv$ when Planck's constant $\hbar$ is zero), for two Grassmann-graded functions $f$ and $g$, one would like that the appropriate bracket generalization $[f,g]$ vanishes. To ensure this one has to use the supercommutator $[\cdot,\cdot]_{SC}$ rather than the commutator $[\cdot,\cdot]_{C}$. From this perspective, the canonical anticommutator relation (CAR) for fermions is merely a quite natural quantum deformation of a classical supercommuting description. Moreover, the supercommutator (3) [as opposed to the commutator (5)] provides a unified description of CCR for bosons and CAR for fermions. See also e.g. this Phys.SE post and links therein. | {
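To make eq. (4) concrete: since $\theta^2=0$, any superfunction expands as $f(t,\theta)=a(t)+\theta\, b(t)$, so it can be represented by the pair $(a,b)$. The operator (1) then acts as $D:(a,b)\mapsto(b,\dot a)$, and applying it twice gives $(\dot a,\dot b)$, i.e. $D^2 = d/dt$. A tiny sympy check (my own illustration):

```python
import sympy as sp

t = sp.symbols('t')

def D(f):
    # D = d/dtheta + theta*d/dt acting on f = (a, b), i.e. a(t) + theta*b(t)
    a, b = f
    return (b, sp.diff(a, t))

f = (t**2, sp.sin(t))           # an arbitrary superfunction
assert D(f) == (sp.sin(t), 2*t)
# D^2 f = df/dt, so [D, D]_SC = 2*D^2 = 2*d/dt is nonzero
assert D(D(f)) == (2*t, sp.cos(t))
```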
"domain": "physics.stackexchange",
"id": 16814,
"tags": "operators, supersymmetry, commutator, grassmann-numbers, superalgebra"
} |
Distributed predicate computation on event stream | Question: My question is actually a request for papers, articles, texts or books on the problem that I'm trying to solve on my work.
I'm working on a program that computes a predicate value (true or false) for a given object in a distributed system in which there is a stream of events that can change the object's attributes and, consequentially, the predicate value. Whenever the predicate value changes, the program must send a notification about this change.
For example, consider that there is an object A which has an attribute called name and consider that there is a predicate P which is true when the object's name is equal to Jhon.
Each event in the stream has a timestamp and a value for the attribute name. So consider the following sequence of events:
e1 = { name: Jhon, timestamp: 1 }
e2 = { name: Jhon, timestamp: 2 }
e3 = { name: Peter, timestamp: 3 }
e4 = { name: Doug, timestamp: 4 }
e5 = { name: Jhon, timestamp: 5 }
In this problem the events have a total order relation: If you have two events you always can say which one is the oldest of them.
Now, the events don't necessarily show up in the stream in the correct order according to its timestamp. Each event is unique to its timestamp, so there are no two or more events with the same timestamp for the same object. Also, the timestamps don't necessarily form a sequence that always increase by one: if we see e1 with timestamp 1 and e3 with timestamp 3, it doesn't imply the existence of e2 with timestamp 2.
There is no guarantee that all events will be received or when they will be received. It's part of the problem that we only know about the existence of the events that we see in the stream.
The real scenario is even worse: there are multiple computers processing this stream of events in parallel. However, for simplicity, I'll continue this example considering only one computer.
If the events arrive and are processed in the order described above, then the notifications sent should be:
P(A) = true when e1 arrives
P(A) = false when e3 arrives
P(A) = true when e5 arrives.
That is the correct sequence of notifications because it respects the timestamp order.
Now, imagine that the computer receives the events in the following order:
e1, e5, e2, e4, e3
A naive algorithm which doesn't consider the event's timestamp would send an incorrect sequence of notifications:
P(A) = true when e1 arrives
P(A) = false when e4 arrives
The algorithm that I'm working on considers the timestamps and infers when a notification should have been sent but was not. So when e3 arrives it will notice that the notification P(A) = true for e5 was not sent.
This feels a bit like reinventing the wheel, though I'm not aware of any reading about this problem.
I would like some references to this problem or to something similar, like some papers dealing with this kind of problem.
The real problem is quite more complex since it involves storing the predicate $\times$ object state in a database that works as a shared state between the computers processing the stream and I'm talking about thousands of events arriving per second so it's not possible to keep all events stored in some database.
Is there any literature about the problem that I have described? if so, could you give me links to it?
I would like to see a paper or a text that explains an algorithm that solves this problem and it would be even better if such paper provides proofs about the algorithm (e.g. correctness).
If such paper doesn't exist (I actually think that is the case), I would accept an answer that describes an algorithm and provides an argument or a proof about its correctness.
For this algorithm to be correct, it should always send the correct sequence of notifications no matter what order that events arrives.
And the algorithm shouldn't keep all the received events in memory, because the real problem deals with too many events to save in memory or to store in a DB.
It would be reasonable to keep some events in memory, preferably a fixed amount.
Answer: Impossibility result #1: dropped events
The problem cannot be solved in general; there is no way to guarantee that your requirements will be met if some events are dropped (i.e., not received). Consider first this stream:
e1 = { name: Jhon, timestamp: 1 }
e2 = { name: Jhon, timestamp: 4 }
where the algorithm sees both events. Next, consider this stream:
e1' = { name: Jhon, timestamp: 1 }
e2' = { name: Pete, timestamp: 2 }
e3' = { name: Jhon, timestamp: 3 }
e4' = { name: Jhon, timestamp: 4 }
where the algorithm sees only the events e1',e4' (the other events are lost and never received). You might notice that what the algorithm sees in both cases is identical, so its outputs will be identical in both cases. However, the correct answer differs in these two cases, so there is no hope for an algorithm that always produces a correct output. (The correct response in the first case is to produce no notifications; the correct response in the second case is to produce two notifications, one to indicate that the predicate is false after receiving e2', and one to indicate that the predicate is true after receiving e3'.)
It is not clear how to adapt the requirements to deal with this situation. The only plausible solution I can see is to say that the notifications that are produced should depend only on the received events, not on the events that are sent. This is equivalent to specifying that events cannot be dropped.
Impossibility result #2: re-ordered events
You state that you must be able to handle re-ordered events, without storing all events in memory, and with arbitrary re-ordering. However, these requirements are incompatible: that is impossible to achieve. Consider a long sequence of events with timestamps 2,4,6,8,10,12,... At the end of the long sequence of events, if an event with an odd timestamp arrives, the only way to be sure you can handle it correctly is to store the entire history of past events (or past states of the object).
So, you're going to have to relax the requirement about re-ordering as well. Perhaps you're willing to store all events in memory forevermore. (If so, you have a solution.) Perhaps you are willing to impose a bound on re-ordering, e.g., no event will be delayed by more than 10 minutes. (If so, you only have to store history for the past 10 minutes, and everything older can be deleted.) Perhaps something else makes more sense in your particular situation.
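For the bounded-delay relaxation, a sketch of such an algorithm (my own illustration; PredicateNotifier and its methods are invented names, not from any paper) buffers only the events within the delay window and replays them in timestamp order, so late arrivals inside the window still yield the correct notification sequence:

```python
import bisect

class PredicateNotifier:
    """Buffer events for up to `window` time units, replay in timestamp order."""
    def __init__(self, predicate, window):
        self.predicate = predicate
        self.window = window
        self.buffer = []              # (timestamp, value), kept sorted
        self.max_ts = float('-inf')

    def receive(self, timestamp, value):
        self.max_ts = max(self.max_ts, timestamp)
        bisect.insort(self.buffer, (timestamp, value))
        # drop events older than the reordering bound (assumed final)
        while self.buffer and self.buffer[0][0] < self.max_ts - self.window:
            self.buffer.pop(0)
        return self.notifications()

    def notifications(self):
        """Current best-known sequence of predicate changes, in timestamp order."""
        out, last = [], None
        for ts, value in self.buffer:
            p = self.predicate(value)
            if p != last:
                out.append((ts, p))
                last = p
        return out

notifier = PredicateNotifier(lambda name: name == 'Jhon', window=10)
for ts, name in [(1, 'Jhon'), (5, 'Jhon'), (2, 'Jhon'), (4, 'Doug'), (3, 'Peter')]:
    seq = notifier.receive(ts, name)
# after all five events arrive out of order, the sequence matches timestamp order
assert seq == [(1, True), (3, False), (5, True)]
```

Events older than the window are assumed final, so memory stays bounded by the number of events arriving within one window.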
But the one thing that is not an option is to impose all of the strong requirements stated in your question, and require an algorithm that is always correct.
I'm not aware of any literature on this and I don't particularly see any reason to expect there to be any. It's a very specific set of requirements, and it looks to me like the resulting task is either trivial or impossible to solve. Those usually aren't the kind of problems that tend to be studied in the literature. Perhaps you might be interested in persistent data structures, but that's just a fancy way of storing the entire history of events, which you said you want to do; and you don't need a fancy data structure to do that in your particular situation. | {
"domain": "cs.stackexchange",
"id": 16725,
"tags": "algorithms, reference-request, distributed-systems"
} |
The recent results on Hubble constant measurements | Question: In recent news there is this announcement
In the introduction they say:
Distance measurement discrepancy: a $4.4\sigma$ tension on the value of $H_0$
and I understand the discrepancy is with the value derived using the cosmic microwave radiation (CMB). There will be more publications to follow.
These measurements are local, i.e. of the universe as it is now.
By coincidence I was looking up baryogenesis and came upon the Sakharov conditions for generating the asymmetry between baryons and antibaryons, which must have happened by the time the CMB map solidified.
Any of the three necessary conditions that a baryon-generating interaction must satisfy to produce matter and antimatter at different rates: (i) baryon number violation; (ii) C-symmetry and CP-symmetry violation; (iii) interactions out of thermal equilibrium.
iii) seems to me to be relevant to a measurement of the Hubble constant using the CMB data because it would affect what is presumed in the standard cosmology :
The CMB is also used to determine the Hubble constant, where the temperature is analyzed as a function of frequency – a power spectrum – and a best fit analysis is made to constrain the Hubble constant.
If interactions out of thermal equilibrium are responsible for baryogenesis, maybe the assumptions entering the fit should change, and maybe the discrepancy between the two methods tells something about baryogenesis.
As a particle physicist I am not up to astrophysical calculations. Is anybody familiar with this topic who could say whether this is a possible suggestion?
Edit (added):
This link says that the Sakharov conditions are necessary, and the standard model obeys all three:
During a first order electroweak phase transition bubbles of the broken vacuum form in an unbroken phase. The expansion of these bubble walls is shown to lead to considerable departure from thermal equilibrium.
Since Sakharov’s conditions are satisfied, the baryon asymmetry of the Universe could be created during the electroweak phase transition. Unfortunately, it has been shown that the amount of CP violation coupled with the strength of a first order phase transition is not sufficient to create enough baryon asymmetry in the SM
Surely the standard model must be used when estimating the Hubble constant by fits to the CMB, but not the extra baryogenesis mechanism necessary for the observed asymmetry. That mechanism could affect the thermal-equilibrium estimates.
Answer: The Carnegie-Chicago Hubble Program. VIII. An Independent Determination of the Hubble Constant Based on the Tip of the Red Giant Branch. I hope this helps.
https://arxiv.org/pdf/1907.05922.pdf
"domain": "astronomy.stackexchange",
"id": 4298,
"tags": "cosmology, hubble-constant"
} |
ADODB Wrapper Class (Revisited) | Question: I recently posted this question on my implementation of an ADODB Wrapper Class. I realized in my own review that I was missing some very important things, so much so, that I decided it would be worth it to re-write the entire class. Saying that I have done quite a bit of restructuring so I am going to provide an outline of what I have done and why.
Numeric Parameters:
I removed the public properties ParameterNumericScale and ParameterPrecision, as I was not considering the possibility of parameters with varying precision and numeric scale. To address this, I created two functions that automatically calculate the precision and numeric scale for each parameter passed in:
Private Function CalculatePrecision(ByVal Value As Variant) As Byte
CalculatePrecision = CByte(Len(Replace(CStr(Value), ".", vbNullString)))
End Function
Private Function CalculateNumericScale(ByVal Value As Variant) As Byte
If InStr(CStr(Value), ".") = 0 Then Exit Function 'no decimal point: scale is 0
CalculateNumericScale = CByte(Len(Split(CStr(Value), ".")(1)))
End Function
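For quick experimentation outside VBA, the same calculations can be mirrored in Python (hypothetical helper names; note the guard for whole numbers, which have no decimal point to split on, and the stripping of a leading minus sign so it is not counted as a digit):

```python
def calculate_precision(value):
    # total count of digits on both sides of the decimal point
    return len(str(value).lstrip('-').replace('.', ''))

def calculate_numeric_scale(value):
    # digits after the decimal point; 0 for whole numbers
    s = str(value)
    return len(s.split('.')[1]) if '.' in s else 0

assert calculate_precision(123.45) == 5
assert calculate_numeric_scale(123.45) == 2
assert calculate_numeric_scale(100) == 0
```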
ADO Connection Errors Collection:
I opted to pass the Connection.Errors collection alone, instead of the entire Connection Object to each of the sub procedures ValidateConnection and PopulateADOErrorObject:
Private Sub ValidateConnection(ByVal ConnectionErrors As ADODB.Errors)
If ConnectionErrors.Count > 0 Then
If Not this.HasADOError Then PopulateADOErrorObject ConnectionErrors
Dim ADOError As ADODB.Error
Set ADOError = GetError(ConnectionErrors, ConnectionErrors.Count - 1) 'Note: 0 based collection
Err.Raise ADOError.Number, ADOError.Source, ADOError.Description, ADOError.HelpFile, ADOError.HelpContext
End If
End Sub
Bi-Directional Parameters:
Previously, I was only considering the use of Input parameters for a given command, because there is no way to know which Direction a parameter should be mapped to. However, I was able to come up with something close to this by implicitly calling the Parameters.Refresh method of the Parameters collection object. Note that parameters STILL have to be passed in the correct order, or ADO will populate the Connection.Errors collection. It is also worth mentioning that this has a very small (virtually unnoticeable) performance hit, but even so, I chose to leave it up to the client to choose which method they want to use. I did so by adding a Boolean property called DeriveParameterDirection. If it is set to True, then the DerivedDirectionParameters implementation of IADODBParametersWrapper is used in the private CreateCommand procedure. If False, then the AssumedDirectionParameters implementation of IADODBParametersWrapper is used.
Also, if output parameters are used, you need a way to return them, so I use the following in ADODBWrapper to do so:
'note: this.OuputParameters is a read only property at the class level
Private Sub PopulateOutPutParameters(ByRef Parameters As ADODB.Parameters)
Dim Param As ADODB.Parameter
Set this.OuputParameters = New Collection
For Each Param In Parameters
Select Case Param.Direction
Case adParamInputOutput
this.OuputParameters.Add Param
Case adParamOutput
this.OuputParameters.Add Param
Case adParamReturnValue
this.OuputParameters.Add Param
End Select
Next
End Sub
IADODBParametersWrapper (Interface):
Option Explicit
Public Sub SetParameters(ByRef Command As ADODB.Command, ByRef ParameterValues As Variant)
End Sub
Private Sub Class_Initialize()
Err.Raise vbObjectError + 1024, TypeName(Me), "An Interface class must not be instantiated."
End Sub
AssumedDirectionParameters (Class):
Option Explicit
Implements IADODBParametersWrapper
Private Sub IADODBParametersWrapper_SetParameters(ByRef Command As ADODB.Command, ByRef ParameterValues As Variant)
Dim i As Long
Dim ParamVal As Variant
If UBound(ParameterValues) = -1 Then Exit Sub 'not allocated
For i = LBound(ParameterValues) To UBound(ParameterValues)
ParamVal = ParameterValues(i)
Command.Parameters.Append ToADOInputParameter(ParamVal)
Next i
End Sub
Private Function ToADOInputParameter(ByVal ParameterValue As Variant) As ADODB.Parameter
Dim ResultParameter As New ADODB.Parameter
With ResultParameter
Select Case VarType(ParameterValue)
Case vbInteger
.Type = adInteger
Case vbLong
.Type = adInteger
Case vbSingle
.Type = adSingle
.Precision = CalculatePrecision(ParameterValue)
.NumericScale = CalculateNumericScale(ParameterValue)
Case vbDouble
.Type = adDouble
.Precision = CalculatePrecision(ParameterValue)
.NumericScale = CalculateNumericScale(ParameterValue)
Case vbDate
.Type = adDate
Case vbCurrency
.Type = adCurrency
.Precision = CalculatePrecision(ParameterValue)
.NumericScale = CalculateNumericScale(ParameterValue)
Case vbString
.Type = adVarChar
.Size = Len(ParameterValue)
Case vbBoolean
.Type = adBoolean
End Select
.Direction = ADODB.ParameterDirectionEnum.adParamInput
.value = ParameterValue
End With
Set ToADOInputParameter = ResultParameter
End Function
Private Function CalculatePrecision(ByVal value As Variant) As Byte
CalculatePrecision = CByte(Len(Replace(CStr(value), ".", vbNullString)))
End Function
Private Function CalculateNumericScale(ByVal value As Variant) As Byte
If InStr(CStr(value), ".") = 0 Then Exit Function 'no decimal point: scale is 0
CalculateNumericScale = CByte(Len(Split(CStr(value), ".")(1)))
End Function
DerivedDirectionParameters (Class):
Option Explicit
Implements IADODBParametersWrapper
Private Sub IADODBParametersWrapper_SetParameters(ByRef Command As ADODB.Command, ByRef ParameterValues As Variant)
Dim i As Long
Dim ParamVal As Variant
If UBound(ParameterValues) = -1 Then Exit Sub 'not allocated
With Command
If .Parameters.Count = 0 Then
Err.Raise vbObjectError + 1024, TypeName(Me), "This Provider does " & _
"not support parameter retrieval."
End If
Select Case .CommandType
Case adCmdStoredProc
If .Parameters.Count > 1 Then 'Debug.Print Cmnd.Parameters.Count prints 1 b/c it includes '@RETURN_VALUE'
'which is a default value
For i = LBound(ParameterValues) To UBound(ParameterValues)
ParamVal = ParameterValues(i)
'Explicitly set size to prevent error
'as per the Note at: https://docs.microsoft.com/en-us/sql/ado/reference/ado-api/refresh-method-ado?view=sql-server-2017
SetVariableLengthProperties .Parameters(i + 1), ParamVal
.Parameters(i + 1).Value = ParamVal '.Parameters(i + 1) b/c of @RETURN_VALUE
'mentioned above
Next i
End If
Case adCmdText
For i = LBound(ParameterValues) To UBound(ParameterValues)
ParamVal = ParameterValues(i)
'Explicitly set size to prevent error
SetVariableLengthProperties .Parameters(i), ParamVal
.Parameters(i).Value = ParamVal
Next i
End Select
End With
End Sub
Private Sub SetVariableLengthProperties(ByRef Parameter As ADODB.Parameter, ByRef ParameterValue As Variant)
With Parameter
Select Case VarType(ParameterValue)
Case vbSingle
.Precision = CalculatePrecision(ParameterValue)
.NumericScale = CalculateNumericScale(ParameterValue)
Case vbDouble
.Precision = CalculatePrecision(ParameterValue)
.NumericScale = CalculateNumericScale(ParameterValue)
Case vbCurrency
.Precision = CalculatePrecision(ParameterValue)
.NumericScale = CalculateNumericScale(ParameterValue)
Case vbString
.Size = Len(ParameterValue)
End Select
End With
End Sub
Private Function CalculatePrecision(ByVal value As Variant) As Byte
CalculatePrecision = CByte(Len(Replace(CStr(value), ".", vbNullString)))
End Function
Private Function CalculateNumericScale(ByVal value As Variant) As Byte
If InStr(CStr(value), ".") = 0 Then Exit Function 'no decimal point: scale is 0
CalculateNumericScale = CByte(Len(Split(CStr(value), ".")(1)))
End Function
ADODBWrapper (Class):
Option Explicit
Private Type TADODBWrapper
DeriveParameterDirection As Boolean
CommandTimeout As Long
OuputParameters As Collection
ADOErrors As ADODB.Errors
HasADOError As Boolean
End Type
Private this As TADODBWrapper
Public Property Get DeriveParameterDirection() As Boolean
DeriveParameterDirection = this.DeriveParameterDirection
End Property
Public Property Let DeriveParameterDirection(ByVal value As Boolean)
this.DeriveParameterDirection = value
End Property
Public Property Get CommandTimeout() As Long
CommandTimeout = this.CommandTimeout
End Property
Public Property Let CommandTimeout(ByVal value As Long)
this.CommandTimeout = value
End Property
Public Property Get OuputParameters() As Collection
Set OuputParameters = this.OuputParameters
End Property
Public Property Get Errors() As ADODB.Errors
Set Errors = this.ADOErrors
End Property
Public Property Get HasADOError() As Boolean
HasADOError = this.HasADOError
End Property
Private Sub Class_Terminate()
With this
.CommandTimeout = Empty
.DeriveParameterDirection = Empty
Set .OuputParameters = Nothing
Set .ADOErrors = Nothing
.HasADOError = Empty
End With
End Sub
Public Function GetRecordSet(ByRef Connection As ADODB.Connection, _
ByVal CommandText As String, _
ByVal CommandType As ADODB.CommandTypeEnum, _
ByVal CursorType As ADODB.CursorTypeEnum, _
ByVal LockType As ADODB.LockTypeEnum, _
ParamArray ParameterValues() As Variant) As ADODB.Recordset
Dim Cmnd As ADODB.Command
ValidateConnection Connection.Errors
On Error GoTo CleanFail
Set Cmnd = CreateCommand(Connection, CommandText, CommandType, CVar(ParameterValues)) 'must convert paramarray to
'a variant in order to pass
'to another function
'Note: When used on a client-side Recordset object,
' the CursorType property can be set only to adOpenStatic.
Set GetRecordSet = New ADODB.Recordset
GetRecordSet.CursorType = CursorType
GetRecordSet.LockType = LockType
Set GetRecordSet = Cmnd.Execute(Options:=ExecuteOptionEnum.adAsyncFetch)
'if successful
If Not this.ADOErrors Is Nothing Then this.ADOErrors.Clear
CleanExit:
Set Cmnd = Nothing
Exit Function
CleanFail:
PopulateADOErrorObject Connection.Errors
Resume CleanExit
End Function
Public Function GetDisconnectedRecordSet(ByRef ConnectionString As String, _
ByVal CursorLocation As ADODB.CursorLocationEnum, _
ByVal CommandText As String, _
ByVal CommandType As ADODB.CommandTypeEnum, _
ParamArray ParameterValues() As Variant) As ADODB.Recordset
Dim Cmnd As ADODB.Command
Dim CurrentConnection As ADODB.Connection
On Error GoTo CleanFail
Set CurrentConnection = CreateConnection(ConnectionString, CursorLocation)
Set Cmnd = CreateCommand(CurrentConnection, CommandText, CommandType, CVar(ParameterValues)) 'must convert paramarray to
'a variant in order to pass
'to another function
Set GetDisconnectedRecordSet = New ADODB.Recordset
With GetDisconnectedRecordSet
.CursorType = adOpenStatic 'Must use this cursortype and this locktype to work with a disconnected recordset
.LockType = adLockBatchOptimistic
.Open Cmnd, , , , Options:=ExecuteOptionEnum.adAsyncFetch
'disconnect the recordset
Set .ActiveConnection = Nothing
End With
'if successful
If Not this.ADOErrors Is Nothing Then this.ADOErrors.Clear
CleanExit:
Set Cmnd = Nothing
If Not CurrentConnection Is Nothing Then: If (CurrentConnection.State And adStateOpen) = adStateOpen Then CurrentConnection.Close
Set CurrentConnection = Nothing
Exit Function
CleanFail:
PopulateADOErrorObject CurrentConnection.Errors
Resume CleanExit
End Function
Public Function QuickExecuteNonQuery(ByVal ConnectionString As String, _
ByVal CommandText As String, _
ByVal CommandType As ADODB.CommandTypeEnum, _
ByRef RecordsAffectedReturnVal As Long, _
ParamArray ParameterValues() As Variant) As Boolean
Dim Cmnd As ADODB.Command
Dim CurrentConnection As ADODB.Connection
On Error GoTo CleanFail
Set CurrentConnection = CreateConnection(ConnectionString, adUseServer)
Set Cmnd = CreateCommand(CurrentConnection, CommandText, CommandType, CVar(ParameterValues)) 'must convert paramarray to
'a variant in order to pass
'to another function
Cmnd.Execute RecordsAffected:=RecordsAffectedReturnVal, Options:=ExecuteOptionEnum.adExecuteNoRecords
QuickExecuteNonQuery = True
'if successful
If Not this.ADOErrors Is Nothing Then this.ADOErrors.Clear
CleanExit:
Set Cmnd = Nothing
If Not CurrentConnection Is Nothing Then: If (CurrentConnection.State And adStateOpen) = adStateOpen Then CurrentConnection.Close
Set CurrentConnection = Nothing
Exit Function
CleanFail:
PopulateADOErrorObject CurrentConnection.Errors
Resume CleanExit
End Function
Public Function ExecuteNonQuery(ByRef Connection As ADODB.Connection, _
ByVal CommandText As String, _
ByVal CommandType As ADODB.CommandTypeEnum, _
ByRef RecordsAffectedReturnVal As Long, _
ParamArray ParameterValues() As Variant) As Boolean
Dim Cmnd As ADODB.Command
ValidateConnection Connection.Errors
On Error GoTo CleanFail
Set Cmnd = CreateCommand(Connection, CommandText, CommandType, CVar(ParameterValues)) 'must convert paramarray to
'a variant in order to pass
'to another function
Cmnd.Execute RecordsAffected:=RecordsAffectedReturnVal, Options:=ExecuteOptionEnum.adExecuteNoRecords
ExecuteNonQuery = True
'if successful
If Not this.ADOErrors Is Nothing Then this.ADOErrors.Clear
CleanExit:
Set Cmnd = Nothing
Exit Function
CleanFail:
PopulateADOErrorObject Connection.Errors
Resume CleanExit
End Function
Public Function CreateConnection(ByRef ConnectionString As String, ByVal CursorLocation As ADODB.CursorLocationEnum) As ADODB.Connection
On Error GoTo CleanFail
Set CreateConnection = New ADODB.Connection
CreateConnection.CursorLocation = CursorLocation
CreateConnection.Open ConnectionString
CleanExit:
Exit Function
CleanFail:
PopulateADOErrorObject CreateConnection.Errors
Resume CleanExit
End Function
Private Function CreateCommand(ByRef Connection As ADODB.Connection, _
ByVal CommandText As String, _
ByVal CommandType As ADODB.CommandTypeEnum, _
ByRef ParameterValues As Variant) As ADODB.Command
Dim ParameterGenerator As IADODBParametersWrapper
Set CreateCommand = New ADODB.Command
With CreateCommand
.ActiveConnection = Connection
.CommandText = CommandText
.CommandTimeout = Me.CommandTimeout '0
End With
If Me.DeriveParameterDirection Then
Set ParameterGenerator = New DerivedDirectionParameters
CreateCommand.CommandType = CommandType 'When set before accessing the Parameters collection,
'Parameters.Refresh is implicitly called
ParameterGenerator.SetParameters CreateCommand, ParameterValues
PopulateOutPutParameters CreateCommand.Parameters
Else
Set ParameterGenerator = New AssumedDirectionParameters
ParameterGenerator.SetParameters CreateCommand, ParameterValues
CreateCommand.CommandType = CommandType
End If
End Function
Private Sub ValidateConnection(ByRef ConnectionErrors As ADODB.Errors)
If ConnectionErrors.Count > 0 Then
If Not this.HasADOError Then PopulateADOErrorObject ConnectionErrors
Dim ADOError As ADODB.Error
Set ADOError = GetError(ConnectionErrors, ConnectionErrors.Count - 1) 'Note: 0 based collection
Err.Raise ADOError.Number, ADOError.Source, ADOError.Description, ADOError.HelpFile, ADOError.HelpContext
End If
End Sub
Private Sub PopulateADOErrorObject(ByVal ConnectionErrors As ADODB.Errors)
If ConnectionErrors.Count = 0 Then Exit Sub
this.HasADOError = True
Set this.ADOErrors = ConnectionErrors
End Sub
Public Function ErrorsToString() As String
Dim ADOError As ADODB.Error
Dim i As Long
Dim ErrorMsg As String
For Each ADOError In this.ADOErrors
i = i + 1
With ADOError
ErrorMsg = ErrorMsg & "Count: " & vbTab & i & vbNewLine
ErrorMsg = ErrorMsg & "ADO Error Number: " & vbTab & CStr(.Number) & vbNewLine
ErrorMsg = ErrorMsg & "Description: " & vbTab & .Description & vbNewLine
ErrorMsg = ErrorMsg & "Source: " & vbTab & .Source & vbNewLine
ErrorMsg = ErrorMsg & "NativeError: " & vbTab & CStr(.NativeError) & vbNewLine
ErrorMsg = ErrorMsg & "HelpFile: " & vbTab & .HelpFile & vbNewLine
ErrorMsg = ErrorMsg & "HelpContext: " & vbTab & CStr(.HelpContext) & vbNewLine
ErrorMsg = ErrorMsg & "SQLState: " & vbTab & .SqlState & vbNewLine
End With
Next
ErrorsToString = ErrorMsg & vbNewLine
End Function
Public Function GetError(ByRef ADOErrors As ADODB.Errors, ByVal Index As Variant) As ADODB.Error
Set GetError = ADOErrors.item(Index)
End Function
Private Sub PopulateOutPutParameters(ByRef Parameters As ADODB.Parameters)
Dim Param As ADODB.Parameter
Set this.OuputParameters = New Collection
For Each Param In Parameters
Select Case Param.Direction
Case adParamInputOutput
this.OuputParameters.Add Param
Case adParamOutput
this.OuputParameters.Add Param
Case adParamReturnValue
this.OuputParameters.Add Param
End Select
Next
End Sub
Answer: CommandTimeout:
Allowing the client to specify a given command's execution time threshold by making it a read/write property is a good improvement over the first post of this class, which you did not mention in your "outline of what I have done and why", so I am mentioning it here.
Public Property Get CommandTimeout() As Long
    CommandTimeout = this.CommandTimeout
End Property
Public Property Let CommandTimeout(ByVal value As Long)
    this.CommandTimeout = value
End Property
Managing The Connection Object:
Since I am on the topic of things you forgot to mention: in both GetDisconnectedRecordset and QuickExecuteNonQuery, you wrote this:
If Not CurrentConnection Is Nothing Then: If (CurrentConnection.State And adStateOpen) = adStateOpen Then CurrentConnection.Close
Set CurrentConnection = Nothing
Bit-wise comparison, specifically with respect to the Connection object's state, is good, but you could probably make the code look friendlier:
If Not CurrentConnection Is Nothing Then
    If (CurrentConnection.State And adStateOpen) = adStateOpen Then
        CurrentConnection.Close
    End If
End If
Set CurrentConnection = Nothing
OutPut Parameters:
"Also, If output parameters are used, you need a way to return them, so I use the following in ADODBWrapper to do so"
You are indeed able to return parameters from your OuputParameters property, in the sense that you are returning the actual Parameter object, but why do that if you only want to access a parameter's value? As you have it now, one would have to write code like the following just to get a value:
Private Sub GetOutputParams()
    Dim SQLDataAdapter As ADODBWrapper
    Dim rsDisConnected As ADODB.Recordset
    Dim InputParam As String
    Dim OutPutParam As Integer
    Set SQLDataAdapter = New ADODBWrapper
    SQLDataAdapter.DeriveParameterDirection = True
    On Error GoTo CleanFail
    InputParam = "Val1,Val2,Val3"
    Set rsDisConnected = SQLDataAdapter.GetDisconnectedRecordSet(CONN_STRING, adUseClient, _
                                                                 "SCHEMA.SOME_STORED_PROC_NAME", _
                                                                 adCmdStoredProc, InputParam, OutPutParam)
    Sheet1.Range("A2").CopyFromRecordset rsDisConnected
    '***************************************************
    'set the parameter object only to return the value?
    Dim Param As ADODB.Parameter
    If SQLDataAdapter.OuputParameters.Count > 0 Then
        Set Param = SQLDataAdapter.OuputParameters(1)
        Debug.Print Param.Value
    End If
    '***************************************************
CleanExit:
    Exit Sub
CleanFail:
    If SQLDataAdapter.HasADOError Then Debug.Print SQLDataAdapter.ErrorsToString()
    Resume CleanExit
End Sub
If you change the private PopulateOutPutParameters procedure in ADODBWrapper to add only the Parameter.Value to the OutPutParameters collection, like this:
Private Sub PopulateOutPutParameters(ByRef Parameters As ADODB.Parameters)
    Dim Param As ADODB.Parameter
    Set this.OuputParameters = New Collection
    For Each Param In Parameters
        Select Case Param.Direction
            Case adParamInputOutput
                this.OuputParameters.Add Param.value
            Case adParamOutput
                this.OuputParameters.Add Param.value
            Case adParamReturnValue
                this.OuputParameters.Add Param.value
        End Select
    Next
End Sub
Then you could do this in the client code:
If SQLDataAdapter.OuputParameters.Count > 0 Then
    Debug.Print SQLDataAdapter.OuputParameters(1)
End If
Having said all of that, it would still be nice to have a way to map parameters without the client having to know their ordinal position as determined by the way a stored procedure was written, but this is much easier said than done. | {
"domain": "codereview.stackexchange",
"id": 36047,
"tags": "sql, vba, vb6, adodb"
} |
Work and Energy question, using Tension | Question:
Given this image, $2$ objects are connected via a spring (its mass is negligible)
The spring constant is $k$ and that is all that is given.. mass of $A$ is just $m_A$ etc.. and $g \approx 9.8 \frac{m}{s^2}$ (on Earth)
I need to find the maximum displacement ("stretching") of the spring ($L$)
I tried to draw the forces (Positive Y points down, positive X points to the right):
For mass $B$:
$$M_b g - T = (M_a + M_b)a$$
For mass $A$:
$$T - f_{\text{spring}} = M_a a$$
Now I add these two equations and substitute $f_{\text{spring}} = k \cdot L$ :
$$ L = \frac{M_b g - a(2M_a + M_b)}{k}$$
This is a MCQ and none of the answers have $a$ in them... what have I done wrong? I would appreciate your help!
Answer: Suppose block A is displaced by $L$; then block B must also move down by $L$, because the string is inextensible, so its length is constant.
Now, when the spring is stretched by $L$, its stored elastic potential energy increases by $\frac{kL^2}{2}$, which must equal the decrease in gravitational potential energy of block B as it moves down, $m_b g L$.
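A quick numeric sanity check of this energy balance at the claimed maximum stretch; the mass, g, and spring constant below are invented illustrative values, not from the problem:

```python
# Check that k*L^2/2 equals m_b*g*L when L = 2*m_b*g/k.
# All numbers are arbitrary illustrative values.
m_b = 1.5    # kg, mass of the hanging block B (invented)
g = 9.8      # m/s^2
k = 200.0    # N/m, spring constant (invented)

L = 2 * m_b * g / k              # claimed maximum stretch

spring_pe = 0.5 * k * L ** 2     # elastic PE stored in the spring
gravity_pe = m_b * g * L         # gravitational PE released by block B

print(L, spring_pe, gravity_pe)  # the two energies match
```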
After equating these two, we get $L=\frac{2m_bg}{k}$ | {
"domain": "physics.stackexchange",
"id": 69487,
"tags": "homework-and-exercises, newtonian-mechanics, work, spring, string"
} |
Sound when traveling faster than sound | Question: I was wondering: if I am running at the speed of sound while playing music on my iPod, will I be able to listen to my iPod while running at the speed of sound? Or can't we hear anything while running at the speed of sound?
Answer: Well..
1) If you were running at the speed of sound, you probably wouldn't be for long. The human body isn't designed to handle those kinds of stresses.
2) Assuming you're listening to the iPod using ear buds (in your ear), you can probably think of the air between the seal on the ear bud and your ear drum as isolated from the air you're running through, so no: you could still hear your music.
It is important to note that the air turbulence will probably create an extremely loud sound at such a high velocity. So while you may be able to hear your music in theory, in practice the music would likely be drowned out by the sound of rushing air. | {
"domain": "physics.stackexchange",
"id": 25222,
"tags": "classical-mechanics, waves, acoustics"
} |
reduction of sample from videos sample | Question: Well, I posted the same question on the main stack before finding the right place, sorry.
A friend of mine is working with more than 100 videos as the sample for his neural network. Each video lasts more than a couple of minutes at around 24 frames per second. The objective, using deep learning, is to detect movement throughout all the samples.
The problem for him is the quantity of data he is dealing with: the training part consumes too much time. I'm no expert in data preparation, but I thought maybe he could turn each frame into a dataframe, drop mono-color images (full black/white), convert them to grayscale instead of full RGB, and compress them, but I'm not sure that would be enough.
Can you think of a better method to reduce the training sample?
Answer:
Reduce the size e.g. using cv2.resize()
Compress the image (it is not lossless) e.g. cv2.imencode()
Lower the frame rate
Use lower precision - images are uint8 when loaded, but the deep learning frameworks use float32 by default. You could try float16 or mixed precision.
Using JPEG compression has been shown to be fairly good in terms of the reduction in memory and minimal loss of performance. Have a look at this research.
You could also drop the frame rate to, say, 10 FPS. The actual value could be computed based on the expected velocity of the moving objects - do you really require 24 FPS for the task?
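To make the frame-rate arithmetic concrete, here is a minimal pure-Python sketch (no OpenCV needed) of which source-frame indices survive when dropping from 24 FPS to 10 FPS; all numbers are illustrative:

```python
def subsample_indices(n_frames, src_fps, dst_fps):
    """Return the source-frame indices to keep when dropping the
    frame rate from src_fps to dst_fps (nearest-frame sampling)."""
    n_out = int(n_frames * dst_fps / src_fps)
    kept = []
    for j in range(n_out):
        # time of the j-th output frame, mapped back to a source index
        kept.append(round(j * src_fps / dst_fps))
    return kept

# One second of 24 FPS video kept at 10 FPS -> only 10 frames survive
idx = subsample_indices(24, 24, 10)
print(idx)        # 10 indices spread across the 24 source frames
print(len(idx))
```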
Otherwise, the hardware you are using will determine which steps to take afterwards. Memory, number of operations, inference speed etc. will change how you optimise the process.
You mentioned "dataframe", so I will just point out that using Pandas DataFrames to hold raw image data, whilst looking easy, is generally very inefficient due to the number of data points involved (pixels), and the fact that Pandas DataFrames are essentially annotated NumPy arrays - the annotations take a lot of space. Better to load into pure NumPy arrays and use OpenCV for things such as making gray-scale (black and white) images from RGB, resizing them, normalising pixel values, and so on. | {
"domain": "datascience.stackexchange",
"id": 7346,
"tags": "python, deep-learning, data-stream-mining"
} |
How to derive Eq. (1.2.17) in Polchinski? | Question: I have a super stupid question about deriving Eq. (1.2.17) in Polchinski's string theory, vol 1.
The book seems to derive from
$$\tag{1.2.16} h_{ab}=\frac{1}{2} \gamma_{ab} \gamma^{cd} h_{cd} $$
to
$$\tag{1.2.17} h_{ab} (-h)^{-1/2} = \gamma_{ab} (-\gamma)^{-1/2}.$$
I cannot figure out how to manipulate the determinants $\gamma$ and $h$. Excuse me, would you provide at least a hint for the derivation?
Answer: Calculate the determinant of (1.2.16). Note that these are determinants of $2\times 2$ matrices, as Peter Kravchuk told you, so $\det(CM)=C^2\det(M)$ for a constant $C$. Just to be sure, by the word "constant", I really mean $C$ is a scalar, not a matrix. This $C$ may still depend on $\tau,\sigma$ (or other variables if there were any).
There's a coefficient $C^2$ because two columns (or two rows) are multiplied by $C$ each and the determinant is multilinear in the rows (or columns). So the determinant of (1.2.16) is
$$\det_{a,b} (h_{ab}) = \det_{ab} \left( \frac{1}{2}\gamma_{ab} \gamma^{cd}h_{cd} \right)$$
Because $\gamma^{cd}h_{cd}/2$ plays the role of the constant $C$ from my previous general identity, the equation above may be simplified to
$$\det_{a,b} (h_{ab}) = \det_{ab} (\gamma_{ab})\times \left( \frac{1}{2} \gamma^{cd}h_{cd} \right)^2$$
Using the symbols such as $h=\det_{ab}(h_{ab})$ and similarly for $\gamma$, the square root of the minus equation above (minus because the determinants are negative in a Minkowskian signature) is
$$\sqrt{-h} = \sqrt{-\gamma}\times \frac 12 \gamma^{cd}h_{cd} $$
The second power disappeared again. Divide (1.2.16) by the displayed equation I just wrote down. On the right hand side, $(1/2)\gamma^{cd}h_{cd}$ will cancel again and you're left with (1.2.17).
In none of the arguments above, it's important that $C$ may depend on $\tau,\sigma$. Also, it doesn't matter at all that/whether $h_{ab}$ may be calculated as the induced metric from $\partial_\alpha X^\mu$ etc.
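For the skeptical, the $2\times2$ identity $\det(CM)=C^2\det M$ and the resulting equality of the unit-determinant combinations in (1.2.17) can be checked numerically. The matrix entries below are made up, chosen only so the determinant is negative (Minkowski-like signature):

```python
import math

# A made-up 2x2 "metric" with negative determinant
gamma = [[-2.0, 0.3],
         [0.3, 1.5]]
C = 0.7  # plays the role of the scalar (1/2) * gamma^{cd} h_{cd}

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# h_ab = C * gamma_ab, as in (1.2.16)
h = [[C * gamma[i][j] for j in range(2)] for i in range(2)]

# det(C M) = C^2 det(M) for 2x2 matrices
print(det2(h), C**2 * det2(gamma))

# The unit-determinant combinations of (1.2.17) agree entry by entry:
# h_ab (-h)^(-1/2) == gamma_ab (-gamma)^(-1/2)
lhs = [[h[i][j] / math.sqrt(-det2(h)) for j in range(2)] for i in range(2)]
rhs = [[gamma[i][j] / math.sqrt(-det2(gamma)) for j in range(2)] for i in range(2)]
print(lhs)
print(rhs)
```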
Why others can derive the second equation immediately
Those experienced among us see the result immediately because the left hand side of (1.2.17) is the tensor proportional to $h_{ab}$ with a normalization constant chosen so that the determinant of this left hand side equals minus one (the minus can't be eliminated because it's given by the Minkowski signature). In other words, the normalization factor is chosen exactly to "forget" the normalization. Similarly, the right hand side of (1.2.17) is the tensor proportional to $\gamma_{ab}$ with the right normalization factor to makes the determinant of this product equal to minus one. Because both $h_{ab}$ and $\gamma_{ab}$ were converted – by adding $(-h)^{-1/2}$ factors etc. – to matrices whose determinant is equal to minus one, both sides of (1.2.17) actually encode the information about the matrices $h_{ab }$ and $\gamma_{ab}$ up to their normalization. In other words, (1.2.17) is equivalent to saying that $h_{ab}$ and $\gamma_{ab}$ are proportional to one another as matrices. The equation (1.2.16) also says that they're proportional to each other as matrices – with a coefficient that is explicitly given in (1.2.16) but not in (1.2.17) – so (1.2.17) follows from (1.2.16) (but not the other way around). This paragraph may sound complicated but it's really obvious for those who have a sufficient experience with tensors, matrices, and determinants. | {
"domain": "physics.stackexchange",
"id": 8568,
"tags": "homework-and-exercises, string-theory"
} |
How to compute the Shannon entropy for a strand of DNA? | Question: I'm confused by the computation of sequence logos. Wikipedia describes the process without giving a concrete example.
Let's just consider DNA, so there are 4 bases (nucleic acids).
The following data comes from the book "Machine Learning - A Probabilistic Perspective (Figure 1)"
The Shannon entropy of position $i$ is:
${\displaystyle H_{i}=-\sum _{b=a}^{t}f_{b,i}\times \log _{2}f_{b,i}}$
Where ${\displaystyle f_{b,i}}$ is the relative frequency of base
This post is computing position 3, where it seems that the relative frequency of a is 100%, and the relative frequency of 3 other bases are 0%.
Now, how do you calculate the Shannon entropy of position 3? It seems to be 0, which is apparently incorrect, where am I wrong?
Answer: Why do you think the entropy of 0 is incorrect? It intuitively makes sense, as there is no uncertainty about the base at position 3, and thus there is no entropy.
However, what is plotted in a sequence logo isn't the entropy, but rather a measure of the "decrease in uncertainty" as the sequence is aligned. This is calculated by taking the entropy at this position if we randomly aligned sequences ($H_g(L)$), and subtracting from it the entropy of the alignment ($H_s(L)$):
$$
R_{sequence}=H_g(L) - H_s(L)
$$
The entropy at position 3 based on a random alignment is calculated by assuming there are 4 equally likely events (one for each base), and thus:
$$
\begin{align*}
H_g(L) & = -((1/4 \times -2) + (1/4 \times -2) + (1/4 \times -2) + (1/4 \times -2)) \\
H_g(L) & = 2
\end{align*}
$$
Notably, $H_g(L)$ will always be 2 when dealing with nucleotide sequences.
So in your example, we have:
$$
\begin{align*}
R_{sequence}&=H_g(L) - H_s(L) \\
R_{sequence}&=2 - 0 \\
R_{sequence}&=2 \text{ bits of entropy}
\end{align*}
$$
Finally, to work out the height of each base at each position in the logo, we multiply the frequency of that base by the overall information gain $R_{sequence}$. In this case the frequency of the base A at position 3 is of course 1:
$$
\begin {align*}
H(b,l) &=f(b, l) \times R_{sequence} \\
H(A, 3) &= f(a, 3) \times 2 \\
&= 1 \times 2 \\
&= 2
\end{align*}
$$
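The whole calculation is short enough to check in a few lines of Python. The position-3 frequencies come from the example above; the function itself is generic:

```python
from math import log2

def entropy(freqs):
    """Shannon entropy in bits; terms with zero frequency contribute 0."""
    s = sum(f * log2(f) for f in freqs if f > 0)
    return -s if s != 0 else 0.0

# Position 3 of the example alignment: base 'a' occurs with frequency 1.0
freqs_pos3 = [1.0, 0.0, 0.0, 0.0]

H_s = entropy(freqs_pos3)                  # entropy of the alignment column
H_g = entropy([0.25, 0.25, 0.25, 0.25])    # random 4-base background
R_sequence = H_g - H_s                     # information gain (e(n) ignored)

height_A = 1.0 * R_sequence                # f(a, 3) * R_sequence
print(H_s, H_g, R_sequence, height_A)      # 0.0 2.0 2.0 2.0
```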
Source: Sequence logos: a new way to display consensus sequences.
(Thomas D. Schneider, R. Michael Stephens, 1990)
Note: I've excluded the correction factor $e(n)$ from the overall information-gain calculation for the sake of simplicity. Refer to the paper above for an explanation of this. | {
"domain": "bioinformatics.stackexchange",
"id": 1057,
"tags": "sequence-alignment, ngs, sequence-analysis"
} |
Is spacetime flat inside a spherical shell? | Question: In a perfectly symmetrical spherical hollow shell, there is a null net gravitational force according to Newton, since in his theory the force is exactly inversely proportional to the square of the distance.
What is the result in the general theory of relativity? Is the spacetime flat inside (given the fact that the orbit of Mercury rotates, I don't think so)? How is a signal from the cavity redshifted for an observer at infinity?
Answer: Here we will only answer OP's first two questions (v1). Yes, Newton's Shell Theorem generalizes to General Relativity as follows. Birkhoff's Theorem states that a spherically symmetric solution is static, and a (not necessarily thin) vacuum shell (i.e. a region with no mass/matter) corresponds to a radial branch of the Schwarzschild solution
$$\tag{1} ds^2~=~-\left(1-\frac{R}{r}\right)c^2dt^2
+ \left(1-\frac{R}{r}\right)^{-1}dr^2 +r^2 d\Omega^2$$
in some radial interval $r \in I:=[r_1, r_2]$. Here the constant $R$ is the Schwarzschild radius, and $d\Omega^2$ denotes the metric of the angular $2$-sphere.
Since there is no mass $M$ at the center of OP's internal hollow region $r \in I:=[0, r_2]$, the Schwarzschild radius $R=\frac{2GM}{c^2}=0$ is zero. Hence the metric (1) in the hollow region is just flat Minkowski space in spherical coordinates. | {
"domain": "physics.stackexchange",
"id": 5247,
"tags": "general-relativity, gravity, spacetime, metric-tensor, curvature"
} |
Least action in differential steps | Question: It says here that the entire integral is minimal if and only if the action for each step is minimal, but here is a contradiction. Suppose there are two fixed points in spacetime, A and C, and it takes only two infinitesimal steps to reach C from A. At A, of all the possible paths, the path to B has the least action (of value say 5), and the action from B to C is 10. But there might be a second point B' for which the first action is a little higher (like 7) but the action from B' to C more than makes up for it (say 5). In such a case the overall action will be lower. Where am I wrong in this logic?
He also talks about fiddling paths between two points infinitesimally close - there can only be a straight line, but if it isn't then those points aren't infinitesimally close, their separation can still be broken down into simpler paths, just as in A to C. I know that if the least action principle is valid for infinitesimal paths, then it is valid for any path, but how can we say for sure that it is also true the other way around?
Answer: OP is misquoting the textbook as saying
"The entire integral is minimal if and only if the actions at each step are minimal."
Rather, the textbook is effectively saying that
"The entire integral is minimal only if the actions at each step are minimal."
In fact, the textbook argues that
"Otherwise you could just fiddle with just that piece of the path and make the whole integral a little lower."
The if-part does not need to hold: Consider e.g. arc length $S[\gamma]$ along a curve $\gamma$ between two points $A$ and $B$ on a cylinder $\mathbb{S}^1\times \mathbb{R}$. There are infinitely many stationary paths between $A$ and $B$ (which happen to be classified by a winding number). At most two of them have minimal arc length, although they are all minimal at each (sufficiently small) step. | {
"domain": "physics.stackexchange",
"id": 54378,
"tags": "lagrangian-formalism, variational-principle, action"
} |
Understand audio signals | Question: Most audio signals are described by a curve having two parameters: the amplitude and the frequency (related to the amount of time from a positive to a negative amplitude value).
What I want to understand is how the sound level and absolute pitch are encoded in these two values. How would amplitude and frequency change if I changed the volume or pitch?
Thank you in advance!
Answer: We can think of a sound signal either as a function of time or as a function of frequency. The two ways are related by what is called a Fourier transform, so they are often written as $f(t)$ and as $\tilde f(\omega)$, using the same letter, but with the "tilde" above one of them indicating that one is the Fourier transform of the other. From these functions it is also possible to construct other functions of both time and frequency, such as the Wigner function.
I would take "pitch" as more-or-less another name for frequency. Audacity help says "Generally synonymous with the fundamental frequency of a note, but in music, often also taken to imply a perceived measurement that can be affected by overtones above the fundamental."
The squared amplitude of a function is the square of the function (properly speaking, of its norm), written as $|f(t)|^2$ and $|\tilde f(\omega)|^2$. This function of time or of frequency is always positive or zero, whereas the original function can be positive or negative (or it can be a 2-valued function, which in the time coordinates represents both the signal and its rate of change at a given time). It's common to call the "squared amplitude" the amplitude, just to confuse matters.
Sound level sometimes refers to the logarithm of the squared amplitude, as Georg says, but sometimes not. Wikipedia, for example, suggests that a loudness meter reports on a logarithmic scale, whereas sound level might mean either linear or logarithmic scale. One test for what is meant is whether the signal level is followed by "dB", which will always indicate a logarithmic scale.
Ultimately I'd say that it's the mathematical relationships between these objects that one tries to understand, but a lot can be understood just by experimenting with something like Audacity (which I think is simpler to use than the software mentioned by @whoplisp, although also less powerful), using the "Analyze->Plot Spectrum" option for various sounds. Here's a plot for me whistling, apparently at about 1300Hz (the fundamental frequency) in the 0.026 seconds that I analyzed here, but with various higher frequencies also represented.
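If you want to see the time-to-frequency relationship in code rather than in Audacity, here is a tiny pure-Python discrete Fourier transform sketch; the sample rate and tone frequency are arbitrary test values:

```python
import math

sample_rate = 800            # samples per second (illustrative)
tone_hz = 100                # frequency of the test tone (illustrative)
n = 160                      # number of samples analysed (0.2 s)

# f(t): a pure sine, i.e. a "monochromatic" signal
signal = [math.sin(2 * math.pi * tone_hz * t / sample_rate) for t in range(n)]

def dft_power(x, k):
    """Squared amplitude |x~(k)|^2 at frequency bin k (naive DFT)."""
    re = sum(x[t] * math.cos(2 * math.pi * k * t / len(x)) for t in range(len(x)))
    im = -sum(x[t] * math.sin(2 * math.pi * k * t / len(x)) for t in range(len(x)))
    return re * re + im * im

# Bin k corresponds to frequency k * sample_rate / n = k * 5 Hz here,
# so the 100 Hz tone should dominate bin k = 20.
power = [dft_power(signal, k) for k in range(n // 2)]
peak_bin = max(range(n // 2), key=lambda k: power[k])
print(peak_bin * sample_rate / n)   # frequency of the loudest bin
```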
When reading something about sound or music, there are various technical usages that are not very standardized, and one often has to figure out what's meant by the overall context. There are many different expert groups that have a strong interest in signal analysis of various kinds, and they have often developed specialized languages appropriate to their own interests. | {
"domain": "physics.stackexchange",
"id": 1328,
"tags": "acoustics"
} |
Using R and Python together | Question: I'm new to this field and I started working with data using R. Because of that, I find R much easier for approaching a data project. However, apparently an employer wants you to know an object-oriented programming language (a language like Python).
So it would be smart to think I can use Python just when I need to deal with a complex programming process, like replacing the na's in Titanic/kaggle project with the average based on name and to use R for anything else? So use them both interchangeably?
Besides the fact that Python is more programming-oriented, I don't see why somebody would use it over R...
Answer: Several clarifications:
you can program with object-oriented (OOP) concepts in R, even though OOP in R has slightly different syntax from other languages. Methods do not bind to objects. In R, different method versions will be invoked based on the input argument classes (and types). (Ref: Advanced R)
you can also replace nans with mean / any stat. / value in R using a mask if you store the data in a dataframe (See SO post)
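As a pure-Python sketch of the same idea applied to the question's Titanic example (imputing missing ages with a per-group mean, a simplified stand-in for "average based on name"); every value below is invented:

```python
# Fill missing ages with the mean age of the passenger's group.
# All data below is made up for illustration.
rows = [
    {"group": "Mr",  "age": 30.0},
    {"group": "Mr",  "age": None},
    {"group": "Mr",  "age": 40.0},
    {"group": "Mrs", "age": 28.0},
    {"group": "Mrs", "age": None},
]

# mean age per group, ignoring missing values
sums, counts = {}, {}
for r in rows:
    if r["age"] is not None:
        sums[r["group"]] = sums.get(r["group"], 0.0) + r["age"]
        counts[r["group"]] = counts.get(r["group"], 0) + 1
means = {g: sums[g] / counts[g] for g in sums}

# impute: replace each missing age with its group's mean
for r in rows:
    if r["age"] is None:
        r["age"] = means[r["group"]]

print([r["age"] for r in rows])   # the two None values become 35.0 and 28.0
```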
There is no problem using them interchangeably. I use R from Python using the package RPy2. I assume it is equally easy to do it the other way round.
At the end of the day, any language is only as good as how much its users know about it. Use the one that you are more familiar with and try to learn it properly using the vast resources available online. | {
"domain": "datascience.stackexchange",
"id": 638,
"tags": "r, python"
} |
How can an engine provide more torque at the point of contact than the ground's response by static friction without slipping? | Question: In response to the top answer here: Newtonian Mechanics - How does something like a car wheel roll?
I clearly fundamentally misunderstand something here. I can't reconcile there being an angular acceleration about the tires (and thus, a net torque) and the torque provided by the engine being equal to the torque in response to static friction. If the torque provided by the engine was greater than the torque in response to static friction, wouldn't the wheel slip?
update: My confusion was resolved in chat with @V.F., and for anyone else who might have had the same simple misunderstanding in this system, it came down to a misunderstanding of contact forces.
As stated, the response from static friction is equal to the force of the tyre on the road, but the force of the tyre on the road (and of the road on the tyre through static friction) is NOT equal to the force of the engine on the tyre. This is because the tyre delivers the force to the road through contact, meaning it must physically accelerate to bring its atoms closer to the atoms of the road, resulting in both the tyre and the road experiencing forces in opposing directions via Coulomb's law. Therefore, a little of the engine's force necessarily goes into accelerating the tyre into the road, and the rest is delivered between the road and the tyre as a contact force, through static friction. This difference is what produces the constraint of rolling motion, as the tyre experiences both a clockwise rotational acceleration from the engine, bringing it into contact with the road, and a forward translation via static friction.
This was easier to understand in the simpler translational case that is typically used to illustrate the difference between a contact force and an input force. Imagine two boxes, A and B, of equal mass, M, on a frictionless surface. If you exerted a force, F, on box A, it's clear that both boxes would accelerate together as one system over the surface. But if box A is in contact with box B, wouldn't box B respond with an equal and opposite force? Then why would box A accelerate at all? That is, why isn't the force F perfectly transmitted through box A to box B? Because box A and box B are, of course, not actually in contact: the atoms at their respective surfaces sit at a mutual equilibrium governed by Coulomb's law. So transmitting force to box B requires accelerating box A into box B; the force F is actually distributed between box A and box B based on their respective masses, and thus the contact force between box A and box B is necessarily less than the input force F.
Answer: We can treat a driving wheel as just another gear that has to pass the torque down the line.
Ideally, the torque it passes should be equal to the torque it receives, but, since the wheel has some mass and, therefore, some moment of inertia, a little torque differential is needed to speed it up. At steady state, though, the torque on both sides should be the same.
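The torque differential can be made numeric. Below is a hedged sketch with all quantities invented: modelling the driven wheel as a uniform disc, the rolling constraint $a = \alpha r$ splits the drive torque into a part that spins the wheel up and a part delivered to the road through friction.

```python
# Invented numbers: a 20 kg disc wheel driving a 980 kg body.
m = 20.0      # wheel mass, kg
M = 980.0     # rest of the car, kg
r = 0.3       # wheel radius, m
tau = 600.0   # drive torque on the wheel, N*m

I = 0.5 * m * r**2                    # moment of inertia of a uniform disc

# Rolling without slipping: a = alpha * r. Combining
#   tau - F*r = I * (a / r)   (rotation of the wheel about its axle)
#   F = (M + m) * a           (translation of the whole car)
a = tau / (r * (M + m + 0.5 * m))      # linear acceleration
F = (M + m) * a                        # friction force passed to the road

torque_to_road = F * r
torque_spinning_wheel = I * a / r

print(torque_to_road, torque_spinning_wheel)
# The two parts add back up to the drive torque:
print(torque_to_road + torque_spinning_wheel)   # equals tau
```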
The wheel may slip, if the drive torque exceeds the maximum torque that can be supported by static friction, which can happen, for instance, on a slippery surface (low static friction) or when the car is overloaded or somehow is blocked from moving or there is an attempt to accelerate it too fast. | {
"domain": "physics.stackexchange",
"id": 51590,
"tags": "newtonian-mechanics, kinematics, rotational-dynamics, friction"
} |
Use CNN for time series regression | How to implement sliding window? | Question: I'm trying to use CNN for time series regression in python. I have 9 elements in each time step (from sensor readings) and the output (target/reference) is 4 elements.
Input Shape = (time steps, 9)
Output Shape = (time steps, 4)
Based on papers I should use rolling windows, such as:
I don't understand how I could implement that. Should I convert the input as follows?
Input Shape = (Time Steps, Sliding Window Length, 9)
The Model is:
####################################################################################################################
# Define ANN Model
# define two sets of inputs
acc = layers.Input(shape=(3,1,))
gyro = layers.Input(shape=(3,1,))
# the first branch operates on the first input
x = Conv1D(256, 1, activation='relu')(acc)
x = Conv1D(128, 1, activation='relu')(x)
x = Conv1D(64, 1, activation='relu')(x)
x = MaxPooling1D(pool_size=3)(x)
x = Model(inputs=acc, outputs=x)
# the second branch opreates on the second input
y = Conv1D(256, 1, activation='relu')(gyro)
y = Conv1D(128, 1, activation='relu')(y)
y = Conv1D(64, 1, activation='relu')(y)
y = MaxPooling1D(pool_size=3)(y)
y = Model(inputs=gyro, outputs=y)
# combine the output of the three branches
combined = layers.concatenate([x.output, y.output])
# combined outputs
z = Bidirectional(LSTM(128, dropout=0.25, return_sequences=False,activation='tanh'))(combined)
#z = Dense(10, activation="relu")(z)
z = Flatten()(z)
z = Dense(4, activation="linear")(z)
model = Model(inputs=[x.input, y.input], outputs=z)
model.compile(loss='mse', optimizer = tf.keras.optimizers.Adam(learning_rate=0.01),metrics=['accuracy','mse'],run_eagerly=True) #, callbacks=[tensorboard]
model.summary()
Model: "model_2"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 3, 1)] 0 []
input_2 (InputLayer) [(None, 3, 1)] 0 []
conv1d (Conv1D) (None, 3, 256) 512 ['input_1[0][0]']
conv1d_3 (Conv1D) (None, 3, 256) 512 ['input_2[0][0]']
conv1d_1 (Conv1D) (None, 3, 128) 32896 ['conv1d[0][0]']
conv1d_4 (Conv1D) (None, 3, 128) 32896 ['conv1d_3[0][0]']
conv1d_2 (Conv1D) (None, 3, 64) 8256 ['conv1d_1[0][0]']
conv1d_5 (Conv1D) (None, 3, 64) 8256 ['conv1d_4[0][0]']
max_pooling1d (MaxPooling1D) (None, 1, 64) 0 ['conv1d_2[0][0]']
max_pooling1d_1 (MaxPooling1D) (None, 1, 64) 0 ['conv1d_5[0][0]']
concatenate (Concatenate) (None, 1, 128) 0 ['max_pooling1d[0][0]',
'max_pooling1d_1[0][0]']
bidirectional_1 (Bidirectional (None, 256) 263168 ['concatenate[0][0]']
)
flatten (Flatten) (None, 256) 0 ['bidirectional_1[0][0]']
dense (Dense) (None, 4) 1028 ['flatten[0][0]']
==================================================================================================
Total params: 347,524
Trainable params: 347,524
Non-trainable params: 0
Answer: I wrote this code to solve the problem. It requires a window size and a stride value.
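Before the NumPy version, here is a pure-Python sketch of the same windowing scheme, to make the produced windows visible; the window_size and stride values are illustrative:

```python
def sliding_windows(seq, window_size, stride):
    """Pure-Python version of the windowing loop:
    start offset 1, step `stride`, each window `window_size` long."""
    out = []
    for idx in range(0, len(seq) - window_size - 1, stride):
        out.append(seq[idx + 1: idx + 1 + window_size])
    return out

# 12 "time steps" of fake 1-D sensor readings
readings = list(range(12))
wins = sliding_windows(readings, window_size=4, stride=2)
print(wins)
# Overlapping windows: [1..4], [3..6], [5..8], [7..10]
```

In the NumPy version each appended slice is a (window_size, 9) array, so the stacked input ends up with shape (num_windows, window_size, 9) - i.e. the (Time Steps, Sliding Window Length, 9) layout asked about.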
def load_dataset(gyro_data, acc_data, ori_data, window_size, stride):
    x_gyro = []
    x_acc = []
    x_ori = []
    for idx in range(0, gyro_data.shape[0] - window_size - 1, stride):
        x_gyro.append(gyro_data[idx + 1: idx + 1 + window_size, :])
        x_acc.append(acc_data[idx + 1: idx + 1 + window_size, :])
        x_ori.append(ori_data[idx + 1: idx + 1 + window_size, :])  # was mag_data, which is undefined here
    x_gyro = np.reshape(x_gyro, (len(x_gyro), x_gyro[0].shape[0], x_gyro[0].shape[1]))
    x_acc = np.reshape(x_acc, (len(x_acc), x_acc[0].shape[0], x_acc[0].shape[1]))
    x_ori = np.reshape(x_ori, (len(x_ori), x_ori[0].shape[0]))
    return [x_gyro, x_acc], [x_ori] | {
"domain": "datascience.stackexchange",
"id": 11370,
"tags": "python, keras, time-series"
} |
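A quick sanity check of the load_dataset function above, using synthetic NumPy arrays. The shapes, and the single-column (N, 1) orientation array, are my own assumptions, chosen so that the final reshape of x_ori works out:

```python
import numpy as np

# Sliding-window loader from the answer above (with ori_data used consistently).
def load_dataset(gyro_data, acc_data, ori_data, window_size, stride):
    x_gyro, x_acc, x_ori = [], [], []
    for idx in range(0, gyro_data.shape[0] - window_size - 1, stride):
        x_gyro.append(gyro_data[idx + 1: idx + 1 + window_size, :])
        x_acc.append(acc_data[idx + 1: idx + 1 + window_size, :])
        x_ori.append(ori_data[idx + 1: idx + 1 + window_size, :])
    x_gyro = np.reshape(x_gyro, (len(x_gyro), window_size, gyro_data.shape[1]))
    x_acc = np.reshape(x_acc, (len(x_acc), window_size, acc_data.shape[1]))
    x_ori = np.reshape(x_ori, (len(x_ori), window_size))  # assumes ori_data is (N, 1)
    return [x_gyro, x_acc], [x_ori]

# 100 samples of 3-axis gyro/accelerometer data and one orientation channel.
gyro = np.random.rand(100, 3)
acc = np.random.rand(100, 3)
ori = np.random.rand(100, 1)

(x_gyro, x_acc), (x_ori,) = load_dataset(gyro, acc, ori, window_size=10, stride=5)
print(x_gyro.shape, x_acc.shape, x_ori.shape)  # (18, 10, 3) (18, 10, 3) (18, 10)
```

With 100 rows, a window of 10, and a stride of 5, the range runs over indices 0, 5, …, 85, giving 18 windows.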
Is there any drawback to breathing deeply all the time? | Question: Normal, relaxed breathing (eupnea) uses the external intercostal muscles. Diaphragmatic breathing (deep breathing) is more efficient because it balances the ventilation/perfusion ratio across the lungs by ventilating the lower half of the lungs more (as far as I understand), allowing for more efficient gas exchange. It also has various health benefits. The respiratory center in our brain controls breathing to regulate carbon dioxide and oxygen levels in the blood. Is there any drawback to breathing slowly but deeply all the time, or is it just that humans are inefficient?
Answer: Short answer: no, there is no evidence to support this at all.
Longer answer:
The adult respiratory rate is about 12-16 breaths per minute - this equates to about one breath per 3.75-6 seconds.
A study I found, which looked at gas distribution in the lung during inspiration and breath-hold, found that in healthy volunteers the lungs reached an approximately even distribution of gas within about 1.7 seconds1 (see fig 3.2, also reproduced below), and that this rate was affected more by posture and gravity effects on the lungs than by breathing rate. This means that even if you do slow your breathing rate and use your diaphragm more, you aren't getting more oxygen than if you breathe normally.
Reproduced from reference 1.
There are some autonomic responses to breath control, such as lowering heart-rate, but I don't know that these have any scientifically validated benefits beyond this.
1: J.M. Wild, F.C. Horn, G.J. Collier, H. Marshall, "Chapter 3 - Dynamic Imaging of Lung Ventilation and Gas Flow With Hyperpolarized Gas MRI", in: Mitchell S. Albert, Francis T. Hane (Eds.), Hyperpolarized and Inert Gas MRI, Academic Press, 2017, pp. 47-59, ISBN 9780128036754, https://doi.org/10.1016/B978-0-12-803675-4.00003-8. | {
"domain": "biology.stackexchange",
"id": 12159,
"tags": "breathing"
} |
Simple Employee Records Program | Question: I am a beginner Java programmer. I have just finished up an assignment and would appreciate some advice and/or criticism on my program. For context, I have the assignment details, followed by the full code below:
Create an Employee Records program that incorporates the following properties:
employeeIdNumber;
firstName;
lastName;
annualSalary;
startDate;
The following three custom methods are to be coded and should execute when a user presses the corresponding button.
List Button
This function should list all of the data currently stored in the lists.
Add Button
A user is to fill in all fields (ID, first name, last name, salary, and start date) and then press the Add button to add data to the array.
Give the user an error message if they have missed a field
The user is to then press the List button to verify that the record was added.
Remove Button
A user is to fill in the ID field for an employee that they wish to remove and then press the Remove button to delete the employee's data from the list.
The user could then press List to verify that the record was removed.
public class Employee implements ActionListener {
public static String div = "------------------------------------------";
public static ArrayList<Integer> ids, salary;
public static ArrayList<String> firstNames, lastNames, startDates;
public static JTextArea display;
public static JButton[] buttons = new JButton[3];
public static JLabel[] subTitles = new JLabel[5];
public static JTextField[] inputs = new JTextField[5];
public static void main(String[] args) {
// Defining all array lists
ids = new ArrayList();
salary = new ArrayList();
startDates = new ArrayList();
firstNames = new ArrayList();
lastNames = new ArrayList();
// Fonts
Font titleFont = new Font("Courier New", 1, 24);
Font subFont = new Font("Courier New", 1, 16);
// Frame
JFrame frame = new JFrame("Employee Records");
frame.setSize(550, 450);
frame.setLocationRelativeTo(null);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setResizable(false);
// Container
JPanel container = new JPanel();
container.setLayout(null);
frame.setContentPane(container);
// Title
JLabel title = new JLabel("Employee Records");
title.setFont(titleFont);
title.setForeground(Color.blue);
title.setBounds(160, 10, 250, 24);
// Labels and text fields
for (int i = 0; i < subTitles.length; i++) {
subTitles[i] = new JLabel();
subTitles[i].setFont(subFont);
subTitles[i].setBounds(5, 50 + (i * 35), 190, 16);
}
subTitles[0].setText("Employee ID#: ");
subTitles[1].setText("First Name: ");
subTitles[2].setText("Last Name: ");
subTitles[3].setText("Annual Salary: ");
subTitles[4].setText("Start Date: ");
for (int i = 0; i < subTitles.length; i++) {
inputs[i] = new JTextField();
inputs[i].setBounds(160, 47 + (35 * i), 150, 22);
}
// Buttons
for (int i = 0; i < buttons.length; i++) {
buttons[i] = new JButton();
buttons[i].addActionListener(new Employee());
buttons[i].setBounds(330, 47 + (35 * i), 200, 20);
}
buttons[0].setText("Add (REQUIRES ALL FIELDS)");
buttons[1].setText("Remove (by ID#)");
buttons[2].setText("List");
// Text area
display = new JTextArea();
display.setEditable(false);
JScrollPane scrollPane = new JScrollPane(display);
scrollPane.setBounds(5, 217, 535, 200);
// Adding everything
container.add(title);
container.add(scrollPane);
// Since # of textfields will always equal # of subtitles, we can use the
// max value of subtitles for the loop
for (int i = 0; i < subTitles.length; i++) {
container.add(subTitles[i]);
container.add(inputs[i]);
}
for (int i = 0; i < buttons.length; i++) {
container.add(buttons[i]);
}
// Extras
frame.toFront();
frame.setVisible(true);
}
public void actionPerformed(ActionEvent event) {
if (event.getSource().equals(buttons[0])) {
// Pass boolean to check if the program should continue or not
boolean pass = true;
// Loop to check if all textfields have data
for (int i = 0; i < inputs.length; i++) {
if (inputs[i].getText().equals("")) {
display.setText("Error: enter data for ALL fields.");
pass = false;
}
}
// If the user passed, the program continues
if (pass == true) {
// Checking if ID# already exists
if (ids.contains(Integer.parseInt(inputs[0].getText()))) {
// Displaying error message if entered ID# exists
display.setText("Error: employee ID# exists, use another.");
// If not, it adds all the data
} else {
// Adding all the info to the arrays
ids.add(Integer.parseInt(inputs[0].getText()));
firstNames.add(inputs[1].getText());
lastNames.add(inputs[2].getText());
salary.add(Integer.parseInt(inputs[3].getText()));
startDates.add(inputs[4].getText());
display.setText("Employee #" + inputs[0].getText() + " added to record(s).");
// Loop to set all textfields to empty
for (int i = 0; i < inputs.length; i++) {
inputs[i].setText(null);
}
}
}
} else if (event.getSource().equals(buttons[1])) {
// Loop to search list for requested removal
for (int i = ids.size() - 1; i >= 0; i--) {
// If the request is found, it removes all data
if (Integer.parseInt(inputs[0].getText()) == ids.get(i)) {
display.setText("Employee #" + ids.get(i) + " has been removed from the records.");
ids.remove(i);
firstNames.remove(i);
lastNames.remove(i);
salary.remove(i);
startDates.remove(i);
break;
// If not, the ID# does not exist
} else {
display.setText("Error: employee ID# does not exist, try again.");
}
}
} else {
// Resets text area and lists all the data
display.setText(null);
for (int i = 0; i < ids.size(); i++) {
display.append(div + "\nEmployee ID#: " + ids.get(i) + "\nFirst Name: " + firstNames.get(i)
+ "\nLast Name: " + lastNames.get(i) + "\nAnnual Salary: $" + salary.get(i)
+ "\nStart Date: " + startDates.get(i) + "\n");
}
}
}
}
Answer: I'd separate the logic from the UI. In this case, I'd rather have a class Employee with all attributes, and this class has methods, and the UI (swing) invokes the actions on this class.
import java.util.Date;
public class Employee {
private int employeeIdNumber;
private String firstName;
private String lastName;
private int annualSalary;
private Date startDate;
public Employee(int id, String firstName, String lastName, int salary, Date startDate) {
this.employeeIdNumber = id;
this.firstName = firstName;
this.lastName = lastName;
this.annualSalary = salary;
this.startDate = startDate;
}
public int getId() {
return employeeIdNumber;
}
public String getFirstName() {
return firstName;
}
public int getSalary() {
return annualSalary;
}
public String getLastName() {
return lastName;
}
public Date getStartDate() {
return startDate;
}
}
...and the UI:
import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import javax.swing.*;
public class EmployeeUI implements ActionListener {
public static String div = "------------------------------------------";
public static List<Employee> employees = new ArrayList<Employee>();
public static JTextArea display;
public static JButton[] buttons = new JButton[3];
public static JLabel[] subTitles = new JLabel[5];
public static JTextField[] inputs = new JTextField[5];
public static void main(String[] args) {
// Fonts
Font titleFont = new Font("Courier New", 1, 24);
Font subFont = new Font("Courier New", 1, 16);
// Frame
JFrame frame = new JFrame("Employee Records");
frame.setSize(550, 450);
frame.setLocationRelativeTo(null);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setResizable(false);
// Container
JPanel container = new JPanel();
container.setLayout(null);
frame.setContentPane(container);
// Title
JLabel title = new JLabel("Employee Records");
title.setFont(titleFont);
title.setForeground(Color.blue);
title.setBounds(160, 10, 250, 24);
// Labels and text fields
for (int i = 0; i < subTitles.length; i++) {
subTitles[i] = new JLabel();
subTitles[i].setFont(subFont);
subTitles[i].setBounds(5, 50 + (i * 35), 190, 16);
}
subTitles[0].setText("Employee ID#: ");
subTitles[1].setText("First Name: ");
subTitles[2].setText("Last Name: ");
subTitles[3].setText("Annual Salary: ");
subTitles[4].setText("Start Date: ");
for (int i = 0; i < subTitles.length; i++) {
inputs[i] = new JTextField();
inputs[i].setBounds(160, 47 + (35 * i), 150, 22);
}
// Buttons
for (int i = 0; i < buttons.length; i++) {
buttons[i] = new JButton();
buttons[i].addActionListener(new EmployeeUI());
buttons[i].setBounds(330, 47 + (35 * i), 200, 20);
}
buttons[0].setText("Add (REQUIRES ALL FIELDS)");
buttons[1].setText("Remove (by ID#)");
buttons[2].setText("List");
// Text area
display = new JTextArea();
display.setEditable(false);
JScrollPane scrollPane = new JScrollPane(display);
scrollPane.setBounds(5, 217, 535, 200);
// Adding everything
container.add(title);
container.add(scrollPane);
// Since # of textfields will always equal # of subtitles, we can use the
// max value of subtitles for the loop
for (int i = 0; i < subTitles.length; i++) {
container.add(subTitles[i]);
container.add(inputs[i]);
}
for (int i = 0; i < buttons.length; i++) {
container.add(buttons[i]);
}
// Extras
frame.toFront();
frame.setVisible(true);
}
public void actionPerformed(ActionEvent event) {
if (event.getSource().equals(buttons[0])) {
// Pass boolean to check if the program should continue or not
boolean pass = true;
// Loop to check if all textfields have data
for (int i = 0; i < inputs.length; i++) {
if (inputs[i].getText().equals("")) {
display.setText("Error: enter data for ALL fields.");
pass = false;
}
}
// If the user passed, the program continues
if (pass == true) {
// Checking if ID# already exists
if (employees.stream().anyMatch(e -> e.getId() == Integer.parseInt(inputs[0].getText()))) {
// Displaying error message if entered ID# exists
display.setText("Error: employee ID# exists, use another.");
// If not, it adds all the data
} else {
// Adding all the info to the array
employees.add(new Employee(Integer.parseInt(inputs[0].getText()),//id
inputs[1].getText(), //firstname
inputs[2].getText(), //last name
Integer.parseInt(inputs[3].getText()), //salary
new Date(inputs[4].getText()) //startDate
));
display.setText("Employee #" + inputs[0].getText() + " added to record(s).");
// Loop to set all textfields to empty
for (int i = 0; i < inputs.length; i++) {
inputs[i].setText(null);
}
}
}
} else if (event.getSource().equals(buttons[1])) {
// Loop to search list for requested removal
for (int i = employees.size() - 1; i >= 0; i--) {
// If the request is found, it removes all data
if (Integer.parseInt(inputs[0].getText()) == employees.get(i).getId()) {
display.setText("Employee #" + employees.get(i).getId() + " has been removed from the records.");
employees.remove(i);
break;
// If not, the ID# does not exist
} else {
display.setText("Error: employee ID# does not exist, try again.");
}
}
} else {
// Resets text area and lists all the data
display.setText(null);
for (int i = 0; i < employees.size(); i++) {
display.append(div + "\nEmployee ID#: " + employees.get(i).getId() + "\nFirst Name: " + employees.get(i).getFirstName()
+ "\nLast Name: " + employees.get(i).getLastName() + "\nAnnual Salary: $" + employees.get(i).getSalary()
+ "\nStart Date: " + employees.get(i).getStartDate() + "\n");
}
}
}
} | {
"domain": "codereview.stackexchange",
"id": 22632,
"tags": "java, beginner, array, gui"
} |
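One caveat with the reviewer's List<Employee> version above: List.contains and indexOf compare elements via Employee.equals, so lookups by ID only behave as intended if Employee defines its identity by ID. A minimal sketch of that idea (the class is trimmed down here for illustration; it is not the reviewer's exact code):

```java
import java.util.ArrayList;
import java.util.List;

// Trimmed-down Employee: identity is defined by the ID alone.
class Employee {
    private final int id;
    private final String firstName;

    Employee(int id, String firstName) {
        this.id = id;
        this.firstName = firstName;
    }

    int getId() { return id; }

    @Override
    public boolean equals(Object o) {
        // Two employees are "the same" exactly when their IDs match.
        return o instanceof Employee && ((Employee) o).id == id;
    }

    @Override
    public int hashCode() { return Integer.hashCode(id); }
}

public class EmployeeIdentityDemo {
    public static void main(String[] args) {
        List<Employee> employees = new ArrayList<>();
        employees.add(new Employee(7, "Ann"));
        // contains() now matches by ID, regardless of the other fields:
        System.out.println(employees.contains(new Employee(7, "Bob")));  // true
        System.out.println(employees.contains(new Employee(8, "Ann")));  // false
    }
}
```

With equals/hashCode defined this way, the duplicate-ID check and removal-by-ID can both be written directly against the list, without parsing the ID out separately.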
Does the CPT theorem imply $CP=T$? | Question: Does the CPT theorem imply $CP=T$?
That is, does it imply that the action of Charge Conjugation and Parity inversion on some representation of the Lorentz group, is the same as doing a time reversal?
Specifically, given explicit expressions for $P$ and $C$ (in terms of matrices and complex conjugation) in some basis, how does $CP$ relate to the expression for $T$ and $T^{-1}$
Answer: The assertion of the CPT theorem is that, under natural hypotheses, the Hamiltonian $H$ operator of a theory is invariant under the simultaneous action of the symmetries (in Wigner's sense i.e. unitary/antiunitary operators) C, P, and T.
$$CPT H (CPT)^{-1} = H\:.\tag{1}$$
This action can also be implemented by a direct action on the quantum fields the Hamiltonian is made of. However, the fact that the Hamiltonian is CPT invariant does not imply a precise relation between CP and T, since their combination is equivalent to the identity when they act on the Hamiltonian, not in general.
In particular, $CP=T$ or $CP=T^{-1}$ does not make sense (even allowing for arbitrary phases), since the left-hand side is linear and the right-hand side is antilinear, when viewing them as operators on the Hilbert space as in Eq. (1).
However, from the above reasoning it is evident that
the action of T on the Hamiltonian is the same as the combined action of CP on the Hamiltonian. | {
"domain": "physics.stackexchange",
"id": 86984,
"tags": "special-relativity, operators, spin-statistics, cp-violation, cpt-symmetry"
} |
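The linear-versus-antilinear mismatch in the answer can be stated in one line: for any state $\psi$ and complex number $\alpha$,

```latex
% CP is unitary (complex-linear) while T is antiunitary (antilinear):
CP\,(\alpha\psi) = \alpha\, CP\,\psi ,
\qquad
T\,(\alpha\psi) = \alpha^{*}\, T\,\psi ,
```

so no choice of overall phase can turn the linear operator $CP$ into the antilinear operator $T$; only their combined adjoint action on $H$, as in Eq. (1), is constrained by the theorem.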
PySpark v Pandas Dataframe Memory Issue | Question: Suppose I have a csv file with 20k rows, which I import into a Pandas DataFrame. I then run models like Random Forest or Logistic Regression from the sklearn package and it runs fine. However, when I import the data into a PySpark DataFrame and run the same models (Random Forest or Logistic Regression) from the PySpark packages, I get a memory error and I have to reduce the size of the csv down to say 3-4k rows. Why does this happen? Is this a conceptual problem or am I coding it wrong somewhere?
For Pandas dataframe, my sample code is something like this:
df=pd.read_csv("xx.csv")
features=TfIdf().fit(df['text'])
....
RandomForest.fit(features,labels)
And for PySpark, I'm first reading the file like this:
data = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true')\
.load('xx.csv')
data.show()
from pyspark.ml.feature import RegexTokenizer, StopWordsRemover, CountVectorizer
from pyspark.ml.classification import LogisticRegression
# regular expression tokenizer
regexTokenizer = RegexTokenizer(inputCol="converted_text", outputCol="words", pattern="\\W")
# stop words
add_stopwords = ["http","https","amp","rt","t","c","the"]
stopwordsRemover = StopWordsRemover(inputCol="words", outputCol="filtered").setStopWords(add_stopwords)
# bag of words count
countVectors = CountVectorizer(inputCol="filtered", outputCol="features", vocabSize=10000, minDF=5)
from pyspark.ml import Pipeline
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler
label_stringIdx = StringIndexer(inputCol = "Complaint-Status", outputCol = "label")
pipeline = Pipeline(stages=[regexTokenizer, stopwordsRemover, countVectors, label_stringIdx])
# Fit the pipeline to training documents.
pipelineFit = pipeline.fit(data)
dataset = pipelineFit.transform(data)
dataset.show(5)
(trainingData, testData) = dataset.randomSplit([0.7, 0.3], seed = 100)
print("Training Dataset Count: " + str(trainingData.count()))
print("Test Dataset Count: " + str(testData.count()))
from pyspark.ml.classification import RandomForestClassifier
rf = RandomForestClassifier(labelCol="label", \
featuresCol="features", \
numTrees = 100, \
maxDepth = 4, \
maxBins = 32)
# Train model with Training Data
rfModel = rf.fit(trainingData)
predictions = rfModel.transform(testData)
predictions.filter(predictions['prediction'] == 0) \
.select("converted_text","Complaint-Status","probability","label","prediction") \
.orderBy("probability", ascending=False) \
.show(n = 10, truncate = 30)
I was trying for lightgbm, only changing the .fit() part:
from mmlspark import LightGBMClassifier
lgb = LightGBMClassifier(learningRate=0.3,
numIterations=100,
numLeaves=31)
lgb_model=lgb.fit(trainingData)
predictions = lgb_model.transform(testData)
predictions.filter(predictions['prediction'] == 0) \
.select("converted_text","Complaint-Status","probability","label","prediction") \
.orderBy("probability", ascending=False) \
.show(n = 10, truncate = 30)
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)
And the dataset has hardly 5k rows inside the csv files. Why is it happening? How can I solve it?
Answer: While I can't tell you exactly why Spark is so slow, it does come with overheads: it only makes sense to use Spark when you have a big cluster (20+ nodes) and data that does not fit into the RAM of a single PC. Unless you genuinely need distributed processing, those overheads cause exactly this kind of problem. For example, your program first has to copy all the data into Spark, so it will need at least twice as much memory - probably even three copies: your original data, the pyspark copy, and then the Spark copy in the JVM. In the worst case, the data is transformed into a dense format when doing so, at which point you may easily waste 100x as much memory because of storing all the zeros.
Use an appropriate - smaller - vocabulary.
There is no use in including every single word, as most of them will never score well in the decision trees anyway!
It's safe to assume that you can omit both very frequent (stop-) words, as well as rare words (using them would be overfitting anyway!). So use min_df=10 and max_df=1000 or so. | {
"domain": "datascience.stackexchange",
"id": 4805,
"tags": "machine-learning, pandas, pyspark"
} |
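The min_df/max_df advice above can be sketched without any framework. The function name and thresholds here are illustrative, not part of sklearn or Spark (in Spark's CountVectorizer the corresponding knob is minDF):

```python
from collections import Counter

def prune_vocabulary(docs, min_df=2, max_df_ratio=0.8):
    """Keep terms appearing in at least min_df documents and in at most
    max_df_ratio of all documents (drops rare and stop-like words)."""
    n_docs = len(docs)
    doc_freq = Counter()
    for doc in docs:
        doc_freq.update(set(doc.lower().split()))  # count each term once per doc
    return sorted(term for term, df in doc_freq.items()
                  if df >= min_df and df / n_docs <= max_df_ratio)

docs = ["the cat sat", "the dog sat", "the bird flew"]
print(prune_vocabulary(docs))  # ['sat'] - 'the' is too common, the rest too rare
```

A pruned vocabulary like this keeps the feature vectors small and sparse, which is exactly what avoids the dense-format memory blow-up described above.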
How large was Mercury before it shrunk? | Question: There's a theory that the reason Mercury has such an enormous iron core is that it was once a much larger planet before it got impacted by an object, resulting in most of its mass getting blasted away. If this theory is true, how large was Mercury before the impact? Could it have been the size of Earth or Mars, or maybe even a super-earth?
Answer: Let's look at the planets' cores; I'm going to ignore liquid vs solid and focus on size overall.
Mars: Core estimated at 1,794 +/- 65 km radius. The planet is 3,390 km in radius, so about 53% of the planet's radius is its core. Mars also has more sulfur in its core and more iron in its mantle than Earth, suggesting that it probably didn't mix as well as Earth did, but I'm not sure that would significantly affect the size of its core.
Venus: Core estimates have some uncertainty, but by this article, its core is thought to be about 3,000 km, about 49.5% of its 6,052 km radius.
Earth: Its core is about 3,400 km and its radius 6,371 km, so the core is about 53.4% of the total radius. Using that 49.5%-53.4% range as a guideline for Mercury's 2,440 km radius (85% core, so its core is about 2,074 km), a rough estimate based on the other three planets gives a former radius of 3,880 - 4,190 km. That puts it 500-800 km larger than Mars in radius, and roughly 2,000 km smaller than Venus.
That's obviously just a rough estimate, but assuming Mercury was similar to the other three inner planets, it was probably in that range. I should probably also account for compression: the more massive the planet, the greater the compression of its core. But that would probably only shift the estimate by a couple of percentage points, so it wouldn't make a big difference.
"domain": "astronomy.stackexchange",
"id": 2520,
"tags": "planet, mercury"
} |
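The arithmetic in the answer is easy to reproduce (radii in km; the 85% core fraction for Mercury is the figure quoted in the answer itself):

```python
mercury_radius = 2440
core_radius = 0.85 * mercury_radius                # about 2074 km
core_fraction_lo, core_fraction_hi = 0.495, 0.534  # Venus-like and Earth-like

# If Mercury once had a Venus- to Earth-like core fraction, its old radius was:
r_lo = core_radius / core_fraction_hi   # Earth-like fraction -> smaller planet
r_hi = core_radius / core_fraction_lo   # Venus-like fraction -> larger planet
print(round(r_lo), round(r_hi))  # 3884 4190
```

This reproduces the 3,880 - 4,190 km range quoted above.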
Where can I find the $\mu$ value in galaxy clusters for ideal gas law? | Question: I am studying hydrostatic equilibrium in galaxy clusters and encountered the following expression:
$P=(kT/\mu m_p)\rho$
The interpretation of this formula is obvious. It is just the ideal gas law expressed slightly differently. It is apparent that $\mu m_p$ here is the average mass of molecules in galaxy clusters, where $m_p$ is the mass of proton. However, the problem is that I do not know what $\mu$ is called or where to look to find the value of $\mu$.
Could somebody help me?
Answer: tl;dr:
For you, $\mu\simeq0.6$.
Mean molecular mass
The term you're looking for is the mean molecular mass $\mu$, more often called the mean molecular weight, despite not being a weight, despite usually not involving molecules, and despite being a unitless number, not a mass.
It is the average mass per particle, measured in terms of the proton mass (or sometimes the hydrogen mass which is almost the same, and sometimes the atomic mass unit $m_\mathrm{u} \equiv m_{^{12}\mathrm{C}}/12$), and it includes any free electrons.
That is, it is defined as
$$
\mu = \frac{\langle m \rangle}{m_p},
$$
Neutral gas
For a neutral gas (i.e. no free electrons) of mass fractions $X$, $Y$, and $Z$ of hydrogen, helium, and heavier elements (called "metals" in astronomy), you have
$$
\frac{1}{\mu_\mathrm{neut.}} = X + \frac{Y}{4} + \langle 1/A_n \rangle Z,
$$
because helium weighs four times as much as hydrogen, and where $\langle 1/A_n \rangle$ is the weighted average of all metals. For Solar abundances $\{X,Y,Z\}\simeq\{0.70,0.28,0.02\}$, and $\langle 1/A_n \rangle \simeq 1/15.5$ (e.g. Carroll & Ostlie 1996), so
$$
\mu_{\mathrm{neut.,\odot}} = \frac{1}{0.70+0.28/4+0.02/15.5} \simeq 1.30.
$$
For a primordial (i.e. metal-free) gas, $\{X,Y,Z\}\simeq\{0.75,0.25,0\}$ so
$$
\mu_{\mathrm{neut.,prim.}} = \frac{1}{0.75+0.25/4} \simeq 1.23.
$$
Ionized gas
For an ionized gas you have roughly twice the amount of particles, but the mass of half of them can be neglected, so $\mu$ is roughly half of the above:
$$
\frac{1}{\mu_{\mathrm{ion.}}} \simeq 2X + \frac{3Y}{4} + \frac{Z}{2}.
$$
Thus, a fully ionized primordial gas has
$$
\mu_{\mathrm{ion.,prim.}} = \frac{1}{2\times0.75 + 3\times0.25/4} \simeq 0.59.
$$
while a fully ionized metal-rich gas has
$$
\mu_{\mathrm{ion.,\odot}} = \frac{1}{2\times0.70 + 3\times0.28/4 + 0.02/2} \simeq 0.62.
$$
The answer for you
Since you're dealing with galaxy clusters, the bulk of the gas is ionized. As you can see above, the exact value of $\mu$ depends on the metallicity, but as an astronomer you will most likely not offend anyone by just using $\mu=0.6$.
Or even $\mu\sim1$. I mean if we can't even make a proper definition, why bother about anything more precise than an order-of-magnitude estimate… | {
"domain": "astronomy.stackexchange",
"id": 7233,
"tags": "galaxy, cosmology, galaxy-cluster, nucleosynthesis, hydrostatic-equilibrium"
} |
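The formulas above translate directly into code (the function names are mine; the abundances are those used in the answer):

```python
def mu_neutral(X, Y, Z, inv_A_mean=1 / 15.5):
    # 1/mu = X + Y/4 + <1/A_n> Z   for a neutral H/He/metals mix
    return 1.0 / (X + Y / 4 + inv_A_mean * Z)

def mu_ionized(X, Y, Z):
    # 1/mu ~ 2X + 3Y/4 + Z/2       for a fully ionized gas
    return 1.0 / (2 * X + 3 * Y / 4 + Z / 2)

print(round(mu_neutral(0.70, 0.28, 0.02), 2))  # 1.30  (neutral, solar)
print(round(mu_neutral(0.75, 0.25, 0.00), 2))  # 1.23  (neutral, primordial)
print(round(mu_ionized(0.75, 0.25, 0.00), 2))  # 0.59  (ionized, primordial)
print(round(mu_ionized(0.70, 0.28, 0.02), 2))  # 0.62  (ionized, solar)
```

All four values match the ones derived in the answer, including the $\mu\simeq0.6$ recommended for galaxy-cluster work.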
Automatic EqualityComparer tests | Question: Writing tests is sometimes a really boring task especially if you need to write the same test for the n-th time like when you are testing another custom EqualityComparer<T>.
I thought that if I wrote another such test I'd go crazy ;-) so I tried to automate it at least a little bit and wrote a base class for such tests. In particular it tests the GetHashCode and Equals methods, with the possibility to disable the hash-code part, because I sometimes just use a 0 when I'm not interested in it or it's not possible to calculate it.
public abstract class EqualityComparerTest<T>
{
private readonly IEqualityComparer<T> _comparer;
protected EqualityComparerTest(IEqualityComparer<T> comparer)
{
_comparer = comparer;
}
protected bool IgnoreHashCode { get; set; }
[TestMethod]
public void GetHashCode_SameElements_SameHashCodes()
{
if (IgnoreHashCode)
{
return;
}
foreach (var x in GetEqualElements())
{
Assert.AreEqual(_comparer.GetHashCode(x.Left), _comparer.GetHashCode(x.Right), $"{x.Left} == {x.Right}");
}
}
[TestMethod]
public void GetHashCode_DifferentElements_DifferentHashCodes()
{
if (IgnoreHashCode)
{
return;
}
foreach (var x in GetNonEqualElements())
{
Assert.AreNotEqual(_comparer.GetHashCode(x.Left), _comparer.GetHashCode(x.Right), $"{x.Left} != {x.Right}");
}
}
[TestMethod]
public void Equals_SameElements_True()
{
foreach (var x in GetEqualElements())
{
Assert.IsTrue(_comparer.Equals(x.Left, x.Right), $"{x.Left} == {x.Right}");
Assert.IsTrue(_comparer.Equals(x.Left, x.Left), $"{x.Left} == {x.Right}");
Assert.IsTrue(_comparer.Equals(x.Right, x.Right), $"{x.Left} == {x.Right}");
}
}
[TestMethod]
public void Equals_DifferentElements_False()
{
foreach (var x in GetNonEqualElements())
{
Assert.IsFalse(_comparer.Equals(x.Left, x.Right), $"{x.Left} != {x.Right}");
}
}
protected abstract IEnumerable<(T Left, T Right)> GetEqualElements();
protected abstract IEnumerable<(T Left, T Right)> GetNonEqualElements();
}
The data for the tests comes from two abstract methods that the actual test needs to provide.
Here I'm testing an ImmutableNameSet that implements the IImmutableSet<string> interface; internally the set uses StringComparer.OrdinalIgnoreCase, and two sets are considered equal if they overlap.
I added the comparer I test just for reference (the question is about the test-base class and the implementation - the comparer works as required).
private sealed class ImmutableNameSetEqualityComparer : IEqualityComparer<IImmutableSet<string>>
{
public bool Equals(IImmutableSet<string> x, IImmutableSet<string> y)
{
if (ReferenceEquals(x, null)) return false;
if (ReferenceEquals(y, null)) return false;
return ReferenceEquals(x, y) || x.Overlaps(y) || (!x.Any() && !y.Any());
}
// The hash codes are always different, thus this comparer. We need to check whether the sets overlap, so we cannot rely on the hash code.
public int GetHashCode(IImmutableSet<string> obj) => 0;
}
}
And this is the actual test for it:
[TestClass]
public class ImmutableNameSetEqualityComparerTest : EqualityComparerTest<IImmutableSet<string>>
{
public ImmutableNameSetEqualityComparerTest() : base(ImmutableNameSet.Comparer)
{
IgnoreHashCode = true;
}
protected override IEnumerable<(IImmutableSet<string> Left, IImmutableSet<string> Right)> GetEqualElements()
{
yield return (ImmutableNameSet.Create("foo"), ImmutableNameSet.Create("FOO"));
yield return (ImmutableNameSet.Create("foo"), ImmutableNameSet.Create("FOO", "bar"));
yield return (ImmutableNameSet.Create("foo"), ImmutableNameSet.Create("foo"));
}
protected override IEnumerable<(IImmutableSet<string> Left, IImmutableSet<string> Right)> GetNonEqualElements()
{
yield return (ImmutableNameSet.Create("foo"), ImmutableNameSet.Create("bar"));
yield return (ImmutableNameSet.Create("baz"), ImmutableNameSet.Create("foo"));
yield return (null, ImmutableNameSet.Create("foo"));
yield return (null, null);
}
}
Answer:
public bool Equals(IImmutableSet<string> x, IImmutableSet<string> y)
{
if (ReferenceEquals(x, null)) return false;
if (ReferenceEquals(y, null)) return false;
return ReferenceEquals(x, y) || x.Overlaps(y) || (!x.Any() && !y.Any());
}
Well, with this implementation, Equals(null, null) is going to return false. I don't think that's the best option.
The intended meaning of Equals is "is identical to", and without a doubt, null is identical to null, because null is the same value as null. You can give Equals whatever behavior you want, but if you give it behavior that contradicts its intended meaning, people will be surprised and confused.
Furthermore, if you pass this instance of IEqualityComparer into some code which expects Equals(x, x) to always be true, then that code may malfunction. For example, if you use it to create a dictionary which allows null as a key, then that dictionary will allow inserting multiple key-value pairs with null keys, and will fail to ever retrieve any of them.
You write in a comment:
I consider anything that is not a valid element and not equal as un-equal, thus I consider two nulls not equal. It wouldn't make any sense to have two null names and consider their equality as true. With this assumption, or rather adjustment, I don't have to filter null values.
Well, there's no such thing as "two nulls". There is only one null reference, so if two variables both contain a null reference, then those two variables contain the same value. I'm not sure what you mean by "I don't have to filter null values"; when would you have to filter null values if Equals(null, null) returned true? | {
"domain": "codereview.stackexchange",
"id": 26366,
"tags": "c#, unit-testing, generics"
} |
Slicing multi index DataFrame into JSON object | Question: I have a MultiIndex pd.DataFrame that I generated from a .txt that is forecast model data.
Edit:
per request I've included a small data sample to generate the DataFrame
The dict below can be used to generate a more concise version of my data DataFrame
data_dict:
{0: {('1000mb', 'gph'): 166.88, ('1000mb', 'temp'): 283.88, ('1000mb', 'dewpt'): 280.18, ('1000mb', 'dir'): 300.86, ('1000mb', 'speed'): 6.0, ('975mb', 'gph'): 377.88, ('975mb', 'temp'): 282.95, ('975mb', 'dewpt'): 278.56, ('975mb', 'dir'): 313.81, ('975mb', 'speed'): 13.0, ('950mb', 'gph'): 592.7, ('950mb', 'temp'): 280.97, ('950mb', 'dewpt'): 277.65, ('950mb', 'dir'): 319.71, ('950mb', 'speed'): 14.0, ('925mb', 'gph'): 811.98, ('925mb', 'temp'): 279.11, ('925mb', 'dewpt'): 276.72, ('925mb', 'dir'): 315.06, ('925mb', 'speed'): 13.0, ('900mb', 'gph'): 1035.98, ('900mb', 'temp'): 278.04, ('900mb', 'dewpt'): 276.56, ('900mb', 'dir'): 301.76, ('900mb', 'speed'): 10.0, ('875mb', 'gph'): 1266.98, ('875mb', 'temp'): 279.16, ('875mb', 'dewpt'): 277.68, ('875mb', 'dir'): 296.34, ('875mb', 'speed'): 8.0, ('850mb', 'gph'): 1503.98, ('850mb', 'temp'): 278.64, ('850mb', 'dewpt'): 276.81, ('850mb', 'dir'): 298.57, ('850mb', 'speed'): 9.0, ('825mb', 'gph'): 1747.98, ('825mb', 'temp'): 277.25, ('825mb', 'dewpt'): 275.69, ('825mb', 'dir'): 295.48, ('825mb', 'speed'): 11.0, ('800mb', 'gph'): 1998.26, ('800mb', 'temp'): 277.12, ('800mb', 'dewpt'): 273.89, ('800mb', 'dir'): 297.67, ('800mb', 'speed'): 13.0, ('775mb', 'gph'): 2256.98, ('775mb', 'temp'): 277.94, ('775mb', 'dewpt'): 272.48, ('775mb', 'dir'): 302.27, ('775mb', 'speed'): 15.0, ('750mb', 'gph'): 2523.86, ('750mb', 'temp'): 277.0, ('750mb', 'dewpt'): 270.96, ('750mb', 'dir'): 303.6, ('750mb', 'speed'): 16.0, ('725mb', 'gph'): 2798.8, ('725mb', 'temp'): 275.64, ('725mb', 'dewpt'): 269.21, ('725mb', 'dir'): 301.65, ('725mb', 'speed'): 19.0, ('700mb', 'gph'): 3081.8, ('700mb', 'temp'): 273.87, ('700mb', 'dewpt'): 266.74, ('700mb', 'dir'): 301.08, ('700mb', 'speed'): 23.0, ('675mb', 'gph'): 3371.96, ('675mb', 'temp'): 272.21, ('675mb', 'dewpt'): 263.85, ('675mb', 'dir'): 301.7, ('675mb', 'speed'): 25.0, ('650mb', 'gph'): 3673.08, ('650mb', 'temp'): 270.48, ('650mb', 'dewpt'): 260.75, ('650mb', 'dir'): 302.23, ('650mb', 
'speed'): 28.0, ('625mb', 'gph'): 3982.04, ('625mb', 'temp'): 268.73, ('625mb', 'dewpt'): 257.99, ('625mb', 'dir'): 299.64, ('625mb', 'speed'): 29.0, ('600mb', 'gph'): 4303.62, ('600mb', 'temp'): 266.9, ('600mb', 'dewpt'): 255.05, ('600mb', 'dir'): 297.19, ('600mb', 'speed'): 30.0, ('575mb', 'gph'): 4633.92, ('575mb', 'temp'): 264.93, ('575mb', 'dewpt'): 251.89, ('575mb', 'dir'): 295.12, ('575mb', 'speed'): 31.0, ('550mb', 'gph'): 4978.9, ('550mb', 'temp'): 262.88, ('550mb', 'dewpt'): 248.45, ('550mb', 'dir'): 293.07, ('550mb', 'speed'): 32.0, ('525mb', 'gph'): 5333.54, ('525mb', 'temp'): 260.29, ('525mb', 'dewpt'): 245.08, ('525mb', 'dir'): 292.02, ('525mb', 'speed'): 33.0, ('500mb', 'gph'): 5705.5, ('500mb', 'temp'): 257.56, ('500mb', 'dewpt'): 241.47, ('500mb', 'dir'): 291.0, ('500mb', 'speed'): 35.0, ('475mb', 'gph'): 6087.4, ('475mb', 'temp'): 254.42, ('475mb', 'dewpt'): 241.58, ('475mb', 'dir'): 289.75, ('475mb', 'speed'): 34.0, ('450mb', 'gph'): 6489.96, ('450mb', 'temp'): 251.1, ('450mb', 'dewpt'): 240.94, ('450mb', 'dir'): 288.38, ('450mb', 'speed'): 33.0, ('425mb', 'gph'): 6904.36, ('425mb', 'temp'): 247.61, ('425mb', 'dewpt'): 237.23, ('425mb', 'dir'): 288.55, ('425mb', 'speed'): 34.0, ('400mb', 'gph'): 7343.9, ('400mb', 'temp'): 243.89, ('400mb', 'dewpt'): 233.3, ('400mb', 'dir'): 288.71, ('400mb', 'speed'): 36.0, ('375mb', 'gph'): 7798.15, ('375mb', 'temp'): 240.77, ('375mb', 'dewpt'): 226.58, ('375mb', 'dir'): 290.47, ('375mb', 'speed'): 43.0, ('350mb', 'gph'): 8283.76, ('350mb', 'temp'): 237.42, ('350mb', 'dewpt'): 216.64, ('350mb', 'dir'): 291.77, ('350mb', 'speed'): 52.0, ('325mb', 'gph'): 8790.03, ('325mb', 'temp'): 233.47, ('325mb', 'dewpt'): 217.34, ('325mb', 'dir'): 291.04, ('325mb', 'speed'): 57.0, ('300mb', 'gph'): 9336.86, ('300mb', 'temp'): 229.21, ('300mb', 'dewpt'): 216.5, ('300mb', 'dir'): 290.39, ('300mb', 'speed'): 62.0, ('275mb', 'gph'): 9909.55, ('275mb', 'temp'): 225.18, ('275mb', 'dewpt'): 214.54, ('275mb', 'dir'): 289.57, 
('275mb', 'speed'): 62.0, ('250mb', 'gph'): 10536.86, ('250mb', 'temp'): 220.76, ('250mb', 'dewpt'): 211.98, ('250mb', 'dir'): 288.67, ('250mb', 'speed'): 62.0, ('225mb', 'gph'): 11209.65, ('225mb', 'temp'): 218.6, ('225mb', 'dewpt'): 208.69, ('225mb', 'dir'): 287.05, ('225mb', 'speed'): 62.0, ('200mb', 'gph'): 11961.78, ('200mb', 'temp'): 216.2, ('200mb', 'dewpt'): 204.77, ('200mb', 'dir'): 285.21, ('200mb', 'speed'): 62.0, ('175mb', 'gph'): 12805.89, ('175mb', 'temp'): 216.36, ('175mb', 'dewpt'): 201.73, ('175mb', 'dir'): 289.56, ('175mb', 'speed'): 56.0, ('150mb', 'gph'): 13780.35, ('150mb', 'temp'): 216.54, ('150mb', 'dewpt'): 194.67, ('150mb', 'dir'): 295.83, ('150mb', 'speed'): 49.0, ('125mb', 'gph'): 14929.06, ('125mb', 'temp'): 215.55, ('125mb', 'dewpt'): 193.0, ('125mb', 'dir'): 293.53, ('125mb', 'speed'): 41.0, ('100mb', 'gph'): 16334.96, ('100mb', 'temp'): 214.34, ('100mb', 'dewpt'): 190.78, ('100mb', 'dir'): 288.88, ('100mb', 'speed'): 30.0, ('75mb', 'gph'): 18128.15, ('75mb', 'temp'): 214.56, ('75mb', 'dewpt'): 189.31, ('75mb', 'dir'): 292.03, ('75mb', 'speed'): 22.0, ('50mb', 'gph'): 20655.52, ('50mb', 'temp'): 214.89, ('50mb', 'dewpt'): 186.13, ('50mb', 'dir'): 303.46, ('50mb', 'speed'): 12.0}, 1: {('1000mb', 'gph'): 165.16, ('1000mb', 'temp'): 283.48, ('1000mb', 'dewpt'): 280.17, ('1000mb', 'dir'): 305.02, ('1000mb', 'speed'): 6.0, ('975mb', 'gph'): 375.34, ('975mb', 'temp'): 282.49, ('975mb', 'dewpt'): 278.69, ('975mb', 'dir'): 317.14, ('975mb', 'speed'): 13.0, ('950mb', 'gph'): 590.16, ('950mb', 'temp'): 280.58, ('950mb', 'dewpt'): 277.87, ('950mb', 'dir'): 324.11, ('950mb', 'speed'): 13.0, ('925mb', 'gph'): 809.16, ('925mb', 'temp'): 278.92, ('925mb', 'dewpt'): 276.77, ('925mb', 'dir'): 313.02, ('925mb', 'speed'): 12.0, ('900mb', 'gph'): 1033.7, ('900mb', 'temp'): 278.32, ('900mb', 'dewpt'): 276.69, ('900mb', 'dir'): 291.26, ('900mb', 'speed'): 11.0, ('875mb', 'gph'): 1263.98, ('875mb', 'temp'): 279.08, ('875mb', 'dewpt'): 277.62, ('875mb', 
'dir'): 281.37, ('875mb', 'speed'): 10.0, ('850mb', 'gph'): 1501.7, ('850mb', 'temp'): 278.63, ('850mb', 'dewpt'): 276.58, ('850mb', 'dir'): 283.97, ('850mb', 'speed'): 10.0, ('825mb', 'gph'): 1745.64, ('825mb', 'temp'): 277.59, ('825mb', 'dewpt'): 275.47, ('825mb', 'dir'): 289.33, ('825mb', 'speed'): 12.0, ('800mb', 'gph'): 1996.26, ('800mb', 'temp'): 277.69, ('800mb', 'dewpt'): 274.2, ('800mb', 'dir'): 296.36, ('800mb', 'speed'): 15.0, ('775mb', 'gph'): 2255.26, ('775mb', 'temp'): 278.15, ('775mb', 'dewpt'): 272.78, ('775mb', 'dir'): 297.89, ('775mb', 'speed'): 17.0, ('750mb', 'gph'): 2522.8, ('750mb', 'temp'): 276.96, ('750mb', 'dewpt'): 271.04, ('750mb', 'dir'): 294.99, ('750mb', 'speed'): 18.0, ('725mb', 'gph'): 2797.14, ('725mb', 'temp'): 275.4, ('725mb', 'dewpt'): 268.56, ('725mb', 'dir'): 294.07, ('725mb', 'speed'): 20.0, ('700mb', 'gph'): 3079.8, ('700mb', 'temp'): 273.71, ('700mb', 'dewpt'): 265.46, ('700mb', 'dir'): 296.39, ('700mb', 'speed'): 23.0, ('675mb', 'gph'): 3369.96, ('675mb', 'temp'): 272.08, ('675mb', 'dewpt'): 263.34, ('675mb', 'dir'): 300.0, ('675mb', 'speed'): 25.0, ('650mb', 'gph'): 3671.08, ('650mb', 'temp'): 270.39, ('650mb', 'dewpt'): 261.12, ('650mb', 'dir'): 303.18, ('650mb', 'speed'): 27.0, ('625mb', 'gph'): 3979.72, ('625mb', 'temp'): 268.74, ('625mb', 'dewpt'): 256.77, ('625mb', 'dir'): 301.49, ('625mb', 'speed'): 29.0, ('600mb', 'gph'): 4300.96, ('600mb', 'temp'): 267.02, ('600mb', 'dewpt'): 251.51, ('600mb', 'dir'): 299.96, ('600mb', 'speed'): 31.0, ('575mb', 'gph'): 4631.58, ('575mb', 'temp'): 264.99, ('575mb', 'dewpt'): 249.31, ('575mb', 'dir'): 297.06, ('575mb', 'speed'): 32.0, ('550mb', 'gph'): 4976.9, ('550mb', 'temp'): 262.87, ('550mb', 'dewpt'): 247.0, ('550mb', 'dir'): 294.2, ('550mb', 'speed'): 33.0, ('525mb', 'gph'): 5331.25, ('525mb', 'temp'): 260.22, ('525mb', 'dewpt'): 245.18, ('525mb', 'dir'): 293.26, ('525mb', 'speed'): 33.0, ('500mb', 'gph'): 5702.9, ('500mb', 'temp'): 257.43, ('500mb', 'dewpt'): 243.23, ('500mb', 
'dir'): 292.32, ('500mb', 'speed'): 34.0, ('475mb', 'gph'): 6085.06, ('475mb', 'temp'): 254.43, ('475mb', 'dewpt'): 241.41, ('475mb', 'dir'): 291.09, ('475mb', 'speed'): 34.0, ('450mb', 'gph'): 6487.9, ('450mb', 'temp'): 251.26, ('450mb', 'dewpt'): 239.39, ('450mb', 'dir'): 289.79, ('450mb', 'speed'): 34.0, ('425mb', 'gph'): 6902.76, ('425mb', 'temp'): 248.02, ('425mb', 'dewpt'): 233.63, ('425mb', 'dir'): 293.14, ('425mb', 'speed'): 38.0, ('400mb', 'gph'): 7342.78, ('400mb', 'temp'): 244.58, ('400mb', 'dewpt'): 226.55, ('400mb', 'dir'): 295.98, ('400mb', 'speed'): 43.0, ('375mb', 'gph'): 7798.48, ('375mb', 'temp'): 241.27, ('375mb', 'dewpt'): 223.77, ('375mb', 'dir'): 295.9, ('375mb', 'speed'): 49.0, ('350mb', 'gph'): 8285.64, ('350mb', 'temp'): 237.73, ('350mb', 'dewpt'): 220.79, ('350mb', 'dir'): 295.83, ('350mb', 'speed'): 55.0, ('325mb', 'gph'): 8791.36, ('325mb', 'temp'): 233.48, ('325mb', 'dewpt'): 220.96, ('325mb', 'dir'): 292.78, ('325mb', 'speed'): 57.0, ('300mb', 'gph'): 9337.58, ('300mb', 'temp'): 228.89, ('300mb', 'dewpt'): 219.65, ('300mb', 'dir'): 289.8, ('300mb', 'speed'): 60.0, ('275mb', 'gph'): 9909.7, ('275mb', 'temp'): 225.04, ('275mb', 'dewpt'): 216.01, ('275mb', 'dir'): 288.82, ('275mb', 'speed'): 60.0, ('250mb', 'gph'): 10536.4, ('250mb', 'temp'): 220.82, ('250mb', 'dewpt'): 212.03, ('250mb', 'dir'): 287.75, ('250mb', 'speed'): 60.0, ('225mb', 'gph'): 11207.71, ('225mb', 'temp'): 218.23, ('225mb', 'dewpt'): 208.96, ('225mb', 'dir'): 285.43, ('225mb', 'speed'): 61.0, ('200mb', 'gph'): 11958.17, ('200mb', 'temp'): 215.33, ('200mb', 'dewpt'): 205.5, ('200mb', 'dir'): 282.93, ('200mb', 'speed'): 63.0, ('175mb', 'gph'): 12802.98, ('175mb', 'temp'): 216.25, ('175mb', 'dewpt'): 202.77, ('175mb', 'dir'): 287.48, ('175mb', 'speed'): 56.0, ('150mb', 'gph'): 13778.24, ('150mb', 'temp'): 217.31, ('150mb', 'dewpt'): 194.46, ('150mb', 'dir'): 294.22, ('150mb', 'speed'): 49.0, ('125mb', 'gph'): 14926.09, ('125mb', 'temp'): 215.54, ('125mb', 'dewpt'): 192.81, 
('125mb', 'dir'): 292.33, ('125mb', 'speed'): 42.0, ('100mb', 'gph'): 16330.96, ('100mb', 'temp'): 213.37, ('100mb', 'dewpt'): 190.79, ('100mb', 'dir'): 288.86, ('100mb', 'speed'): 33.0, ('75mb', 'gph'): 18120.98, ('75mb', 'temp'): 214.0, ('75mb', 'dewpt'): 189.51, ('75mb', 'dir'): 291.2, ('75mb', 'speed'): 24.0, ('50mb', 'gph'): 20643.88, ('50mb', 'temp'): 214.89, ('50mb', 'dewpt'): 186.35, ('50mb', 'dir'): 300.53, ('50mb', 'speed'): 12.0}}
In the data provided above, I dropped the unused values as follows:
PROPS_2_DROP = ['gph_dval', 'temp_dval', 'clouds', 'fl-vis', 'icing_type', 'cwmr',
'rh', 'theta-e', 'parcel_temp', 'vvs', 'mixing_ratio', 'turbulence']
midf = self.DataFrame[range(0, 2)].drop('sfc').drop(labels=PROPS_2_DROP, level='props')
midf.to_dict(orient='dict')
data_dict -> MultiIndex.DataFrame
midf = pd.DataFrame(data_dict)
DataFrame
0 1 2 3 4 5 ... 139 140 141 142 143 144
lvl props ...
sfc mean_slp 1019.73 1019.50 1019.19 1018.83 1019.03 1019.02 ... 995.10 995.44 995.79 995.99 996.20 996.41
altimeter 30.10 30.09 30.09 30.07 30.08 30.08 ... 29.37 29.38 29.39 29.40 29.40 29.41
press_alt 285.78 291.39 299.50 309.49 304.92 305.71 ... 963.21 953.97 944.74 939.51 934.28 929.05
density_alt -55.22 -88.18 -135.38 -186.09 -235.12 -262.16 ... 1265.79 1224.16 1182.28 1137.12 1091.92 1046.88
2_m_agl_tmp 283.50 283.17 282.70 282.18 281.82 281.59 ... 287.05 286.84 286.62 286.34 286.06 285.78
... ... ... ... ... ... ... ... ... ... ... ... ... ...
50mb mixing_ratio 0.00 0.00 0.00 0.00 0.00 0.00 ... 0.00 0.00 0.00 0.00 0.00 0.00
cwmr 0.00 0.00 0.00 0.00 0.00 0.00 ... 0.00 0.00 0.00 0.00 0.00 0.00
icing_type -1.00 -1.00 -1.00 -1.00 -1.00 -1.00 ... -1.00 -1.00 -1.00 -1.00 -1.00 -1.00
turbulence 0.00 0.00 0.00 0.00 0.00 0.00 ... 0.00 0.00 0.00 0.00 0.00 0.00
vvs 0.00 0.00 -0.00 -0.00 -0.00 0.00 ... -0.00 -0.00 -0.00 -0.00 -0.00 0.00
[804 rows x 145 columns]
There are various methods for working with the forecast DataFrame. The one I've been working on tonight slices and structures data into a JSON object that is sent to a TypeScript application.
types
type Dataset = Datum[][];
type Datums = Datum[];
interface Datum {
press: number;
hght: number;
temp: number;
dwpt: number;
wdir: number;
wspd: number;
};
method
DICT_KEYS = ['temp', 'dwpt', 'wdir', 'wspd', 'hght', 'press']
ABSOLUTE_ZERO = -273.15
def feature_skewt_dataset(self, start=0, stop=30) -> Dict[str, List[List[Dict[str, int]]]]:
    # slice multi-index-dataframe time by argument range
    midf = self.DataFrame[range(start, stop)]
    # kelvin to celsius
    temperature = midf.loc[(slice(None), "temp"), :] + ABSOLUTE_ZERO
    dewpoint = midf.loc[(slice(None), "dewpt"), :] + ABSOLUTE_ZERO
    wind_speed = midf.loc[(slice(None), "speed"), :]
    wind_direction = midf.loc[(slice(None), "dir"), :]
    geopotential_height = midf.loc[(slice(None), "gph"), :]
    # milibars = self._make_mbars(39)
    # milibars = midf.droplevel(1).index.unique().str.strip('mb')
    milibars = midf.index.get_level_values('lvl').unique().str.rstrip('mb')
    # STEP 3 zip the keys -> stack
    dataset = [[dict(zip(DICT_KEYS, stack)) for stack in np.column_stack([
        # STEP 2 -> slice the properties by the time index
        temperature.loc[:, time_index],
        dewpoint.loc[:, time_index],
        wind_direction.loc[:, time_index],
        wind_speed.loc[:, time_index],
        geopotential_height.loc[:, time_index],
        milibars
        # STEP 1 -> iterate the time_index column index
    ]).astype(int)] for time_index in midf.columns]
    return {'dataset': dataset}
Answer: It's good that you provided data_dict. In my suggested code, I had first dumped this dataframe to a pickle to be able to get it back without fuss and without having to include it in the source code verbatim.
You initially didn't define your pressure quantity; _make_mbars was missing. You now say that it's effectively
midf.droplevel(1).index.unique().str.strip('mb')
but this is non-ideal for a collection of reasons:
you don't actually want to get unique values, but instead want to associate each value with a row;
you should use rstrip instead of strip;
you need to cast to integers; and
rather than droplevel, for this use you should just use get_level_values.
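The difference between these calls can be seen on a toy index (a minimal sketch; the level names mirror the `lvl`/`props` index above):

```python
import pandas as pd

# Toy two-level index resembling (lvl, props).
index = pd.MultiIndex.from_product(
    [['1000mb', '975mb'], ['temp', 'dewpt']], names=['lvl', 'props'])
df = pd.DataFrame({0: [1.0, 2.0, 3.0, 4.0]}, index=index)

# get_level_values keeps one entry per row (what you want when
# associating a pressure value with each record) ...
per_row = df.index.get_level_values('lvl')
# ... while .unique() collapses to one entry per distinct level value.
collapsed = per_row.unique()

# rstrip the unit and cast to integers, as suggested above.
pressures = per_row.str.rstrip('mb').astype(int)
```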
Likewise, you failed to provide the whole class so I ignored self and just accepted a starting dataframe as a function parameter.
You're over-abbreviating your column and variable names. Just write the names in plain English. You offered two reasons for the abbreviated names, the first being that you need to adhere to a NOAA format. Externally that's fine; but internally you should use sane names and not the names you're required to use in external serialised formats.
The second reason you offered for over-abbreviation is
they are being called in a React d3 application [...] frequently
Cutting a couple of characters in your field names is premature and mis-directed optimisation. There are plenty of opportunities for actual optimisation elsewhere.
Put 273.15 into a constant rather than leaving it as a magic number. When you subtract this number, it's probably a good idea to round(); I was conservative and rounded to 12 decimals to cut the error. This is only possible because your real data stop at two decimals.
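The floating-point artifact being rounded away can be shown with plain scalars (a minimal sketch):

```python
# Subtracting 273.15 in binary floating point can leave noise far past
# the two decimals present in the real data; rounding to 12 decimals
# removes that noise without touching the actual data.
raw = 283.88 - 273.15        # may be 10.73 plus ~1e-14 of binary noise
cleaned = round(raw, 12)
assert cleaned == 10.73
```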
Most of your difficulty comes from the fact that your data frame is effectively rotated, and needs to be rotated again to be sane. In Pandas terminology this rotation is called stacking. You need to unstack your property names to columns, and you need to stack your time columns to indices. Once this is done, the data are much, much easier to manipulate with (nearly) stock Pandas functions.
Suggested
from pprint import pprint
import pandas as pd
ABSOLUTE_ZERO = -273.15
def kelvin_to_celsius(temperature: pd.Series) -> pd.Series:
return (temperature + ABSOLUTE_ZERO).round(decimals=12)
def sanitise(df: pd.DataFrame) -> pd.DataFrame:
stacked: pd.DataFrame = df.stack().unstack(level=1)
stacked.rename(
columns={
'dewpt': 'dewpoint',
'gph': 'height',
'dir': 'wind_direction',
'speed': 'wind_speed',
'temp': 'temperature',
},
inplace=True,
)
stacked.temperature = kelvin_to_celsius(stacked.temperature)
stacked.dewpoint = kelvin_to_celsius(stacked.dewpoint)
stacked['pressure'] = (
stacked.index.get_level_values(level=0)
.str.rstrip('mb').astype(int)
)
return stacked.droplevel(0)
def feature_skewt_dataset(
midf: pd.DataFrame,
start_time: int = 0,
stop_time: int = 30,
) -> dict[str, list]:
# Since the index only contains time and is non-unique, we cannot use
# to_dict(orient='index')
return {
'dataset': [
midf.loc[time_index].to_dict(orient='records')
for time_index in range(start_time, stop_time)
]
}
def test() -> None:
insane = pd.read_pickle('midf.pickle')
sane = sanitise(insane)
dataset = feature_skewt_dataset(sane, stop_time=2)
pprint(dataset)
if __name__ == '__main__':
test() | {
"domain": "codereview.stackexchange",
"id": 42765,
"tags": "python, numpy, pandas"
} |
How to train 3 models with single loss function in pytorch | Question: optimizer=torch.optim.AdamW(list(model3.parameters())+list(model1.parameters())+list(model2.parameters()))
optimizer.zero_grad()
prediction=model3(model1(x)+model2(x))
loss=nn.BCELoss()(prediction,labels)
loss.backward()
optimizer.step()
How can I update the parameters of all three models with a single loss?
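A self-contained version of this training step (a sketch: the model architectures, shapes, and data here are invented for illustration, and note that `nn.BCELoss` must be instantiated before it is called):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model1 = nn.Linear(4, 2)
model2 = nn.Linear(4, 2)
model3 = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())

# One optimizer over the parameters of all three models.
params = (list(model3.parameters()) + list(model1.parameters())
          + list(model2.parameters()))
optimizer = torch.optim.AdamW(params)
criterion = nn.BCELoss()

x = torch.randn(8, 4)
labels = torch.rand(8, 1).round()

optimizer.zero_grad()
prediction = model3(model1(x) + model2(x))
loss = criterion(prediction, labels)
loss.backward()      # gradients flow into all three models
optimizer.step()
```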
Answer: Yes, there is no problem with the code you showed. The gradients are propagated all the way up, unless you do something to prevent it (e.g. .detach(), param.requires_grad = False, etc) | {
"domain": "datascience.stackexchange",
"id": 9667,
"tags": "pytorch"
} |
Why is interpolation a time varying system | Question: I was reading about interpolation (Interpolation and Decimation of Digital Signals - A Tutorial Review, Ronald E. Crochiere) and found that an interpolation filter is a time-varying system. Can someone give a simple example of why it is time-varying? Because, as far as I understand, interpolation just stuffs zeroes between the samples to increase the sample rate by the appropriate amount, so even if we apply the input at a later time it should give the same output.
Also, even though the process is time-varying, we can represent it using transfer functions. For instance, a zero-order hold is represented by a sinc filter. Why is this possible?
Answer: Considering a discrete-time framework, the interpolation operation has two stages: first expand the signal by zero-stuffing between the existing samples, and then lowpass filter the expanded signal to get the interpolated samples. The second stage is LTI, but the first stage (expansion) is not. Hence, an interpolator is not an LTI system.
The expansion stage is shown like $$x[n]\longrightarrow \boxed{ \uparrow L } \longrightarrow y[n]$$ and mathematically defined as
$$ y[n] = \begin{cases} x\!\left[\frac{n}{L}\right], & \text{for } n = mL,\; m = 0, \pm 1, \pm 2, \dots \\ 0, & \text{otherwise} \end{cases} $$
Apply the test for time-invariance :
First, apply a shifted input $x_d[n] = x[n-d]$, and denote the output as $y_d[n]$:
$$ y_d[n] = \begin{cases} x_d\!\left[\frac{n}{L}\right] = x_d[m], & \text{for } n = mL,\; m = 0, \pm 1, \pm 2, \dots \\ 0, & \text{otherwise} \end{cases} $$
which becomes:
$$ y_d[n] = \begin{cases} x\!\left[\frac{n}{L} - d\right] = x[m-d], & \text{for } n = mL,\; m = 0, \pm 1, \pm 2, \dots \\ 0, & \text{otherwise} \end{cases} $$
Second, just shift the output $y[n]$ by $d$, denoted as $y[n-d]$:
$$ y[n-d] = \begin{cases} x\!\left[\frac{n-d}{L}\right] = x[m], & \text{for } n - d = mL,\; m = 0, \pm 1, \pm 2, \dots \\ 0, & \text{otherwise} \end{cases} $$
which becomes:
$$ y[n-d] = \begin{cases} x\!\left[\frac{n-d}{L}\right] = x[m], & \text{for } n = mL + d,\; m = 0, \pm 1, \pm 2, \dots \\ 0, & \text{otherwise} \end{cases} $$
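The two branches can also be checked numerically (a sketch; the circular shift via `np.roll` stands in for an ideal delay on a finite vector):

```python
import numpy as np

def expand(x, L):
    # Zero-stuffing expander: y[n] = x[n/L] for n = mL, else 0.
    y = np.zeros(len(x) * L, dtype=x.dtype)
    y[::L] = x
    return y

L, d = 3, 1
x = np.array([1, 2, 3, 4])

y_of_delayed_input = expand(np.roll(x, d), L)   # y_d[n]: expand the delayed input
delayed_output = np.roll(expand(x, L), d)       # y[n - d]: delay the expanded output

# They disagree, e.g. at n = L: y_d[L] = x[0] but the delayed output is 0 there.
assert y_of_delayed_input[L] == x[0]
assert delayed_output[L] == 0
assert not np.array_equal(y_of_delayed_input, delayed_output)
```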
As you can see, $y_d[n]$ and $y[n-d]$ are not the same signals. You can explicitly see this by evaluating them for specific values of $n$ and $d$, such as $d=1$ and $n = L$, then $y[n-d] = y[L-1] = 0$ while $y_d[n] = y_d[L] = x[0]$, hence $y[n-d] \neq y_d[n]$. Therefore, the expander (or as a consequence the interpolator) is a Time-Varying system which is not LTI. | {
"domain": "dsp.stackexchange",
"id": 6946,
"tags": "transfer-function, interpolation, digital-filters"
} |
How to incorporate moveable sensor (Lidar) into navigation stack | Question:
I am building a robot which will move using ros navigation stack with move_base. I am using Lidar Lite 2 Laser Rangefinder to detect obstacles and for mapping as well. I have two questions regarding the use of this lidar.
Shall I use LaserScan (in obstacle_layer) or Range message data (in RangeSensorLayer) to put obstacles into the costmap? I am confused because a lidar is generally considered a laser sensor, so it should be providing data in LaserScan format. But this specific lidar is providing range data.
I am using a motor to rotate this lidar to cover a 180-degree scan area. How can I let my obstacle layer know the current position of the lidar so that the obstacle point is inserted at the correct place in the costmap? In my robot's urdf, I can add this sensor at one frame and it will be fixed to that place according to the urdf. Is there any method to resolve this issue?
Originally posted by b2meer on ROS Answers with karma: 66 on 2016-03-22
Post score: 0
Answer:
You should use LaserScan messages, imo, and take care of the spinning logic in your laser scan publisher. Each LaserScan message's ranges field will be populated with, e.g., 180 degrees worth of LiDAR-Lite measurements.
Here are some guidelines and examples:
http://wiki.ros.org/navigation/Tutorials/RobotSetup/Sensors
laser_scan_publisher_tutorial
https://github.com/rohbotics/xv_11_laser_driver
https://github.com/ros-drivers/hokuyo_node
Keep in mind that you'll need some way of keeping track of your motor's / LiDAR's orientation. Some sort of encoder. (Unless you're using a stepper motor.)
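As a rough sketch of the bookkeeping involved, here is how the LaserScan timing and angle fields relate to a slow sweep (the 2 Hz rate matches the comment below; the beam count is an assumption for illustration):

```python
import math

# A 2 Hz sweep covering 180 degrees, sampled as (say) 180 beams.
sweep_rate_hz = 2.0
beams = 180

scan_time = 1.0 / sweep_rate_hz          # LaserScan.scan_time: 0.5 s per sweep
time_increment = scan_time / beams       # LaserScan.time_increment: per-beam delay
angle_min = -math.pi / 2                 # LaserScan.angle_min
angle_increment = math.pi / (beams - 1)  # LaserScan.angle_increment

# A consumer can undo the intra-sweep lag beam by beam using time_increment;
# the oldest beam in a sweep lags the newest by:
lag_of_last_beam = (beams - 1) * time_increment
```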
Originally posted by spmaniato with karma: 1788 on 2016-03-22
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by b2meer on 2016-03-24:
I'm using a stepper motor to rotate my lidar lite, for design reasons. The stepper motor has been fixed at a rotation rate of 2Hz. The method you've described would therefore mean that there would be a lag of 0.5 seconds due to the refresh-rate. This isn't much lag, but can it be reduced further?
Comment by spmaniato on 2016-03-24:
That's what the LaserScan message's time_increment field is for :-) Check out this detailed explanation: http://answers.ros.org/question/198843/need-explanation-on-sensor_msgslaserscanmsg/?answer=198848#post-id-198848
Comment by b2meer on 2016-03-28:
alright. Thank you very much for your help | {
"domain": "robotics.stackexchange",
"id": 24206,
"tags": "navigation, lidar, urdf, costmap, sensor"
} |
MoveIt Difficulties with end-effector Pose transformation for humanoid robot | Question:
ROS Distro: Melodic
OS: Ubuntu 18.04
MoveIt compiled from sources
OpenManipulator-P + RH8D
Problem Description:
We have two different Ik_systems (one ROS/MoveIt, one custom) for our humanoid robot. We'd like to experiment with them with regard to the quality of ik_solutions. When comparing ik_solutions of our two ik systems, we noticed that we have severe differences between the orientation of our ROS IK solution and the Custom IK solution, while they visually seem to be extremely close.
We figured there is a difference between the coordinate system of our custom implementation and the ROS one. The next step was to compare the values of our Homing Pose. The Homing Pose is set via Joint States, so the Joint Positions are exactly the same in both systems (the visual positions are also identical). While the xyz position is exactly the same in both systems (difference per axis < 1e-08), the orientation is completely different: Custom System: [x: 0, y: 0, z: 0, w: 1], MoveIt System: [x: -0.5, y: 0.5, z: 0.5, w: 0.5]. The big problem for us is that, due to this difference, we currently have tremendous problems executing some IK experiments.
We would like the Homing Pose orientation to also be [x: 0, y: 0, z: 0, w: 1] (so that both of our systems are configured identically; we think our current problem will then vanish), but we are not able to transform it. The orientation always stays [x: -0.5, y: 0.5, z: 0.5, w: 0.5]. I already modified the urdf and tried setPoseReferenceFrame in the controller; nothing helps. Both systems use the same base_link and the same eef link. I looked up the orientation of the base_link in rviz and it is [x: 0, y: 0, z: 0, w: 1], so this seems correct to me.
Any idea how to fix this?
Homing Pose:
Kinematic Chain:
MoveIt Orientation shown below, Expectation [0,0,0,1]
[INFO] Current Orientation: r: 0.000000 p: 1.570796 y: 4.709419 || x: -0.499931 y: 0.501418 z: 0.500063 w: 0.498584
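The logged quaternion can be decoded with a standard ZYX (roll-pitch-yaw) Euler extraction, a sketch using only the standard library. At pitch = pi/2 the pose is in gimbal lock, so roll and yaw are not individually determined, which is why the logged yaw of about 4.71 rad is convention-dependent:

```python
import math

# MoveIt orientation from the log: (x, y, z, w) = (-0.5, 0.5, 0.5, 0.5)
x, y, z, w = -0.5, 0.5, 0.5, 0.5
assert abs(x*x + y*y + z*z + w*w - 1.0) < 1e-12   # unit quaternion

# Standard ZYX extraction for roll and pitch:
roll = math.atan2(2 * (w*x + y*z), 1 - 2 * (x*x + y*y))
pitch = math.asin(max(-1.0, min(1.0, 2 * (w*y - z*x))))

# The offset from identity is a pure 90-degree rotation: the two systems'
# end-effector frames differ by an axis permutation, not by position.
assert abs(roll) < 1e-12
assert abs(pitch - math.pi / 2) < 1e-12
```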
Originally posted by jangerrit on ROS Answers with karma: 11 on 2021-08-18
Post score: 0
Original comments
Comment by gvdhoorn on 2021-08-18:
I would give you pictures but I do not have enough points
you do now.
Please do not post images of terminals, source code or .launch/.urdf or any other files though. Those all contain text or "are" text, so copy-paste those into your question.
Comment by jangerrit on 2021-08-18:
thank you really much
Comment by Mike Scheutzow on 2021-08-22:
Please explain what "I already modified the urdf... nothing helps" means. Please describe the idea behind the change you make to the urdf. And what was the result?
Comment by fvd on 2021-08-22:
Sounds like you're not using the same end effector frame or planning frame in your custom IK and the ROS/MoveIt plugin.
Comment by jangerrit on 2021-08-23:
"I already modified the urdf" means I played around with the tag of the joints where the coordinate frames change orientation, as shown in the last picture. It sometimes broke urdf, sometimes did not have any effect as far as I could observe.
Comment by fvd on 2021-08-23:
Please explain in more detail in your post how you set up the IK and call it in your code, what your end effector links are, what your URDF and planning groups look like, and which outputs are different than what you expect. Also see the documentation for getEndEffectorLink and setEndEffectorLink
Comment by jangerrit on 2021-08-23:
Solved the Problem, it has nothing to do with the urdf. MoveIt just does not recognize our kinematic chain correctly.
Our kin. chain should look like the following:
r_arm:
base_link -> r_tool0
So what I did was:
configure kin. chain for Planning Group in MoveIt setup_assistant as shown above
execute move_group->setPoseReferenceFrame("base_link") in controller
Problem is getPlanningFrame() always returns "world" frame. So our custom IK system uses "base_link" as reference frame, while MoveIt uses "world". I discovered a lot of people experiencing this bug and there seems to be no solution, we will transform everything with tf2 now.
Comment by Mike Scheutzow on 2021-08-23:
@jangerrit I'm glad you figured it out. You should post your comment as the answer.
Answer:
Solved the Problem, it has nothing to do with the urdf. MoveIt just does not recognize our kinematic chain correctly.
Our kin. chain should look like the following: r_arm: base_link -> r_tool0
So what I did was:
configure kin. chain for Planning
Group in MoveIt setup_assistant as
shown above
execute
move_group->setPoseReferenceFrame("base_link")
in controller
Problem is getPlanningFrame() always returns "world" frame. So our custom IK system uses "base_link" as reference frame, while MoveIt uses "world". I discovered a lot of people experiencing this bug and there seems to be no solution, we will transform everything with tf2 now.
Originally posted by jangerrit with karma: 11 on 2021-08-24
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 36811,
"tags": "ros, inverse-kinematics, moveit, ros-melodic, ik"
} |
Ehrenfest's theorem derivation | Question: I'm stuck on a question from Griffiths which asks me to prove that:
$$\dfrac{d \left\langle p \right\rangle}{dt}=\left\langle -\dfrac{\partial V}{\partial x}\right\rangle.$$
What I did is the following:
$$\dfrac{d \left\langle p \right\rangle}{dt}=\dfrac{d}{dt}\int\psi^*\dfrac{\hbar}{i}\dfrac{\partial\psi}{\partial x}\,dx=\dfrac{\hbar}{i}\int\left(\dfrac{\partial\psi^*}{\partial t}\dfrac{\partial\psi}{\partial x}+\psi^*\dfrac{\partial}{\partial x}\dfrac{\partial\psi}{\partial t}\right)dx$$
And after inserting the time derivative of $\psi^*$ and $\psi$ and taking the integration I found the following equation:
$$-\dfrac{\partial\psi^*}{\partial x}\dfrac{\partial\psi}{\partial t}-\int \psi^*\dfrac{\partial V}{\partial x}\psi\,dx$$ The first term is evaluated at infinity and minus infinity.
My question is: can I say that the derivative of the wave function at infinity always goes to zero? Or am I making a mistake somewhere?
Answer: Let's start by deriving Ehrenfest's theorem. The expectation value is given as:
$$\left< A \right> = \left< \psi \left| \hat{A} \right| \psi \right>$$
We can now take the time derivative of the expectation value:
$$\frac{d}{dt}\left< A \right> = \frac{d}{dt}\left< \psi \left| \hat{A} \right| \psi \right>$$
Now by expanding the right hand side:
$$\frac{d}{dt}\left< A \right> =\left< \frac{d}{dt}\psi \left| \hat{A} \right| \psi \right> + \left< \psi \left| \frac{\partial}{\partial t}\hat{A} \right| \psi \right>+ \left< \psi \left| \hat{A} \right| \frac{d}{dt}\psi \right>$$
Now we can see that the first and last terms can be replaced directly by considering the time-dependent Schrödinger equation:
$$i\hbar\frac{\partial}{\partial t}\left| \psi \right>=\hat{H}\left| \psi \right>$$
Now giving:
$$\frac{d}{dt}\left< A \right> =-\frac{1}{i\hbar}\left< \hat{H} \psi \left| \hat{A} \right| \psi \right> + \left< \psi \left| \frac{\partial}{\partial t}\hat{A} \right| \psi \right>+ \frac{1}{i\hbar}\left< \psi \left| \hat{A} \right| \hat{H}\psi \right>$$
By the definition of commutators this can be seen to reduce to:
$$\frac{d}{dt}\left< A \right> =\frac{i}{\hbar}\left< \psi \left| \left[ \hat{H} , \hat{A} \right] \right| \psi \right> + \left< \psi \left| \frac{\partial}{\partial t}\hat{A} \right| \psi \right>$$
Note here that I have made no assumptions about the positional derivative of the wave function.
Let's now inspect the expectation value of the momentum:
$$\frac{d}{dt}\left< p \right> =\frac{i}{\hbar}\left< \psi \left| \left[ \hat{H} , \hat{p} \right] \right| \psi \right> + \left< \psi \left| \frac{\partial}{\partial t}\hat{p} \right| \psi \right>$$
Since $\hat{p}$ is time-independent, the last term vanishes:
$$\frac{d}{dt}\left< p \right> =\frac{i}{\hbar}\left< \psi \left| \left[ \hat{H} , \hat{p} \right] \right| \psi \right>\tag{1}$$
Now we define our Hamiltonian operator as:
$$\hat{H}=\frac{1}{2m}\hat{p}^2 + \hat{V}$$
We there have that:
$$\left[ \hat{H} , \hat{p} \right] = \left[ \frac{1}{2m}\hat{p}^2 , \hat{p} \right] + \left[ \hat{V} , \hat{p} \right]$$
Clearly the first commutator is equal to zero. Now let's inspect the second commutator. Here we will let the commutator act on a test function:
$$\left[ \hat{V} , \hat{p} \right]f = V(-i\hbar)\frac{\partial f}{\partial x} - (-i\hbar)\frac{\partial \left(V f\right)}{\partial x}$$
Now expanding the second term:
$$\left[ \hat{V} , \hat{p} \right]f = -i\hbar V\frac{\partial f}{\partial x} + i\hbar f\frac{\partial V}{\partial x} + i\hbar V\frac{\partial f}{\partial x} = i\hbar f\frac{\partial V}{\partial x}$$
Thus:
$$\left[ \hat{V} , \hat{p} \right] = i\hbar \frac{\partial V}{\partial x}$$
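The commutator result above can be verified symbolically (a sketch using SymPy):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', real=True, positive=True)
f = sp.Function('f')(x)
V = sp.Function('V')(x)

# Momentum operator in the position representation: p = -i*hbar * d/dx
p = lambda g: -sp.I * hbar * sp.diff(g, x)

# [V, p] f = V (p f) - p (V f)
commutator = V * p(f) - p(V * f)

# Equals i*hbar * V'(x) * f, as derived above.
assert sp.simplify(commutator - sp.I * hbar * sp.diff(V, x) * f) == 0
```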
Now by inserting in equation (1):
$$\frac{d}{dt}\left< p \right> =\frac{i}{\hbar}\left< \psi \left| i\hbar \frac{\partial V}{\partial x} \right| \psi \right> = -\left< \frac{\partial V}{\partial x} \right>$$
This is the result we were looking for. I know that I didn't directly answer what you were asking, but I think you can find use in this more complete derivation as well. | {
"domain": "physics.stackexchange",
"id": 49571,
"tags": "quantum-mechanics, homework-and-exercises, operators, wavefunction"
} |
Custom MVC framework | Question: So, I've been working on a very simple MVC framework and I wanted some feedback on how it works. I've worked it slightly differently where the controllers don't need to extend the base Controller class (I really didn't see the need to).
Firstly, you'll see the main classes Controller.php, View.php and index.php.
Controller.php
class Controller {
public function __construct($module)
{
$class = implode('', array_map('ucwords', explode('-', $module)));
$model = MODELS . $class . '.php';
$controller = CONTROLLERS . $class . '.php';
if (file_exists($controller) && file_exists($model)) {
require_once $controller;
require_once $model;
$ctrl = '\ctrl\\' . $class;
$model = '\model\\' . $class;
require_once VIEWS . DS . 'partials' . DS . 'header.php';
new $ctrl(new $model(), new View());
require_once VIEWS . DS . 'partials' . DS . 'footer.php';
}
}
}
View.php
class View {
public function __construct()
{
$this->view = null;
$this->data = [];
}
public function load($view)
{
if (!file_exists($view))
exit('View 404: ' . $view);
$this->view = $view;
}
public function set($k, $v)
{
$this->data[$k] = $v;
}
public function get($k)
{
return $this->data[$k];
}
public function render()
{
extract($this->data);
ob_start();
require_once $this->view;
echo ob_get_clean();
}
}
index.php
<?php
session_start();
define('DS', DIRECTORY_SEPARATOR);
define('ROOT', __DIR__ . DS);
define('SRC', __DIR__ . DS . 'src' . DS);
define('VIEWS', __DIR__ . DS . 'app' . DS . 'views' . DS);
define('MODELS', __DIR__ . DS . 'app' . DS . 'models' . DS);
define('CONTROLLERS', __DIR__ . DS . 'app' . DS . 'controllers' . DS);
spl_autoload_register(function ($class_name) {
$core = SRC . 'core' . DS . $class_name . '.php';
$app = SRC . 'app' . DS . $class_name . '.php';
if (file_exists($core)) {
require_once $core;
} else if (file_exists($app)) {
require_once $app;
} else {
exit('Class 404: ' . $class_name);
}
});
if (isset($_GET['module'])) {
$module = $_GET['module'];
$ctrl = new Controller($module);
} else {
$ctrl = new Controller('sign-in');
}
Now we have an actual example:
app
models
controllers
views
partials
header.php
footer.php
sign-in.php
sign-in.php (View)
<h1><?= $title ?></h1>
<form method="post">
<input type="text" name="username" />
<input type="password" name="password" />
<input type="submit" name="sign-in" />
</form>
<?php if (isset($errors)) : ?>
<?php foreach ($errors as $error) : ?>
<li style="font-weight: bold; color: red;">
<?= $error ?>
</li>
<?php endforeach ?>
<?php endif ?>
SignIn (Controller)
<?php
namespace ctrl;
class SignIn {
private $tpl = VIEWS . 'sign-in.php';
public function __construct($model, $view)
{
$this->model = $model;
$this->view = $view;
$view->load($this->tpl);
if (isset($_POST['sign-in'])) {
$this->sign_in();
}
$this->render();
}
public function render()
{
$this->view->set('title', 'Sign In');
$this->view->render();
}
public function sign_in()
{
$errors = [];
if (!isset($_POST['username']) || empty(trim($_POST['username']))) {
$errors[] = 'Username is empty';
}
if (!isset($_POST['password']) || empty(trim($_POST['password']))) {
$errors[] = 'Password is empty';
}
if (empty($errors)) {
$this->model->find();
} else {
$this->view->set('errors', $errors);
}
}
}
Model (Model)
<?php
namespace model;
class SignIn {
public function find()
{
echo 'found';
}
}
I understand the error reporting/validation can be improved (for example, replacing the exit calls with actual logging) along with environment management, and the code can be tidied up, but I'm mainly looking for feedback on how the base controller and view classes interact with the rest of the system.
Answer: I'll just jump right in.
Composer To The Rescue
So you have attempted autoloading, that's great! But by modern standards you should be using Composer to:
Manage your dependencies
Manage autoloading your namespaces
Here is a stackoverflow question that shows how to add your own
Namespaces
Modern PHP should conform to standards such as the PSRs. Your namespace is simply model, and that isn't good enough. Take a look at PSR-4: your namespaces should be something like Script47\MVC_FRAMEWORK_NAME\Models
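For example, a minimal composer.json autoload section mapping a PSR-4 prefix to a source directory (the vendor and directory names here are placeholders, not from your project):

```json
{
    "autoload": {
        "psr-4": {
            "Script47\\Mvc\\": "src/"
        }
    }
}
```

After adding this, run `composer dump-autoload` and require `vendor/autoload.php` once in index.php instead of the hand-rolled `spl_autoload_register`.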
Dependency Injection - The king of kings
There are many design patterns you can use to implement applications, and you've gone for dependency injection, which is great!
But you're not doing it quite right: you should use a dependency injection container.
Take a look at php-di (it's a personal favourite, but there are others to choose from).
This will also do away with your spl_autoload_register functions!
Controller is a Router not a Controller
Your controller class is responsible for finding dependencies, setting them up, and letting them run!
This is not a controller (in my understanding), it's a router, and:
It should be named as such (maybe ControllerRouter)
It assumes there will be a Model with the same name as the Controller (bad)
It manually requires certain header files (bad)
It should use dependency injection and not hand-woven classes / models
Your controller router is going to be one of the more difficult classes to write, but it should be a lot of fun.
Web applications use ajax (or node but don't open that box, yet!)
You can POST to every page to get things to update, but that's annoying for the user (effectively a page refresh every time) and doesn't separate concerns!
You should make AJAX requests (an HTTP request) to your "api", which your index & your new ControllerRouter should be handling, preferably returning JSON to the client.
Conclusion
There is far too much out-of-date code for me to sit here and try to improve when the concepts are flawed. I would recommend you read up on modern PHP techniques and try applying some of them.
Symfony has a series on creating your own framework. I have never used it, and its ideas and opinions may conflict with my own, but you should follow their guide more than my advice. | {
"domain": "codereview.stackexchange",
"id": 31544,
"tags": "php, mvc"
} |
How to find a complete understanding of the 2nd law of thermodynamics in terms of forms? | Question: I have two straightforward questions, and below I introduce more context to interpret them:
What is, or is there, an order relation for forms that one can use to make sense of the 2nd law of thermodynamics for processes (reversible or not)? Or is the 2nd law fundamentally given in an integral way?
Is there a sense in which a form $dS$ can be "path-dependent", to accommodate the distinction between reversible and irreversible statements of the 2nd law (with saturation or not of the inequality)?
I'm trying to understand the laws of thermodynamics from a differential forms formalism, and in doing so I stumbled upon concepts I could not make sense of very well. I'll start with the second law of thermodynamics for reversible/quasi-static processes (which at every instant are at equilibrium) as in Quantum Thermodynamics by Mahler et al. (eq. 3.8 p. 26):
$$
dS = \delta Q/T ,
$$
where $d$ is the exterior derivative and $\delta$ gives an infinitesimal difference. At this point I take this statement as an equality of forms, so $\delta Q$ is not a number, but a 1-form.
In extending this to general thermodynamic process (irreversible, no longer quasi-static, involving non-equilibrium states), Mahler et al. write
$$
\delta S \geq \delta Q / T .
$$
Here I understand this as a statement about numbers, with both $\delta$'s meaning a very small difference. But with the previous considerations, this should be an inequality on forms, where $\delta S$ is a 1-form, albeit perhaps no longer exact. More importantly, they are related through an inequality, which motivates the 1st question.
My first guess is yes: it is induced by the order relation on the real number line, obtained by integrating these forms. Would that be enough to define such an order relation? I would imagine there could be some caveats to this (e.g. it's just a pre-order...). I can see how that would make sense with Mahler's internal logic, where we'd make the substitution $\delta S \to \int dS$ (whilst maintaining the meaning of $\delta$ in the $\delta Q$ notation).
Another version of this question could be thought of when looking at the book on Mechanical Foundations of Thermodynamics by Campisi, where the 2nd law is stated as
$$
dS \geq \delta Q / T ,
$$
although here the author seems, from the outset, to assume some definition of an order relation on forms.
The second question actually stems from the statement, already given, that the second law depends on the nature of the process.
If so, it seems this form would no longer be exact whilst also being path-dependent, in such a way that we would prefer to write it as $\delta S(\gamma_\text{gen})$, and reduce it to $dS = \delta S(\gamma_\text{rev})$, for a path $\gamma$. As far as I understand, this is also a different statement from the path-dependence of the integral of an inexact form.
Could this path-dependence of forms be understood in terms of a coarse-graining from the microstate space to the macrostate space? Viewing the state spaces as a manifold, an irreversible path, by accessing microstates, would require a notion where many 1-forms, each defined at a point on the macrostate manifold, are associated to a path. I could then imagine that, given an underlying theory for the microstates such as quantum mechanics, this path-dependence should come into light. Is there a path already laid out in this direction?
Answer: For your first question, I guess you could, but it would be a mere translation of the inequalities of integrals. For example, when people write $dS\geq \delta Q/T$, they mean $\int dS\,(=\Delta S)\geq \int \delta Q/T$ along all paths. Mathematically, you can prove that this defines a partial order on the $1$-forms, but it is a bit pedantic, and thinking in terms of integrals is more transparent and physically relevant.
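To make that relation explicit (a sketch, not from the original answer): for $1$-forms $\alpha, \beta$ and a class $\Gamma$ of admissible paths, define
$$
\alpha \succeq \beta \quad :\Longleftrightarrow \quad \int_\gamma \alpha \;\geq\; \int_\gamma \beta \quad \text{for all } \gamma \in \Gamma .
$$
This relation is reflexive and transitive; it becomes a partial order once forms with equal integrals along every admissible path are identified.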
For your second question, by construction you want to build $S$ as a state function. This is why $\Delta S$ between two states is defined by $\int \delta Q/T$ along a reversible path linking the two (with the implicit assumption that one exists). This is consistent thanks to the Clausius equality, which states that $\oint \delta Q/T=0$ for a reversible cycle. With this definition, the above inequality can be derived easily. I know that people sometimes introduce the notion of "created entropy", defined as $\delta S_c = dS-\delta Q/T$, which captures this path-dependent $1$-form, and with which the second law becomes $\delta S_c \geq 0$. This is the closest notion to what you hinted at as a path-dependent entropy.
Finally, for your final question on the microscopic origin of the 2nd principle, the coarse graining approach is the usual way to interpret it, as was proposed by Jaynes. Careful though, the coarse graining is done on macrostates. In QM, there is also the notion of partial trace and measurement which gives rise to entropy increase. In this case the order in which you coarse grain/measure will influence the end result.
Hope this helps, and tell me if you find any mistakes. | {
"domain": "physics.stackexchange",
"id": 87926,
"tags": "thermodynamics, statistical-mechanics, differential-geometry, entropy, reversibility"
} |
Dimensions of momentum? | Question: I am learning relativity in college, and in our class our lecturer explained four-momentum. Then, reading a book on QFT, I saw the momentum written as $p^{\mu} = (E,p^i)$. Why is one of the components energy? Energy and momentum have different dimensions, or is it different in quantum field theory?
Answer: Actually, you are correct in that energy and momentum have different dimensions. What is actually happening is that in the book you are reading, the author is using units (called "natural units") in which the speed of light $c=1$. The momentum four-vector can be written explicitly as $\hat p = [\frac{E}{c},p_i]$.
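As a quick numerical illustration (a sketch, not part of the original answer; the function names are mine), one can check that dividing the energy by $c$ yields a quantity with the dimensions of momentum, and that with $c=1$ the two components can be listed together directly:

```python
import math

c = 299_792_458.0  # speed of light in m/s

def four_momentum_si(m, p):
    """(E/c, p) in SI units: both components carry units of kg*m/s."""
    E = math.sqrt((p * c) ** 2 + (m * c ** 2) ** 2)  # E^2 = (pc)^2 + (mc^2)^2
    return (E / c, p)

def four_momentum_natural(m, p):
    """(E, p) in natural units (c = 1), so E = sqrt(p^2 + m^2)."""
    return (math.sqrt(p ** 2 + m ** 2), p)
```

For a photon ($m=0$) the first component $E/c$ equals $p$ exactly, and in natural units the invariant $E^2 - p^2 = m^2$ holds for any mass.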
Click here for more about natural units. | {
"domain": "physics.stackexchange",
"id": 72050,
"tags": "special-relativity, momentum, speed-of-light, dimensional-analysis, absolute-units"
} |
Can we call this the total charge of an infinite planar sheet? | Question: I'm following the NCERT textbook for physics, and I was learning about the field due to an infinite planar sheet of charge. In it, they obtain the electric field of the infinite planar sheet from the charge enclosed within a Gaussian surface:
Title of subtopic: Field due to uniformly charged infinite plane sheet
Then : Therefore the net flux through the Gaussian surface is $2 EA$.
The charge enclosed by the closed surface is $\sigma A$.
Therefore by Gauss’s law, $E=\frac{\sigma}{2\epsilon_{0}}$
My doubt is: how can this account for all of the charge in the plane sheet? They use this same formula for problems too, like this one:
Two large, thin metal plates are parallel and close to each other. On their inner faces, the plates have surface charge densities of opposite signs and of magnitude $17.0 \times 10^{-22}\ \mathrm{C/m^2}$. What is $E$: (a) in the outer region of the first plate, (b) in the outer region of the second plate, and (c) between the plates?
They use the same formula. How can this be possible? If we want to find the whole charge of the plate, we need to integrate, right? But here we are not integrating, just using the formula. Have I misunderstood something?
Answer: You are dealing with approximations of situations which occur in the real world.
The approximations are made to make the analysis of a real situation easier whilst at the same getting a result which is not too far from that obtained by undertaking more complex analysis.
There is no such thing in the real world as an infinite charged plate.
If you have a charged plate whose dimensions are very large compared with the distance from the plate at which you are trying to find the electric field then the field from that plate of finite size approximates to that of an imaginary plate of infinite size.
If the two parallel plates have linear dimensions which are very much larger than the separation of the plates, then most of the electric field between the two plates would be approximately the same as if the plates were infinite in extent.
In the example that you have given, you are to assume as an approximation that the "fringe" field outside the parallel plate arrangement is zero.
In the real world that can never be so but the approximation gets better as the separation of the plates decrease and/or the dimensions of the plates increase.
Here are two images in the article Solving the Generalized Poisson Equation with the Finite-Difference Method which illustrate the true complexity of the parallel plate arrangement, which you usually get around by assuming that the linear dimensions of the plates are much greater than their separation.
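As a sanity check of the approximation (a sketch; the numbers are taken from the NCERT problem quoted in the question), the fields of the two oppositely charged faces cancel outside and add between the plates:

```python
EPS0 = 8.854e-12   # vacuum permittivity in C^2 N^-1 m^-2
sigma = 17.0e-22   # surface charge density in C/m^2, from the quoted problem

E_single = sigma / (2 * EPS0)    # one "infinite" sheet: sigma / (2 eps0)
E_outside = E_single - E_single  # opposite-sign contributions cancel outside
E_between = 2 * E_single         # contributions add between the plates: sigma / eps0
```

This gives $E \approx 1.92 \times 10^{-10}\ \mathrm{N/C}$ between the plates and zero outside, which is the standard answer obtained with the infinite-sheet approximation.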
As has been pointed out the total charge on an "infinite" plate is infinite but that should not bother you because you will never come across such a plate except in your imagination. | {
"domain": "physics.stackexchange",
"id": 94905,
"tags": "homework-and-exercises, electrostatics, electric-fields, charge, gauss-law"
} |
The training process of a conditional GAN | Question: For example, consider a dataset like MNIST. I give the conditional vector to produce only the number $7$ for both the generator and discriminator. In the following scenarios, what will the discriminator classify as fake or real:
The generator produces realistic numbers other than $7$, such as a realistic number $9$ ?
The samples from the MNIST dataset that are not the number $7$ (i.e., other numbers) ?
Answer: I assume you mean how to label the image and class inputs since the discriminator can reasonably output either "real" or "fake" labels for either of those inputs, and you generally want to be training with an imperfect discriminator.
In both your scenarios the correct ground truth for training the discriminator is "fake", although it may be better to think of it as "incorrect" in the case of mislabeled real inputs.
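These labeling rules can be summarized in a small sketch (the function and its string encoding are my own illustration, not from any particular framework):

```python
def discriminator_target(source, image_class, condition):
    """Ground-truth label for the discriminator in a conditional GAN.

    source: "generator" or "dataset"; image_class: the digit the image shows;
    condition: the class fed in via the conditioning vector.
    """
    if source == "generator":
        # generator samples are always "fake" for the discriminator,
        # even when they look like a realistic digit of another class
        return "fake"
    # a real image only counts as "real" if it matches the condition
    return "real" if image_class == condition else "fake"
```

So a realistic generated 9 under condition 7 is labeled fake, and a real MNIST 9 paired with condition 7 is also labeled fake (better thought of as "incorrect"); when training the generator, the same generated sample is instead labeled real.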
You may also reasonably decide not to train with mislabeled real images. They are not necessary, and although they might improve the discriminator training, that's not going to make a difference for the MNIST digits task.
You shouldn't train with deliberately mislabeled generator images, either. If the generator accidentally makes a "1" when you asked it to generate a "7", then you should label ground truth as "fake" for training the discriminator and "real" for training the generator; in both cases you should include the attempted "7" as input to the discriminator alongside the generated image. | {
"domain": "ai.stackexchange",
"id": 4054,
"tags": "machine-learning, deep-learning, convolutional-neural-networks, computer-vision, generative-adversarial-networks"
} |
Occurrence of turbulence in fluid dynamics from the equations of motion? | Question: How can it be shown that turbulence occurs in fluid dynamics?
I think people imply that it develops because of the $\text{rot}$ terms in the equations of motion, i.e. the Navier-Stokes equations, but I don't see how. Of course, these cross-derivatives are large for a curled-up vector field, but why do these expressions force the fluid onto an inward spiraling trajectory, and then even multiples of these?
Is there maybe an illustrative explicit calculation in two dimensions?
Answer: There is a simple general argument for why you get small-scale motion from large-scale motion in any nonlinear nonintegrable continuous mechanical system, whether it is fluids, or electromagnetic waves interacting with charged plasmas, or surface waves on water, or anything nonlinear at all. This argument must break down for those special cases where the usual turbulence doesn't occur, like 2D fluids.
The reason is the ultraviolet catastrophe--- the idea that to get to thermal equilibrium, all modes have to have the same amount of energy. Any mechanical system is only in statistical equilibrium when all its modes have about the same amount of energy. This Boltzmann equilibrium is therefore unattainable for smooth motions of continuous fields, because it requires that you divide a finite energy between infinitely many modes, most of which involve very short wavelengths.
The finite-energy statistical equilibrium for any continuous field is then a zero-temperature state where all modes contain an infinitesimal amount of energy, which is the partition of the initial energy. The energy cascade is the method by which a fluid tries to do the partition, by sending energy down into short-wavelength modes in a random-looking way, to get closer to the statistical equilibrium state. Since this is impossible, you just get a continuous draining of energy from long-wavelength motion to short-wavelength motion, and when this draining process reaches a scale-invariant steady state, we call the situation isotropic homogeneous turbulence.
There is always damping in a physical system, and damping drains energy into molecular motion in one step, not by a nonlinear cascade, just by thermodynamically converting the energy to heat. This one-step process is only relevant in fluids at short wavelengths, because it goes like the gradient of the velocity. A uniform velocity carries energy, but by Galilean invariance, it has no dissipative damping.
Because of Galilean invariance, in fluids there is an arbitrarily large separation of scales between the distance scales where nonlinearity is important and the much smaller scales where damping is important. In between, you get a regular nonlinear mixing which drains energy from long-wavelength modes to short-wavelength modes, without significant damping, and in a statistically random way, because any tiny perturbation to the long-wavelength modes will produce completely different short-wavelength modes, the process spreading out into the much larger phase space of the short-wavelength modes in an unstable, chaotic way.
One dimension
This argument only fails in certain special cases. It fails generically in 1+1 dimensions, when space is a line, because in a line there are only two modes at any wavenumber k. You still have an inaccessible Boltzmann state, because there are infinitely many k, but there is no growth in the number of modes with k, as there is in 2D and higher. So energy is just as likely to move to smaller k as to larger k, and if you have turbulence, it is more like energy diffusion, where the energy random walks from smaller to larger k, without any particular reason to get to larger k except if it randomly happens to get there.
This means that you can easily have energy in a certain number of k modes bound up in a closed motion, and this is reflected in the fact that thermalization in homogeneous 1D systems with local nonlinear interactions is difficult. You run into a lot of soliton solutions, and other special states, where the energy just refuses to get random, but is nonlinearly shared in a non-thermal way between a bunch of low-k modes. This was discovered by Fermi, Pasta, and Ulam, when they tried to simulate the approach to statistical equilibrium in a 1d system using an early computer. Instead of thermalization, they discovered that their model never reached equilibrium, and this was a major motivation for the study of one-dimensional integrable systems.
But 1d systems, as interesting as they are, are rare. This sort of nonsense doesn't happen often in 2d and above, because there are just so many more modes at high k. The number of modes grows as $k^{d-1}$.
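A quick way to see that growth (an illustrative sketch, counting integer lattice modes in a unit-thickness shell):

```python
from itertools import product

def modes_in_shell(k, d):
    """Count lattice points n in Z^d with k <= |n| < k + 1."""
    rng = range(-(k + 1), k + 2)
    return sum(1 for n in product(rng, repeat=d)
               if k * k <= sum(c * c for c in n) < (k + 1) ** 2)
```

In $d=1$ the count stays at 2 (essentially just $\pm k$), while in $d=2$ and $d=3$ it grows roughly like $k$ and $k^2$, which is why in higher dimensions energy overwhelmingly wanders toward large $k$.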
Extra conserved quantities
Nevertheless, there is still no downward cascade for the special case of 2d fluid turbulence. The reason there is that there is a second continuous integral conserved quantity in this special case, the enstropy, which is the square of the curl of the 2d velocity.
$$ S = \int (\partial_y V_x - \partial_x V_y)^2 dx dy$$
This quantity has more derivatives than the energy, which is just $E=\int |v|^2$. The conservation laws require that the total $|v_k|^2$ and the total $k^2 |v_k|^2$ are both conserved, and if you just try to equipartition energy naively, you will increase the enstrophy by a huge amount, because the enstrophy of a high-k motion is just so much bigger. So you can't equipartition energy; you have to equipartition energy and enstrophy together.
The law of enstrophy partition paradoxically requires that the energy in the short modes is small. The enstrophy equipartition is the important dynamics, and the energy ends up cascading the wrong way, from short-wavelength to long-wavelength modes, so that 2d fluids in a periodic box will cascade up to a single flow of two large counter-rotating vortices.
The inverse cascade phenomenon was discovered in the 1960s, and it is most often attributed to Kraichnan. But several people noted that the enstrophy cascade would wreck the traditional ideas for why turbulence occurs.
Generic nonlinear systems
The generic PDEs all have a turbulence when they have a dissipation-free regime, in a regime where the nonlinearity is important but not the damping. This is studied today in models of preheating, in inflationary cosmology, and it is studied within mathematics here and there.
The number of example systems is too large to list; it consists of any nonlinear, nonintegrable equation without extra conserved quantities. Scale-invariant scalar field theory is a simple example, the one relevant for preheating:
$$ \partial_t^2 \phi_k - \nabla^2 \phi_k + \lambda_{ijlk}\phi_i\phi_j\phi_l = f(x,t) $$
With the appropriate choice of coefficients $\lambda$. You can also add mass scales, by adding a linear term in $\phi_k$, or additional quadratic terms which also come with an explicit scale. One is generally interested in the cascade at scales smaller than those defined by the low-order terms, so that the scale invariant cubic nonlinearity is the only important thing.
You should also add a damping, to give a short-distance cut-off analogous to viscosity in turbulence, and you can do this by adding a $\partial_t \nabla^2\phi$ term with the appropriate coefficient, for example. I didn't do that, because in a numerical simulation you can just do damping by artificially zeroing out the very high-k modes, without an explicit local term to do this for you. | {
"domain": "physics.stackexchange",
"id": 2376,
"tags": "fluid-dynamics, turbulence, navier-stokes"
} |