| anchor | positive | source |
|---|---|---|
Enumeration of BPP machines | Question: Can we enumerate all probabilistic Turing machines (with bounded error), like we do for deterministic Turing machines (when using diagonalization arguments against deterministic Turing machines)? If not, does this mean we cannot diagonalize against BPP machines?
Answer: Yes. Enumerate over all pairs $(M,p)$ of probabilistic Turing machines $M$ and polynomials $p$. Interpret each such pair in the following way: on input $x$, run $M$ for $p(|x|)$ steps, and if it doesn't halt, output "yes". | {
"domain": "cs.stackexchange",
"id": 17336,
"tags": "complexity-theory, turing-machines"
} |
Are the halting problem proofs refuted by software engineering? | Question: Can D simulated by H terminate normally?
The x86utm operating system is based on an open source x86 emulator. This system enables one C function to execute another C function in debug step mode. When H simulates D it creates a separate process context for D with its own memory, stack and virtual registers. H is able to simulate D simulating itself, thus the only limit to recursive simulations is RAM.
// The following is written in C
01 int D(int (*x)())
02 {
03 int Halt_Status = H(x, x);
04 if (Halt_Status)
05 HERE: goto HERE;
06 return Halt_Status;
07 }
08
09 void main()
10 {
11 H(D,D);
12 }
Execution Trace
main() calls H(D,D) that simulates D(D) at line 11
keeps repeating:
simulated D(D) calls simulated H(D,D) that simulates D(D) at line 03 ...
Is this clear enough to see that D correctly simulated by H can never terminate normally? (because D remains stuck in recursive simulation)
For any program H that might determine whether programs halt, a
"pathological" program D, called with some input, can pass its own
source and its input to H and then specifically do the opposite of
what H predicts D will do. No H can exist that handles this case.
https://en.wikipedia.org/wiki/Halting_problem
The relationship between H and D exactly matches the above specified pathological relationship.
Simulation invariant: D correctly simulated by H cannot possibly reach its own line 04.
When one applies a simulating (partial) halt decider to the Turing machine based proofs, the same non-terminating result is derived, as indicated by the exact match with the above criteria.
Answer: // The following is written in C
01 int D(int (*x)())
02 {
03 int Halt_Status = H(x, x);
04 if (Halt_Status)
05 HERE: goto HERE;
06 return Halt_Status;
07 }
08
09 void main()
10 {
11 H(D,D);
12 }
Execution Trace
main() calls H(D,D) that simulates D(D) at line 11
keeps repeating:
simulated D(D) calls simulated H(D,D) that simulates D(D) at line 03 ...
D correctly simulated by H can never terminate normally because D remains stuck in recursive simulation.
The fully operational source code of H/D and the x86utm operating system shows that H correctly matches a non-halting behavior pattern that is equivalent to infinite recursion.
H(D,D) fully operational in x86utm operating system: https://github.com/plolcott/x86utm
Source-code of several different partial halt deciders and their sample inputs.
https://github.com/plolcott/x86utm/blob/master/Halt7.c | {
"domain": "cs.stackexchange",
"id": 18551,
"tags": "undecidability, halting-problem, simulation, c, decidability"
} |
What's that tangy smell on cheap plastic? | Question: Most cheap items (mostly from China) have a strong tangy odor to the plastics (or some resin, but I mostly smell it on plastics). The smell is consistent over a broad range of products, and I have been noticing it for at least a decade, so I do not believe this is a one-off thing; I bet most people have experienced it. The only variation is intensity.
On some really bad products that smell impregnates other things and I failed to remove it with soap, baking soda, acid, scrubbing...
I'm curious to know what exactly is that smell.
Apparently it has gotten some attention already, but there's no consensus, and a lot of crying wolf(?), e.g. the campaign "Halt the Import of Chemical-Emitting Smelly Plastic from China".
Answer: Without a sample and a gas chromatograph it is hard to say exactly what it could be, but... there are at least three well-known sources of odour in plastics:
Some residue of the monomer that makes up the plastic (which is a polymer).
Some residue of another substance used during the manufacturing process (catalysts, co-polymers, modifiers and so on). For example, PET can sometimes contain small traces of terephthalic acid.
Some odourizing substance that is added to the plastic just to cover other smells.
Usually, tangy smells are a symptom of some kind of ester. | {
"domain": "chemistry.stackexchange",
"id": 267,
"tags": "everyday-chemistry, plastics, smell"
} |
Components of State vector in quantum mechanics | Question: I am currently learning quantum mechanics and I am trying to understand the connection between the wave function and the state vector.
Is it correct to say that the components of the state vector of a quantum system are all possible wavefunctions in that state?
Answer: A state vector is just a wavefunction where the domain doesn't need to be $\mathbb{R}^d$. Similarly, you can call wavefunctions vectors because the functions used in quantum mechanics (square integrable ones) are elements of a space which is closed under addition and scalar multiplication. I.e. a vector space.
A simple example is a large atom with a magnetic moment. In the Stern-Gerlach experiment, quantum fluctuations of its position are negligible but its spin (which can be up or down) is highly probabilistic. So instead of a wavefunction $\psi: \mathbb{R}^3 \to \mathbb{C}$, we just need $\psi: \{0,1\} \to \mathbb{C}$. A function from a two element set to $\mathbb{C}$ can be regarded as a two component vector so we can write this object as
\begin{align}
\left | \psi \right > = \begin{pmatrix} a \\ b \end{pmatrix}
\end{align}
where $|a|^2$ is the spin-up probability and $|b|^2$ is the spin-down probability. This is also written
\begin{align}
\left < + | \psi \right > = a, \quad \left < - | \psi \right > = b.
\end{align}
Depending on how formal you want to be, the normalization condition $|a|^2 + |b|^2 = 1$ can also be seen as an integral of $|\psi|^2$ except using the counting measure instead of the Lebesgue measure.
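For instance, the discrete normalization can be checked numerically; the amplitudes below are arbitrary sample values, not taken from the text:

```python
import numpy as np

# |psi> = a|+> + b|->, with arbitrary sample amplitudes.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
a, b = psi

# Spin-up / spin-down probabilities...
p_up, p_down = abs(a) ** 2, abs(b) ** 2

# ...and the "integral" of |psi|^2 with the counting measure is a plain sum.
norm = p_up + p_down
```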
We can also consider the opposite case. A particle with no spin but probabilistic position on a line. Position is a continuous degree of freedom so our $\psi: \mathbb{R} \to \mathbb{C}$ will be a vector with "a continuum of components". Instead of being able to project $\left | \psi \right >$ onto just two basis states ($\left < + \right |$ or $\left < - \right |$), we will be able to project it onto infinitely many position eigenstates. One for each position we can imagine the particle to have. The eigenstates can be labelled by $\left < x \right |$ and the projection by $\left < x | \psi \right >$ which is nothing more than $\psi(x)$ in the usual function notation.
Since we now have an all-encompassing framework that accounts for both continuous and discrete degrees of freedom, we can combine the two and consider things like $\psi: \mathbb{R} \times \{0, 1\} \to \mathbb{C}$ for particles with uncertain positions and spins. This allows us to consider projections like $\left < +, x | \psi \right >$ or $\left < -, y | \psi \right >$. In this situation, textbooks often use a hybrid notation where the $2 \times \infty$ components of the state vector are written as two functions worth of components. Something like
\begin{align}
\left | \psi \right > = \begin{pmatrix} \psi_+(x) \\ \psi_-(x) \end{pmatrix}.
\end{align}
This may have given rise to your notion of a vector consisting of many wavefunctions. | {
"domain": "physics.stackexchange",
"id": 89823,
"tags": "quantum-mechanics, hilbert-space, wavefunction, notation, quantum-states"
} |
How much energy from gravitational waves does the sun absorb? | Question: I was wondering how much energy from gravitational waves the sun could absorb since it is so big and also has a massive gravitational pull. Is it possible for the sun to trap gravitational waves within its volume and eventually have them absorbed by the hydrogen? If not, why would it not be possible? This question was based on the idea that maybe pulsars could absorb gravitational waves and spin because of it. I am wondering whether such a thing would make the sun spin faster or get hotter.
Answer: That's an interesting question, but also a hard one. There isn't a lot of convincing work dealing with this regime — mostly people have been interested in "tenuous" matter like the interstellar/intergalactic medium, to understand whether we'd even see distant gravitational-wave sources. For example, see this paper, another paper by Hawking, or this paper by Dyson. Hawking's result is probably most relevant, but it's also hard to compute, and relies on viscosity in the Sun, which I can't find good numbers for. A full treatment would probably require some sort of plasma dynamics and detailed stellar structure.
Just for a rough idea of what we're talking about, Dyson calculates absorption via elastic motion of the Earth's surface, which is probably a terrible approximation to the Sun's behavior, but comes up with a fraction $10^{-21}$ of the energy being absorbed. My guess is that the Sun would absorb a fraction somewhere very roughly in that neighborhood — I wouldn't be surprised at $10^{-10}$ or $10^{-30}$.
So it may be helpful to look at how much energy from a typical binary black-hole merger actually passes through the Sun. Looking at the JSON file supplied with the LIGO/Virgo/KAGRA catalog, it turns out that the first detection, GW150914, was the best case for this (as of this writing). The total energy given off was $3.1 M_\odot\, c^2$, and was about $440\,\mathrm{Mpc}$ away. I calculate that just over $10^{18}\,\mathrm{J}$ of gravitational-wave energy passed through the Sun from that event.
Now, remember that the Sun would absorb only a small fraction of that energy. But for comparison, the Sun emits about $4\times 10^{26}\,\mathrm{J}$ every second. So even if the Sun absorbed all that gravitational-wave energy (which it wouldn't), it would only gain about one billionth the energy that it emits every few seconds.
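To make that last comparison explicit, here is the arithmetic with the round numbers already quoted above (both inputs are order-of-magnitude figures, not precise values):

```python
# Back-of-the-envelope version of the comparison in the text.
e_gw = 1e18         # J, GW energy that crossed the Sun during GW150914
luminosity = 4e26   # J/s, the Sun's radiated power

# Time over which the Sun radiates the same energy it would gain even
# with (impossible) 100% absorption: a few nanoseconds' worth of output.
equivalent_seconds = e_gw / luminosity
```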
Of course, any energy it does absorb would indeed make the Sun that tiny bit hotter. But the energy would be absorbed symmetrically, so there wouldn't be any change in angular momentum at all. In fact, because angular momentum would be conserved, and the temperature increased ever so slightly, you might argue that the Sun would increase in size eeeevver so slightly, which means that its rotation rate would have to slow down. But these effects would surely be immeasurably small compared to random fluctuations in all parts of the Sun. | {
"domain": "physics.stackexchange",
"id": 90812,
"tags": "energy, gravity, gravitational-waves, sun"
} |
EIRP conversion from a value given at one polarization to both polarizations when the antenna operates at both polarizations | Question: For an antenna that operates in both polarizations, how do I convert its EIRP density, given in dBW/Hz assuming one polarization, to the dBW/Hz value for both polarizations?
For instance, if EIRP = -76.47 dBW/Hz/pol (this is the value assuming one polarization), technical documents simply give -73.46 dBW/Hz for both polarizations. I'm not sure how we are getting about 3.01 dB more.
Answer: The extra 3.01 dB is simply $10\log_{10}2$, a factor of 2 accounting for equal power in each of the two polarizations. | {
"domain": "dsp.stackexchange",
"id": 11641,
"tags": "digital-communications, power-spectral-density"
} |
What would a super bright red look like? | Question: I was watching this video, which talks about why violet and indigo in the rainbow can be shown on a computer screen even though the screen cannot produce light with a shorter wavelength than blue.
It got me wondering, though: brighter and brighter blues are shown on a computer screen as more and more white. That makes sense from a camera perspective, since the colour filters used over the CCD to produce coloured pixels aren't perfect, so more of the other colours leak through and the image turns white.
When it comes to the human eye, though, there isn't a filter but a different molecule (the same molecule held by a different protein), so if a particular wavelength is required to activate the molecule, then, much like in the gold leaf experiment, the eye can't see more blue when a super bright red is incident on it. Can a super bright red look white then?
What would a very intense low frequency (red) light look like?
I was originally going to ask about blue, but the gold leaf experiment analogy doesn't work that way around.
Answer: It's possible to briefly experience colors outside the usual color gamut by exploiting after-image effects. There are various options, as explained in the Wikipedia article, Impossible color
Impossible colors are colors that do not appear in ordinary visual functioning. Different color theories suggest different hypothetical colors that humans are incapable of seeing for one reason or another, and fictional colors are routinely created in popular culture. While some such colors have no basis in reality, phenomena such as cone cell fatigue enable colors to be perceived in certain circumstances that would not be otherwise.
Colors that appear more saturated than what your eye can actually detect are called hyperbolic colors.
Here's a simple demo of hyperbolic red. (This site doesn't permit JavaScript, so the demo runs on the SageMathCell server).
Stare at the black dot in the middle of the cyan square for 30 seconds or so, then click it to turn the square red. You may get an impression of a "redder than red" color for a second or two, due to the after-image.
You may get a better result with a different complementary color pair, rather than cyan & red.
Here's a demo of hyperbolic blue. You need to open it in a new tab or window to make it change color.
You can play with other hues using my Python script on Github. | {
"domain": "physics.stackexchange",
"id": 80621,
"tags": "visible-light, vision"
} |
Extended Kalman Filter in robotics - Worth it? | Question: I wonder whether the Extended Kalman Filter (EKF) is used in robotics, or whether only the Kalman Filter (KF) is used.
The Kalman Filter is included in Linear Quadratic Gaussian (LQG) controllers. But how does the EKF work in practice?
I know how to build an Extended Kalman Filter just by linearizing the mathematical model around the estimated state vector.
What is your experience with the EKF?
Answer: The Kalman filter is an optimal linear filter in the presence of Gaussian noise. It is optimal in the sense that it minimizes the mean-squared error. This means that the covariance of the estimated states will be minimized:
$$
P = E\{(x_k - \hat{x}_{k|k})(x_k - \hat{x}_{k|k})^T\}.
$$
As this covariance is minimized, the goal of any kind of estimation filtering is accomplished -- the error between the true state $x_k$ and the estimated state $\hat{x}_{k|k}$ is made as small as possible [1] [2].
The Extended Kalman filter is more or less a mathematical "hack" that allows you to apply these techniques to mildly nonlinear systems. I say "hack" because it is sub-optimal, i.e., it does not have any mathematical guarantees like the KF does. If you initialize the filter with poor conditions (i.e., the initial state), it will quickly diverge. If propagation and/or measurement updates happen at too great a timestep, it will quickly diverge. As you state in your question, you build an EKF by linearizing your system -- which implies that you lose information about your system, and if you operate too far from your linearization point, the solution will be incorrect (like trying to fit a line to a quadratic).
There is another standard implementation of Kalman filters called the Unscented Kalman Filter (UKF). The UKF is ideal for grossly nonlinear systems because: (1) you don't have to do the math / computation to linearize, and (2) it intelligently chooses sigma points to propagate (instead of an entire model) that best represent the statistics of the system. The UKF kind of bridges "the best of both worlds" from Kalman filtering (which assumes a Gaussian noise and belief propagation) and nonlinear Particle filtering (which makes no assumption of the form of the belief propagation), another type of Bayes filter.
In my experience (3-wheeled ground robots, aerial robots), an EKF is a nice and computationally efficient way to estimate state and fuse information from multiple sensors (accel, gyro, airspeed, camera). Tuning can often be a pain, but the more intuition you have about your system and the noise of your sensors, the easier it becomes. Because of the EKF's efficiency, relative ease-of-implementation, and demonstrated effectiveness, it is used widely in robotics.
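To make the linearize-then-filter recipe concrete, here is a minimal EKF sketch in Python (in the spirit of Labbe's notebooks). The toy dynamics, sensor model, and noise values are invented for illustration only, not taken from any of the systems above:

```python
import numpy as np

# Toy EKF: a point moving on a line with slight velocity decay, observed
# through a nonlinear range sensor (distance to a point 1 unit off the
# line). All models and noise values here are invented.

def f(x):
    """State transition for x = [position, velocity]."""
    return np.array([x[0] + 0.1 * x[1], 0.99 * x[1]])

def F_jac(x):
    """Jacobian of f (constant here because this f happens to be linear)."""
    return np.array([[1.0, 0.1], [0.0, 0.99]])

def h(x):
    """Nonlinear measurement: range to a sensor 1 unit off the line."""
    return np.array([np.sqrt(x[0] ** 2 + 1.0)])

def H_jac(x):
    """Jacobian of h, evaluated at the predicted state."""
    return np.array([[x[0] / np.sqrt(x[0] ** 2 + 1.0), 0.0]])

def ekf_step(x, P, z, Q, R):
    # Predict: propagate the estimate and linearize f around it.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update: ordinary Kalman update using the linearized measurement H.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With measurements generated from a matching truth model, the position estimate locks onto the true state within a few steps; start it far from the truth or linearize too coarsely and it diverges, exactly as described above.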
For more information on applying an EKF to wheeled robots (or aircraft) I would direct you to Dr. Randy Beard's notes on Robot Soccer and Roger Labbe's iPython Notebooks on Bayesian filters.
[1] R. W. Beard and T. W. McLain, Small Unmanned Aircraft. Princeton, New Jersey: Princeton University Press, 2012.
[2] R. Faragher, “Understanding the basis of the Kalman filter via a simple and intuitive derivation [lecture notes],” IEEE Signal Process. Mag., vol. 29, no. 5, pp. 128–132, 2012. | {
"domain": "robotics.stackexchange",
"id": 1472,
"tags": "control, robotic-arm, kinematics, kalman-filter, industrial-robot"
} |
Is this language Context-Free? | Question: Is the language
$$L = \{a,b\}^* \setminus \{(a^nb^n)^n\mid n \geq1 \}$$
context-free? I believe that the answer is that it is not a CFL, but I can't prove it by Ogden's lemma or Pumping lemma.
Answer: Hint:
Yes
Solution:
$$\{(a^n b^n)^n \mid n \geq 1 \} = \{a^{n_1} b^{n_2} \dots a^{n_{2k-1}} b^{n_{2k}} \mid k \geq 1 \land n_1 = k \land \forall i.\, n_i = n_{i+1} \}$$
and therefore the complement consists of all strings not of that alternating-block form at all, together with
$$\{a^{n_1} b^{n_2} \dots a^{n_{2k-1}} b^{n_{2k}} \mid n_1 \neq k \lor \exists i.\, n_i \neq n_{i+1}\}$$
which is context-free as you can easily write a nondeterministic PDA. | {
"domain": "cs.stackexchange",
"id": 325,
"tags": "formal-languages, context-free, pumping-lemma"
} |
What meteorological data is needed for air quality machine learning models? | Question: Are there any studies that discuss what meteorological data I should look at if I am creating an air quality model with machine learning? That is, what should I extract from NCEI or ERA5 first if I have limited storage capacity?
Something like wind speed is very obvious, but what else should I be looking for?
Answer: The variables needed for machine learning are different from those needed by a standard air quality model. It's actually a much shorter list, because you don't have to numerically model atmospheric motion/chemistry/deposition.
Most AQMs have a spatial domain that has horizontal and vertical depth. However, the machine learning forecasts I've seen are only for point locations where meteorological and air quality variables are measured. That is to say, the machine learning model is trained using variables from a ground station, and is only valid for that location. You can train multiple locations using similar variables, but each station would have its own set of data to train with.
I've seen machine learning forecasts from Washington State University for ozone and PM2.5, and the list of important meteorological variables are basically the same.
For ozone they say:
Hourly data for six meteorological variables (temperature and relative
humidity at 2m, wind components u & v, planetary boundary layer height, and sea level pressure) from 4km WRF archives, time information (month, weekday and hour), and the previous day’s observed moving 8-hour averaged O3 mixing ratios are used to train the RF classifier models.
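For a storage-limited first pull, that quoted list maps onto a small set of ERA5 single-level fields. Below is a sketch of a request body; the variable names follow the CDS catalogue naming as I recall it, so verify them against the current ERA5 documentation (relative humidity at 2 m is not stored directly -- it is derived from 2 m temperature and dewpoint):

```python
# Shortlist of ERA5 single-level variables matching the six
# meteorological inputs quoted above.
era5_variables = [
    "2m_temperature",
    "2m_dewpoint_temperature",   # used to derive 2 m relative humidity
    "10m_u_component_of_wind",
    "10m_v_component_of_wind",
    "boundary_layer_height",
    "mean_sea_level_pressure",
]

request = {
    "product_type": "reanalysis",
    "variable": era5_variables,
    "format": "netcdf",
    # area/date/time keys omitted here; restrict them tightly to save space
}
```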
For more info you can see the latest presentation given. | {
"domain": "earthscience.stackexchange",
"id": 2344,
"tags": "air-pollution, open-data, air-quality, machine-learning"
} |
Using Enum to Handle String literals | Question: I have a Java component that uses a lot of string literals that I need to compare against, returning booleans based on these comparisons.
To make the code more robust, I externalized these strings first as class constants; then, after other classes started to use these constants, I had to separate them out to decrease the coupling between the classes.
Knowing that the best practice is to avoid classes dedicated solely to holding string constants, and since I am using Java 6, I decided to go for enums. Below is the implementation that I had in mind:
public enum SecurityClassification {

    SENSITIVE("Sensitive"), HIGHLY_SENSITIVE("Highly Sensitive"), PUBLIC("Public"), INTERNAL("Internal");

    private String value;

    private SecurityClassification(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }

    public boolean hasValue(String param) {
        return value.equalsIgnoreCase(param);
    }

    public static SecurityClassification enumForValue(String param) {
        for (SecurityClassification securityClassification : SecurityClassification.values()) {
            if (securityClassification.getValue().equals(param)) {
                return securityClassification;
            }
        }
        return null;
    }
}
I was wondering specifically about the enumForValue method: is this an optimal solution? Is there a better way?
Answer: It looks fine overall, but...
A few pointers
You should make private String value final too, though, to clearly indicate that it cannot be modified after instantiation.
One thing to note about values() is that it always returns a new array, so for that reason it may sometimes be recommended to also construct a lookup Map<String, SecurityClassification> to avoid the extra array creation.
Also, is it really OK to just return null if an invalid security classification is specified here? Depending on your implementation, you may want to consider whether you should throw an IllegalArgumentException here to have a slightly better modelling of such cases.
How is hasValue() used? In fact, can it be used in enumForValue() for a case-insensitive comparison?
Finally, enumForValue() may seem like a mouthful, you can consider a shorter name like of(). The other thing to consider is that you don't really need to express that it's an enum this method is returning. | {
"domain": "codereview.stackexchange",
"id": 18904,
"tags": "java, performance, enum"
} |
Design RRC filter | Question: I have designed an RRC filter, as below:
clear all; clc;
Ts = 8.1380e-06; %sampling rate
Nos = 24; %upsampling factor
alpha = 0.5; %Roll-off factor
t1 = [-6*Ts:Ts/Nos:-Ts/Nos];
t2 = [Ts/Nos:Ts/Nos:6*Ts];
r1 = (4*alpha/(pi*sqrt(Ts)))*(cos((1+alpha)*pi*t1/Ts)+(Ts./(4*alpha*t1)).*sin((1-alpha)*pi*t1/Ts))./(1-(4*alpha*t1/Ts).^2);
r2 = (4*alpha/(pi*sqrt(Ts)))*(cos((1+alpha)*pi*t2/Ts)+(Ts./(4*alpha*t2)).*sin((1-alpha)*pi*t2/Ts))./(1-(4*alpha*t2/Ts).^2);
r = [r1 (4*alpha/(pi*sqrt(Ts))+(1-alpha)/sqrt(Ts)) r2];
The creation of that filter follows the theoretical formulas. But the problem is that when I set the roll-off to 0.5, the filter comes out as shown below:
[plot of the resulting filter omitted]
What is the cause of those undefined values (NaN)? However, when I set the roll-off to 0.9, it's fine.
Answer: You have $\frac{0}{0}$ situation, similar to evaluating $\frac{\text{sin}(x)}{x}$ for $x = 0$. It's a well defined result but you can't directly compute it this way.
In your case that happens if $\frac{t}{T_s} = \pm0.5$ since the denominator $\bigg(1-\big(4\alpha \frac{t_2}{T_s}\big)^2\bigg)$ becomes 0. The numerator ALSO becomes zero and the result is well defined.
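One robust fix is to compute the limiting values analytically and patch them in. Here is a hedged NumPy sketch; the limit expressions are the standard textbook results (double-check them against your own derivation), and the function and parameter names are mine, not from the script above. Unlike the original grid, this one keeps t = 0 and patches it too:

```python
import numpy as np

def rrc_impulse(alpha, Ts, Nos, span=6):
    """Root-raised-cosine impulse response with the 0/0 samples patched.

    alpha: roll-off, Ts: symbol period, Nos: oversampling factor,
    span: filter half-length in symbols.
    """
    t = np.arange(-span * Nos, span * Nos + 1) * Ts / Nos
    h = np.empty_like(t)

    # Samples where the closed-form expression evaluates to 0/0.
    at_zero = np.isclose(t, 0.0)
    at_sing = np.isclose(np.abs(4 * alpha * t / Ts), 1.0)
    ok = ~(at_zero | at_sing)

    # Closed-form RRC response everywhere else (algebraically the same
    # expression as in the MATLAB script above).
    tt = t[ok] / Ts
    h[ok] = (np.sin(np.pi * tt * (1 - alpha))
             + 4 * alpha * tt * np.cos(np.pi * tt * (1 + alpha))) \
            / (np.pi * tt * (1 - (4 * alpha * tt) ** 2) * np.sqrt(Ts))

    # Analytic limits at the singular points (pen-and-paper results).
    h[at_zero] = (1 - alpha + 4 * alpha / np.pi) / np.sqrt(Ts)
    h[at_sing] = (alpha / np.sqrt(2 * Ts)) * (
        (1 + 2 / np.pi) * np.sin(np.pi / (4 * alpha))
        + (1 - 2 / np.pi) * np.cos(np.pi / (4 * alpha)))
    return t, h

t, h = rrc_impulse(alpha=0.5, Ts=8.1380e-06, Nos=24)
# No NaNs now, even though t/Ts = +/-0.5 falls exactly on this grid.
```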
You have a few choices here:
Isolate the zero in the denominator and calculate the result manually with pen and paper.
Isolate the zero and just "nudge" this x-value by a very small number. Maybe $10^{-20}$ or so.
Choose a grid for your sampling that doesn't include $\frac{t}{T_s} = 0.5$ | {
"domain": "dsp.stackexchange",
"id": 9266,
"tags": "filters, digital-communications, filter-design, digital-filters"
} |
Measuring the water vapour of room | Question: Is it possible to measure the water vapour of my room in kg/litre if I know the volume, temperature and humidity of my room? If it is possible, then how can I measure it?
Answer: If you know the absolute humidity or if you know the relative humidity and the temperature, it is easy. There are tons of calculators online, such as this one. If you look around a bit more, you'll find plenty of formulas for calculating dewpoint, densities, and so on. | {
"domain": "physics.stackexchange",
"id": 26446,
"tags": "humidity"
} |
My implementation of FixedSizedPriorityQueue | Question: I implemented a fixed sized priority queue.
How can I improve my code?
public class FixedSizedPriorityQueue {

    private final int capacity;
    private boolean sorted = false;
    private double smallest = Double.POSITIVE_INFINITY;
    private ArrayList<Double> list;

    protected static final Comparator<Double> comparator = new Comparator<Double>() {
        @Override
        public int compare(Double o1, Double o2) {
            return o2.compareTo(o1);
        }
    };

    public FixedSizedPriorityQueue(int capacity) {
        this.capacity = capacity;
        list = new ArrayList<Double>(capacity);
    }

    public void add(double value) {
        if (list.size() < capacity) {
            list.add(value);
            sorted = false;
            if (value < smallest) {
                smallest = value;
            }
        } else {
            if (value > smallest) {
                if (!sorted) {
                    Collections.sort(list, comparator);
                    sorted = true;
                }
                list.set(capacity - 1, value);
                if (list.get(capacity - 2) > value) {
                    smallest = value;
                } else {
                    smallest = list.get(capacity - 2);
                    sorted = false;
                }
            }
        }
    }

    public int getSize() {
        return list.size();
    }

    public Double poll() {
        if (list.size() == 0) return null;
        if (!sorted) {
            Collections.sort(list, comparator);
            sorted = true;
        }
        Double value = list.get(0);
        list.remove(0);
        if (list.size() == 0) smallest = Double.POSITIVE_INFINITY;
        return value;
    }

    public void print() {
        for (int i = 0; i < list.size(); i++) {
            System.out.println(i + ": " + list.get(i));
        }
        System.out.println("Smallest: " + smallest);
    }
}
Answer:
The following is used multiple times and it could be extracted out:
if (!sorted) {
    Collections.sort(list, comparator);
    sorted = true;
}
to a helper method:
private void sortIfNecessary() {
    if (sorted) {
        return;
    }
    Collections.sort(list, comparator);
    sorted = true;
}
Note the guard clause.
List.remove returns the removed element.
Double value = list.get(0);
list.remove(0);
The code above could be simplified to this:
final Double value = list.remove(0);
The comparator could be substituted with the reverseOrder method/comparator:
Collections.sort(list, Collections.reverseOrder());
The list could be final. final helps readers, because they know that the reference always points to the same instance and it doesn't change later. https://softwareengineering.stackexchange.com/questions/115690/why-declare-final-variables-inside-methods but you can find other questions on Programmers.SE in the topic.
Furthermore, its type could be only List<Double> instead of ArrayList<Double>. See: Effective Java, 2nd edition, Item 52: Refer to objects by their interfaces
The print method violates the single responsibility principle. Other clients may want to write the elements to a log file or a network socket, so you shouldn't mix the queue logic with the printing. I'd consider creating a List<Double> getElements method (which would return a copy or unmodifiable version of list, to make sure that malicious clients do not modify the inner state of the queue) or providing an iterator to clients, and moving the print method to another class.
Does it make sense to create a queue with negative capacity? If not, check it and throw an IllegalArgumentException.
Actually, it does not work with capacities less than 2. It throws a java.lang.ArrayIndexOutOfBoundsException: -1 when the capacity is 1 because of the list.get(capacity - 2) call. (Effective Java, Second Edition, Item 38: Check parameters for validity)
public FixedSizedPriorityQueue(final int capacity) {
    if (capacity < 2) {
        throw new IllegalArgumentException("Illegal capacity: " + capacity);
    }
    ...
}
Instead of
if (list.size() == 0) ...
you could use
if (list.isEmpty()) ...
It's the same but easier to read. | {
"domain": "codereview.stackexchange",
"id": 2402,
"tags": "java, priority-queue"
} |
Capture image using gscam | Question:
Hello,
I can view a live stream of my laptop camera in image_view. How can I capture the image using the terminal? It can be done by right-clicking the image_view window, but I need a node to capture images from gscam.
Originally posted by Bala on ROS Answers with karma: 31 on 2013-02-07
Post score: 0
Answer:
Can't you just write a small node (10 min work) to take the image from gscam and save it?
Originally posted by navderm with karma: 78 on 2013-02-07
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 12793,
"tags": "ros"
} |
How detectable is the error in special relativity experiments because of general relativistic effects at the surface of the Earth? | Question: I believe that experiments done on special relativity in a laboratory at the surface of the Earth usually do not consider the effects of general relativity (since the gravitational field at Earth's surface is weak). Of course, errors shall emerge from the approximations, but is this error detectable at all? How much do the measurements deviate from the theoretical predictions because of general relativistic effects and does laboratory equipment have enough precision for these kinds of deviations to be detected?
Answer: Nice question.
One way of stating the equivalence principle (e.p.) is that locally, spacetime is flat, so that special relativity is valid. Therefore any experiment in a small enough laboratory, if the apparatus is in free fall, is predicted by GR to give the same results as in SR. For example, SR predicts that if we release a brass ball and an iron ball at rest relative to one another, they will stay at rest relative to one another. This has been tested in Eotvos-style experiments, and the null results confirm SR and the e.p., but they are not specifically tests of general relativity.
If the experiment is local but the apparatus is not in free fall, but is anchored to the earth, then the e.p. predicts that the results are the same as in a rocket ship in outer space that is accelerating at g. An example of this type of experiment is the Pound-Rebka experiment. So this experiment as well is only really a test of SR+e.p., not specifically GR.
To get a real test of GR at the surface of the earth, the experiment needs to explore a region of spacetime at the surface of the earth that is large enough so that different parts of it do not all have the same acceleration. Tests that fall in this category include the Hafele-Keating experiment and the detection of gravitational waves by LIGO/Virgo. | {
"domain": "physics.stackexchange",
"id": 49578,
"tags": "general-relativity, special-relativity, experimental-physics, error-analysis"
} |
Local costmap is empty | Question:
I am trying something with the simulation of navigation with Turtlebot, but with different frame_id's and topics. I checked that the laser data is being relayed to /scan, that my TF tree is consistent, and that amcl and move_base are using the right frame_id's and subscribing to the right topics. Even under those conditions, and with the laser detecting obstacles, my local costmap is empty (full of zeros) but the global costmap is just fine. Are there other parameters I should consider that could influence the local costmap?
Originally posted by Mehdi. on ROS Answers with karma: 3339 on 2014-10-01
Post score: 2
Answer:
I finally got it to work. While defining the obstacle_layer, my min and max obstacle height parameters were set between 0.25 and 0.35. Apparently the map walls in STDR are defined to be higher or lower than that. Setting the range between 0 and 2 was enough to fix the problem and my local costmap is working fine.
Originally posted by Mehdi. with karma: 3339 on 2014-10-05
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 19588,
"tags": "navigation, simulation, costmap, move-base, costmap-2d"
} |
Alkyl halide reaction | Question: In the following reaction,
according to me the product should be compound (A), formed via a substitution reaction.
But the product formed is compound (B). What could be the mechanism that gives that product?
Answer: To reflect on the comments below:
In the literature this reaction seems to actually produce product A, and synthesis of product B seems to require much more exotic conditions than described in the question. As such I have to conclude that the correct answer would have to be A.
Old answer:
Due to the polar aprotic solvent (DMF) one would think SN2 is favoured as the reaction mechanism; however, if the product is indeed B, it would suggest an SN1 mechanism in which the aromatic ring stabilises the cation. One of the resulting resonance structures would indeed react with fluoride. Owing to the stability of aromatics, the product is formed by re-aromatisation. | {
"domain": "chemistry.stackexchange",
"id": 5702,
"tags": "organic-chemistry, reaction-mechanism, reactivity, halides"
} |
What is the link between radical (root) and its meaning in chemistry, etymologically? | Question: All I know is that radical means root, and when I hear root, all I think about is the root of a plant. So what is the link?
Why did they choose that word?
Answer: Words in any language can have one etymology but different senses and meanings. Yes, radix means plant roots, but radical in chemistry and mathematics has nothing to do with plant roots. They have completely different meanings. Now one can ask why mathematicians call something a radical. The square root sign is basically an "r" for the mathematical radical. Plant roots have only a figurative connotation with the radical as something which is the "basis". A radical person sticks too closely to the original roots of his/her ideology (often used in a negative sense).
In a chemical usage, one can add a little bit more to the above answer, radical or radicle (in even older books) simply meant:
An atom or group of atoms regarded as a (or the) primary constituent
of a compound or class of compounds, remaining unaltered during
chemical reactions in which other constituents are added or removed.
Now more widely: an atom or group of atoms which behaves as a unit and
is regarded as a distinct entity (OED entry)
Suppose a chemist analyzes sulfuric acid $\ce{H2SO4}$ in the 17th-18th century. He finds that the sulfur to oxygen mass ratio is 1:2. Next he adds a base, sodium hydroxide, to neutralize the acid, and after evaporation he isolates the salt. The salt is analyzed and it is also found to contain a sulfur to oxygen mass ratio of 1:2. For that ancient chemist, $\ce{SO4}$ is a sulfate radical. It is a group of atoms that behaves as a unit in a chemical reaction. Forget about charges on sulfate; those did not exist when "radicals" were discussed.
Look at this snippet from an 1858 book, which shows organic radicals = groups of atoms that behave as a unit.
The above answer by Nilay shows more modern meaning of radicals. | {
"domain": "chemistry.stackexchange",
"id": 17528,
"tags": "nomenclature, etymology"
} |
Also Higgs Potential can be CP violated, can we talk about a CP violation like a Homomorphism of Higgs Potential? | Question: I read something here
http://cosmology.princeton.edu/~mcdonald/examples/EP/lee_pr_9_143_74.pdf
The observed CP violation is assumed to be due to the spontaneous symmetry-breaking mechanism; the Lagrangian is CP invariant but its particular solution is not.
I think that the Higgs potential is only a particular device that physicists took advantage of to explain this difference and to extract a mass function (negative mass exists too, not only 'positive' mass), not to explain the real difference between CP violation and spontaneous symmetry breaking more deeply.
The Higgs potential is a set of matrix/equation solutions that physicists borrowed, using spontaneous symmetry breaking to explain a mechanism that can be 'exported' (or evaluated, as in computer science) to endow a particular extension of a vector field with a mass (a scalar). In fact, I don't know whether a negative mass of -2 kg can be explained using the Higgs potential.
I read on the wiki:
The Higgs field, through the interactions specified (summarized, represented, or even simulated) by its potential, induces spontaneous breaking of three out of the four generators ("directions") of the gauge group U(2). This is often written as SU(2) × U(1), (which is strictly speaking only the same on the level of infinitesimal symmetries) because the diagonal phase factor also acts on other fields in particular quarks
The Higgs potential $\rightarrow$ induces SBS, but to have CP violation we need to change SOMETHING about the Higgs potential to obtain a new, extended Higgs potential: a CP-violating Higgs potential.
So, if the Higgs potential can also be CP violated:
What role does spontaneous symmetry breaking play if SBS is explained by the Higgs potential?
Is CP violation a homomorphism of the Higgs potential?
Answer: The CP violation observed in particle physics is attributed to the complex phase in the CKM matrix. This does not come from the Higgs potential. It is a global property of the Yukawa couplings between the quarks and the Higgs, only possible because there are three generations of fermions. The paper by T.D. Lee was written just before the third generation was discovered and does not address this possibility.
There are various ways in which additional CP violation might be created by presently unknown physics. One of these is for the Higgs potential to contain CP-violating interactions. The question seems to be asking whether that example of CP violation would be "a homomorphism of the Higgs potential". Well, there might be homomorphisms somewhere, but I would like to better understand the question's motivation before I go looking for them. | {
"domain": "physics.stackexchange",
"id": 46887,
"tags": "symmetry, gauge-theory"
} |
Why are identical particle states multiplied? | Question: In the case of identical particles we multiply the individual wave functions of the particles to get the system wave function. But why are we not adding, or performing some other operation, to get the system wave function? Can anyone show mathematically what physics would be violated if I simply added the individual particles' wave functions?
Answer: The wave function for any composite system consisting of two uncorrelated (unentangled) subsystems is the (tensor) product
$$ |\psi_A\rangle |\psi_B\rangle \equiv |\psi_A\rangle \otimes |\psi_B\rangle $$
This multiplicative behavior of states isn't a specific feature of identical particles – or any particles. It holds for any physical system that may be divided to two (or several) parts.
The reason why the "combination" of several particles or objects is given by the multiplication and not addition is that the numerical values of wave functions in quantum mechanics don't represent "properties of objects" themselves but the probabilities (probability amplitudes, more precisely) of different properties of the objects.
We have the Born rule that says e.g.
$$ \rho (x) = |\psi(x)|^2 $$
The probability or probability density is given by the squared absolute value of some value of the wave function (or a linear combination of these values).
If we describe properties of objects $A,B$ that are independent, they may have various probabilities. The probability that the system $A$ has the $i$-th property is $P_{A,i}$. The probability that $B$ has the $j$-th property is $P_{B,j}$. If the objects $A,B$ are independent of each other, the probability that $A$ has the $i$-th property and $B$ has the $j$-th property is the product of probabilities
$$ P_{AB,ij} = P_{A,i} P_{B,j} $$
This is the usual multiplicative rule for probabilities of independent things. For example, if die $A$ has $P=1/6$ to land as 6 and $B$ has $1/6$ to land as $6$, the probability that we get $6+6$ is $1/6\times 1/6 = 1/36$.
The multiplicative formula for the wave functions $\psi_A$ and $\psi_B$ is basically just a "square root" of the formula for the probabilities. The calculus of the wave functions must reproduce and does reproduce some rules from the probability calculus – because the probabilities are linked to the wave functions by the relatively simple rule.
In particular, all Hilbert spaces of allowed states of composite systems $A+B$ are unavoidably given by tensor products ${\mathcal H}_A \otimes {\mathcal H}_B$ and unentangled states of two particles are tensor products of the two pure states.
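A small numerical illustration (mine, not part of the original answer): for an unentangled product state, the Born-rule probabilities of the composite system factorize into the product of the subsystem probabilities, which is exactly the multiplicative rule above.

```python
# Tensor product of two state vectors, and a check that probabilities multiply.
from itertools import product

def kron(a, b):
    """Kronecker (tensor) product of two state vectors given as lists."""
    return [x * y for x, y in product(a, b)]

def probs(psi):
    """Born rule: probabilities are squared absolute amplitudes."""
    return [abs(c) ** 2 for c in psi]

psi_A = [3 / 5, 4 / 5]                  # normalized state of system A
psi_B = [1 / 2 ** 0.5, 1 / 2 ** 0.5]    # normalized state of system B
psi_AB = kron(psi_A, psi_B)             # unentangled composite state

pA, pB, pAB = probs(psi_A), probs(psi_B), probs(psi_AB)
for idx, (i, j) in enumerate(product(range(2), range(2))):
    # P_{AB,ij} = P_{A,i} * P_{B,j}, the multiplicative rule from the text
    assert abs(pAB[idx] - pA[i] * pB[j]) < 1e-12
```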
The addition is something completely different. The state
$$|\uparrow\rangle + |\downarrow\rangle $$
describes a particle whose spin (that's what my arrow referred to) is either up or down. The addition translates to the word "or", not the word "and", because the addition of the wave functions is sort of analogous to the addition of probabilities, and the probability of "U or V" is
$$ P(U\text{ or }V) = P(U) + P(V) - P(U\text{ and } V) $$
If $U,V$ are mutually exclusive, the last term is zero. But we still see that "or" refers to the addition of probabilities, and a similar statement about the wave function says that their sums refer to "either one term" or "the other terms". Except that the relative phase between the complex probability amplitudes also matters in quantum mechanics – while the classical probabilities never come with any phases. | {
"domain": "physics.stackexchange",
"id": 30950,
"tags": "quantum-mechanics, condensed-matter, identical-particles"
} |
Rewritten implementation of quadratic equation formula | Question: I just started learning the C programming language at the university, and today we got a new assignment about refining a C program designed to find the roots of the general quadratic equation. We got an old C program that we have to rewrite with functions instead of having the whole code inside main. We have to split the code into 3 functions covering:
Calculation of the discriminant.
Calculation of the first root.
Calculation of other root.
The old code:
#include <stdio.h>
#include <math.h>
/* Prints roots of the quadratic equation a * x*x + b * x + c = 0 */
void solveQuadraticEquation(double a, double b, double c){
double discriminant, root1, root2;
discriminant = b * b - 4 * a * c;
if (discriminant < 0)
printf("No roots\n");
else if (discriminant == 0){
root1 = -b/(2*a);
printf("One root: %f\n", root1);
}
else {
root1 = (-b + sqrt(discriminant))/(2*a);
root2 = (-b - sqrt(discriminant))/(2*a);
printf("Two roots: %f and %f\n", root1, root2);
}
}
int main(void) {
double a = 1.0, b = -8.0, c = 15.0,
d = 2.0, e = 8.0, f = 2.0,
g, h, i;
/* First call - coefficents are values of variables */
solveQuadraticEquation(a, b, c);
/* Second call - coefficents are values of expressions */
solveQuadraticEquation(d - 1, -e, 7 * f + 1);
/* Third call - coefficents are entered by user outside
solveQuadraticEquation */
printf("Enter coeficients a, b, and c: ");
scanf("%lf %lf %lf", &g, &h, &i);
solveQuadraticEquation(g, h, i);
return 0;
}
My Solution:
#include <stdio.h>
#include <math.h>
double discriminant(double a, double b, double c);
double root1(double a, double b, double c);
double root2(double a, double b, double c);
/* Prints roots of the quadratic equation a * x*x + b * x + c = 0 */
void solveQuadraticEquation(double a, double b, double c){
if (discriminant(a, b, c) < 0)
printf("No roots\n");
else if (discriminant(a, b, c) == 0){
printf("One root: %f\n", root1(a, b, c));
}
else {
printf("Two roots: %f and %f\n", root1(a, b, c), root2(a, b, c));
}
}
int main(void) {
double a = 1.0, b = -8.0, c = 15.0,
d = 2.0, e = 8.0, f = 2.0,
g, h, i;
/* First call - coefficents are values of variables */
solveQuadraticEquation(a, b, c);
/* Second call - coefficents are values of expressions */
solveQuadraticEquation(d - 1, -e, 7 * f + 1);
/* Third call - coefficents are entered by user outside
solveQuadraticEquation */
printf("Enter coeficients a, b, and c: ");
scanf("%lf %lf %lf", &g, &h, &i);
solveQuadraticEquation(g, h, i);
return 0;
}
double discriminant(double a, double b, double c){
return b * b - 4 * a * c;
}
double root1(double a, double b, double c){
return (-b + sqrt(discriminant(a, b, c)))/(2*a);
}
double root2(double a, double b, double c){
return (-b - sqrt(discriminant(a, b, c)))/(2*a);
}
I have already made a well-working solution, but I was wondering if I could make it even better, so if I have done something wrong or weird, please let me know.
Answer: Aside from re-computing discriminant(), a reasonable quadratic implementation.
When assessing the quality of floating-point computations, use "%e" rather than "%f". I recommend using "%e" with enough precision too. Like "%.17e", or use one of the DECIMAL_DIG family of constants.
Precision improvement when b*b near discriminant:
When |b| is about sqrt(discriminant), the computation of root1 or root2 will cancel many digits. To avoid this, consider that a*root1*root2 = c
// calculate root1 and root2 together.
if (b < 0) {
root1 = (-b + sqrt(discriminant))/(2*a);
root2 = c/(root1*a);
} else {
root2 = (-b - sqrt(discriminant))/(2*a);
root1 = c/(root2*a);
}
Try a=1.0, b = -1e24-1, c = 1e24. Roots should be 1e24, 1 as here. OP's approach results in 1e24, 0, total loss of precision. Keep in mind, FP has overall logarithmic precision, not linear.
When b*b and discriminant greatly differ, this approach is not significantly weaker than OP's original.
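The same failure is easy to reproduce outside C; Python floats are IEEE doubles, so a quick standalone check of the 1e24 example above (illustrative only, not part of the assignment):

```python
import math

a, b, c = 1.0, -1e24 - 1, 1e24
disc = b * b - 4 * a * c
naive_small = (-b - math.sqrt(disc)) / (2 * a)  # catastrophic cancellation
root_big = (-b + math.sqrt(disc)) / (2 * a)     # b < 0, so this sign is safe
stable_small = c / (root_big * a)               # from a*root1*root2 = c

assert naive_small == 0.0    # the small root is lost entirely
assert stable_small == 1.0   # recovered to full precision
```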
Precision improvements are possible with b * b - 4 * a * c, yet tend to impact performance more.
Advanced concern:
root1(), root2() is on thin ice as it relies on a non-negative result from discriminant().
C is sneaky in that FP code is allowed to use higher than needed precision selectively. See FLT_EVAL_METHOD.
With OP's code this certainly will not happen as is, yet a subtle change, in-lining b * b - 4 * a * c instead of an explicit function call, may yield the following:
A test of b * b - 4 * a * c < 0 may be false yet the later (-b + sqrt(b * b - 4 * a * c)) may attempt a square root of a negative value due to different optimizations.
Solution: Use the same result rather than re-compute discriminant. Do not rely on "same looking code" to generate the exact same FP result especially when near 0.0.
This also eliminates the wasteful re-computation of discriminant().
Check spelling on "coefficents": I'd expect "coefficients".
To meet OP's goal to split the functions into 3 (discriminant, first root, other root), consider:
double discriminant(double a, double b, double c);
double first_root(double discriminant, double a, double b);
double other_root(double root1, double a, double c);
...
double d = discriminant(a, b, c);
if (d < 0) ...
else if (d == 0) ....
else {
double root1 = first_root(d, a, b);
double root2 = other_root(root1, a, c);
...
} | {
"domain": "codereview.stackexchange",
"id": 27869,
"tags": "c, homework, floating-point"
} |
Why does roboearth database search trigger an exception? | Question:
I'm trying to use the RoboEarth API through their website http://api.roboearth.org but it seems to be down for more than 2 weeks now.
This problem already appeared in the past. (http://answers.ros.org/question/147006/roboearth-dbexception-api/) But seems like nobody knew what's the problem or who to contact so I'm asking again.
DBException at /objects/result
'Hbase connection failed: Could not connect to pcube:9090'
Request Method: GET
Request URL: http://api.roboearth.org/objects/result?csrfmiddlewaretoken=144d3ff07953f78d9e7c00bd3912e748&query=cup&format=html
Originally posted by Matthew.J on ROS Answers with karma: 32 on 2015-03-27
Post score: 0
Answer:
The RoboEarth project has ended and support for the servers is running out. They are kept alive (i.e. connected to electricity), but no maintenance can be guaranteed. One alternative could be to install your own server according to the documentation (http://wiki.ros.org/re_platform), though I do not know how easy that is.
Originally posted by moritz with karma: 2673 on 2015-04-09
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 21273,
"tags": "roboearth"
} |
Forward versus backward numerical simulations in population genetics | Question: My question is closely related to this post.
There are a number of existing platforms to perform numerical individual-based simulation in populations genetics. An almost exhaustive list of such platforms can be found here.
Some program (like NEMO or SFS_CODE for example) perform forward simulations while some others (like SIMCOAL2 for example) perform backward (coalescence) simulations.
I am asking for a general comparison between those two methods.
Is one faster than the other? Why? (Intuitively I'd expect backward simulations to be faster).
Does backward simulation allow for simulating selection, recombination, gender-specific mutation rate, complex demography, etc..
Why/when would one use a backward/forward simulation model rather than the other type of model?
Answer: Your intuitions all seem correct. Coalescent simulation should be faster, because you don't track the entire population history over all t generations as you do in the forward simulation. Rather, as you work backwards in time you are tracking a smaller and smaller set of lineages. And with coalescent simulations it is probably very hard to incorporate the full range of evolutionary processes, whereas doing so is more or less trivial with forward simulations.
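To make the cost difference concrete, here is a toy forward-in-time Wright-Fisher drift simulation (illustrative only, with assumed parameters). Every generation resamples all n gene copies, which is exactly the per-generation work that a coalescent simulation of a small sample avoids:

```python
import random

def wright_fisher(n=500, p0=0.5, generations=200, seed=1):
    """Forward simulation of one biallelic locus under pure drift."""
    random.seed(seed)
    p = p0
    for _ in range(generations):
        # each of the n gene copies draws a parent allele at random
        p = sum(random.random() < p for _ in range(n)) / n
        if p in (0.0, 1.0):   # fixation or loss: drift has finished
            break
    return p

p_final = wright_fisher()
print(f"final allele frequency: {p_final:.3f}")
```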
A 2008 paper by Carvajal-Rodriguez reviews some of the issues in more detail than I have here. | {
"domain": "biology.stackexchange",
"id": 4191,
"tags": "evolution, theoretical-biology, population-dynamics, population-genetics, computational-model"
} |
Does CDCl3 also Knock People Out? | Question: I know that chloroform can be used to make people unconscious, but would that also work with deuterated chloroform, $\ce{CDCl3}$?
My instinct is "yes", but I'm not sure I'm willing to try this experiment on anyone.
Answer: In almost all respects, deuterium behaves like hydrogen, chemically. You can safely drink pure $\ce{D2O}$ in reasonable quantities (though not enough to replace more than ~25% of the $\ce{H2O}$ of the body). In fact, prokaryotes can continue to grow in ~100% deuterated media, albeit more slowly. It does prevent mitosis in eukaryotes in large quantities, though.
Given that similarity, there is no reason to assume any different behavior between deuterated chloroform and that with just protium. But don't knock yourself out trying the experiment, as chloroform is hepatotoxic. | {
"domain": "chemistry.stackexchange",
"id": 4161,
"tags": "organic-chemistry, isotope, pharmacology"
} |
Advantage of anti-windup | Question: What is the definition of anti-windup? How does it impose the constraints?
What are the advantages of MPC and anti-windup over each other?
Does anti-windup guarantee the constraints or does it just try respecting them?
Answer: Anti-windup is a concept for feedback controllers with integral terms, e.g. PID, to keep the integral term from „overcharging“ when regulating a large set point error. It basically saturates the integral term to keep the system from overshooting the set point.
The classic form of anti-windup, as described above, does not actually ensure satisfaction of input or state constraints of the system, it just enforces a somewhat „auxiliary constraint“ on the integral term.
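A minimal sketch of that classic clamp (gains, limits, and the plant-free loop are invented for illustration): the integrator is saturated every step, so a long period of large error cannot "charge" it beyond the bound.

```python
def pi_step(err, integ, kp=1.0, ki=0.5, dt=0.01, i_limit=2.0, u_limit=1.0):
    """One step of a PI controller with integral clamping (anti-windup)."""
    integ = max(-i_limit, min(i_limit, integ + err * dt))   # anti-windup clamp
    u = max(-u_limit, min(u_limit, kp * err + ki * integ))  # actuator saturation
    return u, integ

integ = 0.0
for _ in range(10_000):                  # sustained large set point error
    u, integ = pi_step(err=5.0, integ=integ)
assert abs(integ) <= 2.0                 # the integral term never winds up
```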
EDIT: Anti-windup can also be used to ensure satisfaction of state and input constraints by exploiting the unwanted mechanic:
The classic saturation circuit gets extended by a predictor, that predicts the state that corresponds to the control input. A possible violation of constraints can then be detected with saturation. In that case a feasible control input, i.e. a control input such that the resulting state will not violate constraints, is calculated. The details of the approach are explained in this paper.
I am not familiar enough with the above approach to directly compare Anti-windup constraint satisfaction and MPC constraint satisfaction. I read once in some lecture slides about MPC (I'll link to them when I find them again)
If PID does the job, take PID, otherwise take MPC
so now that you know how to ensure constraint satisfaction with the former, just take the one that does the job. | {
"domain": "engineering.stackexchange",
"id": 2114,
"tags": "control-engineering, control-theory, pid-control, optimal-control"
} |
Checking if two strings are permutations (anagrams) in Python | Question: Practicing for my interview, I am working on some problems in Python.
This problem is to decide whether two strings are permutations (or anagrams) of each other.
def is_permutation(first, second):
if len(first) != len(second):
return False
first_arr = [0 for i in range(256)] # ASCII size
second_arr = [0 for i in range(256)] # ASCII size
for i in range(len(first)):
first_arr[ord(first[i])] += 1
second_arr[ord(second[i])] += 1
if first_arr != second_arr:
return False
else:
return True
I feel this code can be more efficient, but I cannot manage it. Can anybody give me advice or tips so that I can improve?
Answer: The performance for the second line is \$O(1)\$, the fourth and fifth are \$O(256)\$ and your for loop on line six is \$O(n)\$. The code on line seven and eight are \$O(1)\$ and finally line nine is \$O(n)\$. Combining all this leads to \$O(1 + 256 + n (1 + 1) + n)\$, which simplify to \$O(n)\$.
The only time memory changes is on line four and five, where you use \$O(256)\$ more memory. And so this has \$O(1)\$ memory usage.
However, big O notation is just a guide. When you want to know what performs better, test it.
If however you gave me this in an interview, I'd think you produce hard-to-read code. I would much prefer one of:
from collections import Counter
def is_permutation(first, second):
return sorted(first) == sorted(second)
def is_permutation(first, second):
return Counter(first) == Counter(second)
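A few quick checks that the two one-liners agree (illustrative):

```python
from collections import Counter

def by_sorting(first, second):
    return sorted(first) == sorted(second)

def by_counting(first, second):
    return Counter(first) == Counter(second)

# anagram pairs and near-misses give the same verdict either way
for a, b in [("listen", "silent"), ("abc", "abcd"), ("aab", "aba"), ("aab", "abb")]:
    assert by_sorting(a, b) == by_counting(a, b)
```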
The former being \$O(n\log(n))\$ performance and \$O(n)\$ memory usage. The latter however is \$O(n)\$ in both performance and memory. 'Worse' than yours, but much easier to maintain. | {
"domain": "codereview.stackexchange",
"id": 26668,
"tags": "python, performance, strings"
} |
Confusion matrix in sklearn | Question: If you look at this:
>>> y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
>>> y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
>>> confusion_matrix(y_true, y_pred, labels=["ant", "bird", "cat"])
array([[2, 0, 0],
[0, 0, 1],
[1, 0, 2]])
I suppose the first row of the array means "predicted ant" and the first column is "actually is ant", the second column is "actually is bird", etc.
So first row, first col 2 I read as "predicted ant, is ant"; first row, second col I read as "predicted ant, is bird" is 0, which fits; and the third column, "predicted ant, is cat", is 0 but should be 1.
What am I doing wrong in understanding the confusion matrix?
Another example is this
>>> from sklearn.metrics import confusion_matrix
>>> y_true = [2, 0, 2, 2, 0, 1]
>>> y_pred = [0, 0, 2, 2, 0, 2]
>>> confusion_matrix(y_true, y_pred)
array([[2, 0, 0],
[0, 0, 1],
[1, 0, 2]])
Here it is not even clear what the order of the classes is.
Source: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html
edit: Unless it is swapped, and the first row is "is ant", not "predicted ant". It's just that on Wikipedia the convention is that a row is the prediction.
Answer: You just confused the actual and predicted. Every row represents actual values of distinct elements in your array and columns represent predicted values of them. That is,
First row: There are 2 ants, and 2 samples are predicted as ant.
Second row: There is 1 bird, and 1 sample is predicted as cat.
Third row: There are 3 cats, 1 sample is predicted as ant, 2 samples are predicted as cat. | {
"domain": "datascience.stackexchange",
"id": 8551,
"tags": "scikit-learn, confusion-matrix"
} |
rosbag playback speed | Question:
Hi all,
We have some sensor data that are saved as .csv files (data+timestamp) from a robot, which does not run ROS. To run some algorithms, I converted once such file into a ROS bag file using a python script.
After setting the /use_sim_time parameter and playing back this bag file with rosbag play --clock abc.bag, all the data seems to be flushed at once.
I need to know what determines the natural playback speed of a bag file. I thought it was done using header.stamp.
Thank you in advance
CS
Originally posted by ChickenSoup on ROS Answers with karma: 387 on 2012-11-13
Post score: 0
Answer:
In your Python script, I guess you were using Bag.write. Did you specify the t parameter and set it to the correct time stamp? As far as I know, that's the parameter that controls playback speed.
The system cannot use header.stamp for a simple reason: not every message has a stamp but rosbag needs time information. Also, the C++ rosbag library uses the raw (serialized) message data blobs to store them in the file. There is just no way to access the header without knowing the exact message definition.
Originally posted by Lorenz with karma: 22731 on 2012-11-13
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by ChickenSoup on 2012-11-13:
Thank you very much. Yeah it was the 't' paramater. | {
"domain": "robotics.stackexchange",
"id": 11741,
"tags": "rosbag"
} |
Acceleration of a Bouncing ball when it hits the ground | Question: The conceptual problem I am having difficulty with is something like this:
If a bouncy ball is dropped from some height $h$ and rebounds to a height of $0.75h$ in some time $t$ (for example), what is the ball's acceleration at the instant when it hits the ground?
Here's what I think:
I assume that the initial speed of the ball is $0 m/s$ and that it reaches a maximum velocity just before hitting the ground. The instant the ball touches the ground, the velocity becomes $0 m/s$ again, after which it starts accelerating upward.
Drawing the velocity time graph, I have something like this:
Not exactly the best diagram, but I hope that it gets my idea across.
My question is then, does this mean that the acceleration at the instant that the ball hits the floor is $\infty$? (since slope of the v-t graph will approach $\infty$)
Answer: No, the acceleration of the ball isn't infinite. What happens is that when the ball touches the ground, the face in contact with the ground comes to a stop but the rest of the ball above it slows down more gradually, compressing and distorting the ball like a spring. The ball resists being compressed, and when its centre of mass comes to a halt the compression is released: the ball expands, sending the centre of mass back up again.
During the process the KE of the falling ball is converted to the PE of compression, which is then converted back to KE with some loss as heat etc.
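To see "high but finite" quantitatively, one can model the contact as a linear spring, so the centre of mass undergoes half a cycle of simple harmonic motion and the peak deceleration is $v\sqrt{k/m}$. All numbers below are assumed, order-of-magnitude values:

```python
import math

m = 0.05    # kg, ball mass (assumed)
k = 5.0e3   # N/m, effective contact stiffness (assumed)
h = 1.0     # m, drop height
g = 9.81    # m/s^2

v = math.sqrt(2 * g * h)                 # impact speed
a_max = v * math.sqrt(k / m)             # peak acceleration during contact
t_contact = math.pi * math.sqrt(m / k)   # duration of the bounce (half SHM period)

print(f"a_max ~ {a_max / g:.0f} g over {t_contact * 1e3:.1f} ms")
```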
The process takes a finite time, which is why the acceleration isn't infinite. The acceleration is high, however. Its value will depend on the coefficients of restitution of the ball and the surface it bounces from. | {
"domain": "physics.stackexchange",
"id": 78751,
"tags": "kinematics, acceleration, velocity"
} |
Bash script error at paste command | Question: I wrote a script for pasting rsids onto CADD output. Here is the script.
#!/bin/bash
cd tmp
cut -f 1,2 CADD.tsv > fileA
paste fileA <(cut -f 2,125 CADD.tsv) > myNewFile
bedtools intersect -a myNewFile -b New.vcf -wb |cut -f 1-4,7 > CADD.rsids.tsv
I have tested the commands one by one and they work fine. But when I run the script containing these commands, it gives me this error.
$ sh cadd.rsids.sh
4: cadd.rsids.sh: Syntax error: "(" unexpected
Can anyone please tell me how to resolve this error?
Here are a few lines of the CADD.tsv file.
"##CADD GRCh38-v1.4 (c) University of Washington, Hudson-Alpha Institute for Biotechnology and Berlin Institute of Health 2013-2018. All rights reserved."
#Chrom Pos Ref Alt Type Length AnnoType Consequence ConsScore ConsDetail GC CpG motifECount motifEName motifEHIPos motifEScoreChng oAA nAA GeneID FeatureID GeneName CCDS Intron Exon cDNApos relcDNApos CDSpos relCDSpos protPos relProtPos Domain Dst2Splice Dst2SplType minDistTSS minDistTSE SIFTcat SIFTval PolyPhenCat PolyPhenVal priPhCons mamPhCons verPhCons priPhyloP mamPhyloP verPhyloP bStatistic targetScan mirSVR-Score mirSVR-E mirSVR-Aln cHmm_E1 cHmm_E2 cHmm_E3 cHmm_E4 cHmm_E5 cHmm_E6 cHmm_E7 cHmm_E8 cHmm_E9 cHmm_E10 cHmm_E11 cHmm_E12 cHmm_E13 cHmm_E14 cHmm_E15 cHmm_E16 cHmm_E17 cHmm_E18 cHmm_E19 cHmm_E20 cHmm_E21 cHmm_E22 cHmm_E23 cHmm_E24 cHmm_E25 GerpRS GerpRSpval GerpN GerpS tOverlapMotifs motifDist EncodeH3K4me1-sum EncodeH3K4me1-max EncodeH3K4me2-sum EncodeH3K4me2-max EncodeH3K4me3-sum EncodeH3K4me3-max EncodeH3K9ac-sum EncodeH3K9ac-max EncodeH3K9me3-sum EncodeH3K9me3-max EncodeH3K27ac-sum EncodeH3K27ac-max EncodeH3K27me3-sum EncodeH3K27me3-max EncodeH3K36me3-sum EncodeH3K36me3-max EncodeH3K79me2-sum EncodeH3K79me2-max EncodeH4K20me1-sum EncodeH4K20me1-max EncodeH2AFZ-sum EncodeH2AFZ-max EncodeDNase-sum EncodeDNase-max EncodetotalRNA-sum EncodetotalRNA-max Grantham Dist2Mutation Freq100bp Rare100bp Sngl100bp Freq1000bp Rare1000bp Sngl1000bp Freq10000bp Rare10000bp Sngl10000bp EnsembleRegulatoryFeature dbscSNV-ada_score dbscSNV-rf_score RemapOverlapTF RemapOverlapCL RawScore PHRED
1 3362704 T A SNV 0 Transcript INTRONIC 2 intron 0.536423841 0.04 NA NA NA NA NA NA ENSG00000142611 ENST00000270722 PRDM16 CCDS41236.2 12:00:00 AM NA NA NA NA NA NA NA NA NA NA 118572 24215 NA NA NA NA 0 0 0 -0.405 -1.689 -1.593 964 NA NA NA NA 0 1 1 5 0 0 1 0 0 0 0 1 0 1 12 0 0 0 0 0 8 11 5 2 0 NA NA 2.46 -4.93 1 0.02 4.53362 1.5498 3.49082 1.00224 9.30658 3.84201 5.88121 2.95189 9.31556 5.27366 4.01294 1.08327 25.99 5.10373 4.47027 1.51567 1.55624 0.7046 4.95028 1.49632 8.57929 1.45764 1.19689 0.644106 0.01783 0.01476 NA 16 1 0 15 4 10 184 17 101 1741 NA NA NA 5 19 -0.173019 0.424
1 7785635 T C SNV 0 Intergenic DOWNSTREAM 1 downstream 0.483443709 0.053333333 NA NA NA NA NA NA ENSG00000049245 ENST00000054666 VAMP3 CCDS88.1 NA NA NA NA NA NA NA NA NA NA NA 112 2704 NA NA NA NA 0.009 0 0 0.418 -0.242 -0.522 726 NA NA NA NA 0 0 0 3 1 0 14 1 0 0 1 0 0 0 22 0 0 0 0 0 1 1 4 0 0 NA NA 5.52 -11 NA NA 20.4013 3.22673 13.621 2.38936 11.7271 2.25561 9.94895 2.15059 16.5529 2.79938 11.0724 2.29032 12.2962 2.07181 32.6588 8.63119 15.114 2.76883 7.39549 2.14311 14.1811 3.08082 0.603948 0.246907 0.55734 0.16905 NA 9 1 1 15 3 5 174 35 68 1667 NA NA NA 2 4 0.049024 1.923
1 7785635 T C SNV 0 Transcript INTRONIC 2 intron 0.483443709 0.053333333 NA NA NA NA NA NA ENSG00000049246 ENST00000613533 PER3 CCDS72695.1 12:00:00 AM NA NA NA NA NA NA NA NA NA NA 112 2704 NA NA NA NA 0.009 0 0 0.418 -0.242 -0.522 726 NA NA NA NA 0 0 0 3 1 0 14 1 0 0 1 0 0 0 22 0 0 0 0 0 1 1 4 0 0 NA NA 5.52 -11 NA NA 20.4013 3.22673 13.621 2.38936 11.7271 2.25561 9.94895 2.15059 16.5529 2.79938 11.0724 2.29032 12.2962 2.07181 32.6588 8.63119 15.114 2.76883 7.39549 2.14311 14.1811 3.08082 0.603948 0.246907 0.55734 0.16905 NA 9 1 1 15 3 5 174 35 68 1667 NA NA NA 2 4 0.049024 1.923
1 7803233 C T SNV 0 Transcript INTRONIC 2 intron 0.331125828 0.013333333 NA NA NA NA NA NA ENSG00000049246 ENST00000613533 PER3 CCDS72695.1 12:00:00 AM NA NA NA NA NA NA NA NA NA NA 182 726 NA NA NA NA 0.014 0 0 -0.553 -1.498 -0.938 673 NA NA NA NA 0 0 0 6 0 0 23 5 0 0 0 0 0 0 14 0 0 0 0 0 0 0 0 0 0 NA NA 7.49 -15 NA NA 4.36329 1.02663 4.55863 1.4118 4.72226 1.51608 6.80823 2.09997 7.44497 2.86816 5.71457 2.35876 3.06623 0.62198 18.3251 3.23263 2.1423 0.84413 8.63178 2.70111 6.29127 1.59229 0.198671 0.048037 0.0465 0.01705 NA 6 1 1 15 2 2 134 26 39 1314 NA NA NA NA NA -0.395046 0.067
1 7808665 T C SNV 0 Intergenic DOWNSTREAM 1 downstream 0.344370861 0 NA NA NA NA NA NA ENSG00000236266 ENST00000451646 Z98884.1 NA NA NA NA NA NA NA NA NA NA NA NA 5614 1683 NA NA NA NA 0.082 0 0 0.463 -0.365 -0.373 659 NA NA NA NA 0 0 0 5 3 1 24 0 0 0 0 0 0 0 15 0 0 0 0 0 0 0 0 0 0 NA NA 5.23 -10.5 NA NA 7.16527 1.57568 5.63104 1.29085 5.96112 1.41329 8.31159 2.11434 4.41828 1.14843 5.03919 1.48679 3.91787 1.13788 4.97324 1.34708 5.22225 1.10759 7.38244 1.40721 11.4505 2.73652 0.977252 0.19269 0.09436 0.03409 NA 43 1 0 10 3 2 110 30 49 1288 NA NA NA 7 26 0.11875 2.796
1 7808665 T C SNV 0 Transcript INTRONIC 2 intron 0.344370861 0 NA NA NA NA NA NA ENSG00000049246 ENST00000613533 PER3 CCDS72695.1 12:00:00 AM NA NA NA NA NA NA NA NA NA NA 5614 1683 NA NA NA NA 0.082 0 0 0.463 -0.365 -0.373 659 NA NA NA NA 0 0 0 5 3 1 24 0 0 0 0 0 0 0 15 0 0 0 0 0 0 0 0 0 0 NA NA 5.23 -10.5 NA NA 7.16527 1.57568 5.63104 1.29085 5.96112 1.41329 8.31159 2.11434 4.41828 1.14843 5.03919 1.48679 3.91787 1.13788 4.97324 1.34708 5.22225 1.10759 7.38244 1.40721 11.4505 2.73652 0.977252 0.19269 0.09436 0.03409 NA 43 1 0 10 3 2 110 30 49 1288 NA NA NA 7 26 0.11875 2.796
1 7827519 C G SNV 0 CodingTranscript NON_SYNONYMOUS 7 missense 0.529801325 0.066666667 NA NA NA NA P A ENSG00000049246 ENST00000613533 PER3 CCDS72695.1 NA 18/22 2854 0.45172523 2590 0.712909441 864 0.714049587 lcompl NA NA 177 17277 tolerated 0.05 benign 0.127 0.041 0 0 -0.553 -0.852 0.026 525 NA NA NA NA 1 0 0 3 0 2 8 9 1 1 1 0 0 1 2 6 1 2 9 0 0 0 0 1 0 NA NA 12.1 -24.2 NA NA 8.89687 3.01407 58.9843 24.0334 58.3619 18.9077 55.4967 17.2149 10.5523 3.06178 87.6613 48.8498 8.66388 2.84705 31.4762 9.05454 11.8069 3.99886 8.7429 2.26128 72.4647 18.4371 1.08886 0.638979 2.10021 1.18017 27 2 2 1 19 4 8 204 31 59 1420 Promoter NA NA 86 94 0.346155 6.007
1 7827519 C G SNV 0 Intergenic UPSTREAM 1 upstream 0.529801325 0.066666667 NA NA NA NA NA NA ENSG00000236266 ENST00000451646 Z98884.1 NA NA NA NA NA NA NA NA NA NA NA NA 177 17277 NA NA NA NA 0.041 0 0 -0.553 -0.852 0.026 525 NA NA NA NA 1 0 0 3 0 2 8 9 1 1 1 0 0 1 2 6 1 2 9 0 0 0 0 1 0 NA NA 12.1 -24.2 NA NA 8.89687 3.01407 58.9843 24.0334 58.3619 18.9077 55.4967 17.2149 10.5523 3.06178 87.6613 48.8498 8.66388 2.84705 31.4762 9.05454 11.8069 3.99886 8.7429 2.26128 72.4647 18.4371 1.08886 0.638979 2.10021 1.18017 NA 2 2 1 19 4 8 204 31 59 1420 Promoter NA NA 86 94 0.346155 6.007
1 7827519 C G SNV 0 RegulatoryFeature REGULATORY 4 regulatory 0.529801325 0.066666667 NA NA NA NA NA NA NA ENSR00000000832 NA NA NA NA NA NA NA NA NA NA NA NA NA 177 17277 NA NA NA NA 0.041 0 0 -0.553 -0.852 0.026 525 NA NA NA NA 1 0 0 3 0 2 8 9 1 1 1 0 0 1 2 6 1 2 9 0 0 0 0 1 0 NA NA 12.1 -24.2 NA NA 8.89687 3.01407 58.9843 24.0334 58.3619 18.9077 55.4967 17.2149 10.5523 3.06178 87.6613 48.8498 8.66388 2.84705 31.4762 9.05454 11.8069 3.99886 8.7429 2.26128 72.4647 18.4371 1.08886 0.638979 2.10021 1.18017 NA 2 2 1 19 4 8 204 31 59 1420 Promoter NA NA 86 94 0.346155 6.007
1 7828155 A G SNV 0 Intergenic UPSTREAM 1 upstream 0.364238411 0.066666667 NA NA NA NA NA NA ENSG00000236266 ENST00000451646 Z98884.1 NA NA NA NA NA NA NA NA NA NA NA NA 813 17022 NA NA NA NA 0.009 0 0 -0.389 -0.735 -0.731 522 NA NA NA NA 0 0 0 4 0 0 20 4 1 0 0 0 0 0 14 1 0 0 3 0 0 0 0 1 0 NA NA 4.56 4.56 1 0.12 10.0753 2.35506 38.5512 20.107 21.923 10.3608 21.217 11.8934 5.61317 1.38406 9.74608 6.16783 3.76905 1.10216 24.6436 5.18488 14.8421 7.93141 4.56698 1.13989 18.7058 4.31099 0.500167 0.211141 0.16854 0.06818 NA 20 2 0 14 4 4 159 34 57 1421 NA NA NA 11 14 0.225659 4.402
Here are lines of New.vcf
#CHROM POS ID REF ALT QUAL FILTER INFO
1 3362704 rs11807862 T A 923.01 PASS "BaseCounts=31,0,0,30;BaseQRankSum=-0.108;DB;Dels=0;FS=3.561;GC=61.35;HaplotypeScore=1.7256;MQ=60;MQ0=0;MQRankSum=-0.368;QD=15.13;ReadPosRankSum=0.498;DP=282;AF=0.5;MLEAC=1;MLEAF=0.5;AN=14;AC=7"
1 7785635 rs228729 T C 2294.01 PASS "BaseCounts=0,44,0,0;DB;Dels=0;FS=0;GC=42.39;HaplotypeScore=0;MQ=60;MQ0=0;QD=28.13;BaseQRankSum=2.114;MQRankSum=-1.268;ReadPosRankSum=-0.548;DP=1430;AF=0.5;MLEAC=1;MLEAF=0.5;AN=70;AC=51"
1 7803233 rs228642 C T 2082.01 PASS "BaseCounts=0,19,0,19;BaseQRankSum=-0.963;DB;Dels=0;FS=2.884;GC=44.89;HaplotypeScore=0.9999;MQ=60;MQ0=0;MQRankSum=0.058;QD=14.92;ReadPosRankSum=0.409;DP=1468;AF=0.5;MLEAC=1;MLEAF=0.5;AN=62;AC=44"
1 7808665 rs228666 T C 1925.01 PASS "BaseCounts=0,36,0,26;BaseQRankSum=0.064;DB;Dels=0;FS=0.979;GC=30.42;HaplotypeScore=1.7333;MQ=60;MQ0=0;MQRankSum=-0.82;QD=18.03;ReadPosRankSum=-1.32;DP=939;AF=0.5;MLEAC=1;MLEAF=0.5;AN=42;AC=25"
1 7827519 rs228697 C G 786.01 PASS "BaseCounts=0,14,24,0;BaseQRankSum=0.016;DB;Dels=0;FS=4.925;GC=58.35;HaplotypeScore=0;MQ=60;MQ0=0;MQRankSum=-0.698;QD=18.63;ReadPosRankSum=0.99;DP=355;AF=0.5;MLEAC=1;MLEAF=0.5;AN=18;AC=9"
1 7828155 rs2859388 A G 2385.01 PASS "BaseCounts=0,0,56,0;DB;Dels=0;FS=0;GC=35.41;HaplotypeScore=0;MQ=60;MQ0=0;QD=24.91;BaseQRankSum=-0.231;MQRankSum=0.948;ReadPosRankSum=-0.128;DP=1136;AF=0.5;MLEAC=1;MLEAF=0.5;AN=46;AC=31"
1 7830057 rs2640909 T C 1421.01 PASS "BaseCounts=0,19,0,19;BaseQRankSum=1.051;DB;Dels=0;FS=5.041;GC=56.11;HaplotypeScore=0.734;MQ=60;MQ0=0;MQRankSum=0.496;QD=16.61;ReadPosRankSum=-0.526;DP=638;AF=0.5;MLEAC=1;MLEAF=0.5;AN=32;AC=19"
1 11107089 rs12139042 G A 1668.01 PASS "BaseCounts=43,0,0,0;DB;Dels=0;FS=0;GC=45.64;HaplotypeScore=0.734;MQ=60;MQ0=0;QD=29.7;BaseQRankSum=-1.652;MQRankSum=0.027;ReadPosRankSum=-1.192;DP=354;AF=0.5;MLEAC=1;MLEAF=0.5;AN=18;AC=10"
1 11787392 rs3737967 G A 719.01 PASS "BaseCounts=23,0,23,0;BaseQRankSum=-0.475;DB;Dels=0;FS=4.566;GC=61.85;HaplotypeScore=0.734;MQ=60;MQ0=0;MQRankSum=-0.84;QD=15.63;ReadPosRankSum=0.329;DP=378;AF=0.5;MLEAC=1;MLEAF=0.5;AN=20;AC=10"
1 11790870 rs2274976 C T 777.01 PASS "BaseCounts=0,18,0,24;BaseQRankSum=-0.042;DB;Dels=0;FS=4.677;GC=56.11;HaplotypeScore=0.6651;MQ=60;MQ0=0;MQRankSum=1.204;QD=15.05;ReadPosRankSum=0.836;DP=355;AF=0.5;MLEAC=1;MLEAF=0.5;AN=20;AC=10"
1 11792243 rs1476413 C T 1751.01 PASS "BaseCounts=1,18,0,21;BaseQRankSum=1.028;DB;Dels=0;FS=0;GC=56.11;HaplotypeScore=1.7287;MQ=60;MQ0=0;MQRankSum=-0.408;QD=16.65;ReadPosRankSum=-0.099;DP=954;AF=0.5;MLEAC=1;MLEAF=0.5;AN=50;AC=29"
1 11794400 rs4846051 G A 2616.01 PASS "BaseCounts=46,0,0,0;DB;Dels=0;FS=0;GC=59.35;HaplotypeScore=6.2573;MQ=60;MQ0=0;QD=31.75;DP=1654;AF=1;MLEAC=2;MLEAF=1;AN=78;AC=78"
1 11794419 rs1801131 T G 2028.01 PASS "BaseCounts=0,1,21,21;BaseQRankSum=-0.117;DB;Dels=0;FS=9.995;GC=59.35;HaplotypeScore=2.4328;MQ=60;MQ0=0;MQRankSum=1.03;QD=14.84;ReadPosRankSum=0.665;DP=1176;AF=0.5;MLEAC=1;MLEAF=0.5;AN=54;AC=33"
Answer: Your problem is that you have written a script in bash but are then running it using sh. Bash, the Bourne-again shell, is not the same as the Bourne shell (sh). What is more, on Ubuntu, sh is actually not even the Bourne shell but another minimal shell called dash. The /bin/sh on Ubuntu is a symlink to dash:
$ ls -l /bin/sh
lrwxrwxrwx 1 root root 4 Mar 1 2018 /bin/sh -> dash
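A quick way to see the feature gap in action (a sketch; it assumes bash is installed and guards the dash check, since dash may not be present on non-Debian systems):

```shell
# Process substitution <( ... ) works in bash:
bash -c 'cat <(echo hello)'   # prints: hello

# dash (what /bin/sh points to on Ubuntu) rejects the same construct:
if command -v dash >/dev/null 2>&1; then
  dash -c 'cat <(echo hello)' || echo 'dash: rejected, as expected'
fi
```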
Dash is a POSIX-compliant shell, which means it has the features defined by the POSIX specification for the sh shell. In your case, the problem is caused by the fact that the <() construct (process substitution) is supported by bash but not by dash. So, in order to run your script, you can do one of two things:
Make the script executable (chmod a+x cadd.rsids.sh), and then run it directly:
/path/to/cadd.rsids.sh
This is the simplest approach since you already have the shebang line (#!/bin/bash) which will cause the script to be run by bash and not sh.
Run the script with bash instead of sh:
bash /path/to/cadd.rsids.sh
Instead of
sh /path/to/cadd.rsids.sh | {
"domain": "bioinformatics.stackexchange",
"id": 803,
"tags": "vcf, shell"
} |
pr2_arm_navigation_tutorials | Question:
http://wiki.ros.org/motion_planning_environment/Tutorials
http://wiki.ros.org/motion_planning_environment/Tutorials/Making%20collision%20maps%20from%20self-filtered%20tilting%20laser%20data
In these tutorials, the pr2_arm_navigation package is being referenced, but I cannot find it anywhere. Where can I get it from? I am currently using indigo. I am trying to learn how to add objects to the planning environment to test planners with collision avoidance. If there is another tutorial with packages that are available, could you point me to them? Thanks.
Originally posted by cornwallis on ROS Answers with karma: 1 on 2017-08-08
Post score: 0
Answer:
Have you tried googling it? It's the first result.
Link: http://wiki.ros.org/pr2_arm_navigation
Originally posted by jayess with karma: 6155 on 2017-08-08
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by cornwallis on 2017-08-16:
I have. The link on that page is dead. | {
"domain": "robotics.stackexchange",
"id": 28555,
"tags": "ros, pr2-arm-navigation, environment, pr2, ros-indigo"
} |
The differential of a quantity | Question: I often see the differentials of the electric field strength and the acceleration due to gravity being written as:
$$dE= \mathcal{k}\frac{dQ}{r^2} \tag{1}$$
and
$$dg=\frac{GdM}{r^2} \tag{2}$$
respectively.
Which I interpret as "the infinitesimally small field strength due to an infinitesimally small charge/mass dQ/dM at some distance r away, and they are used when we are calculating the sum of the field strength at the origin due to these small charges/masses each at some distance r away from the origin"
But there are cases in which the term involving the radial coordinate differential also comes in:
$$dE=\mathcal{k}\frac{dQ}{r^2}+ (-2)k\frac{Q}{r^3}dr\tag{3}$$
$$dg=\frac{GdM}{r^2}+(-2)\frac{GM}{r^3}dr \tag{4}$$
I understand that (3) & (4) come directly from the product rule, but what is the physical interpretation of the additional dr terms?
(I understand that the differential of a quantity is non-zero when it is not a constant, e.g. the radial distance of each small charge/mass would vary along a long rod pointing normally away from the origin, but isn't this already included in the dQ/dM term because you shrink the charge/mass source to so small that the radial distance is almost constant, and then you integrate over the entire length of the rod to obtain the total field strength?)
Answer: The context of (1) and (2) is different from that of (3) and (4).
Although (1) and (2) are written as differentials, they are not "differences": in (1) $d\vec E$ is the field at some given $\vec r_p$ created by a small amount of charge $dq$ located at $\vec r_s$, with $r=\vert \vec r_p-\vec r_s\vert$. (2) has a similar interpretation.
On the other hand, given a vector field $\vec E$ everywhere near point $\vec r_p$, the differential $d\vec E$ given in your (3) is the (linear part of the) change in the field $\vec E$ near $\vec r_p$. These changes may come about because the source charges $q$ creating the fields are changed by a small amount $dq$, or because the distance from the source charges to the point $\vec r_p$ is changed by a small amount.
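A quick numerical sanity check of the total differential in your (3), using finite differences (plain Python; the values of $k$, $Q$, $r$ and the small increments are arbitrary):

```python
# Finite-difference check that dE = k dQ/r^2 - 2 k Q dr/r^3 captures the
# first-order change of E = k Q / r^2 (values below are arbitrary).
k, Q, r = 1.0, 3.0, 2.0
dQ, dr = 1e-6, 1e-6

def E(Q, r):
    return k * Q / r**2

exact_change = E(Q + dQ, r + dr) - E(Q, r)
linear_part = k * dQ / r**2 - 2 * k * Q * dr / r**3

# The two agree up to terms of second order in dQ and dr:
print(abs(exact_change - linear_part) < 1e-9)  # True
```

Dropping the $dr$ term would miss exactly the part of the change that comes from moving the observation point.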
Please note that your equations should really have vector signs since the fields are vectors. If you do this then the change in the source charge $dq$ is a scalar but the change $d\vec r$ in the distance is actually a vector. | {
"domain": "physics.stackexchange",
"id": 62338,
"tags": "gravity, electric-fields, differentiation, calculus"
} |
What is electrical charge and how is it different from electric energy? | Question: I am a grade 12 student, and my textbook mentions
Charge is always associated with mass.
Why does charge need to have mass associated with it, and not necessarily the other way around?
Also, if mass is a form of energy (E=mc^2), then should not charge be too (as it cannot exist without mass)? If so, what is the difference between charge and electrical energy?
Answer:
what charge actually is, and why is it that charge is always associated with mass (but not necessarily the other way around)
A charge is a subatomic particle with an electric field around it. The particle and the field form a unit.
The electron has a negative unit of electric field around it and the proton has a positive unit around it.
If one separates the charges, one obtains general and easily measurable fields. If you put a current-carrying wire (a piece of metal with its loosely bound electrons) between the separated charges, a current flows.
Subatomic particles are the carriers of the electric fields. Electric fields without sources do not exist.
This is an assumption based on our observations and is so strong that it has been elevated to the status of a law.
what is the difference between charge and electrical energy
You get electrical energy when you separate charges. The more you separate, the more they tend to flow back to equilibrium. Every atom wants to balance its charges in the nucleus and in the shell.
A bit history
By rubbing materials and collecting the electricity in a Leyden jar, scientists learned to create an electric current. They achieved the same result with insulated chemical compounds, with a current flowing between the connections.
At the end of the 19th century, J. J. Thomson discovered the electron. He came to the conclusion that cathode rays consist of negatively charged particles, which he called "corpuscles", later renamed electrons.
Electrons and their electric field
Electrons are massive particles and they respond to an electric field. This field is usually compensated by the opposite electric field of the protons inside the nucleus of atoms.
In metals and metal-like elements, the electrons are loosely bound inside the atoms. Besides rubbing and chemical processes, electromagnetic induction is the main technology used to separate charges. The interaction between external magnetic fields and the electrons' own magnetic fields leads to the deflection of electrons. This is used to separate them from the nuclei and to obtain an electric potential difference.
Electrons indeed flow (direct current, DC) or oscillate back and forth (alternating current, AC), so the name electric current was aptly chosen. | {
"domain": "physics.stackexchange",
"id": 76034,
"tags": "homework-and-exercises, electrostatics, charge"
} |
Why is cos(n/6) aperiodic? | Question: This is a very common example in most Signal Processing books I have come across.
x(n) = cos($\frac{n}{6}$) is a non-periodic discrete signal because it doesn't satisfy the periodicity condition for discrete-time signals, i.e., its frequency is not of the form 2$\pi$($\frac{m}{N}$).
My question is :
the coefficient of n, i.e, $\Omega_0$=$\frac{1}{6}$ here can also be expressed as $\frac{1}{6}$ = $\frac{1}{6}$ * $\frac{2\pi}{2\pi}$ = 2$\pi$$\frac{1}{12\pi}$
Now, substituting for $\pi$ = $\frac{22}{7}$ in above, we get 2$\pi$$\frac{7}{12*22}$. So, $\frac{1}{6}$ can be written as 2$\pi$($\frac{7}{264}$), which is in the form 2$\pi$($\frac{m}{N}$) with a period N=264.
I'm sure I'm missing something which may be obvious but it would be of great help if someone could point it out and explain.
Answer: The problem with your reasoning is that $\pi \ne \frac{22}{7}$; $\pi$ is an irrational number. There is no period $N$ for which $x[n] = x[n+N] \ \forall \ n \in \mathbb{Z}$. Hence, the sequence is not periodic. | {
"domain": "dsp.stackexchange",
"id": 7311,
"tags": "discrete-signals, digital, periodic"
} |
Is there a rule of thumb for zero padding in image processing? | Question: I see there are a lot of answers on why zero padding is necessary and how it avoids wrapping around the sides of images. However is there a rule of thumb on how much padding will be good for the image processing? I am mainly thinking from the point of view of:
a.) speed of processing the image
b.) noise
I have till now referred to the following answers:
Why images need to be padded before filtering in frequency domain
Advantages/disadvantage of zero padding
FFT Zero Padding - Amplitude Change
Response will be really appreciated.
Thank you.
Answer: See Press et al. "Numerical Recipes in C++". Chapter 13 on "Fourier and Spectral Applications", Section 13.1 with a subsection entitled "Treatment of end effects by zero padding". This is perhaps the best summary of zero-padding anywhere in the literature (but it's not referenced in either the contents or index). My edition of this book is the 2002 second edition and the page number there is 545. | {
"domain": "dsp.stackexchange",
"id": 7342,
"tags": "fft, image-processing, zero-padding"
} |
Existence of representation of symmetry transformation | Question: There is a simple fact that we can change our point of view and that physical laws should remain the same, id est, outcomes of our experiments should be the same no matter from which frame of reference we are observing. So, there is a physical state that changes under the change of our point of view and there is a new wave function which describes it. Should there be a representation of this transformation which takes the old wave function into new one? Is there any proof of such a claim? Should there always be transformation of wave function for every transformation of the physical system? Space-time coordinates are transformed of course, but what about the state itself? Is there any proof of this?
Answer: By Wigner's theorem every symmetry is represented as a unitary or anti-unitary operator upon the Hilbert space of states. | {
"domain": "physics.stackexchange",
"id": 28950,
"tags": "quantum-mechanics, wavefunction, symmetry, hilbert-space"
} |
Is the average kinetic energy of evaporating water molecules (at room temperature) equivalent to the average kinetic energy of boiling water? | Question: Purpose:
On new year's eve, after a splendid red and an assortment of sumptuous repasts, I made a bold remark which, on further consideration, may turn out to be incorrect. Unless! Unless I can concoct an impressive scientific explanation - for which I will need your input
Context:
Temperature is the average kinetic energy of a "bulk" liquid
Particles within the liquid will have a range of kinetic energies
according to kinetic molecular theory (Maxwell-Boltzmann curve etc.)
Water boils at 100 deg Celsius (101.3kpa etc)
Evaporation can happen at any temperature (>273K)
Individual particles don't have temperature but rather kinetic energy
Question:
Assuming we could measure the kinetic energy of multiple evaporating water molecules just as they left the surface of liquid water (at standard temp and pressure) over time, would the average kinetic energy of the sample of evaporating water molecules equal the average kinetic energy of boiling water (at 101.3kpa)? That is, if the evaporating water molecules had a temperature - would it be approximately one hundred degrees. Or, as I may have put it at the time, at a molecular level, do my drying undies effectively boil? (Note: I understand that the bulk water is at room temperature)
Thoughts:
If I knew the speed of evaporating water molecules I could calculate their energy and compare this to the average energy of boiling water and see if they were similar - but I can't see how to estimate the speed of evaporating water molecules?
Also:
...I have read "Is it true that an evaporating molecule has the same kinetic energy as a molecule in a pot of boiling water?" on this site. I don't think it answers this question.
Diagram for discussion
If I assume that Boltzmann's distribution works for liquids (I get that it's meant for ideal gases) and assume 9 degrees of freedom for water molecules then:
This suggests (if it's even remotely correct) that a very small number of molecules in room temperature water are moving very fast and therefore are at very high temperatures? It's not a relationship per se but, if correct, it does affirm the initial idea that evaporating water molecules are "hot"...even boiling?
Final Comments?
Thanks for the input and careful consideration. The chart below is my attempt to summarise my thoughts inspired by your comments. It's rendered in excel from the equation shown and accords well with the chart for water included in Boltzmann distribution for water which didn't extend far enough on the x axis for what we're trying to show here. Thanks for the heads up re energy distribution rather than speed - much easier to understand.
The equation comes from BC Campus Molecular Speeds I have ignored degrees of freedom effects in this and the other equations on the chart.
Clearly the energy of evaporating water molecules (let's say liquid immediately before take off), at room temperature (25 deg) is significantly higher than the average energy of boiling water.
What can be said about the little molecules about to liberate themselves from my drying undies then? The original question precipitated from the idea that molecules evaporating from washing 'boiled'.
They are at similar energy, a little greater in fact, than the energy of molecules about to jump from a pot of boiling water (44kJ vs 41kJ, 7% difference), and massively more energetic than the average energy of boiling water (4.7kJ)
If we could measure the temperature of a bunch of them they would be at an energy equivalent temperature some ten fold greater (3527K/373K) than the average temperature of boiling water (T = 2E/3Nk, bc campus university physics internal energy), but close to the equivalent (theoretical) temperature of the 'boiling' molecules. I know we can't measure their temperature as it is related to the average energy of the bulk liquid - but theoretically... (there's a case of beer in this)
The proportion of molecules ready to let loose from my y-fronts is much smaller per unit of room temperature water than it would be for boiling water (see the area under the curve right of the 41kJ and 44kJ points)
So...to all intents and purposes, the undie vaporizing molecules are doing what the boiling water vaporizing molecules are doing ....boiling (but they're not at 100 degrees, and we can't really measure their temperature - maybe half a case of beer then?). We don't feel the heat of the highly excited little blighters because it's a (relatively) small number of individual molecules within a bulk liquid at an average temperature of 25 degrees.
Other Stuff
In coming to the answer from a kinetic energy perspective, I found this paper Velocity of a droplet evaporated from water which measures and models the speed of protonated water molecules evaporated from nano droplets. Speeds of non-protonated molecules in bulk liquid water will vary, but it was encouraging to see that the speeds (say 2000 m/s) were of an order of magnitude commensurate with Boltzmann's distribution for water in Boltzmann distribution for water. On this curve, the experimental speeds would also be out in the flat area of the curve, equivalent to my energy curve above.
Answer:
[OP] the average kinetic energy of evaporating water molecules
You have to specify whether you are talking about the kinetic energy just before the water molecule breaks the hydrogen bonds to its neighbors or just afterwards. A millisecond before or after the event, of course, the average kinetic energy will be determined by the bulk temperature.
One way to picture this is two water molecules "colliding" at the surface of the liquid. In order for one water molecule to leave the liquid phase, the collision needs to have at least sufficient energy to break the hydrogen bonds (and perhaps some more to overcome an activation energy). When that happens, the total kinetic energy of the molecules in the collision will be lowered (because energy is conserved). What exactly we mean by "collision" in the liquid phase is not so important because the temperature-dependence of the kinetics of the process is still governed by the Arrhenius equation (and the Boltzmann distribution of collision energies) even when we go from simple mono-atomic gases to wicked-complicated liquids.
The energy required to break the water loose (~40 kJ/mol, according to Poutnik's answer) is much higher than the median collision energy at room temperature (~2 kJ/mol). Whether the water molecule has more or less kinetic energy than the average after it enters the gas phase is not so important. The important part is that it had an unusually high energy, and most of that excess energy went into breaking the hydrogen bonds.
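To put rough numbers on this comparison, here is a sketch that treats the ~40 kJ/mol as a simple Arrhenius-style barrier and uses the bare Boltzmann factor (a simplification; it ignores the shape of the full collision-energy distribution):

```python
import math

R = 8.314        # gas constant, J/(mol K)
Ea = 40_000      # J/mol, rough energy needed to break a molecule free

fracs = {T: math.exp(-Ea / (R * T)) for T in (298, 373)}  # Boltzmann factors
for T, f in fracs.items():
    print(f"T = {T} K: fraction of collisions with enough energy ~ {f:.1e}")
# T = 298 K: ~ 9.7e-08  (about one in ten million)
# T = 373 K: ~ 2.5e-06  (a few per million, consistent with the estimate above)
```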
[OP] Assuming we could measure the kinetic energy of multiple evaporating water molecules just as they left the surface of liquid water (at standard temp and pressure) over time, would the average kinetic energy of the sample of evaporating water molecules equal the average kinetic energy of boiling water (at 101.3kpa)?
https://chemistry.stackexchange.com/a/121656/
The kinetic energy of the water molecules that just evaporated is not exceptionally high or low. As the molecule leaves the liquid phase, the available energy is somehow partitioned between that water molecules and the ones it leaves behind. Your question boils down to whether the evaporative cooling of the liquid is temperature-dependent. I'm not sure if there is an experiment that would address this, and whether it has been done already.
[OP] Diagram for discussion
The diagram has a lot of problems, so I would not use it for discussion. The curve does not change that much with temperature, the shaded area is too big by far (according to Poutnik's answer, a molecule on the surface of the liquid has about a one in a million chance per collision to get into the gas phase at 373 K). Here is a diagram showing how the energy available from collisions changes when the temperature is raised from room temperature to body temperature. You can see that the changes for the bulk of energies are subtle. If you zoom in to the regions of exceptionally high collision energies, however, you can see that the changes are substantial. | {
"domain": "chemistry.stackexchange",
"id": 14975,
"tags": "water, boiling-point, kinetic-theory-of-gases, statistical-mechanics, evaporation"
} |
What is an equivalent definition of mP/poly in terms of a Turing machine? | Question: P/poly is the class of decision problems solvable by a family of polynomial-size Boolean circuits. It can alternatively be defined as a polynomial-time Turing machine that receives an advice string that is size polynomial in n and that is based solely on the size of n.
mP/poly is the class of decision problems solvable by a family of polynomial-size monotone Boolean circuits, but is there a natural alternative definition of mP/poly in terms of a polynomial-time Turing machine?
Answer: There is a notion of a monotone non-deterministic and, more generally, alternating Turing machine in the paper Monotone Complexity by Grigni and Sipser. Since polynomial time is the same as alternating logarithmic space, a machine characterization of uniform $\mathsf{mP}$ is the monotone alternating logspace Turing machine. Providing such a machine with polynomial advice will then give a machine definition of $\mathsf{mP/poly}$. | {
"domain": "cstheory.stackexchange",
"id": 3299,
"tags": "cc.complexity-theory, complexity-classes, circuit-complexity, polynomial-time, monotone"
} |
Finding a subset of objects with max value with constraints | Question: There are m people and n kinds of objects. Each person has exactly one instance of each kind of objects, but these objects have different values associated. Now I want to find a subset of objects from all those m*n objects satisfying the condition: it should have p objects of each kind (p < m), and a person cannot contribute more than p objects. I want to maximize the total value of the objects picked. I have tried several obvious greedy algorithms, but none of them has found the optimal solution. Is there an algorithm that is not NP hard?
Answer: Yes, sure. You can solve this problem using the Minimum Cost Maximum Flow technique. With this algorithm you can find a maximum flow, in a network with weighted edges, that has minimum total weight (weights can be negative). So let's build the model that fits our problem best.
Build the following graph
There are $m$ nodes in the set $A$, one for each person.
There are $n$ nodes in the set $B$, one for each kind of object.
The sets $A$ and $B$ are fully connected; the edge $e_{i,j}$ that connects node $A_i$ and $B_j$ has:
capacity: 1 (it can be taken at most once)
cost: $-w_{i,j}$ (we are looking for a maximum, which is the same as minimizing the negation of each value)
Source node $S$ is connected with an edge to each node in the set $A$.
capacity: $p$ (each person can contribute at most $p$ objects)
cost: 0
Nodes in set $B$ are connected to target node $T$.
capacity: $p$ (each kind of object is picked at most $p$ times)
cost: 0
Maximum flow in this graph is $n \cdot p$, since there is a feasible answer where each kind of object is taken exactly $p$ times.
Minimum cost is the negation of the answer you are looking for.
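As a sanity check of the construction, here is a minimal sketch with a hand-rolled successive-shortest-path min-cost max-flow (Bellman-Ford, so the negative edge costs are fine), run on a tiny hypothetical instance with 2 people, 2 kinds of objects, and p = 1; the value matrix is made up:

```python
INF = float("inf")

def min_cost_max_flow(n_nodes, edges, s, t):
    """edges: list of (u, v, capacity, cost). Returns (total_flow, total_cost)."""
    g = [[] for _ in range(n_nodes)]
    for u, v, cap, cost in edges:
        g[u].append([v, cap, cost, len(g[v])])      # forward edge
        g[v].append([u, 0, -cost, len(g[u]) - 1])   # residual back-edge
    flow = total_cost = 0
    while True:
        # Bellman-Ford shortest path by cost (costs may be negative)
        dist = [INF] * n_nodes
        dist[s] = 0
        parent = [None] * n_nodes
        for _ in range(n_nodes - 1):
            for u in range(n_nodes):
                if dist[u] == INF:
                    continue
                for ei, (v, cap, cost, _) in enumerate(g[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, ei)
        if dist[t] == INF:           # no augmenting path left
            return flow, total_cost
        v = t                        # push one unit of flow along the path
        while v != s:
            u, ei = parent[v]
            g[u][ei][1] -= 1
            g[v][g[u][ei][3]][1] += 1
            v = u
        flow += 1
        total_cost += dist[t]

# Nodes: 0 = S, 1..2 = the two persons, 3..4 = the two kinds, 5 = T
values = [[5, 1], [2, 4]]    # values[i][j]: value of person i's object of kind j
p = 1
edges = [(0, 1 + i, p, 0) for i in range(2)]                 # S -> person_i
edges += [(1 + i, 3 + j, 1, -values[i][j])                   # person -> kind
          for i in range(2) for j in range(2)]
edges += [(3 + j, 5, p, 0) for j in range(2)]                # kind_j -> T

flow, cost = min_cost_max_flow(6, edges, 0, 5)
print(flow, -cost)   # prints: 2 9  (two objects picked, best total value 5 + 4)
```

The augmenting step pushes one unit at a time for simplicity; pushing the bottleneck capacity instead would be faster but gives the same answer.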
This problem is in P. | {
"domain": "cs.stackexchange",
"id": 10405,
"tags": "algorithms"
} |
Lassaigne's test for nitrogen | Question: Why is it that benzene diazonium salts don't give Lassaigne's test for nitrogen? After all, they've got both carbon and nitrogen.
Answer: Benzene diazonium chloride is stable only up to about 5 degrees Celsius, while in Lassaigne's test we need to heat the organic compound with sodium until red hot before breaking the tube by plunging it into cold water. All the nitrogen will be liberated before it gets a chance to combine with Na and form any NaCN, which is ultimately responsible for the test. | {
"domain": "chemistry.stackexchange",
"id": 7934,
"tags": "organic-chemistry, redox, analytical-chemistry"
} |
Reverse lift mechanism | Question: I have made an RC robot from a wheelchair and I'm planning to attach a snow plow. I'm wondering if there is any mechanism that would be able to lift the plow when reversing.
I have only a 2-channel transmitter and can't control the plow's movement through it, so I was thinking of some mechanical lift that triggers when reversing.
Do you guys know about something I could use for it?
Thanks.
Answer: I assume that your 2 channels control forward/backward and left/right. But even if the 2 channels control forward/backward in each wheel (differential-drive style), it should still be possible to do what you are suggesting electrically instead of mechanically.
You should be able to read the input signal to the motors, decide whether those signals are commanding a "reverse" movement, and trigger your snow plow lift motor accordingly. | {
"domain": "robotics.stackexchange",
"id": 453,
"tags": "wheeled-robot, mechanism"
} |
How to use equation of motion like this here? | Question: So I will explain exactly what my problem is. Please answer in such a format, telling me the places where I am wrong in the text I write. I posted this question earlier also, but I noticed many people giving answers to things which I didn't even ask, and I had to explain all the things again in the comments. In the end, some of them left in between, and some didn't even tell the reason why the post is not good. So please check upon these points.
Questions is as follows:
Q :
A particle of mass 1 kg has a velocity of 2 m/s. A constant force of 2 N acts on the particle for 1 s in a direction perpendicular to its initial velocity. Find the velocity and displacement of the particle at the end of 1 second.
My questions:
Doubt regarding direction of force:
Force means push or pull.
In image 1,
I drew the force in downwards direction or pulling force (downward vector) which is perpendicular to u (right direction)
Image 2
I drew the force in upwards direction or pushing force (upward vector) which is perpendicular to u (right direction).
So which of these 2 images is right? The question doesn't specify it. Also, is what I thought right or not?
Doubt 2::
In the solution, they have made the equations of motion into two mutually perpendicular vectors.
So I thought of it for a while and thought that maybe because v is velocity, a is acceleration.So they are vectors and have a direction and can be drawn like this.
Q1 I am sure you must have noticed the direction of the vector, which is in the upward direction. So is it necessary for it to be in the upward direction, or can it be in the downward direction as well?
Q2 Why is t also drawn along with those vectors, since time is a scalar quantity?
Q3 v = u + at — in this equation, they have written v as 2$\sqrt{2}$ m/s (obtained by solving it with the vector method), but if you use the equation directly, then you get v = 2 + 2(1) = 4 m/s.
Why are both of them different?
Q4 Why did we consider the direction of vector v to be in between u and at? The force was supposed to act along "at". It looks as if at^2 is equivalent to F = ma in the diagram. Why did that happen?
Ok. So I am not saying that you must answer me exactly one by one, but answer in such a way that you cover all my doubts, and not just leave one line and never respond after that.
Do tell me the exact line from the text that you didn't understand.
Answer:
Force means push or pull.
Doesn't matter.
So which of these 2 images is right? The question doesn't specify it. Also, is what I thought right or not?
It doesn't matter. When the question says velocity they mean the magnitude only in this case (we can see that from the wording). Sure, the direction of the force will determine whether the new velocity vector tilts one way or the other - but in both scenarios the magnitude will be the same. So it doesn't matter from which direction the force pushes, as long as it is perpendicular.
Q1 I am sure you must have noticed the direction of the vector, which is in the upward direction. So is it necessary for it to be in the upward direction, or can it be in the downward direction as well?
Again, it doesn't matter. The direction doesn't matter at all - that just depends on from where you are looking at the scenario - only the relative direction matters. And that is already fixed with the force being always perpendicular to initial velocity.
Q2 why is t also drawn along with those vectors since time is a scalar quantity.
There is no rule against including scalars in vector equations. If I multiply the scalar $2$ with the vector $(1,3)$ then I get a new vector $2\cdot (1,3)=(2,6)$. No problem doing this.
Q3 v = u + at in this equation. They have written v as $2\sqrt 2$ m/s (used by solving it in the vectors method) but if you use the equation directly, then you get v = 2+2(1) = 4 m/s.
No, they have not written $v$ as $2\sqrt 2$ m/s. They have written $|v|$ as $2\sqrt 2$ m/s. Be careful with what is a vector ($v$) and what is a vector's magnitude ($|v|$).
If you look only in the perpendicular direction, then your equation is correct, though. Just remember to input the values that apply to this perpendicular direction also. The perpendicular acceleration $a$ comes from the force, and that acceleration can be calculated to be $F=ma\Leftrightarrow a=F/m=2\,\mathrm N\,/\,1\,\mathrm{kg}=2\,\mathrm{m/s^2}$. Using this acceleration value with your motion equation in the perpendicular direction we get:
$$v_\perp = u_{\perp} + at=0+2\,\mathrm{m/s^2}\cdot1\,\mathrm s=2\,\mathrm{m/s}$$
So, here we have the speed component that is added to the motion in the perpendicular direction. The acceleration does not act along with the motion (in the direction of the initial speed), so there is no change along that parallel direction. In that direction the speed stays constant $$v_\parallel=u_\parallel+\underbrace{a_\parallel}_{0}t=2\,\mathrm{m/s}+0\cdot 1\,\mathrm{s}=2\,\mathrm{m/s}$$ So, your equation v = 2+2(1) = 4 m/s is not correct. With the perpendicular and the parallel components we can now find the magnitude via Pythagoras' theorem:
$$|v|^2=v_\perp^2+v_\parallel^2\quad\Leftrightarrow\quad\\ |v|=\sqrt{v_\perp^2+v_\parallel^2}=\sqrt{(2\,\mathrm{m/s})^2+(2\,\mathrm{m/s})^2}=\sqrt{8\,(\mathrm{m/s})^2}=\sqrt{8}\,\mathrm{m/s}=2\sqrt 2\,\mathrm{m/s}$$
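As a sanity check on the numbers above, here is a throwaway Python sketch (with the problem's values hard-coded) that reproduces both the speed and the displacement asked for:

```python
import math

m, u, F, t = 1.0, 2.0, 2.0, 1.0       # kg, m/s, N, s -- values from the problem

a = F / m                              # perpendicular acceleration: 2 m/s^2
v_par = u                              # nothing accelerates the parallel direction
v_perp = a * t                         # 0 + a*t = 2 m/s gained sideways
speed = math.hypot(v_par, v_perp)      # |v| = sqrt(2^2 + 2^2) = 2*sqrt(2) m/s

s_par = u * t                          # displacement along initial velocity: 2 m
s_perp = 0.5 * a * t**2                # displacement sideways: 1 m
displacement = math.hypot(s_par, s_perp)   # sqrt(5) m, about 2.24 m
```

The same component-wise bookkeeping as in the answer: the force only ever changes the perpendicular component, and the magnitudes come out via Pythagoras.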
Q4 why did we consider direction of vector v in a direction i.e in between of u and at.
Because it has a direction. The motion started with only a speed component in one direction (the initial direction), $u=(u_\parallel,0)$, but ends out with a speed component in both that and now also in the perpendicular direction, $v=(v_\parallel,v_\perp)$. This is a new vector that has been tilted a bit. It has an angle.
Since force was supposed to there at ”at“. It looks as if at^2 is equivalent to F = ma in the diagram. Why did that happen ?
Not understood. Please clarify this part. | {
"domain": "physics.stackexchange",
"id": 75092,
"tags": "homework-and-exercises, newtonian-mechanics, forces, kinematics, vectors"
} |
Cannot see images beyond a certain magnification with a microscope | Question: Using a new microscope with my son, we cannot see images beyond a certain magnification:
Eyepiece: 25x - Objective lens: 4x --> OK
Eyepiece: 25x - Objective lens: 10x --> OK
Eyepiece: 25x - Objective lens: 40x --> not focused
Eyepiece: 25x - Objective lens: 100x --> not focused
In the last two cases, we try moving the stage along all its range, without getting anything.
Same issue using a 10x eyepiece.
On the 100x objective lens "OIL" is written; does it mean the observation needs to be done through oil instead of air? Or just that oil is used inside the objective?
We did not prepare the leaf; we just put a small piece of it between two slides.
Are we doing anything wrong?
Observing a leaf at 250x, we can see tiny round structures (I think barely bigger than what can be possibly seen by our eye): are those the cells?
Answer: This is probably the problem: "We did not prepare the leaf; we just put a small piece of it between two slides.".
I suppose you mean you put a leaf fragment between two regular microscope slides, those measuring 26mm * 76mm (1in. * 3in.)?
That won't work. Long story short: microscope optics/objectives are designed with a specific working distance in mind, which is not an entirely free choice, but dictated by the laws of physics/optics. An average achromat 40/0.65 has a working distance of about 0.6 mm. An average achromat 100/1.25 has a working distance of about 0.2 mm. An average microscope slide has a thickness of about 1 - 1.2 mm...
In the... well... regular microtechnique, a specimen is put on a slide and covered with a very thin coverslip, having a thickness of around 0.15 - 0.17 mm.
Also, as others mentioned, using a 25x eyepiece in combination with a 40/0.65 or a 100/1.25 is kind of overkill. With that combination, you enter the realm of "empty magnification". In short: structures appear larger, but no new structures are revealed and the image becomes more and more fuzzy.
And yes: you need to use immersion oil with the 100x objective, but keep in mind this is a very demanding objective to be used, on the part of the microscopist as well as on the part of the slide preparer.
Using a microscope is not all that simple for novices, but there's an easy way to find out -if no image can be produced-, if it's a microscope problem or an inexperienced user problem: take as a specimen an as thin as possible large object, such as a leaf of cigarette paper, and try to focus on it, using low power and gradually use stronger objectives up to 40/0.65. If an image can be produced, even if it's rather blurred at 40/0.65, the microscope is okay. | {
"domain": "biology.stackexchange",
"id": 7025,
"tags": "microscopy"
} |
Is this language countable : $L= \{ w : w \in (1 + 0)^{*} \}$ | Question: This is my take :
Epsilon ---> 1
0 --> 2
01 ---> 3
10 ---> 4
11 ---> 5
001 ---> 6
010 ---> 7
.
.
.
So therefore we can count them.
But based on this video : https://www.youtube.com/watch?v=oe-ZAJQz9Cc&index=5&list=PLsFENPUZBqiqbnD-WatYxUhRWLMNDoMun
They should not be countable, but i did not understand that video, the guy says all possible languages over $\{0,1\}^{*}$ are uncountable!
Answer: The language $$L= (0+1)^*$$ (the set of all strings over $0$ and $1$) is countable. Furthermore, any subset of $L$ is also countable. However, the set of all sublanguages of $L$
$$S = P(L) = \{M \mid M \subseteq L\}$$ (a set of sets in fact) is not countable. This video proves this using the diagonalization argument.
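To make the countability of $L$ itself concrete, here is a small Python sketch of one explicit enumeration: all strings over $\{0,1\}$ in shortlex order (shortest first, then lexicographic), which pairs each string with a natural number:

```python
from itertools import count, islice, product

def shortlex_binary():
    """Yield every string over {0,1}, shortest first, then lexicographically."""
    for length in count(0):                      # lengths 0, 1, 2, ...
        for bits in product("01", repeat=length):
            yield "".join(bits)

# The first seven strings of the enumeration:
print(list(islice(shortlex_binary(), 7)))   # ['', '0', '1', '00', '01', '10', '11']
```

(The exact numbering in the question differs slightly, but any such bijection with the natural numbers suffices.)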
$S$ and $L$ are two different sets. In fact, $L$ is a member of $S$. | {
"domain": "cs.stackexchange",
"id": 9872,
"tags": "formal-languages, uncountability"
} |
Better way to incorporate HTML and PHP | Question: When I look at this style of code I feel like there has to be a better way to write this:
<div id="by_cx">
<!-- free text input for name -->
<label>Name:<input type="text" id="cxByNameInput" name="cxByNameInput"/></label>
<!-- cx state drop down -->
<label for="cxState">State:</label>
<select id="cxState">
<option></option>
<?$states = getStates();
while($state = $states->fetch_assoc())
{?>
<option value="<?=$state['st']?>"><?=$state['state']?></option>
<?}?>
</select>
<!-- cx status drop down -->
<label for="cxStatus">Status</label>
<select id="cxStatus">
<option></option>
<?$statusList = getCxStatusList();
while($status = $statusList->fetch_assoc())
{?>
<option value="<?=$status['cxType']?>"><?=$status['cxTypeDescription']?></option>
<?}?>
</select>
</div>
I'd like to mention that keeping Eclipse's HTML code coloring would be a huge bonus...
Answer: Code separation
What you have there is everything mixed together - just by moving the php logic to the top of the file, makes it easier to read. With some minor reformatting it becomes easier to read/maintain:
<?php
$states = getStates();
$statusList = getCxStatusList();
?>
<div id="by_cx">
<!-- free text input for name -->
<label>Name:<input type="text" id="cxByNameInput" name="cxByNameInput"/></label>
<!-- cx state drop down -->
<label for="cxState">State:</label>
<select id="cxState">
<option></option>
<?php while($state = $states->fetch_assoc()): ?>
<option value="<?= $state['st'] ?>"><?= $state['state'] ?></option>
<?php endwhile; ?>
</select>
<!-- cx status drop down -->
<label for="cxStatus">Status</label>
<select id="cxStatus">
<option></option>
<?php while($status = $statusList->fetch_assoc()): ?>
<option value="<?= $status['cxType'] ?>"><?= $status['cxTypeDescription'] ?></option>
<?php endwhile; ?>
</select>
</div>
Format for readability
If you're jumping in and out of php - having lines like this: {?> make things hard to read, especially if there is some nesting in the code. Whether you choose to use curly braces or the alternate (colon) syntax is up to you, but in html code using a style which aides readability (putting the curly brace/colon on the same line as the statement it relates to) helps.
No PHP Short tags
Shorttags are often considered a bad practice, and as such should be avoided unless the code is your own, and you control where it's going to be used. Note that you can use <?= with PHP 5.4 irrespective of the shorttags setting.
Consistent whitespace
The following:
<? foo(); ?>
Is easier to read than
<?foo();?>
Especially where it appears inline somewhere - use whitespace for readability.
Sprintf
If you find that your code gets to be like this:
<foo x="<?= $x ?>" y="<?= $y ?>" z="<?= $z ?>"> ...
Then it's probably easier to read and maintain using:
<?php echo sprintf('<foo x="%s" y="%s" z="%s">...', $x, $y, $z); ?> | {
"domain": "codereview.stackexchange",
"id": 2609,
"tags": "php, html"
} |
Finding a hamiltonian cycle in $G'$ given a hamiltonian cycle in $G$ | Question: Say I have an undirected, weighted graph $G=(V,E)$ and I know a hamiltonian cycle of minimum weight in that graph. Can I use that information to efficiently find a hamiltonian cycle in $G'=(V',E')$ where $V'=V-\{v\}$ for some vertex $v$ and $E'$ is $E$ with all edges touching $v$ removed?
It is safe to assume that there still exists a hamiltonian cycle in $G'$.
Answer: I believe the answer is no and here is why.
Assume that I can efficiently find a hamiltonian cycle of minimum weight in $G'$ given a hamiltonian cycle of minimum weight in $G$. I can use this procedure to find a hamiltonian cycle in any graph, say $A$.
The procedure would work like this. Take $A$ and add vertices and edges (with weight $0$) such that a hamiltonian cycle of weight $0$ is easy to find. The cycle, for example, could alternate between new vertices and the original vertices of $A$. Since all new edges have weight $0$, the cycle weight would be $0$. If we call the new graph $A_1$, we know a simple hamiltonian cycle in $A_1$. Remove one of the newly added vertices from $A_1$ to create $A_2$ then run our procedure to efficiently find a hamiltonian cycle in $A_2$. Then repeat the procedure.
Eventually we will arrive at a graph, say $A_k$, which only has one extra vertex when compared to $A$. We will know a hamiltonian cycle in $A_k$ from the procedure above. So we can use this information to efficiently find a hamiltonian cycle in $A$ (using the same procedure as above).
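A sketch of that padding step in Python (the `{vertex: {neighbour: weight}}` representation and helper name are hypothetical; it assumes the original weights are non-negative, so the constructed weight-0 cycle really is of minimum weight):

```python
def pad_with_free_cycle(adj):
    """Add one fresh vertex between consecutive original vertices, connected by
    weight-0 edges, so a weight-0 Hamiltonian cycle is known by construction."""
    verts = list(adj)
    padded = {u: dict(nbrs) for u, nbrs in adj.items()}  # copy original edges
    cycle = []
    k = len(verts)
    for i, v in enumerate(verts):
        n = ("new", i)                       # fresh vertex between v and the next
        padded[n] = {}
        nxt = verts[(i + 1) % k]
        for a, b in ((v, n), (n, nxt)):      # weight-0 edges of the free cycle
            padded[a][b] = 0
            padded[b][a] = 0
        cycle += [v, n]                      # cycle alternates old/new vertices
    return padded, cycle
```

On a weighted triangle, for example, this returns a 6-vertex graph together with its known weight-0 Hamiltonian cycle `[v0, n0, v1, n1, v2, n2]`.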
Therefore, finding a hamiltonian cycle in $G'$ given one in $G$ must be at least as hard as finding a hamiltonian cycle in general. | {
"domain": "cstheory.stackexchange",
"id": 2079,
"tags": "cc.complexity-theory, graph-theory, hamiltonian-paths"
} |
Calculating shopping cart discounts | Question: I have a method that checks to see if a hash of given items should have discounts applied and if so, determines and returns the discount:
def get_discounts
@items.each do |name, attr|
@specials.each do |special|
while name == special.sale_item && attr[:quantity] >= special.quantity
@discounts << special.discount
attr[:quantity] = attr[:quantity] - special.quantity
end
end
end
determine_discount
end
def determine_discount
if @discounts.empty?
@discounts = 0
else
@discounts = @discounts.inject(:+)
end
end
This works perfectly, but is there a more concise way to write it? I'm looking especially at the two each loops. I'm also a bit iffy about the while loop - it was an if statement (if name == special.sale_item) but it felt like too much so I combined it into the while loop.
Answer: I agree with both of your suspected issues:
Pointless iteration through @items hash: As you suspected, iterating through all entries of a hash defeats the purpose of a hash. That takes O(I * S) time — the number of items times the number of specials. Why not iterate through @specials, then look up the item by name? That's only O(S).
while loop: The loop can be replaced with arithmetic.
In addition, I would like to point out some more problems:
Surprising side-effect in getter: By convention, any method named get_*() is assumed to have no side-effects. However, get_discounts() alters @items, reducing their quantity. If that is intentional, you should rename the method to apply_discounts() or something even more suggestive that there is a side-effect.
Abuse of instance variable: @discounts is assumed to be pre-initialized to an empty array before get_discounts() is called. get_discounts() populates the array, then calls determine_discount(), which converts it into a scalar. That indicates that @discounts is not storing the state of the object, so using an instance variable for that purpose is abuse.
Here is a revised version of the code:
def get_discounts
discount = 0
@specials.each do |special|
if item = @items[special.sale_item]
multiples = (item[:quantity] / special.quantity).floor
discount += multiples * special.discount
# Suspicious side-effect...
# item[:quantity] -= multiples * special.quantity
end
end
return discount
end | {
"domain": "codereview.stackexchange",
"id": 5224,
"tags": "ruby, e-commerce"
} |
In an avalanche breakdown, where are the electrons that break free from? | Question: In an avalanche breakdown, are the electrons that break free under the influence of the applied electric field from the depletion region or outside it?
Also, under reverse bias, how exactly is the internal potential difference widened? (causal mechanism, microscopically) I've always seen it stated but never explained.
Answer: Avalanche breakdown is caused by impact ionisation. At very large reverse biases the electric field across the depletion region is so large that electrons gain enough kinetic energy to ionise lattice atoms. This causes a chain reaction as the electrons generated from impact ionisation gain energy from the field and cause more ionisation events. So to answer your question impact ionisation occurs where the electric field is large, in the depletion region.
As for your second point, the widening of the depletion region. Remember that the depletion region is formed by the equilibrium between a drift (field assisted) and diffusion (concentration gradient assisted) currents. In reverse bias the field is increased, which means that the drift current is stronger than at equilibrium (V=0). The drift current pushes electrons and holes away from the depletion region so the depletion region grows in size as reverse bias increases. | {
"domain": "physics.stackexchange",
"id": 10551,
"tags": "semiconductor-physics"
} |
What do we do instead of DFS on directed graphs? | Question: All the example of DFS I've seen so far are for undirected graph.
In a directed graph the basic DFS algorithm won't reach everything, because some vertices may be unreachable from the chosen start vertex.
The algorithm I'm talking about :
https://en.wikipedia.org/wiki/Depth-first_search
1 procedure DFS-iterative(G,v):
2 let S be a stack
3 S.push(v)
4 while S is not empty
5 v = S.pop()
6 if v is not labeled as discovered:
7 label v as discovered
8 for all edges from v to w in G.adjacentEdges(v) do
9 S.push(w)
Example :
So let's say I start with '1': I will never access 4, 5, 6 with this algorithm.
For directed graph it looks like we have to know all the vertex and iterate through them.
And so we cannot use a DFS for directed graph ?
Is there a DFS variant or another algorithm ?
Answer: The issue is not specific to DFS.
When looking at directed graphs, even for connected graphs not all nodes are reachable from everywhere. That's why the notion of a graph being strongly connected exists.
In a strongly connected graph, graph traversals starting in a single node will reach all nodes.
In other graphs, it won't.
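In that case you simply restart the traversal from a still-undiscovered node. A minimal Python sketch of that restart loop (the adjacency-list dict is a made-up example):

```python
def dfs_cover_all(graph):
    """Iterative DFS, restarted from every still-undiscovered node, so that
    even nodes unreachable from the first start get visited."""
    discovered = set()
    order = []
    for start in graph:                  # restart loop over all possible roots
        if start in discovered:
            continue
        stack = [start]
        while stack:
            v = stack.pop()
            if v not in discovered:
                discovered.add(v)
                order.append(v)
                stack.extend(graph[v])   # push all successors of v
    return order

# The situation from the question: starting at 1, nodes 4, 5, 6 are unreachable.
g = {1: [2, 3], 2: [], 3: [], 4: [5], 5: [6], 6: []}
assert dfs_cover_all(g) == [1, 3, 2, 4, 5, 6]   # still visits everything
```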
This affects all traversal algorithms. For some reason, (some) canonical BFS implementations include looping over all starting nodes but DFS implementations do not. I don't know why; I assume historical reasons. There certainly is no conceptual difference; you can just as easily restart DFS with new starting nodes until you cover the whole graph. | {
"domain": "cs.stackexchange",
"id": 10580,
"tags": "algorithms, graphs, graph-traversal"
} |
Question regarding weights in a Model | Question: Let's take a linear regression model. I just want to know: are the weights the same for every row once the model is trained? Or are the weights a vector or an array? E.g., if I have data X = [4, 5], four rows and 5 features, will the weights W be the same for every row, or do they have a different value for every row?
I am a beginner so please spare me these basic questions
Answer: In the basic regression setup, the weights are a function of the whole training dataset, and therefore they are the same for every row. A trained linear model has one weight per feature (here a length-5 vector), not a separate set of weights per row. | {
"domain": "datascience.stackexchange",
"id": 9735,
"tags": "machine-learning"
} |
Simple, encapsulated C++ logger that can deal with fork/exec situations | Question: Motivation: for whatever reason, some of the available 3rd party logging libraries don't really deal well with programs that get forked/executed. For instance, boost::log creates some static state that can cause deadlocks if the program using it gets forked.
This class attempts to be a simple logger for programs that get forked/exec'ed.
For locking, I chose to use a boost interprocess mutex rather than a file lock, because if a child process gets stuck with the lock taken out, restarting the parent process will destroy the current mutex and create a new one.
I'm looking for style, performance and viability feedback. I've tested it and it seems to work well enough.
#pragma once
#include <iostream>
#include <fstream>
#include <locale>
#include <string>
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/string.hpp>
#include <boost/interprocess/permissions.hpp>
#include <boost/interprocess/sync/interprocess_sharable_mutex.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/format.hpp>
#include <boost/date_time/posix_time/posix_time_io.hpp>
#include <cstdarg>   // va_list / va_start (used in log())
#include <cstdio>    // vsnprintf / snprintf
#include <sstream>   // std::stringstream
#include <unistd.h>  // getpid()
static const char *kLoggerSegmentName = "MPLLoggerSeg";
static const char *kLoggerMutexName= "MPLLoggerIPCMutex";
enum LogLevel { MPLTRACE, MPLDEBUG, MPLINFO, MPLWARN, MPLERROR, MPLFATAL };
class MultiProcessLogger {
private:
LogLevel fLoggingLevel;
bool fEnabled;
bool fParent;
std::ofstream fLogFile;
inline void writeToFile(std::string &msg) {
using namespace boost::interprocess;
managed_shared_memory segment(open_only, kLoggerSegmentName);
interprocess_sharable_mutex *mutex = segment.find<interprocess_sharable_mutex>(kLoggerMutexName).first;
scoped_lock<interprocess_sharable_mutex> lock(*mutex);
fLogFile << msg;
//lock automatically unlocks when scope is left.
}
inline const char *getLevelString() {
switch(fLoggingLevel) {
case MPLTRACE:
return "<trace>";
case MPLDEBUG:
return "<debug>";
case MPLINFO:
return "<info.>";
case MPLWARN:
return "<warn.>";
case MPLERROR:
return "<error>";
case MPLFATAL:
return "<fatal>";
default:
return "< >";
}
}
void destroySharedMemory() {
using namespace boost::interprocess;
try
{
shared_memory_object::remove(kLoggerSegmentName);
} catch (...) {
std::cerr << "Error: unable to remove segment: " << kLoggerSegmentName << std::endl;
}
}
//disable copy constructor.
MultiProcessLogger(MultiProcessLogger &that);
public:
MultiProcessLogger(bool enabled, const char* logDir, const char* fileName, LogLevel level) :
fLoggingLevel(level), fEnabled(enabled), fParent(false) {
if (!fEnabled) {
return;
}
std::string logFilePath(logDir);
logFilePath.append("/");
logFilePath.append(fileName);
fLogFile.open(logFilePath.c_str(), std::ios::app);
}
~MultiProcessLogger() {
if (!fEnabled) {
return;
}
fLogFile.close();
if (fParent) {
destroySharedMemory();
}
}
void initParentProcess() {
using namespace boost::interprocess;
fParent = true;
destroySharedMemory();
permissions perms;
perms.set_unrestricted();
managed_shared_memory segment(create_only, kLoggerSegmentName, 1024, 0, perms);
interprocess_sharable_mutex *mutex= segment.construct<interprocess_sharable_mutex>(kLoggerMutexName)();
if (!mutex) {
std::cerr << "Error: unable to create interprocess mutex for logger" << std::endl;
abort();
}
}
void initChildProcess() {
fParent = false;
}
void log(LogLevel level, const char* msg, ...) {
if (!fEnabled) {
return;
}
if (fLoggingLevel > level) {
return;
}
va_list args;
va_start(args, msg);
char msgBuf[512];
vsnprintf(msgBuf, sizeof msgBuf, msg, args);
va_end(args);
std::string logMessage;
//format:
//time <level> proc:pid message
boost::posix_time::ptime now = boost::posix_time::second_clock::local_time();
boost::posix_time::time_facet* facet = new boost::posix_time::time_facet();
facet->format("%m-%d-%Y %I:%M:%S %p %Q");
std::stringstream stream;
stream.imbue(std::locale(std::locale::classic(), facet));
stream << now;
logMessage.append(stream.str());
logMessage.append(" ");
logMessage.append(getLevelString());
logMessage.append(" ");
logMessage.append("process");
logMessage.append(":");
char pidStr[6];
snprintf(pidStr, 6, "%05d", getpid());
logMessage.append(pidStr);
logMessage.append(" ");
logMessage.append(msgBuf);
logMessage.append("\n");
writeToFile(logMessage);
}
};
Usage:
//in parent process
MultiProcessLogger logger(true, "/tmp", "myLog.log", MPLINFO);
logger.initParentProcess();
...
logger.log(MPLERROR, "hello %s", "world");
Answer:
512 is really a small fixed-size buffer for the formatted message (seriously)
Given all the other stuff the function has to do, I'd go with a dynamic buffer here as well. (Calling vsnprintf with 0 as the buffer size should tell you how much you need.)
the log function seems horribly inefficient: Creating the facet each time, multiple append without preallocation
Just use a single stringstream for the whole logMessage | {
"domain": "codereview.stackexchange",
"id": 12747,
"tags": "c++, beginner, locking, logging, child-process"
} |
Structuring Events in JavaScript | Question: Below is the basic shell of an application I am working on that is part of a Webhosting Control Panel. This part is for the DNS records management.
So in my code below, I have taken away all the main functionality as it's not relevant to my question and to make it less cluttered.
On the page I have between 12-15 JavaScript Events ranging from click events to keypress event to keydown events etc...
Right now I have these main functions...
dnsRecords.init()
dnsRecords.events.init()
dnsRecords.records.addRow(row)
dnsRecords.records.save()
dnsRecords.records.undoRow(row)
dnsRecords.records.deleteRow(row)
I have put all my Events into dnsRecords.events.init(). So each Event basically calls or passes Data to the dnsRecords.records functions.
Since I am new to JavaScript I am wanting to know if there is anything really wrong with this method or is there is a better location or way to put all those different Events?
Is it generally a good idea or not to put all my Events into 1 area like that and then have them fire other functions, instead of cluttering their callback area with logic code?
Also note, I am not looking to use Backbone or some other Framework at this time, just want to know a good way to structure a small single page application like this. Thank you.
var dnsRecords = {
unsavedChanges : false,
init: function() {
dnsRecords.events.init();
},
events: {
init: function() {
// If user trys to leave the page with UN-SAVED changes, we will Alert them to...
$(window).on('beforeunload',dnsRecords.events.promptBeforeClose);
$(document).on('keypress','#typeMX div.hostName > input',function() {
$(document).off('keypress','#typeMX div.hostName > input');
});
// Activate SAVE and UNDO Buttons when Record Row EDITED
$(document).on("keydown", "#dnsRecords input" ,function() {
});
// Add new Record Row
$("#dnsRecords div.add > .btn").click(function(e) {
e.preventDefault();
});
// Mark Record as "Deleted" and change the view of it's Row to reflect a Deleted item
$(document).on("click", ".delete" ,function(e) {
dnsRecords.records.deleteRow($(this));
e.preventDefault();
});
// Show Undo button when editing an EXISTING ROW
$(document).on("keydown","div.dnsRecord input[type='text']",function() {
});
// Undo editing of an EXISTING ROW
$("button.undo").on("click",function() {
dnsRecords.records.undoRow($(this));
});
//Save Changes
$("#dnsTitle a.save").click(function() {
zPanel.loader.showLoader();
});
//Undo ALL Record Type Changes
$("#dnsTitle a.undo").click(function() {
});
$("form").submit(function() {
});
},
},
records: {
addRow: function(record) {
// All the code to Add a record here
},
save: function() {
// All the code to Save a record here
},
undoRow: function(row) {
// All the code to Undo a record here
},
deleteRow: function(row) {
// All the code to Delete a record here
dnsRecords.unsavedChanges = true;
},
}
};
$(function(){
dnsRecords.init();
});
Answer: General tips:
Use semi-colons. In JS, although they are optional, you should use them to avoid syntax errors, especially when minifying.
To modularize your code, wrap them in a closure. Consider it your "sandbox" for your code.
As for the rest, it's in the comments
//enclosing it in a closure so we won't spill to the global scope
//a general tip I keep is that
//if this module performs something only it should do, then keep it in the scope (private)
//everything that others can use (public) gets exposed via the namespace
(function (window, document, $, undefined) {
//we cache a few useful values, like jQuery wrapped window and document
var $window = $(window),
$document = $(document),
//here's an example of exposing. we expose this object as dnsRecords
//to the global scope. Since assignment operations "spill left"
//the same object gets assigned to the local dnsRecord
//we do this since every access to a property (something dot something)
//is an overhead. also, assigning to a variable is shorter anyway.
dnsRecords = window.dnsRecords = {
unsavedChanges: false,
addRow: function (record) {},
save: function () {},
undoRow: function (row) {},
deleteRow: function (row) {
dnsRecords.unsavedChanges = true;
}
};
//here, we declare your init methods
//like stated above, since only this module uses init, it's kept in the scope
//rather than it being exposed
function bindDom() {
//we then use the cached values
$window.on('beforeunload', dnsRecords.promptBeforeClose);
//did you know that the on method returns the object it operated on
//which means it returns $document, which also means we can chain on
//also, I suggest you delegate to the nearest available parent
//for shorter delegation. In this code you have, the event needs
//to "bubble" to the document root in order for handlers to execute
//which is also an overhead, and the same reason live was deprecated
$document
.on('keypress', '#typeMX div.hostName > input', function () {
$document.off('keypress', '#typeMX div.hostName > input');
})
.on("keydown", "#dnsRecords input", function () {
})
.on("click", ".delete", function (e) {
dnsRecords.deleteRow($(this));
e.preventDefault();
})
.on("keydown", "div.dnsRecord input[type='text']", function () {
});
$("#dnsRecords div.add > .btn").click(function (e) {
e.preventDefault();
});
$("button.undo").on("click", function () {
dnsRecords.undoRow($(this));
});
$("#dnsTitle a.save").click(function () {
zPanel.loader.showLoader();
});
$("#dnsTitle a.undo").click(function () {});
$("form").submit(function () {});
}
//you can declare other functions here as well to split operations
function someOtherInitStuff(){
...
}
//I notice your init function is called on documentReady
//why not merge it in the module, and make that function your init
$(function () {
//call stuff you want to init
bindDom();
someOtherInitStuff();
});
}(this, document, jQuery));
//out here, your exposed methods are like:
dnsRecords.addRow();
dnsRecords.save(); | {
"domain": "codereview.stackexchange",
"id": 3664,
"tags": "javascript"
} |
Extracting nodes from a graph database | Question: I'm new to Postgres and PostGIS, but not to geospatial applications.
I have a table loaded up with graph data in the form of links (edges). The link database has about 60,000,000 rows. I am trying to extract the nodes to allow for quicker searching. For some of my proof-of-concept work I was able to use the link table for the searches, but there will be lots of duplicates, and there's no guarantee that either the source column or the target column contains all nodes.
This is Postgres & PostGIS, so I am also using the table to cache a geometry->geography conversion. Yes I do need to use geography fields. I'm also copying the geometry information "just in case".
Table creation SQL:
-- Recreate table and index
DROP TABLE IF EXISTS nodes;
CREATE TABLE nodes (node integer PRIMARY KEY, geog geography(POINT,4326) );
CREATE INDEX geogIndex ON nodes USING GIST(geog);
SELECT AddGeometryColumn('nodes', 'geom', 4326, 'POINT', 2);
-- Insert all unique nodes from the source column
INSERT INTO nodes (node,geog,geom)
SELECT DISTINCT ON (source) source,geography( ST_Transform(geom_source,4326)),geom_source
FROM view_topo;
-- Insert any nodes in the target column that we don't have already
INSERT INTO nodes (node,geog,geom)
SELECT DISTINCT ON (target) target,geography( ST_Transform(geom_target,4326)),geom_target
FROM view_topo
WHERE NOT EXISTS( SELECT 1 FROM nodes WHERE nodes.node = view_topo.target);
VACUUM ANALYZE;
I left the first INSERT running overnight and it took about 2-3hrs to run. This resulted in about 40,000,000 unique nodes being added.
I have just enabled the second INSERT and the VACUUM ANALYZE. I am expecting it to take until at least lunchtime.
Luckily this is only a batch job that has to be executed once after I've loaded a new link table, but is there a better way? Is there a faster way?
Answer: Check out PostgreSQL's tips on adding a lot of data into a table. In particular, are you sure that you need that index before INSERTing all that data? It might speed things up if you create the index after all the data has been added to the table. | {
"domain": "codereview.stackexchange",
"id": 15,
"tags": "sql, postgresql"
} |
How random is Python's Random Module | Question: Recently I wrote a programme using the random module of Python. Then I realised that the same output was being repeated quite a number of times, though it was supposed to be completely random.
Answer: You need to understand that all of those libraries/functions won't really give you a truly random number. Why? Because whatever algorithm they use takes some input and performs predefined steps on it, i.e. it is deterministic in nature. So in theory, if you know the exact input and algorithm used to generate the random numbers, you can replicate them.
Also, the distribution of the random numbers matter, for example, if it is a gaussian distribution, you might find some numbers repeating themselves which are closer to the mean for given range of values. The numbers are repeating themselves because they are generated that way, you might want to try seeding or shuffling to induce more randomness, or try implementing some of the algorithms yourself.
If you really want a truly random number sequence, the input would have to be truly random (variation in the path of Brownian motion, weather patterns, radioactive decay, etc).
You can start by looking at how a Linear congruential generator works. It's pretty easy to notice why numbers would repeat after a point. | {
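To make the determinism concrete, here is a minimal linear congruential generator sketch (the constants are the well-known Numerical Recipes values, chosen here purely for illustration):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m.

    Fully deterministic: the same seed always reproduces the same
    sequence, and the sequence must cycle after at most m values.
    """
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen1, gen2 = lcg(seed=42), lcg(seed=42)
# Identical seeds give identical "random" streams:
print([next(gen1) for _ in range(3)] == [next(gen2) for _ in range(3)])  # True
```

Python's random module uses the Mersenne Twister rather than an LCG, but it is deterministic in exactly the same sense: random.seed(42) reproduces the same stream every time.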
"domain": "cs.stackexchange",
"id": 19461,
"tags": "algorithms, random-number-generator"
} |
Does tidal heating imply orbit degradation? | Question: What compensate the energy loss in tidal heating? Is it orbital decay?
Answer: There is a wonderful post at Physics Stackexchange:
Gravitational coupling between the Moon and the tidal bulge nearest the Moon acts as a torque on the Earth's rotation, draining angular momentum and rotational kinetic energy from the Earth's spin. In turn, angular momentum is added to the Moon's orbit, accelerating it, which lifts the Moon into a higher orbit with a longer period. As a result, the distance between the Earth and Moon is increasing, and the Earth's spin slowing down.
You can find more reading material and information in the original thread. | {
"domain": "astronomy.stackexchange",
"id": 1060,
"tags": "orbit, planet, tidal-forces"
} |
Why is helium-3 stable? | Question: Why is helium-3 stable? Besides hydrogen, helium-3 is the only isotope that has a neutron-to-proton ratio less than 1. Why is it not radioactive?
Answer: To what would $^3$He decay (strongly)? The options are:
$$^3{\rm He}\rightarrow D+p$$
$$^3{\rm He}\rightarrow 2p+n$$
neither of which makes sense based on mass. A weak decay to $^3$Li is out of the question (https://en.wikipedia.org/wiki/Isotopes_of_lithium#Lithium-3).
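To make the mass argument quantitative (approximate binding energies from standard tables; this numerical check is my own addition, not in the original answer): the total binding energy of $^3$He is about $7.72$ MeV, while the deuteron's is only about $2.22$ MeV, so

$$m(^3{\rm He}) ~\approx~ m_D + m_p - 5.49~{\rm MeV}/c^2 ~<~ m_D + m_p,$$

and splitting into $2p+n$ would cost the full $7.72$ MeV. Both strong decay channels are therefore energetically forbidden.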
The fact that tri-neutrons and tri-protons do not exist, not even as resonances, tells you $^3$He is isospin 1/2, as its mirror nucleus, $^3$H.
The strong force is approximately iso-scalar, meaning $^3$He and $^3$H have the same nuclear wave functions, with neutron and proton swapped. (The latter decays to the former via the weak interaction because it is energetically possible).
So, the answer to the question "Why is helium-3 stable?" comes down to binding energy. Why the binding energy is what it is comes down to the nuclear wave function. The nuclear wave functions of helium-3 and the triton are, and have been for decades, an active area of research. For example, the Triton collaboration at JLab's Hall A: https://www.jlab.org/news/releases/physicists-study-mirror-nuclei-precision-theory-test.
I have been to many seminars on the topic (but not recently). One topic that has never come up is the semi-empirical mass formula; it is simply not used at $A=3$, as it treats the nucleus (approximately) as a liquid drop with volume terms, surface terms, and so on.
The largest component of the wave function has the two protons in a spin-singlet $S$-wave, with the neutron also in an $S$-wave (hence: no electric quadrupole moment). The nucleus provides the simplest place to study three-nucleon forces. It is small enough to allow recent quark-model investigations (e.g. https://www.sciencedirect.com/science/article/pii/S2211379717308094).
Beyond that, the question cannot be answered here. The subject is far too large. | {
"domain": "physics.stackexchange",
"id": 98645,
"tags": "nuclear-physics, radioactivity, stability, elements, isotopes"
} |
Red black tree partition to $\sqrt{n}$ trees | Question: This is a question I have stumbled upon in an old Algorithms test I found online:
A) Plan an algorithm that does the following:
Input: Red-Black tree
Output: $\sqrt{n}$ separate trees, so that every tree has $\sqrt{n}$ nodes.
What is the complexity of the algorithm you planned? You must show your analysis.
B) Assume that you have started from an empty Red-Black tree, and that the input is a set of nodes and not a Red-Black tree.
Show how you can make a more efficient algorithm for partitioning the nodes into $\sqrt{n}$ Red-Black trees so that every tree has $\sqrt{n}$ nodes.
What is the complexity of the new algorithm you planned, and how does it affect existing Red-Black tree functions? You must show your analysis.
Now I have answered A and I am pretty sure that's the best answer there is, but I need your help in telling me if I can do better. This is without analysis:
Algorithm:
1. Scan the Red-Black tree using in-order traversal to build a sorted array out of it - O(n).
2. Divide the array into $\sqrt{n}$ sub-arrays and build a Red-Black tree out of every sub-array - O(n) total.
Now what I don't really understand is how to solve B.
I'm not exactly sure if the input in B is a Red Black tree or just a set of nodes, so both will be acceptable if you want to share your answer to B with me.
I have asked a student and he told me that the complexity I should get in B is $O(\sqrt{n}\log n)$.
I need help reaching that, or maybe something better (hints and stuff).
Answer: For question B) you need to make $O(\sqrt{n})$ RB trees from $n$ nodes (there is no tree to start with, just a bunch of nodes). Thus you need to process each node at least once, so the time complexity of the construction must be $\Omega(n)$. Hence I think there is no hope of getting the $O(\sqrt{n}\log n)$ bound.
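The part-A algorithm described in the question can be sketched as follows (illustrative Python of my own; plain sorted lists stand in for the in-order traversal output and the rebuilt trees):

```python
import math

def partition_sorted(values):
    """Split the in-order (sorted) node values of a tree into ~sqrt(n)
    chunks of ~sqrt(n) values each; each chunk could then seed a balanced
    BST / red-black tree.

    This sketches part A of the question: flatten in O(n), rebuild in O(n).
    """
    n = len(values)
    size = max(1, math.isqrt(n))  # chunk size ~ sqrt(n)
    return [values[i:i + size] for i in range(0, n, size)]

chunks = partition_sorted(list(range(16)))  # pretend in-order traversal output
print(len(chunks), chunks[0])  # -> 4 [0, 1, 2, 3]
```

Each chunk can be turned into a balanced (and hence valid red-black) tree in linear time, so the whole pipeline stays O(n).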
"domain": "cs.stackexchange",
"id": 2841,
"tags": "algorithms, trees"
} |
A trace formula of two noncommutative operators | Question: In many cases of quantum many-body problems, the Hamiltonian $H$ can always be divided into two parts, i.e. $H_0$ and $H'$. In this occasion, one can systemically calculate the partition function throught the formula
$$
Z ~=~ {\rm Tr}\, e^{-\beta(H_0+H')} ~=~ {\rm Tr}\left[e^{-\beta H_0}\, T e^{-\int_0^{\beta}H'(\tau)\,d\tau}\right]
$$
where
$$H'(\tau)=e^{\tau H_0}H'e^{-\tau H_0},$$
and $T$ is the time-ordering operator. How can I prove the above formula?
Answer: Hints:
OP's identity follows from standard manipulations in the interaction picture, cf. e.g. Ref. 1.
Start with the evolution operator
$$\tag{1} U(t_f,t_i)~:=~\exp\left(-\frac{i}{\hbar}H (t_f-t_i) \right), \qquad H~=~H_0+V, $$
which satisfies the Schrödinger equation
$$\tag{2} i\hbar\frac{\partial}{\partial t_f}U(t_f,t_i)~=~HU(t_f,t_i).$$
Define the evolution operator in the interaction picture
$$\tag{3} U_I(t_f,t_i)~:=~\exp\left(\frac{i}{\hbar}H_0 (t_f-t_i) \right)U(t_f,t_i) ,$$
and define the interaction Hamiltonian
$$\tag{4} H_I(t_f) ~:=~\exp\left(\frac{i}{\hbar}H_0 (t_f-t_i) \right)V\exp\left(-\frac{i}{\hbar}H_0 (t_f-t_i) \right). $$
Show that the evolution operator (3) satisfies the 1st order ODE
$$\tag{5} i\hbar\frac{\partial}{\partial t_f}U_I(t_f,t_i)~=~H_I(t_f)U_I(t_f,t_i).$$
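Filling in this differentiation step (my own addition): differentiate (3) and use (2),

$$i\hbar\frac{\partial}{\partial t_f}U_I(t_f,t_i) ~=~ -H_0\,U_I(t_f,t_i) + \exp\left(\frac{i}{\hbar}H_0 (t_f-t_i) \right)H\,U(t_f,t_i) ~=~ \left(-H_0 + H_0 + H_I(t_f)\right)U_I(t_f,t_i),$$

where the last step inserts $1=\exp\left(-\frac{i}{\hbar}H_0 (t_f-t_i)\right)\exp\left(\frac{i}{\hbar}H_0 (t_f-t_i)\right)$ before $U$ and uses $H=H_0+V$ together with definition (4).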
Deduce from (5) that evolution operator $U_I(t_f,t_i)$ in the interaction picture can be written as a time-ordered exponential
$$\tag{6} U_I(t_f,t_i)~=~T\exp\left(-\frac{i}{\hbar}\int_{t_i}^{t_f}\! dt~H_I(t) \right).$$
Finally, deduce that
$$\tag{7} {\rm Tr}\left[U(t_f,t_i)\right]
~\stackrel{(3)}{=}~{\rm Tr}\left[\exp\left(-\frac{i}{\hbar}H_0 (t_f-t_i)\right) U_I(t_f,t_i) \right], $$
and Wick-rotate in order to obtain OP's identity.
References:
M.E. Peskin & D.V. Schroeder, An Introduction to Quantum Field Theory; Section 4.2.
"domain": "physics.stackexchange",
"id": 23122,
"tags": "quantum-field-theory, hamiltonian, many-body, partition-function, time-evolution"
} |
String splitting in C | Question: I am trying to improve my C skills, and I hope someone might be able to provide me with some feedback on the following code.
It is just basic string splitting; it should behave similarly to Ruby's String#split or Clojure's clojure.string/split. I couldn't think of a simple/efficient way to create an array of variable-size strings so I went the callback route.
Anyway, any and all feedback is greatly appreciated, thank you! Check out the code:
void strsplit(char *str, char *delim, int limit, void (*cb)(char *s, int idx))
{
char *search = strdup(str);
if (limit == 1) {
cb(search, 0);
}
else {
int i = 0, count = 0, len = strlen(str), len_delim = strlen(delim);
char *segment = NULL, *leftover = NULL;
limit = limit > 0 ? limit - 1 : len;
segment = strtok(search, delim);
for (i = 0; segment != NULL && i < limit; i++) {
count += strlen(segment) + len_delim;
cb(segment, i);
segment = strtok(NULL, delim);
}
if (len != limit && count < len) {
leftover = (char*) malloc(len - count + 1);
memcpy(leftover, str + count, len - count);
leftover[len - count] = '\0';
cb(leftover, i);
free(leftover);
}
}
}
Also see the code with a test framework.
Answer: Here's a few comments. You need a comment giving a better definition of what the function should do. Otherwise we have to guess what your approximation to the Ruby function might be. An alternative interface might be to pass in a pointer array and its length instead of the callback.
I prefer to use strspn and strcspn or strsep if available to locate the tokens (just my preference).
str and delim should be const char *.
I'd prefer to see the limit == 1 case handled in the main clause. Is there a good reason to treat it separately?
Each variable should go on its own line.
I think you can modify the algorithm to avoid having to count the length of the string (strlen(str)). I don't think either len or count is necessary.
Efficiency: you traverse the string with strlen at the start, then strtok traverses each word, and then you do it again with strlen on the word.
Why limit - 1?
Not sure count will be computed correctly (len_delim is added, but the actual sequence of separators present may not be of that length).
Prefer len < limit to len != limit (more robust to future changes).
Is the condition for the final if clause correct? When do you want it to be executed?
Prefer brackets round multiple conditions: ((len != limit) && (count < len)).
Why use both strdup and malloc?
The malloc return should not be cast (C, not C++).
Why malloc the remaining string from str instead of just modifying search - it is already writeable because you strdup-ed it.
The duped string should be freed on return, else it is a leak. Depends what the callback does with the bits, of course.
Possibly missed other things - but there are enough things above to be going on with I guess :-) | {
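One more reference point for the intended semantics (my own addition): Ruby's limit counts resulting fields, while Python's str.split maxsplit counts the number of splits, so the two conventions differ by one — possibly the origin of the limit - 1 in the code under review. A quick sketch of the Ruby-style behaviour:

```python
def split_with_limit(s, delim, limit):
    """Split s on delim into at most `limit` fields (Ruby-style limit).

    Here limit <= 0 is taken to mean "no limit"; otherwise `limit`
    fields correspond to `limit - 1` splits in Python's maxsplit terms.
    """
    if limit <= 0:
        return s.split(delim)          # no limit: split everywhere
    return s.split(delim, limit - 1)   # limit fields => limit-1 splits

print(split_with_limit("a,b,c", ",", 2))  # -> ['a', 'b,c']
print(split_with_limit("a,b,c", ",", 0))  # -> ['a', 'b', 'c']
```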
"domain": "codereview.stackexchange",
"id": 1695,
"tags": "c"
} |
Name of $H|a\rangle=n|a\rangle$ | Question: I was wondering if the form $$H\vert a\rangle=n\vert a\rangle$$ has a proper name. I am familiar with each part like the Hermitian matrix, eigenvalue and eigenstate, but is there a word to classify the whole form? If you need any clarification ask, and thanks for reading.
If you understood the first part, I would also like to know if there is a name to classify the multiple eigenvalues that fall under a Hermitian matrix. So if $$H\vert a\rangle=n\vert a\rangle$$ and also $$H\vert b\rangle=c\vert b\rangle,$$ then is there a word to refer to both $\vert a\rangle$ and $\vert b\rangle$, which share the same Hermitian matrix?
Answer: The equation
$$ H |a\rangle = n |a\rangle $$
is called the eigenvalue equation or eigenequation for the operator/hermitian matrix $H$. The eigenvectors $|a\rangle$ and $|b\rangle$ are part of the set of eigenvectors of the operator/matrix. | {
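As a concrete numerical illustration of the eigenvalue equation — my own example, not part of the original answer — take a small real symmetric (hence Hermitian) matrix:

```python
def matvec(m, v):
    """Apply a matrix (given as a list of rows) to a vector."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in m]

# A Hermitian (here: real symmetric) matrix H with known spectrum.
H = [[2, 1],
     [1, 2]]

# |a> = (1, 1) and |b> = (1, -1) are eigenvectors, with eigenvalues 3 and 1:
a, b = [1, 1], [1, -1]
print(matvec(H, a))  # -> [3, 3],  i.e. H|a> = 3|a>
print(matvec(H, b))  # -> [1, -1], i.e. H|b> = 1|b>
```

Both |a⟩ and |b⟩ belong to the same operator's set of eigenvectors, with distinct eigenvalues 3 and 1.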
"domain": "physics.stackexchange",
"id": 32726,
"tags": "quantum-mechanics, terminology, linear-algebra, eigenvalue"
} |
Scale transformation of the scalar field and gauge field | Question: I am reading this paper: "Magnetic monopoles in gauge field theories", by Goddard and Olive. I don't understand some scale transformations that appear in Page 1427.
Start from the energy expression in page 1426, the energy in weak interaction theory can be written as
$$H=T_\phi+T_W+V, \tag{7.1}$$
where $T_\phi$, $T_W$ and $V$ represent the kinetic term of the scalar field $\phi$, the Yang-Mills term of the gauge field $W$, and the scalar potential, respectively. With
$$
\begin{aligned}
T_\phi[\phi, W] & =\int \mathrm{d}^D x F(\phi)\left(\mathscr{D}^i \phi\right)^{\dagger} \mathscr{D}^i \phi, \\
T_W[W] & =\frac{1}{4} \int \mathrm{d}^D x G_a^{i j} G_{a i j}, \\
V[\phi] & =\int \mathrm{d}^D x U(\phi) .
\end{aligned} \tag{7.2-7.4}
$$
It is assumed that $F$ and $U$ are positive functions of $\phi$ involving no derivatives.
Under the scale transformation
$$
\begin{gathered}
\phi(\boldsymbol{x}) \rightarrow \phi_\lambda(\boldsymbol{x})=\phi(\lambda \boldsymbol{x}), \\
W(\boldsymbol{x}) \rightarrow W_\lambda(\boldsymbol{x})=\lambda W(\lambda \boldsymbol{x}),
\end{gathered} \tag{7.5}
$$
we find that $\mathcal{D}_\mu \phi(\mathbf{x})\rightarrow \lambda \mathcal{D}_\mu \phi(\lambda\mathbf{x}), \mathbf{G}_{\mu \nu}(\mathbf{x})\rightarrow \lambda^2\mathbf{G}_{\mu \nu}(\lambda \mathbf{x})$, then
$$
\begin{aligned}
T_\phi\left[\phi_\lambda, W_\lambda\right] & =\lambda^{2-D} T_\phi[\phi, W], \\
T_W\left[W_\lambda\right] & =\lambda^{4-D} T_W[W], \\
V\left[\phi_\lambda\right] & =\lambda^{-D} V[\phi] .
\end{aligned} \tag{7.6}
$$
One conclusion of such a transformation is that, if $D=3$ and $\phi$ denotes the Higgs field, the energy has a unique minimum with respect to $\lambda$ at some finite value, since the energy diverges when $\lambda\rightarrow 0$ or $\lambda \rightarrow \infty$. For pure Yang-Mills theory, by contrast, there is no such minimum, since the energy varies monotonically with $\lambda$.
My question is: why are the fields scaled the way shown in (7.5)? Why is there a $\lambda$ factor in front of $W$ and not in front of $\phi$? Is this $\lambda$ a dimensional quantity?
Answer:
The specific scaling (7.5) originates from a dilation/dilatation of space $x\to \lambda x$ (and similarly for space-derivatives) used in the proof of a generalized Derrick's No-Go theorem, cf. e.g. this Phys.SE post.
There is no dilation/dilatation of the target space of the scalar field $\phi$; the gauge field $W$, however, should scale, since the different terms in the gauge covariant derivative should scale homogeneously.
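As a check of the $T_W$ line in (7.6) (my own filling-in of the substitution): since $\mathbf{G}_{ij}(\mathbf{x})\to\lambda^2\mathbf{G}_{ij}(\lambda\mathbf{x})$, a change of variables $\mathbf{y}=\lambda\mathbf{x}$, $d^Dx = \lambda^{-D}\,d^Dy$, gives

$$T_W[W_\lambda] ~=~ \frac{1}{4}\int d^Dx\;\lambda^4\, G_a^{ij}(\lambda \mathbf{x})\, G_{aij}(\lambda \mathbf{x}) ~=~ \frac{\lambda^{4-D}}{4}\int d^Dy\; G_a^{ij}(\mathbf{y})\, G_{aij}(\mathbf{y}) ~=~ \lambda^{4-D}\,T_W[W].$$

The other two lines of (7.6) follow the same way, with $\lambda^2$ from the two covariant derivatives in $T_\phi$ and $\lambda^0$ from the derivative-free $U(\phi)$ in $V$.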
"domain": "physics.stackexchange",
"id": 96433,
"tags": "field-theory, gauge-theory, yang-mills, scale-invariance, solitons"
} |
Is there a formal difference between $f:X \to X$ and $f\in X \to X$? | Question: We can denote by $X\to X$ the set of all functions from $X$ to $X$.
Therefore, we can use the following statement to say that $f$ is a function from $X$ to $X$:
$$f\in X\to X$$
But we usually state instead
$$f:X \to X$$
In fact, we could say that $X\to X$ is defined as $\{f| f:X\to X\}$. If I'm not mistaken, $:$ is a notation that belongs to type theory, rather than set theory as $\in$ does.
So is there a fundamental difference between these two notations? Does it make sense to say that one of the two is more "fundamental" than the other?
Answer: No, they're mostly notational variations. There are different connotations to the different notations, and different notations are common in different fields where they can mean quite different things. Also, sometimes they are used in a particular context for different (but usually related) things. You'll, of course, have to see how it has been defined in that context.
Any use of $\in$ usually strongly suggests a set-theoretic context (though you occasionally see it in a type-theoretic context). The $f:X\to Y$ notation was, I believe, popularized in mathematics by category theory. In this case, it means $f\in\mathsf{Hom}(X,Y)$. This is compatible with $f\in Y^X$ in the case that the category is the category of sets and functions. In a general categorical context, $Y^X$ is usually reserved for exponential objects for which it makes no sense to ask if $f\in Y^X$, though you (usually) can ask if there is an arrow $1 \to Y^X$.
In logic, when $X$ and $Y$ are sorts, usually $f:X\to Y$ means that $f$ is a function symbol. In this case, $X$ and $Y$ aren't sets and neither is $X \to Y$. It makes no sense to say $f\in X\to Y$ unless $\in$ isn't being thought of as set membership or we want to say $X\to Y$ means the set of function symbols from $X$ to $Y$. This latter view starts to get rather similar to the categorical view.
In type theory, usually $f:X\to Y$ means something like $f$ has type $X\to Y$ and $X\to Y$ is the type of functions from $X$ to $Y$. Types are not sets and part of the purpose of this notation is to remind one of that fact. Indeed, types are more like sorts and this notation is similar to how it is used in logic. Sometimes $Y^X$ is also used where $Y^X$ is used in a more first-class sense in a way that is compatible with the similar distinction I mentioned for category theory, so we might write $X\to Z^Y$ rather than $X\to(Y\to Z)$. Sometimes no distinction is being made and the notation is completely synonymous and the choice is made purely for typographical reasons. For computer programming, the choice is mostly typographical, f : X -> Y is easier to write and easier to read, though programming languages are also closely related to type theories.
It doesn't really make sense to talk about which of these notations is "more fundamental" than the others in general. That said, the $f:X\to Y$ notation is usually making the least commitments if multiple notations are in use. Often $f:X\to Y$ isn't a proposition with a truth value, so it wouldn't make much sense to write $\{f\mid f:X\to Y\}$. Admittedly, in the context where it isn't a proposition, you usually aren't using set theory. If you really wanted to, you could meta-theoretically write $\{t\mid \cdot\vdash t:X\to Y\}$ to mean the set of closed terms of type $X\to Y$, say. This would have nothing to do with a set of functions though. | {
"domain": "cs.stackexchange",
"id": 11327,
"tags": "type-theory, sets, notation"
} |
How does the strength of dark energy compare to the strength of the other forces? | Question: I have read this question:
http://hyperphysics.phy-astr.gsu.edu/hbase/Forces/funfor.html
So, in a nutshell, it is the fitting of data with a specific standard model that organizes the particle interactions in line with four forces.
How do physicists compare the relative strengths of the four forces?
We do have data, because we do know how fast the galaxies are flying apart, and in mainstream physics this is accounted for by the existence of dark energy.
Is there a way to put dark energy into this table and somehow match its strength compared to the other forces?
Answer:
Árpád Szendrei asked: Is there a way to put dark energy into this table and somehow match its strength compared to the other forces?
In general relativity dark energy acts gravitationally (as you can see from the $\rm G$ in the equation below), so you have to put it in that category.
Its effect is due to its density, since the square of the Hubble parameter
$$ \rm H^2 = \frac{8 \ \pi \ G \ \rho}{3} $$
is proportional to the product of the gravitational constant $\rm G$ times the density $\rm \rho=\rho_r+\rho_m+\rho_{\Lambda}$ for radiation, matter and dark energy.
Since the dark energy density $\rho_{\Lambda}$ is constant, the Hubble parameter is also constant when dark energy dominates, while it shrinks when radiation or matter, whose densities dilute as space expands, dominate.
For better understanding: if you could magically multiply the matter so that its density stayed constant while space expands, that would have the same gravitational effect on the Hubble parameter as dark energy's constant density.
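A small numerical sketch of this point (my own, in arbitrary units — the density values are illustrative, not measured ones): as the scale factor grows, the matter and radiation terms dilute away and $H$ levels off at the constant dark-energy value.

```python
import math

G = 1.0  # gravitational constant in arbitrary illustrative units (not SI)

def hubble(a, rho_r0=1e-4, rho_m0=0.3, rho_lambda=0.7):
    """Hubble parameter from the Friedmann equation H^2 = 8*pi*G*rho/3.

    Radiation dilutes as a^-4, matter as a^-3, dark energy stays constant.
    """
    rho = rho_r0 / a**4 + rho_m0 / a**3 + rho_lambda
    return math.sqrt(8 * math.pi * G * rho / 3)

# As the scale factor a grows, H approaches the constant dark-energy value:
h_inf = math.sqrt(8 * math.pi * G * 0.7 / 3)
print(hubble(1.0), hubble(100.0), h_inf)
```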
"domain": "physics.stackexchange",
"id": 88510,
"tags": "forces, cosmology, interactions, dark-energy, cosmological-constant"
} |
Email a notification when detecting changes on a website | Question: The text of a website is checked in a given time period. If there are any changes a mail is sent. There is a option to show/mail the new parts in the website. What could be improved?
#!/usr/bin/env python3
import urllib.request, hashlib, time, html2text, smtplib, datetime, argparse
class urlchange:
def __init__(self, url):
self.url = url
self.urlhash = self.createhash()
self.content = self.getcontent()
date = datetime.datetime.now().strftime( "%d.%m.%Y %H:%M:%S" )
print(date+": Start Monitoring... hash: "+self.urlhash)
def getcontent(self):
#Try to get data
try:
urldata = urllib.request.urlopen(self.url).read().decode("utf-8","ignore")
urldata = html2text.html2text(urldata)
except:
print("Can't open url: ", self.url)
return urldata
def createhash(self):
#create hash
urldata = self.getcontent().encode("utf-8")
md5hash = hashlib.md5()
md5hash.update(urldata)
return md5hash.hexdigest()
def comparehash(self):
date = datetime.datetime.now().strftime( "%d.%m.%Y %H:%M:%S" )
if(self.createhash() == self.urlhash):
print(date+": Nothing has changed")
return False
else:
print(date+": Something has changed")
if(not args.nodiff):
print(self.diff())
if(not args.nomail):
try:
sendmail("Url has changed!","The Url "+self.url+" has changed at "+date+" .\n\nNew content:\n"+self.diff())
except:
sendmail("Url has changed!","The Url "+self.url+" has changed at "+date+" .")
elif(not args.nomail):
sendmail("Url has changed!","The Url "+self.url+" has changed at "+date+" .")
return True
def diff(self):
#what has chaged
start, end = 0, 0
newcontent = self.getcontent()
#start of changes
for i,j in enumerate(self.content):
if(i<len(newcontent) and j != newcontent[i]):
start=i
break
#end of changes
for i,j in enumerate(reversed(self.content)):
if( (len(newcontent)-(i+1))>0 and j != newcontent[len(newcontent)-(i+1)]):
end=len(newcontent)-i
break
return newcontent[start:end]
def sendmail(subject,message):
try:
server = smtplib.SMTP("smtp.server.com",587)
server.set_debuglevel(0)
server.ehlo()
server.starttls()
server.login("email@server.de","password")
except:
print("Can't connect to the SMTP server!")
date = datetime.datetime.now().strftime( "%d.%m.%Y %H:%M:%S" )
msg = "From: email@server.de\nSubject: %s\nDate: %s\n\n%s""" % (subject, date, message)
server.sendmail("email@server.de","email2@server.de",msg)
server.quit()
print(date+": email was sent")
parser = argparse.ArgumentParser(description="Monitor if a website has changed.")
parser.add_argument("url",help="url that should be monitored")
parser.add_argument("-t","--time",help="seconds between checks (default: 600)",default=600,type=int)
parser.add_argument("-nd","--nodiff",help="show no difference",action="store_true")
parser.add_argument("-n","--nomail",help="no email is sent",action="store_true")
args = parser.parse_args()
url1 = urlchange(args.url)
time.sleep(args.time)
while(True):
if(url1.comparehash()):
break
time.sleep(args.time)
Improved code at: Email a notification when detecting changes on a website - follow-up
Answer: You do not need to find the differences manually, you can use difflib.SequenceMatcher:
I think this is what you need:
>>> a, b = "foobxr", "foobar"
>>> diffs = difflib.SequenceMatcher(None, a, b).get_matching_blocks()
>>> diffs
[Match(a=0, b=0, size=4), Match(a=5, b=5, size=1), Match(a=6, b=6, size=0)]
>>> max((a, b), key=len)[diffs[0].size : diffs[1].a]
'x'
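If the page can change in more than one place, a slightly more general sketch (my own addition, not part of the original suggestion) collects every changed region via get_opcodes():

```python
import difflib

def changed_regions(old, new):
    """Collect every changed span between two strings using get_opcodes().

    Returns a list of (old_fragment, new_fragment) pairs, one per
    non-matching region.
    """
    matcher = difflib.SequenceMatcher(None, old, new)
    regions = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # 'replace', 'delete' or 'insert'
            regions.append((old[i1:i2], new[j1:j2]))
    return regions

print(changed_regions("foobxr", "foobar"))  # -> [('x', 'a')]
```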
Specific except
Specify exactly which exceptions you want to catch. A bare except catches everything - even typos!
Use a logger
You print a lot of info, a logger is more flexible and can be very easily redirected to a file.
.format
You can make your messages more readable:
For example:
print(date+": Start Monitoring... hash: "+self.urlhash)
Becomes:
print("{date}: Start Monitoring... Hash: {self.urlhash}".format(**locals()))
Or the more standard:
print("{}: Start Monitoring... Hash: {}".format(date, self.urlhash))
Thanks to @Jatimir for noticing that in Python 3.6+ f-strings are a nice way to interpolate variables into strings with a clean syntax. For example:
print(f'{date}: Start Monitoring... Hash: {self.urlhash}') | {
"domain": "codereview.stackexchange",
"id": 18517,
"tags": "python, python-3.x, email, web-scraping"
} |
Implementation of a ticket queue system in JavaScript | Question: I got this question during an interview. The question itself was very open-ended - it asked me to implement a ticket queue system, where by default it would have 3 different queues to hold tickets that are of severity 1, severity 2, and severity 3 respectively. It should have functionalities like:
add a ticket to the corresponding queue
get the ticket from the highest severity (priority) queue
resolve ticket i.e. remove the ticket from the queue
loop over the tickets starting at the highest severity to the lowest severity
check if a ticket is in the queue.
Finally, the design should be easily extensible to adapt future changes e.g. adding a new queue.
It doesn't have any predefined data structure for either the queue or the ticket so we have to come up with our own design. Here is how I implemented:
First for the tickets I have a class Ticket:
class Ticket {
constructor({ name, desc = '', severity }) {
this.id = Math.floor(Math.random() * 1000)
this.timestamp = Date.now()
this.name = name
this.desc = desc
this.severity = severity
}
}
And for the ticket queues I have this:
class TicketQueues {
constructor(numOfQueues = 3) {
this.queues = Array.from({ length: numOfQueues }, () => [])
this.hashSet = new Set()
}
addTicket(ticket) {
if(this.hasTicket(ticket)) return false
const severityIndex = ticket.severity - 1
this.queues[severityIndex].push(ticket)
this.hashSet.add(ticket)
return true
}
*getTicketBySeverity() {
for (const queue of this.queues) {
for (const ticket of queue) {
yield ticket
}
}
}
getTicketAtHighestSeverity() {
for (const queue of this.queues) {
for (const ticket of queue) {
return ticket
}
}
}
resolveTicket(ticket) {
if(!this.hasTicket(ticket)) return false
const [severity, index] = this._findTicketIndex(ticket)
this.queues[severity].splice(index, 1)
this.hashSet.delete(ticket)
return true
}
_findTicketIndex(ticket) {
for (let i = 0; i < this.queues.length; i++) {
for (let j = 0; j < this.queues[i].length; j++) {
if (this.queues[i][j] === ticket) {
return [i, j]
}
}
}
return [-1, -1]
}
hasTicket(ticket) {
return this.hashSet.has(ticket)
}
}
The idea is that I have a two-dimensional array to represent the ticket queue system. The first array inside the two-dimensional array has the highest severity.
i.e. [ [ticket1] , [ticket2], [ticket3]] means we have one ticket each for severity 1, 2, and 3.
And I also have a hash set to hold a reference to every ticket so I can achieve constant-time lookup. For getTicketBySeverity I implemented a generator function, and the idea here is to take advantage of the lazy evaluation of generators, since the data set might be huge and the user might not want to iterate through the whole list of tickets.
Please feel free to give me any feedback that you think might be helpful. There are a few design decisions whose pros and cons I couldn't really think through, and I would like you to give me some suggestions on the following design choices:
As I mentioned I used two-dimensional array to hold the tickets, and it worked out fine. However one alternative I can think of is to use a hash map. so instead of having [ [ticket1] , [ticket2], [ticket3]] we have
{
sev1: [ticket1],
sev2: [ticket2],
sev3: [ticket3],
...
}
I couldn't really pinpoint exactly what the pros and cons are of using either data structure. One argument I can think of against using a hash map is that there might be an issue when looping through all of the tickets in order of severity, since a hash map normally has no notion of order for its keys. Another way I can think of is to have a separate variable for each queue, so:
class TicketQueues {
constructor(numOfQueues = 3) {
this.sev1Queue = []
this.sev2Queue = []
this.sev3Queue = []
}
However, it seems cumbersome to me, and I am still not entirely clear on what exactly is bad about this design.
I am using reference equality to find the ticket instead of using an id. I'm not sure which approach is better in a real-world scenario. I guess using an id is better, since it is not always possible to keep the original reference to the object?
To bucket the ticket into the right queue, the current implementation relies on the user to specify the correct severity, i.e. 1, 2, 3. If they use some weird value to represent the severity then it would break, e.g. A B C. I guess this can be partially addressed by using TypeScript with either an Enum or a Union type so the user will know the severity's type. However, another approach I thought of is to expose a specific API for each valid severity queue for the user to use. For example,
class TicketQueues {
constructor(numOfQueues = 3) {
this.queues = Array.from({length: numOfQueues}, () => [])
}
addSev1(ticket) {
this.queues[0].push(ticket)
}
addSev2(ticket) {
this.queues[1].push(ticket)
}
addSev3(ticket) {
this.queues[2].push(ticket)
}
but again I cannot really pinpoint exactly which approach is better. The second approach feels hard to extend if we ever need to add another queue, since we would have to add another method for the new queue.
The current implementation is inherently susceptible to starvation, e.g. as long as the sev1 queue is not empty, we will always start with tickets in the sev1 queue when looping over the system, even when there might be a sev2 ticket that has been there for a long, long time.
Lastly, I wonder if we can use priority queues instead of plain old queues here?
Answer: Responses to your questions
I think for this specific case, a two-dimensional array is a great data structure to solve the problem of multiple ticket queues. Like you mentioned, giving concrete names to each ticket queue would just add complexity to the code, because you would then have to sort the named queues by priority whenever you want to use them - effectively turning them into the 2d array you currently have.
When checking for the existence of a ticket, you are correct in assuming that it would be better to check the id instead of using a reference to the original object. I think your choice of a set was the wrong data structure to hold all of your tickets in. If you instead did a map (either a Map instance or a plain object), then it would be much easier to look up tickets by id.
I would just not worry about data validation here for the following reasons:
Like you mentioned, it's really the job of typescript to do that kind of validation.
There's a number of bad parameters a user could pass into this class to put it in a bad state. Too much validation bloats code, making it difficult to read, and can do more harm than good.
Validation is more important when the data you're receiving is foreign to the project you're working in. i.e. validate when coding publicly-exposed library methods, in rest endpoint handlers, reading config files, etc. Don't bother when creating the internals of a project.
The fact that this implementation is susceptible to starvation is due to the design requirements, isn't it? There's nothing you can do about it in your implementation.
Using a single priority queue instead of a separate queue for each severity would also work. Adding new entries will be a little slower as it would have to sort them into the right spot. Removing entries from the middle is likely quicker because priority queues are often implemented using a heap. The main issue is that there is no native priority queue in javascript, making this solution more difficult.
As an aside: The computer science definition of the word "queue" doesn't allow removing entries from the middle of the queue. That kind of flexibility is supposed to belong to lists or arrays.
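For comparison, here is what a single priority queue keyed on (severity, arrival order) could look like — sketched in Python's heapq since, as noted, JavaScript has no native priority queue; the class and ticket strings are illustrative assumptions of mine, not part of the original code:

```python
import heapq
import itertools

class TicketPriorityQueue:
    """Sketch of one priority queue ordered by (severity, arrival order)."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker preserves FIFO order

    def add(self, ticket, severity):
        heapq.heappush(self._heap, (severity, next(self._counter), ticket))

    def pop_highest(self):
        severity, _, ticket = heapq.heappop(self._heap)
        return ticket

q = TicketPriorityQueue()
q.add("printer on fire", severity=1)
q.add("slow wifi", severity=3)
q.add("disk almost full", severity=2)
print(q.pop_highest())  # -> 'printer on fire'
```

The counter tie-breaker also mitigates the starvation concern within a severity level: tickets of equal severity come out in arrival order.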
Code quality notes
You did a really good job with your function naming. Overall, the code was easy enough to understand.
If you really expect a large amount of tickets to be in this system, then you ought to be able to generate more than 1000 unique ids. The current implementation will have a lot of id clashes.
Instead of looping over an array and yielding each entry, you can yield* entireArray.
Simplified example
The problem description didn't state how many tickets were expected to be in the queue at a given time, or how often a user would read from it, or add a ticket to it, etc. There's a number of different data structures that can be used to optimize the code - the right one would depend on which aspects need to be fast, and which ones are OK being a little slower.
The following solution is meant to be a dead simple solution that's easy to understand and modify, but will start to run slow if you put too many tickets into the system. Code like this is ideal when you don't expect many tickets or much usage.
const generateId = () => Math.floor(Math.random() * 1e12)
export const createTicket = ({ name, description = '', severity }) => Object.freeze({
id: generateId(),
timestamp: Date.now(),
name,
description,
severity,
})
export function createTicketCollection() {
const tickets = {}
const getBySeverity = () => Object.values(tickets).sort(
(a, b) => (a.severity - b.severity) || (a.timestamp - b.timestamp)
)
return Object.freeze({
add(ticket) { tickets[ticket.id] = ticket },
getBySeverity,
getTicketAtHighestSeverity: () => getBySeverity()[0],
resolve(ticket) { delete tickets[ticket.id] },
has: id => !!tickets[id],
})
}
(classes can of course be used instead of factory functions - I just tend to prefer factory functions)
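Usage of the collection might look like the following (restating the two factories above without the `export` keywords so the snippet runs standalone; lower severity numbers are treated as more urgent, which matches the ascending sort):

```javascript
const generateId = () => Math.floor(Math.random() * 1e12)

const createTicket = ({ name, description = '', severity }) => Object.freeze({
  id: generateId(),
  timestamp: Date.now(),
  name,
  description,
  severity,
})

function createTicketCollection() {
  const tickets = {}
  const getBySeverity = () => Object.values(tickets).sort(
    (a, b) => (a.severity - b.severity) || (a.timestamp - b.timestamp)
  )
  return Object.freeze({
    add(ticket) { tickets[ticket.id] = ticket },
    getBySeverity,
    getTicketAtHighestSeverity: () => getBySeverity()[0],
    resolve(ticket) { delete tickets[ticket.id] },
    has: id => !!tickets[id],
  })
}

// Demo: add two tickets, peek at the most urgent one, then resolve it
const collection = createTicketCollection()
const outage = createTicket({ name: 'Site down', severity: 1 })
const typo = createTicket({ name: 'Typo on about page', severity: 5 })
collection.add(outage)
collection.add(typo)

console.log(collection.getTicketAtHighestSeverity().name) // prints Site down
collection.resolve(outage)
console.log(collection.has(outage.id)) // prints false
```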
Optimized example
On the other end of the spectrum, if you want all of the operations performed against the system to run fast, then a combination of a map (for quick lookup by id) and a doubly-linked list (for quick iteration and removal of entries) will allow all functions to run in constant big-O (even faster than a priority queue). Optimizations like this will naturally cause the code to be less readable and less flexible, which is why it's never good to prematurely optimize. I do not recommend actually using a design like this unless it is really needed, but I'll post it here as a proof-of-concept. (I've done some light testing with it, but there could still be some bugs.)
const generateId = () => Math.floor(Math.random() * 1e12)
function createLinkedList() {
let front, back
return Object.freeze({
get front() { return front },
get back() { return back },
pushBack(value) {
const node = { last: back, next: undefined, value }
if (back) back.next = node
back = node
if (!front) front = node
return node
},
removeNode(node) {
if (node.last) node.last.next = node.next
if (node.next) node.next.last = node.last
if (front === node) front = node.next
if (back === node) back = node.last
},
*values() {
let node = front
while (node) {
yield node.value
node = node.next
}
},
})
}
const createTicket = ({ name, description = '', severity }) => Object.freeze({
id: generateId(),
timestamp: Date.now(),
name,
description,
severity,
})
export function createTicketCollection() {
const listNodeLookup = {}
const severityGroups = []
function* getBySeverity() {
for (const linkedList of severityGroups.filter(x => x != null)) {
yield* linkedList.values()
}
}
return Object.freeze({
add(options) {
const ticket = createTicket(options)
severityGroups[ticket.severity] ??= createLinkedList()
const listNode = severityGroups[ticket.severity].pushBack(ticket)
listNodeLookup[ticket.id] = listNode
return ticket
},
getBySeverity,
getTicketAtHighestSeverity: () => getBySeverity().next().value,
resolve(ticket) {
const listNode = listNodeLookup[ticket.id]
delete listNodeLookup[ticket.id]
severityGroups[ticket.severity].removeNode(listNode)
},
has: id => !!listNodeLookup[id],
})
}
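To see the O(1) middle-removal in isolation, here is `createLinkedList` from above restated as a standalone snippet with a small demo:

```javascript
function createLinkedList() {
  let front, back
  return Object.freeze({
    get front() { return front },
    get back() { return back },
    pushBack(value) {
      const node = { last: back, next: undefined, value }
      if (back) back.next = node
      back = node
      if (!front) front = node
      return node
    },
    removeNode(node) {
      if (node.last) node.last.next = node.next
      if (node.next) node.next.last = node.last
      if (front === node) front = node.next
      if (back === node) back = node.last
    },
    *values() {
      let node = front
      while (node) {
        yield node.value
        node = node.next
      }
    },
  })
}

const list = createLinkedList()
const a = list.pushBack('a')
const b = list.pushBack('b')
list.pushBack('c')

// Removing from the middle only touches the two neighbours - no scanning
list.removeNode(b)
console.log([...list.values()]) // prints [ 'a', 'c' ]
```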
Update
In response to @Joji's comment about why I prefer factory functions over classes:
This question actually made me do some deep thinking. In the end, I think it's simply because factory functions tend to work better when programming in a functional style, while classes tend to fit better when programming in an object-oriented style. I usually program in a more functional style, but I'm not hard-core about it.
Let me expound:
I consider this solution "object-oriented" because I'm encapsulating private data and only allowing the private data to get touched via the methods I provide. This can be achieved with both factory functions or classes. I used factory functions there as it felt more concise, but if you continued to add more and more methods to it, it might start to feel clunky and hard to work with (i.e. private functions have to be declared at the top, not next to the public method that needs it). Maybe a normal class would have been a better tool to use in this scenario (I do sometimes use the class syntax too, just not as often).
Now I want to show the same answer but written in a way that's a little more functional.
// ticket.js
const generateId = () => Math.floor(Math.random() * 1e12)
export const create = ({ name, description = '', severity }) => Object.freeze({
id: generateId(),
timestamp: Date.now(),
name,
description,
severity,
})
// ticketCollection.js
export const create = ({ ticketsById = {} } = {}) => Object.freeze({ ticketsById })
export const getBySeverity = ticketCollection => Object.values(ticketCollection.ticketsById).sort(
(a, b) => (a.severity - b.severity) || (a.timestamp - b.timestamp)
)
export const getTicketAtHighestSeverity = ticketCollection => getBySeverity(ticketCollection)[0]
export const addTicket = (ticketCollection, ticket) => create({
ticketsById: { ...ticketCollection.ticketsById, [ticket.id]: ticket },
})
export const resolve = (ticketCollection, ticket) => {
const ticketsById = { ...ticketCollection.ticketsById }
delete ticketsById[ticket.id]
return create({ ticketsById })
}
export const has = (ticketCollection, id) => !!ticketCollection.ticketsById[id]
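As a short demo of how this functional style composes (restating the functions above without the two-module split so the snippet runs standalone):

```javascript
const generateId = () => Math.floor(Math.random() * 1e12)

const createTicket = ({ name, description = '', severity }) => Object.freeze({
  id: generateId(), timestamp: Date.now(), name, description, severity,
})

const create = ({ ticketsById = {} } = {}) => Object.freeze({ ticketsById })
const getBySeverity = collection => Object.values(collection.ticketsById).sort(
  (a, b) => (a.severity - b.severity) || (a.timestamp - b.timestamp)
)
const addTicket = (collection, ticket) => create({
  ticketsById: { ...collection.ticketsById, [ticket.id]: ticket },
})
const has = (collection, id) => !!collection.ticketsById[id]

// Every operation returns a fresh frozen collection; earlier values are untouched
const empty = create()
const crash = createTicket({ name: 'Crash on load', severity: 1 })
const withOne = addTicket(empty, crash)

console.log(has(empty, crash.id))   // prints false (empty was never mutated)
console.log(has(withOne, crash.id)) // prints true
```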
Note that we've sacrificed our encapsulation, all data is now publicly accessible (a no-no in OOP). But we've gained other benefits in the process:
We're not mutating data anymore (leading to fewer unexpected side-effects).
We now have the power to organize the methods into separate modules if desired (they don't all have to be right there)
My favorite one: We don't have to tie specific methods to specific classes. As a concrete example: I've been porting some user management code to node. The old implementation was written in a very object-oriented way. There was a User class, and a Group class. Users were members of groups, and groups had users in them. So, in which class do you think the implementors put the addUserToGroup() function? If it's put in the User class, how would it update the group's member list? If it's put in the Group class, how will it update the user's group list? Their solution was pretty gross: They put it in both. The addUserToGroup() method on both the user and group class would update their own internal data, then they would call each other's methods using a special optional parameter to keep them from calling each other yet again. There are arguably better ways to do that while still following OOP style, but none of them are very elegant. The port I wrote didn't have a User or Group class; there were just functions to get or set user/group data. Because these encapsulation boundaries didn't exist, there only needed to be one addUserToGroup() function.
If we're programming in this style, it should hopefully be clearer why the factory functions work a little better. The actual instances of the factory are just pure data; usually there are few to no functions, as those are all found outside of the factory. In this case, it's much easier to just have a function that returns an object literal than to write a class that takes parameters in a constructor and assigns them all to "this". | {
"domain": "codereview.stackexchange",
"id": 40517,
"tags": "javascript, object-oriented, interview-questions"
} |
What is the size of the largest asteroid orbiting the Earth, other than the Moon? | Question: It seems odd to me that no other asteroid around the Earth is even close to the size of the Moon. Anyone have any idea about the size and orbit radius of Earth's "second moon" ?
Answer: First of all, any body revolving around a planet is known as its satellite, not as an "asteroid". And at this time Earth only has one natural satellite, which is the Moon (though there are a few theories suggesting that Earth once had two moons that merged into each other due to the gravitational pull between them, forming a single moon).
Asteroids are the masses of rock, gases, minerals etc. which are generally revolving around the star (in our case it's the Sun). Even in our solar system we have an asteroid belt between Mars and Jupiter in which thousands of asteroids are orbiting around our star. But asteroids don't have much mass and are small in size so they are not characterized as planets, though sometimes larger asteroids are known as planetoids.
By the way the largest asteroid yet found is named Ceres. It is about 1/4 of the size of our Moon and it is also orbiting around the Sun like other asteroids. | {
"domain": "astronomy.stackexchange",
"id": 3965,
"tags": "the-moon, earth"
} |
What does "linear unit" mean in the names of activation functions? | Question: Activation functions, in neural networks, are used to introduce non-linearity. Many activation functions that are used in neural networks have the term "Linear Unit" in their full form. "Linear unit" can be abbreviated as LU.
For example, consider some activation functions
ELU - Exponential Linear Unit
ReLU - Rectified Linear Unit
................................................
Why does the function name contain the term "Linear Unit"? What is meant by Linear Unit here? Is it saying anything about the nature of the function under consideration?
Answer: Have a look at these graphics showing popular linear units (image taken from Clevert et al. 2016):
You can see that these functions are linear functions for $x > 0$, that's why they are called Linear Units.
For example, the ELU is defined as
$$ ELU(x) = \begin{cases}
x &\text{if } x > 0\\
\alpha (\exp(x)-1) & \text{if } x \leq 0.
\end{cases} $$
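A tiny numeric sketch of these piecewise definitions (JavaScript, with $\alpha = 1$ for the ELU; the ReLU is included for comparison):

```javascript
const relu = x => Math.max(0, x)
const elu = (x, alpha = 1) => (x > 0 ? x : alpha * (Math.exp(x) - 1))

// Both are the identity for x > 0 - the "linear" part of "Linear Unit"...
console.log(relu(2), elu(2)) // prints 2 2

// ...but each handles x <= 0 in its own way, which is where the nonlinearity lives
console.log(relu(-2))           // prints 0
console.log(elu(-2).toFixed(3)) // prints -0.865
```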
These functions introduce the nonlinearity around zero, each in its own way, which can be used for different problems. | {
"domain": "ai.stackexchange",
"id": 2942,
"tags": "neural-networks, deep-learning, terminology, activation-functions"
} |
List of classes to datatable | Question: I'm looking for a function to take a list of generic classes, and map it to a different layout of datatable.
That means, I can't just directly map my class property names and values to grid, I need to convert it to the new values.
The solution that I came up with works, but it feels ... sloppy. Can anyone recommend a better way of doing it?
class Program
{
static void Main(string[] args)
{
var list = new List<InputClass>()
{
new InputClass {SomeField = "A", OtherField = 1},
new InputClass {SomeField = "B", OtherField = 2},
new InputClass {SomeField = "C", OtherField = 3},
new InputClass {SomeField = "D", OtherField = 4}
};
var dict = new Dictionary<string, Func<InputClass, object>>()
{
{"ColA", (f) => $"{f.SomeField}ABCD"},
{"ColB", (f) => DateTime.Now}
};
var dt = ToDataTable(list, dict);
Console.ReadKey();
}
public static DataTable ToDataTable<T>(List<T> list, Dictionary<string, Func<T, object>> columns) where T : class
{
var dt = new DataTable();
foreach (var key in columns.Keys)
{
var func = columns[key];
Type[] funcParams = (func.GetType()).GenericTypeArguments;
dt.Columns.Add(key, funcParams[1]);
}
foreach (var record in list)
{
var objects = new List<object>();
foreach (var key in columns.Keys)
{
objects.Add(columns[key](record));
}
dt.Rows.Add(objects.ToArray());
}
return dt;
}
}
class InputClass
{
public string SomeField { get; set; }
public int OtherField { get; set; }
}
Answer: In cases like this, I would hide the "sloppy" part.
I would create a class that holds the mappings from the class to the table and create methods to build the mappings. This way there is no need for reflection and we can save the funcs to be reused.
public class Mapper<TSource>
{
private readonly DataTable _source = new DataTable();
private readonly List<Func<TSource, object>> _mappings = new List<Func<TSource, object>>();
public Mapper<TSource> Configure<TProperty, TData>(Func<TSource, TProperty> property, string colName,
Func<TProperty, TData> mapper)
{
_source.Columns.Add(colName, typeof (TData));
Func<TSource, object> map = s => mapper(property(s));
_mappings.Add(map);
return this;
}
public Mapper<TSource> Configure<TProperty>(Func<TSource, TProperty> property, string colName)
{
_source.Columns.Add(colName, typeof(TProperty));
Func<TSource, object> map = s => property(s);
_mappings.Add(map);
return this;
}
public Mapper<TSource> Configure<TData>(string colName, Func<TData> mapper)
{
_source.Columns.Add(colName, typeof(TData));
Func<TSource, object> map = _ => mapper();
_mappings.Add(map);
return this;
}
}
With this class we now have a datatable in _source that holds the columns that we want and a list of mappings from our source to the datatable. Now we just need to create a ToDataTable method to use these in the same class.
public DataTable ToDataTable(IEnumerable<TSource> items)
{
// make a new datatable with the same columns as the source
var dt = _source.Clone();
foreach (var item in items)
{
dt.Rows.Add(_mappings.Select(f => f(item)).ToArray());
}
return dt;
}
We would use it like so. Typically you would only do the configurations once and then just reuse it with different lists.
var inputClassMapper = new Mapper<InputClass>();
inputClassMapper.Configure(f => f.SomeField, "ColA", s => $"{s}ABCD")
.Configure("ColB", () => DateTime.Now);
// have mapper convert input to datatables
var dt = inputClassMapper.ToDataTable(list); | {
"domain": "codereview.stackexchange",
"id": 31425,
"tags": "c#, generics, .net-datatable"
} |
How does a synchrotron work? | Question: I know that a linear accelerator (linac) works by having terminals that get progressively longer and change polarity due to AC current. And I also know that a cyclotron works by having two semi-circles that are connected to an AC source, with a perpendicular magnetic field to keep the particle moving in circles. But in a synchrotron, I understand that variable magnets are used to control the radius of the accelerating particle; however, how is the particle accelerated?
Answer: To make things simple,
A synchrotron consists of:
radio frequency accelerating cavity : in this cavity, the charged particle is accelerated using an electric field along the direction of motion of the charged particle
Bending magnet: magnets are used to bend the charged particles
Accelerated in the cavity... then bent by the magnet... then accelerated again, and this goes on.
As a result, the charged particle moves in a circular path and its velocity is increased by the electric field. Note that the strength of the magnetic field has to be increased in order to apply the right amount of force on the charged particle, so that the radius of the path it follows remains the same and a bigger synchrotron is not needed.
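The "right amount of force" can be made quantitative (a standard relation, not stated in the answer above): the magnetic force supplies the centripetal force, so for a particle of charge $q$ and momentum $p$ kept on a ring of radius $r$,

```latex
qvB = \frac{\gamma m v^{2}}{r}
\quad\Longrightarrow\quad
B = \frac{p}{qr}, \qquad p = \gamma m v
```

so as the cavity raises $p$ on each turn, $B$ must be ramped up in step — synchronized with the particle's momentum — to hold $r$ fixed, which is where the machine gets its name.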
This diagram should be useful: | {
"domain": "physics.stackexchange",
"id": 10231,
"tags": "accelerator-physics, particle-detectors"
} |
Understanding the concept of a "Place Field" and the difference between place cells and grid cells | Question: I have 3 questions that are interrelated:
After reading the proper literature on the subject, my understanding of the place field is that it's a place in space to which an animal's place cell reacts by firing. Then there might also be another place field to which the same place cell reacts by firing. Is this understanding correct?
1.1 Why is it called place "field," what does the "field" actually imply, etymologically speaking?
1.2. The Wikipedia Grid Cell Page says that:
By contrast, if a place cell from the rat hippocampus is examined in the same way (i.e., by placing a dot at the location of the rat's head whenever the cell emits an action potential), then the dots build up to form small clusters, but frequently there is only one cluster (one "place field") in a given environment, and even when multiple clusters are seen, there is no perceptible regularity in their arrangement.
So then what are those "clusters", i.e. I guess the place fields, are they the ones I highlighted in red on this picture (sorry, I am a very visual learner!) and also, why do they fire in clusters?
The Wikipedia Page has the following description of this picture but I am still not sure I am understanding it correctly:
Spatial firing patterns of 8 place cells recorded from the CA1 layer of a rat. The rat ran back and forth along an elevated track, stopping at each end to eat a small food reward. Dots indicate positions where action potentials were recorded, with color indicating which neuron emitted that action potential.
The first question stems from the second and the main one I am confused about: what is actually the difference between the place cells and the grid cells, in a layman terms (beyond the obvious structural differences, i.e. that the place cells are pyramidal neurons and the grid cells fire in a hexagonal pattern)?
In terms of their functional differences, is the difference mainly in the fact that the grid cells are theoretically responsible for path integration, whereas "The place cells are thought, collectively, to act as a cognitive representation of a specific location in space, known as a cognitive map" ( O’Keefe John, 1978 ).
What is the physiological or anatomical difference between.... Available from: https://www.researchgate.net/post/What_is_the_physiological_or_anatomical_difference_between_place_cells_and_grid_cells_in_the_hippocampus [accessed Aug 16, 2017].
O'Keefe, John (1978). The Hippocampus as a Cognitive Map. ISBN 978-0198572060.
Answer:
Yes, that's correct. Mostly, place cells fire in different places in different contexts, and they don't necessarily share the same spatial relationship in different contexts (like in your picture, the "yellow" cell comes right before a "cyan" cell; those place fields wouldn't necessarily be adjacent in a different context).
1.1. "Field" is just a term that refers to a region of space. I'd say the Merriam-Webster definition closest is #6:
a region or space in which a given effect (such as magnetism) exists
1.2 The picture you have is probably a schematic, but what it depicts is a recording from several different cells. Each cell is assigned a color. Every time the cell fires a spike, there is a dot placed at the location in space where the animal was at the time of the spike. You let the animal run through the maze and get a record of lots of these points, and you see that they cluster together into place fields.
2&3 - These questions are getting to be a bit too broad and also you are only supposed to ask one question per submission, so I'll be very brief here. Grid cells are located in an entirely different part of the brain, the entorhinal cortex (part of neocortex), although this part does project to the hippocampus where place cells are found.
Grid cells fire in regular patterns in a spatial field, like in this image:
https://en.wikipedia.org/wiki/Grid_cell#/media/File:Autocorrelation_image.jpg
They also keep their relationship of firing locations relative to each other constant across contexts. You can think of them like a nested coordinate system.
Place cells do not do this. For a given environment, they seem to be somewhat randomly arranged relative to each other. Most likely, what actually happens with place cells is that if you could record from all of them, there would be a unique combination of cells active at any location, and this location could then be associated with other information like "I found food here."
Perhaps you could think about grid cells like a GPS location, but place cells are like a map marker. | {
"domain": "biology.stackexchange",
"id": 7641,
"tags": "neuroscience, terminology, etymology"
} |
Why, in low energy situations like atomic physics, are massive particles found to be in integer number states? | Question: In quantum field theory electrons are conceptualized as quantized excitations of the quantum electron field. Generically the electron field can be in a superposition of number states. This is related to the fact that under QFT Hamiltonian/Lagrangian energy can be exchanged between electrons and other quantum fields reducing the number of electrons while increasing the number of quantum in other fields.
However, in low energy situations like atomic physics, it is overwhelmingly likely to find the electron field to have a fixed integer number of electron quanta. For example, the hydrogen atom always has 1 (and not 1.2 or 1.5) electron quantum, and likewise the carbon atom has 6 (and not 6.3 or 5.7) electron quanta.
Note that this is in contrast to the photon field, which is regularly found in photon coherent states, which are superpositions of photon number states.
Why are fixed integer states with little or no number uncertainty likely for massive quantum fields like the electron field in low energy situations?
It may be related to the fact that nuclei are found to have fixed proton numbers so charge neutrality for bound states leads to fixed electron numbers.
But this kicks the can down the road. Why don’t we see, for example, a superposition state of multiple proton and electron numbers. Or maybe, put differently, why don’t we see stable bound atomic states that are superpositions of H and He atoms?
Does this have to do with the fact the electrons and protons are massive?
Answer: This is a good question. It turns out to be several different questions.
For example the hydrogen atom always has 1 (and not 1.2 or 1.5) electron quanta
This is not actually true. The hydrogen atom is by definition a charge eigenstate, but an energy eigenstate of hydrogen is only approximately an electron number eigenstate. The approximation is a good one because the atomic energy scale is small compared to the mass-energy of an electron-positron pair. For heavy enough atoms (hand-wavingly, those with $Z\gtrsim 1/\alpha$), you can "spark the vacuum," and the uncertainty in particle number is of order unity. My knowledge of QCD is not great, but I believe that, for example, a hydrogen atom does have an uncertainty in quark number that is of order unity.
Note that this in contrast to the photon field which is regularly found in photon coherent states which are superpositions of photon number states.
There is a number-phase uncertainty relation. Because of this, you can't get a classical field (one whose phase can be measured) made out of fermions. To get coherence, you need to have the density of particles (per unit volume $\lambda^3$) to be $\gg 1$, but this is impossible for fermions.
For massive neutral bosons, it's possible in principle to have a coherent wave, but because of the number-phase uncertainty relation, it will have to have a large uncertainty in particle number. This means that it has a large uncertainty in energy.
But this kicks the can down the road. Why don’t we see, for example, a superposition state of multiple proton and electron numbers.
You can have this, for example, in beta decay. If you prepare the parent nucleus in its ground state, and then wait, the Schrödinger equation says that the wave function becomes a mixture of the decayed and undecayed states. (The decayed state includes the emitted beta particle and neutrino.) These states aren't uncommon in any definable objective sense, but normally when we do nuclear physics experiments, we measure the emitted beta particle as our way of detecting that something has happened. Then we get decoherence between the state of the detector that has seen a beta particle and the one that hasn't.
Or maybe, put differently, why don’t we see stable bound atomic states that are superpositions of H and He atoms?
These systems have different baryon numbers. Baryon number is not only conserved, but we also don't have any observables $A$ such that $\langle \text{H} | A | \text{He} \rangle\ne0$. Therefore they inhabit different superselection sectors, and we can't make coherent superpositions of them. So there is nothing in quantum mechanics that prohibits the superposition, but the superposition doesn't have any observable consequences, e.g., you can't observe interference effects. | {
"domain": "physics.stackexchange",
"id": 91196,
"tags": "quantum-mechanics, nuclear-physics, atomic-physics, superposition, discrete"
} |
Options with a "yes" or "no" radio button | Question: I have about 6 of these, but I'm only posting two. Essentially I have a bunch of options with a "yes" or "no" radio button. If "no" is selected, I want a form to popup.
$('#Option1').live('change', function() {
if ($('input[name=Option1]:checked').val() == "False") {
$('#formDiv').show();
} else {
$('#formDiv').hide();
}
});
$('#Option2').live('change', function() {
if ($('input[name=Option2]:checked').val() == "False") {
$('#formDiv').show();
} else {
$('#formDiv').hide();
}
});
How can I make this more efficient, or even consolidate it? Or is having a function for each pair of radio buttons necessary?
Answer: This should work:
$('#Option1, #Option2').live('change', function() {
if ($('input[name=' + $(this).attr('id') + ']:checked').val() == 'False') {
$('#formDiv').show();
} else {
$('#formDiv').hide();
}
});
Or, if whatever has the id option1 is the same page element as the input with the name option1, you could do:
$('#Option1, #Option2').live('change', function() {
if ($(this).val() == 'False') {
$('#formDiv').show();
} else {
$('#formDiv').hide();
}
}); | {
"domain": "codereview.stackexchange",
"id": 1154,
"tags": "javascript, jquery, form, event-handling"
} |
Wavefunction of many particles | Question: Suppose that we have $N$-many identical particles, whose space-spin-coordinates are given by $x_{1}, x_{2},...x_{n}$ and whose composite system is represented by $|\Psi\rangle$. Then, according to the textbook, the wavefunction of these particles can be represented by:
$$\Psi(x_{1}, x_{2}, \ldots, x_{n}) = \langle x_{1}, x_{2}, \ldots, x_{n}|\Psi\rangle$$
What is the physical meaning of this? Whence the inner product of $|x_{1}, x_{2},... x_{n}\rangle$ and $|\Psi \rangle$? Why take the inner product to get the wavefunction?
I know it's a very elementary question, but I want to understand the physical meaning of the equation. (Also, I know this is not particularly about many-particle systems, but it's just what I happen to be looking at right now.)
Answer: In very informal terms (unless you consider tensor products of rigged Hilbert spaces), the vector $|x_1,\ldots,x_n\rangle$ is the element of $\bigotimes^n H$ given by
$$|x_1\rangle\otimes\cdots\otimes|x_n\rangle$$
That is, for each particle you have a copy of $H$. Note that the statistics of the problem will reduce $\bigotimes^n H$ to a subspace. So, for bosons, you will have the subspace $S$ given by the closure of the span of all the vectors generated by symmetric tensor products of one-particle states.
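As a concrete two-particle illustration of that reduction (a standard example, in notation of my own choosing): two identical bosons occupying one-particle states $|\psi_a\rangle$ and $|\psi_b\rangle$ must be described by the symmetrized combination

```latex
|\Psi\rangle
  = \frac{1}{\sqrt{2}}\bigl(\,|\psi_a\rangle\otimes|\psi_b\rangle
  + |\psi_b\rangle\otimes|\psi_a\rangle\,\bigr),
\qquad
\Psi(x_1,x_2)
  = \frac{\psi_a(x_1)\,\psi_b(x_2) + \psi_b(x_1)\,\psi_a(x_2)}{\sqrt{2}}
```

For fermions the $+$ becomes a $-$, giving the antisymmetric subspace instead.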
Now, in order to grasp the meaning of that "inner product", suppose that your global state $|\Psi\rangle$ is the product of one-particle states, that is $|\Psi\rangle = |\psi_1\rangle\otimes\cdots\otimes|\psi_n\rangle$. The inner product gives
$$\Psi(x_1,\ldots,x_n) = \psi_1(x_1)\cdots\psi_n(x_n).$$
More generally, the expression of $\Psi$ will be an $L^2$-integrable function that can be approximated arbitrarily well with linear combinations of functions like the above. | {
"domain": "physics.stackexchange",
"id": 66796,
"tags": "quantum-mechanics, hilbert-space, wavefunction, many-body, identical-particles"
} |
Will NaF + CaCO3 precipitate much CaF2? | Question: If I mix sodium fluoride, calcium carbonate powder, and water, under what conditions (if any) would there be an equilibrium?
I.e. assume we start out with fully dissociated $\ce{Na+(aq) + F-(aq)}$, because it's fairly soluble. But when I add the $\ce{CaCO3}$, it's only slightly soluble, so assume an excess of solid $\ce{CaCO3}$ and a small amount dissolved.
Will that be stable with $\ce{NaF}$ concentration much greater than the saturated concentrations of $\ce{CaCO3}$, or will $\ce{CaF2}$ start to precipitate, and will it continue until exhaustion of a reactant, or reach equilibrium?
I'm wondering if it's the relative solubilities of $\ce{CaCO3}$ and $\ce{CaF2}$ that will drive the equilibrium (if that's what it is) one way or the other. I can't work out how to apply the theory of an equilibrium constant, because the excess solid reactant and the corresponding possible precipitate mess up how concentrations balance out in a normal equilibrium… And I'm wondering if the absolute concentration of $\ce{NaF}$ affects things as well.
Any pointers appreciated. Or even some jargon to say what this kind of reaction is called.
Answer: Yes, the calcium ion could lead to precipitation. The solubility of $\ce{CaCO3}$ in distilled water is about 15 mg/L, which is about 0.15 mM calcium ion if there is no other source of carbonate.
The solubility constant for $\ce{CaF2}$ is about $4\times 10^{-11}$, which means that we can only have 0.5 mM fluoride ions before precipitation will start. That's well below the solubility of NaF.
However, the presence of additional carbonate could be used to reduce the calcium concentration. Similarly, any addition of acid will convert carbonate to bicarbonate and increase the maximum calcium concentration.
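The two numbers above are easy to reproduce (a sketch; taking 100 g/mol as the molar mass of CaCO3 is my own rounding, consistent with the 15 mg/L ≈ 0.15 mM step):

```javascript
// CaCO3: 15 mg/L at ~100 g/mol gives the dissolved calcium concentration
const caMolar = 15e-3 / 100              // mol/L = 1.5e-4 M = 0.15 mM
console.log((caMolar * 1e3).toFixed(2))  // prints 0.15 (mM)

// CaF2: Ksp = [Ca2+][F-]^2, so the fluoride ceiling before precipitation is
const kspCaF2 = 4e-11
const fMax = Math.sqrt(kspCaF2 / caMolar) // mol/L
console.log((fMax * 1e3).toFixed(2))      // prints 0.52 (mM)
```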
UPDATE:
In the above answer, I assumed a basic familiarity with solubility products. For those not familiar with those, here's more detail.
The key quantitative measure of solubility of ionic compounds is the solubility product, usually indicated as $K_{sp}$, which is the product of the concentrations of the separate ions.
For $\ce{CaCO3}$, we have $K_{sp}=[\ce{Ca^2+}][\ce{CO3^2-}]$. Reported values vary somewhat, but are typically around $2\times 10^{-8}$.
Likewise, for $\ce{CaF2}$, we have $K_{sp}=[\ce{Ca^2+}][\ce{F-}][\ce{F-}]$, and the reported values are around $4\times 10^{-11}$. Thus, the condition for keeping fluoride in solution is
$[\ce{Ca^2+}][\ce{F-}][\ce{F-}]<4\times 10^{-11}$.
If $\ce{CaCO3}$ is dissolved in distilled water to maximum solubility, the calcium ion concentration is $\sqrt{K_{sp}}\approx 0.15$ mM, which is the basis of the calculation above. | {
"domain": "chemistry.stackexchange",
"id": 13145,
"tags": "inorganic-chemistry, equilibrium, solubility, solutions, precipitation"
} |
Covering radius of code | Question: Let $C$ be a $[n, k]$ linear code over $\mathbb{F}_q$.
I want to calculate the covering radius of the Hamming codes.
I have thought the following:
Since the Hamming distance is $3$, the covering radius will always be $3$.
Am I right?
Answer: The Hamming codes are perfect codes. This means that balls of radius $(d-1)/2$ (where $d$ is the minimal distance) centered around the codewords partition the space. In particular, the covering radius is $(d-1)/2$. | {
"domain": "cs.stackexchange",
"id": 6364,
"tags": "coding-theory"
} |
What does it mean when a gene and its transcript have opposite orientations in a GFF3 file? | Question: I was working with a given GFF3 file, and I observed that some genes have orientation opposite to their transcripts. Here is a snippet:
chr1 . gene 2189548 2194772 . + . ID=c5eaf8c1-2bb2-566f-e154-6a9be797c11f;Name=FAAP20;gene_biotype=protein_coding;gene_id=34a96673-bfab-ebbf-b2b7-f9cb73163a4a;gene_name=FAAP20
chr1 . transcript 2189548 2194772 . - . ID=f47feeac-74ce-1543-53b6-afd9bbb8ab19;Name=rna-NM_182533.4;gene_biotype=protein_coding;gene_id=34a96673-bfab-ebbf-b2b7-f9cb73163a4a;gene_name=FAAP20;Parent=c5eaf8c1-2bb2-566f-e154-6a9be797c11f;transcript_biotype=protein_coding;transcript_name=rna-NM_182533.4;protein_id=NP_872339.3
How do I interpret this? I checked the UCSC Browser, and it seems that the browser agrees with the transcript in each case, which seems like it should be canonical.
Moreover, I observed in this file that gene orientation is always +. Whereas a highly curated GFF3 of the human genome represents both directions:
# my file
$ awk '$3 == "gene" {print}' filtered_potential_genes.gff | cut -f7 | sort | uniq -c
1600 +
# highly curated annotations
$ awk '$3 == "gene" {print}' /MANE.GRCh38.v0.95.select_refseq_genomic.gff | cut -f7 | sort| uniq -c
9393 +
9191 -
According to the person who provided me the file, the existence of bidirectional transcription was indeed the motivation for this. However, it's obviously confusing to have it still be a definitive + rather than . or something to signify unstrandedness.
It seems likely that this is due to an artifact in my file. Are there other interpretations? Is the standard simply that gene orientation is meaningless and transcript orientation is important?
Answer: Looks like a poor annotation to me. I could understand a few scattered ncRNAs being arranged in the opposite direction from the gene, but all genes being oriented identically doesn't make sense. | {
"domain": "bioinformatics.stackexchange",
"id": 2285,
"tags": "sequence-annotation, gff3"
} |
Deriving the velocity for circular orbital motion | Question: I'm trying to derive the (circular) orbital velocity for a given height from the center of mass of a body, like Earth.
First I used the equation of the circle $x^2 + y^2 = r^2$ to get $y = \pm \sqrt{r^2 - x^2}$. I then took the derivative to find the slope of the velocity vector, which was $\mp \frac{x}{\sqrt{r^2-x^2}}$.
The acceleration vector, though, will always be at 90 degrees to the velocity vector, pointing inwards towards the center of mass. Since perpendicular lines have slopes that are negative reciprocals of each other, the acceleration vector must have a slope of $\pm \frac{\sqrt{r^2-x^2}}{x}$.
We also know that $v' = a$ must be true (the derivative of the velocity vector is the acceleration vector) and that $|a| = \frac{GM}{r^2}$, from Newton's Law of Gravity ($\frac{F}{m} = a = G \frac{Mm}{r^2 \cdot m}$).
How can I get $|v|$, knowing the slopes of the vectors and the magnitude of $a$? Is this a good approach? If not, how should I try to understand this?
Answer: Your problem is that you are trying to do this in Cartesian coordinates (x,y). This makes the math much harder than it needs to be.
It is more natural, when dealing with circular motion, to use polar coordinates - we express a position relative to the origin by its distance ($r$) and the angle relative to some reference axis ($\theta$). The relationship between $(x,y)$ and $(r,\theta)$ is given by
$$x = r \cos\theta\\
y = r \sin\theta$$
Now the solution becomes trivial. The velocity of an object in circular motion ($r$ is constant) is given by
$$|v| = r\omega = r \dot\theta$$
We know that the force needed to keep an object with mass $m$ in an orbit of radius $r$ at an angular velocity $\omega$ is
$$F = m \omega^2 r = \frac{m v^2}{r}$$
Setting this force equal to the force of gravity:
$$F_g = \frac{GMm}{r^2}$$
we find
$$ \frac{m v^2}{r} = \frac{GMm}{r^2}\\
v = \sqrt{\frac{GM}{r}}$$
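For a concrete number, here is a minimal sketch of the formula (the value of Earth's $GM$ and the orbit radius in the usage note below are my own assumptions, not from the question):

```cpp
#include <cmath>

// Circular orbital speed: v = sqrt(GM / r),
// where GM is the central body's gravitational parameter (m^3/s^2)
// and r is the orbit radius measured from the body's center (m).
double orbitalSpeed(double GM, double r)
{
    return std::sqrt(GM / r);
}
```

With Earth's $GM \approx 3.986 \times 10^{14}\ \mathrm{m^3/s^2}$ and $r \approx 6.771 \times 10^6\ \mathrm{m}$ (about 400 km of altitude), this gives roughly $7.7\ \mathrm{km/s}$.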
Trying to get to the same answer while working with Cartesian coordinates is masochistic. | {
"domain": "physics.stackexchange",
"id": 22729,
"tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, orbital-motion, celestial-mechanics"
} |
How to work out momentum when there are velocity and mass changes | Question: I have a pretty simple homework question, but I can't wrap my head around it.
In the question, a swimmer of $55 \mbox{ } \mathrm{kg}$ jumps off a stationary raft of $210\mbox{ }\mathrm{kg}$. The swimmer jumps off the raft with a speed of $4.6 \mbox{ } \mathrm{ms}^{-1}$. I need to work out the recoil velocity of the raft.
So because Momentum before = Momentum after, I went:
$p_i = 0$
Therefore $0 = p_f$ and $p_f = mv$ with $m = 210\mbox{ }\mathrm{kg}$, so $v$ would have to equal $0$, making the recoil velocity $0$. However, that doesn't seem right. Could use some clarification or help, thanks.
Answer: The total momentum of the whole Swimmer+Raft system is conserved, not of the raft only.
So your equation should be $$P_{i,system}=P_{f,system}$$ $$0=m_{raft}\vec v_{raft}+m_{swimmer}\vec v_{swimmer}$$
You cannot conserve momentum for the raft alone because there is an external force on the raft (the swimmer pushing back with her legs, trying to jump forward).
Also note that $v_{raft}$ and $v_{swimmer}$ are the velocities in the ground frame. The $4.6 m/s$ of the swimmer might be with respect to the raft. So you'll have to convert that velocity to the ground frame velocity. | {
"domain": "physics.stackexchange",
"id": 8908,
"tags": "homework-and-exercises, newtonian-mechanics, momentum"
} |
A rod in empty space | Question: I imagine a rod in empty space. It has no momentum and it is not rotating. Now, suppose I hit the rod at one end. The rod starts to rotate. And, according to the definition of centre of mass, the CM must move as if the force were applied at the CM. But if this is true, then it has both translational and rotational kinetic energy.
Now, if I apply the force on CM of the rod, then it only has translational KE.
In the above two cases, energies are different. Why is that?!
I got this picture from Wikipedia. It said that both cases shown in the picture are the same. In the first case, we have only a single force, but in the second, I have both the force and a couple. Isn't conservation of energy violated?
Answer: The difference in the amount of work you do comes down to the difference in displacement, which depends on exactly how the force is applied. If you don't think very carefully about how the force is applied in each situation it's easy for things to get confusing.
The change in the translational kinetic energy is $\Delta K_{trans.} = \int \vec{F}_{net} \cdot d\vec{x}_{CM}$. The change in rotational kinetic energy is $\Delta K_{rot} = \int \tau \,d\theta = \int |\vec{F}_{net}| \sin(\phi_{rF}) \,rd\theta$ where $r$ is the distance from the CM to the point of application of the force and $\phi_{rF}$ tells us about the angle of application of the force.
It helps here to imagine the force happening over a finite amount of time. If you apply a constant force (with constant direction) at the center of mass the object moves a small distance $\Delta x$ so you do work $F\Delta x$. If you try the same thing at a point away from the center of mass the point of application begins to rotate away from you as well as move away from you, so you will end up doing work $F(\Delta x + r\Delta \theta)$. The image below shows how the rotation leads to extra displacement, which means extra work.
There are lots of different ways we could apply the forces in the two different situations. Different methods of application will lead to different motion and hence different amounts of work. Thinking about instantaneous forces kind of hides what's going on, which is why you have to be careful with it. | {
"domain": "physics.stackexchange",
"id": 75396,
"tags": "rotational-dynamics"
} |
Abstracting OpenGL shaders and uniforms into objects for ease of use | Question: I am following an OpenGL tutorial series, and it got to the point where the program needed some code abstraction. I followed the tutorial on the abstraction of VBOs, IBOs and the like, and when it came time to do the shaders, I decided it would be a good opportunity for me to practice, so I did my best to make it happen.
Seeing as I am fairly new to C++, I thought it would be a great idea to post my code on here to take in some constructive criticism. Make sure to point out what I did badly (because I am sure there will be plenty of such examples), but it would also be nice to tell me what I did well so I can reinforce such behavior in the future. There are 4 relevant source files (excluding the headers and some helper classes, but I will post them as well).
I'll start with the main program - Application.cpp:
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>
#include "Renderer.h"
#include "IndexBuffer.h"
#include "VertexBuffer.h"
#include "VertexArray.h"
#include "BufferLayout.h"
#include "Shader.h"
#include "GLProgram.h"
#include "Uniform.h"
#define _USE_MATH_DEFINES
#include <math.h>
struct Vector4
{
float x, y, z, w;
Vector4(float x, float y, float z, float w)
: x(x), y(y), z(z), w(w)
{
}
};
int main(void)
{
GLFWwindow* window;
/* Initialize the library */
if (!glfwInit())
return -1;
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
/* Create a windowed mode window and its OpenGL context */
window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL); // Window with OpenGL context is created but context is not set to current.
if (!window)
{
glfwTerminate();
return -1;
}
/* Make the window's context current */
glfwMakeContextCurrent(window); // Setting the window's OpenGL context before initializing GLEW is critical.
glfwSwapInterval(1);
if (glewInit() != GLEW_OK) // GLEW needs to be initialized before attempting to call any GL functions. Beware of the scary NULL pointers.
std::cout << "GLEW initialization error. Terminating..." << std::endl;
std::cout << glGetString(GL_VERSION) << std::endl;
float tri2Dpositions[8] = { // Defined an array containing all our vertices.
-0.4f, -0.35f,
0.8f, -0.35f,
0.4f, 0.35f,
-0.8f, 0.35f
};
unsigned int indicies[6] = { // Defined indices of drawing order.
0, 1, 2,
2, 3, 0
};
{
VertexBuffer vbo(tri2Dpositions, sizeof(tri2Dpositions));
IndexBuffer ibo(indicies, 6);
BufferLayout layout;
layout.Push<float>(2);
VertexArray va;
va.AddBuffer(vbo, layout);
Shader testShaders[2] = // Shader abstraction in use.
{
Shader(GL_VERTEX_SHADER, "res/shaders/Basic.shader"),
Shader(GL_FRAGMENT_SHADER, "res/shaders/Basic.shader")
};
GLProgram program(testShaders, 2);
float slopeIncrement = 0.04f;
bool clockwise = true;
unsigned long count = 0;
int windowWidth, windowHeight;
glfwGetWindowSize(window, &windowWidth, &windowHeight);
float color1[4] = { 0.0f, 1.0f, 0.0f, 1.0f };
float color2[4] = { 1.0f, 0.0f, 0.0f, 1.0f };
float windowSize[2] = { (float)windowWidth, (float)windowHeight }; // explicit casts avoid a narrowing error in list-initialization
float slope = 0.0f;
int switched = false;
Uniform c1(color1, UniformType::FLOAT4, "u_Color", false); // Uniform abstraction in use.
Uniform c2(color2, UniformType::FLOAT4, "u_Color2", false);
Uniform WindowSize(windowSize, UniformType::FLOAT2, "u_WindowSize", false);
Uniform SlopeBounds(&slope, UniformType::FLOAT, "u_SlopeBoundary", false);
Uniform ColorSwitched(&switched, UniformType::INT, "u_Switched", false);
program.Bind();
program.AttachUniform(c1);
program.AttachUniform(c2);
program.AttachUniform(WindowSize);
program.AttachUniform(SlopeBounds);
program.AttachUniform(ColorSwitched);
program.RefreshUniforms();
/* Loop until the user closes the window */
while (!glfwWindowShouldClose(window))
{
/* Render here */
glClear(GL_COLOR_BUFFER_BIT);
ibo.Bind();
va.Bind();
SlopeBounds.SetData(&slope);
ColorSwitched.SetData(&switched);
program.RefreshUniforms();
GLCall(glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr)); // Draws the currently bound array with specified draw mode using default shaders (if available)
// in the occasion that we aren't binding any custom shaders prior.
if (!clockwise)
{
slopeIncrement = abs(slopeIncrement);
if (slope > M_PI)
{
slope = 0;
switched = !switched;
count++;
if (count % 4 == 0) clockwise = !clockwise;
}
}
else if (clockwise)
{
slopeIncrement = -abs(slopeIncrement);
if (slope < 0)
{
slope = M_PI;
switched = !switched;
count++;
if (count % 4 == 0) clockwise = !clockwise;
}
}
slope += slopeIncrement;
/* Swap front and back buffers */
glfwSwapBuffers(window);
/* Poll for and process events */
glfwPollEvents();
}
}
glfwTerminate();
return 0;
}
Shader.h:
#pragma once
#include "Renderer.h"
#include <GL/glew.h>
#include <iostream>
#include <string>
#include "Uniform.h"
class Shader
{
private:
unsigned int m_RendererID;
unsigned int m_ShaderType;
bool m_Attachable = false;
std::string m_Source;
std::string Parse(const unsigned int type, const std::string& filepath);
void Compile();
bool CompileCheck();
public:
Shader(const unsigned int type, const std::string& filepath);
~Shader();
void Recompile(const std::string& filepath);
inline unsigned int GetHandle() const
{
return m_RendererID;
}
inline bool Attachable() const
{
return m_Attachable;
}
inline unsigned int GetType() const
{
return m_ShaderType;
}
inline const std::string& GetSource() const
{
return m_Source;
}
inline bool SameInstance(const Shader& s)
{
return (this == &s) ? true : false;
}
};
Shader.cpp:
#include "Shader.h"
#include <sstream>
#include <fstream>
Shader::Shader(const unsigned int type, const std::string& filepath)
{
switch (type)
{
case GL_VERTEX_SHADER:
m_ShaderType = GL_VERTEX_SHADER;
m_Source = Parse(m_ShaderType, filepath);
break;
case GL_FRAGMENT_SHADER:
m_ShaderType = GL_FRAGMENT_SHADER;
m_Source = Parse(m_ShaderType, filepath);
break;
default:
std::cout << "Unrecognized shader type. Defaulting to GL_VERTEX_SHADER..." << std::endl;
m_ShaderType = GL_VERTEX_SHADER;
m_Source = Parse(m_ShaderType, filepath);
}
GLCall(m_RendererID = glCreateShader(m_ShaderType));
Compile();
}
Shader::~Shader()
{
GLCall(glDeleteShader(m_RendererID));
}
std::string Shader::Parse(const unsigned int type, const std::string& filepath)
{
std::ifstream stream(filepath);
std::string line;
std::stringstream stringStream;
bool write = false;
if (type == GL_VERTEX_SHADER)
{
while (getline(stream, line))
{
if (line.find("#shader vertex") != std::string::npos && write == false)
{
write = true;
}
else if (line.find("#shader") != std::string::npos && write == true)
{
write = false;
}
else if (write)
{
stringStream << line << "\n";
}
}
}
else if (type == GL_FRAGMENT_SHADER)
{
while (getline(stream, line))
{
if (line.find("#shader fragment") != std::string::npos && write == false)
{
write = true;
}
else if (line.find("#shader") != std::string::npos && write == true)
{
write = false;
}
else if (write)
{
stringStream << line << "\n";
}
}
}
else
{
std::cout << "Couldn't find appropriate shader (Maybe misspelled markup or defaulted type?)... Aborting" << std::endl;
return ""; // returning nullptr here would be undefined behavior when constructing the std::string
}
return stringStream.str();
}
void Shader::Compile()
{
const char* src = m_Source.c_str();
GLCall(glShaderSource(m_RendererID, 1, &src, nullptr));
GLCall(glCompileShader(m_RendererID));
bool state = CompileCheck();
ASSERT(state);
m_Attachable = state ? true : false;
}
void Shader::Recompile(const std::string& filepath)
{
m_Source = Parse(m_ShaderType, filepath);
const char* src = m_Source.c_str();
GLCall(glShaderSource(m_RendererID, 1, &src, nullptr));
GLCall(glCompileShader(m_RendererID));
bool state = CompileCheck();
ASSERT(state);
m_Attachable = state ? true : false;
}
bool Shader::CompileCheck()
{
int result;
GLCall(glGetShaderiv(m_RendererID, GL_COMPILE_STATUS, &result));
if (result == GL_FALSE)
{
int length;
GLCall(glGetShaderiv(m_RendererID, GL_INFO_LOG_LENGTH, &length));
char* message = (char*)(alloca(length * sizeof(char)));
GLCall(glGetShaderInfoLog(m_RendererID, length, &length, message));
std::cout << "Failed to compile " << (m_ShaderType == GL_VERTEX_SHADER ? "vertex" : "fragment") << " shader" << std::endl;
std::cout << message << std::endl;
GLCall(glDeleteShader(m_RendererID));
return false;
}
return true;
}
Uniform.h:
#pragma once
#include <iostream>
#include <vector>
#include <GL\glew.h>
#include "Renderer.h"
enum UniformType
{
INVALID = -1,
FLOAT,
FLOAT2,
FLOAT3,
FLOAT4,
fMAT2x2,
fMAT3x3,
fMAT4x4,
DOUBLE,
DOUBLE2,
DOUBLE3,
DOUBLE4,
dMAT2x2,
dMAT3x3,
dMAT4x4,
INT,
INT2,
INT3,
INT4,
iMAT2x2,
iMAT3x3,
iMAT4x4
};
class Uniform
{
private:
void* m_Data;
UniformType m_Type = INVALID;
std::string m_UName;
bool m_Transpose = false;
template<typename T>
void ChangeData(void* data, unsigned int count);
template<>
void ChangeData<float>(void* data, unsigned int count);
template<>
void ChangeData<double>(void* data, unsigned int count);
template<>
void ChangeData<int>(void* data, unsigned int count);
public:
Uniform(void* data, UniformType type, const std::string& identifier, bool transpose);
~Uniform();
void* GetData() const
{
if (m_Type < 0)
{
std::cout << "Error getting uniform data. Uniform is not initialized properly." << std::endl;
return nullptr;
}
return m_Data;
}
inline const std::string& GetName() const
{
return m_UName;
}
inline const UniformType GetType() const
{
return m_Type;
}
inline const bool Transpose() const
{
return m_Transpose;
}
void SetData(void* data);
};
Uniform.cpp:
#include "Uniform.h"
Uniform::Uniform(void * data, UniformType type, const std::string& identifier, bool transpose)
{
switch (type)
{
case FLOAT:
m_Data = new float;
m_Type = type;
SetData(data);
m_UName = identifier;
break;
case FLOAT2:
m_Data = new float[2];
m_Type = type;
SetData(data);
m_UName = identifier;
break;
case FLOAT3:
m_Data = new float[3];
m_Type = type;
SetData(data);
m_UName = identifier;
break;
case FLOAT4:
m_Data = new float[4];
m_Type = type;
SetData(data);
m_UName = identifier;
break;
case DOUBLE:
m_Data = new double;
m_Type = type;
SetData(data);
m_UName = identifier;
break;
case DOUBLE2:
m_Data = new double[2];
m_Type = type;
SetData(data);
m_UName = identifier;
break;
case DOUBLE3:
m_Data = new double[3];
m_Type = type;
SetData(data);
m_UName = identifier;
break;
case DOUBLE4:
m_Data = new double[4];
m_Type = type;
SetData(data);
m_UName = identifier;
break;
case INT:
m_Data = new int;
m_Type = type;
SetData(data);
m_UName = identifier;
break;
case INT2:
m_Data = new int[2];
m_Type = type;
SetData(data);
m_UName = identifier;
break;
case INT3:
m_Data = new int[3];
m_Type = type;
SetData(data);
m_UName = identifier;
break;
case INT4:
m_Data = new int[4];
m_Type = type;
SetData(data);
m_UName = identifier;
break;
case fMAT2x2:
m_Data = new float[4];
m_Type = type;
SetData(data);
m_UName = identifier;
m_Transpose = transpose;
break;
case fMAT3x3:
m_Data = new float[9];
m_Type = type;
SetData(data);
m_UName = identifier;
m_Transpose = transpose;
break;
case fMAT4x4:
m_Data = new float[16];
m_Type = type;
SetData(data);
m_UName = identifier;
m_Transpose = transpose;
break;
case dMAT2x2:
m_Data = new double[4];
m_Type = type;
SetData(data);
m_UName = identifier;
m_Transpose = transpose;
break;
case dMAT3x3:
m_Data = new double[9];
m_Type = type;
SetData(data);
m_UName = identifier;
m_Transpose = transpose;
break;
case dMAT4x4:
m_Data = new double[16];
m_Type = type;
SetData(data);
m_UName = identifier;
m_Transpose = transpose;
break;
case iMAT2x2:
m_Data = new int[4];
m_Type = type;
SetData(data);
m_UName = identifier;
m_Transpose = transpose;
break;
case iMAT3x3:
m_Data = new int[9];
m_Type = type;
SetData(data);
m_UName = identifier;
m_Transpose = transpose;
break;
case iMAT4x4:
m_Data = new int[16];
m_Type = type;
SetData(data);
m_UName = identifier;
m_Transpose = transpose;
break;
}
}
Uniform::~Uniform()
{
if ((m_Type >= 0 && m_Type <= 3))
{
delete (float*)m_Data;
}
else if ((m_Type >= 4 && m_Type <= 6))
{
delete[] (float*)m_Data;
}
else if ((m_Type >= 7 && m_Type <= 10))
{
delete (double*)m_Data;
}
else if ((m_Type >= 11 && m_Type <= 13))
{
delete[] (double*)m_Data;
}
else if ((m_Type >= 14 && m_Type <= 16))
{
delete (int*)m_Data;
}
else if ((m_Type >= 17 && m_Type <= 20))
{
delete[] (int*)m_Data;
}
}
void Uniform::SetData(void* data)
{
switch (m_Type)
{
case FLOAT:
ChangeData<float>(data, 1);
break;
case FLOAT2:
ChangeData<float>(data, 2);
break;
case FLOAT3:
ChangeData<float>(data, 3);
break;
case FLOAT4:
ChangeData<float>(data, 4);
break;
case DOUBLE:
ChangeData<double>(data, 1);
break;
case DOUBLE2:
ChangeData<double>(data, 2);
break;
case DOUBLE3:
ChangeData<double>(data, 3);
break;
case DOUBLE4:
ChangeData<double>(data, 4);
break;
case INT:
ChangeData<int>(data, 1);
break;
case INT2:
ChangeData<int>(data, 2);
break;
case INT3:
ChangeData<int>(data, 3);
break;
case INT4:
ChangeData<int>(data, 4);
break;
case fMAT2x2:
ChangeData<float>(data, 4);
break;
case fMAT3x3:
ChangeData<float>(data, 9);
break;
case fMAT4x4:
ChangeData<float>(data, 16);
break;
case dMAT2x2:
ChangeData<double>(data, 4);
break;
case dMAT3x3:
ChangeData<double>(data, 9);
break;
case dMAT4x4:
ChangeData<double>(data, 16);
break;
case iMAT2x2:
ChangeData<int>(data, 4);
break;
case iMAT3x3:
ChangeData<int>(data, 9);
break;
case iMAT4x4:
ChangeData<int>(data, 16);
break;
}
}
template<>
void Uniform::ChangeData<float>(void* data, unsigned int count)
{
for (unsigned int i = 0; i < count; i++)
{
*((float*)m_Data + i) = *((float*)data + i);
}
}
template<>
void Uniform::ChangeData<double>(void* data, unsigned int count)
{
for (unsigned int i = 0; i < count; i++)
{
*((double*)m_Data + i) = *((double*)data + i);
}
}
template<>
void Uniform::ChangeData<int>(void* data, unsigned int count)
{
for (unsigned int i = 0; i < count; i++)
{
*((int*)m_Data + i) = *((int*)data + i);
}
}
GLProgram.h:
#pragma once
#include "Shader.h"
#include <vector>
class GLProgram
{
private:
unsigned int m_RendererID;
std::vector<Shader*> m_AttachedShaders;
std::vector<Uniform*> m_Uniforms;
int* m_UniformLocations = nullptr;
void LinkProgram();
int* GetUniformLocations();
void ParseUniform(Uniform* uniform);
public:
GLProgram(Shader shaders[], unsigned int count);
GLProgram();
~GLProgram();
void Attach(Shader shaders[], unsigned int count);
void Attach(Shader& shader);
void Detach(Shader& shader);
void Reattach();
void AttachUniform(Uniform& uniform);
void DeleteUniform(const std::string& identifier);
void RefreshUniforms();
void Bind();
void Unbind();
inline const std::vector<Shader*> AttachedShaders() const
{
return m_AttachedShaders;
}
inline const unsigned int GetHandle() const
{
return m_RendererID;
}
inline int GetUniformLocation(Uniform& uniform) const
{
for (unsigned int i = 0; i < m_Uniforms.size(); i++)
{
if (&uniform == m_Uniforms[i])
{
return *(m_UniformLocations + i);
}
}
return -1; // uniform not attached to this program
}
inline int GetUniformLocation(std::string& uName) const
{
for (unsigned int i = 0; i < m_Uniforms.size(); i++)
{
if (m_Uniforms[i]->GetName() == uName)
{
return *(m_UniformLocations + i);
}
}
return -1; // uniform not attached to this program
}
};
GLProgram.cpp:
#include "GLProgram.h"
GLProgram::GLProgram(Shader shaders[], unsigned int count)
{
GLCall(m_RendererID = glCreateProgram());
Attach(shaders, count);
Bind();
}
GLProgram::GLProgram()
{
GLCall(m_RendererID = glCreateProgram());
}
GLProgram::~GLProgram()
{
GLCall(glDeleteProgram(m_RendererID));
}
void GLProgram::Bind()
{
GLCall(glUseProgram(m_RendererID));
}
void GLProgram::Unbind()
{
GLCall(glUseProgram(0));
}
void GLProgram::Attach(Shader& shader)
{
if (shader.Attachable())
{
GLCall(glAttachShader(m_RendererID, shader.GetHandle()));
m_AttachedShaders.push_back(&shader);
}
else
std::cout << "Shader of type '" << shader.GetType() << "' and handle '" << shader.GetHandle() << "' is not attachable. It was left off the final program with handle '" << m_RendererID << "'" << std::endl;
LinkProgram();
}
void GLProgram::Attach(Shader shaders[], unsigned int count)
{
for (unsigned int i = 0; i < count; i++)
{
if (shaders[i].Attachable())
{
GLCall(glAttachShader(m_RendererID, shaders[i].GetHandle()));
m_AttachedShaders.push_back(&shaders[i]);
}
else
std::cout << "Shader of type '" << shaders[i].GetType() << "' and handle '" << shaders[i].GetHandle() << "' is not attachable. It was left off the final program with handle '" << m_RendererID << "'" << std::endl;
}
LinkProgram();
}
void GLProgram::Detach(Shader& shader)
{
unsigned int count = m_AttachedShaders.size();
for (unsigned int i = 0; i < count; i++)
{
if (shader.SameInstance(*m_AttachedShaders[i]))
{
GLCall(glDetachShader(m_RendererID, m_AttachedShaders[i]->GetHandle())); // detach before erasing so the index still refers to this shader
m_AttachedShaders.erase(m_AttachedShaders.begin() + i);
}
}
LinkProgram();
}
void GLProgram::Reattach()
{
unsigned int count = m_AttachedShaders.size();
for (unsigned int i = 0; i < count; i++)
{
GLCall(glDetachShader(m_RendererID, m_AttachedShaders[i]->GetHandle()));
GLCall(glAttachShader(m_RendererID, m_AttachedShaders[i]->GetHandle()));
}
LinkProgram();
}
void GLProgram::LinkProgram()
{
GLCall(glLinkProgram(m_RendererID));
GLCall(glValidateProgram(m_RendererID));
unsigned int count = m_AttachedShaders.size();
for (unsigned int i = 0; i < count; i++)
{
GLCall(glDeleteShader(m_AttachedShaders[i]->GetHandle()));
}
}
void GLProgram::AttachUniform(Uniform& uniform) // Registers the uniform and caches its location (for modifying uniform data at runtime).
{
m_Uniforms.push_back(&uniform);
m_UniformLocations = GetUniformLocations();
ParseUniform(&uniform);
}
void GLProgram::DeleteUniform(const std::string& identifier)
{
unsigned int i = 0;
for (Uniform* u : m_Uniforms)
{
if (u->GetName() == identifier)
m_Uniforms.erase(m_Uniforms.begin() + i);
i++;
}
m_UniformLocations = GetUniformLocations();
}
int* GLProgram::GetUniformLocations()
{
if (m_UniformLocations != nullptr) free(m_UniformLocations);
unsigned int count = m_Uniforms.size();
int* ptr = (int*)malloc(count * sizeof(int));
for (unsigned int i = 0; i < count; i++)
{
GLCall(*(ptr + i) = glGetUniformLocation(m_RendererID, m_Uniforms[i]->GetName().c_str()));
ASSERT(*(ptr + i) != -1);
}
return ptr;
}
void GLProgram::ParseUniform(Uniform* uniform)
{
unsigned int locationOffset = 0;
for (Uniform* u : m_Uniforms)
{
if (u == uniform)
break;
locationOffset++;
}
if (locationOffset >= m_Uniforms.size())
{
std::cout << "No uniform with identifier '" << uniform->GetName() << "' present. It was not parsed." << std::endl;
return;
}
UniformType type = uniform->GetType();
void* data = nullptr;
bool oneValue = type == UniformType::FLOAT || type == UniformType::DOUBLE || type == UniformType::INT;
bool twoValues = type == UniformType::FLOAT2 || type == UniformType::DOUBLE2 || type == UniformType::INT2;
bool threeValues = type == UniformType::FLOAT3 || type == UniformType::DOUBLE3 || type == UniformType::INT3;
bool fourValues = type == UniformType::FLOAT4 || type == UniformType::DOUBLE4 || type == UniformType::INT4;
bool mat2x2 = type == UniformType::fMAT2x2 || type == UniformType::dMAT2x2 || type == UniformType::iMAT2x2;
bool mat3x3 = type == UniformType::fMAT3x3 || type == UniformType::dMAT3x3 || type == UniformType::iMAT3x3;
bool mat4x4 = type == UniformType::fMAT4x4 || type == UniformType::dMAT4x4 || type == UniformType::iMAT4x4;
bool floatCast = type >= 0 && type <= 6 ? true : false;
bool doubleCast = type >= 7 && type <= 13 ? true : false;
bool intCast = type > 13 ? true : false;
if (oneValue)
{
if (floatCast)
{
data = uniform->GetData();
GLCall(glUniform1f(*(m_UniformLocations + locationOffset), *((float*)data)));
}
else if (doubleCast)
{
data = uniform->GetData();
GLCall(glUniform1d(*(m_UniformLocations + locationOffset), *((double*)data)));
}
else if (intCast)
{
data = uniform->GetData();
GLCall(glUniform1i(*(m_UniformLocations + locationOffset), *((int*)data)));
}
}
else if (twoValues)
{
if (floatCast)
{
data = uniform->GetData();
GLCall(glUniform2f(*(m_UniformLocations + locationOffset), *(((float*)data)), *(((float*)data) + 1)));
}
else if (doubleCast)
{
data = uniform->GetData();
GLCall(glUniform2d(*(m_UniformLocations + locationOffset), *(((double*)data)), *(((double*)data) + 1)));
}
else if (intCast)
{
data = uniform->GetData();
GLCall(glUniform2i(*(m_UniformLocations + locationOffset), *(((int*)data)), *(((int*)data) + 1)));
}
}
else if (threeValues)
{
if (floatCast)
{
data = uniform->GetData();
GLCall(glUniform3f(*(m_UniformLocations + locationOffset), *(((float*)data)), *(((float*)data) + 1), *(((float*)data) + 2)));
}
else if (doubleCast)
{
data = uniform->GetData();
GLCall(glUniform3d(*(m_UniformLocations + locationOffset), *(((double*)data)), *(((double*)data) + 1), *(((double*)data) + 2)));
}
else if (intCast)
{
data = uniform->GetData();
GLCall(glUniform3i(*(m_UniformLocations + locationOffset), *(((int*)data)), *(((int*)data) + 1), *(((int*)data) + 2)));
}
}
else if (fourValues)
{
if (floatCast)
{
data = uniform->GetData();
GLCall(glUniform4f(*(m_UniformLocations + locationOffset), *(((float*)data)), *(((float*)data) + 1), *(((float*)data) + 2), *(((float*)data) + 3)));
}
else if (doubleCast)
{
data = uniform->GetData();
GLCall(glUniform4d(*(m_UniformLocations + locationOffset), *(((double*)data)), *(((double*)data) + 1), *(((double*)data) + 2), *(((double*)data) + 3)));
}
else if (intCast)
{
data = uniform->GetData();
GLCall(glUniform4i(*(m_UniformLocations + locationOffset), *(((int*)data)), *(((int*)data) + 1), *(((int*)data) + 2), *(((int*)data) + 3)));
}
}
else if (mat2x2)
{
if (floatCast)
{
data = uniform->GetData();
GLCall(glUniformMatrix2fv(*(m_UniformLocations + locationOffset), 1, uniform->Transpose(), (float*)data));
}
else if (doubleCast)
{
data = uniform->GetData();
GLCall(glUniformMatrix2dv(*(m_UniformLocations + locationOffset), 1, uniform->Transpose(), (double*)data));
}
}
else if (mat3x3)
{
if (floatCast)
{
data = uniform->GetData();
GLCall(glUniformMatrix3fv(*(m_UniformLocations + locationOffset), 1, uniform->Transpose(), (float*)data));
}
else if (doubleCast)
{
data = uniform->GetData();
GLCall(glUniformMatrix3dv(*(m_UniformLocations + locationOffset), 1, uniform->Transpose(), (double*)data));
}
}
else if (mat4x4)
{
if (floatCast)
{
data = uniform->GetData();
GLCall(glUniformMatrix4fv(*(m_UniformLocations + locationOffset), 1, uniform->Transpose(), (float*)data));
}
else if (doubleCast)
{
data = uniform->GetData();
GLCall(glUniformMatrix4dv(*(m_UniformLocations + locationOffset), 1, uniform->Transpose(), (double*)data));
}
}
}
void GLProgram::RefreshUniforms()
{
for (Uniform* u : m_Uniforms)
{
ParseUniform(u);
}
}
Renderer.h: // Pretty basic GL error handling for now. Contains GLCall macro.
#pragma once
#include <GL/glew.h>
// GL function calls error checking macro.
#ifdef _DEBUG
#define ASSERT(x)\
if(!(x)) __debugbreak();
#define GLCall(x) GLClearError(); x; ASSERT(GLCheckError(#x, __FILE__, __LINE__))
#else
#define ASSERT(x) ;
#define GLCall(x) x;
#endif
void GLClearError();
bool GLCheckError(const char* functionLog, const char* sourceFile, int line);
Renderer.cpp:
#include "Renderer.h"
#include <GL\glew.h>
#include <iostream>
bool GLCheckError(const char* functionLog, const char* sourceFile, int line)
{
while (GLenum error = glGetError())
{
std::cout << "\n[OpenGL Error " << error << "]: " << functionLog << " : " << sourceFile << " : " << line << std::endl;
return false;
}
return true;
}
void GLClearError()
{
while (glGetError() != GL_NO_ERROR);
}
And here is the actual shader file containing a simple vertex and fragment shader I am parsing: // Posted this to demonstrate shader file layout.
#shader vertex
#version 330 core
layout(location = 0) in vec4 position;
void main()
{
gl_Position = position;
}
#shader fragment
#version 330 core
layout(location = 0) out vec4 color;
uniform vec4 u_Color;
uniform vec4 u_Color2;
uniform vec2 u_WindowSize;
uniform float u_SlopeBoundary;
uniform bool u_Switched;
vec2 transformedVec = vec2(gl_FragCoord.x - u_WindowSize.x / 2, gl_FragCoord.y - u_WindowSize.y / 2);
vec2 coordHat = normalize(transformedVec);
float sProductCC = dot(vec2(1, 0), coordHat);
float sProductC = dot(vec2(-1, 0), coordHat);
float angleCC = acos(sProductCC);
float angleC = acos(sProductC);
void main()
{
if (gl_FragCoord.y > u_WindowSize.y / 2)
{
if (!u_Switched)
{
if (angleCC < u_SlopeBoundary)
{
color = u_Color;
}
else
{
color = u_Color2;
}
}
else if (angleCC > u_SlopeBoundary)
{
color = u_Color;
}
else
{
color = u_Color2;
}
}
else if (!u_Switched)
{
if (angleC < u_SlopeBoundary)
{
color = u_Color2;
}
else
{
color = u_Color;
}
}
else if (angleC > u_SlopeBoundary)
{
color = u_Color2;
}
else
{
color = u_Color;
}
}
It's a bit of a long one. But excluding the implementation usage in Application.cpp and all the small helper translation units I've posted, the main focus is on the files Uniform, Shader and GLProgram.
It has been fun writing this as a newbie, but it would be way more fun if I can improve from it. Anyone that would take the time to read this and post some feedback would truly be a sincere help to me.
Answer: Shader
Shader::GetSource should probably return a copy of the string (rather than a reference to it). This will prevent any caller from having access to m_Source, and prevent a possible change in value if the returned reference is stored in a reference and m_Source changes after the call.
Using the conditional operator for a bool expression to return true or false is just adding to the size of the code, since a comparison operator will result in the same return value. The definition for SameInstance in Shader.h can be reduced to just
return this == &s;
You're repeating yourself in Shader::Shader. Since all the cases have the same expression for m_Source, this can be removed from the switch and placed after. And m_ShaderType is normally a copy of type, so that can be simplified as well. Finally, you're missing a break in the default case. While not required here (since the default is the last case statement), you can avoid future problems (when a new case is added after the default one) by always including it. Since this is an error condition, adding an ASSERT(0); will help spot these when you are debugging. Put this all together and you end up with:
Shader::Shader(const unsigned int type, const std::string& filepath): m_ShaderType(type)
{
switch (type)
{
case GL_VERTEX_SHADER:
case GL_FRAGMENT_SHADER:
break;
default:
ASSERT(0);
std::cout << "Unrecognized shader type. Defaulting to GL_VERTEX_SHADER..." << std::endl;
m_ShaderType = GL_VERTEX_SHADER;
break;
}
m_Source = Parse(m_ShaderType, filepath);
GLCall(m_RendererID = glCreateShader(m_ShaderType));
Compile();
}
While we're on the topic of ASSERT, your ASSERT macro has a problem when used in an if statement, where an else can bind improperly. You should change it into a standalone statement:
#define ASSERT(x) do { if(!(x)) __debugbreak(); } while (0)
When not building debug, it should expand to nothing - not even a semicolon.
Why do you pass type into Shader::Parse? The Shader object already has a type that gives you that info.
Since state is already a bool value, Shader::Compile and Recompile do not need to use the conditional expression: m_Attachable = state;
Uniform
Uniform::GetName: See notes above for Shader::GetSource.
Uniform::GetData checks for a negative m_Type, rather than checking for the explicit INVALID value. This should either just check for m_Type == INVALID, or if you're going to assume that all negative values are invalid, it should also check against the largest legal value as well (which can be added to the UniformType enum).
Uniform::Uniform has lots of code duplication. Start with always assigning m_Type = type (or m_Type(type) in the member initializer list). The same for m_UName and m_Transpose. You should consider always allocating m_Data as an array (m_Data = new float[1]) which will simplify destruction. There's no default error case. This leaves m_Data uninitialized and results in Undefined Behavior (likely a crash) in the destructor. You could make a private CreateData member function to both allocate space then call ChangeData directly with the values (size) you already know, so that the allocator and copier use a single value, rather than having it in multiple places in the code.
In the destructor, you use constants rather than your UniformType enum values. You should always use the enum values. Also many of your deletes are wrong. When you call new float[2], you must use delete []. This is why allocating a one-element array in the constructor is beneficial. This should use a switch statement instead of the cascading if statements.
SetData is missing a break for the INT2 case.
GLProgram
Consider using std::vector<Shader> rather than passing a pointer and count to the GLProgram constructor and other functions. Some of the member functions could be made const.
Should the destructor clean up various allocated memories (m_UniformLocations)?
In GLProgram::Detach, can there be multiple shaders with the same Instance that need to be detached? If not, break from the for loop after removing the shader. Otherwise you'll access past-the-end of the m_AttachedShaders vector because you remove one but don't reduce the internal count you're using to control the for loop. You could also use iterators rather than an index to search for the instance. Or use std::find with an appropriate comparator.
GLProgram::Reattach can use the for (auto s: m_AttachedShaders) form of the for statement to process all the shaders. LinkProgram would also benefit from this. Should Reattach be calling LinkProgram?
The comment with AttachUniform seems to be incorrect.
DeleteUniform has Undefined Behavior because you modify the m_Uniforms vector while looping with it. Use std::find, break from the loop when you find one, or use iterators if there can be multiple uniforms with the same name that need to be deleted.
GetUniformLocations frees m_UniformLocations but does not NULL out the pointer, nor does it store the pointer to the newly allocated memory. When allocating memory for ptr, why are you using malloc and not new? You can use ptr[i] rather than *(ptr + i) to access elements in the for loop.
ParseUniform should use another switch statement rather than creating a bunch of bool variables based on constants (when they should be using the enum values). For any given call, you're only going to be making one GLCall call, so just put that in a switch and don't use multiple nested if statements. | {
"domain": "codereview.stackexchange",
"id": 31310,
"tags": "c++, object-oriented, opengl"
} |
Reversing a string - two approaches | Question: Test Case:
Input- "I'm hungry!"
Output- "!yrgnuh m'I"
Approach 1: In this approach, I used an empty string and appended the input string's characters to it in reverse.
public static class ReverseString {
public static string Reverse (string input) {
//bind the string to an empty string reversly
var reversedString = "";
//check if the input is empty
if (input == "")
{
return "";
}
else
{
for (int i = input.Length - 1; i >= 0; i--)
{
reversedString += input[i];
}
return reversedString;
}
}
}
Approach 2: In this approach, I've created an empty char array which has the same length as the string. Then I've copied the value of the string's last index to the char array's first index and so on.
public static class ReverseString {
public static string Reverse (string input) {
char[] chars = new char[input.Length];
for(int i = 0, j = input.Length -1; i <= j; i++, j--)
{
chars[i] = input[j];
chars[j] = input[i];
}
return new string(chars);
}
}
There are lots of approaches like this (without using a built-in library). But I wonder which one is the most recommended among programmers, preferably C# programmers. Which one do you recommend and why?
Answer: Performance
I didn't believe that the LINQ method could be faster, and I would never trust a profiler to give an accurate result (for numerous reasons), so I ran a benchmark with BenchmarkDotNet, and got the opposite result from tinstaafl. (Code in a gist)
Here are the results. Linq is as tinstaafl's, StringBuilder is as Joe C's, Char2 is as OP's second method, Char1a and Char1b are variations of what I would have suggested off-hand. On this machine (old i7), under .NET Core 2.1, in a dedicated benchmark, the OP's code was significantly faster than the Linq and StringBuilder methods. (Results may be very different under .NET Framework)
Method | TestString | Mean | Error | StdDev |
---------------------- |---------------------- |-------------:|-----------:|----------:|
ReverseLinq | | 81.472 ns | 0.1537 ns | 0.1284 ns |
ReverseChar1a | | 7.946 ns | 0.1156 ns | 0.1081 ns |
ReverseChar1b | | 7.518 ns | 0.0177 ns | 0.0157 ns |
ReverseChar2 | | 7.507 ns | 0.0232 ns | 0.0206 ns |
ReverseStringBuilders | | 12.894 ns | 0.1740 ns | 0.1542 ns |
ReverseLinq | It's (...)ow it [39] | 671.946 ns | 1.9982 ns | 1.8691 ns |
ReverseChar1a | It's (...)ow it [39] | 61.711 ns | 0.0774 ns | 0.0604 ns |
ReverseChar1b | It's (...)ow it [39] | 61.952 ns | 0.2241 ns | 0.1986 ns |
ReverseChar2 | It's (...)ow it [39] | 48.417 ns | 0.0877 ns | 0.0732 ns |
ReverseStringBuilders | It's (...)ow it [39] | 203.733 ns | 0.7540 ns | 0.6684 ns |
ReverseLinq | Magpies | 235.176 ns | 0.5324 ns | 0.4446 ns |
ReverseChar1a | Magpies | 23.412 ns | 0.0979 ns | 0.0916 ns |
ReverseChar1b | Magpies | 24.032 ns | 0.0582 ns | 0.0544 ns |
ReverseChar2 | Magpies | 22.401 ns | 0.1193 ns | 0.0996 ns |
ReverseStringBuilders | Magpies | 44.056 ns | 0.1313 ns | 0.1097 ns |
ReverseLinq | ifhia(...) oiha [432] | 4,102.307 ns | 10.4197 ns | 9.2368 ns |
ReverseChar1a | ifhia(...) oiha [432] | 454.764 ns | 1.0899 ns | 1.0195 ns |
ReverseChar1b | ifhia(...) oiha [432] | 453.764 ns | 2.3080 ns | 2.0460 ns |
ReverseChar2 | ifhia(...) oiha [432] | 400.077 ns | 1.0022 ns | 0.7824 ns |
ReverseStringBuilders | ifhia(...) oiha [432] | 1,630.961 ns | 6.1210 ns | 5.4261 ns |
Note: never used BenchmarkDotNet before... hopefully I've not misused/misunderstood it in any way (please comment if I have), and hopefully it is good at its job.
Commentary
Performance is not everything. The LINQ method is the most compact, and the hardest to get wrong, which is very good. However, if performance is important, then you need to profile the method as realistically as possible. The results above may not generalise. However, I'd be very surprised if the StringBuilder and Linq methods out-performed any of the char-array based methods ever, because they just incur a fair amount of overhead (i.e. probably a dynamic array, and probably a second copy in the LINQ case (not to mention the general enumeration overhead)).
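A related correctness point, sketched in Python rather than C# purely for brevity (a hedged illustration, not production code): every method above reverses code points, which misplaces combining characters. A simplistic grapheme-aware reversal keeps each combining mark with its base letter (it ignores ZWJ emoji sequences and other complex clusters):

```python
import unicodedata

def reverse_graphemes(s):
    # Group each base character with the combining marks that follow it,
    # then reverse the groups instead of the raw code points.
    clusters = []
    for ch in s:
        if unicodedata.combining(ch) and clusters:
            clusters[-1] += ch
        else:
            clusters.append(ch)
    return "".join(reversed(clusters))

s = "cafe\u0301"                              # "café" spelled with a combining acute accent
naive = s[::-1]                               # code-point reversal: the accent detaches from 'e'
print(naive.startswith("\u0301"))             # True -- the accent now leads the string
print(reverse_graphemes(s) == "e\u0301fac")   # True -- the accent stays on its base letter
```

The same hazard exists in C# with surrogate pairs and combining characters, which is why the contract matters more than the internals here.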
Personally, I have no issue with your second piece of code. It may not be the most obvious implementation ever, but it doesn't take long to work out, and it's not a method whose job is going to change any time soon, so I'd worry much more about its API than its internals. That said, the API is a problem, as Adriano Repetti has mentioned: the behaviour of this method is going to create problems as soon as you start trying to reverse non-trivial Unicode. Simply, 'reverses a string' is a deficient contract. | {
"domain": "codereview.stackexchange",
"id": 33289,
"tags": "c#, performance, strings, programming-challenge, comparative-review"
} |
Updating libgdx game framework library files | Question: I wrote a script that updates some library files for the game framework libgdx by grabbing the latest nightly build .zip file from a server and extracting the contents to the appropriate locations.
#!/usr/bin/python
__appname__ = 'libgdx_library_updater'
__version__ = "0.1"
__author__ = "Jon Renner <rennerjc@gmail.com>"
__url__ = "http://github.com/jrenner/libgdx-updater"
__licence__ = "MIT"
import os, time, sys, urllib2, re, datetime, tempfile, zipfile, argparse
# error handling functions and utils
def fatal_error(msg):
print "ERROR: %s" % msg
sys.exit(1)
def warning_error(msg):
print "WARNING: %s" % msg
if not FORCE:
answer = confirm("abort? (Y/n): ")
if answer in YES:
fatal_error("USER QUIT")
def confirm(msg):
answer = raw_input(msg)
return answer.lower()
def human_time(t):
minutes = t / 60
seconds = t % 60
return "%.0fm %.1fs" % (minutes, seconds)
# constants
YES = ['y', 'ye', 'yes', '']
# for finding the time of the latest nightly build from the web page html
DATE_RE = r"[0-9]{1,2}-[A-Za-z]{3,4}-[0-9]{4}\s[0-9]+:[0-9]+"
REMOTE_DATE_FORMAT = "%d-%b-%Y %H:%M"
SUPPORTED_PLATFORMS = ['android', 'desktop', 'gwt']
CORE_LIBS = ["gdx.jar",
"gdx-sources.jar"]
DESKTOP_LIBS = ["gdx-backend-lwjgl.jar",
"gdx-backend-lwjgl-natives.jar",
"gdx-natives.jar"]
ANDROID_LIBS = ["gdx-backend-android.jar",
"armeabi/libgdx.so",
"armeabi/libandroidgl20.so",
"armeabi-v7a/libgdx.so",
"armeabi-v7a/libandroidgl20.so"]
GWT_LIBS = ["gdx-backend-gwt.jar"]
# parse arguments
EPILOGUE_TEXT = "%s\n%s" % (__author__, __url__) + "\nUSE AT YOUR OWN RISK!"
parser = argparse.ArgumentParser(description='LibGDX Library Updater %s' % __version__, epilog=EPILOGUE_TEXT)
parser.add_argument('-d', '--directory', help='set the libgdx project/workspace directory', default=os.getcwd())
parser.add_argument('-i', '--interactive', action='store_true', help='ask for confirmation for every file', default=False)
parser.add_argument('-f', '--force-update', action='store_true', help='no confirmations, just update without checking nightly\'s datetime', default=False)
parser.add_argument('-a', '--archive', help='specify libgdx zip file to use for update', default=None)
args = parser.parse_args()
PROJECT_DIR = args.directory
INTERACTIVE = args.interactive
FORCE = args.force_update
ARCHIVE = args.archive
# mutually exclusive
if FORCE:
INTERACTIVE = False
# check the time of the latest archive on the nightlies server
def get_remote_archive_mtime():
index_page = urllib2.urlopen("http://libgdx.badlogicgames.com/nightlies/")
contents = index_page.read()
print "-- OK --"
# regex for filename
regex = r"libgdx-nightly-latest\.zip"
# add regex for anything followed by the nighlty html time format
regex += r".*%s" % DATE_RE
try:
result = re.findall(regex, contents)[0]
except IndexError as e:
print "REGEX ERROR: failed to find '%s' in:\n%s" % (regex, contents)
fatal_error("regex failure to match")
try:
mtime = re.findall(DATE_RE, result)[0]
except IndexError as e:
print "REGEX ERROR: failed to find datetime in: %s" % result
fatal_error("regex failure to match")
dtime = datetime.datetime.strptime(mtime, REMOTE_DATE_FORMAT)
return dtime
# downloads and returns a temporary file contained the latest nightly archive
def download_libgdx_zip():
libgdx = tempfile.TemporaryFile()
url = "http://libgdx.badlogicgames.com/nightlies/libgdx-nightly-latest.zip"
# testing url - don't hammer badlogic server, host the file on localhost instead
# url = "http://localhost/libgdx-nightly-latest.zip"
resp = urllib2.urlopen(url)
print "downloading file: %s" % url
total_size = resp.info().getheader('Content-Length').strip()
total_size = int(total_size)
# base 10 SI units - following Ubuntu policy because it makes sense - https://wiki.ubuntu.com/UnitsPolicy
total_size_megabytes = total_size / 1000000.0
bytes_read = 0
chunk_size = 10000 # 10kB per chunk
while True:
chunk = resp.read(chunk_size)
libgdx.write(chunk)
bytes_read += len(chunk)
bytes_read_megabytes = bytes_read / 1000000.0
percent = (bytes_read / float(total_size)) * 100
sys.stdout.write("\rprogress: {:>8}{:.2f} / {:.2f} mB ({:.0f}% complete)".format(
"", bytes_read_megabytes, total_size_megabytes, percent))
sys.stdout.flush()
if bytes_read >= total_size:
print "finished download"
break
return libgdx
def update_files(libs, locations, archive):
for lib in libs:
if lib in archive.namelist():
if INTERACTIVE:
answer = confirm("overwrite %s? (Y/n): " % lib)
if answer not in YES:
print "skipped: %s" % lib
continue
with archive.open(lib, "r") as fin:
filename = os.path.basename(lib)
final_path = os.path.join(locations[lib], filename)
with open(final_path, "w") as fout:
fout.write(fin.read())
print "extracted to %s" % final_path
def run_core(locations, archive):
title("CORE")
update_files(CORE_LIBS, locations, archive)
def run_android(locations, archive):
title("ANDROID")
update_files(ANDROID_LIBS, locations, archive)
def run_desktop(locations, archive):
title("DESKTOP")
update_files(DESKTOP_LIBS, locations, archive)
def run_gwt(locations, archive):
title("GWT")
update_files(GWT_LIBS, locations, archive)
def search_for_lib_locations(directory):
platforms = []
search_list = CORE_LIBS + DESKTOP_LIBS + ANDROID_LIBS
locations = {}
for element in search_list:
locations[element] = None
for (this_dir, dirs, files) in os.walk(directory):
for element in search_list:
split_path = os.path.split(element)
path = os.path.split(split_path[0])[-1]
filename = split_path[1]
for f in files:
match = False
if filename == f:
f_dir = os.path.split(this_dir)[-1]
if path == "":
match = True
else:
if path == f_dir:
match = True
if match:
if locations[element] != None:
print "WARNING: found %s in more than one place!" % element
if not FORCE:
answer = confirm("continue? (Y/n): ")
if answer not in YES:
fatal_error("USER ABORT")
locations[element] = this_dir
for lib, loc in locations.items():
if loc == None:
print "WARNING: did not find library %s in directory tree of: %s" % (lib, directory)
found_libraries = [lib for lib, loc in locations.items() if locations[lib] != None]
if found_all_in_set(CORE_LIBS, found_libraries):
platforms.append("core")
if found_all_in_set(ANDROID_LIBS, found_libraries):
platforms.append("android")
if found_all_in_set(DESKTOP_LIBS, found_libraries):
platforms.append("desktop")
if found_all_in_set(GWT_LIBS, found_libraries):
platforms.append("gwt")
return platforms, locations
def found_all_in_set(lib_set, found_list):
for lib in lib_set:
if lib not in found_list:
return False
return True
def main():
start_time = time.time()
print "finding local libraries in %s" % PROJECT_DIR
platforms, locations = search_for_lib_locations(PROJECT_DIR)
if "core" not in platforms:
fatal_error("did not find CORE libraries %s in project directory tree" % str(CORE_LIBS))
else:
print "found CORE libraries"
for supported in SUPPORTED_PLATFORMS:
if supported in platforms:
print "found libraries for platform: %s" % supported.upper()
else:
print "WARNING: did not find libraries for platform: %s - WILL NOT UPDATE" % supported.upper()
if ARCHIVE == None:
print "checking latest nightly..."
mtime = get_remote_archive_mtime()
print "lastest nightly from server: %s" % mtime
if not FORCE:
answer = confirm("replace local libraries with files from latest nightly?(Y/n): ")
if answer not in YES:
fatal_error("USER QUIT")
libgdx = download_libgdx_zip()
else:
if not os.path.exists(ARCHIVE):
fatal_error("archive file not found: %s" % ARCHIVE)
if not FORCE:
answer = confirm("replace local libraries with files from '%s'?(Y/n): " % os.path.basename(ARCHIVE))
if answer not in YES:
fatal_error("USER QUIT")
libgdx = open(ARCHIVE, "r")
with zipfile.ZipFile(libgdx) as archive:
if "core" in platforms:
run_core(locations, archive)
if "desktop" in platforms:
run_desktop(locations, archive)
if "android" in platforms:
run_android(locations, archive)
if "gwt" in platforms:
run_gwt(locations, archive)
duration = time.time() - start_time
print "finished updates in %s" % human_time(duration)
libgdx.close()
def title(text):
dashes = "-" * 10
print dashes + " %s " % text + dashes
if __name__ == "__main__":
main()
Answer: Your code looks pretty good. Some notes:
According to PEP8, imports should be written in separate lines.
fatal_error: I'd probably write the signature this way: fatal_error(msg, code=1).
INTERACTIVE -> interactive. According to PEP8, global variables should be written lower-case.
That's an opinion: I prefer to write multi-line lists/dictionaries in JSON style. You save indentation space and the reordering of elements is straightforward (all at the meager cost of two lines):
DESKTOP_LIBS = [
"gdx-backend-lwjgl.jar",
"gdx-backend-lwjgl-natives.jar",
"gdx-natives.jar",
]
Functions download_libgdx_zip and search_for_lib_locations are written (unnecessarily IMO) in a very imperative fashion. I'd probably refactor it with a functional approach in mind. At least don't reuse the same variable name to hold different values (i.e. total_size), as that takes away the sacred mathematical meaning of =.
Those functions run_xyz(locations, archive) look very similar; why not a single run(platform, locations, archive)?
Function found_all_in_set can be written:
def found_all_in_set(lib_set, found_list):
return all(lib in found_list for lib in lib_set)
Or:
def found_all_in_set(lib_set, found_list):
return set(lib_set).issubset(set(found_list))
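For what it's worth, here is a small self-contained sanity check (plain Python; the jar names are just sample data) that the explicit loop, the all() version, and a set-based version agree:

```python
def found_all_loop(lib_set, found_list):
    # original explicit-loop version
    for lib in lib_set:
        if lib not in found_list:
            return False
    return True

def found_all_any(lib_set, found_list):
    # all() with a generator expression
    return all(lib in found_list for lib in lib_set)

def found_all_subset(lib_set, found_list):
    # set-based version
    return set(lib_set).issubset(set(found_list))

cases = [
    (["gdx.jar", "gdx-sources.jar"], ["gdx.jar", "gdx-sources.jar", "extra.jar"]),
    (["gdx.jar", "gdx-sources.jar"], ["gdx.jar"]),
    ([], ["anything"]),
]
for libs, found in cases:
    assert found_all_loop(libs, found) == found_all_any(libs, found) == found_all_subset(libs, found)
print("all three implementations agree")
```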
if ARCHIVE == None: -> if ARCHIVE is None: although I prefer the (almost) equivalent, more declarative if not ARCHIVE:.
There are a lot of imperative snippets that could be written functionally, for example this simple list-comprehension replaces a dozen lines from your code:
platforms = [platform for (platform, libs) in zip(platforms, libs_list)
if found_all_in_set(libs, found_libraries)] | {
"domain": "codereview.stackexchange",
"id": 4028,
"tags": "python, library, libgdx"
} |
Choosing the right point when calculating moments | Question: A body is composed of two straight pins that are joined at a right angle. They have lengths $a$ and $b$ and the mass per unit length is $\rho$. When the body is balanced on a flat surface, as shown, how large is the normal force against the ground at the right point of contact? 4 options as can be seen in the picture.
Let me point out that this is a conceptual question. When I first tried to solve this problem I decided to calculate the moment around the left contact point in order to eliminate one term (the left normal force). This seems like the natural way but gives a false answer ($N_2 = \rho g$). If I instead calculate about the vertex of the triangle and use Newton's second law, I get the correct solution (the answer is D).
So how should I choose the "correct" point?
Around left contact point:
By dropping an altitude h at the right angle we get that:
Notice that: $\cos\alpha=\frac{a}{\sqrt{a^2+b^2}}$, $\cos\beta=\frac{b}{\sqrt{a^2+b^2}}$ and $m_ag=\rho ag$ and $m_bg=\rho bg$
$-\frac{\rho a^2g}{\sqrt{a^2+b^2}}-\frac{\rho b^2g}{\sqrt{a^2+b^2}}+N_2\sqrt{a^2+b^2}=0$. Simplifying gives $N_2=\rho g$
Answer: There is no "correct" point. Using any point will give you the same answer. If you didn't get the same answer using both points then you've done something wrong.
Hint: you can treat the gravitational force as though it were acting at the shape's center of mass. | {
"domain": "physics.stackexchange",
"id": 6748,
"tags": "homework-and-exercises, forces, statics, free-body-diagram"
} |
Distributed PCA or an equivalent | Question: We normally have fairly large datasets to model on, just to give you an idea:
over 1M features (sparse, average population of features is around
12%);
over 60M rows.
A lot of modeling algorithms and tools don't scale to such wide datasets.
So we're looking for a dimensionality reduction implementation that runs distributed (i.e. on Spark/Hadoop/etc). We are thinking of bringing the number of features down to several thousand.
Since PCA operates on matrix multiplication, which doesn't distribute very well over a cluster of servers, we're looking at other algorithms or, probably, other implementations of distributed dimensionality reduction.
Anyone ran into similar issues? What do you do to solve this?
There is a Cornell/Stanford paper on "Generalized Low-Rank Models" (http://web.stanford.edu/~boyd/papers/pdf/glrm.pdf) that speaks directly to this:
page 8, "Parallelizing alternating minimization", explains how it can be distributed;
page 9, "Missing data and matrix completion", explains how sparse/missing data can be handled.
Although GLRM seems to be what we are looking for, we can't find good actual implementations of those ideas.
Somebody wrote a version for Spark (e.g.
https://github.com/rezazadeh/spark/tree/glrm/examples/src/main/scala/org/apache/spark/examples/glrm
but it looks more like a proof of concept than a version that
we could use in production);
H2O has a version of GLRM
http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/glrm.html
but it actually doesn't scale to our dataset size (see above).
Update 7/15/2018: Another paper is Fast Randomized SVD from Facebook (read here: http://tygert.com/spark.pdf), and there is also the idea of doing low-rank matrix approximation using ALS - http://tygert.com/als.pdf. Although there is no clear way to use them yet - see the discussion at https://github.com/facebook/fbpca/issues/6
Any other ideas how to tackle this? Other available GLRM or other distributed dimensionality reduction implementations?
Answer: From the problem description, what strikes me as most relevant is the X-wing-like (bottleneck) autoencoder. Basically you have two neural nets, the encoder and the decoder, each of which could use any of the popular architectures: fully connected, convolutional and pooling layers, or even sequential units like LSTM/GRU. If the encoding dimension is much smaller than the original one, it can be used as a lower-dimensional representation of the input. The decoder is used to retrieve the original dimension/information. There are many types of autoencoders, but for this use case you can take a look at sparse and denoising autoencoders. You can read more about autoencoders in the deep learning book:
https://www.deeplearningbook.org/contents/autoencoders.html
I don't really understand why you definitely need to do the training process distributed, but even for that there are distributed implementations of Tensorflow, so you could do some research in the Tensorflow docs. Also, if you want to learn a new framework, Uber's Horovod is a distributed framework for writing Tensorflow solutions: https://github.com/uber/horovod
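To make this concrete at toy scale, here is a deliberately tiny, pure-Python sketch (an illustration only — the data, single hidden unit, learning rate, and step count are all made up, and at 60M rows you would use the distributed frameworks above): a one-unit linear autoencoder with tied weights learns a 1-D code for 2-D points lying on a line, i.e. it recovers the underlying low-dimensional structure:

```python
import math

# Toy linear autoencoder with a single hidden unit and tied weights:
# code = w . x (2-D -> 1-D), reconstruction = code * w (1-D -> 2-D).
# Trained by plain gradient descent with numerical gradients (slow but transparent).
data = [(t, 2.0 * t) for t in [i / 10.0 - 1.0 for i in range(21)]]  # points on the line y = 2x

def loss(w):
    total = 0.0
    for x in data:
        c = w[0] * x[0] + w[1] * x[1]      # encode
        rx = (c * w[0], c * w[1])          # decode
        total += (x[0] - rx[0]) ** 2 + (x[1] - rx[1]) ** 2
    return total / len(data)

w = [1.0, 0.0]                              # deliberately misaligned start
initial = loss(w)
eps, lr = 1e-6, 0.02
for _ in range(5000):
    base = loss(w)
    grad = []
    for j in range(2):
        wp = list(w)
        wp[j] += eps
        grad.append((loss(wp) - base) / eps)
    w = [w[j] - lr * grad[j] for j in range(2)]

final = loss(w)
norm = math.hypot(w[0], w[1])
print("loss: %.4f -> %.4f, learned direction: (%.3f, %.3f)"
      % (initial, final, w[0] / norm, w[1] / norm))
```

The learned direction ends up (up to sign) along the data's principal direction — a tied-weight linear autoencoder recovers the principal subspace, which is why autoencoders are a natural nonlinear generalisation of PCA.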
One last comment that I would like to make is about the underlying dimension of the data. You mentioned that an acceptable dimension would be in the thousands. In my experience, sparse data reside on much lower-dimensional manifolds, so I would suggest treating the encoding dimension as a hyper-parameter and optimizing for the corresponding loss function. | {
"domain": "datascience.stackexchange",
"id": 3491,
"tags": "dimensionality-reduction, pca, distributed, matrix-factorisation"
} |
Is the SN1 reaction faster with an axial or equatorial leaving group? | Question:
Why does compound 1 undergo the $\mathrm{S_N1}$ reaction faster than 2 even though both proceed via the same carbocation intermediate?
Answer:
In compound 2, both substituents can be placed in equatorial positions, whereas in 1 the $\ce{Cl}$ group is forced into an axial position since the bulky t-butyl group has to be placed equatorial. This makes compound 1 more unstable (higher energy), leading to a faster rate of formation of the carbocation. | {
"domain": "chemistry.stackexchange",
"id": 6361,
"tags": "organic-chemistry, conformers, cyclohexane"
} |
Kinect on Jetson TK1 | Question:
I cannot get Kinect (PC edition) to be detected with ROS (indigo) freenect_launch.
The freenect_glview packaged by Ubuntu (using the grinch kernel) does give me Kinect image and depth information, but the ROS glview does not find the device.
Bus 002 Device 009: ID 045e:02bf Microsoft Corp.
Bus 002 Device 008: ID 045e:02be Microsoft Corp.
Bus 002 Device 005: ID 045e:02c2 Microsoft Corp.
Thanks for hints.
Originally posted by cc_smart on ROS Answers with karma: 3 on 2014-10-16
Post score: 0
Answer:
It looks like the version of libfreenect that is included in ROS is currently too old to support the Kinect4Windows.
There is an open ticket about this.
Originally posted by ahendrix with karma: 47576 on 2014-10-20
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by cc_smart on 2014-10-20:
Thanks for reply !
Unfortunately neither of these worked for me, but if I understand correctly, Kinect does work for you in the Jetson ?
I'm wondering if the USB id values do not match "expectations" of the library coming with ROS ? What Kinect edition / USB ID values do you use / see ?
Thanks again
Comment by ahendrix on 2014-10-20:
Yes, I have an original Kinect.
Comment by cc_smart on 2014-10-20:
If the note regarding freenect version and recommendation for bugtracker ticket was pulled out, I believe I can give that the "correct answer" mark.
I believe this can added as bugtracker link: Ticket | {
"domain": "robotics.stackexchange",
"id": 19755,
"tags": "robotic-arm, ros, kinect, freenect, jetson"
} |
How can molecular mass be calculated from vapour density? | Question:
An organic compound contains 71.7% carbon, 6.7% hydrogen, 10.4% nitrogen and 11.8% oxygen by mass. Given that its vapour density is 67.5, determine its empirical formula and molecular formula.
From the mass percentages, I have found out that the empirical formula is $\ce{C8H9NO}$. However, to find the molecular formula we usually require the molecular mass.
What does "vapour density" mean, and how can I use it to calculate the molecular mass of the compound?
Answer: According to Wikipedia, the vapour density of a molecule is "the density of a vapour in relation to that of hydrogen". The density of a gas, $\rho$, is proportional to its molecular mass, $M$:
$$\rho = \frac{m}{V} \propto \frac{m}{n} = M$$
where $m$ is the mass of the gas, $V$ is the volume, and $n$ the amount of gas. Therefore, if we denote your compound by $\ce{X}$:
$$\begin{align}
67.5 &= \frac{\rho(\ce{X})}{\rho(\ce{H2})} \\
&= \frac{M(\ce{X})}{M(\ce{H2)}} \\
M(\ce{X}) &= 67.5 \cdot M(\ce{H2}) \\
&= 67.5 \cdot \pu{2 g mol-1} \\
&= \pu{135 g mol-1}
\end{align}$$
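As a quick arithmetic check of the numbers above (a small Python sketch using atomic masses rounded to whole numbers):

```python
# Molecular mass from vapour density, then compare with the empirical-formula mass.
vapour_density = 67.5
M = 2 * vapour_density          # M(X) = vapour density x M(H2), with M(H2) ~ 2 g/mol

atomic_mass = {"C": 12.0, "H": 1.0, "N": 14.0, "O": 16.0}   # rounded values
empirical = {"C": 8, "H": 9, "N": 1, "O": 1}                # C8H9NO
empirical_mass = sum(atomic_mass[e] * n for e, n in empirical.items())

multiplier = round(M / empirical_mass)
print(M, empirical_mass, multiplier)   # 135.0 135.0 1
```

The multiplier of 1 confirms that the molecular formula is the same as the empirical formula.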
Finally, you can use this piece of information to find that the molecular formula is $\ce{C8H9NO}$ as well. | {
"domain": "chemistry.stackexchange",
"id": 1197,
"tags": "stoichiometry, terminology, density"
} |
Install turtlebot3 | Question:
Hello guys!!
I received this error when installing turtlebot3_simulations:
CMake Warning at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:76 (find_package):
Could not find a package configuration file provided by "gazebo_ros" with
any of the following names:
gazebo_rosConfig.cmake
gazebo_ros-config.cmake
Originally posted by otluix on ROS Answers with karma: 1 on 2018-04-16
Post score: 0
Original comments
Comment by jayess on 2018-04-16:
Welcome! What command did you run to get this error?
Answer:
Did you install the package you mentioned? If you want to install the package you mentioned, you can simply install it with the following command.
sudo apt-get install ros-kinetic-gazebo-ros
Also, when building packages from source, It is recommended to use the rosdep command. This command installs all the packages that the packages in your catkin workspace depend upon but are missing on your computer.
rosdep install --from-paths /path/to/your/catkin_ws/src --ignore-src -r -y
Originally posted by Pyo with karma: 358 on 2018-04-16
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 30661,
"tags": "ros-kinetic, turtlebot3"
} |
Are black panthers the same as jaguars? | Question: I googled the scientific name for black panthers and got Panthera onca. I googled Panthera onca and got jaguar. Can someone explain what is going on? Are they the same? What are the real scientific names for them? Because panthers are black and jaguars are yellow with black spots.
Answer: Black panther refers to melanistic examples of either leopards (Panthera pardus):
or jaguars (Panthera onca):
It is not typically used to refer to (the rare/historic/unconfirmed) examples of melanism or pseudo-melanism in other species of Panthera (e.g. black tigers). | {
"domain": "biology.stackexchange",
"id": 10547,
"tags": "species"
} |
Consequences of a distillation algorithm for PSPACE | Question: The following notion of a distillation algorithm comes from "On Problems Without Polynomial Kernels".
Let a language $L$ be given. A distillation algorithm for $L$ takes a given
list of input strings $\{ x_i \}_{i \in [t]}$ and computes an output string
$y$ such that:
(1) $y \in L$ if and only if there exists $i \in [t]$ such that $x_i \in L$
(2) $\vert y \vert \leq p(\max_{i\in[t]} \vert x_i \vert)$ for some polynomial $p$
(3) The algorithm computes $y$ in at most $q(\sum_{i\in[t]}\vert x_i \vert)$ time for some polynomial $q$
It has been shown that if there exists a distillation algorithm for an $NP$-complete problem, then $coNP \subseteq NP/poly$. Moreover, $PH = \Sigma_3$.
See details and discussion in:
"Infeasibility of Instance Compression and Succinct PCPs for NP"
"On Problems Without Polynomial Kernels"
"Lower bounds on kernelization"
Questions:
Could there exist a distillation algorithm for a $PSPACE$-complete
problem?
If such an algorithm existed, what complexity consequences would we
get?
Answer: Theorem 15.3 of the recent "Parameterized Algorithms" textbook by Cygan et al. states the following:
"Let $L, R ⊆ \Sigma^*$ be two languages. If there exists an OR-distillation of L into R, then $L\in coNP / poly$"
So, I think that if there exists an OR-distillation from a PSPACE-complete language $L$ to itself, then $PSPACE \subseteq coNP/poly$, i.e. not only does the polynomial-hierarchy collapse, but also PSPACE collapses with it. | {
"domain": "cstheory.stackexchange",
"id": 3856,
"tags": "cc.complexity-theory, np-hardness, parameterized-complexity, polynomial-hierarchy, pspace"
} |
How to search a specific sequence in BAM files for 10X experiment | Question: I have to search for a specific sequence in a set of cells (> 1k cells) from a single-cell experiment done with 10X Genomics.
As input files I have a single BAM file and 24 fastqs, and each file contains information about several cells. How can I search for a specific sequence and pinpoint which single cells have it?
I should be able to search for a specific sequence in a BAM using samtools, etc. But the issue here is how to demultiplex the data from different cells to understand which cell provided the sequence.
Answer: The BAM file has tags that say which reads belong to which cells.
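If you want to script the lookup without extra dependencies, here is a hedged Python sketch. It assumes the BAM has first been converted to SAM text (e.g. with samtools view in.bam) and that reads carry the Cell Ranger cell-barcode tag CB:Z: (see the link below); the function and sample records are illustrative only — in practice you would likely use pysam on the BAM directly:

```python
# Hedged sketch: collect the cell barcodes of reads whose sequence contains a target.
def cells_with_sequence(sam_lines, target):
    cells = set()
    for line in sam_lines:
        if line.startswith("@"):          # skip SAM header lines
            continue
        fields = line.rstrip("\n").split("\t")
        seq = fields[9]                   # SEQ is the 10th mandatory SAM field
        if target in seq:
            for tag in fields[11:]:       # optional tags follow the 11 mandatory fields
                if tag.startswith("CB:Z:"):
                    cells.add(tag[5:])
    return cells

# Two made-up records carrying (hypothetical) 10x cell barcodes in CB:Z: tags.
sam = [
    "@HD\tVN:1.6",
    "r1\t0\tchr1\t100\t60\t8M\t*\t0\t0\tACGTACGT\tFFFFFFFF\tCB:Z:AAACCTGA-1",
    "r2\t0\tchr1\t200\t60\t8M\t*\t0\t0\tTTTTGGGG\tFFFFFFFF\tCB:Z:GGGCATTT-1",
]
print(cells_with_sequence(sam, "GTAC"))   # {'AAACCTGA-1'}
```

Note this only matches the stored read sequence verbatim; reverse-complement matching and soft-clipped bases would need extra handling.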
https://support.10xgenomics.com/single-cell-gene-expression/software/pipelines/latest/output/bam | {
"domain": "bioinformatics.stackexchange",
"id": 584,
"tags": "rna-seq, scrnaseq, 10x-genomics"
} |
Klein-Gordon Charge Acting Like Angular Momentum | Question: This post is referring to the solution of exercise 2.2(d) of Peskin and Schroeder's An Introduction to Quantum Field Theory. Once again, I am using the manual provided by Dr. Zhong-Zhi Xianyu. Equation (2.27) in the manual provides the value of $[Q^a, Q^b]$ in $SU(N)$. I am currently working in $SU(2)$ whose generators are the pauli sigma matrices. I was able to get the third to last line of Equation (2.27) which I present below (the differences between this and the one presented in the manual are that the matrices presented here are the Pauli matrices and I dropped the Dirac delta):
$$\int \frac{d^3p}{(2\pi)^3}\bigg[\alpha_{ip}^\dagger\sigma_{ij}^a \sigma_{jl}^b \alpha_{lp} - \alpha_{kp}^\dagger \sigma_{kl}^b \sigma_{lj}^a \alpha_{jp} + \beta_{ip}^\dagger\sigma_{ij}^a \sigma_{jl}^b \beta_{lp} - \beta_{kp}^\dagger \sigma_{kl}^b \sigma_{lj}^a \beta_{jp}\bigg]$$
The second to last line of equation (2.27) says that this is equal to
$$i\epsilon^{abc}\int \frac{d^3p}{(2\pi)^3} \bigg[\alpha_{ip}^\dagger \sigma^c_{il} \alpha_{lp} - \beta_{kp}^\dagger \sigma_{kj}^c \beta_{jp} \bigg]$$
To try to prove this, I used the identity $\sigma^a \sigma^b = \delta_{ab}1 + i\epsilon_{abc}\sigma^c$. From this I see $\sigma_i^a \cdot \sigma_l^b = \delta_{ab}\delta_{il}1 + i\epsilon_{abc}\sigma^c_{il}$ and $\sigma_k^a \cdot \sigma_j^b = \delta_{kj}\delta_{ab}1 + i\epsilon_{abc}\sigma^c_{kj}$.
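The identity itself is easy to sanity-check numerically; a quick sketch (plain Python with explicit 2x2 complex matrices, no external libraries, not part of the derivation):

```python
# Numerically verify sigma^a sigma^b = delta_ab * I + i * eps_abc * sigma^c
I2 = [[1, 0], [0, 1]]
sigma = [
    [[0, 1], [1, 0]],       # sigma^1
    [[0, -1j], [1j, 0]],    # sigma^2
    [[1, 0], [0, -1]],      # sigma^3
]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def eps(a, b, c):
    """Levi-Civita symbol on indices 0, 1, 2."""
    return ((a - b) * (b - c) * (c - a)) // 2

for a in range(3):
    for b in range(3):
        lhs = matmul(sigma[a], sigma[b])
        rhs = [[(a == b) * I2[i][j]
                + sum(1j * eps(a, b, c) * sigma[c][i][j] for c in range(3))
                for j in range(2)] for i in range(2)]
        assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
                   for i in range(2) for j in range(2))
print("identity holds for all a, b")
```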
Thus the first integrand I presented becomes $$(\alpha_{ip}^\dagger \alpha_{lp})(\delta_{ab}\delta_{il}1 + i\epsilon_{abc}\sigma^c_{il}) - (\alpha_{kp}^\dagger \alpha_{jp})(\delta_{kj}\delta_{ab}1 + i\epsilon_{bac}\sigma^c_{kj}) + (\beta_{ip}^\dagger \beta_{lp})(\delta_{il}\delta_{ab}1 + i\epsilon_{abc}\sigma^c_{il}) - (\beta_{kp}^\dagger \beta_{jp})(\delta_{ab}\delta_{kj}1 + i\epsilon_{bac}\sigma^c_{kj})$$
Now if $a = b$, I see that the $\epsilon_{abc}$'s become $0$, and so I obtain $$\alpha_{ip}^\dagger \alpha_{lp}\delta_{il} - \alpha_{kp}^\dagger \alpha_{jp}\delta_{kj} + \beta_{ip}^\dagger \beta_{lp}\delta_{il} - \beta_{kp}^\dagger \beta_{jp}\delta_{kj}.$$ Now the j's, k's, i's, and l's are dummy indices, since they are used for summation, and so we see that the first pair cancel and the second pair cancel. Thus the integrand is 0 (Is this logic correct?)
Now suppose $a \neq b$. Then the deltas are 0 and I obtain:
$$(\alpha_{ip}^\dagger \alpha_{lp})(i\epsilon_{abc}\sigma^c_{il}) - (\alpha_{kp}^\dagger \alpha_{jp})(i\epsilon_{bac}\sigma^c_{kj}) + (\beta_{ip}^\dagger \beta_{lp})(i\epsilon_{abc}\sigma^c_{il}) - (\beta_{kp}^\dagger \beta_{jp})(i\epsilon_{bac}\sigma^c_{kj})$$
Now, the second and third terms do not cancel, and I am unable to recover the desired result. Can anyone show me what I did wrong? I believe I made all the index adjustments.
Answer: Here is the answer to my question, which I got with the help of @Connor Behan. The $a = b$ case is shown in the question post. For the $a\neq b$ case we have
$(\alpha_{ip}^\dagger \alpha_{lp})(i\epsilon_{abc}\sigma^c_{il}) - (\alpha_{kp}^\dagger \alpha_{jp})(i\epsilon_{bac}\sigma^c_{kj}) + (\beta_{ip}^\dagger \beta_{lp})(i\epsilon_{abc}\sigma^c_{il}) - (\beta_{kp}^\dagger \beta_{jp})(i\epsilon_{bac}\sigma^c_{kj}) \\= (\alpha_{ip}^\dagger \alpha_{lp})(i\epsilon_{abc}\sigma^c_{il}) + (\alpha_{kp}^\dagger \alpha_{jp})(i\epsilon_{abc}\sigma^c_{kj}) + (\beta_{ip}^\dagger \beta_{lp})(i\epsilon_{abc}\sigma^c_{il}) + (\beta_{kp}^\dagger \beta_{jp})(i\epsilon_{abc}\sigma^c_{kj}) $
Noting that the $i$'s, $l$'s, $j$'s, and $k$'s are dummy indices, we can combine the first two terms and the last two terms. | {
"domain": "physics.stackexchange",
"id": 80880,
"tags": "quantum-field-theory, operators, angular-momentum, klein-gordon-equation"
} |
Error-handling #ifdefs for AFNetworking requests | Question: I am using AFNetworking 1.4.3 to send and receive network messages in iOS. My application works slightly differently in DEBUG and RELEASE mode, so I need to use #ifdef clauses. How can I simplify the following fragments without making these macros:
#ifdef DEBUG
if (failure) failure([NSError fromDict:responseObject]);
#else
if (failure) failure([NSError networkError]);
#endif
#ifdef DEBUG
if (failure) failure(error);
#else
if (failure) failure([NSError networkError]);
#endif
Whole code:
@implementation ELHTTPClient
- (instancetype)initWithBaseURL:(NSURL *)url {
self = [super initWithBaseURL:url];
if (self) {
__weak ELHTTPClient *weakSelf = self;
[self setParameterEncoding:AFJSONParameterEncoding];
[self registerHTTPOperationClass:[AFJSONRequestOperation class]];
[self setDefaultHeader:@"Accept" value:@"application/json"];
[self setReachabilityStatusChangeBlock:^(AFNetworkReachabilityStatus status) {
[weakSelf.delegate httpClient:weakSelf connectionStateChanged:status];
}];
}
return self;
}
- (void)myPostPath:(NSString *)path parameters:(NSDictionary *)parameters success:(void (^)(NSDictionary *))success failure:(ELErrorBlock)failure {
[self postPath:path parameters:parameters success:^(AFHTTPRequestOperation *operation, id responseObject) {
if ([responseObject isResponseValid]) {
if (success) success(responseObject);
}
else {
#ifdef DEBUG
if (failure) failure([NSError fromDict:responseObject]);
#else
if (failure) failure([NSError networkError]);
#endif
}
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
// [self.delegate httpClient:self networkError:error];
// NSLog(@"Network error. Operation: %@, \n Error: %@", operation, error);
#ifdef DEBUG
if (failure) failure(error);
#else
if (failure) failure([NSError networkError]);
#endif
}];
}
- (void)myGetPath:(NSString *)path parameters:(NSDictionary *)parameters
success:(void (^)(NSDictionary *dict))success
failure:(ELErrorBlock)failure {
[self getPath:path parameters:parameters success:^(AFHTTPRequestOperation *operation, id responseObject) {
if ([responseObject isResponseValid]) {
if (success) success(responseObject);
}
else {
#ifdef DEBUG
if (failure) failure([NSError fromDict:responseObject]);
#else
if (failure) failure([NSError networkError]);
#endif
// NSError *error = [self.serializer errorFromDictionary:responseObject];
// [self.delegate httpClient:self requestFailedError:error];
}
}failure:^(AFHTTPRequestOperation *operation, NSError *error) {
#ifdef DEBUG
if (failure) failure(error);
#else
if (failure) failure([NSError networkError]);
#endif
}];
}
@end
My implementation of NSError category:
@implementation NSError (My)
+ (NSError *)networkError {
NSMutableDictionary *userInfo = [[NSMutableDictionary alloc] init];
[userInfo setValue:@"Some problems with connection to network. Please try again later." forKey:NSLocalizedDescriptionKey];//FIXME: add localized string.
return [NSError errorWithDomain:@"Connection error" code:1024 userInfo:userInfo];
}
+ (NSError *)fromDict:(NSDictionary *)dict {
NSString *errorDesc = dict[@"description"];
errorDesc = errorDesc ? : [NSString stringWithFormat:@"Dictionary: \n%@", dict];
NSMutableDictionary* details = [NSMutableDictionary dictionary];
details[NSLocalizedDescriptionKey] = errorDesc;
NSNumber *errorCodeNum = dict[@"errorCode"];
return [NSError errorWithDomain:@"world" code:[errorCodeNum intValue] userInfo:details];
}
@end
Answer: I would put the #ifdef DEBUG inside a single failure helper, so the branching is handled in one place only. Something like:
- (void)invokeFailure:(ELErrorBlock)failure response:(id)responseObject error:(NSError *)error {
    if (!failure) return;
#ifdef DEBUG
    failure(error ?: [NSError fromDict:responseObject]);
#else
    failure([NSError networkError]);
#endif
} | {
"domain": "codereview.stackexchange",
"id": 10546,
"tags": "objective-c, ios, networking, error-handling, macros"
} |
Do black holes rip apart even atoms and protons and neutrons? | Question: I am aware that black holes rip apart objects because of spaghettification. What about atoms and protons and neutrons? Do they get ripped apart too? But, in the case of protons and neutrons, wouldn't them getting ripped apart into quarks violate the fact that you can't isolate quarks?
Answer: The "spaghettification force" you're referring to is better known as the tidal force on the object. To within an order of magnitude, it is proportional to
$$
F_\text{tidal} = \frac{G m M d}{r^3}
$$
where $m$ is the mass of the object, $d$ is its physical size, $r$ is the radial coordinate, and $M$ is the mass of the black hole.
For an object just crossing the event horizon of a black hole of radius $r$, we have $M = c^2 r/2G$, and so this becomes
$$
F_\text{tidal} = \frac{m c^2 d}{2 r^2}
$$
For a black hole of radius $r = 3$ km and an atom ($d = 10^{-10}$ m, $m = 10^{-27}$ kg), this force works out to be about $5\times10^{-28}$ N. Hardly anything to worry about; for comparison, the force binding a hydrogen atom's electron to its nucleus is about $10^{-8}$ N.
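For concreteness, the formula can be evaluated directly (values as in the text; G and c rounded, and the prefactor is heuristic to begin with):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s

def tidal_force_at_horizon(r, m, d):
    """F ~ m c^2 d / (2 r^2): tidal force on an object of mass m and size d
    crossing the horizon of a black hole with Schwarzschild radius r."""
    return m * c**2 * d / (2 * r**2)

# Atom at the horizon of a 3 km black hole:
F = tidal_force_at_horizon(r=3e3, m=1e-27, d=1e-10)
print(f"{F:.1e} N")   # ~5e-28 N for these inputs
```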
Of course, as the atom gets closer to the singularity $r \to 0$, this force will increase. But you'd have to make $r$ very small to get that to work. In such regimes, it would not surprise me if well-known phenomena of quantum field theory in normal regimes (in particular, color confinement) broke down. But, of course, such situations can never be observed from the outside universe. | {
"domain": "physics.stackexchange",
"id": 79189,
"tags": "black-holes, event-horizon, singularities, tidal-effect"
} |
Triggering re-build on jenkins | Question:
Hi,
Is there a way to trigger a re-build of the jenkins builds? I'm thinking about something like this.
The reason I would like to do this is that I am releasing a package which runs fine in the pre-release tests, but fails on Jenkins (see also my other question about this). Now I'd like to be able to test this particular case locally.
I can apply the fixes in my repo and then re-run bloom-release to push to the release repository. I don't increase the version, and skip the part about the PR to rosdistro. This causes all packages which have failed so far to re-build (they failed because they depend on the faulty package which needs fixing). However the faulty package itself has built fine, so a jenkins re-build is not triggered for it - with the result of having all depending packages fail again, even though the cause of error has been fixed in the release repository.
In my case, I just forgot to set a flag needed for the Jenkins build in my cmake file. Is it necessary to increase the version and do a full bloom-release (incl. PR), even if changes are tiny, in order to trigger a re-build of a particular jenkins project? Or is there an easier way to do this? I don't want to flood the system with silly version increases; I'd prefer to test this locally before I re-release the package. Is there a standard way to do this?
Thanks for your help,
Jenny
Originally posted by JenniferBuehler on ROS Answers with karma: 1 on 2016-06-01
Post score: 0
Original comments
Comment by Dirk Thomas on 2016-06-01:
Can you please clarify which job you are referring to. A devel job or a sourcedeb / binarydeb job?
Comment by JenniferBuehler on 2016-06-01:
One example was arm_components_name_manager. It needs the baselib_binding package, which had built fine. I wanted to re-trigger building baselib_binding (after only setting a flag to true in cmakelists)
Comment by Dirk Thomas on 2016-06-01:
http://build.ros.org/view/Idev/job/Idev__jb_common_libs__ubuntu_trusty_amd64/ should be the job since that repo contains the package of the referenced binary job (https://github.com/ros/rosdistro/blob/c9164b251da44df2aaa7dd928f596f29ff0da74b/indigo/distribution.yaml#L4085-L4105).
Comment by JenniferBuehler on 2016-06-01:
Ok, that's great to know, I'll try this. So far the devel branches have built fine, but maybe I have missed something there. I'll re-release the package with the fixes - it's anyway only required for the first release of the package, until things are fixed, so it's not a huge concern. Thanks!
Answer:
A binary job builds a released package. Therefore it requires you to make a new release to trigger it again. You might want to use a devel job, which is triggered on new commits to your upstream repo.
Originally posted by Dirk Thomas with karma: 16276 on 2016-06-01
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 24779,
"tags": "jenkins"
} |
Different ways of creating a multiplication grid in ruby | Question: The idea is to create a function that gets a number passed in and ouputs a multiplication grid.
For example the desired output for grid(10) would be :
1,2,3,4,5,6,7,8,9,10
2,4,6,8,10,12,14,16,18,20
3,6,9,12,15,18,21,24,27,30
4,8,12,16,20,24,28,32,36,40
5,10,15,20,25,30,35,40,45,50
6,12,18,24,30,36,42,48,54,60
7,14,21,28,35,42,49,56,63,70
8,16,24,32,40,48,56,64,72,80
9,18,27,36,45,54,63,72,81,90
10,20,30,40,50,60,70,80,90,100
I've come up with two variations of the same solution:
Variation 1
def grid1(grid_size)
output = String.new
for y in 1..grid_size
line = Array.new
for x in 1..grid_size
line << x * y
end
output += "#{line.join(",")}\n"
end
output
end
Variation 2
def grid(grid_size)
output = (1..grid_size).to_a.map do |y|
(1..grid_size).to_a.map{|x| x*y}.join(",")
end
output
end
Any other ideas on how to solve this or ways that the current solutions could be improved would be most appreciated!
Answer: The second variant is definitely more expressive. If you can write the whole function as one expression, and that expression is readable, then do it that way.
There is no need to call Range#to_a before doing #map.
Naming could be better. grid_size is redundant when the function is already called grid. I'd also suggest row and col instead of y and x.
def grid(size)
(1..size).map do |row|
(1..size).map { |col| row * col }.join(',')
end.join("\n")
end | {
"domain": "codereview.stackexchange",
"id": 17099,
"tags": "ruby, comparative-review"
} |
OVH email manager script using docopt | Question: This is a command-line program I wrote to make it easier to manage emails on OVH shared hostings. It works, but it seems ugly.
I'm a beginner with Python and docopt. What would you suggest?
#!/usr/bin/env python
# -*- coding: utf-8 -*-
''' OVH Email Manager (ovhEmailMan)
A small script that helps to add and remove one or more email addresses on the OVH shared domains
Usage:
ovh_mails.py list [--ugly]
ovh_mails.py add (<address> [--pswd=<password>][--description=<description>] | --file <filename> [--notify])
ovh_mails.py remove (<address> | --file <filename>)
ovh_mails.py (-h | --help)
Arguments:
<password> Password to access the mailbox (if not provided it's random generated)
<filename> Name of the files to process (csv). Check README to see how to format it
Options:
-h, --help Show this help message
-u, --ugly Print without nice tables
-p, --pswd=<password> Set the password to the one provided
-n, --notify If set, notification mail is sent using smtp credentials in ovh.conf
Commands:
list list all the email addresses currently configured
add add one or more (configured in <filename>) email addresses
remove remove one ore more (configured in <filename>) email addresses
'''
import ovh
from docopt import docopt
from ovhem import EmailManager
from ovhem import fileprocesser as fp
if __name__ == '__main__':
args = docopt(__doc__)
#Validate args ---- TODO
eman = EmailManager()
# 'List' command parsing
if args['list']:
if args['--ugly']:
eman.niceoutput = False
eman.list_emails()
# 'Add' command parsing
elif args['add']:
if args['<address>']:
emails = (
{
'address': args['<address>'],
'password': None,
'description': None,
},
)
if args['--description']:
emails[0]['description'] = args['<description>']
if args['--pswd']:
emails[0]['password'] = args['<password>']
if args['--file']:
emails = fp.process_file(args['<filename>'])
# Getting back the emails dict
emails=eman.add_emails(emails)
if args['--notify']:
fp.send_notifications(emails)
# 'remove' command parsing
elif args['remove']:
if args['<address>']:
emails = (
{
'address': args['<address>'],
},
)
if args['--file']:
emails = fp.process_file(args['<filename>'])
eman.remove_emails(emails)
Answer: Overall, it looks quite fine, with some smaller and bigger issues.
Parsing the add command
It's not so great to use the '<address>' string twice here:
if args['<address>']:
emails = (
{
'address': args['<address>'],
# ...
You might mistype one of them. Or you might change something later and forget to update it everywhere. It would be better to put the value of args['<address>'] into a variable, and use that variable in evaluations and assignments.
This part is also awkward:
emails = ( ... )
if args['--description']:
emails[0]['description'] = args['<description>']
if args['--pswd']:
emails[0]['password'] = args['<password>']
It would be much easier to use description and password variables to store defaults, then update these variables if --description or --pswd were given, and finally create the emails tuple.
Like this:
address = args['<address>']
if address:
password = None
description = None
if args['--description']:
description = args['<description>']
if args['--pswd']:
password = args['<password>']
emails = (
{
'address': address,
'password': password,
'description': description,
},
)
Parsing the remove command
Here, if the second if statement is true,
it will overwrite the effect of the first:
if args['<address>']:
emails = (
{
'address': args['<address>'],
},
)
if args['--file']:
emails = fp.process_file(args['<filename>'])
That suggests that you should use an elif, and switch the order of the statements:
if args['--file']:
emails = fp.process_file(args['<filename>'])
elif args['<address>']:
emails = (
{
'address': args['<address>'],
},
)
emails might be undefined
Both the parsing of add and remove have a problem:
As their last step they do something with emails,
for example eman.add_emails(emails) and eman.remove_emails(emails),
but at that point the emails variable might be undefined.
This can be fixed by adding validation for these commands.
It looks like either of these must be true, checked in this order:
--file was specified and <filename> parameter was given
<address> parameter was given
For example:
if args['--file']:
filename = args['<filename>']
if filename:
emails = fp.process_file(filename)
else:
raise Exception("Filename parameter missing")
else:
address = args['<address>']
if address:
emails = ( ... )
else:
raise Exception("Address parameter missing")
# main action | {
"domain": "codereview.stackexchange",
"id": 12391,
"tags": "python, beginner, python-2.x, email"
} |
Does rotation increase mass? | Question: If an object is rotated on its axis near the speed of light would its mass increase?
Normally, if the object were moving (relative to the Earth, for example), I would agree that its mass would increase. I think that when rotating it won't increase its mass, but I can't prove it.
On the other hand, if $E=mc^2$ and $E$ is the energy of motion, does that mean that rotation has energy of motion?
Answer: Yes, this is certainly true. Mass is defined by $m^2=E^2-p^2$ (in units with $c=1$), where $(E,p)$ is the momentum four-vector built out of the mass-energy and momentum. (This defines what's known as invariant mass, as opposed to "relativistic mass.") Mass as defined in this way is not additive, and depends on the motion of the particles within a system.
As a simple example, say we have two masses $m$ at the ends of a massless stick. When the stick is at rest and not rotating, the momentum four-vectors are both $(m,0)$, the sum is $(2m,0)$, and the mass of the system is $2m$.
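The four-momentum bookkeeping used here, and in the rotating case below, can be checked numerically; a sketch in units with $c = 1$, with an arbitrarily chosen $v = 0.6$:

```python
import math

def invariant_mass(four_vectors):
    """Mass of a system from summed four-momenta: m^2 = E^2 - |p|^2 (c = 1)."""
    E = sum(v[0] for v in four_vectors)
    p = sum(v[1] for v in four_vectors)   # one spatial dimension suffices here
    return math.sqrt(E**2 - p**2)

m, v = 1.0, 0.6
gamma = 1 / math.sqrt(1 - v**2)

# Two masses at rest: total mass is 2m.
assert abs(invariant_mass([(m, 0.0), (m, 0.0)]) - 2 * m) < 1e-12

# The same masses moving oppositely (as on the rotating stick): 2 m gamma.
assert abs(invariant_mass([(m * gamma, m * gamma * v),
                           (m * gamma, -m * gamma * v)]) - 2 * m * gamma) < 1e-12
print(f"at rest: {2 * m}, rotating: {2 * m * gamma:.3f}")
```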
Now let the stick rotate end over end. The momentum vectors are now $(m\gamma,m\gamma v)$ and $(m\gamma,-m\gamma v)$. The total momentum four-vector is $(2m\gamma,0)$, which means the mass of the system is $2m\gamma$. | {
"domain": "physics.stackexchange",
"id": 17692,
"tags": "special-relativity, mass, kinematics, rotational-kinematics"
} |
Usage of Multibeam 2D Imaging Sonar for AUVs, testing them in the pool environment | Question: I belong to an AUV team at my university. We are planning to have a Multibeam 2D Imaging Sonar (the Blueview P900) for our AUV to detect obstacles underwater.
I have the following questions about the feasibility of testing/implementing such sonars on AUVs.
Since these multibeam sonars receive multiple reflections arriving at different times from various surfaces when tested in a pool, is there a recommended way to filter this noise out of the images obtained from the sonar pings?
Are such sonars used or tested by any other team/organization that does pool testing, as opposed to ocean/reservoir testing where multiple reflections are almost zero apart from the reflections from the obstacle(s)?
Also, I would like to know the recommended image-processing algorithms that can be used to detect obstacles in the sonar images.
Answer: As you've noted, a pool is one of the worst environments in which to test acoustic sensors; there is nothing to dampen the echoes, and the multipath is quite extreme.
Generally, one of 3 things is done to help mitigate these effects -- the goal being an anechoic test chamber.
Make the shape of your test tank such that all echoes are reflected into a trap. The TRANSDEC anechoic pool accomplishes this with its elliptical shape; sounds that hit the bottom are directed toward the edge, which has a "trap" around its perimeter. (I can no longer find the cut-away diagram for that... sorry).
Put foam wedges along the outside of the tank to absorb acoustic energy, similar to the way that recording-studio anechoic chambers are designed.
Aerate the water to scatter the echoes, a technique described in the paper "Anechoic aquarium for ultrasonic neural telemetry". The use of microbubbles is apparently able to prevent most of the multipath effects. | {
"domain": "robotics.stackexchange",
"id": 252,
"tags": "sonar"
} |