| anchor | positive | source |
|---|---|---|
When filling a bottle, why is the level after an overflow lower than the top? | Question: When I fill a water bottle from a tap (aiming the flow from the tap so that it goes entirely into the bottle), if I time it correctly I can turn off the tap so that the bottle is filled right up to the brim.
If I mistime it so that the water overflows, and then turn off the tap, the resulting level in the bottle is below the brim, often by a decent margin.
The same effect can be observed filling up a saucepan or bowl, so it doesn't seem to be the shape of the container. My instinct is that something to do with viscosity or surface tension means that some water is carried away with the overflow, but I don't have the knowledge to tell if this makes sense.
What's going on here?
Answer: There are two effects that both reduce the final water level:
Kinetic energy of the water
Entrapped air bubbles in the water
When the water is pouring into the bottle and back out of it, it does not immediately turn around at the surface. Instead, the kinetic energy of the water causes it to flow quite deep into the bottle, then make a turn and flow back upwards.
When the incoming flow stops, the remaining water in the bottle still has that kinetic energy, and will continue flowing upwards and over the rim for a short time.
Depending on the faucet, the water flow usually also contains entrapped air bubbles, which make it appear white rather than clear. Once the flow stops, the larger bubbles quickly rise to the surface and pop, further lowering the water level.
Just for fun, I took a slow motion video of filling a bottle (slowed 8x). With my faucet, it appears the contribution of air bubbles is quite large. | {
"domain": "physics.stackexchange",
"id": 81389,
"tags": "fluid-dynamics, water"
} |
Machine learning model bundled with a library vs. an API | Question: I am thinking to "deploy" a machine learning model (in pickle it is sized 3 megabytes) and after discussing with my developer colleagues, they said it would be better if the model is packed as a python library instead of a microservice (like a rest API).
I wanted to ask what's your view on this: Pickled model packed in a library specifically meant for it vs. a rest API, pros and cons?
I was thinking that having it as a library could possibly be easier to use and wouldn't require worrying about deployment, web addresses, etc.
Answer: The advantage of deploying the model as a package is that the code becomes part of the monolith application and requires no remote calls, thus:
No additional operational demands
No external dependencies
Faster
Fewer security issues
Same uptime as the rest of the application
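To make the library option concrete, here is a minimal, self-contained sketch of the pattern (the ThresholdModel class and the temp-file path are illustrative stand-ins, not from the question; a real package would bundle the 3 MB pickle as package data and expose a small predict wrapper):

```python
import os
import pickle
import tempfile

# A stand-in for the real model: any object with a predict method
# pickles and loads the same way.
class ThresholdModel:
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, x):
        return int(x > self.threshold)

# "Build time": serialise the model once and ship the file inside the package.
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(ThresholdModel(0.5), f)

# "Import time" in the consuming application: load locally,
# no HTTP round trip, no service to keep alive.
with open(path, "rb") as f:
    model = pickle.load(f)

print(model.predict(0.9))
```

The trade-off is that every consuming application now pins the model (and its Python dependencies) at its own release cadence, which is exactly the coupling a REST service would avoid.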
In general, most features stay in a single monolith application until there are specific reasons to create separate services. | {
"domain": "datascience.stackexchange",
"id": 6873,
"tags": "python, predictive-modeling, data-product"
} |
GANs: Should Generator update weights when Discriminator says false continuously | Question: My GANs is like this:
Train an autoencoder (VAE), get the decoder part and use as Generator
Train Discriminator
After training, do the generation in these steps:
Call Generator to generate an image
Call the Discriminator to classify the image to see whether it's acceptable
The problem is that the Discriminator says 'false' a lot, which means the generated image is not useful.
How should the Generator change (update weights) when Discriminator doesn't accept its generated image?
Answer: In general, you should train both discriminator D and generator G simultaneously.
Depending on the metric that you use as the target for your model, you may encounter the vanishing gradient problem. It can happen when you implement the original loss (i.e. JS-divergence). In that case D can become overconfident about fake samples and won't provide any useful feedback to G. To find out whether training has fallen into this problem, you should plot the D and G losses.
The original GAN has a lot of problems, which is why I suggest using the Wasserstein metric instead. You can find more information in the WGAN paper.
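As a toy sketch of the Wasserstein objective (just the loss arithmetic, not a training loop; the critic scores here are made-up numbers, not outputs of a trained model):

```python
import random

# Stand-in critic scores for a batch of real and generated samples.
random.seed(0)
d_real = [random.gauss(1.0, 0.1) for _ in range(64)]   # critic scores, real batch
d_fake = [random.gauss(-1.0, 0.1) for _ in range(64)]  # critic scores, fake batch

def mean(xs):
    return sum(xs) / len(xs)

# Critic maximises E[D(real)] - E[D(fake)], i.e. minimises its negative;
# the generator pushes its fake scores up. Both are updated in alternation.
critic_loss = -(mean(d_real) - mean(d_fake))
generator_loss = -mean(d_fake)

print(critic_loss, generator_loss)
```

The point of the Wasserstein form is that the critic's score difference keeps a useful gradient for G even when the critic separates real from fake very confidently.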
Here you can find more information about GAN problems:
https://arxiv.org/pdf/1904.08994.pdf
https://arxiv.org/pdf/1804.00140.pdf | {
"domain": "ai.stackexchange",
"id": 1752,
"tags": "machine-learning, generative-adversarial-networks, generative-model, generator, discriminator"
} |
How to determine beam deflection with this unusual configuration (diagram attached) | Question: So I've got a support system shown in the attached image. Comparing configuration "A" to "B", it seems intuitive that "A" could support a larger load: the stubs lock the two beams above and below the tube steel to each other, increasing resistance to deflection. How can I show this mathematically? A colleague suggested adding the deflections of the beams above and below the tube steel to get a total deflection, and showing that this total is less than the deflection of the support beam in configuration "B". This doesn't seem quite right to me, but I'm not sure. I should note I am assuming the taller base supports are 100% rigid.
Answer: Let's label the upper beam "A" and the lower beam "B". Assuming rigid vertical studs, the deflection of beam A at points 1 & 2 must equal that of beam B at the respective locations, that is, $\Delta_{A1} = \Delta_{B1}$ and $\Delta_{A2} = \Delta_{B2}$.
Now, release the studs and obtain the deflections at points 1 and 2 due to a concentrated load $P_B$ at the mid-span of beam B.
$\Delta_{B1} = \Delta_{B2} = \frac {P_Ba}{48EI}(3L^2 - 4a^2)$
Next, apply a force $P_A$ at points 1 and 2 on beam A, and get the deflections at points 1 and 2.
$\Delta_{A1} = \Delta_{A2} = \frac {P_Aa}{6EI}(3aL - 4a^2)$
Now you have two equations and two unknowns. By equating $\Delta_{A1}$ and $\Delta_{B1}$ you can get $P_A$ in proportion to $P_B$. And based on the equivalent forces concept, $2P_A = P_B = \text{applied load}$, you can get the exact/actual magnitude of $P_A$ and $P_B$.
Finally, compute the beam deflections using the actual $P_A$ and $P_B$ in the appropriate equations shown above. Note the difference in deflection in the segments between points 1 and 2 of beams A and B; it implies the beams carry different internal forces.
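As a numeric illustration of the procedure (a sketch with made-up values for $L$ and $a$; $EI$ cancels in the ratio, and equating $\Delta_{A1} = \Delta_{B1}$ for unit loads gives the load split $P_A/P_B$):

```python
from fractions import Fraction as F

# Illustrative numbers (L, a, EI are made up; EI cancels in the ratio).
L, a, EI = F(10), F(2), F(1)

# Deflection at point 1 of beam B per unit P_B (formula above):
dB1 = a * (3 * L**2 - 4 * a**2) / (48 * EI)

# Deflection at point 1 of beam A per unit P_A (formula above):
dA1 = a * (3 * a * L - 4 * a**2) / (6 * EI)

# Compatibility Delta_A1 = Delta_B1 fixes the load split:
ratio = dB1 / dA1  # = P_A / P_B
print(ratio)  # 71/88 for these numbers
```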
Note, this solution is valid for the case that the load is directly attached to beam B; beam A shares the load through force transfer from the rigid studs, not through connection to the load. | {
"domain": "engineering.stackexchange",
"id": 4138,
"tags": "mechanical-engineering, statics, beam, deflection"
} |
Is there any optical component that uniformizes the incoming light? | Question: Is there any optical component in existence that uniformizes randomly pointing rays?
Answer: To add to Carl Witthoft's answer: your proposed device would violate Conservation of Optical Extent aka Optical Étendue unless it were an active device (i.e. one needing a work input to "uniformise" a given quantity of light).
The law that optical extent can only be held constant or increased by a passive optical system is equivalent to the second law of thermodynamics for light, because the optical extent of a light source is its volume in phase space.
The optical extent $\Sigma$ for the light radiated from a surface $S$ is:
$$\Sigma = \int_S \int_\Omega I(x) \cos(\theta(x, \Omega)) \,{\rm d} \Omega\, {\rm d} S$$
where we integrate the intensity $I$ at each point $x\in S$ over all solid angles $\Omega$ taking account of the angle $\theta$ each component of the radiation from point $x$ makes with the surface's unit normal. Then we integrate this quantity over all points on the surface $S$.
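As a numerical sanity check of the solid-angle part of this integral: for a flat source of uniform (Lambertian) intensity, $\int \cos\theta \,\mathrm{d}\Omega$ over the forward hemisphere equals $\pi$, so $\Sigma = \pi I S$. A midpoint-rule check:

```python
import math

# Integrate cos(theta) over the hemisphere: dOmega = sin(theta) dtheta dphi,
# phi contributes a factor of 2*pi, theta runs from 0 to pi/2.
N = 2000
dtheta = (math.pi / 2) / N
total = 0.0
for i in range(N):
    theta = (i + 0.5) * dtheta  # midpoint rule
    total += math.cos(theta) * math.sin(theta) * dtheta * 2 * math.pi

print(round(total, 4))  # → 3.1416, i.e. pi
```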
So, the $\Sigma$ for your output would be nought, whilst it would be large for your input, so no passive imaging device can do what you ask.
So, another way of putting Carl's answer would be that the proposed device would have to "forget" the state encoded in the input light's wavefront direction at each point. Thus your proposed device, if at all possible, would needfully be an active device, needing work input of $k_B\,T\,\log 2$ joules for each bit of light state forgotten in accordance with the Landauer Principle form of the second law of thermodynamics. I say more about this in my answer here. | {
"domain": "physics.stackexchange",
"id": 11083,
"tags": "thermodynamics, optics, entropy, geometric-optics"
} |
Logistics regression with polynomial features vs neural networks for classification | Question: I am taking Andrew Ng's Coursera class on machine learning. He mentions that training a logistic regression model with polynomial features would be very expensive for certain tasks compared to training a neural network. Why is that though? I mean, when we talk about neural networks we're usually looking at a model with a very large number of parameters so why would logistic regression be more computationally expensive?
PS: Here's some context (at the beginning of an exercise on neural networks):
In the previous part of this exercise, you implemented multi-class logistic regression to recognize handwritten digits. However, logistic regression cannot form more complex hypotheses as it is only a linear classifier (You could add more features, such as polynomial features, to logistic regression, but that can be very expensive to train).
Answer: I expect what he's referring to is the combinatorial explosion in the number of terms (features)
as the degree of the polynomial increases.
Let's say you have $N$ measurements/variables you're using to predict some other variable. A $k$th degree polynomial of those $N$ variables has $ N+k \choose k$ terms (see here). This increases very quickly with $k$.
Example:
Say we have N = 100 variables and we choose a third degree polynomial, we'll have
$ 103\choose3$ = 176,851 features. For a fifth-degree polynomial it goes to:
$105 \choose 5$ = ~96 million features. You'll then need to learn as many parameters as you have features.
Compare to a NN:
Compare this to using a fully connected NN
say we choose K fully connected hidden layers with $M$ units each. That gives:
$NM + (K-1)M^2 + M$ parameters. This is linear in $K$ (though the $M^2$ term attached to it might be big).
For N = 100 variables again, two hidden layers, and 350 nodes per layer, we get 157,850 parameters, fewer than we'd need for logistic regression with third-degree polynomial features.
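These counts are easy to reproduce (using Python 3.8+'s math.comb; the network count uses the $NM + (K-1)M^2 + M$ formula above):

```python
from math import comb

N = 100  # input variables, as in the example above

poly3 = comb(N + 3, 3)  # features for a 3rd-degree polynomial
poly5 = comb(N + 5, 5)  # features for a 5th-degree polynomial

K, M = 2, 350  # hidden layers and units per layer
nn_params = N * M + (K - 1) * M * M + M

print(poly3, poly5, nn_params)  # → 176851 96560646 157850
```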
Representational power
(Added this section after seanv507's great comment) * See caveat below
The argument above was just a numbers game - how big of a NN can you get while still having the same parameter count as logistic regression with polynomial features. But you're getting at something when you say in your question:
I mean, when we talk about neural networks we're usually looking at a
model with a very large number of parameters so why would logistic
regression be more computationally expensive?
Well said.
Efficiency
Neural nets are universal function approximators, and we know that polynomials can approximate functions too. Which is better? What should one use? I'd bet that, for a given parameter "budget", a NN could better approximate "more" functions than a polynomial with the same number of parameters. It wouldn't surprise me if there was theory to back it up, but I don't know it off hand. seanv507's statement
One possibility is to assume that the nonlinear activation function is quadratic...and identify what range of polynomials the NN could represent.
is an interesting idea. Empirically, NN's have done better for many hard tasks than polynomial representations, and that's pretty strong evidence.
*Caveat
As seanv507 says - this is the hard part. The above statements won't always be true - I'd argue they're probably mostly true. If a low-degree polynomial basis nicely and reliably separates your classes, then it's probably worth using / trying polynomial features. | {
"domain": "datascience.stackexchange",
"id": 5902,
"tags": "machine-learning, neural-network, logistic-regression"
} |
Bandwidth and gain of operational amplifier | Question: Please explain the relation between bandwidth and gain of op-amp.
I have learned that negative feedback could reduce gain and increase bandwidth of an amplifier. Since gain= Vout/Vin, I understand that less gain means higher Vin. The larger Vin, the greater bandwidth. (I made my own assumption here. Please correct me if I am wrong!)
However, this seemed contradictory to what I knew about the properties of an ideal operational amplifier (op-amp). The ideal op-amp should have infinite open-loop voltage gain and infinite bandwidth. How can infinite open-loop voltage gain come with infinite bandwidth? Or is this why it is an ideal op-amp (impossible to make)?
This confuses me a lot. Thank you for your help!
Answer: Less gain doesn't mean higher Vin. With a constant gain, the ratio Vout/Vin remains the same. You are right that an ideal op-amp should have infinite gain and infinite bandwidth, but this does not happen in a real op-amp because the output falls off with increasing frequency. You can imagine the output going through an RC low-pass filter within the op-amp, so the output starts to fall off at some stage as the frequency is increased. The gain thus decreases with frequency. This means that if the op-amp gain is reduced, as with negative feedback, you can have a large range of frequencies over which the gain remains constant, that is, a larger bandwidth. | {
"domain": "physics.stackexchange",
"id": 36056,
"tags": "electronics"
} |
How can a torus admit half a flux quantum, and why does a vortex induce an AB phase? | Question: There is an issue that I have with the argument given in “Topological Degeneracy of non-Abelian States for Dummies” http://arxiv.org/abs/cond-mat/0607743 regarding the ground state degeneracy of the Pfaffian state on a torus: They argue that adiabatic pairwise annihilation of quasiparticles around the $x$-direction (implemented by $T_x$) should be equivalent to inserting half a flux quantum into the y-direction “hole” of the torus, i.e.
$$T_x=F_y^2.$$
Explicitly, this is described in the following paragraph, on page 9, Section 4: Consistency between non-Abelian statistics and charge fractionalization:
The issue I have with this is that the magnetic field in a vortex is localized at a point, and is not like the uniform field described by "insertion of half a flux quantum into a hole", and so the authors still have to prove that a quasiparticle encircling the appropriate fundamental cycle of the torus still picks up the same Aharonov Bohm phase, despite the fact that the magnetic fields are different in each case.
Moreover, I do not believe that it is possible for half a flux quantum to exist: the Chern number of the electromagnetic $U(1)$-bundle would then be 1/2, not an integer, which violates mathematics!!!
What's going on here?
Answer: I'll try to answer the last question, which is of course closely related to the first one (in bold font).
The existence of half flux quanta is special to Moore-Read state (the paper cited in the OP called it the Pfaffian state). Let us take the simplest example, the Moore-Read state at $\nu=1/2$, corresponding to $M=1$ in the paper. The quasiparticles are the following: the trivial electrons, a charge-$e/2$ quasiparticle, a neutral fermion, and charge-$\pm e/4$ non-Abelian quasiparticle (often called the quasiholes, and the paper called them the vortex/anti-vortex). It is useful to use the composite fermion picture, where the composite fermion is one electron plus two units of flux quanta, and the Moore-Read state can be thought of as the $p+ip$ superconductor of the composite fermions. Of course there is also the charge sector, which is very important in this discussion. More formally, we can construct the Moore-Read state by a parton approach, write the electron operator $c$ as $c=\psi b$ where $\psi$ is the neutral composite fermion and $b$ is a charge-$e$ boson. We then put $\psi$ fermions into a $p+ip$ superconductor, and $b$ into a $\nu=1/2$ bosonic Laughlin state. It is not hard to see that the resulting wavefunction is exactly the same as the Moore-Read state. The charge-$e/4$ quasiparticle is then the vortex for the composite fermion.
Now we can understand the effect of external U(1) flux insertion. Inserting a $2\pi$ U(1) flux nucleates the Abelian charge-$e/2$ quasiparticle, as one would expect for a $\nu=1/2$ bosonic Laughlin state. Now the question is whether a $\pi$ U(1) flux is allowed. The reason we usually do not allow such non-unit fluxes is that they introduce a "branch cut" into the wavefunction of the electrons, so they cannot represent a quasiparticle-type excitation of the system, which is the objection raised by the OP near the end (although phrased differently). However, what happens in the Moore-Read state is that when a $\pi$ U(1) flux is inserted, the $b$ boson sees it as a $\pi$ flux, but at the same time a vortex in the $p+ip$ superconductor is also created, so the $\psi$ fermion also sees a $\pi$ flux (from the vortex). The electron thus effectively sees no flux, so there is no "branch cut". Therefore, half-flux-quantum insertion creates a charge-$e/4$ "vortex" quasiparticle. | {
"domain": "physics.stackexchange",
"id": 26930,
"tags": "electromagnetism, condensed-matter, topological-order"
} |
Energy released from destruction of an object | Question: In the movie "Star Wars: A New Hope", Luke Skywalker blows up the "Death Star". Assume that the "Death Star" is a perfectly spherical spaceship with uniform mass distribution, mass $M = 10^{21}$ kg, and radius $R = 667$ km. Estimate the amount of energy that was released when the Death Star was destroyed.
Assume that initially all the energy was stored as the gravitational potential energy of the "Death Star" and that after the explosion, the remaining parts of the spaceship are infinitesimally small and infinitely far from each other.
I am trying to solve this problem. I was thinking of calculating the potential energy. That should be equal to the energy released. As mentioned in the question, it was stored as gravitational potential energy.
But the problem is, we know
$$ P.E. = mgh $$
As the height is not mentioned here, how can I calculate it? Besides, g should not matter here (the thing was a spaceship).
Answer: The energy required by the question is the gravitational self-binding energy. When two mass elements are near each other, there is a gravitational potential energy associated with the system. This is analogous to the energy of a ball at some height above the Earth's surface ($mgh$), but that formula only applies near the surface of a large body such as the Earth.
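Numerically, using the uniform-sphere self-binding energy $U = \frac{3GM^2}{5R}$ with the values given in the question:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 1e21       # kg
R = 667e3      # m

# Self-binding energy of a uniform sphere.
U = 3 * G * M**2 / (5 * R)
print(f"{U:.2e} J")  # → about 6.0e+25 J
```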
What you need to use for the question is, $U = \frac{3GM^2}{5R}$ | {
"domain": "physics.stackexchange",
"id": 85821,
"tags": "homework-and-exercises, energy, potential-energy, mass-energy"
} |
A 'better' definition of time | Question: I have been annoyed by official definitions of time from many Google searches and decided after listening to Sean Carroll's podcast to write my own. Here is what I did so far:
What is time?
A state that changes.
What does time look like?
It is perceived as a non-spatial continuum of changing states where each state has the most infinitely small differences compared to any other state in the continuum.
The arrow of time?
A state in high entropy is not changing, and so only a state that has low entropy can be a state that can change. Any changing state therefore is perceived as a state changing with linear direction governed by the entropy of the continuum.
Please let me know if I used any time-like words, or you have some better definitions.
Answer: You would do better to conflate your first two questions and define time as a continuum of non-spatial change, as the question 'what does time look like?' is literally meaningless: we cannot see time.
Your statement about the arrow of time is false. High entropy does not rule out change over time.
The idea of time having a direction is unnecessary- one can consider time to be a cumulative measure of change, which by definition is always positive. | {
"domain": "physics.stackexchange",
"id": 82699,
"tags": "cosmology, entropy, time, definition, arrow-of-time"
} |
Show all primes in go | Question: I'm trying to go through Adrian's Simple Programming Problems; Elementary task 8. It basically asks for an infinite loop that prints every prime number as it finds it.
Are there too many if statements? Call it intuitive knowledge, but something is just bugging me, like I'm missing something. Like, I can remove one of the if statements. So I'm wondering what you guys will find.
package main
import "fmt"
func main() {
    var current_prime int
    var prime bool
    current_prime = 0
    for {
        prime = true
        current_prime++
        for i := 2; i < current_prime; i++ {
            if current_prime % i == 0 {
                prime = false
                i = current_prime
            }
        }
        if prime {
            fmt.Println("found prime:", current_prime);
        }
    }
}
Answer:
Simple Programming Problems
Write a program that prints all prime numbers. (Note: if your
programming language does not support arbitrary size numbers, printing
all primes up to the largest number you can easily represent is fine
too.)
Your program says one is prime. That is not correct.
The only even prime number is two. You don't appear to take advantage of that. That is inefficient.
A number can only be divisible by a number less than or equal to its square root. You don't appear to take advantage of that. That is inefficient.
You are asked to "[print] all primes up to the largest number you can easily represent." In Go, type int is either 32 or 64 bits depending on the platform; by not using type int64 you don't guarantee the largest number. Your program is not correct.
You have no program termination condition. Your program is not correct.
And so on.
For example, fixing your code,
package main

import (
    "fmt"
    "math"
)

func main() {
    fmt.Println("prime numbers:")
    fmt.Println(2)
    for n := int64(3); n > 0; n += 2 {
        prime := true
        r := int64(math.Sqrt(float64(n))) + 1
        for i := int64(3); i < r; i += 2 {
            if n%i == 0 {
                prime = false
                break
            }
        }
        if prime {
            fmt.Println(n)
        }
    }
}
To provide a measure of performance, the results from a Go benchmark for all prime numbers up to 32,771:
BenchmarkPeterSO-4 500 2661556 ns/op
BenchmarkIbnRushd-4 3 492864429 ns/op | {
"domain": "codereview.stackexchange",
"id": 31536,
"tags": "performance, primes, formatting, go"
} |
Calculating motion of equation in tensor form | Question: for the Lagrangian density $$\mathscr{L}=\frac{1}{2}(\partial_{\mu}A^{\mu})^2$$
how can I get this $$\frac{\partial{\mathscr{L}}}{\partial(\partial_{\mu}A_\nu)}=(\partial_\rho A^\rho)\eta^{\mu\nu}$$
I can only get $\frac{\partial{\mathscr{L}}}{\partial(\partial_{\mu}A_\nu)}=\partial_\rho A^\rho$; where does the $\eta^{\mu\nu}$ come from?
Answer: Writing
$$\mathscr{L}=\frac{1}{2}(\partial_{\mu}A^{\mu})^2=\frac{1}{2}(\partial_{\mu}A_{\nu}\eta^{\mu\nu})^2$$
we can write
$$\frac{\partial{\mathscr{L}}}{\partial(\partial_{\mu}A_\nu)}=\frac{1}{2}\cdot 2\,(\partial_{\rho}A_{\sigma}\eta^{\rho\sigma})\,\frac{\partial(\partial_{\alpha}A_{\beta})}{\partial(\partial_{\mu}A_{\nu})}\,\eta^{\alpha\beta}=(\partial_{\rho}A^{\rho})\,\delta^{\mu}_{\alpha}\delta^{\nu}_{\beta}\,\eta^{\alpha\beta}=(\partial_{\rho}A^{\rho})\,\eta^{\mu\nu} .$$ | {
"domain": "physics.stackexchange",
"id": 29227,
"tags": "homework-and-exercises, lagrangian-formalism, notation, tensor-calculus, differentiation"
} |
leg_detector and kinect | Question:
Hey,
I am trying to run a leg_detector2 package from here:
http://code.google.com/p/wu-robotics/source/browse/#svn%2Ftrunk%2Fpeople2
Has anyone had success running it with the Kinect? I think that should not be the problem. I successfully remapped the Kinect's laser scan to base_scan. But now I don't know how to read or visualize detected legs or any other data produced by leg_detector.
I would really appreciate if anybody has any tips.
Thank you.
Originally posted by Grega Pusnik on ROS Answers with karma: 460 on 2012-04-06
Post score: 1
Answer:
I successfully ran it with some minor adjustments.
First you have to edit your launch file to start openni and pointcloud_to_laser_scan, and to remap the Kinect's laser scan from scan to base_scan.
To properly show the data in rviz I modified all .cpp and .py files to change fixed_frame from odom_combined to camera_depth_frame. Maybe you won't have to do that if you actually use odom_combined.
In the end don't forget to rosmake it.
Now you should be able to show leg detection in rviz by adding MARKER.
Originally posted by Grega Pusnik with karma: 460 on 2012-04-09
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 8888,
"tags": "ros, kinect, leg-detector, people"
} |
Priority queue implementation in C based on heap ordered (resizable) array - take 2 | Question: You think that your code is perfect ... until you put it up for code review.
I put up my priority queue for review and received lots of really good feedback. Including a memory leak which was embarrassing.
See here:
Priority queue implementation in C based on heap ordered (resizable) array
So I added in most of the suggestions and here it is. I would like to see if I interpreted the suggestions correctly and also see if there is anything else that needs fixing.
I realise of course that to make it really industrial strength, like for example the C library sort functions or the C++ standard library then changes would be required. But hopefully this code is still useful for some uses.
One remaining item that I have not addressed is the array_resize function. Fixing it is not trivial, so I have left that issue for now. Basically: if realloc fails, what should be done?
Here is the code.
priority_queue.h - header:
/*
Heap ordered priority queue storing in a resizable array
*/
#ifndef PRIORITY_QUEUE_
#define PRIORITY_QUEUE_
struct priority_queue;
typedef struct priority_queue priority_queue_t;
/* priority_queue_init initialises the priority queue and returns a handle which
must be passed to subsequent priority_queue_xxx functions.. Argument is the
comparison function. This comparison function must return a negative value if
the first argument is less than the second, a positive integer value if the
first argument is greater than the second, and zero if the arguments are equal.
The function must also not modify the objects passed to it. The meaning of
greater or less can be reversed. */
priority_queue_t* priority_queue_init(int(*compare)(const void* element1, const void* element2));
/* priority_queue_free frees memory used by priority queue. init in constant time */
void priority_queue_free(priority_queue_t* pq);
/* returns 1 if the queue is empty, 0 otherwise. constant time */
int priority_queue_empty(const priority_queue_t* pq);
/* insert an object into the priority queue. insert in logarithmic time */
void priority_queue_insert(priority_queue_t* pq, void* el);
/* pops the 'top' element and removes from the priority queue. pop in logarithmic time */
void* priority_queue_pop(priority_queue_t* pq);
/* returns the top element but does not remove from priority queue. top in constant time */
void* priority_queue_top(const priority_queue_t* pq);
/* returns number of elements in priority queue. constant time */
int priority_queue_size(const priority_queue_t* pq);
#endif // PRIORITY_QUEUE_
priority_queue.c - the implementation:
#include "priority_queue.h"
#include <stdlib.h>
typedef int(*compare)(const void* element1, const void* element2);
struct priority_queue {
    int capacity;
    int n;
    void** array;
    compare cmp;
};

static const int initial_size = 16;

static void swap(priority_queue_t* pq, int index1, int index2) {
    // shallow copy of pointers only
    void* tmp = pq->array[index1];
    pq->array[index1] = pq->array[index2];
    pq->array[index2] = tmp;
}

static void rise(priority_queue_t* pq, int k) {
    while (k > 1 && pq->cmp(pq->array[k / 2], pq->array[k]) < 0) {
        swap(pq, k, k / 2);
        k = k / 2;
    }
}

static void fall(priority_queue_t* pq, int k) {
    while (2 * k <= pq->n) {
        int child = 2 * k;
        if (child < pq->n && pq->cmp(pq->array[child], pq->array[child + 1]) < 0) {
            child++;
        }
        if (pq->cmp(pq->array[k], pq->array[child]) < 0) {
            swap(pq, k, child);
        }
        k = child;
    }
}

static void** array_resize(void** array, int newlength) {
    /* reallocate array to new size
       this is problematic because realloc may fail and return NULL
       in which case there is a leak because array is still allocated
       but not returned so cannot be free'd */
    return realloc(array, newlength * sizeof(void*));
}

priority_queue_t* priority_queue_init(int(*compare)(const void* element1, const void* element2)) {
    priority_queue_t* pq = malloc(sizeof(priority_queue_t));
    pq->array = NULL;
    pq->capacity = 0;
    pq->n = 0;
    pq->cmp = compare;
    return pq;
}

void priority_queue_free(priority_queue_t* pq) {
    free(pq->array);
    free(pq);
}

int priority_queue_empty(const priority_queue_t* pq) {
    return pq->n == 0;
}

void priority_queue_insert(priority_queue_t* pq, void* el) {
    if (pq->capacity == 0) {
        pq->capacity = initial_size;
        pq->array = array_resize(pq->array, pq->capacity + 1);
    }
    else if (pq->n == pq->capacity) {
        pq->capacity *= 2;
        // we need to resize the array
        pq->array = array_resize(pq->array, pq->capacity + 1);
    }
    // we always insert at end of array
    pq->array[++pq->n] = el;
    rise(pq, pq->n);
}

void* priority_queue_pop(priority_queue_t* pq) {
    // reduce array memory use if appropriate
    if (pq->capacity > initial_size && pq->n < pq->capacity / 4) {
        pq->capacity /= 2;
        pq->array = array_resize(pq->array, pq->capacity + 1);
    }
    void* el = pq->array[1];
    swap(pq, 1, pq->n--);
    pq->array[pq->n + 1] = NULL; // looks tidier when stepping through code - not really necessary
    fall(pq, 1);
    return el;
}

void* priority_queue_top(const priority_queue_t* pq) {
    return pq->array[1];
}

int priority_queue_size(const priority_queue_t* pq) {
    return pq->n;
}
example driver program:
#include "priority_queue.h"
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
typedef struct {
    int weight;
    char* data;
} element;

int descending(const void* a, const void* b) {
    const element* element1 = a;
    const element* element2 = b;
    if (element1->weight < element2->weight)
        return -1;
    if (element1->weight > element2->weight)
        return 1;
    return 0;
}

typedef struct {
    int vertex;
    int weight;
} edge;

int ascending(const void* a, const void* b) {
    const edge* edge1 = a;
    const edge* edge2 = b;
    if (edge2->weight < edge1->weight)
        return -1;
    if (edge2->weight > edge1->weight)
        return 1;
    return 0;
}

int main() {
    priority_queue_t* pq = priority_queue_init(descending);
    printf("size of pq now = %d\n", priority_queue_size(pq));
    int weights[] = { 14,8,15,16,11,1,12,13,4,10,9,3,5,7,2,6,6,6 };
    int size = sizeof(weights) / sizeof(weights[0]);
    // insert each one into priority queue
    for (int i = 0; i < size; ++i) {
        element* el = malloc(sizeof(element));
        // generate string
        char buffer[20];
        sprintf(buffer, "added no: %d", i + 1);
        el->data = malloc(strlen(buffer) + 1);
        strcpy(el->data, buffer);
        el->weight = weights[i];
        priority_queue_insert(pq, el);
    }
    printf("size of pq now = %d\n", priority_queue_size(pq));
    element* el = malloc(sizeof(element));
    el->weight = 22;
    el->data = "hi guys";
    priority_queue_insert(pq, el);
    printf("size of pq now = %d\n", priority_queue_size(pq));
    const element* top = priority_queue_top(pq);
    printf("peek of top item: %d %s\n", top->weight, top->data);
    while (!priority_queue_empty(pq)) {
        element* top = priority_queue_pop(pq);
        printf("top is: %d %s\n", top->weight, top->data);
        free(top);
    }
    printf("size of pq now = %d\n", priority_queue_size(pq));
    priority_queue_free(pq);
    // try using different data/comparator
    pq = priority_queue_init(ascending);
    edge* e1 = malloc(sizeof(edge));
    e1->vertex = 0;
    e1->weight = 1;
    priority_queue_insert(pq, e1);
    edge* e2 = malloc(sizeof(edge));
    e2->vertex = 1;
    e2->weight = 3;
    priority_queue_insert(pq, e2);
    edge* e3 = malloc(sizeof(edge));
    e3->vertex = 2;
    e3->weight = 3; // same weight
    priority_queue_insert(pq, e3);
    while (!priority_queue_empty(pq)) {
        edge* top = priority_queue_pop(pq);
        printf("top is: %d %d\n", top->weight, top->vertex);
        free(top);
    }
    printf("size of pq now = %d\n", priority_queue_size(pq));
    priority_queue_free(pq);
}
Answer:
One remaining item that I have not addressed is the array_resize function. Fixing that is not trivial, so I have left that issue for now. Basically, if realloc fails, what to do(?)
I find it confusing that the pq->array memory is allocated proportional to pq->capacity + 1. Re-write the code so the array's count is pq->capacity.
I'd go for an array_resize() that can affect the state of priority_queue_t* pq to put it into an error/start state. Example
// return true on problem
static bool array_resize(priority_queue_t* pq, size_t new_count) {
if (new_count > 0) {
void *new_pointer = realloc(pq->array, sizeof *pq->array * new_count);
if (new_pointer) { // Success path
pq->array = new_pointer;
pq->capacity = new_count;
return false;
}
if (new_count <= pq->capacity) {
return false; // failure to reduce is not really an error.
}
    // fall through on allocation failure
}
pq->n = 0;
pq->capacity = 0;
free(pq->array);
pq->array = NULL;
return new_count > 0;
}
Other stuff
Cope with unexpected function order usage. True that out-of-order function usage concerning init() and free() is problematic, but what about priority_queue_pop() with nothing in the queue? Code exhibits UB. Better to test and return NULL or somehow stop/warn/handle.
Good ideas like the one below that are useful in debug could be wrapped in a macro. IMO, when the burden is light, just leave it in debug and production code.
#ifndef NDEBUG
pq->array[pq->n + 1] = NULL;
#endif
Note: priority_queue_pop() never shrinks the array allocation to 0. Not a bad design goal. Hmmm.
Minor
Nice improvement.
Add some blank lines to improve clarity.
/* returns 1 if the queue is empty, 0 otherwise. constant time */
int priority_queue_empty(const priority_queue_t* pq);
/* insert an object into the priority queue. insert in logarithmic time */
void priority_queue_insert(priority_queue_t* pq, void* el);
/* pops the 'top' element and removes from the priority queue. pop in logarithmic time */
void* priority_queue_pop(priority_queue_t* pq);
size_t is the "just right" type for array indexing and sizing. Neither too wide nor too narrow.
struct priority_queue {
// int capacity, n;
size_t capacity, n;
...
};
// int priority_queue_size(const priority_queue_t* pq);
size_t priority_queue_size(const priority_queue_t* pq);
Good formatting. Yet look at the question's code. Is there a horizontal scroll bar to show priority_queue.*? To me, that means the code is too wide. Wrap to the review's presentation width. Auto-formatting should make that easy.
Idiomatic compare that many compilers recognize and emit efficient code:
(a > b) - (a < b)
int descending(const void* a, const void* b) {
...
return (element1->weight > element2->weight) -
(element1->weight < element2->weight);
}
Advanced idea:
Comments in .h do not detail what the priority_queue functions will do when items compare equal. Stable? Non-deterministic? If N items were inserted, all at the same priority, is the order in which they pop() determined? A classy priority_queue would return these in a way that prevents a stale item (one that sits in the queue a long time). Perhaps if 2 items have the same priority, favor the one with the lower array index?
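One common way to get the stable behavior suggested above is to break ties with a monotonically increasing sequence number, so that equal-priority items pop in insertion (FIFO) order. A minimal sketch of the idea in Python (a min-priority queue built on heapq; the C code under review would do the equivalent inside its comparator or element struct):

```python
import heapq
from itertools import count

class StablePQ:
    """Min-priority queue where equal-priority items pop in FIFO order."""
    def __init__(self):
        self._heap = []
        self._seq = count()  # tie-breaker: lower sequence = inserted earlier

    def insert(self, priority, item):
        # (priority, seq) compares by priority first, then insertion order,
        # so a "stale" equal-priority item can never be starved.
        heapq.heappush(self._heap, (priority, next(self._seq), item))

    def pop(self):
        priority, _, item = heapq.heappop(self._heap)
        return item

pq = StablePQ()
pq.insert(1, "first")
pq.insert(1, "second")
pq.insert(0, "urgent")
print(pq.pop(), pq.pop(), pq.pop())  # urgent first second
```

The sequence number also sidesteps the "items must be comparable" problem when two priorities are equal, since the comparison never reaches the third tuple element.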
"domain": "codereview.stackexchange",
"id": 29332,
"tags": "algorithm, c, library, priority-queue"
} |
Renormalization of Harmonic Oscillator | Question: In Appendix A, Polchinski does the Euclidean path integral for the Harmonic oscillator. After he Pauli-Villars regularizes the determinant of the kinetic term, he obtains the following expression (A.1.62):
$$
\langle q_f, U | q_i, 0 \rangle \to \left( \frac{\omega}{2 \sinh \omega U}
\right)^{1/2} \exp \left[ - S_{cl}(q_i,q_f) + \frac{1}{2}\left( \Omega U - \ln \Omega \right) - S_{ct} \right] \tag{A.1.62}.
$$
where $\Omega$ is a frequency scale.
In what follows he says:
"To get a finite answer as $\Omega \to \infty$, we need first to include a term $\frac{1}{2} \Omega$ in the Lagrangian $L_{ct}$, canceling the linear divergence in (A.1.62). That is, there is a linearly divergent bare coupling for the operator 1. It may seem strange that we need to renormalize in a quantum mechanics problem, but power counting is completely uniform with quantum field theory. The logarithmic divergence is a wavefunction renormalization.
What does the italic sentence mean? Should the operator $1$ change by some $Z \cdot 1$? Why is the logarithm a wave function renormalization? And what is he power counting?
Answer:
We allow counterterms $L_{\rm ct}= \sum_X \delta_X ~X$ of all possible field monomials/"operator" $X$ in the Lagrangian. Here we need $\delta_1 =\frac{\Omega}{2}$ for the constant monomial $X=1$.
Since $\Omega$ appear linearly in $\delta_1$, we speak of a linear divergence. In principle $\Omega$ could have appeared in other powers, cf. power counting.
The wave function renormalization (aka. field strength renormalization) means that the normalization of the inner product$^1$ $\langle q_f, U | q_i, 0 \rangle$ is changed with a factor $\sqrt{\frac{\pi}{\Omega}}$. Upstairs in an exponential, this becomes a logarithm.
--
$^1$ Here $U=iT$ denotes Euclidean time.
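As a sanity check on the counterterm bookkeeping, one can split the divergent factor explicitly (my own rearrangement of the factors in (A.1.62), not a quote of Polchinski's conventions):

```latex
% Split the divergent exponential in (A.1.62):
\exp\!\left[\tfrac{1}{2}\left(\Omega U-\ln\Omega\right)\right]
  = e^{\Omega U/2}\,\Omega^{-1/2}.
% A counterterm \delta_1 = \Omega/2 for the operator 1 contributes
%   S_{ct} \supset \int_0^U dt\,\tfrac{\Omega}{2} = \tfrac{\Omega U}{2},
% which cancels the linearly divergent factor e^{\Omega U/2}.
% The remaining \Omega^{-1/2} multiplies the amplitude as a whole,
% i.e. it rescales the normalization of the states: the wave function
% renormalization, appearing in the exponent as -\tfrac{1}{2}\ln\Omega.
```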
"domain": "physics.stackexchange",
"id": 49587,
"tags": "quantum-mechanics, harmonic-oscillator, renormalization, path-integral, regularization"
} |
What is a suitable Tensorflow model to classify images into foggy/not foggy? | Question: I want to classify photos taken by multiple webcams that are operating in mountainous regions into foggy / not foggy. The photos are in various sizes and were taken under very different light conditions and in different areas.
I read about Tensorflow and its ready-to-use image recognition models (which of course would have to be re-trained for the foggy/non-foggy categories).
However, these models are trained to classify images into categories according to objects within these images. As I want to classify my images based on their overall appearance (blurry, greyish, far away objects hardly detectable, ...) I was wondering if these models are really suitable or if there is a better approach for this task. Any help is highly appreciated!
Answer: From the sounds of the problem you could probably do something with extracting some features from the images, such as how many edges they have, brightness (to get day/night), and average color values. Then use a simpler classification algorithm such as SVM, KNN, Decision Tree, or Random Forest.
The TensorFlow ready-to-use models look to be very complicated models with a large number of layers, so they will take a long time to train and run. It will also be very hard to train them from scratch (retraining a pretrained network could help with that, though). So I think they might be a bit overkill. Also note that those models were probably made with the ImageNet dataset in mind, which has 1000 classes, whereas you have just 2.
It's very hard to know what will work without seeing the images or being able to try it first.
I would start with simpler, faster methods before trying slower, more complicated methods.
So, in order: try the feature extraction plus classifier; if that is unable to learn a good relationship, then move on to a basic CNN; if that doesn't work, move on to the more complicated CNN models.
With respect to the objects within rather than the overall appearance: for a CNN, using different pooling layers can affect this. E.g. Max Pooling takes the max value of a filter, so it can be largely affected by a small part of the image, whereas Average Pooling is better at looking at the whole image.
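The feature-extraction idea above can be sketched with plain Python (no TensorFlow needed). The features here (brightness, a crude edge count, and a colorfulness measure) and the thresholds are illustrative guesses, not a tested fog detector:

```python
def extract_features(image):
    """image: 2D list of (r, g, b) pixels. Returns (brightness, edges, colorfulness)."""
    pixels = [p for row in image for p in row]
    gray = [sum(p) / 3.0 for p in pixels]
    brightness = sum(gray) / len(gray)
    # Crude edge measure: count large horizontal intensity jumps.
    # Foggy scenes tend to be low-contrast, so few edges.
    edges = 0
    for row in image:
        g = [sum(p) / 3.0 for p in row]
        edges += sum(1 for a, b in zip(g, g[1:]) if abs(a - b) > 30)
    # Colorfulness: mean spread between channels; fog tends toward gray.
    colorfulness = sum(max(p) - min(p) for p in pixels) / len(pixels)
    return brightness, edges, colorfulness

def looks_foggy(image):
    # Thresholds are made up for illustration; tune them on real labeled photos,
    # or feed the features into an SVM / random forest as suggested above.
    brightness, edges, colorfulness = extract_features(image)
    return brightness > 150 and edges == 0 and colorfulness < 10

foggy = [[(200, 200, 205)] * 4 for _ in range(4)]                 # flat, bright, gray
clear = [[(20, 120, 220), (240, 30, 10)] * 2 for _ in range(4)]   # contrasty, colorful
print(looks_foggy(foggy), looks_foggy(clear))  # True False
```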
"domain": "datascience.stackexchange",
"id": 4383,
"tags": "tensorflow, cnn, image-recognition"
} |
Global properties of hereditary classes? | Question: A hereditary class of structures (e.g. graphs) is one that is closed under induced substructures, or equivalently, is closed under vertex removal.
Classes of graphs that exclude a minor have nice properties that do not depend on the specific excluded minor. Martin Grohe showed that for graph classes excluding a minor there is a polynomial algorithm for isomorphism, and fixed-point logic with counting captures polynomial time for these graph classes. (Grohe, Fixed-Point Definability and Polynomial Time on Graphs with Excluded Minors, LICS, 2010.) These can be thought of as "global" properties.
Are there similar "global" properties known for hereditary classes (either graphs or more general structures)?
It would be good to see each answer focus on just one specific property.
Answer: Hereditary properties are very "robust" in the following sense.
Noga Alon and Asaf Shapira showed that for any hereditary property ${\cal P}$, if a graph $G$ needs more than $\epsilon n^2$ edges to be added or removed in order to satisfy ${\cal P}$, then there is a subgraph in $G$, of size at most $f_{\cal P}(\epsilon)$, which does not satisfy ${\cal P}$. Here, the function $f$ only depends on the property ${\cal P}$ (and not on the size of the graph $G$, for instance). Erdős had made such a conjecture only about the property of $k$-colorability.
Indeed, Alon and Shapira prove the following stronger fact: given ${\cal P}$, for any $\epsilon$ in $(0,1)$, there are $N(\epsilon)$, $h(\epsilon)$ and $\delta(\epsilon)$ such that if a graph $G$ has at least $N$ vertices and needs at least $\epsilon n^2$ edges added/removed in order to satisfy ${\cal P}$, then for at least $\delta$ fraction of induced subgraphs on $h$ vertices, the induced subgraph violates ${\cal P}$. Thus, if $\epsilon$ and the property ${\cal P}$ are fixed, in order to test if an input graph satisfies ${\cal P}$ or is $\epsilon$-far from satisfying ${\cal P}$, then one only needs to query the edges of a random induced subgraph of constant size from the graph and check if it satisfies the property or not. Such a tester would always accept graphs satisfying ${\cal P}$ and would reject graphs $\epsilon$-far from satisfying it with constant probability. Furthermore, any property that is one-sided testable in this sense is a hereditary property! See the paper by Alon and Shapira for details. | {
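The one-sided tester described above is easy to sketch: sample a constant-size random induced subgraph and check the property on it. A toy version for triangle-freeness (which is hereditary), with illustrative sample sizes rather than the constants from the Alon–Shapira proof:

```python
import random

def induced_subgraph(adj, vertices):
    """Restrict an adjacency dict (vertex -> set of neighbors) to a vertex set."""
    return {u: {v for v in adj[u] if v in vertices} for u in vertices}

def has_triangle(adj):
    for u in adj:
        for v in adj[u]:
            if adj[u] & adj.get(v, set()):  # common neighbor of an edge u-v
                return True
    return False

def one_sided_tester(adj, sample_size=6, trials=50, rng=random):
    """Always accepts triangle-free graphs; rejects graphs far from
    triangle-free with constant probability (sizes here are illustrative)."""
    vertices = list(adj)
    for _ in range(trials):
        sample = set(rng.sample(vertices, min(sample_size, len(vertices))))
        if has_triangle(induced_subgraph(adj, sample)):
            return False  # found a concrete violating induced subgraph
    return True

complete = {u: set(range(6)) - {u} for u in range(6)}                  # far from triangle-free
bipartite = {u: set(range(3, 6)) if u < 3 else set(range(3)) for u in range(6)}  # triangle-free
print(one_sided_tester(complete), one_sided_tester(bipartite))  # False True
```

Note the one-sidedness: a rejection always comes with a witness subgraph, so triangle-free graphs are never rejected.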
"domain": "cstheory.stackexchange",
"id": 319,
"tags": "cc.complexity-theory, graph-theory, relational-structures"
} |
Aging: Only humans have grey hair? | Question: I am not sure if I ever saw a monkey or a bird age and lose hair pigment as it grows. It maybe due to my lack of information. But it appears to be a general presumption that only humans age and lose hair pigment (hence developing grey hair).
If this is true, then I am curious why only humans?
Answer: I don't think it is restricted to humans. Dogs lose hair pigment as well, usually around the muzzle.
This work used graying of hair to discriminate senior dogs.
I don't know which other mammals lose hair pigment, though, nor why. | {
"domain": "biology.stackexchange",
"id": 10526,
"tags": "human-biology, senescence, hair"
} |
Ros_Controllers Package Catkin_Make Error | Question:
There seems to be an error when looking to catkin_make the Ros_Controllers package:
-- ==> add_subdirectory(ros_controllers/joint_state_controller)
-- +++ processing catkin package: 'joint_trajectory_controller'
-- ==> add_subdirectory(ros_controllers/joint_trajectory_controller)
CMake Error at ros_controllers/joint_trajectory_controller/CMakeLists.txt:116 (add_rostest_gtest):
Unknown CMake command "add_rostest_gtest".
-- Configuring incomplete, errors occurred!
Invoking "cmake" failed
Checking github shows some recent changes linked with add_rostest_gtest, though I'm not sure if this is the reason for catkin_make failing?
Anyone else able to install the package?
Originally posted by cdrwolfe on ROS Answers with karma: 137 on 2013-09-02
Post score: 0
Answer:
The controller should be searching for the rostest dependency correctly (see here). Could it be that you have an older version of rostest installed? The add_rostest_gtest function is relatively new. Could you check if your rostest installation contains this CMake function?
Originally posted by Adolfo Rodriguez T with karma: 3907 on 2013-09-02
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by fritz on 2013-09-04:
Solved my problem.. Thx
Comment by Dave Coleman on 2013-09-05:
This new function has not been released into Groovy yet, for some reason. So ros_controllers is failing to build, causing gazebo_ros_pkgs to fail building as well :/
The ros_comm repo is on version 1.9.48 but the corresponding Groovy debians is only on 1.9.47, which does not contain the add_rostest_gtest function.
Comment by Adolfo Rodriguez T on 2013-09-05:
The version of ros_controllers that uses add_rostest_gtest has been only released in Hydro. As Hydro diverges from Groovy these things will happen more and more (confessions of a Fuerte user ;-). I'd suggest that you overlay ros_comm with the Hydro equivalent, or if that doesn't work, with the Groovy version and add the extra CMake function manually.
Comment by Dave Coleman on 2013-09-05:
I agree, but we have been telling lots of Gazebo users to install ros_control pkgs from source for Groovy and until now its been no problem. Ideally, the ros_controllers hydro branch should feature freeze and we begin development on a new indigo branch.
Comment by Adolfo Rodriguez T on 2013-09-05:
It should strive to not break the (C++, ROS) API within a ROS distro. It's possible to revert the change and use a more verbose cmake command combination, though.
Comment by Dave Coleman on 2013-09-05:
I think if we just create a new release of ros_comm into Groovy the problem should be fixed. | {
"domain": "robotics.stackexchange",
"id": 15402,
"tags": "ros, catkin-make, ros-controllers"
} |
Converting click count data to JSON | Question: I have an array filled with 5 other arrays that consists of arrays of 2 values (first a date then a count):
[
[
[dt, cnt], [dt, cnt], ....
],
[
[dt, cnt], [dt, cnt], ....
],
[
[dt, cnt], [dt, cnt], ....
],
[
[dt, cnt], [dt, cnt], ....
],
[
[dt, cnt], [dt, cnt], ....
],
]
It concerns click statistics of different websites. This data needs to be converted to data to make a consistent chart with Google visualisations. So first a conversion is done in Python, then this result converted to JSON to pass to the Google libs.
My current code is this:
# determine min date
mindate = datetime.date.max
for dataSet in sets:
if (dataSet[0][0] < mindate):
mindate = dataSet[0][0];
# fill a dictionary with all dates involved
datedict = {}
for dat in daterange(mindate, today):
datedict[dat] = [dat];
# fill dictionary with rest of data
arrlen = 2
for dataSet in sets:
# first the values
for value in dataSet:
datedict[value[0]] = datedict[value[0]] + [value[1]];
# don't forget the missing values (use 0)
for dat in daterange(mindate, today):
if len(datedict[dat]) < arrlen:
datedict[dat] = datedict[dat] + [0]
arrlen = arrlen + 1
# convert to an array
datearr = []
for dat in daterange(mindate, today):
datearr = datearr + [datedict[dat]]
# convert to json
result = json.dumps(datearr, cls=DateTimeEncoder)
(The DateTimeEncoder just gives a JavaScript-friendly datetime in the JSON)
The output looks like this:
[
["new Date(2008, 7, 27)", 0, 5371, 1042, 69, 0],
["new Date(2008, 7, 28)", 0, 5665, 1100, 89, 0],
...
]
This is one of my first Python adventures, so I expect this piece of code can be improved upon easily. I want this to be shorter and more elegant. Show me the awesomeness of Python because I'm still a bit disappointed.
I'm using Django, by the way.
Answer: List comprehensions are definitely the way to go here. They allow you to say what you mean without getting mired in the details of looping.
To find the first date, make use of the built-in min() function.
defaultdict is useful for handling lookups where there may be missing keys. For example, defaultdict(int) gives you a dictionary that reports 0 whenever you lookup a date that it doesn't know about.
I assume that each of the lists within sets represents the data for one of your sites. I recommend using site_data as a more meaningful alternative to dataSet.
from collections import defaultdict
import json
def first_date(series):
return series[0][0]
start_date = min(first_date(site_data) for site_data in sets)
# counts_by_site_and_date is just a "more convenient" version of sets, where
# the [dt, cnt] pairs have been transformed into a dictionary where you can
# lookup the count by date.
counts_by_site_and_date = [defaultdict(int, site_data) for site_data in sets]
result = json.dumps([
[date] + [site_counts[date] for site_counts in counts_by_site_and_date]
for date in daterange(start_date, today)
], cls=DateTimeEncoder) | {
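A minimal runnable demo of the defaultdict trick above, with stand-in data and a stand-in daterange (the original question's daterange and DateTimeEncoder are not shown, so here dates are converted to ISO strings instead of using a custom encoder):

```python
import datetime
import json
from collections import defaultdict

def daterange(start, end):
    """Yield each date from start up to and including end (stand-in helper)."""
    for n in range((end - start).days + 1):
        yield start + datetime.timedelta(days=n)

d = datetime.date  # shorthand
sets = [
    [[d(2008, 8, 27), 3], [d(2008, 8, 28), 5]],  # site 1
    [[d(2008, 8, 28), 7]],                       # site 2: no data for the 27th
]
today = d(2008, 8, 28)

start_date = min(site_data[0][0] for site_data in sets)
# defaultdict(int, pairs) builds the lookup and reports 0 for missing dates.
counts = [defaultdict(int, site_data) for site_data in sets]
rows = [[date.isoformat()] + [c[date] for c in counts]
        for date in daterange(start_date, today)]
print(json.dumps(rows))  # [["2008-08-27", 3, 0], ["2008-08-28", 5, 7]]
```

The missing value for site 2 on the 27th comes out as 0 automatically, which is exactly the hole the original loop filled by hand.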
"domain": "codereview.stackexchange",
"id": 11774,
"tags": "python, beginner, datetime, converting, json"
} |
What does the Torque vs Frequency plot mean for a stepper motor | Question: I found this motor which I believed had interesting specs, but when I checked out the data sheet, the torque-speed graph was replaced by a torque-frequency graph! I do not know what it means. How does it intuitively translate to torque at a given speed?
Thank you so much!
Answer: RPM of a stepper motor
Formula for calculating stepping motor speed.
$$RPM = \dfrac{a}{360} \cdot f_z \cdot 60$$
$RPM$ = Revolutions per minute.
$a$ = step angle
$f_z$ = pulse frequency in hertz | {
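Plugging numbers into the formula above (using a hypothetical 1.8° stepper, a common step angle, driven at 1000 Hz):

```python
def stepper_rpm(step_angle_deg, pulse_hz):
    """RPM = (a / 360) * f_z * 60"""
    return step_angle_deg / 360.0 * pulse_hz * 60.0

print(stepper_rpm(1.8, 1000))  # 300.0
```

So a torque-vs-frequency curve reads directly as torque-vs-speed once you scale the frequency axis by a/360 · 60.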
"domain": "engineering.stackexchange",
"id": 3322,
"tags": "torque, stepper-motor"
} |
What is nonlocal resistance? | Question: We are first taught to calculate local resistance, where current and voltage are on the same part of the material.
But many experiments measure nonlocal resistance, where current and voltage are measured on different parts of the material.
What is nonlocal resistance?
What is the advantage of measuring nonlocal resistance than its local counterpart?
How to calculate nonlocal resistance?
If local resistance increases, is it a must that nonlocal resistance increases too?
Answer:
Nonlocal resistance is the ratio of the current in a material to the voltage between some other two points. It is a much less useful quantity, because it depends on the details of induced changes on the conductor and elsewhere; it isn't something that is determined only by local material quantities.
The only advantage is that you can measure it away from the material, if you can't stick a probe in. It can be measured using the electric field far away from the material. The disadvantage is that you then have to break your head to figure out what is going on inside the conducting material itself, which is what you usually care about.
The ratio of voltage to current, where the voltage is between two other points.
In the linear regime, with only materials with linear response and conductors around, the answer is always yes, with one important caveat--- if the resistance is negative (meaning you measured the voltage between two points which for some reason have the opposite voltage of two points at successive positions in the wire), it gets more negative as you increase the current, so it technically goes down. The precise statement is that if you multiply the current by a factor of k, you also multiply the additional voltage elsewhere by a factor of k, so it's a linear relationship.
"domain": "physics.stackexchange",
"id": 5080,
"tags": "electromagnetism, condensed-matter, material-science, quantum-hall-effect"
} |
Which side of a silver atom is "north" and which is "south"? | Question: From what little I've learned about quantum mechanics, I understand that atoms with electrons whose spins don't cancel each other out act like tiny magnets. I assume this means these atoms, like silver atoms, have a north and south pole. But if electrons are described as a giant cloud that surrounds the nucleus, then how do you know which way the particle is "pointing"? Or is spin not a physical direction? Please help me understand!
Answer: Atoms with one unpaired electron, like silver, do indeed act like tiny bar magnets; however, if the atom is in isolation, the direction of north and south is not well defined until you measure it.
Specifically, an atom with one unpaired electron can react in one of two ways to an applied magnetic field gradient: it can either move in the direction of the gradient (we'll call this "spin-up") or against the gradient (we'll call this "spin-down"). No matter which direction your applied field gradient points, you will always measure the atom's spin as either up or down - nothing in between. (This is why we say spin is quantized - measurements of it only take on a finite number of values).
Because of this, the position of north and south on the atom depend on the direction in which we measure. This may seem odd, but that's because the spin of an atom is not like a typical magnetic moment, which is a vector. Instead, it exists as a quantum state, which is a fundamentally different object. In the case of a silver atom (or, in general, a spin-1/2 particle), the spin of an object is a pair of two complex numbers, both of which lie on the unit circle in the complex plane. These two complex numbers are related to the relative probabilities of detecting the atom in the spin-up and spin-down configurations, respectively, relative to some arbitrarily chosen (fixed) direction. If both numbers are nonzero, we say the object is in a superposition of spin states in a certain direction. If one of them is zero, we say the object is in a pure spin state in that direction.
Specifying these two complex numbers actually determines the relative probabilities of measuring spin-up or spin-down in any arbitrary direction. In order to determine the probabilities when measuring in a different direction than the one you originally specified, all you have to do is transform the two complex numbers in a particular way.
So, long story short:
An atom's north and south poles will only ever be measured to be along the direction you're measuring them in, and
Unless the atom is in a pure spin state in the direction you're measuring, you will always have some probability of measuring its north pole pointing in one direction, and some probability of measuring its north pole pointing in the other direction. | {
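The direction dependence described above has a simple closed form for spin-1/2: if the atom is prepared spin-up along one axis and you measure along an axis tilted by angle θ, the probability of again finding "up" is cos²(θ/2). A quick numerical check of this standard result (nothing silver-specific about it):

```python
import math

def p_up(theta):
    """Probability of measuring spin-up along an axis at angle theta
    from the preparation axis, for a spin-1/2 particle."""
    return math.cos(theta / 2.0) ** 2

# Same axis: certain. Perpendicular axis: 50/50. Opposite axis: never.
print(p_up(0.0), p_up(math.pi / 2), p_up(math.pi))
```

Note that even measuring perpendicular to the preparation axis never yields "sideways": just up or down, each half the time.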
"domain": "physics.stackexchange",
"id": 42160,
"tags": "quantum-mechanics, magnetic-fields, electrons, quantum-spin, atoms"
} |
Synthesis of hydrazine with bleach and ammonia | Question: I am trying to make rocket fuel using hydrazine, hydrogen peroxide and home products. I want to put an action figure in orbit.
Is there any way to synthesize hydrazine with bleach and ammonia, using $\ce{H2SO4}$ as a catalyst?
Answer: I sympathize with your desire to launch an action figure. However, the chemistry you propose is really dangerous. You would be better off with black-powder based rockets.
Is there any way to synthesize hydrazine with bleach and ammonia?
Do not do this.
My answer is not "yes" or "no". My answer is Don't. Do not try this without proper safety equipment and training, and then think twice or three times before doing so, and then have an ambulance standing by. Certainly don't do it at home, or by yourself.
This particular reaction is capable of producing hydrazine. However, it also produces a whole host of additional stuff, like chloramine, hydroxylamine, nitrogen trichloride, hydrogen chloride, and chlorine gas, all of which are either gases or volatile liquids, and all of which are poisonous. Hydrazine is poisonous. Nitrogen trichloride is a shock-sensitive explosive, to the point that a sneeze can set it off. The reactions are as follows.
Chloramine:
$$\ce{NH3 + HOCl -> Cl-NH2 + H2O}$$
Dichloramine:
$$\ce{Cl-NH2 + HOCl -> NHCl2 + H2O}$$
Nitrogen trichloride:
$$\ce{NHCl2 + HOCl -> NCl3 + H2O}$$
Hydrazine and HCl:
$$\ce{NH3 + NH2Cl -> H2N-NH2 + HCl}$$
Hydroxylamine and HCl
$$\ce{NH2Cl + H2O -> NH2OH + HCl}$$
Chlorine gas:
$$\ce{HCl + HOCl -> H2O + Cl2}$$
Any organic impurities in the mix can lead to other deadly stuff being produced.
This reaction is not controllable to produce a single specific product. Everything it produces is poisonous and/or explosive. I will recommend against, and I will not help you find a recipe. | {
"domain": "chemistry.stackexchange",
"id": 10045,
"tags": "everyday-chemistry, synthesis, home-experiment"
} |
How to define conditions for state-machines in roby? | Question: I am searching for a way that allows me to wait for some conditions on ports before applying a new state.
My concrete Problem:
I want to make sure that my AUV aligns to the right pipeline. Therefore before starting the pipeline-tracking, I want to check for the current system heading.
My current state-machine looks like this:
find_pipe_back = state target_move_def(:finish_when_reached => false ,
:heading => 1 ...)
pipe_detector = state pipeline_detector_def
pipe_detector.depends_on find_pipe_back, :role => "detector"
start(pipe_detector)
forward pipe_detector.align_auv_event, success_event
roughly I am looking for a way to condition the last-forward.
Answer: Such "conditioning" existed in the pre-syskit days, and it got removed because it made the state machines considerably more complex. Usually, what starts as a simple condition quickly becomes something a lot more involved.
The best way is therefore to create a dedicated composition that would do the monitoring:
class PipelineFinder < Syskit::Composition
argument :expected_heading
add PipelineDetector::Task, :as => 'detector'
def heading_within_tolerance?(current, tolerance)
# Test for tolerance
end
task_script do
# Don't know the name of the relevant port, so I am making it up
reader = detector.pipeline_state_port.reader
wait detector.align_auv_event
if (data = reader.read_new) && heading_within_tolerance?(data.yaw, tolerance)
emit :success
else
emit :false_positive
end
end
end
Note that you could also attach the task script on the detector task in a separate action:
describe('runs the pipeline detector but filters against an expected direction').
required_arg('expected_heading')
def detect_pipeline_with_validation(arguments = Hash.new)
detector = pipeline_detector_def
expected_heading = arguments[:expected_heading]
detector.task_script do
# same script than up there, except that expected_heading
# is in this case a local variable
end
detector
end | {
"domain": "robotics.stackexchange",
"id": 387,
"tags": "rock, syskit"
} |
Rapid code pick and place | Question: Considering the scenario represented in the following image, and the function open_gripper(), close_gripper() and Offs(), present the necessary instructions for the robot to perform the path 1 - 2 - 3, represented by the red line, to transport the block on the table. Consider that the only known point is the point P1. The length of each route is shown in the image. The directions of the robot's movements must be in accordance with the direction of the arrows.
This is my version of the code, but I am not sure that it is right. What am I doing wrong? How should the code look?
Answer: There is a discrepancy in notation. P1 is the current point at the TCP on the image, but 1 denotes the position of the box. By this logic you would need a P4 corresponding to the point on the table marked as 3.
P1 does not seem to be defined in the code, and P2, defined relative to P1, should have -700 as its z coordinate, and P3 should have -200 as its x coordinate, relative to P1. I assume the axes of P1 are not rotated relative to the global CS.
You seem to be closing the gripper before arriving to P2. If P2 is a pickup point you should close the gripper after moving to P2. | {
"domain": "robotics.stackexchange",
"id": 2196,
"tags": "robotic-arm"
} |
Levinson's algorithm and QR decomposition for complex least-squares FIR design | Question: I'm studying Mathias's thesis Algorithms for the Constrained Design of Digital Filters with Arbitrary Magnitude and Phase Response. In section 2.1.2 the complex LS approximation problem is defined by an overdetermined linear system
$$
\mathbf{W}^{1/2}\mathbf{C}^H\mathbf{h}=\mathbf{W}^{1/2}\mathbf{d} \tag{2.8}
$$
where $\mathbf{W}^{1/2}$ is a diagonal squared weighting matrix, $\mathbf{C}$ is a complex matrix which transforms the unknown impulse response vector $\mathbf{h}$ into frequency response, and $\mathbf{d}$ is the vector of desired response. This system can be represented by a normal equations
$$
\mathbf{CWC}^H\mathbf{h} = \mathbf{CWd} \tag{2.9}
$$
The unknown impulse response $\mathbf{h}$ can be solved by Eq. (2.9) as well as Eq. (2.8).
It is stated that solving Eq. (2.8) by QR decomposition is better from a numerical point of view because the condition number is squared in Eq. (2.9). Solving the normal equation, on the other hand, is better in terms of computational effort and memory requirements. This makes sense to me and I agree with it. So I tried to compare these two approaches. Matt provides the Matlab code for the former, and I wrote the latter. However, the results don't support this view.
I use the example design in Matt's code: length 61 bandpass, band edges [.23,.3,.5,.57]*pi, weighting 1 in passband and 10 in stopbands, desired passband group delay 20 samples; and another one in which the passband group delay is 30 samples to make it linear phase while the other requirements remain the same. The results are as follows.
Group delay = 20
Group delay = 30 (linear phase)
It is shown that Levinson's algorithm is better in both magnitude and phase approximation. The error norm of Levinson's in both cases is smaller than the one of QR decomposition. I also tried to solve the normal equation using QR decomposition by h3 = real(C * W * C') \ real(C * W * D); which results in the same filter coefficients as Eq. (2.8).
Another example is to approximate the magnitude response of a linear-phase audio EQ filter with coefficients
b = [1.04065742117985,-3.10743551019314,3.78294931146517,-2.20556080775822,0.510654247004686];
a = [1,-3.08533488056927,3.79819788644552,-2.22766143738208,0.536063093204188];
and the phase response is set to be linear. The FIR length is 255, and the result shows that in this case these two methods perform quite similarly, but QR decomposition cannot give a strictly linear-phase filter.
So my questions are:
Why does Levinson's algorithm outperform QR decomposition in both accuracy and efficiency?
What makes Levinson's algorithm able to design a strictly linear-phase filter while QR decomposition cannot?
EDIT:
Sorry I found something wrong with my code. The weighted error measure should be
e = norm(Wsqrt * C' * h - Wsqrt * D);
e2 = norm(Wsqrt * C' * h2 - Wsqrt * D);
After the bug was fixed, I found that in all cases the $\ell_2$ error of QR decomposition is indeed smaller than the error of Levinson's algorithm, which matches what Matt says in his thesis. So my questions should then be:
Is it a better choice to use QR decomposition instead of Levinson's algorithm for such problems, since computational effort and memory consumption are no longer an issue these days?
What makes Levinson's algorithm able to design a strictly linear-phase filter while QR decomposition cannot?
Matlab code
N = 61;
groupdelay = 20;
om = pi * [linspace(0, .23, 230), linspace(.3, .5, 200), linspace(.57, 1, 430)];
D = [zeros(1, 230), exp(-1j * om(231:430) * groupdelay), zeros(1, 430)];
W = [10 * ones(1, 230), ones(1, 200), 10 * ones(1, 430)];
[h, h2, e, e2] = lslevin(N, om, D, W);
function [h, h2, e, e2] = lslevin(N, om, D, W)
% h = lslevin(N,om,D,W)
% Complex Least Squares FIR filter design using Levinson's algorithm
%
% h filter impulse response
% N filter length
% om frequency grid (0 <= om <= pi)
% D complex desired frequency response on the grid om
% W positive weighting function on the grid om
%
% example: length 61 bandpass, band edges [.23,.3,.5,.57]*pi,
% weighting 1 in passband and 10 in stopbands, desired passband
% group delay 20 samples
%
% om=pi*[linspace(0,.23,230),linspace(.3,.5,200),linspace(.57,1,430)];
% D=[zeros(1,230),exp(-j*om(231:430)*20),zeros(1,430)];
% W=[10*ones(1,230),ones(1,200),10*ones(1,430)];
% h = lslevin(61,om,D,W);
%
% Author: Mathias C. Lang, Vienna University of Technology
% 1998-07
% mattsdspblog@gmail.com
om = om(:); D = D(:); W = W(:); L = length(om);
% DR = real(D); DI = imag(D);
%% solve normal equation using Levinson's algorithm
a = zeros(N, 1); b = a;
% Set up vectors for quadratic objective function
% (avoid building matrices)
dvec = D; evec = ones(L, 1); e1 = exp(1j * om);
for i = 1:N
a(i) = W.' * real(evec);
b(i) = W.' * real(dvec);
evec = evec .* e1; dvec = dvec .* e1;
end
a = a / L; b = b / L;
% Compute weighted l2 solution
h = levin(a, b);
%% solve original overdetermined linear system using QR decomposition
n = (0:N-1).'; % building matrix
C = cos(n*om.'); % real part of matrix C = exp(1j*n*om.')
W = diag(W);
Wsqrt = sqrt(W);
h2 = (Wsqrt * C') \ real(Wsqrt * D);
% h3 = real(C * W * C') \ real(C * W * D); % try to solve normal equation using QR decomposition
%% weighted error measure
e = norm(Wsqrt * C' * h);
e2 = norm(Wsqrt * C' * h2);
end
function x = levin(a, b)
% function x = levin(a,b)
% solves system of complex linear equations toeplitz(a)*x=b
% using Levinson's algorithm
% a ... first row of positive definite Hermitian Toeplitz matrix
% b ... right hand side vector
%
% Author: Mathias C. Lang, Vienna University of Technology, AUSTRIA
% 1997-09
% mattsdspblog@gmail.com
a = a(:); b = b(:); n = length(a);
t = 1; alpha = a(1); x = b(1) / a(1);
for i = 1:n - 1
k = -(a(i + 1:-1:2)' * t) / alpha;
t = [t; 0] + k * flipud([conj(t); 0]);
alpha = alpha * (1 - abs(k)^2);
k = (b(i + 1) - a(i + 1:-1:2)' * x) / alpha;
x = [x; 0] + k * flipud(conj(t));
end
end
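For readers who want to experiment outside MATLAB, here is a line-by-line port of the levin routine above to Python/NumPy (a sketch, not verified against the original author's tests; following the comment in the code, a is taken as the first row of the Hermitian positive-definite Toeplitz matrix, and the variable names mirror the MATLAB original):

```python
import numpy as np

def levinson(a, b):
    # Solve T @ x = b without ever forming T, where T is the Hermitian
    # positive-definite Toeplitz matrix with first row a, i.e.
    # T[i, j] = a[j - i] for j >= i, and conj(a[i - j]) below the diagonal.
    a = np.asarray(a, dtype=complex)
    b = np.asarray(b, dtype=complex)
    n = len(a)
    t = np.array([1.0 + 0j])           # auxiliary "backward" vector
    alpha = a[0]                       # prediction-error power
    x = np.array([b[0] / a[0]])
    for i in range(1, n):
        rev = np.conj(a[i:0:-1])       # conj of a[i], a[i-1], ..., a[1]
        k = -(rev @ t) / alpha         # reflection coefficient
        t = np.append(t, 0) + k * np.conj(np.append(t, 0))[::-1]
        alpha = alpha * (1 - abs(k) ** 2)
        k = (b[i] - rev @ x) / alpha   # update for the right-hand side
        x = np.append(x, 0) + k * np.conj(t)[::-1]
    return x

# sanity check against an explicitly built 3x3 Hermitian Toeplitz matrix
a = np.array([5.0, 1 + 0.5j, 0.3 - 0.2j])
b = np.array([1.0, 2 - 1j, 0.5j])
T = np.array([[a[j - i] if j >= i else np.conj(a[i - j]) for j in range(3)]
              for i in range(3)])
assert np.allclose(T @ levinson(a, b), b)
```

As in the MATLAB code, only O(n) storage is needed, which is the main practical attraction over a generic dense QR solve.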
Answer: Nice work! However, I still believe that QR decomposition of the rectangular system matrix describing the overdetermined system $(2.8)$ is more numerically stable than the solution of the square system $(2.9)$. I'm not sure about the reason for your results, but I'm almost sure that there must be a bug somewhere. Both methods should give very accurate results for such a simple low order design problem.
I used Octave's implementation of the QR solver which is called when using the backslash operator to solve an overdetermined system (c.f. the corresponding doc-page).
I designed the filters in Octave using the following specs (which are the same as in your question):
w = pi * [linspace(0,.23,230),linspace(.3,.5,200),linspace(.57,1,430)];
D = [zeros(1,230),exp(-j*w(231:430)*20),zeros(1,430)];
W = [10*ones(1,230),ones(1,200),10*ones(1,430)];
w = w(:); D = D(:); W = W(:);
srW = sqrt(W);
N = 61;
The corresponding system $(2.8)$ was solved using the following code:
% solve overdetermined system using QR-decomposition
M = w * (0:N-1);
E = exp(-1i*M);
E = srW(:,ones(1,N)) .* E;
D = srW .* D;
h2 = [real(E);imag(E)] \ [real(D);imag(D)];
I solved the system $(2.9)$ using Levinson's algorithm as implemented by the function lslevin.m.
For the filter length $N=61$, both designs are identical, up to numerical precision.
Then I changed the filter length to $N=610$ (without changing the other specs), which gives a useless result because of the transition band which is much too wide for that filter length. But the idea was to create a system that is numerically problematic to solve.
Despite the absurd specs, both methods result in a relatively small approximation error (note that the transition band is a don't care band, which doesn't contribute to the error). But the $L2$-error obtained from solving $(2.9)$ using Levinson's algorithm was 2.74e-07, which is significantly larger than the approximation error 1.62e-10 resulting from solving $(2.8)$ using Octave's QR-solver.
This result seems to confirm my suspicion that in degenerate cases it is numerically more stable to directly solve the overdetermined system $(2.8)$ instead of the square system $(2.9)$. Luckily, in most practical cases I've encountered, with specs that are chosen carefully to match the desired filter length, the efficient solution of $(2.9)$ using Levinson's algorithm is very accurate.
Addressing the two questions in the edited part of your post:
I think that Levinson's algorithm gives sufficiently accurate results in almost all practical cases. It is efficient and it doesn't require storing a matrix due to the Toeplitz structure of the system of equations. I haven't yet come across a practical example for which Levinson's algorithm failed and where it was necessary to resort to QR decomposition to obtain a useful solution.
I haven't been able to reproduce the problem. If the desired group delay equals $(N-1)/2$, where $N$ is the number of taps, then the optimal solution has an exactly linear phase, and this solution can be computed either by Levinson's algorithm or by QR decomposition. In both cases, the resulting filter will have an exactly linear phase, up to numerical accuracy. | {
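The numerical points in this answer are easy to reproduce outside MATLAB/Octave. Below is a small Python/NumPy sketch of a toy weighted least-squares FIR design (an illustrative lowpass with desired group delay $(N-1)/2$ and uniform weighting — my own toy specs, not the ones above). It solves the overdetermined system with a QR-based solver (np.linalg.lstsq) and via the normal equations, confirms both agree for this well-conditioned problem, and confirms that the optimal impulse response comes out symmetric, i.e. exactly linear phase:

```python
import numpy as np

N = 21
gd = (N - 1) / 2                    # desired group delay -> linear phase
om = np.linspace(0, np.pi, 400)
D = np.where(om < 0.4 * np.pi, np.exp(-1j * om * gd), 0)   # toy lowpass
W = np.ones_like(om)                # uniform weighting for simplicity

E = np.exp(-1j * np.outer(om, np.arange(N)))               # L x N system matrix
sw = np.sqrt(W)[:, None]
A = np.vstack([(sw * E).real, (sw * E).imag])              # real-valued stacking
rhs = np.concatenate([(np.sqrt(W) * D).real, (np.sqrt(W) * D).imag])

h_qr, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # direct solve of (2.8)-style system
h_ne = np.linalg.solve(A.T @ A, A.T @ rhs)       # normal equations, (2.9)-style

assert np.allclose(h_qr, h_ne, atol=1e-8)        # identical for a benign problem
assert np.allclose(h_qr, h_qr[::-1], atol=1e-6)  # symmetric h -> exactly linear phase
```

Making the problem degenerate (e.g. N far too large for the specs, as in the experiment above) is where the two solution paths would start to diverge numerically.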
"domain": "dsp.stackexchange",
"id": 11461,
"tags": "filter-design, finite-impulse-response"
} |
Async wrapper around public API | Question: Tear it apart. I'm mostly concerned around the appropriate use of ConfigureAwait(false), and possible poor handling and duplication of the HttpClient in Invest and Get<T>.
Additionally, I'd also like thoughts on the fact that for usage of these methods, one must call .Result to get the actual results. Is this standard for building such a wrapper or is there another way I should achieve this?
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.ServiceModel;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
namespace ProsperAPI
{
public class ProsperApi
{
private readonly string _username;
private readonly string _password;
private readonly string _apiBaseUrl = "https://api.prosper.com/v1/";
private AuthenticationHeaderValue _authenticationHeader;
#region Constructors
public ProsperApi(string username, string password)
{
_username = username;
_password = password;
_authenticationHeader =
new AuthenticationHeaderValue(
"Basic",
Convert.ToBase64String(
Encoding.UTF8.GetBytes(
string.Format("{0}:{1}", _username, _password))));
}
public ProsperApi(string username, string password, string baseUrl) : this(username, password)
{
_apiBaseUrl = baseUrl;
}
#endregion
public async Task<bool> Authenticate()
{
if (String.IsNullOrEmpty(_username))
throw new ArgumentNullException("_username", "Credentials are not set");
if (String.IsNullOrEmpty(_username))
throw new ArgumentNullException("_password", "Credentials are not set");
try
{
// The account call will fail out if credentials are incorrect, thus
// we won't spend time getting Notes. If Account information is right,
// then we load the notes data at the same time, so we can use it
await GetAccount().ConfigureAwait(false);
return true;
}
catch (Exception)
{
return false;
}
}
public async Task<List<Note>> GetNotes()
{
return await Get<List<Note>>("notes/").ConfigureAwait(false);
}
public async Task<Account> GetAccount()
{
return await Get<Account>("account/").ConfigureAwait(false);
}
public async Task<List<Listing>> GetListings()
{
return await Get<List<Listing>>("Listings/").ConfigureAwait(false);
}
public async Task<List<Investment>> GetPendingInvestments()
{
return await Get<List<Investment>>("Investments/$filter=ListingStatus eq 2").ConfigureAwait(false);
}
public async Task<InvestResponse> Invest(string listingId, string amount)
{
using (var client = HttpClientSetup())
{
var investment = new List<KeyValuePair<string, string>>
{
new KeyValuePair<string, string>("listingId", listingId),
new KeyValuePair<string, string>("amount", amount)
};
var content = new FormUrlEncodedContent(investment);
var response = await client.PostAsync("Invest/", content).ConfigureAwait(false);
if (response.StatusCode != HttpStatusCode.OK)
throw new CommunicationException();
var obj = await response.Content.ReadAsStringAsync().ConfigureAwait(false);
return JsonConvert.DeserializeObject<InvestResponse>(obj);
}
}
private async Task<T> Get<T>(string url)
{
using (var client = HttpClientSetup())
{
var response = await client.GetAsync(url).ConfigureAwait(false);
if (response.StatusCode != HttpStatusCode.OK)
throw new CommunicationException();
var obj = await response.Content.ReadAsStringAsync().ConfigureAwait(false);
return JsonConvert.DeserializeObject<T>(obj);
}
}
private HttpClient HttpClientSetup()
{
var client = new HttpClient {BaseAddress = new Uri(_apiBaseUrl)};
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
client.DefaultRequestHeaders.Authorization = _authenticationHeader;
return client;
}
}
}
Answer: There's something odd about your constructors - normally chained constructors call the constructor with the most parameters, not the other way around:
public ProsperApi(string username, string password, string baseUrl)
: this(username, password)
public ProsperApi(string username, string password)
Should be:
public ProsperApi(string username, string password)
: this(username, password, null)
public ProsperApi(string username, string password, string baseUrl)
_apiBaseUrl will be null anyway when you call the 2-parameter constructor. The idea is that you write - and maintain, one constructor body.
This would be it:
public ProsperApi(string username, string password)
: this(username, password, null)
{ }
public ProsperApi(string username, string password, string baseUrl)
{
_username = username;
_password = password;
_apiBaseUrl = baseUrl;
_authenticationHeader =
new AuthenticationHeaderValue(
"Basic",
Convert.ToBase64String(
Encoding.UTF8.GetBytes(
string.Format("{0}:{1}", _username, _password))));
}
With only 1 constructor doing all the construction work, you don't really need a #region here; regions are seldom ever really needed anyway, and more often than not their usage is a smell: if you need to "sectionize" a class, it's likely doing too many things. Now a Constructors region is different, it's more like a comment that says what the code already say.
Besides, you can collapse the constructor bodies, leaving only the signatures visible in the IDE; that's possibly more descriptive than any comment you can write there.
I like your private fields, private readonly fields assigned in the constructor are great. Why isn't _authenticationHeader readonly as well? If it's only meant to be assigned by the constructor, and not tampered with, then it should be made readonly.
In the Authenticate method, your guard clause is a bit weird:
if (String.IsNullOrEmpty(_username))
throw new ArgumentNullException("_username", "Credentials are not set");
if (String.IsNullOrEmpty(_username))
throw new ArgumentNullException("_password", "Credentials are not set");
You're not validating the _password (you're checking for a null or empty _username twice) - nice copy+paste error you got here. But nonetheless, the goal of a guard clause is to fail early, and with _username and _password being readonly and only assignable in the constructor, wouldn't it make more sense to throw at construction time? If you're going to pass illegal constructor arguments that will make the object fail later, you might as well throw right away and refuse to create an instance with null/empty credentials.
Other than by the method's name, it's not very clear why Authenticate needs _username and _password to be set, one has to trace all the way down to HttpClientSetup() to see _authenticationHeader in use, with the actual method call that would probably fail if _username or _password was null or empty. Preventing null/empty credentials at construction would ensure that the instance is always in a state that allows it to make these calls without worrying about the username and password values.
You're storing username and password as private readonly fields, but in reality you don't need to keep either - you only need them to construct the _authenticationHeader. I wouldn't keep that data in the instance, it's not needed. I'd change your constructor to this:
public ProsperApi(string username, string password, string baseUrl)
{
if (string.IsNullOrWhiteSpace(username) || string.IsNullOrWhiteSpace(password))
{
throw new ArgumentException("Username and password cannot be null or empty.");
}
_apiBaseUrl = baseUrl;
_authenticationHeader =
new AuthenticationHeaderValue(
"Basic",
Convert.ToBase64String(
Encoding.UTF8.GetBytes(
string.Format("{0}:{1}", username, password))));
}
Using string.IsNullOrWhiteSpace over string.IsNullOrEmpty will also refuse to take " " as a valid user name or password.
I went with an ArgumentException, because ArgumentNullException should be specifically for when an argument is null when it shouldn't be - and in this case the argument being null isn't the only way to trip the guard clause, so I'd rather throw a more general exception than a potentially lying/confusing specific exception - there doesn't need to be two separate checks, since the exception message will be the same either way.
Lastly, I like how you've named [almost] everything, except that the async methods should, by convention, have the word Async appended as a suffix, like:
public async Task<List<Note>> GetNotesAsync()
public async Task<Account> GetAccountAsync()
public async Task<List<Listing>> GetListingsAsync()
private async Task<T> GetAsync<T>(string url)
Not sure Get<T> or GetAsync<T> is a very descriptive name though. Perhaps GetJsonResultAsync<TResult>? | {
"domain": "codereview.stackexchange",
"id": 8277,
"tags": "c#, api, http, async-await, wrapper"
} |
If the electric field is constant in a region, does it imply the potential is also constant? | Question: I am a bit confused about these concepts (when is the electric field or the electric potential constant?), so I would appreciate it if anyone could brief me about these things as well. Here I am referring to conservative electrostatic fields.
Answer: Electric field lines are always at right angles to equipotential lines or surfaces.
The electric field is minus the potential gradient.
So in the diagram showing a uniform electric field a positive charge would experience a downward force in the direction of decreasing electric potential.
In this case the magnitude of the electric field is $\frac {20}{5} = 4 $ N/C.
If the potential is constant then the electric field is zero.
If the rate of change of potential with distance is constant then the electric field strength is constant. | {
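The last two statements are easy to check numerically. A minimal Python sketch (using the numbers quoted above: a potential dropping linearly from 20 V to 0 V over 5 m) recovers the constant 4 N/C field from the negative gradient, and a zero field from a constant potential:

```python
import numpy as np

y = np.linspace(0, 5, 501)         # metres
V_linear = 20 - 4 * y              # potential falling linearly: 20 V down to 0 V
V_const = np.full_like(y, 20.0)    # constant potential everywhere

E_linear = -np.gradient(V_linear, y)   # E = -dV/dy
E_const = -np.gradient(V_const, y)

assert np.allclose(E_linear, 4.0)      # constant, nonzero field of 4 N/C
assert np.allclose(E_const, 0.0)       # constant potential -> zero field
```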
"domain": "physics.stackexchange",
"id": 42542,
"tags": "electrostatics, electric-fields, potential, voltage"
} |
How does a denser fluid affect the buoyancy force? | Question: I read a story regarding Archimedes' principle in a popular-science magazine, and I am thinking about the following question: how does the density of the fluid change the buoyancy force for the same object? As we know, Archimedes' principle tells us that for any object in a fluid, the buoyancy force equals the weight of the fluid displaced by the object. It is pretty straightforward. Now, if I have an object partially floating on the surface of a liquid, we have
$$
F_b = \Delta V \rho g
$$
where $\Delta V$ is the volume of the displaced liquid and $\rho$ is the density of the liquid. So what happens if we place the same object into a denser liquid? Physically or intuitively, since the liquid is denser, it is harder for the object to 'inject' into the liquid, so the buoyancy force should be bigger and less of the object should submerge into the liquid. But if you look at the math, it does not seem so clear-cut: now $\rho$ is bigger, but the volume of displaced liquid will be smaller, because it is harder to submerge the object into a denser liquid too. So how do we show that in a denser fluid the same object experiences a bigger buoyancy force, rather than the same one?
So my question is: from intuition, the same object in the denser liquid should submerge less than in the less dense liquid. But from the math, the buoyancy force could stay the same or be larger. So how do we prove from the math that our intuition is correct?
Answer: For completely submerged bodies the buoyancy force, being simply equal to the weight of the displaced fluid, is stronger for a denser fluid.
But you know that the buoyancy force for a partially submerged body (like a sailing boat) must be equal to the weight of the body (unless the boat sinks or starts flying like a balloon).
Since the buoyant force is equal to the weight of the displaced fluid, a (non-sinking) boat always displaces the same mass, no matter which fluid, but more volume of a less dense fluid.
A classical example happens if you submerge an egg in water. It sinks to the bottom. Then start adding salt, until eventually the egg will rise. See for example Tommy's webpage.
A quite different question is whether a boat would happily float in a denser fluid like mercury, without turning upside down. The shape of the submerged part is very important for the stability. The buoyancy centre must be higher than the centre of mass, otherwise it will be unstable (that is why ballast is needed in many cases, to make a boat heavier in its underwater part... too much of the boat above water would result in a dangerously high centre of mass).
EDIT: Ok, when the partially submerged body is in equilibrium, then
$$W_{\text{displaced fluid}}=W_{\text{object}}$$
$$\rho g \Delta V = W_{object}$$
Since $g$ and the weight of the object $W_{\text{object}}$ are fixed, an increase in density means a decrease in the submerged volume, for the equation to hold. | {
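A quick numeric illustration of that last equation (the weight and densities below are arbitrary example values): for a fixed floating weight, the submerged volume shrinks in inverse proportion to the fluid density, while the buoyant force stays pinned at the object's weight:

```python
g = 9.8        # m/s^2
W = 100.0      # N, weight of the floating object (arbitrary example value)

volumes = {}
for rho in (1000.0, 1025.0, 13600.0):   # fresh water, seawater, mercury (kg/m^3)
    V_sub = W / (rho * g)               # submerged volume from rho*g*V = W
    F_b = rho * g * V_sub               # buoyant force
    volumes[rho] = V_sub
    assert abs(F_b - W) < 1e-9          # always exactly the object's weight

assert volumes[1000.0] > volumes[1025.0] > volumes[13600.0]  # denser -> less submerged
```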
"domain": "physics.stackexchange",
"id": 37335,
"tags": "buoyancy"
} |
Meeting point of one falling object and a climbing one | Question: We have a red ball that falls freely, with no initial velocity, from the top of a building 60 meters high.
A green ball is launched vertically upwards, with an initial velocity of 20 m/s, 2 seconds after the red ball is released.
How long does it take for them to meet (in the air, obviously), counting from the moment the red ball is released?
I tried to solve this using Newton's laws of motion, but my result does not match the book's solution (I get 2 seconds, while the book's answer is 3 seconds; however, they don't show us how they got to that result).
Answer: Red Ball:
$$y_r(t) = 60m -(5m/s^2)t^2$$
After 2 seconds:
$$y_r(2)=60m-20m = 40m,$$
$$v_r(2) = -20m/s.$$
Therefore the equation of motion of the red ball is (after 2 seconds):
$$y_r(t) = 40m -(20m/s)t-(5m/s^2)t^2.$$
Now the equation of motion of the green ball:
$$y_g(t)=(20m/s)t-(5m/s^2)t^2.$$
They meet when their positions are the same, so:
$$y_r(t)=y_g(t) \Rightarrow 40m -(20m/s)t-(5m/s^2)t^2=(20m/s)t-(5m/s^2)t^2,$$
$$40m=(40m/s)t\Rightarrow t=1s.$$
Therefore the time since the red ball was first thrown is $2s+1s=3s$. I used the approximation of gravity to $10m/s^2$. The coordinate system is an axis ($y$) set perpendicular to the surface of the Earth where the building is, where $0$ is at the ground and $60$ at the top of the building. | {
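The result can be sanity-checked numerically. A small Python sketch with $g = 10\,\mathrm{m/s^2}$, measuring $t$ from the red ball's release, confirms that both heights coincide at $t = 3\,\mathrm{s}$, at a height of 15 m:

```python
g = 10.0  # m/s^2, same approximation as above

def y_red(t):
    # free fall from 60 m, released at t = 0
    return 60.0 - 0.5 * g * t**2

def y_green(t):
    # launched upward at 20 m/s, 2 s after the red ball
    tau = t - 2.0
    return 20.0 * tau - 0.5 * g * tau**2 if tau >= 0 else 0.0

t_meet = 3.0
assert abs(y_red(t_meet) - y_green(t_meet)) < 1e-9   # they meet at t = 3 s
assert abs(y_red(t_meet) - 15.0) < 1e-9              # 15 m above the ground
```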
"domain": "physics.stackexchange",
"id": 17397,
"tags": "homework-and-exercises, newtonian-mechanics, projectile"
} |
Law of reflection of a moving mirror in special relativity (velocity perpendicular to the normal) | Question: I have seen this question about the law of reflection in special relativity, and it is shown that, if you have a mirror moving in the opposite direction to the mirror's surface normal, the law of reflection doesn't apply anymore. For example, consider the mirror in the $yz$ plane, so that velocity and normal are both on the $x$-axis.
But what about the situation where the mirror is moving perpendicularly to the mirror's surface normal? Consider for example the mirror in the $xz$ plane, with the normal on the $y$-axis and the velocity on the $x$-axis.
I have done some calculations, and in my solution, the law of reflection does apply in this case. Am I right?
PS: I hope that this is the correct format to ask this question.
EDIT:
This is the procedure I have used:
I looked at the first answer to the question that I have linked before.
I applied the same method that is explained there, but with the new projections of velocities.
My results are (using $c = 1$):
$$
\cos(\theta_i) = \frac{\cos(\theta_i')\sqrt{1-v^2}}{1 + v\sin(\theta_i')} \\
\sin(\theta_i) = \frac{\sin(\theta_i') + v}{1 + v\sin(\theta_i')}
$$
$\cos(\theta_r)$ and $\sin(\theta_r)$ can be derived from the above equations by replacing $\theta_i \to \theta_r$ and $\theta_i' \to \theta_r'$.
In the primed system of reference, $\theta_i' = \theta_r'$, so $\theta_i = \theta_r$. Is this correct? If it isn't, where am I wrong?
Answer:
I have done some calculations, and in my solution, the law of reflection does apply in this case. Am I right?
Yes, you're right. However, I suggest using the following equation, which relates the reflected angle to the incident one as measured in $S$ moving at an arbitrary velocity $v$ with respect to the mirror's rest frame ($S^\prime$): (See this article.)
$$\cos \theta_r=\frac{-2\frac{\boldsymbol{\vec{v}_n}}{c}+(1+\frac{\boldsymbol{\vec{v}_n}^2}{c^2})\cos \theta_i}{1-2\frac{\boldsymbol{\vec{v}_n}}{c}\cos \theta_i+\frac{\boldsymbol{\vec{v}_n}^2}{c^2}},$$
where $\boldsymbol{\vec{v}_n}$ is always the velocity vector projection onto the mirror's normal. In your example, when the mirror is set in motion perpendicular to its normal, we have $\boldsymbol{\vec{v}_n}=0$, and thus:
$$\cos \theta_r=\cos \theta_i \rightarrow \theta_r=\theta_i$$
Remember that the light-clock (known as Einstein's light-clock), which is used for deriving the familiar time dilation equation ($t=\gamma t^\prime$), is a good example for your special case: The incident and reflected angles are measured equally by the lab observer as long as the mirror (light-clock) moves perpendicular to its normal. | {
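This special case can also be verified directly from relativistic velocity addition (aberration), without the general formula. The sketch below (Python, with $c = 1$; the angle and speed values are arbitrary) boosts the incident and reflected rays from the mirror frame $S'$ into a frame where the mirror slides along $x$, i.e. perpendicular to its normal $y$, and checks that the two angles measured from the normal stay equal, and that both rays still travel at $c$:

```python
import math

C = 1.0

def boost_velocity(ux, uy, v):
    # Relativistic velocity addition for a boost with speed v along x.
    d = 1.0 + v * ux / C**2
    gamma = 1.0 / math.sqrt(1.0 - (v / C)**2)
    return (ux + v) / d, uy / (gamma * d)

theta = 0.6   # incidence angle from the normal, in the mirror's rest frame
v = 0.5       # mirror's speed along x (perpendicular to its normal)

# Mirror in the xz plane, normal along y: the incident ray moves toward -y,
# the reflected ray toward +y, both at angle theta from the normal.
ui = boost_velocity(C * math.sin(theta), -C * math.cos(theta), v)
ur = boost_velocity(C * math.sin(theta),  C * math.cos(theta), v)

angle = lambda u: math.atan2(abs(u[0]), abs(u[1]))   # angle from the y-axis

assert abs(angle(ui) - angle(ur)) < 1e-12            # law of reflection holds
assert abs(math.hypot(*ui) - C) < 1e-12              # light still moves at c
assert abs(math.hypot(*ur) - C) < 1e-12
```

The boosted components reproduce the $\sin\theta_i$ and $\cos\theta_i$ expressions quoted in the question, so both derivations agree.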
"domain": "physics.stackexchange",
"id": 61077,
"tags": "special-relativity, reflection"
} |
Implementing CircularQueue using Linked List in C++ | Question: I'm currently doing a project, requiring me to implement a CircularQueue data structure in C++:
template<typename Type> class CircularQueue {
template<typename Type> struct CQNode {
Type head;
CQNode* tail;
CQNode() {
this->head = NULL;
this->tail = NULL;
}
CQNode(Type head, CQNode* tail = NULL) {
this->head = head;
this->tail = tail;
}
};
private:
CQNode<Type>* front;
CQNode<Type>* rear;
CQNode<Type>** circularQueue = NULL;
int MAX = 0;
int currentSize = 0;
protected:
public:
/**********************************************************/
/********************** CONSTRUCTORS **********************/
/**********************************************************/
/**
* @brief
*
* [DETAILED DESCRIPTION]
*
* @param capacity integer size capacity of CircularQueue object to be instantiated
*/
CircularQueue(int capacity) {
// Set the maximum queue capacity to the size passed to constructor
this->MAX = capacity;
// Locally set the limit (to be used as array of CQNode size) to MAX
const int limit = this->MAX;
// Create a pointer array of CQNode with size limit and assign to this->circularQueue
this->circularQueue = new CQNode<Type>*[limit];
// Loop over queue capacity, implementing CircularQueue type structure
for (int i = 0; i < limit; i++) {
// set i'th element of circularQueue array to a new CQNode struct
this->circularQueue[i] = new CQNode<Type>();
// if the value of i is greater than 0, assign the tail of the (i-1)'th element
// of circularQueue to i'th element of circularQueue, thereby implementing the Queue
// FIFO type data structure
if (i > 0) {
this->circularQueue[i - 1] = this->circularQueue[i];
}
// otherwise, if i is equal to the capacity - 1 (i.e. the last element of circularQueue array),
// then set the tail of the last element of circularQueue to the first element of circularQueue,
// thereby implementing the circular nature of this queue data structure
else if (i == (limit - 1)) {
this->circularQueue[i - 1]->tail = this->circularQueue[0];
}
}
this->front = NULL;
this->rear = NULL;
}
/**********************************************************/
/********************** DESTRUCTORS ***********************/
/**********************************************************/
/**
* @brief
*
* [DETAILED DESCRIPTION]
*
*/
~CircularQueue() {
//TODO: Destroy Queue
}
/********************************************************/
/*********************** GETTERS ************************/
/********************************************************/
CQNode<Type>* peekFront() {
return this->front;
}
CQNode<Type>* peekRear() {
return this->rear;
}
Type peekHeadOfFront() {
if (this->front != NULL)
return this->front->head;
}
int getCurrentSize() {
return this->currentSize;
}
bool isEmpty() {
return this->rear == NULL;
}
/**********************************************/
/****************** TOSTRING ******************/
/**********************************************/
/**
* @brief
*
* [DETAILED DESCRIPTION]
*
* @return string representation of this CircularQueue object
*/
string toString() {
string temp = "[";
CQNode<Type>* tempNode = this->front;
int i = 0;
while (tempNode != NULL && i < this->currentSize) {
i++;
//cout << temp << endl;
if (tempNode->tail != NULL) {
temp += std::to_string(tempNode->head) + ", ";
}
else {
temp += std::to_string(tempNode->head);
}
tempNode = tempNode->tail;
}
temp += "]";
return temp;
}
/********************************************************/
/******************* QUEUE OPERATIONS *******************/
/********************************************************/
/**
* @brief
*
* [DETAILED DESCRIPTION]
*
* @param item Object to enqueue to this instance of CircularQueue
* @return The item of data structure "Type" enqueued upon this CircularQueue instance
*/
Type enqueue(Type item) {
// if the currentSize is greater than the maximum allowed capacity,
// throw a CircularQueueException
if (this->currentSize == this->MAX) {
throw CircularQueueException("Circular queue is full, cannot enqueue any more objects!");
}
// if the front of this CQ object is null, assign first element of circularQueue array to
// front of queue and set the rear to the front (single-element queue)
if (this->front == NULL) {
//this->front = new CQNode<Type>(item);
this->front = this->circularQueue[0];
this->rear = this->front;
}
// else if the front is not-null, assign the tail of the rear of this CQ object
// to a new CQNode with head = item, and shift the new rear to tail of old rear
else {
this->rear->tail = new CQNode<Type>(item);
this->rear = this->rear->tail;
if (this->currentSize == (this->MAX - 1))
this->rear->tail = this->front;
}
this->currentSize++;
return item;
}
/**
* @brief
*
* [DETAILED DESCRIPTION]
*
* @return The item of data structure "Type" dequeued from this CircularQueue instance
*/
Type dequeue() {
// Create a pointer to a CQNode initialised as NULL
// this variable will store the dequeued node
CQNode<Type>* dequeuedNode = NULL;
// if rear is empty, throw a CircularQueueException
if (this->rear == NULL) {
throw CircularQueueException("Circular queue is already empty, cannot dequeue any objects!");
}
else {
// decrement currentSize of this CircularQueue instance by 1
this->currentSize--;
// assign front of this CircularQueue to dequeuedNode
dequeuedNode = this->front;
// set the new front of the queue to the tail of the current front
this->front = this->front->tail;
}
return dequeuedNode->head;
}
};
Is this a good implementation for such a structure so far? Please mention any bad coding practices or potential problems that you can see with this code.
Note that my toString() function is designed to only print a single "circle" of the queue; if there is a better way to write a toString() for a CircularQueue, then please mention it.
Answer: A few other points:
Since you have defined everything inline inside the class declaration, it would be nice to add one level of indentation to everything inside the class, otherwise the methods don't look like they belong to a class.
MAX is not a great name. Actually, the constructor assigns a capacity variable to it, so why not rename MAX to capacity and clearly indicate that this is the maximum capacity of the backing array? Also, if you properly initialize your member data in the constructor initializer list, you can declare capacity (or former MAX) as a constant (const int capacity;), which is nice, since it doesn't seem like you've devised this data structure to ever change capacity after construction. Quick example:
// This is what a constructor initializer list looks like.
// It is a comma separated list of members. This is unique to C++
// and the recommended way of initializing member data in a constructor.
CircularQueue(int cap)
: capacity(cap)
, currentSize(0)
, circularQueue(new CQNode<Type>*[capacity])
{
// stuff ...
}
this-> qualifying member access should be avoided. It's not mandatory as it is in some other languages, so it's only adding verbosity to your code. But also, this qualifying member access can hide problems of name shadowing, which is a very bad practice. Each entity should have a unique name.
Prefer using nullptr instead of NULL. Very likely that your compiler is C++11 compliant. If not, see to update it, there are several free alternatives out there. nullptr is the default null pointer literal for C++11 and above.
As mentioned by @Zulan, declaring a template class inside a template class with the same parameter name for the template is incorrect (i.e.: template<typename Type> for both CircularQueue and CQNode). If I recall, the Visual Studio compiler did allow this kind of malformed code in the past, so perhaps that's why it is working for you... You don't have to make CQNode a template. It can directly access the parent Type from CircularQueue, so this is an easy fix.
protected section of your class is empty, so you can remove that tag.
Those header comments are way too verbose. Unless you really have a requirement for that kind of mechanical code commenting, don't do it. It's just distracting from the actual code.
You never freed memory in the destructor, which would be the place for it, so your class is leaking resources. However, in modern C++ it is rare to manually manage memory like that. The standard library offers containers meant to keep track of mechanical and error-prone resource management for you. You can either use a std::vector for the node backing store or at the very least a std::unique_ptr with array storage, to make cleanup automated.
Methods that don't modify member data should be const to allow them to be called on a const object and to also clearly signify that to readers. You can learn more about const member functions in here.
A huge architectural improvement and code cleanup would come from dropping this linked list setup and using a plain array. Since the queue is fixed size, you can very easily implement the ring buffer with an array of values and a head and tail indexes. This will make your code much simpler and more efficient, since you'll get rid of a ton of pointer indirections. | {
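That last suggestion — dropping the linked nodes and keeping a fixed array plus a head index and a size — is worth sketching. Here is a minimal Python version of the idea (class and method names are my own; this shows the shape of the design, not drop-in C++):

```python
class RingQueue:
    """Fixed-capacity FIFO queue over a plain array (circular buffer)."""

    def __init__(self, capacity):
        if capacity <= 0:
            raise ValueError("capacity must be positive")
        self._buf = [None] * capacity
        self._head = 0      # index of the oldest element
        self._size = 0

    def enqueue(self, item):
        if self._size == len(self._buf):
            raise OverflowError("queue is full")
        # next free slot, wrapping around via modulo arithmetic
        self._buf[(self._head + self._size) % len(self._buf)] = item
        self._size += 1

    def dequeue(self):
        if self._size == 0:
            raise IndexError("queue is empty")
        item = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)
        self._size -= 1
        return item

    def __len__(self):
        return self._size

# usage: the wrap-around reuses the slot freed by the first dequeue
q = RingQueue(3)
q.enqueue(1); q.enqueue(2); q.enqueue(3)
assert q.dequeue() == 1
q.enqueue(4)
assert [q.dequeue() for _ in range(3)] == [2, 3, 4]
```

Because wrap-around is handled purely by index arithmetic, there is nothing to link, no per-element allocation, and the destructor problem disappears entirely.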
"domain": "codereview.stackexchange",
"id": 16546,
"tags": "c++, queue, circular-list"
} |
Using a lot of IF in loop WHILE to print information | Question: I wrote some code and it is doing what I want.
But, I would like to know if I am doing it right or in the best way.
Could you please give me your thoughts about it? Thank you.
$result = mysqli_query($sql);
$p_before = "";
$year_before = "";
while ($rows = mysqli_fetch_array($result)) {
$p_now= $rows["cop"];
if ($rows["year"] < 10) {
$year = "0" . $rows["year"];
} else {
$year = $rows["year"];
}
if ($rows["month"] < 10) {
$month = "0" . $rows["month"];
} else {
$month = $rows["month"];
}
if ($p_now!== $p_before) {
echo $year . " " . $month . " " . $p_now. "</br>";
echo $rows["cdp"] . "</br>";
}
if ($year_before !== $rows["year"] && $rows["cf"] == $rows["month"]) {
echo $year . " " . $month . " " . "FRST" . "<br>";
}
if ($rows["cdi"] !== null) {
echo $year . " " . $month . " " . $rows["coi"] . "<br>" . $rows["cdi"] . "<br>";
}
if ($rows["cdh"] !== null) {
echo $year . " " . $month . " " . $rows["coh"] . "<br>" . $rows["cdh"] . "<br>";
}
if ($rows["cdf"] !== null) {
echo $year . " " . $month . " " . $rows["cof"] . "<br>" . $rows["cdf"] . "<br>";
}
if ($year_before !== $rows["year"] && $rows["cs"] == $rows["month"]) {
echo $year . " " . $month . " " . "SENM" . "<br>";
}
if ($year_before !== $rows["year"] && $rows["cl"] == $rows["month"]) {
echo $year . " " . $month . " " . "LAST" . "<br>";
}
$p_before = $rows["cop"];
$year_before = $rows["year"];
}
Answer: Extract common logic to functions
Notice the duplicated logic here:
if ($rows["year"] < 10) {
$year = "0" . $rows["year"];
} else {
$year = $rows["year"];
}
if ($rows["month"] < 10) {
$month = "0" . $rows["month"];
} else {
$month = $rows["month"];
}
Extract to a function:
function pad($num) {
return $num < 10 ? "0" . $num : $num;
}
And then reuse instead of copy-paste:
$year = pad($rows["year"]);
$month = pad($rows["month"]);
Another example:
if ($rows["cdi"] !== null) {
echo $year . " " . $month . " " . $rows["coi"] . "<br>" . $rows["cdi"] . "<br>";
}
if ($rows["cdh"] !== null) {
echo $year . " " . $month . " " . $rows["coh"] . "<br>" . $rows["cdh"] . "<br>";
}
if ($rows["cdf"] !== null) {
echo $year . " " . $month . " " . $rows["cof"] . "<br>" . $rows["cdf"] . "<br>";
}
Extract common logic:
function print_date($rows, $year, $month, $key1, $key2) {
    $value1 = $rows[$key1];
    if ($value1 !== null) {
        echo $year . " " . $month . " " . $rows[$key2] . "<br>" . $value1 . "<br>";
    }
}
(Note that $rows has to be passed in as a parameter, because PHP functions do not see variables from the enclosing scope.)
And then reuse instead of copy-paste:
print_date($rows, $year, $month, "cdi", "coi");
print_date($rows, $year, $month, "cdh", "coh");
print_date($rows, $year, $month, "cdf", "cof");
Actually,
now that I see that only the "i", "h", "f" are different,
it might make sense to go further, and improve the helper function so that this shorter usage works too:
print_date($rows, $year, $month, "i");
print_date($rows, $year, $month, "h");
print_date($rows, $year, $month, "f");
Or not, maybe that's too much.
How far you go, I leave that up to you.
Apply this technique to the rest of the code:
try to reuse logic as much as possible,
by extracting to common functions with appropriate parameters.
Stop copy-pasting code. | {
"domain": "codereview.stackexchange",
"id": 15449,
"tags": "php, mysql"
} |
In quantum mechanics does the truth/accuracy of a measurement really matter? | Question: Once John Wheeler said "the past has no meaning or existence unless it exists as a record in the present". So, in an experiment of delayed choice entanglement swapping, if we used a faulty detector to know "which-path" data and the first observer (that will determine the reality) looks at the data, will it affect the result/reality? As he will be convinced into having the data about reality, but maybe not real data.
Answer: Imagine a simpler set up. You have a double slit and you purchase a super fancy which way device and put it next to the left slit. But you forget to remove the wrapper it came with.
So it just goes off randomly at random times based on some thermal properties of say the wall current you plug it into.
You might incorrectly think that almost all the particles are going through the right slit and then be surprised you see an interference pattern.
Now let's analyze what happens in this simpler set up. You can describe the whole configuration in three dimensions. There is a dimension for the state of the which way detector, a dimension for the direction of the particle heading to the screen, and a dimension for the direction along which there is in some places a barrier and in some places a slit. So maybe there is a slit at every $y=0$ except when $1<|x|<2.$ And finally if $z<2$ then the which way detector is reporting no particle and if it is at $z>3$ then it is reporting a particle.
To be clear, the detector is not moving up and down (and if it were, that would be irrelevant); we need to describe the configuration of the entire system, and I chose 3 numbers because it is easy to visualize.
So if the which way detector were functioning then we could write down the Schrödinger equation for the actual experimental setup (particle and which way detector) and then look at the probability current density and its streamlines. Since the wave is defined on configuration space, in our case it assigns a complex number to these three coordinates.
So each configuration starts with $y<0$ and then evolves to have $y$ increase as $x$ does some spreading or focusing depending on the incoming beam, and $z$ starts at $z=1.$ Now in this case the detector is working, so if the configuration evolves (according to the flow determined by the probability current density for the wave determined by the solution to the Schrödinger equation) to go through the left slit then the detector changes and so the current goes up to $z=4$. If it goes through the right it stays at $z=1.$ Since the streamlines are not approaching each other it is just like you have an intensity pattern in the upper left or the lower right. No overlap, no interference. And this is what happens: you get configurations that are different, so no interference. If you took the results of the detector and dumped them into a furnace where the output is lost in the thermal noise you could try to recover the interference, but if any particle anywhere is at all different then the $z$ axis could be the state of that particle, so you'd have to perfectly get rid of that information, not just the part that was of practical use to you.
Now let's look at the broken device, that isn't sensing the particle at all. The configuration starts with $y<0$ and then evolves to have $y$ increase as $x$ does some spreading or focusing depending on the incoming beam, and $z$ starts at $z=1$ and usually just stays there, but every so often, depending on other things, it creeps over to $z=3.1$, and it does this in a way that is unrelated to the $x$ part of the configuration. This means that sometimes the configurations that go through both the slits are both deflected up in configuration space. So they continue to interfere just like normal.
So you have a series of false records of whether it went through. These false records are not something that destroys the interference pattern. Effectively a configuration changes for two reasons. One is like inertia: the wave gives a probability current and so the configuration changes in that direction. Another is like an acceleration: there is a potential associated with the configuration and its gradient would determine an acceleration if it were the only agent. And then there is a change based on the state, which shows up also like a force, and it is responsible for absolutely everything we consider quantum mechanical in origin. It affects every configuration in changing how the configuration streamline changes in a second order way in time. You can think of it as the two contributions to how the current changes: part is from the potential for that configuration and part (the rest) from the state. Or you can just think of the quantum thing as the difference between the actual change in current and what the classical potential would predict for that configuration with that current.
The interference pattern is a quantum effect, so it happens when the configuration has its velocity evolve differently than the classical potential would predict. This happens when two waves overlap in configuration space. Anything that prevents that prevents interference and leads to some degree of classical results.
So for your delayed choice erasure: when you do it right it is like you get an interference pattern deflected down and one deflected up, but you don't have reliable information about which is up or down, and the troughs and peaks line up so well that it looks the same as no interference to you.
If you had faulty information (like the example we had that incorrectly said it went left or right and was actually unrelated) then it's like you took a percentage of the points that were up and were down and labelled them as up and labelled the rest as down. If it was a full 50-50 from each then you'd notice no interference even though it was there. Just like in real life: if you really had two beams with fringes, one up and one down, and you randomly looked at points and wrote the $x$ coordinate and ignored the $z$, then you wouldn't "know" about the patterns.
Whenever someone talks about knowing or a record or something like that. They are trying to avoid using (or sometimes telling you) that the wave is a wave in configuration space.
Which is weird. It's like one of the absolutely most basic facts of the Schrödinger equation, $$i\hbar \frac{\partial}{\partial t}\Psi=\hat H\Psi.$$ | {
"domain": "physics.stackexchange",
"id": 24814,
"tags": "quantum-mechanics, quantum-entanglement, quantum-interpretations, measurement-problem, observers"
} |
Splitting an integer to its digits | Question: I want to split an integer of type unsigned long long to its digits.
Any comments and suggestions are always welcome.
#include <iostream>
#include <string>
#include <vector>
using namespace std;
vector<unsigned short> IntegerDigits(unsigned long long input)
{
vector<unsigned short> output;
const unsigned short n = log10(ULLONG_MAX);
unsigned long long divisor = pow(10, n);
bool leadingZero = true;
for (int i = 0; i < n + 1; i++)
{
unsigned short digit = input / divisor % 10;
if (!leadingZero || digit != 0)
{
output.push_back(digit);
leadingZero = false;
}
divisor /= 10;
}
return output;
}
void main()
{
vector<unsigned short> output = IntegerDigits(ULLONG_MAX);
cout << ULLONG_MAX << ": [";
for (auto y : output)
cout << y << ", ";
cout << "\b\b]";
cout << ""<<endl;
}
Answer: Avoid mixing floating point and integer arithmetic
As mentioned by greybeard, there is a potential problem here:
const unsigned short n = log10(ULLONG_MAX);
ULLONG_MAX is larger than can be exactly represented by a double. This means the result might not be what you expect. The same goes for pow(10, n). While you can compensate for it, it is better to find a way to calculate the length of a number without using floating point math.
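One such way (a sketch in Python; the same division loop works unchanged for an unsigned long long in C++) is to count digits with integer arithmetic only:

```python
def num_digits(n):
    """Count the base-10 digits of a non-negative integer without any
    floating point math: repeatedly divide by 10."""
    digits = 1
    while n >= 10:
        n //= 10
        digits += 1
    return digits
```

This avoids the double rounding problems of log10/pow entirely.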
Keep it simple
Unless performance is a big concern, keep it simple. You don't have to know the number of digits up front if you push trailing digits to the front of the vector, like so:
vector<unsigned short> IntegerDigits(unsigned long long input)
{
vector<unsigned short> output;
while (input)
{
output.insert(output.begin(), input % 10);
input /= 10;
}
// Handle input being equal to 0
if (output.empty())
{
output.push_back(0);
}
return output;
}
Pushing to the front of a std::vector is less efficient, but on the other hand you don't need the double<->int conversions, and you don't need to handle the leading zeros inside the loop.
Avoid using std::endl
Prefer using "\n" over std::endl, the latter is equivalent to the former, but also forces a flush of the output, which can be bad for performance.
Avoid backspaces in the output
You used a neat trick to get rid of the last comma without having to have extra logic inside the for-loop in main(). However, consider that the output might not just be for human consumption, but is written to a file and/or is parsed by another program. In that case, the \b characters are probably unexpected and might cause problems. | {
"domain": "codereview.stackexchange",
"id": 39809,
"tags": "c++, algorithm"
} |
How to make LightGBM to suppress output? | Question: I have tried for a while to figure out how to "shut up" LightGBM. Especially, I would like to suppress the output of LightGBM during training (i.e. feedback on the boosting steps).
My model:
params = {
'objective': 'regression',
'learning_rate' :0.9,
'max_depth' : 1,
'metric': 'mean_squared_error',
'seed': 7,
'boosting_type' : 'gbdt'
}
gbm = lgb.train(params,
lgb_train,
num_boost_round=100000,
valid_sets=lgb_eval,
early_stopping_rounds=100)
I tried to add verbose=0 as suggested in the docs, but this does not work.
https://github.com/microsoft/LightGBM/blob/master/docs/Parameters.rst
Does anyone know how to suppress LightGBM output during training?
Answer: As @Peter has suggested, setting verbose_eval = -1 suppresses most of LightGBM output (link: here).
However, LightGBM may still return other warnings - e.g. No further splits with positive gain. This can be suppressed as follows (source: here ):
lgb_train = lgb.Dataset(X_train, y_train, params={'verbose': -1}, free_raw_data=False)
lgb_eval = lgb.Dataset(X_test, y_test, params={'verbose': -1},free_raw_data=False)
gbm = lgb.train({'verbose': -1}, lgb_train, valid_sets=lgb_eval, verbose_eval=False) | {
"domain": "datascience.stackexchange",
"id": 11224,
"tags": "python, boosting, lightgbm"
} |
Kernels in parameterized complexity | Question: Can anyone explain me what (problem-)kernels are and what's the use of them? My slides say:
The kernel of a parameterized problem $L$ is a transformation $(x,k) \mapsto (x',k')$ such that:
$(x,k) \in L \Leftrightarrow (x',k') \in L$
$|x'| \leq f(k)$ for some function $f$
$k' \leq g(k)$ for some function $g$
transformation must be computed in polynomial time.
My questions are:
How is this connected with a problem being fixed parameter tractable?
What makes kernels useful?
Where does this definition come from?
The example on my slides is for vertex cover, but I don't really get it, cause the slides are kind of short.
Answer: Intuitively, a kernelization algorithm is an algorithm which in polynomial time preprocesses a given instance and outputs an instance whose size is bounded in the parameter. The goal of kernelization is (at least) two-fold. We get provable performance guarantees, i.e., we can prove upper bounds on the output instance, which has applications both in the design of algorithms, and also as a complexity measure.
More formally a kernelization algorithm (often referred to as the kernel), is an algorithm for a problem which on an input $(G,k)$ outputs an equivalent
instance $(G',k')$ with $\max\{|G'|,k'\} \leq f(k)$ for some function $f$.
Furthermore, the algorithm needs to run in polynomial time.
The following result shows that the power of kernels is, so to speak, equivalent to the power of fixed parameter tractability (PDF)
Theorem (Folklore). A problem is fixed parameter tractable if and only if it admits a kernel and is decidable.
Although the notion of kernel coincides with fixed parameter tractability, there is a stronger version of kernelization where we demand the function $f$ above to be a polynomial.
If you want to see the original definitions, I advise you to pick up Downey and Fellows' book on parameterized complexity, or start from Niedermeier's Habilitation thesis mentioned above. There is also a Wikipedia article on Kernelization.
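To make the vertex cover example from your slides concrete, here is a sketch (my own, not from the slides) of the classic Buss kernelization: a vertex of degree greater than $k$ must be in every cover of size at most $k$, and once that rule is exhausted a yes-instance can have at most $k^2$ edges.

```python
def vc_kernel(edges, k):
    """Buss-style kernelization for Vertex Cover (illustrative sketch).

    Input: an iterable of edges (pairs of vertices) and the parameter k.
    Output: an equivalent reduced instance (edges', k'), or None if we can
    already conclude that no vertex cover of size <= k exists.
    """
    edges = set(map(frozenset, edges))
    changed = True
    while changed and k >= 0:
        changed = False
        # Compute vertex degrees of the current graph.
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:
                # A vertex of degree > k must be in every cover of size <= k:
                # take it into the cover, delete its edges, decrease the budget.
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    # After the rule is exhausted, every vertex has degree <= k, so a cover
    # of size <= k can cover at most k*k edges: bigger instances are "no".
    if k < 0 or len(edges) > k * k:
        return None
    return edges, k
```

The returned instance has at most $k^2$ edges, i.e. $|x'| \leq f(k)$ for a polynomial $f$, matching the definition in the slides.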
"domain": "cs.stackexchange",
"id": 4313,
"tags": "parameterized-complexity, algorithm-design"
} |
Consequences of compactness in physics | Question: If we understand spacetime as a $4$-dimensional manifold $M$, from the point of view of physics what are the consquences of a subset of it being compact? My point here is simple: in math we usually think of compactness as some analogue of finiteness because it shares many properties with finite sets, but what are the consequences of this when we deal with physics?
Of course, we need not to go into relativity, we can even think about the usual three space $\mathbb{R}^3$. What are again the consequences of a set $A \subset \mathbb{R}^3$ being compact? Are there any cool things we can get out from this, or we simply use compactness in physics to grant the mathematical properties desired without having any direct impact in the way we understand and interpret those sets?
Answer: If your spacetime is compact and if every point has an open neighborhood that locally looks like special relativity, then it contains closed timelike curves (aka time travel). The argument is standard math.
For each point p, there is an open neighborhood that looks like SR and hence contains points q in p's causal past. So the set of causal futures of every point is an open cover of your spacetime (each p has a q that was in its past, so p is in q's causal future). By compactness there is a finite subcover. Since the subcover is finite, we can make it minimal (consider every subcollection of the subcover, there are finitely many, so go through them and pick one of the ones that is of minimal size that is also a cover and use that instead).
So there are points $q_1$, $\dots,$ $q_n$ such that union of their causal futures contain everything. So in particular $q_1$ needs to be in one of those causal futures. If it were in a different $q_k$'s causal future (i.e. $q_k\neq q_1$ for a different $q_k$ and there is a future pointing path from $q_k$ to $q_1$) then the causal future of $q_1$ was redundant (because it was contained in the causal future of $q_k$), contradicting the minimality. But $q_1$ has to be in one of the $q_k$'s causal futures, so it must be in its own. If it is in its own causal future, there is a closed timelike curve (CTC).
How big a deal is this? If the universe must be truly ancient before it repeats is this such a big deal? After all this curve would have to go through the earlier universe and be subject to super high pressures and densities and radiation and maybe inflation and not much information would survive even if there is a geometric line that wiggled throughout that region.
It is even possible that all closed timelike curves pass through the same incredibly small region, so they are all practically the same and there is no specialness to any of them. To us, far from the hot dense early universe, they all just look like curves that go back to the hot dense early universe, where all the paths near us were very, very near to each other back then, so no one path feels special.
So you could technically have time travel, but where the modern universe looks like it does now and nothing feels special. So not necessarily a big deal. | {
"domain": "physics.stackexchange",
"id": 22855,
"tags": "mathematical-physics, topology"
} |
Heat switch material? | Question: Sort of a more hypothetical/general question here. Does there exist or might there exist a sort of thermal switching material? Like a material that could be switched from a thermal insulator (maybe more of a thermal insulator than air) to a thermal conductor? Kind of like a thermal semiconductor? It would be a useful material if you wanted a device to remain hot or cool off quickly, without using a fan.
Answer: Vanadium dioxide might be what you're looking for. Below a certain temperature it's an insulator, but changes to a conductor above that temperature. The temperature threshold of the material can be changed by doping it with other elements.
Other materials have this property but most switch far below freezing. Vanadium dioxide can switch somewhere between the freezing and boiling points of water, as well as being tuned for the exact temperature, making it potentially useful for household applications.
phys.org: For this metal, electricity flows, but not the heat | {
"domain": "physics.stackexchange",
"id": 46195,
"tags": "thermodynamics, thermal-conductivity"
} |
synchronized access to HttpSession attribute vs static ConcurrentMap with HttpSessionListener | Question: As a part of library implementing websocket Guice scopes I need to maintain an object related to each HttpSession called sessionContextAttributes. Currently sessionContextAttributes is stored as an attribute of a session, which means the access and initialization needs to be synchronized:
ConcurrentMap<Key<?>, Object> getHttpSessionContextAttributes() {
HttpSession session = getHttpSession();
try {
synchronized (session) {
@SuppressWarnings("unchecked")
var sessionContextAttributes = (ConcurrentMap<Key<?>, Object>)
session.getAttribute(SESSION_CONTEXT_ATTRIBUTE_NAME);
if (sessionContextAttributes == null) {
sessionContextAttributes = new ConcurrentHashMap<>();
session.setAttribute(SESSION_CONTEXT_ATTRIBUTE_NAME, sessionContextAttributes);
}
return sessionContextAttributes;
}
} catch (NullPointerException e) {
throw new OutOfScopeException(/* ... */);
}
}
static final String SESSION_CONTEXT_ATTRIBUTE_NAME =
ContainerCallContext.class.getPackageName() + ".contextAttributes";
Now I'm considering to maintain a static ConcurrentMap from HttpSession to sessionContextAttributes like this:
static final ConcurrentMap<HttpSession, ConcurrentMap<Key<?>, Object>> sessionCtxs =
new ConcurrentHashMap<>();
ConcurrentMap<Key<?>, Object> getHttpSessionContextAttributes() {
try {
return sessionCtxs.computeIfAbsent(
getHttpSession(), (ignored) -> new ConcurrentHashMap<>());
} catch (NullPointerException e) {
throw new OutOfScopeException(/* ... */);
}
}
This simplifies getHttpSessionContextAttributes() method and is probably slightly faster, BUT introduces static data AND requires manual removal to prevent leaking of sessionContextAttributes:
static class SessionContextJanitor implements HttpSessionListener {
@Override
public void sessionDestroyed(HttpSessionEvent event) {
sessionCtxs.remove(event.getSession()); // this avoids resource leaks
}
}
I cannot confidently tell which approach is better: both have pros&cons. Hence I'm posting it here for review: maybe someone knows some reasons that make 1 of these approaches significantly better than the other.
Answer: After some more research and thinking, it turned out that both of the above approaches are wrong ;-]
The reason in both cases is that user Filters may put the HttpSession object into a wrapper, so neither synchronizing on it nor using it as a key in a ConcurrentHashMap is safe :/
Furthermore, in case of "maintaining a static ConcurrentMap" solution, sessionContextAttributes would not get serialized together with its session for example when sharing between cluster nodes or temporarily storing on disk to use memory for other requests.
Instead, I use HttpSessionListener to create and store the new context as a session attribute:
ConcurrentMap<Key<?>, Object> getHttpSessionContextAttributes() {
try {
return (ConcurrentMap<Key<?>, Object>)
getHttpSession().getAttribute(SESSION_CONTEXT_ATTRIBUTE_NAME);
} catch (NullPointerException e) {
throw new OutOfScopeException(/* ... */);
}
}
static class SessionContextCreator implements HttpSessionListener {
@Override
public void sessionCreated(HttpSessionEvent event) {
event.getSession().setAttribute(
SESSION_CONTEXT_ATTRIBUTE_NAME,
new ConcurrentHashMap<Key<?>, Object>()
);
}
}
While it is not explicitly stated in the spec that sessionCreated(...) will be called on the same thread that first created the session with request.getSession(), it is kinda implied and both Tomcat and Jetty do so. Last but not least, certain Spring utilities assume that this is always the case.
Further discussions about this:
1,
2,
3 | {
"domain": "codereview.stackexchange",
"id": 42412,
"tags": "java, concurrency, servlets"
} |
What makes energy content of a body harder to accelerate it? | Question: There was a video on YouTube's PBS Spacetime channel where the host, Matt O'Dowd, a PhD and a working physicist, explained the "Real meaning of $E=mc^2$" by saying that even at the most microscopic level, even the fundamental particles could be thought of as really a manifestation of the binding energy of things that are bound together to create the particle. This would mean that there is no such thing as mass and all of physics could be explained via the energy picture only. While I find this somewhat convincing, I don't understand how this picture can explain why heavier things are harder to accelerate. What is it about a body that makes it harder to push if it just has more energy? One possible explanation I've found in the comments section says that if you have two bodies of different mass and want them to have the same velocity, obviously the heavier one of them will have more KE and thus you need to input more energy pushing it. Is this correct?
Answer: Well, let's go back to Einstein's paper from September 1905, which demonstrated the famous relation $E=mc^2$. It's quite short, and well worth a read. An English translation is available here. Note that at the time Einstein actually used $L$ where we would now use $E$.
What Einstein does is he considers a stationary body (in some inertial reference frame) which emits an equal amount of light in two opposite directions. Because the light emitted in each direction is the same, the body remains stationary after this emission. However, because the emitted light has energy, the body has necessarily lost some energy.
Einstein then considers the same body viewed from a different inertial reference frame, moving with velocity $v$ relative to the first frame. By transforming the equations which describe the energy of the light to this new frame, Einstein shows a greater amount of energy is lost in this reference frame.
What can account for the difference? Einstein argues that because the laws of physics hold equally in either frame, that the body's energy before the experiment must be the same in the two reference frames (up to an arbitrary constant) and likewise the energy after the experiment must be the same (up to that same arbitrary constant) - except, in the second frame the body is not at rest! It has an additional form of energy, kinetic energy, both before and after emitting the light.
So, when comparing the change in the body's energy in the first frame to the change in the body's energy in the second frame, the arbitrary constant cancels out, and the difference can only be due to a change in kinetic energy. Specifically, Einstein finds that if the body loses energy $E$ in the rest frame, then it loses additional kinetic energy $E \left( \frac{1}{\sqrt{1-v^2/c^2}} - 1\right)$ in the other frame. In the limit of $v$ much less than $c$ this simplifies to $\frac{1}{2}\frac{E}{c^2}v^2$ - exactly the classical formula for kinetic energy with $m$ replaced with $E/c^2$. (In this case $m$ is really the change in the body's mass, but if we want we can go a step further and imagine the body giving up all its mass to the outgoing radiation.)
So Einstein concluded that when a body loses energy, its mass goes down by a proportional amount, and inferred from this that a body's mass simply is a measure of its energy content.
But suppose we'd never seen the classical formula $E = \frac{1}{2} m v^2$. We might not have then thought to call this energy "mass", but nevertheless Einstein's result shows that more energy can be extracted from a body in motion than from a body at rest, and that this additional energy content in the moving body is proportional to $E/c^2$. If we instead imagine adding energy to a body, the same reasoning holds, and we see that giving a particle an additional rest energy $E$ means that it now takes an additional kinetic energy proportional to $E/c^2$ to get it moving. | {
"domain": "physics.stackexchange",
"id": 26121,
"tags": "special-relativity, energy, inertia"
} |
Use of chain rule in deriving Lorentz velocity transformation | Question: When deriving Lorentz velocity transformation in a boosted frame of reference, we are obviously going to use the Lorentz coordinate transformation and then differentiate it with respect to $t'$. My question is: Since $x'$ is a function of both $x$ and $t$ and they are both functions of $x'$ and $t'$, I'm wondering why we use the chain rule for normal derivatives (single variable) e.g. $\text{d}x'/\text{d}t' = (\text{d}x'/\text{d}t)\times(\text{d}t/\text{d}t')$? The reason this confuses me is because since its a multivariable function, wouldn't we have to take partial derivatives? Hopefully my question makes sense. Thanks in advance!
Answer: It is important to remember what the symbols $x$ and $t$ actually stand for, as a great number of sources tend to overload them which becomes confusing.
In the original Lorentz transformations
$$t'=\gamma\left(t-\frac{vx}{c^2}\right) \\ x'=\gamma\left(x-vt\right)$$
the symbols $x$, $t$, $x'$ and $t'$ are the coordinates of a single event in spacetime. That is, one point in space and one instant in time.
However, when deriving the velocity addition formula, you are giving the symbols $x$ and $x'$ a slightly different meaning: the position of an object as a function of time. Suppose the object is traveling with velocity $u$ in $S$ and $u'$ in $S'$, with $S'$ moving with $v$ relative to $S$.
Therefore, we have
$$x=x(t)=ut\\ x'=x'(t')=u't'$$
This is why $u=\text{d}x/\text{d}t$ and $u'=\text{d}x'/\text{d}t'$ are ordinary derivatives, because they represent the position of an object instead of an event in spacetime. The transformations then become
$$t'=\gamma\left(1-\frac{uv}{c^2}\right)t \\ x'=\gamma\left(u-v\right)t$$ from which we can see that $\text{d}t'/\text{d}t$ and $\text{d}x'/\text{d}t$ are ordinary derivatives too. The desired formula $u'=(u-v)/(1-uv/c^2)$ is also apparent.
In summary, once we use $x$ and $x'$ to represent the positions of the moving object, $x$ and $t$ are no longer independent; neither are $x'$ and $t'$. The Lorentz transformations become functions of a single variable. | {
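As a sanity check (my own sketch, in units where $c=1$ by default), one can Lorentz-transform two events on the worldline $x=ut$ and confirm that the finite difference $\Delta x'/\Delta t'$ reproduces $u'=(u-v)/(1-uv/c^2)$:

```python
import math

def boosted_velocity(u, v, c=1.0):
    """Lorentz-transform two events on the worldline x = u*t into the frame
    moving at v, then take the finite difference dx'/dt'."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t1, t2 = 1.0, 2.0                    # two instants on the worldline
    x1, x2 = u * t1, u * t2
    tp1 = gamma * (t1 - v * x1 / c ** 2)
    xp1 = gamma * (x1 - v * t1)
    tp2 = gamma * (t2 - v * x2 / c ** 2)
    xp2 = gamma * (x2 - v * t2)
    return (xp2 - xp1) / (tp2 - tp1)
```

The derivative is an ordinary one precisely because both events lie on the same worldline, so $x$ is a function of $t$ alone.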
"domain": "physics.stackexchange",
"id": 81139,
"tags": "special-relativity, coordinate-systems, inertial-frames, velocity, calculus"
} |
How to estimate the temperature required for the 2nd energy level of hydrogen to be occupied? | Question: In my astrophysics lecture today we were covering atomic spectra and we saw that at both the hot and cool end, the atomic spectra of stars showed a very weak Balmer series. The reason being at the hot end, most of the hydrogen atoms have been ionised and so there aren't many making the specific transitions and scattering photons. At the cooler end the electrons don't have enough energy to occupy the n=2 states of the hydrogen atom.
I wanted to then find the temperature of a star at which the n=2 orbital would be occupied, because I didn't think it would be so high that even the Sun didn't have many of these filled. My attempt was:
Seeing as the energy required for a transition to the n=2 orbital for hydrogen corresponds to the photons having $E_{photon}=E_{n=2}-E_{n=1}=10.2\:\rm eV$. As $E_{photon}=\frac{hc}{\lambda}$ for photons and according to Wien's displacement law $\lambda_{peak}T=0.0029\:\rm m\,K$, we get $E_{photon}=\frac{hcT}{0.0029}$. So $\frac{hcT}{0.0029}=10.2e$ (with $e$ the elementary charge, converting eV to joules), and this gives $T\approx 24000\:\rm K$.
This is much too large as apparently by this temperature the hydrogen has already been ionised. So how do I find the temperature at which this energy level is likely to be occupied?
Answer: Radiative considerations (via e.g. Wien's law) are inappropriate here: as a starting point, at least, the only thing you should need is Boltzmann statistics, which tell you that the probability of occupation of the $n$th energy level (with energy $E_n$ and degeneracy $g_n$) is
$$
P_n = \frac{1}{Z} g_n e^{-E_n/k_BT},
$$
with the partition function $Z$ functioning as a normalization factor.
For hydrogen under these conditions, you can set $E_1=-13.6\:\rm eV$ and $E_2=-3.4\:\rm eV$, and the degeneracies are $g_1=2$ states for the $n=1$ shell and $g_2=8$ states for the $n=2$ shell. (More generally, $E_n=E_1/n^2$ and $g_n=2n^2$.) As such, the relative occupation of the $n=2$ shell at temperature $T$ is
$$
P_2(T) = \frac{8e^{-E_2/k_BT}}{2+8e^{-E_2/k_BT}+\cdots}.
$$
If the temperature is too low, then the exponential factor kills you, but (because the $n=3$ and higher shells are very close in energy) there is only a very short window in temperature before the higher-lying levels start rising as well (and they have higher degeneracy).
It is reasonably easy to calculate $P_2(T)$ using only the bound states, and this already yields a probability that peaks at about $T\approx 11\,000\:\rm K$ (graphed below). The decay at temperatures higher than this peak will be made steeper once you factor in the ionized states (which makes the calculation significantly harder). | {
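As a quick sanity check of how sharply the exponential suppresses the $n=2$ population (my own sketch, ignoring the partition-function normalization and the higher levels, so this is just the ratio of the $n=2$ and $n=1$ populations):

```python
import math

K_B = 8.617e-5   # Boltzmann constant in eV/K
DELTA_E = 10.2   # E_2 - E_1 for hydrogen, in eV

def n2_over_n1(T):
    """Relative population of the n=2 versus n=1 level of hydrogen,
    (g2/g1) * exp(-(E2 - E1)/(k_B * T)) with g2/g1 = 8/2 = 4."""
    return 4.0 * math.exp(-DELTA_E / (K_B * T))
```

At $T=10\,000\:\rm K$ this ratio is only a few times $10^{-5}$, while at $5000\:\rm K$ it is below $10^{-9}$, which is why cool stars show such weak Balmer lines.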
"domain": "physics.stackexchange",
"id": 57578,
"tags": "quantum-mechanics, homework-and-exercises, thermodynamics, photons, astrophysics"
} |
What makes some metals melt at higher temperature? | Question: I'm looking at the melting temperature of metallic elements, and notice that the metals with high melting temperature are all grouped in some lower-left corner of the $\mathrm{d}$-block. If I take for example the periodic table with physical state indicated at $\pu{2165 K}$:
I see that (apart from boron and carbon) the only elements still solid at that temperature form a rather well-defined block around tungsten (which melts at $\pu{3695 K}$). So what makes this group of metals melt at such high temperature?
Answer: Some factors were hinted, but let me put them in an order of importance and mention some more:
metals generally have a high melting point, because metallic interatomic bonding by delocalized electrons ($\ce{Li}$ having only a few electrons for this "electron sea") between core atoms is pretty effective in those pure element solids compared to alternative bonding types (ionic $\pu{6-20 eV/atom}$ bond energy, covalent 1-7, metallic 1-5, van der Waals much lower). Also, while ionic lattices like $\ce{NaCl}$ have a higher lattice and bonding energy, they have weak interatomic long-range bonding, unlike most metals: they break apart or are easily soluble, whereas metals are malleable and don't break; the electron sea is also the reason for their weldability.
the crystal structure and mass play a minor role among your filtered elements (just look up the crystal structures of those elements), as metallic bonding is not directional, unlike covalent bonding (orbital symmetry). Metals often have half-filled $\mathrm{s}$ and $\mathrm{p}$ bands (more strongly delocalized than $\mathrm{d}$ and $\mathrm{f}$) at the Fermi edge (meaning high conductivity) and therefore many delocalized electrons which can move into unoccupied energy states, yielding the biggest electron sea for bands that are half filled or less.
noble metals like $\ce{Au,Ag}$ have a full $\mathrm{d}$ orbital, hence low reactivity/electronegativity, and are often used as contact materials (high conductivity because of a "very fluid" electron sea consisting only of $\mathrm{s}$-orbital electrons). Unlike tungsten with its half-or-less occupied $\mathrm{d}$-orbitals, they show no interatomic $\mathrm{d-d}$ bonding by delocalized $\mathrm{d}$-electrons; more importantly, a half-filled $\mathrm{d}$-orbital contributes 5 electrons to the energy band, while an $\mathrm{s}$ orbital contributes only 1 and a $\mathrm{p}$ only 3, so the electron sea is bigger among the $\mathrm{d}$-group.
The "packaging" of core atoms in the lattice (interatomic distance) among the high $Z$ atoms (compared to e.g. $\ce{Li}$) is denser (more protons, stronger attraction of shell electrons, smaller interatomic radius), which means stronger interatomic bonding transmitted by the electron sea:
You can see here that in each series ($\ce{Li,\ Na,\ K}$) the melting points rise to a maximum and then decrease with increasing atomic number (lacking unoccupied energy states for delocalized $\mathrm{d}$-electrons), a bigger electron sea being a stronger factor here than slightly denser packing.
Boron, as a semi-metal, shows both metallic and covalent bonding; carbon shows strong directional covalent bonding and is able to build a network of bonds, unlike other non-metal elements, which show covalent intramolecular bonding (e.g., in diatomic molecules) but no strong intermolecular bonding into macromolecules, for lack of unpaired electrons.
So there are some bigger trends for melting points explaining the high melting points of $\mathrm{d}$-metals, but also some minor exceptions to the rule like $\ce{Mn}$. | {
"domain": "chemistry.stackexchange",
"id": 36,
"tags": "melting-point, metal"
} |
Unit testing for a multi-dimensional array class | Question: I have designed a class bunji::Tensor which is a multi-dimensional array. I have designed it to have a similar interface to a multi-dimensional std::vector, except that the bunji::Tensor constructor takes an std::vector which defines the dimensions, i.e.
bunji::Tensor<double, 3> my_tensor({10, 4, 2});
Creates a 3 dimensional tensor of doubles, with dimensions x=10, y=4, and z=2.
I have made some tests for this tensor using the google/googletest testing framework. My tests are below.
#include "tensor.hpp"
#include <gtest/gtest.h>
void test_manual_1d(const std::size_t x)
{
bunji::Tensor<int, 1> tensor({x});
for (std::size_t i = 0; i < x; ++i)
{
EXPECT_EQ(tensor[i], 0);
tensor[i] = i+1;
}
for (std::size_t i = 0; i < x; ++i)
{
EXPECT_EQ(tensor[i], i+1);
}
}
void test_manual_2d(const std::size_t x, const std::size_t y)
{
bunji::Tensor<int, 2> tensor({x, y});
for (std::size_t i = 0; i < x; ++i)
{
for (std::size_t j = 0; j < y; ++j)
{
EXPECT_EQ(tensor[i][j], 0);
tensor[i][j] = (i+1) * (j+1);
}
}
for (std::size_t i = 0; i < x; ++i)
{
for (std::size_t j = 0; j < y; ++j)
{
EXPECT_EQ(tensor[i][j], (i+1) * (j+1));
}
}
}
void test_manual_3d(const std::size_t x, const std::size_t y, const std::size_t z)
{
bunji::Tensor<int, 3> tensor({x, y, z});
for (std::size_t i = 0; i < x; ++i)
{
for (std::size_t j = 0; j < y; ++j)
{
for (std::size_t k = 0; k < z; ++k)
{
EXPECT_EQ(tensor[i][j][k], 0);
tensor[i][j][k] = (i+1) * (j+1) * (k+1);
}
}
}
for (std::size_t i = 0; i < x; ++i)
{
for (std::size_t j = 0; j < y; ++j)
{
for (std::size_t k = 0; k < z; ++k)
{
EXPECT_EQ(tensor[i][j][k], (i+1) * (j+1) * (k+1));
}
}
}
}
void test_manual_4d(const std::size_t x, const std::size_t y, const std::size_t z, const std::size_t v)
{
bunji::Tensor<int, 4> tensor({x, y, z, v});
for (std::size_t i = 0; i < x; ++i)
{
for (std::size_t j = 0; j < y; ++j)
{
for (std::size_t k = 0; k < z; ++k)
{
for (std::size_t l = 0; l < v; ++l)
{
EXPECT_EQ(tensor[i][j][k][l], 0);
tensor[i][j][k][l] = (i+1) * (j+1) * (k+1) * (l+1);
}
}
}
}
for (std::size_t i = 0; i < x; ++i)
{
for (std::size_t j = 0; j < y; ++j)
{
for (std::size_t k = 0; k < z; ++k)
{
for (std::size_t l = 0; l < v; ++l)
{
EXPECT_EQ(tensor[i][j][k][l], (i+1) * (j+1) * (k+1) * (l+1));
}
}
}
}
}
template<std::size_t Dimensions, class Callable>
void nd_for_loop(std::size_t begin, std::size_t end, Callable &&c)
{
for(size_t i = begin; i != end; ++i)
{
if constexpr(Dimensions == 1)
{
c(i);
}
else
{
auto bind_argument = [i, &c](auto... args)
{
c(i, args...);
};
nd_for_loop<Dimensions-1>(begin, end, bind_argument);
}
}
}
TEST(tensor, tensor_manual)
{
nd_for_loop<1>(0, 8500, test_manual_1d);
nd_for_loop<2>(0, 100, test_manual_2d);
nd_for_loop<3>(0, 24, test_manual_3d);
nd_for_loop<4>(0, 12, test_manual_4d);
}
int main(int argc, char **argv)
{
testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Answer: The biggest issue with your test suite is the fact that you had to manually write four versions to handle tensors of up to four dimensions. You should be able to write generic code to test arbitrary-dimension tensors. Ideally, all test_manual_*() functions should be replaced with:
template<typename... Dims>
void test(const Dims&... dims) {
bunji::Tensor<int, sizeof...(Dims)> tensor({dims...});
...
}
Now the question is: how to implement an arbitrarily-nested for-loop in a generic way? The solution is to store the indices in a std::vector or std::array, and treat this as a multidimensional index. Then you can create a function to increment this:
template<std::size_t N>
static bool next(std::array<std::size_t, N>& indices, const std::array<std::size_t, N>& dims) {
for (std::size_t i = 0; i < N; ++i) {
if (++indices[i] != dims[i])
return true;
indices[i] = 0;
}
return false;
}
The way you use it is like so:
template<typename... Dims>
static void test(const Dims&... dims) {
constexpr std::size_t N = sizeof...(Dims);
bunji::Tensor<int, N> tensor({dims...});
std::array<std::size_t, N> indices = {};
std::array<std::size_t, N> dims_arr = {dims...};
do {
...
} while (next<N>(indices, dims_arr));
}
The next issue is how to generically apply [] multiple times. A recursive template function can be used here, similar to how nd_for_loop() works:
template<std::size_t N, std::size_t I = N - 1, typename T>
static auto& get(T& tensor, const std::array<std::size_t, N>& indices) {
if constexpr (I == 0) {
return tensor[indices[0]];
} else {
return get<N, I - 1>(tensor[indices[I]], indices);
}
}
Then inside the do-while-loop above you can write (this needs <numeric> for std::accumulate and <functional> for std::multiplies):
auto& value = get<N>(tensor, indices);
EXPECT_EQ(value, 0);
value = std::accumulate(indices.begin(), indices.end(), 1, std::multiplies<int>());
There are some variations possible on this theme regarding how exactly you pass the dimensions to the various functions. Maybe pass a std::array to test() as well, instead of using a variable number of arguments? That would simplify nd_for_loop(). | {
"domain": "codereview.stackexchange",
"id": 43857,
"tags": "c++, array, unit-testing"
} |
Separating Cats and Dogs | Question: I'm writing an animal shelter program, which keeps a database of different classes of animals (dog, cat, monkey). Functions that create the form and add the animals are very similar (difference is two questions and the rest is the same). Is it possible to divide this functions into parts so that the program code is not repeated?
You can see the differences between the functions here at Diffchecker. They have a lot in common.
private void admitDog() {
boolean lb = false;
boolean nw = false;
String[] options = {"OK"};
JPanel panel = new JPanel();
panel.setLayout(new BoxLayout(panel, BoxLayout.PAGE_AXIS));
JLabel nameLabel = new JLabel("What is his/her name?");
JTextField nameField = new JTextField(10);
panel.add(nameLabel);
panel.add(nameField);
JLabel favLabel = new JLabel("What is his/her favourite food?");
JTextField favField = new JTextField(10);
panel.add(favLabel);
panel.add(favField);
JLabel numTimesLabel = new JLabel("How many times is he/she fed a day?"); // digits only here
JTextField numTimesField = new JTextField(10);
panel.add(numTimesLabel);
panel.add(numTimesField);
JLabel needWalkLabel = new JLabel("Does he need walk?");
JRadioButton needWalkYes = new JRadioButton("Yes");
JRadioButton needWalkNo = new JRadioButton("No");
ButtonGroup needWalkGroup = new ButtonGroup();
needWalkGroup.add(needWalkYes);
needWalkGroup.add(needWalkNo);
needWalkYes.setSelected(true);
panel.add(needWalkLabel);
panel.add(needWalkYes);
panel.add(needWalkNo);
JLabel likeBonesLabel = new JLabel("Does he like bones?");
JRadioButton likeBonesYes = new JRadioButton("Yes");
JRadioButton likeBonesNo = new JRadioButton("No");
ButtonGroup likeBonesGroup = new ButtonGroup();
likeBonesGroup.add(likeBonesYes);
likeBonesGroup.add(likeBonesNo);
likeBonesYes.setSelected(true);
panel.add(likeBonesLabel);
panel.add(likeBonesYes);
panel.add(likeBonesNo);
int selectedOption = JOptionPane.showOptionDialog(null, panel,
"Enter informations", JOptionPane.NO_OPTION, JOptionPane.QUESTION_MESSAGE,
null, options, options[0]); // we'll also add a Cancel option
if(selectedOption == 0)
{
String name = nameField.getText();
String fav = favField.getText();
int numTimes = Integer.parseInt(numTimesField.getText());
if(likeBonesYes.isSelected())
lb=true;
if(needWalkYes.isSelected())
nw=true;
ArrayList<Owner> owners = getOwners();
Dog newDog = new Dog(name, owners, lb, fav, numTimes, nw);
kennel.addAnimal(newDog);
}
}
private void admitCat() {
boolean sr = false;
String[] options = {"OK"};
JPanel panel = new JPanel();
panel.setLayout(new BoxLayout(panel, BoxLayout.PAGE_AXIS));
JLabel nameLabel = new JLabel("What is his/her name?");
JTextField nameField = new JTextField(10);
panel.add(nameLabel);
panel.add(nameField);
JLabel favLabel = new JLabel("What is his/her favourite food?");
JTextField favField = new JTextField(10);
panel.add(favLabel);
panel.add(favField);
JLabel numTimesLabel = new JLabel("How many times is he/she fed a day?"); // digits only here
JTextField numTimesField = new JTextField(10);
panel.add(numTimesLabel);
panel.add(numTimesField);
JLabel shareRunLabel = new JLabel("Does he share run?");
JRadioButton shareRunYes = new JRadioButton("Yes");
JRadioButton shareRunNo = new JRadioButton("No");
ButtonGroup shareRunGroup = new ButtonGroup();
shareRunGroup.add(shareRunYes);
shareRunGroup.add(shareRunNo);
shareRunYes.setSelected(true);
panel.add(shareRunLabel);
panel.add(shareRunYes);
panel.add(shareRunNo);
int selectedOption = JOptionPane.showOptionDialog(null, panel,
"Enter informations", JOptionPane.NO_OPTION, JOptionPane.QUESTION_MESSAGE,
null, options, options[0]); // we'll also add a Cancel option
if(selectedOption == 0)
{
String name = nameField.getText();
String fav = favField.getText();
int numTimes = Integer.parseInt(numTimesField.getText());
if(shareRunYes.isSelected())
sr=true;
ArrayList<Owner> owners = getOwners();
Cat newCat = new Cat(name, owners, sr, fav, numTimes);
kennel.addAnimal(newCat);
}
}
Answer: There are some abstractions that can be done here. I will try to walk you through it a bit.
Extracting methods
This pattern is something you do a lot:
JLabel xxx = new JLabel("???????????");
JTextField yyy = new JTextField(10);
panel.add(xxx);
panel.add(yyy);
This is a perfect opportunity to extract a method:
private JTextField inputField(JPanel panel, String prompt) {
JLabel promptLabel = new JLabel(prompt);
JTextField textField = new JTextField(10);
panel.add(promptLabel);
panel.add(textField);
return textField;
}
Then whenever you need a text-prompt, use:
JTextField nameField = inputField(panel, "What is his/her name?");
Something similar can be done for Yes/No questions:
private JRadioButton yesNoField(JPanel panel, String prompt) {
JLabel promptLabel = new JLabel(prompt);
JRadioButton yes = new JRadioButton("Yes");
JRadioButton no = new JRadioButton("No");
ButtonGroup group = new ButtonGroup();
group.add(yes);
group.add(no);
yes.setSelected(true);
panel.add(promptLabel);
panel.add(yes);
panel.add(no);
return yes;
}
Usage:
JRadioButton needWalkYes = yesNoField(panel, "Does he/she need walk?");
Additionally...
boolean lb = false;
boolean nw = false;
I had to look much further down in your code to see where/how these variables were used. The current names of these variables are not optimal. You might think that "the cool programmers" use short, hard-to-understand variable names, but I'll tell you the truth: we do not. In this case likesBones and needsWalk would be much better names.
Additionally, to better see the usage of these variables, it is recommended to declare them as close to their usage as possible. Don't declare them at the top of your method. In fact, they are not needed at all. You can change the code that currently uses them.
if (selectedOption == 0) {
String name = nameField.getText();
String fav = favField.getText();
int numTimes = Integer.parseInt(numTimesField.getText());
ArrayList<Owner> owners = getOwners();
Dog newDog = new Dog(name, owners, likeBonesYes.isSelected(),
fav, numTimes, needWalkYes.isSelected());
kennel.addAnimal(newDog);
}
Side note: Your Dog class seems to have an excessive amount of parameters to the constructor. This can be a code smell. Consider using setter methods instead, such as setNeedsWalk, setLikesBones, etc. or (advanced concept) use the Builder pattern.
It is possible to do more abstraction as well, but I think this should be good enough for now. You should be able to reduce your code size and code duplication a lot by using the inputField and yesNoField methods. | {
"domain": "codereview.stackexchange",
"id": 12900,
"tags": "java, object-oriented, swing, gui"
} |
Pascal's law: pressure of fluid at different locations | Question: I know it's a stupid question, but I'm really confused by what my teacher says, so I need to check the theory.
Here are just two ordinary connected containers, which are full of water.
On grounds of theory of hydrostatics we can say that :
p3 is greater than p1
p4 is greater than p2
p1 equals p2
p3 equals p4
Let's say:
p1 = 2Pa
p2 = 2Pa
p3 = 5Pa
p4 = 5Pa
But what if piston 1 goes down, like here:
What pressure will be at places p1, p2, p3, and p4?
Will they increase by the same number with maintaining amounts based on hydrostatic theory, or they will be same?
Answer:
Hopefully you know the elementary law: points at the same horizontal height have the same pressure.
Let's see; this also holds for a system with a closed top.
Now we can see that when a force $F$ is applied on piston 1, it moves down by some height $h$ such that $\rho g h A=F$, and thus the pressure at $p_1$, i.e. $F/A$, is equal to the pressure at $p_2$, i.e. $\rho g h$.
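As a small numeric sketch (my own illustration, not part of the original answer; the density and $g$ values are assumed, and the question's example pressures are reused): points at equal height have equal pressure, and a pressure increase applied at the piston is transmitted undiminished to every point (Pascal's law):

```python
RHO, G = 1000.0, 9.81        # assumed water density (kg/m^3) and gravity (m/s^2)

def pressure(p_surface, depth):
    """Hydrostatic pressure at `depth` below a surface held at `p_surface`."""
    return p_surface + RHO * G * depth

# Numbers from the question: p1 = p2 = 2 Pa at the top, p3 = p4 = 5 Pa below.
d = 3.0 / (RHO * G)          # depth chosen so that rho*g*d = 3 Pa
p1 = p2 = pressure(2.0, 0.0)
p3 = p4 = pressure(2.0, d)

# Pushing the piston raises the surface pressure by F/A = 10 Pa (say):
# every point, at any depth, rises by exactly the same amount.
extra = 10.0
p1_new = pressure(2.0 + extra, 0.0)
p3_new = pressure(2.0 + extra, d)
```

So the answer to the question is: all four pressures increase by the same amount, and the hydrostatic differences between levels are maintained.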
And $p_3=p_1+\rho g d$ and $p_4=p_2+\rho g d$, where $d$ is the depth of point $p_3$. So they also have the same pressure. | {
"domain": "physics.stackexchange",
"id": 9329,
"tags": "homework-and-exercises, water, pressure, fluid-statics"
} |
Justification of bilinear transform | Question: I would like to understand the "justification" for the bilinear transform. The basic idea as I understand it is that by integration rule of Laplace transform we have for continuous $y(t)$:
$$\mathcal{L}\{y\}(s)=\mathcal{L}\{\int_0^ty'(x)dx\}(s)=\frac{1}{s}\mathcal L\{y'\}(s)$$
$$\Leftrightarrow \frac{\mathcal L\{y\}(s)}{\mathcal L\{y'\}(s)}=\frac{1}{s}$$
Using the trapezoid rule gives the approximation ($h$ is the sampling period):
$$y(t)=y(t-h)+\int_{t-h}^ty'(x)dx\approx y(t-h)+\frac{h}{2}(y'(t)+y'(t-h))$$
And taking the $z$ transform using the approximation gives
$$\mathcal Z\{y\}(z)\approx \mathcal Z\{ y[n-1]+\frac{h}{2}(y'[n]+y'[n-1])\}(z)$$
$$\approx z^{-1}\mathcal{Z\{y\}(z)}+\frac{h}{2}(\mathcal Z\{y'\}(z)+z^{-1}\mathcal Z\{y'\}(z))$$
$$\Leftrightarrow\frac{\mathcal Z\{y\}(z)}{\mathcal Z\{y'\}(z)}\approx \frac{h}{2}(\frac{1+z^{-1}}{1-z^{-1}})$$
Putting the above together we finally end up with the estimate
$$\frac{\mathcal L\{y\}}{\mathcal L\{y'\}}(\frac{2}{h}(\frac{1-z^{-1}}{1+z^{-1}}))\approx \frac{\mathcal Z\{y\}(z)}{\mathcal Z\{y'\}(z)}$$
Question 1. How do we get from the above estimate to the desired estimate
$$\mathcal L\{y\}(\frac{2}{h}(\frac{1-z^{-1}}{1+z^{-1}}))\approx \mathcal Z\{y\}(z)$$
Question 2. It is possible to use the trapezoid rule on any linear ODE directly (iterating the rule for higher-order derivatives as needed). However, interestingly, this gives a different answer than the bilinear transform: the resulting coefficients $a_i$ are the same, but the $b_i$s are different.
So it appears that the bilinear transform distorts the start of the impulse. For example, for 1st order Butterworth lowpass filter we have from bilinear approximation that $b_0=b_1$. As such the impulse response ramps up before ramping down, while the continuous response and trapezoidal estimate only ramps down (see example below). Why are the $b_i$s given by the bilinear transform the "best" (e.g. most widely used) estimates (note: we only consider $b_i$s up to the order of the filter)?
Example:
Consider the 1-pole Butterworth filter given by the transfer function:
$$H(s)=\frac{1}{1+s/\omega}$$
The differential eq. is given by:
$$y=-y'/\omega$$
By trapezoid method and the above eq.:
$$y(t+h)\approx y(t)+\frac{h}{2}(y'(t+h)+y'(t))=y(t)-\frac{h\omega}{2}(y(t+h)+y(t))$$
Collecting the terms we get the difference equation
$$y[n+1]=\frac{(1-\frac{h\omega}{2})}{(1+\frac{h\omega}{2})}y[n]$$
Which is the same as with bilinear transform, but lacking the term $b_1$.
Answer: Let me rephrase your derivation of the bilinear transform to clarify the result. For any $t_0<t-T$ we have
$$y(t)=\int_{t_0}^tx(\tau)d\tau=y(t-T)+\int_{t-T}^tx(\tau)d\tau\tag{1}$$
Approximating the right-most integral in $(1)$ by the trapezoidal rule we get
$$y(t)\approx y(t-T)+\frac{T}{2}\big(x(t)+x(t-T)\big)\tag{2}$$
Switching over to samples of $y(t)$ and $x(t)$ and using the sloppy but common and handy notation $y[n]=y(nT)$, we define from $(2)$ the discrete-time (DT) system:
$$y[n]=y[n-1]+\frac{T}{2}\big(x[n]+x[n-1]\big)\tag{3}$$
This system is now used as the discrete-time approximation to an integrator. In the $\mathcal{Z}$-transform domain it is described by
$$H(z)=\frac{Y(z)}{X(z)}=\frac{T}{2}\frac{z+1}{z-1}\tag{4}$$
So the bilinear transform replaces the transfer function of a continuous-time (CT) integrator $H_c(s)=1/s$ by the transfer function $(4)$, i.e., the bilinear transform uses the mapping
$$s\leftrightarrow \frac{2}{T}\frac{z-1}{z+1}\tag{5}$$
So far for part $1$ of your question.
I must admit that I'm not sure I understand part $2$ correctly. Could you give a simple concrete example showing how you obtain two different DT systems from a given CT system?
As for you final question, the bilinear transform is of course not the best way to transform a CT system to a DT system, simply because there cannot be one best way to do so. However, in some applications - such as optimal filter design - it is considered the most useful transform because it preserves optimality of the magnitude. This is the case because the frequency response of the resulting DT system is just a compressed version of the frequency response of the original CT system, i.e., all properties concerning the magnitude (e.g., ripple size, number of zeros in the stopband, etc.) remain unchanged. Just the frequency axis gets warped, but we can take this warping into account when designing the CT system such that the resulting DT system implements the desired band edges.
Furthermore, the bilinear transform is the only direct mapping from the $s$-plane to the $z$-plane that preserves the frequency response in the sense that the $j\omega$-axis is mapped to the unit circle of the $z$-plane. This is not the case with the forward and backward Euler methods. Furthermore, the left half-plane of the complex $s$-plane is mapped to the inside of the unit circle of the $z$-plane, i.e., stability is preserved in the most complete way. This is not the case with the forward Euler method, and even though stability is also preserved with the backward Euler method, the backward Euler method maps the left half-plane to a small region (a circle centered at $z=\frac12$ with radius $\frac12$) inside the unit circle. This means among other things that certain frequency characteristics cannot be achieved with the backward Euler method.
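As a numeric illustration (my own sketch, not from the answer; the values of $\omega_0$ and $T$ are arbitrary), substituting the mapping $(5)$ into the first-order lowpass $H(s)=1/(1+s/\omega_0)$ and applying the trapezoidal rule directly to $y'=\omega_0\big(x-y\big)$ yield exactly the same recursion:

```python
import math

def bilinear_first_order(w0, T):
    """Coefficients from substituting s -> (2/T)(z-1)/(z+1) into
    H(s) = 1/(1 + s/w0).  Returns (b0, b1, a1) for the recursion
    y[n] = -a1*y[n-1] + b0*x[n] + b1*x[n-1]."""
    c = w0 * T / 2
    return c / (1 + c), c / (1 + c), (c - 1) / (1 + c)

def trapezoid_step(w0, T, x, x_prev, y_prev):
    """One step of the trapezoidal rule applied to y' = w0*(x - y)."""
    c = w0 * T / 2
    return ((1 - c) * y_prev + c * (x + x_prev)) / (1 + c)

# Compare impulse responses of the two formulations.
w0, T = 2 * math.pi * 100.0, 1e-3
b0, b1, a1 = bilinear_first_order(w0, T)
y_blt, y_trap = [], []
y_prev = yt_prev = x_prev = 0.0
for x in [1.0] + [0.0] * 9:
    y = -a1 * y_prev + b0 * x + b1 * x_prev      # bilinear-transform recursion
    yt = trapezoid_step(w0, T, x, x_prev, yt_prev)  # trapezoidal rule on the ODE
    y_blt.append(y)
    y_trap.append(yt)
    y_prev, yt_prev, x_prev = y, yt, x
```

Note that $b_0=b_1$ here, which is exactly the "extra" $b_1$ term the question asks about: it appears automatically once the input $x$ is kept in the differential equation, so the two methods agree.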
EDIT: Concerning your example of a first order Butterworth filter
$$H(s)=\frac{Y(s)}{X(s)}=\frac{1}{1+\frac{s}{\omega_0}}\tag{6}$$
you can't ignore the input of the filter in the differential equation, so you get from $(6)$
$$y'(t)=\omega_0\big(x(t)-y(t)\big)\tag{7}$$
Now if you use $(7)$ in the trapezoidal approximation
$$y(t)\approx y(t-T)+\frac{T}{2}\big(y'(t)+y'(t-T)\big)\tag{8}$$
you obtain the same difference equation as if you simply had replaced $s$ in $(6)$ by $\frac{2}{T}\frac{z-1}{z+1}$. This must be the case because the bilinear transform simply is the trapezoidal rule (also called Tustin's method). | {
"domain": "dsp.stackexchange",
"id": 8181,
"tags": "discrete-signals, filter-design, z-transform, bilinear-transform"
} |
Does it help to find the ground state of a spin glass to maximize the number of ferromagnetic bonds? | Question: Consider a classical spin glass in zero field, with degrees of freedom $\{s_i = \pm 1\}$ and the energy $H = \sum_{i \neq j} J_{ij} s_i s_j$, where the $J_{ij}$ are arbitrary real coupling constants. It's well-known that this problem is difficult to solve in general - in fact, it's NP-complete.
The energy spectrum is clearly unchanged if we simultaneously change the sign of all couplings $J_{ij}$ for any single fixed $i$ and all $j \neq i$; this simply corresponds to flipping the spin $i$ while leaving all other spins unchanged (the active transformation version), or equivalently to simply redefining which direction is positive for spin $i$ (the passive transformation version).
I would naively expect that in general, spins glasses with more ferromagnetic bonds are easier to solve than spin glass with more antiferromagnetic bonds. This intuition comes from considering the extreme cases: a system with all ferromagnetic bonds is trivial to solve - all spins point in the same direction - while a system with all antiferromagnetic bonds can still be strongly frustrated, depending on the connectivity pattern. (I don't expect this difficultly to necessarily increase monotonically with the fraction of antiferromagnetic bonds, though.)
If this pattern is indeed true, then that suggests that before we deploy whatever heuristic spin-glass solver we're using to find an approximate ground state, it may help to change variables for certain spins that happen to have an unusually large number of antiferromagnetic bonds, thereby biasing the system toward ferromagnetic bonds before we start trying to solve it. If we keep track of which spins we flipped at the beginning, then it's trivial to reverse that step at the end and solve the original problem encoding.
But I could also imagine several reasons why this wouldn't help, e.g.:
Spin glasses with more ferromagnetic bounds aren't actually significantly easier to solve than those with more antiferromagnetic bonds, so there isn't much or any advantage to doing so.
It's provably unlikely to be able to significantly change the proportion of ferromagnetic bounds from a generic initial bond configuration.
Figuring out which spins need to be flipped in order to significantly increase the number of ferromagnetic bonds turns out to be just as difficult as the original spin glass problem, so that just shifts the original problem to the new problem of figuring out which spins to flip at the beginning and the end.
Is the strategy of flipping spins at the beginning and the end of the optimization in order to maximize the proportion of ferromagnetic bonds actually useful? If not, which of the reasons above (or some other reason) explains why not?
Answer: As far as I can see, maximization of ferromagnetic bonds and minimization of energy are equivalent, so they require the same computational time, which puts them in the same complexity class, NP-complete.
Let us reformulate a model based on maximization of ferromagnetic bonds. Given a lattice of $N$ spins with each spin $s_i=1$, and randomly generated coupling constants $J_{ij}$, the "number" of ferromagnetic bonds $F$ is
$$F=-\sum_{i \neq j}{J_{ij}}=-\sum_{i \neq j}{J_{ij}s_is_j}$$
Here we flip the signs of $J_{ij}$'s with some fixed $i$ as mentioned in your post. We do so until we find a set of couplings $\{\tilde{J}_{ij}\}$ which maximizes $F$. In comparison with the usual spin model, we can start from the configuration with all spins equal to $1$, and flip the spins to minimize the energy.
However, as you said, flipping the signs of the $J_{ij}$'s for some fixed $i$ is equivalent to flipping $s_i$. Therefore, for any path of updates on the spin configuration $\{s_i\}$ in the usual spin glass model, there is an isomorphic path of updates on $\{J_{ij}\}$. Therefore, both methods are equivalent and require the same computational time.
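The gauge equivalence is easy to check numerically. Below is a minimal sketch (my own illustration, with arbitrary random couplings): flipping the signs of all couplings touching one spin leaves the full energy spectrum unchanged, because each configuration's energy maps to that of the configuration with that spin flipped:

```python
import itertools
import random

def energy(J, s):
    """H = sum over pairs i<j of J[i][j] * s[i] * s[j]."""
    n = len(s)
    return sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

def spectrum(J, n):
    """Sorted energies of all 2^n spin configurations."""
    return sorted(energy(J, s) for s in itertools.product([-1, 1], repeat=n))

random.seed(0)
n, k = 5, 2                      # 5 spins; we gauge-flip spin k = 2
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = J[j][i] = random.uniform(-1.0, 1.0)

# Gauge transform: flip every coupling touching spin k.
Jg = [row[:] for row in J]
for j in range(n):
    if j != k:
        Jg[k][j] = -Jg[k][j]
        Jg[j][k] = -Jg[j][k]
```

The sorted spectra of `J` and `Jg` coincide exactly, so in particular the ground-state energies are equal — consistent with the claim that the reformulation cannot be easier than the original problem.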
This also gives the physical meaning of $\{\tilde{J}_{ij}\}$: The ground state of $\{\tilde{J}_{ij}\}$ is ferromagnetic with all spins equal to $+1$ or $-1$ since if any update on $s_i$'s given the fixed $\{\tilde{J}_{ij}\}$ reduces the energy, it implies there is some update on $\{\tilde{J}_{ij}\}$ with the fixed spin configuration that increases $F$. The above explanation shall be enough to answer the reasons you give, and knowing how to increase the ferromagnetic bonds $F$ to its maximum is as hard as knowing how to decrease the energy of the spin glass to its minimum. This is also true for the case when we use some hybrid method which flips the couplings and spins alternatively. | {
"domain": "physics.stackexchange",
"id": 91117,
"tags": "statistical-mechanics, spin-models"
} |
How could an MGA's frame/chassis best be improved? | Question: As an assignment for my Automotive Engineering study, I have to practice manual FEM calculations. I'm free to choose whichever existing construction, and with that I have to make FEM analyses of both the original and the improved construction. I chose the chassis of the '57 MGA i'm restoring, which is shown below:
I suppose there are several ways to improve the driveability, but I expect increased torsional stiffness to have the most impact. The rear bridge seems rather wobbly to me, but I can't really tell; I haven't driven an MGA before.
(If you have better ideas how to improve the chassis please do suggest.)
What would be a good approach to improve the torsional stiffness of the chassis?
With that, I mean what kind of construction/structure would be effective; a triangular structure between the two side beams for instance.
Please note: it doesn't have to be practically feasible or cost effective. It's just for practicing.
Answer: The overall stiffness of the frame is determined by the stiffness of the side beams augmented primarily by the three tubular cross members.
An interesting academic exercise would be to evaluate how changing the diameter of the cross members affects the overall stiffness. | {
"domain": "engineering.stackexchange",
"id": 1313,
"tags": "structural-engineering, finite-element-method, reinforcement"
} |
Thermodynamics of thermometer | Question: Mercury is used in thermometers because it expands significantly with a rise in temperature. However, mercury has a high density relative to water, which means stronger intermolecular forces, making it harder to expand than water. Then why is mercury preferred to water in a thermometer?
Answer: The density of mercury (13.534 g/cm^3) does not imply high intermolecular forces. It simply reflects that the mercury atom is much more massive than a water molecule. The atomic weight of mercury is 200.6, while the molecular weight of water is about 18, so mercury atoms take up $\frac {200.6} {18\cdot 13.534}=0.823$ times as much volume as water molecules. This doesn't say anything about the forces between the atoms.
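The molar-volume arithmetic above can be checked in a couple of lines (a trivial sketch; the density and molar-mass figures are the ones quoted in the answer):

```python
M_HG, RHO_HG = 200.6, 13.534   # mercury: molar mass (g/mol), density (g/cm^3)
M_H2O, RHO_H2O = 18.0, 1.0     # water near room temperature

v_hg = M_HG / RHO_HG           # molar volume of mercury, ~14.8 cm^3/mol
v_h2o = M_H2O / RHO_H2O        # molar volume of water, ~18.0 cm^3/mol
ratio = v_hg / v_h2o           # ~0.823: a mercury atom occupies *less* room
```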
Probably the worst thing about water for a thermometer (assuming you keep it from freezing) is that the expansion is not linear. Water reaches its maximum density at 4 C, so it contracts as the temperature rises from 0 to 4 C. Above 4 C, it expands quite slowly for a while, then more quickly as it heats up. A plot of the volume of water as a function of temperature is below. Thermometers are much easier to make if the curve is linear, which this is not. Over a small range it is close to linear, so it would work well for medical thermometers with a range of $35-42\ \rm C$, say | {
"domain": "physics.stackexchange",
"id": 11222,
"tags": "thermal-radiation, density"
} |
What algorithms exist for construction a DFA that recognizes the language described by a given regex? | Question: All of my textbooks use the same algorithm for producing a DFA given a regex: First, make an NFA that recognizes the language of the regex, then, using the subset (aka "powerset") construction, convert the NFA into an equivalent DFA (optionally minimizing the DFA). I also once heard a professor allude to there being other algorithms. Does anyone know of any? Perhaps one that goes directly from the regex to a DFA without the intermediate NFA?
Answer: There are different algorithms to convert regular expressions to finite automata. You can go directly from regular expressions to DFAs without building any other automaton first by implicitly doing the subset construction while generating the automaton. Another option to directly obtain deterministic automata is to use the method of derivatives.
Checking if a regular expression represents the language containing all strings is a PSPACE complete problem (see this answer for a reference). Checking if a DFA accepts that language can be done in polynomial time, so if you go directly from a regular expression to a DFA, there will be a blow-up somewhere.
My understanding of the literature is that we can choose translations that allow us to localise the blow-up. Meaning, there are different ways to go from a regular expression to a finite automaton, and methods that are linear, or polynomial are preferred. Usually, the exponential costs are pushed into determinization of automata.
There has been a lot of work on identifying sub-families of regular expressions from which we can efficiently generate DFAs. This line of work is dependent on the translation you use. Meaning, you fix a mapping from regular expressions to NFAs and try to characterise the regular expressions which map to DFAs.
The standard construction of automata from regular expressions is not the preferred construction in such work. The constructions of choice produce automata which closely resemble the structure of the regular expression. These constructions use the notion of a derivative of a regular expression.
Derivatives of regular expressions, J. A. Brzozowski. 1964.
A derivative $s$ of a regular expression $r$ with respect to a symbol $a$ from the alphabet is a regular expression representing the language of $r$ with the leading $a$ removed from strings. This notion was extended to partial derivatives of regular expressions by Antimirov.
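As an illustrative sketch (my own, not taken from the papers cited here; the AST and function names are invented for the example), the Brzozowski derivative can be implemented directly over a small regex AST, and matching a string then amounts to repeatedly differentiating and finally checking nullability:

```python
from dataclasses import dataclass

class Re:
    """Base class for a tiny regular-expression AST."""

@dataclass(frozen=True)
class Empty(Re): pass          # the empty language (matches nothing)

@dataclass(frozen=True)
class Eps(Re): pass            # matches only the empty string

@dataclass(frozen=True)
class Sym(Re):
    c: str                     # matches exactly one symbol

@dataclass(frozen=True)
class Union(Re):
    l: Re
    r: Re

@dataclass(frozen=True)
class Cat(Re):
    l: Re
    r: Re

@dataclass(frozen=True)
class Star(Re):
    e: Re

def nullable(r):
    """Does r accept the empty string?"""
    if isinstance(r, (Empty, Sym)):
        return False
    if isinstance(r, (Eps, Star)):
        return True
    if isinstance(r, Union):
        return nullable(r.l) or nullable(r.r)
    return nullable(r.l) and nullable(r.r)   # Cat

def deriv(r, a):
    """Brzozowski derivative: the language of r with a leading `a` removed."""
    if isinstance(r, (Empty, Eps)):
        return Empty()
    if isinstance(r, Sym):
        return Eps() if r.c == a else Empty()
    if isinstance(r, Union):
        return Union(deriv(r.l, a), deriv(r.r, a))
    if isinstance(r, Star):
        return Cat(deriv(r.e, a), r)
    # Cat: differentiate the head; if the head is nullable, the tail may start too
    d = Cat(deriv(r.l, a), r.r)
    return Union(d, deriv(r.r, a)) if nullable(r.l) else d

def matches(r, s):
    for ch in s:
        r = deriv(r, ch)
    return nullable(r)

# (ab)* accepts "", "ab", "abab", ... but not "a" or "aba"
ab_star = Star(Cat(Sym("a"), Sym("b")))
print(matches(ab_star, "abab"), matches(ab_star, "aba"))  # True False
```

The distinct derivatives reachable from a regular expression (modulo simplification) correspond to the states of the DFA this construction yields, which is the sense in which derivatives treat regular expressions as states.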
Partial Derivatives of Regular Expressions and Finite Automata Constructions, V. Antimirov. 1995.
If you think of a state of an automaton as a representation of all strings accepted from that state, (partial) derivatives allow you to treat regular expressions as states. Contrast with the standard textbook construction which intuitively treats regular expressions as automata, not states.
From regular expressions to deterministic automata, G. Berry and R. Sethi, 1986.
The correspondence between regular expressions and states of an automaton and determinism is discussed explicitly by Berry and Sethi, who combine the notion of Brzozowski derivatives with the idea of distinguishing between occurrences of the same symbol to give a syntax-based translation of regular expressions into finite automata.
One-Unambiguous Regular Languages, A. Brüggemann-Klein and Derick Wood, 1998.
This paper builds on earlier work by Brüggemann-Klein and studies cases in which you can use derivatives to generate DFAs in polynomial time. There is a large amount of work following this paper. It was significant from the perspective of web technologies because regular expressions that can be manipulated efficiently (aka, corresponding to DFAs) were important for processing SGML and XML.
There has been much work studying other special cases of deterministic regular expressions. A very recent paper studying when some of these problems can be solved in linear time is from 2012.
Deterministic Regular Expressions in Linear Time, Benoit Groz, Sebastian Maneth, Slawomir Staworko. 2012. | {
"domain": "cstheory.stackexchange",
"id": 1864,
"tags": "automata-theory, regular-expressions, dfa"
} |
A Sudoku game made from Google's Dart language | Question: This is my first real web project and I have never touched JavaScript (barely touched CSS), so I just skipped that and went to dart for fun. Here's a live demo
The code for the dart file is down below. If you could help me improve the code in any way, that would be nice.
Sudoku.dart
import 'dart:html';
TableElement board;
var tbody;
var tableCell;
var currentRow = 0;
var currentCell = 0;
String puzzle = "200000060000075030048090100000300000300010009000008000001020570080730000090000004";
int counter = 0;
void main() {
board = new TableElement();
board..setAttribute("border", "1");
tbody = board.createTBody();
makeSudokuBoard();
ChangeCurrentCell();
window.onKeyDown.listen(mykeyDown);
window.onKeyUp.listen(callCheckBoard);
window.onClick.listen(clickDown);
}
void clickDown(Event e){
CheckCurrentCell();
ChangeCurrentCell();
CheckBoard();
}
void mykeyDown(Event e){
if(e is KeyboardEvent){
KeyboardEvent kevent = e;
switch(e.keyCode){
case 38:
currentRow--;
while(!CheckNextCell())currentRow--;
ChangeCurrentCell();
CheckBoard();
break;
case 40:
currentRow++;
while(!CheckNextCell())currentRow++;
ChangeCurrentCell();
CheckBoard();
break;
case 37:
currentCell--;
while(!CheckNextCell())currentCell--;
ChangeCurrentCell();
CheckBoard();
break;
case 39:
currentCell++;
while(!CheckNextCell())currentCell++;
ChangeCurrentCell();
CheckBoard();
break;
default:
CheckCurrentCell();
ChangeCurrentCell();
CheckBoard();
break;
}
}
}
void callCheckBoard(Event e){
CheckBoard();
}
void CheckBoard(){
var check = 0;
TableRowElement r = board.rows[currentRow];
bool worked = true;
List<TableCellElement> c = r.cells;
var current = tableCell.text;
try{
var foo = int.parse(current);
}catch(e){
worked = false;
}
if(current == ""){}
else if(!(worked)){ChangeColor(); return;}
else{
for(var i = 0; i < c.length; i++){
if(currentCell == i){}
else if(current == c[i].text){
ChangeColor();
return;
}
for(var i = 0; i < board.rows.length; i++){
if(i == currentRow){}
else if(board.rows[i].cells[currentCell].text == current){
ChangeColor();
return;
}
}
int boxRowOffset = (currentRow ~/ 3)*3;
int boxColOffset = (currentCell ~/ 3)*3;
for (int k = 0; k < 3; ++k) // box
for (int m = 0; m < 3; ++m)
if (currentCell == boxColOffset+m && currentRow == boxRowOffset+k){}
else if(board.rows[boxRowOffset+k].cells[boxColOffset+m].text == current){
ChangeColor();
return;
}
}
}
for(var i = 0; i < board.rows.length; i++)
{for(var j = 0; j < board.rows[i].cells.length; j++){
if(board.rows[i].cells[j].text != "") check++;
}
tableCell.style.background = "";
if(check == 81)
document.querySelector("#pageTitle").text = "winner";
}}
void CheckCurrentCell(){
List<TableRowElement> r = board.rows;
for(var i = 0; i < r.length; i++){
List<TableCellElement> c = r[i].cells;
for(var j = 0; j < c.length; j++){
if(c[j] == querySelector(":focus")){
currentRow = i;
currentCell = j;
}
}
}
}
bool CheckNextCell(){
if(currentRow == -1) currentRow = 8;
else if(currentRow == 9) currentRow = 0;
else if(currentCell == -1) currentCell = 8;
else if(currentCell == 9) currentCell = 0;
var tableRow = board.rows[currentRow];
tableCell = tableRow.cells[currentCell];
return tableCell.isContentEditable;
}
void ChangeCurrentCell() {
if(currentRow == -1) currentRow = 8;
if(currentRow == 9) currentRow = 0;
if(currentCell == -1) currentCell = 8;
if(currentCell == 9) currentCell = 0;
var tableRow = board.rows[currentRow];
tableCell = tableRow.cells[currentCell];
tableCell.focus();
}
void ChangeColor(){
//tableCell.style.color = "red";
tableCell.style.background = "#f44";
}
void makeSudokuBoard(){
for(var i = 0; i < 9; i++){
TableRowElement rows = tbody.addRow();
for(var j = 0; j < 9; j++){
String puzzleplace = puzzle.substring(counter, counter+1);
if(puzzleplace == "0"){
rows.insertCell(j).text = "";
rows.cells[j].setAttribute("contenteditable","true");
}else{
rows.insertCell(j).text = puzzleplace;
rows.cells[j].setAttribute("contenteditable","false");
rows.cells[j].classes.add("default");
}
rows.cells[j].setAttribute("onkeypress", "return (this.innerText.length <= 0)");
counter++;
}
}
document.querySelector('#container').append(board);
}
Answer: First, try running the code through the Dart Formatter.
Second, try putting all of the game logic into one or more classes. Most CS professors – and development professionals – hate to see top-level, mutable state.
Finally, it's good to always be reading more code than you're writing. You'll learn a lot. For a game like this you might want to look at Pop, Pop, Win!. It's a good example of separating game state from the game visualization.
If you're using the Dart Editor, you can open the sample by clicking on the link on the Welcome Page.
Keep up the hacking! | {
"domain": "codereview.stackexchange",
"id": 7754,
"tags": "homework, sudoku, dart"
} |
Do the terms "damping constant" and "damping coefficient" have standard uses? | Question: I've heard the terms "damping constant" and "damping coefficient" used to describe both the $c$ from the viscous damping force equation $F = -c\dot{x}$ and the $\gamma$ from the definition $\gamma = \frac{c}{2m}$. Is there standard usage for each term, or is it context dependent? If it is context dependent, are there terms which uniquely refer to each of $c$ and $\gamma$?
Answer: The answer is yes. Talk to any engineer: if you say the terms "damping constant" and "damping ratio", they will know exactly what you mean without any further explanation.
Damping coefficient $c$ signifies the contribution of velocity to force, as in $F = \ldots + c \dot{x} + \ldots$
Damping ratio $\zeta$ is a number that signifies the damping regime. When $\zeta<1$ the problem is underdamped; when $\zeta>1$ it is overdamped; and when $\zeta=1$ it is critically damped. This means the form of the solution (as in the equation used to solve for the motion) is different depending on the regime.
The complementary terms of the above but with respect to stiffness are
Stiffness coefficient $k$, which is the contribution of deflection to force, as in $F = \ldots + k x + \ldots$
Natural frequency $\omega_n$ which combines stiffness and mass to tell you how fast the system responds to inputs.
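A minimal numeric sketch of these definitions (my own illustration; it assumes SI units and the substitution $c = 2 \zeta m \omega_n$ used below):

```python
import math

def damping_summary(m, c, k):
    """Classify the damped oscillator m*x'' + c*x' + k*x = 0."""
    omega_n = math.sqrt(k / m)        # natural frequency [rad/s]
    zeta = c / (2.0 * m * omega_n)    # damping ratio, from c = 2*zeta*m*omega_n
    if zeta < 1:
        regime = "underdamped"
    elif zeta > 1:
        regime = "overdamped"
    else:
        regime = "critically damped"
    return omega_n, zeta, regime

# m = 1 kg, k = 4 N/m, c = 4 N*s/m -> omega_n = 2 rad/s, zeta = 1 (critical)
print(damping_summary(1.0, 4.0, 4.0))
```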
In practice the above terms are used to transform the standard equation of motion $$ m \ddot{x} = -k x - c \dot{x} $$ into one with known solutions. By substituting $$\begin{aligned} k & = m \omega_n^2 & c & = 2 \zeta m \omega_n \end{aligned} $$ you get
$$ \ddot{x} + 2 \zeta \omega_n \dot{x} + \omega^2_n x =0 $$
with the well-known solutions
$$ x(t) = \begin{cases}
{\rm e}^{-\zeta \omega_n t} \left( A \cos\left( \omega_n \sqrt{1-\zeta^2}\, t \right) + B \sin\left( \omega_n \sqrt{1-\zeta^2}\, t \right) \right) & \zeta < 1 \\
{\rm e}^{-\zeta \omega_n t} \left( A \cosh\left( \omega_n \sqrt{\zeta^2-1}\, t \right) + B \sinh\left( \omega_n \sqrt{\zeta^2-1}\, t \right) \right) & \zeta > 1 \\
(A+t B) {\rm e}^{-\omega_n t} & \zeta = 1
\end{cases} $$ | {
"domain": "physics.stackexchange",
"id": 58474,
"tags": "classical-mechanics, terminology"
} |
Polishing partitions - Reimplementing a partitionOn | Question: So I've read an F# question on partitioning a list in F# and wondered whether I could cleanly write the concept down in haskell and if that would help me review the code ...
Now I have a haskell implementation of this "not-quite partition, not-quite group" and noticed it seems somewhat clumsy
partitionOn :: ([a] -> Bool) -> [a] -> [[a]]
partitionOn pred list = go pred [] list
where
go :: ([a] -> Bool) -> [a] -> [a] -> [[a]]
go _ working [] = [working]
go pred working l@(x:xs) = if pred working
then (working:(go pred [] l))
else go pred (working++[x]) xs
A few questions:
The compiler gets very annoyed with me when I try to express the else-branch as a concatenation with : instead of ++. Would it be cleaner to use the following go?
go _ working [] = [reverse working]
go pred working l@(x:xs) = if pred working
then ((reverse working):(go pred [] l))
else go pred (x:working) xs
Am I missing a special case of groupBy here? It feels like there should be a simple standard function to do this
Answer: The code can be written cleaner if one removes the pred from go. go has access to all bindings in its context, including the ones in partitionOn pred list.
A first step to declutter partitionOn is therefore to get rid of duplication and get rid of unnecessary parentheses:
partitionOn :: ([a] -> Bool) -> [a] -> [[a]]
partitionOn pred list = go pred [] list
where
go working [] = [working]
go working l@(x:xs) = if pred working
then working : go [] l
else go (working ++ [x]) xs
Note that I've removed go's type signature. This is somewhat controversial, but the general rule-of-thumb is that you want type signatures at the top level and usually leave them out at the non-top level, unless GHC gets confused.
The (small) problem with type signatures in where and let is that the type parameter a in partitionOn is not the same as the one in go. -XScopedTypeVariables is necessary for that. So let's get rid of that.
Would it be cleaner to use the following go?
No. Or rather: it would be slightly cleaner, but pred also needs to work on the reversed list. This is more obvious if we don't use a position-independent predicate:
notInOrder [] = False
notInOrder [_] = False
notInOrder (x:xs@(y:_)) = x > y || notInOrder xs
Your old version returns [1..10] on partitionOn notInOrder [1..10], but your new one returns [[1],[2],[3],…,[10]], unless you also check pred (reverse working).
Therefore, at least in the sense of asymptotic analysis, both variants will have the same performance.
Am I missing a special case of groupBy here? It feels like there should be a simple standard function to do this.
The standard functions for splitting lists at some point are span, break, splitAt, take(While), drop(While(End)), group(By), inits and tails. They all are concerned with a single element in their predicate, so they are not candidates for our problem, unless we already have our splits:
splits :: [a] -> [([a],[a])]
splits xs = zip (inits xs) (tails xs)
That function does not exist, though. A perfect building block for partitionOn would have the type signature ([a] -> Bool) -> [a] -> ([a],[a]), but that's missing from the standard library. Others packages provide functions with that signature, but none does provide our needed functionality.
But we can write that function ourself:
breakAcc :: ([a] -> Bool) -> [a] -> ([a],[a])
breakAcc p = go []
where
go ys [] = if p ys then (ys,[]) else ([],ys)
go ys (x:xs) = if p ys then (ys, x:xs)
else go (ys ++ [x]) xs
-- or, using `splits` above:
breakAcc p xs = case find (p . fst) (splits xs) of
Just ys -> ys
Nothing -> ([], xs)
Note that the first variant still has the somewhat awkward ++ [x] there. A partition that starts from the right wouldn't have that problem, but we're stuck with that.
Now partitionOn is just
-- feel free to use a worker wrapper approach here
partitionOn p xs = case breakAcc p xs of
([],bs) -> [bs]
(as,bs) -> as : partitionOn p bs
Note that this will add the empty list at the end if our partition works for all elements, e.g. partitionOn ((0 <) . sum) [1..4] == [[1],[2],[3],[4],[]]. But that's easy to get rid of.
All at once:
import Data.List (inits, tails, find)
splits :: [a] -> [([a],[a])]
splits xs = zip (inits xs) (tails xs)
breakAcc :: ([a] -> Bool) -> [a] -> ([a],[a])
breakAcc p xs = case find (p . fst) (splits xs) of
Just ps -> ps
Nothing -> ([],xs)
partitionOn :: ([a] -> Bool) -> [a] -> [[a]]
partitionOn p xs = case breakAcc p xs of
([],bs) -> [bs]
(as,bs) -> as : partitionOn p bs
-- for QuickCheck
partitionOnResult_prop :: ([a] -> Bool) -> [[a]] -> Bool
partitionOnResult_prop p xs = all p (init xs)
partitionOn_prop :: ([a] -> Bool) -> [a] -> Bool
partitionOn_prop p xs = partitionOnResult_prop p (partitionOn p xs)
-- quickCheck $ \(Blind p) -> partitionOn_prop p
Now that we have all of this code, let's compare it again with your (almost) original variant:
partitionOn :: ([a] -> Bool) -> [a] -> [[a]]
partitionOn pred list = go pred [] list
where
go working [] = [working]
go working l@(x:xs) = if pred working
then working : go [] l
else go (working ++ [x]) xs
If we squint our eyes, both codes look the same (if we use the first breakAcc variant). However, I wouldn't call a list l, since lists are usually given names with an s suffix, e.g. ls. Also, working is somewhat of a misnomer: are we working with that list? Or does that list fulfill some requirement? As we currently accumulate elements in there, the usual acc or accs might be better, but naming is hard and left as an exercise.
Other than that, unless you want to have little, test-able functions, your original variant was therefore fine to begin with, but contained some functions that can be re-used. | {
"domain": "codereview.stackexchange",
"id": 28186,
"tags": "haskell, rags-to-riches"
} |
Physical interpretation of the Pauli Exclusion Principle | Question: As I understand it, the Pauli Exclusion principle states that two electrons in orbitals of a given atom with the same values for quantum numbers $n$, $l$ and $m_j$ must have different (opposite) values for $m_s$, their spin. My question is: what is the physical interpretation of having "opposite spin", considering that spin can be measured along any axis? Does this means that if one measured the spins of the two electrons in any direction, they would always be antiparallel - or does it mean that there is some inherent axis along which the atom "measures" the spin of the two electrons and only allows them into the same orbital if they have opposite spins?
Answer: It's your first interpretation. The spin-singlet is the unique state of $s=0$, $m_s=0$, which is an eigenstate of $S_z$, $S_x$, and $S_y$ simultaneously, and in fact
$$
\lvert 00\rangle = \frac{\lvert \uparrow \downarrow \rangle -\lvert\downarrow \uparrow \rangle}{\sqrt{2}}
=
\frac{\lvert \leftarrow ,\to \rangle -\lvert\to,\leftarrow \rangle}{\sqrt{2}}
=
\frac{\lvert s_{\hat{n}}=\uparrow,s_{\hat{n}}=\downarrow \rangle -\lvert s_{\hat{n}}=\downarrow, s_{\hat{n}}=\uparrow \rangle}{\sqrt{2}}
\,,
$$
where $\hat{n}$ is any direction; which is to say, it is the antisymmetric combination of up-and-down states along any axis. Thus, yes, if you measure the spins of the two electrons along any axis, they will always end up being antiparallel. | {
"domain": "physics.stackexchange",
"id": 91491,
"tags": "electrons, quantum-spin, fermions, pauli-exclusion-principle, identical-particles"
} |
Why does balancing the reduction of dichromate(VI) to chromium(III) require four water molecules among the products? | Question: Problem
Balance the following chemical equation using oxidation numbers:
$$\ce{Cr2O7^2–(aq) + HNO2(aq) –> Cr3+(aq) + NO3–(aq)}$$
Solution (in Swedish)
$$\ce{\overset{+VI}{Cr}_2O7^2–(aq) + H\overset{+III}{N}O2(aq) –> \overset{+III}{Cr}^3+(aq) + \overset{+V}{N}O3–(aq)}$$
\begin{align}
\text{Electron transfer:}\quad
\ce{\overset{+VI}{Cr}_2 + 6 e- &-> 2 \overset{+III}{Cr}} &\quad &|\times 1 \\
\ce{\overset{+III}{N} &-> \overset{+V}{N} + 2 e-} &\quad &|\times 3
\end{align}
$$\ce{Cr2O7^2–(aq) + 3 HNO2(aq) –> 2 Cr^3+(aq) + 3 NO3–(aq)}$$
\begin{align}
\text{Charges:}\quad
&\text{left side} &\quad &= -2 &\quad &|\text{add}~\ce{5 H+} \\
&\text{right side} &\quad 2\cdot(+3) + 3\cdot(-1) &= +3 &\quad &|\text{add}~\ce{4 H2O} \\
\hline
&\text{difference} &\quad &= +5
\end{align}
$$\ce{Cr2O7^2–(aq) + 3 HNO2(aq) + 5 H+(aq) –> 2 Cr3+(aq) + 3 NO3–(aq) + 4 H2O}$$
Question
I can understand everything about the solution, except for the added $\ce{4 H2O}$ on the right side of the equation. Why is it that $\ce{4 H2O}$ is added to the right side and not $\ce{2.5 H2O}?$ How come the equation is balanced despite the oxygen atoms not being equal on both sides?
Answer: $$\ce{Cr2O7^2–(aq) + 3 HNO2(aq) + 5 H+(aq) -> 2 Cr^3+(aq) + 3 NO3–(aq) + 4 H2O} $$
The left side has $1\cdot7 + 3\cdot2 = 13$ oxygen atoms and $3\cdot1 + 5\cdot1 = 8$ hydrogen atoms. The right side has $3\cdot3 + 4\cdot1 = 13$ oxygen atoms and $4\cdot2 = 8$ hydrogen atoms. It is not clear to me why you would expect 2.5 water molecules instead of 4. You can't just count the $\ce{5 H+}$, you have to consider that $\ce{HNO2}$ contains hydrogen as well. | {
"domain": "chemistry.stackexchange",
"id": 17971,
"tags": "inorganic-chemistry, equilibrium, aqueous-solution, oxidation-state"
} |
Why are planets and stars always round/oval in shape? | Question: What kind of stability does it provide to each and every celestial body that each one of them is round in shape?
Answer: The gravitational attraction is radial only. That means that if you tried a different shape, you would have regions, like hills, that stick out farther from the center and feel a weaker attraction towards it than the surrounding material does. Over time, that material is pulled down towards positions closer to the center, where the attraction is greater. This results in a spherical-like shape. | {
"domain": "astronomy.stackexchange",
"id": 2927,
"tags": "star, planet, celestial-mechanics"
} |
Web Scraper in Python | Question: So, this is my first web scraper (or part of it at least) and I'm looking for things that I may have done wrong, or things that could be improved so I can learn from my mistakes.
I made a few short function that can take a user and search tpb to get the maximum number of pages of content that that user has by requesting the url, parsing the html for the links div, and then crawling through them to see if each page is valid or not (since some linked pages actually have no content).
I know that I haven't covered all cases such as invalid urls, non-existent users etc, just trying to find what I've done wrong so far in actually parsing existent content before I go further.
There's two main things I'm concerned about:
First, I have filter/map/lambda combos all over the place here. These are generally slow from what I remember about their efficiency, and they seemed to be a fairly easy and concise way to get the filtering I needed (although not the prettiest). So, is this acceptable and/or is there a better way with bs4 or another alternative?
Second, since beautifulsoup is recursive in finding nested tags anyways does calling my get_max_pages function recursively really matter here?
import requests, urllib, re
from bs4 import BeautifulSoup
BASE_USER = "https://thepiratebay.se/user/"
def request_html(url):
hdr = {'User-Agent': 'Mozilla/5.0'}
request = urllib.request.Request(url, headers=hdr)
html = urllib.request.urlopen(request)
return html
def get_url(*args):
for arg in args:
url = BASE_USER + "%s/" % arg
return url
def check_page_content(url):
bs = BeautifulSoup(request_html(url), "html.parser")
rows = bs.findAll("tr")
rows = list(filter(None, map(lambda row: row if row.findChildren('div', {'class': 'detName'}) else None, rows)))
return True if rows else False
def get_max_pages(user, url=None, placeholder=0, links = list(), valid=True):
if url is None:
url = get_url(user, str(placeholder), 3)
if valid:
td = BeautifulSoup(request_html(url), "html.parser").find('td', {'colspan': 9, 'style':'text-align:center;'})
pg_nums = td.findAll('a', {'href': re.compile("/user/%s/\d{,3}/\d{,2}" % user)})
pages = list(filter(None, map(lambda a: int(a.text) - 1 if a.text else None, pg_nums)))
if links: #Unnecessary to filter this at start as placeholder = 0
pages = list(filter(None, map(lambda x: x if int(x) > placeholder else None, pages)))
if pages:
i = 0
while valid and i < len(pages):
element = pages[i]
valid_page = check_page_content(get_url(user, element, 3))
if valid_page:
links.append(element)
else:
valid = False
i += 1
return get_max_pages(user, get_url(user, len(links), 3), len(links), links, valid)
else:
return links
else:
return links
Also, if anyone wouldn't mind would it be viable to split this into threads for say the content validation? I don't see how that would really improve anything all that much with python's GIL, but curious nonetheless.
To find ~35 pages of valid content it takes about 1.5 mins.
Answer: Your get_url function is confusing. It looks like you keep assigning new values to url and ignoring all the previous values. This is what happens when I run it:
>>> get_url("Hello", "World")
'https://thepiratebay.se/user/World/'
Surely this is either a bug or redundant behaviour, since the only argument that matters is the last one? It seems like instead you should be using str.join, which will concatenate all the arguments together with a string separator, so for example:
def get_url(*args):
return BASE_USER + "/".join(args)
>>> get_url("Hello", "World")
'https://thepiratebay.se/user/Hello/World'
>>> get_url("ban", "an", "a")
'https://thepiratebay.se/user/ban/an/a'
Though as @Mathias Ettinger noted, you should use map(str, args) to ensure that all the arguments are converted to strings as join will raise errors if any of the arguments aren't strings.
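Putting the two points together (join plus map(str, ...)), a version of get_url that tolerates non-string arguments could look like this:

```python
BASE_USER = "https://thepiratebay.se/user/"

def get_url(*args):
    # str() each argument first, so integer page numbers don't make join raise
    return BASE_USER + "/".join(map(str, args))

print(get_url("someuser", 0, 3))  # https://thepiratebay.se/user/someuser/0/3
```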
check_page_content is hard to understand at first; a lot happens in one line, so you should try to spread it out over multiple lines. It would be easier to build rows with a list comprehension that filters in advance, rather than gathering up a lot of rows just to later remove them:
def check_page_content(url):
bs = BeautifulSoup(request_html(url), "html.parser")
rows = [row for row in bs.findAll("tr")
if row.findChildren('div', {'class': 'detName'})]
But then I see that you're just returning the boolean value of the list! That means you can break as soon as you found that there's any row, no need to store the list at all. Luckily there's the any function that will do this for you. It supports short circuiting, which means that it will return as soon as it finds a condition that evaluates as True:
def check_page_content(url):
bs = BeautifulSoup(request_html(url), "html.parser")
return any(row.findChildren('div', {'class': 'detName'})
for row in bs.findAll("tr"))
any will apply truthiness to all the values in bs.findAll so if any of them have truthy results then your function will immediately return. Even if you have to check every single row, this is faster than building a full list, mapping and filtering it.
In get_max_pages you have the default value links=list(). I'm not sure if you know about the mutable default problem and thought this would avoid it, but it won't. links will be created once as an empty list. Every time you call the function the same list exists, so the same list will be appended to, which is not what you need. Here's a simple example:
>>> def a(b=list()):
b.append("another")
return b
>>> a()
['another']
>>> a()
['another', 'another']
>>> a()
['another', 'another', 'another']
Instead you need to use links=None and then use if links is None: links = [] so that a new list is created within each function call.
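For completeness, here is the toy example from above rewritten with the None-default pattern just described:

```python
def a(b=None):
    if b is None:
        b = []        # a fresh list is created on every call
    b.append("another")
    return b

print(a())  # ['another']
print(a())  # ['another'] -- no state shared between calls
```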
You have most of your code nested inside if valid, but if you flipped it around you could save nesting by doing this:
def get_max_pages(user, url=None, placeholder=0, links = list(), valid=True):
if url is None:
url = get_url(user, str(placeholder), 3)
if not valid:
return links
td = BeautifulSoup(request_html(url), "html.parser").find('td', {'colspan': 9, 'style':'text-align:center;'})
Less nesting generally makes it easier to read code like this.
You have another map filter, I'd recommend doing a similar list comprehension, though it's a little harder to follow:
regex = re.compile("/user/%s/\d{,3}/\d{,2}" % user)
pages = [int(a.text) - 1
for a in td.findAll('a', {'href': regex}) if a.text]
What's happening here is that I'm iterating through td.findAll, and if a.text is True then I'm storing int(a.text) - 1 in pages. This does what your next line does without needing multiple calls.
However, you then may need to further filter pages based on a condition. You could still do this in a comprehension though, just run a list comprehension over pages itself:
if links:
pages = [x for x in pages if x > placeholder]
Note I removed the int call because you already store them as integers, so it's not necessary.
Lastly, your while loop here is strange. It would be easier to just do for element in pages and then break from the loop instead of setting valid = False. This gives you a much simpler loop with less lines:
for element in pages:
valid_page = check_page_content(get_url(user, element, 3))
if valid_page:
links.append(element)
else:
break | {
"domain": "codereview.stackexchange",
"id": 17014,
"tags": "python, python-3.x, recursion, web-scraping, beautifulsoup"
} |
Induction in a Coil with no direct field | Question: A circular coil of radius A surrounds a cylindrical, infinitely tall, uniform magnetic field pointed upwards, of radius B. A>B, and both are centered on the same point. The field is steadily increase in strength, but the constrained area stays the same. Will a current be induced in the wire?
The flux is increasing, yes, but at the wire itself there is no changing magnetic field, because the magnetic field is constrained to a smaller area. Does using magnetic flux not work to explain this problem? How could the loop be affected? My teacher says it is, because of the flux. My theory is it isn't, because it wouldn't be considered an enclosed circle, because of the gap between the flux and the wire. Can anyone explain?
Answer: A changing magnetic flux induces an emf, which in turn gives rise to a current in a wire. Note this statement, for I shall use it later to explain my answer. Here the field that you have mentioned changes with time, not space. Let the field at any instant t be Ct and the area to which it is constrained be $πx^2$. Now, according to your problem the circular coil has a radius R>x, so there is a gap between the coil and the field, which is causing your intuition trouble.
But you see, it does not matter whether there is a gap or not. An emf is always induced along a surface if the flux linked with it is changing. Although it may seem weird, the emf is induced irrespective of whether there is a wire or not, as the field is changing with time. In the case of a field changing with time, if you consider any surface which contains even a bit of this changing field, then the flux linked with it changes and an emf is induced. Your circular coil contains this entire changing field, and the flux linked with its area is changing with time. That is all an emf needs to be induced. And it is not the magnetic field but the changing magnetic flux which gives rise to an electric field which makes the electrons move.
It would help allay your confusion once you realize that emf and current are two completely different things. Current needs an emf as well as a material with free electrons (a conductor). Your traditional view of electromagnetic induction is a magnetic field encompassing a closed circuit, whose value changes with space and hence flux changes with time, and an induced emf gives rise to a current in the wire. But even if you had an insulator, Faraday's law would hold. No current would flow, because no free electrons are present (just like connecting a battery to an insulator; nothing happens), but the emf would still be there.
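As a quick numerical check of Faraday's law for this setup, with a field $B(t) = Ct$ confined to a disc of radius $x$ inside the coil (the values of C and x below are illustrative, not from the question):

```python
import math

C = 0.5   # rate of field increase, tesla per second (illustrative)
x = 0.1   # radius of the field region, metres (illustrative)

def flux(t):
    # only the disc of radius x contributes: Phi(t) = B(t) * pi * x^2
    return C * t * math.pi * x**2

# numerical dPhi/dt approximates the induced emf, C*pi*x^2
dt = 1e-6
emf = (flux(1.0 + dt) - flux(1.0)) / dt
print(emf, C * math.pi * x**2)
```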
So, flux linked with your coil is $Cπx^2t$ which changes with time and gives you the induced emf as $Cπx^2$. Thus, it all comes down to whether the flux linked is changing with time or not. And this happens with your coil, as the field is within its area. | {
"domain": "physics.stackexchange",
"id": 37692,
"tags": "electromagnetism, electromagnetic-induction"
} |
Lower bounds on the number of measurements outcomes required for quantum state tomography | Question: It seems that in order to reconstruct a quantum state, a large number of measurements is typically used.
Are there any known theoretical lower bounds on the number of measurements required to reconstruct a state?
Do we get different lower bounds if we consider pure states instead of mixed states?
Answer: I apologise in advance. This is a rough and hand-wavy answer.
You can give "information-theoretic" lower bounds by noting that the measurements can be seen as a linear map $M$ from quantum states to outcome probabilities $y$. For instance, if you have a POVM $(E_i)_{i=1,\dots,N}$, then the probability vector is $y = \sum_i \mathrm{tr}(E_i \rho) e_i$, where $e_i$ is the standard basis of $\mathbb{R}^N$.
Tomography is about finding an approximate solution to the equation
$$y = Mx$$
where $x$ is a description of your state. Without further assumptions, this only works if $M$ has full rank, in particular your POVM has to have size $N=\Omega(d^2)$ when $d$ is your Hilbert space dimension.
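For intuition, a qubit ($d = 2$) makes the counting concrete: together with the trace, the three Pauli expectation values give $d^2 = 4$ real parameters, and inverting the (here noise-free) measurement map recovers the state exactly. A small numpy sketch of this linear inversion (my own example values):

```python
import numpy as np

# Pauli matrices; any qubit state is rho = (I + r . sigma) / 2
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def measure(rho):
    """Ideal, noise-free outcome statistics: the Pauli expectation values."""
    return [np.trace(rho @ P).real for P in (X, Y, Z)]

def reconstruct(r):
    """Linear inversion: map the outcome vector back to a density matrix."""
    rx, ry, rz = r
    return (I + rx * X + ry * Y + rz * Z) / 2

rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)  # a valid mixed state
rho_hat = reconstruct(measure(rho))
print(np.allclose(rho, rho_hat))  # True
```

With noisy or finite-sample data the inversion is instead done in a least-squares sense, and it is the low-rank structure of (nearly) pure states that compressed-sensing approaches exploit to get by with fewer measurement settings.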
However, if your state has low rank $r$, the problem can be solved more efficiently.
For instance, this occurs when $\rho$ is a pure state (rank 1).
In this case, a lower bound is $N=\Omega(rd)$ which can be achieved e.g. with ideas from compressed sensing and low-rank matrix recovery.
However, the number of measurement settings is not the only important quantity in tomography. Sample complexity (i.e. the number of copies of your state), robustness to measurement errors, post-processing time etc. are important aspects as well.
Low rank assumptions can also be used to reduce the sample complexity.
There's a lot of literature on this ... with different flavors. You can have a look at the introduction and the references in
Stilck França, Brandão, and Kueng: "Fast and robust quantum state tomography from few basis measurements" https://arxiv.org/abs/2009.08216 | {
"domain": "quantumcomputing.stackexchange",
"id": 2874,
"tags": "quantum-state, state-tomography"
} |
Would pyramid shape buildings prevent people from falling to their deaths? | Question: If a building were shaped like a pyramid, with sides sloping downwards, wouldn't that minimize the risk of falls from height, whether intentional or accidental (as in a fire or bomb scare), from a balcony or window opening? That is, discounting falls in a lift shaft or stairwell. Perhaps the Egyptians were smart in that sense?
Answer: If you take the ancient Egyptian pyramids as an example, the slope angle of the sides is greater than 50 degrees. You can also experiment with pyramid dimensions and walls angles using this website.
People may not "fall" to their deaths from such pyramids, but they will slide to their deaths.
To prevent anyone from sliding to their death from a pyramid the wall angles would need to be less than the angle of sliding, which for most situations would be less than 30 degrees. | {
"domain": "engineering.stackexchange",
"id": 4786,
"tags": "building-design, building-physics"
} |
Is there a JSON-based genomic feature format? | Question: There's a variety of formats for storing genomic features (e.g. Genbank, GFF, GTF, Bedfile) but all of the ones that I'm familiar with use either a custom format (Genbank) or a CSV with defined column conventions and space for additional data (GFF, GTF, Bedfile).
Is there a feature format that is JSON based and adheres to a defined schema?
Answer: JVCF is a JSON schema for describing genetic variants. BioJSON is a JSON schema for multiple sequence alignments. The output of mygene.info's gene annotation web service is JSON-formatted.
You might build one of your own schemas from any or all of these, as inspiration for structure and content.
Consider that most genomics formats are historically tabular, because they are not hierarchical, because they originated prior to the advent of JavaScript as a common programming language (and subsequent use of JSON to serialize data), and because a lot of stream-oriented UNIX tools read in data and process it in ways that are natural to tabular structure.
Some tabular results can be hierarchical, but use inefficiencies in field values to apply hierarchy. JSON is inefficient for storing tabular data, but it is absolutely wonderful for hierarchical data. This difference is perhaps worth considering.
One could think of storing a gene as a master object containing various haplotypes, exons, introns, features, associated variants, associated TF binding sites, gene ontologies, etc. Ordering or prioritization of such items would be possible with the use of sorted lists within a gene object. The format is as extensible as adding a property, list, or object to the schema.
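A minimal sketch of such a master gene object follows; every field name and value below is an invented illustration, not an existing schema:

```python
import json

# Hypothetical hierarchical gene record. All names here (gene "geneX",
# transcript ids, coordinates) are made up for illustration only.
gene = {
    "id": "geneX",
    "symbol": "GENEX",
    "location": {"chrom": "13", "start": 1000, "end": 9000, "strand": "+"},
    "haplotypes": ["hap1", "hap2"],
    "transcripts": [
        {
            "id": "geneX-t1",
            "exons": [{"start": 1000, "end": 1200}, {"start": 2000, "end": 2300}],
            "introns": [{"start": 1201, "end": 1999}],
        }
    ],
    "ontology_terms": ["GO:0006281"],  # e.g. a GO term such as DNA repair
    "tf_binding_sites": [{"start": 900, "end": 950, "factor": "TFX"}],
}

serialized = json.dumps(gene, indent=2)
assert json.loads(serialized)["transcripts"][0]["exons"][0]["end"] == 1200
```

Note how the hierarchy (gene → transcripts → exons) is expressed directly, where a GFF-style table would need ID/Parent attributes to encode the same structure.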
To gain adoption or "buy-in", build tools which publish it to users, and tools that consume what users get in order to do things, along with conversion and validation utilities. | {
"domain": "bioinformatics.stackexchange",
"id": 2032,
"tags": "features, genomefeatures"
} |
Electroporation of one-cell embryo? | Question: Would electroporation be successful on a one/two-celled mouse embryo? If it would, what buffer could be used and what percentage of cells would be viable?
Thank you.
Answer: The buffers are usually supplied with the electroporation instrument; at least Lonza provides buffers for different types of cells. Single cell electroporation techniques exist, but they have been mostly done on neurons.[1,2]
You can also try microinjection but it requires some practice. This study reports a transfection agent called VisuFect which can be used for zygotes as well.
I am not very sure about the success rate (AFAIK it is not that good), but I think it is risky anyway with organisms which do not produce a lot of embryos.
"domain": "biology.stackexchange",
"id": 3548,
"tags": "molecular-biology"
} |
What are the prerequisites for considering any other planet to be habitable? | Question: Well, there is a measure of how a planet could be considered like Earth, called Planetary habitability. Based on this measure, what are the prerequisites needed to consider a planet to be a habitable one?
Answer: Habitable by whom? There are conditions that are uninhabitable for humans in which many "extremophiles" survive perfectly happily.
Although, if you are talking about humans, here is a small list (all the rest are probably more "nice to have" requisites):
Approximately 20% oxygen (more or less depending on the pressure)
Temperatures that allow for liquid water
Adequate access to water (and food)
Adequate protection from radiation.
Then the absence of a whole host of things that would end human life, such as deadly pathogens or toxic chemicals in the atmosphere.
All this said, since we only have a sample size of one currently for planetary life, we really don't know what is possible, or how to bound the problem. ANYTHING is just speculation drawn from this one sample. That said, we have found life on our own planet where we never suspected it to be. Life has proven to be nearly unstoppable in propagating throughout every niche on this planet. So, the better question I think would be what are the requisites for abiogenesis? I think once life manages to start on any planet, it will adapt to whatever conditions the planet presents (to within a reasonable degree). | {
"domain": "physics.stackexchange",
"id": 2982,
"tags": "astronomy, planets"
} |
Why in some cases normal force does not equal apply force that is in line with the normal force? | Question: My understanding is that normal force is always equal to the applied force vector that is in line with it. Like weight going down, normal force going up along the same line, equal in magnitude.
Here is a case where I guess Newton's third law somehow doesn't apply:
I push an object with one of my hands. It starts to move and then accelerates. Therefore, I know the net force is bigger than zero.
According to user Wolphram Jonny in his answer to a similar question, the force received by the object as I push it is defined by the interaction.
Can someone perhaps answer this question by explaining how the "interaction" is involved here?
the question I am talking about
Answer: I would rather answer the question in a different way than the previous answer you have pointed at.
So, you are analysing the motion of the object and hence you need to find out all the forces that are applied "on" the object. Hence, you take into account the force you apply. Because of your applied force, the body starts to move and accelerates.
Now, yes, the said object is also "pushing" you. But this (normal) force applied by the object does not affect its own motion; it affects your motion (simply because this normal force is applied by the box on you). Because of your larger mass, and hence greater friction with the ground, this normal force cannot move you.
So the bottom line is: Newton's Third Law is perfectly valid here! The normal force exerted by the object on you does not affect the object's motion in any manner.
"domain": "physics.stackexchange",
"id": 23592,
"tags": "forces"
} |
ROS Android Apps: can't add a robot on Kindle Fire | Question:
Edit: Added better screenshot
Hello all,
Please forgive what I'm sure is a silly question, but it has me stumped.
I have a rooted Kindle Fire with the ROS apps from the Google Android Market installed. I started the app and it prompts me to add a robot. I typed in the IP address of the robot, but there's no "ok" button to click and no "enter" button showing on the keyboard (the "enter" key is context sensitive on the Fire and doesn't show up if it's not mapped to any action). The Fire doesn't have a camera, so I can't use the QR code option either. I don't have any other android devices available to figure out how the dialog is supposed to work.
Here's a screenshot
So, I can't figure out how to say "ok" and get the ROS app to add my robot. Any suggestions? I'm comfortable editing a file or something if that's what it takes.
Sorry for what seems like a silly question, but I'm stumped.
Thanks in advance,
-Brian
Originally posted by brianpen on ROS Answers with karma: 183 on 2012-01-12
Post score: 0
Original comments
Comment by brianpen on 2012-01-13:
Letting me know how the app is supposed to work (are you supposed to press "enter"?) would also be helpful.
Answer:
I figured out a workaround.
So... the ROS android app is expecting the "enter" key to be pressed, but for some reason the Fire doesn't realize that and isn't showing the enter key.
However, you can use the android debugger (or a terminal application) to simulate keypresses on android. See here: Simulating Keypresses on Android
You need to use the input command to send keyevent 66 (enter).
On your laptop type this after you have entered the ip address of the robot.
$ adb shell input keyevent 66
A more permanent solution would be adding an "ok" button to the dialog
Originally posted by brianpen with karma: 183 on 2012-01-13
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by brianpen on 2012-01-14:
Great. Thanks! I'm happy to do further testing on the Fire if the devs need it.
Comment by ahendrix on 2012-01-13:
I've filed a bug on the android app to get this fixed: https://kforge.ros.org/appmanandroid/trac/ticket/27 | {
"domain": "robotics.stackexchange",
"id": 7878,
"tags": "android"
} |
What physical or mental actions can be picked up by EEGs? | Question: There certainly seem to be a lot of gadgets and gizmos leveraging EEG technologies to the control of devices.
This makes me wonder: what intentions/thoughts can be captured by EEG technology, and which ones can't?
For example: how about motor movements? If I am wearing one of these EEG devices, and I shake one of my fingers back and forth, is my brain generating waves that the EEG device will detect? Why/why not?
Answer: Many EEG responses are swamped in random brain activity, artifacts and background noise. A single movement typically doesn't evoke measurable activity, because its amplitude is so small with respect to background noise.
I think the potentials you are looking for are event-related potentials or ERPs (Fig. 1). During ERP recording, basically a regular EEG is registered, but certain events (presentation of stimuli or actions by the subject) are time locked with the EEG recordings. By repeatedly measuring (e.g., 25 times) the time-locked response to the event, an average EEG can be obtained. Assuming remaining EEG activity, noise and artifacts are random, the averaging process will decrease noise, but will not affect the ERP. The averaging hence leaves you with a clean ERP response (Yordanova et al., 2003).
Fig. 1. ERPs. Left are stimulus-evoked ERPs (auditory, visual) and right are ERPs associated with the motor response by the subject (button-press upon stimulus presentation). Source: Yordanova et al., 2003.
In Fig. 1, the reaction times indicated correspond to the subject's button press, so the intention to press the button in fact precedes the actual motor response. In fact, the covert (imagined) ERP response can be nearly identical to the overt ERP (the motor action), see Fig. 2 (Kranczioch et al., 2009).
Fig. 2. Overt motor-evoked ERPs (top three panels) and equivalent covert (imagined) ERPs (lower three panels). The three panels cover different electrodes. Source: Kranczioch et al. (2009).
By characterizing the exact shape and other characteristics of the ERP, computer software can be deployed to recognize this ERP wave within a normal EEG. This is done in brain-computer interfaces. By filtering the EEG outside the frequency range associated with the ERP, a lot of the cleaning can be done on the fly in near-real time.
A list of various stimuli in ERPs can be found in Table 1 in (Goodman, 2010).
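The time-locked averaging described above can be sketched numerically; the waveform shape, trial count, and noise level below are invented purely for illustration:

```python
import numpy as np

# Toy simulation of ERP extraction by time-locked averaging.
rng = np.random.default_rng(0)

n_trials, n_samples = 25, 200
t = np.linspace(0.0, 0.8, n_samples)                   # an 800 ms epoch
erp = 5e-6 * np.exp(-((t - 0.3) / 0.05) ** 2)          # ~5 uV bump near 300 ms
noise = 20e-6 * rng.standard_normal((n_trials, n_samples))  # ~20 uV background

epochs = erp + noise              # each row: one event-locked recording
average = epochs.mean(axis=0)     # averaging cancels the random part

# Random noise shrinks roughly by sqrt(n_trials); the locked ERP is unchanged.
assert (average - erp).std() < noise.std() / 2
```

With 25 trials the residual noise drops by about a factor of five, which is why a response invisible in a single epoch becomes clear in the average.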
References
- Goodman, Atten Percept Psychophys (2010); 72(8): 10.3758
- Kranczioch et al., Human Brain Mapping (2009); 30(10): 3275–86
- Yordanova et al., Brain (2003); 127(2): 351 - 62 | {
"domain": "biology.stackexchange",
"id": 5123,
"tags": "brain, electrophysiology, electroencephalography, signal-processing"
} |
Adjoint momentum Dirac equation | Question: So we have the commonly quoted momentum space version of the Dirac equation and the adjoint Dirac equation:
$$
(\gamma^{\mu}p_{\mu}-m)u=0
$$
Often, we are asked to show that the adjoint momentum Dirac equation can be written as:
$$
\bar{u}(\gamma^{\mu}p_{\mu}-m)=0
$$
I'm not too sure on the method. However, I have attempted something.
I multiply the top equation by $\bar{u}\gamma^{\nu}\times$ and the bottom equation by $\times\gamma^{\nu}u$ giving:
$$
\bar{u}\gamma^{\nu}(\gamma^{\mu}p_{\mu}-m)u=0
$$
$$
\bar{u}(\gamma^{\mu}p_{\mu}-m)\gamma^{\nu}u=0
$$
Taking the sum of these two gave me:
$$
\bar{u}\gamma^{\nu}\gamma^{\mu}p_{\mu}u-\bar{u}\gamma^{\nu}mu+\bar{u}\gamma^{\mu}p_{\mu}\gamma^{\nu}u-\bar{u}m\gamma^{\nu}u=0
$$
I then use $\gamma^{\mu}\gamma^{\nu}=-\gamma^{\nu}\gamma^{\mu}$, giving me:
$$
\bar{u}\gamma^{\nu}\gamma^{\mu}p_{\mu}u-\bar{u}\gamma^{\nu}mu-\bar{u}\gamma^{\nu}\gamma^{\mu}p_{\mu}u+\bar{u}\gamma^{\nu}mu=0
$$
At which point I can say that it is true. I'm just wondering if this is sufficient or if there is another, more correct way of getting the adjoint Dirac equation...
Answer: Let
$$
(\gamma^\mu p_\mu-m)u=0
$$
Using the property $\overline{AB}=\bar{B}\bar {A}$, we have the following:
$$
0=\overline{(\gamma^\mu p_\mu-m)u}=\bar u \overline{(\gamma^\mu p_\mu-m)}
$$
Now, use $\overline{A+B}=\bar A+\bar B$:
$$
0=\bar u (\overline{\gamma^\mu p_\mu}-\bar m)
$$
Next, as both $m$ and $p_\mu$ are real numbers, we have $\bar m=m$ and $\bar p_\mu=p_\mu$:
$$
0=\bar u (\overline{\gamma^\mu} p_\mu- m)
$$
Finally, use the fact that the gamma matrices are self-adjoint, that is, $\bar \gamma^\mu=\gamma^\mu$:
$$
0=\bar u (\gamma^\mu p_\mu- m)
$$
and we are done.
Your attempt cannot work because you are trying to prove the adjoint equation by using the adjoint equation, which makes your argument circular.
"domain": "physics.stackexchange",
"id": 29987,
"tags": "quantum-mechanics, homework-and-exercises, particle-physics, dirac-equation"
} |
Cross-thread label countdown | Question: Short and sweet...
I wrote a cross-thread method that displays the countdown [in seconds] of a delay on a label.
I'm fairly confident it's far from optimal, so I'm in need of that glorious optimization advice.
private async Task SnoozeAsync(int seconds)
{
    for (var i = 0; i < seconds; i++)
    {
        Invoke((MethodInvoker)(() => statusLabel.Text = $"Waiting {seconds - i} seconds..."));
        await Task.Delay(1000);
    }
}
await SnoozeAsync(60);
Answer: Invoke is a blocking call that returns only after that call has completed.
That means your loop is also including the time it takes to marshal over to the GUI thread and complete. You probably don't want that.
I would use BeginInvoke instead, which does not wait for the method to complete on the GUI thread.
This is also the difference between SynchronizationContext methods Post and Send.
I would also prevent await from potentially capturing the current SynchronizationContext using ConfigureAwait(false).
To protect against exceptions if the control is disposed of (happens on form close and for other reasons) I'd add an IsDisposed check.
Finally I would allow this Task to be cancelled as a matter of best practices using a CancellationToken.
private async Task SnoozeAsync(int seconds, CancellationToken token)
{
    for (var i = 0; i < seconds; i++)
    {
        if (token.IsCancellationRequested)
            break;

        BeginInvoke((MethodInvoker)(() =>
        {
            if (!statusLabel.IsDisposed)
                statusLabel.Text = $"Waiting {seconds - i} seconds...";
        }));

        await Task.Delay(1000, token).ConfigureAwait(false);
    }
}
"domain": "codereview.stackexchange",
"id": 37714,
"tags": "c#, performance, multithreading, async-await"
} |
Why does stimulated emission happen? | Question: In stimulated emission, why don't electrons just jump to a higher energy level instead of a lower one when they absorb a photon for the second time? Isn't that counterintuitive?
Answer:
why don't electrons just jump to a higher energy level
They can't, because the energy needed to jump to the next higher level differs from that of the incoming photon. There may be some exceptions, such as multi-photon absorption involving an intermediate virtual energy level, but the probability of such processes is low compared to single-photon absorption. Simply put, the energy gap to the higher level does not match the energy the incoming photon carries: $\Delta E_{2\to3} \neq h\nu$.
when they absorb a photon for the second time
This is actually wrong. Atoms DO NOT absorb incoming photons a second time, as I've explained above; they can't, because the system is already in an inverted state, where many of the atoms are excited rather than in the ground state. Such a material in an inverted state is thus essentially transparent to incoming photons.
Why does stimulated emission happen?
This simple question has no easy answer. We need deep knowledge of quantum mechanics and quantum electrodynamics to explain this properly. However, if this question were attacked on an intuitive level, I would explain it like this:
Already excited atoms become "unstable", in the sense that spontaneous emission is underway, so almost any event can force an atom back to the ground level. It's like when there is a lot of snow at the top of a mountain, holding a lot of potential energy waiting to be released: a small perturbation, like a loud sound or a skier, is enough to start an avalanche. The falling snow then touches other volumes of snow and the process repeats, until huge amounts of snow reach the ground.
Same for stimulated emission. An incoming photon "shakes" an excited atom, which then emits a duplicated photon, which in turn shakes another atom, and the process repeats, inducing an avalanche of stimulated emission until all excited atoms return to the ground state. They are then excited again by an external light source or by other means, and the overall process repeats, producing a stable laser beam.
"domain": "physics.stackexchange",
"id": 65605,
"tags": "laser"
} |
Inheritance and forcing methods to be called inside another method | Question: I've written a text parser that extracts useful information from a given text. Texts are formatted different, so there are different specialized parsers. I've designed them so that they all must define the methods from the interface Parser.
class Parser:
    def parse(self, text):
        raise NotImplementedError

    def pre_validate(self, text):
        raise NotImplementedError

    def post_validate(self, text):
        raise NotImplementedError
How can I force all subclasses of Parser to call pre_validate before running parse and post_validate after calling parse?
I could solve it like this, but I don't think it's very elegant to wrap methods like this:
class Parser:
    def parse(self, text):
        self.pre_validate(text)
        self._parse(text)
        self.post_validate(text)

    def _parse(self, text):
        raise NotImplementedError

    def pre_validate(self, text):
        raise NotImplementedError

    def post_validate(self, text):
        raise NotImplementedError
Ideally I'd like to allow the subclasses to implement the method parse instead of _parse. What's the recommended way of solving this? Would it make sense to use a python decorator here?
Answer: You could fix the issue by making parser a collection of callables, rather than a class with fixed methods. This would be handy for other aspects of this kind of parsing, so you could easily reuse pre-parsing and post-parsing functions that might be more similar than the parsing:
class ParseOperation(object):
    def __init__(self, pre_validator, parser, post_validator):
        self.pre_validator = pre_validator
        self.parser = parser
        self.post_validator = post_validator

    def parse(self, text):
        self.pre_validator(self, text)
        self.parser(self, text)
        self.post_validator(self, text)

class Validator(object):
    def validate(self, owner, text):
        print('validating %s' % text)

    def __call__(self, owner, text):
        self.validate(owner, text)

class Parser(object):
    def parse(self, owner, text):
        print('parsing %s' % text)

    def __call__(self, owner, text):
        self.parse(owner, text)
Instead of overriding methods, you subclass Parser and Validator to compose your parse operation. | {
"domain": "codereview.stackexchange",
"id": 4374,
"tags": "python, inheritance"
} |
When do special relativity causally linked reference frames split into general relativity un-linked? | Question: I'm looking for the "a-ha" moment in trying to understand how the super-luminal apparent speeds of the universe inflation theory are allowed.
Specifically, the short explanation is that "the two objects are not in a causal reference frame to each other, and thus they can move faster than the speed of light/causality relative to each other." So far, so good!
At the beginning of the big bang, wasn't all matter causally linked, and thus all in the same reference frame? So there would have been some point at which two collections of matter, previously causally linked, became unlinked?
Also, the inflation theory says that far-away galaxies may expand into the event horizon faster than the event horizon expands, such that they will stop becoming observable from us -- meaning, they go from "causally linked" to "not causally linked."
What is the mechanism that allows causally linked reference frames (like two close galaxies) to suddenly un-link (and thus pass beyond the visible universe edge of each other?)
Or am I thinking about this all wrong, and if so, how should I be thinking about this?
Answer: You seem to be conflating "causally linked" with "in the same reference frame". This is not correct. Two points in the spacetime are "causally linked" if there is a causal (timelike or null) curve that connects them (meaning you can reach one point from the other without moving faster than $c$), so two observers can be considered "causally linked" if there is such a curve connecting two points on their worldlines. If we consider flat spacetime in a particular reference frame, then an observer moving at a relative velocity will not be in the same frame but will still be causally linked to the original observer.
In general relativity the situation is completely different, since the global notion of a reference frame doesn't exist. Only at a single point can you compare the velocities of observers passing through that point and have a notion of frames of reference. So talking of separated observers as being in the same reference frame in GR doesn't make sense. The notion of "causally linked" points remains more or less the same however, except that now the connecting causal path is on curved spacetime. The "unlinking", as you call it, is just two observers originally having a causal curve between them and then not, which can occur for accelerated expansion. You can see this in the figure below, which shows the FLRW spacetime sliced into spacelike hypersurfaces on which the so-called co-moving coordinates are defined by the intersections of timelike geodesics. If the expansion accelerates at a large enough rate, then two observers at fixed comoving coordinates will eventually no longer have a causal curve connecting them (you can imagine the causal curves as curves whose tangents make an angle of $\leq 45$ degrees to the geodesics).
arXiv:1803.05148
If you want a very crude intuitive picture, think of somebody accelerating in a car away from you and you (representing a light signal) are trying to run after them. As soon as they pick up enough speed you will never be able to catch them and so there is no "causal" link between your starting point and the car. But it's not like there's any explicit "mechanism" that causes this to happen.
As for this question:
At the beginning of the big bang, weren't all matter causally linked ...
This may be the intuitive expectation, however for an FLRW model, if the spacetime expands sufficiently quickly it need not be the case and particle horizons will be present from the moment of the Big Bang. Wald (1984) Sec 5.3b goes through this and I've included a passage from it here:
This demonstration is most easily made in the case of flat spatial
geometry, $$ds^2=-d\tau^2+a^2(\tau)(dx^2+dy^2+dz^2)\tag*{(5.3.10)}$$
and we will focus our attention on that case. By making the coordinate
transformation $\tau\rightarrow t$ defined by
$$t=\int\frac{d\tau}{a(\tau)}\tag*{(5.3.11)}$$ we can reexpress the
metric, equation (5.3.10), as
$$ds^2=a^2(t)(-dt^2+dx^2+dy^2+dz^2)\tag*{(5.3.12)}$$ Written in
this form, it becomes manifest that this metric is merely a multiple
of the metric of the flat Minkowski spacetime metric. Such a metric is
called conformally flat. The relevance of this remark arises from
the fact that a vector will be timelike, null, or spacelike in the
metric of equation (5.3.12) if and only if it has the same property
with respect to the flat metric
$$ds^2=-dt^2+dx^2+dy^2+dz^2\tag*{(5.3.13)}$$ Thus, it is possible to
send a signal between two events (i.e., join the two events by a
timelike or null curve) in the metric of equation (5.3.12) if and only
if this can be done in the flat metric, equation (5.3.13). With this
in mind, it is not difficult to see that an observer at an event $P$
will be able to receive a signal from all other isotropic observers if
and only if the integral, equation (5.3.11), which defines $t$,
diverges as one approaches the big bang singularity, $\tau\rightarrow$
$0$. Namely, if this integral diverges --- which will be the case if
$a(\tau)\leq\alpha\tau$ for some constant $\alpha$ as $\tau\rightarrow$
$0$ --- then the Robertson-Walker model will be conformally related to all
of Minkowski spacetime (i.e., $t$ will range down to $-\infty$) and
thus there will be no particle horizon. On the other hand, if the
integral converges, the Robertson-Walker model will be conformally
related only to the portion of Minkowski spacetime above a
$t=\text{constant}$ surface, and particle horizons will exist, as
illustrated in Figure 5.6.
Even in the case of pure dust, we have $a(\tau)\propto\tau^{2/3}$ so the integral converges and particle horizons are present. | {
"domain": "physics.stackexchange",
"id": 81118,
"tags": "cosmology, space-expansion, faster-than-light, causality, cosmological-inflation"
} |
Is there any way to improve (shorten) this F# code? | Question: I have a very good grasp of the syntax and features of F# as well as some of the concepts that mesh well with the language. However, I do not have enough experience writing it to feel comfortable that the way I am handling the language is appropriate.
Ignoring the fact that I do not provide proper escaping functionality for the file format, is there any way I could write this code better? I know that this is a trivial example, but I fear that I might be making things too hard on myself with this code.
Also, is there a way that I could write this more robustly so that adding in an escape sequence for the ";" character would be easier?
module Comments
open System.IO
type Comment = { Author: string; Body: string }
let parseComment (line:string) =
    match line.Split(';') with
    | [|author; body|] -> Some({Author=author; Body=body})
    | _ -> None

let filterOutNone maybe = match maybe with | Some(_) -> true | _ -> false

let makeSome some = match some with | Some(v) -> v | _ -> failwith "error"

let readAllComments () =
    File.ReadAllLines("comments.txt")
    |> Array.map parseComment
    |> Array.filter filterOutNone
    |> Array.map makeSome
Answer: Look into Array.choose. Using that function cuts the length of your code in half, because you have basically reinvented it. (I did the same at some point ;-) | {
"domain": "codereview.stackexchange",
"id": 1452,
"tags": "parsing, f#"
} |
ROS communication works only in one direction | Question:
Hi,
I'm using ROS Kinetic and I have a strange issue.
I want to run one node in a Docker container on my pc and one node on a RaspberryPi.
The problem is that if I publish messages from the pc, no message is received on the Raspberry.
Interestingly if I publish from the Raspberry I receive messages on the pc.
I tried to run the ROS master both on the pc as well as on the Raspberry.
In the .bashrc file of both devices I have the following lines
export ROS_MASTER_URI=http://masterhost:11311
export ROS_HOSTNAME=localhost
Where localhost is the ip address of the specific device and masterhost is equal to the localhost of the device running the master.
Do you know why this could happen?
Originally posted by alsora on ROS Answers with karma: 1322 on 2018-10-12
Post score: 0
Answer:
The pc and the raspberry shouldn't have the same configuration. They have to have the same ROS_MASTER_URI but their ROS_HOSTNAME (or ROS_IP) should be their own.
You should have in the .bashrc:
On the pc :
export ROS_MASTER_URI=http://PC_IP_ADDRESS:11311
export ROS_HOSTNAME=PC_IP_ADDRESS
On the raspberry :
export ROS_MASTER_URI=http://PC_IP_ADDRESS:11311
export ROS_HOSTNAME=RASPBERRY_IP_ADDRESS
See #q272065 for a much more detailled answer.
Originally posted by Delb with karma: 3907 on 2018-10-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2018-10-12:
Note that a complicating factor here could be the Docker setup. Docker by default uses NAT (ie: masquerading), so it's not always as straightforward as setting ROS_IP and ROS_MASTER_URI to get things to work.
Comment by Delb on 2018-10-12:
Does the masquerading change the ROS_MASTER_URI? If not, is it possible to replace PC_IP_ADDRESS in the ROS_MASTER_URI with RASPBERRY_IP_ADDRESS to prevent the issue?
Comment by gvdhoorn on 2018-10-12:
Docker creates a "virtual network" by default, so depending on where you run your master you'll have to update the ROS_MASTER_URI yes (and most likely the routing configuration of the involved hosts). But the more difficult issue is that NAT will "hide" nodes running inside your container.
Comment by alsora on 2018-10-12:
@Delb the ROS_HOSTNAME is different across devices, It was not clear from my question.
Comment by alsora on 2018-10-12:
@gvdhoorn You are right, the error is due to the Docker setup.
I'm running the container with the --privileged and --net=host flags.
From ifconfig inside Docker, I see the docker and wlan interfaces. I am using the wlan inet address in the exported env variables | {
"domain": "robotics.stackexchange",
"id": 31896,
"tags": "ros-kinetic"
} |
How to construct context-free language $L$ to prove $L′=\{x|xx∈L\}$ is not context-free? | Question: Can someone please explain me how to solve this?
In this post here, a user sketched the solution, but I still don't understand how to construct a context-free language $L$ such that the words in $L$ of the form $ww$ are $a^mb^mc^ma^mb^mc^m$ $(m\geq 0)$. What does the grammar for $L$ look like? What equalities does the grammar have to ensure, and why?
Thanks!
Answer: Take a look at
$$L = \{a^nb^nc^ma^mb^kc^l : n, m, k, l \geq 0\}.$$
Now take some $x \in L$ with $x = ww$, then $x = a^nb^nc^ma^mb^kc^l$ for some $n, m, k, l \geq 0$.
If $x$ contains at least one $a$, the only possible way to split $x$ into two equal parts $w$ is:
$$w = a^nb^nc^m = a^mb^kc^l.$$
Note that for distinct symbols $c_i$, $c_1^{n_1}c_2^{n_2}...c_k^{n_k} = c_1^{m_1}c_2^{m_2}...c_k^{m_k}$ implies $n_i = m_i$ for all $i$.
From this we know that
1. $n = m$,
2. $n = k$, and
3. $m = l$.
So from 1.–3. it follows that every word of $L' = \{x \mid xx \in L\}$ containing an $a$ has the form $a^nb^nc^n$. The only $a$-free words of $L'$ are those in $b^* \cup c^*$, so $L' \cap a^+b^*c^* = \{a^nb^nc^n : n \geq 1\}$, which is known to be not context-free. Since context-free languages are closed under intersection with regular languages, $L'$ cannot be context-free.
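As a brute-force sanity check (my own sketch, not part of the original answer), membership in $L$ can be tested directly for small words:

```python
import re

# Brute-force membership test for L = { a^n b^n c^m a^m b^k c^l : n,m,k,l >= 0 }.
def in_L(x: str) -> bool:
    for n in range(len(x) // 2 + 1):                 # candidate a^n b^n prefix
        if not x.startswith('a' * n + 'b' * n):
            continue
        rest = x[2 * n:]
        for m in range(len(rest) // 2 + 1):          # candidate c^m a^m block
            if not rest.startswith('c' * m + 'a' * m):
                continue
            if re.fullmatch(r'b*c*', rest[2 * m:]):  # remainder must be b^k c^l
                return True
    return False

# Words (a^n b^n c^n)^2 are in L, as the argument requires.
for n in range(5):
    w = 'a' * n + 'b' * n + 'c' * n
    assert in_L(w + w)
assert not in_L('abab')
```

The checker simply enumerates the exponents $n$ and $m$ forced by the $a^nb^n$ and $c^ma^m$ blocks and leaves $k,l$ to a regular-expression match, mirroring the structure of $L$'s definition.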
To prove that $L$ is context free, note that $L$ can be generated by the following context-free grammar with start variable $S$:
$$S \to ABCD$$
$$A \to aAb | \varepsilon$$
$$B \to cBa | \varepsilon$$
$$C \to bC | \varepsilon$$
$$D \to cD | \varepsilon$$ | {
"domain": "cs.stackexchange",
"id": 21823,
"tags": "formal-languages, context-free, formal-grammars"
} |
What would happen if a ball flew through the open window of a car? | Question: I asked a science teacher this question a while ago but I did not receive a clear answer. Now, I was hoping someone on this Physics site could shed some light on my conundrum,
I've been wondering for a long time what would happen if a ball flew into a car through an open window.
An example of this situation is detailed below in this excerpt from a novel I once wrote. The situation on the pages below highlights an event where two different scenarios occur and I was wondering which one would be more realistic.
Excerpt, pages 831-833:
To start with positive footing, the work of that employee was not wasted that evening. On the negative side, the very next pitch that Gregg rolled was slammed so hard by the kicker that C.1 thought the ball would explode in mid-air. Instead, something much worse happened; the ball flew with great speed towards the road, and at the very same moment, to the right, a red sedan emerged from behind a clump of pine trees speeding towards the ball. Would they intersect? C.1 was afraid that the ball would hit the red car. But after what happened next, C.1 considered that hitting the red car would have been better than what really did happen next.
By this time, everyone was watching the ball. A blur of green flew over the sidewalk and the gutter, and just about when C.1 knew that the ball would surely smash the driver’s mirror, the ball flew right towards the window, and inside the car!
They could all hear a shriek from the car as the driver freaked out. The driver’s window had been rolled down and the ball had somehow managed to fly inside the car! The car swerved and skidded before screeching to a halt, but not before taking out someone’s mailbox and narrowly avoiding a large tree as it came to a standstill on the lawn of the one-story house that still bore the ball on its roof.
“Oh my goodness!” Gregg exclaimed after they had all closed their mouths. “I hope she’s alright!”
Several employees dashed towards the road as fast as they could, sprinting with all their might until they reached the car. They began talking to the driver, and finally, they began sprinting back, right before the red car rolled off of the lawn, bounced over the curb, and continued on its way down the road.
“Nearly took out her nose,” David explained as he reached the diamond again. “We explained what was going on and apologized, and she said that she had just suffered a fright that was all; she just rolled up her windows and drove away.”
“Good to see that she has some common sense now,” Gregg lazily chuckled.
“The ball, on the other hand, was not so lucky,” David said, handing Gregg a flat piece of green rubber. “It hit one of the knobs on the dashboard and popped. It’s very lucky that Marcia has inflated those extra balls, because we’ll need one.”
The explanations concluded and the game resumed play. Several fielders guarded the road more than was necessary, and they kept their hands ready every time Gregg pitched the ball.
Finally, the teams switched again, and Gregg’s team was up to kick again. Several home runs were scored, including Gregg’s, and he received a hearty applause from the team when he came running back in, his foot brushing over home plate just before a blur of green flew over the plate and bounced against the foul pole.
Finally, it was C.1’s turn to kick. He imagined C.15’s face in the outfield, but quickly realized that he needed to think about someone he hated; not someone he loved. He finally decided on his deceased father and nodded his head towards the pitcher.
Just before the ball came over the plate, C.1 slammed the ball for all it was worth. Without stopping to think, he shot off of the plate and began rounding the bases like his life depended on it. He kept one eye on the baseline and another on the ball, but just before he got to 3rd base, things got interesting, and C.1 stopped running to watch.
The ball had indeed flown towards the air, but C.1 had kicked it with so much power, that it was still nearly 40 to 50 feet above the ground when it passed the infield. One of the men in the outfield made a jump for the ball, but it flew several feet above his hand and…
Just as C.1 thought that it would just land in the street and that would be the end of it, another car came rocketed out of the blue from nowhere, speeding towards the intersection where the ball and the driver had previously collided earlier that game. C.1’s eyes filled with horror, and just as he braced himself for the worst, a very interesting thing happened.
First, the ball did indeed fly through the driver’s window, but instead of getting stuck inside the car as it would have if it were not for the driver rolling all of the car’s windows down, it flew right out of the other window on the passenger side. But before everyone could express their awe to each other, another interesting thing happened.
Right after the ball flew out of the car’s window, the car continued down the street, on its way as if nothing had happened. C.1 was confused at first, but decided that it must have happened so fast that the driver would definitely not have remembered anything but a greenish blur in front of his or her face. But this was soon driven out of his mind; for the next moment, they were all distracted by the sound of a garage door closing!
They all turned to look; the garage door of the house right in front of where the car had been had just started closing, and just before the ball reached the ground, and just before the garage door fully closed, the ball flew under the garage door and into the garage, just before the garage door reached the ground!
Gregg’s hand flew up to his mouth in horror, as did several others. Clearly, nothing like this had ever happened before at a kickball game. Gregg called for time-out, and soon, Willis, Gregg, David, and C.1 were sprinting as fast as they could towards the house that the ball was now somewhere in.
At last, they quickly crossed the street over to the house that the ball was in. A large maple tree bordered by an oak occupied the majority of the front yard, with much of its many upper branches hanging over into the street as if sheltering it from the house. They hurried up the front walk which was to the right of the garage; jumping the porch steps two at a time, and Gregg rang the doorbell before the rest of them had reached the door.
It was several rings later before anyone answered the door. When the door opened, it revealed a lady with brown hair likely in her mid to late forties. Without wasting any time, Gregg quickly began to explain what had just happened, and the lady nodded and said, “Hmm…hmm… OK…” from time to time until Gregg had completed his explanation.
“So it’s in the garage, you say?” she asked finally.
“That it is, ma’am,” Gregg kindly replied.
The lady beckoned them into the house and they followed her. They turned left to go down a short hallway into a dimly lit room before the lady unlocked a door to their left and they peered into the dark garage, its only light coming through a window in the attic of the garage. Underneath a wheelbarrow, in plain view, was the green kickball, without so much as a scratch upon it.
“Thank you, ma’am,” Gregg replied, as he fetched the ball and they turned to leave the house. The lady waved goodbye and they hurried back to the baseball diamond.
(C) Copyright 2011, http://interlinked.x10host.com
The excerpt above may have been long and confusing. I also made this rough sketch of what it might look like:
In the situation on the left, the ball moves with the car while it is travelling in the car, and then flies out the other open window. In the situation on the right, the ball does not move with the car and keeps moving relative to the ground, and does not make it out the other window.
The ball never touches anything in the car in either example.
So, which example represented here would likely occur? Is it possible that it could be either one? What would it depend on?
I've been doing a lot of thinking about this lately and I'm thoroughly befuddled. There were some other questions about flies in cars on this site but they didn't quite answer this question.
Hope this question makes sense. If anyone could help, that would be great! I looked online, but I couldn't find an example like this anywhere, and I imagine this represents a unique conundrum.
Answer: I think your question is essentially a duplicate of Ball thrown from a moving train.
As QuantumBrick says in a comment :
If the person that throws the ball has the same velocity as the car, the ball will pass through both windows (no air drag considered).
So if both the thrower and the car are stationary on the ground, or the ball is thrown from a 2nd car which is moving alongside the 1st with the same speed, then the ball will pass through both windows. This is because all velocities are relative : if everything is moving at the same speed along the ground, this is equivalent to everything being stationary relative to the ground.
On the other hand if the thrower is stationary but the car is moving, then the ball will probably miss the 2nd window. The ball follows the same path relative to the ground as though the car is not there. It will go into one window, but by the time it reaches the other window the car will have moved forward. If the car is going fast enough the ball will probably miss the 2nd window.
The latter would also happen if the ball is thrown sideways from a car which is moving in the opposite direction into a stationary car. The ball moves forward with the same speed as the moving car, as well as sideways - so it moves diagonally relative to the ground. It enters one window of the stationary car but moves diagonally across the stationary car, and probably misses the far window.
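To put rough numbers on the moving-car scenario (all values below are illustrative assumptions, not from the question):

```python
# Illustrative estimate: a ball crosses a moving car's interior sideways.
# It exits the far window only if the car's forward travel during the
# crossing stays within the fore-aft length of the window opening.
car_width = 1.8        # m, distance between the two side windows (assumed)
window_length = 0.9    # m, fore-aft length of the window opening (assumed)
ball_speed = 15.0      # m/s, sideways speed of the ball (assumed)
car_speed = 10.0       # m/s, forward speed of the car (assumed)

crossing_time = car_width / ball_speed   # time the ball spends inside the car
car_advance = car_speed * crossing_time  # how far the car moves in that time

# If car_advance exceeds window_length, the ball hits the interior
# instead of leaving through the far window.
print(car_advance > window_length)
```

With these numbers the car advances about 1.2 m while the ball crosses 1.8 m of cabin, so the ball would strike the interior; a stationary car (car_speed = 0) trivially lets it through.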
[I didn't read your Excerpt. It was too long and mostly irrelevant.] | {
"domain": "physics.stackexchange",
"id": 32022,
"tags": "newtonian-mechanics, relative-motion"
} |
Abstract concept of wave propagating on a string | Question: I'm a real beginner in physics with a really basic question about waves.
Suppose I have a string (a perfectly elastic material) whose left end I can manipulate (I can change its height) and whose right end can either be fixed, loose, or actually non-existent (the string goes on indefinitely to the right).
I'm interested in the effect of manipulating the height of the left end of the string.
Let's consider two cases.
i) The height starts at $c_0$ and I make it grow indefinitely (non-stop).
ii) The height starts at $c_0$ and I lift it to $c_1 > c_0$, keeping it constant at $c_1$ afterwards.
What would be the effect of i) and ii) on the remainder of the string?
I was thinking that we could consider a wave to be generated, that is, the action in i) or ii) would be propagated sequentially to all points of the string, starting at the point immediately to the right of the left end, forming a kind of wave.
My motivation for thinking of it as a wave is that we can perceive that effect as simply energy traveling through a medium (the string) without causing a permanent change in its constituents (the points of the string), and that is the common feature that all waves we know (electromagnetic, surface waves, etc.) share.
P.S.: Here we can consider no damping at all (the energy is transferred without loss).
But is it really a wave? Can we talk about frequency, period, wavelength and wave velocity in both cases i) and ii)?
For action i), I was thinking of considering, purely theoretically, the period of the generated wave as infinite and the frequency as 0. What about the wavelength and the propagation velocity of the wave? On what parameters would they depend? Would they depend solely on the properties of the string (density, tension, etc.), or would they also depend on the speed at which action i) is performed?
For action ii), I was thinking of considering the period of the generated wave simply as the amount of time it takes for the height to grow from $c_0$ to $c_1$, and considering the frequency as the inverse of that. But again, what about the wavelength and the propagation velocity of the wave?
Thanks a lot, and sorry for elaborating the question in a simplistic way (I don't have enough physics knowledge to elaborate it better).
P.S.: What motivated this question was exploring with the following web app:
http://phet.colorado.edu/sims/wave-on-a-string/wave-on-a-string_en.html
Answer: From Wikipedia:
In physics, a wave is disturbance or oscillation (of a physical
quantity), that travels through matter or space, accompanied by a
transfer of energy. Wave motion transfers energy from one point to
another, often with no permanent displacement of the particles of the
medium—that is, with little or no associated mass transport. They
consist, instead, of oscillations or vibrations around almost fixed
locations. Waves are described by a wave equation which sets out how
the disturbance proceeds over time.
If we use that definition, in both cases you will have a wave, because there is a perturbation that moves and transfers energy without mass transport. But it is not a sinusoidal wave, that is, one that has a sine shape with endless peaks and valleys. However, Fourier's theorem shows that any shape can be decomposed into a collection of sinusoidal waves of different frequencies. What this means is that in your example the wave has multiple frequencies (an infinite number, to be more precise) or wavelengths. | {
"domain": "physics.stackexchange",
"id": 17820,
"tags": "waves"
} |
How to reshape data for LSTM training in multivariate sequence prediction | Question: I want to build an LSTM model for customer behaviour. It's the first time I'm working on a time series, so some concepts are not clear to me at all.
My prediction problem is multidimensional, meaning that I also want to predict several pieces of information associated with each customer's actions.
The dataset is currently shaped as a list of 2d padded arrays of one-hot encoded features (customer actions + other informations), for example:
customer_id encoded_features
0 25464205 [[0,1,0],..,[1,1,1],[1,0,1],..,[1,0,1]]
1 56456574 [[0,1,1],..,[1,0,1],[1,0,1],..,[1,1,1]]
where each element in the encoded_features entries represents a specific timestep.
My idea here is to use keras input shape
(n. customers, n. timesteps, length of features encoding)
In the example above it would be (2,#timesteps,3).
I have two main questions:
Is this whole setting right for predicting the next single customer action? I would like to simply give a new sequence of features for a certain customer and predict all features at the next timestep.
I am thinking about splitting the data (according to a certain ratio) into sequential training and test sets, in order to test the trained model on unseen feature vectors. In the example above it would be:
customer_id X_train y_train
0 25464205 [[0,1,0],..] [1,1,1]
1 56456574 [[0,1,1],..] [1,0,1]
customer_id X_test y_test
0 25464205 [[1,0,1],..] [1,0,1]
1 56456574 [[1,0,1],..] [1,1,1]
Notice that X_train and X_test will generally contain all Train/Test events, except for the last one which has to be predicted.
Is this a correct interpretation?
Answer: This makes sense. It should work for the input and the first couple of layers. For the output layer, you can have a softmax if you only need to generate the next record in the sequence.
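As a quick sanity check of that layout (NumPy only; the array contents are illustrative, not real data), the (samples, timesteps, features) shape and the last-step split could look like this:

```python
import numpy as np

# 2 customers, 4 timesteps, 3 one-hot encoded features per timestep
sequences = np.array([
    [[0, 1, 0], [1, 1, 1], [1, 0, 1], [1, 0, 1]],
    [[0, 1, 1], [1, 0, 1], [1, 0, 1], [1, 1, 1]],
], dtype=np.float32)

# Keras recurrent layers expect (samples, timesteps, features)
assert sequences.shape == (2, 4, 3)

# predict the final timestep from all preceding ones
X = sequences[:, :-1, :]  # shape (2, 3, 3): input sequences
y = sequences[:, -1, :]   # shape (2, 3): one target vector per customer
```

An output layer of width 3 then matches y, and new customers are added along the first axis.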
The following Keras code has an example that:
Accepts multi-dimensional inputs (Each sample is a Sequence of video frames)
Predicts next few frames of video ( Multi dimensional since each pixel is a feature)
https://github.com/keras-team/keras/blob/master/examples/conv_lstm.py | {
"domain": "datascience.stackexchange",
"id": 4618,
"tags": "python, keras, time-series, lstm, rnn"
} |
Solving a in/equality constraint problem with graph search | Question: You are given a list of m constraints over n distinct variables x1, ..., xn. Each constraint is of one of the following two types.
An equality constraint of the form xi = xj for some i != j.
An inequality constraint of the form xi != xj for some i != j.
I want to find an assignment, if it exists, for each variable such that it conforms to all the constraints using a graph search algorithm in O(m+n) time.
This reminds me of the graph colouring problem, however that only involves checking graph neighbours where as in any efficient graph I could think of, the nodes sharing a constraint may not be neighbours.
My first thought was to create a graph in which all nodes that are equal are connected, then use DFS to traverse each node and check if it has an inequality with a parent. However, that doesn't seem very efficient: for every node I may have to traverse every inequality constraint, which brings me to O(nm) time, whereas DFS inherently runs in O(m+n) on an ideal representation.
Any clues?
Answer: Let there be K equalities.
Create a forest of n nodes stored in an adjacency list O(n)
For each equality between Xi and Xj, add an edge between them. O(2K)
Call DFS on every node that has not been visited yet. O(|m| + |n|)
DFS has a counter that starts at 0
When DFS visits a node, it sets that node’s value to the current value of the counter
Each time DFS restarts from a fresh unvisited root (that is, it enters a new connected component), it increments its counter by 1
For each inequality between Xi and Xj, check the values assigned by the counter during the DFS; if they are equal, return NIL. O(2(m - K))
Create the result array by going through the nodes and setting index j of the array to the value assigned to node X_j. O(n)
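A Python sketch of the outline above (function and variable names are my own; an iterative DFS labels each equality-connected component with a distinct value):

```python
def solve_constraints(n, equalities, inequalities):
    """Return an assignment satisfying all constraints, or None.

    Nodes connected by equality edges share one component label;
    inequalities are then checked across labels. Runs in O(n + m).
    """
    adj = [[] for _ in range(n)]
    for i, j in equalities:              # one undirected edge per x_i = x_j
        adj[i].append(j)
        adj[j].append(i)

    value = [None] * n
    counter = 0
    for start in range(n):
        if value[start] is not None:
            continue                     # already labelled
        value[start] = counter
        stack = [start]                  # iterative DFS over one component
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if value[v] is None:
                    value[v] = counter
                    stack.append(v)
        counter += 1                     # next component gets a new value

    for i, j in inequalities:            # x_i != x_j must cross components
        if value[i] == value[j]:
            return None
    return value
```

For example, solve_constraints(4, [(0, 1), (2, 3)], [(0, 2)]) yields [0, 0, 1, 1], while connecting all variables by equalities and demanding any inequality returns None.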
=> Worst case time complexity O(3|m| + 3|n|) => O(|m| + |n|) | {
"domain": "cs.stackexchange",
"id": 13561,
"tags": "graphs, graph-traversal, constraint-satisfaction"
} |
Determining if a Protein Model Contains a Backbone Clash | Question: I have an ensemble of homology models of a protein, and I now wish to remove those models which have backbone clashes. I could obviously check by eye but this is subjective and probably will not be accepted for publication.
What is the best (reproducible) method to determine if a particular protein model contains a backbone clash?
Answer: The way to check for steric clashing between any two atoms, backbone or otherwise, is to compute their Euclidean distance. If a and b represent two atoms (with a_x being the X coordinate of atom a and so forth), you can calculate their Euclidean distance as follows.
d(a, b) = sqrt( (a_x - b_x)^2 + (a_y - b_y)^2 + (a_z - b_z)^2 )
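For instance, this could be implemented as follows (a sketch only: the bonded-neighbour exclusion window and the choice of radii are my assumptions, not part of the answer):

```python
import math

def dist(a, b):
    # Euclidean distance between two 3-D coordinate tuples
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def has_backbone_clash(atoms, radii, skip_adjacent=2):
    """Pairwise clash check over backbone atoms listed in chain order.

    atoms -- list of (x, y, z) coordinates
    radii -- matching list of van der Waals radii
    Pairs closer than `skip_adjacent` positions along the chain are
    skipped so covalently bonded neighbours are not flagged as clashes.
    """
    for i in range(len(atoms)):
        for j in range(i + skip_adjacent + 1, len(atoms)):
            if dist(atoms[i], atoms[j]) < radii[i] + radii[j]:
                return True
    return False
```

A model would then be discarded whenever has_backbone_clash(...) is True; the naive double loop is O(n^2), and a spatial grid or k-d tree can speed it up for large ensembles.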
So essentially the idea would be to calculate the pairwise distance between each of the backbone atoms. For any pair of atoms, there is steric clashing if the distance between them falls below a certain threshold. If I remember correctly, this threshold is the sum of the van der Waals radii of the two atoms. | {
"domain": "biology.stackexchange",
"id": 522,
"tags": "bioinformatics, proteins"
} |
How to control pitch, roll, yaw with mavros | Question:
Hello,
I am quite new to the Pixhawk, but quite familiar with ROS. I am searching for some information on how to control the Pixhawk via mavros. I am using the PX4 firmware.
My plan is to use the altitude_hold_mode, so that the multicopter hovers at a specific height. Additionally, I want to control the pitch, roll and yaw angles of the multicopter via ROS. A bit like 2D control in 3D space, but at a specific height.
I am a bit lost on how to set up my Pixhawk with the external computer (an Odroid fixed to the multicopter).
I found this tutorial [https://pixhawk.org/dev/ros/mavros_offboard] .
So I can control the copter with the help of standard ROS messages. More specifically, I can control: position, attitude, velocity and acceleration. Here are my questions:
In order to control the copter via mavros, it has to be in offboard_mode. Would this conflict with the altitude_hold_mode?
And which message should I use to control roll/pitch/yaw? In my little project the copter has to fly with specific "angles".
Thx for your help.
EDIT1:
The connection between the Odroid and the Pixhawk is working. I can handle the streams (baud rate, Hz) as well. I am a bit confused about the whole mavros-setpoint part. I will try to bring some order to it ;).
If I use setpoint_position, how is the actual position of the copter calculated? I think it depends on my sensor setup, right? For example, distance sensor + GPS. Unfortunately I can't use GPS or optical flow, so this method will not work, right?
The setpoint_attitude option seems more like something that would fit, but it can't offer a stable z position, right? So I would have to read out the distance sensor and regulate the throttle myself, correct?
I wonder if it would be easier to use the standard altitude_hold mode of the Pixhawk. Additionally, I would read out the IMU data of the Pixhawk and regulate pitch, roll and yaw over mavros/rc/override. Would this also be a possibility?
Originally posted by Tirgo on ROS Answers with karma: 66 on 2015-07-15
Post score: 0
Answer:
PX4 allows controlling attitude only in OFFBOARD; it is impossible to combine modes.
But depending on the control type you may still hold altitude. The easiest way is with position setpoints: just send the same Z each time.
You want RPY control, which is done via attitude setpoints (in quaternion form), but in that mode you only control throttle, so the altitude controller has to be implemented by the user.
The FCU-OBC connection depends on what computer you use. In general you need a 3.3V UART from the OBC connected to TELEM2.
Examples: the Raspberry Pi UART pins may be used directly; the Odroid U3 needs a 1.8V<->3.3V level converter; an Intel NUC needs an FTDI USB-UART adapter.
Then you should set the SYS_COMPANION FCU parameter to 921600 or 57600 (depending on the maximum baud rate allowed by your computer).
Tip for Raspberry Pi 2 (and 1 too): place these lines in /boot/config.txt (alternate: /boot/firmware/config.txt):
# Higher UART Speed
init_uart_baud=921600
init_uart_clock=14745600
Originally posted by vooon with karma: 404 on 2015-07-16
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Tirgo on 2015-07-16:
Thx for your help. (Too long for a comment i edit my question.)
Comment by vooon on 2015-07-17:
rc/override is an APM-only feature. PX4 has actuator_control, but for a flying vehicle that is harder than implementing an altitude controller.
Position sp uses local position, like data published by local_position plugin.
Not sure what if there no global position source, better to ask px4-users.
Comment by Tirgo on 2015-07-20:
So the way to go is to use the setpoint_attitude feature to control roll, pitch and yaw, and additionally control the altitude manually by sending throttle commands. Do you know if these features are integrated in the stable version? Or do I need the master version for it?
Comment by vooon on 2015-07-20:
All setpoint handling exist while ago, so it is in stable. Also Nuno working on tests for setpoints & PX4 SITL. May be useful.
Comment by Tirgo on 2015-09-11:
Hi. I had no time to go on with the project till now. Just another short question. Is the setpoint_attitude feature px4 only? Couldn't find much documentation about it in the apm wikis. PX4 offers the Offboard mode, what is the corresponding mode in apm? Thx
Comment by vooon on 2015-09-11:
You can check that in ArduCopter/capabilities.cpp. Current master can handle local or global position setpoints. Mavros only supports local ones.
I'm not checked current sources, but previously setpoints wants GUIDED mode. | {
"domain": "robotics.stackexchange",
"id": 22193,
"tags": "ros, pixhawk, yaw, mavros"
} |
The set of possible values of linear programs | Question: Consider the set of all linear programs of the form:
maximize $c x$
subject to $A x \leq b$
$x \geq 0$
where there are $m$ variables, $n$ constraints, and all coefficients in $A, b, c$ are integers from a given finite set $K$.
Since the number of such programs is finite, the number of possible optimal values of these programs (those that have an optimal value, that is, are feasible and bounded) is finite.
My questions are:
Is there any non-trivial upper bound on the number of possible optimal values? (besides the number of possible programs).
Is there any simple representation for the set of all possible optimal values, as a function of $m, n$ and $K$?
Answer: For simplicity, assume $0 \in K$. It allows us to treat $x \geq 0$ constraints in the same way as other constraints.
For an LP that has an optimal solution, there is an optimal basic solution. Therefore, it is sufficient to consider the set of possible solutions of non-singular systems of $m$ linear equations in $m$ variables.
By Cramer's rule, a solution of a system of linear equations $Ax = b$ can be represented by ratios of determinants $x_i = \frac{\det(A_i)}{\det(A)}$. Therefore, an optimal value of an LP can be represented by a ratio $\frac{\sum_i c_i \det(A_i)}{\det(A)}$.
Suppose the maximum absolute value of the coefficients is $k$; then we can bound the determinants with Hadamard's inequality.
The absolute value of the denominator $\det(A)$ is at most $k^m m^{m/2}$. Similarly, the numerator has absolute value at most $mk \cdot k^m m^{m/2}$.
Because both the numerator and the denominator are integers, the number of possible values is $O(k^{2m+1} m^{m+1})$. | {
"domain": "cs.stackexchange",
"id": 20621,
"tags": "linear-programming"
} |
Can a DFA have an empty string as input? | Question: Given string w such that every odd position in w is a 1
My solution is
The book's solution is
From my understanding, $\epsilon$ should not be accepted; however, in the book's solution it is. How does it handle this case?
Answer: Yes, an empty string is a valid input to a DFA. If this were not the case, DFAs would not be closed under all of the Kleene algebra operations.
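Since the two diagrams are not reproduced here, the following is a sketch of one possible DFA for "every odd (1-indexed) position is a 1", which shows concretely why $\epsilon$ is accepted: the start state is itself accepting, and the empty string never leaves it.

```python
# States: 'odd'  = the next position to read is odd (start state),
#         'even' = the next position to read is even,
#         'dead' = a 0 was seen at an odd position.
# Both live states accept, so the empty string is accepted.
DELTA = {
    ('odd', '1'): 'even',
    ('odd', '0'): 'dead',
    ('even', '0'): 'odd',
    ('even', '1'): 'odd',
    ('dead', '0'): 'dead',
    ('dead', '1'): 'dead',
}

def accepts(w):
    state = 'odd'                    # before reading, position 1 is next
    for ch in w:
        state = DELTA[(state, ch)]
    return state != 'dead'           # accepting states: 'odd', 'even'

print(accepts(''), accepts('10'), accepts('01'))  # True True False
```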
In this case, the argument is correct. Think of the problem statement as an implication. If $i$ is an odd position, then $S_i$ is 1. | {
"domain": "cs.stackexchange",
"id": 9891,
"tags": "finite-automata"
} |
Complexity of $\{0,\pm1\}$ determinant in sparse cases? | Question: If $M\in\{-1,0,+1\}^{n\times n}$ be a matrix with only $O(n)$ non-zero entries and hadamard product $M\odot M$ being symmetric can we compute $Det(M)$ in $O(n)$ bit complexity?
Assume that the matrix is given in form a list where $1,-1$ locations are and so the presentation is only $O(n)$ sized.
Answer: Depends how you feel about the exponent of matrix multiplication, as this would come very close to showing $\omega=2$.
If the answer to your question were positive, then you could compute the determinant of an arbitrary symmetric $n \times n$ $\{0,1\}$ matrix $M$ (=adjacency matrix of an undirected graph, possibly with self-loops) in $O(n^2)$ time. As the algebraic complexity of matrix multiplication and the determinant are essentially the same, this comes very close* to showing the exponent of matrix multiplication is 2.
More precisely: given a matrix $M$ as above, embed it as the upper-left corner of an $n^2 \times n^2$ matrix as follows:
$M' = \left(\begin{array}{cc} M & 0 \\ 0 & I_{n^2-n} \end{array}\right)$
Then $M'$ is an $N \times N$ matrix $(N=n^2)$ satisfying your conditions, and $\det M' = \det M$, so if your question had a positive answer then we could compute $\det M' = \det M$ in $O(N) = O(n^2)$ operations.
*-The main differences are: (1) using bit complexity, and (2) restriction to $\{0,1,-1\}$ entries. However, while it is possible, I have little reason to suspect that these restrictions significantly change the complexity of matrix multiplication from $O(n^\omega)$. | {
"domain": "cstheory.stackexchange",
"id": 4074,
"tags": "cc.complexity-theory, ds.algorithms, matrices, sparse-matrix, determinant"
} |
Calculation of image offsets for performing cropping | Question: I am using the java.awt.Rectangle class to construct subsets of a GEOTIFF file. In order to do this I would need to specify the x,y offsets, height and width of each subset image. In my particular case I would need to crop the original GEOTIFF image bottom to top. The java.awt.Rectangle class specifies that the origin point is to be found in the upper left hand corner.
Please let me know if the calculation of the image offsets are correct and/or there is a better way of calculating the image offsets. The crop(r) method is a proprietary method used to crop subsets of the original image.
int width = 40;
int height = 34;
int cellSize = 3600;
int xOffset = 0;
int yOffset = 0;
int pixelWidth = cellSize * width;
int pixelHeight = cellSize * height;
for (int i = 0 ; i < width; i++)
{
for (int j = 0; j < height; j++)
{
yOffset = pixelHeight - cellSize *(j+1) ;
Rectangle r = new Rectangle(xOffset,
yOffset,
cellSize,
cellSize);
crop(r);
}
xOffset = pixelWidth - cellSize * (i+1);
}
Answer:
int width = 40;
int height = 34;
int cellSize = 3600;
int xOffset = 0;
int yOffset = 0;
int pixelWidth = cellSize * width;
int pixelHeight = cellSize * height;
I would find this easier to follow in a different order:
final int CELL_SIZE = 3600;
final int WIDTH = 40;
final int HEIGHT = 34;
final int PIXEL_WIDTH = CELL_SIZE * WIDTH;
final int PIXEL_HEIGHT = CELL_SIZE * HEIGHT;
int xOffset = 0;
int yOffset = 0;
Now I see a natural progression of values and have the variables separate from the constants. The variables are also closest to where they are changed.
I changed the values that never change to final to reflect that.
The ALL_CAPS notation is common for constants.
for (int i = 0 ; i < width; i++)
{
for (int j = 0; j < height; j++)
{
yOffset = pixelHeight - cellSize *(j+1) ;
Rectangle r = new Rectangle(xOffset,
yOffset,
cellSize,
cellSize);
crop(r);
}
xOffset = pixelWidth - cellSize * (i+1);
}
You can write this as
for (int xOffset = PIXEL_WIDTH - CELL_SIZE; xOffset >= 0; xOffset -= CELL_SIZE)
{
for (int yOffset = PIXEL_HEIGHT - CELL_SIZE; yOffset >= 0; yOffset -= CELL_SIZE)
{
Rectangle r = new Rectangle(xOffset,
yOffset,
CELL_SIZE,
CELL_SIZE);
crop(r);
}
}
Then you don't need i or j at all and you don't have to calculate the offsets further.
I also changed the indent of the brackets to match the more common practice of lining up with the for declarations rather than the block contents.
You may want to do this in the other order:
for (int xOffset = 0; xOffset < PIXEL_WIDTH; xOffset += CELL_SIZE)
{
for (int yOffset = 0; yOffset < PIXEL_HEIGHT; yOffset += CELL_SIZE)
{
As that is easier to read. Not sure how much order matters.
If you need the original order exactly, you may need an extra loop. The original order started with an xOffset of 0 and then went to PIXEL_WIDTH - CELL_SIZE.
If you are committed to the i and j method, consider starting them at 1, as so
for (int i = 1 ; i <= width; i++)
{
for (int j = 1; j <= height; j++)
{
yOffset = pixelHeight - cellSize * j;
Then you don't have to add 1 to i and j each time. | {
"domain": "codereview.stackexchange",
"id": 15427,
"tags": "java, algorithm, image, awt"
} |
Lagrangian of massless particle in a potential | Question: There are a few questions about a Lagrangian for massless relativistic particles, notably here, here and here, regarding free particles in particular.
In the case of the square-root Lagrangian for massive particles, adding a potential term (such as a Coulomb potential energy) was no different to the non-relativistic case.
Is the same still true in the case of the Lagrangian without the square-root? My instinct is no, due to the difference in parameterisation and presence of the einbein. If not, is there a way to determine how the potential should be expressed?
Answer:
Let us parametrize the point particle by an arbitrary world-line (WL) parameter $\tau$ (which does not have to be the proper time).
Let us now address OP's question: Yes, even for the non-square-root Lagrangian
$$\begin{align}L~=~&\frac{\dot{x}^2}{2e}-\frac{e (mc)^2}{2} -V, \cr
\dot{x}^2~:=~&g_{\mu\nu}(x)~ \dot{x}^{\mu}\dot{x}^{\nu}~<~0,\cr
\dot{x}~:=~&\frac{dx^{\mu}}{d\tau}, \end{align}\tag{A}$$
with the einbein field $e>0$ one just subtracts the potential $V$ as usual.
As a consistency check (in the massive case $m>0$ and assuming that $V$ does not depend on $e$), we can integrate out the einbein field $e$ to obtain the usual square-root Lagrangian
$$ L_0~=~ -mc\sqrt{-\dot{x}^2}-V,\tag{B}$$
cf. e.g. this Phys.SE post.
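For concreteness, here is a sketch of that elimination (assuming $V$ is independent of $e$): the equation of motion for $e$ reads
$$ \frac{\partial L}{\partial e}~=~-\frac{\dot{x}^2}{2e^2}-\frac{(mc)^2}{2}~=~0\qquad\Leftrightarrow\qquad e~=~\frac{\sqrt{-\dot{x}^2}}{mc}, $$
and substituting this back into (A) yields
$$ L~=~-\frac{mc\sqrt{-\dot{x}^2}}{2}-\frac{mc\sqrt{-\dot{x}^2}}{2}-V~=~-mc\sqrt{-\dot{x}^2}-V, $$
which is precisely (B).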
Geometrically, we should point out that it is implicitly assumed that the 1-forms
$$ e\mathrm{d}\tau, \qquad V \mathrm{d}\tau, \qquad L_0\mathrm{d}\tau, \qquad L\mathrm{d}\tau,\tag{C}$$
(and the corresponding actions) are invariant under WL reparametrizations
$$ \tau\longrightarrow \tau^{\prime}=f(\tau). \tag{D}$$
In other words, the WL reparametrization (D) is a gauge symmetry. This means that one can choose a gauge-fixing condition, cf. e.g. this, this, this & this related Phys.SE posts.
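To make the invariance of the 1-forms (C) explicit, here is a short consistency check (an illustrative derivation, using only the chain rule). Under (D) the einbein and velocity transform as
$$ e^{\prime}(\tau^{\prime})~=~e(\tau)\frac{d\tau}{d\tau^{\prime}}, \qquad \dot{x}^{\prime\mu}~=~\frac{dx^{\mu}}{d\tau^{\prime}}~=~\dot{x}^{\mu}\frac{d\tau}{d\tau^{\prime}}, $$
so that each term of the Lagrangian (A) picks up exactly one factor of $d\tau/d\tau^{\prime}$:
$$ \frac{\dot{x}^{\prime 2}}{2e^{\prime}}~=~\frac{\dot{x}^{2}}{2e}\frac{d\tau}{d\tau^{\prime}}, \qquad \frac{e^{\prime}(mc)^2}{2}~=~\frac{e(mc)^2}{2}\frac{d\tau}{d\tau^{\prime}}, $$
and hence $L^{\prime}\mathrm{d}\tau^{\prime}~=~L\mathrm{d}\tau$, provided the potential 1-form $V\mathrm{d}\tau$ is itself reparametrization-invariant, as assumed in (C).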
Concerning coexistence of square-root & non-square-root Lagrangians, see also e.g. this Phys.SE post. | {
"domain": "physics.stackexchange",
"id": 81091,
"tags": "special-relativity, lagrangian-formalism, potential-energy, action, point-particles"
} |
Accessing previous estimated joint states of a robot | Question:
Hi,
I am new to ROS and I am trying to implement a Kalman filter for estimating the joints velocity and acceleration of a robotic manipulator.
The angular position of each joint is published to the topic /j2n6s300/joint_states, and the 'estimate_joint_states' node subscribes to that topic to get the data. Now I want the estimation to be done within 'estimate_joint_states' node and published to another topic /j2n6s300/estimated_joint_states. However, in each iteration of the Kalman filter, the previous estimates of the system (angular position, velocity and acceleration) are required. What is the "ROS way" to access that data? Can the node estimate_joint_states subscribe to /j2n6s300/estimated_joint_states (the topic in which it is also publishing) and then use its data in the next time step? Or is there a better way to access the previous estimated states?
I know it is possible to write a node which subscribes and publishes at the same time. But is it also possible to implement a node which subscribes to two topics and publishes to one of them?
I am implementing in python by the way.
Thanks in advance!
Originally posted by Sahand_Rez on ROS Answers with karma: 3 on 2018-04-07
Post score: 0
Answer:
What you ask (subscribe to your own topic) is possible, but in this case it seems strange: if estimate_joint_states calculates the estimate, why not keep it around in a local variable and reuse it in the next iteration?
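For illustration, a minimal sketch of that pattern in plain Python — a stripped-down scalar filter standing in for the full position/velocity/acceleration one; the point is only that the previous estimate lives on `self` between callbacks (in the real node the callback would publish `self.x` to /j2n6s300/estimated_joint_states instead of returning it):

```python
class JointStateEstimator:
    """Keeps the previous estimate as instance state between callbacks."""

    def __init__(self, q=1e-4, r=0.1):
        self.x = 0.0  # previous estimated joint angle -- the "memory"
        self.p = 1.0  # previous estimate variance
        self.q = q    # process noise variance
        self.r = r    # measurement noise variance

    def joint_states_callback(self, measured_angle):
        # Predict step (random-walk model, for brevity).
        self.p += self.q
        # Update step, reusing self.x and self.p from the previous iteration.
        k = self.p / (self.p + self.r)
        self.x += k * (measured_angle - self.x)
        self.p *= 1.0 - k
        # The real node would publish self.x here instead of returning it.
        return self.x

est = JointStateEstimator()
for _ in range(100):
    estimate = est.joint_states_callback(3.0)  # constant fake "measurement"
```

The same structure carries over unchanged to a 3-state (angle, velocity, acceleration) filter: only `self.x` and `self.p` become a vector and a matrix.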
Originally posted by gvdhoorn with karma: 86574 on 2018-04-07
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Sahand_Rez on 2018-04-07:
Thanks a lot for pointing that out! I wasn't at all thinking about something this simple. | {
"domain": "robotics.stackexchange",
"id": 30572,
"tags": "estimation, rospy, ros-indigo, joint-state-publisher, subscribe"
} |
What determines the Hamiltonian of an isolated quantum system? | Question: Let an isolated quantum system be in state $|\psi\rangle$. Then, quantum mechanics says the system evolves in time according to some Hamiltonian $H$, which does not depend on $|\psi\rangle$. But the first postulate of quantum mechanics also says that $|\psi\rangle$ completely describes the isolated system. If so, how can $H$ not depend on it? What then determines $H$ of an isolated system? (Take the universe as an example, that's isolated in the ideal sense, as there is nothing outside it. Still, its Hamiltonian depends on something other than its own state?)
Answer:
But the first postulate of quantum mechanics also says that |ψ⟩ completely describes the isolated system.
$|\psi\rangle$ completely describes the state of an isolated system at that moment in time. It does not describe how that state evolves with time.
What then determines H of an isolated system?
One could ask the same about a classical system as well. Different Hamiltonians describe different physical systems. If you have a particular system in mind, which Hamiltonian you should use to model it is not always a trivial question to answer.
When describing particles interacting with some potential, you will typically encounter so-called Schrodinger Hamiltonians of the form
$$\hat H = \frac{1}{2m} \hat P^2 + V(\hat X)$$
with $\hat P$ and $\hat X$ the position and momentum operators defined on whichever Hilbert space is appropriate (usually $L^2(\mathbb R)$ for a 1D system or $L^2(\mathbb R^3)$ for a 3D system).
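As a concrete (purely illustrative) realization, discretizing such a Schrodinger Hamiltonian on a finite grid with $\hbar = m = 1$ turns it into a tridiagonal matrix — the grid size and harmonic potential below are arbitrary choices:

```python
# Finite-difference sketch of H = P^2/2m + V(X) on n grid points (hbar = m = 1).
# Grid spacing dx and the harmonic V below are illustrative choices.
n, dx = 5, 0.1
t = 1.0 / (2.0 * dx * dx)  # hopping amplitude from -(1/2) d^2/dx^2

def V(x):
    return 0.5 * x * x  # hypothetical harmonic potential

H = [[0.0] * n for _ in range(n)]
for i in range(n):
    H[i][i] = 2.0 * t + V(i * dx)       # diagonal: kinetic + potential
    if i + 1 < n:
        H[i][i + 1] = H[i + 1][i] = -t  # off-diagonal kinetic coupling
```

The resulting matrix is real and symmetric (Hermitian), as a Hamiltonian must be.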
On the other hand, sometimes your system is best modeled as a system of interacting spins with fixed spatial locations as in the Ising model, which involves a different Hilbert space and a different kind of Hamiltonian. Sometimes your system can be reasonably well-modeled as a simple two-state system subjected to an external potential, as in the ammonia maser toy model.
At the end of the day, constructing a mathematical model for a quantum system involves choosing a Hilbert space (which defines the set of states the system can occupy) and a Hamiltonian operator (which defines how those states evolve with time). In real applications, both tasks are generally non-trivial; the standard recipe is to make those choices based on physical intuition and experience, check the predictions of the resulting model against experiment, and then update your choices if those predictions aren't sufficiently accurate for your needs. | {
"domain": "physics.stackexchange",
"id": 76308,
"tags": "quantum-mechanics, schroedinger-equation, hamiltonian, time-evolution"
} |
How can I list/know the available nodes in a package? | Question:
Hello,
How do you list/know the available nodes to run using rosrun in a package???
Thanks.
--Luis
Originally posted by Luis Ruiz on ROS Answers with karma: 114 on 2014-03-31
Post score: 0
Answer:
Just enter rosrun package_name in bash and press Tab twice (bash completion lists the package's executables). E.g. for image_view it gives:
~$ rosrun image_view
disparity_view extract_images image_saver image_view stereo_view video_recorder
Originally posted by Wolf with karma: 7555 on 2014-03-31
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 17475,
"tags": "ros, package, nodes"
} |
How to measure a solid-solid surface energy? | Question: Many techniques exist to measure the surface energy between a liquid and a liquid or a liquid and a gas (see e.g. the wiki page).
Methods to measure the surface energy between a solid and a fluid are rare, but there is a method developed by Zisman (see e.g. here) that allows you to at least estimate it by extrapolation for solid/gas or solid/liquid, depending on the environment that you use in your experiment.
What I wonder: is there a method to measure the surface energy between two non-elastic solids?
One option I could think of is that you could melt one of the solids and then use the technique of Zisman, but this will limit your knowledge to high temperature surface energy, whereas the ones at low temperature are the thing you are typically interested in.
EDIT: just for future reference, this is a study on surface energies between two solids, but with one being highly elastic
Answer: I have done some searching and found out that there is a technique that has been around for roughly 10 years already and it is surprisingly simple (if you have the right, expensive, equipment). It can be found in this JCIS paper (which is also freely available here).
The technique works as follows: an atomic force microscope (AFM) with a well-defined spherical tip made out of solid 1 is brought into contact with solid 2. Then the tip is pulled of the surface again and the work of adhesion is measured. Based on the pull-off force and theoretical contact mechanics models (for details see the paper) you can calculate the surface energy $\gamma$ between the two solids from the following equation:
$$ \gamma = \frac{F}{2\pi c R} $$
where $F$ is the pull-off force, $R$ is the tip radius and $c$ is a constant between 1.5 and 2 depending on the details of the contact model. The paper explains how to choose which model is appropriate for the type of measurement you do.
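For a sense of scale, one can plug plausible AFM numbers into this formula — the force and tip radius below are invented for illustration, and c = 2 corresponds to the DMT limit:

```python
import math

# gamma = F / (2 pi c R) with invented but plausible AFM numbers:
F = 50e-9   # pull-off force in N (hypothetical)
R = 2e-6    # tip radius in m (hypothetical)
c = 2.0     # DMT-limit value of the constant c
gamma = F / (2.0 * math.pi * c * R)  # surface energy in J/m^2, here ~2 mJ/m^2
```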
Some conditions (assumptions) for the theoretical models apply:
deformations of materials are purely elastic, described by classical continuum elasticity theory
materials are elastically isotropic
both Young's modulus and Poisson's ratio of materials remain constant during deformation
the contact diameter between particle and substrate is small compared to the diameter of the particle
a paraboloid describes the curvature of the particle in the particle–substrate contact area
no chemical bonds are formed during adhesion
contact area significantly exceeds molecular/atomic dimensions
The paper explains in quite some details how deviations from these conditions are often source of error, but also how they can be met to get an appropriate measurement.
So to conclude: the surface energy of a solid-solid system can be measured using AFM when taking into account that the assumptions of models used in data processing are thoroughly met. | {
"domain": "physics.stackexchange",
"id": 6747,
"tags": "measurements, interactions, surface-tension"
} |
How can i write a tf? | Question:
How can I write a tf?
Does anybody have an example of a tf?
Originally posted by programmer on ROS Answers with karma: 61 on 2014-01-22
Post score: 0
Original comments
Comment by ZdenekM on 2014-01-22:
What you mean by "write a tf"?
Comment by programmer on 2014-01-22:
I don't know but when i want to creating a map using hector slam, it needs a tf?
Comment by ZdenekM on 2014-01-22:
Is your laser fixed on a robot or are you using laser alone without anything else? It would be nice to make your question more detailed - more details will lead to better answers ;)
Comment by ZdenekM on 2014-01-22:
Did you check this tutorial: http://wiki.ros.org/hector_slam/Tutorials/SettingUpForYourRobot ?
Comment by programmer on 2014-01-22:
Thanks for your attention dear.
Yes, I read the tutorial but I don't understand it completely.
My laser is fixed on the robot by two Dynamixel motors (roll and pitch) to balance it with a gyro;
how can I write a tf with this setup?
Answer:
Ok, with laser scanner fixed on two servos you need following:
create appropriate URDF model (including robot, those servos / joints, your laser scanner)
publish current angles of servos to joint_states topic (sensor_msgs/JointState)
run robot_state_publisher which will transform joint state to TF
read carefully tutorial on how to setup hector_slam on your robot
Then, there will be TF transformation between laser scanner (laser_link in the tutorial) and robot base (base_link).
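To make step 1 concrete, a minimal URDF fragment for the two-servo laser mount could look like the sketch below — all link/joint names, axes and dimensions are hypothetical and must be adapted to the actual robot:

```xml
<robot name="balanced_laser">
  <link name="base_link"/>
  <link name="roll_link"/>
  <link name="pitch_link"/>
  <link name="laser_link"/>

  <!-- roll servo -->
  <joint name="roll_joint" type="revolute">
    <parent link="base_link"/>
    <child link="roll_link"/>
    <axis xyz="1 0 0"/>
    <limit lower="-1.57" upper="1.57" effort="1.0" velocity="1.0"/>
  </joint>

  <!-- pitch servo -->
  <joint name="pitch_joint" type="revolute">
    <parent link="roll_link"/>
    <child link="pitch_link"/>
    <axis xyz="0 1 0"/>
    <limit lower="-1.57" upper="1.57" effort="1.0" velocity="1.0"/>
  </joint>

  <!-- laser scanner rigidly attached to the pitch stage -->
  <joint name="laser_mount" type="fixed">
    <parent link="pitch_link"/>
    <child link="laser_link"/>
    <origin xyz="0 0 0.05" rpy="0 0 0"/>
  </joint>
</robot>
```

Publishing the roll_joint and pitch_joint angles on joint_states then lets robot_state_publisher broadcast the base_link → laser_link transform.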
Originally posted by ZdenekM with karma: 704 on 2014-01-22
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by programmer on 2014-01-22:
Thanks for your time. | {
"domain": "robotics.stackexchange",
"id": 16723,
"tags": "ros"
} |
Does this diagram represent several LSTMs, or one through several timesteps? | Question: I'm trying to read this paper describing Google's LSTM architecture for machine translation. It features this diagram on page 4:
I'm interested in the encoder block, on the left. Apparently, the pink and green cells are LSTMs. However, I can't tell if the x-axis is space or time. That is, are the LSTM cells on a given row all the same cell, with time flowing forward from left to right? The diagram on the next page in the paper seems to suggest that.
Answer:
are the LSTM cells on a given row all the same cell, with time flowing forward from left to right?
Yes this is correct
The x-axis on this figure is basically the time axis. Essentially all pink boxes in the same row are the same LSTM cell, with different inputs from the same sequence. At each timestep, the cell takes an input and produces an output which is fed to the next layer. At the 8-th layer, the outputs over all timesteps are inputted at the same time to the attention layer. | {
"domain": "ai.stackexchange",
"id": 2456,
"tags": "neural-networks, deep-learning, long-short-term-memory, papers, google"
} |
Electrons decaying into heavier particles | Question: Suppose we have a system of electrons in a very tightly confined space (like a tiny magnetic trap).
Let's say we continually increase the degree of confinement such that the electrons are confined into a smaller and smaller space, to the point where the degeneracy pressure reaches a scale comparable to the masses of the other charged leptons.
Does there come a point where it's favourable for an electron to "decay" into a heavier lepton due to degeneracy pressure (producing neutrinos and antineutrinos in the process)? My thought is that this process would be somewhat analogous to the formation of a neutron star (or hypothetical quark star).
What would happen if we kept confining the particles into a smaller and smaller space to increase the potential further? Would even heavier particles be produced?
Answer: If you confine the electrons such that they have a higher and higher number density then their Fermi energy will increase. The Fermi energy will first become relativistic ($ \gg$ 0.51 MeV) and ultimately may exceed the rest mass energy of a muon (105.7 MeV).
At that point it become energetically possible for "muonisation" to take place, for example:
$$ e \rightarrow \mu + \bar{\nu}_\mu + \nu_e $$
with the neutrinos escaping.
An equilibrium would then be set up where the chemical potentials of the electrons and muons were equal. If both behave as ideal fermion gases then we could say
$$ m_\mu c^2 (1 + x_\mu^2)^{1/2} = m_e c^2 (1 + x_e^2)^{1/2}\ , $$
where $x$ is the dimensionless Fermi momentum given by
$$ x_{\mu, e} = \left(\frac{3h^3}{8\pi}\right)^{1/3} \frac{n_{\mu,e}^{1/3}}{m_{\mu, e}c}\ .$$
This equation can be solved (for example) as a function of $n_e$ to give the ratio of $n_\mu/n_e$, which will increase with $n_e$.
This is a process thought to occur in neutron star interiors but requires densities about twice that of nuclear matter. The threshold in your setup would be found by equating the electron Fermi energy to 105.7 MeV, giving muonisation beginning at $n_e \simeq 5\times 10^{42}$ m$^{-3}$.
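The quoted threshold can be checked directly from the formula for $x_e$ above, by setting the electron Fermi energy equal to the muon rest energy and inverting for the density (a rough back-of-the-envelope sketch in SI units):

```python
import math

# Threshold for muonisation: set the electron Fermi energy
# m_e c^2 sqrt(1 + x_e^2) equal to the muon rest energy, then invert
# x_e = (3 h^3 / 8 pi)^{1/3} n_e^{1/3} / (m_e c) for the number density n_e.
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
m_e = 9.109e-31  # electron mass, kg
E_mu = 105.7     # muon rest energy, MeV
E_e = 0.511      # electron rest energy, MeV

x_e = math.sqrt((E_mu / E_e) ** 2 - 1.0)                # dimensionless Fermi momentum
n_e = (8.0 * math.pi / 3.0) * (m_e * c * x_e / h) ** 3  # electrons per m^3, ~5e42
```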
Further complications will follow when the Fermi energies exceed the rest mass energies of pions at $\sim$ 140 MeV and then kaons at $\sim 500$ MeV and $\rho$ mesons at $\sim 770$ MeV, since these could be produced (with the appropriate neutrinos to conserve lepton numbers).
If you were to keep pushing the densities higher, then the Fermi energies of the electrons and muons (which are equal) would reach the rest mass energy of a tau lepton (1777 MeV). At that point it becomes energetically feasible to create tau leptons from electrons and muons at the highest energies in their respective Fermi distributions.
The equilibrium concentrations could then again be worked out as a function of the electron number density but I think this is considerably more complicated because of the possible decay routes into mesons. One would also have to consider whether all particles are confined equally. Presumably neutral mesons would just escape along with the neutrinos.
Note that baryonic hadrons could not be created (conservation of baryon number).
EDIT:
@AXensen points out that electrons cannot simply decay into a muon in the way suggested above because it is not possible to conserve both energy and momentum. I think that is correct, one either needs reactions involving two electrons or an electron and a neutrino: e.g.
$$ \bar{\nu}_e + e \rightarrow \bar{\nu}_\mu + \mu $$
A two-electron process would need to produce two muons because otherwise space would have to be found at lower energies in the electron Fermi distribution, which is strongly disfavoured.
A process involving another neutrino requires a source of those neutrinos and there is nothing obvious if your plasma consists solely of a cloud of electrons.
In more realistic (neutral) plasmas then there can also be some baryons present. If so then that opens up many possible pathways to produce neutrinos (e.g. electron capture) and then muons and heavier particles as described above (this is indeed exactly what is inferred to happen inside a neutron star). | {
"domain": "physics.stackexchange",
"id": 97970,
"tags": "quantum-field-theory, pressure, electrons, pauli-exclusion-principle, leptons"
} |
Broken symmetries realized nonlinearly | Question: I'm trying to understand some concepts of spontaneous symmetry breaking, I'll write first the statement that I can't understand and later my questions.
STATEMENT
Consider a group $G$ and a subgroup $H \subset G$. In particular: If $V_a$ are the generators of $H$ and $A_a$ are the remaining generators of G,
then we can choose a representation of the group to take the form:
$g(ξ, u) = e^{iξ·A} e^{iu·V}$
where $\xi^a$ are the Goldstone bosons, $e^{iu·V} ∈ H$ and $e^{iξ·A} ∈ G/H$.
For a general $g ∈ G$, we have from closure of G that
$g\, e^{iξ·A} = e^{iξ′·A} e^{iu′·V},$
where $ξ′ = ξ′(ξ, g)$ and $u′ = u'(ξ, g)$ are analytic functions due the Lie group structure.
In this manner, as required, the Goldstone fields linearly realize the symmetries
of the preserved subgroup H and nonlinearly realize the remaining broken symmetries.
QUESTIONS
1.- How can I see that $e^{iξ·A} ∈ G/H$?
2.- What is the definition of linearly and nonlinearly realizing symmetries?
3.- How can I see from above that if $ g∈ H$ then the relation between $\xi$ and $\xi'$ is linear and otherwise it's nonlinear?
I'm new with group theory so I hope you can explain this extensively but still as formal as possible.
Answer:
The factors $e^{i\xi\cdot A}$ and $e^{iu\cdot V}$ are chosen this way so there is a unique label $(u,\xi)$ for each group element $g=e^{i\xi\cdot A}e^{iu\cdot V}$. To see that $\xi$ is a label for $G/H$, it's enough to check that $g$ and $gh$ are associated with the same value $\xi$ when $h\in H$. This is easy to see: set $h=e^{iu_h\cdot V}$, so that $gh=e^{i\xi\cdot A}e^{iu\cdot V}e^{iu_h\cdot V}=e^{i\xi\cdot A}e^{iu'\cdot V}$. Hence, $gh\mapsto (u',\xi)$.
Some remarks: (i) note that $G/H$ should be generally viewed as a topological quotient space (and not a quotient group), because $H$ is not always a normal subgroup. A simple example is the case $G=SO(3)$, $H=SO(2)\subset SO(3)$. The quotient space $SO(3)/SO(2)$ is homeomorphic to the 2-sphere, and spherical polar angular coordinates can be obtained naturally this way, but the 2-sphere is certainly not a group. (ii) there is some arbitrariness in the choice of representative $\xi(g)$. However, once a representative $\xi$ is chosen for $g$, it also represents $gh$ for all $h\in H$.
To see the difference between linearly and nonlinearly realized symmetries, it helps to think of symmetries as homomorphisms of the symmetry group into the space of functions of fields, with `multiplication' mapping to composition. For example, let $\vec\phi\in\mathcal{F}$ be a label for fields, and $G$ a symmetry group. Then in general, $G$ can act on $\vec\phi$ by choosing functions $f_g(\vec\phi):\mathcal{F}\rightarrow\mathcal{F}$ in such a way that $f_g(f_{g'}(\phi))=f_{gg'}(\phi)$. A linearly realized symmetry corresponds to the special case where $f_g(\phi)$ is a linear map for every $g\in G$.
The relationship between $\xi$ and $\xi'$ when $g\in H$ is not generally linear, it depends on your convention for $\xi(g)$. When $\xi$ is small, however, we can define $\xi$ to be `locally flat' coordinates for the tangent space of $G/H$ near some point $\phi_0$ (presumably the VEV of the field theory). In this case, the action of $H$ on $\xi$ is linear as long as $\xi$ is very small. This is easy to see in the example of spherical coordinates: here $\vec\phi_0$ is given by a unit vector $\hat n_0(\theta,\phi)$, and the rotation group $SO(2)$ is the set of rotations about the axis $\hat n_0$. This group acts linearly on the tangent space spanned by the angular tangent vectors $\hat\theta(\hat n_0)$, $\hat\phi(\hat n_0)$. In general, however, the multiplication $(u,\xi)(v,\zeta)=(u',\xi')$ involves solving a nonlinear equation for $u'$ and $\xi'$ in terms of $u$, $v$, $\xi$ and $\zeta$. This multiplication rule is generically nonlinear. | {
"domain": "physics.stackexchange",
"id": 26656,
"tags": "lie-algebra, symmetry-breaking"
} |
Question on D'Alembert's formula | Question: On page 3 of these lecture notes, it says:
$$u(x,t)= \frac 12[ f(x+ct)+ f(x-ct) ]+ \frac1{2c} \int_{x-ct}^{x+ct} g(y)\,dy .$$
This important expression is known as D'Alembert's formula. Letting $G$ denote the antiderivative of $g$ vanishing at $x=0$, we may write this as:
$$u(x,t)=\frac12\left[\left(f(x+ct)+\frac1c G(x+ct)\right)+\left(f(x-ct)-\frac1c G(x-ct)\right)\right], \qquad G'=g,\quad G(0)=0 . $$
My question is: why is it required that $G(0)=0$?
Answer: It does not look like this is necessary; since $G$ enters only through $\frac1c G(x+ct)-\frac1c G(x-ct)$, any additive constant in $G$ cancels, so $G(0)=0$ is presumably imposed just to make $G$ definite. | {
"domain": "physics.stackexchange",
"id": 47779,
"tags": "waves, boundary-conditions"
} |
What determines the apparent radius of the rainbow? | Question: Let's say I know how to compute the apparent radius of a rainbow from the viewpoint of the observer: take a photo of the scene, measure the distance to a known reference object, and its dimensions. Using triangle similarity, I can extrapolate the radius of the rainbow.
But my question is: which physical phenomenon determines the radius?
Answer: It depends on where the sun is. If it is near the horizon (behind you) and in front of you there are water droplets, then you will see a rainbow with a radius (in angular measure) of about 42 degrees, because each water droplet returns a cone of light, whose axis is parallel to the direction to the sun and whose aperture is roughly $2 \cdot 42 = 84$ degrees.
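That 42 degrees can be recovered from Snell's law alone: for one internal reflection the deviation is stationary at the impact parameter where $\cos i = \sqrt{(n^2-1)/3}$, and the angular radius of the bow is $4r - 2i$. A quick numerical check with $n \approx 1.33$ for water (an illustrative sketch, not part of the original answer's text):

```python
import math

n = 1.33  # refractive index of water (approximate)
# Minimum-deviation geometry for one internal reflection:
i = math.acos(math.sqrt((n * n - 1.0) / 3.0))  # incidence angle
r = math.asin(math.sin(i) / n)                 # refraction angle (Snell's law)
rainbow_deg = math.degrees(4.0 * r - 2.0 * i)  # angular radius of the primary bow
```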
I've never seen better explanations of dozens of phenomena concerning rainbows than in Walter Lewin's lectures. | {
"domain": "physics.stackexchange",
"id": 29187,
"tags": "visible-light, atmospheric-science"
} |
Usage of Dirac delta function in physics | Question: I know that the Dirac delta function is not really a function but a distribution.
It means that when we write $\int \delta(x-x_0) f(x) dx$ we want to say in fact $\langle\delta_{x_0},f\rangle$.
So the Dirac delta "eats" a function and not a number.
But sometimes in physics we don't have expression like :
$$\int \delta(x-x_0) f(x) dx $$
but just something like :
$$\delta(x-x_0) f(x)$$
without any integral around it.
I know how to interpret the last line physically (it will be $f(x_0)$ if $x=x_0$ and $0$ elsewhere). But how can we give it a mathematical sense? Indeed it is not a Dirac that "eats" $f$ here, but something else that I don't know how to give a mathematical sense to.
I insist on the fact that it's not a Kronecker symbol that we have (in my example we obtained this term by differentiating $\Theta(x-x_0)f(x)$ where $\Theta$ is the Heaviside function).
(I only know basic distribution theory)
Answer: If you have some text that deals with
$$
g(x) = f(x)\delta(x-x_0),
$$
with no integration, then $g(x)$ is also a distribution, to be used only in the form
$$
\langle g, h\rangle = \int g(x) h(x) \mathrm dx= \int f(x)\delta(x-x_0) h(x) \mathrm dx = f(x_0)h(x_0).
$$
In this particular example, it's easy to see that you can simply replace the $f(x)$ by $f(x_0)$, i.e. in the form
$$
g(x) = f(x_0)\delta(x-x_0),
$$
so $g(x)$ is just a delta function multiplied by a constant $f(x_0)$, but that's a bit secondary. The point, really, is that if you start handling distributions, then everything is a distribution (instead of a function) unless proven otherwise. | {
"domain": "physics.stackexchange",
"id": 39293,
"tags": "mathematics, dirac-delta-distributions"
} |
Add "1" and overwrite the previous value | Question: I have a cookie where a number is stored.
When I want to add "1" to the stored number and then overwrite it, I use this php code:
$temp = $_COOKIE['mycookie']++;
$_COOKIE['mycookie']=$temp;
Is it the best way to do that?
Answer: I think what you may be looking for is found on this page.
This is happening because of the ++ being used as a post-increment
instead of a pre-increment. Essentially what is happening is you're
saying, "set $cookie to the value of $_COOKIE['count'], and then
increment $_COOKIE['count']".
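The distinction, illustrated in JavaScript (whose ++ operators behave the same way as PHP's here; this is a sketch, not the PHP code itself):

```javascript
let a = 5;
const post = a++;   // post-increment: returns the old value, then increments
// a is now 6 while post kept the stale 5 -- the analogue of $temp in the question

let b = 5;
const pre = ++b;    // pre-increment: increments first, then returns the new value
// b and pre are both 6
```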
Go for ++$_COOKIE['mycookie']; instead and let us know how that works out. | {
"domain": "codereview.stackexchange",
"id": 8063,
"tags": "php"
} |