anchor | positive | source |
|---|---|---|
Variant VCF: AD vs DP? | Question: In my VCF file from GATK, I have the following definitions for AD and DP.
AD - Allelic depths for the ref and alt alleles in the order listed
DP - Approximate read depth (reads with MQ=255 or with bad mates are filtered)
I don't understand the definitions, can anybody explain in a less technical way?
From what I can read, AD gives the number of reads spanning the reference and variant allele. But what does DP mean? This doesn't look like the total number of reads spanning a variant, so what is this? How does this differ from AD?
Answer: DP is the total number of read bases spanning a particular position. If you add up the different AD, you should get a number close to DP, the difference being merely in how the reads are filtered in either set of numbers. | {
"domain": "biology.stackexchange",
"id": 5598,
"tags": "bioinformatics, variant"
} |
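As a small illustration of the relationship described in the answer above, here is a hedged sketch; the sample field and its FORMAT layout (GT:AD:DP) below are hypothetical values, not taken from any real VCF:

```python
# Hypothetical per-sample field from a GATK-style VCF, FORMAT = GT:AD:DP
sample = "0/1:12,15:30"
gt, ad, dp = sample.split(":")
ad_ref, ad_alt = (int(x) for x in ad.split(","))
ad_total = ad_ref + ad_alt

# The AD values summed give a number close to DP; the gap reflects the
# different read-filtering rules applied to the two annotations.
print(ad_total, int(dp))  # 27 30
```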
Trying to understand adding custom shapes to model editor | Question:
Problem:
I am trying to import a .dae custom shape I exported using Blender to the model editor in Gazebo and it's not loading my textures properly.
Context:
To try to understand what's going on I tried with a model that imports texture properly.
I have a construction cone model in ~/.gazebo/models/construction_cone
This folder contains two subfolders: a meshes subfolder containing a .dae file and a materials/textures subfolder with a .png texture file. It also contains a model.sdf file and a model.config file.
If I import this model using its .dae file into the model editor by clicking Add under Custom Shapes and browsing to the meshes directory, the textures are applied fine after importing the object.
Here is what I don't understand:
If I make an IDENTICAL COPY of the meshes and materials subfolders to a different location on my computer and repeat the same exact import process, I get no textures at all. I tried moving over the sdf and config files, having cloned exactly my ~/.gazebo/models/construction_cone to some other location, and the model editor is still not loading my textures. Where is the texture information located? What tells Gazebo where to find the texture? I thought the dae file would suffice, but it looks like it doesn't.
Originally posted by emile on Gazebo Answers with karma: 1 on 2019-05-14
Post score: 0
Answer:
I solved my problem by not using the Model Editor and using a text editor to write my model files instead. The model editor can be used to create a new SDF file that will define a model that can be imported in the world in Gazebo (outside of the model editor), but that generates a very lengthy SDF that I didn't fully understand.
The model editor appends a default blank gray material to any model by default, and this seems to overwrite whatever texture the DAE file is trying to map. I tried to delete the gazebo/grey texture in the GUI but I still had no textures.
What finally SOLVED my problem was to write my own simple SDF file where I didn't define any script for the texture/material.
Originally posted by emile with karma: 1 on 2019-05-15
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by kumpakri on 2019-05-17:
Nice solution. If that is the answer to your question, please, click the check-mark button on the left of your answer to accept it as the right answer. | {
"domain": "robotics.stackexchange",
"id": 4403,
"tags": "gazebo"
} |
Why is this compound more basic than the other? | Question: So, the original question was which compound is more basic.
Compound A and Compound B respectively.
My reasoning is that on protonation, Compound B would be more stable than Compound A, since Compound B forms an allylic nitrogen cation. Thus, Compound B should be more basic.
Why don't nitrogen cations act similarly to carbocations in this case?
But the given answer is that Compound A is more basic, because of being connected to an sp2-hybridized carbon. Another given reason was that the first cation was stabilized by water while the second one wasn't?
Where am I wrong?
Answer: What may be happening is that the A cation has a resonance structure where the $\pi$ bond polarizes to the nitrogen and thus turns the $sp^2$ carbon into a carbocation center (leaving the nitrogen formally uncharged). This carbocation contribution is enhanced by the $sp^2$ carbon also hyperconjugating with the carbon atom next to it in the ring. Also the B cation cannot form an allylic structure because protonation of the nitrogen atom in that compound saturates it. | {
"domain": "chemistry.stackexchange",
"id": 15973,
"tags": "organic-chemistry, stability"
} |
Definition of a ray? | Question: The typical definition of a ray, and the one that I was initially taught, is that a ray is a line perpendicular to the wave front. However, when reading up on birefringence it seems as though there are cases where the 'ray' itself is not perpendicular to the wave front (although the wave vector $\vec k$ is). So given this, how do we define a ray, and what determines the direction it points?
(If I were to take a guess, I would say it has something to do with the Poynting vector.)
Answer: I just found this source (1) which explains that rays are curves such that the direction a ray points is the same as that of the Poynting vector at a given point.
References
(1) p110 of 'Microwave Antenna Theory and Design' by S.Silver (link to google books) | {
"domain": "physics.stackexchange",
"id": 29992,
"tags": "optics, electromagnetic-radiation, terminology, geometric-optics"
} |
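A tiny sketch of the cited statement for a plane wave; the field values below are arbitrary assumptions (units ignored), chosen only to show that the ray direction follows the Poynting vector:

```python
import numpy as np

# Assumed plane-wave fields: E along x, H along y.
E = np.array([1.0, 0.0, 0.0])
H = np.array([0.0, 1.0, 0.0])

# The Poynting vector S = E x H gives the direction the ray points.
S = np.cross(E, H)
print(S)  # [0. 0. 1.] -- the ray travels along +z
```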
Conceptual question about entropy and information | Question: Shannon's entropy measures the information content by means of probability. Is it the information content or the information that increases or decreases with entropy? Increase in entropy means that we are more uncertain about what will happen next.
What I would like to know is if entropy increases, does this mean that information increases?
Suppose there are two signals: one is the desired signal and the other is the measurement signal. Let the error be the difference between the two; or the error can be the estimation error in the context of weight learning.
What can we infer if the entropy of this error term decreases? Can we conclude that the error is reducing and the system is behaving close to the desired signal's behavior?
I shall be grateful for these clarifications.
Answer: Information = Entropy = Surprise = Uncertainty = How Much You Learn By Making an Observation. They all increase or decrease together. The entropy of a random variable $X$ is just another number summarizing some quality of that random variable. Just like the mean of a random variable is the expected value of $X$ or the variance of a random variable is the expected value of $(X - \mu)^2$, the entropy is just the expected value of some function, $f(X)$ of the random variable $X$. You find expectations of functions by using
$\mathbb{E}[f(X)] = \sum_{x\in X}p(x)f(x).$ In this case the function of $X$ you care about is the negative log (base 2) of the probability mass function.
$$ H(X) = \mathbb{E}[-\log_2 P(X)] = -\sum_{x \in X} p(x) \log_2 p(x).$$
This particular expectation is useful because it doesn't depend on the actual values that $X$ can take on, just the probabilities of those values. So you can use it to talk about situations where you aren't sending numbers, or where the numbers are just arbitrarily assigned to particular messages or symbols that you need to send.
What you want for your second question is the conditional entropy of the measurement random variable $X$ given the random variable $Y$ that represents what was sent. When there is no error the conditional entropy will be 0, when there is error the conditional entropy will be greater than 0. | {
"domain": "cs.stackexchange",
"id": 3152,
"tags": "terminology, information-theory, entropy"
} |
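A minimal sketch of the entropy formula from the answer above, evaluated on a few assumed toy distributions:

```python
import math

def entropy(probs):
    """Shannon entropy H(X) = -sum p*log2(p); terms with p = 0 contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # 1.0 bit: a fair coin, maximum surprise
print(entropy([0.99, 0.01]))  # ~0.081 bits: nearly certain, little surprise
```

Note how the value depends only on the probabilities, not on what the outcomes are, exactly as the answer points out.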
Problem in getting correct coefficients of frictional forces in the Lagrangian equation | Question: I am getting correct equations on using the Lagrangian method in systems with no non-conservative forces, but when I use it in systems with friction, sometimes I get correct equations and sometimes I do not. Most of the equations have some problem with the coefficients of the frictional forces.
For example, let us take a look at this system........
Here $f_1,f_2$ are the frictional forces (and not the coefficients of friction).
Now, let the block with mass $m_2$ move through a distance $x$ toward the right.
Now, when we apply Newton's second law, we see that this is wrong and the coefficient of $f_1$ should have been $2$.
What is the problem? On the right-hand side I have written the generalized force, and the two Lagrangian terms on the left-hand side. Please help me out.
(Please do not downvote or close my question; if there is a problem, please put a comment.)
EDIT(Someone told me to add this):
Let $a$ be the acceleration of $m_2$ towards right. So we get the equations
$$m_1 g-T-f_1=2m_1a$$
and
$$2T-f_2=m_2a$$
from FBDs for blocks with masses $m_1$ and $m_2$ respectively.
From here, we get
$$a=\frac{2m_1g-2f_1-f_2}{4m_1+m_2}$$
Answer: Your solution using Newton's 2nd Law is correct. Your mistake is incorrectly writing the generalised force corresponding to $f_1$.
The Generalised Force is defined as
$$Q_j = \sum\limits_{i=1}^N F_i \bullet (\frac{\partial r_i}{\partial q_j})$$
$F_i \bullet \delta r_i$ is the work done during a change $\delta q_j$ in the co-ordinate $q_j$. Here $r_1, r_2$ are the displacements of $m_1, m_2$. You have defined $q_j=r_2=x$ so $r_1=2x$. The missing coefficient of 2 in the Lagrange Equation follows from this. | {
"domain": "physics.stackexchange",
"id": 48235,
"tags": "newtonian-mechanics, forces, classical-mechanics, lagrangian-formalism, friction"
} |
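A quick numerical cross-check of the Newtonian result quoted in the question, with assumed illustrative values for the masses, $g$, and the friction forces (any positive numbers would do):

```python
import numpy as np

# Assumed illustrative values, not from the original problem statement.
m1, m2, g, f1, f2 = 1.0, 2.0, 9.8, 1.0, 2.0

# Solve for the unknowns [a, T] from the two FBD equations:
#   m1*g - T - f1 = 2*m1*a   (block m1, which accelerates at 2a)
#   2*T - f2      = m2*a     (block m2)
A = np.array([[2.0 * m1, 1.0],
              [-m2, 2.0]])
b = np.array([m1 * g - f1, f2])
a, T = np.linalg.solve(A, b)

closed_form = (2 * m1 * g - 2 * f1 - f2) / (4 * m1 + m2)
print(abs(a - closed_form) < 1e-9)  # True: the system reproduces the closed form
```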
Exercises in computational complexity | Question: I am trying to get better at proofs and gain a deeper understanding of the concepts of computational complexity. Unfortunately, so far, with no success.
In order to get more intuition, I decided to do more exercises, but most of them are still difficult for me.
I am looking for exercises with solutions in the field of computational complexity. Sometimes course pages have homework assignments with solutions.
If you are aware of any decent course on computational complexity with exercises and solutions on the course page, please let me know.
Answer: I took a course held by Jiri Srba a few years ago on basic complexity and computability theory, and it is good, I would say. The second part (Lectures 9 to 15) goes through time and space complexity, shows some important results in the field, and has PDFs of both exercises and solutions for each lecture.
It is based on the book "Introduction to the Theory of Computation" by Michael Sipser, which also has a good walkthrough of both topics in general.
Link: https://intranet.cs.aau.dk/education/courses/2010/cc/course-plan/ (owned by Aalborg University)
Good luck! | {
"domain": "cs.stackexchange",
"id": 783,
"tags": "complexity-theory"
} |
Apparent inconsistency when reading off connection one-forms from Cartan's structural equations | Question: The connection one-forms can be defined as follows: let $(M,g)$ be spacetime, $U\subset M$ an open set and $\{e_a\}$ an orthonormal basis of vector fields on $U$ with dual basis $\{\omega^a\}$ of one-forms. We define the connection one-forms as follows. First define the connection coefficients by
$$\nabla e_a=\Gamma^b_{ca}e_b\otimes \omega^c,$$
then define the connection one-forms
$$\theta^b_a=\Gamma^b_{ca}\omega^c.$$
Thus the connection one-forms are defined so that
$$\nabla_X e_a=\theta^b_a(X)e_b.$$
We can thus show that if the basis is orthonormal then $\theta^a_b=-\theta^b_a$.
We then have Cartan's first structural equation:
$$d\omega^a=\omega^b\wedge \theta^a_b.$$
I've heard that this can be used as a method to compute the connection one-forms: find $\omega^a$, then take the exterior derivatives, expand in terms of $\{\omega^a\}$, and read off the terms as above.
Now in the case of Schwarzschild I'm running into an inconsistency. The metric is of the form
$$ds^2=f^2 dt^2-g^2 dr^2-h^2 (d\theta^2+\sin^2\theta d\phi^2)$$
We thus have
$$\omega^0=fdt,\quad \omega^1=gdr,\quad \omega^2=hd\theta,\quad \omega^3=h\sin\theta d\phi.$$
On the other hand we can compute
$$d\omega^0=-\frac{f'}{fg}\omega^0\wedge \omega^1, \quad d\omega^1=0$$
Now it would seem from $d\omega^0$ that
$$\theta^0_1=-\frac{f'}{fg}\omega^0$$
but from $d\omega^1$ it seems $\theta^1_0=0$; however, $\theta^0_1=-\theta^1_0$, which leads to an inconsistency.
So what am I missing here? Why am I running into this inconsistency?
Answer: First off, note that it is $\theta_{ab} = -\theta_{ba}$ with both indices in covariant position, and we have let the metric act to lower the connection form indices: $\theta_{ab} = g_{ac}\theta^c{}_b$. This follows, as you may know, from the metricity of the connection and by choosing a frame such that the metric components are constant functions (rigid frame). The distinction is important, although irrelevant to this particular mistake, because even in an orthonormal frame, using e.g. the $(+---)$ sign convention, we have $\theta^0{}_1 = \theta_{01} = -\theta_{10} = \theta^1{}_0$.
Your mistake is that $\theta^1{}_0 = 0$ does not follow from $d\omega^1 = 0$. What does follow is that $\theta^1{}_0 = A\omega^0$ for some function $A$, which is, as you can see, precisely what you get from your expression for $d\omega^0$. In effect, you neglect the fact that the wedge product between two parallel one-forms vanishes. | {
"domain": "physics.stackexchange",
"id": 47886,
"tags": "homework-and-exercises, general-relativity, differential-geometry, metric-tensor"
} |
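To spell out the last point in the conventions above (a sketch, assuming one checks that for this metric each $\theta^1{}_b$ is proportional to the corresponding $\omega^b$, with coefficient functions $A$, $B$, $C$ introduced here for illustration): every term in the structural equation for $d\omega^1$ then contains a wedge of a one-form with itself,

$$d\omega^1=\omega^b\wedge\theta^1{}_b=\omega^0\wedge\left(A\,\omega^0\right)+\omega^2\wedge\left(B\,\omega^2\right)+\omega^3\wedge\left(C\,\omega^3\right)=0,$$

which vanishes for any $A$, $B$, $C$. So $d\omega^1=0$ places no constraint forcing $\theta^1{}_0=0$.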
Machine Learning - Precision and Recall - differences in interpretation and preferring one over the other | Question: I have summarised this from a lot of blogs about Precision and Recall.
Precision is:
Proportion of predicted positives that are actually positive,
meaning out of the samples identified as positive by the classifier, how many are actually positive?
and Recall is:
Proportion of actual positives that were predicted as positive correctly,
meaning out of the ground-truth positives, how many were identified correctly by the classifier as positive?
That sounded very confusing to me. I couldn't interpret the difference between the two or relate each to real examples. Some very small questions about interpretation I have are:
If avoiding false positives matters the most to me, I should be measuring precision; and if avoiding false negatives matters the most to me, I should be measuring recall. Is my understanding correct?
Suppose I am predicting whether a patient should be given a vaccine that, when given to a healthy person, is catastrophic and hence should only be given to an affected person; I can't afford giving the vaccine to healthy people. Assuming positive stands for should-give-vaccine and negative for should-not-give-vaccine, should I be measuring the precision or the recall of my classifier?
Suppose I am predicting whether an email is spam (+ve) or non-spam (-ve), and I can't afford a spam email being classified as non-spam, meaning I can't afford false negatives. Should I be measuring the precision or the recall of my classifier?
What does it mean to have high precision (> 0.95) and low recall (< 0.05)? And what does it mean to have low precision (< 0.05) and high recall (> 0.95)?
Put simply, in what kinds of cases is it preferable or a good choice to use Precision over Recall as a metric, and vice versa? I get the definitions but I can't relate them to real examples to decide when one is preferable over the other, so I would really like some clarification.
Answer: To make sure everything is clear let me quickly summarize what we are talking about. precision and recall are evaluation measures for binary classification, in which every instance has a ground truth class (also called gold standard class, I'll call it 'gold') and a predicted class, both being either positive or negative (note that it's important to clearly define which one is the positive one). Therefore there are four possibilities for every instance:
gold positive and predicted positive -> TP
gold positive and predicted negative -> FN (also called type II errors)
gold negative and predicted positive -> FP (also called type I errors)
gold negative and predicted negative -> TN
$$Precision=\frac{TP}{TP+FP}\ \ \ Recall=\frac{TP}{TP+FN}$$
In case it helps, I think a figure such as the one on the Wikipedia Precision and Recall page summarizes these concepts quite well.
About your questions:
If avoiding false positives matters the most to me, I should be measuring precision; and if avoiding false negatives matters the most to me, I should be measuring recall. Is my understanding correct?
Correct.
Suppose I am predicting whether a patient should be given a vaccine that, when given to a healthy person, is catastrophic and hence should only be given to an affected person; I can't afford giving the vaccine to healthy people. Assuming positive stands for should-give-vaccine and negative for should-not-give-vaccine, should I be measuring the precision or the recall of my classifier?
Here one wants to avoid giving the vaccine to somebody who doesn't need it, i.e. we need to avoid predicting a positive for a gold negative instance. Since we want to avoid FP errors at all cost, we must have a very high precision -> precision should be used.
Suppose I am predicting whether an email is spam (+ve) or non-spam (-ve), and I can't afford a spam email being classified as non-spam, meaning I can't afford false negatives. Should I be measuring the precision or the recall of my classifier?
We want to avoid false negative -> recall should be used.
Note: the choice of the positive class is important, here spam = positive. This is the standard way, but sometimes people confuse "positive" with a positive outcome, i.e. mentally associate positive with non-spam.
What does it mean to have high precision (> 0.95) and low recall (< 0.05)? And what does it mean to have low precision (< 0.05) and high recall (> 0.95)?
Let's say you're a classifier in charge of labeling a set of pictures based on whether they contain a dog (positive) or not (negative). You see that some pictures clearly contain a dog so you label them as positive, and some clearly don't so you label them as negative. Now let's assume that for a large majority of pictures you are not sure: maybe the picture is too dark, blurry, there's an animal but it is masked by another object, etc. For these uncertain cases you have two possible strategies:
Label them as negative, in other words favor precision. Best case scenario, most of them turn out to be negative so you will get both high precision and high recall. But if most of these uncertain cases turn out to be actually positive, then you have a lot of FN errors: your recall will be very low, but your precision will still be very high since you are sure that all/most of the ones you labeled as positive are actually positive.
Label them as positive, in other words favor recall. Now in the best case scenario most of them turn out to be positive, so high precision and high recall. But if most of the uncertain cases turn out to be actually negative, then you have a lot of FP errors: your precision will be very low, but your recall will still be very high since you're sure that all/most the true positive are labeled as positive.
Side note: it's not really relevant to your question, but the example of spam is not very realistic for a case where high recall is important. Typically high recall is important in tasks where the goal is to find all the potential positive cases: for instance a police investigation to find everybody who may have been at a certain place at a certain time. Here FP errors don't matter since detectives are going to check afterwards, but FN errors could mean missing a potential suspect. | {
"domain": "datascience.stackexchange",
"id": 11109,
"tags": "machine-learning, classification, data-mining, multiclass-classification"
} |
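A minimal sketch of the four counts and the two formulas from the answer above, applied to assumed toy labels (1 = positive):

```python
def precision_recall(y_true, y_pred):
    """Precision = TP/(TP+FP), Recall = TP/(TP+FN), with 1 = positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fp), tp / (tp + fn)

# A cautious classifier: it predicts positive only once, and is right.
y_true = [1, 1, 1, 1, 0]
y_pred = [1, 0, 0, 0, 0]
print(precision_recall(y_true, y_pred))  # (1.0, 0.25): high precision, low recall
```

Flipping the strategy (predicting positive for everything uncertain) produces the mirror image: high recall, low precision.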
How to check if a given point is inside a polygon with holes? | Question: How to check if a given point lies inside or outside a polygon with holes?
Does the algorithm below work for a polygon with holes?
https://www.geeksforgeeks.org/how-to-check-if-a-given-point-lies-inside-a-polygon/
Answer: Yes, basically.
That algorithm is called ray casting algorithm, also known as the crossing number algorithm or the even–odd rule algorithm.
Why is it correct? "The algorithm is based on a simple observation that if a point moves along a ray from infinity to the probe point and if it crosses the boundary of a polygon, possibly several times, then it alternately goes from the outside to inside, then from the inside to the outside, etc. As a result, after every two 'border crossings' the moving point goes outside. This observation may be mathematically proved using the Jordan curve theorem." Note this reasoning works in all basic situations, whether the polygon is convex or not and whether the polygon has holes or not.
However, you should be aware of degenerate cases as well as the case of a hole inside a hole. For an example of a degenerate case, consider a degenerate polygon consisting of two triangles that intersect only at their single common vertex; then the algorithm may fail when the ray goes through that common vertex. For the case of a hole inside a hole, you have to define the area inside the inner hole as being inside the polygon so as to not invalidate the algorithm.
It is actually not easy to make a rigorous statement mathematically about the general situations when the ray casting algorithm works. For this particular question, let me just say the ray casting algorithm will work for a polygon with separate polygon holes inside it where all line segments of the outer polygon and inner polygons do not intersect each other. | {
"domain": "cs.stackexchange",
"id": 12564,
"tags": "algorithms, computational-geometry"
} |
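As a sketch of the even-odd ray-casting idea applied to a polygon with a hole (the vertex lists below are made-up; degenerate cases like rays through vertices are not handled):

```python
def point_in_polygon(point, rings):
    """Even-odd ray casting. `rings` is the outer boundary plus any holes,
    each a list of (x, y) vertices; a point inside a hole counts as outside."""
    x, y = point
    inside = False
    for ring in rings:
        n = len(ring)
        for i in range(n):
            (x1, y1), (x2, y2) = ring[i], ring[(i + 1) % n]
            # Does a horizontal ray to the right of (x, y) cross this edge?
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
    return inside

outer = [(0, 0), (10, 0), (10, 10), (0, 10)]
hole = [(4, 4), (6, 4), (6, 6), (4, 6)]
print(point_in_polygon((2, 2), [outer, hole]))  # True: inside, clear of the hole
print(point_in_polygon((5, 5), [outer, hole]))  # False: inside the hole
```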
Considering "gene" is countable, can anyone give a concrete example of "a gene"? | Question: "Gene" is a countable noun, but people always say "genes", so what is A gene?
For instance, each chromosome contains many genes; so which part of the chromosome of Escherichia coli could be A gene?
Answer: There are about 20,000 genes in the human genome. Here are the first 5 genes (following alphabetical order) on chromosome 1:
AADACL3: Arylacetamide deacetylase-like 3
AADACL4: Arylacetamide deacetylase-like 4
ACADM: acyl-Coenzyme A dehydrogenase, C-4 to C-12 straight chain
ACTL8: Actin-like 8
ADGRL2 (1p31.1): adhesion G protein-coupled receptor L2 | {
"domain": "biology.stackexchange",
"id": 9963,
"tags": "genetics, genomics, gene, genomes"
} |
How does phase carry structural information about the function? | Question: Suppose you are on a railway platform and you hear the sound of a train coming towards you. Now, using the Fourier transform, we can convert the time-domain function (here take the sound as the function) into the frequency domain.
I have heard that the location (structural) information of a function in the time domain is tightly coupled with the phase information in the frequency domain. So can anybody help me to understand the above statement given in bold letters with the help of the above example?
Also, can we tell whether the train is coming towards you or going away from you from the phase information of the sound function?
Answer: To keep things simple, let's talk about plane acoustic waves in one dimension.
If we solve the wave equation in one dimension, we find that the acoustic pressure as a function of space and time is of the form
$$P(x,t) = Ae^{i(kx -\omega t)}$$
where $A$ is the maximum amplitude, $x$ and $t$ are the displacement and time respectively, $\omega$ is the angular frequency, and $k= \frac{2\pi}{\lambda}$ is the wavenumber (where $\lambda$ is the wavelength). $i$ is the imaginary unit.
Notice that both the spatial and temporal variation of the acoustic pressure are information carried in the phase (the exponentiated term), not the magnitude (amplitude).
Now let's talk about Fourier transforms. One can take a Fourier transform from the time domain to the frequency domain, or vice versa. In our simple example, there is only one frequency, but real sound is usually composed of a mix of frequencies (like human speech for instance). So the Fourier transform from the time to the frequency domain is key in understanding what frequencies a complex acoustic signal is composed of.
But note something else. As is suggested by the exponential form above, we can equally well do a Fourier transform from the spatial domain to the wavenumber domain, or vice versa. And this fact, turns out to be key in analyzing the spatial structure of a signal in terms of the superposition of wavenumbers (which, remember, are related to wavelengths) found in that signal.
As just a simple example, imagine we wanted to make a line array of microphones that is sensitive to signals coming from a certain direction. It turns out that if we apply weight vectors to each microphone to "steer" it to a certain direction $\theta$, the solution looks an awful lot like the expression for a spatial-wavenumber Fourier transform:
$$
D(\theta) = \sum_{n=0}^{N-1} w_n \exp\left(iknd \sin(\theta) \right).
$$
where $w_n$ are the weights applied to each microphone, $n$ just is an integer that labels each microphone, and $d$ is the microphone spacing.
So to sum it up, the phase carries both spatial and temporal information about the signal, and the respective Fourier transforms carry information about the frequency structure and wavenumber structure of the signal. | {
"domain": "physics.stackexchange",
"id": 23138,
"tags": "acoustics, wavefunction, fourier-transform, signal-processing"
} |
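A small numerical illustration of the point that location information lives in the phase: shifting a signal in time changes only the phase of its spectrum, never the magnitudes. All signal parameters below are assumed toy values.

```python
import numpy as np

n = 128
t = np.arange(n)
signal = np.exp(-((t - 30) ** 2) / 20.0)  # a bump located at t = 30
shifted = np.roll(signal, 25)             # the same bump moved to t = 55

mag_equal = np.allclose(np.abs(np.fft.fft(signal)), np.abs(np.fft.fft(shifted)))
phase_equal = np.allclose(np.angle(np.fft.fft(signal)), np.angle(np.fft.fft(shifted)))
print(mag_equal, phase_equal)  # True False: location lives entirely in the phase
```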
A Few Questions About The Way A Linear Polarizer Filters Light | Question: I'm trying to wrap my head around how polarizers work out of curiosity and I have a few questions.
Let's say I have a polarizer whose bars are vertical. Some sources claim that it lets only vertical waves pass through, because vertical waves get through the small slits. Other sources contradict this, claiming that vertical waves are actually the ones getting absorbed by the bars and don't make it through. Which theory is correct?
Given an unpolarized wave composed of many polarized electric fields, I kind of "can" decompose them into components parallel and perpendicular to the bars, and I "can" claim the components parallel to the bars get through (assuming the first option in 1 is correct!). Is this wrong simply because the decomposition isn't a valid move here? After all, the waves with a perpendicular component physically shouldn't get through.
Why does nobody talk about the magnetic fields in relation to polarization? Is this because only the electric field is related to vision? Even so, does the polarizer affect the magnetic fields?
Is there a simple reason that can help one understand why reflective surfaces (like water or ice) polarize light? And how do you determine the direction of propagation of the electric field after polarization?
Thanks in advance.
Answer: $1$. For a wire-grid polarizer, the light polarized perpendicular to the wires will pass through. This counterintuitive result is because any electric field polarized parallel to the wires would excite currents in the wires, thereby reflecting/absorbing the light. But for electric fields perpendicular to the wires, the confinement of the electrons pushes the plasma oscillation frequency too high, effectively suppressing the AC conductivity of the wires at the light frequency. Thus, perpendicular polarized light basically doesn't see the metal, and it is transmitted.
$4$. The reason why smooth dielectric interfaces (such as water or glass) polarize light is due to the fact that Maxwell's equations impose different boundary conditions for in-plane magnetic and electric fields. Thus s-polarized light at a boundary (in-plane electric field) reflects differently than p-polarized light (in-plane magnetic fields). So if you start with unpolarized light incident at an angle, you'll get at least partially polarized light in reflection/transmission. | {
"domain": "physics.stackexchange",
"id": 49632,
"tags": "electromagnetism, polarization"
} |
Could I reflect a commercial laser off of the reflectors left on the moon by the astronauts? | Question: Would an ordinary laser pointer reach the reflectors placed on the moon? I want to reflect a laser off of the moon. How powerful must my laser be?
Answer: For sure the lasers used by the agency are strong, and, importantly, the detection of the return from the reflector is difficult even for the agency:
Even under good viewing conditions, only a single reflected photon is received every few seconds. This makes the job of filtering laser-generated photons from naturally occurring photons challenging.
For sure an ordinary laser cannot work. | {
"domain": "physics.stackexchange",
"id": 90706,
"tags": "quantum-mechanics, laser"
} |
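To get a feel for why an ordinary pointer is hopeless, here is a heavily hedged back-of-the-envelope sketch. Every parameter below is an assumed round number (not a measured value), and real-world losses such as atmosphere, optics, and pointing error are ignored entirely:

```python
import math

wavelength = 532e-9     # green pointer (m), assumed
power = 5e-3            # 5 mW pointer (W), assumed
divergence = 1e-3       # typical pointer beam divergence (rad), assumed
dist = 3.84e8           # Earth-Moon distance (m)
reflector_area = 0.1    # retroreflector array area (m^2), assumed
reflector_size = 0.3    # array width (m), assumed; sets return diffraction
telescope_area = 1.0    # receiving aperture (m^2), assumed

# Fraction of the beam that hits the reflector at all
spot_radius = divergence * dist
uplink = reflector_area / (math.pi * spot_radius ** 2)
# The return beam spreads again by diffraction from the small reflector
return_radius = (wavelength / reflector_size) * dist
downlink = telescope_area / (math.pi * return_radius ** 2)

photons_per_s = power / (6.626e-34 * 3.0e8 / wavelength)
detected = photons_per_s * uplink * downlink
print(detected)  # fractions of a photon per second, BEFORE real-world losses
```

Even under these generous assumptions the detectable rate is tiny, which is why the professional stations use large pulsed lasers, telescopes, and time-gated single-photon detection.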
In nitration, why does fluorobenzene give fewer products in the ortho position than chlorobenzene? | Question: Is it because, due to the electronegativity of fluorine, its mesomeric effect will be less pronounced, which means the electron density in the ortho positions will be lower than for chlorobenzene?
Answer: Halogens are deactivating groups even though they show a $+M$ mesomeric effect. The inductive effect is stronger than the mesomeric effect. As the halogens show a $-I$ inductive effect, they are deactivating groups.
When it comes to selectivity, the mesomeric effect plays a role. As the halogens show $+M$ mesomeric effect, the resonance structure has higher electron density at ortho and para positions. Therefore, halogens are ortho-para directing.
When you take the inductive effect into account for its directing properties, the para position is more favorable for electrophilic reagents because it has a higher electron density compared to the ortho position. The strength of the inductive effect decreases with distance. As the ortho position is closer to the halogen, the inductive effect is stronger at this position. Therefore, the major product of electrophilic substitution reactions for haloarenes is the para-substituted product.
Fluorine is more electronegative than chlorine and shows a higher tendency to withdraw electrons towards itself than chlorine. Therefore, fluorine shows a stronger inductive effect than chlorine. As fluorine reduces the electron density at the ortho position more than chlorine, you get a lesser amount of ortho substituted product with fluorine. | {
"domain": "chemistry.stackexchange",
"id": 7899,
"tags": "organic-chemistry"
} |
Does rosdoc exist in electric? | Question:
I found when I run rosdoc, it shows:
sam@sam:~/code/ros/topic/basic_topic$ rosdoc
rosdoc: command not found
sam@sam:~/code/ros/topic/basic_topic$
Should I install rosdoc from source?
Thank you~
Originally posted by sam on ROS Answers with karma: 2570 on 2012-03-02
Post score: 0
Answer:
I'm on diamondback, and there I usually run rosdoc with rosrun rosdoc rosdoc, since rosdoc itself is not in the path the way rosmake etc. is. I hope this applies for electric as well.
Originally posted by Chris L with karma: 61 on 2012-03-03
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by joq on 2012-03-03:
It does.
Comment by kwc on 2012-03-03:
You also need to install the "documentation" stack, e.g. "sudo apt-get install ros-electric-documentation" | {
"domain": "robotics.stackexchange",
"id": 8473,
"tags": "ros, rosdoc"
} |
How to calculate excluded volume in Onsager's hard-rod model? | Question: Can somebody please provide a derivation of how to calculate the excluded volume of two rods with an angle of intersection $\gamma$? The rods are cylinders capped with semi-spheres. Onsager's theory of hard rods is based on this, and I can't seem to find a derivation of the excluded volume.
Answer: In order to calculate the excluded volume between two spherocylindric rods, the relative angle between them changes the base area of the excluded volume, usually taken as a parallelogram, and the thickness of rods, i.e. $D$ the height of the excluded volume. Consider the picture below, taken from (Basic Concepts for Simple and Complex Liquids, from Jean-Louis Barrat, Jean-Pierre Hansen)
As you can see, we have a base in the form of a parallelogram with area $L^2 \sin\gamma$ and with thickness $2D$, excluded volume: $$V_{ex}=2L^2D|\sin \gamma|.$$
The $2$ comes from the fact that the volume $L^2D|\sin \gamma|$ is excluded for the other rod on both sides. | {
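As a quick numerical sanity check of the formula above (a Python sketch; the function name is my own, not part of the original answer):

```python
import math

def excluded_volume(L, D, gamma):
    """Leading-order Onsager excluded volume of two spherocylinders of
    length L and diameter D whose axes cross at angle gamma:
    V_ex = 2 * L^2 * D * |sin(gamma)|."""
    return 2 * L**2 * D * abs(math.sin(gamma))

# Perpendicular rods exclude the most volume; for parallel rods
# (gamma = 0) this leading-order term vanishes.
print(excluded_volume(1.0, 0.1, math.pi / 2))  # 0.2
```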
"domain": "physics.stackexchange",
"id": 17715,
"tags": "phase-transition, density-functional-theory, liquid-crystal"
} |
Using Ramda in point-free way to transform data into new format | Question: Recently I asked on SO about using point free methods to rearrange some data
The idea was to turn data in this format:
const data = [
{
timeline_map: {
"2017-05-06": 770,
"2017-05-07": 760,
"2017-05-08": 1250,
}
}, {
timeline_map: {
"2017-05-06": 590,
"2017-05-07": 210,
"2017-05-08": 300,
}
}, {
timeline_map: {
"2017-05-06": 890,
"2017-05-07": 2200,
"2017-05-08": 1032,
}
}
]
Into this:
const hope = [
["2017-05-06", 770, 590, 890],
["2017-05-07", 760, 210, 2200],
["2017-05-08", 1250, 300, 1032],
]
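For reference, the same reshaping can be sketched imperatively in Python (an illustration only, independent of the Ramda code in this question; the function name is my own):

```python
def pivot(rows, key="timeline_map"):
    # Group values by date, seeding each output row with the date itself.
    out = {}
    for row in rows:
        for date, value in row[key].items():
            out.setdefault(date, [date]).append(value)
    # Sort by date to make the row order explicit.
    return [out[d] for d in sorted(out)]

data = [
    {"timeline_map": {"2017-05-06": 770, "2017-05-07": 760, "2017-05-08": 1250}},
    {"timeline_map": {"2017-05-06": 590, "2017-05-07": 210, "2017-05-08": 300}},
    {"timeline_map": {"2017-05-06": 890, "2017-05-07": 2200, "2017-05-08": 1032}},
]
print(pivot(data))
# [['2017-05-06', 770, 590, 890], ['2017-05-07', 760, 210, 2200], ['2017-05-08', 1250, 300, 1032]]
```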
The answers I got seemed kind of verbose to me, however (though I've no idea if they are better or worse than mine; I haven't checked performance yet), so I spent more time studying Ramda and came up with a solution of my own that I like a little better. I've only been doing this a little over a week, though, so I'm sure it can be improved.
My code:
const datesValuesReducer = (accum, curr) => {
if (accum.hasOwnProperty(curr[0])) {
accum[curr[0]] = accum[curr[0]].concat(curr[1])
} else {
accum[curr[0]] = [curr[0], curr[1]]
}
return accum
}
const res = R.pipe(
R.pluck('timeline_map'),
R.map(R.toPairs),
R.flatten,
R.splitEvery(2),
R.reduce(datesValuesReducer, {}),
R.values
)
console.log(res(data))
Three points of possible concern:
1) Omitting R.flatten and R.splitEvery(2) gets pretty close to what is being output directly from R.map(R.toPairs), so maybe there is a way to deal with that data more directly and those methods can be omitted.
2) datesValuesReducer is pretty complex, maybe it could be simplified. I'm not sure if having the accum and curr means this solution isn't entirely "point free". Thoughts?
3) Also, I favor pipe over compose; I just find it reads more naturally. Maybe someone has some opinions about that.
JSBIN
Answer: I like your pipeline approach in general!
1) Omitting R.flatten and R.splitEvery(2) gets pretty close to what is being output directly from R.map(R.toPairs), so maybe there is a way to deal with that data more directly and those methods can be omitted.
Yup, you want something like R.chain here.
R.chain(R.toPairs)
is equivalent here to
R.pipe(
R.map(R.toPairs),
R.flatten,
R.splitEvery(2)
)
effectively performing a non-recursive flatten on the output of the chained function. Clojure calls this mapcat.
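Outside Ramda, the same map-then-flatten-one-level idea (Clojure's mapcat) can be sketched in a few lines of Python (illustration only):

```python
from itertools import chain

def mapcat(f, xs):
    # Map f over xs, then flatten exactly one level -- Clojure's mapcat,
    # Ramda's R.chain for lists.
    return list(chain.from_iterable(map(f, xs)))

maps = [{"2017-05-06": 770}, {"2017-05-06": 590}]
print(mapcat(lambda m: list(m.items()), maps))
# [('2017-05-06', 770), ('2017-05-06', 590)]
```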
2) datesValuesReducer is pretty complex, maybe it could be simplified. I'm not sure if having the accum and curr means this solution isn't entirely "point free". Thoughts?
Yeah, named arguments are essentially the "points" you're trying to avoid in the pointfree style.
I see two directions here. One is to shorten the function but add more points via destructuring:
R.reduce((acc, [k, v]) => R.assoc(k, (acc[k] || [k]).concat(v), acc),
{})
The other is to fall further into the Ramda world and do something like this:
const f = R.pipe(
R.pluck('timeline_map'),
R.chain(R.toPairs),
R.reduceBy(R.flip(R.useWith(R.append, [R.nth(1)])), [], R.head),
R.toPairs,
R.map(R.unnest)
)
(The last two functions could probably be turned into a reduce as well if you're determined.)
This is fully pointfree, but seems kinda deliberately obtuse, even if the reduceBy reducer were to be pulled out into a named function. If you want an opinion I'd say stick the entire thing in a function that takes in the base keyword (eg. 'timeline_map'), and then use whatever you find most readable inside that function, without striving for total pointfree purity.
If you want more opinions, arguably there's a sweet spot of, say, 60-70% pointfree where you gain the most from the style. Beyond that, things can become increasingly contorted, and you end up with, well, R.reduceBy(...R.useWith... | {
"domain": "codereview.stackexchange",
"id": 26777,
"tags": "javascript, functional-programming, ramda.js"
} |
Where's *the* ROS executable? | Question:
I'm used to running eclipse, chrome.exe and gimp by just clicking on an icon.
Now I read all this stuff about ROS nodes and launch files...
Is there a way to just click an icon and make my robot run?
Originally posted by Alex Bravo on ROS Answers with karma: 901 on 2011-02-15
Post score: 2
Answer:
Short Answer: No.
Longer Answer:
ROS isn't just an executable, it's a platform. ROS is made up of many smaller components that do everything from interfacing with hardware, path planning, localization, filtering, and decision making.
In order to run a robot, you have to pick and choose which of these parts (divided into stacks and nodes) apply to the project that you are working on. Each of these nodes is programmed in some mixture of C/C++/Python, with a few other languages mixed in. While a node may have some executables associated with it, they are generally designed to operate as part of a complete system.
You typically put all of these parts together in a launch file, which then systematically launches all of the nodes that you need for your application. These are launched in conjunction with a ROS core, which keeps everything communicating and synchronized.
On another point, the common "face" of ROS is RVIZ, which is part of the visualization stack. This is what provides the pretty GUI that you see in many YouTube videos and images. RVIZ is displaying the underlying data that the nodes are passing back and forth to each other.
Originally posted by mjcarroll with karma: 6414 on 2011-02-15
This answer was ACCEPTED on the original site
Post score: 17 | {
"domain": "robotics.stackexchange",
"id": 4751,
"tags": "ros, roslaunch, node, gui"
} |
what must i add to the cmakefile.txt | Question:
hi
I want create a node that uses a library that is installed in my pc.
To compile i typed:
gcc -o myprogram myprogram.c -l rt -l bcm2835
what must i add to the cmakefile.txt?
thanks
Originally posted by mrpiccolo on ROS Answers with karma: 36 on 2013-01-28
Post score: 0
Answer:
As in any other cmake project, you'd put target_link_libraries(${PROJECT_NAME} <NAME_OF_LIBRARY_TO_LINK>) in your CMakeLists.txt. There wasn't any ros-specific cmake macro for using an installed library in a node in rosbuild, and if I understand correctly, the new catkin build system does not use ANY ros-specific macros. I haven't actually used catkin yet, though. Some useful links:
cmake documentation
catkin page
rosbuild cmake API (for pre-groovy distros)
Originally posted by thebyohazard with karma: 3562 on 2013-01-28
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 12606,
"tags": "ros"
} |
What is a 3rd-order Fresnel lens? | Question: The NPR News article and podcast Job Opening: Seeking Historian With Tolerance For Harsh Weather, The Occasional Bear talks about the Split Rock Lighthouse. That Wikipedia article links to the image title 3rd-order_Fresnel lens at Split Rock which in turn cites the Flickr image Split Rock Lighthouse; 3rd-order, Fresnel lens at Split Rock Lighthouse.
As I understand them, the prism surfaces of Fresnel lenses have either flat-cut faces or curved faces that each match a best-form lens's profile at that radius.
Question: What is a 3rd-order Fresnel lens? What aspect of the lens is considered to be third order?
LakeSuperior.com's Lighthouses of Western Lake Superior seems to address "third order" somewhat, but I don't understand the limited explanation except that first order is better than seventh order, which seems counterintuitive.
When the U.S. Coast Guard deeded Split Rock Lighthouse to the state of Minnesota and the Minnesota Historical Society in 1971, it did something unusual … it left the classic Fresnel lens in place.
Amazing in design and beauty, Fresnel lenses, introduced by French physicist Augustin Jean Fresnel in 1822, were a technological leap for lighthouse beacons, significantly multiplying the lights’ range. Measured in seven orders from the most powerful First Order to the weakest Sixth Order (there is a Three-and-a-half Order), nothing larger than a Second Order was used on Lake Superior.
The Split Rock lens is a Third Order Fresnel, comprised of 252 cut-glass prisms. It measures 7 feet across, 5 feet high and weighs 2-1/2 tons. The prisms are mounted in a brass framework and the clamshell-shaped lens revolves around a central light source, floating on about 250 pounds of mercury. This revolution, driven by the original, hand-cranked clockwork mechanism, causes it to “flash” to passing ships once every 10 seconds when in use.
Answer: The "order" doesn't seem to be a physical term: it's just a word used in a classification of lighthouse lenses by size. See in Wikipedia's article Fresnel lens; Lighthouse lens sizes:
Fresnel produced six sizes of lighthouse lenses, divided into four orders based on their size and focal length. In modern use, these are classified as first through sixth order. An intermediate size between third and fourth order was added later, as well as sizes above first order and below sixth.
A first-order lens has a focal length of 920 mm (36 in) and a maximum diameter 2590 mm (8.5 ft) high. The complete assembly is about 3.7 m (12 ft) tall and 1.8 m (6 ft) wide. The smallest (sixth-order) has a focal length of 150 mm (5.9 in) and an optical diameter 433 mm (17 in) high.
There's also a table of sizes in the article linked above, shown partially below:
Order Focal Length(mm) Height(meters)
------ ---------------- --------------
Sixth 150 0.433
Fifth 182.5 0.541
Fourth 250 0.722
3 1/2 375
Third 500 1.576
Second 750 2.069
First 920 2.59
Mesoradial 1125
Hyperradial 1330 | {
"domain": "physics.stackexchange",
"id": 57482,
"tags": "optics, geometric-optics, lenses"
} |
Turning an array of words into a random license plate | Question: This is an assignment question from school:
Write a method called licencePlate that takes an array of objectionable words and returns a random licence plate that does not have any of those words in it. Your plate should consist of four letters, a space, then three numbers.
import java.util.*;
class MethodAssign5{
static String licensePlate(String[] a){
char[] lchars={'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z'};
char[] nchars={'0','1','2','3','4','5','6','7','8','9'};
boolean contain = true;
char[] license = new char[8];
license[4]=' ';
Random generator = new Random();
do{
for(int i=0;i<4;i++){
license[i]=lchars[generator.nextInt(26)];
}
for(int i=0;i<3;i++){
license[i+5]=nchars[generator.nextInt(10)];
}
contain = false;
for(String s:a){
boolean same = true;
char[] wchars = s.toCharArray();
for(int i=0;i<4;i++){
if(license[i]!=wchars[i]){
same = false;
}
}
if(same==true){
contain = true;
}
}
}while(contain==true);
String ans = new String(license);
return ans;
}
public static void main(String[]args){
String[] words ={"HAHA","GORD"};
System.out.println(licensePlate(words));
}
}
As you can see, I have no idea if my solution is right, since the probability of actually generating a banned word is so low.
What do you think about my solution? What should I do if this was an in-class test and had the same problem where the probability of getting the error is low?
Answer: A few suggestions, and then I'll get to how to test it:
Don't be afraid of whitespace -- your code is a little hard to read due to the lack of new lines. Also, it's fairly standard to indent method names inside of classes.
Always use descriptive names: String[] a tells me nothing about what that parameter is.
When x is a bool, x == true is equivalent to x.
On an extremely picky note, contains seems like a more natural variable name for contain (grammatically anyway)
It doesn't really matter when there's only 4 characters (so 4 * A.length runs), but when doing a linear comparison, you should typically bail out of it with break or continue
Also, you might want to look into indexOf, contains and equalsIgnoreCase
Obviously this is just a little homework assignment, so it would be a bit overkill, but I would put the license plate generation in its own class in a real application.
I would also pass the banned words as a parameter to the constructor instead of to a method -- that allows your instance to carry around the words without any farther down consumers having to know what the words are
Imagine that you make a machine that's used in the DMV. Imagine that this machine has a "generate license plate" button. When pressed, a screen simply shows the plate.
Passing the words to the method is like having to key in the banned words every time the button is pressed -- the DMV employee shouldn't be responsible for knowing/controlling those words
Storing the words inside of an instance (whether set with a setBannedWords method or passed to the constructor) is like configuring the machine at creation. The DMV employee no longer has to know what the words are, or be responsible for entering them. Years could go by before an employee realized "Oh my, I just noticed that the machine has never output a cuss word!"
For completeness: A third option would be to not filter the words -- it would then be up to the employee to go "Hrmmm, 'FART 743' is probably a bad license plate."
Ok, so does this work? Probably. It looks correct to my rusty-with-Java-eye anyway.
The easiest way to test it would probably be to whittle down the set of possible characters from A-Z to A-D and then make ABC a bad word. After a lot of test runs, you could be fairly certain that it was throwing out any ABCX or XABC.
This just made me realize: did your professor specify that the bad words would always be four characters? If not, your code is wrong.
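To see how a substring check covers banned words of any length, and how shrinking the alphabet makes the filter observable in tests, here is a hedged Python sketch (names and the restricted alphabet are my own, not the assignment's Java):

```python
import random

LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
DIGITS = "0123456789"

def random_plate(rng, letters=LETTERS):
    # Four letters, a space, three digits.
    return ("".join(rng.choice(letters) for _ in range(4))
            + " " + "".join(rng.choice(DIGITS) for _ in range(3)))

def clean_plate(banned, rng, letters=LETTERS):
    # Substring check handles banned words of any length, not just 4.
    while True:
        plate = random_plate(rng, letters)
        if not any(word in plate for word in banned):
            return plate

# Shrink the alphabet so collisions are frequent enough to exercise the filter.
rng = random.Random(0)
plates = [clean_plate({"ABC"}, rng, letters="ABCD") for _ in range(1000)]
assert all("ABC" not in p for p in plates)
```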
So how could this be tested more properly-ish? Unfortunately it requires that the code become a lot longer. Your professor would probably think you got a bit carried away if you truly made this testable. Despite a fairly simple requirement, to test this with granularity would require breaking it down into quite a few pieces.
I can code this up if you want, but for the sake of brevity and time, I'll just describe it here.
The main problem with testing this is that all of the concerns are mixed together. You have a few concerns:
Generating a license plate
Generating the text part
Generating the number part
Filtering a license plate
Note that separating this concerns allows you to test them independently.
"Does my class properly generate and filter license plates?" becomes "Does class X properly generate license plates? Does class Y properly filter license plates?"
The second option is easier to test because it allows you to pass in a license plate generator that is rigged to generate bad words.
The text and number generation should arguably be separated. That would allow for easily reusing generated numbers when a bad word is in the word part, but other than that, the added complexity would probably not be worth it.
One option would be to use an interface:
public interface LicensePlateGenerator {
public String generateLicensePlate();
}
You could then have your classes implement it:
public class LicensePlateGeneratorRandom implements LicensePlateGenerator {
private final char[] lchars = {'A', 'B', 'C', ...};
private final char[] nchars = {'0', '1', '2', ...};
public String generateLicensePlate() {
//Randomly grab 4 lchars
//Randomly grab 3 nchars
}
}
public class LicensePlateGeneratorFiltered implements LicensePlateGenerator {
private final LicensePlateGenerator gen;
private final List<String> badWords;
public LicensePlateGeneratorFiltered(LicensePlateGenerator generator, List<String> badWords) {
this.gen = generator;
this.badWords = badWords;
}
public String generateLicensePlate() {
//It might be a good idea to test how many plates have already been generated
//An infinite loop of generating could happen depending on the underlying generator
//and what kind of badWords are defined
String lp;
do {
lp = gen.generateLicensePlate();
} while (isBad(lp));
return lp;
}
private boolean isBad(String lp) {
//Return false if lp contains any badWords
//and true otherwise
}
}
Note how this is transparent to any consuming class. A consuming class doesn't need to know whether it has a LicensePlateGeneratorRandom or LicensePlateGeneratorFiltered; it just needs to know it has a LicensePlateGenerator.
Note that this allows you to test very easily. Testing your random generator means just checking a few outputs. Checking your filter could be done by rigging a generator:
(I would probably define this as an anonymous class in actual testing code, but my Java is too rusty to remember the syntax for that :p)
public class LicensePlateGeneratorFake implements LicensePlateGenerator {
private final List<String> words;
private Iterator<String> currentPos;
public LicensePlateGeneratorFake(List<String> words) {
this.words = words;
currentPos = this.words.iterator();
}
public String generateLicensePlate() {
if (!currentPos.hasNext()) {
//Something should happen here...
//could always just loop back around, or it
//might also be worth considering having the interface
//declare a certain exception as being possible in this method.
//(That would also allow a way out of the infinite loop in the
//(filter generator)
//throw new LicensePlateGeneratorException("...");
//(that could be the base class, and then a LicensePlateGeneratorInfiniteLoopException
//could be thrown in the filter class)
}
//Obviously the end of the iterator should be checked, but I'm lazy
return currentPos.next() + " 123";
}
}
You then configure this fake generator to generate a certain stream of words ("ABCD", "DCBA", etc). Then, you pass this fake generator to the filter, and give the filter a list of words you know that the fake generator will generate.
If you tell the fake generator to generate "ABCD" and you tell the filter to reject "ABCD", you can then test success by whether or not the filter generator returns "ABCD" in any of the generations.
An alternative approach would be to have the filter not be a generator. Instead, your consuming code would be responsible for configuring its own filters and then generating plates until a suitable one is found.
Both designs have fairly strong pros and cons.
(Note: I completely bastardized the class names in my examples. In a real application, namespaces should be used. The code formatting on here makes namespaces a bit clumsy though, so I didn't use them.) | {
"domain": "codereview.stackexchange",
"id": 2999,
"tags": "java, homework, random"
} |
How do you prove these string/number radix encoding/decoding algorithms work? | Question: A while back I learned of these great algorithms:
function parseInt(value, code) {
let x = 0
let i = 0
while (i < value.length) {
const a = value[i]
x = x * code.length + code.indexOf(a)
i++
}
return x
}
function toString(value, code) {
const radix = code.length
let result = ''
do {
const digit = value % radix
result = code[digit] + result
value = Math.floor(value / radix)
} while (value)
return result
}
which allow you to convert toString any (albeit 32-bit) number to a string using a custom "alphabet" or "code", and do the reverse and parse a corresponding string to int.
console.log(parseInt('dj', 'abcdefghijklmnopqrstuvwxyz0123456789+-'));
// 123
console.log(toString(123, 'abcdefghijklmnopqrstuvwxyz0123456789+-'));
// dj
console.log(parseInt('a', 'abcdefghijklmnopqrstuvwxyz0123456789+-'));
// 0
console.log(toString(0, 'abcdefghijklmnopqrstuvwxyz0123456789+-'));
// a
How do you prove that these algorithms actually do the trick? What does the proof look like from base principles? Tangent/bonus: How would you ever go about figuring this out from scratch? To me this is pure magic and I would never have been able to figure this out "from scratch". I would like to know what I could've done to figure this out using mathematical techniques/principles.
Answer: First, you need to figure out what it is you want to prove. Here, I'd say we want to prove that
toString(parseInt(x, code),code) = x
and
parseInt(toString(x, code),code) = x
It's pretty obvious that trying to prove this is going to fail if:
we overflow the result in parseInt, because then parseInt isn't surjective
the input in parseInt contains characters that aren't in the code
So, our proof has to include the assumption that both are not the case.
The first proof is probably easier than the second. We can do induction on the length of 'value':
toString(parseInt(x, code),code) = x
Given an arbitrary code which contains all characters in x
If x.length == 0:
x == ''
parseInt(x, code) returns 0
toString(0, code) returns '' <- actually it returns 'a' so the proof breaks here, but you can do the same if you assume length 1
If x.length > 1
our induction hypothesis: for all prefixes y of x toString(parseInt(y, code),code) = y
parseInt(x, code) = parseInt(x[:-1], code) * code.length + code.indexOf(x[-1]) (I am using python syntax here to say everything but the last element, and the last element. This in and of itself would have to be proved I guess but it would be very easy if the algorithm was written recursively instead of iteratively)
toString(a * code.length + b, code) = code[b] + toString(a, code) (again, this would be straightforward if the algorithm was recursive, but it should be convincing enough that this is the case here)
that means toString(parseInt(x,code),code) = toString(parseInt(x[:-1], code) * code.length + code.indexOf(x[-1]), code) = code[code.indexOf(x[-1])] + toString(parseInt(x[:-1], code), code) = (using induction hypothesis) x[-1] + x[:-1] which breaks our proof because toString has a bug and inverts the string, as far as I can tell.
Assuming the line result = code[digit] + result actually read result = result + code[digit] like I think it should:
toString(a * code.length + b, code) = toString(a, code) + code[b] (again, this would be straightforward if the algorithm was recursive, but it should be convincing enough that this is the case here)
that means toString(parseInt(x,code),code) = toString(parseInt(x[:-1], code) * code.length + code.indexOf(x[-1]), code) = toString(parseInt(x[:-1], code), code) + code[code.indexOf(x[-1])] = (using induction hypothesis) x[:-1] + x[-1] = x
QED.
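The proved direction is also easy to spot-check mechanically. Here is a Python transcription of the two JavaScript functions (my own port, for illustration) with a brute-force test of parseInt(toString(x, code), code) = x:

```python
def parse_int(value, code):
    x = 0
    for ch in value:
        x = x * len(code) + code.index(ch)
    return x

def to_string(value, code):
    radix = len(code)
    result = ""
    while True:  # emulates the do/while: always emit at least one digit
        result = code[value % radix] + result
        value //= radix
        if value == 0:
            break
    return result

code = "abcdefghijklmnopqrstuvwxyz0123456789+-"
assert all(parse_int(to_string(n, code), code) == n for n in range(5000))
```

As the proof's aside notes, the opposite direction only holds for strings without leading 'a' (i.e. without leading zeros): to_string(parse_int("aadj", code), code) gives back "dj", not "aadj".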
The other direction is similar, you can again use induction as long as you take care that your induction hypothesis is "for all y < x: parseInt(toString(y, code),code) = y". | {
"domain": "cs.stackexchange",
"id": 16974,
"tags": "algorithms, proof-techniques, strings, correctness-proof"
} |
How to disable collision between a link and a newly added collision object? | Question:
I have a scene with UR10 arm mounted on wall. I also have a table. I have this entire scene configured using moveit setup assistant.
I launch the scene. I then run a node(part_spawner) which spawns a mesh on the table(I am doing this by loading a mesh and adding this mesh as a collision object to planning_scene_interface). My rviz screen properly loads it and spawns it, as expected rviz also shows that there is a collision between the table and the mesh. But when I check for the collisions in the same node(part_spawner) using planning_scene.CheckCollision(collision_request, collision_result, copied_state, allowed_collision_matrix), the collision_result does not return any contact between table and mesh. I'm guessing this should ideally return a pair of contact (table, mesh)
My goal is to manually disable this specific collision by adding the pair (table, mesh) to the Allowed Collision Matrix using allowedCollisionMatrix.setEntry('table', 'mesh', true). But the collision is not being detected in the first place.
Is this the right way of doing it?
Is there a better way to do it?
Originally posted by srujan on ROS Answers with karma: 32 on 2020-12-16
Post score: 0
Answer:
Yes, you describe the correct way of disabling collision checking between two bodies.
But this post contains the unspoken question "Why are collisions not detected for my newly added collision object?", which needs to be answered: I suspect that the PlanningScene object that you are calling checkCollision on is not the same as the one that RViz displays, and that is why you are not seeing the collision. You can use a PlanningSceneMonitor to get the newest scene, or use the /get_planning_scene service (this is probably easier). Here is some boilerplate code for the latter, which you can copy from:
bool SkillServer::updatePlanningScene()
{
moveit_msgs::GetPlanningScene srv;
// Request only the collision matrix
srv.request.components.components = moveit_msgs::PlanningSceneComponents::ALLOWED_COLLISION_MATRIX;
if (get_planning_scene_client.call(srv))
{
ROS_INFO("Got planning scene from move group.");
planning_scene_ = srv.response.scene;
return true;
}
else
{
ROS_ERROR("Failed to get planning scene from move group.");
return false;
}
}
Regarding a better way to enable/disable collisions: Internally I am using this extension of the planning_scene_interface that defines the functions allowCollisions and disallowCollisions and connects them to the Python interface as well, but we did not merge this upstream because the updating method is not strictly safe. If you are confident that it does not affect your application, you can use those changes as well.
Related question: https://answers.ros.org/question/359898/faster-way-to-disable-and-enable-collision-check-between-gripper-and-object/
Originally posted by fvd with karma: 2180 on 2020-12-16
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by srujan on 2020-12-18:
Thanks for the answer @fvd . I used this post as a reference to get this working
Comment by fvd on 2020-12-18:
Thanks for adding the reference. Don't forget to accept the answer to take the question out of the queue, too. | {
"domain": "robotics.stackexchange",
"id": 35883,
"tags": "ros, moveit, planning-scene"
} |
What do "e" "-" "C" and "E" mean in this output? | Question: I have given an input of this protein sequence:
MEPVDPRLEPWKHPGSQPKTACTTCYCKKCCFHCQVCFTTKALGISYGRKKRRQRRRPPQGSQTHQVSLSKQPTSQPRGDPTGPKE
from this website along with the option of COBEpro. Now the output sent by this site to my email is as follows:
Name: temp_prot
Amino Acids:
MEPVDPRLEPWKHPGSQPKTACTTCYCKKCCFHCQVCFTTKALGISYGRKKRRQRRRPPQGSQTHQVSLSKQPTSQPRGDPTGPKE
Predicted Continuous B-cell Epitopes:
MOST LIKELY EPITOPES:
0.82848577 39 TKALGIS CCCCCEE eee-e-e
0.82036375 39 TKALGI CCCCCE eee-e-
0.76503265 38 TTKALGI ECCCCCE eeee-e-
0.73178638 73 TSQPRGDP CCCCCCCC -eeeeeee
…
and a few more. Now can you tell me, if I consider the very first result, I was able to find TKALGIS in the parent sequence but could not find CCCCCEE. What is this CCCCCEE?
And what does this eee-e-e mean?
Answer: CCCEEE etc. are the secondary structural elements. In this case, the C refers to non-strand and non-helix regions i.e loop regions rather than a coiled region. The C or E usually refers to whether the residue is coiled (C) or part of a strand (E). H would be used to denote a helix, however, in the question, it appears that there are no helices. These letters are often different in different software: it's merely a point of semantics between different software.
e or - refers to if the amino acid at that position is exposed or not (- = buried).
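Since the three strings are positionally aligned, reading the output is just a matter of zipping them together (a small Python illustration of my own; the legend follows the explanation above):

```python
epitope = "TKALGIS"
structure = "CCCCCEE"  # C = coil/loop, E = strand, H = helix
exposure = "eee-e-e"   # e = exposed, '-' = buried

legend = {"C": "coil", "E": "strand", "H": "helix"}
for residue, ss, acc in zip(epitope, structure, exposure):
    state = "exposed" if acc == "e" else "buried"
    print(f"{residue}: {legend[ss]}, {state}")
# T: coil, exposed ... S: strand, exposed
```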
For the method, here is the COBEpro paper, although it does seem very technical. The help page has more easily accessible information on the output. | {
"domain": "biology.stackexchange",
"id": 3441,
"tags": "molecular-biology, bioinformatics, proteins, database"
} |
Calculate the Output of Linear Time Invariant System Given it Impulse Response | Question: A filter is defined as $ h \left[ n \right] = \delta \left[ n \right] - \delta \left[ n - 1 \right] $.
Given a signal $ x \left[ n \right] $ defined as:
$$ x \left [ n \right ] = \begin{cases}
1 & \text{ if } n \geq 0 \\
0 & \text{ if } n < 0
\end{cases} $$
Let $ y \left[ n \right] = \left( x \ast h \right) \left[ n \right] $.
What is the value of $ y \left[ -1 \right], \, y \left[ 0 \right], \, y \left[ 1 \right], \, y \left[ 2 \right] $?
Answer: The Discrete Delta Function, $ \delta \left[ n \right] $ is the identity operator of Linear Time Invariant Systems.
Moreover, since it is an LTI system, we can compute the output for each element of the filter by itself.
So the first element of the filter, $ \delta \left[ n \right] $, just outputs the signal itself.
The other element $ \delta \left[ n - 1 \right] $ just shifts the input signal.
Since the input is 1 for any $ n \geq 0 $, the output $ y \left[ n \right] = x \left[ n \right] - x \left[ n - 1 \right] $ subtracts 1 from 1 everywhere except at $ n = 0 $, where we subtract 0 from 1.
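This first-difference behaviour is easy to verify numerically (a small Python check, illustration only):

```python
def x(n):
    return 1 if n >= 0 else 0  # unit step input

def y(n):
    # (x * h)[n] with h[n] = delta[n] - delta[n-1] reduces to x[n] - x[n-1]
    return x(n) - x(n - 1)

print([y(n) for n in (-1, 0, 1, 2)])  # [0, 1, 0, 0]
```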
Hence the solution is 0, 1, 0, 0. | {
"domain": "dsp.stackexchange",
"id": 6446,
"tags": "filters, linear-systems, homework"
} |
General Retry Strategy #3 with TryResult | Question: I wanted to use the Try helper from @DmitryNogin's General Retry Strategy #2 but each attempt to implement it revealed another thing that is missing.
In my review I mentioned that it would be useful to be able to log the exceptions but later I came to the conclusion that perhaps it would be better to implement something similar to the ParallelLoopResult - this is how the TryResult was formed.
public struct TryResult
{
public TryResult(bool isCompleted, IEnumerable<Exception> exceptions) : this()
{
IsCompleted = isCompleted;
Exceptions = exceptions;
}
public bool IsCompleted { get; private set; }
public IEnumerable<Exception> Exceptions { get; private set; }
}
Then I added it to the base class:
public abstract class Try
{
public static readonly Try Never = new Never();
public static readonly Try Once = Retry(delay: 0, times: 0, ratio: 0);
public static Try Retry(int delay, int times, double ratio) =>
RetryAfter(from i in Enumerable.Range(0, times)
select delay * Math.Pow(ratio, i) into d
select (int)d);
public static Try RetryAfter(params int[] delays) => RetryAfter(delays.AsEnumerable());
public static Try RetryAfter(IEnumerable<int> delays) => new Retry(delays);
public void Execute(Action action) => Execute(action, CancellationToken.None);
public abstract TryResult Execute(Action action, CancellationToken cancellationToken);
public Task ExecuteAsync(Action action) => ExecuteAsync(action, CancellationToken.None);
public Task ExecuteAsync(Action action, CancellationToken cancellationToken) =>
ExecuteAsync(() => { action(); return Task.CompletedTask; }, cancellationToken);
public Task ExecuteAsync(Func<Task> action) => ExecuteAsync(action, CancellationToken.None);
public abstract Task<TryResult> ExecuteAsync(Func<Task> action, CancellationToken cancellationToken);
}
and adjusted the implementations:
class Never : Try
{
public override TryResult Execute(Action action, CancellationToken cancellationToken)
{
return new TryResult(true, Enumerable.Empty<Exception>());
}
public override Task<TryResult> ExecuteAsync(Func<Task> action, CancellationToken cancellationToken)
=> Task.FromResult<TryResult>(new TryResult(true, Enumerable.Empty<Exception>()));
}
the same for Retry:
class Retry : Try
{
IEnumerable<int> Delays { get; }
public Retry(IEnumerable<int> delays)
{
Delays = delays;
}
public override TryResult Execute(Action action, CancellationToken cancellationToken)
{
var exceptions = new List<Exception>();
foreach (var delay in Delays)
{
try
{
action();
return new TryResult(true, exceptions);
}
catch (Exception ex)
{
exceptions.Add(ex);
cancellationToken.WaitHandle.WaitOne(delay);
cancellationToken.ThrowIfCancellationRequested();
}
}
return new TryResult(false, exceptions);
}
public override async Task<TryResult> ExecuteAsync(Func<Task> action, CancellationToken cancellationToken)
{
var exceptions = new List<Exception>();
foreach (var delay in Delays)
{
try
{
await action();
return new TryResult(true, exceptions);
}
catch (Exception ex)
{
    exceptions.Add(ex); // record the failed attempt so Exceptions is actually populated
    await Task.Delay(delay, cancellationToken);
}
}
return new TryResult(false, exceptions);
}
}
Answer: Let’s put the cart before the horse – I mean Catch before the Try :)
class Program
{
static void Main(string[] args)
{
using (new Catch<InvalidOperationException>())
using (new Catch<FormatException>(ex => Console.WriteLine("Oops!")))
Try.Retry(100, 3, 3).Execute(() =>
{
Console.WriteLine("Trying!");
throw new FormatException();
});
}
}
These using Catch statements define exceptions we can tolerate and continue trying. All other exceptions will be interpreted as critical. We can also rethrow or log in the optional handler.
Library classes:
public class Catch<TException> : Catch
where TException : Exception
{
public Catch()
: this(ex => { })
{
}
public Catch(Action<TException> handler)
{
Handler = handler;
}
protected internal override bool HandleCore(Exception ex)
{
if (ex is TException)
{
Handler(ex as TException);
return true;
}
if (Previous == null)
throw ex;
else
return Previous.HandleCore(ex);
}
Action<TException> Handler { get; }
}
And:
public abstract class Catch : Ambient<Catch>
{
public static void Handle(Exception ex) =>
Current?.HandleCore(ex);
protected internal abstract bool HandleCore(Exception ex);
}
And:
public abstract class Ambient<T> : IDisposable where T : Ambient<T>
{
static readonly string Id = typeof(T).FullName;
protected static T Current
{
get { return (T)CallContext.LogicalGetData(Id); }
set { CallContext.LogicalSetData(Id, value); }
}
protected Ambient()
{
Previous = Current;
Current = (T)this;
}
public void Dispose() => Current = Previous;
protected T Previous { get; }
}
And we also need to update Retry strategy:
public override void Execute(Action action, CancellationToken cancellationToken)
{
foreach (var delay in Delays)
try
{
action();
return;
}
catch(Exception ex)
{
Catch.Handle(ex); // <- NEW LINE
cancellationToken.WaitHandle.WaitOne(delay);
cancellationToken.ThrowIfCancellationRequested();
}
action();
}
UPDATE
Let's have this for the Log class - note Log.IfFail():
class Log : Ambient<Log>
{
public static Exception IfFail(Action action)
{
try
{
action();
return null;
}
catch(Exception ex)
{
Write(ex.Message);
return ex;
}
}
public static void Write(string line) =>
Current?.WriteCore(line);
public Log(string fileName)
{
FileName = fileName;
}
void WriteCore(string line) => File.AppendAllText(FileName, line);
string FileName { get; }
}
And:
public static class Error
{
public static void Rethrow(this Exception ex)
{
if (ex != null)
throw ex;
}
}
So we can have TryExecute:
public override bool TryExecute(Action action, CancellationToken cancellationToken)
{
foreach (var delay in Delays)
try
{
Log.IfFail(action).Rethrow();
return true;
}
catch(Exception ex)
{
Catch.Handle(ex);
cancellationToken.WaitHandle.WaitOne(delay);
cancellationToken.ThrowIfCancellationRequested();
}
return Log.IfFail(action) == null; // yes, ugly :) let's invent something better
}
We can pass more context for logging at the moment of ambient log construction.
UPDATE #2 :)
public static class Error
{
public static void Rethrow(this Exception ex)
{
if (ex != null)
throw ex;
}
public static bool ToTryResult(this Exception ex)
{
return ex == null;
}
}
And:
public override bool TryExecute(Action action, CancellationToken cancellationToken)
{
foreach (var delay in Delays)
try
{
Log.IfFail(action).Rethrow();
return true;
}
catch(Exception ex)
{
Catch.Handle(ex);
cancellationToken.WaitHandle.WaitOne(delay);
cancellationToken.ThrowIfCancellationRequested();
}
return Log.IfFail(action).ToTryResult();
} | {
"domain": "codereview.stackexchange",
"id": 21896,
"tags": "c#, design-patterns, error-handling"
} |
Energy change when heating a substance | Question: When a substance is heated but not changing its phase, is the potential energy between the particles constituting the substance also increasing, or is it only the random kinetic energy of particles that increases?
Answer: The potential energy between the particles constituting the substance depends basically on the average distance between particles and on the nature of the interaction (ionic, dipole-dipole, etc.). For many systems, this variation is orders of magnitude smaller than the increase in the kinetic energy of the particles.
However, thermal expansion might indeed change the average distance between particles, and a temperature increase might favour disorientation of electric dipoles, contributing to an increase in the potential energy of the arrangement. In this sense, you are right.
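A rough order-of-magnitude comparison (using an idealised classical monatomic model; the numbers are illustrative, not from the answer above): by equipartition, the kinetic part grows as
$$\Delta E_\mathrm{kin} \approx \tfrac{3}{2} N k_B \Delta T$$
while the potential part changes only through the small shift in average spacing caused by thermal expansion, $\Delta a/a \approx \alpha \Delta T$ with $\alpha \sim 10^{-5}\ \mathrm{K^{-1}}$ for typical solids, so a 100 K temperature rise changes interparticle distances, and hence the potential-energy landscape, by only about 0.1%.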
"domain": "physics.stackexchange",
"id": 76536,
"tags": "thermodynamics, energy, potential-energy"
} |
Why is Sagittarius A* called so? | Question: At the center of the Milky Way, we have a supermassive black hole. It's name is "Sagittarius A*". I was wondering that why is it called so? I mean, why "A*" why not just "A". Is there a naming system for super massive black holes?
Thanks!
Answer: Check this wiki link out. "The name Sgr A* was coined by Brown in a 1982 paper because the radio source was "exciting", and excited states of atoms are denoted with asterisks."
There is no unified naming system for black holes. They are usually named after their host galaxy. Others are identified by the name of the survey in which they were observed. A few black holes are catalogued by their constellation and the order in which they were discovered.
EDIT: The 1982 paper by Brown and Lo can be found here. | {
"domain": "astronomy.stackexchange",
"id": 2606,
"tags": "supermassive-black-hole, naming"
} |
How long does it take for a white dwarf to cool to a black dwarf? | Question: I was reading on white dwarfs, and I came across this sentence—
Without energy sources, the white dwarf cools to a black dwarf in a few billion years.[1]
However, when I looked into the Wikipedia page on White dwarf, it says
Because the length of time it takes for a white dwarf to reach this state is calculated to be longer than the current age of the universe (approximately 13.8 billion years), it is thought that no black dwarfs yet exist.
So which is true?
And what is the proper definition of a black dwarf?
References:
[1] Introductory Astronomy and Astrophysics. Zellik, M. Gregory, S. 4th edition. Brooks/Cole. 1998
Answer: I think what you need is here on the Wikipedia. In section "Radiation and cooling," it says "The rate of cooling has been estimated ... After initially taking approximately 1.5 billion years to cool to a surface temperature of 7140 K, cooling approximately 500 more K ... takes around 0.3 billion years, but the next two steps of around 500 K ... take first 0.4 and then 1.1 billion years."
One takeaway is that the time needed to cool through a fixed temperature interval (here, each 500 K step) increases non-linearly. This is because the cooling is governed by a diffusion process. So, at low temperatures, cooling a further 500 K takes far longer than it did earlier in the star's life.
As someone said in the comment, there is no precise definition of a black dwarf. So, I would not say who is right or wrong without understanding how they define the cutoff.
However, if you roughly define it as the point where the colour temperature passes beyond the visible range (i.e. >7000 Å, or <4000 K), and if you extrapolate from about 5500 K assuming each subsequent 500 K step takes as long as the previous one (i.e. 6000 to 5500 K taking 1.1 billion years), we get roughly 3 billion years for cooling from 5500 to 4000 K. Adding the roughly 2 billion years from the initial temperature down to 5500 K gives >5 billion years for a white dwarf to cool from its initial state down to about 4000 K. Note that 5 billion is a lower limit because we did not consider the non-linearity.
(Note that you can also approximate the non-linearity by assuming the time per step grows by about 1 billion years with each step, as the 6000-5500 K step suggests. Doing this, the lower limit becomes >7 billion years.)
Since the age of the universe is about 13.8 billion years, whether you believe that a black dwarf exists or not depends on (i) the definition, (ii) the rate of cooling, and (iii) variation (meaning there might be a white dwarf that was born cool, or that lives in an environment supporting better cooling than the typical population).
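The back-of-the-envelope arithmetic above can be written out explicitly. The step times and the constant-step-time assumption are taken straight from the answer; this is a rough extrapolation, not a cooling model:

```python
# Rough extrapolation of white-dwarf cooling times, following the answer above.
# All numbers are the answer's round figures, in billions of years (Gyr).
t_to_5500K = 2.0     # approximate time from formation down to ~5500 K
last_step = 1.1      # time taken by the last known 500 K step (6000 -> 5500 K)
steps_to_4000K = 3   # 5500 K -> 4000 K in three 500 K steps

# Lower limit: assume every further 500 K step takes as long as the last one.
lower_limit = t_to_5500K + steps_to_4000K * last_step
print(f"> {lower_limit:.1f} Gyr to cool below ~4000 K")
```

Letting each step grow by about a billion years instead, as the answer's parenthetical note does, pushes the limit past 7 Gyr; both figures are below the 13.8 Gyr age of the universe, which is why the conclusion hinges on where exactly the black-dwarf cutoff is drawn.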
"domain": "astronomy.stackexchange",
"id": 3397,
"tags": "stellar-evolution, white-dwarf"
} |
Dice roll game with arbitrary face count | Question: I have this code written and I wanted to get a better performance out of it. Could somebody help me please? If possible I would like my code to get a better performance when rolling high numbers and when it getting to the percentage operation.
#implementation of multiple dice rolls at the same time and showing what each dice rolled individually
import random
from collections import Counter

#Welcoming the user to the game
print("Hello and welcome to the dice roller!")
name = input("What is your name?: ")
print ("Hi", name)

while True:
    #Asking the user if he wants to play to make it break the loop and say bye whenever he stops playing
    wants_to_play = input("Do you want to play? (y/n): ").lower()
    #if statement to determine the number of dice faces, and to "roll the dice" hypothetically.
    if wants_to_play == "y":
        dice_face = int(input("Select the number of faces you want your dice to have. WARNING: if you don't select a whole number, the dice roller won't work. "))
        dice_amount = int(input("Select how many dices of the given faces you want to roll: "))
        #generating dice roll number with the given input
        dice_rolls = []
        for i in range(0, dice_amount):
            dice_rolls.append(random.randint(1,dice_face))
        dice_rolls = []
        for i in range(0, dice_amount):
            dice_rolls.append(random.randint(1,dice_face))
        #Counting how many of each number are they and defining percentages
        num = Counter(dice_rolls)
        percentages = sum(num.values())
        total = round(percentages, 1)
        #printing final message output to the user
        print("Your dice percentages where: ")
        for face, count in num.most_common():
            print(f"{face}: {count} ({count/total:.2%})")
    #When the user stops wanting to play, saying them goodbye
    else:
        print("Well, it was great while it lasted! Until next time!")
        break

print("Developed by SMB Studios")
print("Developed by SMB Studios")
Answer: First off, I would say your code doesn't do what it's stated to do:
#implementation of multiple dice rolls at the same time and showing what each dice rolled individually
It doesn't show the individual rolls, only the cumulative stats. If this is intentional, then fine, but I suggest you change the docstring (and make it one) to show that.
PEP-8
PEP-8 is a set of recommendations regarding the format of Python code. They mean that there is a standard way of writing Python code which makes it easier to read and use others' code.
This includes recommendations on indent levels (4 spaces), spaces after commas, comments, etc. I suggest you look into getting a linter such as flake8 or pylint and run your code through them to standardise it against others' codes.
Comments
As it is, many of your comments don't really help me understand the code any more than the code does and in some cases are somewhat misleading:
# if statement to determine the number of dice faces, and to "roll the dice" hypothetically.
The if statement doesn't determine the number of faces. It's whether the code will run or not. I'm not sure what you mean by
"roll the dice" hypothetically
Do you mean you're not really rolling dice because it's a computer?
Comments are very much personal choice, but I would usually just outline what a block of code is to do in simple terms, and in complex parts how it does so (algorithm name, references, etc).
Handling user input
Currently, your code crashes if you enter a non-int value into your input.
We can do better than this using a function to get an integer from a user:
def get_int(prompt: str) -> int:  # Type hints tell a user what to provide and expect
    """ Get a valid integer from the user """
    while True:
        val = input(prompt)
        try:
            val = int(val)
        except ValueError:  # If it can't be turned into an int,
                            # this error is raised and this branch runs
            print(f"Invalid integer value ({val}), please try again.")
        else:  # If there wasn't an error, this branch runs
            return val  # Return the value to the caller and leave the function
Which means we now use:
dice_face = get_int("Select the number of faces you want your dice to have: ")
dice_amount = get_int("Select how many dice of the given faces you want to roll: ")
Duplicated effort
In your code, you roll the dice, discard them, then roll them again:
dice_rolls = []
for i in range(0, dice_amount):
    dice_rolls.append(random.randint(1,dice_face))
dice_rolls = []
for i in range(0, dice_amount):
    dice_rolls.append(random.randint(1,dice_face))
This means you spend a lot of effort doing nothing.
The other thing to bear in mind is that you are using append. List comprehensions (which you may not have met yet) are faster in general.
dice_rolls = [random.randint(1, dice_face) for _ in range(dice_amount)] # 0 is implied in the range
The other thing is that you are building this list of values, and then only using the Counter to do consume the list. You might want to instead, just directly increment the counter.
num = Counter()
for i in range(dice_amount):
    roll = random.randint(1, dice_face)
    num[roll] += 1
or (advanced) you could use a generator expression as the iterable of the counter:
num = collections.Counter(random.randint(1, dice_face) for _ in range(dice_amount))
Here:
percentages = sum(num.values())
total = round(percentages, 1)
percentages doesn't compute the percentages, just the counts and is not really used again, and total being the rounding of this is irrelevant because the counts will always be integral. There is already a method on Counter (total) which does this.
print(f"{face}: {count} ({count / num.total():.2%})")
Summary
Putting all this together, along with Tamoghna Chowdhury's comments, we end up with something like:
"""
Code to roll multiple dice and show individual rolls as well as aggregated statistics
"""
import random
from collections import Counter
def get_int(prompt: str) -> int:
""" Get a valid integer from the user """
while True:
val = input(prompt)
try:
val = int(val)
except ValueError:
print(f"Invalid integer value ({val}), please try again.")
else:
return val
# Welcome user to the game
print("Hello and welcome to the dice roller!")
name = input("What is your name? ")
print("Hi", name)
while True:
# Ask if user wants to play
wants_to_play = input("Do you want to play? (y/n) ").lower()
if wants_to_play == "y":
# Get the number of faces, and number to roll.
dice_face = get_int("Select the number of faces you want your dice to have: ")
dice_amount = get_int("Select how many dices of the given faces you want to roll: ")
# generate rolls
num = Counter()
for i in range(dice_amount):
roll = random.randint(1, dice_face)
print(f"Die {i}: {roll}") # N.B. May want to add capability to disable this for large numbers of rolls (e.g > 10)
num[roll] += 1
# print stats
print("\nYour dice percentages were: ")
for face, count in num.most_common():
print(f"{face}: {count} ({count / num.total():.2%})")
elif wants_to_play == "n":
# say goodbye and quit
print("Well, it was great while it lasted! Until next time!")
break
else:
print(f"Unknown option ({wants_to_play}), please try again.")
print("Developed by SMB Studios") | {
"domain": "codereview.stackexchange",
"id": 43931,
"tags": "python, performance, python-3.x, dice"
} |
Building custom messages on Desktop | Question:
I'm trying to plot a custom message with rxplot on my desktop while my Turtlebot runs. However, this fails and asks if I've built my messages? How can I build custom messages on my desktop computer?
EDIT:
You can't copy and paste from the turtlebot to the desktop because the directories are different. Makefiles only work if they're in the proper directory. I can't roscreate a new package on the desktop in the ROS directory. I can make one anywhere else on the machine, but when I try roscreate-pkg in the ROS directory it's denied, and when I sudo it, the roscreate-pkg command is not found.
Originally posted by IFLORbot on ROS Answers with karma: 33 on 2012-09-13
Post score: 0
Answer:
Never create packages in /opt/ros nor use sudo for compiling packages.
On the desktop, you need to create an overlay. Then copy the package containing your custom messages into it. If you copied the build directory in your package, too, you will run into compiler errors because the paths have changed. In that case, all you need to do is call rosmake --pre-clean <your package> to make sure that the old build directories are all removed.
Originally posted by Lorenz with karma: 22731 on 2012-09-20
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Lorenz on 2012-09-20:
Sure. They work since the beginning of ros and are commonly used.
Comment by Lorenz on 2012-09-20:
It should. rosws is independent from the ros release.
Comment by IFLORbot on 2012-09-20:
Ok, yeah, sorry I hadn't installed it and assumed it was there.
Comment by IFLORbot on 2012-09-20:
Yeah, that should be it, I'm just having multiple problems either adding the overlay to the package path, or once, I do that, not having interfere with ROS_ROOT. I don't have anymore time today to fix it, so I'll try to mess with it more tomorrow.
Comment by Lorenz on 2012-09-20:
You should not have any problems if you follow the overlay tutorial closely.
Comment by Lorenz on 2012-09-21:
When you create the overlay with rosws init ~/overlay /opt/ros/electric and source ~/overlay/setup.bash you should be able to access all normal ros packages. Verify that first. Create a subdir in ~/overlay, put your msgs in there and add it with rosws set. Source setup.bash again.
Comment by IFLORbot on 2012-09-21:
That didn't work. Set didn't add the subdirectory to ROS_PACKAGE_PATH, it seems, and it should, right? I had it working correctly for a minute after I added all the package paths to ROS_PACKAGE_PATH, but then it reverted when I opened a new terminal.
Comment by Lorenz on 2012-09-21:
Did you source the right setup.bash (that one in the overlay)?
Comment by IFLORbot on 2012-09-21:
Yes, I did.
Comment by Lorenz on 2012-09-21:
Have a look at the file ~/overlay/.rosinstall. It should contain an entry for the directory you added using rosws set. setup.bash uses that file to generate the ROS_PACKAGE_PATH.
Comment by IFLORbot on 2012-09-21:
It does. But now there is no ROS_ROOT.
Comment by Lorenz on 2012-09-21:
I actually doubt that. The auto-generated setup.sh should set ROS_ROOT if you initialized with rosws init ~/overlay /opt/ros/electric
Comment by IFLORbot on 2012-09-21:
It didn't. Everything seems to work fine once I reset ROS_ROOT myself. Technically, my root is /opt/ros/electric/ros, I found, so that could have been the problem. If I had set it to that, it probably would have worked without issue. | {
"domain": "robotics.stackexchange",
"id": 11006,
"tags": "ros, message, rxplot, rostopic, messages"
} |
Does adsorption violate thermodynamics? | Question: My textbook reads as follows:
When a gas is adsorbed, the freedom of movement of its molecules becomes restricted. This amounts to a decrease in the entropy of the gas after adsorption, i.e. the entropy change is negative.
But now, I am unable to understand why it isn't a violation of the second law of thermodynamics, which states that the entropy change of a system can never be negative. Please guide.
Answer: The second law of thermodynamics states that the entropy of the universe always increases.
$$\mathrm{d}S > 0$$
In the case of adsorption, the entropy of the system (the gas being adsorbed) decreases, but the entropy of the surroundings (the rest of the gas, the surface, and everything else in the universe) increases, and this increase outweighs the decrease in entropy of the system.
$$\Delta S_\mathrm{sys} < 0 \qquad \Delta S_\mathrm{surr} > 0$$
$$|{\Delta S_\mathrm{surr}}| > |{\Delta S_\mathrm{sys}}|$$
This is because adsorption is an exothermic process, so the surroundings are heated up and therefore increase in entropy. If you consider the Gibbs energy change for adsorption, it will be negative because the negative $\Delta H$ term is larger in magnitude than the positive $-T\Delta S$ term. This chem.uic.edu page goes through the maths of this very nicely.
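As an illustrative worked example with invented but typical numbers (not taken from the answer): take $\Delta H \approx -40\ \mathrm{kJ\,mol^{-1}}$ and $\Delta S_\mathrm{sys} \approx -100\ \mathrm{J\,mol^{-1}\,K^{-1}}$ at $T = 298\ \mathrm{K}$. Then
$$\Delta G = \Delta H - T\Delta S_\mathrm{sys} = -40 - (298)(-0.100) = -40 + 29.8 = -10.2\ \mathrm{kJ\,mol^{-1}} < 0$$
so the adsorption is spontaneous even though the system's entropy falls. Equivalently, the released heat raises the surroundings' entropy by $\Delta S_\mathrm{surr} = -\Delta H/T \approx +134\ \mathrm{J\,mol^{-1}\,K^{-1}}$, which outweighs the $-100\ \mathrm{J\,mol^{-1}\,K^{-1}}$ lost by the system.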
"domain": "chemistry.stackexchange",
"id": 2731,
"tags": "surface-chemistry, entropy, adsorption"
} |
Redshift of the Cosmic Microwave Background: increasing or decreasing? | Question:
$$\dot z\equiv\frac{\mathrm d z}{\mathrm d t_{\text{obs}}}(t_0)=(1+z)H_0-H(z)$$
The picture and equation above are quoted from Liske et al. (2008).
According to the equation, the redshift of the cosmic microwave background radiation is expected to decrease at this point in time.
However, I think the redshift that will be measured in the future will increase.
Doesn't this equation apply to the cosmic background radiation?
Answer: I think you cannot apply this equation to the cosmic microwave background and indeed, the redshift of the CMB is increasing with time.
The difference is that the photons we receive from the CMB will always come from a fixed epoch in the universe (the epoch of recombination).
In contrast, the photons that we receive from a distant galaxy were emitted at an epoch that depends on the redshift of the galaxy and this will change with time. In other words, we can watch galaxies getting older. At high redshifts, as a galaxy ages it will experience a deceleration in the universal expansion, as we see it, and thus its redshift decreases. At later times and lower redshifts, the expansion accelerates and the redshift increases. | {
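To make the sign structure concrete, here is a small sketch evaluating the quoted formula for an assumed flat ΛCDM model (Ωm = 0.3, ΩΛ = 0.7 are illustrative values, not taken from Liske et al.). As the answer explains, applying it at z ≈ 1100 is precisely what one should not do for the CMB, since the CMB's emission epoch is fixed; the evaluation below just shows what the formula would naively give:

```python
import math

# Redshift drift dz/dt_obs = (1 + z) * H0 - H(z), in units of H0,
# for an assumed flat LCDM cosmology (illustrative parameters).
Om, OL = 0.3, 0.7

def E(z):
    """H(z) / H0 for flat LCDM."""
    return math.sqrt(Om * (1 + z) ** 3 + OL)

def zdot_over_H0(z):
    """dz/dt_obs divided by H0."""
    return (1 + z) - E(z)

for z in (0.5, 3.0, 1100.0):  # low z, high z, and the CMB's nominal redshift
    sign = "increasing" if zdot_over_H0(z) > 0 else "decreasing"
    print(f"z = {z}: redshift {sign}")
```

Under this formula the drift is positive at low redshift (Λ-dominated acceleration) and negative at high redshift (matter-dominated deceleration), including at z = 1100, which is the naive decreasing-redshift result the question asks about.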
"domain": "physics.stackexchange",
"id": 80693,
"tags": "cosmology, space-expansion, cosmic-microwave-background, redshift"
} |
Exciton state, ground state and completeness relation | Question: Considering the exciton eigensystem $\mathcal{H} | \lambda \rangle = E_\lambda |\lambda\rangle$ with the Hermitian Hamiltonian $\mathcal{H}$ and wave function $|\lambda\rangle$. I'm thinking about the overlap $\langle \lambda|0 \rangle$, where $|0\rangle$ is the vacuum or ground state. If the overlap is zero, then for the oscillator strength which is proportional to $\langle \lambda|\mathbf{r}|0 \rangle$, where $\mathbf{r}$ is the position operator, we insert the completeness relation $\sum_{\mu}|\mu\rangle\langle \mu|=1$ between $\mathbf{r}$ and $|0\rangle$
$$
\langle \lambda|\mathbf{r}|0 \rangle = \sum_{\mu}\langle \lambda|\mathbf{r}|\mu\rangle\langle \mu|0 \rangle \tag{1}
$$
which seems to be zero due to the overlap term. This is of course wrong. Then what should the overlap be or am I missing something?
Now going further for the finite momentum exciton $|\lambda \mathbf{q}\rangle$, for which we have
$$
\mathcal{H} | \lambda \mathbf{q} \rangle = E_{\lambda \mathbf{q}} |\lambda \mathbf{q}\rangle \tag{2}
$$
Then what the completeness relation should be? $\sum_{\lambda \mathbf{q}}|\lambda \mathbf{q}\rangle\langle \lambda \mathbf{q}|=1$?
Also, if the ground state or vacuum state has energy of $E_0$, I'd like to know the velocity of the ground state $\partial_\mathbf{q} E_0$ or $\langle 0|\mathbf{v}|0\rangle$ where $\mathbf{v}$ is the exciton velocity operator. Is that zero?
Answer: Your formula is correct. But this only gives
$$\langle\lambda|r|0\rangle = \langle\lambda|r|0\rangle,$$
because a complete set of basis vectors includes the ground state term $|0\rangle\langle 0|$. | {
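Spelled out, assuming the $|\mu\rangle$ form an orthonormal basis that includes the vacuum, so that $\langle \mu|0\rangle = \delta_{\mu 0}$ (only the excited-state overlaps vanish, not the $\mu = 0$ term):
$$\sum_{\mu}\langle \lambda|\mathbf{r}|\mu\rangle\langle \mu|0 \rangle = \langle \lambda|\mathbf{r}|0\rangle\langle 0|0 \rangle + \sum_{\mu\neq 0}\langle \lambda|\mathbf{r}|\mu\rangle\langle \mu|0 \rangle = \langle \lambda|\mathbf{r}|0\rangle$$
since every term in the second sum is zero.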
"domain": "physics.stackexchange",
"id": 83706,
"tags": "quantum-mechanics, hilbert-space, ground-state"
} |
The Why Behind Sum of Squared Errors in a Linear Regression | Question: I'm just starting to learn about linear regressions and was wondering why it is that we opt to minimize the sum of squared errors. I understand the squaring helps us balance positive and negative individual errors (so say e1 = -2 and e2 = 4, we'd consider them as both regular distances of 2 and 4 respectively before squaring them), however, I wonder why we don't deal with minimizing the absolute value rather than the squares. If you square it, e2 has a relatively higher individual contribution to minimize than e1 compared to just the absolutely values (and do we want that?). I also wonder about decimal values. For instance, say we have e1 = 0.5 and e2 = 1.05, e1 will be weighted less when squared because 0.25 is less than 0.5 and e2 will be weighted more. Lastly, there is the case of e1 = 0.5 and e2 = 0.2. E1 is further away to start, but when you square it 0.25 is compared with 0.4. Anyway, just wondering why we do sum of squares Erie minimization rather than absolute value.
Answer: A simple Google search on "stats why regression not absolute difference" would give you good answers. Try it yourself!
I can quickly summarise:
Your regression parameters are solutions to a maximum likelihood optimisation. That involves taking derivatives, but the absolute value function doesn't have a derivative at zero. Also, there is in general no unique solution for least absolute regression.
Least absolute regression is an alternative to the regular sum of squares regression, commonly classified as one of the robust statistical methods.
You'd prefer least absolute regression if you care about outliers, otherwise the regular regression is generally better.
You might want to read about L1 vs L2:
https://stats.stackexchange.com/questions/45643/why-l1-norm-for-sparse-models | {
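A one-dimensional special case makes the outlier point concrete: when fitting a single constant, minimising the sum of squares gives the mean, while minimising the sum of absolute deviations gives the median. This is a standard fact, illustrated here with made-up data containing one outlier:

```python
from statistics import mean, median

data = [1.0, 2.0, 3.0, 4.0, 100.0]  # 100.0 is a deliberate outlier

l2_fit = mean(data)    # argmin over c of sum((x - c)**2)
l1_fit = median(data)  # argmin over c of sum(abs(x - c))

# Sanity check by brute force over a coarse grid of candidate constants.
grid = [i / 10 for i in range(0, 1001)]
best_l2 = min(grid, key=lambda c: sum((x - c) ** 2 for x in data))
best_l1 = min(grid, key=lambda c: sum(abs(x - c) for x in data))

print(l2_fit, best_l2)  # the mean, dragged toward the outlier
print(l1_fit, best_l1)  # the median, barely affected by it
```

(For an even number of points the L1 minimiser is any value between the two middle points, which is the non-uniqueness mentioned above.)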
"domain": "datascience.stackexchange",
"id": 2063,
"tags": "linear-regression"
} |
Printing a Φ pattern using * characters | Question: So I am a newbie obviously, This was part of a small assignment I had to do for Uni, and it works alright. It just reads a number from the console and then outputs the Greek letter Φ (sort of). Sure it could use some outofbound exceptions etc, but my question is this: My code overall (not just in this example) seems a little bit messy to me after I read it. My variable names are messy and I need to work on that and perhaps a few methods here and there, even though this assignment wasn't about methods but rather a warm up of basic Java syntax.
I want to write better code, but I don't know where to start. Should I use more variables? I am not greatly familiar with Javadoc yet; should it be used for small programs as well? How many procedures are too many? Should I break it down more? I feel like my "algorithm", even though it works, could be better. Online tutorials usually teach syntax, but I want to make my code as good and readable as possible.
import java.util.*;
public class A11{
public static void main(String args[]){
/*Declare Num, then parse the number from the command line
into the variable,and finaly inform the user before moving on*/
int temp=0;
int num =0;
num=Integer.parseInt(args[0]);
System.out.println();
System.out.println("Number = " + num);
temp=num/2;
/*Prints the first Line (approp spaces and a Star)*/
if(num%2==0){
temp-=1;
}
for(int v=0;v<=temp;v++){
System.out.print(" ");
}
System.out.println("*");
/*Print the remaining Shape according to the input*/
/* Loop algorithm to calculate and output the appropreate amount of
characters required to form the TOP half of the shape */
if(num%2==0){temp=num/2-1;}
for(int i=1;i<=temp;i++){
for(int j=0;j<temp-i;j++){
System.out.print(" ");
}
System.out.print("*");
for(int k=0;k<2;k++){
for(int j=0;j<i;j++){
System.out.print(" ");
}
System.out.print("*");
}
System.out.println();
}
/*Bottom half - if Num is an odd number flip the top shape but loose one line*/
int space=temp;
if(num%2==1){
temp-=1;
}
for(int i=temp;i>=1;i--){
for(int j=space-i;j>0;j--){
System.out.print(" ");
}
System.out.print("*");
for(int k=0;k<2;k++){
for(int j=0;j<i;j++){
System.out.print(" ");
}
System.out.print("*");
}
System.out.println();
}
temp=num/2;
/*Prints the last line and Star*/
if(num%2==0){
temp-=1;
}
for(int v=0;v<=temp;v++){
System.out.print(" ");
}
System.out.println("*");
}
}
Answer: Your use of the temp variable makes your code particularly infuriating to follow:
It's not even that temporary in scope: it's used throughout the function!
temp is almost never a good name for a variable, since it doesn't give any clue what it is supposed to represent.
You redefine and repurpose it several times within the function.
There are too many if(num%2==0) special cases. You shouldn't need any special cases if you just use the truncation that naturally happens with integer division in your favour.
For that matter, num is also a meaningless variable name. I can see that it's an int, which is, obviously, a number. How about size instead?
There's a lot of copy-and-paste code. You could benefit greatly splitting the work into functions. Decomposing the task would also force you to think about what the subroutines are, and help you organize your code.
Calling System.out.print() to print each character at a time is very tedious. It's also bad for performance, if that matters to you. A much better strategy would be to create a buffer to print at least a line at a time, if not the entire picture at once. Another benefit of using a buffer is that you can just place the stars in the right place using a formula.
Suggested solution
import java.util.Arrays;

public class Phi {
    /**
     * Produces a string of spaces that has the character <code>c</code>
     * at the three specified indices, followed by a newline.
     */
    private static String row(char c, int left, int mid, int right) {
        char[] line = new char[right + 2];
        Arrays.fill(line, ' ');
        line[left] = line[mid] = line[right] = c;
        line[right + 1] = '\n';
        return new String(line);
    }

    public static String toString(char c, int size) {
        StringBuilder out = new StringBuilder();
        int halfWidth = (size + 1) / 2;

        // Top stem
        out.append(row(c, halfWidth, halfWidth, halfWidth));

        // Body
        for (int row = 2; row <= size / 2; row++) {
            out.append(row(c, halfWidth - row, halfWidth, halfWidth + row));
        }
        for (int row = halfWidth; row >= 2; row--) {
            out.append(row(c, halfWidth - row, halfWidth, halfWidth + row));
        }

        // Bottom stem
        out.append(row(c, halfWidth, halfWidth, halfWidth));
        return out.toString();
    }

    public static void main(String[] args) {
        int size = Integer.parseInt(args[0]);
        System.out.printf("\nNumber = %d\n%s", size, toString('*', size));
    }
}
"domain": "codereview.stackexchange",
"id": 16126,
"tags": "java, beginner, ascii-art"
} |
Is the valence band neutral? | Question: While studying about band theory of semiconductors, I observed that when the electrons were excited from the valence band to the conduction band, they left behind holes in the valence band. From my existing knowledge, I believe that the valence electrons alone occupy the valence band which tells me the valence band is negative. For the holes to exist, the valence band has to be neutral. So, why is the valence band neutral?
If my reason for the valence band to be negative was wrong, I would like to know the reason.
Answer: A band, in itself, does not have charge. A band is just a collection of possible states where electons might be found. The electrons have charge but the states themselves do not.
If the valence band is fully occupied, the charge of the electrons in the band will be balanced by the charge of the protons in the lattice, and the overall charge of the material will be neutral. As an aside, since the protons are well-localized in position space, they are very spread out in k-space.
If there is an unoccupied state in the valence band, there will be a local (in k-space) net positive charge because the nuclear positive charges are not compensated by valence band electrons. This positve charge behaves as if it were attached to a particle, and we call that particle a hole.
Often the electron removed from the valence band is only moved up to the conduction band, so the net charge in the material may still be neutral, even though there is a hole present. The conduction band particle (electron) and valence band particle (hole) may move apart in k-space (and in position space) so we must keep track of them separately. | {
"domain": "physics.stackexchange",
"id": 12878,
"tags": "semiconductor-physics"
} |
Game server packet handler | Question: I am developing an online game and my code is getting pretty hard to work with. I would appreciate any suggestions how to clean it up or make it simpler to work with. Thanks for any suggestions.
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import server.engine.*;
import java.util.Arrays;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.logging.Level;
import java.util.logging.Logger;
public class ServerPacketHandler extends PacketHandler {
public static CopyOnWriteArrayList<User> usersOnline = new CopyOnWriteArrayList();
@Override
public void handlePacket(Packet p) throws ArrayIndexOutOfBoundsException {
ArrayList<String> newcontent = new ArrayList();
for (String c : p.content) {
newcontent.add(c.replace("<html>", "").replace("</html>", ""));
}
p.content = newcontent.toArray(new String[newcontent.size()]);
if(Server.debugMode) CustomLog.info(p.id + " " + Arrays.toString(p.content));
if (!p.user.isLogged) {
switch (p.id) {
case 0: { //user sent login information
if (!p.content[2].equals(Server.version)) {
p.user.sendPacket(-2);
return;
}
User user;
if ((user = UserManager.loadUser(p.content[0])) != null) {
if (!p.content[1].equals(user.password)) {
p.user.sendPacket(-1, "Incorrect login");
return;
}
p.user.isLogged = true;
p.user.id = p.content[0];
p.user.password = p.content[1];
if (user.username.equals("null")) {
p.user.sendPacket(2);
} else {
for (User u : usersOnline) {
if (u.username.equals(user.username)) {
p.user.sendPacket(-1, "This username is already online!");
u.sendPacket(-1, "You were disconnected from the server!");
p.user.isLogged = false;
try {
p.user.socket.close();
u.socket.close();
} catch (IOException ex) {
Logger.getLogger(ServerPacketHandler.class.getName()).log(Level.SEVERE, null, ex);
}
onClientDisconnected(p.user);
onClientDisconnected(u);
return;
}
}
p.user.username = user.username;
p.user.sendPacket(3, p.user.username);
}
} else {
p.user.sendPacket(-1, "Incorrect login");
}
break;
}
case 1: { //user requested registration
if (!p.content[0].equals(Server.version)) {
p.user.sendPacket(-2);
return;
}
String id = Integer.toString(UserGenerator.randInt());
String password = UserGenerator.randString();
while (UserManager.loadUser(id) != null) {
id = Integer.toString(UserGenerator.randInt());
password = UserGenerator.randString();
}
p.user.id = id;
p.user.password = password;
UserManager.saveUser(p.user);
p.user.sendPacket(1, id, password);
break;
}
default:
break;
}
} else if (p.user.username.equals("null")) {
switch (p.id) {
case 2: { //user wants to setup username
if (p.content[0].length() < 15 && p.content[0].length() > 2 && p.content[0].matches("^[a-zA-Z0-9]*$") && !p.content[0].equals("null")) {
boolean found = false;
for (User u : UserManager.loadAllUsers()) {
if (u.username.equals(p.content[0])) {
found = true;
}
}
if (found) {
p.user.sendPacket(-1, "This username already exists.");
return;
}
p.user.username = p.content[0];
UserManager.saveUser(p.user);
p.user.sendPacket(3, p.user.username);
} else {
p.user.sendPacket(-1, "Incorrect username");
}
break;
}
default:
break;
}
} else {
switch (p.id) {
case 10: { //match list request
for (ServerMatch match : ServerMatch.matchList) {
if (!match.ingame) {
p.user.sendPacket(10, Integer.toString(match.id), match.name, match.host.username, match.password.equals("") ? "false" : "true", match.userList.size() + "/" + match.maxplayers, Integer.toString(match.time), Integer.toString(match.maxquestions), match.topics.toString());
}
}
p.user.sendPacket(11);
break;
}
case 15: { //match connect request
if (p.content[0].matches("^[0-9]*$")) {
ServerMatch m = ServerMatch.getMatch(Integer.parseInt(p.content[0]));
if (m != null && !m.ingame) {
if (m.password.equals("")) {
p.user.connectMatch(m);
} else if (p.content.length > 1 && m.password.equals(p.content[1])) {
p.user.connectMatch(m);
} else {
p.user.sendPacket(-1, "Wrong password!");
}
}
}
break;
}
case 16: { //match disconnect request
p.user.disconnectMatch();
break;
}
case 20: { //create new match
boolean found = false;
for (ServerMatch match : ServerMatch.matchList) {
if (match.host == p.user) {
found = true;
}
}
if (!found) {
if (!p.content[2].matches("^[0-9]*$") || !p.content[3].matches("^[0-9]*$") || !p.content[4].matches("^[0-9]*$") || p.content[5].equals("")) {
p.user.sendPacket(-3);
return;
}
String matchname = p.content[0];
String password = p.content[1];
int maxplayers = Integer.parseInt(p.content[2]);
int time = Integer.parseInt(p.content[3]);
int count = Integer.parseInt(p.content[4]);
String[] topics = p.content[5].split(";");
if (matchname.length() < 30 && matchname.length() > 2 && password.length() < 30 && count > 0 && count < 101 && maxplayers < 11 && maxplayers > 1 && time < 61 && time > 4) {
ServerMatch match = new ServerMatch(matchname, p.user);
if (!password.equals("[null]")) {
match.password = password;
}
match.time = time;
match.maxplayers = maxplayers;
match.maxquestions = count;
if(Arrays.asList(topics).contains("Custom")) { match.topics.add(Question.Topic.Custom); if(p.content.length < 7) p.user.sendPacket(-3); }
else for (String topic : topics) {
switch (topic) {
case "General":
match.topics.add(Question.Topic.General);
break;
case "Technology":
match.topics.add(Question.Topic.Technology);
break;
case "History":
match.topics.add(Question.Topic.History);
break;
case "Geography":
match.topics.add(Question.Topic.Geography);
break;
case "Sport":
match.topics.add(Question.Topic.Sport);
break;
case "Custom":
match.topics.add(Question.Topic.Custom);
break;
default:
p.user.sendPacket(-3);
return;
}
}
if (match.topics.contains(Question.Topic.Custom)) {
if (p.content[6].length() > 5000) {
p.user.sendPacket(-3);
return;
}
ArrayList<Question> list = parseQuestions(p.content[6]);
if (list == null || list.isEmpty()) {
p.user.sendPacket(-3);
return;
}
match.customQuestions = list;
}
ServerMatch.matchList.add(match);
p.user.connectMatch(match);
} else {
p.user.sendPacket(-3);
}
}
break;
}
case 21: { //modify match
if (p.user.match != null && p.user==p.user.match.host && !p.user.match.ingame) {
if (!p.content[2].matches("^[0-9]*$") || !p.content[3].matches("^[0-9]*$") || p.content[4].equals("")) {
p.user.sendPacket(-1, "Match settings are not correct or custom questions are empty or failed to load.");
return;
}
String matchname = p.content[0];
String password = p.content[1];
int time = Integer.parseInt(p.content[2]);
int count = Integer.parseInt(p.content[3]);
String[] topics = p.content[4].split(";");
if (matchname.length() < 30 && matchname.length() > 2 && password.length() < 30 && count > 0 && count < 101 && time < 61 && time > 4) {
ServerMatch match = p.user.match;
if(match.customQuestions.isEmpty() && Arrays.asList(topics).contains("Custom")) { p.user.sendPacket(-1, "Match settings are not correct or custom questions are empty or failed to load."); return; }
if (!password.equals("[null]")) {
match.password = password;
} else match.password = "";
match.name = matchname;
match.time = time;
match.maxquestions = count;
match.topics.clear();
for (String topic : topics) {
switch (topic) {
case "General":
match.topics.add(Question.Topic.General);
break;
case "Technology":
match.topics.add(Question.Topic.Technology);
break;
case "History":
match.topics.add(Question.Topic.History);
break;
case "Geography":
match.topics.add(Question.Topic.Geography);
break;
case "Sport":
match.topics.add(Question.Topic.Sport);
break;
case "Custom":
match.topics.add(Question.Topic.Custom);
break;
default:
p.user.sendPacket(-1, "Match settings are not correct or custom questions are empty or failed to load.");
return;
}
}
match.sendPacketAll(20, match.name,Integer.toString(match.time), Integer.toString(match.maxquestions), match.getTopicString());
} else {
p.user.sendPacket(-1, "Match settings are not correct or custom questions are empty or failed to load.");
}
}
break;
}
case 30: { //match start
if (p.user.match != null && !p.user.match.ingame && p.user == p.user.match.host) {
if (p.user.match.userList.size() > 1) {
p.user.match.start();
} else {
p.user.match.chatMessage("Not enough players to start the match!");
}
}
break;
}
case 31: { //question answer
if (p.user.match != null && !p.user.match.usersAnswer1.contains(p.user) && !p.user.match.usersAnswer2.contains(p.user) && !p.user.match.usersAnswer3.contains(p.user) && !p.user.match.usersAnswer4.contains(p.user)) {
if (p.content[0].equals("1")) {
p.user.match.usersAnswer1.add(p.user);
}
if (p.content[0].equals("2")) {
p.user.match.usersAnswer2.add(p.user);
}
if (p.content[0].equals("3")) {
p.user.match.usersAnswer3.add(p.user);
}
if (p.content[0].equals("4")) {
p.user.match.usersAnswer4.add(p.user);
}
}
break;
}
case 50: { //chat message
if (p.user.match != null) {
for (User u : p.user.match.userList) {
u.sendPacket(50, p.user.username + ": " + p.content[0]);
}
}
break;
}
case 1000: { //add question request
String question = p.content[0];
String answer1 = p.content[1];
String answer2 = p.content[2];
String answer3 = p.content[3];
String answer4 = p.content[4];
String answerNumber = p.content[5];
String topic = p.content[6];
if (answerNumber.length() > 0 && answer1.length() > 0 && answer2.length() > 0 && answer3.length() > 0 && answer4.length() > 0 && question.length() > 0 && question.length() < 301 && answer1.length() < 51 && answer2.length() < 51 && answer3.length() < 51 && answer4.length() < 51 && answerNumber.matches("^[0-9]*$") && answerNumber.length() < 2 && Integer.parseInt(answerNumber) < 5) {
for (Question q : ServerMatch.questions) {
if (q.text.equalsIgnoreCase(question)) {
p.user.sendPacket(-1, "This question is already in the game!");
return;
}
}
try (BufferedWriter writer = new BufferedWriter(new FileWriter("questions.txt", true))) {
writer.write(question + ";;" + answer1 + ";;" + answer2 + ";;" + answer3 + ";;" + answer4 + ";;" + answerNumber + ";;" + topic + System.getProperty("line.separator"));
writer.close();
} catch (IOException ex) {
CustomLog.error("Failed to write custom question!");
ex.printStackTrace(System.out);
}
}
break;
}
default:
break;
}
}
}
@Override
public void onClientConnected(User u) {
if(Server.debugMode) CustomLog.info("Client has connected: " + u.socket.getInetAddress().getHostAddress());
usersOnline.add(u);
this.sendPacket(0, null, u.socket);
}
@Override
public void onClientDisconnected(User u) {
usersOnline.remove(u);
u.disconnectMatch();
if(!u.socket.isClosed())
{
try {
u.socket.close();
} catch (IOException ex) {
CustomLog.error(ex.getMessage());
}
}
if(Server.debugMode) CustomLog.info("Client has disconnected: " + u.socket.getInetAddress().getHostAddress());
}
private ArrayList<Question> parseQuestions(String str) {
ArrayList<Question> list = new ArrayList();
try {
for (String s : str.split(";;")) {
s = s.replace("\n", "").replace("\r", "");
if (!s.equals("")) {
String[] question = s.split(";");
if (Arrays.asList(question).contains("")) {
continue;
}
if(Integer.parseInt(question[5]) < 5 && Integer.parseInt(question[5]) > 0)
list.add(new Question(question[0], question[1], question[2], question[3], question[4], Integer.parseInt(question[5]), Question.Topic.Custom));
}
}
} catch (Exception ex) {
return null;
}
if (list.size() > 0) {
return list;
} else {
return null;
}
}
}
If you want me to show any imports or custom classes, feel free to comment.
Answer:
my code is getting pretty hard to work with.
That's usually a sign that your code smells (excuse the expression).
Despite having 'Hello, World!' as my Java experience, there's quite a bit I can comment on.
Overall Comments:
Your code is aiming to do everything, all bundled together.
You should be spreading it out a bit more.
Stop going for a God object, and settle with a President object (That's not a real thing, I was trying to be witty).
Why are you returning HTML in your packets ...?
HTML is for marking up pages with specific styling (<h1>, <p>), not optimised for returning data.
Consider using JSON instead, as it is designed for returning data in the form of an object.
By converting the JSON into a Java array, you can access all the properties without having to split by delimiters.
Also among the pros is that it's widely implemented across other systems, too.
HTML with delimiters also takes far more bytes to send the same data, meaning the response time will be slower.
Why are you sending packets by magic numbers (unexplained numbers)?
First, you should define those numbers as constants.
Actually, that would be the thing to do. Except, using integers like that is bad practice. It's easily confusable, and unnecessarily complicated. Use strings instead of integers there.
And instead of a switch, use an object, and associate variables to the keys (strings, not integers).
c.replace("<html>", "").replace("</html>", "")
Stacking .replaces is bad practice, use an array instead.
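A minimal sketch of the array approach (the class name and tag list here are illustrative, not from the original code): keep the tags to strip in an array and loop over them, so adding a new tag is a one-line change.

```java
public class TagStripper {
    // Tags we want removed; extend this array instead of chaining .replace() calls.
    private static final String[] TAGS_TO_STRIP = {"<html>", "</html>"};

    public static String strip(String input) {
        String result = input;
        for (String tag : TAGS_TO_STRIP) {
            result = result.replace(tag, "");
        }
        return result;
    }
}
```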
switch (topic) {
case "General":
match.topics.add(Question.Topic.General);
break;
case "Technology":
match.topics.add(Question.Topic.Technology);
break;
case "History":
match.topics.add(Question.Topic.History);
break;
case "Geography":
match.topics.add(Question.Topic.Geography);
break;
case "Sport":
match.topics.add(Question.Topic.Sport);
break;
case "Custom":
match.topics.add(Question.Topic.Custom);
break;
default:
p.user.sendPacket(-3);
return;
There's better ways to do this, than to use a switch.
Consider an object, or something better there.
You write this more than once, also. Consider converting this into a function.
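Since the topic strings in the packets happen to match the enum constant names, one hedged option is `Enum.valueOf`. A self-contained sketch (the `Topic` enum here is a stand-in for the original `Question.Topic`, which isn't shown):

```java
import java.util.ArrayList;
import java.util.List;

public class TopicParser {
    // Stand-in for the original Question.Topic enum (illustrative).
    public enum Topic { General, Technology, History, Geography, Sport, Custom }

    // Returns the parsed topics, or null if any name is invalid --
    // the caller can then send the error packet once, in one place.
    public static List<Topic> parseTopics(String[] names) {
        List<Topic> topics = new ArrayList<>();
        for (String name : names) {
            try {
                topics.add(Topic.valueOf(name));
            } catch (IllegalArgumentException ex) {
                return null; // unknown topic name
            }
        }
        return topics;
    }
}
```

This collapses both copies of the switch into one call site, and adding a new topic only requires adding the enum constant.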
if (!password.equals("[null]")) {
match.password = password;
} else match.password = "";
First, please, please wrap your statements in brackets, bad things can happen if you mess that up.
Consider a ternary statement instead. Ternary statements let you simplify small if-else statements, and even provide job security (joke).
For example, here, using a ternary statement, you would write:
match.password = password.equals("[null]") ? "" : password;
A ternary here too:
if (list.size() > 0) {
return list;
} else {
return null;
}
into:
return list.size() > 0 ? list : null;
if (p.content[0].equals("1")) {
p.user.match.usersAnswer1.add(p.user);
}
if (p.content[0].equals("2")) {
p.user.match.usersAnswer2.add(p.user);
}
if (p.content[0].equals("3")) {
p.user.match.usersAnswer3.add(p.user);
}
if (p.content[0].equals("4")) {
p.user.match.usersAnswer4.add(p.user);
}
There are better ways to approach this: a switch (which you seem confident with) or an array (not an object, as the keys would be integers anyway).
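For instance, an indexed list of answer buckets could replace the four parallel `usersAnswer` fields. This is only a sketch with illustrative names (users shown as strings rather than the original `User` class):

```java
import java.util.ArrayList;
import java.util.List;

public class AnswerBuckets {
    // One bucket per answer choice, replacing usersAnswer1..usersAnswer4.
    private final List<List<String>> buckets = new ArrayList<>();

    public AnswerBuckets(int choices) {
        for (int i = 0; i < choices; i++) {
            buckets.add(new ArrayList<>());
        }
    }

    // Records a user's answer; returns false if the choice is invalid
    // or the user has already answered (mirroring the original checks).
    public boolean record(String user, String choice) {
        int index;
        try {
            index = Integer.parseInt(choice) - 1; // answers arrive as "1".."4"
        } catch (NumberFormatException ex) {
            return false;
        }
        if (index < 0 || index >= buckets.size()) {
            return false;
        }
        for (List<String> bucket : buckets) {
            if (bucket.contains(user)) {
                return false; // user already answered this question
            }
        }
        return buckets.get(index).add(user);
    }

    public List<String> usersFor(int choice) {
        return buckets.get(choice - 1);
    }
}
```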
if (!found){
// Lots of code here.
} else {
p.user.sendPacket(-3);
}
You should flip conditionals like this so that the smaller branch comes first, especially since it's a negated test.
String answer1 = p.content[1];
String answer2 = p.content[2];
String answer3 = p.content[3];
String answer4 = p.content[4];
String answerNumber = p.content[5];
What would happen if you wanted a 25 answer quiz? You'd have to have it all the way up to answer25.
Don't manually type out variables like that.
Use a loop (for or foreach, whatever floats your boat) instead.
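A rough sketch of the loop idea, assuming the packet layout is `[question, answers..., answerNumber, topic]` (that layout is an assumption for illustration, not something confirmed beyond the five-answer case in the original code):

```java
import java.util.Arrays;

public class AnswerExtractor {
    // Pulls the answer fields out of a packet-content array by position,
    // instead of naming answer1..answerN by hand. Assumed layout:
    // [question, answer..., answerNumber, topic] (illustrative).
    public static String[] extractAnswers(String[] content) {
        // answers sit between the question (index 0) and the last two fields
        return Arrays.copyOfRange(content, 1, content.length - 2);
    }
}
```

With this, a 25-answer quiz needs no new variables; the length validation can then loop over the returned array too.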
Speaking of bad practice, this should really be improved:
writer.write(question + ";;" + answer1 + ";;" + answer2 + ";;" + answer3 + ";;" + answer4 + ";;" + answerNumber + ";;" + topic + System.getProperty("line.separator"));
Fortunately, there's an easier way to do this. Use an array. By using an array, you can just join all the variables together with some glue (a string, in this case: ";;")
String[] items = {question, answer1, answer2, answer3, answer4, answerNumber, topic};
String combinedString = StringUtils.join(items, ";;");
combinedString += System.getProperty("line.separator");
writer.write(combinedString);
Although, using a loop here would be much better, it'd probably have a similar style (each answer being added to array, that would be joined after all the elements have been iterated over).
See here for more info on array joining and the StringUtils class.
Or, as @Vogel612 kindly pointed out, you can use Collectors.joining instead:
String combinedString = Stream.of(items).collect(Collectors.joining(";;"));
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import server.engine.*;
import java.util.Arrays;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.logging.Level;
import java.util.logging.Logger;
Why is import server.engine.* just thrown in the middle?
Finish declaring all the java.util imports first, and then declare it at the end.
"domain": "codereview.stackexchange",
"id": 15269,
"tags": "java, game, multithreading"
} |
What benefits can be got by applying Graph Convolutional Neural Network instead of ordinary CNN? | Question: What benefits can we gain by applying a Graph Convolutional Neural Network instead of an ordinary CNN? I mean, if we can solve a problem with a CNN, why should we switch to a Graph Convolutional Neural Network to solve it? Are there any examples, i.e. papers, showing that replacing an ordinary CNN with a Graph Convolutional Neural Network achieves an accuracy increase, a quality improvement, or a performance gain? Can anyone introduce some examples, such as image classification or image recognition, especially in medical imaging, bioinformatics, or biomedical areas?
Answer: Generally speaking, a graph CNN is applied to data represented by graphs, not images.
a graph is a collection of nodes and edges connecting them.
an image is a 2D or 3D matrix, in which each element denotes a pixel in space
If your data are just images, or something similar (e.g. some fMRI data), you usually cannot benefit from graph CNN compared with usual CNN.
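To make the contrast concrete, here is a toy sketch (not a real GCN layer, and not from the answer itself) of the neighbor aggregation at the heart of a graph convolution: each node's feature is averaged with its neighbors' features as given by an adjacency matrix, rather than by a fixed kernel sliding over an image grid.

```java
public class GraphPropagation {
    // One step of neighbor averaging over a graph given by an adjacency matrix.
    // This mimics the aggregation in a graph convolution, without the learned
    // weight matrix that a real GCN layer would also apply.
    public static double[] propagate(int[][] adjacency, double[] features) {
        int n = features.length;
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = features[i]; // include the node's own feature (self-loop)
            int count = 1;
            for (int j = 0; j < n; j++) {
                if (adjacency[i][j] == 1) {
                    sum += features[j];
                    count++;
                }
            }
            out[i] = sum / count;
        }
        return out;
    }
}
```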
Sometimes, the class labels of your images may be organized in a graph-like (or tree-like) structure. In that case, you may have a chance to benefit from graph CNN. | {
"domain": "ai.stackexchange",
"id": 688,
"tags": "machine-learning, deep-learning, convolutional-neural-networks, graphs, geometric-deep-learning"
} |
Getting topyellow, topred, slider, and navbar divs stacked | Question: I previously asked on Stack Overflow for help putting this together, and was told that I could improve the HTML and CSS.
Could you help me?
Here is my code, and the pictures I am using:
Code
<!DOCTYPE html>
<html class="no-js" lang="en-us">
<head>
<link rel="dns-prefetch" href="//analytics.google.com/">
<meta charset="utf-8">
<title>Berlin Airlift Veterans Association: News</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="Berlin Airlift Veterans Association" content="">
<meta name="viewport" content="width=device-width,initial-scale=1">
<style>
#yellow{
width:100%;
height: 7px;
background-color:rgb(253,248,12);
position:relative;
display:block
}
#slider{
position:relative;
left:50%;
margin-left:-440px;
width:857px;
height:350px;
bottom: 0;
}
body{
border-top:6px solid rgb(211,5,24);
box-shadow: 0 5px 0 green;
background-color:black;
margin:0;
padding:0;
font-size:15px;
}
header{ background-color:white;
width:55%;
height:900px;
margin:0 auto;
border-left: 6px solid rgb(8,44,180)
}
nav{
background:white;
position:relative;
left:50%;
margin-left:-440px;
top: 0;
margin-bottom:0;
margin-top:0;
width:61%;
}
nav ul{margin:0; padding:0; width:875px
}
nav li{
display:inline;
margin: 0.0px
}
article{
}
fieldset{
border:0;
}
footer{
border-left:6px solid rgb(8,44,180);
width:55%;
margin:0 auto;
text-align:center;
background-color:white;
}
.addr *{
display:inline-block;
float:left;
}
.logo{
z-index:1;
position:absolute;
top: 5%;
left:17.5%;
}
.subscribe{
position:absolute;
right:0;
bottom:0;
color:white;
text-align:center;
}
.subscribe legend{
margin-bottom:300px;
}
.subscribe label:before{
content:'';
background-image:url('emailBomb.png');
background-size:cover;
height:300px;
width:150px;
position:absolute;
top:0;
}
.vh {
border: 0;
clip: rect(0000);
height: 1px;
overflow: hidden;
position: absolute;
width: 1px;
margin: -1px;
padding: 0;
}
.cf:before, .cf:after {
content:' ';
display: table;
}
.cf:after {
clear: both
}
.cf {
zoom: 1
}
.currentButton {
-moz-box-shadow:inset 0px 0px 0px 0px #d3051a;
-webkit-box-shadow:inset 0px 0px 0px 0px #d3051a;
box-shadow:inset 0px 0px 0px 0px #d3051a;
background:-webkit-gradient( linear, left top, left bottom, color-stop(0.05, #000000), color-stop(1, #000000) );
background:-moz-linear-gradient( center top, #000000 5%, #000000 100% );
filter:progid:DXImageTransform.Microsoft.gradient(startColorstr='#000000', endColorstr='#000000');
background-color:#000000;
-webkit-border-top-left-radius:0px;
-moz-border-radius-topleft:0px;
border-top-left-radius:0px;
-webkit-border-top-right-radius:0px;
-moz-border-radius-topright:0px;
border-top-right-radius:0px;
-webkit-border-bottom-right-radius:20px;
-moz-border-radius-bottomright:20px;
border-bottom-right-radius:20px;
-webkit-border-bottom-left-radius:20px;
-moz-border-radius-bottomleft:20px;
border-bottom-left-radius:20px;
text-indent:0;
display:inline-block;
color:rgb(211,5,24);
font-family:Arial;
font-size:14px;
font-weight:bold;
font-style:normal;
height:33pxpx;
line-height:33px;
width:105px;
text-decoration:none;
text-align:center;}
.currentButton:hover {
background:-webkit-gradient( linear, left top, left bottom, color-stop(0.05, #000000), color-stop(1, #000000) );
background:-moz-linear-gradient( center top, #000000 5%, #000000 100% );
filter:progid:DXImageTransform.Microsoft.gradient(startColorstr='#000000', endColorstr='#000000');
background-color:#000000;color:rgb(211,5,24);}
.currentutton:active {
position:relative;
top:1px;}
.button {
-moz-box-shadow:inset 0px 0px 0px 0px #d3051a;
-webkit-box-shadow:inset 0px 0px 0px 0px #d3051a;
box-shadow:inset 0px 0px 0px 0px #d3051a;
background:-webkit-gradient( linear, left top, left bottom, color-stop(0.05, #000000), color-stop(1, #000000) );
background:-moz-linear-gradient( center top, #000000 5%, #000000 100% );
filter:progid:DXImageTransform.Microsoft.gradient(startColorstr='#000000', endColorstr='#000000');
background-color:#000000;
-webkit-border-top-left-radius:0px;
-moz-border-radius-topleft:0px;
border-top-left-radius:0px;
-webkit-border-top-right-radius:0px;
-moz-border-radius-topright:0px;
border-top-right-radius:0px;
-webkit-border-bottom-right-radius:20px;
-moz-border-radius-bottomright:20px;
border-bottom-right-radius:20px;
-webkit-border-bottom-left-radius:20px;
-moz-border-radius-bottomleft:20px;
border-bottom-left-radius:20px;
text-indent:0;
display:inline-block;
color:#fdf902;
font-family:Arial;
font-size:14px;
font-weight:bold;
font-style:normal;
height:33pxpx;
line-height:33px;
width:105px;
text-decoration:none;
text-align:center;}
.button:hover {
background:-webkit-gradient( linear, left top, left bottom, color-stop(0.05, #000000), color-stop(1, #000000) );
background:-moz-linear-gradient( center top, #000000 5%, #000000 100% );
filter:progid:DXImageTransform.Microsoft.gradient(startColorstr='#000000', endColorstr='#000000');
background-color:#000000;color:rgb(211,5,24);}
.button:active {
position:relative;
top:1px;}
</style>
<script>
var i = 0; var path = new Array();
path[0] = "1.jpg";
path[1] = "2.jpg";
path[2] = "3.jpg";
path[3] = "4.jpg";
path[4] = "5.jpg";
path[5] = "6.jpg";
function swapImage()
{
document.slide.src = path[i];
if(i < path.length - 1) i++;
else i = 0;
setTimeout("swapImage()",5500);
}
function GetClock(){
tzOffset = +2;
d = new Date();
dx = d.toGMTString();
dx = dx.substr(0,dx.length -3);
d.setTime(Date.parse(dx))
d.setHours(d.getHours() + tzOffset);
nday = d.getDay();
nmonth = d.getMonth();
ndate = d.getDate();
nyear = d.getYear();
nhour = d.getHours();
nmin = d.getMinutes();
if(nyear<1000) nyear=nyear+1900;
if(nmin <= 9){nmin="0"+nmin}
document.getElementById('berlinClock').innerHTML=""+(nmonth+1)+"/"+ndate+"/"+nyear+" "+nhour+":"+nmin+"";
setTimeout("GetClock()", 1000);
}
window.onload = function() { swapImage(); GetClock(); };
</script>
<body>
<div id="yellow"></div>
<header role="banner">
<a href="/" accesskey="h" tabindex="1" title="Return Home">
<img class="logo" src="BAVA.png" width="150" height="150" alt="Logo">
<h1 class="vh">First Level Heading</h1>
</a>
<div id="slider">
<img name="slide" width="875" height="350"><!-- I'm not sure this is the best approach, but I'll just use what you have -->
</div>
<nav role="navigation">
<ul>
<li><a href="index.htm" class="currentButton">NEWS</a></li>
<li><a href="aboutbava.htm" class="button">ABOUT BAVA</a></li>
<li><a href="history.htm" class="button">HISTORY</a></li>
<li><a href="biographies.htm" class="button">BIOGRAPHIES</a></li>
<li><a href="calendar.htm" class="button">CALENDAR</a></li>
<li><a href="contact.htm" class="button">CONTACT</a></li>
<li><a href="links.htm" class="button">LINKS</a></li>
<li><a href="donate.htm" class="button">DONATE</a></li>
</ul>
</nav>
</header>
<main role="main">
<article role="article">
</article>
</main>
<aside class="subscribe" role="complimentary">
<form action="demo_form.asp" method="GET" novalidate>
<fieldset>
<legend>Sign Up for Email Updates</legend>
<div>
<label for="email">Email</label>
<input type="email" tabindex="9" id="email" name="email" placeholder="kevin.d.rankin@gmail.com">
</div>
<div>
<button type="submit" tabindex="10">Submit</button>
</div>
</fieldset>
</form>
<div id="berlinClock"></div>
</aside>
<footer role="contentinfo">
<div class="adr cf">
<span class="street-address">15 N. College Ave</span>,
<span class="locality">Newton</span>,
<span class="region">NC</span>
<span class="postal-code">28658</span> |
<span class="country-name vh">U.S.A.</span>
<a class="tel" href="tel:+8284663410" tabindex="11" accesskey="p" title="phone">(828) 466-3410</a>
</div>
</footer>
Pictures
Here are my pictures.
Logo:
emailBomb:
slideshow pictures 1-6:
Answer: There are a few things here that definitely can be improved/optimized. I'm not sure what your levels of support are but...
I'd do something along these lines. This doesn't take mobile devices into account, and it was just thrown together really quickly, but you can take from it what you need and run with it.
Hopefully you can see how to simplify things from it, if not, just let me know and I'll update with more comments and explanation.
EDIT: I've updated the code.
removed the HTML5 elements so you don't have to include an HTML5 shiv (for browsers that lack HTML5 element support).
I placed the nav/banner in the header and applied a few wrappers for more styling ability.
I added some spacing for the header and applied a few comments
I put classes on the elements to make the styling more scalable. You can use them interchangeably, whereas only a single ID is allowed. This helps with scalability on larger applications.
I must get back to work, I'll update with more comments later today.
I'll also put up a few links for you to read through so you can get a better understanding of some of these methods.
<!DOCTYPE html>
<html class="no-js" lang="en-us">
<head>
<link rel="dns-prefetch" href="//analytics.google.com/"><!-- if you're using google analytics, prefetch the url -->
<meta charset="utf-8">
<title>Title</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="description" content="">
<meta name="viewport" content="width=device-width,initial-scale=1">
<style>
body{
border-top:5px solid red; /* red line on top */
background-color:black;
margin:0;
padding:0;
font-size:15px;
}
.masthead{
border-top:5px solid yellow; /* yellow line below top red line */
}
/** navigation
-----------------------
navigation container */
.navbar{
margin:10px auto 0; /* center navigation by giving left/right margin "auto" */
}
/* navigation individual button/link styles */
.navbar a{
background-color:black;
border-radius: 0 0 25px 25px; /* only round bottom edges - this is the shorthand code for broder-radius */
padding:10px 15px;
text-decoration:none; /* remove underline */
text-transform:uppercase; /* make all text uppercase */
font-weight:700; /* bold the font */
}
.banner{
background-color:white;
width:875px;
height:900px;
margin:0 auto;
padding: 5px 5px 0;
border-left:5px solid blue;
}
/* hide fieldset default values */
fieldset{
border:0;
padding:0;
margin:0;
}
/* use this to style fieldset */
div.fieldset{
padding:0 10px;
}
.colophon{
border-left:5px solid blue;
width:875px;
padding:0 5px; /* add padding to make border and left/right edges line up with main area */
margin:0 auto;
text-align:center;
background-color:white;
}
.logo{
position:absolute;
left:50%;
margin-left:-480px;
}
.subscribe{
position:absolute;
right:0;
bottom:0;
color:white;
text-align:center;
}
/* use this to style email heading, hide legend with hidden class*/
.subscribe h2{
font-size:18px;
margin-bottom:350px;/* give emailBomb image space under the legend */
}
/* placed emailBomb image in CSS as it's not really "content" it's more of a support image */
.subscribe label:before{
content:''; /* :before must have "content" in order to display */
background-image:url('emailBomb.png');
background-size:100% 100%; /* stretch the image to fit area */
background-position:center;
height:300px; /*actual image size*/
width:150px; /* actual image site*/
position:absolute;
top:50px; /* give room for h2 heading */
}
/* visually hidden helper utility class */
.vh {
border: 0;
clip: rect(0000);
height: 1px;
overflow: hidden;
position: absolute;
width: 1px;
margin: -1px;
padding: 0;
}
/* clearfix helper utility class to clear floats */
.cf:before, .cf:after {
content:' ';
display: table;
}
.cf:after {
clear: both
}
.cf {
zoom: 1
}
</style>
<script>
var i = 0; var path = new Array();
path[0] = "1.jpg";
path[1] = "2.jpg";
path[2] = "3.jpg";
path[3] = "4.jpg";
path[4] = "5.jpg";
path[5] = "6.jpg";
function swapImage()
{
document.slide.src = path[i];
if(i < path.length - 1) i++;
else i = 0;
setTimeout("swapImage()",5500);
}
function GetClock(){
tzOffset = +2;
d = new Date();
dx = d.toGMTString();
dx = dx.substr(0,dx.length -3);
d.setTime(Date.parse(dx))
d.setHours(d.getHours() + tzOffset);
nday = d.getDay();
nmonth = d.getMonth();
ndate = d.getDate();
nyear = d.getYear();
nhour = d.getHours();
nmin = d.getMinutes();
if(nyear<1000) nyear=nyear+1900;
if(nmin <= 9){nmin="0"+nmin}
document.getElementById('berlinClock').innerHTML=""+(nmonth+1)+"/"+ndate+"/"+nyear+" "+nhour+":"+nmin+"";
setTimeout("GetClock()", 1000);
}
window.onload = function() { swapImage(); GetClock(); };
</script>
<body>
<div role="banner" class="masthead">
<div class="inner-masthead">
<a href="/" accesskey="h" tabindex="1" title="Return Home">
<img class="logo" src="BAVA.png" width="150" height="150" alt="Logo">
<h1 class="vh">First Level Heading</h1>
</a>
<div role="section" class="banner">
<div class="slider">
<img name="slide" width="875" height="350"><!-- I'm not sure this is the best approach, but I'll just use what you have -->
</div>
<div role="navigation" class="navbar"><!-- removed unordered list as accessibility for screen readers is difficult with ul > li -->
<a href="#" accesskey="n" title="News" tabindex="1" class="active">News</a>
<a href="#" accesskey="a" title="About me" tabindex="2">About BAVA</a>
<a href="#" accesskey="h" title="History" tabindex="3">History</a>
<a href="#" accesskey="b" title="Biographies" tabindex="4">Biographies</a>
<a href="#" accesskey="h" title="Calendar" tabindex="5">Calender</a>
<a href="#" accesskey="c" title="Contact" tabindex="6">Contact</a>
<a href="#" accesskey="l" title="Links" tabindex="7">Links</a>
<a href="#" accesskey="d" title="Donate" tabindex="8">Donate</a>
</div>
</div>
</div>
</div>
<div role="main" class="main">
<div class="inner-main">
</div>
</div>
<div class="subscribe" role="section">
<form action="demo_form.asp" method="GET" novalidate>
<div class="fieldset">
<fieldset>
<legend class="vh">Sign Up for Email Updates</legend> <!-- visually hide as <legend> is difficult to style -->
<h2>Sign Up for Email Updates</h2>
<div>
<label for="email">Email</label>
<input type="email" tabindex="9" id="email" name="email" placeholder="email@address.com">
</div>
<div>
<button type="submit" tabindex="10">Submit</button>
</div>
</fieldset>
</div>
</form>
<div id="berlinClock"></div>
</div>
<div role="contentinfo" class="colophon">
<div class="inner-colophon">
<div class="adr cf"><!-- use hcard format for better semantics, applied clearfix class to remove floats -->
<span class="street-address">15 N. College Ave</span>,
<!--div class="extended-address"></div--> <!-- use this for apartment number if required -->
<span class="locality">Newton</span>,
<span class="region">NC</span>
<span class="postal-code">28658</span> |
<span class="country-name vh">U.S.A.</span>
<a class="tel" href="tel:+18284663410" tabindex="11" accesskey="p" title="phone">(828) 466-3410</a>
</div>
</div>
</div> | {
"domain": "codereview.stackexchange",
"id": 8149,
"tags": "html, css"
} |
What exactly is meant by the expression "differentially expressed"? | Question: As far as I've seen, this expression is almost always used in relation to gene expression profiling. Unfortunately, I have no background in this area. Can someone please explain this in layman terms?
Answer: Although each cell of your body essentially contains the same DNA and the same genes, cells in different tissues express (turn on) different genes under different conditions. Measuring differential gene expression involves looking at the amount of expression for a gene (or set of genes) in two contrasting scenarios. The contrast could be across different times, different tissues, different conditions, different related species, etc.
When you say a gene is "differentially expressed", this is very context-specific. The phrase means nothing by itself, and it is only useful in terms of the applicable contrast. For example, the statement "gene A is differentially expressed" is uninformative, while the statement "gene A is differentially expressed in liver and muscle tissue" is descriptive--it tells you that liver tissues and muscle tissues have a significantly different level of gene A products. Often the terms "up-regulated" and "down-regulated" are also used to provide additional detail. In the context of the previous example, the statement "gene A is up-regulated in muscle tissues" tells you that the level of gene A products is higher in muscle tissues than in liver tissues. | {
"domain": "biology.stackexchange",
"id": 346,
"tags": "genetics, gene-expression"
} |
How do I obtain the value for the Fermi Coupling Constant? | Question: I have been given an equation, without an explanation on the constant included.
The equation is the following: $$\Gamma= \frac{7\pi}{24} G^2_{\text{Fermi}}$$
When looking for a value for the Fermi Coupling Constant, $G_{\text{Fermi}}$ , I only seem to find values for $G_{\text{Fermi}}/{(\hbar c)^3}$.
To obtain the value for $G_{\text{Fermi}}$,must I just multiply the value I have for $G_{\text{Fermi}}/{(\hbar c)^3}$ by $(\hbar c)^3$ (the product of plancks constant with the speed of light)?
If so, why is the value of the constant shown in this form , divided by another constant, instead of just being presented as itself?
Answer: Most particle theory texts use "natural units", in which units are such that $\hbar c$ is numerically unity, although it has dimensions $ML^2T^{-2}$ of energy. $G_0= G_F/(\hbar c)^3$ is usually quoted in units of $GeV^{-2}$. This tells us that $G_F$ itself has units of $GeV$.
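Numerically, the conversion the question asks about is indeed just a multiplication by $(\hbar c)^3$. A quick sketch, where the constants are approximate PDG/CODATA values assumed purely for illustration, and the resulting unit is $\mathrm{GeV\,fm^3}$:

```python
# Recover a dimensionful G_F from the tabulated G_F/(hbar*c)^3 by
# multiplying by (hbar*c)^3. Constants are approximate PDG/CODATA values,
# assumed here for illustration only.
GF_over_hbarc3 = 1.1663787e-5   # G_F/(hbar*c)^3 in GeV^-2
hbar_c = 0.1973269804           # hbar*c in GeV*fm

GF = GF_over_hbarc3 * hbar_c**3  # G_F in GeV*fm^3
print(GF)  # roughly 8.96e-8 GeV*fm^3
```

Quoting $G_F/(\hbar c)^3$ is convenient precisely because in natural units ($\hbar = c = 1$) that combination is the number that enters formulas directly.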
You do not tell us what $\Gamma$ is. Is it a decay width? If so, it has dimensions of energy and you probably need $G_F$, and not $G_F^2$ or $G_0$. Anyway knowing the dimensions of $\Gamma$ will tell you. | {
"domain": "physics.stackexchange",
"id": 96041,
"tags": "dimensional-analysis, physical-constants, electroweak"
} |
What does being an "elementary substance" mean? | Question:
Which of the following isn't an elementary substance?
Ozone
Sulfur
diamond
quartz
I don't get what elementary substance means.
By seeing the options and the word elementary, I guess it means something like pure or obtained in its original form.
Answer: From LATINTOS:
An elementary substance is a pure chemical substance that consists of atoms belonging to a single chemical element.
Now you should be able to solve your question. | {
"domain": "chemistry.stackexchange",
"id": 2005,
"tags": "elements"
} |
2D Frequency Domain Convolution Using FFT (Convolution Theorem) | Question: In the time domain I have an image matrix ($256x256$) and a gaussian blur kernel ($5x5$). I've used FFT within Matlab to convert both the image and kernel to the frequency domain as zero padded $260x260$ matrices ($N + M -1 = 256 + 5 -1 = 260$)
I then multiply the image matrix by the kernel and use IFFT to convert the result back to the time domain. When I try to display the result, it is just junk and doesn't resemble the original image with a gaussian blur like it should.
Here is the Matlab code I am using, where image = $256x256$ and kernel = $5x5$:
imagefreqdomain = fft2(image,260,260)
kernfreqdomain = fft2(kernel,260,260)
filtimagefreqdomain = imagefreqdomain * kernfreqdomain
filtimage = ifft2(filtimagefreqdomain)
What am I doing wrong? Thanks
Answer: Similar to your question Applying 2D Image Convolution in Frequency Domain with Replicate Border Conditions in MATLAB the issue is what happens when you multiply in 2D in frequency domain.
So few remarks about that:
Multiplying in frequency domain for discrete signals with finite support is equivalent to applying convolution in spatial domain under the assumption of cyclic / periodic boundary conditions.
In image processing we usually define per kernel the anchor pixel of the kernel. Usually it is marked as (0, 0) of the kernel. We also mostly set it as the center pixel (In Image Processing most kernels have odd length). When we pad the kernel to the size of the image we usually add zeros on its bottom and right. Which means its (0, 0) isn't aligned with the image.
The misalignment with the circular boundary extension yields the following for the naïve code:
clear();
close('all');
gaussianKernelStd = 0.5;
gaussianKernelRadius = ceil(5 * gaussianKernelStd);
mI = im2double(imread('cameraman.tif'));
mI = mI(:, :, 1);
numRows = size(mI, 1);
numCols = size(mI, 2);
vX = [-gaussianKernelRadius:gaussianKernelRadius].';
vK = exp(-(vX .* vX) ./ (2 * gaussianKernelStd * gaussianKernelStd));
mK = vK * vK.';
mK = mK ./ sum(mK(:)); %<! The Gaussian Kernel
mIFiltered = ifft2(fft2(mI) .* fft2(mK, numRows, numCols), 'symmetric');
figure();
imshow([mI, mIFiltered]);
As you can see, at the top and left the filtered image (the right one) has artifacts which are the result of the circular extension and the misalignment. How to fix it?
Well, pad the image correctly and pad the kernel with a circular extension.
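As a sanity check on the theorem itself, separate from the border handling, here is a tiny pure-Python 1D sketch (all names made up): pad both signals to length $N+M-1$, multiply the spectra element-wise (`.*` in MATLAB, not the matrix product `*`), and invert; the result matches direct linear convolution.

```python
import cmath

def dft(x):
    # Naive O(N^2) DFT, sufficient for a demonstration
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def linear_conv(x, h):
    # Direct linear convolution, for comparison
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

x = [1.0, 2.0, 3.0, 4.0]
h = [1.0, -1.0]
L = len(x) + len(h) - 1            # N + M - 1, as in the question

xp = x + [0.0] * (L - len(x))      # zero-pad both to the same length
hp = h + [0.0] * (L - len(h))

Y = [a * b for a, b in zip(dft(xp), dft(hp))]   # element-wise product of spectra
y = [v.real for v in idft(Y)]

print([round(v, 6) for v in y])    # [1.0, 1.0, 1.0, 1.0, -4.0]
print(linear_conv(x, h))           # [1.0, 1.0, 1.0, 1.0, -4.0]
```

With enough zero-padding the implied circular convolution equals the linear one; without it (or with a misaligned kernel anchor) you get exactly the wrap-around artifacts shown above.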
I showed it in Applying 2D Image Convolution in Frequency Domain with Replicate Border Conditions in MATLAB. | {
"domain": "dsp.stackexchange",
"id": 9451,
"tags": "matlab, convolution, image-processing"
} |
What is the Fourier Transform integral equation for a 1D signal and how can it be expanded into 2D, 3D, and 4D? | Question: I know that $$G(u) = \int_{-\infty}^{\infty} g(x)e^{-i2\pi xu}\,dx$$
is the 1D Fourier Transform (FT), the 2D FT is $$G(u,v) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x,y)e^{-i2\pi(xu+yv)}\,dx\,dy,$$ and the 3D FT is $$G(u,v,w) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x,y,z)e^{-i2\pi(xu+yv+wz)}\,dx\,dy\,dz,$$ but I am not sure whether these expressions are in fact Fourier transform integral equations for a 1D signal, expanded into 2D and 3D. If they are, then I assume 4D would just be $$ G(u,v,w,\omega) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x,y,z,t)e^{-i2\pi \cdot (xu+yv+wz+\omega t)}\,dx\,dy\,dz\,dt$$
Please help clarify the above. Thank you. I'm new to signal processing.
Answer: Welcome to the DSP Stack Exchange! For the future, please type out all of your equations using $\LaTeX$!
As to your question, I will do my best to answer, but I'm not sure if I fully understand your question. But, yes, these are exact expressions for the multi-dimensional extensions of the 1D Fourier transform. For an $N$-D Fourier transform, there needs to be $N$ data domain variables, or dimensions, and $N$ Fourier domain variables/dimensions.
If, for example, you wanted to apply a 1D transform to 2D data you would have something like
$$ G(u,y) = \int_{-\infty}^{\infty}g(x,y)e^{-j2\pi(xu)}dx $$
If you wanted to apply a 2D transform to 1D data, for example, you would get
$$ G(u,v) = \int_{-\infty}^{\infty}g(x)e^{-j2\pi(xu+yv)}dxdy $$
In this case, in the $y$ direction, $g(x)$ is treated as a delta function, so $G(u,v)$ along the $v$ dimension would be a constant.
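To make the expansion concrete, here is a small pure-Python sketch (illustrative only) of the discrete analogue: because the kernel $e^{-j2\pi(xu+yv)}$ factors into $e^{-j2\pi xu}e^{-j2\pi yv}$, the 2D transform is just a 1D transform along each axis in turn, and the same separability carries over to 3D and 4D.

```python
import cmath

def dft1(x):
    # 1D DFT of a sequence
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def dft2(g):
    # 2D DFT via separability: 1D DFT along rows, then along columns
    rows = [dft1(r) for r in g]
    cols = [dft1(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

g = [[1.0, 2.0], [3.0, 4.0]]
G = dft2(g)
print(G[0][0].real)  # 10.0 -- the DC term is the sum of all samples
```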
So, to answer what I think is your question, yes, the equations you provided are the exact equations for the multi-dimensional Fourier transforms. | {
"domain": "dsp.stackexchange",
"id": 12427,
"tags": "signal-analysis, fourier-transform, 3d, 2d, 1d"
} |
What is the physical significance of the value of wave amplitude being $1$? | Question: In Feynman Lectures Vol.1, it is written that:
First of all, we know that the new way of representing the world in quantum mechanics - the new framework - is to give an amplitude for every event that can occur, and if the event involves the reception of one particle, then we can give the amplitude to find that one particle at different places and at different times. The probability of finding the particle is then proportional to the absolute square of the amplitude. In general , the amplitude to find a particle in different places at different times varies with position and time.
In some special case it can be that the amplitude varies sinusoidally in space and time like $e^{i(\omega t-\vec k\cdot r)},$ where $\vec r$ is the vector position from the origin. (Do not forget that these amplitudes are complex numbers, not real numbers.) Such an amplitude varies according to a definite frequency $\omega$ and wave number $\vec k$...
But when $\omega t=\vec k\cdot\vec r$, the value of the amplitude becomes $1$, which is a real number. What does this mean? What is the physical significance of the value of the wave amplitude being $1$? Does this mean that $\omega t$ cannot be equal to $\vec k\cdot\vec r$?
Answer: This should be a comment, but it is too long.
The amplitude, $Ψ$ as :
$e^{i(\omega t-\vec k\cdot r)},$
The observable is the complex conjugate squared $Ψ$ , which gives the probability, the only measurable quantity.
When
$\omega t=\vec k\cdot\vec r$
$Ψ$ becomes $e^{i(0)} = 1$, still a complex number.
It is $Ψ^*Ψ$ that becomes equal to 1, a real number. When the probability becomes 1 it means that you have a fixed (not time- or space-dependent) measurement at that value of the variables.
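A quick numerical illustration with made-up values for $\omega$, $k$, $t$ and $r$: the amplitude itself is always a complex number of unit modulus, so $\Psi^*\Psi = 1$ whatever the phase, including the phase $0$:

```python
import cmath

# Arbitrary illustrative values; any choice gives |psi|^2 = 1 for a plane wave
omega, k, t, r = 2.0, 3.0, 1.5, 0.7
psi = cmath.exp(1j * (omega * t - k * r))    # the complex amplitude
prob = (psi.conjugate() * psi).real          # the observable |psi|^2

print(prob)  # 1.0 (up to rounding), independent of t and r
```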
The expression $e^{i(\omega t-\vec k\cdot r)}$ defines a plane wave; $t$ and $\vec r$ are independent variables in the expression. I cannot see how the equality is physically relevant. | {
"domain": "physics.stackexchange",
"id": 66271,
"tags": "quantum-mechanics, waves, wavefunction, frequency, probability"
} |
Boundary conditions for dielectric medium | Question: Suppose we have a dielectric medium with $\epsilon > \epsilon_0$ adjacent to vacuum ($\epsilon_0$). I want to find the boundary conditions in this case. Therefore I do:
$$\nabla \cdot \vec D(\vec r)=\rho_{\text{free}}(\vec r)$$
By integrating this and also using Gauss's law you can find:
$$D_{n_{\text{outside}}} - D_{n_{\text{inside}}}= \eta_{\text{free}}(\vec r)$$
Then if we have no free charges: $$D_{n_{\text{outside}}} - D_{n_{\text{inside}}}= 0$$ and from here we find out: $$ D_{n_{\text{outside}}}=D_{n_{\text{inside}}}$$
where in all the above equations: $D_{n_{\text{outside}}}$ is the normal component of the D-field outside the dielectric and $D_{n_{\text{inside}}}$ is the normal component inside the dielectric.
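As a numerical illustration of what the condition implies (the relative permittivity and the field value below are made-up numbers): with no free surface charge, $D_n$ is continuous, so the normal $E$ must jump by the factor $\epsilon_r$:

```python
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
eps_r = 4.0               # assumed relative permittivity of the dielectric

E_out_n = 2.0                     # normal E just outside (vacuum side), V/m
D_out_n = eps0 * E_out_n          # D = eps0 * E in vacuum
D_in_n = D_out_n                  # boundary condition: D_n continuous (no free charge)
E_in_n = D_in_n / (eps_r * eps0)  # normal E just inside the dielectric

print(E_in_n)  # 0.5 -- E_n drops by the factor eps_r even though D_n is continuous
```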
I have 2 questions about two things:
What is the free charge density here? The charge inside the dielectric or a charge distribution (characterized by a charge density function) in front of the dielectric?
What do we mean when we say that we have no free charge density? That we lack charges inside the dielectric or outside of it?
If you lack free charges, which are the source of the D-field how can you speak about the components of the D-field, when you have no D-field in the first place, since you lack the source that generates this field?
Answer:
Not sure what you mean by the charge inside the dielectric? The dielectric will be net neutral. The free charge density here refers to the density of mobile conduction charges on the interface between the media. It excludes the (static) charges associated with the induced dipoles in the media.
That there are no mobile conduction charges at the interface and so $\eta_{\rm free}=0$.
The D-field begins and ends on mobile conduction charges that are not on the interface. They could be anywhere else. It is not necessary to have a charge present at the position where you measure the D-field. It just means that the D-field must have zero divergence at that point. | {
"domain": "physics.stackexchange",
"id": 84626,
"tags": "electromagnetism, boundary-conditions"
} |
Roscore on android device | Question:
Hi,
Is it possible to run roscore on an android device, and make other nodes on a ubuntu laptop connect to it?
Thanks in advance.
Originally posted by Gosu on ROS Answers with karma: 22 on 2013-03-26
Post score: 0
Answer:
Yes, it's possible to run the roscore on the Android device. You can start one programmatically using RosCore.newPublic() which will listen on the public interfaces. (The MasterChooser default implementation will start a private server, not accessible by the outside world).
Once you start your RosCore, you can set the ROS_MASTER_URI environment variable on your Ubuntu laptop to your Android device: export ROS_MASTER_URI=http://192.168.1.10:11311
Once that is set (in all your shells), you should be able to see topics using rostopic and the other basic tools.
Originally posted by jamuraa with karma: 218 on 2013-03-27
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Gosu on 2013-03-27:
Thank you very much for your answer. I will give it a try. | {
"domain": "robotics.stackexchange",
"id": 13553,
"tags": "roscore, android"
} |
How does mitochondrial uncoupling enhance performance in muscle cells? | Question: As far as I understand, in mitochondria, the citric acid cycle breaks down fatty acid or glucose to produce NADH and FADH2, which are then utilized by Complexes I through IV to generate a proton gradient in the intermembrane space. This gradient powers ATP synthase to ultimately generate ATP for the cell.
Uncoupling refers to alternative ways of decreasing the proton gradient without involving ATP synthase, such as through transmembrane fatty acids or Uncoupling Proteins (UCPs).
On the one hand, I recall learning that uncoupling is considered a disease condition since it disrupts the 'pull-effect' where an increased need for ATP essentially drives the entire reaction chain from the back, and because it generally dissipates energy into non-productive work. For instance, a textbook I recently read stated that ketones promote uncoupling in cardiac muscle cells, which is detrimental for obese individuals as the heart's performance capacity is impaired due to efficiency loss.
On the other hand, it is widely accepted nowadays that physical activity leads to significant mitochondrial uncoupling. This is surprising to me since one would assume that during a biologically relevant process, such as the adaptation of a muscle to movement, the adaptation would typically be 'beneficial'. Multiple authors write sentences like 'this optimizes the mitochondria's ability to synthesize ATP.'
It is evident that mitochondria deal better with a high nutrient supply when uncoupled, as fewer superoxides and peroxides are produced. But during physical exertion, an excess supply of nutrients is probably not the main issue?
I'm looking for insights to guide my understanding. How could the uncoupling of the electron transport chain I-IV from ATP synthase potentially enhance the performance capability of mitochondria and by this means the performance capability of skeletal muscle cells?
Answer: Skeletal muscles have multiple systems that increase the rate of energy generation in response to faster energy consumption during exercise.
| System | Rate | Sustainability |
| --- | --- | --- |
| Phosphagen | highest | 10-15s |
| Glycogen-lactic-acid | intermediate | 30-40s |
| Aerobic | lowest | indefinite |
The phosphagen system represents the immediate source of ATP; it is relied on for power surges. ATP itself and phosphocreatine, which can donate its phosphate radical to ADP, form this system.
When muscle activity is prolonged, the other two systems come into play. Which of the two being predominant depends on the intensity of the activity (rate of ATP utilisation).
The glycogen-lactic-acid system is used for strenuous exercise because it generates ATP faster than the aerobic system. It involves glycogenolysis, glycolysis and homolactic fermentation. Glycogenolysis is the breakdown of glycogen to glucose. Glycolysis is glucose oxidation into pyruvic acid by oxidative coenzymes with production of ATP (substrate-level phosphorylation). In homolactic fermentation, which happens under anaerobic conditions (oxygen insufficiency), instead of the reduced coenzymes (byproducts of glycolysis) being oxidised by the ETC in the mitochondria, pyruvic acid acts as the final hydrogen acceptor to oxidise them so they can be used for another cycle of glycolysis. This produces lactic acid, which is a strong acid that causes fatigue, as will be explained later. Krebs' cycle is not part of the glycogen-lactic-acid system because there is no pyruvic acid to oxidatively decarboxylate into the acetyl moiety of acetyl-CoA (active acetate).
The aerobic system involves glycogenolysis, glycolysis, Krebs' cycle and the ETC. The Krebs' cycle substrate is the active acetate radical that is oxidatively decarboxylated completely by oxidative coenzymes, with substrate-level phosphorylation. The ETC requires oxygen supply since oxygen is the final electron acceptor that drives the movement of hydrogen from the reduced coenzymes and its splitting to electrons and protons. (Oxygen is the second most electronegative element in the periodic table.) The movement of electrons (redox reactions) generates energy that is used to pump protons in the matrix across the inner mitochondrial membrane into the intermembrane space. This pumping continues until a proton back-pressure (proton motive force or proton gradient) is established. Once established, ATP synthase inhibits further proton pumping and stops the activity of the ETC. The pumped protons then diffuse through ATP synthase (chemiosmosis) and the energy that was used to pump them before is now used to phosphorylate ADP to ATP (oxidative phosphorylation). ATP synthase is sometimes also called complex V, though it doesn't transport electrons, only facilitates proton diffusion. ATP synthase is activated by a high ADP concentration in the matrix, which discharges the proton back-pressure and consequently activates the ETC. ADP is produced from ATP hydrolysis and increases at low energy states, when energy consumption is faster than energy production. The aerobic system is thus relied on for moderate-intensity prolonged exercise, where oxygen consumption by the ETC is balanced by oxygen supply and where the long metabolic pathway still provides ATP at a sufficient rate. The reduction of oxygen by electrons can form either
water if the electrons are sufficient $\ce{O2 + 4e- + 4H+ \to 2H2O}$ (The protons are taken up from the matrix).
superoxide ion if the electrons are insufficient to form water $\ce{O2 + e- \to [O2]-}$.
Superoxide ions form hydrogen peroxide $\ce{2[O2]- + 2H+ \to H2O2 + O2}$. Hydrogen peroxide, as with all ROS, has a high affinity for electrons and pulls them from other compounds, damaging them, to combine with the electrons and with protons, forming water $\ce{H2O2 + 2e- + 2H+ \to 2H2O}$. This causes peroxidation (oxidative damage) to lipids, proteins and DNA. Cells therefore have to split hydrogen peroxide to water and oxygen $\ce{2H2O2 \to 2H2O + O2}$ by the catalase enzyme. Hydrogen peroxide can also be useful for the same reason. It is used by immunity cells to kill pathogens by damaging their cell membranes, which are made up of phospholipids, causing their lysis.
The muscle burns through the phosphagen system first and then operates through the aerobic system indefinitely, as long as there is sufficient oxygen and glucose supply. Any extra energy needed in a short amount of time is supplied by the glycogen-lactic-acid system. However, the latter limits activity time (causes fatigue) as will be explained later.
Muscles are divided into fascicles which are groups of muscle fibers (cells). Following are the most important organelles in a muscle fiber.
A sarcolemma (plasmalemma)
A sarcoplasm (cytoplasm)
Many mitochondria
Multiple nuclei
Many sarcoendoplasmic reticula (smooth endoplasmic reticula) that surround the most important organelles, myofibrils
Many myofibrils
Myofibrils are formed of many myofilaments (actin and myosin) organized into contractile (motor) units known as sarcomeres.
Muscles contract when a threshold electrical stimulus depolarises the sarcolemma, resulting in an action potential that propagates to T-tubules. At the T-tubules, the reversal of relative charge across the membrane changes the shape of dihydropyridine receptors so they now allow for the influx of $\ce{Ca^{2+}}$. Consequently, ryanodine receptors on the terminal cisternae of sarcoendoplasmic reticula open and more $\ce{Ca^{2+}}$ diffuse into the sarcoplasm ($\ce{Ca^{2+}}$-induced $\ce{Ca^{2+}}$ release). This increases sarcoplasmic $\ce{Ca^{2+}}$ concentration. $\ce{Ca^{2+}}$ binds to troponin, which changes shape. Since troponin is bound to tropomyosin, changing the shape of the former displaces the latter. Tropomyosin consequently exposes myosin-binding sites on actin so myosin can now bind, bend, detach and return to its original conformation to repeat the process (cross-bridge cycling) until $\ce{Ca^{2+}}$ is actively pumped back to where it came from, causing tropomyosin to block the active sites again (relaxation). The magnitude of the response of the muscle to stimulation (whether acetylcholine or depolarisation can elicit an action potential) depends on
strength of the stimulus (amount of acetylcholine or change in membrane potential).
duration of the stimulus (time for which acetylcholine or charge persist).
frequency of stimulation or rise of stimulus intensity (since acetylcholine and charge accumulate).
Muscle fatigue as we know it can be attributed to different mechanisms that happen at different stations.
Neural (central) fatigue
Metabolic (peripheral) fatigue
Neural fatigue happens because the muscle stops listening, specifically because the frequency of electric stimulation is not high enough to maintain contraction. It can happen because of exhaustion of neurotransmitter (acetylcholine) stores for example. That is to say, the rate of release and breakdown of acetylcholine by acetylcholinesterase is greater than the rate of its formation and packaging into vesicles. Neural fatigue is what limits activity time in well-trained athletes. It's not associated with pain. Athletes work on increasing the capacity of their neurons to keep firing at a high frequency by training.
Metabolic fatigue on the other hand happens because the muscle gives up, specifically because of
$\ce{Ca^{2+}}$ not doing its job being too little or ineffective.
substrate shortage.
Lactic acid, being highly acidic, dissociates readily into protons and lactate. The protons, carrying charge of the same sign as $\ce{Ca^{2+}}$, displace it, decreasing the sensitivity of the muscle to $\ce{Ca^{2+}}$. The binding of protons can also deform proteins by breaking the bonds that form their secondary, tertiary and quaternary structures. Amino acid radical groups pick up the surplus of positive charge and the interactions between them change. Thus, lactic acid causes inhibition of active $\ce{Ca^{2+}}$ pumping out of the sarcoplasm by deforming pumps, so a high $\ce{Ca^{2+}}$ concentration is maintained, allowing for more forceful contraction. However, it's a matter of time until the insensitivity becomes too much. Lactic acid (which would have been an active acetate inside mitochondria in the aerobic system) also causes hydrolytic damage to myofilaments. Any damage to a muscle is called microtrauma. Repetitive strain on myofilaments, connective tissue, tendons and bones by lengthening (eccentric contraction) can also cause microtrauma. Microtrauma results in inflammation and muscle soreness. Muscle soreness (pain) aims to stop the physical activity, which can cause more microtrauma to the muscle. Muscles can adapt to metabolic fatigue and microtrauma by increasing
substrate stores (glycogen)
$\ce{Ca^{2+}}$ stores (sarcoendoplasmic reticula)
number of myofibrils (hypertrophy)
number of connective tissue fibers, to increase stiffness
diameter of nerve fiber, to increase speed of conduction
amount of glycolytic enzymes
as seen in pale (also known as type II or fast-twitch) muscle fibers, to decrease the likelihood of reinjury.
Thus, muscles don't necessarily need the ETC to generate energy and in fact, rely on the fermentation pathway for faster and more forceful activities because of its higher rate of ATP production despite its fatiguing effects which they can adapt to.
Now, we can look at the effects of uncoupling. Uncouplers work on the aerobic pathway by uncoupling oxidation from phosphorylation; they make it possible for oxidation to take place without phosphorylation. They cause proton discharge through the inner mitochondrial membrane before the threshold back-pressure required for protons to flow through ATP synthase is established. As a result, the energy from proton diffusion across the membrane is released as heat instead. This mechanism is utilized, for example, in brown adipose tissue, where the transmembrane protein thermogenin (uncoupling protein 1) raises body temperature (to maintain body temperature in cold environments) and causes fat loss by metabolising fatty acids in stored fat, which enter Krebs' cycle as acetyl groups; the resulting reduced coenzymes are oxidised by the ETC, with no oxidative phosphorylation. That's why this type of adipose tissue is more vascular than white adipose tissue: to maintain the high rate of oxidation, which requires oxygen. The increase in heat can
decrease viscosity of myofilaments so they slide more easily over each other.
increase enzyme activity since more enzyme-substrate complexes form as enzymes and substrates move faster with greater chances to collide which increases the rate of ATP formation and utilisation.
relax blood vessel smooth muscles, resulting in vasodilatation and increased blood flow, with consequent increases in oxygen supply and lactic acid removal during exercise. Whether lactic acid causes fatigue depends on the difference between the rate of its production and the rate of its flushing during activity. At rest, the remaining lactic acid is flushed or oxidised locally. We maintain the vasodilatation and high ventilation rate at rest to oxidise the remaining lactic acid and myoglobin and to form ATP by the ETC to rephosphorylate creatine to phosphocreatine (oxygen debt). The flushed lactic acid is mostly reoxidised into pyruvic acid and then glucose in the liver by reversal of glycolysis, also known as gluconeogenesis (Cori cycle). The remaining lactic acid is reoxidised locally to pyruvic acid, which enters Krebs' cycle instead, but the majority is flushed. The increased blood supply also helps in healing from microtrauma at rest, which ultimately stops soreness. Cold, on the other hand, treats soreness directly by decreasing blood flow carrying inflammatory cells that cause pain by secreting cytokines.
Besides increasing temperature, uncoupling causes
faster flow of electrons through the ETC since the sufficient proton back-pressure is never established so ATP synthase doesn't inhibit pumping of protons and electron transfer (electron transfer is continuous). As a result, oxygen is reduced rapidly to water instead of being reduced merely to hydrogen peroxide that can cause damage if catalase is saturated and can't split the excess to water and oxygen. | {
"domain": "biology.stackexchange",
"id": 12335,
"tags": "mitochondria, energy-metabolism"
} |
Intermittent subscription to a topic with/without timer possible? | Question:
Hello,
It is really important for me to know this before I start any coding. Here is what I want:
A continuous stream of data is being published as a topic. (At frequency >= 50Hz).
I want to read from this stream, and write the data to a text file when a parameter in the parameter server is set. Whenever I take readings from the stream, I want data to be written to the file at 50Hz (this frequency is important. Cannot reduce this).
I made a node which is dedicated to publishing the topic for this data. I then tried subscribing to it at different time intervals. IT DOES NOT WORK. I got numerous suggestions from you guys (for which I am really thankful), but I am still stuck with my problem. I basically want to tap into the topic whenever I want, read data and write it to a text file, and then leave reading the topic till the next time I feel like writing to the file again. Never thought it would be so hard.
Anyway, I tried using [ delays (sleep(n) commands) and spinOnce() ] combinations in while loops but that did not work as well. I have a question here:
--> Does that mean, that a subscriber in ROS HAS to CONTINUOUSLY listen to the topic being published? What if I don't want to? What if I want to listen to a topic say, only when 'flag' is set to 1?
I then stumbled upon timers. Timers basically take a callback function, and execute that function every T seconds (where T is the time period). A clever way to publish a topic is then to use the publish() command in the timer callback. An even more clever way to exploit this structure, is to declare the callback function as a member function of your node class [ I got the above two 'clever ideas' from people who helped me out in previous related questions asked by me. Thanks!]. I have a question here:
--> What if my timer has time period say, 0.2 seconds, and my callback function requires 0.3 seconds to execute? What happens then? Is there some sort of queue where timer callbacks are lined up?
I am sorry if I am repeating stuff from previous questions, or if my questions sound too basic. It just bothers me that ROS, being such an amazing platform, does not seem to have a simple solution to intermittent/'at will' subscription to a topic!
All help is greatly appreciated. Thank you for the patience to go through this post!
Originally posted by Nishant on ROS Answers with karma: 143 on 2012-07-10
Post score: 0
Answer:
50Hz is not much. There is no problem with being subscribed all the time as long as the messages you send around are not too big.
I wouldn't use a ROS parameter for enabling/disabling recording because you would need to poll whether the parameter is set all the time. That would require doing XMLRPC calls to the master all the time, which can be slow. Instead, I would use a ROS service.
You shouldn't need to deal with any timings or whatever in the subscriber. Just subscribe to a topic and write to your file whenever your callback is executed. Then the rate is controlled by the publisher. If you need a specific rate, use a throttle node (see topic_tools) as mentioned in your previous question.
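The enable/disable pattern (keep the node alive and create or shut down the subscriber from service callbacks) can be sketched with a toy pub/sub in plain Python. The Topic class and all names here are made up for illustration; real code would use ros::Subscriber and Subscriber::shutdown() in roscpp, or Subscriber.unregister() in rospy:

```python
# Toy stand-in for the pattern: nothing is recorded unless a subscriber exists,
# and the service callbacks create/destroy that subscriber.
class Topic:
    def __init__(self):
        self.callbacks = []

    def publish(self, msg):
        for cb in list(self.callbacks):
            cb(msg)

    def subscribe(self, cb):
        self.callbacks.append(cb)
        return lambda: self.callbacks.remove(cb)  # "shutdown" handle

recorded = []
load_topic = Topic()
shutdown = None

def enable_recording():          # service callback: start a new subscriber
    global shutdown
    if shutdown is None:
        shutdown = load_topic.subscribe(recorded.append)

def disable_recording():         # service callback: shut the subscriber down
    global shutdown
    if shutdown is not None:
        shutdown()
        shutdown = None

load_topic.publish(1.0)          # not recording yet
enable_recording()
load_topic.publish(2.0)          # recorded
load_topic.publish(3.0)          # recorded
disable_recording()
load_topic.publish(4.0)          # ignored again
print(recorded)                  # [2.0, 3.0]
```

The key design point is that the 50 Hz rate stays entirely with the publisher; the recorder only decides whether a callback is attached at all.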
Originally posted by Lorenz with karma: 22731 on 2012-07-10
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Nishant on 2012-07-10:
So the data I am reading is from a load sensor. I want it to record load values at 50Hz, only when the robot touches it. The thing is, I have another node running the code to execute robot motions. So I want the load sensor to record sort of "in the background".
Comment by Lorenz on 2012-07-10:
Can't you just extend your node that reads the sensor to publish on a separate topic only if the robot touches?
Comment by Nishant on 2012-07-10:
So with a service, I can tell it to start recording when the robot motions start, and in the service callback, it will start looping through the serial read commands for the load sensor. But then how do I tell it to stop after the robot motions are over?
Comment by Nishant on 2012-07-10:
You are right, and I am thinking of doing exactly that. But how does the node that reads the sensor know that the robot is about to touch? The robot motion program is a different node altogether. So each node has to know what the other's state is right? That's why I thought of parameters initially.
Comment by Lorenz on 2012-07-10:
Just do as @ipso said. Just call Subscriber::shutdown(). When the enable service gets called, instantiate a new subscriber and when the disable service is called, just shut id down. Don't use spinOnce but spin in the main loop, the rest is managed by the callbacks.
Comment by Nishant on 2012-07-10:
Ohh!! So I should make TWO services huh! And I was thinking of having a variable 'int16 resp' which flips between 1 and 0 as enable/disable toggle. But this is where I was getting a mind block. I was thinking how to return from a callback once an enable has been detected. Solution: TWO services! :)
Comment by Nishant on 2012-07-10:
Thanks Lorenz! This has been one of the most helpful posts. :) | {
"domain": "robotics.stackexchange",
"id": 10134,
"tags": "ros, timer, publisher"
} |
what are Structural recursion, primitive recursion, recursion combinator and recursion principles? | Question: Recently, I encountered terminologies such as primitive recursion and recursion combinator. One of the sources is here link
I googled and read some material, but I am missing the point of them. I know that recursion occurs when a function appears within its own definition. While $\lambda$-calculus cannot express such recursive definitions directly, it can use combinators to implement recursion.
So, what are structural recursion, primitive recursion, recursion combinators and recursion principles? How is each related to the others?
I'm hoping someone can explain these clearly, so I can go in the right direction; right now I am struggling to figure out their "points".
Thanks in advance!
Answer: The recursion combinator you mention seems to be the recursor associated to an inductive (or recursive) data type. In the paper this seems to be the type describing the syntax of lambda terms. Here, I'll take lists as a simpler recursive type.
Note that the "lists of naturals type" can be intuitively described as the "least" type admitting these constructors:
$$
\begin{array}{l}
nil : list \\
cons : nat \to list \to list
\end{array}
$$
Recursive types as the one above have an associated induction principle. For instance, if we wanted to prove a property on all lists $p(l)$, it would suffice to prove
the base case $p(nil)$, and
the inductive case $p(l) \implies p(cons\ n\ l)$ for any $n,l$.
If we had more constructors, we would have more base or inductive cases, accordingly.
Similarly, we can define a function $f : list \to A$ by induction. That is, to define $f(l)$ on all lists, all we have to do is define
what is the result in base case, i.e. $f(nil) = a : A$
provided we already defined $f(l)$, we need to define $f(cons\ n\ l)$ for all $n,l$.
Note that step 2 amounts to defining a function $g : nat\to A \to A$, which takes $n:nat$ and $f(l):A$ and produces $g(n)(f(l)) = f(cons\ n\ l)$.
We can generalize this by crafting a combinator that given $a,g$ produces $f$ defined as above. This is called the (primitive) recursor.
$$
\begin{array}{l}
rec : A \times (nat \to A \to A) \to (list \to A) \\
rec(a,g)(nil) = a \\
rec(a,g)(cons\ n\ l) = g(n)(rec(a,g)(l))
\end{array}
$$
Usually this is called fold_right or foldr in functional programming languages.
Note how, roughly, $a$ replaces $nil$, and $g$ replaces $cons$. Indeed, in the general case, the recursor takes one argument for each constructor of the recursively defined type at hand.
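As a concrete illustration (my own sketch, not part of the original answer), the list recursor can be written in a few lines of Python, modelling $nil$ as `None` and $cons\ n\ l$ as the pair `(n, l)`:

```python
def rec(a, g):
    """Primitive recursor for lists: returns f with
    f(nil) = a  and  f(cons n l) = g(n)(f(l))."""
    def f(lst):
        if lst is None:        # nil case: the base value a replaces nil
            return a
        n, rest = lst          # cons case: g replaces cons
        return g(n)(f(rest))
    return f

# Example: summing a list, i.e. foldr (+) 0
total = rec(0, lambda n: lambda acc: n + acc)
lst = (1, (2, (3, None)))      # cons 1 (cons 2 (cons 3 nil))
print(total(lst))  # 6
```

This is exactly `foldr`: the argument `a` is substituted for $nil$ and `g` for $cons$, just as described above.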
If you have a general fixed point combinator like Church's $Y$, you can easily encode the above. However, in many type theories, you don't have that luxury, since $Y$ causes the inconsistency of the related logic. Instead, for any recursive type you define, you get a restricted version of $Y$ which is the recursor: each type has its own recursion combinator. This ensures the termination of the calculus, which is important to ensure the consistency of the logic. | {
"domain": "cs.stackexchange",
"id": 9396,
"tags": "recursion, primitive-recursion, variable-binding"
} |
Applicability of Cardy's "doubling trick" to the 2D Ising Model | Question: In Section 11.2.2 of the book on Conformal Field Theory by di Francesco, Mathieu, and Senechal (page 417), the two point function on the Upper Half Plane is written as being equal to the four point function in the CFT on the full complex plane:
$$G_{s}(y_1, y_2, \rho) \equiv \langle \sigma(z_1, \bar{z}_1)\sigma(z_2, \bar{z}_2)\rangle_{UHP} = \langle \sigma(z_1)\sigma(z_2)\sigma(z_1^*)\sigma(z_2^*)\rangle$$
I have a number of questions about this:
Is this doubling trick even applicable to the 2D Ising Model?
When do operators factorize into their chiral and anti-chiral parts in general? It is clear that this should happen in free-field theories, but in interacting systems (or minimal models for that matter), when can this happen?
Also, the spin operator has conformal dimension $h = 1/16$, but this is presumably $\sigma(z)$. How does equating a two point function to a four point function respect conformal weights? Is it just because $\sigma(z_1)$ has half the conformal weight that $\sigma(z_1, \bar{z}_1)$ does?
Answer: The short answer is that the doubling trick applies to correlators at the level of conformal blocks. It is not true that the two point function is the "same" as the four point function. Rather, one must look at the equality in a formal way as emphasizing the equivalence at a block-by-block level.
Thanks to Prof. John Cardy for a discussion. | {
"domain": "physics.stackexchange",
"id": 28800,
"tags": "quantum-field-theory, statistical-mechanics, conformal-field-theory, ising-model, critical-phenomena"
} |
Doesn't over(/under)sampling an imbalanced dataset cause issues? | Question: I'm reading a lot about how to use different metrics specifically for imbalanced datasets (e.g. two classes present, but 80% of the data is one class) and how to tackle the issue of imbalanced datasets.
One trick is to oversample, so to take more (or even duplicate some) data belonging to the underrepresented class. I've tried this and did achieve better results (before my models would easily just predict a single class for everything, achieving 80% accuracy lol).
However, I was wondering, will this model work well with real-life data? One of the 'laws' of data science/machine learning is that your training data has to have the same/similar attributes as your live data you're intending to use your model on. However, by oversampling, I create a dataset that's 50% one class and 50% other, as opposed to the "natural", real-life-data having 80% of one class and 20% of the other.
So I guess the question in short is: Will oversampling my imbalanced dataset of 80/20 class distribution to 50/50 class distribution impact the usability of my model for real-life data? Why?
Answer: Yes, the classifier will expect the relative class frequencies in operation to be the same as those in the training set. This means that if you over-sample the minority class in the training set, the classifier is likely to over-predict that class in operational use.
To see why, it is best to consider probabilistic classifiers, where the decision is based on the posterior probability of class membership $p(C_i|x)$, which can be written using Bayes' rule as
$p(C_i|x) = \frac{p(x|C_i)p(C_i)}{p(x)}\qquad$ where $\qquad p(x) = \sum_j p(x|C_j)p(C_j)$,
so we can see that the decision depends on the prior probabilities of the classes, $p(C_i)$, so if the prior probabilities in the training set are different than those in operation, the operational performance of our classifier will be suboptimal, even if it is optimal for the training set conditions.
Some classifiers have a problem learning from imbalanced datasets, so one solution is to oversample the classes to ameliorate this bias in the classifier. There are two approaches. The first is to oversample by just the right amount to overcome this (usually unknown) bias and no more, but that is really difficult. The other approach is to balance the training set and then post-process the output to compensate for the difference in training set and operational priors. We take the output of the classifier trained on an oversampled dataset and multiply by the ratio of operational and training set prior probabilities,
$q_o(C_i|x) \propto p_t(x|C_i)p_t(C_i) \times \frac{p_o(C_i)}{p_t(C_i)} = p_t(x|C_i)p_o(C_i)$
Quantities with the o subscript relate to operational conditions and those with the t subscript relate to training set conditions. I have written this as $q_o(C_i|x)$ as it is an un-normalised probability, but it is straightforward to renormalise by dividing by the sum of $q_o(C_i|x)$ over all classes. For some problems it may be better to use cross-validation to choose the correction factor, rather than the theoretical value used here, as it depends on the bias in the classifier due to the imbalance.
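The correction step can be sketched in a few lines of Python. This is an illustration only, not code from the answer; the function name and all numbers are made up:

```python
def correct_priors(p_train, train_priors, op_priors):
    """q_o(C_i|x) is proportional to p_t(C_i|x) * p_o(C_i) / p_t(C_i),
    renormalised so the corrected posteriors sum to 1."""
    q = [p * (o / t) for p, t, o in zip(p_train, train_priors, op_priors)]
    s = sum(q)
    return [qi / s for qi in q]

train_priors = [0.5, 0.5]   # balanced (oversampled) training set
op_priors = [0.8, 0.2]      # real-life 80/20 class frequencies

p_balanced = [0.4, 0.6]     # balanced-set classifier favours the minority class
print(correct_priors(p_balanced, train_priors, op_priors))
# → roughly [0.727, 0.273]: after correction the majority class wins
```

Note how a borderline prediction for the minority class flips once the operational priors are restored, which is exactly the over-prediction effect described above.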
So in short, for imbalanced datasets, use a probabilistic classifier and oversample (or reweight) to get a balanced dataset, in order to overcome the bias a classifier may have for imbalanced datasets. Then post-process the output of the classifier so that it doesn't over-predict the minority class in operation. | {
"domain": "datascience.stackexchange",
"id": 9475,
"tags": "classification, class-imbalance, imbalanced-data"
} |
Move_base error: Local costmap doesn't work (robot unable to do obstacle avoidance) | Question:
Hi!
I am using Hydro on Ubuntu 12.04.
I have tried the turtlebot_simulation tutorials and all went well.
I am using Kobuki robot but I have changed the laser scanner input from the kinect to a laser.
The laser seems to be correct as I can see the laser in RVIZ.
However, when I launch move_base, the local costmap does not seem to be reading the laser and hence the robot fails to avoid obstacles that are not on the map.
The global costmap shows correctly, but there is no local costmap.
I am using 100% the same settings as in turtlebot_navigation package - the only difference is that I have a laser model instead of the Kinect.
I have tried the following reference, but it still does not work.
http://answers.ros.org/question/128496/navigation-stack-cant-avoid-obstacles-in-hydro/
Any ideas? Thanks!
Originally posted by jwang on ROS Answers with karma: 56 on 2015-01-29
Post score: 0
Original comments
Comment by David Lu on 2015-01-30:
Are you able to visualize the local costmap in Rviz?
Comment by jwang on 2015-01-30:
There is no local costmap in Rviz, only global costmap can been seen.
Comment by David Lu on 2015-01-30:
Change the local_costmap update_frequency parameter to 1.0 and try again.
Answer:
I figured out that after I change the min_obstacle_height of the laser from 0.25 to 0.0 in costmap_common_params.yaml, the obstacles can be shown in the costmap now. And the robot can do obstacle avoidance. Still thank you for your reply!
Originally posted by jwang with karma: 56 on 2015-01-30
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 20733,
"tags": "navigation, laser, ros-hydro, costmap, base-laser"
} |
Why does only the symmetric part of the matrix enter into the Euler Lagrange Equations of Motion | Question: Given the Lagrangian,
$$L = \frac{1}{2}\dot{q}_i M_{ij} \dot{q}_j + f(q,\dot{q},t)$$
where $M_{ij}$ is a non-degenerate matrix and $q,\dot{q}$ are generalised coordinates & velocities and summation over repeated indices is assumed. I came across a question which asks to show only the symmetric part of $M$ enters the Euler-Lagrange equations. My attempt was to simply calculate the E.O.M as being:
$$
\frac{1}{2}\frac{d}{dt}(M_{ij}\dot{q}_j+\dot{q}_jM_{ji}) +g(q,\dot{q},t)=0
$$
where $g$ depends on the derivatives of $f$.
So I was able to show this, however I am confused as to why only the symmetric part stays. If anyone could clarify my misunderstanding that would be great.
Answer: The symmetric part of $M$ is defined to be $M_S = (M + M^T)/2$.
Note that $\dot{q}_j M_{ji} = M_{ji}\dot{q}_j = M^T_{ij}\dot{q}_j$, so you can write your EOM as $$\frac{d}{dt}\big((M_{ij}+M^T_{ij})/2 \,\dot{q}_j\big) +\dots = \frac{d}{dt}(M_{S,ij} \dot{q}_j)+\dots$$
More fundamentally,you can split $M$ into its symmetric and antisymmetric part, $M = M_S + M_A$, where $M_S^T = M_S$ and $M_A^T = -M_A$. Since $\dot{q}_i\dot{q}_j =\dot{q}_j\dot{q}_i$ we can see that
$$\dot{q}_i M_{A,ij}\dot{q}_j = \dot{q}_j M_{A,ji}^T \dot{q}_i = -\dot{q}_i M_{A,ij} \dot{q}_j = 0$$
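A quick numerical check of these identities (my own illustration, not part of the original answer; the matrix and velocity values are arbitrary):

```python
def quad_form(M, v):
    """Compute v^T M v by explicit summation over indices."""
    n = len(v)
    return sum(v[i] * M[i][j] * v[j] for i in range(n) for j in range(n))

M = [[1.0, 4.0],
     [2.0, 3.0]]   # an arbitrary non-symmetric matrix
# symmetric and antisymmetric parts, M = M_S + M_A
M_S = [[(M[i][j] + M[j][i]) / 2 for j in range(2)] for i in range(2)]
M_A = [[(M[i][j] - M[j][i]) / 2 for j in range(2)] for i in range(2)]

qdot = [0.7, -1.3]
print(quad_form(M, qdot), quad_form(M_S, qdot))  # equal (up to rounding)
print(quad_form(M_A, qdot))                      # 0.0
```

The quadratic form built from the full matrix agrees with the one built from its symmetric part alone, and the antisymmetric part contributes exactly zero.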
So $\dot{q}^T M \dot{q} = \dot{q}^T (M_S + M_A) \dot{q} = \dot{q}^T M_S \dot{q} + 0 $. Only the symmetric part of $M$ actually contributes to the Lagrangian. | {
"domain": "physics.stackexchange",
"id": 48214,
"tags": "homework-and-exercises, lagrangian-formalism"
} |
Does wavelength-specific emissivity depend on temperature? | Question: The wavelength-specific emissivity $\epsilon_{\lambda}$ of a body is the ratio of the body's spectral radiance at the specific wavelength compared to that of the ideal blackbody. Does $\epsilon_{\lambda}$ depend on temperature?
If so, it would mean that an object with uniform $\epsilon_{\lambda}$ across all wavelength at one temperature might no longer have this property at another temperature. So am I right to say that the property of being a "grey body" is temperature dependent?
Answer: Yes, it depends on temperature :
$$
\begin{align}
\varepsilon _{\lambda }&={\frac {M_{\mathrm {e} ,\lambda }}{M_{\mathrm {e} ,\lambda }^{\circ }}}
\\&= {{\frac {\lambda ^{5} \left({e^{\frac {\mathrm {h} c}{\lambda \mathrm {k} T}}-1}\right)}{2\pi \mathrm {h} c^{2}}}}
\cdot {\frac {\partial M_{\mathrm {e} }}{\partial \lambda }}
\\&={{\frac {\varepsilon \sigma \lambda ^{5} \left({e^{\frac {\mathrm {h} c}{\lambda \mathrm {k} T}}-1}\right)}{2\pi \mathrm {h} c^{2}}}}
\cdot {\frac {\partial T^{4} }{\partial \lambda }}
\end{align}
$$
where $\varepsilon$ is the emissivity coefficient of the material surface and $\sigma$ is the Stefan–Boltzmann constant. So to calculate the wavelength-specific emissivity of a body, you need to know its temperature and its temperature distribution law with respect to emitted wavelength (the derivative part in the equation).
"domain": "physics.stackexchange",
"id": 64723,
"tags": "temperature, thermal-radiation, wavelength"
} |
MasterViewController | Question: I plan on including this work in a portfolio.
Will this code get me hired or laughed at?
More specifically:
How would you rate the general complexity of the code?
How bad does the code smell?
Is the VC too fat?
Glaring best practices mistakes?
Follow-up questions:
Can you expand on what you said about magic numbers? (Totally missed the bool/BOOL usually I'm better than that. The names w/2 are just temp renames for SO)
I have a lot of .h files because I tried to refactor somethings, like animation and URL methods, into helpers, presumably to increase readability. Did I go overboard?
In theory, the way the program is setup, the MasterVC should never be deallocated. VC's are just pushed on top, not more than one layer at a time. Is it still necessary to unsubscribe from notifications?
Lastly, on a scale of 1-10 what is your impression of the code overall? (from what you've seen and discounting the name and bool problem)
Details:
iOS: 6 (updating to 7 currently)
Xcode: 4.6
Tested: iPhone 4 device
ARC Enabled
MasterViewController.h
#import <UIKit/UIKit.h>
#import "SRAPI.h"
#import "SRChoiceBox.h"
#import "SRPostTopic.h"
#import "SRDetailViewController.h"
#import "SRCollapsibleCell.h"
#import "SRAnimationHelper.h"
#import "SROpenTokVideoHandler.h"
#import "SRObserveViewController.h"
@interface SRMasterViewController2 : UIViewController <UITableViewDelegate, UITableViewDataSource, SRChoiceBoxDelegate, SRPostTopicDelegate>
@property (weak, nonatomic) IBOutlet UITableView *topicsTableView;
@property (weak, nonatomic) IBOutlet UIView *postTopicContainer;
@property (weak, nonatomic) IBOutlet UILabel *statusLabel;
@property (strong, nonatomic) NSIndexPath *openCellIndex;
@property (strong, nonatomic) RKPaginator *paginator;
@property (strong, nonatomic) SROpenTokVideoHandler *openTokHandler;
@end
MasterViewController.m
#import "SRMasterViewController2.h"
#import "UIScrollView+SVPullToRefresh.h"
#import "UIScrollView+SVInfiniteScrolling.h"
#import "SRUrlHelper.h"
#import "SRNavBarHelper.h"
@interface SRMasterViewController2 ()
@property NSInteger offset;
@property NSInteger totalPages;
@property NSMutableArray *topicsArray;
@property bool isPaginatorLoading;
@end
@implementation SRMasterViewController2
- (void)viewDidLoad {
[super viewDidLoad];
[self configureTableView];
[self configureNavBar];
[self configurePostTopicContainer];
[SRAPI sharedInstance];
[self paginate];
self.openTokHandler = [SROpenTokVideoHandler new];
[self configureNotifications];
}
- (void)configureNotifications {
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(receiveNotifications:) name:kFetchNewTopicsAndReloadTableData object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(receiveNotifications:) name:kFetchRoomFromUrl object:nil];
}
- (void)receiveNotifications:(NSNotification *)notificaiton {
if ([notificaiton.name isEqualToString:kFetchRoomFromUrl]) {
NSURL *url = notificaiton.userInfo[@"url"];
[self fetchRoomWithUrl:url];
}
else if ([notificaiton.name isEqualToString:kFetchNewTopicsAndReloadTableData]) {
self.offset = 1;
[self.topicsTableView.infiniteScrollingView startAnimating];
[self paginate];
}
}
- (void)fetchRoomWithUrl:(NSURL *)url {
NSDictionary *dict = [SRUrlHelper parseQueryString:[url query]];
SRRoom *room = [[SRRoom alloc] init];
room.position = (NSString *)[url pathComponents][3];
room.topicId = [url pathComponents][2];
room.sessionId = dict[@"sessionID"];
[self performSegueWithIdentifier:@"showDetail2" sender:room];
}
- (void)configureNavBar {
//Button displays container for posting topics
UIBarButtonItem *rightPostTopicButton =
[SRNavBarHelper buttonForNavBarWithImage:[UIImage imageNamed:@"logo"]
highlightedImage:nil
selector:@selector(showPostTopicContainer)
target:self];
self.navigationItem.rightBarButtonItem = rightPostTopicButton;
//suffle button - Joins random room
UIBarButtonItem *leftShuffleButton =
[SRNavBarHelper buttonForNavBarWithImage:[UIImage imageNamed:@"shuffle.png"]
highlightedImage:[UIImage imageNamed:@"shufflePressed.png"]
selector:@selector(joinRandomRoom)
target:self];
self.navigationItem.leftBarButtonItem = leftShuffleButton;
}
- (void)joinRandomRoom {
NSMutableArray *activeTopics = [NSMutableArray new];
SRTopic *randomTopic = [SRTopic new];
SRRoom *randomRoom = [SRRoom new];
//find topics with people in them
for (SRTopic *topic in self.topicsArray) {
if ([topic.agreeDebaters integerValue] > 0 || [topic.disagreeDebaters integerValue] > 0) {
[activeTopics addObject:topic];
}
}
int numberOfActiveTopics = activeTopics.count;
if (numberOfActiveTopics > 0) {
//Put user into a random Active Room
int random = arc4random() % numberOfActiveTopics;
randomTopic = (SRTopic *)activeTopics[random];
if (randomTopic.agreeDebaters.intValue > randomTopic.disagreeDebaters.intValue) {
randomRoom.position = @"disagree";
}
else if (randomTopic.agreeDebaters.intValue < randomTopic.disagreeDebaters.intValue) {
randomRoom.position = @"agree";
}
else {
randomRoom.position = [self randomlyChooseAgreeDisagree];
}
}
else {
//No Active Rooms, put user in a random room
int random = arc4random() % self.topicsArray.count;
randomTopic = (SRTopic *)self.topicsArray[random];
randomRoom.position = [self randomlyChooseAgreeDisagree];
}
randomRoom.topicId = randomTopic.topicId;
[self performSegueWithIdentifier:@"showDetail2" sender:randomRoom];
}
- (NSString *)randomlyChooseAgreeDisagree {
int r = arc4random() % 2;
return (r == 0) ? @"agree" : @"disagree";
}
- (void)configureTableView {
//set offset for loading tabledata
self.offset = 1;
//add pull to refresh controls
UIRefreshControl *refreshControl = [[UIRefreshControl alloc] init];
[refreshControl addTarget:self action:@selector(refresh:) forControlEvents:UIControlEventValueChanged];
[self.topicsTableView addSubview:refreshControl];
//add infinite scrolling
[self addInfiniteScrolling:self.topicsTableView];
//close all cells
self.openCellIndex = nil;
//Smooth scrolling
self.topicsTableView.layer.shouldRasterize = YES;
self.topicsTableView.layer.rasterizationScale = [[UIScreen mainScreen] scale];
}
- (void)configurePostTopicContainer {
//configure container for posting topics
SRPostTopic *postTopic = [[SRPostTopic alloc]initWithFrame:CGRectMake(0, 0, 320, 133)];
[self.postTopicContainer addSubview:postTopic];
postTopic.delegate = self;
}
- (void)addInfiniteScrolling:(UITableView *)tableView {
[tableView addInfiniteScrollingWithActionHandler: ^(void) {
self.offset += 1;
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
[self paginate];
double delayInSeconds = 0.8;
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, delayInSeconds * NSEC_PER_SEC);
dispatch_after(popTime, dispatch_get_main_queue(), ^(void) {
[self.topicsTableView.infiniteScrollingView stopAnimating];
});
});
}];
//configure infinite scrolling style
self.topicsTableView.infiniteScrollingView.activityIndicatorViewStyle = UIActivityIndicatorViewStyleWhiteLarge;
}
- (void)refresh:(UIRefreshControl *)refreshControl {
self.offset = 1;
self.openCellIndex = nil;
//stop refresh after successful AJAX call for topics
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
[self paginate];
double delayInSeconds = 1;
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, delayInSeconds * NSEC_PER_SEC);
dispatch_after(popTime, dispatch_get_main_queue(), ^(void) {
[refreshControl endRefreshing];
});
});
}
- (void)paginate {
// Create weak reference to self to use within the paginators completion block
__weak typeof(self) weakSelf = self;
// Setup paginator
if (!self.paginator) {
self.paginator.perPage = 20;
NSString *requestString = [NSString stringWithFormat:@"?page=:currentPage&per_page=:perPage"];
self.paginator = [[RKObjectManager sharedManager] paginatorWithPathPattern:requestString];
[self.paginator setCompletionBlockWithSuccess: ^(RKPaginator *paginator, NSArray *objects, NSUInteger page) {
NSMutableArray *topicsArrayTemp = [objects mutableCopy];
weakSelf.isPaginatorLoading = NO;
if (weakSelf.offset == 1) {
[weakSelf replaceRowsInTableView:topicsArrayTemp];
}
else {
[weakSelf insertRowsInTableView:topicsArrayTemp];
}
[weakSelf.topicsTableView.infiniteScrollingView stopAnimating];
} failure: ^(RKPaginator *paginator, NSError *error) {
weakSelf.isPaginatorLoading = NO;
[weakSelf.topicsTableView.infiniteScrollingView stopAnimating];
[weakSelf.self noResults];
}];
}
if (!weakSelf.isPaginatorLoading) {
weakSelf.isPaginatorLoading = YES;
[self.paginator loadPage:self.offset];
}
}
- (void)noResults {
double delayInSeconds = 0.6;
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, delayInSeconds * NSEC_PER_SEC);
dispatch_after(popTime, dispatch_get_main_queue(), ^(void) {
[self performSegueWithIdentifier:@"noResults" sender:nil];
});
}
#pragma mark - Posting a new topic
//open/close container for posting topics
- (void)showPostTopicContainer {
[self.view endEditing:YES];
CGRect newTableViewFrame = self.topicsTableView.frame;
CGRect newPostTopicFrame = self.postTopicContainer.frame;
float duration, alpha;
if ([self isPostTopicContainerOpen]) {
newTableViewFrame.origin.y -= 133;
newPostTopicFrame.origin.y -= 133;
duration = .3;
alpha = 0;
}
else {
newTableViewFrame.origin.y += 133;
newPostTopicFrame.origin.y += 133;
duration = .4;
alpha = 1;
}
[UIView animateWithDuration:duration delay:0 options:UIViewAnimationOptionCurveEaseInOut animations: ^{
self.postTopicContainer.alpha = alpha;
self.topicsTableView.frame = newTableViewFrame;
self.postTopicContainer.frame = newPostTopicFrame;
} completion:nil];
}
- (BOOL)isPostTopicContainerOpen {
return (self.postTopicContainer.frame.origin.y < 0) ? NO : YES;
}
//update fading status UILabel at the bottom of the screen
- (void)statusUpdate:(NSString *)message {
self.statusLabel.text = message;
[self.statusLabel.layer addAnimation:[SRAnimationHelper fadeOfSRMasterViewStatusLabel] forKey:nil];
}
//Post a new Topic to the Server
- (void)postTopicButtonPressed:(NSString *)contents {
//set up params
NSDictionary *newTopic = @{ @"topic":contents };
//send new topic posting
[[RKObjectManager sharedManager] postObject:nil path:@"topics/new" parameters:newTopic success: ^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) {
if ([self isPostTopicContainerOpen]) {
//close post box if it's open
[self showPostTopicContainer];
}
[self statusUpdate:@"Topic Posted!"];
} failure: ^(RKObjectRequestOperation *operation, NSError *error) {
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Oops"
message:@"We weren't able to post your shout. Try again soon!"
delegate:nil
cancelButtonTitle:@"Sure"
otherButtonTitles:nil, nil];
[alert show];
}];
}
//Delegate for SRChoiceBox - user chooses Agree/Disagree/Observe
- (void)positionWasChoosen:(NSString *)choice topicId:(NSNumber *)topicId {
SRRoom *room = [[SRRoom alloc] init];
room.position = choice;
room.topicId = topicId;
if ([choice isEqualToString:@"observe"]) {
[self performSegueWithIdentifier:@"showObserve" sender:room];
}
else {
[self performSegueWithIdentifier:@"showDetail2" sender:room];
}
}
- (void)segueToRoomWithTopicID:(NSNumber *)topicId andPosition:(NSString *)choice {
SRRoom *room = [[SRRoom alloc] init];
room.position = choice;
room.topicId = topicId;
if ([choice isEqualToString:@"observe"]) {
[self performSegueWithIdentifier:@"showObserve" sender:room];
}
else {
[self performSegueWithIdentifier:@"delete" sender:nil];
}
}
- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender {
//close Post Topic Container
if ([self isPostTopicContainerOpen]) {
[self showPostTopicContainer];
}
if ([[segue identifier] isEqualToString:@"showDetail2"] || [[segue identifier] isEqualToString:@"showObserve"]) {
if (self.openTokHandler) {
[self.openTokHandler safetlyCloseSession];
}
[[segue destinationViewController] setOpenTokHandler:self.openTokHandler];
[[segue destinationViewController] setRoom:sender];
}
sender = nil;
}
#pragma mark - UITABLEVIEW
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
return 1;
}
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
return [self.topicsArray count];
}
- (void)insertRowsInTableView:(NSMutableArray *)topics {
if (topics.count < 1) {
[self noNewResults];
return;
}
NSMutableArray *temp = [NSMutableArray new];
int lastRowNumber = [self.topicsTableView numberOfRowsInSection:0] - 1;
for (SRTopic *topic in topics) {
if (![self.topicsArray containsObject:topic]) {
[self.topicsArray addObject:topic];
NSIndexPath *ip = [NSIndexPath indexPathForRow:lastRowNumber inSection:0];
[temp addObject:ip];
++lastRowNumber;
}
}
[self.topicsTableView beginUpdates];
[self.topicsTableView insertRowsAtIndexPaths:temp
withRowAnimation:UITableViewRowAnimationTop];
[self.topicsTableView endUpdates];
if (temp.count == 0) {
[self noNewResults];
}
}
- (void)noNewResults {
int lastRowNumber = [self.topicsTableView numberOfRowsInSection:0] - 1;
[self statusUpdate:@"No New Topics. Check Back Soon!"];
[self.topicsTableView scrollToRowAtIndexPath:[NSIndexPath indexPathForRow:lastRowNumber - 6 inSection:0] atScrollPosition:UITableViewScrollPositionTop animated:YES];
self.offset--;
}
- (void)replaceRowsInTableView:(NSMutableArray *)topics {
self.topicsArray = topics;
[UIView animateWithDuration:.3 delay:.5 options:UIViewAnimationOptionCurveEaseInOut animations: ^{
self.topicsTableView.layer.opacity = 0;
} completion: ^(BOOL finished) {
self.topicsTableView.layer.opacity = 1;
[[self.topicsTableView layer] addAnimation:[SRAnimationHelper tableViewReloadDataAnimation] forKey:@"UITableViewReloadDataAnimationKey"];
[self.topicsTableView reloadData];
}];
}
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
NSString *CellIdentifier2 = @"SRCollapsibleCellClosed";
SRCollapsibleCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier2];
if (cell == nil) {
cell = [[SRCollapsibleCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier2];
}
SRTopic *topic = [self.topicsArray objectAtIndex:indexPath.row];
[cell updateWithTopic:topic];
if ([self isCellOpen:indexPath]) {
CGAffineTransform transformation = CGAffineTransformMakeRotation(M_PI / 2);
cell.arrow.transform = transformation;
if (![self hasChoiceBox:cell]) {
[self insertChoiceBox:cell atIndex:indexPath];
}
}
else {
CGAffineTransform transformation = CGAffineTransformMakeRotation(0);
cell.arrow.transform = transformation;
}
return cell;
}
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
if ([self isCellOpen:indexPath]) {
[self closeCellAtIndexPath:indexPath];
}
else {
NSIndexPath *openCell = self.openCellIndex;
NSIndexPath *newOpenCell = indexPath;
[self closeCellAtIndexPath:openCell];
[self openCellAtIndexPath:newOpenCell];
}
[tableView beginUpdates];
[tableView endUpdates];
[tableView deselectRowAtIndexPath:indexPath animated:NO];
}
- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath {
if ([indexPath isEqual:self.openCellIndex]) {
return 217.0;
}
else {
return 63.0;
}
}
- (void)rotateCellArrowAtIndexPath:(NSIndexPath *)indexPath willOpen:(bool)willOpen animated:(bool)animated {
// Change Arrow orientation
SRCollapsibleCell *cell = (SRCollapsibleCell *)[self.topicsTableView cellForRowAtIndexPath:indexPath];
CGAffineTransform transformation;
if (willOpen) {
transformation = CGAffineTransformMakeRotation(M_PI / 2);
}
else {
transformation = CGAffineTransformMakeRotation(0);
}
if (animated) {
[UIView animateWithDuration:.2 delay:0 options:UIViewAnimationOptionCurveLinear animations: ^{
cell.arrow.transform = transformation;
} completion:nil];
}
else {
cell.arrow.transform = transformation;
}
}
- (BOOL)isCellOpen:(NSIndexPath *)indexPath {
return [indexPath isEqual:self.openCellIndex];
}
- (void)closeCellAtIndexPath:(NSIndexPath *)indexPath {
[self rotateCellArrowAtIndexPath:indexPath willOpen:NO animated:YES];
[self removeSRChoiceBoxFromCellAtIndexPath:indexPath];
self.openCellIndex = nil;
}
- (void)openCellAtIndexPath:(NSIndexPath *)indexPath {
[self rotateCellArrowAtIndexPath:indexPath willOpen:YES animated:YES];
SRCollapsibleCell *cell = (SRCollapsibleCell *)[self.topicsTableView cellForRowAtIndexPath:indexPath];
[self insertChoiceBox:cell atIndex:indexPath];
self.openCellIndex = indexPath;
}
- (void)removeSRChoiceBoxFromCellAtIndexPath:(NSIndexPath *)indexPath {
SRCollapsibleCell *cell = (SRCollapsibleCell *)[self.topicsTableView cellForRowAtIndexPath:indexPath];
for (id subview in cell.SRCollapsibleCellContent.subviews) {
if ([subview isKindOfClass:[SRChoiceBox class]]) {
[subview removeFromSuperview];
}
}
}
- (void)insertChoiceBox:(SRCollapsibleCell *)cell atIndex:(NSIndexPath *)indexPath {
SRChoiceBox *newBox = [[SRChoiceBox alloc] initWithFrame:CGRectMake(0, 0, 310, 141)];
SRTopic *topic = [self.topicsArray objectAtIndex:indexPath.row];
[newBox updateWithSRTopic:topic];
newBox.delegate = self;
[cell.SRCollapsibleCellContent addSubview:newBox];
}
- (bool)hasChoiceBox:(SRCollapsibleCell *)cell {
for (UIView *subview in cell.SRCollapsibleCellContent.subviews) {
if ([subview isKindOfClass:[SRChoiceBox class]]) {
return true;
}
}
return false;
}
@end
Answer: A few things I would ask if I saw this as an interviewer (from a very quick read, and obviously not having seen it run):
why use a UIViewController and not a UITableViewController (refresh control property is free then)
the rasterisation on table view shouldn't be needed for smooth scrolling, you have other problems if that is required
there are a few things which I wouldn't like to see and would ask about: magic numbers, names with a 2 at the end, inconsistencies (bool vs BOOL etc)
why all the imports in the .h?
why aren't the notification observers removed?
Follow-up responses:
makes sense
was just more about practice of importing .h files in a .h (meaning any class that needs to import your class knows about all those imported classes)
It's just good practice I guess, but I didn't realise this was a permanent view controller
Overall impressions are good (would depend a bit on the level of developer the job is for): the code is reasonably complex, uses 3rd party libraries, has some fairly advanced functions like dealing with layers / transforms.
The real test though would be how well you can explain it during the interview, e.g. if you sound like its just a cut and paste job then the impression is lessened, but if you can explain your choices, pitfalls of the approach, anything you tried which you discard and why, and are able to talk about what you have learnt and where you would make improvements next time then you'd be looking pretty good I think. | {
"domain": "codereview.stackexchange",
"id": 4947,
"tags": "objective-c, interview-questions, ios"
} |
TensorFlow - Resume training in middle of an epoch? | Question: I have a general question regarding TensorFlow's saver function.
The saver class allows us to save a session via:
saver.save(sess, "checkpoints.ckpt")
And allows us to restore the session:
saver.restore(sess, tf.train.latest_checkpoint("checkpoints.ckpt"))
Inside the TensorFlow documentation, there is example code (with an epoch loop and restore added):
# Create a saver.
saver = tf.train.Saver(...variables...)
# Launch the graph and train, saving the model every 1,000 steps.
sess = tf.Session()
saver.restore(sess, tf.train.latest_checkpoint("checkpoints.ckpt"))
for epoch in xrange(25):
for step in xrange(1000000):
sess.run(..training_op..)
if step % 1000 == 0:
# Append the step number to the checkpoint name:
saver.save(sess, 'my-model', global_step=step)
The problem is that if we stop the training loop at epoch=15 and execute the script again, it starts at epoch=0 even though the model has already been trained up to epoch=15.
Is there a way to resume from epoch=15?
Answer: The network doesn't store its training progress with respect to training data - this is not part of its state, because at any point you could decide to change what data set to feed it. You could maybe modify it so that it knew about the training data and progress, stored in some tensor somewhere, but that would be unusual. So, in order to do this, you will need to save and make use of additional data outside of the TensorFlow framework.
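One lightweight way to keep that extra data is a small side file saved next to the checkpoint. A minimal sketch (the helper names and the file name are illustrative, not TensorFlow API):

```python
import os
import pickle

STATE_FILE = "train_state.pkl"  # hypothetical companion file to the checkpoint

def save_state(epoch, step, path=STATE_FILE):
    """Persist the training progress alongside the TensorFlow checkpoint."""
    with open(path, "wb") as f:
        pickle.dump({"epoch": epoch, "step": step}, f)

def load_state(path=STATE_FILE):
    """Return (epoch, step) to resume from, or (0, 0) if no state was saved."""
    if not os.path.exists(path):
        return 0, 0
    with open(path, "rb") as f:
        state = pickle.load(f)
    return state["epoch"], state["step"]
```

Calling `save_state` wherever `saver.save` is called, and seeding the two loops from `load_state` at start-up, would let the run resume mid-epoch.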
Probably the simplest thing to do is add the epoch number to the filename. You are already adding the current step within the epoch, so just add in the epoch number multiplied by the steps per epoch:
saver.save(sess, 'my-model', global_step=epoch*1000000+step)
When you load the file, you can parse the filename to discover what epoch and step you were on and use those as the start point for the xrange functions. To make this easier to re-start from any given checkpoint, you could use argparse to allow your script to take the name of the checkpoint file you want to use.
In brief, it might look like this:
# Near top of script
import argparse
import re
# Before main logic
parser = argparse.ArgumentParser()
parser.add_argument('checkpoint')
args = parser.parse_args()
start_epoch = 0
start_step = 0
if args.checkpoint:
saver.restore(sess, tf.train.latest_checkpoint(args.checkpoint))
found_num = re.search(r'\d+', args.checkpoint)
if found_num:
checkpoint_id = int(found_num.group(0))
start_epoch = checkpoint_id // 1000000
start_step = checkpoint_id % 1000000
# Change to xrange:
for epoch in xrange(start_epoch, 25):
for step in xrange(start_step, 1000000):
sess.run(..training_op..) # etc
# At end of epoch loop, you need to re-set steps:
start_step = 0
You may want to reduce the number of checkpoints you are creating - as it stands you would have 25,000 checkpoint files generated by your code.
Another option would be use a single checkpoint file, and to save and restore a Python pickle of a simple dict containing the state at the time you made the checkpoint, with a similar name. | {
"domain": "datascience.stackexchange",
"id": 1863,
"tags": "tensorflow"
} |
Are photons exceptional in the case of mass? | Question: Photons have no mass, yet in relativity anything that moves at the speed of light should have infinite mass. By that reasoning, photons ought to have infinite mass too, so why are photons an exceptional case?
Answer: A photon has no rest mass. According to Einstein, anything that has nonzero rest mass would have infinite relativistic mass at the speed of light.
$m' = m\gamma$. As the velocity tends to $c$, the factor $\gamma$ tends to infinity, so $m'$ would be infinite for any nonzero rest mass $m$. For photons, however, the rest mass is zero, so at the speed of light $m' = 0 \cdot \infty$, an indeterminate form, which is consistent with a finite (rather than infinite) relativistic mass. | {
"domain": "physics.stackexchange",
"id": 33121,
"tags": "photons, mass"
} |
When is the Journal of the ACM issued? | Question: Is there a specific time that the Journal of the ACM is released? Monthly, quarterly, etc? Or does it depend on how many significant papers have been gathered to warrant a new issue? I couldn't find anything specific on their website on this point.
Answer: Judging from their archive, it seems that since 1994 the Journal of the ACM is issued 6 times a year, so roughly every other month. Until 2006 it was published on odd months. From 2007 on, the schedule seems to be somewhat random, but there are still 6 issues a year (though sometimes one of them is published on the wrong year, as in 2009 and 2010). | {
"domain": "cstheory.stackexchange",
"id": 3818,
"tags": "journals"
} |
Why does this one move faster? | Question: Consider a 2 body system as shown:
Consider the floor to be absolutely smooth and the coefficient of friction for the contact between $m_1$ and $m_2$ to be $\mu$. Now suppose I apply a force $F$ that causes the system to move, and that force $F$ is applied on the upper block ($m_1$).
Then, why does it ($m_1$) move faster than $m_2$? Why does it have a greater acceleration?
Answer: Considering $m_1$ and $m_2$ as a system, there is a net horizontal force $F$ acting on them so their centre of mass must accelerate with acceleration
$\displaystyle a_0 = \frac F {m_1+m_2}$
This is true regardless of whether or not $m_1$ moves relative to $m_2$.
The frictional force which opposes $m_1$ moving relative to $m_2$ has a maximum value of $\mu m_1 g$. If $F - \mu m_1 g \le m_1a_0$ then $m_1$ will not move relative to $m_2$ and both blocks will accelerate with acceleration $a_0$. Substituting the expression for $a_0$ above, we see that this condition becomes
$\displaystyle F - \mu m_1 g \le F \frac {m_1} {m_1+m_2}
\\ \displaystyle \Rightarrow F \frac {m_2} {m_1+m_2} \le \mu m_1 g
\\ \displaystyle \Rightarrow F \le \mu g (m_1+m_2) \frac {m_1}{m_2}$
In this case, the frictional force is
$\displaystyle F \frac {m_2} {m_1+m_2}$
and we can see that (by Newton's Third Law) this acts on $m_2$ to accelerate it with acceleration $a_0$ too. In effect, the force $F$ is divided between $m_1$ and $m_2$ in proportion to their masses, so that they both have the same acceleration.
On the other hand, if
$\displaystyle F > \mu g (m_1+m_2) \frac {m_1}{m_2}$
then the net force on $m_1$ is $F - \mu m_1 g$ so $m_1$ accelerate with acceleration
$\displaystyle a_1 = \frac F {m_1} - \mu g$
and the force on $m_2$ is $\mu m_1 g$ so $m_2$ accelerates with acceleratiion
$\displaystyle a_2 = \mu g \frac {m_1}{m_2}$
and we have
$\displaystyle a_1 > \mu g \frac {m_1+m_2}{m_2} - \mu g
\\ \displaystyle \Rightarrow a_1 > \mu g \frac {m_1}{m_2}
\\ \displaystyle \Rightarrow a_1 > a_2$
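The two regimes derived above are easy to check numerically. A quick sketch (the sample masses and forces used to exercise it are arbitrary):

```python
def block_accelerations(F, m1, m2, mu, g=9.81):
    """Accelerations of m1 (the pushed top block) and m2 (the bottom block).

    The floor is frictionless; mu is the coefficient of friction between
    the two blocks.
    """
    F_threshold = mu * g * (m1 + m2) * m1 / m2
    if F <= F_threshold:
        # No slipping: both blocks share the centre-of-mass acceleration a0.
        a0 = F / (m1 + m2)
        return a0, a0
    # Slipping: kinetic friction mu*m1*g is the only horizontal force on m2.
    a1 = F / m1 - mu * g
    a2 = mu * g * m1 / m2
    return a1, a2
```

Below the threshold the two accelerations come out equal; above it, $a_1 > a_2$, as derived.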
Also note that the acceleration of the centre of mass of the system is the weighted sum of $a_1$ and $a_2$ which is
$\displaystyle \frac {m_1a_1+m_2a_2} {m_1+m_2} = \frac {(F - \mu m_1 g) + \mu m_1 g} {m_1+m_2} = \frac {F} {m_1+m_2} = a_0$
as we expect. | {
"domain": "physics.stackexchange",
"id": 71304,
"tags": "newtonian-mechanics, forces, acceleration, friction, free-body-diagram"
} |
How to prove the language as "Recursive"? | Question: How to prove the statement
"If the strings of a language $L$ can be enumerated in lexicographically(alphabetic) order, then the language is Recursive but not context free" ?
Basically, my point is: how can the strings of $a^nb^nc^n$ be lexicographically enumerated, given that $n$ ranges over infinitely many values?
Answer: Your confusion may stem from the interpretation of "lexicographically". It's common to take this to mean "by length, and then for strings of the same length, by alphabetic order". If this is the case, then an enumeration of your language would be: $\epsilon, abc, aabbcc, aaabbbccc, \dotsc$.
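As a concrete sketch, here is an enumerator for the $a^nb^nc^n$ language from the question in this length-then-alphabetic ("shortlex") order; since each string in the language has a distinct length $3n$, shortlex order is simply increasing $n$:

```python
def enumerate_anbncn(limit):
    """Yield the first `limit` strings of {a^n b^n c^n : n >= 0} in shortlex order."""
    for n in range(limit):
        yield "a" * n + "b" * n + "c" * n
```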
If your language was $\{a^ib^jc^k\mid i,j,k\ge 0\}$ then the enumeration would be
$$
\epsilon, a, b, c, aa, ab, ac, ba, bb, bc, ca, cb, cc, aaa, aab, aac, \dotsc
$$ | {
"domain": "cs.stackexchange",
"id": 8688,
"tags": "computability, turing-machines"
} |
Why do robots need rangefinders while animals don't? | Question: Will vision-based systems eventually prevail over rangefinder-based systems, given that the most successful autonomous agents in the world, like animals, primarily use vision to navigate?
Answer: Animals and robots both need to understand something about the 3D structure of the world in order to thrive. Because it's so important, animals have evolved a huge number of strategies to estimate depth based on camera-like projective sensors (eyes). Many make use of binocular disparity -- the fact that the distance between the same scene point in two different views is inversely proportional to depth. A mantis moves its head sideways to do this, a dragonfly makes very precise forward motion, birds move their heads forward and backwards, we use the fact that our eyes are widely spaced on our heads and have overlapping fields of view. Stereo vision is a computer vision technique that mimics this.
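For the binocular case, that inverse relationship is the standard rectified pinhole-stereo formula $Z = fB/d$. A minimal sketch (the parameter names are mine):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo depth Z = f * B / d for a rectified pinhole camera pair.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two cameras in metres
    disparity_px -- horizontal shift of the scene point between the views
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity corresponds to a point at infinity")
    return focal_px * baseline_m / disparity_px
```

Doubling the disparity halves the estimated depth, which is why nearby objects are far easier to range accurately than distant ones.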
However disparity is not the only trick we use, there are lots of others including proprioceptive signals from the muscles in our eyes (focus and vergence), observing how things in the world move as we move (optical flow), our knowledge about the world (the tree looks small, we know trees are big, therefore it must be far away), haziness, texture density, occlusion (thing A blocks thing B, therefore A must be closer). Each technique (or cue) is imperfect, but mostly in a different way, and each operates across a different range of distances. Our brain stitches (fuses) all that together to give a result that is mostly pretty robust.
Check out the Wikipedia page for more details. I like this (old) article:
James E. Cutting (Cornell University, Ithaca, New York). "How the eye measures reality and virtual reality". Behavior Research Methods, Instruments, & Computers, 1997, 29(1), 27-36,
which gives a good insight into this issue.
There are now deep-learnt networks now that can take a monocular (single) image and output estimated depth. We can never know for sure but my theory is that they are encoding some of the cues that we use for depth estimation such as texture and apparent size. | {
"domain": "robotics.stackexchange",
"id": 1825,
"tags": "computer-vision, rangefinder"
} |
Product of all ints in array besides that at index i | Question: I'm looking for a better implementation of the algorithm.
/*
* You have an ordered array X of n integers. Find the array M containing
* elements where Mi is the product of all integers in X except for Xi.
* You may not use division. You can use extra memory.
*/
#include <iostream>
#include <stack>
#include <queue>
#include <vector>
/*!
* Class used to generate sequences with helper function
*/
template <class T>
class sequence : public std::iterator<std::forward_iterator_tag, T>
{
T val;
public:
sequence(T init) : val(init) {}
T operator *() { return val; }
sequence &operator++() { ++val; return *this; }
bool operator!=(sequence const &other) { return val != other.val; }
};
template <class T>
sequence<T> gen_seq(T const &val) {
return sequence<T>(val);
}
static const int N = 3;
std::stack<int> stk;
std::queue<int> que;
int main(int argc, char *argv[]) {
std::vector<int> seq(gen_seq(1), gen_seq(N + 1));
for (int x = 0, y = N - 1; x < N && y >= 0; ++x, --y) {
if (x == 0 && y == N - 1) {
stk.push(1);
que.push(1);
} else {
stk.push(stk.top() * seq[x - 1]);
que.push(que.back() * seq[y + 1]);
}
}
for (int x = 0; x < N; ++x) {
std::cout << (stk.top() * que.front()) << std::endl;
stk.pop();
que.pop();
}
}
Answer: First, some comments on the existing code:
/*
* You have an ordered array X of n integers. Find the array M containing
* elements where Mi is the product of all integers in X except for Xi.
* You may not use division. You can use extra memory
*/
You don't have anything called X in the code, or generate M, so this comment could be better
static const int N = 3;
std::stack<int> stk;
std::queue<int> que;
Why use globals? Also, try to name variables by what they do or represent, rather than their type.
int main(int argc, char *argv[]) {
For that matter, why put the logic in main where you can't test it?
I would hope to see something structured like:
std::vector<int> products(std::vector<int> const &X)
{
auto n = X.size();
std::vector<int> M(n, 1);
// ... algorithm ...
return M;
}
int main()
{
static const int N = 3;
std::vector<int> seq(gen_seq(1), gen_seq(N + 1));
std::vector<int> result = products(seq);
// print the result
// or assert correctness
}
Or possibly several test functions, each called from main.
As for the algorithm itself, its working isn't really clear, and that's where comments would be useful. I'm sure you have a deep intuitive understanding of why your nameless stack and queue generate the right results, but without a lot of effort I don't. In six months' time, you may not either.
Your loop can be cleaner anyway; instead of:
for (int x = 0, y = N - 1; x < N && y >= 0; ++x, --y) {
if (x == 0 && y == N - 1) {
stk.push(1);
que.push(1);
} else {
stk.push(stk.top() * seq[x - 1]);
que.push(que.back() * seq[y + 1]);
}
}
we can:
move the special case if (x == 0 ... outside
then, you only have to iterate over the values used in the second branch (so 1 <= x < N, N-2 >= y >= 0)
but we only use x-1 and y+1 in the body of the loop, so simplify this to 0 <= x < N-1 and N-1 >= y >= 0 and just use x and y in the body
notice that the two conditions (x < N-1 && y >= 0) will always agree, so we don't have to test both
to get:
stk.push(1);
que.push(1);
for (int x = 0, y = N-1; x < N-1; ++x, --y)
{
stk.push(stk.top() * seq[x]);
que.push(que.back() * seq[y]);
}
and a simpler implementation:
std::vector<int> products(std::vector<int> const &X)
{
auto n = X.size();
std::vector<int> M(n, 1); // initialise all values to 1
for (int i = 0; i < n; ++i)
{
// set Mj <= Mj * Xi for all j != i
for (int j = 0; j < n; ++j)
if (j != i) M[j] *= X[i];
}
return M;
}
This one uses no extra memory but is O(n^2) | {
"domain": "codereview.stackexchange",
"id": 8248,
"tags": "c++, algorithm, programming-challenge"
} |
What happens if you carbonate ethanol? | Question: The electronics youtuber bigclivedotcom has an on-and-off-again series where he carbonates various types of alcohol and comments on the taste. One thing he's noticed is that the stronger the alcohol, the more carbon dioxide it will absorb; so, he recently tried carbonating 'moonshine' (70% lab ethanol, 30% water).
This absorbed a staggering 22g of CO₂ per litre (and ended up completely undrinkable, or at least more so than it was previously).
I know that when you carbonate water you end up with a weak carbonic acid solution based on the H₂O + CO₂ ←→ H₂CO₃ equilibrium. But the references I've found say that carbon dioxide is much less soluble in ethanol than in water because ethanol is less polar. So, I'd expect that the stronger the ethanol solution, the less carbon dioxide would be absorbed. This is the complete opposite of what he actually observed.
So, what's happening here?
Reference: https://www.youtube.com/watch?v=yArcH80PiP4
Answer: Carbon dioxide is in fact roughly ten times as soluble in ethanol as in water [1](https://doi.org/10.1016/j.fluid.2006.04.017) (meaning you need more dissolved under pressure to get that bubbling effect; alcoholic beverages that bubble or sparkle are mostly water). The figure below from the reference illustrates the increasing solubility of carbon dioxide under pressure as we increase the ethanol content in an ethanol-water mixture.
Carbon dioxide is actually non-dipolar, and while the strong quadrupole enhances its solubility in water, it still fits better with less polar solvents such as ethanol.
Reference
I. Dalmolin, E. Skovroinski, A. Biasi, M.L. Corazza, C. Dariva, J. Vladimir Oliveira (2005). "Solubility of carbon dioxide in binary and ternary mixtures with ethanol and water". Fluid Phase Equilibria 245(2), 193-200, ISSN 0378-3812, https://doi.org/10.1016/j.fluid.2006.04.017. | {
"domain": "chemistry.stackexchange",
"id": 17318,
"tags": "solubility, alcohols, polarity, carbon-dioxide"
} |
Quadcopter controlled by Raspberry Pi | Question: I have:
Raspberry Pi 3.
Pi camera.
CC3D flight controller.
I have already developed a Python script that decides whether the quadcopter drone has to turn left/right, move straight on, or stop.
Is there a way to connect the Raspberry Pi to the flight controller to send it the commands?
Is it as easy as it sounds?
Edit: I can change the flight controller if necessary.
Answer: The easiest way to do this would be using UART for serial communication. The CC3D has TX/RX/GND pins which you connect to the Raspberry Pi.
Now you will need some sort of protocol or data framing to send pitch/roll/yaw/throttle values to the flight controller and differentiate these values somehow. You can implement and use the MultiWii Serial Protocol (MSP) for this purpose. Flight controller firmware such as CleanFlight already supports MSP. On the Raspberry Pi, PyMultiWii can be used to handle MSP frames.
What is the format or range of the controls your Python script decides for the quadcopter? You will have to convert or map these values into the MSP RC frame format. The pitch/roll/yaw/throttle range in MSP is 1000-2000 (centered at 1500 for pitch/roll/yaw); a throttle of 1000 stops all motors and 2000 is the maximum value. The RC frame also supports 4 AUX channels, where you can send any sort of data you want. Here's a good tutorial for implementing MSP with Python and an STM32 microcontroller (which is used on the CC3D).
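As a rough illustration of the framing involved, here is a sketch that builds an MSP_SET_RAW_RC frame (command 200) by hand. The channel order shown follows a common MultiWii convention, but the actual mapping depends on your firmware configuration, so treat it as an assumption:

```python
import struct

def msp_set_raw_rc(roll, pitch, yaw, throttle, aux=(1500, 1500, 1500, 1500)):
    """Build an MSP_SET_RAW_RC (command 200) frame as raw bytes.

    Frame layout: '$M<' header, payload size, command id, payload,
    then a checksum equal to the XOR of size, command and payload bytes.
    """
    channels = (roll, pitch, yaw, throttle) + tuple(aux)
    payload = struct.pack("<8H", *channels)  # 8 little-endian uint16 channels
    size, command = len(payload), 200
    checksum = size ^ command
    for b in payload:
        checksum ^= b
    return b"$M<" + bytes([size, command]) + payload + bytes([checksum])
```

The resulting bytes would then be written to the UART (for example with pyserial) at the baud rate the flight controller is configured for.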
Tip: You can also make use of the MSP attitude and heading frame to get the pitch, roll and yaw computed from the IMU on the flight controller and integrate this with your Python script on RaspPi for better control! | {
"domain": "robotics.stackexchange",
"id": 1440,
"tags": "quadcopter, raspberry-pi"
} |
Functions to escape CSS rules in PHP | Question: Some context
I've been tasked with supplying an escaping function to arbitrary CSS values that are entered through a form. The goals and caveats are:
I know it's bad practice to let users input CSS. Deal with it.
Data will be injected either to a style attribute, or to an external stylesheet.
This is run on PHP 5.1 (Again, I know, deal with it).
I'm trying to follow this cheat sheet as closely as possible.
The Code
/**
* ord() alternative that works with UTF8 characters
* @param string $c
*
* @return int UTF-8 character code value
*/
function getUTF8CharCode($c) {
$h = ord($c{0});
if ($h <= 0x7F) {
return $h;
} else if ($h < 0xC2) {
return false;
} else if ($h <= 0xDF) {
return ($h & 0x1F) << 6 | (ord($c{1}) & 0x3F);
} else if ($h <= 0xEF) {
return ($h & 0x0F) << 12 | (ord($c{1}) & 0x3F) << 6
| (ord($c{2}) & 0x3F);
} else if ($h <= 0xF4) {
return ($h & 0x0F) << 18 | (ord($c{1}) & 0x3F) << 12
| (ord($c{2}) & 0x3F) << 6
| (ord($c{3}) & 0x3F);
} else {
return -1;
}
}
/**
* Escape a single character for CSS context.
* @param $c
* @return string
*/
function escapeCSSCharacter($c) {
return "\\" . base_convert(getUTF8CharCode($c), 10, 16) . " ";
}
/**
* Escape CSS rule
*
* @param string $data The CSS rule
* @param array $immuneChars Array of immune character. These characters will not be escaped.
*
* @return string Escaped string
*/
function escapeCSSValue($data, array $immuneChars = array()) {
$result = "";
for ($i = 0; $i < mb_strlen($data); $i++) {
$currChar = mb_substr($data, $i, 1);
if (getUTF8CharCode($currChar) < 256 && //Character value is less than 256
!preg_match("/^\w$/", $currChar) && //Character is not alphanumeric (underscore is considered safe too)
!in_array($currChar, $immuneChars) //Character is not immune
) {
$result .= escapeCSSCharacter($currChar);
}
else {
$result .= $currChar;
}
}
return $result;
}
Usage
$colorRule = "color: " . escapeCSSValue("#BADA55;}*{display:none;}/*", array("#")) . ";"; //Will be obviously broken, but will not break the rest of the document.
echo $colorRule;
My worries
Is this a good way to escape CSS? Is this safe? Will this method be impossible to break out of?
Am I doing this relatively efficiently? Given that the strings I'm going to be encoding are arbitrary, I'm worried about attacks that involve thousands of characters.
Any review will be welcomed.
Answer: Your code appears to be inconsistent with respect to UTF-8 characters.
Two significant issues I can see are:
You are intending to return an int value, yet you return false for the block between 0x80 and 0xC2. In PHP, false is not an int, and 0 is also falsey. Then, in the 'else' block, you return -1, which is truthy.
I am uncertain that your byte ranges are correct. UTF-8 lead-byte boundaries are subtle (for example, bytes from 0x80 to 0xC1 are never valid lead bytes, since 0xC0 and 0xC1 could only start overlong encodings), so I would double-check each of your conditions against the specification. Is there something I am unaware of? | {
"domain": "codereview.stackexchange",
"id": 8282,
"tags": "php, css, security, utf-8, escaping"
} |
Computing distance traveled from jerk | Question: When dealing with higher time derivatives like jerk, how does one find the distance traveled? Can it be calculated by just knowing time?
Answer: Knowing only "jerk" (third derivative of position), you cannot determine the distance traveled.
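If the initial acceleration, velocity, and position are supplied as extra data, though, the position can be recovered numerically. A simple Euler-integration sketch (the step size and the constant-jerk check are illustrative):

```python
def position_from_jerk(j, t_end, dt, a0=0.0, v0=0.0, x0=0.0):
    """Integrate jerk j(t) three times (jerk -> acceleration -> velocity -> position)."""
    a, v, x = a0, v0, x0
    t = 0.0
    while t < t_end:
        a += j(t) * dt  # acceleration is the integral of jerk
        v += a * dt     # velocity is the integral of acceleration
        x += v * dt     # position is the integral of velocity
        t += dt
    return x
```

With a constant jerk $j$ and zero initial conditions, the result approaches the analytic answer $jt^3/6$ as the step size shrinks.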
To get distance traveled (or equivalently, position as a function of time) from jerk, you need to integrate three times. Each integration produces a constant of integration representing an initial value; your final equation looks something like this:
$$p(t) = \iiint j(t)\,dt\,dt\,dt + \tfrac{1}{2}at^2 + vt + x$$
where "a" is your initial acceleration, "v" is your initial velocity, and "x" is your initial position. "x" doesn't matter for computing distance traveled, but the other two do. | {
"domain": "physics.stackexchange",
"id": 21720,
"tags": "newtonian-mechanics, kinematics, integration, jerk"
} |
Explanation of the knowledge representation hypothesis (Brian Smith) | Question: In 1982 Brian Smith proposed his Knowledge Representation Hypothesis:
Any mechanically embodied intelligent process will be comprised of
structural ingredients that
we as external observers naturally take to represent a propositional account of the knowledge that the overall process
exhibits, and
independent of such external semantic attribution, play a formal but causal and essential role in engendering the behavior that
manifests that knowledge.
Can someone simplify this statement or add some explanations to it?
In particular I don't understand what is meant by "propositional account" and "formal but causal role".
Thanks!
Answer: This is a philosophical statement which uses sophisticated language per convention. Here is another version (according to this page):
Any process capable of reasoning intelligently about the world must consist in part of a field of structures, of a roughly linguistic sort, which in some fashion represent whatever knowledge and beliefs the process may be said to possess.
You can visit the page I linked from some critique of both statements. For another critique, here is an example taken from these lecture notes. Consider the following two procedures for translating the colors red and blue into French:
function translate1(color):
if color is "red" return "rouge"
if color is "blue" return "bleu"
function translate2(color):
dictionary = [ "red" -> "rouge", "blue" -> "bleu" ]
return dictionary[color]
According to the lecture notes, only the second function translate2 conforms to the hypothesis.
Now for some positive example. Consider an automated translation service such as Google translate. As a vast simplification, Google has a dictionary that it uses to translate words from (say) English to French. This dictionary is a "propositional account" of the knowledge of the process. Here by "propositional account", Smith means a set of logical statements, for example:
The translation of red is rouge.
The translation of blue is bleu.
(Technically, he means first-order logic, so you would put these statements in concise logical form, or as Prolog statements.)
The translation program uses its dictionary in its efforts. Thus the dictionary plays a "causal ... role" in the "behavior" of the system. That is, since rouge is the counterpart of red, if you give the program red it outputs rouge. We don't claim any 'real' intelligence for the program, so this role is only "formal", and moreover our understanding of the dictionary as a list of matching words in two different languages is only an "external semantic attribution" that is irrelevant for explaining the behavior of the program. After all, the program doesn't really 'speak' English or French, it only gives the impression of being able to.
Let me try to put the hypothesis in simpler words:
Programs use databases that represent knowledge.
(See Surely you're joking, Mr Feynman!, page 281.)
Now we can come up with many more examples. JPEG compression programs use knowledge about the human visual system in the form of the quantization matrix, which explains which Fourier coefficients are more important for image representation. Recommendation systems use a database of products that can be recommended, and another database that keeps track of what other users liked. OCR systems use implicit representations of symbols (letters, digits, and punctuation) in the form of a machine learning recognizer for them.
Does modern machine learning conform to the hypothesis? Considering the last example, optical character recognition, the "catalog of characters" isn't stored as such, but only implicitly as (say) a set of weights in some neural network. This is certainly not a "field of structures, of a roughly linguistic sort", as per the other version of the hypothesis.
Modern artificial intelligence has largely moved away from the naive and romantic view of classical artificial intelligence, as exemplified by the KR hypothesis. Instead, now we often use statistical methods of machine learning which are much more successful in practice but much less satisfying from a human perspective.
Are knowledge representation techniques used in the real world? This is a question left for the experts to answer. | {
"domain": "cs.stackexchange",
"id": 5847,
"tags": "artificial-intelligence"
} |
Finding radius of turning car to calculate the centripetal force | Question: I'm writing a program to simulate a car driving. I'm wondering how I should find the radius when calculating the centripetal force. If we let the car travel at a constant speed, then $F_{net} = mv^2/r$. Does each of the front wheels have their own radius? How do I find the radius?
My first thought was to find the radius as in my illustration below, but on second thought I don't think that's correct.
Answer: The issue here is that your front wheels are turned/steered by the same angle.
When you try to find the instantaneous centre of curvature, you may first want to assume the wheels won't slip from side to side, as they may if you drive around a corner on a slippery road. As there is no slip, the velocity of each wheel must point in the direction the wheel faces, as you'd expect.
Then, for all four wheels, draw lines perpendicular to the velocities. All four lines must meet at the same point, otherwise the assumption that there is no slip is false.
So, your front wheels are steered by the same angle, and this is not good as this would mean the perpendicular lines you drew would never meet. So, there would have to be slip. If there is slip, this would become a dynamics problem rather than just kinematics (i.e. have to use Newton's 2nd Law). So the front wheels have to turn by different amounts (unless the front wheels are going straight forward).
This is shown in the following diagram:
Notice how all the lines connecting the instantaneous centre and the wheels are perpendicular to the velocities. In this system, there may be no slip (provided the magnitudes of velocity are calculated appropriately!).
It's important to note the inst centre always lies on the back wheels' axis line. Note the odd case when all wheels face forward: it seems the 4 perpendicular lines never meet. However, this is fine as the inst centre actually lies at infinity, and parallel lines 'meet at infinity'. An inst centre at infinity implies the car moves in a straight line.
So, to find the inst centre, it would be ideal to assume no slip. To do so, provided the back wheels don't steer, arrange one of the front wheels as you want, then determine the inst centre, then arrange the fourth wheel accordingly. If you include slip, velocities may not necessarily align with the direction of the wheel.
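The construction described above, intersecting the lines drawn perpendicular to the wheel velocities, translates directly into code. A 2D sketch (the point and heading conventions are mine):

```python
import math

def instantaneous_centre(p1, heading1, p2, heading2):
    """Intersect the perpendiculars to two wheels' velocities.

    Each wheel sits at point p = (x, y) and rolls in direction `heading`
    (radians). With no side-slip its velocity lies along the heading, so
    the instantaneous centre lies on the line through p perpendicular to
    it. Returns None when the perpendiculars are parallel (all wheels
    straight: the centre is at infinity and the car moves in a line).
    """
    d1 = (-math.sin(heading1), math.cos(heading1))  # perpendicular to wheel 1
    d2 = (-math.sin(heading2), math.cos(heading2))  # perpendicular to wheel 2
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-12:
        return None
    # Solve p1 + t*d1 = p2 + s*d2 for t by Cramer's rule.
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * d2[1] - ry * d2[0]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

With the centre in hand, each wheel's $r_{wheel}$ (and $r_g$ for the centre of mass) is just the distance from that point.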
Now that you know the inst centre, you need to find the speed of the centre of mass of the car. If you don't know it, then you need to know the speed of one of the wheels. To do so, note the formula:
$$v_{wheel} = \omega r_{wheel}$$
Where $v_{wheel}$ is the speed of a particular wheel, $r_{wheel}$ is the distance of that wheel from the inst centre, and $\omega$ is the angular velocity of the car about the inst centre. Similarly, the centre of mass follows the formula:
$$v_{g} = \omega r_{g}$$
Where $v_g$ is the speed of the centre of mass, and $r_g$ is the distance of the centre of mass from the inst centre.
Finally, the centripetal acceleration can be determined by:
$$a_c = \omega^2 r_g = \frac{v_g^2}{r_g}$$ | {
"domain": "physics.stackexchange",
"id": 21721,
"tags": "newtonian-mechanics, simulations, centripetal-force"
} |
Why are Bloch waves so successful at explaining behavior of electrons in crystals? | Question: The solutions to the time independent Schrodinger equation for a periodic potential are Bloch waves of the form
$$\psi(r) = u(r)e^{ik\cdot r}$$
where $u(r)$ is a periodic function with the same periodicity as the potential, and $k$ is the crystal wave vector.
At a given temperature, the crystal is vibrating due to its thermal energy (a superposition of its various vibrational modes) and therefore at any instant in time the crystal should be quite disordered violating the assumption of periodicity. So why are Bloch waves often the starting point for so many calculations in semiconductor physics?
Answer: Actually, even without phonons, Bloch's theorem hypothesis of a periodic potential is badly violated by all real crystals: finite samples have boundaries, then cannot be periodic.
Then how are Bloch states, although impossible in the real world, so useful in Solid State Physics?
The answer is Perturbation Theory. The main ideas to justify it in the case of crystal physics are:
the existence of an ideal periodic problem close enough to the real-world non-periodic system, at least locally;
the difference between observable quantities evaluated for the ideal and real system should be negligible.
The two ingredients work differently in different contexts, but both are required.
For example, Bloch states are useless in describing a glass because the atomic structure of glass bears no resemblance to a periodic structure, even at the scale of a few neighbors. They are useful for finite-size crystals, even though the translational symmetry is broken at the boundary, because it is possible to show that the difference between the properties of a finite part of a perfect (infinite) periodic crystal and those of a finite-size crystal is proportional to the sample's surface area. Therefore, this difference becomes negligible for macroscopic samples (it grows with the volume of the sample as $V^{\frac23}$, while bulk properties grow as $V$).
The same argument can be used to justify the usefulness of Bloch states in the presence of thermal motion (phonons) or defects. Provided the number of phonons or defects is not too high, in most cases the perfect-crystal solution is an excellent zeroth-order approximation. Only when the thermal disturbance becomes too high (too many phonons, resulting in too-large-amplitude vibrations$^{(*)}$) or when the number of defects reaches a concentration threshold does the structure of the real system depart so much from the ideal crystal as to make the Bloch description inadequate.
$^{(*)}$ The empirical Lindemann's melting criterion predicts melting as soon as the mean square fluctuation around equilibrium positions reaches a given percentage of the first neighbor distance. | {
"domain": "physics.stackexchange",
"id": 91581,
"tags": "condensed-matter, semiconductor-physics, crystals"
} |
Query all bounding boxes which contain a point | Question: I'm looking for the most efficient spatial-indexing data-structure for storing and querying bounding boxes which contain individual points. The points represent 2D coordinates on a grid, while the bounding boxes represent regions of the grid. The bounding boxes may vary greatly in size, and multiple bounding boxes may overlap a single point. Both points and bounding boxes are stored as signed integers.
For example, in the diagram below, if I were to query points $B$ and $C$, I'd expect a single bounding box in return. However, if I query point $A$, I'd expect an array containing both bounding boxes in return.
--------
| B ============
| |A| |
-----|-- C |
============
I'm not concerned with insert/remove efficiency, as all bounding boxes are added to the structure during a one-time initialization. My main concern is efficient look-ups for finding which bounding boxes contain a point, as such queries will be made frequently.
My initial thought is to use a quadtree, and to test all objects contained in a particular node to see if they contain the point being queried. However, I'm wondering: is there a better data-structure I could use to implement this behavior with?
Answer: Use a 2-d segment tree. Assuming we have $n$ items, construction takes time $O(n \cdot \log^2(n))$ and each query takes $O(\log^2(n))$ time. These times become $O(n \cdot \log(n))$ and $O(\log(n))$ time, respectively, if we use fractional cascading and lowest-level interval tree. These are good times unless there is more problem structure.
The query is called "multi-dimensional stabbing query" or "point enclosure query".
Range tree involves finding points in a query range. Segment tree involves finding rectangles that contain a query point.
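To make the 1-D building block concrete, here is a minimal sketch (in Python; not from the original answer) of a segment tree over elementary slabs answering stabbing queries for half-open intervals. The 2-d variant stores a second-level structure at each node instead of a plain list:

```python
import bisect

class SegmentTree1D:
    """Stabbing queries: which half-open intervals [lo, hi) contain x?"""

    def __init__(self, intervals):
        xs = sorted({p for lo, hi in intervals for p in (lo, hi)})
        self.xs = xs
        slabs = max(1, len(xs) - 1)        # elementary slabs [xs[i], xs[i+1])
        self.n = 1
        while self.n < slabs:
            self.n *= 2
        self.store = [[] for _ in range(2 * self.n)]
        for iv in intervals:
            l = bisect.bisect_left(xs, iv[0])
            r = bisect.bisect_left(xs, iv[1])  # interval covers slabs [l, r)
            self._insert(1, 0, self.n, l, r, iv)

    def _insert(self, node, nl, nr, l, r, iv):
        if r <= nl or nr <= l:
            return
        if l <= nl and nr <= r:            # canonical node: store here and stop
            self.store[node].append(iv)
            return
        mid = (nl + nr) // 2
        self._insert(2 * node, nl, mid, l, r, iv)
        self._insert(2 * node + 1, mid, nr, l, r, iv)

    def stab(self, x):
        """All intervals containing x, in O(log n + answer size) time."""
        if not self.xs or x < self.xs[0] or x >= self.xs[-1]:
            return []
        slab = bisect.bisect_right(self.xs, x) - 1
        out, node, nl, nr = [], 1, 0, self.n
        while True:                        # walk root -> leaf, collecting stored intervals
            out += self.store[node]
            if nr - nl == 1:
                return out
            mid = (nl + nr) // 2
            if slab < mid:
                node, nr = 2 * node, mid
            else:
                node, nl = 2 * node + 1, mid

t = SegmentTree1D([(0, 10), (5, 15), (20, 30)])
print(sorted(t.stab(7)))    # [(0, 10), (5, 15)]
print(t.stab(17))           # []
```

Each interval is stored in $O(\log n)$ canonical nodes, which is where the construction and query bounds quoted above come from; closed boxes need slightly finer elementary intervals at the endpoints.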
On an unrelated note, one might wish to use an R-tree with sort-tile-recursive (STR) bulk-loading. This leads to almost no overlap between the bounding boxes of a node's children, and the structure is balanced. If we are lucky (R-trees involve heuristics, and we wish to avoid ties in each component), the structure is good for a moderate number of dimensions, because the time factor of $d \cdot \log(n)$ is lower than $\log^{\textrm{max}(d - 1, 1)}(n)$ (noting that $d \geq 1$). We also take advantage of the fact that using an R-tree does not involve cloning primitives. Nearest-neighbor search via heuristics can also perform quite well with an R-tree, which is something it excels at compared to segment trees and range trees. Additionally, for a dynamic structure one might use an R*-tree, and for slightly more dimensions an X-tree. The more dimensions one has, the more likely a linear scan is affordable, via a kind of "curse of dimensionality".
Further, if one has only distances and no absolute locations, a metric tree will prove useful. Structures that assume rectangle primitives will be aided by fact that a point is a degenerate rectangle. A pair of points that act as opposite corners of a rectangle can be turned into one point via a "corner transformation" from Pagel 1993.
One strategy that may be used with an R-tree is augmenting it with "look-ahead" and edge checks to get a guaranteed, theoretically acceptable time for a query such as the point enclosure query, even though an R-tree may already seem designed specifically for this kind of query.
References
Pagel et al. - The transformation technique for spatial objects revisited (1993) | {
"domain": "cs.stackexchange",
"id": 8039,
"tags": "algorithms, data-structures, space-partitioning"
} |
Is Kirchhoff's scalar theory of diffraction mathematically inconsistent? | Question: I've heard that Kirchhoff's scalar diffraction theory is mathematically inconsistent. Is this true? If so, where in the formulation does this inconsistency arise and are there ways to remedy it?
Answer: Kirchhoff's scalar diffraction theory is mathematically inconsistent. The reason is as follows. Kirchhoff proved that any function $U$ satisfying the homogeneous wave equation $\nabla^2U+k^2U=0$ also satisfies the integral equation
$$U(\mathcal P)=\frac{1}{4\pi}\int\int_{\mathcal S} \left[U(\mathcal Q)\frac{\partial}{\partial n} \left(\frac{e^{\mathfrak j kr}}{r}\right)-\left(\frac{e^{\mathfrak j kr}}{r}\right)\frac{\partial U(\mathcal Q)}{\partial n}\right]ds(\mathcal Q) \tag{1}$$
In Eq (1) the point of observation is $\mathcal P$ at which the wave $U$ is calculated by integrating the function over the closed surface $\mathcal S$ that contains the point of observation. The $r$ is the distance between the points $\mathcal Q$ of the surface and $\mathcal P$.
This Eq. (1) is true for any reasonably smooth surface $\mathcal S$. It also holds if the surface is a plane closed by a hemisphere extended to infinity.
This integral equation or rather identity expresses that given the solution and its normal derivative of the wave equation on a closed surface one can obtain the complete solution inside that surface. So far so good.
The mathematical problem starts when the formula is applied to a diffraction problem, say an opaque screen with a hole being illuminated by a plane or spherical wave. If the hole is several wavelengths across then it is reasonable to assume that inside the hole the wave does not change much and can be taken as given. But what about the shadow side of the opaque screen where the wave should be known so we can apply Eq (1)? A reasonable assumption would be that on the screen in the shadow side we can set $U(\mathcal Q)=0$, but we still need to know the normal derivative of $U$.
We cannot just set that normal derivative to zero; there is no physical justification for that, and moreover, one cannot arbitrarily set both boundary conditions independently for the Helmholtz equation: at any surface point we can prescribe one or the other, but not both. This is the mathematical inconsistency. Amazingly, despite the inconsistency, and to the chagrin of all mathematicians, it works very well in practice.
Yes, there are ways to fix the inconsistency but general consensus is that it is rarely worth it, Sommerfeld's fix is a good point of departure, see Born & Wolf Chapter 8. | {
"domain": "physics.stackexchange",
"id": 98360,
"tags": "electromagnetism, diffraction, boundary-conditions, approximations, greens-functions"
} |
What formal representation is commonly used to describe compiler optimizations? | Question: I have devised a compiler optimization that works on any structured language that has arrays assignments array[index] = value and counted loops (for i = n; i < N; i++) {doTHIS} (*).
Now I want to represent this optimization using some formal semantics. Since I want to publish my results in a Programming Languages venue, I would like to know:
What formal(s) representation(s) is (are) most commonly used to describe compiler optimizations?
I have already described my optimization using operational semantics, but I'm currently reviewing this choice. Hence, I came to the community for advice.
(*) Please forgive this C-like representation in a question asking for formal semantics.
Answer: There are many compilers, which compile widely different kinds of languages which serve widely different purposes. For example, a database language will have very different optimizations than an array-based language like APL.
Compilers themselves use several intermediate languages, from the input language, to a de-sugared version of the input language, all the way down to "fancy" assembly-like versions. You might check here to get an idea of the intermediate languages involved.
Optimizations happen at each of these stages. In general it only makes sense to state optimizations in terms of a formal execution semantics of the language in question: an optimization must preserve the observational semantics, aka the result of the computation. Ideally you would use the semantics to say something about the execution time as well: typically the number of abstract reduction steps would get lower (in certain cases) after application of the optimization.
Pragmatically, many optimizations of imperative-style languages happen on an intermediate language that looks a lot like LLVM IR. Characteristics include SSA form (each variable appears on the left-hand side of at most one assignment) and three-address code (at most one assignment, and one operation, per statement).
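As a toy illustration of those two properties (not tied to LLVM; the names are invented), flattening a nested expression into three-address statements with single-assignment temporaries looks like this:

```python
import itertools

def flatten(expr, out, fresh):
    """expr: nested tuples like ('+', ('*', 'a', 'b'), 'c').  Appends
    three-address statements to `out`; returns the name holding expr's value."""
    if isinstance(expr, str):                 # a leaf is already a name
        return expr
    op, lhs, rhs = expr
    l = flatten(lhs, out, fresh)
    r = flatten(rhs, out, fresh)
    t = f"t{next(fresh)}"                     # fresh temporary, assigned exactly once (SSA)
    out.append(f"{t} = {l} {op} {r}")         # at most one operation per statement
    return t

code = []
flatten(('+', ('*', 'a', 'b'), ('*', 'a', 'c')), code, itertools.count(1))
print("\n".join(code))
# t1 = a * b
# t2 = a * c
# t3 = t1 + t2
```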
The function calls, thunks, memory layout and semantics of things like pointers are going to tend to be specific to your language. | {
"domain": "cs.stackexchange",
"id": 9196,
"tags": "formal-languages, compilers, semantics, program-optimization, operational-semantics"
} |
Help with Audio Signal noise removing using FFT and IFFT in Matlab | Question: I don't understand why my modified FFT of an audio signal returns complex values after IFFT.
I modified my signal by zeroing out all the unnecessary freq.
FFT: Fast Fourier transform
IFFT: Inverse fast Fourier transform
Here's the MATLAB code and graph
[a,fs]=audioread("Recording.m4a")
plot(a)
n=numel(a)/2   % since the audio signal is stereo but I only want mono
b=abs(fft(a))
y=fft(a)
c=y(:,1)       % convert the audio signal from stereo to mono
f=0:n-1
f=(f*fs/n)'
plot(f,abs(c))
xlim([0 1000])
c(abs(c)<31)=0
c(round(n*1000/fs):end)=0
plot(f,abs(c))
xlim([0 1000])
z=ifft(c)
plot(z)
Answer: The short answer is do not null out the "mirror frequencies" that are located above $f_s/2$ that match the frequencies you want to keep. If your FFT was generated from a real signal, then when you do the IFFT you will get the real signal back as long as you did not zero out those upper frequencies (as you did).
The DFT (which the FFT computes) returns complex values but for real signals the “positive” frequency values will be the complex conjugate of the negative frequency values which together represent real signals. Due to how the FFT is computed, instead of returning the positive and negative frequencies from $-f_s/2$ to $+f_s/2$, instead those same frequencies are given from $0$ to $f_s$.
Consider Euler’s Identity:
$$2\cos(\omega t) = e^{j\omega t}+ e^{-j\omega t}$$
If it wasn’t already clear, the general form of $Ae^{j\theta}$ represents the magnitude and phase of a complex number as $A\angle{\theta}$ so above we see how the cosine is represented by two phasors each rotating in time in opposite directions (a positive and negative frequency) such that their sum is always on the real axis.
The FFT returns the frequency bins from 0 to one sample less than the sampling frequency: bins $n = 0$ to $N-1$, where each bin is spaced by $f_s/N$ with $f_s$ as the sampling rate. Due to the cyclical nature of the FFT this represents the positive and negative frequencies by mapping everything above $f_s/2$ to the negative frequencies: specifically, bin $N/2 + n$ (for $0 \le n < N/2$) maps to frequency bin $-N/2 + n$. The command fftshift in MATLAB or Octave performs exactly this reordering.
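This symmetry is easy to demonstrate numerically (sketched here in NumPy rather than MATLAB; the frequencies and cut-offs are arbitrary). Zeroing a bin without also zeroing its mirror destroys the conjugate symmetry and makes the IFFT complex:

```python
import numpy as np

fs, N = 1000, 1000
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 50 * t) + 0.5 * np.cos(2 * np.pi * 300 * t)
X = np.fft.fft(x)

# Wrong: zero every bin above 100, including the mirrors of the bins we keep.
bad = X.copy()
bad[100:] = 0
y_bad = np.fft.ifft(bad)

# Right: zero each unwanted bin k together with its mirror N - k.
good = X.copy()
good[100:N - 99] = 0            # keeps bins 0..99 and their mirrors N-99..N-1
y_good = np.fft.ifft(good)

print(np.abs(y_bad.imag).max())   # large: conjugate symmetry was destroyed
print(np.abs(y_good.imag).max())  # ~1e-16: the filtered signal stays real
```

Here the 300 Hz component is removed symmetrically, so `y_good` is (up to round-off) exactly the remaining 50 Hz cosine.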
For this reason you should NOT null out all the "mirror" frequencies that are located above $N/2$ as without them your signal will be complex. Consider the example above with the cosine.
$$2\cos(2\omega_n t) = e^{j2\omega_n t}+ e^{-j2\omega_n t}$$
If sampled such that there were 100 total bins, and $\omega_n$ was the spacing of one DFT bin, then the DFT would have non-zero values at the 3rd bin (bin 0 is DC) and the 2nd-to-last DFT bin, which after shifting would be at the expected +/-2 bins. If you removed the higher bin then the resulting waveform would simply be:
$$e^{j2\omega_n t}$$ | {
"domain": "dsp.stackexchange",
"id": 8579,
"tags": "matlab, signal-analysis, fourier-transform, audio, denoising"
} |
Mapping Reduction from INFINITE to REG | Question: given the following languages:
$INFINITE_{T M}=\{\langle M\rangle: M$ is a TM with $|L(M)|=\infty\}$.
$R E G_{T M}=\{\langle M\rangle: M$ is a TM with $L(M) \in \mathrm{REG}\}$.
prove the following reduction:
$I N F I N I T E_{T M} \leq_m R E G_{T M}$
I'm having difficulties building the $M_f$ machine.
would love for some help, got no clue for now :)
edit:
if that wasn't clear, would love for initial direction, not for formal solution
Answer: The following reduction mapping should work.
For any input $M\in TM$ we define $M_f$ as follows:
For any $x\in\Sigma^*$,
If $x=0^n1^n$ for any $n\in \mathbb{N}$ then $M_f$ Accepts.
Else, we simultaneously run $M$ (by dovetailing) on all words $y\in\Sigma^*$ in Minlex order such that $|y|\geq|x|$. If $M$ Accepts any of them, then $M_f$ Accepts.
If no word was Accepted, we continue running.
It isn't difficult to see that $M_f$ is computable, and now I will leave you to prove that it is indeed a correct reduction. | {
"domain": "cs.stackexchange",
"id": 21250,
"tags": "turing-machines, rice-theorem"
} |
How does the number of clauses affect the difficulty of a 3-SAT problem? | Question: What is the relationship between the number of clauses and the difficulty of a 3-SAT problem?
Answer: In general, there is no connection. An instance with a "small" number (say a few thousand) of clauses can be very difficult to solve in practice, while an instance with a "large" number (say several million) of clauses is easy. It's the structure that matters, not the number of clauses.
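For random instances, the effect of the clause/variable ratio can be sketched with brute force (all parameters here, the variable count, trial count, and ratios, are arbitrary):

```python
import itertools, random

def random_3sat(n_vars, n_clauses, rng):
    """Each clause: 3 distinct variables, each with a random polarity."""
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n_vars), 3)]
            for _ in range(n_clauses)]

def satisfiable(n_vars, clauses):              # brute force; fine for tiny n
    return any(
        all(any(bits[v] == pol for v, pol in clause) for clause in clauses)
        for bits in itertools.product((False, True), repeat=n_vars)
    )

rng = random.Random(1)
n, trials = 10, 20
results = {}
for ratio in (2.0, 4.3, 7.0):
    results[ratio] = sum(
        satisfiable(n, random_3sat(n, int(ratio * n), rng)) for _ in range(trials))
    print(f"m/n = {ratio}: {results[ratio]}/{trials} satisfiable")
```

Below the threshold almost every instance is satisfiable, well above it almost none are, and the hardest instances for search tend to cluster near the transition.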
For random 3-SAT, you can have a look at the satisfiability threshold. | {
"domain": "cs.stackexchange",
"id": 10130,
"tags": "complexity-theory, satisfiability, 3-sat"
} |
Draw on a map with mouse | Question:
I'm looking for a way for a user to demonstrate trajectories on a map. I'm using RViz for everything, so an RViz solution would be excellent, but other options are OK too. The map comes from map_server as nav_msgs::OccupancyGrid. I want the user to click and drag to draw on the map, and I want to be able to produce a standard nav_msgs::Path from it. Any ideas how to do this?
Do I have to make my own application from the ground up for this? Or can rviz be hacked to do this?
Originally posted by petermitrano on ROS Answers with karma: 55 on 2016-06-24
Post score: 0
Original comments
Comment by C3MX on 2016-10-07:
+1 Here ! , I also want to draw things (like polygons) on the map , to define prohibited area easily without image editors and save it as a new file for global costmap. Any suggestions?
Answer:
This may not be what you need, but try the Publish Point tool in RViz (surprisingly, I don't find it described anywhere on the ROS wiki). It's usually on the top bar of RViz. See this video, which shows that placing multiple Publish Points in a closed loop draws a polygon.
I want the user to click and drag to draw on the map, and I want to be able to produce a standard nav_msgs::Path from it. Any ideas how to do this?
Not entirely sure, but with frontier_exploration package you can specify a region for the robot to explore, and the software computes a path to that location. In the same link I mentioned above is a bit more about the package.
Originally posted by 130s with karma: 10937 on 2016-06-24
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 25052,
"tags": "ros, rviz, markers"
} |
Projectiles ability to do work to a box when connected by string? | Question: I was wondering, the work-energy theorem states that KE can do work, as it is Mechanical energy.
if the KE energy and thus Mechanical energy of a ball, if external...can do work on an object, applying force.
so, If I have a ball with a string attached to it with a box, some KE is lost in deformation and such but the ball extends out and pulls the box, doing work on the box.
I know a hammer hits in a nail with Kinetic energy, which is mechanical energy, I know that external forces do work, so like a bullet has the ability to do work and can do so but it penetrates and exerts force inside object, it doesn't do work as its energy is taken up in deformation and such...but if a bullet does penetrate inside the body and then still exerts a force without losing KE to deformation, it could do work and such but that doesn't happen as bullets usually deform or lose all energy, the deformations is the material doing work on the bullet but the bullet doing work on the material creative a cavity by the exerted/transferred force, the time it lasts isn't very long and it creates a cavity for a short time as the force is applied for a short time.
though we digress, if a bullet attached by string could do work pulling a box to which its connected to, correct?
Answer:
Though we digress: a bullet attached by a string could do work pulling a box to which it's connected, correct?
Of course: when the bullet is connected to a box that was initially at rest, the box will start moving as soon as the string is pulled taut. While the box accelerates, the bullet decelerates.
The same force acts on both; you divide it by each object's mass to get the corresponding acceleration and deceleration. While the box's velocity increases from 0 to its final value, the box accelerates.
Since it accelerates over a distance greater than zero, the box's mass times the acceleration times the distance over which it accelerates gives you the kinetic energy of the box, or in other words the work the bullet did on the box.
If the bullet is too fast it might tear the string or break the material of the box, since the destruction factor depends mostly on velocity (hence light but fast bullets for penetration), but the same principles will still hold: you then just subtract the energy that was used to break the material of the box and divide by the mass of the remaining part of the box that is still attached to the string to get the acceleration-to-deceleration ratio of the box and the bullet.
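Treating the jerk of the string as a one-dimensional collision in which both momentum and kinetic energy are conserved (an idealisation; all numbers here are made up), the final velocities follow from the standard elastic-collision formulas:

```python
m_bullet, m_box = 0.01, 2.0       # kg (hypothetical)
v0 = 300.0                        # m/s, bullet speed before the string goes taut

# 1-D elastic collision: conserves total momentum and total kinetic energy.
v_bullet = (m_bullet - m_box) / (m_bullet + m_box) * v0
v_box = 2 * m_bullet / (m_bullet + m_box) * v0

work_on_box = 0.5 * m_box * v_box ** 2   # the work the bullet did on the box
print(v_bullet, v_box, work_on_box)
```

If energy also goes into breaking material or straining the string, that loss is subtracted before dividing by the masses, as described above.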
If you remember that energy and momentum are conserved it is easy to calculate the final velocity of the bullet and the box if you know their mass (with air resistance it is a bit more complicated). | {
"domain": "physics.stackexchange",
"id": 29025,
"tags": "energy, kinematics, energy-conservation, work"
} |
Do epigenetics determine the proteins a cell produces and therefore its function? | Question: I'm having trouble understanding what epigenetics is in a simple sense.
How I imagine it is that if we had 2 twins with identical DNA and let them live, we would see that they develop differently. Their DNA will stay the same (unless the DNA gains mutations, which could lead to cancer...), but their epigenetic tags will differ. These epigenetic tags determine the execution of parts of the DNA, and the way the twins live their lives determines how the tags will be distributed. It's hard for me to find an example.
Another way I think of epigenetics is that, for example, the cells in the muscles have specific epigenetic tags to produce the specific proteins needed to create the muscle, while the cells in the skin have different tags and produce different proteins, which produce the skin.
Is my understanding of epigenetics anywhere close to the truth?
Answer: Short answer: Yes, epigenetics play a role in determining gene expression, therefore protein expression and function.
Lifestyle factors like diet, smoking, alcohol consumption, and stress can change one's epigenome. A historical example of this might be the Dutch Hunger Winter of 1944-45; there is evidence that after the parents' generation suffered during World War II, the children who were conceived during this time and exposed in utero to famine had different epigenetic marks than their siblings who were conceived not during the famine.
Epigenetic regulation is also essential for cell differentiation in vivo and in vitro. Embryonic stem cells undergo many changes to their epigenome in order to become mature cells. One interesting example for further reading is the transcription factor NeuroD1, which binds to targets in the DNA to cause widespread changes in gene expression that are specific to neurons, and commits cells to a neuronal fate. It doesn't bind to thousands of neuronal genes; instead, it causes a "ripple effect" thanks to epigenetic mechanisms.
Longer answer: Epigenetic marks are added to the genome for many reasons, including the two you just listed, and I think it is helpful to understand what happens at a molecular level in order to understand "why" we have them and to better visualize what the epigenetic marks ("tags") are. This might help you to think of better examples on your own.
DNA spends most of its time wrapped around proteins called histones. A complex of DNA and histones is called a nucleosome. DNA-in-nucleosomes is called chromatin. Chromatin can then either be "loose" (called euchromatin) or "compact" (called heterochromatin). There are more detailed ways to define how accessible chromatin is, but this is the basic idea. Intuitively, loose chromatin is more easily accessible and the genes there can be transcribed actively. Compact chromatin is less active.
The looseness of chromatin is determined by how closely the nucleosomes interact, which is controlled by the histones' molecular properties. Whether or not the histone proteins interact to make the chromatin compact, or repel each other to make the chromatin loose, depends on what modifications they possess. These modifications are added by histone-modifying enzymes.
Histone-modifying enzymes are regulated by transcription factors (i.e. during the process of differentiation) as well as signals generated by extrinsic factors (i.e. during the organism's lifetime). The balance of activating and silencing activity is what adds and removes these marks to give a cell its individual epigenome.
So, in simple terms, epigenetic marks determine how accessible different parts of the DNA are, and a whole lot of factors (like cell identity and lifestyle) can determine where those epigenetic marks are. | {
"domain": "biology.stackexchange",
"id": 9374,
"tags": "cell-biology, dna, epigenetics"
} |
Good Metric for Spatial Resolution | Question: I am trying to filter crawled images with their image quality.
By 'high quality' I mean high resolution, not aesthetic score or perceptual score.
However, I found that some images have very high pixel resolution (e.g. 4000px * 4000px) but their actual quality looks as if a (200*200) image had been stretched by 20 times. I would like to measure the level of detail.
From wikipedia I learned spatial resolution is what I am looking for. Two images below are of the same size (the same number of pixels). But the description of spatial resolution sounds ambiguous:
In effect, spatial resolution refers to the number of independent
pixel values per unit length.
How do I measure the independence between pixels? What metrics exist?
I guess if I run a 2D DFT and there are high values in the high-frequency region, then the image carries more detail? Are images with salt-and-pepper noise considered to have low spatial resolution?
Answer: If you want to figure out whether an image is actually a stretched version of a smaller image, you can probably assess whether any information was lost in the transformation. What you cannot do is assess spatial resolution, because you don't really know how much actual space the image covers in the first place.
The idea is this: When you scale-down an image, you throw away some of the information it carries. If you were now to scale-up the scaled-down version of the image, you don't exactly get the original image because that would mean having a way to fill in the information you threw away during the first step. Therefore, the spatial frequency spectrum of a scaled-down->scaled-up image is identical to that of the scaled-down image but interpolated too. That spectrum actually stops at a specific spatial frequency component and from that information you can probably also work out how small this image used to be before it got scaled up.
For example:
This image:
Has this spectrum (notice pixelation):
And this image:
Has this spectrum:
The scaling here is extreme to illustrate the point. We go from 64 pixels in the horizontal direction to 64 pixels blown up to 1024 using cubic interpolation.
Notice how the large image can accommodate many more spatial components until its periphery but that space remains "dark" (i.e. low power at high spatial frequencies).
This is the signature of a stretched image (scaled-down->scaled-up). If the image happens to be simply blurred, then you would still detect less power in the higher frequencies but you would have to come up with a way of discriminating between pictures that are blurred and pictures that genuinely only use less spatial components than what would be expected.
To figure that out, you can integrate the spectrum radially in an attempt to recover the average envelope of the frequency spectrum. In the case of a blurred image, this envelope would tend towards known low pass frequency responses, compared to the spectrum of an image that is genuinely not blurred.
This latter part is a bit more challenging to get right and would have certain limits beyond which you could say confidently that it is a blurred image rather than a small but clear image.
The image that you bring as an example seems to be a simply blurred one. Its spectrum looks like this:
Hope this helps. | {
"domain": "dsp.stackexchange",
"id": 6320,
"tags": "computer-vision"
} |
What would happen to an agent trained using Markov Decision Process if the goal node changes? | Question: I was reading a paper that did routing based on an MDP, and I was wondering: in routing, there is a sender node and a receiver node, so if the receiver node changes (sending a message to someone else), would we have to train the MDP algorithm all over again?
This also got me thinking about what would happen even if one node in the process of transmission changes. Does using an MDP for training the agent mean that the obstacle and goals should never change?
Answer: It is possible, at design time for a reinforcement learning problem, to allow for changes within an environment. You can make any element into a variable property of the state that the agent can realistically be told at the start or sense from the environment.
If you do add new variable to model the possibility of change:
It allows the agent to learn to solve a more general problem where the chosen property can vary.
It increases the size of the state space.
It requires training to include variations of the new variable.
Usually this also increases the time taken to train.
It is not always possible to use a state variable for the task - perhaps a goal state is effectively hidden from the agent and the purpose of training is for it to be discovered. In which case, you will require at least some re-training. It may be faster to start with the existing trained agent if the difference is not large.
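For the first option, here is a toy sketch (tabular Q-learning on a made-up 10-cell corridor; every number is illustrative) in which the goal position is folded into the state, so a single learned table handles any goal without retraining:

```python
import random

N = 10                        # corridor cells 0..9 (toy environment)
ACTIONS = (-1, +1)

def step(s, a, goal):
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == goal else -0.01), s2 == goal

Q = {}                        # keyed by (state, goal, action): the goal is part of the state
def q(s, g, a):
    return Q.get((s, g, a), 0.0)

random.seed(0)
alpha, gamma, eps = 0.5, 0.95, 0.2
for _ in range(4000):
    goal, s = random.randrange(N), random.randrange(N)   # the goal varies across episodes
    while s != goal:
        if random.random() < eps:
            a = random.choice(ACTIONS)                    # explore
        else:
            a = max(ACTIONS, key=lambda b: q(s, goal, b)) # exploit
        s2, r, done = step(s, a, goal)
        best_next = 0.0 if done else max(q(s2, goal, b) for b in ACTIONS)
        Q[(s, goal, a)] = q(s, goal, a) + alpha * (r + gamma * best_next - q(s, goal, a))
        s = s2

# One table now serves any goal, with no retraining:
greedy = lambda s, g: max(ACTIONS, key=lambda b: q(s, g, b))
print(greedy(2, 7), greedy(7, 2))   # 1 -1
```

The cost, as noted above, is a larger state space: training must cover variations of the new variable.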
If you cannot simply extend the state representation, and the environment changes in a small enough way, then it may also be possible to use an agent which continuously explores and will re-train itself over time in response to changes in the environment. The DynaQ+ algorithm is an example of a method which is designed to explore and find changes in the agent's environment to allow for this kind of online retraining when things change.
"domain": "ai.stackexchange",
"id": 2810,
"tags": "reinforcement-learning, deep-rl, markov-decision-process"
} |
Indicated mean effective pressure | Question: The equation for indicated power is $\dot{W} = \bar{p} \, L \, A \, N$, where L is the piston stroke (m), A is the piston area (m²), N is the number of mechanical cycles per cylinder per second, and $\bar{p}$ is the indicated mean effective pressure (imep).
How would you calculate the indicated mean effective pressure $\bar{p}$ if you were given the net indicated mean effective pressure and the pumping indicated mean effective pressure for a 4-stroke engine?
Thanks in advance
Answer: Noob, can't comment, so here's an answer.
It's not the terminology I'm used to, but apparently GrossIMEP is just the work done in the two important strokes (squeeze, bang) of a 4-stroke, NetIMEP is GIMEP - PMEP, and BMEP = NIMEP - FMEP. | {
"domain": "engineering.stackexchange",
"id": 4997,
"tags": "thermodynamics"
} |
How does the divergence of the Coulomb field "blow up" at origin? | Question: Many sources (DJ Griffiths, other answers on Stack Exchange) claim that the divergence of the vector field $\vec E=\frac{\hat r}{r^2}$, $\vec \nabla \cdot \vec E$, "blows up" at $\vec r=0$. But upon computation, we get an expression like $\frac{0}{r^2}$, which means it is 0 everywhere except the origin, and at the origin it is $\frac{0}{0}$ (undefined). The limit as $r \to 0$ is also 0. What exactly does "blow up" mean; are they implying it goes to infinity as $r \to 0$? Is there a way to prove this? If they do imply it goes to infinity, are they implying infinity is $\frac{0}{0}$?
Answer: The definition of the divergence of a vector field $\vec{E}(\vec{r})$
is the limit of the surface integral per volume:
$$\vec{\nabla}\cdot\vec{E}(\vec{r}_0)=
\lim_{V\to 0}\frac{\oint_{\partial V\text{ around }\vec{r}_0} \vec{E}(\vec{r})\cdot d\vec{S}}{V}$$
For the Coulomb field the surface integral around the origin
($\vec{r}_0=\vec{0}$) is
$$\oint_{\partial V\text{ around }\vec{0}} \vec{E}(\vec{r})\cdot d\vec{S}= 4\pi,$$
which is independent of how big or small the volume $V$ is.
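Both facts can be checked numerically (a sketch assuming NumPy; the grid sizes are arbitrary): a finite-difference divergence vanishes away from the origin, while the flux through a sphere of any radius stays $4\pi$, so flux per enclosed volume grows without bound as the sphere shrinks:

```python
import numpy as np

def E(p):                          # the Coulomb field  r_hat / r**2  =  r_vec / |r|**3
    return p / np.linalg.norm(p) ** 3

def divergence(p, h=1e-5):         # central-difference div E at a point p != 0
    return sum((E(p + h * np.eye(3)[i])[i] - E(p - h * np.eye(3)[i])[i]) / (2 * h)
               for i in range(3))

def flux(R, n=100):                # surface integral of E . n_hat over a sphere of radius R
    dth, dph = np.pi / n, 2 * np.pi / n
    tot = 0.0
    for t in (np.arange(n) + 0.5) * dth:
        for q in (np.arange(n) + 0.5) * dph:
            nhat = np.array([np.sin(t) * np.cos(q), np.sin(t) * np.sin(q), np.cos(t)])
            tot += (E(R * nhat) @ nhat) * R ** 2 * np.sin(t) * dth * dph
    return tot

print(divergence(np.array([1.0, 2.0, -0.5])))  # ~0 away from the origin
print(flux(1.0), flux(1e-3))                   # both ~4*pi, however small the sphere
```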
Hence the divergence is infinite at the origin:
$$\vec{\nabla}\cdot\vec{E}(\vec{0})=\lim_{V\to 0}\frac{4\pi}{V} = \infty$$ | {
"domain": "physics.stackexchange",
"id": 94915,
"tags": "electrostatics, electric-fields, gauss-law, singularities"
} |
How do you get the Calculus of Constructions from the other points in the Lambda Cube? | Question: The CoC is said to be the culmination of all three dimensions of the Lambda Cube. This isn't apparent to me at all. I think I understand the individual dimensions, and the combination of any two seems to result in a relatively straightforward union (maybe I'm missing something?). But when I look at the CoC, instead of looking like a combination of all three, it looks like a completely different thing. Which dimension do Type, Prop, and small/large types come from? Where did dependent products disappear to? And why is there a focus on propositions and proofs instead of types and programs? Is there something equivalent that does focus on types and programs?
Edit: In case it isn't clear, I'm asking for an explanation of how the CoC is equivalent to the straightforward union of the Lambda Cube dimensions. And is there an actual union of all three out there somewhere I can study (that is in terms of programs and types, not proofs and propositions)? This is in response to comments on the question, not to any current answers.
Answer: First, to reiterate one of cody's points, the Calculus of Inductive Constructions (which Coq's kernel is
based on) is very different from the Calculus of Constructions. It is
best thought of as starting at Martin-Löf type theory with universes,
and then adding a sort Prop at the bottom of the type hierarchy. This
is a very different beast than the original CoC, which is
best thought of as a dependent version of F-omega. (For instance, CiC
has set-theoretic models and the CoC doesn't.)
That said, the lambda-cube (of which the CoC is a member) is typically presented as a pure type system for reasons of economy in the number of typing rules. By treating sorts, types, and terms as elements of the same syntactic category, you can write down many fewer rules and your proofs get quite a bit less redundant as well.
However, for understanding, it can be helpful to separate out the different categories explicitly. We can introduce three syntactic categories, kinds (ranged over by the metavariable k), types (ranged over by the metavariable A), and terms (ranged over by the metavariable e). Then all eight systems can be understood as variations on what is permitted at each of the three levels.
λ→ (Simply-typed lambda calculus)
k ::= ∗
A ::= p | A → B
e ::= x | λx:A.e | e e
This is the basic typed lambda calculus. There is a single kind ∗, which is the kind of types. The types themselves are atomic types p and function types A → B. Terms are variables, abstractions or applications.
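As an illustration of how little machinery λ→ needs, here is a sketch of a type checker for exactly this grammar (plain Python; all names are my own invention, not from any standard library):

```python
from dataclasses import dataclass

# Types: atomic types p and function types A -> B
@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Arrow:
    dom: object
    cod: object

# Terms: variables x, typed abstractions, applications
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    ty: object
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def typecheck(term, env=None):
    env = env or {}
    if isinstance(term, Var):                      # variable rule: look up in context
        return env[term.name]
    if isinstance(term, Lam):                      # abstraction rule
        return Arrow(term.ty, typecheck(term.body, {**env, term.var: term.ty}))
    if isinstance(term, App):                      # application rule
        fn_ty = typecheck(term.fn, env)
        if not (isinstance(fn_ty, Arrow) and fn_ty.dom == typecheck(term.arg, env)):
            raise TypeError("ill-typed application")
        return fn_ty.cod
    raise TypeError("unknown term")

p = Atom('p')
ident = Lam('x', p, Var('x'))                      # the identity λx:p. x
assert typecheck(ident) == Arrow(p, p)
assert typecheck(App(ident, Var('y')), {'y': p}) == p
```

The two rules doing real work are abstraction and application; everything added by the other corners of the cube extends this core with new kinds, type-level abstraction, or polymorphism.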
λω_ (STLC + higher-kinded type operators)
k ::= ∗ | k → k
A ::= a | p | A → B | λa:k.A | A B
e ::= x | λx:A.e | e e
The STLC only permits abstraction at the level of terms. If we add it at the level of types, then we add a new kind k → k which is the type of type-level functions, and abstraction λa:k.A and application A B at the type level as well. So now we don't have polymorphism, but we do have type operators.
If memory serves, this system does not have any more computational power than the STLC; it just gives you the ability to abbreviate types.
λ2 (System F)
k ::= ∗
A ::= a | p | A → B | ∀a:k. A
e ::= x | λx:A.e | e e | Λa:k. e | e [A]
Instead of adding type operators, we could have added polymorphism. At the type level, we add ∀a:k. A which is a polymorphic type former, and at the term level, we add abstraction over types Λa:k. e and type application e [A].
This system is much more powerful than the STLC -- it is as strong as second-order arithmetic.
λω (System F-omega)
k ::= ∗ | k → k
A ::= a | p | A → B | ∀a:k. A | λa:k.A | A B
e ::= x | λx:A.e | e e | Λa:k. e | e [A]
If we have both type operators and polymorphism, we get F-omega. This system is more or less the kernel type theory of most modern functional languages (like ML and Haskell). It is also vastly more powerful than System F -- it is equivalent in strength to higher order arithmetic.
λP (LF)
k ::= ∗ | Πx:A. k
A ::= a | p | Πx:A. B | Λx:A.B | A [e]
e ::= x | λx:A.e | e e
Instead of polymorphism, we could have gone in the direction of dependency from simply-typed lambda calculus. If you permitted the function type to let its argument be used in the return type (ie, write Πx:A. B(x) instead of A → B), then you get λP. To make this really useful, we have to extend the set of kinds with a kind of type operators which take terms as arguments Πx:A. k , and so we have to add a corresponding abstraction Λx:A.B and application A [e] at the type level as well.
This system is sometimes called LF, or the Edinburgh Logical Framework.
It has the same computational strength as the simply-typed lambda calculus.
λP2 (no special name)
k ::= ∗ | Πx:A. k
A ::= a | p | Πx:A. B | ∀a:k.A | Λx:A.B | A [e]
e ::= x | λx:A.e | e e | Λa:k. e | e [A]
We can also add polymorphism to λP, to get λP2. This system is not often used, so it doesn't have a particular name. (The one paper I've read which used it is Herman Geuvers' Induction is Not Derivable in Second Order Dependent Type Theory.)
This system has the same strength as System F.
λPω_ (no special name)
k ::= ∗ | Πx:A. k | Πa:k. k'
A ::= a | p | Πx:A. B | Λx:A.B | A [e] | λa:k.A | A B
e ::= x | λx:A.e | e e
We could also add type operators to λP, to get λPω_. This involves adding a kind Πa:k. k' for type operators, and corresponding type-level abstraction λa:k.A and application A B.
Since there's again no jump in computational strength over the STLC, this system should also make a fine basis for a logical framework, but no one has done it.
λPω (the Calculus of Constructions)
k ::= ∗ | Πx:A. k | Πa:k. k'
A ::= a | p | Πx:A. B | ∀a:k.A | Λx:A.B | A [e] | λa:k.A | A B
e ::= x | λx:A.e | e e | Λa:k. e | e [A]
Finally, we get to λPω, the Calculus of Constructions, by taking λPω_ and adding a polymorphic type former ∀a:k.A and term-level abstraction Λa:k. e and application e [A] for it.
The types of this system are much more expressive than in F-omega, but it has the same computational strength. | {
"domain": "cstheory.stackexchange",
"id": 3736,
"tags": "lambda-calculus, type-systems, typed-lambda-calculus, calculus-of-constructions"
} |
Function from WordPress VoteItUp function | Question: This code comes from the forum of a WordPress plugin called Vote it Up.
It sorts posts by vote. A lot of people say that there is more code than is needed.
So I was wondering if someone has any idea about how to clean it up a bit.
votingfunctions.php:
function ShowPostByVotes() {
global $wpdb, $voteiu_databasetable;
mysql_connect(DB_HOST, DB_USER, DB_PASSWORD) or die(mysql_error());
mysql_select_db(DB_NAME) or die(mysql_error());
//Set a limit to reduce time taken for script to run
$upperlimit = get_option('voteiu_limit');
if ($upperlimit == '') {
$upperlimit = 100;
}
$lowerlimit = 0;
$votesarray = array();
$querystr = "
SELECT *
FROM $wpdb->posts
WHERE post_status = 'publish'
AND post_type = 'post'
ORDER BY post_date DESC
";
$pageposts = $wpdb->get_results($querystr, OBJECT);
//Use wordpress posts table
//For posts to be available for vote editing, they must be published posts.
mysql_connect(DB_HOST, DB_USER, DB_PASSWORD) or die(mysql_error());
mysql_select_db(DB_NAME) or die(mysql_error());
//Sorts by date instead of ID for more accurate representation
$posttablecontents = mysql_query("SELECT ID FROM ".$wpdb->prefix."posts WHERE post_type = 'post' AND post_status = 'publish' ORDER BY post_date_gmt DESC LIMIT ".$lowerlimit.", ".$upperlimit."") or die(mysql_error());
$returnarray = array();
while ($row = mysql_fetch_array($posttablecontents)) {
$post_id = $row['ID'];
$vote_array = GetVotes($post_id, "array");
array_push($votesarray, array(GetVotes($post_id)));
}
array_multisort($votesarray, SORT_DESC, $pageposts);
$output = $pageposts;
return $output;
}
index.php:
$pageposts = ShowPostByVotes();
?>
<?php if ($pageposts): ?>
<?php foreach ($pageposts as $post): ?>
<?php setup_postdata($post); ?>
Attention! Code above is something like:
<?php if (have_posts()) : ?>
<?php while (have_posts()) : the_post(); ?>
so in the foreach loop you can use statements like those in the standard "The Loop", for example the_content() or the_time().
To end this add
<?php endforeach; ?>
<?php else : ?>
<h2 class="center">Not Found</h2>
<p class="center">Sorry, but you are looking for something that isn't here.</p>
<?php include (TEMPLATEPATH . "/searchform.php"); ?>
<?php endif; ?>
EDIT:
How I run a custom loop:
<?php $custom_posts = new WP_Query(); ?>
<?php $custom_posts->query('category_name=Pictures'); ?>
<?php while ($custom_posts->have_posts()) : $custom_posts->the_post(); ?>
<div class="content-block-2">
<a href="<?php the_permalink(); ?>" title="<?php printf( esc_attr__( 'Permalink to %s', 'twentyten' ), the_title_attribute( 'echo=0' ) ); ?>" rel="bookmark"><?php the_content(); ?></a>
</div>
<?php endwhile; ?>
Answer: These are my suggestions
Don't use mysql functions to connect and query Wordpress posts.
You can replace the first SQL statement with this WP_Query.
$query = new WP_Query('post_type=post&post_status=publish&orderby=date&order=DESC');
$pageposts = $query->get_posts();
This way the code is somewhat protected from future Wordpress database changes.
The second select statement is a bit unnecessary as each $post in the $pagepost already contains the post id.
Only thing missing is the limit, but we'll add that to the WP_Query by adding
$query = new WP_Query('post_type=post&post_status=publish&orderby=date&posts_per_page='.$upperlimit);
Refactor and remove unused/unnecessary code
global $wpdb, $voteiu_databasetable - are not needed anymore
$upperlimit - assignment can be done on one line
$lowerlimit - not used anymore as it's enough with the $upperlimit
$output = $pageposts; - assignment before return is unnecessary
So the complete ShowPostByVotes in votingfunctions.php would look something like this now:
function ShowPostByVotes() {
$upperlimit = is_numeric(get_option('voteiu_limit')) ? get_option('voteiu_limit') : 100 ;
$query = new WP_Query('post_type=post&post_status=publish&orderby=date&posts_per_page='.$upperlimit);
$pageposts = $query->get_posts();
$votesarray = array();
foreach ($pageposts as $post) {
$vote_array = GetVotes($post->ID, "array");
array_push($votesarray, array(GetVotes($post->ID)));
}
array_multisort($votesarray, SORT_DESC, $pageposts);
return $pageposts;
}
And you can use it in your index.php by using this code
<?php $pageposts = ShowPostByVotes(); ?>
<?php if ($pageposts): ?>
<?php global $post; ?>
<?php foreach ($pageposts as $post): ?>
<?php setup_postdata($post); ?>
<div class="post" id="post-<?php the_ID(); ?>">
<h2><a href="<?php the_permalink() ?>" rel="bookmark" title="Permanent Link to <?php the_title(); ?>">
<?php the_title(); ?></a></h2>
<small><?php the_time('F jS, Y') ?> <!-- by <?php the_author() ?> --></small>
<div class="entry">
<?php the_content('Read the rest of this entry »'); ?>
</div>
<p class="postmetadata">Posted in <?php the_category(', ') ?> | <?php edit_post_link('Edit', '', ' | '); ?>
<?php comments_popup_link('No Comments »', '1 Comment »', '% Comments »'); ?></p>
</div>
<?php endforeach; ?>
<?php else : ?>
<h2 class="center">Not Found</h2>
<p class="center">Sorry, but you are looking for something that isn't here.</p>
<?php include (TEMPLATEPATH . "/searchform.php"); ?>
<?php endif; ?> | {
"domain": "codereview.stackexchange",
"id": 188,
"tags": "php, mysql, wordpress"
} |
'Quantum' vs 'Classical' effects in Quantum Field Theory | Question: After reading a few textbooks on Quantum Field Theory there's something that's always struck me as bizarre. Take a scattering process in QED like $\gamma$,e$^-$ $\rightarrow$ $\gamma$,e$^-$. The leading order contribution to this process starts at tree-level. If we assume the incoming photon is randomly polarized, that the incoming electron has a random spin, and that we are insensitive to the photon's final polarization/the electron's final spin, then the differential cross section for $\gamma$,e$^-$ $\rightarrow$ $\gamma$,e$^-$ in the laboratory frame where the electron is initially at rest is given by the Klein–Nishina formula.
The thing is, I constantly read in various textbooks that the tree-level contribution to a scattering process corresponds to the contribution of the 'classical' field theory to said process, and that truly 'quantum' effects begin at next to leading order (almost always involving loop diagrams). But with a process like $\gamma$,e$^-$ $\rightarrow$ $\gamma$,e$^-$, the tree-level contribution contains effects that are simply not predicted by classical electromagnetism. The differential cross section predicted by classical electromagnetism is equal to $\frac{r_e^2}{2}\left(1+\cos^2\theta\right)$, whereas the differential cross section predicted by the Klein–Nishina formula has a dependence on the energy of the incoming photon, and reduces to the classical differential cross section as Planck's constant goes to zero. So clearly there's something predicted in the tree-level cross section that is missed by classical electromagnetism. So the idea that 'quantum' predictions begin at next to leading order seems erroneous at best.
Am I completely off here? Is the idea that 'quantum' effects start at next to leading order only meant to be taken as a heuristic way of thinking about things? Or am I just overthinking things?
Answer: Quantum effects vanish when $\hbar \rightarrow 0$. The analysis of the $\hbar$ powers in the vertices and propagators results in a simple rule asserting that the contribution of a diagram containing $N$ loops to the amplitudes is proportional to $\hbar^{N}$. Thus, we should expect the classical amplitudes to be given exactly by the tree-level diagrams.
However, there is one exception to the correspondence rule between tree-level diagrams and classical amplitudes. This exception is explained in the following work by Holstein and Donoghue. Please also see previous works of the same authors cited in the article, where more cases were analyzed.
The exception to the correspondence rule occurs when the loop diagram contains two or more massless propagators. In this case, it was observed by Holstein and Donoghue that contributions to the classical amplitudes occur at the one-loop level, due to a certain non-analytical term in momentum space. This term can be recognized to contain $\hbar$ to the zeroth power when the loop diagram is expressed in terms of the momenta rather than the wave numbers, as is usually implicitly done when loop diagrams are solved. Holstein and Donoghue show that this term does not exist in the case of massive propagators, where there is no contribution of the loop diagrams to the classical amplitudes.
The example given in the question, electron-photon scattering, does not suffer from the above problem. The expression given in the question is valid in the particular case of an electron in nonrelativistic motion, as emphasized in Jackson's book "Classical Electrodynamics", section 14.7. The fully relativistic classical cross section should be exactly equal to the tree-level quantum (Klein-Nishina) cross section.
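As a quick symbolic check of the reduction claimed in the question, one can verify that the spin-averaged Klein-Nishina angular distribution goes over to the Thomson form in the soft limit. This is a sketch with sympy; I assume the standard textbook form of the formula in units of $r_e^2$, with $\varepsilon = \hbar\omega/mc^2$, which vanishes as $\hbar\to 0$ at fixed classical frequency:

```python
import sympy as sp

theta = sp.Symbol('theta')
eps = sp.Symbol('epsilon', positive=True)   # photon energy over m c^2

ratio = 1 / (1 + eps * (1 - sp.cos(theta)))            # omega'/omega
# Klein-Nishina dsigma/dOmega in units of r_e^2:
kn = sp.Rational(1, 2) * ratio**2 * (ratio + 1/ratio - sp.sin(theta)**2)
# classical (Thomson) form:
thomson = sp.Rational(1, 2) * (1 + sp.cos(theta)**2)

assert sp.simplify(kn.subs(eps, 0) - thomson) == 0
```

The $\varepsilon \to 0$ limit reproduces the Thomson distribution $\tfrac{1}{2}(1+\cos^2\theta)$ exactly.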
Update
Answer to the first follow-up question:
In order to perform the $\hbar$ expansion correctly, the fields should be scaled by appropriate powers of $\hbar$ in order to make their commutation relations proportional to $\hbar$, so that they commute in the $\hbar \rightarrow 0$ limit. The coupling constants should be scaled accordingly. For example, the Dirac Lagrangian:
$$\mathcal{L}_D = c \bar{\psi}\bigg(\gamma^{\mu} (i \hbar \partial_{\mu} - e A_{\mu}) - m c \bigg)\psi$$
We need to absorb the $\hbar$ coming from the momentum operator into the field; redefining $\tilde{\psi}=\sqrt{\hbar}\,\psi$ gives
$$\mathcal{L}_D = c \bar{\tilde{\psi}}\bigg(\gamma^{\mu} (i \partial_{\mu} - \frac{e}{\hbar}A_{\mu}) - \frac{m}{\hbar} c \bigg)\tilde{\psi}$$
Thus we need to take
$$ \tilde{e} = \frac{e}{\hbar}, \quad \tilde{m} = \frac{m}{\hbar} $$
to be held fixed as $\hbar \rightarrow 0$.
Using the scaled electric charge, the fine structure constant takes the form:
$$\alpha = \frac{e^2}{4 \pi \epsilon_0 \hbar c} = \frac{\tilde{e}^2 \hbar}{4 \pi \epsilon_0 c}$$
Now, $\alpha$ is linear in $\hbar$, therefore vanishes in the classical limit.
Please see the following work by Brodsky and Hoyer, for the scaling of the various fields and coupling constant.
Answer to the second follow-up question:
In their original paper from 1929, Klein and Nishina computed the Compton cross section using Dirac's equation in the background of a classical radiation field. They did not use a quantized electromagnetic field. Therefore, they did not use QED. Please also see the derivation in Yazaki's article (there is a PDF version inside).
In my opinion, the only reason that they had to use the Dirac equation is to take into account the spin. I am quite sure that using more modern classical models for spin such as the Bargmann-Michel-Telegdi theory can be used to provide a fully classical derivation of the Klein-Nishina result. I couldn't find a reference that anyone has performed this work. I am sure it could be a nice project to do. | {
"domain": "physics.stackexchange",
"id": 42229,
"tags": "quantum-field-theory, scattering, feynman-diagrams, perturbation-theory"
} |
How do I find a system's impulse response from its state-space representation using the state transition matrix? | Question: Suppose we have a linear system represented in the standard state-space notation:
$$ \dot{x}(t)=Ax(t)+Bu(t)$$
$$y(t) = Cx(t) + Du(t)$$
In order to get its impulse response, it is possible to take its Laplace transform to get
$$sX=AX+BU$$
$$Y=CX+DU$$
and then solve for the transfer function which is
$$\frac{Y}{U}=C(sI-A)^{-1}B+D$$
Similarly, for a discrete system, the $\mathcal{Z}$-transform of
$$ x[n+1]=Ax[n]+Bu[n]$$
$$y[n] = Cx[n] + Du[n]$$
is
$$\frac{Y}{U}=C(zI-A)^{-1}B+D$$
This process seems a bit long, and I remember that there is a way to find the impulse response using the state transition matrix, which is the solution for $x$ of the first equation of each pair. Does anyone know how to do this?
Answer: You can approach the problem using the state transition matrix by solving the standard non-homogeneous ODE in the first equation. The solution to $\dot{x}(t)=A x(t) + B u(t)$ is
$$x(t)=x_0 e^{At}+\int_{0}^te^{A(t-t')}Bu(t')dt'$$
where $x_0=x(0)$. The quantity $e^{At}$ is called the state transition matrix (also the solution to the homogeneous ODE), which I'll refer to as $\Xi(t)$ (I don't recall the standard notation for this). Taking $x_0=0$, the equation for $y(t)$ becomes
$$y(t)=C\int_0^t\Xi(t-t')Bu(t')dt'+Du(t)$$
The above equation gives you the output as the input convolved with the system impulse response and indeed, you can take the Laplace transform of the above equation to verify. Noting that the Laplace transform of $\Xi(t)=e^{At}$ is $(sI-A)^{-1}$ and convolutions in the time domain become products in the s-domain, we get
$$Y=C(sI-A)^{-1}BU+DU$$
which gives you the same transfer function as in your question.
Regarding your comment about the full Laplace-transform approach being long, I wouldn't necessarily say it is. However, the state transition matrix approach might be simpler to implement, because several operations involving it can be computed with simple matrix multiplications and nothing more. | {
"domain": "dsp.stackexchange",
"id": 13,
"tags": "impulse-response, state-space, linear-systems"
} |
How to set Entrez Direct - ESearch NCBI entrez to pipe or ignore errors? | Question: I am using the following one-liner with NCBI entrez to query their databases from the terminal (see Entrez Direct: E-utilities on the Unix Command Line):
esearch -db assembly -query "${species_name}" |
xtract -pattern ENTREZ_DIRECT -element Count
This works like a charm, but occasionally it emits an XML error that messes up the output.
Here is an example of the error:
FAILURE ( Fri Jan 27 01:12:44 PM CST 2023 )
nquire -url https://eutils.ncbi.nlm.nih.gov/entrez/eutils/ esearch.fcgi -retmax 0 -usehistory y -db assembly -term "Crepis commutata" -tool edirect -edirect 18.7 -edirect_os Linux -email {}
<ErrorList>
<PhraseNotFound>commutata</PhraseNotFound>
<PhraseNotFound>Crepis commutata[All Fields]</PhraseNotFound>
</ErrorList>
SECOND ATTEMPT
Entrez, unfortunately, does not have a man page or --help entry on my installation for some reason, and I am not quite sure where I should change the one-liner to alter this behavior.
Does anyone have any clues?
Answer: This is because there are no assemblies for Crepis commutata within NCBI's assembly database.
The cross check is here:
https://www.ncbi.nlm.nih.gov/assembly/?term=Crepis+commutata
The output is a bit clearer than Entrez's:
The following terms were not found in Assembly: commutata, Crepis commutata[All Fields].
No items found.
All you need to do to correct it is issue a continue statement in the loop over species, above the code you've presented.
if [[ "$species" == 'Crepis commutata' ]]; then
continue
fi
esearch -db assembly -query "$species" | xtract -pattern ENTREZ_DIRECT -element Count
To answer the comments: if this is done via a Python subprocess, you could issue the following code (which has not been checked)
import subprocess

# species_list: an iterable of species names
for species in species_list:
    try:
        # shell=True keeps the esearch | xtract pipeline as a single string;
        # check=True turns a nonzero exit status into an exception
        subprocess.run(
            f'esearch -db assembly -query "{species}" | '
            'xtract -pattern ENTREZ_DIRECT -element Count',
            shell=True, check=True)
    except subprocess.CalledProcessError:
        print(f'Species {species} is not present.')
        continue | {
"domain": "bioinformatics.stackexchange",
"id": 2397,
"tags": "database, entrez, efetch, ncbi"
} |
Total angular momentum operator | Question: What do the eigenfunctions of the total angular momentum operator look like analytically?
I mean, the operator is given by $J = L+S$, so the eigenfunctions have to be tensor-product states, right? Can we explicitly say what they are?
I should add that I am particularly interested in the case where $L$ is the orbital angular momentum operator and $S$ the spin operator for electrons.
Answer: The trick is to expand one basis (say the uncoupled one, with elements $\{\vert LM_L\rangle \vert SM_S\rangle:= \vert L M_L;SM_S\rangle \}$) in terms of another (say the coupled one, with elements $\{\vert JM_J\rangle\}$).
The assumption is that the $\{\vert JM_J\rangle\}$ form a complete set, in the sense that they resolve the identity
$$
\hat 1=\sum_{JM_J}\vert JM_J\rangle \langle J M_J\vert\, .
$$
Hence:
\begin{align}
\vert LM_L; S M_S\rangle=
\sum_{J(M_J)}\vert JM_J\rangle \langle J M_J\vert LM_L;SM_S\rangle\, . \tag{1}
\end{align}
The overlap coefficients $\langle J M_J\vert L M_L;SM_S\rangle$ are known as Clebsch-Gordan coefficients, sometimes also written as $C^{JM_J}_{LM_L;SM_S}$ or variations on that theme. The coefficients are easiest to calculate from recursion relations, but the recursion has been solved and the coefficients have been reduced to summation form; the simplest cases are often tabulated.
The possible values of $J$ in the sum of Eq.(1) are in the range $L+S, L+S-1, L+S-2, \ldots, \vert L-S\vert$, often written more compactly as $\vert L-S\vert\le J\le L+S$.
In addition, since the total projection $\hat J_z=\hat L_z+\hat S_z$, the eigenvalue $M_J=M_L+M_S$, further restricting the summation in (1). This restricted sum is indicated with the parenthesis around $(M_J)$.
Because they are transition coefficients from one orthonormal basis to another, the CG coefficients satisfy a number of orthonormality conditions, such as
$$
\sum_{J } \vert \langle JM_J\vert LM_L;S M_S\rangle \vert^2=1\, .
$$
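For a concrete instance, this relation can be verified with sympy's built-in Clebsch-Gordan class for $L=1$, $S=\tfrac12$ (a quick sketch):

```python
from sympy import S, simplify
from sympy.physics.quantum.cg import CG

# couple L = 1 with S = 1/2; fix M_L = 0, M_S = +1/2, so M_J = 1/2
mL, mS = 0, S.Half
M = mL + mS

# sum over the allowed J = |L - S| ... L + S, here J in {1/2, 3/2}
total = sum(CG(1, mL, S.Half, mS, J, M).doit()**2
            for J in (S.Half, S(3)/2))
assert simplify(total - 1) == 0   # the orthonormality relation above
```

The two squared coefficients come out as $\tfrac13$ and $\tfrac23$, summing to 1 as required.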
There are additional such formulae. Starting from $\langle JM_J\vert J'M'_{J}\rangle=\delta_{JJ'}\delta_{M_J M'_J}$ and inserting
$$
\hat 1=\sum_{M_LM_S}\vert LM_L;SM_S\rangle \langle LM_L;SM_S\vert
$$
one gets
$$
\sum_{M_LM_S}\langle JM_J\vert L M_L;SM_S\rangle \langle LM_L;SM_S\vert J'M'_J\rangle=\delta_{JJ'}\delta_{M_JM'_J}
$$
etc. | {
"domain": "physics.stackexchange",
"id": 44609,
"tags": "quantum-mechanics, angular-momentum, operators, hilbert-space, representation-theory"
} |
Using Delta Dirac function as a mathematical tool in Green's functions | Question: So, I was studying Green's functions, and in general I understood that if I have an operator $\mathscr{O}$ that acts on a function $h_1(\vec{r})$ such that
$$\mathscr{O}h_1(\vec{r})=h_2(\vec{r})$$ Then all I need to do is to find the function, $g(\vec{r})$, on which the operator acts to yield the delta function.
Then I can write, $$h_1(\vec{r})=\int h_2(\vec{\tau})g(\vec{\tau}-\vec{r})\mathrm{d}^3\vec{\tau}$$
Reason being
$$\mathscr{O}h_1(\vec{r})=\mathscr{O}\int h_2(\vec{\tau})g(\vec{\tau}-\vec{r})\mathrm{d}^3\vec{\tau}=\int h_2(\vec{\tau})\delta(\vec{\tau}-\vec{r})\mathrm{d}^3\vec{\tau}$$
So far, so good, but then, in an effort to solve Poisson's equation, the author writes $$V(\vec{r})=\frac{1}{4\pi}\int \frac{\rho(\vec{\tau})}{\epsilon}\frac{1}{|\vec{\tau}-\vec{r}|}\mathrm{d}^3\vec{\tau} $$
Because (and I'm back-calculating)
$$-\nabla ^2\left(\frac{1}{4\pi|\vec{r}|}\right)=\delta(\vec{r}).$$
I am unable to understand this move. Is there some mathematical basis for this, or does this equation hold only so as to match the preconceived notion of the potential of point charges?
Answer:
Calculate the Laplacian of $1/r$ using spherical coordinates; you find that it is zero wherever $r \neq 0$.
Use Green's Theorem to calculate the volume integral of $\nabla^2(1/r)$ in a sphere (of arbitrary radius) about 0, the value of this integral is $-4\pi$ (regardless of the radius of the sphere chosen).
Since the volume integral is nonzero and there is only one point (0) where the function is nonzero, we have a $\delta$-function.
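The first two steps can be reproduced symbolically; a sympy sketch using the radial part of the spherical Laplacian and the flux of $\nabla(1/r)=-\hat r/r^2$ through a sphere of radius $R$:

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
R = sp.Symbol('R', positive=True)

# step 1: radial Laplacian of 1/r vanishes for r != 0
lap = sp.diff(r**2 * sp.diff(1/r, r), r) / r**2
assert sp.simplify(lap) == 0

# step 2: flux of grad(1/r) = -r_hat/r^2 through a sphere of any radius R,
# which by the divergence theorem equals the volume integral of the Laplacian
flux = sp.integrate(
    sp.integrate((-1/R**2) * R**2 * sp.sin(theta), (theta, 0, sp.pi)),
    (phi, 0, 2*sp.pi))
assert flux == -4*sp.pi
```

The $R$-independent value $-4\pi$, concentrated at a single point, is exactly the signature of $-4\pi\,\delta(\vec r)$.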
*edited to be less cryptic :) | {
"domain": "physics.stackexchange",
"id": 36725,
"tags": "homework-and-exercises, electrostatics, greens-functions, dirac-delta-distributions"
} |
Origin of antisymmetric $\ell=2$ irrep in direct product of two symmetric second-rank tensors | Question: In the excerpt below from Chapter 18, Section 6 of the textbook Group Theory -- Application to the Physics of Condensed Matter by Dresselhaus, Dresselhaus, and Jorio, the irreducible representations of the fourth-rank elasticity tensor are derived from a tensor product of two symmetric second-rank tensors (with irreps $0 \oplus 2$). Because the elasticity tensor is itself symmetric, our degrees of freedom only stem from symmetric irreps.
My question is, why is one of the copies of $\Gamma_{\ell=2}$ antisymmetric?
Answer:
The tensor
$$C~=~\sum_{i,j,k,l=1}^3 C_{ij,kl} (e^i\odot e^j)\otimes (e^k\odot e^l)$$
with symmetry
$$C_{ji,kl}~=~C_{ij,kl}~=~C_{ij,lk}$$
is split into symmetric and antisymmetric tensor product
$${\bf 36}~\cong~{\bf 6}\otimes{\bf 6}
~\cong~{\bf 6}\otimes_{(s)}{\bf 6}\oplus{\bf 6}\otimes_{(a)}{\bf 6}
~\cong~{\bf 6}\odot{\bf 6}\oplus{\bf 6}\wedge{\bf 6}
~\cong~{\bf 21}_{(s)}\oplus{\bf 15}_{(a)}.$$
Recall that symmetric $3\times3$ matrices decompose into traceless and traceful irreps
$${\bf 6}~\cong~{\bf 5}\oplus{\bf 1}$$
under the 3D rotation group. Next use the distributive law to rewrite in terms of irreps
$$ {\bf 6}\otimes{\bf 6}
~\cong~({\bf 5}\oplus{\bf 1})\otimes ({\bf 5}\oplus {\bf 1})
~\cong~{\bf 5}\otimes{\bf 5}\oplus({\bf 5}\otimes{\bf 1}\oplus{\bf 1}\otimes{\bf 5})\oplus {\bf 1}\otimes{\bf 1},$$
where
$$ {\bf 5}\otimes{\bf 5}~\cong~({\bf 9}\oplus{\bf 5}\oplus{\bf 1})_{(s)}\oplus({\bf 7}\oplus{\bf 3})_{(a)} ,$$
and
$$ {\bf 1}\otimes{\bf 1}~\cong~{\bf 1}_{(s)} .$$
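The dimension counts in the decompositions above can be checked mechanically (a small Python sketch; sym and asym are the dimensions of the symmetric and antisymmetric squares of an n-dimensional space):

```python
def sym(n):   # dim of the symmetric square of an n-dim space
    return n * (n + 1) // 2

def asym(n):  # dim of the antisymmetric square
    return n * (n - 1) // 2

# 36 = 6x6 splits into 21 symmetric + 15 antisymmetric components
assert sym(6) == 21 and asym(6) == 15 and sym(6) + asym(6) == 36

# 5x5: symmetric part 9+5+1, antisymmetric part 7+3
assert sym(5) == 9 + 5 + 1      # 15
assert asym(5) == 7 + 3         # 10

# reassemble: 21_(s) = 15 (5x5 sym) + 5 (5 sym 1) + 1 (1x1)
assert 21 == sym(5) + 5 + 1
# 15_(a) = 10 (5x5 antisym) + 5 (5 wedge 1)
assert 15 == asym(5) + 5
```

The bookkeeping makes visible where the extra antisymmetric copy of dimension 5 must come from: the mixed traceless/traceful product.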
Returning to OP's question, the antisymmetric copy ${\bf 5}_{(a)}$ comes from
the mixture of traceless and traceful parts:
$${\bf 5}\otimes{\bf 1}\oplus{\bf 1}\otimes{\bf 5}
~\cong~{\bf 5}\otimes_{(s)}{\bf 1}\oplus{\bf 5}\otimes_{(a)}{\bf 1}
~\cong~{\bf 5}\odot{\bf 1}\oplus{\bf 5}\wedge{\bf 1}
~\cong~{\bf 5}_{(s)}\oplus{\bf 5}_{(a)}.$$ | {
"domain": "physics.stackexchange",
"id": 63601,
"tags": "tensor-calculus, group-theory, representation-theory, elasticity, stress-strain"
} |
Which atom goes in the middle of a lewis dot structure? | Question: I've read online in multiple sites that the least electronegative atom goes in the middle with the exception of H, which always goes on the outside.
However, in the molecule NaCN, C is in the middle. Could anyone explain to me what is going on with this molecule?
Answer: ETA: I'm unclear about your question. Are you asking why nitrogen isn't the central atom or why sodium isn't? If it's the former, the explanation is below. If it is the latter, sodium, then it's because this is an ionic compound, and we build the lewis structures of the cation and anion separately.
First and foremost, carbon (EN=2.5) is less electronegative than nitrogen (EN=3.1), so by your rule it should be in the center.
A more complete explanation of why that is:
Your molecule is sodium cyanide, an ionic compound. The cyanide group, CN-, carries a negative charge. If we consider where that charge lies in the ion, it is a bit confusing. The basic structure of the cyanide is $\ce{:C:::N:^-}$.
At first glance we might think that the nitrogen would be the more negative end of the ion, since it is the more electronegative atom. However, if we consider the formal charges of the atoms in the ion, we see that nitrogen has a formal charge of 0 and carbon a formal charge of -1. Formal charge is not the same as actual charge, but it does tell us that the carbon will be the negative end of the ion, and that is what we see in its bonding.
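The formal-charge arithmetic behind that statement is just valence electrons minus nonbonding electrons minus bonds; a tiny sketch:

```python
def formal_charge(valence_electrons, nonbonding_electrons, bonds):
    # FC = V - N - B for an atom in a Lewis structure
    return valence_electrons - nonbonding_electrons - bonds

# :C:::N:-  -- a triple bond and one lone pair on each atom
assert formal_charge(4, 2, 3) == -1   # carbon carries the formal charge
assert formal_charge(5, 2, 3) == 0    # nitrogen is formally neutral
```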
Here, the sodium ion, Na+, bonds with the carbon in the cyanide ion. Even if it weren't a sodium ion, but something like hydrogen cyanide (with covalent bonding) the bond would be formed at the carbon end. This is because that structure would allow carbon to minimize its electron excess and reach a lower energy state than if the hydrogen stuck to the nitrogen. Again, formal charges predict the HCN structure over the HNC structure. | {
"domain": "chemistry.stackexchange",
"id": 1919,
"tags": "electrons, lewis-structure, electronegativity, electron-affinity"
} |
What is the axial transformation of a group, i.e. $SU(3)$? | Question: The Gell-Mann matrices $\lambda^\alpha$ are the generators of $SU(3)$.
Applying an $SU(3)$ transformation to the triple $q = (u, d, s)$ of 4-spinors looks like this:
$$ q \rightarrow q' = e^{i \Phi_\alpha \lambda^\alpha / 2} q.$$
So far I can follow and I also understand why the expression $\bar{q}q$ is invariant under this transformation.
Now my book defines axial transformations as $ q \rightarrow q' = e^{i \Phi_\alpha \lambda^\alpha / 2 \gamma_5} q$ and states that the expression $\bar{q}q$ is not invariant any longer under this transformation.
What confuses me is the fact that the $\lambda$ generators of $SU(3)$ and the $\gamma$ matrices are being multiplied in the exponent, even though the $\lambda$ are $3\times3$ and the $\gamma$ are $4\times4$ matrices.
Maybe this is not a matrix product but some sort of tensor product? In that case, how should the exponential expression be understood? I suspect $\lambda$ and $\gamma$ commute as they act on different vector spaces.
Or maybe it is a typo?
Or maybe the $\gamma_5$ is not 4-dimensional in this context?
Answer: A Dirac spinor, as your $q$ is, has four components, corresponding to one left-handed and one right-handed Weyl (two-component) spinor, $$q = q_L + q_R.$$ $\gamma_5$ is the $4\times4$ matrix that is $1$ on the right-handed part and $-1$ on the left-handed part. The expression $$q\mapsto q' = \exp(i\Phi_a \lambda^a /2 \gamma_5)q$$
means $$q_{R(L)} \mapsto q'_{R(L)} = \exp(\pm i\Phi_a \lambda^a/2) q_{R(L)} \tag{1}$$
that is, that the left- and right-handed parts of $q$ transform differently.
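The opposite-sign rotations on the two chiralities in Eq. (1) are easy to see numerically, since the flavor and spinor factors act on different slots of a Kronecker product. A numpy sketch, assuming the chiral basis ordered (R, R, L, L) and taking only the $a=3$ component of $\Phi_a\lambda^a$ for simplicity:

```python
import numpy as np

lam3 = np.diag([1.0, -1.0, 0.0])            # a diagonal Gell-Mann matrix
gamma5 = np.diag([1.0, 1.0, -1.0, -1.0])    # chiral basis, (R, R, L, L)

# the flavor and spinor factors act on different slots, so they commute
L = np.kron(lam3, np.eye(4))
G = np.kron(np.eye(3), gamma5)
assert np.allclose(L @ G, G @ L)

phi = 0.7
A = np.kron(lam3, gamma5)                   # Phi_a lambda^a gamma_5 with only a = 3
U = np.diag(np.exp(1j * phi / 2 * np.diag(A)))   # both factors diagonal here

# for the u component (lambda3 = +1): right-handed entries rotate by +phi/2,
# left-handed entries by -phi/2
assert np.isclose(U[0, 0], np.exp(1j * phi / 2))    # right-handed
assert np.isclose(U[2, 2], np.exp(-1j * phi / 2))   # left-handed
```

With a non-diagonal $\lambda^a$ the exponential is no longer elementwise, but the Kronecker structure and the commutation of the two factors are unchanged.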
The operator $A = \Phi_a \lambda^a \gamma_5$ is indeed a tensor product. The field $q$ can most explicitly be written $q_{f\alpha}$, where $f = u,d,s$ is a flavor index and $\alpha$ is a spinor index. Then $A$ is the product of an operator $\Phi_a \lambda^a$ acting on the flavor index and an operator $\gamma_5$ acting on the spinor index. | {
"domain": "physics.stackexchange",
"id": 15501,
"tags": "quantum-field-theory, standard-model, group-theory, group-representations, quarks"
} |
What is a flow generated by a function in phase space? | Question: I am reading the book "Quantum Field Theory" by Jean-Bernard Zuber and Claude Itzykson.
I encounter great difficulties starting from page 457, section 9-3-1, which introduces Dirac's constrained systems in classical mechanics. To begin with, consider a classical system with $2n$ degrees of freedom subject to a constraint $$f(p,q)=0. \tag{1}$$ As far as I understand, in field theory examples of such systems are gauge theories. Call $C$ the $(2n-1)$-dimensional manifold in phase space characterized by equation (1). Let $\mathcal{R}$ be the ring of differentiable functions which vanish on $C$. We use the notation $F\sim 0$ to mean $F\in\mathcal{R}$. For any two functions $f$ and $F$ that vanish on $C$, there must exist some function $\alpha(p,q)$ such that $F(p,q)=\alpha(p,q)f(p,q)$.
We then introduce a Lagrange multiplier $\lambda(t)$ to incorporate the constraint (1) into the action $$S=\int dt\left\{p\dot{q}-H(p,q)-\lambda(t)f(p,q)\right\}. \tag{2}$$
The equations of motion are $$\dot{q}^{i}=\frac{\partial H}{\partial p_{i}}+\lambda\frac{\partial f}{\partial p_{i}}, \quad \dot{p}_{i}=-\frac{\partial H}{\partial q^{i}}-\lambda\frac{\partial f}{\partial q^{i}}, \quad 1\leqslant i\leqslant n. \tag{3}$$
Next, we pick an arbitrary member $F\in\mathcal{R}$ and define an equivalence relation $\mathcal{E}$ on $C$. The author says the following:
Consider the flow generated by $F$ in phase space. In infinitesimal form, it is described by the equations $$\frac{dq^{i}}{du}=\frac{\partial F}{\partial p_{i}},\quad \frac{dp_{i}}{du}=-\frac{\partial F}{\partial q^{i}},\quad F\sim 0. \tag{4}$$
Two points on $C$ are equivalent iff they belong to the same trajectory of the flow (4).
My first question is: What is a flow generated by a function on the phase space? What's the mathematical definition? Why does it satisfy equations (4)?
The author then proves that the above definition of the equivalence relation is independent of time evolution and of the choice of $F$ in $\mathcal{R}$. Thus, the surface $C$ is split into time-independent equivalence classes $\mathcal{E}$, and the quotient space $C/\mathcal{E}$ is the real physical phase space, a $(2n-2)$-dimensional manifold. From my understanding, such flows correspond to gauge orbits in QFT, and all physical observables must be invariant along these flows. It is therefore sufficient to fix the gauge by choosing an auxiliary constraint $$g(p,q)=0 \tag{5}$$
which intersects with $C$ transversely. Then the authors say that to ensure $g(p,q)$ varies monotonically along each flow line, $g$ must satisfy the condition $$\left\{f,g\right\}_{P}\neq 0, \tag{6}$$
where $\left\{,\right\}_{P}$ represents the Poisson bracket.
My second question is: why does (6) ensure that $g$ varies monotonically along each flow line?
Answer: The flow of a classical Hamiltonian observable is the flow of its associated Hamiltonian vector field. The differential equation that an integral curve $(q(u),p(u))$ of this flow/vector field obeys is precisely your eq. (4). You should think of these integral curves as the motions you would get if the associated observable $f$ was your Hamiltonian - for $f=H$ eq. (4) is just Hamilton's equations of motion!
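To make "the flow generated by $F$" concrete, here is a small sketch (my own choice of $F(q,p) = (q^2+p^2)/2$, not from the book) that integrates eq. (4) with a standard Runge-Kutta step and verifies that $F$ is constant along its own flow, since $\{F,F\}=0$:

```python
import numpy as np

def F(q, p):
    # illustrative generator: F(q, p) = (q^2 + p^2) / 2
    return 0.5 * (q**2 + p**2)

def hamiltonian_vector_field(state):
    q, p = state
    return np.array([p, -q])          # (dF/dp, -dF/dq), i.e. eq. (4)

def rk4_step(state, du):
    # one 4th-order Runge-Kutta step along the flow parameter u
    k1 = hamiltonian_vector_field(state)
    k2 = hamiltonian_vector_field(state + 0.5 * du * k1)
    k3 = hamiltonian_vector_field(state + 0.5 * du * k2)
    k4 = hamiltonian_vector_field(state + du * k3)
    return state + du / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 0.0])          # starting point (q, p) on an integral curve
F0 = F(*state)
for _ in range(1000):                 # flow up to u = 10 in steps of 0.01
    state = rk4_step(state, 0.01)

# {F, F} = 0, so F is conserved along its own flow (up to integration error)
assert abs(F(*state) - F0) < 1e-6
```

The same loop, with any function $g$ evaluated along `state`, illustrates $\dot g = \{F, g\}$: whenever the bracket is nonzero everywhere on the curve, $g$ changes monotonically along it.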
Note that your eq. (4) can be equivalently written as $\dot{q} = \{f,q\}$ and $\dot{p} = \{f,p\}$, and more generally as $\dot{g} = \{f,g\}$ for the change $\dot{g}$ of any function $g(q(u),p(u))$ under the flow. This explains your eq. (6): if $\{f,g\} = 0$ at some point, the derivative of $g$ along the flow vanishes there and $g$ could stall or change direction, so requiring $\{f,g\} \neq 0$ guarantees monotonicity of $g(q(u),p(u))$ along every integral curve. | {
"domain": "physics.stackexchange",
"id": 85595,
"tags": "classical-mechanics, differential-geometry, hamiltonian-formalism, phase-space, constrained-dynamics"
} |
What prompts sodium to give up an electron? | Question: What triggers sodium to convert from elemental Na to Na+? I know that it wants to have a full valence shell and all that, but how does it just eject the electron out?
Answer: To assess the possibility of a chemical change, we need to look into its spontaneity, as given by the Gibbs free energy change at constant $T$ and $P$. For your particular case, the change occurs when an electron-accepting atom is present to form a crystalline compound after the electron exchange, in solution, where hydration releases energy, or in certain solvents such as liquid ammonia, where solvation of free electrons also takes part.
Case (1). The changes involved are $$\ce{Na_{(s)}->Na_{(g)}^+ +e^{-}}$$, $$\ce{M_{(g)} +e^{-}->M^{-}}$$ $$\ce{Na_{(g)}^+ + M_{(g)}^{-}->NaM_{(s)}}$$
Where $M$ is an electronegative atom. The ionization energy ($\Delta H_{I}$) and the sublimation energy ($\Delta H_S$) required in the first transformation need to be compensated for by the electron gain enthalpy ($\Delta H_{eg}$) released in the second transformation and the lattice enthalpy ($\Delta H_{\text{lattice}}$) released in the final transformation. If they are compensated, the change will be spontaneous. Moreover, the enthalpy released (negative) must also compensate for the lost entropy, since gaseous $M$ is converted to an ordered crystal. $\Delta G=\Delta H-T\Delta S$ tells you the compensation required for $\Delta G$ to be negative.
($|\Delta H_{eg}+\Delta H_{\text{lattice}}|-|\Delta H_I+\Delta H_S|>|T\Delta S|$)
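Case (1) is plain Born-Haber bookkeeping. A sketch with $M = \ce{Cl}$, i.e. $\ce{NaCl}$; the numbers are standard textbook values in kJ/mol, assumed here rather than taken from the answer:

```python
# Born-Haber cycle for NaCl (textbook values, kJ/mol; assumed, not from the answer)
dH_sub  = +107   # Na(s) -> Na(g)             sublimation
dH_ion  = +496   # Na(g) -> Na+(g) + e-       ionization
dH_diss = +122   # 1/2 Cl2(g) -> Cl(g)        dissociation (per Cl atom)
dH_eg   = -349   # Cl(g) + e- -> Cl-(g)       electron gain
dH_latt = -787   # Na+(g) + Cl-(g) -> NaCl(s) lattice formation

dH_formation = dH_sub + dH_ion + dH_diss + dH_eg + dH_latt
print(dH_formation)   # -411 kJ/mol: the released terms outweigh the ionization cost
```

With these values the released lattice and electron-gain enthalpies more than pay for sublimation plus ionization, which is exactly the compensation condition stated above.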
Case (2) Elemental sodium in water might lose electrons to the hydrogen evolving hydrogen gas and forming dissolved sodium hydroxide.
$$\ce{Na_{(s)}->Na_{(aq)}^+ + e^{-}}$$
$$\ce{H2O +e^{-}->\frac 12H2 +OH-}$$
here, the released hydration energy compensates for the required sublimation energy in the first step and the negative electrode potential in the second step. There is a net positive entropy change which favours the reaction, making $\Delta G$ negative. ($|\Delta H_{hyd}(\ce{OH^-,Na^+})|+|T\Delta S|=|\Delta G|$)
Case (3) is very similar to case (2), but here both the free electrons and the metal ions are solvated by liquid ammonia. The equations are similar, except that the solvation enthalpy is released instead of the hydration enthalpy. The entropy change is also positive here, supporting the change. | {
"domain": "chemistry.stackexchange",
"id": 734,
"tags": "bond"
} |
What defines an element's taste? | Question: A useful post by @Martin indicated that probably the naming of Sweetwater town is because of the sweet tasting lead compounds in it's water.
Then my question arose. I know that the taste of any material depends on which tongue buds it provokes. Now, my question is, which chemical property of an element defines, is directly related to, or causes its taste? (I deliberately avoid asking this for all chemical species to prevent very broad answers; if you think you can give not-too-long answers, my question is about all chemical species.)
Note that by a responsible chemical property I mean things related specifically to an atom, like e.config.
Answer: TL;DR: Don't taste pure elements. They either taste of nothing, taste foul, or kill you. Or all of the above.
Edit: Also, for clarification, elements don't have a taste. Taste is a biological reaction to chemical interactions happening with an element, not a property of the element itself. So, like everything else in chemistry, the taste is decided by the electron configuration of the element.
Taste as we know it happens in chemical reactions between our taste buds and the chemical species in question. Most of the taste is actually governed by the nose, where very specific receptors interact with very specific chemicals to send very specific signals to the brain, the chemistry of which is, while not trivial, also not interesting. It's your basic, run of the mill presumptive test.
In the taste buds, however, the receptors look for certain specific characteristics of the species being tasted. As an example, acids are sour, indicating that our sour taste buds are actually pH meters (although there might be more to it than this).
Similarly, most sugars have similar characteristics, that can be detected by a relatively simple chemical detector.
If, however, you go around tasting elements, you'll find two very interesting things.
First, you'll find that you're dying from all kinds of poisonings. At a rough count, all of the periodic table is poisonous in its pure form, with only a few exceptions (carbon, oxygen, hydrogen, nitrogen, silicon, titanium, noble gases, gold and platinum, possibly a few others).
Second, you'll find that the pure elements don't taste much at all, if anything. Taste is a completely biological concept, and these elements simply aren't found in nature, meaning that the tongue and nose have no frame of reference.
Most metals will taste metallic, due to tarnishes forming on their surface, mostly oxides, which all have the familiar, metallic taste. The non-metals are mostly fairly reactive, and taste absolutely foul as they react to the air in your moist mouth.
Gases like chlorine and fluorine may react and form chloride and fluoride, which taste sour, and then dissolve your mouth and nose, and then kill you.
The more radioactive metals will seem to taste of blood, but that's just acute radiation poisoning setting in, filling your mouth with blood. If they didn't kill you, they'd probably taste metallic like the rest of the metals, but they do, so they don't.
Finally, the alkali metals like lithium and its friends will taste bitter, also kill you.
"But wait!", I hear you scream, "What of, say, lithium in pharmaceuticals?"
When elements like lithium are ingested, it is done in a rather more controlled fashion. You aren't given a brick of pure, elemental lithium and told to go crazy when you go crazy; rather, it is delivered in pills, in conjunction with a delivery agent, in this case as lithium citrate.
Most elements have some biological role or other, but if they are in the kind of quantity you could taste, they are most likely poisonous, and in many cases, fatally so. | {
"domain": "chemistry.stackexchange",
"id": 2445,
"tags": "atoms, taste"
} |
Why will a strong acid neutralize as much base as a weak acid? | Question: This is a simple concept that I can't seem to understand. Why will a strong acid neutralize as much base as a weak acid, if the acids are of the same volume and concentration? A strong acid will dissociate more in solution and thus have a greater number of $\ce{H^+}$ ions, as far as my understanding goes. The opposite is true for the weak acid. So wouldn't the strong acid neutralize more base than the weak acid, because it has more $\ce{H^+}$ ions to neutralize the $\ce{OH^-}$ ions of the base with?
Answer: Strong acid is dissociated all but completely: you have a certain amount of $\ce H^+$ and start neutralizing them, one drop at a time. As you do so, their numbers steadily decrease. It feels like sawing a huge log with a hand saw: you see the amount of job, and you do it until it is all done.
Weak acid is dissociated only partially. You see a relatively small amount of $\ce H^+$, but as you start neutralizing them, more molecules of acid get dissociated and their $\ce H^+$ ions stand in place of those you've neutralized. It feels like sawing a thin log, but a peculiar one: it seems to regenerate under the saw.
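The "regenerating log" picture can be made quantitative: initially only a small fraction of a weak acid is dissociated, yet the total base it neutralizes equals the total acid. A minimal sketch, assuming a hypothetical acetic-acid-like $K_a$ (my value, not from the answer):

```python
import math

# 0.10 mol/L of acid. A strong acid is essentially fully dissociated; the weak
# acid below is not, yet both neutralize the same total amount of base.
C, Ka = 0.10, 1.8e-5   # Ka is a hypothetical, acetic-acid-like value

# initial [H+] of the weak acid from x^2 / (C - x) = Ka  (positive root)
x = (-Ka + math.sqrt(Ka**2 + 4 * Ka * C)) / 2
print(f"initially dissociated: {x:.4f} M out of {C} M")   # only ~1.3%

# As OH- removes H+, HA keeps re-dissociating ("the log regenerates"), so the
# base ultimately neutralized equals the total acid, strong or weak alike:
moles_base_per_litre = C
```

The equivalence point therefore depends only on the analytical concentration $C$, not on $K_a$; $K_a$ only changes the pH along the way.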
So at the end of the day, it is the sheer amount of acid that matters, rather than the immediate concentration of $\ce H^+$. | {
"domain": "chemistry.stackexchange",
"id": 11132,
"tags": "acid-base, ph"
} |
Prim's algorithm on graph with weights of only $1$ and $2$ on each edge | Question: I have this version of Prim's algorithm
Prim$(G=(V,E),s\in V,w)\\
1.\ d(s)\leftarrow 0;\forall u \neq s:d(u)\leftarrow \infty\quad \color{red}{O(|V|)}\\
2.\ \forall u \in V:p(u)\leftarrow \text{null}\quad \color{red}{O(|V|)}\\
3.\ S\leftarrow \emptyset,T\leftarrow \emptyset\quad \color{red}{O(1)}\\
4.\ Q \leftarrow \text{Makeheap}(V)\quad \color{red}{O(|V|)}\\
5.\ \text{while}\ Q\neq \emptyset\\
6.\qquad u\leftarrow Q.\text{ExtractMin()}\quad \color{red}{O(\log(|V|))}\\
7.\qquad S\leftarrow S\cup\{u\},\ T\leftarrow T\cup\{(u,p(u))\}\quad \color{red}{O(1)}\\
8.\qquad \text{for each}\ (u,z)\in E,z\in Q\\
9.\qquad\qquad \text{if}\ d(z)>w(u,z)\ \text{then}\\
10.\qquad\qquad\qquad Q.\text{Remove}(z);Q.\text{Insert}(z,w(u,z))\quad \color{red}{O(\log(|V|))}\\
11.\qquad\qquad\qquad d(z)\leftarrow w(u,z),\ p(z)\leftarrow u\quad \color{red}{O(1)}\\
12.\ \text{return }T$
Total time: $O(|E|\log(|V|))$
Given a weighted, connected, simple undirected graph $G$ with weights of only $1$ and $2$ on each edge, why is the running time of the algorithm $O(|E|+|V|\log(|V|))$ in this case?
I really don't understand why the running time is not the same in this case; any help?
Answer: The running time depends on how you implement the queue data structure.
Hint: Can you think of any way to implement the queue data structures, so that ExtractMin, Remove, and Insert operations are much faster, if you're given the knowledge that every edge has weight either 1 or 2? | {
"domain": "cs.stackexchange",
"id": 7767,
"tags": "algorithms, time-complexity, spanning-trees, minimum-spanning-tree, prims-algorithm"
} |
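Following up on the hint in that answer (this is my own sketch, not the answerer's code): with weights restricted to $\{1,2\}$, the key $d(z)$ only ever takes values in $\{0,1,2,\infty\}$, so the heap can be replaced by one list ("bucket") per key value plus lazy deletion. Insert and ExtractMin then cost $O(1)$, and Prim's algorithm runs in $O(|V|+|E|)$, within the stated $O(|E|+|V|\log(|V|))$ bound. The graph encoding and function name below are illustrative:

```python
def prim_bucket(n, adj):
    """Prim's MST for edge weights in {1, 2} using bucket queues.
    adj: dict vertex -> list of (neighbor, weight). Returns tree edges."""
    INF = float("inf")
    d = [INF] * n
    parent = [None] * n
    in_tree = [False] * n
    d[0] = 0
    # one bucket per possible key value; ExtractMin scans the 3 buckets: O(1)
    buckets = {0: [0], 1: [], 2: []}
    tree_edges = []
    while any(buckets.values()):
        for key in (0, 1, 2):           # nonempty bucket with smallest key
            if buckets[key]:
                u = buckets[key].pop()
                break
        if in_tree[u] or d[u] != key:
            continue                    # stale entry: lazy deletion, no Remove
        in_tree[u] = True
        if parent[u] is not None:
            tree_edges.append((parent[u], u))
        for z, w in adj[u]:
            if not in_tree[z] and w < d[z]:
                d[z] = w
                parent[z] = u
                buckets[w].append(z)    # Insert is O(1); old entry left stale
    return tree_edges

# toy graph: 4 vertices, weights only 1 and 2
adj = {0: [(1, 2), (2, 1)],
       1: [(0, 2), (2, 1), (3, 2)],
       2: [(0, 1), (1, 1), (3, 2)],
       3: [(1, 2), (2, 2)]}
mst = prim_bucket(4, adj)
```

Each vertex is inserted at most once per key value, so the total work over all queue operations is $O(|V|+|E|)$; the lazy-deletion trick replaces the $O(\log|V|)$ Remove/Insert pair in line 10 of the pseudocode.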