| anchor | positive | source |
|---|---|---|
Why does burnt hair smell bad? | Question: When I use hot tools like a hair straightener on my hair, my hair begins to smell bad, and the smell is very different from that produced by burning other things. So what's the gas produced that is responsible for this smell?
Answer: Hair is largely (~90%) composed of a protein called keratin, which originates in the hair follicle.
Now, keratin is composed of a variety of amino acids, including the sulfur-containing amino acid cysteine. All these amino acids are joined to each other by chemical bonds called peptide bonds to form the long chains that we call polypeptide chains. In the case of human hair, the polypeptide in question is keratin. The polypeptide chains are intertwined around each other in a helical shape.
The average composition of normal hair is 45.2% carbon, 27.9% oxygen, 6.6% hydrogen, 15.1% nitrogen and 5.2% sulfur.
(I got that diagram off of Google Images)
There are a whole bunch of chemical interactions that maintain the secondary and tertiary structures of proteins, such as van der Waals forces, hydrophobic interactions, hydrogen bonds, ionic bonds, etc. There is, however, one additional chemical interaction in proteins that contain the amino acids cysteine and methionine (both of which contain sulfur): disulfide linkages. You can see that in the diagram above (it's been marked in yellow, which is, fortunately, a very intuitive color when you're dealing with sulfur).
When you burn hair (or skin or nails... anything that has keratin in it, for that matter) these disulfide linkages are broken. The sulfur atoms are then free to chemically combine with other elements present in the protein and air, such as oxygen and hydrogen. The volatile sulfur compounds formed as a result are what's responsible for the fetid odor of burning hair.
Quite a few of the "bad smells" we come across every day are due to some sulfur-containing compound or other. A great example would be the smell of rotten eggs, which can be attributed to a volatile sulfur compound called hydrogen sulfide. Yet another example (as @VonBeche points out in the comments) would be tert-butylthiol, which is the odorant used to impart the characteristic smell of Liquefied Petroleum Gas (LPG). | {
"domain": "chemistry.stackexchange",
"id": 17592,
"tags": "everyday-chemistry, smell"
} |
Morphology - Why does the concept of opening and closing images exist when we already have erosion and dilation? | Question: If we consider the closing operation of an image, we first dilate the image and then erode it. So, first we expand the foreground (white pixels) and then we take away some part of the foreground. The result of this particular operation is similar to dilation, but less destructive of the original shape boundary.
However, we can combine erosion and dilation in different ways to control the extent of increase or decrease of the shape boundary. Then, why do we specifically define closing and opening operations?
Answer: As you said, closing is completely defined as a dilation followed by an erosion with the same structuring element (SE) (and, respectively, opening is an erosion followed by a dilation with the same structuring element).
The key point is that it must be the same structuring element, and this is much more specific than an erosion followed by a dilation with two arbitrary, different SEs.
Closing and opening operations are defined just for convenience, because, given an image and an SE, the result can easily be predicted and interpreted by the user.
The result of this particular operation is similar to dilation [...]
Not really: dilation is more of a thickening operation, while closing fills holes and smooths borders according to the SE (opening with a disk rounds angular borders, for instance).
Try erosion, dilation, opening and closing on simple shapes (square, disk, triangle, hollow shapes, sets of close shapes) with simple SEs (square, disk, lines) to better visualize the effect of each of them. You can also try erosion followed by dilation with a different SE to see the difference with an opening.
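To make the difference concrete, here is a small self-contained sketch (my own addition, in plain NumPy; the function names are illustrative, not from any library) of erosion, dilation and their compositions with a square SE on a binary image:

```python
import numpy as np

def dilate(img, r=1):
    """Binary dilation with a (2r+1)x(2r+1) square SE: OR over each neighborhood."""
    p = np.pad(img, r, constant_values=False)
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode(img, r=1):
    """Binary erosion with the same SE: AND over each neighborhood."""
    p = np.pad(img, r, constant_values=False)
    out = np.ones_like(img)
    h, w = img.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def closing(img, r=1):  # dilation then erosion, with the SAME SE
    return erode(dilate(img, r), r)

def opening(img, r=1):  # erosion then dilation, with the SAME SE
    return dilate(erode(img, r), r)

# A 7x7 foreground square with a one-pixel hole in the middle.
img = np.zeros((9, 9), dtype=bool)
img[1:8, 1:8] = True
img[4, 4] = False

print(closing(img)[4, 4])              # True: closing fills the hole...
print(closing(img).sum() - img.sum())  # 1: ...and changes nothing else here
print(opening(img)[4, 4])              # False: opening keeps the hole
print(dilate(img).sum() > img.sum())   # True: dilation thickens everything
```

Note how closing repairs the hole while leaving the outer boundary intact, whereas plain dilation grows the whole shape: that is exactly the "less destructive of the original shape boundary" behavior the question describes.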
To sum up, opening and closing are defined for convenience only, as they are specific compositions of erosion and dilation that have noteworthy properties (idempotence, for instance). | {
"domain": "dsp.stackexchange",
"id": 6561,
"tags": "image-processing, computer-vision, morphological-operations, morphology"
} |
Would Newton's law of gravity be the same between two neutrons at smaller distances? | Question: Imagine 2 neutrons placed at some distance apart; they should only gravitationally attract each other according to Newton's law of gravity. Theoretically the law should work at smaller distances until the Pauli exclusion principle sets in, but how about experimentally?
Answer: Neutrons have a magnetic moment on the order of $\mu=e\hbar/m$ and therefore experience an electromagnetic force of order $\mu_0 \mu^2/r^4$ or $\mu_0 e^2 \hbar^2/m^2 r^4$. Their gravitational force is of course $Gm^2/r^2$. The electromagnetic force dominates over the gravitational force when $r<\mu_0^{1/2}e\hbar/G^{1/2}m^2\approx5000$ meters. | {
"domain": "physics.stackexchange",
"id": 55254,
"tags": "newtonian-gravity, neutrons"
} |
Angular momentum | Question: I'm given the following problem - it's an easy one, but it's nearly 1 AM, I'm tired, and I need a push in the general direction of the solution:
A particle is assumed to be in the state
$\left(-\sqrt{1 \over 3} Y_1^0(\theta, \phi), -\sqrt{2 \over 3} Y_1^1(\theta, \phi)\right)^T$
$Y$ are spherical harmonics. What are the
total angular momentum $\vec{j}^2$
total orbital angular momentum $\vec{l}^2$
total spin angular momentum $\vec{s}^2$
$z$ component of the total angular momentum, $j_z$
of the particle?
Answer: Just recall the definitions:
spherical harmonics $Y^m_l$ are eigenvectors of both $\hat L_z$ and $\hat L^2$ with eigenvalues $m$ and $l(l+1)$ respectively (with $\hbar = 1$)
if you understand that this is a spin-${1 \over 2}$ particle, then it should be obvious that $\hat s^2 = s(s+1) = {3 \over 4}$. If not, recall what the spin matrices look like (e.g. start with the Pauli matrices) and compute $\hat s^2$ directly.
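The "direct computation" route can be sketched numerically (my own addition, not from the original answer; Condon–Shortley phases, $\hbar = 1$): for $l=1$, $s=\tfrac12$, the $m_j=+\tfrac12$ subspace is spanned by $\{|m_l{=}0,\uparrow\rangle, |m_l{=}1,\downarrow\rangle\}$, and all the operators reduce to $2\times2$ matrices.

```python
# Direct computation in the m_j = +1/2 subspace spanned by
# {|m_l=0, up>, |m_l=1, down>} for l = 1, s = 1/2 (hbar = 1).
import numpy as np

l, s = 1, 0.5
L2 = l * (l + 1) * np.eye(2)      # L^2 -> l(l+1) = 2 on both basis states
S2 = s * (s + 1) * np.eye(2)      # S^2 -> s(s+1) = 3/4 on both basis states
Jz = np.diag([0 + 0.5, 1 - 0.5])  # J_z -> m_l + m_s = 1/2 for both

# J^2 = L^2 + S^2 + 2 L_z S_z + L+ S- + L- S+ : the ladder terms couple the
# two basis states with matrix element sqrt(l(l+1) - 0) = sqrt(2), while
# 2 L_z S_z contributes 0 and -1 on the diagonal.
J2 = L2 + S2 + np.array([[0.0, np.sqrt(2)],
                         [np.sqrt(2), -1.0]])

print(np.round(np.linalg.eigvalsh(J2), 6))  # [0.75 3.75] = j(j+1) for j = 1/2, 3/2
```

Diagonalizing $\hat J^2$ in this subspace recovers exactly the two allowed couplings $j = l \pm \tfrac12$; checking which eigenvector the given spinor matches (up to phase convention) then answers the problem.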
the total Hilbert space $H$ is a tensor product of the scalar-particle system with the spin system, $H_{total} = H_s \otimes H_2$. So the operators on this space are $\hat {\mathbf L}_{total} = \hat {\mathbf L} \otimes \hat {\mathbb 1}_2$, $\hat {\mathbf s}_{total} = \hat {\mathbb 1}_s \otimes \hat {\mathbf s}$ (this just means that orbital momentum is still just a differential operator and spin is just a matrix operator) and $\hat J_{total} = \hat L_{total} + \hat s_{total}$. The rest is direct computation. | {
"domain": "physics.stackexchange",
"id": 517,
"tags": "quantum-mechanics, homework-and-exercises"
} |
Beginner Java Counter code | Question: My professor wants me to do this:
Write a number of interchangeable counters using the Counter interface below:
public interface Counter {
    /** Current value of this counter. */
    int value();
    /** Increment this counter. */
    void up();
    /** Decrement this counter. */
    void down();
}
I need comments on my work so far. Do you think it's sufficient? How do I work on the ResetableCounter? I'm very new to Java and it's been a long time since I've done C++.
Develop the following:
An interface ResetableCounter that supports the message void reset() in addition to those of Counter.
Here's what I did:
public interface ResetableCounter {
    void reset();
    int value();
    void up();
    void down();
}
An implementation of ResetableCounter called BasicCounter that starts at the value 0 and counts up and down by +1 and -1 respectively.
Here's what I did:
public class BasicCounter implements ResetableCounter
{
    int counterVariable = 0;

    public static void main(String[] args)
    {
        BasicCounter cnt = new BasicCounter();
        cnt.up();
        cnt.down();
        System.out.printf("The value is %d", cnt.counterVariable);
    }

    public void reset() {
        this.counterVariable = 0;
    }

    public int value() {
        return this.counterVariable;
    }

    public void up() {
        ++this.counterVariable;
    }

    public void down() {
        --this.counterVariable;
    }
}
An implementation of ResetableCounter called SquareCounter that starts at the value 2, counts up by squaring its current value, and counts down by taking the square root of its current value (always rounding up, i.e. 1.7 is rounded to 2, just like 1.2 is rounded to 2).
Here's what I did:
public class SquareCounter implements ResetableCounter {
    int counterVariable = 2;

    public static void main(String[] args) {
        SquareCounter cnt = new SquareCounter();
        cnt.up();
        cnt.down();
        double d = Math.ceil(cnt.counterVariable);
        System.out.printf("The value is %f", d);
    }

    public void reset() {
        this.counterVariable = 0;
    }

    public int value() {
        return this.counterVariable;
    }

    public void up() {
        Math.pow(this.counterVariable, 2);
    }

    public void down() {
        Math.sqrt(this.counterVariable);
    }
}
An implementation of ResetableCounter called FlexibleCounter that allows clients to specify a start value as well as an additive increment (used for counting up) when a counter is created. For example new FlexibleCounter(-10, 3) would yield a counter with the current value -10; after a call to up() its value would be -7.
public class FlexibleCounter implements ResetableCounter
{
    public static void main(String[] args)
    {
        int start = Integer.parseInt(args[0]);
        int step = Integer.parseInt(args[1]);
        start.up();
        System.out.printf("The value is %d", count);
    }

    public void reset() {
        this.count = 0;
    }

    public int value() {
        return this.count;
    }

    public void up() {
        this.count = args[0] + args[1];
    }

    public void down() {
        --this.count;
    }
}
All of your implementations should be resetable, and each should contain a main method that tests whether the implementation works as expected using assert as we did in lecture (this is a simple approach to unit testing which we'll talk about more later).
Answer: Your BasicCounter is impeccable. (The indentation of the braces is a bit off, but maybe that's an artifact from pasting into this site.) I would suggest renaming counterVariable to just count, since it would be silly to name everything fooVariable and barVariable.
Your SquareCounter is buggy. Think about what its reset() should do. (Hint: it would be a good idea for the SquareCounter() constructor to call reset().) Also, in up() and down(), how is counterVariable being modified? (Hint: right now, it isn't being modified.) Note that Math.pow() and Math.sqrt() return double rather than int, so you'll have to cast to int at some point.
Unfortunately, by the rules of this website, we only improve code that has already been written. Therefore, I'll decline to comment on FlexibleCounter until you present something that at least compiles. | {
"domain": "codereview.stackexchange",
"id": 28134,
"tags": "java, homework, beginner"
} |
What is the shape of a deuterium nucleus? | Question: What is the shape of a deuterium nucleus?
I can think of two obvious extremes.
A positive proton end intersecting with a neutral neutron end.
Or a cylinder with spherical caps on the ends that is positive on one end and neutral on the other.
Answer: Since the post is asking about the shape of the deuteron, this answer is based around a picture, rather than a physical description.
(Image: deuterium, from Physics Central)
Its structure is dominated by three components describing the interactions of the quark constituents of the neutron and proton, and its shape is not spherical. Recent tests have shown no deviations from the predictions of standard nuclear physics.
From Jefferson Lab Deuterium
The structure of the deuteron, the nucleus of the deuterium atom, is of prime importance to nuclear physicists. The deuteron is a bound state of one proton and one neutron, and it is the nucleus most often used in measurements of neutron structure. Studies of the deuteron have helped determine the role of non-nucleonic degrees of freedom in nuclei and the corrections from relativity. A recent series of Jefferson Lab measurements have focused on the role of quarks in the structure of the deuteron. At high-energy and high-momentum transfer, the deuteron is probed at a length scale smaller than the nucleon size and at an energy scale at which the physical picture simplifies — by considering quarks rather than numerous baryon resonances. Measurements of reaction cross sections confirm the approximate scaling behavior expected from the underlying quark structure, while polarization measurements show simple behavior that's in rough agreement with some quark-based calculations.
From Deuteron on Wikipedia
The deuteron has spin +1 ("triplet") and is thus a boson. The NMR frequency of deuterium is significantly different from that of common light hydrogen. Infrared spectroscopy also easily differentiates many deuterated compounds, due to the large difference in IR absorption frequency seen in the vibration of a chemical bond containing deuterium versus light hydrogen. The two stable isotopes of hydrogen can also be distinguished by using mass spectrometry.
The triplet deuteron is barely bound, at $E_B = 2.23$ MeV, so all the higher energy states are not bound. The singlet deuteron is a virtual state, with a negative binding energy of ~60 keV. There is no such stable particle, but this virtual particle transiently exists during neutron-proton inelastic scattering, accounting for the unusually large neutron scattering cross-section of the proton. | {
"domain": "physics.stackexchange",
"id": 46372,
"tags": "nuclear-physics"
} |
C++ XOR Function | Question: This XOR function is costing my program too much time (specifically the conversions).
How can this code be made faster?
string xor_str(string astr, string bstr){
    unsigned long a = strtoul(astr.c_str(), NULL, 16); // Convert strings to longs
    unsigned long b = strtoul(bstr.c_str(), NULL, 16);
    stringstream sstream;
    sstream << setfill('0') << setw(16) << hex << (a ^ b); // XOR numbers and
    string result = sstream.str();                         // convert to string; save result
    sstream.clear(); // Clear buffer (will this happen anyway?)
    return result;
}
For example, xor_str("74657374696e6731", "1111111111111111") returns "65746265787f7620".
Answer: How about doing the XOR on patched ASCII values instead? I'm sort of in doubt about the performance of the general-purpose strtoul.
I didn't profile it, so I may be underestimating the optimizations of modern compilers. Will anybody profile it, just out of curiosity? :)
edit:
The OP's code will work even for input strings with different lengths (e.g. ("1234", "4") will produce 0000000000001230). My code will work only with already leading-zero-extended numbers of the same length, although the length can be arbitrary, not just 16.
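(A quick cross-check, my own addition in Python rather than C++: XORing the two hex strings character by character, one nibble at a time, reproduces the example output from the question, so the string-at-a-time approach below computes the same result as the strtoul version for equal-length inputs.)

```python
# Cross-check of the per-nibble approach: XOR two equal-length hex
# strings character by character, without any long-integer conversion.
def xor_hex(a: str, b: str) -> str:
    assert len(a) == len(b)
    return "".join(format(int(x, 16) ^ int(y, 16), "x") for x, y in zip(a, b))

print(xor_hex("74657374696e6731", "1111111111111111"))  # 65746265787f7620
```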
#include <cassert>
#include <cstddef>
#include <string>

std::string xor_str(const std::string& astr, const std::string& bstr)
{
    const std::size_t bsize = bstr.size();
    assert(astr.size() == bsize);
    // this xor_str will work over arbitrarily long strings, they just have to have the same size
    std::string result(bsize, 0);
    for (std::size_t i = 0; i < bsize; ++i) {
        // astr[i] and bstr[i] is something from '0'..'9', 'a'..'f' or 'A'..'F'
        char ra = astr[i], rb = bstr[i];
        // make lower nibble of chars 0..F, from ASCII 'a'+9 = 0x6A, 'A'+9 = 0x4A
        if ('9' < ra) ra += 9;
        if ('9' < rb) rb += 9;
        ra = (ra ^ rb) & 0x0F; // xor lower nibbles of chars
        // transfer 'ra' back to ASCII and store it to result
        if (ra <= 9) result[i] = ra + '0';
        else result[i] = ra + 'a' - 10; // or 'A' for uppercase output
    }
    return result;
} | {
"domain": "codereview.stackexchange",
"id": 20794,
"tags": "c++, performance, converting"
} |
costmap2D from 3D pointcloud | Question:
I am trying to follow the tutorial http://www.ros.org/wiki/navigation/Tutorials/RobotSetup. After setting up the various prerequisites and launching with the move_base launch file, the move_base node produces a 2D costmap with obstacles, inflated obstacles, etc. In the rviz visualization, I observe that detected obstacles are not cleared in subsequently published costmaps. Old obstacles remain part of the current costmap as well. Then suddenly, after some time, everything is cleared and a correct costmap is displayed.
Am I missing any specific parameter? Currently all parameters are as advised in the tutorial.
I had set "observation_persistence: 0.0" in costmap_common_params.yaml but it does not help.
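(For reference, a minimal illustrative sketch, my own, of where such settings live; the parameter names follow the costmap_2d documentation, the values are examples only, and the sensor topic/frame names are hypothetical.)

```yaml
# costmap_common_params.yaml (illustrative values only)
obstacle_range: 2.5   # max range (m) at which obstacles are marked
raytrace_range: 3.0   # max range (m) used to raytrace free space and clear obstacles
observation_sources: point_cloud_sensor
point_cloud_sensor:
  sensor_frame: base_sensor   # hypothetical frame
  data_type: PointCloud
  topic: /cloud               # hypothetical topic
  marking: true
  clearing: true              # clearing must be on for old obstacles to be raytraced away
  observation_persistence: 0.0
```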
Following is a set of screenshots of rviz
Screenshot 1:
Screenshot 2:
Screenshot 3:
Screenshot 4:
Screenshot 5:
This is taken around 12 seconds after screenshot 4. The semicircles in front of and behind the robot are costmap artifacts; it is a plain road.
This is published on topic /move_base/local_costmap/obstacles.
Originally posted by prince on ROS Answers with karma: 660 on 2012-02-14
Post score: 3
Original comments
Comment by DimitriProsser on 2012-02-14:
This is when driving by an obstacle?
Comment by prince on 2012-02-14:
We had captured one data set as a bag which I am replaying for test purposes. The accumulation of previous obstacles in the latest costmap happens continuously, for example when taking a turn: obstacles which were on the left will shift along the horizon while their previous positions also remain.
Comment by prince on 2012-02-14:
I had edited original question to add screenshots to it.
Answer:
The tuning suggestions in that tutorial are designed for indoor robots. Once you take it outdoors you will need to make the costmap much more permissive to cope with non-flat roads and miscalibrations in your data.
FYI: having worked on autonomous vehicles, I can tell you roads are not flat at all.
Originally posted by tfoote with karma: 58457 on 2012-03-02
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 8234,
"tags": "navigation, costmap-2d-ros, costmap, costmap-2d"
} |
Is DMSO-κO a σ-donor, a π-donor, or both? | Question: DMSO is an ambidentate ligand that can bind in a κO fashion via the oxygen lone pair and in a κS fashion via the sulfur lone pair.
I know that DMSO can behave as a π-acceptor when it bonds to a metal at the sulfur, but what is the symmetry of the $\ce{O\bond{->}M}$ MO when DMSO binds at the oxygen? I have some diagrams of DMSO's HOMO and LUMO, and it seems clear that DMSO-κS is a π-acceptor due to the LUMO's potential to overlap with a $\mathrm{t_{2g}}$-type d orbital, but the HOMO looks like it would engage in a π interaction. This confuses me because I know from IR that the $\ce{S-O}$ bond is weakened when DMSO binds at the oxygen, and there are some lower-energy occupied MOs that would have σ symmetry overlapping with $\mathrm{d}_{z^2}$, for instance.
Is DMSO-κO a σ-donor, a π-donor, or both?
(keep in mind that there are some more σ-donor-looking MOs at energy levels below the HOMO)
Also: the reference below (1) says that DMSO's tendency to bind as DMSO-κS depends on the metal having a high enough "electron charge density" to make a sufficient back-donation contribution to the $\ce{M\bond{->}S}$ π bond. What does this mean? The paper also says that ruthenium and rhodium prefer to bind as DMSO-κS. Based on Pearson's HSAB concept, I would expect that ruthenium would prefer DMSO-κS because it is a soft acid as a result of its larger second-row ionic radius giving it lower charge density (and S is softer than O), but the paper seems to be saying the opposite. What do they mean by "electron charge density"?
(1) Panina, N. S; Calligaris, M. Inorg. Chim. Acta 2002, 334, 165-171.
Answer: The first thing to remember is that a σ-symmetric bond is always better than a π-symmetric one. Orbitals that can bond with each other are typically oriented in one direction or another (except for s-orbitals, but those can only participate in σ bonds, anyway). Approaching these orbitals from the direction ‘their lobes point towards’ and in the direction ‘your own lobe is pointing towards’ results in a much better overlap than if you attempted to create a π type interaction. Thus, typically any σ interaction will be preferable to any π interaction between a given set of two bonding partners.
To be perfectly honest, I am not aware of any ligand which does not initially bond as a σ donor.
The question remains what will happen if we add π effects into the picture. π effects can either be of the π-Lewis acidic type (e.g. $\ce{CO}$) or of the π-Lewis basic type (e.g. $\ce{F-}$). In the case of DMSO, the LUMO has much greater contributions on sulfur than on oxygen — in line with the general expectation that the LUMO is centred on electropositive partners while HOMOs centre more around the electronegative partners.
Also in line with what oxygen does in other ligands, I would expect DMSO-κO to be a π base, due to the probability of another populated orbital having significant contributions on oxygen. Contrarily, the picture of the LUMO shows how well sulfur in DMSO-κS can act as a Lewis acid.
For Lewis acidic bonding, i.e. π backbonding (also written as M$\leftrightarrows$L), it is beneficial if there is significant charge density on the central metal, i.e. if the metal has a somewhat Lewis basic character. Often, this will be enhanced by the presence of electrons in d-orbitals of the $\mathrm{t_{2g}}$ type.
For Lewis basic ligand bonding, i.e. π forward bonding (also written as L$\rightrightarrows$M), it is beneficial for the metal to be in a Lewis acidic, electron-poor state — often with empty d-orbitals.
In the DMSO-κS context, remember that sulfur carries a partial positive charge and is thus already somewhat electron-deficient ($\pm 0$ oxidation state, but it has almost the same electronegativity as carbon, so it is not far away from $\mathrm{+II}$). Therefore, the σ forward bond will be weak per se. Only if sufficient stabilisation through backbonding can be supplied will a DMSO-κS complex be sufficiently stable. | {
"domain": "chemistry.stackexchange",
"id": 6764,
"tags": "inorganic-chemistry, molecular-orbital-theory, coordination-compounds"
} |
Rails 3 nested controller method | Question: I have 2 nested resources in one of my Rails apps, and the way the index is handled by the inner resource bothers me somewhat:
def index
  @program = Program.approved.find(params[:program_id]) if params[:program_id]
  if params[:tags]
    tags = params[:tags].split(" ")
    if @program
      @sites = Site.where(:program_id => @program.id).tagged_with(tags).page(params[:page])
    else
      @sites = Site.joins(:program => :user).tagged_with(tags).page(params[:page])
    end
  else
    if @program
      @sites = Site.where(:program_id => @program.id).page(params[:page])
    else
      @sites = Site.joins(:program => :user).page(params[:page])
    end
  end

  respond_to do |format|
    format.html # index.html.erb
    format.json { render json: @sites }
  end
end
The complexity is caused because I want my Site resource to be accessible both through its Program and directly. The results are also optionally filtered by tags if they are included in the params.
The joins are required because if a site belongs to a user then I display Edit/Delete options.
It just doesn't feel like good Ruby code. So any refactoring tips would be helpful.
Answer: You could get rid of the code duplication by making use of the fact that Arel expressions are lazily evaluated:
def index
  @program = Program.approved.find(params[:program_id]) if params[:program_id]

  sites =
    if @program
      Site.where(program_id: @program)
    else
      Site.joins(program: :user)
    end

  if params[:tags]
    tags = params[:tags].split(" ")
    sites = sites.tagged_with(tags)
  end

  @sites = sites.page(params[:page])

  respond_to do |format|
    format.html
    format.json { render json: @sites }
  end
end
Because you're using Ruby 1.9 hash syntax in the render expression, I also adapted the other hashes. That seems more consistent to me. | {
"domain": "codereview.stackexchange",
"id": 4440,
"tags": "ruby, ruby-on-rails"
} |
Acidic nature of boric acid when its concentration is high | Question: The following statement was given for boric acid $(\ce{H3BO3}):$
At low concentrations $(\leq\pu{0.02 M})$ essentially $\ce{B(OH)3}$ and $\ce{B(OH)4-}$ are present, but at higher concentration the acidity increases and pH studies are consistent with the formation of polymeric species such as $$\ce{3 B(OH)3 <=> H+ + [B3O3(OH)4]- + 2 H2O}$$
Please explain the meaning of the italicised statement, i.e. that pH studies are consistent with the formation of polymeric species.
Answer: In the equation $\ce{H2O + H3BO3 -> H+ + B(OH)4-}$, one negative charge on the anion is shared among 4 electronegative oxygens. Yes, one positive charge on the proton is also shared with some water molecules, so the orthoborate ion probably shares some negative charge with some waters, but let's keep it simple.
Boric acid is known to complex with sugars to generate higher acidity. If boric acid complexes with itself, as in $$\ce{3 H3BO3 -> H+ + [B3O3(OH)4]- + 2 H2O},$$ the one negative charge on the anion can be considered to be distributed among 7 electronegative oxygens, or maybe still distributed mostly on the OH groups; but since they will be separated more, the stability of the ion will be greater, and the tendency to lose a proton will be greater: greater acidity. The electronegativity of boron is less than that of hydrogen (2.02 vs 2.2; see electronegativity on Wikipedia), so when the atoms in the ions are counted, the first ionization equation has one hydrogen and one boron as the metallic atoms getting the positive charge, while the second equation has one hydrogen and 3 borons donating electrons: more sharing of the positive charge here, though the hydrogen still gets the positive charge.
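As a quick bookkeeping check (my own addition), the condensation equation balances in both atoms and charge:

$$\underbrace{\ce{3 H3BO3}}_{3\,\mathrm{B},\ 9\,\mathrm{O},\ 9\,\mathrm{H}} \longrightarrow \underbrace{\ce{H+}}_{1\,\mathrm{H},\ +1} + \underbrace{\ce{[B3O3(OH)4]-}}_{3\,\mathrm{B},\ 7\,\mathrm{O},\ 4\,\mathrm{H},\ -1} + \underbrace{\ce{2 H2O}}_{2\,\mathrm{O},\ 4\,\mathrm{H}}$$

Each side carries 3 B, 9 O, 9 H and zero net charge.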
But this structure may be only a suggestion. Well-known polyborate anions include the triborate(1−), tetraborate(2−) and pentaborate(1−) anions. The condensation reaction for the formation of tetraborate(2−) is as follows (Wikipedia):
$$\ce{2 B(OH)3 + 2 [B(OH)4]− ⇌ [B4O5(OH)4]2- + 5 H2O}$$
The italicized text mentions an observation of pH that is consistent with the complexation of boric acid with itself. It doesn't prove the existence of the triborate ion alone, and perhaps it is only a way to keep the text shorter without ignoring complexation completely. Then along comes a student who looks into the sentence and sees that something was omitted or not explained thoroughly. (Keep asking questions!)
"domain": "chemistry.stackexchange",
"id": 13018,
"tags": "inorganic-chemistry, acid-base, aqueous-solution, ph, boron-family"
} |
Does GFAJ-1 use Adenosine triarsenate as its energy currency? | Question: Regarding the bacteria found in Mono Lake, CA that scientists believe uses or can use arsenic in its DNA backbone where life as we know it uses phosphorus (according to their experiments depriving the microbes of phosphorus and providing much arsenic), have researchers conjectured and tested whether the energy currency molecule used is also arsenic-based instead of ATP?
Answer: I'd have to agree that it's highly improbable that the GFAJ-1 strain used nucleoside triarsenates as an energy source. There are three main lines of evidence pointed out in the Science reviews that were published after the initial publication of the paper.
The incorporation of arsenate ions into NTAs is not a plug-and-play process, per se. Arsenate does not simply replace phosphate ions in molecules. In order for an NTA to form, it must arsenylate (like phosphorylate) a ribose molecule, then have the purine/pyrimidine ring form (and be attached to the ribose), and then two more arsenates have to be linked via ester bonds to the first arsenate. Mind you, this all happens in water. Arsenate esters are notoriously unstable in water, so it's difficult to imagine an NTA forming stably.
The original team in Felisa Wolfe-Simon's lab didn't intentionally add phosphate ions to their growth media; instead, the approximate amount of phosphate impurity was measured and reported. The lab compared this level of phosphate with the minimal levels reported in E. coli from another paper and basically said, "There isn't enough phosphate in our growth media for our GFAJ-1 strain to grow solely on phosphorus." Well, that logic is a bit of a non sequitur, because GFAJ-1 is a halomonad (they said so themselves) and there is literature pointing to a species within Halomonadaceae that can grow on phosphate levels lower than the impurity levels reported.
Although the Wolfe-Simon team added arsenate into their growth media, it's unlikely that any biologically-useful arsenic ended up in the cells at all. Under physiological electrochemical conditions, arsenate gets reduced to arsenite. Phosphate ions, on the other hand, are stable at physiological potentials.
All in all, there's a reason why life chose phosphorus instead of arsenic. Although both elements have ions with a propensity to make ester bonds, maybe arsenic just doesn't fit into our anthropocentric view of what life is. | {
"domain": "biology.stackexchange",
"id": 360,
"tags": "biochemistry, dna, extremophiles"
} |
How should we deal with interactions not from a “fundamental force”? | Question: Question
Should the cosmological constant and/or vacuum energy be listed as one of the fundamental interactions?
If not, how can we have actual energy and forces that are not assignable to one of the fundamental forces of the universe?
But If it should be added, how do we reconcile including the vacuum itself as one of the subtypes of quantum harmonic oscillation?
I simply do not understand the physics well enough to know which is the right view.
A Request
Hoping we don’t overcomplicate or divert. This much I gather: However we slice it, there are interactions that happen that cannot be assigned to one of the four fundamental interactions. Therefore, by logic alone, we are either
A. Incorrectly excluding a fifth that should be listed, perhaps with a caveat about how different it is. Or else we are
B. Incorrectly saying all real-world physical interactions come from fundamental forces. And therefore also either incorrectly calling them “the fundamental forces”, or at least incorrectly excluding their own caveat explaining how they are fundamental but don’t cover all fundamentals.
Please explain which one we are actually doing and the physics behind why that is the case. If you think part of it is opinion, please just touch on what the question of interpretation is.
If you’re following that and also see the dilemma cleanly, then you can probably skip the final section. Whether we are doing A or B, I then have two further questions below, as “If so..”.
Fundamental Interactions
We hear that our models have contained four fundamental forces for quite a while: gravitational, electromagnetic, and the strong and weak nuclear forces.
For example even wikipedia has been telling us this for years:
https://en.m.wikipedia.org/wiki/Fundamental_interaction
Quantum Vacuum
It’s not so straightforward that $\Lambda$ is equivalent to dark energy, although some would say exactly this.
My understanding is that $\Lambda$ is a single added term for the quantum vacuum and not at all the same as the other fundamental forces that arise in the vacuum. Do you see that as a fair summary?
At a minimum we would need to say it is fundamentally different than the other four. And that’s if $\Lambda$ operates such that we should add it as fifth outright. If so, what’s a parsimonious phrase for capturing how it’s included but different? Would you go as far as saying $\Lambda$ is vacuum energy, and that it’s the basis of, and even more fundamental than, the fundamental four - and how so?
$\mathbf{\Lambda}$ Interactions
But we have the Casimir effect, and dark energy’s effects, i.e. $\Lambda$ affecting the rate of expansion.
At a minimum, we must say there is more than just a single set of four fundamental forces and all interactions can be boiled down to those. And that’s if $\Lambda$ operates such that we should continue to exclude it. If so, what do we call it if not a fundamental force, in a way that describes for me its ability to have effect but justifies its exclusion? And secondly, what should be said about the fundamental forces regarding the fact that they don’t explain every force while being identified as exhaustive?
-END-
What’s not being asked
Part of this section is to show that this is not a repeat and that, as far as I have seen, no related answer speaks directly to this question. Another motive is to avoid unnecessary diversions that miss the heart of the question.
Not asking:
How virtual particles cause Casimir (except in how it helps me understand whether and how the situation is more like A or B, if it does).
How similar quantum field theories explain both fundamental forces and Casimir, or acceleration (except in how it helps us understand..).
What explains gravity or how gravity relates to quantum field theory. (Quantum gravity).
Anything at all about how the four forces relate to qft (except.. ).
Whether or in what sense $\Lambda$ is dark energy.
—
Other questions ask which fundamental force is responsible for something that comes from vacuum energy, and then say, "Oh, I thought it was one of the four. How odd." No question says: please explain the physics of this and which of these two things is true. So overall they don't even add up to an answer to this question.
Answer:
However we slice it, there are interactions that happen that cannot be assigned to one of the four fundamental interactions.
I think this premise is wrong, or at least we don't know yet whether it is right. It is possible that the effects you mention (the Casimir effect and the universe's accelerated expansion) are both explained by the four fundamental interactions.
Casimir effect: along with spontaneous emission and the Lamb shift, it is explained by quantum field theory, in particular by the electromagnetic interaction, so no fifth force is required.
Accelerated expansion of the universe: recent Planck data (and we'll see what Euclid tells us) suggest that the accelerated expansion is due to a cosmological constant. In this case it may just be caused by gravity. It may be that the Einstein field equations contain a $\Lambda$ term, just because the universe is like that. This would mean that the cosmological constant is just a feature of gravity. No fifth force required.
I'm playing the devil's advocate here, because few people would be satisfied by this explanation. Some theories would like to explain the expansion by postulating a scalar field whose effect is to provide a cosmological constant. Other theories talk about more general forms of dark energy, like a variable $\Lambda$ in time, quintessence, ghost models, modified Einstein gravity...
Many of these theories introduce one or more new fundamental fields, and so new interactions. If these theories are found to be correct, then we will add these interactions to the list.
Inflation: this also is often explained by introducing new scalar fields. New interactions? Possibly. But presently we are not even sure that inflation really took place.
My point is that, as of today, we are not sure whether there are other fundamental interactions, and we don't know what they might look like.
My answer is that we don't know yet whether it is A or B or even C (everything is explained by the four known interactions) | {
"domain": "physics.stackexchange",
"id": 82121,
"tags": "quantum-field-theory, cosmology, cosmological-inflation, cosmological-constant"
} |
Body's Electrical Resistance | Question: I am not someone specialized on physics, I am just curious on why our Body Electrical Resistance measure as shown by a multimeter varies so much.
When allocating the 2 probes each on one hand, the resistance varies from low 100k's of Ohms up to 700k Ohms and even more.
Can someone explain this phenomenon in an easy way for a physics dummy?
Thanks
Answer: The body's internal electrical resistance is quite low. Bodily fluids have enough ions (dissolved salts, mainly) for the conductance to be high. The thin layer of skin provides almost all of the resistance. Once the skin resistance is overcome (by wetting, or for higher voltages arcing), the low internal resistance dominates. So, for instance, how good a contact is made, how much surface area is involved, and whether conductive jelly is used makes a large difference. Lie detectors use skin resistance, which largely measures sweating, as one among several measures to try to ascertain the emotional response of the subject. | {
"domain": "physics.stackexchange",
"id": 524,
"tags": "electrical-resistance"
} |
How to include motor model in my robot simulation? | Question:
Hi,
I am working on a project which needs to simulate a robotic arm in Gazebo. But, as far as I understand, there is no provision to include a motor model in the simulation. However, a motor model needs to be included due to our project requirements. I would really appreciate it if someone could share their thoughts and suggestions on how to simulate a motor model inside Gazebo.
Thank you,
RCboT
Originally posted by rcbot on ROS Answers with karma: 39 on 2022-06-21
Post score: 0
Answer:
What do you mean by simulating a motor model there? It is not exactly clear - do you want to have an input to the motor model inside the simulation? So it produces forces etc. as an output?
You could use MoveIt and its controllers, e.g. like here:
https://medium.com/@tahsincankose/custom-manipulator-simulation-in-gazebo-and-motion-planning-with-moveit-c017eef1ea90
https://github.com/cambel/ur3
Originally posted by ljaniec with karma: 3064 on 2022-06-22
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by rcbot on 2022-06-22:
Thanks for your answer Ljaniec. I mean to say, how can we include motor inertia , damping and friction of our motor? Sorry I wasn't very clear before. Thank you
Comment by ljaniec on 2022-06-23:
It looks like you need to first create a mathematical model of the motor you want to test (e.g. using input-output decoupling?) and then implement it in Matlab & Simulink so you can "react" to the control signals as you wanted (as in a motor with inertia, damping etc.), then (for simulation) prepare a URDF model with MoveIt or directly with ros2_control plugins and connect it with your mathematical model of the motor. There is probably a much simpler approach, where an internal ROS node would translate the control signals as in a modeled real motor - I think the hardest part is the motor model itself, where you have to do a literature review. I found a repository which could be a good starting base/an example at least: https://github.com/nilseuropa/gazebo_ros_motors
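For what it's worth, here is a minimal sketch of the kind of model described in the comment above: a single joint driven by a first-order motor model with rotor inertia, viscous damping, and Coulomb friction, stepped forward with explicit Euler integration. All parameter values are made up for illustration, and a real implementation would sit behind a Gazebo or ros2_control plugin interface rather than plain Python:

```python
def step_motor(omega, tau_in, J=0.01, b=0.1, tau_c=0.02, dt=0.001):
    """One Euler step of J*domega/dt = tau_in - b*omega - tau_c*sign(omega).

    J: rotor inertia, b: viscous damping coefficient,
    tau_c: Coulomb friction torque, dt: integration time step.
    """
    coulomb = tau_c if omega > 0 else (-tau_c if omega < 0 else 0.0)
    domega_dt = (tau_in - b * omega - coulomb) / J
    return omega + domega_dt * dt

# With a constant input torque, the speed settles near (tau_in - tau_c) / b.
omega = 0.0
for _ in range(5000):  # 5 s of simulated time at dt = 0.001
    omega = step_motor(omega, tau_in=1.0)
```

The same state update would be called from the simulation's control loop, with `tau_in` coming from the controller's commanded effort.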
Comment by rcbot on 2022-06-23:
Thanks for your suggestions Ljaniec. I will go through them :) | {
"domain": "robotics.stackexchange",
"id": 37788,
"tags": "ros, model"
} |
How inefficient is it to create the same publisher multiple times? | Question:
I'm curious about the potential performance hit of creating the same publisher multiple times, for example in a callback:
def callback(self, msg):
publisher = rospy.Publisher(topic, String, queue_size=1)
publisher.publish(msg.data)
versus starting the publisher once and storing it:
def __init__(self, topic):
self.publisher = rospy.Publisher(topic, String, queue_size=1)
def callback(self, msg):
self.publisher.publish(msg.data)
I have some code that will publish to (potentially) many topics; creating all the publishers at init may not be worth the additional hassle of bookkeeping if it's just as easy to just create the publishers on an as-needed basis.
For what it's worth, I ran some ipython timing code:
import rospy
from std_msgs.msg import String
%timeit p = rospy.Publisher('asdf', String, queue_size=10)
100000 loops, best of 3: 6.79 us per loop
It doesn't appear that starting the publisher is very expensive performance-wise.
Is there anything else I'm missing that would be a downside of starting the same publisher multiple times?
Originally posted by Felix Duvallet on ROS Answers with karma: 539 on 2015-10-14
Post score: 0
Answer:
With your first example, using the publisher immediately after it is created will almost certainly result in the published message not actually being published. It takes a nonzero amount of time for the publisher to register itself with the master, but the constructor is a non-blocking call, so it returns before that registration has completed.
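If creating every publisher in `__init__` really is too much bookkeeping, one middle ground (not from the original answer) is to create each publisher lazily on first use and cache it, so registration happens only once per topic. Sketched here with a stand-in class instead of `rospy.Publisher`:

```python
class FakePublisher:
    """Stand-in for rospy.Publisher so the pattern can be shown in isolation."""
    def __init__(self, topic):
        self.topic = topic

class Node:
    def __init__(self):
        self._publishers = {}  # topic name -> publisher, created on demand

    def get_publisher(self, topic):
        # create the publisher only on the first request for a topic,
        # then hand back the cached instance on every later call
        if topic not in self._publishers:
            self._publishers[topic] = FakePublisher(topic)
        return self._publishers[topic]
```

With `rospy` you would still want to fetch a topic's publisher well before its first publish, so that registration with the master has time to complete.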
Originally posted by Dan Lazewatsky with karma: 9115 on 2015-10-14
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 22784,
"tags": "ros, rospy, performance, publisher"
} |
Time Derivative of the Hamiltonian for a Quantum Simple Harmonic Oscillator | Question: I am reading an article on quantum refrigerator. Here is the link of the article. The arXiv version is available here. The working medium is an ensemble of non-interacting particles in a harmonic potential. The authors argue that under certain assumptions we can describe the state of the system along the cycle (which is a reversed Otto cycle), using 3 operators. These operators are the Hamiltonian:
$\hat{H}(t)=\frac{1}{2m}\hat{P}^2+\frac{1}{2}m[\omega(t)]^2\hat{Q}^2$
the Lagrangian:
$\hat{L}(t)=\frac{1}{2m}\hat{P}^2-\frac{1}{2}m[\omega(t)]^2\hat{Q}^2$
and the correlation operator:
$\hat{C}(t)=\frac{1}{2}\omega(t)(\hat{Q}\hat{P}+\hat{P}\hat{Q})$
This is due to the fact that this set of operators form a closed Lie algebra. Hence:
$\hat{\rho} = \hat{\rho}(\hat{H},\hat{L},\hat{C})$
In the adiabatic strokes of the cycle, the system doesn't interact with the environment (closed system) therefore the time evolution of any operator can be given by:
$\frac{d\hat{O}(t)}{dt}=\frac{i}{\hbar}[\hat{H}(t),\hat{O}(t)]+\frac{\partial \hat{O}(t)}{\partial t}$
Then, they claim that in the adiabatic stroke the time evolution of the Hamiltonian can be given as:
$\frac{d}{dt}\hat{H}=\frac{\dot{\omega}}{\omega}(\hat{H}-\hat{L})$
I don't understand how they derived this. In my attempt I write the Hamiltonian in terms of the ladder operators:
$\hat{H}=\hbar \omega(t) (a^{\dagger}a+\frac{1}{2})$
The Hamiltonian commutes with itself. Therefore we only need to calculate the explicit time derivative:
$\frac{\partial \hat{H}}{\partial t} = \hbar \dot{\omega}(a^{\dagger}a+\frac{1}{2})+\hbar \omega (\dot{a}^{\dagger}a+a^{\dagger}\dot{a})$
The first term on the right hand side is indeed $\frac{\dot{\omega}}{\omega}\hat{H}$. However, in my calculations I couldn't verify that the second term is $-\frac{\dot{\omega}}{\omega}\hat{L}$. In order to express the time derivatives of the ladder operators I wrote them in terms of the position and the momentum operators and used the fact that for this particular system:
$\dot{\hat{Q}}=\frac{\hat{P}}{m}, \quad \dot{\hat{P}}= -m\omega^2\hat{Q}$
Which, upon calculation didn't give me the desired result. What am I doing wrong?
Answer: As you correctly state, the Hamiltonian commutes with itself for all times, so we only need to consider the explicit time dependence. The operators $P$ and $Q$ are assumed to have no explicit time dependence (they may have a time dependence due to the dynamics of the system, but this is taken into account in the vanishing commutator term). We therefore have
\begin{align}
\frac{\partial}{\partial t} H &= \frac{\partial}{\partial t}\left[\frac{1}{2m} P^2 + \frac{1}{2}m\omega(t)^2 Q^2\right]\\
&= \frac{1}{2}mQ^2 \frac{\partial}{\partial t} \omega^2\\
&= m \omega \dot{\omega}Q^2\\
&= \frac{\dot\omega}{\omega}\,m\omega^2 Q^2\\
&= \frac{\dot\omega}{\omega}\left(H-L\right)
\end{align} | {
"domain": "physics.stackexchange",
"id": 83406,
"tags": "quantum-mechanics, harmonic-oscillator, hamiltonian, time-evolution, open-quantum-systems"
} |
About detectors after final round of surface code | Question: Stim's surface code implementation example includes additional detectors after the final round to measure data qubits.
MX 1 3 5 8 10 12 15 17 19
DETECTOR(2, 0, 1) rec[-8] rec[-9] rec[-17] # L8 L15 Z14
DETECTOR(2, 4, 1) rec[-2] rec[-3] rec[-5] rec[-6] rec[-12] # d12 d19 d10 d17 Z18
DETECTOR(4, 2, 1) rec[-4] rec[-5] rec[-7] rec[-8] rec[-15] # d3 d10 L1 L8 Z9
DETECTOR(4, 6, 1) rec[-1] rec[-2] rec[-10] # d5 d12 Z13
Why are these detectors necessary? I understand that the decoder's role is to identify error locations based on measurements of the measurement qubits within each round, but would it also be acceptable to decode using measurements of the data qubits? Thank you very much in advance.
Answer:
Why are these detectors necessary?
If you don't check that the data measurements agree with the stabilizer measurements, you wouldn't be comparing the data measurements to anything. If you're not comparing them to anything, you're not checking anything. You need to check things to correct errors.
would it also be acceptable to decode using measurements of the data qubits?
I don't understand the question. This is decoding using the data qubit measurements. Do you mean using the data qubits but not referencing the stabilizer measurements at all? That won't work because without the stabilizer measurements you have no reference for what to expect from the data qubits.
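As a toy illustration in plain Python (not Stim) of the comparison these final detectors perform: each detector XORs the parity of the data-qubit measurements covered by a stabilizer against the last recorded outcome of that stabilizer, and fires when the two disagree:

```python
def final_detector(data_bits, last_stabilizer_outcome):
    """Return 1 if the data-measurement parity disagrees with the last
    stabilizer round's outcome, 0 if they agree."""
    parity = 0
    for bit in data_bits:
        parity ^= bit
    return parity ^ last_stabilizer_outcome

# no error since the last stabilizer round: the detector stays silent
assert final_detector([0, 1, 1], 0) == 0
# one data qubit flipped after that round: the detector fires
assert final_detector([1, 1, 1], 0) == 1
```

Without the `last_stabilizer_outcome` reference, the parity of the data bits alone carries no information about whether an error occurred, which is the point the answer makes.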
If you try to declare detectors that only depend on the data measurements, you won't be able to decode it using matching anymore. Errors in the bulk will have four symptoms (flipping the two local bulk detectors and the two end-of-time data-only detectors) instead of two symptoms. Also, in situations more complex than a memory experiment, you'll find that these data-only detectors aren't actually deterministic because they rely on the data measurement basis at the end being the same as the data initialization basis at the start but in general computations that's not always true. | {
"domain": "quantumcomputing.stackexchange",
"id": 5044,
"tags": "stim, surface-code"
} |
Breaking a magnet | Question: When a bar magnet is broken into 2 pieces, we know that the 2 new pieces also become bar magnets with 2 poles. The piece corresponding to the north pole of the unbroken magnet gets a new south pole, and likewise the other piece gets a new north pole. My question is, how do these broken parts know which pole they're supposed to acquire?
My guess is that it has to do with the alignment of the so called "domains" of the ferromagnetic material that dictate which polarity each broken end will acquire, but I'm not entirely sure. Would appreciate some more insight!
Answer: The pieces do not need to know which end should be a $+$ or a $-$ pole, the internal distribution of the dipoles is frozen at manufacture in the permanent magnet. What is actually frozen in are the walls, the boundaries, within which the elementary dipoles are all aligned naturally. Normally, within a single crystal there are several domains separated by walls, and within each one of these several domains the elementary dipoles of the constituent atoms are aligned.
When a soft magnet is unbiased, the walls are positioned naturally so that the effective polarization of the whole crystal is zero. An external bias field, unless it is exceptionally strong, cannot change the direction of the aligned dipoles within a domain (there are preferred crystalline directions for the dipole alignments), but it can move the walls between the domains so that the effective polarization is mostly along the external bias field.
In a hard magnet the walls move with great difficulty, i.e., with "friction", but they can be unfrozen at a temperature above the so-called Curie point, at which the ferromagnet becomes a paramagnet. When the magnet is placed in a bias field to align the dipoles and then cooled below the Curie temperature, the walls of the evolving domains are frozen in, so that the magnet becomes a permanent one. In this way, every macroscopic piece of the magnet is aligned with the original bias field in which it was created. When you break a big magnet into smaller but macroscopic pieces, they all present themselves as similarly polarized but smaller permanent magnets. | {
"domain": "physics.stackexchange",
"id": 97058,
"tags": "ferromagnetism"
} |
Accessing argparse arguments from the class | Question: I have the following sample code running unit tests:
#!/usr/bin/env python3
import sys
import unittest
import argparse
class ParentTest(unittest.TestCase):
None
class Test1(ParentTest):
def test_if_verbose(self):
import __main__ # FIXME
print("Success!") if __main__.args.verbose else "" # FIXME
class Test2(ParentTest):
None
if __name__ == '__main__':
# Parse arguments.
parser = argparse.ArgumentParser(add_help=False)
parser.add_argument("-?", "--help", action="help", help="show this help message and exit" )
parser.add_argument("-v", "--verbose", action="store_true", dest="verbose", help="increase output verbosity" )
parser.add_argument('files', nargs='*')
args = parser.parse_args()
print(args.verbose)
# Add tests.
alltests = unittest.TestSuite()
alltests.addTest(unittest.makeSuite(Test1))
alltests.addTest(unittest.makeSuite(Test2))
result = unittest.TextTestRunner(verbosity=2).run(alltests) # Run tests.
sys.exit(not result.wasSuccessful())
and I'd like to access command-line argument values from the class.
In here it's suggested that using __main__ isn't really a good approach.
Is there any better way of doing it, where individual tests could have access to argument values which were parsed already in main?
Testing:
python3 test.py
python3 test.py -v
Answer: You could do this by making the test classes take args as a parameter, and crafting a custom make_suite method instead of unittest.makeSuite, like this:
class ParentTest(unittest.TestCase):
def __init__(self, methodName='runTest', args=None):
super().__init__(methodName)
self.args = args
class Test1(ParentTest):
def test_if_verbose(self):
print("Success!") if self.args.verbose else ""
class Test2(ParentTest):
pass
if __name__ == '__main__':
parser = argparse.ArgumentParser(add_help=False)
parser.add_argument("-?", "--help", action="help", help="show this help message and exit" )
parser.add_argument("-v", "--verbose", action="store_true", dest="verbose", help="increase output verbosity" )
parser.add_argument('files', nargs='*')
args = parser.parse_args()
print(args.verbose)
def make_suite(testcase_class):
testloader = unittest.TestLoader()
testnames = testloader.getTestCaseNames(testcase_class)
suite = unittest.TestSuite()
for name in testnames:
suite.addTest(testcase_class(name, args=args))
return suite
# Add tests.
alltests = unittest.TestSuite()
alltests.addTest(make_suite(Test1))
alltests.addTest(make_suite(Test2))
result = unittest.TextTestRunner(verbosity=2).run(alltests) # Run tests.
sys.exit(not result.wasSuccessful())
The sample output is the same as the original code:
$ python t2.py ; python t2.py -v
False
test_if_verbose (__main__.Test1) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
True
test_if_verbose (__main__.Test1) ... Success!
ok
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
The code in my answer was inspired by these posts:
https://stackoverflow.com/a/17260551/641955
http://eli.thegreenplace.net/2011/08/02/python-unit-testing-parametrized-test-cases/ | {
"domain": "codereview.stackexchange",
"id": 13735,
"tags": "python, unit-testing, python-3.x"
} |
Do fallen leaves produce oxygen? | Question: After a leaf has fallen from a tree, if it is still green and hasn't dried out, is it still converting CO2 into O2 if not put in water?
Can anyone find any data showing how long any different species of leaf will continue to produce oxygen after fallen?
Answer: Short answer
It all depends on the time window you are talking about. After having been detached from the mother plant, a leaf will typically keep on photosynthesizing for a few hours or so.
Background
Cutting the stalk of the leaf results in impaired water flow and wilting. As soon as a leaf is detached from the plant, it will also be cut off from hormones, minerals and other nutrients. The result of this is that senescence (and death) sets in straight away. However, leaves will typically stay green and moist for hours or even days, depending on the conditions in which they are stored. Hence, in practice, photosynthesis can be measured at least a few hours after a typical leaf is picked (source: Science and Plants for Schools). | {
"domain": "biology.stackexchange",
"id": 8674,
"tags": "botany, photosynthesis"
} |
A small python game | Question: My teacher gave us this game to make. Basically, the computer randomly generates 5 letters, and there are 5 players; if a player guesses the correct letter given by the computer, it prints "good job", else "good luck next time". I would appreciate it if you could make my code a bit better.
from string import ascii_uppercase
from random import choice
randompc = []
user_answers = []
for i in range(5):
randompc.append(choice(ascii_uppercase))
for i in range(5):
user_input = str(
input("User " + str(i+1)+" enter a character from A-Z:\n"))
while user_input.isalpha() == False and len(user_input) > 0:
user_input = str(
input("User " + str(i+1)+" enter a character from A-Z:\n"))
else:
user_answers.append(user_input.upper())
def checkifcorrect():
for i in range(len(user_answers)):
if user_answers[i] == randompc[i]:
print("Good Job user "+str(i+1))
else:
print("Good luck next time user "+str(i+1))
print("The answers were: ")
print(*randompc, sep=", ")
checkifcorrect()
Answer: As it is, your program is already in pretty good shape. There is an error in the logic of the input part (what happens if the user enters a string longer than one character? what happens if they enter an empty string?), but apart from that, it's a good starting point.
But there are several improvements that you can do in order to make your code more idiomatic, to improve the readability, and to give it a more concise structure. Your program logic basically consists of three parts: an initialization, an input part, and an evaluation. You have a separate function for the last part (your function checkifcorrect()), but not for the first two parts. This is the first change that I'd recommend, and my answer is basically progressing through these three parts.
Initialization
Your initialization is fairly simple. All that needs to be done is that you need a list of five randomly chosen letters. You decided to use the choice() function for that, which does work. But there's an even more tailor-suited function for your use case: choices(lst, k), which creates a list of k choices from lst (with replacement, i.e. the same elements can be chosen more than once). Your initialization function will now contain of a single line of code:
def assign_letters():
return choices(ascii_uppercase, k=5)
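As a quick check of what `choices` returns (with a fixed seed so the draw is repeatable):

```python
from random import choices, seed
from string import ascii_uppercase

seed(0)  # fixed seed: same draw on every run
letters = choices(ascii_uppercase, k=5)
assert len(letters) == 5
assert all(c in ascii_uppercase for c in letters)
```

Note that `choices` samples with replacement, so repeated letters are possible; if each letter had to be distinct, `random.sample` would be the function to reach for instead.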
User input
There is a Python idiom that you will frequently encounter when a program requires a valid input from the user (but also in many other places). This idiom uses a while True: loop. In your case, it could look like this:
while True:
user_input = input("Enter a letter: ")
if len(user_input) == 1 and user_input.isalpha():
break
The logic here basically reverses the logic of your current while loop. Currently, the loop is repeated for as long as the previous input is invalid. The while True: loop is repeated forever – unless it's explicitly left by the break instruction if the current input is valid. This results in a very strict regimen of what is considered a correct input, and it has the advantage that you don't need two more or less identical input() instructions.
When it comes to creating the input prompts, you may want to look into Python format strings. This is a subtype of strings that comes in handy whenever you want to create a string that embeds the value of a variable (or, for that matter, the value of any evaluation). You create a format string like any other string, but you prepend the opening quotation mark with the letter f. Furthermore, at the point where you want your variable to appear, you insert the name of your variable enclosed in curly brackets. In your case, you can use the following line for input:
user_input = input(f"User {i+1} enter a character from A-Z:\n")
At runtime, the sequence "{i+1}" will automatically be replaced by the current value of i+1.
Evaluation
You already have a function checkifcorrect that evaluates the answers. It does its job, but there are a few improvements here as well. The first improvement concerns the name checkifcorrect. There is the Python Style Guide "PEP 8" that describes the style of well-formed Python scripts, including naming conventions. While these conventions are only recommendations, they are chosen to improve the readability of Python scripts, and they are in very common use. So in order to help others read your scripts, it's always a good idea to stick to PEP 8. For function names, the recommendation is to use lowercase words combined with underscore characters. So, PEP 8 would recommend check_if_correct() as a name.
This also means that the variable name randompc is not really well-formed – I will use random_pc from now on. Speaking of which: the next improvement concerns variable handling. As it is, your function check_if_correct() accesses the global variables user_answers and random_pc. These variables are called "global" because they were not created within the function check_if_correct(), and they were not passed as arguments to the function call. This has a downside: your function will fail to work if these variables are not created beforehand.
In a small problem like your assignment, this won't matter much, but even in slightly bigger scripts you don't want your functions to rely on variables that are created elsewhere. Very generally speaking, global variables are useful, but they need to be used with care, and more often than not, you want to rely on local variables instead (this is something that beginners often need to get used to). So, instead of using the global variables user_answers and random_pc, you should add these variable names as arguments to the function definition:
def check_if_correct(user_answers, random_pc):
With this definition, you can pass any variable to your function regardless of the name of that variable in the global scope.
The next thing to improve would be to replace the + str(i+1) bits and use format strings again when you print the user numbers:
if user_answers[i] == random_pc[i]:
print(f"Good Job user {i+1}")
else:
print(f"Good luck next time user {i+1}")
The last improvement concerns printing the list random_pc. Your line print(*random_pc, sep=", ") is actually a pretty clever solution – using * to pass the elements of a list as arguments to a function is a very useful trick to know.
But if you want to combine the elements of a list into a single string, there's a more idiomatic way that works independently of the print function with the sep argument. Every string provides the method join(lst). This method returns a new string that uses the original string as a separator between each element of the list lst. So, in your case, if you want to print the elements of random_pc separated by ", ", you can use the following command:
print(", ".join(random_pc))
Granted, it's not really shorter or less complex than your command, but it's the more conventional way if list elements are to be joined in a string.
Overall structure
Now, you have three functions: assign_letters() which returns a list containing the randomly chosen letters per player, get_guesses() which returns a list containing the guesses of each player, and check_if_correct() that compares each guess to the randomly chosen letter. We can combine them into a single code block:
letters = assign_letters()
guesses = get_guesses()
check_if_correct(guesses, letters)
This code clearly reflects the program logic: first, letters are assigned, then, guesses are received, and then, the guesses and letters are checked for correctness.
With regard to the placement of this code block, there's another Python idiom that you'll find in virtually every non-trivial script: at the end of the main script, you will find the code block that contains every line of code that should be executed if the script is started. And this code block is wrapped into a somewhat cryptic if condition, like so:
if __name__ == "__main__":
letters = assign_letters()
guesses = get_guesses()
check_if_correct(guesses, letters)
This works because unless the script was imported as a module, the internal variable __name__ will contain the string "__main__" as its value. Technically speaking, this is the way to identify the "top-level script environment", and there's a Stackoverflow question that will explain it in detail.
All in one place
So, here's the sum of all of my recommendations:
from string import ascii_uppercase
from random import choices
def assign_letters():
return choices(ascii_uppercase, k=5)
def get_guesses():
user_answers = []
for i in range(5):
while True:
user_input = input(f"User {i+1} enter a character from A-Z:\n")
if len(user_input) == 1 and user_input.isalpha():
break
user_answers.append(user_input.upper())
return user_answers
def check_if_correct(user_answers, random_pc):
for i in range(len(user_answers)):
if user_answers[i] == random_pc[i]:
print(f"Good Job user {i+1}")
else:
print(f"Good luck next time user {i+1}")
print("The answers were: ")
print(", ".join(random_pc))
if __name__ == "__main__":
letters = assign_letters()
guesses = get_guesses()
check_if_correct(guesses, letters)
And I do agree with @zachary-vance that your teacher assigned you an incredibly stupid, un-fun game to code. | {
"domain": "codereview.stackexchange",
"id": 42125,
"tags": "python"
} |
Roscore over ssh | Question:
Hi!
I am able to do $ roscore and everything is fine (also launching launch files) but when I try: $ ssh user@machine roscore
there is a problem, it says that
bash: roscore: command not found
I am on hydro. What could be the reason for this?
Thanks beforehand!
Originally posted by pexison on ROS Answers with karma: 82 on 2015-06-08
Post score: 0
Answer:
Most likely the user user on the machine system has not properly setup their ROS environment. Even if you have sourced the appropriate setup.bash files in your local terminal, once you are ssh'ed into a remote machine, the environment on that system does not inherit anything from the launching terminal. Try ssh'ing into the machine, then sourcing the appropriate setup.bash file, and then run roscore.
Originally posted by jarvisschultz with karma: 9031 on 2015-06-08
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by pexison on 2015-06-12:
Thanks, the superuser of my computer wasn't able to do roscore :) | {
"domain": "robotics.stackexchange",
"id": 21859,
"tags": "ros, ssh, roscore, remote-roscore"
} |
Why does the total work to move a spring from point A to B equal the integral of all forces needed to hold it in balance at every point between A and B? | Question: I'm reading a Calculus book that mentions Hooke's Law for Springs, which says the force needed to hold a spring at $x$ cm from its normal position is $F = kx$, where $k$ is a constant.
I can understand that formula, but what I'm struggling to grasp is the formula which says the work needed to move the spring's movable endpoint from point A to point B equals the integral of all forces needed to hold it in balance at every point between A and B. Or, in symbols:
$$ W = \int_a^b kx\,dx$$
where $a$ and $b$ denote the signed positions of the spring's movable endpoint at A and B.
UPDATE: I can't relate the "stationary" forces needed to "hold" a spring at particular points to the idea that the SUMMATION of those "holding" forces becomes the work needed to MOVE the endpoint. Why do they equal each other, when one type of force is applied at stationary positions and the other during the movement?
Answer: Assume a massless spring and $F=kx$ where $F$ is the force applied to the spring and $x$ is the extension of the spring.
The graph of applied force against extension is a straight line with gradient $k$, the spring constant, as shown in blue.
The spring is extended by $x$ using an external force which increases from $0$ to $F$ and then stops increasing with the applied force $F$ being kept constant.
Whilst extending the spring from $0$ to $x$ the average force applied is $F/2$ and the work done by the external force is $F/2\cdot x$.
The spring is now extended by an extra $x$ using an external force which increases from $F$ to $2F$ and then stops increasing.
Whilst extending the spring from $x$ to $2x$ the average force applied is $3F/2$ and the work done by the external force is $3F/2\cdot x$.
The spring is again extended by an extra $x$ using an external force which increases from $2F$ to $3F$ and then stops increasing.
Whilst extending the spring from $2x$ to $3x$ the average force applied is $5F/2$ and the work done by the external force is $5F/2\cdot x$.
Thus the total work done by the external force in extending the spring from $0$ to $3 x$ is
$F/2\cdot x + 3F/2\cdot x + 5F/2\cdot x = \frac{9}{2}Fx = \frac 12 k (3x)^2$ because $F=kx$.
All the integration does is to sum the work done for those infinitesimally small pull/hold/pull more/hold/pull more/ . . . . motions. | {
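That last point can be checked numerically: summing (holding force at $x$) times (a tiny displacement) over many small steps reproduces $\frac 12 kx^2$. A quick Python check (the values k = 2 N/m and b = 3 m are arbitrary choices, not from the answer):

```python
k, a, b = 2.0, 0.0, 3.0    # spring constant (N/m) and endpoints (m); arbitrary
n = 100_000                # number of tiny pull/hold steps
dx = (b - a) / n

# Work = sum over steps of (holding force at the step's midpoint) * dx
work = sum(k * (a + (i + 0.5) * dx) * dx for i in range(n))

exact = 0.5 * k * b**2 - 0.5 * k * a**2
print(abs(work - exact) < 1e-6)  # -> True
```

The many stationary "holding" forces enter only through the tiny displacements they are multiplied by, which is exactly what the integral formalizes.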
"domain": "physics.stackexchange",
"id": 97118,
"tags": "newtonian-mechanics, forces, work, spring, elasticity"
} |
How can the nucleus of an atom be in an excited state? | Question: An example of the nucleus of an atom being in an excited state is the Hoyle State, which was a theory devised by the Astronomer Fred Hoyle to help describe the vast quantities of carbon-12 present in the universe, which wouldn't be possible without said Hoyle State.
It's easy to visualise and comprehend the excited states of electrons, because they exist on discrete energy levels that orbit the nucleus, so one can easily see how an electron can excite from one energy level into a higher one, hence making it excited.
However, I can't see how the nucleus can get into an excited state, because clearly, they don't exist on energy levels that they can transfer between, but instead it's just a 'ball' of protons and neutrons.
So how can the nucleus of an atom be excited? What makes it excited?
Answer: First you say
It's easy to visualise and comprehend the excited states of electrons, because they exist on discrete energy levels that orbit the nucleus
By way of preparation, I'll note that in introductory course work you never attempt to handle the multi-electron atom in detail. The reason is the complexity of the problem: the inter-electron effects (screening and so on) mean that it is not simple to describe the levels of a non-hydrogen-like atom. The complex spectra of higher Z atoms attest to this.
Later you say
[nuclei] don't exist on energy levels that they can transfer between
but the best models of the nucleus that we have (shell models) do have nucleons occupying discrete orbital states in the combined field of all the other nucleons (and the mesons that act as the carriers of the "long-range" effective strong force).
This problem is still harder than that of the non-hydrogen-like atoms because there is no heavy, highly-charged nucleus to set the basic landscape on which the players dance, but it is computationally tractable in some cases.
See my answer to "What is an intuitive picture of the motion of nucleons?" for some experimental data exhibiting (in energy space) the shell structure of the protons in the carbon nucleus. In that image you will, however, notice the very large degree of overlap between the s- and p-shell distributions. That is different than what you see in atomic orbitals because the size of the nucleons is comparable to the range of the nuclear strong force. | {
"domain": "physics.stackexchange",
"id": 11980,
"tags": "quantum-mechanics, nuclear-physics, atoms"
} |
Drawing multi-line text fast in C++, MLTextOut | Question: I need to draw a lot of multi-line text to the screen so I first used DrawText but it was getting a bit slow... So I've been looking at a few alternatives: save the drawn text in memory DC's, use Direct3D/Direct2D, write my own function. I don't want my program to be a memory hog so using memory DC's wasn't too great. I don't want to be dependent on D3D either so I was left with one option.
Here is my routine to draw multi-line text using the standard Windows GDI. It splits up the text in lines (split by \r\n's) and then calculates the length of the full line and estimates where it should break.
Since speed is crucial, does anyone see optimizations I could perform to increase speed?
bool MLTextOut(HDC hDC, int x, int y, int cx, int cy, CString str, int nLineHeight = 0)
{
if (!hDC || cx <= 0 || cy <= 0)
return false;
if (str.IsEmpty() || x + cx <= 0 || y + cy <= 0)
return true;
const TCHAR *lpszEnd = (const TCHAR *)str + str.GetLength();
const TCHAR *p1 = str, *p2, *p3, *p4;
SIZE sz;
int yInc = 0, n;
RECT rc = {x, y, x + cx, y + cy};
bool bContinue;
while (true)
{
p2 = _tcsstr(p1, _T("\r\n"));
if (!p2)
p2 = lpszEnd;
// check if we're already out of the rect
if (y + yInc >= rc.bottom)
break;
// calculate line length
GetTextExtentPoint32(hDC, p1, p2 - p1, &sz);
// if line fits
if (sz.cx <= cx)
{
//TextOut(hDC, x, y + yInc, p1, p2 - p1);
ExtTextOut(hDC, x, y + yInc, ETO_CLIPPED, &rc, p1, p2 - p1, NULL);
yInc += (nLineHeight ? nLineHeight : sz.cy);
}
// when line does not fit
else
{
// estimate the line break point in characters
n = ((p2 - p1) * cx) / sz.cx;
if (n < 0)
n = 0;
// reverse find nearest space
for (p3 = p1 + n; p3 > p1; p3--)
if (*p3 == _T(' '))
break;
// if it's one word spanning this line, but it doesn't fit... let's clip it
if (p3 == p1)
{
// find first space on line
for (p3 = p1; p3 < p2; p3++)
if (*p3 == _T(' '))
break;
ExtTextOut(hDC, x, y + yInc, ETO_CLIPPED, &rc, p1, p3 - p1, NULL);
yInc += (nLineHeight ? nLineHeight : sz.cy);
p1 = (p3 == p2 ? p2 + 2 : p3 + 1);
continue;
}
// see if p3 as line end fits
GetTextExtentPoint32(hDC, p1, p3 - p1, &sz);
if (sz.cx <= cx)
{
// try to add another word until it doesn't fit anymore
p4 = p3;
do
{
p3 = p4; // save last position that was valid
for (p4 = p4+1; p4 < p2; p4++)
if (*p4 == _T(' '))
break;
if (p4 == p2)
break;
GetTextExtentPoint32(hDC, p1, p4 - p1, &sz);
} while (sz.cx <= cx);
ExtTextOut(hDC, x, y + yInc, ETO_CLIPPED, &rc, p1, p3 - p1, NULL);
yInc += (nLineHeight ? nLineHeight : sz.cy);
p1 = p3 + 1;
continue;
}
else
{
// try to strip another word until it fits
bContinue = false;
do
{
for (p4 = p3-1; p4 > p1; p4--)
if (*p4 == _T(' '))
break;
// if it's one word spanning this line, but it doesn't fit... let's clip it
if (p4 == p1)
{
// find first space on line
for (p3 = p1; p3 < p2; p3++)
if (*p3 == _T(' '))
break;
ExtTextOut(hDC, x, y + yInc, ETO_CLIPPED, &rc, p1, p3 - p1, NULL);
yInc += (nLineHeight ? nLineHeight : sz.cy);
p1 = (p3 == p2 ? p2 + 2 : p3 + 1);
bContinue = true;
break;
}
p3 = p4;
GetTextExtentPoint32(hDC, p1, p3 - p1, &sz);
} while (sz.cx > cx);
if (bContinue)
continue;
ExtTextOut(hDC, x, y + yInc, ETO_CLIPPED, &rc, p1, p3 - p1, NULL);
yInc += (nLineHeight ? nLineHeight : sz.cy);
p1 = p3 + 1;
continue;
}
}
if (p2 == lpszEnd)
break;
p1 = p2 + 2;
}
return true;
}
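For reviewers who want the algorithm at a glance: stripped of the GDI calls, the routine above is a greedy word wrap, which fills each line with as many words as fit and gives a single over-long word its own (clipped) line. A hypothetical Python sketch, with a stub width function standing in for GetTextExtentPoint32 (none of these names come from the original code):

```python
def wrap(text, max_width, measure=len):
    """Greedy word wrap: fill each line with as many words as fit."""
    lines = []
    for paragraph in text.split("\r\n"):
        current = ""
        for word in paragraph.split(" "):
            candidate = word if not current else current + " " + word
            if measure(candidate) <= max_width:
                current = candidate
            else:
                if current:
                    lines.append(current)
                # A single word longer than the line gets its own
                # (clipped-at-draw-time) line, as in MLTextOut.
                current = word
        lines.append(current)
    return lines

print(wrap("the quick brown fox jumps", 10))
# -> ['the quick', 'brown fox', 'jumps']
```

The pointer arithmetic in MLTextOut is this same loop, with `measure` replaced by text-extent queries against the DC.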
Answer: I would split this function into two parts:
Text layout part - find positions of text runs / lines of text. You can use GetTextExtentPoint32 as you did.
And drawing using single call of PolyTextOut
That will allow you to skip #1 if the text is not changing between WM_PAINTs - in this case you will get just a single PolyTextOut call.
"domain": "codereview.stackexchange",
"id": 870,
"tags": "c++"
} |
Trying to understand gmapping and playback some bag | Question:
Hi all,
I have a bag with scanner data. I am now trying to build a map.
The scanner data is in /scan.
So I ran
rosparam set use_sim_time true
roscore
rosrun gmapping slam_gmapping scan
rosrun tf static_transform_publisher 0 0 0 0 0 0 base_link laser 100
Nothing seems to be published in /map.
In rviz, I see for map... transform:No transform from [] to [laser]
What am I missing?
From the doc, I can see base_link → odom is needed. How?
Why is that needed? Is not the purpose of slam to determine just that? (the base_link relative to odom)?
Originally posted by SeekerOfRos on ROS Answers with karma: 1 on 2016-01-29
Post score: 0
Answer:
In order to use gmapping, you have to generate an odometry estimate by integrating your encoder and/or inertial measurements, and publish the estimate as a TF transform from base_link --> odom. See my answer on another question for more details about how particle filter SLAM algorithms like gmapping work:
http://answers.ros.org/question/227425/why-gmapping-package-requires-the-base_link-odom-tf/?answer=228259#post-id-228259
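For concreteness, the "integrating your encoder and/or inertial measurements" step usually amounts to planar dead-reckoning: accumulating an x, y, θ estimate that is then broadcast as the transform between odom and base_link. A minimal integrator might look like this (a hedged Python sketch, not actual ROS/tf code; the velocity values are made up):

```python
import math

def integrate_odometry(v_omega_pairs, dt):
    """Dead-reckon a planar pose from (linear v, angular w) samples."""
    x = y = theta = 0.0
    for v, w in v_omega_pairs:
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
    return x, y, theta

# Drive straight 1 m, then turn 90 degrees in place.
straight = [(1.0, 0.0)] * 100        # 1 m/s for 1 s
turn = [(0.0, math.pi / 2)] * 100    # pi/2 rad/s for 1 s
x, y, theta = integrate_odometry(straight + turn, dt=0.01)
print(round(x, 3), round(y, 3), round(theta, 3))  # -> 1.0 0.0 1.571
```

gmapping then only has to estimate the (slowly drifting) map → odom correction on top of this.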
Originally posted by robustify with karma: 956 on 2016-03-04
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 23592,
"tags": "navigation, odometry, gmapping"
} |
catkin is not creating package | Question:
cd ~/catkin_ws/src
catkin_create_pkg mypack
cd ..
catkin_make
Everything is ok.
roscd mypack
mypack not found.
why?
Originally posted by Oper on ROS Answers with karma: 67 on 2016-07-25
Post score: 0
Answer:
roscd will not know where to locate your package until you have sourced your setup.bash. After you have compiled, in the catkin_ws directory in the terminal enter 'source devel/setup.bash'.
Originally posted by alexvs with karma: 91 on 2016-07-25
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Oper on 2016-07-25:
Thank u! this was exactly what i was missing | {
"domain": "robotics.stackexchange",
"id": 25347,
"tags": "ros, catkin-make, catkin, create, package"
} |
Argument Requirement Error in URDF Tutorial Display Launch | Question:
please help me regarding below error:-
[/opt/ros/kinetic/share/urdf_tutorial/launch/display.launch] requires the 'model' arg to be set
The traceback for the exception was written to the log file.
regards
saurabh
Originally posted by saurabh jha on ROS Answers with karma: 1 on 2022-06-01
Post score: 0
Answer:
If you check this tutorial, you'll see that it launches display.launch by using:
roslaunch urdf_tutorial display.launch model:=urdf/01-myfirst.urdf
You need to add the argument model to tell the launch file which model to use, as you can see in the example above (and the tutorial). You haven't added the argument.
Originally posted by Joe28965 with karma: 1124 on 2022-06-02
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by saurabh jha on 2022-06-03:
thank you problem resolve
Comment by Joe28965 on 2022-06-03:
Please mark this answer as correct so others know it's been resolved. | {
"domain": "robotics.stackexchange",
"id": 37738,
"tags": "ros, ros-kinetic"
} |
Are there 3-colorable maps that can never be colored? | Question: I just watched this explanation of zero-knowledge proofs with Avi Wigderson: https://www.youtube.com/watch?v=5ovdoxnfFVc
Key claims from the video:
Every formal statement can be translated into a map in a way that preserves truth. If a statement is true, its map will be 3-colorable. If not, it isn't 3-colorable https://youtu.be/5ovdoxnfFVc?t=1721
If every formal statement can be translated to a map, and there's a true statement that can't be proven true by Gödel's incompleteness theorems, then doesn't that mean that no algorithm, whether P, NP, or worse, can exist to color it? Does that mean that there are some 3-colorable maps that can never be colored?
This seems wildly counterintuitive (which maybe is just the nature of Gödels' incompleteness theorems, which I haven't tried to understand the proof of). Or maybe I don't understand Gödel's incompleteness theorems (also likely). If I'm misunderstanding something though, it'd be helpful to know what to look into more.
Answer:
Every formal statement can be translated into a map in a way that preserves truth. If a statement is true, its map will be 3-colorable. If not, it isn't 3-colorable https://youtu.be/5ovdoxnfFVc?t=1721
Every formal statement in propositional logic can be translated to a map 3-coloring instance. The video is being very sloppy with calling this 'every mathematical statement', as there are many mathematical statements that aren't captured in propositional logic.
Propositional logic is decidable (it's equivalent to the SAT problem). | {
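Decidability here is easy to make concrete: for a fixed graph, 3-colorability can always be decided by exhaustive search (it just may take exponential time). A toy Python sketch (the example graphs are illustrative, not from the video):

```python
from itertools import product

def is_3_colorable(vertices, edges):
    """Try every assignment of 3 colors; accept if some assignment
    gives every edge two differently colored endpoints."""
    for coloring in product(range(3), repeat=len(vertices)):
        color = dict(zip(vertices, coloring))
        if all(color[u] != color[v] for u, v in edges):
            return True
    return False

triangle = (["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")])
k4 = (["a", "b", "c", "d"],
      [("a", "b"), ("a", "c"), ("a", "d"),
       ("b", "c"), ("b", "d"), ("c", "d")])

print(is_3_colorable(*triangle))  # -> True
print(is_3_colorable(*k4))        # -> False (K4 needs 4 colors)
```

So every instance produced from a propositional statement has a definite, computable answer; no coloring is "impossible to find" in the Gödelian sense.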
"domain": "cs.stackexchange",
"id": 17753,
"tags": "undecidability"
} |
ROS Listener (Hydro) Not Working | Question:
Android 4.2.2, Android 4.1.2, Ubuntu 12.04, Android Studio 0.4.4, ROS Hydro
Hi, I have recently tried to run the listener (obtained from Google Play) through Robot Remocon, but after the "Starting Listener" dialog box disappeared, no new view (window) is opened, it is still on the main layout of the Remocon, with the Listener highlighted blue (indicating running app), and the ability to stop the app. Apart from that, even though a new ROS topic is created under "/turtlebot/chatter", I can't see any data on my device (since apparently the rosTextView has failed to be opened?).
I would really appreciate it if anyone can look into this.
PS: I have tried to compile the code from "android_apps" under Android Studio myself, and the same problem has occurred.
Originally posted by Hon Ng on ROS Answers with karma: 3 on 2014-02-13
Post score: 0
Original comments
Comment by Daniel Stonier on 2014-02-17:
Do you have your ROS_IP/ROS_HOSTNAME variables set?
Comment by Hon Ng on 2014-02-18:
Yes, otherwise, the Robot Remocon App will not connect successfully to the turtlebot, right?
Answer:
The robot remocon is not just a way of starting android apps; you can also use it to fire up launch configurations on the robot. That's what you are seeing here (albeit unintentionally). You'll note that there are two rocon rapps available: one is talker, one is listener.
The talker rapp has a launcher that starts a ros tutorials talker on the robot. In that configuration file, you'll also see it has a defined pairing client for android - a listener android app. i.e. the robot starts the launcher, the android starts the pairing client.
The listener rapp doesn't have a pairing client (yet - simply because we never got around to writing it). So it fires up the listener on the robot, but doesn't do anything on the android side.
This behaviour can be useful if you just want to use the robot remocon for starting and stopping launch configurations on your robot.
Originally posted by Daniel Stonier with karma: 3170 on 2014-02-18
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Hon Ng on 2014-02-19:
Ok, I see, thank you.
But unfortunately, the source code for the talker is not on here: https://github.com/rosjava/android_apps/tree/hydro
Or did I missed something?
Comment by Daniel Stonier on 2014-02-19:
It's the https://github.com/rosjava/android_apps/tree/hydro/listener. The robot side runs the 'talker', the android side is the 'listener'. It's a bit confusing, but we went with the convention of naming robot apps from the robot's perspective for this pairing mode.
Comment by Hon Ng on 2014-02-23:
Ok, that makes more sense. Cheers! =D | {
"domain": "robotics.stackexchange",
"id": 16967,
"tags": "ros, ros-hydro, android"
} |
Why does the Moon always present the same face towards Earth? | Question: If both Earth and Moon are rotating as well as revolving around some focus, shouldn't they have drifted out of phase with each other long ago? So, why do we always see the same side?
Answer: The moon always poses the same face to the earth because its rotation period around its axis is equal to its revolution period around the earth. It is this way due to tidal locking. See:
https://en.wikipedia.org/wiki/Tidal_locking | {
"domain": "physics.stackexchange",
"id": 24749,
"tags": "newtonian-gravity, earth, celestial-mechanics, moon, tidal-effect"
} |
Finding a cryptoarithmetic solution | Question: This code is intended to find all possible solutions to a cryptoarithmetic problem. The description of the problem I was trying to solve is here:
In cryptoarithmetic problems, we are given a problem wherein the
digits are replaced with characters representing digits. A solution to
such a problem is a set of digits that, when substituted in the
problem, gives a true numerical interpretation.
Example:
IS
IT
___
OK
has a solution
{ I = 1; K = 1; O = 3; S = 5; T = 6}
For each of the below cryptoarithmetic problems, write a program that finds all
the solutions in the shortest possible time.
IS I
IT AM
__ __
OK OK
I was only able to solve it using brute force, though I believe there are more efficient methods. I am also hoping to receive feedback on my formatting, naming, and really anything you think could use improvement.
(defun place-value-to-integer (the-list &OPTIONAL place-value)
(let ((place-value (if place-value place-value 1)))
(if (= (length the-list) 1) (* place-value (first the-list))
(+ (* place-value (first (last the-list))) (place-value-to-integer (butlast the-list) (* 10 place-value))))))
(defun fill-from-formula (formula guess)
(loop for digit in formula collect (gethash digit guess)))
(defun check-answer (augend-formula addend-formula sum-formula guess)
(let ((augend (fill-from-formula augend-formula guess))
(addend (fill-from-formula addend-formula guess))
(sum (fill-from-formula sum-formula guess)))
(= (place-value-to-integer sum) (+ (place-value-to-integer augend) (place-value-to-integer addend)))))
(defun brute-force-guess (augend-formula addend-formula sum-formula unique-values &OPTIONAL callback guess)
(let ((guess (if (null guess) (make-hash-table) guess)))
(loop for digit in '(0 1 2 3 4 5 6 7 8 9) do
(setf (gethash (car unique-values) guess) digit)
(if (= (length unique-values) 1)
(if (check-answer augend-formula addend-formula sum-formula guess) (print-result augend-formula addend-formula sum-formula guess) nil)
(brute-force-guess augend-formula addend-formula sum-formula (cdr unique-values) callback guess)))))
(defun print-result (augend-formula addend-formula sum-formula guess)
(format t "One answer is ~a + ~a = ~a ~%"
(fill-from-formula augend-formula guess)
(fill-from-formula addend-formula guess)
(fill-from-formula sum-formula guess)))
(defun find-unique-values (the-list)
(let ((unique-items ()))
(loop for sublist in the-list do
(loop for item in sublist do
(unless (member item unique-items) (setf unique-items (append (list item) unique-items))))) unique-items))
(let ((problemA (list (list 'I 'S) (list 'I 'T) (list 'O 'K)))
(problemB (list (list 'I) (list 'A 'M) (list 'O 'K))))
(brute-force-guess (first problemA) (second problemA) (third problemA) (find-unique-values problemA) #'print-result)
(brute-force-guess (first problemB) (second problemB) (third problemB) (find-unique-values problemB) #'print-result))
Answer: Some preliminary notes for now (I'll add later):
Whenever you need to write (if n n 2) or (if (not n) 2 n), you can instead write (or n 2). or will take any number of arguments and return either nil or the first argument that evaluates to non-nil.
When working with optional arguments, you can set defaults for them.
(defun place-value-to-integer (the-list &OPTIONAL place-value)
(let ((place-value (if place-value place-value 1)))
...
can be written as
(defun place-value-to-integer (the-list &OPTIONAL (place-value 1))
...
I don't have time to get into the rest right now, but you're using loop to setf a series of hash values, which tells me you could probably simplify it by using a more functional approach (it might be one of the exceptions, but it doesn't feel like one at first glance).
EDIT:
(if a b nil) is equivalent to (when a b) (and it's good style to use the second over the first).
EDIT2: Ok, wow, hey. That's two hours of my life I won't get back. I wrote up and edited down a pretty ridiculously long piece on my process (if you care, it's here). Here's how I would tackle a brute-force approach to this problem.
EDIT3: Simplified slightly.
(defpackage :cry-fun (:use :cl :cl-ppcre))
(in-package :cry-fun)
(defun digits->number! (&rest digits)
(apply #'+ (loop for d in (nreverse digits) for i from 0
collect (* d (expt 10 i)))))
(defun number->digits (num &optional (pad-to 5))
(let ((temp num)
(digits nil))
(loop do (multiple-value-call
(lambda (rest d) (setf temp rest digits (cons d digits)))
(floor temp 10))
until (= pad-to (length digits)))
digits))
(defun string->terms (problem-string)
(reverse
(mapcar (lambda (s) (mapcar (lambda (i) (intern (format nil "~a" i)))
(coerce s 'list)))
(split " " (string-downcase problem-string)))))
(defmacro solve-for (problem-string)
(let* ((arg-count (length (remove-duplicates (regex-replace-all " " problem-string ""))))
(nines (apply #'digits->number! (make-list arg-count :initial-element 9))))
`(loop for i from 0 to ,nines
when (apply (solution-fn ,problem-string) (number->digits i ,arg-count))
collect it)))
(defmacro solution-fn (problem-string)
(let* ((terms (string->terms problem-string))
(args (remove-duplicates (apply #'append terms))))
`(lambda ,args
(when (= (+ ,@(loop for term in (cdr terms) collect `(digits->number! ,@term)))
(digits->number! ,@(car terms)))
(list ,@(mapcan (lambda (i) (list (symbol-name i) i)) args))))))
EDIT (by jaresty): adding comments to show example intermediate values for "solution-fn"
(defmacro solution-fn (problem-string)
(let* ((terms (string->terms problem-string))
;example: (terms ((o k) (i t) (i s)))
(args (remove-duplicates (apply #'append terms))))
;example: (args (o k t i s))
`(lambda ,args
(when (= (+ ,@(loop for term in (cdr terms) collect `(digits->number! ,@term)))
(digits->number! ,@(car terms)))
;example: (when (= (+ (i t) (i s)) (o k)
(list ,@(mapcan (lambda (i) (list (symbol-name i) i)) args))))))
;example: (list "o" o "k" k "t" t "i" i "s" s) | {
"domain": "codereview.stackexchange",
"id": 147,
"tags": "lisp, cryptography, common-lisp"
} |
Finding all "basic" cycles in an undirected graph? | Question: Say you have a graph like
a — b — c
| | |
e — f — g
and you would like to find the cycles c1, {a,b,f,e}, and c2, {b, c, g, f}, but not c3, {a, b, c, g, f, e}, because c3 is not "basic" in the sense that c3 = c1 + c2 where the plus operator means to join two cycles along some edge e and then drop e from the graph.
I invented my own terminology in the above but basically I want to find all cycles in a graph that cannot be decomposed into smaller cycles. Does this problem have a name? Is there a known best algorithm for solving it?
I understand that enumerating all cycles runs in exponential time because there may be an exponential number of cycles but my intuition is that the number of basic cycles as defined above is related only polynomially to the number of edges in the graph.
Answer: You might be interested in a cycle basis, especially a fundamental cycle basis (which actually consists of cycles). | {
"domain": "cs.stackexchange",
"id": 20551,
"tags": "graphs"
} |
Does a quantum channel always preserve the identity matrix? | Question: Does a quantum channel (a completely positive trace-preserving map) always map the identity to the identity?
In other words, suppose that $ \mathcal{E}: \mathbb{C}^{N \times N} \to \mathbb{C}^{N \times N} $ is a quantum channel and $ I $ is the $ N \times N $ identity matrix. Then must we have that
$\mathcal{E}(I) = I$?
Answer: No, there is no reason it should. For instance, consider the amplitude damping channel
$$
\varepsilon_{AD}(\rho)=E_0\rho E_0^\dagger+E_1\rho E_1^\dagger
$$
with
$$
E_0=\begin{bmatrix}1 & 0\\
0 & \sqrt{1-\gamma}\end{bmatrix}
$$
$$
E_1=\begin{bmatrix}0 & \sqrt{\gamma}\\
0 & 0\end{bmatrix}
$$
We have
$$
\begin{alignat*}{1}
\varepsilon_{AD}(I)&=E_0 E_0^\dagger+E_1 E_1^\dagger \\
&=\begin{bmatrix}1+\gamma & 0\\
0 & 1-\gamma\end{bmatrix}\\
&\neq I\qquad \text{if }\gamma\neq 0
\end{alignat*}
$$
What trace-preserving operations must satisfy is
$$
\sum_k E^\dagger_kE_k=I
$$
not
$$
\sum_k E_k E^\dagger_k=I
$$ | {
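Both conditions are easy to check numerically for the amplitude-damping Kraus operators above (a NumPy sketch; γ = 0.5 is an arbitrary choice):

```python
import numpy as np

gamma = 0.5
E0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
E1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

# Trace preservation: sum_k Ek^dagger Ek = I ...
tp = E0.conj().T @ E0 + E1.conj().T @ E1
print(np.allclose(tp, np.eye(2)))        # -> True

# ... but unitality fails: sum_k Ek Ek^dagger != I
unital = E0 @ E0.conj().T + E1 @ E1.conj().T
print(np.allclose(unital, np.eye(2)))    # -> False
print(unital)                            # diag(1 + gamma, 1 - gamma)
```

A channel satisfying both conditions is called unital; amplitude damping is the standard example of a non-unital channel.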
"domain": "quantumcomputing.stackexchange",
"id": 5505,
"tags": "quantum-operation"
} |
Sinc interpolation of pure sine wave sampled just at Nyquist frequency | Question: Following this question: Shannon-Nyquist theorem reconstruct 1Hz sine wave from 2 samples
could you explain the algorithm to apply for sinc interpolation to avoid the "sawtooth" effect due to linear interpolation?
(It seems to me that the Whittaker-Shannon interpolation formula (https://en.wikipedia.org/wiki/Whittaker%E2%80%93Shannon_interpolation_formula) would be the one suitable for this?)
Answer: Here again let me note that the exact Nyquist frequency for a pure sine wave should be avoided. The Shannon-Nyquist sampling theorem requires that there be no impulse at the Nyquist frequency; as a consequence of bandlimitedness, the content at the exact Nyquist frequency is taken to be zero.
Then the following code demonstrates an approximate simulation of an ideal sinc-based interpolator applied to (near) critical samples of a pure sine wave. Note that any finite observation of a signal cannot be bandlimited, so this simulation is not a perfect representation of the true output of an ideal interpolator; nevertheless, by choosing a signal duration long enough, one can attain an approximately bandlimited signal.
f = 1; % 1 Hz. sine wave...
Fs = 4.2*f; % sampling frequency; above the Nyquist rate of 2*f
Td = 25; % duration of observation ultimately determines the spectral resolution.
t = 0:1/Fs:Td; % observe 25 seconds of this sine wave at Ts = 1/Fs
Td = t(end); % get the resulting final duration
L = length(t); % number of samples in the sequence
M = 2^nextpow2(10*L); % DFT / FFT length (for smoother spectral display, not better resolution! )
x = sin(2*pi*f*t); % sinusoidal signal in [0,Td]
%x = x.*hamming(L)'; % hamming window applied for improved spectral display
% Part-II : Approximate a sinc() interpolator :
% ---------------------------------------------
K = 25; % expansion factor
xe = zeros(1,K*L); % expanded signal
xe(1:K:end) = x;
D = 1024*8;
b = K*fir1(D,1/K); % ideal lowpass filter for interpolation
y = conv(xe,b);
yi = y(D/2+1:D/2+K*L);
subplot(3,1,1);
plot(t,x);
title(['1 Hz sine wave sampled at Fs = ',num2str(Fs),' Hz, Duration : ', num2str(Td), ' s'])
%xlabel(' time [s]');
subplot(3,1,2);
plot(linspace(-Fs/2,Fs/2-Fs/M,M),fftshift(abs(fft(x,M))));
title(['magnitude of ', num2str(M), '-point DFT / FFT of y[n]']);
%xlabel('Frequency [Hz]');
subplot(3,1,3)
plot(linspace(0,Td,length(yi)),yi);
xlabel('approx simulation of ideal sinc interpolation');
(The resulting figure, omitted here, shows the sampled sine wave, the magnitude of its DFT, and the sinc-interpolated output.)
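For comparison with the FIR-filter simulation above, the Whittaker-Shannon formula can also be evaluated directly: each output value is a sum of the samples weighted by sinc((t - nT)/T). A brute-force Python sketch (quadratic cost, so only practical for short records; all parameter values are illustrative):

```python
import numpy as np

f, Fs, Td = 1.0, 4.0, 500.0           # tone, sample rate, record length
n = np.arange(int(Td * Fs) + 1)       # sample indices
x = np.sin(2 * np.pi * f * n / Fs)    # samples of the tone (Fs > 2f)

def shannon_interp(x, t_sec, Fs):
    """Whittaker-Shannon: x(t) = sum_m x[m] * sinc(Fs*t - m)."""
    m = np.arange(len(x))
    return float(np.sum(x * np.sinc(Fs * t_sec - m)))

t = 250.125                            # a point well inside the record
exact = np.sin(2 * np.pi * f * t)
approx = shannon_interp(x, t, Fs)
print(abs(approx - exact) < 0.05)      # -> True (truncation error only)
```

Far from the record edges the truncated sum is very close to the true value; near the edges the slowly decaying sinc tails make the error larger, which is the practical reason the answer uses a windowed FIR approximation instead.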
"domain": "dsp.stackexchange",
"id": 7650,
"tags": "interpolation"
} |
Matlab Neural Network toolbox | Question: Is there any way to use as single input an image (256x256 pixels) and get output of single value using Matlab neural network toolbox
Answer: Sure, import the image into the MATLAB workspace and divide the image matrix into two-dimensional matrices like this:
matrix1 = yourimage(:,:,1);
matrix2 = yourimage(:,:,2);
matrix3 = yourimage(:,:,3);
each being one colour component (nntool works with at most two-dimensional matrices), and use them in nntool.
"domain": "dsp.stackexchange",
"id": 2190,
"tags": "image-processing, matlab, machine-learning"
} |
Calculate difference between two poses | Question:
I am running into a problem calculating the difference between two tf poses. Currently I calculate the difference by subtracting the X, Y and Z (position) of pose 1 from pose 2. This works fine. Only when I try to do the same for the rotation (XYZW) the result is (sometimes) NaN. Am I doing something wrong, or is there a better way to calculate the difference between two poses?
Originally posted by Robbiepr1 on ROS Answers with karma: 143 on 2013-04-14
Post score: 5
Original comments
Comment by davinci on 2013-04-14:
Are you using the c++ tf method? Sometimes a transform is not available yet, you can use waitForTransform(). Try also to print out the numbers you are subtracting.
Comment by Robbiepr1 on 2013-04-14:
Thanks for your reply. The situation is that I already have two poses (both of type tf::Stamped<tf::Pose> and can even have the same frame_id) and I want to get the difference between those two poses. I already printed the numbers I'm subtracting and they seem valid numbers.
Comment by Martin Günther on 2013-04-14:
First of all, you cannot simply subtract two rotations, since they (and the result) must be valid quaternions; for example, sqrt(x^2 + y^2 +z^2 +w^2) must be = 1. Still, you should only get NaN after subtraction if one (or both) of the input values are already NaN.
Answer:
TF provides a correct way for this:
tfpose1.inverseTimes(tfpose2)
This is the transformation from pose1 to pose2 or "pose2 - pose1" (with the '-' not being a real minus, but the correct operator).
The resulting pose will have a transformation and orientation that is the difference of the two poses.
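In matrix terms, inverseTimes computes the relative transform inv(T1) * T2. The same "pose2 - pose1" can be written out with 4x4 homogeneous matrices (a NumPy sketch for illustration only; this is not the tf API):

```python
import numpy as np

def make_pose(x, y, yaw):
    """Planar pose as a 4x4 homogeneous transform (yaw about z)."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [x, y, 0]
    return T

T1 = make_pose(1.0, 2.0, np.pi / 2)
T2 = make_pose(2.0, 2.0, np.pi / 2)

# "pose2 - pose1": the transform taking pose1 to pose2,
# i.e. what tf's pose1.inverseTimes(pose2) returns.
diff = np.linalg.inv(T1) @ T2

print(np.allclose(T1 @ diff, T2))   # -> True: pose1 * diff == pose2
print(np.round(diff[:3, 3], 3))     # displacement expressed in pose1's frame
```

Note the displacement comes out in pose1's own frame (here [0, -1, 0] rather than the world-frame [1, 0, 0]), which is exactly why component-wise subtraction of poses is not the right operation.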
Originally posted by dornhege with karma: 31395 on 2013-04-14
This answer was ACCEPTED on the original site
Post score: 18
Original comments
Comment by Robbiepr1 on 2013-04-15:
Thanks, that did the trick
Comment by Mehdi. on 2015-08-11:
How does this even work when you are using Pose from geometry_msgs?
Comment by 2ROS0 on 2015-12-20:
@Mehdi.
You can do the same by converting geometry_msgs Pose to tf Transform using the file tf/transform_datatypes.h which defines these data type transformations.
Comment by i_robot_flight on 2017-11-10:
Just to clarify my understanding, does this transform pose2 to the pose1 reference frame? The new pose data is written to pose1? Thanks!
Comment by sanazir on 2017-11-26:
any idea how to do this in python?
Comment by simff on 2018-02-21:
@Robbiepr1 How did you save the result of your subtraction? For me using subtract = tfpose1.inverseTimes(tfpose2) did not work! I would also need that method too.
Anyone knows? | {
"domain": "robotics.stackexchange",
"id": 13815,
"tags": "ros, pose, quaternion, transform"
} |
About ionic bonds (and ionic compounds) | Question: I have a few questions:
All ionic bonds occur between a metal and a non-metal. Is this true? In the definition of metal and/or non-metal, are the metalloids included? In the definition of non-metal, are all diatomic and polyatomic non-metals included? Meaning, do all of those elements form ionic bonds with metals?
When 2 charged molecules form an ionic bond, how is the compound classified?
For ionic compounds formed by only 2 different elements, knowing the metal and the non-metal in question, is it possible to have more than one formula for the compound? I think this is probably possible and maybe even common for the transition metals, because we usually see those roman numerals in their formulae. Am I correct in assuming transition metals can form different ionic compounds (meaning different formulae) with one and the same element? If yes, why does that happen?
Does item 3 extend to charged molecules?
Answer:
Not exactly. In ammonium chloride ($\ce{NH4Cl}$), the bond is between a cation ($\ce{NH4^+}$) that has a non metal as its central atom. So the condition that ionic compounds are formed between metal cations and non metal anions is not necessary. Additionally, you will find many organic compounds where the anions or the cations are made up of many atoms, that is the cation itself consists of many atoms, and likewise for the anions. (I guess this was what you meant by charged molecules!). Organic cations (where the cation is not a metal cation) usually have their positive charge centered on a nitrogen atom. Secondly, almost all the non metals are capable of forming ionic bonds, regardless of whether they are diatomic or polyatomic in their elemental state.
It is called an ionic compound.
Yes, in fact transition metal compounds can have different formulae with the same metal and non metal elements! The best example would be $\ce{FeO}$ and $\ce{Fe2O3}$. So why do these different formulae exist? Here is a hint: $\ce{FeO \;=\; Fe^{+2} + O^{-2}}$ and $\ce{Fe2O3\;=\; 2Fe^{+3} + 3O^{-2}}$. Did you spot it? In the first compound, there are $\ce{Fe^{+2}}$ cations, while in the second one, there are $\ce{Fe^{+3}}$ cations! Why does this happen? Well some metal atoms can show two or even more different valencies. For example, once iron has lost the first two electrons, it has the possibility to lose the third electron as well, so it can show both types of positive charge on it, +2 as well as +3. Same is with other cations.
Not sure if anything like that happens. The problem is that generally only metals can lose an extra electron without changing the cation structure; charged molecules may need to change their structure to change the amount of charge they hold. The closest I know is transition metal complexes. The cation does change its composition in various compounds (as a whole cation), but the central metal atom can remain the same. That is pretty advanced stuff though. You will learn about it in higher classes (in class 12). | {
"domain": "chemistry.stackexchange",
"id": 6138,
"tags": "bond, ionic-compounds, periodic-trends"
} |
What software do I need to read/write Cas9 and .dna files? | Question: I am still a complete beginner to CRISPR and I am still trying to learn what it is and how to actually use it. I now realise that you have to order the CRISPR components after you have actually designed them yourself in software that lets you design and alter the components.
Am I correct so far?
I am wondering if anyone could help me find which software I should use to design and/or alter .dna files. Preferably free and open-source software. Are there any good tutorials for someone who wants to get more into CRISPR?
Answer: You have a long journey ahead. This guy spent about 4 years to learn what he is doing here.
He is using Snapgene software | {
"domain": "biology.stackexchange",
"id": 10248,
"tags": "crispr, software"
} |
Force horizontal to incline plane | Question: So I got this question (part (ii) with figure 2). Through searching the internet, I think I found a way of doing it.
So I found the $x$ component of the weight of the block which is $9$N
$$9 = P\cos(30°)$$
And thus I got $P = 6\sqrt{3}$
Is this correct? And is there any other ways of arriving at this solution?
Answer: I think you need to be a little more clear on your choice of coordinates. The answer you got out works, but it will benefit you in the future to work carefully! There are two natural choices for this problem. You can work with $x$ and $y$ or you can work with, let's call it $x'$ and $y'$ where $x'$ points down the ramp and $y'$ points normal to the ramp (from the bottom to the top of the block).
The force of gravity (call it $\vec{W}$) is $18$N in the $-y$ direction. There is the pushing force $\vec{P}$ which is in the $+x$ direction. There is also the normal force $\vec{N}$ which points in the $+y'$ direction. Before you can work out this problem, you need to break your vectors into components so that they are easy to add together. So you can choose to break $\vec{W}$ and $\vec{P}$ into their components along $x'$ and $y'$, or we can break $\vec{N}$ into its components along $x$ and $y$. Let's go with the former route since we know there's no acceleration in the $y'$ direction (i.e. it stays on the ramp).
So what's $\vec{N}$? Well it's whatever it needs to be to cancel the forces in the $y'$ direction.
Okay, so what's $\vec{W}$ in the $x',y'$ coordinate system? Here it helps to draw a triangle and clearly mark all the angles. If you do this you see that $W_{x'}= 18\cos(60°)\mbox{N}=9\mbox{N}$ and $W_{y'}=-18\sin(60°)\mbox{N}=-9\sqrt{3}\mbox{N}$.
How about $\vec{P}$? You can see that the angle between $\vec{P}$ and the inclined plane is $30°$ so we find that $P_{x'}=-P\cos(30°)=-P\sqrt{3}/2$ and $P_{y'}=P\sin(30°)=P/2$.
Now armed with this information we can tackle the problem. We know the block doesn't move so adding all the forces in the $x'$ direction should give $0$ and adding all the forces in the $y'$ direction should give $0$. This tells us that $$P_{x'}+W_{x'}=0\to -P\sqrt{3}/2+9=0\to P=18/\sqrt{3}\mbox{N}=6\sqrt{3}\mbox{N}$$
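As a quick sanity check on that algebra (my own addition, with `sympy` as an arbitrary choice of computer algebra system), the same balance along the incline can be solved symbolically:

```python
import sympy as sp

P = sp.symbols('P', positive=True)
W = 18  # the block's weight in newtons, from the problem

# Balance along the incline (x'): the component of P up the ramp,
# P*cos(30 deg), must cancel gravity's component W*cos(60 deg) = 9 N.
eq = sp.Eq(P * sp.cos(sp.rad(30)), W * sp.cos(sp.rad(60)))
sol = sp.solve(eq, P)[0]
print(sp.simplify(sol))  # 6*sqrt(3)
```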
You get this same answer by using the $x,y$ coordinates instead, see if you can get it to work! | {
"domain": "physics.stackexchange",
"id": 52324,
"tags": "homework-and-exercises, newtonian-mechanics, forces"
} |
References on mathematical stacks for a string theory student | Question: This question was posted on mathoverflow (here) without too much success.
I'm hoping to read the famous Kapustin-Witten Paper "Electric-magnetic duality and the geometric Langlands program" and the related "The Yang-Mills equations over Riemann surfaces".
The following statement serves to explain the origin of my trouble: Let $G$ be a simple complex Lie group and $C$ a Riemann surface. Geometric Langlands is a set of mathematical ideas relating the category of coherent sheaves over the moduli stack of flat $G^{L}$-bundles ($G^{L}$ is the dual Langlands group of $G$) over $C$ with the category of $\mathcal{D}$-modules on the moduli stack of holomorphic $G$-bundles over $C$.
My problem: I have working knowledge of representation theory, but I'm completely ignorant about the theory of mathematical stacks and the possible strategies to begin to learn it. What specifically worries me is how much previous knowledge of $2$-categories is needed to begin.
My background: I've read Hartshorne's book on algebraic geometry in great detail, specifically the chapters on varieties, schemes, sheaf cohomology and curves. My category theory and homological algebra knowledge is exactly that needed to read and solve the problems of the aforementioned book. I'm also familiar with the identification between the topological string $B$-model branes and sheaves at the level of Sharpe's lectures.
Questions: I'm asking for your kind help to find references to initiate me on the theory of stacks given my physics orientation. What would be a good introductory reference on stacks for a string theory student? Is there any physics friendly roadmap to begin? Any familiar gauge/string theoretical analogies to start to develop intuition?
Any suggestion will be extremely helpful to me.
Answer: For the "physics part" of my question: I have found the papers String Orbifolds and Quotient Stacks
, D-branes, orbifolds, and Ext groups and Stacks and D-Brane Bundles very useful, readable and explicit. Apparently the best strategy to deal with stacks in the context of a non-linear sigma model is to think of the target space as locally an orbifold (exactly the way mathematicians popularize the idea of a stack).
For the "math part" of my question: Angelo Vistoli textbook, Notes on Grothendieck topologies, fibered categories and descent theory, arXiv:math/0412512 is really pedagogical and self-contained. The best reference I've found so far. | {
"domain": "physics.stackexchange",
"id": 73420,
"tags": "string-theory, mathematical-physics, resource-recommendations, topological-field-theory, algebraic-geometry"
} |
Convert a type u16 number to a matrix (Vec> or array) of 4 x 4 | Question: I am a Rust newbie and I am not familiar with all the iterator options. This is what I have so far. How can I make this better or at least avoid collecting twice into a Vec?
let num: u16 = 0b0010001000100010; // input number
let bin = format!("{:016b}", num);
let parsed = bin
.split("")
.filter_map(|s| s.parse().ok())
.collect::<Vec<u8>>();
let mat = parsed.chunks(4).collect::<Vec<_>>();
println!("{:?}", mat);
// outputs [[0,0,1,0],[0,0,1,0],[0,0,1,0],[0,0,1,0]]
Answer: One thing I'm going to do first is create an enum with only two values. This represents a binary value and is more memory-efficient than passing around a bunch of u8s. This way 16 bits can be represented as 16 actual bits in memory (though this isn't guaranteed).
/// a single bit
#[derive(Clone, Copy, Debug)]
enum Bit {
/// 1, high
H1 = 1,
/// 0, low
L0 = 0,
}
use Bit::*; // allow us to just use `L0` and `H1` without the `Bit::` prefix
It is likely much faster to split a number into bits using numerical operators. There are a couple ways of doing it.
Iterating a mask
With this we increase our mask each time, building an array from it. The 15 - i is there because we want the MSB at index 0.
/// convert a number to 16 bits by sliding a mask across it
fn into_bits_mask(num: u16) -> [Bit; 16] {
let mut out = [L0; 16];
for i in 0..16 {
out[15 - i] = if num & (1u16 << i) > 0 {
H1
} else {
L0
};
}
out
}
Shifting the number with a static mask
This is essentially the same thing, but we shift the number instead of the mask.
/// convert a number to 16 bits by right-shifting it
fn into_bits_shift(num: u16) -> [Bit; 16] {
let mut out = [L0; 16];
for i in 0..16 {
out[15 - i] = if (num >> i) & 1u16 > 0 {
H1
} else {
L0
};
}
out
}
We can then modify these to output an array of 4 nibbles.
/// convert a number to 4 nibbles by sliding a mask across it
fn into_nibbles_mask(num: u16) -> [[Bit; 4]; 4] {
let mut out = [[L0; 4]; 4];
for i in 0..16 {
let mask = 1u16 << (15 - i);
out[i / 4][i % 4] = if num & mask > 0 {
H1
} else {
L0
};
}
out
}
/// convert a number to 4 nibbles by right-shifting it
fn into_nibbles_shift(num: u16) -> [[Bit; 4]; 4] {
let mut out = [[L0; 4]; 4];
for i in 0..16 {
out[i / 4][i % 4] = if (num >> (15 - i)) & 1u16 > 0 {
H1
} else {
L0
};
}
out
}
There is room for more optimization here, of course.
Here's a working example: https://play.rust-lang.org/?version=stable&mode=debug&edition=2015&gist=b010e34728d554e995e0de4ddb4b1eed
EDIT: I was asked about using iterators, so here's my most iterator-function styled method
/// convert a number to 4 nibbles using iterator methods
fn into_nibbles_iter(num: u16) -> Vec<Vec<Bit>> {
// split into nibbles
[num >> 12, num >> 8, num >> 4, num]
.iter()
.map(|nibble| {
// mask off each bit
[nibble & 8, nibble & 4, nibble & 2, nibble & 1]
.iter()
// convert to Bits
.map(|b| if b > &0 { H1 } else { L0 })
.collect()
})
.collect()
}
And here's a new playground link demonstrating it: https://play.rust-lang.org/?version=stable&mode=debug&edition=2015&gist=5ecda81379c3cab749709f551109adfb | {
"domain": "codereview.stackexchange",
"id": 32411,
"tags": "strings, matrix, iterator, rust, vectors"
} |
Prove that $|P(X)| = 2^{|X|}$ | Question: Prove that for any finite set $X$, $|P(X)| = 2^{|X|}$. The solution should use induction.
Answer: Basis
$$ |X| = 0 $$
$$|P(X)| = 2^{|X|} = 2^0 = 1$$
True. Any set with zero elements takes the form $\{\}$, and thus its power set will be $\{\{\}\}$. One element.
Inductive Step
Assume the claim holds for every set with $n$ elements. We must show that for any set $X'$ with $|X'| = n + 1$:
$$|P(X')| = 2^{|X'|}$$
Pick any $x \in X'$ and let $X = X' \setminus \{x\}$, so that $|X| = n$. Then:
$$|P(X')| = 2\cdot|P(X)| = 2\cdot2^{|X|} = 2^{|X|+1} = 2^{|X'|}$$
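The doubling step here is easy to spot-check mechanically. The snippet below is my own addition (not part of the original answer); it enumerates power sets with `itertools.combinations` and confirms $|P(X)| = 2^{|X|}$ for small sets:

```python
from itertools import combinations

def power_set(xs):
    """All subsets of xs, returned as tuples."""
    return [c for r in range(len(xs) + 1) for c in combinations(xs, r)]

# Spot-check |P(X)| = 2^|X| for all sets of size 0..5.
for n in range(6):
    X = list(range(n))
    assert len(power_set(X)) == 2 ** n
print("verified for |X| = 0..5")
```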
The doubling itself holds because, when a set grows by one element, every subset faces a new binary choice: include the new element or not. Each old subset appears twice, once with the new element and once without, so the power set doubles. | {
"domain": "cs.stackexchange",
"id": 1788,
"tags": "sets"
} |
Is it possible to run publisher and subscriber in single node under rosserial? | Question:
I need to get input from an analog sensor and control the LED. In this case, can I run both the publisher and the subscriber under the same node in rosserial, because I have only one Arduino board? Please guide me towards some similar code.
Originally posted by Kishore Kumar on ROS Answers with karma: 173 on 2015-01-28
Post score: 0
Answer:
Yep, absolutely, a common standard use case, possible without problems.
Originally posted by Wolf with karma: 7555 on 2015-01-28
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Kishore Kumar on 2015-01-28:
Can you please give me example code where both the publisher and subscriber are written in the same node? | {
"domain": "robotics.stackexchange",
"id": 20716,
"tags": "control, ros, arduino, rosserial, publisher"
} |
Unit tests for a simple function that's part of a public API | Question: Given I have to write a simple function that essentially just does an HTTP fetch
to the given URL, but is part of a public API that will be used by thousands.
I have two main questions about the problem:
Since this is a public API I'm not sure how to handle improper input. Should I return undefined or throw an error? (I'd rather throw errors to make it most obvious what types are expected.) I've seen other APIs however just returning undefined, which to me seems pretty stupid, as surely you don't want to be handling undefined in high-level code?
function at(_url, _options) {
if (!_options)
throw new ReferenceError("No options provided.")
if (typeof _options !== "object")
throw new TypeError("Options must be an object.")
if (typeof _url !== "string")
throw new TypeError("URL must be a string.")
...
Is it too much to actually test if each possible improper input is at hand?
(I find myself writing the same test code for each function I write and after a while it seemed kind of "wrong" to be doing typechecking in JS. Is there a standard for typechecking inside APIs in JS?)
const expect = require("chai").expect
const fetchWithOptions = require("./../index")
describe("me()", () => {
it("should throw error if no options are provided", () => {
expect(() => fetchWithOptions.at("some-url")).to.throw(ReferenceError)
})
it("should throw an error if options are not an object", () => {
expect(() => fetchWithOptions.at("some-url", "string options")).to.throw(TypeError)
})
it("should throw an error if the provided url is not a string", () => {
expect(() => fetchWithOptions.at(123123, {someOption: "someOption"})).to.throw(TypeError)
})
})
Answer:
Assuming this function is async, neither. Provide a callback parameter or return a Promise (I'd favour Promises personally). This is a more common / understood pattern in JavaScript.
Unfortunately no there isn't, it's very much part of the downsides (and beauty) of using a loosely-typed language. If you're concerned about type-safety then you should really take a look at TypeScript or equivalents.
With regards to the tests, personally I think covering each error scenario is fine - I do it myself, they're minimal effort and will help catch any regressions. | {
"domain": "codereview.stackexchange",
"id": 28245,
"tags": "javascript, unit-testing, mocha"
} |
Need clarification regarding certificates of coNP problems | Question: NOTE: this is not an attempt to prove $NP \neq coNP$
There is one thing I have never been able to completely digest about the certificates of problems in $coNP$ and I would very much appreciate a definitive clarification from this community.
Let's focus on the subset sum problem ($SUBSUM$). Now, we all know that this problem is in $NP$ since,
to accept language membership, a Prover $P_v$ can emit a certificate which a verifier $V_r$ can check in polynomial time. Up to here no problem. The complement of this problem ($\overline{SUBSUM}$) is in $coNP$ which means that
we do not know if there is a succinct (i.e. polynomial) certificate to decide the language.
If such a certificate does not exist, then $NP \neq coNP$
and therefore $P \neq NP$.
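(For concreteness, the polynomial-time verification on the positive side is just a sum check; here is a sketch in Python, with names of my own choosing, that a verifier $V_r$ could run:)

```python
from collections import Counter

def verify_subsetsum_certificate(S, target, certificate):
    """Polynomial-time check of a claimed witness for SUBSUM."""
    if not certificate:                        # the empty subset is excluded
        return False
    if Counter(certificate) - Counter(S):      # uses elements S doesn't have
        return False
    return sum(certificate) == target

# 3 - 7 + 4 = 0, so this certificate convinces the verifier.
print(verify_subsetsum_certificate([3, -7, 4, 1, 2], 0, [3, -7, 4]))  # True
```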
What I don't understand is this:
If I have (for example) a set $S$ of integers and the number $0$ as an input and I ask:
Prove to me that for every subset of $S$, $\lnot SUBSUM$ holds, i.e. $\nexists$ a subset of $S$ such that the sum of its elements gives $0$ as a result (this is $\overline{SUBSUM}$, the complement of the subset sum problem).
How can a certificate exist for this problem whose verification is in $P$? I mean, I need to prove it for all the subsets, so the search space must be the power set of $S$. If $|S|=n$ then $|\mathcal{P}(S)|=2^n$. So if the prover, for example, produces a $2^{n/3}$-sized certificate, this means that I systematically leave out $2^{\frac{2}{3}n}$ subsets.
What I don't completely understand and for which I need clarification is why this argument is not accepted as evidence that $NP$ is not closed under complement.
Answer: The proof does not need to be a subset. It might be another indicator that the given set has some structure preventing it from being a positive instance of the subset sum problem. A good example with a non-trivial certificate is linear programming. Linear programs admit both a positive and a negative certificate (for the question of whether the optimum can be smaller/greater than a value $k$). The positive certificate is of course an assignment of the variables. The negative one, however, is given by Farkas' lemma and weak duality.
A good exercise for you is to look up linear programs, weak duality and Farkas' lemma :) | {
"domain": "cs.stackexchange",
"id": 15124,
"tags": "proof-techniques, co-np"
} |
Forming Black Holes | Question: I've read a play (The Square Root of A Sonnet) about the physicist who proposed the Chandrasekhar limit, Subramanyan Chandrasekhar, and had a doubt regarding an argument that was put forward regarding the formation of black holes. I wanted to write an article about it but wanted to ensure that my understanding about the entire topic was right.
Do inform me of any further details I may have missed. This is more or less an concept-understanding attempt and I hope it's well-received by the site.
Black Hole Formation:-
A Black Hole is formed when the escape velocity of any object is greater than the speed of light. This causes an immense gravitational pull whereby the object would begin to collapse in on itself. As it does, the objects particles come closer and closer (increasing the density to an enormous amount simultaneously).
But in the process of the particles going close to one another, their position is becoming more and more well defined due to which their velocity is becoming more uncertain (as per the founding principle in Quantum Mechanics-Heisenberg's Uncertainty Principle which states
$$\Delta x \Delta p \geq \frac{\hbar}{2}$$
). Hence, by one of the pillars of modern physics, the particles begin to vibrate at extreme speeds. Due to this, the object tends to move outward (i.e. it expands instead of collapsing).
However, their speed is limited by another pillar of modern physics, Special Relativity, which shows that the maximum speed any object can achieve is the speed of light. Therefore, by the laws of Special Relativity, every particle attains a speed close to the speed of light and cannot move any faster. Therefore, the particles continue to collapse (since the gravitational pull is far stronger) and form a black hole - a structure formed by the laws of Quantum Mechanics and Special Relativity but seemingly defying both.
Is this all correct or have I missed something? Moreover, is there anything further I could add?
Answer: Depends - on certain (very low) level it sounds about right.
On a different level it is not so easy. If you have the Earth instead of a black hole, then an object with velocity less than the escape velocity can still go further and further away from the Earth; the distance it can reach is just bounded. In the case of a black hole, no matter the (subluminal) velocity, the object will always move toward the center. That is just one example where the traditional concepts and intuition are dramatically different, or outright fail.
The main thing about black holes is that the relativistic effects of gravitation are extreme, so nonrelativistic ideas (like the concept of escape velocity, which I am not even sure makes any sense for a black hole) do not really apply to them, and if you wish to understand them, you need to study general relativity.
Also, from your description one cannot know if such a process could ever happen. You assume there is already a strong enough gravitational field to overcome any pressure matter could ever exert. But during collapse, the gravitational pull on the outer layers and the pressure are strongly coupled together. It could be that the pressure due to Heisenberg's uncertainty principle would increase more rapidly than the gravitational pull as the body collapses, as happens for bodies of small enough mass. Then your supposed situation of superluminal escape velocity would never be reached and no black hole could ever be formed by this process. | {
"domain": "physics.stackexchange",
"id": 64891,
"tags": "quantum-mechanics, general-relativity, black-holes, speed-of-light, heisenberg-uncertainty-principle"
} |
Determination of pKb of a mono acidic base | Question:
$20$ mL of a weak monoacidic base($\text{BOH}$) requires $12$ mL of $0.3$ M $\text{HCl}$ solution for the equivalence point. During titration, the pH of the base solution was $10$ upon the addition of $4$ mL of $0.3$ M $\text{HCl}$ solution. What is the $\text{pKb}$ of the base($\text{BOH}$)?
My attempt:
The concentration of the base is given by,
$c * 20 = 12 * 0.3$ -- (equating the moles of the acid and base)
$c = 0.18$ M
So, initial moles of the base in the container $= 20 * 0.18 = 3.6$ mmol
Moles of acid that is added $ = 4 * 0.3 = 1.2$ mmol
Since the base is monoacidic, they will react in a 1:1-mole ratio. The acid is the limiting reagent, so it will be fully consumed. Therefore the moles of base left is,
$3.6 - 1.2 = 2.4$ mmol
The concentration of the base is,
$\frac{2.4}{24} = 0.1$ M
Since the pH of the solution is $10$, therefore the concentration of $\text{OH}^- = 10^{-4}$
Applying the approximated formula for calculating the $\text{k}_b$ of a weak base,
$$\text{K}_b = \frac{x^2}{c}$$
Where,
x = concentration of $\text{OH}^-$ ions
c = concetration of the base
In our case,
x = $10^{-4}$
c = $0.1$
Plugging in the values, I got $pK_b = 7$, but the answer is given as $4.3$.
Any help would be appreciated.
Answer: I appreciate the effort you showed in your question, but you missed one important thing: when a weak base reacts with a strong acid, a buffer solution is formed, and the pH is contributed not only by the base but also by the acidic salt ($\ce{BCl}$ in this case).
So, simply applying the Henderson–Hasselbalch equation (derivation can be found here)
$$\mathrm{pOH} = \mathrm{p}K_\mathrm{b} + \log{\frac{[\ce{B+}]}{[\ce{BOH}]}}$$
we get
$$4 = \mathrm{p}K_\mathrm{b} + \log{\frac{1.2}{2.4}},$$
which gives us
$$\mathrm{p}K_\mathrm{b} = 4.3010$$ | {
"domain": "chemistry.stackexchange",
"id": 11557,
"tags": "physical-chemistry, equilibrium, ph"
} |
Increase in Solubility of a Gas with an Increase in Temperature | Question: On the UC Davis ChemWiki I read, "some gases have an increase in solubility with an increase in temperature."
I understand why this is applicable to solids in liquids such as water, but why is it applicable as well as for gases in liquids?
Answer: If the dissolution of the gas absorbs heat in order to occur (e.g. $N_2$ gas dissolving in benzene):
$$Undissolved\ N_2 + Heat \rightleftharpoons Dissolved\ N_2$$
then raising the temperature will add more heat and by Le Chatelier's principle force the reaction to the right. In other words, if the dissolution requires heat to occur and you add more heat, more dissolution will occur. This corresponds to an increase in solubility with an increase in temperature.
If the dissolution of the gas produces heat when it occurs (e.g. $O_2$ gas dissolving in water):
$$Undissolved\ O_2 \rightleftharpoons Dissolved\ O_2 + Heat$$
as raising the temperature adds more heat, having the extra heat around will instead allow more gas to leave the dissolved state. This corresponds to a decrease in solubility with an increase in temperature.
The model that is used to explain whether the gas absorbs or produces heat upon dissolving in a liquid solvent is based on the bonding of the liquid molecules with each other vs. with the gas molecules. | {
"domain": "physics.stackexchange",
"id": 19347,
"tags": "temperature, physical-chemistry"
} |
Block Diagram Reduction: Is it necessary to do it stepwise? | Question: Just a short question: Is there any usefulness in doing block diagram reduction piecewise?
The reason I am asking is that I find it much (!) easier to just find the final $\frac{output}{input}$ transfer function using mathematics (the brute force method), but my professor does it stepwise.
If I am to do it stepwise, I still feel like I have to do it mathematically, rendering the stepwise reduction pretty useless as far as I can tell.
I realize this seems like a banal question, but I cannot find anything answering this question is my controls book.
Answer: When you are solving a problem in the real world, there are 2 main important requirements
Get the correct answer.
Solve it using sound methods.
There might be a half-dozen different ways to solve a given problem, so long as you arrive at the correct answer and don't pick a method involving rain dances or Satanic rituals, you are free to pick the method that you find easiest.
However, the requirements in academia are often different. There is a good chance that your professor has a reason for using the method he does. If he asks you to solve a problem step-wise, you should do what he asks.
It's likely that, at a certain point, doing it directly rather than piecemeal becomes overly complex and involves massive equations. At this point, knowing how to do it step-wise becomes necessary, and your professor is preparing you for this. But the only way to know for sure is to ask him.
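For what it's worth, the "brute force" route is easy to mechanize. Here is a sketch of my own (with `sympy` as an arbitrary choice) that solves the standard negative-feedback loop $Y = G(X - HY)$ directly for the overall transfer function:

```python
import sympy as sp

G, H, X, Y = sp.symbols('G H X Y')

# Loop equation for a standard negative-feedback block diagram:
# the plant G acts on the error signal X - H*Y.
loop_eq = sp.Eq(Y, G * (X - H * Y))

# Solve for the output and read off the closed-loop transfer function Y/X.
Y_sol = sp.solve(loop_eq, Y)[0]
tf = sp.simplify(Y_sol / X)
print(tf)  # G/(G*H + 1)
```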
As I said, beyond this class, the only thing that will matter is getting the correct answer and being able to explain how you arrived at that answer, so if you get the same answer through both methods, both are equally valid. | {
"domain": "engineering.stackexchange",
"id": 439,
"tags": "control-engineering, control-theory"
} |
"NP-complete" optimization problems | Question: I am slightly confused by some terminology I have encountered regarding the complexity of optimization problems. In an algorithms class, I had the large parsimony problem described as NP-complete. However, I am not exactly sure what the term NP-complete means in the context of an optimization problem. Does this just mean that the corresponding decision problem is NP-complete? And does that mean that the optimization problem may in fact be harder (perhaps outside of NP)?
In particular, I am concerned about the fact that while an NP-complete decision problem is polynomial time verifiable, a solution to a corresponding optimization problem does not appear to be polynomial time verifiable. Does that mean that the problem is not really in NP, or is polynomial time verifiability only a characteristic of NP decision problems?
Answer: An attempt at a partial answer:
Decision problems were already investigated for some time before optimization problems came into view, in the sense in which they are treated from the approximation algorithms perspective.
You have to be careful when carrying over the concepts from decision problems. It can be done and a precise notion of NP-completeness for optimization problems can be given. Look at this answer. It is of course different from the NP-completeness for decision problems, but it is based on the sames ideas (reductions).
If you are faced with an optimization problem that doesn’t allow a verification with a feasible solution, then there is not much you can do. That is why one usually assumes that:
We can verify efficiently if the input is actually a valid instance of our optimization problem.
The size of the feasible solutions is bounded polynomially by the size of the inputs.
We can verify efficiently if a solution is a feasible solution of the input.
The value of a solution can be determined efficiently.
Otherwise, there is not much we can hope to achieve.
The complexity class $\mathrm{NP}$ only contains decisions problems per definition. So there aren’t any optimizations problems in it. And the Verifier-based definition of $\mathrm{NP}$ you mention is specific to $\mathrm{NP}$. I haven’t encountered it with optimization problems.
If you want to verify that a solution is not just feasible, but also optimal, I would say that this is as hard as solving the original optimization problem because, in order to refute a given feasible and possibly optimal solution as non-optimal, you have to give a better solution, which might require you to find the true optimal solution.
But that doesn’t mean that the optimization problem is harder. See this answer, which depends of course on the precise definitions. | {
"domain": "cs.stackexchange",
"id": 18319,
"tags": "complexity-theory, np-complete, terminology"
} |
Limitation of Gauss's Law | Question: We can use Gauss's law to find out the electric field $\vec{E}(\vec{r})$ due to an infinite cylinder of charge.
But if the cylinder is of finite length, then it is said that this $|\vec{E}(\vec{r})|$ is valid at $\vec{r}$ only when $r \ll R$.
Why this is so? And how can I prove this condition mathematically?
Answer: rodrigo has a good explanation for why this intuitive explanation is useful.
If you wanted to prove it mathematically, you'd have to find the exact field first. Here's an example from Griffiths: If you have a line of charge with linear charge density $\lambda$ on the $x$-axis running from $-L$ to $+L$, the field at some height $z$ above its midpoint is given by
$$
E = \frac 1{4\pi\epsilon_0} \frac{2\lambda L}{z\sqrt{z^2+L^2}}.
$$
If you have dissimilar numbers $a\gg b$, then $$\sqrt{a^2+b^2} = a\sqrt{1+\frac{b^2}{a^2}} = a \left(1 + \frac 12 \frac {b^2}{a^2} + \mathcal O\left(\frac{b^4}{a^4}\right)\right).$$
Very near the line of charge, $z \ll L$, we have
$$
E = \frac 1{4\pi\epsilon_0} \frac{2\lambda L}{zL} \left(1-\frac12 \frac{z^2}{L^2} +\cdots \right) \approx \frac 1{4\pi\epsilon_0} \frac{2\lambda}{z}
$$
which is the field of an infinite cylinder; very far from the line, $z\gg L$, we have
$$
E = \frac 1{4\pi\epsilon_0} \frac{2\lambda L}{z^2} \left(1-\frac12 \frac{L^2}{z^2} +\cdots \right) \approx \frac 1{4\pi\epsilon_0} \frac{q}{z^2}
$$
which is the field due to a point with charge $q=2\lambda L$. | {
"domain": "physics.stackexchange",
"id": 54527,
"tags": "homework-and-exercises, electrostatics, electric-fields, gauss-law, approximations"
} |
Identifying the flavour singlet baryon | Question: I would like to ask how to identify the lowest lying flavour-singlet baryon in the data on known states of particles. So far I managed to derive the following constraints:
Its quark content must be $uds$ by full antisymmetry in flavour indices. Thus it goes by the name $\Lambda$. As always, it is also fully antisymmetric in colour.
If its spatial wavefunction was an $s$-wave then we would need to make it fully antisymmetric in spin indices to satisfy Pauli's principle. This is impossible for three doublets. Hence it must be a p-wave. Therefore parity $P=1$. Then to make the whole thing fully antisymmetric I need $S= \frac{3}{2}$.
By addition of angular momenta it has $J=\frac{1}{2}, \frac{3}{2}, \frac{5}{2}$. However, I have no intuition whatsoever as to which of these is preferred.
Of course there are a lot of states in the PDG which satisfy these requirements. The questions are: how do I decide which one of these is the lowest-lying flavour singlet? Moreover, can we infer from the theory any further constraints on its quantum numbers and other characteristics?
Answer: I found the answer to this question in the paper "SU(3) systematization of baryons" by V. Guzey and M. V. Polyakov (arXiv:hep-ph/0512355). Apparently the lightest flavour singlet is the state $\Lambda(1520)$ with $J^P=\frac{3}{2}^-$. | {
"domain": "physics.stackexchange",
"id": 44985,
"tags": "particle-physics, standard-model, quantum-chromodynamics, quarks, baryons"
} |
What are the rules for positive recursive types in dependent type theory? | Question: I've recently started independently learning type theory, using a combination of papers found online and ncatlab.org (but have not worked with category theory), and am about to start reading TAPL.
I'm interested in understanding how recursive types can be substituted for inductive types in dependent type theory (we assume dependent sums, dependent products, identity types and finite types 0, 1, 2 in our background theory), however I am unable to find a definition for recursive types that doesn't use unrestricted fixpoint operators and hence lack strong normalisation.
I've attempted to do this myself, but the dependent eliminator seems slightly too weak, and a restriction on if b then x else y such that the boolean expression is evaluated before x and y can be reduced seems to be required for normalisation.
$$\frac{A:\text{Type} \vdash F(A):\text{Type} \\ A\text{ is positive in }F(A)}{\mu A.F(A):\text{Type}}$$
$$\frac{a:F(\mu A.F(A))}{S(a):\mu A.F(A)}$$
$$\frac{B:\text{Type}, b:B \vdash C(b,B):\text{Type} \\ B:\text{Type}, e: \prod_{b:B}C(b,B))\vdash R(B,e) : \prod_{f:F(B)}C(f,F(B))}{rec(R):\prod_{a:\mu A.F(A)}C(a, \mu A.F(A))}$$
$$\frac{B:\text{Type}, b:B \vdash C(b,B):\text{Type} \quad a:\mu A.F(A)\\ B:\text{Type}, e: \prod_{b:B}C(b,B)\vdash R(B,e) : \prod_{f:F(B)}C(f,F(B))}{rec(R)S(a)=R(\mu A.F(A),rec(R))a:C(a, \mu A.F(A))}$$
Where A is positive in F(A) precisely when A does not occur in any type indexing any dependent product types in F(A).
Is my formulation correct? What is the correct way to formulate rules for normalising recursive types, and are there any references that I could look at that expand on these?
Answer: Why are recursive types seldom seen in dependent type theory?
The point of inductive types is precisely that you get normalization. Unrestricted recursive types simply lead to non-normalizing terms.
Given any type $A$, we may inhabit $A$ with a non-normalizing term as follows. Consider the recursive type
$$D = D \to A.$$
The term $\omega \mathrel{{:}{=}} \lambda d : D . d \; d$ has type $D \to A$ and it also has type $D$, since they are equal. Therefore $\omega \; \omega$ has type $A$, and in addition $\omega \; \omega$ reduces to itself, giving a non-normalizing term.
If instead of $D = D \to A$ we have just $D \cong D \to A$ then an easy adaptation of the above argument leads to the same conclusion. You just have to coerce $d$ to have type $D \to A$.
Is this bad? If you're using type theory to write programs then it probably isn't that bad. It's actually kind of cool. But if you're using type theory to do reasoning, i.e., you want to use types as propositions, then it's bad because every type has an inhabitant and so every proposition has a proof. It just depends on what you want from type theory.
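To see the non-normalization concretely, here is an untyped Python sketch (Python is of course not a dependently typed language; this only mimics the reduction behaviour of $\omega \; \omega$):

```python
# Untyped sketch of omega = λd. d d. The self-application omega(omega)
# reduces to itself forever; in Python this shows up as unbounded
# recursion, which CPython cuts off with a RecursionError.
omega = lambda d: d(d)

def diverges():
    """Return True if evaluating omega(omega) never terminates
    (detected here via Python's recursion limit)."""
    try:
        omega(omega)
    except RecursionError:
        return True
    return False

print(diverges())  # True
```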
What are the rules for recursive types?
$\newcommand{\Type}{\mathsf{Type}}$
Let us try to formulate the rules for recursive types. A reasonable attempt goes along the same lines as what you've suggested:
$$\frac{\Gamma, X : \Type \vdash F(X) : \Type}{\Gamma\vdash \mu X . F(X) : \Type}$$
$$\frac{\Gamma \vdash e : F(\mu X . F(X))}{\Gamma \vdash S(e) : \mu X . F(X)}$$
$$\frac{\Gamma, y : \mu X . F(X) \vdash C(y) : \Type \qquad
\Gamma, x : F(\mu X . F(X)) \vdash f(x) : C(S(x)) \quad
\Gamma \vdash u : \mu X . F(X)
}{\Gamma \vdash \mathsf{rec}_F([x . f(x)], u) : C(u)}$$
We shouldn't forget to ask about equations that we expect to hold. The $\beta$-rule would be
$$\mathsf{rec}_F([x . f(x)], S(e)) = f(e)$$
and let us throw in an $\eta$-rule
$$S(\mathsf{rec}_F([x . x], u)) = u.$$
The $\eta$-rule says that if we take apart $u : \mu X . F(X)$ and put it back together, we will get $u$.
So far we have a map $S : F(\mu X . F(X)) \to \mu X . F(X)$, but we also want $R : \mu X . F(X) \to F(\mu X . F(X))$ together with
$$S(R(u)) = u \qquad\text{and}\qquad R(S(e)) = e \tag{1}$$
These say that $S$ and $R$ form an isomorphism between $F(\mu X . F(X))$ and $\mu X . F(X)$. We can get such an $R$, namely
$$R \mathbin{{:}{=}} \lambda y : F(\mu X . F(X)) . \mathsf{rec}_F([x . x], y).$$
Indeed, we have by the $\beta$-rule
$$R(S(e)) = \mathsf{rec}_F([x . x], S(e)) = e$$
and by the $\eta$-rule
$$S(R(u)) = S(\mathsf{rec}_F([x . x], u) = u.$$
We could have started with having $S$ and $R$ satisfying equations (1), and that would allow us to derive the eliminator as
$$\mathsf{rec}_F ([x . f(x)], u) = f(R(u)).$$
Such an eliminator satisfies the $\beta$-rule and the $\eta$-rule, quite obviously. This should give us pause. We discovered that our rules say precisely that $\mu X . F(X)$ is isomorphic to $F(\mu X . F(X))$ and nothing else. This is not good, because the rules should fix the meaning of $\mu X . F(X)$ up to equivalence of types.
To see that the rules are no good, consider the case $F(X) = X$. Every type is a fixed point of $F$, and our rules say nothing about which one $\mu X . X$ should be. Indeed, we can take an arbitrary type $A$ and set $R$ and $S$ to be the identity maps. That will satisfy all the rules for $\mu X . X$.
We should somehow state which fixed point of $F$ we have in mind when we write down $\mu X . F(X)$. A reasonable choice would be to ask for the smallest or initial one. But what does that mean? In category theory it means that there is a unique homomorphism from the algebra $S : F(\mu X . F(X)) \to \mu X . F(X)$ to any other $F$-algebra. Here we rely on the fact that $F$ is a functor so that we can reasonably talk about $F$ applied to a map. But our $F$ is not going to be a functor in general because it might break covariance (try to extend $F(X) = X \to X$ so that it takes $f : A \to B$ to some $F(f) : F(A) \to F(B)$ and you will see what the problem is). We are stuck, as we have no good criterion for picking one of the possible fixed points of $F$!
The way out is to decompose $F : \Type \to \Type$ according to covariant and contravariant arguments,
$$F : \Type \times \Type^{\mathrm{op}} \to \Type.$$
For example $F(X) = X \to X$ becomes $F(X_1, X_2) = X_2 \to X_1$. But now it's not clear what a fixed point of $F$ might be. All this leads to Freyd's notion of algebraically compact categories and a beautiful theory surrounding them. The theory can be applied to categorical models of programming languages to explain how recursive types work there. When we throw in dependent types, then as far as I know, things break and nobody knows what to do. But I would love to be proved wrong!
The sort of recursive types we spoke about here goes under the name isorecursive types. The other option are equirecursive types, see the same link. | {
"domain": "cs.stackexchange",
"id": 10221,
"tags": "reference-request, type-theory, dependent-types, types-and-programming-languages"
} |
How would a Mars rover identify a microbialite fossil? | Question: I recently read of a company in Canada using robotic technology to study freshwater microbialites. Their claim is that greater understanding of terrestrial microbialites could help in the search for extraterrestrial microbialite fossils (if they exist) on Mars.
If a rover encountered a microbialite fossil on Mars, how would it identify the rock as being a fossil?
Answer: I'm basing this answer off an article: Kevin Lepot, Karim Benzerara, Gordon E. Brown Jr., Pascal Philippot (2008). 'Microbially influenced formation of 2.7 billion-year-old stromatolites'. Nature Geoscience Vol.1 No.2, pp.118–121. DOI: http://dx.doi.org/10.1038/ngeo107
Instruments that are desirable (and are actually used) to study stromatolites/freshwater microbialites:
Raman microspectroscope
Confocal laser scanning microscope
Scanning transmission X-ray microscope
Transmission electron microscope (TEM), including high-resolution TEM
Near-edge X-ray absorption fine structure spectroscope
The usual sets of fine analytical chemistry and microcutting tools
As you might suspect, this may require rather massive, power-hungry and bulky rovers. There is a solution and it's called sample return. Although the tools and instruments are constantly evolving and undergoing miniaturization, a skilled and attentive lab researcher will find out more back here on Earth than a rover on Mars.
Another consideration in favor of sample return is the verifiability of possibly sensational results - the sample will be curated and subdivided between universities and labs worldwide, applying diverse techniques and not relying on a single set of ultra-costly but possibly systematically biased hardware. | {
"domain": "earthscience.stackexchange",
"id": 180,
"tags": "paleontology, fossils, astrobiology, planetology"
} |
Date only limited function class | Question: Per the title, a date-only class (or struct?).
.NET does not offer a date-only data type.
I get that there is a time-zone dynamic, but then you have daylight saving time that can still kill you. I do document management and we just have a document with a date.
// minimal limited test
DateOnly date1 = new DateOnly(1901, 2, 1);
DateOnly date2 = new DateOnly(1902, 3, 2);
DateOnly date3 = new DateOnly(1902, 3, 2);
Debug.WriteLine(date1.GetDayDiff(date2)); // 365 + 29 = 394
Debug.WriteLine(date2.GetDayDiff(date1));
Debug.WriteLine(date2.CompareTo(date1));
Debug.WriteLine(date1.CompareTo(date2));
Debug.WriteLine(date2.CompareTo(date3));
// end test
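The expected differences in the test above can be cross-checked against a standard library implementation (Python's datetime, used here purely as an independent reference):

```python
from datetime import date

# 1901 and 1902 are not leap years, so Feb 1 1901 -> Feb 1 1902 is
# 365 days, and Feb 1 1902 -> Mar 2 1902 is another 29 days.
d1 = date(1901, 2, 1)
d2 = date(1902, 3, 2)
diff = (d2 - d1).days
print(diff)  # 394
```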
public class DateOnly: Object, IComparable
{
private Dictionary<int, int> MonthDay = new Dictionary<int, int>() { { 1, 31 }, { 2, 29 }, { 3, 31 }, { 4, 30 }, { 5, 31 }, { 6, 30 }
, { 7, 31 }, { 8, 31 }, { 9, 30 }, { 10, 31 }, { 11, 30 }, { 12, 31 } };
public override int GetHashCode() { return (Int32)Days; }
public override bool Equals(Object obj)
{
if (obj == null || GetType() != obj.GetType())
return false;
DateOnly other = (DateOnly)obj;
return (this.Days == other.Days);
}
public int CompareTo(object obj)
{
if (obj == null)
{
throw new ArgumentNullException();
}
if (obj is DateOnly)
{
DateOnly other = (DateOnly)obj;
if (other == null)
{
throw new ArgumentNullException();
}
return (this.Days).CompareTo(other.Days);
}
else
{
throw new ArgumentNullException();
}
}
public override string ToString()
{
return string.Format("{0}/{1}/{2}", Year, Month, Day);
}
public Int64 GetDayDiff(DateOnly Date)
{
Int64 getDayDiff = (Int64)this.Days - (Int64)Date.Days;
return (getDayDiff);
}
public UInt32 Days { get; private set; }
public int Year { get; private set; }
public int Month { get; private set; }
public int Day { get; private set; }
public UInt32 MaxDays { get; private set; }
public DateOnly(int year, int month, int day)
{
if (!ValidateDate(year, month, day))
throw new ArgumentOutOfRangeException("invalid date");
MaxDays = GetDays(2078, 12, 31) - 1;
Year = year;
Month = month;
Day = day;
Days = GetDays(year, month, day);
}
private bool ValidateDate(int year, int month, int day)
{
bool valid = true;
if (year < 1900 || year >= 2079)
valid = false;
else if (month < 1 || month > 12)
valid = false;
else if (day < 1 || day > 31)
valid = false;
else if (day > DaysPerMonth(year, month))
valid = false;
return valid;
}
private UInt32 GetDays(int year, int month, int day)
{
UInt32 days = 0;
int yearSince1900 = year - 1900;
UInt32 temp;
for (int i = 1; i <= yearSince1900; i++)
{
temp = DaysPerYear(i + 1900 - 1);
days += temp;
}
for (int i = 2; i <= month; i++)
{
temp = DaysPerMonth(year, i - 1);
days += temp;
}
days += (UInt32)day;
return days;
}
private bool IsLeap(int year)
{
return ((year % 4 == 0 && year % 100 != 0) || year % 400 == 0);
}
private UInt32 DaysPerYear(int year)
{
if (IsLeap(year))
return 366;
else
return 365;
}
private UInt32 DaysPerMonth(int year, int month)
{
UInt32 days = (UInt32)MonthDay[month];
if (month == 2 && !IsLeap(year))
days = 28;
return days;
}
}
Answer: Properly overriding GetHashCode()
I remember when first learning how to properly override and use GetHashCode() that there was a rule which stated: "The integer returned by GetHashCode() should never change." You're violating this rule because Days is not a readonly property.
Bad practices and unnecessary code
Take a look at this snippet:
public int CompareTo(object obj)
{
if (obj == null)
{
throw new ArgumentNullException();
}
if (obj is DateOnly)
{
DateOnly other = (DateOnly) obj;
if (other == null)
{
throw new ArgumentNullException();
}
return (this.Days).CompareTo(other.Days);
}
else
{
throw new ArgumentNullException();
}
}
There are 2 problems in this method.
First - you're using the is operator combined with a direct cast, instead of using the as operator.
Second - you have a redundant if statement that will never be triggered - if (other == null): if the cast (DateOnly other = (DateOnly) obj;) fails, it will crash your program; it won't return null.
Lastly, you don't really need the else statement there, because of the flow branching involved: if you enter the if statement you will eventually reach the return statement, which breaks out of the method, so execution never continues past it, making the else redundant.
This is how I would write this method:
public int CompareTo(object obj)
{
if (obj == null)
{
throw new ArgumentNullException();
}
var other = obj as DateOnly;
if (other != null)
{
return (this.Days).CompareTo(other.Days);
}
throw new ArgumentNullException();
}
In GetDayDiff(DateOnly Date) you perform 2 casts when only 1 is necessary to operate with the proper type:
long getDayDiff = (long) this.Days - (long) Date.Days;
You can just do:
long getDayDiff = this.Days - (long) Date.Days;
I'm not sure why you explicitly inherit from Object; this is unnecessary, as every type in C# inherits from that class anyway - stating it explicitly changes nothing, it's just a few extra letters.
Shortening the code
You can use interpolated strings instead of String.Format():
return string.Format("{0}/{1}/{2}", Year, Month, Day);
As interpolated string:
return $"{Year}/{Month}/{Day}";
Ternary operator:
You can make use of the ternary operator here:
if (IsLeap(year))
return 366;
else
return 365;
Like this:
return (uint) (IsLeap(year) ? 366 : 365);
Overall design
Few concerns here:
I would just use DateTime as a backing field and utilise some of the already-written methods there (I'm sure this was a design pattern but I can't think of the name right now, maybe Decorator?).
You lack range checks and overall validation of your public methods.
You're inconsistent with where you use uint and int, if you've decided that it would be good to use uint because x, y, z, why are you not using that everywhere? This will also partially solve point 2. | {
"domain": "codereview.stackexchange",
"id": 24976,
"tags": "c#, .net, datetime"
} |
Thermodynamics adding salt to water changes the temperature | Question:
$5\:\mathrm{g}$ of an unknown salt are dissolved in $325\:\mathrm{g}$ of water. Both the water and the salt are initially the same temperature. The water's temperature falls by $11.4\:\mathrm{^\circ{}C}$. Explain how it is possible for the salt and water to change temperature even though both substances are initially at the same temperature.
My approach:
We know that $$q = mc \Delta T$$
Since $q, m,$ and $\Delta T$ aren't changed by this action, this process must result in the raising of water's specific heat. Is this correct?
Answer: The keyword is that the salt dissolves. Dissolution entails at least two steps:
1) Overcoming solvent-solvent interactions and bonds. An extreme example: your dinner plate doesn't dissolve in your kitchen table. One reason is that there is an extremely high energy barrier to overcoming the hypothetical solvent's (in this case, the kitchen table's) intermolecular forces. You'd have to take an ax to the table to overcome the strong intermolecular forces that hold the molecules of your dinner table together.
On the other hand breaking these solvent-solvent "bonds" or intermolecular forces in water is easy; you can jump into a swimming pool just fine. I wouldn't suggest jumping into a table.
Also, as this step suggests, this is an energy-intensive process, i.e. an endothermic process. Energy must be consumed to break these solvent-solvent bonds or interactions.
2) Breaking solute-solute interactions and bonds. This however isn't required. For example, some unionized $\ce{NaCl}$ may be solvated by water. You'll find this more true with some of the less soluble salts. But most of the sodium chloride you toss in water will have dissolved (we'll get to why later), so what is solvated generally isn't $\ce{NaCl_{(s)}}$ or $\ce{NaCl_{(aq)}}$ (this is somewhat misleading terminology) but rather $\ce{Na^+}$ and $\ce{Cl^-}$.
This again is an endothermic process as bonds are being broken.
3) Solute-solvent stabilization (solvation). This step entails the formation of solute-solvent intermolecular forces. For example, the sodium ion, with its positive charge, may form a hydration shell of water molecules.
This is an exothermic process, as "bonds" or more accurately, stabilizing intermolecular forces are formed.
Couple all three of these processes together and you have the actual dissolution process of $\ce{NaCl}$.
What's interesting about $\ce{NaCl}$ is that even with the third, exothermic step, the dissolution (which comprises all three steps) is still slightly endothermic. So that's what the question is premised on - the addition of sodium chloride to water kicks off a spontaneous process which is endothermic. One might then ask why this endothermic process still occurs. The answer lies with entropy; the dissolution of sodium chloride is not enthalpically favorable (dissolution consumes energy, specifically from the water), yet it is entropically favorable.
Remember that all processes tend toward greater disorder over time, and the dissolution of sodium chloride is a perfect way to increase disorder; we go from unionized sodium chloride to two ions! The two ions are definitely going to have a lot more degrees of freedom than a single, unified molecule.
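As a rough numeric check of the scale of the effect in the problem above (a sketch that assumes water's specific heat of about 4.18 J g⁻¹ K⁻¹ and neglects the salt's own heat capacity):

```python
# Heat given up by the water as it cools, q = m c ΔT; for an endothermic
# dissolution this is roughly the energy consumed breaking bonds.
# Assumes c_water = 4.18 J/(g*K) and ignores the 5 g of salt itself.
m_water = 325.0   # g
c_water = 4.18    # J/(g*K)
delta_T = 11.4    # K, the observed temperature drop

q_absorbed = m_water * c_water * delta_T
print(f"{q_absorbed / 1000:.1f} kJ")  # ~15.5 kJ
```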
$$\ce{NaCl_{(s)} + H_2O ->Na^+_{(aq)} + Cl^{-}_{(aq)}}$$ | {
"domain": "chemistry.stackexchange",
"id": 1294,
"tags": "physical-chemistry, thermodynamics, solutions"
} |
Why is protein cyclisation desirable? | Question: There are a number of methods to "cyclize" an existing peptide:
Disulphide bond as described in Disulfide Bond Mimetics: Strategies and Challenges by Gori et al.
"Linchpin" based (linker chemistry) as described in Synthetic Cross-linking of Peptides: Molecular Linchpins for Peptide Cyclization by Derda et al.
Why would the "cyclization" of a protein be a desirable goal?
Answer: As described in the abstract of Synthetic Cross-linking of Peptides: Molecular Linchpins for Peptide Cyclization by Derda et al., protein cyclization improves:
resistance to proteolytic degradation and conformational stability. The latter property leads to an increase in binding potency and increased bioavailability due to increased permeation through biological membranes.
In simpler terms, a protein that is cyclized is less likely to:
Fold into an unhelpful shape.
Be broken up by an enzyme.
Thus, they are more likely to:
Act as better binders for a given target
Make it across a cell membrane | {
"domain": "biology.stackexchange",
"id": 10906,
"tags": "protein-structure"
} |
Is it really hard to learn in a stochastic environment? | Question: I understand that a stochastic environment is one that does not always lead you to the desired state when you take a particular action $a$ (but the probability of transitioning to an undesired state is fixed, right?).
For example, the frozen lake environment is a stochastic environment. Sometimes you want to move in one direction and the agent slips and moves in another direction. This is unlike an environment with multiple agents, where the probabilities of the other agents' actions keep changing because they keep learning (a non-stationary environment).
Why is it difficult to learn in a stochastic environment, if, for example, Q-learning can solve the frozen lake environment? In what cases would it be difficult to learn in a stochastic environment?
I have found some articles that address that issue, but I don't understand why it would be difficult if Q-learning can solve it (for discrete states/actions).
Answer: A stochastic environment does not necessarily mean that the reward distribution is stationary. It can be, as in the case of FrozenLake. The paper you linked also mentions that other algorithms already addressed the non-stationary case.
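In the simple stationary case, more samples are all that is needed. A minimal illustration (a hypothetical two-armed stationary bandit, not the FrozenLake environment itself): plain sample averages of each action's reward converge, so enough samples resolve which action is better.

```python
import random

# Hypothetical two-armed stationary bandit: action 1 has the higher
# expected reward (0.6 vs 0.4). With enough samples, plain sample
# averages identify the better action.
random.seed(0)

def pull(action):
    p = 0.4 if action == 0 else 0.6
    return 1.0 if random.random() < p else 0.0

estimates = [0.0, 0.0]
counts = [0, 0]
for t in range(10000):
    a = t % 2                      # sample both actions evenly
    counts[a] += 1
    estimates[a] += (pull(a) - estimates[a]) / counts[a]  # running mean

best = max(range(2), key=lambda a: estimates[a])
print(best)  # action 1
```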
If you have a simple stationary stochastic environment, then you just need more sample trajectories to determine which action is better. If the environment is fully observable, then based on the estimated action values you can build a deterministic optimal policy. | {
"domain": "ai.stackexchange",
"id": 2988,
"tags": "reinforcement-learning, q-learning, markov-decision-process, environment"
} |
Weight on carpet vs. hard floor | Question: Will the scale say I weigh more, less, or the same on a carpet as compared to a hard floor?
You can assume the scale works via spring mechanism. Free body diagrams encouraged!
Answer: From what I understand, you will show as weighing more on a carpet than on a hard floor. It is due to the way the supporting surface affects the feet of the scale.
Here is an article that explores that question: http://www.newscientist.com/article/dn2462-people-weigh-less-on-a-hard-surface.html
"domain": "physics.stackexchange",
"id": 8052,
"tags": "homework-and-exercises, measurements"
} |
Symmetry factor via Wick's theorem | Question: Consider the lagrangian of the real scalar field given by $$\mathcal L = \frac{1}{2} (\partial \phi)^2 - \frac{1}{2} m^2 \phi^2 - \frac{\lambda}{4!} \phi^4$$
Disregarding snail contributions, the only diagram contributing to $ \langle p_4 p_3 | T (\phi(y)^4 \phi(x)^4) | p_1 p_2 \rangle$ at one loop order is the so called dinosaur:
To determine the symmetry factor $S$ of this diagram, I say that there are 4 choices for a $\phi_y$ field to be contracted with one of the final states and then 3 choices for another $\phi_y$ field to be contracted with the remaining final state. The same arguments apply for the $\phi_x$ fields and their contractions with the initial states. This leaves 2! permutations of the propagators between $x$ and $y$. Two vertices => a factor $(1/4!)^2$, and such a diagram would be generated at second order in the Dyson expansion => a factor $1/2$. Putting this all together I get
$$S^{-1} = \frac{4 \cdot 3 \cdot 4 \cdot 3 \cdot 2!}{4! \cdot 4! \cdot 2} = \frac{1}{4}$$ I think the answer should be $1/2$ so can someone help in seeing where I lost a factor of $2$?
I could also evaluate $$\langle p_4 p_3 | T (\phi(y)^4 \phi(x)^4) | p_1 p_2 \rangle = \langle p_4 p_3 | : \phi(y)^4 \phi(x)^4 : | p_1 p_2 \rangle + \dots + (\text{contract}(\phi(x) \phi(y)))^2 \langle p_4 p_3 | : \phi(y)^2 \phi(x)^2 :| p_1 p_2 \rangle + \dots $$ where dots indicate diagrams generated via this correlator that do not contribute at one loop. (I don't know the latex for the Wick contraction symbol so I just write contract). Is there a way to find out the symmetry factor from computing the term $(\text{contract}(\phi(x) \phi(y)))^2 \langle p_4 p_3 | : \phi(y)^2 \phi(x)^2: | p_1 p_2 \rangle?$
Answer: Let's start with the external legs on the left. There are eight possible places for the first upper-left external leg to attach: it can attach to one of the four possible $\phi_x$ fields, or to one of the four possible $\phi_y$ fields. The lower-left external leg then only has three choices, since if the first leg attached to the $\phi_x$ field, this leg must also attach to a $\phi_x$ field, and similarly for $\phi_y$. So attaching these legs gives a factor of $2\times 4\times 3$.
Now, let's do the legs on the right. If the legs on the left attached to $\phi_x$, the legs on the right must attach to $\phi_y$, and vice-versa. So there are only four choices for the upper-right external leg, and three choices for the lower-right external leg. Thus, attaching these legs gives a factor of $4\times 3$.
Finally, let's attach the internal legs. The first leg has two places to attach, and the second only has one. So we get a factor of $2$.
Overall, the Dyson series gives us a $\frac{1}{2!}$, and the vertices give us a $\frac{1}{4!4!}$, so the symmetry factor is
$$
\frac{2\times 4 \times 3\times 4\times 3\times 2}{2!4!4!}=\frac{1}{2}
$$
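The counting above can be double-checked with exact rational arithmetic (this only verifies the arithmetic, nothing more):

```python
from fractions import Fraction
from math import factorial

# Contraction count for the dinosaur diagram, as argued above:
# 2 (swap phi_x <-> phi_y) * 4*3 (left legs) * 4*3 (right legs)
# * 2 (internal propagators), divided by 2! (Dyson) and (4!)^2 (vertices).
numerator = 2 * 4 * 3 * 4 * 3 * 2
denominator = factorial(2) * factorial(4) * factorial(4)
print(Fraction(numerator, denominator))  # 1/2
```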
Your mistake was in neglecting the factor of two that comes about from permuting the role of $\phi_x$ and $\phi_y$. | {
"domain": "physics.stackexchange",
"id": 35976,
"tags": "quantum-field-theory, symmetry, feynman-diagrams, correlation-functions, wick-theorem"
} |
What do the colors in gmapping mean? | Question:
I'm pretty new to ROS and I've been trying to do gmapping for turtlebot2 and turtlebot3 on Ubuntu 16.04 with ROS Kinetic. I have been successful; however, I've been trying to figure out what the different colors mean. I've tried googling the answer, but to no avail.
Below are two links to see what I'm talking about.
The first one is an image of turtlebot2 gmapping
and the second one is of turtlebot3 gmapping.
Could anyone explain what the different colors that surround the objects mean for each of the images?
What do the light blue, dark blue, red, and pink colors in the turtlebot2 gmapping image mean?
In the turtlebot3 image, there are light blue colors surrounding the objects, but why are there light blue spots or circles in places that have no objects?
Your answer would be much appreciated.
Originally posted by Orl on ROS Answers with karma: 36 on 2018-12-11
Post score: 0
Original comments
Comment by gvdhoorn on 2018-12-12:
Could you please attach your images directly to the question? I've given you sufficient karma for that.
Thanks.
Comment by Orl on 2018-12-12:
Sure thing. Thanks for the karma.
Answer:
http://wiki.ros.org/costmap_2d
Originally posted by Orl with karma: 36 on 2019-07-09
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by jayess on 2019-07-09:
Link-only answers should really be discouraged. Can you please provide an overview, quote, something to give more information? | {
"domain": "robotics.stackexchange",
"id": 32155,
"tags": "navigation, ros-kinetic, gmapping"
} |
rviz in ROS electric | Question:
Hi,
I have recently installed ROS electric and found out that rviz comes with following error:
NODES
/
rviz (rviz/rviz)
ROS_MASTER_URI=http://192.168.0.101:11311
core service [/rosout] found
process[rviz-1]: started with pid [2976]
[ERROR] [1315474799.086848538]: Caught exception while loading: OGRE EXCEPTION(7:InternalErrorException): Cannot create GL vertex buffer in GLHardwareVertexBuffer::GLHardwareVertexBuffer at /tmp/buildd/ros-electric-visualization-common-1.6.0/debian/ros-electric-visualization-common/opt/ros/electric/stacks/visualization_common/ogre/build/ogre_src_v1-7-1/RenderSystems/GL/src/OgreGLHardwareVertexBuffer.cpp (line 46)
^C[rviz-1] killing on exit
[rviz-1] escalating to SIGTERM
[rviz-1] escalating to SIGKILL
Shutdown errors:
* process[rviz-1, pid 2976]: required SIGKILL. May still be running.
shutting down processing monitor...
... shutting down processing monitor complete
done
I wonder if anyone knows what is the problem.
Originally posted by Reza on ROS Answers with karma: 116 on 2011-09-07
Post score: 3
Original comments
Comment by martimorta on 2011-09-11:
I have Ubuntu 10.04, graphics card: nVidia Corporation G92 [GeForce 9800 GT], ros installed from .deb (synaptic). Furthermore, rviz has worked fine in cturtle and diamondback.
Comment by joq on 2011-09-08:
To help you, we need a lot more information. What graphics hardware do you have? What OS version? X server settings? ROS installed from binary or source? Etc.?
Comment by martimorta on 2011-09-08:
The same happens to me
Answer:
We need more information. Can you please run rviz with the "-l" option to generate an Ogre.log file? Like:
rosrun rviz rviz -l
This will cause rviz to generate an Ogre.log file. When the Ogre 3D library has problems, it puts most of its useful output in Ogre.log (but only if you run with -l).
Also, see the rviz troubleshooting page for info on some common issues and a guide to making useful bug reports.
Originally posted by hersh with karma: 1351 on 2011-09-12
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 6633,
"tags": "rviz, ros-electric"
} |
A VBA Product Dictionary with a Collection of Product Services | Question: I need to extract unique Product Group names, along with their corresponding services, from a table in a worksheet. The table is generated by a bot and is not filtered; I have sorted it alphabetically. The data is not fixed and can contain anywhere from 5 - 100 rows, depending on the month in which the report from the bot is generated.
I decided to use a Dictionary to store the Product Group Name as the Key, while using a Collection to store services. The Collection only stores unique services by using On Error Resume Next.
What changes could I make to my code?
Snippet of my Table
Code
Public Sub BuildTMProductDictionary()
Dim tmData As Variant
tmData = Sheet1.ListObjects("Table1").DataBodyRange.Value
Dim i As Long
For i = LBound(tmData, 1) To UBound(tmData, 1)
Dim product As String
product = tmData(i, 1)
'store unique services in a collection, On Error Resume Next used to avoid duplicates
On Error Resume Next
Dim services As New Collection
services.Add (tmData(i, 2)), (tmData(i, 2))
'get the product name of the next row
Dim nextProduct As String
nextProduct = tmData(i + 1, 2)
'compare the current product against the next product create New Dictionary if <>
If product <> nextProduct Then
Dim productGroup As New Dictionary
productGroup.Add product, services
Set services = New Collection
End If
Next
End Sub
Edit
My Collection of services needs to be unique. As an example "Positive Pay" which belong to the "ARP" product group should only appear once in the collection.
Answer: You seem to be misunderstanding how to use a Scripting.Dictionary.
There is no need to sort the data before processing into a dictionary.
There is also no need to construct a collection before you add to the dictionary.
It's also slightly more sensible to write the sub as a function.
As a final tweak I'd pass the array in as a parameter rather than hardwiring it into the function, but I'll leave that as an exercise for the reader (smile)
Public Function BuildTMProductDictionary() As Scripting.Dictionary
Dim tmData As Variant
tmData = Sheet1.ListObjects("Table1").DataBodyRange.Value
Dim myDict As Scripting.Dictionary
Set myDict = New Scripting.Dictionary
Dim i As Long
For i = LBound(tmData, 1) To UBound(tmData, 1)
Dim myProduct As String
myProduct = tmData(i, 1)
Dim myService As String
myService = tmData(i, 2)
If Not myDict.exists(myProduct) Then
myDict.Add myProduct, New Collection
End If
myDict.Item(myProduct).Add myService
Next
Set BuildTMProductDictionary = myDict
End Function
Replace
If Not myDict.exists(myProduct) Then
myDict.Add myProduct, New Collection
End If
myDict.Item(myProduct).Add myService
with
If Not myDict.exists(myProduct) Then
myDict.Add myProduct, New Scripting.Dictionary
End If
If Not myDict.Item(myProduct).exists(myService) Then
myDict.Item(myProduct).Add myService,myService
End If | {
"domain": "codereview.stackexchange",
"id": 38874,
"tags": "vba, excel"
} |
Two "Robertson-Walker observers," velocity of baseball as seen by second observer right before it's caught? | Question: The spacetime metric of a spatially flat ($k = 0$) radiation dominated FLRW universe is given by$$ds^2 = -dT^2 + T[dx^2 + dy^2 + dz^2].$$Consider two "Robertson-Walker observers," i.e., observers with $4$-velocity $(\partial/\partial T)^a$. At time $T = T_1$, the first observer throws a baseball toward the second with velocity $v_1$. The baseball is caught by the second observer at time $T = T_2$.
Now, I am wondering, what is the velocity, $v_2$, of the baseball as seen by the second observer just before it is caught?
Note that $v_1$ and $v_2$ are the physical velocities of the baseball (as would be measured, e.g., by a "radar gun"), not a "coordinate speed" (such as "$dx/dT$"). We are not assuming here that $v_1$, $v_2 \ll c$.
Answer: Throughout the question I will use $p(T_1)$ and $p(T_2)$ to denote the 4-momentum of the baseball at times $T_1$ and $T_2$, $\mathbf{v}_1$ and $\mathbf{v}_2$ to represent the spatial component of its physical velocity, and $a(T_1)$ and $a(T_2)$ to represent the scale factor of the Universe at these times.
The homogeneity and isotropy of the Universe mean that no matter what direction the baseball is thrown in by a comoving observer, it will follow a geodesic in FRW spacetime, which is a 'radial' trajectory in the sense that
\begin{equation}
ds^2 = -dT^2 + a^2(T) \: d\chi^2,
\end{equation}
and
\begin{equation}
\dot{p}_{\chi} = 0,
\end{equation}
where $\chi$ is the FRW radial co-ordinate such that $d\chi = dr/\sqrt{1-Kr^2}$ for comoving curvature $K$, and $p_{\chi}$ is the component of the baseball's 4-momentum in this direction. The dot denotes the derivative w.r.t. proper time.
Mathematically, this condition on $p_{\chi}$ can be seen by lowering indices on the geodesic equation $\dot{p}^a+\Gamma^a_{bc}p^bp^c=0$ and relabelling dummy indices to obtain
\begin{equation}
\dot{p}_a = \frac{1}{2}(\partial_a g_{bc})p^bp^c.
\end{equation}
Since the metric here is independent of $\chi$, we see that $p_{\chi}$ is constant along the geodesic.
Intuitively, since the Universe is expanding away from every point, it is expanding away from observer 1 in all directions, so every direction of throw corresponds to a radial trajectory.
With this knowledge, we want to formulate the problem in terms of covariant components of the momentum, so we use the mass-shell (normalization) condition for the massive baseball at each of the two times,
\begin{equation}
g^{\mu \nu}p_{\mu} p_{\nu} = -m^2 = -p_T^2(T_1) + \frac{1}{a^2(T_1)} p_{\chi}^2
\end{equation}
\begin{equation}
-m^2 = -p_T^2(T_2) + \frac{1}{a^2(T_2)} p_{\chi}^2.
\end{equation}
The baseball is not assumed to be slow-moving, so using the special-relativistic mass-shell condition $E^2 = m^2+|\mathbf{p}|^2$ as measured locally by each comoving observer, we get
\begin{equation}
m^2 = p_T^2(T_1) - |\mathbf{p}_1|^2
\end{equation}
\begin{equation}
m^2 = p_T^2(T_2) - |\mathbf{p}_2|^2.
\end{equation}
Substituting these expressions for $m^2$ into the normalization conditions, cancelling the $p_T^2$ terms, and taking the ratio of the two resulting equations then gives
\begin{equation}
\frac{|\mathbf{p}_2|^2}{|\mathbf{p}_1|^2} = \frac{a^2(T_1)\, p_{\chi}^2(T_2)}{a^2(T_2)\, p_{\chi}^2(T_1)}.
\end{equation}
But as previously discussed, $p_{\chi}$ is conserved along the geodesic, and so these factors cancel! Finally, since the mass is conserved, we can use $|\mathbf{p}| = \gamma m |\mathbf{v}|$ to write the spatial momenta in terms of the spatial velocities as
\begin{equation}
\frac{\gamma_1 |\mathbf{v}_1|}{\gamma_2 |\mathbf{v}_2|} = \frac{a(T_2)}{a(T_1)}.
\end{equation}
This gives $|\mathbf{v}_2|$ in terms of $|\mathbf{v}_1|$ as required.
This picture of the time-sliced Universe should help to visualise the situation. The red lines are the comoving observers, the blue line is the trajectory of the baseball, and the black arrows are the spatial components of the velocity of the baseball at times $T_1$ and $T_2$. | {
"domain": "physics.stackexchange",
"id": 29799,
"tags": "general-relativity, cosmology, differential-geometry"
} |
Are the nonphysical degrees of freedom in Yang-Mills theory analogous to the worldsheet metric in the Polyakov formalism? | Question: The Polyakov string action on a flat background (in the Euclidean signature)
$$S_{P}[X,\gamma]\propto\int_{\Sigma}\mathrm{d}^2\sigma\,\sqrt{\text{det}\gamma}\,\gamma^{ab}\delta_{\mu\nu}\partial_{a}X^{\mu}\partial_{b}X^{\nu}$$
enjoys a huge gauge redundancy consisting of diffeomorphisms and Weyl transformations of the world-sheet metric. These symmetries are a consequence of the fact that the world-sheet metric is not a true degree of freedom, and they decouple in the classical theory. After "integrating out" the extra degrees of freedom, we are left with the original Nambu-Goto action
$$S_{NG}[X]\propto\int_{\Sigma}\mathrm{d}^2\sigma\sqrt{\det_{ab}\left(\delta_{\mu\nu}\partial_{a}X^{\mu}\partial_{b}X^{\nu}\right)},$$
which calculates the area of the worldsheet $\Sigma$ given the induced world-sheet metric $\delta_{\mu\nu}\partial_aX^{\mu}\partial_bX^{\nu}$. This action enjoys none of the original "gauge" symmetries, as the nonphysical degrees of freedom don't exist. However, we always use the Polyakov path integral in quantization because the Nambu-Goto action is nearly impossible to quantize using path integration.
This got me to thinking about Yang-Mills theory, where the action
$$S_{YM}[A]=\frac{1}{2g_{YM}^2}\int_{\mathcal{M}}\text{Tr}[F\wedge\star F]$$
enjoys a gauge symmetry. However, due to its quadratic form, it is easy to quantize in the weak-coupling limit (after Faddeev-Popov gauge-fixing is implemented, that is).
My question is, then, is there a nonlinear action that can be obtained after "integrating out" the nonphysical polarizations of the Yang-Mills field $A$, in analogy to how the Nambu-Goto action is obtained from the Polyakov action? If so, might this lead to generalizations of Yang-Mills theory, in the same way that the Nambu-Goto action can be naturally generalized to worldvolume actions of higher-dimensional extended objects?
Answer: Never say never, but it looks very unlikely that there is a Nambu-Goto-analogous action free of gauge redundancy that is equivalent to the Yang-Mills action. Furthermore, generalizations of YM theory are already known, so there is little incentive to find this certainly much more inconvenient action.
The analogy already breaks at the very first step: The Polyakov action has two proper tensor fields it depends on. Eliminating one of them is a perfectly covariant goal. But the Yang-Mills action depends on a single tensor field, the gauge potential. The physical degrees of freedom do not form a proper tensor, they are peculiar combinations of components of the gauge potential. Therefore, a covariant "reduced" action seems impossible.
A generalization of Yang-Mills theory to higher dimensional objects is already known: Higher gauge theories involving a $p$-form instead of a $1$-form as the gauge potential appear commonly among many SUGRA theories, for instance as the Ramond-Ramond field of type II SUGRA or the C-field of 11d SUGRA. | {
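To make the last statement concrete (a sketch added for illustration, not part of the original answer), the abelian version of such a higher gauge theory replaces the $1$-form potential with a $p$-form $C_p$:

```latex
% Abelian p-form gauge theory: field strength and action
F_{p+1} = \mathrm{d}C_p, \qquad
S[C_p] = \frac{1}{2g^2}\int_{\mathcal{M}} F_{p+1}\wedge\star F_{p+1},
% invariant under the higher gauge transformation
C_p \;\mapsto\; C_p + \mathrm{d}\Lambda_{p-1}.
```

For $p=1$ this reduces to Maxwell theory, and such fields couple electrically to $p$-dimensional extended objects via $\int C_p$ over their worldvolumes, which is how the Ramond-Ramond fields mentioned above couple to D-branes.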
"domain": "physics.stackexchange",
"id": 52073,
"tags": "string-theory, gauge-theory, gauge-invariance, yang-mills"
} |
Inspector Rubberduck | Question: Our Rubberduck open-source VBE add-in project is coming along nicely. One of the main features we're going to be implementing in the next release, is code inspections.
I started by defining an abstraction:
namespace Rubberduck.Inspections
{
[ComVisible(false)]
public enum CodeInspectionSeverity
{
DoNotShow,
Hint,
Suggestion,
Warning,
Error
}
[ComVisible(false)]
public enum CodeInspectionType
{
MaintainabilityAndReadabilityIssues,
CodeQualityIssues
}
/// <summary>
/// An interface that abstracts a code inspection.
/// </summary>
[ComVisible(false)]
public interface IInspection
{
/// <summary>
/// Gets a short description for the code inspection.
/// </summary>
string Name { get; }
/// <summary>
/// Gets a short message that describes how a code issue can be fixed.
/// </summary>
string QuickFixMessage { get; }
/// <summary>
/// Gets a value indicating the type of the code inspection.
/// </summary>
CodeInspectionType InspectionType { get; }
/// <summary>
/// Gets a value indicating the severity level of the code inspection.
/// </summary>
CodeInspectionSeverity Severity { get; }
/// <summary>
/// Gets/sets a value indicating whether the inspection is enabled or not.
/// </summary>
bool IsEnabled { get; set; }
/// <summary>
/// Runs code inspection on specified tree node (and child nodes).
/// </summary>
/// <param name="node">The <see cref="SyntaxTreeNode"/> to analyze.</param>
/// <returns>Returns inspection results, if any.</returns>
IEnumerable<CodeInspectionResultBase> Inspect(SyntaxTreeNode node);
}
}
Out of necessity came a CodeInspection base class to implement it:
namespace Rubberduck.Inspections
{
[ComVisible(false)]
public abstract class CodeInspection : IInspection
{
protected CodeInspection(string name, string message, CodeInspectionType type, CodeInspectionSeverity severity)
{
_name = name;
_message = message;
_inspectionType = type;
Severity = severity;
}
private readonly string _name;
public string Name { get { return _name; } }
private readonly string _message;
public string QuickFixMessage { get { return _message; } }
private readonly CodeInspectionType _inspectionType;
public CodeInspectionType InspectionType { get { return _inspectionType; } }
public CodeInspectionSeverity Severity { get; set; }
public bool IsEnabled { get; set; }
/// <summary>
/// Inspects specified tree node, searching for code issues.
/// </summary>
/// <param name="node"></param>
/// <returns></returns>
public abstract IEnumerable<CodeInspectionResultBase> Inspect(SyntaxTreeNode node);
}
}
The first implementation I wrote was ObsoleteCommentSyntaxInspection, which looks for usages of the obsolete Rem keyword in comments:
namespace Rubberduck.Inspections
{
[ComVisible(false)]
public class ObsoleteCommentSyntaxInspection : CodeInspection
{
/// <summary>
/// Parameterless constructor required for discovery of implemented code inspections.
/// </summary>
public ObsoleteCommentSyntaxInspection()
: base("Use of obsolete Rem comment syntax",
"Replace Rem reserved keyword with single quote.",
CodeInspectionType.MaintainabilityAndReadabilityIssues,
CodeInspectionSeverity.Suggestion)
{
}
public override IEnumerable<CodeInspectionResultBase> Inspect(SyntaxTreeNode node)
{
var comments = node.FindAllComments();
var remComments = comments.Where(instruction => instruction.Value.Trim().StartsWith(ReservedKeywords.Rem));
return remComments.Select(instruction => new ObsoleteCommentSyntaxInspectionResult(Name, instruction, Severity, QuickFixMessage));
}
}
}
When an IInspection implementation finds an Instruction noteworthy, it creates an instance of a class derived from CodeInspectionResultBase:
namespace Rubberduck.Inspections
{
[ComVisible(false)]
public abstract class CodeInspectionResultBase
{
public CodeInspectionResultBase(string inspection, Instruction instruction, CodeInspectionSeverity type, string message)
{
_name = inspection;
_instruction = instruction;
_type = type;
_message = message;
}
private readonly string _name;
/// <summary>
/// Gets a string containing the name of the code inspection.
/// </summary>
public string Name { get { return _name; } }
private readonly Instruction _instruction;
/// <summary>
/// Gets the <see cref="Instruction"/> containing a code issue.
/// </summary>
public Instruction Instruction { get { return _instruction; } }
private readonly CodeInspectionSeverity _type;
/// <summary>
/// Gets the severity of the code issue.
/// </summary>
public CodeInspectionSeverity Severity { get { return _type; } }
private readonly string _message;
/// <summary>
/// Gets a short message that describes how the code issue can be fixed.
/// </summary>
public string Message { get { return _message; } }
/// <summary>
/// Addresses the issue by making changes to the code.
/// </summary>
/// <param name="vbe"></param>
public abstract void QuickFix(VBE vbe);
}
}
Having this base class means the QuickFix method is the only thing left to implement:
namespace Rubberduck.Inspections
{
public class ObsoleteCommentSyntaxInspectionResult : CodeInspectionResultBase
{
public ObsoleteCommentSyntaxInspectionResult(string inspection, Instruction instruction, CodeInspectionSeverity type, string message)
: base(inspection, instruction, type, message)
{
}
public override void QuickFix(VBE vbe)
{
var location = vbe.FindInstruction(Instruction);
int index;
if (!Instruction.Line.Content.HasComment(out index)) return;
var line = Instruction.Line.Content.Substring(0, index) + "'" + Instruction.Comment.Substring(ReservedKeywords.Rem.Length);
location.CodeModule.ReplaceLine(location.Selection.StartLine, line);
}
}
}
We're planning to implement a good number of such code inspections - is this a good, maintainable, extensible way to go about it?
The available inspections are loaded at startup, like this:
_inspections = Assembly.GetExecutingAssembly()
.GetTypes()
.Where(type => type.BaseType == typeof(CodeInspection))
.Select(type =>
{
var constructor = type.GetConstructor(Type.EmptyTypes);
return constructor != null ? constructor.Invoke(Type.EmptyTypes) : null;
})
.Where(inspection => inspection != null)
.Cast<IInspection>()
.ToList();
This way as we implement them, they're automagically effective.
I'm hoping that this design will help us fully implement this feature faster, by writing only the code that needs to be written. Hence I'm interested in extensibility, maintainability and readability in general - but also in how I've put these abstractions into use, in the CodeInspectionsDockablePresenter class:
[ComVisible(false)]
public class CodeInspectionsDockablePresenter : DockablePresenterBase
{
private readonly Parser _parser;
private CodeInspectionsWindow Control { get { return UserControl as CodeInspectionsWindow; } }
private readonly IList<IInspection> _inspections;
public CodeInspectionsDockablePresenter(Parser parser, IEnumerable<IInspection> inspections, VBE vbe, AddIn addin)
: base(vbe, addin, new CodeInspectionsWindow())
{
_parser = parser;
_inspections = inspections.ToList();
Control.RefreshCodeInspections += OnRefreshCodeInspections;
Control.NavigateCodeIssue += OnNavigateCodeIssue;
}
private void OnNavigateCodeIssue(object sender, NavigateCodeIssueEventArgs e)
{
var location = VBE.FindInstruction(e.Instruction);
location.CodeModule.CodePane.SetSelection(location.Selection);
}
private void OnRefreshCodeInspections(object sender, EventArgs e)
{
var code = _parser.Parse(VBE.ActiveVBProject);
var results = new List<CodeInspectionResultBase>();
foreach (var inspection in _inspections.Where(inspection => inspection.IsEnabled))
{
var result = inspection.Inspect(code).ToArray();
if (result.Length != 0)
{
results.AddRange(result);
}
}
DrawResultTree(results);
}
private void DrawResultTree(IEnumerable<CodeInspectionResultBase> results)
{
var tree = Control.CodeInspectionResultsTree;
tree.Nodes.Clear();
foreach (var result in results.OrderBy(r => r.Severity))
{
var node = new TreeNode(result.Name);
node.ToolTipText = result.Instruction.Content;
node.Tag = result.Instruction;
tree.Nodes.Add(node);
}
}
}
Nevermind DrawResultTree, it's there just because I wanted to see something in the dockable window; it's mostly OnRefreshCodeInspections I'm interested in - are there any obvious (or not) optimizations that could be operated in the looping?
Answer: Having worked on a similar system, I came up with a very similar design. So either we're both doing something right, or we're both doing it wrong :)
Here are some things I would change, but obviously different requirements call for different decisions and may not all be applicable here.
Remove CodeInspection class
I don't think you gain much from having a base class here. Now your inspections look more like this
public class ObsoleteCommentSyntaxInspection : IInspection
{
public string Name
{
get { return Properties.Resources.ObsoleteCommentSyntaxInspectionName; }
}
public CodeInspectionType InspectionType
{
get { return CodeInspectionType.MaintainabilityAndReadabilityIssues; }
}
public CodeInspectionSeverity Severity
{
get { return CodeInspectionSeverity.Suggestion; }
}
...
Remove IsEnabled property
I think enabling/disabling inspections should be handled at a different level. Especially if you want something more complex like a project-specific settings file overriding a global one, something like an InspectionSettings class would be good here.
Inspections might have more than one fix (or none)
Consider returning an IEnumerable<string> for quick fixes.
Rename Inspect
The return type is IEnumerable<CodeInspectionResultBase>, so I would recommend something like GetInspectionResults. | {
"domain": "codereview.stackexchange",
"id": 10862,
"tags": "c#, api, interface, com, rubberduck"
} |
SDL/C++ High-Low Guessing Game | Question: I just recently finished going through all of learncpp.com's and Lazy Foo's SDL2 tutorials. This is my first program written outside of the tutorials so I assume there's plenty of room for improvement. Any constructive criticism for what can be improved would be greatly appreciated.
The gist of the game is that it generates a random number, and the player must guess what it is. If they guess correctly, they win, if not they are told whether the guess is too high or too low and allowed to guess again for a predetermined amount of guesses.
#include <SDL.h>
#include <SDL_image.h>
#include <iostream>
#include <random>
#include <SDL_ttf.h>
#include <string>
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
SDL_Window* gWindow = NULL;
SDL_Renderer* gRenderer = NULL;
TTF_Font* gFont;
SDL_Color textColor = { 0,0,0 };
const int min = 1;
int max;
int numberofGuesses;
int randomNumber;
bool quit = false;
bool willPlayAgain = false;
int guessCount = 0;
bool menuPressed = false;
void mainMenu();
int getRandomNumber(int x, int y)
{
std::random_device rd;
std::mt19937 mersenne(rd());
std::uniform_int_distribution<> number(x, y);
int rng = number(mersenne);
return rng;
}
class LTexture
{
public:
LTexture();
~LTexture();
void free();
void loadfromSurface(std::string path);
void loadfromText(std::string text, SDL_Color);
void render(int x, int y);
SDL_Texture * mTexture;
int mWidth;
int mHeight;
SDL_Rect mButton;
};
LTexture::LTexture()
{
mTexture = NULL;
mWidth = 0;
mHeight = 0;
}
LTexture::~LTexture()
{
SDL_DestroyTexture(mTexture);
mTexture = NULL;
mWidth = 0;
mHeight = 0;
}
void LTexture::loadfromSurface(std::string path)
{
SDL_Surface *surface = IMG_Load(path.c_str());
mTexture = SDL_CreateTextureFromSurface(gRenderer, surface);
mWidth = surface->w;
mHeight = surface->h;
}
void LTexture::loadfromText(std::string text, SDL_Color color)
{
free();
SDL_Surface* textSurface = TTF_RenderText_Blended_Wrapped(gFont, text.c_str(), color, 250);
mTexture = SDL_CreateTextureFromSurface(gRenderer, textSurface);
mWidth = textSurface->w;
mHeight = textSurface->h;
SDL_FreeSurface(textSurface);
textSurface = NULL;
}
void LTexture::render(int x, int y)
{
SDL_Rect destRect = { x, y, mWidth, mHeight };
SDL_RenderCopy(gRenderer, mTexture, NULL, &destRect);
//create a rectangle that coincides with texture to check for button presses
mButton = { x, y, mWidth, mHeight };
}
void LTexture::free()
{
SDL_DestroyTexture(mTexture);
mTexture = NULL;
}
//declare the Textures I will be using
LTexture yesButton;
LTexture noButton;
LTexture tenButton;
LTexture hundredButton;
LTexture thousandButton;
LTexture highLowTexture;
LTexture menuTexture;
void buttonPress(SDL_Event &e, SDL_Rect &button, int buttonNum)
{
int x, y;
SDL_GetMouseState(&x, &y);
//if mouse is not inside of the button, go back
if (x < button.x || x > button.x + button.w || y < button.y || y > button.y + button.h)
{
return;
}
else
{
if (e.type == SDL_MOUSEBUTTONDOWN)
{
//if yesButton
if (buttonNum == 0)
{
willPlayAgain = true;
guessCount = 0;
mainMenu();
}
//if noButton
if (buttonNum == 1)
{
quit = true;
}
//if 1-10 Button
if (buttonNum == 2)
{
numberofGuesses = 5;
max = 10;
menuPressed = true;
randomNumber = getRandomNumber(min, max);
//used to make sure game works correctly
std::cout << randomNumber;
}
//if 1-100 Button
if (buttonNum == 3)
{
numberofGuesses = 7;
max = 100;
menuPressed = true;
randomNumber = getRandomNumber(min, max);
std::cout << randomNumber;
}
//if 1-1000 Button
if (buttonNum == 4)
{
numberofGuesses = 9;
max = 1000;
menuPressed = true;
randomNumber = getRandomNumber(min, max);
std::cout << randomNumber;
}
}
}
}
int compare(int randomNumber, int guess)
{
if (randomNumber == guess)
{
return 0;
}
//if player has run out of guesses
else if (guessCount == numberofGuesses)
{
return 3;
}
else if (randomNumber < guess)
{
return 1;
}
else if (randomNumber > guess)
{
return 2;
}
}
void playAgain(int x)
{
willPlayAgain = false;
SDL_Event e;
while (!quit && !willPlayAgain)
{
while (SDL_PollEvent(&e) != 0)
{
if (e.type == SDL_QUIT)
{
quit = true;
}
buttonPress(e, yesButton.mButton, 0);
buttonPress(e, noButton.mButton, 1);
}
std::string dialogue;
if (x == 1)
{
dialogue = "YOU WON!!! The correct answer was " + std::to_string(randomNumber) + ".";
}
else
{
dialogue = "You lose. The correct answer was " + std::to_string(randomNumber) + ".";
}
SDL_RenderClear(gRenderer);
highLowTexture.render(0, 0);
LTexture winlose;
winlose.loadfromText(dialogue, textColor);
winlose.render(335, 70);
LTexture playAgain;
playAgain.loadfromText("Play again?", textColor);
playAgain.render(325, 300);
yesButton.render(300, 350);
noButton.render(300 + yesButton.mWidth + 10, 350);
SDL_RenderPresent(gRenderer);
}
}
void renderScene(std::string guessInput, int compare)
{
std::string dialogue;
//starting dialogue
if (guessCount == 0)
{
dialogue = "I'm thinking of a number between " + std::to_string(min) + " and " + std::to_string(max) + ". You have " + std::to_string(numberofGuesses) + " guesses.";
}
//if answer is correct
else if (compare == 0)
{
//1 indicates has won
playAgain(1);
return;
}
else if (compare == 1)
{
dialogue = "Your guess was too high.";
}
else if (compare == 2)
{
dialogue = "Your guess was too low.";
}
// if ran out of guesses
else if (compare == 3)
{
// 0 indicates has lost
playAgain(0);
return;
}
SDL_RenderClear(gRenderer);
highLowTexture.render(0, 0);
LTexture bubbleText;
bubbleText.loadfromText(dialogue, textColor);
bubbleText.render(335, 70);
LTexture guessPrompt;
guessPrompt.loadfromText("Enter a number:", textColor);
guessPrompt.render(350, 250);
LTexture guessCounter;
guessCounter.loadfromText("Guesses remaining: " + std::to_string(numberofGuesses - guessCount), textColor);
guessCounter.render(350, 200);
LTexture inputTexture;
//if input is not empty
if (guessInput != "")
{
inputTexture.loadfromText(guessInput, textColor);
}
//else add a space so it can render
else
{
inputTexture.loadfromText(" ", textColor);
}
inputTexture.render(350 + guessPrompt.mWidth, 250);
SDL_RenderPresent(gRenderer);
}
void gameLoop()
{
SDL_Event e;
//start with empty string
std::string guessInput = " ";
//comparison code to indicate which text is generated.
int comparison = 0;
SDL_StartTextInput();
while (!quit)
{
while (SDL_PollEvent(&e) != 0)
{
if (e.type == SDL_QUIT)
{
quit = true;
}
if (e.type == SDL_TEXTINPUT)
{
//if input is a numeric value, add to string.
if (e.text.text[0] == '0' || e.text.text[0] == '1' || e.text.text[0] == '2' || e.text.text[0] == '3' || e.text.text[0] == '4' ||
e.text.text[0] == '5' || e.text.text[0] == '6' || e.text.text[0] == '7' || e.text.text[0] == '8' || e.text.text[0] == '9')
{
guessInput += e.text.text;
}
}
if (e.type == SDL_KEYDOWN)
{
if (e.key.keysym.sym == SDLK_RETURN || e.key.keysym.sym == SDLK_KP_ENTER)
{
//if input is not empty
if (guessInput != " ")
{
//convert string to int
int input = stoi(guessInput);
//reset string
guessInput = " ";
//update counter
++guessCount;
//compare guess with generated number
comparison = compare(randomNumber, input);
}
}
else if (e.key.keysym.sym == SDLK_BACKSPACE && guessInput.length() > 0)
{
guessInput.pop_back();
}
}
}
renderScene(guessInput, comparison);
}
SDL_StopTextInput();
}
void mainMenu()
{
menuPressed = false;
while (!quit && !menuPressed)
{
SDL_Event e;
while (SDL_PollEvent(&e) != 0)
{
if (e.type == SDL_QUIT)
{
quit = true;
}
//check for each button and give number to indicate which button was pressed.
buttonPress(e, tenButton.mButton, 2);
buttonPress(e, hundredButton.mButton, 3);
buttonPress(e, thousandButton.mButton, 4);
}
SDL_SetRenderDrawColor(gRenderer, 0xFF, 0xFF, 0xFF, 0xFF);
SDL_RenderClear(gRenderer);
menuTexture.render(0, 0);
; tenButton.render((SCREEN_WIDTH / 2) - (tenButton.mWidth / 2), 175);
hundredButton.render((SCREEN_WIDTH / 2) - (tenButton.mWidth / 2), 225);
thousandButton.render((SCREEN_WIDTH / 2) - (tenButton.mWidth / 2), 275);
SDL_RenderPresent(gRenderer);
}
}
//create window, renderer, etc.
void init()
{
SDL_Init(SDL_INIT_VIDEO);
gWindow = SDL_CreateWindow("HiLo", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN);
gRenderer = SDL_CreateRenderer(gWindow, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
IMG_Init(IMG_INIT_PNG);
TTF_Init();
}
//loadTextures and font.
void loadMedia()
{
highLowTexture.loadfromSurface("Resources/HiLo.png");
yesButton.loadfromSurface("Resources/HiLoYes.png");
noButton.loadfromSurface("Resources/HiLoNo.png");
tenButton.loadfromSurface("Resources/HiLo10.png");
hundredButton.loadfromSurface("Resources/HiLo100.png");
thousandButton.loadfromSurface("Resources/HiLo1000.png");
menuTexture.loadfromSurface("Resources/HiLoMenu.png");
gFont = TTF_OpenFont("Resources/opensans.ttf", 20);
}
void close()
{
highLowTexture.free();
noButton.free();
yesButton.free();
tenButton.free();
hundredButton.free();
thousandButton.free();
menuTexture.free();
TTF_CloseFont(gFont);
gFont = NULL;
SDL_DestroyWindow(gWindow);
gWindow = NULL;
SDL_DestroyRenderer(gRenderer);
gRenderer = NULL;
TTF_Quit();
IMG_Quit();
SDL_Quit();
}
int main(int argc, char* args[])
{
init();
loadMedia();
mainMenu();
gameLoop();
close();
return 0;
}
For reference, if it makes it easier to understand what each texture represents:
highLowTexture is just a background texture with a stick figure and a speech bubble to render text to
menuTexture is a background that just says "HighLow"
yesButtons/noButtons just say yes or no
ten/hundred/thousandButtons are rectangles that say "1-(10/100/1000). (5/7/9) guesses." So that the player may change the difficulty.
I also have one specific question. On Lazy Foo's tutorials everything done is on one file. When is it recommended to make use of other cpp or header files? Maybe for this simple of a game it's fine, but when I get into more complicated games, I would assume it gets to be fairly overwhelming to have the entire game in one file.
Answer: Definitely not bad for a first attempt! Let's start at the top.
#include <SDL.h>
#include <SDL_image.h>
#include <iostream>
#include <random>
#include <SDL_ttf.h>
#include <string>
It's generally wise to organize your includes for several reasons, not least being the ability to see what you've included and what you haven't clearly. I generally like to group standard includes, then system includes, then project includes, and sort each group in itself. So:
#include <iostream>
#include <random>
#include <string>
#include <SDL.h>
#include <SDL_image.h>
#include <SDL_ttf.h>
Next up is the globals. Global variables are problematic. A lot of tutorials are sloppy and use them, but they cause endless headaches in real-world code.
Getting rid of the globals is not going to be easy. It's going to require a complete restructuring of your code. I'll make suggestions bit by bit as the review goes along.
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
The modern way to declare constants is with constexpr, not const. This makes them not only constant, but also compile-time constant.
Unlike all the other globals, these are constants, so it's not a problem if they're global. Everything that follows, though, should not be global.
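A minimal sketch of that change (using the constant names from the code above):

```cpp
// constexpr constants are guaranteed compile-time values, so they can be
// used anywhere a constant expression is required (array bounds, etc.).
constexpr int SCREEN_WIDTH = 640;
constexpr int SCREEN_HEIGHT = 480;

// Checked entirely at compile time; the build fails if it is false.
static_assert(SCREEN_WIDTH * 3 == SCREEN_HEIGHT * 4,
              "expected a 4:3 screen");
```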
SDL_Window* gWindow = NULL;
SDL_Renderer* gRenderer = NULL;
You should never use NULL in modern C++. What you want here is nullptr.
int getRandomNumber(int x, int y)
{
std::random_device rd;
std::mt19937 mersenne(rd());
std::uniform_int_distribution<> number(x, y);
int rng = number(mersenne);
return rng;
}
This function has a major problem: the random generator is constructed every time it's called. That ruins any guarantee of randomness.
The way to solve the problem is to use function static variables. They will get initialized once, the first time the function is called.
You should start by pulling the random number engine out of the function, so it can be reused by other things that need randomness. That might look like:
std::mt19937& random_engine()
{
static std::mt19937 mersenne{std::random_device{}()};
return mersenne;
}
To get what's going on there: first a std::random_device is constructed (std::random_device{}), then it is used to generate a seed (std::random_device{}()), and that is used to construct the Mersenne twister mersenne (mersenne{std::random_device{}()}), which is stored as a function static variable.
All of that happens only once, the first time random_engine() is called.
Once you have that, you can make your getRandomNumber() with it:
int getRandomNumber(int x, int y)
{
std::uniform_int_distribution<> dist{x, y};
return dist(random_engine());
}
Next up is class LTexture. It's excellent that you created a class for your textures. However, there is a crucial error with how you did it. The reason is very technical, and explained in great detail here.
Basically what you need the LTexture class to look like is this:
class LTexture
{
public:
// Note that the default constructor can now be defaulted.
LTexture() = default;
~LTexture();
void free();
void loadfromSurface(std::string path);
void loadfromText(std::string text, SDL_Color);
void render(int x, int y);
// You need to initialize *AT LEAST* mTexture. The others are
// optional (but it's a good idea to initialize everything anyway).
SDL_Texture* mTexture = nullptr;
int mWidth = 0;
int mHeight = 0;
SDL_Rect mButton = {};
// Copy operations must be deleted.
LTexture(LTexture const&) = delete;
LTexture& operator=(LTexture const&) = delete;
// Move operations must be defined.
LTexture(LTexture&&) noexcept;
LTexture& operator=(LTexture&&) noexcept;
// And you need a swap function.
friend void swap(LTexture&, LTexture&) noexcept;
};
First let's define the swap. It's pretty simple... just swap everything:
void swap(LTexture& a, LTexture& b) noexcept
{
using std::swap;
swap(a.mTexture, b.mTexture);
swap(a.mWidth, b.mWidth);
swap(a.mHeight, b.mHeight);
swap(a.mButton, b.mButton);
}
Once you have swap, the move operations are trivial:
LTexture::LTexture(LTexture&& other) noexcept
{
using std::swap;
swap(*this, other);
}
LTexture& LTexture::operator=(LTexture&& other) noexcept
{
using std::swap;
swap(*this, other);
return *this;
}
Those things will fix the critical problems with the texture class so it's safe. From this point on, it's all about improving the design.
void free();
Properly written C++ classes should generally not have any free() functions, or close() functions, or cleanup() functions, or anything like that. That's what the destructor is for.
In fact, you delete the texture twice... once in free(), and once in the destructor. SDL may tolerate sloppy programming like that - or you may have just got lucky and your program didn't crash, but it's wrong in any case. You don't need the free() function at all. (Well, as your code is currently written you do. But we'll work on fixing that.)
void loadfromSurface(std::string path);
void loadfromText(std::string text, SDL_Color);
Another problem with your LTexture class is that the constructor doesn't actually construct a texture. You are using a technique called "two-phase initialization" - first you construct, then you init (using loadfromSurface() or loadfromText()). Between those two phases, the object is in a half-broken state. This is bad practice.
Instead, these functions should be constructors:
LTexture(std::string path);
LTexture(std::string text, SDL_Color);
and the default constructor should be removed.
Now, these constructors aren't great, because their names aren't very clear. What you can do is use "tags", like this:
struct from_surface_tag {} from_surface;
struct from_text_tag {} from_text;
LTexture(from_surface_tag, std::string path);
LTexture(from_text_tag, std::string text, SDL_Color);
Now you construct a texture from a surface like this:
auto highLowTexture = LTexture{from_surface, "Resources/HiLo.png"};
and from text like this:
auto playAgain = LTexture{from_text, "Play again?", textColor};
It is now basically impossible to use LTexture wrong. You can't construct it then forget to initialize it. You can't forget to free it. You can't free it multiple times.
That is what a good, modern C++ type looks like.
LTexture::LTexture()
{
mTexture = NULL;
mWidth = 0;
mHeight = 0;
}
LTexture::~LTexture()
{
SDL_DestroyTexture(mTexture);
mTexture = NULL;
mWidth = 0;
mHeight = 0;
}
If you follow the advice above, you don't need the default constructor. In fact, you shouldn't have one.
As for the destructor, if you read the blog post about the universal resource class pattern, you know that you need to check mTexture for nullptr before calling SDL_DestroyTexture(). Other than that, there's no real point to setting everything to null and zero. It's just wasting cycles for no purpose.
So the above two functions become:
LTexture::~LTexture()
{
if (mTexture)
SDL_DestroyTexture(mTexture);
}
That's all you need.
void LTexture::loadfromSurface(std::string path)
{
SDL_Surface *surface = IMG_Load(path.c_str());
mTexture = SDL_CreateTextureFromSurface(gRenderer, surface);
mWidth = surface->w;
mHeight = surface->h;
}
The first thing that bothers me here is that you do no error checking. Failing to load images is a very common failure! It's something you should check for.
The second problem is that you use gRenderer, which is a global. Globals are bad, so this function should take the renderer as an argument.
And you take the string by value, even though you only read it with c_str(). That's very wasteful. You should take it by const&.
As mentioned above, this should be a tagged constructor, so, put all together, it becomes:
LTexture::LTexture(from_surface_tag, SDL_Renderer* renderer, std::string const& path)
{
auto surface = IMG_Load(path.c_str());
if (!surface)
throw std::runtime_error{"failed to load image texture: " + path};
mTexture = SDL_CreateTextureFromSurface(renderer, surface);
mWidth = surface->w;
mHeight = surface->h;
SDL_FreeSurface(surface); // don't leak the surface
if (!mTexture)
throw std::runtime_error{"failed to create texture from surface"};
}
The next function is much the same, but there are a few extra issues:
void LTexture::loadfromText(std::string text, SDL_Color color)
{
free();
SDL_Surface* textSurface = TTF_RenderText_Blended_Wrapped(gFont, text.c_str(), color, 250);
mTexture = SDL_CreateTextureFromSurface(gRenderer, textSurface);
mWidth = textSurface->w;
mHeight = textSurface->h;
SDL_FreeSurface(textSurface);
textSurface = NULL;
}
First, that call to free() is troubling. Most of the places I see you using loadfromText(), it's right after the constructor, so there's nothing to free. You're asking for a crash with a pattern like that. If you follow the advice about writing a proper resource management class, you won't have these problems.
Hard-coding the width for wrapping seems unwise, especially as just a magic number in the middle of this function. It should be passed in as a parameter.
There's no reason to set textSurface to null. It doesn't help anything.
Fixed, it might look like this (since gFont is also a global, the font is passed in as well):
LTexture::LTexture(from_text_tag, SDL_Renderer* renderer, TTF_Font* font, std::string const& text, SDL_Color color, int width)
{
auto textSurface = TTF_RenderText_Blended_Wrapped(font, text.c_str(), color, width);
if (!textSurface)
throw std::runtime_error{"failed to render text: " + text};
mTexture = SDL_CreateTextureFromSurface(renderer, textSurface);
mWidth = textSurface->w;
mHeight = textSurface->h;
SDL_FreeSurface(textSurface);
if (!mTexture)
throw std::runtime_error{"failed to create texture from surface"};
}
On to render().
void LTexture::render(int x, int y)
{
SDL_Rect destRect = { x, y, mWidth, mHeight };
SDL_RenderCopy(gRenderer, mTexture, NULL, &destRect);
//create a rectangle that coincides with texture to check for button presses
mButton = { x, y, mWidth, mHeight };
}
Again, this needs to take the renderer as a parameter:
void LTexture::render(SDL_Renderer* renderer, int x, int y)
{
SDL_Rect destRect = { x, y, mWidth, mHeight };
SDL_RenderCopy(renderer, mTexture, nullptr, &destRect);
//create a rectangle that coincides with texture to check for button presses
mButton = { x, y, mWidth, mHeight };
}
And free() doesn't need to exist at all.
Before moving on, here's the collected summary of suggested changes to LTexture, all put together:
class LTexture
{
public:
static constexpr struct from_surface_tag {} from_surface{};
static constexpr struct from_text_tag {} from_text{};
LTexture(from_surface_tag, SDL_Renderer* renderer, std::string const& path);
LTexture(from_text_tag, SDL_Renderer* renderer, TTF_Font* font, std::string const& text, SDL_Color color, int width);
// Move operations must be defined.
LTexture(LTexture&& other) noexcept
{
using std::swap;
swap(*this, other);
}
~LTexture();
void render(SDL_Renderer* renderer, int x, int y);
LTexture& operator=(LTexture&& other) noexcept
{
using std::swap;
swap(*this, other);
return *this;
}
// You need to initialize *AT LEAST* mTexture. The others are
// optional (but it's a good idea to initialize everything anyway).
SDL_Texture* mTexture = nullptr;
int mWidth = 0;
int mHeight = 0;
SDL_Rect mButton = {};
// Copy operations must be deleted.
LTexture(LTexture const&) = delete;
LTexture& operator=(LTexture const&) = delete;
// And you need a swap function.
friend void swap(LTexture&, LTexture&) noexcept;
};
LTexture::LTexture(from_surface_tag, SDL_Renderer* renderer, std::string const& path)
{
auto surface = IMG_Load(path.c_str());
if (!surface)
throw std::runtime_error{"failed to load image texture: " + path};
mTexture = SDL_CreateTextureFromSurface(renderer, surface);
mWidth = surface->w;
mHeight = surface->h;
SDL_FreeSurface(surface); // don't leak the surface
if (!mTexture)
throw std::runtime_error{"failed to create texture from surface"};
}
LTexture::LTexture(from_text_tag, SDL_Renderer* renderer, TTF_Font* font, std::string const& text, SDL_Color color, int width)
{
auto textSurface = TTF_RenderText_Blended_Wrapped(font, text.c_str(), color, width);
if (!textSurface)
throw std::runtime_error{"failed to render text: " + text};
mTexture = SDL_CreateTextureFromSurface(renderer, textSurface);
mWidth = textSurface->w;
mHeight = textSurface->h;
SDL_FreeSurface(textSurface);
if (!mTexture)
throw std::runtime_error{"failed to create texture from surface"};
}
void LTexture::render(SDL_Renderer* renderer, int x, int y)
{
SDL_Rect destRect = { x, y, mWidth, mHeight };
SDL_RenderCopy(renderer, mTexture, nullptr, &destRect);
//create a rectangle that coincides with texture to check for button presses
mButton = { x, y, mWidth, mHeight };
}
void swap(LTexture& a, LTexture& b) noexcept
{
using std::swap;
swap(a.mTexture, b.mTexture);
swap(a.mWidth, b.mWidth);
swap(a.mHeight, b.mHeight);
swap(a.mButton, b.mButton);
}
Next up is another set of globals... still a bad idea. Don't worry, I'll be explaining how to do away with all these globals eventually.
On with the review!
`void buttonPress(SDL_Event &e, SDL_Rect &button, int buttonNum)`
There are a number of issues with this function that are general across the program, so I'll discuss them here and then not repeat.
The first problem is that it does way too much. The general rule is one function, one job... most of your functions should be ~3-5 lines of actual code. If your function starts getting longer than ~20-25 lines, that is a pretty clear sign that it needs to be broken up into smaller functions.
This function checks for a button press... and then checks which button was pressed... and then does all the logic for every button in the game. That's way too much work for a single function.
That brings up the next problem. Your game logic is dispersed all throughout the code. For example, if you wanted to add another difficulty level for 1 to 10000, you'd have to edit this function, mainMenu(), loadMedia(), close(), and possibly others that I've missed. The reason you need to use globals is because your game logic is all over the place. If you localized the logic, you wouldn't need globals to share state across functions all across the code.
That leads to another issue: You're using magic numbers to determine which button has been pressed. Think of what happens if you mix up the numbers for "1-100" and "1-1000", or "Play again" and "Quit". Or if you add a new function and accidentally reuse a number. There's no way to detect the error until your game starts acting weirdly or crashing, and then it will be really hard to track down why.
To give you an idea of how convoluted your game logic is, consider this:
main() calls mainMenu()
mainMenu() calls buttonPress() (to check for tenButton etc.)
buttonPress() calls mainMenu()
mainMenu() calls buttonPress()
buttonPress() calls mainMenu()
mainMenu() calls buttonPress()
buttonPress() calls mainMenu()
which means that if someone plays the game long enough, it will stack overflow and crash.
And there's still another issue: You use this function in the following way:
buttonPress(e, tenButton.mButton, 2);
buttonPress(e, hundredButton.mButton, 3);
buttonPress(e, thousandButton.mButton, 4);
Now, this will "work" so long as there's no overlap between the buttons. But let's say that the thousand button rectangle slightly overlaps the ten button rectangle. The first line will detect a ten button press and set up a 1-10 game, then immediately after the third line will detect a thousand button press and set up a 1-1000 game... and if the overlap between the two buttons isn't obvious, the poor player will be baffled at why their easy game turned out so hard.
Once you've detected a button has been pressed, you shouldn't be trying to detect others. That way lies madness. What you need is a simple "button_is_pressed()" function like:
auto button_is_pressed(SDL_Event const& event, SDL_Rect const& button_rect)
{
if (event.type == SDL_MOUSEBUTTONDOWN)
{
auto const& mouse_button_event = event.button;
auto const mouse_position = SDL_Point{mouse_button_event.x, mouse_button_event.y};
return (mouse_button_event.button == SDL_BUTTON_LEFT) && SDL_PointInRect(&mouse_position, &button_rect);
}
return false;
}
which you'd then use like:
if (button_is_pressed(e, tenButton.mButton))
{
// handle new 1-10 game logic
}
else if (button_is_pressed(e, hundredButton.mButton))
{
// handle new 1-100 game logic
}
else if (button_is_pressed(e, thousandButton.mButton))
{
// handle new 1-1000 game logic
}
On to compare():
int compare(int randomNumber, int guess)
{
if (randomNumber == guess)
{
return 0;
}
//if player has run out of guesses
else if (guessCount == numberofGuesses)
{
return 3;
}
else if (randomNumber < guess)
{
return 1;
}
else if (randomNumber > guess)
{
return 2;
}
}
The first thing that bothers me about this function is that it is standard practice for comparison functions to return <0 for "less than", 0 for "equal", and >0 for "greater than". This will even be the behaviour of the upcoming "spaceship" three-way comparison operator. By not following this pattern, this function has surprising behaviour. (There's also a lurking danger: the branches happen to cover every case, but the compiler can't prove that, and if control ever falls off the end of a value-returning function without returning, that's undefined behaviour.)
The other problem is that, once again, this function is doing multiple jobs. It compares, as promised... but it also checks to see if the guess count is up. For that, it needs access to two globals.
Here's the logic you need (and as a side note: you have your game logic in the middle of your render function - it shouldn't be there):
if (guessCount == 0)
{
// starting message
}
else if (guessCount == numberofGuesses)
{
// you lose
}
else
{
auto const cmp = compare(randomNumber, input);
if (cmp < 0)
{
// too low
}
else if (cmp > 0)
{
// too high
}
else
{
// you win
}
}
Once you've done this, compare() just becomes:
auto compare(int randomNumber, int guess)
{
if (randomNumber < guess)
return -1;
if (randomNumber > guess)
return 1;
return 0;
// or in C++20 (with <compare> included), just:
// return randomNumber <=> guess;
}
Now the next function is interesting:
void playAgain(int x)
This function is actually very well-designed... roughly. Game logic follows a universal pattern - all game loops, in their most basic form, look like this:
process input
do game logic
render
aka:
while (!done)
{
input();
update();
render();
}
playAgain() does almost exactly that:
void playAgain(int x)
{
willPlayAgain = false;
SDL_Event e;
// <~~~~~~~ GAME LOOP STARTS HERE ~~~~~~~>
while (!quit && !willPlayAgain)
{
// <~~~~~~~ input() ~~~~~~~>
while (SDL_PollEvent(&e) != 0)
{
if (e.type == SDL_QUIT)
{
quit = true;
}
buttonPress(e, yesButton.mButton, 0);
buttonPress(e, noButton.mButton, 1);
}
// <~~~~~~~ update() ~~~~~~~>
std::string dialogue;
if (x == 1)
{
dialogue = "YOU WON!!! The correct answer was " + std::to_string(randomNumber) + ".";
}
else
{
dialogue = "You lose. The correct answer was " + std::to_string(randomNumber) + ".";
}
// <~~~~~~~ render() ~~~~~~~>
SDL_RenderClear(gRenderer);
highLowTexture.render(0, 0);
LTexture winlose;
winlose.loadfromText(dialogue, textColor);
winlose.render(335, 70);
LTexture playAgain;
playAgain.loadfromText("Play again?", textColor);
playAgain.render(325, 300);
yesButton.render(300, 350);
noButton.render(300 + yesButton.mWidth + 10, 350);
SDL_RenderPresent(gRenderer);
}
}
Almost all of the problems with this function have to do with spaghetti logic and the use of globals. Let's go through it bit-by-bit and see how it could be improved.
First let's simplify the logic. You have two booleans: quit and willPlayAgain. If both are false, then the loop continues; fine. If one's true and the other's false, then that's fine, too - the user either wants to quit or play again. But... what does it mean if both are true? The user wants to quit and play again? (This could conceivably happen, too. If I click first one button and then the other while the game is temporarily frozen (because my system is slowing down), SDL might record both clicks in the event queue, so when you pump it your input loop will detect both events and then... I don't even know what will happen, because the logic is all over the place).
It seems to me that you are interested in two conditions: is the loop done, and what did the user choose (to quit or play again)?
With that, no confusion is possible. You keep looping until "loop done" is true, and "quit/play-again" will be whatever the last thing detected was. The logic is easy to follow.
So that gives:
void playAgain()
{
auto done = false;
auto quit = true; // you could default this to true or false, depending on your preference
while (!done)
{
// ...
}
}
Now your input logic is very simple. You just check for button presses and SDL_QUIT:
while (SDL_PollEvent(&e) != 0)
{
if (e.type == SDL_QUIT || button_is_pressed(e, noButton.mButton))
{
quit = true;
done = true;
break; // no sense in continuing checking for events!
}
else if (button_is_pressed(e, yesButton.mButton))
{
quit = false;
done = true;
}
}
But wait! Where do yesButton and noButton come from? Are they globals?
No.
You have two (practical) options. The first is to create the buttons when you enter the play-again state:
void playAgain(SDL_Renderer* renderer)
{
auto done = false;
auto quit = true;
auto yesButton = LTexture{LTexture::from_surface, renderer, "Resources/HiLoYes.png"};
auto noButton = LTexture{LTexture::from_surface, renderer, "Resources/HiLoNo.png"};
while (!done)
{
// ...
}
}
The other is to create a texture manager object, and pass it to the function:
void playAgain(texture_manager& textures)
{
auto done = false;
auto quit = true;
auto& yesButton = textures["yes-button"];
auto& noButton = textures["no-button"];
while (!done)
{
// ...
}
}
That's a bit more work, but it allows you to preload all the textures at game start, and reuse them rather than reloading them each time the play-again screen pops up.
Next comes the update logic. Here, all you do is check whether the user has won or lost the last game, and generate a message accordingly:
std::string dialogue;
if (game_was_won)
dialogue = // ...
else
dialogue = // ...
But wait! Whether the user won or lost doesn't change, does it? You can determine that right at the start of the function!
But there's more! Once you know what dialogue is, you can render it one time and keep reusing it. And you can do that at the start of the function... before you start looping:
void playAgain(SDL_Renderer* renderer, TTF_Font* font, bool game_was_won)
{
auto done = false;
auto quit = true;
auto yesButton = LTexture{LTexture::from_surface, renderer, "Resources/HiLoYes.png"};
auto noButton = LTexture{LTexture::from_surface, renderer, "Resources/HiLoNo.png"};
auto dialogue = std::string{};
if (game_was_won)
dialogue = // ...
else
dialogue = // ...
// Note: the colour and width can be passed as arguments, too
// (and probably should be). It might be worthwhile to pass some
// kind of object describing the "theme": fonts,
// colours, textures, etc..
auto const textColor = SDL_Color{ 0, 0, 0 };
auto const textWidth = 250;
auto winlose = LTexture{LTexture::from_text, renderer, font, dialogue, textColor, textWidth};
while (!done)
{
// ...
}
}
While you're at it, you could also pre-render the "play again" message.
With everything prerendered, the render phase is much simpler, and much faster:
SDL_RenderClear(renderer);
highLowTexture.render(renderer, 0, 0);
winlose.render(renderer, 335, 70);
playAgain.render(renderer, 325, 300);
yesButton.render(renderer, 300, 350);
noButton.render(renderer, 300 + yesButton.mWidth + 10, 350);
SDL_RenderPresent(renderer);
Putting that altogether gives something like:
bool playAgain(SDL_Renderer* renderer, TTF_Font* font, bool game_was_won)
{
auto done = false;
auto quit = true;
// Set everything up, including all pre-rendering, so the loop
// is as fast as possible.
auto yesButton = LTexture{LTexture::from_surface, renderer, "Resources/HiLoYes.png"};
auto noButton = LTexture{LTexture::from_surface, renderer, "Resources/HiLoNo.png"};
auto dialogue = std::string{};
if (game_was_won)
dialogue = // ...
else
dialogue = // ...
auto const textColor = SDL_Color{ 0, 0, 0 };
auto const textWidth = 250;
auto winlose = LTexture{LTexture::from_text, renderer, font, dialogue, textColor, textWidth};
auto playAgain = LTexture{LTexture::from_text, renderer, font, "Play again?", textColor, textWidth};
// Begin the loop.
while (!done)
{
// Input.
SDL_Event e;
while (SDL_PollEvent(&e) != 0)
{
if (e.type == SDL_QUIT || button_is_pressed(e, noButton.mButton))
{
quit = true;
done = true;
break; // no sense in continuing checking for events!
}
else if (button_is_pressed(e, yesButton.mButton))
{
quit = false;
done = true;
}
}
// Update.
// Nothing to do here, because this is a pretty static
// state: everything is determined at the start.
// Render.
SDL_RenderClear(renderer);
highLowTexture.render(renderer, 0, 0);
winlose.render(renderer, 335, 70);
playAgain.render(renderer, 325, 300);
yesButton.render(renderer, 300, 350);
noButton.render(renderer, 300 + yesButton.mWidth + 10, 350);
SDL_RenderPresent(renderer);
}
return quit;
}
Now, of course, this shouldn't all be one function. If we break the input, update, and render stages out to functions, where each function returns false if the loop must end:
bool playAgain_input(LTexture const& yesButton, LTexture const& noButton, bool& quit)
{
SDL_Event e;
while (SDL_PollEvent(&e) != 0)
{
if (e.type == SDL_QUIT || button_is_pressed(e, noButton.mButton))
{
quit = true;
return false;
}
else if (button_is_pressed(e, yesButton.mButton))
{
quit = false;
return false;
}
}
return true;
}
bool playAgain_update()
{
// Nothing to do here.
return true;
}
bool playAgain_render(SDL_Renderer* renderer, LTexture const& highLowTexture, /* rest of textures... */)
{
SDL_RenderClear(renderer);
highLowTexture.render(renderer, 0, 0);
winlose.render(renderer, 335, 70);
playAgain.render(renderer, 325, 300);
yesButton.render(renderer, 300, 350);
noButton.render(renderer, 300 + yesButton.mWidth + 10, 350);
SDL_RenderPresent(renderer);
return true; // technically you could check that all the render
// operations succeeded, and return false on
// failure.
}
bool playAgain(SDL_Renderer* renderer, TTF_Font* font, bool game_was_won)
{
// Set everything up, including all pre-rendering, so the loop
// is as fast as possible.
auto yesButton = LTexture{LTexture::from_surface, renderer, "Resources/HiLoYes.png"};
auto noButton = LTexture{LTexture::from_surface, renderer, "Resources/HiLoNo.png"};
auto dialogue = std::string{};
if (game_was_won)
dialogue = // ...
else
dialogue = // ...
auto const textColor = SDL_Color{ 0, 0, 0 };
auto const textWidth = 250;
auto winlose = LTexture{LTexture::from_text, renderer, font, dialogue, textColor, textWidth};
auto playAgain = LTexture{LTexture::from_text, renderer, font, "Play again?", textColor, textWidth};
// The loop.
auto quit = true;
while (
playAgain_input(yesButton, noButton, quit) &&
playAgain_update() &&
playAgain_render(renderer, highLowTexture, /* rest of textures... */))
{}
return quit;
}
Now, this is still not great, because you end up having to pass a ton of textures as function arguments (it's all very fast, so speed is not the problem - it's just clutter). But it is actually very close to perfect. The important things are:
No globals.
No spaghetti code - there is one way in to the play-again state, and one way out, with no recursion.
All logic for the play-again state is in a single place. The entire play-again state of the game is completely self-contained - changes elsewhere in the code won't affect it, and changes in any of these functions won't affect the rest of the code.
But as I said, this still isn't great. We can do better. But let's put a pin in that for now.
The same ideas here apply to many of the following functions because - as I said at the start - your code is actually remarkably well-designed for a first attempt. Most of your functions follow the input-update-render loop logic... roughly. The problem is really just that they use globals to reach out of their little box and muck with stuff the rest of the program will see.
void renderScene(std::string guessInput, int compare)
The input section is missing (it's handled elsewhere, unfortunately). But the update and render sections are there.
void gameLoop()
This function is actually the input section for the previous function.
void mainMenu()
This is another input-update-render loop. The update section is empty (as it is for playAgain()) because there's not much logic needs doing here.
void init()
void loadMedia()
void close()
These functions are fine for what they are - the issue is that they're all just working with globals.
And now, finally, we get to main().
main() is where all the problems start, because all the data is global, rather than local to main(). Also, you're mostly using C-style patterns (likely because most, if not all, SDL tutorials are C-based), which means you need a close() function.
To update this to C++, you need to wrap a lot of SDL. For example, SDL itself should be wrapped in a class. Here's a very basic example:
class sdl
{
public:
sdl()
{
auto const result = SDL_Init(SDL_INIT_VIDEO);
if (result != 0)
throw std::runtime_error{std::string{"SDL initialization failed: "} + SDL_GetError()};
_own = true; // initialization succeeded, so this object owns the cleanup
}
sdl(sdl&& other) noexcept
{
_swap(*this, other);
}
~sdl()
{
if (_own)
SDL_Quit();
}
auto operator=(sdl&& other) noexcept -> sdl&
{
_swap(*this, other);
return *this;
}
// Non-copyable
sdl(sdl const&) = delete;
auto operator=(sdl const&) -> sdl& = delete;
private:
static auto _swap(sdl& a, sdl& b) noexcept
{
using std::swap;
swap(a._own, b._own);
}
bool _own = false;
};
And you'd use it in main() like this:
int main()
{
try
{
auto sdl_lib = sdl{}; // initializes SDL
// Use any SDL stuff you want here.
// Even if there's an error, no problem:
// throw std::runtime_error{"some error"};
// SDL will be automatically cleaned up.
}
catch (std::exception const& x)
{
std::cerr << "Error: " << x.what() << '\n';
return EXIT_FAILURE;
}
}
You can make nearly identical classes for SDL image and SDL ttf, and use them the same way:
int main()
{
try
{
auto sdl_lib = sdl{};
auto img = sdl_image{};
auto ttf = sdl_ttf{};
// Safe to use, will automatically clean up.
}
catch (std::exception const& x)
{
std::cerr << "Error: " << x.what() << '\n';
return EXIT_FAILURE;
}
}
For stuff like SDL_CreateWindow() that returns pointers, you can use std::unique_ptr with a custom deleter:
struct sdl_deleter
{
auto operator()(SDL_Window* p) noexcept
{
if (p)
SDL_DestroyWindow(p);
}
auto operator()(SDL_Renderer* p) noexcept
{
if (p)
SDL_DestroyRenderer(p);
}
auto operator()(TTF_Font* p) noexcept
{
if (p)
TTF_CloseFont(p);
}
// and any more you need
};
int main()
{
try
{
auto const sdl_lib = sdl{};
auto const img = sdl_image{};
auto const ttf = sdl_ttf{};
auto const window = std::unique_ptr<SDL_Window, sdl_deleter>{
SDL_CreateWindow("HiLo", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN)
};
auto const renderer = std::unique_ptr<SDL_Renderer, sdl_deleter>{
SDL_CreateRenderer(window.get(), -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC)
};
// and so on, loading the font and all the textures, etc.
// Main game logic goes here.
// Everything gets automatically cleaned up in the right
// order.
}
catch (std::exception const& x)
{
std::cerr << "Error: " << x.what() << '\n';
return EXIT_FAILURE;
}
}
Keep following that pattern, and you can load all the game's resources and they'll all be automatically cleaned up even in the event of an error.
Now, as with the playAgain() function, this is good... but still not great.
You did right so far to study up on C++ - the language you need to write your game - and SDL - the library you need to display your game's graphics, handle input, and so on. But you are missing a third element. You need a tutorial on the high-level structure of a game: things like how to manage assets and so on. You've got the low-level stuff down with SDL, but you need an idea of how to make the high-level structure.
I very highly recommend Game Programming Patterns by Robert Nystrom, but there are dozens of resources out there on high-level game structure.
As a very rough sample of the kind of thing you'll learn from those resources, I suggest you try to design your game using a state machine.
What does that even mean? Well, I haven't played your game (I don't have the textures), but as I understand it, your game works like this:
The game starts.
The game shows a menu letting the user chose the game type. The game types are all the same, so I'll only describe one. Also, the user could hit the quit button to quit.
If the user chooses to play, the guessing game starts. It ends either with a win or a loss, and then goes to the play-again screen.
At the play-again screen the user can choose to quit, or play again. Depending on what they choose, they jump to either the guessing game state or the quit state.
If the user chooses to quit, the game ends.
Here's what that looks like graphically:
(State-machine diagram omitted: one bubble each for the main menu, the guessing game, the play-again screen, and quitting, with arrows for the transitions just described.)
Each of those bubbles is a high-level state that your game is in. Some of those states have sub-states (like the guessing-game state has states where the number of guesses is zero, where the number is greater than zero but less than the max number of guesses, and so on).
At the highest level what your game is really doing is transitioning from one of those states to another based on the conditions in the current state.
Each of those states can be isolated as its own "sub-game", with its own game loop with input-update-render logic. And here's the neat thing... you've already MOSTLY done that. It's a little clumsy and undisciplined, but you have already basically structured your game correctly as isolated states. Except... because of the globals and the logic being spread out all over the place, it's not exactly isolated... but it could be.
Let's start with the basics - a base class for your current game state:
class state
{
public:
enum class update_result_type
{
continue_state, // continue in the current state
end_state, // current state is done; no new state
replace_state, // current state is done; new state is given
push_state, // enter a new state, but remember the current state
quit_state, // just quit the whole game immediately
};
struct update_result_t
{
update_result_type type = update_result_type::continue_state;
std::unique_ptr<state> next = nullptr;
};
virtual ~state() = default;
virtual void input() = 0;
virtual update_result_t update() = 0;
virtual void render() = 0;
// Non-copyable.
state(state const&) = delete;
auto operator=(state const&) -> state& = delete;
};
At the highest level, your game has a stack of states, and a main loop that looks like this:
auto state_stack = std::stack<std::unique_ptr<state>>{};
// The game starts in the main menu state.
state_stack.push(std::make_unique<main_menu_state>(/*params*/));
// Main game loop.
while (!state_stack.empty())
{
state_stack.top()->input();
auto result = state_stack.top()->update();
if (result.type == state::update_result_type::continue_state)
{
// Just continue in the current state: so render.
state_stack.top()->render();
}
else if (result.type == state::update_result_type::end_state)
{
// This state is done, so pop it off the state stack.
// The previous state will be used, and if there is none
// the game ends.
state_stack.pop();
}
else if (result.type == state::update_result_type::push_state)
{
// Switch to a new state by pushing the new state to the
// top of the stack.
state_stack.push(std::move(result.next));
}
else if (result.type == state::update_result_type::replace_state)
{
// Switch to a new state by (effectively) popping the
// current state, then pushing the new state.
state_stack.top().swap(result.next);
}
else if (result.type == state::update_result_type::quit_state)
{
// Just clear out the state stack.
while (!state_stack.empty())
state_stack.pop();
}
}
Now each game state gets its own class. For example, the play-again state:
class play_again_state : public state
{
public:
play_again_state(SDL_Renderer* renderer, TTF_Font* font, bool game_was_won);
void input() override;
update_result_t update() override;
void render() override;
private:
enum class user_choice
{
none, // no choice made
play_again,
quit,
};
user_choice choice = user_choice::none;
SDL_Renderer* renderer = nullptr;
LTexture yesButton; // initialized in the constructor, where the renderer is available
LTexture noButton;
// and so on for other textures and stuff
};
play_again_state::play_again_state(SDL_Renderer* renderer, TTF_Font* font, bool game_was_won) :
renderer{renderer},
yesButton{LTexture::from_surface, renderer, "Resources/HiLoYes.png"},
noButton{LTexture::from_surface, renderer, "Resources/HiLoNo.png"}
// ...and initialize the other members here too
{
}
void play_again_state::input()
{
SDL_Event e;
while (SDL_PollEvent(&e) != 0)
{
if (e.type == SDL_QUIT || button_is_pressed(e, noButton.mButton))
{
choice = user_choice::quit;
break;
}
else if (button_is_pressed(e, yesButton.mButton))
{
choice = user_choice::play_again;
break;
}
}
}
state::update_result_t play_again_state::update()
{
switch (choice)
{
case user_choice::play_again:
return {state::update_result_type::end_state};
// or, depending on how you lay out your states:
// return {state::update_result_type::replace_state, std::unique_ptr<state>{new main_menu_state{/*params*/}}};
case user_choice::quit:
return {state::update_result_type::quit_state};
case user_choice::none:
// fallthrough
default:
return {};
}
}
void play_again_state::render()
{
SDL_RenderClear(renderer);
highLowTexture.render(renderer, 0, 0);
winlose.render(renderer, 335, 70);
playAgain.render(renderer, 325, 300);
yesButton.render(renderer, 300, 350);
noButton.render(renderer, 300 + yesButton.mWidth + 10, 350);
SDL_RenderPresent(renderer);
}
As you can see, those are basically the same functions from before, just now in a class. Which means no need to pass everything in parameters, you can use the class's data members.
And as before, everything about the play-again state is completely isolated in this class, so you can modify it freely without breaking anything else (and modify everything else freely without breaking it).
With states managed like this, you can very trivially add even more states to your game. For example: You could add a button in the main menu that pushes an "about this game" state that displays info about the game, and when you press "ok" it ends the state, pops it off the stack, and falls back to the main menu state.
You could also test states in isolation, so you don't need to play all through the game just to see that the play-again state works.
You are already on the right track by almost isolating your game's states into functions. What mostly tripped you up was sharing information from state to state, which led to falling back on globals. Reading up on this kind of high-level game design will help you get a sense of how to structure your game's code to be more robust.
Question
I also have one specific question. On Lazy Foo's tutorials everything done is on one file. When is it recommended to make use of other cpp or header files? Maybe for this simple of a game it's fine, but when I get into more complicated games, I would assume it gets to be fairly overwhelming to have the entire game in one file.
First, this game isn't as simple as you're making it out to be! You really bit off quite a bit for a first attempt, and frankly, handled it admirably.
Most people's first guessing game would simply start with "guess a number between _ and _", and maybe not even have a play again option. You went for a game with a play again option, and three different difficulty levels. That ain't half bad. But it's why I spent so much of this review on high-level structural stuff. You're aiming big, and that's cool. Read up on the high-level stuff, and you might be able to tackle even bigger challenges cleanly.
To answer the question directly: There are two major reasons to start breaking code up into units (header/cpp pairs usually):
When you've written a "thing" (function, class, group of functions and classes, etc.) that looks like it might be useful in other projects.
For example, LTexture looks handy. The next time you make a game, rather than rewriting it or copy-pasting (always unwise if you can avoid it), it might be nice if you had a ltexture.hpp and ltexture.cpp that you could copy to the new project.
In fact, I've found in practice that someone's first program/game is by far the hardest and most time consuming. But if they designed well, they'll have lots of little reusable bits they can use for their next program/game. LTexture is an example, but most of the stuff I suggested can also be reused, like the classes for SDL stuff and the state machine stuff. Once you've written it once and done all the testing so you know it's working, it's really nice to be able to just copy the files from one project to another.
I recommend that beginning programmers build their own personal code library - just keep adding useful stuff to it as you code, go through and update it from time to time, and eventually you'll be able to put things together fast.
When some subsection of your program is getting too big and complex, and can be easily put off to one side.
This isn't really an issue in your game - there's not really much that really demands to be pulled out and set aside. But it does come up as programs get more "stuff" in them.
One common thing I find myself pulling out and moving to its own source file is command line argument processing. I have one file with main() that calls parse_command_line(argc, argv);, and that function is defined in a separate file. That file does all the command line parsing, prints any help or version messages, and so on, and then returns a settings object to the main program, so that crap doesn't clutter up the main source file.
Don't be shy about breaking your programs up into smaller components. Not only does that often lead to more reusability, it can make development so much faster (because most build systems will not recompile stuff that doesn't change).
Summary
The main issue with your code is a lack of high-level structuring. That lack causes you to scatter game logic around and fall back on globals.
You've got two out of three parts of what a game needs down: you write decent C++, you've learned your low-level I/O stuff (SDL)... what you're lacking is the high-level design stuff. High-level design concepts like state machines and manager classes will bring structure to the program, reducing or eliminating interdependencies and spaghetti logic. That should be the next thing you research. I recommend Game Programming Patterns by Robert Nystrom as a good book to check out.
The second major issue is that most of your code is very C-ish, which is a bad look for C++. Unfortunately, SDL is a C library, which makes it a pain to work with, because you need to wrap everything to get good, safe, easy-to-use C++ code. But on the other hand, if you do a good job of wrapping it up well, you can reuse your C++ SDL library for other game projects... and since you'll be starting out with good quality C++ (hopefully), those later projects should be a breeze.
That's all for the review! Happy coding! | {
"domain": "codereview.stackexchange",
"id": 31421,
"tags": "c++, sdl"
} |
Why other benzene derivatives do not undergo nitrosation | Question: Solomons and Fryhle, 11th ed.:
Most other benzene derivatives except phenols and tertiary aromatic amines do not undergo C-nitrosation reaction.
I agree that if there is a primary or secondary aromatic amine the reaction will follow other routes, but in the case of other activating substituents EAS is the only route left for the electrophile $\ce{^+N=O}$, for example with anisole.
Also it seems to contradict ron's answer
So which one is correct?
Answer: Anisole can be nitrosated in good yield reference here. This, and other references, note that the PhOMe group is unstable to the reaction conditions being cleaved to give phenol.
m-Xylene can be nitrosated in good yield, o-xylene and toluene in modest yield in TFA or sulfuric/acetic acid mix under nitric oxide reference here
Nitrosonium tetrafluoroborate will also nitrosate a range of electron-rich aromatics, including anisole and xylenes reference here
This suggests to me that the authors of the textbook quoted are in error, as two of the references predate the publication date of the textbook. | {
"domain": "chemistry.stackexchange",
"id": 11546,
"tags": "organic-chemistry, amines"
} |
Ranking a variant array with variable dimensions | Question: I'm doing VBA macros and need to use Arrays of values extensively. After the below help from @TonyDallimore on Stack Overflow, I've decided to use nested variant arrays.
VBA chrashes when trying to execute generated sub
I use multidimensional, jagged arrays to SELECT data from DB's and efficiently perform calculations and write to a Worksheet. The data is sometimes a single value (Rank:0) sometimes a row of values (Rank:1) sometimes a table of values with some cells containing rows of values (Rank:3). I use the function below to determine what kind of operations are possible and should be performed to such arrays.
This function, along with all my array related functions reside in a module: modCustomArrayFunctions.
'***************************************************************************'
'Returns the rank of the passed array '
'Parameters: '
' Arr: The array to be processed '
'Returns: '
' The rank of the array '
'***************************************************************************'
Public Function Rank(ByRef Arr As Variant) As Byte
'Declarations **************************************************************'
Dim MaxRank As Byte 'Maximum rank of the elements of the array '
Dim i As Integer
'***************************************************************************'
If IsArray(Arr) Then
If IsArrInitialized(Arr) Then
MaxRank = 0
For i = LBound(Arr) To UBound(Arr)
Rank = Rank(Arr(i)) + 1
If Rank > MaxRank Then MaxRank = Rank
Next i
Rank = MaxRank
Else
Rank = 0
End If
Else
Rank = 0
End If
End Function
'***************************************************************************'
I code my routines pretty much this way.
And the accompanying test is as follows:
'***************************************************************************'
Public Sub Rank_Test()
Dim TestArr As Variant
Dim TestRank As Integer
'Test Non-Array
If Rank(TestArr) = 0 Then
Debug.Print "Non-Array OK"
Else
Debug.Print "Non-Array FAILED!"
End If
'Test Ranks 1 to 100
For TestRank = 1 To 100
TestArr = MakeArray(TestRank)
If Rank(TestArr) = TestRank Then
Debug.Print TestRank & "D OK"
Else
Debug.Print TestRank & "D FAILED!"
End If
Next TestRank
End Sub
'***************************************************************************'
Can the code, comments and test be deemed acceptable? What is there to improve?
Is the test okay or have I got the whole unit testing idea wrong?
IsArrInitialized() and MakeArray(n) are listed for completeness here. Note that MakeArray(n) is only used to create test arrays and is private to the array test module.
'***************************************************************************'
Private Function MakeArray( _
RankArr As Integer, _
Optional Value As Variant = 1 _
) As Variant
Dim TestArr As Variant
Dim DummyArr As Variant
Dim i As Integer
If RankArr = 0 Then
MakeArray = Value
ElseIf RankArr = 1 Then
ReDim TestArr(1 To 1)
TestArr(1) = Value
MakeArray = TestArr
Else
ReDim TestArr(1 To 1)
ReDim DummyArr(1 To 1)
DummyArr(1) = Value
For i = 1 To RankArr - 1
DoEvents
TestArr(1) = DummyArr
DummyArr = TestArr
Next i
MakeArray = TestArr
End If
End Function
'***************************************************************************'
'***************************************************************************'
'Determines if a dynamic array has been initialized '
'Parameters: '
' Arr: The array to be processed '
'Returns: '
' True if initialized, False if not '
'***************************************************************************'
Public Function IsArrInitialized(ByRef Arr As Variant) As Boolean
'Declarations **************************************************************'
Dim dum As Long
'***************************************************************************'
On Error GoTo ErrLine
If IsArray(Arr) Then
dum = LBound(Arr)
If dum > UBound(Arr) Then
GoTo ErrLine
Else
IsArrInitialized = True
End If
Else
IsArrInitialized = False
End If
Exit Function
ErrLine:
IsArrInitialized = False
End Function
'***************************************************************************'
Answer: Regarding the not-listed IsArrInitialized, after investigating on StackOverflow, I ended up using this:
Private Declare Sub CopyMemory Lib "kernel32" Alias "RtlMoveMemory" _
(pDst As Any, pSrc As Any, ByVal ByteLen As Long)
'returns true if specified array is initialized.
Public Function IsArrayInitialized(arr) As Boolean
Dim memVal As Long
CopyMemory memVal, ByVal VarPtr(arr) + 8, ByVal 4 'get pointer to array
CopyMemory memVal, ByVal memVal, ByVal 4 'see if it points to an address...
IsArrayInitialized = (memVal <> 0) '...if it does, array is intialized
End Function
I would love to see your MakeArray implementation, but I'll stick to the Rank function.
Comments
'***************************************************************************'
'Returns the rank of the passed array '
'Parameters: '
' Arr: The array to be processed '
'Returns: '
' The rank of the array '
'***************************************************************************'
I remember being told, in school, that this was a good way of documenting code. However, real-world experience has proven otherwise. These comments only clutter up the code, add to the burden of maintenance and inevitably become stale/obsolete/lies.
What's the need exactly? To say what the thing does? Good, useful comments don't do that - the code itself says "what"; good comments say "why". How do you know what the thing does? With proper encapsulation and naming!
If the Rank function is defined in a code module called Helpers.bas that contains dozens of unrelated specialized functions like this, you have a problem. If it's defined in a code module called ArrayHelpers.bas that contains dozens of somewhat related specialized functions like this, you have a lesser problem, but a problem still: in VBA anything Public declared in a code module (.bas) is accessible as a "macro" - a Public Function in Excel VBA could even be used as a cell formula, so naming is very important to avoid clashes with existing/"native" functions.
'Declarations **************************************************************'
Please don't. If you feel the need to "sectionize" the code inside a function, chances are that function is doing too many things.
Dim MaxRank As Byte 'Maximum rank of the elements of the array
These comments shouldn't be needed; it should be clear what MaxRank is used for, and if it isn't, then the identifier needs a better name, not a comment.
Naming
I think Rank is a poor name for what the function does. First, I'm tempted to read it as a verb, but you intend it as a noun - nouns are generally classes (/objects), and verbs are for methods, functions and procedures.
Readability
Your function is recursive, and it's not clear what the intent is - arrays have indices, not "ranks" - the multiple assignments of the function's return value are also a hindrance. Consider this:
Public Function Rank(ByRef Arr As Variant) As Byte
Dim Result As Byte
Dim MaxRank As Byte
Dim i As Integer
If IsArray(Arr) Then
If IsArrInitialized(Arr) Then
MaxRank = 0
For i = LBound(Arr) To UBound(Arr)
Result = Rank(Arr(i)) + 1 ' recursive call
If Result > MaxRank Then MaxRank = Result
Next i
Result = MaxRank
Else
Result = 0
End If
Else
Result = 0
End If
Rank = Result
End Function
Assigning the function's return value only once makes the function easier to rename, and makes it easier to tell the reads from the writes and recursive calls at a glance.
Bug?
Unless you've specified Option Base 1, VBA arrays are 0-based, which means a return value of 0 could be problematic. Typical VBA code would return -1 for an invalid index, so you can tell an invalid index apart from a zero-based first index.
This, of course, is countered by explicitly declaring the lower bound of your arrays (1 to 1, 1 to 1, 1 to n), but that's a rather verbose way of declaring an array.
That said, I'm not 100% clear on exactly what that function is achieving, even after reading your SO post. Is it possible that you don't really need a multidimensional array (1,1,n), but rather a list of objects that encapsulate an array and two other fields? This code could be of interest. | {
"domain": "codereview.stackexchange",
"id": 18066,
"tags": "array, unit-testing, vba"
} |
Returning random integer from interval based on last result and a seed | Question: Suppose we have an interval of integers [a, b]. I would like to have a function that returns random members from within the interval, without repetitions. Once all members within the interval are explored, the function would start to return the same first random sequence again, in the same order.
Example: a=1, b=5
3, 1, 4, 5, 2, 3, 1, 4, 5, 2, 3, 1, 4, 5, 2, ...
This would be easy to achieve by shuffling an array of all elements between a and b, and repeating it once the array is finished. However, this would take too much memory space, and this is not suitable for my case (I might have millions of elements).
Instead, the function I'd like to have would be more or less like this:
f(a, b, n, seed) -> n+1
Where:
a - start of interval
b - end of interval
n - last element returned from list
seed - self-explanatory
n+1 - next random element from list, calculated by using the seed and the last element returned (n)
The trick is knowing some way to get a non-repeated number from the interval based only on the element returned before and the seed. In the end, it would behave like a circular list randomized at its initialization, but without using memory space.
Answer: I suggest you pick a random permutation on the range $[a,b]$, i.e., a bijective function $\pi:[a,b]\to [a,b]$. Then, maintain a counter $i$ that starts at $i=a$; at each step, output $\pi(i)$ and then increment $i$ (wrapping around so that $b+1$ becomes $a$).
There are standard methods for generating such a random permutation in the cryptography literature: look up format-preserving encryption. The seed is the cryptographic key. You will be able to compute $\pi(i)$ in $O(1)$ time and $O(1)$ space, so this should be very efficient and avoid the need for a lot of storage.
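As an illustration of the idea (not a production cipher), here is a sketch in Python of a keyed Feistel permutation with cycle-walking; all names (`make_permutation`, `round_fn`, the seed string) are made up for the example, and a real application should use a vetted format-preserving encryption scheme such as FF1:

```python
import hashlib

def make_permutation(n, seed, rounds=4):
    """Return a bijection pi on range(n): O(1) memory, O(1) evaluation.

    Sketch of format-preserving encryption via a Feistel network plus
    cycle-walking; not cryptographically vetted."""
    half = max(1, ((n - 1).bit_length() + 1) // 2)  # Feistel acts on 2*half bits
    mask = (1 << half) - 1

    def round_fn(r, x):
        # pseudo-random round function keyed by the seed
        digest = hashlib.sha256(f"{seed}:{r}:{x}".encode()).digest()
        return int.from_bytes(digest[:4], "big") & mask

    def feistel(v):
        # a bijection on [0, 2**(2*half)), which covers [0, n)
        left, right = v >> half, v & mask
        for r in range(rounds):
            left, right = right, left ^ round_fn(r, right)
        return (left << half) | right

    def pi(i):
        # cycle-walking: re-apply until the value lands back inside [0, n)
        v = feistel(i)
        while v >= n:
            v = feistel(v)
        return v

    return pi

# Iterate [a, b] in a seed-determined order, repeating after b - a + 1 steps.
a, b = 1, 5
pi = make_permutation(b - a + 1, seed="some seed")
print([pi(i % (b - a + 1)) + a for i in range(10)])  # a fixed order, repeated
```

Each call to `pi` touches O(1) memory, so the full interval is never materialized, even for millions of elements.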
If you insist that the next output should be a function of the previous output, you can let $g(i)=i+1$ (except that $g(b)=a$), then let $f(i)=\pi^{-1}(g(\pi(i))$, where $\pi$ is a random permutation chosen as above. This will then give you a random cycle that iterates through the elements of $[a,b]$ in a random order. The outputs are the sequence $f(a),f(f(a)),f(f(f(a))),\dots$. | {
"domain": "cs.stackexchange",
"id": 16551,
"tags": "arrays, randomized-algorithms, memory-management, subsequences, intervals"
} |
Are waves affected by an under-water barrier? | Question: Given a wave propagating at the surface of still water towards a barrier that is below the surface, but at a distance that is of the order of the dimensions of the wave (such as depicted in the scheme below).
How will the course of the wave be affected?
Will it be blind to it and pursue its course unaffected?
Will it only partially pursue and a part of it will bounce back?
Something else?
wave
/~~~
/~~~~\ ->
~~~~/~~~~~~`~~~~~~~~~~~~~~~~~~~~~~~~ water surface
____
| |
| |
____________________| |__________
barrier
Answer: Ocean surface waves that are said to 'feel' bottom are known as shallow-water waves, which are categorically differentiated from deep-water waves according to wavelength and depth; deep-water waves, by contrast, are not affected by the depth of the sea floor.
Ocean surface waves are a movement of energy, not a bulk forward motion of water, but they do result in local circular orbits of the water particles. With depth the circular orbits flatten into elliptical orbits and eventually vanish, and it's below this depth that obstacles no longer influence surface wave motion. For obstacles that do intrude into this space, the circular or elliptical motions are disturbed and energy is dissipated towards the upper water layers, building up the wave height. In very shallow water the build-up can get high enough that the wave can no longer sustain its shape and you have a breaking wave.
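The depth at which orbital motion effectively vanishes can be made quantitative. A sketch using the standard deep-water (linear theory) result, added here as a textbook fact rather than something stated above: orbital amplitude decays as exp(-kz), with k = 2π/wavelength.

```python
import math

def orbital_amplitude_ratio(depth, wavelength):
    # deep-water linear theory: orbits shrink as exp(-k z), k = 2*pi/wavelength
    return math.exp(-2 * math.pi * depth / wavelength)

# At a depth of half a wavelength (the "wave base") only ~4% of the
# surface orbital motion remains, so a barrier there is barely felt.
print(round(orbital_amplitude_ratio(0.5, 1.0), 3))  # 0.043
```

This is why the question's condition (barrier depth of the order of the wave's dimensions) is exactly the regime where the barrier starts to matter.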
The property that actually leads to the loss of energy from obstacles is the viscosity of the water, the ability for layers of water to flow over one another. | {
"domain": "physics.stackexchange",
"id": 37891,
"tags": "fluid-dynamics"
} |
How to derive equation (N.15) in Ashcroft and Mermin's Solid State Physics? | Question: They state in their book on page 792 the following:
It can be proved, however, that if $A$ and $B$ are operators linear in the $u(R)$ and $P(R)$ of a harmonic crystal, then: $$\langle e^A e^B \rangle = \exp(\frac{1}{2}\langle A^2+2AB+B^2 \rangle)$$
They give as a reference the following paper by Mermin:
http://www.phys.ufl.edu/~maslov/phz6426/Mermin_JMathPhys_7_1038.pdf
But I don't see how to use this paper to prove the identity in the book.
Can anyone help with this?
P.S
$\langle \cdot \rangle$ denotes averaging.
Answer: Answering your question here since it's too long for a comment. By the way, I'll change the notation a little.
First recall that $\langle A \rangle$ in this context is the statistical average of the operator $A$ at thermal equilibrium, given by
\begin{equation}
\langle A \rangle = \sum_{n=0}^\infty p_n \langle n|A|n\rangle \, ,
\end{equation}
where $p_n=e^{-\beta E_n }/Z$ with $\beta=1/k_\mathrm{B}T$ and $Z$ is the canonical partition function.
In our case (harmonic approximation) the energies are those of the quantum harmonic oscillator:
$$
E_n=\hbar \omega (n+1/2)\, ,
$$
and so it is easy to obtain:
$$
p_n= e^{-\beta \hbar \omega n}(1-e^{-\beta\hbar \omega }) = z^n (1-z) \, ,
$$
where we have defined $z\equiv e^{-\beta \hbar \omega}$.
Therefore the average becomes:
$$
\langle A \rangle = (1-z)\sum_{n=0}^\infty z^n\langle n|A|n\rangle \, .
$$
Now let us define a linear operator in the positions and momenta of the crystal (equivalently, linear in the creation and annihilation operators):
$$
A = c_1a+c_2a^\dagger \, .
$$
Now we can compute $\langle A^2 \rangle = \langle (c_1a+c_2 a^\dagger)^2\rangle$. According to the expression above:
$$
\langle A ^2\rangle = (1-z)\sum_{n=0}^\infty z ^n\langle n | (c_1a+c_2 a^\dagger)^2 | n\rangle =\\ =(1-z) \sum_{n=0}^\infty z ^n\langle n |(c_1^2a^2+c_2^2a^{\dagger 2}+c_1c_2aa^\dagger+c_1 c_2a^\dagger a) | n\rangle \, .
$$
The non mixed terms (those with $a^2$ and $a^{\dagger 2}$) don't contribute to the sum so we end up with
$$
\langle A^2 \rangle = (1-z)c_1c_2\sum_{n=0}^\infty z ^n\langle n | \underbrace{(aa^\dagger+a^\dagger a)}_{[a,a^\dagger]+2a^\dagger a} | n\rangle=c_1c_2(1-z)\sum_{n=0}^\infty z^n(1+2n)\, .
$$
Thus, after a bit of algebra we obtain:
$$
\langle (c_1a+c_2 a^\dagger)^2\rangle = c_1c_2\left( 1+2\frac{z}{1-z}\right)=2c_1c_2 \left(\frac{1}{e^{\beta \hbar \omega}-1} +\frac{1}{2}\right)\, ,
$$
arriving at the result of the paper I linked in the comment. From there, the general case for several operators should not be too hard to prove as indicated in the paper.
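A quick numerical sanity check of that geometric-series algebra (illustrative only; $z$ is chosen arbitrarily in $(0,1)$ and $c_1 c_2$ is dropped since it is a common factor):

```python
# Check: (1-z) * sum_n z^n (1 + 2n) should equal 1 + 2 z/(1-z),
# i.e. 2*(n_bar + 1/2) with the Bose factor n_bar = z/(1-z).
z = 0.3  # arbitrary illustrative value of exp(-beta*hbar*omega)
series = (1 - z) * sum(z**n * (1 + 2 * n) for n in range(200))  # truncated sum
closed = 1 + 2 * z / (1 - z)
print(abs(series - closed) < 1e-12)  # True
```

The truncation at 200 terms is far more than enough here, since the tail is suppressed by $z^{200}$.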
PS: hopefully there are no errors, it's been a while since I studied QM and I've forgotten many things!
Reference: Solid State Physics, G. Grosso, G. P. Parravicini (2nd. edition) pp. 430-435. | {
"domain": "physics.stackexchange",
"id": 58480,
"tags": "quantum-field-theory, operators, solid-state-physics, correlation-functions, wick-theorem"
} |
Collect all items containing a word to a Multimap | Question: I have a simple class like
class Item {
Set<String> wordSet() {...}
}
So each item has a set of words and I need the inverse relation, i.e., for each word find all items containing it. This gets used a lot, so I want an ImmutableSetMultimap allowing me to retrieve it quickly.
I'm using
ImmutableSetMultimap<String, Item> wordToItemsMultimap(Collection<Item> items) {
Multimap<Item, String> itemToWords = items
.stream()
.collect(Multimaps.flatteningToMultimap(
i -> i,
i -> i.wordSet().stream(),
HashMultimap::create));
Multimap<String, Item> wordToItems =
Multimaps.invertFrom(itemToWords, HashMultimap.create());
return ImmutableSetMultimap.copyOf(wordToItems);
}
for creating the map and it works, but I find it too complicated. I'm looking for a simpler solution.
I don't care much about efficiency of the above snippet (as it gets called just once), but optimization hints are welcome as I'm doing a lot of similar things where speed matters.
Answer: Well, if your items collection has no null elements, you can use the ImmutableSetMultimap#flatteningToImmutableSetMultimap collector and its own inverse method instead of calling utility methods.
return items.stream()
.collect(flatteningToImmutableSetMultimap(
i -> i,
i -> i.wordSet().stream()
))
.inverse();
It's a bit more terse (note the static import; in my team we agreed it's acceptable here, although we normally don't use static imports, since it avoids repeating a looong class name in the method call) and gives you an immutable multimap right away.
If some "flattening" version of Multimaps#index existed, you could have also used it, but there's no such method right now unfortunately. | {
"domain": "codereview.stackexchange",
"id": 31874,
"tags": "java, stream, guava"
} |
How does $p_x$ commute with $p_y$, i.e. $[p_x,p_y]=0$? | Question: I know it's a simple and basic question but would someone show me how to evaluate $[\hat{p}_x,\hat{p}_y]$?
Answer: From the definition of the commutator,
$[P_x,P_y] = P_xP_y - P_yP_x$
where,
$P_x = -i\hbar\frac{\partial }{\partial x}$
and,
$P_y = -i\hbar\frac{\partial }{\partial y}$
Therefore,
$P_xP_y \psi = -i\hbar\frac{\partial}{\partial x}\left(-i\hbar\frac{\partial \psi}{\partial y}\right)$
$ = -\hbar^2\frac{\partial^2 \psi}{\partial x \partial y}$
Similarly,
$P_yP_x\psi = -i\hbar\frac{\partial }{\partial y}\left(-i\hbar\frac{\partial \psi}{\partial x}\right)$
$ = -\hbar^2\frac{\partial^2 \psi}{\partial y \partial x}$
$ = -\hbar^2\frac{\partial^2 \psi}{\partial x \partial y}$
Therefore,
$[P_x,P_y]\psi = P_xP_y\psi - P_yP_x\psi = -\hbar^2\frac{\partial^2 \psi}{\partial x \partial y} + \hbar^2\frac{\partial^2 \psi}{\partial x \partial y} = 0$
Therefore,
$[P_x,P_y] = P_xP_y - P_yP_x = 0$
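As a numerical sanity check (a sketch only: the common factor $(-i\hbar)^2$ cancels in the commutator, so it suffices to compare the mixed partial derivatives in either order; the test function and evaluation point are arbitrary):

```python
import math

h = 1e-4
psi = lambda x, y: math.exp(math.sin(x) * y)  # arbitrary smooth test function

def d_dx(f):
    # central difference in x
    return lambda x, y: (f(x + h, y) - f(x - h, y)) / (2 * h)

def d_dy(f):
    # central difference in y
    return lambda x, y: (f(x, y + h) - f(x, y - h)) / (2 * h)

pxpy = d_dx(d_dy(psi))  # proportional to Px Py acting on psi
pypx = d_dy(d_dx(psi))  # proportional to Py Px acting on psi
print(abs(pxpy(0.3, 0.7) - pypx(0.3, 0.7)))  # ~0 up to rounding error
```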
Note that the $\psi$ wasn't strictly necessary here, but I included it anyway for teaching purposes, since evaluating commutators is often easier when applying them to a test function, $\psi$. | {
"domain": "physics.stackexchange",
"id": 13481,
"tags": "quantum-mechanics, homework-and-exercises, operators, momentum, commutator"
} |
Is the value of a frequency bin of a DFT-output the average of the 'real' frequency values within that bin's range? | Question: I am wondering whether the value of a frequency bin with a certain resolution is the average of the fourier transform values of the 'real' frequencies within that bin's range.
Answer: I'll start with a counter-example:
Assume your continuous-time signal is a sinusoid at frequency $f_0$. You then sample that signal, and use a rectangular window to select $N$ samples. Feed those $N$ samples to your DFT, and you will see non-zero values in the whole spectrum (except in the special cases where $f_0$ is a multiple of $f_s/N$).
So you start with a signal whose energy is concentrated in a single frequency, and you end up with non-zero values in all DFT bins.
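The counter-example is easy to reproduce with a naive DFT (a sketch; $N$ and the frequencies are chosen arbitrarily for illustration):

```python
import cmath, math

N = 64

def dft(x):
    # naive O(N^2) DFT, fine for a demonstration
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

on_bin  = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]    # f0 = 5*fs/N
off_bin = [math.sin(2 * math.pi * 4.7 * n / N) for n in range(N)]  # between bins

print(sum(abs(v) > 1e-6 for v in dft(on_bin)))   # 2: only bins 5 and N-5
print(sum(abs(v) > 1e-6 for v in dft(off_bin)))  # leakage: (nearly) every bin
```

The second signal's energy is still concentrated at one physical frequency, yet it shows up in essentially every DFT bin, so a bin value is not an average of the "real" frequency content inside that bin's range.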
To understand what is happening:
Start with a continuous-time signal $x_c(t)$.
(Optionally) filter the signal to make it band-limited to $f_s/2$.
Sample the signal to get $x[n] = x_c(n/f_s)$.
At this point, the spectrum is given by the sampling theorem. You will have spectrum aliasing if the sampled signal was not band-limited in step 2.
Now apply a window of size M. The window type depends on your application (Rectangular (ugly), Hann, Hamming, Blackman, Flat top, etc.).
At this point, your spectrum is that of the sampled signal, but convolved with the window (depending on window size and type, you will get some energy spread to other frequencies). This spectrum is $X_w(e^{j\theta})$.
The DFT of the windowed signal will return samples of its spectrum, at frequencies $Y[k] = X_w( e^{j 2\pi k/N})$. | {
"domain": "dsp.stackexchange",
"id": 7768,
"tags": "fft, signal-analysis, dft, frequency-domain"
} |
How to list coding/noncoding genomic regions linked to significant SNPs? | Question: For QTL analysis in mice GEMMA was used to get P values ("p_lrt" column) for SNPs. GEMMA output (...assoc.txt) file excerpt:
chr rs ps n_miss allele1 allele0 af logl_H1 l_mle p_lrt
1 UNC6 3010274 45 T C 0.753 -1.871575e+03 9.092954e+00 4.497885e-01
1 JAX00240613 3323400 12 T C 0.766 -1.871777e+03 9.096464e+00 6.822045e-01
...
Then, drawing a Manhattan plot with qqman R package applies default suggestive/genomewide cutoffs. But if I get it right, P < 5×10^(−8) is a commonly used genomewide threshold Ref1, Ref2.
Then I have to list the relevant coding/noncoding regions linked to these significant SNP (with P < 5×10^(−8)).
How can I do this?
I found a fantastic tool called FUMA
but it seems to be for GWAS in humans, not for QTL analysis.
Update:
In Karl Browman's presentations: pdf (p. 13); pdf (p. 23(=28), 29(=34), 30(=35)).
Answer: I would suggest seeing what others have done when using GEMMA. For example, you can see this paper's methods:
We converted p-values to LOD scores and used a 1.5-LOD support interval to approximate a critical region around each associated region, which enabled us to systematically identify overlap with eQTLs.
They further discuss this approach in the supplementary note:
Instead we converted p-values
to LOD scores and used a 1.5-LOD support interval to approximate a critical region around each
association. The LOD drop approach provides a quick, straightforward way to gauge mapping precision
and systematically identify overlap between eQTL genes and candidate QTGs; however, it does not
correspond to a specific confidence interval (e.g. 95% confidence interval).
I would suggest trying something like this to estimate such a "critical region". For a discussion of the relationship between LOD and p-values, a quick search yielded this paper. | {
"domain": "biology.stackexchange",
"id": 12117,
"tags": "genetics, snp, gwas"
} |
What is the IUPAC name of the following compound? | Question: What is the IUPAC name of this molecule with a dihalide alkyl chain?
Answer: When listing preferred IUPAC names, the ring is prioritized over the chain (making the chain a substituent), especially when the chain is shorter than the ring.
First we label the chain carbon attached to the ring as '1' and continue along the chain. The bromines are at positions 2 and 3, and it is a 5-carbon chain. So we label it as a pentyl chain, and the IUPAC name therefore follows as
(2,3-dibromopentyl)benzene | {
"domain": "chemistry.stackexchange",
"id": 7020,
"tags": "organic-chemistry, nomenclature"
} |
What causes the tonotopic organization of the inner ear? | Question: I'm trying to understand why tones are registered in the way that they are in the inner ear, i.e., why are high pitched sounds sensed at the base of the cochlea and low frequencies in the apex? I've been unable to find out the specifics of this tonotopy, but as far as I can gather this is due to the physical properties of the basilar membrane.
But what exactly are those physical properties? Does the basilar membrane vary in size and stiffness along its length, so that each bit has its own resonant frequency, or is there something else going on?
Answer: The frequency tuning in the cochlea is due to a number of factors.
The primary factors of cochlear frequency tuning are generally ascribed to the passive physical characteristics of the basilar membrane (BM), which OP already identified in the question - The BM is wider and more flexible at the apical end (low-frequency region) and narrower and stiffer at the basal end (high-frequency region). Like the strings on a guitar, stiff and thin cords produce high-pitched sounds, while loose and thick cords produce low-frequency sounds. The stiffness of the BM gradually decreases from base to apex, while its width gradually increases along that length, thereby creating the gradient of characteristic frequencies of the basilar membrane (Purves et al., 2001) (Fig. 1).
Another factor that may play a role is the length of the stereocilia. Stereocilia are the mechanotransductive hairs that give the hair cells their name. In the inner hair cells (IHCs) the stereocilia are mechanically deflected when a sound wave enters the cochlea. That in turn leads to neurotransmitter release from the IHC, which in turn stimulates the auditory nerve. The stereocilia increase in length towards the apex of the cochlea. Longer stereocilia respond optimally to longer wavelengths (lower frequencies). Hence, stereocilia length also provides another mechanical gradient that facilitates frequency decomposition in the cochlea (Snow & Wyckam, 2009).
A last factor is more physiological in nature; the frequency tuning of the cochlea is much sharper than can be explained on the basis of the passive membrane properties of the BM. For example, studies in dead cochleas show a shallow tuning of the BM, while the tuning in alive cochleas is much sharper. Hence, it is thought that an active mechanism has to be at play. Generally, the outer hair cells (OHCs) are attributed this function. OHCs start to increase and decrease in length when the BM moves in response to sound. They are thought to amplify the BM response to sound. If these OHCs only amplify a very narrow set of frequencies and not others, they may indeed be capable of sharpening the tuning curve of the associated nerve fibers, but the exact mechanism behind this is unknown (Purves et al., 2001).
Fig. 1. Frequency tuning along the basilar membrane. source: Purves et al., 2001
References
- Purves et al. (eds.) Neuroscience 2nd ed. Sunderland (MA): Sinauer Associates (2001)
- Snow & Wyckam, Ballenger's Otorhinolaryngology: Head and Neck Surgery, John Jacob Ballenger (2009) | {
"domain": "biology.stackexchange",
"id": 6376,
"tags": "neuroscience, neurophysiology, sensation, hearing"
} |
Easy Access to robot poses created using the MoveIt! Setup Assistant | Question:
Consider that I have set 3 different poses for my robot with different names and different joint values.
Now while using the MoveIt Setup Assistant I can add the 3 poses under the tab Robot Posesand click the button MoveIt! at the bottom of the Setup Assistant screen to see the robot change the poses.
I want to know how I can easily switch between the 3 poses when I am using the move_group_interface and the RViz plugin.
Originally posted by R.Mehra on ROS Answers with karma: 49 on 2017-01-31
Post score: 0
Answer:
Those poses are called named targets.
Using them in C++: moveit::planning_interface::MoveGroup::setNamedTarget(const std::string &name).
In the RViz planning plugin: select them from the drop-down under Select Goal State on the Planning tab.
Originally posted by gvdhoorn with karma: 86574 on 2017-01-31
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Jasmin on 2019-09-10:
Is there a way to do the same thing with Python?
Comment by gvdhoorn on 2019-09-10:
I believe it was added in ros-planning/moveit#1300.
Comment by Jasmin on 2019-09-10:
Perfect! thank you @gvdhoorn . | {
"domain": "robotics.stackexchange",
"id": 26877,
"tags": "ros, moveit, move-group-interface, moveit-setup-assistant"
} |
Galilean transformation of the wave equation, derivatives | Question: So I'm trying to show that when the wave function
$ (-\frac{1}{c^2}\frac{d^2}{dt^2} + \frac{d^2}{dx^2})\phi(t,x) = 0 $
undergoes the Galilean transformation
$ t' = t $
$ x' = x-Vt $
the resulting differential equation is
$ \left[-\frac{1}{c^2}\frac{d^2}{dt'^2} + \frac{d^2}{dx'^2} -\frac{V^2}{c^2}\frac{d^2}{dx'^2} + \frac{2V}{c^2}\frac{d}{dt'}\frac{d}{dx'}\right]\phi(t',x')=0 $
I've started by saying that
$\frac{d}{dx} = \frac{dx'}{dx} \frac{d}{dx'} + \frac{dt'}{dx}\frac{d}{dt'}$
and I know that I'm supposed to get $\frac{d}{dx} = \frac{d}{dx'}$ but when I do it,
$\frac{dt'}{dx} = 0$ must be true for that second term to cancel out.
I could understand if I'm taking the derivative using $t'=t$, but if I rearrange the transformation $ x' = x-Vt $ to be $t=\frac{x-x'}{V}=t'$, then $\frac{dt'}{dx} = \frac{1}{V}$. What am I doing wrong here?
Answer: When you perform a coordinate transformation, you write the new variables as functions of the old ones:
$$t'(x,t) = t$$
$$x'(x,t) = x - Vt$$
Differentiation then proceeds as usual:
$$\frac{\partial}{\partial x} t'(x,t) = \lim_{h\rightarrow 0} \frac{t'(x+h,t)-t'(x,t)}{h} = \lim_{h\rightarrow 0} \frac{t - t}{h} = 0$$
$$ \frac{\partial}{\partial t} t'(x,t) = \lim_{h\rightarrow 0} \frac{t'(x,t+h)-t'(x,t)}{h} = \lim_{h\rightarrow 0} \frac{t+h-t}{h} = 1$$
Similarly,
$$\frac{\partial }{\partial x} x'(x,t) = 1$$
$$\frac{\partial}{\partial t} x'(x,t) = - V$$
I chose to write out the difference quotients explicitly to make it obvious that $t'$ and $x'$ are to be considered functions with two slots - one for the old position and one for the old time.
If you rearrange that second equation to yield
$$t(x,x') = \frac{x-x'}{V}=t'(x,x')$$
Then you are writing the new time as a function of the old position and the new position. This is a different function from the one I wrote above, and is not what we are looking for when performing coordinate changes.
Fundamentally, this misunderstanding can arise when you think only about the quantity with respect to which you are differentiating and forget to also specify which quantities are being held constant. We should really write
$$\left(\frac{\partial t'}{\partial x}\right)_t = \lim_{h\rightarrow 0} \frac{t'(x+h,t)-t'(x,t)}{h} = \lim_{h\rightarrow 0} \frac{t-t}{h}=0$$
which means the partial derivative of $t'$ with respect to $x$, holding $t$ constant. Contrast this with
$$\left(\frac{\partial t'}{\partial x}\right)_{x'} = \lim_{h\rightarrow 0} \frac{t'(x+h,t+\frac{h}{V})-t'(x,t)}{h} = \lim_{h\rightarrow 0}\frac{t+\frac{h}{V}-t}{h} = \frac{1}{V}$$
which means the partial derivative of $t'$ with respect to $x$ holding $x'$ constant.
Note that the $t\rightarrow t+ \frac{h}{V}$ shift comes from the fact that if $x'(x,t)=x-Vt$ is being held constant, then when $x\rightarrow x+h$ we must also have that $t \rightarrow t + \frac{h}{V}$ to compensate. | {
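Assembling these partial derivatives with the chain rule (this just collects the pieces computed above):
$$\frac{\partial}{\partial x} = \frac{\partial x'}{\partial x}\frac{\partial}{\partial x'} + \frac{\partial t'}{\partial x}\frac{\partial}{\partial t'} = \frac{\partial}{\partial x'}\, ,\qquad \frac{\partial}{\partial t} = \frac{\partial x'}{\partial t}\frac{\partial}{\partial x'} + \frac{\partial t'}{\partial t}\frac{\partial}{\partial t'} = \frac{\partial}{\partial t'} - V\frac{\partial}{\partial x'}$$
so substituting into the wave equation gives
$$-\frac{1}{c^2}\left(\frac{\partial}{\partial t'} - V\frac{\partial}{\partial x'}\right)^2\phi + \frac{\partial^2\phi}{\partial x'^2} = \left[-\frac{1}{c^2}\frac{\partial^2}{\partial t'^2} + \frac{2V}{c^2}\frac{\partial}{\partial t'}\frac{\partial}{\partial x'} - \frac{V^2}{c^2}\frac{\partial^2}{\partial x'^2} + \frac{\partial^2}{\partial x'^2}\right]\phi = 0\, ,$$
which is the transformed equation the question asks for.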
"domain": "physics.stackexchange",
"id": 64618,
"tags": "homework-and-exercises, field-theory, galilean-relativity"
} |
Why is the second harmonic intensity periodic in the coherence length? | Question: Solving for the intensity of second harmonic generation, we get that the intensity is $\operatorname{sinc}^2\left(\frac{\pi}{2}\frac{L}{L_{\text{coh}}}\right)$.
How is it shown that the intensity is periodic in $L_{\text{coh}}$ ($L_{\text{coh}}$ is the coherence length, $L$ the medium length)?
I am totally confused here.
Thank you.
Answer: Note it is not exactly periodic, it only touches zeros for some equally spaced values of L, but its maxima become lower with decreasing coherence length. Note also you are using the non-depleted-pump approximation.
The evolution of the SHG wave is determined by two factors when propagating through the nonlinear medium - it gains energy from the pump wave, and simultaneously it also goes out of sync in phase with it.
For a slight difference of the effective indices for both waves, the phase difference is negligible and SHG signal is strong. For a larger difference of phase, it will reverse the energy flow and the SHG wave eventually returns its energy to the pump - this is the first minimum of the sinc² function.
For even larger difference the process is gain-return-gain, but the gain region is shorter. This explains the pseudo-oscillating nature of the sinc² function, and also the fact its maxima get lower and lower with increasing index difference. | {
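The behaviour described above (equally spaced zeros, shrinking secondary maxima) can be checked numerically; a sketch in arbitrary units, using the sinc² form quoted in the question:

```python
import math

# Undepleted-pump SHG intensity vs. medium length L, in the sinc^2 form
# from the question (arbitrary units; Lc is the coherence length).
def shg_intensity(L, Lc):
    x = (math.pi / 2) * (L / Lc)
    return 1.0 if x == 0 else (math.sin(x) / x) ** 2

# Zeros occur at equally spaced L = 2*Lc, 4*Lc, ... but the maxima in
# between shrink, so the curve is only pseudo-periodic:
shg_intensity(0.0, 1.0)   # 1.0 (principal maximum)
shg_intensity(2.0, 1.0)   # ~0  (first zero, x = pi)
shg_intensity(3.0, 1.0)   # ~0.045 (much lower secondary maximum)
```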
"domain": "physics.stackexchange",
"id": 30151,
"tags": "electromagnetism, optics, phase-space, non-linear-optics, harmonics"
} |
Is converting an input vector into a matrix and applying CNNs always a good idea? | Question: I know the benefits of using CNNs (reduced-size weight matrices). Is it a good idea to convert an input vector (which is not an image) into a matrix and apply CNNs? My understanding is that it should not be done, because this would enforce relationships between input vector values which don't actually exist.
Am I correct, or is there some way we can apply CNNs to reduce computation?
If CNNs can't be applied, what method could be used to reduce computation for very high-dimensional input?
Answer: This is a job for dimensionality reduction techniques. PCA is a simple and intuitive method that you should probably try first. If you find it is insufficient or ineffective, you could try using an autoencoder.
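To make the PCA suggestion concrete, here is a minimal, dependency-free sketch of 1-D PCA for 2-D points (closed-form top eigenvector of the 2×2 covariance matrix; purely illustrative, a real pipeline would use something like sklearn's PCA):

```python
import math

# Reduce 2-D points to their single principal axis and the projections
# onto it. All names are illustrative.
def pca_1d(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))  # top eigenvalue
    if abs(sxy) > 1e-12:
        vx, vy = sxy, lam - sxx                            # its eigenvector
    else:
        vx, vy = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    axis = (vx / norm, vy / norm)
    proj = [(x - mx) * axis[0] + (y - my) * axis[1] for x, y in points]
    return axis, proj

# Points on the line y = 2x collapse onto one axis ~ (0.447, 0.894):
axis, proj = pca_1d([(t, 2 * t) for t in range(10)])
```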
As you said, using a CNN would imply a locational relationship between variables. If such a relationship does not exist, the CNN could have difficulty finding suitable weights. It is possible that forcing the CNN to link seemingly unrelated variables could help the model generalize better or reveal underlying structure that you cannot perceive, but there are other problems that you will run into. For example, CNNs generally use pooling layers as a form of nonlinear dimensionality reduction. However, if your variables are completely independent and unrelated, you would simply be throwing away possibly critical information using this down-sampling technique.
So while it might be feasible to use a CNN, you should probably use the right tool for the job. | {
"domain": "datascience.stackexchange",
"id": 7986,
"tags": "cnn, convolutional-neural-network"
} |
Padding sequences for neural sequence models (RNNs) | Question: I am padding sequences for a GRU based classifier that I am building in Keras. I'm wondering if there's any accepted best practice for padding the leading or trailing side of the sequence.
E.g.
sequence = [1,2,3,4]
leading_pad = [0,0,0,1,2,3,4]
trailing_pad = [1,2,3,4,0,0,0]
In other projects I have generally padded the leading end of the sequence as a convention imprinted on me by various blogs, course work and other examples.
My question: is there any research that shows whether or not it matters which side of the sequence you pad? Or am I completely over thinking this?
Any advice would be much appreciated.
Answer: Padded values are noise when they are regarded as actual values. For example, a padded temperature sequence [20, 21, 23, 0, 0] is the same as a noisy sequence where sensor has failed to report the correct temperature for the last two readings. Therefore, padded values better be cleaned (ignored) if possible.
Best practice is to use a Mask layer before other layers such as LSTM, RNN,.. to mask (ignore) the padded values. This way, it does not matter if we place them first or last.
Check out this post (my answer) that shows how to pad and mask the sequences with different length (with a sample code). You can experiment with the code to see the effect of removing the mask (treating padded values as actual values) on the deterioration of model performance.
This is the python snippet for quick reference:
from keras.layers import Masking, LSTM  # imports needed for the snippet

model.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model.add(LSTM(lstm_units))
where special_value is the padded value that should not have overlap with actual values. | {
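For reference, the question's two layouts are what Keras's pad_sequences exposes as padding='pre' (leading, the default) and padding='post' (trailing); a dependency-free sketch of the idea, without the truncation handling a real utility would add:

```python
# Pad `seq` with `value` up to `maxlen` on the chosen side.
def pad(seq, maxlen, value=0, side="leading"):
    fill = [value] * max(maxlen - len(seq), 0)
    return fill + list(seq) if side == "leading" else list(seq) + fill

pad([1, 2, 3, 4], 7)                   # [0, 0, 0, 1, 2, 3, 4]
pad([1, 2, 3, 4], 7, side="trailing")  # [1, 2, 3, 4, 0, 0, 0]
```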
"domain": "datascience.stackexchange",
"id": 5003,
"tags": "neural-network, rnn"
} |
Which poison makes seastars inedible to possible predators? | Question: In the new citizen science project (see: Sea Floor Explorer), numbers of seastars, scallops, crustaceans and other animals are counted. Already one can see a heavy bias in favor of seastars, both the fat and brittle kind.
I would be interested in why this creature is so successful, especially if it is poisonous to e.g., crustaceans, and what poisons exactly are responsible.
Answer: Asterosaponins are the class of compounds - they have a cholesterol like organic core.
Apparently, these saponins make pore-forming complexes with Δ5-sterols of cell membranes, and so are deadly to all usual kind of life, including bacteria and fungi. Quote:
Starfish and sea cucumber cell membranes are resistant to their own saponines due to the presence of Δ7- and Δ9,11-sterols, sulfated Δ5-sterols, and β-xylosides of sterols instead of the free Δ5-sterols.
"domain": "biology.stackexchange",
"id": 581,
"tags": "marine-biology, ecophysiology, invertebrates"
} |
How to update multiple ROS machines simultaneously? | Question:
Hi everyone!
So here's my problem...
I have 6 netbooks with Ubuntu 10.04 and ROS c-turtle. When I first installed everything I did so on the first netbook and cloned the image to the other 5.
Now I want to update ROS to diamondback, I could do the update one by one, but I'm guessing there is a faster way! What options do I have to update/install ROS and manage the robot's ROS stacks without having to work on the netbooks one by one?
Thanks for the help in advance,
Gonçalo Cabrita
Originally posted by Gonçalo Cabrita on ROS Answers with karma: 591 on 2011-06-01
Post score: 0
Original comments
Comment by JonW on 2011-06-01:
has anyone created a puppet (http://www.puppetlabs.com/) recipe for ROS?
Answer:
You can turn on all of your machines and have them connected to the network. Then you can use Cluster SSH to ssh into all of them at once from some master computer and update all of them at once. It will take the same amount of time required to update one machine, to update all of them.
If your 'master' computer is running Ubuntu, you can install cssh by running:
sudo apt-get install clusterssh
Originally posted by ben with karma: 674 on 2011-06-02
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by JonW on 2011-06-02:
interesting package! | {
"domain": "robotics.stackexchange",
"id": 5738,
"tags": "ros, installation, ubuntu"
} |
Equivalent resistance | Question: [figure: circuit diagram with an 8 ohm resistor bridging the network]
Can anyone please explain how, in the solution of the question above, they made the first transformation of the circuit by removing the 8 ohm resistor while keeping the rest of the circuit the same? How can the 8 ohm resistor be removed directly, and is there any trick for solving this type of problem?
Answer: The sub-network ABCD is a Wheatstone Bridge arrangement. The condition for 'balance' - ie no current through BD - is that $R_{AB}/R_{BC} = R_{AD}/R_{DC}$. Conversely, if the resistors in the circuit are in this ratio, then there is no current in BD. The value of $R_{BD}$ does not make any difference to the currents through ABC and ADC, so it can be removed from the network.
The condition can be derived by noting that, if no current flows through BD, then B and D must be at the same voltage. Since the PD along ABC is the same as that along ADC, this means that we must have $V_{AB}/V_{BC}=V_{AD}/V_{DC}$. Since the currents in AB,BC are the same, and the currents in AD,DC are the same, and $V=IR$, this ratio is equivalent to $R_{AB}/R_{BC} = R_{AD}/R_{DC}$. | {
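A quick numerical check of the balance condition (hypothetical resistor values, chosen so that $R_{AB}/R_{BC} = R_{AD}/R_{DC}$; with no current through BD, each arm is a plain voltage divider):

```python
# Open-bridge check of the Wheatstone balance condition: treat the arms
# ABC and ADC as voltage dividers and compare the potentials at B and D.
def divider_voltages(V, R_AB, R_BC, R_AD, R_DC):
    V_B = V * R_BC / (R_AB + R_BC)   # potential at B (relative to C)
    V_D = V * R_DC / (R_AD + R_DC)   # potential at D (relative to C)
    return V_B, V_D

V_B, V_D = divider_voltages(10.0, 2.0, 4.0, 3.0, 6.0)  # ratios 2/4 == 3/6
# V_B == V_D, so the resistor bridging B and D carries no current and can
# be removed without changing any other current in the network.
```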
"domain": "physics.stackexchange",
"id": 33992,
"tags": "homework-and-exercises, electric-circuits, electrical-resistance"
} |
What causes the mass of a black hole? | Question: I heard in a lecture about the Higgs mechanism that the mass of a black hole has nothing to do with the Higgs mechanism. The point was made in relation to the proton mass being largely due to the energy content within the proton, not the mass of the constituent quarks.
I don't know what the claim 'the mass of a black hole has nothing to do with the Higgs mechanism' means. Once matter passes over the event horizon of a black hole, does the mass of the black hole arise through some other mechanism than simply 'adding' that mass to it?
Answer: The answer to your question is stress-energy, not mass. Mass is just a certain form, a manifestation of something deeper, being energy. A proton and a black hole, both have stress-energy, and both cause spacetime curvature.
It's a commonly made mistake that gravity, and therefore a black hole, is caused by matter. In fact the spacetime curvature is related to a quantity called the stress-energy tensor. This is usually represented by a matrix with ten independent values in it (it's a 4x4 matrix, but it's symmetric, so six of the elements in it are duplicated).
Only one of the elements in the matrix, $T_{00}$, depends directly on the mass, and actually that element gives the energy density, where mass is counted as energy using Einstein's equation $e = mc^2$.
Why do photons add mass to a black hole?
In GR, we use the term stress-energy, and anything and everything, massive or massless that does have stress-energy (please note that in certain cases not all parts of the stress-energy momentum tensor contribute to curvature), will cause spacetime curvature.
The mass of a black hole can be determined in the same way as we determine the mass of any other astronomical object - by observing how it deflects the paths of other objects that pass close to it. These might be objects in closed elliptic orbits (like satellites around planets) or objects in hyperbolic trajectories that are just passing through.
Black hole mass
Please note that in certain cases, we use ADM (or Bondi) mass for black holes.
Note that mass is computed as the length of the energy–momentum four-vector, which can be thought of as the energy and momentum of the system "at infinity".
https://en.wikipedia.org/wiki/Mass_in_general_relativity
So basically, when we talk about the mass of a black hole, we really mean according to GR,
its energy and momentum (length of the energy-momentum four vector), and its (gravitational) effects on paths of objects that are interacting with it (its gravitational field). No need to involve the Higgs mechanism. So the answer to your question is, what causes the mass of the black hole, is ultimately stress-energy. | {
"domain": "physics.stackexchange",
"id": 76696,
"tags": "general-relativity, black-holes, mass-energy, higgs"
} |
Gender classification using Periocular Region | Question: I have a dataset of the periocular region; I have images of the male and female periocular region, but they are only labeled as left and right periocular region. The folder is a mixture of male and female periocular regions, and one cannot judge which image is male and which is female.
According to biometric classification male and female have different periocular regions, so in short, I have only 2 different types of the periocular region (male and female) in my folder as the dataset.
How can I classify them so that the system automatically separates the features of the male and female periocular regions and trains them as 2 different classes?
Answer: You can't if you have nothing to start labelling them with.
As you describe it, there's simply no info available (not even about statistics) about the gender of your reference images, so there's nothing you can train.
What you can do is use a different technique (as you say "biometrics say..." I assume there's a non-ML method of classification) and run it over the dataset, and use the resulting labels to train your neural network. Discussion of the sensibility of that is left to you as an exercise for your report! | {
"domain": "dsp.stackexchange",
"id": 7714,
"tags": "computer-vision, image-processing, opencv, classification"
} |
Can any species be bred selectively/engineered to become as diverse looking as dogs? | Question: I've done some research and it appears that dogs are the most diverse looking single species of mammals. The questions that interest me is - are dogs special in respect to genes/gene activation mechanisms related to appearance? Or does this dramatic difference in appearance have something to do with dog anatomy and how they give birth?
If dogs are not special, this makes me interested if other species of mammals can also be bred selectively (or genetically engineered) to produce such dramatic variation?
Answer: Dogs have a genomic structure that allows breeding with high variation in size, shape, coat quality, color and other qualities particular to each breed as well.
Other domesticated animals can be bred for as many qualities, but dogs in particular show a wide range of morphological traits - varying in size from just over a pound to the size of a wolf, from which dogs are derived and with which they are still genetically compatible. More interesting than size, coat color/texture, or even intelligence and personality, though, are the proportions of their bodies and of their skull length and breadth, which are remarkable.
There are over 160 registered breeds of dogs, but this is only a measure of how much time people have put into them. I think it's possible to get nearly anything you want with animals if you are patient enough - it's not clear what is and is not possible with enough genetic manipulation. For instance, horses can be bred over nearly as great a size range (the miniature horse is the size of a large dog; the Shire is 3,300 pounds), but it would not be as easy to get both the size and muscularity and shape of a bulldog in a horse. Breeding a mouse of various colors can be done, and so can interesting behaviors, but body shape seems to be harder: a Weimaraner mouse could take a tremendous amount of time and animals.
"domain": "biology.stackexchange",
"id": 6537,
"tags": "evolution, genetics, zoology, dogs, morphology"
} |
Buddy system allocator and slab allocator in Linux kernel | Question: In Silberschatz's book "Operating Systems", the author talks about the allocation of memory via the buddy system and the slab allocator.
My first question is: in the book, both memory allocation methods are described as methods that the kernel uses to allocate memory only for kernel processes, not for user processes. Is this true?
My second question, as the title suggest, regards the allocation method that is implemented inside the Linux kernel. Looking the website kernel.org I've seen that there is a chapter dedicated to buddy system and a page dedicated to slab. So, I imagine that both are present inside the kernel, but what is one method for and what is the other for?
Answer: A modern operating system manages physical memory as page frames. A page frame can be allocated to a user process or for the kernel to use for its own purposes, such as to allocate its own data structures.
A kernel data object (e.g. a structure representing an "open file" or something) is typically less than the size of a page. So it makes sense to have a two-level allocation hierarchy: one to manage page frames, and one to allocate data structures inside allocated page frames.
Linux uses a buddy allocator to allocate page frames, and a slab allocator to allocate kernel data structures. When the slab allocator needs more memory, it obtains it from the buddy allocator.
This approach works well for Linux, since it supports different page sizes; x86-64 CPUs support 4kB, 2M, and sometimes 2GB pages, and a binary buddy allocator can support this with very little modification.
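The split-and-coalesce mechanics of a binary buddy allocator can be sketched in a few lines (a toy model for illustration only, not the kernel's implementation; all names are made up):

```python
# Toy binary-buddy allocator. A block of size s at offset `off` has its
# buddy at offset `off ^ s`; freeing coalesces free buddy pairs upward.
class BuddyAllocator:
    def __init__(self, total):
        assert total & (total - 1) == 0, "total must be a power of two"
        self.total = total
        self.free_lists = {total: {0}}      # block size -> set of free offsets

    def alloc(self, size):
        want = 1
        while want < size:                  # round request up to a power of two
            want *= 2
        s = want
        while s <= self.total and not self.free_lists.get(s):
            s *= 2                          # smallest free block that fits
        if s > self.total:
            return None                     # out of memory
        off = self.free_lists[s].pop()
        while s > want:                     # split down, keeping the upper half free
            s //= 2
            self.free_lists.setdefault(s, set()).add(off + s)
        return off, want

    def free_block(self, off, size):
        while size < self.total:
            buddy = off ^ size
            peers = self.free_lists.get(size, set())
            if buddy not in peers:
                break
            peers.remove(buddy)             # coalesce with the free buddy
            off = min(off, buddy)
            size *= 2
        self.free_lists.setdefault(size, set()).add(off)
```

Allocating 10 units from a 64-unit pool hands back a 16-unit block at offset 0 and leaves 16- and 32-unit remainders on the free lists; freeing it coalesces everything back into one 64-unit block.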
Windows NT, by comparison, uses a simple free list to implement page frame allocation. Actually, that's not quite accurate; it uses multiple free lists to implement cache colouring, but that's a story for another time. Again, it layers another allocator on top to handle kernel data structures.
It's a similar story with Mach, the microkernel underneath macOS and iOS. Kernel data structures are implemented with a zone allocator, which plays the same role as the slab allocator in Linux. It, again, operates as a layer above the physical page frame allocator. | {
"domain": "cs.stackexchange",
"id": 20130,
"tags": "virtual-memory, memory-allocation"
} |
Why does Q-learning converge under 100% exploration rate? | Question: I am working on this assignment where I made the agent learn state-action values (Q-values) with Q-learning and 100% exploration rate. The environment is the classic gridworld as shown in the following picture.
Here are the values of my parameters.
Learning rate = 0.1
Discount factor = 0.95
Default reward = 0
Reaching the trophy is the final reward, no negative reward is given for bumping into walls or for taking a step.
After 500 episodes, the arrows have converged. As shown in the figure, some states have longer arrows than others (i.e., larger Q-values). Why is this so? I don't understand how the agent learns and finds the optimal actions and states when the exploration rate is 100% (each action: N-S-E-W has 25% chance to be selected)
Answer: Q-learning is guaranteed to converge (in the tabular case) under some mild conditions, one of which is that in the limit we visit each state-action tuple infinitely many times. If your random policy (i.e. 100% exploration) guarantees this and the other conditions are met (which they probably are), then Q-learning will converge.
The reason that different state-action pairs have longer arrows, i.e. higher Q-values, is simply because the value of being in that state-action pair is higher. An example would be the arrow pointing down right above the trophy -- obviously this has the highest Q-value as the return is 1. For all other states it will be $\gamma^k$ for some $k$ -- to see this remember that a Q-value is defined as
$$Q(s, a) = \mathbb{E}_\pi \left[\sum_{j=0}^\infty \gamma^j R_{t+j+1} |S_t = s, A_t = a \right]\;;$$
so for any state-action pair that is not the block above the trophy with the down arrow $\sum_{j=0}^\infty \gamma^j R_{t+j+1}$ will be a sum of $0$'s plus $\gamma^T$ where $T$ is the time that you finally reach the trophy (assuming you give a reward of 1 for reaching the trophy). | {
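The geometric $\gamma^k$ fall-off described above can be reproduced with a tiny tabular sketch (a hypothetical 5-cell corridor with the goal at the right end; the learning rate and discount mirror the question's values, everything else is invented for illustration):

```python
import random

# Tabular Q-learning with a 100% random behaviour policy on a 5-cell
# corridor. Actions are -1 (left) and +1 (right); the goal is cell 4.
N, GOAL = 5, 4
alpha, gamma = 0.1, 0.95
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}

random.seed(0)
for _ in range(3000):                       # episodes
    s = 0
    while s != GOAL:
        a = random.choice((-1, 1))          # pure exploration: 50/50
        s2 = min(max(s + a, 0), N - 1)      # bump into walls harmlessly
        r = 1.0 if s2 == GOAL else 0.0
        boot = 0.0 if s2 == GOAL else gamma * max(Q[(s2, b)] for b in (-1, 1))
        Q[(s, a)] += alpha * (r + boot - Q[(s, a)])
        s = s2

# Q[(3, 1)] converges to ~1, Q[(2, 1)] to ~gamma, Q[(1, 1)] to ~gamma**2:
# the "arrows" shrink geometrically with distance from the trophy.
```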
"domain": "ai.stackexchange",
"id": 2658,
"tags": "reinforcement-learning, q-learning, convergence, epsilon-greedy-policy, exploration-strategies"
} |
Is it possible to quantify how chaotic a system is? | Question: In relation to this other question that I asked:
Is there anything more chaotic than fluid turbulence?
I had assumed that there are methods by which the level of 'chaotic-ness' of a system could be measured, for comparison with other non-linear systems. However, several comments called that into question, proposing that it's either/or: 'either a system is chaotic, or it's not'.
So, I am wondering if that is true? Or, are there parameters that can be used to determine and compare how chaotic one system is, compared to another?
One comment to the other question mentioned the Lyapunov exponent. I admit that I'm not very experienced in non-linear dynamical systems, but I was also thinking about other possible parameters, such as properties of the chaotic attractor; number or range of different distance scales that develop; or perhaps the speed or frequency of when bifurcations occur.
So, in general, is it possible to quantify the 'chaotic-ness' of a dynamical system? If so, what parameters are available?
Answer: There are a number of ways of quantifying chaos. For instance:
Lyapunov exponents - Sandberg's answer covers the intensity of chaos in a chaotic system as measured by its Lyapunov exponents, which is certainly the main way of quantifying chaos. Summary: larger positive exponents and larger numbers of positive exponents correspond to stronger chaos.
Relative size of the chaotic regions - An additional consideration is needed for systems which are not fully chaotic: these have regular regions mixed with chaotic ones in their phase space, and another relevant measure of chaoticity becomes the relative size of the chaotic regions. Such situation is very common and a standard example are Hamiltonian systems.
Finite-time Lyapunov exponents - Still another situation is that of transient chaos (see e.g., Tamás Tél's paper, (e-print)), where the largest Lyapunov exponent might be negative, but the finite-time exponent, positive. One could say transient chaos is weaker than asymptotic chaos, though such comparisons won't always be straightforward or even meaningful.
Hierarchy of ergodicity - Also worth mentioning is the concept of the hierarchy of chaos. More than measuring the strength of chaos, it concerns itself with the nature of it. Explained in detail in the Stanford Encyclopedia of Philosophy's entry, it is briefly summarized here:
- Bernoulli systems are the most chaotic, equivalent to shift maps.
- Kolmogorov systems (often simply K-systems) have positive Lyapunov exponents and correspond to what is most often considered a chaotic system.
- (Strongly) mixing systems intuitively have the behavior implied by their name and, while they don't necessarily have exponentially divergent trajectories, there is a degree of unpredictability which can justify calling them weakly chaotic.
- Ergodic systems, on the other hand, have time correlations that don't necessarily decay at all, so are clearly not chaotic.
Interesting, if tangentially related, is a bound on chaos conjectured to apply for a broad class of quantum systems, but I'm restricting this answer to classical systems and that bound on chaos diverges in the classical limit. | {
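As a concrete illustration of the Lyapunov-exponent measure mentioned above, here is a standard textbook sketch (not from the original answer): estimating the exponent of the logistic map as the long-run average of $\ln|f'(x)|$ along an orbit.

```python
import math

# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
# estimated as the orbit average of ln|f'(x)| = ln|r*(1 - 2x)|.
def lyapunov(r, x0=0.3, warmup=1000, n=50_000):
    x = x0
    for _ in range(warmup):        # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return acc / n

lyapunov(4.0)   # ~ln 2 ~ 0.69: positive, strongly chaotic
lyapunov(3.2)   # negative: a stable period-2 orbit, not chaotic
```

A larger positive value means faster exponential divergence of nearby trajectories, which is the usual sense in which one system is "more chaotic" than another.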
"domain": "physics.stackexchange",
"id": 62348,
"tags": "chaos-theory"
} |
Exceptional electronic configuration of Chromium | Question: I understand that the reason for the exceptional electronic configuration of Cr is the increased stability of half filled set of orbitals
But here's an excerpt from my textbook
…consider the case of Cr, which has $3d^54s^1$ configuration instead of $3d^44s^2$; the energy gap between the two sets ($3d$ and $4s$) of orbitals is small enough to prevent electron entering the $3d$ orbitals.
What do they mean by prevent the electron from entering $3d$ orbital? Didn't one electron from the $4s$ orbital actually go to the $3d$ orbital?
Is it likely that it's a typo to write "prevent" instead of "permit"?
Answer: Yes I believe that's a typo, it should say "permit." The reason we see these Aufbau's principle exceptions in transition metals is because the $4s$ and $3d$ orbitals are very similar in energy.
In chromium, having a $4s^2$ $3d^4$ configuration results in electron-electron repulsion due to the two electrons in the $4s$ orbital. For this reason, chromium adopts a $4s^1$ $3d^5$ configuration, in which each electron occupies its own orbital. Recall that Hund's rule essentially states that you fill each orbital once before going back with the second electron, in regards to orbitals of the same energy (in the same subshell). In the case of chromium, we are dealing with orbitals that are almost the same in energy ($3d$ and $4s$), so you can essentially view this as a special case of Hund's rule that extends to orbitals of nearly the same energy.
For what it's worth, the other notable exception is copper, which adopts a $4s^1$ $3d^9$ configuration over $4s^2$ $3d^8$. For reasons that go beyond the scope of this answer, before you hit the transition metal elements ($Z \le 21$), $3d$ is higher in energy than $4s$, so we fill $4s$ before $3d$. But beyond this point, $3d$ becomes lower in energy than $4s$, so you lose $4s$ electrons first when we're talking about ionization. Knowing this, it is energetically more favorable to have $4s^1$ $3d^9$ because the higher energy orbital only has one unpaired electron and the lower energy orbitals have paired ones.
The important thing to note from these two examples is that there is no special added stability from merely having a half full electron shell (i.e., something is not magically more stable because of having a half filled subshell). Adopting this form of configuration has consequences that are perfectly explainable to students of intro chemistry—do not brush it off as mere "magic."
"domain": "chemistry.stackexchange",
"id": 16393,
"tags": "electronic-configuration, transition-metals"
} |
Average squared Hamiltonian of linear combination of eigenfunctions | Question: As part of a larger problem, I am trying to find the average squared Hamiltonian of a system with eigenfunctions $\psi_{1,1}$, $\psi_{1,2}$, $\psi_{2,1}$, $\psi_{2,2}$ and the following wave function:
$$ \Psi\left(\mathbf{r};t=0\right)=c\sum_{j=1}^2 \psi_{ij}\left(\mathbf{r}\right) $$
The problem defines the following operators:
\begin{align*}
\hat{H}\psi_{ij} &= iE\psi_{ij} \\
\hat{Q}\psi_{ij} &= jQ\psi_{ij}
\end{align*}
where $ \{i,j\}\in\mathbb{R} $. I have already calculated that
\begin{align*}
p\left(\mathbf{r}\right) &= c\,\left(\langle\psi_{i1}|\psi_{i1}\rangle + \langle\psi_{i1}|\psi_{i2}\rangle + \langle\psi_{i2}|\psi_{i1}\rangle + \langle\psi_{i2}|\psi_{i2}\rangle\right) \\
1 &= c\,\left(1 + 0 + 0 + 1\right) \\
c &= \frac{1}{2}
\end{align*}
and
\begin{align*}
\langle\Psi|\hat{H}|\Psi\rangle &= \langle\psi_{i1}|\hat{H}|\psi_{i1}\rangle + \langle\psi_{i1}|\hat{H}|\psi_{i2}\rangle + \langle\psi_{i2}|\hat{H}|\psi_{i1}\rangle + \langle\psi_{i2}|\hat{H}|\psi_{i2}\rangle \\
&= \frac{i}{2}\left(E_{i1}\langle\psi_{i1}|\psi_{i1}\rangle + E_{i2}\langle\psi_{i1}|\psi_{i2}\rangle + E_{i2}\langle\psi_{i2}|\psi_{i1}\rangle + E_{i2}\langle\psi_{i2}|\psi_{i2}\rangle\right) \\
&= \frac{i}{2}\left(E_{i1}\langle\psi_{i1}|\psi_{i1}\rangle + 0 + 0 + E_{i2}\langle\psi_{i2}|\psi_{i2}\rangle\right) \\
&= \frac{i}{2}\left(E_{i1}+E_{i2}\right)
\end{align*}
However, I'm not quite sure how to scale it up to $ \langle\Psi|\hat{H}^2|\Psi\rangle $. I am putting forward the assumption that $ \langle\hat{H}^2\rangle - \langle\hat{H}\rangle^2 $ should = 0 since I am working with eigenfunctions, and that therefore
\begin{align*}
\langle\Psi|\hat{H}^2|\Psi\rangle &= \,?!? \\
&= \left(\frac{i}{2}\left(E_{i1}+E_{i2}\right)\right)^2 \\
&= \frac{i^2}{4}\left(E_{i1}^2 + 2E_{i1}E_{i2} + E_{i2}^2\right)
\end{align*}
But I am unsure how to prove it, hence the ?!?.
Answer: Actually this is not quite correct. Your $p(r)$ should be
$$
p(\boldsymbol{r})=cc^*\left(
\langle \psi_{i1}\vert\psi_{i1}\rangle + \langle \psi_{i1}\vert \psi_{i2}\rangle
+\langle \psi_{i2}\vert\psi_{i1}\rangle + \langle \psi_{i2}\vert \psi_{i2}\rangle\right)
$$
from which you find that $c=\frac{e^{i\varphi}}{\sqrt{2}}$ for arbitrary phase $\varphi$. You may choose $\varphi=0$ for convenience but you don't have to.
Then,
\begin{align}
\hat H \left(c\sum_{j=1}^2\psi_{ij}(\boldsymbol{r})\right)&=\left(c\sum_{j=1}^2i E\psi_{ij} (\boldsymbol{r})\right)=i E\left(c\sum_{j=1}^2\psi_{ij} (\boldsymbol{r})\right)\, ,\\
\hat H^2 \left(c\sum_{j=1}^2\psi_{ij}(\boldsymbol{r})\right)=
\hat H\left(\hat H \left(c\sum_{j=1}^2\psi_{ij}(\boldsymbol{r})\right)\right)&=i^2 E^2\left(c\sum_{j=1}^2\psi_{ij} (\boldsymbol{r})\right)\, .
\end{align}
You can use orthogonality to finish the calculation. Note that all your states with same $i$ have the same energy and this should produce a simplified result. | {
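Carrying the orthogonality step through, with $|c|^2=\tfrac{1}{2}$ from normalization and keeping the problem's convention $\hat H\psi_{ij}=iE\psi_{ij}$:
$$\langle\Psi|\hat H^2|\Psi\rangle = |c|^2\,(iE)^2\left(\langle\psi_{i1}|\psi_{i1}\rangle + \langle\psi_{i2}|\psi_{i2}\rangle\right) = \tfrac{1}{2}\,(iE)^2\cdot 2 = (iE)^2\, .$$
Since the same argument gives $\langle\Psi|\hat H|\Psi\rangle = iE$, we get $\langle\hat H^2\rangle - \langle\hat H\rangle^2 = 0$: the variance vanishes, as expected for a superposition of degenerate eigenstates.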
"domain": "physics.stackexchange",
"id": 43747,
"tags": "quantum-mechanics, homework-and-exercises, operators, wavefunction, hamiltonian"
} |
A CNN in Python WITHOUT frameworks | Question: Here's some code that I've written for implementing a Convolutional Neural Network for recognising handwritten digits from the MNIST dataset over the last two days (after a lot of research into figuring out how to convert mathematical equations into code).
""" Convolutional Neural Network """
import numpy as np
import sklearn.datasets
import random
import math
from skimage.measure import block_reduce
from scipy.signal import convolve
import time
def reLU(z): # activation function
return z * (z > 0)
""" ------------------------------------------------------------------------------- """
class ConvPoolLayer:
def __init__(self, in_dim, filter_dim, pool_dim=None, conv_stride=1):
self.in_dim = in_dim
self.out_dim = (filter_dim[0], int(round(((0.0 + in_dim[-2] - filter_dim[-2]) / conv_stride + 1) / pool_dim[-2])), \
int(round(((0.0 + in_dim[-1] - filter_dim[-1]) / conv_stride + 1) / pool_dim[-1]))) \
if pool_dim \
else \
(num_filters, ((in_dim[-2] - filter_dim[-2]) / conv_stride + 1), \
((in_dim[-1] - filter_dim[-1]) / conv_stride + 1) )
self.filter_dim = filter_dim
self.pool_dim = pool_dim
self.W = np.random.randn(*filter_dim) * np.sqrt(2.0 / (sum(filter_dim))).astype(np.float32)
self.B = np.zeros(((in_dim[-1] - filter_dim[-1]) / conv_stride + 1, 1)).astype(np.float32)
def feedforward(self, x, W, b, step):
self.x = x.reshape(step, self.in_dim[-2], self.in_dim[-1])
activation = reLU(np.array([convolve(self.x, w, mode='valid') for w in W]) + b.reshape(1, -1, 1))
if self.pool_dim:
return block_reduce(activation, block_size=tuple([1] + list(self.pool_dim)), func=np.max)
else:
return activation
def backpropagate(self, delta, W, index):
delta = delta.reshape(len(W), 1, int((np.prod(delta.shape) // len(W)) ** 0.5), -1)
if self.pool_dim:
delta = delta.repeat(self.pool_dim[-2], axis=2).repeat(self.pool_dim[-1], axis=3) # may have to change this for maxpooling
dw = np.array([np.rot90(convolve(self.x[index].reshape(1, self.in_dim[-2], self.in_dim[-1]), np.rot90(d, 2), mode='valid'), 2) for d in delta])
db = np.sum(np.array([np.sum(d, axis=(1, )).reshape(-1, 1) for d in delta]), axis=0)
return None, dw, db
class FullyConnectedLayer:
def __init__(self, in_size, out_size):
self.in_size = in_size
self.out_size = out_size
self.W = np.random.randn(out_size, in_size) * np.sqrt(2.0 / (in_size + out_size)).astype(np.float32)
self.B = np.zeros((out_size, 1)).astype(np.float32)
def feedforward(self, x, w, b, step):
self.x = x.reshape(step, -1)
activation = reLU(np.dot(w, self.x.T) + b).T
return activation
def backpropagate(self, delta, w, index):
dw = np.multiply(delta, self.x[index])
db = delta
delta = np.dot(w.T, delta) * (self.x[index].reshape(-1, 1) > 0)
return delta, dw, db
class SoftmaxLayer:
def __init__(self, in_size, out_size):
self.in_size = in_size
self.out_size = out_size
self.W = np.random.randn(out_size, in_size) * np.sqrt(2.0 / (in_size + out_size)).astype(np.float32)
self.B = np.zeros((out_size, 1)).astype(np.float32)
def feedforward(self, x, w, b, step):
self.x = x.reshape(step, -1)
return reLU(np.dot(w, self.x.T) + b).T
def backpropagate(self, t, y, w, index):
t = np.exp(t)
t /= np.sum(t)
delta = (t - y) * (t > 0)
dw = np.multiply(delta, self.x[index])
db = delta
delta = np.dot(w.T, delta) * (self.x[index].reshape(-1, 1) > 0)
return delta, dw, db
""" ------------------------------------------------------------------------------- """
class ConvolutionalNeuralNet:
def __init__(self, layers, learning_rate=0.01, reg_lambda=0.05):
self.layers = []
self.W = []
self.B = []
for l in layers:
if l['type'].lower() == 'conv':
self.layers.append(ConvPoolLayer(**l['args']))
elif l['type'].lower() == 'fc':
self.layers.append(FullyConnectedLayer(**l['args']))
else:
self.layers.append(SoftmaxLayer(**l['args']))
self.layers[-1].layer_type = l['type']
self.W.append(self.layers[-1].W)
self.B.append(self.layers[-1].B)
self.W = np.array(self.W)
self.B = np.array(self.B)
self.num_layers = len(layers)
self.learning_rate = learning_rate
self.reg_lambda = reg_lambda
def __feedforward(self, x):
for i in range(len(self.layers)):
x = self.layers[i].feedforward(x, self.W[i], self.B[i], step=1)
return x
def __backpropagation(self, inputs, targets, is_val=False):
# forward pass
step = len(inputs)
for i in range(len(self.layers)):
inputs = self.layers[i].feedforward(inputs, self.W[i], self.B[i], step)
# backward pass
weight_gradients = np.array([np.zeros(w.shape) for w in self.W])
bias_gradients = np.array([np.zeros(b.shape) for b in self.B])
for i in range(len(targets)):
delta, dw, db = self.layers[-1].backpropagate(inputs[i].reshape(-1, 1), targets[i], self.W[-1], index=i)
weight_gradients[-1] += dw
bias_gradients[-1] += db
for j in xrange(2, self.num_layers + 1):
delta, dw, db = self.layers[-j].backpropagate(delta, self.W[-j], index=i)
weight_gradients[-j] += dw
bias_gradients[-j] += db
if is_val:
weight_gradients += self.reg_lambda * weight_gradients
self.W += -self.learning_rate * weight_gradients
self.B += -self.learning_rate * bias_gradients
def train(self, training_data, validation_data, epochs=10):
acc = 0
step, val_step = 25, 25
inputs = [data[0] for data in training_data]
targets = [data[1] for data in training_data]
val_inputs = [x[0] for x in validation_data]
val_targets = [x[1] for x in validation_data]
for i in xrange(epochs):
for j in xrange(0, len(inputs), step):
self.__backpropagation(np.array(inputs[j : j + step]), targets[j : j + step])
if validation_data:
for j in xrange(0, len(val_inputs), val_step):
self.__backpropagation(np.array(val_inputs[j : j + val_step]), val_targets[j : j + val_step], is_val=True)
print("{} epoch(s) done".format(i + 1))
# new_acc = CN.test(test_data)
# acc = new_acc
# print "Accuracy:", str(acc) + "%"
self.learning_rate -= self.learning_rate * 0.35
print("Training done.")
def test(self, test_data):
test_results = [(np.argmax(self.__feedforward(x[0])), np.argmax(x[1])) for x in test_data]
return float(sum([int(x == y) for (x, y) in test_results])) / len(test_data) * 100
def dump(self, file):
pickle.dump(self, open(file, "wb"))
if __name__ == "__main__":
global test_data
def transform_target(y):
t = np.zeros((10, 1))
t[int(y)] = 1.0
return t
total = 5000
training = int(total * 0.70)
val = int(total * 0.15)
test = int(total * 0.15)
mnist = sklearn.datasets.fetch_mldata('MNIST original', data_home='./data')
data = list(zip(mnist.data, mnist.target))
random.shuffle(data)
data = data[:total]
data = [(x[0].astype(bool).astype(int).reshape(-1,), transform_target(x[1])) for x in data]
train_data = data[:training]
val_data = data[training:training + val]
test_data = data[training + val:]
print "Data fetched"
CN = ConvolutionalNeuralNet(layers=[{ 'type': 'conv',
'args':
{ 'in_dim' : (1, 28, 28),
'filter_dim' : (1, 1, 3, 3), # number of filters, (z, x, y) dims
'pool_dim' : (1, 2, 2),
},
},
{ 'type': 'fc',
'args':
{
'in_size' : 1 * 169,
'out_size' : 50,
}
},
{ 'type': 'softmax',
'args':
{
'in_size' : 50,
'out_size' : 10,
}
}, ], learning_rate=0.01, reg_lambda=0.05)
s = time.time()
CN.train(train_data, val_data, epochs=3)
e = time.time()
print "Network trained"
print "Accuracy:", str(CN.test(test_data)) + "%"
print "Time taken: ", (e - s)
I have not used Theano or any other frameworks. My goal is:
Optimise the feedforward and back-propagation functions for all layers
Make the network robust under all training conditions (not sure if it is yet)
Replace the existing scipy.signal.convolve and skimage.measure.block_reduce with implementations that are faster, if possible.
Replace mean pooling with max pooling (I've kept mean pooling for now since it is easier to back-propagate).
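As a minimal sketch of that max-pooling swap (helper names are my own, and the shapes are reduced to a single 2D feature map; the real layer would loop over filters): the forward pass keeps a mask of where each block's maximum came from, and the backward pass routes the gradient only to those positions instead of spreading it uniformly as mean pooling does.

```python
import numpy as np

def max_pool_forward(x, size=2):
    """Max-pool a single 2D feature map in non-overlapping size x size
    blocks; also return a mask marking where each block's maximum sat."""
    h, w = x.shape
    pooled = x.reshape(h // size, size, w // size, size).max(axis=(1, 3))
    # Upsample the maxima back to the input shape; the mask is 1 exactly
    # where the input equalled its block maximum.
    upsampled = pooled.repeat(size, axis=0).repeat(size, axis=1)
    mask = (x == upsampled).astype(x.dtype)
    return pooled, mask

def max_pool_backward(delta, mask, size=2):
    """Route the incoming gradient only to the argmax positions
    (mean pooling would instead spread delta / size**2 everywhere)."""
    return mask * delta.repeat(size, axis=0).repeat(size, axis=1)
```

One caveat: if a block contains ties, the equality mask sends the gradient to every tied position; stricter argmax bookkeeping would avoid that.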
I have also assumed, for the sake of simplicity, that:
The first layer is always a convolutional layer
There is only one convolutional layer in the network
Previously, I had made it possible to include more convolutional layers, but I was not sure whether I was back-propagating properly between convolutional layers, plus it was more generic so it ran slower.
Comments and suggestions welcome.
Answer: Disclaimer: I know practically nothing about ML or neural networks.
The big problem with this program is readability. There are no docstrings or comments, so even somebody who knows about ML would have difficulty using this. For example, what arguments should I pass to the ConvPoolLayer constructor? What does reLU(z) represent? And so on.
Particularly when the code is in a specialist area, it’s essential to have comments explaining why the code was written this way – what’s it trying to do, what concepts does the code map to.
This will make it much easier for other people to follow – including you, in six months' time!
And some more specific observations:
Run a PEP 8 linter. There’s a bunch of little PEP 8 violations – line length, whitespace, and so on – a linting tool like flake8 can help you spot those. Makes your code look more like other Python, and so easier for others to read.
Use new-style classes. If you’re using Python 2, your classes should all subclass from object. This comes with a bunch of minor benefits and is generally good practice. See the Python Wiki for more background.
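A hypothetical before/after (class names invented for illustration): the only syntactic change is the explicit object base, but under Python 2 it is what makes descriptors such as @property behave correctly.

```python
class OldLayer:          # classic (old-style) class under Python 2
    pass

class NewLayer(object):  # new-style class: properties, super(), __slots__ all work
    @property
    def out_size(self):
        return 10
```

(Under Python 3 every class is new-style, so the distinction disappears.)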
Don’t skimp on variable names. Lots of your code uses one- or two-letter variable names. That hurts readability, and can make it harder to search for a variable’s use in the code. Longer, more expressive names are almost always better – use them!
Use collections.namedtuple for multi-part arguments. I’m guessing based on this ML tutorial that the arguments for ConvPoolLayer are multi-part. For example, the filter_dim argument should have four components:
number of filters
number of input feature maps
filter height
filter width
Right now, you’re accessing those components by numerical index, which isn’t great for readability. If you created a namedtuple to represent a filter shape, you’d be able to look up those properties by name. For example:
from collections import namedtuple
FilterDim = namedtuple('FilterDim',
['num_filters', 'num_maps', 'height', 'width'])
foo = FilterDim(5, 5, 10, 3)
foo.num_maps # 5
This would significantly improve the readability of your code.
The __init__ method of ConvPoolLayer.
It uses a num_filters variable that doesn't seem to be defined anywhere.
I’m not a big fan of the foo = bar if condition else baz ternary operator in Python, and in this case it should definitely be split over multiple lines. I’d also suggest breaking the components of the tuple over multiple lines, to make it easier to see where one ends and the next begins. For example:
if pool_dim:
self.out_dim = (
filter_dim[0],
int(round(((0.0 + in_dim[-2] - filter_dim[-2]) / conv_stride + 1) / pool_dim[-2])),
int(round(((0.0 + in_dim[-1] - filter_dim[-1]) / conv_stride + 1) / pool_dim[-1]))
)
else:
self.out_dim = (
num_filters,
(in_dim[-2] - filter_dim[-2]) / conv_stride + 1,
(in_dim[-1] - filter_dim[-1]) / conv_stride + 1
)
This makes it easier to read, and easier to see the similarities between different arguments. And as above, you should probably consider defining a named tuple for this stuff.
Make use of enumerate. When you need to loop over both the indices and the elements of a list, enumerate() is cleaner than indexing with range(len(...)). For example, this snippet:
for i in range(len(targets)):
delta, dw, db = self.layers[-1].backpropagate(
inputs[i].reshape(-1, 1),
targets[i],
self.W[-1],
index=i)
can become slightly neater:
for idx, target in enumerate(targets):
delta, dw, db = self.layers[-1].backpropagate(
inputs[idx].reshape(-1, 1),
target,
self.W[-1],
index=idx)
As well as +=, you have -=. Replace:
self.W += -self.learning_rate * weight_gradients
with
self.W -= self.learning_rate * weight_gradients | {
"domain": "codereview.stackexchange",
"id": 20762,
"tags": "python, performance, python-2.x, machine-learning, neural-network"
} |
Force on the bottom of a tank full of liquid - Hydrostatic Pressure or Gravity | Question:
Imagine a tank filled with water that has some height $h$ and at the bottom area $A$ but as it goes up, for example at height $h/2$, it's area is now $A/2 $. What's the correct way to calculate the force at the bottom of the tank? (Let's ignore atmospheric pressure for now)
If I use $W=mg$, we get $F=W=ρVg=ρ(\frac{Ah}{2}+\frac{Ah}{4})g=\frac{3}{4}ρghA$
If I calculate the hydrostatic pressure at the bottom, it's $p=ρgh$, and then $F=pA=ρghA.$
Which one is the correct one and why?
Answer: As @Berend mentioned, you are calculating two different things. The first calculation gives you the weight of all the water in the tank, which is what a scale would read (the internal forces explained in the second part cancel out).
In the second case, though, you are calculating the hydrostatic pressure of the water. These answers differ because the water in the tank pushes up on the sloped part of the tank with a force $f$, as shown in the figure, and by Newton's third law the tank pushes down on the water with the same force $f$. The hydrostatic force on the bottom therefore exceeds the water's weight, and the difference is exactly the weight of water in the striped region. | {
"domain": "physics.stackexchange",
"id": 77709,
"tags": "forces, pressure, fluid-statics"
} |
QFT perturbation theory | Question: I would like to clarify the following statement:
Perturbation theory (PT) in QFT is derived with several assumptions such as: adiabatic interaction, spectrum is bounded downward...
This statement comes from my teacher, so it is not a complete statement. But I do not understand where I can find these conceptual details. In Peskin–Schroeder a detailed discussion is omitted and there are only implicit mentions of the PT assumptions. Can anyone help me find these conceptual details? I mean a list of assumptions for PT in QFT.
As I understand it, some of these assumptions are
spectrum is bounded downwards (it can be captured during the derivation of PT in P&S)
adiabatic switching on/off of the interaction (it guarantees that all the evolution of the system is nothing more than a phase factor $e^{iL}$)
Then, I can also derive PT for QFT from the path integral, expanding the exponential into a series in the coupling constants. Using this approach, I do not see where the assumptions appear. Indeed, I can obtain all the correlation functions (connected, disconnected, amputated connected) from the specific generating functional, and there are no assumptions in this derivation. How can I see the assumptions during the derivation of PT with the path integral?
Answer: There is no (general) rigorous non-perturbative definition of a QFT, so there is no rigorous proof of perturbation theory either. Therefore, it makes no sense to claim that the perturbative expansion rests on some analytic assumptions. It rests on no assumptions, because it cannot be derived from anything. There is nothing "more fundamental" that, when expanded in power series, yields a perturbative QFT. That being said, you can proceed as follows:
You define a QFT through its perturbative series (say, in the causal approach if you want to be mathematically rigorous). Here, and when regarded as a formal power series, the perturbative expansion is well-defined regardless of any analytic properties of the Hamiltonian, so there are essentially no conditions on the operators.
You analyse the problem in standard QM (one-dimensional QFT, if you will: the only spacetime coordinate is time), and assume that the same formalism should hold in QFT, provided we eventually find a good formulation. A canonical reference for rigorous perturbation theory in QM is Kato's Perturbation Theory for Linear Operators. It is a tough route, so have fun if you want to go there; there is no guarantee you will find what you're looking for, but it is hard to imagine you will find anything more explicit than this.
Some very specific (lower-dimensional) QFTs have been constructed rigorously, from which the perturbative expansion can be derived. The canonical example is Glimm & Jaffe's Quantum Physics: A Functional Integral Point of View. Here the authors deal with two-dimensional (Euclidean) $\phi^4$ theory, which has the key property that normal-ordering is all you need to render it finite. Therefore, you cannot really hope to draw general conclusions from this example but, sadly, we don't have many more rigorous (interacting) QFTs that can be analysed explicitly.
Finally, let me mention that a heuristic reason the conditions in the OP are usually assumed is the so-called Gell-Mann and Low theorem, which is sometimes used to justify perturbation theory. This theorem does require the spectrum to be bounded from below, and that interactions are switched on and off adiabatically. | {
"domain": "physics.stackexchange",
"id": 58355,
"tags": "quantum-field-theory, perturbation-theory, unitarity"
} |
RTAB-Map not using correct parameters | Question:
I've been trying to get RTAB-Map going in simulation using a depth camera plugin. I've been able to get it all working, however it seems that RTAB-Map uses its default parameters, rather than the ones loaded in my launch file. For example, I want to simulate the ZED camera which has depth range 1m -> 20m. So, I'd set Grid/DepthMin and Grid/DepthMax to 1 and 20 respectively:
<node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen" args="--delete_db_on_start">
...
<param name="Grid/DepthMin" type="string" value="1.0"/>
<param name="Grid/DepthMax" type="string" value="20.0"/>
...
</node>
However, when viewing the parameter server after launching the node, there are two instances of these parameters: one under the rtabmap namespace (with the correct value), and one in the global namespace (with default values of 0 and 4). It seems to me that the rtabmap node is loading its own parameters into the global namespace and using them, rather than using the ones set in the launch file (rviz shows a maximum range of approx 4m for the MapCloud).
I've tried loading the parameters into the global namespace, and then launching rtabmap, but they just get overwritten with the defaults again. Just to be clear, I'm using my own launch file and launching the node itself, not launching through rtabmap_ros/rtabmap.launch. All parameters seem to be default, not just these two used as examples.
Is there anything basic that I could be missing which would cause rtabmap to use default values?
Cheers.
Originally posted by ufr3c_tjc on ROS Answers with karma: 885 on 2017-11-01
Post score: 0
Answer:
Rtabmap reads private parameters and writes back public parameters for convenience, so that other nodes don't need to know the rtabmap node name to modify parameters (at least, the nodes should be in the same namespace). Rtabmap should copy your private parameters into the public ones. I tried your example:
$ roslaunch test.launch
SUMMARY
========
PARAMETERS
* /rosdistro: indigo
* /rosversion: 1.11.21
* /rtabmap/Grid/DepthMax: 20.0
* /rtabmap/Grid/DepthMin: 1.0
NODES
/
rtabmap (rtabmap_ros/rtabmap)
...
[ INFO] [1509548641.907457423]: Setting RTAB-Map parameter "Grid/DepthMax"="20.0"
[ INFO] [1509548641.908350452]: Setting RTAB-Map parameter "Grid/DepthMin"="1.0"
We can see that rtabmap set the correct parameters. Now for the private and public parameters:
$ rosparam get /Grid/DepthMax
'20.0'
$ rosparam get /rtabmap/Grid/DepthMax
'20.0'
Both are 20. Note that if we want to change this value online, we should change the public one, then call the update_parameters service.
The rtabmap_ros/MapCloud plugin has its own parameters, totally independent of rtabmap. Also, Grid/DepthMax will affect cloud_map (PointCloud2) topic, not the cloud shown by rtabmap_ros/MapCloud plugin.
cheers
Mathieu
Originally posted by matlabbe with karma: 6409 on 2017-11-01
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by ufr3c_tjc on 2017-11-01:
I've come back to it today and now it's accepting my parameters correctly. Thanks for the explanation on why there's two sets. Also good work with rtab, its really good stuff. | {
"domain": "robotics.stackexchange",
"id": 29250,
"tags": "slam, navigation, parameters, rtabmap, rtabmap-ros"
} |