| anchor | positive | source |
|---|---|---|
To effectively suppress normal audible sound, how wide and how absolute would a vacuum space have to be? | Question: I don't know if this question is too specific or simple, but: to effectively suppress (say 99%) of normal audible sound (say 20-20kHz @ 100dB), how wide (mm?) and how absolute (torr?) would a vacuum space have to be?
Answer: This is really a nice question. After reading it, I searched the web with the idea that sound is a pressure wave and that, to be heard, it must transfer energy to your ears. I found this article, which is really nice. It says (as was my intuition) that the energy density of a sound wave is directly proportional to the density of the medium. Hence if you decrease the density of the medium, the energy stored in the medium decreases and less energy is transferred from that medium. So if you want to decrease the sound intensity by a factor of ~100, decrease the pressure to ~10 mbar.
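A rough back-of-the-envelope version of that scaling (a sketch assuming an ideal gas at constant temperature, so density — and hence acoustic energy density — scales linearly with pressure):

```python
# Intensity ~ energy density ~ medium density ~ pressure (ideal gas, fixed T).
# A 99% suppression means a factor-of-100 drop in intensity, hence in pressure.
ATMOSPHERIC_PRESSURE_MBAR = 1013.25
ATTENUATION_FACTOR = 100  # 99% suppression

required_pressure_mbar = ATMOSPHERIC_PRESSURE_MBAR / ATTENUATION_FACTOR
print(f"Pressure for ~99% suppression: {required_pressure_mbar:.1f} mbar")  # → 10.1 mbar
```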
Now if you have a vacuum curtain (i.e. the pressure in a portion of the medium is suddenly decreased), then the sound will not propagate. The question is how thin we can make this curtain. I believe that the sound wave sets up an oscillation in the gas molecules, and the extent of this oscillation is of the order of the wavelength. Hence if you want to effectively suppress the sound wave, the thickness of the vacuum curtain must be larger than the sound's wavelength (or else the molecules will overshoot the curtain and transfer energy to the other side). Note that this idealization of a free vacuum curtain is adopted to avoid other effects, e.g. sound propagation through metal enclosures (I can hear the sound of motors from inside vacuum chambers) or scattering of sound waves from such enclosures. | {
"domain": "physics.stackexchange",
"id": 32809,
"tags": "acoustics"
} |
Reaction balance | Question: I got this redox reaction on my exam that I didn't know how to balance.
$$Sodium~iodate+sulphur~dioxide+water\ce{->}sodium~sulphate+sulphuric~acid+iodine$$, thus it would be:
$$\ce{NaIO3 +SO2 +H2O->Na2SO4 +H2SO4 +I2}$$
How does one go about balancing this using the oxidation number method?
I understand that S goes from +4 to +6 losing 2 electrons and I from +5 to 0 gaining 5 electrons.
Do we need to consider that there are 2 S atoms on the product side (thus each atom loses 1 electron)? So I guess we only need to multiply the sulphur on the reactants' side by 5/2, while in other cases we multiply the sulphur on both the reactants' and products' sides.
Answer: @user14537 sent a comment asking if we can solve the exercise using only oxidation number method:
$$\ce{NaIO4 + SO2 +H2O ->Na2SO4 +H2SO4 +I2}$$
The change in oxidation number between periodate ion and iodine molecule is $7\times 2=14$ (as we have two atoms in iodine molecule).
So, the first step is to multiply $\ce{NaIO4 }$ by $2$.
The change in oxidation number between $\ce{SO2}$ and sulfate ion is $2$.
As the changes must be equal, we have to multiply $\ce{SO2}$ by $7$ and $\ce{H2SO4}$ by $6$ (as you already have one sulfate ion in $\ce{Na2SO4}$).
Now, as you have $12$ hydrogen atoms on the right side, you add $6$ molecules of water to the left side:
$$\ce{2NaIO4 + 7SO2 +6H2O ->Na2SO4 +6H2SO4 +I2}$$
The equation is balanced.
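A quick way to double-check the result is to count atoms on each side; this sketch simply hard-codes the element counts read off the balanced equation above:

```python
# 2 NaIO4 + 7 SO2 + 6 H2O  ->  Na2SO4 + 6 H2SO4 + I2
reactants = {"Na": 2, "I": 2, "O": 2 * 4 + 7 * 2 + 6, "S": 7, "H": 6 * 2}
products = {"Na": 2, "I": 2, "O": 4 + 6 * 4, "S": 1 + 6, "H": 6 * 2}

assert reactants == products  # every element balances
assert 2 * 7 == 7 * 2         # electrons: 2 I gain 7 each, 7 S lose 2 each
print("balanced:", reactants)
```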
I hope it's clear now. | {
"domain": "chemistry.stackexchange",
"id": 2852,
"tags": "redox"
} |
C# console app that computes the Jaccard Index | Question: So I wrote a basic C# console application that computes the Jaccard Index. This has its applications in cybersecurity and machine learning. Given two sets X and Y, the basic formula is:
$$ J(X, Y) = \frac{|X \cap Y|}{|X \cup Y|} $$
where the bars in this context refer to the number of items in a set (or of its intersection or union), not the absolute value. The jaccard(HashSet<int>, HashSet<int>) function takes in two HashSet<int> instances and implements this formula, and Main(string[]) gives jaccard an example input. The following code is my implementation:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
public class Jaccard {
public static void Main(string[] args) {
HashSet<int> testSet1 = new HashSet<int>() {1, 2, 3};
HashSet<int> testSet2 = new HashSet<int>() {1, 2, 3};
Console.WriteLine("Test Jaccard is: {0}", jaccard(testSet1, testSet2));
}
public static double jaccard(HashSet<int> set1, HashSet<int> set2) {
set1.IntersectWith(set2);
set2.UnionWith(set2);
// length of set intersection ÷ length of set union
return set1.Count / set2.Count;
}
}
Questions:
Is there a way to implement HashSet<???> to take in an object of any type? Or do I have to just overload jaccard with different types?
From an aesthetic and style perspective, how is my code?
From a security perspective, how is my code? Do you see any significant code injection or memory corruption vulnerabilities (I know that C# protects against memory corruption, but you never know ;-)?
If I should refactor my code, what do you recommend?
You may view my GitHub repo where the source code is stored here: https://github.com/Alekseyyy/InfoSec/blob/master/reversing/jaccard.cs
Answer: Bug
Your logic is flawed when computing the union of the sets: it doesn't include data from set1 in the union of sets. This leads to incorrect results in many cases.
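To see the effect concretely, here is a sketch mirroring the original method's set operations in Python (a hypothetical port, not the author's code; note Python's `/` is float division, so C#'s separate integer-division issue does not reproduce here):

```python
def jaccard_buggy(a, b):
    s1, s2 = set(a), set(b)
    s1 &= s2   # mirrors set1.IntersectWith(set2)
    s2 |= s2   # mirrors set2.UnionWith(set2) -- unions set2 with itself
    return len(s1) / len(s2)

def jaccard_correct(a, b):
    return len(a & b) / len(a | b)

a, b = {1, 2, 3}, {3, 4}
print(jaccard_buggy(a, b))    # 0.5 -- wrong, the denominator never saw set1
print(jaccard_correct(a, b))  # 0.25
```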
Style
C# conventions recommend using PascalCase for method names. This makes the code easier to read for anyone who has some C# experience.
Method names should have a verb describing the action they perform. Again, this improves readability.
Remove unused using statements. These just add useless noise to the code.
Side effects
Glossing over the bugged logic, your method mutates its inputs, which means it isn't possible to do any further work with your sets after computing their Jaccard index, which will most likely cause issues.
Prefer using LINQ methods Union and Intersect over HashSet's UnionWith and IntersectWith, as the former return a new IEnumerable instead of mutating the calling instance.
Make your method static
Your method doesn't require any class members, and should be made static in order to use it without instantiating a class.
Make it generic
There is no reason to limit your method to work with HashSet<int>, as the logic doesn't rely on the underlying data type.
Fewer operations
Set operations are the most expensive operations in your code. You can skip computing the set union and work only with the counts of the inputs and their intersection, since $|A \cup B| = |A| + |B| - |A \cap B|$.
Document your code
This makes the code easier to reuse by other people, or yourself in the future.
Include tests
If your code was properly tested, you would have caught your bug.
Sample code:
JaccardIndex.cs
using System.Collections.Generic;
using System.Linq;
namespace JaccardIndex
{
public static class JaccardIndex
{
/// <summary>
/// Compute the Jaccard index between two sets.
/// The Jaccard index gauges the similarity between two sets, and ranges from 0.0 (disjoint sets) to 1.0 (identical sets).
/// </summary>
/// <typeparam name="T">Underlying data types of input sets</typeparam>
/// <param name="set1">1st input set</param>
/// <param name="set2">2nd input set</param>
/// <returns>Jaccard index</returns>
public static double ComputeJaccardIndex<T>(HashSet<T> set1, HashSet<T> set2)
{
var intersectionCount = (double)set1.Intersect(set2).Count();
return intersectionCount / (set1.Count() + set2.Count() - intersectionCount);
}
}
}
JaccardIndexTests.cs
using System.Collections.Generic;
using Xunit;
using static JaccardIndex.JaccardIndex;
namespace JaccardIndexTests
{
public class JaccardIndexTests
{
/// <summary>
/// Verify that some known values are computed correctly
/// </summary>
[Theory]
[InlineData(new int[] { 1, 2, 3 }, new int[] { 1 }, 1.0 / 3.0)]
[InlineData(new int[] { 1 }, new int[] { 1 }, 1.0 )]
[InlineData(new int[] { 1 }, new int[] { 2 }, 0.0)]
[InlineData(new int[] { 1, 2, 3 }, new int[] { 1, 2, 4 }, 2.0 / 4.0)]
[InlineData(new int[] { 1, 2, 3 }, new int[] { 4, 5, 6 }, 0.0)]
public void Samples(int[] data1, int[] data2, double expected)
{
var set1 = new HashSet<int>(data1);
var set2 = new HashSet<int>(data2);
Assert.Equal(expected, ComputeJaccardIndex(set1, set2));
}
/// <summary>
/// Jaccard index should be symmetric in its arguments
/// </summary>
[Theory]
[InlineData(new int[] { 1, 2, 3 }, new int[] { 1 })]
[InlineData(new int[] { 1 }, new int[] { 1 })]
[InlineData(new int[] { 1 }, new int[] { 2 })]
[InlineData(new int[] { 1, 2, 3 }, new int[] { 1, 2, 4 })]
[InlineData(new int[] { 1, 2, 3 }, new int[] { 4, 5, 6 })]
public void Symmetry(int[] data1, int[] data2)
{
var set1 = new HashSet<int>(data1);
var set2 = new HashSet<int>(data2);
Assert.Equal(ComputeJaccardIndex(set1, set2), ComputeJaccardIndex(set2, set1));
}
/// <summary>
/// Jaccard index of a set with itself should be 1
/// </summary>
[Theory]
[InlineData(new int[] { 1 })]
[InlineData(new int[] { 1, 2, 3 })]
[InlineData(new int[] { 1, 2, 3, 4, 5, 6 })]
public void WithSelf(int[] data)
{
var set = new HashSet<int>(data);
Assert.Equal(1, ComputeJaccardIndex(set, set));
}
/// <summary>
/// Jaccard index of a set with empty set should be 0
/// </summary>
[Theory]
[InlineData(new int[] { 1 })]
[InlineData(new int[] { 1, 2, 3 })]
[InlineData(new int[] { 1, 2, 3, 4, 5, 6 })]
public void WithEmpty(int[] data)
{
var set = new HashSet<int>(data);
Assert.Equal(0, ComputeJaccardIndex(set, new HashSet<int>()));
Assert.Equal(0, ComputeJaccardIndex(new HashSet<int>(), set));
}
/// <summary>
/// Jaccard index of two empty sets is undefined, should return NaN
/// </summary>
[Fact]
public void BothEmpty()
{
Assert.True(double.IsNaN(ComputeJaccardIndex(new HashSet<int>(), new HashSet<int>())));
}
/// <summary>
/// Jaccard index computation should not mutate the input data
/// </summary>
[Fact]
public void NoDataMutation()
{
var set1 = new HashSet<int>() { 1, 2, 3 };
var set2 = new HashSet<int>() { 1, 2, 4, 5 };
var set1Copy = new HashSet<int>(set1);
var set2Copy = new HashSet<int>(set2);
ComputeJaccardIndex(set1, set2);
Assert.Equal(set1Copy, set1);
Assert.Equal(set2Copy, set2);
}
}
}
Note that test coverage could probably be improved, by testing operations with other data types or large sets, but this is a good start IMO. | {
"domain": "codereview.stackexchange",
"id": 44959,
"tags": "c#, .net, hash-map, mathematics"
} |
Chemical Equilibrium - Le Chatelier's principle | Question: If the concentration of reactants is increased such that one is a limiting reagent, wouldn't the concentration of products also increase (because the excess of a reactant will get carried over to the other (product) side of the equation, considering that the limiting reagent is in an aqueous medium)?
e.g.
(Consider 1 mole of each reactant)
$$\ce{H2O + NH3 <=> NH4+ + OH-}$$
Now, let us take another scenario in which there are two moles of NH3 and 1 mole of H2O. Then the concentration of one reactant has increased (from the previous scenario). Wouldn't the concentration of the products also increase, because one mole of NH3 does not react and gets carried to the products' side of the equation?
I am confused about how Le Chatelier's principle applies in the 2nd scenario. Why does the equilibrium shift to the right if the concentrations of both the reactants and the products increase (if my assumption is right)?
If my assumption (that the product concentration increases to balance the excess reactant) is wrong, my concern is that such an equation is not balanced. I am unable to reconcile how Le Chatelier's principle is applied in the 2nd scenario.
(Based on the answers below, I would like to restate my question)
I understand that
$$\ce{2H2O <=> H3O+ + OH-}$$
Now if I add HCl, I have
$$\ce{H2O + HCl <=> H3O+ + Cl-}$$
The way I interpret this is: the concentration of reactants is 1 + 1 = 2 moles and the concentration of products is also 1 + 1 = 2 moles. Since the amounts on both sides of the equation are the same, why do we say the equilibrium shifts left? What am I missing?
Answer: For the case you're describing, having left-over reactant species will likely not cause the concentration of products to increase.
When you define a reaction equation like $\ce{aA + bB <=> cC + dD}$, you are explicitly defining A and B as 'reactants' and C and D as 'products'. In the scenario you've described, having excess reactants (because the concentration of either A or B is limiting) doesn't mean the concentration of products will increase. This is because we count the concentration of products as $[Products](t) = [C](t) + [D](t)$ and not $[Products](t) = [C](t) + [D](t) + [A](t) + [B](t)$.
--
A nice definition of Le Chatelier's principle is the following: "A change in one of the variables that describe a system at equilibrium produces a shift in the position of the equilibrium that counteracts the effect of this change."
If one is to get a little more technical with this, equilibrium is a function of many factors, primarily Temperature (T), Pressure (P), and mixture species (n). This is typically described by thermodynamics by the state property, Gibbs Free Energy, or $\Delta G$.
For the purposes of an example to quantify Le Chatelier's principle, let's look at the differential form of Gibbs Free Energy: $dG = -SdT +VdP + \sum^{n}_{i=1}\mu_idn_i$. If we consider a case where temperature isn't changing (isothermal), and where pressure isn't changing (isobaric), then the Gibbs Free Energy equation is affected by changes in system composition: $dG = \sum^{n}_{i=1}\mu_idn_i$. For an equilibrium case, which means that $\Delta G = 0$, it can be shown that if you expand this expression out for a reaction mechanism (like the one mentioned above), you end up with an expression like: $\frac{\Delta G^o_{T_o}}{RT_o} = -ln(K_{eq})$. This particular equation is defined for standard conditions (0 degC, 1 atm).
--
We're almost to an answer, let's look a little closer at $K_{eq}$. Let's use a gas phase system $\ce{aA + bB <=> cC + dD}$ as an example. If you look up the derivation of the $\frac{\Delta G^o_{T_o}}{RT_o} = -ln(K_{eq})$ in any standard thermodynamics textbook, you will see that $K_{eq}$ is actually equal to the ratio of the mol fractions of each species multiplied by the system pressure: $K_{eq} = \frac{y_C^c y_D^d}{y_A^a y_B^b} P^{c + d - a - b}$. If one assumes that the pressure is fixed and that $a = b = c = d = 1$, then this expression would reduce down to: $K_{eq} = \frac{y_C y_D}{y_A y_B}$.
You can further add detail to this expression by defining the mol fraction in terms of the extent of reaction $\xi$ for a reaction system.
Where, $n_i = n_{io} + v_in_{io}\xi$--which essentially says that the mols of a given species is equal to initial mols +/- the extent to which that chemical is generated or reacted away (based on the specified chemical of reference $n_{io}$). If you plug this into the expression for $K_{eq}$ you can see that now the equilibrium conversion of a reaction system is dependent on the amount of mols of each species. For the system described above, the total mols $N_{total}$ will cancel out of the expression leaving:
$K_{eq} = \frac{(n_{Co} + v_Cn_{io}\xi)(n_{Do} + v_Dn_{io}\xi)}{(n_{Ao} + v_An_{io}\xi)(n_{Bo} + v_Bn_{io}\xi)} = \exp\left(-\frac{\Delta G^o_{T_o}}{RT_o}\right)$
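As a toy numerical illustration of this composition dependence (made-up numbers: all stoichiometric coefficients 1, so the total mols cancel, and an arbitrary $K_{eq} = 4$), one can solve for the extent of reaction and watch it shift when extra reactant is added:

```python
def solve_extent(K, nA0, nB0):
    """Bisection for xi in K = xi^2 / ((nA0 - xi)(nB0 - xi)), for A + B <=> C + D."""
    lo, hi = 0.0, min(nA0, nB0)  # xi is capped by the limiting reactant
    for _ in range(200):
        mid = (lo + hi) / 2
        # the residual is monotone: xi^2 grows while K*(nA0-xi)(nB0-xi) shrinks
        if mid * mid < K * (nA0 - mid) * (nB0 - mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

xi_base = solve_extent(4.0, 1.0, 1.0)    # 2/3
xi_more_A = solve_extent(4.0, 2.0, 1.0)  # larger: excess A shifts equilibrium right
print(xi_base, xi_more_A)
```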
For a given temperature and pressure, the Gibbs Free Energy ($\Delta G$) will be a fixed value. Therefore the conditions in the equation (composition of the mixture of reaction system) must balance in such a way that the equilibrium conversion $\xi$ will satisfy the condition. You can see here that as one either varies products (C,D) or reactants (A,B) that the solution for $\xi$ will change. You can generally show that an excess in either products or reactants will shift the system to maximum equilibrium conversion $\xi$. And as such, if one suddenly added or removed products or reactants you would cause a shift in the equilibrium conversion $\xi$. | {
"domain": "chemistry.stackexchange",
"id": 13816,
"tags": "equilibrium"
} |
Integrating glc within package for capturing video | Question:
Hi all,
I'd like to use glc ( http://www.ros.org/wiki/RecordingOpenGLAppsWithGLC ) to capture video from an OpenGL window from within a ROS package. There are instructions for running glc separately from the terminal, but I'd like to integrate it into my code. Has anyone done this? Are there any examples out there? Just wondering whether it's feasible, or if I should go the glReadPixels route.
Thanks
~Robin
Originally posted by Robin on ROS Answers with karma: 87 on 2011-04-04
Post score: 1
Answer:
I wrote a function that calls glReadPixels and outputs an IplImage that is published on a topic; this is part of my OpenGL program. It is actually very easy to implement in your own program.
As GLC is a wrapper, it should be possible to adapt GLC and wrap it in a ROS node. I assume you want the wrapped program to be passed in as a parameter. Personally I went for the glReadPixels path as I didn't see a use case for this approach.
Originally posted by KoenBuys with karma: 2314 on 2011-04-06
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 5273,
"tags": "ros, video, opengl"
} |
Reducing sugar among the given compounds | Question: The question is to determine which of the following sugars is not reducing?
I have the idea that disaccharides in which the monosaccharide units are bonded via the carbon bearing the aldehydic group are non-reducing. However, the present question deals with sugars that are single monosaccharide units. I have no idea how to deal with such sugars. Any insights? Thanks.
Answer: The English wikipedia page on reducing sugars starts with
A reducing sugar is any sugar that is capable of acting as a reducing agent because it has a free aldehyde group or a free ketone group. All monosaccharides are reducing sugars [...]
Unfortunately, the monosaccharides tend to hide the aldehyde and keto groups by undergoing intramolecular reactions between a carbonyl group and a hydroxy group. This reaction leads to a cyclic hemiacetal.
In order to figure out which of the structures in your question is a monosaccharide in disguise, we need to spot the hemiacetal moiety. In other words:
Which of the structures can open again to show a carbonyl group?
We can definitely rule out (C)! This is a hydroxymethyl-substituted cyclohexane with some hydroxy groups and a methoxy group.
What about (D)? Close, but no cigar. This is an acetal, which won't open that easily.
I'll leave the last two examples to you ;-) | {
"domain": "chemistry.stackexchange",
"id": 7862,
"tags": "carbohydrates"
} |
Is log(n) in complexity class P? | Question: $\log(n)$ is not polynomial; is a problem solvable in $\mathcal{O}(\log n)$ time in P?
$n\times \log(n)$ is also not polynomial; is a problem solvable in $\mathcal{O}(n\times \log n)$ time in P?
If not, what complexity classes contain those problems?
The definitions I've found all refer to "polynomial time", not "at most polynomial time". This may be at odds with $\mathcal{O}$ being defined in terms of bounds, but I haven't found a source which clarifies the discrepancy.
Answer: Wikipedia says:
An algorithm is said to be of polynomial time if its running time is upper bounded by a polynomial expression in the size of the input for the algorithm.
$\mathcal{O}(\log n)$ is upper bounded by $\mathcal{O}(n)$, and $\mathcal{O}(n \log n )$ is upper bounded by $\mathcal{O}(n^2)$, therefore they are both in $P$. | {
"domain": "cs.stackexchange",
"id": 4169,
"tags": "reference-request, asymptotics, landau-notation"
} |
Experimental tests of Schrodinger evolution, position distribution, in square well and other simple systems? | Question: Have the energy eigenfunctions in position space ever been experimentally tested for the simplest system undergraduates encounter when learning quantum mechanics, the square well? If not, what is the best example of the experimental verification of position space energy eigenstates or Schrodinger evolution, for simple systems? Almost always the 2-slit experiment is mentioned, or tests of hydrogen such as this one (for which the precision of theory-experiment agreement is quite low), but are there any other tests that test the probability distributions that undergraduates typically derive when learning quantum mechanics?
Answer: You may wish to consider the experiment by Crommie, Lutz and Eigler (Nature 363, 524 (1993)), who look at standing waves between step edges in a surface two-dimensional electron gas using scanning tunneling microscopy. Other experiments by the same IBM group have shown the standing waves in quantum corrals. Generally, STM images of patterned surface 2D electron gases taken at cryogenic temperatures have shown beautiful images of the 'wave functions' (better, the probability distributions, or even better, the local density of states) of confined systems within the last 30 years.
Another technique uses electron tunneling in semiconductor heterostructures to image the wave functions of quantum dots, as shown in this paper, for example (there are more papers in this direction by the same and other groups). These states are the bound states of so-called 'artificial atoms' (another name for few-electron quantum dots).
I found additional experiments such as this one using photoelectron spectroscopy to image the wave functions of adsorbed molecules.
Although these experiments may not be exactly what you are looking for, because they are rather complicated to explain in detail to undergraduates, they may be used as a 'teaser' to make them curious about learning advanced methods to eventually understand them. | {
"domain": "physics.stackexchange",
"id": 56745,
"tags": "quantum-mechanics, experimental-physics, schroedinger-equation, education"
} |
Where am I confused about force addition? | Question: As far as my knowledge is concerned, a vector quantity should possess magnitude and direction & more over it should also obey the laws of vector addition.
As we all know, the vector sum of 3 newtons in the x direction and 4 newtons in the y direction acting at a point produces a resultant of 5 newtons. Where did the remaining 2 newtons go?
I mean we have applied a total of 7 newtons of force on a point sized particle but the output is only 5 newtons, so it appears as if a 2 newton force is disappearing here. In which other form does it reappear rather than the resultant? Or is it something like the remaining 2 newtons of force just vanishes and it doesn't appears in any other form?
Please correct me if I am going wrong on this issue.
Answer: Fear not! You are not alone in your confusion. While it is common these days to teach that forces are vectors and follow vector addition (as the other answers beautifully present it), historically, you have hit upon a quandary that lay at the center of a dispute throughout much of the 19th century, involving such characters as Newton, Lagrange, Heaviside, Poisson, Maxwell, Bernoulli, De Morgan, Young, Earnshaw, to name a few you might recognize.
For a full account of all of the back and forths that transpired and a full analysis of the weights of the arguments presented, please see A Tale of Two Vectors by Marc Lange (2009) [doi][pdf], which served as my introduction to the controversy, and reference for preparing what follows.
Just as you hint in your question, that forces ought to add as vectors is entirely intuitive, so it is not well known who first came up with the idea. Throughout time, everyone agrees on what the answer should be when you add two forces (as you can readily test in everyday life), but the dispute lay in exactly why it should take the form that it does. There are hints that it appeared in a lost work of Aristotle (384 - 322 B.C.E), and it definitely appeared in Heron's Mechanics (first century A.D.) [cite]
Newton and dynamics
But it was in Newton's Principia (1687) [wikisource] that we see the first proof, right at the top in Corollary I:
Corollary I: A body by two forces conjoined will describe the diagonal of a parallelogram, in the same time that it would describe the sides, by those forces apart.
If a body in a given time, by the force $M$ impressed apart in the place $A$ should with a uniform motion be carried from $A$ to $B$; and by the force $N$ impressed apart in the same place, should be carried from $A$ to $C$; complete the parallelogram $ABCD$, and by both forces acting together, it will in the same time be carried in the diagonal from $A$ to $D$. For since the force $N$ acts in the direction of the line $AC$, parallel to $BD$, this force (by the second law) will not at all alter the velocity generated by the other force $M$, by which the body is carried towards the line $BD$. The body therefore will arrive at the line $BD$ in the same time, whether the force $N$ be impressed or not; and therefore at the end of that time it will be found somewhere in the line $BD$. By the same argument, at the end of the same time it will be found somewhere in the line $CD$. Therefore it will be found in the point $D$, where both lines meet. But it will move in a right line from $A$ to $D$, by Law I
So, here Newton proves that force addition should take the form of vector addition (vectors won't be invented for a good 150 years), and gives as his proof a dynamical proof, one founded on the ideas of dynamics. He uses the motion that would be created by the forces on some hypothetical mass $m$, by his law of motion $\vec F = m \vec a$ to prove how forces should add given we know how the paths add.
This would give birth to the controversy, between the those in the dynamic camp, and those in the static camp.
I think Bernoulli† gave the most compelling damnation of the dynamical proof, in which he points out that even if we were to live in a universe where Newton's law didn't hold, but instead some other law $\vec F = m \vec v$, say, forces would still add by the parallelogram law; so the ultimate reason that forces add in this way must be independent of dynamics itself.
†: Here Daniel Bernoulli, the one of the many famous Bernoullis responsible for Bernoulli's principle
Duchayla and statics
As an alternative to the dynamical explanation of Newton, the static proof usually given is ultimately traced back to Charles Dominique Marie Blanquet Duchayla (a name if I ever saw one) in 1804. His proof proceeds by induction, with the inductive step:
If forces $P$ and $Q$, acting together at a point, result in a force directed along the diagonal of the parallelogram representing the two forces, and likewise for forces $P$ and $R$ acting together at the same point, with $R$ acting in $Q$'s direction, then likewise for $P$ together with the resultant of $Q$ and $R$.
And proof of the inductive step:
Let $P$ be represented by segment $AB$. Grant that the resultant of $Q$ and $R$ is in their common direction and equal in magnitude to the sum of their magnitudes; let it be represented by segment $AE$, with $Q$ represented by $AC$, so that segment $CE$ is the proper length and direction to represent $R$ except that $R$ is actually applied at $A$ rather than at $C$. Nevertheless, when a force acts on a body, the result is the same whatever the point, rigidly connected to the body, at which it is applied, provided that the line through that point and the force's actual point of application lies along the force's direction.
So although $R$ is applied at $A$, its effect is the same if it is applied at $C$, since $AC$ is in the forces' direction. Continuing to treat the parallelograms in figure as a rigid body, we can move the three forces' points of application to other points along the forces' lines of action without changing their resultant. We cannot move $P$'s point of application directly to $C$, since $AC$ does not lie along $P$'s direction. But by hypothesis, the resultant of $P$ and $Q$ acts along diagonal $AD$, so the resultant can be applied at $D$. It can then be resolved into $P$ and $Q$, now acting at $D$. $Q$'s direction lies along $DG$, so $Q$ can be transferred to $G$. $P$'s direction lies along $CD$, so $P$ can be transferred to $C$, where it meets $R$. By hypothesis, their resultant acts along diagonal $CG$, so it can be transferred to $G$, where it meets $Q$. By the converse of the force transmissibility principle, $AG$ must lie along the line of action of the force resulting from $P$ composed with resultant of $Q$ and $R$...
Quotation and figure reproduced from Lange's paper
The proof goes on from here to establish that you can also demonstrate that it gets both the directions and magnitudes right, for a full reproduction, see Lange's paper.
This proof is a bit hard to read, and reviews were mixed, some thought it was the best thing since sliced bread, "very simple and beautiful", such as Mitchell, Young, Imray, Earnshaw and Pratt, other's, not so much:
forced and unnatural... a considerable waste of time
Besant 1883, Lock 1888
the proof of our youth... now voted cumbrous and antiquated, and only retained as a searching test of logical power
A.G.G. 1890
brainwasting... elaborate and painstaking, though benumbing
Heaviside 1893
certainly convincing...but...essentially artificial...cunning rather than honest argument
Goodwin
Poisson and symmetry
But not all is lost. Poisson offered an alternative proof of the static case in 1811, which is based on symmetry arguments, dimensional considerations and a uniqueness constraint. I quite like this version. You start by assuming you have two forces of equal magnitude $P$ but different directions, separated by an angle $2\theta$ (The red forces in the diagram). Invoking rotational invariance and symmetry, the resultant $R$ (black) must bisect the two in direction, and its magnitude must be a dimensionally consistent formula of the magnitude of $P$ and the angle $\theta$ alone, or of the form:
$$ R = P g(\theta) $$
with some unknown function $g(\theta)$. To figure out the function, Poisson then considers two new problems:
At the top and bottom, we've set up two new copies of the same task, now adding pairs of forces $Q$ (blue) to create the $P$s, and can see that
$$ P = Q g(\phi) \implies R = Q g(\theta) g(\phi) $$
Meanwhile the two inner $Q$ forces (slightly darker blue) and two outer $Q$ (slightly lighter blue) forces set up the same scenario, all four of them adding to the resultant $R$, and each pair a part, and since we assume that forces in the same direction just add in magnitude, we have also
$$ R = Q g(\theta + \phi) + Q g(\theta - \phi) $$
together:
$$ g(\theta) g(\phi) = g(\theta+\phi) + g(\theta-\phi) $$
the only solutions to this functional equation are of the form:
$$ g(\theta) = 2 \cos (\alpha \theta) $$
with an unknown $\alpha$, but we can fix $\alpha = 1$ by requiring that equal forces in opposite directions cancel. So finally we recover the force addition formula for equal magnitude forces:
$$ R = 2 P \cos \theta $$
cast in polar form. From here it is easy to generalize to the full vector addition results by decomposing forces into various pieces.
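Both halves of Poisson's argument are easy to check numerically; this sketch verifies that $g(\theta) = 2\cos\theta$ satisfies the functional equation, and that $R = 2P\cos\theta$ agrees with ordinary componentwise vector addition of two equal forces:

```python
import math

g = lambda t: 2 * math.cos(t)

# g(t)g(p) = g(t+p) + g(t-p) for a few sample angles
for t, p in [(0.3, 0.1), (1.0, 0.4), (0.7, 0.7)]:
    assert abs(g(t) * g(p) - (g(t + p) + g(t - p))) < 1e-12

# Two forces of magnitude P separated by angle 2*theta, added componentwise:
P, theta = 5.0, 0.6
fx = P * math.cos(theta) + P * math.cos(-theta)
fy = P * math.sin(theta) + P * math.sin(-theta)
R = math.hypot(fx, fy)
assert abs(R - 2 * P * math.cos(theta)) < 1e-12
print("functional equation and R = 2P cos(theta) both check out")
```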
I'm quite fond of this proof as it relies only on symmetry and dimensional considerations, and is completely free from any discussion of dynamics. But, some people equate this to a deficit:
Many have been puzzled by finding that the thing which, by its very definition tends to produce motion, is reasoned on... under a compact that any introduction of the idea of motion would be out of place. The statical proofs... seem to be all geometry and no physics
Augustus De Morgan (1859)
But others find it quite elegant:
The proof which Poisson gives of the "parallelogram of forces" is applicable to the composition of any quantities such that turning them end for end is equivalent to a reversal of their sign
Maxwell in 1873
Onto the philosophical
Nowadays, people continue to argue over the force addition law's origins, but mostly in philosophy journals. In the second half of Lange's paper, he summarizes these various philosophical arguments and comes to the rough conclusion that, in the end, whether you want to believe a dynamical or a static origin is a question of interpretation, similar to whether you prefer Newtonian or Lagrangian mechanics. It's up to you to decide for yourself. The historic lineup of heavyweights is roughly:
In the dynamic camp:
Newton
Lagrange
Heaviside
De Morgan
In the static camp:
Poisson
Maxwell
Bernoulli
Young
Earnshaw
Where will you stand? | {
"domain": "physics.stackexchange",
"id": 77546,
"tags": "forces, vectors"
} |
Node backend to run user-submitted code in a virtual machine | Question: As per the title, I'm making a webapp that is intended to use as a JS exercise platform. Problems are shown to users, they submit code, that code is run against a few test cases, and a report of the outcome is given back to the user.
I'm using Django for my backend, but to run the code I have also set up a node script, which is called via subprocess in Django.
The script is given a string containing the user code and a list of assertions.
I'm trying to see if the way I structured that script is sound.
Objective: I need to return to Django a list of objects, one for each assertion passed to node, where each object looks like this:
{
id: Number,
assertion: String,
public: Boolean,
passed: Boolean,
error: String,
}
Django pretty much needs to know which test cases were passed and which failed.
My idea is the following: in Node, I take each assertion passed by Django and turn it into a try ... catch block in which I create an object, run the assertion, and, if it fails, collect the error in that object (otherwise I record the positive outcome); then I push the object to an array, which is what I ultimately return. I then take all the try ... catch strings and inline them after the user code in the code I run in my vm.
As you might imagine, the possible issues with this are all the cases where the user might tamper with the array prototype and whatnot.
So here's my Node code. I'd like some input and feedback on what could be improved, what kind of tampering it might still be vulnerable to, and other ideas as to how I could make it generally safer and better.
/*
usage:
node runWithAssertions.js programCode assertions
arguments:
programCode is a STRING containing a js program
assertions is an ARRAY of strings representing assertions made using node assertions
output:
an array printed to the console (and collected by Django via subprocess.check_output()) where each entry
corresponds to an assertion and is an object:
{
id: Number,
assertion: String,
public: Boolean,
passed: Boolean,
error: String,
}
where id is the id of the assertion (as in the Django database),
assertion is the string containing the assertion verbatim,
public indicates whether the assertion is to be shown to the user or it's secret,
passed represents the outcome of running the assertion on the program,
and error is only present if the assertion failed
*/
// The VM2 module allows to execute arbitrary code safely using a sandboxed, secure virtual machine
const { VM } = require('vm2')
const assert = require('assert')
const AssertionError = require('assert').AssertionError
const timeout = 1000
// instantiation of the vm that'll run the user-submitted program
const safevm = new VM({
timeout, // set timeout to prevent endless loops from running forever
sandbox: {
prettyPrintError,
prettyPrintAssertionError,
assert,
AssertionError
}
})
function prettyPrintError (e) {
const tokens = e.stack.split(/(.*)at (new Script(.*))?vm.js:([0-9]+)(.*)/)
const rawStr = tokens[0] // error message
if (rawStr.match(/execution timed out/)) {
// time out: no other information available
return `Execution timed out after ${timeout} ms`
}
const formattedStr = rawStr.replace(
/(.*)vm.js:([0-9]+):?([0-9]+)?(.*)/g,
function (a, b, c, d) {
return `on line ${parseInt(c) - 1}` + (d ? `, at position ${d})` : '')
}
) // actual line of the error is one less than what's detected due to an additional line of code injected in the vm
return formattedStr
}
// does the same as prettyPrintError(), but it's specifically designed to work with AssertionErrors
function prettyPrintAssertionError (e) {
const expected = e.expected
const actual = e.actual
const [errMsg, _] = e.stack.split('\n')
return (
errMsg +
' expected value ' +
JSON.stringify(expected) +
', but got ' +
JSON.stringify(actual)
)
}
const userCode = process.argv[2]
const assertions = JSON.parse(process.argv[3])
// turn array of strings representing assertion to a series of try-catch's where those assertions
// are evaluated and the result is pushed to an array - this string will be inlined into the program
// that the vm will run
const assertionString = assertions
.map(a => // put assertion into a try-catch
`
ran = {id: ${a.id}, assertion: '${a.assertion}', is_public: ${a.is_public}}
try {
${a.assertion} // run the assertion
ran.passed = true
} catch(e) {
ran.passed = false
if(e instanceof AssertionError) {
ran.error = prettyPrintAssertionError(e)
} else {
ran.error = prettyPrintError(e)
}
}
output_wquewoajfjoiwqi.push(ran)
`
)
.reduce((a, b) => a + b, '') // reduce array of strings to a string
// support for executing the user-submitted program
// contains a utility function to stringify errors, the user code, and a series of try-catch's
// where assertions are ran against the user code; the program evaluates to an array of outcomes
// resulting from those assertions
const runnableProgram = `const output_wquewoajfjoiwqi = []; const arr_jiodferwqjefio = Array; const push_djiowqufewio = Array.prototype.push; const shift_dfehwioioefn = Array.prototype.shift
${userCode}
// USER CODE ENDS HERE
// restore array prototype and relevant array methods in case user tampered with them
Array = arr_jiodferwqjefio
Array.prototype.push = push_djiowqufewio;
Array.prototype.shift = shift_dfehwioioefn;
if(Object.isFrozen(output_wquewoajfjoiwqi)) {
// abort if user intentionally froze the output array
throw new Error("Malicious user code froze vm's output array")
}
while(output_wquewoajfjoiwqi.length) {
output_wquewoajfjoiwqi.shift() // make sure the output array is empty
}
// inline assertions
${assertionString}
// output outcome object to console
output_wquewoajfjoiwqi`
try {
const outcome = safevm.run(runnableProgram) // run program
console.log(JSON.stringify({ tests: outcome })) // output outcome so Django can collect it
} catch (e) {
console.log(JSON.stringify({ error: prettyPrintError(e) }))
}
The full repo of this project is available at https://github.com/samul-1/js-exercise-platform/ if you want to take a look at the whole thing. Advice is welcomed :)
Answer:
what kind of tampering it might still be vulnerable to
I am working on testing this locally and in the process of getting it set up, so I am not sure if these things are actual areas of concern, but test tampering with globals like JSON, prettyPrintError, assert, Error, etc. I tried modifying JSON.parse() and was able to change its implementation.
It would be wise to setup unit tests if you haven’t already. Common frameworks like mocha/chai, jest, etc. offer techniques for ensuring exceptions are or aren’t thrown.
The code tests if the variable output_wquewoajfjoiwqi is frozen, but I assigned it to a different value and it threw TypeError: Assignment to constant variable. If the goal is to use a variable not used within the user code, then perhaps it would be wise to generate the hash dynamically - repeating if necessary until the user code does not contain the hash.
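To make the dynamic-hash idea concrete, the sentinel name could be generated on the Django side before the runnable program string is assembled; a minimal Python sketch (the function and prefix names here are my own, not from the project):

```python
import secrets

def fresh_identifier(user_code, prefix="output_"):
    """Draw random suffixes until the name does not occur in the user code."""
    while True:
        name = prefix + secrets.token_hex(8)
        if name not in user_code:
            return name

user_code = "const output_wquewoajfjoiwqi = 'tampered';"
sentinel = fresh_identifier(user_code)
assert sentinel not in user_code
```

The generated name would then be interpolated everywhere the hard-coded `output_wquewoajfjoiwqi` appears now.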
Bearing in mind the code is run in a VM instead of a browser, you might be interested to read about how the StackExchange snippets are designed to prevent XSS attacks. I tried to find a list of restrictions for those but haven't found anything comprehensive yet.
Other review points
constant style: most style guides recommend constant names be in all caps so anyone reading it can distinguish variables from constants. So instead of:
const timeout = 1000
use all capitals - e.g.
const TIMEOUT = 1000
repeated require for dependency AssertionError: instead of
const AssertionError = require('assert').AssertionError
just use:
const AssertionError = assert.AssertionError
since the previous line already loaded assert
regular expressions: the class \d can be used in place of [0-9]
scope of variable: is ran declared with a keyword, or is it okay as a global?
arrow function for callback: the callback to rawStr.replace() could be simplified to an arrow function
assigning ran: a in the callback to .map() in assertionString can be stringified with JSON.stringify() - so instead of:
ran = {id: ${a.id}, assertion: '${a.assertion}', is_public: ${a.is_public}}
just do this:
const ran = ${JSON.stringify(a)}
initially I was thinking of destructuring a (i.e. {id, assertion, is_public}) since a is never passed in its entirety. Before I realized that ran is within a template literal, I was thinking the object initializer shorthand notation could allow for the keys to be omitted - but within the template literal that doesn't seem to make sense.
call Array.join() instead of Array.reduce(): after the assertions are mapped to an array, instead of calling reduce to join them together into a string, Array.join() can be used.
"domain": "codereview.stackexchange",
"id": 41255,
"tags": "javascript, node.js, ecmascript-6, django, virtual-machine"
} |
Effect on sea level if the Earth stopped rotating | Question: Would sea level change at the equator if the Earth stopped spinning? I am assuming it is currently bulging around it due to centrifugal force.
Answer: Let's assume that the earth didn't suddenly stop spinning (because inertia and conservation of angular momentum would do all sorts of "interesting" things that are deserving of a What-If answer), and stipulate that the earth slowed down gradually, or possibly that it was never spinning in the first place (although I'm sure this would have all sorts of other effects that wouldn't have got us to where we are...)
Yes, sea levels would change, but not necessarily for the reasons that you think.
Centrifugal force
Part of the bulge in oceans is due to centrifugal force on the water, but much of it is not. There is an underlying bulge in the seabed as well as the ocean. A result of this (and other variations in the thickness and density of the crust) is that the earth's gravitational field is not even across the globe, and where there is a stronger area of gravitational field, more water is pulled towards it and a bulge results. It is this effect that allows for the bathymetry of oceans to be mapped by satellites that sense the elevation of the sea's surface.
I am no geoscientist, but I imagine that this bulge in the crust at the equator is also to do with centrifugal force - but it would take a lot longer to go away, if indeed it did at all, than one caused just by water.
EDITING to add that there is now an answer elsewhere on this site re the bulge in the crust: How viscous is the Earth's mantle?
Changes in tides & ocean currents
If the planet were not rotating, the dominant period for tidal cycles would likely be related to a lunar month rather than to a day. There would also be no Coriolis effect, and these two factors would result in major differences to tides and to ocean circulations. As such, it is likely that there would be substantial differences in both short- and long-term elevation changes that are due to currents.
Other effects
I suspect that lack of rotation might have effects on the planet's core and its magnetic field, which might result in all sorts of other impacts... but we'll have to wait for a geoscientist in a speculative mood to talk about things like that :-) | {
"domain": "earthscience.stackexchange",
"id": 316,
"tags": "ocean, sea-level, earth-rotation, hypothetical"
} |
Is there a relation between Pauli exclusion principle and degenerate energy levels? | Question: Pauli exclusion principle states that two electrons cannot be in the same quantum state. If two different states lead to the same energy level, then this energy level is said to be degenerate.
Due to the fact that these statements concern the states of an atom, is there any kind of relationship?
Answer: The Pauli exclusion principle states that each electron occupies a unique state. How that state relates to energy specifically, though, is a separate consideration that depends on the system that this electron is a part of.
Without giving a long answer about Hamiltonians and wavefunctions, suffice it to say that the energy of an electron depends on other forces nearby (e.g., electric fields) as well as its surrounding environment (e.g., other electrons). For example, in a hydrogen atom — which contains only one proton and one electron — the energy of an electron depends only on the principal quantum number n. But in larger atoms, the energy may depend (albeit to a lesser extent) on the other quantum numbers l, ml, and ms. In most materials, there are slight differences in energy that occur due to these four quantum numbers (which determine the "Aufbau" ordering of electrons in many materials).
So yes, there is a relation between the quantum numbers that the Pauli exclusion principle refers to and the energy of an electron, but the specific relation depends on the forces and structure that surround the electron itself.
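A small illustration of the hydrogen case (a sketch; the 13.6 eV Rydberg value is the usual textbook approximation): the energy depends only on n, while the number of distinct degenerate states that the Pauli principle lets electrons fill grows as 2n²:

```python
RYDBERG_EV = 13.6  # approximate hydrogen ground-state binding energy, eV

for n in range(1, 4):
    energy = -RYDBERG_EV / n**2
    # n^2 spatial states (summing 2l+1 over l = 0..n-1), times 2 spin states
    degeneracy = 2 * n**2
    print(f"n={n}: E = {energy:.2f} eV, degenerate states = {degeneracy}")
```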
"domain": "physics.stackexchange",
"id": 36612,
"tags": "quantum-mechanics, pauli-exclusion-principle"
} |
Building a chain of responsibility in Ruby to apply transformators on an object | Question: I try to make a middleware stack system in the rack way (but not for HTTP request).
Here is the main class:
class MiddlewareStack
def self.stack middleware
@middlewares ||= []
@middlewares << middleware
end
def self.middlewares
@middlewares
end
def self.next payload
next_middleware = @middlewares.shift
next_middleware.apply(payload, self)
end
def self.apply payload
self.next payload
end
end
Usage
We can setup a middleware stack this way:
class PlusOneMiddleware
def self.apply payload, stack
stack.next(payload) + 1
end
end
class DummyMiddleware
def self.apply payload, stack
return payload
end
end
class PlusOneStack < MiddlewareStack
stack ::PlusOneMiddleware
stack ::DummyMiddleware
end
Now we can call the stack like this:
PlusOneStack.apply(1) #return 2
This use case is pretty useless, but I plan to use it for filtering and caching purposes.
Questions
What do you think about the naming/the code?
Do have a better implementation in mind?
What do you think about the test suite? (below)
Test suite
describe MiddlewareStack do
context "when we stack middlewares" do
it "contains all the middlewares in the right order" do
expect(DummyMiddlewareStack.middlewares).to eq [PlusOneMiddleware, DummyMiddleware]
end
end
context "when we apply the middleware stack" do
it "call all the middlewares" do
expect(PlusOneMiddleware).to receive(:apply).ordered.and_call_original
expect(DummyMiddleware).to receive(:apply).ordered.and_call_original
PlusOneStack.apply(1)
end
end
end
Answer: In the end, I dropped my previous implementation and updated an existing gem to mimic the Rack::Builder middleware handling: https://github.com/Ibsciss/ruby-middleware
Thanks a lot for your very detailed response! | {
"domain": "codereview.stackexchange",
"id": 12966,
"tags": "ruby, design-patterns, rspec, meta-programming"
} |
Networking layer module | Question: These two classes are part of a networking layer module that I am building:
public abstract class BaseRequest {
protected RequestCreator mCreator;
public BaseRequest() {
setupRequestCreator();
}
private void setupRequestCreator() {
mCreator = new RequestCreator.Builder()
.headers(getHeaders())
.parser(getParser())
.build();
}
public abstract Headers getHeaders();
public abstract Parser getParser();
}
public class UserRequest extends BaseRequest {
private Headers mHeaders;
private Request<User> mRequest;
private interface UserService {
@GET("/users/{id}")
User getUser(String id);
}
public UserRequest(Headers headers) {
mHeaders = headers;
mRequest = mCreator.createRequest(UserService.class);
}
@Override
public Headers getHeaders() {
return mHeaders;
}
@Override
public Parser getParser() {
return null;
}
}
The framework that I am using requires me to configure/build a RequestCreator for each type of Request that I am going to use.
To make the process of adding new classes for different requests as pain-free as possible, I moved the building logic of the RequestCreator in an abstract class. This way I'll only have to extend it and override some methods that indicate how the RequestCreator should be configured - sort of a template method pattern.
The actual problem is this: when someone creates an instance of UserRequest, super() gets called right before UserRequest's member variables are set. What this means is that the RequestCreator will already be built by the time mHeaders are set. This is wrong because the building process needs a reference to mHeaders in order for it to behave as intended.
One way to get around this is to leave BaseRequest's constructor empty and to call setupRequestCreator() in the UserRequest's constructor, but only after setting the instance variables first. This however feels "hacky" because other people will have to be aware of this when they are creating new subclasses.
I tried other workarounds too, but nothing felt quite right. I was hoping someone could provide me with a better solution.
Answer: Why are you using an abstract class in the first place?
I more think you should use a factory method. You could store the headers and the parser in a class, but make it an immutable class.
Another note, or maybe just my personal opinion: It is to my knowledge not a Java convention to prefix all member fields with m.
The only really valuable code that you have is this:
mCreator = new RequestCreator.Builder()
.headers(getHeaders())
.parser(getParser())
.build();
and
mRequest = mCreator.createRequest(UserService.class);
So let's make factory methods!
public class Requests {
public static RequestCreator newCreator(Headers headers, Parser parser) {
return new RequestCreator.Builder()
.headers(headers)
.parser(parser)
.build();
}
}
public class UserRequest {
private interface UserService {
@GET("/users/{id}")
User getUser(String id);
}
public static Request<User> createUserRequest(Headers headers) {
return Requests.newCreator(headers, null).createRequest(UserService.class);
}
}
Problem solved(?) | {
"domain": "codereview.stackexchange",
"id": 16138,
"tags": "java, object-oriented, design-patterns"
} |
beta velocity notation in Galilean matrix | Question: I was looking at the Galilean transformation matrix and came across a version written in the form
$$\begin{pmatrix}x'\\ y'\\ z'\\ t'\end{pmatrix}=\begin{pmatrix}1&0&0&-\beta c\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}\begin{pmatrix}x\\ y\\ z\\ t\end{pmatrix},$$
where $\beta=v/c$. What I don't understand is what is the point in using the $\beta$ term why not just $-v$.
Is there something I am missing? I just can't think of a reason for using this. Does it have to do with giving some indication that, when $v$ is an appreciable fraction of the speed of light, the Galilean transformation breaks down and a correction factor is needed?
Answer: It could be that this form allows one to easily verify that the Lorentz transform reduces to the Galilean transform for small relative velocities, i.e. when beta -->0.
Once it was realized that the speed of light is fixed in vacuum for all inertial frames of reference, it did not immediately follow that particles obey the same symmetries as light. This was a hypothesis imposed by Einstein. He reasoned that since everything we know, we come to know by electromagnetic processes (via the nervous system, light, electric current, etc), the laws of mechanics and gravity would also have to be bound by the same symmetry principle. This led to the derivation of the Lorentz transform and a covariant version of Newton's laws of mechanics.
The speed of light is certainly special. Even if you tried to develop a theory of moving particles that was not Lorentz invariant while Maxwell's equations for EM fields is valid, c is special. One arrives at an approximate match between the Galilean and Lorentz transforms for small relative motion between inertial observers. The relevant parameter is beta = v/c. Taking v/c <<< 1 in the Lorentz transform will recover the form of the Galilean transform you are looking at.
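A small numeric sketch of that limit (an everyday speed is assumed, giving $\beta \approx 10^{-7}$): the Lorentz-transformed coordinate $x' = \gamma(x - vt)$ is numerically indistinguishable from the Galilean $x' = x - vt$:

```python
import math

c = 299_792_458.0   # speed of light, m/s
v = 30.0            # an everyday speed, m/s
beta = v / c        # ~1e-7

gamma = 1.0 / math.sqrt(1.0 - beta**2)
x, t = 1000.0, 2.0  # arbitrary event coordinates

x_lorentz = gamma * (x - v * t)
x_galilean = x - v * t
assert abs(x_lorentz - x_galilean) < 1e-6  # identical to well below measurement precision
assert abs(gamma - 1.0) < 1e-12
```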
Taking c-->infinity would be illogical as there is nothing to prevent v from going to infinity as well without some other constraints in place. | {
"domain": "physics.stackexchange",
"id": 54224,
"tags": "special-relativity, classical-mechanics, reference-frames"
} |
Electric field outside a infinitely long capacitor | Question:
If we construct the gaussian surface as shown (cylinder with one of the discs in the negatively charged plate of the capacitor) and the field at P is to be calculated, then as the total charge within the surface is zero so total flux will be zero. And the electric field is at all points normal to the plane of the plates of the infinitely long capacitor and also the discs of the cylinder, so the electric field at P must be zero, but P is closer to the positively charged plate and thus an outward electric field must be present at P. I am unable to resolve this contradiction, any help will be appreciated.
Answer: An infinitely big plate creates a uniform electric field perpendicular to it. This means that no matter how close or how far you are from the plate, you measure the same electric field.
Thus, when you put two opposite charged plates to form a capacitor, the fields between the plates will be directed the same way and this means that you have a non zero electric field in the inside, while the fields outside the plates will cancel out and you have a null electric field outside. | {
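That superposition can be sketched numerically (the charge density value below is arbitrary). Each infinite plate contributes a uniform field of magnitude sigma/(2*eps0) pointing away from it if positive and toward it if negative, so the contributions add between the plates and cancel outside:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m
SIGMA = 1e-6      # arbitrary surface charge density, C/m^2

def plate_field(sigma, plate_x, x):
    """x-component of the field of an infinite plate located at plate_x."""
    direction = 1.0 if x > plate_x else -1.0  # away from the plate if sigma > 0
    return direction * sigma / (2 * EPS0)

def total_field(x):
    # positive plate at x = 0, negative plate at x = 1
    return plate_field(+SIGMA, 0.0, x) + plate_field(-SIGMA, 1.0, x)

assert abs(total_field(0.5) - SIGMA / EPS0) < 1e-3  # between the plates: sigma/eps0
assert total_field(-1.0) == 0.0                     # outside: exact cancellation
assert total_field(+2.0) == 0.0
```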
"domain": "physics.stackexchange",
"id": 32996,
"tags": "electrostatics, capacitance, gauss-law"
} |
How could eclipse happen on the same day in a different calendar? (45 AD) | Question: According to the eclipse calculator I use (Alcyone), a partial eclipse 0.314 magnitude occurred August 1, 45 A.D. in Rome starting at 8:35am local time.
This would seem to correspond to an eclipse which was predicted to fall on the birthday of the emperor Claudius in the consulship of Marcus Vinicius and Statilius Corvinus (45 AD), as described in Dio Cassius Book 6, Section 26. The birthday of Claudius was Kalends Augustus (the first of August).
If this is the case, it would seem that there is an exact correspondence between the Gregorian and Julian calendar going back to 45 AD. However, I would expect that small differences in leap years and other irregularities would make this exact match on the same day unlikely.
Another example is the eclipse of 1140 at London which the Saxon Chronicle says occurred on 13 Kalends April (March 20th) and Alcyone says occurred March 20, again a perfect match. Yet another example is the eclipse of 809 which passed over the Faroe islands and the Saxon Chronicle says occurred 17 Kalends Augustus (16 July) on the second day of the week. Now, of course, 16 July 809 is a Thursday in the Gregorian calendar, not the second day of the week, so something would seem to be fishy somewhere.
Is Alcyone/NASA making some kind of adjustment so that these pre-Gregorian dates match up?
Answer: From the documentation for your program, http://www.alcyone.de/plsv/documentation/overview.html,
Dates are in the Julian Calendar through 4 October 1582, and then in the Gregorian Calendar, which begins on 15 October 1582. (Compute for 1582, and you will see that 1. oct to 1. nov. is short.) | {
"domain": "astronomy.stackexchange",
"id": 1033,
"tags": "solar-eclipse, time"
} |
Coordinate conversion formula in Jackson’s Electrodynamics | Question: Firstly, I have very little familiarity with Einstein notation, but would definitely like to improve my skills. So, excuse any difficulties I may have understanding. I am reading Jackson's 'Classical Electrodynamics' (3rd Edition) and have gotten to Chapter 11, where Jackson begins talking about special relativity and introduces the idea of four vectors as shown below:
My questions/confusions are:
Is the {$x^\alpha$, $x'^\alpha$} notation still relating to a frame at rest (unprimed) and frame moving (primed).
I am confused about the structure of equation 11.60. I am assuming $x'^\alpha$ on the left-hand side is a four vector, but then what is the $x'^\alpha$ on the right-hand side? The right-hand side is two four vectors, $x'^\alpha$ and $(x^0, x^1, x^2, x^3)$, multiplied together (inner product)? The structure is very confusing to me.
After equation 11.61, Jackson states, "...the derivative is computed from 11.60". Seeing that I don't even understand fully what 11.60 is trying to say, I am having a difficult time understanding how the derivative is derived from 11.60. Also, I am trying to discern how the derivative is defined in this formalism: The change in $x'^\alpha$ per change in $x^\beta$?
Any help/clarity on these points would be greatly appreciated!
Answer: You’ve been bitten by an abuse of notation common among physicists: we often say “four-vector $u^\alpha$” when what we should have said is “four-vector with components $u^0$, $u^1$, $u^2$, $u^3$ [in the coordinate system we are implicitly using]”. This makes for some very confusing moments when $u^\alpha$ actually needs to refer to the $\alpha$th component of the vector $u$ in a non-covariant way for some reason. Because of this, in the following $u\equiv (u^\alpha)\equiv (u^\alpha)_{\alpha=0}^3$ is the vector with the components $u^0,\ldots,u^3$, and $u^\alpha$ is its $\alpha$th component.
Now the point of (11.60) is to write down an arbitrary (not necessarily linear) mapping of quadruples $x\equiv (x^\alpha)$ to quadruples $x'\equiv (x'^\alpha)$. (For now, I am deliberately not calling them “vectors”.) A functional relationship between the two looks like
$$x' = f(x)$$
with a quadruple-valued function $f$; or explicitly for all components
$$\left\{\begin{align}
x'^0 &= f^0(x^0, x^1, x^2, x^3),\\
x'^1 &= f^1(x^0, x^1, x^2, x^3),\\
x'^2 &= f^2(x^0, x^1, x^2, x^3),\\
x'^3 &= f^3(x^0, x^1, x^2, x^3);
\end{align}\right.$$
or finally, keeping the components but compressing similar equations,
$$x'^\alpha = f^\alpha(x^0, x^1, x^2, x^3)\quad(\alpha = 0,\ldots, 3),$$
where the qualification at the end means “repeat the preceding line four times substituting these values for $\alpha$”. The Jacobian matrix (multidimensional derivative) of this transformation will then be written as
$$\frac{\partial f(x)}{\partial x}\equiv\left(\frac{\partial f^\alpha(x^0,x^1,x^2,x^3)}{\partial x^\beta}\right)_{\alpha,\beta=0}^3\,.$$
At this point, another abuse of notation comes into view: we write $x'$ instead of $f$, so $x'$ is simultaneously the new quadruple and the transformation function. The very same equations are now written as (11.60),
$$x'^\alpha = x'^\alpha(x^0,x^1,x^2,x^3)\quad(\alpha = 0,\ldots, 3),$$
and the Jacobian matrix is (as, indeed, “computed from (11.60)”)
$$\frac{\partial x'}{\partial x}\equiv\left(\frac{\partial x'^\alpha}{\partial x^\beta}\right)_{\alpha,\beta=0}^3\,.$$
I am actually quite surprised by what Jackson is doing here, because non-linear changes of coordinates and, consequently, vectors transforming differently from coordinate quadruples is something you have most probably never seen before. This is a technique for curvilinear coordinates and/or curved spacetime that usually only comes into play in GR. Lorentz transformations are linear, so you probably won’t actually need to worry about that once chapter 11 is over, but that’s why I avoided referring to $(x^\alpha)$ as a vector: in curvilinear coordinates, it isn’t. For example, on a plane adding the $(x,y)$ pairs and the $(r,\phi)$ pairs representing the same points yields two different results, while vector addition should make sense independent of coordinates. Meanwhile, things like velocities and momenta are always vectors. Once it’s proven that Lorentz transformations are linear, you won’t need to worry about it, so ignore it for now; if you are interested, try reading the first few sections of Misner, Thorne, and Wheeler’s classic, “Gravitation”. | {
"domain": "physics.stackexchange",
"id": 37033,
"tags": "special-relativity, tensor-calculus"
} |
Why is this language involving reversal regular? | Question: For a language to be regular it needs to be recognized by DFA/NFA.
Let $L = \{ xy^rzyx^r \mid |x| , |y|, |z| \ge 1 \}$ over the alphabet $\{0,1\}$.
$x^r$ means the reverse of $x$.
A DFA has no memory, so how can it handle the reverse check?
Answer: Notice that big unconstrained $z$ in the middle. You don't need the DFA to check anything if there is a way to decompose the word that dumps almost everything into $z$.
In particular, any word of the form $a b z b a$ where $a$ and $b$ are letters and $z$ is an arbitrary non-empty word is in $L$ (with words of length 1 for $x$ and $y$).
Conversely, suppose that you have a word $w \in L$: it can be broken down as $x y^r z y x^r$ with $|x| \ge 1$ and $|y| \ge 1$. If $|x| \ge 2$ then write $x = a b x'$: we have $w = a b x' y^r z y x'^r b a = a b z' b a$ where $z' = x' y^r z y x'^r$. If $|x| = 1$ then write $y = b y'$ and do a similar decomposition.
Conclusion: another way of describing $L$ is $L = \{a b z b a \mid a,b \in \{0,1\}, z \in \{0,1\}^{+}\}$. This language is regular; a regular expression that matches it is
00..*00|01..*10|10..*01|11..*11. Intuitively, an automaton recognizing $L$ only needs a finite amount of memory, because it only needs to memorize the first two letters.
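As a sanity check, the brute-force definition can be compared exhaustively against the regular expression (anchored, and with the first alternative written as `00..*00`, since a word starting `00` must also end with those letters reversed):

```python
import re
from itertools import product

pattern = re.compile(r"^(00..*00|01..*10|10..*01|11..*11)$")

def in_L(w):
    """Brute-force the definition: w = x y^r z y x^r with |x|,|y|,|z| >= 1."""
    n = len(w)
    for i in range(1, n):           # |x| = i
        for j in range(1, n):       # |y| = j
            if 2 * i + 2 * j >= n:  # leaves no room for a non-empty z
                break
            x, yr = w[:i], w[i:i + j]
            y, xr = w[n - i - j:n - i], w[n - i:]
            if yr == y[::-1] and xr == x[::-1]:
                return True
    return False

for n in range(1, 9):
    for bits in product("01", repeat=n):
        w = "".join(bits)
        assert bool(pattern.match(w)) == in_L(w), w
print("regex and definition agree on all words up to length 8")
```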
"domain": "cs.stackexchange",
"id": 1701,
"tags": "formal-languages, regular-languages"
} |
OOP PHP best practices | Question: I've worked with PHP for a few years now and have a degree of understanding about classes / objects, but I've never written my own until now.
In that vein then, I'm looking for a bit of confirmation of the code below and that I am doing things properly before I go any further. Any advice / critique is greatly appreciated.
class People {
protected $people = array();
public function __construct() {
$this->people = $people;
$defaults = array(
'model_type' => 'post_type',
'post_type' => 'post'
);
$options = wp_parse_args(get_option('my_options'), $defaults);
$this->model = $options['model_type'];
$this->type = $options['post_type'];
}
public function get_people() {
switch($this->model) {
case 'user' :
$people = get_users();
if(!empty($people)) {
foreach($people as $person) {
$new_person = new Person( $person->ID, $person->first_name, $person->user_email );
$this->people[] = $new_person;
}
}
break;
case 'post_type' :
$args = array(
'posts_per_page' => -1,
'orderby' => 'post_title',
'order' => 'DESC',
'post_type' => $this->type,
'post_status' => 'publish',
);
$people = get_posts( $args );
if(!empty($people)) {
foreach($people as $person) {
$new_person = new Person( $person->ID, $person->post_title, get_post_meta($person->ID, '_email_address', true) );
$this->people[] = $new_person;
}
}
}
return $this->people;
}
}
class Person {
public $id;
public $name;
public $email;
public function __construct( $id, $name, $email ) {
$this->id = $id;
$this->name = $name;
$this->email = $email;
}
}
Answer: protected $people = array();
public function __construct() {
$this->people = $people;
This may not be doing what you want. After doing that, I would expect $this->people to equal null, as you just overwrote the previous value of array() with the value of a previously undeclared variable.
$this->people[] = $new_person;
Which makes that line create an array the first time that it is reached.
$this->model = $options['model_type'];
$this->type = $options['post_type'];
This is bad style. You should always declare class properties before using them so that you can give them the proper scope. This also makes them quicker to access (it checks for declared variables first).
$new_person = new Person( $person->ID, $person->post_title, get_post_meta($person->ID, '_email_address', true) );
This looks suspicious to me. Is the post title really a person's name? If so, you should comment and explain why this would be true. Otherwise, someone will later create a bunch of posts whose titles aren't people and wonder why posts are showing up under a list of people.
Alternately, if post titles are not names of people, then this type is misnamed.
You also might want to consider extending your class rather than using a switch in get_people. Then you could have a People class and a Post class, and each could define its own get_people. | {
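The subclassing suggestion is language-agnostic; here is a minimal sketch of the same idea in Python (all class and field names are hypothetical stand-ins for the WordPress calls, not part of the reviewed code):

```python
# Sketch: replace a switch on a model-type string with polymorphism.
# Each source subclass knows how to build Person records from its own data.

class Person:
    def __init__(self, id, name, email):
        self.id = id
        self.name = name
        self.email = email

class PeopleSource:
    def get_people(self):
        raise NotImplementedError

class UserSource(PeopleSource):
    def __init__(self, users):          # stand-in for get_users()
        self.users = users

    def get_people(self):
        return [Person(u["id"], u["first_name"], u["email"]) for u in self.users]

class PostTypeSource(PeopleSource):
    def __init__(self, posts):          # stand-in for get_posts($args)
        self.posts = posts

    def get_people(self):
        return [Person(p["id"], p["title"], p["email_meta"]) for p in self.posts]

# Callers pick a source once; no switch is needed inside get_people().
source = UserSource([{"id": 1, "first_name": "Ada", "email": "ada@example.com"}])
people = source.get_people()
```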
"domain": "codereview.stackexchange",
"id": 11484,
"tags": "php, object-oriented, classes"
} |
Iterate and assign weights based on two columns (python) | Question:
| FI_name | ISN | Sector | Industry |
| --- | --- | --- | --- |
| REC | INE02 | PS | FS |
| HDB | INE03 | PR | FS |
| ABC | INE04 | PR | FS |
| RHC | INE05 | PR | CO |
| ZHE | INE06 | PR | FS |
| HSE | INE07 | PR | FS |
| ZAK | INE08 | PS | MT |
| HGB | INE09 | PR | FS |
| YUJ | INE10 | PR | MT |
| WSD | INE11 | PS | FS |
| REC | INE12 | PS | FS |
| HDB | INE13 | PR | FS |
| ABC | INE14 | PR | FS |
| RHC | INE15 | PR | CO |
| ZHE | INE16 | PR | FS |
| HSE | INE17 | PR | FS |
| ZAK | INE18 | PS | MT |
| HGB | INE19 | PR | FS |
| YUJ | INE20 | PR | MT |
| WSD | INE21 | PS | FS |
All the unique ISNs should be assigned an equal weight (totalling 100), but with the following exceptions.
Each unique Industry whose sector type is "PR" is capped at 25%.
So the ISNs with sector 'PR' should not, across their entire Industry, cross the 25% limit.
If any industry breaches the 25% limit (i.e., if the total number of such ISNs in any one industry is more than 5), then all those ISNs in that particular industry should share the 25% equally.
There is no limit for ISNs with sector == 'PS' (irrespective of the Industry).
The expected weights should be like this:
| FI_name | ISN | Sector | Industry | Weights |
| --- | --- | --- | --- | --- |
| REC | INE02 | PS | FS | 7.5% |
| HDB | INE03 | PR | FS | 2.5% |
| ABC | INE04 | PR | FS | 2.5% |
| RHC | INE05 | PR | CO | 7.5% |
| ZHE | INE06 | PR | FS | 2.5% |
| HSE | INE07 | PR | FS | 2.5% |
| ZAK | INE08 | PS | MT | 7.5% |
| HGB | INE09 | PR | FS | 2.5% |
| YUJ | INE10 | PR | MT | 7.5% |
| WSD | INE11 | PS | FS | 7.5% |
| REC | INE12 | PS | FS | 7.5% |
| HDB | INE13 | PR | FS | 2.5% |
| ABC | INE14 | PR | FS | 2.5% |
| RHC | INE15 | PR | CO | 7.5% |
| ZHE | INE16 | PR | FS | 2.5% |
| HSE | INE17 | PR | FS | 2.5% |
| ZAK | INE18 | PS | MT | 7.5% |
| HGB | INE19 | PR | FS | 2.5% |
| YUJ | INE20 | PR | MT | 7.5% |
| WSD | INE21 | PS | FS | 7.5% |
There are a total of 10 ISNs with Sector == 'PR' and Industry == 'FS', so all these ISNs are assigned an equal weight of 2.5% (25%/10).
Since the industries apart from FS (with sector 'PR') do not breach the 25% limit, the remaining ISNs have each been assigned 7.5% (75%/10).
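The split described above is easy to verify: the 10 capped ISNs share the 25% industry cap, and the other 10 share the remaining 75%, so everything still totals 100:

```python
# Check the weight arithmetic from the description above.
capped_count = 10                        # ISNs with Sector == 'PR' in Industry 'FS'
free_count = 10                          # all remaining ISNs
capped_weight = 25 / capped_count        # 2.5% each
free_weight = (100 - 25) / free_count    # 7.5% each
total = capped_count * capped_weight + free_count * free_weight  # 100.0
```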
This is my current code, but I believe there's a better approach. Is there another way to tackle the above conditions? Any shorter method?
# Sector weight identification
import pandas as pd
sff1 = pd.read_excel (r'C:\Users\RajashekarR\Downloads\Test_CodeR.xlsx')
swi = sff1.loc[sff1['Sector'] != "PS"]
swi_pivot = swi.pivot_table(values=['ISN'], index = 'Industry', aggfunc= ['count'])
swi_pivot.columns = ['count',]
swi = pd.merge(swi, swi_pivot, how='inner', on='Industry')
swi2 = pd.merge(sff1, swi, how='left', left_on=['ISN', 'FI_name', 'Sector', 'Industry'], right_on=['ISN', 'FI_name', 'Sector', 'Industry'])
# Sector weight allocation
o = 20
l = 0.25
n = o*l
c= swi2['count']
n1 = c[c > n].count()
n2 = o-n1
swi3 = swi2.loc[swi2['count'] > n]
swi3['Weights'] = l/swi2['count']
s_sum = swi3['Weights'].sum()
l1 = 1-s_sum
swif = pd.merge(swi2, swi3, how='left', left_on=['ISN', 'FI_name', 'Sector', 'Industry', 'count'], right_on=['ISN', 'FI_name', 'Sector', 'Industry', 'count'])
swif = swif.set_index('ISN')
swi4 = swif[swif['Weights'].isna()]
swi4['Weights'] = l1/n2
swif = swif.reindex(columns=swif.columns.union(swi4.columns))
swif.update(swi4)
swif.reset_index(inplace=True)
final = swif.drop(['count'], axis=1)
final.to_excel('Test_CodeR_Final.xlsx')
Answer: Your use of pivot_table, reindex and merge is in all cases unnecessary. I also find that you hold onto a lot of temporary values, none very well-named, when you can just reassign to older variables and allow Python to do cleanup based on dropped references.
Rather than c[c > n].count(), you can call .sum() directly on the boolean predicate.
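As a quick illustration of the `.sum()`-on-a-boolean-mask idiom mentioned above (toy data, not the question's frame):

```python
import pandas as pd

c = pd.Series([1, 6, 3, 8, 2])
n = 5
# These are equivalent, but summing the boolean mask avoids the
# intermediate filtered Series that c[c > n].count() builds.
count_a = c[c > n].count()
count_b = (c > n).sum()
```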
There's probably more simplification possible, but the results from this pass match yours.
import pandas as pd
sff1 = pd.read_csv('Weights - Sheet1.csv')
# Sector weight allocation
o = len(sff1)
l = 0.25
n = o*l
is_not_ps = sff1.Sector != "PS"
sff1.loc[is_not_ps, 'count'] = (
sff1[is_not_ps]
.groupby('Industry')
.ISN.transform('count')
)
n1 = (sff1['count'] > n).sum()
n2 = o - n1
swi3 = sff1[sff1['count'] > n]
swi3['Weights'] = l/sff1['count']
l1 = 1 - swi3.Weights.sum()
sff1['Weights'] = swi3.Weights
sff1 = sff1.set_index('ISN')
swi4 = sff1[sff1.Weights.isna()]
swi4['Weights'] = l1 / n2
sff1.update(swi4)
final = sff1.drop(['count'], axis=1) | {
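For what it's worth, the same allocation can be written as a single groupby pass; this is only a sketch on a hypothetical miniature frame (it assumes, as in the question, that the 25% cap applies to any 'PR' industry group whose equal-weight share would exceed the cap):

```python
import pandas as pd

# Hypothetical miniature of the question's data: 8 ISNs, one oversized
# 'PR'/'FS' group of 4 that must share the 25% cap.
df = pd.DataFrame({
    "ISN": [f"INE{i:02d}" for i in range(8)],
    "Sector": ["PR", "PR", "PR", "PR", "PS", "PS", "PR", "PS"],
    "Industry": ["FS", "FS", "FS", "FS", "FS", "MT", "CO", "MT"],
})
cap = 0.25
n = len(df)

# Size of each ('PR', Industry) group; 0 for 'PS' rows.
grp = df[df.Sector == "PR"].groupby("Industry").ISN.transform("count")
df["grp_size"] = grp.reindex(df.index).fillna(0)

capped = df.grp_size > n * cap           # groups breaching the cap
df["Weights"] = float("nan")
df.loc[capped, "Weights"] = cap / df.loc[capped, "grp_size"]
remaining = 1 - df["Weights"].sum()      # NaNs are skipped by sum()
df.loc[~capped, "Weights"] = remaining / (~capped).sum()
```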
"domain": "codereview.stackexchange",
"id": 43615,
"tags": "python, pandas"
} |
Turbulent flow and Separated flow | Question: My knowledge in aerodynamics is very limited as I am doing just an introduction to the subject, and so I may not understand very complex explanations.
I know that at some point laminar flow becomes turbulent flow, but why is this, and why does it happen over a shorter distance if the surface is rougher?
And why does turbulent flow "stick" to the surface longer?
I know that, for this reason, triggering turbulence can actually decrease pressure (form) drag, because a turbulent boundary layer moves the point of flow separation further back, as with a smooth ball versus a golf ball.
Also, it would be very much appreciated if one explains from the very basic definitions, starting from definitions of laminar flow, turbulent flow and viscosity to illustrate their explanation, so I can notice the flaws in my thinking.
Thank you.
Answer: Let's first look at laminar and turbulent flow. When an object moves through air, the air molecules directly at the surface of the object will get carried along courtesy of viscosity. Viscosity causes adjacent molecules to assume the same speed, so there will be a layer of air surrounding the object in which the speed transitions from the object's speed (right next to the surface) to the speed of the outer flow. This is the boundary layer. Initially, the layers of air within the boundary layer show no cross movement of molecules. Compare it with a multi-lane road with bumper-to-bumper traffic where no car changes lanes. Since all molecules move along their layer of air, this is called laminar flow (lat. lamina = layer). Drivers who stay in their lane are like air molecules in a laminar flow. They follow a straight path and interact with their neighbors to the left and right mainly through shear, which will change their speed only slowly.
How does roughness cause a turbulent boundary layer?
Roughness lets air molecules bump into peaks or get sucked into troughs which adds a perpendicular speed component. Normally, the motion of the flow is expressed relative to the object. This will make the molecules close to the surface the slow ones. Now you have faster molecules from more distant layers moving closer to the surface and kicking the slow ones there ahead, and slower ones from close to the surface moving away, slowing down the more distant layers. So air molecules in turbulent flow wiggle around all the time and bump into slower molecules, or get bumped by faster ones. This makes all of them assume the same speed, more or less. Only at the surface you find a strong speed gradient, maybe like at an onramp on a highway. The slow cars entering the highway speed up really fast when others will cut into their lane frequently.
What does the boundary layer look like?
Below you see two typical speed profiles, where the speed $u(y)$ increases as you move away from the surface (at $y = 0$). Left is laminar flow and right the turbulent variety.
Speed profiles of laminar (left) and turbulent (right) boundary layers. Image source.
In still air every boundary layer starts laminar. Inside the laminar boundary layer, small disturbances become less and less damped, and at some point some frequencies become unstable (see Tollmien-Schlichting waves) and will eventually create so much cross movement that the boundary layer becomes turbulent, even without roughness. How soon it transitions to a turbulent boundary layer depends on (besides roughness):
the length and speed of the flow, mostly expressed as the local Reynolds number,
the pressure gradient,
wing sweep and
disturbances like bugs, rivet heads or turbulators.
If the flow is accelerated, all speeds in flow direction increase while cross flow will not be affected, so a laminar boundary layer in accelerating flow is stabilized. On the other hand, a pressure rise in flow direction corresponds to a deceleration in flow direction, so any movements perpendicular to the flow direction will grow relative to the flow speed, and as a consequence the turbulent transition occurs rather quickly.
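To put a rough number on the local Reynolds number criterion: for a smooth flat plate, transition is commonly taken to occur near Re_x ≈ 5×10^5 (a textbook rule of thumb; the speed below is an assumed example value, not from the answer):

```python
# Estimate the flat-plate transition point from Re_x = U * x / nu.
Re_crit = 5e5     # rule-of-thumb critical Reynolds number
U = 20.0          # freestream speed in m/s (assumed example)
nu = 1.5e-5       # kinematic viscosity of air in m^2/s
x_crit = Re_crit * nu / U   # distance from the leading edge, in m
```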
Why does turbulent flow "stick" to the surface longer?
Flow around a three-dimensional object will accelerate in the forward region and decelerate where the object's cross-section contracts again. Since the energy transfer across the stratified flow in the laminar boundary layer is reduced to shear, the molecules close to the surface will lose speed quickly once this deceleration starts. Soon the flow close to the surface comes to a rest relative to the surface and starts to flow backwards. This allows the outer layers of the flow to move away from the surface of the object because the space between them and the surface is now filled with slow-moving air. That is called flow separation.
The much higher energy transfer in a turbulent boundary layer will delay separation because the slow molecules close to the surface will get kicked along. Now the flow is able to follow the contracting contour of the object for much longer and separation is delayed.
Note that the golfball effect only occurs in a small range of speed and size combinations, because with bigger objects or at higher speeds the laminar-turbulent transition will have occurred already before the deceleration starts. | {
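As a sanity check on that last remark, the Reynolds number of a driven golf ball can be estimated; the ball speed here is an assumed ballpark figure:

```python
# Reynolds number Re = v * d / nu for a driven golf ball.
v = 70.0          # ball speed in m/s (assumed; a strong drive)
d = 0.0427        # golf ball diameter in m
nu = 1.5e-5       # kinematic viscosity of air in m^2/s
Re = v * d / nu   # roughly 2e5, near the laminar/turbulent drag-crisis regime
```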
"domain": "physics.stackexchange",
"id": 40418,
"tags": "fluid-dynamics, drag, aerodynamics, viscosity, turbulence"
} |
Which NAV2 controller for following a given path with a certain speed? | Question: I need to follow a given global path with a certain speed. For now, my plan would have been to use the MPPI controller in NAV2 with a custom critic for speed.
Is this a good approach? Are there other simpler solutions?
Answer: That's a pretty good approach. RPP is also a constant-speed controller that sets the linear velocity as constant and computes the appropriate angular velocity to track the path. The "Regulated" in that approach's name includes some heuristics to adjust that velocity, but you can disable them all if you want to charge forward at full speed all the time, all other constraints be damned. By its nature, RPP driven like that might overshoot some turns if your robot isn't highly dynamic while driving fast, whereas DWB/MPPI might be better.
Something like MPPI/DWB, where you can remove other constraints and heavily weight speed, also seems like a logical choice, assuming you also want other controls than just max speed.
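The way (R)PP "computes the appropriate angular velocity to track the path" is the classic pure-pursuit law, curvature = 2·sin(alpha)/L for a lookahead point at bearing alpha and distance L; a minimal sketch with hypothetical numbers:

```python
import math

def pure_pursuit_omega(v, alpha, lookahead):
    """Angular velocity to steer toward a lookahead point at bearing
    alpha (rad) and distance lookahead (m), at constant linear speed v."""
    curvature = 2.0 * math.sin(alpha) / lookahead
    return v * curvature

# Example: 1 m/s constant speed, lookahead point 30 degrees off-axis, 0.5 m away.
omega = pure_pursuit_omega(1.0, math.radians(30), 0.5)   # 2.0 rad/s
```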
"domain": "robotics.stackexchange",
"id": 39075,
"tags": "ros2, control, nav2"
} |
Can a neutrino act as a virtual particle between two electrons to mediate an electron-electron fermionic interaction? | Question: Can a neutrino act as a virtual particle between two electrons to mediate a fermionic electron-electron interaction, analogous to how a photon acts as a virtual particle between two electrons to mediate a bosonic electron-electron interaction?
What would the Lagrangian look like for an electron-electron interaction mediated by neutrinos?
Answer: I'm a bit rusty on my QED, but I'll give this a shot. The simplest case would be described by a diagram similar to:
But the $e^--e^--\nu_e$ vertex doesn't exist (also note that I can't draw the required arrow on the neutrino) - the vertices of the standard model (with the exception of vertices involving the Higgs and neutrino oscillations) are:
With these, the closest interaction to what you describe that I can see is:
There are virtual neutrinos as you specified, but also virtual $W$ bosons.
If you rotate that diagram 90 degrees, there's an $e^--e^+$ scattering mediated by virtual neutrinos and $W$ bosons, but again, not quite what you asked for. | {
"domain": "physics.stackexchange",
"id": 15346,
"tags": "standard-model, quantum-electrodynamics, neutrinos"
} |
KDL IK fails in Descartes | Question:
I am using Descartes library on Ubuntu 16.04 ROS kinetic for my Denso robot.
I am using IkFastMoveitStateAdapter with KDLKinematicsPlugin defined in the kinematics.yaml. I am able to load this plugin successfully, but getAllIK() in IkFastMoveitStateAdapter fails, since there are no solutions found for any point of the Cartesian trajectory. The pose seems to be reachable by the arm. How do I know if the kinematics plugin is set up correctly? Can I use the standard KDL plugin, or do I need to build a custom IKFast plugin for my robot? What else might I be missing?
I found a similar issue here, but somehow it does not solve my problem
Originally posted by ipa-hsd on ROS Answers with karma: 150 on 2018-03-13
Post score: 0
Answer:
Can I use the standard KDL plugin
No. You cannot use the IkFastMoveitStateAdapter with a KDL based plugin.
(this seems obvious: the name IkFastMoveitStateAdapter seems to indicate this already)
or I need to build a custom IKFast plugin for my robot?
Yes.
Descartes depends on the IK plugin returning 'all' joint space solutions for a particular Cartesian pose, otherwise it has nothing to choose from / optimise for. KDL cannot reliably return those solutions, as it is a numerical solver.
I found a similar issue here, but somehow it does not solve my problem
Jorge seems to explain it quite well in the issue you link:
Instead you can instantiate an IkFastMoveitStateAdapter object which will load the kinematics plugin of your arm as long as it is provided in the "kinematics.yaml" of the moveit config package. This flavor of the Descartes Robot Model assumes the underlying kinematics is running a closed-form solver and so it'll exploit that fact in order to generate more pose samples and potentially produce a better path.
What would be good to verify (over at Descartes' issue tracker) is whether the MoveitStateAdapters are still needed. MoveIt's IK plugin class got extended to support everything that Descartes needs, so the intermediary may not be necessary any more. But I would suggest you verify this with the developers.
Originally posted by gvdhoorn with karma: 86574 on 2018-03-13
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by ipa-hsd on 2018-03-14:
My bad - my idea was that IkFastMoveitStateAdapter communicates with any available underlying IK solver to get multiple solutions. When you say Moveit's IK plugin is extended, do you mean KDL also should return multiple solutions? And how can I bypass MoveitStateAdapter? By replacing getAllIK()?
Comment by gvdhoorn on 2018-03-14:
When you say Moveit's IK plugin is extended, do you mean KDL also should return multiple solutions?
No. I don't believe KDL can do this reliably. TracIK maybe, see ros-industrial-consortium/descartes#124.
Comment by ipa-hsd on 2018-03-15:
Thanks! Now I have a brief idea of how things are working. I used MoveitStateAdapter (Still waiting for a reply: https://github.com/ros-industrial-consortium/descartes/issues/215) and defined Trac IK in my kinematics.yaml. I am able to plan the trajectories now.
Comment by gvdhoorn on 2018-03-15:
I'm not sure trac_ik is the best choice, but it will probably work 'somewhat'. Note Patrick's comments at the end of the issue I linked. IIUC he's describing a hypothetical / desired situation, not the current state.
Comment by ipa-hsd on 2018-03-15:
IKFastMoveitStateAdapter calls getPositionIK in getAllIK, whereas MoveitStateAdapter calls setFromIK with multiple seeds. Also, I was able to generate trajectories even with KDL+MoveitStateAdapter, just that it was 5x slower than Trac IK.
Comment by ipa-hsd on 2018-03-15:
Unfortunately, I was unable to generate IKFast plugin for my Denso robot (https://github.com/rdiankov/openrave/issues/568)
Comment by gvdhoorn on 2018-03-15:
See if the procedure in #q263925 helps. It's written for 4dof, but just replace with the appropriate solver type. Can't guarantee it'll help, but using the inversekinematics has helped me get past weird IKFast errors before.
Comment by ipa-hsd on 2018-03-19:
Used this PR: https://github.com/rdiankov/openrave/pull/408, with extraprec=100 and maxsteps=750, I was able to generate the plugin | {
"domain": "robotics.stackexchange",
"id": 30300,
"tags": "ros, ikfast, ros-kinetic"
} |
What version of spark in latest Cloudera QuickStart VirtualBox? | Question: I am thinking of installing the Cloudera QuickStart VirtualBox image. I searched the Cloudera download page, but was unable to find which Spark version ships in the latest QuickStart VM. If anyone knows, please let me know.
Answer: The latest QuickStart VM (5.7) has Spark version 1.6.0 by default.
"domain": "datascience.stackexchange",
"id": 829,
"tags": "apache-spark"
} |
Router class in PHP | Question: I am building a small and 'simple' PHP framework to see if I can bring my PHP skills to a new level and to review the knowledge I have acquired during OOP classes and tutorials I've done.
Currently, I am building the Router class. I know people will try to discourage me from reinventing the wheel, but since this is for learning purposes, I am quite happy to do so.
The 'problem' I am facing is that I cannot seem to implement "named routing parameters". There are frameworks out there that have URLs such as /profile/id/22/name/test. The catch here is that they require predefined routes. I would like to avoid this as much as possible, since my aim is to build a rather simple framework without too much configuration.
Also, my implementation seems a little bit "too basic and easy" compared to other advanced Routing mechanisms that some frameworks use. Am I missing some essentials?
So, how can my implementation/routing be improved when looking at scalability/usability and security while still keeping the simplicity?
class Router
{
private $segments = array();
private $parameters = array();
/*
* When a new Router instance is created,
* fill it with information of the visited route
*/
public function __construct()
{
$this->getSegments();
$this->getParameters();
}
/*
* Get the current requested URL
*/
private function getURI()
{
return rtrim(substr($_SERVER['REQUEST_URI'], 1), '/');
}
/*
* Store the route segments (controller, method, parameters)
*/
private function getSegments()
{
$uri = $this->getURI();
$this->segments = explode('/', $uri);
}
/*
* Return the name of the controller
* Returns 'index' if there is no controller given
*/
public function getController()
{
return (isset($this->segments[0])) ? $this->segments[0] : 'index';
}
/*
* Return the name of the method
* Returns 'index' if there is no method given
*/
public function getMethod()
{
return (isset($this->segments[1])) ? $this->segments[1] : 'index';
}
/*
* Store all the given parameters
*/
private function getParameters()
{
if(is_array($this->segments))
{
$parameters = (count($this->segments) > 2) ? array_slice($this->segments, 2) : false;
if(!$parameters) { return false; }
// remove empty parameters
$parameters = array_diff($parameters, array(''));
// reindex the array
$parameters = array_values($parameters);
$this->parameters = $parameters;
}
}
/*
* Return a parameter by the index
*/
public function getParameter($index)
{
return (is_array($this->parameters) && isset($this->parameters[$index])) ? $this->parameters[$index] : false;
}
/*
* Return all parameters
*/
public function getAllParameters()
{
return (!empty($this->parameters)) ? $this->parameters : false;
}
}
This can then be used like this
// url = {website.com}/profile/edit/22
$router = new Router();
$router->getController(); // profile
$router->getMethod(); // edit
$router->getAllParameters(); // returns all parameters
$router->getParameter(0); // 22
Note: I haven't figured out the way I want the MVC design pattern to work in my framework, so this code won't cover that.
Answer: I am personally a fan of pretty much limiting the "routing" aspect of a front-controller like this to just getting the request to the proper controller for further processing.
But before talking about that, I want to ask you whether what you really have here is a "router" or a "URI parser". It seems like just a URI parser to me, in that it does no routing. If your goal is to create a one-size-fits-all URI parser, you are probably engaging in a fool's errand. As you change your application/API's behaviors and endpoint signatures, you are going to continually add more complexity to this central parser. This is probably a bad path.
Now let's think about a true "router" (this thinking exercise ideally getting towards addressing your problem with not understanding how to implement MVC). What is a router intended to do? It is intended to interpret something like a URI string and do nothing other than to route that string (and of course any payload passed to POST/PUT) to an appropriate piece of logic that knows what to do with it.
The RESTful URI structure you present really only requires one level of routing. That can be as simple as looking at first path segment of the URI, comparing against configuration mapping controllers with each "first segment" value, then instantiating the controller and passing it the remaining URI so it can determine what to do to with it from there. You might find yourself using a combination of classes such as
Router (a class to look a "first segment" and route configuration and instantiate appropriate controller and relenquish control to that controller)
RouterConfig (a class to wrap router config settings)
Controller (an abstract class all other controllers can extend from)
*Controller (specific controller implementations for each of your resources)
Request (a class to store info on request perhaps built from parse_url or similar)
Update
I also should have noted that you currently are not doing any validation on your input here. You are working directly with $_SERVER['REQUEST_URI'] which must be considered as user-input data and thus unsafe. Most good front controller implementations will have some sort of class that is instantiated to represent the request being made. This gives you an opportunity to validate/sanitize the input as well as establish a common authoritative representation of the request - the request type (GET, POST, PUT, DELETE), content type, URI, POST/PUT payload and/or parametric data, etc. - that can be passed about in the system.
I have added a more detailed code example below for what a router might look like in an MVC system. This is pretty close to production-level code, other than the need to add namespacing, document blocks, and (ideally) unit tests (and of course to fill in the missing logic pieces).
Notice in this code how we strive to always pass around objects that we can type-hint for in the method signature. This largely eliminates the need to have parameter validation in your methods if you are disciplined about only letting objects be used in the system once they have been put into expected state (with exception being thrown otherwise.) You can see this allows us to really be efficient with the code we write.
// this is example of static router
class Router
{
// store global route config
protected static $config;
// restrict this class to static usage
private function __construct() {}
// setter for config, likely to be called in bootstrap file of some sort
public static function setConfig(RouterConfig $config) {
self::$config = $config;
}
// main method to instantiate a controller based on route config
public static function route(Request $request) {
$controllerClass = self::getControllerClass($request);
// some form of object instantiation
return new $controllerClass($request);
}
public static function getControllerClass(Request $request) {
self::validateConfig();
$resourceName = $request->getUriPathSegment(0);
// have getClassForResource return default 404 class name if match not found
$className = self::$config->getClassForResource($resourceName);
// validate that name returned maps to an existing class
if(!class_exists($className )) {
throw new Exception("Class: '" . $className . " not found");
}
return $className;
}
private static function validateConfig() {
if (self::$config instanceof RouterConfig) return;
throw new Exception(
'Router::config has not been set! You must call Router::setConfig() before calling this method.'
);
}
}
// example usage
// in bootstrap/config file
Router::setConfig(/* RouterConfig object */);
// in main routing script
try {
// some form of Request object instantiation
$request = new Request();
$controller = Router::route($request);
$controller->execute();
exit(0);
} catch (Exception $e) {
// Perhaps log exception.
// Perhaps handle end user messaging by instantiating 500-series controller
// or don't catch at all and just use top-level exception handler
exit(1);
} | {
"domain": "codereview.stackexchange",
"id": 23835,
"tags": "php, url-routing"
} |
Priority blocking queue implementation | Question: This is a follow-on question from one I asked a few days ago here: Single process blocking queue
Using the comments, I determined that there probably isn't a pre-built solution for this, and so I have had a go at writing my own implementation using a PriorityBlockingQueue.
In order to avoid the propagation of poor code before providing my implementation as an answer to my original question: are there any obvious or horrific flaws in my code? I know the first thing everybody is going to do is curse me to hell for using a singleton approach... does anyone have any alternatives/improvements?
Abstract Task
The only important value here is the int priority that determines how important the task is!
public abstract class Task {
public abstract String id();
public abstract Class<?> callback();
public abstract int priority();
public abstract Sector Sector();
public abstract byte[] payload();
}
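The ordering role of priority() can be sketched with Python's thread-safe queue.PriorityQueue standing in for Java's PriorityBlockingQueue; both pop the smallest ordering key first, so the key is negated here to make higher-priority tasks come out first:

```python
import queue

# Stand-in for a PriorityBlockingQueue of tasks ordered by priority().
q = queue.PriorityQueue()
for task_id, priority in [("a", 1), ("b", 5), ("c", 3)]:
    q.put((-priority, task_id))   # negate: highest priority dequeues first

drain_order = [q.get()[1] for _ in range(3)]   # "b", then "c", then "a"
```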
Task Implementation
My preference is for an immutable builder design.
/**
* Immutable implementation of {@link Task}.
* <p>
* Use builder to create immutable instances:
* {@code ImmutableTask.builder()}.
* Use static factory method to create immutable instances:
* {@code ImmutableTask.of()}.
*/
@SuppressWarnings("all")
public final class ImmutableTask extends Task {
private final String id;
private final Class<?> callback;
private final int priority;
private final Sector Sector;
private final byte[] payload;
private ImmutableTask(
String id,
Class<?> callback,
int priority,
Sector Sector,
byte[] payload) {
this.id = Objects.requireNonNull(id);
this.callback = Objects.requireNonNull(callback);
this.priority = priority;
this.Sector = Objects.requireNonNull(Sector);
this.payload = Objects.requireNonNull(payload);
}
private ImmutableTask(ImmutableTask.Builder builder) {
this.id = builder.id;
this.callback = builder.callback;
this.priority = builder.priority;
this.Sector = builder.Sector;
this.payload = builder.payload;
}
private ImmutableTask(
ImmutableTask original,
String id,
Class<?> callback,
int priority,
Sector Sector,
byte[] payload) {
this.id = id;
this.callback = callback;
this.priority = priority;
this.Sector = Sector;
this.payload = payload;
}
/**
* {@inheritDoc}
* @return value of {@code id} attribute
*/
@Override
public String id() {
return id;
}
/**
* {@inheritDoc}
* @return value of {@code callback} attribute
*/
@Override
public Class<?> callback() {
return callback;
}
/**
* {@inheritDoc}
* @return value of {@code priority} attribute
*/
@Override
public int priority() {
return priority;
}
/**
* {@inheritDoc}
* @return value of {@code Sector} attribute
*/
@Override
public Sector Sector() {
return Sector;
}
/**
* {@inheritDoc}
* @return cloned {@code payload} array
*/
@Override
public byte[] payload() {
return payload.clone();
}
/**
* Copy current immutable object by setting value for {@link Task#id() id}.
* Shallow reference equality check is used to prevent copying of the same value by returning {@code this}.
* @param value new value for id
* @return modified copy of the {@code this} object
*/
public final ImmutableTask withId(String value) {
if (this.id == value) {
return this;
}
String newValue = Objects.requireNonNull(value);
return new ImmutableTask(this, newValue, this.callback, this.priority, this.Sector, this.payload);
}
/**
* Copy current immutable object by setting value for {@link Task#callback() callback}.
* Shallow reference equality check is used to prevent copying of the same value by returning {@code this}.
* @param value new value for callback
* @return modified copy of the {@code this} object
*/
public final ImmutableTask withCallback(Class<?> value) {
if (this.callback == value) {
return this;
}
Class<?> newValue = Objects.requireNonNull(value);
return new ImmutableTask(this, this.id, newValue, this.priority, this.Sector, this.payload);
}
/**
* Copy current immutable object by setting value for {@link Task#priority() priority}.
* Value equality check is used to prevent copying of the same value by returning {@code this}.
* @param value new value for priority
* @return modified copy of the {@code this} object
*/
public final ImmutableTask withPriority(int value) {
if (this.priority == value) {
return this;
}
int newValue = value;
return new ImmutableTask(this, this.id, this.callback, newValue, this.Sector, this.payload);
}
/**
* Copy current immutable object by setting value for {@link Task#Sector() Sector}.
* Shallow reference equality check is used to prevent copying of the same value by returning {@code this}.
* @param value new value for Sector
* @return modified copy of the {@code this} object
*/
public final ImmutableTask withSector(com.solid.halo.representation.Sector value) {
if (this.Sector == value) {
return this;
}
com.solid.halo.representation.Sector newValue = Objects.requireNonNull(value);
return new ImmutableTask(this, this.id, this.callback, this.priority, newValue, this.payload);
}
/**
* Copy current immutable object with elements that replace content of {@link Task#payload() payload}.
* Array is cloned before saved as the attribute value.
* @param elements elements for payload, not null
* @return modified copy of {@code this} object
*/
public final ImmutableTask withPayload(byte... elements) {
byte[] newValue = elements.clone();
return new ImmutableTask(this, this.id, this.callback, this.priority, this.Sector, newValue);
}
/**
* This instance is equal to instances of {@code ImmutableTask} with equal attribute values.
* @return {@code true} if {@code this} is equal to {@code another} instance
*/
@Override
public boolean equals(Object another) {
return this == another
|| (another instanceof ImmutableTask && equalTo((ImmutableTask) another));
}
private boolean equalTo(ImmutableTask another) {
return id.equals(another.id)
&& callback.equals(another.callback)
&& priority == another.priority
&& Sector.equals(another.Sector)
&& Arrays.equals(payload, another.payload);
}
/**
* Computes hash code from attributes: {@code id}, {@code callback}, {@code priority}, {@code Sector}, {@code payload}.
* @return hashCode value
*/
@Override
public int hashCode() {
int h = 31;
h = h * 17 + id.hashCode();
h = h * 17 + callback.hashCode();
h = h * 17 + priority;
h = h * 17 + Sector.hashCode();
h = h * 17 + Arrays.hashCode(payload);
return h;
}
/**
* Prints immutable value {@code Task{...}} with attribute values,
* excluding any non-generated and auxiliary attributes.
* @return string representation of value
*/
@Override
public String toString() {
return new StringBuilder("Task{")
.append("id=").append(id)
.append(", callback=").append(callback)
.append(", priority=").append(priority)
.append(", Sector=").append(Sector)
.append(", payload=").append(Arrays.toString(payload))
.append('}').toString();
}
/**
* Construct new immutable {@code Task} instance.
* @param id value for {@code id}
* @param callback value for {@code callback}
* @param priority value for {@code priority}
* @param Sector value for {@code Sector}
* @param payload value for {@code payload}
* @return immutable Task instance
*/
public static ImmutableTask of(String id, Class<?> callback, int priority, Sector Sector, byte[] payload) {
return new ImmutableTask(id, callback, priority, Sector, payload);
}
/**
* Creates immutable copy of {@link Task}.
* Uses accessors to get values to initialize immutable instance.
* If an instance is already immutable, it is returned as is.
* @return copied immutable Task instance
*/
public static ImmutableTask copyOf(Task instance) {
if (instance instanceof ImmutableTask) {
return (ImmutableTask) instance;
}
return ImmutableTask.builder()
.from(instance)
.build();
}
/**
* Creates builder for {@link ImmutableTask}.
* @return new ImmutableTask builder
*/
public static ImmutableTask.Builder builder() {
return new ImmutableTask.Builder();
}
/**
* Builds instances of {@link ImmutableTask}.
 * Initialize attributes and then invoke the {@link #build()} method to create an
* immutable instance.
* <p><em>Builder is not thread safe and generally should not be stored in field or collection,
* but used immediately to create instances.</em>
*/
public static final class Builder {
private static final long INITIALIZED_BITSET_ALL = 0x1f;
private static final long INITIALIZED_BIT_ID = 0x1L;
private static final long INITIALIZED_BIT_CALLBACK = 0x2L;
private static final long INITIALIZED_BIT_PRIORITY = 0x4L;
private static final long INITIALIZED_BIT_SECTOR = 0x8L;
private static final long INITIALIZED_BIT_PAYLOAD = 0x10L;
private long initializedBitset;
private String id;
private Class<?> callback;
private int priority;
private Sector Sector;
private byte[] payload;
private Builder() {}
/**
* Adjust builder with values from provided {@link Task} instance.
* Regular attribute values will be overridden, i.e. replaced with ones of an instance.
* Instance's absent optional values will not be copied (will not override current).
* Collection elements and entries will be added, not replaced.
* @param instance instance to copy values from
* @return {@code this} builder for chained invocation
*/
public final Builder from(Task instance) {
Objects.requireNonNull(instance);
id(instance.id());
callback(instance.callback());
priority(instance.priority());
Sector(instance.Sector());
payload(instance.payload());
return this;
}
/**
* Initializes value for {@link Task#id() id}.
* @param id value for id
* @return {@code this} builder for chained invocation
*/
public final Builder id(String id) {
this.id = Objects.requireNonNull(id);
initializedBitset |= INITIALIZED_BIT_ID;
return this;
}
/**
* Initializes value for {@link Task#callback() callback}.
* @param callback value for callback
* @return {@code this} builder for chained invocation
*/
public final Builder callback(Class<?> callback) {
this.callback = Objects.requireNonNull(callback);
initializedBitset |= INITIALIZED_BIT_CALLBACK;
return this;
}
/**
* Initializes value for {@link Task#priority() priority}.
* @param priority value for priority
* @return {@code this} builder for chained invocation
*/
public final Builder priority(int priority) {
this.priority = priority;
initializedBitset |= INITIALIZED_BIT_PRIORITY;
return this;
}
/**
* Initializes value for {@link Task#Sector() Sector}.
* @param Sector value for Sector
* @return {@code this} builder for chained invocation
*/
public final Builder Sector(Sector Sector) {
this.Sector = Objects.requireNonNull(Sector);
initializedBitset |= INITIALIZED_BIT_SECTOR;
return this;
}
/**
* Initializes value for {@link Task#payload() payload}.
* @param elements elements for payload
* @return {@code this} builder for chained invocation
*/
public final Builder payload(byte... elements) {
this.payload = elements.clone();
initializedBitset |= INITIALIZED_BIT_PAYLOAD;
return this;
}
/**
* Builds new {@link ImmutableTask}.
* @return immutable instance of Task
*/
public ImmutableTask build() {
checkRequiredAttributes();
return new ImmutableTask(this);
}
private boolean idIsSet() {
return (initializedBitset & INITIALIZED_BIT_ID) != 0;
}
private boolean callbackIsSet() {
return (initializedBitset & INITIALIZED_BIT_CALLBACK) != 0;
}
private boolean priorityIsSet() {
return (initializedBitset & INITIALIZED_BIT_PRIORITY) != 0;
}
private boolean SectorIsSet() {
return (initializedBitset & INITIALIZED_BIT_SECTOR) != 0;
}
private boolean payloadIsSet() {
return (initializedBitset & INITIALIZED_BIT_PAYLOAD) != 0;
}
private void checkRequiredAttributes() {
if (initializedBitset != INITIALIZED_BITSET_ALL) {
throw new IllegalStateException(formatRequiredAttributesMessage());
}
}
private String formatRequiredAttributesMessage() {
Collection<String> attributes = new ArrayList<String>();
if (!idIsSet()) {
attributes.add("id");
}
if (!callbackIsSet()) {
attributes.add("callback");
}
if (!priorityIsSet()) {
attributes.add("priority");
}
if (!SectorIsSet()) {
attributes.add("Sector");
}
if (!payloadIsSet()) {
attributes.add("payload");
}
return "Cannot build Task, some of required attributes are not set " + attributes;
}
}
}
Abstract Queue
The @Value.Default simply allows immutable implementations to be assigned default values. Using it with execTask() is a bit of a hack to start the execution thread during the singleton instantiation.
public abstract class Queue {
@Value.Default
protected int queueSize() {
return 100;
};
@Value.Default
protected Comparator<Task> compare() {
return new Comparator<Task>() {
public int compare(Task o1, Task o2) {
return (o1.priority() > o2.priority() ? -1 : 1);
}
};
};
@Value.Default
protected PriorityBlockingQueue<Task> taskQueue() {
return new PriorityBlockingQueue<Task>(queueSize(), compare());
};
@Value.Default
protected TaskExec taskExec() {
return ImmutableTaskExec.builder().build();
};
/**
* Takes the highest priority task from the queue and passes it to the task
* executor. The task is removed from the list.
*/
@Value.Default
private synchronized void execTask() {
// Create a new thread to run the consumer
Runnable exec = new Runnable() {
@Override
public void run() {
// Put the thread into a perpetual loop
while (true) {
try {
// Take the task from the queue
Task task = taskQueue().take();
// Access the Task Executor
taskExec().execute(task);
// Throttle the loop
Thread.sleep(500);
} catch (InterruptedException e) {
// TODO log
}
}
}
};
new Thread(exec, "QUEUE_EXEC_THREAD").start();
}
/**
* Adds a new task to the queue. The task order is prioritized based on the
* comparator specified
*
* @param task
* The task to be added to the queue
* @throws ArrayStoreException
* If a task tries to be added and the queue is full
*/
public final void addTask(Task task) throws ArrayStoreException {
if (taskQueue().size() >= queueSize()) {
// TODO log
// TODO create custom exception
throw new ArrayStoreException("QUEUE FULL");
}
try {
taskQueue().put(task);
} catch (Throwable e) {
// TODO log
}
}
}
Queue Implementation
Again my preference is for an immutable design. I know singletons are considered bad design but I feel this is the best approach unless there is a better suggestion.
/**
* Immutable implementation of {@link Queue}.
* <p>
 * Use the static factory method to get the default singleton instance:
* {@code ImmutableQueue.of()}.
*/
@SuppressWarnings("all")
public final class ImmutableQueue extends Queue {
private final int queueSize;
private final Comparator<Task> compare;
private final PriorityBlockingQueue<Task> taskQueue;
private final TaskExec taskExec;
private ImmutableQueue() {
this.queueSize = super.queueSize();
this.compare = Objects.requireNonNull(super.compare());
this.taskQueue = Objects.requireNonNull(super.taskQueue());
this.taskExec = Objects.requireNonNull(super.taskExec());
}
private ImmutableQueue(
ImmutableQueue original,
int queueSize,
Comparator<Task> compare,
PriorityBlockingQueue<Task> taskQueue,
TaskExec taskExec) {
this.queueSize = queueSize;
this.compare = compare;
this.taskQueue = taskQueue;
this.taskExec = taskExec;
}
/**
* {@inheritDoc}
* @return value of {@code queueSize} attribute
*/
@Override
public int queueSize() {
return queueSize;
}
/**
* {@inheritDoc}
* @return value of {@code compare} attribute
*/
@Override
public Comparator<Task> compare() {
return compare;
}
/**
* {@inheritDoc}
* @return value of {@code taskQueue} attribute
*/
@Override
public PriorityBlockingQueue<Task> taskQueue() {
return taskQueue;
}
/**
* {@inheritDoc}
* @return value of {@code taskExec} attribute
*/
@Override
public TaskExec taskExec() {
return taskExec;
}
/**
* Copy current immutable object by setting value for {@link Queue#queueSize() queueSize}.
* Value equality check is used to prevent copying of the same value by returning {@code this}.
* @param value new value for queueSize
* @return modified copy of the {@code this} object
*/
public final ImmutableQueue withQueueSize(int value) {
if (this.queueSize == value) {
return this;
}
int newValue = value;
return validate(new ImmutableQueue(this, newValue, this.compare, this.taskQueue, this.taskExec));
}
/**
* Copy current immutable object by setting value for {@link Queue#compare() compare}.
* Shallow reference equality check is used to prevent copying of the same value by returning {@code this}.
* @param value new value for compare
* @return modified copy of the {@code this} object
*/
public final ImmutableQueue withCompare(Comparator<Task> value) {
if (this.compare == value) {
return this;
}
Comparator<Task> newValue = Objects.requireNonNull(value);
return validate(new ImmutableQueue(this, this.queueSize, newValue, this.taskQueue, this.taskExec));
}
/**
* Copy current immutable object by setting value for {@link Queue#taskQueue() taskQueue}.
* Shallow reference equality check is used to prevent copying of the same value by returning {@code this}.
* @param value new value for taskQueue
* @return modified copy of the {@code this} object
*/
public final ImmutableQueue withTaskQueue(PriorityBlockingQueue<Task> value) {
if (this.taskQueue == value) {
return this;
}
PriorityBlockingQueue<Task> newValue = Objects.requireNonNull(value);
return validate(new ImmutableQueue(this, this.queueSize, this.compare, newValue, this.taskExec));
}
/**
* Copy current immutable object by setting value for {@link Queue#taskExec() taskExec}.
* Shallow reference equality check is used to prevent copying of the same value by returning {@code this}.
* @param value new value for taskExec
* @return modified copy of the {@code this} object
*/
public final ImmutableQueue withTaskExec(TaskExec value) {
if (this.taskExec == value) {
return this;
}
TaskExec newValue = Objects.requireNonNull(value);
return validate(new ImmutableQueue(this, this.queueSize, this.compare, this.taskQueue, newValue));
}
/**
* This instance is equal to instances of {@code ImmutableQueue} with equal attribute values.
* @return {@code true} if {@code this} is equal to {@code another} instance
*/
@Override
public boolean equals(Object another) {
return this == another
|| (another instanceof ImmutableQueue && equalTo((ImmutableQueue) another));
}
private boolean equalTo(ImmutableQueue another) {
return queueSize == another.queueSize
&& compare.equals(another.compare)
&& taskQueue.equals(another.taskQueue)
&& taskExec.equals(another.taskExec);
}
/**
* Computes hash code from attributes: {@code queueSize}, {@code compare}, {@code taskQueue}, {@code taskExec}.
* @return hashCode value
*/
@Override
public int hashCode() {
int h = 31;
h = h * 17 + queueSize;
h = h * 17 + compare.hashCode();
h = h * 17 + taskQueue.hashCode();
h = h * 17 + taskExec.hashCode();
return h;
}
/**
* Prints immutable value {@code Queue{...}} with attribute values,
* excluding any non-generated and auxiliary attributes.
* @return string representation of value
*/
@Override
public String toString() {
return new StringBuilder("Queue{")
.append("queueSize=").append(queueSize)
.append(", compare=").append(compare)
.append(", taskQueue=").append(taskQueue)
.append(", taskExec=").append(taskExec)
.append('}').toString();
}
private static final ImmutableQueue INSTANCE = validate(new ImmutableQueue());
/**
 * Returns the default immutable singleton value of {@code Queue}
* @return immutable instance of Queue
*/
public static ImmutableQueue of() {
return INSTANCE;
}
private static ImmutableQueue validate(ImmutableQueue instance) {
return INSTANCE != null && INSTANCE.equalTo(instance) ? INSTANCE : instance;
}
}
Abstract TaskExec
Doesn't actually do anything yet. However, this would be where the Task is actually transmitted to the hardware.
public abstract class TaskExec {
public void execute(Task task) {
System.out.println("Executed task " + task.toString());
// TODO Send the task to the hardware
}
}
Usage
Create + init the singleton Queue during application startup (in main()):
ImmutableQueue.of();
Create and add a Task from any point in the application. Obviously passing in proper values for id, callback, priority, Sector and payload:
Task task = ImmutableTask.of(id, callback, priority, Sector, payload);
ImmutableQueue.of().addTask(task);
That's it! Technically the queue should then pass the highest-priority Task in the queue to the TaskExec to execute.
Answer: Shouldn't class Task be an interface?
ImmutableTask
private ImmutableTask(
String id,
Class<?> callback,
int priority,
Sector Sector,
byte[] payload) {
this.id = Objects.requireNonNull(id);
this.callback = Objects.requireNonNull(callback);
this.priority = priority;
this.Sector = Objects.requireNonNull(Sector);
this.payload = Objects.requireNonNull(payload);
}
By storing the payload without cloning it first, the immutability says bye-bye.
You're importing Sector, but using the FQN com.solid.halo.representation.Sector in withSector.
@Override
public byte[] payload() {
return payload.clone();
}
Consider calling it differently as it's the only getter doing something untypical.
public final ImmutableTask withId(String value) {
if (this.id == value) {
return this;
}
final String newValue = Objects.requireNonNull(value);
return new ImmutableTask(this, newValue, this.callback, this.priority, this.Sector, this.payload);
}
You don't need newValue and I wouldn't bother with the this qualifier. Using simply
Objects.requireNonNull(value);
return new ImmutableTask(this, id, value, priority, Sector, payload);
allows you to copy&paste all the boilerplate more easily. It's also less error-prone as obviously everything is a field except for the provided argument and this itself....
... which you should remove. ImmutableTask original in the constructor doesn't get used, so why bother? Using it could lead to memory leaks as each instance would carry its whole history with it.
/**
* Computes hash code from attributes: {@code id}, {@code callback}, {@code priority}, {@code Sector}, {@code payload}.
* @return hashCode value
*/
I wouldn't document it with so much detail. What if you want to change it later? It's good that -- unlike some JDK classes -- you don't document the exact formula, as that would make it hard to change later. FYI, e.g., Map.Entry.hashCode is pure crap, but once documented....
Actually, I wouldn't document it at all. What matters is equals and hashCode has to follow. It doesn't do anything special (like e.g. leaving fields out, which is permissible) so who cares?
/**
* Prints immutable value {@code Task{...}} with attribute values,
* excluding any non-generated and auxiliary attributes.
* @return string representation of value
*/
@Override
public String toString() {
That's not true. 1. It doesn't print anything. 2. The "non-generated" attributes don't get excluded. Maybe we could say that the "generated" do (there are none, so it's true). Again, I'd ditch it all or write
/** Returns a string representation of this. Note that it may change anytime. */
Providing both withers and a builder doesn't seem to be useful as they're pretty interchangeable. Sure, your builder allows you to make sure that no field gets forgotten, but so would a public constructor or a static factory method. Now I see, you have a factory method, too.
A builder is a huge advantage when there are many fields and many of them are of the same type. But you have 5 fields and each has a different type.
public final Builder from(Task instance) ...
I'd prefer a fluent syntax like task.toBuilder().
public ImmutableTask build() {
checkRequiredAttributes();
return new ImmutableTask(this);
}
This makes from unusable as you'll have to override all attributes (did you forget initializedBitset |= INITIALIZED_BITSET_ALL; there?).
Summary
The class is fine (except for the non-cloning of incoming payload and the non-usability of Builder.from), but terribly verbose. Don't forget that writing code is not all, it will be read, again and again and then once more.
Because of this, I'm claiming that less is more.
Moreover, I'd strongly suggest getting a tool to help you with this. No code generator, as it wouldn't save you the reading. There's AutoValue and Project Lombok (see my answer here).
I know Lombok well and if it works for you, it's a blessing:
@RequiredArgsConstructor(access=AccessLevel.PRIVATE)
@EqualsAndHashCode(callSuper=false)
@Wither
@Accessors(fluent=true, chain=true)
@Getter
public final class ImmutableTask extends Task {
@NonNull private final String id;
@NonNull private final Class<?> callback;
private final int priority;
@NonNull private final Sector Sector;
@NonNull private final byte[] payload;
... write `of(....)` which clones
... write `payload()` which clones
... write `withPayload()` which clones
... write `toBuilder`
}
So you need to write just 4 methods. Nice, isn't it? You lose a single feature, namely the check if all fields in the builder were set. There's a corresponding issue, but the Lombok authors have other things to do.
Everything else works out of the box, e.g., @EqualsAndHashCode understands arrays and @NonNull adds the null checks to all constructors, withers, and setters.
Anyway, even with Lombok, I'd leave the builder out. If it's a public API, then it's hard to take anything out later. Adding is much easier. If it's for you, then work saved now may be needed later or not. Anyway, by not adding a dubious feature, you can only win.
Moreover, you want all uses of the class to look similar, right? And providing three different creation methods is a bad start.
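To make the "less is more" point concrete, here is a hypothetical hand-rolled sketch along the lines the review suggests: one static factory, withers that reuse fields directly, and cloning of the byte[] both on the way in and on the way out. The class name and the reduced field set are illustrative only, not taken from the original code.

```java
import java.util.Arrays;
import java.util.Objects;

public final class MinimalTask {
    private final String id;
    private final int priority;
    private final byte[] payload;

    private MinimalTask(String id, int priority, byte[] payload) {
        this.id = Objects.requireNonNull(id);
        this.priority = priority;
        this.payload = payload; // callers below hand in a defensive copy already
    }

    public static MinimalTask of(String id, int priority, byte[] payload) {
        return new MinimalTask(id, priority, payload.clone()); // clone in
    }

    public String id() { return id; }
    public int priority() { return priority; }
    public byte[] payload() { return payload.clone(); } // clone out, too

    public MinimalTask withPriority(int priority) {
        return new MinimalTask(id, priority, payload); // safe: payload never escapes
    }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MinimalTask)) return false;
        MinimalTask t = (MinimalTask) o;
        return priority == t.priority && id.equals(t.id) && Arrays.equals(payload, t.payload);
    }

    @Override public int hashCode() {
        return Objects.hash(id, priority, Arrays.hashCode(payload));
    }

    public static void main(String[] args) {
        byte[] buf = {1, 2, 3};
        MinimalTask task = MinimalTask.of("t1", 5, buf);
        buf[0] = 9; // the caller mutates its array...
        System.out.println(task.payload()[0]); // ...but the task still holds the original value
    }
}
```

Because the private constructor is only ever fed arrays that no caller retains, the withers can share the field without re-cloning.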
"domain": "codereview.stackexchange",
"id": 13888,
"tags": "java, queue"
} |
What is the meaning of "work done without doing net work on an object gets stored as potential energy"? There are similar questions but some doubts remain | Question: In a conservative system, positive work done by an external force, or negative work done by a conservative force, without accelerating the object or giving it kinetic energy, gets stored as the potential energy of the object.
Are the following understandings correct?
1- If F(ext)↑ = mg↓ then F(ext)−mg=0 and u=v=0, so to move the object, at the beginning take F(ext)=mg+∆f. ∆f should be very small compared to mg; let's say if mg=10 N then ∆f≈0.001 N. ∆f causes an upward acceleration such that, let's say, ∆v=0.01 m s⁻¹, and after that ∆f is removed; F(ext) alone then balances mg and the object continues with v=0.01 m s⁻¹. Does this point have any error? Can we have a larger ∆f, like 1000 N?
2- Is it correct that ∆v should be ≈0, i.e. v=0.01 m s⁻¹, so that the object may not gain kinetic energy? Some say that the acceleration should be zero or the velocity should be constant. But I know that for higher velocities there will be higher K.E. as well. Can we have a larger velocity?
3- If F(ext) is just balancing mg, then the net force is 0 and neither force seems able to do work: if F(ext) acts upward while mg exerts an equal force downward, the situation is similar to an object moving with constant velocity due to its inertia. Then why is work attributed either to F(ext) as W=mgh or to F(gravity) as W=−mgh?
Answer:
$\Delta \vec F$ is unnecessary. Simply start with some initial $\vec v\ne 0$. There is no requirement that $\vec v$ should be small and certainly no requirement that it be 0 initially. Since the net force is 0, there is no acceleration and the object simply continues moving at a constant $\vec v$.
If you get rid of the unnecessary $\Delta \vec F$ then automatically there is no $\Delta \vec v$.
I think in terms of power. The external force is upwards and $v$ is upwards, so $P_{ext}=\vec F_{ext} \cdot \vec v$ is positive while $P_g=\vec F_g \cdot \vec v$ is negative. Mechanical power enters the object from the external force and leaves the object from the gravitational force. The net is zero, but the individual forces still transfer power. | {
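Putting rough numbers on the power bookkeeping described above (a sketch: the mass m = 1 kg is an assumed value, not from the question; v = 0.01 m/s is the question's constant speed):

```java
public class PowerBookkeeping {
    // Power delivered by the external (upward) force at constant upward speed v.
    public static double externalPower(double m, double g, double v) {
        return m * g * v;
    }

    // Power removed by gravity: force -m*g dotted with the upward velocity v.
    public static double gravityPower(double m, double g, double v) {
        return -m * g * v;
    }

    public static void main(String[] args) {
        double m = 1.0, g = 9.8, v = 0.01; // assumed mass; v from the question
        double pExt = externalPower(m, g, v);
        double pGrav = gravityPower(m, g, v);
        // Net power is zero (no kinetic-energy change), yet each force transfers energy:
        System.out.printf("P_ext = %.4f W, P_grav = %.4f W, net = %.4f W%n",
                pExt, pGrav, pExt + pGrav);
        // Over t seconds the object rises h = v*t, and W_ext = P_ext*t equals m*g*h:
        double t = 10.0, h = v * t;
        System.out.printf("W_ext = %.4f J, m*g*h = %.4f J%n", pExt * t, m * g * h);
    }
}
```

The work done by the external force over any interval exactly matches the potential energy m*g*h gained, which is the storage the question asks about.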
"domain": "physics.stackexchange",
"id": 90548,
"tags": "newtonian-mechanics, work, potential-energy"
} |
Json type-safe builders using jackson-annotations | Question: I have created these builders to help creating fluent acceptance tests for my api.
I would really like suggestions for improvements because this is supposed to be a "how to" project on creating acceptance tests.
Questions:
Could this be improved?
Should asJson be catching the exceptions to avoid polluting it's users?
Should the static factory methods be creating the default values or the builders?
Do you like the naming conventions Builder, I feel like they are silly when there is no asJson method but they are still only there to be a part of creating json...
public class HolidayBuilder {
@JsonProperty
private String name;
@JsonProperty
private String date;
@JsonProperty
private String duration;
public HolidayBuilder withName(String name) {
this.name = name;
return this;
}
public HolidayBuilder withDate(String date) {
this.date = date;
return this;
}
public HolidayBuilder withDuration(String duration) {
this.duration = duration;
return this;
}
public String asJson() {
try {
return new ObjectMapper().writeValueAsString(this);
} catch (JsonProcessingException e) {
throw new RuntimeException("Could not create json: " + e.getMessage());
}
}
public static HolidayBuilder aHoliday() {
return new HolidayBuilder().withDuration("1d");
}
}
public class WorkAttributeBuilder {
@JsonProperty
private int id;
@JsonProperty
private String key;
@JsonProperty
private WorkAttributeTypeBuilder type;
@JsonProperty
private String name;
public WorkAttributeBuilder() {
type = aAttributeType();
}
public WorkAttributeBuilder withId(int id) {
this.id = id;
return this;
}
public WorkAttributeBuilder withKey(String key) {
this.key = key;
return this;
}
public WorkAttributeBuilder withoutKey() {
this.key = null;
return this;
}
public WorkAttributeBuilder withType(WorkAttributeTypeBuilder type) {
this.type = type;
return this;
}
public WorkAttributeBuilder withName(String name) {
this.name = name;
return this;
}
public String asJson() {
try {
return new ObjectMapper().writeValueAsString(this);
} catch (JsonProcessingException e) {
throw new RuntimeException("Could not create json: " + e.getMessage());
}
}
public static WorkAttributeBuilder aWorkAttribute() {
return new WorkAttributeBuilder().withName("Some Name");
}
}
public class WorkAttributeTypeBuilder {
@JsonProperty
private String value;
public WorkAttributeTypeBuilder(String value) {
this.value = value;
}
public String getValue() {
return value;
}
public static WorkAttributeTypeBuilder aAttributeType() {
return new WorkAttributeTypeBuilder("ACCOUNT");
}
}
public class WorkAttributeValueBuilder {
@JsonProperty
private long worklogId;
@JsonProperty
private String value;
@JsonProperty
private WorkAttributeBuilder workAttribute;
public WorkAttributeValueBuilder withWorklogId(long worklogId) {
this.worklogId = worklogId;
return this;
}
public WorkAttributeValueBuilder withValue(String value) {
this.value = value;
return this;
}
public WorkAttributeValueBuilder withWorkAttribute(WorkAttributeBuilder workAttribute) {
this.workAttribute = workAttribute;
return this;
}
public String asJson() {
try {
return new ObjectMapper().writeValueAsString(this);
} catch (JsonProcessingException e) {
throw new RuntimeException("Could not create json: " + e.getMessage());
}
}
public static WorkAttributeValueBuilder aWorkAttributeValue() {
return new WorkAttributeValueBuilder();
}
}
Answer: Your code is overall pretty nice. A few things I noted:
Upon first reading, I did a double-take at reading the withFoo methods that your fluent API uses. This is because there's an idiom where withFoo is sometimes a "setter" for an immutable object (which creates a new version with the update info). But after that, I read the contents of the methods and it made sense.
This is not the best way to rethrow exceptions:
public String asJson() {
try {
return new ObjectMapper().writeValueAsString(this);
} catch (JsonProcessingException e) {
throw new RuntimeException("Could not create json: " + e.getMessage());
}
}
Notice that RuntimeException has a constructor which takes a throwable reason as well. Thus it would be much better to implement this like so:
public String asJson() {
try {
return new ObjectMapper().writeValueAsString(this);
} catch (JsonProcessingException e) {
throw new RuntimeException("Could not create json.", e);
}
}
Don't worry; it still has the message from e. Note that your other classes also have this problem.
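As a quick demonstration that chaining preserves the original failure (a minimal sketch, not part of the reviewed code), note that the wrapped exception carries both its own message and the full cause:

```java
public class ChainDemo {
    // Rethrow with the cause attached instead of flattening it into the message.
    public static RuntimeException wrap(Exception e) {
        return new RuntimeException("Could not create json.", e);
    }

    public static void main(String[] args) {
        RuntimeException wrapped = wrap(new IllegalStateException("low-level failure"));
        // wrapped.printStackTrace() would show a "Caused by:" section
        // with the original exception type, message, and stack trace.
        System.out.println(wrapped.getMessage());
        System.out.println(wrapped.getCause().getMessage());
    }
}
```

With string concatenation you keep only the message; with the cause constructor you also keep the original type and stack trace.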
What's with the name of this method?
public static HolidayBuilder aHoliday() {
return new HolidayBuilder().withDuration("1d");
}
What does aHoliday mean? Your other classes have similarly named methods, and I don't understand what they are supposed to do. You should give them proper names. | {
"domain": "codereview.stackexchange",
"id": 22880,
"tags": "java, design-patterns"
} |
Local Coordinate Expressions for Lie Derivatives | Question: I'm currently working through the math chapters of Norbert Straumann's book on General Relativity. I have trouble understanding the coordinate expression of the Lie derivative of a basis vector. The equation in the book reads:
$$L_X\partial_i = [X,\partial_i] = -X^j_{,i}\partial_j\tag{1}$$
with a vector field $X$, local coordinates $\{x^i\}$ and the base $\{\partial_i:=\partial/\partial x^i\}$.
I don't understand the second step. Writing out the commutator, we get
$$[X,\partial_i] = [X^j\partial_j,\partial_i] = X^j\partial_j\partial_i - \partial_i X^j\partial_j. \tag{2}$$
The second term here is the same as the result given in the book, suggesting that $X^j\partial_j\partial_i$ would be 0. I've thought for quite some time about this, but I still don't understand why this would be the case. Could someone please shed some light on this?
Answer: One still has to apply the Leibniz rule to the second term $$-\partial_i X^j\partial_j=-X^j\partial_i\partial_j -X^j_{,i}\partial_j$$ in eq. (2), because the derivative $\partial_i $ also acts further to the right, thereby yielding eq. (1). See also this related Phys.SE post. | {
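Spelling out the computation the answer describes, applied to a test function $f$ (the Leibniz step is the middle line):

```latex
\begin{aligned}
[X,\partial_i]f &= X^j\partial_j(\partial_i f) - \partial_i\!\left(X^j\partial_j f\right) \\
&= X^j\partial_j\partial_i f - X^j_{,i}\,\partial_j f - X^j\partial_i\partial_j f \\
&= -X^j_{,i}\,\partial_j f,
\end{aligned}
```

since partial derivatives commute, $\partial_i\partial_j f = \partial_j\partial_i f$. So the first term of eq. (2) is not zero on its own; it cancels against the second-derivative piece that the Leibniz rule produces from $-\partial_i(X^j\partial_j f)$.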
"domain": "physics.stackexchange",
"id": 26877,
"tags": "differential-geometry, differentiation, vector-fields"
} |
what is adc_timer | Question:
In the rosserial_arduino oscilloscope example there is the following line:
long adc_timer;
does anyone know what adc_timer is or how it's being used?
Thanks!
Originally posted by magdes on ROS Answers with karma: 11 on 2012-07-12
Post score: 1
Answer:
Hi,
I've done this tutorial as well, and I don't see any use for that line in the code. I am certain if you comment it out (go ahead and try) that it will compile just fine.
The best reason I could see for something like this is to put a timestamp on the message, but you don't need a separate variable to keep track of that value, since you can simply use the nodehandle timestamp call: nh.now();
If you're interested in adding a time stamp, you want to modify your message with a header. I had written up a similar thing on my blog that outputs the oscilloscope/ADC values in volts instead of counts, and includes the time stamp. If you're interested you can check it out here:
Custom ROS Messages with rosserial_arduino.
Regards,
Doug
Originally posted by dougbot01 with karma: 342 on 2012-07-13
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 10174,
"tags": "ros, arduino, rosserial, tutorial"
} |
Does the Hubble constant depend on redshift? | Question: I know there are lot of questions on the Hubble constant already, but I am curious to know if it changes with redshift? If at current redshift, $z=0$, we know its value to be 0.7, will it be different at higher redshift ($z=0.1$)? If so, is there any relationship with redshift?
Answer: Yes, definitely.
The Hubble constant describes the expansion rate of the Universe, and the expansion may, in turn, may be decelerated by "regular" matter/energy, and accelerated by dark energy.
It's more or less the norm to use the term Hubble constant $H_0$ for the value today, and Hubble parameter $H(t)$ or $H(a)$ for the value at a time $t$ or, equivalently, a scale factor $a = 1/(1+z)$, where $z$ is the redshift.
The value is given by the Friedmann equation:
$$
\frac{H^2(a)}{H_0^2} =
\frac{\Omega_\mathrm{r}}{a^4} +
\frac{\Omega_\mathrm{M}}{a^3} +
\frac{\Omega_k}{a^2} +
\Omega_\Lambda,
$$
where
$\{
\Omega_\mathrm{r},
\Omega_\mathrm{M},
\Omega_k,
\Omega_\Lambda
\} \simeq
\{
10^{-3},0.3,0,0.7
\}
$
are the fractional energy densities in radiation, matter, curvature, and dark energy, respectively.
For instance, you can solve the above equation at $z=0.1$ and find that the expansion rate was 5% higher than today.
Since everything but dark energy dilutes with increasing $a$, $H(a)$ will asymptotically converge to a value $H_0\sqrt{\Omega_\Lambda} \simeq 56\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$.
The figure below shows the evolution of the Hubble parameter with time:
As noted by KenG, the fact that $H$ decreases with time may seem at odds with the accelerated expansion of the Universe. But $H$ describes how fast a point in space at a given distance recedes. Later, that point will be farther away, and so will recede faster. From the definition of the Hubble parameter, $H\equiv\dot{a}/a$, multiplying by the scale factor shows the acceleration $da/dt$: | {
"domain": "astronomy.stackexchange",
"id": 2877,
"tags": "hubble-constant"
} |
Mechanism for basic hydrolysis of α-chloronitrile to ketone? | Question:
In Corey's 1969 prostaglandin synthesis,1 compound 2 is formed from compound 1 by:
[...] treatment with 2.5 equiv of potassium hydroxide (added as a hot saturated aqueous solution) in dimethyl sulfoxide for 14 hr at 25-30°.
Can anybody explain the mechanism of this reaction? I thought of an $\mathrm{S_N2}$ reaction, followed by nitrile hydrolysis and elimination of $\ce{CO2}$. But $\mathrm{S_N2}$ there seems impossible, and the elimination of HCl should be more reasonable than decarboxylation.
Reference
(1) Corey, E. J.; Weinshenker, N. M.; Schaaf, T. K.; Huber, W. Stereo-controlled synthesis of dl-prostaglandins F2α and E2. J. Am. Chem. Soc. 1969, 91 (20), 5675–5677. DOI: 10.1021/ja01048a062.
Answer: Your analysis is slightly incorrect, actually. If you do a $\mathrm{S_N2}$ on the chloride, hydrolyse the nitrile, and decarboxylate, you end up with the wrong oxidation state:
Instead, if you actually get to the cyanohydrin, you could (in theory) simply lose $\ce{CN-}$ to form the ketone:
However, a $\mathrm{S_N2}$ reaction at that centre is going to be difficult, to say the least.
The reaction has been studied by Shiner et al.,1 who determined that the primary site of attack is likely to be the nitrile. They wrote:
No mechanistic studies of the hydrolysis step have previously been reported, although pathways involving nucleophilic displacement of chloride or cyanide, affording intermediate cyano- or chlorohydrins, have been proposed. A priori we regarded both $\mathrm{S_N2}$ and $\mathrm{S_N1}$ displacements as improbable in these systems.
The immediate product of nitrile hydrolysis is the amide, and the α-chloroamide intermediates were actually isolated:
The mechanism after this involves a cyclisation of the amide onto the $\ce{C-Cl}$. The exact steps are not entirely clear. However, the following was proposed:
Reference
(1) Shiner, C. S.; Fisher, A. M.; Yacoby, F. Intermediacy of α-chloro amides in the basic hydrolysis of α-chloro nitriles to ketones. Tetrahedron Lett. 1983, 24 (51), 5687–5690. DOI: 10.1016/S0040-4039(00)94173-X. | {
"domain": "chemistry.stackexchange",
"id": 7717,
"tags": "organic-chemistry, reaction-mechanism, synthesis, carbonyl-compounds, hydrolysis"
} |
What Species of Duck/Waterfowl is this | Question: I snapped a picture of this beaut in Irvine, CA, but I do not believe it is in the list given in "Ducks at a Distance: A Waterfowl Identification Guide" by Bob Hines (http://www.gutenberg.org/ebooks/18884). What species of duck is this? I believe it has the shape of a duck, but the colors are unfamiliar to me.
Answer: I think this is an Egyptian Goose (Alopochen aegyptiacus), see the images (the first from the Wikipedia article, the second from here, interestingly also taken in Irvine):
You can see the distinctive spot around the eye and also the colored feathers on the back. Also, this bird is too big to be a duck. | {
"domain": "biology.stackexchange",
"id": 5772,
"tags": "species-identification, ornithology"
} |
adjust mean of signal using exponential | Question: I have discrete signals whose values are between 0 and 1.
I wish to post-process such a signal such that the mean of its values equals 0.5, yet keeping maximum and minimum values 0/1 intact.
Intuitively, I think an 'element-wise' exponential would be the right approach, yet I can't seem to come up with the method to determine the value of the appropriate exponential.
Fictive matlab example to illustrate:
>> s=[0.2 0.7 1 0.5 0.4 0.2 0 0.3];
>> mu=mean(s)
mu = 0.41250
>> s_adjusted=s.^0.69998;
>> mu_adjusted=mean(s_adjusted)
mu_adjusted = 0.50000
What is the best method to determine the 0.69998 exponent?
Answer: You need to do it numerically, as a related toy problem $2^a + 3^a = 4$ appears to have no symbolic solution. Bisection method (binary search) is probably the easiest solver to implement:
minexponent = 0;
maxexponent = 2;
targetsum = 10;
precision = 32;
loop precision times {
exponent = (minexponent + maxexponent)*0.5;
if (sum(s.^exponent) < targetsum) {
maxexponent = exponent;
} else {
minexponent = exponent;
}
}
return exponent;
The smallest possible exponent value is 0 for your problem. If you are unsure about the largest possible exponent value, you can test the easy-to-check exponents 1, 2, 4, 8, 16, etc. (as many as needed before the sum goes below the target) before running the above code, computing those powers cheaply by repeatedly squaring each element into an auxiliary array.
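As a runnable counterpart to the pseudocode above, here is a Python sketch (the bracket $[0, 2]$ and the 60 iterations are my assumptions), applied to the example data from the question:

```python
def solve_exponent(s, target_mean=0.5, lo=0.0, hi=2.0, iters=60):
    """Bisection for the exponent a such that mean(x**a for x in s) == target_mean.

    Assumes all values of s lie in [0, 1], so the mean of s**a is a
    decreasing function of a, and the root is bracketed by [lo, hi].
    """
    target_sum = target_mean * len(s)
    for _ in range(iters):
        a = 0.5 * (lo + hi)
        if sum(x ** a for x in s) < target_sum:
            hi = a  # exponent too large: values shrank too much
        else:
            lo = a
    return 0.5 * (lo + hi)

s = [0.2, 0.7, 1, 0.5, 0.4, 0.2, 0, 0.3]
a = solve_exponent(s)
# a ≈ 0.69998, matching the question's value
```

With the example data this converges to the same exponent the question found by hand.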
Note that if all of the elements are zero, or if all of the elements are one, then you cannot get any other mean than that by exponentiation.
The plot below shows 1000 runs of the algorithm, and the value of exponent at each iteration. | {
"domain": "dsp.stackexchange",
"id": 3105,
"tags": "discrete-signals, signal-analysis, statistics"
} |
Average Time Complexity of Searching An Array | Question: Why is the average time complexity of searching an array $O(n)$? Is it because, if the element does not exist, then $n$ comparisons must be done? And if the element is at the end of the array, $n$ comparisons must also be done, so $\frac{2n}{2} = n$?
Answer: It sounds like you're talking about a linear search. If we assume that the key is equally likely to be in any of the $n$ locations in the array, then the expected location is $\frac{n+1}{2}$. A linear search for this element would then require $O(\frac{n+1}{2}) = O(n)$ time. | {
"domain": "cs.stackexchange",
"id": 4113,
"tags": "algorithm-analysis, runtime-analysis, average-case"
} |
How often can you reuse Coomassie stain? | Question: I usually heat it to a boil in the microwave, and stain for 12 minutes. How often can I reuse this buffer?
Answer: Since there's a lot of methanol in Coomassie stain, a significant amount would probably evaporate off each time you microwave it. Therefore, it'd probably be a good idea not to reuse it because (1) you're losing methanol, and (2) the concentration of everything else is thrown off.
Why do you boil the stain? I just pour mine at RT right on the gel and it works fine. | {
"domain": "biology.stackexchange",
"id": 169,
"tags": "gel-electrophoresis"
} |
Sort an array that contain two sorted array in $O(n)$ | Question:
Given an array $A[1..n]$, where $A$ is a mix of two sorted arrays $B$ and $C$ of equal size, such that $B$ is in ascending order and $C$ is in descending order.
Consider the following example: $B=[2,4,6,8]$ and $C=[9,7,5,1]$. As a result, $A=[2,9,7,4,6,5,8,1]$. Interestingly, the instructor said $A$ can be sorted in $O(n)$ time. Can anyone give me a hint?
My attempt:
I tried to partition the array $A$ into sorted arrays $B$ and $C$, but I could not do it because my approach encountered counterexamples.
Answer: Let $a$ be the first element of $A$. Suppose $A$ is composed of the array $B = (b_{1},\dotsc,b_{n})$ and the array $C = (c_{1},\dotsc,c_{n})$.
Now we will partition $A$ into two arrays, $B'$ and $C'$, in increasing and decreasing order respectively. The process is as follows:
Step 1: Let $p$ be a pointer. Initially, $p$ points to $a$. Let $b$ be the element in $A$ next to $a$.
Step 2: There are two possibilities:
Case 1: $a<b$. Then one of them must be an element of $B$. In this case, if $a$ is greater than the last element of $B'$, add $a$ to $B'$, move the pointer to $b$, and go to step 1.
Otherwise, add $a$ to $C'$ and $b$ to $B'$, move the pointer to the element next to $b$, and go to step 1.
Case 2: $a>b$. Then one of them must be an element of $C$. In this case, if $a$ is smaller than the last element of $C'$, add $a$ to $C'$, move the pointer to $b$, and go to step 1.
Otherwise, add $a$ to $B'$ and $b$ to $C'$, move the pointer to the element next to $b$, and go to step 1.
The above process produces the array $B'$ in increasing order and $C'$ in decreasing order. However, note that these arrays might not be of the same size.
Correctness: Proof by induction.
Base Step: $A$ is composed of $B$ and $C$. $B'$ and $C'$ are empty right now. In Step $2$, Case 1, we add $a$ to $B'$. If it is indeed an element of $B$ then we are good. If not, then $b \equiv b_{1}$ and we can assume that $A$ is composed of a new array $B' = (a,b_{1},\dotsc,b_{n})$ having elements in the increasing order, and array $C' = C \setminus a$ having elements in the decreasing order. Therefore, adding $a$ to $B'$ is correct.
Similarly, we can prove the correctness of Step $2$, Case $2$.
And, the correctness of the further steps follows from the induction step.
After obtaining $B'$ and $C'$, you can merge them with a merge-sort-style merge step in $O(n)$ time.
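The final merge step, combining an ascending $B'$ and a descending $C'$ in $O(n)$, can be sketched as follows (my own Python illustration, not the answer's code):

```python
def merge_asc_desc(b, c):
    """Merge ascending b and descending c into one ascending list in O(n).

    c is consumed from its tail, so both inputs are read in ascending order.
    """
    i, j, out = 0, len(c) - 1, []
    while i < len(b) and j >= 0:
        if b[i] <= c[j]:
            out.append(b[i])
            i += 1
        else:
            out.append(c[j])
            j -= 1
    out.extend(b[i:])                 # leftover ascending part
    out.extend(reversed(c[:j + 1]))   # leftover descending part, reversed
    return out

# merge_asc_desc([2, 4, 6, 8], [9, 7, 5, 1]) → [1, 2, 4, 5, 6, 7, 8, 9]
```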
"domain": "cs.stackexchange",
"id": 17959,
"tags": "algorithms, data-structures, sorting"
} |
Meta-programming template for collection statistics | Question: I find that I often want to calculate summary statistics from collections of data and decided to create a template to do that. I'm interested in a general review, but I also have some specific questions.
Is the use of the lambda "too clever?"
Is std::accumulate OK? Specifically, I'm thinking of the potential for overflow for large collections of small data types.
Is the template missing anything?
Should I be using something such as std::enable_if to assure that the Iter type is actually an iterator?
The code to be reviewed is in stats.h but stats.cpp shows how it can be used.
stats.h
#ifndef STATS_H
#define STATS_H
#include <cmath>
#include <numeric>
#include <iterator>
template <typename T, typename Iter>
struct Statistics {
Statistics(Iter start, Iter end, bool sample=true) :
n(std::distance(start, end)),
sum(std::accumulate(start, end, 0)),
mean(sum / n),
variance([&start, &end, sample, this]()->T {
T sumsq = 0;
for (auto it=start; it != end; ++it) {
sumsq += (*it-mean)*(*it-mean);
}
return sumsq/(n- (sample ? 1 : 0));
}()
),
sdev(sqrt(variance))
{
}
unsigned n;
T sum;
T mean;
T variance;
T sdev;
};
#endif //STATS_H
stats.cpp
#include <iostream>
#include <vector>
#include <list>
#include "stats.h"
template <typename T>
std::ostream &print(std::ostream &out, const T &stats) {
out << "n = " << stats.n << '\n';
out << "sum = " << stats.sum << '\n';
out << "mean = " << stats.mean << '\n';
out << "var = " << stats.variance << '\n';
out << "sdev = " << stats.sdev << '\n';
return out;
}
int main()
{
std::vector<int> sample{9, 2, 5, 4, 12, 7};
Statistics<float, decltype(sample.cbegin())> stats(sample.cbegin(), sample.cend());
print(std::cout, stats);
std::list<float> sample2{2.0, 4.0, 5.0, 7.0, 9.0, 12.0};
Statistics<double, decltype(sample2.cbegin())> stats2(sample2.cbegin(), sample2.cend());
print(std::cout, stats2);
}
Sample output
n = 6
sum = 39
mean = 6.5
var = 13.1
sdev = 3.61939
n = 6
sum = 39
mean = 6.5
var = 13.1
sdev = 3.61939
Answer: Use of T for accumulator and result
I question the decision to use a template parameter T as the result and accumulator type. Especially in the calculation of the variance, using T as the accumulator and mean value will cause significant errors in the calculated standard deviation and variance; with an integral T, the mean value may be off by just under $1$. I cannot think of any situation where, if you are interested in statistics, you would want to use anything other than double.
Clever lambda
While the lambda trick is easy to understand, I question the necessity of it when the same code could have been implemented in the constructor body with less typing. That would also allow using double for the mean internally and only truncating to T on assignment, which would bound the error on the variance and standard deviation. Edit: at least if the result is within the domain of T.
Use of std::accumulate
For small types, say short the risk of overflow is large. For double you run the risk of truncation errors when there are many samples. For accuracy see my answer here: Function that normalizes a vector of double. The same applies here and depends on the level of accuracy that you require, seeing as it is statistics, I wouldn't be too worried.
What about min/max value?
One stat that I'm missing is the maximum and minimum value.
Performance
What if I don't need the standard deviation but only the mean? You might want to implement the sum, mean and stdev as separate classes so the user can choose if she only needs the mean for example.
Naming
I find the name Statistics a bit too generic. The name may suggest statistics about anything (like for example memory usage for the day) but here it means the sample count, sum, mean, variance and standard deviation.
Documentation
The variable sample wasn't immediately obvious that it was used to distinguish between sample and population statistics. A comment for the constructor explaining the arguments might be in order.
Checking for iterator type
The compiler will emit an error if the type used for the iterator isn't suitable, so strictly speaking this isn't really necessary if it complicates the code. Also, I'm not completely sure how you would test that a type fulfills the iterator concept in a useful way. I mean, you can test for the existence of std::iterator_traits<Iter>::iterator_category, but that can have false negatives.
But I never turn down a nice error message telling me what I'm doing wrong instead of a horrendous template substitution failure wall of text that MSVC emits... :) | {
"domain": "codereview.stackexchange",
"id": 14855,
"tags": "c++, c++11, template-meta-programming, statistics"
} |
Electro-gravitation - is it real? | Question: I came across an article claiming that if you charge two plates, one positive and one negative, and fasten them together (assuming they are insulated from each other), they will float in the air. I kind of got the idea that it depends directly on the voltage and it seems to require at least a hundred thousand volts. Is this actually possible, and if not, why not. I realize that this is asking for technical answers, and I definitely encourage that. If you give a big technical explanation, however, please summarize it in layman's terms as well.
T.T. Brown published a paper describing an experiment relating to it: How I control gravity. There are a few other papers published later (which I can't find links for right now) by T.T. Brown, as well as a Nasa proposal (Google nasa seop pdf).
Answer: The standard physical model taught in most universities today does not allow for electrogravitics, although there are several alternative theories which can provide explanations and even predictions for the phenomenon. Without weighing my opinion on one side or the other, here are a couple of books oft-recommended for one interested in the subject:
One with more narrative, written in a documentary-style,
and another which dives into many of the technical and theoretical details. | {
"domain": "physics.stackexchange",
"id": 5966,
"tags": "electromagnetism, gravity, newtonian-gravity"
} |
Dielectric with polar molecules | Question: Suppose a dielectric slab contains polar molecules (which are not further polarisable). When placed in an electric field, (for simplicity, an uniform field), align themselves according to the field.
Do they perfectly align so that all molecules have no net torque on them, or it is only a general arrangement which is less randomized and slightly prefers to have a net dipole moment according to the field?
What effects in a dielectric will determine the extent of alignment?
There was a very poorly executed attempt to explain this with the thermal energy of the molecules, i.e. $\tfrac{1}{2}k_B T$ per degree of freedom at a temperature $T$. Can someone explain how the strength of the field and the thermal energy interact to determine the extent of alignment?
Answer: The electric field means that the potential energy of the polar molecule depends on it's angle relative to the field and is given by:
$$ V(\theta) = -p E \cos\theta $$
where $p$ is the dipole, $E$ is the field strength and $\theta$ is the angle the dipole makes to the field lines. Obviously this has a minimum when the dipole is aligned with the field, so all else being equal the molecules will line up with the field.
However at any temperature greater than absolute zero there will be some thermal motion of the molecules, and interactions between molecules will randomly perturb the molecules away from their minimum energy conformation. The size of these perturbations is around $kT$ (per molecule), so if $kT$ is comparable with $2pE$ it will disrupt the alignment of the polar molecules and at temperatures significantly above $2pE/k$ the alignment will be lost completely. This means that generally speaking the dielectric constant will fall with increasing temperature. | {
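The competition between $pE$ and $kT$ can be made quantitative with the classical Langevin function (a standard textbook result, added here as illustration): the thermal average of the alignment is $\langle\cos\theta\rangle = \coth x - 1/x$ with $x = pE/kT$.

```python
import math

def langevin(x):
    """Mean dipole alignment <cos(theta)> = coth(x) - 1/x, with x = p*E/(k*T)."""
    if abs(x) < 1e-6:
        return x / 3.0                  # small-x (weak field) limit
    return 1.0 / math.tanh(x) - 1.0 / x

weak = langevin(0.01)    # pE << kT: partial alignment ≈ pE/(3kT)
strong = langevin(50.0)  # pE >> kT: alignment saturates toward 1
```

So for $pE \ll kT$ the net polarization is only the small fraction $pE/3kT$ of full alignment, consistent with the dielectric constant falling as the temperature rises.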
"domain": "physics.stackexchange",
"id": 8750,
"tags": "thermodynamics, electricity, electrostatics, dielectric"
} |
Are neurons in layer $l$ only affected by neurons in the previous layer? | Question: Are artificial neurons in layer $l$ only affected by those in layer $l-1$ (providing inputs) or are they also affected by neurons in layer $l$ (and maybe by neurons in other layers)?
Answer: It depends on the architecture of the neural network. However, in general, no, neurons at layer $l$ are not only affected by neurons at layer $l-1$.
In the case of a multi-layer perceptron (or feed-forward neural network), only neurons at layer $l-1$ directly affect the neurons at layer $l$. However, neurons at layers $l-i$, for $i=2, \dots, l$, also indirectly affect the neurons at layer $l$.
In the case of recurrent neural networks, the output of neuron $j$ at level $l$ can also affect the same neuron but at a different time step.
In the case of residual networks, the output of a neuron at a layer $l-i$, for $i=2, \dots, l$, can directly affect the neurons at layer $l$. These non-neighboring connections are called skip connections because they skip layers.
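As a toy illustration of that last point (my own minimal sketch, not from any framework), a residual block where the output of layer $l$ depends both on layer $l-1$ (through the dense layers) and directly on layer $l-2$ (through the skip connection):

```python
def dense(x, w):
    """Minimal fully connected layer: w is a list of weight rows."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def relu(v):
    return [max(0.0, x) for x in v]

def residual_block(x, w1, w2):
    """Layer l = dense(layer l-1) + skip connection from layer l-2 (x)."""
    h = relu(dense(x, w1))                # layer l-1
    y = dense(h, w2)                      # contribution from layer l-1
    return [a + b for a, b in zip(y, x)]  # plus the direct skip from l-2

# With zero weights in w2, the block just passes x through the skip:
out = residual_block([1.0, 2.0], [[1, 0], [0, 1]], [[0, 0], [0, 0]])
# out == [1.0, 2.0]
```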
There are probably other combinations of connections between neurons at different layers or the same layer. | {
"domain": "ai.stackexchange",
"id": 1604,
"tags": "neural-networks, machine-learning, artificial-neuron"
} |
Problem with momentum values in a QM problem | Question: I have the following equation of $Ψ$ around a ring (the particle is bound to move only on the ring):
To visualize the state (it dies off before $L/2$ if $L=2\pi R$):
We can see from the first picture that, when trying to find the probability for each $p$ to be measured, $A_p$ comes out as a continuous function of $p$. So here is my problem. If you analyze this problem, you find that the momentum can only take certain discrete values (because $Ψ(x+L)=Ψ(x)$), while $A_p$ is a continuous function of $p$, meaning that it gives us a probability for every $p$. Also, because the probability is $|A_p|^2$, we also have probabilities for values that $p$ cannot take, so if you sum up the probabilities of the discrete values that $p$ can take, the total will not be equal to 1.
So, what is going on? Did I misinterpret something about the values of the momentum?
Answer: First, realize we are doing an approximation when we are evaluating the coefficient $A_p$ in the Fourier series
$$ \psi(x) = \sum_p A_p \psi_p(x)$$
by the integral
$$ A_p = \int_{-\infty}^\infty \psi_p^*(x)\psi(x)\mathrm{d}x$$
since the limits should really be the boundary of the interval on which the Fourier series is periodic. Furthermore, $\psi(x) = \sqrt{\alpha}\mathrm{e}^{-\alpha\lvert x \rvert}$ is not an actual proper wavefunction on a circle of length $L$, since it is not periodic (but the function
$$ \phi(x) = \mathrm{e}^{-\alpha \lvert (x \mod L) \rvert}$$
would be, so if $\alpha L \gg 1$, we do not really make an error here, since its integral outside $[-L/2,L/2]$ then approximately vanishes). So the $A_p$ calculated like this should be a good approximation for the precise coefficient of the Fourier series if we plug in the allowed values for the momenta and just forget about the rest.
As you note in a comment, the coefficients obtained by this do not, at a first glance, sum to one if only summed over the discrete values. However, the integral test for convergence provides the following bound:
$$ \int_N^\infty f(x)\mathrm{d}x \le \sum_{i = N}^\infty f(i) \le f(N) + \int_N^\infty f(x)\mathrm{d}x$$
and if we let $N\to-\infty$, then $\lim_{N\to-\infty} A_N = 0$, and the sum over all coefficients is bounded below and above by the integral, so if the integral is $1$, the sum also is. | {
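One can check this numerically. Assuming the standard ring convention $\psi_p(x) = e^{ikx}/\sqrt{L}$, the integral above gives $A_k = \sqrt{\alpha/L}\,\frac{2\alpha}{\alpha^2 + k^2}$, and summing $|A_k|^2$ over only the allowed $k_n = 2\pi n/L$ does come out close to 1 (a sketch with arbitrarily chosen $\alpha$ and $L$ satisfying $\alpha L \gg 1$):

```python
import math

alpha, L = 5.0, 2 * math.pi   # alpha*L >> 1, so exp(-alpha|x|) fits inside the ring

def prob(k):
    """|A_k|^2 with A_k = sqrt(alpha/L) * 2*alpha / (alpha**2 + k**2)."""
    return (alpha / L) * (2 * alpha) ** 2 / (alpha ** 2 + k ** 2) ** 2

# Sum over the allowed discrete momenta k_n = 2*pi*n/L only:
total = sum(prob(2 * math.pi * n / L) for n in range(-2000, 2001))
# total ≈ 1, as the integral-test argument predicts
```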
"domain": "physics.stackexchange",
"id": 22679,
"tags": "quantum-mechanics, wavefunction, fourier-transform, probability, discrete"
} |
Why do some types of waves disperse? | Question: We know that some mediums/waves are non-dispersive such as air for sound waves, and waves on a string. But, why do some waves, for example deep water waves, disperse?
I am trying to understand the underlying physics behind the reason that the velocity of a water wave depends on the wavenumber $k$.
Answer:
But, why do some waves, for example deep water waves, disperse?
Dispersion can arise from several things. However, the basic fundamental idea is that the medium responds to the wave in some way (e.g., the wave excites a resonance in the medium).
Example: Plasmas and Electromagnetic Waves
In the case of a plasma, electromagnetic waves can locally polarize the medium, inducing small dipoles (or inducing currents, depending on the mode and the medium) that alter the propagation of the wave (e.g., reduce the phase speed). If the medium is non-dispersive, this is equivalent to saying that the medium's response time is negligible compared to the period of the wave (i.e., it is as if the currents were induced instantaneously). However, if the medium has a finite response time, then the phase speed of the wave will depend upon its frequency.
Two Types of Dispersion
There are two ways to think about dispersion, spatial and temporal. In the following I will use the word current to generally describe particle motions but it can just as well represent electric currents.
In spatial dispersion (still within a plasma), the total electromagnetic field at any given point is determined by the currents within a volume centered on that point. The larger the volume necessary to determine the field, the stronger the spatial dispersion.
In temporal dispersion (still within a plasma), the total electromagnetic field at any given point can depend upon currents from previous times. The longer the memory of these previous currents, the stronger the temporal dispersion.
Both of these are representations of the concept of non-locality, i.e., the wave properties at any given spatial and temporal position may not be independent of other spatial and temporal positions.
I am trying to understand the underlying physics behind the reason that the velocity of a water wave depends on the wavenumber k.
In the case of water waves, the non-locality I mentioned earlier is introduced by the orbits of individual fluid elements (or wave orbits) as a wave passes. The driving force is generally wind, which generates inhomogeneous pressure gradients over the surface of the water. The restoring force is gravity (at short wavelengths, surface tension starts to matter and the waves are then called capillary waves). The general dispersion relation for gravity waves is:
$$
\omega^{2} = g \ k \ \tanh{\left( k \ h \right)} \tag{1}
$$
where $\omega$ is the angular frequency, $g$ is the acceleration of gravity, $k$ is the wavenumber, and $h$ is the water depth.
In shallow water (i.e., when the water depth is much less than the wavelength, $\lambda$), the wave orbits are compressed into ellipses and the wavelength no longer matters in the dispersion relation. Then the phase speed reduces to (i.e., $\tanh{x} \rightarrow x$):
$$
\frac{\omega}{k} \equiv V_{ph} \approx \sqrt{g \ h} \tag{2}
$$
which has no frequency dispersion.
In the case of deep water waves (basically gravity waves), the orbits are not affected by the lake/sea/ocean floor and gravity acts as a restoring force during the fluid element orbits (or wave orbits). Then the phase speed reduces to (i.e., $\tanh{x} \rightarrow 1$):
$$
\begin{align}
V_{ph} & \approx \sqrt{\frac{g}{k}} \tag{3a} \\
& = \sqrt{\frac{g \ \lambda}{2 \ \pi}} \tag{3b} \\
& = \frac{g}{\omega} \tag{3c}
\end{align}
$$
The basic idea for why the phase velocity depends upon the wavelength in a deep water wave is similar to that of a linear pendulum, since gravity is the restoring force in both cases. One can imagine that the pendulum length is analogous to the wave's wavelength, and you have the equation for a simple harmonic oscillator.
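The two limits of equation (1) are easy to verify numerically (a sketch; the 10 m wavelength and the two depths are arbitrary choices):

```python
import math

g = 9.81  # m/s^2

def phase_speed(k, h):
    """V_ph from the gravity-wave dispersion relation omega^2 = g*k*tanh(k*h)."""
    return math.sqrt(g * math.tanh(k * h) / k)

k = 2 * math.pi / 10.0            # wavenumber for a 10 m wavelength

deep = phase_speed(k, h=1000.0)   # k*h >> 1: V_ph → sqrt(g/k) ≈ 3.95 m/s
shallow = phase_speed(k, h=0.01)  # k*h << 1: V_ph → sqrt(g*h) ≈ 0.31 m/s
```

Note that the deep-water speed depends on $k$ (dispersion), while the shallow-water speed depends only on the depth $h$.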
"domain": "physics.stackexchange",
"id": 31991,
"tags": "waves, dispersion"
} |
Dynamic CSS insertion | Question:
I have a conf.inc.php file which has the following few lines.
// enable css using convention below:
// $css["pageName"] = ["cssName1", "cssName2", ...]
$css["all"] = ["style"];
$css["myPage"] = ["someStyle"];
$css["myOtherPage"] = ["otherStyle", "anotherStyle"];
// disable css
$exCss["myOtherPage"] = ["style"];
What this will do is give all of my pages the style.css. Then all the other keys of the array will give their respective page (e.g. "myPage.php" etc.) the styles listed in the array after the equals (e.g. "someStyle.css").
All of the styles are stored in a folder called "css" and there is no other place to store them as I have hard coded it in - this meant I didn't have to write relative paths in the multidimensional array.
The second array will exclude some styles from their respective page. In this instance the style called style.css is being excluded from the page named myOtherPage.php.
Note: The page names are from a function here. So they only read the file name and ignore extensions etc. This means that "myOtherPage" could also be "myOtherPage.php" or "myOtherPage.html". Although if you look at the following code, they will have to be PHP, since I use an include in the header of my PHP files.
// print css that are present in all pages
$a = (isset($css["all"])) ? count($css["all"]) : 0;
for ($i = 0; $i < $a; $i++)
if ((isset($exCss["all"]) && !in_array($css["all"][$i], $exCss["all"])) || (isset($exCss["$page"]) && !in_array($css["all"][$i], $exCss["$page"])) || (!isset($exCss["all"]) || !isset($exCss["$page"])))
echo (file_exists("{$css["all"][$i]}.css")) ? "<link rel=\"stylesheet\" type=\"text/css\" href=\"{$css["all"][$i]}.css\" />" : "";
This code is part of header.inc.php and this gets placed inside the <head> tags. This will add all the styles from the array providing that style is not listed in the $exCss array.
It is common for me to make things more difficult than they need to be...
This could be another case of that. I also tend to re-implement functions or processes that already exist.
Anyway, I thought I would share this little snippet. If anyone has anyway of improving it then I would love to hear it. Thanks as usual!
Answer: I'm not sure why you're handling so many CSS files. If you have one CSS file for your application, which is minified and cached on the user's device, you can reduce HTTP requests and overall loading time.
Anyway. If you need to handle lots of these files and keep this approach, you could simplify your code in terms of readability.
Create an array of default CSS files:
$css = ['a', 'b', 'c'];
Create an array where you store the individual files for each page:
$pageCSS = [
'page-a' => [
'add' => ['d', 'e', 'f'],
'remove' => [],
],
'page-b' => [
'add' => ['g', 'h', 'i'],
'remove' => ['a', 'b'],
],
];
Now you need to test whether a requested page is in this array. If the key exists, add all new files and remove not-needed default files using array_merge and array_diff:
$page = 'page-a';
if (array_key_exists($page, $pageCSS)) {
$css = array_diff(array_merge($css, $pageCSS[$page]['add']), $pageCSS[$page]['remove']);
}
Finally create all HTML elements. I find it easier to read, if you use single-quotes. This way you don't need to escape all double quotes with backslashes:
foreach($css as $file) {
print '<link rel="stylesheet" type="text/css" href="' . $file . '.css">';
} | {
"domain": "codereview.stackexchange",
"id": 25121,
"tags": "php, array"
} |
Calculating the number of steps it takes to get from point S to F | Question: I have the following Java class, called TheNewMario. It reads a path from a file and traverses the path from S to F. If the path is closed, we assume the runner can jump. The class returns the number of steps it takes to get from S to F.
I have added two path files. In the first example path it works fine, but when the path gets bigger it takes much longer to execute. I am fairly sure that processing a bigger path should not take longer than 30-60 seconds, but in my case it takes over 30 minutes and does not return an error. I use the BFS (breadth-first search) algorithm.
What can I do to optimize the performance of this?
public class TheNewMario {
private static int maxX = 0;
private static int maxY = 0;
private BufferedReader fileRead = null;
private static int[][] mapXY;
private HashSet visitedLocation = new HashSet();
public static void main(String[] args) {
TheNewMario mario = new TheNewMario();
mario.pathFileReader(args);
System.out.println(mario.bfs());
}
private int bfs() {
Queue<Node> queue = new Queue<Node>();
for (int y = 0; y < maxY; y++) {
for (int x = 0; x < maxX; x++) {
if (mapXY[x][y] == 2) {
queue.enqueue(new Node(x, y, 0, 0));
visitedLocation.add(new Node(x, y, 0, 0));
}
}
}
int disCount = 1;
while (!queue.isEmpty()) {
int queueSize = queue.size();
for (int i = 0; i < queueSize; i++) {
Node node = queue.dequeue();
if (mapXY[node.posX][node.posY] == 3) {
return disCount;
}
visitedLocation.add(node);
ArrayList<Node> results = generateNeighbours(node);
for (Node in : results) {
queue.enqueue(in);
}
}
disCount++;
System.out.print(".");
System.out.print(": " + disCount);
}
return -1;
}
private class Node {
final int posX;
final int posY;
final int velX;
final int velY;
public Node(int posX, int posY, int velX, int velY) {
this.posX = posX;
this.posY = posY;
this.velX = velX;
this.velY = velY;
}
@Override
public boolean equals(Object thatobject) {
Node that = (Node) thatobject;
if (this.posX == that.posX && this.posY == that.posY && this.velX == that.velX && this.velY == that.velY)
return true;
else return false;
}
public int hashCode() {
String hs = " " + posX + posY + velX + velY;
return hs.hashCode();
}
}
private ArrayList<Node> generateNeighbours(Node node) {
ArrayList<Node> neighbours = new ArrayList<Node>();
for (int x = -1; x < 2; x++) {
for (int y = -1; y < 2; y++) {
Node in = new Node(
node.posX + node.velX + x,
node.posY + node.velY + y,
node.velX + x,
node.velY + y
);
if (in.posX >= 0 &&
in.posX < maxX &&
in.posY >= 0 &&
in.posY < maxY &&
mapXY[in.posX][in.posY] %2 == 1 &&
!visitedLocation.contains(in)
) {
neighbours.add(in);
}
}
}
return neighbours;
}
private void pathFileReader(String[] args) {
File fileName = new File(args[0]);
int lineCount = 0;
String line;
if (!fileName.exists()) {
System.out.println(args[0] + " does not exist.");
return;
}
if (!(fileName.isFile() && fileName.canRead())) {
System.out.println(fileName.getName() + " cannot be read from.");
return;
}
try {
fileRead = new BufferedReader(new FileReader(fileName));
maxY = Integer.parseInt(fileRead.readLine());
maxX = Integer.parseInt(fileRead.readLine());
mapXY = new int[maxX][maxY];
while ((line = fileRead.readLine()) != null) {
for (int x = 0; x < maxX; x++) {
switch (line.charAt(x)) {
case 32:
mapXY[x][lineCount] = 1;
break;
case 79:
mapXY[x][lineCount] = 0;
break;
case 83:
mapXY[x][lineCount] = 2;
break;
case 70:
mapXY[x][lineCount] = 3;
break;
}
}
lineCount++;
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
}
Path1.txt
40
40
OSOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
O F
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
Path2.txt
80
80
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
S OOOO
S OOO
S OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO FFF OOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOO OOOOOOOOOOOOO OOOOOOOOOOOOO OOO OO
OO OOOOOOOO OOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OO
OO OO
OOO OOO
OOOO OOOO
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
Answer: Initialization / Prep work
Well one thing that immediately jumps out is your initialization. If you are really trying to squeeze out every bit of performance from your code, then this chunk that is done in the bfs method should be done when reading the file:
Queue<Node> queue = new Queue<Node>();
for (int y = 0; y < maxY; y++) {
for (int x = 0; x < maxX; x++) {
if (mapXY[x][y] == 2) {
queue.enqueue(new Node(x, y, 0, 0));
visitedLocation.add(new Node(x, y, 0, 0));
}
}
}
It's no wonder your code runs very slowly; just taking a look at all the work you did in that bfs method is enough to conclude that the performance will not scale very well with larger graphs. I almost have half a mind to say you should refactor by starting over.
Generating neighbours
To eliminate the need for that generateNeighbours method, you should be using an adjacency-list representation for your graph. This makes it so that each node already has a list of the other nodes it is connected to. It will also make it easier to extend this program to one that finds the shortest path to the finish, because you can represent the edges as objects and store the cost of going from one node to the other on the links joining nodes together.
Readability
Use informative enums to represent key aspects of the graph. This part of your code threw me off a bit.
switch (line.charAt(x)){
case 32:
mapXY[x][lineCount] = 1;
break;
case 79:
mapXY[x][lineCount] = 0;
break;
case 83:
mapXY[x][lineCount] = 2;
break;
case 70:
mapXY[x][lineCount] = 3;
break;
}
Apart from the 32, I had to look up what the other characters meant in ASCII, and even this required knowing that they are ASCII codes in the first place.
Something like this would have been much easier to understand at first glance
switch (line.charAt(x)){
case ' ':
mapXY[x][lineCount] = Point.SPACE;
break;
case 'O':
mapXY[x][lineCount] = Point.PATH;
break;
case 'S':
mapXY[x][lineCount] = Point.START;
break;
case 'F':
mapXY[x][lineCount] = Point.FINISH;
break;
}
Memory
Don't let the term visited set confuse you. Using a hashset is good and all, but it is a bit of overkill. Graphs and graph algorithms tend to take up a lot of space, so the best way to represent a visited set is not to use a set data structure but a boolean array.
The way to make this work is to number each node with values from 0 to n-1. Then for each visited node, you simply index into the array using the node's number and set that index to true. This is a more direct, efficient and (dare I say) faster alternative to a hashset.
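To make that concrete, here is a minimal sketch of the boolean-array idea (shown in Python for brevity, even though the post under review is Java; the node-numbering scheme, e.g. id = y * width + x on a grid, is an assumption, not something from the original code):

```python
# Visited bookkeeping with a boolean array instead of a hash set.
# Assumes nodes are numbered 0..n-1 (e.g. id = y * width + x for a grid map).
n = 10
visited = [False] * n

def visit(node_id):
    if visited[node_id]:
        return False          # already seen, skip it
    visited[node_id] = True   # O(1) array write, no hashing or boxing
    return True

print(visit(3))  # True  (first visit)
print(visit(3))  # False (already visited)
```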
Extras
You didn't make use of the vel properties of your node class. This may or may not be intentional, but every bit of efficiency helps, and initializing unused variables is definitely going to hurt performance.
Close a (Buffered)reader when you are done with it or avoid that scenario by making use of a try-with-resources statement
Do most of the initialization at the start of your program, so generating nodes and so on should be done earlier rather than later. This makes the code look cleaner than having initializations scattered here and there. | {
"domain": "codereview.stackexchange",
"id": 15106,
"tags": "java, performance, algorithm, search"
} |
Moist adiabatic lapse rate | Question: Wikipedia gives the following equation to calculate the moist adiabatic lapse rate $\Gamma_w$, assuming that there is only one condensible gas (water vapour) mixed in the "dry air":
$\Gamma_w = g\frac{\left(1+\frac{H_v r}{R_{sd}T} \right)}{\left(c_{pd} + \frac{H_{v}^2r}{R_{sw}T^2} \right)}$
Where:
$\Gamma_{w}$: moist adiabatic lapse rate [K/m]
$g$: gravitational acceleration [m/s2]
$H_{v}$: latent heat of vaporization of water [J/kg]
$R_{sd}$: specific gas constant of dry air [J/kg·K]
$R_{sw}$: specific gas constant of water vapour [J/kg·K]
$r={\frac {\epsilon e}{p-e}}$: mixing ratio of the mass of water vapour to the mass of dry air
$\epsilon = \frac{R_{sd}}{R_{sw}}$: ratio of the specific gas constant of dry air to the specific gas constant for water vapour = 0.622 [dimensionless]
$e$: water vapour pressure of the saturated air [Pa]
$p$: pressure of the saturated air [Pa]
$T$: temperature of the saturated air [K]
$c_{pd}$: specific heat of dry air at constant pressure [J/kg·K]
What's the form of this equation when the atmosphere composition differs from Earth's, thus allowing multiple condensible gases or even be entirely composed of only one condensible gas (for example 100% water vapour)?
Answer: A simple way to analyze the adiabatic lapse rate is to consider removing the parcel from its environment, changing its height $z$ in a gravitational field (corresponding to a potential energy $mgz$, with mass $m$ and gravitational acceleration $g$), and reinserting it into the environment.
In the absence of heat transfer (i.e., adiabatic conditions), the energy required to create a system at temperature $T$ and insert it into an environment—pushing the surroundings at pressure $P$ out of the way—is the enthalpy $H(T,P)$. Any increase in the potential energy is "paid for" by this enthalpy.
From this, we find that $H(T,P)+mgz$ is a constant for such adiabatic motion, i.e., $dh=-g\,dz$, with specific enthalpy $h$.
The change in specific enthalpy of an ideal gas is simply $dh=c_PdT$, so the dry adiabatic lapse rate is $$\Gamma_\text{dry}\equiv\left(\frac{dT}{dz}\right)_\text{dry}=-\frac{g}{c_P}.$$
(Sometimes the lapse rate is defined as the negative of this value; in this case, switch the signs here and below.)
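As a quick numerical sanity check, the dry rate above and the saturated-air formula quoted in the question can be evaluated for Earth; all constants below are assumed typical surface values for illustration, not taken from the post:

```python
# Rough Earth-surface values (assumed): g in m/s^2, c_p and L in J/(kg K) and
# J/kg, gas constants in J/(kg K), pressures in Pa, temperature in K.
g, c_p = 9.81, 1004.0
L, R_air, R_vap = 2.5e6, 287.0, 461.5
T, p, e = 288.0, 101325.0, 1700.0   # e ~ saturation vapour pressure at 15 C

gamma_dry = g / c_p                  # magnitude of the dry lapse rate, K/m

r = 0.622 * e / (p - e)              # saturation mixing ratio
gamma_moist = g * (1 + L * r / (R_air * T)) / (c_p + L**2 * r / (R_vap * T**2))

print(f"dry:   {1000 * gamma_dry:.1f} K/km")    # ~9.8 K/km
print(f"moist: {1000 * gamma_moist:.1f} K/km")  # ~4.7 K/km
```

The saturated rate comes out at roughly half the dry rate, consistent with the commonly quoted ~9.8 K/km dry and ~4-7 K/km moist values.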
Propagation of error tells us that for small changes in $g$ or $c_P$, $\Gamma_\text{dry}$ changes by
$$\frac{\Delta\Gamma_\text{dry}}{\Gamma_\text{dry}}=\left(\frac{1}{g_0}\right)\Delta g-\left(\frac{1}{c_{P,0}}\right)\Delta c_P.$$
Let's now consider a combination of ideal gases $i$. The change in enthalpy is $dH=\sum_i m_ic_{P,i}dT$:
$$\Gamma_\text{dry, mixture}\equiv\left(\frac{dT}{dz}\right)_\text{dry, mixture}=-\frac{g\sum_im_i}{\sum_im_ic_{P,i}}.$$
How about a combination of ideal gases $i$ saturated with a combination of vapors $j$ and suspended condensed matter $k$? The enthalpy is $H=H_0+\sum_i m_ic_{P,i}T+\sum_j m_j(c_{P,j}T+L_j)+\sum_k m_kc_{P,k}T$, where $L_j$ is the latent heat for condensation of vapor $j$. (For simplicity, I'm not considering successive phase changes, e.g., melting and then freezing. You can extend as you like.)
To recover the equation you show, assume that minimal water condenses or that precipitation falling from the parcel only minimally affects the enthalpy (sometimes called the pseudoadiabatic assumption), that only air and water vapor exist (total mass $m$, vapor mass $m_\text{vapor}$), and that the humidity only minimally affects the air specific heat capacity $c_P$:
$$H=H_0+mc_{P}T+m_\text{vapor}L;\tag{enthalpy expression}$$
$$dH=mc_{P}dT+dm_\text{vapor}L;\tag{differentiating}$$
$$dh(=-g\,dz)=c_{P}dT+\frac{dx_\text{vapor}}{dT}L\,dT=\left(c_{P}+\frac{dx_\text{vapor}}{dT}L\right)dT;\tag{per mass}$$
$$\frac{dT}{dz}=-\frac{g}{c_{P}+\frac{dx_\text{vapor}}{dT}L};\tag{rearranging}$$
where $x_\text{vapor}\equiv\frac{m_\text{vapor}}{m}$, which scales with the vapor partial pressure as $\frac{p_\text{vapor}}{P}$, so
$$\frac{dx_\text{vapor}}{dT}=\frac{x_\text{vapor}}{p_\text{vapor}}\frac{dp_\text{vapor}}{dT}-\frac{x_\text{vapor}}{P}\frac{dP}{dT};\tag{differentiating}$$
$$\frac{dx_\text{vapor}}{dT}=\frac{x_\text{vapor}}{p_\text{vapor}}\frac{dp_\text{vapor}}{dT}-\frac{x_\text{vapor}}{P}\frac{dP}{dz}\frac{dz}{dT};\tag{expanding}$$
$$\frac{dx_\text{vapor}}{dT}=\frac{x_\text{vapor}L}{T^2R_\text{vapor}}-\frac{x_\text{vapor}}{P}\frac{dP}{dz}\frac{dz}{dT};\tag{Clausius–Clapeyron}$$
$$\frac{dx_\text{vapor}}{dT}=\frac{x_\text{vapor}L}{T^2R_\text{vapor}}+\frac{x_\text{vapor}g}{TR_\text{air}}\frac{dz}{dT},\tag{barometric}$$
with some gas constants $R$, which ultimately reduces to
$$\Gamma_\text{saturated}\equiv\left(\frac{dT}{dz}\right)_\text{saturated}=-g\frac{1+\frac{x_\text{vapor}L}{TR_\text{air}}}{c_P+\frac{x_\text{vapor}L^2}{T^2R_\text{vapor}}},$$
which matches the form of your equation. | {
"domain": "physics.stackexchange",
"id": 94727,
"tags": "atmospheric-science, planets, weather, climate-science, condensation"
} |
Efficient way to read files python - 10 folders with 100k txt files in each one | Question: I am looking for an efficient way to read and append the text of .txt files to a dataframe. I currently have 10 folders with 100k documents each.
What I specifically need to do is:
getting the names of the files inside one folder (filenames contain important information like a unique ID for each company that issue the document and the date when the document was published, like this: "CIK_000000_DATE_14_12_2022.txt")
extract from the name of the files the unique ID (CIK) and the date
take the text related to the file
append these 3 pieces of information to a dataset so that the dataset appears like:
| CIK | date | text | serial |
| --- | --- | --- | --- |
| 000000 | 11/12/2012 | some text here... | 1 |
| 000001 | 14/11/2019 | some other text here... | 2 |
the folders are individually made like this:
TXT_01
|_ CIK__000000__DATE__14-12-2012__serial__0.txt
|_ CIK__000001__DATE__12-11-2019__serial__1.txt
|_ CIK__000001__DATE__11-12-2014__serial__2.txt
|_ CIK__000175__DATE__11-04-2011__serial__3.txt
|_ ...
TXT_02
|_ CIK__000135__DATE__11-04-2001__serial__100.txt
|_ CIK__000115__DATE__11-04-2001__serial__101.txt
|_ CIK__000145__DATE__11-04-2001__serial__103.txt
|_ CIK__000155__DATE__11-04-2001__serial__104.txt
|_ ...
...
The functions get the job done, but the time they take to finish a test folder with 4000 documents is way too much, considering that I need to do this for folders of 100k files.
I can provide more information if needed. I'm open to any advice, even learning faster programming languages in order to get this done.
thank you all
Here is the code I'm currently executing (consider that it comes from a Colab notebook):
folders = []
names = os.listdir()
for i in names:
if i.startswith('TXT'):
folders.append(i)
def get_file_names(folder):
return os.listdir(folder)
def get_cik_date(file):
cik = re.findall(r"CIK__([0-9]*)", file)[0].zfill(10)
    date = re.findall(r"DATE__(([0-9]*)-([0-9]*)-([0-9]*))", file)[0][0]
serial = re.findall(r"serial__([0-9]*)", file)[0]
return (cik, date, serial)
def get_text(file):
get_text.counter +=1
print(get_text.counter)
with open(file) as f:
lines = f.readlines()
return lines
get_text.counter = 0
file_names = list(map(get_file_names,folders))
folder_names = list(zip(folders,file_names))
cik_date_folder = [[get_cik_date(i) for i in j] for k,j in folder_names]
texts = [[get_text(f"{k}/{i}") for i in j] for k,j in folder_names]
test = [dict(zip(cik_date_folder[i], texts[i])) for i,n in enumerate(folders)]
df = pd.DataFrame.from_dict(test)
df = df.ffill().bfill().head(1).T
df_reset = df.reset_index()
df_reset = df_reset.rename({"index": "cik_date", 0:"text"}, axis = 1)
df_reset[['cik','dates', 'serial']] = pd.DataFrame(df_reset['cik_date'].tolist(),index=df_reset.index)
df_complete = df_reset.drop("cik_date", axis = 1)
def concat_list(text_as_list):
return " ".join(text_as_list)
df_complete["text"] = df_complete["text"].apply(concat_list)
Answer: I created a few functions for quickly setting up a "test environment" of N folders each with M files:
import random
import re
import os
from pathlib import Path
from string import ascii_letters
import itertools
from time import time
import pandas as pd
BASE = "C:\\some\\path\\"
DATE_POOL = pd.date_range(start="01-01-2000", end="01-01-2022")
FILE_LEN = 64
def create_filename(serial: int):
identifier = str(random.randint(0, 999999)).zfill(6)
stamp = random.choice(DATE_POOL).strftime("%m_%d_%Y")
return f"CIK__{identifier}__DATE__{stamp}__serial__{serial}.txt"
def create_file_setup(num_folders: int, num_files: int):
serial = 0
for i in range(num_folders):
subpath = os.path.join(BASE, f"TXT_{i}")
os.mkdir(subpath)
for j in range(num_files):
txt_file = os.path.join(subpath, create_filename(serial))
serial += 1
content = ''.join(random.choices(ascii_letters + ' ', k=FILE_LEN))
Path(txt_file).write_text(content)
So by running create_file_setup(10, 100_000) I create 10 folders each with 100k files, each following the naming scheme you described (or, at least, I deduced as well as I could). I don't know how large your files are, but I entered 64 characters into each. Maybe this can vary too, but you don't specify.
Now, just cleaning up your code slightly gives me:
def get_cik_date(filename: str):
_, identifier, _, date, _, serial = filename.split("__")
serial, _ = serial.split('.')
return identifier, date, serial
def build_dataframe():
folders = [os.path.join(BASE, p) for p in os.listdir(BASE)]
file_names = list(map(os.listdir, folders))
folder_names = list(zip(folders, file_names))
cik_date_folder = [[get_cik_date(i) for i in j] for _, j in folder_names]
texts = [[Path(f"{k}/{i}").read_text() for i in j] for k, j in folder_names]
df = pd.DataFrame(itertools.chain(*cik_date_folder), columns=["id", "date", "serial"])
df["text"] = list(itertools.chain(*texts))
return df
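As a quick sanity check, the split-based parsing can be exercised on its own — a self-contained copy of get_cik_date, using a sample filename taken from the question's folder listing:

```python
def get_cik_date(filename: str):
    # "CIK__000175__DATE__11-04-2011__serial__3.txt" -> ("000175", "11-04-2011", "3")
    _, identifier, _, date, _, serial = filename.split("__")
    serial, _ = serial.split(".")
    return identifier, date, serial

print(get_cik_date("CIK__000175__DATE__11-04-2011__serial__3.txt"))
# -> ('000175', '11-04-2011', '3')
```

A plain str.split like this avoids compiling and running three regular expressions per file, which adds up over a million filenames.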
If I now run df = build_dataframe() on my machine, the wall clock time is around 40 minutes. If you just need to do this processing once, this is probably much faster than spending time on learning to program in another language or spending time on optimizing your code. Just run it, do something else in the meantime, and come back to your freshly processed data. | {
"domain": "codereview.stackexchange",
"id": 44223,
"tags": "python, performance, pandas, database"
} |
What is a good non-technical introduction to theories of everything? | Question: I'm not a physicist but I'm interested in unified theories, and I do not know how to start learning about it. What would be a good book to read to start learning about this topic?
Answer: As a theory of everything includes a theory of all particular things, it would be good if you start by learning about the theories that need to be unified.
This means first
some quantum mechanics,
something about classical electromagnetism,
something about special and general relativity,
then
some quantum field theory,
something about quantum electrodynamics,
something about the standard model.
So you should look at the recommendations for introductions to these subjects available at our Book recommendations.
Unless you are content with such books as ''The elegant universe''
http://en.wikipedia.org/wiki/The_Elegant_Universe where you learn the buzzwords without a deeper understanding. | {
"domain": "physics.stackexchange",
"id": 77736,
"tags": "resource-recommendations, theory-of-everything, unified-theories"
} |
Recursive function and memoization to find minimum operations to transform n to 1 | Question: I'm a new, self-taught programmer working on the Google FooBar challenge. I've submitted my answer (code at bottom) and it was accepted, but I'd like suggestions on how to improve my solution.
Challenge: return minimum number of operations to transform a positive integer to 1. Valid operations are: n+1, n-1, or n/2.
My solution
I started with a recursive function, but ran into a runtime error with very large numbers. I added memoization using a global variable to store values that had already been computed, but this seems inelegant. (I think using global variables is discouraged?)
Suggestions on how I might improve the below solution?
paths = {1:0, 2:1}
def shortest_path(num):
if num in paths:
return paths[num]
if num % 2 == 0:
paths[num] = 1 + shortest_path(num / 2)
else:
paths[num] = min(2 + shortest_path((num+1)/2),
2 + shortest_path((num-1)/2))
return paths[num]
def answer(n):
num = int(n)
return shortest_path(num)
Test cases:
n = 15 --> 5
n = 293523 --> 25
n = 191948125412890124637565839228475657483920292872746575849397998765432345689031919481254128901246375658392284756574839202928727465758493979987654323456890319194812541289012463756583922847565748392029287274657584939799876543234568903 --> 1029
The input number can be up to 309 digits long hence the final test case
Answer: Sure, you could remove the global variable by moving the memoization to a decorator and adding a base case scenario to the function.
from functools import wraps
def memoize(func):
cache = {}
@wraps(func)
def inner(*args):
if args not in cache:
cache[args] = func(*args)
return cache[args]
return inner
@memoize
def shortest_path(num):
if num <= 2:
return num - 1
if num % 2 == 0:
return 1 + shortest_path(num/2)
else:
return min(2 + shortest_path((num + 1)/2),
2 + shortest_path((num - 1)/2))
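For what it's worth, in Python 3 the same memoization comes for free from functools.lru_cache; here is a self-contained sketch of the decorator approach above, using integer division so the cache keys stay ints:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def shortest_path(num: int) -> int:
    if num <= 2:
        return num - 1  # 1 needs 0 operations, 2 needs 1
    if num % 2 == 0:
        return 1 + shortest_path(num // 2)
    # for odd num, halving after a +1 or -1 step costs 2 operations
    return min(2 + shortest_path((num + 1) // 2),
               2 + shortest_path((num - 1) // 2))

print(shortest_path(15))      # 5
print(shortest_path(293523))  # 25
```

Note that for the 309-digit test case the recursion depth is on the order of log2(n), i.e. roughly a thousand frames, so sys.setrecursionlimit still needs raising.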
To address the recursion depth error, you could either change sys.setrecursionlimit, or you could rewrite your initial solution to use paths as an argument to avoid the global variable.
def shortest_path(num, paths=None):
paths = paths or {1: 0, 2: 1}
if num in paths:
return paths[num]
if num % 2 == 0:
paths[num] = 1 + shortest_path(num/2, paths)
else:
paths[num] = min(2 + shortest_path((num+1)/2, paths),
2 + shortest_path((num-1)/2, paths))
return paths[num]
Or, as suggested in the other answer, test the number modulo 4 to figure out which path to take and avoid recursion all together.
def shortest_path(n):
count = 0
while n > 1:
if n % 2 == 0:
count += 1
n = n / 2
elif n % 4 == 1 or n == 3:
count += 2
n = (n - 1) / 2
elif n % 4 == 3:
count += 2
n = (n + 1) / 2
return count | {
"domain": "codereview.stackexchange",
"id": 25990,
"tags": "python, python-2.x, recursion, dynamic-programming"
} |
How is a neutron produced in a hydrogen fusion? | Question: So during a fusion reaction, a hydrogen atom (which consists of only an electron and a proton) fuses with another hydrogen atom to produce deuterium, which contains a proton and a neutron. My question is: how does the neutron come out of nowhere when we have only 2 protons as the raw materials, and where does the other neutron go? I know it could be a silly question, but I searched and didn't understand the concept, so I came here to clarify my doubt.
Please explain in simple language.
Answer: Most of the time, proton-proton fusion results in the brief creation of a very unstable di-proton, which immediately decays into a pair of protons.
But very occasionally, during the brief moment they are together, one of the protons will undergo a weak force interaction, changing one of its quarks from up to down, hence making a neutron, and emitting a positron and an electron-neutrino in the process.
$$ p \rightarrow n + e^+ + \nu_e$$
The resulting proton-neutron pair can form a stable deuterium nucleus that is then the starting point for the production of helium. The creation of deuterium in this way is rare, because the conversion of a proton to a neutron is (a) endothermic, requiring energy; and (b) slow, moderated by the weak interaction, and thus the di-proton normally decays first. That is why the Sun will survive for about 10 billion years in total as a main sequence star, and why the centre of the Sun generates less heat per unit volume than a compost heap. | {
"domain": "physics.stackexchange",
"id": 32690,
"tags": "particle-physics, nuclear-physics, hydrogen"
} |
How do ROS nodes in microcontrollers work? | Question:
This is a general question.
As far as I know, when a microcontroller is powered, it automatically runs its main program. So when a ROS node is implemented on a microcontroller, e.g. an Arduino, it runs the node even when it is not connected to any ROS master or to a PC (at least that's how I understand it).
My question is how does the ROS master discovery mechanism work on the microcontroller? Or in which file of the rosserial library is it implemented?
Originally posted by didi1311 on ROS Answers with karma: 18 on 2020-03-04
Post score: 0
Answer:
rosserial is a protocol for wrapping standard ROS serialized messages and multiplexing multiple topics and services over a character device such as a serial port or network socket. It has client libraries that allow users to easily get ROS nodes up and running on various systems, and it also has ROS-side interfaces that let you set up a node on the host machine to bridge the connection from the serial protocol to the more general ROS network. It is not a ROS master discovery mechanism that works between the host machine and the microcontroller. When you write your own project on a microcontroller, you will depend on rosserial's client libraries. If you work on a host machine with a non-ROS microcontroller, you have to write a package that is able to connect with the device.
Originally posted by vaspo with karma: 61 on 2020-03-09
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 34538,
"tags": "ros, rosserial"
} |
[ROS2 Foxy] Passing argument from one launch file to another | Question:
I have compartmentalized my launch files for different purposes; I currently have a main launch file from which I would like to call one of the others. I have partly done this to be able to separate some of the functionalities, but also so the main launch doesn't become too verbose.
The main launch file gets passed an argument for the vehicle platform architecture, such as to find a specific config file highlighting some necessary parameters. This argument that is to be passed is platform.
How can I pass this argument from the main launch file to its children?
I included an example of the main.launch.py so one can see how the platform arg is being used.
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions.declare_launch_argument import DeclareLaunchArgument
from launch.actions.include_launch_description import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node
def generate_launch_description():
# defining platform
# select either newholland_t7 or fort_rd25
platforms = ["newholland_t7", "fort_rd25"]
for id, plat in enumerate(platforms):
print(f"\t{id} for {plat}")
platform_id = int(input("Select a platform: "))
try:
platform = platforms[platform_id]
except IndexError as ex:
print(f"[ERROR]: {ex}, please select valid index")
return LaunchDescription()
print(f"Selected {platform}")
urdf_path = os.path.join(
get_package_share_directory("main_pkg"), "urdf", platform + ".urdf.xml"
)
with open(urdf_path, "r") as infp:
robot_description = infp.read()
urdf_publisher = Node(
package="robot_state_publisher",
executable="robot_state_publisher",
name="robot_state_publisher",
parameters=[{"robot_description": robot_description}],
arguments=[urdf_path],
output="screen",
)
danger_zone_config = os.path.join(
get_package_share_directory("agrirobot"), "config", platform + ".yaml"
)
visualize_markers = Node(
package="visualize_markers",
executable="visualize_markers_node",
name="visualize_markers",
parameters=[danger_zone_config],
output="screen",
)
# some other things
ld = LaunchDescription()
ld.add_action(urdf_publisher)
ld.add_action(visualize_markers)
return ld
Originally posted by morten on ROS Answers with karma: 198 on 2021-09-29
Post score: 1
Original comments
Comment by shonigmann on 2021-10-01:
I just realized that you are giving your robot_state_publisher both the URDF path AND the contents of the URDF file as robot_description. You only need one or the other, and the former is deprecated, so the latter is a better bet.
I also just realized that, for the use case outlined above, you can get away without the additional LaunchArguments. I will append a secondary answer that matches what you're going for a bit more directly (but is perhaps a bit less generalizable)
Answer:
You can add launch_arguments when you call IncludeLaunchDescription
That said, looking over your sample launch file, I would suggest you to use DeclareLaunchArgument instead of input() to define whatever inputs you want to the launch file, so they can be given without the need for user input.
Question #306935 gives an example (for Crystal, but should still hold up in Foxy).
Edit - Adding a quick example for clarity:
main.launch.py could contain
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions.declare_launch_argument import DeclareLaunchArgument
from launch.actions.include_launch_description import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node
def generate_launch_description():
# defining platform
platform_arg = DeclareLaunchArgument(
'platform',
default_value='newholland_t7', # default value is optional
description='select either newholland_t7 or fort_rd25')
# some other things
# include secondary launch file that takes in the platform argument
launch2 = IncludeLaunchDescription(
PythonLaunchDescriptionSource([get_package_share_directory('my_package'), '/secondary.launch.py']),
        launch_arguments=[('platform', LaunchConfiguration('platform'))],
)
ld = LaunchDescription()
ld.add_action(platform_arg )
ld.add_action(launch2 )
# add some other actions
return ld
and then secondary.launch.py could contain:
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions.declare_launch_argument import DeclareLaunchArgument
from launch.actions.include_launch_description import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node
from launch.substitutions import PathJoinSubstitution
def generate_launch_description():
# defining platform
platform_arg = DeclareLaunchArgument(
'platform',
default_value='newholland_t7', # default value is optional
description='select either newholland_t7 or fort_rd25')
# LaunchConfiguration concatenation is a bit indirect, but using additional launch arguments works well (per https://answers.ros.org/question/358655/ros2-concatenate-string-to-launchargument/ )
    platform_yaml = DeclareLaunchArgument('platform_yaml', default_value=[LaunchConfiguration('platform'), '.yaml'])
    platform_urdf = DeclareLaunchArgument('platform_urdf', default_value=[LaunchConfiguration('platform'), '.urdf.xml'])
# get path to some file depending on the launch argument
    platform_urdf_path = PathJoinSubstitution([get_package_share_directory('my_package'), 'urdf', LaunchConfiguration('platform_urdf')])
    platform_yaml_path = PathJoinSubstitution([get_package_share_directory('my_package'), 'config', LaunchConfiguration('platform_yaml')])
# some other things
ld = LaunchDescription()
ld.add_action(platform_arg)
# add some other actions
return ld
More compact solution:
main.launch.py would be the same...
and then secondary.launch.py could contain:
import ...
def generate_launch_description():
# defining platform
platform_arg = DeclareLaunchArgument(
'platform',
default_value='newholland_t7', # default value is optional
description='select either newholland_t7 or fort_rd25')
    platform_path = PathJoinSubstitution([get_package_share_directory('my_package'), 'urdf', LaunchConfiguration('platform')])
# get contents of urdf file using Command substitution and `cat` command to print file to output
robot_description = Command(['cat ', platform_path, '.urdf.xml'])
urdf_publisher = Node(
package="robot_state_publisher",
executable="robot_state_publisher",
name="robot_state_publisher",
parameters=[{"robot_description": robot_description}],
output="screen",
)
visualize_markers = Node(
package="visualize_markers",
executable="visualize_markers_node",
name="visualize_markers",
parameters=[{'parameter_name': [platform_path, '.yaml']}], # parameters are given as dicts; not sure what the intended parameter name is
output="screen"
)
...
The above works because a Node's 'arguments' take SomeSubstitutionType, which supports an iterable (a list) of Substitutions (e.g. LaunchConfigurations) and text (strings). 'parameters' take SomeParameters, which ultimately include SomeSubstitutionType, so they can also accept lists of Substitutions and text.
When either is evaluating the value of a given combination of substitutions and texts, they get concatenated!
Originally posted by shonigmann with karma: 1567 on 2021-09-29
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by morten on 2021-09-30:
How do I then parse the argument in the other file? Because I don't need the platform string in a ROS-compatible format, I need the string itself so I can parse out the file path
Edit: the example also doesn't seem to solve the problem, he finishes saying that "Maybe it has a method that returns the value- the perform() method wants a LaunchContext but I'm not sure where to get it."
Comment by shonigmann on 2021-09-30:
I edited above. Guess I didn't read the answer I linked completely.. To get the value of a LaunchArgument, you call LaunchConfiguration('argument_name'). It does need to be evaluated in context (e.g. by a ros launch action at runtime), but there are ways of getting around that if you do need the value in your own python functions (e.g. by using different Launch Substitutions or the most flexible but "opaque" way would be with OpaqueFunction)
Comment by shonigmann on 2021-09-30:
Edited again to show using the launch argument in a file path, based on your original code sample
Comment by morten on 2021-09-30:
It just seems very verbose to have to define separate launch arguments for the yaml and urdf files. But if thats what it takes I'll give it a try tomorrow. Thanks
Comment by shonigmann on 2021-09-30:
I agree - I'd love a more direct substitution for string concatenation. As an alternative, you can look into OpaqueFunction (sorry I don't have a better example handy, but this issues thread gives one). Any python functions you have inside an OpaqueFunction will be evaluated at runtime in context, so your string LaunchConfiguration can be used directly like a string (e.g. my_string = str(context.launch_configurations['my_arg']) then you can get away with my_string2 = my_string + '.urdf').
Comment by morten on 2021-10-01:
Just a final question, which library does context come from? Or what is the context object? Part of the original use case is checking if the platform is valid, so this is kind of outside the scope of a node.
Edit: Also, what is there any way to skip making additional launch arguments? To avoid making the argument list too messy for the launch file.
Comment by morten on 2021-10-01:
Is there anywhere to read documentation regarding substitutions and launch configurations? I can't seem to find any and it all feels a little needlessly complex, but that's likely just my lack of understanding.
Comment by shonigmann on 2021-10-01:
context comes from LaunchContext and captures the current state of the launch file - not necessarily a single node - including defined arguments, parameters, events, etc. LaunchArguments are a bit like Schrödinger's cat here - your code can't know what they'll be until you actually run the code, so all the substitutions are there as placeholders, waiting for that context.
But perhaps more importantly to your question, context is passed in automatically by the OpaqueFunction substitution. You give OpaqueFunction a function handle, and it expects that handle to accept a context input. Then inside that function, you can extract information about the current context to use as needed
Comment by shonigmann on 2021-10-01:
I wouldn't worry too much about having too many launch arguments if you do not intend them to be filled out manually. I proposed a few alternative approaches not using launch arguments, using PythonExpression and Command substitutions here but have yet to find a cleaner approach.
Comment by shonigmann on 2021-10-01:
Documentation is a bit scant on launch unfortunately. It was meant to be a Google Summer of Code project last summer, but I'm not sure it ever got taken up... The best official source I've found is: https://github.com/ros2/launch/blob/foxy/launch/doc/source/architecture.rst | {
"domain": "robotics.stackexchange",
"id": 36967,
"tags": "roslaunch"
} |
Neutralization reaction between ammonia and nitric acid | Question:
Complete and balance the following molecular equation (in aqueous solution); include phase labels. Then, write the net ionic equation.
$~\ce{NH_3 + HNO_3 -> ?}$
I thought that the acid $\ce{HNO3}$ would just give its hydrogen to $\ce{NH3}$ and make the resulting reaction:
$$\ce{NH_3 + HNO_3 -> HNH_3 + NO_3}$$
However the correct answer is $\ce{NH_3 + HNO_3 -> NH_4NO_3}$. Why is this?
Answer: Your answer is very close to the answer given, except for the following two tidbits (the first being more significant).
An acid-base reaction is not the exchange of a hydrogen atom $\ce{H}$. It is the exchange of a hydrogen ion (or proton) $\ce{H+}$. Thus your answer should be: $$\ce{NH3(aq) +HNO3(aq) -> NH4+(aq) + NO3-(aq)}$$
The given answer combines the two ions produced into a single compound. $$\ce{NH4+(aq) + NO3-(aq) ->NH4NO3(aq)}$$
The result of #2 would be reasonable if you had not been told that the reaction was occurring in aqueous solution. In aqueous solution, the products should have been those of #1, since ammonium nitrate is freely soluble in water. | {
"domain": "chemistry.stackexchange",
"id": 10245,
"tags": "acid-base, aqueous-solution"
} |
Converting JPG images to PNG | Question: I had a task to convert some JPG images into PNG format using Python. I successfully did it but I was wondering if somehow I could make my code better than my current one.
Code -
#Python Code
import os
import sys
from PIL import Image

path1 = sys.argv[1]
path2 = sys.argv[2]

if not os.path.exists(path2):
    os.makedirs(path2)

for filename in os.listdir(path1):
    if filename.endswith(".jpg"):
        # print(filename.split(".")[0])
        im = Image.open(f"{path1}/{filename}")
        # print(Image.open(f"{path1}/{filename}"))
        im.save(f"{path2}/{filename.split('.')[0]}.png", "png")
print("Lets go")
Answer: Not much to improve upon, just some minor housekeeping.
Help the end user!
If I was using this script, and forgot to put one or both of the filepaths when calling the script, there would be this error: IndexError: list index out of range. This error wouldn't help the user in a significant way. Consider this:
try:
path1 = sys.argv[1]
path2 = sys.argv[2]
except IndexError:
raise Exception("Usage: python3 jpg_to_png.py <path to input folder> <path to output folder>")
This gives the user a detailed message on how to use your script.
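A standard-library alternative to the hand-rolled IndexError handling (my suggestion, not something covered above) is argparse, which generates the usage and error messages for you:

```python
import argparse
from pathlib import Path

def parse_args(argv=None):
    # argparse prints a usage/help message on its own when arguments
    # are missing, or when the user passes -h/--help
    parser = argparse.ArgumentParser(description="Convert JPG images to PNG.")
    parser.add_argument("input_dir", type=Path, help="folder containing .jpg files")
    parser.add_argument("output_dir", type=Path, help="folder to write .png files")
    return parser.parse_args(argv)
```

Called with no arguments, the script then exits with a usage line naming the missing arguments instead of a traceback.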
Remove commented code.
Code you don't need anymore adds useless clutter, and can make your code harder to read. | {
"domain": "codereview.stackexchange",
"id": 37398,
"tags": "python, python-3.x, image"
} |
Given an EAN-8 or an EAN-13 code, tell if that code is valid | Question: I have the requirement to validate product codes which include EAN-8, EAN-13 and UPC-A encoded as EAN-13. I have implemented a use-case to perform this check by comparing the actual check digit with the computed one. I followed this explanation for EAN-13 and that one for EAN-8.
The reason I'm asking for a review is because I haven't worked with EANs before and also noticed that my previous versions that had logical flaws still passed the tests with some codes, so the error wasn't obvious.
Example: in an earlier version, I did not drop the last digit (which is the check digit), so it was mistakenly used in the calculation. The result was that the tests passed for the UPC-A (wrapped as EAN-13) code that can be seen in the test, but failed for a "real" EAN-13 and EAN-8, adding to my confusion.
Another example of a bad idea was giving ChatGPT a shot on solving the problem because the thing often contradicted itself and produced code that worked with some codes and didn't work with others - so be careful with that one because you might go from "I somewhat know what I'm doing and want to save time" to "I have no idea, throw everything away and start from scratch".
Note: the code can be seen in action on the Kotlin Playground
Description: the code format is valid if it consists of exactly 8 or 13 digits.
take the last digit - this is the reference check digit
take the input digits
remove the check digit
convert the rest to a List of Ints
compute the check digit
In the sumForEAN8() and sumForEAN13() extensions, you may notice that the calculation is exactly opposite to the specification; e.g. for EAN-13, I multiply odd digits by 3, whereas the spec says one should do that for even digits. The reason is that we use 0-based indices in programming, while the spec numbers the digits with a 1-based index. (I originally explained the swap by the removal of the check digit shifting the even/odd numbering, but that explanation is wrong: digits are iterated left-to-right, so dropping the last one cannot affect parity. The swap is still needed, just for the indexing reason above.)
internal class ValidateEANUseCase() {
private val eanFormatRegex = Regex("^\\d{8}|\\d{13}$")
fun run(ean: String): Boolean = when {
ean.hasInvalidFormat() -> false
else -> {
val actualCheckDigit = ean.last()
.digitToInt()
val computedCheckDigit = ean
.dropLast(1)
.map { char -> char.digitToInt() }
.computeEANCheckDigit()
actualCheckDigit == computedCheckDigit
}
}
private fun String.hasInvalidFormat(): Boolean = !this.matches(eanFormatRegex)
private fun List<Int>.computeEANCheckDigit(): Int {
val sum = if (size == 7) sumForEAN8() else sumForEAN13()
return (10 - (sum % 10)) % 10
}
private fun List<Int>.sumForEAN8(): Int = withIndex()
.sumOf { (index, digit) ->
if (index % 2 == 0) digit * 3 else digit
}
private fun List<Int>.sumForEAN13(): Int = withIndex()
.sumOf { (index, digit) ->
if (index % 2 == 0) digit else digit * 3
}
}
Below are the unit tests (all passing; tested refers to the instance of the class being tested). Any further inputs to consider?
@Test
fun `GIVEN a valid EAN-8 number THEN the validation result is TRUE`() {
val result = tested.run(VALID_EAN_8)
assertThat(result).isTrue
}
@Test
fun `GIVEN a valid EAN-13 number THEN the validation result is TRUE`() {
val result = tested.run(VALID_EAN_13)
assertThat(result).isTrue
}
@Test
fun `GIVEN a valid UPC-A number encoded as EAN-13 THEN the validation result is TRUE`() {
val result = tested.run(VALID_UPC_A_ENCODED_AS_EAN_13)
assertThat(result).isTrue
}
@Test
fun `GIVEN ANY invalid input THEN the validation result is FALSE`() {
val results = mutableListOf<Boolean>()
invalidInputs.forEach {
results.add(tested.run(it))
}
assertThat(results).allMatch { it == false }
}
private companion object {
const val VALID_EAN_13 = "5060762541604"
const val VALID_EAN_8 = "42307167"
const val VALID_UPC_A_ENCODED_AS_EAN_13 = "0670367004760"
val invalidInputs = listOf(
"", " ", "\n\t", "\\§$%&()=?ß`+´ü-!äö'#*´;¿\"@<>", "123", "12345678",
"5060962541604", "00123456789012", "hello EAN", "привет EAN", "你好 EAN"
)
}
Thank you.
Answer: You cite e.g. open.edu.
I would feel better about a citation to the (500 page!) EAN-{8,13} spec:
§ 7.9.1 and 7.9.5
(They are named GTIN-{8,13} there.)
internal class ValidateEANUseCase() {
I really like the initial portion of that identifier.
The second half seems awkward, but maybe y'all have
some convention this is conforming to, where a
restricted vocabulary that includes UseCase
is always used as a suffix.
Regex("^\\d{8}|\\d{13}$")
I don't understand that expression.
Or rather, I believe that instead of saying ^X|Y$
you intended ^(X|Y)$
BTW, the instinct to anchor the expression
is a good one, I applaud it.
If you're going to use just a single anchor,
the ^ caret offers the bigger win,
as it will typically do more to limit expensive backtracking.
The run function is laudably clear, thank you.
Maybe breaking out a hasInvalidFormat helper was overkill?
Maybe choose to define it in the positive, hasValidFormat,
and let caller negate if needed? Humans tend to reason
about the positive case better than they do about not not
negatives.
In computeEANCheckDigit this expression is interesting:
return (10 - (sum % 10)) % 10
It's not explicit in the code, but I assume we could assert sum >= 0.
I confess I do not understand this expression.
We have an integer 0 .. 9 inclusive,
which we subtract from ten.
And then we modulo it again? Why?
I have to believe that the difference is already within range.
Consider the 10 - 0 case.
Sorry, I was wrong, I withdraw the remark.
It's not like the spec asks for nine minus something.
I would very much appreciate seeing a unit test
that draws attention to this case.
Whenever we make a mistake, it is useful to memorialize
it in a (now Green!) test.
I'm sure that sumForEAN8 and sumForEAN13 are correct;
please ship them as-is.
But I will just say that I'm not super happy with them,
and I guess that feeling started with your wonderfully
forthright description up top about "exactly opposite".
Here's what I was looking for: a direct linkage to the spec.
In the language of the spec,
we're talking about N1 .. N7 for EAN8,
and N1 .. N12 for EAN13.
And then the spec offers integer multipliers, either 1 or 3,
in the "Multiply value of each position by" section.
Like I said, I believe the correct quantity winds up being calculated.
But I wouldn't mind seeing more direct traceability
to what the standards body actually specifies.
Also, given a pair of multiplier vectors,
I believe we could implement these with just a single function.
Look into the future for a moment.
Maybe a use case will come down the pike that requires EAN-14 support?
Common code that exploits commonality in the spec seems desirable.
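To make the single-function idea concrete, here is a sketch (in Python rather than Kotlin, function name mine) exploiting the commonality: in the spec's numbering, the data digit adjacent to the check digit always carries weight 3, alternating with weight 1 moving left, regardless of the code length:

```python
def is_valid_ean(code: str) -> bool:
    """Validate an EAN-8 or EAN-13 (incl. UPC-A zero-padded to 13 digits)."""
    if not code.isdigit() or len(code) not in (8, 13):
        return False
    *data, check = (int(c) for c in code)
    m = len(data)
    # weight is 3 when the 1-based position, counted from the right
    # end of the data digits, is odd; 1 otherwise
    total = sum(d * (3 if (m - i) % 2 == 1 else 1) for i, d in enumerate(data))
    return (10 - total % 10) % 10 == check
```

The same function would extend to EAN-14 by loosening the length check, which is the kind of future-proofing mentioned above.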
val result = tested.run(VALID_EAN_8)
assertThat(result).isTrue
This impresses me as being about twice as verbose as needed,
but fine, whatever, I'm sure there's some cultural Kotlin
thing going on here that I'm unfamiliar with. Always follow
the cultural norms. I would have just asserted the expression,
without defining a temp var.
The bigger issue is that, while I really like the
helpfully descriptive VALID_EAN_8 identifier,
I'm sad that the value is so far away in a companion object.
I would much rather see a line defining it right here
in the test function, given that the value isn't shared
with other functions. I like the immediacy of it,
no scrolling back and forth to see what's happening.
But again, if we're conforming to some cultural norm here,
good, so be it. Kind of like the practice of putting SPACE
characters in function names, I guess.
There's an opportunity to include more than a single
example of each kind of valid product code, here.
You wrote
"5060962541604", "00123456789012", ...
I ran out of fingers to count on.
I would much rather see those constants expressed as
"5060962541604",
"00123456789012", ...
to visually emphasize their relative lengths.
Ok, we have been diving down into the nitty gritty.
Let's up-level for a moment, and critique the whole unit testing approach.
There's a testing aspect that I am not yet satisfied with.
We could do amplification of our test inputs.
That is, for each known-valid product code,
we also know how to trivially create nine invalid codes.
So let's do it, it's cheap!
And verify that each one fails.
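A sketch of that amplification (Python, helper name mine): each known-valid code yields nine codes differing only in the check digit, all of which must fail validation:

```python
def corrupted_variants(valid_code: str) -> list:
    # swap the check digit for each of the nine other digit values;
    # every resulting code must be rejected by the validator
    body, check = valid_code[:-1], valid_code[-1]
    return [body + str(d) for d in range(10) if str(d) != check]
```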
Overall?
A regex should be fixed, and perhaps a modulo cleaned up.
This code has been carefully implemented,
and achieves its design objectives.
I would gladly delegate or accept maintenance tasks on this codebase. | {
"domain": "codereview.stackexchange",
"id": 44566,
"tags": "validation, kotlin"
} |
Burning vs Melting - when applied to cooking chocolate | Question: I'm aware melting is simply heat causing molecules to go from solid to liquid, and burning is a chemical reaction, normally with oxygen; but I don't understand how this applies to cooking chocolate.
See, when you cook chocolate naked on a pan, the chocolate will burn. However, if you cook a bowl of water with a smaller bowl floating on top with the chocolate in, the chocolate will melt and not burn. So, what's going on here? FYI, I'm not a physics major.
Answer: Olin Lathrop captured something important, which is that the word "burn" does not always align to the chemistry definition of the word. But I did want to answer your question.
Chocolate will burn in a "naked" pan because the heat is not evenly spread. In cooking, it's desirable to have the top surface of the pan have an even temperature across the entire cooking surface. However, in practice, that can be difficult to achieve. Stoves often provide very uneven heating to the bottom of the pan, especially gas ranges which may only heat a circle! To spread the heat evenly, a pan must be very thick, and that is costly. Pan manufacturers try to strike a balance between a pan that's thick enough to spread the heat but thin enough to be affordable. High end pans will often have several layers of metal on the bottom to achieve a more even spread.
Chocolate is particularly demanding regarding temperatures. The science of chocolate is amazing, and I highly recommend looking into it some time. Chocolatiers have to work with 5 different crystalline forms that chocolate can take on as it cools, of which only one is "the right one." Accordingly, you must have a very even temperature within your pan of chocolate. Most pans just can't meet those criteria.
The "double boiler" solves this by heating water and then using that water to heat the container with the chocolate. First off, you can create a much larger thickness of material between the burner and your precious chocolate. Instead of having maybe a quarter inch of metal protecting the chocolate, you can have 2 inches of water or more! Second, water is a liquid so it can move. The water will form convection cells which are very effective at spreading heat out uniformly throughout the liquid.
Just be careful with the steam! Chocolate is quite hydrophilic, meaning it is attracted to water. If any of the steam condenses into your chocolate, the chocolate will immediately absorb the water, which causes the chocolate to "seize" into a useless lump which cannot be used unless you heat the chocolate hot enough to drive out all the water. This temperature is high enough that it will mess up all of the crystals you were trying to form, and you will have to re-temper the chocolate again! | {
"domain": "physics.stackexchange",
"id": 39257,
"tags": "thermodynamics, states-of-matter, combustion"
} |
How to see that the trivial insulator is trivial? | Question: I have been trying to better understand gapped phases of matter — which may be "topological" or "trivial" — and I have run into the problem that I don't really understand the trivial case very well.
When I say trivial, what I mean is something like the following:
A trivial state can be transformed to a product state by a finite sequence of local unitary transformations even if the system size is infinite.
or perhaps
A state is trivial if it is the ground state of a hamiltonian that can be adiabatically transformed to a hamiltonian having a product state as its ground state without closing the gap.
I'll summarize this by saying a trivial state can be adiabatically connected to a product state.
Now, when I think of a trivial band insulator, I imagine something like the following. (I neglect interactions, because I am having enough trouble thinking about the non-interacting case, but ultimately I would like to understand this with interactions turned on.) We have some local hamiltonian describing electrons in a periodic potential, and we diagonalize the hamiltonian to obtain the band structure, something like this:
(This image is taken from this question which in turn took it from this paper.) To get the ground state of the (non-interacting) many-body hamiltonian, we simply "fill up" the bands with particles. If at the end of this all the bands are either completely filled or empty, we have an insulator. So the ground state is like a Slater determinant taken over all of the states in occupied bands.
My question is: how can I see that the ground state obtained in this way is adiabatically connected to a product state?
I am most interested in an answer to this question that describes (perhaps with some hand-waving) how to see that there is a finite-depth local unitary transformation from the "Slater determinant" state to a product state (If using bosons instead of fermions makes things easier, that's fine. It could also be a transformation applied to the hamiltonian rather than its ground state.)
I'm less interested in arguments that go like "We can calculate such-and-such topological invariant of the filled bands and see that it is the same for a trivial insulator and for a product state," but if you can convince me that this is the best or only way to make the argument, I'd accept that as an answer too.
I'd also like to understand why we can do this adiabatic transformation for trivial state produced by filling bands in the figure on the left above (a trivial insulator), but not for the state produced by filling up the bands on the right (a topological insulator). Again, I'd like to avoid arguments about topological invariants of bands if at all possible.
Answer: Here's one way to think about it. For the moment, let's restrict our attention to 1D, and for concreteness consider the Su-Schrieffer-Heeger model of polyacetylene which describes a bipartite chain with alternating $A$ and $B$ sites, with the following Hamiltonian:
$$H = \sum_{n} \bigg[v\ |n,A\rangle\langle n,B| + w |n,B\rangle\langle n+1,A| + h.c.\bigg]$$
where $v,w\in\mathbb R$ are the intra-cell and inter-cell hopping amplitudes, respectively. Performing a spatial Fourier transform yields
$$H = \int_{BZ}\mathrm dk\ |k\rangle\langle k| \otimes \underbrace{\pmatrix{0 & v+we^{-ik}\\v+we^{ik} & 0 }}_{\equiv H_k}$$
where $|k\rangle = \frac{1}{\sqrt{2\pi}}\sum_n e^{-ikn}|n\rangle$, $BZ:=[-\pi,\pi]$ (with the endpoints identified), and $H_k$ acts on the $A/B$ degree of freedom. It is convenient to write $H_k = \vec d(k) \cdot \vec \sigma$, where $\vec \sigma=(\sigma_1,\sigma_2,\sigma_3)$ are the Pauli matrices. In this instance, we would have that
$$\vec d(k) = \pmatrix{v+w\cos(k)\\ w\sin(k)\\0}$$
The eigenvalues and eigenvectors of $H_k$ are
$$E_{\pm}(k) =\pm|\vec d(k)| =\pm\sqrt{v^2+w^2+2vw\cos(k)}, \qquad u^{\pm}_k = \frac{1}{\sqrt{2}} \pmatrix{\pm 1\\ e^{i\varphi(k)}}$$
where $e^{i\varphi(k)} = (v+we^{-ik})/|v+we^{-ik}|$. Observe that the system is gapped as long as $d(k) \neq \vec 0$ for all $k$, which in this case corresponds to $v\neq w$.
The principal question at play here is the following:
Given two Hamiltonians $H_1$ and $H_2$ with respective ground states $\psi_1$ and $\psi_2$, can we adiabatically deform $H_1$ into $H_2$ (and therefore $\psi_1$ into $\psi_2$, up to a phase) without closing the gap?
To visualize this more clearly, consider the image that $\vec d$ traces out in $\mathbb R^3$ as $k$ varies over $BZ$. In 1D, this would correspond to a closed loop. Maintaining a gap means that this loop must avoid the origin, so our question essentially becomes the following: Given two closed loops in $\mathbb R^3-\{\vec 0\}$, when is it possible to continuously deform one into the other?
From a mathematical standpoint, we are trying to find the fundamental group $\pi_1(\mathbb R^3-\{\vec 0\})$, whose elements are equivalence classes of closed loops (where loops are understood to be equivalent if they can be continuously deformed into one another). It is not difficult to see that $\mathbb R^3-\{\vec 0\}$ is simply-connected - meaning that $\pi_1(\mathbb R^3-\{\vec 0\})=\{0\}$, and every loop can be deformed into every other loop. From this standpoint, every 1D Hamiltonian is topologically equivalent to every other 1D Hamiltonian.
But this is where symmetry comes into play. Observe that our toy model exhibits chiral symmetry, meaning that there exists a Hermitian and unitary operator $\Gamma$ such that $\Gamma H_k \Gamma = -H_k$. In our case, $\Gamma= \sigma_3$. However,
$$\sigma_3 \big(\vec d(k)\cdot \vec \sigma\big)\sigma_3 = \vec d'(k) \cdot \vec \sigma, \qquad \vec d'(k) = \pmatrix{-d_1(k)\\ -d_2(k) \\ d_3(k)}$$
Therefore, requiring chiral symmetry implies that $d_3(k)=0$, which constrains our loop to the $d_3=0$ plane. With this in mind, we may ask a variation of our original question:
Given two Hamiltonians $H_1$ and $H_2$ with respective ground states $\psi_1$ and $\psi_2$ which possess chiral symmetry, can we adiabatically deform $H_1$ into $H_2$ (and therefore $\psi_1$ into $\psi_2$, up to a phase) without closing the gap and while maintaining chiral symmetry?
The answer to this question is an emphatic no. In our intuitive picture, we are now asking about closed loops in $\mathbb R^2-\{\vec 0\}$, and this space is not simply connected. Instead, we may categorize our equivalence classes of closed loops by the (integer!) number of times they encircle the origin (the so-called winding number) which is reflected in the fact that $\pi_1(\mathbb R^2-\{\vec 0\})=\mathbb Z$.
In our toy model, observe that the curve in question is a circle of radius $w$ centered at $x=v$. As a result, the winding number is $0$ if $|v|>|w|$ and $1$ if $|v|<|w|$.
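This winding-number picture is easy to check numerically. A minimal sketch (my own, function name mine, no external libraries) that counts how many times $\vec d(k)$ encircles the origin as $k$ sweeps the Brillouin zone:

```python
import math

def ssh_winding(v: float, w: float, steps: int = 4000) -> int:
    """Numerically count how many times d(k) = (v + w*cos k, w*sin k)
    winds around the origin as k sweeps [-pi, pi]."""
    total = 0.0
    prev = math.atan2(w * math.sin(-math.pi), v + w * math.cos(-math.pi))
    for i in range(1, steps + 1):
        k = -math.pi + 2.0 * math.pi * i / steps
        ang = math.atan2(w * math.sin(k), v + w * math.cos(k))
        delta = ang - prev
        # unwrap the jump across the branch cut of atan2
        if delta > math.pi:
            delta -= 2.0 * math.pi
        elif delta < -math.pi:
            delta += 2.0 * math.pi
        total += delta
        prev = ang
    return round(total / (2.0 * math.pi))
```

With positive $v, w$, this returns 0 for $v>w$ (trivial) and 1 for $v<w$ (topological), matching the geometric argument.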
Now to address your question about the topologically trivial phase. First, we write our Hilbert space as a direct product of local Hilbert spaces, $$\mathscr H = \bigotimes_{n\in \mathbb Z} \mathscr H^{loc}_n \qquad \mathscr H^{loc}_n \simeq \mathbb C^2 \text{ where } \pmatrix{\alpha\\\beta}_n \sim \alpha|n,A\rangle + \beta |n,B\rangle$$
Given this decomposition of $\mathscr H$, we may now ask whether the ground state of our system may be written as a product state. Because we're interested in spatial localization, it is convenient to consider the Wannier states
$$\phi_n = \frac{1}{\sqrt{2\pi}}\int \mathrm dk \ e^{ink} |k\rangle\otimes u_k^-$$
One can show that these states form an orthonormal basis for the valence band; as a result, the ground state may be written
$$\Psi = \prod_{n\in \mathbb Z} \phi_n$$
We can evaluate these states directly:
$$\phi_n = \frac{1}{\sqrt{2\pi}} \int \mathrm dk\left[ e^{ikn} |k\rangle\otimes\pmatrix{-1\\0} + e^{ikn + \varphi(k)}|k\rangle\otimes \pmatrix{0\\1} \right]$$
$$= |n\rangle\otimes \pmatrix{-1\\0} + \sum_{m} f(n-m) |m\rangle \otimes \pmatrix{0\\1}$$
$$f(n-m) \equiv \frac{1}{\sqrt{2\pi}} \int \mathrm dk \ e^{i[(n-m) k + \varphi(k)]}$$
Observe that if $w=0$ then $\varphi(k)=0$, and so $f(n-m)=\delta_{n,m}$. This implies that $\phi_n = \frac{1}{\sqrt{2}}|n\rangle\otimes\pmatrix{-1\\1}$, the Wannier states are exactly localized to our lattice sites, and the ground state wavefunction $\Psi$ is a product state.
On the other hand, if $v=0$ then $\varphi(k)=-ik$, and so $f(n-m)=\delta_{n,m+1}$. This implies that
$$\phi_n = \frac{1}{\sqrt{2}} \left[|n\rangle\otimes\pmatrix{-1\\0} + |n-1\rangle\otimes \pmatrix{0\\1}\right]$$
which means that the Wannier states are not localized to the lattice sites; there is (short-range) entanglement between neighboring sites, and so the ground state $\Psi$ is not a simple product state.
Here is what $f(\Delta)$ looks like in general:
The function $f(\Delta)$ suffers a discontinuity when $v=w$, because at that point the integrand develops a singularity. In contrast, if $v>w$ we can always smoothly deform $(v,w)\rightarrow(1,0)$ without closing the gap, and the ground state wavefunction can be adiabatically connected to the trivial product state mentioned above. This characterizes the trivial phase. On the other hand, if $v<w$ we cannot reach this product state without closing the gap, and so this characterizes the topological phase.
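The limiting behavior of $f(\Delta)$ can also be checked numerically. A sketch (my own, with the prefactor taken as $1/2\pi$, which is what makes the limiting cases come out exactly to Kronecker deltas):

```python
import cmath
import math

def wannier_f(delta: int, v: float, w: float, steps: int = 20000) -> complex:
    # midpoint-rule evaluation of (1/2pi) * integral over [-pi, pi] of
    # exp(i*delta*k) * exp(i*phi(k)) dk, where
    # exp(i*phi(k)) = (v + w e^{-ik}) / |v + w e^{-ik}|
    total = 0j
    dk = 2.0 * math.pi / steps
    for i in range(steps):
        k = -math.pi + (i + 0.5) * dk
        z = v + w * cmath.exp(-1j * k)
        total += cmath.exp(1j * delta * k) * (z / abs(z)) * dk
    return total / (2.0 * math.pi)
```

For $w=0$ this reproduces $f(\Delta)=\delta_{\Delta,0}$, and for $v=0$ it reproduces $f(\Delta)=\delta_{\Delta,1}$, i.e. the one-site shift of the Wannier weight discussed above.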
It should be mentioned that this is true only as long as we impose chiral symmetry. If we add a term to the Hamiltonian such that $d_3(k)=u$ for some parameter $u$, it's easy to see that by making $u>0$ we can switch $v$ and $w$ without closing the gap. As a result, we can always continuously deform the ground state wavefunction to a product state if we don't impose any symmetry requirements.
Summary:
Two Hamiltonians are considered to be topologically equivalent if they can be continuously deformed into one another without closing the gap. In the absence of additional symmetry requirements, all 1D (and, as it turns out, 3D) Hamiltonians are topologically trivial. With the imposition of different symmetries, we obtain a zoo of different topological invariants in all dimensions which classify different topological phases as long as the relevant symmetries remain unbroken - see Symmetry-Protected Topological Order for more. On the other hand, true (long-range) topological order can arise in 2D or when interactions between particles are considered. | {
"domain": "physics.stackexchange",
"id": 90015,
"tags": "condensed-matter, electronic-band-theory, topological-insulators, topological-phase, insulators"
} |
Ascending Order Algorithm | Question: I wrote a simple function to sort an array of integers in ascending order. Here is the code -
void sort(int* begin, int* end) {
int* it = begin;
int num1 = 0, num2 = 0;
while (it != end) {
num1 = *it;
num2 = *(it + 1);
if (num1 > num2) {
*it = num2;
*(it + 1) = num1;
it = begin;
} else {
it++;
}
}
}
Is there any way I can improve this code?
Answer: Reading and writing off the end!
When it points to the last element (one before end), you read one-past-the-end, and then potentially write to it. This is undefined behavior. You need to make sure that you stop before then. One way to ensure this is to iterate from begin+1 to end, and compare the element with the one before it.
Logic
The typical way to write bubble sort is to have a loop that goes over the full list, setting a flag if you swapped anything, and to loop until you didn't. This makes it easier to understand what's going on - rather than having your loop's next step set in two separate places, which is error-prone.
Unnecessary variables
You don't need num1 or num2. Simply rely on std::swap:
if (*(it-1) > *it) {
std::swap(*(it-1), *it);
swapped = true;
}
Or you could implement such a thing yourself:
void swap(int& a, int& b) {
int tmp = a;
a = b;
b = tmp;
}
Either way, avoiding unnecessary variables is a plus.
Spacing
Don't add so many blank statements between lines. Taking up too much vertical space makes it harder to read.
Proposed implementation
The following addresses all of my points:
void sort2(int* begin, int* end) {
bool swapped = true;
while (swapped) {
swapped = false;
for (int *it = begin+1; it != end; ++it) {
if (*(it - 1) > *it) {
std::swap(*(it - 1), *it);
swapped = true;
}
}
}
}
Minor optimization
Rewriting the way I did it above allows for a minor optimization. Every time through the for loop, we know that we just put the "largest" number at the end. It "bubbled" up! At that point, we don't need to do anything else with it, so we can decrement the end pointer:
while (swapped) {
swapped = false;
for (int *it = begin+1; it != end; ++it) {
...
}
--end; // <==
}
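Both improvements carry over directly to other languages; here is a quick transliteration (mine, in Python) for experimentation:

```python
def bubble_sort(items: list) -> list:
    """Bubble sort with both improvements discussed above: a 'swapped'
    flag to stop early, and a shrinking upper bound, since each pass
    bubbles the largest remaining element into its final slot."""
    end = len(items)
    swapped = True
    while swapped and end > 1:
        swapped = False
        for i in range(1, end):
            if items[i - 1] > items[i]:
                items[i - 1], items[i] = items[i], items[i - 1]
                swapped = True
        end -= 1
    return items
```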
Future work
Bubble sort is \$O(n^2)\$. It gets the job done, but it's... not great. A strictly better algorithm to start with is insertion sort, which is still \$O(n^2)\$. From there, you can look at merge sort and quicksort, both \$O(n \lg n)\$.
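As a starting point for that next step, here is a sketch of insertion sort (mine, in Python) that also hints at supporting arbitrary types and orders via a key function and a reverse flag:

```python
def insertion_sort(items: list, key=lambda x: x, reverse: bool = False) -> list:
    """Insertion sort: still O(n^2) worst case, but O(n) on nearly
    sorted input. 'key' and 'reverse' sketch how arbitrary element
    types and sort orders could be supported."""
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # shift larger (or smaller, if reversed) elements rightward
        while j >= 0 and (
            key(items[j]) < key(current) if reverse else key(items[j]) > key(current)
        ):
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current
    return items
```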
Also consider what you'd need to do to be able to support (a) arbitrary types, not just ints and (b) in arbitrary order, not just increasing. | {
"domain": "codereview.stackexchange",
"id": 16983,
"tags": "c++, algorithm, sorting"
} |
Output of system defined by differential equation | Question: I don't fully understand how the output of a system can be derived from the system's differential equation and a given input.
For example:
$$y(0-) = 1 $$
$$y'(0-) = -2$$
$$ u(t) : \text{Heaviside function} $$
$$ S: y''(t) +2y'(t) +y(t) = u''(t) -2u'(t) + u(t)$$
Can someone show me a good way to find $y(t)$, given $u(t)$ and $S$?
Answer: What you want to do is take the Laplace transform of both sides of S. The first and second derivative terms will be replaced by expressions of $Y(s)$ and the conditions at $t=0-$, so you can solve the equation for $Y(s)$ and then do the inverse Laplace transform, or more commonly, use partial fraction expansion to go back to the time domain.
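To make the recipe concrete (the numbers below are my own hand derivation, not the answerer's): with the one-sided transform and the $0-$ convention, $U(s)=1/s$, $\mathcal{L}\{u'\}=1$, $\mathcal{L}\{u''\}=s$, so the transformed equation becomes $(s+1)^2 Y(s) = 2s - 2 + 1/s$, and partial fractions give $Y(s) = 1/s + 1/(s+1) - 5/(s+1)^2$, i.e. $y(t) = 1 + e^{-t} - 5te^{-t}$ for $t > 0$ (note $y$ jumps from $1$ at $0-$ to $2$ at $0+$ because of the impulsive right-hand side). A quick stdlib-Python sanity check of both steps:

```python
import math

def y(t):
    # Candidate solution from Y(s) = 1/s + 1/(s+1) - 5/(s+1)^2 (my derivation).
    return 1.0 + math.exp(-t) - 5.0 * t * math.exp(-t)

def residual(t, h=1e-4):
    # Central finite differences for y' and y''; for t > 0 the RHS is just 1.
    d1 = (y(t + h) - y(t - h)) / (2 * h)
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / (h * h)
    return d2 + 2 * d1 + y(t) - 1.0

# The ODE y'' + 2y' + y = 1 should hold for every t > 0 ...
for t in (0.1, 0.5, 1.0, 3.0, 7.0):
    assert abs(residual(t)) < 1e-4, (t, residual(t))

# ... and the partial-fraction expansion of (2s^2 - 2s + 1)/(s(s+1)^2)
# should match 1/s + 1/(s+1) - 5/(s+1)^2 for arbitrary s.
for s in (0.3, 1.7, 5.0):
    lhs = (2 * s * s - 2 * s + 1) / (s * (s + 1) ** 2)
    rhs = 1 / s + 1 / (s + 1) - 5 / (s + 1) ** 2
    assert abs(lhs - rhs) < 1e-12
print("checks passed")
```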
See numbers 35 and 36 in the table of Laplace transforms here. | {
"domain": "dsp.stackexchange",
"id": 2978,
"tags": "linear-systems"
} |
Questions regarding a derivation in Understanding Molecular Simulation: From Algorithms to Applications, 3rd ed, pp 141-143 | Question: Recently, I've been learning molecular dynamics using the third edition of the book Understanding Molecular Simulation: From Algorithms to Applications. While reading it, I encountered some confusion on pages 141 to 143 (section 5.1.7.2), and your help would be very valuable.
Firstly, on page 142 between formula (5.1.47) and (5.1.48), the authors replaced $\hat{\mathbf{r}} \cdot \nabla_{\mathbf{r}}$ with $-\hat{\mathbf{r}_j} \cdot \nabla_{\mathbf{r}_j}$, using an assumption that $\hat{\mathbf{r}} \cdot \nabla_{\mathbf{r}} \delta(\mathbf{r}-\mathbf{r}_j) = -\hat{\mathbf{r}_j} \cdot \nabla_{\mathbf{r}_j} \delta(\mathbf{r}-\mathbf{r}_j)$. This assumption is not true for a generic central symmetric function centered on $\mathbf{r}_j$, so how can I show that it is true for Dirac-$\delta$ functions?
Secondly, in equation (5.1.52) of the book, the authors attempted to regenerate the results of Borgis et al., equation (6). However, in my opinion, the book's result is not evidently the equivalent to the result of Borgis et al. The book wrote in angled bracket $F^{(r)}_j-F^{(r)}_i$, where $F^{(r)}_j$ represents $\hat{\mathbf{r}}\cdot \mathbf{F}_j$, the projection of the force acting on particle $j$ over unit vector $\hat{\mathbf{r}}$. In contrast, the paper of Borgis et al. wrote $(\mathbf{F}_j - \mathbf{F}_i) \cdot \hat{\mathbf{r}_{ij}}$, which is not equal to $F^{(r)}_j-F^{(r)}_i$ as given by the authors (at least not straightforward).
A brief scan of the aforementioned pages is attached. Any suggestion will be highly appreciated!
Reproduction of the relevant derivation from pp. 141–143 of the book (transcribed as in the original text; note that any possible mistakes in the original are deliberately not corrected):
The value of the radial distribution function at a distance $r$ from a reference particle is equal to the angular average of $\rho(\mathbf{r})/\rho$:
$$g(r)=\frac{1}{\rho}\int d\hat{\mathbf{r}} \left< \rho(\mathbf{r})\right>_{N-1}=\frac{1}{\rho} \int d\hat{\mathbf{r}} \left<\sum_{j\neq i}\delta(\mathbf{r}-\mathbf{r}_j) \right>_{N-1}$$ (5.1.44).
Where $N$ is the total number of particles in the system, $\rho$ denotes the average number density and $\mathbf{r}_j$ is the distance of particle $j$ from the origin. $\hat{\mathbf{r}}$ is the unit vector in the direction of $\mathbf{r}$. For simplicity, we have written down the expression for $g(r)$ for a given particle $i$, and hence the sum of $j \neq i$ is keeping $i$ fixed, but in practice the expression should be averaged over all equivalent $i$. The angular bracket denotes the thermal average: $$\left<\cdots\right>_{N-1}=\frac{\int d\mathbf{r}^{N-1} e^{-\beta U(\mathbf{r}^N)}(\cdots)}{\int d\mathbf{r}^{N-1} e^{-\beta U(\mathbf{r}^N)}}$$ (5.1.45).
We can now write
$$(\frac{\partial g(r)}{\partial r}) = \frac{1}{\rho} \frac{\partial}{\partial r} \int d\hat{\mathbf{r}} \left<\sum_{j \neq i} \delta(\mathbf{r}-\mathbf{r}_j)\right>$$ (5.1.46)
The only term that depends on $r$ is the $\delta$-function. Thus:
$$(\frac{\partial g(r)}{\partial r}) = \frac{1}{\rho} \int d\hat{\mathbf{r}} \left<\sum_{j \neq i} \hat{\mathbf{r}} \cdot \nabla_{\mathbf{r}} \delta(\mathbf{r}-\mathbf{r}_j)\right>$$ (5.1.47)
Since the argument of the $\delta$-function is $\mathbf{r}-\mathbf{r}_j$, we can replace $\hat{\mathbf{r}} \cdot \nabla_{\mathbf{r}}$ by $-\hat{\mathbf{r}}_j \cdot \nabla_{\mathbf{r}_j}$ and perform a partial integration:
\begin{align}
(\frac{\partial g(r)}{\partial r}) &= \frac{-1}{\rho} \frac{\int d\hat{\mathbf{r}} \int d\mathbf{r}^{N-1} e^{-\beta U(\mathbf{r}^N)} \sum_{j \neq i} \hat{\mathbf{r}} \cdot \nabla_{\mathbf{r}} \delta(\mathbf{r}-\mathbf{r}_j)}{\int d\mathbf{r}^{N-1} e^{-\beta U(\mathbf{r}^N)}}\\
&=\frac{-\beta}{\rho} \frac{\int d\hat{\mathbf{r}} \int d\mathbf{r}^{N-1} e^{-\beta U(\mathbf{r}^N)} \sum_{j \neq i} \delta(\mathbf{r}-\mathbf{r}_j) \hat{\mathbf{r}_j} \cdot \nabla_{\mathbf{r}_j} U(\mathbf{r}^N)}{\int d\mathbf{r}^{N-1} e^{-\beta U(\mathbf{r}^N)}}\\
&=\frac{\beta}{\rho} \int d\hat{\mathbf{r}} \left<\sum_{j \neq i} \delta(\mathbf{r}-\mathbf{r}_j) \hat{\mathbf{r}}_j \cdot \mathbf{F}_j(\mathbf{r}^N)\right>_{N-1}
\end{align} (5.1.48)
where $\hat{\mathbf{r}} \cdot \mathbf{F}_j = F_j^{(r)}$ denotes the force on particle $j$ in the radial direction. We can now integrate with respect to $r$:
\begin{align}
g(r) &= g(r=0) + \frac{\beta}{\rho} \int_0^r dr' \int d\hat{\mathbf{r'}} \left<\sum_{j \neq i} \delta(\mathbf{r}-\mathbf{r}_j)F_j^{(r)}(\mathbf{r}^N)\right>_{N-1}\\
&= g(r=0) + \frac{\beta}{\rho} \int_{r'<r} d\mathbf{r}' \left< \frac{\sum_{j \neq i}\delta(\mathbf{r}-\mathbf{r}_j)F_j^{(r)}(\mathbf{r}^N)}{4\pi r'^2} \right>_{N-1}\\
&= g(r=0) + \frac{\beta}{\rho} \sum_j \left< \theta(r-r_j) \frac{F_j^{(r)}(\mathbf{r}^N)}{4\pi r_j^2} \right>_{N-1}
\end{align} (5.1.49)
where $\theta$ denotes the Heaviside step function. To make a connection to the results of Borgis et al., we note that in a homogeneous system, all particles $i$ of the same species are equivalent. We can therefore write:
$$g(r)=g(r=0)+\frac{\beta}{N\rho} \sum_{i=1}^{N} \sum_{j \neq i} \left< \theta(r-r_j) \frac{F_j^{(r)}(\mathbf{r}^N)}{4\pi r_j^2} \right>_{N-1}$$
But $i$ and $j$ are just dummy indices. So we obtain the same expression for $g(r)$ by permuting $i$ and $j$, except that if $\hat{\mathbf{r}}=\hat{\mathbf{r}}_{ij}$, then $\hat{\mathbf{r}}=-\hat{\mathbf{r}}_{ji}$. Adding the two equivalent expressions and dividing by 2, we get
$$g(r)=g(r=0)+\frac{\beta}{2N\rho} \sum_{i=1}^{N} \sum_{j \neq i} \left< \theta(r-r_{ij}) \frac{F_j^{(r)} (\mathbf{r}^N) - F_i^{(r)} (\mathbf{r}^N)}{4\pi r_{ij}^2} \right>_{N-1}$$ (5.1.50)
Eq (5.1.50) is equivalent to the results of Borgis et al.
Answer: For the first question, note that if $f$ is a differentiable function then
\begin{equation}
\begin{split}
-\mathbf y \cdot \nabla_{\mathbf y} f(\mathbf x - \mathbf y) &= -\sum_i y^i \frac{\partial}{\partial y^i} f(\mathbf x - \mathbf y) \\
&= -\sum_{i,j} y^i (D_j f)(\mathbf x - \mathbf y) \frac{\partial}{\partial y^i} (x^j - y^j) \\
&=+\sum_{i,j} y^i (D_j f)(\mathbf x - \mathbf y) \delta_{ij}\\
&= \sum_i y^i (D_i f)(\mathbf x - \mathbf y) \\
&= \mathbf y \cdot \nabla_{\mathbf x} f(\mathbf x - \mathbf y)
\end{split}
\end{equation}
where $\{x^i\}$ and $\{y^i\}$ are the cartesian components of $\mathbf x$ and $\mathbf y$, respectively, and $D_i f$ denotes the first partial derivative of $f$ with respect to its $i$th argument. Now, suppose that $f$ is somehow zero everywhere except at the origin, i.e. $f(\mathbf x - \mathbf y) = 0$ except when $\mathbf x = \mathbf y$. Certainly then $\nabla f$ is zero away from the origin as well. Thus we can safely exchange $\mathbf x$ and $\mathbf y$ in the above to get
$$-\mathbf y \cdot \nabla_{\mathbf y} f(\mathbf x - \mathbf y) = \mathbf x \cdot \nabla_{\mathbf x} f(\mathbf x - \mathbf y).$$
Finally, the premultipliers $\mathbf x$ and $\mathbf y$ can be replaced by their unit vector counterparts by the same kind of trick.
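As a side note (not part of the book), the first identity, $-\mathbf y \cdot \nabla_{\mathbf y} f(\mathbf x - \mathbf y) = \mathbf y \cdot \nabla_{\mathbf x} f(\mathbf x - \mathbf y)$, is easy to sanity-check numerically for an ordinary differentiable $f$ — here a Gaussian, my arbitrary choice — using central finite differences:

```python
import math

def f(u):
    # Any smooth test function of a 3-vector; a Gaussian is convenient.
    return math.exp(-sum(c * c for c in u))

def grad(g, p, h=1e-6):
    # Central-difference gradient of a scalar function g at the point p.
    out = []
    for i in range(len(p)):
        up, dn = list(p), list(p)
        up[i] += h
        dn[i] -= h
        out.append((g(up) - g(dn)) / (2 * h))
    return out

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

x = [0.4, -0.2, 0.7]
y = [0.1, 0.5, -0.3]
diff = lambda xx, yy: [xi - yi for xi, yi in zip(xx, yy)]

lhs = -dot(y, grad(lambda yy: f(diff(x, yy)), y))  # -y . grad_y f(x - y)
rhs = dot(y, grad(lambda xx: f(diff(xx, y)), x))   #  y . grad_x f(x - y)
assert abs(lhs - rhs) < 1e-8
print(lhs, rhs)
```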
Of course no differentiable function can be zero everywhere except at the origin (that would be a contradiction), but the delta function behaves as though it was such a function. (It is actually a distribution.) | {
"domain": "physics.stackexchange",
"id": 98602,
"tags": "statistical-mechanics, computational-physics, molecular-dynamics"
} |
Apply windowing before FFT or after? | Question: I am computing a spectrogram and I've found a code example online which goes like this:
for jj = 1:size(signal_framewise,2)
    current_frame = signal_framewise(:,jj);
    dtf = fft(current_frame).*gausswin(window_length_s);
    %nfft is half of the fft results (since the fft is symmetric)
    out_buffer(:,jj) = dtf(1:nfft);
end
But I intuitively did it like this:
for jj = 1:size(signal_framewise,2)
    current_frame = signal_framewise(:,jj).*gausswin(window_length_s);
    dtf = fft(current_frame);
    out_buffer(:,jj) = dtf(1:nfft);
end
The difference is, that the guy online applies the window to the finished FFT and I apply it to the signal frame. Question is: What's the right way to do and does it make a difference? I've attached the outputs of the spectrogram and the full code.
[image: spectrogram output of my implementation]
clc; clear all; close all;
Fs = 44100;
t_max = 3;
T = 1/Fs;
time = 0:T:(t_max-T);
input = chirp(time,1500,1,8000);
window_length_t = 0.01; %10ms window length
window_length_s = round(0.01 * Fs); %window length in samples
if mod(window_length_s,2) == 0
    window_length_s = window_length_s + 1; %make sure we have odd window size
end
%activate the following for a no overlap implementation
%signal_framewise = buffer(input , window_length_s);
signal_framewise = buffer(input , window_length_s , floor(window_length_s/2));
nfft =((window_length_s-1)/2)+1;
out_buffer = zeros(nfft,size(signal_framewise,2));
for jj = 1:size(signal_framewise,2)
    current_frame = signal_framewise(:,jj).*gausswin(window_length_s);
    dtf = fft(current_frame);
    out_buffer(:,jj) = dtf(1:nfft);
end
F = linspace(0,Fs/2,nfft);
T = linspace(0,3,size(signal_framewise,2));
surf(T,F,20*log10(abs(out_buffer)), 'EdgeColor', 'none');
axis xy;
axis tight; colormap(jet); view(0, 90);
xlabel('Time');
colorbar;
ylabel('Frequency(Hz)');
set(gca, 'YTickLabel', num2cell(get(gca, 'YTick')));
Answer: Element-wise multiplication in the spatial/temporal domain with a windowing function serves to reduce the effect of the potentially large jump you get at the signal edge, when making the sampled signal periodic. This jump would otherwise introduce lots of frequencies that might not be present in the signal.
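A small stdlib-only illustration of this point (my own toy example, using a Hann window and a naive DFT rather than MATLAB): for a sinusoid whose frequency does not fall exactly on a DFT bin, windowing the frame before the transform concentrates the spectral energy near the true frequency, i.e. suppresses the leakage caused by the edge discontinuity:

```python
import cmath
import math

def dft_mag(x):
    # Naive O(N^2) DFT magnitude; fine for small N.
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

def leakage(mag, peak_bin, halfwidth=3):
    # Fraction of spectral energy outside a small band around the peak.
    total = sum(m * m for m in mag)
    near = sum(mag[k % len(mag)] ** 2
               for k in range(peak_bin - halfwidth, peak_bin + halfwidth + 1))
    return 1.0 - near / total

N = 64
f0 = 5.3  # NOT an integer number of cycles per frame -> leakage
sig = [cmath.exp(2j * math.pi * f0 * n / N) for n in range(N)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

rect_leak = leakage(dft_mag(sig), peak_bin=5)
hann_leak = leakage(dft_mag([s * w for s, w in zip(sig, hann)]), peak_bin=5)
assert hann_leak < rect_leak  # windowing in time concentrates the spectrum
print(f"leakage: rectangular={rect_leak:.4f}, hann={hann_leak:.6f}")
```

With these (arbitrary) numbers the rectangular frame leaks energy across the whole spectrum, while the Hann-windowed frame keeps essentially all of it near bin 5 — which is exactly why the window belongs before the FFT in a spectrogram loop.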
Element-wise multiplication in the frequency domain with a windowing function applies a low-pass filter. Multiplying by a Gaussian window is equivalent to applying a Gaussian filter in the spatial/temporal domain.
If the code you found online were to compute the inverse transform after applying the window, then whoever wrote it would be low-pass filtering. With the code as you posted, I tend to think that person doesn't know what they're doing (or maybe it's just a typo). | {
"domain": "dsp.stackexchange",
"id": 6252,
"tags": "matlab, fft, window-functions, spectrogram"
} |
What is the Riemann curvature tensor contracted with the metric tensor? | Question: Can the Ricci curvature tensor be obtained by a 'double contraction' of the Riemann curvature tensor? For example
$R_{\mu\nu}=g^{\sigma\rho}R_{\sigma\mu\rho\nu}$.
Answer: I'm not sure what you mean by 'double contraction', but the Ricci tensor in local coordinates is given by
\begin{align}
R_{\mu \nu} = R^\rho_{~~\mu \rho \nu},
\end{align}
which is the same as $g^{\sigma \rho} R_{\sigma \mu \rho \nu}$, exactly what you have written. | {
"domain": "physics.stackexchange",
"id": 6291,
"tags": "general-relativity, differential-geometry, curvature, tensor-calculus"
} |
Are night and day lengths over a year equal everywhere on earth? | Question: (Question from math.stackexchange, because people told me to better ask here:
Original link)
As you all know, in winter the nights are longer and there is nearly no sun, but in the summer there are really long days.
If you look at different places over the world, you find strange things like a 3 month day in summer with 3 months of darkness in the winter when you keep going north or south enough.
The question I asked myself was:
Question: Is the overall length of day equal to the overall length of night over the time span of one year (thus half of the year for both) at any specific point on earth?
I tried to solve this analytically, but looking at complex trigonometric function that include the earth rotating around its tilted axis around sun lead to nowhere.
However, I noticed that when looking at mountains / high buildings, the overall day length becomes longer, thus this question is meant for a spherical earth with no altitude differences.
Answer: The length of day depends on two parameters:
the latitude $\phi$ of the observer
the current day of year
We can represent the trajectory of the sun from the point of view of an observer on Earth by a solar circle that intersects Earth's plane. See figure below:
In red is Earth's circle, and the solar circle is divided in two parts:
the blue part corresponds to the night (sun is below the horizon)
the black part corresponds to the day time (sun is above the horizon)
This representation is valid only (1) during the equinoxes and (2) at the North Pole:
As time passes, the solar circle shifts away from the Earth's circle, along the obliquity of the ecliptic $\epsilon=23.439°$. This shift is upward during the summer, and downward during the winter.
If we leave the North Pole the solar circle tends to be tilted along the West-East axis. At the equator, the solar circle is exactly perpendicular to the Earth's plane.
Let $\epsilon$ be the axial obliquity of the ecliptic of Earth, $\phi$ the latitude of the observer and $n_d \in [0;364]$ the current day of year, $n_d=0$ corresponding to the winter solstice.
We use a unit circle (radius = 1) to model both the Solar circle and Earth circle.
We can use this representation and parametrize the geometry in order to determine the length of the segment $m(\phi,n_d)$ at any latitude $\phi$ and any day of year $n_d$. The length of $m$ with respect to the diameter of the solar circle will allow us to determine the length of day.
When the sun is at its zenith, the angle between the observer's zenith and the position of the sun is:
\begin{equation}
\delta \Phi = 90 - \phi - \mathrm{cos}\left( \pi \cdot \dfrac{n_d}{365.25 / 2} \right) \epsilon
\end{equation}
Thus, the angle $\delta \theta$ between the solar circle and sun's zenith is $\delta \Phi + \phi$, i.e.:
\begin{equation}
\delta \theta = 90 - \mathrm{cos}\left( \pi \cdot \dfrac{n_d}{365.25 / 2} \right) \epsilon
\end{equation}
We aim to determine the fraction of the radius corresponding to the exposed part of the sun's circle, i.e.:
\begin{equation}
m = 1 + \mathrm{tan}(\phi) t
\end{equation}
Where $t$ is the distance between the observer and the center of the sun's circle; given by:
\begin{equation}
t = \mathrm{cos}(\delta \theta) d
\end{equation}
Where $d$ is the distance from observer to the sun's zenith, given by:
\begin{equation}
d = \dfrac{1}{\mathrm{sin}(\delta \theta)}
\end{equation}
Hence the exposed fraction of the solar circle is determined by:
\begin{equation}
m = 1 - \dfrac{\mathrm{tan}(\phi)}{\mathrm{tan}(\delta \theta)}
\end{equation}
With $0 \leq m \leq 2$. When $m=2$, the sun circle does not intersect with Earth's surface, thus the sun is shining in the sky the whole day: this is polar summer.
When $m=0$, the sun circle is completely below the horizon: it is polar winter.
The angle between the center the of sun circle and the sunrise/sunsets points on the solar circle is:
\begin{equation}
f = \mathrm{acos}(1 - m)
\end{equation}
And finally the exposed fraction of the sun's circle is:
\begin{equation}
b=f /\pi
\end{equation}
We can multiply this quantity per 24 to obtain the day length (duration for which the sun is above horizon).
Using trigonometry, the above formulas simplify into:
\begin{equation} \large b(\phi, n_d) = \mathrm{acos}\left(
\mathrm{tan}(\phi) \mathrm{tan}(\epsilon \cdot \mathrm{cos}(0.0172
\cdot n_d)) \right) / \pi \end{equation}
Where $0.0172=\pi / 182.625$ (see first equation)
I produced the curves corresponding to this relation for different values of the latitude from $\phi = 0°$ to $\phi=+90°$, with respect to the day of year.
In black is the curve corresponding to $\phi=48°$ (e.g. Paris, France.) where the length of day oscillates between 8 hours (winter) and 16 hours (summer).
In red is the curve corresponding to $\phi =70°$ (e.g. Hammerfest, Norway) where the sun never sets for approximately 10 weeks during summer, and never rises for the same duration during winter.
Your question is about the overall length of day over the span of one year. The answer is given graphically by this figure, where you can see that the summer variations of the length of day are completely symmetrical with respect to the winter variations.
If you average the length of day over one year, whatever is the value of the latitude $\phi$, the mean length of day is exactly 12 hours. That is true everywhere on Earth.
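The final formula is short enough to check directly; a stdlib-Python sketch (clamping the argument of $\mathrm{acos}$ to $[-1, 1]$ to handle polar day and polar night; the names are my own):

```python
import math

EPS = math.radians(23.439)  # obliquity of the ecliptic

def day_length_hours(lat_deg, n_d):
    """Hours of daylight at latitude lat_deg on day n_d (n_d = 0 is the
    winter solstice), per the b(phi, n_d) formula above."""
    phi = math.radians(lat_deg)
    x = math.tan(phi) * math.tan(EPS * math.cos(0.0172 * n_d))
    x = max(-1.0, min(1.0, x))  # |x| > 1 means polar night (+1) or day (-1)
    return 24.0 * math.acos(x) / math.pi

def yearly_mean(lat_deg):
    return sum(day_length_hours(lat_deg, n) for n in range(365)) / 365.0

assert day_length_hours(70, 0) == 0.0              # polar night at 70 N
assert abs(day_length_hours(70, 182) - 24) < 1e-9  # midnight sun
assert abs(day_length_hours(0, 123) - 12) < 1e-9   # equator: 12 h year-round
for lat in (0, 48, 70):
    assert abs(yearly_mean(lat) - 12.0) < 0.1      # mean is ~12 h everywhere
print({lat: round(yearly_mean(lat), 3) for lat in (0, 48, 70)})
```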
Notes:
The actual definitions of day and night are more subtle than what I used here. See Twilight - Wikipedia.
Definition of night used here: sun below horizon ($\theta = 0°$)
Civil dusk: sun $6°$ below horizon.
Nautical dusk: sun $12°$ below horizon.
Astronomical dusk: sun $18°$ below horizon.
This discrepancy in definitions of the night is due to the fact that the sun continues to illuminate the sky even when it is not directly visible, due to sunlight scattering by the atmosphere.
I assumed that Earth is perfectly spherical, which is not exactly true. | {
"domain": "physics.stackexchange",
"id": 100591,
"tags": "astronomy, calculus"
} |
Calculating the amount of work done to assemble a net charge on a sphere | Question: I've been reviewing electrostatics using an old exam and I stumbled upon this question:
Calculate the amount of work required to assemble a net charge of $+Q$ on a spherical conductor of radius $R$. If an additional charge of $-Q$ were to be assembled on a concentric spherical conductor of radius $R+a$,what amount of work would the entire process require?
Now the first part is not that difficult, we just do:
$$\vec E = \frac{Q}{4 \pi \epsilon_0 R^2} \hat r \, \text{(From Gauss's Law)}$$
$$\begin{align}
W & = \frac{\epsilon_0}{2} \int E^2 \, d\tau \\
& = \left(\frac{\epsilon_0}{2}\right) \left(\frac{Q^2}{(4 \pi \epsilon_0)^2}\right) \int d \Omega \int_R^{\infty} \frac{1}{R'^{4}} R'^{2} dR'\\
& = \frac{4\pi Q^2}{32 \pi^2 \epsilon_0} \frac{1}{R} \\
&= \frac{Q^2}{8 \pi \epsilon_0 R} \\
\end{align}$$
But for the second part, according to a solution that a friend of mine gave me, the only thing we need to do to calculate the total work is:
$$\begin{align} W_{tot} & = \frac{\epsilon_0}{2} \int E^2 d \tau \\
& = \frac{4 \pi Q^2}{32 \pi^2 \epsilon_0} \int_R^{R+a} \frac{1}{R'^2} dR' \\
& = \frac{Q^2}{8 \pi \epsilon_0} \left(\frac{1}{R} - \frac{1}{R+a}\right) \\
\end{align}$$
But according to equation $(2.47)$ of Griffiths, total work should be equal to:
$$\begin{align}
W_{tot} & = \frac{\epsilon_0}{2} \int (E_1+E_2)^2 d\tau \\
& = \frac{\epsilon_0}{2} \int (E_1^2 + E_2^2 + 2E_1 \cdot E_2) d\tau \\
& = W_1 + W_2 + \epsilon_0 \int E_1 \cdot E_2 d \tau \\
\end{align}$$
Wherein for this case $W_1$ is the work required for a sphere of radius $R$ as shown earlier, and $W_2$ is the work required for a sphere of radius $R+a$. Is the first method correct?
Answer: The two methods are both correct.
As I have suggested in the comment area, total work calculated by using the first method should be
$$W_{tot}=\frac{Q^2}{8\pi\epsilon_0}(\frac{1}{R}-\frac{1}{R+a}),
$$
since $\int \frac{1}{r^2}dr=-\frac{1}{r}+\text{Constant}$.
Next we will calculate the total work by the second method, i.e. the equation $(2.47)$ of Griffiths. As indicated in the problem, we have
\begin{align}
\mathbf{E}_1 & =\frac 1 {4\pi\epsilon_0}\frac{Q}{r^2}\hat{\mathbf{r}},\ \text{while}\ r\ge R\\
\mathbf{E}_2 & =-\frac 1 {4\pi\epsilon_0}\frac{Q}{r^2}\hat{\mathbf{r}},\ \text{while}\ r\ge R+a
\end{align}
So,
\begin{align}
E_1^2 & =\frac{Q^2}{16\pi^2\epsilon_0^2r^4}\\
E_2^2 & =\frac{Q^2}{16\pi^2\epsilon_0^2r^4}\\
E_1\cdot E_2 & =-\frac{Q^2}{16\pi^2\epsilon_0^2r^4}
\end{align}
And
\begin{align}
W_1 & =\frac{\epsilon_0}{2}\int E_1^2d\tau\\
& =\frac{Q^2}{8\pi\epsilon_0}\int_R^\infty\frac{1}{r^2}dr\\
& =\frac{Q^2}{8\pi\epsilon_0R}
\end{align}
Using the same method, we get $W_2$ as follows:
$$
W_2=\frac{Q^2}{8\pi\epsilon_0(R+a)}
$$
Finally, the cross term is
\begin{align}
\epsilon_0 \int \mathbf{E}_1 \cdot \mathbf{E}_2 d\tau & =-\frac{Q^2}{4\pi\epsilon_0}\int_{R+a}^\infty \frac{1}{r^2}dr\\
& =-\frac{Q^2}{8\pi\epsilon_0}\frac{2}{R+a}
\end{align}
Then we add them all up, we have,
\begin{align}
W_{tot} & =W_1+W_2+\epsilon_0\int \mathbf{E}_1\cdot\mathbf{E}_2d\tau \\
& =\frac{Q^2}{8\pi\epsilon_0}\frac{1}{R}+\frac{Q^2}{8\pi\epsilon_0}\frac{1}{R+a}-\frac{Q^2}{8\pi\epsilon_0}\frac{2}{R+a}\\
& =\frac{Q^2}{8\pi\epsilon_0}(\frac{1}{R}-\frac{1}{R+a})
\end{align}
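Both routes are also easy to verify numerically (a sketch with units chosen so $Q = \epsilon_0 = 1$, and with a shared finite cutoff $r_{max}$ standing in for the improper upper limits — the cutoff errors of $W_1$, $W_2$ and the cross term cancel exactly):

```python
import math

Q, EPS0 = 1.0, 1.0
R, A = 1.0, 0.5
RMAX = 5000.0  # finite stand-in for the improper upper limit

def quad(f, a, b, n=200_000):
    # Simple midpoint rule.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

E1sq = lambda r: (Q / (4 * math.pi * EPS0 * r * r)) ** 2
shell = lambda dens: (lambda r: dens(r) * 4 * math.pi * r * r)

# Method 1: the total field is nonzero only between the shells.
W_tot = quad(shell(lambda r: 0.5 * EPS0 * E1sq(r)), R, R + A)

# Method 2: W1 + W2 + cross term (E1.E2 = -E1^2 where both fields exist).
W1 = quad(shell(lambda r: 0.5 * EPS0 * E1sq(r)), R, RMAX)
W2 = quad(shell(lambda r: 0.5 * EPS0 * E1sq(r)), R + A, RMAX)
Wx = quad(shell(lambda r: -EPS0 * E1sq(r)), R + A, RMAX)
closed = Q * Q / (8 * math.pi * EPS0) * (1 / R - 1 / (R + A))

assert abs(W_tot - closed) < 1e-8
assert abs(W1 + W2 + Wx - closed) < 1e-4
print(W_tot, W1 + W2 + Wx, closed)
```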
Conclusion: The two methods give the same result. In the first method, the electric field $E$ in the formula for $W_{tot}$ is the final field obtained by superposition. In the second, we also use superposition, but write it out explicitly in the form of $W$. The two methods are the same in substance; they merely take different forms. | {
"domain": "physics.stackexchange",
"id": 27465,
"tags": "homework-and-exercises, electrostatics"
} |
How are long time periods measured in biological systems? | Question: Biological systems are pretty good at measuring fairly long times, for example, menstrual cycles (month), or puberty (years). Counting days or years seems to be implausible, and chemical concentration also seems implausible. What are the physiological processes that are involved in keeping track of such long periods? Is it just a long sequence of finite state changes?
I understand there are environmental correlates such as seasonal changes and relative position of celestial objects to measure the relative time, but regardless of these external cues, I suspect that a pretty good internal clock for longer time scales could exist. For example, the time to menopause can actually be thought of as a counting mechanism of a shorter clock which is the menstrual cycle. But how does the body know when to stop growing? I cannot think of stable biological processes with such long time constants.
Answer: The short answer is: we do not know exactly, although we do have some insights.
I will take the example of puberty.
Although a clear definition of puberty is lacking, it is quite clear that it corresponds to a period where gonadal function starts.
This, in turn, is derived from the activation of the gonadotropin system, which consists of two main cell types:
a small number of neurons located in the preoptic area of the hypothalamus (a nucleus at the base of the brain) called the GnRH neurons. GnRH is the Gonadotropin-releasing hormone, a small peptide that stimulates the production of gonadotropins from the pituitary.
the gonadotrophs, a specialised group of cells in the pituitary (a gland located underneath the brain) which produce two hormones, called luteinizing hormone (LH) and follicle-stimulating hormone (FSH) which stimulate the gonads to produce various hormones, such as estrogen.
In mammals, the secretion of GnRH, and thus LH/FSH varies during the course of the menstrual/estral cycle. This cyclicity lasts several days (4-5 days in rodents, ~1 month in humans) and entrains the cyclical secretion of estradiol (E2) from the gonads.
Note that this is a sort of self-sustaining cycle, as cyclic levels of E2 will then allow for cyclic GnRH secretion and so on.
But, back to your question: how does the GnRH/LH/E2 system "wake up" at puberty?
The exact mechanism is still unknown, but recent work has found an important mediator, called kisspeptin, that is produced from two population of neurons in the hypothalamus, called the kisspeptin neurons.
Kisspeptin has been shown to be a very potent activator of GnRH neurons and work in the mouse has shown that these neurons appear at the time of puberty, their number increasing dramatically between 25 and 31 days (puberty is at around 30 days in mice).
Postnatal Development of Kisspeptin Neurons in Mouse Hypothalamus; Sexual Dimorphism and Projections to Gonadotropin-Releasing Hormone Neurons - Clarkson and Herbison - Endocrinology, 2006
Similar work exist in the monkey:
Increased hypothalamic GPR54 signaling: A potential mechanism for initiation of puberty in primates - Shahab et al. - PNAS, 2005
In humans mutations in either kisspeptin, or its cognate receptor GPR54 results in disturbances of pubertal maturation because of underactivation of the system (hypogonadotropic hypogonadism) or hyperactivation (precocious puberty).
So, now the question is shifted: why do kisspeptin neurons show up only at puberty? We don't know for sure, but it looks like increased levels of E2 could be important for this.
Again, we get into a self-sustaining cycle. Growth of the body generates an increase in E2 production (possibly due to increased volume of the gonads?), which, when over a certain level permits the development of kisspeptin neurons, which will then stimulate the GnRH neurons, resulting in increased LH and E2. We then have more E2 and this makes kisspeptin neuron grow even more etc etc.
From: Postnatal development of an estradiol-kisspeptin positive feedback mechanism implicated in puberty onset. - Clarkson et al. - Endocrinology, 2009 | {
"domain": "biology.stackexchange",
"id": 951,
"tags": "physiology"
} |
Maximum reachable checkers on a checkerboard | Question: This is a problem inspired by a video game that I've been thinking about for a long while, and I haven't convinced myself that I can do any better than brute force + pruning techniques, which would still take way too long to solve.
Is there anything better? Any class of algorithm I should look into more?
Problem description
You are on a checkerboard. Let's say 8x8 for now, but interested in general solutions too.
┌───┬───┬───┬───┬───┬───┬───┬───┐
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
└───┴───┴───┴───┴───┴───┴───┴───┘
There is a "start square" on the board, somewhere along the edge.
┌───┬───┬───┬───┬───┬───┬───┬───┐
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ S │ │ │ │
└───┴───┴───┴───┴───┴───┴───┴───┘
You may place checkers onto the board anywhere you like, with the following restrictions:
You cannot place a checker on the start square.
Each square may contain 0 or 1 checker; you can't place multiple in one square.
All checkers must be "reachable" (described further in the next bullet).
From the start square, you are able to traverse the board, using up/down/left/right movements to adjacent squares (diagonals are not considered adjacent).
You cannot "traverse" into a square that is occupied by a checker.
You must remain on the board.
To "reach" a given checker, you must be able to traverse to a square adjacent to the square that checker is placed on.
What is the maximum amount of checkers you can place on the board?
Examples
Valid (but not the maximum solution: this has 36, I've found 38 by hand for this start square configuration):
┌───┬───┬───┬───┬───┬───┬───┬───┐
│ ● │ ● │ ● │ ● │ ● │ ● │ ● │ ● │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ ● │ ● │ ● │ ● │ ● │ ● │ ● │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ ● │ ● │ ● │ ● │ ● │ ● │ ● │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ ● │ ● │ ● │ ● │ ● │ ● │ ● │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ ● │ ● │ ● │ ● │ ● │ ● │ ● │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ S │ │ │ │
└───┴───┴───┴───┴───┴───┴───┴───┘
Valid (but definitely not the maximum):
┌───┬───┬───┬───┬───┬───┬───┬───┐
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ ● │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ ● │ S │ ● │ │ │
└───┴───┴───┴───┴───┴───┴───┴───┘
Invalid (corner checker is not "reachable"):
┌───┬───┬───┬───┬───┬───┬───┬───┐
│ ● │ ● │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ ● │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ │ │ │ │
├───┼───┼───┼───┼───┼───┼───┼───┤
│ │ │ │ │ S │ │ │ │
└───┴───┴───┴───┴───┴───┴───┴───┘
My current approach
My current approach is just brute force, unfortunately:
Generate a 2d bool array (for an 8x8 checkerboard, I'd just use a 64-bit int)
Flood-fill from the start square for "reachability"
Check every checker and make sure it's "reachable"
As you find valid boards with n checkers, you can stop trying any board that has <= n checkers.
Obviously, for an n by m board (with one start square), you'd have to check 2^(n*m-1) different combinations to be sure you found the maximum. Not great.
One realization I've had is that if a given checker configuration is invalid, adding more checkers to it won't magically make it valid. So, there's some opportunity to prune the problem space that way, but I'm not sure if there's an efficient way to implement it. Turning this into a recursive problem ("if you have a valid board, try adding one more checker to it in all available locations, and recurse for any of those boards that are valid") would end up either:
trying certain configurations much more than once (depth-first), or
turning the problem into a 2^(n*m-1) memory complexity problem to prevent that, I think? (breadth-first).
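For what it's worth, the brute force + flood fill described above fits in a few lines and is handy for ground truth on tiny boards (this sketch is my own code; anything much beyond 4x4 is already hopeless, which makes the exponential blow-up concrete):

```python
from itertools import combinations

def max_checkers(n, start):
    """Brute-force optimum for an n x n board; feasible only for tiny n."""
    cells = [(r, c) for r in range(n) for c in range(n) if (r, c) != start]

    def neighbors(rc):
        r, c = rc
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < n and 0 <= c + dc < n:
                yield (r + dr, c + dc)

    def valid(checkers):
        # Flood fill over free squares, starting from the start square.
        reached, stack = {start}, [start]
        while stack:
            for nb in neighbors(stack.pop()):
                if nb not in checkers and nb not in reached:
                    reached.add(nb)
                    stack.append(nb)
        # Every checker must be adjacent to some traversable square.
        return all(any(nb in reached for nb in neighbors(ch))
                   for ch in checkers)

    for k in range(len(cells), -1, -1):  # try the largest placements first
        if any(valid(frozenset(c)) for c in combinations(cells, k)):
            return k
    return 0

print(max_checkers(3, (2, 1)))  # 3x3 board, start mid-bottom-edge -> 6
```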
Answer: One approach would be to express this as an instance of SAT, and use an off-the-shelf SAT solver. See Boolean constraints for a connected component of a graph for a technique for encoding reachability and thus enforcing all of your constraints via a CNF formula.
In particular, you can use SAT to check whether there exists a solution with (for example) at least 37 checkers. Then, you can use binary search to vary the number 37 and find the largest number such that there exists a valid solution.
Alternatively, you could encode it as an instance of integer linear programming, where the goal is to maximize the number of checkers (which is a linear objective function), and then solve it with an off-the-shelf ILP solver. See also Express boolean logic operations in zero-one integer linear programming (ILP) for how to convert logical constraints to ILP. | {
"domain": "cs.stackexchange",
"id": 21582,
"tags": "algorithms, optimization"
} |
Scripts don't see ROS if executed by ProcessBuilder() | Question:
I want to execute a script with a ProcessBuilder(). My code is:
new ProcessBuilder().inheritIO().command("/bin/bash", "-c", "./deploy.sh").start();
In the bash script I have:
#!/bin/bash
rosrun my_package ardrone_test_1.py
It works if I run the bash script manually in the terminal, but if I run it with the ProcessBuilder I get an error:
rosrun: command not found
The same if I run python scripts which uses ROS. There are errors that some package are not found, whereas it works fine if run via terminal.
Originally posted by Cipek on ROS Answers with karma: 5 on 2019-06-14
Post score: 0
Answer:
You need to source setup.bash first in order for the environment to work. In your deploy.sh add a line like "source /your/ros_ws/devel/setup.bash" before you call rosrun
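For example, deploy.sh could look like this (both paths are assumptions; adjust them to wherever your ROS install and workspace setup.bash actually live):

```bash
#!/bin/bash
# Source the ROS environment first; ProcessBuilder does not start a login
# shell, so your ~/.bashrc (which usually does this sourcing) is never read.
source /opt/ros/indigo/setup.bash      # base install (path is an assumption)
source /your/ros_ws/devel/setup.bash   # your workspace overlay
rosrun my_package ardrone_test_1.py
```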
Originally posted by davrsky with karma: 26 on 2019-06-20
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Cipek on 2019-06-20:
I have been looking for the solution really long and that worked! Thank you. | {
"domain": "robotics.stackexchange",
"id": 33187,
"tags": "ros, bash, rosrun, java, ros-indigo"
} |
Is Carnot cycle the only "most efficient" cycle? | Question: In all the books, that I have studied so far, they say that Carnot cycle is the most efficient cycle. But why isn't any other reversible process as efficient as Carnot cycle? Can somebody please provide mathematical explanation for why that is or isn't the case?
Answer: A nuance is that the Carnot cycle is the only reversible cycle between only two reservoirs. There are other idealized cycles that are also reversible (i.e., that generate no entropy) but require more reservoirs.
See da Silva, "Some considerations about thermodynamic cycles", Eur. J. Phys. 33 (2012) 13–42 for the math framework of evaluating engine reversibility. Note the statement "This property makes the Carnot engine a very special engine: it is the unique reversible heat engine functioning with only two reservoirs. (Any other cycle could be performed in a reversible way, but that would require a virtually infinite number of thermal reservoirs.)"
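The two-reservoir case can be sketched quickly (a standard argument, not taken from the cited papers): reversibility means zero total entropy production per cycle, which pins the efficiency down uniquely.

```latex
% Reversible cycle absorbing Q_h at T_h and rejecting Q_c at T_c:
\Delta S_{\text{univ}} = -\frac{Q_h}{T_h} + \frac{Q_c}{T_c} = 0
\quad\Longrightarrow\quad \frac{Q_c}{Q_h} = \frac{T_c}{T_h}
\quad\Longrightarrow\quad \eta = 1 - \frac{Q_c}{Q_h} = 1 - \frac{T_c}{T_h}
```

Any reversible cycle whose heat exchange is confined to exactly two temperatures is therefore forced to the Carnot efficiency; cycles like the reversible Otto exchange heat over a range of temperatures and so need (infinitely many) intermediate reservoirs.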
See also Leff, "Thermal efficiency at maximum work output: New results for old heat engines," Am. J. Phys. 55, 602 (1987) and Leff, "Reversible and irreversible heat engine and refrigerator cycles," Am. J. Phys. 86, 344 (2018) for mathematical analyses of non-Carnot reversible cycles, such as the reversible Otto cycle. | {
"domain": "physics.stackexchange",
"id": 70539,
"tags": "thermodynamics, entropy, carnot-cycle"
} |
In the Keras Tokenizer class, what exactly does word_index signify? | Question: I'm trying to really understand Tokenizing and Vectorizing text in machine learning, and am looking really hard into the Keras Tokenizer class. I get the mechanics of how it's used, but I'd like to really know more about it. So for example, there's this common usage:
tokens = Tokenizer(num_words=SOME_NUMBER)
tokens.fit_on_texts(texts)
tokens returns a word_index, which maps words to some number.
Are the words all words in texts, or are they maxed at SOME_NUMBER?
And are the dict values for word_index the frequency of each word, or just the order of the word?
Answer:
Are the words all words in texts, or are they maxed at SOME_NUMBER?
Not exactly: word_index itself always contains every word seen by fit_on_texts. The num_words limit is only applied later, when you call transformation methods such as texts_to_sequences or texts_to_matrix, which then keep just the most common SOME_NUMBER-1 words (index 0 is reserved).
And are the dict values for word_index the frequency of each word, or just the order of the word?
The values are integer indices assigned in order of descending word frequency (the most frequent word gets index 1); they are not the frequency counts themselves. The raw counts are kept separately in word_counts.
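A simplified pure-Python sketch of how fit_on_texts builds word_index (an illustration of the behaviour, not the actual Keras implementation; real Keras also strips punctuation and lowercases by default):

```python
from collections import Counter

def build_word_index(texts):
    # Indices are assigned by descending word frequency, starting at 1
    # (index 0 is reserved for padding in Keras).
    counts = Counter(word for text in texts for word in text.lower().split())
    return {word: i + 1 for i, (word, _) in enumerate(counts.most_common())}

texts = ["the cat sat", "the cat ran", "the dog"]
word_index = build_word_index(texts)
print(word_index["the"], word_index["cat"])  # 1 2 ("the" is the most frequent word)
```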
You can read more here in the documentation. | {
"domain": "datascience.stackexchange",
"id": 5382,
"tags": "keras, nlp"
} |
Is there a calculation for flow loss through a venturi tube vs a full bore? | Question: I understand that the volumetric flow either side of a venturi tube is the same - that makes total sense to me. However, what I'm interested in finding out is how to calculate the difference in flow rate vs the total potential flow without said tube section. Here's a rough diagram explaining things:
So, in the bottom system the venturi effect tells us that v2 equals v3. However, assuming all other variables are the same between the two systems, my assumption is that v1 does not equal v2/v3 - the constriction of the pipe will cause some loss of volumetric flow vs the larger one. Firstly, is this assumption correct?
Secondly, is there a formula or theory for calculating/explaining this? Of course the exact dimensions will mean the flow loss will vary - in particular the throat diameter and length, as well as the angles of the converging and diverging cones. But it doesn't seem to be covered by the venturi effect per se.
Answer: Assuming the fluid is viscous (which is a very reasonable assumption), then you are correct in your assumption that the restriction will reduce the overall flow rate the tube can accommodate (all else equal).
The typical explanation of the Venturi effect is a simplified analysis that only applies to incompressible, inviscid fluids along a streamline.
When you include viscosity, the restriction of the pipe would create friction loss, where the fluid loses energy due to interactions between the fluid and the walls. There are many empirical ways to approximate how much friction loss is expected based on pipe roughness, flow velocities, pipe sizes, fluid, and any flow restrictions such as orifices or elbows.
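As a rough numerical illustration of such an estimate, here is a sketch of one common empirical relation, the Darcy–Weisbach equation h_f = f (L/D) v^2 / (2g); every number below is a hypothetical placeholder, not a property of any real venturi:

```python
def darcy_weisbach_head_loss(f, length_m, diameter_m, velocity_m_s, g=9.81):
    """Head loss (in metres of fluid) for a straight pipe section:
    h_f = f * (L / D) * v**2 / (2 * g)."""
    return f * (length_m / diameter_m) * velocity_m_s**2 / (2 * g)

# Hypothetical throat section: friction factor f = 0.02, L = 0.1 m, D = 0.05 m, v = 3 m/s
loss = darcy_weisbach_head_loss(0.02, 0.1, 0.05, 3.0)
print(round(loss, 4))  # ~0.0183 m of head lost in the throat alone
```

A full venturi estimate would also add separate loss terms for the converging and diverging cones, but the same head-loss bookkeeping applies.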
The one I can recall using is the Darcy–Weisbach equation which would allow you to determine the expected head loss (and therefore affect on flow rate) that would be introduced by adding a Venturi tube compared to not having one. | {
"domain": "physics.stackexchange",
"id": 61287,
"tags": "fluid-dynamics"
} |
Can a gamma-ray burst affect the same place in space twice? | Question: I know that spacetime has curvature. I also have
basic knowledge about gamma-ray bursts.
I would like to know whether the same pulse of a GRB can affect the same place in space twice. Is that impossible? Thank you.
Answer: A gamma-ray burst is an electromagnetic signal, so the question can be rewritten as "Can a light signal arrive at one point at two or more times?"
And the answer to that question is "Yes": we see light signals from distant quasars arriving along different paths due to gravitational lensing all the time, and when there is more than one distinct image (as opposed to a continuous smear), the paths can have different travel times (Fermat's principle only requires each path's travel time to be a local extremum). | {
"domain": "physics.stackexchange",
"id": 45584,
"tags": "special-relativity, astrophysics, astronomy, relativity, gamma-rays"
} |
Distinguishing stereoisomers using reagents | Question: Given the following possible reagents:
acetone,
$\ce{MnO2}$,
ozone, or
aluminium propoxide,
which one would distinguish between the cis and trans isomers of cyclopentane-1,2-diol?
Answer: The only reagent listed that will distinguish between cis and trans diols is acetone. In the presence of catalytic acid it will form an acetal with the cis diol. This cannot occur with the trans diol as the two OH groups are too far apart. Try drawing the cis and trans diol in 3D. | {
"domain": "chemistry.stackexchange",
"id": 7775,
"tags": "organic-chemistry, stereochemistry"
} |
Templated Tokenizer Functions | Question: I wanted to be able to tokenize a few different containers so I created a generic way to do it. I originally wrote this in Visual Studio 2012, but I had to modify it to get it to compile on Ideone.
Here is a brief description of my tokenizer functions:
Tokenize (): Tokenizes a container based on a single delimiter.
TokenizeIf (): Tokenizes a container based on a single delimiter and a condition.
BackInsertTokenize () and BackInsertTokenizeIf (): Wrapper for containers that can use a std::back_inserter iterator.
A specialized BackInsertTokenize () for character types (char, wchar, etc).
Here are my generic tokenizer functions:
#include <algorithm>
#include <iterator>
#include <string>
#include <utility>
template <typename Token, typename Iter, typename OutIter, typename Condition>
auto TokenizeIf (Iter begin, Iter end, OutIter out, typename std::iterator_traits <Iter>::value_type delimiter, Condition condition) -> OutIter
{
if (begin == end) {
return out ;
}
auto current = begin ;
auto next = begin ;
do {
next = std::find (current, end, delimiter) ;
Token token (current, next) ;
if (condition (token) == true) {
*out++ = std::move (token) ;
}
current = next ;
} while (next != end && ++current != end) ;
if (next != end) {
Token token ;
if (condition (token) == true) {
*out++ = std::move (token) ;
}
}
return out ;
};
template <typename Token, typename Iter, typename OutIter>
auto Tokenize (Iter begin, Iter end, OutIter out, typename std::iterator_traits <Iter>::value_type delimiter) -> OutIter
{
if (begin == end) {
return out ;
}
auto current = begin ;
auto next = begin ;
do {
next = std::find (current, end, delimiter) ;
*out++ = Token (current, next) ;
current = next ;
} while (next != end && ++current != end) ;
if (next != end) {
*out++ = Token () ;
}
return out ;
};
template <class ContainerOut, class ContainerIn, class Condition>
auto BackInsertTokenizeIf (ContainerIn const &in, typename ContainerIn::value_type delimiter, Condition condition) -> ContainerOut
{
typedef typename ContainerOut::value_type Token ;
ContainerOut out ;
TokenizeIf <Token> (std::begin (in), std::end (in), std::back_inserter (out), delimiter, condition) ;
return out ;
}
template <class ContainerOut, class ContainerIn>
auto BackInsertTokenize (ContainerIn const &in, typename ContainerIn::value_type delimiter) -> ContainerOut
{
typedef typename ContainerOut::value_type Token ;
ContainerOut out ;
Tokenize <Token> (std::begin (in), std::end (in), std::back_inserter (out), delimiter) ;
return out ;
}
template <class ContainerOut, class CharT>
auto BackInsertTokenize (const CharT *in, CharT delimiter) -> ContainerOut
{
typedef typename ContainerOut::value_type Token ;
ContainerOut out ;
Tokenize <Token> (in, in + std::char_traits<CharT>::length (in), std::back_inserter (out), delimiter) ;
return out ;
}
These are the test cases that I was interested in:
#include <vector>
int main ()
{
std::string const s1 = ",hello,5,cat,192.3," ;
auto const t1 = BackInsertTokenize <std::vector <std::string>> (s1, ',') ;
std::vector <char> v1 = {'S', '1', '\0', 'S', '2', 'I', 'N', '\0', 'S', '3', '\0', '\0'} ;
auto const t2 = BackInsertTokenizeIf <std::vector <std::string>> (v1, '\0', [] (const std::string &s) {
return !s.empty () ;
}) ;
auto const t3 = BackInsertTokenize <std::vector <std::string>> ("C:\\Some\\Path\\To\\Nowhere", '\\') ;
return 0 ;
}
Answer: The code generally looks good to me. It is clear, complete and working. The naming and formatting are clear and consistent (although I prefer to read code that does not have the space before each statement-terminating semicolon) but I did see a few things that may help you improve your code.
Consider using {} style initializers
There are a few places where a Token is constructed, such as this:
Token token (current, next) ;
However, this might be misconstrued as a function call. Assuming that you're using C++11 or better, it may be worth considering using the {} style for the constructor:
Token token {current, next} ;
This can't be misconstrued as a function call and may be slightly less ambiguous.
Consider alternative usage
The code works well for the use cases you've said you're interested in addressing. That's good, and it may be all you ever need, but when I first saw the code, I thought it might be useful to be able to use it like this:
std::string const s2 = "55,33,1,7,42";
auto const t4 = BackInsertTokenize <std::vector <int>> (s2, ',') ;
The intent was to create a vector of integers from the const string, but this code doesn't actually compile. The problem is essentially this line:
*out++ = Token (current, next) ;
That works fine for any Token type that can be constructed from an iterator range like this, but not for a primitive type like int. One way to address that might be to provide another template that additionally takes an operator to explicitly perform this conversion with the given types.
Omit return 0
When a C++ program reaches the end of main the compiler will automatically generate code to return 0, so there is no reason to put return 0; explicitly at the end of main. | {
"domain": "codereview.stackexchange",
"id": 14494,
"tags": "c++, parsing"
} |
What is meant by "Expression" of Non-Coding RNA? | Question: I was having a look at lncRNAdb and its help says:
The ENCODE project gene annotation list, GENCODE, has predicted that the human genome contains 14,470 lncRNAs whereas only a small proportion of these have been shown to have function.
We have examined thousands of publications in order to find and
manually annotate lncRNAs that have been shown to be functional by
overexpression or knockdown experiments.
What is meant by Overexpression of a Non-coding RNA here? Even if you have a look at any lncRNA in the DB, its entry has a column for "Expression":
Answer: They queried publications dealing with each lncRNA that studied whether it was functional through over-expression or knockdown. Cells do something measurable in their normal state that can be observed through microscopy, qPCR, microarray, etc. You use those data as a control against which to compare experimental results.
For an over-expression study, for example, you can transfect your cells with a vector that will transcribe a lot of your ncRNA. Whether the transfection need be stable or transient will depend on a number of other factors. So the methods you used to collect data as a control, apply these to the transfected group. If there are statistically significant changes to anything when you have more of the ncRNA, you can say in part that your ncRNA is functional. It's doing something observable.
The opposite is true for knockdown. You could use CRISPR to do the job, but lentiviral transduction with, say, a doxycycline-inducible promoter is among the other possibilities. The goal now is to make the gene, or the expression of the gene, go away. You do the same experiment and see whether the absence of the ncRNA does anything to the cell.
Now in the database you're looking at, "expression" is being used to show you in which tissues BX118339 has been detected. It basically says BX118339 is expressed in the testis and the fetal brain. | {
"domain": "biology.stackexchange",
"id": 5506,
"tags": "molecular-biology, molecular-genetics, rna, noncoding-rna"
} |
Calculating reaction enthalpy given other reactions | Question: $$\begin{alignat}{2}\ce{X(s) + 1/2O2(g) &-> XO}&&\qquad\Delta H=-895.5\ \mathrm{kJ} \\
\ce{XCO3(s) &-> XO(s) + CO2(g)}&&\qquad\Delta H= +484.3\ \mathrm{kJ}\\
\ce{X(s) + 1/2O2(g) + CO2 &-> XCO3(s)}&&\qquad\Delta H = \;?\ \mathrm{kJ}
\end{alignat}$$
I wish to know how to do it. I have seen some questions where there's 3+ equations and at the end a final equation that I would need to determine the $\Delta H$ for the final equation. I have looked online on how to solve this but the tutorials are very confusing and I can't understand them. Can someone explain how to do this and other similar problems like this?
One of the tutorials talks about making sure they cancel out and flipping the equations? Can someone please explain this?
Answer: For future reference, if anyone needs help with this: compare each given equation against the target equation. If a compound (e.g. $\ce{CO2}$) appears on the opposite side of a given equation from where it sits in the target, reverse that equation and flip the sign of its $\Delta H$. Then add the equations, cancelling species that appear on both sides, and sum the enthalpies. Here, reversing the second equation gives $\ce{XO(s) + CO2(g) -> XCO3(s)}$ with $\Delta H = -484.3\ \mathrm{kJ}$; adding the first equation cancels the $\ce{XO}$, leaving $\Delta H = -895.5 + (-484.3) = -1379.8\ \mathrm{kJ}$. | {
"domain": "chemistry.stackexchange",
"id": 4331,
"tags": "physical-chemistry, thermodynamics"
} |
Error while trying to compile ccny_vision package | Question:
Hello, I´m trying to compile this package but I don´t know what´s the problem to not be possible to make ar_single. The terminal shows:
root@ROS:/opt/ros/groovy/stacks/ccny_vision/ar_pose/demo# rosmake ccny_vision
[ rosmake ] rosmake starting...
[ rosmake ] Packages requested are: ['ccny_vision']
[ rosmake ] Logging to directory /home/viki/.ros/rosmake/rosmake_output-20130307-164700
[ rosmake ] Expanded args ['ccny_vision'] to:
['ar_pose', 'artoolkit']
[ rosmake ] WARNING: The stack "geometry" was not found. We will assume it is using the new buildsystem and try to continue...
[ rosmake ] WARNING: The stack "robot_model" was not found. We will assume it is using the new buildsystem and try to continue...
[ rosmake ] WARNING: The stack "ros" was not found. We will assume it is using the new buildsystem and try to continue...
[rosmake-0] Starting >>> artoolkit [ make ]
[rosmake-0] Finished <<< artoolkit [PASS] [ 0.51 seconds ]
[rosmake-0] Starting >>> catkin [ make ]
[rosmake-0] Finished <<< catkin ROS_NOBUILD in package catkin
No Makefile in package catkin
[rosmake-0] Starting >>> genmsg [ make ]
[rosmake-0] Finished <<< genmsg ROS_NOBUILD in package genmsg [ 1 Active 2/46 Complete ]
No Makefile in package genmsg
[rosmake-0] Starting >>> gencpp [ make ]
[rosmake-0] Finished <<< gencpp ROS_NOBUILD in package gencpp
No Makefile in package gencpp
[rosmake-0] Starting >>> genlisp [ make ]
[rosmake-0] Finished <<< genlisp ROS_NOBUILD in package genlisp
No Makefile in package genlisp
[rosmake-0] Starting >>> genpy [ make ]
[rosmake-0] Finished <<< genpy ROS_NOBUILD in package genpy
No Makefile in package genpy
[rosmake-0] Starting >>> message_generation [ make ]
[rosmake-0] Finished <<< message_generation ROS_NOBUILD in package message_generation
No Makefile in package message_generation
[rosmake-0] Starting >>> cpp_common [ make ]
[rosmake-0] Finished <<< cpp_common ROS_NOBUILD in package cpp_common
No Makefile in package cpp_common
[rosmake-0] Starting >>> rostime [ make ]
[rosmake-0] Finished <<< rostime ROS_NOBUILD in package rostime
No Makefile in package rostime
[rosmake-0] Starting >>> roscpp_traits [ make ]
[rosmake-0] Finished <<< roscpp_traits ROS_NOBUILD in package roscpp_traits
No Makefile in package roscpp_traits
[rosmake-0] Starting >>> roscpp_serialization [ make ]
[rosmake-0] Finished <<< roscpp_serialization ROS_NOBUILD in package roscpp_serialization
No Makefile in package roscpp_serialization
[rosmake-0] Starting >>> message_runtime [ make ]
[rosmake-0] Finished <<< message_runtime ROS_NOBUILD in package message_runtime
No Makefile in package message_runtime
[rosmake-0] Starting >>> std_msgs [ make ]
[rosmake-0] Finished <<< std_msgs ROS_NOBUILD in package std_msgs
No Makefile in package std_msgs
[rosmake-0] Starting >>> geometry_msgs [ make ]
[rosmake-0] Finished <<< geometry_msgs ROS_NOBUILD in package geometry_msgs
No Makefile in package geometry_msgs
[rosmake-0] Starting >>> visualization_msgs [ make ]
[rosmake-0] Finished <<< visualization_msgs ROS_NOBUILD in package visualization_msgs
No Makefile in package visualization_msgs
[rosmake-0] Starting >>> angles [ make ]
[rosmake-0] Finished <<< angles ROS_NOBUILD in package angles
No Makefile in package angles
[rosmake-0] Starting >>> rospack [ make ]
[rosmake-0] Finished <<< rospack ROS_NOBUILD in package rospack
No Makefile in package rospack
[rosmake-0] Starting >>> roslib [ make ]
[rosmake-0] Finished <<< roslib ROS_NOBUILD in package roslib
No Makefile in package roslib
[rosmake-0] Starting >>> rosunit [ make ]
[rosmake-0] Finished <<< rosunit ROS_NOBUILD in package rosunit
No Makefile in package rosunit
[rosmake-0] Starting >>> rosconsole [ make ]
[rosmake-0] Finished <<< rosconsole ROS_NOBUILD in package rosconsole
No Makefile in package rosconsole
[rosmake-0] Starting >>> rosgraph_msgs [ make ]
[rosmake-0] Finished <<< rosgraph_msgs ROS_NOBUILD in package rosgraph_msgs
No Makefile in package rosgraph_msgs
[rosmake-0] Starting >>> roslang [ make ]
[rosmake-0] Finished <<< roslang ROS_NOBUILD in package roslang
No Makefile in package roslang
[rosmake-0] Starting >>> xmlrpcpp [ make ]
[rosmake-0] Finished <<< xmlrpcpp ROS_NOBUILD in package xmlrpcpp
No Makefile in package xmlrpcpp
[rosmake-0] Starting >>> roscpp [ make ]
[rosmake-0] Finished <<< roscpp ROS_NOBUILD in package roscpp
No Makefile in package roscpp
[rosmake-0] Starting >>> rosgraph [ make ]
[rosmake-0] Finished <<< rosgraph ROS_NOBUILD in package rosgraph
No Makefile in package rosgraph
[rosmake-0] Starting >>> rospy [ make ]
[rosmake-0] Finished <<< rospy ROS_NOBUILD in package rospy
No Makefile in package rospy
[rosmake-0] Starting >>> rosclean [ make ]
[rosmake-0] Finished <<< rosclean ROS_NOBUILD in package rosclean
No Makefile in package rosclean
[rosmake-0] Starting >>> rosmaster [ make ]
[rosmake-0] Finished <<< rosmaster ROS_NOBUILD in package rosmaster [ 1 Active 27/46 Complete ]
No Makefile in package rosmaster
[rosmake-0] Starting >>> rosout [ make ]
[rosmake-0] Finished <<< rosout ROS_NOBUILD in package rosout
No Makefile in package rosout
[rosmake-0] Starting >>> rosparam [ make ]
[rosmake-0] Finished <<< rosparam ROS_NOBUILD in package rosparam
No Makefile in package rosparam
[rosmake-0] Starting >>> roslaunch [ make ]
[rosmake-0] Finished <<< roslaunch ROS_NOBUILD in package roslaunch
No Makefile in package roslaunch
[rosmake-0] Starting >>> rostest [ make ]
[rosmake-0] Finished <<< rostest ROS_NOBUILD in package rostest
No Makefile in package rostest
[rosmake-0] Starting >>> message_filters [ make ]
[rosmake-0] Finished <<< message_filters ROS_NOBUILD in package message_filters
No Makefile in package message_filters
[rosmake-0] Starting >>> sensor_msgs [ make ]
[rosmake-0] Finished <<< sensor_msgs ROS_NOBUILD in package sensor_msgs
No Makefile in package sensor_msgs
[rosmake-0] Starting >>> tf [ make ]
[rosmake-0] Finished <<< tf ROS_NOBUILD in package tf
No Makefile in package tf
[rosmake-0] Starting >>> resource_retriever [ make ]
[rosmake-0] Finished <<< resource_retriever ROS_NOBUILD in package resource_retriever
No Makefile in package resource_retriever
[rosmake-0] Starting >>> opencv2 [ make ]
[rosmake-0] Finished <<< opencv2 ROS_NOBUILD in package opencv2
No Makefile in package opencv2
[rosmake-0] Starting >>> console_bridge [ make ]
[rosmake-0] Finished <<< console_bridge ROS_NOBUILD in package console_bridge
No Makefile in package console_bridge
[rosmake-0] Starting >>> class_loader [ make ]
[rosmake-0] Finished <<< class_loader ROS_NOBUILD in package class_loader
No Makefile in package class_loader
[rosmake-0] Starting >>> pluginlib [ make ]
[rosmake-0] Finished <<< pluginlib ROS_NOBUILD in package pluginlib
No Makefile in package pluginlib
[rosmake-0] Starting >>> image_transport [ make ]
[rosmake-0] Finished <<< image_transport ROS_NOBUILD in package image_transport
No Makefile in package image_transport
[rosmake-0] Starting >>> cv_bridge [ make ]
[rosmake-0] Finished <<< cv_bridge ROS_NOBUILD in package cv_bridge
No Makefile in package cv_bridge
[rosmake-0] Starting >>> ar_pose [ make ]
[ rosmake ] [ ar_pose: 1.1 sec ] [ 1 Active 42/46 Complete ]
{-------------------------------------------------------------------------------
mkdir -p bin
cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=/opt/ros/groovy/share/ros/core/rosbuild/rostoolchain.cmake ..
CMake Error: The current CMakeCache.txt directory /opt/ros/groovy/stacks/ccny_vision/ar_pose/build/CMakeCache.txt is different than the directory /opt/ros/groovy/share/ros/ccny_vision/ar_pose/build where CMakeCache.txt was created. This may result in binaries being created in the wrong place. If you are not sure, reedit the CMakeCache.txt
CMake Error: The source "/opt/ros/groovy/stacks/ccny_vision/ar_pose/CMakeLists.txt" does not match the source "/opt/ros/groovy/share/ros/ccny_vision/ar_pose/CMakeLists.txt" used to generate cache. Re-run cmake with a different source directory.
[ rosmake ] Output from build of package ar_pose written to:
[ rosmake ] /home/viki/.ros/rosmake/rosmake_output-20130307-164700/ar_pose/build_output.log
[rosmake-0] Finished <<< ar_pose [FAIL] [ 1.06 seconds ]
[ rosmake ] Halting due to failure in package ar_pose.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] Results:
[ rosmake ] Built 43 packages with 1 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/viki/.ros/rosmake/rosmake_output-20130307-164700
root@ROS:/opt/ros/groovy/stacks/ccny_vision/ar_pose/demo#
Thanks!
Originally posted by alonsosjumper on ROS Answers with karma: 9 on 2013-03-07
Post score: 0
Answer:
It looks like you're trying to compile the precompiled binary packages. You should not be building in /opt/ros/*. I suggest you take a look at the basic tutorials.
Originally posted by tfoote with karma: 58457 on 2013-04-04
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13244,
"tags": "rviz, ar-pose"
} |
Function of ER in reviewing mutated proteins | Question: At least in the case of Cystic Fibrosis it happens that a mutant protein (which could actually function!) is held in the ER because the ER detects it as misfolded. Does this happen in every type of mutant protein?
If so, should I conclude that such genetic disorders are caused because of the total absence of a protein rather than because of a misfolded protein? If not, why would the ER or anything else in the cell allow some mutated proteins to do any job (Like in the case of Sickle cell anaemia, where an amino acid substitution happens)?
How do scientists conclude that a misfolded protein could have done a good enough job (Like in the case of cystic fibrosis)?
Answer: Membrane proteins have to go to the ER, from where they are transported to the Golgi apparatus. Usually these proteins are translocated into the ER while translation is still going on. There are chaperones in the ER (such as calnexin) which help in translocation and also catalyze initial steps of folding. The ER is also involved in the unfolded protein response. Unless the protein is properly folded it is not transported to the Golgi. However, this process is regulated at many steps, and I won't add a detailed explanation of the mechanism in this answer.
To conclude, the ER is involved in this case because the protein in question (CFTR, in cystic fibrosis) is a membrane protein. | {
"domain": "biology.stackexchange",
"id": 1335,
"tags": "genetics, cell-biology, proteins, protein-folding"
} |
why is MSE of prediction way different from loss over batches | Question: I am new to machine learning so forgive me if i ask stupid question. I have a time series data and i split it into training and test set.
This is my code:
from numpy import array
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# split a univariate sequence into samples
def split_sequence(sequence, n_steps_in, n_steps_out):
X, y = list(), list()
for i in range(len(sequence)):
# find the end of this pattern
end_ix = i + n_steps_in
out_end_ix = end_ix + n_steps_out
# check if we are beyond the sequence
if out_end_ix > len(sequence):
break
# gather input and output parts of the pattern
seq_x, seq_y = sequence[i:end_ix], sequence[end_ix:out_end_ix]
X.append(seq_x)
y.append(seq_y)
return array(X), array(y)
# choose a number of time steps
n_steps_in, n_steps_out = 10, 5
# split into samples
X, y = split_sequence(trainlist, n_steps_in, n_steps_out)
# define model
model = Sequential()
model.add(Dense(100, activation='relu', input_dim=n_steps_in))
model.add(Dense(n_steps_out))
model.compile(optimizer='adam', loss='mean_squared_error')
# fit model
history = model.fit(X, y, epochs=2000, verbose=0)
# demonstrate prediction
x_input = array(testlist[0:10])
x_input = x_input.reshape((1, n_steps_in))
yhat = model.predict(x_input, verbose=0)
yhat=list(yhat[0])
when i do print(history.history['loss'][-10:-1]) it gives me roughly 0.55 and when i do
from sklearn.metrics import mean_squared_error
mean_squared_error(testlist[11:16],yhat)
it gives me 0.11. Why is it so different?
Answer: From my understanding, you are comparing the prediction error on the test set with the training loss. When the training error is greater than the test error, that usually means your model is under-fitted.
Under-fitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: the model is not powerful enough, is over-regularized, or has simply not been trained long enough. It means the network has not learned the relevant patterns in the training data. There are a number of ways you can resolve an under-fitting problem:
Increase the training time (e.g. run more epochs)
Increase the number of parameters or the overall complexity of your model
"domain": "datascience.stackexchange",
"id": 6440,
"tags": "machine-learning, loss-function, mse"
} |
Dilemma About Energy : Can You EXPLAIN? | Question: PROBLEM 1 : A 2.14 kg block is dropped from a height of 43.6 cm onto a spring of force constant k = 18.6 N/cm as shown in fig:12-19.find the maximum distance the spring will be compressed.
solution :::: Let the spring be compressed a distance x. If the object fell from a height h = 0.436 m, then conservation of energy gives
0.5 * k * (x^2) = mg(x+h) { note that we calculate "mgx" here }
now if we put all the values we will get
x = 0.111 meter
PROBLEM 2 : figure 12-16 shows a 7.94 kg stone resting on a spring . the spring is compressed 10.2 cm by the stone .
the stone is pushed down an additional 28.6 cm and released.
how much potential energy is stored in the spring just before the stone is released?
and how high above this new (lowest) position will the stone rise?
solution ::: if x distance spring compressed because of mg ( m is the mass of block) we can get force constant
k = F/x = mg/x = (7.94)(9.81)/0.102 = 764 N/m
after the stone is pushed down an additional 28.6 cm x = (0.286 + 0.102) meter
SO potential energy stored in the spring just before the stone is released
0.5 * k * (x^2) = 0.5 * 764 * (0.286 + 0.102)^2 = 57.5 J
if h is height from the lowest position to the highest it will rise,using conservation of energy we can say
mgh = 57.5 J so h = 0.738 meter
note that we didnt use "mgx" here !!
now my question is: why didn't we use "mgx" in the second problem?? shouldn't we solve it like the first one??
problems collected from "Physics" by halliday,resnick,krane volume 1 5th edition
this book isnt used as textbook in our country so this isnt a homework problem. im reading the book for my own pleasure.
Answer: In the first:
The energy of the mass till it reaches the height of uncompressed length of the spring:
$E_i=mg(h+l)$
where $h$ is the height of the mass dropped from the uncompressed length of the spring
Final energy after compression:
$E_f=\frac{1}{2}kx^2+mg(l-x)$
Equating $E_i$ and $E_f$
$\frac{1}{2}kx^2=mg(h+x)$
In second prob
$E_i=\frac{1}{2}k(x_1+x_2)^2+mg(l-x_1-x_2)$
$E_f=mg(h'+l)$
So, $mg(h'+x_1+x_2)=\frac{1}{2}k(x_1+x_2)^2$
$h'$ is the height the mass will reach above the uncompressed length of the spring
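A quick numerical check of problem 2 with these formulas (assuming g = 9.81 m/s^2):

```python
m, g = 7.94, 9.81        # stone mass (kg), gravitational acceleration (m/s^2)
x1, x2 = 0.102, 0.286    # resting compression and extra push-down (m)

k = m * g / x1                   # spring constant from the resting compression
pe = 0.5 * k * (x1 + x2) ** 2    # energy stored at the lowest point
h = pe / (m * g)                 # rise above the lowest point, from m*g*h = pe

print(round(k), round(pe, 1), round(h, 3))  # 764 57.5 0.738
# h is measured from the lowest position; the rise above the *uncompressed*
# spring length is h' = h - (x1 + x2), about 0.35 m
```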
This quantity $"h'+x_1+x_2"$ is basically taken as $"h"$ in your solution | {
"domain": "physics.stackexchange",
"id": 26987,
"tags": "homework-and-exercises, newtonian-mechanics, energy-conservation, work, potential-energy"
} |
What kind of "vector" is a feature vector in machine learning? | Question: I'm having trouble understanding the use of Vector in machine learning to represent a group of features.
If one looks up the definition of a Vector, then, according to wikipedia, a Vector is an entity with a magnitude and direction.
This can be understood when applying Vectors to for example physics to represent force, velocity, acceleration, etc...: the components of the Vector represent the components of the physical property along the axes in space. For example, the components of a velocity vector represent the velocity along the x, y and z axes
However, when applying Vectors to machine learning to represent features, then those features can be totally unrelated entities. They can have totally different units: one feature can be the length in meters of a person and another can be the age in years of the person.
But then what is the meaning of the Magnitude of such a Vector, which would then be formed by a summation of meters and years? And the Direction?
I do know about normalization of features to make them have similar ranges, but my question is more fundamental.
Answer:
I'm having trouble understanding the use of Vector in machine learning
to represent a group of features.
In short, I would say that a "feature vector" is just a convenient way to speak about a set of features.
Indeed, for each label 'y' (to be predicted), you need a set of values 'X'. And a very convenient way of representing this is to put the values in a vector, such that when you consider multiple labels, you end up with a matrix, containing one row per label and one column per feature.
In an abstract way, you can definitely think of those vectors as belonging to a multi-dimensional space, but (usually) not a Euclidean one. Hence all the math applies; only the interpretation differs!
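To make the one-row-per-label, one-column-per-feature picture concrete, here is a minimal sketch (the feature values are invented) using plain Python lists:

```python
# Each row is one sample's feature vector; the columns are unrelated
# features (height in meters, age in years) -- their units need not match.
feature_names = ["height_m", "age_years"]
X = [
    [1.80, 34],   # sample 0
    [1.65, 51],   # sample 1
    [1.72, 29],   # sample 2
]
y = [1, 0, 1]     # one label per row

n_samples, n_features = len(X), len(feature_names)
assert all(len(row) == n_features for row in X)
assert len(y) == n_samples
```

Nothing in this representation requires a meaningful Euclidean norm or direction; the "vector" is just an ordered collection of per-feature values.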
Hope that helps you. | {
"domain": "datascience.stackexchange",
"id": 4117,
"tags": "machine-learning, featurization"
} |
Why are both caller-save registers and callee-save registers needed? | Question: I am having a difficult time understanding callee-save and caller-save registers. I get that caller-save registers are those which are needed after a function call, and hence the caller saves them in the caller prologue and restores them in the caller epilogue. Callee-save registers are those used by the callee, saved in the callee prologue and restored in the callee epilogue. But why are both needed? If the caller is already saving the needed registers, why should the callee save them again when it uses them? I am really confused.
Answer: Let's see. A CPU has, for example, 16 registers. If a subroutine modifies them, they should be saved beforehand and restored afterwards. Now, who should do that?
All 16 registers can be saved by the caller, or all 16 registers can be saved by the callee, or a mixed scheme may be used. In such a scheme, 8 registers are considered transient: the caller does not expect their contents to be left intact after a call. Callers therefore usually use these registers only for temporary data, i.e. computations between calls. In the few cases where these registers contain non-transient data, the caller has to save their contents.
The other registers are considered permanent, so the caller expects them to be left untouched by the callee. In the cases where the callee really needs to use more registers, it must save/restore their contents.
Why is a hybrid scheme advantageous? Because usually each procedure needs only a few registers for transient data, so by adopting the convention that these registers may be modified by the callee, you shorten the calling sequence and therefore make procedure calls a bit faster and more compact.
On the other hand, if we treat some registers as "permanent", we can try to avoid using those registers in leaf procedures (i.e. those that call no other procedures) and thus avoid saving/restoring them there. Since the CPU usually spends most of its time in leaf procedures, this also yields quite a substantial speed improvement.
So, by splitting the registers into two classes, we can make the program faster and shorter: leaf subroutines try not to use "permanent" registers and as a result don't need to save/restore any, while other procedures try not to use "transient" registers to hold values across calls, thus avoiding the need to save/restore them.
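The trade-off can be illustrated with a toy cost model (the procedure mix and all numbers below are invented; real compilers are far more subtle). Each saved register costs one save plus one restore:

```python
# Toy cost model (invented numbers): each saved register costs
# 1 save + 1 restore = 2 memory operations.
# A procedure: (values live across its calls, scratch registers, number of calls).
procedures = [
    (2, 2, 3),   # a non-leaf procedure
    (0, 3, 0),   # a leaf procedure: scratch registers only, no calls
    (0, 4, 0),   # another leaf
]

def pure_caller_save(proc):
    live, scratch, calls = proc
    # the caller spills every live value around every call it makes
    return 2 * live * calls

def pure_callee_save(proc):
    live, scratch, calls = proc
    # the callee saves every register it touches, even in leaves
    return 2 * (live + scratch)

def hybrid(proc):
    live, scratch, calls = proc
    # live-across-call values go in "permanent" (callee-saved) registers,
    # saved once per invocation; scratch values use "transient" registers,
    # which cost nothing in leaves
    return 2 * live

costs = {scheme.__name__: sum(map(scheme, procedures))
         for scheme in (pure_caller_save, pure_callee_save, hybrid)}
print(costs)  # {'pure_caller_save': 12, 'pure_callee_save': 22, 'hybrid': 4}
assert costs["hybrid"] <= costs["pure_caller_save"]
assert costs["hybrid"] <= costs["pure_callee_save"]
```

Under this (deliberately simple) model the leaf-heavy workload makes the hybrid convention cheapest, which is the point of the argument above.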
In short, the hybrid scheme makes the program more efficient than either pure scheme. | {
"domain": "cs.stackexchange",
"id": 13898,
"tags": "programming-languages, functional-programming"
} |
Solving linear programs with special structure | Question: We have an application and at some point we need to solve a linear programming problem that looks like this:
$$
\min\ w_{1,2} + w_{3,4} + w_{5,6}\\
x_i - x_j \leq c_{ij},\ \forall\ (i,j) \in C\\
x_1 - x_2 \leq w_{1,2}\\
x_3 - x_4 \leq w_{3,4}\\
x_5 - x_6 \leq w_{5,6}
$$
where $x_i, x_j$ and $w_{1,2}, w_{3,4}, w_{5,6}$ are variables of $\mathbb{R}$ and $c_{ij} \in \mathbb{R}$ are constants. Set $C$ is the set of difference constraints for which the RHS is a constant. We know that pairs $(1,2),(3,4),(5,6) \notin C$.
Obviously this can be solved with any linear programming tool out there, and because of this we know the problem is solvable in polynomial time. Unfortunately, most LP solvers are too slow for our purposes, so we are looking for some other technique that could exploit this structure, if one exists.
What we know: It is well known that
$$ x_i - x_j \leq c_{ij},\ \forall\ (i,j) \in C$$
can be solved using shortest path algorithms by creating a distance graph from these constraints and running, for example, Bellman-Ford or Floyd-Warshall algorithms. If our objective function is only
$$ \min\ w_1$$
it is possible to solve the problem with a trivial binary search. However, when there are three values which interplay with each other this is not straightforward (we believe).
Question: Is there any known algorithm or technique that can solve this problem and which is not a general linear programming algorithm (e.g., Simplex, Ellipsoid, etc.)? It would be helpful just to be pointed in some direction, because so far we have not found anything remotely close that can help. Additionally, are there special cases of this system with a known algorithm (say, if all $c_{ij} \geq 0$, or something similar)?
Counter example: Here is a counter example to the approach proposed in the answer https://cstheory.stackexchange.com/a/52165 (now deleted). It refers to the following system:
$$
\min\ w_{1,2} + w_{3,4} + w_{5,6}\\
x_2 - x_5 \leq -5\\
x_2 - x_7 \leq 5\\
x_5 - x_1 \leq -5\\
x_4 - x_6 \leq -5\\
x_6 - x_3 \leq -5\\
x_6 - x_5 \leq -21\\
x_7 - x_3 \leq 0\\
x_7 - x_5 \leq -15\\
x_1 - x_2 \leq w_{1,2}\\
x_3 - x_4 \leq w_{3,4}\\
w_{5,6} = 0,\ \text{(to simplify)}
$$
If we construct the proposed graph, we have the following:
Computing the shortest paths with Floyd-Warshall, we have:
$$
d_{2,1} = -15\\
d_{4,3} = -10
$$
which would imply a solution of value 25.
However, if we check the system replacing $w_{1,2} = 15$ and $w_{3,4}=10$ we will notice that this creates an infeasible system (the graph has a negative cycle).
Indeed, the optimal solution is 26, achieved with $w_{1,2}=15$ and $w_{3,4}=11$. This is caused by the interplay between the constraints $(1,2)$ and $(3,4)$ and the others in our system.
The current proposed solution is a valid lower bound but it is not guaranteed to produce the optimal solution.
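The shortest-path values quoted above can be verified directly. Here is a small Floyd-Warshall sketch over the counterexample's constraint graph; each constraint $x_i - x_j \leq c_{ij}$ is taken as an edge $i \to j$ with weight $c_{ij}$, which is the convention that reproduces the quoted values:

```python
INF = float("inf")

# Edge i -> j with weight c for each constraint x_i - x_j <= c
edges = [(2, 5, -5), (2, 7, 5), (5, 1, -5), (4, 6, -5),
         (6, 3, -5), (6, 5, -21), (7, 3, 0), (7, 5, -15)]

nodes = range(1, 8)
dist = {i: {j: (0 if i == j else INF) for j in nodes} for i in nodes}
for i, j, c in edges:
    dist[i][j] = min(dist[i][j], c)

# Floyd-Warshall all-pairs shortest paths (handles negative edge weights)
for k in nodes:
    for i in nodes:
        for j in nodes:
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

assert dist[2][1] == -15   # d_{2,1}, via 2 -> 7 -> 5 -> 1
assert dist[4][3] == -10   # d_{4,3}, via 4 -> 6 -> 3
```

This confirms the quoted distances; as the counterexample explains, taking $w_{1,2}=-d_{2,1}$ and $w_{3,4}=-d_{4,3}$ independently is still only a lower bound.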
We have tried using off-the-shelf LP solvers, including CPLEX, Gurobi, and GLPK, but are hoping for something faster. We are also hoping for something that doesn't rely on expensive commercial software, so a dedicated algorithm for this class of problem would be very helpful.
Answer: Assuming none of the $w_{ij}$ variables are constrained to be non-negative, your problem can be recast as a particular min-cost flow problem, and via that solved by solving just one all-pairs shortest-path problem, as follows.
The original LP is equivalent (after negating the objective) to
$$\displaystyle\max \{x_2 - x_1 +x_4 - x_3 + x_6 - x_5 : (\forall (i, j)\in C)~ x_i - x_j \le c_{ij}\}.$$
The dual is to minimize $\displaystyle\sum_{(i,j)\in C} c_{ij} f_{ij}$ subject to $f_{ij} \ge 0$ and
$$\sum_{i : (i, k)\in C} f_{ik}
- \sum_{j : (k, j)\in C} f_{kj}
=
\begin{cases}
-1 & (k\in \{1, 3, 5\}) \\
1 & (k\in \{2, 4, 6\}) \\
0 & (k\not\in[6]).
\end{cases}
$$
This is equivalent to the following minimum-cost flow problem. Create a flow network $G=(V,E)$ with a vertex $i\in V$ for each index $i$, an infinite-capacity edge $(i, j)$ of cost $c_{ij}$ for each $(i, j)\in C$, and artificial source and sink vertices $s$ and $t$ with edges $(s, 2)$, $(s, 4)$, $(s, 6)$, $(1, t)$, $(3, t)$, and $(5, t)$ with cost zero and capacity 1. Now ask for a minimum-cost $s$-$t$ flow $f$ of value 3 in this network.
In this particular network, this is equivalent to finding three shortest paths $p_2$, $p_4$, $p_6$ such that for some bijection $\pi:\{2,4,6\}\leftrightarrow\{1,3,5\}$ each path $p_i$ goes from $i$ to $\pi(i)$.
There are only nine possible pairs $(i, \pi(i))$, so your problem can be solved by solving just one all-pairs shortest-path problem in $G$ (or three single-source problems), then considering the six possible bijections $\pi$ and taking the best.
One downside is that these shortest-path problems presumably involve negative weights. (But no negative-weight cycles unless your original problem is infeasible, I think.)
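To illustrate the mechanics of the final step, here is a sketch that runs Bellman-Ford (which tolerates the negative weights) from each of $\{2,4,6\}$ on the counterexample's constraint graph and minimizes over the six bijections onto $\{1,3,5\}$. This only demonstrates the enumeration; it omits the negative-cycle check a real implementation would need, and whether the resulting value equals the LP optimum for a given instance should be judged against the discussion above:

```python
from itertools import permutations

INF = float("inf")
# Edge i -> j with weight c for each constraint x_i - x_j <= c
edges = [(2, 5, -5), (2, 7, 5), (5, 1, -5), (4, 6, -5),
         (6, 3, -5), (6, 5, -21), (7, 3, 0), (7, 5, -15)]
nodes = range(1, 8)

def bellman_ford(src):
    d = {v: INF for v in nodes}
    d[src] = 0
    for _ in range(len(nodes) - 1):     # |V| - 1 relaxation passes
        for i, j, c in edges:
            if d[i] + c < d[j]:
                d[j] = d[i] + c
    # (a real implementation would also detect negative cycles here)
    return d

dist = {s: bellman_ford(s) for s in (2, 4, 6)}

# Try all six bijections {2,4,6} -> {1,3,5}, keep the cheapest total
best = min(sum(dist[s][t] for s, t in zip((2, 4, 6), perm))
           for perm in permutations((1, 3, 5)))
print(best)  # -47 for this graph
```

Three single-source runs plus six constant-size sums, exactly as described above.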
Hopefully this gives you something to work with. Perhaps you can exploit the structure to get further improvements. | {
"domain": "cstheory.stackexchange",
"id": 5594,
"tags": "graph-algorithms, optimization, linear-programming"
} |
"Zip-like" functionality with C++11's range-based for-loop | Question: My goal was to get the following code to work:
#include<iostream>
#include<vector>
#include<list>
#include<algorithm>
#include<array>
#include"functional.hpp"
int main( )
{
std::vector<double> a{1.0, 2.0, 3.0, 4.0};
std::list<char> b;
b.push_back('a');
b.push_back('b');
b.push_back('c');
b.push_back('d');
std::array<int,5> c{5,4,3,2,1};
auto d = zip(a, b, c);
for (auto i : zip(a, b, c) )
{
std::cout << std::get<0>(i) << ", " << std::get<1>(i) << ", " << std::get<2>(i) << std::endl;
}
for (auto i : d)
{
std::cout << std::get<0>(i) << ", " << std::get<1>(i) << ", " << std::get<2>(i) << std::endl;
std::get<0>(i) = 5.0;
//std::cout << i1 << ", " << i2 << ", " << i3 << std::endl;
}
for (auto i : d)
{
std::cout << std::get<0>(i) << ", " << std::get<1>(i) << ", " << std::get<2>(i) << std::endl;
//std::cout << i1 << ", " << i2 << ", " << i3 << std::endl;
}
}
With the output:
1, a, 5
2, b, 4
3, c, 3
4, d, 2
5, a, 5
5, b, 4
5, c, 3
5, d, 2
The source for "functional.hpp" is:
#pragma once
#include<tuple>
#include<iterator>
#include<utility>
/***************************
// helper for tuple_subset and tuple_tail (from http://stackoverflow.com/questions/8569567/get-part-of-stdtuple)
***************************/
template <size_t... n>
struct ct_integers_list {
template <size_t m>
struct push_back
{
typedef ct_integers_list<n..., m> type;
};
};
template <size_t max>
struct ct_iota_1
{
typedef typename ct_iota_1<max-1>::type::template push_back<max>::type type;
};
template <>
struct ct_iota_1<0>
{
typedef ct_integers_list<> type;
};
/***************************
// return a subset of a tuple
***************************/
template <size_t... indices, typename Tuple>
auto tuple_subset(const Tuple& tpl, ct_integers_list<indices...>)
-> decltype(std::make_tuple(std::get<indices>(tpl)...))
{
return std::make_tuple(std::get<indices>(tpl)...);
// this means:
// make_tuple(get<indices[0]>(tpl), get<indices[1]>(tpl), ...)
}
/***************************
// return the tail of a tuple
***************************/
template <typename Head, typename... Tail>
inline std::tuple<Tail...> tuple_tail(const std::tuple<Head, Tail...>& tpl)
{
return tuple_subset(tpl, typename ct_iota_1<sizeof...(Tail)>::type());
// this means:
// tuple_subset<1, 2, 3, ..., sizeof...(Tail)-1>(tpl, ..)
}
/***************************
// increment every element in a tuple (that is referenced)
***************************/
template<std::size_t I = 0, typename... Tp>
inline typename std::enable_if<I == sizeof...(Tp), void>::type
increment(std::tuple<Tp...>& t)
{ }
template<std::size_t I = 0, typename... Tp>
inline typename std::enable_if<(I < sizeof...(Tp)), void>::type
increment(std::tuple<Tp...>& t)
{
std::get<I>(t)++ ;
increment<I + 1, Tp...>(t);
}
/****************************
// check equality of a tuple
****************************/
template<typename T1>
inline bool not_equal_tuples( const std::tuple<T1>& t1, const std::tuple<T1>& t2 )
{
return (std::get<0>(t1) != std::get<0>(t2));
}
template<typename T1, typename... Ts>
inline bool not_equal_tuples( const std::tuple<T1, Ts...>& t1, const std::tuple<T1, Ts...>& t2 )
{
return (std::get<0>(t1) != std::get<0>(t2)) && not_equal_tuples( tuple_tail(t1), tuple_tail(t2) );
}
/****************************
// dereference a subset of elements of a tuple (dereferencing the iterators)
****************************/
template <size_t... indices, typename Tuple>
auto dereference_subset(const Tuple& tpl, ct_integers_list<indices...>)
-> decltype(std::tie(*std::get<indices-1>(tpl)...))
{
return std::tie(*std::get<indices-1>(tpl)...);
}
/****************************
// dereference every element of a tuple (applying operator* to each element, and returning the tuple)
****************************/
template<typename... Ts>
inline auto
dereference_tuple(std::tuple<Ts...>& t1) -> decltype( dereference_subset( std::tuple<Ts...>(), typename ct_iota_1<sizeof...(Ts)>::type()))
{
return dereference_subset( t1, typename ct_iota_1<sizeof...(Ts)>::type());
}
template< typename T1, typename... Ts >
class zipper
{
public:
class iterator : std::iterator<std::forward_iterator_tag, std::tuple<typename T1::value_type, typename Ts::value_type...> >
{
protected:
std::tuple<typename T1::iterator, typename Ts::iterator...> current;
public:
explicit iterator( typename T1::iterator s1, typename Ts::iterator... s2 ) :
current(s1, s2...) {};
iterator( const iterator& rhs ) : current(rhs.current) {};
iterator& operator++() {
increment(current);
return *this;
}
iterator operator++(int) {
auto a = *this;
increment(current);
return a;
}
bool operator!=( const iterator& rhs ) {
return not_equal_tuples(current, rhs.current);
}
typename iterator::value_type operator*() {
return dereference_tuple(current);
}
};
explicit zipper( T1& a, Ts&... b):
begin_( a.begin(), (b.begin())...),
end_( a.end(), (b.end())...) {};
zipper(const zipper<T1, Ts...>& a) :
begin_( a.begin_ ),
end_( a.end_ ) {};
template<typename U1, typename... Us>
zipper<U1, Us...>& operator=( zipper<U1, Us...>& rhs) {
begin_ = rhs.begin_;
end_ = rhs.end_;
return *this;
}
zipper<T1, Ts...>::iterator& begin() {
return begin_;
}
zipper<T1, Ts...>::iterator& end() {
return end_;
}
zipper<T1, Ts...>::iterator begin_;
zipper<T1, Ts...>::iterator end_;
};
//from cppreference.com:
template <class T>
struct special_decay
{
using type = typename std::decay<T>::type;
};
//allows the use of references:
template <class T>
struct special_decay<std::reference_wrapper<T>>
{
using type = T&;
};
template <class T>
using special_decay_t = typename special_decay<T>::type;
//allows template type deduction for zipper:
template <class... Types>
zipper<special_decay_t<Types>...> zip(Types&&... args)
{
return zipper<special_decay_t<Types>...>(std::forward<Types>(args)...);
}
I'm asking for a few things:
References and performance: Theoretically, the only things that (I believe) should be copied are iterators, but I'm not sure how to confirm this. I'm hoping that this has very little overhead (A couple of pointer dereferences maybe?), but I'm not sure how to check something like this.
Correctness: Are there any hidden bugs that aren't showing up in my use case? Technically, the code isn't really working as "expected" (it should spit out copies of the values for "auto i", and only allow modification of the original containers values with "auto& i", and there should be a version that allows you to look, but not touch: "const auto& i"), but I'm not sure how to fix that. I expect I need to create a const version for the const auto& i mode, but I'm not sure how to make a copy for the auto i version.
Code Cleanliness, Best practices: My code is pretty much never read by another human being, so any recommendations of best practices or commenting would be appreciated. I'm also not sure what to do about the move constructors: should I be deleting them, or ignoring them?
Answer: I don't have much to say. Your code reads quite well, which is rather pleasant. Here are a few tidbits though:
typedef
If you are willing to write modern code, you should consider dropping typedef and using using everywhere instead. It helps to be consistent between regular alias and alias template. Moreover, the = symbol help to visually split the new name and the type it refers to. And the syntax is also consistent towards the way you can declare variables:
auto i = 1;
using some_type = int;
Perfect forwarding
It's clear that you already used it. But there are some other places where it would make sense to use it:
template <size_t... indices, typename Tuple>
auto tuple_subset(Tuple&& tpl, ct_integers_list<indices...>)
-> decltype(std::make_tuple(std::get<indices>(std::forward<Tuple>(tpl))...))
{
return std::make_tuple(std::get<indices>(std::forward<Tuple>(tpl))...);
// this means:
// make_tuple(get<indices[0]>(tpl), get<indices[1]>(tpl), ...)
}
std::enable_if
While using std::enable_if in the return type of the functions, I find that it tends to make it unreadable. Therefore, you may probably want to move it to the template parameters list instead. Consider your code:
template<std::size_t I = 0, typename... Tp>
inline typename std::enable_if<(I < sizeof...(Tp)), void>::type
increment(std::tuple<Tp...>& t)
{
std::get<I>(t)++ ;
increment<I + 1, Tp...>(t);
}
And compare it to this one:
template<std::size_t I = 0, typename... Tp,
typename = typename std::enable_if<I < sizeof...(Tp), void>::type>
inline void increment(std::tuple<Tp...>& t)
{
std::get<I>(t)++ ;
increment<I + 1, Tp...>(t);
}
Pre-increment vs. post-increment
Depending on the type, ++var may be faster than var++. It does not change anything for int but if your container contains large type, remember that the ++ in var++ is generally defined as:
auto operator++(int)
-> T   // note: returned by value; returning a reference to the local copy would dangle
{
auto res = var;
++var;
return res;
}
As you can see, yet another copy of the incremented variable is made and ++var is called. Therefore, you may want to use ++var instead of var++ in a generic context.
Miscellaneous tidbits
template<typename U1, typename... Us>
zipper<U1, Us...>& operator=(zipper<U1, Us...>& rhs) { ... }
You may want to pass a const zipper<U1, Us...>& instead of a zipper<U1, Us...>&.
zipper<T1, Ts...>::iterator& begin() {
return begin_;
}
zipper<T1, Ts...>::iterator& end() {
return end_;
}
You may also want to provide the functions begin() const, end() const, cbegin() const and cend() const if you want this set of functions to be complete and consistent towards the STL. Also, some operator== would be great for zipper::iterator.
Also, I like the new function syntax which IMHO helps to separate the function return type and its name, which is especially useful when the said return type is long. However, I read that some people don't like it, so it's up to your own preferences.
Conclusion
Generally speaking, your code was good and it works, which is even better. I've provided a few tips, but there are probably many other things that you could do to improve it. It will always be up to you when it concerns the readability though, preferences matter in that domain :)
Edit: ok, so apparently your example worked fine, but @Barry's answer seems to highlight more serious problems. You might want to accept it instead. | {
"domain": "codereview.stackexchange",
"id": 17086,
"tags": "c++, functional-programming, c++11, iterator"
} |
Why can't tertiary amines show chirality? | Question:
Why is a tertiary amine with three different substituents not chiral? Doesn't the lone pair represent a fourth, different substituent?
Answer: The short answer to your question
Under conditions that inhibit inversion, an amine that has three different groups attached is chiral.
The above applies not just to tertiary but also to secondary amines.
Now, why inhibit inversion? Why does it make a difference?
As mentioned by Zhe (and mentioned in Newton's answer) in the comments:
Tertiary amines can definitely be chiral. It's just that they epimerize quickly, but they are definitely chiral when there are 3 different substituents. In addition, you can easily create a tertiary amine that does not invert by tying back the substituents.
If you notice, they are chiral only for very short periods of time, due to the inversion that takes place in amines. Regarding the time scale: ammonia ($\ce{NH3}$) flips $4 \times 10^{10}$ times every second. Converting that to the time for one inversion gives an inversion every $\pu{2.5 \times10^{-11} s}$, i.e. every $\pu{25 picoseconds}$. Such a time scale cannot be discerned by any of our current methods for identifying stereochemistry.
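The quoted timescale is just unit arithmetic:

```python
import math

inversions_per_second = 4e10              # NH3 inversion rate quoted above
period_s = 1 / inversions_per_second      # time for a single inversion
period_ps = period_s * 1e12               # seconds -> picoseconds

assert math.isclose(period_s, 2.5e-11)
assert math.isclose(period_ps, 25.0)
```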
From http://www.chem.ucalgary.ca/courses/350/Carey5th/Ch07/ch7-3.html
In this case, even though a chirality center is present in each molecule, the sample is optically inactive since the optical activity of the two extremes of inversion averages out because they are enantiomeric.
For why umbrella inversion occurs, see this answer by SendersReagent
Now, what if we restrict this inversion by attaching free-rotation restricting groups? What happens then?
This restriction removes inversion from the picture entirely: rearrangement into a planar $\mathrm {sp}^2$ geometry cannot take place, because the added groups sterically inhibit the nitrogen from flattening. An example of this is quinuclidine, which has no way of forming a planar structure. | {
"domain": "chemistry.stackexchange",
"id": 14504,
"tags": "organic-chemistry, amines, chirality"
} |
Orocos components cannot be compiled | Question:
Hello,
I am getting an error while compiling the rtt package of the orocos-toolchain. I tried reinstalling the latest updates from the repository, but it seems it is still not working.
rootx@rootx-laptop:/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros$ roscd rtt
rootx@rootx-laptop:/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt$ rosmake
[ rosmake ] No package specified. Building ['rtt']
[ rosmake ] Packages requested are: ['rtt']
[ rosmake ] Logging to directory/home/rootx/.ros/rosmake/rosmake_output-20110221-115447
[ rosmake ] Expanded args ['rtt'] to:
['rtt']
[ rosmake ] Checking rosdeps compliance for packages rtt. This may take a few seconds.
[ rosmake ] rosdep check passed all system dependencies in packages
[rosmake-0] Starting >>> rosbuild [ make ]
[rosmake-0] Finished <<< rosbuild ROS_NOBUILD in package rosbuild
No Makefile in package rosbuild
[rosmake-1] Starting >>> roslang [ make ]
[rosmake-1] Finished <<< roslang ROS_NOBUILD in package roslang
No Makefile in package roslang
[rosmake-1] Starting >>> roslib [ make ]
[rosmake-1] Finished <<< roslib ROS_NOBUILD in package roslib
[rosmake-3] Starting >>> rtt [ make ]
[rosmake-2] Starting >>> xmlrpcpp [ make ]
[rosmake-2] Finished <<< xmlrpcpp ROS_NOBUILD in package xmlrpcpp
[rosmake-2] Starting >>> rosconsole [ make ]
[rosmake-2] Finished <<< rosconsole ROS_NOBUILD in package rosconsole
[rosmake-2] Starting >>> roscpp [ make ]
[rosmake-2] Finished <<< roscpp ROS_NOBUILD in package roscpp
[rosmake-2] Starting >>> rosout [ make ]
[rosmake-2] Finished <<< rosout ROS_NOBUILD in package rosout
[ rosmake ] Last 40 lines: [ 6.5 sec ] [ 1 Active 9/10 Complete ]
{-------------------------------------------------------------------------------
[ 0%] Built target message
make[3]: Entering directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
make[3]: Leaving directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
[ 44%] Built target orocos-rtt-gnulinux_dynamic
make[3]: Entering directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
make[3]: Entering directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
make[3]: Leaving directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
make[3]: Leaving directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
make[3]: Entering directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
[ 47%] Built target rtt-typekit-gnulinux_plugin
[ 49%] Built target orocos-rtt-mqueue-gnulinux_dynamic
make[3]: Entering directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
make[3]: Entering directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
make[3]: Entering directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
make[3]: Leaving directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
make[3]: Leaving directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
make[3]: Leaving directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
make[3]: Leaving directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
make[3]: Entering directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
[ 49%] [ 69%] Building CXX object build/orocos-toolchain-rtt/rtt/transports/mqueue/CMakeFiles/rtt-transport-mqueue-gnulinux_plugin.dir/MQLib.cpp.o
Built target orocos-rtt-corba-gnulinux_dynamic
[ 76%] make[3]: Entering directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
Built target rtt-marshalling-gnulinux_plugin
make[3]: Leaving directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
[ 76%] Built target rtt-transport-corba-gnulinux_plugin
[100%] Built target rtt-scripting-gnulinux_plugin
In file included from /opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build/orocos-toolchain-rtt/rtt/transports/mqueue/MQSerializationProtocol.hpp:50,
from /opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build/orocos-toolchain-rtt/rtt/transports/mqueue/MQLib.cpp:42:
/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build/orocos-toolchain-rtt/rtt/transports/mqueue/binary_data_archive.hpp: In member function ‘void RTT::mqueue::binary_data_iarchive::load_binary(void*, size_t)’:
/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build/orocos-toolchain-rtt/rtt/transports/mqueue/binary_data_archive.hpp:245: error: ‘stream_error’ is not a member of ‘boost::archive::archive_exception’
/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build/orocos-toolchain-rtt/rtt/transports/mqueue/binary_data_archive.hpp:259: error: ‘stream_error’ is not a member of ‘boost::archive::archive_exception’
/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build/orocos-toolchain-rtt/rtt/transports/mqueue/binary_data_archive.hpp: In member function ‘void RTT::mqueue::binary_data_oarchive::save_binary(const void*, size_t)’:
/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build/orocos-toolchain-rtt/rtt/transports/mqueue/binary_data_archive.hpp:434: error: ‘stream_error’ is not a member of ‘boost::archive::archive_exception’
make[3]: *** [build/orocos-toolchain-rtt/rtt/transports/mqueue/CMakeFiles/rtt-transport-mqueue-gnulinux_plugin.dir/MQLib.cpp.o] Error 1
make[3]: Leaving directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
make[2]: *** [build/orocos-toolchain-rtt/rtt/transports/mqueue/CMakeFiles/rtt-transport-mqueue-gnulinux_plugin.dir/all] Error 2
make[2]: Leaving directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt/build'
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package rtt written to:
[ rosmake ] /home/rootx/.ros/rosmake/rosmake_output-20110221-115447/rtt/build_output.log
[rosmake-3] Finished <<< rtt [FAIL] [ 6.61 seconds ]
[ rosmake ] Halting due to failure in package rtt.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] Results:
[ rosmake ] Built 10 packages with 1 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/rootx/.ros/rosmake/rosmake_output-20110221-115447
rootx@rootx-laptop:/opt/ros/cturtle/stacks/kul-ros-pkg/stacks/orocos_toolchain_ros/tags/orocos_toolchain_ros-0.1.5/rtt$
Any help on this would be appreciated.
Prasad
Originally posted by Prasad on ROS Answers with karma: 79 on 2011-02-20
Post score: 0
Answer:
The 'official' installation instructions are available on orocos.org (http://www.orocos.org/wiki/orocos/toolchain/getting-started) and contain a link to the orocos_toolchain_ros stack. In order to solve the error message you have, a couple of questions:
are you sure your working directory is clean? I see folder names like /tag/-0.1.5... That's definitely not the latest version of the stack. Ruben released the 0.2.1 version last week.
which ROS distribution are you using? As of diamondback, there are differences in the orocos_toolchain_ros stack between cturtle and diamondback. If you're using our git repository, use the cturtle branch for cturtle and the diamondback (or master) branch for diamondback.
What platform are you working on?
Which boost version are you using?
regards,
Steven
Originally posted by Steven Bellens with karma: 735 on 2011-02-21
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 4814,
"tags": "ros, orocos"
} |
Transpiling error in IBM Quantum Experience | Question: I ran the following code
which adds two 2-digit numbers, on different back-ends. The simulator gives the expected result, while all the real QCs have a problem with the transpiling:
In fact, Melbourne was able to return some results, but they were quite far from what was expected and from what the simulator returned. Has anybody encountered a similar problem? I paste below how the result looks when it fails:
Answer: I would add that you use 7 qubits; however, with the exception of Melbourne, the processors have fewer qubits (for example, Armonk has only 1 qubit, Ourense 5 qubits, etc.), so you cannot run the algorithm on them.
Melbourne has 15 qubits, so your algorithm is transpiled without error. You can check the number of qubits by clicking on a particular processor on the main page of IBM Q.
Note that the simulator can be used for algorithms of up to 32 qubits.
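Before submitting, one can compare the circuit width against each backend's qubit count. The sketch below hard-codes the counts mentioned above (the backend names are illustrative; in Qiskit the real count comes from each backend's configuration):

```python
# Qubit counts quoted in the answer; in practice, read them from each
# backend's configuration in IBM Q / Qiskit rather than hard-coding.
backends = {
    "ibmq_armonk": 1,
    "ibmq_ourense": 5,
    "ibmq_16_melbourne": 15,
    "qasm_simulator": 32,
}

circuit_width = 7  # the two-digit adder in the question uses 7 qubits

# Only backends with at least as many qubits as the circuit width qualify
eligible = sorted(name for name, n in backends.items() if n >= circuit_width)
print(eligible)  # ['ibmq_16_melbourne', 'qasm_simulator']
```

This matches the observed behaviour: only Melbourne and the simulator accept the 7-qubit circuit.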
The unexpected results on Melbourne are caused by a circuit that is too deep. | {
"domain": "quantumcomputing.stackexchange",
"id": 2690,
"tags": "ibm-q-experience"
} |
The pressure in a water spout and Bernoulli's equation | Question: This is a conceptual question about the application of Bernoulli's equation to a water spout.
There is a classical problem found in many physics texts which goes something like "you have a garden hose with a nozzle which flares inward so the radius is smaller at the end. How high does the water shoot into the air?"
So there are obviously details for the exact problem (like the angle of the hose, the pressure or velocity, etc.), but I am specifically interested in the applicability of Bernoulli's equation to the water which has left the hose. It would seem that after the water has left the hose, it no longer satisfies the conditions for Bernoulli's equation. I can't quite put my finger on why; I can't exactly see that the flow is not laminar, but since the air outside the flow is certainly not at the same pressure as the water, the flow certainly doesn't seem to be steady.
In my understanding, you would treat the water in the nozzle with Bernoulli's equation (or continuity, depending on the exact conditions) and then simply treat the water as droplets acted on by gravity. If that's true, can someone clarify exactly which conditions of Bernoulli's equation are being violated?
Alternatively, if I am wrong, can you convince me that the water can still be considered a "fluid" for the purposes of applying Bernoulli's equation?
EDIT: A specific example in response to a comment. This shows that assuming Bernoulli still applies is equivalent to assuming the pressure of the stream is the same as the atmospheric pressure. Vertical oil pipe of height $h_1$, oil (gauge) pressure at its base of $P$ and fluid velocity $v$. How high into the air does the oil shoot?
Solution 1) Bernoulli fails when the fluid exits the pipe. The pressure when the fluid leaves is equal to 0 (gauge) so we have Bernoulli's equation at the top of the pipe
$$P+\frac{1}{2}\rho v^2=\frac{1}{2}\rho v'^2+\rho g h_1\longrightarrow v'^2=v^2+\frac{2P}{\rho}-2gh_1$$
Then using energy conservation we have
$$mgh_1+\frac{1}{2}mv'^2=mgh_2\longrightarrow h_2=h_1+\frac{v'^2}{2g}$$
$$h_2=h_1+\frac{1}{2g}\left( v^2+\frac{2P}{\rho}-2gh_1 \right)=\frac{v^2}{2g}+\frac{P}{g\rho}$$.
Solution 2) Bernoulli holds throughout the motion. The (gauge) pressure at the very top (and throughout the entire spout) is zero, so Bernoulli gives us
$$P+\frac{1}{2}\rho v^2=\rho g h_2\longrightarrow h_2=\frac{P}{g\rho}+\frac{v^2}{2g}$$
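That the two routes give identical heights can also be checked numerically; here is a quick sketch (the density, pressure, speed, and height values below are hypothetical, chosen only to exercise the algebra):

```python
rho = 900.0    # hypothetical oil density, kg/m^3
g = 9.81       # m/s^2
P = 2.0e5      # hypothetical gauge pressure at the base, Pa
v = 3.0        # hypothetical fluid speed in the pipe, m/s
h1 = 10.0      # hypothetical pipe height, m

# Solution 1: Bernoulli up to the pipe exit, then energy conservation.
v_prime_sq = v**2 + 2.0 * P / rho - 2.0 * g * h1
h2_sol1 = h1 + v_prime_sq / (2.0 * g)

# Solution 2: Bernoulli applied all the way to the top of the spout.
h2_sol2 = P / (g * rho) + v**2 / (2.0 * g)

print(abs(h2_sol1 - h2_sol2) < 1e-9)  # True: the two routes agree
```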
Answer: The pressure in the stream must be approximately atmospheric, which causes the two solutions to be identical.
Fluid will naturally want to accelerate down a pressure gradient; there's more force on one side than the other, so there's a net force and thus the fluid will accelerate.
Viscosity can prevent this acceleration by providing a counteracting force but it can only do this in the direction of flow, and only if there is a wall to transfer the momentum to.
So if you look at the pressure in the center of the free stream and then track it as you move toward the air, the pressure must be constant, as there is no acceleration and no viscous pressure loss.
Then at the water/air interface the pressure difference would be determined by surface tension, but that difference would be so small that it can probably be ignored. Any other pressure difference would not be balanced by an opposing force, so it would cause acceleration; but we don't see transverse acceleration of the stream, so we know there must be a negligible pressure difference. | {
"domain": "physics.stackexchange",
"id": 27039,
"tags": "homework-and-exercises, fluid-dynamics, bernoulli-equation"
} |
Free energy of solvation and Henry's law constant | Question: I am trying to calculate the free energy of solvation of $\ce{CO2}$ from its Henry's law constant. As given on Wikipedia, the dimensionless Henry's law constant is $0.83$. If I try to calculate $\Delta G$ from this, I got
$$\begin{align}
\Delta G &= -RT\ln K \\
&= -\pu{8.3145 J mol-1 K-1}\times\pu{298.15 K}\times\ln 0.83 \\
&= \pu{0.46 kJ mol-1} \\
&= \pu{0.11 kcal mol-1}.
\end{align}$$
However, I saw an article saying that the experimental value is $\pu{0.24 kcal mol-1}$. The article cites the CRC Handbook of Chemistry and Physics, while Wikipedia cites this compilation, which I think is also a reliable source.
So what did I do wrong in my calculation of $\Delta G$ from the Henry's law constant?
Answer: The CRC Handbook of Chemistry and Physics [1] provides data for the solubility of $CO_2$ in $H_2O$ originally from reference [2]. The reported data are mole ratio solubilities $\chi$ of the gas at a partial pressure of one atmosphere (101.325 kPa), but can readily be converted into Henry's law solubility in other units:
$$H_{CP} = 1000 \delta_{H2O} \chi /M_{H2O} = 997 \times 6.15 \times 10^{-4}/18 = 0.034 mol/L atm$$
or in molal and bar units
$$H_{mP} = 1000\chi /1.013 (bar/atm)M_{H2O}=1000\times 6.15 \times 10^{-4}/(18 \times 1.013) = 0.034 mol/kg bar$$
Reference [2] is one of the original sources of the data tabulated in the Wikipedia, compiled by Rolf Sander in reference [3]. Other references in [3] provide similar values of $H_{CP} \approx 0.034$ mol/Latm. The value of $H_{CP}$ in the wikipedia is therefore consistent with the solubility reported in the CRC Handbook.
The article by Prasetyo and Hofer [4], referred to in the OP, cites the CRC Handbook as the source of the following reported experimental value (see Table 4): $$\Delta_{solv} G^o = 0.24 kcal/mol$$
However, from $H_{mP} = 0.034$ mol/kgbar (or $H_{CP} = 0.034$ mol/Latm) we obtain $$\Delta_{solv} G^o = -RTlog(H_{CP} \frac{P^o}{m^o})= 2.0 kcal/mol$$ which is far larger than the value reported in the article.
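The arithmetic of that last conversion can be verified directly; a short sketch using the same constants as above:

```python
import math

R = 8.3145        # J mol^-1 K^-1
T = 298.15        # K
H_cp = 0.034      # mol L^-1 atm^-1, with c-standard = 1 M and P-standard = 1 atm

dG_J = -R * T * math.log(H_cp)   # J/mol
dG_kcal = dG_J / 4184.0          # 1 kcal = 4184 J
print(round(dG_kcal, 2))         # ~2.0 kcal/mol, as quoted above
```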
To attempt further progress, note that comparing standard free energies of solvation requires awareness of the associated reference states.
Prasetyo and Hofer indicate (pg 6474) that their choice of standard state is P° = 1 bar, m° = 1 mol/kg, and T = 298.15 K. The experimental values Prasetyo and Hofer compare to simulation results (see Table 4) refer presumably to this standard state. But this is approximately the same as the molar standard state associated with the data reported in the Wikipedia (P° = 1 atm, c° = 1 M, 298.15 K). Certainly the energies computed in the two scales don't differ significantly, as shown above. I therefore suspect that a different standard state was used in their calculation, or free energies for different (chemical) processes are being reported.
It is worth adding that the value $\Delta_{solv} G^o = 0.24 kcal/mol$ is encountered in other articles that also reference the 75th Edition of the CRC Handbook.
References
[1] The CRC Handbook of Chemistry and Physics, 85th Edition, section 8-87, SOLUBILITY OF SELECTED GASES IN WATER, L. H. Gevantman.
[2] R. Crovetto, Evaluation of Solubility Data for the System CO2-H2O, J. Phys. Chem. Ref. Data, 20, 575, 1991.
[3] R. Sander: Compilation of Henry's law constants (version 4.0) for water as solvent, Atmos. Chem. Phys., 15, 4399-4981 (2015).
[4] N. Prasetyo and T.S. Hofer J. Chem. Theory Comput. 2018, 14, 12, 6472-6483 | {
"domain": "chemistry.stackexchange",
"id": 11360,
"tags": "thermodynamics, free-energy"
} |
DPD weight function | Question: I was wondering about the connection between the weight function of the random force and the conservative force between DPD particles in a standard DPD simulation. Both usually have the form [Groot and Warren 1997]
$$
w^R(r) = F^C(r)/a = (1 - r)\quad \text{for } r<1 \text{, else }0
$$
I know that one can in principle choose a different weight function as long as it fulfils the fluctuation dissipation relation $(w^R)^2 = w^D$ but to my knowledge the above weight function is almost always chosen.
What is the reason that the weight function is chosen to be of the same form as the conservative force in most DPD simulations? What could be a reason to choose another weight function?
Sources would be greatly appreciated.
Answer: The weight functions should be continuous and vary from a maximum value at $r=0$ to zero at the cut-off. The linear function is the simplest and computationally least expensive function that satisfies this requirement. That's why it's used.
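A minimal sketch of this standard linear weight, together with the dissipative weight fixed by the fluctuation-dissipation relation $(w^R)^2 = w^D$, in reduced units with cutoff $r_c = 1$:

```python
def w_R(r, r_c=1.0):
    """Standard linear random-force weight: (1 - r/r_c) for r < r_c, else 0."""
    return 1.0 - r / r_c if r < r_c else 0.0

def w_D(r, r_c=1.0):
    """Dissipative weight fixed by fluctuation-dissipation: w_D = (w_R)**2."""
    return w_R(r, r_c) ** 2

print(w_R(0.5), w_D(0.5))  # 0.5 0.25
print(w_R(1.2), w_D(1.2))  # 0.0 0.0 (beyond the cutoff)
```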
Note, however, that non-linear functions are occasionally used, e.g.
Yaghoubi, S., et al. "New modified weight function for the dissipative force in the DPD method to increase the Schmidt number." EPL (Europhysics Letters) 110.2 (2015): 24002. (link)
By the way, it may be useful for future reference to know that one of the inventors of DPD is an active member on this site: Johannes. | {
"domain": "physics.stackexchange",
"id": 32142,
"tags": "fluid-dynamics, simulations, weight, dissipation, fluctuation-dissipation"
} |
How should I calculate uncertainty of measurement calculated as average of two measurements | Question: I am measuring force with a two-channel transducer. Both channels of this transducer have been calibrated separately, and I can calculate the uncertainty of measurement for each of them. However, I want to take the average of both measurements as my final measured value - I think this should give me a better value. How should I calculate the uncertainty of that measurement?
Answer: In the absence of other knowledge, the best you can do is average the readings and claim the uncertainty is $\frac{1}{\surd2}$ times the quoted accuracy of each probe. This is based on the assumption (hope) that the two measurements are uncorrelated, so you're essentially performing a vector sum of orthogonal uncertainties.
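A sketch of that arithmetic, assuming the two channel uncertainties are uncorrelated standard uncertainties (the numeric value of sigma below is hypothetical):

```python
import math

def mean_uncertainty(sigma1, sigma2):
    """Standard uncertainty of (x1 + x2) / 2 for uncorrelated x1, x2."""
    return math.sqrt(sigma1**2 + sigma2**2) / 2.0

# Equal channel accuracies: the mean is better by a factor of 1/sqrt(2).
sigma = 0.5  # hypothetical quoted accuracy of each channel
print(mean_uncertainty(sigma, sigma))  # sigma / sqrt(2), about 0.354
```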
However, you need to be very cautious about this sort of thing. For example, if both probes are plugged into the same receiver (box, USB port, whatever), there may well be systemic bias which is not zero-mean. Both probes could have the same bias, in other words. | {
"domain": "physics.stackexchange",
"id": 17271,
"tags": "error-analysis, metrology"
} |
How do lipid-soluble substances diffuse through the cell membrane? | Question: It’s said that water-soluble substances diffuse through the cell membrane with less ease than lipid-soluble substances because the former encounter impedance in the hydrophobic region of the phospholipid bilayer. However, does the same logic not apply for hydrophobic substances, in that they cannot traverse the hydrophilic outer/inner surfaces of the plasma membrane? How would they be able to enter or leave the cell with greater ease?
Answer: See this paragraph and image from The Cell: A Molecular Approach. 2nd edition.:
During passive diffusion, a molecule simply dissolves in the phospholipid bilayer, diffuses across it, and then dissolves in the aqueous solution at the other side of the membrane...Passive diffusion is thus a nonselective process by which any molecule able to dissolve in the phospholipid bilayer is able to cross the plasma membrane and equilibrate between the inside and outside of the cell. Importantly, only small, relatively hydrophobic molecules are able to diffuse across a phospholipid bilayer at significant rates (Figure 12.15). Thus, gases (such as O2 and CO2), hydrophobic molecules (such as benzene), and small polar but uncharged molecules (such as H2O and ethanol) are able to diffuse across the plasma membrane. Other biological molecules, however, are unable to dissolve in the hydrophobic interior of the phospholipid bilayer. Consequently, larger uncharged polar molecules such as glucose are unable to cross the plasma membrane by passive diffusion, as are charged molecules of any size (including small ions such as H+, Na+, K+, and Cl-). The passage of these molecules across the membrane instead requires the activity of specific transport and channel proteins, which therefore control the traffic of most biological molecules into and out of the cell.
As shown in the above paragraph, molecules have to dissolve in the phospholipid bilayer to diffuse through it. Since the hydrophilic part is already interacting with water, dissolving mainly concerns the hydrophobic part. Because the hydrophobic part is non-polar, it can dissolve only non-polar substances (like dissolves like). Thus, small gases like O2 and CO2 and non-polar molecules like benzene easily dissolve in the hydrophobic part and diffuse through it. Molecules like water and ethanol can also dissolve in it in small amounts due to their small size and low polarity. That's why only some water molecules can diffuse passively; moving a large amount of water requires aquaporins. Bigger molecules like glucose have a large size and high polarity, so they cannot dissolve in the membrane and thus require transmembrane channels even for diffusion. The case is similar for ionic substances: they require ion channels for their diffusion.
Hope this helps. Comment below if you have any further doubts. | {
"domain": "biology.stackexchange",
"id": 5440,
"tags": "biochemistry, cell-biology, cell-membrane, membrane-transport, plasma-membrane"
} |
Importing model to Gazebo 11 all links spawned at origin | Question:
Hello, I have made a custom Gazebo model and was able to rig up all the links, joints and plugins in the model editor and save the model. However, every time I restart Gazebo and insert my saved model, all the links are spawned at the origin and not at the correct respective poses defined in the SDF file. (See image below).
Once I edit the model, all the links move to the correct positions, but I need to save the model again and exit the model editor for the links to stay in the right position. If I do not save the model before exiting the model editor, the links stay at the origin after exiting. I am using Gazebo Classic 11 on Ubuntu 20.04. I have tried changing the SDF versions, and also tried using the relative_to attribute for the poses of the links, but to no avail. Any help is appreciated. Thank you very much in advance.
Originally posted by airoverr on Gazebo Answers with karma: 3 on 2023-05-30
Post score: 0
Answer:
I assume the model is not static. If you start the world in paused mode, or add the model while the world is paused, are the links in the model in the correct poses? Then, when played, do all the links go to the origin? If yes, that may mean the physics parameters in your model are not correct, causing an inf or NaN in one of the links' poses. This can be caused by zero mass or inertia, for example, but there are more possibilities.
Originally posted by Veerachart with karma: 226 on 2023-05-31
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 4707,
"tags": "gazebo-11"
} |
up down sampling and low-pass filters | Question: I am trying to figure out what seems like a very trivial thing about downsampling and upsampling. The problem I am facing is how to find the effect of upsampling a signal followed by a filter, and/or a filter followed by downsampling. Here is a related problem from the Oppenheim and Schafer book.
I am trying to see what happens to x[n] after it passes through hlp[n], considering that hlp[n] has wc = pi/4. Will there be a gain change, or is only the frequency content affected?
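(As an aside, the effect of the expander alone can be checked numerically. This sketch, not from the book, upsamples a tone by 2 via zero insertion and shows its frequency moving from w0 to w0/2, plus an image at pi - w0/2 that the lowpass must then remove.)

```python
import numpy as np

N = 256
k0 = 64                                   # tone at w0 = 2*pi*64/256 = pi/2
x = np.cos(2 * np.pi * k0 * np.arange(N) / N)

xu = np.zeros(2 * N)                      # upsample by 2: insert zeros
xu[::2] = x

Xu = np.abs(np.fft.rfft(xu))              # bin k corresponds to w = 2*pi*k/(2*N)
peaks = sorted(np.argsort(Xu)[-2:].tolist())
print(peaks)  # [64, 192]: w = pi/4 (the tone) and 3*pi/4 (its image)
```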
Please help, Thanks in advance
Answer: I think this will act like a low-pass filter with a $\pi/2$ cutoff frequency instead of $\pi/4$. | {
"domain": "dsp.stackexchange",
"id": 2278,
"tags": "lowpass-filter, downsampling, decimation"
} |
A simple Q and A quiz that reinforces Ruby learning | Question: I wrote a simple quiz which I am hoping to scale-up as I continue to learn about programming.
The code now stores questions and answers in separate arrays.
Please review this code to improve its efficacy.
questions = []
answers = []
q0 = "What is the command for a 'for' loop?"
a0 = "for"
q1 = "What is another name for a method?"
a1 = "function"
q2 = "what are the three types of loops that I know of?"
a2 = "while, for, and until"
questions.push(q0, q1, q2)
answers.push(a0, a1, a2)
question_counter = 0
index = 0
while true
puts questions[index]
question_counter += 1
answer_attempt = gets.chomp.downcase
if answer_attempt == answers[index]
puts "Good job. Next question.."
else
puts "Incorrect. Next question"
end
index = index + 1
if question_counter == 3
puts "all done"
break
end
end
Answer: Store related data in a Hash
Also, looping like you've done is very uncommon in Ruby. You'll see Array#each (or Hash#each in this case) much more often than storing an index in a variable while iterating through indexes.
@success_msg = 'Good job. Next question..'
@incorrect_msg = 'Incorrect. Next question..'
{
"What is the command for a 'for' loop?" => 'for',
'What is another name for a method?' => 'function',
'What are the three types of loops that I know of?' => 'while, for, and until'
}.each do |q, a|
puts q
puts(gets.chomp.downcase == a ? @success_msg : @incorrect_msg)
end
puts 'All done.'
Also, I'd try to make this multiple choice or something if you're really trying to use it; there's no way the response is going to match those answers exactly. Perhaps you could look for certain terms in the response, but parsing English for semantic meaning isn't very easy hehe.
"domain": "codereview.stackexchange",
"id": 27692,
"tags": "beginner, ruby, quiz"
} |
Simple PDE as a theory of everything? | Question: For the sake of simplicity, I’d like to believe that there is one master non-linear partial differential equation
governing physics. In particular, consider a Klein-Gordon form:
$$
\frac{\partial^2 f}{\partial t^2}=v^2 \left (\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}+\frac{\partial^2 f}{\partial z ^2} \right ) - m^2f
$$
The state function $f$ (with $\partial f/\partial t$) here is naturally a function of space $xyz$ and time $t$. But, let $v$ and $m$ depend on the local $f$ and
$\partial f/\partial t$ (so that the equation is non-linear and probably impossible to solve analytically). It seems $f$ needs to be a two-number vector (to allow two light polarizations, or two electron spins), but I'd like to leave that unspecified for now.
Then, is there an argument why nature can't have this simple non-linear form (where quarks and other standard model elements appear as something like solitons)? I don't understand why most physicists are focusing on more-complicated 12D string theories.
(To simplify even further, it would be nice if $m=0$, but I believe Einstein and others sought such an "electromagnetic-only theory for the electron" without success, even after adding two more dimensions! If there's no general argument for my question above, I wonder if there is a specific argument for why nature can't have the $m$=0 form.)
Answer: The idea that nature is described by a nonlinear system of equations was the idea that Einstein had in the 1920s, and motivated his search for a unified field theory. It doesn't work, and it's philosophically less worthwhile than current theories anyway, so even if it did work, it wouldn't be simpler than string theory, or as elegant.
The idea that you can describe what's going on with local equations is false, as is demonstrated conclusively by Bell's inequality violations. The Bell inequality tells you that you can send electrons to far-away locations with spins that can be measured in 3 directions, A,B,C. The spin of the two electrons in each direction are 100% correlated (it's actually anti-correlated, but same difference for the argument), so if you measure the spin in direction A, and one electron is up the other is 100% certain to be up. Same for direction B and C, the two electrons always report the same spin in any of the three directions.
The spin in directions A and B are 99% correlated, meaning if you measure A on one of the electrons is up, then B on the other electron is up 99% of the time, and B is down 1% of the time. The spin in directions B and C are 99% correlated, so if you measure B is down on one electron, C is up on the other electron 1% of the time.
From the 100% correlation of the electrons, you conclude that the nonlocal field state (hidden variable) on one electron has the property that
A and B are 99% the same, 1% different
B and C are 99% the same, 1% different
From this you deduce that
A and C must be at least 98% the same
meaning that whatever field configuration is happening to make A, the field configuration for C can only give different results 2% of the time, the sum of those times when it gives different answers than B plus the times B gives different answers than A.
This bound is called Bell's inequality, and it is violated by quantum mechanics: A and C are different about 4% of the time, twice what the inequality allows.
This means any type of local-in-space description, linear, nonlinear, complicated, simple, whatever, will never ever work to describe nature. Your description is either nonlocal in the sense of faster-than-light communication, or nonlocal in the sense of having a global notion of state which is entangled nonlocally by measurements. This is why nobody looks for nonlinear field equations to describe nature anymore. It can't possibly work.
But the main ideas of Einstein's nonlinear field theories have survived to inspire developments in later physics.
The pions are excitations of a sigma-model, which is a type of nonlinear field theory. They are small oscillations of the quark condensate in the vacuum.
The proton can be thought of as the topological soliton of the sigma model. In quantum mechanics, it can still be a fermion even though the sigma model has no fundamental fermionic variables.
The field equations of 11-dimensional supergravity, which are a central part of string theory, generalize General Relativity in pretty much the only nontrivial ways known--- they give the biggest extension of spacetime symmetry possible, and they include a new field, constrained by the supersymmetry.
So these ideas are not a dead end, but they cannot work without quantum mechanics by themselves. If you want to understand quantum behavior emerging from some sort of nonlinear dynamics underneath, this dynamics can't be local. | {
"domain": "physics.stackexchange",
"id": 4154,
"tags": "waves, nuclear-physics, klein-gordon-equation, theory-of-everything, grand-unification"
} |
Applications of Complete Binary Tree? | Question: Wondering what the real-world applications are of complete binary trees or almost complete binary trees, where the last level of the tree may not be complete and all nodes in the last level are aligned to the left.
Answer: That would be the binary heap implementation of a priority queue.
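For an array-backed binary heap, the left-aligned, almost-complete shape is exactly what makes the layout gap-free, so parent and child positions can be computed from indices alone; a minimal sketch (0-based indexing):

```python
def parent(i):
    """Index of the parent of node i in a 0-based array heap."""
    return (i - 1) // 2

def left(i):
    """Index of the left child of node i."""
    return 2 * i + 1

def right(i):
    """Index of the right child of node i."""
    return 2 * i + 2

# Example: a valid min-heap laid out in an array, no pointers needed.
heap = [1, 3, 2, 7, 4]
print(heap[left(0)], heap[right(0)], heap[parent(4)])  # 3 2 3
```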
The tree itself would be stored in an array, where access to children and parent is via computation of the position of the other nodes. | {
"domain": "cs.stackexchange",
"id": 18830,
"tags": "data-structures, trees, binary-trees"
} |
How is the size of a gene defined? | Question: Is there an agreed definition as to how many nucleic acid bases constitute a gene?
If not, why not? I'm not sure I understand how the exact sizes of genes are defined.
Answer:
Is there an agreed-upon definition as to how many nucleobases constitute a gene?
If not, why not?
There is no such definition. A gene is a region of the DNA that is transcribed. Typically a gene should have a transcription start site dictated by a promoter and a transcription stop site marked by termination signals (like terminators and poly-A signal etc.)
There are some little RNAs (~18nt) that are produced from TSS of usual genes but are probably products of failed elongation. These are not really considered genes as they are heterogeneous in size and are not marked by any boundary.
There may technically be a minimum cutoff on gene length which could be the length of DNA necessary for the RNA-polymerase to sit and also include the termination signals. As indicated in the comments, the smallest gene may be the tRNA. However, the smallest annotated gene from the GENCODE annotations is TRDD1 (just 7nt long!!!). This is not based on gene prediction; it is manually annotated by the HAVANA team.
What is the average length of a gene?
I just did a rough calculation from the GENCODE human genome annotation file (version 23).
The average transcript length seems to be around: 1.5kb
The average gene length seems to be around: 29kbp
The genes would be longer than (or equal to) their corresponding transcripts because the latter gets shortened due to splicing.
I made a histogram plot of these lengths for convenience:
Transcript length distribution
Gene length distribution
Note the sharp peaks at 100bp. Quite interesting!
Remi and user19099 have mentioned that the longest gene in humans is titin. It seems that it is the longest gene in many other diverse animals. See What's the longest transcript known? for more details.
Methodology (so that limitations can be identified)
To calculate gene length distribution: I parsed the GTF file for "genes" (third field i.e. feature) and subtracted the fifth field (stop) from fourth (start).
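A sketch of that gene-length pass over GTF lines (the demo records below are hypothetical minimal lines; real GENCODE entries carry many more attributes). It uses the inclusive length stop - start + 1; the rough calculation described above dropped the +1, which is negligible at these scales:

```python
def gene_lengths(gtf_lines):
    """Length (stop - start + 1) of every 'gene' feature in GTF lines."""
    lengths = []
    for line in gtf_lines:
        if line.startswith('#'):
            continue                      # skip header/comment lines
        fields = line.rstrip('\n').split('\t')
        if len(fields) >= 5 and fields[2] == 'gene':
            lengths.append(int(fields[4]) - int(fields[3]) + 1)
    return lengths

demo = [
    '##description: toy example',
    'chr1\tHAVANA\tgene\t100\t199\t.\t+\t.\tgene_id "G1";',
    'chr1\tHAVANA\ttranscript\t100\t150\t.\t+\t.\tgene_id "G1";',
]
print(gene_lengths(demo))  # [100]
```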
To calculate transcript length distribution: Got the transcript fasta file from the annotated locations. Calculated their lengths. Plotted the distribution. | {
"domain": "biology.stackexchange",
"id": 5706,
"tags": "genetics, dna, genomics"
} |
What are the differences between the Hilbert space and real space representations in chemistry? | Question: I would like to understand the difference between Hilbert space and real space in a molecular code.
My understanding is that the Hilbert space of a system (such as a molecule) can be represented by a set of atomic orbitals (AOs) while the real space represents a system through a grid of points in the specific space of a system. Is this true?
For a more concrete example, consider electronic excitations. Using the entire (orbital) space of a reference system, one can project out (excite) an electron to the virtual space of a reference ground state. However, I'm not sure how this is handled in a "real space" program, or what that even means.
Answer: A Hilbert space is a kind of linear vector space.
In chemistry we encounter it in quantum mechanics, where we can represent wavefunctions by their contributions from different orthonormal single-particle states. It is these single-particle states which we build up out of our atomic orbitals (AOs).
Example:
Consider a two-level system - minimal basis $\ce{H2}$:
Each hydrogen atom has a $1s$ AO: $\phi_1(\textbf{r})$, $\phi_2(\textbf{r})$. These AOs aren't orthogonal though:
$$
S = \iiint d^3\textbf{r}\ \phi_1^*(\textbf{r}) \phi_2(\textbf{r}) \ne 0
$$but, writing this overlap as $S$, we can construct orthonormal states:
$$
\psi_\pm(\textbf{r}) = \frac{1}{\sqrt{2(1\pm S)}}\left(\phi_1(\textbf{r})\pm\phi_2(\textbf{r})\right)
$$
such that each function integrates to $1$ with itself and integrals of unlike functions vanish:
$$
\iiint d^3\textbf{r}\ \psi_\pm^*(\textbf{r}) \psi_\mp(\textbf{r}) = 0 \\
\iiint d^3\textbf{r}\ \psi_\pm^*(\textbf{r}) \psi_\pm(\textbf{r}) = 1
$$.
These two orthonormal functions can be represented as state vectors:
$$
\psi_+(\textbf{r}) = \pmatrix{1\\0},\; \psi_-(\textbf{r}) = \pmatrix{0\\1}
$$
This defines a Hilbert or vector space, where our single particle states $\psi_\pm$ are our vectors and the inner product of two vectors, $\psi_i$ and $\psi_j$, is defined by the integral over all space:
$$
\langle\psi_i\vert\psi_j\rangle = \iiint d^3\textbf{r}\ \psi_i^*(\textbf{r}) \psi_j(\textbf{r})
$$
and is equivalent to the dot product of two state vectors.
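The same inner product can also be evaluated "in real space" on a grid; here is a sketch using two 1D Gaussians as stand-ins for the AOs (my own illustration, with the combinations normalized including the overlap $S$):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)   # real-space grid
dx = x[1] - x[0]

def inner(f, g):
    """Grid approximation to the integral of f*g over all space."""
    return float(np.sum(f * g) * dx)

phi1 = np.exp(-((x + 0.7) ** 2))     # Gaussian "AO" on atom 1
phi2 = np.exp(-((x - 0.7) ** 2))     # Gaussian "AO" on atom 2
phi1 /= np.sqrt(inner(phi1, phi1))
phi2 /= np.sqrt(inner(phi2, phi2))

S = inner(phi1, phi2)                         # nonzero overlap
psi_p = (phi1 + phi2) / np.sqrt(2 * (1 + S))  # orthonormal combinations
psi_m = (phi1 - phi2) / np.sqrt(2 * (1 - S))

print(abs(inner(psi_p, psi_m)) < 1e-8)        # orthogonal
print(abs(inner(psi_p, psi_p) - 1) < 1e-8)    # normalized
```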
In an electronic structure (specifically DFT) code, a "real space code" means using atomic orbitals as basis functions rather than plane waves.
The Hilbert space described above is complete - it describes all possible wavefunctions which can be made from the AOs included. If you included the infinite number of hydrogenic wavefunctions on each atom, you would be able to have an exact description of all possible wavefunctions for $\ce{H_2}$, as the hydrogenic wavefunctions form a complete set; in practice you include as many as practical.
An alternative complete set of functions is the infinite set of all plane waves of all frequencies. In cases where you are performing calculations on a periodic system, such as a solid, plane waves can be a more natural choice as they themselves are also periodic. In a box of width $L$, the set of waves with wavelengths:
$$\lambda = \frac{2L}{n}$$
for all integral n, form a complete set. They are more commonly described by their wavevectors, $\textbf{k}$, a vector in what is called reciprocal space, which describes the momentum of the wave.
In DFT, when calculating integrals over density, in a plane wave code you manipulate the density in terms of wavevectors, $\textbf{k}$. In a real space code, you compose your density from Kohn-Sham orbitals in real space and calculate integrals of the density over quadrature grids in real (Euclidean) space.
Hence "real space" is used as a descriptor for DFT codes that work with AOs and basis functions defined on atoms in Euclidean space, in contrast to "plane wave" codes, which use basis functions defined in reciprocal space. | {
"domain": "chemistry.stackexchange",
"id": 8995,
"tags": "quantum-chemistry, computational-chemistry, theoretical-chemistry, terminology"
} |