| anchor | positive | source |
|---|---|---|
Could someone please help me understand why the normal forces are acting downward here? | Question:
The question said that 'the floor of the lift exerts forces of magnitude 678N and 452N respectively on Albert and Bella'. This is the normal reaction force. But why are they taken as downwards? Shouldn't they be upwards? Doesn't the force on the person by the lift always act upwards?
Answer: The diagram shows the forces on the lift.
the floor of the lift exerts (upward) forces of magnitude 678N and 452N respectively on Albert and Bella
These two upwards forces on Albert and Bella accelerate them upwards.
Applying Newton’s third law:
Albert and Bella exert downward forces of magnitude 678N and 452N respectively on the floor of the lift
Update as a result of a comment
Here are the free body diagrams for the lift and its passengers. | {
"domain": "physics.stackexchange",
"id": 43613,
"tags": "homework-and-exercises, newtonian-mechanics, forces, newtonian-gravity, acceleration"
} |
Comoving Hubble radius in terms of expansion scale factor | Question: I am currently reading Baumann's Cambridge lecture slides on cosmology and am confused on the notion of the Hubble radius. On page 10 it is stated that for a perfect fluid with constant equation of state $w = P/\rho$, the comoving Hubble radius is $(aH)^{-1} \propto a^{0.5(1+3w)}$, but I haven't the faintest idea where this equation came from, as no derivation is given. Can anyone provide one?
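A sketch of the standard derivation (flat universe, continuity and Friedmann equations):
$$\dot\rho + 3H(\rho+P) = 0 \;\Rightarrow\; \rho \propto a^{-3(1+w)}, \qquad H^2 = \frac{\rho}{3M_P^2} \;\Rightarrow\; H \propto a^{-\frac{3}{2}(1+w)},$$
hence
$$(aH)^{-1} \propto a^{-1+\frac{3}{2}(1+w)} = a^{\frac{1}{2}(1+3w)}.$$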
Answer: In Planck units, $M_P^2=(8\pi G)^{-1}=1$. | {
"domain": "physics.stackexchange",
"id": 78841,
"tags": "cosmology, space-expansion, cosmological-inflation, space"
} |
Why would a reaction favor one isotope over another? | Question: In my instrumentation class last semester, we were asked to read a short paper. The paper described a technique for determining whether dinosaurs were cold-blooded or warm-blooded based on the amount of a certain isotope in a sample of dinosaur bone.
What would cause a reaction to favor one isotope over another? In my general chemistry course, I learned that isotopes act similarly in chemical reactions. Of course, I have found over the years that pretty much everything taught in general chemistry consists of approximations, probably so that students don't feel inundated.
Answer: Chemically speaking, compounds containing different isotopes are very similar but not identical
Broadly when compounds contain different isotopes of the same element (say different isotopes of oxygen) their reactivity is the same. But this is a little over broad. There are subtle and (usually) small differences and these can show up as small differences in chemical reactivity.
To get an idea why consider the simple physical properties of water and heavy water. In heavy water the two hydrogens are replaced by deuterium (hydrogen with an extra neutron in the nucleus). A heavy water molecule is about 10% heavier than a normal water molecule (molecular mass ~20 rather than ~18). As a result the physical properties are a little different: bp is about 2°C higher, mp about 4°C higher and the liquid is about 10% denser. So careful techniques can separate the two (by repeated fractional distillation, for example).
The slight differences also appear in chemical reactions. The reasons why are more complex than the explanation of simple physical effects. But one hint that they exist is simply that the vibrational frequency of a bond depends on the mass of the atoms making up the bond. Isotopes have different masses so there will be some difference with different isotopes. And bonds vibrating are part of the mechanism of reactions involving that bond so the reactions will have different rates. The detailed theory is pretty complex (see this) and the size of the effect depends on the detailed mechanism of the reaction.
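To get a feel for the size of the frequency shift, here is a quick numeric sketch (harmonic-oscillator model with the same force constant assumed for both isotopes; the masses are standard atomic masses):

```python
import math

# Reduced mass mu = m1*m2/(m1+m2), in atomic mass units
def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

mu_CH = reduced_mass(12.000, 1.008)   # carbon-12 / hydrogen-1
mu_CD = reduced_mass(12.000, 2.014)   # carbon-12 / deuterium

# Harmonic oscillator: frequency ~ sqrt(k/mu) with the same k,
# so the C-H stretch vibrates faster than C-D by sqrt(mu_CD/mu_CH)
ratio = math.sqrt(mu_CD / mu_CH)
print(round(ratio, 2))  # 1.36
```

This roughly 1.4-fold difference in vibrational frequency is what feeds the unusually large kinetic isotope effects seen for hydrogen versus deuterium.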
Reaction rates for some reactions involving bonds to hydrogen can differ by large factors (because deuterium is twice as heavy as hydrogen) and are sometimes 10-fold. Most other isotopes have only small relative mass differences compared to their parent element, and the net rate differences are often only small single-digit percentages. This results in some reactions concentrating some isotopes over others (e.g. 18O vs 16O), but these differences can be large enough to detect by mass spectrometry.
So, despite their chemical similarity, some reactions and some physical processes will favour one isotope over another (though often only very slightly). If you understand the key reaction mechanism, the small differences in the concentrations can tell something about the conditions of the reaction (temperature dependence, for example). | {
"domain": "chemistry.stackexchange",
"id": 11521,
"tags": "reaction-mechanism, isotope"
} |
Regarding calculation with rigorous steps | Question: I was studying a book called Theoretical Astrophysics by Prof. T. Padmanabhan, and in the very first chapter, I find the expression for the net contributing pressure ($P$) as:
$$P = \frac{1}{3} \int_{0}^{\infty}n(\epsilon)p(\epsilon)v(\epsilon)d\epsilon$$
followed by the relativistic momentum $p = \gamma mv$ and kinetic energy $\epsilon = (\gamma-1)mc^2$, with $\gamma$ as the relativistic factor. Up to this point I understood it, but then the substitution of these values into the above expression, as:
$$P = \frac{1}{3} \int_{0}^{\infty}n\epsilon\left(1+\frac{2mc^2}{\epsilon}\right)\left(1+\frac{mc^2}{\epsilon}\right)^{-1}d\epsilon$$
is not clear to me. Can anyone please help me out with the proper steps to arrive at this expression? Thanks in advance!
Answer: Thanks to @robphy for providing the answer, and with some algebraic manipulation, I got this:
$$P = \frac{1}{3}\int_{0}^{\infty}np(\epsilon)v(\epsilon)d\epsilon$$
$$= \frac{1}{3}\int_{0}^{\infty}np\frac{p}{\epsilon+m}d\epsilon$$
As, $v(\epsilon) = \frac{p}{E} = \frac{\sqrt{\epsilon(\epsilon+2m)}}{\epsilon+m}$
$$= \frac{1}{3}\int_{0}^{\infty}n\frac{p^2}{\epsilon+m}d\epsilon$$
$$= \frac{1}{3}\int_{0}^{\infty}n\frac{\epsilon(\epsilon+2m)}{\epsilon+m}d\epsilon$$
As, $p=\sqrt{\epsilon(\epsilon+2m)}$
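Both substitutions follow from the energy-momentum relation; in short (with $c=1$ and total energy $E=\epsilon+m$):
$$p^2 = E^2 - m^2 = (\epsilon+m)^2 - m^2 = \epsilon(\epsilon+2m), \qquad v = \frac{p}{E} = \frac{\sqrt{\epsilon(\epsilon+2m)}}{\epsilon+m}.$$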
Up to this point, everything was normalized with $c=1$; restoring $c$, we get:
$$P = \frac{1}{3}\int_{0}^{\infty}n\frac{\epsilon(\epsilon+2mc^2)}{\epsilon+mc^2}d\epsilon$$
$$= \frac{1}{3}\int_{0}^{\infty}n\epsilon(1+\frac{2mc^2}{\epsilon})(\frac{\epsilon}{{\epsilon+mc^2}})d\epsilon$$
$$= \frac{1}{3}\int_{0}^{\infty}n\epsilon(1+\frac{2mc^2}{\epsilon})(\frac{1}{1+\frac{mc^2}{\epsilon}})d\epsilon$$
$$\therefore P = \frac{1}{3}\int_{0}^{\infty}n\epsilon(1+\frac{2mc^2}{\epsilon})(1+\frac{mc^2}{\epsilon})^{-1}d\epsilon$$ | {
"domain": "physics.stackexchange",
"id": 69122,
"tags": "thermodynamics, special-relativity"
} |
Schrödinger-Propagator for combined linear and harmonic potential | Question: Given the Hamiltonian
\begin{equation}
H = \frac{p^2}{2m} + V(x)
\end{equation}
The propagator for a pure harmonic potential of the form
\begin{equation}
V(x) = \frac{1}{2} m \omega^2 x^2
\end{equation}
is given in the wikipedia article about propagators https://en.wikipedia.org/wiki/Propagator.
My Question is: What is the propagator if the potential also contains a linear part?
For example:
\begin{equation}
V(x) = u_1 x + \frac{1}{2} m \omega^2 x^2.
\end{equation}
Answer: You can rewrite that as
$$V(x) = \frac{1}{2}m\omega^2x^2 + u_1x = \left(\sqrt\frac{m\omega^2}{2}x+\frac{u_1}{2}\sqrt\frac{2}{m\omega^2}\right)^2 - \frac{u_1^2}{2m\omega^2}.$$
Then make a coordinate transformation to $\tilde{x} = \sqrt\frac{m\omega^2}{2}x+\frac{u_1}{2}\sqrt\frac{2}{m\omega^2}$ and write the momentum operator in this coordinate frame. All that changes are some prefactors.
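The completing-the-square step can be sanity-checked numerically; a minimal sketch with arbitrarily chosen constants:

```python
import math
import random

m, w, u1 = 2.0, 1.5, 0.7   # arbitrary positive constants

def V(x):
    # original potential: harmonic plus linear part
    return 0.5 * m * w**2 * x**2 + u1 * x

def V_completed(x):
    # completed square: shifted oscillator minus a constant offset
    return 0.5 * m * w**2 * (x + u1 / (m * w**2))**2 - u1**2 / (2 * m * w**2)

for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    assert math.isclose(V(x), V_completed(x), rel_tol=1e-12, abs_tol=1e-12)
print("completed square checks out")
```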
EDIT: Sorry, probably better if you write it like that
$$V(x) = \frac{m\omega^2}{2}\left(x^2+\frac{2u_1}{m\omega^2}x\right) = \frac{m\omega^2}{2}\left(x+\frac{u_1}{m\omega^2}\right)^2-\frac{u_1^2}{2m\omega^2}$$
and $\tilde{x} = x+\frac{u_1}{m\omega^2}$, such that
$$V(\tilde{x}) = \frac{m\omega^2}{2}\tilde{x}^2.$$ | {
"domain": "physics.stackexchange",
"id": 94571,
"tags": "homework-and-exercises, schroedinger-equation, potential-energy, harmonic-oscillator, propagator"
} |
Avoiding Duplicates in a Ruby on Rails Factory Girl Factory with Faker | Question: I have a Factory Girl factory that needs to generate a unique name using Faker. Unfortunately, Factory Girl generates duplicates relatively frequently, which causes intermittent test errors. I previously added a random integer after the Faker name, but I am now checking the database to make sure that a potential name from Faker doesn't already exist.
FactoryGirl.define do
factory :company do
name do
name = Faker::Company.name
name = Faker::Company.name while Company.exists?(name: name)
name
end
end
end
The above code works but seems inelegant. I'm assigning name in the first line of the block so that only one DB lookup (instead of two) will be necessary in the most common case where a Faker name is unique.
I end the block with name as otherwise the while statement returns nil when the loop ends.
Answer: The simplest thing to do is probably to use FactoryGirl's sequence:
FactoryGirl.define do
factory :company do
sequence(:name) { |n| "#{Faker::Company.name} #{n}" }
end
end
sequence basically gives you an auto-incrementing integer, so you can avoid uniqueness issues. Sure, it'll generate some slightly odd company names, but for testing that shouldn't matter much.
In general though, I don't feel too great a need for something like Faker. Something like this should work just as well:
FactoryGirl.define do
factory :company do
sequence(:name) { |n| "Acme Inc. #{n}" }
end
end
If it's only your specs looking at the data, it doesn't really matter if the company name's "Acme", "foobar 42", or "ACC8B1B7-7EC0-4F05-AB2A-B487C134F6BF".
Besides, your current solution is:
non-deterministic, so it may never exit the while loop if it just happens to keep picking the same couple of names again and again,
but even if we ignore that, adding more tests that use company records will mean more name collisions and longer time spent in the loop. I don't know how Faker works, but (unlikely, though technically possible) you might even end up exhausting every name Faker can come up with, and end up looping forever. | {
"domain": "codereview.stackexchange",
"id": 30544,
"tags": "ruby, ruby-on-rails, factory-method"
} |
How to read off the set represented by a van-Emde-Boas tree? | Question: I'm reviewing my background in Algorithms and DS design. Specifically, I never went through the van Emde Boas tree. Though I can understand the proto-vEB with the related picture, I'm struggling to understand the actual data structure. I have no clue how the "min" and "max" members are actually used; I'm not talking about the procedures but just how the following picture is supposed to be interpreted. Can anyone help me with this?
Answer: If you haven't done so, I suggest you read chapter 20 from the beginning. They develop the final data structure bit by bit, supposedly for didactic reasons.
In 20.3.1, they write:
min stores the minimum element in the vEB tree, and max stores the maximum element in the vEB tree. Furthermore, the element stored in min does not appear in any of the [subtrees].
What might confuse you is that the actual elements do not appear. As explained earlier in the chapter, the elements are encoded by position: the domain of the tree is $[0..15]$, and the subset is essentially encoded by a bit vector stored in the leaves of the tree. Figure 20.4 is annotated helpfully in this regard; note that the minimums are moved up to the min fields here, though.
In the full VEB tree, the encoding itself is a little tricky. Here goes:
Here is how you read the tree (ignore the dark summary nodes for this purpose):
u denotes the size of the domain of a node.
The root has domain [0..u-1].
The domain of a node cluster[i] is the i-th chunk of size cluster[i].u in the domain of its parent.
If they are not nil, min and max are indices into the domain of the node.
If min and max are nil, the node represents no elements.
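A minimal sketch of this read-off procedure in Python (the node layout here is a deliberately simplified stand-in, with just u, min, max, and the child clusters, ignoring the summary structure as suggested above):

```python
class Node:
    """Hypothetical simplified vEB node: domain size u, min/max as indices
    into that domain, and a list of child clusters (empty at the u = 2 base)."""
    def __init__(self, u, mn=None, mx=None, clusters=()):
        self.u, self.min, self.max, self.cluster = u, mn, mx, list(clusters)

def elements(node, base=0):
    # An empty node (min and max are nil) represents no elements.
    if node.min is None:
        return []
    out = [base + node.min]            # min is stored only at this node
    if node.max != node.min:
        if node.cluster:               # cluster i covers the i-th chunk of the domain
            for i, child in enumerate(node.cluster):
                out += elements(child, base + i * child.u)
        else:                          # u = 2 base case: max is stored directly
            out.append(base + node.max)
    return out

# The set {0, 3} in a vEB(4) tree: 0 sits in the root's min field only,
# while 3 is stored as index 1 of cluster[1] (which covers [2..3]).
root = Node(4, mn=0, mx=3,
            clusters=[Node(2), Node(2, mn=1, mx=1)])
print(sorted(elements(root)))  # [0, 3]
```

Here 0 is recovered from the root's min field alone, while 3 is reconstructed from its cluster offset, mirroring the examples below.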
Examples:
root.cluster[1].min is 0; it represents element 0 + 1*4 + 0 = 4.
root.cluster[0].cluster[1].min/max are nil; therefore 0 + 0*4 + 1*2 + 0/1 = 2/3 are both not in the node.
But 2 is in the set anyway, since it's represented by root.min!
root.cluster[3].min is 2; therefore, 0 + 3*4 + 2 = 14 is in the set.
root.cluster[3].cluster[1].min/max are 1; therefore 0 + 3*4 + 1*2 + 1 = 15 is in the set. | {
"domain": "cs.stackexchange",
"id": 10937,
"tags": "data-structures, trees, sets, integers, van-emde-boas-trees"
} |
Desilylation mechanism with fluoride | Question: I want to understand the mechanism of desilylation of trimethylsilyl under fluoride conditions in methanol, because I can't find any useful info that makes me understand the mechanism that takes place.
I did find this:
But my case is a little bit different because a terminal alkyne is bonded to the silyl group instead of oxygen:
Why would that bond be broken and not any other carbon-hydrogen bond?
Thanks in advance!
Answer: The mechanism is the same, with generation of the alkynyl anion, as the abstract of this paper (ref 1) makes clear:
Tetrabutylammonium triphenyldifluorosilicate (TBAT) can be employed as a fluoride source to cleave silicon−carbon bonds thus generating in situ carbanions that coupled with a variety of electrophiles, including aldehydes and ketones, in moderate to high yields. Among the examples reported is the first instance of fluoride-induced intermolecular coupling between allyltrimethylsilane and imine derivatives. Also, of particular note is the TBAT-initiated coupling of primary alkyl halides with allyltrimethylsilane. TBAT is an easily handled crystalline solid that has several advantages over tetrabutylammonium fluoride (TBAF) as a fluoride source; it is anhydrous, nonhygroscopic, soluble in most commonly used organic solvents, and less basic than TBAF.
Bear in mind that the $\mathrm{p}K_\mathrm{a}$ of an alkynyl proton is approximately 25 (ref 2) so this is a far more stable anion than any possible alternative. Clearly with $\ce{MeOH}$ as the solvent it will be quenched immediately. | {
"domain": "chemistry.stackexchange",
"id": 13661,
"tags": "organic-chemistry"
} |
Can maximum matching algorithms be used for maximum weight matching? | Question: There are two fast algorithms for maximum matching on general graphs:
Micali and Vazirani in $O(E\sqrt{V})$.
Mucha and Sankowski in $O(V^{2.376})$.
Can these be also used for maximum weighted matching on general graphs? Note that Edmonds' Blossom algorithm can be used to solve both problems.
Answer: Ran Duan and Seth Pettie survey maximum matching algorithms in their 2014 paper Linear-Time Approximation for Maximum Weight Matching. In particular, Table III in their paper (page 5) lists algorithms for maximum weight matching in general graphs. | {
"domain": "cs.stackexchange",
"id": 13295,
"tags": "algorithms, matching"
} |
Balancing by oxidation numbers method multiple atoms | Question: In my Chemistry textbook, the rules for balancing a chemical equation using the oxidation-number method are as follows:
Assign oxidation numbers to all the atoms in the equation.
Identify the atoms that are oxidized and the atoms that are reduced.
Determine the change in oxidation number for the atoms that are oxidized and for the atoms that are reduced.
Make the change in oxidation numbers equal in magnitude by adjusting coefficients in the equation.
If necessary, use the conventional method to balance the remainder of the equation.
I understand how to do this for simple equations, such as $$\ce{SnCl4 + Fe -> SnCl2 + FeCl3}$$
Following the steps:
$\ce{Sn^{4+} + 4Cl^- + Fe^0 -> Sn^{2+} + 2Cl^- + Fe^{3+} + 3Cl^-}$
Oxidation: $\ce{Fe^0 -> Fe^{3+} (Fe)}$ reduction: $\ce{Sn^{4+} -> Sn^{+2} (Sn)}$
oxidation: +3; reduction -2
$\ce{3SnCl4 + 2Fe -> 3SnCl2 + 2FeCl3}$
(no change necessary, already balanced)
The Problem: Notice how Sn and Fe (the reduced and oxidized atoms) all have subscripts of 1. However, when their subscripts differ, I'm not sure how to balance it.
For example, I can easily balance the following equations by inspection, but not so easily by this method of oxidation numbers:
$$\ce{KClO3 -> KCl + O2}$$ (O is oxidized, subscripts of 3 and 2)
$$\ce{NH3 + NO2 -> N2 + H2O}$$ (N is oxidized and reduced, oxidation numbers of -3, +4, and 0)
And this gets even more complicated when the subscripts differ and there are more than two elements whose oxidation numbers changed (are reduced or oxidized), e.g.,
$$\ce{NH4ClO4 + Al -> Al2O3 + HCl + N2 + H2O}$$
(Al and N are oxidized, Cl is reduced)
For all of these, I get stuck on the fourth step. Determining oxidation numbers and calculating the change is not the problem, but figuring out how to do it with larger or different coefficients as in the examples above is the problem.
The five general steps in my textbook don't help me understand this method much for more complicated examples such as these. Could you give me a more thorough explanation, or a general rule, for how to solve all of these?
Answer: In some (many?) cases you get a more parsimonious description by considering an entire reactant as an oxidant or reductant. Separating out individual atoms makes you miss the forest because you get confused by the trees.
Take the ammonium perchlorate-aluminum reaction. Since aluminum is being oxidized let us render the ammonium perchlorate as the oxidizing agent, ignoring the fact that only some of its atoms are oxidizing. You then have a reaction:
$\ce{NH_4ClO_4}\rightarrow \ce{N^0}+4\ce{H^I}+\ce{Cl^{-I}}+4\ce{O^{-II}}$
where the Roman numerals indicate oxidation states in the products. These add up to $-5$ for all the products, whereas the ammonium perchlorate, as a neutral compound, started with the states adding up to zero. So five electrons must be added to the reactants:
$\ce{NH_4ClO_4}+5 e^-\rightarrow \ce{N^0}+4\ce{H^I}+\ce{Cl^{-I}}+4\ce{O^{-II}}$
Thus although some atoms might be oxidized, others reduced, and still others are just along for the ride, we identify a net oxidizing effect of the ammonium perchlorate as a whole, numerically five electrons per formula unit.
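The electron bookkeeping above can be written out mechanically; a short sketch using the oxidation states assigned in this answer:

```python
from math import gcd

# Product-side oxidation states per formula unit of NH4ClO4:
# one N(0), four H(+1), one Cl(-1), four O(-2)
product_state_sum = 0*1 + (+1)*4 + (-1)*1 + (-2)*4   # = -5

# The neutral reactant started at 0, so 5 electrons are gained per NH4ClO4
electrons_gained = -product_state_sum                 # 5
electrons_lost = 3                                    # Al(0) -> Al(+3)

# Equalize the electrons transferred: smallest integer coefficients
g = gcd(electrons_gained, electrons_lost)
print(electrons_lost // g, "NH4ClO4 :", electrons_gained // g, "Al")  # 3 NH4ClO4 : 5 Al
```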
Now we know that the redox stoichiometry will be $3\ce{NH_4ClO_4}+5\ce{Al}$ and we can balance the reaction accordingly. (We will need to double the coefficients to eliminate a fraction, that's just algebra.) | {
"domain": "chemistry.stackexchange",
"id": 9785,
"tags": "redox, stoichiometry"
} |
Merge Sort proof | Question: I am trying to prove that merge sort is indeed $O(n \log n)$.
I was able to extract a pattern using constants, however now I am stuck. This is as far as I can get:
$T(n) = 2T(n/2) + cn$
$T(n/2) = 2T(n/4) + c(n/2)$
Now plug in 1. into 2.
$T(n) = 2(2T(n/4) + c(n/2)) + cn$
$T(n) = 4T(n/4) + 2cn$
Now the pattern I was able to find is the following:
$2^kT(n/2^k) + kcn$
Is there a way to use this pattern to prove that merge sort has complexity of $O(n\log n)$?
Answer: You correctly figured out that after unrolling the recursive equation
$$T(n) = 2 \cdot T(n/2) + c n$$
$k$-times you get
$$T(n) = 2^k \cdot T(n/2^k) + k c n.$$
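The unrolled form can be sanity-checked against the recurrence for powers of two; an illustrative sketch with $c = d = 1$:

```python
import math

def T(n, c=1, d=1):
    # the recurrence itself: T(1) = d, T(n) = 2 T(n/2) + c n
    return d if n == 1 else 2 * T(n // 2, c, d) + c * n

for n in (2, 8, 64, 1024):
    k = int(math.log2(n))
    # unrolled: 2^k * T(n/2^k) + k c n, with n/2^k = 1 at the base case
    assert T(n) == 2**k * 1 + k * 1 * n
print("recurrence matches the unrolled formula")
```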
To finish your proof, ask yourself when the unrolling process will stop. The answer: when we reach the base case, which is $T(1) = d$ where $d$ is a constant. For what value of $k$ do we reach $T(1)$? For this we need to solve the equation
$$n/2^k = 1$$
whose solution is $k = \log_2 n$. So now we plug in $k = \log_2 n$:
\begin{align*}
T(n) &= 2^{\log_2 n} \cdot T(n/2^{\log_2 n}) + c \cdot n \cdot \log_2 n \\
&= n \cdot T(1) + c \cdot n \cdot \log_2 n \\
&= d \cdot n + c \cdot n \cdot \log_2 n \\
&\in O(n \cdot \log_2 n).
\end{align*} | {
"domain": "cs.stackexchange",
"id": 4014,
"tags": "asymptotics, recurrence-relation"
} |
Rooted Tree Isomorphism Algorithm | Question: I have developed an algorithm to determine if two rooted trees are isomorphic, which is based on the following conjecture:
Let $S_{u}$ be the number of vertices in the rooted subtree of vertex $u$.
Namely, the size of the subtree of $u$. Now let $L_{i}$ = {$S_{u}$ : $lvl(u)$ = $i$}.
Here, $lvl(v)$ denotes the level of $v$.
Also the height of a tree is the maximum level of any of its nodes.
Now the conjecture:
Let $H_{1}$ and $H_{2}$ be the heights of the rooted trees $T_{1}$ and $T_{2}$, respectively. $T_{1}$ and $T_{2}$ are isomorphic if and only if $H_{1} = H_{2}$
and for every integer $i \in [ \,1,H_{1}] \, $, the multiset $L_{i}$ of $T_{1}$ and that of $T_{2}$ are the same.
Apparently this conjecture is false, because I implemented a C++ program (to solve a competitive programming task) that is based on it, but it failed system tests. Still, it may be an implementation fault, so I'd like to know if there are any counterexamples to this conjecture.
Answer: Nice conjecture on tree isomorphism.
However, here is a counterexample.
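(The counterexample image does not survive in this excerpt.) A counterexample of this kind can also be constructed and verified programmatically; the pair below is one illustrative example, not necessarily the one pictured:

```python
from collections import Counter

leaf = ()              # a leaf node
chain2 = (leaf,)       # a node with a single leaf child (subtree size 2)

# T1: root -> [a, b], a -> two size-2 chains, b -> two leaves
T1 = ((chain2, chain2), (leaf, leaf))
# T2: root -> [a, b], a -> one size-2 chain and two leaves, b -> one size-2 chain
T2 = ((chain2, leaf, leaf), (chain2,))

def size(t):
    return 1 + sum(size(c) for c in t)

def level_multisets(t):
    # multiset of subtree sizes S_u at each level, as in the conjecture
    out, frontier, lvl = {}, [t], 0
    while frontier:
        out[lvl] = Counter(size(n) for n in frontier)
        frontier = [c for n in frontier for c in n]
        lvl += 1
    return out

def canon(t):
    # AHU canonical form: two rooted trees are isomorphic iff encodings match
    return tuple(sorted(canon(c) for c in t))

assert level_multisets(T1) == level_multisets(T2)  # equal heights and L_i everywhere
assert canon(T1) != canon(T2)                      # yet the trees are not isomorphic
```

Both trees have 9 vertices and height 3; the level multisets agree ({5,3} at level 1, {2,2,1,1} at level 2, {1,1} at level 3), but the size-5 subtree has two children in one tree and three in the other.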
Exercise. Construct a counterexample using a binary tree. | {
"domain": "cs.stackexchange",
"id": 13492,
"tags": "trees, graph-isomorphism"
} |
Serialize the properties of an entity framework entity to data fields and back | Question: I am trying to write code to convert properties of an Entity Framework entity to strings and back.
Here is the code so far that converts back from strings to the object properties.
I am stuck trying to figure out how to handle datetime. I am also wondering if there is a better approach.
private static FormatterConverter formatConverter;
public static FormatterConverter FormatConverter
{
get
{
if (formatConverter == null)
{
formatConverter = new FormatterConverter();
}
return formatConverter;
}
}
// ChangeValue is an entity framework entity.
static void DoSetValue(ChangeValue cv , PropertyInfo pi, object obj )
{
try
{
switch (pi.PropertyType.ToString())
{
case "System.TimeSpan":
case "System.Nullable`1[System.TimeSpan]":
var s = cv.Value;
if (s == "") s = "0";
var ticks = Convert.ToInt64(s);
var ts = new TimeSpan(ticks);
obj = ts;
break;
case "System.DateTime":
case "System.Nullable`1[System.DateTime]":
// code needed here
break;
case "System.Guid":
obj = new Guid(cv.Value);
break;
default:
pi.SetValue(obj, FormatConverter.Convert(cv.Value, pi.PropertyType), null);
break;
}
}
catch (Exception ex)
{
ex.Data.Add("", string.Format( "Error converting type for {0} {1} ",pi.Name ,pi.PropertyType.ToString()));
throw ex;
}
}
Answer: If you are trying to convert the string to DateTime format that might be used in the EF object, you may try:
Convert.ToDateTime("2013-09-10 12:12:12",System.Globalization.CultureInfo.InvariantCulture) | {
"domain": "codereview.stackexchange",
"id": 4747,
"tags": "c#, entity-framework, serialization"
} |
Code Fix - changing accessibility | Question: I am trying to get the hang of Roslyn at the moment, and to implement a couple of code fixes for specific fields/properties that should usually be readonly (in the case of a property, to have no setter).
I've started from the "Analyzer with Code Fix (NUGET + VSIX)" template and now have two diagnostics implemented in the DiagnosticAnalyzer class and two corresponding code fixes in the CodeFixProvider class. But in this second class, it all feels a bit "messy" - so I'm looking for feedback on what I can do to clean it up:
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CodeActions;
using Microsoft.CodeAnalysis.CodeFixes;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Editing;
using System.Collections.Immutable;
using System.Composition;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
namespace LiveVariables.Analyzers
{
[ExportCodeFixProvider(LanguageNames.CSharp, Name = nameof(LiveVariablesAnalyzersCodeFixProvider)), Shared]
public class LiveVariablesAnalyzersCodeFixProvider : CodeFixProvider
{
private const string readonlyAction = "Make readonly";
public sealed override ImmutableArray<string> FixableDiagnosticIds
{
get { return ImmutableArray.Create(LiveVariablesAnalyzersAnalyzer.LV1001DiagnosticId, LiveVariablesAnalyzersAnalyzer.LV1002DiagnosticId); }
}
public sealed override FixAllProvider GetFixAllProvider()
{
return WellKnownFixAllProviders.BatchFixer;
}
public sealed override async Task RegisterCodeFixesAsync(CodeFixContext context)
{
var root = await context.Document.GetSyntaxRootAsync(context.CancellationToken).ConfigureAwait(false);
var diagnostic = context.Diagnostics.First();
var diagnosticSpan = diagnostic.Location.SourceSpan;
if (diagnostic.Id == LiveVariablesAnalyzersAnalyzer.LV1001DiagnosticId)
{
// Find the field declaration identified by the diagnostic.
var declaration = root.FindToken(diagnosticSpan.Start).Parent.AncestorsAndSelf().OfType<FieldDeclarationSyntax>().First();
context.RegisterCodeFix(
CodeAction.Create(
title: readonlyAction,
createChangedDocument: c => MakeFieldReadOnly(context.Document, declaration, c),
equivalenceKey: readonlyAction),
diagnostic);
}
else
{
// Find the property declaration identified by the diagnostic.
var declaration = root.FindToken(diagnosticSpan.Start).Parent.AncestorsAndSelf().OfType<PropertyDeclarationSyntax>().First();
context.RegisterCodeFix(
CodeAction.Create(
title: readonlyAction,
createChangedDocument: c => MakePropertyReadOnly(context.Document, declaration, c),
equivalenceKey: readonlyAction),
diagnostic);
}
}
private async Task<Document> MakeFieldReadOnly(Document document, FieldDeclarationSyntax fieldDecl, CancellationToken cancellationToken)
{
var genny = SyntaxGenerator.GetGenerator(document);
var nFieldDecl = genny.WithModifiers(fieldDecl, genny.GetModifiers(fieldDecl) | DeclarationModifiers.ReadOnly);
var newSyntax = (await document.GetSyntaxRootAsync()).ReplaceNode(fieldDecl, nFieldDecl);
return document.WithSyntaxRoot(newSyntax);
}
private async Task<Document> MakePropertyReadOnly(Document document, PropertyDeclarationSyntax propDecl, CancellationToken cancellationToken)
{
var genny = SyntaxGenerator.GetGenerator(document);
var setter = genny.GetAccessor(propDecl, DeclarationKind.SetAccessor);
var newSyntax = (await document.GetSyntaxRootAsync()).RemoveNode(setter, SyntaxRemoveOptions.KeepNoTrivia);
return document.WithSyntaxRoot(newSyntax);
}
}
}
Specific things that feel messy to me:
Should I even be implementing multiple code fixes in a single CodeFixProvider? - the API definitely supports it but it feels it could easily get out of hand
I'm working out what diagnostic I'm responding to, inside RegisterCodeFixesAsync by doing a string comparison - is there a stronger-typed way of working this out?
Creating the SyntaxGenerator within the MakeFieldReadOnly and MakePropertyReadOnly feels a bit wrong somehow - is there a more natural way for me to make my changes?
And of course, anything else people would care to suggest will be interesting.
(I'm generally happy with the code within DiagnosticAnalyzer but can add it here also if people think that it's important to see all of the code within the project)
Answer: Addressing your comments first:
Should I even be implementing multiple code fixes in a single CodeFixProvider? - the API definitely supports it but it feels it could easily get out of hand
It's okay to do so. I typically use the same diagnostic ID for both these things and simply change the message. For example the diagnostic with ID "MemberCanBeReadOnly" can have a message "Field A can be made readonly" and "Property B can be made readonly".
I'm working out what diagnostic I'm responding to, inside RegisterCodeFixesAsync by doing a string comparison - is there a stronger-typed way of working this out?
Not that I'm aware of but it's okay -- IDs are supposed to be unique anyway. You might want to add a prefix of your own though to indicate it's a diagnostic part of your own library (and thus avoid collisions with other libraries). An example is the "CS" prefix used by Microsoft themselves.
Creating the SyntaxGenerator within the MakeFieldReadOnly and MakePropertyReadOnly feels a bit wrong somehow - is there a more natural way for me to make my changes?
You're just getting the document's syntax root twice. If this bothers you, get it once and pass it along to those two methods.
Now, actual code remarks:
You're using SyntaxGenerator but you're not necessarily generating new code -- you're editing existing code. I would advise to use the DocumentEditor class to make changes to code because it prevents problems when you have to edit more than one area in the same document. You will notice that its syntax also feels a little more fluent.
Asynchronous methods are supposed to end with an Async suffix
Time for C# 6 baby:
public sealed override ImmutableArray<string> FixableDiagnosticIds
{
get { return ImmutableArray.Create(LiveVariablesAnalyzersAnalyzer.LV1001DiagnosticId, LiveVariablesAnalyzersAnalyzer.LV1002DiagnosticId); }
}
public sealed override FixAllProvider GetFixAllProvider()
{
return WellKnownFixAllProviders.BatchFixer;
}
becomes
public sealed override ImmutableArray<string> FixableDiagnosticIds => ImmutableArray.Create(LiveVariablesAnalyzersAnalyzer.LV1001DiagnosticId, LiveVariablesAnalyzersAnalyzer.LV1002DiagnosticId);
public sealed override FixAllProvider GetFixAllProvider() => WellKnownFixAllProviders.BatchFixer;
Code fixes are typically boring though. Show us the analyzer! Analyzers are where the real work is done and where all the edge cases pop up. I can see many edge cases:
Are the members set from another class?
Have you thought about public members being accessed from another solution?
Did you make the distinction between read-only properties that can be set from the constructor?
There's more but these are just off the top of my head. | {
"domain": "codereview.stackexchange",
"id": 16084,
"tags": "c#, roslyn"
} |
Is feature importance from classification a good way to select features for clustering? | Question: I have a large data set with many features (70). By doing preprocessing (removing features with too many missing values and those that are not correlated with the binary target variable) I have arrived at 15 features. I am now using a decision tree to perform classification with respect to these 15 features and the binary target variable so I can obtain feature importance. Then, I would choose features with high importance to use as an input for my clustering algorithm. Does using feature importance in this context make any sense?
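Concretely, the described pipeline might look like this (a sketch on synthetic data; the number of selected features and clusters is arbitrary):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Synthetic stand-in for the 15 preprocessed features and binary target
X, y = make_classification(n_samples=300, n_features=15,
                           n_informative=5, random_state=0)

# Fit a tree on the binary target and rank features by importance
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
top = np.argsort(tree.feature_importances_)[::-1][:5]

# Cluster using only the highest-importance features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[:, top])
print(len(set(labels)))  # 2
```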
Answer: It might make sense, but it depends what you're trying to do:
If the goal is to predict the binary target for any instance, a classifier will perform much better.
If the goal is to group instances by their similarity, loosely taking the binary target into account indirectly, then clustering in this way makes sense. This would correspond to a more exploratory task where the goal is to discover patterns in the data, focusing on the features which are good indicators of the target (it depends how good they actually are). | {
"domain": "datascience.stackexchange",
"id": 9259,
"tags": "machine-learning, classification, clustering, decision-trees, feature-importances"
} |
Query about derivation for the grand partition function for an ideal fermi gas | Question: I am having trouble understanding the following derivation:
I am having a bit of trouble understanding what it means to sum over all states.
So this is how I am interpreting the above.
We have a gas of $N$ particles in different states $n_1, n_2, n_3,...$ with energies $\epsilon_1, \epsilon_2, \epsilon_3,...$ etc.
In the above derivation $E$ is just the total energy, given by $$\sum_{i=1}^{\infty} n_i\epsilon_i, $$
and $N$ is just the sum of the occupancies $n_1 + n_2 + n_3 + ...$.
My issue might be more with the mathematical notation than the physics, but when we have a sum like the one above, the different terms in the sum are given by the index $i$, i.e. $$\sum_{i=1}^{\infty} n_i\epsilon_i = n_1\epsilon_1 + n_2\epsilon_2 + n_3\epsilon_3 + \dots$$
What are the different terms in the sum given by line 2 in the above derivation?
I can't even write the second term in the above sum as I don't see what the summing index is.
Secondly, I don't understand the logic in going from line 3 to line 4.
If someone understands the above derivation, could you please explain it?
Answer: The index $i$ in $n_i$ and $\epsilon_i$ refers to the label of the one-particle states. I.e., the Hamiltonian $\hat H_N$ of the non-interacting gas of $N$ particles is written as a sum of $N$ one-particle Hamiltonian $\hat h_{\alpha}$:
$$
\hat H_N= \sum_{\alpha=1}^N \hat h_{\alpha}
$$
and the eigenstates of $\hat h_{\alpha}$ are the states $\left| i\right>$ such that
$$
\hat h_{\alpha}\left| i\right>=\epsilon_i \left| i\right>.
$$
In general, the number of the one-particle states is infinite ($i=1, \dots, \infty$).
$N$-particle states are obtained as symmetrized or anti-symmetrized tensor products of $N$ such one-particle states. The corresponding $N$-particle energy, $E_N$, is just the sum of the one-particle energies, each multiplied by the number of particles in that state (the occupation numbers $n_i$):
$$
E_N= \sum_{i=1}^{\infty} n_i \epsilon_i,
$$
with the constraint $\sum_{i=1}^{\infty} n_i=N$. Therefore, one could say that a single $N$-particle state is uniquely identified by the infinite set of occupation numbers $\left\{ n_i \right\}$, or equivalently by the ordered sequence $(n_1,n_2,n_3,\dots)$, always with the constraint $\sum_{i=1} n_i=N$.
Now, let's go to grand-canonical statistical mechanics. Your starting expression is a possible way to slightly simplify the derivation, by starting immediately with a sum over all the states, without any constraint on the number of particles. It is a way to get almost immediately the correct final result for the grand-canonical partition function.
It could be interesting, and probably pedagogically more useful, to start with the expression for the grand-canonical partition function, written as:
$$
\mathcal{Z} = \sum_{N=0}^{\infty} e^{\beta \mu N} Q_N ~~~~~~~~~~~[1]
$$
where $Q_N$ is the canonical partition function for a system of $N$ particles.
$$
Q_N=\sum_{\left\{ n_i \right\}; \sum_{i=1} n_i=N } e^{-\beta E_N(\left\{ n_i \right\})}.
$$
The constraint $ \sum_{i=1} n_i=N $ can be absorbed into a Kronecker delta, in order to rewrite $Q_N$ as an unconstrained sum over all possible values of the occupation numbers:
$$
Q_N=\sum_{\left\{ n_i \right\} } e^{-\beta E_N(\left\{ n_i \right\})}\delta_{N,\sum_{i=1} n_i}~~~~~~~~~~[2]
$$
Using this expression [2], the grand-canonical partition function [1] can be rewritten as:
$$
\mathcal{Z} = \sum_{N=0}^{\infty} e^{\beta \mu N}\sum_{\left\{ n_i \right\} } e^{-\beta E_N(\left\{ n_i \right\})}\delta_{N,\sum_{i=1} n_i}=
\sum_{N=0}^{\infty} \sum_{\left\{ n_i \right\} } e^{\beta \mu N}e^{-\beta E_N(\left\{ n_i \right\})}\delta_{N,\sum_{i=1} n_i}=
\sum_{N=0}^{\infty} \sum_{\left\{ n_i \right\} } e^{\beta \mu \sum_{i=1}^{\infty} n_i}e^{-\beta E_N(\left\{ n_i \right\})}\delta_{N,\sum_{i=1} n_i}=
\sum_{N=0}^{\infty} \sum_{\left\{ n_i \right\} } e^{-\beta \sum_{i=1}^{\infty} n_i (\epsilon_i-\mu)}\delta_{N,\sum_{i=1} n_i}
$$
In the last formula, it is possible to invert the order of summation, leaving an external, unconstrained sum over all the possible choices of occupation numbers, while the sum over $N$ of the Kronecker delta, for each choice of the occupation numbers $\left\{ n_i \right\}$, is equal to $1$ (only one value of $N$ equals $\sum_i n_i$).
Therefore, we have arrived at the unconstrained sum over the occupation numbers.
Its explicit meaning is
$$
\sum_{\left\{ n_i \right\} }=\sum_{n_1}\sum_{n_2}\sum_{n_3}\dots.
$$
Since each term in the sum factorizes, [1] becomes
$$
\left(\sum_{n_1} e^{-\beta n_1 (\epsilon_1-\mu)}\right)\left(\sum_{n_2} e^{-\beta n_2 (\epsilon_2-\mu)}\right)\left(\sum_{n_3} e^{-\beta n_3 (\epsilon_3-\mu)}\right)\dots.
$$
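Before taking that last step, it may help to verify the factorization numerically for a small fermionic example: a brute-force sum of $e^{-\beta(E-\mu N)}$ over all occupation configurations ($n_i \in \{0,1\}$) must equal the product $\prod_i\left(1 + e^{-\beta(\epsilon_i-\mu)}\right)$. A sketch, with arbitrary level energies, $\beta$ and $\mu$:

```python
import math
from itertools import product

def grand_Z_bruteforce(energies, beta, mu):
    # unconstrained sum over all occupation configurations {n_i}, n_i in {0, 1}
    Z = 0.0
    for occ in product((0, 1), repeat=len(energies)):
        E = sum(n * e for n, e in zip(occ, energies))
        N = sum(occ)
        Z += math.exp(-beta * (E - mu * N))
    return Z

def grand_Z_factorized(energies, beta, mu):
    # product over one-particle levels: (1 + e^{-beta(eps_i - mu)}) for fermions
    Z = 1.0
    for e in energies:
        Z *= 1.0 + math.exp(-beta * (e - mu))
    return Z

eps = [0.5, 1.0, 1.7, 2.3]
print(grand_Z_bruteforce(eps, 1.2, 0.8))
print(grand_Z_factorized(eps, 1.2, 0.8))   # identical up to roundoff
```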
At this point the exact calculation depends on the statistics (two values of each occupation number for fermions, infinite values for bosons), but it is straightforward. | {
"domain": "physics.stackexchange",
"id": 63339,
"tags": "thermodynamics, statistical-mechanics, fermions, partition-function"
} |
Can buoyant force act downwards? | Question: I came up with a question stated below
"A vessel contains oil (density 0.8g/cc) over mercury (density 13.6g/cc). A homogeneous sphere floats with half its volume immersed in mercury and the other half in oil. The density of the material of the sphere in g/cc is ?"
The answer to this problem is 7.2 g/cc, but this seems incorrect to me. I got 6.4 g/cc.
If we consider the oil section only: the horizontal components of the pressure forces on the upper curved surface cancel out (think in 3D), while the vertical components act downwards, so there is a net downward force on the sphere from the oil (its "buoyant" force).
The sphere is homogeneous, and the oil and mercury have constant densities, so don't assume anything out of the box.
Regards
Edit:
As suggested by BIO's answer, let's put a small, negligible rod between the two hemispheres under consideration. Then, because of the flat surface, there is a greater force on that flat surface which compensates the other small forces acting on the curved surface, leading to a net upward force from the oil. But in reality we can't assume that, as the flat surface isn't exposed to the oil section (only the curved surface is).
Excuse the drawing.
Still can't figure it out; please help.
Answer: Archimedes' principle states that the buoyant force equals the weight of the fluid displaced. This gives the solution of 7.2 g/cc. If you're not convinced, consider the following set-up
The two hemispheres are only separated slightly but joined rigidly by a short, thin rod so that the pressures at the two flat surfaces are infinitesimally different. You should feel reassured to use Archimedes' principle now.
Regarding your original question, yes buoyant force can act downwards (as you have drawn in your diagram). The buoyant force is due to the liquid pushing the surface of the object. As pressure is higher at the bottom of the liquid due to gravity, usually the net buoyant force is pointing upwards. (A suction cup is a counterexample) However, if we just consider a part of the object's surface, the buoyant force acting on that patch is normal to the surface: $d\vec{F}_\textrm{buoy} = -p d\vec{A}$.
Your above reasoning suggests that the density of the sphere is equal to half the difference of the density of the two media. What happens when both fluids are the same?
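One can also confirm the 7.2 g/cc answer by integrating the pressure over the sphere numerically (a sketch; the sphere radius and grid size are arbitrary, and pressure is measured from the interface, since a constant offset exerts no net force on a closed surface):

```python
import math

def buoyant_force(R=0.1, rho_oil=800.0, rho_hg=13600.0, g=9.81, n=2000):
    # integrate dF_z = -p * n_z dA over the sphere, midpoint rule in theta
    Fz, dth = 0.0, math.pi / n
    for i in range(n):
        th = (i + 0.5) * dth              # polar angle, measured from the top
        z = R * math.cos(th)              # height above the oil/mercury interface
        rho = rho_oil if z > 0 else rho_hg
        p = -rho * g * z                  # gauge pressure relative to the interface
        # azimuthal integral contributes the ring area 2*pi*R^2*sin(th)*dth
        Fz += -p * math.cos(th) * 2 * math.pi * R**2 * math.sin(th) * dth
    return Fz

V = 4 / 3 * math.pi * 0.1**3
expected = 9.81 * V / 2 * (800 + 13600)   # Archimedes: weight of displaced fluids
print(buoyant_force(), expected)          # the two agree
```

The upward force equals the weight of half a sphere of oil plus half a sphere of mercury, so floating requires $\rho = (0.8 + 13.6)/2 = 7.2$ g/cc.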
Lastly, you can always calculate the force by direct integration, if you are familiar with vector calculus. It would be good to use spherical coordinates for this problem. Proof of Archimedes' principle uses the Gauss' theorem and can be found in this answer. | {
"domain": "physics.stackexchange",
"id": 85149,
"tags": "homework-and-exercises, fluid-dynamics, density, buoyancy"
} |
Precise algorithm for finding higher order derivatives | Question: I'm trying to make an algorithm that finds the first 10 or so terms of a function's Taylor series, which requires finding the nth derivative of the function for the nth term. It's easy to implement derivatives by following the definition of the derivative:
$$f'(x) = \lim_{h\to0}\dfrac{f(x+h)-f(x)}{h}$$
implemented here in Python:
dx = 0.001
def derivative(f, x):
    return (f(x + dx) - f(x)) / dx
The value seems to be even closer to the actual value of the derivative if we define it like this:
dx = 0.001
def derivative(f, x):
    return (f(x + dx) - f(x - dx)) / (2 * dx)
which just returns the average of (f(x + dx) - f(x)) / dx and (f(x) - f(x - dx)) / dx.
For higher order derivatives, I implemented a simple recursive function:
dx = 0.001
def nthDerivative(f, n, x):
    if n == 0:
        return f(x)
    return (nthDerivative(f, n - 1, x + dx) - nthDerivative(f, n - 1, x - dx)) / (2 * dx)
I tested the higher order derivatives of $f$ at $1$, where $f(x)=x^9$, and as can be proved by induction,
$$\dfrac{d^n}{dx^n}(x^k)=\dfrac{k!}{(k - n)!}x^{k-n}$$
Therefore, the nth derivative of $f$ at $1$ is $\dfrac{9!}{(9 - n)!}$.
Here are the values returned by the function for n ranging from 0 to 9:
n Value Intended value
-----------------------------------
0 1.000 1
1 9.000 9
2 72.001 72
3 504.008 504
4 3024.040 3024
5 15120.252 15120
6 60437.602 60480
7 82298.612 181440
8 32278187.177 362880
9 95496943657.736 362880
As you can see, the values are waaaay off for $n$ greater than $5$.
What can I do to get closer to the actual values? And is there an algorithm for this that doesn't have $O(2^n)$ performance like mine?
Answer: The first thing you should understand is why central differencing gives you a more precise solution.
Consider the Taylor expansion of $f$ around $x$:
$$f(x + h) = f(x) + h f'(x) + \frac{1}{2} h^2 f''(x) + \frac{1}{3!} h^3 f'''(x) \cdots$$
Then:
$$\frac{f(x+h) - f(x)}{h} = f'(x) + \frac{1}{2} h f''(x) + \frac{1}{3!} h^2 f'''(x)\cdots$$
That is:
$$f'(x) = \frac{f(x+h) - f(x)}{h} + O(h)$$
However:
$$f(x - h) = f(x) - h f'(x) + \frac{1}{2} h^2 f''(x) - \frac{1}{3!} h^3 f'''(x) \cdots$$
Therefore:
$$f(x + h) - f(x-h) = 2 h f'(x) + \frac{2}{3!} h^3 f'''(x) \cdots$$
And so:
$$\frac{f(x + h) - f(x-h)}{2h} = f'(x) + O(h^2)$$
With central differencing, the even terms of the Taylor series cancel, and you get a second-order approximation instead of a first-order approximation.
(Note that in real-world problems, central differencing is not always possible for many reasons. In fluid dynamics, for example, you do not want to do a central differencing approximation across a shock, so you generally only approximate derivatives using values that are "upwind" of the point of interest. I digress.)
You can think of approximating high-order derivatives as solving for the coefficients of the Taylor expansion. To find the first derivative, you solve for the first two coefficients. Since there are two unknowns, you need two equations. To calculate the second derivative as well, you need a third equation at least.
This doesn't seem like a game that you can win. If you obtain more points by using smaller values of $h$, then numerical evaluation becomes more unstable; if $f(x)$ and $f(x+h)$ are close in value, then $f(x+h) - f(x)$ can easily suffer from catastrophic cancellation. If you use larger values of $h$, then the $O(h)$ or $O(h^2)$ error term is larger. For a given $h$, there are only two points at distance $h$ from $x$ on the real line.
However, there are as many points (at a distance $h$ from $x$) as you want in the complex plane. So if $f$ is holomorphic, and $f(x) = x^9$ is holomorphic, you can obtain some extremely high-precision derivative estimates by picking points on a circle around $x$.
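A sketch of that idea, using the Cauchy integral formula discretized over $m$ equally spaced points on a circle of radius $h$ (the values of $h$ and $m$ are arbitrary choices; for a degree-9 polynomial any $m > 9$ makes the result exact up to roundoff):

```python
import cmath, math

def nth_derivative_circle(f, x, n, h=0.5, m=32):
    # f^(n)(x) = n!/(2*pi*i) * contour integral of f(z)/(z-x)^(n+1) dz,
    # discretized on m points of the circle z = x + h*e^(i*theta)
    s = sum(f(x + h * cmath.exp(2j * math.pi * k / m))
            * cmath.exp(-2j * math.pi * k * n / m)
            for k in range(m))
    return math.factorial(n) * (s / m).real / h**n

for n in range(10):
    print(n, nth_derivative_circle(lambda x: x**9, 1.0, n))
# n = 7 gives ~181440 and n = 9 gives ~362880, with no catastrophic cancellation
```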
Back to central differencing for a moment, this works by eliminating the even terms of the Taylor approximation. You may wonder if it's possible to extend this to eliminate higher-order terms.
Suppose that $A_k(h)$ is a $k$th-order approximation to some desired value $L$. That is, there are some constants $c_i$ such that:
$$A_k(h) = L + c_k h^k + c_{k+1} h^{k+1} + c_{k+2} h^{k+2} + \cdots$$
Let's look at what happens if you halved $h$:
$$A_k(\frac{h}{2}) = L + \frac{1}{2^k} c_k h^k + \frac{1}{2^{k+1}} c_{k+1} h^{k+1} + \frac{1}{2^{k+2}} c_{k+2} h^{k+2} + \cdots$$
Then:
$$2^k A_k(\frac{h}{2}) - A_k(h) = \left(2^k - 1\right) L + O(h^{k+1})$$
And so this:
$$A_{k+1}(h) = \frac{2^k A_k(\frac{h}{2}) - A_k(h)}{2^k - 1}$$
is a $k+1$th-order approximation. And, of course, you can iterate to find higher order approximations.
Even though this technique uses smaller and smaller step sizes, you can control the numeric issues by using a factor $t$ other than 2:
$$R(h,t) = \frac{t^k A_k(\frac{h}{t}) - A_k(h)}{t^k - 1}$$
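Here is a minimal sketch of one extrapolation step applied to the central difference from the question (the step size and factor are arbitrary choices):

```python
def central_diff(f, x):
    # returns A(h), a second-order approximation of f'(x)
    return lambda h: (f(x + h) - f(x - h)) / (2 * h)

def richardson(A, h, k, t=2.0):
    # combine A(h) and A(h/t) to cancel the leading O(h^k) error term
    return (t**k * A(h / t) - A(h)) / (t**k - 1)

A = central_diff(lambda x: x**9, 1.0)   # exact answer: f'(1) = 9
print(A(0.1))                 # ~9.85, visibly off
print(richardson(A, 0.1, 2))  # ~8.997, two orders of magnitude better
```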
This is known as Richardson extrapolation, and it turns out to be extremely useful in estimating derivatives. | {
"domain": "cs.stackexchange",
"id": 18694,
"tags": "algorithms, mathematical-programming, numerical-algorithms, mathematical-analysis, numerical-analysis"
} |
Robot performing soldering functions using Soldering iron | Question: Is it possible to design and construct a Robot which can solder components using a Soldering iron on printed circuit boards?
If No, What are the limitations?
Answer: Yes. There are many soldering robots in existence today. Unfortunately, they don't look very humanoid. They are usually simply gantry style robot arms. They are common on manufacturing lines.
Remember that a robot is just a machine. And machines are designed to do a specific task, usually as efficiently as possible. And this usually means simple, straightforward, and with the fewest extra degrees of freedom. So a humanoid robot doesn't really make sense for this task.
Some quick google searches turn up many examples: | {
"domain": "robotics.stackexchange",
"id": 2491,
"tags": "industrial-robot, artificial-intelligence"
} |
Problem with installing ROS indigo Ubuntu 14.04 - ros-indigo-perception | Question:
Dear all,
I already installed ROS-indigo once on my Ubuntu 14.04 which was working perfectly but I was obliged to delete ROS-indigo.
Now that I am trying to re-install ROS-indigo it does not accept the full desktop version installation and gives me this error
$ sudo apt-get install
ros-indigo-desktop-full
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed.
This may mean that you have requested
an impossible situation or if you are
using the unstable distribution that
some required packages have not yet
been created or been moved out of
Incoming. The following information
may help to resolve the situation:
The following packages have unmet
dependencies: ros-indigo-desktop-full
: Depends: ros-indigo-perception but
it is not going to be installed E:
Unable to correct problems, you have
held broken packages.
I searched the previous answers and nothing works; it still has a problem with "ros-indigo-perception".
Thanks,
Hamed
Originally posted by Mobile_robot on ROS Answers with karma: 264 on 2017-09-28
Post score: 0
Original comments
Comment by Mobile_robot on 2017-09-28:
I should mention that I tried Synaptic and
$sudo apt-get install -f
In order to fix the broken packages, but still is not working.
Comment by jayess on 2017-09-28:
Please note that updates to your question (and answers) should be done as an edit to your question and not as an answer.
Comment by jayess on 2017-09-28:
How did you delete ROS? What answers did you try?
Answer:
make sure your system is up to date
sudo apt update
sudo apt upgrade
Originally posted by tjadhav with karma: 68 on 2017-09-28
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 28949,
"tags": "ubuntu-trusty, ubuntu, ros-indigo"
} |
Game of Life in Ruby | Question: I think Ruby is kind of interesting, so I did this Game of Life implementation in Ruby. I wonder whether there is some Ruby magic that could make my code more elegant.
I am a Python coder and I think my Ruby kinda smells like Python now xD (and with lots of ends)
def lifegame(grid)
alive = 1
die = 0
while not lifeless(grid, alive)
print grid
print "\n"
next_round = update(grid, alive, die)
if next_round == grid
puts "In stable exiting..."
break
end
grid = next_round
end
end
def lifeless(grid, alive)
0.upto(grid.length-1) do |i|
0.upto(grid[0].length-1) do |j|
if(grid[i][j] == alive)
return false
end
end
end
return true
end
def update(grid, alive, die)
next_round = Array.new(grid.length){Array.new(grid[0].length, die)}
0.upto(grid.length-1) do |i|
0.upto(grid[0].length-1) do |j|
next_round[i][j] = evolve(grid, i, j, alive, die)
end
end
return next_round
end
def evolve(grid, i, j, alive, die)
directions = [[0,1],[0,-1],[1,0],[-1,0],[1,1],[1,-1],[-1,1],[-1,-1]]
t = 0
directions.each do |direction|
if (i+direction[0] >= 0 and i+direction[0] < grid.length and j+direction[1] >= 0 and j+direction[1] < grid[0].length)
if(grid[i+direction[0]][j+direction[1]] == alive)
t += 1
end
end
end
if((grid[i][j] == alive and (t < 2 or t > 3)) or (grid[i][j] == die and t != 3))
return die
else
return alive
end
end
grid = [[0,0,1,0,0],[1,0,1,0,0],[0,1,1,0,0],[0,0,0,0,0],[0,0,0,0,0]]
lifegame grid
Class version
Thanks for @Johan Wentholt's advice about classes.
Here is my updated code with custom classes.
Any advice is welcome!
class Game
WIDTH = 5
HEIGHT = 5
SEEDS = [[0,2],[1,0],[1,2],[2,1],[2,2]]
def initialize
@grid = Grid.new(WIDTH, HEIGHT)
@grid.plant_seeds(SEEDS)
end
def start
while not @grid.lifeless?
puts @grid
next_grid = update()
if(@grid == next_grid)
break
end
@grid = next_grid
end
end
def update
next_round = Grid.new(WIDTH, HEIGHT)
0.upto(WIDTH-1) do |row|
0.upto(HEIGHT-1) do |column|
next_round.update(row, column, evolve(row, column))
end
end
return next_round
end
def evolve(row, column)
directions = [[0,1],[0,-1],[1,0],[-1,0],[1,1],[1,-1],[-1,1],[-1,-1]]
t = 0
directions.each do |i, j|
if (row+i >= 0 and row+i < WIDTH and column+j >= 0 and column+j < HEIGHT)
if(@grid.cell_alive(row+i,column+j))
t += 1
end
end
end
return ((@grid.cell_alive(row,column) and (t == 2 or t == 3)) or (not @grid.cell_alive(row,column) and t == 3))
end
end
class Grid
def initialize(width, height)
@width = width
@height = height
@grid = setup_grid
end
def setup_grid
grid = []
@width.times do |row|
cells = []
@height.times do |column|
cells << Cell.new(false)
end
grid << cells
end
return grid
end
def plant_seeds(seeds)
seeds.each do |x,y|
@grid[x][y].live!
end
end
def update(row, column, value)
@grid[row][column].change_state(value)
end
def cell_alive(row, column)
return @grid[row][column].alive?
end
def lifeless?
not @grid.any?{|row| row.any?{|cell| cell.alive?}}
end
def to_s
rows = []
0.upto(@width-1) do |row|
columns = []
0.upto(@height-1) do |column|
columns << @grid[row][column].to_s
end
rows << columns.join("")
end
return rows.join("\n") + "\n\n"
end
def ==(other)
0.upto(@width-1) do |row|
0.upto(@height-1) do |column|
if cell_alive(row, column) != other.cell_alive(row, column)
return false
end
end
end
return true
end
end
class Cell
def initialize(alive)
@alive = alive
end
def change_state(state)
@alive = state
end
def alive?
@alive
end
def live!
@alive = true
end
def to_s
if @alive
return "x"
else
return "."
end
end
end
game = Game.new()
game.start()
Answer: Like I said in the comments Ruby is an object oriented language. However in your first attempt you don't make use of custom classes and object at all. In your second attempt you do use custom classes, but in my opinion the design can be done better.
Simple way to spot "bad" Ruby code
One of the simplest ways to spot "bad" Ruby code is by the use of manual iteration. Ruby provides plenty of iterators that could be used instead of manually iterating over an collection. Examples are each, each_with_index, map, none?, all?, any? and many others.
In some cases you may not be able to work around manual iteration, but most scenarios have a build-in solution.
In case you need the index with map you can make use of the enumerator returned if no block is provided.
array.map.with_index { |element, index| ... }
Game rules
Let's first address the rules of the game:
Any live cell with fewer than two live neighbours dies, as if by under-population.
Any live cell with two or three live neighbours lives on to the next generation.
Any live cell with more than three live neighbours dies, as if by overpopulation.
Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
Rule evaluation
The rules are all about the number of neighbours that are alive. A cell could check this for itself if it knew who its neighbours are. For this reason I would leave the placement and neighbour assignment up to the Grid, but I would leave the state checking to the Cell itself. This would also eliminate a lot of the coordinate usage, since a cell doesn't care whether a neighbour lives above, next to, or under it. The only thing that matters is the number of neighbours alive. The only place where you still need the coordinates is when placing the cells and when assigning the neighbours of each cell.
In my opinion the code becomes a lot more readable when it speaks for itself.
Advantages of working with classes
Working with classes comes most of the time with some overhead (which can be seen in my example below), but has several advantages.
When working with classes the methods are namespaced in the class. Keeping the global namespace free from clutter.
You can assign certain classes certain responsibilities. This makes it easier to maintain code since you know where you should look for certain problems.
Responsibilities
I've chosen the following responsibilities for the different classes:
Cell
A cell is responsible for its own state and the transition to the next state. It has references to its neighbours to check this.
Grid
The grid is responsible for creating the grid, creating initially activated cells and assigning each cell its neighbours.
Game
The game is responsible for grid instantiation and manages the game cycles to progress the grid further.
Code Example
class Cell
RELATIVE_NEIGHBOUR_COORDINATES = {
north: [-1, 0].freeze, north_east: [-1, 1].freeze,
east: [0, 1].freeze, south_east: [1, 1].freeze,
south: [1, 0].freeze, south_west: [1, -1].freeze,
west: [0, -1].freeze, north_west: [-1, -1].freeze,
}.freeze
NEIGHBOUR_DIRECTIONS = RELATIVE_NEIGHBOUR_COORDINATES.keys.freeze
attr_accessor(*NEIGHBOUR_DIRECTIONS)
def initialize(alive = false)
@alive = !!alive # "!!" converts alive value to boolean
end
def alive?
@alive
end
def live!
@alive = true
end
def die! # currently unused
@alive = false
end
##
# Queues the next state. Returns true if the state is going to change and
# false if it stays the same.
def queue_evolve
@queued_alive = alive_next_cycle?
@alive != @queued_alive
end
##
# Applies the queued state. Returns true if the state changed and false if the
# state stayed the same.
def apply_queued_evolve
old_alive = @alive
@alive = @queued_alive
old_alive != @alive
end
def alive_next_cycle?
alive_neighbours = neighbours.count(&:alive?)
if alive?
(2..3).cover?(alive_neighbours)
else
alive_neighbours == 3
end
end
def going_to_change?
alive? != alive_next_cycle?
end
##
# Used to get a neighbour in dynamic fashion. Returns the neighbouring cell or
# nil if there is no neighbour on the provided direction.
#
# cell[:north]
# #=> neighbouring_cell_or_nil
#
def [](direction)
validate_direction(direction)
send(direction)
end
##
# Used to set a neighbour in dynamic fashion. Returns the provided neighbour.
#
# cell[:south] = other_cell
# #=> other_cell
#
def []=(direction, neighbour)
validate_direction(direction)
send("#{direction}=", neighbour)
end
##
# Returns a list of all present neighbours.
def neighbours
NEIGHBOUR_DIRECTIONS.map(&method(:[])).compact
end
##
# Returns a hash of neighbours and their positions.
#
# cell.neighbours_hash
# #=> {
# north: nil,
# north_east: nil,
# east: some_cell,
# south_east: some_other_cell,
# # ...
# }
#
def neighbours_hash # currently unused
NEIGHBOUR_DIRECTIONS.map { |dir| [dir, self[dir]] }.to_h
end
##
# Returns "x" if the cell is alive and "." if the cell is not.
def to_s
alive? ? 'x' : '.'
end
##
# Since neighbours point to each other the default inspect results in an
# endless loop. Therefore this is overwritten with a simpler representation.
#
# #<Cell dead> or #<Cell alive>
#
def inspect
"#<#{self.class} #{alive? ? 'alive' : 'dead'}>"
end
private
def validate_direction(direction)
unless NEIGHBOUR_DIRECTIONS.map(&:to_s).include?(direction.to_s)
raise "unsupported direction #{direction}"
end
end
end
class Grid
def initialize(width, height, seeds = [])
@cells = Array.new(width * height).map { Cell.new }
@grid = @cells.each_slice(width).to_a
seeds.each { |coordinate| @grid.dig(*coordinate).live! }
assign_cell_neighbours
end
##
# Returns true if the resulting grid changed after evolution.
def evolve
# Keep in mind that any? short circuits after the first truethy evaluation.
# Therefore the following line would yield incorrect results.
#
# @cells.each(&:queue_evolve).any?(&:apply_queued_evolve)
#
@cells.each(&:queue_evolve).map(&:apply_queued_evolve).any?
end
##
# Returns true if the next evolutions doesn't change anything.
def lifeless?
@cells.none?(&:going_to_change?)
end
##
# Returns the grid in string format. Placing an "x" if a cell is alive and "."
# if a cell is dead. Rows are separated with newline characters.
def to_s
@grid.map { |row| row.map(&:to_s).join }.join("\n")
end
private
##
# Assigns every cell its neighbours. @grid must be initialized.
def assign_cell_neighbours
@grid.each_with_index do |row, row_index|
row.each_with_index do |cell, column_index|
Cell::RELATIVE_NEIGHBOUR_COORDINATES.each do |dir, rel_coord|
(rel_row_index, rel_column_index) = rel_coord
neighbour_row_index = row_index + rel_row_index
neighbour_column_index = column_index + rel_column_index
next if neighbour_row_index.negative? ||
neighbour_column_index.negative?
cell[dir] = @grid.dig(neighbour_row_index, neighbour_column_index)
end
end
end
end
end
class Game
def initialize(width, height, seeds)
@width = width
@height = height
@seeds = seeds
end
def reset
@grid = Grid.new(@width, @height, @seeds)
end
def start
reset
puts @grid
until @grid.lifeless?
@grid.evolve
puts
puts @grid
end
end
end
game = Game.new(5, 5, [[0,2], [1,0], [1,2], [2,1], [2,2]])
game.start
The reason cell needs to update its state in two steps is simple. It can't depend upon the new state of one of its neighbours. For this reason all cells prepare their new state first before applying the prepared state.
References
Most things speak for themselves, however I still think some references are needed for the not so obvious (Ruby specific) code.
The splat operator (*) used to use the contents of an array as individual arguments. Used in the lines:
@grid.dig(*coordinate)
# and
attr_accessor(*NEIGHBOUR_DIRECTIONS)
attr_accessor is used to create getters and setters for the different neighbour directions.
attr_accessor(:north) # or attr_accessor :north
# is the same as
def north
@north
end
def north=(value)
@north = value
end
This allows cell.north to fetch the north neighbour and cell.north = neighbour to set the north neighbour.
The use of send to dynamically call methods inside the Cell class.
Array decomposition assignment done in the following line:
(rel_row_index, rel_column_index) = rel_coord
Block passing. I currently can't find a reference for this. But the following things yield the same result.
numbers = [1, 2, 3, 4]
numbers.map { |number| number.to_s }
#=> ["1", "2", "3", "4"]
# is the same as
numbers.map(&:to_s)
#=> ["1", "2", "3", "4"]
#===========================================
def some_method(number)
number.to_s
end
numbers.map { |number| some_method(number) }
#=> ["1", "2", "3", "4"]
# is the same as
numbers.map(&method(:some_method))
#=> ["1", "2", "3", "4"]
Most other methods I use (e.g. none?, each_slice) can be found in the Enumerable module. | {
"domain": "codereview.stackexchange",
"id": 32935,
"tags": "ruby, game-of-life"
} |
Propeller modelling | Question: I need a (very) approximate model of a propeller on an aircraft.
My principal question is this: what would the relationship be between:
Propeller rate of rotation
Aircraft speed
Force generated by propeller
As an aside, what would the relationship be between the above and:
Energy required to drive propeller?
Rotational force between the propeller and aircraft body
Any insight would be very helpful, including stating that I haven't given enough information!
Thanks
Answer: You have enough information. A propeller is just a complicated sort of wing.
Assume the aircraft is moving forward at an airspeed of, say, 150 km/hr.
Assume the propeller is turning at an angular rate of, say, 40 revolutions per second (hz).
From that, if you look at the propeller surface at a particular radius from the center, such as 1 meter, you can determine the angle of the helical path followed by that part of the propeller through the air, and the speed it is travelling through the air.
Then assume it has a particular angle of attack, like 10 degrees or 0.18 radian, and has a particular "width", which is the chord length of its airfoil. From that, you can figure out how much "lift" and "drag" it is generating, and what the direction of those forces are relative to the aircraft centerline.
Now if you just do that for a range of different radii on the propeller (i.e. perform an integration), you can figure out how much thrust the propeller generates and how much power it needs.
Don't forget, that is per propeller blade. Typical propellers have 2 or 3 blades, but can have as many as 6.
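The per-radius procedure described above can be written as a crude blade-element sum. Everything airfoil-related below (constant chord, untwisted blade, thin-airfoil lift slope $2\pi\alpha$, parabolic drag polar) is an illustrative assumption rather than real propeller data:

```python
import math

def blade_element(V, rpm, r_hub, r_tip, chord, pitch, n_blades=2,
                  rho=1.225, cl_alpha=2 * math.pi, cd0=0.02, n=100):
    """Return (thrust, power) by summing lift and drag over blade elements."""
    omega = rpm * 2 * math.pi / 60          # rad/s
    dr = (r_tip - r_hub) / n
    thrust = power = 0.0
    for i in range(n):
        r = r_hub + (i + 0.5) * dr
        vt = omega * r                      # tangential speed of this element
        w = math.hypot(V, vt)               # resultant airspeed
        phi = math.atan2(V, vt)             # helix angle of the element's path
        alpha = pitch - phi                 # angle of attack
        cl = cl_alpha * alpha               # assumed thin-airfoil lift slope
        cd = cd0 + 0.05 * alpha**2          # assumed crude drag polar
        q = 0.5 * rho * w**2 * chord * dr   # dynamic pressure * element area
        dL, dD = cl * q, cd * q
        thrust += dL * math.cos(phi) - dD * math.sin(phi)
        power += (dL * math.sin(phi) + dD * math.cos(phi)) * vt
    return n_blades * thrust, n_blades * power

# e.g. 150 km/h forward speed and 40 rev/s, as in the example above
T, P = blade_element(V=150 / 3.6, rpm=2400, r_hub=0.2, r_tip=1.0,
                     chord=0.08, pitch=0.5)
```

As noted above, this ignores tip losses and induced inflow; for a fixed-pitch blade it shows thrust falling as forward speed rises.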
This is still just a rough calculation because it depends on things like the shape of the tip of the propeller, just as the shape of an aircraft's wingtips make a difference in how much drag is produced.
Note that if the thrust is greater than the drag on the aircraft, the aircraft will be climbing. If less, it will be descending. | {
"domain": "physics.stackexchange",
"id": 6404,
"tags": "newtonian-mechanics, aerodynamics, aircraft"
} |
Simplifying Friedmann's Equation | Question: So we have one of Friedmann's equation:
$$\rho_c = \frac{3H^2}{8\pi G}$$
Using this website, values were gathered for specific times in the universe, namely the Hubble constant at those times (e.g. 3.38 By). The critical density was worked out for those times using the Hubble constant. The critical density was in kg m^-3 units and was converted into amu m^-3 units by multiplying the value by 6.02214129 × 10^26.
I then graphed the values:
I could then conclude from this data that (equation of the line):
$$\rho_c = E H^2$$
$$H^2= \frac{\rho_c }{E}$$
where E = 1081.6. Isn't this a much simpler equation for working out the critical density or the Hubble constant? I'm not saying one is better than the other; it's just an idea. For further investigation I will use the equation to find the Hubble constant at 13.79 By, where the critical density is 5.2 amu/m^3:
$$H^2= \frac{\rho_c }{E}$$
$$H^2= \frac{5.216128}{1081.6}$$
$$H= \sqrt{0.00482}$$
$$H= 0.069445$$
And we convert this from By^-1 to km/s/Mpc, through Google or by multiplying by 977.6 (Google is easier though). The final answer we get is H = 67.9 km/s/Mpc. If we refer to the Planck mission, where the Hubble constant was observed to be between 67.13 and 68.57 km/s/Mpc, this corresponds to the answer we got through the new equation. Does this 'new' equation have any relevance in physics at all?
Answer: If we start with the equation you quote:
$$ \rho_c = \frac{3H^2}{8\pi G} $$
and rewrite it as:
$$ \rho_c = \frac{3}{8\pi G} H^2 $$
then it's the same as your equation:
$$ \rho_c = E H^2 $$
because all you've done is replace the constant factor of $3/8\pi G$ with the symbol $E$. This is of course a perfectly reasonable thing to do, but it isn't new in any sense.
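In fact, one can check numerically (a quick sketch; constants rounded) that $3/8\pi G$, converted to the question's units of amu m$^{-3}$ for $\rho_c$ and By$^{-1}$ for $H$, reproduces the fitted slope:

```python
import math

G = 6.674e-11              # m^3 kg^-1 s^-2 (Newton's constant)
amu_per_kg = 6.02214076e26
s_per_By = 3.1557e16       # seconds in one billion years

const_si = 3 / (8 * math.pi * G)          # rho_c / H^2 in SI units
E = const_si * amu_per_kg / s_per_By**2   # amu m^-3 when H is in By^-1
print(E)   # ~1082, essentially the fitted E = 1081.6
```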
The reason we prefer the original form is that $G$ is an important constant of nature - it's Newton's constant. It's the same constant as used to calculate the (non-relativistic) gravitational force between two masses:
$$ F = \frac{Gm_1m_2}{r^2} $$ | {
"domain": "physics.stackexchange",
"id": 16341,
"tags": "general-relativity, classical-mechanics, cosmology, astronomy"
} |
Time taken for collision | Question: We have three particles at the vertices of an equilateral triangle of side $d$. At $t=0$ they start moving in such a way that at every instant of time each of them has speed $v$ towards the adjacent one. We have to find the time after which they will collide. I know that you can solve this from the reference frame of any one particle, but I want a way to solve it by first calculating the total distance travelled in the ground frame and then dividing by the speed.
Answer: Let me first describe the usual way one solves the problem as mentioned by the OP. I will then work the problem out in the ground frame.
Nice Answer:
Let us work in the frame of particle 2. In this frame, particle 3 is spiralling in towards 2, while particle 1 has a complicated motion that always points towards 3. It is easy to show that the component of the velocity of 3 along the line joining 2 and 3 is $\frac{3v}{2}$. The net time taken is $t_0 = \frac{2d}{3v}$.
Nice Answer in ground frame: In the ground frame, there is a clear symmetry. The particles slowly move in towards the center. Due to the symmetry, the component of the velocity of each particle towards the center of the triangle is always equal and is a constant. At $t=0$, the component in this direction is $v \cos \frac{\pi}{6} = \frac{\sqrt{3}v}{2}$. The total distance every particle travels in that direction is $\frac{d}{\sqrt{3}}$. The total time is then $\frac{2d}{3v}$.
Full answer in ground frame:
Choose the origin to be the center of the equilateral triangle. At any time $t$, let the polar coordinates of the $i$th particle be $(r_i(t),\theta_i(t))$ with $0 < \theta_i(t) \leq 2\pi$ and $i=1,2,3$. The position vector is then given by
$$
\vec{r}_i = r_i \left( \cos\theta_i {\hat i} + \sin\theta_i {\hat j} \right)
$$
Now, before we proceed, we will use symmetry. Note that, no matter how the particles travel they will always form an equilateral triangle. Mathematically this implies the following equations
$$
r_1(t) = r_2(t) = r_3(t) = r(t)
$$
$$
\theta_1(t) - \theta_2(t) = \theta_2(t) - \theta_3(t) = \frac{2\pi}{3}
$$
$$
\implies \theta_1(t) = \theta(t) + \frac{2\pi}{3} ,~\theta_2(t) = \theta(t),~\theta_3(t) = \theta(t) - \frac{2\pi}{3}
$$
Thus, the original 6 variables have been reduced to two variables $r(t)$ and $\theta(t)$. Let us now write down the equations of motion. These are
$$
\frac{d}{dt}\vec{r}_2(t) = \frac{v \left( \vec{r}_1 - \vec{r}_2\right) }{|\vec{r}_1 - \vec{r}_2|}
$$
and similarly for $ \vec{r}_1(t) $ and $ \vec{r}_3(t) $, but given symmetry, these equations can be derived from the equation above. Let us now explicitly write out the equation above.
$$
\vec{r}_1 - \vec{r}_2 = r(t) \left[ \left( \cos\theta_1(t) - \cos\theta_2(t) \right) {\hat i} + \left( \sin\theta_1(t) - \sin\theta_2(t) \right) {\hat j} \right]
$$
$$
= \frac{\sqrt{3}}{2} r(t) \left[ \left( - \sqrt{3} \cos\theta - \sin\theta \right) {\hat i} + \left( \cos\theta - \sqrt{3} \sin\theta \right) {\hat j} \right]
$$
Also
$$
|\vec{r}_1 - \vec{r}_2 | = \sqrt{3} r(t)
$$
The full equation is then
$$
\frac{d}{dt} \left[ r(t) \left( \cos\theta {\hat i} + \sin\theta{\hat j} \right) \right]= \frac{v}{2} \left[ \left( - \sqrt{3} \cos\theta - \sin\theta \right) {\hat i} + \left( \cos\theta - \sqrt{3} \sin\theta \right) {\hat j} \right]
$$
This splits into two equations
$$
{\dot r} \cos\theta - r {\dot \theta} \sin\theta = - \frac{v}{2} \left( \sqrt{3} \cos\theta + \sin\theta \right)
$$
$$
{\dot r} \sin\theta + r {\dot \theta} \cos\theta = \frac{v}{2} \left( \cos\theta - \sqrt{3} \sin\theta \right)
$$
The boundary conditions are $r(0) = \frac{d}{\sqrt{3} },~\theta(0) = \frac{7\pi}{6} $. We now wish to decouple these two coupled first-order differential equations. This is quite easy to do. Multiply the first equation by $\cos\theta$, the second by $\sin\theta$ and add the two. We get
$$
{\dot r} = - \frac{\sqrt{3}}{2} v
$$
We must now integrate this. At time $t=0$, $r(0) = \frac{d}{\sqrt{3}}$. The radial coordinate at arbitrary time $t$ is then
$$
r(t) = - \frac{\sqrt{3}}{2} v t + \frac{d}{\sqrt{3} }
$$
Though we are done here, for completeness, we can also solve for $\theta(t)$. The solution is
$$
\theta(t) = \frac{7}{6} \pi - \frac{1}{\sqrt{3} } \log \left( 1 - \frac{3 v t }{2 d } \right)
$$
The total time of travel is the time when $r(t_0) = 0 \implies t_0 = \frac{2d}{3v}$. Let us also make sure that we understand what the total distance is. Squaring and adding the two equations of motion we wrote above, we find
$$
{\dot r}^2 + r^2 {\dot \theta}^2 = v^2
$$
The total distance travelled is then
$$
s = \int_0^{t_0} \left[ {\dot r}^2 + r^2 {\dot \theta}^2 \right]^{1/2} dt = v t_0 = \frac{2d}{3}
$$
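As an independent check, a crude forward-Euler integration of the three pursuit trajectories reproduces $t_0 = \frac{2d}{3v}$ numerically (the function name, step size and stopping tolerance below are arbitrary):

```python
import math

def pursuit_time(d=1.0, v=1.0, dt=1e-5, stop_sep=1e-3):
    # place the three particles on an equilateral triangle of side d
    R = d / math.sqrt(3)
    pts = [(R * math.cos(a), R * math.sin(a))
           for a in (0.0, 2 * math.pi / 3, 4 * math.pi / 3)]
    t = 0.0
    while True:
        dx = pts[1][0] - pts[0][0]
        dy = pts[1][1] - pts[0][1]
        if math.hypot(dx, dy) < stop_sep:   # particles have (almost) met
            return t
        new_pts = []
        for i in range(3):
            x, y = pts[i]
            tx, ty = pts[(i + 1) % 3]       # each particle chases the next one
            n = math.hypot(tx - x, ty - y)
            new_pts.append((x + v * dt * (tx - x) / n,
                            y + v * dt * (ty - y) / n))
        pts = new_pts
        t += dt

print(pursuit_time())   # ~ 0.666 = 2d/(3v) for d = v = 1
```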
Note that as $t \to \frac{2d}{3v}$, $\theta(t) \to \infty$, i.e. the particles spiral an infinite number of times before reaching the center of the equilateral triangle. | {
"domain": "physics.stackexchange",
"id": 8324,
"tags": "homework-and-exercises, newtonian-mechanics, kinematics"
} |
80-tap Decimation Filter: divide into stages? / Cost of having a number of taps | Question: I did a FIR filter hardware implementation with 80 taps (FPGA/VHDL). I will use it later as the decimation filter.
My questions:
Does the number of taps affect the cost of implementing a FIR filter?
FIR filters belong to the class of linear filters, and a combination of N lower-order filters can create the desired higher-order FIR filter. So I was wondering if it makes sense to implement a combination of N lower-order filters instead of a single 80-tap filter?
Is the decimation implemented in one stage usually?
EDIT 1
What is the difference between "filter (convolution)", "polyphase filter" and "polyphase filter bank"?
Honestly, I don't know of any difference in implementation.
Answer:
Does the number of taps affect the cost of implementing a FIR filter?
Well, insert your definition of "cost", and your question should really answer itself.
FIR filters belong to the class of linear filters, and a combination of N lower-order filters can create the desired higher-order FIR filter. So I was wondering if it makes sense to implement a combination of N lower-order filters instead of a single 80-tap filter?
Is the decimation implemented in one stage usually?
Indeed, splitting decimation into multiple stages is something that is commonly done, because then only the first filter needs to run at the highest rate.
Now, 80 taps is not what we'd usually call a large filter, so yes, you can do that splitting, it's even advisable.
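For concreteness, here is a small sketch of the polyphase idea (in Python/NumPy rather than VHDL, purely for illustration): each of the M filter phases runs at the low output rate, and the result matches full-rate filtering followed by downsampling to floating-point precision.

```python
import numpy as np

def decimate_naive(x, h, M):
    # filter at the full input rate, then keep every M-th output sample
    return np.convolve(x, h)[: len(x)][::M]

def decimate_polyphase(x, h, M):
    # split h into M polyphase components h_k[m] = h[m*M + k];
    # each component filters only every M-th input sample, so the
    # total work is len(h)/M multiply-accumulates per input sample
    L = -(-len(h) // M) * M                      # pad h up to a multiple of M
    hpad = np.concatenate([h, np.zeros(L - len(h))])
    N = len(x) // M
    y = np.zeros(N)
    for k in range(M):
        hk = hpad[k::M]                                  # k-th filter phase
        xk = np.concatenate([np.zeros(k), x])[::M][:N]   # k-th input phase
        y += np.convolve(xk, hk)[:N]
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(400)
h = rng.standard_normal(80)      # an arbitrary 80-tap impulse response
a = decimate_naive(x, h, 4)
b = decimate_polyphase(x, h, 4)
n = min(len(a), len(b))
print(np.allclose(a[:n], b[:n]))   # True
```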
However, it might really be premature optimization: Your FPGA is probably fast enough to do all 80 MACs at full rate, and it doesn't even have to do that, seeing that you're building a decimating filter, so everything but the most naive implementation would be a polyphase filter implementation, which reduces your complexity to 80/(decimation) per input sample. Seeing that your filter is only 80 taps long, you're probably also not aiming for a very high decimation, anyway. | {
"domain": "dsp.stackexchange",
"id": 12366,
"tags": "filters, digital-communications, digital-filters"
} |
Phaser effect with feedback control IIR filter calculation | Question: This question is possibly related to this one, but it's not a duplicate as far as I'm aware.
I'm trying to make phaser with feedback control as shown in the picture below: (Got it from here)
The first problem for me was that the feedback loop here doesn't have any delay, so at time $n$ I should pass the allpass cascade output $y_n$ to its input at that exact time, which seemed impossible to me. However, thanks to this comment I was able to figure out how to transform the IIR filter cascade to incorporate this feedback in terms of its coefficients:
Recursive equation for $k$-th order IIR filter:
$$y_n = b_0x_n + b_1x_{n-1} + ... + b_kx_{n-k} - a_1y_{n-1} - a_2y_{n-2} - ... - a_ky_{n-k}$$
As we pass system output through explicit feedback loop our input sequence $x_n$ is defined as follows:
$$x_n = d_n + fy_n$$ where $d_n$ is system "dry" input and $f$ is feedback coefficient such as $|f| < 1$
If we plug this into the first equation we'll get
$$y_n = b_0(d_n + fy_n) + b_1(d_{n-1} + fy_{n-1}) + ... + b_k(d_{n-k} + fy_{n-k}) - a_1y_{n-1} - a_2y_{n-2} - ... - a_ky_{n-k}=\\
=b_0d_n + b_0fy_n+...+b_kd_{n-k}+b_kfy_{n-k}-a_1y_{n-1}-...-a_ky_{n-k}=\\
= b_0fy_n+b_0d_n+...+b_kd_{n-k}-(a_1-b_1f)y_{n-1}-...-(a_k-b_kf)y_{n-k}$$
Moving all $y_n$ terms to the left and grouping them we'll finally get our desired system coefficients:
$$(1-b_0f)y_n = b_0d_n+b_1d_{n-1}+...+b_kd_{n-k}-(a_1-b_1f)y_{n-1}-(a_2-b_2f)y_{n-2} -...-(a_k-b_kf)y_{n-k}$$
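In transfer-function terms this says $H_{fb}(z) = \frac{B(z)}{A(z) - fB(z)}$, which can be checked numerically against the explicit feedback loop (the coefficient values below are arbitrary examples, not a real allpass cascade):

```python
import numpy as np

b = np.array([0.2, 0.5, 0.1])    # example open-loop numerator (arbitrary)
a = np.array([1.0, -0.3, 0.4])   # example open-loop denominator (arbitrary)
f = 0.5                          # feedback coefficient, |f| < 1

# coefficients of the combined filter, as derived above: A_f = A - f*B
af = a - f * b
bf = b / af[0]
af = af / af[0]

w = np.linspace(0.01, np.pi, 64)
zinv = np.exp(-1j * w)           # z^{-1} on the unit circle

def freq(num, den):
    # evaluate sum(num[k] z^-k) / sum(den[k] z^-k)
    return np.polyval(num[::-1], zinv) / np.polyval(den[::-1], zinv)

H = freq(b, a)                   # open-loop response
H_fb = H / (1 - f * H)           # response with the explicit feedback loop
print(np.allclose(freq(bf, af), H_fb))   # True
```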
Now, the problem is: we all know we cannot directly use filters of order $k > 2$ without stumbling upon various stability problems. Say I want to implement a phaser with four notches, meaning I'll have an allpass cascade of overall order $k = 8$. In that case I would probably want to find the poles/zeros of my system analytically (instead of the coefficients) and split it into second-order sections afterwards, or something like that, but I cannot figure out how to find the poles/zeros instead of the coefficients, or is it even possible to do so?
All I've got is this: using the fact that for an allpass of order $k$ with poles $[p_1, p_2, ..., p_k]$ its zeros will always equal $[1/p_1, 1/p_2, ..., 1/p_k]$, and knowing that all poles of the initial system (without feedback) are equal, I can write down my problem as follows:
$(x - p)^8 + a(x-1/p)^8= 0$
$p,a \in ℝ$
$|p| < 1$
$|a| < 1$
Also, below you can find MATLAB code which calculates the filter coefficients for the whole phaser; you can see the unstable case with
[bf, af] = notch_reso2(100, 48000, 0.5, 0.5);
function [ b, a ] = notch_reso2( fc, fs, feedback, depth )
order = 8;
w = pi * fc / fs;
a1 = (1 - cot(w)) / (1 + cot(w));
k = a1 ^ order;
p = -a1;
z = 1/p;
zs = repmat(z, 1, order);
ps = repmat(p, 1, order);
b = poly(zs) .* k;
a = poly(ps);
f = feedback;
af = a - b .* f;
bf = b ./ af(1);
af = af ./ af(1);
bf = bf .* depth + af * (1 - depth);
b = bf;
a = af;
end
Answer: Solution can be found in the well-known paper of Vadim Zavalishin. In sections (4.1) and (4.2) there's an explanation on how to implement zero feedback loop around cascade of LP filters. He even mentions later in the paper that exactly the same approach can be used to implement phaser (6.1) | {
"domain": "dsp.stackexchange",
"id": 3574,
"tags": "filters, audio, filter-design, infinite-impulse-response"
} |
Passing argument to a constructor and using it only in some class methods | Question: I've created a controller for handling clicked-link statistics. Does this class meet the Single Responsibility Principle?
class StatisticsController
{
protected $statisticsQuery;
public function __construct(StatisticsQuery $statisticsQuery)
{
$this->statisticsQuery = $statisticsQuery;
}
public function recordClickedLink(array $request)
{
(...)
if ($this->canRecordClick()) {
// record clicked link
}
(...)
}
public function getStatistics(array $request)
{
(...)
$someRequestParam = $request['id'];
$statistics = $this->statisticsQuery->get($someRequestParam);
(...)
}
protected function canRecordClick()
{
// returns true of false
}
}
It has two public methods. The first is responsible for saving a clicked link to a database by delegating it to a model class. The second public method is responsible for getting statistics, and it is the only method which uses the $statisticsQuery object passed to the class's constructor. Is it OK that the recordClickedLink method doesn't need the constructor's argument?
I wonder also if it is OK that my class contain canRecordClick method with logic for checking if link click can be saved during particular request.
If this class doesn't meet SRP, how can I refactor it?
Answer: As far as I can see in the code example you've provided, the getStatistics and recordClickedLink methods are independent of each other. So it seems intuitive in this case to have two controllers: a StatisticsController and something like a RecordController, where the two methods recordClickedLink and canRecordClick live. That way the StatisticsController is responsible for statistics and the RecordController for the record.
It is OK that the class contains canRecordClick, but protected means that any class extending StatisticsController will be able to call this method too. If it is only used in the StatisticsController, private would be better.
"domain": "codereview.stackexchange",
"id": 37828,
"tags": "php, controller, constructor"
} |
What's the architecture and size of neural-network-based reward models as used in reinforcement learning by human feedback | Question: My rough understanding of RLHF as used for ChatGPT in a nutshell is this:
A reward model is trained using comparisons of different responses
to the same prompt. Human trainers rank these responses based on
quality.
The reward model is a neural network that learns to predict these
human rankings. It essentially learns the "policy" that human
trainers use to rank responses.
An initial policy, which is a language model, is fine-tuned using
Proximal Policy Optimization (PPO) with the reward model providing
the reward signal. This process is iterative, with the policy and
reward model being updated alternately.
The policy is then used to generate responses to prompts. The reward
model assesses these responses and provides a reward signal, which
is used to further fine-tune the policy, i.e. the language model.
My main question is the first one, the others are just for giving context:
1. What's the architecture and size of the neural-network-based reward model?
Is it pretrained, too? Is it possibly another pretrained (foundational) language model?
By how many samples labelled by human trainers is the reward model trained?
By how many prompts and rewarded completions is the language model trained later? (Which prompts, by the way?)
These numbers I'd like to compare with the numbers of pretrained ChatGPT:
Transformer-based ChatGPT has 175 billion weights.
It was pretrained on 500 GB of text data, distributed over an unknown number of "documents" (from single tweets to the Holy Bible) with roughly 500B tokens overall. During training ChatGPT was exposed to a multiple of 500B samples (assuming that all 500B tokens were used for training).
I assume that during RLHF foundational ChatGPT was exposed to a much smaller number of prompts to complete (and to be rewarded).
Answer: If you haven't already, I would recommend a careful reading of OpenAI's paper on InstructGPT. This was their publication from last year regarding how they applied RLHF to GPT-3, the precursor of ChatGPT.
The appendix provides information on the reward model and the RLHF training data. For example,
For the reward models and
value functions, the unembedding layer of the original model is replaced with a projection layer
to output a scalar value.
The final reward model was initialized from a 6B GPT-3 model that was fine-tuned on a variety of
public NLP datasets (ARC, BoolQ, CoQA, DROP, MultiNLI, OpenBookQA, QuAC, RACE, and
Winogrande).
and,
We train all the RL models for 256k episodes. These episodes include
about 31k unique prompts, after filtering out prompts with PII and deduplication based on common
prefixes.
If you want to know what ChatGPT does specifically, you might have to ask someone who works there. It's not public information. | {
"domain": "ai.stackexchange",
"id": 3879,
"tags": "reinforcement-learning, rewards, hyper-parameters, training-datasets, human-like"
} |
Why does a halocline form? | Question: I was doing an experiment over the last couple of days to try to crystallize alum using a thermal gradient. The idea was that solute at the bottom of my container would be dissolved at a higher percentage (I used a coffee mug warmer) because of the solubility curve...
http://nshs-science.net/images/alum_solubility_chart.gif
Then, because the local solution is warmer than the solution above it, it would become buoyant, rise to the top, cool, become supersaturated, and then deposit its excess solute preferentially where I want it. This setup worked fine for me with Epsom salt and ordinary table salt. But in this case, using alum, a salinity gradient, or halocline, formed, and the solution at the bottom, though warm enough to be uncomfortable to the touch, did not become buoyant. My container was only 9 inches tall. At the top, the solution was almost at room temperature. I remembered this effect when reading about solar ponds:
http://en.wikipedia.org/wiki/Solar_pond
But typically a solar pond is several feet deep.
My question is, why does a halocline form in the first place? Is this simply the action of gravity on the solute? Is alum's greater solubility the reason I observed this effect in only 9 inches of container height?
Answer: This sort of thing happens all the time when you try dissolving a highly-soluble compound in a solvent without proper stirring. A high-concentration, high-density layer forms, and is stabilized by the fact that the density increase due to solute concentration outweighs the density decrease due to the increased temperature. For low-solubility compounds, or compounds with small $dS/dT$ (where $S$ is solubility and $T$ is temperature), this is less of a problem.
In the case of $\text{KAl(SO}_4)_2\cdot12\text{H}_2\text{O}$, the solubility at high temperatures (near boiling) is very high (see Solubilities of Inorganic and Organic Compounds by Atherton Seidell). You mention the solution at the bottom was "warm enough to be uncomfortable to the touch", which is probably in the regime of very high solubility, so it is overwhelmingly likely that this is the mechanism.
Stir it (magnetically or otherwise), or try using a smaller thermal gradient, and it should probably be better.
Halocline formation
user1839484: My question is, why does a halocline form in the first place?
I'd guess the mechanism for a halocline can roughly be described by considering the density $\rho$ as a function of solute concentration $S$ and temperature $T$, ie, $\rho(S,T)$. The halocline stability condition then becomes
$$\frac{d}{dT}\rho(S,T)>0.$$
Since $S$ depends on $T$, the halocline stability condition becomes
$$\begin{array}{|l|}
\hline
\frac{d}{dT}\rho(S(T),T)=S'(T) \rho ^{(1,0)}(S(T),T)+\rho ^{(0,1)}(S(T),T)>0.
\\\hline
\end{array}$$The second term is almost certainly negative for any value of $S$ (density of solutions generally tend to decrease with increasing temperature). The first term is positive, and is proportional to $S'(T)$, the slope of the solubility curve. If $S'(T)$ is large, the halocline stability condition is satisfied, and dense layers can form.
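As a toy numerical illustration of this condition (all model coefficients below are invented purely for illustration, in the same phenomenological spirit):

```python
def solubility(T):
    # hypothetical steep solubility curve S(T), i.e. large S'(T)
    return 0.05 * T

def rho(S, T):
    # hypothetical density: rises with solute concentration, falls with T
    return 1000.0 * (1.0 + 0.7 * S - 2e-4 * T)

def drho_dT(T, h=1e-6):
    # total derivative d/dT of rho(S(T), T) by central differences
    return (rho(solubility(T + h), T + h) - rho(solubility(T - h), T - h)) / (2 * h)

# the steep solubility term (0.7 * 0.05 per kelvin) outweighs thermal
# expansion (2e-4 per kelvin), so the hot, concentrated bottom layer
# stays denser and the stratification is stable
print(drho_dT(40.0) > 0)   # True
```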
(Note: I made all of this up, but I think it might be helpful in phenomenological understanding. Also, thank you for teaching me the word "halocline"). | {
"domain": "physics.stackexchange",
"id": 13479,
"tags": "crystals"
} |
launch "global_planner" without move_base | Question:
Hey everybody,
I'm still working on my platform, which is already running in the meantime, but I had to do some workarounds. To be more precise, I didn't manage to set up move_base to work with my bot. Either it is the setup/params or the odometry (which I programmed myself). But the pathfinding works so far. So I programmed a path executor: a node that just follows the path (the green line from the (global?) planner - NAVfn_path). No local planner, nothing extra; it just tries to follow this line as closely as possible. The bot is really slow - no dynamics to take into account. Moving objects are avoided because the path gets recalculated once every second or so.
Now to the "problem": i want to get rid of all the unneeded code running inside of move_base. So i tried to set up a costmap (costmap_2d) with the parameters i used with the move_base (since the costmap there just looks nice as it is right now :D). The standalone costmap looks as expected in rviz. Next step would be to start the "global_planner". from this link it is doing everything i need. I can even change the type of calculation there. And i hope i can just reinit the planner once it dies because the goal lies within an obstacle (the move_base tends to do this. The carrot_planner would do what i need, but ignores obstacles at all - even walls). But ... how do i call the planner? I just tried
$ rosrun glob[tab]al_planner [tab]planner
The [tab] just auto-completed the command, so I would expect the node to exist. I added it to the launch file I had prepared and it worked. For the goal I used
$ rostopic pub /planner/goal geometry_msgs/PoseStamped '{header: {frame: map}, pose: {position: {x: 1.0, y: 2.0, z: 0.0}}}'
Well, the planner complained that the bot is outside of its costmap. So I displayed it in rviz. And yes, it's outside, or more precisely at 0.0/0.0, and the costmap is blank. The costmap from the costmap_2d node still looks nice and the bot is at the correct position (amcl works fine).
==>> How do I tell the "global_planner" to use the costmap of the "costmap_2d" node?
The documentation doesn't mention anything like this: no topic the node subscribes to, nor a parameter with that name. All in all, the documentation seems limited to the bare minimum, since the planner doesn't seem to be intended to be used outside of move_base at all?
rosparam list:
...
/costmap_2d_node/costmap/(many different parameters here)
...
/planner/costmap/(almost the same parameters as above)
....
Launchfile (shorted/abstract):
<launch>
<include> hardware driver, laserscanner, encoder, odometry </include>
<node> map server </node>
<node> amcl </node>
<node pkg="costmap_2d" type="costmap_2d_node" name="costmap_2d_node" clear_params="true">
<rosparam file="$(find navigation)nav_config/params.yaml" />
</node>
<node pkg="global_planner" type="planner" name="planner" />
<robotdescription> my own xacro </robotdescription>
<robot_state_publisher ... />
<joint_state_publisher ... />
<node> rviz </node>
</launch>
params.yaml (but since the costmap itself looks nice I expect it to be OK):
costmap:
map_type: costmap
global_frame: /map
robot_base_frame: /base_link
update_frequency: 5.0
publish_frequency: 2.0
static_map: true
rolling_window: false
resolution: 0.1
obstacle_range: 2.5
raytrace_range: 5.0
inflation_radius: 0.9
...
...
observation_sources: laser_scan
....
....
Originally posted by -LD- on ROS Answers with karma: 135 on 2018-10-05
Post score: 1
Answer:
And I would like to answer my own question. It came to me as I wrote down that the two nodes have almost exactly the same costmap params. The global_planner just comes with its own costmap. All I had to do was load the exact same yaml file together with the global_planner. As expected, the node complains and doesn't calculate a path at first, but it stays alive, and once you enter a valid goal the planner will then give you a plan.
Maybe this can help someone trying the same as me :D
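Concretely, the fix amounts to loading the same parameter file under the planner node (a sketch mirroring the launch-file style and the yaml path from the question):

```xml
<node pkg="global_planner" type="planner" name="planner" clear_params="true">
  <!-- load the very same costmap parameters the costmap_2d node uses -->
  <rosparam file="$(find navigation)nav_config/params.yaml" />
</node>
```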
Originally posted by -LD- with karma: 135 on 2018-10-05
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 31867,
"tags": "ros, navigation, global-planner, move-base, ros-indigo"
} |
About Nuclear magnetic resonance | Question: I'm trying to understand the basic principles of Nuclear Magnetic Resonance reading this link but I have some doubts:
1) I have always understood that when protons aren't in a magnetic field, their spins are randomly oriented in space. When the protons are in a magnetic field, their spins feel the effect of $ \tau= \mu \times B$ and tend to align themselves with the magnetic field. But they precess around B; they aren't oriented along B! Isn't that so?
So I can't understand why in the link that I have reported, at the section "Spin Physics", paragraph "Energy Levels", there is written
When the proton is placed in an external magnetic field, the spin vector of the particle aligns itself with the external field,just like a magnet would. There is a low energy configuration or state where the poles are aligned N-S-N-S and a high energy state N-N-S-S.
(please, see the animation on the link)
It seems to me that the two interpretations are in conflict...
And, if I follow the interpretation that I have always heard, I don't understand this step (paragraph "$T_1$ process"):
At equilibrium, the net magnetization vector lies along the direction of the applied magnetic field Bo and is called the equilibrium magnetization Mo. In this configuration, the Z component of magnetization MZ equals Mo. MZ is referred to as the longitudinal magnetization."?
The only thing that came to my mind is: it is due to the fact that we are considering a lot of atoms, and so the x, y components of the net magnetization vector are, on average, equal to zero... But I'm not sure that this is right...
2) In the paragraph "Spin Relaxation"
I don't understand whether the motions that influence $T_1$ are rotational motions at the Larmor frequency or any motion that causes a time-varying field at the Larmor frequency.
And I'd like to understand whether the loss of phase of the transverse magnetization is due to the action of the many molecules that rotate at frequencies less than or equal to the Larmor frequency.
Many thanks!
Answer: I don't consider myself an expert in MRI, but let me try (since nobody else has stepped up in the last hour...)
You are right with your first assertion: the spin precesses about the B vector (this is why you get resonance in the first place). However, on average there is a net component of the magnetic moment aligned with the B field. This is what gives rise to the change in energy - aligned or misaligned you will have either a drop or increase in energy.
It is this "average" magnetization $M_0$ that is equal to the component of the magnetic field along Z (the traditional direction of the magnetic field in an MRI). Yes - the X and Y component are on average zero, especially if you average over time (because of the precession of the individual nuclei).
As for your second point - T1 is called the spin-lattice relaxation as it refers to the way that the spin of a proton reverts to the "mean for the system"; in other words, when left alone the system will return to a certain number of up and down spins (net magnetization) which is different than the magnetization it had after the RF stimulus. So it is really "any motion" of the surrounding material at the right frequency - anything that can cause the spin transition. As your notes state, the higher the density of motions at the Larmor frequency, the shorter T1 (the more chances that the spin of an individual proton will be flipped).
Finally - transverse magnetization happens when the protons are all in phase, which you force by the RF pulse. As the protons experience slightly different local fields, they will end up precessing at slightly different phase which results in the loss of transverse magnetization. This is the T2 mechanism.
As was pointed out by CuriousOne, it is important to remember that the nuclear spins are subject to thermal equilibrium - there is only a small energy difference between the up and down states, so there will only be a small net magnetization (Boltzmann at work). There is something called hyperpolarization - a mechanism whereby nuclei (for example C13) are cooled to very low temperatures (below liquid helium, I believe) after which they can be hyper polarized - with the consequence that for a short time they exhibit a very strong MRI signal (up to $10^5\times$ stronger than at room temperature). Add to this the fact that C13 has a resonance at a different frequency than protons or any other nuclei in the human body, and you can briefly visualize organic molecules made with this technique with exquisite sensitivity (signal to noise ratio). I believe this is now used to image physiological processes such as cardiac or tumor metabolism (choline, acetate etc) in vivo without using ionizing radiation - an exciting new frontier in MRI. See for example this press release | {
"domain": "physics.stackexchange",
"id": 16679,
"tags": "nuclear-physics, magnetic-fields, resonance"
} |
Is it possible to get a better performance using memoization? Array algorithm | Question: I have a working solution for the problem below. I have the feeling that it could be improved using memoization, but I cannot see how to do it.
The problem:
You are given an array arr of N integers. For each index i, you are required to determine the number of contiguous subarrays that fulfill the following conditions:
The value at index i must be the maximum element in the contiguous subarrays, and
These contiguous subarrays must either start from or end on index i.
Signature
int[] countSubarrays(int[] arr)
Input
Array arr is a non-empty list of unique integers that range between 1 to 1,000,000,000
Size N is between 1 and 1,000,000
Output
An array where each index i contains an integer denoting the maximum number of contiguous subarrays of arr[i]
Example:
arr = [3, 4, 1, 6, 2]
output = [1, 3, 1, 5, 1]
Explanation:
For index 0 - [3] is the only contiguous subarray that starts (or ends) with 3, and the maximum value in this subarray is 3.
For index 1 - [4], [3, 4], [4, 1]
For index 2 - [1]
For index 3 - [6], [6, 2], [1, 6], [4, 1, 6], [3, 4, 1, 6]
For index 4 - [2]
So, the answer for the above input is [1, 3, 1, 5, 1]
My solution (Is it O(n^2) time complexity?):
function countSubarrays(arr) {
// Write your code here
if(arr.length === 0) return [];
if(arr.length === 1) return [1];
const checkLeft = (index) => {
let count = 0;
for(let i=index-1; i>=0; i--) {
if(arr[i] < arr[index]) {
count++;
} else
break;
}
return count;
}
const checkRight = (index) => {
let count = 0;
for(let i=index+1; i<arr.length; i++) {
if(arr[i] < arr[index]) {
count++;
} else
break;
}
return count;
}
const output = [];
for(let i=0; i<arr.length; i++) {
output.push(1 + checkLeft(i) + checkRight(i))
}
return output;
}
Answer: Starting from your main question (Is it O(n^2) time complexity?), the answer is yes, because for every element of the array you are looking at elements to the left and to the right of it, so the complexity of your algorithm is quadratic. You can erase the if(arr.length === 0) return []; line from your code because it is guaranteed that the array is never empty.
It is possible to reach linear complexity O(n) using an auxiliary structure like a stack and iterating twice over the elements of your array, the first time from the beginning:
function countSubarrays(arr) {
const length = arr.length;
const result = new Array(length).fill(1);
let stack = [];
for (let i = 0; i < length; ++i) {
while(stack.length && arr[stack[stack.length - 1]] < arr[i]) {
result[i] += result[stack.pop()];
}
stack.push(i);
}
//code for the reversed loop I'adding after
}
Taking your [3, 4, 1, 6, 2] array example, I create an initial [1, 1, 1, 1, 1] result array and obtain the [1, 2, 1, 4, 1] result array, where each entry equals the number of qualifying contiguous subarrays ending at index i (the left side, including the element itself).
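A quick Python sketch of this left-to-right pass (an illustrative port; the function name is made up) reproduces that intermediate result:

```python
def left_counts(arr):
    # monotonic-stack pass: when a smaller element is popped, its
    # accumulated count is absorbed, giving the number of qualifying
    # subarrays that end at index i (including arr[i] itself)
    res = [1] * len(arr)
    stack = []
    for i in range(len(arr)):
        while stack and arr[stack[-1]] < arr[i]:
            res[i] += res[stack.pop()]
        stack.append(i)
    return res

print(left_counts([3, 4, 1, 6, 2]))   # [1, 2, 1, 4, 1]
```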
The same algorithm can be applied to calculate the right maximum number of contiguous subarrays of arr[i], subtracting the element itself in a reverse cycle, arriving at the final code:
function countSubarrays(arr) {
const length = arr.length;
const result = new Array(length).fill(1);
let stack = [];
for (let i = 0; i < length; ++i) {
while(stack.length && arr[stack[stack.length - 1]] < arr[i]) {
result[i] += result[stack.pop()];
}
stack.push(i);
}
stack = [];
let tmp = new Array(length).fill(1);
for (let i = length - 1; i >= 0; --i) {
while((stack.length) && (arr[stack[stack.length - 1]] < arr[i])) {
tmp[i] += tmp[stack.pop()];
}
stack.push(i);
result[i] += (tmp[i] - 1);
}
return result;
}
console.log(countSubarrays([3, 4, 1, 6, 2])); | {
"domain": "codereview.stackexchange",
"id": 42199,
"tags": "javascript, array"
} |
Hamming distance | Question: I am wondering how I can improve the runtime of this code. I was solving this task at leetcode, and came up with this solution. But I got the result that my runtime is 88 ms, which beats a bit more than 79% of other submissions, so I wonder what else could be improved in this code.
var hammingDistance = function(x, y) {
const startPoint = x > y ? x : y;
let count = 0;
for (let i = Math.floor(Math.log2(startPoint)); i >= 0; i--) {
xPositiveBit = x/Math.pow(2, i) >= 1;
yPositiveBit = y/Math.pow(2, i) >= 1;
if (xPositiveBit) {
x = x%Math.pow(2, i);
}
if (yPositiveBit) {
y = y%Math.pow(2, i);
}
if (( xPositiveBit || yPositiveBit ) && !( xPositiveBit && yPositiveBit )) {
count++
}
}
return count;
};
Answer: Performance review.
There are some obvious performance problems that can be fixed.
Style and performance.
Use strict mode. Strict mode gives code that runs slightly quicker than code not in strict mode, because the engine does not have to test and vet many common bad practices.
Declare all variables. In strict mode all variables need to be declared or the code throws an error. The vars xPositiveBit and yPositiveBit are undeclared and thus become global variables.
Global variables are significantly slower than local function scope variables. This is because of the way the engine searches for variables, it starts at the current scope and if the var can not be found it moves up one level of scope. It does this until the var is found. This slows down access to global variables.
Don't repeat calculations. You calculate the bit position you are testing up to 4 time per loop. Math.pow(2, i) can be done only once per loop. (Note that different JS contexts will have different optimisation strategies, some will spot the repeated calculation and store the result rather than recalculate)
Using these methods we get a tiny improvement with the following
Solution A
function hammingDistance2(x, y) {
"use strict";
var i;
var count = 0;
for (i = Math.floor(Math.log2(x > y ? x : y)); i >= 0; i--) {
const pow = Math.pow(2, i)
const xPositiveBit = x / pow >= 1;
const yPositiveBit = y / pow >= 1;
if (xPositiveBit) { x = x % pow }
if (yPositiveBit) { y = y % pow }
if ((xPositiveBit || yPositiveBit) && !(xPositiveBit && yPositiveBit)) {
count++
}
}
return count;
};
Getting a benchmark of 4.4 compared to the original 4.6 which is about a ~4% performance increase. Not much but every bit helps.
Smarter logic
If you look at the logic, each loop you are testing the bit at the location defined by Math.pow(2, Math.floor(Math.log2(x > y ? x : y))), shifted right depending on the loop count.
If we consider that the input values are limited to only positive doubles less than ((2^31) -1) you can use bitwise operators to give a major improvement in performance.
The following code benchmarks 2.0 compared to the original 4.6 so that is a massive improvement of ~57% and would bring your 88ms down to ~38ms But remember that has a limited input range.
Solution B
function hammingDistance(x, y) {
"use strict";
var xPos, yPos;
var count = 0;
var i = Math.pow(2, Math.log2(x > y ? x : y) | 0);
while (i > 0) {
xPos = x / i >= 1;
yPos = y / i >= 1;
if (xPos) { x %= i }
if (yPos) { y %= i }
if ((xPos || yPos) && !(xPos && yPos)) { count++ }
i >>= 1;
}
return count;
};
Note that benchmarking was on a set of random values in the range 2^27 to 2^29 selected such as to amplify the performance benefits of optimising code inside the loop. Testing the code on a larger range 0 to 2^29 reduces the performance improvements with the first snippet's improvements having error bars overlapping the original, in other words too close to call. However the second snippet kept the same performance increase mainly due to using | 0 to floor the value of i outside the loop.
All benchmarking on Firefox.
Update
Don't trust leetCode performance results.
At the time of writing the answer I had no clue what the hamming distance was. Now that I know it is just the number of bits that are different the solution is very simple. Count the on bits when you xor x and y.
So I came up with Solution C
var hammingDistance = function(x, y) {
return (x ^ y).toString(2).replace(/0/g,"").length;
}
And submitting to leetCode it got a lower score than using a loop, being only ahead of 73% of submissions ???
Then I tried
var hammingDistance = function(x, y) {
var ones = (x ^ y).toString(2).match(/1/g);
return ones ? ones.length : 0; // ones is null if no matching 1s
}
This only gained a slight advantage, ahead of 82%.
I then looked to see what code gave the best result, it was almost identical????
Code from leetCode best answer in terms of performance.
var hammingDistance = function(x, y) {
var xor = x ^ y;
var str = xor.toString(2);
var match = str.match(/1/g);
return match ? match.length : 0 ;
};
So I submitted that as an answer and it bombed big time being only ahead of 52% of submissions.
NOTE Your (OP) code runs 33% faster than the above.
I used my own benchmarker to test the different functions and the code I first gave (Solution B) running much faster than the updated solutions.
Inconsistencies
I have zero trust in the leetCode results, they are completely inconsistent with proven benchmarking results and even with their own results. I would guess there is a bug in their timing solution (JS is notoriously hard to benchmark) so be happy for a pass, and give no credence to the performance as that is based on luck.
Best JS solution I could come up with.
So the updated best performance as tested on an independent benchmarker is the following snippet run in strict mode, outperforming all the above by an order of magnitude. Benchmarked 0.1 compared to the next best at 2.0 (Solution B) and OP's 4.6.
33 times faster than OP's original.
function hammingDistanceA(x,y){
var xor = x ^ y;
var count = 0;
while(xor > 0){
count += xor & 1;
xor >>= 1;
}
return count;
}
Which on leet got above 91% on the first submission and on the second submission only managed to get ahead of 43% | {
"domain": "codereview.stackexchange",
"id": 28483,
"tags": "javascript, bitwise, edit-distance"
} |
Combinations with replacement | Question: I am still new to Scala and wrote a small snippet to find all the combinations with replacement of a sequence (e.g. cwr(ab, 3) should give aaa, aab, abb, bbb).
The slow way would be to generate all permutations, sort them, and throw out duplicates. I'd like to know if this code is idiomatic Scala style, and if I'm making any bad performance mistakes.
def sortedRanges(lo: Int, hi: Int, reps: Int) : List[List[Int]] =
if (reps == 0) List(Nil) else (
List.range(lo,hi).flatMap(
(x:Int) => (sortedRanges(x,hi,reps-1).map((xs:List[Int]) => x :: xs ))
)
)
def combinations_with_replacement[T](stuff: List[T], reps: Int) : List[List[T]] =
sortedRanges(0,stuff.length, reps).map( (xs:List[Int]) => xs.map(stuff(_)))
Answer: Here are the changes I would make to your code in terms of both style and efficiency:
def sortedRanges(lo: Int, hi: Int, reps: Int): List[List[Int]] = reps match {
case 0 => Nil :: Nil
case _ =>
List.range(lo, hi) flatMap (x =>
sortedRanges(x, hi, reps - 1) map (xs =>
x :: xs))
}
def combinationsWithReplacement[T](stuff: Vector[T], reps: Int): List[List[T]] =
sortedRanges(0, stuff.length, reps) map (xs =>
xs map (x =>
stuff(x)))
Style Comments
Methods which take functions as parameters (flatMap and map in this case) should be invoked using infix notation. That is, if there were a god of Idiomatic Scala Style he would prefer xs map (...) to xs.map(...)
Defined functions should use camel case, e.g. combWithRep over comb_with_rep.
In general, and you will get a feel for this the more you use Scala, pattern matching is preferred over if-statements. You will notice that in your function sortedRanges I've swapped your if-else statement for a pattern match on the value reps.
The last style tip I have for you is to break a chain of higher order functions over several lines. This last one can be fudged in some cases, but in general I find that it improves readability.
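As a side note, readers who want to sanity-check the recursion itself can trace the same index-based approach in this short Python sketch (my translation for illustration, not a Scala style recommendation):

```python
def sorted_ranges(lo, hi, reps):
    """All non-decreasing index lists of length reps drawn from range(lo, hi)."""
    if reps == 0:
        return [[]]
    return [[x] + xs
            for x in range(lo, hi)
            for xs in sorted_ranges(x, hi, reps - 1)]

def combinations_with_replacement(stuff, reps):
    """Map each non-decreasing index list back onto the input elements."""
    return [[stuff[i] for i in idxs]
            for idxs in sorted_ranges(0, len(stuff), reps)]
```

For example, combinations_with_replacement(["a", "b"], 3) yields aaa, aab, abb, bbb, matching the example in the question.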
Efficiency Comment
The one change that I made in the name of efficiency was to change the container type of stuff from List[T] to Vector[T]. The reason I made this change is that accessing an item in a List takes linear time on the length of the list, whereas accessing an item in a Vector is almost constant. And as you know, you are accessing elements of stuff by index in the last line of combinationsWithReplacement. | {
"domain": "codereview.stackexchange",
"id": 9156,
"tags": "beginner, scala, combinatorics"
} |
Are particles merely "relatively stable" patterns that can appear on their respective fields? | Question: With quantum field theory, particles are seen as excitations on various fields. Am I mistaken to think that particles, then, merely refer to "relatively stable" patterns that can appear on these fields? I assume the answer is yes, and in that case I have a couple questions: are interactions depicted in Feynman diagrams approximations of what actually happens in the involved fields? And is that why individual nucleons can't be perceived as being "neatly compartmentalized" inside the nucleus, if that even is true, in that if you were able to precisely take a look at the field, you wouldn't necessarily be able to recognize individual patterns corresponding to each individual nucleon?
Answer: There is no such thing as "looking at a quantum field", and particles aren't just "relatively stable patterns". You're thinking about this with a classical intuition (that there are things and that they have unambiguous properties and that you can look at them), but classical intuition does not apply in the quantum realm, and indeed intuition as such is a hard tool to master in the context of quantum field theory - we need to retrain our intuition to conform to what the theory says, not try to twist the theory into fitting the classical world in which our intuition was formed.
Fields - both classical and quantum - should not necessarily be imbued with the idea that they are some sort of substance we could look at, see this question and its answers for a longer discussion of the ontology of fields. Physics, especially quantum physics, provides a mathematical model of the world that allows us to predict what we will observe, but it does not necessarily select a specific ontology - no unique idea of "what there really is", whatever that means. There is nothing observable about "the electron field" except electrons and positrons, we cannot "look" at this field in any other way than the particles and processes we associate with it.
It is, as far as we know (cf. searches for proton decay), a fact that an isolated proton is infinitely stable, and so is a single photon that doesn't have anything else to interact with, or a single electron. There's nothing relative about this, and nothing approximate. Some particles are stable, others aren't, but this has nothing to do with them being "excitations" - insofar as it is meaningful to say that particles are excitations of fields, all particles are such excitations, stable or not. The impossibility of separating a nucleon like the proton into neat smaller constituents is due to the strongly interacting nature of quantum chromodynamics, which makes the perturbative approach in which the particles we associated with free fields are a good approximation impossible. See this question and its linked questions for more discussion of the internal structure of hadrons.
The problem here is what it actually means to say that a particle is "an excitation of a field". All physicists agree about the technical sense - the modes of a free field become creation/annihilation operators during quantization and then construct particle states - but there is no necessary implication between that technical sense and the vague idea of a particle as some sort of wave in a material field that seems to be implied by your question. See also this question and its answers for a longer discussion of the sense in which quantum fields "oscillate" or get "excited". The problem is, again, imbuing a formal description with intuitive ontological weight it does not actually carry within itself. | {
"domain": "physics.stackexchange",
"id": 90953,
"tags": "quantum-field-theory, particle-physics"
} |
roswtf and rpath problems in fuerte | Question:
I recently noticed that running roswtf on every package that I have produces errors related to rpath and the linker. For example, following the tutorial on creating ros packages here you create a new package by running the command
roscreate-pkg beginner_tutorials std_msgs rospy roscpp
Then if I cd to the beginner_tutorials directory, and run roswtf, I get the following error:
Loaded plugin tf.tfwtf
Package: beginner_tutorials
================================================================================
Static checks summary:
Found 1 error(s).
ERROR The following packages have rpath issues in manifest.xml:
* beginner_tutorials: found flag "-L/opt/ros/fuerte/lib", but no matching "-Wl,-rpath,/opt/ros/fuerte/lib"
* std_msgs: found flag "-L/opt/ros/fuerte/lib", but no matching "-Wl,-rpath,/opt/ros/fuerte/lib"
* roscpp: found flag "-L/opt/ros/fuerte/lib", but no matching "-Wl,-rpath,/opt/ros/fuerte/lib"
The question here seems to indicate that I should not be using -L commands within the export tag. I don't have any references to export or cpp in any of the manifests that produce these errors.
Basically every package I have written produces these same errors. What is causing this? What else should I be including in my manifest.xml?
EDIT
Adding the following line to the manifest (per manifest documentation) does not improve anything:
<export>
<cpp cflags="-I${prefix}/include" lflags="-L${prefix}/lib -Wl,-rpath,${prefix}/lib -lros"/>
</export>
Explicitly specifying the -rpath directory using
-Wl,-rpath,${prefix}/lib,-rpath,/opt/ros/fuerte/lib
Removes the error for the beginner_tutorials package. But I am still getting errors for std_msgs, and roscpp. Is there a bug in roswtf? Should there be something else in the manifests for these packages?
Clearly, this isn't causing much of a real problem, but the "ERROR" note is definitely unsettling.
Originally posted by jarvisschultz on ROS Answers with karma: 9031 on 2012-07-05
Post score: 2
Original comments
Comment by jarvisschultz on 2012-07-17:
Note, I just found this link from ros-users that explains -rpath and -Wl very well. Although, my question still stands, is there a problem with roswtf?
Answer:
Maybe this documentation will help.
EDIT: The export changes need to be made in the beginner_tutorials, std_msgs and roscpp packages.
To get that fixed, someone needs to open defect tickets for those packages.
Originally posted by joq with karma: 25443 on 2012-07-05
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by jarvisschultz on 2012-07-17:
Created a ticket here. Thanks for the help! | {
"domain": "robotics.stackexchange",
"id": 10060,
"tags": "ros, manifest.xml, roswtf"
} |
Are covariant vectors representable as row vectors and contravariant as column vectors | Question: I would like to know what are the range of validity of the following statement:
Covariant vectors are representable as row vectors. Contravariant
vectors are representable as column vectors.
For example we know that the gradient of a function is representable as row vector in ordinary space $ \mathbb{R}^3$
$\nabla f = \left [ \frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z} \right ]$
and an ordinary vector is a column vector
$ \mathbf{x} = \left[ x_1, x_2, x_3 \right]^T$
I think that this continues to be valid in special relativity (Minkowski metric is flat), but I'm not sure about it in general relativity.
Can you provide me some examples?
Answer: Yes, the statement holds true in general relativity as well. However, as we need to deal with tensors of higher and in particular mixed order, the rules of matrix multiplication (which is where the idea of the representation via row- and column-vectors comes from) are no longer sufficiently powerful:
Instead, the placement of the index determines if we are dealing with a contravariant (upper index) or a covariant (lower index) quantity.
Additionally, by convention an index which occurs in a product in both upper and lower position gets contracted, and equations must hold for all values of free indices.
If the given metric is non-Euclidean (which is already true in special relativity), mapping between co- and contravariant quantities is more involved than simple transposition and the actual values of the components in a given basis can change, eg:
$$
p^\mu = (p^0,+\vec p)\\
p_\mu = (p^0,-\vec p)
$$
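A quick numerical check of this sign flip, with the metric contraction written out explicitly (the component values are mine, and the signature (+,-,-,-) is assumed):

```python
# Minkowski metric with signature (+,-,-,-), as a plain nested list
g = [[1, 0, 0, 0],
     [0, -1, 0, 0],
     [0, 0, -1, 0],
     [0, 0, 0, -1]]

p_upper = [2.0, 1.0, 0.5, -0.3]   # contravariant components p^mu

# p_mu = g_{mu nu} p^nu: sum over the repeated index nu
p_lower = [sum(g[mu][nu] * p_upper[nu] for nu in range(4)) for mu in range(4)]
# p_lower comes out as (p^0, -p_vec), i.e. the spatial components flip sign
```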
and in general:
$$
p_\mu = g_{\mu\nu}p^\nu
$$
where $g_{\mu\nu}$ denotes the metric tensor and a sum $\nu=1\dots n$ is implied. | {
"domain": "physics.stackexchange",
"id": 4675,
"tags": "general-relativity, vectors, notation, tensor-calculus"
} |
How can I calculate the shortest path between two 2d vector points in an environment with obstacles? | Question: I have a 2D plane, with a fixed height and width of 10M. The plane has an agent (or robot) in the point $(1, 2.2)$, and an electric outlet in the point $(8.2, 9.1)$. The plane has a series of obstacles.
Is there an algorithm to find the shortest path between the agent and the goal?
And what if the agent has a fixed wingspan? For example, if the space between obstacles O and N is smaller than the agent, then the agent cannot pass between them.
Answer: The usual way to solve this kind of problem is to construct a configuration space: extruding all the polygonal obstacles by sliding the polygon corresponding to the robot around them (some slides).
The exterior vertices of the configuration space can then be used as input to a path-planning algorithm, such as A*. | {
"domain": "ai.stackexchange",
"id": 1032,
"tags": "search, robotics, a-star, shortest-path-problem, path-finding"
} |
Filtering a collection by an async result | Question: I'm trying to do something like this:
var loudDogs = dogs.Where(async d => await d.IsYappyAsync);
The "IsYappyAsync" property would return a Task<bool>.
Obviously this isn't supported, so instead I've built an extension method called WhereAsync.
public static async Task<IEnumerable<T>> WhereAsync<T>(this IEnumerable<T> items, Func<T, Task<bool>> predicate)
{
var results = new List<T>();
var tasks = new List<Task<bool>>();
foreach (var item in items)
{
var task = predicate.Invoke(item);
tasks.Add(task);
}
var predicateResults = await Task.WhenAll<bool>(tasks);
var counter = 0;
foreach (var item in items)
{
var predicateResult = predicateResults.ElementAt(counter);
if (predicateResult)
results.Add(item);
counter++;
}
return results.AsEnumerable();
}
This probably isn't the best approach, but I'm at a loss for something better. Any thoughts?
Answer: There are several ways to achieve what you're after and it depends on whether you want the results drip fed to you as they're available or whether you're happy to have them all in one bang.
The way you've implemented your method gives it all in one bang - which is fine.
A shorter implementation could be
public static async Task<IEnumerable<T>> WhereAsync2<T>(this IEnumerable<T> items, Func<T, Task<bool>> predicate)
{
var itemTaskList = items.Select(item=> new {Item = item, PredTask = predicate.Invoke(item)}).ToList();
await Task.WhenAll(itemTaskList.Select(x=>x.PredTask));
return itemTaskList.Where(x=>x.PredTask.Result).Select(x=>x.Item);
}
1. It builds a list of an anonymous type where the type contains the item and the Task.
2. It then waits for all of the tasks to complete.
3. It then goes through that list from (1) and picks out the items that had a true result from the Task.
The other advantage is that you get rid of all of that counter and ElementAt() stuff. Either way still builds a complete List for the items in the given Enumerable...
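The same start-everything / await-everything / filter pattern can also be sketched in Python's asyncio, which may help readers coming from other languages (an illustrative analogue, not the C# API):

```python
import asyncio

async def where_async(items, predicate):
    """Start every predicate task up front, await them all together,
    then keep the items whose task returned True (order preserved).
    items must be a sequence, as it is traversed twice."""
    results = await asyncio.gather(*(predicate(item) for item in items))
    return [item for item, keep in zip(items, results) if keep]

async def is_yappy(dog):          # stand-in async predicate
    await asyncio.sleep(0)        # simulate asynchronous work
    return dog.endswith("!")

loud = asyncio.run(where_async(["Rex!", "Fido", "Spot!"], is_yappy))
```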
Aside: You could investigate something like Rx if you wanted the results via an IObservable<T> instead. The result wouldn't even be async Task - it would just be an IObservable. | {
"domain": "codereview.stackexchange",
"id": 4656,
"tags": "c#, linq, task-parallel-library, async-await"
} |
Why do mutations not take place in mRNA of higher eukaryotes? | Question: Is it because it is too short-lived to be mutated? Both DNA and RNA are nucleic acids so how is mRNA protected? RNA viruses undergo mutations to evolve so I guess it is not immune to mutations
Answer: The premise of the question suggests that mutations cannot take place in the mRNAs of higher eukaryotes.
To answer your question I think it is important to consider two viewpoints:
First, from a theoretical point of view, since DNA and RNA are, as you pointed out, both composed of nucleic acids, they can both be mutated if enough energy is provided (UV light, chemicals, etc.), which invalidates the premise of the question.
Now, from a practical point of view, as you mentioned, most mRNA molecules have a short half-life, typically in the minute-to-day range, whereas DNA molecules exist for the whole lifetime of the organism.
While mRNAs can be mutated, people are not interested in studying this aspect for the following reasons:
As you pointed out in the comment section, a mutation in an mRNA molecule might lead to a malformed translated protein, which can easily be degraded. It will be only one transcript among thousands. With their short half-lives, the mRNA and its protein will be degraded, so the mutation will not have a long-standing impact on the cell/organism. As such it will be very difficult to observe a phenotype which affects the whole host.
Thus RNA mutations have only transient effects which will not affect the host in the long term. RNA molecules are not more protected than DNA molecules; they are just short-lived, so the host is protected from the effects of RNA mutations.
Hope this helps! | {
"domain": "biology.stackexchange",
"id": 10050,
"tags": "molecular-biology, dna, rna, mutations"
} |
Huygens' principle in curved spacetimes | Question: Does Huygens' principle hold in even dimensional (2m+1,1) curved spacetimes, or are there certain necessary conditions for it to hold? In other words, if I have Cauchy data for a field satisfying the wave equation on curved space, does the field value at a point only depend on the intersection of the past light cone with the Cauchy surface?
In addition, what are the physical implications in cases when Huygens' principle fails, both in odd dimensional flat space and curved spacetimes? Are there complications with the Cauchy problem or notable physical phenomena other than wave tails? I would be interested in implications for electromagnetic and gravitational radiation.
Answer: It generally does not work in curved spacetime. There is a quite thick book almost completely devoted to study this issue by P. Günther: Huygens' Principle and Hyperbolic Equations.
Some discussions can be found in Friedlander's book about the wave equation in curved spacetime. A necessary condition for the validity of the Huygens principle is that the spacetime be an Einstein space. For Ricci-flat spacetimes there are only two cases, one is Minkowski spacetime the other is a space containing plane gravitational waves.
There are also implications regarding the characteristic Cauchy problem... | {
"domain": "physics.stackexchange",
"id": 15356,
"tags": "general-relativity, waves, electromagnetic-radiation, interference, huygens-principle"
} |
Finding alternating sequence in a list of numbers | Question: Please be brutal and treat this as me coding this up for an interview.
A sequence of numbers is called a zig-zag sequence if the differences between successive numbers strictly alternate between positive and negative. The first difference (if one exists) may be either positive or negative. A sequence with fewer than two elements is trivially a zig-zag sequence.
For example, 1,7,4,9,2,5 is a zig-zag sequence because the differences (6,-3,5,-7,3) are alternately positive and negative. In contrast, 1,4,7,2,5 and 1,7,4,5,5 are not zig-zag sequences, the first because its first two differences are positive and the second because its last difference is zero.
Given a sequence of integers, sequence, return the length of the longest subsequence of sequence that is a zig-zag sequence. A subsequence is obtained by deleting some number of elements (possibly zero) from the original sequence, leaving the remaining elements in their original order.
More examples:
{ 1, 7, 4, 9, 2, 5 }
Returns: 6
The entire sequence is a zig-zag sequence.
{ 1, 17, 5, 10, 13, 15, 10, 5, 16, 8 }
Returns: 7
There are several subsequences that achieve this length. One is 1,17,10,13,10,16,8.
{ 44 }
Returns: 1
{ 1, 2, 3, 4, 5, 6, 7, 8, 9 }
Returns: 2
{ 70, 55, 13, 2, 99, 2, 80, 80, 80, 80, 100, 19, 7, 5, 5, 5, 1000, 32, 32 }
Returns: 8
{ 374, 40, 854, 203, 203, 156, 362, 279, 812, 955, 600, 947, 978, 46, 100, 953, 670, 862, 568, 188, 67, 669, 810, 704, 52, 861, 49, 640, 370, 908, 477, 245, 413, 109, 659, 401, 483, 308, 609, 120, 249, 22, 176, 279, 23, 22, 617, 462, 459, 244 }
Returns: 36
Worst case: \$O(n^2)\$
Space Complexity: \$O(n)\$
private static int longestAlternatingSequence(int[] values){
if(values.length == 1){
return 1;
}
int[] difference = new int[values.length-1];
for(int i = 1; i < values.length; i++){
difference[i-1] = values[i] - values[i-1];
}
int[] calculationsCache = new int[difference.length];
calculationsCache[0] = 1;
int max = Integer.MIN_VALUE;
for(int i = 1; i < difference.length; i++){
if(difference[i] > 0){
for(int j = 0; j < i; j++){
if(difference[j] < 0){
max = Math.max(max, calculationsCache[j]);
}
}
}else if(difference[i] < 0){
for(int j = 0; j < i; j++){
if(difference[j] > 0){
max = Math.max(max, calculationsCache[j]);
}
}
}else{
max = 0;
}
calculationsCache[i] = max + 1;
}
max = Integer.MIN_VALUE;
for(int value : calculationsCache){
max = Math.max(max, value);
}
return max + 1;
}
Answer: Basically, your approach solves the problem adequately, and I have nothing to say about the existing code in terms of style.
However, I think the general approach might be too costly, in terms of time and space requirements (even though I admit that there is no spec).
But since we should treat it as an interview question, then a potential interviewer could ask you
how to approach the problem more efficiently: for example, how could you process the array in one pass only?
So here is a possible answer.
Algorithm's complexity
As you said, your current implementation requires:
linear storage requirement
quadratic execution time
I have an alternative approach, using constant memory and linear time, which is based on an incremental algorithm: for each element of the array, you can determine the longest zig-zag subsequence that can be built so far.
I prototyped the solution using Common Lisp, so here is my version (this could be translated easily as a simple for loop in Java, but this is left as an exercise ;-))
(defun max-zig-zag (sequence)
(loop
for last = nil then current
for current in sequence
for sign = 0 then (signum (- current last))
for match = t then (or (eql await 0) (eql await sign))
for await = 0 then (if match (- sign) await)
count match))
This approach makes all your test examples pass:
(loop for (expected sequence) in
'((6 (1 7 4 9 2 5))
(7 (1 17 5 10 13 15 10 5 16 8))
(1 (44))
(2 (1 2 3 4 5 6 7 8 9))
(8 (70 55 13 2 99 2 80 80 80 80 100 19 7 5 5 5 1000 32 32))
(36 (374 40 854 203 203 156 362 279 812 955 600 947 978 46 100 953 670 862 568 188 67 669 810 704 52 861 49 640 370 908 477 245 413 109 659 401 483 308 609 120 249 22 176 279 23 22 617 462 459 244)))
always (eql expected (max-zig-zag sequence)))
=> T
I acknowledge that this might not be easy to follow through, so I'll try to write a step-by-step explanation.
Main idea
The difficulty in your question is that you allow some elements to be deleted (or, ignored). We are looking for sequences with maximum length, however.
The elements that can be ignored are those that do not alternate from previous ones: when you encounter a 2, and then the number 4, you can discard all the following elements that are greater than, or equal to, 4, without reducing the potential length of the subsequence: all numbers above 3 are just intermediate values.
The suggested approach is a greedy count of all the alternations of slopes in the array (*).
It is built around a fixed number of variables that are updated at each element, during a unique iteration of the array; in fact, the input array could be an infinite stream of inputs, whereas the count value could be an infinite stream of integers that is updated for each input (and growing when needed).
Derivative
So, the main idea is to look at the shape, or the derivative of your numbers: you take the current value, and the last value, when iterating over the array, and you compute the difference: that gives you the slope.
However, what is interesting is only the sign of this slope: the sign variable always contains the sign of the difference between the current and last element, so that a 1 means that the current value is higher, and -1 means that it is lower than the last one.
Matching zig-zags
Initially, you await for any of those sign, either -1 or 1 (which is encoded by zero in the await variable).
The first time that sign differs from zero you have a match: you encountered a positive or negative variation of value; so, the new value of await is now the opposite of the current sign: if we saw a positive slope, like 1, we must read until there is a negative one.
The match variable is initially true, and is otherwise true only when the sign variable equals the await variable (or, if await is zero, during the initialization phase).
When match is false, we skip entries and let await keep its previous value.
When true, await takes the opposite of current sign, which means that we await for a change in the slope.
Then, match is true whenever we go up after having gone down, and respectively down after having gone up, discarding all continuous sequences of decreasing (or increasing) elements.
Finally, we count the number of matches to have the longest subsequence.
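For readers who do not speak Lisp, here is my line-for-line Python translation of the same data-flow loop (variable names are mine):

```python
def max_zig_zag(seq):
    """One-pass greedy count of slope alternations.
    await_sign holds the sign of the difference we are waiting for (0 = any)."""
    count = 0
    await_sign = 0
    last = None
    for current in seq:
        # signum of the difference; the first element always matches
        sign = 0 if last is None else (current > last) - (current < last)
        if await_sign == 0 or await_sign == sign:
            await_sign = -sign
            count += 1
        last = current
    return count
```

It reproduces the test cases from the question, e.g. max_zig_zag([1, 7, 4, 9, 2, 5]) returns 6 and max_zig_zag([1, 2, 3, 4, 5, 6, 7, 8, 9]) returns 2.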
(*) It is based on a data-flow approach (see Lustre, Signal, ...). In fact, the Common Lisp code above could probably easily be rewritten using the SERIES package. | {
"domain": "codereview.stackexchange",
"id": 7211,
"tags": "java, algorithm, interview-questions, dynamic-programming"
} |
How to run XGBoost regressor using reg:tweedie as objective? | Question: I installed XGBoost for anaconda on windows 10 based on the instructions provided here. It seems that xgboost 0.6 is already installed. It performs well using "reg:linear". However, if I use "reg:tweedie", this error is reported:
xgboost.core.XGBoostError: b'[23:08:27] src/objective/objective.cc:21: Unknown objective function reg:tweedie'
Answer: As I understand it, it is a versioning issue. However, it should be noted that before installing the latest version, the old version should be manually removed. Then, try to install the new version. | {
"domain": "datascience.stackexchange",
"id": 1354,
"tags": "python, regression, xgboost, python-3.x, anaconda"
} |
Is molecular weight of proteins additive? | Question: I have an elementary question regarding calculation of molecular weight of a complex of two proteins. Essentially, I have two proteins binding as
$$\ce{A + B <=> AB}$$
I have the concentration of [A] and [B] in ng/ml, which I can convert to nM using its molecular weight (available in kDa). I have only concentration of [AB] in ng/ml, but I am not able to convert that into nM. Is the assumption that molecular weight of [AB] is equal to sum of molecular weights of [A] and [B] correct assumption?
I am not able to see the intuition behind this logic, and if someone can help me here, I would be very thankful.
Answer: The molecular mass in Dalton, amu or whichever unit you prefer is equivalent to the molar mass $M$ in $\mathrm{g/mol}$ that chemists prefer to use for this discussion. Both are strictly additive. The molar mass of a larger complex is the sum of the molar masses of all its components.
This is because of how the unit is defined. Focussing on molecular mass, we could see it as the mass of a molecule. The mass of a molecule is put together by adding the masses of the atoms that this molecule is made up of. If that molecule now forms larger aggregates, the mass of the resulting thing (still a single ‘thing’ albeit larger) is the sum of the masses of its constituents. And so on.
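As a worked example of the conversion described in the question (the 50 kDa and 25 kDa masses below are hypothetical, chosen only for round numbers):

```python
def ng_per_ml_to_nM(conc_ng_per_ml, molar_mass_kDa):
    # 1 ng/ml = 1e-6 g/L and 1 kDa = 1e3 g/mol, so the powers of ten
    # cancel to leave the result directly in 1e-9 mol/L = nM
    return conc_ng_per_ml / molar_mass_kDa

# hypothetical masses: A = 50 kDa, B = 25 kDa, so the complex AB = 75 kDa
mass_AB_kDa = 50.0 + 25.0
conc_AB_nM = ng_per_ml_to_nM(150.0, mass_AB_kDa)   # 150 ng/ml of AB -> 2 nM
```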
You just need to beware in case you have aggregates that are not strictly $\ce{AB}$. Consider two proteins that form an $\ce{AB2}$ complex: In this case $M(\ce{AB2}) = M(\ce{A}) + 2 \cdot M(\ce{B})$, because we have two $\ce{B}$ in the final complex. | {
"domain": "chemistry.stackexchange",
"id": 5746,
"tags": "concentration"
} |
Compression Library for C using Huffman Coding | Question: This is an update to a question I posed nearly two years ago about my implementation of Huffman Coding which I had written in C. Since then I have had time to grow as a programmer and have managed to integrate most, if not all, of the suggestions given to me then and I am looking for fresh feedback on the current version of my code.
Let's begin with a high level look at the internals of the library. The library is very simple to use and consists of two interface functions, huffman_encode() and huffman_decode().
Encoding Overview
huffman_encode() begins by performing a frequency analysis of the bytes in the input, from which it generates a binary Huffman Tree, and in turn generates an encoding table to allow for fast compression. Once this is complete, it writes all the header information (encoding representations of each byte, the original decompressed length of the data, the length of the encoded data, and the size of the header) before writing the compressed data itself to the output byte by byte.
One of the criticisms I received in my original implementation of this process was that my code relied on writing only one bit at a time to the output. I was able to devise a significantly faster way of achieving the same result by writing up to 16 bits in blocks of up to 8 bits simultaneously to the output via the function write_k_bits().
Decoding Overview
huffman_decode() first reads the decompressed length of the data and the header size before building a decoding table based on the encoding representations stored in the header. Then, it uses this table and the function peek_buffer() to read two bytes of the compressed data at an arbitrary bit offset and convert that to the decoded representation of that character. This process is then repeated until the entirety of the input has been decompressed.
Decoding was where the criticisms of my previous implementation were focused. My previous decoder would work by building a Huffman Tree from the header and then reading one bit at a time from the compressed data and traversing the tree to see if the currently read bits represented a compressed character. This was a very slow method, as it not only read a single bit at a time but also required a traversal of the tree for every single bit read from the buffer, which for long strings would require multiple fruitless traversals of the tree for every single byte of data! I have solved this by reading multiple bytes of data simultaneously via the function peek_buffer() and using a lookup table for decoding instead of rebuilding the original tree.
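The lookup-table idea can be illustrated language-agnostically (a toy Python sketch, not the library's exact memory layout): pad every code out to the maximum code length so that any max-length window of the bit stream indexes directly to a symbol and its true bit length:

```python
MAX_LEN = 3  # toy maximum code length (the library uses 16)

# Toy code book, LSB-first: symbol -> (code value, bit length)
codes = {'a': (0b0, 1), 'b': (0b01, 2), 'c': (0b11, 2)}

# Fill a (1 << MAX_LEN)-entry table: every pattern whose low bits match
# a code maps to that symbol and its true length.
table = {}
for sym, (code, length) in codes.items():
    for pad in range(1 << (MAX_LEN - length)):
        table[code | (pad << length)] = (sym, length)

def decode(bits, n_symbols):
    """Decode n_symbols from a little-endian list of bits by peeking
    MAX_LEN bits at a time and advancing by the decoded code's length."""
    out, pos = [], 0
    for _ in range(n_symbols):
        window = 0
        for i in range(MAX_LEN):
            if pos + i < len(bits):
                window |= bits[pos + i] << i
        sym, length = table[window]
        out.append(sym)
        pos += length
    return ''.join(out)
```

With the stream for "abc" being the bits [0, 1, 0, 1, 1], each peek resolves a whole symbol in one table lookup instead of a per-bit tree walk.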
Additional Changes
As well as the changes referenced above, I have made many other improvements since my previous post. These include increasing the maximum number of bits which can represent a compressed byte from 8 to 16, reduction of the header size, compression of arbitrary data (previously only character strings could be compressed), removal of the clunky linked list, improved file organisation and folder structure, addition of a Makefile, and other small improvements.
Feedback I am looking for
The majority of the changes I have made have involved improving the performance of my code and the compression ratios of my tests, and I am very interested in hearing about any further improvements I could make in these areas. I am particularly interested in ways I might reduce the size of the headers, as their current size often leads to compression ratios > 1 for shorter and more diverse inputs, making the "compressed" versions of certain inputs larger than the original uncompressed versions. Of course if you can find any bugs in my code then I'd very much like to hear about those as well.
Other slightly less important things which would still be good to get feedback on might include potential memory usage reductions, documentation/comment quality, style improvements, and potential porting issues between systems (this code was compiled with GCC 8.3.0 on Debian Sid).
I have posted all the code below as per the Code Review rules. If you plan on testing it yourself, note that you will need to create the directory obj/ inside the cloned repo before you run make.
The Code
huffman.c
/*
* Filename: huffman.c
* Date: 13/07/20
* Licence: GNU GPL V3
*
* Encode and decode a byte stream using Huffman coding
*
* Return/exit codes:
* EXIT_SUCCESS - No error
* MEM_ERROR - Memory allocation error
* INPUT_ERROR - No input given
*
* Interface Functions:
* - huffman_encode() - Encodes a string using Huffman coding
* - huffman_decode() - Decodes a Huffman encoded string
*
* Internal Functions:
*
* Encoding:
* - create_huffman_tree() - Generate a Huffman tree from a frequency analysis
* - create_encoding_table() - Generate a "code array" from the huffman tree, used for fast encoding
* - node_compare() - Calculate the difference in frequency between two nodes
* - create_byte_node() - Generate a byte node
* - create_internal_node() - Generate an internal node
* - destroy_huffmantree() - Traverses a Huffman tree and frees all memory associated with it
* - write_k_bits() - Write an arbitrary number of bits to a buffer
*
* Decoding:
* - peek_buffer() - Read a two bytes from a buffer at any given bit offset
*
* Data structures:
*
* Code array:
* - Fast way to encode and decode data using the information generated from a Huffman tree and an easy way to store a representation of the tree
* - 2D array that represents each byte to be encoded and how it is encoded allowing for O(1) time to determine how a given byte is encoded
* - Position in the array (i.e. code_array[0-255]) represents the byte to be encoded or an encoded byte
*
* Huffman tree:
* - Binary tree that operates much like any other Huffman tree
* - Contains two types of nodes, internal nodes and byte nodes
* - Every node contains either the frequency of the byte it represents if it is a byte node or the combined frequencies of its child nodes if it is an internal node
*
* Encoded data format:
*
* - Header
* - Compressed string length (1x uint32_t)
* - Decompressed string length (1x uint32_t)
* - Header size (1x uint16_t)
* - Huffman tree stored as an encoding table (16 + (number of bits representing the encoded byte) bits per byte: byte, bit length of encoded representation, encoded representation)
* - Encoded data
*
* The future:
* - Find way to reduce header size
* - Possibly using the huffman algorithm twice to encode the header?
* - Combine with duplicate string removal and make full LZW compression
*
*/
#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include "../include/huffman.h"
/* Interface functions */
int huffman_encode(uint8_t * input, uint8_t ** output, uint32_t decompressed_length)
{
size_t freq[256] = { 0 };
uint16_t encoded_bytes = 0;
/* Frequency analysis */
for(size_t i = 0; i < decompressed_length; i++)
freq[input[i]]++;
for(uint16_t i = 0; i < 256; i++)
if(freq[i])
encoded_bytes++;
/* Handle strings with either one unique byte or zero bytes */
if(!encoded_bytes) {
return INPUT_ERROR;
} else if(encoded_bytes == 1) {
for(uint16_t i = 0; i < 256; i++) {
if(freq[i]) {
++freq[i > 0 ? i - 1 : i + 1];
}
}
}
/* Construct a Huffman tree from the frequency analysis */
huffman_node_t * head_node = NULL;
if(create_huffman_tree(freq, &head_node) != EXIT_SUCCESS)
return MEM_ERROR;
huffman_coding_table_t encoding_table[256] = {{ .code = 0, .length = 0 }};
/* Convert the tree to a lookup table */
create_encoding_table(head_node, encoding_table, 0);
destroy_huffman_tree(head_node);
size_t header_bit_length = 0;
/* Use the generated encoding table to calculate the byte length of the output */
for(uint16_t i = 0; i < 256; i++)
if(encoding_table[i].length)
header_bit_length += 16 + encoding_table[i].length;
size_t header_byte_length = (header_bit_length >> 3) + !!(header_bit_length & 0x7); /* Fast division by 8, add one if there's a remainder */
size_t encoded_bit_length = 0;
for(size_t i = 0; i < decompressed_length; i++)
encoded_bit_length += encoding_table[input[i]].length;
size_t encoded_byte_length = (encoded_bit_length >> 3) + !!(encoded_bit_length & 0x7);
if(!(*output = calloc(HEADER_BASE_SIZE + header_byte_length + encoded_byte_length + 1, sizeof(uint8_t))))
return MEM_ERROR;
/* Write header information */
((uint32_t *)(*output))[0] = decompressed_length;
((uint32_t *)(*output))[1] = encoded_byte_length;
((uint16_t *)(*output))[4] = header_bit_length;
size_t bit_pos = HEADER_BASE_SIZE << 3;
/* Store the encoding information */
for(uint16_t i = 0; i < 256; i++) {
if(encoding_table[i].length) {
write_k_bits(*output, i, &bit_pos, 8);
write_k_bits(*output, encoding_table[i].length, &bit_pos, 8);
write_k_bits(*output, encoding_table[i].code, &bit_pos, encoding_table[i].length);
}
}
/* Encode output stream */
for(size_t i = 0; i < decompressed_length; i++)
write_k_bits(*output, encoding_table[input[i]].code, &bit_pos, encoding_table[input[i]].length);
return EXIT_SUCCESS;
}
int huffman_decode(uint8_t * input, uint8_t ** output)
{
size_t bit_pos = HEADER_BASE_SIZE << 3;
huffman_coding_table_t decoding_table[65536] = {{ .symbol = 0, .length = 0 }};
/* Extract header information */
uint32_t decompressed_length = * (uint32_t *) &input[0];
uint16_t header_bit_length = * (uint16_t *) &input[8] + (HEADER_BASE_SIZE << 3);
/* Build decoding lookup table */
while(bit_pos < header_bit_length) {
uint8_t decoded_byte = peek_buffer(input, bit_pos);
bit_pos += 8;
uint8_t encoded_length = peek_buffer(input, bit_pos) & 15;
encoded_length = encoded_length ? encoded_length : 16;
bit_pos += 8;
uint8_t pad_length = MAX_CODE_LEN - encoded_length;
uint16_t encoded_byte = peek_buffer(input, bit_pos) & ((1U << encoded_length) - 1); /* Trim all leading bits */
bit_pos += encoded_length;
uint16_t padmask = (1U << pad_length) - 1;
for(uint16_t padding = 0; padding <= padmask; padding++)
decoding_table[encoded_byte | (padding << encoded_length)] = (huffman_coding_table_t) { .symbol = decoded_byte, .length = encoded_length };
}
if(!(*output = calloc(decompressed_length + 1, sizeof(uint8_t))))
return MEM_ERROR;
/* Decode input stream */
for(uint32_t byte_count = 0; byte_count < decompressed_length; byte_count++) {
uint16_t buffer = peek_buffer(input, bit_pos);
(*output)[byte_count] = decoding_table[buffer].symbol;
bit_pos += decoding_table[buffer].length;
}
(*output)[decompressed_length] = '\0';
return EXIT_SUCCESS;
}
/* Encoding functions */
huffman_node_t * create_byte_node(uint8_t c, size_t freq)
{
huffman_node_t * node;
if(!(node = malloc(sizeof(huffman_node_t))))
return NULL;
node->freq = freq;
node->child[0] = NULL;
node->child[1] = NULL;
node->c = c;
return node;
}
huffman_node_t * create_internal_node(huffman_node_t * first_child, huffman_node_t * second_child)
{
huffman_node_t * node;
if(!(node = malloc(sizeof(huffman_node_t))))
return NULL;
node->freq = first_child->freq + second_child->freq;
node->child[0] = first_child;
node->child[1] = second_child;
return node;
}
int create_huffman_tree(size_t * freq, huffman_node_t ** head_node) {
huffman_node_t * node_list[256] = { NULL };
huffman_node_t * internal_node;
huffman_node_t ** node_list_p;
size_t node_count = 0;
for(uint16_t i = 0; i < 256; i++)
if(freq[i] && !(node_list[node_count++] = create_byte_node((uint8_t)i, freq[i])))
return MEM_ERROR;
node_list_p = node_list;
while(node_count > 1) {
qsort(node_list_p, node_count, sizeof(huffman_node_t *), node_compare);
if(!(internal_node = create_internal_node(node_list_p[0], node_list_p[1])))
return MEM_ERROR;
node_list_p[0] = NULL;
node_list_p[1] = internal_node;
node_list_p++;
node_count--;
}
*head_node = node_list_p[0];
return EXIT_SUCCESS;
}
int node_compare(const void * first_node, const void * second_node)
{
huffman_node_t * first = *(huffman_node_t **)first_node;
huffman_node_t * second = *(huffman_node_t **)second_node;
if(!(first->freq - second->freq)) {
if(first->child[1] && !second->child[1])
return 1;
else if(!first->child[1] && second->child[1])
return -1;
else
return 0;
} else {
return first->freq - second->freq;
}
}
void create_encoding_table(huffman_node_t * node, huffman_coding_table_t huffman_array[256], uint8_t bits_set)
{
static uint16_t value = '\0';
if(node->child[1]) {
value &= ~(0x1 << bits_set);
create_encoding_table(node->child[0], huffman_array, bits_set + 1);
value |= 0x1 << bits_set;
create_encoding_table(node->child[1], huffman_array, bits_set + 1);
} else {
huffman_array[node->c].code = value & ((1U << bits_set) - 1);
huffman_array[node->c].length = bits_set;
}
}
void destroy_huffman_tree(huffman_node_t * node)
{
if(node->child[1]) {
destroy_huffman_tree(node->child[0]);
destroy_huffman_tree(node->child[1]);
}
free(node);
return;
}
void write_k_bits(uint8_t * buffer, uint16_t value, size_t * bit_pos, uint8_t bits)
{
size_t byte_pos = *bit_pos >> 3;
uint8_t bit_offset = *bit_pos & 7;
uint8_t bits_to_first_byte = 8 - bit_offset;
uint8_t extra_bytes_needed = ((bit_offset + bits) >> 3) - (bit_offset >> 3);
buffer[byte_pos] &= ~0 >> bits_to_first_byte; /* Clear the top n bits of the first byte we're writing to */
buffer[byte_pos] |= value << bit_offset; /* Shift `value` so that the largest relevant bit is in the MSB position and write as many bits as we can to the first byte of the buffer */
if(extra_bytes_needed > 0) {
value >>= bits_to_first_byte; /* Shift `value` such that the relevant bits can be written to the buffer */
buffer[byte_pos + 1] &= 0; /* Clear the next byte */
buffer[byte_pos + 1] |= value; /* Write the next 8 bits of `value` to the buffer */
if(extra_bytes_needed > 1) {
value >>= 8;
buffer[byte_pos + 2] &= 0;
buffer[byte_pos + 2] |= value; /* Write the remainder of `value` to the buffer */
}
}
*bit_pos += bits;
}
/* Decoding functions */
uint16_t peek_buffer(uint8_t * input, size_t bit_pos)
{
size_t byte_pos = (bit_pos >> 3);
uint32_t concat = (input[byte_pos + 2] << 0x10) | (input[byte_pos + 1] << 0x8) | input[byte_pos];
return concat >> (bit_pos & 7); /* Concatenate three successive bytes together and return a two bytes at the calculated bit offset */
}
huffman.h
#ifndef HUFFMAN_H
#define HUFFMAN_H
/* Header files */
#include <stdint.h>
/* Return values */
#define EXIT_SUCCESS 0
#define MEM_ERROR 1
#define INPUT_ERROR 2
/* Node identifiers, might change to enumeration */
#define INTERNAL_NODE 0
#define BYTE_NODE 1
#define HEADER_BASE_SIZE 10 /* Size of the header with no bytes stored */
#define MAX_CODE_LEN 16 /* The longest any encoded representation is allowed to be */
/* Huffman Tree node */
typedef struct huffman_node_t {
size_t freq;
union {
struct huffman_node_t * child[2];
uint8_t c;
};
} huffman_node_t;
/* Lookup table used for encoding and decoding */
typedef struct huffman_coding_table_t {
union {
uint16_t code;
uint8_t symbol;
};
uint8_t length;
} huffman_coding_table_t;
/* Interface Functions */
int huffman_decode(uint8_t * input, uint8_t ** output);
int huffman_encode(uint8_t * input, uint8_t ** output, uint32_t decompressed_length);
/* Internal Decoding Functions */
uint16_t peek_buffer(uint8_t * input, size_t bit_pos);
/* Internal Encoding Functions */
int create_huffman_tree(size_t * freq, huffman_node_t ** head_node);
int node_compare(const void * first_node, const void * second_node);
huffman_node_t * create_byte_node(uint8_t c, size_t freq);
huffman_node_t * create_internal_node(huffman_node_t * first_child, huffman_node_t * second_child);
void create_encoding_table(huffman_node_t * node, huffman_coding_table_t huffman_array[256], uint8_t bits_set);
void destroy_huffman_tree(huffman_node_t * node);
void write_k_bits(uint8_t * buffer, uint16_t value, size_t * byte_pos, uint8_t bits);
#endif
main.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "../include/huffman.h"
int compare(uint8_t * first, uint8_t * second, size_t len);
int main()
{
uint8_t * encoded = NULL;
uint8_t * decoded = NULL;
char * test_strings[] = {
"test string",
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890!\"£$%^&*()-=_+\\|,./<>?[]{}'#@~`¬\n",
"test",
"Hello world!",
"This is a test string",
"My name is Jeff",
"Test",
"tteesstt",
"test",
"ab",
"Ω≈ç√∫˜µ≤≥÷",
"ЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюя",
"If you're reading this, you've been in a coma for almost 20 years now. We're trying a new technique. We don't know where this message will end up in your dream, but we hope it works. Please wake up, we miss you.",
"a",
"aaaaaaaaaaaaaa",
"\0",
"Powerلُلُصّبُلُلصّبُررً ॣ ॣh ॣ ॣ冗",
"When the sunlight strikes raindrops in the air, they act as a prism and form a rainbow. The rainbow is a division of white light into many beautiful colors. These take the shape of a long round arch, with its path high above, and its two ends apparently beyond the horizon. There is , according to legend, a boiling pot of gold at one end. People look, but no one ever finds it. When a man looks for something beyond his reach, his friends say he is looking for the pot of gold at the end of the rainbow. Throughout the centuries people have explained the rainbow in various ways. Some have accepted it as a miracle without physical explanation. To the Hebrews it was a token that there would be no more universal floods. The Greeks used to imagine that it was a sign from the gods to foretell war or heavy rain. The Norsemen considered the rainbow as a bridge over which the gods passed from earth to their home in the sky. Others have tried to explain the phenomenon physically. Aristotle thought that the rainbow was caused by reflection of the sun's rays by the rain. Since then physicists have found that it is not reflection, but refraction by the raindrops which causes the rainbows. Many complicated ideas about the rainbow have been formed. The difference in the rainbow depends considerably upon the size of the drops, and the width of the colored band increases as the size of the drops increases. The actual primary rainbow observed is said to be the effect of super-imposition of a number of bows. If the red of the second bow falls upon the green of the first, the result is to give a bow with an abnormally wide yellow band, since red and green light when mixed form yellow. This is a very common type of bow, one showing mainly red and yellow, with little or no green or "
}; /* A series of horrible strings that try and break the compression */
size_t successes = 0;
size_t failures = 0;
size_t test_count = sizeof(test_strings) / sizeof(test_strings[0]);
for(size_t i = 0; i < test_count; i++) {
printf("\nEncoding string %zu...", i);
fflush(stdout);
if(huffman_encode((uint8_t *)test_strings[i], &encoded, strlen(test_strings[i]) + 1) != EXIT_SUCCESS) {
fprintf(stderr, "\nError: Failed to encode string %zu!\n", i);
failures++;
continue;
}
printf("Done!\nAttempting to decode...");
fflush(stdout);
if(huffman_decode(encoded, &decoded) != EXIT_SUCCESS) {
fprintf(stderr, "\nError: Failed to decode string %zu!\n", i);
free(encoded);
failures++;
continue;
}
printf("Done!\nValidating...");
if(!compare((uint8_t *)test_strings[i], decoded, strlen(test_strings[i]))) {
uint32_t uncompressed_len = (*(uint32_t *) &encoded[0]) << 3;
uint32_t compressed_len = ((*(uint32_t *) &encoded[4]) << 3) + (*(uint16_t *) &encoded[8]);
printf("Success!\nUncompressed length: %u (~%u bytes)\nCompressed length: %u (~%u bytes)\nCompression ratio: %lf\n", uncompressed_len, uncompressed_len >> 3, compressed_len, compressed_len >> 3, (float) compressed_len / uncompressed_len);
} else {
printf("Failed! Got \"");
for(size_t j = 0; j < strlen(test_strings[i]); j++)
putchar(decoded[j]);
printf("\"!\n");
failures++;
}
free(decoded);
free(encoded);
successes++;
}
printf("Results:\n\nTests completed: %zu\nSuccessful tests: %zu (%.0f%%)\nFailed tests: %zu (%.0f%%)\n", test_count, successes, 100 * (float) successes / test_count, failures, 100 * (float) failures / test_count);
return 0;
}
int compare(uint8_t * first, uint8_t * second, size_t len)
{
for(size_t i = 0; i < len; i++) {
if(first[i] < second[i]) {
return -1;
} else if(first[i] > second[i]) {
return 1;
}
}
return 0;
}
Makefile
CC := gcc
SRCDIR := src
OBJDIR := obj
DEPDIR := include
TARGET := huffman
CFLAGS := -Wall -Wextra -Wpedantic
LIBS :=
_OBJS := huffman.o main.o
OBJS := $(patsubst %,$(OBJDIR)/%,$(_OBJS))
_DEPS := huffman.h
DEPS := $(patsubst %,$(DEPDIR)/%,$(_DEPS))
$(OBJDIR)/%.o: $(SRCDIR)/%.c $(DEPS)
$(CC) -c -o $@ $< $(CFLAGS)
$(TARGET): $(OBJS)
$(CC) -o $@ $^ $(CFLAGS) $(LIBS)
.PHONY: clean
clean:
rm -f $(OBJDIR)/*.o $(TARGET)
Answer: A bug
This version of the program uses limited-length codes, which is good. Decoding looks good. However, limited-length codes create a new edge case: what if the tree is deeper than the length limit? There are various solutions, but as far as I can tell, none of them are used in this program - a length that exceeds MAX_CODE_LEN is generated and things go wrong. This is difficult to find with tests, as almost any realistic string would not result in such a long code. As an example of an unrealistic string, here is one (I cannot put it directly in this answer, it exceeds the size limit of 64KB). I mentioned some approaches to handle that edge case last time, but to go into a little more detail of the simplest trick: divide the frequencies by 2 while rounding up, then rebuild the tree (iterate if necessary).
Or, as an alternative to correctly handling that edge case, I suggest at least correctly failing to handle it: outputting an appropriate error message instead of producing bad data which cannot be decompressed.
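The frequency-halving trick mentioned above is easy to sketch concretely. This is a rough Python illustration of the idea (not C, and using a simple heap-based merge rather than the library's qsort loop); Fibonacci-like frequencies are the classic way to force a deep tree:

```python
import heapq

def code_lengths(freqs):
    """Return each symbol's depth (= code length) in a Huffman tree."""
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    depth = dict.fromkeys(freqs, 0)
    tiebreak = len(heap)  # unique tiebreaker so lists are never compared
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            depth[s] += 1  # every symbol under the merged node gets deeper
        heapq.heappush(heap, (f1 + f2, tiebreak, syms1 + syms2))
        tiebreak += 1
    return depth

def limit_lengths(freqs, max_len):
    """Halve frequencies (rounding up) and rebuild until no code exceeds max_len."""
    lengths = code_lengths(freqs)
    while max(lengths.values()) > max_len:
        freqs = {s: (f + 1) // 2 for s, f in freqs.items()}
        lengths = code_lengths(freqs)
    return lengths
```

Twenty symbols with Fibonacci frequencies produce a maximally skewed tree of depth 19, which blows past a 16-bit limit; a round or two of halving flattens it back under the limit because the halved frequencies lose the exponential skew.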
Divide rounding up
A couple of times there is a construction like (n >> 3) + !!(n & 0x7). There is a simpler way: (n + 7) / 8, or if you prefer, (n + 7) >> 3.
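The equivalence of the two forms is easy to convince yourself of exhaustively (a quick Python check):

```python
def bytes_needed(n_bits):
    # Whole bytes needed to hold n_bits bits: ceil(n_bits / 8)
    return (n_bits + 7) >> 3

# The program's original expression and the simplified one agree for all n
for n in range(100000):
    assert (n >> 3) + (1 if n & 0x7 else 0) == bytes_needed(n) == (n + 7) // 8
```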
Header size
As in the previous review: if canonical Huffman codes were used, the header would not need to store the codes (as they can be reconstructed from the lengths and the implicit alphabetical order of the symbols), saving space. The sequence of lengths could be further compressed. | {
"domain": "codereview.stackexchange",
"id": 38905,
"tags": "performance, algorithm, c, memory-optimization, compression"
} |
Seurat Integration | Question: I'm following the instructions for integration:
https://satijalab.org/seurat/articles/integration_introduction.html
And it's taking a while to run:
immune.anchors <- FindIntegrationAnchors(object.list = ifnb.list, normalization.method = "SCT",
anchor.features = features)
I'm integrating 6 datasets.
Is this function designed well for large amount of data?
Answer: Are you using the same data from the integration tutorial or using your own data? How many cells/genes are in your dataset?
Seurat's default integration method (CCA) is known to be runtime/memory intensive. It can handle large datasets but may require lots of CPU cores/memory. I have found that the amount of RAM required also seems to increase as more CPU cores are used. That said, I have been able to run the integration tutorial with the included dataset on an 8GB laptop.
The easiest solution is to simply use more resources when integrating: increase the amount of RAM and consider using multiple cores (there is a separate Seurat tutorial on parallelization). If you are encountering runtime/memory issues and you cannot scale up your resources, I would suggest looking at alternative integration methods:
Seurat includes another integration approach, RPCA, which is supposed to be less memory intensive at the cost of being more conservative with integration
There are several other integration methods that provide similar results. I have used Harmony and found that it requires significantly fewer resources than the Seurat integration approaches (both CCA and RPCA). A quick google search of single cell RNA-seq integration methods will turn up other popular methods with benchmarks to compare and contrast different approaches. | {
"domain": "bioinformatics.stackexchange",
"id": 2132,
"tags": "seurat, single-cell"
} |
How does moving qubits on the surface code not change the logical state | Question: I am reading this paper, and I do not understand how the process of moving a qubit does not change the logical state.
Moving a qubit is done by
Extending the logical $ \hat{Z}_L $ operator to include an adjacent measurement-qubit
Extending the hole by not measuring the measurement-qubit
Measuring the data qubit that is now not stabilized
Turning on the measurement-qubit that was in the original hole
Changing the logical operators to match the new structure
I understand why steps 1 and 2 do not change the logical state held by the surface code (we do not perform any gate or measurement). However, it is not clear to me why, after measuring the data qubit and adding it back to the surface code, the measurement outcomes of the logical operators do not change.
Answer: Let's assume that we do the move without any errors.
We denote the state before the move as $|\psi_L\rangle$, so that
$$
Z_L|\psi_L\rangle=|\psi'_L\rangle \\
X_L|\psi_L\rangle=|\psi''_L\rangle \\
$$
What happens during the move? After step (b), we stopped measuring the stabilizer $Z_6Z_7Z_8Z_9$ - this didn't change the state, and we also measured $X_6$, namely we projected the state to become
$$
|\psi^b_L\rangle = \frac{1+(-1)^{x_6} X_6}{2}|\psi_L\rangle.
$$
The new logical operators are $X_L$ and $Z_L^e=Z_3Z_4Z_5Z_7Z_8Z_9$. But they act on $|\psi^b_L\rangle$ exactly like the previous operators acted on $|\psi_L\rangle$ up to sign, because
$$
Z_L^e|\psi^b_L\rangle=\frac{1+(-1)^{x_6} X_6}{2}Z_6Z_7Z_8Z_9Z_L|\psi_L\rangle \\
=(-1)^{z_{6789}}\frac{1+(-1)^{x_6} X_6}{2}|\psi'_L\rangle
$$
and $X_L$ commutes with $X_6$.
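That operator bookkeeping can be double-checked mechanically. Writing a product of Pauli-$Z$s as the set of qubit indices it touches (with $Z_L = Z_3Z_4Z_5Z_6$, as implied by the equation above), the product of two such strings is just the symmetric difference of their index sets, since $Z_i^2 = I$ (a quick Python sanity check, ignoring overall signs):

```python
def pauli_z_product(a, b):
    """Product of two Pauli-Z strings given as sets of qubit indices.

    Z_i^2 = I, so indices appearing in both sets cancel: the product
    acts on the symmetric difference (signs are tracked separately).
    """
    return a ^ b

Z_L = {3, 4, 5, 6}            # inferred from the equation above
stabilizer = {6, 7, 8, 9}     # the Z stabilizer turned off in step (b)
Z_L_extended = pauli_z_product(stabilizer, Z_L)  # {3, 4, 5, 7, 8, 9}
```

Qubit 6 cancels out, leaving exactly the extended logical operator $Z_L^e = Z_3Z_4Z_5Z_7Z_8Z_9$.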
Now we do the cycle shown in (c). Here the calculation is similar to the above, so I just outline it. Qubit 6 is fully stabilized by two X stabilizers and also by the stabilizer $Z_3Z_4Z_5Z_6$. The state is therefore projected and becomes an eigenstate of $Z_3Z_4Z_5Z_6$ and these 2 X stabilizers. However, if we act on it with the new logical gates $Z_L'$ and $X_L'$, it will be (up to sign) like acting on it with $X_L$ and $Z_L^e$, which in turn was the same as acting on it with $Z_L$ and $X_L$. That's really why it has to be a 2-step process, and also why you need to keep track of the first $z_{6789}$ outcome, $x_6$, and the last $z_{3456}$. | {
"domain": "quantumcomputing.stackexchange",
"id": 4335,
"tags": "surface-code"
} |
Simple item and inventory system | Question: Items.cs, Creates dictionaries, enumerators, and classes for every type of item.
using System;
using System.Collections.Generic;
public class Item {
// Medical
public static Dictionary < Medical, ItemMedical > Medicals = new Dictionary<Medical, ItemMedical>() {
{ Medical.bandage, new ItemMedical() { name = "bandage", healing = 15, weight = 0.2f, value = 75, description =
"Simple bandage to dress minor injuries. For patching up boo boos when you fall off your bicycle, won't save you from a gunshot." }},
{ Medical.tourniquet, new ItemMedical() { name = "tourniquet", healing = 25, weight = 0.7f, value = 225, description =
"A device that tightly wraps around a limb near the wound to stop the flow of blood, more importantly, it stops blood from flowing OUT." }}
};
// Weapon
public static Dictionary < Weapon, ItemWeapon > Weapons = new Dictionary < Weapon, ItemWeapon > () {
{ Weapon.shortsword, new ItemWeapon() { name = "shortsword", Damage = 25, weight = 4, value = 750, description =
"A relatively short sword, hence the very creative name \"Shortsword\"." }},
{ Weapon.longsword, new ItemWeapon() { name = "longsword", Damage = 40, weight = 6, value = 950, description =
"A relatively long sword, hence the very creative name \"Longsword\"." }}
};
// Armor
public static Dictionary < Armor, ItemArmor > Armors = new Dictionary < Armor, ItemArmor > () {
{ Armor.police_vest, new ItemArmor() { name = "police vest", Resistance = 25, Durability = 15, weight = 5, value = 1200, description =
"A pistol grade body armor used by police forces." }},
{ Armor.military_vest, new ItemArmor() { name = "military vest", Resistance = 60, Durability = 40, weight = 8, value = 2400, description =
"A heavy military vest capable of withstanding some rifle rounds." }}
};
public static ItemMedical Get(Medical key) => Medicals[key];
public static ItemWeapon Get(Weapon key) => Weapons[key];
public static ItemArmor Get(Armor key) => Armors[key];
}
public enum Medical {
bandage,
tourniquet
}
public enum Weapon {
shortsword,
longsword
}
public enum Armor {
police_vest,
military_vest
}
public class ItemMedical: ItemBase {
public float healing = 0;
}
public class ItemWeapon: ItemBase {
private float damage = 0;
public float Damage {
get => damage;
set => damage = value;
}
}
public class ItemArmor: ItemBase {
// Damage required to penetrate armor
public float Resistance = 0;
// "health" of the armor, damaged much more if penetrated
private float durability = 0;
public float Durability {
get => durability;
set => durability = Math.Clamp(value, 0, value);
}
}
public class ItemBase {
public string name = "No name";
public string description = "No description";
public float weight = 0;
public float value = 0;
}
Inventory.cs, equipment is an array because the slots never change, but backpack is a list because I don't know how many items will be in there.
public static class Inventory {
public static ItemBase[] equipment = new ItemBase[4] {
// Primary weapon
Item.Get(Weapon.longsword),
// Secondary weapon
Item.Get(Weapon.shortsword),
// Body armor
Item.Get(Armor.police_vest),
// Rig
null
};
public static List<ItemBase> backpack = new List<ItemBase>() {
Item.Get(Medical.bandage),
Item.Get(Medical.bandage),
};
}
Usage
class Program {
static void Main(string[] args) {
Console.WriteLine("Items in inventory:");
foreach(ItemBase item in Inventory.equipment) {
if (item != null) Console.WriteLine(item.name);
else Console.WriteLine("Nothing");
}
Console.WriteLine("\nItems in backpack:");
foreach(ItemBase item in Inventory.backpack) {
Console.WriteLine(item.name);
}
}
}
Output:
Items in inventory:
longsword
shortsword
police vest
Nothing
Items in backpack:
bandage
bandage
My first actually competent-looking piece of code as a beginner. Really proud of it.
Answer: First of all, congratulations, it is a great first attempt, even though it does not contain too much functionality yet, rather structure and data.
Most of my recommendations will be related to C# coding conventions. Some of my suggestions take advantage of C# 9's new features, so I'll share some links about them.
Enums
public enum Medical { Bandage, Tourniquet }
public enum Weapon { ShortSword, LongSword }
public enum Armor { PoliceVest, MilitaryVest }
In C# we usually use PascalCasing for enum members
In your Weapon enum you have used lower casing
whereas in your Armor enum you have used snake_casing
Please try to chase consistency across your domain model
Most of the time when the enum contains fewer than 5 members (and they are not overwriting default values), C# developers tend to define the enum in a single line
Base class
public abstract class ItemBase
{
public string Name { get; init; } = "No name";
public string Description { get; init; } = "No description";
public float Weight { get; init; }
public float Value { get; init; }
}
This class is used to define common properties that's why it is advisable to mark it as abstract
That prevents the consumer of this class from instantiating an object from it
You want to allow to be able to create only derived classes
C# developers prefer properties over fields whenever they are public
That's why I've changed all the base class members to properties
I've also changed their names since in C# we normally use Pascal Casing for properties
I've used init instead of set, because it only allows initialization (via constructor or via object initializer)
So, after an item is created with specified values it can't be changed later on
Since C# 6 you can define default value for auto generated fields
Derived classes
public class ItemMedical : ItemBase
{
public float Healing { get; init; }
}
public class ItemWeapon : ItemBase
{
public float Damage { get; init; }
}
public class ItemArmor : ItemBase
{
public float Resistance { get; init; }
private float durability;
public float Durability
{
get => durability;
set => durability = Math.Clamp(value, 0, value);
}
}
Please prefer auto-generated properties over manually creating backing fields and defining getter and setter methods
The only exception is whenever you have custom logic either inside the getter or inside the setter (like Durability)
I've gotten rid of the = 0 initial value assignments since these are their default values
The Item class #1
This constant data class can be implemented in several ways. One way to achieve it is as you have done it.
In this case you can mark the class itself as static since all of its member is static as well
If you expose the dictionaries (public) then you should consider making them immutable to prevent further element removal or addition after initialization
public static class Item
{
public static readonly ImmutableDictionary<Medical, ItemMedical> Medicals = new Dictionary<Medical, ItemMedical>
{
{
Medical.Bandage,
new()
{
Name = "bandage",
Healing = 15,
Weight = 0.2f,
Value = 75,
Description = "Simple bandage to dress minor injuries. For patching up boo boos when you fall off your bicycle, won't save you from a gunshot."
}
},
{
Medical.Tourniquet,
new()
{
Name = "tourniquet",
Healing = 25,
Weight = 0.7f,
Value = 225,
Description = "A device that tightly wraps around a limb near the wound to stop the flow of blood, more importantly, it stops blood from flowing OUT."
}
}
}.ToImmutableDictionary();
...
public static ItemMedical Get(Medical key) => Medicals[key];
public static ItemWeapon Get(Weapon key) => Weapons[key];
public static ItemArmor Get(Armor key) => Armors[key];
}
I've marked the collections as readonly to prevent overwriting with another collection or null by the consumer of the class
I've used new() (target typed new expression) to avoid repeating the class names over and over again
It can be inferred from the Dictionary type parameter
The Item class #2
Let me show you an alternative approach:
public class Item
{
private static readonly Lazy<Item> singleton = new Lazy<Item>(() => new Item());
public static Item Instance => singleton.Value;
private Item() { }
private readonly Dictionary<Medical, ItemMedical> Medicals = new ()
{
{
Medical.Bandage,
new ()
{
Name = "bandage",
Healing = 15,
Weight = 0.2f,
Value = 75,
Description = "Simple bandage to dress minor injuries. For patching up boo boos when you fall off your bicycle, won't save you from a gunshot."
}
},
{
Medical.Tourniquet,
new ()
{
Name = "tourniquet",
Healing = 25,
Weight = 0.7f,
Value = 225,
Description = "A device that tightly wraps around a limb near the wound to stop the flow of blood, more importantly, it stops blood from flowing OUT."
}
}
};
...
public ItemMedical this[Medical key] => Medicals[key];
public ItemWeapon this[Weapon key] => Weapons[key];
public ItemArmor this[Armor key] => Armors[key];
}
Here I've made the dictionaries private so they became implementation details
I've replaced the Get methods with index operators to make the item retrieval a bit more convenient
See next section for usage
The index operators can't be defined as static so we have to use a little trick here to make the usage easy
We use the singleton pattern to expose a single instance to the consumers of the class
Here I've implemented this pattern with a Lazy to make sure that the instance is created only when it is first accessed but in a thread safe manner
These changes allow us to define the Equipments and Backpack like this:
public ItemBase[] Equipments = new []
{
Item.Instance[Weapon.LongSword],
Item.Instance[Weapon.ShortSword],
Item.Instance[Armor.PoliceVest],
(ItemBase)null
};
public List<ItemBase> Backpack = new ()
{
Item.Instance[Medical.Bandage],
Item.Instance[Medical.Bandage],
}; | {
"domain": "codereview.stackexchange",
"id": 42339,
"tags": "c#, beginner, game"
} |
Coulomb repulsion in the Anderson impurity model | Question: In Phil Anderson's famous paper on impurities, Localized Magnetic States in Metals, he has the following paragraph on page 44,
However, I am puzzled by the last sentence: why is the $J$ part really only a one-electron energy $n_{\uparrow} + n_{\downarrow}$?
Answer: Because
$$
n_\downarrow^2 = n_\downarrow
$$
and similarly for $n_\uparrow$.
Why? Because $n_\downarrow$ can only take on the values 0 or 1 and
$$
0^2=0
$$
and
$$
1^2=1
$$ | {
"domain": "physics.stackexchange",
"id": 19982,
"tags": "condensed-matter, solid-state-physics, strong-correlated"
} |
Regarding the radial motion of photons | Question: Photons move on null geodesics, and the equation of motion on the equatorial plane, after some algebra, can be written as
$$e^{\nu}\dot{t}^2-e^{-\nu}\dot{r}^2-r^2\dot{\phi}^2 = 0$$
$\dot{\phi} = 0$ for radial motion, thus the above equation becomes
$$e^{\nu}\dot{t}^2-e^{-\nu}\dot{r}^2 = 0$$
from where one can write down
$$\frac{d r}{d t} = \pm \Big(1-\frac{2M}{r}\Big)$$
and integration will lead to
$$t = r +2M\ln\Big|1-\frac{r}{2M}\Big|+C \hspace{12.5mm} \text{and} \hspace{12.5mm} t = -r -2M\ln\Big|1-\frac{r}{2M}\Big|+C$$
Here I'm not sure what kind of physical meaning I should attribute to the final equations. I see at $r=2M$, $t\to \mp\infty$ but this also confuses my interpretation.
Answer: The physical meaning is that for a distant observer at fixed $r$, for whom $dt$ approximately represent a proper time interval, ingoing light appears to take an infinitely long time to reach the event horizon.
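As a sanity check, the closed form above can be compared against direct numerical integration of $dt/dr = (1 - 2M/r)^{-1}$; a minimal Python sketch with $M = 1$:

```python
import math

def t_closed(r, M=1.0):
    # t(r) = r + 2M ln|1 - r/(2M)|, up to the integration constant C
    return r + 2 * M * math.log(abs(1 - r / (2 * M)))

def t_numeric(r0, r1, M=1.0, steps=10_000):
    # Simpson's rule for dt = dr / (1 - 2M/r); valid for r0, r1 > 2M
    f = lambda r: 1.0 / (1 - 2 * M / r)
    h = (r1 - r0) / steps
    s = f(r0) + f(r1)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(r0 + i * h)
    return s * h / 3

# coordinate time elapsed for radial light between r = 3M and r = 10M
dt = t_numeric(3.0, 10.0)
print(dt, t_closed(10.0) - t_closed(3.0))  # the two agree
```

The integral diverges as the lower limit approaches $r = 2M$, which is the infinite coordinate time at the horizon discussed above.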
The other solution is for outgoing light and tells you that light emitted from sources at fixed $r$ takes an increasingly long time to reach a distant observer, with an asymptote to an infinitely long time at the event horizon (where a fixed source cannot exist). This is of course the phenomenon of gravitational time dilation. | {
"domain": "physics.stackexchange",
"id": 75630,
"tags": "general-relativity, visible-light, black-holes, photons, geodesics"
} |
DFT Frequency domain analysis and interpolation | Question: I have a 2 part question, one may be related to why I'm not understanding the other.
A while back, I remember some professor saying that for the $N$ point DFT frequency domain, the values for $k$ around $N/2$ (halfway point for the spectral domain - those around $\pi$) are considered high frequencies, and the end points $(0, N-1)$ (those around 0, $2\pi$) are considered low frequencies. I am having trouble seeing this mathematically given the definition for computing the coefficients of the DFT.
$$X[k]=\sum_{n=0}^{N-1}\left(x_ne^{\frac{-i2\pi{}kn}{N}}\right)$$
To me, it seems that the frequencies grow as $k$ increases right up until $k=N-1$, where the coefficients repeat due to periodicity. Perhaps I am mis-remembering and this may be an incorrect assumption?
This leads on to my second question. I have seen a few examples floating around online, where to interpolate in the time domain, one may pad in the frequency domain (running the IFFT after of course). The examples I have seen seem to decide to pad the coefficients around the middle, i.e. for $N=4$ and a desired sampling of $2N$, [1 2 3 4] become [1 2 0 0 0 0 3 4]. Is there a reason why the middle is targeted and not the end like [1 2 3 4 0 0 0 0]?
If my first question is correct, then this simply pads the higher frequencies as 0. I am aware that these zeroes add nothing to the frequency information as $X[k]=0$ cancels any effect during the inverse DFT. But wouldn't moving any of the coefficients change some frequency information, i.e. for [1 2 3 4] becoming [1 2 0 0 0 0 3 4]
$N=4, k=2, X[k]=3$, becomes $2N=8, k=6, X[k]=3$
$$e^{\frac{+i2\pi{}2n}{N}}\neq{}e^{\frac{+i2\pi{}6n}{2N}}$$
Answer: Concerning your first question, the fact that $k=N/2$ (for even $N$) corresponds to the highest frequency has to do with the periodicity of $X[k]$. Note that due to periodicity, the indices $k=N-1$, $k=N-2$, etc. correspond to the negative indices $k=-1$, $k=-2$, etc., which are low (negative) frequencies. As an example consider $x[n]=\cos(2\pi n/N)$. If you take the DFT of $x[n]$ you'll obtain a spectrum $X[k]$ with contributions at $k=1$ and at $k=N-1$. It would be wrong, however, to conclude that the signal has one component at low frequencies and one at high frequencies. It's just that a real-valued signal has a conjugate symmetric spectrum and so you get components at positive and at the corresponding negative frequencies. The latter appear at higher DFT indices due to periodicity of $X[k]$. The figure below shows the signal and its DFT coefficients for $N=8$:
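That example is easy to reproduce numerically with a direct pure-Python DFT; note the two equal-magnitude components at $k=1$ and $k=N-1$:

```python
import cmath, math

def dft(x):
    # direct O(N^2) DFT, X[k] = sum_n x[n] e^{-i 2 pi k n / N}
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 8
x = [math.cos(2 * math.pi * n / N) for n in range(N)]
mags = [abs(v) for v in dft(x)]
print([round(m, 6) for m in mags])
# only k = 1 and k = N-1 = 7 are non-zero (both equal to N/2 = 4):
# one positive-frequency and one negative-frequency component
```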
Concerning your second question, as mentioned before, a real-valued signal has a conjugate symmetric spectrum and its DFT coefficients satisfy
$$X[k]=X^*[N-k]\tag{1}$$
If you want to interpolate a real-valued signal, then also the zero-padded spectrum must satisfy $(1)$. This means that you must zero-pad in the middle of the given frequency domain vector. Otherwise, the inverse DFT would generate a complex-valued time-domain signal. | {
"domain": "dsp.stackexchange",
"id": 4305,
"tags": "dft, interpolation"
} |
Bremermann's limit vs Planck frequency | Question: Bremermann's limit, as the maximum possible computation power or total CPU computing frequency, is known to be on the order of $10^{50}~\text{Hz}/\text{kg}$.
Why can the maximum computation frequency for unit mass exceed the Planck frequency, which is on the order of $10^{43}~\text{Hz}$, and how is it related to it?
Answer: It's bits per second. It's not a frequency. It could be 1 bit being processed $10^{50}$ times per second, which would be faster than the Planck frequency, but it could also be $10^{50}$ bits being processed once per second each, which wouldn't.
You're also comparing things with different units. It's like saying that the Schwarzschild constant (mass of a black hole compared to its radius) is $1.34663531 × 10^{27} kg / m$, but the observable universe has a mass of $1.5×10^{53} kg$, so why isn't the observable universe a black hole? Well, the observable universe has a radius of more than 1 metre. Or, the speed limit is only 50 mph, so how can you drive from New York to Philadelphia (94.5 miles)? Well, it takes more than one hour.
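To see the units point numerically: Bremermann's limit is conventionally $mc^2/h$, i.e. about $1.36\times10^{50}$ bits per second per kilogram. A hedged sketch with hard-coded approximate constants:

```python
c = 2.998e8           # speed of light, m/s
h = 6.626e-34         # Planck constant, J*s
t_planck = 5.391e-44  # Planck time, s

bremermann = c**2 / h     # bits per second PER KILOGRAM (~1.36e50)
f_planck = 1 / t_planck   # a frequency in Hz            (~1.85e43)
print(bremermann, f_planck)
# The two quantities carry different units (bit/s/kg vs. Hz), so 1.36e50
# exceeding 1.85e43 involves no contradiction: multiplying by a different
# reference mass (say the Planck mass ~2.18e-8 kg instead of 1 kg) shifts
# the first number by many orders of magnitude.
```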
If you were going to choose an arbitrary amount of mass to put into the equation, there's no reason it would have to be 1kg. A more "natural" amount of mass would be something like the Planck mass or the electron mass. (very different amounts of mass, by the way). If the speed limit did somehow dictate how far you could travel, it would be more logical to multiply it by the Planck time, than by 1 hour - and by that reasoning your car couldn't even drive a millimetre, which shows how that reasoning is completely wrong. | {
"domain": "physics.stackexchange",
"id": 72150,
"tags": "frequency, physical-constants, absolute-units"
} |
Basic substring algorithms + auxiliary string-generating functions | Question: Summary: a bunch of algorithms to find if particular substring exists in the string. Meanwhile there's a lot to learn and I am looking forward to implementing more complex ones, I suppose there's some crucial insight regarding style, syntax, or design, which I might be missing. I have also managed to implement the auxiliary functions which potentially would assist me in the future.
Note: the question I have is how to avoid the boilerplate part in the main function. There should have been a some kind of "framework", which would accept the vector of functions and perform the tests. However, for me it's not quite clear how the architecture should be designed.
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <functional>
#include <iostream>
#include <random>
#include <string>
#include <string_view>
#include <utility>
#include <vector>
template <typename T>
std::ostream &operator<<(std::ostream &o, std::vector<T> const &vector)
{ for (auto const &element: vector) o << element << " "; return o; }
namespace string_generator_utility {
std::string generate_string(std::string::size_type length,
std::vector<char>&& alphabet = {'a', 'b', 'c', 'd'}) {
std::string result;
static std::random_device rd;
static std::mt19937_64 mersenne_twister_generator {rd ()};
const auto supremum = alphabet.size() - 1;
std::uniform_int_distribution<std::size_t> range {0, supremum};
for (std::size_t index = 0; index < length; ++index) {
const auto letter = range(mersenne_twister_generator);
result += alphabet[letter];
}
return result;
}
std::vector<std::string> generate_n_strings(std::size_t vector_size, std::size_t string_length,
std::vector<char>&& alphabet = {'a', 'b', 'c', 'd'}) {
std::vector<std::string> generated;
generated.reserve(vector_size);
std::generate_n(std::back_inserter(generated), vector_size,
[&]() { return generate_string(string_length, std::move(alphabet)); });
return generated;
}
} // namespace string_generator_utility
namespace algo {
std::vector<std::int64_t> naive_substring(std::string_view haystack, std::string_view needle) {
const auto haystack_size = haystack.size();
const auto needle_size = needle.size();
assert(haystack_size >= needle_size);
std::vector<std::int64_t> result;
for (std::size_t index = 0; index < haystack_size - needle_size + 1; ++index) {
std::size_t offset = 0;
for (; offset < needle_size; ++offset) {
if (haystack[index + offset] != needle[offset])
break;
}
if (offset == needle_size)
result.push_back(index);
}
return result;
}
std::vector<std::int64_t> rabin_karp_hash(std::string_view haystack, std::string_view needle) {
const auto haystack_size = haystack.size();
const auto needle_size = needle.size();
assert(haystack_size >= needle_size);
std::vector<std::int64_t> matches;
static const auto hash_function = [](std::string_view::iterator begin, std::string_view::iterator end) {
std::int64_t hash = 5381;
for (; begin != end; ++begin)
hash = ((hash << 5) + hash) + *begin;
return hash;
};
const auto needle_hashed = hash_function(std::begin(needle), std::end(needle));
for (std::size_t index = 0; index < haystack_size - needle_size + 1; ++index) {
const auto substring_hash = hash_function
(
std::begin(haystack) + index, std::begin(haystack) + index + needle_size
);
if (substring_hash == needle_hashed)
matches.push_back(index);
}
return matches;
}
} // namespace algo
int main() {
auto vector = string_generator_utility::generate_n_strings(25, 50);
std::cout << "naive substring:\n";
for (std::size_t index = 0; index < vector.size(); ++index)
{
std::cout << vector[index] << ": ";
auto shift = algo::naive_substring(vector[index], "ab");
std::cout << shift << "\n";
}
std::cout << "rabin-karp-substring:\n";
for (std::size_t index = 0; index < vector.size(); ++index)
{
std::cout << vector[index] << ": ";
auto shift = algo::rabin_karp_hash(vector[index], "ab");
std::cout << shift << "\n";
}
return 0;
}
Answer: From a readability viewpoint, your operator<< for std::vector declared at the top of your file should be spread out onto multiple lines so that it is easier to read and understand.
Also, the calculation of ((hash << 5) + hash) should be rewritten as the simpler hash * 33. The compiler will know the best way to multiply a number by 33. This could be a multiply, a shift-and-add like you've coded, or some sequence involving the address calculation instructions.
Rather than using an assert to verify that the needle is not longer than the haystack (which will only check the condition if the NDEBUG macro is not defined), just check the condition and return an empty collection.
In rabin_karp_hash you assume that two strings match if their hash values are the same. This is not necessarily the case. It is possible, however unlikely, that two different strings will have the same hash value. This is a hash collision. To ensure that your potential matches are identical strings, you still need to compare both strings when the hashes match.
To simplify the code in main and eliminate the duplication, you can create a class with a virtual compare member. Then derive two classes from it, one for the naive comparison, the other for the Rabin-Karp one. Put your loop into another function, and pass instances of the appropriate derived class to use the specific comparison you want to test. | {
"domain": "codereview.stackexchange",
"id": 33731,
"tags": "c++, algorithm, c++17"
} |
Motion of a bead threaded onto a vertical ring with friction using a differential equation | Question: There are similar questions posted but unfortunately all deal with the case of no gravity.
I am trying to create a general model for the motion of a small bead of mass $ m $ which has been threaded onto a circular ring of radius $ r $ which is fixed in a vertical plane. The coefficient of friction between the ring and the bead is $ \mu $.
Initially (at time $ t = 0 $), the bead is located at the left-hand point on the ring which is at the same elevation as its centre, and is being pushed downwards at a speed of $ u $ (see image):
My attempt was (where $ \omega $ and $ \alpha $ are angular speed and angular acceleration respectively):
$(1)$ Resolving forces radially: $$ R - mg \sin \theta = mr\omega ^2 $$
$(2)$ Resolving forces tangentially: $$ mg \cos \theta - \mu R = mr \alpha $$
Rearranging $(1)$ for $ R $ and substituting it into $(2)$ gives:
$$ r \alpha = g \cos \theta - \mu r \omega ^2 - \mu g \sin \theta $$ which translates into the differential equation
$$ \theta '' + \mu (\theta ')^2 + \dfrac{g}{r}(\mu \sin \theta - \cos \theta) = 0 $$
The fact that this equation has no analytical solution isn't an issue in my case, but numerically solving it (with sensible values $ r = 1 $, $ g = 9.81 $, $ \mu = 0.1 $, $ \theta (0) = 0, \theta '(0) = 1 $) and graphing the results shows that the oscillations do not die down as expected due to friction, meaning my equation must be wrong.
Any help in correcting the model is much appreciated!
(Please note I am aware this problem can be solved using conservation of energy. I specifically want to explore the differential equation approach.)
Answer: Friction opposes relative motion between surfaces, but according to your equation friction force always points in the same direction regardless of the velocity of the bead relative to the ring. So depending on the direction of motion, "friction" is actually adding energy to your system. You need to put into your equations the ability for the friction force to point in the opposite direction of the velocity at all points in time.
One choice would be
$$mg \cos \theta - \mu R\cdot\text{sgn}(\omega) = mr \alpha$$
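A quick pure-Python RK4 sketch of this corrected equation (one added assumption here: the friction magnitude uses $|R|$ with $R = mg\sin\theta + mr\omega^2$ from equation $(1)$, since a threaded ring can push on the bead from either side):

```python
import math

g, r, mu = 9.81, 1.0, 0.1  # the question's values

def sgn(x):
    return (x > 0) - (x < 0)

def accel(theta, omega):
    # theta'' = (g/r)cos(theta) - mu*sgn(theta')*|(g/r)sin(theta) + theta'^2|
    # kinetic friction always opposes the bead's velocity
    return (g / r) * math.cos(theta) - mu * sgn(omega) * abs((g / r) * math.sin(theta) + omega**2)

def rk4_step(theta, omega, dt):
    k1t, k1w = omega, accel(theta, omega)
    k2t, k2w = omega + dt/2*k1w, accel(theta + dt/2*k1t, omega + dt/2*k1w)
    k3t, k3w = omega + dt/2*k2w, accel(theta + dt/2*k2t, omega + dt/2*k2w)
    k4t, k4w = omega + dt*k3w, accel(theta + dt*k3t, omega + dt*k3w)
    return (theta + dt/6*(k1t + 2*k2t + 2*k3t + k4t),
            omega + dt/6*(k1w + 2*k2w + 2*k3w + k4w))

def energy(theta, omega):
    # mechanical energy per unit mass: (1/2) r^2 w^2 - g r sin(theta)
    return 0.5 * (r * omega)**2 - g * r * math.sin(theta)

theta, omega, dt = 0.0, 1.0, 1e-3  # theta(0) = 0, theta'(0) = 1
E0 = energy(theta, omega)
for _ in range(10_000):            # 10 s of simulated time
    theta, omega = rk4_step(theta, omega, dt)
print(E0, energy(theta, omega))    # the energy now decays, unlike the original model
```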
This will give dissipation at all times: | {
"domain": "physics.stackexchange",
"id": 68156,
"tags": "homework-and-exercises, newtonian-mechanics"
} |
Is the basic reproduction number unique? | Question: Given any epidemic model of an infectious disease, there are various ways of computing a basic reproduction number($R_0$) such as; the next-generation method, survival function, largest eigenvalue of the Jacobian matrix and so on.
My question is,
Do we get the same $R_0$ for the same model but using different methods of computing $R_0$?
If we don't (i.e. if $R_0$ is not unique), what is the explanation?
Are we allowed to use "this" different $R_0$ in writing a paper?
Answer: It should be noted that many of the methods you talk about aren't necessarily for getting a number but rather a formula for $R_0$, all of which should be equivalent.
However, when those methods step into the realm of getting a number by fitting parameters, they may give different results as they handle certain aspects of the data differently.
You are certainly "allowed" to use any formulation of $R_0$ you want - just document what you did. | {
"domain": "biology.stackexchange",
"id": 7486,
"tags": "theoretical-biology, population-dynamics, epidemiology, infectious-diseases"
} |
Is there a way (like planning algorithms) to draw a successive polyline to fill a specific shape (like triangle)? | Question: there is a specific shape (like triangle) area, i am trying to develop an program to draw a successive polyline inside the triangle to fill the triangle, one line 3 layers
This post demonstrates some algorithms to solve this category of problems.
None of them is suitable for this task. Any ideas?
Answer: Wouldn't it be easier to simply apply a geometric computation to your triangle to get smaller triangles whose vertices can be used for the polyline?
With Wolfram Language
You have a triangle.
shp = Triangle[];
Graphics[{LightGray, shp}]
A ScalingTransform about the RegionCentroid can be performed with TransformedRegion to get smaller inner triangles. Below, shp is scaled by factors 0.8, 0.6, and 0.4 for the inner triangles.
scaledShps =
TransformedRegion[shp,
ScalingTransform[{#, #},
RegionCentroid@shp]] & /@ Range[.8, .4, -.2];
SeedRandom[456]
Graphics[{LightGray, shp, Riffle[Hue /@ RandomReal[1, Length@scaledShps], scaledShps]}]
Each of scaledShps has the vertices as the first argument. Map (/@) First to collect these and Flatten them into a single Line.
Graphics[{
LightGray, shp,
Orange, Thick, Line@Flatten[First /@ scaledShps, 1]
}]
Get more layers by increasing the number of scaling factors. For example, from .95 to .05 in steps of -.05
scaledShps =
TransformedRegion[shp,
ScalingTransform[{#, #},
RegionCentroid@shp]] & /@ Range[.95, .05, -.05];
Graphics[{
LightGray, shp,
Orange, Thick, Line@Flatten[First /@ scaledShps, 1]
}]
Wolfram recently released a free Wolfram Engine that can be called from many languages (Python, C, ...) so you can use the above code directly in your project.
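The same centroid-scaling idea is only a few lines of plain Python (assuming simple `(x, y)` tuples; shown here for a unit right triangle, but any vertex list works):

```python
def centroid(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def scale_towards(p, c, s):
    # move p towards centroid c by factor s (s = 1 keeps p, s = 0 collapses to c)
    return (c[0] + s * (p[0] - c[0]), c[1] + s * (p[1] - c[1]))

def spiral_polyline(vertices, factors):
    c = centroid(vertices)
    # concatenate the scaled copies of the shape into one successive polyline
    return [scale_towards(p, c, s) for s in factors for p in vertices]

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
path = spiral_polyline(tri, [0.8, 0.6, 0.4])
print(len(path), path[0])  # 9 points; the first vertex pulled towards the centroid
```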
Hope this helps. | {
"domain": "datascience.stackexchange",
"id": 5244,
"tags": "machine-learning, deep-learning, algorithms, opencv"
} |
Subset with modified condition, is it still NP-complete? | Question: So I know the conditions required for a problem to be NP-Complete is that it has to lie within NP and has to be NP-hard.
The given problem I have is subset sum.
However, the conditions have been changed to sum ≤ M and sum ≥ M from sum = M. To be more specific:
"If we ask if there is a subset with sum ≤ M, is the problem still NP-complete?"
"If we ask if there is a subset with sum ≥ M, is the problem still NP-complete?"
My initial reaction is that the two problems are no longer NP-complete since they can both be solved within polynomial time.
Check each element and see if there exists at least one smaller than M.
Add all positive integers and see if the sum of all elements is larger than M.
Since it isn't NP-hard, it therefore cannot be NP-complete.
Am I thinking/approaching this correctly?
Answer: What precisely are the problems? I may be missing something (and cannot do comments yet). Are they
(1) Given a set $A \subseteq \mathbb{Z}$ of $n$ elements, does there exist some subset $S \subseteq A$ with $\sum_{x \in S} x \le M$.
(2) Given a set $A \subseteq \mathbb{Z}$ of $n$ elements, does there exist some subset $S \subseteq A$ with $\sum_{x \in S} x \ge M$.
If so these problems seem clearly in $\mathbf{P}$, basically by the reasoning you described -- we just want the minimum/maximum possible subset sum, and then we compare that with $M$. Assuming empty $S$ is allowed:
for (1), add up all the negative numbers in $A$ and see if it's $\le M$. If Yes, then return yes. If not, return No.
for (2), add up all the positive numbers in $A$ and see if it's $\ge M$.
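Both relaxed questions therefore reduce to computing the minimum or maximum possible subset sum, which is linear time. A Python sketch (empty subset allowed, as assumed above):

```python
def subset_sum_le(A, M):
    # min possible subset sum = sum of all negative elements (empty set gives 0)
    return sum(x for x in A if x < 0) <= M

def subset_sum_ge(A, M):
    # max possible subset sum = sum of all positive elements (empty set gives 0)
    return sum(x for x in A if x > 0) >= M

A = {1, 3}
print(subset_sum_le(A, 2), subset_sum_ge(A, 2))
# both True -- but witnessed by DIFFERENT subsets ({1} and {3});
# no subset of {1, 3} sums to exactly 2
```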
Again I may be missing something, but it seems like the other answer's reduction might not be addressing the possibility that $SS_\le$ and $SS_\ge$ would return yes based on different sets. For example, consider input $A = \{1, 3\}$ and $M=2$: the $\le$ question is satisfied by $\{1\}$ and the $\ge$ question by $\{3\}$, yet no subset sums to exactly $2$. | {
"domain": "cs.stackexchange",
"id": 14017,
"tags": "np-complete, np-hard, decision-problem"
} |
How does ROS handle messages between 3 or more nodes across 2 machines? | Question:
Hey,
I am working on a robot that creates a node on one computer to capture and transmit (through ros messages) an image (Node 1). The image is sent to two other nodes. One node is on a separate computer (connected via wi-fi) which also runs the rosmaster (Node 2). The third node is on the same computer as the image capture node (Node 3).
My question is: How is the image message transmitted to each node. Does the image get sent to the computer with the rosmaster and redistributed to the listening nodes (Node 1->Master->Node 3 and Node 2)? Or is the messaging system more dynamic in that messages go directly to listening nodes(Node 1-> Node 2 and Node 1->Node 3)? The reason this matters is because of the delay involved in wireless transmission to the rosmaster and then back to the transmitting computer to be consumed by node 3. Also the processing overhead of sending and receiving extra messages is undesirable.
Thank you,
Tim
Originally posted by TimZ on ROS Answers with karma: 13 on 2012-07-04
Post score: 1
Answer:
In short: the master can be thought of as a coordinator; it does not route all traffic between nodes through itself, but helps nodes (at startup, and during execution) to find each other. Nodes then communicate directly, via either TCP or UDP (this can even be configured per subscriber).
From the wiki/Master and wiki/Topics wiki pages.
The role of the Master is to enable individual ROS nodes to locate one another. Once these nodes have located each other they communicate with each other peer-to-peer.
In your case, images will go directly from the publisher to the subscriber, after the master has provided them with enough information to be able to find each other.
Originally posted by ipso with karma: 1416 on 2012-07-04
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 10053,
"tags": "ros"
} |
Will swinging a baseball bat create EM waves? | Question: It’s my understanding that an accelerating charge is what creates EM radiation. A bat has electrons which have a negative charge. A swinging bat accelerates those electrons. Is it correct to conclude that the bat is therefore generating EM radiation? I guess if the answer is yes then all acceleration of mass generates EM, the earth, cars, etc.
Answer: Yes but no.
Yes, the acceleration of the charges will create EM waves, but first of all the bat is most likely neutral, meaning that for each wave created by an electron there is also a wave created by a proton. Those waves would destructively interfere with each other; what they create can be more accurately described as noise rather than a wave. But even if your bat had excess charge, its acceleration would be too low, relatively speaking. What do I mean by "too" low?
Well, consider the formula for the Electric fields of moving charges:
$$E = \text{constant} \times \frac{\text{charge} \times \text{acceleration}}{\text{distance}}$$
https://doi.org/10.1016/j.proeng.2011.05.078
Here is an article on the acceleration of cricket bats; the values they've calculated do not exceed 2g. I assume it wouldn't be much different for a baseball bat. So say the bat is not perfectly neutral but has an excess charge of 1 electron, $1.6 \times 10^{-19}$ Coulombs that is. The constant is $1/(4\pi\epsilon_0 c^2)$, which is around $10^{-7}$, and we can say we are a meter away from the bat.
$$E \approx 10^{-7} \times 1.6 \times 10^{-19} \times 2 \times 9.81 / 1 \approx 3.14 \times 10^{-25}~\text{N/C}$$
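The same arithmetic as a quick Python check:

```python
constant = 1e-7  # 1/(4*pi*eps0*c^2) in SI units
q = 1.6e-19      # one excess electron, C
a = 2 * 9.81     # ~2 g acceleration, m/s^2
r = 1.0          # distance from the bat, m

E = constant * q * a / r
print(E)  # ~3.14e-25 N/C
```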
Which is significantly low, it is unlikely for your bat to have enough excess charge to create a significant disturbance in the field. | {
"domain": "physics.stackexchange",
"id": 74648,
"tags": "particle-physics, electromagnetic-radiation, electrons"
} |
Why doesn't the ocean gradually turn into hydrogen and oxygen gas? | Question: Maybe I'm wrong about this, but I thought I remembered from high school chemistry that all reactions are in equilibrium. Some equilibria are extremely far to the right or left, so they appear to react 100% or not at all, but even in those reactions there is a tiny amount on the unfavored side.
If that's not true, then the answer to this question will be short. But if it is true, then I have a question about the following reaction, and bodies of water like lakes and oceans.
$$
\ce{2H2O (liquid) <=> 2H2 (g) + O2 (g)}
$$
If I remember right, the energy change ($\Delta{H}$ or $\Delta{G}$) in that reaction strongly favors the left side, so water molecules are not breaking apart into gas in large numbers. However, it's a gas on the right side, so I imagine that if a water molecule at the surface of the ocean breaks into gas, those products will escape, and the $\ce{H2}$ and $\ce{O2}$ gas molecules are no longer in physical contact and have no chance to react again.
That makes me think that gradually the ocean would turn into hydrogen and oxygen gas, even if the left side of that equation were favored. That doesn't seem to be happening, so what am I missing? I don't normally think there is hydrogen gas floating around in the air, but perhaps there actually is enough that some of it is colliding with $\ce{O2}$ at the ocean surface and balancing the equilibrium.
Answer:
Maybe I'm wrong about this, but I thought I remembered from high school chemistry that all reactions are in equilibrium. Some equilibria are extremely far to the right or left, so they appear to react 100% or not at all, but even in those reactions there is a tiny amount on the unfavored side.
Yeah, that's true. But the word extremely is doing a lot of heavy lifting to make that statement correct. Let's explore just how extreme the situation is.
Before even talking about the equilibrium we should discuss reaction rates. Even if the reaction is not in equilibrium, it may take a very long time to get there. And I'm not talking about "leave it overnight." The Arrhenius equation relates the rate of a reaction to the energy barrier. At standard conditions (25 C), the speed of a reaction is halved for every additional 1.73 kJ/mol of activation energy. Familiar reversible reactions that happen on the hour scale have activation energies less than 100 kJ/mol or so. By contrast, the Gibbs free energy of formation of water is 237 kJ/mol. So the reaction will be at least $2^{(237-100)/1.73} = 7 \times 10^{23}$ times slower. Instead of proceeding in hours, it will take quintillions of years, billions of times longer than the age of the universe, to reach equilibrium.
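The rate suppression estimated above can be computed directly (using the rule of thumb that each additional $RT\ln 2 \approx 1.7$ kJ/mol of barrier halves the rate at room temperature):

```python
import math

R, T = 8.314, 298.15                  # J/(mol K), K
halving = R * T * math.log(2) / 1000  # ~1.72 kJ/mol per factor of 2
extra_barrier = 237 - 100             # kJ/mol above a "fast" reversible reaction

slowdown = 2 ** (extra_barrier / halving)
print(halving, slowdown)
# ~1e24, the same ballpark as the 7e23 quoted above:
# reactions that take hours would take quintillions of years
```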
Even allowing for the reaction to reach equilibrium, what is that equilibrium? LibreText's chemistry book gives an equilibrium constant of $2.4 \times 10^{47}$ at 500 K. It doesn't state the units, but when the number is that large, any sane unit leads to the same conclusion. Namely, the equilibrium concentration of hydrogen will be very small, parts per quintillion or so.
In summary, maybe there's an equilibrium. But it will take quintillions of years to reach it, and even when it is reached, the quantities of hydrogen will be measured in parts per quintillion. So don't expect it to matter over normal periods of time. | {
"domain": "chemistry.stackexchange",
"id": 17700,
"tags": "equilibrium"
} |
Intuitive proof for a tree with n nodes, has n-1 edges | Question: I am interested in an intuitive proof for "any binary tree with $n$ nodes has $n-1$ edges", that goes beyond proof by strong induction.
Answer: You can see a (binary) tree as a directed graph: suppose the root is the "lowest" node and the leaves are the "highest" ones, then say that all the edges are oriented upwards. Then, every node that is not the root will have exactly one edge entering in it, and every edge will be pointing at exactly one node. This means that if you have $n$ nodes, you have to have $n-1$ edges (one per node with the exception of the root). | {
"domain": "cs.stackexchange",
"id": 12691,
"tags": "graphs, trees, proof-techniques"
} |
When the voltage is increased does the speed of electrons increase or does the electron density increase? | Question: I am just a high school student trying to self study, please excuse me if this question sounds silly to you.
I know that current is a product of the speed of electrons and the electron density. When current is increased it either means that the speed of electrons has increased or it means that the number density of the flowing electrons has increased.
I also know that voltage is directly proportional to current, and when voltage increases (with no change in the resistance) the current will also increase.
But my question is, when voltage increases, does an increase in the speed of electrons contribute to the increase in current, or does an increase in electron density contribute to it?
If it isn't that black and white, then in what proportion will each of the two components increase? Does it randomly increase?
Related question:Say the electron density of a circuit that lights a light bulb increases.When this happens what change will we see in the brightness of the light bulb?I know that when the speed of electrons increase the brightness increases but what will happen when the electron density increases?
Answer: In a conductive material such as a metal, for all practical purposes, current depends only on the speed of the electrons. The electron density does not change because each metal atom has already given up all of its valence electrons; releasing further electrons would require a very large energy input.
In an insulator or semiconductor, the density of charge carriers may increase during electrical breakdown. This occurs in avalanche diodes, neon lights, lightning bolts, and elsewhere. | {
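To put rough numbers on the speed side of this, here is a sketch of the drift-speed relation $I = nAvq$ solved for $v$; the wire size and current below are assumptions chosen for illustration, and the carrier density is the usual textbook value for copper:

```python
# Drift speed v from I = n * A * v * q. Since n is fixed in a metal,
# doubling the current doubles the drift speed.
n = 8.5e28     # free-electron density of copper, per m^3 (textbook value)
A = 1e-6       # assumed cross-section: 1 mm^2, in m^2
q = 1.602e-19  # elementary charge, C
I = 1.0        # assumed current, A

v = I / (n * A * q)
print(v)  # on the order of 1e-4 m/s: electrons drift remarkably slowly
```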
"domain": "physics.stackexchange",
"id": 74426,
"tags": "electricity, electric-current, voltage"
} |
Does the recent "3-sigma" result at LHCb account for the number of different tests of beyond standard model physics that have been done? | Question: Recently there has been quite a lot of media interest generated by a reported observation of beyond-standard-model physics at the LHC with a "three sigma" degree of statistical significance.
My understanding (correct me if this is wrong), is that this roughly means that, in a world where there is no physics beyond the standard model, there would be ~1/1000 chance that this experiment would see the degree of difference to the standard model prediction that it sees.
This sounds compelling, but on the other hand I assume that a huge number of different tests have been run on LHC data looking for beyond standard model physics. I would be surprised if that number isn't easily greater than 1000 over the course of the LHC's life (again, please correct me if this is wrong). So, in that light, it doesn't seem surprising that one may occasionally see such deviations from the standard model in the data, even without any actual new physics.
I understand it is possible to correct for multiple tests when calculating the statistical significance of a result, but it is not clear to me from the accounts in the media whether this has been done. So my question is: does the claimed significance of the LHCb result account for the multiple tests for beyond standard model physics that have been done on LHC data?
Answer: Look-elsewhere effect is the name for this, or one name for it. You'll sometimes see two different significance measures quoted for an effect, one with a look-elsewhere correction and one without.
The LHCb report is arXiv:2103.11769. At a glance I don't see evidence that they've considered the look-elsewhere effect. That's fine: results like this are useful when correctly interpreted. The correct interpretation is not that the effect exists, but that it's worth devoting extra resources to investigating it. Three-sigma effects usually disappear with additional data, but you never know.
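A back-of-the-envelope sketch of why the number of tests matters (taking the questioner's figure of 1000 tests and treating them as independent, which real, correlated analyses are not):

```python
import math

# One-sided p-value of a 3-sigma excess under the null hypothesis
p = 0.5 * math.erfc(3 / math.sqrt(2))
print(p)  # about 1.35e-3

# Probability of at least one such fluke among N independent tests
N = 1000
p_any = 1 - (1 - p) ** N
print(p_any)  # about 0.74: a 3-sigma excess somewhere is more likely than not
```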
Tommaso Dorigo, who is part of the CMS collaboration, says in a blog post "the odds that this is instead only a fluke are really, really high".
On the other hand, BaBar and Belle have seen evidence for a similar anomaly, at least according to Wikipedia (last paragraph of the section). That may be cause for optimism. | {
"domain": "physics.stackexchange",
"id": 77730,
"tags": "particle-physics, standard-model, beyond-the-standard-model, large-hadron-collider, data-analysis"
} |
Remove contiguous elements in array in a more functional way | Question: I try to remove elements in an array if the number of contiguous element is greater than two.
Here are the tests:
test('delete pieces if number of piece contiguous greater than two', t => {
t.deepEqual([null, null, null], removeIdenticalPieces([1, 1, 1]));
t.deepEqual([null, null, null, null], removeIdenticalPieces([1, 1, 1, 1]));
t.deepEqual([null, null, null, null, 4], removeIdenticalPieces([1, 1, 1, 1, 4]));
t.deepEqual([4, null, null, null, null], removeIdenticalPieces([4, 1, 1, 1, 1]));
t.deepEqual([1, 1, 4, 1, 1], removeIdenticalPieces([1, 1, 4, 1, 1]));
t.deepEqual([1, 1], removeIdenticalPieces([1, 1]));
});
Here is a working function:
function removeIdenticalPieces(line) {
const newLine = [];
while (line.length !== 0) {
const firstPiece = line.shift();
let nbContiguousPieces = 0;
for (let i = 0; i < line.length; i++) {
let currentPiece = line[i];
if (firstPiece !== currentPiece) {
break;
}
nbContiguousPieces += 1
}
if (nbContiguousPieces >= 2) {
newLine.push(null);
for (let j = 0; j < nbContiguousPieces; j++) {
line.shift();
newLine.push(null)
}
} else {
newLine.push(firstPiece);
for (let k = 0; k < nbContiguousPieces; k++) {
newLine.push(line.shift())
}
}
}
return newLine;
}
Does a more "functional" way exist to do the same?
Edit: thank you for your solutions.
Here is the jsperf: https://jsperf.com/removeidenticalpieces3.
I took the solution of @lilobase because it is faster and more readable.
Answer: Close to Damien's solution. If we hardcode the lookups, we can achieve a simpler version :
const lookAfter = (i, a) => (a[i] === a[i+1] && a[i+1] === a[i+2]);
const lookBehind = (i, a) => (a[i] === a[i-1] && a[i-1] === a[i-2]);
const lookAround = (i, a) => (a[i] === a[i-1] && a[i] === a[i+1]);
const deleteContiguousItems = array => array.map((item, i) => (lookAfter(i, array) || lookAround(i, array) || lookBehind(i, array)) ? null : item);
And if you inline all the declarations you'll get the simplest expressions (not the most readable)
const lookAndReturn = (val, i, a) => ((val === a[i+1] && val === a[i+2]) || (val === a[i-1] && val === a[i-2]) || (val === a[i-1] && val === a[i+1])) ? null : val;
const removeIdenticalPieces = array => array.map(lookAndReturn); | {
"domain": "codereview.stackexchange",
"id": 23836,
"tags": "javascript, functional-programming"
} |
Do heat pipes work in any orientation? | Question: I think I understand how heat pipes work, and thus the following one has me flummoxed:
It's from a typical bulk-order Chinese web site, but I've seen many similarly contorted arrangements. They are very common in the overclocking/modding community. There is never discussion as to which way the heat sink must be oriented for efficient operation.
My example seems ridiculous. With an U bend in the middle, I don't see how heat sinks like this can evaporate at the hot end and condense at the cold end. Surely the condensate will just pool in the U bend? Even with condensate wicking, it must be easier to wick downwards with gravity than upwards fighting gravity.
Are these types of heat sinks just a con? Overclocking/modding heat sinks are never spec'd with a deg. C/W rating or recommended orientation. This would not do in the engineering world. Can they work equally effectively in any orientation?
Answer: Orientation does often matter. As Carl's answer mentions, the liquid can get from the condenser to the hot interface via capillary action, but most common heat pipes are designed assuming gravity will do the job.
Capillary action is much more effective in space where there is no gravity, but produces very little flow when it has to work against gravity. Therefore, even heat pipes designed to transport the liquid via capillary action in space need to be oriented correctly on earth.
Make sure the radiator is above the hot interface, especially where there is no datasheet available. | {
"domain": "engineering.stackexchange",
"id": 3794,
"tags": "heat-transfer, cooling"
} |
Ruby Command Line Tic-Tac-Toe | Question: The following is my implementation of a command-line Tic-Tac-Toe game, written in Ruby. This was my first attempt at practicing object-oriented design principles.
require 'colored'
module TicTacToe
class Player
attr_accessor :symbol
def initialize(symbol)
@symbol = symbol
end
end
class Board
attr_reader :spaces
def initialize
@spaces = Array.new(9)
end
def to_s
output = ""
0.upto(8) do |position|
output << "#{spaces[position] || position}"
case position % 3
when 0, 1 then output << " | "
when 2 then output << "\n-----------\n" unless position == 8
end
end
output
end
def check_space(cell, sym)
if spaces[cell].nil?
place_symbol(cell, sym)
@current_turn += 1
else
puts "Space unavailable! Please select another cell"
end
end
def place_symbol(cell, sym)
spaces[cell] = sym
end
WINNING_COMBOS = [
[0, 1, 2], [3, 4, 5], [6, 7, 8],
[0, 3, 6], [1, 4, 7], [2, 5, 8],
[0, 4, 8], [2, 4, 6]
]
def winning_scenarios
WINNING_COMBOS.each do |set|
if spaces[set[0]] == spaces[set[1]] && spaces[set[1]] == spaces[set[2]]
return true unless spaces[set[0]].nil?
end
end
false
end
def tie
if !spaces.include?(nil) && !winning_scenarios
return true
end
end
end
class Game < Board
attr_reader :player1, :player2, :symbol
def initialize
super
play_game
end
def play_game
@player1 = Player.new("X")
@player2 = Player.new("O")
puts Board.new
@current_turn = 1
turn
win_message
tie_message
play_again
end
def move(player)
while !winning_scenarios && !tie
puts "Where would you like to move 'player #{player.symbol}'?".red
choice = gets.chomp.to_i
check_space(choice, player.symbol)
puts "Player #{player.symbol}'s move:".green
puts self
turn
end
end
def tie_message
puts "It's a Draw!".cyan if tie
end
def win_message
puts "Game over!".cyan if winning_scenarios
end
def turn
@current_turn.even? ? move(@player2) : move(@player1)
end
def play_again
puts "Play again? (yes or no)".yellow
answer = gets.chomp.downcase
if answer == "yes"
TicTacToe::Game.new
else
puts "Goodbye".cyan.bold
end
end
end
end
TicTacToe::Game.new
Answer: Not bad.
Unbounded recursion
Inside the Game class:
def initialize
# ...
play_game
end
def play_game
@player1 = Player.new("X")
@player2 = Player.new("O")
# ...
play_again
end
def play_again
# ...
if answer == "yes"
TicTacToe::Game.new
else
# ...
end
end
You're using recursion to repeat the game, which can get arbitrarily deep, leading to a stack overflow error. Use a loop instead. Also, to avoid violating the Single Responsibility Principle, put this loop outside the Game class whose sole responsibility should be to play a game.
Don't call play_again in play_game, and move play_again outside the class. The main code should be:
def play_again?
puts "Play again? (yes or no)".yellow
answer = gets.chomp.downcase
return answer == "yes"
end
loop do
TicTacToe::Game.new
unless play_again?
puts "Goodbye".cyan.bold
break
end
end
(I renamed play_again to play_again?. It's conventional to use a question mark in names of methods that return a boolean)
Unbounded recursion 2
def turn
@current_turn.even? ? move(@player2) : move(@player1)
end
def move(player)
while !winning_scenarios && !tie
# ...
turn
end
end
This recursion is not bounded either (it's not bounded by the number of spaces in the board, because a function call occurs for (rejected) invalid moves too). Use a loop instead:
def play_game
# ...
while !winning_scenarios && !tie
turn
end
# ...
end
def turn
@current_turn.even? ? move(@player2) : move(@player1)
end
def move(player)
# ...
# Don't call turn here
end
Object oriented design
class Board; ...; end
class Game < Board; ...; end
A game is not a board. A game has a board. Use composition instead of inheritance:
class Game # NOTE: Don't inherit Board
def initialize
@board = Board.new # NOTE
play_game
end
def play_game
# ...
while !@board.winning_scenarios && !@board.tie
turn
end
# ...
end
def move(player)
# ...
space_available = @board.check_space(choice, player.symbol)
@current_turn += 1 if space_available
puts "Player #{player.symbol}'s move:".green
puts @board # NOTE
end
def tie_message
... if @board.tie
end
def win_message
... if @board.winning_scenarios
end
# ...
end
Note I moved @current_turn += 1 to Game. A game has a current turn, a board doesn't. Modify check_space so it returns a boolean.
Encapsulation
Make internal methods private, so they aren't visible outside the class:
class Board
def initialize; ...; end
def to_s; ...; end
def check_space(cell, sym); ...; end
def winning_scenarios; ...; end
def tie; ...; end
private
WINNING_COMBOS = [
[0, 1, 2], [3, 4, 5], [6, 7, 8],
[0, 3, 6], [1, 4, 7], [2, 5, 8],
[0, 4, 8], [2, 4, 6]
]
def place_symbol(cell, sym); ...; end
end
class Game
def initialize; ...; end
def play_game; ...; end
private
def move(player); ...; end
def tie_message; ...; end
def win_message; ...; end
def turn; ...; end
end
Remove this line which exposes internal fields of Game: (you never actually use Game#symbol, by the way)
attr_reader :player1, :player2, :symbol
And this line from Board: (then change every occurrence of spaces in Board to @spaces)
attr_reader :spaces
Board methods
def check_space(cell, sym)
if @spaces[cell].nil?
place_symbol(cell, sym)
else
puts "Space unavailable! Please select another cell"
end
end
Single Responsibility Principle: Board should not do any printing, as it's not its core responsibility. It's better to return a boolean instead and let Game do the printing.
Naming: This method both checks if the space is free and places a symbol. Rename it to place_symbol_if_free.
Naming 2: Use position instead of cell to be consistent with other methods.
Consider splitting this to two methods: space_free?(position) and place_symbol(position, sym) (already exists) and then call both from Game.
WINNING_COMBOS = [
[0, 1, 2], [3, 4, 5], [6, 7, 8],
[0, 3, 6], [1, 4, 7], [2, 5, 8],
[0, 4, 8], [2, 4, 6]
]
Put this inside winning_scenarios, the only method where it's used.
def winning_scenarios
WINNING_COMBOS.each do |set|
if @spaces[set[0]] == @spaces[set[1]] && @spaces[set[1]] == @spaces[set[2]]
return true unless @spaces[set[0]].nil?
end
end
false
end
Rename winning_scenarios to game_won?. Or to winner and return the winner's symbol
Use Array#find instead of looping manually, so the method can return the winning symbol (or nil when there is no winner).
Use Array#map to obtain the symbols at the locations of a set, instead of repeatedly accessing @spaces.
Result:
def winner
winning_set = WINNING_COMBOS.find do |set|
symbols = set.map { |position| @spaces[position] }
symbols[0] && symbols[0] == symbols[1] && symbols[1] == symbols[2]
end
winning_set && @spaces[winning_set[0]]
end
def tie
if !@spaces.include?(nil) && !winning_scenarios
return true
end
end
Rename to tie?, and simply return the boolean:
def tie?
return !@spaces.include?(nil) && !winner
end
I also suggest extracting the first part to a new private method full?.
to_s can be simplified by using functional programming. See my implementation of as_string in this answer.
Game methods
play_game
puts Board.new
Change this to puts @board.
while !@board.winner && !@board.tie?
Looks like Board is missing a game_over? method.
win_message
tie_message
These method names are misleading. Each one conditionally prints a message. Merge them to a single print_game_result method.
turn
def turn
@current_turn.even? ? move(@player2) : move(@player1)
end
The call to move is duplicated because the method does two things: (1) determine whose turn it is (2) call move.
It's better to do only the first one:
def current_player
@current_turn.even? ? @player2 : @player1
end
Then use move(current_player) in play_game.
move
def move(player)
puts "Where would you like to move 'player #{player.symbol}'?".red
choice = gets.chomp.to_i
space_available = @board.check_space(choice, player.symbol)
@current_turn += 1 if space_available
puts "Player #{player.symbol}'s move:".green
puts @board
end
I'd move the last two puts statements to play_game, because they're not part of making a move.
If you split check_space as I suggest earlier, you can make a loop that gets input until Board#space_free? returns true and then call Board#place_symbol. Otherwise, you'll have to rename this method to try_make_a_move.
initialize
Don't call play_game. Constructors should only initialize an object, and should not do IO. Instead, call play_game in the main code (TicTacToe::Game.new.play_game).
Further Improvements
Say who won.
Use digits 1 to 9 when printing the board and getting user input. It's hard to tell 0 and O apart. | {
"domain": "codereview.stackexchange",
"id": 16747,
"tags": "ruby, game, console, tic-tac-toe"
} |
How to use smart pointers with SDL2 (SDL_RWops)? | Question: I want to write a save-file reading function with smart pointers and SDL2. I have little experience with smart pointers and just want to ask whether my code is good and correct, with no memory leaks.
this is my code:
std::shared_ptr<SDL_RWops> saveFile;
--
int GameEngine::ReadSave() {
int Highlevel;
//Open file for reading in binary
saveFile = std::shared_ptr<SDL_RWops>(SDL_RWFromFile( "data/save.txt", "r" ), SDL_RWclose);
//File does not exist
if( saveFile.get() == NULL ){
printf( "Warning: Unable to open file! SDL Error: %s\n", SDL_GetError() );
Highlevel = 0;
//Create file for writing
saveFile = std::shared_ptr<SDL_RWops>(SDL_RWFromFile( "data/save.txt", "w+" ), SDL_RWclose);
if( saveFile.get() != NULL ){
printf( "New save file created!\n" );
//Initialize data
SDL_RWwrite( saveFile.get(), &Highlevel, sizeof(int), 1 );
}
else {
printf( "Error: Unable to create savefile! SDL Error: %s\n", SDL_GetError() );
}
}
else{
//Load data
printf( "Reading save file...!\n" );
SDL_RWread( saveFile.get(), &Highlevel, sizeof(int), 1 );
}
return Highlevel;
}
Answer: Your idea to wrap the result of SDL_RWFromFile in a smart pointer is a good idea. I recommend a few improvements:
Factor out the opening-and-wrapping operations into a named function. For example:
auto OpenFile(const char *fname, const char *mode) {
return std::shared_ptr<SDL_RWops>(SDL_RWFromFile(fname, mode), SDL_RWclose);
}
if( saveFile.get() == NULL ){
Your whitespace is all messed up here; and also, you should generally treat smart pointers the same way you'd treat regular pointers. Avoid using .get() and .reset() and so on, unless you have a specific reason you need to emphasize the "object-ness" of the smart pointer. Keep it simple by writing:
saveFile = OpenFile("data/save.txt", "r");
if (saveFile == nullptr) {
printf("Warning: Unable to open savefile! SDL Error: %s\n", SDL_GetError());
Highlevel = 0;
saveFile = OpenFile("data/save.txt", "w+");
if (saveFile == nullptr) {
printf("Error: Unable to create savefile! SDL Error: %s\n", SDL_GetError());
} else {
printf("New save file created!\n");
SDL_RWwrite(saveFile.get(), &Highlevel, sizeof(Highlevel), 1);
}
} else {
printf("Reading save file...!\n");
SDL_RWread(saveFile.get(), &Highlevel, sizeof(Highlevel), 1);
}
I notice that you are not checking the return value of SDL_RWread for errors. That's not great.
Have you considered throwing an exception on failure, instead of using printf to report directly to the user? This would certainly streamline your code:
if (auto f = OpenFile("data/save.txt", "r")) {
printf("Reading save file...\n");
if (SDL_RWread(f.get(), &Highlevel, sizeof(Highlevel), 1) != 1) {
throw std::runtime_error(std::string("Save file format error: ") + SDL_GetError());
}
} else if (auto f = OpenFile("data/save.txt", "w+")) {
printf("Writing new save file...\n");
Highlevel = 0;
SDL_RWwrite(f.get(), &Highlevel, sizeof(Highlevel), 1);
} else {
throw std::runtime_error(std::string("Unable to create save file: ") + SDL_GetError());
}
Finally, shared_ptr is overkill here because you never have more than one "owner" of the open file. You should certainly rewrite OpenFile to return a unique_ptr — a type which you can still implicitly convert to shared_ptr if you do ever need shared ownership of the file. See "OpenSSL client and server from scratch, part 1"; it's basically this:
struct RWCloser { void operator()(SDL_RWops *p) const { SDL_RWclose(p); } };
using UniqueRWops = std::unique_ptr<SDL_RWops, RWCloser>;
UniqueRWops OpenFile(const char *fname, const char *mode) {
return UniqueRWops(SDL_RWFromFile(fname, mode));
} | {
"domain": "codereview.stackexchange",
"id": 40137,
"tags": "c++, pointers, sdl"
} |
How to use configuration files for identical robots? | Question:
I have a few configuration files containing calibrations, configurations, camera_infos for a set of identical robots.
Currently, the launch files will read in the computer's hostname as an environment variable so it knows to look in code repo/{$HOSTNAME}/config for that robot's parameters.
This works well enough for the runtime system, but then I'm not sure what's the best way to use those configurations when playing back bags recorded on those robots. Should I change the environment variable to be something like $ROBOT_HOSTNAME so that the system doesn't sometimes set it? This would allow the person to change the variable on their development machine before playing a bag from a specific robot. Is there something better I could do?
This question is related to my other question about recording configuration, but this one also has the runtime launch component.
Originally posted by Chad Rockey on ROS Answers with karma: 4541 on 2012-02-28
Post score: 1
Answer:
Environment variables are awkward. I like to avoid them, when I can.
How about collecting all the configuration data for each robot in a separate package? Store the relevant package name in a configuration file under /etc/ or ~. The per-user settings could even override the per-system one.
Originally posted by joq with karma: 25443 on 2012-02-28
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 8421,
"tags": "ros, best-practices"
} |
Why does non-baryonic matter give structure formation a head start? | Question: This image is from the textbook by Ryden. It shows that the density perturbation was able to grow earlier if there were non-baryonic dark matter, why is that?
Answer: This is because non-baryonic dark matter does not interact with radiation, hence it can collapse and form potential wells before baryonic matter can (and the potential wells attract baryonic matter).
See Wiki (most relevant parts highlighted):
Structure formation refers to the period after the Big Bang when density perturbations collapsed to form stars, galaxies, and clusters. Prior to structure formation, the Friedmann solutions to general relativity describe a homogeneous universe. Later, small anisotropies gradually grew and condensed the homogeneous universe into stars, galaxies and larger structures. Ordinary matter is affected by radiation, which is the dominant element of the universe at very early times. As a result, its density perturbations are washed out and unable to condense into structure. If there were only ordinary matter in the universe, there would not have been enough time for density perturbations to grow into the galaxies and clusters currently seen.
Dark matter provides a solution to this problem because it is unaffected by radiation. Therefore, its density perturbations can grow first. The resulting gravitational potential acts as an attractive potential well for ordinary matter collapsing later, speeding up the structure formation process. | {
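The size of this head start can be sketched with rough numbers (the redshifts of matter-radiation equality and recombination below are standard approximate values, and the linear growth law $\delta \propto a$ is itself only an approximation valid deep in matter domination):

```python
# Linear matter perturbations grow roughly as delta ∝ a once decoupled from
# radiation. Dark matter can start growing near matter-radiation equality
# (z ≈ 3400); baryons are held back by radiation until recombination (z ≈ 1100).
z_eq, z_rec = 3400, 1100
growth_dm = 1 + z_eq        # growth factor from z_eq to today, since a = 1/(1+z)
growth_baryons = 1 + z_rec  # growth factor from z_rec to today
print(growth_dm / growth_baryons)  # about 3x more linear growth with dark matter
```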
"domain": "physics.stackexchange",
"id": 92567,
"tags": "cosmology, space-expansion, cosmological-inflation, structure-formation"
} |
Defining Custom Message Types for Rosjava Android | Question:
Hello!
I am writing an android app in JellyBean to subscribe to a rostopic. I am publishing a custom message type from an ubuntu virtual machine using groovy. I have successfully published and subscribed to the message between 2 VMs, but have issues when receiving the message type on my tablet.
I correctly configured the .msg files in my ros package, but am unsure how to define classes within my app as descriptors for ros messages.
Does anyone have information on configuration and declaration of custom messages in an android app using rosjava?
Thanks!
Originally posted by Srogacki on ROS Answers with karma: 74 on 2013-02-11
Post score: 0
Answer:
After you have correctly made your .msg files in your package, you will need to rebuild at least rosjava with ./gradle as shown in the documentation.
This will automatically include your custom message in the Java library rosjava_messages-0.0.0-SNAPSHOT.jar, which will need to be included in your Java project.
Be careful as if you include a java library file directly in your /libs folder of your project it will conflict with other copies of that file, such as can be the case if the gingerbread_mr2 library is also referenced. If you are developing in java to use android core, then I suggest that you should also rebuild android_core as per the documentation, and just reference the gingerbread_mr2 project as a library in the project_properties->android_settings tab (if using eclipse)
You can then import them into your code like any other message. Just use:
import my_msg_pkg.msg
replacing "my_msg_pkg" with the package your message is defined in and "msg" as your message type.
Originally posted by PeterMilani with karma: 1493 on 2013-02-11
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Srogacki on 2013-02-15:
awesome! thanks.
Comment by Anis on 2015-02-14:
Hello
I am having the same issue as I successfully built my messages in my rosjava package, but now I am not able to link the rosjava package that contains the messages to my Android application. When I add
compile project (':my_package') it tells me that the package cannot be found. Any hint? | {
"domain": "robotics.stackexchange",
"id": 12839,
"tags": "ros, android, rosjava, msg"
} |
When is an ant colony at its loudest? | Question: I want to make a sound recording of an ant colony located in a wood pile in a German forest (I don’t know the exact species).
To optimise this, I would like to select a time for this when the ant colony is at the height of its activity, i.e., loudest.
My naïve guess would be that noon in summer, some time after the last rainfall, is best, but I couldn't find anything to confirm or go beyond this.
Thus I am asking: When is an ant colony loudest, more specifically:
What time of the year?
What time of the day?
What weather conditions?
Answer: I believe you speak of some species of Formica spp., such as Formica polyctena (nest example)
You should mind that (i) ants are not particularly noisy insects; (ii) ant nests are 3-dimensional structures within which ants are heterogeneously distributed. So considering a microphone somewhere on the nest where you happened to see more ants, mostly you will be picking up noises from foragers stumbling into debris while moving around. And that only when that region of the nest happens to be most active, which was your original concern. In case I am wrong and you're interested in some smaller ant species (e.g. Solenopsis fugax or Lasius flavus) then you will probably record few if any significant sounds resulting from ants' activities.
Thus my first advice if you're focusing on foragers moving around the nest surface is that you record them in the afternoon of your hottest days, particularly if they're refurbishing their nest or upon disturbance. If you leave the microphone for extended periods try to find out where most foragers are coming out from (and pray they'll not damage the equipment) and leave it there in the afternoon of a dry, hot day.
Now, I am not sure about your ultimate goal, but in case you want to actually hear the ants instead of their stumbling around, then you should focus on having a tiny (protected) microphone inserted into a nest chamber. It is hard to say when the insides of a nest of unknown species will be particularly active, but usually during a fair day, under the sunlight exposure following a wet night. Finding the best region to record ants stridulating inside their nest will depend on finding the best nest location(s) and equipment.
Good luck! | {
"domain": "biology.stackexchange",
"id": 8803,
"tags": "entomology, ant, sociality"
} |
What kind of a fly is this? | Question: I have a couple of new pictures of insects which I am really curious about. This is one of them. I found this fly near my cottage in Prague, Czech Republic.
It is fluffy with very long proboscis (almost half of an inch in length).
Answer: This is indeed a fly from the genus Bombylius. However, I think this is not the common B. major, but Bombylius discolor.
The reason is that your specimen clearly has small dots on its wings. These dots are characteristic of Bombylius discolor; B. major lacks them. Furthermore, B. major has a very distinct black part at the front of its wings. In discolor, this black part is less pronounced or even hardly visible.
Picture from the Dutch Wikipedia:
https://nl.m.wikipedia.org/wiki/Gevlekte_wolzwever | {
"domain": "biology.stackexchange",
"id": 9113,
"tags": "species-identification, diptera"
} |
Why shouldn't photons be able to curve spacetime? | Question: I have read this question. It was asked if photons are able to curve spacetime. But if classical electromagnetic fields can curve spacetime (due to the energy contained in the fields contributing to the energy-momentum tensor), why shouldn't photons be able to do this? If the em field curves spacetime then photon fields should be able to as well. After all, they constitute the classical em field when the number of photons is large.
I mean, what would be the reasons to ask if a photon would not be able to make spacetime curve?
One thing I can think of (thanks to @WolphramJohnny): a photon has a different energy in different frames of reference, so if it curves spacetime in one frame it curves spacetime differently (more or less) in other frames of reference. This holds true also for massive particles, but massive particles have a rest mass, while photons have not, so there is no curvature due to mass, only curvature due to motion, which gives rise to linear frame-drag. Though this can also be said for classical em waves. So photons must emit gravitons (or small spacetime distortions) like a speeding boat emits bow waves (if it speeds). But how can this be different for different frames? In one frame the photon will not even emit gravitons (no energy).
Can we say that photons can't exist in general relativity in the first place (because they are quantum objects)? Can only fermions emit gravitons (because they are quantum objects)? Of course, you can say that we need a quantum theory of gravity (which doesn't necessarily mean that gravitons are involved, though; spacetime itself can also be seen as quantized), but the theory has to involve an interaction with a graviton field.
Answer:
There's no frame where photon does not exist. You would need infinite velocity to drive its energy to zero
The metric differs in different frames. The tranformation is
\begin{equation}
\tilde{g}_{\mu\nu} = \frac{\partial x^\alpha}{\partial \tilde{x}^\mu} \, \frac{\partial x^\beta}{\partial \tilde{x}^\nu} \, g_{\alpha\beta}
\end{equation}
The Lorentz boost applied to the gravitational wave spacetime transforms it into a gravitational wave with a boosted momentum.
The monochromatic classical electromagnetic plane wave deforms spacetime so that it is accompanied by the monochromatic gravitational plane wave (see pp-wave solutions). For more complex electromagnetic fields the nonlinearities start playing their role.
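For reference, the pp-wave geometries mentioned here have, in Brinkmann coordinates, a line element of the form (conventions for signs and coordinate names vary between authors):

```latex
ds^2 = H(u, x, y)\, du^2 + 2\, du\, dv + dx^2 + dy^2
```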
When you construct a quantum theory of gravity you start with the linearized approximation where all interactions are infinitely weak. This gives you a free field theory that you can easily quantize, and you consider interactions as a perturbation (this works for gravity as long as the energies and non-linearities are sufficiently suppressed and you can neglect the infinite number of terms required for the renormalization). However, even then there is some mixing between the gravitational and matter degrees of freedom. So even in such a limit you see that photons are not just electromagnetic waves: they are accompanied by gravitational waves.
The current cosmological model works well, and it tells us that the chemical composition of the universe is very sensitive to the rate of the universe's expansion at temperatures of $\sim \mathrm{MeV}$. At these temperatures the significant gravity was caused by the thermal gas of photons. This is not something you can describe well with a classical electromagnetic wave.
No free particle emits anything in the absence of other particles to interact with, unless it decays. The electron doesn't emit photons and slow down; it keeps flying by inertia. The same goes for a free photon in empty space: it doesn't emit gravitons. However, that doesn't mean that it does not curve spacetime. An electron does not emit photons, but it produces an electromagnetic field around itself.
"domain": "physics.stackexchange",
"id": 79784,
"tags": "general-relativity, photons, spacetime, curvature, stress-energy-momentum-tensor"
} |
How to generate a quantum circuit from the quantum state $|1000\rangle+|0100\rangle+|0010\rangle+|0001\rangle$? | Question: I am trying to understand the steps of how make a state preparation circuit from a quantum state.
To make my question clearer: for example, for the state $\frac{|00\rangle+|11\rangle}{\sqrt{2}}$, the state preparation circuit is:
I want to draw the state preparation circuit for the quantum state $\frac{1}{2}( |1000\rangle + |0100\rangle + |0010\rangle + |0001\rangle)$.
I was following this lecture.
I am not able to completely understand the steps used to draw circuits. It would be great if someone can help me with this question.
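For reference, the two-qubit Bell circuit above (a Hadamard followed by a CNOT) can be checked by multiplying out its gate matrices. This is a plain NumPy sanity check with qubit 0 as the left tensor factor, not a Qiskit workflow; the same bookkeeping gets unwieldy for the 4-qubit W state, which is why synthesizing the circuit from the statevector (as the answer below suggests) is attractive.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
I2 = np.eye(2)
# CNOT with the first qubit as control, in the basis |00>, |01>, |10>, |11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

zero = np.array([1, 0, 0, 0])                  # |00>
bell = CNOT @ np.kron(H, I2) @ zero            # (|00> + |11>)/sqrt(2)
```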
Answer:
I have added an image for your reference. If you have to create a state but don't know the circuit, you can always initialize the state and then, using Qiskit's transpile, decompose the circuit in terms of basis gates. | {
"domain": "quantumcomputing.stackexchange",
"id": 2976,
"tags": "quantum-state, circuit-construction, textbook-and-exercises"
} |
Understanding a formula on article | Question: I just read an article "Statistical approach for figurative sentiment analysis on Social Networking Services: a case study on Twitter", which provide an algorithm to analyze tweets, and this article includes 2 formulas which I don't really understand.
Link to the article
I hope maybe someone here can help me.
The first formula is the (4) formula (Page 5)
The second formula is the (6) formula (Page 8)
I will be very thankful if someone will help me with this.
Examples will be most welcome! :)
Answer: Each tweet and each cluster is represented as an $m_k$-dimensional vector. The distance between a tweet $t_k$ and a cluster $\delta$ is then
$$
dis(t_k,\delta) = 1 - \frac{\langle t_k, \delta \rangle}{\|t_k\| \|\delta\|},
$$
where $\langle a,b \rangle = \sum_{i=1}^{m_k} a_i b_i$ is the inner product and $\|a\| = \sqrt{\langle a,a \rangle}$ is the norm. This explains equation (4).
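A direct transcription of equation (4), assuming the tweet and the cluster are given as equal-length numeric vectors (the vectors themselves would come from the paper's feature extraction):

```python
import numpy as np

def cosine_distance(t, delta):
    # equation (4): 1 - <t, delta> / (||t|| * ||delta||)
    return 1.0 - np.dot(t, delta) / (np.linalg.norm(t) * np.linalg.norm(delta))
```

Identical directions give distance 0 and orthogonal vectors give distance 1.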
The definition of $P(S_t|w)$ is given below (6):
$P(S_t|w)$ is the
probability that a term has a score with the given tweet score.
This is the number of tweets containing $w$ which have score $S_t$ divided by the total number of tweets containing $w$.
The rest of the formula is hopefully self-explanatory ($\times$ is just ordinary multiplication).
If you have any more questions, I suggest contacting the authors. | {
"domain": "cs.stackexchange",
"id": 8622,
"tags": "algorithms, algorithm-analysis"
} |
How to distinguish primary and secondary beam in FEM formulation? | Question: Assume that I am writing a new FEM software for structural analysis, it shouldn't really matter whether this is a frame or frame+shell element analysis tool.
I understand that primary and secondary beams have their own function, and are designed for different use.
Primary Beam: A horizontal beam connecting columns (simply supported
or shear connected.) Function: It will transfer the load from
secondary beam(if present) to the columns.
Secondary Beam: A horizontal beam connecting primary beams (simply
supported or shear connected.) Function: It will transfer the load to
the primary beam and not directly connected to the columns.
In FEM, everything is just frame element, and all the beams and columns are equal, so whether one beam should resist more loads ( ie primary beam) than the other is completely up to the outcome of FEM procedure. But as engineers, sometimes we just want this one beam to take more loads than the neighboring ones.
How can I modify my FEM engine to do what I want?
Answer: You have some options:
Remove/Ignore the secondary beams from your analysis. If they aren't there to bear major loads, then remove them completely and perform your analysis on the primary beams in isolation. Alternatively, this could be approximated by reducing the Young's modulus of the secondary beams, to artificially suppress their effect on the displacements of the primary beams under load.
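The Young's-modulus trick in option 1 can be illustrated with a toy model (my own sketch, not part of the original answer): if both beams are assumed to share the same deflection, they act like parallel springs and split the load in proportion to stiffness, which is proportional to E. The stiffness and load values below are made up.

```python
# Toy load-sharing model: two beams as parallel springs with a common
# deflection; each carries load in proportion to its stiffness.
def load_share(k_primary, k_secondary, total_load):
    k_total = k_primary + k_secondary
    return (k_primary / k_total * total_load,
            k_secondary / k_total * total_load)

equal = load_share(10.0, 10.0, 100.0)   # equal stiffness: 50/50 split
soft = load_share(10.0, 0.1, 100.0)     # secondary E reduced 100x: primary takes ~99%
```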
Perform nonlinear contact analysis Set gaps between the primary and secondary beams. Load the primary beams and set contact conditions between the two. The secondary beams will only start bearing load after the primary beams have deformed and make contact with the secondary beams.
2. Explanation
Using FE methods, very few models can be analysed with linear static modelling; they are simply too complex. A linear static model can be computed very fast and usually needs to be calculated only once. Why only once? Consider that a linear static model's deformation will be linearly proportional to the force applied: if, for example, an applied load x leads to a deformation y, it's safe to assume that a load of 2x will deform the part by 2y, etc.
Very few models can be accurately modelled in this way because in real life there are many causes of nonlinearity.
Typically 3 causes of nonlinearity are considered: Material nonlinearity (look at the stress strain for rubber or steel - both nonlinear); geometric nonlinearity (deformations can alter the location and orientation of loads and internal strains) and contact nonlinearity. Contact nonlinearity is best explained with a simple diagram:
Thus by providing a physical gap between the primary and secondary beams and setting up contact conditions, your FEM software will necessarily consider a contact nonlinearity condition. This might not accurately describe your model, but applying this method could force the simulation to only load the secondary beams after the primary beams have deformed by a certain margin. | {
"domain": "engineering.stackexchange",
"id": 851,
"tags": "finite-element-method, structural-analysis"
} |
How does an encoder-decoder network work? | Question: Let's say I trained an encoder-decoder network on a cat dataset using reconstruction error as loss function. The network is fully trained and the decoder is able to reconstruct good cat images.
Now what if I use the same network and input a dog image. Will the network be able to reconstruct dog image or not?
Answer: It probably won't. The whole point of the training was to encode cat images and thus the network has tried to learn what information is the most necessary to keep to ensure a low reconstruction error (i.e. what separates one cat from another) and what information can it throw away (i.e. what characteristics appear in all cat images and can be discarded).
That being said, a dog image would produce a fairly decent reconstruction because most features are shared between both animals. If you try, however, to reconstruct something completely different (e.g. a car) then it would probably fail. | {
"domain": "datascience.stackexchange",
"id": 5913,
"tags": "neural-network, deep-learning, autoencoder"
} |
Displaying alert from an action | Question: This code works great, but I think it can be reduced.
@if (Session["Success"] != null)
{
<text>
<div class="alert alert-success">@Session["Success"]</div>
</text>
Session.Remove("Success");
}
else if (Session["Error"] != null)
{
<text>
<div class="alert alert-danger">@Session["Error"]</div>
</text>
Session.Remove("Error");
}
Answer: It's a bad idea to modify the session state from inside a view or a partial view. Instead, I would create an alert view model
public class AlertViewModel
{
public AlertType AlertType { get; set; }
public string Message { get; set; }
}
public enum AlertType
{
None,
Success,
Error
}
and use that from inside the view/partial view, for example:
@if (Model.AlertType != AlertType.None)
{
string alertClass = Model.AlertType.ToString().ToLowerInvariant();
<text>
<div class="alert alert-@alertClass">@Model.Message</div>
</text>
}
This way, you avoid modifying the session where someone else may not expect it, and you have a cleaner view.
About populating the view: I do not know why you are using the session, but if the value is only needed for one request, you should consider using TempData instead. Either way, just populate the view model in the controller and remove the session value if you need to. | {
"domain": "codereview.stackexchange",
"id": 5345,
"tags": "c#, .net, asp.net-mvc, razor"
} |
What does it mean when roc curves intersect at a point? | Question: I am working with a data set and I have obtained the following roc curve:
As you can see, black and Asian ethnicity cross at one point (green and purple lines). Does this have any significance?
Could any conclusion be drawn from this?
Note that I am dealing with the following datasets:
-transrisk_performance_by_race_ssa
-transrisk_cdf_by_race_ssa.csv
-totals.csv
In order to observe whether fairness affects profits.
Answer: From a fairness point of view one might argue that one curve dominating another curve may be an indication of a model being potentially biased towards the class with the dominant ROC curve. However, if two ROC curves intersect it implies that none of the two dominates the other one. (Please note the direction of above stated conditionals. Also this is intentionally vaguely phrased since the conclusions to be drawn are limited.)
Another (not insightful) observation is that in such intersecting cases one cannot generally tell from the ROC curve plot which of the two has a higher ROC AUC (to phrase it differently: an ROC curve dominating another ROC curve implies a higher ROC AUC). | {
"domain": "datascience.stackexchange",
"id": 10713,
"tags": "machine-learning, r, rstudio, roc"
} |
MySQL request in Python | Question: In my code I have three requests to a MySQL database:
@app.route('/private', methods=['POST'])
def private():
login = request.form['login']
if login is None or not login:
return jsonify(data='Incorrect URL')
try:
c, conn = cursor_connection()
c = conn.cursor()
c.execute("SELECT accounts_info_uid "
"FROM auth_info WHERE login='{}' ".format(login))
id = c.fetchall()
if not id:
return jsonify(data='Incorrect login')
c.execute("SELECT * FROM boxes_id AS tb1 LEFT JOIN"
" accounts_info AS tb2 ON tb2.boxes_ids=tb1.uid "
# "LEFT JOIN electricity_info as tb3 ON tb3.boxes_id_uid=tb1.uid"
" WHERE tb2.uid={} ".format(id[0][0]))
uid, mc_address, working_status, activation_status, _,\
first_name, second_name, registration_date, phone, email, boxes_id = c.fetchall()[0]
c.execute(" SELECT consumed_electricity "
"FROM electricity_info "
"WHERE boxes_id_uid={} ".format(boxes_id))
consumed_electricity = [float(val[0]) for val in c.fetchall()]
c.close()
conn.close()
except Exception as e:
logger.error(msg='Cannot execute /private {}'.format(e))
return str(e)
I fetched a list from electricity info by primary key in boxes_id (so in electricity_info it is called boxes_id_uid).
Structure of pk in my tables:
auth_info --------> pk is accounts_info_uid
boxes_id ----------> pk is uid
accounts_info ------> pk is uid and it is connected to table boxes_id by field boxes_id
electricity_info ------> pk is boxes_id_uid
I think it can be optimized in one SQL request. If so, can you tell me how to achieve that?
Answer:
DON'T string-format your SQL queries; let the cursor do it for you.
As @Gareth Rees said, this is not secure!
c.execute("SELECT accounts_info_uid "
"FROM auth_info WHERE login='{}' ".format(login))
As taken from the docs, this would be the proper way to execute statements with sql:
cursor.execute("SELECT accounts_info_uid FROM auth_info WHERE login = %s", (login,))
You could use a context manager over the connection
Using a context manager would improve the readability, see PEP343
There are some resource out there how to implement such a thing
Make the login a separate function
I think it can be optimized in one SQL request?
Yes, but without the database itself it is hard to test, a github repo or some more information would help.
However, you could improve your google-fu: simply typing "mysql join 3 tables" would have given you the answer you are looking for.
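For instance, the three lookups might collapse into a single parameterized query along these lines. Table and column names are taken from the question, but the join keys are an assumption about the schema and may need adjusting:

```python
# Sketch of collapsing the three lookups into one parameterized query.
QUERY = (
    "SELECT tb1.*, tb2.*, tb3.consumed_electricity "
    "FROM auth_info AS a "
    "JOIN accounts_info AS tb2 ON tb2.uid = a.accounts_info_uid "
    "JOIN boxes_id AS tb1 ON tb1.uid = tb2.boxes_ids "
    "LEFT JOIN electricity_info AS tb3 ON tb3.boxes_id_uid = tb1.uid "
    "WHERE a.login = %s"
)

def fetch_private(cursor, login):
    # the driver escapes the parameters, so no string formatting is needed
    cursor.execute(QUERY, (login,))
    return cursor.fetchall()
```

One row per electricity reading comes back, so the account fields repeat across rows; that mirrors the list the original code built.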
https://stackoverflow.com/questions/3709560/joining-three-tables-using-mysql | {
"domain": "codereview.stackexchange",
"id": 29318,
"tags": "python, mysql, database"
} |
Did the new image of black hole confirm the general theory of relativity? (M87) | Question: How can we tell just by looking at the image? I heard the news saying "Einstein was right! Black hole image confirms GTR." The image is so lacking in detail that I can't even draw any solid conclusions from it. Please correct me if I'm wrong on any aspect, and please provide a link if this question sounds like a duplicate...
Answer: I think it's fair to say that the EHT image definitely is consistent with GR, and so GR continues to agree with experimental data so far. The leading paper in the 10th April 2019 issue of Astrophysical Journal letters says (first sentence of the 'Discussion' section):
A number of elements reinforce the robustness of our image and the conclusion that it is consistent with the shadow of a black hole as predicted by GR.
I'm unhappy about the notion that this 'confirms' GR: it would be more correct to say that GR has not been shown to be wrong by this observation: nothing can definitively confirm a theory, which can only be shown to agree with experimental data so far.
This depends of course on the definition of 'confirm': above I am taking it to mean 'shown to be correct' which I think is the everyday usage and the one implied in your question, and it's that meaning I object to. In particular it is clearly not the case that this shows 'Einstein was right': it shows that GR agrees with experiment (extremely well!) so far, and this and LIGO both show (or are showing) that GR agrees with experiment in regions where the gravitational field is strong.
(Note that, when used informally by scientists, 'confirm' very often means exactly 'shown to agree with experiment so far' and in that sense GR has been confirmed (again) by this observation. I'm assuming that this is not the meaning you meant however.)
At least one other answer to this question is excellent and very much worth reading in addition to this. | {
"domain": "physics.stackexchange",
"id": 57486,
"tags": "general-relativity, black-holes, astronomy"
} |
What's wrong with using tin in medicinal chemistry? | Question: I just read this In the Pipeline post and I was slightly confused by a statement on the use of tin. Lowe reports on this paper, which describes a synthetic route to spiro heterocycles using tin compounds, and caveats the finding with the comment that
No one's crazy about using tin, but the transformation is too useful to pass up.
Why is the use of tin undesirable in synthetic chemistry?
Answer: Organotin compounds are rather toxic. They are also persistent in the environment and have a long biological half-life.
The problem is that trialkyltin byproducts from your reaction are difficult to separate from the product. In the lab, this is painful and (usually) involves multiple columns, but when you want to get a drug past the FDA into the clinic you have to meet purity standards for heavy metal contamination. To a medicinal chemist, a transformation that won't make it into pilot plant and production is useless. | {
"domain": "chemistry.stackexchange",
"id": 2381,
"tags": "metal, toxicity, organometallic-compounds, medicinal-chemistry"
} |
ROS navigation stack VS OMPL | Question:
I am starting to look at navigation options to use for my humanoid robot (nao) and I would like some inputs from the community on using ROS navigation VS OMPL. Both of them seem to be a good solution at the moment, but I can't find any comparison of advantages/disavantages that would help guide me to make the best choice. Suggestion on other navigation methods are also welcome.
Originally posted by TopSecret on ROS Answers with karma: 37 on 2014-02-21
Post score: 0
Answer:
OMPL is a planning library that includes only planning algorithms. There is no collision checking, nothing that maintains a map or that interacts with controllers. The ROS navigation stack includes planning, map representation, collision checking, and components for executing the computed plans.
I guess it depends on what problem you are trying to solve. If you have a map representation and a means to implement collision checking, OMPL can be your solution. If you want to use more components from ROS, the navigation stack would be better suited.
Another detail to consider is that the navigation stack is geared for 2d planning and uses search-based algorithms (sbpl library) to compute optimized solutions. OMPL includes sampling based algorithms which will produce different solutions for every execution, but the space in which the motion plan is computed is easily changed. (see http://ompl.kavrakilab.org/planners.html for the different algorithms included in OMPL)
Originally posted by isucan with karma: 1055 on 2014-02-22
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 17048,
"tags": "navigation, ompl"
} |
Are all known algorithms for solving NP-complete problems constructive? | Question: Are there any known algorithms that correctly output "yes" to an NP-complete problem without implicitly generating a certificate?
I understand that it is straightforward to turn a satisfiability oracle into a satisfying-assignment finder: just iterate over the variables, each time asking the satisfiability oracle to solve the conjunction of that variable with the original problem.
But would such a wrapper ever be useful? Do all sat solvers search over the space of possible assignments?
Or are there some types of NP-complete problems (traveling salesman, subset sum, etc.) in which the solver can, say, exploit a mathematical theorem to prove that a solution must exist? Like doing a proof by contradiction?
Answer: As I understand it, you are asking two questions: (1) are there e.g. SAT algorithms that are more clever than naive brute force, and (2) are there algorithms that simply give a YES/NO answer when solving an NP-complete problem without actually finding the solution. I'll answer both questions in this order.
(1) It is perfectly possible to solve a problem without brute force, i.e. without naively trying all possibilities. To take your example, modern complete SAT solvers can apply clever algorithms that infer or prove certain (partial) assignments can't lead to a solution, so they don't even examine that part.
More generally, even NP-hard problems often exhibit some kind of algorithmic foothold that allows us to design algorithms faster than brute force. The field of this research is exact (exponential) algorithms. Such algorithms take exponential time, but are still faster than naive algorithms. For instance, you can solve TSP naively in roughly $n!$ time, where $n$ is the number of cities to visit. This method won't scale to even moderate values of $n$, but there is a classic dynamic programming $O(2^nn^2)$ time algorithm due to Held & Karp. For general techniques, see e.g. branch & bound.
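The Held & Karp recurrence can be sketched in a few lines (my own minimal illustration; it returns only the optimal tour cost, not the tour itself):

```python
from itertools import combinations

def held_karp(dist):
    """Minimum TSP tour cost in O(2^n * n^2) time, much better than n!."""
    n = len(dist)
    # C[(S, j)] = cheapest path that starts at city 0, visits exactly the
    # set S (a bitmask over cities 1..n-1), and ends at city j in S.
    C = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            bits = 0
            for b in subset:
                bits |= 1 << b
            for j in subset:
                prev = bits & ~(1 << j)
                C[(bits, j)] = min(C[(prev, k)] + dist[k][j]
                                   for k in subset if k != j)
    full = (1 << n) - 2            # every city except 0
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))
```

On four cities arranged on a unit square (adjacent distance 1, diagonal 2) it returns 4, the perimeter tour.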
(2) There are "oracle algorithms" for NP-complete problems that just output YES/NO with no explicit certificate. For instance, consider the $k$-path problem:
(The $k$-path problem) Given a graph $G$ and an integer $k$, is there a simple path in $G$ on $k$ vertices?
The above problem is easily seen to be NP-complete. An $O^*(2^k)$ algorithm for the problem is given in [1]. The algorithm itself only gives a YES/NO answer to the problem, but we can use additional tricks to construct the actual $k$-path itself. More generally, when given such an "oracle algorithm", one can use tools from combinatorial group testing for extracting the witness itself.
[1] Williams, Ryan. "Finding paths of length $k$ in $O^*(2^k)$ time." Information Processing Letters 109.6 (2009): 315-318. publisher link, PDF | {
"domain": "cs.stackexchange",
"id": 3391,
"tags": "algorithms, complexity-theory, np-complete"
} |
Relationship between fourier transform and fourier series | Question: Let
$$x(t) = A\sin(2 \pi f_0 t + \alpha)$$
its Fourier transform is given by $$ X(\omega) = \frac{A \pi}{i}\left(e^{i\alpha}\delta(\omega-2\pi f_0) - e^{-i\alpha}\delta(\omega+2\pi f_0)\right). $$
the Fourier series complex representation of a $T$-periodic is :
$$x(t) = \sum_{n=-\infty}^\infty c_n e^{(2 i \pi n)/T \cdot t}$$ thus its Fourier transform is $X(\omega) =2 \pi \sum_{n=-\infty}^\infty c_n \delta(\omega - 2 \pi n/T)$
now here is my question: what's the expression of $c_n$, by identification with the Fourier transform of the first signal above? Here is what I did: since there exists $k = \omega T/(2 \pi)$, the Fourier transform becomes
$$X(\omega) = 2 \pi c_k = \frac{A \pi}{i}\left(e^{i\alpha}\delta(\omega-2\pi f_0) - e^{-i\alpha}\delta(\omega+2\pi f_0)\right) \iff c_k = \frac{A}{2i}\left(e^{i\alpha}\delta(\omega-2\pi f_0) - e^{-i\alpha}\delta(\omega+2\pi f_0)\right)$$
is this valid? If not, what's the problem? If it is, how can I determine the amplitude and frequency spectrum of this signal?
Answer: Transform looks right, but the logic afterwards needs some correction.
$$
X(\omega) =
-i \pi A \left(e^{i \alpha } \delta \left(\omega -2 \pi f_0\right)-e^{-i \alpha } \delta \left(2 \pi f_0+\omega
\right)\right)
$$
you made a slight mistake with this equality:
$$
X(\omega) = 2 \pi \sum^{\infty}_{n=-\infty} c_n \delta ( \omega - 2 \pi n f_0 ) = 2 \pi \sum^{1}_{n=-1} c_n \delta ( \omega - 2 \pi n f_0 )
$$
which simplifies to:
$$
X(\omega)
= 2 \pi c_{-1} \delta(\omega - 2 \pi (-1) f_0) + 0 + 2 \pi c_{1} \delta(\omega - 2 \pi (1) f_0)
= \pi \left(2 c_{-1} \delta(\omega + 2 \pi f_0) + 2c_{1} \delta(\omega - 2 \pi f_0) \right)
$$
so we can equate to our original Fourier Transform
$$
\pi \left(
2 c_{-1} \delta(\omega + 2 \pi f_0) + 2c_{1} \delta(\omega - 2 \pi f_0) \right)
=
-i \pi A \left(
e^{i \alpha } \delta \left(\omega -2 \pi f_0\right)-e^{-i \alpha } \delta \left(2 \pi f_0+\omega
\right)\right)
$$
$$
\pi \left(2 c_{-1} \delta(\omega + 2 \pi f_0) + 2c_{1} \delta(\omega - 2 \pi f_0) \right)
=
\pi \left(
-i A e^{i \alpha } \delta \left(\omega -2 \pi f_0\right)
+ i A e^{-i \alpha } \delta \left(2 \pi f_0+\omega
\right)
\right)
$$
so by inspection then
$$
c_{-1} = \frac{i A e^{-i \alpha}}{2} \qquad c_1= \frac{-iAe^{i \alpha}}{2}.
$$
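These coefficients are easy to verify numerically by evaluating $c_n = \frac{1}{T}\int_0^T x(t)\,e^{-2\pi i n t/T}\,dt$ with a Riemann sum over one period (the values of $A$, $\alpha$, $f_0$ below are arbitrary):

```python
import numpy as np

A, alpha, f0 = 2.0, 0.5, 3.0          # arbitrary test values
T = 1.0 / f0
N = 4096
t = np.arange(N) * (T / N)            # one period, endpoint excluded
x = A * np.sin(2 * np.pi * f0 * t + alpha)

def c(n):
    # c_n = (1/T) * integral_0^T x(t) exp(-2*pi*i*n*t/T) dt, as a Riemann sum
    return np.mean(x * np.exp(-2j * np.pi * n * t / T))
```

Only $c_{\pm 1}$ come out nonzero, matching $c_1 = -\tfrac{i}{2}Ae^{i\alpha}$ and $c_{-1} = \tfrac{i}{2}Ae^{-i\alpha}$.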
Just note that depending on how you define your Fourier Transform (there are a few conventions) your coefficients can change. | {
"domain": "dsp.stackexchange",
"id": 12365,
"tags": "fourier-transform, fourier-series, amplitude"
} |
Is a Home's Temperature Affected by Solar Gain Through Windows on Cloudy Days? | Question: Do homes increase in temperature due to solar gain with curtains open if it's overcast?
In other words, Does having curtains closed reduce the temperature of the home on an overcast day?
In other words, Does an overcast day prevent the warming effect of the sun hitting a homes window and raising it's temperature?
In other words, closing the curtains on a sunny day clearly keeps a room cooler. Does closing the curtains on an overcast day have the same effect? Why?
Thank you for your time.
Answer: Closing the curtains will help even on a cloudy day.
Assuming there is minimal heat transfer from the curtain to the room due to infrared radiation of the curtain and assuming minimal convection of hot air trapped between the curtain and the glass, overflowing the curtain rail into the room, the curtain will help by reflecting some of the sun's rays that have penetrated through the clouds and also some of the infrared radiation caused by the clouds.
The difference between a cloudy day and a sunny day is that some of the solar radiation is reflected back into space by the clouds, but not all of it; how much depends on the reflectivity of the cloud. The part of the radiation which is not reflected partially heats up the cloud, causing infrared radiation toward the earth, and partially reaches the surface of the earth.
So having the shades closed still helps even during cloudy days. | {
"domain": "engineering.stackexchange",
"id": 2098,
"tags": "thermodynamics, temperature"
} |
Fluid flow: Force acting on the fluid and the Navier-Stokes equation | Question: Consider a one dimensional fluid flow in a rectangular tube.
Typical flows are Poiseuille flows.
Consider the case in which we apply a force on the fluid.
The Navier-Stokes equation (for incompressible fluids) is formally:
$$ \rho_f \frac{d \vec{v}}{dt}=-\nabla p+\rho_f \vec{f}+\eta \nabla^2 \vec{v}$$
The flow is $1D$ so: $\frac{\partial \vec{v}}{\partial t}=\frac{d \vec{v}}{dt}$.
Consider inviscid flow: $\eta=0$.
$$ \rho_f \frac{\partial \vec{v}}{\partial t}=-\nabla p+\rho_f \vec{f}$$
The length along the tube is denoted by $s$. Let's apply the force: $$\vec{f}=q\sin s\,\hat{s}$$
Where $q$ is just a constant to match the appropriate units of force per kg and $\hat{s}$ the unit vector in the positive $s$ direction. We don't have a pressure difference so the equation of motion reduces to:
$$ \rho_f \frac{\partial \vec{v}}{\partial t}=\rho_f q\sin s.\hat{s}$$
taking the dot product with $\hat{s}$:
$$ \frac{\partial v}{\partial t}= q\sin s$$
Where $\vec{v} \cdot\hat{s}=v $
So: $$v(t,s) = qt \sin s$$
The velocity in the other directions is $0$. So we have an inconsistency with the continuity equation: $$\nabla \cdot \vec{v} = \frac{\partial v}{\partial s}=qt \cos s \neq 0$$
How is this possible? Is the assumption of incompressibility incorrect? Maybe there is a pressure due to the force?
To go a bit further:
Consider the case when the tube is closed like a torus. There are viscous effects and there is a non-conservative force; furthermore the fluid is incompressible. What equation describes the motion for this problem? The above Navier-Stokes equation gives a contradiction.
Thanks.
Answer: I agree with user3823992 that it was incorrect to neglect the pressure differential. With the steady sinusoidal body force that's given, it's basically a hydrostatics problem with the pressure differential balancing the body force. Consider the Navier-Stokes momentum equation:
$$
\frac{\partial \mathbf{v}}{\partial t}+(\mathbf{v} \cdot \nabla)\mathbf{v}=-\frac{\nabla p}{\rho}+\mathbf{f} +\nu \nabla^2 \mathbf{v}
$$
If we assume the velocity $\mathbf{v}$ is zero then it reduces to the hydrostatic case, where:
$$
\frac{\nabla p}{\rho}=\mathbf{f}
$$
$\mathbf{f}$ only has a component in the s-direction, therefore so will $\nabla p$:
$$
\begin{eqnarray*}
\frac{dp}{ds} \cdot \hat s&=&\rho q \sin(s) \cdot \hat s\\
\int_{p_0}^pdp&=&\rho q \int_0^s \sin(s) ds\\
p-p_0&=&\rho q (1- \cos (s))
\end{eqnarray*}
$$
($p_0$ is just an arbitrary reference pressure that may have been present before the force was applied).
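As a quick numerical check that this pressure field balances the body force (the values of $\rho$ and $q$ below are made up):

```python
import numpy as np

rho, q = 1.2, 3.0                       # assumed density and force constant
s = np.linspace(0.0, 2.0 * np.pi, 20001)
p = rho * q * (1.0 - np.cos(s))         # pressure field derived above (p0 = 0)

dpds = np.gradient(p, s, edge_order=2)  # numerical dp/ds
body = rho * q * np.sin(s)              # rho * f_s
# dpds and body agree to numerical precision, confirming grad p = rho f
```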
$\mathbf{v}=0$ obviously satisfies the continuity equation, although I think any other solution with a constant $\mathbf{v}$ would also satisfy the equation. This would just be bulk fluid motion that was present before the force was applied, and would tend to zero in steady-state if viscous drag on the walls is included.
In the case where the tube is closed like a torus, the flow is still governed by the Navier-Stokes equations. The momentum equation in polar coordinates (r, $\theta$, z) can be reduced to:
$$
\begin{eqnarray*}
\theta: f_\theta&=&-\nu (\frac{1}{r} \frac{\partial}{\partial r}(r \frac{\partial V_\theta}{\partial r})+\frac{\partial^2 V_\theta}{\partial z^2}-\frac{V_\theta}{r^2})\\
r: \frac{\partial p}{\partial r}&=&\rho \frac{V^2_\theta}{r}
\end{eqnarray*}
$$
The only component of velocity is in the $\theta$ (circumferential) direction. The first line is the body force balanced by the wall friction and the $\frac{\partial p}{\partial r}$ in the second line is necessary to provide the centripetal force for the curved streamlines. However, this is now a 2-dimensional PDE and I think it's pretty unlikely that you'd be able to integrate or find a simple function to satisfy it - to find the velocity profile you would probably have to resort to CFD at this point.
The Poiseuille equation has a nice, straightforward solution because it is axisymmetric and effectively 1-dimensional. | {
"domain": "physics.stackexchange",
"id": 21555,
"tags": "classical-mechanics, fluid-dynamics, navier-stokes"
} |
Same equation, different meanings | Question: I went into a physics classroom today and saw this equation written on the board:
$$
E = \frac \sigma \epsilon
$$
At first I thought it referred to the electric field $ E $ between 2 parallel plates of charge density $\sigma$ separated by a material of permittivity $\epsilon$. However, I then realised it was actually the definition of the Young's Modulus $E$ for a material that has a strain $\epsilon$ when a stress $\sigma$ is applied to it!
So the same equation has two completely different meanings in two completely different areas of physics, with the symbols defined differently (ignoring symbols to show which variables are vectors etc). Is this just a coincidence resulting from the huge number of 3-variable equations in physics, or have the symbols intentionally been defined like this? Is there a deeper meaning? Are there other examples like this?
Answer: Yeah, that's just a coincidence. The easy way to see this is that $\epsilon$ is a relatively static property of a dielectric but a totally dynamic property of a stretching material. | {
"domain": "physics.stackexchange",
"id": 28016,
"tags": "electrostatics, terminology, material-science, units, notation"
} |
Can one use dialysis tubing several times? | Question: I would like to know, if one can use dialysis tubes multiple times. Or is there a risk of plugging the pores? I use the tubes for dialysis of a solution that contains a precipitated enzyme (183 kDa) and ammonium sulfate (132 Da). Thanks!
Answer: This is more of a long comment than an answer, since I don't know. It could be okay, but if you are unsure whether it will work, is it really worth the risk? Here are some thoughts:
With high AS concentration, there is an initial large influx of water which can stretch the tubing. If this is repeated several times you may stretch the pores to where they no longer retain your protein. You can determine yield after each dialysis to see if loss increases with use.
Clogging up the pores seems like it could plausibly happen. You could add some low molecular weight dye to monitor whether the membrane remains permeable with use.
Dialysis tubing costs like 5 ¢/cm. Unless you're using 10 meters at a time, it's really very cheap. | {
"domain": "biology.stackexchange",
"id": 6748,
"tags": "enzymes, purification"
} |
How to install external libraries(PCL, OpenNI) without uninstalling related ROS packages | Question:
Hi, everyone.
How can I install external libraries(PCL, OpenNI) without uninstalling related ROS packages(ros-groovy-pcl, ros-groovy-openni-*, ros-groovy-navigation(which depends on ros-groovy-pcl) .etc)?
Is it possible ?
I'm trying to use ScaViSLAM(http://www.ros.org/wiki/ScaViSLAM) with Xtion PRO Live.
Unfortunately, ScaViSLAM is not ROS package, so I need to install it with this guide line(https://github.com/strasdat/ScaViSLAM).
To install ScaViSLAM, I have to install standalone pcl and OpenNI because of the difference of the version.
For example,
ros-groovy-pcl doesn't include pcl/io/openni_grabber.h, but standalone pcl(1.6) does.
I can use Xtion with ros-groovy-openni-*. However, when running ScaViSLAM, I have an error like "cannot open OpenNI.so.0". I think that I will not have that with standalone OpenNI installation.
If I uninstall ros-groovy-pcl with "sudo apt-get remove ros-groovy-pcl",
ROS packages which depend on it are also uninstalled.
If I install ros-groovy-openni-* after installation of standalone OpenNI, multiple OpenNI
will be installed.
I don't like them.
If you have any idea, could you help me ?
Thanks in advance.
---Configuration---
OS : Ubuntu 12.04 32bit
ROS distro : Groovy
Originally posted by moyashi on ROS Answers with karma: 721 on 2013-05-27
Post score: 1
Answer:
I see nothing on that guide that has anything to do with ROS. This forum is for ROS questions, so please refrain from questions not regarding ROS functionality/installation.
But, if you were to install the packages you mentioned, you would need to just install the correct packages from the repository (although most if not all of the dependencies are covered in the guide you linked).
Originally posted by allenh1 with karma: 3055 on 2013-05-28
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by moyashi on 2013-05-28:
Thank you for your answer and sorry for my unclear question. I think this question is similar to http://answers.ros.org/question/38162/how-to-update-opencv-to-latest-version/ , so I don't think it's completely unrelated to ROS.
Comment by moyashi on 2013-05-28:
And according to your answer, I just have to install the external packages I like. However, installation is denied because there are conflicting ROS packages. If I remove those, related ROS packages that I'd like to keep are also removed. I'd like to solve this problem. Sorry for my confusing question. | {
"domain": "robotics.stackexchange",
"id": 14330,
"tags": "ros, openni, pcl, external-libraries"
} |
How is stable levitation possible? | Question: This question is with reference to the video in this blog post: http://www.universetoday.com/90183/quantum-levitation-and-the-superconductor/
My question is the following: how is the disc stable in its levitated position?
Specifically, 25 seconds into the video, the exhibitor turns the entire thing upside down, and the disc doesn't fall. This contradicts two intuitive ideas I have:
Right-side-up, gravity is counteracting the repulsive effect of the magnet. Upside down, gravity is working with it. Unless there's some adaptation going on somewhere else, shouldn't "gravity switching sides" be enough to tear the disc away from the magnet?
I remembered something about "inverse-square" forces not permitting stable equilibria from college physics - sure enough Wikipedia calls it Earnshaw's theorem. I don't see how this is exempt from that rule. I guess I don't understand diamagnetism.
Answer: I tried to add this as a comment, but it is too long so I am making this an answer instead. This is not my text, but the text of one of the commentators on the video:
"Superconductors are of two types, which are defined by their
Meissner effect. One type repels magnetic fields, which will levitate
the superconducting object. A type I superconductor becomes a perfect
diamagnetic material, which exhibits a magnetization in the opposite
direction of an applied magnetic field. The Meissner effect creates a
complete diamagnetic material so that no magnetic field lines are
present in that material. I doubt this will suspend the object
against gravity by putting it on bottom, for the magnetic fields in
opposition will impose a force on the superconductor in the same
direction as gravity.
There is what might be called an anti-Meissner effect where the
superconducting material collimates magnetic flux lines into narrow
tubes or vortex fluxes. If the magnetic field at large is not
perfectly uniform it takes work to move the object through the
magnetic field and so energetically it is favorable to remain in a
region with B_in and B_out remains the same. This is the
Landau-Ginsburg effect and is found in type II superconductors. I
think that this is a case of a type II superconductor."
This sounds right to me and explains what is meant by quantum locking since superconductivity is a macroscopic quantum phenomenon that is effectively locking the magnetic flux into specific tubes in the superconductor. The force that opposes gravity is, of course, magnetic so we are not talking about any kind of new force of nature.
When he uses his hand to move the superconductor, he is using enough force to make the magnetic flux tubes be rearranged but apparently the force of gravity is weak enough such that it cannot rearrange the flux tubes by itself. So I predict that if you added enough weight to the puck, it would fall :) | {
"domain": "physics.stackexchange",
"id": 1824,
"tags": "electromagnetism, superconductivity, stability, levitation"
} |
Basic QED - How are conserved charges' expressions through ladder operators derived? | Question: I can't find this in similar questions, and I must be missing something very basic, since I can't find this in any textbook or online note: they just skip the passage.
So, from my course's notes, we have for example a complex scalar field:
$$
\phi(x) = \int \dfrac{d^3p}{(2\pi)^3} \dfrac{\sqrt{V}}{\sqrt{2E( \mathbf p)}} \left( a_{(+)} (\mathbf p ) e^{-ipx} + a_{(-)}^{\dagger } (\mathbf p ) e^{ipx} \right)
$$
$$
\phi^{*}(x) = \int \dfrac{d^3p}{(2\pi)^3} \dfrac{\sqrt{V}}{\sqrt{2E( \mathbf p)}} \left( a_{(+)}^{\dagger } (\mathbf p ) e^{ipx} + a_{(-)} (\mathbf p ) e^{-ipx} \right)
$$
and from the free $ S_0 = \int d^4x \left( \partial_\mu \phi^*(x) \partial^\mu \phi(x) - m^2 \phi^*(x) \phi(x) \right) $ with Noether's theorem for U(1) we get
$$
J^\mu(x) = i \left(\phi^*(x) \partial^\mu\phi(x) - \partial^\mu \phi^* (x) \phi(x) \right)
$$
$$
Q = \int d^3x J^0(x)
$$
QUESTION
So, how do I go from
$$
Q = i \int d^3x \dfrac{d^3p \ d^3q}{(2\pi)^6} \dfrac{V}{2\sqrt{E( \mathbf p)E( \mathbf q)}} \cdot \\
\cdot \left[ \left( a_{(+)}^{\dagger } (\mathbf p ) e^{ipx} + a_{(-)} (\mathbf p ) e^{-ipx} \right) \ iE(\mathbf q) \left( - a_{(+)} (\mathbf q ) e^{-iqx} + a_{(-)}^{\dagger } (\mathbf q ) e^{iqx} \right) + \\
- iE(\mathbf p) \left( a_{(+)}^{\dagger } (\mathbf p ) e^{ipx} - a_{(-)} (\mathbf p ) e^{-ipx} \right) \left( a_{(+)} (\mathbf q ) e^{-iqx} + a_{(-)}^{\dagger } (\mathbf q ) e^{iqx} \right) \right]
$$
to?
$$
Q = \int d^3p \dfrac{V}{(2\pi)^3} \left( a_{(+)}^{\dagger } (\mathbf p ) a_{(+)}(\mathbf p ) - a_{(-)}^{\dagger } (\mathbf p ) a_{(-)}(\mathbf p ) \right)
$$
At least, what mathematical formulas do I have to use?
Answer: It is pretty simple just use the following formula,
$$ \int d^3 x e^{i(p+q)x} = (2\pi)^3\delta(p+q)$$
and thus after integrating over $d^3 q$ the denominator $2\sqrt{E(\mathbf p)E(\mathbf q)}$ becomes $2E(\mathbf p)$, and then it's pretty straightforward. | {
"domain": "physics.stackexchange",
"id": 19817,
"tags": "homework-and-exercises, quantum-field-theory, quantum-electrodynamics, noethers-theorem"
} |
Conditions for which the Hilbert transform returns a correct phase | Question: I'm quite new to signal analysis, and I'm currently trying to understand under which conditions a Hilbert transform can be used to compute the correct instantaneous phase and envelope of a given signal.
Say I start from the example in Python given here (from the scipy website):
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import hilbert, chirp
duration = 1.0
fs = 400.0
samples = int(fs*duration)
t = np.arange(samples) / fs
signal = chirp(t, 20.0, t[-1], 100.0)
signal *= (1.0 + 0.5 * np.sin(2.0*np.pi*3.0*t) )
analytic_signal = hilbert(signal)
amplitude_envelope = np.abs(analytic_signal)
instantaneous_phase = np.unwrap(np.angle(analytic_signal))
instantaneous_frequency = (np.diff(instantaneous_phase) /
(2.0*np.pi) * fs)
fig = plt.figure()
ax0 = fig.add_subplot(211)
ax0.plot(t, signal, label='signal')
ax0.plot(t, amplitude_envelope, label='envelope')
ax0.set_xlabel("time in seconds")
ax0.legend()
ax1 = fig.add_subplot(212)
ax1.plot(t[1:], instantaneous_frequency)
ax1.set_xlabel("time in seconds")
ax1.set_ylim(0.0, 120.0)
I get a nice result showing me the envelope and desired frequency:
However, if for instance I decide to add a little bit of noise:
signal += np.random.rand(len(signal))*0.2
Then the inferred envelope and frequencies aren't as good (some of the noise goes into the envelope, some into the frequencies):
This was to be expected, and I know I just have to find a way to smooth my signal to solve that. My real problem is: say you add a trend to your signal:
signal+=np.linspace(-1,1,400)
Then the results get completely messed up: *can't add any more links since I'm new to the forum*
And the results are also messed up even for just a vertical shift... So, basically, my question is: what are the conditions needed for the Hilbert transform to return the correct phase and amplitude?
For now, this is what I suppose:
the signal must not be noisy
the signal must be centered around zero
the signal must not have any trend
amplitude and frequency can vary
Am I right? Thank you.
Answer: A single instantaneous phase estimate may or may not make any sense if there is more than one frequency peak in the signal's local spectrum. So, to get a better single frequency and phase estimate, you may want to first high-pass filter your signal to remove the spectrum of any D.C. bias or slow trend, and also band-pass filter the signal to remove any outside spectral peaks due to noise or interfering signals.
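A minimal sketch of that pre-filtering advice, applied to the trended chirp from the question (the filter order and the 10–120 Hz band are my own choices, not from the answer): detrend first, band-pass around the chirp's 20–100 Hz sweep, then take the Hilbert transform.

```python
import numpy as np
from scipy.signal import hilbert, chirp, detrend, butter, filtfilt

fs = 400.0
t = np.arange(int(fs)) / fs
sig = chirp(t, 20.0, t[-1], 100.0) * (1.0 + 0.5 * np.sin(2.0 * np.pi * 3.0 * t))
sig = sig + np.linspace(-1, 1, sig.size)       # the troublesome linear trend

sig = detrend(sig)                             # remove linear trend / DC bias
b, a = butter(4, [10.0, 120.0], btype="band", fs=fs)  # keep only the chirp's band
sig = filtfilt(b, a, sig)                      # zero-phase band-pass

analytic = hilbert(sig)
envelope = np.abs(analytic)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) / (2.0 * np.pi) * fs
```

With the trend filtered out, the instantaneous frequency mostly falls back within the 20–100 Hz sweep instead of blowing up near DC.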
So my off-hand guess for a condition is that the local spectrum has a single clear frequency peak with a monotonically decreasing spectral magnitude away from that one peak. All the way to zero at DC for real signals. | {
"domain": "dsp.stackexchange",
"id": 5492,
"tags": "transform, hilbert-transform"
} |
What is the space-complexity of the Newton-Raphson algorithm? | Question: What's the space-complexity of Newton-Raphson?
I think it reduces to the space complexity of storing the inverse Hessian matrix.
Answer: No, it reduces to the space complexity of solving a linear system using the Hessian matrix.
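A small sketch of the distinction, on a made-up 5×5 positive definite Hessian: both routes give the same Newton step, but only one ever materializes the inverse.

```python
import numpy as np

def newton_step_solve(H, g):
    # Solve H d = g directly; never forms the inverse.
    return np.linalg.solve(H, g)

def newton_step_inverse(H, g):
    # Explicitly forms H^{-1}: same answer here, but worse conditioning,
    # and it destroys sparsity when H is sparse.
    return np.linalg.inv(H) @ g

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + 5 * np.eye(5)       # symmetric positive definite Hessian
g = rng.standard_normal(5)
d1 = newton_step_solve(H, g)
d2 = newton_step_inverse(H, g)
```

For dense problems both need O(n²) storage for H itself; the point is that the solve route generalizes to sparse and iterative methods, while the explicit inverse does not.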
As with all situations where you need to solve a linear system, computing and storing the inverse explicitly is a bad idea for any problem larger than about 4 by 4. The inverse of a matrix is usually not as well-conditioned as the matrix itself, the inverse of a sparse matrix is typically not sparse... all the usual reasons apply. | {
"domain": "cs.stackexchange",
"id": 18876,
"tags": "space-complexity, matrices"
} |
Feature importance difference in two similar machine learning models | Question: Situation 1:
I have trained a text classification model (Model 1) which gives me a probability of true class as X. I have also trained a classification model (Model 2) using only the categorical and numeric data. Both the models are used to predict the same true class; just the features differ.
I used a random forest classifier on the probabilities returned by Model 1 and Model 2 (taking them as input features) and got similar performance metrics (accuracy, precision, recall). Feature importance was 49% for Model 1 and 51% for Model 2.
Situation 2:
I used the probability X from the text classification model as an input feature to Model 2 (which contained the categorical and numeric features). The performance was almost the same as in Situation 1, but here the feature importance of the final model indicated that the text model probability had higher importance, around 68%, and the rest of the features had lower importance.
I want to understand the difference in feature importance of both situations.
Answer: In 2nd case, you are not comparing apple to apple.
Let's say we have 4 Features and all are equally good [Also, No interaction].
Case I -
We created two Models using 2 Features each
Model I - F1/F2
Model II- F3/F4
Comparing these two Models as Features will give you an Idea about F1/F2-combined compared to F3/F4-combined. This is your Case - I.
Case II -
If you compare F1/F2-combined against F3 (alone) and F4 (alone), the F1/F2-combined feature will definitely have higher importance.
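A toy sklearn illustration of that asymmetry (entirely made-up data and model choices): stack the probability of a model trained on F1/F2 next to the raw F3 and F4, and the stacked probability tends to absorb most of the importance.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=4, n_informative=4,
                           n_redundant=0, random_state=0)

# Model trained on F1/F2 only; its probability becomes one stacked feature.
m12 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:, :2], y)
p12 = m12.predict_proba(X[:, :2])[:, 1]

stacked = np.column_stack([p12, X[:, 2], X[:, 3]])
final = RandomForestClassifier(n_estimators=100, random_state=0).fit(stacked, y)
imp = final.feature_importances_   # imp[0] belongs to the combined F1/F2 probability
```

Note that p12 here is an in-sample prediction, so it also leaks label information into the final model and inflates its importance even further, which is one more reason the importances of the two situations aren't directly comparable.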
The output probability of a Model using two Features is basically holding the information of both the Features together. | {
"domain": "datascience.stackexchange",
"id": 8886,
"tags": "machine-learning, random-forest, xgboost, features, stacking"
} |
Why does Jupiter have so many moons? | Question: The usual explanations one finds just say that Jupiter has a strong gravitational field, thereby being able to catch moons easier, and then they stop there. But this seems far from a satisfactory explanation. After all, an object which is not gravitationally bound to another object will never become gravitationally bound unless it interacts with other objects so it can shed some of its energy. Having a stronger gravitational field doesn't change this.
So then: Is there a more detailed explanation for why Jupiter has so many more moons than the other planets?
Answer:
After all, an object which is not gravitationally bound to another object will never become gravitationally bound unless it interacts with other objects so it can shed some of its energy.
This is true, but you've forgotten about the Sun. Every interaction between a planetesimal and Jupiter is a three-body interaction.
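A bare-bones sketch of this kind of rotating-frame simulation, i.e. the planar circular restricted three-body problem (the mass ratio matches the answer's $10^{-3}$; the starting point is my own pick):

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 1e-3  # planet mass / total mass

def crtbp(t, s):
    # Planar CRTBP in the rotating frame; star at (-mu, 0), planet at (1-mu, 0).
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y)
    r2 = np.hypot(x - 1 + mu, y)
    ax = 2 * vy + x - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = -2 * vx + y - (1 - mu) * y / r1**3 - mu * y / r2**3
    return [vx, vy, ax, ay]

def jacobi(s):
    # Jacobi constant: the only conserved quantity in the rotating frame.
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y)
    r2 = np.hypot(x - 1 + mu, y)
    return x**2 + y**2 + 2 * (1 - mu) / r1 + 2 * mu / r2 - (vx**2 + vy**2)

s0 = [0.5, 0.8, -0.073, 0.046]   # roughly circular orbit around the star
sol = solve_ivp(crtbp, (0.0, 2 * np.pi), s0, rtol=1e-10, atol=1e-12)
```

Energy alone isn't conserved in the rotating frame, which is exactly why a planet can capture a passing body; only the Jacobi constant is.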
Above, a simulation of a low-mass planetesimal moving in the effective potential in the rotating frame for a planet with mass $10^{-3}$ of its star's mass. The Lagrange points are marked with $\color{orange}{\times}$. The particle starts at $(0.83,0.47)$ (some random place I clicked); it moves ahead of the planet for two or three orbits, pausing at a couple of unstable stationary points in the rotating frame, then has a close interaction with the planet. In this case the close interaction doesn't lead to a capture, but you can see from the inset that the interaction is chaotic: it's extremely sensitive to the details of the closest approach. You can surely imagine a three-body interaction that ended in the particle being captured by the planet, even if I haven't hunted for one to show you. | {
"domain": "physics.stackexchange",
"id": 86999,
"tags": "astrophysics, solar-system, jupiter"
} |
Why don't unit circle poles lead to infinite amplitude response for Butterworth lowpass? | Question: This is probably a very stupid question. In many places (e.g. here), the Butterworth filters, e.g. lowpass, are described as being "all-pole" filters that have all of their poles on the unit circle.
But shouldn't poles on the unit circle lead to a pole in the amplitude response of the filter at that particular frequency, in the same way as a zero attenuates that particular frequency? Instead, the Butterworth lowpass has strong attenuation in the upper frequency spectrum, despite having all of its poles on the left-hand side of the unit circle (the high frequency side).
Could anyone please clear my confusion ?
Answer: You've made an understandable mistake. You are probably looking at this picture:
That is not the unit circle, and it isn't even in the $z$ domain.
What you are looking at is the locations of the poles for a 4-pole Butterworth filter in the Laplace domain. These are values of $s$, not $z$, and the circle is not a unit circle -- its radius is defined by the cutoff frequency of the filter (which is why the radius is indicated as being $\omega_0$).
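You can check this numerically: scipy's analog Butterworth prototype puts every pole on a circle of radius $\omega_0$ (normalized to 1 here) in the left half of the $s$-plane, with no zeros at all.

```python
import numpy as np
from scipy.signal import buttap

z, p, k = buttap(4)          # 4-pole analog Butterworth prototype, omega_0 = 1
radii = np.abs(p)            # distance of each s-plane pole from the origin
on_circle = np.allclose(radii, 1.0)   # all poles on the omega_0 = 1 circle
stable = np.all(p.real < 0)           # all in the left half-plane: stable
```

The empty zero array is the "all-pole" property; the poles sit on the $\omega_0$ circle in $s$, not on the unit circle in $z$.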
The Butterworth is one of the "old modern" filters, invented before we could just start with a desired frequency-domain response and synthesize the optimal filter. All of these (Butterworth, Tchebychev, elliptic, Gaussian) were originally designed as continuous-time filters, and the canonical representations of them are in the Laplace ($s$) domain. Implementations of these as IIR filters in the $z$ domain are sometimes-useful approximations. | {
"domain": "dsp.stackexchange",
"id": 11188,
"tags": "filters, frequency-response, digital-filters, poles-zeros, butterworth"
} |
Can Depth-first search (DFS) with alphabetical traversal of neighbors be run in O(|V|+|E|) time? | Question: I feel like the answer is no but I'm not sure. I think it's commonly accepted that DFS runs in $O(|V| + |E|)$ time. I've read a few explanations and they all make sense if the neighbour traversal for any given vertex can be done in arbitrary order.
But I've noticed a commonly suggested DFS behavior is to traverse the neighbors in alphabetical order (i.e. CLRS exercise 22.3-2), and I don't see how this can be done in $O(|V|+|E|)$ time. This became evident to me when actually trying to implement this in runnable code.
I see two ways to do it:
I can keep the list of vertices and each vertex's adjacency list sorted as I'm constructing the graph. However this means $O(V)$ insertion time for each new vertex in the graph, which means a total of $O(|V|^2)$ insertion time over $|V|$ nodes. And $O(|E'|)$ insertion time for a new edge where $|E'|$ is the number of neighbours in a particular vertex's adjacency list, meaning $O(|E'|^2)$ time to construct the adjacency list for any given vertex.
OR
Construct the graph and insert the vertices and adjacency elements in arbitrary order, but then sort them before running DFS. But comparison-based sorts are $\Omega(n\log n)$ so I'd have $O(|V|\log|V|)$ sort time for the list of vertices, and $O(|E'|\log|E'|)$ for each adjacency list.
In either case it doesn't seem like the runtime is $O(|V| + |E|)$ any more. Can someone confirm or refute? Thanks.
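The second option can be sketched like this (toy graph of my own): sort each adjacency list once, then run an otherwise standard DFS. The traversal itself stays O(|V|+|E|); it's the up-front sorting that costs more.

```python
def dfs_alphabetical(graph, start):
    # graph: dict mapping vertex -> list of neighbors (any order).
    # Sorting costs O(sum |E'| log |E'|) over all adjacency lists;
    # the traversal itself is still O(|V| + |E|).
    adj = {v: sorted(nbrs) for v, nbrs in graph.items()}
    visited, order = set(), []

    def visit(v):
        visited.add(v)
        order.append(v)
        for w in adj[v]:
            if w not in visited:
                visit(w)

    visit(start)
    return order

g = {"a": ["d", "b"], "b": ["c"], "c": [], "d": ["c", "a"]}
print(dfs_alphabetical(g, "a"))   # ['a', 'b', 'c', 'd']
```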
Answer: No, there is no linear time algorithm (for worst case).
Note for a list of $n$ positive integers, you can construct a tree with a root and $n$ children encoded by these integers respectively and traverse this tree in alphabetical order to sort these integers. So your problem is at least as hard as sorting. | {
"domain": "cs.stackexchange",
"id": 11831,
"tags": "algorithms, graphs, data-structures"
} |
Why doesn't my rtPCR reaction work? | Question: I am doing a rtPCR to detect the watermelon mosaic virus (WMV). My set of primers are:
WMV primer forward: 5'-TNGARAATTTGGATGYAGG-3'
WMV primer reverse: 5'-CTGCGGTGGACCCATACC -3'
both of which at the concentration of 10 microMolar.
The first step of my rtPCR reaction was looking for a positive control.
I ran the rtPCR on a field sample (the NGS results showed a positive for WMV) and two greenhouse samples (the first greenhouse sample was inoculated with WMV and the second one was not).
These samples were extracted with good concentrations and integrity.
After that, I did the rtPCR with rt enzyme (from Affinity company).
Following the next steps:
I added 2000 ng of my RNA samples (in different tubes: one tube for each greenhouse sample and one for the field sample) with reverse primer (2 ul) and H2O (up to 12.5 ul).
1.2. I incubated them at 65ºC for 5' and at 25ºC for 10'.
2.1. I added the following volumes per sample: 2 ul of buffer, 2 ul of DTT, 2 ul of dNTPs, 0.5 ul of RNase block and 1 ul of RT enzyme.
2.2. I vortexed them and incubated them for 1 hour at 50ºC and 10' at 70ºC.
3.1. The PCR reaction was performed with the Takara polymerase enzyme.
I added the following volumes per tube: 2.5 ul of Takara buffer, 0.5 ul of forward primer, 0.5 ul of reverse primer, 0.5 ul of dNTPs, 0.25 ul of Taq polymerase and 15.75 ul of water. To this 20 ul volume I added 5 microliters of my RT sample (from step 2.2). I did the same with the remaining two samples.
3.2. Finally, I incubated them at 98ºC for 5', then ran 35 cycles of 98ºC 30'', 56ºC 30'' and 72ºC 1'. After the 35 cycles, the temperature goes to 72ºC for 10'.
I ran the gel and got primer dimer. What am I doing wrong? All help will be very useful.
Note: These protocol was following by a labmate who got good results with the same field sample. Now I am following the protocol that was written in his notebook.
Answer: Doing more assays, I found what the trouble was. I used an oligo(dT) for the rtPCR and then amplified with my polymerase. Probably my RNA sample did not have a linear conformation in the region where my reverse primer must bind. The oligo(dT) bound to the poly(A) region, typical of this type of potyvirus. | {
"domain": "biology.stackexchange",
"id": 8964,
"tags": "molecular-biology, pcr"
} |
Time evolution of mean values for a two-particle system | Question: Consider a two particle system subject to a one-dimensional harmonic oscillator potential. So, the Hamiltonian is $H=H_1+H_2$, where $H_1$ acts only on the state space of particle $1$ and $H_2$ on the state space of particle $2$.
At $t=0$, the system is in the energy eigenstate:
$$|\psi(0)\rangle=\frac{1}{2}(|\psi_{0,0}\rangle+|\psi_{1,0}\rangle+|\psi_{0,1}\rangle+|\psi_{1,1}\rangle).$$
Then, a measurement of the total energy $H$ is performed and the result found is $2\hbar\omega$ (at $t=0$).
I am trying to calculate the mean values of position and momentum of particle $1$ at $t>0$.
My attempt is the following: because the measurement of $H$ was made, I start with the collapsed eigenstate:
$$|\psi(0)\rangle=\frac{1}{\sqrt{2}}(|\psi_{1,0}\rangle+|\psi_{0,1}\rangle)$$
Now, the time evolution of the system for particle $1$ is:
$$|\psi(t)\rangle=\sum_{n_1}e^{-iE_{n}t/\hbar}|\psi_{n_1,n_2}(0)\rangle=\frac{1}{\sqrt{2}}(e^{-i3\omega t/2}|\psi_{1,0}\rangle+e^{-i\omega t/2}|\psi_{0,1}\rangle)$$
where I have made use of the energy eigenvalues $E_{n_1,n_2}=(n_1+n_2+1)\hbar\omega$.
In order to calculate the mean value of the position, I use the position operator in terms of creation and annihilation operators acting on the fist particle state space:
$$X|\psi(t)\rangle=\sqrt{\frac{\hbar}{2m\omega}}(a_{1}+a^{\dagger}_{1})|\psi(t)\rangle=\frac{1}{2}\sqrt{\frac{\hbar}{m\omega}}(e^{-i3\omega t/2}|\psi_{0,0}\rangle+e^{-i\omega t/2}|\psi_{1,1}\rangle).$$
It is obvious from this last equation that $\langle X\rangle(t)=0$. Similarly I obtain that $\langle P\rangle(t)=0$.
This result doesn't make sense to me because it contradicts the Ehrenfest theorem given that $X_1$ and $P_1$ do not commute with the Hamiltonian, so the expectation values can't be zero.
Also, in my calculation I took steps that I am not at all sure about. For example, I did the time evolution of the state for particle 1 only. Although, even if I do it for the complete system, I still get the same result of null mean values.
Is it possible that these results are due to the measurement done just before the time evolution?
How should the time evolution of a two-particle system state be calculated? Is it right to calculate it just for one particle, or must it always be done for the whole system?
Answer: The energies $E_n$ in the exponential must be the total energy, as you say, not just the energy of particle 1. You don't need to do any time evolution, though, because the system is left in an energy eigenstate after the measurement, and energy eigenstates are stationary states: nothing will depend on time.
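That vanishing mean is easy to check numerically with truncated ladder-operator matrices (my own construction, not from the question): build $X_1 \propto (a_1 + a_1^\dagger)\otimes I$ and take its mean in the post-measurement state $(|1,0\rangle + |0,1\rangle)/\sqrt{2}$.

```python
import numpy as np

N = 4                                          # truncate each oscillator at n = 3
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation: a|n> = sqrt(n)|n-1>
I = np.eye(N)

X1 = np.kron(a + a.T, I)    # x_1 up to the sqrt(hbar / 2 m omega) factor
P1 = np.kron(a - a.T, I)    # p_1 up to a factor of i * sqrt(m hbar omega / 2)

def ket(n1, n2):
    v = np.zeros(N * N)
    v[n1 * N + n2] = 1.0
    return v

psi = (ket(1, 0) + ket(0, 1)) / np.sqrt(2)  # state after measuring E = 2 hbar omega
mean_x = psi @ X1 @ psi     # 0: X1|psi> only has |0,0>, |2,0>, |1,1> components
mean_p = psi @ P1 @ psi     # 0 as well
```

Since the state is stationary, the phases picked up under time evolution cancel in any expectation value, so these means stay zero for all $t$.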
And you're misreading the Ehrenfest theorem; there are mean values around the commutator too:
$$i \hbar \frac{d\langle A \rangle}{dt} = \langle [A, H] \rangle$$
Here $[X,H]\propto P$ and $[P,H]\propto X$, so you can see that $\langle X \rangle = 0$ and $\langle P \rangle = 0$ is a solution to the above equation. Even if the commutator is nonzero, its mean value can be zero, as it is here. | {
"domain": "physics.stackexchange",
"id": 35260,
"tags": "quantum-mechanics, homework-and-exercises, harmonic-oscillator"
} |