| anchor | positive | source |
|---|---|---|
$\log^*(n)$ runtime analysis | Question: So I know that $\log^*$ means the iterated logarithm: $\log^*(n)$ counts how many times $\log$ must be applied ($\log\log\log\ldots$) until the result is $\leq 1$.
I'm trying to solve the following:
is
$\log^*(2^{2^n})$
little $o$, little $\omega$, or $\Theta$ of
${\log^*(n)}^2$
In terms of the interior functions, $\log^*(2^{2^n})$ is much bigger than $\log^*(n)$, but squaring the $\log^*(n)$ is throwing me off.
I know that $\log(n)^2$ is $O(n)$, but I don't think that property holds for the iterated logarithm.
I tried applying the master method, but I'm having trouble with the properties of a $\log^*(n)$ function. I tried setting n to be max (i.e. $n = 5$), but this didn't really simplify the problem.
Does anyone have any tips as to how I should approach this?
Answer: Recall that for $k > 1$, by definition we have $\log^*k = \log^*(\log{k}) + 1$.
By applying the definition twice, we see that $\log^*(2^{2^n}) = \log^*n + 2$. Now we can compare $\log^*n + 2$ and $(\log^*n)^2$. | {
"domain": "cs.stackexchange",
"id": 147,
"tags": "asymptotics, landau-notation, mathematical-analysis"
} |
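The identity used in the answer above is easy to check numerically. Here is a small sketch (my own illustration, using base-2 logarithms) that computes the iterated logarithm and verifies $\log^*(2^{2^n}) = \log^*(n) + 2$ for a few small values of $n$:

```python
import math

def log_star(x):
    """Iterated logarithm (base 2): how many times log2 must be
    applied before the value drops to <= 1."""
    count = 0
    while x > 1:
        x = math.log2(x)
        count += 1
    return count

# The answer's identity, applied twice: log*(2^(2^n)) = log*(n) + 2.
for n in [2, 4, 16]:
    assert log_star(2 ** (2 ** n)) == log_star(n) + 2

print(log_star(2 ** 16))  # 65536 -> 16 -> 4 -> 2 -> 1, so 4
```

With $x = \log^* n$, the original comparison then reduces to comparing $x + 2$ with $x^2$ as $x \to \infty$.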
What does this representation of organic aromatic compounds mean? | Question: I saw this structure in some problem. What does this mean? How to express this in form of a simple structure
Answer: This is the cyclooctatetraene dianion. One way to get to it is to start with cyclooctatriene and remove one hydrogen ion each from the two $\ce{- CH2 -}$ groups. This is related to the cyclopentadienyl anion, the more common aromatic anion.
Image source: https://www.masterorganicchemistry.com/2017/05/17/frost-circles/ | {
"domain": "chemistry.stackexchange",
"id": 15614,
"tags": "organic-chemistry"
} |
Attribute driven behaviour in C# methods | Question: We want to create a TransactionScope factory class that we can use as a central point for instantiating TransactionScopes with varying configurations throughout our app.
One requirement we have is that a method can either:
Instantiate a plain TransactionScope whose settings are driven by the defaults in the App.Config
Instantiate a TransactionScope passing some config key, which will pull specific settings from some other source
The latter requirement is so that settings can be changed at runtime for specific methods if necessary (e.g. extending a timeout) without having to recompile the system and without affecting all TransactionScopes.
Option 1 - pass config keys via method param on the create methods
public static class TransactionScopeFactory
{
    public static TransactionScope Create()
    {
        return new TransactionScope();
    }

    public static TransactionScope Create(string configKey)
    {
        var source = GetConfigSettings(configKey);
        if (source != null)
        {
            var options = new TransactionOptions
            {
                //IsolationLevel = From Config Source
                //Timeout = From Config Source
            };
            return new TransactionScope(TransactionScopeOption.Required, options);
        }
        return Create();
    }
}

public class Frob
{
    public void DoStuff()
    {
        using (var scope = TransactionScopeFactory.Create()) //Default
        { /*Do Stuff*/ }
    }

    public void DoFoo()
    {
        using (var scope = TransactionScopeFactory.Create("DoFoo"))
        { /*Do Foo*/ }
    }

    public void DoBar()
    {
        using (var scope = TransactionScopeFactory.Create("DoBar"))
        { /*Do Bar*/ }
    }
}
My only issue with this is that I don't really like the fact that different strings are peppered through the various Create() calls across the app. Even moving them to a utility class as const strings makes the code feel less "clean".
I was thinking about another way to do this with method attributes. Instead, the TransactionScopeFactory would pull the correct configuration key from a method attribute using reflection, so the actual Create() method would only ever be the plain parameterless one.
Option 2 - Pull config keys from a method attribute via reflection
[AttributeUsage(AttributeTargets.Method, Inherited = false, AllowMultiple = false)]
public class TransactionScopeConfigurationAttribute : Attribute
{
    public string ConfigKey { get; set; }
}

public static class TransactionScopeFactory
{
    public static TransactionScope Create()
    {
        string configKey = GetConfigKeyByReflection();
        var source = GetConfigSettings(configKey);
        if (source != null)
        {
            var options = new TransactionOptions
            {
                //IsolationLevel = From Config Source
                //Timeout = From Config Source
            };
            return new TransactionScope(TransactionScopeOption.Required, options);
        }
        return Create();
    }

    private static string GetConfigKeyByReflection()
    {
        var attributes = (from frame in (new StackTrace()).GetFrames()
                          let attribs = frame.GetMethod().GetCustomAttributes(true).ToList()
                          select attribs).SelectMany(a => a.ToList());
        var attrib = attributes.FirstOrDefault(a => a.GetType() == typeof(TransactionScopeConfigurationAttribute));
        return (attrib != null)
            ? (attrib as TransactionScopeConfigurationAttribute).ConfigKey
            : null;
    }
}

public class Frob
{
    //Default - No Attribute
    public void DoStuff()
    {
        using (var scope = TransactionScopeFactory.Create())
        { /*Do Stuff*/ }
    }

    [TransactionScopeConfiguration(ConfigKey = "DoFoo")]
    public void DoFoo()
    {
        using (var scope = TransactionScopeFactory.Create())
        { /*Do Foo*/ }
    }

    [TransactionScopeConfiguration(ConfigKey = "DoBar")]
    public void DoBar()
    {
        using (var scope = TransactionScopeFactory.Create())
        { /*Do Bar*/ }
    }
}
Now TransactionScopeManager only has a single solitary method Create(). Overriding what configuration it should use is now done via a setting on an attribute. I'm kinda torn over this implementation. On the face of it, it seems slightly more elegant and makes the code look a little neater... but at the same time, the behaviour isn't as discoverable (i.e. parameter listing in the IntelliSense popup is pretty obvious, having to use an attribute is not so much).
Also, it introduces reflection into the mix which I'm sure is going to cause a performance hit having to generate and walk those StackTrace/Frames every time I need a new TransactionScope.
Answer: Eoin, nice job with the second attempt. However, I would stick to your guns on the first style with a few deviations. Either A) pass in enumerations into the create method or B) create an override method with the specific name.
I would highly recommend direction B. This prevents magic strings all over the place and reflection while promoting readability.
Example:
private static TransactionScope Create(string key){/* primary create */}
public static TransactionScope Start(){return Create("default_key");}
public static TransactionScope StartFoo(){return Create("foo");}
public static TransactionScope StartBar(){return Create("bar");} | {
"domain": "codereview.stackexchange",
"id": 1563,
"tags": "c#, .net, reflection"
} |
September 26 twelve hours sunrise to sunset? | Question: I noticed in today's newspaper (Boston Globe, September 26) that sunrise and sunset were both at 6:25. That's the twelve hour day I'd have expected at the equinox.
(Please retag if there's a better one)
Answer: A day would be twelve hours long at the equinoxes if the Earth had no atmosphere and if the Sun were a point rather than a sphere. Sunrise and sunset are defined as the times at which the upper limb of the Sun appears to rise above or set below an ideal oblate-spheroid Earth, assuming average atmospheric conditions. While this calculation ignores terrain and variations in atmospheric conditions, it does not ignore the basic facts that the Earth does have an atmosphere and that the Sun is a sphere. The standard atmospheric conditions mean that sunrise (sunset) occurs when the upper limb of the Sun first reaches (first falls below) 34 arc minutes below the horizon. That the Sun is not a point adds another 16 arc minutes to this, making sunrise/sunset occur when the center of the Sun is 50 arc minutes below the horizon.
This means that at the equinoxes, a "day" is at least six minutes longer than the twelve hours we are naively taught (and incorrectly reported twice a year by naive weathermen). As an aside, days are always longer than nights at the equator. | {
"domain": "astronomy.stackexchange",
"id": 1110,
"tags": "time"
} |
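The 50-arcminute figure in the answer can be turned into the quoted "at least six minutes" directly. A rough sketch (my own illustration, assuming the Sun crosses the horizon vertically at 15 degrees per hour, the best case, as at the equator on the equinox):

```python
# Rough estimate of how much longer than 12 h an equinox "day" is,
# using the 50-arcminute depression from the answer (34' refraction
# + 16' solar semi-diameter). Assumes the Sun crosses the horizon
# vertically at 15 degrees/hour; oblique crossings at higher
# latitudes make the excess even larger.
depression_deg = 50.0 / 60.0               # 50 arcminutes in degrees
sun_rate_deg_per_min = 360.0 / (24 * 60)   # 0.25 degrees per minute

extra_per_crossing = depression_deg / sun_rate_deg_per_min  # minutes
total_extra = 2 * extra_per_crossing       # sunrise + sunset

print(round(total_extra, 1))  # about 6.7 minutes
```

This matches the answer's "at least six minutes longer than twelve hours".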
How to create a list of differentially expressed (DE) genes after normalization with RUVSeq? | Question: I am using edgeR to perform differential expression (DE) analysis on a set of RNA-seq data samples (2 controls; 8 treatments). To correct for batch effects, I am using RUVSeq.
I am able to get a list of DE genes without normalization:
x <- as.factor(rep(c("Ctl","Inf"),c(2,8)))
set <- newSeqExpressionSet(as.matrix(counttable),phenoData=data.frame(x,row.names=colnames(counttable)))
design <- model.matrix(~x, data=pData(set))
y <- DGEList(counts=counts(set), group=x)
y <- calcNormFactors(y, method="upperquartile")
y <- estimateGLMCommonDisp(y, design)
y <- estimateGLMTagwiseDisp(y, design)
fit <- glmFit(y, design)
lrt <- glmLRT(fit, coef=2)
top <- topTags(lrt, n=nrow(set))$table
write.table(top, paste(OUT, "DE_genelist.txt", sep=""))
Then immediately after creating the "top" object, I use RUVg to normalize:
# [...]
top <- topTags(lrt, n=nrow(set))$table
empirical <- rownames(set)[which(!(rownames(set) %in% rownames(top)[1:5000]))]
ruvg <- RUVg(set, empirical, k=1)
write.table(ruvg, paste(OUT, "DE_RUVg_genelist.txt", sep=""))
And I get the error:
Error in as.data.frame.default(x[[i]], optional = TRUE) :
cannot coerce class ‘structure("SeqExpressionSet", package = "EDASeq")’ to a data.frame
I am not sure how to print the list of normalized results like I can with the unnormalized data. Ideally, I would get a file with the same format as the edgeR output (as a .csv or .txt file):
"logFC" "logCPM" "LR" "PValue" "FDR"
"COBLL1" -2.150 4.427061248733 75.0739519350016 4.53408921348828e-18 9.51203608115384e-15
"UBE2D1" -2.178 3.577168782408 74.9346752854903 4.86549160161322e-18 9.51203608115384e-15
"NEK7" -2.404 4.020072739285 72.6539117671717 1.54500340443843e-17 2.71843349010941e-14
"SMC6" -2.300 5.674738981329 61.8130019860261 3.7767230643666e-15 3.4974443325016e-12
How can I get a list of genes as an output after normalization with RUVSeq?
Answer: You do the normalization before running edgeR. The purpose of RUVg is to "Remove Unwanted Variation Using Control Genes". In your code, you ran edgeR and then normalized the data using RUVg, which is only going to return the normalized counts.
Using the example dataset in vignette:
library(RUVSeq)
library(zebrafishRNASeq)
data(zfGenes)
filter <- apply(zfGenes, 1, function(x) length(x[x>5])>=2)
filtered <- zfGenes[filter,]
genes <- rownames(filtered)[grep("^ENS", rownames(filtered))]
spikes <- rownames(filtered)[grep("^ERCC", rownames(filtered))]
x <- as.factor(rep(c("Ctl", "Trt"), each=3))
set <- newSeqExpressionSet(as.matrix(filtered),
                           phenoData = data.frame(x, row.names=colnames(filtered)))
set <- betweenLaneNormalization(set, which="upper")
set1 <- RUVg(set, spikes, k=1)
You can look at it, it's an expression set with counts etc, not results:
set1
SeqExpressionSet (storageMode: lockedEnvironment)
assayData: 20865 features, 6 samples
element names: counts, normalizedCounts, offset
protocolData: none
phenoData
sampleNames: Ctl1 Ctl3 ... Trt13 (6 total)
varLabels: x W_1
varMetadata: labelDescription
featureData: none
experimentData: use 'experimentData(object)'
Annotation:
You run edgeR now on the results of RUVg:
design <- model.matrix(~x + W_1, data=pData(set1))
y <- DGEList(counts=counts(set1), group=x)
y <- calcNormFactors(y, method="upperquartile")
y <- estimateGLMCommonDisp(y, design)
y <- estimateGLMTagwiseDisp(y, design)
fit <- glmFit(y, design)
lrt <- glmLRT(fit, coef=2)
topTags(lrt) | {
"domain": "bioinformatics.stackexchange",
"id": 1473,
"tags": "r, rna-seq, differential-expression, normalization, edger"
} |
Mass Resolution- Particle's width in Particle physics? | Question:
I'm not 100% sure, but I think that the width of a particle can change depending on the decay channel. For example, the J/psi's mass resolution could be 40 MeV or, in other cases, 8 MeV.
So I would like to know:
What actually contributes to the width of a particle?
Why are some particles really narrow while some others are wide?
Do we expect to see the same particle mass resolution in the Monte Carlo simulation as the one we will see in the real experimental data?
I hope someone will be able to answer my questions. Thanks in advance.
Answer: This is $e^{+}e^{-}$ interactions versus energy $(\sqrt{s})$ from the particle data book, figure 49.5:
Many interesting experimental measurements that led to the quark model can be found in this plot.
Particles like protons, electrons and muons have a fixed mass and no width, and are measured by the "length" of their four-vector. Measurement errors will introduce a statistical indeterminacy which can be fitted with the statistically defined gaussian. In the plot, though, we see practically a delta function for the $J/\psi$ of the question, because the scale is different. The particles $\omega$ and $\rho$ have a large width which is not gaussian, the $\Upsilon$ is very narrow, and the $Z$ is also non-gaussian.
What defines the intrinsic width is the type of interaction entering in the possible decays of the resonances and whether strong interaction decays are suppressed or not. The $J/\psi$ is a good example:
It has a rest mass of $3.0969\ \mathrm{GeV}/c^2$, just above that of the $\eta_c$ ($2.9836\ \mathrm{GeV}/c^2$), and a mean lifetime of $7.2\times 10^{-21}\ \mathrm{s}$. This lifetime was about a thousand times longer than expected.
[...]
Hadronic decay modes of the $J/\psi$ are strongly suppressed because of the OZI rule. This effect strongly increases the lifetime of the particle and thereby gives it its very narrow decay width of just $93.2\pm 2.1\ \mathrm{keV}$. Because of this strong suppression, electromagnetic decays begin to compete with hadronic decays. This is why the $J/\psi$ has a significant branching fraction to leptons.
The width of the $Z$ also is interesting and is composed out of all the partial widths in the channels it can decay, see page 2 in this link.
In conclusion, the intrinsic interaction width (calculable by the standard model) of resonances has to be folded with the experimental error widths, as explained in the comments to the question. When the intrinsic width is very small, the experimental error dominates and is a gaussian. | {
"domain": "physics.stackexchange",
"id": 35213,
"tags": "quantum-mechanics, particle-physics, experimental-physics, data-analysis, elementary-particles"
} |
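As a quick numerical cross-check (my own sketch, not part of the original answer), the quoted $J/\psi$ lifetime and decay width are tied together by the uncertainty relation $\Gamma = \hbar/\tau$:

```python
# Check that the J/psi mean lifetime quoted above (7.2e-21 s) is
# consistent with its narrow decay width (~93 keV) via Gamma = hbar/tau.
hbar_GeV_s = 6.582119569e-25   # reduced Planck constant in GeV*s
tau = 7.2e-21                  # J/psi mean lifetime in seconds

gamma_keV = hbar_GeV_s / tau * 1e6   # GeV -> keV
print(round(gamma_keV, 1))           # roughly 91 keV, close to the quoted 93.2 keV
```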
Encoding set values in $o(n)$ memory | Question: Apache Spark, a distributed computation framework, has a construct called accumulators which are global variables with an associative addition action. These can be used to aggregate statistics during the computation process. The problem with these variables is that parallel tasks can fail without "rolling back" the value of the accumulator, so at the end of the process if any tasks were re-run, the accumulator will have a larger value than it should have in reality.
In my case I want to count the amount of elements I filter out in my process.
If I simply use an integer/long counter and increment it each time I filter out an element, I will possibly get inaccurate results because of the problem previously described.
An idea I had is to use a set as the accumulator, collecting in it all the unique identifiers of the elements I filtered out, then getting its size at the end of the process.
This will produce an accurate result but will have a performance impact of storing a possibly very large list of identifiers in memory.
Thus I am curious if there is any mathematical construct I can use which has the following properties:
Occupies $o(n)$ memory. By that I mean that its size will grow very little as a function of the size of the dataset I am running on.
Has an associative "addition" method that recognizes when a value added to it was already added previously
Has a "size" method that returns the amount of unique values added to it.
A set satisfies the second and third constraint but not the first, however a set has the property of retrieving its values, which I do not need. Is it maybe possible to achieve all three when this is not needed?
I know this is probably overkill for a programming problem but at this point I am just curious if anything like this exists.
Thanks.
Answer: No such data structure can exist that satisfies (1) and (2): if $n$ arbitrary elements can be added to it, and it has to keep track of whether each of them has been seen already or not, then it has to have at least $2^n$ distinguishable configurations, and therefore needs at least $\Omega(n)$ bits of memory (i.e. the equivalent of keeping a single bit per element recording whether it has been seen). | {
"domain": "cs.stackexchange",
"id": 19869,
"tags": "data-structures"
} |
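The counting argument in the answer can be made concrete for a tiny $n$ (my own illustration): with $n = 3$ elements there are $2^3 = 8$ possible "already seen" subsets, so any structure with fewer than $8$ internal states must map two different histories to the same state, and then reports the same size for both.

```python
from itertools import chain, combinations

# Enumerate all 2^n possible "seen" subsets for n = 3 elements.
# Each subset is a distinct history the accumulator must be able to
# distinguish, so it needs at least log2(2^n) = n bits of state.
n = 3
elements = range(n)
subsets = list(chain.from_iterable(combinations(elements, r)
                                   for r in range(n + 1)))

print(len(subsets))  # 8 = 2^3 distinguishable histories
```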
NO cmd_vel on rqt graph | Question:
I want to navigate a P3AT robot spawned in USARSim through the teleop_twist_keyboard package (in ROS).
I have connected ROS to USARSim using the executive_usarsim package and spawned the P3AT robot in USARSim, but when I run teleop_twist_keyboard and press keys to move the robot, nothing happens. Please help me out.
I have tried to run $ rosrun rqt_graph rqt_graph, and the following rqt graph appears: Click to view rqt_graph
Then I ran $ rostopic echo /executive_usarsim/status, and the following graph appeared: Click to view graph
Then I observed that cmd_vel is not present. What should I do next?
NOTE: I am using ROS Fuerte on Ubuntu 12.04 LTS [with the USARSim simulator].
See full description: click here
Originally posted by sumant on ROS Answers with karma: 77 on 2014-09-26
Post score: 0
Original comments
Comment by bvbdort on 2014-09-27:
cmd_vel is not published so its not on rqt_graph. try rostopic info cmd_vel to see topic publishers and subcribers
Comment by sumant on 2014-10-10:
hello, @bvbdort . from rostopic info cmd_vel.
publisher: teleop_twist_keyboard
subscriber: None
Since no one is a subscriber,
I want executive_usarsim to be a subscriber.
here is the rqt graph obtained.
https://tucrlab.files.wordpress.com/2014/09/executive.png
Comment by ajain on 2014-10-17:
Are you sure that executive_usarsim is subscribing to cmd_vel topic? You may wanna view "All Nodes and Topics" in rqt_graph (Upper left drop down menu) or check it using rostopic list. Also, check your launch file if you are using remap to change topic names.
Comment by sumant on 2014-10-23:
I am using usarsim_inf and it works fine.
Answer:
Hi @sumant,
Your executive_usarsim node is not subscribing to the /cmd_vel topic; that's why it is not shown in your rqt_graph.
You can do the following things to drive the robot:
Check on which topic your executive_usarsim node expects to receive Twist messages, and remap teleop_twist_keyboard from /cmd_vel to the identified topic.
Use usarsim_inf instead of executive_usarsim.
Originally posted by Aarif with karma: 351 on 2015-05-02
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 19534,
"tags": "ros"
} |
What is the parameterized complexity of following model checking problem? | Question:
Input: Graph $G$ and formula $\varphi_1(\vec x),\varphi_2(\vec x)$
Parameter: $tw(G)+|\varphi_1|+|\varphi_2|$
Problem: Decide if $|\varphi_1(G)|=|\varphi_2(G)|$
where $tw(G)$ is the treewidth of $G$ and $\varphi(G):=\{\vec a|(G,\vec a)\models\varphi\}$.
What is the parameterized complexity of this problem for $\varphi_i\in FO$ or $\varphi_i\in MSO$?
Answer: This problem is $\mathsf{FPT}$ for $\varphi_i \in MSO$ (and hence also for $\varphi_i \in FO$).
More precisely, Courcelle et al. prove in [1] the following:
Theorem [1, Thm. 32]
Let $\mathcal{C}$ be a class of graphs which is of bounded tree-width $k$.
Then any $MSO_2$ definable counting problem, given by $\varphi$, can be solved in time $c_k \cdot \mathcal{O}(|V| + |E|)$, where $c_k$ is a constant which depends only on $\varphi$ and $k$.
$MSO_2$ stands for monadic second-order logic where the universe is $V \cup E$ (vertices and edges), and we are given a binary relation $R(v,e)$ for the incidence between a vertex $v$ and an edge $e$. This is a quite natural representation of graphs, sufficiently powerful to e.g. define Hamiltonicity.
[1]: B.Courcelle, J.A.Makowsky, U.Rotics. On the fixed parameter complexity of graph enumeration problems definable in monadic second-order logic. Discrete Applied Mathematics 108, pp. 23-52 (2001) | {
"domain": "cstheory.stackexchange",
"id": 573,
"tags": "cc.complexity-theory, descriptive-complexity, parameterized-complexity, finite-model-theory, model-checking"
} |
Reaction between Iron (III) Oxide and CO? | Question: I'm a science tutor and I came across a stoichiometry problem asking how much Iron is produced in a reaction between $\ce{Fe2O3}$ and $\ce{CO}$:
$$\ce{Fe2O3 + CO -> Fe + ?}$$
I know how to do the stoichiometry part, but I'm stumped about how to figure out the products. My guess is $\ce{Fe}$ and $\ce{CO2}$, since the $\ce{CO}$ would want to change to the more stable $\ce{CO2}$, thus deoxidizing the iron? All the student's problems so far have just been between simple ionic salts, and I never remember coming across one like this when I took AP Chem myself in high school, since $\ce{CO}$ is covalent.
Answer: The reaction products depend on the stoichiometric ratio of the reactants as well as the reaction conditions (i.e. temperature, pressure, etc.)
$$\ce{Fe2O3 + CO ->[500-600 C] 2FeO + CO2}$$
Iron(III) oxide reacts with carbon monoxide to produce iron(II) oxide and
carbon dioxide. This reaction takes place at a temperature of
500-600°C. (source)
$$\ce{3Fe2O3 + CO ->[400 C] 2Fe3O4 + CO2}$$
Iron(III) oxide reacts with carbon monoxide to produce iron(II,III)
oxide and carbon dioxide. This reaction takes place at a temperature
near 400°C. (source)
$$\ce{Fe2O3 + 3CO ->[700 C] 2Fe + 3CO2}$$
Iron(III) oxide reacts with carbon monoxide to produce iron and carbon
dioxide. This reaction takes place at a temperature near 700°C. (source) | {
"domain": "chemistry.stackexchange",
"id": 6988,
"tags": "inorganic-chemistry, redox"
} |
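Since the original problem's quantities are not given in the question, here is a hypothetical worked example (my own sketch, assuming a 10.0 g sample of $\ce{Fe2O3}$) for the 700 °C reaction:

```python
# Worked stoichiometry for Fe2O3 + 3CO -> 2Fe + 3CO2, using a
# made-up 10.0 g sample of Fe2O3 for illustration.
M_Fe = 55.845                    # molar mass of Fe in g/mol
M_O = 15.999                     # molar mass of O in g/mol
M_Fe2O3 = 2 * M_Fe + 3 * M_O     # 159.687 g/mol

mass_Fe2O3 = 10.0                # g (assumed sample size)
mol_Fe2O3 = mass_Fe2O3 / M_Fe2O3
mol_Fe = 2 * mol_Fe2O3           # 2 mol Fe per mol Fe2O3
mass_Fe = mol_Fe * M_Fe

print(round(mass_Fe, 2))         # about 6.99 g of iron
```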
Is ergodic hypothesis in contradiction with the notion of equilibrium? | Question: From wikipedia:
In physics and thermodynamics, the ergodic hypothesis says that, over long periods of time, the time spent by a system in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e., that all accessible microstates are equiprobable over a long period of time.
So if I understood it right, given enough time the system will move through all possible states.
However, from thermodynamics we know that the state of equilibrium is, in a sense, the "final state" which the system eventually reaches and does not leave afterwards.
Aren't these two things in contradiction? If the ergodic hypothesis is true, wouldn't that mean that a system which is already in equilibrium will spontaneously move out of equilibrium into some other state (after enough time has passed)?
Answer: You have to be careful to distinguish between microstates and macrostates. Thermodynamic equilibrium is a macrostate which consists of a mixture of all possible microstates of energy $E$ weighted by a Boltzmann weight $e^{- \beta E} / Z$. A state in macroscopic thermal equilibrium can be thought of as "moving through phase space" ergodically (i.e. the microstate is constantly changing, but the fraction of time spent in each microstate is fixed to the Boltzmann weight). | {
"domain": "physics.stackexchange",
"id": 46479,
"tags": "thermodynamics, statistical-mechanics, equilibrium, ergodicity"
} |
Shake water and seed oil: color and density | Question: These days I am spraying the leaves of my garden to deter insects, vigorously mixing water and soya oil. The colour has become all white.
Soya oil has a density of $0.915$–$0.925~\mathrm{kg/dm^3}$, while water's is $1.000~\mathrm{kg/dm^3}$.
I believe the white colour is due to the strong mixing, which breaks the oil up into tiny droplets dispersed through the water.
Is there a physical or mathematical method, starting with the densities, to justify the white colour?
Answer: What you're observing is the Tyndall effect.
The Tyndall effect is light scattering by particles in a colloid or in
a very fine suspension. Also known as Willis–Tyndall scattering, it is
similar to Rayleigh scattering, in that the intensity of the scattered
light is inversely proportional to the fourth power of the wavelength,
so blue light is scattered much more strongly than red light. An
example in everyday life is the blue colour sometimes seen in the
smoke emitted by motorcycles, in particular two-stroke machines where
the burnt engine oil provides these particles. | {
"domain": "physics.stackexchange",
"id": 70009,
"tags": "density, density-operator"
} |
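The $1/\lambda^4$ law quoted above can be illustrated numerically (the wavelengths of 450 nm for blue and 650 nm for red are my own choice for illustration):

```python
# Illustration of the 1/lambda^4 scattering law mentioned above:
# relative scattering intensity of blue (450 nm) vs red (650 nm) light.
blue_nm = 450.0
red_nm = 650.0

ratio = (red_nm / blue_nm) ** 4
print(round(ratio, 1))  # blue is scattered about 4.4x more strongly than red
```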
C main function for POSIX shell | Question: I have a pretty large main function that I want to break up into smaller helper functions. Can you suggest what to break out into helper functions? The function is part of my own command-line shell, and all the code is available on GitHub. I think that one good helper function would handle "builtin commands" (e.g. commands that I implement myself) and another could "parse shell input" (e.g. the handleToken function). Are there any more possible helper functions that I should consider?
My goal is to make the main function small and readable.
int main(int argc, char *argv[]) {
    /* int awk = 0; */
    char line2[BUFFER_LEN];
    char linecopy[BUFFER_LEN];
    char *params[100];
    char *cl;
    char *path_value;
    int i = 0;
    int isBackground = 0;
    int built_in_command = 0;
    int fd[2];
    int b;
    long time;
    int status = 0;
    int max = 80;
    struct timeval time_start;
    struct timeval time_end;
    sigset_t my_sig;
    pid_t pid_temp;
    char *pathValue;
    char *path_strdup;
    struct sigaction sa, osa;
    char line[BUFFER_LEN];
    char *input, shell_prompt[BUFFER_LEN];
    size_t length;
    int ret;
    struct sigaction less_sa;

    err_setarg0(argv[argc - argc]);
    pid_temp = 0; /* To please the compiler */
    sa.sa_sigaction = sighandler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGINT, &sa, &osa);
    less_sa.sa_handler = &handle_sigchld;
    sigemptyset(&less_sa.sa_mask);
    less_sa.sa_flags = SA_RESTART | SA_NOCLDSTOP;
    if (sigaction(SIGCHLD, &less_sa, 0) == -1) {
        perror(0);
        exit(1);
    }
    /* get the PATH environment to find if less is installed */
    pathValue = getenv("PATH");
    if (!pathValue) {
        printf("'%s' is not set.\n", "PATH");
    }
    else {
        printf("'%s' is set to %s.\n", "PATH", pathValue);
    }
    path_strdup = strdup(pathValue);
    path_value = strtok(path_strdup, ":");
    ret = find_less_program(path_value);
    free(path_strdup);
    while (1) {
        i = 0;
        Janitor(SIGCHLD);
        /* Create prompt string from user name and current working directory. */
        snprintf(shell_prompt, sizeof(shell_prompt), "%s:%s $ ", getenv("USER"), getcwd(NULL, 1024));
        /* Display prompt and read input (NB: input must be freed after use)...*/
        input = readline(shell_prompt);
        if (!input)
            break;
        add_history(input);
        strncpy(line2, input, BUFFER_LEN);
        strncpy(linecopy, input, BUFFER_LEN);
        length = strlen(input);
        if (input[length - 1] == '\n') {
            input[length - 1] = '\0';
        }
        built_in_command = handleBuiltinCommands(input, ret);
        if (0 == built_in_command) { /*Not a built in command, so let execute it*/
            /*isBackground = background_check(max, input);*/
            isBackground = 0;
            for (b = 0; b < max; b++) {
                if ('&' == input[b]) {
                    printf("is background");
                    isBackground = 1;
                }
            }
            if (isBackground == 1) { /*If backgroundprocess*/
                if (pipe(fd) == -1) { /*(two new file descriptors)*/
                    perror("Failed creating pipe\n");
                }
                pid_temp = fork();
            }
            else if (isBackground == 0) { /*If foreground process*/
                gettimeofday(&time_start, NULL);
                if (1 == isSignal) { /*If using signaldetection*/
                    sigemptyset(&my_sig); /*empty and initialising a signal set*/
                    sigaddset(&my_sig, SIGCHLD); /*Adds signal to a signal set (my_sig)*/
                    /*http://pubs.opengroup.org/onlinepubs/7908799/xsh/sigprocmask.html*/
                    sigprocmask(SIG_BLOCK, &my_sig, NULL);
                }
                /*pid_temp = fork();*/
                foreground = pid_temp; /*Set pid for foreground process*/
            }
            if (0 < pid_temp) {
                /*Parent process*/
            }
            else if (0 > pid_temp) {
                /*Error*/
            }
            else {
                /*Child process*/
                if (1 == isBackground) { /*Backgroundprocess*/
                    dup2(fd[STDIN_FILENO], STDIN_FILENO);
                    close(fd[0]);
                    close(fd[1]);
                }
                length = strlen(linecopy);
                if (linecopy[length - 1] == '\n')
                    linecopy[length - 1] = '\0';
                /*printf("Command line: %s\n", linecopy);*/
                cl = strtok(linecopy, " ");
                i = 1;
                params[0] = NULL;
                i = handleToken(input, cl, params, i);
                dump_argv("Before"
                          " exec_arguments", i, params);
                exec_arguments(i, params);
                corpse_collector();
                /*free(input)*/;
            }
            if (0 == isBackground) { /*Foregroundprocess*/
                waitpid(foreground, &status, 0); /*Waiting*/
                /*Foregroundprocess terminated*/
                gettimeofday(&time_end, NULL);
                time = (time_end.tv_sec - time_start.tv_sec) * 1000000 +
                       time_end.tv_usec - time_start.tv_usec;
                printf("Execution time %ld.%03ld ms\n", time / 1000, time % 1000);
                if (1 == isSignal) { /*If using signaldetection*/
                    int a = sigprocmask(SIG_UNBLOCK, &my_sig, NULL);
                    /*http://man7.org/linux/man-pages/man2/sigprocmask.2.html*/
                    if (0 == a) {
                        /*Sigprocmask was successfull*/
                    }
                    else {
                        /*Sigprocmask was not successfull, return=-1*/
                    }
                    Janitor(SIGCHLD);
                }
            }
            else if (1 == isBackground) {
                close(fd[0]);
                close(fd[1]);
            }
        }
        built_in_command = 0; /*Reset*/
        memset(line, 0, sizeof line); /*Reset*/
        free(input);
    }
    return (0);
}
Answer: Look for the separate
sa.sa_sigaction = sighandler;
sa.sa_flags = SA_SIGINFO;
sigaction(SIGINT, &sa, &osa);
These three lines use sighandler which is not defined here, sa, and osa. All three are only used here. So you can move them into their own function without impacting the rest of the method. Don't forget the variable declaration.
Since you never use osa again, it's unclear why you have it here. Later you pass a 0 instead. Why not do it here?
less_sa.sa_handler = &handle_sigchld;
sigemptyset(&less_sa.sa_mask);
less_sa.sa_flags = SA_RESTART | SA_NOCLDSTOP;
if (sigaction(SIGCHLD, &less_sa, 0) == -1) {
    perror(0);
    exit(1);
}
The same thing applies here with less_sa. This could be in its own function or in the function with the previous code.
change_signal_handlers();
If you want both in one.
change_interrupt_handler();
change_child_handler();
If you want them to be in different functions.
Other names would also be valid. These are just possibilities.
/* get the PATH environment to find if less is installed */
pathValue = getenv("PATH");
if (!pathValue) {
    printf("'%s' is not set.\n", "PATH");
}
else {
    printf("'%s' is set to %s.\n", "PATH", pathValue);
}
path_strdup = strdup(pathValue);
path_value = strtok(path_strdup, ":");
ret = find_less_program(path_value);
free(path_strdup);
The only variable used later is ret. So this could be moved into its own function.
I don't know what ret is. Perhaps a more descriptive name?
Beyond that, I'd move the function call to get ret and the entire while loop into its own function. You might further break up that function as well, but that stuff doesn't belong in main.
That would make your main very simple:
int main(int argc, char *argv[]) {
    err_setarg0(argv[argc - argc]);
    change_interrupt_handler();
    change_child_handler();
    respond_to_shell_commands();
    // silence compiler warning with unnecessary return
    return 0;
}
You could actually do without the return as well, but perhaps your compiler squawks. It won't hurt anything either way.
And I'm not convinced that you need argc - argc either, but it doesn't seem to be hurting anything. You might comment in the reason for that though.
Consistency
Your naming standards are all over the place. Sometimes you use snake_case and others you use camelCase. Please pick one and stick to it. If I tried to modify this code, I'd have no idea what standard to use. In C, snake_case is more common. It also is a bit easier for international users to understand, so I would recommend that. Using either consistently would be better than the current state though.
Get rid of unused code
strncpy(line2, input, BUFFER_LEN);
Never used.
memset(line, 0, sizeof line); /*Reset*/
Why reset a variable that you never use? Perhaps this should be the same variable as linecopy. | {
"domain": "codereview.stackexchange",
"id": 19577,
"tags": "c, shell, unix, posix"
} |
Frequency Axis of Discrete Fourier Transform (DFT) with Odd Number of Data Points | Question: I am trying to understand the logic behind making a frequency axis in a DFT, which I am using for time-based light absorbance data. When we have an even number of data points (N = even integer), collected over a length of time A, the lowest frequency $\omega_1$ that can be resolved is 1/A, the next step is 2/A, and the $k^{th}$ step is k/A, where k = 1, 2, 3, ..., N/2. The highest frequency is $\omega_{max} = N/(2A)$ based on the Shannon–Nyquist criterion. This is the maximum value on the frequency axis.
How do we scale the frequency axis of the FFT when we have an odd number of data points (N+1) collected over a length of time A, where N is an even number? I cannot find similar reasoning for scaling with an odd number of data points except in the book excerpt below.
The frequency steps appear to be k/A whether the number of data points is even or odd.
Please see the excerpt from a book called The DFT: An Owner's Manual by Briggs. What is the author trying to say when he says that the highest frequency $$\frac{N}{(2A)}$$ does not quite coincide with the endpoint of the frequency domain, which has the value $\Omega$/2? Please note that the author is using N as an even number.
Is he indicating that there is a slight error when we have a DFT of an odd number of datapoints?
Answer: This can be answered simply by considering the definition of the $N$-point DFT:
$$X_N[n] = \sum_{k=0}^{N-1} x[k]e^{-j2\pi \frac {n}{N}k }$$
where it's easy to see that the DFT just compares your $N$-point input signal $x[k]$ to a sinusoid of frequency $\frac nN$.
Thus, the lowest frequency is always $0$, and the resolution is always $\frac{f_\text{sample}}{N}$, no matter whether $N$ is even or odd. That also means that the lowest non-DC frequency is also $\frac{f_\text{sample}}{N}$. | {
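This is easy to confirm numerically (an illustrative sketch using NumPy, not part of the original answer; the sampling rate of 100 Hz is an arbitrary choice): `np.fft.fftfreq` returns bins spaced $f_\text{sample}/N$ apart for both even and odd N.

```python
import numpy as np

f_sample = 100.0  # assumed sampling rate in Hz (arbitrary for this demo)
for N in (8, 9):  # one even and one odd record length
    freqs = np.fft.fftfreq(N, d=1.0 / f_sample)
    # The spacing between adjacent frequency bins is f_sample / N in both cases.
    spacing = freqs[1] - freqs[0]
    print(N, spacing, f_sample / N)
```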
"domain": "dsp.stackexchange",
"id": 7552,
"tags": "discrete-signals, fourier-transform, frequency-spectrum, frequency-response"
} |
Can one define wavefronts for waves travelling on a stretched string? | Question: If I have a wave on a string, can any wavefront be defined for such a wave?
And also is it possible to have circularly polarized string waves?
Answer:
If I have a wave on a string, can any wavefront be defined for such a wave?
In general, a wavefront is defined as a connected set of points in a wave that are all at the same phase at a given time (usually at the phase corresponding to the maximum displacement.) For a wave traveling in 1-D, the points at which the string is at the same phase are disconnected from each other; so in some sense, each wavefront consists of a single point.
This actually makes sense if you think about it. For a wave traveling in 3-D, the wavefronts are two-dimensional surfaces; for a wave traveling in 2-D, the wavefronts are one-dimensional curves; and so for a wave traveling in 1-D, the wavefronts are zero-dimensional points.
And also is it possible to have circularly polarized string waves?
Sure thing. You have two independent transverse polarizations; just set up a wave where these two polarizations are 90° out of phase with each other. The result would be a wave that looks like a helix propagating down the string. The animation from Wikipedia below was created with electric fields in a circularly polarized light wave in mind; but the vectors in the animation could equally well represent the displacement of each point of the string from its equilibrium position. | {
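A concrete parameterization (a sketch; the symbols A, k, and ω are mine, not the answer's) puts the two transverse displacement components a quarter cycle apart:

$$u_x(z,t) = A\cos(kz - \omega t), \qquad u_y(z,t) = A\sin(kz - \omega t),$$

so that each point of the string moves in a circle of radius A, and a snapshot of the whole string is a helix, as in the animation.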
"domain": "physics.stackexchange",
"id": 56994,
"tags": "newtonian-mechanics, classical-mechanics, waves"
} |
specifying tcp port range for nodes | Question:
we're extremely port limited and would like to specify a range of ports for nodes to use when communicating with each other. For example: 9000-9500
Any ideas how to assign specific ports for node communication? Looking at roscpp and the docs, it looks like the publisher sends a connection header to the subscriber with how to connect to itself. Does this mean the port is already determined at this point in time, or is the connection header sent through XMLRPC with the new port? It looks like the info is sent in the 'callerid' variable. Unfortunately, I haven't been able to track down exactly how this gets assigned....
How does port assignment make sure it doesn't clash with other ports?
All this is very mysterious, and it would be great if someone could provide even the smallest bit of info.
thank you
Originally posted by Rosen Diankov on ROS Answers with karma: 516 on 2011-12-04
Post score: 4
Answer:
Ports are automatically assigned by the system when the publisher's socket is created. The TCP and UDP code would need to be modified to constrain the ports within a range. After the listening port is opened, it is registered on the master so that the other nodes can contact it when they want to connect to it. (The client-side ports are also auto-assigned.)
Originally posted by tfoote with karma: 58457 on 2011-12-11
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by Rosen Diankov on 2011-12-11:
so in other words, the port selection happens in the linux kernel somewhere, interesting. that would explain why there's only calls to get the port. thanks! | {
"domain": "robotics.stackexchange",
"id": 7515,
"tags": "ros, tcpros, node"
} |
Rayleigh scattering and Raman scattering observed intensities | Question: I am currently studying the textbook Infrared and Raman Spectroscopy, 2nd edition, by Peter Larkin. In a section entitled The Raman Scattering Process, the author says the following:
Both Rayleigh and Raman are two photon processes involving scattering of incident light ($h c \bar{\nu}_L$), from a “virtual state.” The incident photon is momentarily absorbed by a transition from the ground state into a virtual state and a new photon is created and scattered by a transition from this virtual state. Rayleigh scattering is by far the most probable event and the scattered intensity is c. $10^{-3}$ less than that of the original incident radiation. This scattered photon results from a transition from the virtual state back to the ground state and is an elastic scattering of a photon resulting in no change in energy (i.e., occurs at the laser frequency).
Raman scattering is far less probable than Rayleigh scattering with an observed intensity that is c. $10^{-6}$ that of the incident light for strong Raman scatterers. This scattered photon results from a transition from the virtual state to the first excited state of the molecular vibration. This is described as an inelastic collision between photon and molecule, since the molecule acquires different vibrational energy ($\bar{\nu}_m$) and the scattered photon now has different energy and frequency.
There seems to be a typesetting issue here. Is this supposed to be $c \cdot 10^{-3}$ and $c \cdot 10^{-6}$, respectively, where $c$ is the speed of light? I would appreciate it if someone would please take the time to clarify this.
Answer: c. = circa (around, about, roughly, approximately) | {
"domain": "physics.stackexchange",
"id": 67689,
"tags": "quantum-mechanics, electromagnetic-radiation, photons, scattering, raman-spectroscopy"
} |
Frictional forces on banked curve | Question: I am quite confused when I consider the frictional force for an object moving around a banked curve.
I understand that the direction of friction changes from pointing up the slope to down the slope as the centripetal acceleration increases, as the horizontal components of the normal and frictional forces result in the centripetal force.
However, my understanding is that both the gravitational force and normal force experienced by the object remain constant. How can this be the case if the direction and magnitude of the frictional force (with its horizontal and vertical components) changes with velocity?
I created a diagram below that illustrates my understanding of the forces present when the velocity of the object on the banked curve is zero, small, and very large (diagrams 1, 2, and 3 respectively). For the forces in the y-direction to remain balanced while the frictional force changes, the y-component of the normal force should also be changing, which doesn't make sense to me.
Is there something that I am missing here, or is the normal force actually changing with velocity?
Answer: Indeed, the normal force is changing with orbital speed.
As the object is flung more outwards with higher speed in the same circular path, it applies a larger force on the banked surface. Meaning, the banked surface has a larger load to hold back against in order to avoid breaking. The normal force is the name given to the forces that come into existence in order to prevent a surface from breaking under a load. So, with a larger load, the normal force increases.
You could also turn this around and ask: why would you expect the normal force to be constant? There is no reason to expect that. The normal force is a reactionary force that adjusts its magnitude to any surrounding effects at any given moment. (Possibly you are intuitively assuming the normal force to match gravity, but that is a common mistake - if so, then it wouldn't make sense to ever talk about horizontal normal forces from when you, say, lean against a wall.) | {
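To make this quantitative (a sketch using θ for the bank angle and r for the radius of the circular path, neither of which appears in the original answer): friction acts along the incline, so it drops out when resolving perpendicular to the surface, and the horizontal centripetal acceleration contributes a component $(v^2/r)\sin\theta$ along the normal:

$$N - mg\cos\theta = \frac{mv^2}{r}\sin\theta \quad\Longrightarrow\quad N = m\left(g\cos\theta + \frac{v^2}{r}\sin\theta\right),$$

which grows with the orbital speed v, exactly as argued above.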
"domain": "physics.stackexchange",
"id": 85188,
"tags": "newtonian-mechanics, forces, friction, centripetal-force"
} |
Can two heavy objects circling around their C.M. be separated because of the speed of gravity? | Question:
Imagine two massive objects with the same mass (M) circling around their center of mass (C.M.). Let's assume that the distance between them is 1 light hour. Don't the two bodies get accelerated and move away from each other because each feels the other's gravity as it was 1 hour ago, which develops a force component tangent to the direction of motion?
Can it be that gravity acts instantaneously? I've read about experiments proving the speed of gravity to be equal to the speed of light, but these were disputed.
Isn't there, in both Newtonian mechanics and GR, a time delay? The difference being, though, that space and time in Newtonian mechanics are separate and absolute (implying instantaneity?), while in GR they together form a single absolute whole.
Answer:
Can two heavy objects circling around their C.M. be separated because of the speed of gravity?
No. Newtonian mechanics does quite well assuming instantaneous gravity. You can get relativistically correct orbits either by doing a complete GR calculation, or by using a simple approximation, as already alluded to in @antispinward's answer.
This excellent answer to Besides retarded gravitation, anything else to worry about when calculating MU69's orbit from scratch? in Space Exploration SE explains that we get the wrong answer if purely Newtonian mechanics is used except for slowing down the speed of gravity, but that's because one is neither treating gravity correctly nor using Newtonian mechanics correctly.
Many answers to How to calculate the planets and moons beyond Newtons's gravitational force? include a way to treat this problem using a well-accepted approximate treatment of general relativity.
From this answer:
The acceleration of a body in the gravitation field of another body of standard gravitational parameter $GM$ can be written:
$$\mathbf{a_{Newton}} = -GM \frac{\mathbf{r}}{|r|^3},$$
where $r$ is the vector from the body $M$ to the body whose acceleration is being calculated. Remember that in Newtonian mechanics the acceleration of each body depends only on the mass of the other body, even though the force depends on both masses, because the first mass cancels out by $a=F/m$.
and later:
The following approximation should be added to the Newtonian term:
$$\mathbf{a_{GR}} = GM \frac{1}{c^2 |r|^3}\left(4 GM \frac{\mathbf{r}}{|r|} - (\mathbf{v} \cdot \mathbf{v}) \mathbf{r} + 4 (\mathbf{r} \cdot \mathbf{v}) \mathbf{v} \right),$$ | {
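As an illustrative sketch (the function name and the Sun-like numbers are my own, not from the linked answers), both terms can be coded directly; note that with $\mathbf{v} = 0$ the correction reduces to a magnitude of $4(GM)^2/(c^2|r|^3)$:

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def accel(gm, r, v):
    """Newtonian acceleration plus the 1PN correction quoted above.

    gm : standard gravitational parameter G*M of the other body
    r  : position vector from the other body to this one (m)
    v  : velocity vector of this body (m/s)
    """
    r = np.asarray(r, dtype=float)
    v = np.asarray(v, dtype=float)
    d = np.linalg.norm(r)
    a_newton = -gm * r / d**3
    a_gr = (gm / (C**2 * d**3)) * (
        4.0 * gm * r / d - np.dot(v, v) * r + 4.0 * np.dot(r, v) * v
    )
    return a_newton + a_gr

# Sun-like GM at roughly 1 AU, body momentarily at rest: the GR term is tiny.
a = accel(1.327e20, [1.496e11, 0.0, 0.0], [0.0, 0.0, 0.0])
```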
"domain": "astronomy.stackexchange",
"id": 4805,
"tags": "gravity"
} |
Simple and fair scheduler for function calls on Arduino | Question: Because Arduino platforms are fairly limited in their capacities, I wrote a small process scheduler. I append a function to an array and define a tickrate; after this tickrate elapses, the function is called. Additionally, I set a small delay: if several functions share the same tickrate, this delay keeps them from being called too close together. In case the serial bus is used, this can avoid read/write blocking.
I have never written something similar and wouldn't say that my approach with the delay is elegant. Maybe someone has ideas to improve it.
Notes:
- CAPITALS are #defines
Sample:
Emitter emitMain(&main_loop, MAIN_LOOP_T_MS);
// Prepare scheduler for the main loop ..
_SCHED.addEmitter(&emitMain, 0);
_SCHED.run();
Code *.h:
class Emitter {
public:
Emitter(void (*pf_foo)(), uint16_t delay = 0);
bool emit();
void reset();
uint16_t getDelay(uint16_t iNum);
private:
bool bSend;
uint16_t iDelay;
void (*pfEmitter)();
};
///////////////////////////////////////////////////////////
// Container for emitter objects
///////////////////////////////////////////////////////////
class Emitters {
private:
const AP_HAL::HAL *m_pHAL;
uint8_t m_iItems; // Current number of items in the arrays below
uint32_t m_timerList [MAX_NO_PROC_IN_SCHED];
Emitter *m_functionList[MAX_NO_PROC_IN_SCHED];
uint16_t m_tickrateList[MAX_NO_PROC_IN_SCHED];
protected:
///////////////////////////////////////////////////////////
// pEmitters: Array of iSize_N elements
// iTickRates: The times in ms until the emitter in the array will emit again
// iTimerList: Array holding the timers for each element
// iSize_N: The number of emitters in the array
///////////////////////////////////////////////////////////
void scheduler(Emitter **pEmitters, uint16_t *iTickRates, uint32_t *iTimerList, const uint8_t iSize_N);
public:
Emitters(const AP_HAL::HAL *);
void addEmitter(Emitter *, uint16_t iTickRate);
void run();
};
Code *.cpp:
#include "emitter.h"
Emitter::Emitter(void (*pf_foo)(), uint16_t delay) {
bSend = false;
iDelay = delay;
pfEmitter = pf_foo;
}
bool Emitter::emit() {
if(!bSend && pfEmitter != NULL) {
pfEmitter();
bSend = true;
return true;
}
return false;
}
void Emitter::reset() {
bSend = false;
}
uint16_t Emitter::getDelay(uint16_t iNum) {
return iDelay * (iNum+1);
}
///////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////
Emitters::Emitters(const AP_HAL::HAL *p) {
m_pHAL = p;
memset(m_functionList, NULL, sizeof(m_functionList));
memset(m_tickrateList, 0, sizeof(m_tickrateList));
m_iItems = 0;
uint32_t timer = m_pHAL->scheduler->millis();
for(uint8_t i = 0; i < MAX_NO_PROC_IN_SCHED; i++) {
m_timerList[i] = timer;
}
}
void Emitters::addEmitter(Emitter *p, uint16_t iTickRate) {
if(m_iItems < sizeof(m_functionList)-1 && p != NULL) {
m_functionList[m_iItems] = p;
m_tickrateList[m_iItems] = iTickRate;
m_iItems++;
}
}
void Emitters::scheduler(Emitter **pEmitters, uint16_t *iTickRates, uint32_t *iTimerList, const uint8_t iSize_N) {
if(m_pHAL == NULL)
return;
for(uint8_t i = 0; i < iSize_N; i++) {
uint32_t time = m_pHAL->scheduler->millis() - iTimerList[i];
if(time > iTickRates[i] + pEmitters[i]->getDelay(i) ) {
if(pEmitters[i]->emit() ) {
if(i == (iSize_N - 1) ) { // Reset everything if last emitter successfully emitted
for(uint16_t i = 0; i < iSize_N; i++) {
pEmitters[i]->reset();
}
iTimerList[i] = m_pHAL->scheduler->millis();
}
}
}
}
}
void Emitters::run() {
scheduler(m_functionList, m_tickrateList, m_timerList, m_iItems);
}
EDIT after first answer: Initially my scheduler had a completely different design, and I have not tested the current code so far. But with the help of palacsint I made a few changes compared to the example above. If there are further suggestions, I will change the code after this section.
SAMPLE:
// function, delay, multiplier of the delay
Emitter emitAtti(&send_atti, 3, 0);
Emitter emitRC (&send_rc, 37, 0);
Emitter emitComp(&send_comp, 44, 0);
Emitter emitBaro(&send_baro, 66, 0);
Emitter emitGPS (&send_gps, 66, 1);
Emitter emitBat (&send_bat, 75, 0);
Emitter emitPID (&send_pids, 75, 1);
IMPLEMENTATION:
Emitter::Emitter(void (*pf_foo)(), uint16_t delay, uint8_t mult) {
m_bSend = false;
m_iDelay = delay;
pfEmitter = pf_foo;
m_iDelayMultplr = mult;
m_iTimer = 0; // assign the member; a local `uint32_t m_iTimer` would shadow it
}
bool Emitter::emit() {
if(!m_bSend && pfEmitter != NULL) {
pfEmitter();
m_bSend = true;
return true;
}
return false;
}
void Emitter::reset() {
m_bSend = false;
}
uint32_t Emitter::getTimer() {
return m_iTimer;
}
void Emitter::setTimer(const uint32_t iTimer) {
m_iTimer = iTimer;
}
uint16_t Emitter::getDelay() {
return m_iDelay * m_iDelayMultplr;
}
///////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////
Emitters::Emitters(const AP_HAL::HAL *p) {
m_pHAL = p;
memset(m_functionList, NULL, sizeof(m_functionList));
memset(m_tickrateList, 0, sizeof(m_tickrateList));
m_iItems = 0;
}
void Emitters::addEmitter(Emitter *p, uint16_t iTickRate) {
if(m_iItems < NO_PRC_SCHED && p != NULL) {
m_functionList[m_iItems] = p;
m_tickrateList[m_iItems] = iTickRate;
m_iItems++;
}
}
bool Emitters::isEmitted(const uint8_t i) {
Emitter *pCurEmit = m_functionList[i];
uint32_t time = m_pHAL->scheduler->millis() - pCurEmit->getTimer();
// Time yet to start the current emitter?
if(time <= m_tickrateList[i] + pCurEmit->getDelay() ) {
return false;
} else {
// Release the block for the transmitter
pCurEmit->reset();
}
if(pCurEmit->emit() ) {
// Set timer to the current time
pCurEmit->setTimer(m_pHAL->scheduler->millis() );
} else {
return false;
}
return true;
}
void Emitters::resetAll() {
// Reset everything if last emitter successfully emitted
for(uint16_t i = 0; i < m_iItems; i++) {
m_functionList[i]->reset();
}
}
void Emitters::run() {
if(m_pHAL == NULL)
return;
for(uint8_t i = 0; i < m_iItems; i++) {
// Run all emitters
if(!isEmitted(i) ) {
continue;
}
}
}
Answer: Your application looks like a typical periodic real-time task scheduler. There are many known and good algorithms for this; the two most widely used are Earliest Deadline First (EDF) and Rate Monotonic (RM). By the looks of your example, you do not seem to have a pre-emptive scheduling model, which is fine if you don't want to deal with processes and context switches.
A Task is a piece of periodic processing that has to be done; we call each period of a Task a job. The task releases jobs to be executed periodically, and each job has a deadline. The deadline is equal to time_of_release + period, which coincidentally is the time of the next job release. Jobs are executed in order of earliest deadline first after they have been released. Each job has a designated "worst case execution time" (WCET), which you can determine experimentally or, preferably, by analysis. If you don't care about hard real-time constraints, you can simply set it to 0; it only affects schedulability analysis and has no impact on actual scheduling.
The following implements a rudimentary (not-tested) non-preemptive EDF scheduler.
Please note: I have written this from the top of my head, and this is not suitable for use in any real-time system without a thorough code review and testing. This is provided for demonstrative purposes only!
class EdfScheduler;
class Task{
public:
// period: The period the task must be executed with.
// wcet: The worst case execution time of the task.
Task(int period, int wcet, int start_time)
: m_period(period), m_wcet(wcet), m_next_deadline(start_time)
{}
virtual ~Task() {} // defined inline; a declared-but-undefined virtual dtor won't link
virtual void run() = 0;
private:
bool canRun(int time){
    int job_start = m_next_deadline - m_period;
    return time >= job_start; // runnable once the job has been released
}
int nextRun(){
return m_next_deadline;
}
const int m_period;
const int m_wcet;
int m_next_deadline; // And coincidental job-release
friend class EdfScheduler;
};
class EdfScheduler{
static const int MAX_TASKS = 8;
static const int MAX_SLEEP = 200;
public:
EdfScheduler(){
for(int i = 0; i < MAX_TASKS; ++i){
m_tasks[i] = NULL;
}
m_processor_load = 0.0f;
}
bool addTask(Task* t){
bool added = false;
for(int i = 0; i < MAX_TASKS; ++i){
if(m_tasks[i] == NULL){
m_tasks[i] = t;
added = true;
break;
}
}
if(!added)
return false;
t->m_next_deadline += t->m_period;
float wcet_max = 0;
for(int i = 0; i < MAX_TASKS; ++i){
if(m_tasks[i] != NULL){
wcet_max = max(wcet_max, m_tasks[i]->m_wcet);
}
}
float load = 0;
for(int i = 0; i < MAX_TASKS; ++i){
if(m_tasks[i] != NULL){
load += (m_tasks[i]->m_wcet + wcet_max) / m_tasks[i]->m_period;
}
}
if(load > 1.0f)
warning("System is overloaded and may not meet deadlines.");
return true;
}
void runScheduler(){
    while(1){
        int now = current_time(); // platform clock, e.g. millis() on Arduino
        Task* edf = NULL;
        int next_release = now + MAX_SLEEP;
        for(int i = 0; i < MAX_TASKS; ++i){
            if(m_tasks[i] != NULL){
                if(m_tasks[i]->canRun(now)){
                    if(edf == NULL || m_tasks[i]->m_next_deadline < edf->m_next_deadline){
                        edf = m_tasks[i];
                    }
                }
                // The next job release is one period before the next deadline.
                next_release = min(next_release, m_tasks[i]->nextRun() - m_tasks[i]->m_period);
            }
        }
        if(!edf){
            sleep(next_release - now); // nothing runnable yet; sleep until the next release
            continue;
        }
        edf->run();
        edf->m_next_deadline += edf->m_period; // Prepare it for running the next time.
    }
}
private:
Task* m_tasks[MAX_TASKS];
float m_processor_load;
};
I realize I might have gone over-board with this but I hope you find it helpful or at least interesting in some way :rollseyes: | {
"domain": "codereview.stackexchange",
"id": 6184,
"tags": "c++, arduino"
} |
How a photon travels diagonally in a spaceship at relativistic speed according to Special Relativity | Question: According to special relativity, a clock ticks slower on a spaceship moving at relativistic speed because the light travels a longer diagonal distance with respect to an observer on the ground.
If light is a wave, then how does it gain the velocity of the spaceship, such that it travels along a diagonal?
Can light make an interference pattern in this scenario?
Answer: Your question should be closed as a duplicate. The answer has nothing to do with light, per se. The answer is that the direction of any motion is frame dependent. To see this, consider the following...
Suppose you stand at the origin of your horizontal x axis and shine a light vertically up. What that means is that the light moves further and further along your vertical y axis, but does not move at all along your x axis- the x coordinate of the light is always zero. That is the definition of vertical motion.
Now suppose that at the instant you shine the light, I happen to be walking past you at a meter per second. Let us consider the motion of the light in my frame. After a second, the light will be about 300,000 km above me, but not directly above me, since in my frame the light has an x coordinate of -1 meter. After two seconds, the light will be about 600,000 km above me, and now it will have drifted further behind me, with an x coordinate in my frame of -2 meters. And so on. With every second that passes, the light has an x coordinate in my frame that puts it an extra meter away from my vertical axis. So in my frame, as a person walking past you, the light is following a slightly angled path, not truly vertical one.
What I have depicted above is true of any kind of linear motion, be it the motion of light or of a bouncing ball. It has nothing to do with momentum etc. | {
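The arithmetic in this example is easy to reproduce (a small illustrative script, not from the original answer): in the walker's frame the light's path tilts from the vertical by arctan(v/c), about 3.3 nanoradians at walking speed.

```python
import math

c = 299792458.0   # speed of light, m/s
v = 1.0           # walking speed, m/s

# After t seconds the light is c*t up the y axis and -v*t along x
# in the walker's frame (a Galilean picture, which is fine at 1 m/s).
for t in (1.0, 2.0):
    x, y = -v * t, c * t
    print(t, x, y)

angle = math.atan2(v, c)  # tilt of the path from vertical, radians
print(angle)
```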
"domain": "physics.stackexchange",
"id": 95049,
"tags": "special-relativity, electromagnetic-radiation, coordinate-systems, speed-of-light, inertial-frames"
} |
Why Do We Store The Action In Replay Memory In Deep Q-learning | Question: According to my understanding, in deep Q-learning, in order to train the NN, the agent stores experiences in a buffer (replay memory), and each experience contains:
e = <s,a,r,s'>
When we train the model, we feed s into the target NN and choose the Q value of action a. Then we feed s' into the prediction network and take the max Q value (then we calculate the loss using the reward r...).
My problem is that I don't understand why we store the action a and take its Q value in the target network, if we always take the action with the max Q value.
Isn't the data for action a redundant?
Answer: In a very general form, temporal difference (TD) learning is based on the idea that a value (typically a state value or action value) at time $t$ is related to the value at time $t+n$, and this can be used to improve estimates of the value at time $t$.
In single-step TD learning using action values, the values that get related, and are used to drive the update are $Q(s_t, a_t)$ and $Q(s_{t+1}, a_{t+1})$. This applies to Q learning and SARSA - in both of these, in order to process an update, you need to select which parameters the estimate is being updated for i.e. $s_t, a_t$ (for both SARSA and Q-learning) and which parameters are used to calculate the TD target. It is these second set of parameters where selecting $a_{t+1}$ differs between SARSA and Q-learning. SARSA should use the action that was actually taken, and Q-learning can use the current greedy action.
The update is always for a specific pair $s_t, a_t$ - you are estimating the value of an observed state/action pair, based on the immediate reward and state transition seen after it. You can alter that estimate based on a different target policy, but you cannot alter what it is an estimate for - swapping $a_t$ for some other action means you don't know what the immediate reward and next state should be.
So, the action $a_t$ in the experience replay table selects which estimate you are updating. In Q learning you don't need to store the next action $a_{t+1}$ because you will use the greedy action instead to calculate the update.
As an aside, in SARSA you cannot normally use it with an experience replay table because the old experiences are all off-policy. However, you could store a bunch of experiences without updating, then use them in a single batch update. In that case then you would store both $a_t$ and $a_{t+1}$. | {
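To see concretely where the stored action is used, here is a minimal tabular sketch (the buffer contents and sizes are made up for illustration): the stored a indexes which estimate Q[s, a] gets updated, while the greedy max over s' supplies the target.

```python
import numpy as np

n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))
gamma, lr = 0.9, 0.1

# Replay buffer of (s, a, r, s') tuples. The stored action a selects
# WHICH estimate Q[s, a] is updated; the greedy max over s' builds the target.
buffer = [(0, 2, 1.0, 1), (1, 0, 0.0, 2)]

for s, a, r, s_next in buffer:
    target = r + gamma * Q[s_next].max()  # greedy action in s' (Q-learning)
    Q[s, a] += lr * (target - Q[s, a])    # update only the stored (s, a) pair

print(Q)  # only the visited (s, a) pairs changed
```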
"domain": "datascience.stackexchange",
"id": 10451,
"tags": "deep-learning, reinforcement-learning"
} |
DRCSIM VRC Terrain: Stripmine those mountains! | Question:
Currently it seems the camera simulation view volume is set to be smaller than the VRC environment, with the effect that, as the robot moves around, mountains come into view, changing the skyline. This makes it hard for image-based visual registration algorithms, since the skyline is such a dominant feature.
I understand that limiting the camera simulation view volume speeds up the simulation (but it shouldn't matter that much if camera simulation is happening in a separate thread). I see several options.
1. Just increase the view volume by increasing the range to 300.
2. Cut the terrains down to just what is needed in the VRC. Make the VRC tasks fit in 100x100m. This is really only a problem for VRC1. Just loop the road in a race course like layout instead of an almost straight road.
3. Strip mine the mountains. Have no tall mountains, so stuff far away cannot be seen, even if it is there. In West Virginia they simply cut the tops off the mountains at a fixed height. You can fix the ugly scars by enclosing the VRC terrain with giant billboards (like theater sets) with images of mountain tops texture mapped on them.
Chris
Originally posted by cga on Gazebo Answers with karma: 223 on 2013-03-17
Post score: 0
Answer:
thanks for the ticket. We'll test impact on performance and adopt one of the solutions.
Originally posted by hsu with karma: 1873 on 2013-04-02
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 3136,
"tags": "drcsim"
} |
Writing a buffer that takes a header and a variable number of packets and makes a payload | Question: I'm writing this as an exercise. I would probably use a vector as a buffer internally (the extra capacity pointer overhead is not important enough). Primarily it's an exercise in writing copy/move constructors and assignment operators.
What are some of the pitfalls of defining these operators explicitly? What could go wrong?
#include <algorithm>
#include <span>
struct header_t { /* some fields... */ };
struct packet_t { /* some fields... */ };
const header_t default_header() { return header_t{ /* some init... */ }; }
class message_combiner {
char* payload{nullptr};
size_t size{0};
public:
std::pair<char*, size_t> get() const { return {payload, size}; };
message_combiner(const std::span<packet_t>& data, const header_t& hdr = default_header())
: size{sizeof(header_t) + data.size() * sizeof(packet_t)}
{
payload = new char[size];
auto hdr_ptr = reinterpret_cast<header_t*>(payload);
*hdr_ptr = hdr;
auto data_ptr = reinterpret_cast<packet_t*>(hdr_ptr + 1);
std::copy(data.begin(), data.end(), data_ptr);
}
message_combiner(const message_combiner& other) { *this = other; }
message_combiner(message_combiner&& other) { *this = other; }
message_combiner& operator=(const message_combiner& other)
{
size = other.size;
payload = new char[size];
std::copy(other.payload, other.payload + size, payload);
return *this;
}
message_combiner& operator=(message_combiner&& other)
{
std::swap(size, other.size);
std::swap(payload, other.payload);
return *this;
}
~message_combiner() { if (payload != nullptr) delete [] payload; }
/// other functions that do useful things
};
Answer:
What are some of the pitfalls of defining these operators explicitly? What could go wrong?
The best code you can write is no code.
Every single line of code—every statement, every expression, every single character—introduces potential bugs, and needs to be inspected, tested, or both. The only possible way to have no potential for bugs is to not write any code.
That is why, whenever it is possible, you should let the compiler generate code for you. In theory, the compiler may have bugs… but even if it does, your final program will have more bugs if it has both the compiler bugs and the bugs you introduce. In practice, the compiler is much more rigorously tested, and regularly reviewed by many, many coders way, way better than you or I ever will be, so it will be a very, very rare thing to discover a bug in the compiler… whereas finding bugs in your own code will happen multiple times every day.
And, in point of fact, your implementation of these operators is riddled with bugs. Your code would be much better if you’d used something like std::vector, and let the copy/move operations be automatically generated. But even then, there are lot of problems with this class.
Code review
#include <algorithm>
#include <span>
You are using std::pair, but missing the <utility> header.
char* payload{nullptr};
Okay, let’s set aside that you are using a naked pointer for ownership semantics for now. Even allowing for that, there are still piles of problems here.
You haven’t given nearly enough information about what this type is really supposed to be about, other than some vague, hand-wavey something-something about putting a header and a variable number of packets in it. What is message_combiner for? What is it supposed to do? Not knowing these things, I am forced to make guesses about what is going on here.
I see two possibilities for what you are trying to do.
You want to copy the memory representation of a header_t and zero or more packet_t objects into a bunch of bytes. I can’t imagine why you would want to do this… it kinda looks like you might have some sort of data transmission in mind, but this would be wildly unportable. Depending on a lot of factors, not only will it be dangerous to share this data between different computers, it might not even work for sharing data between different processes on the same machine… hell, it might not even work for different processes of the same program compiled with the same compiler.
You want to actually create real header_t and packet_t objects in a single memory buffer. There might be good reasons for this, like enforcing memory locality. But that’s more limited than you might think. And a lot harder to do right.
All in all, while what you’re doing looks like an absolutely terrible idea… and completely wrong… because you haven’t given enough information about what it’s supposed to be doing, I can’t be sure.
So I’ll just accept that you have a legitimate, sensible reason for this type. I seriously doubt you do, but I’ll give you the benefit of the doubt.
Okay, so you’re copying junk from memory into a byte array. For that, char* is the type you’d use… before C++17.
You are using C++20. Since C++17, there is std::byte. This is now the correct type to use for this kind of thing.
So at the very least, this data member should be:
std::byte* payload{nullptr};
But this is still terrible. I would refuse this in any of my projects, without even so much as a second glance.
std::vector<std::byte> should be your default choice for this, but if you don’t need the resizeability power of std::vector, you could go for std::unique_ptr<std::byte[]> instead. You’d have to manually keep track of the size, but if you’re not changing it after initialization, this isn’t a big problem.
Either way, not using a smart pointer or container of some kind is simply unacceptable in modern C++.
std::pair<char*, size_t> get() const { return {payload, size}; };
This function smells like a terrible idea. First, is it really necessary to give everyone access to the buffer? Second, if it is necessary… why not use a span?
message_combiner(const std::span<packet_t>& data, const header_t& hdr = default_header())
: size{sizeof(header_t) + data.size() * sizeof(packet_t)}
{
payload = new char[size];
auto hdr_ptr = reinterpret_cast<header_t*>(payload);
*hdr_ptr = hdr;
auto data_ptr = reinterpret_cast<packet_t*>(hdr_ptr + 1);
std::copy(data.begin(), data.end(), data_ptr);
}
Okay, let’s start at the top.
Never pass view types like std::span by const&. That’s just silly. The whole point of std::span is that it’s a cheap view of a span of data. It’s meant to be copied around. (Hopefully it will be passed in registers, but even if not, it will be passed by-value, avoiding unnecessary indirection, and the compiler can optimize aggressively because it doesn’t need to worry about aliasing.)
Personally, I am not a fan of default arguments. They cause way more problems than they’re worth. You would do better to have two constructors, with one delegating to the other:
explicit message_combiner(std::span<packet_t> data)
: message_combiner(data, default_header())
{}
message_combiner(std::span<packet_t> data, const header_t& hdr)
// ...
Note also the explicit there. It’s not clear in your code because of the default arguments, but that constructor is potentially a converting constructor. Those almost always should be marked explicit.
Okay, now we get to the first UB bug in your code:
auto hdr_ptr = reinterpret_cast<header_t*>(payload);
*hdr_ptr = hdr;
So you cast your pointer to a header_t pointer, and then do a copy. The problem? You didn’t make sure that the ALIGNMENT of what payload points to matches the alignment of a header_t. If you’re not lucky, when you try to copy hdr into *hdr_ptr, it’s going to trigger a misaligned data fault, and… crash (if you’re lucky; with UB like this, you could get a whole lot worse than a mere crash).
The thing is, I don’t even know if aligning the payload properly is the right fix, because I can’t make sense of what you’re trying to do.
If your goal is just to copy the memory representation of a header_t… then do that. If your goal is to have your payload actually be a header_t (plus other stuff)… for whatever reason… then do that instead. They are two very, very different things.
auto data_ptr = reinterpret_cast<packet_t*>(hdr_ptr + 1);
Why all the casting and pointer arithmetic? data_ptr is just payload + sizeof(header_t).
Now, casting that to a packet_t pointer creates the same problem as above: you haven’t guaranteed that the alignment is correct for packet_t. Which means:
std::copy(data.begin(), data.end(), data_ptr);
More UB.
I still don’t know what you think you’re doing here. Either you’re copying the memory representation of these objects into a byte array, or you’re trying to create a single chunk of memory that has an actual header_t followed by zero or more packet_t objects. Either option is… kinda silly, and totally unportable. So I can’t tell which flavour of silly you’re actually going for.
If you’re copying MEMORY REPRESENTATIONS, then you just have to do:
// allocate the memory
//
// alignment doesn't matter if you're just copying the REPRESENTATIONS
// of the objects
payload = new std::byte[sizeof(header_t) + (sizeof(packet_t) * data.size())];
// copy the header representation
std::copy_n(reinterpret_cast<std::byte const*>(&hdr), sizeof(header_t), payload);
// copy the representations of the packets, if any
//
// note that we just add the size of the header to skip past it
std::copy_n(std::as_bytes(data).data(), sizeof(packet_t) * data.size(), payload + sizeof(header_t));
Simple.
If you actually want to have ACTUAL OBJECTS… not just representations, but actual objects… things get much trickier.
First, you need to calculate the size of the memory to allocate… and this is not trivial. To calculate the size, you have to:
a. Start with the size of header_t. That’s the starting value, and the minimum size.
b. If there are packets, then make sure the current value matches the alignment of packet_t. If not, increase the value until it does.
c. Now add sizeof(packet_t) * data.size().
As you can see, the really tricky part is in the middle, where you have to account for the alignment of packet_t.
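That middle step can be sketched as a small helper (hypothetical code, not part of the reviewed class — it just rounds the header size up to the next multiple of the packet alignment):

```cpp
#include <cstddef>

// Offset of the first packet: the header size rounded up to the next
// multiple of packet_t's alignment. This is also the starting value to
// which sizeof(packet_t) * data.size() is added when computing the size.
constexpr std::size_t packet_offset_for(std::size_t header_size,
                                        std::size_t packet_align)
{
    return (header_size + packet_align - 1) / packet_align * packet_align;
}
```

For example, with sizeof(header_t) == 10 and alignof(packet_t) == 8, the first packet would start at offset 16.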
When you allocate the memory, you need to align it to the alignment of a header_t (because that will be the first object). So something like: payload = new (std::align_val_t{alignof(header_t)}) std::byte[/*size*/].
You can then copy the header into place with std::construct_at(reinterpret_cast<header_t*>(payload), hdr), because the memory is now properly aligned (and construct_at actually creates the header_t object, rather than assigning over raw memory).
For the packets, you first need to find the offset to the first packet (if any). It’s the same logic as when calculating the size. Once you have the offset of the first packet, you can just do: std::uninitialized_copy(data.begin(), data.end(), reinterpret_cast<packet_t*>(payload + offset)). Note that you have to use uninitialized_copy()… not copy(). Why? Because copy() only works when you are copying over existing objects. But you don’t have existing objects, you have raw, uninitialized memory.
If anything throws an exception, you need to be able to handle it. The most dangerous point is after you have copied the header, because if anything else fails, it’s on you to destroy it.
This might look something like this:
// allocate the memory
//
// alignment DOES matter; payload must be aligned as a header_t
//
// packet_offset is just the offset to the first packet, calculated as
// described above
if (data.empty())
payload = new (std::align_val_t{alignof(header_t)}) std::byte[sizeof(header_t)];
else
payload = new (std::align_val_t{alignof(header_t)}) std::byte[packet_offset + (data.size() * sizeof(packet_t))];
// if anything after this throws, you need to free the memory
try
{
// copy the header
std::construct_at(reinterpret_cast<header_t*>(payload), hdr);
// if anything after this throws, you need to destroy the header
try
{
if (not data.empty())
std::uninitialized_copy(data.begin(), data.end(), reinterpret_cast<packet_t*>(payload + packet_offset));
}
catch (...)
{
std::destroy_at(reinterpret_cast<header_t*>(payload));
throw;
}
}
catch (...)
{
delete[] payload;
throw;
}
That is UGLY, largely because payload is not a smart pointer.
Note that if you go this route, you are COMPLETELY responsible for managing the header and packet objects. That means you have to manually delete them in the destructor, manually copy them in the copy constructor, and so on. (If you are just working with representations, you don’t need to worry about that stuff.)
message_combiner(const message_combiner& other) { *this = other; }
This is a bad way to do copy construction. It is not just misguided, it is inefficient.
In order to use copy assignment, you must have a fully-constructed object. So if you wanted to do it this way you would first have to properly construct the object… and then do the assignment:
message_combiner(message_combiner const& other)
: message_combiner() // default construct this object first
{
// now that you have a fully (default) constructed object, you can assign
*this = other;
}
Right now, your copy constructor “works” because the member initializers roughly approximate a default constructor. But that’s just for now… if you change the class, that may no longer be true.
If you do copy construction this way, you are completely constructing an object… and then immediately obliterating it by copying over it. That’s silly. That’s why C++ coders generally do it the other way around: they write a proper copy constructor, then write copy assignment in terms of that.
message_combiner(message_combiner&& other) { *this = other; }
All the same problems as the copy constructor, plus more.
First, that should be a move assignment.
Second, move ops should be noexcept wherever possible. (And that’s certainly possible here.)
message_combiner& operator=(const message_combiner& other)
{
size = other.size;
payload = new char[size];
std::copy(other.payload, other.payload + size, payload);
return *this;
}
First, you are failing to delete the existing payload before allocating a new one.
Second, everything else in the function is wrong, because it should all be more or less identical to regularly constructing the object.
In theory, copy assignment is just:
destruction; followed by
copy constructing over the ashes of the old object.
That’s the pattern you are attempting to write. But that’s a dangerous pattern, because if anything fails in the second step, you have already destroyed the original. Your object, and likely your program by extension, will now be in a broken state.
A safer pattern is the copy-and-swap pattern:
auto operator=(message_combiner const& other) -> message_combiner&
{
auto temp = other; // copy
std::ranges::swap(*this, temp); // swap - normally this will be no-fail
return *this;
}
If the copy fails, then the original object is untouched.
This is why you should write the copy constructor properly, and then do copy assignment in terms of copy construction… not the other way around.
message_combiner& operator=(message_combiner&& other)
{
std::swap(size, other.size);
std::swap(payload, other.payload);
return *this;
}
This is fine, but it could be noexcept.
Also, you should really consider writing a swap function, and then writing move assignment (and copy assignment, and move construction) in terms of that. If you do it the way you are doing now, then swapping becomes ridiculously over-complicated. On the other hand, if you write a proper swap, then that will be efficient… and everything that uses swapping will also be.
~message_combiner() { if (payload != nullptr) delete [] payload; }
If you are just storing representations of the header and packets, then this is fine.
But the way you’ve written the code, where you actually create real header and packet objects within the memory payload points to, this is not good enough. If you have real header and packet objects, they need to be destroyed. You can’t just delete the memory out from under them.
Summary
There is a lot of conceptual confusion here: quite frankly, I don’t think you really know what you’re doing.
You can’t just copy the memory representation of objects around willy-nilly. This is C++; not C. (And even in C, the way you’re copying representations around would be clumsy and ill-formed.) Objects should be treated like actual objects, not just a bag of bytes in memory. They should be properly constructed, and they should be properly destroyed. Even if the constructor and destructor are no-ops, which is often the case for simple types, you still have to treat them like they do actual stuff.
So you can’t just allocate a chunk of memory and then copy objects into it. You either have to first initialize that memory properly (by constructing objects in it with placement new, for example), or you have to use the uninitialized memory algorithms (which basically just use placement new or construct_at() under the hood).
And even if you could just allocate a chunk of memory and copy objects into it, you’d still need to respect things like alignment.
And even if you managed to fix all the problems here, one way or the other—either by just using memory representations, or by properly initializing real objects in the uninitialized memory and then managing them properly—there doesn’t seem to be any point to it all, because you couldn’t really do anything with the class. It’s useless; the whole idea behind it is misguided. Whether it’s memory representations or actual objects, it won’t work for data transfer, and would be really dodgy for serialization. As I said, I don’t think you really know what you’re doing.
You said your focus was on the copy and move operations. Well, they’re all wrong, but I can’t even tell you how to fix them properly, because I can’t make sense of what you’re trying to do. But you basically have two options:
If you are just storing the memory representations of a header and a bunch of packets, then your copy/move ops are close to correct. There are some things that need fixing, but you have the general idea right.
If you are storing actual objects… then no, all of your copy/move ops are just tragically wrong; not even close to correct. What you would have to do in this case is manually handle the copying/moving of every object in your payload. For example, to copy a message_combiner, you would first have to allocate the same amount of memory, and then manually copy the header, and then manually copy all the packets. And in your destructor, you would have to manually destroy the header, and then manually destroy all the packets, and then free the memory. All of that is a lot of work.
Generally, the copy/move ops should be done like this:
class whatever
{
public:
// ... other stuff in the class ...
whatever(whatever const& other)
{
// properly copy-construct from other
}
whatever(whatever&& other) noexcept : whatever{}
{
std::ranges::swap(*this, other);
}
auto operator=(whatever const& other) -> whatever&
{
auto temp = other;
std::ranges::swap(*this, temp);
return *this;
}
auto operator=(whatever&& other) noexcept -> whatever&
{
std::ranges::swap(*this, other);
return *this;
}
friend auto swap(whatever& a, whatever& b) noexcept
{
// for each data member:
std::ranges::swap(a./*...*/, b./*...*/);
}
};
You just need to write:
a proper (cheap, no-fail) default constructor (or properly initialize to some default state in the move constructor before swapping)
a proper copy constructor; and
a proper swap function (usually just a bunch of swaps of the data members)
That’s the general pattern. The only places the details really change are in the copy constructor. Everything else is more-or-less boilerplate.
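As a self-contained illustration of that pattern, here is a toy class invented for the example (the names and the int-buffer resource are not from the review): one real copy constructor, one real swap, and everything else written in terms of those.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>

// Toy owning class: copy ctor does the real work; swap is a member-wise
// no-fail swap; the remaining special members are boilerplate around them.
class buffer
{
public:
    buffer() noexcept = default;
    explicit buffer(std::size_t n) : size_{n}, data_{new int[n]{}} {}

    buffer(buffer const& other)
        : size_{other.size_}, data_{new int[other.size_]}
    {
        std::copy_n(other.data_, size_, data_);
    }

    // default-construct, then steal other's guts via swap
    buffer(buffer&& other) noexcept : buffer{} { swap(*this, other); }

    buffer& operator=(buffer const& other)
    {
        auto temp = other;   // copy...
        swap(*this, temp);   // ...and swap; the old guts die with temp
        return *this;
    }

    buffer& operator=(buffer&& other) noexcept
    {
        swap(*this, other);
        return *this;
    }

    ~buffer() { delete[] data_; }

    friend void swap(buffer& a, buffer& b) noexcept
    {
        std::swap(a.size_, b.size_);
        std::swap(a.data_, b.data_);
    }

    std::size_t size() const noexcept { return size_; }
    int* data() noexcept { return data_; }

private:
    std::size_t size_{0};
    int* data_{nullptr};
};
```

If the copy in operator= throws, *this is untouched; and because move operations only swap, they can honestly be noexcept.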
If your ultimate goal is to send messages across the wire, then you are barking up the completely wrong tree. You need to look at proper serialization of types, which is not just memcpy()ing the memory representations.
I guess the bottom line is this: If you really want to learn how to write proper copy/move operations, you should first start with something that you understand a little better, and write good copy/move ops for that. (And that is not necessarily easy! Writing good copy/move ops can be hard.) Trying to learn how to write good copy/move ops for such a muddled, incoherent idea as this… you’re not helping yourself. Master one thing at a time; start with simple types, and learn how to write copy/move ops for them… and then consider moving on to more complex types, like this… whatever this “message_combiner” is actually supposed to be.
Questions
Usage of std::unique_ptr<std::byte[]>
std::unique_ptr<std::byte> would be a pointer to a single byte:
auto p = std::unique_ptr<std::byte>{new std::byte{}};
// or
auto p = std::unique_ptr{new std::byte{}};
// or
auto p = std::make_unique<std::byte>();
// or, if you want default initialization:
auto p = std::unique_ptr<std::byte>{new std::byte};
// or
auto p = std::unique_ptr{new std::byte};
// or
auto p = std::make_unique_for_overwrite<std::byte>();
And the usage would be pretty much the same as for any pointer:
// get the value
auto val = *p;
// set the value
*p = std::byte(42);
// and so on
if (p != nullptr) ...
std::unique_ptr<std::byte[]> would be a pointer to an array of bytes:
// allocates an array of 100 value-initialized bytes
auto p_bytes = std::unique_ptr<std::byte[]>{new std::byte[100]{}};
// or
auto p_bytes = std::make_unique<std::byte[]>(100);
// or, if you want default initialization:
auto p_bytes = std::unique_ptr<std::byte[]>{new std::byte[100]};
// or
auto p_bytes = std::make_unique_for_overwrite<std::byte[]>(100);
And the usage would be pretty much the same as for any pointer-to-array:
// get the 33rd element's value
auto val = p_bytes[33];
// set the 33rd element's value
p_bytes[33] = std::byte(42);
// get the pointer to the start of the array
auto p_begin = p_bytes.get();
auto p_end = p_begin + 100;
Let’s assume payload is defined like this:
std::unique_ptr<std::byte[]> payload = nullptr;
Then your constructor might look like:
message_combiner(std::span<packet_t> data, header_t const& hdr = default_header())
{
// this could be a private class constant
constexpr auto packet_offset = /* calculate offset to first packet somehow */;
// determine the size
if (data.empty())
size = sizeof(header_t);
else
size = packet_offset + (data.size() * sizeof(packet_t));
// allocate (note alignment is handled)
payload.reset(new (std::align_val_t(alignof(header_t))) std::byte[size]);
// if there are any errors after this, no problem, unique_ptr will
// automatically free the memory
// construct the header
std::construct_at(reinterpret_cast<header_t*>(payload.get()), hdr);
try
{
// construct the packets
std::ranges::uninitialized_copy(data,
std::span{reinterpret_cast<packet_t*>(payload.get() + packet_offset), data.size()});
}
catch (...)
{
// you have to manually destroy the header
std::destroy_at(reinterpret_cast<header_t*>(payload.get()));
throw;
}
}
Which, as you can see, aside from the .get()s, is no different from when payload is a naked std::byte*… except that there’s one less try-catch level, because unique_ptr will automatically clean itself up. (Unfortunately, without something like scope_fail, you can’t avoid the try-catch block to clean up the header.)
What does a proper copy constructor look like?
The answer depends on your class.
The general form of the move constructor, move assignment, and copy assignment don’t change from class to class. The default implementations will always be right (though may not be the most efficient, in some rare scenarios, like if you’re expecting a lot of self-assignments… or, of course, if they could have been left default-generated).
The copy constructor, however, is very specific for each class. If it can’t be default-generated, then it will usually be the hardest part of all the fundamental operations to write.
For example, in your case, what you’d have to do is:
allocate the memory (which will be the same size as other’s memory allocation)
std::construct_at() the header as a copy of other’s header; then
std::uninitialized_copy() the packets.
This is basically the same as the regular constructor. Indeed, you could write the copy constructor as:
// assuming you have the following member functions:
// * header(), which returns a const& to the header in the payload
// * packets(), which returns a span<packet_t> view of the packets, if any
message_combiner(message_combiner const& other)
: message_combiner{other.packets(), other.header()}
{}
This happens to work well for this class, but for other classes the same pattern will sometimes be very inefficient (or really silly to have a constructor that makes it possible, because it would be exposing internal stuff).
Won’t *this = other; get optimized to *this = std::move(other);?
No. x = y will never get optimized to x = std::move(y).
To see why this would be a terrible idea, imagine if your move constructor wanted to do something with other after the assignment:
message_combiner(message_combiner&& other) noexcept
: message_combiner{}
{
*this = other;
do_something(other); // oops, other was silently moved away from
}
Compare that to this:
message_combiner(message_combiner&& other) noexcept
: message_combiner{}
{
*this = std::move(other);
do_something(other); // other was moved away from... but you can
// clearly see that, so this mistake is easy to
// spot
}
You could argue that the compiler can “see” whether other will be used after the assignment, and if not, then it’s safe to move. But that would be sketchy at least, because moving and copying may do very different things (I mean, you would be foolish if you made a class that did that… but people do a lot of foolish things in C++, and the language and compiler need to account for that). To have exactly the same code do different things depending on stuff that happens elsewhere would be daft (yes, people write code that does that sometimes… but it’s not a good idea).
You could also think of it like this: moving an object is basically destroying it, so it should never happen invisibly. It should always be crystal clear when you’re ripping the guts out of an object, like with an explicit move(), or when it’s being destroyed anyway, like when you’re returning an object (which can never be used after, so really is safe to always move and never copy).
To understand what’s happening, remember that the easiest way to distinguish between an lvalue and an rvalue is that if you can take the address of something, it’s an lvalue; if not, it’s an rvalue.
Can you take the address of other? Of course:
message_combiner(message_combiner&& other) noexcept
: message_combiner{}
{
if (this != &other) // silly because it will never be false... but you can do it, so it illustrates the point
*this = other;
}
Since you can take the address of other, other is an lvalue. So *this = other is an lvalue assignment… that is, a copy assignment.
(Another trick some people use to distinguish lvalues and rvalues is: if it has a name, it’s an lvalue… otherwise it’s an rvalue. other has a name—that name is “other”—so it’s an lvalue.)
If you want a move assignment, you need to cast other to an rvalue, which is what std::move() does. | {
"domain": "codereview.stackexchange",
"id": 42009,
"tags": "c++, reinventing-the-wheel, memory-management, c++20"
} |
How to deal with zero uncertainties? | Question: Suppose you measure quantity $x$ with an uncertainty ${\rm d}x$. Quantity $f$ is related to $x$ by $f=x^2$. By error propagation the uncertainty on $f$ would be ${\rm d}f=2x{\rm d}x$. If at a certain point $x$ equals zero, then the uncertainty on $f$ would be zero, even if $x$ carries an uncertainty. Is there a special procedure in these cases?
Answer: Use the second derivative (or third, or whatever). The reason we use that formula is that
$$
df \approx \frac{df}{dx} dx
$$
is the first order Taylor approximation to df. If the first order term vanishes, you should include higher terms:
$$
df \approx \frac{df}{dx} dx+\frac{1}{2}\frac{d^2f}{dx^2} dx^2+...
$$
In your case, with $f=x^2$, and $x=0$, we'd have
$$
df \approx dx^2
$$ | {
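A quick numeric check of this (illustrative code, not part of the original answer): the exact change of $f=x^2$ over a step $dx$ is $2x\,dx + dx^2$, so at $x=0$ only the second-order term $dx^2$ survives.

```cpp
#include <cmath>

// Exact change of f = x^2 over a step dx; algebraically 2*x*dx + dx*dx.
// At x = 0 the first-order term vanishes, leaving dx*dx, as the
// second-order propagation formula predicts.
double delta_f(double x, double dx)
{
    return (x + dx) * (x + dx) - x * x;
}
```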
"domain": "physics.stackexchange",
"id": 31027,
"tags": "error-analysis, data"
} |
Huge variations in epoch count for highest generalized accuracy in CNN | Question: I have written my own basic convolutional neural network in Java as a learning exercise. I am using it to analyze the MIT CBCL face database image set. They are a set of 19x19 pixel greyscale images.
Network specifications are:
Single Convolution Layer with 1 filter:
Filter Size: 4x4.
Stride Size: 1
Single Pooling Layer
2x2 Max Pooling
3-layer MLP (input, 1 hidden, and output)
input = 64 neurons
hidden = 15 neurons
output = 2 neurons
learning rate = 0.1
Now I am getting reasonable accuracy (92.85%), but my issue is that it is being achieved at very different points in the epoch count across network runs:

        Epochs   Training Accuracy   Test Accuracy   Validation Accuracy
Run 1    415          93.13             92.44              93.35
Run 2    515          92.44             93.18              92.84
Run 3    327          93.83             92.05              92.38
I am using the Java random class with the same seed for every run to initialize the kernel, the MLP weights and break the input data into 3 sets.(training is being done using the 33-33-33 method)
I am at a loss as to what is causing this variation in epoch count to achieve the highest point in validation accuracy. Can anybody explain this?
Answer: Fixed. Was an issue with the random generator. In my class for the Neuron layer where I initialize the weights I get new doubles from the generator for each of the initial weight values, but I found a bug where I was re-initializing the random generator, which was of course causing different values. | {
"domain": "ai.stackexchange",
"id": 762,
"tags": "convolutional-neural-networks, java"
} |
How well can we measure how fast we are spinning? | Question: Although absolute translational motion is meaningless and unmeasurable (Michelson and Morley, etc), absolute rotational motion is meaningful (Newton's bucket) and measurable, using Foucault's pendulum. In 1851 Foucault showed this could be used to measure the Earth's rotation, $1 \over 4$ degree per minute.
What can we achieve using the improvements of modern technology? After 170 years of development, could we build an experiment that, without any external observations, measured the rotation of the earth round the sun? And the rotation of the sun round the galaxy? And even (this is stretching it, but interesting) whether the whole universe has some rotation?
Answer: Direct, instantaneous monitoring of the Earth rotation rate is possible with a large ring laser interferometer, secured to bedrock.
In Germany there is a facility 'Fundamentalstation Wettzell'. This facility operates a setup called 'Ring Laser G'.
Ring laser G is a ring laser setup with the mirrors at the corners of a 4x4 meter square.
In a ring laser setup clockwise propagating light keeps propagating clockwise, counter-clockwise propagating light keeps propagating counter-clockwise, that is how the mirrors are set up.
If there were absolutely perfect reflection, the clockwise and counter-clockwise propagating light would never interact. However, while the mirrors are 99.9999 percent efficient, the remaining backscatter tends to keep the two counter-propagating beams of light locked to the same frequency.
When the ring laser is sufficiently large even the slow Earth rotation rate is enough to unlock the two beams.
The ring laser is anchored to bedrock. Due to the rotation of the Earth a frequency difference arises. (During the time that the light goes around the source moves, so the clockwise and counter-clockwise beams don't travel the same length.) The magnitude of the frequency difference is measured by allowing some of the light to exit and then obtaining interference between the two beams. The resulting interference pattern is a beat frequency. In the case of the Ring Laser G this beat frequency is around 348.6 Hz.
Fundamental to the operation of a ring laser is that establishing the point of zero rotation does not require calibration. When a ring laser is not rotating there is no frequency shift, hence no beat frequency.
(Of course there are practical difficulties, such as the already mentioned tendency of the beams to remain locked.)
Given the dimensions of a setup the expected beat frequency can be calculated in advance. So even in the absence of any other data the Earth rotation rate can be inferred from the magnitude of the observed beat frequency.
The operating principle of ring laser interferometry (and other forms of ring interferometry) is the Sagnac effect.
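As a rough illustration of that advance calculation (the following values are assumptions, not from the answer: a horizontally mounted ring, HeNe wavelength 633 nm, Earth rotation rate 7.292e-5 rad/s, and Wettzell's latitude of about 49.14 degrees), the Sagnac beat frequency of a ring of area A and perimeter P is delta_f = 4 A Omega sin(latitude) / (lambda P):

```cpp
#include <cmath>

// Sagnac beat-frequency estimate for a ring laser of area A and perimeter P
// operating at wavelength lambda; sin(latitude) projects the Earth's
// rotation onto the normal of a horizontally mounted ring.
double sagnac_beat_hz(double area_m2, double perimeter_m, double lambda_m,
                      double omega_rad_s, double latitude_deg)
{
    const double pi = 3.14159265358979323846;
    const double sin_lat = std::sin(latitude_deg * pi / 180.0);
    return 4.0 * area_m2 * omega_rad_s * sin_lat / (lambda_m * perimeter_m);
}
```

With the assumed numbers for the 4x4 m square (A = 16 m², P = 16 m) this comes out close to the quoted ~348.6 Hz.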
With a ring laser interferometer you observe whether you are rotating with respect to inertial space.
A ring laser gyro device is the optical counterpart of a gyroscope. As we know: a spinning gyroscope, when perfectly undisturbed, remains in the same orientation with respect to inertial space.
The web page lists among facility's goals:
Detection of short-term spin fluctuations with a resolution of $10^{-9}$
Detection of short-term polar motions with a resolution of 0.2 mas or 6 mm
Near real time acquisition with a temporal resolution of 1 hour or less
The particular page with that information was last updated in 2005. I cannot find whether that setup is still running, or whether it has been shut down.
The physics department of the University of Canterbury, New Zealand, was leading in the development of ring lasers for Earth monitoring. Their facility (including a ring laser setup far larger than the one at Wettzell) was located in caverns near Christchurch. As far as I know the Canterbury earthquake has shut down those activities.
"domain": "physics.stackexchange",
"id": 72923,
"tags": "newtonian-mechanics, inertial-frames, angular-velocity, solar-system, machs-principle"
} |
Minkowski's equation of motion | Question: I'm trying to prove $f^{\mu}U_{\mu}=0$ for four-force $f^{\mu}=c\frac{dP^{\mu}}{ds}$ and four-velocity $U_{\mu}$. I start by using the chain rule, $f^{\mu}=c\frac{dP^{\mu}}{dt}\frac{dt}{ds}=\gamma\frac{dP^{\mu}}{dt}$ since $\frac{dt}{ds}=\frac{\gamma}{c}$. Since four momentum $P^{\mu}=(E/c, \vec{p})$, for energy $E$ and 3-momentum $\vec{p}$. By differentiating with respect to time I find $f^{\mu}=\gamma(0,\vec{F})$ for 3-force $\vec{F}$ in a particular frame. Using the fact that $P^{\mu}=mcU^{\mu}$ I then find $f^{\mu}U_{\mu}=\frac{\gamma}{mc}(0\cdot\frac{E}{c}-\vec{F}\cdot\vec{p})$ which doesn't (necessarily) give zero. Any idea where I've gone wrong here?
Answer:
I'm trying to prove $f^{\mu}U_{\mu}=0$ for four-force $f^{\mu}=c\frac{dP^{\mu}}{ds}$ and four-velocity $U_{\mu}$. I start by using the chain rule, $f^{\mu}=c\frac{dP^{\mu}}{dt}\frac{dt}{ds}=\gamma\frac{dP^{\mu}}{dt}$ since $\frac{dt}{ds}=\frac{\gamma}{c}$. Since four momentum $P^{\mu}=(E/c, \vec{p})$, for energy $E$ and 3-momentum $\vec{p}$. By differentiating with respect to time I find $f^{\mu}=\gamma(0,\vec{F})$ for 3-force $\vec{F}$ in a particular frame. Using the fact that $P^{\mu}=mcU^{\mu}$ I then find $f^{\mu}U_{\mu}=\frac{\gamma}{mc}(0\cdot\frac{E}{c}-\vec{F}\cdot\vec{p})$ which doesn't (necessarily) give zero. Any idea where I've gone wrong here?
With
$$
P^\mu = (E/c, \vec p)\;,
$$
you have to also differentiate the energy E with respect to time. (E depends on p so if p changes E changes). You find:
$$
\frac{dP^\mu}{dt} = (\frac{1}{c}\frac{\partial E}{\partial \vec p}\cdot \dot{\vec p}, \dot{\vec p})\;.
$$
But, by the very definition from Hamiltonian's equations of motion, we also have
$$
\vec v = \frac{\partial E}{\partial \vec p}\;.
$$
I use the definition:
$$
U^\mu = \gamma(c, \vec v)
$$
Thus (lowering the index with the $(+,-,-,-)$ metric):
$$
f^\mu U_\mu \propto \left(\frac{1}{c}\vec v \cdot \dot{\vec p}\right) c - \dot{\vec p}\cdot \vec v
$$
$$
= \vec v \cdot \dot{\vec p} - \dot{\vec p}\cdot \vec v = 0
$$
"domain": "physics.stackexchange",
"id": 88244,
"tags": "special-relativity, forces, classical-mechanics, vectors"
} |
Can Shor's code correct two- or three-qubit errors? | Question: I have read some articles about Shor's code (e.g. this one). It is said that Shor's code can correct a single-qubit error. What about two-qubit errors? Three-qubit errors? It confused me a lot...
Answer: We can combine weaker codes to obtain stronger codes using concatenation (see this paper or chapter X B in this paper).
Suppose we have a quantum error correcting code that encodes a logical qubit into $n$ physical qubits. Let $d$ denote the code distance, i.e. the smallest number of qubits we must act on with a local Pauli operator in order to induce a non-trivial logical transformation on the code subspace. Similarly to the case of classical codes, if an error affects no more than $t=\big\lfloor\frac{d-1}{2}\big\rfloor$ qubits the decoder can diagnose the error correctly.
Now, we can use the code to encode $n$ logical qubits in $n^2$ physical qubits and then we can add a second level of encoding to encode a second-level logical qubit into the $n$ first-level logical qubits. Then the smallest number of qubits we must act on with a local Pauli operator in order to induce a non-trivial logical transformation on the code subspace is $d'=d^2$. Consequently, the two-level concatenated code can correct any
$$
t'=\bigg\lfloor\frac{d'-1}{2}\bigg\rfloor=\bigg\lfloor\frac{d^2-1}{2}\bigg\rfloor
$$
physical errors and if $d>1$ then $t'>t$. For example, the two level Shor's code can correct any four physical errors.
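To make the numbers concrete, here is a short Python sketch of these formulas (function names are mine; `levels` counts concatenation levels, and the distance of the concatenated code multiplies at each level):

```python
def correctable(d):
    # a distance-d code corrects t = floor((d - 1) / 2) arbitrary errors
    return (d - 1) // 2

def concat_distance(d, levels):
    # distance after concatenating a distance-d code with itself `levels` times
    return d ** levels
```

For the distance-3 Shor code this gives $t=1$ at one level and $t'=4$ at two levels, matching the numbers above.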
Concatenation can be continued to any number of levels and if the physical error rate is low enough it allows us to bring the logical error rate below any desired target value. This last result is known as the threshold theorem. | {
"domain": "quantumcomputing.stackexchange",
"id": 5317,
"tags": "error-correction"
} |
Atoms in motion - It's known that heat is related only with the movement | Question: It's known that heat is related only to the movement of particles inside the body. What is the difference between a hot, stationary baseball and a ball moving quickly?
Answer: Basically, the difference is in the direction of motion of the atoms. The atoms in a hot, stationary ball move around randomly (the kinetic energy of the atoms is in their vibrational and rotational motion), whereas all the atoms in a moving ball move in the same direction.
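A toy 1-D sketch of this split (my own illustration, not from the answer): decompose the total kinetic energy into a bulk part, from the centre-of-mass velocity, and a thermal part, from motion relative to the centre of mass.

```python
def bulk_and_thermal_ke(velocities, m=1.0):
    """Split total kinetic energy into bulk + thermal parts (1-D, equal masses)."""
    n = len(velocities)
    v_cm = sum(velocities) / n                       # centre-of-mass velocity
    bulk = 0.5 * (n * m) * v_cm ** 2                 # "moving ball" part
    thermal = sum(0.5 * m * (v - v_cm) ** 2 for v in velocities)  # "hot ball" part
    return bulk, thermal
```

Atoms moving oppositely (`[1.0, -1.0]`) carry purely thermal energy; atoms moving together (`[2.0, 2.0]`) carry purely bulk energy.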
"domain": "physics.stackexchange",
"id": 27568,
"tags": "homework-and-exercises"
} |
How to evaluate AMCL localization performance | Question:
Hi all,
I'm using amcl for localization in a mobile robot. Now succesfully I have it working on the robot. My idea is to use it with a laser scan and I will be modifying the readings so the information passed to the localization algorithm is somehow filtered.
What I want to do now is to evaluate the performance of one filter vs. the original data. Is there any information that amcl outputs that I can use for this? I think I can use the spread of the particles, but from watching the execution this doesn't always reflect the accuracy of the localization, as sometimes the variance is small but the robot is not in its right orientation.
Another way to ask the question would be: how can I compare odometry vs. laser-based localization using the amcl algorithm?
Thanks for your thoughts and ideas,
Ibraim
Originally posted by Ibraim on ROS Answers with karma: 101 on 2012-08-28
Post score: 1
Answer:
Have you considered using stage simulation? That way you have the ground truth pose (exactly where the robot is in the simulated world) which you can compare against the amcl pose.
For example, you can compare the euclidean distance between the two poses after moving the robot on a fixed path. Run this for both laser inputs, and whichever has the lowest (average!) distance is presumably the better one.
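That comparison is easy to script once the estimated and ground-truth poses are time-aligned; a rough Python sketch (the function name and the 2-D poses are my own simplification):

```python
import math

def mean_position_error(estimated, ground_truth):
    """Average Euclidean distance between matched (x, y) pose pairs."""
    dists = [math.hypot(ex - gx, ey - gy)
             for (ex, ey), (gx, gy) in zip(estimated, ground_truth)]
    return sum(dists) / len(dists)
```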
Originally posted by HammyG with karma: 51 on 2013-02-08
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 10796,
"tags": "ros, localization, navigation, performance, amcl"
} |
Is a URDF useful on a real robot? | Question:
I am building a robot and want to ask if a URDF file is important when it comes to navigation and mapping in the real world.
Originally posted by offgrid8 on ROS Answers with karma: 13 on 2021-12-19
Post score: 0
Original comments
Comment by muratkoc503 on 2021-12-26:
I think this is useful. For example, the footprint is part of the common costmap parameters, and it relates to the robot's shape, which matters for inflation in the costmap. This also applies if you use Gazebo and RViz.
Answer:
In some cases it would be required: lidar SLAM packages like cartographer, and Kalman-filter packages like robot_localization.
If your robot will not use any existing packages that requires URDF file, you may skip it.
Originally posted by Mingjie with karma: 26 on 2023-01-18
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 37272,
"tags": "navigation, urdf, move-base"
} |
Is pooling-aware bin packing NP-Hard? | Question: I am unable to determine whether the following problem is NP-hard. It seems like a bin-packing or a partition problem, without being close enough to either of them (at least I do not see a reduction to either).
Pooling-aware bin packing
Consider 2 sets of non-negative numbers
$$a=\{a_1,a_2,...,a_n\}\\b=\{b_1,b_2,...,b_n\}.$$ What is the size of the smallest
partition $P$ for the values $1$ to $n$ such that for every subset
$S=\{ (a_i,b_i), (a_j,b_j),\ldots\}$ in the partition $$\max_{i\in S}a_i+\max_{j\in S}b_j\le1,\qquad \forall S\in P?$$ (I inherently
assume feasibility, i.e., $ a_i+b_i\le1, i=1,...,n$)
Simple instance:
$$a=[0.3,0.5,0.4,0.9,0.7]\\ b=[0.6,0.3,0.6,0.1,0.2]$$
Solution: we need 3 bins
$[(0.9,0.1)]$
$[(0.7,0.2),(0.5,0.3)]$
$[(0.3,0.6),(0.4,0.6)]$
Note that maybe the most similar problem is the one in
Michael Sindelar, Ramesh K. Sitaraman, Prashant J. Shenoy: Sharing-aware algorithms for virtual machine colocation. SPAA 2011: 367-3 and discussed in
bin packing with overlapping objects.
Thoughts / similar problems / pointers?
P.S.: I want to apologize in advance if there is some issue with my question that I am unaware of; I am new here :)
Answer: This problem is polynomially solvable.
Claim: solving the problem on input $a = (a_1, \ldots, a_n)$ and $b = (b_1, \ldots, b_n)$ using a partition $P$ is equivalent to choosing a multiset of numbers $X$ with $|X| = |P|$ such that for $i = 1, \ldots, n$, there exists an $x \in X$ with $a_i \le x$ and $b_i \le 1-x$.
Assuming this claim, solving your problem is equivalent to finding the smallest set $X$ such that for each $i = 1, \ldots, n$, there exists an $x \in X$ with $a_i \le x$ and $b_i \le 1-x$. For any fixed $i$, this condition can be rewritten as $a_i \le x \le 1-b_i$. Thus, the problem is equivalent to finding the smallest set $X$ which includes at least one point from each interval $[a_i, 1-b_i]$.
It is easy to show that the following algorithm finds an optimal solution:
Algorithm: Initialize $X$ to be the empty set. Scan from 0 to 1. When leaving each interval during this scan, check whether the interval you are leaving already contains a point in $X$. If yes, continue without doing anything. If no, add the current value of the scan-line (i.e. the end of the interval) to $X$. Once you reach the end of the scan, output $X$.
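A minimal Python sketch of this scan (names are mine): sort the intervals $[a_i, 1-b_i]$ by right endpoint, and place a point at the right endpoint of every interval the points placed so far miss.

```python
def min_bins(a, b):
    """Fewest points hitting every interval [a_i, 1 - b_i] = fewest bins."""
    intervals = sorted(zip(a, b), key=lambda ab: 1 - ab[1])  # by right endpoint
    points = []
    for ai, bi in intervals:
        # right endpoints are non-decreasing, so the last placed point covers
        # this interval iff it lies at or after the left endpoint a_i
        if not points or points[-1] < ai:
            points.append(1 - bi)
    return len(points)
```

On the instance from the question this returns 3, matching the three bins listed there.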
Thus, all that's left is to prove the claim:
proof of claim:
First suppose we have a partition $P$ solving your problem. Then define $X = \{x_S~|~S \in P\}$ where $x_S = \max_{i \in S}a_i$.
If $i' \in \{1, \ldots, n\}$, then $i' \in S$ for some $S \in P$.
Since $i' \in S$, clearly we have that $a_{i'} \le \max_{i \in S}a_i$. The RHS, however, is the definition of $x_S$, so this shows $a_{i'} \le x_S$.
By the conditions on the partition, $\max_{i \in S}a_i + \max_{i \in S}b_i \le 1$, or in other words $\max_{i \in S}b_i \le 1 - \max_{i \in S}a_i = 1 - x_S$. Simply applying the fact that $i' \in S$, we have that $b_{i'} \le \max_{i \in S}b_i$. Putting this together, we see that $b_{i'} \le 1-x_S$.
Thus we have shown that for $i = 1, \ldots, n$, there exists an $x \in X$ with $a_i \le x$ and $b_i \le 1-x$.
Next suppose that we have a set $X$ such that for $i = 1, \ldots, n$, there exists an $x \in X$ with $a_i \le x$ and $b_i \le 1-x$.
Name the elements of $X$ as $x_1, x_2, \ldots, x_{|X|}$. Then let $S_j = \{i \in \{1,2,\ldots,n\}~|~a_i \le x_j ~\text{and}~ b_i \le 1-x_j\}$. By the property of $X$, every element of $\{1,2,\ldots,n\}$ is in at least one $S_j$. Then if we define $S_j' = \{i \in S_j~|~i \not\in S_{j'} ~\text{for}~j'<j\}$, we see that sets $S_1', \ldots, S_{|X|}'$ form a partition of $\{1,2,\ldots,n\}$.
We claim that this partition is a valid solution to your problem. Consider any part $S_j'$ in this partition. Since $S_j' \subseteq S_j$, we have that $\max_{i \in S_j'}a_i \le \max_{i \in S_j}a_i \le x_j$
and $\max_{i \in S_j'}b_i \le \max_{i \in S_j}b_i \le 1-x_j$,
and so we see that $\max_{i \in S_j'}a_i + \max_{i \in S_j'}b_i \le x_j + 1-x_j = 1$. This is sufficient to show that the partition is a valid solution to your problem. | {
"domain": "cstheory.stackexchange",
"id": 4051,
"tags": "cc.complexity-theory, np-hardness, partition-problem, packing"
} |
Product and Factor | Question: Overview
I was challenged recently to write some code that could find the smallest integer that when multiplied and divided by 2 or 3, retained all of its digits and gained no extras. For example:
With 2
285714
When multiplied: 571428.
When divided: 142857.
With 3
31046895
When multiplied: 93140685.
When divided: 10348965.
Code
You'll need the following using statements:
using System;
using System.Collections.Generic;
using static System.Console;
The Main method contents:
int variant = 2;
for (int i = 0; i < int.MaxValue; i++)
if (ProductAndFactor(i, variant)) {
WriteLine(i);
break;
}
WriteLine("Done");
ReadKey();
The backbone code:
static bool ProductAndFactor(int i, int v) {
Dictionary<char, int> oChars = GetValueChars(i);
Dictionary<char, int> dChars = GetValueChars(i / v);
Dictionary<char, int> mChars = GetValueChars(i * v);
if ($"{i}".Length != $"{i / v}".Length ||
$"{i}".Length != $"{i * v}".Length)
return false;
foreach (char c in oChars.Keys) {
if (!dChars.ContainsKey(c)) return false;
else if (dChars[c] != oChars[c]) return false;
else if (!mChars.ContainsKey(c)) return false;
else if (mChars[c] != oChars[c]) return false;
}
WriteLine($"{i} * {v} = {i * v}\n{i} / {v} = {i / v}");
return true;
}
static Dictionary<char, int> GetValueChars(int i) {
Dictionary<char, int> chars = new Dictionary<char, int>();
foreach (char c in i.ToString()) {
if (chars.ContainsKey(c))
chars[c]++;
else
chars.Add(c, 1);
}
return chars;
}
My Question
Is there a more simplistic way to accomplish this? I can't help but feel like there is and that using Dictionary<char, int> is probably an inefficient option in this task unless there is a smart way to reuse it.
Is there a more simplistic way to accomplish this?
Are there more efficient data types to utilize in this use-case?
What are they?
Why are they more efficient?
Is there a smarter way to reach the answer faster?
Also, if I used the wrong tags, or more tags are needed, please edit and add them.
Benchmarks
For v = 2 the discovery time was 4 seconds. For v = 3 the discovery time was 112 seconds.
Answer: The trick is to search for j = i / v instead of j = i:
that means to search for i and i * v and i * v * v:
static bool ProductAndFactor(int i, int v)
{
if ($"{i}".Length != $"{i * v}".Length ||
$"{i}".Length != $"{i * v * v}".Length)
return false;
Dictionary<char, int> oChars = GetValueChars(i);
Dictionary<char, int> dChars = GetValueChars(i * v);
Dictionary<char, int> mChars = GetValueChars(i * v * v);
foreach (char c in oChars.Keys)
{
if (!dChars.ContainsKey(c)) return false;
else if (dChars[c] != oChars[c]) return false;
else if (!mChars.ContainsKey(c)) return false;
else if (mChars[c] != oChars[c]) return false;
}
return true;
}
or in other words you search for the smallest value to find rather than the middle value.
Notice that I've moved the length tests above the Dictionary stuff, because there is no need for that if the lengths differ.
int variant = 3;
for (int i = 1; i < int.MaxValue; i++)
{
if (ProductAndFactor(i, variant))
{
Console.WriteLine($"{i} => {i * variant} => {i * variant * variant}");
break;
}
}
Another version could be:
static bool ProductAndFactor(int i, int v)
{
int iv = i * v;
int ivv = i * v * v;
int[] ai = new int[10];
int[] aiv = new int[10];
int[] aivv = new int[10];
while (i > 0)
{
ai[i % 10]++;
aiv[iv % 10]++;
aivv[ivv % 10]++;
i /= 10;
iv /= 10;
ivv /= 10;
}
for (int j = 0; j < 10; j++)
{
if (ai[j] != aiv[j] || ai[j] != aivv[j])
return false;
}
return iv == 0 && ivv == 0;
} | {
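The same digit-count check is compact in other languages too; here is a rough Python equivalent of the smallest-value trick (a cross-check, not a review suggestion), where `j` plays the role of `i / v`:

```python
from collections import Counter

def product_and_factor(j, v):
    """True if j, j*v and j*v*v all use exactly the same multiset of digits."""
    s, sv, svv = str(j), str(j * v), str(j * v * v)
    # equal Counters imply equal digit counts, so equal lengths as well
    return Counter(s) == Counter(sv) == Counter(svv)
```

For example, `product_and_factor(142857, 2)` holds, corresponding to `i = 285714` in the question's notation.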
"domain": "codereview.stackexchange",
"id": 32303,
"tags": "c#, performance"
} |
Refining an ASP.Net MVC ViewModel for a table to display worked hours | Question: I am writing an ASP.Net MVC app which has a page that will display a standard table containing a person's hours for the week. The basic structure would be a 7-column, multi-row table. The header of each column would display the day of the week with the total hours worked that week. There would then be rows for the time of day, starting at 8 AM and going to 5:30 PM in 15-minute increments. In the table, each day would list which projects were worked on that day, with an indication of when work on them started and stopped (please see the image below for a visual aid).
I have built a ViewModel to hold the data for this table and pass it up, but I am not sure whether I built it in a manner that makes it efficient to work through and have its data displayed, so I would love some feedback on ways I might improve it, or even alternate ways of building such a ViewModel.
If this is off topic here please let me know where is on topic and I will delete and post there.
My ViewModel Code:
{
public class TimesheetHoursTableVM
{
public int TimesheetHeaderID { get; set; }
public DateTime WeekEndingDate { get; set; }
public decimal TotalWeekHours { get; set; }
public decimal SundayHours { get; set; }
public decimal MondayHours { get; set; }
public decimal TuesdayHours { get; set; }
public decimal WednesdayHours { get; set; }
public decimal ThursdayHours { get; set; }
public decimal FridayHours { get; set; }
public decimal SaturdayHours { get; set; }
public IEnumerable<TimesheetDailyHoursVM> SundayTimesheet { get; set; }
public IEnumerable<TimesheetDailyHoursVM> MondayTimesheet { get; set; }
public IEnumerable<TimesheetDailyHoursVM> TuesdayTimesheet { get; set; }
public IEnumerable<TimesheetDailyHoursVM> WednesdayTimesheet { get; set; }
public IEnumerable<TimesheetDailyHoursVM> ThursdayTimesheet { get; set; }
public IEnumerable<TimesheetDailyHoursVM> FridayTimesheet { get; set; }
public IEnumerable<TimesheetDailyHoursVM> SaturdayTimesheet { get; set; }
}
public class TimesheetDailyHoursVM
{
public int TimesheetID { get; set; }
public DateTime StartDateTime { get; set; }
public DateTime EndDateTime { get; set; }
public string ProjectCode { get; set; }
public string TaskCode { get; set; }
public string ProjectDescription { get; set; }
public string TaskDescription { get; set; }
}
}
Table the data goes into:
Answer: I don't think there is anything necessarily wrong with your approach, especially if it works. However, I personally might consider trying to leverage the DayOfWeek enumeration of C# and avoid the individual day timesheets.
This way you could loop over the days of the week more easily, as well as potentially leverage LINQ to do things such as calculating the TotalHours as part of the ViewModel.
My viewModel for this case might then look like this:
public class TimesheetWeeklyTableVM
{
public int TimesheetHeaderID { get; set; }
public DateTime WeekEndingDate { get; set; }
public decimal TotalWeekHours { get; set; }
public List<TimesheetDailyVM> DaysOfWeek { get; set; }
public TimesheetWeeklyTableVM()
{
DaysOfWeek = new List<TimesheetDailyVM>();
foreach (DayOfWeek dayOfWeek in Enum.GetValues(typeof(DayOfWeek)))
{
DaysOfWeek.Add(new TimesheetDailyVM()
{
DayOfWeek = dayOfWeek
});
}
}
}
public class TimesheetDailyVM
{
public DayOfWeek DayOfWeek { get; set; }
public double TotalHours
{
get { return TimeSpan.FromSeconds(Tasks.Sum(p => (p.EndDateTime - p.StartDateTime).TotalSeconds)).TotalHours; }
}
public IEnumerable<TimesheetDailyHoursVM> Tasks { get; set; }
public TimesheetDailyVM()
{
Tasks = new List<TimesheetDailyHoursVM>();
}
}
public class TimesheetDailyHoursVM
{
public int TimesheetID { get; set; }
public DateTime StartDateTime { get; set; }
public DateTime EndDateTime { get; set; }
public string ProjectCode { get; set; }
public string TaskCode { get; set; }
public string ProjectDescription { get; set; }
public string TaskDescription { get; set; }
}
Note, I'm not sure if TotalHours is calculated differently but if it's just a sum of the Task times then you could include that logic into the Viewmodel itself as in.
public double TotalHours
{
get { return TimeSpan.FromSeconds(Tasks.Sum(p => (p.EndDateTime - p.StartDateTime).TotalSeconds)).TotalHours; }
}
If you wanted to add properties for each individual day to make it easier to access say for example Monday then you could easily do this such as.
public TimesheetDailyVM Monday { get { return Day(DayOfWeek.Monday); } }
public TimesheetDailyVM Tuesday { get { return Day(DayOfWeek.Tuesday); } }
private TimesheetDailyVM Day(DayOfWeek day)
{
return DaysOfWeek.Single(p => p.DayOfWeek == day);
}
"domain": "codereview.stackexchange",
"id": 19826,
"tags": "c#, asp.net-mvc-5"
} |
Burning fuel in space while accelerating and decelerating | Question: Is it true that accelerating in space does not require the same quantity of fuel as the deceleration stage?
Answer: Generally true, because during the acceleration phase the fuel that will later be burned is accelerated along with the ship, whereas upon deceleration the overall mass of the ship will be smaller, requiring less fuel to slow it.
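This is quantified by the Tsiolkovsky rocket equation, $\Delta v = v_e \ln(m_0/m_1)$: a given $\Delta v$ fixes the ratio of the masses before and after a burn, so the burn made while the ship is heavier consumes more propellant. A small Python sketch with illustrative numbers:

```python
import math

def fuel_burned(wet_mass, dv, ve):
    """Propellant consumed to change speed by dv at exhaust velocity ve."""
    dry_mass = wet_mass / math.exp(dv / ve)  # Tsiolkovsky rocket equation
    return wet_mass - dry_mass

ship = 1000.0            # mass before the first burn (kg)
dv, ve = 3000.0, 4500.0  # equal speed-up / slow-down (m/s); exhaust velocity (m/s)

accel_fuel = fuel_burned(ship, dv, ve)               # burn at full mass
decel_fuel = fuel_burned(ship - accel_fuel, dv, ve)  # burn at reduced mass
```

Whatever numbers are chosen, the acceleration burn always consumes more fuel than the equal-$\Delta v$ deceleration burn.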
"domain": "physics.stackexchange",
"id": 17670,
"tags": "acceleration, rocket-science, space-travel"
} |
Can the thermal state be associated with a single pure state? | Question: I'm trying to understand better the quantum thermal state defined by
\begin{equation}
\rho_{0}=\sum_{n_{\mu}}\frac{e^{-\hbar\omega_{\mu}}\left|n_{\mu}\right\rangle \left\langle n_{\mu}\right|}{\sum_{n_{\mu}}e^{-\hbar\omega_{\mu}}}
\end{equation}
More specifically, I'm interested whether or not we could associated to the above density matrix a state ket defined through by $\rho_{0} =\left|\psi_{0}\right\rangle \left\langle \psi_{0}\right|$ with perhaps
\begin{equation}
\left|\psi_{0}\right\rangle =\sum_{n_{\mu}}\frac{e^{-\frac{\hbar\omega_{\mu}}{2}}}{\sqrt{\sum_{n_{\mu}}e^{-\hbar\omega_{\mu}}}}\left|n_{\mu}\right\rangle
\end{equation}
I believe this is not the correct answer since if I use this formula it will give rise to terms like $\left|n_{\mu}\right\rangle \left\langle n_{\mu}+l\right|$. Any thoughts on that?
Thanks
Answer: No. The state $\rho_0$ is not a pure state, i.e. it cannot be written in the form $\rho_0=|\psi_0\rangle\langle\psi_0|$.
This can be seen by noting that $\mathrm{trace}(\rho_0^2)<1$, while for a pure state, the trace would have to be $1$.
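A toy check of this purity criterion: for a state diagonal in the energy basis with populations $p_n$, $\mathrm{trace}(\rho^2)=\sum_n p_n^2$, which equals $1$ only when a single $p_n=1$. A small sketch (assuming $\beta E_n$ dimensionless):

```python
import math

def thermal_probs(energies, beta):
    """Boltzmann populations e^{-beta * E_n} / Z."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    return [w / z for w in weights]

def purity(probs):
    # trace(rho^2) for a density matrix diagonal in this basis
    return sum(p * p for p in probs)
```

Any finite-temperature population list gives purity strictly below 1, so no single ket can reproduce it.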
$\rho_0$ can, however, be seen as one half of the "thermofield double" state
\begin{equation}
\left|\psi_{0}\right\rangle =\sum_{n_{\mu}}\frac{e^{-\frac{\hbar\omega_{\mu}}{2}}}{\sqrt{\sum_{n_{\mu}}e^{-\hbar\omega_{\mu}}}}\left|n_{\mu}\right\rangle \otimes \left|n_{\mu}\right\rangle \ .
\end{equation} | {
"domain": "physics.stackexchange",
"id": 75269,
"tags": "quantum-mechanics, statistical-mechanics, quantum-information, density-operator"
} |
Coincidence measurements of cosmic ray particles | Question: Why does a coincidence measurement in, for example, a scintillator paddle detector identify a particle as a muon? Couldn't it be some other particle that happens to travel through both detectors? Or say you have two Cherenkov detectors with one about a metre above the other. Then if they both detect a signal close together would you be able to say yep, that's a muon, and if so why?
Here's the link that seems to me to suggest that the coincidence measurement is what allowed them to say they've detected muons. They don't give details of any other identification method unless I've completely missed it.
Statistically I think most detections would be muons because of their sheer number, so is that what they're basing their statement on?
Answer: A simple coincidence counter provides no particle ID, so it is largely insensitive to what species has triggered it. So, yes, you assume the species from statistics, but muons make up an overwhelming fraction of the particles at ground level (about 45:1 over protons and neutrons according to the Particle Data Group).
Now, we know the statistics of the population by building detectors that do have particle ID. The simplest design would be a hodoscope telescope with a magnet, but using a drift chamber or time-projection chamber (again with an analyzing magnet) will probably give better results.
"domain": "physics.stackexchange",
"id": 35176,
"tags": "particle-physics, experimental-physics"
} |
What is the relation between the Choi matrix and the Liouville space (superoperator) representations of a channel? | Question: A.S. Fletcher, P. W. Shor, and M. Z. Win
Phys. Rev. A 75, 012338 (2007) says
the Choi matrix for the operation $\mathcal{A}$ is given by $X_A \equiv \sum_k |A_k\rangle\!\rangle\langle\!\langle A_k|$, and the channel mapping $\mathcal{A}:\mathcal{L}(\mathcal{H})\mapsto \mathcal{L}(\mathcal{K})$ is defined by
\begin{equation}
\mathcal{A}(\rho) = {\rm{tr}}_{\mathcal{H}}[(\rho^{\rm{T}}\otimes I)X_A]. \tag{11}
\end{equation}
Here they used the Liouville space representation with $|\rangle\!\rangle \langle\!\langle|$. How do we get to this representation starting from the usual definition of Choi matrix representation of a channel we know from Preskill's notes Eq.(3.71)
$(I\otimes \mathcal{E})\left((|\tilde\Phi\rangle\langle\tilde\Phi|)_{RA}\right)$
?
Answer: I think there's some confusion here, so let me try to clarify some basic things:
Given any quantum channel $\Phi$, you define its Choi representation as the operator $J(\Phi)=(\Phi\otimes \operatorname{Id})\mathbb{P}_m$, where $\mathbb{P}_m\equiv |m\rangle\!\langle m|$ and $|m\rangle\equiv\sum_i |i,i\rangle$ is the (unnormalised) maximally entangled state. It is also common to instead talk about the Choi state, which is the same thing, except you define $|m\rangle$ as the actual (normalised) maximally entangled state. The two definitions only differ by a multiplicative factor, so it doesn't matter which one you use (as long as you're consistent with your notation of course).
Given a quantum channel with Kraus representation $\Phi(\rho)=\sum_k A_k \rho A_k^\dagger$, where $A_k$ are the Kraus operators, its Choi (following the definition above) can be written as
$$J(\Phi)=\sum_k \operatorname{vec}(A_k)\operatorname{vec}(A_k)^\dagger
= \sum_k |A_k\rangle\!\rangle\langle\!\langle A_k|.$$
Note that $|A_k\rangle\!\rangle$ refers to the vector obtained vectorising the operator $A_k$, and $\operatorname{vec}(A_k)$ is an equivalent notation for the same thing. You can directly verify these formulas by applying the general definition of Choi representation to a channel having $A_k$ as Kraus operators.
It goes without saying, but there is a bijective relation between channels and their Chois, which also means that a channel $\Phi$ has Kraus operators $\{A_k\}$ if and only if its Choi can be decomposed as above in terms of the vectors $\{\operatorname{vec}(A_k)\}$.
The channel $\Phi$ corresponding to the Choi $J(\Phi)$ is
$$\Phi(\rho) = \operatorname{tr}_2[(I\otimes\rho^T)J(\Phi)].
$$
To see this explicitly, consider the following:
$$\operatorname{tr}_2[(I\otimes \rho^T)J(\Phi)] =
\sum_{ij} \operatorname{tr}_2[(I\otimes \rho^T)(\Phi(E_{ij})\otimes E_{ij})] =
\sum_{ij} \Phi(E_{ij}) \underbrace{\operatorname{tr}[\rho^T E_{ij}]}_{=\rho_{ij}} = \Phi(\rho).$$
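This identity is easy to verify numerically. Below is a dependency-free sketch for the simplest case, the identity channel on a qubit, whose Choi is $J=\sum_{ij}E_{ij}\otimes E_{ij}$; the partial trace recovers $\rho$ exactly (any matrix works, by linearity):

```python
def kron(A, B):
    # Kronecker product of square matrices given as nested lists
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)] for i in range(n)]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def ptrace2(M, d):
    # partial trace over the second d-dimensional tensor factor
    return [[sum(M[a * d + k][b * d + k] for k in range(d))
             for b in range(d)] for a in range(d)]

d = 2
E = lambda i, j: [[1 if (r, c) == (i, j) else 0 for c in range(d)] for r in range(d)]
I2 = [[1, 0], [0, 1]]

# Choi of the identity channel: J = sum_ij E_ij (x) E_ij
J = [[0] * (d * d) for _ in range(d * d)]
for i in range(d):
    for j in range(d):
        J = madd(J, kron(E(i, j), E(i, j)))

rho = [[1, 2], [3, 4]]
rhoT = [[rho[j][i] for j in range(d)] for i in range(d)]
recovered = ptrace2(mmul(kron(I2, rhoT), J), d)  # tr_2[(I (x) rho^T) J]
```

Integer arithmetic makes the recovery exact, so `recovered` equals `rho` entry by entry.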
You can find some related discussions in How does the spectral decomposition of the Choi operator relate to Kraus operators?. | {
"domain": "quantumcomputing.stackexchange",
"id": 4956,
"tags": "quantum-operation, kraus-representation"
} |
Is the hypothesis "at some level, spacetime becomes discrete" falsifiable? | Question: Suppose I conjectured that, at some length scale, spacetime was discretized into "cells", Minecraft-style. For simplicity, I guess let's say they're cubes with side length $n$.
Presumably we can put an upper bound on $n$ from observation. For instance, myself and the table occupy the same $10\ \text{m}$ cube of space, and we are two different objects, so $n < 10\ \text{m}$. (Is this conclusion correct? Is my reasoning correct?)
Is there any experiment that would disprove this hypothesis for all $n$? My intuition is no, and I was using this as an example of a non-falsifiable hypothesis earlier today, but I was seized by doubt, as I have a math & computer science background without much physics.
Is my conjecture falsifiable?
Answer: If you want a falsifiable theory you must make a prediction about specific mutually exclusive ways the universe could be versus not be and then argue that one the groups must happen or cannot happen. Then when you investigate and find which one you get, you have falsified or not falsified the theory.
So ask yourself what you predictions are. Your predictions are a bit vague, but they sound like the predictions of a continuous theory. Why do I say that. It's because of the "at some level" part. It sounds like it means that at scales much larger than that hypothetical level every prediction agrees with the continuous theory.
So you could make the same predictions as a continuous theory and then whenever data is collected, no matter what data we see, the data was collected at some scale and you could just say that if only the scale was smaller things would have turned out differently.
But however small the scale is, you can pull the same trick. You never even need to bother making discrete predictions because whatever data fits the continuous theory also allows the discrete one to exist at the much smaller resolution without being exposed.
This freedom to wiggle out of any data is the hallmark of an unfalsifiable theory.
But if you claimed there was a discrete theory at a fixed level where experiments with a particular nonzero scale were small enough to require different predictions for the discrete theory. Now you have made a prediction that can be tested and thus your theory is falsifiable.
So you could have a whole family of theories, each predicting deviations at a different scale. And each one would be falsifiable. But the meta claim that at least one of them is correct, that meta claim is not falsifiable. | {
"domain": "physics.stackexchange",
"id": 34026,
"tags": "spacetime, discrete"
} |
Quantum mechanical experiments with large objects | Question: What are some examples of quantum mechanical phenomena that have been observed not with electrons but rather with large real-life objects? In particular, what is the largest object for which the double slit experiment has been successfully performed with?
Answer: As I say in my comment, there is probably a duplicate of this, that will turn up as soon as I write this, but as technology advances, perhaps it's out of date.
This answer concentrates as much on how you prepare "large" molecules, as I think that aspect is of interest in itself; we cannot approach the size of a virus or any tiny bacterium as yet, or for the foreseeable future, as far as I know.
From Largest Molecule Double Slit.
These molecules are around 100 atoms in size, compared to the 180,000 atoms in the smallest virus.
The relatively large phthalocyanine $C_{32}H_{18}N_8$ and derivative molecules $C_{48}H_{26}F_{24}N_8O_8$ have more mass than anything in which quantum interference has previously been observed. To have wavelengths that are relatively large compared to their sizes, the molecules need to move very slowly. This was achieved by directing a blue diode laser onto a very thin film of molecules in a vacuum chamber, effectively boiling off individual molecules directly under the beam while leaving the rest unaffected.
After separation from the film, the molecules were sent through a collimator to ensure they formed a beam before reaching the barrier, which had a number of parallel slits to produce the actual interference pattern. To prevent excessive interactions (primarily van der Waals forces) between the molecules and the edges of the slits, the researchers used a specially-prepared grating coated in silicon nitride membranes. Without such preparation, the molecules are likely to be deflected by ordinary interactions with the hardware.
After passing through the slits, the molecules' positions were recorded using fluorescence microscopy, which has both sufficient spatial resolution and fast response to detect when and where the molecules arrive. The positions of individual spots were measured to 10 nanometer accuracy. Additionally, the molecules lodged in the fluorescent screen, meaning their positions could be independently verified in the form of build-up at the experiment's end. | {
"domain": "physics.stackexchange",
"id": 33761,
"tags": "quantum-mechanics, experimental-physics"
} |
Why is the DNA codon table "equal" to the RNA codon table | Question: Before anything else, please pay attention to the double quotes around "equal" in the title - I know they are not equal, but you will understand in a bit.
If I look at the DNA codon table here or in Wikipedia, and at the RNA codon table here or in Wikipedia, their only difference is that the former has thymine (T) whereas the latter has uracil (U). But I do not understand how all the other nucleotides are the same. Bear in mind that transcription reads DNA from 3'->5' and translation reads mRNA from 5'->3'.
Look at this example, focusing on the 3'-TAC-5' of the antisense strand.
The 3'-TAC-5' codes for Tyrosine according to the DNA codon table. However, it transcribes and translates 3'-TAC-5' -> 5'-AUG-3' -> Methionine.
Now focus on the corresponding codon in the sense strand, 5'-ATG-3', which codes for Valine according to the DNA table (it is read from 3' to 5'). This codon transcribes and translates 3'-GTA-5' -> 5'-CAU-3' -> Histidine, not Valine and not Methionine.
So my questions are:
Why do DNA codon tables show the correspondence between codons and amino acids in the sense strands (5'-ATG-3'), if the only way to go from ATG to Met as in the figure is to consider the antisense strand?
I can see that the AUG codon translates to Methionine, and it is translated by reading the 3'-TAC-5' codon in antisense strand, whose corresponding codon in sense strand is 5'-ATG-3'. But this last one is read from 3'->5', so it reads 3'-GTA-5' -> 5'-CAU-3' -> His.
I am guessing I'm getting many things wrong in this... Maybe in the way that I'm reading the tables: I always take into account that transcription reads 3'->5', and this is the order in which I read the DNA tables.
Thank you and sorry if I was confusing.
Answer: I can understand your confusion but it all makes sense. The basic idea is that what we call the "antisense" strand is actually the one being transcribed. However, since that is in effect a mirror image, it is much simpler to think in terms of the sense strand.
To take a very simple example:
5' ATG 3' <-- sense strand
3' TAC 5' <-- antisense strand
The antisense strand will be read in a 3' to 5' direction:
3' TAC 5' <-- antisense strand
5' AUG 3' <-- mRNA
Since the mRNA is a mirror image, it has the sequence of the sense strand. It is this mRNA that is then translated and this is read in a 5' to 3' direction. So, AUG is translated as Met. To illustrate, have a look at this image from Wikipedia (click on it for a larger version),
In the image above you can see the growing polypeptide (protein) chain snaking its way through the ribosome. The flying blue things are tRNA molecules and the black chain at the bottom is the mRNA being translated. You can't really see it very well in this image but if you look at the original you can clearly see it is moving from the right towards the left. The right hand side is the 5' end and the left is the 3'. Or, in a more static version (adapted from here):
So, the antisense strand is read, transcribed to RNA (which has the sequence of the sense strand but with T converted to U) and it is this mRNA which is read (in a 5' to 3' direction) to produce the protein. | {
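This strand bookkeeping is mechanical enough to script; a small Python sketch (the one-entry codon table is just for this example):

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def transcribe(antisense_3_to_5):
    """mRNA (5'->3') read off an antisense strand written 3'->5'."""
    sense = "".join(COMPLEMENT[base] for base in antisense_3_to_5)
    return sense.replace("T", "U")  # sense-strand sequence with T -> U

CODON_TABLE = {"AUG": "Met"}  # tiny excerpt, enough for this example

mrna = transcribe("TAC")  # antisense 3'-TAC-5'  ->  mRNA 5'-AUG-3'
```

Running this reproduces the example above: antisense 3'-TAC-5' yields mRNA AUG, which the codon table maps to Met.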
"domain": "biology.stackexchange",
"id": 3089,
"tags": "genetics, molecular-genetics, transcription, translation"
} |
How do you upgrade pcl to the current release? | Question:
I saw this question: https://answers.ros.org/question/11622/how-do-you-upgrade-pcl-to-the-current-release/
But it is 7 years old and I assume some progress has happened on this issue. I am asking because all the approaches suggested by tfoote sound like they take a good amount of work, and I don't want to waste my time only to find out afterwards that the upgrading process has been made easier. So is there a canonical way to do this by now?
Is one of the solutions of tfoote the best for kinetic and pcl 1.8?
I saw some project like this one: https://github.com/NicolaCovallero/iri_tos_supervoxels that simply add pcl 1.8 in the cmake file and not as a package dependency. Will this work in kinetic? And if yes, does it prevent me from using any packages that do depend on ros_pcl, ros_conversions etc, because I will get namespace collisions?
Originally posted by Hakaishin on ROS Answers with karma: 142 on 2018-04-21
Post score: 2
Answer:
PCL is a 'system dependency' (ie: something used by ROS packages, but not a ROS package itself).
To use a newer version of a system dependency there is a relatively simple procedure:
identify all packages that you use that depend on the system dependency (read: all ROS packages that you use that (indirectly) depend on the dependency)
clone their source repositories into your workspace
install the newer version of the system dependency (this will most likely install it in /usr/local if building from source, or if you're using your system's pkg manager, in the appropriate system location)
in case you have parallel installations of the same dependency on your system: update the CMakeLists.txt of all affected packages to add version requirements to the find_package(..) lines
build the workspace
At this point all the packages in your workspace should be using the new version of the dependency (provided you did step 4 correctly).
Step 1 is really important: it's all right to use pkg A with dependency X version Q, and pkg B with dependency X version W, as long as A does not link anything from B, nor B from A. Linking two different versions of dependency X into the same binary is generally not very stable, and can lead to SEGFAULTs (in the case of C/C++) and/or other strange, unexpected and hard to diagnose problems.
If the two packages do not directly share any binary artefacts, but are standalone and communicate only through messages, it can work.
Step 4 is equally important: pkgs typically look for PCL with something like find_package(PCL REQUIRED). If you have both 1.7 and 1.8 on your system, CMake may end up finding 1.7 before 1.8, resulting in 1.7 being used.
So summarising: follow the above procedure, and make sure to update any find_package(PCL ..) lines to read find_package(PCL 1.8 ..). If depending on any of the perception_ros packages, you would have to update the line in (at least) pcl_ros.
Originally posted by gvdhoorn with karma: 86574 on 2018-04-21
This answer was ACCEPTED on the original site
Post score: 5
Original comments
Comment by gvdhoorn on 2018-04-21:
Two notes:
this is not specific to PCL: the same works (more or less) for any system dependency. Some will require a bit more work to get CMake to find the new versions, but in general it'll be the same
this is not ROS specific: Catkin ~= CMake, and this is a CMake workflow
Comment by Hakaishin on 2018-04-21:
Comment by Hakaishin on 2018-04-21: Thank you very much, I will try to follow these instructions. In the same question that I linked, why did your procedure not apply there? Did tfoote not know about this answer, or were there other reasons why at that time this did not work, or did I misunderstand the linked question?
Comment by gvdhoorn on 2018-04-21:
At that point in time PCL was actually a ROS package, so upgrading it was a little bit more involved. Note also the comment:
we're working on making it so that PCL and ROS integrate in a more standard way in future versions
I believe he is referring to making PCL a system dependency.
Comment by gvdhoorn on 2018-04-21:
But of course there are more sides to this than just installing a newer version on your system: newer versions of libraries typicaly introduce new functionality, but could also introduce breaking changes and changes to existing APIs. If that happens, no workflow will save you, and you'll have to do some work to make the ROS packages that you want to link against the new version compatible again.
That is probably what @tfoote means with:
however that will not solve the fundamental problem.
Comment by Hakaishin on 2018-04-21:
I see, thank you very much. Your approach worked :) | {
"domain": "robotics.stackexchange",
"id": 30710,
"tags": "ros-kinetic"
} |
In an antenna how fast do electrons move when receiving a signal? | Question: In an antenna how fast do electrons move when "receiving a wave"?
Answer: They move at the drift velocity for that material and that electric field. A strong FM radio signal from a nearby station has an intensity of about $10^{-5}$ W/m^2, while for a weak astronomical radio source it might be more like $10^{-26}$ W/m^2. The equation for the drift velocity in terms of the intensity $S$ is
$$v=\mu \sqrt{\frac{4\pi k}{c} S}$$,
and if we put in a typical electron mobility for a metal of $\mu\sim 3\times10^{-3}$ m^2/V.s, the results range from $\sim10^{-8}$ m/s for the weak astronomical source to $\sim100$ m/s for the strong radio station.
I'm surprised that the OP accepted the answer by Bill N, which seems to me to be uninformative. | {
"domain": "physics.stackexchange",
"id": 77406,
"tags": "electromagnetic-radiation, electric-fields, electronics, antennas"
} |
Why is the B field of a solenoid equal to $\mu_0 i n$ while that of a loop is $\frac{\mu_0 i R^2}{2(R^2+z^2)^{3/2}}$? | Question: In the B field of the loop, if R is the radius and z is the distance along the axis perpendicular to the center of the loop, let z go to zero, and multiply by N loops. Starting with the B field of the loop axis:
$$B=\frac{\mu_0 i R^2}{2(R^2+z^2)^{3/2}}$$
Becomes:
$B_{\text{N loops}}=\frac{\mu_0 i N}{2R}$ and not $B_{\text{solenoid}}=\mu_0 i n$. What is the difference?
Answer: You can think of a solenoid as containing an infinite number of loops stacked one on top of the other. Thus, the expression for one loop becomes a small contribution to the net field of the solenoid:
$$
B_{loop}\to dB_{solenoid}= \frac{\mu_0 (nidz) R^2}{2(R^2+z^2)^{3/2}}
\tag{1}
$$
where $n$ is the number of turns per meter so that $ndz$ is the number of current loops in a stack of thickness $dz$. Basically $n$ measures how densely you stack your loops.
Summing over all these loop contributions gives
$$
B_{net}=\int_{-\infty}^\infty dB =\mu_0 ni \tag{2}
$$
as in the solenoid.
This solution, which uses the superposition principle, is "easy" because the field on the symmetry axis of a loop is easy to compute.
A more general approach, using Ampere's law, shows that the field is constant inside the soleinoid, even for points that are off-axis. This latter result can also be shown using superposition but the integrations involved are a lot more technical. | {
"domain": "physics.stackexchange",
"id": 43174,
"tags": "electromagnetism, magnetic-fields"
} |
A limit to birds affinity for high vantage points | Question: Birds seem to have a natural affinity for high vantage points, including power wires, the tops of trees, and the sides and tops of buildings.
However, I presume the top of the Burj Khalifa is not packed with birds; flyover videos of cities full of skyscrapers suggest this would not be the case.
Q: Therefore, is there a cap, and if so at what height, where higher is no longer better? Perhaps the bird's vantage point is reduced by going higher.
Related, but I feel different: Why do crows sit on treetops even when it is cold?
Answer: Birds find their niches for many reasons. Their choice is based primarily on resource availability, predation risk, and competition. Keep in mind that there are variations among species; most birds like to forage for food at a height safe enough to avoid ground predation, while still being able to see and find food without much competition. Therefore very high altitudes do not seem beneficial for finding food, and they come at a cost in energy, so reasonable altitudes are preferred. But again, there are many variations depending on species and location.
"domain": "biology.stackexchange",
"id": 5108,
"tags": "eyes, ornithology, behaviour, collective-behaviour"
} |
Efficiently calculating differences between file using diff file | Question: I'm using SVNKit to get diff information between two revisions. I'm using the diff utility to generate a diff file, however I still need to parse it into numbers.
I implemented a solution, but it is rather slow. JGit does something similar; however, it actually parses the values itself and returns an object, rather than an output stream, and is much, much faster. I was unable to determine how to leverage that for SVNKit, so I attempted the following solution:
private Diff compareRevisions(final SVNRevision rev1, final SVNRevision rev2) throws SVNException {
final Diff diff = new Diff();
try (final ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
doDiff(rev1, rev2, baos);
int filesChanged = 0;
int additions = 0;
int deletions = 0;
final String[] lines = baos.toString().split("\n");
for (final String line : lines) {
if (line.startsWith("---")) {
filesChanged++;
} else if (line.startsWith("+++")) {
// No action needed
} else if (line.startsWith("+")) {
additions++;
} else if (line.startsWith("-")) {
deletions++;
}
}
diff.additions = additions;
diff.deletions = deletions;
diff.changedFiles = filesChanged;
return diff;
} catch (final IOException e) {
LOGGER.trace("Could not close stream", e);
return diff;
}
}
I've taken to caching the values in files to improve time, but optimally I'd like to speed this up. Perhaps I could use external programs?
Answer: You need to parse the patch file format correctly. Otherwise the next patch that deletes an SQL comment will confuse your program, as it looks like this:
--- old_file.sql
+++ new_file.sql
@@ -1,1 +1,1 @@
--- SQL comment
+SELECT * FROM table;
Your current code interprets the removed line as a removed file.
The file format is explained here: http://www.gnu.org/software/diffutils/manual/html_node/Detailed-Unified.html
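As a rough illustration of what parsing correctly means, here is a small sketch (Python chosen for brevity; the same logic applies in Java). It only treats `---`/`+++` pairs outside a hunk as file headers, and counts `+`/`-` lines only inside hunks, so the SQL-comment patch above is counted correctly. A real parser such as diffparser also tracks the line counts given in the `@@` header, which this heuristic skips:

```python
def count_diff(text):
    files = additions = deletions = 0
    in_hunk = False
    lines = text.splitlines()
    for i, line in enumerate(lines):
        # A "---" line immediately followed by a "+++" line is a file header.
        if line.startswith("--- ") and i + 1 < len(lines) and lines[i + 1].startswith("+++ "):
            in_hunk = False
            continue
        if line.startswith("+++ ") and not in_hunk:
            files += 1
            continue
        if line.startswith("@@"):
            in_hunk = True
            continue
        if in_hunk:
            if line.startswith("+"):
                additions += 1
            elif line.startswith("-"):
                deletions += 1
    return files, additions, deletions

patch = """--- old_file.sql
+++ new_file.sql
@@ -1,1 +1,1 @@
--- SQL comment
+SELECT * FROM table;
"""
print(count_diff(patch))  # (1, 1, 1): one changed file, one addition, one deletion
```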
Since there are other people who had the same problem, you could just build on their work instead of writing your own, e.g. https://github.com/thombergs/diffparser. | {
"domain": "codereview.stackexchange",
"id": 21163,
"tags": "java, performance, parsing, edit-distance, svn"
} |
Why is it ACETone? | Question: Acet* indicates Ethyl, but does not offer two Carbon atoms, but three.
Acetaldehyde is Ethanal, Acetic acid is Ethanoic acid, but Aceton is Propanon (yes, I'm aware that there is no Ethanon).
Why is Acetone called Acetone?
Answer: Here's a link to the first page of a book entitled, "The History of Acetone, 1600-1850" by Mel Gorman. The author points out that acetone was known in the Middle Ages and was frequently produced by heating dry lead acetate. I suspect that the "acetate" (or whatever the Latin, French or German term was) root stuck and then it was just modified a bit - to acetone - to make it distinct. | {
"domain": "chemistry.stackexchange",
"id": 1473,
"tags": "nomenclature"
} |
If Viruses use Host Proteins, why don't Immune Cells attack Host Cells? | Question: The Wikipedia article for Viral Proteins contains the following line:
Thus, viruses do not code for many of their own viral proteins, and instead use the host cell's machinery to produce the viral proteins they require for replication
If the host-derived (i.e. not encoded by the virus) viral proteins are made following the host's DNA, are those proteins still flagged as "viral" by cytotoxic T cells? If so, why don't they attack host cells that have those proteins when uninfected?
My guesses:
Those host-derived viral proteins are coded for by the host, but aren't usually produced during normal function
The proportion of host-derived viral peptides matters, with a lower proportion indicating normal cell function and a higher proportion indicates infection
The viral proteins are a mix of host proteins that, separately, are not recognized as viral, but when put together by virus instruction can form composite proteins whose peptides fragments are viral
I'm misunderstanding the quote, and what they meant was "Thus, viruses do not code for self-generation of many of their own viral proteins, and instead code to utilize the host cell's machinery to produce the viral proteins they require for replication"
Answer: The phrase “viruses do not code for many of their own viral proteins” in this Wikipedia entry is an obvious oxymoron.
I echo what someone else said: “The beauty of Wikipedia is not that it is correct, but that it is correctable”. So I corrected it.
Unless it has been reverted (let battle commence) the first paragraph now reads as follows:
The term viral protein refers to both the products of the genome of a virus and any host proteins incorporated into the viral particle. Viral proteins are grouped according to their functions, and groups of viral proteins include structural proteins, nonstructural proteins, regulatory proteins, and accessory proteins. Viruses are non-living and do not have the means to reproduce on their own, instead depending on their host cell's machinery to do this. Thus, viruses do not code for most of the proteins required for their replication and the translation of their mRNA into viral proteins, but use proteins encoded by the host cell for this purpose.
To expand for this answer:
Many viruses rely on either the host-encoded DNA polymerase, RNA polymerase or both for replicating their genome, depending on their size and complexity
All viruses depend on the host-encoded machinery of protein synthesis (ribosomes, tRNA, aminoacyl-tRNA synthetases etc.) for the translation of the proteins encoded in their mRNAs (which in some RNA viruses are also their genomes)
Some viruses — especially those enclosed by lipid envelopes — may have host-encoded proteins included in their virions, whether by chance or to serve some function.
The immunity ‘problem’
There seems to be no problem to answer regarding the host’s immune response to viral particles, in any case. The immune response recognizes proteins that are ‘foreign’. Virus-encode proteins are foreign. Any host constituents of the virion are not. (This is the essence of the answer from @PrashantBharadwaj.) | {
"domain": "biology.stackexchange",
"id": 12355,
"tags": "proteins, virology"
} |
Transformation of fields in non-abelian gauge theories | Question: Let us consider a gauge group, e.g. $SU(N)$. One usually says that a fermionic field $\psi$ belongs to the fundamental representation of the group.
As far as I understand, the fundamental representation is made of matrices that belong to $SU(N)$. Then why does the field, being a matrix, transform as $$\psi \mapsto U\psi$$ and not as $$\psi \mapsto U\psi U^\dagger?$$
The adjoint representation instead should be made of matrices belonging to the Lie algebra $\mathfrak{su}(N)$. What is the physical meaning of a field in the adjoint representation and how does it transform?
Answer: A representation of a group can refer to both the group homomorphism, i.e. M: $SU(N) \rightarrow GL(N)$ and the vector space on which the representation acts. In this case a field transforming in the fundamental representation means that it lives in the vector space on which the fundamental representation of SU(N) acts, hence it is a column vector which transforms as $\psi \rightarrow U\psi$. | {
"domain": "physics.stackexchange",
"id": 49877,
"tags": "gauge-theory, group-representations, yang-mills"
} |
Bohr hydrogen atom model and quantum mechanics on quantisation of angular momentum | Question: Bohr's model says that angular momentum is quantised to integral multiples of reduced Planck's constant, $$L = nh/2\pi$$
but in quantum mechanics, the angular momentum operator has non-integer eigenvalues, since $$L = \sqrt{\ell(\ell+1)} h/2\pi$$
Does that mean Bohr's postulate about angular momentum is wrong?
Answer: Correct. Bohr's model is very lucky and very difficult to get working!
For example, while most people only know that Bohr's model gives the correct energy levels if we assume that the angular momentum is quantised to $L=n\hslash$, that is only for the case of spherical coördinates.
If we do Bohr's model in other coördinate systems, we might need to have $L=\sqrt3\hbar$ or some other nonsense like that.
So, there were obvious red flags that Bohr's model is not everything even during its heyday, the old quantum theory period. Modern quantum theory completely changed that and made everything much more internally consistent.
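To make the mismatch concrete, here is a small illustrative table (not from the original answer), in units of $\hbar$: Bohr assigns $L = n$ to the $n$-th level, while quantum mechanics allows $L = \sqrt{\ell(\ell+1)}$ for $\ell = 0, \dots, n-1$. Already for the ground state ($n = 1$), Bohr predicts $L = 1$ where quantum mechanics gives $L = 0$.

```python
import math

for n in range(1, 4):
    bohr = n  # Bohr: L = n (in units of hbar)
    qm = [math.sqrt(l * (l + 1)) for l in range(n)]  # QM: allowed l = 0 .. n-1
    print(f"n={n}: Bohr L={bohr}, QM L values={[round(x, 3) for x in qm]}")
```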
Note: The name of the quantisation is Bohr-Sommerfeld quantisation conditions, not Bohr's model. Of course, just use modern quantum theory. | {
"domain": "physics.stackexchange",
"id": 95468,
"tags": "quantum-mechanics, angular-momentum, atomic-physics"
} |
How do liquid crystals rotate the plane of polarized light (electric field of light) which is used in LCD displays? | Question: Currently, I am studying liquid crystal displays. I have studied that liquid crystals are used in LCD screens for controlling and rotating the plane of vibration of the incoming light. I didn't get the exact physics phenomenon behind it. Please elaborate more on it in a scientific way on the physics behind the behaviour of liquid crystals.
Answer: The liquid crystal “state” can be thought of as being between the solid (molecules fixed in both position and orientation) and liquid (molecules with both random position and orientation) state.
In the liquid crystal “state” molecules can have random position but there is some degree of order as to their orientation. Not all liquid crystal molecules point in the same direction but over time there is an average non-zero (zero in the liquid state) direction in which the molecules point, and this direction is called the director.
As might be expected from what has been written, the molecules tend to be long and thread-like (nematic), helical (chiral nematic) or arranged in planes, and the directional order breaks down above a certain temperature, at which point the material exhibits a liquid phase.
A liquid crystal is an anisotropic material and so what happens to light as it passes through depends on the direction of travel and polarisation of the light relative to the director.
If the molecules which make up a liquid crystal have a permanent or induced dipole moment, then applying an external electric field will change the alignment of the director.
The speed of propagation of light through a liquid crystal depends on the orientation of the plane of polarisation of the light relative to the director, one when the plane of polarisation is parallel to the director and one when it is perpendicular to the director, thus a liquid crystal is birefringent and possesses two refractive indices.
This means that linearly polarised light entering a nematic liquid crystal will emerge elliptically polarised or possibly linearly polarised because the differential speed results in a change of phase between light with a component plane polarised along the director and light with a component plane polarised perpendicular to the director.
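The effect of this phase change can be illustrated with a minimal Jones-vector sketch (not from the original answer): take light linearly polarised at 45 degrees to the director, and let the slab retard the component along the director by a phase delta. A retardation of pi flips the sign of one component, i.e. it rotates the plane of polarisation by 90 degrees, while intermediate values give elliptical polarisation.

```python
import cmath
import math

def after_slab(delta):
    """Jones vector (E_parallel, E_perpendicular) after retardation delta,
    for input light linearly polarised at 45 degrees to the director."""
    e_par = 1 / math.sqrt(2)
    e_perp = (1 / math.sqrt(2)) * cmath.exp(1j * delta)
    return e_par, e_perp

e_par, e_perp = after_slab(math.pi)   # half-wave retardation
print(abs(e_perp + e_par) < 1e-9)     # True: e_perp = -e_par, plane rotated by 90 degrees
```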
With helical molecules (chiral nematic) and the direction of the light along the helical axis, right and left polarised light will travel at different speeds. Thus, if linearly polarised light, which can be thought of as the sum of left and right circularly polarised light, enters the right and left components travel at different speeds and when they emerge from the crystal their sum results in a plane polarised wave which has been rotated relative to the incident plane polarised wave. | {
"domain": "physics.stackexchange",
"id": 89134,
"tags": "optics, solid-state-physics"
} |
Truncating integer using string manipulation | Question: I have a class with a data member that needs to be rounded up to a 2 digit integer, irrespective of the number of the input digits.
For example:
roundUpto2digit(12356463) == 12
roundUpto2digit(12547984) == 13 // The 5 rounds the 12 up to 13.
Currently my code looks like:
int roundUpto2digit(int cents){
// convert cents to string
string truncatedValue = to_string(cents);
    // take the first two characters, corresponding to the most significant digits
// convert char to int, by -'0', multiply the first by 10 and sum the second
int totalsum = int(truncatedValue[0]-'0')*10 + int(truncatedValue[1]-'0');
    // if the third digit is five or greater, round the sum up by one
    if (truncatedValue[2] >= '5') totalsum++;
return totalsum;
}
How can this be made less ugly?
Answer: If I understood your requirements correctly then it might be like this:
int roundUpto2digit(int cents) {
if (cents < 100)
return cents < 10 ? cents * 10 : cents;
while ((cents + 5) >= 1000)
cents /= 10;
return (cents + 5) / 10;
}
The test:
#include <stdio.h>
void
test(int i) {
printf("%d -> %d\n", i, roundUpto2digit(i));
}
int
main() {
test(0);
test(1);
test(5);
test(9);
test(10);
test(49);
test(50);
test(94);
test(95);
test(99);
test(100);
test(104);
test(105);
test(994);
test(995);
test(999);
test(1000);
test(1040);
test(1050);
return 0;
}
The result:
0 -> 0
1 -> 10
5 -> 50
9 -> 90
10 -> 10
49 -> 49
50 -> 50
94 -> 94
95 -> 95
99 -> 99
100 -> 10
104 -> 10
105 -> 11
994 -> 99
995 -> 10
999 -> 10
1000 -> 10
1040 -> 10
1050 -> 11
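For experimenting with further edge cases, the answer's algorithm ports almost line for line to Python (with integer division `//` playing the role of C's integer `/`):

```python
def round_up_to_2_digits(cents):
    # Values below 100 are scaled up or passed through, as in the C version.
    if cents < 100:
        return cents * 10 if cents < 10 else cents
    # Drop digits until three remain (after adding the rounding offset),
    # then round the last digit away.
    while cents + 5 >= 1000:
        cents //= 10
    return (cents + 5) // 10

# The examples from the question:
print(round_up_to_2_digits(12356463))  # 12
print(round_up_to_2_digits(12547984))  # 13  (the 5 rounds 12 up to 13)
```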
It is uncertain if the [1, 9] range should map into [10, 90] (2 digits) or [1, 9] (1 digit). I could fix it, if the latter case is true. | {
"domain": "codereview.stackexchange",
"id": 12833,
"tags": "c++, integer"
} |
Computing characteristic polynomial of unitary operation | Question: I am trying to replicate a calculation from the linked paper but I am unsure if I understand their math. A locally invariant function defined as the characteristic polynomial is as follows:
$$F_U(t) = \det[\Re[M^{\dagger}UM] + t \cdot \Im[M^{\dagger}UM]]$$
Then it is said for $U$ with interaction coefficients $(x,y,z)$, we have
$$F_U(t) = (t^2+1)(Ct^2 + Bt + A) -t^2$$
I'm having trouble understanding how $(x,y,z)$ are related to $t$, and because neither $M$ nor $U$ is defined as a function of $t$, it seems like $F_U(t)$ is not a degree-4 polynomial.
In this paper, equation (9), they define $U$ with parameters $(\alpha, \beta, \gamma_1, \gamma_2, \delta_1, \delta_2)$ and find the corresponding coefficient $C=\frac{1}{16}(\cos{2\alpha}-\cos{2\beta})^2$. When I plug the same values in, since $U$ is not a function of $t$, then $F_U$ is not degree-4 and hence $C=0$. Any guidance would be appreciated, thank you.
Source: https://arxiv.org/abs/2105.06074
Answer: Resolved in comments; the missing step was the identity $\det(tA) = t^d \det(A)$ for a $d \times d$ matrix $A$. Since the matrix inside the determinant depends linearly on $t$, its $4 \times 4$ determinant is a degree-4 polynomial in $t$, even though $U$ and $M$ themselves do not depend on $t$. | {
"domain": "quantumcomputing.stackexchange",
"id": 3807,
"tags": "quantum-gate"
} |
rosjava(android) pubsub tutorial | Question:
I built the rosjava pubsub tutorial without errors, and I can run the PubSubTutorial on the AVD.
It seems to be working well, but I don't know whether it is actually publishing the topic.
I ran roscore on the Ubuntu PC, but the PubSubTutorial is not trying to connect to the roscore on the PC.
MainActivity Manifest has internet permission.
<uses-permission android:name="android.permission.INTERNET"></uses-permission>
Should I run roscore (rosjava)? The DEFAULT_MASTER_URI "http://localhost:11311/" was configured by the NodeRunner.createDefault() function.
[2011-06-23 22:41:19 - MainActivity] Android Launch!
[2011-06-23 22:41:19 - MainActivity] adb is running normally.
[2011-06-23 22:41:19 - MainActivity] Performing org.ros.tutorials.pubsub.MainActivity activity launch
[2011-06-23 22:41:19 - MainActivity] Automatic Target Mode: using existing emulator 'emulator-5554' running compatible AVD 'scv'
[2011-06-23 22:41:21 - MainActivity] Application already deployed. No need to reinstall.
[2011-06-23 22:41:21 - MainActivity] Starting activity org.ros.tutorials.pubsub.MainActivity on device emulator-5554
[2011-06-23 22:41:22 - MainActivity] ActivityManager: Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] cmp=org.ros.tutorials.pubsub/.MainActivity }
How can I use this tutorial?
Please give me an answer. Thank you.
Originally posted by hughie on ROS Answers with karma: 71 on 2011-06-23
Post score: 1
Original comments
Comment by damonkohler on 2011-06-23:
Could you clarify your question? You say that it built and runs fine on the AVD. Your logcat looks fine as well.
Answer:
To connect to a roscore that is running in an AVD, you'll need to use adb forwarding. See http://developer.android.com/guide/developing/tools/adb.html#forwardports
For example:
adb forward tcp:11311 tcp:11311
That will forward all TCP connections to port 11311 on your host to your AVD on port 11311.
Originally posted by damonkohler with karma: 3838 on 2011-06-27
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 5939,
"tags": "roscore, rosjava, android"
} |
Is there a better or more compact way of adding items in treeview using LINQ? | Question: I am using the following code to add nodes in a treeview. Is there a better or more compact way of doing this by using LINQ?
foreach (Plan plan in this.IncomingPlan.Plans)
{
foreach (Document doc in plan.Documents.Where(d => d.Name.Equals(this.DocumentName, StringComparison.OrdinalIgnoreCase)))
{
foreach (Author author in doc.Authors)
{
TreeNode treeNode = new TreeNode()
{
Text = author.Name,
Type = NodeType.ParentNode,
Tag = author
};
foreach (Book book in author.Books)
{
treeNode.Nodes.Add(new TreeNode()
{
Text = book.Name,
Type = NodeType.ChildNode,
Tag = book
});
}
this.treeView.Nodes.Add(treeNode);
}
}
}
Answer: You can make this more maintainable and more compact by utilizing LINQ here. I'm not sure what TreeNode is in your code, I'm guessing you derived from a WinForms TreeNode.
I'd argue that the node's Type is unnecessary. You can easily determine that if you look at its Level. Level 0 indicates it is at the root of the tree, otherwise it is greater than 0.
Unfortunately there's no nice way to add a range of nodes to another. You could only add arrays of the nodes. Using a loop would be the best option.
Here, I would flatten the nested loops as far as I can then loop through to add them to the tree. To compact it even more, create a factory method to create the nodes. Even more useful if you have a lot of properties to set.
// create the node for the item
static TreeNode CreateNode<T>(T item, Func<T, string> textSelector)
{
return new TreeNode { Text = textSelector(item), Tag = item };
}
var authors =
from plan in this.IncomingPlan.Plans
from doc in plan.Documents
where doc.Name.Equals(this.DocumentName, StringComparison.OrdinalIgnoreCase)
from author in doc.Authors
select author;
foreach (var author in authors)
{
var authorNode = CreateNode(author, a => a.Name);
foreach (var book in author.Books)
{
authorNode.Nodes.Add(CreateNode(book, b => b.Name));
}
treeView.Nodes.Add(authorNode);
} | {
"domain": "codereview.stackexchange",
"id": 750,
"tags": "c#, linq"
} |
Trying to extrapolate info from a partial data set - statistical inference | Question: I am wondering if my logic is OK here or not.
98% of a group without a device has an event occur
2% of group with device has an event occur
Since we know that correlation isn't causation I can't say that the device made a difference one way or the other but I am wondering if I can reasonably conclude:
Of the 2% where the device was present and the event occurred...
It likely would have occurred in 98% of that group anyway since we have observed that it happens 98% of the time when the device isn't present.
I don't have any data beyond that unfortunately so I am trying to figure out how much it mattered if I assume it mattered - based on the data I have.
If that doesn't extrapolate mathematically, what am I missing?
Answer: What you are describing is commonly called conditional probability. In other words, the probability of an event occurring, given that another event has occurred. Bayes' theorem is a way of conducting statistical inference based on conditional probability. It might be useful to frame your problem as statistical inference (in contrast to extrapolation). | {
"domain": "datascience.stackexchange",
"id": 11561,
"tags": "statistics, probability, mathematics"
} |
LMS1XX on ROS Kinetic | Question:
Hi,
I am trying to get a connection in ROS Kinetic to my LMS111 Laserscanner with:
rosrun lms1xx LMS1xx_node _host:=192.168.3.7
The result is a connection failure. I saw the other posts like http://answers.ros.org/question/207220/connecting-a-sick-laserscanner-via-ethernet/ and http://answers.ros.org/question/66437/using-the-sicktoolbox_wrapper-with-lms1xx/ but there is never a full description of the solution...
Does anyone have an idea what could be the problem or could provide a full solution path?
Thanks in advance,
Michael
Originally posted by mtROS on ROS Answers with karma: 92 on 2017-05-12
Post score: -1
Original comments
Comment by AlexR on 2017-05-14:
I think the problem is due to permission of the port. Try $ sudo chmod a+rx [port of the LMS] before launching the lms node. It should work. I have had no problems using LMS sensors on ROS kinetic so far.
Comment by mtROS on 2017-05-15:
i messed up the configuration, thanks for your answer, i added a solution path below.
Answer:
In order to establish a connection of the laser scanner SICK LMS111 to ROS Kinetic, several steps have to be performed.
1. Supply the sensor and connect it to the ethernet port of a system running Windows.
2. Install the SICK SOPAS Engineering Tool and execute it.
3. Configure the parameters in the SICK software SOPAS. That means define an ethernet IP address and adjust the measuring parameters.
4. Install the sick toolbox wrapper:
$ sudo apt-get install ros-kinetic-lms1xx
5. In Ubuntu, under Settings, configure the wired connection to use an IP different from that of the laser scanner. The IP of the laser scanner can be obtained by
$ sudo ifconfig
6. Make sure that the connection between Ubuntu and the LMS is established (you can see this in the Network Settings).
7. Start the ROS master:
$ roscore
8. In order to establish a connection to ROS, open a new terminal and enter
$ rosrun lms1xx LMS1xx_node _host:=[IP of the scanner obtained from sudo ifconfig]
9. When using rviz: the fixed frame in Global Options has to be set to laser and a LaserScan display has to be added with the topic /scan enabled.
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 27885,
"tags": "lidar, sicklms, ros-kinetic"
} |
why do some cells in the body prefer necrosis to apoptosis as a means of cell death? | Question: There are many programmed cell death pathways, but some cells show a greater preference for some over the other. I'm wondering as to why if necrosis is an inflammatory response that causes damage to neighboring cells, why some cells would prefer this as opposed to a more controlled mechanism such as apoptosis or even autophagy.
Answer: There is actually no preference for apoptosis or necrosis in cells of the human body - both types can occur in all cells and they have different triggers. The main differences can be seen in this figure (from here):
Apoptosis (also called programmed cell death) has three different triggers (intrinsic, extrinsic and Perforin/Granzyme pathways), see the image below for details (from reference 1):
All three activate different components and caspases in the beginning, but all finally activate caspase 3, which in turn activates endonucleases (which degrade the chromosomal DNA) and proteases (which degrade the proteins in the nucleus and the cytoskeleton), leading to the degradation of the cell. In the end, apoptotic bodies are formed from the cell. Additionally, the mitochondria break down and release cytochrome c and ATP. Their release from the cell attracts macrophages, which take up and eliminate the apoptotic bodies and in turn release cytokines which suppress an inflammatory response.
Apoptosis goes on in an ordered way and does not trigger any further reactions in neighbouring cells (except when these receive the same signal). See also references 1 and 2 for more details.
Necrosis is triggered by external stimuli of the cell such as injury of tissues or toxins from infections. Necrotic cells swell, while their internal structures are in a state of unregulated degradation. Finally the membrane will burst and set free cytochrome c and phosphatidylserines from the membranes which cause inflammation in the affected tissue. The necrotic cells are not removed by macrophages which allows the released interior of the cell to spread further and cause problems. See reference 2 and 3 for more details.
Necrosis is usually not seen as a beneficial process, but there are also publications which regard it as a specific form of cell death (see reference 4).
References:
Apoptosis: A Review of Programmed Cell Death
Apoptosis vs. Necrosis
Cell death by necrosis: towards a molecular definition
Review Necrosis: a specific form of programmed cell death? | {
"domain": "biology.stackexchange",
"id": 3398,
"tags": "cell-biology, apoptosis, autophagy"
} |
What is the rotation matrix corresponding to a point on the Bloch sphere? | Question: A qubit is given in the following form:
$\left|\psi\right\rangle = \cos\left(\dfrac{\theta}{2}\right)\left|0\right\rangle +
e^{i\phi}\sin\left(\dfrac{\theta}{2}\right)\left|1\right\rangle$.
Let us start at $\left|0\right\rangle$ and rotate about the $x$-axis by $180^{\circ}$ (we should end up at $\left|1\right\rangle$). Mathematically, it can be shown easily:
Let $\theta = 180^{\circ}$ and $\phi = 0^{\circ}$:
$\left|\psi\right\rangle = \cos\left(\dfrac{180}{2}\right)\left|0\right\rangle +
e^{i(0)}\sin\left(\dfrac{180}{2}\right)\left|1\right\rangle\\
\left|\psi\right\rangle = \cos\left(90\right)\left|0\right\rangle +
\sin\left(90\right)\left|1\right\rangle\\
\left|\psi\right\rangle = \left|1\right\rangle
$
Now, let's use the rotation matrix instead. The matrix is given as:
$R_x(\theta) \equiv e^{-i \theta \mathbb{X}/2} = \cos(\theta/2)\mathbb{I} -i\sin(\theta/2)\mathbb{X} = \begin{bmatrix} \cos(\theta/2) & -i\sin(\theta/2) \\ -i\sin(\theta/2) & \cos(\theta/2)\end{bmatrix}$, where $\mathbb{I} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ and $\mathbb{X} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$.
Using $R_x(\theta)$, we get
$R_x(180) = \begin{bmatrix} \cos(180/2) & -i\sin(180/2) \\ -i\sin(180/2) & \cos(180/2)\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}\\
R_x(180) = \begin{bmatrix}0 & -i\\ -i & 0\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}\\
R_x(180) = \begin{bmatrix}0\\-i\end{bmatrix}\\
R_x(180) = -i\begin{bmatrix}0\\1\end{bmatrix}.
$
Of course, I feel that I am missing something. The vector obtained is correct but with a phase shift of $-i$.
Also, I am wondering why it is okay to let $\phi = 0$ (if it is not correct, then what should be the value?).
Lastly, I would like to know why the rotation matrix only has $\theta$ but not $\phi$.
Thank you in advance!
Answer: Be careful with your choice of notation. You're using $(\theta,\phi)$ to describe the input state, and you're using $\theta$ as the angle of rotation. These two are different $\theta$s.
Now $\theta=\pi$ and $\phi=0$ simply because you chose your initial state to be $|0\rangle$. (Actually, $\phi$ could be arbitrary, so you pick it to be 0 for simplicity.)
It perhaps helps to think about a picture of the Bloch sphere. An arbitrary pure state (on the surface of the sphere) requires two parameters to describe it, $(\theta,\phi)$. An arbitrary rotation requires three parameters - an axis (which is two parameters, entirely equivalent to the $(\theta,\phi)$ of the pure state), and an angle of rotation about that axis. Now, in your example, you have selected a fixed axes, $X$, and the $\theta$ you're using describes the angle of rotation about that axis. See, it's really incomparable to the other $\theta$ you're using.
Finally, you are correct that the $R_x$ operation gives you the answer that you want only up to a global phase factor. But global phase factors make no difference, and can be neglected.
Also, the -i you see outside the state vector is part of global phase. Those are not considered phase shifts (only relative phase shifts are). And since $R_x$ does not introduce any phase shift, $\phi$ is not in its rotation matrix.
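To make the global-phase point concrete, here is a small numpy sketch (numpy assumed available) that builds $R_x(\pi)$ exactly as in the question and applies it to $|0\rangle$:

```python
import numpy as np

theta = np.pi                                  # rotate 180 degrees about x
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

Rx = np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * X

ket0 = np.array([1, 0], dtype=complex)
out = Rx @ ket0
print(out)                                     # approximately [0, -1j]: |1> up to the global phase -i

# The global phase never shows up in measurement probabilities:
print(np.round(np.abs(out) ** 2, 12))          # [0. 1.]
```

The second print shows why the $-i$ can be neglected: the probabilities of measuring $|0\rangle$ and $|1\rangle$ are exactly those of $|1\rangle$ itself.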
PS: As @DaftWullie pointed out, axis needs two parameters. | {
"domain": "quantumcomputing.stackexchange",
"id": 917,
"tags": "quantum-state, bloch-sphere"
} |
Is there a naming convention for network weights for multilayer networks? | Question: In the diagram below, although the flow of information happens from the input to output layer, the labeling of weights appears reverse. Eg: For the arrow flowing from X3 to the fourth hidden layer node has the weight labeled as W(1,0) and W(4,3) instead of W(0,1) and W(3,4) which would indicate data flowing from the 3rd node of the 0'th layer to the 4th node of the 1st layer.
One of my neural networks teachers did not emphasize this convention at all. Another teacher made it a point to emphasize it.
Is there a reason there is such an un-intuitive convention and is there really a convention?
Answer: As the system grows, matrix notation is used: a = Wx, where a (the input to the activation function in the hidden layer) and x (the values from the input layer) are column vectors, the transposes of (a1,a2,...,a_m) and (x1,x2,...,x_n), and W is an m-by-n matrix with m rows and n columns. The standard way to denote matrix elements is w(i,j), where "i" is the row number and "j" is the column number:
(from wiki)
For this reason, the weight that applies to h4 from x3 is the element in row 4, column 3 of the matrix W, that is, W(4,3) (as your teacher advocates, but sadly without explaining why).
In your example:
Note: things are a bit more complex when x1, x2, ... are themselves vectors, but the final conclusion is the same.
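This row-by-column convention is exactly what numpy uses, so a minimal sketch (with made-up weight values) may help:

```python
import numpy as np

# 3 inputs feeding 4 hidden units, so W has shape (4, 3): rows index the
# target (hidden) units, columns index the source (input) units.
x = np.array([1.0, 2.0, 3.0])                  # x1, x2, x3
W = np.arange(12, dtype=float).reshape(4, 3)

a = W @ x                                      # a_i = sum_j W[i, j] * x_j
print(a.shape)                                 # (4,): one value per hidden unit

# The weight from input x3 into hidden unit h4 is row 4, column 3,
# i.e. W(4,3) in 1-based math notation, W[3, 2] in 0-based numpy:
print(W[3, 2])
```

If the convention were reversed, W would need shape (3, 4) and the product Wx would not even be defined for this x.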
( PS: URGENT to allow latex notation on this stack exchange ! ) | {
"domain": "ai.stackexchange",
"id": 454,
"tags": "neural-networks"
} |
Does the normal reaction on pull up bar change during the pull ups? | Question: Intuitively, I know the answer but I can't think of the right math.
I found this question but none of the answers were satisfying enough for me. The human body is not a rigid body, so how do we even apply $\Sigma F=ma_{net}$ over it?
Answer:
Does the normal reaction on pull up bar change during the pull ups?
The normal reaction of the bar changes while the body moves upwards (or downwards) because the body does not move at a constant acceleration. You are right to say that the human body is a complex system which cannot be modelled as a simple particle, but Newton's laws of motion still apply. For the body to accelerate there must be a net force which will provide the acceleration. In your example this comes from the bar
$$F_\text{bar} = m(a + g)$$
where $m$ is total mass of the body, and $a$ is its vertical acceleration.
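To see the formula in numbers (the 70 kg body mass here is an assumed example):

```python
g = 9.81     # m/s^2, gravitational acceleration
m = 70.0     # kg, assumed total body mass

# Normal reaction from the bar for a few vertical accelerations of the
# center of mass: hanging at rest, accelerating up, accelerating down.
for a in (0.0, 2.0, -2.0):
    F_bar = m * (a + g)
    print(f"a = {a:+.1f} m/s^2  ->  F_bar = {F_bar:.0f} N")
```

The bar force exceeds the static weight while the body accelerates upward and drops below it while the body accelerates downward, which is exactly the scale-and-squats effect described next.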
Another interesting example similar to this would be doing squats on a scale. As body accelerates downwards the scale shows lower weight, and as the body slows down to rest the scale shows larger weight. Once the body is at rest, the scale shows the normal weight.
Although human body is a complex system that has many particles, it can be considered as a particle with all the mass concentrated at the center of mass.
When a collection of particles is acted on by external forces, the center of mass moves as though all the mass were concentrated at that point and acted on by the net external force.
Center of mass of a collection of particles can be calculated as
$$\vec{r}_\text{cm} = \frac{m_1 \vec{r}_1 + m_2 \vec{r}_2 + \dots}{m_1 + m_2 + \dots}$$
The sum in denominator is total mass of the object $M$, and the above equation becomes
$$M \vec{r}_\text{cm} = m_1 \vec{r}_1 + m_2 \vec{r}_2 + \dots$$
The second time-derivative of the above equation gives
$$M \vec{a}_\text{cm} = m_1 \vec{a}_1 + m_2 \vec{a}_2 + \dots$$
The forces acting on a complex object can be divided to (i) internal forces between the particles and (ii) external forces
$$\sum \vec{F}_\text{ext} + \sum \vec{F}_\text{int} = M \vec{a}_\text{cm}$$
By Newton's third law of motion, internal forces between the particles cancel (you cannot lift yourself by pulling your own belt) and we are left with
$$\boxed{\sum \vec{F}_\text{ext} = M \vec{a}_\text{cm}}$$
In your example external forces would be (i) gravitational force, and (ii) normal force by the bar. | {
"domain": "physics.stackexchange",
"id": 86822,
"tags": "classical-mechanics"
} |
Units in modified Arrhenius equation? | Question: The modified Arrhenius equation is used to express the rate constant in a chemical mechanism model I'm working with. The equations is as follows:
$$k_\mathrm{f} = A\times T^b\times\exp\left(-\frac{E_\mathrm{a}}{RT}\right)$$
The paper states that "Units are Moles, cm3, Seconds, K, and Calories/Mole", so what would be the final units of the rate constant? I'm a bit confused due to the presence of the exponential function. When I simply plug in the values as given in the paper, the rate becomes huge when multiplied by the molar concentration; is this because it gives molecules per second? To give you an idea, for the following reaction:
$$\ce{C2H5OH + OH <=>C2H4OH + H2O}$$
The Arrhenius constants are as follows:
$$A = 1.74E+11$$
$$b = 0.27$$
$$E_a = 600.0$$
I am yet to calculate the reverse reaction using Gibbs Free Energy, is it just equally as large and thus it all cancels out or are the final units really not in $\mathrm{mol\ s^{-1}}$?
Answer:
The exponential must be dimensionless. That means if $E_a$ is in units of $\mathrm{cal\cdot mol^{-1}}$, then $R$ must be units of $\mathrm {cal\cdot mol^{-1}\cdot K^{-1}}$. Make sure you use the right value of $R$ for these units; $R=8.314~ \mathrm{J \cdot mol^{-1}\cdot K^{-1}}$, but $R = 1.987~ \mathrm{cal \cdot mol^{-1}\cdot K^{-1}}$. Temperature must obviously be in Kelvins.
The example reaction you gave is bimolecular, so I would think $k_f$ is meant to have units of $\mathrm{cm^3 \cdot s^{-1}\cdot mol^{-1}}$. That way, when you multiply $k_f$ by the molar concentration of both reactants, e.g. ethanol and hydroxyl, you get a reaction rate that is in units of $\mathrm{mol \cdot s^{-1}\cdot cm^{-3}}$.
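As a quick numeric sanity check with the constants from the question (the temperature here is an assumed example, not from the paper), the key point is choosing $R$ in the same units as $E_a$:

```python
import math

# Constants from the question; R is chosen in cal/(mol K) to match E_a.
A, b, Ea = 1.74e11, 0.27, 600.0
R = 1.987                  # cal mol^-1 K^-1
T = 1000.0                 # K, assumed example temperature

k_f = A * T**b * math.exp(-Ea / (R * T))
print(f"k_f = {k_f:.2e} cm^3 mol^-1 s^-1")     # roughly 8.3e11
```

Using $R = 8.314$ here (joule units) with a calorie-based $E_a$ would silently change the exponential and give a wrong rate constant.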
The remaining terms must therefore combine to give the right units for $k_f$. Since the units of $T^b$ will have units of $\mathrm K^b$, then the units for $A$ will be the units for $k_f$ divided by $\mathrm K^b$, i.e. the units of $A$ are $\mathrm{{cm}^3 \cdot s^{-1}\cdot mol^{-1} \cdot K^{\it-b}}$. | {
"domain": "chemistry.stackexchange",
"id": 3918,
"tags": "kinetics, units"
} |
Chemical name : Melamine | Question: Melamine is a triazine with 3 amine groups. How would you name it:
as 1,3,5-triamino-2,4,6-triazine
or
2,4,6-triamino-1,3,5-triazine ?
Please explain the reason too, if possible.
Answer: According to the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book), cyclic parent structures containing one or more heteroatoms with no more than ten ring members are named by using the extended Hantzsch-Widman system. The locant ‘1’ is always is given to a heteroatom. The corresponding rule for cycles with more than one of the same heteroatom reads as follows.
P-22.2.2.1.2 A multiplicity of the same heteroatom is indicated by a multiplying prefix ‘di’, ‘tri’, ‘tetra’, etc., placed before the appropriate ‘a’ term. The final letter ‘a’ of a multiplying prefix is elided before a vowel, e.g., tetrazole, not tetraazole. Lowest possible locants are assigned to heteroatoms, locant ‘1’ being assigned to one of the heteroatoms. Locants are cited at the front of the name, i.e., before the skeletal replacement (‘a’) term and any preceding numerical prefixes.
Therefore, the systematic name of the parent structure in melamine is 1,3,5-triazine.
This numbering is also used in substituted heteromonocyclic compounds. A heteroatom keeps the locant ‘1’; then low locants are given to any substituents.
P-14.4 NUMBERING
When several structural features appear in cyclic and acyclic compounds, low locants are assigned to them in the following decreasing order of seniority:
(a) fixed numbering in chains, rings, or ring systems, i.e., when the numbering of a system is fixed, for example in purine, anthracene, and phenanthrene, this numbering must be used, both in PINs and in general nomenclature;
(…)
(c) principal characteristic groups and free valences (suffixes);
(…)
Therefore, the nitrogen atoms of the heterocycle in melamine have the lowest locant set ‘1,3,5’ according to (a), and then the lowest possible locant set ‘2,4,6’ is given to the amine groups according to (c). Thus, the preferred IUPAC name is 1,3,5-triazine-2,4,6-triamine. | {
"domain": "chemistry.stackexchange",
"id": 7740,
"tags": "organic-chemistry, nomenclature"
} |
How to find out the transfer function of a FIR filter? | Question: $$h[n]=\begin{cases}a^n & \text{if } 0 \le n < N \\
0 & \text{otherwise}\end{cases}$$
And for which values of $a$ the filter is stable
I know that the transfer function will be
$$H(z)=\frac{z}{z-a}~,~|z|>a$$
how to find out the values of $a$ for its stability?
Answer: \begin{align}
y[n] &= h[n] * x[n]\\
&= h[0]x[n] + h[1]x[n-1] + \ldots + h[N-2]x[n-(N-2)] + h[N-1]x[n-(N-1)]\\
&= a^{0}x[n] +a^{1}x[n-1] + \ldots + a^{N-2} x[n-(N-2)] + a^{N-1} x[n-(N-1)]
\end{align}
So
$$\mathcal Z\{y[n]\} = Y(z) = X(z)\left(1 + az^{-1} + ... + a^{N-2}z^{-(N-2)} + a^{N-1}z^{-(N-1)}\right)$$
and
$$H(z) = \dfrac{Y(z)}{X(z)} = 1 + az^{-1} + ... + a^{N-2}z^{-(N-2)} + a^{N-1}z^{-(N-1)}$$
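A quick numerical sketch (with an arbitrary $a$ and $N$ of my choosing) confirms that this FIR transfer function is just a finite geometric series, which stays finite for any finite $a$:

```python
import numpy as np

a, N = 0.9, 8
n = np.arange(N)
h = a ** n                         # h[n] = a^n for 0 <= n < N

# H(z) = sum_n h[n] z^{-n}; evaluate it directly at a test point...
z = 1.5 + 0.5j
H_direct = np.sum(h * z ** (-n))

# ...and compare with the closed form of the finite geometric sum:
H_closed = (1 - (a / z) ** N) / (1 - a / z)
print(np.isclose(H_direct, H_closed))          # True
```

Note this differs from the infinite-length case $h[n] = a^n u[n]$ (whose transform is $z/(z-a)$, $|z|>|a|$): truncating at $N$ terms is what makes the filter FIR and unconditionally stable.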
For $a$ finite, $H(z)$ is finite and thus stable for finite input values. | {
"domain": "dsp.stackexchange",
"id": 5300,
"tags": "filters, z-transform, transfer-function, finite-impulse-response"
} |
Find the ideal gas law from the internal energy | Question: I'm looking for a way to obtain the ideal gas law $PV=nRT$ when given the internal energy $$U=U(S,V)=\alpha N k_b \left(\frac NV\right)^{2/3} e^{2S/(3Nk_b)}$$ I can find the pressure and the temperature both from $$\left(\frac{\partial U}{\partial V}\right)_S=-P$$ and $$\left(\frac{\partial U}{\partial S}\right)_V=T$$ but I don't know how to continue after that, other than substituting $P$ and $T$ in $PV=nRT$, which is indeed a proof but not really a rigorous one, since we are using our objective to prove itself. So, is there any other way?
Answer: I am not totally sure if this is the most straightforward way, but you will obtain the ideal gas law by varying $U$, that is
$$\operatorname{d}U = \dfrac{\partial U}{\partial V} \operatorname{d}V + \dfrac{\partial U}{\partial S}\operatorname{d}S$$.
Since you know that $\frac{\partial U}{\partial V}= -p$ and $\frac{\partial U}{\partial S}= T$, you can plug it into this equation and get
$$\dfrac{\partial U}{\partial V}\operatorname{d}V + \dfrac{\partial U}{\partial S} \operatorname{d}S = -p \operatorname{d}V + T\operatorname{d}S $$.
On the other hand you can carry out the derivatives on the left hand side, which will give you
$$\dfrac{\partial U}{\partial V} = -\dfrac{2}{3}\alpha k_B \Bigl(\dfrac{N}{V}\Bigr)^{5/3}e^x$$
and
$$\dfrac{\partial U}{\partial S} = \dfrac{2}{3}\alpha \Bigl(\dfrac{N}{V}\Bigr)^{2/3}e^x$$
where $x=\frac{2S}{3Nk_b}$. Substituting again brings you to
$$-\dfrac{2}{3}\alpha k_B \Bigl(\dfrac{N}{V}\Bigr)^{5/3}e^x \operatorname{d}V + \dfrac{2}{3}\alpha \Bigl(\dfrac{N}{V}\Bigr)^{2/3}e^x \operatorname{d}S = -p \operatorname{d}V + T\operatorname{d}S $$.
Rearranging gives
$$\Bigl( \dfrac{2}{3}\alpha \Bigl(\dfrac{N}{V}\Bigr)^{2/3}e^x - T\Bigr) \operatorname{d}S = \Bigl( \dfrac{2}{3}\alpha k_B \Bigl(\dfrac{N}{V}\Bigr)^{5/3}e^x -p \Bigr)\operatorname{d}V$$.
Of course this equation needs to be true for all $\operatorname{d}V$ an $\operatorname{d}S$ and so
$$\dfrac{2}{3}\alpha k_B \Bigl(\dfrac{N}{V}\Bigr)^{5/3}e^x -p = 0$$
and
$$\dfrac{2}{3}\alpha \Bigl(\dfrac{N}{V}\Bigr)^{2/3}e^x - T = 0$$
needs to hold true. Rearranging the first of these two equations yields
$$p = \dfrac{2}{3}\alpha k_B \Bigl(\dfrac{N}{V}\Bigr)^{5/3}e^x =
\underbrace{\Bigl(\dfrac{2}{3}\alpha \Bigl(\dfrac{N}{V}\Bigr)^{2/3}e^x \Bigr)}_T \dfrac{k_B N}{V}$$.
That $T = \frac{2}{3}\alpha \Bigl(\frac{N}{V}\Bigr)^{2/3}e^x$ indeed is true, we can see by rearranging the latter of these two equations for $T$. Continuing with
$$p = \underbrace{\Bigl(\dfrac{2}{3}\alpha \Bigl(\dfrac{N}{V}\Bigr)^{2/3}e^x \Bigr)}_T \dfrac{k_B N}{V} = \dfrac{T k_B N}{V} $$,
it is easy to see that
$$pV = N k_B T$$.
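The bookkeeping above can be checked symbolically; here is a short sketch assuming sympy is available:

```python
import sympy as sp

S, V, N, kB, alpha = sp.symbols('S V N k_B alpha', positive=True)
U = alpha * N * kB * (N / V) ** sp.Rational(2, 3) * sp.exp(2 * S / (3 * N * kB))

T = sp.diff(U, S)       # T = (dU/dS)_V
p = -sp.diff(U, V)      # P = -(dU/dV)_S

# pV - N kB T should vanish identically:
print(sp.simplify(p * V - N * kB * T))          # 0
```

The fact that the residual simplifies to zero for all $S$ and $V$ is exactly the coefficient-matching argument carried out by hand above.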
With $R= N_A k_B$, where $N_A$ is Avogadro's constant and $\frac{N}{N_A}=n$, where $n$ is the mole number and $N$ is the number of molecules, we finally found
$$pV = n R T$$. | {
"domain": "physics.stackexchange",
"id": 98643,
"tags": "thermodynamics, energy, pressure, entropy, ideal-gas"
} |
Is the statistical interpretation of Quantum Mechanics dead? | Question: I'm sure this question is a bit gauche for this site, but I'm just a mathematician trying to piece together some physical intuition.
Question: Is the statistical interpretation of Quantum Mechanics still, in any sense, viable? Namely, is it completely ridiculous to regard the theory as follows: Every system corresponds to a Hilbert space, to each class of preparations of a system corresponds a state functional and to every class of measurement procedure there is a self-adjoint operator, and finally, a state functional evaluated at one of these self-adjoint operators yields the expected value of numerical outcomes of measurements from the class of measurement procedures, taken over the preparations represented by the state?
I am aware of Bell's inequalities and the fact that the statistical interpretation can survive in the absence of locality, and I am aware of the recent work (2012) which establishes that the psi-epistemic picture of quantum mechanics is inconsistent with quantum predictions (so the quantum state must describe an actual underlying physical state and not just information about nature). Nevertheless, I would really like a short summary of the state of the art with regard to the statistical interpretation of QM, against the agnostic (Copenhagen interpretation) of QM, at present.
Is the statistical interpretation dead, and if it isn't...where precisely does it stand?
An expert word on this from a physicist would be very, very much appreciated. Thanks, in advance.
EDIT: I have changed the word "mean" to "expected" above, and have linked to the papers that spurred this question. Note, in particular, that the basic thing in question here is whether the statistical properties prescribed by QM can be applied to an individual quantum state, or necessarily to an ensemble of preparations. As an outsider, it seems silly to attach statistical properties to an individual state, as is discussed in my first link. Does the physics community share this opinion?
EDIT: Emilio has further suggested that I replace the word "statistical" by "operational" in this question. Feel free to answer this question with such a substitution assumed (please indicate that you have done this, though).
Answer: The statistical interpretation of quantum mechanics is alive, healthy, and very robust against attacks.
The statistical interpretation is precisely that part of the foundations of quantum mechanics where all physicists agree. In the foundations, everything beyond that is controversial.
In particular, the Copenhagen interpretation implies the statistical interpretation, hence is fully compatible with it.
Whether a state can be assigned to an individual quantum system is still regarded as controversial, although nowadays people work routinely with single quantum systems. The statistical interpretation is silent about properties of single systems, one of the reasons why it can be the common denominator of all interpretations.
[Added May 2016:] Instead of interpreting expectations as a concept meaningful only for frequent repetition under similar conditions, my thermal interpretation of quantum mechanics interprets it for a single system in the following way, consistent with the practice of thermal statistical mechanics, with the Ehrenfest theorem in quantum mechanics, and with the obvious need to ascribe to particles created in the lab an approximate position even though it is not in a position eigenstate (which doesn't exist).
The basic thermal interpretation rule says:
Upon measuring a Hermitian operator $A$, the measured result will be approximately $\bar A=\langle A\rangle$ with an uncertainty at least of the order of $\sigma_A=\sqrt{\langle(A−\bar A)^2\rangle}$. If the measurement can be sufficiently often repeated (on an object with the same or sufficiently similar state) then $\sigma_A$ will be a lower bound on the standard deviation of the measurement results.
Compared to the Born rule (which follows in special cases), this completely changes the ontology: The interpretation applies now to a single system, has a good classical limit for macroscopic observables, and obviates the quantum-classical Heisenberg cut. Thus the main problems in the interpretation of quantum mechanics are neatly resolved without the need to introduce a more fundamental classical description. | {
"domain": "physics.stackexchange",
"id": 5076,
"tags": "quantum-mechanics, quantum-interpretations"
} |
Beginner clustering project, what are the input features and how do I analyze the data? | Question: I am a beginner to data science. I have this dataset on natural disaster events in Afghanistan from 2016 - 2017. Columns:
REGION (ex. North, North West, etc)
PROVINCE_NAME (kind of like US 50 states)
DISTRICT_NAME (kind of like US counties)
INCIDENT_DATE
INCIDENT_TYPE (5 types: Flood, Earthquake, Land slide, Avalanche, and Heavy Snowfall)
Persons_killed
Persons_injured
Individuals_affected
Families_affected
Houses_damaged
Houses_destroyed
I need to do any basic ML model on this dataset. I thought of predicting the disaster type given the other features using classification, or predicting the number of persons killed using regression. But I think some of these ideas are silly because they aren't useful in real life. For example, if I'm predicting Persons_killed, would I realistically have access to Persons_injured? (I don't know, if you have a good scientific question I can answer using regression etc, please let me know.)
A more meaningful experiment to try might be clustering. Since clustering is unsupervised, I'm just looking for any patterns. Does this mean I put all 13 columns into my model? I am a bit stuck on how to design this model but here is my thought process:
I have checked my dataset for missing values, typos, etc. I have done some EDA.
Do I need to encode categorical vars like REGION, PROVINCE_NAME, DISTRICT_NAME, and INCIDENT_TYPE? And what do I do with INCIDENT_DATE? Should I make a new feature called "Season", since I am not sure how to work with dates, or if I should just leave it out?
Another issue is there are 2 natural disasters that are outliers (Earthquakes), they had a very large number of Persons_killed. This was factual, so would I leave these in the dataset or remove them? Because they cause the plot to zoom out very far and then you can't see the other data.
I would then scale the data using StandardScaler (am I using all 13 columns in my model?)
I don't fully understand dimensionality reduction with PCA, so I may leave this out for now and repeat this whole experiment with applying PCA at this step.
Then I would create the model object and fit the model to the scaled data, and predict the cluster assignments. For the number of clusters, I could try a random number to start, but in class I learned about silhouette analysis and using a range of k to loop through and find the best k value.
Now I'm confused on how to analyze the clusters. I would be analyzing the characteristics of each cluster. Perhaps use mean values of features. I am not sure how I would look at this on a map to see geographic patterns.
I apologize, as a beginner this is my first project, I would appreciate ANY advice, even if you cannot answer the whole series of questions.
Answer: Let me provide some guidance on the various aspects you've mentioned:
1. Encoding Categorical Variables
Yes, you should encode categorical variables like REGION, PROVINCE_NAME, DISTRICT_NAME, and INCIDENT_TYPE. Use techniques like one-hot encoding or label encoding, depending on the nature of the variables. For INCIDENT_DATE, creating a new feature like SEASON is a good idea. You can extract this information from the date and then encode it.
2. Handling Outliers
Whether to keep or remove outliers depends on the goals of your analysis. If earthquakes are significant events in your context, and you want the model to capture their impact, keep them. However, if they are skewing the results and making it hard to visualize other patterns, you may consider creating separate models with and without these outliers for comparison.
3. Scaling Data
Yes, use StandardScaler to scale your data. Include all relevant columns (features) in your model, as they contribute to the clustering process.
4. Dimensionality Reduction with PCA
While PCA can be beneficial, it's not always necessary, especially if your dataset isn't too large. You can start without PCA and explore it later if needed. It helps in reducing the number of features while preserving the most important information.
5. Determining the Number of Clusters
Silhouette analysis is a good approach. You can also try the elbow method, which involves plotting the explained variance as a function of the number of clusters and selecting the "elbow" point where the improvement slows down.
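Pulling the encoding, scaling, and model-selection steps together, here is a minimal end-to-end sketch; the column names and values are made up stand-ins for your dataset, and pandas/scikit-learn are assumed available:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Tiny made-up stand-in for the real dataset (hypothetical values).
df = pd.DataFrame({
    "REGION": ["North", "North West", "North", "East", "East", "South"],
    "INCIDENT_TYPE": ["Flood", "Avalanche", "Earthquake", "Flood", "Flood", "Land slide"],
    "Persons_killed": [3, 10, 120, 0, 2, 7],
    "Houses_damaged": [12, 4, 300, 7, 9, 15],
})

# One-hot encode the categorical columns, then scale everything.
X = pd.get_dummies(df, columns=["REGION", "INCIDENT_TYPE"])
X_scaled = StandardScaler().fit_transform(X)

# Loop over candidate k and keep the silhouette-best value.
best_k, best_score = None, -1.0
for k in range(2, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_scaled)
    score = silhouette_score(X_scaled, labels)
    if score > best_score:
        best_k, best_score = k, score
print(best_k, round(best_score, 3))
```

With the real data you would widen the k range, and afterwards inspect `df.groupby(labels).mean()` to characterize each cluster, as described below.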
6. Analyzing Clusters
Once you have your clusters, analyze them by examining the mean values of features within each cluster. To visualize geographic patterns, consider plotting the clusters on a map using geospatial visualization tools. This can help you understand if certain regions or provinces are more prone to specific types of natural disasters. | {
"domain": "datascience.stackexchange",
"id": 12030,
"tags": "machine-learning, clustering, dimensionality-reduction, encoding, beginner"
} |
Correlated errors in Stim | Question: In Stim, I want to simulate correlated errors having the following behavior:
R 0 1
H 0
ERROR_1(p) 0
CX 0 1
ERROR_2(conditioned on ERROR_1) 1
...
The ERROR_1 and ERROR_2 compose a correlated error. ERROR_1 happens with probability p and ERROR_2 will happen if and only if ERROR_1 happens.
The current CORRELATED_ERROR in Stim seems not to support this type of error. How can I achieve this, or did I miss anything?
Answer: Stim doesn't have good facilities for time-correlated errors. For your case you can use an ancilla qubit as a hacky error storage bit:
R 0 1
H 0
# Start fancy error
R 999 # clear ancilla qubit
X_ERROR(p) 999 # store error on ancilla qubit
CX 999 0 # spread error from ancilla qubit to targets
CY 999 1 # spread error from ancilla qubit to targets
CZ 999 2 # spread error from ancilla qubit to targets
CX 0 1
# Continue fancy error
CX 999 1 # spread error from ancilla qubit to more targets
CY 999 2 # spread error from ancilla qubit to more targets
CZ 999 3 # spread error from ancilla qubit to more targets
The main annoyance is that code working with the circuit often needs to understand that the CNOT gates touching the ancilla qubits aren't intended as physical gates a machine would do but rather just implementation details of the noise model.
For more complex cases you can implement your error model in python and run it alongside a stim.FlipSimulator, injecting the errors as the simulation runs. This still requires all errors to be probabilistic Pauli flips, but since they're generated by python you can relate them in essentially any way you want.
For even more complex cases you can run alongside a stim.TableauSimulator, but resorting to that is a major 1000x performance reduction in sample rate. | {
"domain": "quantumcomputing.stackexchange",
"id": 5431,
"tags": "stim"
} |
Does the amount of energy released from burning of fossil fuels have a measurable impact on global warming? | Question: I understand that the main issue of global warming is greenhouse gases that trap solar energy instead of allowing it to bounce back into space. That being said, I've always had this impression or idea that the amount of energy released from burning fossil fuels should also have an effect. Consider from antiquity to about 1600 AD, the sum of heat being dumped into the atmosphere came from the sun. Now, after the industrial revolution, a new source has been added, that being human activity (burning coal, burning petrol, burning natural gas, thermonuclear weapons, nuclear reactors). While I suspect human activity has no measurable comparison against the output of the sun, I'm wondering if it has an impact on global warming to any degree.
Answer: Yes, it does add to global warming.
No, it's not currently measurable.
Human primary non-renewable energy consumption is about 15TW - and pretty much all of that goes into low-grade heat in the ocean surface and the atmosphere.
Expressed in the same terms as the forcing units of global warming, the forcing effect of that heat is about 1.7% of the effect of the anthropogenic release of $\ce{CO2}$ to date.
So we can calculate it, but it's not directly measurable as a change to global energy content: it matters, but it's not quantifiable from temperature observations, given current data and techniques.
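That 1.7% figure can be reproduced with back-of-the-envelope arithmetic; the CO2 forcing value below is an assumed round number for illustration, not taken from the answer:

```python
waste_heat = 15e12          # W, human primary non-renewable energy consumption
earth_area = 5.1e14         # m^2, Earth's surface area

heat_forcing = waste_heat / earth_area          # W/m^2 of direct waste heat
co2_forcing = 1.7                               # W/m^2, assumed rough anthropogenic CO2 forcing

print(f"{heat_forcing:.3f} W/m^2  ->  "
      f"{100 * heat_forcing / co2_forcing:.1f}% of CO2 forcing")
```

Spreading 15 TW over the whole planet gives only about 0.03 W/m^2, a couple of percent of the greenhouse forcing, which is why it matters in principle but is lost in the noise of temperature observations.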
(Kudos to Pont for the link to the paper) | {
"domain": "earthscience.stackexchange",
"id": 805,
"tags": "climate-change, fossil-fuel"
} |
Global reactivity parameters from open shell DFT calculations using Koopman's theorem | Question: Koopmans' theorem is a useful approach to calculate the global reactivity parameters from the HOMO-LUMO energies. My question is, does it apply to the open-shell systems where we get two sets of singly occupied (alpha and beta) HOMO-LUMO energies?
I am working on rare earth systems. Rare earth (RE) elements generally show +3 oxidation states which result in open shell configurations for $\ce{Ce^3+}$, $\ce{Eu^3+}$, $\ce{Gd^3+}$ etc. I am performing unrestricted open shell DFT calculations for these systems, as restricted open shell calculations for RE systems require larger computational time and cost. In unrestricted open shell calculations, we get two sets of alpha and beta HOMO and LUMO. My question is, can I apply Koopmans' theorem here to calculate the global reactivity parameters?
Could anyone help with some literature on the reactivity parameters for open shell systems? Thank you!
Answer: You (probably) cannot apply Koopmans' theorem. (Please also note that it is named after Tjalling Charles Koopmans, the s at the end is part of the name.)
Open shell systems are usually hard to describe, and unrestricted density functional approximations might not even describe the system correctly in a qualitative sense. The problem usually comes down to the multireference character of the partially occupied orbitals. In these cases you need a multireference wave function to correctly describe the ground state, e.g. CASSCF methods. In a first-order approximation, unrestricted methods can probably yield reasonable results, because (in a practical sense) they use more than one determinant. However, when you use these methods, there will be fractional occupancy of multiple orbitals, which makes the approximations of Koopmans' theorem break down. I would suggest a thorough literature review on the best practices for such systems. In short, HOMO and LUMO are not defined in these systems.
"domain": "chemistry.stackexchange",
"id": 12604,
"tags": "quantum-chemistry, density-functional-theory"
} |
Why can the alphabet be represented in numbers in base 256 | Question: This is in the context of hashing of strings. I'm not sure why a string like CS could be represented as CS = 'c'*256 + 's'
Does anyone know about this?
Answer: A number in base ten is just a sequence of digits 0–9, with the string $d_n\dots d_2 d_1 d_0$ representing the number $10^nd_n + \dots + 10^2d_2 + 10^1d_1 + 10^0d_0$. Similarly, a character in an 8-bit character set can be considered to be a "digit" between 0 and 255, so a sequence $d_n\dots d_1d_0$ of these "digits" represents the number $256^nd_n + \dots + 256^2d_2 + 256^1d_1 + 256^0d_0$.
Another way to see this is to write the number out in binary. For example, a 32-bit binary number can be considered as having 32 binary digits (bits) or 4 256-ary digits by collecting the bits into groups of 8. In the same way, an 8-digit decimal number can be considered as a 4-digit number in base-100. For example
$$38572856$$
can be considered as
$$38\,57\,28\,56 = 38\times 100^3 + 57\times 100^2 + 28\times 100^1 + 56\times 100^0 = 38572856\,.$$
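The same positional evaluation works in code; here is a small Python sketch (the function name is my own):

```python
def base256_value(s):
    """Evaluate the characters of s as digits of a base-256 number."""
    n = 0
    for byte in s.encode("ascii"):
        n = n * 256 + byte  # shift previous digits up one place, then add the new digit
    return n

# "CS" becomes ord('C')*256 + ord('S') = 67*256 + 83 = 17235
```

This is exactly what `int.from_bytes(s.encode("ascii"), "big")` computes, since a byte string is just a base-256 numeral.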
For the Latin alphabet, base 26 would be the most natural representation, since the letters can be treated as 26 different "digits". You may also have heard of Base-64 encoding, which uses the 64 characters A...Za...z0...9+/ in that order as 64 "digits". The advantage there is that all 64 of the characters can easily be included in a text file and, by using Base-64, you can code three bytes of binary data (24 bits) into four characters (4x6 bits), which is acceptably inefficient. | {
"domain": "cs.stackexchange",
"id": 3504,
"tags": "hash"
} |
Why does the refractory period of neurons only allow signals to pass in one direction? | Question: My textbook states that the advantages of the refractory period is that it means that action potentials are discrete and also that it results in signals only being able to pass one way, but provides no explanation. Could anyone explain why this is?
Answer: The axon is a uniformly excitable structure; if you were to stimulate an axon electrically somewhere in the middle, an action potential would be generated in both directions. Hence, an axon in itself does not have directionality.
Nonetheless, under normal physiological conditions, an axon conveys action potentials from the dendritic region to the axon terminal, called anterograde signaling. The reverse direction, retrograde action potentials, normally do not occur because signals arising in the dendritic region travel unidirectionally to the terminal. The reason why an action potential travels unidirectionally is because of the refractory period. Because the refractory period will cause the part of the axon that just generated an action potential to become unresponsive, the traveling action potential cannot generate another action potential in the retrograde direction, because the only excitable region available is in the anterograde direction to the terminal (Fig. 1).
Fig. 1. Refractoriness. source: University of British Columbia
As an analogy, it's like a car driving across a road and throwing a temporary road block behind it that first has to be taken away before the road can be used again. This means that when a car leaves the parking lot and enters the road, it can never go back, because it throws up road blocks that prevent it from returning the way it came. | {
"domain": "biology.stackexchange",
"id": 6676,
"tags": "neuroscience, neurophysiology"
} |
Show that if $d(n)$ is $O(f(n))$, then $ad(n)$ is $O(f(n))$, for any constant $a > 0$? | Question: Show that if $d(n)$ is $O(f(n))$, then $ad(n)$ is $O(f(n))$, for any constant $a > 0$?
Does this need to be shown through induction or is it sufficient to say:
Let $d(n) = n$ which is $O(f(n))$.
Therefore $ad(n) = an$ which is trivially $O(f(n))$
Answer: No, it is not sufficient to say "let $d(n) = n$ which is $O(f(n))$. Therefore $ad(n) = an$ which is trivially $O(f(n))$". Although that is a reasonable way to understand the proposition quickly, it is neither sufficient nor necessary. It cannot be considered as a proof. It can easily lead to misunderstanding if communicated.
To show that "if $d(n)$ is $O(f(n))$, then $ad(n)$ is $O(f(n))$, for any constant $a > 0$", let us apply the relevant definitions.
$$\begin{align*}
d(n)\text{ is }O(f(n))
&\Longrightarrow \limsup_{n\to\infty}\dfrac{|d(n)|}{f(n)} <\infty\\
\left(\text{since } \limsup_{n\to\infty}\dfrac{a|d(n)|}{f(n)}=a\limsup_{n\to\infty}\dfrac{|d(n)|}{f(n)}\right) &\Longrightarrow\limsup_{n\to\infty}\dfrac{a|d(n)|}{f(n)} <\infty\\
&\Longrightarrow\limsup_{n\to\infty}\dfrac{|ad(n)|}{f(n)} <\infty\\ &\Longrightarrow ad(n)\text{ is }O(f(n)).\\
\end{align*}$$
The proof above is rigorous, although it is hardly the way we as humans understand the proposition. Here is another approach.
$$\begin{align*}
d(n)\text{ is }O(f(n))
&\Longrightarrow |d(n)|\text{ is bounded above by } cf(n)\text{ when $n$ is large enough for some constant } c\\
&\Longrightarrow |ad(n)|\text{ is bounded above by } acf(n)\text{ when $n$ is large enough for some constant } c\\
(\text{let } c'=ac)\ \ &\Longrightarrow |ad(n)|\text{ is bounded above by } c'f(n)\text{ when $n$ is large enough for some constant } c'\\
&\Longrightarrow ad(n)\text{ is }O(f(n)).\\
\end{align*}$$
The approach above can be considered as a proof among people who are familiar with the stuff. It is probably the way to understand the proposition as well. You can imagine that the graph of $cf(n)$ lies above the graph of $d(n)$, and, hence, the graph of $acf(n)$ lies above the graph of $ad(n)$. | {
"domain": "cs.stackexchange",
"id": 16352,
"tags": "asymptotics"
} |
Discrete or continuous Kalman filter? | Question: I have position and acceleration measurements and I would like to apply a Kalman filter to estimate the velocity of the system.
I am not sure yet how to proceed, but I will check the already answered questions on this website (like Estimating velocity from known position and acceleration, Kalman filter with accelerometer with DC offset and Applying Kalman filter to a data set).
Once I have understood how to proceed, I will implement it using Matlab. There I saw there are 2 different types of Kalman filter: discrete and continuous. What is the difference? I mean, what is the difference between working with one or the other?
EDIT to be more clear, I am referring to function kalman and kalmd
Since I work with a set of data, should I use the discrete one?
Answer: I don't have experience with continuous time Kalman filters. However, from your description it sounds like you are making measurements of position and acceleration over time. If these measurements are sampled, meaning you have individual measurements associated with time, you should be using the discrete time Kalman filter. I feel comfortable broadly stating that if you are trying to implement the filter on a PC (Matlab), microcontroller, FPGA, or DSP based on a set of measurements, you are implementing the discrete time filter.
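Since you will be working on sampled data anyway, a minimal discrete-time sketch may help fix ideas. This is my own Python/NumPy illustration, not Matlab's kalman: the constant-velocity model, the function name, and the noise values q and r are all assumptions.

```python
import numpy as np

def kf_velocity(zs_pos, accs, dt, q=1e-3, r=0.5):
    """Discrete Kalman filter estimating [position, velocity] from sampled
    position measurements, treating sampled acceleration as a known control
    input. q and r are assumed process/measurement noise levels."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition over one sample
    B = np.array([0.5 * dt**2, dt])        # how acceleration enters the state
    H = np.array([[1.0, 0.0]])             # we measure position only
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance
    x = np.zeros(2)                        # state estimate
    P = np.eye(2)                          # estimate covariance
    vels = []
    for z, a in zip(zs_pos, accs):
        # predict step
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # update step with the position measurement
        y = z - H @ x                      # innovation
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        vels.append(x[1])
    return np.array(vels)
```

Each loop iteration is one sample period: predict with the model, then correct with the new position measurement; the velocity estimate is the second state component.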
EDIT
Based on your edits, I more clearly understand your question. I don't have experience with Matlab's built-in kalman filter functions but a quick read of the comments in kalmd seem to indicate to me you want to use kalman and not kalmd. Below is a snippet pasted from kalmd that should make it clear.
% The LTI system SYS specifies the plant data (A,[B G],C,[D 0]).
% The continuous plant and covariance matrices (Q,R) are first
% discretized using the sample time Ts and zero-order hold
% approximation, and the discrete Kalman estimator for the
% resulting discrete plant is then calculated with KALMAN.
This text tells me the function is expecting a continuous function which it then evaluates/discretizes, and then calls the kalman function to create a discrete filter. From the sounds of it you already have discrete data (measurements over time). Hence kalman is what you want to use. | {
"domain": "dsp.stackexchange",
"id": 2710,
"tags": "matlab, kalman-filters"
} |
Why do all the atoms of a radioactive substance not decay at the same time? | Question: Why does the substance decay at a rate which is proportional to the amount of the substance at that moment?
As all atoms are in a hurry to become stable atoms, and as their decay does not depend on any external factors (like pressure, or the decay of neighbouring atoms), they should all decay in a moment into stable atoms and the whole substance should become a stable substance.
Answer: The decay phenomenon is a purely quantum mechanical property. This problem is equivalent to a particle in a finite potential well, with a lower potential state available outside the well. Classically, if the energy of the particle in the well is lower than the potential barrier, it will never get to the lower state. By quantum mechanics, the particle can tunnel through the barrier to the lower state, but its chance to accomplish this is very, very low (in actual situations). Also, its probability to tunnel is independent of the state of other particles, and the particle's previous attempts to tunnel don't change its probability of accomplishing the next tunneling (no memory). Now, using some mathematical tools from probability theory, you can prove that the probability for each particle to tunnel has an exponential probability density, and so the collective behaviour of many particles (which you asked about) follows that. | {
"domain": "physics.stackexchange",
"id": 22042,
"tags": "quantum-mechanics, nuclear-physics, probability, randomness"
} |
Is it possible to use more than one client library in the same workspace? | Question:
I purchased a book that uses the python client library on ROS Indigo on Ubuntu 14.04.
I would like to follow the code examples in the book and test them in my catkin workspace; then I would like to convert the python code examples to the Julia client library and test them in Julia.
Should I/can I have the python code files and the Julia code files in the same catkin workspace, or should I have the different client libraries in separate catkin workspaces?
Originally posted by TurtleBot_Fan on ROS Answers with karma: 287 on 2016-03-23
Post score: 0
Answer:
Should I/can I have the python code files and the Julia code files in the same catkin workspace, or should I have the different client libraries in separate catkin workspaces?
There is no need for a different workspace. A workspace is nothing magical, it is essentially just a directory that contains a bunch of sub directories that happen to follow a specific layout and have a few signature files in them (package.xml, CMakeLists.txt, etc). Because of those files (and their location), tools like catkin can provide a few convenient services, but all of that can be done manually as well.
What you do in those directories is up to you.
You do need to keep in mind the ROS naming conventions and make sure your packages are 'good citizens' of your workspace (no two packages with the same name, for instance).
Originally posted by gvdhoorn with karma: 86574 on 2016-03-23
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 24221,
"tags": "python"
} |
Why are all-Sky images drawn as a filled ellipse? | Question: There is some convention? how is this 3d to 2d mapping done?
Here is an example.
Answer: The ellipse is a particular way to draw the surface of a sphere - like the sphere of the skies around us, or the surface of the Earth - on a flat piece of paper or screen (because of the curvature of the sphere, it cannot be "flattened" without distortions). This one is called the Mollweide projection and it preserves the areas (and completely sacrifices correct representation of the angles):
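Concretely, the forward mapping can be sketched in a few lines (my own Python illustration of the standard construction, not taken from any plotting library):

```python
import math

def mollweide(lon, lat, R=1.0):
    """Forward Mollweide projection; lon, lat in radians.

    Solves 2t + sin(2t) = pi*sin(lat) for the auxiliary angle t by
    Newton's method, then maps onto an ellipse with semi-axes
    2*sqrt(2)*R (x) and sqrt(2)*R (y).
    """
    t = lat
    for _ in range(100):
        f = 2 * t + math.sin(2 * t) - math.pi * math.sin(lat)
        fp = 2 + 2 * math.cos(2 * t)
        if abs(fp) < 1e-12:          # at the poles the derivative vanishes
            break
        t_next = t - f / fp
        if abs(t_next - t) < 1e-12:
            t = t_next
            break
        t = t_next
    x = R * (2 * math.sqrt(2) / math.pi) * lon * math.cos(t)
    y = R * math.sqrt(2) * math.sin(t)
    return x, y
```

The equal-area property shows up in the totals: the full ellipse has area pi * (2*sqrt(2)*R) * (sqrt(2)*R) = 4*pi*R^2, the same as the sphere.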
http://en.wikipedia.org/wiki/Mollweide_projection
http://en.wikipedia.org/wiki/Map_projection
The second link enumerates many other ways to draw the spherical surface. | {
"domain": "physics.stackexchange",
"id": 596,
"tags": "astronomy"
} |
Homogeneity of space doubts | Question: This question might have been asked so many times, but here we go again. I'm wondering what homogeneity of space means. All the descriptions say:
there's no special point in space, every point looks the same. or
laws of physics are the same.
Can we explain more what "laws of physics are the same" means? I'm first wondering what this means for a free particle, and then for a particle moving while a force acts on it. Maybe good examples would clarify the situation for me in both cases (free and non-free particle). Is it that the equations of motion calculated at $x=2$ and $x=10$ are the same? Hopefully, at some point, you might also explain why an inertial frame is homogeneous and a non-inertial frame inhomogeneous? In a non-inertial frame, would the equations of motion be different at $x=2$ and $x=10$? Why?
EXTRA
Why do we call the space of a non-inertial frame inhomogeneous? Imagine a ball put on a huge table in a uniformly accelerating train car. As the train accelerates, the ball moves backwards. So can you tell me how the different points that the ball passes on the table have different physics laws?
Answer: Set up an experiment. Find the result.
Move the experiment somewhere else and run it again. You will get the same result.
To do this, you must move all the important parts of the experiment. For example, if you drop a rock on earth, it falls. If you move the experiment out into space, it just floats. To do it right, you would have to move the earth too.
The results would not be exactly the same because gravity from the moon and the sun have a small effect. So you really need to move them too. And if you really get precise, everything in the universe has a small effect. So you need to move the whole universe. And if you do that, how do you know you have moved anything? It gets confusing.
But if you just move the earth and the thing you drop, you would find that nothing about space itself is different in the two places. The difficulty is all in setting up two identical experiments.
If Alice is in a rocket in space with the engine pushing her, she is in an accelerated frame of reference. If she drops a rock, the rock falls toward the back of the rocket.
Alice is using a frame of reference where things are motionless if they keep up with the rocket. She sees herself as motionless and the rock as accelerated.
An inertial frame of reference is one with no forces on it. If Bob is floating in space far from earth, a dropped rock does not move. In this frame, $F=ma$. If he ties a string to the rock and pulls on it, the rock accelerates.
Bob can make sense out of the rocket. He sees Alice and the rock accelerating together. When Alice drops the rock, it is left behind. Alice accelerates ahead of it. You can see how it looks to her like the rock is accelerating toward the back.
To Bob, space is isotropic. Bob can face any direction and get the same result from his experiment. Space has no special direction.
Alice does see a special direction. She has to exert a force toward the front of the rocket to hold the rock motionless. If she stops and lets $F=0$, the rock accelerates toward the back.
Alice finds that space is homogeneous. She gets the same result if she moves to a different place in the rocket and drops the rock. This sounds like splitting hairs. But it is sometimes useful to distinguish between isotropic and homogeneous.
Alice wants to do physics too. She wants to use a law like $F=ma$. But $F=ma$ applies in inertial frames, and Alice is ignoring a force that pushes her and everything she sees forward. To make it work, she has to play a trick. She says a fictitious force is pushing everything back. She adds this force to the force she exerts on the rock. They cancel and the total force is $0$. Now $F=ma$ works.
One of Einstein's great insights was that gravity is a fictitious force just like a rocket engine. This is called the equivalence principle. This is the basis of General Relativity, which is a theory of gravity.
Bob cannot do any experiment that tells him whether his rocket is floating in space or falling off a cliff on Earth.
Alice cannot do any experiment that tells her whether her rocket engine is pushing her forward or if the rocket is sitting on earth with the engine off.
This is a bit confusing, because Bob or Alice can just look out a window. But that doesn't count. The idea is that for the space inside the room, the laws of physics are the same whether the acceleration comes from gravity or a rocket engine.
So the surface of the Earth is an accelerated frame of reference.
Space near the earth is not homogeneous or isotropic.
If you do not move very far, space is almost the same. But if you go up, the force of gravity gets weaker. If you go to the other side of the world, the special direction changes.
General relativity is not obvious. It might not have been the simplest example of how space can be inhomogeneous in a non-inertial frame of reference.
It took 300 years to get from the discovery of Newton's laws to the discovery of General relativity. For those 300 years, people have been treating space around earth as homogeneous and isotropic. Gravity has been treated as a real force. This all works just fine in ordinary circumstances. You should think this way too.
General relativity is useful because it explains how mass causes gravity. It explains tiny differences between predictions of Newton's laws and experimental results. Starlight is deflected a tiny fraction of a degree as it passes near the surface of the sun. The orbit of mercury is almost but not quite an ellipse. Time runs very slightly slower on earth than in orbit above. It explains effects produced by extremely strong gravity, such as black holes and gravitational waves. | {
"domain": "physics.stackexchange",
"id": 97377,
"tags": "newtonian-mechanics, classical-mechanics, spacetime, symmetry, inertial-frames"
} |
Why are the two accelerations considered as a1 and a2 here, and how do I find the acceleration of the 6 kg mass? | Question: Why are the two accelerations considered as a1 and a2 here? What is the reason for this even though the tension in both strings is the same?
T is equal to 5g cos 53 for the left side and 5g cos 37 for the right side
Answer: As posed, you have three unknowns - $a1$, $a2$, and $a3$. The tension on the cable is everywhere uniform, but the accelerations are different because the static force components affecting the tension are different.
$F=ma$ at each mass. So for mass $m1$, the tension is
$m1*(g*cos53 +a1)$. Likewise
$m2*(g*cos37 + a2)$ and
$1/2*m3*(g*cos0 +a3)$.
These are all equal. And the length of the line doesn't change.
You can assume that you start from a stationary condition (all velocities are initially zero) - that doesn't change the accelerations. You can choose any initial length that like - that doesn't change the accelerations either.
So $m1(g*cos53 + a1) = 1/2 m3(g*cos0 + a3) = m2(g*cos37 +a2) $ from equal tension.
And $a1 = 2*a3 - a2$ from constant length.
$ 5 kg (0.600g + a1) = 3 kg (1g + a3) = 5 kg ( 0.799g + a2)$
$0.01 kg + 5 kg * a1/g = 3 kg * a3/g = 0.993 kg + 5 kg * a2/g$
$0.0033 + 1.666 a1/g = a3/g = 0.331 + 1.666 a2/g$
$0.0033 + 1.666 (2a3 - a2)/g = a3/g$
$0.0033 - 1.666 a2/g = -2.332 a3/g$
$-0.0014 + 0.7144 a2/g = a3/g $
$-0.0014 + .7144 a2/g = 0.331 + 1.666 a2/g$
$-0.3324 = .9516 a2/g$
$a2 = -0.349 g$
$a3 = -0.2509 g$
$a1 = -0.1529 g$ | {
"domain": "engineering.stackexchange",
"id": 3776,
"tags": "applied-mechanics, acceleration"
} |
HamCycle to HamPath reduction | Question: I've seen a reduction that's done by adding another vertex to the graph and creating a path through that vertex.
Why do I need to add a vertex? Can't I just remove an edge? Let's say the graph with the HamCycle is G,s,t; when removing the edge between s and t, don't I get a path that goes through all the vertices, which qualifies as a Hamiltonian path?
Answer: First of all, I think that the direction of the reduction in your question is from Hampath to Hamcycle (you prove the NP-hardness of Hamcycle by reducing the NP-complete problem Hampath to it).
Now given an instance of Hampath: $G,s,t$:
1) there is an edge between $s$ and $t$
2) there is not an edge between $s$ and $t$
In both cases the extra node $u$ is needed in order to force $s,u,t$ to be consecutive nodes in the Hamiltonian cycle.
The new graph $G'$ with a new node $u$ and new edges $(s,u), (u,t)$ (in the first case you split the original edge) will have a Hamiltonian cycle iff the original graph $G$ has a Hamiltonian path from $s$ to $t$.
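In code, the construction is a one-liner over an edge set (a sketch; the fresh-node label is arbitrary and assumed not to collide with existing vertices):

```python
def hampath_to_hamcycle(edges, s, t, u="u_new"):
    """Reduce a HamPath instance (G, s, t) to a HamCycle instance G'.

    G' adds a fresh node u (assumed not already a vertex of G) together
    with the edges (s, u) and (u, t): G has a Hamiltonian s-t path
    iff G' has a Hamiltonian cycle through u.
    """
    return set(edges) | {(s, u), (u, t)}
```

For example, the path 1-2-3 in {(1,2),(2,3)} closes into the cycle 1-2-3-u-1 in G'.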
You can "play" with these two graphs: | {
"domain": "cs.stackexchange",
"id": 1019,
"tags": "complexity-theory, np-complete, reductions"
} |
Difference between Ridge and Linear Regression | Question: From what I have understood, the Ridge Regression is just having the loss function for an optimization problem with the addition of the regularization term (L2 Norm in the case of Ridge). However I am not sure if the loss function can be described by a non-linear function or it needs to be linear. In this case, if the loss functions needs to be linear, then from what I understand the Ridge regression, is simply performing Linear regression with the addition of the L2-Norm for regularization. Please correct me if I am wrong.
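To make my reading concrete, here is a small sketch of the two objectives as I understand them (my own NumPy illustration, not library code):

```python
import numpy as np

def linreg_loss(w, X, y):
    """Ordinary least-squares loss: squared residuals only."""
    return np.sum((X @ w - y) ** 2)

def ridge_loss(w, X, y, alpha):
    """The same loss plus an additive L2 penalty on the weights."""
    return linreg_loss(w, X, y) + alpha * np.sum(w ** 2)
```

With alpha = 0 the two losses coincide; the model X @ w stays linear in either case, and only the training objective changes.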
Answer: Introduction to Statistical Learning (page 261) gives some instructive details. The linear regression loss function is simply augmented by a penalty term in an additive way. | {
"domain": "datascience.stackexchange",
"id": 7159,
"tags": "regression, linear-regression"
} |
Error when running launch file Marker.launch | Question:
I study book Programming Robots with ROS Morgan Quigley, Brian Gerkey & William D. Smart
I am trying the launch file markers.launch from the stuckroom_bot package of chapter 14.
In this chapter the robot is fetchbot and the ROS version is Indigo.
markers.launch
When I launch this file, this error message appears:
... logging to /home/ho/.ros/log/da9b7b56-1299-11e7-83ee-000c29c6431a/roslaunch-ubuntu-51819.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
Invalid roslaunch XML syntax: mismatched tag: line 38, column 2
The traceback for the exception was written to the log file
--
Originally posted by deokisys on ROS Answers with karma: 5 on 2017-03-26
Post score: 0
Answer:
The tags for the tag_rot and tag_trans don't have a / at the end: ... default="0 -0.28 -0.1 0 0 0" />
Originally posted by nlamprian with karma: 366 on 2017-03-27
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 27433,
"tags": "ros, roslaunch, ros-indigo"
} |
Do technological developments terminate the evolution of human species? | Question: One of the most agreed upon mechanisms for evolution is natural selection. Changing environmental conditions necessitates development of variations that enable the survival of that particular species. These genetically passed down variations later becomes the adaptive features which further result in the development of new species.
But with the development of technology we are finding newer ways to cope up with changes in the environment. Does this impact human evolution?
Answer: "Natural selection" is a somewhat misleading term. Evolution does not need "natural" selection to occur; it only needs selection. Even the term "selection" is a bit misleading because it's often thought of as referring to the death of individuals or, somewhat more accurately, as reduced likelihood of producing offspring, due to lower fitness.
In fact, any process that gives individuals with particular genetically determined features a reproductive advantage in a given ecological niche will drive evolution among the sub-population within that niche.
Addressing your question more directly: Suppose technology leads to people whose keyboarding talents are high having more children. If so, it will tend to drive evolution among the sub-population having access to keyboards toward genotypes having higher keyboarding talents. But if having keyboarding talents results in those individuals having less likelihood of producing children, evolution in that sub-population will be driven in the opposite direction.
IF it could be said that technology in general decouples human reproduction from "natural" influences like disease, resource availability, climate, etc., it could then be said that evolution will be less driven by those influences and more driven by, e.g., the cultural tendencies of particular sub-populations to produce more children. But that would be a gross over-simplification.
It is almost certain that changes in technology have had significant influence on human evolution. Agriculture, clothing, tool use, etc., have all had long-term consequences in human evolution.
Human culture is very complex, and separating out all the effects of technology on differential reproduction rates among sub-populations would be extremely difficult; but it's almost certain that there are such effects. | {
"domain": "biology.stackexchange",
"id": 10345,
"tags": "evolution, natural-selection, human-evolution"
} |
The irrelevancy of irrelevant couplings in the Wilsonian RG | Question: I have a few questions related to irrelevant couplings in the Wilsonian approach to the renormalization group (RG).
What is so great about RG theory is that one can trade the 'real physics', the one valid at any energies or scales, for an effective description at the scale of interests. In particular, it is sufficient to consider an action including
$$S= (\text{kinetic term}) \,+ \,(\text{relevant interactions})\,, $$
while all irrelevant interactions can be set to zero.
I have difficulties accepting that last point as the irrelevant quantities only vanish at the fixed point. Why can you simply neglect these irrelevant couplings altogether?
Why is an RG-flow so 'boring', i.e. why does one never encounter limit points, bifurcations, etc.? Do there exist "physical" examples, in contrast to toy examples, showing limit cycle behavior? If so, what is the role of the irrelevant operators? Surely these cannot be taken to vanish?
Answer: There's really two questions here.
1) Why can we ignore the irrelevant interactions?
In general, you can't. If you define your theory with a finite cutoff -- for instance, using a lattice -- you can have non-negligible interactions which are nonetheless classified as 'irrelevant'. These interactions only become negligible as the lattice shrinks to nothing, i.e., as you approach the IR fixed point.
2) Why is RG flow boring?
In 2d continuum theories, it's because of the c-theorem. In almost all 2d QFTs (IIRC, you need a stress-energy tensor), you can define a quantity, the coefficient $c$ of the conformal anomaly, which always decreases under renormalization flow. This makes limit cycles impossible; you can never get back to where you started.
In 4d, an analogous theorem (a "physics theorem", but the proof will carry over to any rigorous setting mathematicians cook up) was proven in 2011 by Komargodski & Schwimmer. | {
"domain": "physics.stackexchange",
"id": 41262,
"tags": "quantum-field-theory, condensed-matter, renormalization, effective-field-theory"
} |
Interpreter programming challenge | Question: I have posted here my working and accepted solution to the interpreter programming challenge (detailed here) for your review. The challenge is as follows:
A certain computer has 10 registers and 1000 words of RAM. Each register or RAM location holds a 3-digit integer between 0 and 999. Instructions are encoded as 3-digit integers and stored in RAM.
The encodings are as follows:
100 means halt
2dn means set register d to n (between 0 and 9)
3dn means add n to register d
4dn means multiply register d by n
5ds means set register d to the value of register s
6ds means add the value of register s to register d
7ds means multiply register d by the value of register s
8da means set register d to the value in RAM whose address is in register a
9sa means set the value in RAM whose address is in register a to the value of register s
0ds means goto the location in register d unless register s contains 0
All registers initially contain 000. The initial content of the RAM is read from standard input. The first instruction to be executed is at RAM address 0. All results are reduced modulo 1000.
Input
The input begins with a single positive integer on a line by itself indicating the number of the cases following, each of them as described below. This line is followed by a blank line, and there is also a
blank line between two consecutive inputs.
The input to your program consists of up to 1000 3-digit unsigned integers, representing the contents of consecutive RAM locations starting at 0. Unspecified RAM locations are initialized to 000.
Output
For each test case, the output must follow the description below. The outputs of two consecutive cases will be separated by a blank line.
The output from your program is a single integer: the number of instructions executed up to and including the halt instruction. You may assume that the program does halt.
SampleInput
1
299
492
495
399
492
495
399
283
279
689
078
100
000
000
000
SampleOutput
16
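As a cross-check on the semantics above, here is a compact reference interpreter of the same machine (my own Python sketch, independent of the C++ solution below); on the sample program it reports 16:

```python
def run(program):
    """Execute the 3-digit machine; return the number of instructions
    executed up to and including the halt instruction."""
    ram = [0] * 1000
    ram[:len(program)] = program
    reg = [0] * 10
    pc = count = 0
    while True:
        op, d, s = ram[pc] // 100, ram[pc] // 10 % 10, ram[pc] % 10
        count += 1
        if ram[pc] == 100:                   # halt
            return count
        if op == 2:   reg[d] = s             # set register d to n
        elif op == 3: reg[d] = (reg[d] + s) % 1000
        elif op == 4: reg[d] = (reg[d] * s) % 1000
        elif op == 5: reg[d] = reg[s]
        elif op == 6: reg[d] = (reg[d] + reg[s]) % 1000
        elif op == 7: reg[d] = (reg[d] * reg[s]) % 1000
        elif op == 8: reg[d] = ram[reg[s]]   # load from RAM at address reg[a]
        elif op == 9: ram[reg[s]] = reg[d]   # store reg[s-digit] to RAM at reg[a]
        elif op == 0 and reg[s] != 0:        # goto reg[d] unless reg[s] == 0
            pc = reg[d]
            continue
        pc += 1
```

Unlisted codes (e.g. 1xx other than 100) are treated as no-ops here, since the problem statement leaves them undefined.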
The program recognizes the required commands:
#include <iostream>
#include <vector>
#include <sstream>
#include <fstream>
//platform specific code.
#ifdef WINDOWS
#include "stdafx.h"
#endif
using std::cin;
using std::cout;
using std::vector;
using std::getline;
using std::string;
using std::istringstream;
using std::endl;
using std::stoi;
static void execute_case();
static void skip_blank_lines();
static void fill_ram();
static void halt();
static void set_register(int register_address, int value);
static void increment_register(int register_address, int value);
static void multiply_register(int register_address, int value);
static void copy_register(int destination_address, int source_address);
static void sum_registers(int destination_address, int value_address);
static void multiply_registers(int destination_address, int value_address);
static void copy_ram_to_register(int destination_register, int source_ram);
static void set_ram(int destination_ram, int source_register);
static void jump_to(int register_location, int register_sentinel);
static void initialize_memory();
int number_of_instructions = 0;
int line_number = 0;
const int N_RAM_WORDS = 1000; //Number of words of ram our interpreter can store.
const int N_REGISTERS = 10; //Number of registers.
const int MAX_VALUE = 1000; //All values in registers and ram must be less than this.
const int DEFAULT_VALUE = 0; //default values for all registers and ram words.
int n_cases; //The number of input cases in input
vector<int> registers(N_REGISTERS, DEFAULT_VALUE); //each element registers[i] stores the value of register i.
vector<int> ram_words(N_RAM_WORDS, DEFAULT_VALUE); // each element ram_words[i] stores the value at memory address i.
int main()
{
//first line of input is the number of cases to follow.
cin >> n_cases;
// skip two lines separating number of cases from first case.
skip_blank_lines();
for (int i = n_cases; i > 0; i--) {
execute_case();
// print an extra blank line for all but the last case.
if (i > 1) {
cout << "\n";
}
}
}
//this function skips two lines separating number of cases from first case.
static void skip_blank_lines() {
string input;
getline(cin, input);
getline(cin, input);
}
static void execute_case() {
initialize_memory();
fill_ram();
while (line_number < N_RAM_WORDS) {
int instruction = ram_words[line_number] / 100;
int parameter1 = ram_words[line_number] % 100 / 10;
int parameter2 = ram_words[line_number] % 10;
number_of_instructions++;
switch (instruction) {
case 1:
if ((parameter1 == parameter2) && (parameter1 == 0)) {
halt();
return;
}
break;
case 2:
set_register(parameter1, parameter2);
break;
case 3:
increment_register(parameter1, parameter2);
break;
case 4:
multiply_register(parameter1, parameter2);
break;
case 5:
copy_register(parameter1, parameter2);
break;
case 6:
sum_registers(parameter1, parameter2);
break;
case 7:
multiply_registers(parameter1, parameter2);
break;
case 8:
copy_ram_to_register(parameter1, parameter2);
break;
case 9:
set_ram(parameter1, parameter2);
break;
case 0:
jump_to(parameter1, parameter2);
break;
}
line_number++;
}
}
//resets all registers and ram words to their default value
// ie. 000.
static void initialize_memory() {
std::fill(registers.begin(), registers.end(), 0);
std::fill(ram_words.begin(), ram_words.end(),0);
number_of_instructions = 0;
line_number = 0;
}
//reads all instructions executed for a case from stdin.
//input is terminated by a blank line.
static void fill_ram() {
string input_line;
int word_number = 0;
while (getline(cin, input_line)) {
if (input_line.empty()) {
return;
}
//convert line into an integer which is then stored in ram_words.
ram_words[word_number] = stoi(input_line);
word_number++;
}
}
//prints out the number of instructions executed for input case.
static void halt() {
cout << number_of_instructions << "\n" ;
}
//sets the register at register_address to value.
void set_register(int register_address, int value) {
registers[register_address] = value%10;
}
//increments the register at register_address by value.
static void increment_register(int register_address, int value) {
registers[register_address] = (registers[register_address]+value) % MAX_VALUE;
}
//multiplies the register at register_address by value.
static void multiply_register(int register_address, int value) {
registers[register_address] = (registers[register_address]* value) % MAX_VALUE;
}
//copies the register at source_address to destination_address
static void copy_register(int destination_address, int source_address) {
registers[destination_address] = registers[source_address];
}
//sums the registers at destination_address and value_address and stores the result in register destination_address
static void sum_registers(int destination_address, int value_address) {
registers[destination_address] = (registers[destination_address]+registers[value_address]) % MAX_VALUE;
}
//multiplies the registers at destination_address and value_address and stores the result in register destination_address
static void multiply_registers(int destination_address, int value_address) {
registers[destination_address] = (registers[destination_address]*registers[value_address])%MAX_VALUE;
}
//copies the value stored at ram_word number source_ram to register number destination_register
static void copy_ram_to_register(int destination_register, int source_ram) {
registers[destination_register] = ram_words[registers[source_ram]];
}
//sets the ram_word whose address is stored in register number destination ram to the value stored in register number source_register.
static void set_ram(int source_register, int destination_ram) {
ram_words[registers[destination_ram]] = registers[source_register];
}
//if sentinel_register is not 0 it
//jumps to and executes the instruction stored in the ram_address stored in register location_register
static void jump_to(int location_register, int sentinel_register) {
if (registers[sentinel_register] == 0) {
return;
}
else {
line_number = registers[location_register]-1;
}
}
Answer: Here are some things that may help you improve your program.
Fix the bug
The code currently contains these two lines:
vector<int> registers(N_RAM_WORDS, DEFAULT_VALUE);
vector<int> ram_words(N_REGISTERS, DEFAULT_VALUE);
However, it's obvious that N_RAM_WORDS and N_REGISTERS should be swapped. Being able to spot such errors is one advantage to having named constants as you do.
Eliminate global variables
In this case, eliminating global variables is easy and obvious. As @LokiAstari advises, simply wrap everything except n_cases up into a nice neat Machine object. Then n_cases can be declared within main.
Make all instruction handlers alike
All instructions except for halt look alike. They are each void functions taking two int arguments. I'd suggest making halt look like that, too, and move the parameter checking to within the body of the halt function. This makes it easier to implement the next suggestion.
Consider making the code more data driven
Right now there is a big switch with each case being almost identical except for the function called. I'd recommend using a table driven approach instead. In my case, I made all instruction handlers return bool which indicates "halted", so only halt() actually returns true. The call then looks like this:
if ((this->*inst[instruction])(parameter1, parameter2))
return;
I've made all of the functions member functions of a Machine class as mentioned above. Then there is a table that is also part of that Machine class:
bool(Machine::*inst[10])(int, int) = {
&Machine::jump_to,
&Machine::halt,
&Machine::set_register,
&Machine::increment_register,
&Machine::multiply_register,
&Machine::copy_register,
&Machine::sum_registers,
&Machine::multiply_registers,
&Machine::copy_ram_to_register,
&Machine::set_ram,
};
This is simply an array named inst of 10 pointers to member functions. The syntax would be similar for C-style functions except of course they would not have a Machine:: anywhere.
Use all required #includes
The code makes use of std::string and std::stoi but does not #include <string>. It should.
Use only required #includes
It doesn't appear to me that this program actually needs anything from either <sstream> or <fstream> so these two lines can safely be deleted:
#include <sstream>
#include <fstream> | {
"domain": "codereview.stackexchange",
"id": 22766,
"tags": "c++, performance, strings, programming-challenge, vectors"
} |
Importing python packages into node files | Question:
I have set up my ROS package with the following layout:
packagename
CMakeLists.txt
package.xml
|- src/
|- packagename/
|- __init__.py
|- package_file.py
|- scripts/
|- node_file.py
And in node_file.py, I'd like to be able to do
from packagename import package_file
But I get cannot import name package_file.
Am I laying this out correctly? How do I make my python package visible to node_file.py? I am using ROS Hydro in Ubuntu 12.04 LTS.
Originally posted by kamek on ROS Answers with karma: 79 on 2014-04-11
Post score: 0
Answer:
I didn't realize that I needed a setup.py file at the root of my package (I previously thought this was only needed for installing packages). I was used to using Fuerte where this was unnecessary. See here for how to create and use this file:
http://wiki.ros.org/rospy_tutorials/Tutorials/Makefile
Originally posted by kamek with karma: 79 on 2014-04-11
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by jackcviers on 2014-07-18:
Setup.py is very important, and should probably be added to the python publish/subscribe tutorials. | {
"domain": "robotics.stackexchange",
"id": 17624,
"tags": "ros, rospy, package"
} |
Addition/Deletion records based on method argument | Question: I have one method for addition and deletion of the account records. It has a method argument based on which addition or deletion happens.
Below is method
char ADD = 'Y';
char DELETE = 'N';
private void updateAccountDtls(AccountDtlsDTO accountDtlsDTO, char addOrDeleteRecord){
if (ADD == addOrDeleteRecord) {
//account addition related processing
}
else if (DELETE == addOrDeleteRecord) {
//account deletion related processing
}
}
which is consumed in switch like below
switch (action) {
case SET:
updateAccountDtls(accountDtlsDTO,ADD);
break;
case RESET:
updateAccountDtls(accountDtlsDTO,DELETE);
break;
}
From a performance perspective, is it good practice to use a character instead of a string for Yes/No?
Answer: Performance-wise, you're not going to notice the difference.
There is a far better solution available to you though. That is to have 2 separate methods altogether.
You already have a switch statement before calling the method, in which you know whether it will be a delete or an add. This makes it really easy to just call the right method.
Having separate methods that each do a specific thing makes the code easier to read and maintain later on. | {
"domain": "codereview.stackexchange",
"id": 30053,
"tags": "java"
} |
How to append numbers only on duplicates sequence names? | Question: I have a reference database which contains hundreds of sequences in fasta format. Some of these sequences have duplicate names like so:
>1_uniqueGeneName
atgc
>1_anotherUniqueGeneName
atgc
>1_duplicateName
atgc
>1_duplicateName
atgc
Is is possible to run through a large file like this and change the names of only the duplicates?
>1_uniqueGeneName
atgc
>1_anotherUniqueGeneName
atgc
>1_duplicateName_1
atgc
>1_duplicateName_2
atgc
Answer: Sure, this little Perl snippet should do it:
$ perl -pe 's/$/_$seen{$_}/ if ++$seen{$_}>1 and /^>/; ' file.fa
>1_uniqueGeneName
atgc
>1_anotherUniqueGeneName
atgc
>1_duplicateName
atgc
>1_duplicateName_2
atgc
Or, to make the changes in the original file, use -i:
perl -i.bak -pe 's/$/_$seen{$_}/ if ++$seen{$_}>1 and /^>/; ' file.fa
Note that the first occurrence of a duplicate name isn't changed, the second will become _2, the third _3 etc.
Explanation
perl -pe : print each input line after applying the script given by -e to it.
++$seen{$_}>1 : increment the current value stored in the hash %seen for this line ($_) by 1 and compare it to 1.
s/$/_$seen{$_}/ if ++$seen{$_}>1 and /^>/ : if the current line starts with a > and the value stored in the hash %seen for this line is greater than 1 (if this isn't the first time we see this line), replace the end of the line ($) with a _ and the current value in the hash for this line.
Alternatively, here's the same idea in awk:
$ awk '(/^>/ && s[$0]++){$0=$0"_"s[$0]}1;' file.fa
>1_uniqueGeneName
atgc
>1_anotherUniqueGeneName
atgc
>1_duplicateName
atgc
>1_duplicateName_2
atgc
To make the changes in the original file (assuming you are using GNU awk which is the default on most Linux versions), use -i inplace:
awk -iinplace '(/^>/ && s[$0]++){$0=$0"_"s[$0]}1;' file.fa
Explanation
In awk, the special variable $0 is the current line.
(/^>/ && s[$0]++) : if this line starts with a > and incrementing the value stored in the array s for this line by 1 evaluates to true (is greater than 0).
$0=$0"_"s[$0] : make the current line be itself with a _ and the value from s appended.
1; : this is just shorthand for "print this line". If an expression evaluates to true, awk will print the current line. Since 1 is always true, this will print every line.
If you want all of the duplicates to be marked, you need to read the file twice. Once to collect the names and a second to mark them:
$ awk '{
if (NR==FNR){
if(/^>/){
s[$0]++
}
next;
}
if(/^>/){
k[$0]++;
if(s[$0]>1){
$0=$0"_"k[$0]
}
}
print
}' file.fa file.fa
>1_uniqueGeneName
atgc
>1_anotherUniqueGeneName
atgc
>1_duplicateName_1
atgc
>1_duplicateName_2
atgc
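Not from the original answer, but here is a rough Python equivalent of this two-pass approach (function name and structure are mine; it shares the same caveat about pre-existing `_N` suffixes discussed below):

```python
def rename_duplicates(lines):
    """Pass 1: count each header line. Pass 2: append _1, _2, ... to
    every occurrence of a header that appears more than once."""
    totals = {}
    for line in lines:
        if line.startswith(">"):
            totals[line] = totals.get(line, 0) + 1

    seen = {}
    out = []
    for line in lines:
        if line.startswith(">") and totals[line] > 1:
            seen[line] = seen.get(line, 0) + 1
            out.append("%s_%d" % (line, seen[line]))
        else:
            out.append(line)
    return out
```

Like the two-pass awk version, this marks all duplicate occurrences (_1, _2, ...) rather than only the second and later ones.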
IMPORTANT: note that all of these approaches assume you don't already have sequence names ending with _N where N is a number. If your input file has 2 sequences called foo and one called foo_2, then you will end up with two foo_2:
$ cat test.fa
>foo_2
actg
>foo
actg
>foo
actg
$ perl -pe 's/$/_$seen{$_}/ if ++$seen{$_}>1 and /^>/; ' test.fa
>foo_2
actg
>foo
actg
>foo_2
actg
If this can be an issue for you, use one of the more sophisticated approaches suggested by the other answers. | {
"domain": "bioinformatics.stackexchange",
"id": 140,
"tags": "fasta, text"
} |
Implementation of Wilf-Zeilberger and related methods | Question: The book A=B by Petkovsek, Wilf and Zeilberger describes algorithms to compute different sums of binomials. AFAIK, these algorithms are still being improved by different authors.
Do you know where we can find the most up-to-date implementations of these algorithms? And do you know if there exist implementations in free software such as Sage?
Answer: It is implemented in Maxima (http://maxima.sourceforge.net/docs/manual/de/maxima_77.html#SEC400), to which Sage has an interface. A few dozen examples (ranging from very easy to very difficult) I tested today work in the exact same way as in Maple. | {
"domain": "cstheory.stackexchange",
"id": 3913,
"tags": "ds.algorithms, co.combinatorics, implementation"
} |
Acceleration of an object dropped inside an accelerating elevator | Question: A stone is released from an elevator going up with acceleration of $g/2$. What is the acceleration of the stone just after the release?
The answer is $g$. Shouldn't the stone carry the acceleration of the elevator and be $-g/2$?
Answer: While the stone is still travelling on the elevator, there are two forces acting on it, the force from the elevator to the stone, as well as the weight due to gravity.
The moment the stone leaves the elevator, it becomes a free-falling object. The elevator stops exerting a force on the stone, and the only force remaining is its weight due to gravity.
From this you can see that as the only force is W = mg, the acceleration felt by the stone will be g.
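A short worked sketch of the same point (notation mine, taking upward as positive): if the stone leaves the elevator at $t = 0$ moving upward with the elevator's instantaneous velocity $v_0$, then for $t > 0$ the only force is the weight:

```latex
F = -mg \;\Rightarrow\; a = -g, \qquad v(t) = v_0 - g t
```

The stone keeps rising until $t = v_0/g$, but the magnitude of its acceleration is $g$ from the instant of release; $v_0$ only shifts where the trajectory peaks.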
While it is true that it will be travelling upwards initially due to its momentum, its initial speed does not matter: the only force acting on it is gravity, so the acceleration it experiences will simply be $g$. | {
"domain": "physics.stackexchange",
"id": 16726,
"tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, free-fall"
} |
What's cosmic VCF? | Question: The popular MuTect variant tool:
http://archive.broadinstitute.org/cancer/cga/mutect_run
has the following VCF option:
--cosmic <cosmic.vcf>
Q: What is this cosmic VCF file? What's the purpose of the file?
Answer: VCF is an abbreviation for Variant Call Format. It is a file format for SNPs.
COSMIC stands for Catalogue Of Somatic Mutations In Cancer. It is a database.
Have a look at the links for more information.
A COSMIC VCF is likely just a file in VCF format file containing data coming from the COSMIC data base. | {
"domain": "biology.stackexchange",
"id": 6860,
"tags": "bioinformatics"
} |
What is the difference between a Bioinformatics pipeline and workflow? | Question: I want to understand the difference between pipeline systems and workflow engines.
After reading A Review of Scalable Bioinformatics Pipelines I had a good overview of current bioinformatics pipelines. After some further research I found that there is a collection of highly capable workflow engines. My question is then based on what I saw for argo: I would say it can be used as a bioinformatics pipeline as well.
So how do bioinformatics pipelines differ from workflow engines?
Answer: Great question! Note that from a prescriptive standpoint, the terms pipeline and workflow don't have any strict or precise definitions. But it's still useful to take a descriptive standpoint and discuss how the terms are commonly used in the bioinformatics community.
But before talking about pipelines and workflows, it's helpful to talk about programs and scripts. A program or script typically implements a single data analysis task (or set of related tasks). Some examples include the following.
FastQC, a program that checks NGS reads for common quality issues
Trimmomatic, a program for cleaning NGS reads
salmon, a program for estimating transcript abundance from NGS reads
a custom R script that uses DESeq2 to perform differential expression analysis
A pipeline or a workflow refers to a particular kind of program or script that is intended primarily to combine other independent programs or scripts. For example, I might want to write an RNA-seq workflow that executes Trimmomatic, FastQC, salmon, and the R script using a single command. This is particularly useful if I have to run the same command many times, or if the commands take a long time to run. It's very inconvenient when you have to babysit your computer and wait for step 3 to finish so that you can launch step 4!
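As an illustration (mine, not from the original answer — the command lines below are placeholders, not real invocations of the tools named above), such a workflow can be a short script that runs each step in order and stops on the first failure:

```python
import subprocess

# Placeholder command lines for the four steps described above;
# the real tools need their actual arguments and input/output paths.
STEPS = [
    ["trimmomatic", "raw_reads.fq", "clean_reads.fq"],
    ["fastqc", "clean_reads.fq"],
    ["salmon", "quant", "clean_reads.fq"],
    ["Rscript", "deseq2_analysis.R"],
]

def run_pipeline(steps, dry_run=False):
    """Run each step in order, stopping if one fails.
    With dry_run=True, return the commands instead of executing them."""
    if dry_run:
        return [" ".join(cmd) for cmd in steps]
    for cmd in steps:
        subprocess.run(cmd, check=True)  # check=True raises on failure
```

run_pipeline(STEPS, dry_run=True) just lists the four commands; dropping dry_run executes them all with a single invocation of the script, which is the whole point of a pipeline.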
So when does a program become a pipeline? Honestly, there are no strict rules. In some cases it's clear: the 10-line Python script I wrote to split Fasta files is definitely NOT a pipeline, but the 200-line Python script I wrote that does nothing but invoke 6 other bioinformatics programs definitely IS a pipeline. There are a lot of tools that fall in the middle: they may require running multiple steps in a certain order, or implement their own processing but also delegate processing to other tools. Usually nobody worries too much about whether it's "correct" to call a particular tool a pipeline.
Finally, a workflow engine is the software used to actually execute your pipeline/workflow. As mentioned above, general-purpose scripting languages like Bash, Python, or Perl can be used to implement workflows. But there are other languages that are designed specifically for managing workflows. Perhaps the earliest and most popular of these is GNU Make, which was originally intended to help engineers coordinate software compilation but can be used for just about any workflow. More recently there has been a proliferation of tools intended to replace GNU Make for numerous languages in a variety of contexts. The most popular in bioinformatics seems to be Snakemake, which provides a nice balance of simplicity (through shell commands), flexibility (through configuration), and power-user support (through Python scripting). Build scripts written for these tools (i.e., a Makefile or Snakefile) are often called pipelines or workflows, and the workflow engine is the software that executes the workflow.
The workflow engines you listed above (such as argo) can certainly be used to coordinate bioinformatics workflows. Honestly though, these are aimed more at the broader tech industry: they involve not just workflow execution but also hardware and infrastructure coordination, and would require a level of engineering expertise/support not commonly available in a bioinformatics setting. This could change, however, as bioinformatics becomes more of a "big data" endeavor.
As a final note, I'll mention a few more relevant technologies that I wasn't able to fit above.
Docker: managing a consistent software environment across multiple (potentially dozens or hundreds) of computers; Singularity is Docker's less popular step-sister
Common Workflow Language (CWL): a generic language for declaring how each step of a workflow is executed, what inputs it needs, what outputs it creates, and approximately what resources (RAM, storage, CPU threads, etc.) are required to run it; designed to write workflows that can be run on a variety of workflow engines
Dockstore: a registry of bioinformatics workflows (heavy emphasis on genomics) that includes a Docker container and a CWL specification for each workflow
toil: a production-grade workflow engine used primarily for bioinformatics workflows | {
"domain": "bioinformatics.stackexchange",
"id": 929,
"tags": "software-usage, terminology, workflow-management"
} |
What is a phospho-protein binding domain? | Question: Is this just a domain that binds proteins that have been phosphorylated? And it mediates signalling between an activated/phosphorylated protein? How is this significant with BRCA1?
Answer: This is a very large topic but I will try to partially provide an answer. Now your question has three parts as far as I understand it.
Do phospho-protein binding domains only bind to phosphorylated proteins i.e. is that their sole (observed) function.
What roles do phospho-protein binding domains mediate.
Function of BRCA1 phosphorylation/phospho-protein binding
Just to get some basics out of the way, there are three amino-acid residues in a protein that can become post-translationally modified by phosphorylation, including serine, threonine and tyrosine. Now the purpose of protein phosphorylation ranges from activation/deactivation signals to binding and changing cellular localisation i.e. recruiting a protein to different part of a cell. Now tyrosine phosphorylation, which I know by far the most about is capable of all the above. So if you think about a receptor tyrosine kinase (RTK) such as epidermal growth factor receptor (EGFR), once a ligand such as EGF binds to it, it becomes tyrosine phosphorylated (activated) through its intrinsic tyrosine kinase activity. Once tyrosine phosphorylated, many proteins are capable of binding to tyrosine-phosphorylated RTKs such as Grb2 to induce a signal downstream of RTKs. So as you can see, phosphorylation of RTK affected cellular localisation of GRB2. Excellent Ras signalling review by Karnoub & Weinberg, 2008 Nature reviews (Fig 5). Grb2 binds to phospho-RTKs through a specific domain called Src-homology 2 (SH2) domain, which provides a platform for other proteins such as Son of sevenless (SOS) which is a GEF to bind to it and induce RTK signalling through Ras. So in the above case the role of phospho-protein binding domain (SH2) is to change cellular localisation of multiple proteins and mediate signalling.
Now I'm no expert in BRCA1 but just had a quick look and it seems serine phosphorylation of BRCA1 may be responsible for its recruitment to the DNA site of damage; if the serine residues in BRCA1 are mutated, the protein will no longer get phosphorylated, which may affect its recruitment to DNA sites of damage and hence its DNA damage response function (Clark et al 2012 Comput Struct Biotechnol J). Now BRCA1 has a phospho-protein binding domain called BRCT, and that is also thought to mediate DNA binding and binds to phosphorylated and non-phosphorylated proteins. So BRCA1 phosphorylation and phospho-protein binding is thought to be important in its function (Clark et al 2012 Comput Struct Biotechnol J). So to answer your question in short, phospho-protein binding domains such as BRCT do not necessarily bind to their target proteins through phosphorylation.
Hope this helps. | {
"domain": "biology.stackexchange",
"id": 2277,
"tags": "molecular-biology, cell-biology, cancer, cell-signaling"
} |
What is the best file format to save short signal samples? | Question: We need to save signal samples measured at an arbitrary sampling rate. The samples are usually about one second in duration. Currently we save them in a text file format, which consumes a lot of disk space and is very slow. This type of format is also unsuitable for streaming applications.
I have been searching for a binary file format for this kind of data, but I'm not sure what factors I need to consider when choosing a format.
The sampling rate of our data varies from 1kHz to 152kHz and currently 16 bits would be enough to represent the data (varies roughly from -1000 to 1000). This is vibration data measured from different sensors if it makes a difference. Also we would need to save pre-calculated FFT's in the same format. Is this possible?
Just to point out, I have no education in signal processing. I have the task to implement the system to be used in the analysis.
Currently the best option seems to be WavPack, which is basically a compressed WAV file and supports sampling rates from 1 to 4.3GHz and the compression is lossless.
WavPack is released in 1998 so I'm thinking are there any newer or more efficient formats available today?
Answer: Let's write down a quick spec:
The samples are usually about one second in duration, i.e. let's say the maximum length is 4s
The sampling rate of our data varies from 1kHz to 152kHz, i.e. max rate 152 kS/s
varies roughly from -1000 to 1000, i.e. more than 2048, so let's head straight for 16 bit integers
That makes a maximum recording raw data size of
$$4\,\text s\cdot 152\,\frac{\text{kS}}{\text s}\cdot 2\,\frac{\text B}{\text S} = 1216\,\text{kB.}$$
In other words, by all modern standards for permanent storage and RAM of machines that you'd use to store recordings... nearly no data at all, and that's the maximum case. Even when you incorporate the pre-calculated FFTs (why?? Storage access of data in this length is usually much slower than computing them from the data when you need it), that's still not much data.
So, I would recommend not using any specific compression at all, but simply writing the data as raw int16 into files. Every language in this world has methods of reading raw data from files.
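A minimal sketch of that idea (mine, not the original poster's format; the layout is just raw native-endian int16 samples with no header) using Python's standard array module:

```python
import array
import os

def write_raw_int16(path, samples):
    """Store samples as raw 16-bit signed integers (native byte order,
    no header). Record rate and byte order in separate metadata."""
    data = array.array("h", samples)   # 'h' = signed short, 2 bytes
    assert data.itemsize == 2
    with open(path, "wb") as f:
        data.tofile(f)

def read_raw_int16(path):
    """Read a whole raw int16 file back into a list of ints."""
    data = array.array("h")
    n_items = os.path.getsize(path) // data.itemsize
    with open(path, "rb") as f:
        data.fromfile(f, n_items)
    return data.tolist()
```

The sampling rate and capture time then live in the sidecar metadata discussed below; the data file itself stays trivially streamable, because every 2-byte boundary is a sample boundary.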
Regarding metadata like sampling rate, time of capture, and something like that:
That really depends on your usage scenario. If you know you'll keep the data with the metadata database, I'd quickly put up a separate table (be it in CSV file, be it in a SQLite file, or in a proper relational database).
If you know you'll distribute these files somewhere else, check whether using a custom header format (which usually boils down to a packed C struct in one way or another) is really advantageous, or whether a simple standard scientific data file format like HDF5 with well-defined structures within wouldn't be better – that would allow you to skip writing your own header parser in any target language, and you could easily put your raw samples and an arbitrary amount of secondary data into it. The drawback is that you can't just start streaming at the first byte of the data stream without loading the file to a large extent into memory. However, again, with your very small files, that isn't really a problem. I've not worked on a PC with less than 4 GB of RAM in the last decade, and neither should someone who does signal analysis. If of those 4 GB of RAM, 2 GB are available, you could comfortably have around 2000 of your maximum-size data files in RAM at once. | {
"domain": "dsp.stackexchange",
"id": 5088,
"tags": "discrete-signals"
} |
Would a nuclear explosion over one of the Earth’s magnetic poles momentarily disrupt/weaken the Earth’s magnetic field? | Question: If say the Tsar Bomba had been detonated directly over one of the Earth’s magnetic poles at ground level, would the nuclear electromagnetic pulse generated from that blast have had any momentary effect on the strength of Earth’s magnetic field?
If a nuclear EMP cannot disrupt/weaken the Earth’s magnetic field, would it still be powerful enough to momentarily energize the Earth’s magnetic field, and would this powerful electromagnetic discharge fry any sensitive electronic devices around the Earth and/or satellites in low Earth orbit?
Answer: “to momentarily energize the Earth magnetic field”... hm.
A nuclear explosion has two effects that one could connect with the subject of magnetic fields:
The EMP. This is very quick. Like any EM signal, it just follows the wave solution of the Maxwell equations (which are in air very linear, so the amplitude isn't really important), meaning it spreads out at the speed of light, well ahead of the pressure and ionisation effects. The lower-frequency parts will partly be reflected at the ionosphere, and you may get some whistler-mode dispersion. This isn't really special to the nuclear bomb (it also happens with lightning transients), and it's completely independent of the Earth's static magnetic field. Yes, the EMP itself also has a magnetic component, but this is short-lived and actually not that strong; only because it's a dynamic field with a strong electrical component does the pulse destroy electronics. (Magnetically, it's much weaker than the disturbances solar flares can cause on Earth.)
The ionisation. A nuclear bomb turns a significant volume of air into plasma, both through the heating and through radiation. In plasma, you don't have separate dynamics for the gas and EM fields anymore, but both are linked together in magnetohydrodynamics. However, not much air is heated so much† that, at the high densities you have at surface level, you'd actually have electrical conductivity for a long time – recombination removes most of the ions quickly. Only a high voltage would be able to sustain a current (and thus effect a magnetic field), which again is just what happens in a lightning strike, but that too is short-lived.
So, no, I don't think there's much interesting to be said here.
It might also be worth noting that the Tsar Bomba actually did explode at quite high latitude, 74°N. At that time the magnetic pole itself was only at 75°N (albeit on the western hemisphere, so the explosion wasn't actually at very high magnetic latitude).
†The hard radiation which does most of the initial heating in the fireball does so precisely because it interacts strongly with air, but that also means it doesn't reach very far. The components that reach through air can immediately burn surfaces miles away, but they don't much heat the air they pass through. Only the mechanical (i.e. acoustic) shock wave causes a heating again, but this is transient. | {
"domain": "earthscience.stackexchange",
"id": 1515,
"tags": "planetary-science"
} |
Initial joint angles | Question:
Hi,
Is there anyway to set the initial joint angles of the robot ? After reading a lot of pages and some packages of several robots, I have seen that it can be done by adding some args to the spawning node. However, I haven't been able to make it work for me. Right now, what I am doing is:
<node name="arm_base_spawn_urdf" pkg="gazebo_ros" type="spawn_model" args="-urdf -param robot_description -model schunk_lwa4p_and_base
-J schunk_lwa4p_and_base::J_foldingSupport 0.075
-J schunk_lwa4p_and_base::J1_PowerBall -1.5607
-J schunk_lwa4p_and_base::J2_PowerBall -0.3817"
respawn="false" output="screen" />
The robot is a lwa4p mounted over a mobile platform. For doing that, I am using xacros, being the robot schunk_lwa4p_and_base the union of all the xacros.
I would really appreciate if you can help me.
Thank you in advance,
JLuis Samper
Originally posted by Samper-Esc on ROS Answers with karma: 50 on 2015-08-24
Post score: 1
Original comments
Comment by dornhege on 2015-08-24:
Do you need that from config file? Or would just setting defaults in the URDF be sufficient?
Comment by Samper-Esc on 2015-08-24:
Setting defaults is sufficient
Answer:
Start gazebo paused, then have spawn_model unpause gazebo.
<include file="$(find gazebo_ros)/launch/empty_world.launch">
...
<arg name="paused" value="true"/>
</include>
<node name="arm_base_spawn_urdf" pkg="gazebo_ros" type="spawn_model" args="-urdf -param robot_description -model schunk_lwa4p_and_base
-J schunk_lwa4p_and_base::J_foldingSupport 0.075
...
-unpause
..."/>
If this does not work, you can make your own version of "spawn_model", where you can pause gazebo, set the configuration, reset your joint controller, then unpause gazebo.
Some links I found helpful:
https://github.com/ros-simulation/gazebo_ros_pkgs/issues/93
https://github.com/ros-simulation/gazebo_ros_pkgs/blob/kinetic-devel/gazebo_ros/scripts/spawn_model
Originally posted by bsaund with karma: 161 on 2017-02-08
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by mogumbo on 2019-03-21:
This odd little workaround works for me. Thank you, bsaund :) Can anyone explain what's going on here in detail? I assume the controllers are somehow seeing different initial joint states if Gazebo is paused during initialization. Is there any plan to implement a more elegant solution? I might be able to make this contribution if I can better understand the problem. | {
"domain": "robotics.stackexchange",
"id": 22518,
"tags": "ros"
} |
how to use Velodyne VLP16 and laser_scan_matcher with cloud input? | Question:
I have a robot equipped with a 3D-Lidar (Velodyne VLP16) and want to use laser_scan_matcher (http://wiki.ros.org/laser_scan_matcher) to estimate its position.
The velodyne driver (http://wiki.ros.org/velodyne_pointcloud?distro=indigo) publishes velodyne_points (sensor_msgs/PointCloud2) which I intend to use as input for the laser_scan_matcher by setting the use_cloud_input parameter to 'true'. The following error is produced: "Invalid number of rays". After some research I found that only 10000 rays are accepted, while there are roughly ~19000 in the PointCloud2 message.
Obviously, I could always use pointcloud_to_laserscan in between to get a /scan topic out of the velodyne_points and use it as input to the laser_scan_matcher. This works fine but I was hoping to get a better result for the position estimation if I use the PointCloud2 message directly.
My question: is there a convenient way to adjust the number of published points in the PointCloud2 message? Or is there a way to circumvent the "number of rays" restriction?
I found the two following posts with similar questions/problems, if it helps:
http://answers.ros.org/question/61758/localization-just-from-imu-data/
http://answers.ros.org/question/226070/laser_datac-location/
Originally posted by rosarm on ROS Answers with karma: 21 on 2017-03-22
Post score: 2
Original comments
Comment by fiorano10 on 2017-10-09:
I'm working on something similar and have the same issue. I got it working with pointcloud_to_laserscan package but I wanted to use pointcloud2 as input. Did you find a solution to the problem ?
Answer:
The usual solution is the one you mention, using pointcloud_to_laserscan.
If you need or want more than that, you'll likely need to do some original work.
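Until then, if you want to experiment with the cloud input anyway, one conceptual workaround is to uniformly subsample the cloud below the matcher's 10000-reading limit before feeding it in. Here is a minimal, ROS-free sketch of just the subsampling idea — a real node would read and republish a PointCloud2, and the names here are illustrative, not part of any package API:

```python
def downsample(points, max_readings=10000):
    """Uniformly subsample a list of points so at most max_readings
    remain, preserving the original (angular) ordering."""
    n = len(points)
    if n <= max_readings:
        return list(points)
    # Stride chosen as ceil(n / max_readings) so the result
    # length never exceeds max_readings.
    stride = -(-n // max_readings)
    return points[::stride]

# ~19k dummy (index, range) pairs, like one VLP16 cloud.
cloud = [(i, i * 0.01) for i in range(19000)]
reduced = downsample(cloud)
print(len(reduced))  # 9500, under the 10000-ray limit
```

Whether the matcher actually benefits from the denser cloud versus a pointcloud_to_laserscan projection is something you would have to measure.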
Originally posted by joq with karma: 25443 on 2017-03-28
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 27393,
"tags": "ros, pointcloud-to-laserscan, vlp16, pointcloud, laser-scan-matcher"
} |
Recognizing a line from three r-theta ultrasonic distance readings? | Question: Anyone know of sample Python code, or a tutorial, on the polar coordinate math for recognizing that three ultrasonic distance readings form a straight line?
Deg Distance
-10° 20 cm
0° 18 cm
+10° 16 cm
Once I understand the math, I'll have to deal with the lack of precision.
I want my bot to recognize a wall, and eventually recognize a corner.
Answer: Checking for three is a subset of checking for many; so, I am going to consider the more general solution. I will discuss the three point solution at the end.
First, convert the polar coordinates to Cartesian Coordinates.
To make things simple, use your robot as the reference frame (make it the center of the universe).
That means, for each target calculate:
x = Distance · cos(Deg), y = Distance · sin(Deg) (remember to convert Deg from degrees to radians first)
Here is a Post on Converting Coordinates in Python
After you have all these calculate the slope and intercept of the line using linear regression; then, calculate R-squared (the Coefficient of determination) to determine how well the measurements fit a line. Note: Any measurement system is going to have errors; so, the points will likely never perfectly fit a line.
If that number is too low, try dropping the points that deviate furthest from the line. You could then group the dropped points together; if they fit a line of their own with a high r-squared, you may have found another wall.
I am sure there are lots of Python regression libraries. Here is a reference I found on Calculating R-squared in Python.
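Putting the conversion and the fit together for the question's three readings, here is a minimal pure-Python sketch (no regression library — the least-squares formulas are written out by hand, and it assumes the wall is not exactly vertical in the robot frame, i.e. the x-values vary):

```python
import math

readings = [(-10, 20.0), (0, 18.0), (10, 16.0)]  # (degrees, cm) from the question

# Polar -> Cartesian, robot at the origin; convert degrees to radians.
pts = [(d * math.cos(math.radians(a)), d * math.sin(math.radians(a)))
       for a, d in readings]

# Ordinary least-squares fit of y = m*x + b.
n = len(pts)
mean_x = sum(x for x, _ in pts) / n
mean_y = sum(y for _, y in pts) / n
sxx = sum((x - mean_x) ** 2 for x, _ in pts)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in pts)
m = sxy / sxx
b = mean_y - m * mean_x

# Coefficient of determination R^2: 1 means a perfect line.
ss_tot = sum((y - mean_y) ** 2 for _, y in pts)
ss_res = sum((y - (m * x + b)) ** 2 for x, y in pts)
r_squared = 1.0 - ss_res / ss_tot if ss_tot > 0 else 1.0

print(round(r_squared, 3))  # close to 1, so these readings look like a wall
```

For these sample readings R² comes out around 0.98, so a threshold like "accept as a wall if R² > 0.95" would fire here; the exact threshold depends on your sensor's noise.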
If you are only going to use three points, you could likely simplify this by using the middle point as your reference and checking whether the slopes to the other two points are exactly/nearly opposite.
Here are some other approaches for How to find if three points fall in a straight line or not. | {
"domain": "robotics.stackexchange",
"id": 1572,
"tags": "ultrasonic-sensors, geometry"
} |
What is a 21q21 deletion? | Question: I am reading a journal paper about the relationship between the protein NCAM2 and autism, and I have come across the following statement:
We report three patients affected with neurodevelopmental disorders
and harbouring 21q21 deletions involving NCAM2 gene.
I am not sure what a 21q21 deletion is. I have read that 21q refers to long arm of chromosome 21, and that 21q21 refers to position 21 on the long arm of chromosome 21.
However, would a 21q21 deletion refer to this whole section of the chromosome being missing, with this section including the NCAM2 gene? Any insights are appreciated.
Answer: Two likely scenarios.
Interstitial deletion. More likely.
Where just a small internal part of the chromosome is missing. This is like pressing "delete" for a while in the middle of a text document.
Deletions from the end of the chromosome to some point.
Chromosome 21 is the smallest, and people survive having the whole p side of the chromosome missing, even all the way down into band 21 on the q side.
There are more recent articles than the one you saw, look on google scholar for ncam2 deletion and sort by date.
Wikipedia article.
https://en.wikipedia.org/wiki/Chromosome_21
This document below mentions specific cases of (severe) 21q21 deletions (page 15).
https://www.rarechromo.org/media/information/Chromosome%2021/21q%20deletions%20FTNW.pdf | {
"domain": "biology.stackexchange",
"id": 11019,
"tags": "genetics, molecular-biology"
} |