| anchor | positive | source |
|---|---|---|
Exact differentials and state functions | Question: I was reading a Wiki article on the relationships between heat capacities
And during the derivation I came across this formula (and others like it):
This equation was used as a tool in the derivation, and it appeared on its own, without being derived.
Where does this come from (rigorously)? I see how it makes sense that we can get this formula by saying $S=S(V,T)$, given that this function is an exact differential. But why are only two natural variables chosen? Isn't it also true to say $S=S(P,V,T)$?
This is only true if $S=S(V,T)$ is an exact differential. How do we know it's an exact differential?
The article has a similar equation for $P=P(S,T)$. How do we know $P(S,T)$ is an exact differential? Is $P$ a state function?
I feel like there is a simple underlying answer to this, since these equations seem to be used as standalone identities themselves.
Answer: For any function of multiple variables, say, $f(x, y, z, \ldots)$ you can always write the following:
$df = \big( \frac{\partial f}{\partial x} \big)_{y, z, \ldots} dx + \big( \frac{\partial f}{\partial y} \big)_{x, z, \ldots} dy + \big( \frac{\partial f}{\partial z} \big)_{x, y, \ldots} dz + \ldots$
In fact, the constancy of the other variables is implicit in the partial derivative notation ($\partial/\partial x$), but in thermodynamics it is customary to write the variables that are held constant under the derivative, just to keep track of which other variables we were considering in that particular case.
The really confusing thing when discussing thermodynamics is the fact that your variables ($P, V, T$) are not independent. For instance, when considering $S$, the entropy, there are three variables, but they are also constrained by an additional equation, which is the equilibrium condition for the gas, which happens to be $PV = NkT$ for an ideal gas. So, you can pick any two to be independent variables, but the third will be dictated by the equilibrium condition.
Mathematically, of course, you can say $S = S(P, V, T)$. Let us see what happens then:
$dS = \big(\frac{\partial S}{\partial T} \big)_{P, V} dT + \cdots$
I just wrote the first term. It says: the derivative of $S$ with respect to $T$, keeping $P$ and $V$ constant. The question is, how can you change $T$ in the first place while keeping $P$ and $V$ constant? You can not, because the gas is (supposed to be) in equilibrium. So, what keeps you from doing this is the existence of the additional equation, hence you can not pick all three as independent variables. It's not the math, it's the physics.
If any quantity $X$ is a state function of a set of variables, then its differential will be an exact differential. Looks like a circular definition; so let me try to elaborate:
Think about this: you know that a friend just traveled from New York City to Boston, and now he is in Boston. Can you know the amount of gas he spent? The answer is no. The amount of gas he spent depends not only on where he is now (well, that dependence is indirect; it sort of gives a minimum) but also on what route he took, how aggressively he drove, how much traffic there was, air pressure, and whatnot. So if you call the amount of gas $G$, $G$ can not be known by looking at where he is right now (without investigating the history), so it is not a state variable. Therefore, $dG$ is not an exact differential.
But what about his distance to New York City? You know he is in Boston, and that is all you need to know. By just looking at his final state, you can tell what the distance is (I can not because I am too lazy to look at a map). So, his distance-to-New-York-City function $D$ is a state variable. Therefore $dD$ is an exact differential. It really does not matter how he traveled there; even if he teleported by magic, we still know $D$.
Now the third part. Darn thermodynamics, everything jumps around.
Let us say we picked $V$ and $T$ as our two independent variables. Which we can.
Then, we can say $S = S(V, T)$ and also $P = P(V, T)$. Now $P$ is not an independent variable, but once you know $V$ and $T$, it is known. So, since there is an equation $PV = NkT$, if you know what $V$ and $T$ are (your state variables - the independent ones - which you had a choice in picking) you do know what $P$ is, without having to know about the history of the gas.
This is not the answer to your question, of course. Here comes additional complication.
Look at the case more broadly. What you know is the existence of two relations among four variables $P, V, T, S$:
$F(P, V, T) = 0$ (this is the equilibrium condition)
$G(S, P, V, T) = 0$ (this is the relation defining $S$)
Just to be clear, in the ideal gas case, these two equations are:
$PV = NkT$ (which makes $F(P, V, T) = PV - NkT$ )
and
$S = Nk \ln \frac{VT^{c_v}}{N \Phi}$ ($c_v$ is the specific heat of the gas at constant volume, and $\Phi$ is a gas-dependent constant. This is as far as you can deduce using thermodynamics of an ideal gas only.)
For a non-ideal gas, the equations deviate from these forms, but a gas in equilibrium always has a state equation like $PV = NkT$, and there is always a "proper" expression for $S$.
Since I have two equations and four "unknown"s, it means I can choose any two of them as independent variables, and the other two will be specified by the equations.
So yeah... Now I exercise my freedom and pick $S$ and $T$ as my independent variables. This makes $P = P(S, T)$ and also $V = V(S, T)$. And, since they are specified exactly by my state variables $S$ and $T$, $dP$ and $dV$ are exact differentials.
What are not exact differentials then? The work done by a system is not an exact differential. You can increase the temperature of a gas by compressing it, or by giving it heat. By looking at the final volume and temperature, I can not possibly say by which way the internal energy arrived. So, work $W$ and heat exchanged $Q$ are not state variables. It does not even make sense to talk about what the "heat" or "work" of a system is, precisely because they are not state variables. You can talk about work done in a process, or heat exchanged in a process since they are process-dependent.
Final re-iteration: If a value of a variable depends on the end points only, and not the path taken, then it is a state variable. If it does depend on the path, then it is not a state variable.
Thermodynamics is difficult like this, because (as Feynman has said) we can not decide once and for all what our independent variables are. This is not because physicists are evil, but because every problem requires a different approach, and we use whatever is more useful for that case. | {
"domain": "physics.stackexchange",
"id": 28060,
"tags": "thermodynamics, entropy, differentiation"
} |
Why fluid gets cooler when you pour it from one cup to another and back? | Question: 30+ years ago my mother was pouring hot tea from one cup to another and back, presumably to make it cooler. Now my wife does the same for our kids. And I wonder: what is the physical mechanism that justifies this?
I had my last physics classes about 15-20 years ago and I wasn't the best student at it, but what I remember is that adding energy should actually increase, not decrease, temperature.
I perform mechanical manoeuvres (a force is required to literally transfer the fluid from one pot to another) in this process. Plus, more energy is added from the fact that the fluid moving in the air must overcome the resistance of the air particles. Shouldn't all this actually increase the temperature of the fluid itself?
Answer: In addition to the tea giving heat to the colder cup: when you pour from cup A to cup B, cup A is empty for a short while and gives off a little heat to the air. When the tea is returned to A, A is slightly colder. An interesting question would be what the optimal time is to leave the tea in the colder cup before pouring it back. | {
"domain": "physics.stackexchange",
"id": 62906,
"tags": "thermodynamics, fluid-dynamics, water, home-experiment"
} |
The difference between Type I strings and Type II strings | Question: I understand Type II strings but I do not understand the difference between Type I and Type II strings. Can anyone explain this to me?
Answer: Type II superstring theory starts from the assumption that small perturbations of the vacuum state result only in orientable closed[*] strings.
By contrast, Type I superstring theory starts from the assumption that perturbations near the vacuum state can be either open or closed strings, but both must be non-orientable.
Another difference is that while Type II theories have two 10-dimensional supersymmetry generators, Type I theories have only one. This difference is a consequence of the non-orientability of Type I strings. Assuming the strings are non-orientable means forcing the positive and negative chiral components of the worldsheet spinors to be dependent on each other. They are both determined by the same set of modes, not two different sets of modes as in Type II.
For Type I superstring theory, anomaly cancellation requires the gauge group to be SO(32).
[*] Type II does include open strings. However, they don't show up in the vacuum perturbation theory--they are only there in connection with non-perturbative effects (D-branes). | {
"domain": "physics.stackexchange",
"id": 37810,
"tags": "string-theory, supersymmetry, duality"
} |
The most efficient algorithm for computing cardinality of sumset | Question: Let A and B be two finite non-empty sets of positive integers. Their sumset is the set of all possible sums a + b where a is from A and b is from B. For example, if A = {1, 2} and B = {2, 3, 6} then A + B = {1 + 2, 1 + 3, 1 + 6, 2 + 2, 2 + 3, 2 + 6} = {3, 4, 5, 7, 8}.
As we can see, the cardinality #(A + B) can be less than the number of pairs #A · #B. So I wonder whether we can calculate #(A + B) faster than just computing all #A · #B sums and inserting them into some set-like data structure.
I've tried searching "sumset cardinality algorithm" and so on but didn't succeed.
P.S. We may assume that there is some constant K, not very large (for example $10^5$), such that 1 <= #A, #B <= K and all the elements of A and B are between 1 and K
Answer: Use the FFT. The characteristic vector of $A+B$ is the convolution of the characteristic vectors for $A$ and for $B$. Convolutions can be computed efficiently using a FFT. As long as every element of $A,B$ is not too large, this will be efficient: the running time will be approximately $O(n \lg n)$ if $A,B \subseteq \{1,2,\dots,n\}$. | {
"domain": "cs.stackexchange",
"id": 6081,
"tags": "algorithms, counting"
} |
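The convolution idea in the answer above can be sketched concretely. This is my own sketch (the function name is hypothetical): instead of an explicit FFT, it packs each characteristic vector into the digits of one Python big integer, wide enough that the per-digit pair counts cannot overflow; big-integer multiplication then performs the convolution (CPython itself uses fast multiplication internally).

```python
def sumset_cardinality(A, B):
    # Digit width: counts are at most min(|A|, |B|), so this cannot overflow
    width = max(len(A), len(B)).bit_length() + 1
    pa = sum(1 << (width * a) for a in A)
    pb = sum(1 << (width * b) for b in B)
    prod = pa * pb  # digit i of prod = number of pairs (a, b) with a + b = i
    mask = (1 << width) - 1
    count = 0
    while prod:
        if prod & mask:  # at least one pair sums to this value
            count += 1
        prod >>= width
    return count

print(sumset_cardinality({1, 2}, {2, 3, 6}))  # → 5, i.e. |{3, 4, 5, 7, 8}|
```

The running time is dominated by one multiplication of integers with $O(n)$ digits, matching the spirit of the $O(n \lg n)$ FFT bound when the elements are bounded by $n$.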
Electric Field calculation | Question: What are the limits (lower and upper) of integration for the differential field element $dE$ in the calculation of the electric field of some standard configuration?
E.g.:
the electric field calculation for a spherical shell, where
a: radius of the shell
r: distance of the point (where the field due to the shell is calculated) from the centre of the shell
What are the limits of $dE$ in this case?
Answer: The best approach is to think it through for every such case you get. In this case, to cover the whole sphere, you integrate over $\theta$ varying from $0$ to $\pi$. Some other important things to keep in mind: whether you are dealing with vectors (like the electric field); the choice of reference (origin); etc. | {
"domain": "physics.stackexchange",
"id": 58620,
"tags": "electrostatics, electric-fields"
} |
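As a worked instance of the $0$-to-$\pi$ limits mentioned in the answer above (a sketch, assuming a uniformly charged shell of surface charge density $\sigma$ and total charge $Q = 4\pi a^2 \sigma$): slicing the shell into rings at polar angle $\theta$ and integrating the axial component of each ring's field gives, for an outside point $r > a$,

$$E = \frac{\sigma a^2}{2\varepsilon_0}\int_0^{\pi} \frac{(r - a\cos\theta)\,\sin\theta}{(a^2 + r^2 - 2ar\cos\theta)^{3/2}}\, d\theta = \frac{Q}{4\pi\varepsilon_0 r^2},$$

so the integration limits apply to the angle $\theta$ (from $0$ to $\pi$), not to $dE$ itself.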
How is "F2/sig(F2)" the data quality in SXRD? | Question: When I receive a dataset from single crystal X-ray diffraction, I can determine the quality from a graph in which F2/sig(F2) is displayed as a function of the frame number.
But I forgot the basics... I have found that the $R_{int}$ value is calculated as such:
$$R_{int}=\frac{\Sigma|F^2-F_{mean}^2|}{\Sigma|F^2|}$$
Reference: http://www.canadiancrystallography.ca/cccw/files/CIF-file%20and%20validation.pdf
But what exactly is F2/sig(F2) now?
Answer: In this summation, $R_{int}$ considers symmetry-equivalent reflections: in your equation, the term $\Sigma|F^2-F_{mean}^2|$ takes the intensity of a reflection and subtracts from it the mean intensity of all its symmetry equivalents. If the redundancy of your data collection is high (the case for all area detectors), then this number tells you how close in intensity the equivalents are, and thus how good your crystal is.
Another measure of quality is $R_{\sigma}$
$$R_{\sigma}=\frac{\Sigma[{\sigma}(F^2)]}{\Sigma|F^2|}$$
It tells you how close the estimated standard deviations of the equivalents are (e.g.: were all equivalents collected with the same precision?) | {
"domain": "chemistry.stackexchange",
"id": 4038,
"tags": "crystal-structure"
} |
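The two formulas above can be checked numerically with a minimal sketch (the helper names are mine; inputs are per-reflection $F^2$ values and, for $R_{int}$, the mean over each reflection's symmetry equivalents):

```python
def r_int(f2, f2_mean):
    # R_int = sum(|F^2 - F^2_mean|) / sum(|F^2|)
    return sum(abs(a - m) for a, m in zip(f2, f2_mean)) / sum(abs(a) for a in f2)

def r_sigma(f2, sig_f2):
    # R_sigma = sum(sigma(F^2)) / sum(|F^2|)
    return sum(sig_f2) / sum(abs(a) for a in f2)

# Two equivalents measured as 10 and 12 (mean 11): R_int = (1 + 1) / 22
print(round(r_int([10, 12], [11, 11]), 4))      # → 0.0909
print(round(r_sigma([10, 12], [0.5, 0.7]), 4))  # → 0.0545
```

A well-behaved dataset has a small $R_{int}$ (equivalents agree) and a small $R_{\sigma}$ (intensities well above their estimated uncertainties).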
Number of Degrees of Freedom of a Rigid Body System - Proof | Question: Let us define the number of degrees of freedom of a material system as
the number of scalar parameters needed to know the position of each particle of the system with respect to any inertial frame of reference.
Let us also define a rigid body system as
a material system where the distance between particles is fixed.
I have read online and on this forum many "would-be" proofs of the fact that a rigid body system has exactly 6 degrees of freedom. However, they usually lack some logical step; e.g., in some proofs they only show that the number of degrees of freedom is at least 6, but not that it is no more than 6; in others, they adjust incorrect formulas without showing that the obtained formula is the only right one. Therefore, I would like to know if anyone knows a rigorous, yet intuitive proof of this fact. It would be amazing if such a proof also contained an argument concerning translations and rotations.
As always, any help or comment is highly appreciated!
Answer: Definition. The dimension of the configuration space is called the number of degrees of freedom.
Thus, if we find the dimension of the configuration space of a rigid body, we can deduce its degrees of freedom.
Definition. A rigid body is a system of point masses, constrained by holonomic relations expressed by the fact that the distance between points is constant:
$$|\mathbf{x}_i - \mathbf{x}_j| = r_{ij} = \text{const.}$$
Theorem. The configuration manifold of a rigid body is a six-dimensional manifold, namely, $\mathbb{R}^3 \times \operatorname{SO}(3)$ (the direct product of a three-dimensional space $\mathbb{R}^3$ and the group $\operatorname{SO}(3)$ of its rotations), as long as there are three points in the body not in a straight line.
The dimension of $\mathbb{R}^3 \times \operatorname{SO}(3)$ is indeed $3+3=6$.
Proof. Let $\mathbf{x}_1$, $\mathbf{x}_2$, and $\mathbf{x}_3$ be three points of the body which do not lie in a straight line. Consider the right-handed orthonormal frame whose first vector is in the direction of $\mathbf{x}_2-\mathbf{x}_1$, and whose second is on the $\mathbf{x}_3$ side in the $\mathbf{x}_1 \mathbf{x}_2 \mathbf{x}_3$-plane (Figure). It follows from the conditions $|\mathbf{x}_i - \mathbf{x}_j|=r_{ij}$ ($i=1,2,3$), that the positions of all the points of the body are uniquely determined by the positions of $\mathbf{x}_1$, $\mathbf{x}_2$, and $\mathbf{x}_3$, which are given by the position of the frame. Finally, the space of frames in $\mathbb{R}^3$ is $\mathbb{R}^3 \times \operatorname{SO}(3)$, since every frame is obtained from a fixed one by a rotation and a translation. $\blacksquare$
Not only did we find the dimension of the configuration space, but exactly which space it is!
Intuition. You need 3 parameters from $\mathbb{R}^3$ in order to describe the position of the object (i.e. a point $\mathbf{x}$ in $\mathbb{R}^3$ which locates the position of the frame), and 3 parameters from $\operatorname{SO}(3)$ to describe its orientation (i.e. an element $R$ of $\operatorname{SO}(3)$ which defines the orientation of the frame). Thus, 6 parameters in total.
Mathematically, we can write that the configuration space (manifold) of a rigid body is the space defined by
$$\mathbb{R}^3 \times \operatorname{SO}(3) = \{(\mathbf{x},R):\mathbf{x} \in \mathbb{R}^3 \text{ and } R \in \operatorname{SO}(3)\}.$$
If you have trouble understanding the last bit of the proof, look at this question of mine. And if you are wondering why $\operatorname{SO}(3)$ is 3-dimensional, i.e. has 3 parameters, consider looking at this.
I hope that helps!
Source. Mathematical Methods of Classical Mechanics by V.I. Arnold. | {
"domain": "physics.stackexchange",
"id": 88238,
"tags": "newtonian-mechanics, rotational-dynamics, coordinate-systems, rigid-body-dynamics, degrees-of-freedom"
} |
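A numeric sanity check of the dimension count in the proof above (my own sketch, not from the source): for three non-collinear marker points there are 9 coordinates and 3 pairwise distance constraints. The rank of the constraint Jacobian is 3, leaving $9 - 3 = 6$ free parameters, matching $\dim(\mathbb{R}^3 \times \operatorname{SO}(3))$.

```python
def constraint_jacobian_rank(points):
    # Gradients of |x_i - x_j|^2 = const for the three pairs, as rows
    # over the 9 coordinates; the rank counts independent constraints.
    pairs = [(0, 1), (0, 2), (1, 2)]
    rows = []
    for i, j in pairs:
        row = [0.0] * 9
        for c in range(3):
            d = points[i][c] - points[j][c]
            row[3 * i + c] = 2 * d
            row[3 * j + c] = -2 * d
        rows.append(row)
    # Gaussian elimination: count pivot columns
    rank = 0
    for col in range(9):
        pivot = next((r for r in range(rank, len(rows)) if abs(rows[r][col]) > 1e-9), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and abs(rows[r][col]) > 1e-9:
                f = rows[r][col] / rows[rank][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # not collinear
print(9 - constraint_jacobian_rank(pts))  # → 6 degrees of freedom
```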
Advent Of Code 2017 Day 4 (part 1) in Functional Programming (FP) | Question: I wanted to practice functional programming without using any library, using vanilla JS only. So I took a problem from Advent of Code (part 1 of Day 4):
A new system policy has been put in place that requires all accounts
to use a passphrase instead of simply a password. A passphrase
consists of a series of words (lowercase letters) separated by spaces.
To ensure security, a valid passphrase must contain no duplicate
words.
For example:
aa bb cc dd ee is valid.
aa bb cc dd aa is not valid - the word aa appears more than once.
aa bb cc dd aaa is valid - aa and aaa count as different words.
The system's full passphrase list is available as
your puzzle input. How many passphrases are valid?
My solution in FP:
const INPUT =
`pphsv ojtou brvhsj cer ntfhlra udeh ccgtyzc zoyzmh jum lugbnk
vxjnf fzqitnj uyfck blnl impo kxoow nngd worcm bdesehw
...
caibh nfuk kfnu llfdbz uxjty yxjut jcea`;
const get = input => input.split('\n');
const countDuplicate = words => words.reduce((acc, word) => {
return Object.assign(acc, {[word]: (acc[word] || 0) + 1});
}, {});
const onlyUniqueWords = phrases => {
const words = phrases.split(' ');
const duplicateWords = countDuplicate(words);
return !Object.values(duplicateWords).some(w => w > 1);
};
const phrasesWithUniqueWords = get(INPUT).filter(onlyUniqueWords);
console.log("solution ", phrasesWithUniqueWords.length);
Is there a better way to write it in FP with pure JavaScript, i.e. no additional FP library? Any other improvement suggestions are welcomed.
Answer: I don't remember where I saw it (maybe in one of your other posts), but I recently saw a technique for ensuring the uniqueness of array elements: iterate over the elements and check that each element's index equals the value returned by Array.indexOf(). Using that technique here, Array.every() can be used to ensure each word is unique.
With that technique, there is no need to count the number of occurrences of each word and hence the reduction can be removed. Thus countDuplicate can be eliminated and onlyUniqueWords can be simplified like below:
const onlyUniqueWords = phrases => {
const words = phrases.split(' ');
return words.every((word, index) => words.indexOf(word) === index);
};
Here is a jsPerf comparing this technique with the original. The original appears to be ~83-86% slower.
In the snippet below (and the jsPerf mentioned above), I replaced the 3rd line (i.e. ...) with two lines similar to those in the problem description. The first inserted line has unique words and the second does not.
const INPUT =
`pphsv ojtou brvhsj cer ntfhlra udeh ccgtyzc zoyzmh jum lugbnk
vxjnf fzqitnj uyfck blnl impo kxoow nngd worcm bdesehw
aa bb cc dd
aa bb cc dd ee cc
caibh nfuk kfnu llfdbz uxjty yxjut jcea`;
const get = input => input.split('\n');
const onlyUniqueWords = phrases => {
const words = phrases.split(' ');
return words.every((word, index) => words.indexOf(word) === index);
};
const phrasesWithUniqueWords = get(INPUT).filter(onlyUniqueWords);
console.log("solution ", phrasesWithUniqueWords.length);
P.S. I feel like this technique could very easily apply to your subsequent post for the 2nd part of this...,
but perhaps for the sake of varying ideas, I shall refrain from mentioning this in that post... | {
"domain": "codereview.stackexchange",
"id": 29017,
"tags": "javascript, programming-challenge, functional-programming, ecmascript-6"
} |
Applicability of the definition of thermodynamic temperature | Question: I have a question about the definition of temperature, given by $\frac{\partial S}{\partial E}(E,V,N) = \frac{1}{T}$
Is this valid only for isolated systems (and not applicable, for instance, to a (small) closed system in contact with a heat-bath)?
From the above formula (if it is applicable to any thermodynamic system in equilibrium), it would seem that temperature is a deterministic function of $(E,V,N)$ --- which would contradict the Maxwell–Boltzmann distribution of energies of a system with fixed $(T,V,N)$.
Answer: I think you are mixing Thermodynamics and Statistical Mechanics concepts.
From the point of view of Thermodynamics, there is a set of definitions and relations between state functions that hold for all systems at equilibrium, independently of the boundary conditions. Entropy and the other state functions are well-defined for an isolated system (fixed energy $E$) and for a system in contact with a thermostat (fixed temperature $T$). Also, a relation like $\frac{1}{T}=\left.\frac{\partial S}{\partial E}\right|_{V,N}$ is universally valid, although its meaning will be slightly different. For an isolated system, the temperature is provided by knowledge of the fundamental equation $S(E,V,N)$. For a thermostatted system, it gives the value of the derivative of entropy with respect to energy in terms of an external parameter (the temperature).
Things are slightly different in statistical mechanics, especially when applied to finite systems. In such a case, relative fluctuations do not vanish, and some quantities may not be well-defined for small systems. However, thermodynamics in its standard form is not intended to apply to small systems. And it is precisely for a finite, small system that the statistical mechanics formulas in different ensembles are not equivalent. However, such ensemble equivalence is not a condition for the validity of thermodynamics.
"domain": "physics.stackexchange",
"id": 100119,
"tags": "thermodynamics, energy, statistical-mechanics, temperature, entropy"
} |
Will a reflection phase change work regardless of wavelength? | Question: If I have a gamma photon traveling in air and it shoots off towards a glass mirror and bounces off it, when I measure its phase, will it have changed by 180${}^\circ$?
I was reading the wikipedia article on reflection phase change and it says
Light waves change phase by 180° when they reflect from the surface of a medium with higher refractive index than that of the medium in which they are travelling. A light wave travelling in air that is reflected by a glass barrier will undergo a 180° phase change [...]
But I didn't know if wavelength of the photon changes that.
Answer: This effect is independent of wavelength, as long as two things hold:
The refractive index of the substance does not become less than that of the incident medium at any wavelength; and
The reflectivity of the substance is sufficiently high at all wavelengths that reflection can actually be detected. | {
"domain": "physics.stackexchange",
"id": 41979,
"tags": "photons, reflection, phase-transition, wavelength, gamma-rays"
} |
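The wavelength-independence of the sign claimed above can be made concrete with the Fresnel amplitude coefficient at normal incidence (a sketch; the function name is mine, and it assumes real refractive indices, i.e. no absorption, at the wavelength in question):

```python
def reflection_coefficient(n1, n2):
    # Fresnel amplitude reflection coefficient at normal incidence;
    # a negative value corresponds to the 180-degree phase change.
    return (n1 - n2) / (n1 + n2)

print(reflection_coefficient(1.0, 1.5))  # air -> glass: -0.2 (phase flip)
print(reflection_coefficient(1.5, 1.0))  # glass -> air: +0.2 (no flip)
```

The sign depends only on which index is larger, so the phase change persists at any wavelength where condition 1 (refractive index of the substance stays above that of the incident medium) holds.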
Is there such a thing as a state-based programming language? | Question: As anyone knows who has read Alan Turing's paper describing the Turing Machine (On Computable Numbers, With an Application to the Entscheidungsproblem), the syntax he uses is vastly different from that of modern languages (and closer to the densely symbolic mathematical writing of the time). This is not very surprising, since he wrote the paper about a decade before the first working electronic computer was finished and several decades before the first compiler was written. More interestingly, though, is the fact that the paradigm Turing uses seems pretty foreign as well. I would expect it to be procedural, but in fact it's what I would call "state-based": he describes a finite set of possible states in which the machine might find itself, and, given a particular state and an input value, he describes what actions the machine should take. In essence, then, the Turing machine is a finite state machine that has access to an infinite strip of rewritable memory locations. Since Turing proves that this machine is functionally equivalent to any other sufficiently powerful mechanical computational device, we can see that his state-based programming language can implement all the algorithms that other languages can implement.
As far as I know, however, there is no modern programming language that actually uses this paradigm. I suppose this is probably because it's a bit of a pain to wrap one's head around, and because it doesn't provide a very natural way of thinking about most algorithms, but I'd still be surprised if no one has at least tried something like this. And there may be some applications for which such a language would work extremely well. For instance, a processor can be represented quite directly as Turing's "universal" machine, which takes as input a coded representation of another machine and then performs the work that machine would perform; so might it be possible to design new processors by "compiling" something akin to Turing's language into a circuit layout for an FPGA? (True, the compiler might not come up with the optimal layout, and this approach might be too abstract to fully characterize the details of chip design, but this is just an example of something I think might work.)
So, my question is: does anyone know of any modern programming languages based on the original Turing machine language, or any languages that use a paradigm akin to this "state-based" paradigm?
Answer: I think that STRIPS and other languages used in Automation Planning are very similar to a "state-based" programming language.
The problem of finding if a STRIPS planning problem is satisfiable is PSPACE-complete, and in the (easy) proof you can see how it can simulate the behaviour of a Turing machine with a finite tape (LBA).
A STRIPS program is composed of:
An initial state;
The specification of the goal states – situations which the planner is trying to reach;
A set of actions. For each action, the following are included: preconditions (what must be established before the action is performed); postconditions (what is established after the action is performed)
There is no "procedural execution", but instead it is checked if a valid transition exists from the initial state to a state where the goal states are satisfied. | {
"domain": "cstheory.stackexchange",
"id": 2087,
"tags": "pl.programming-languages, turing-machines"
} |
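The action model described in the answer above (initial state, goal states, actions with preconditions and postconditions) can be illustrated with a toy forward-search planner. This is my own sketch, not an actual STRIPS implementation: states are sets of facts, and each action carries a name, preconditions, an add list, and a delete list.

```python
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first search from the initial state to any state
    satisfying the goal. Each action is (name, pre, add, delete)."""
    start = frozenset(initial)
    seen, queue = {start}, deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if goal <= state:          # all goal facts established
            return path
        for name, pre, add, delete in actions:
            if pre <= state:       # preconditions hold
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
    return None  # no valid transition sequence exists

actions = [
    ("open_door", {"door_closed"}, {"door_open"}, {"door_closed"}),
    ("walk_through", {"door_open"}, {"inside"}, set()),
]
print(plan({"door_closed"}, {"inside"}, actions))  # → ['open_door', 'walk_through']
```

As the answer notes, there is no procedural execution here: the planner merely checks whether a valid transition sequence exists from the initial state to a goal state. Over $n$ boolean facts there are $2^n$ states, which is why the satisfiability question is PSPACE-complete rather than efficiently solvable.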
Prime factorisation of decidable problems | Question: Disclaimer: I am not a theoretical computer scientist.
The set of decidable problems $\mathbb{D}$ is countable so $\lvert \mathbb{D} \rvert = \lvert \mathbb{N} \rvert$ and this led me to the following idea.
Given two decision problems $p,q$ in $\mathbb{D}$ and $\mathcal{M}_p,\mathcal{M}_q$ the minimal-length Turing Machines deciding $p$ and $q$ we may say that $p$ and $q$ are relatively prime if:
\begin{equation}
K(\mathcal{M}_p|\mathcal{M}_q) = K(\mathcal{M}_p) \tag{1}
\end{equation}
and
\begin{equation}
K(\mathcal{M}_q|\mathcal{M}_p) = K(\mathcal{M}_q) \tag{2}
\end{equation}
where $K(\cdot)$ denotes Kolmogorov Complexity.
I think we should then be able to define $\mathbb{B} \subset \mathbb{D}$ such that $\lvert \mathbb{B} \rvert = \lvert \mathbb{D} \rvert$ and:
\begin{equation}
\forall b \in \mathbb{B} \forall d \in \mathbb{D} \setminus \{b\}, K(\mathcal{M}_b|\mathcal{M}_d)=K(\mathcal{M}_b) \tag{3}
\end{equation}
where $\mathbb{B}$ is analogous to the set of primes in $\mathbb{N}$. Has this idea been explored?
Answer: I think a candidate for such a set $\mathbb{B}$, or something very much like it, could be produced by considering an infinite sequence of singleton languages $L_1=\{w_1\},L_2=\{w_2\},\ldots$, where the concatenation $w_1\cdot w_2\cdot\ldots$ forms an incompressible sequence. You might be able to shave off a bit here or there (this will strongly depend on the encoding -- which is why Kolmogorov complexity really only makes sense up to an additive constant). But modulo additive constants, this family of strings has the property that being given one of them as input does not in any way help in generating any other one.
"domain": "cstheory.stackexchange",
"id": 4889,
"tags": "turing-machines, kolmogorov-complexity"
} |
How to export picture from rxgraph? | Question:
When my rxgraph graph is larger than my screen, it is difficult to save it in a picture format.
How to do that?
Thank you~
Originally posted by sam on ROS Answers with karma: 2570 on 2011-07-11
Post score: 3
Answer:
Just run:
rxgraph -o rxgraph.dot
dot -Tpng -o output.png rxgraph.dot
Instead of PNG, you can use any output format supported by graphviz.
Originally posted by Martin Günther with karma: 11816 on 2011-07-11
This answer was ACCEPTED on the original site
Post score: 11
Original comments
Comment by Sudhan on 2012-10-24:
how to save rxgraph along with the information column (for all nodes) on the right side?
Comment by Martin Günther on 2012-10-24:
The only way I can think of for doing that is a screenshot. | {
"domain": "robotics.stackexchange",
"id": 6112,
"tags": "ros, rxgraph"
} |
Subset of $k$ vectors with shortest sum, with respect to $\ell_\infty$ norm | Question: I have a collection of $n$ vectors $x_1, ..., x_n \in \mathbb{R}_{\geq 0}^{d}$. Given these vectors and an integer $k$, I want to find the subset of $k$ vectors whose sum is shortest with respect to the uniform norm. That is, find the (possibly not unique) set $W^* \subset \{x_1, ..., x_n\}$ such that $\left| W^* \right| = k$ and
$$W^* = \arg\min\limits_{W \subset \{x_1, ..., x_n\} \land \left| W \right| = k} \left\lVert \sum\limits_{v \in W} v \right\rVert_{\infty}$$
The brute-force solution to this problem takes $O(dkn^k)$ operations - there are ${n \choose k} = O(n^k)$ subsets to test, and each one takes $O(dk)$ operations to compute the sum of the vectors and then find the uniform norm (in this case, just the maximum coordinate, since all vectors are non-negative).
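For reference, the brute force just described can be transcribed directly (a sketch in Python; vectors are tuples of non-negative numbers, and the function name is mine):

```python
from itertools import combinations

def shortest_sum_subset(vectors, k):
    # Try every size-k subset and keep the one whose coordinate-wise
    # sum has the smallest infinity norm (the max coordinate, since
    # all entries are non-negative).
    best, best_norm = None, float("inf")
    for subset in combinations(vectors, k):
        total = [sum(coords) for coords in zip(*subset)]
        norm = max(total)
        if norm < best_norm:
            best, best_norm = subset, norm
    return best, best_norm

subset, norm = shortest_sum_subset([(2, 0), (0, 2), (1, 1)], 2)
print(subset, norm)  # → ((2, 0), (0, 2)) 2
```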
My questions:
Is there are a better algorithm than brute force? Approximation algorithms are okay.
One idea I had was to consider a convex relaxation where we assign each vector a fractional weight in $[0, 1]$ and require that the weights sum to $k$. The resulting subset of $\mathbb{R}^d$ spanned by all such weighted combinations is indeed convex. However, even if I we can find the optimum weight vector, I am not sure how to use this set of weights to choose a subset of $k$ vectors. In other words, what integral rounding scheme to use?
I have also thought about dynamic programming, but I'm not sure if this would end up being faster in the worst case.
Consider a variation where we want to find the optimal subset for every $k$ in $[n]$. Again, is there a better approach than solving the problem naively for each $k$? I think there ought to be a way to use the information from runs on subsets of size $k$ to those of size $(k + 1)$ and so on.
Consider the variation where instead of a subset size $k$, one is given some target norm $r \in \mathbb{R}$. The task is to find the largest subset of $\{x_1, ..., x_n\}$ whose sum has uniform norm $\leq r$. In principle one would have to search over $O(2^n)$ subsets of the vectors. Do the algorithms change? Further, is the decision version (for example, we could ask if there exists a subset of size $\geq k$ whose sum has uniform norm $\leq r$) of the problem NP-hard?
Suppose we now know that our vectors $x_i$ all come from $\{0, 1\}^d$. Does anything change?
Answer: The problem is NP-hard by a reduction from https://en.wikipedia.org/wiki/Independent_set_(graph_theory) or set packing.
One approach to solve the problem is to use integer linear programming: define 0-or-1 variables $v_1,\dots,v_n$, and then minimize $t$ subject to the constraints $\|\sum_i v_i x_i \|_\infty \le t$ and $\sum_i v_i = k$. Note that $\|\sum_i v_i x_i \|_\infty \le t$ iff $-t \le \sum_i v_i x_{ij} \le t$ for all $j$, so this can be expressed using linear constraints. Then, apply an off-the-shelf ILP solver and hope it terminates in a reasonable amount of time.
(The ILP solver will probably apply methods such as solving the associated linear program and then applying randomized rounding, so you don't need to implement it yourself.)
If $d$ is very small, it might be possible to solve the problem in something like $\tilde{O}(dkn^{k/2})$ time using meet-in-the-middle search combined with a nearest-neighbor data structure, but I haven't worked out the details, and I expect it won't scale to large $d$. | {
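For small $n$, the naive exhaustive search mentioned in the question can serve as a correctness baseline for the ILP; a sketch (the function name and tuple encoding of vectors are my own):

```python
from itertools import combinations

def best_subset(vectors, k):
    """Exhaustively minimize the infinity norm of the subset sum over
    all size-k subsets -- the same objective the ILP encodes with 0/1
    variables v_i."""
    d = len(vectors[0])
    best, best_norm = None, float("inf")
    for subset in combinations(range(len(vectors)), k):
        coord_sums = [sum(vectors[i][j] for i in subset) for j in range(d)]
        norm = max(abs(s) for s in coord_sums)
        if norm < best_norm:
            best, best_norm = subset, norm
    return best, best_norm
```

This runs in $O(\binom{n}{k}\, d\, k)$ time, so it is only useful for checking a solver's output on small instances.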
"domain": "cs.stackexchange",
"id": 15739,
"tags": "time-complexity, optimization"
} |
Hardness of node partitioning under shortest path constraint | Question: Given a directed graph $G=(V,E)$. $\forall (i,j) \in E$, there is a weight $w(i,j) \in \mathbb{R}$ (negative weights are possible). A label $l(i)$ is associated with each node $i \in V$. How can we assign $k$ (or fewer) distinct values to $l(i)$ such that
$$l(j) \leq l(i) + w(i,j),\quad \forall (i,j) \in E ?$$
Notice that when $k=|V|$, this problem is easily solvable by a shortest-path algorithm (Bellman-Ford). But what is the hardness of this problem for $k < |V|$?
Answer: Let’s call an assignment of vertex labels feasible if it satisfies all the inequality constraints, ignoring the condition on the number of distinct labels.
Here is what I think is a proof that it is NP-complete to decide whether a given directed graph G=(V, E) with integer (possibly negative) edge weights has a feasible assignment of labels which uses at most k distinct labels, for k=|V|/2. We construct a reduction from the following NP-complete problem [WY92].
Equal Subset Sum
Instance: A finite set S of positive integers.
Question: Do there exist two disjoint nonempty subsets S1 and S2 of S whose sums are equal to each other?
Let S={a1, …, an} be a set of positive integers, where n=|S|. Construct a directed graph G with 2n vertices u1, …, un, v1, …, vn by connecting ui and vi in both directions for each i. Give the weight ai to the edge (ui, vi) and −ai to the edge (vi, ui).
Consider the instance (G, n) of the current problem. Note that an assignment l of labels is feasible if and only if for each i, it holds l(vi)=l(ui)+ai. From this, we can prove the following, establishing that the above transformation is a reduction from Equal Subset Sum to the current problem.
Claim. G has a feasible assignment which uses at most n distinct labels if and only if there exist two disjoint nonempty subsets S1 and S2 of S whose sums are equal to each other.
Proof. First observe that if we are given a feasible assignment l (which might use any number of distinct labels), we can construct a directed graph Hl from G by removing the edges with negative weights and merging vertices with equal labels. Note that Hl has exactly n edges whose weights are a1, …, an. Also, note that the number of vertices of Hl is equal to the number of distinct labels used in l.
For the “only if” direction, given a feasible assignment l which uses at most n distinct labels, consider the directed graph Hl. Since Hl has at most n vertices and exactly n edges, Hl contains a cycle C ignoring the direction of edges. Let S1 be the set of weights of edges appearing in C in one direction, and S2 be the set of weights of edges appearing in C in the other direction. It is easy to see that the sum of S1 is equal to the sum of S2.
For the “if” direction, fix subsets S1 and S2 of S satisfying the condition. Without loss of generality, assume that S1={a1, a2, …, as} and S2={as+1, as+2, …, as+t}, where s=|S1| and t=|S2|. Then we assign the following labels to the vertices of G:
l(u1)=l(us+1)=0.
l(ui)=l(vi−1) for 2≤i≤s and s+2≤i≤s+t.
l(ui)=0 for s+t+1≤i≤n.
l(vi)=l(ui)+ai for 1≤i≤n.
It is easy to verify that this assignment is feasible and uses at most n distinct labels. (The graph Hl in this case consists of two edge-disjoint paths from the vertex labeled 0 to the vertex labeled $\sum_{i=1}^s a_i = \sum_{i=s+1}^{s+t} a_i$ and n−s−t edges originating at the vertex labeled 0.) QED.
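As a sanity check, the labeling recipe from the “if” direction can be built and verified mechanically; a sketch with 0-based indices (the encoding and helper name are my own, assuming S1 = {a1, …, as} and S2 = {as+1, …, as+t} as in the proof):

```python
def build_labels(a, s, t):
    """Build labels per the proof's recipe. Feasibility here means
    l(v_i) = l(u_i) + a_i for every i, which satisfies both the edge
    (u_i, v_i) of weight a_i and the edge (v_i, u_i) of weight -a_i."""
    n = len(a)
    assert sum(a[:s]) == sum(a[s:s + t]), "S1 and S2 must have equal sums"
    u, v = [0] * n, [0] * n
    for i in range(n):
        # chain within S1 (indices 1..s-1) and within S2 (s+1..s+t-1);
        # the two chain starts and all remaining u_i get label 0
        if 1 <= i <= s - 1 or s + 1 <= i <= s + t - 1:
            u[i] = v[i - 1]
        v[i] = u[i] + a[i]
    return u, v, len(set(u) | set(v))
```

For example, a = [3, 4, 5, 2] with S1 = {3, 4} and S2 = {5, 2} yields labels {0, 3, 5, 7}, i.e. n = 4 distinct values, matching the claim.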
References
[WY92] Gerhard J. Woeginger and Zhongliang Yu. On the equal-subset-sum problem. Information Processing Letters, 42(6):299–302, July 1992. http://dx.doi.org/10.1016/0020-0190(92)90226-L | {
"domain": "cstheory.stackexchange",
"id": 178,
"tags": "graph-theory, np-hardness"
} |
What is this structure called? | Question: I've been writing this data structure for a few days, and now I'm really curious about what it actually is, and to get some critique on my logic.
A branch, based on this usage, exists when a HEAD node contains at least one node of the same type referenced.
The purpose of the structure is to have branches that are arranged by type. Each node on the branch has a reference to the next node on the branch (always of the same type) and an entry point to a Subdata branch. Subdata in this case is an instance of a class that inherits from AchievementNode. When subdata is added and it is the first of its kind on that branch, it has the HEAD tag applied to it; additionally, it also has a tag that contains the metadata of the type of data contained (to bypass the typeof calls).
Implementation:
public abstract class AchievementNode : ScriptableObject
{
public enum NodeTypes
{
NONE = 0x0,
HEAD = 0x1,
TAIL = 0x2,
TYPE = 0x4,
DATA = 0x8,
LEVEL = 0x16,
GLOBAL = 0x32
}
public NodeTypes nodeType;
public AchievementNode nextOfType;
public AchievementNode headOfSubnode;
public void OnEnable ()
{
hideFlags = HideFlags.HideAndDontSave;
}
public virtual void Init(NodeTypes type, int enumData)
{
nodeType = type;
}
protected void AddNode(NodeTypes type, AchievementNode originNode, AchievementNode newNode)
{
//Create SubNode branch notch when types mismatch.
if((originNode.nodeType & type) != type)
{
//If Has subNode Data Run to the end and assign new node
if(originNode.headOfSubnode!=null)
{
newNode.nodeType = type | NodeTypes.TAIL;
AppendToTail(type,GetEndOfBranch(originNode.headOfSubnode),newNode);
}//Search for proper SubNodeTypes then add. Wicked Recursion warning here...
else if((originNode.headOfSubnode.nodeType & type) != type)
{
Debug.LogError("Do Gnarly Search To Find!");
return;
}//Doesn't have subnode... add new Subnode.
else
{
newNode.nodeType = type | NodeTypes.HEAD | NodeTypes.TAIL;
originNode.headOfSubnode = newNode;
}
}
else
{
//Add to the current branch
newNode.nodeType = type | NodeTypes.TAIL;
AppendToTail(type,GetEndOfBranch(originNode),newNode);
}
}
private void AppendToTail(NodeTypes type,AchievementNode tailNode, AchievementNode newNode)
{
if((tailNode.nodeType & NodeTypes.HEAD) == NodeTypes.HEAD)
{
tailNode.nodeType = tailNode.nodeType | type;
}
else
{
tailNode.nodeType = type;
}
tailNode.nextOfType = newNode;
}
protected AchievementNode GetEndOfBranch(AchievementNode currentNode)
{
//Special Case where Node is HEAD and TAIL.
if((currentNode.nextOfType.nodeType & NodeTypes.TAIL) != NodeTypes.TAIL)
{
return GetEndOfBranch(currentNode.nextOfType);
}
else
{
return currentNode;
}
}
protected void SetType(NodeTypes type)
{
nodeType = type;
}
protected virtual AchievementNode FindInHierarchy(NodeTypes nodeCheck, AchievementNode currentNode)
{
if(currentNode == null)
{
return null;
}
else if((currentNode.nodeType & nodeCheck) == nodeCheck)
{
return currentNode;
}
else
{
return FindInHierarchy(nodeCheck,currentNode.nextOfType);
}
}
}
Answer:
Why do you have HEAD and TAIL NodeTypes? Doesn't their position on the branch tell you that?
I'm concerned that you have structure traversal code infused into AchievementNode class. Classically the structure - tree, queue, List, etc. - is independent of the objects it holds. I would think greatly upon using something like Dictionary<T>, List<T>, etc. Use that structure's inherent reference/traversal features in the context of NodeType. - wrapping (inheriting?) that structure along with your concept of NodeType into a class that is essentially a MyNodes<AcheivementNode> class. I wonder, is MyNodes<NodeType> what you are really building? In any case I suspect the separation of concerns will make the NodeType idea standout better conceptually and architecturally.
In how many places do you have NodeType defined, I wonder. Why is the NodeType enum defined inside AchievementNode? That's unusual. You want it hidden from client code? I don't think so, since you have method parameters of NodeType.
"domain": "codereview.stackexchange",
"id": 3922,
"tags": "c#, recursion, linked-list, queue"
} |
What is the smallest possible packing of earth air inside a box? | Question: It is often stated that all the air in a room could suddenly rush into one corner but that there is simply a very low probability that it would do so. One possible way this could in fact happen is via the phase change of gas into a solid via really low temperatures.
What is the smallest possible way of packing a 1 by 1 by 1 meter box of Nitrogen gas that starts at standard temperature and pressure that is still Nitrogen and not some other kind of matter such as Neutronium or Lead?
Answer: The relevant properties of nitrogen at 20°C and 1 atm are:
13.8 standard cubic feet per pound of nitrogen gas.
0.808 specific gravity of liquid nitrogen.
1/13.8 = 0.07246 pounds of gaseous nitrogen per cubic foot.
Converting to metric units we get 1.1607 kg/m³ for the density of gaseous nitrogen
1000 kg/m³ equals the density of liquid water.
Therefore 808 kg/m³ is the density of liquid nitrogen.
1.1607/808 = 0.001437
1 m³ of gaseous nitrogen will condense to only about 0.14% of a cubic meter.
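The unit conversions above are easy to mis-type, so here is a quick sanity check of the quoted figures (my own arithmetic, mirroring the steps in the answer):

```python
# density of gaseous N2 from "13.8 standard cubic feet per pound"
scf_per_lb = 13.8
kg_per_lb = 0.45359237
m3_per_ft3 = 0.3048 ** 3
rho_gas = (1 / scf_per_lb) * kg_per_lb / m3_per_ft3   # ~1.16 kg/m^3

# density of liquid N2 from its specific gravity
rho_liquid = 0.808 * 1000                              # kg/m^3

ratio = rho_gas / rho_liquid                           # ~0.00144
```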
"domain": "physics.stackexchange",
"id": 25499,
"tags": "phase-transition, gas"
} |
DFA Minimization | Question: I am currently in a class that deals with DFA's and their minimization. However I believe I have reached a DFA where the method of minimization we were taught doesn't work.
I have the following DFA (in JFLAP)
Using the method given in class, I do the following: see if there are any indistinguishable states and, if so, collapse them together.
| q0 q1 q2 q3 q4 q5
-------------------------
a| q1 q1 q2 q3 q4 q5
b| q3 q2 q5 q4 q5 q5
However there are no states where both a and b go to the same state, therefore no collapse states.
However, when doing it in JFLAP, it gives the following
The only way I can see to get what JFLAP does is to just think it through and realize that both q1 and q3 can be joined, as well as q2 and q4.
Answer: Your method of DFA minimization is incorrect. You start by assuming all states to be different and merge the indistinguishable ones; this may not give the correct minimization. The correct way is to assume all states to be the same and separate those which are distinguishable. Look at this for the exact algorithm.
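The approach the answer describes (start with only the accepting/non-accepting split and repeatedly separate distinguishable states) can be sketched as partition refinement; the encoding of the transition table is my own:

```python
def minimize(states, alphabet, delta, accepting):
    """Moore-style partition refinement: begin with the accepting /
    non-accepting split and repeatedly separate states whose
    transitions lead to different equivalence classes."""
    cls = {s: int(s in accepting) for s in states}
    while True:
        # signature: own class plus the class reached on each symbol
        sig = {s: (cls[s],) + tuple(cls[delta[s][a]] for a in alphabet)
               for s in states}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        new_cls = {s: ids[sig[s]] for s in states}
        if len(set(new_cls.values())) == len(set(cls.values())):
            return new_cls          # no class was split further: done
        cls = new_cls
```

States mapped to the same class id can be merged into one state of the minimal DFA.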
"domain": "cs.stackexchange",
"id": 4390,
"tags": "finite-automata"
} |
Practical method to weigh human limbs with common household items? | Question: What methods could be used to determine (or estimate within a reasonable margin of error) the mass of a living human's limbs, short of cutting them off? And more interestingly, how can this be done without any high tech equipment, just with the means commonly found in households?
A scale for example is allowed. An MRI isn't ;)
Answer: For those who upvoted me, I'm sorry. I just checked my equations again and I found that I made stupid mistakes when writing the torque equations. I made a mistake in writing the length of the lever arms of $W-W_{arm}$ and $W_{board}$, it should have been $L-x_a$ and $d-x_a$ instead of $L$ and $d$.
It turns out that the method in my previous answer doesn't work because every time we find $W_{arm}$, we won't find it alone: it always sticks together with $L$ as one product $W_{arm}L$. At best we can only get $W_{arm}L$ as a single variable. So unless we conduct another measurement using a different method that has nothing to do with measuring the center of mass, we won't be able to find $W_{arm}$. Even worse, even if we come up with another experiment locating the center of mass, or anything related to torque with various body orientations, we will always end up with $W_{arm}$ sticking to another quantity with dimension of length. Increasing the number of measurements won't help either. So we have to give up trying to find $W$ with a torque method alone and be satisfied with $W\times[\text{Length}]$; we have to figure out another experiment to find $W\times F(\text{Length})$. The only mechanical experiment that I can think of involves centrifugal force, and since it's a dynamical experiment it's hard to measure. So I think perhaps Hennes' method is better. But if we still insist on using a mechanical method, here is one way to do it:
Let's say we want to calculate the weight of someone's arm. First, place a thin board horizontally and support it with a pivot and a weigh scale. Ask that guy to stand up on it but with his hands oriented straight horizontally.
Where $N$ is the reading of the scale times the gravitational acceleration $g$. The balance in torque gives
$W_{arm}(L-x)+N(l+x)=(W-W_{arm})x+W_{board}(d+x)$
All quantities involved in the two equations above can be measured using a scale and a meter stick, except $W_{arm}$ and $L$.
Now for the second experiment, we need two high-speed cameras and a scale with a high refresh rate. Hang a paper above the guy's head and ask him to swing his arms quickly while keeping them straight until he hits the paper. While he is doing that, record the movement of the hand using one camera and record the reading of the scale using the other. The angular velocity $\omega$ of the hand just before hitting the paper can be obtained from the video recorded by the first camera. Angular velocity is much easier to calculate than velocity since we don't need to worry about parallax so much, but still it's not an easy task. Then we can obtain the normal force at the instant the hand hits the paper from the second video: it is the reading of the scale at the moment the sound of the hand hitting the paper is heard. Let's say the reading is $N'$.
$\Sigma F_y$ at the moment when the hand hits the paper gives
$N'=W-W_{arm}+W_{arm}(1+\frac{\omega^2 L}{g})$
Now we can substitute $L$ from one equation into the other to obtain $W_{arm}$. Note that the $W_{arm}$ we get is the weight of two arms, so we need to divide the final result by a factor of two to get the weight of one arm. We can also measure the weight of a leg using a similar method.
"domain": "physics.stackexchange",
"id": 6619,
"tags": "mass, measurements, home-experiment"
} |
Arduino rosserial - Unable to sync with device | Question:
Hi,
I have been using rosserial_arduino in order to run a ROS node on an Arduino.
I have the following error:
[ERROR] [WallTime: 1433855869.447165] Unable to sync with device; possible link problem or link software version mismatch such as hydro rosserial_python with groovy Arduino
But if I use the Arduino IDE before running the node, it works fine.
[INFO] [WallTime: 1433858173.848825] Note: publish buffer size is 512 bytes
[INFO] [WallTime: 1433858173.849108] Setup publisher on /joystick_raw [lhd_msgs/Joystick]
[INFO] [WallTime: 1433858173.855225] Note: subscribe buffer size is 512 bytes
[INFO] [WallTime: 1433858173.855483] Setup subscriber on /leds_ocu [lhd_msgs/Leds]
If I disconnect the Arduino and connect it again, it stops working.
It is not a problem with the USB permissions.
Thanks
Can you help me out?
Originally posted by MKnight on ROS Answers with karma: 51 on 2015-06-09
Post score: 2
Original comments
Comment by Andromeda on 2015-06-09:
possible link problem or link software version mismatch such as hydro rosserial_python with groovy Arduino
do you have hydro installed and did you compile the libraries with groovy?!?!
Answer:
I encountered the same problem; mine was due to the buffer being overfilled (see q/a). Try increasing the buffer size or reducing the message size. I'm not sure of the contents of your messages, but that was my issue.
Also try a different USB cable; I have also encountered faulty cables giving this issue.
Originally posted by miguel with karma: 170 on 2015-07-23
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by sirrarthur on 2021-06-20:
I had the same issue and the errors looked like a memory problem: I could run each half of my code separately, but when I put the halves together I got that error. Also, I tried to comment out code to identify the "problematic section," but kept identifying different lines that caused the error. In the end I was able to work around this by reducing the number of publishers and the message size in each one. | {
"domain": "robotics.stackexchange",
"id": 21868,
"tags": "arduino, rosserial"
} |
Why do steam and diesel engines have differing cylinder configurations? | Question: Consider a fixed steam engine, such as in a Lancashire cotton mill or pumping in a Cornish mine. Then consider a fixed diesel engine such as a large generator or ship's engine.
Why does the steam engine have one or two large cylinders with long strokes, but the diesel has many cylinders with relatively short strokes?
Answer: Steam engines can expand the steam only so much before the water vapor starts to condense. In addition, the volume of the vapor increases significantly as it expands, making it difficult to obtain further work, as the cylinder size must increase to accommodate the volume of vapor passing through it.
Most steam engines use three or four cylinders to expand the steam, and are referred to as "triple expansion" engines. The steam enters the smallest, or high pressure cylinder, first out of the boiler. Then, it leaves the HP cylinder and enters an intermediate pressure cylinder next, which is physically larger than the HP cylinder. Finally, the steam enters the low pressure cylinder, which may be split into two cylinders because of the large volume of steam which must be handled.
On the other hand, a diesel engine does not expand the combustion gases in multiple stages like a steam engine. The fuel is injected and burned all in one cylinder. Large diesel engines can therefore have their power output increased by adding additional cylinders. These engines are usually designed in a modular fashion to make it relatively easy to increase the number of cylinders during construction. | {
"domain": "engineering.stackexchange",
"id": 2054,
"tags": "steam, diesel"
} |
Why must a Primary Index be sparse? | Question: Reading Fundamentals of Database Systems 7th Edition, on Page 603, it says,
Indexes can also be characterized as dense or sparse. A dense index has an index entry for every search key value (and hence every record) in the data file. A sparse (or nondense) index, on the other hand, has index entries for only some of the search values. A sparse index has fewer entries than the number of records in the file. Thus, a primary index is a nondense (sparse) index, since it includes an entry for each disk block of the data file and the keys of its anchor record rather than for every search value (or every record).
Is that true though? As far as a database is concerned, if there are no sparse-indexes provided, is it said that all data-files have no "primary index"? I get that a specific designation for a sparse unique index on a block may be required, but is that what primary means? What does this mean for a database, like PostgreSQL, that implements Primary Keys on a dense btrees, and stores its rows in an unordered heap? Does it, technically, not have a "Primary Index"?
Answer: Without the context of the excerpt, I found it confusing since it seemed to depend on fairly specific implementation details. This is, in fact, the case. These course notes give a decent overview of this and related terminology.
A primary index refers to an index stored in sorted order on the sorting key of data stored in blocks. To look up a value, you do a binary search on the index which will produce a pointer to the block, and then you can do a binary search on the data in the block. If the data wasn't in blocks, there would be no point to this as you could just do binary search directly on the data file. (With a primary index, while the contents of the blocks need to be in sorted order, the blocks themselves don't necessarily need to be.) This index is not dense since it stores a pointer to a block which will typically contain multiple records, so there is not a one-to-one correspondence between index entries and records.
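The two-level lookup just described (binary search over the anchor keys in the sparse index, then binary search within the fetched block) looks roughly like this; the in-memory lists stand in for disk blocks:

```python
from bisect import bisect_right

def sparse_lookup(blocks, key):
    """blocks: list of sorted lists of keys, each standing in for one
    disk block; the sparse index stores only each block's anchor
    (first) key."""
    anchors = [b[0] for b in blocks]           # the sparse index itself
    i = bisect_right(anchors, key) - 1         # binary search on the index
    if i < 0:
        return False                           # key below the first anchor
    block = blocks[i]                          # "fetch" the one block
    j = bisect_right(block, key) - 1           # binary search in the block
    return j >= 0 and block[j] == key
```

Note the index holds one entry per block, not per record, which is exactly why it is sparse.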
If you wanted to lookup data based on a key that the records are not sorted on, then you couldn't rely on records between index entries being in the same block, thus an index entry would be required for each specific record. This would be a dense index.
So the thing to note is that this terminology is referring to a specific way of storing and indexing data, namely as a collection of sorted blocks. "Primary index" has nothing to do with "primary key" in this terminology. A primary index depends on how the data is actually sorted (assuming it is sorted) which doesn't at all need to be based on the primary key, e.g. you may sort by a timestamp. There is no need for a "primary index" to exist at all.
A B-tree (and its variations) is just a different indexing approach entirely. You can still potentially apply the "dense index"/"sparse index" terminology to it. For example, if your data is sorted on the key the B-tree index is indexing, then it could just as well store only pointers to blocks and then perform a binary search within the block once fetched. This arrangement would produce a sparse index. You could easily imagine different arrangements that would produce a B-tree index that is dense.
I think you are viewing "primary index" as being more widely applicable than it is. I know I was at first which was the source of my confusion. In the context in which it applies, it is a reasonable term, but "primary index" is way too uninformative to stand alone. | {
"domain": "cs.stackexchange",
"id": 10585,
"tags": "terminology, databases"
} |
Is there an algorithm that finds the forbidden minors? | Question: The Robertson–Seymour theorem says that any minor-closed family $\mathcal G$ of graphs can be characterized by finitely many forbidden minors.
Is there an algorithm that for an input $\mathcal G$ outputs the forbidden minors or is this undecidable?
Obviously, the answer might depend on how $\mathcal G$ is described in the input.
For example, if $\mathcal G$ is given by a machine $M_\mathcal G$ that can decide membership, we cannot even decide whether $M_\mathcal G$ ever rejects anything.
If $\mathcal G$ is given by finitely many forbidden minors - well, that's what we're looking for.
I would be curious to know the answer if $M_\mathcal G$ is guaranteed to stop on any $G$ in some fixed amount of time in $|G|$.
I'm also interested in any related results, where $\mathcal G$ is proved to be minor-closed with some other certificate (like in case of $TFNP$ or WRONG PROOF).
Update: The first version of my question turned out to be quite easy, based on the ideas of Marzio and Kimpel, consider the following construction.
$M_\mathcal G$ accepts a graph on $n$ vertices if and only if $M$ does not halt in $n$ steps. This is minor closed and the running time depends only on $|G|$.
Answer: The answer by Mamadou Moustapha Kanté (who did his PhD under supervision of Bruno Courcelle) to a similar question cites A Note on the Computability of Graph Minor Obstruction Sets for Monadic Second Order Ideals (1997) by B. Courcelle, R. Downey, and M. Fellows for a non-computability result (for MSOL-definable graph classes, i.e. classes defined by a Monadic Second order formula) and The obstructions of a minor-closed set of graphs defined by a context-free grammar (1998) by B. Courcelle and G. Sénizergues for a computability result (for HR-definable graph classes, i.e. classes defined by a Hyperedge Replacement grammar).
The crucial difference between the computable and the non-computable case is that (minor-closed) HR-definable graph classes have bounded treewidth, while (minor-closed) MSOL-definable graph classes need not have bounded treewidth. In fact, if a (minor-closed) MSOL-definable graph class has bounded treewidth, then it is also HR-definable.
The treewidth seems to be really the crucial part for separating the computable from the non-computable cases. Another known result (by M. Fellows and M. Langston) basically says that if a bound for the maximum treewidth (or pathwidth) of the finite set of excluded minors is known, then the (finite) minimal set of excluded minors becomes computable.
It is not even known whether the (finite) minimal set of excluded minors for the union (which is trivially minor-closed) of two minor-closed graph classes each given by their respective finite set of excluded minors can be computed, if no information about treewidth (or pathwidth) is available. Or maybe it has even been proved in the meantime that it is non-computable in general. | {
"domain": "cstheory.stackexchange",
"id": 4555,
"tags": "graph-theory, decidability, graph-minor"
} |
How to filter out everything but a single frequency in the time domain? | Question: I'm new to signal processing and would like some insights about the best way to filter simple data without being too computationally intensive.
I have an audio signal and I want to extract a single frequency (21 kHz) from it.
This frequency represents whether something is on or off.
I figured a simple amplitude threshold would do the trick to guess the on or off state but I can't think of a good way to single it out so I can threshold it.
I thought about doing it in the frequency domain but using the FFT seems a bit overkill.
I'm sorry if this is obvious but I'm not familiar with signal processing.
Answer: Depending on the exact characteristics of the signal and implementation requirements (if any) I can think of a number of approaches to extract content at a "single" frequency. The most common:
Apply an FFT to calculate the spectrum of the signal and extract the frequency of interest.
Apply Goertzel's algorithm to effectively extract signal content at a single spectral line.
Apply a bandpass filter around the frequency of interest.
Apply an inverted notch filter around the frequency of interest.
Depending on the amplitude and frequency content of the signal, the additional noise present and other implementation requirements (e.g. should the signal be processed in real time or offline, should the implementation be digital or analog, etc.) a specific approach might be preferred. | {
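Of these, Goertzel's algorithm is often the cheapest fit for the "one known frequency, threshold its power" use case; a minimal sketch (real-input, power-only form):

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Power of `samples` at the DFT bin nearest `target_freq`,
    computed with a single second-order recursion instead of an FFT."""
    n = len(samples)
    k = int(0.5 + n * target_freq / sample_rate)   # nearest bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        # standard Goertzel state update: s = x + coeff*s1 - s2
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2
```

You would then compare the returned power against a calibrated threshold to decide on/off. Note that detecting 21 kHz requires sampling above the Nyquist rate of 42 kHz (e.g. the common 48 kHz).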
"domain": "dsp.stackexchange",
"id": 8356,
"tags": "filters, fourier-transform, bandpass, time-domain"
} |
Response time of scheduling a DAG where each vertex is a task | Question: Suppose I have a directed acyclic graph where each vertex $v$ represents a task with a certain execution time and the edges represent precendence constraints between the tasks. I.e. task $v_i$ has to execute before task $v_j$ if there exists an edge $(v_i,v_j)$ between those two tasks. I have $x$ threads that can execute those tasks.
Is there a (simple) formula to determine the response time (or at least an upper bound) for the given setup using any work-conserving scheduler?
Answer: Found it! This problem is NP-complete when the goal is to minimize total execution time.
The scheduling problem (P1) is the following. We are given
(1) a set $S = \{J_1 , \ldots , J_n\}$ of jobs,
(2) a partial order $\prec$ on $S$,
(3) a weighting function $W$ from $S$ to the positive integers, giving the number of time units required by each job, and
(4) a number of processors, $k$.
Please see the following paper, which immediately discusses it from the introduction onward:
J.D. Ullman. Np-complete scheduling problems. Journal of Computer andSystem Sciences, 10(3):384 – 393, 1975.
There are a few other cases that may be of interest that are discussed in the paper:
When $\prec$ is empty (no edges in the DAG)
(P2) When $W(J_i) = 1$ for all $i$.
(P3) When $k = 2$ and $W(J_i) \in \{1, 2\}$ for all $i$.
You can show 3-SAT reduces to (P2) through an intermediate problem (P4) that Ullman discusses. Then it is clear that (P2) reduces to (P1).
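As for the original question's upper bound: although minimizing the makespan is NP-complete, any work-conserving (greedy) schedule on $x$ threads finishes within $T_1/x + T_\infty$, where $T_1$ is the total work and $T_\infty$ the critical-path length (the classic Graham/Brent bound). A sketch computing that bound; the graph encoding is my own:

```python
from collections import defaultdict, deque

def greedy_makespan_bound(times, edges, threads):
    """Upper bound T1/threads + T_inf on the makespan of ANY
    work-conserving schedule of the DAG on `threads` processors."""
    preds, succs = defaultdict(list), defaultdict(list)
    indeg = {v: 0 for v in times}
    for u, v in edges:
        succs[u].append(v)
        preds[v].append(u)
        indeg[v] += 1
    # Kahn's algorithm for a topological order
    queue = deque(v for v in times if indeg[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in succs[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    # critical-path length via longest paths in topological order
    finish = {}
    for v in order:
        finish[v] = times[v] + max((finish[u] for u in preds[v]), default=0)
    work = sum(times.values())
    span = max(finish.values())
    return work / threads + span
```

For a diamond DAG of four unit tasks on two threads this gives 4/2 + 3 = 5, a valid (if loose) response-time bound.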
"domain": "cs.stackexchange",
"id": 13668,
"tags": "graphs, parallel-computing, scheduling"
} |
Does a wet paper towel absorb gas molecules better than a dry one? | Question: I was riding the subway and noticed an educational video on how to act in case of a fire hazard.
Looking back, it's common knowledge to use a wet paper towel in enclosed areas in the presence of smoke.
But what exactly is the reason for having to wet a paper towel? Is it water - being the polar molecule it is - attracting molecules or the oxygen content dissolved in water helping us survive longer?
Answer: Smoke isn't what you might think it is. What is referred to in layman terms as "smoke" or "fumes" or various similar terms is actually a mix of two distinct components:
solid airborne particles or liquid droplets (aerosols)
actual gasses (vapours)
Much of what you see in visible smoke (maybe all of it, actually) is solid particles. The only times I can think of being able to see gasses might be your breath during the winter and dry ice, and I'm pretty sure those are actually not vapours but just ice crystals, which are solid particles.
Passing the air through a soaked material before you breathe it causes the solid particles to be removed from the air because they adhere to the liquid in the cloth via surface tension. It's the same as how rain removes dust from the air or how you use a wet cloth to wipe dust from a table. A good dry filter, like a HEPA filter, will do the same thing. So would a cloth soaked in something like mineral/baby oil.
But this type of mechanical filtering (or any type of mechanical filtering) won't protect you from the vapours. For a similar method to protect you from the vapours, something in the cloth must react with the vapour. The best example of this is how breathing through a urine-soaked cloth would protect you from mustard gas in World War 1.
And it obviously won't protect you from lack of oxygen.
This is why industrial respirators have both particulate filter cartridges and chemical vapour cartridges. And that's why there are many types of chemical vapour cartridges targeted at different substances but only one or two particulate filter cartridges, varying in particulate size. And why they say they are not for use in oxygen-deficient environments.
"domain": "physics.stackexchange",
"id": 85148,
"tags": "thermodynamics"
} |
Why is true and average anomaly of planet important? | Question: Why is a true and average anomaly of the planet important? What useful information do they give us?
Answer: Given one body orbiting another, the radial distance between the two and the true anomaly are the polar coordinates of the orbiting body. This wouldn't be all that useful if there were no way to predict where the orbiting body will be at some point in the future.
The mean anomaly along with Kepler's laws of motion does just that. Kepler's first law says that the planets' orbits about the Sun are ellipses. His second law says that the rate at which a planet sweeps out area (with the Sun as the central point) is constant. His third law says that the period of a planet's orbit depends on the semi-major axis length but not on eccentricity.
This means that a planet in an eccentric orbit and a planet in a circular orbit, both with the same semi-major axis length, will, on average, exhibit the same motion. In particular, their periods will be exactly the same. It's easy to predict where the planet in a circular orbit will be at some point in the future because that planet's true anomaly is a linear function of time. Calling this the mean anomaly, this linear relationship means that
$$M(t) = M(t_0) + (t-t_0)n$$
where $n$ is the mean motion:
$$n = \frac{2\pi}{T} = \sqrt{\frac{\mu}{a^3}}$$
Here, $T$ is the period, $a$ is the semi-major axis length, and $\mu$ is the constant of proportionality in Kepler's third law.
What's needed is a bridge between the mean anomaly and the true anomaly that makes the true anomaly satisfy Kepler's second law. This bridge is the eccentric anomaly. Mean anomaly and eccentric anomaly are related via Kepler's equation,
$$M = E - e \sin E$$
Here, $E$ is the eccentric anomaly and $e$ is the eccentricity. Note that by this equation, it's trivial to calculate the mean anomaly given the eccentric anomaly and the eccentricity: simply plug the values into the right-hand side and evaluate. What's wanted is a means to calculate the eccentric anomaly given the mean anomaly and the eccentricity. Unfortunately, there is no closed-form inverse of Kepler's equation in the elementary functions. Fortunately, there are many ways to solve Kepler's equation for the eccentric anomaly. One easy way is to use Newton-Raphson iteration.
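Kepler's equation is simple enough that a Newton-Raphson solver fits in a few lines. A minimal sketch (the function name, starting guess, and tolerances are my own choices, not from any particular library):

```python
import math

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton-Raphson.

    M is the mean anomaly in radians, e the eccentricity (0 <= e < 1).
    """
    E = M if e < 0.8 else math.pi  # a common, robust starting guess
    for _ in range(max_iter):
        # Newton step: f(E) = E - e*sin(E) - M,  f'(E) = 1 - e*cos(E)
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E
```

For a circular orbit ($e=0$) this returns $E=M$ immediately, as expected, since all three anomalies coincide there.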
What's still needed is a relationship between eccentric anomaly and true anomaly. This relationship can be represented in closed form:
$$\tan\frac\theta2 = \sqrt{\frac{1+e}{1-e}}\tan\frac E2$$ | {
"domain": "astronomy.stackexchange",
"id": 3784,
"tags": "orbit, planet"
} |
Vibrational motion of linear diatomic molecule | Question: This question concerns the following exercise from an old exam:
The vibrational motion of a linear diatomic molecule can be approximated as simple harmonic motion.
A CO molecule has a bond with force constant $k = 1900 \, Nm^{-1}$. What frequency of radiation would excite transitions between the different vibrational energy levels? (C and O have masses of $12 \, m_u$ and $16\, m_u$ respectively, where $m_u$ is the atomic mass unit.)
Explain why electromagnetic radiation cannot excite vibrational transitions in an $O_2$ or $N_2$ molecule. How is this important for the heating of the Earth's atmosphere by the Sun (the greenhouse effect)?
For the first part I know the energy levels of a harmonic oscillator (in center of mass reference frame)
$$E_n = \hbar\omega(n+1/2),\quad \text{where }\omega = \sqrt{\frac{k}{\mu}}, \quad \mu = \frac{m_Om_C}{m_O + m_C}$$
and the energy of a photon of frequency $\nu$ is given by $E_{ph} = h\nu$. The photon energy must be the energy difference between two states:
$E_{ph} = E_n - E_m = \hbar \omega (n-m)$. This gives me the frequency
$$\nu = \frac{(n-m)}{2\pi}\sqrt{\frac{k}{\mu}}$$
Or minimum frequency $\nu_0 = \frac1{2\pi}\sqrt{\frac{k}{\mu}}$, right?
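Plugging the numbers in (a quick numerical check of the formula above; the constants and variable names here are my own):

```python
import math

m_u = 1.66053906660e-27   # atomic mass unit in kg
k = 1900.0                # CO bond force constant in N/m

mu = (12 * 16) / (12 + 16) * m_u          # reduced mass of CO, ~1.14e-26 kg
nu0 = math.sqrt(k / mu) / (2 * math.pi)   # minimum transition frequency in Hz

print(f"nu_0 = {nu0:.3e} Hz")  # on the order of 6.5e13 Hz, i.e. infrared
```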
However, now to my actual question: why can't we do the exact same thing for the other two molecules? What is different for $O_2$ and $N_2$?
I'd appreciate some help, thanks! =)
Answer: The molecules $O_2$ and $N_2$ are symmetric and have no dipole moment. That's why they cannot interact with electromagnetic waves (at least within the dipole approximation).
One can say that transitions between the oscillator levels in these molecules are forbidden by symmetry in the electric dipole approximation.
The molecule $CO$ consists of two different atoms. The average positions of positive and negative charges are not the same. This molecule is polar. | {
"domain": "physics.stackexchange",
"id": 2338,
"tags": "quantum-mechanics, homework-and-exercises, harmonic-oscillator, molecules"
} |
Why don't electrons "get lost"? | Question: According to quantum mechanics, existence of an electron at a place depends on the wavefunction which in turn gives us the probability of an electron being there. And for a few special places, like nodes in an atom, the probability of finding an electron diminishes to zero. But at every other possible point in the whole universe, there is some non-zero probability of finding an electron. This is what I know (correct me if I'm wrong).
Now this is my question. Let's suppose an electron wandered far away such that it is now closer to the nucleus of another atom than to its original atom's nucleus. How does the electron know which atom it originally belonged to? And with such a huge number of electrons surrounding us, this process of intermixing of electrons from one atom to another should happen quite spontaneously. But as far as I know, we don't see electrons moving from one atom to another very often. Yes, there are cases of ionic bond formation and conduction where there is apparent movement and transfer of electrons, but why not everywhere?
And the implications of electrons "getting lost" would be drastic. Electronic configurations of atoms would no longer matter. Almost all the matter around us would get ionized, and also emit (hopefully) beautiful and colourful line spectra. But in reality, none of this happens. Why?
Answer:
Quantum particles are indistinguishable. You cannot "label" the electrons. So, a state in which two electrons are exchanged is the same state as the original one.
The probability of finding an electron very far away from the nucleus is very low.
In order to calculate the probability for an electron to "jump" from an atom to another atom you need to use the wavefunction for the entire system (with two or more atoms). Solving it you can find the probability for an electron transfer. If the two atoms are at a distance much larger than the typical chemical bonds, that probability will be very low.
In conclusion, the reason we do not see electrons "getting lost" is the very low probability of such an event happening.
"domain": "physics.stackexchange",
"id": 62991,
"tags": "quantum-mechanics, electrons, wavefunction, probability, atoms"
} |
Derivation of Electric Vector Potential | Question: From Balanis' Antenna Theory: Analysis and Design, 3rd Ed., p. 137:
3.3 The vector potential $\boldsymbol F$ for a magnetic current source $\boldsymbol M$
Although magnetic currents appear to be physically unrealizable, equivalent magnetic currents arise when we use the volume or the surface equivalence theorems. The fields generated by a harmonic magnetic current in a homogeneous region, with $\mathbf J = 0$ but $\mathbf M \neq 0$, must satisfy $\nabla\cdot \mathbf D = 0$. Therefore, $\mathbf E_F$ can be expressed as the curl of the vector potential $\mathbf F$ by
$$\mathbf E_F = -\frac1\epsilon \nabla\times\mathbf F$$
I didn't find much information about this "electric vector potential" (I had never heard about it before, actually). I don't understand how the magnetic current density $\mathbf{M}$ is related to the electric field $\mathbf{E}$, and why the conditions imposed ($\mathbf{J}=0$ and $\mathbf{M}\neq0$) lead to $\nabla\cdot\mathbf{D}=0$.
Can someone explain this or tell me where I can find more information about this vector potential?
Answer: Short answer: $\nabla \cdot \mathbf{D}=0$ comes from taking the divergence of $\nabla \times \mathbf{H}_F = j \omega \epsilon \mathbf{E}_F$ since we have assumed that $\mathbf{J}=0$ for this case.
We use surface equivalence to replace physical structures with fictitious $\mathbf{J}$ and $\mathbf{M}$ so that we can use the free space Green's function to calculate radiated electric and magnetic fields.
And now to overexplain:
For the case of both electric and magnetic sources, we start with the equations
$\nabla \times \mathbf{E} = -j\omega \mu \mathbf{H} -\mathbf{M}$
$\nabla \times \mathbf{H} = j\omega \epsilon \mathbf{E} + \mathbf{J}$
Using superposition, we can consider the electric and magnetic current sources separately.
$\nabla \times \mathbf{E}_A = -j\omega \mu \mathbf{H}_A$
$\nabla \times \mathbf{H}_A = j\omega \epsilon \mathbf{E}_A + \mathbf{J}$
and
$\nabla \times \mathbf{E}_F = -j\omega \mu \mathbf{H}_F - \mathbf{M}$
$\nabla \times \mathbf{H}_F = j\omega \epsilon \mathbf{E}_F$
and using superposition we have
$\mathbf{E} = \mathbf{E}_A+\mathbf{E}_F$
$\mathbf{H} = \mathbf{H}_A+\mathbf{H}_F$
Solving the electric current case, we use the usual magnetic vector potential $\mathbf{A}$, where $\mathbf{H}_A = \frac{1}{\mu}\nabla \times \mathbf{A}$, and this will yield $\mathbf{E}_A$ and $\mathbf{H}_A$. All of the logic for solving for $\mathbf{E}_F$ and $\mathbf{H}_F$ from the magnetic current source $\mathbf{M}$ is exactly the same, just switch $\mathbf{E}$ and $\mathbf{H}$ (and other constants and signs) and do the same thing.
Take the following divergence
$\nabla \cdot \left( \nabla \times \mathbf{H}_F = j\omega \epsilon \mathbf{E}_F \right)$
to get
$\nabla \cdot \epsilon \mathbf{E}_F=0$
I think this was the part you were missing. Then we can write
$\mathbf{E}_F = -\frac{1}{\epsilon} \nabla \times \mathbf{F}$.
This will allow us to solve for $\mathbf{E}_F$ and $\mathbf{H}_F$.
Leaving some steps out, we'll end up with the vector potentials $\mathbf{A}$ and $\mathbf{F}$ as
$\nabla^2 \mathbf{A} +k^2 \mathbf{A} = -\mu \mathbf{J}$
$\nabla^2 \mathbf{F} +k^2 \mathbf{F} = -\epsilon \mathbf{M}$
This can be seen as 6 separate scalar wave equations, and in free space the solution is well known using Green's functions. Note that using the Lorenz gauge is necessary to obtain this form. Assuming that $\mathbf{J}$ and $\mathbf{M}$ are surface currents on some surface $\Gamma$, then
$\mathbf{A}(\mathbf{r}) = \mu \int_\Gamma g(\mathbf{r},\mathbf{r}') \mathbf{J}(\mathbf{r}') ds'$
$\mathbf{F}(\mathbf{r}) = \epsilon \int_\Gamma g(\mathbf{r},\mathbf{r}') \mathbf{M}(\mathbf{r}') ds'$
where $g(\mathbf{r},\mathbf{r}') = \frac{e^{-jk|\mathbf{r}-\mathbf{r}'|}}{4\pi|\mathbf{r}-\mathbf{r}'|}$
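For concreteness, the scalar Green's function above is trivial to evaluate numerically; the vector potential integrals are just this kernel integrated against the surface currents, component by component. A sketch (function name and argument conventions are my own; $k$ is the wavenumber):

```python
import cmath

def green(r, rp, k):
    """Free-space scalar Green's function exp(-j*k*R) / (4*pi*R), R = |r - r'|.

    r, rp are 3-tuples of Cartesian coordinates; k is the wavenumber.
    """
    R = sum((a - b) ** 2 for a, b in zip(r, rp)) ** 0.5
    return cmath.exp(-1j * k * R) / (4.0 * cmath.pi * R)
```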
And then once we have $\mathbf{A}(\mathbf{r})$ and $\mathbf{F}(\mathbf{r})$ we can get
$\mathbf{E}(\mathbf{r}) = -j \omega \mathbf{A} - \frac{j}{\omega \mu \epsilon} \nabla (\nabla \cdot \mathbf{A}) - \frac{1}{\epsilon} \nabla \times \mathbf{F}$
$\mathbf{H}(\mathbf{r}) = -j \omega \mathbf{F} - \frac{j}{\omega \mu \epsilon} \nabla (\nabla \cdot \mathbf{F}) + \frac{1}{\mu} \nabla \times \mathbf{A}$
We can also take the limit $|\mathbf{r}| \rightarrow \infty$ to obtain far fields.
As to why we consider fictitious magnetic currents $\mathbf{M}$:
Suppose we have a radiating antenna, and we know the electric and magnetic fields in some close vicinity of the antenna (e.g. using a numerical finite element simulation), but we want to find the radiated fields at some far away point. We can't simply use the actual electric currents along with the integrals above, since those equations rely on the free space Green's function and the presence of a physical radiating structure means this is not free space.
Given a closed surface $\Gamma$ which contains the antenna, if we know the fields $\mathbf{E}$ and $\mathbf{H}$ on $\Gamma$, then we can consider an equivalent problem in which we (1) introduce fictitious sources $\mathbf{J}=\mathbf{\hat{n}} \times \mathbf{H}$, $\mathbf{M} = -\mathbf{\hat{n}} \times \mathbf{E}$ on $\Gamma$, where $\mathbf{E}$ and $\mathbf{H}$ are the known fields from the original problem, and (2) remove the physical structure and set $\mathbf{E}=\mathbf{H}=0$ inside the surface. Surface equivalence tells us that outside $\Gamma$ the two scenarios will produce the same fields. The equivalent problem has the advantage that it is free space, and therefore we know the Green's function. Therefore we can use the integrals above to calculate the vector potentials and then the electric and magnetic fields at any point outside $\Gamma$. | {
"domain": "physics.stackexchange",
"id": 78849,
"tags": "electromagnetism, potential, maxwell-equations"
} |
What does it mean for a distribution to be "consistent with a two rate-limiting stochastic steps"? | Question: I'm reading a study (full text here) that examine the dynamic of nuclear translocation of a transcription factor in budding yeast, in response of calcium stress. They found that it occurs in bursts, which distribution of durations is plotted on this normalized histogram:
In the text they say:
Normalized histograms, $h(t)$, of total burst duration at two calcium concentrations are both well-fit by $h(t)=te^{-t/\tau}$, with $\tau$ = 70 sec (black line).
And also
This distribution was consistent with two rate-limiting stochastic steps, each with a timescale of ~70 sec.
Can someone explain me the second sentence cited?
Answer: WYSIWYG is almost there, but you need one more piece of information to make this explicit.
The distribution cited in the paper is $h(t)\propto te^{-t/\tau}$ (we're going to ignore normalizing constants today). We can recognize this as a particular case of the Gamma distribution, with PDF:
$$f_{\mathrm{Gamma}}(t\,\big|\,k,\theta)\propto t^{k-1}e^{-t/\theta}.$$
In particular, $h(t)$ looks like the PDF of a $\mathrm{Gamma}(2,\tau)$ random variable,
$$h(t)\propto f_{\mathrm{Gamma}}(t\,\big|\,2,\tau)\propto te^{-t/\tau}.$$
So the authors of the paper are claiming that the burst duration $\tau_{\mathrm{burst}}$ is a random variable with a $\mathrm{Gamma}(2,70\,\mathrm{ sec})$ distribution.
How do we get from that to "This distribution was consistent with two rate-limiting stochastic steps?"
If we recall from our stats class (or look up properties of the Gamma distribution on Wikipedia), a Gamma distribution with integer values of $k$ is the distribution of the waiting time for $k$ events to occur in a Poisson process. The setup of a Poisson process is that you are tracking the occurrence of unlikely/infrequent events in time, so the fact that the distribution $h(t)$ of burst times looks like a $\mathrm{Gamma}(2,\tau)$ suggests that the burst duration $\tau_{\mathrm{burst}}$ is determined by the occurrence of 2 stochastic events (with the same "frequency" $1/\tau$). In other words, this distribution is consistent with the following scenario: a nuclear localization occurs, and then will stop after two particular events, where the timing of the events is governed by a Poisson process (which, as WYSIWYG pointed out, is a reasonable model for the occurrence of chemical reactions with reasonably slow kinetics, i.e. you can count individual reactions in time). If this is true, the distribution of $\tau_{\mathrm{burst}}$ will look like $te^{-t/\tau}$.
The authors generalize that statement somewhat by saying that technically there could be more events/reactions required to terminate a localization burst, but all but 2 of those reactions happen extremely rapidly, i.e. there are 2 rate-limiting steps in the decision process (both of which happen to have the same value for $\tau$).
Edit: I realized that one more mathematical point might make this more clear: The statement about the Gamma distribution and Poisson processes above is equivalent to the statement that the sum of $k$ iid random variables with distribution $\mathrm{Exponential}(\tau)$ is a random variable with a $\mathrm{Gamma}(k,\tau)$ distribution. Thus $h(t)$ is literally (up to normalization) the distribution of a sum of 2 independent exponential random variables. If you read up on the connection between the exponential distribution and waiting times that might make the claim in the paper seem more transparent.
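This equivalence is easy to check by simulation: summing two independent $\mathrm{Exponential}(\tau)$ draws should reproduce the $\mathrm{Gamma}(2,\tau)$ mean $2\tau$ and variance $2\tau^2$. A quick sketch using only the standard library (the seed and sample size are arbitrary choices of mine):

```python
import random
import statistics

random.seed(0)
tau = 70.0  # seconds, the timescale of each rate-limiting step

# Each burst duration = sum of two independent Exponential(tau) waiting times,
# i.e. a Gamma(2, tau) random variable with mean 2*tau and variance 2*tau^2.
bursts = [random.expovariate(1 / tau) + random.expovariate(1 / tau)
          for _ in range(100_000)]

print(statistics.mean(bursts))      # close to 2*tau = 140
print(statistics.variance(bursts))  # close to 2*tau^2 = 9800
```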
Source: https://en.wikipedia.org/wiki/Gamma_distribution#Special_cases
http://en.wikipedia.org/wiki/Exponential_distribution | {
"domain": "biology.stackexchange",
"id": 1506,
"tags": "biochemistry, molecular-biology"
} |
Trajectory Length Problem | Question:
MISSION:
Plan from A to B, not colliding with obstacles, using all planners in OMPL and CHOMP. Compare the trajectories of those planners.
SETUP:
I am using Moveit to plan for a 6 DOF articulated robot arm.
The workspace contains obstacles.
I use C++ move_group interface to compute the plan with different OMPL Library planners.
OBSERVATIONS:
I am interested in joint_6 movement. When I plot the 'position' attribute of the trajectory points, I make two observations:
The planners start at a common point (the start pose), but end at completely different points. Since the manipulator is 6 DOF, I can only imagine this to be some kind of misconfiguration issue.
The trajectories are not equal in length (length = number of trajectory points).
PROBLEM STATEMENT:
My goal is to compare the trajectories from each Algorithm. For an accurate comparison, the two observations are problematic.
QUESTION:
Is there a way to
a) have every planner output the exact same number of waypoints (I am thinking here of a "fine" grid ~ 1000 for example ~ something that is way above the default 40 - 150 number that I observed)?
b) fix the position generation? Since the robot arm is 6 DOF, the end pose should be uniquely defined through a set of joint positions. It is unclear to me how the different planners can have a varying end position for joint_6.
Edit: First of all, thank you for your answer.
A trajectory point position attribute equals the position member of the trajectory message.
As my robot has six revolute joints, that equals one float64[6] array for each point on the trajectory.
Since only joint_6 is of interest for me for now, I plot joint_6 RAD value on the y-Axis
and the corresponding trajectory point number on the x-Axis.
I may mix up something here, please correct me if I am wrong.
A robot with a serial kinematic chain and DOF > 6 is considered redundant, i.e.
the same end effector pose can be reached with multiple joint configurations.
Since my manipulator is 6 DOF and I specify x,y,z,r,p,y for my end pose, I assume
that this pose directly transfers to a unique set of joint values.
Question b) of my original question was (likely) solved due to a bug in my
program where I export the trajectory in json format.
I tried a different approach where instead of exporting the trajectory, I recorded
the movement of the /joint_states topic during execution of the trajectory.
There seems to be a lot of smoothing and interpolating going on, but coupled
with the bug fix I get a fairly stable fix to question b).
Regarding question a), nothing has changed here. But since I am now recording the
execution of the trajectory, I can be sure that the interval between my measuring points
is defined by the timestamp of the /joint_states topic's messages.
Before, I relied on the trajectory generation, which, as you too mentioned, is a rather
unstable approach.
A downside of the new approach is that my measurement depends on
the fake execution manager's trajectory processing.
I am using the ros - industrial - fanuc packages.
Originally posted by cpetersmeier on ROS Answers with karma: 35 on 2019-08-14
Post score: 0
Answer:
Just to clarify something when you're talking about the position attribute of trajectory points, are you referring to the position member of the trajectory message here? This is not the Cartesian position of the last joint of the robot but the final joint position, an angle in your case.
All the 6 DOF revolute arms I can think of often have several different joint solutions for a given end effector pose, so it's perfectly possible that different planners will end up with a different joint angle solution while giving the end effector exactly the same pose.
However you can use Moveit to define a goal in terms of joint angles instead of end effector pose, this means that the planners will all end up with the same (to within a tolerance) joint angles. The method you need is moveit::planning_interface::MoveGroup::setJointValueTarget
Regarding trajectory resolution, I don't think there is any way to do this. The number of points on a trajectory is defined by the movement tolerance specified, and is calculated on the fly. There is nothing to stop you interpolating the different trajectories you generate to the same sample spacing, though it could get a bit strange if trajectories are of significantly different durations.
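Interpolating each trajectory onto a common grid is essentially a one-liner with NumPy. A minimal sketch, assuming you have extracted each waypoint's timestamp and a single joint's position from the trajectory message (the function name and signature are mine, not part of MoveIt):

```python
import numpy as np

def resample(times, positions, n=1000):
    """Linearly interpolate one joint's trajectory onto n evenly spaced samples.

    times:     strictly increasing waypoint timestamps
    positions: joint values at those timestamps (same length as times)
    Returns (new_times, new_positions), each of length n.
    """
    t = np.linspace(times[0], times[-1], n)
    return t, np.interp(t, times, positions)
```

Resample each planner's trajectory to the same $n$ and compare point-wise; if the durations differ a lot, you may want to normalise time to $[0,1]$ first so you compare shape rather than speed.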
Finally, can you explain exactly how you're comparing the trajectories?
Originally posted by PeteBlackerThe3rd with karma: 9529 on 2019-08-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by PeteBlackerThe3rd on 2019-08-16:
Firstly I'm glad you've managed to find a solution so you can compare the trajectories. I do need to correct you regarding joint solutions of 6 DOF revolute arms though. A revolute arm with > 6 joints has an infinite number of solutions for any end effector pose, however a 6 DOF arm can have a discrete number of joint solutions. Often there are 2 or 4 different joint configurations that will put the end effector in exactly the same location. These are often referred to as elbow up/down and shoulder left/right; this however will depend on the exact configuration of your arm. Arms with joints that have greater than 360 degrees of movement can have many more.
Comment by cpetersmeier on 2019-08-16:
Thanks for the quick answer!
That explains why the trajectories were converging to multiple end points when not planning in joint-goals!
Glad I have learned something! :)
Comment by gvdhoorn on 2019-08-17:
@cpetersmeier: please don't post updates or follow-up questions as answers, unless you are answering your own question. Either update your original question text (use the edit link/button for that) or post a comment.
I've merged your non-answer into your OP this time and re-arranged the comments, but please keep this in mind for the future. | {
"domain": "robotics.stackexchange",
"id": 33620,
"tags": "robotic-arm, moveit, ros-melodic, robot, ompl"
} |
Can photons have negative energy? | Question: Apparently there are 2 electron self-energy graphs possible.
The first, the more "familiar" one, where the incoming electron at time $t_1$ splits up into a photon and a virtual electron. At $t_2>t_1$ the photon joins the electron again. But the Feynman propagator also allows $t_1>t_2$, where apparently the incoming electron hits a positron at $t_1$ which was created at time $t_2<t_1$. But for the creation of the electron-positron pair at $t_2$ the photon has to provide the positive energy at $t_2$ for the creation process, so this photon seems to be created at $t_1>t_2$, i.e. the photon is apparently moving backward in time.
The other possibility is of course to apply Feynman's saying: particles running backward in time with energy $E$ can be interpreted as particles running forward in time with $-E$ (assuming $E$ can have positive or negative sign in general).
Therefore, it would be equivalent to say that for the electron self-energy diagram where $t_1>t_2$, at $t_2$ a photon is created with energy $-E$ ($E$ being the energy needed for the creation of the electron-positron pair) and then moving in time forward to $t_1$ to deliver this negative energy to destroy the (incoming) electron-(virtual) positron pair.
Therefore I come to the conclusion that virtual photons can have negative energy when running forward in time, or positive energy when running backward in time.
What about real photons? I would be astonished about real negative energy photons. Or would the dispersion relation $\omega^2=k^2$ allow for negative frequencies (keeping in mind that these negative frequency solutions would again correspond to photons as anti-photons and photons are the same)?
Answer: Note that you have to choose between momentum representation and space-time representation for Feynman diagrams, but you cannot use the two together; this is quantum mechanics. So you cannot speak, at the same time, of precise times $t_1,t_2$ and a precise energy $E$
"Virtual" particles are not particles at all. The Feynman propagator $D(x)$, in space-time representation, just expresses the amplitude to go from, say, the origin $0$ to a point $x$. It is better to see this as field correlations between the points $0$ and $x$
However, your general idea in the first paragraph of your question is quite correct, in the expression of the simple (massless scalar field) Feynman propagator :
$D(x) = -i\int \frac{d^3k}{(2\pi)^3 2\omega_k}[\theta(x^0)e^{-i(\omega_k x^0- \vec k.\vec x)}+\theta(-x^0)e^{+i(\omega_k x^0- \vec k.\vec x)}] \tag{1}$
with $\omega_k = |\vec k|$
we see that positive $x^0$ are associated with a collection of positive energies $\omega_k$, while negative $x^0$ are associated with a collection of negative energies $-\omega_k$
But once more, it is better to speak of field perturbations, or field correlations, but not "particles"
Real particles, on the other hand, always have a positive energy in momentum representation. So they belong to some representation of the Poincaré group, with, for massless particles such as photons, $p^0>0$ and $p^2=0$. These real particles have creation and annihilation operators, etc.. | {
"domain": "physics.stackexchange",
"id": 14348,
"tags": "quantum-field-theory, feynman-diagrams"
} |
PHP library for handling account creations, logins, and file uploads | Question: I'm new to OOP and PHP. I've made an attempt at a PHP library file that handles account creations, logins, and file uploads with image resizing on the fly. It works so far. I'd like some help with the object-oriented part, namely making the code more efficient/succinct.
User class:
<?php
define( 'BASEPATH', realpath( dirname( __FILE__ ) ) );
define( 'AppRoot', 'http://www.eclecticdigital.com/eventfinder' );
/**
* Class user
*/
class user {
function createAcc() {
$userData = new DB();
$userFiles = new fileHandler();
$userID = $userData->createNewUser();
$userpath = BASEPATH . '/users/user_' . $userID . "/";
$userFiles->userDirectory( $userpath );
}
/*END CREATE ACC*/
/**
* @param $postVars
* @return bool
*/
function loginUser( $postVars ) {
$newLogin = new DB();
$user_data = $newLogin->findUser( $postVars );
$session = new userSession( $user_data );
return true;
}//**END loginUsr function
/*function userLogOut{
1. take $userData and session info and unset session
2. redirect to homepage
unset($_SESSION);
}*/
}
Render class:
<?php
/**
* Class render
*/
class render {
/*MAKE STYLESHEETS AND JAVASCRIPT FILES GLOBALLY AVAILABLE TO THE APP*/
public $homePath;
public $stylePath;
public $javascript;
public $header;
public $beginHTML;
public $headerDiv;
/**
*
*/
public function __construct() {
$this->homePath = AppRoot;
$this->stylePath = $this->homePath . '/style.css';
$this->javascript = $this->homePath . '/eventsJS.js';
$this->beginHTML = '<!Doctype HTML>';
$this->headTag = '<head><title>Eventfinder</title><link href =" ' . $this->stylePath . '" type="text/CSS"
rel="stylesheet" /> <link href="http://fonts.googleapis.com/css?family=Playball" rel="stylesheet" type="text/CSS">
<link href="http://fonts.googleapis.com/css?family=Satisfy" rel="stylesheet" type="text/css">
<script type="text/javascript" src=" ' . $this->javascript . ' "></script> </head>';
$this->headerDiv = "<div id=\"header\"> Eventfinder </div>";
}
function index() {
echo $this->beginHTML;
echo $this->headTag;
echo $this->headerDiv;
}
function usrLogin() {
$newform = new html_;
$newform->userLogin();
}
function createAcc() {
$forms = new html_;
$forms->newUserForm();
}
}
HTML Class:
<?php
/**
* Class html_
*/
class html_ {
public $Type = array( 'button' => 'button', 'checkbox' => 'checkbox', '_file' => 'file', 'hidden' => 'hidden', 'radio' => 'radio', 'r_set' => 'reset', 'submit' => 'submit', '_text' => 'text' );
public $formOpen = '<form method="post" action="tryDB.php">';
public $formClose = "</form>";
public $inputText = '<input \'$name\'>';
/**
* @param $method
* @param $action
* @param $formName
*/
function openForm( $method, $action, $formName ) {
echo "<form method=\"$method\" action=\"$action\" name=\"$formName\">";
}
function newUserForm() {
echo "<div id=accform>";
echo "<form method=\"post\" action=\"#\" name=\"newacc\" onsubmit=\"return validateAcc(this)\"> ";
echo "First Name : <input type=\"text\" name=\"fname\" id=\"firstname\" /> <br><br>";
echo "Last Name : <input type=\"text\" name=\"lname\" /> <br><br>";
echo "email : <input type=\"text\" name=\"email\" id=\"email\" /> <br><br>";
echo "password: <input type=\"password\" name=\"password\" id=\"password\" /> <br><br>";
echo "<input type=\"submit\" value=\"create account\" name=\"create\" >";
echo "</form>";
echo "</div>";
echo "<div id=\"errorsDiv\"></div>";
}
function userLogin() {
echo "<div id=login> ";
$this->openForm( 'post', '#', 'loginForm' );
echo "email : <input type=\"text\" name=\"login\" />";
echo "password: <input type=\"text\" name=\"loginPass\" />";
echo "<input type=\"submit\" value=\"login\" name=\"submit\" />";
$this->formClose;
echo "</div>";
}
}
Filehandler class:
<?php
/**
* Class fileHandler
*/
class fileHandler {
/*CREATE FOLDERS FOR NEW USER'S FUTURE UPLOADS*/
/**
* @param $userDir
*/
public function userDirectory( $userDir ) {
$dir = array( $userDir, $userDir . 'images',
$userDir . 'images/width_800',
$userDir . 'images/width_1024',
$userDir . 'images/width_1400',
$userDir . 'images/width_1600',
$userDir . 'images/width_2400' );
for ( $i = 0; $i < count( $dir ); $i++ ) {
mkdir( $dir[$i], 0764, true );
chmod( $dir[$i], 0764 );
}
}
/*DEFINE IMAGE RESIZE FUNCTIONS*/
/**
* @param $image
* @return mixed
*/
public function imgResize( $image ) {
$this->userPath = $_SESSION['userFolder'];
$imgDetails = explode( '/', $image );
$imgName = $imgDetails[2];
$size = getimagesize( $image );
$imgType = $size['mime'];
$reqdWidths = array( '800', '1024', '1400', '1600', '2400' );
$width = $size[0];
$height = $size[1];
$imgRatio = $width / $height;
/*PUT GD 'IMAGECREATE'FUNCTIONS IN A SWITCH-CASE TO DEAL WITH MULTIPLE IMAGE TYPES */
/**
* @param $img
* @return resource
*/
function createImage( $img ) {
$size = getimagesize( $img );
$typeImg = $size['mime'];
switch ( $typeImg ) {
case 'image/png':
$newImage = imagecreatefrompng( $img );
return $newImage;
break;
case 'image/jpeg':
$newImage = imagecreatefromjpeg( $img );
return $newImage;
break;
case 'image/gif' :
$newImage = imagecreatefromgif( $img );
return $newImage;
break;
}
}
/**
* @param $imgType
* @param $dest
* @param $userDestImage
* @return bool
*/
function finalImg( $imgType, $dest, $userDestImage ) {
switch ( $imgType ) {
case 'image/png':
$final = imagepng( $dest, $userDestImage );
return $final;
break;
case 'image/jpeg':
$final = imagejpeg( $dest, $userDestImage );
return $final;
break;
case 'image/gif' :
$final = imagegif( $dest, $userDestImage );
return $final;
break;
}
}
$source = call_user_func( 'createImage', $image );
for ( $i = 0; $i < count( $reqdWidths ); $i++ ) {
echo count( $reqdWidths ) . '<br><br>';
$userDestImage = $this->userPath . '/images/width_' . $reqdWidths[$i] . '/' . $imgName;
$newWidth = intVal( $reqdWidths[$i] );
$newHeight = intVal( $newWidth / $imgRatio );
$dest = imagecreatetruecolor( $newWidth, $newHeight );
echo $userDestImage;
imagecopyresampled( $dest, $source, 0, 0, 0, 0, $newWidth, $newHeight, $width, $height );
$imgFinal = call_user_func( 'finalImg', $imgType, $dest, $userDestImage );
}
return $imgFinal;
}
/*---function to handle file upload. Calls resize when upload is done---*/
/**
* @return string
*/
function file_upload() {
/**
* @return string
*/
function do_upload() {
$userFolder = $_SESSION['userFolder'];
echo $_FILES['upFile']['type'] . '<br>';
$allowedFiles = array( 'image/gif', 'image/jpg', 'image/jpeg', 'image/png' );
if ( !empty( $_FILES ) && in_array( $_FILES['upFile']['type'], $allowedFiles ) ) {
//echo $userFolder.'<hr>';
$tmpFldr = $_FILES['upFile']['tmp_name'];
$fileDest = $userFolder . '/' . $_FILES['upFile']['name'];
if ( move_uploaded_file( $tmpFldr, $fileDest ) ) {
echo 'file(s) uploaded successfully';
} else {
echo 'Your file failed to upload<br><br>';
}
return $fileDest;
} else {
die( 'An error occurred, please make your
files are of the correct type' );
}
}
//END do_upload,
/*Perform upload return path to file location*/
$fileLoc = do_upload();
return $fileLoc;
}
//**END file_upload
}
DB class:
<?php
/**
* Class DB
*/
class DB {
/*Predefined Queries*/
/**
* @return bool|PDO
*/
public function connect() {
$dbConn = new PDO( 'mysql:dbname=eh15118b;host=xxxxxx', 'xxxxx', 'xxxx' );
if ( $dbConn ) {
echo 'connected';
} else {
$dbConn = false;
echo 'failed to make a connection';
}
return $dbConn;
}
//NEW USER CREATION FUNCTION
/**
* @return string
*/
public function createNewUser() {
$find = "Select email from users where email=:email";
$find_param = array( ':email' => $_POST['email'] );
$connDB = $this->connect();
if ( $connDB ) {
try {
$stmt = $connDB->prepare( $find );
$result = $stmt->execute( $find_param );
$row = $stmt->fetch();
} catch ( PDOException $e ) {
die( 'Failed to execute Query: <hr>' . $e->getmessage() );
}
if ( !empty( $row ) ) {
die ( 'There is already an account registered to the address :email .
You can login here or reset your password' );
} else {
/*SALT AND HASH FOR PASSWORD SECURITY*/
$salt = dechex( mt_rand( 0, 2147483678 ) ) . dechex( mt_rand( 0, 2147483678 ) );
$password = hash( 'sha256', $_POST['password'] . $salt );
$query = "INSERT INTO users (fname,lname, password,salt, email)
VALUES (:fname,:lname, :password,:salt,:email)";
for ( $round = 0; $round < 65336; $round++ ) {
$password = hash( 'sha256', $password . $salt );
}
$query_params = array( ':fname' => $_POST['fname'], ':lname' => $_POST['lname'],
':password' => $password, ':salt' => $salt,
':email' => $_POST['email'] );
try {
$stmt = $connDB->prepare( $query );
$result = $stmt->execute( $query_params );
} catch ( PDOException $ex ) {
/*THIS COULD ALSO BE SENT TO A LOG FILE*/
$ex->getmessage();
}
}
}
$id = $connDB->lastinsertid( 'usrID' );
return $id;
}
//END CREATE NEW USER
/**
* @param $vars
* @return array|null
*/
public function findUser( $vars ) {
$sessionVars = null;
$Conn = $this->connect();
if ( $Conn ) {
$user = $vars['user'];
$pass = $vars['pass'];
/*FIND user in DB*/
$usrLogin = $Conn->quote( $user );
$usrFind = 'Select email,password,salt,usrID from users where email=' . $usrLogin;
$stmt = $Conn->query( $usrFind );
$result = $stmt->setFetchMode( PDO::FETCH_NUM );
$row = $stmt->fetch();
if ( !empty( $row ) ) {
$salt = $row[2];
$checkPass = hash( 'sha256', $pass . $salt );
for ( $round = 0; $round < 65336; $round++ ) {
$checkPass = hash( 'sha256', $checkPass . $salt );
}
//Check Password and start session
if ( $checkPass === $row[1] ) {
$sessionVars = array( 'user' => $row[0], 'userID' => $row[3] );
} else {
die( 'your password is incorrect.<br><br>' );
}
} else {
echo 'no data in row';
}
} else {
echo 'failed to connect to the Database';
}
return $sessionVars;
}
//**END FUNCTION findUser
/**
* @param $postVals
*/
function createEvent( $postVals ) {
$uploads = new fileHandler();
$image = $uploads->file_upload();
if ( $uploads->imgResize( $image ) ) {
echo 'The file : ' . $image . ' was uploaded and resized successfully ';
} else {
echo '<HR> !!message or email to admin that an image was not resized successfully';
}
echo 'THE VALUES IN POST VALS ARE <BR><BR><BR>';
print_r( $postVals );
echo '<HR>';
var_dump( $image );
//prepare query to insert into events database
$query = "INSERT INTO Events (evntName,evntVenue,evntType,evntDate,fullDesc,imgpath)
VALUES(:evntName,:evntVenue,:evntType,:evntDate,:fullDesc,:imgpath)";
$params = array( ':evntName' => $postVals['eName'],
':evntVenue' => $postVals['eVenue'], 'evntType' => $postVals['eType'],
':evntDate' => $postVals['eDate'], ':fullDesc' => $postVals['Desc'],
':imgpath' => $image );
$connDB = $this->connect();
if ( $connDB ) {
try {
$stmt = $connDB->prepare( $query );
$result = $stmt->execute( $params );
} catch ( PDOException $err ) {
echo $err->getMessage();
exit;
}
} else {
echo 'NO DATABASE CONNECTION';
exit;
}
}
}
userSession class:
<?php
/**
* Class userSession
*/
class userSession {
/**
* @param $sessionVars
*/
public function __construct( $sessionVars ) {
session_start();
$_SESSION['userEmail'] = $sessionVars['user'];
$_SESSION['userID'] = $sessionVars['userID'];
$_SESSION['userFolder'] = 'users/user_' . $_SESSION['userID'];
$_SESSION['sessID'] = session_id();
}
}
Answer: Executive summary:
Pick a naming standard and stick to it (img != image for instance)
Learn how to write comment blocks (/* description */ /* @var */ is kinda weird)
SOLID is a good place to start (google it)
PSR-1 and PSR-2
Don't store html in strings, just don't.
Again, don't use a class to represent a 'template'. Php does a really good job as a templating engine <myhtml><?php echo $data ?></myhtml>
your database isn't really abstracted (more on this later)
I miss interfaces :( (why? I just like them)
So why bother commenting?
- You made a good effort in separating concerns
- you actually wrote some comments
- You are using PDO
Long part
(More text will follow, I simply don't feel like writing everything in 1 go):
FileHandler or no, fileHandler
Naming convention is everything. When using code, you should not have to think about syntax. It should all feel very natural. Your code doesn't. Some examples:
the FileHandler class is actually fileHandler. Always start your class names with a capital letter. Why? because. It's a convention, we all do it, it makes writing code easy.
Now, let's say I want to use your fileHandler class. I copy-paste it into my project and initialize it:
$myAwesomeStolenFileHandler = new fileHandler();
$myAwesomeStolenFileHandler->createImage($myImage);
OW sh*@t, function not found. Huh? But it's there in the code alright. Oh, wait, it's not a method of the class, it's a function inside a method. I'm not supposed to use it, so why not make it private? Instead of creating it every time I call createImage. Wow, that's annoying. Uh, ok, well, I was planning on adding a watermark but I'll wait for SteveBK to implement it I guess. Ok, well, let's use the image resizer then.
$myNotSoAwesomeStolenFileHandler->resizeImage($image);
Again, method not found.
Huh? confusion
Aaah, it's img not image. And in this class somehow it's imgResize and not resizeImage.
Ok, next try:
$myNotSoAwesomeStolenFileHandler->imgResize($image);
Note that I had to read the method itself to be able to know what @param $image actually is (is it a String? is it a binary object? is it ...)
Recap:
Comments aren't really helping
A lot of functionality is crammed into one method (and those strange function creations on the fly in a method, iieeewww)
Naming of the methods could be a lot better.
Rewrite:
What does the name FileHandler tell me? Well, without reading the code it tells me that it's a class that aids me in handling files. I could for instance want to get a list of files. Add a file, upload one, rename it, get a FileInstance to do some extra stuff with the file itself, ... You know, the usual stuff a FileHandler does. It would look something like this:
<?php
interface FileHandler
{
/**
* Define the root path for the FileHandler
* The FileHandler should not be able
* to access files outside this root folder
*
* @param String $root the root path
*/
public function __construct($root);
/**
* Returns an array of all file and directory names in the current directory
* If $showDirectories == false
* exclude the directories
*
* @param boolean $showDirectories
* @return String[]
*/
public function listAll($showDirectories=true);
public function changeDirectory();
public function getCurrentDirectory();
public function getFile($name);
}
Yes, I didn't finish all the comments. Complaints can be written to /dev/null
Interface?
Yes, I wrote an interface and not a class. Why? Here is the reason.
An interface is a contract: it tells us programmers what a certain class/object can do if it implements this interface. This also goes the other way round: if we need an object of a certain interface, we simply type-hint for that interface and we get really nice error messages that help us in the debug process (for instance if someone is trying to cram a MouseHandler into your app instead of a FileHandler)
Back to the FileHandler that actually should have been an ImageInstance or ImageFile
As you see, I now wrote a piece of code that has nothing to do with your app, simply because you didn't need a FileHandler. You need an ImageResizer, or whatever you need.
Handling independent files and/or images
Our FileHandler has a method getFile(), and let's, for the sake of codereview and the essay I'm writing here, go all the way and have it return an object of type FileInstance.
The FileInstance
A FileInstance is a representation of a file. Lucky for us, we don't have to write this ourselves; there is a really good built-in class for this, the SplFileInfo class
What about my ImageResizer
Good question. The ImageResizer is a function/method that needs a FileInstance and then does some really cool stuff to it (like resizing). The ImageResizer needs a File or a link to a file. We use the SplFileInfo object here.
ImageResizer interface
Here we can go for different approaches, I will go for an ImageResizer that has a resize method that resizes a given image according to a certain set of rules (only width is supported here)
<?php
interface ImageResizer
{
/**
* Constructor, set some defaults
* @param int $width the width of the image
*/
public function __construct($width);
/**
* Resize the image according to a set of rules defined in the __construct
*
* @param SplFileInfo $image the image to resize
* @param SplFileInfo $destination the destination for the resized image
* @return void
* @throws MimeTypeNotSupportedException If mime_type<$image> is not supported
* @throws DestinationFileNotWritableException If !$destination->isWritable()
*/
public function resize(SplFileInfo $image, SplFileInfo $destination);
}
And now, let's implement the sh*@t out of this interface
<?php
class MimeTypeNotSupportedException extends Exception {}
class DestinationFileNotWritableException extends Exception {}
class NotAFileException extends Exception {}
class FileNotReadableException extends Exception {}
class AwesomeImageResizer implements ImageResizer
{
private $width;
private $infoReader;
/**
* Constructor, set some defaults
*
* @param int $width the width of the image
*/
public function __construct($width)
{
$this->width = $width;
$this->infoReader = new finfo(FILEINFO_MIME_TYPE);
}
/**
* Resize the image according to a set of rules defined in the __construct
*
* @param SplFileInfo $image the image to resize
* @param SplFileInfo $destination the destination for the resized image
* @return void
* @throws NotAFileException If ! $image->isFile()
* @throws MimeTypeNotSupportedException If mime_type<$image> is not supported
* @throws DestinationFileNotWritableException If !$destination->isWritable()
*/
public function resize(SplFileInfo $image, SplFileInfo $destination)
{
#is it a file?
if ( !$image->isFile() )
{
throw new NotAFileException();
}
#is destination writable?
if ( !$destination->isWritable() )
{
throw new DestinationFileNotWritableException();
}
#get the mimeType
$mimeType = $this->infoReader->file($image);
#get the image resource
$imageResource = $this->getImageResource($image, $mimeType);
#let's do the fancy resizing work
$width = imagesx($imageResource);
$height = imagesy($imageResource);
$newWidth = $this->width;
$newHeight = $height * $newWidth / $width;
$newResource = imagecreatetruecolor($newWidth, $newHeight);
imagecopyresampled(
$newResource,
$imageResource,
0, 0, 0, 0,
$newWidth,
$newHeight,
$width,
$height
);
#create the new image
$newImage = $this->createImage($newResource, $mimeType);
#write the new image to the file
$file = $destination->openFile('w');
$file->fwrite($newImage);
}
/**
* calls the corresponding php method
* to create a resource from a certain mime type
*
* @param SplFileInfo $image
* @param string $mimeType
* @return resource
* @throws FileNotReadableException If !$image->isReadable()
* @throws MimeTypeNotSupportedException If mime_type<$image> is not supported
*/
private function getImageResource(SplFileInfo $image, $mimeType)
{
//is the image readable?
if ( !$image->isReadable() )
{
throw new FileNotReadableException();
}
switch ($mimeType) {
case 'image/png':
return imagecreatefrompng($image->getPathName());
break;
case 'image/jpeg':
return imagecreatefromjpeg($image->getPathName());
break;
case 'image/gif' :
return imagecreatefromgif($image->getPathName());
break;
default:
#sorry, can't help ya
throw new MimeTypeNotSupportedException($mimeType);
break;
}
}
private function createImage($resource, $mimeType)
{
    // Buffer the output: imagepng()/imagejpeg()/imagegif() write the encoded
    // image to stdout, so capture it and return it as a string instead.
    ob_start();
    switch ($mimeType) {
        case 'image/png':
            imagepng($resource);
            break;
        case 'image/jpeg':
            imagejpeg($resource);
            break;
        case 'image/gif' :
            imagegif($resource);
            break;
        default:
            ob_end_clean();
            #sorry, can't help ya
            throw new MimeTypeNotSupportedException();
    }
    return ob_get_clean();
}
}
Wow, what code, so much wow.
This is however untested and only written for explanation ;)
Some usage examples:
<?php
$mediumResizer = new AwesomeImageResizer(800);
$mediumResizer->resize(
new SplFileInfo('my/current/image.png'),
new SplFileInfo('my/destination/image.png')
);
#and again with error handling
try {
$mediumResizer->resize(
new SplFileInfo('other/current/image.png'),
new SplFileInfo('other/destination/image.png')
);
} catch ( NotAFileException $e )
{
echo "Oh nows, the image isn't even a file";
die();
} catch( FileNotReadableException $e)
{
echo "Aaahhh, I'm blind. The file is unreadable, aaaahhhhhhhhh!";
} catch ( MimeTypeNotSupportedException $e )
{
echo "Oh nows, ze program has no clue what to do with your mime_type: " . $e->getMessage();
die();
} catch ( DestinationFileNotWritableException $e )
{
echo "Oh nows, the destination file doesn't want me to write to it :(";
die();
}
But what about all those throws, and exceptions? Well, when something goes wrong, you should not echo an error message, or die();
See how I'm handling errors outside of my ImageResizer? This is good. Because let's face it: an ImageResizer that has to handle resizing of images AND error handling, that's a lot. Even for our AwesomeImageResizer; we all have a limit ;)
Ok, I hope that with writing this huuuuuuge ImageResizer I have given you a better view into writing OO code.
Ok, it's a bit of a long read, but I'll come back soon with more text ;)
On Comments:
A good read: http://pear.php.net/manual/en/standards.sample.php
Note of course that this is probably overkill. But it's good to know how far you can push it ;) | {
"domain": "codereview.stackexchange",
"id": 15099,
"tags": "php, object-oriented, image, library, authentication"
} |
How to convert wave from real to complex and vice versa? | Question: I have wave expressed by array of real numbers (double in C++). But I want to express it as a complex.
I tried to create a complex variable and assign to its real part the array of my wave and to its imaginary part just an array of zeros. OK, it's complex now. But I feel there is something wrong. It's not an authentic complex signal. It's just a surrogate. For example I can't manipulate the phase (only reverse it). When I multiply it by, for example, $ i^{0.4} $ it just changes the amplitude a little bit but the phase is without change.
On the other hand, if I have an authentic complex wave, how do I express it as real numbers while keeping the information from the imaginary values?
For any help great thanks in advance.
Answer: Rather than saying the questions you are asking are non-sensical, I will say they reveal a lack of understanding of complex numbers. There is lots of material to be had with a few simple searches. I would also recommend that you read my blog article The Exponential Nature of the Complex Unit Circle which gives an explanation of Euler's equation which is absolutely essential to understand in order to comprehend the meaning of DFT bin values.
For your first question: Technically, a real number is a complex number with a zero imaginary component. So setting the real part equal to your values and the imaginary part to zero is the correct method. When you multiplied it by $ i^{0.4} $ it should have rotated each value a tenth of a cycle. Thus the amplitude stays the same, but the phase shifts by $\pi/5$ radians.
For your second question: A complex number consists of two values. A real number consists of one. Therefore converting a complex number to a real number is a non-sensical proposition. However, you can calculate the magnitude, which is a real number. You can also ask what the real part is, which is also a real number. The imaginary part is a real number multiplied by $i$.
A real tone is a sinusoid, flat as a pancake. A complex tone is a spiral, like a slinky. The definitions I use in my blog articles are:
$$ S_n = A \cos( \alpha n + \phi ) $$
For a real tone, and:
$$ S_n = A e^{i (\alpha n + \phi)} $$
For a complex one.
Notice that a real valued tone is just the average of two complex tones:
$$ \frac{ A e^{i (\alpha n + \phi)} + A e^{-i (\alpha n + \phi)} }{2} = A \frac{ e^{i (\alpha n + \phi)} + e^{-i (\alpha n + \phi)} }{2} = A \cos( \alpha n + \phi )$$
Because
$$ \cos( \theta ) = \frac{ e^{i\theta} + e^{-i\theta} }{2} $$
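To make the first answer concrete, here is a quick NumPy sketch (hypothetical data, not taken from the question) showing that embedding a real array as complex and multiplying by $i^{0.4}$ leaves every magnitude alone and rotates every phase by $\pi/5$:

```python
import numpy as np

# Hypothetical real-valued "wave": one cycle of a cosine.
n = np.arange(16)
real_wave = np.cos(2 * np.pi * n / 16)

# Embedding it as complex with a zero imaginary part is the correct conversion.
z = real_wave.astype(complex)

# i**0.4 = exp(i * 0.4 * pi/2) = exp(i * pi/5): a pure rotation,
# so the magnitude of each sample is unchanged.
rot = (1j ** 0.4) * z

assert np.allclose(np.abs(rot), np.abs(z))           # amplitude preserved
assert abs(np.angle(1j ** 0.4) - np.pi / 5) < 1e-12  # phase shift of pi/5
```

If the original multiplication seemed to change only the amplitude, it is likely because only the real part of the product was kept afterwards, which projects the rotation back onto the real axis.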
Hope this helps.
Ced | {
"domain": "dsp.stackexchange",
"id": 6269,
"tags": "complex"
} |
Defaulting dependencies | Question: What are some advantages and disadvantages of providing default implementations for dependencies. I have chosen to do this because it allows the objects to be easily used in the application but also allows me to mock for unit testing.
public class RoleProvider : RoleProviderBase
{
private IRoleDataProvider _roleDataProvider;
private IEntitlementsDataProvider EntitlementsDataProvider
{
get
{
if (_roleDataProvider == null)
{
_roleDataProvider = new RoleDataProvider();
}
return _roleDataProvider;
}
}
public RoleProvider(IRoleDataProvider roleDataProvider)
{
_roleDataProvider = roleDataProvider;
}
public RoleProvider()
{
}
}
Answer: The idea is very practical and makes it much easier to reuse the class. It's also easy to test or even substitute very different implementations of composite components on demand in production code.
However, I do have a few suggestions worth considering when doing this sort of thing.
There's no need to code the substitution mechanism until you actually need it for your first test.
So start out with only the parameterless constructor.
Until then, the additional code would just be clutter. (I.e. don't add stuff automatically just because you might need it.)
The lazy initialization approach you used isn't thread-safe, so you may want to consider coding the parameterless constructor as follows.
This is of course assuming that either the composite object will always be used or that it is 'cheap' to construct. Otherwise you might want to look into a thread-safe lazy initialization technique or ensure the object is not shared across multiple threads.
public RoleProvider()
{
_roleDataProvider = new RoleDataProvider();
}
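If you do want to keep the lazy initialization, the usual fix is double-checked locking (in C# you would more likely reach for Lazy&lt;T&gt;). A sketch of the pattern in Python, with hypothetical stand-in classes, just to show the shape:

```python
import threading

class RoleDataProvider:
    """Stand-in for the real data provider; assumed cheap to construct."""
    def fetch_roles(self):
        return ["admin", "user"]

class RoleProvider:
    def __init__(self, provider=None):
        self._provider = provider
        self._lock = threading.Lock()

    @property
    def provider(self):
        # Double-checked locking: the unlocked fast path avoids contention
        # once the field is set; the second check under the lock prevents
        # two threads from constructing the default provider concurrently.
        if self._provider is None:
            with self._lock:
                if self._provider is None:
                    self._provider = RoleDataProvider()
        return self._provider

p = RoleProvider()
assert p.provider.fetch_roles() == ["admin", "user"]
assert p.provider is p.provider  # same instance on every access
```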
Watch out for the possibility of circular dependencies.
A typical programming problem is handling circular dependencies. This technique can be exposed to that risk, because you're not explicitly specifying the dependencies at construction time. E.g.
new A(); //automatically creates default B which automatically creates default A etc.
Whereas forcing the composite object to be provided in the constructor avoids the risk.
//Assuming X and B share a common ancestor which
//is required as input to A's constructor.
x = new X(); //Truly doesn't have composite objects.
a1 = new A(x);
b = new B(a1);
a2 = new A(b);
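The pitfall can also be shown executably. This Python sketch (hypothetical classes, only to show the mechanics) default-constructs its composite and recurses until the runtime gives up, while constructor injection forces the chain to be built step by step:

```python
# Default-constructing composites can recurse forever:
class A:
    def __init__(self, b=None):
        self.b = b if b is not None else B()   # default B ...

class B:
    def __init__(self, a=None):
        self.a = a if a is not None else A()   # ... which defaults an A, etc.

try:
    A()
    crashed = False
except RecursionError:
    crashed = True
assert crashed

# Requiring the dependency in the constructor makes the cycle impossible
# to express by accident: the chain must be built explicitly.
class A2:
    def __init__(self, b):
        self.b = b

class B2:
    def __init__(self, a):
        self.a = a

a1 = A2(None)   # a terminal dependency stands in for X here
b = B2(a1)
a2 = A2(b)
assert a2.b.a is a1
```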
Note that it's impossible to code infinite circular construction dependencies this way. | {
"domain": "codereview.stackexchange",
"id": 1974,
"tags": "c#"
} |
How to design Multi qubit Controlled Z rotations | Question: I need some help with a multi-qubit controlled-Z rotation.
Below is the Qiskit code of a triple-controlled Z rotation
def cccZ():
qc = QuantumCircuit(4)
qc.cp(pi/4, 0, 3)
qc.cx(0, 1)
qc.cp(-pi/4, 1, 3)
qc.cx(0, 1)
qc.cp(pi/4, 1, 3)
qc.cx(1, 2)
qc.cp(-pi/4, 2, 3)
qc.cx(0, 2)
qc.cp(pi/4, 2, 3)
qc.cx(1, 2)
qc.cp(-pi/4, 2, 3)
qc.cx(0, 2)
qc.cp(pi/4, 2, 3)
gate = qc.to_gate(label=' cccZ')
return gate
Please help me in modifying this code to a six-controlled Z rotation. It would be really great if someone could also explain its theory. I am really struggling with the random nature of quantum computing, with no fixed pattern at all.
Answer: The most flexible way to create gates beyond the ones included as methods is with the circuit library:
from qiskit import QuantumCircuit
from qiskit.circuit.library import ZGate
circuit = QuantumCircuit(7)
c6z = ZGate().control(6)
circuit.append(c6z, range(7))
circuit.draw()
q_0: ─■─
│
q_1: ─■─
│
q_2: ─■─
│
q_3: ─■─
│
q_4: ─■─
│
q_5: ─■─
│
q_6: ─■─
The decomposition of this gate is deep:
circuit.decompose().depth()
315
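Conceptually there is less randomness here than it may seem. Any multi-controlled Z, no matter how many controls, is just a diagonal matrix that flips the sign of the single all-ones basis state and leaves every other basis state untouched. A NumPy sketch of the 6-controlled case (no Qiskit needed):

```python
import numpy as np

def multi_controlled_z(n_controls):
    """Diagonal unitary on (n_controls + 1) qubits: phase -1 only on |11...1>."""
    dim = 2 ** (n_controls + 1)
    diag = np.ones(dim)
    diag[-1] = -1.0   # only the all-ones basis state picks up the sign flip
    return np.diag(diag)

c6z = multi_controlled_z(6)        # 7-qubit gate, 128 x 128
assert c6z.shape == (128, 128)
assert c6z[-1, -1] == -1.0
assert np.allclose(c6z @ c6z, np.eye(128))   # Z-type gates are involutions
```

The matrix is symmetric under any relabeling of the qubits, which is why the circuit drawing below shows only dots: it does not matter which qubit you call the "target".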
How to construct multi-controlled gates, in general
Any single-qubit gate can be given an arbitrary number of controls with the method Gate.control. The parameter n sets the number of control qubits. As a consequence, the resulting controlled gate will be an (n+1)-qubit gate.
from qiskit.circuit.library import <some>Gate
cN_gate = <some>Gate(<gate_params>).control(<n>)
print(cN_gate.num_qubits) # n+1
You can append this custom gate to a circuit using the QuantumCircuit.append method:
circuit.append(cN_gate, [....., i])
\_n_/
The first $n$ parameters are the qubits on which you want to control. The last one (i) is the target qubit. | {
"domain": "quantumcomputing.stackexchange",
"id": 3043,
"tags": "programming, qiskit, quantum-gate"
} |
States transformation of the bilinear transform | Question: I have used the bilinear (or Tustin) transform for a while, have been through the derivation of it and also through the concept of frequency warping.
Something that I still do not understand, which is stated in some places (for example in the Matlab documentation of the continuous-to-discrete conversion methods), is that the "states are not preserved" (with respect to the original continuous-time states). It shows the relation:
$$
w[kT] = x[kT] - \frac{T}{2} \left( Ax[kT] + Bu[kT] \right)
$$
Where does this expression come from, and why are the states not preserved?
Answer: The Tustin approximation is concerned with transfer functions, i.e. relations between inputs and outputs. In state space representation
$$ \dot{\mathbb{x}}(t) = A \mathbb{x}(t) + B \mathbb{u}(t) $$
$$ \mathbb{y} = C \mathbb{x}(t) + D \mathbb{u}(t) $$
for continuous time
or
$$ \mathbb{w}[(k+1)T] = A \mathbb{w}[kT] + B \mathbb{u}[kT] $$
$$ \mathbb{y}[kT] = C \mathbb{w}[kT] + D \mathbb{u}[kT] $$
for discrete time
The $\mathbb{y}$ variables will be consistent, $\mathbb{x}$ will not be the same as $\mathbb{w}$.
Relation between state space representations
If you want all the poles in the continuous time to be mapped to the poles in the discrete time representation in the positions given by $(1 + sT/2)/(1-sT/2)$ you could set
$$ A_d = (I - A_c T/2)^{-1}(I + A_c T/2)$$
Then every eigenvector of $A_c$ associated with an eigenvalue $\lambda$ is an eigenvector of $A_d$ associated with the eigenvalue $(1 + \lambda T/2)/(1-\lambda T/2)$.
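This eigenvalue mapping is easy to check numerically; the following sketch uses a small hypothetical system matrix:

```python
import numpy as np

T = 0.1
# Hypothetical stable continuous-time state matrix (poles at -1 and -2).
A_c = np.array([[0.0, 1.0],
                [-2.0, -3.0]])
I = np.eye(2)

# Tustin map applied to the state matrix.
A_d = np.linalg.inv(I - A_c * T / 2) @ (I + A_c * T / 2)

lam_c = np.linalg.eigvals(A_c)
lam_d = np.linalg.eigvals(A_d)

# Each continuous pole lambda maps to (1 + lambda*T/2) / (1 - lambda*T/2).
mapped = (1 + lam_c * T / 2) / (1 - lam_c * T / 2)
assert np.allclose(np.sort(lam_d), np.sort(mapped))
```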
One way to match the state space of the discretized system is to choose $B_d = \left(\frac{T}{1-sT/2}\right)(1 - A_c T / 2)^{-1}B_c$
So that
$$
\begin{eqnarray}
x(z)/u(z) &=& (z I - A_d)^{-1}B_d \\ \\
&=& \left(z I - (I - A_c T/2)^{-1}(I + A_c T/2) \right)^{-1}(1 - A_c T / 2)^{-1} \left(\frac{T}{1-sT/2}\right) B_c \\ \\
&=&\left(z (I - A_c T / 2) - (I + A_c T/2) \right)^{-1} \left(\frac{T}{1-sT/2}\right) B_c \\ \\
&=&\left(\left(\frac{1+sT/2}{1-sT/2}\right) (I - A_c T / 2) - (I + A_c T/2) \right)^{-1} \left(\frac{T}{1-sT/2}\right) B_c \\ \\
&=&\left(\frac{(1+sT/2)(I - A_c T / 2) - (1-sT/2)(I + A_c T/2)}{1-sT/2} \right)^{-1} \left(\frac{T}{1-sT/2}\right) B_c \\ \\
&=&\left(\frac{(s I - A_c) T }{1-sT/2} \right)^{-1} \left(\frac{T}{1-sT/2}\right) B_c \\ \\
&=&\left(s I - A_c \right)^{-1} B_c \\ \\
\end{eqnarray}$$
Notice that this choice of $B_d$ depends on $s$, but if time is small it reduces to:
$$
B_d = \left(\frac{T}{1-sT/2}\right)(1 - A_c T / 2)^{-1}B_c = \left(\frac{T}{1-sT/2}\right)\left(\frac{T}{2}\right)^{-1}\left(\frac{2I}{T} - A_c\right)^{-1}B_c = \left(\frac{2}{1-sT/2}\right)\left(\frac{2I}{T} - A_c\right)^{-1}B_c \approx 2\left(\frac{2I}{T} - A_c\right)^{-1}B_c = B_d
$$
where $\left(\frac{2}{1-sT/2}\right) \rightarrow 2 $ when $ sT/2 \ll 1$, meaning the input signal changes very slowly compared to the discretization frequency. This result can be obtained in this answer Bilinear transformation of continuous time state space system (here $\alpha = \left(\frac{2}{T}\right)$):
$$z\mathbf{Q}(z) = \left(\alpha\mathbf{I}-A\right)^{-1}\left(\alpha\mathbf{I}+A\right)\mathbf{Q}(z) + \left(\alpha\mathbf{I}-A\right)^{-1}\mathbf{B}\left(z+1\right)\mathbf{X}(z) \\ \\
\approx z\mathbf{Q}(z) = \left(\alpha\mathbf{I}-A\right)^{-1}\left(\alpha\mathbf{I}+A\right)\mathbf{Q}(z) + \left(\alpha\mathbf{I}-A\right)^{-1}2 \mathbf{B}\mathbf{X}(z)$$
and this would be the best we could do to make the discrete state correspond to the continuous state.
The Matlab approximation
Apparently MATLAB is using a value that is half way between the two samples.
$$x(t + T_s/2) \approx x(t) + \frac{T_s}{2} \dot{x}(t) = x(t) + \frac{T_s}{2} (A x(t) + Bu(t))$$
I cannot say too much about their implementation. | {
"domain": "dsp.stackexchange",
"id": 10442,
"tags": "state-space, bilinear-transform, discretization"
} |
Electromagnetic induction in an open loop? | Question: So I was solving a problem where there was a parabolic conductor lying in a region of uniform magnetic field (B) and a conducting rod was sliding on it, and its acceleration was given by 'w'. But how can an EMF be induced in the parabola despite the fact that it is not a closed loop?
Answer: The movement of the conducting rod on the parabolic conductor makes a closed loop.
And since the conducting rod is accelerating, the area of the loop becomes time dependent, and hence so does the flux.
So an EMF will be induced due to the time-dependent change of flux. | {
"domain": "physics.stackexchange",
"id": 71175,
"tags": "quantum-electrodynamics"
} |
A question regarding the concept of potential difference between two points in an electric field, as stated in my 12th grade book | Question: My 12th grade physics book on electrostatics says:
Potential difference between two points in electric field can be defined as work done in displacing a unit positive charge from one point to another against the electric forces.
By this logic, the potential difference between two points in an electric field should always be positive because the work done in moving a unit positive charge against the direction of electric field will always be positive. But potential difference can also be negative, so what exactly is my book describing?
Answer: Consider a system where the electric force is due to a negative charge, not a positive charge as is usually assumed.
Now, when it is said that:
... work done in displacing a unit positive charge... against the electric forces
what is meant is that an external force that is equal in magnitude but opposite in direction to the electric force is applied.
Now, suppose, as the figure shows, you move the charge q (which is positive) from A to B. So, the direction of your force is AB. The big charge Q will instead try to make it move towards A, so its direction is BA. Now, the force you apply and the direction of the charge's displacement are in the same direction. Therefore, in this case the work done by the external force (you) is positive.
Let us now assume the charge moves from B to A. In this case, the electric force is still in the direction BA, and you are applying a force in the direction AB. In this case, the direction of the displacement and the external force are opposite to each other, so the work done is negative. | {
"domain": "physics.stackexchange",
"id": 100195,
"tags": "electrostatics, electric-fields, potential, voltage, conventions"
} |
In what sense did the Oklo reactor "trap" its own nuclear waste? | Question: A number of popular writings on the natural fission reactor at Oklo, Gabon (e.g. here) state that some of the energetic byproducts (krypton and xenon, presumably 85Kr and 133Xe) of the reactor were "trapped" in aluminophosphate.
How, exactly, does "aluminophosphate" (I assume this is some sort of framework mineral?) trap gaseous elements? Is it related to the mechanism by which clathrates trap gases?
Answer: According to Record of Cycling Operation of the Natural Nuclear Reactor in the Oklo/Okelobondo Area in Gabon, the Xe is trapped in crystalline cage-like structures formed of the aluminum phosphate that are similar to zeolites.
The reference also explains that the aluminum phosphate structures grow quickly under hydrothermal conditions of 270-300 degrees C. | {
"domain": "earthscience.stackexchange",
"id": 146,
"tags": "geology, earth-history, mineralogy"
} |
Calculating the displacement of a fault | Question: In the calculation of scalar moment magnitude of an earthquake we have the formula
$$M_0=\mu AD$$
where:
$\mu$ is the shear modulus of the rocks involved in the earthquake (in Pa)
$A$ is the area of the rupture along the geologic fault where the earthquake occurred (in m2), and
$D$ is the average displacement on $A$ (in m).
In many cases the displacement can occur in the subsurface. In order to predict the moment magnitude of such earthquakes, the value of $D$ must be estimated. How is this process carried out?
Answer: The process is carried out by solving an "inverse problem", and there are many ways to estimate the moment depending on the observable. For example, if you have some measurements of ground deformation following the earthquake (using GPS/InSAR), then by combining a physics-based model with an optimizer you can estimate the area/slip distribution that best explains the observations. Once you know the area/slip distribution, you can estimate the moment.
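As a toy illustration of that workflow (all numbers hypothetical, with a linear forward model standing in for the real physics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model: surface displacements d = G @ s, where s is the
# slip on each of 4 fault patches and G is a made-up Green's function matrix.
n_obs, n_patches = 12, 4
G = rng.normal(size=(n_obs, n_patches))
true_slip = np.array([1.2, 0.8, 0.4, 0.1])              # metres, hypothetical
d = G @ true_slip + rng.normal(scale=1e-6, size=n_obs)  # "GPS" data + noise

# Least-squares inversion recovers the slip distribution from the data.
est_slip, *_ = np.linalg.lstsq(G, d, rcond=None)
assert np.allclose(est_slip, true_slip, atol=1e-4)

# With the slip known, the scalar moment follows from M0 = mu * sum(A_i * D_i).
mu = 30e9                 # Pa, a typical crustal shear modulus
patch_area = 25e6         # m^2 per patch (5 km x 5 km), hypothetical
M0 = mu * patch_area * est_slip.sum()
Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)   # standard moment-magnitude relation
assert 5.0 < Mw < 7.0
```

Real inversions add regularization, positivity constraints, and a nonlinear dependence on the fault geometry, but the moment estimate at the end works the same way.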
The physical models can be simple or complex (e.g., semi-analytical or fully numerical). Same goes for the optimizer (local or global). For computationally expensive numerical models global optimization is typically not feasible. For local optimization you need a 'reasonable' initial model. | {
"domain": "earthscience.stackexchange",
"id": 376,
"tags": "geophysics, seismology, earthquakes, tectonics"
} |
How to find Equivalent Resistance? | Question:
I am not able to figure out the combination between the two 12 ohm resistors. And how will the simplified diagram look after we figure out the combination between the 12 ohm resistors?
Answer: Redraw the circuit to make it more understandable:
As long as all resistor terminals still connect to the same nodes they did before, there's no change to the circuit's behavior. | {
"domain": "physics.stackexchange",
"id": 46999,
"tags": "homework-and-exercises, electric-circuits, electrical-resistance"
} |
Rijndael for use in production systems | Question: I found an example of how to implement Rijndael.
This class uses a symmetric key algorithm (Rijndael/AES) to encrypt and decrypt data. As long as encryption and decryption routines use the same parameters to generate the keys, the keys are guaranteed to be the same. The class uses static functions with duplicate code to make it easier to demonstrate encryption and decryption logic. In a real-life application, this may not be the most efficient way of handling encryption, so - as soon as you feel comfortable with it - you may want to redesign this class.
Is this code secure enough for production systems?
using System;
using System.IO;
using System.Text;
using System.Security.Cryptography;
public class RijndaelSimple
{
/// <summary>
/// Encrypts specified plaintext using Rijndael symmetric key algorithm
/// and returns a base64-encoded result.
/// </summary>
/// <param name="plainText">
/// Plaintext value to be encrypted.
/// </param>
/// <param name="passPhrase">
/// Passphrase from which a pseudo-random password will be derived. The
/// derived password will be used to generate the encryption key.
/// Passphrase can be any string. In this example we assume that this
/// passphrase is an ASCII string.
/// </param>
/// <param name="saltValue">
/// Salt value used along with passphrase to generate password. Salt can
/// be any string. In this example we assume that salt is an ASCII string.
/// </param>
/// <param name="hashAlgorithm">
/// Hash algorithm used to generate password. Allowed values are: "MD5" and
/// "SHA1". SHA1 hashes are a bit slower, but more secure than MD5 hashes.
/// </param>
/// <param name="passwordIterations">
/// Number of iterations used to generate password. One or two iterations
/// should be enough.
/// </param>
/// <param name="initVector">
/// Initialization vector (or IV). This value is required to encrypt the
/// first block of plaintext data. For RijndaelManaged class IV must be
/// exactly 16 ASCII characters long.
/// </param>
/// <param name="keySize">
/// Size of encryption key in bits. Allowed values are: 128, 192, and 256.
/// Longer keys are more secure than shorter keys.
/// </param>
/// <returns>
/// Encrypted value formatted as a base64-encoded string.
/// </returns>
public static string Encrypt(string plainText,
string passPhrase,
string saltValue,
string hashAlgorithm,
int passwordIterations,
string initVector,
int keySize)
{
// Convert strings into byte arrays.
// Let us assume that strings only contain ASCII codes.
// If strings include Unicode characters, use Unicode, UTF7, or UTF8
// encoding.
byte[] initVectorBytes = Encoding.ASCII.GetBytes(initVector);
byte[] saltValueBytes = Encoding.ASCII.GetBytes(saltValue);
// Convert our plaintext into a byte array.
// Let us assume that plaintext contains UTF8-encoded characters.
byte[] plainTextBytes = Encoding.UTF8.GetBytes(plainText);
// First, we must create a password, from which the key will be derived.
// This password will be generated from the specified passphrase and
// salt value. The password will be created using the specified hash
// algorithm. Password creation can be done in several iterations.
PasswordDeriveBytes password = new PasswordDeriveBytes(
passPhrase,
saltValueBytes,
hashAlgorithm,
passwordIterations);
// Use the password to generate pseudo-random bytes for the encryption
// key. Specify the size of the key in bytes (instead of bits).
byte[] keyBytes = password.GetBytes(keySize / 8);
// Create uninitialized Rijndael encryption object.
RijndaelManaged symmetricKey = new RijndaelManaged();
// It is reasonable to set encryption mode to Cipher Block Chaining
// (CBC). Use default options for other symmetric key parameters.
symmetricKey.Mode = CipherMode.CBC;
// Generate encryptor from the existing key bytes and initialization
// vector. Key size will be defined based on the number of the key
// bytes.
ICryptoTransform encryptor = symmetricKey.CreateEncryptor(
keyBytes,
initVectorBytes);
// Define memory stream which will be used to hold encrypted data.
MemoryStream memoryStream = new MemoryStream();
// Define cryptographic stream (always use Write mode for encryption).
CryptoStream cryptoStream = new CryptoStream(memoryStream,
encryptor,
CryptoStreamMode.Write);
// Start encrypting.
cryptoStream.Write(plainTextBytes, 0, plainTextBytes.Length);
// Finish encrypting.
cryptoStream.FlushFinalBlock();
// Convert our encrypted data from a memory stream into a byte array.
byte[] cipherTextBytes = memoryStream.ToArray();
// Close both streams.
memoryStream.Close();
cryptoStream.Close();
// Convert encrypted data into a base64-encoded string.
string cipherText = Convert.ToBase64String(cipherTextBytes);
// Return encrypted string.
return cipherText;
}
/// <summary>
/// Decrypts specified ciphertext using Rijndael symmetric key algorithm.
/// </summary>
/// <param name="cipherText">
/// Base64-formatted ciphertext value.
/// </param>
/// <param name="passPhrase">
/// Passphrase from which a pseudo-random password will be derived. The
/// derived password will be used to generate the encryption key.
/// Passphrase can be any string. In this example we assume that this
/// passphrase is an ASCII string.
/// </param>
/// <param name="saltValue">
/// Salt value used along with passphrase to generate password. Salt can
/// be any string. In this example we assume that salt is an ASCII string.
/// </param>
/// <param name="hashAlgorithm">
/// Hash algorithm used to generate password. Allowed values are: "MD5" and
/// "SHA1". SHA1 hashes are a bit slower, but more secure than MD5 hashes.
/// </param>
/// <param name="passwordIterations">
/// Number of iterations used to generate password. One or two iterations
/// should be enough.
/// </param>
/// <param name="initVector">
/// Initialization vector (or IV). This value is required to encrypt the
/// first block of plaintext data. For RijndaelManaged class IV must be
/// exactly 16 ASCII characters long.
/// </param>
/// <param name="keySize">
/// Size of encryption key in bits. Allowed values are: 128, 192, and 256.
/// Longer keys are more secure than shorter keys.
/// </param>
/// <returns>
/// Decrypted string value.
/// </returns>
/// <remarks>
/// Most of the logic in this function is similar to the Encrypt
/// logic. In order for decryption to work, all parameters of this function
/// - except cipherText value - must match the corresponding parameters of
/// the Encrypt function which was called to generate the
/// ciphertext.
/// </remarks>
public static string Decrypt(string cipherText,
string passPhrase,
string saltValue,
string hashAlgorithm,
int passwordIterations,
string initVector,
int keySize)
{
// Convert strings defining encryption key characteristics into byte
// arrays. Let us assume that strings only contain ASCII codes.
// If strings include Unicode characters, use Unicode, UTF7, or UTF8
// encoding.
byte[] initVectorBytes = Encoding.ASCII.GetBytes(initVector);
byte[] saltValueBytes = Encoding.ASCII.GetBytes(saltValue);
// Convert our ciphertext into a byte array.
byte[] cipherTextBytes = Convert.FromBase64String(cipherText);
// First, we must create a password, from which the key will be
// derived. This password will be generated from the specified
// passphrase and salt value. The password will be created using
// the specified hash algorithm. Password creation can be done in
// several iterations.
PasswordDeriveBytes password = new PasswordDeriveBytes(
passPhrase,
saltValueBytes,
hashAlgorithm,
passwordIterations);
// Use the password to generate pseudo-random bytes for the encryption
// key. Specify the size of the key in bytes (instead of bits).
byte[] keyBytes = password.GetBytes(keySize / 8);
// Create uninitialized Rijndael encryption object.
RijndaelManaged symmetricKey = new RijndaelManaged();
// It is reasonable to set encryption mode to Cipher Block Chaining
// (CBC). Use default options for other symmetric key parameters.
symmetricKey.Mode = CipherMode.CBC;
// Generate decryptor from the existing key bytes and initialization
// vector. Key size will be defined based on the number of the key
// bytes.
ICryptoTransform decryptor = symmetricKey.CreateDecryptor(
keyBytes,
initVectorBytes);
// Define memory stream which will be used to hold encrypted data.
MemoryStream memoryStream = new MemoryStream(cipherTextBytes);
// Define cryptographic stream (always use Read mode for decryption).
CryptoStream cryptoStream = new CryptoStream(memoryStream,
decryptor,
CryptoStreamMode.Read);
// Since at this point we don't know what the size of decrypted data
// will be, allocate the buffer long enough to hold ciphertext;
// plaintext is never longer than ciphertext.
byte[] plainTextBytes = new byte[cipherTextBytes.Length];
// Start decrypting.
int decryptedByteCount = cryptoStream.Read(plainTextBytes,
0,
plainTextBytes.Length);
// Close both streams.
memoryStream.Close();
cryptoStream.Close();
// Convert decrypted data into a string.
// Let us assume that the original plaintext string was UTF8-encoded.
string plainText = Encoding.UTF8.GetString(plainTextBytes,
0,
decryptedByteCount);
// Return decrypted string.
return plainText;
}
}
/// <summary>
/// Illustrates the use of RijndaelSimple class to encrypt and decrypt data.
/// </summary>
public class RijndaelSimpleTest
{
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main(string[] args)
{
string plainText = "Hello, World!"; // original plaintext
string passPhrase = "Pas5pr@se"; // can be any string
string saltValue = "s@1tValue"; // can be any string
string hashAlgorithm = "SHA1"; // can be "MD5"
int passwordIterations = 2; // can be any number
string initVector = "@1B2c3D4e5F6g7H8"; // must be 16 bytes
int keySize = 256; // can be 192 or 128
Console.WriteLine(String.Format("Plaintext : {0}", plainText));
string cipherText = RijndaelSimple.Encrypt(plainText,
passPhrase,
saltValue,
hashAlgorithm,
passwordIterations,
initVector,
keySize);
Console.WriteLine(String.Format("Encrypted : {0}", cipherText));
plainText = RijndaelSimple.Decrypt(cipherText,
passPhrase,
saltValue,
hashAlgorithm,
passwordIterations,
initVector,
keySize);
Console.WriteLine(String.Format("Decrypted : {0}", plainText));
}
}
Answer: This code uses the obsolete PasswordDeriveBytes class; use the Rfc2898DeriveBytes class instead (thanks @tom for highlighting this issue):
Rfc2898DeriveBytes password = new Rfc2898DeriveBytes(
passPhrase,
saltValueBytes,
passwordIterations);
Also, even though IV (initVectorBytes) may be publicly stored it should not be reused for different encryptions. You can derive it from pseudo-random bytes:
byte[] initVectorBytes = password.GetBytes(symmetricKey.BlockSize / 8);
Other than that, the encryption/decryption looks properly implemented, and I completely agree with the original code's author that the initialization/duplicated steps should be moved into a constructor. That would reduce the number of parameters in the Encrypt/Decrypt methods to one: the actual payload.
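As a sketch of that constructor-based redesign (purely illustrative, not the original author's code; the class and member names here are invented), the key and IV derivation could happen once:

```csharp
// Hypothetical sketch: derive key and IV once in the constructor so that
// Encrypt/Decrypt only need the payload. Uses Rfc2898DeriveBytes as
// suggested above; all names are illustrative.
public class RijndaelSimpleV2
{
    private readonly byte[] _keyBytes;
    private readonly byte[] _initVectorBytes;

    public RijndaelSimpleV2(string passPhrase,
                            string saltValue,
                            int passwordIterations,
                            int keySize)
    {
        byte[] saltValueBytes = Encoding.ASCII.GetBytes(saltValue);
        var password = new Rfc2898DeriveBytes(passPhrase,
                                              saltValueBytes,
                                              passwordIterations);
        _keyBytes = password.GetBytes(keySize / 8);
        _initVectorBytes = password.GetBytes(16); // Rijndael block size in bytes
    }

    public string Encrypt(string plainText)
    {
        // Same stream-based logic as the original Encrypt, but using the
        // cached _keyBytes and _initVectorBytes instead of parameters.
        throw new NotImplementedException();
    }
}
```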
Depending on your specifics you can also expose methods accepting and returning encrypting/decrypting streams for large encryption volumes if necessary. | {
"domain": "codereview.stackexchange",
"id": 3004,
"tags": "c#, security, aes"
} |
Roboearth installation in fuerte in ubuntu 12.04 | Question:
I'm trying to install RoboEarth from the wiki page, but when I try to install ros-fuerte-perception-pcl-addons
it returns the error that no such package was found.
Is there some other way to install these dependencies?
Thanks in advance :)
Originally posted by pntripathi9417 on ROS Answers with karma: 66 on 2012-05-09
Post score: 1
Original comments
Comment by felix k on 2012-06-18:
Same for me on 11.10, that package does not seem to exist on fuerte or unstable. Found it in trunk and tried to compile, but its dependencies and the requirements of their dependencies again are too old and tangled.
Answer:
Please try it again. It should compile under ROS Fuerte now.
Originally posted by ddimarco with karma: 916 on 2012-06-19
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by felix k on 2012-06-20:
So far I got everything compiled, thanks for your work! | {
"domain": "robotics.stackexchange",
"id": 9322,
"tags": "ubuntu-precise, ros-fuerte, roboearth, ubuntu"
} |
The difference between peptide bonds and the bonds between polypeptides? | Question: I was doing some tests for the multiple-choice final we've got ahead, and one question asked me to count the peptide bonds in an insulin hormone with 51 amino acids arranged in two polypeptides of 30 and 21 amino acids (these numbers are not true in reality). The number of bonds was 49, not 50, and that means the bond between the two polypeptides doesn't count as a peptide bond. Additionally, I know that the bonds between polypeptides shape the protein's molecular structure as it is now (just look at that shape). PEPTIDE COVALENT BONDS CAN NEVER cause that kind of 3D orientation in space. So there must be some fundamental difference between those bonds. What is it?
My research couldn't find any results as simple things have jammed the internet.
Answer: Aside from covalent bonds (amide and disulfide), the structure of a protein is determined by hydrogen bonds, salt bridges, and less specific interactions such as hydrophobic and hydrophilic effects. Hydrophobic portions of the protein chain tend toward the interior of the folded protein and hydrophilic regions to the exterior, in aqueous solution. | {
"domain": "chemistry.stackexchange",
"id": 2447,
"tags": "organic-chemistry, bond, covalent-compounds"
} |
Asexual reproduction and Telomeres | Question: Many eukaryotic organisms like yeasts, hydras , planarias, plants etc reproduce asexually.
Replication of End of linear DNA pose a limit to the number of cell divisions.
My question : Do asexually reproducing organisms have telomerase in all their cells ? Is the telomerase activated at specific times at specific places ?
If they don't, how can they reproduce infinitely ?
Answer: Short answer:
No. Eukaryotes have more ways of maintaining telomere length than via telomerase alone and all organisms with circular genomes do not need to worry about telomere length anyway.
Long answer:
Firstly, the telomerase system is not the only observed mechanism in Eukaryotes that elongates telomeres. Other mechanisms such as the transposition of retrotransposons (e.g. Drosophila) or recombination (e.g. Yeast) can also be used in some species. Yeast has been observed to use several of these mechanisms. So no, telomerase itself is not always necessary for immortality.
Of course, prokaryotes do not need to worry about shortening chromosome ends at all, since their genome is circular - i.e. it has no ends to shorten with each cell division! | {
"domain": "biology.stackexchange",
"id": 1897,
"tags": "reproduction, telomere, eukaryotic-cells, cell-division"
} |
Grover algorithm for a database search: where is the quantum advantage? | Question: I have been trying to understand what could be the advantage of using Grover algorithm for searching in an arbitrary unordered database D(key, value) with N values instead of a classical search.
I assumed that the oracle function is a function f(key)=y, where y is the index of the corresponding value in the classical database.
My problem is related to the oracle. The oracle circuit has to be modified for each search performed in the database, because the key is specified in the oracle. Let's assume this is a negligible operation for simplicity.
Supposing that the oracle circuit has to be calculated classically, it would require to produce a circuit which behaves like the function f(key)=y. This function would be obtained in at least O(N) steps (except for some special cases). The oracle function circuit has to be recalculated each time a database entry is being modified/added/removed, with a cost of O(N).
Many papers such as Quantum Algorithm Implementations for Beginners, Quantum Algorithms for Matching and Network Flows seem to not consider the oracle at all.
I don't know if I have to consider a quantum database to obtain a real advantage or not (this and the unreliability of quantum results convinced me it is not a very good idea, but that is just conjecture).
So, where is the complexity of building the oracle accounted for? Have I misunderstood something?
Is "The oracle function circuit has to be recalculated each time a database entry is being modified/added/removed, with a cost of O(N)" a wrong assumption?
Answer: Grover's algorithm does not have an advantage when searching an unordered database, because encoding the oracle into a circuit requires $\tilde \Omega(n)$ operations. You can prove this with a simple circuit counting argument. If the circuit had size $O(n^{0.99})$ then there would be fewer distinct circuits than distinct oracles. So the actual operational complexity is $\tilde \Omega(n^{1.5})$, even though the query complexity is $O(n^{0.5})$.
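To make the gap concrete, here is a small back-of-the-envelope sketch (illustrative only, not part of the original answer; the counts are order-of-magnitude estimates):

```python
import math

def grover_iterations(n):
    """Optimal Grover iteration count for an unstructured search space of
    size n with a single marked item: floor(pi/4 * sqrt(n))."""
    return math.floor(math.pi / 4 * math.sqrt(n))

n = 1_000_000
queries = grover_iterations(n)  # O(sqrt(n)) oracle queries
# If each oracle call needs on the order of n gates to encode a literal
# database, the total gate count dwarfs the n steps of a classical scan.
total_gates = queries * n
```

For n = 10^6 this gives 785 oracle queries but roughly 8 x 10^8 gates, versus about 10^6 lookups for a plain classical scan.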
Grover's algorithm only has an advantage when the thing you are searching over is abstract, like possible solutions to a SAT problem, as opposed to literally stored in hardware somewhere, like a database. | {
"domain": "quantumcomputing.stackexchange",
"id": 999,
"tags": "quantum-algorithms, grovers-algorithm, quantum-advantage"
} |
Testing the process of assigning offers to a customer 3 | Question: Following on from this question. I have made a very small change (changes are commented on in the code). Please see the test code below:
[TestFixture]
public class CustomerTest
{
private IList<IProduct> _products;
private Concor _concor;
private Chestnut _chestnut;
private IOfferCalculator _offerCalculator;
[OneTimeSetUp]
public void OneTimeSetUp()
{
_offerCalculator = new OfferCalculator();
}
[SetUp]
public void Setup()
{
_concor = new Concor();
_chestnut = new Chestnut();
_products = new List<IProduct>();
_products.Add(_concor);
_products.Add(_chestnut);
}
[Test]
public void Should_receive_a_concor()
{
var expectedConcor = new List<IProduct>();
expectedConcor.Add(_concor);
var gender = "F";
var expenditure = 1500M;
var customer = new Customer(gender,expenditure);
//Act
var eligibleProducts = customer.GetEligibleOffers(_offerCalculator, _products);
//refactored this code to loop through the eligible products
foreach (IProduct eligibleProduct in eligibleProducts)
{
customer.AddOffer(eligibleProduct);
}
CollectionAssert.AreEqual(customer._assignedProducts, expectedConcor);
}
}
and the supporting code below:
public class Concor : IProduct
{
private const decimal _expenditure =100;
private const string _gender = "F";
public bool IsEligible(string gender, decimal expenditure)
{
if (expenditure > _expenditure && gender == _gender)
{
return true;
}
else
{
return false;
}
}
}
public class Chestnut : IProduct
{
private const decimal _expenditure = 100;
private const string _gender = "M";
public bool IsEligible(string gender, decimal expenditure)
{
if (expenditure > _expenditure && gender == _gender)
{
return true;
}
else
{
return false;
}
}
}
public interface IProduct
{
bool IsEligible(string gender, decimal expenditure);
}
public interface IOfferCalculator
{
IEnumerable<IProduct> CalculateEligibility(string gender, decimal expenditure, IList<IProduct> products);
}
public class Customer
{
public string Gender { get; protected set; }
public decimal Expenditure { get; protected set; }
public IList<IProduct> _assignedProducts = new List<IProduct>();
public Customer(string gender, decimal expenditure)
{
Gender = gender;
Expenditure = expenditure;
}
public IEnumerable<IProduct> GetEligibleOffers(IOfferCalculator offerCalculator, IList<IProduct> products)
{
return offerCalculator.CalculateEligibility(Gender, Expenditure, products);
}
//Refactored this method to only accept one product
public void AddOffer(IProduct eligibleProduct)
{
_assignedProducts.Add(eligibleProduct);
}
}
public class OfferCalculator : IOfferCalculator
{
public IEnumerable<IProduct> CalculateEligibility(string gender, decimal expenditure, IList<IProduct> products)
{
foreach (var product in products)
{
if (product.IsEligible(gender,expenditure))
{
yield return product;
}
}
}
}
I have simply made the following changes:
Renamed AssignOffer to AddOffer
AddOffer accepts a single product rather than a list of products
The test method loops through the products instead of the Customer class
The reason I have made this change is to be more consistent with what I am seeing online like here and here. Notice that single entities are added to the collections in both links.
I cannot find any examples similar to the way I approached it in my other question.
I would be grateful for comments on, which approach to use as I am trying to follow the principle of least astonishment.
Update
A change is needed to Chestnut.IsEligible(). The first line of the if statement needs to change to: if (expenditure < _expenditure && gender == _gender). Notice the greater than operator changed to a less than operator.
Please note that Concor is not affected.
Answer: I do not see any big issue, just few minor design details.
Both Concor and Chestnut implement IProduct, and their implementations are the same. This makes me consider two alternatives:
In this case inheritance may not be the best tool for the job. I don't know for sure because I cannot see the big picture, but some would strongly disagree with designing a hierarchy just to add state, without any added/changed behavior. This is not a suggestion but more a thing to consider (also according to your database design).
Introduce an abstract base class Product where you put the shared implementation (Do not Repeat Yourself).
Also note that IsEligible() may be slightly simplified:
abstract class Product : IProduct
{
// Shared state used by the eligibility check; set by the concrete products.
protected decimal _expenditure;
protected string _gender;
public bool IsEligible(string gender, decimal expenditure)
=> expenditure > _expenditure && gender == _gender;
}
Now you have to choose, inheritance or not? If not then you can go one step further: make Gender and Expenditure two public properties, add a Name property make Product a concrete class and...read the product list from database. This is definitely the most flexible structure because you do not have to introduce more and more code when adding new products. I imagine this pseudo-code:
var customer = new Customer(...);
var offerCalculator = new BlackFridayOfferCalculator(...);
var eligibleProducts = customer.GetEligibleOffers(offerCalculator, database.Products);
In the other case (keeping hierarchy in place) you just need to add a protected ctor to Product to accept gender and expenditure parameters (which are also private readonly fields in the base class).
An important note: with this code approach you need to have all your entities in memory (it depends on the ORM you're using but I suppose they can't track down virtual method invocations). It means that your queries will consume a lot of memory and CPU and your super optimized fine-tuned database engine with indices won't be able to help. Think twice about this.
<EDIT>
Few quick thoughts about your edited question:
How many different products do you have? If they're just a FEW (and you're reasonably sure that they'll remain few in the near future) then I'd keep the hierarchy (but note the paragraph about performance: probably ALL your entities will be retrieved and materialized in memory, outside the DBE). If, however, you have a fair number of products, they'll grow in the future, or they must be managed somehow (for example to add/remove products or to change their properties), then the hard-coded solution is absolutely a no-no and you definitely should have a generic Product class (not abstract).
On the other hand, having different IsEligible() expressions will quickly make your code much harder to write. You can start easily with a simple flag (expenditure greater or lower than), but for more complex scenarios I'd introduce some external business logic. If the calculation is done in memory then it's easy: pick an expression evaluator like NCalc and move the condition to the database (as simple text).
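For illustration, a minimal sketch of the evaluator idea (this assumes the NCalc package; the rule text and parameter names are invented, not part of the reviewed code):

```csharp
// Hypothetical sketch using NCalc: the eligibility rule lives in the
// database as plain text and is evaluated at runtime per product.
string storedRule = "expenditure > 100 and gender = 'F'"; // e.g. read from DB

var expression = new NCalc.Expression(storedRule);
expression.Parameters["expenditure"] = 1500m;
expression.Parameters["gender"] = "F";

bool isEligible = (bool)expression.Evaluate();
```

Adding or changing a product's rule then becomes a data change rather than a code change.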
Note that code generation here may help you: it's fairly easy to read a configuration/list of products and to generate all the code you need (with hierarchy or not) and also producing all the relevant SQL code. Sometimes I used NCalc expressions in conjunction with its ExpressionVisitor to generate SQL, JavaScript, and plain English translations of my business logic (expressed as an NCalc expression and evaluated as such in C#). Without much more knowledge about your scenario and domain I can't suggest anything better but just give you few (there are many more options...) ideas to begin playing with.
</EDIT>
Now a few minor details. You have a field Gender. Let's refine this concept step by step. You're currently comparing with == (for strings in C# this is an ordinal, case-sensitive comparison). What you probably want is an explicit comparison - invariant culture (possibly case-insensitive) or plain ordinal. Use String.Equals() with a StringComparison argument and make your intent explicit.
However gender does not need to be a string. It might be a simple enum with two values Male and Female.
However, gender describes a person's gender identity; it's not the sex at birth. If you stick with gender then you should definitely provide many more options (Agender, Bigender, Demigender, Intergender, Non-binary, Cisgender, Transgender and so on, to mention just a few). If you need to know the customer's gender then you should pick a good list, read it carefully and stick to it. Do not underestimate this aspect because gender is an important and delicate topic.
On the same line you should note that these definitions are fluid and subject to change; if you're not using sex at birth (or sex assigned at birth) then you'd better move this list into a separate database table. Also note that what is acceptable to present in this list may vary according to the country, so the logic behind this is much more complex than you may expect. There are also privacy implications you can't ignore... are you still sure you need to know the customer's gender instead of the sex assigned at birth? | {
"domain": "codereview.stackexchange",
"id": 29247,
"tags": "c#, unit-testing"
} |
If something that is moving at constant velocity has no net force acting on it, how come it is able to move other objects? | Question: Let's say 10 kg block is sliding on a frictionless surface at a constant velocity, thus its acceleration is 0.
According to Newton's second law of motion, the force acting on the block is 0:
$a = 0$
$F = ma$
$F=0$
So let's say that block slid into a motionless block on the same surface, the motionless block would move.
Wouldn't the first block need force to be able to move the initially motionless block? I understand that it has energy due its constant velocity, but wouldn't it be its force that causes the displacement?
Answer: Here's a slightly different but equivalent way to think about it.
Forces describe interactions between two objects. If two objects are interacting, they exert forces on each other. If two objects are not interacting, they do not exert forces on each other. Thus, an object doesn't "carry around" a force with it. A force is not a property of an object, just as dmckee explains. Instead, we describe interactions between two objects using the more-abstract concept of force.
In your block-hits-other-block scenario, it's tempting to ask where did the force come from if the colliding object had $F_\text{net}=0$? But when forces are viewed as interactions, it becomes more apparent that the force didn't come from anywhere within one of the objects. There simply wasn't an interaction before they collided, so we wouldn't ascribe the existence of a force. | {
"domain": "physics.stackexchange",
"id": 63288,
"tags": "newtonian-mechanics, forces, free-body-diagram"
} |
Regarding the commutator of ladder operator in QFT | Question: I am trying to verify the computation of the commutator of the ladder operator for Klein-Gordon solutions, but it seems like I am unable to do it properly. Here is what I do:
For,
$$
\varphi(x^\mu)=\int\frac{\mathrm{d}^3p}{(2\pi)^{3/2}\sqrt{2p_0}}\left(a(\vec p)e^{-ip_\mu x^\mu}+a^\dagger(\vec p)e^{ip_\mu x^\mu}\right)\nonumber\\
\Pi(x^\mu)=\int\frac{\mathrm{d}^3p}{(2\pi)^{3/2}}(-i)\sqrt{\frac{p_0}{2}}\left(a(\vec p)e^{-ip_\mu x^\mu}-a^\dagger(\vec p)e^{ip_\mu x^\mu}\right),\nonumber
$$
write:
$$
a(\vec p)=\int \frac{\mathrm{d}^3\vec x}{(2\pi)^{3/2}\sqrt{2p_0}}(p_0\varphi(\vec x,t)+i\Pi(\vec x,t))e^{ip_\mu x^\mu}\nonumber\\
a^\dagger(\vec p)=\int \frac{\mathrm{d}^3\vec x}{(2\pi)^{3/2}\sqrt{2p_0}}(p_0\varphi(\vec x,t)-i\Pi(\vec x,t))e^{-ip_\mu x^\mu}\nonumber.
$$
From this compute the following:
$$
\left[a(\vec p),a^\dagger(\vec q)\right]{=\int\frac{\mathrm{d}^3\vec x\>\mathrm{d}^3\vec y}{(2\pi)^32\sqrt{p_0q_0}}\left[p_0\varphi(\vec x,t)+i\Pi(\vec x,t),
q_0\varphi(\vec y,t)-i\Pi(\vec y,t)\right]e^{i(p_\mu x^\mu-q_\mu y^\mu)}\nonumber\\
=\int\frac{\mathrm{d}^3\vec x\>\mathrm{d}^3\vec y}{(2\pi)^32\sqrt{p_0q_0}}e^{i(p_\mu x^\mu-q_\mu y^\mu)}(-ip_0\left[\varphi(\vec x,t),\Pi(\vec y,t)\right]+iq_0\left[\Pi(\vec x,t),\varphi(\vec y,t)\right])\nonumber\\
=\int\frac{\mathrm{d}^3\vec x\>\mathrm{d}^3\vec y}{(2\pi)^32\sqrt{p_0q_0}}e^{i(p_\mu x^\mu-q_\mu y^\mu)}i(-ip_0\delta^3(\vec x-\vec y)-iq_0\delta^3(\vec y-\vec x))\nonumber\\
=\int\frac{\mathrm{d}^3\vec x\>\mathrm{d}^3\vec y}{(2\pi)^32\sqrt{p_0q_0}}e^{i(p_\mu x^\mu-q_\mu y^\mu)}\delta^3(\vec x-\vec y)(p_0+q_0)\nonumber\\
=\int\frac{\mathrm{d}^3\vec x}{(2\pi)^3}e^{i(p-q)\cdot x}\frac{p_0+q_0}{2\sqrt{p_0q_0}}\nonumber\\
=\delta^3(\vec p-\vec q)\nonumber\frac{p_0+q_0}{2\sqrt{p_0q_0}}e^{i(p_0-q_0)t}.}
$$
And I don't understand why I have this strange factor which should not be there.
Remark: The factor is one if $p_0=q_0$ which is of course what we want. Also, one can see that the last line can only be true if the preceding condition holds, because of the argument of the exponential.
Answer: You need to think about what $p_0$ actually is here. When going from the Schrödinger to the Heisenberg picture, one simplifies the expressions for the mode expansions by introducing*
$$ p_0 = p_0(\vec p) = E_{\vec p} = \sqrt{\vec p^2 + m^2} \;. $$
Hence, your delta function does in fact set $p_0 = q_0$.
By the way, in your second-to-last line (v2) you have
$$ \mathrm e^{\mathrm i(p_\mu x^\mu - q_\mu y^\mu)} = \mathrm e^{\mathrm i (p_0 - q_0) t}\, \mathrm e^{\mathrm i(\vec x \cdot \vec p - \vec y \cdot \vec q)} \;. $$
You left out the first factor -- I assume, accidentally -- but it vanishes for the same reason ($p_0 = q_0$).
Edit as response to comment:
In general,
$$ f(\vec p, \vec q) \delta(\vec p - \vec q) = f(\vec p, \vec p) \delta(\vec p - \vec q) $$
because $f(\vec p, \vec q) \delta(\vec p - \vec q) = 0$ whenever $\vec q \neq \vec p$. That means you can replace all $q_0 = \sqrt{m^2 + \vec q^2}$ with $p_0 = \sqrt{m^2 + \vec p^2}$.
* At least that's how it was introduced in the QFT lecture I took. | {
"domain": "physics.stackexchange",
"id": 44246,
"tags": "operators, fourier-transform, commutator, dirac-delta-distributions, klein-gordon-equation"
} |
What equation (/solution) predicts the existence of black holes? | Question: Where does our theoretical prediction of the existence of black holes come from? If it is (as I am guessing) from the Einstein Field Equations, which solution predicts it and why?
Answer: From a historical perspective black holes weren't predicted. In 1916 Karl Schwarzschild found a solution to Einstein's equations for a spherically symmetric mass. It was only subsequently realised that the Schwarzschild metric is a vacuum solution with an event horizon and a curvature singularity at its centre, and that the metric describes a static uncharged black hole. It took until the end of the fifties (over 40 years!) before the black hole nature of Schwarzschild's solution was fully understood.
Following Schwarzschild's solution, three more solutions were found describing charged, rotating and charged-rotating black holes. These are the Reissner-Nordström, Kerr and Kerr-Newman metrics. These four metrics are the only known black hole solutions. | {
"domain": "physics.stackexchange",
"id": 24823,
"tags": "general-relativity, black-holes, differential-geometry, singularities"
} |
How can I construct a Band-pass filter from a low and a high-pass filter? | Question: Suppose, I need to construct a Band-pass filter in OpenCV. But, I know, there are no functions in OpenCV for Band-pass filters.
Now, what I need to do is to have a low-pass filter and a high-pass filter and combine them as a series. That is, first, the image would be passed through a low-pass filter and then the output of that low-pass filter would be passed to a high-pass filter.
Am I correct?
Now, what kind of low and high pass filters should I use? Would they be,
Gaussian low/high pass filters
Mean low/high pass filters
Median low/high pass filters
Sobel filters
or, anything else...
Answer: Yes, you are correct. You can apply them in series if they are linear.
One simple band-pass filter you could use is called the difference of Gaussians (DoG)
The procedure is:
Create a Gaussian filter with a small variance
Create a Gaussian filter with a large variance
Subtract the latter from the former to create a band-pass filter
Apply the filter to the image
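The four steps above can be sketched in a few lines of NumPy (a 1-D illustration of the same idea; in OpenCV you would typically blur the image twice with cv2.GaussianBlur and subtract the results):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # 1-D Gaussian sampled on a symmetric grid, normalized to unit sum.
    x = np.arange(size) - (size - 1) / 2
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

small_sigma = gaussian_kernel(31, 1.0)  # low-pass with a high cutoff
large_sigma = gaussian_kernel(31, 3.0)  # low-pass with a low cutoff

# Band-pass (DoG) kernel: subtract the heavier blur from the lighter one.
dog = small_sigma - large_sigma

# A constant (DC) signal is rejected: the kernel sums to ~0, so convolving
# it with a flat input yields ~0 everywhere.
flat_response = np.convolve(np.ones(64), dog, mode="valid")
```

Because both Gaussians are normalized to unit sum, their difference has zero DC gain, which is exactly the band-pass behavior you want.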
There are a lot of different filters to choose from depending on the application. A good paper is "On the choice of band-pass quadrature filters" by Boukerroui, Noble and Brady. It has an analysis of the DoG filter as well. | {
"domain": "dsp.stackexchange",
"id": 3937,
"tags": "image-processing, lowpass-filter, opencv, bandpass, highpass-filter"
} |
Is a year really 365.24 days, or is it 365.2564 days like I remember? | Question: The NPR News item and podcast Spring Starts Today All Over America, Which Is Weird includes the following:
But why isn't the time of the equinox the same each year?
The short answer is that the time and the date are imperfect human constructs that we use to keep track of our planet's movements.
The longer answer involves leap years.
"All of this is caused simply by the fact that the spin of the Earth doesn't divide evenly into one year," says Michelle Thaller, an astrophysicist turned space communications expert at NASA.
One spin of the Earth around its axis is one day. "The problem is we're happily spinning on our axis, and the Earth is going around the Sun, but one year — one complete path around the Sun — isn't an even, exact number of days. In fact, it's 365.24 [days]."
Wikipedia gives Earth's orbital period as 365.256363004 days and I have always remembered it to be 365.2564 which is the same value as Wikipedia, just to fewer digits.
So is a year really about 365.24 days or closer to 365.2564 days?
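For scale, the two quoted figures differ by roughly twenty minutes per year; a quick check:

```python
sidereal = 365.2564      # days, orbit relative to the fixed stars
tropical = 365.24217     # days, equinox to equinox
diff_minutes = (sidereal - tropical) * 24 * 60
# about 20.5 minutes per year
```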
Answer: Both are correct.
The Sidereal Year is the length of time it takes Earth to complete an orbit around the sun, relative to the fixed stars. It is 365.2564 days.
The Tropical Year is the length of time it takes for the Sun to complete a cycle around the Ecliptic and return to the position in the cycle of seasons; e.g. from Vernal Equinox to Vernal Equinox. It's about 365.24217 days, about 20 minutes shorter than the Sidereal year because of the precession of the equinoxes. This is the year that the Gregorian Calendar is attempting to emulate. | {
"domain": "astronomy.stackexchange",
"id": 4355,
"tags": "solar-system, earth, orbital-mechanics, time"
} |
Best coordinate system for Projectile motion | Question: What is the best coordinate system for describing projectile motion?
Rectangular coordinates, or n-t (normal and tangential) coordinates?
Answer: For a particle in a gravitational field treated as a constant? Surely Newton's equations of motion in the fixed rectangular frame:
$$\ddot{x}=0$$
$$\ddot{y}=-g$$
are as simple as it can get! | {
"domain": "physics.stackexchange",
"id": 22286,
"tags": "newtonian-mechanics, kinematics, projectile, coordinate-systems"
} |
Is there an oracle $A$ with $P^A = NP^A$, but $EXP^A \not= NEXP^A$? | Question: Is there an oracle $A$ with $P^A = NP^A$, but $EXP^A \not= NEXP^A$ ?
I found a proof with padding arguments (wikipedia), that
$$ P = NP \Rightarrow EXP = NEXP $$
If an oracle $A$ exists with $P=NP$ and $EXP\not=NEXP$ relative to $A$,
then the padding technique would be a proof method that circumvents the
relativization barrier.
Maybe padding arguments can help to solve problems like P vs. NP.
Answer: Yes, there is an oracle with $P^A = NP^A$ and $EXP^A \not= NEXP^A$.
It is an exercise in Odifreddi: Classical Recursion Theory Vol. II on page 253. An example is in
Kurtz: Sparse sets in NP - P: relativizations. SIAM J. Comput. 14 (1985) 113-119 | {
"domain": "cs.stackexchange",
"id": 21445,
"tags": "complexity-theory, p-vs-np, oracles"
} |
Most activated position on para-terphenyl for EAS | Question: Para-terphenyl: it doesn't look pretty with all those math-y numbers, but those are going to come helpful in answering my question!
A question asked me to tell the expected product when this reacts with $\ce{Br2/FeBr3}$. Now, I have done such questions with all types of fancy organic molecules, ranging from benzene to picric acid to para-nitroanisole, etc. I know that I am supposed to find the most activated position1 for EAS. But I have never done more than one phenyl ring at once.
So, in this question, I have drawn the eight resonating structures (without charge separation) of paraterphenyl manually. Here they are:
But, as you can see, every position seems to be activated identically. Each position has its own share of electrons through a pi bond, but it ends up being completely symmetric.
So, how do I find the most activated position in this compound?
1:Clarification: By "most activated position" I mean the position with maximum electron density. For example, in aniline, those are the ortho and para positions.
Answer:
So, how do I find the most activated position in this compound?
Start by drawing resonance structures of the various possible intermediates (sigma complexes) formed when $\ce{Br^{+}}$ attacks the different ring positions in p-terphenyl. Whichever intermediate has the most resonance structures is likely to be the most stable (lowest energy) intermediate. The lowest-energy intermediate will have the smallest activation energy and consequently lead to the kinetically favored product (barring steric effects).
I've drawn just a few of these resonance structures in the following diagram.
In the top row I've drawn a few of the possible resonance structures for electrophilic attack at the para position on a terminal phenyl ring. You can draw many more resonance structures delocalizing the charge around the various rings, but notice that it is possible to delocalize the positive charge over all 3 aromatic rings. In the second row I've drawn a few resonance structures showing charge delocalization when electrophilic attack occurs in the center ring. Importantly, in this case charge can only be delocalized over 2 of the aromatic rings.
This suggests that based on resonance effects, electrophilic attack on a terminal benzene ring would be preferred over attack at the central benzene ring.
Further, we would expect ortho and para attack on the terminal rings to be preferred since the phenyl substituent acts as an activating o-p director in electrophilic aromatic substitution. The rate of attack at the ortho position might also be decreased somewhat due to the steric effect of the adjacent phenyl (actually biphenyl) substituent.
In a paper by Shafig and Taylor (J. Chem. Soc., Perkin Trans. 2, 1978, 0, 1263-1267. DOI: 10.1039/P29780001263), they explore your question by studying the electrophilic protonation of tritiated p-terphenyl. After correcting the rates for statistical effects (e.g. there are twice as many ortho positions on the terminal rings as there are para positions), they find that:
the para position (your positions $10, 16$) reacts at a relative rate of $273$
the meta position (your positions $9, 11, 15, 17$) reacts at a relative rate of $1.54$
the ortho position (your positions $8, 12, 14, 18$) reacts at a relative rate of $176$, and finally,
the $4$ identical positions on the middle ring (your positions $2, 3, 5, 6$) react with a relative rate of $59.1$
These results support our predictions that
attack on the terminal rings will be preferred over attack on the central ring and
that attack at the ortho and para positions is preferred over attack at the meta position on the terminal rings. | {
"domain": "chemistry.stackexchange",
"id": 9680,
"tags": "organic-chemistry, aromatic-compounds, regioselectivity"
} |
In molecular docking, what is the difference between ligand and cofactor? | Question: In molecular docking aspect, what is the difference between the Ligand and Cofactor? Can a Cofactor be used like a ligand for docking with the target?
Answer: Ligand is an umbrella term for non-covalently bound anything. A cofactor, substrate, or allosteric regulator could be a ligand. This is the biochemical definition of a ligand; don't confuse it with the formal chemistry definition of ligand.
What's generally important about cofactors is that they're not protein, but they're essential to the protein's biological activity. They could be a metal ion, an organic complex, a vitamin, etc. Organic complexes that act as cofactors are also known as coenzymes. Cofactors that bind tightly or become covalently bound are called prosthetic groups. The protein without its required cofactors is called an apoenzyme, and with its cofactors it is called the holoenzyme.
Keep in mind to be on-topic for this site you need to do some research ahead of time and produce a focused, studied question. Wikipedia regularly provides the same answer verbatim! | {
"domain": "biology.stackexchange",
"id": 5151,
"tags": "bioinformatics"
} |
Invariance Textbook Problem: Clarification Needed | Question: I am currently reading Michael Soltys' Analysis of Algorithms (2nd Edition), and Problem 1.13 of the subsection titled Invariance reads:
Let $n$ be an odd number, and suppose that we have the set $\{1,2,\dots,2n\}$. We pick any two numbers $a$, $b$ in the set, delete them from the set, and replace them with $|{a-b}|$. Continue repeating this until just one number remains in the set; show that this remaining number must be odd.
However, I picked $n=3$ and performed the following.
I start with $\{1,2,3,4,5,6\}$.
I pick $1$ and $2$; I end up with $(\{1,2,3,4,5,6\}-\{1,2\})\cup\{|{1-2}|\}=\{1,3,4,5,6\}$.
I pick $1$ and $6$; I end up with $\{3,4,5\}$.
I pick $3$ and $5$; I end up with $\{2,4\}$.
And finally, I pick $2$ and $4$; I end up with $\{2\}$.
Clearly, $2$ is not an odd number.
Is there something I misunderstood in my attempt?
Answer: I think they don't mean to consider a "set" of numbers (where $\{3, 4, 5, 5\} = \{3, 4, 5\}$) but rather a list or multiset of numbers.
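A quick simulation of the list/multiset reading (Python; the random seed and number of trials are arbitrary choices) is consistent with the claimed result:

```python
import random

def reduce_multiset(nums, rng):
    # Repeatedly remove two entries a, b and append |a - b|
    nums = list(nums)
    while len(nums) > 1:
        i, j = rng.sample(range(len(nums)), 2)   # two distinct positions
        a, b = nums[i], nums[j]
        for idx in sorted((i, j), reverse=True):
            nums.pop(idx)
        nums.append(abs(a - b))
    return nums[0]

n = 3  # any odd n works: the initial sum n(2n+1) is odd
rng = random.Random(0)
results = [reduce_multiset(range(1, 2 * n + 1), rng) for _ in range(200)]
# the parity of the sum is invariant, so every final value is odd
```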
In that case the result is true: initially the sum of the entries of the list is $n(2n+1)$ which is odd, and at each step the parity of the sum of the entries is preserved. When you choose $a, b$, the sum changes from its old value $s$ to $s-a-b+ |b-a|$. If $a, b$ are even then $-a-b+|b-a|$ is clearly even, if they are both odd then $-a-b$ is even as is $|b-a|$, and if one is even and one odd then $-a-b$ is odd and $|b-a|$ is odd so $-a-b+|b-a|$ is odd+odd=even. | {
"domain": "cs.stackexchange",
"id": 21902,
"tags": "algorithms, proof-techniques, induction, loop-invariants"
} |
A package for communications between packages - v2 | Question: This is basically the registry pattern and a pub/sub event system.
Very simple and minimalist. Looking for general feedback.
/***************************************************************************************************
**COMMS
- provides registry and event system
- reduces dependencies
***************************************************************************************************/
// self used to hold client or server side global
(function (self) {
"use strict";
// holds (Pub)lic properties
var Pub = {},
// holds (Priv)ate properties
Priv = {},
// holds "imported" library properties
$A;
(function manageGlobal() {
// Priv.g holds the single global variable, used to hold all packages
Priv.g = '$A';
if (self[Priv.g] && self[Priv.g].pack && self[Priv.g].pack.utility) {
self[Priv.g].pack.comms = true;
$A = self[Priv.g];
} else {
throw new Error("comms requires utility module");
}
}());
Pub.Reg = (function () {
var publik = {},
register = {};
publik.get = function (key) {
return register[key];
};
publik.set = function (key, value) {
register[key] = value;
};
publik.setMany = function (o) {
$A.someKey(o, function (val, key) {
register[key] = val;
});
};
publik.getMany = function () {
return register;
};
return publik;
}());
Pub.Event = (function () {
var publik = {},
events = {};
publik.add = function (name, callback) {
if (!events[name]) {
events[name] = [];
}
events[name].push(callback);
};
publik.remove = function (name, callback) {
if (name && callback) {
delete events[name][callback];
} else if (name) {
delete events[name];
}
};
publik.trigger = function (name) {
if (events[name]) {
$A.someIndex(events[name], function (val) {
val();
});
}
};
return publik;
}());
self[Priv.g] = $A.extendSafe(self[Priv.g], Pub);
}(this));
Answer: It looks good overall IMO. I won't comment on the bootstrap code as I'm not familiar with your library, but I would change a few things. Inside the closures that you create, you could simply return the public object without defining publik; it looks a bit cleaner. Then I added some annotation and changed the control flow a bit:
Pub.Reg = (function() {
var register = {};
return {
get: function(key) {
return register[key];
},
set: function(key, value) {
register[key] = value;
},
setMany: function(o) {
$A.someKey(o, function (val, key) {
register[key] = val;
});
},
getMany: function() {
return register;
}
};
}());
Pub.Event = (function() {
var events = {};
return {
add: function(name, callback) {
// A bit more concise
(events[name] = events[name]||[]).push(callback);
},
remove: function(name, callback) {
// It's faster to set the value to `null`
// than using the `delete` operator
// but this depends on your use case;
// the properties would still show up
// in a `for..in` loop, but it's fine
// if you use it merely as a dictionary
if (!callback) {
events[name] = null;
return;
}
events[name][callback] = null;
},
trigger: function(name) {
if (events[name]) {
$A.someIndex(events[name], function (val) {
val();
});
}
}
};
}()); | {
"domain": "codereview.stackexchange",
"id": 5737,
"tags": "javascript, sorting"
} |
Non-interacting causal Green function in localized-orbitals representation | Question: Say I have a one-particle Hamiltonian
$$\hat{h}=\sum_{\alpha} \epsilon_\alpha \hat{n}_\alpha+\sum_{\alpha \neq \beta} t_{\alpha,\beta} \hat{c}^\dagger_{\alpha }\hat{c}_{\beta }$$
(I will ignore spin for simplicity, since it does not play a relevant part in my doubt), where the $\hat{c}_\mu,\hat{c}_\mu^\dagger$ are annihilation/creation operators of electrons in states $\vert \chi_\mu \rangle.$
Quick version of my question:
What is the causal Green function in frequency space for this problem?
Detailed question and problems:
I want to write down explicitly what the Green functions $g(\mu,t)= \frac{-i}{\hbar} \langle T [\hat{c}_\mu(t) \hat{c}_\mu^\dagger(t^\prime)]\rangle$ are in frequency space, $g(\mu,\omega),$ and I want to do it by two methods: 1) applying the direct definition and explicitly computing the Fourier transform, and 2) using that, if we call $h$ the matrix of elements $\langle \chi_\nu \vert \hat{h} \vert \chi_\mu \rangle,$ it holds that $g(\mu,\omega)=(\omega \ \text{Id}-h)^{-1}(\mu,\mu)$ (this holds for one-electron Hamiltonians and shows that, in that case, the notion of Green function in many-body theory coincides with the concept of Green function in pure mathematics).
1) First method: For simplicity, we assume $t^\prime =0$ and I will drop the hats from the operators, since there's no possible confusion. We have:
$\langle T [\hat{c}_\mu(t) \hat{c}_\mu^\dagger(0)]\rangle= \Theta(t)\langle c_\mu(t)c_\mu^\dagger\rangle-\Theta(-t) \langle c_\mu^\dagger c_\mu(t) \rangle.$
We insert in both terms the resolution of the identity, $\text{Id}=\sum_{m} \vert m \rangle\langle m \vert$, where $m$ is an index which runs over all eigenstates of the system.
On the other hand, recall that $c_\mu(t)=e^{i h t}c_\mu e^{-i h t}$ (setting $\hbar=1$) and that $e^{-i h t} \vert m \rangle = e^{-i E_m t} \vert m \rangle.$ Keeping this in mind, it is easy to get to:
$$\Theta(t)\sum_{m}e^{i(\omega_0-\omega_m)t}\langle 0 \vert c_\mu \vert m \rangle \langle m \vert c_\mu^\dagger \vert 0 \rangle-\Theta(-t)\sum_{m}e^{i(\omega_m-\omega_0)t}\langle m \vert c_\mu \vert 0 \rangle \langle 0 \vert c_\mu^\dagger \vert m \rangle.$$
Next, to Fourier transform this expression we notice that for the first term we need a convergence factor $i\eta$ in the denominator (so the pole lies in the lower half-plane), and for the second term we need to plug in a convergence factor $-i\eta$ (so the pole lies in the upper half-plane).
After computing those Fourier transforms and replugging the prefactor $-i$, we get:
$$\sum_ {m} \dfrac{\langle 0 \vert c_\mu \vert m \rangle \langle m \vert c_\mu^\dagger \vert 0 \rangle}{\omega-(\omega_m-\omega_0)+i\eta}+\sum_{m}\dfrac{\langle m \vert c_\mu \vert 0 \rangle \langle 0 \vert c_\mu^\dagger \vert m \rangle}{\omega-(\omega_0-\omega_m)-i\eta}.$$
If the ground state $\vert 0 \rangle$ is a state of $N$ electrons, in the first term the sum runs over eigenstates $\vert m \rangle$ of $N+1$ particles, and in the second term the sum runs over states with $N-1$ particles.
But I don't know how to proceed any further. Intuitively speaking, it seems that in the first term, the term in the denominator $\omega_m-\omega_0$ should be $\epsilon_\mu$ in case all the $t_{\alpha,\beta}=0,$ and perhaps $\epsilon_\mu+\sum_\beta t_{\mu,\beta}$ in the general case (but I'm not sure), since this should be the energy difference between $\vert 0 \rangle$ and $\vert m \rangle.$ For the denominator in the second term, the reasoning is similar and leads (or would lead, were it clear) to the same result, that's for sure. Also, the numerator in the first term is about "when an electron in state $\vert \chi_\mu \rangle$ is not in $\vert 0 \rangle$", and in the second term the numerator is about "when an electron in state $\vert \chi_\mu \rangle$ is in $\vert 0 \rangle$", so it seems that the final result should (probably) be something like (here we call $n_\mu=\langle 0 \vert \hat{n}_\mu \vert 0 \rangle$ the occupation number of state $\vert \mu \rangle$)
$$g(\mu,\omega)=\dfrac{1-n_\mu}{\omega-\epsilon_\mu+i\eta}+\dfrac{n_\mu}{\omega-\epsilon_\mu-i\eta},$$
or perhaps something like this but with $\epsilon_\mu \rightarrow \epsilon_\mu+\sum_{\beta} t_{\mu,\beta}$
2) Second method: A direct inversion $(\omega \ \text{Id}-h)^{-1}$ is not well-defined because of all the issues with the poles. We need to insert convergence factors. $((\omega+i\eta) \ \text{Id}-h)^{-1}$ and $((\omega-i\eta) \ \text{Id}-h)^{-1}$ are the retarded and advanced Green functions respectively, and by virtue of the identity $\frac{1}{x\pm iy}= P.V. \frac{1}{x} \mp i\pi \delta(x)$ as $y \rightarrow 0,$ it is clear that by adding up the two of them we should get the Green function:
$$g(\mu,\omega)=((\omega+i\eta) \ \text{Id}-h)^{-1}(\mu,\mu)+((\omega-i\eta) \ \text{Id}-h)^{-1}(\mu,\mu).$$
But, in this case, for the simplest case possible, a diagonal hamiltonian, where all the $t_{\alpha,\beta}$ are zero, so that $\hat{h}=\sum_{\alpha}\epsilon_\alpha \hat{n}_\alpha,$ the formula above trivially yields the result:
$$ g(\mu,\omega)= \dfrac{1}{\omega-\epsilon_\mu+i\eta}+\dfrac{1}{\omega-\epsilon_\mu-i\eta},$$
with no $n_\mu$, $1-n_\mu$ in the numerators at all, which makes little sense.
So, I have two arguments and they are giving different results. Also, I would like to understand how to actually complete, in a convincing way, the reasonings I presented, especially in the first method.
Answer: It is not true that $(\omega \ \text{Id}-h)^{-1}$ is the causal Green function. Define as above $g(\mu,t-t^\prime)= \frac{-i}{\hbar}\langle T[\hat{c}_\mu(t)\hat{c}_\mu^\dagger(t^\prime)]\rangle.$ Suppose the $\vert \phi_\lambda \rangle$ form an (orthonormal) basis of eigenstates of the one-electron Hamiltonian, $\hat{h}\vert \phi_\lambda \rangle = e_\lambda \vert \phi_\lambda \rangle.$ Define an operator $\hat{\xi}$ such that the elements of the basis $\vert \phi_\lambda \rangle$ are its eigenfunctions, with eigenvalue 1 or -1 according to whether $\vert \phi_\lambda \rangle$ is, respectively, an occupied or an unoccupied orbital (that is, whether or not it appears in the Slater determinant forming the ground state).
Then what holds is that the Fourier transform of this quantity is:
$$g(\mu,\omega)=\langle \mu \vert \hat{R}(\omega) \vert \mu \rangle,$$
where $\hat{R}(\omega)=(\omega \text{Id}-\hat{h}+i\eta\hat{\xi})^{-1},$ where the limit $\eta \rightarrow 0$ is implicit. That this is true can be seen by comparison with the formula obtained with the "many-body approach" in the first method, $\sum_ {m} \dfrac{\langle 0 \vert c_\mu \vert m \rangle \langle m \vert c_\mu^\dagger \vert 0 \rangle}{\omega-(\omega_m-\omega_0)+i\eta}+\sum_{m}\dfrac{\langle m \vert c_\mu \vert 0 \rangle \langle 0 \vert c_\mu^\dagger \vert m \rangle}{\omega-(\omega_0-\omega_m)-i\eta}$ (the "intuitive" reasoning after this formula in the first method is wrong unless $\vert \chi_\mu \rangle$ is itself one of the eigenstates of the Hamiltonian!).
The function $\hat{R}(\omega)$ can thus be represented as
$$\hat{R}(\omega)=\sum_{\lambda} \dfrac{\vert \phi_\lambda \rangle \langle \phi_\lambda \vert}{\omega-\omega_\lambda+i\eta\theta(\omega_F-\omega_\lambda)},$$
where $\omega_F$ is the Fermi level and $\theta(x)$ is a function taking the value 1 if $x>0$ and the value $-1$ for $x<0.$
Each element $\vert \phi_\lambda \rangle$ of the basis has an expansion in terms of our basis orbitals:
$$\vert \phi_\lambda \rangle = \sum_{\mu} c_\mu (\lambda)\vert \chi_\mu \rangle.$$
Then, we have, splitting the occupied and the unoccupied part:
$$g(\mu,\omega)=\sum_{\lambda \ \text{occupied}} \dfrac{\vert c_\mu(\lambda) \vert ^2}{\omega-\omega_\lambda+i\eta}+\sum_{\lambda \ \text{unoccupied}} \dfrac{\vert c_\mu(\lambda) \vert ^2}{\omega-\omega_\lambda-i\eta}.$$
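As a sanity check, this decomposition can be evaluated numerically on a toy two-orbital Hamiltonian (the parameters, filling, and frequency below are arbitrary assumptions):

```python
import numpy as np

# Toy two-orbital tight-binding Hamiltonian (assumed epsilon and t)
eps, t = 0.0, 1.0
h = np.array([[eps, t],
              [t, eps]])
E, V = np.linalg.eigh(h)     # E[lam] = omega_lambda, V[:, lam] = c_mu(lambda)

n_occ = 1                    # fill only the lowest level
eta = 1e-6
omega, mu = 0.5, 0

# occupied levels get +i*eta, unoccupied levels get -i*eta
g = 0j
for lam in range(len(E)):
    sign = +1 if lam < n_occ else -1
    g += abs(V[mu, lam]) ** 2 / (omega - E[lam] + sign * 1j * eta)

n_mu = sum(abs(V[mu, lam]) ** 2 for lam in range(n_occ))  # occupied weight
```

Here the eigenlevels are $\mp t$ and the local weights $|c_\mu(\lambda)|^2$ are each $1/2$, so $n_\mu = 1/2$, consistent with the sum rules stated below.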
Notice also that $\sum_{\lambda \ \text{occupied}} \vert c_\mu(\lambda )\vert^2 = n_\mu $ and $\sum_{\lambda \ \text{unoccupied}} \vert c_\mu(\lambda )\vert^2 = 1-n_\mu. $ If $\vert \chi_\mu \rangle$ is one of the eigenstates $\vert \phi_\lambda \rangle$ is clear that we recover the well-known result $\dfrac{n_\mu}{\omega-\omega_\mu+i\eta}+\dfrac{1-n_\mu}{\omega-\omega_\mu-i\eta}$ | {
"domain": "physics.stackexchange",
"id": 45511,
"tags": "condensed-matter, greens-functions"
} |
Can wires related to modems and internet emit harmful radiation? | Question: I am installing internet services at my house and need to run wires (Ethernet, modem, router related) along the lower part of the wall near some beds. Can people sleeping in the beds be exposed to harmful radiation (cancer-causing) from these wires?
Edit: my concern lies primarily with all the different sorts of wires involved. But I'd be curious about the router and modem too
Answer: People have had internet wires in their houses for 20+ years, and while not all cables run close to beds, certainly some do. Given the omnipresence of this kind of wiring, if there were harmful effects, we'd know by now. Never mind bedrooms: think of offices, where people are also exposed for long periods.
So we can say with near certainty that it's highly unlikely that this kind of wiring has adverse health effects over periods in the 20+ year range. Moreover, technology is improving, so those who were exposed in the early 2000s were probably more exposed than people are nowadays; the exposure risk is likely decreasing over time on a per-device basis. It could be that overall exposure has increased as modems and routers have become more prevalent, but again: it's not like there's a detectable increase in cancer, at least not an increase attributable to internet cables or gear.
This isn't like cigarettes, for instance, where the adverse effects were known in animals, and where the prevalence of lung cancer was easily detectable as a function of age and of the nicotine consumption of individuals.
There is currently no evidence to suggest that wifi radiation, or anything related to the wiring of internet gear, has adverse health effects. | {
"domain": "physics.stackexchange",
"id": 80677,
"tags": "electromagnetic-radiation, radiation"
} |
Correctness proof of the algoritm to generate permutations in lexicographic order | Question: The following algorithm generates the next permutation lexicographically after a given permutation. It changes the given permutation in-place.
Find the largest index k such that a[k] < a[k + 1]. If no such index exists, the permutation is the last permutation.
Find the largest index l greater than k such that a[k] < a[l].
Swap the value of a[k] with that of a[l].
Reverse the sequence from a[k + 1] up to and including the final element a[n].
(from https://en.wikipedia.org/wiki/Permutation#Generation_in_lexicographic_order)
I would like to know a (possible formal) proof.
Answer: One can try to come up with the algorithm by oneself (thereby proving its correctness) after incrementally understanding: (a) how to define one permutation as "larger" (lexicographically) than another, and (b) what the very next permutation of $P$ really "looks like" compared to $P$ (how/where it differs from $P$). Once we find out some properties, we may be able to write this algorithm ourselves.
Some outline:
define the "lexicographic order" of permutations $P_1$ and $P_2$. Say, $k$ is the very first index at which they differ. Then the "order" of their elements at index $k$ decides the "lexicographic order" of $P_1$ and $P_2$.
So what should be the smallest and largest permutation of array $a$ ?
Suppose array $a$ currently holds permutation $P$. Say $P'$ is the very next permutation of $P$ in the lexicographic-order, and they first differ at index $k$. Now, use the definition in (1) and the observation in (2) to find out what is the structure of elements in $P/P'$ at index $k$ and after it. You can argue that:
elements in $P$ after $k$ must form a decreasing-sequence (call it $S$).
element in $P$ at $k$ must be smaller than the largest element of $S$, which is $a[k+1]$.
due to above, we must have an element $a[l]$ in $S$ which is the smallest element in $S$ larger than $a[k]$. In $P'$, this element must appear at index $k$.
so the remaining (after $k$) elements of $P'$ are simply $S$ with $a[l]$ removed and $a[k]$ added.
these remaining (after $k$) elements of $P'$, must be in increasing order (for $P'$ to be the smallest possible, but larger than $P$). The 4th step in the posted algorithm is simply obtaining this increasing order by reversing the existing decreasing-sequence $S$ (now modified via swap, but continues to remain decreasing).
For further details, refer to section "Lexicographical Order" in this article (written by me). | {
"domain": "cs.stackexchange",
"id": 17601,
"tags": "correctness-proof, permutations"
} |
Could the magnetic field influence tectonic forces | Question: Paleomagnetism can be measured as the magnetic field at the time of rock forming is preserved in some minerals.
Could the paleomagnetism and the present magnetic field somehow influence the movements of tectonic plates?
Answer: The electromagnetic force and related field is a strong force at very small distances (it governs the way the proton and electron are held together in an atom) but is relatively weak over large distances. I don't see how that small force could act on the processes involved in the convection of the mantle. If memory serves, the iron in magmas preserves the orientation of the electromagnetic field on Earth at the time the magma cools to form rock. It leaves its imprint but has no effect on the process or origin of magmatism, much less the convection that drives plate tectonics | {
"domain": "earthscience.stackexchange",
"id": 1950,
"tags": "plate-tectonics, paleomagnetism"
} |
Use dynamic reconfiguration to set the parameter value in ROS | Question:
I am new to ROS and I want to use the dynamic reconfiguration technique to set a parameter (rectangle_height).
Searching the internet I came across the following method, but it's not working.
Problem: when I run rqt_reconfigure, my node (visual_analysis) does not show up, so I can't change the parameter.
-In my includes, I have the following:
#include <dynamic_reconfigure/DoubleParameter.h>
#include <dynamic_reconfigure/Reconfigure.h>
#include <dynamic_reconfigure/Config.h>
-In my main(), where my variable is declared, I have written the following:
int main( )
{
double rectangle_height;
///////////////////Dynamic Reconfig
dynamic_reconfigure::ReconfigureRequest srv_req;
dynamic_reconfigure::ReconfigureResponse srv_resp;
dynamic_reconfigure::DoubleParameter double_param;
dynamic_reconfigure::Config conf;
//Entering values using Dynamic Reconfig
double_param.name = "kurtana_pitch_joint";
double_param.value = rectangle_height;
conf.doubles.push_back(double_param);
srv_req.config = conf;
ros::service::call("/visual_analysis/set_parameters", srv_req, srv_resp);
return 0;
}
Originally posted by rosqueries on ROS Answers with karma: 1 on 2014-02-24
Post score: 0
Answer:
You have to add explicit support for dynamic_reconfigure in the node you're trying to configure.
Dynamic_reconfigure has a very nice tutorial describing how to do this: http://wiki.ros.org/dynamic_reconfigure/Tutorials/SettingUpDynamicReconfigureForANode%28cpp%29
Originally posted by ahendrix with karma: 47576 on 2014-02-24
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 17072,
"tags": "dynamic-reconfigure"
} |
The Identity of this Lepidopteron | Question: I snapped this picture on a walk in the Pacific Northwest last week, and, being new to entomology, cannot manage to identify whether this is a moth or a butterfly.
It seems to have the coloring of a moth, yet it was out around 4-5 PM, with the sun still shining brightly. Its antennae look like a butterfly's, but its face (particularly the eyes) seems to be that of a moth. The wings are folded up like a butterfly's (in another picture of the lepidopteran), but they look quite drab for a butterfly. Any help?
Answer: This looks like the Satyr Comma or Polygonia satyrus.
Characteristic of this species is a dark border near the tops of wings, fading near the bottom. They are common across the Western United States and Southern Canada.
For more information on this species, try this link and this link.
To differentiate between moths and butterflies you can look at the antennae. Butterflies often have a bulb at the end, whereas moth antennae are feathery or saw-edged.
"domain": "biology.stackexchange",
"id": 7166,
"tags": "species-identification, entomology, lepidoptera"
} |
Is this given ray diagram for a biconvex lens correct? | Question: The figure shows a biconvex lens with radii of curvature R1 and R2. In the ray diagram depicted, shouldn't the meeting point of the rays be at 2F, with the object placed at -2F, where F is the focal length from the lens maker's formula for thin lenses? The figure shows rays from -2F1 going to 2F1; isn't this wrong? Shouldn't it be F in place of F1, where 1/F = 1/F1 + 1/F2? Also, are F1 and F2 called the focal points of lenses having one radius of curvature as infinity? Is that the definition of the first and secondary focus (other than the point where parallel rays meet)?
Answer: F$_1$ and F$_2$ are the principal foci. They are the same distance, f (the focal length), on either side of the optical centre of the lens.
I can therefore make no sense of your equation "1/F= 1/F1 +1/F2".
The rays on the diagram are roughly correct. An object in a plane at distance 2f from the optical centre of the lens should form an image on the plane at a distance 2f from the optical centre but to the other side of the lens. You can show this by putting $u=2f$ into the equation
$$\frac 1u + \frac 1v = \frac 1f\ \ \ \ \ \ \ \text{[real-is-positive convention]}$$
or, equivalently, by putting $u=-2f$ into the equation
$$\frac 1u + \frac 1f = \frac 1v\ \ \ \ \ \ \ \text{[cartesian convention]}$$
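Plugging in $u=-2f$ (cartesian convention) confirms this numerically (a small sketch; the focal length value is an arbitrary assumption):

```python
def image_distance(u, f):
    # Cartesian convention: 1/v = 1/f + 1/u
    return 1.0 / (1.0 / f + 1.0 / u)

f = 10.0
v = image_distance(-2 * f, f)   # object on the plane at -2f
# v = +2f: the image forms on the plane at 2f on the far side of the lens
```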
I assume that 2F$_1$ and 2F$_2$ are supposed to be points on the lens axis a distance 2f from the optical centre of the lens, on either side of the lens. 2F$_2$ seems to be marked too close to the lens.
When I first looked at the diagram I was misled by the labels C$_1$ and C$_2$, expecting them to stand for the centres of curvature of the two faces of the lens. But these will not be at distance 2f from the centre of the lens, as on the diagram. [It would require a refractive index of 2 for the lens material, as you can show from the lens-makers' formula.] Clearly C$_1$ and C$_2$ have nothing to do with centres of curvature! | {
"domain": "physics.stackexchange",
"id": 85138,
"tags": "geometric-optics, lenses"
} |
simple question on torques on an ellipsoid | Question: I have an ellipsoid, and in the reference frame where the x-, y- and z-axes are aligned with its eigenvectors I compute the torque $\vec\tau$ acting on it.
And I'm asking myself how I can quantify the speed at which the torque makes the ellipsoid rotate about its major axis.
Is the solution as simple as computing $\dot{\vec{\omega}}$ using the well-known relation
$\vec\tau=I\dot{\vec{\omega}}$ ?
For instance, for an ellipsoid of constant density one has
$\vec\tau=\dfrac{M}{5}((b^2+c^2)\,\dot{\omega_x},(a^2+c^2)\,\dot{\omega_y},(a^2+b^2)\,\dot{\omega_z})$
Is there any better way to quantify how much the torque would let the ellipsoid rotate along its axes?
Answer: The above equation won't work. It's true that $\vec{\tau} = \dot{\vec{L}}$ in an inertial reference frame, such as the fixed "space frame". But in the space frame, $\dot{\vec{L}} \neq \mathbf{I} \dot{\vec{\omega}}$, since $\mathbf{I}$ changes with time as the body rotates.
The usual solution is to go to the "body frame" instead, which is a frame that rotates with the body. The relationship between torque and angular momentum that holds in the body frame is now
$$
\vec{\tau} = \dot{\vec{L}} + \vec{\omega} \times \vec{L}
$$
and since $\mathbf{I}$ is constant in the body frame, we have
$$
\vec{\tau} = \mathbf{I} \dot{\vec{\omega}} + \vec{\omega} \times (\mathbf{I} \vec{\omega} ).
$$
When this equation is split into its components along the principal axes of the body, the resulting three equations are often called Euler's equations.
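To make the procedure concrete, here is a minimal sketch (my own illustration; the mass, semi-axes, and torque values are made up) that integrates Euler's equations in the body frame for a uniform ellipsoid, using the principal moments from the question:

```python
import numpy as np

def euler_rhs(omega, I, tau):
    """Body-frame Euler equations: I * domega/dt = tau - omega x (I * omega).
    I holds the three principal moments, so I*omega is elementwise."""
    return (tau - np.cross(omega, I * omega)) / I

# Principal moments of a uniform ellipsoid with semi-axes a, b, c and mass M.
M, a, b, c = 1.0, 3.0, 2.0, 1.0
I = (M / 5.0) * np.array([b**2 + c**2, a**2 + c**2, a**2 + b**2])

omega = np.array([0.0, 0.0, 1.0])   # initial spin about a principal axis
tau = np.array([0.0, 0.0, 0.0])     # torque-free, as a sanity check
dt, steps = 1e-3, 1000

for _ in range(steps):               # one RK4 step per iteration
    k1 = euler_rhs(omega, I, tau)
    k2 = euler_rhs(omega + 0.5 * dt * k1, I, tau)
    k3 = euler_rhs(omega + 0.5 * dt * k2, I, tau)
    k4 = euler_rhs(omega + dt * k3, I, tau)
    omega = omega + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# With zero torque, steady rotation about a principal axis persists:
print(omega)   # stays [0, 0, 1]
```

With a nonzero `tau` the same loop gives $\vec{\omega}(t)$ in the body frame; the nonlinearity the answer mentions lives entirely in the $\vec{\omega} \times (\mathbf{I}\vec{\omega})$ term.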
So if you know the torque in the body frame, you can in principle solve for $\vec{\omega}$ in the body frame as a function of time. The resulting equations are non-linear and hard to solve in general, although various approximation methods can be used. Alternately, if you want to find the torque required to keep the body rotating about a given axis, you can set $\vec{\omega} = $ const. and $\dot{\vec{\omega}} = 0$, and see what $\vec{\tau}$ is required (in the body frame) for this to be true. | {
"domain": "physics.stackexchange",
"id": 22446,
"tags": "newtonian-mechanics, classical-mechanics"
} |
Why does this recurrence give O(n) time? | Question: Given the following recurrence: $$T(n) = T(n/2) + O(n)$$ find the final time complexity.
My first thought is $O(n\log n)$, since the $O(n)$ term will appear at most $\log n$ times.
However, if we adopt the following analysis and let $n=2^m$, then we have:
$$T(2^m) = T(2^{m-1}) + k\,2^m = T(2^{m-2}) + k(2^m + 2^{m-1}) = \cdots$$
Which we can then condense to have the cost become:
$$2^m + 2^{m-1} + \cdots + 1 = 2^{m+1} - 1$$
And so since the cost is $O(2^m)$, we have our $O(n)$ time as required.
Is the analysis valid? I ask because I have very often seen proofs using recurrences of the form $T(n) = T(n/2) + ...$, and they all similarly concluded that the relationship is applied $\log n$ times.
Which is correct?
Answer: Your second analysis is correct, and $T(n) \in \Theta(n)$. You can also use the Master Theorem to verify this.
The reason the first "naive" analysis fails is that the work is not $O(n)$ at each step; it is $O(n/2^{i})$, where $i$ is how far down the recursion you've gone.
Ignoring the constant multipliers for the moment, this gives
$$
\sum_{i=0}^{\log n} \frac{n}{2^{i}} = n \sum_{i=0}^{\log n} \frac{1}{2^{i}} = n \left( 2 - \frac{1}{n} \right) = 2n - 1
$$
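You can also sanity-check the linear bound numerically (a quick sketch of mine, not part of the original answer): compute $T(n) = T(\lfloor n/2 \rfloor) + n$ with $T(1) = 1$ and compare against $2n$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Recurrence T(n) = T(n // 2) + n with base case T(1) = 1."""
    if n <= 1:
        return 1
    return T(n // 2) + n

# The total cost never exceeds 2n, consistent with T(n) in Theta(n):
for n in [2**k for k in range(1, 20)]:
    assert T(n) <= 2 * n
print(T(1024))   # 2047, i.e. 2n - 1 when n is a power of two
```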
Your second analysis basically does the same thing, just using variable substitution instead. | {
"domain": "cs.stackexchange",
"id": 11183,
"tags": "algorithms, algorithm-analysis"
} |
SQL Query to get parent parts of materials using Recursive CTE | Question: The intent of the query is to filter out certain materials to get to a list that will then be run through a recursive CTE to drill up through the BOM to the 0 level assembly.
P_Parts_CTE is filtering to get just purchased parts
P_Parts_Inv_CTE is getting the on hand inventory of the purchased parts
PartDtl_Sum_CTE is taking the same list of parts from the first CTE and summing all Supply and Demand records for each part
PartDtl_CTE is using the results of the prior two CTEs to take the on hand inventory, add in the supply, and subtract the demand to get to a projected balance
Parts_Neg_CTE is filtering to get the parts that have a negative balance
Reverse_Recursive_BOM_CTE takes the list of parts from prior CTE and explodes them upwards in the BOM to the 0 level
It works, but when I ran it last night I got tired of waiting and killed it after 35 minutes. I'm not sure whether I did something wrong or the query is just that complex. I would love to do this all in one query, but if there is no way to improve it, I will create a table and populate it with the exploded BOM from a stored procedure. Thanks in advance
Query
with P_Parts_CTE (Company, PartNum) --Identify purchased parts
as
(
select Company, PartNum
from dbo.Part
where Company = 'Comp' and TypeCode = 'P'
),
P_Part_Inv_CTE (Company, PartNum, OnHandQty) --Get on-hand inventory for parts
as
(
select a.Company, a.PartNum, Sum(OnHandQty)
from P_Parts_CTE as a
left outer join dbo.PartWhse as b
on a.Company = b.Company and a.PartNum = b.PartNum
group by a.Company, a.PartNum
),
PartDtl_Sum_CTE (Company, PartNum, Supply, Demand) --Get current supply & demand for parts
as
(
select c.Company, c.PartNum, Sum(case when d.RequirementFlag = 0 then d.Quantity else 0 end) as Supply, Sum(case when d.RequirementFlag = 1 then d.Quantity else 0 end) as Demand
from P_Parts_CTE as c
left outer join dbo.PartDtl as d
on c.Company = d.Company and c.PartNum = d.PartNum
group by c.Company, c.PartNum
),
PartDtl_CTE (Company, PartNum, Balance) --Find out the balance of inventory after supply and demand are factored in
as
(
select e.Company, e.PartNum, (e.OnHandQty + f.Supply - f.Demand) as Balance
from P_Part_Inv_CTE as e
left outer join PartDtl_Sum_CTE as f
on e.Company = f.Company and e.PartNum = f.PartNum
group by e.Company, e.PartNum, e.OnHandQty, f.Supply, f.Demand
),
Parts_Neg_CTE (Company, PartNum) --Get list of parts where balance is negative
as
(
select Company, PartNum
from PartDtl_CTE
where Balance < 0
),
Reverse_Recursive_BOM_CTE (Company, PartNum, [Level], MtlPartNUm) --As these are mainly materials that go into finished goods, blow out the BOM upwards
as
(
select h.Company, h.PartNum, 0 as [Level], h.MtlPartNum
from Parts_Neg_CTE as g
inner join dbo.PartMtl as h
on g.Company = h.Company and g.PartNum = h.PartNum
union all
select i.Company, i.PartNum, [Level] - 1, i.MtlPartNum
from dbo.PartMtl as i
inner join Reverse_Recursive_BOM_CTE as j
on i.MtlPartNum = j.PartNum
)
select *
from Reverse_Recursive_BOM_CTE
Execution Plan
<?xml version="1.0" encoding="utf-16"?>
<ShowPlanXML xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" Version="1.1" Build="10.50.2500.0" xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan">
<BatchSequence>
<Batch>
<Statements>
<StmtSimple StatementCompId="1" StatementEstRows="43701.6" StatementId="1" StatementOptmLevel="FULL" StatementSubTreeCost="61.7944" StatementText="with P_Parts_CTE (Company, PartNum)
as
(
 select Company, PartNum
 from dbo.Part
 where Company = 'Bruce' and TypeCode = 'P'
),
P_Part_Inv_CTE (Company, PartNum, OnHandQty)
as
(
 select a.Company, a.PartNum, Sum(OnHandQty)
 from P_Parts_CTE as a
 left outer join dbo.PartWhse as b
 on a.Company = b.Company and a.PartNum = b.PartNum
 group by a.Company, a.PartNum
),
PartDtl_Sum_CTE (Company, PartNum, Supply, Demand)
as
(
 select c.Company, c.PartNum, Sum(case when d.RequirementFlag = 0 then d.Quantity else 0 end) as Supply, Sum(case when d.RequirementFlag = 1 then d.Quantity else 0 end) as Demand
 from P_Parts_CTE as c
 left outer join dbo.PartDtl as d
 on c.Company = d.Company and c.PartNum = d.PartNum
 group by c.Company, c.PartNum
),
PartDtl_CTE (Company, PartNum, Balance)
as
(
 select e.Company, e.PartNum, (e.OnHandQty + f.Supply - f.Demand) as Balance
 from P_Part_Inv_CTE as e
 left outer join PartDtl_Sum_CTE as f
 on e.Company = f.Company and e.PartNum = f.PartNum
 group by e.Company, e.PartNum, e.OnHandQty, f.Supply, f.Demand
),
Parts_Neg_CTE (Company, PartNum)
as
(
 select Company, PartNum
 from PartDtl_CTE
 where Balance < 0
),
Reverse_Recursive_BOM_CTE (Company, PartNum, [Level], MtlPartNUm)
as
(
 select h.Company, h.PartNum, 0 as [Level], h.MtlPartNum
 from Parts_Neg_CTE as g
 inner join dbo.PartMtl as h
 on g.Company = h.Company and g.PartNum = h.PartNum
 union all
 select i.Company, i.PartNum, [Level] - 1, i.MtlPartNum
 from dbo.PartMtl as i
 inner join Reverse_Recursive_BOM_CTE as j
 on i.MtlPartNum = j.PartNum
)
select *
from Reverse_Recursive_BOM_CTE" StatementType="SELECT" QueryHash="0x7FDBDBEEA130262E" QueryPlanHash="0x9402E43E7FD31267">
<StatementSetOptions ANSI_NULLS="true" ANSI_PADDING="true" ANSI_WARNINGS="true" ARITHABORT="true" CONCAT_NULL_YIELDS_NULL="true" NUMERIC_ROUNDABORT="false" QUOTED_IDENTIFIER="true" />
<QueryPlan CachedPlanSize="168" CompileTime="100" CompileCPU="100" CompileMemory="5680">
<RelOp AvgRowSize="73" EstimateCPU="0.000218508" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="43701.6" LogicalOp="Lazy Spool" NodeId="0" Parallel="false" PhysicalOp="Index Spool" EstimatedTotalSubtreeCost="61.7944">
<OutputList>
<ColumnReference Column="Expr1066" />
<ColumnReference Column="Recr1028" />
<ColumnReference Column="Recr1029" />
<ColumnReference Column="Recr1030" />
<ColumnReference Column="Recr1031" />
</OutputList>
<Spool Stack="true">
<RelOp AvgRowSize="73" EstimateCPU="4.37016E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="43701.6" LogicalOp="Concatenation" NodeId="1" Parallel="false" PhysicalOp="Concatenation" EstimatedTotalSubtreeCost="61.7937">
<OutputList>
<ColumnReference Column="Expr1066" />
<ColumnReference Column="Recr1028" />
<ColumnReference Column="Recr1029" />
<ColumnReference Column="Recr1030" />
<ColumnReference Column="Recr1031" />
</OutputList>
<Concat>
<DefinedValues>
<DefinedValue>
<ColumnReference Column="Expr1066" />
<ColumnReference Column="Expr1063" />
<ColumnReference Column="Expr1065" />
</DefinedValue>
<DefinedValue>
<ColumnReference Column="Recr1028" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="Company" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="Company" />
</DefinedValue>
<DefinedValue>
<ColumnReference Column="Recr1029" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="PartNum" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="PartNum" />
</DefinedValue>
<DefinedValue>
<ColumnReference Column="Recr1030" />
<ColumnReference Column="Expr1020" />
<ColumnReference Column="Expr1027" />
</DefinedValue>
<DefinedValue>
<ColumnReference Column="Recr1031" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="MtlPartNum" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="MtlPartNum" />
</DefinedValue>
</DefinedValues>
<RelOp AvgRowSize="73" EstimateCPU="0.000437016" EstimateIO="0" EstimateRebinds="43701.6" EstimateRewinds="0" EstimateRows="1" LogicalOp="Compute Scalar" NodeId="2" Parallel="false" PhysicalOp="Compute Scalar" EstimatedTotalSubtreeCost="0.000437016">
<OutputList>
<ColumnReference Column="Expr1063" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="Company" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="PartNum" />
<ColumnReference Column="Expr1020" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="MtlPartNum" />
</OutputList>
<ComputeScalar>
<DefinedValues>
<DefinedValue>
<ColumnReference Column="Expr1063" />
<ScalarOperator ScalarString="(0)">
<Const ConstValue="(0)" />
</ScalarOperator>
</DefinedValue>
</DefinedValues>
<RelOp AvgRowSize="40" EstimateCPU="0.00436224" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="43622.4" LogicalOp="Compute Scalar" NodeId="3" Parallel="false" PhysicalOp="Compute Scalar" EstimatedTotalSubtreeCost="22.3494">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="Company" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="PartNum" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="MtlPartNum" />
<ColumnReference Column="Expr1020" />
</OutputList>
<ComputeScalar>
<DefinedValues>
<DefinedValue>
<ColumnReference Column="Expr1020" />
<ScalarOperator ScalarString="(0)">
<Const ConstValue="(0)" />
</ScalarOperator>
</DefinedValue>
</DefinedValues>
<RelOp AvgRowSize="36" EstimateCPU="1.35964" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="43622.4" LogicalOp="Inner Join" NodeId="4" Parallel="false" PhysicalOp="Hash Match" EstimatedTotalSubtreeCost="22.3451">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="Company" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="PartNum" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="MtlPartNum" />
</OutputList>
<MemoryFractions Input="0.140598" Output="1" />
<Hash>
<DefinedValues />
<HashKeysBuild>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</HashKeysBuild>
<HashKeysProbe>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="PartNum" />
</HashKeysProbe>
<ProbeResidual>
<ScalarOperator ScalarString="[Epicor905].[dbo].[Part].[PartNum]=[Epicor905].[dbo].[PartMtl].[PartNum] as [h].[PartNum]">
<Compare CompareOp="EQ">
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="PartNum" />
</Identifier>
</ScalarOperator>
</Compare>
</ScalarOperator>
</ProbeResidual>
<RelOp AvgRowSize="20" EstimateCPU="1.02806" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="5321.06" LogicalOp="Inner Join" NodeId="5" Parallel="false" PhysicalOp="Hash Match" EstimatedTotalSubtreeCost="19.6618">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</OutputList>
<MemoryFractions Input="0.384615" Output="0.528321" />
<Hash>
<DefinedValues />
<HashKeysBuild>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</HashKeysBuild>
<HashKeysProbe>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</HashKeysProbe>
<ProbeResidual>
<ScalarOperator ScalarString="[Epicor905].[dbo].[Part].[PartNum]=[Epicor905].[dbo].[Part].[PartNum] AND (([Expr1005]+[Expr1011])-[Expr1012])<(0.00000000)">
<Logical Operation="AND">
<ScalarOperator>
<Compare CompareOp="EQ">
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</Identifier>
</ScalarOperator>
</Compare>
</ScalarOperator>
<ScalarOperator>
<Compare CompareOp="LT">
<ScalarOperator>
<Arithmetic Operation="SUB">
<ScalarOperator>
<Arithmetic Operation="ADD">
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1005" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1011" />
</Identifier>
</ScalarOperator>
</Arithmetic>
</ScalarOperator>
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1012" />
</Identifier>
</ScalarOperator>
</Arithmetic>
</ScalarOperator>
<ScalarOperator>
<Const ConstValue="(0.00000000)" />
</ScalarOperator>
</Compare>
</ScalarOperator>
</Logical>
</ScalarOperator>
</ProbeResidual>
<RelOp AvgRowSize="37" EstimateCPU="0.0235022" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="20811.9" LogicalOp="Compute Scalar" NodeId="6" Parallel="false" PhysicalOp="Compute Scalar" EstimatedTotalSubtreeCost="5.86434">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
<ColumnReference Column="Expr1005" />
</OutputList>
<ComputeScalar>
<DefinedValues>
<DefinedValue>
<ColumnReference Column="Expr1005" />
<ScalarOperator ScalarString="CASE WHEN [Expr1057]=(0) THEN NULL ELSE [Expr1058] END">
<IF>
<Condition>
<ScalarOperator>
<Compare CompareOp="EQ">
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1057" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Const ConstValue="(0)" />
</ScalarOperator>
</Compare>
</ScalarOperator>
</Condition>
<Then>
<ScalarOperator>
<Const ConstValue="NULL" />
</ScalarOperator>
</Then>
<Else>
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1058" />
</Identifier>
</ScalarOperator>
</Else>
</IF>
</ScalarOperator>
</DefinedValue>
</DefinedValues>
<RelOp AvgRowSize="37" EstimateCPU="0.0235022" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="20811.9" LogicalOp="Aggregate" NodeId="7" Parallel="false" PhysicalOp="Stream Aggregate" EstimatedTotalSubtreeCost="5.86434">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
<ColumnReference Column="Expr1057" />
<ColumnReference Column="Expr1058" />
</OutputList>
<StreamAggregate>
<DefinedValues>
<DefinedValue>
<ColumnReference Column="Expr1057" />
<ScalarOperator ScalarString="COUNT_BIG([Epicor905].[dbo].[PartWhse].[OnHandQty] as [b].[OnHandQty])">
<Aggregate AggType="COUNT_BIG" Distinct="false">
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartWhse]" Alias="[b]" Column="OnHandQty" />
</Identifier>
</ScalarOperator>
</Aggregate>
</ScalarOperator>
</DefinedValue>
<DefinedValue>
<ColumnReference Column="Expr1058" />
<ScalarOperator ScalarString="SUM([Epicor905].[dbo].[PartWhse].[OnHandQty] as [b].[OnHandQty])">
<Aggregate AggType="SUM" Distinct="false">
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartWhse]" Alias="[b]" Column="OnHandQty" />
</Identifier>
</ScalarOperator>
</Aggregate>
</ScalarOperator>
</DefinedValue>
</DefinedValues>
<GroupBy>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</GroupBy>
<RelOp AvgRowSize="33" EstimateCPU="0.158188" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="21827.1" LogicalOp="Inner Join" NodeId="8" Parallel="false" PhysicalOp="Merge Join" EstimatedTotalSubtreeCost="5.84084">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartWhse]" Alias="[b]" Column="OnHandQty" />
</OutputList>
<Merge ManyToMany="false">
<InnerSideJoinColumns>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartWhse]" Alias="[b]" Column="PartNum" />
</InnerSideJoinColumns>
<OuterSideJoinColumns>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</OuterSideJoinColumns>
<Residual>
<ScalarOperator ScalarString="[Epicor905].[dbo].[Part].[PartNum]=[Epicor905].[dbo].[PartWhse].[PartNum] as [b].[PartNum]">
<Compare CompareOp="EQ">
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartWhse]" Alias="[b]" Column="PartNum" />
</Identifier>
</ScalarOperator>
</Compare>
</ScalarOperator>
</Residual>
<RelOp AvgRowSize="20" EstimateCPU="0.0271213" EstimateIO="0.114603" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="24513" LogicalOp="Index Seek" NodeId="9" Parallel="false" PhysicalOp="Index Seek" EstimatedTotalSubtreeCost="0.141724" TableCardinality="98055">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</OutputList>
<IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" ForceSeek="false" ForceScan="false" NoExpandHint="false">
<DefinedValues>
<DefinedValue>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</DefinedValue>
</DefinedValues>
<Object Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Index="[TypePart]" TableReferenceId="1" IndexKind="NonClustered" />
<SeekPredicates>
<SeekPredicateNew>
<SeekKeys>
<Prefix ScanType="EQ">
<RangeColumns>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="Company" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="TypeCode" />
</RangeColumns>
<RangeExpressions>
<ScalarOperator ScalarString="'Bruce'">
<Const ConstValue="'Bruce'" />
</ScalarOperator>
<ScalarOperator ScalarString="'P'">
<Const ConstValue="'P'" />
</ScalarOperator>
</RangeExpressions>
</Prefix>
</SeekKeys>
</SeekPredicateNew>
</SeekPredicates>
</IndexScan>
</RelOp>
<RelOp AvgRowSize="33" EstimateCPU="0.0519766" EstimateIO="5.48894" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="47108.7" LogicalOp="Clustered Index Seek" NodeId="10" Parallel="false" PhysicalOp="Clustered Index Seek" EstimatedTotalSubtreeCost="5.54092" TableCardinality="135573">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartWhse]" Alias="[b]" Column="PartNum" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartWhse]" Alias="[b]" Column="OnHandQty" />
</OutputList>
<IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" ForceSeek="false" ForceScan="false" NoExpandHint="false">
<DefinedValues>
<DefinedValue>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartWhse]" Alias="[b]" Column="PartNum" />
</DefinedValue>
<DefinedValue>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartWhse]" Alias="[b]" Column="OnHandQty" />
</DefinedValue>
</DefinedValues>
<Object Database="[Epicor905]" Schema="[dbo]" Table="[PartWhse]" Index="[PartNumWarehouseCode]" Alias="[b]" IndexKind="Clustered" />
<SeekPredicates>
<SeekPredicateNew>
<SeekKeys>
<Prefix ScanType="EQ">
<RangeColumns>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartWhse]" Alias="[b]" Column="Company" />
</RangeColumns>
<RangeExpressions>
<ScalarOperator ScalarString="'Bruce'">
<Const ConstValue="'Bruce'" />
</ScalarOperator>
</RangeExpressions>
</Prefix>
</SeekKeys>
</SeekPredicateNew>
</SeekPredicates>
</IndexScan>
</RelOp>
</Merge>
</RelOp>
</StreamAggregate>
</RelOp>
</ComputeScalar>
</RelOp>
<RelOp AvgRowSize="54" EstimateCPU="1.7495" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="23817.3" LogicalOp="Compute Scalar" NodeId="20" Parallel="false" PhysicalOp="Compute Scalar" EstimatedTotalSubtreeCost="12.7694">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
<ColumnReference Column="Expr1011" />
<ColumnReference Column="Expr1012" />
</OutputList>
<ComputeScalar>
<DefinedValues>
<DefinedValue>
<ColumnReference Column="Expr1011" />
<ScalarOperator ScalarString="CASE WHEN [Expr1059]=(0) THEN NULL ELSE [Expr1060] END">
<IF>
<Condition>
<ScalarOperator>
<Compare CompareOp="EQ">
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1059" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Const ConstValue="(0)" />
</ScalarOperator>
</Compare>
</ScalarOperator>
</Condition>
<Then>
<ScalarOperator>
<Const ConstValue="NULL" />
</ScalarOperator>
</Then>
<Else>
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1060" />
</Identifier>
</ScalarOperator>
</Else>
</IF>
</ScalarOperator>
</DefinedValue>
<DefinedValue>
<ColumnReference Column="Expr1012" />
<ScalarOperator ScalarString="CASE WHEN [Expr1061]=(0) THEN NULL ELSE [Expr1062] END">
<IF>
<Condition>
<ScalarOperator>
<Compare CompareOp="EQ">
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1061" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Const ConstValue="(0)" />
</ScalarOperator>
</Compare>
</ScalarOperator>
</Condition>
<Then>
<ScalarOperator>
<Const ConstValue="NULL" />
</ScalarOperator>
</Then>
<Else>
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1062" />
</Identifier>
</ScalarOperator>
</Else>
</IF>
</ScalarOperator>
</DefinedValue>
</DefinedValues>
<RelOp AvgRowSize="54" EstimateCPU="1.7495" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="23817.3" LogicalOp="Aggregate" NodeId="21" Parallel="false" PhysicalOp="Hash Match" EstimatedTotalSubtreeCost="12.7694">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
<ColumnReference Column="Expr1059" />
<ColumnReference Column="Expr1060" />
<ColumnReference Column="Expr1061" />
<ColumnReference Column="Expr1062" />
</OutputList>
<MemoryFractions Input="0.241026" Output="0.331081" />
<Hash>
<DefinedValues>
<DefinedValue>
<ColumnReference Column="Expr1059" />
<ScalarOperator ScalarString="COUNT_BIG([Expr1032])">
<Aggregate AggType="COUNT_BIG" Distinct="false">
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1032" />
</Identifier>
</ScalarOperator>
</Aggregate>
</ScalarOperator>
</DefinedValue>
<DefinedValue>
<ColumnReference Column="Expr1060" />
<ScalarOperator ScalarString="SUM([Expr1032])">
<Aggregate AggType="SUM" Distinct="false">
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1032" />
</Identifier>
</ScalarOperator>
</Aggregate>
</ScalarOperator>
</DefinedValue>
<DefinedValue>
<ColumnReference Column="Expr1061" />
<ScalarOperator ScalarString="COUNT_BIG([Expr1033])">
<Aggregate AggType="COUNT_BIG" Distinct="false">
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1033" />
</Identifier>
</ScalarOperator>
</Aggregate>
</ScalarOperator>
</DefinedValue>
<DefinedValue>
<ColumnReference Column="Expr1062" />
<ScalarOperator ScalarString="SUM([Expr1033])">
<Aggregate AggType="SUM" Distinct="false">
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1033" />
</Identifier>
</ScalarOperator>
</Aggregate>
</ScalarOperator>
</DefinedValue>
</DefinedValues>
<HashKeysBuild>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</HashKeysBuild>
<BuildResidual>
<ScalarOperator ScalarString="[Epicor905].[dbo].[Part].[PartNum] = [Epicor905].[dbo].[Part].[PartNum]">
<Compare CompareOp="IS">
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</Identifier>
</ScalarOperator>
</Compare>
</ScalarOperator>
</BuildResidual>
<RelOp AvgRowSize="46" EstimateCPU="0.00908534" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="90853.4" LogicalOp="Compute Scalar" NodeId="22" Parallel="false" PhysicalOp="Compute Scalar" EstimatedTotalSubtreeCost="11.0199">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
<ColumnReference Column="Expr1032" />
<ColumnReference Column="Expr1033" />
</OutputList>
<ComputeScalar>
<DefinedValues>
<DefinedValue>
<ColumnReference Column="Expr1032" />
<ScalarOperator ScalarString="CASE WHEN [Epicor905].[dbo].[PartDtl].[RequirementFlag] as [d].[RequirementFlag]=(0) THEN [Epicor905].[dbo].[PartDtl].[Quantity] as [d].[Quantity] ELSE (0.00000000) END">
<IF>
<Condition>
<ScalarOperator>
<Compare CompareOp="EQ">
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="RequirementFlag" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Const ConstValue="(0)" />
</ScalarOperator>
</Compare>
</ScalarOperator>
</Condition>
<Then>
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="Quantity" />
</Identifier>
</ScalarOperator>
</Then>
<Else>
<ScalarOperator>
<Const ConstValue="(0.00000000)" />
</ScalarOperator>
</Else>
</IF>
</ScalarOperator>
</DefinedValue>
<DefinedValue>
<ColumnReference Column="Expr1033" />
<ScalarOperator ScalarString="CASE WHEN [Epicor905].[dbo].[PartDtl].[RequirementFlag] as [d].[RequirementFlag]=(1) THEN [Epicor905].[dbo].[PartDtl].[Quantity] as [d].[Quantity] ELSE (0.00000000) END">
<IF>
<Condition>
<ScalarOperator>
<Compare CompareOp="EQ">
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="RequirementFlag" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Const ConstValue="(1)" />
</ScalarOperator>
</Compare>
</ScalarOperator>
</Condition>
<Then>
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="Quantity" />
</Identifier>
</ScalarOperator>
</Then>
<Else>
<ScalarOperator>
<Const ConstValue="(0.00000000)" />
</ScalarOperator>
</Else>
</IF>
</ScalarOperator>
</DefinedValue>
</DefinedValues>
<RelOp AvgRowSize="34" EstimateCPU="1.30515" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="90853.4" LogicalOp="Left Outer Join" NodeId="23" Parallel="false" PhysicalOp="Hash Match" EstimatedTotalSubtreeCost="11.0108">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="RequirementFlag" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="Quantity" />
</OutputList>
<MemoryFractions Input="0.615385" Output="0.374359" />
<Hash>
<DefinedValues />
<HashKeysBuild>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</HashKeysBuild>
<HashKeysProbe>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="PartNum" />
</HashKeysProbe>
<ProbeResidual>
<ScalarOperator ScalarString="[Epicor905].[dbo].[Part].[PartNum]=[Epicor905].[dbo].[PartDtl].[PartNum] as [d].[PartNum]">
<Compare CompareOp="EQ">
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Identifier>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="PartNum" />
</Identifier>
</ScalarOperator>
</Compare>
</ScalarOperator>
</ProbeResidual>
<RelOp AvgRowSize="20" EstimateCPU="0.0271213" EstimateIO="0.114603" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="24513" LogicalOp="Index Seek" NodeId="24" Parallel="false" PhysicalOp="Index Seek" EstimatedTotalSubtreeCost="0.141724" TableCardinality="98055">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</OutputList>
<IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" ForceSeek="false" ForceScan="false" NoExpandHint="false">
<DefinedValues>
<DefinedValue>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="PartNum" />
</DefinedValue>
</DefinedValues>
<Object Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Index="[TypePart]" TableReferenceId="2" IndexKind="NonClustered" />
<SeekPredicates>
<SeekPredicateNew>
<SeekKeys>
<Prefix ScanType="EQ">
<RangeColumns>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="Company" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[Part]" Column="TypeCode" />
</RangeColumns>
<RangeExpressions>
<ScalarOperator ScalarString="'Bruce'">
<Const ConstValue="'Bruce'" />
</ScalarOperator>
<ScalarOperator ScalarString="'P'">
<Const ConstValue="'P'" />
</ScalarOperator>
</RangeExpressions>
</Prefix>
</SeekKeys>
</SeekPredicateNew>
</SeekPredicates>
</IndexScan>
</RelOp>
<RelOp AvgRowSize="33" EstimateCPU="0.0750727" EstimateIO="9.48882" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="68105.2" LogicalOp="Clustered Index Seek" NodeId="25" Parallel="false" PhysicalOp="Clustered Index Seek" EstimatedTotalSubtreeCost="9.56389" TableCardinality="243402">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="PartNum" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="RequirementFlag" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="Quantity" />
</OutputList>
<IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" ForceSeek="false" ForceScan="false" NoExpandHint="false">
<DefinedValues>
<DefinedValue>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="PartNum" />
</DefinedValue>
<DefinedValue>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="RequirementFlag" />
</DefinedValue>
<DefinedValue>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="Quantity" />
</DefinedValue>
</DefinedValues>
<Object Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Index="[TypPartDate]" Alias="[d]" IndexKind="Clustered" />
<SeekPredicates>
<SeekPredicateNew>
<SeekKeys>
<Prefix ScanType="EQ">
<RangeColumns>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartDtl]" Alias="[d]" Column="Company" />
</RangeColumns>
<RangeExpressions>
<ScalarOperator ScalarString="'Bruce'">
<Const ConstValue="'Bruce'" />
</ScalarOperator>
</RangeExpressions>
</Prefix>
</SeekKeys>
</SeekPredicateNew>
</SeekPredicates>
</IndexScan>
</RelOp>
</Hash>
</RelOp>
</ComputeScalar>
</RelOp>
</Hash>
</RelOp>
</ComputeScalar>
</RelOp>
</Hash>
</RelOp>
<RelOp AvgRowSize="36" EstimateCPU="0.191065" EstimateIO="1.13259" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="173553" LogicalOp="Index Seek" NodeId="47" Parallel="false" PhysicalOp="Index Seek" EstimatedTotalSubtreeCost="1.32366" TableCardinality="384010">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="Company" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="PartNum" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="MtlPartNum" />
</OutputList>
<IndexScan Ordered="true" ScanDirection="FORWARD" ForcedIndex="false" ForceSeek="false" ForceScan="false" NoExpandHint="false">
<DefinedValues>
<DefinedValue>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="Company" />
</DefinedValue>
<DefinedValue>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="PartNum" />
</DefinedValue>
<DefinedValue>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="MtlPartNum" />
</DefinedValue>
</DefinedValues>
<Object Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Index="[WhereUsed]" Alias="[h]" IndexKind="NonClustered" />
<SeekPredicates>
<SeekPredicateNew>
<SeekKeys>
<Prefix ScanType="EQ">
<RangeColumns>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[h]" Column="Company" />
</RangeColumns>
<RangeExpressions>
<ScalarOperator ScalarString="'Bruce'">
<Const ConstValue="'Bruce'" />
</ScalarOperator>
</RangeExpressions>
</Prefix>
</SeekKeys>
</SeekPredicateNew>
</SeekPredicates>
</IndexScan>
</RelOp>
</Hash>
</RelOp>
</ComputeScalar>
</RelOp>
</ComputeScalar>
</RelOp>
<RelOp AvgRowSize="73" EstimateCPU="0.00367094" EstimateIO="0" EstimateRebinds="43701.6" EstimateRewinds="0" EstimateRows="1.00182" LogicalOp="Assert" NodeId="55" Parallel="false" PhysicalOp="Assert" EstimatedTotalSubtreeCost="39.4443">
<OutputList>
<ColumnReference Column="Expr1065" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="Company" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="PartNum" />
<ColumnReference Column="Expr1027" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="MtlPartNum" />
</OutputList>
<Assert StartupExpression="false">
<RelOp AvgRowSize="73" EstimateCPU="0.00367094" EstimateIO="0" EstimateRebinds="43701.6" EstimateRewinds="0" EstimateRows="1.00182" LogicalOp="Inner Join" NodeId="56" Parallel="false" PhysicalOp="Nested Loops" EstimatedTotalSubtreeCost="39.4443">
<OutputList>
<ColumnReference Column="Expr1065" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="Company" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="PartNum" />
<ColumnReference Column="Expr1027" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="MtlPartNum" />
</OutputList>
<NestedLoops Optimized="false">
<OuterReferences>
<ColumnReference Column="Expr1065" />
<ColumnReference Column="Recr1023" />
<ColumnReference Column="Recr1024" />
<ColumnReference Column="Recr1025" />
<ColumnReference Column="Recr1026" />
</OuterReferences>
<RelOp AvgRowSize="73" EstimateCPU="0.000437016" EstimateIO="0" EstimateRebinds="43701.6" EstimateRewinds="0" EstimateRows="1" LogicalOp="Compute Scalar" NodeId="57" Parallel="false" PhysicalOp="Compute Scalar" EstimatedTotalSubtreeCost="0.000437016">
<OutputList>
<ColumnReference Column="Expr1065" />
<ColumnReference Column="Recr1023" />
<ColumnReference Column="Recr1024" />
<ColumnReference Column="Recr1025" />
<ColumnReference Column="Recr1026" />
</OutputList>
<ComputeScalar>
<DefinedValues>
<DefinedValue>
<ColumnReference Column="Expr1065" />
<ScalarOperator ScalarString="[Expr1064]+(1)">
<Arithmetic Operation="ADD">
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1064" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Const ConstValue="(1)" />
</ScalarOperator>
</Arithmetic>
</ScalarOperator>
</DefinedValue>
</DefinedValues>
<RelOp AvgRowSize="73" EstimateCPU="0.000437016" EstimateIO="0" EstimateRebinds="43701.6" EstimateRewinds="0" EstimateRows="1" LogicalOp="Lazy Spool" NodeId="58" Parallel="false" PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="0.000437016">
<OutputList>
<ColumnReference Column="Expr1064" />
<ColumnReference Column="Recr1023" />
<ColumnReference Column="Recr1024" />
<ColumnReference Column="Recr1025" />
<ColumnReference Column="Recr1026" />
</OutputList>
<Spool Stack="true" PrimaryNodeId="0" />
</RelOp>
</ComputeScalar>
</RelOp>
<RelOp AvgRowSize="40" EstimateCPU="7.92129E-06" EstimateIO="0" EstimateRebinds="43700.6" EstimateRewinds="0" EstimateRows="79.2129" LogicalOp="Compute Scalar" NodeId="62" Parallel="false" PhysicalOp="Compute Scalar" EstimatedTotalSubtreeCost="39.4401">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="Company" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="PartNum" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="MtlPartNum" />
<ColumnReference Column="Expr1027" />
</OutputList>
<ComputeScalar>
<DefinedValues>
<DefinedValue>
<ColumnReference Column="Expr1027" />
<ScalarOperator ScalarString="[Recr1025]-(1)">
<Arithmetic Operation="SUB">
<ScalarOperator>
<Identifier>
<ColumnReference Column="Recr1025" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Const ConstValue="(1)" />
</ScalarOperator>
</Arithmetic>
</ScalarOperator>
</DefinedValue>
</DefinedValues>
<RelOp AvgRowSize="36" EstimateCPU="0.384354" EstimateIO="14.1888" EstimateRebinds="43700.6" EstimateRewinds="0" EstimateRows="79.2129" LogicalOp="Eager Spool" NodeId="63" Parallel="false" PhysicalOp="Index Spool" EstimatedTotalSubtreeCost="39.094">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="Company" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="PartNum" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="MtlPartNum" />
</OutputList>
<Spool>
<SeekPredicateNew>
<SeekKeys>
<Prefix ScanType="EQ">
<RangeColumns>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="MtlPartNum" />
</RangeColumns>
<RangeExpressions>
<ScalarOperator ScalarString="[Recr1024]">
<Identifier>
<ColumnReference Column="Recr1024" />
</Identifier>
</ScalarOperator>
</RangeExpressions>
</Prefix>
</SeekKeys>
</SeekPredicateNew>
<RelOp AvgRowSize="36" EstimateCPU="0.422568" EstimateIO="2.50312" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="384010" LogicalOp="Index Scan" NodeId="64" Parallel="false" PhysicalOp="Index Scan" EstimatedTotalSubtreeCost="2.92569" TableCardinality="384010">
<OutputList>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="Company" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="PartNum" />
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="MtlPartNum" />
</OutputList>
<IndexScan Ordered="false" ForcedIndex="false" ForceSeek="false" ForceScan="false" NoExpandHint="false">
<DefinedValues>
<DefinedValue>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="Company" />
</DefinedValue>
<DefinedValue>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="PartNum" />
</DefinedValue>
<DefinedValue>
<ColumnReference Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Alias="[i]" Column="MtlPartNum" />
</DefinedValue>
</DefinedValues>
<Object Database="[Epicor905]" Schema="[dbo]" Table="[PartMtl]" Index="[WhereUsed]" Alias="[i]" IndexKind="NonClustered" />
</IndexScan>
</RelOp>
</Spool>
</RelOp>
</ComputeScalar>
</RelOp>
</NestedLoops>
</RelOp>
<Predicate>
<ScalarOperator ScalarString="CASE WHEN [Expr1065]>(100) THEN (0) ELSE NULL END">
<IF>
<Condition>
<ScalarOperator>
<Compare CompareOp="GT">
<ScalarOperator>
<Identifier>
<ColumnReference Column="Expr1065" />
</Identifier>
</ScalarOperator>
<ScalarOperator>
<Const ConstValue="(100)" />
</ScalarOperator>
</Compare>
</ScalarOperator>
</Condition>
<Then>
<ScalarOperator>
<Const ConstValue="(0)" />
</ScalarOperator>
</Then>
<Else>
<ScalarOperator>
<Const ConstValue="NULL" />
</ScalarOperator>
</Else>
</IF>
</ScalarOperator>
</Predicate>
</Assert>
</RelOp>
</Concat>
</RelOp>
</Spool>
</RelOp>
</QueryPlan>
</StmtSimple>
</Statements>
</Batch>
</BatchSequence>
</ShowPlanXML>
Schema
Answer: Finally figured it out. Because I did a recursive CTE starting with component materials, I created an infinite loop (at least I think it is infinite, or pretty darn close).
Material Part Parent Part
a b
a b
a c
b c
As a quick example, I keep running through the above situation, where a Material Part is in Parent Parts b & c. So it runs through both of those Parent Parts, and then it will loop back with b as the Material Part, which is in Parent Part c, so it will loop through that again unnecessarily. This is why I say I'm not sure if it is infinite: it's doing a ton of redundant work, but I'm not sure if there is an end. | {
"domain": "codereview.stackexchange",
"id": 31723,
"tags": "sql, sql-server"
} |
How is tritium illumination possible without negative health effects? | Question: Turns out there's tritium illumination - a tiny very strong plastic tube will be covered in phosphor and filled with tritium. Tritium will undergo beta decay and a flow of electrons will cause the phosphor to glow.
This gives enough light to illuminate the hour marks on a wristwatch dial and the hands of the wristwatch for many years, and is claimed to pose no health hazard.
Now how is it possible to have an energetic enough radioactive decay and no health hazard at the same time?
Answer: Because you keep the Tritium on the other side of the glass tube!
Tritium is a beta emitter: it sends out electrons, which are absorbed by the glass. More usefully, they are also absorbed by the phosphor coating the glass, which causes it to emit light.
The decay doesn't have to be particularly energetic, only energetic enough to cause the phosphor to emit light. Also, radioactive risk is a combination of the activity (how much the isotope emits and how much of the isotope you have), the chemical nature of the isotope (what it binds to in the body and how long it stays there), and the type/energy of the emission (how much damage a single particle can do).
Tritium is a kind of hydrogen, so it will form water and can get into your body relatively easily. However, it is a low-energy beta emitter, so the particles don't carry enough energy to do much damage, and, being water, it doesn't accumulate in your body for very long. | {
"domain": "physics.stackexchange",
"id": 4634,
"tags": "radiation, radioactivity, biophysics"
} |
why do we obtain a sigmoid curve in vapour pressure versus temperature graph | Question: I recently got a question in an assignment, which was somewhat like this:
what would be the shape of curve obtained in a graph between vapour pressure & temperature of a binary solution in a closed vessel
Generally we do all the experiments regarding vapour pressure in a closed vessel, so the volume must remain constant. From the formula $PV=nRT$, with the number of moles constant in a binary solution and the volume constant, we get an equation of the form $Y=mX$, which is a straight line passing through the origin.
But the answer was a sigmoid curve. How did we get that? Can I get an elaborate answer for this query?
Answer: The answer already present here is excellent, but as an extension I would add some points to it.
The equation $PV=nRT$ is made for ideal gases; here, however, we have a solution of gas and liquid in which the volatile solvent shows some vapour pressure above the solution.
The sigmoid curve you get comes from the Clausius–Clapeyron equation, which suits this condition [i.e. the situation of solutions].
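For reference (a standard form, added here rather than quoted from the original answer), the Clausius–Clapeyron equation can be written as $\frac{dP}{dT} = \frac{\Delta H_{vap}}{T\,\Delta V}$, or in its integrated form $\ln P = -\frac{\Delta H_{vap}}{R}\cdot\frac{1}{T} + C$, so vapour pressure grows roughly exponentially with temperature rather than linearly.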
After the whole solution has evaporated, you get pure solvent in the vapour phase, after which your ideal-gas equation $PV=nRT$ is applicable and you will get a straight line thereafter. | {
"domain": "chemistry.stackexchange",
"id": 1682,
"tags": "solutions"
} |
How many giant kelp are there per square meter? | Question: In a high-density giant kelp (Macrocystis pyrifera) bed how many "holdfasts" or roots would there be per square meter of seafloor on average?
Answer: The density of Macrocystis pyrifera is pretty variable. According to Dayton et al. (1984), off the Southern California coast it can range from less than 0.1 to 1.0 $individuals/m^2$. The variation depends on the location, where there are different exposure levels to high-energy waves, upwellings containing nutrients, and predation by sea urchins.
Dayton, P. K., Currie, V., Gerrodette, T., Keller, B. D., Rosenthal, R., & Tresca, D. V. (1984). Patch dynamics and stability of some California kelp communities. Ecological monographs, 54(3), 253-289. https://doi.org/10.2307/1942498 | {
"domain": "biology.stackexchange",
"id": 11036,
"tags": "ecology, population-biology, algae"
} |
Rotation vs translation | Question: What is so fundamentally different between a rotation and a translation that one can be represented with a single n-vector and the other one needs an n-n matrix?
Answer: Both are Euclidean isometries (distance and global-shape preserving maps): the only other kind of Euclidean isometry is a reflexion, although this last one is "discrete" - you can't have a fraction of a reflexion, whereas you can have a fraction $\alpha$ of a translation or rotation: you translate $\alpha$ times as far or rotate through $\alpha$ times a given angle. So the two transformations you mention are the only continuous Euclidean isometries.
What's fundamentally different? Translations commute. Rotations do not. This means that for two translations $T_1,\,T_2$ the order of application does not matter: the resultant is the same whichever way around you choose, i.e. $T_1\,T_2 = T_2\,T_1$. This does not hold for rotations: draw some marks on an orange and check this yourself if you haven't noticed this before. Aside from for rotations about the same axis, $R_1\,R_2 \neq R_2\,R_1$.
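This difference can be checked numerically. The following sketch is my own illustration (not part of the original answer): it compares composing two translations with composing two rotations about different axes.

```python
import math

def matmul(A, B):
    # 3x3 matrix product using plain nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

# Translations commute: composing them is just vector addition.
add = lambda u, v: tuple(a + b for a, b in zip(u, v))
t1, t2 = (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)
print(add(t1, t2) == add(t2, t1))  # True

# Rotations about different axes do not commute.
R1, R2 = rot_x(math.pi / 2), rot_z(math.pi / 2)
A, B = matmul(R1, R2), matmul(R2, R1)
same = all(abs(A[i][j] - B[i][j]) < 1e-9 for i in range(3) for j in range(3))
print(same)  # False
```

Marking three points on an orange and physically applying the two rotation orders gives the same conclusion.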
All groups (look this word up if you haven't met it) of continuous symmetries that are Abelian (i.e. groups of symmetries that commute - for which the order does not matter) can be shown to be essentially the same as (isomorphic to) either (1) a space of vectors equipped with vector addition or (2) a torus whose surface is labelled with orthogonal Cartesian co-ordinates and which behaves essentially the same as a vector space of adding arrows. The only difference in the latter, torus group is that if you translate far enough in a given direction, you can get back to your beginning point. Otherwise the torus group looks exactly like a space of vectors. So you can take this as your fundamental informal reason: any commuting Lie group looks either exactly like a space of vectors, or a "compactified" one that behaves the same aside from one's being able to get back to one's beginning point by a far enough translation in any direction.
Another pithy characterisation is CuriousOne's comment:
A translation affects only one coordinate direction, a rotation affects two. An infinitesimal translation needs one vector, an infinitesimal rotation needs two. Maybe the better way to think about comparing these operations is with their generators than the matrices and vectors? I am sure a theorist can chime in with a better answer regarding the representation theory of Lie-groups.
Some background: in higher dimensions, rotations rotate 2D planes and leave the orthogonal complement of a plane invariant. This is what CuriousOne means by "rotation affects two". The axis concept only works in 3 dimensions: you can define a plane in 3 dimensions as the 2D subspace normal to a vector (in two dimensions you rotate about a point). So, in 3D with a vector's direction standing for the axis and its magnitude standing for the angle, you can indeed represent a rotation by a lone vector. There is even a triangle law for vector addition of rotations, but it is somewhat more involved than addition of translations. Nonetheless, it may surprise you just the same: see the discussion under the heading "Example 1.4: ($2\times 2$ Unitary Group $SU(2)$)" on this page of my website here. Another way of saying this is as David Hammen points out: The axis concept's being workable in 3D is an "accident" of dimension: in $N$ dimensions, a rotation is of the form $e^H$ where $H$ is a real $N\times N$ skew-symmetric ($H=-H^T$: equal to the negative of its transpose) matrix and thus $H$ must have noughts along its leading diagonal and its lower, below leading diagonal triangle is simply the upper, above leading diagonal triangle reflected. There are thus $N\,(N-1)/2$ real parameters needed to specify the rotation, which just happens to be equal to $N$ when $N=3$.
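The $e^H$ characterisation can be illustrated with a quick numerical sketch (my own illustration, using a made-up skew-symmetric $H$): exponentiating a skew-symmetric matrix via its power series yields an orthogonal matrix, i.e. a rotation, and for $N=3$ the $N(N-1)/2 = 3$ free entries of $H$ match the three axis-angle parameters.

```python
import math

N = 3
# A skew-symmetric H: zeros on the diagonal, lower triangle the negative of the upper.
# It has N*(N-1)/2 = 3 free parameters, matching the three axis-angle numbers in 3D.
H = [[0.0, -0.3, 0.2],
     [0.3, 0.0, -0.1],
     [-0.2, 0.1, 0.0]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def expm(M, terms=30):
    # Truncated power series exp(M) = I + M + M^2/2! + ...
    n = len(M)
    R = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    P = [row[:] for row in R]
    for k in range(1, terms):
        P = matmul(P, M)  # P is now M^k
        R = [[R[i][j] + P[i][j] / math.factorial(k) for j in range(n)] for i in range(n)]
    return R

R = expm(H)
Rt = [[R[j][i] for j in range(N)] for i in range(N)]
I = matmul(Rt, R)  # R^T R should be the identity: R is orthogonal, i.e. a rotation
print(all(abs(I[i][j] - (1.0 if i == j else 0.0)) < 1e-9 for i in range(N) for j in range(N)))  # True
```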
Lastly, one should mention the special relationship between translations and rotations. The translations are special insofar that for any isometry $U$ and translation $T$ we have $U\,T\,U^{-1} = T_1$, where $T_1$ is another translation (not needfully the same as $T$). The technical name for this is that the translations form a normal subgroup of the group of isometries; what this means is that any isometry can be uniquely decomposed into the form $T\,R$, where $T$ is a translation and $R$ a rotation: we say that the isometry group $E(N)$ is the semidirect product (written $E(N)=T(N)\rtimes R(N)$) of the group $T(N)$ of translations and $R(N)$ of rotations).
I should add that, in higher dimensions, I use the word "rotation" more loosely than many authors: I simply mean any homogeneous transformation whose matrix is of the form $e^H$ with $H$ skew symmetric. Many authors split these up into further different classes. | {
"domain": "physics.stackexchange",
"id": 18773,
"tags": "rotation"
} |
Why does the policy network in AlphaZero work? | Question: In AlphaZero, the policy network (or head of the network) maps game states to a distribution of the likelihood of taking each action. This distribution covers all possible actions from that state.
How is such a network possible? The possible actions from each state are vastly different from those in subsequent states. So, how would each possible action from a given state be represented in the network's output, and what about the network design would stop the network from considering an illegal action?
Answer: The output of the policy network is as described in the original paper:
A move in chess may be described in two parts: selecting the piece to move, and then
selecting among the legal moves for that piece. We represent the policy π(a|s) by a 8 × 8 × 73
stack of planes encoding a probability distribution over 4,672 possible moves. Each of the 8×8
positions identifies the square from which to “pick up” a piece. The first 56 planes encode
possible ‘queen moves’ for any piece: a number of squares [1..7] in which the piece will be
moved, along one of eight relative compass directions {N, NE, E, SE, S, SW, W, NW}. The
next 8 planes encode possible knight moves for that piece. The final 9 planes encode possible
underpromotions for pawn moves or captures in two possible diagonals, to knight, bishop or
rook respectively. Other pawn moves or captures from the seventh rank are promoted to a
queen.
So each move selector scores the relative probability of selecting a piece in a given square and moving it in a specific way. For example, there is always one output dedicated to representing picking up the piece in A3 and moving it to A6. This representation includes selecting opponent pieces, selecting empty squares, making knight moves for rooks, making long diagonal moves for pawns. It also includes moves that take pieces off the board or through other blocking pieces.
The typical branching factor in chess is around 35. The policy network described above always calculates discrete probabilities for 4672 moves.
Clearly this can select many non-valid moves, if pieces are not available, or cannot move as suggested. In fact it does this all the time, even when fully trained, as nothing is ever learned about avoiding the non-valid moves during training - they do not receive positive or negative feedback, as there is never any experience gained relating to them. However, the benefit is that this structure is simple and fixed, both useful traits when building a neural network.
The simple work-around is to filter out impossible moves logically, setting their effective probability to zero, and then re-normalise the probabilities for the remaining valid moves. That step involves asking the game engine for what the valid moves are - but that's fine, it's not "cheating".
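A minimal sketch of that filter-and-renormalise step (toy move count and function name of my own choosing, not AlphaZero's actual code):

```python
def mask_and_renormalise(probs, legal_mask):
    # Zero out the probability of every illegal move, then rescale the rest to sum to 1.
    masked = [p if legal else 0.0 for p, legal in zip(probs, legal_mask)]
    total = sum(masked)
    if total == 0.0:
        # Degenerate case: the network put no mass on any legal move,
        # so fall back to a uniform distribution over the legal ones.
        n_legal = sum(legal_mask)
        return [1.0 / n_legal if legal else 0.0 for legal in legal_mask]
    return [p / total for p in masked]

probs = [0.5, 0.3, 0.2]  # toy 3-move policy head output
legal = [1, 0, 1]        # the game engine says move 1 is illegal here
print(mask_and_renormalise(probs, legal))  # approximately [0.714, 0.0, 0.286]
```

In the real setting the list would have 4,672 entries (one per move plane cell), but the logic is the same.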
Whilst it might be possible to either have the agent learn to avoid non-valid moves, or some clever output structure that could only express valid moves, these would both distract from the core goal of learning how to play the game optimally. | {
"domain": "ai.stackexchange",
"id": 717,
"tags": "neural-networks, reinforcement-learning, ai-design, alphazero, alphago-zero"
} |
Manipulator end-effector orientation with quaternions | Question: I have the following problem:
Given 3 points on a surface, I have to adjust a manipulator end-effector (i.e. pen) on a Baxter Robot, normal to that surface.
From the three points I easily get the coordinate frame, as well as the normal vector. My question is now, how can I use those to tell the manipulator its supposed orientation.
The Baxter Inverse Kinematics solver takes a $(x,y,z)$-tuple of Cartesian coordinates for the desired position, as well as a $(x,y,z,w)$-quaternion for the desired orientation. What do I set the orientation to? My feeling would be to just use the normal vector $(n_1,n_2,n_3)$ and a $0$, or do I have to do some calculation?
Answer: As Brian indicated in a comment, you simply need to convert your rotation matrix (or Euler angles) into a quaternion. Maths - Conversion Matrix to Quaternion is my favorite site for geometric conversions.
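For completeness, here is a sketch of the standard trace-based matrix-to-quaternion conversion (the same family of algorithm that site describes; the $(x,y,z,w)$ ordering matches what the question says the Baxter IK solver expects, but verify against your own convention):

```python
import math

def matrix_to_quaternion(R):
    # Convert a 3x3 rotation matrix (nested lists) to an (x, y, z, w) unit quaternion.
    # The branching picks the numerically largest component to divide by.
    tr = R[0][0] + R[1][1] + R[2][2]
    if tr > 0:
        s = 2.0 * math.sqrt(tr + 1.0)
        w = 0.25 * s
        x = (R[2][1] - R[1][2]) / s
        y = (R[0][2] - R[2][0]) / s
        z = (R[1][0] - R[0][1]) / s
    elif R[0][0] > R[1][1] and R[0][0] > R[2][2]:
        s = 2.0 * math.sqrt(1.0 + R[0][0] - R[1][1] - R[2][2])
        w = (R[2][1] - R[1][2]) / s
        x = 0.25 * s
        y = (R[0][1] + R[1][0]) / s
        z = (R[0][2] + R[2][0]) / s
    elif R[1][1] > R[2][2]:
        s = 2.0 * math.sqrt(1.0 + R[1][1] - R[0][0] - R[2][2])
        w = (R[0][2] - R[2][0]) / s
        x = (R[0][1] + R[1][0]) / s
        y = 0.25 * s
        z = (R[1][2] + R[2][1]) / s
    else:
        s = 2.0 * math.sqrt(1.0 + R[2][2] - R[0][0] - R[1][1])
        w = (R[1][0] - R[0][1]) / s
        x = (R[0][2] + R[2][0]) / s
        y = (R[1][2] + R[2][1]) / s
        z = 0.25 * s
    return (x, y, z, w)

# A 90 degree rotation about z should give (0, 0, sin 45, cos 45).
Rz = [[0.0, -1.0, 0.0],
      [1.0, 0.0, 0.0],
      [0.0, 0.0, 1.0]]
print(matrix_to_quaternion(Rz))  # approximately (0.0, 0.0, 0.7071, 0.7071)
```

Your rotation matrix itself would come from stacking the two in-surface tangent vectors and the normal as columns, which is the frame you said you already compute from the three points.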
Quaternions are a great representation and have a number of benefits over other representations, so you should definitely read up on them. | {
"domain": "robotics.stackexchange",
"id": 897,
"tags": "inverse-kinematics, orientation"
} |
Problem in downloading model | Question:
When I try to load any model in Gazebo from the Insert tab using the link, I get the following error.
Can someone tell me what the problem may be?
Error [ModelDatabase.cc:354] Unable to load manifest
file[/home/myname/.gazebo/models/bookshelf/manifest.xml]
Error [ModelDatabase.cc:391] Invalid model manifest
file[/home/myname/.gazebo/models/bookshelf/manifest.xml]
Originally posted by Vipul Divyanshu on Gazebo Answers with karma: 1 on 2012-10-31
Post score: 0
Answer:
Try deleting '/home/myname/.gazebo/models/bookshelf', and then try again. You probably have an outdated manifest.xml file.
If that doesn't work, can you post the contents of the manifest file?
Originally posted by nkoenig with karma: 7676 on 2012-11-07
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 2784,
"tags": "gazebo, gazebo-model"
} |
How to interpret quantum fields? | Question: As an analogy of what I am looking for, suppose $f(x,t)$ represents a classical field. Then we may interpret this as saying at position $x$ and time $t$ the field takes on a value $f(x,t)$.
In quantum field theory the fields are now operator valued distributions. Suppose $\varphi$ is a quantum field, thus it must be of the form $\varphi(f)$ where $f$ is some suitably nice test function. What is the physical interpretation here analogous to the classical field case? Is the test function $f$ supposed to represent the state of the system (as it would in quantum mechanics, i.e. $f \in \mathcal{H}$ where $\mathcal{H}$ is some Hilbert space)?
To word things differently, what exactly does it mean to apply the resulting operator-valued distribution $\varphi$, for example, $\varphi | 0 \rangle$? Physically, what does this tell us?
Answer: Perhaps the most direct (but not only) interpretation is to say that $\phi(x)$ represents a local observable.$^\star$ In other words, $\phi(x)$ represents the value of the field at $x$. You can (in principle) perform a measurement to learn the value of the field at $x$.
Just like in normal quantum mechanics, in a general state $|\Psi\rangle$, you cannot predict precisely what the outcome will be of measuring $\phi(x)$. You can only make such predictions in special states, field eigenbasis states. For example, there is an eigenstate $|\varphi(x)\rangle$ where the field will take on the value $\varphi(x)$:
\begin{equation}
\phi(x)|\varphi(x)\rangle = \varphi(x) |\varphi(x)\rangle
\end{equation}
However, in other states, like the ground state (also called the vacuum state) $|0\rangle$, the field does not take on a definite value. There is a superposition of field values, represented by the Schrodinger wavefunctional (which is the generalization of the Schrodinger wavefunction of quantum mechanics, to quantum field theory).
Frequently we are interested in the correlation functions of the field, in some state (usually the vacuum). This is a way to capture the probability distribution over different field configurations. From these correlation functions, we can extract other observables we care about (such as scattering amplitudes in particle physics). Some examples of correlation functions are
\begin{equation}
\langle 0 | \phi(x) \phi(y) | 0 \rangle, \langle 0 | \phi(x) \phi(y) \phi(z) |0 \rangle, \cdots
\end{equation}
Note that because of operator ordering ambiguities, in practical applications it is important to specify how the operators are ordered when defining and computing correlation functions.
The reason that the field is an operator valued distribution, and not simply an operator valued function, is because it is quite a singular object. For example, the two point function is divergent in the limit $x\rightarrow y$
\begin{equation}
\lim_{x\rightarrow y} \langle 0 | \phi(x) \phi(y) | 0 \rangle = \infty
\end{equation}
Therefore, one typically "smooths out" the correlation function by integrating the field against a test function.
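In symbols (a standard convention, added here rather than quoted from the answer): the smeared field is $\varphi(f) = \int d^4x\, f(x)\,\varphi(x)$, which is a well-defined operator for each suitably nice test function $f$, even though $\varphi(x)$ at a sharp point is not.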
$^\star$ As Gold mentioned in their answer, this is a simplification. Because of the freedom to do field redefinitions, and in gauge theories the ability to do gauge transformations, the value of the field itself is not a physically invariant quantity. For the field value itself to be meaningful, you have to couple the field to a probe that will measure the field's value. | {
"domain": "physics.stackexchange",
"id": 92654,
"tags": "quantum-field-theory, hilbert-space, operators, field-theory, observables"
} |
Ratio of $B/H$ in a ferromagnet | Question: On the page https://en.wikipedia.org/wiki/Saturation_(magnetic), it is stated that
The relation between the magnetizing field H and the magnetic field B can also be expressed as the magnetic permeability: $\mu =B/H$ or the relative permeability $\mu/\mu _{0}=\mu _{r}$, where $\mu _{0}$ is the vacuum permeability. The permeability of ferromagnetic materials is not constant, but depends on $H$. In saturable materials the relative permeability increases with $H$ to a maximum, then as it approaches saturation inverts and decreases toward one.
So the page says that for a ferromagnet, as we increase $H$, the value of $\mu_r = \mu/\mu_o =\frac{B}{\mu_oH} $ first increases to a maximum and then decreases towards one. I thought that makes sense: we have $\boldsymbol B = \mu_o(\boldsymbol H + \boldsymbol M)$. As we increase $\boldsymbol H$ at some point the material saturates and $M$ stops growing at $M_{sat}$. Increasing $\boldsymbol H$ further, at some point surely we can achieve $H\gg M_{sat}$ (since $M$ is now constant) and so we can approximate $\boldsymbol B \simeq \mu_o\boldsymbol H$, and so $B/H \simeq \mu_o$ for large $H$ and indeed $\color{blue}{\mu_r\rightarrow 1}$.
But then, the $\boldsymbol B$ field inside also saturates at some point and reaches a magnitude $B_{sat}$, right? So the ratio $B/H = B_{sat}/H$ should go to zero - since $B_{sat}$ is just a constant and $H$ goes arbitrarily large - hence $\color{blue}{\mu_r\rightarrow 0}$.
So which reasoning is correct? I know this is probably a silly question, but I'm missing something obvious and I'd be thankful if someone could clear this up for me.
EDIT: the experimental setup I have in mind would be a ferromagnetic coil with wire wound around it. By changing the current in the wire we control the external field $\boldsymbol H$.
Answer: Your first reasoning is correct and the second one is wrong. For high magnetic fields $\mathbf{H}$, the magnetization $\mathbf{M}$ will saturate, but the magnetic induction $\mathbf{B}$ does not saturate, as it keeps growing linearly with the external magnetic field $\mathbf{H}$. This is a common mistake when people plot the hysteresis curve (if you google "hysteresis curve magnet" you will find multiple figures where the B-field is indicated on the y-axis and saturates for high $\mathbf{H}$ values - this is wrong!).
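A toy numerical model (entirely made up, just to illustrate the limit) makes the point: once $M$ is capped at $M_{sat}$, $\mu_r = B/(\mu_o H) = 1 + M/H$ tends to $1$ from above as $H$ grows, while $B = \mu_o(H+M)$ itself never saturates.

```python
M_SAT = 1.0e6  # A/m, made-up saturation magnetization for illustration

def mu_r(H):
    # Crude model: M responds linearly to H, then saturates at M_SAT.
    M = min(1000.0 * H, M_SAT)
    return 1.0 + M / H  # since B = mu0*(H + M), mu_r = B/(mu0*H) = 1 + M/H

for H in (1.0e3, 1.0e6, 1.0e9):
    print(H, mu_r(H))  # mu_r decreases towards 1, never towards 0
```

A real ferromagnet's $\mu_r$ also rises to a maximum before the decrease (the toy model skips that part), but the asymptote is the same.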
The picture below (taken from Physics behind the magnetic hysteresis loop—a survey of misconceptions in magnetism literature) is correct. The magnetization saturates and the magnetic induction approaches a linear behavior for high $\mathbf{H}$. | {
"domain": "physics.stackexchange",
"id": 65909,
"tags": "electromagnetism, magnetic-fields, ferromagnetism"
} |
Does long term use of antibiotics in humans actually lead to a greater risk of infection? | Question: I've read about the overuse of antibiotics leading to antibiotic resistant strains of bacteria, so generally does long term use of antibiotics breed these strains in the bodies of antibiotic users and increase the risk of various bodily infections?
Answer:
does long term use of antibiotics breed these strains in the bodies of antibiotic users[,] and increase the risk of various bodily infections?
Yes to the first question (although I would use the words select for). No, not usually for the second question in the individual(*), yes in the population.
This should be looked at as a problem in populations of both people/animals and bacterial populations in general. Given time, any living organism will be favored if a mutation involving a selective pressure (like an antibiotic) is introduced, not only in those towards which the antibiotic is directed but also among the normal microbiota. The longer the selective pressure (antibiotic treatment), the greater the probability that a resistant strain of one bacteria or another will develop.
That's why there's a push (in studies) to limit antibiotic use to the least number of days which still proves effective in eradicating an infection. (The trend is towards kill 'em quick and kill 'em all - higher doses for shorter periods, or make it strong and only long enough to cause the extinction of the illness-causing bacteria and not long enough for selection of resistance mutations.)
It is quite normal to have potentially pathogenic bacteria as part of one's microbiome. Usually other beneficial bacteria and conditions keep these in check. But antibiotic resistance, once selected for, can persist for longer periods of time than previously recognized (up to four years in one study). This gives even a small population of resistant bacteria time to genetically transfer resistance to others.
Short-term use is less likely to select for resistance in one individual, but, in terms of populations of people, antibiotics are used so ubiquitously that resistant strains will be selected for in shorter treatment intervals by sheer numbers of people being treated.
When resistant and susceptible organisms compete to colonize or infect hosts, and use of an antibiotic has a greater impact on the transmission of susceptible bacteria than resistant ones, then increasing use of the antibiotic will result in an increase in frequency of organisms resistant to that drug in the population, even if the risk for treated patients is modest. Antimicrobial use and patient-to-patient transmission are not independent pathways for promotion of antimicrobial resistance; rather, they are inextricably linked.
"Long term use" in one individual is relatively uncommon and unnecessary, but if necessary, usually does more good than harm to that individual. Almost all (if not all) of the population of the bacteria you're trying to eliminate from one site will be susceptible to the antibiotic and will die, leaving, in a few cases, a few resistant ones that your body can handle naturally, or that can repopulate to "normal" levels without causing illness. But they are then there, the resistant ones, to spread within a population of the individual's microbiome, and the population of people. The next person with a problem, who might have picked up that resistant strain, will not respond well to the same antibiotic. Nor will the patient who was initially treated, if the problem recurs with an antibiotic resistant strain. So, as I said, yes and no.
That's why people (doctors, vets, and - with help - patients) should consider a risk-to-benefit-ratio whenever using an antibiotic. From that standpoint, it is always unethical to use an antibiotic for what is probably a virus, because the benefit to a patient is very small, but the risk - to that patient, who might suffer complications from its use, and to the population if a resistant strain is selected for - is relatively high. Better to wait it out to see if it is bacterial (if one can't test for it directly), and treat it if it does turn out to be. Of course, the risk of not treating in certain populations must also be considered: the elderly, the immunosuppressed, infants, and people with other illnesses or predispositions to infection.
It's not just use of antibiotics in patients/humans that causes problems; its use in any situation can lead to selection for resistance. This is well documented to occur in the meat industry, where a grower often tries to minimize the effect of cost-efficient but insalubrious growing conditions by giving antibiotics to the entire population of meat-animals (beef or poultry) to cut the monetary losses associated with potential illnesses.
It's a multifactorial, and often monetarily driven, problem. Doctors share the blame, but so do patients who demand antibiotics thinking they know better than the doctor, and who will doctor-shop until they find one who will do their bidding, or make a stink for the doctor who refuses. Some doctors still refuse, but they pay a heavy (non-monetary and sometimes monetary) price for doing so.
In the meat industry, it's always a monetarily driven process.
This review focuses on agricultural antimicrobial drug use as a major driver of antimicrobial resistance worldwide for four reasons: It is the largest use of antimicrobials worldwide; much of the use of antimicrobials in agriculture results in subtherapeutic exposures of bacteria; drugs of every important clinical class are utilized in agriculture; and human populations are exposed to antimicrobial-resistant pathogens via consumption of animal products as well as through widespread release into the environment.
What YOU can do:
Do not take an antibiotic for a viral infection like a cold or the flu.
Do not save some of your antibiotic for the next time you get sick.
Take an antibiotic exactly as the healthcare provider tells you. Complete the prescribed course of treatment even if you are feeling better. If treatment stops too soon, some bacteria may survive and re-infect.
Do not take antibiotics prescribed for someone else.
If your healthcare provider determines that you do not have a bacterial infection, ask about ways to help relieve your symptoms. Do not pressure your provider to prescribe an antibiotic.
Antibiotic resistance: delaying the inevitable
Get Smart: Know When Antibiotics Work
Short-Term Antibiotic Treatment Has Differing Long-Term Impacts on the Human Throat and Gut Microbiome
Antimicrobial Use and Antimicrobial Resistance: A Population Perspective
Comparison of 8 vs 15 Days of Antibiotic Therapy for Ventilator-Associated Pneumonia in Adults: A Randomized Trial
Plasmid encoded antibiotic resistance: acquisition and transfer of antibiotic resistance genes in bacteria
Industrial Food Animal Production, Antimicrobial Resistance, and Human Health | {
"domain": "biology.stackexchange",
"id": 3450,
"tags": "infection, antibiotics, antibiotic-resistance"
} |
Question about impulse and momentum | Question: Check this question first please
http://3.ii.gl/CRekoysv4.jpg
My question is:
Why can't I use equations of motion to get the final speed after rebounding? Is the acceleration not equal to $9.8 \text{ m/s}^2$, or does the initial speed after rebounding not equal the initial speed before? Why do I have to use $$mgh=\frac{1}{2} mv^2$$
$$\text{Total mechanical before}=\text{after}$$
Answer: You can use either method, conservation of energy or kinematics, and both will give the same answer:
CONSERVATION OF ENERGY
Let's arbitrarily define downwards displacements, velocities, accelerations and forces as negative.
Falling motion:
$$mgh_1 = \frac{1}{2}mv_1^2$$
$$\therefore v_1 = -\sqrt{2gh_1}$$
Rising motion:
$$\frac{1}{2}mv_2^2 = mgh_2$$
$$\therefore v_2 = \sqrt{2gh_2}$$
$$I = m(v_2 - v_1) = m(\sqrt{2gh_2}+\sqrt{2gh_1})$$
KINEMATICS
Constant acceleration, therefore we can use suvat equations.
$$v^2 = u^2 +2as$$
Falling:
$$v_1^2 = 0 + 2(-g)(-h_1)$$
$$\therefore v_1 = -\sqrt{2gh_1}$$
Rising:
$$0 = v_2^2 + 2(-g)(h_2)$$
$$\therefore v_2 = \sqrt{2gh_2}$$
$$I = m(v_2 - v_1) = m(\sqrt{2gh_2}+\sqrt{2gh_1})$$
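Both routes can also be checked numerically; the sketch below uses made-up example values for the mass and heights (they are not given in the original problem):

```python
import math

g, h1, h2, m = 9.8, 2.0, 1.5, 0.1  # assumed example values (SI units)

# Conservation of energy: mgh = (1/2)mv^2 gives the speeds directly.
v1 = -math.sqrt(2 * g * h1)        # downward velocity just before impact
v2 = math.sqrt(2 * g * h2)         # upward velocity just after rebound
impulse_energy = m * (v2 - v1)

# Kinematics: v^2 = u^2 + 2as with a = -g and the same sign convention.
v1_k = -math.sqrt(0 + 2 * (-g) * (-h1))   # falling: v1^2 = 2(-g)(-h1)
v2_k = math.sqrt(2 * g * h2)              # rising: 0 = v2^2 + 2(-g)(h2)
impulse_kinematics = m * (v2_k - v1_k)

# Both methods give m*(sqrt(2*g*h2) + sqrt(2*g*h1)), as derived above.
print(abs(impulse_energy - impulse_kinematics) < 1e-12)  # True
```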
The reason why you get the same answers either way is because the definition of KE is derived from kinematics.
The kinetic energy is the amount of work required to bring a mass $m$ to a velocity $v$ from rest.
$$KE = work = Fs = mas$$
suvat equation:
$$v^2 = u^2 +2as$$
$$\therefore v^2 = 0 + 2as$$
Substitute:
$$v^2 = \frac{2KE}{m}$$
$$\therefore KE = \frac{1}{2}mv^2$$ | {
"domain": "physics.stackexchange",
"id": 18814,
"tags": "homework-and-exercises, newtonian-mechanics, momentum, conservation-laws, collision"
} |
Why does applying Fourier Transform on point Spread Function yield h(t) which is complex-valued | Question: I wanted to understand why this text talks about applying the Fourier transform on H(f) to obtain h(t). I view Fourier transform as moving from the time or spatial domain to the frequency / spatial frequency domain.
Is it normal for Fourier transform and inverse Fourier transform to be used interchangeably?
The first image pertains to my question.
The second image is for additional context, which appears before the paragraph of interest.
Answer: The Fourier transform doesn't move only from time to frequency (or space etc.), it moves between these domains. The only difference between the Fourier transform and its inverse is a sign in the exponent and possibly the scaling. But these are conventions, and you may have noticed that the Fourier transform can be defined differently in different fields.
With the definition of the Fourier transform and its inverse as it is common in signal processing
\begin{align*}
X(\omega) &= \int_{-\infty}^{\infty}x(t)e^{-j\omega t}dt \\
x(t) &= \frac{1}{2\pi} \int_{-\infty}^{\infty}X(\omega)e^{j\omega t}d\omega \\
\end{align*}
we have
\begin{align*}
\mathscr{F}\big\{X(\omega)\big\} &= \int_{-\infty}^{\infty}X(\omega)e^{-j\omega t}d\omega \\
&= 2\pi x(-t)
\end{align*}
which shows that applying the Fourier transform twice gets us back to the time domain.
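As a quick numerical illustration (not part of the original answer), the discrete analogue of this identity is $\mathrm{DFT}\{\mathrm{DFT}\{x\}\}[n] = N\,x[(-n)\bmod N]$, which is easy to check with NumPy:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x)

# Apply the forward DFT twice (no inverse transform anywhere).
y = np.fft.fft(np.fft.fft(x))

# The result is N times the circularly time-reversed input signal.
x_reversed = x[(-np.arange(N)) % N]    # [1., 4., 3., 2.]
print(np.allclose(y, N * x_reversed))  # True
```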
Also take a look at this related question. | {
"domain": "dsp.stackexchange",
"id": 12333,
"tags": "fourier-transform, radar"
} |
Ultra-Beginner Python FizzBuzz ... Am I missing something? | Question: I just started programming in Python this morning, and it is (more or less) my first programming language. I've done a bit of programming before, but never really did much except for "Hello World" in a few languages. I searched around for some Python FizzBuzz solutions, and they all seem significantly more complicated than mine, so I think I must be missing something, even though it works correctly. Could you guys point out any errors I've made, or things I can improve?
count = 0
while (count < 101):
if (count % 5) == 0 and (count % 3) == 0:
print "FizzBuzz"
count = count +1
elif (count % 3) == 0:
print "Fizz"
count = count + 1
elif (count % 5) == 0:
print "Buzz"
count = count +1
else:
print count
count = count + 1
Answer: Lose the useless brackets
This:
while (count < 101):
can just be:
while count < 101:
Increment out of the ifs
Wouldn't it be easier to do:
count = 0
while count < 101:
if count % 5 == 0 and count % 3 == 0:
print "FizzBuzz"
elif count % 3 == 0:
print "Fizz"
elif count % 5 == 0:
print "Buzz"
else:
print count
count = count + 1 # this will get executed every loop
A for loop will be better
for num in xrange(1,101):
if num % 5 == 0 and num % 3 == 0:
print "FizzBuzz"
elif num % 3 == 0:
print "Fizz"
elif num % 5 == 0:
print "Buzz"
else:
print num
I've also renamed count to num since it doesn't really count anything; it's just a number between 1 and 100.
Let's use only one print
Why use 4 different print statements, when what is really changing is the printed message?
for num in xrange(1,101):
if num % 5 == 0 and num % 3 == 0:
msg = "FizzBuzz"
elif num % 3 == 0:
msg = "Fizz"
elif num % 5 == 0:
msg = "Buzz"
else:
msg = str(num)
print msg
Light bulb!
"FizzBuzz" is the same of "Fizz" + "Buzz".
Let's try this one:
for num in xrange(1,101):
msg = ''
if num % 3 == 0:
msg += 'Fizz'
if num % 5 == 0: # no more elif
msg += 'Buzz'
if not msg: # check if msg is an empty string
msg += str(num)
print msg
Copy and paste this last piece of code here to see what it does.
Python is a very flexible and powerful language, so I'm sure there could be a hundred and one other possible solutions to this problem :)
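A side note not in the original answer: the snippets above are Python 2 (print statement, xrange). On Python 3 the same version would read:

```python
# Python 3: print is a function and xrange was renamed to range.
for num in range(1, 101):
    msg = ''
    if num % 3 == 0:
        msg += 'Fizz'
    if num % 5 == 0:
        msg += 'Buzz'
    if not msg:          # nothing matched: fall back to the number
        msg = str(num)
    print(msg)
```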
Edit: Improve more
There's still something "quite not right" with these lines:
if not msg:
msg += str(num)
IMHO it would be better to do:
for num in xrange(1,101):
msg = ''
if num % 3 == 0:
msg += 'Fizz'
if num % 5 == 0:
msg += 'Buzz'
print msg or num
There! Now with:
print msg or num
is clear that num is the default value to be printed. | {
"domain": "codereview.stackexchange",
"id": 1439,
"tags": "python, beginner, fizzbuzz"
} |
Where might hertz per dioptre actually be useful? | Question: I once came across the strange, artificial unit "hertz per dioptre", which is dimensionally equivalent to "metres per second". Could this unit, by some stretch of the imagination, be used in some artificial situation where the usage implications of both "hertz" and "dioptre" (frequency of periodic events and refractive power, respectively) would make for the ratio to actually be useful?
Answer: In special relativity, hertz per dioptre is an excellent unit for showing the joint invariance of electromagnetic phenomena in the behavior of all types of lenses, reflective or refractive, under the effects of the Lorentz transformation along the axis of motion. I'm not aware of any other unit that links those two domains in quite that way. In the case of refractive lenses with chromatic dispersion, the invariance turns out to be non-trivial and a bit surprising, since it asserts that the atomic materials in a Lorentz compressed lens must maintain a very specific relationship in how they interact with a spectrum of gamma-shifted light frequencies.
Here's how it works for the easier reflective-lens case. First, imagine a sphere 4 meters across with an $f=280$ THz resonant infrared light wave inside. Why 4 meters? Well, I'm trying to use the correct definition of dioptre, which is the reciprocal of the focal length of a refractive or reflective lens, the focal length being the distance it requires to converge parallel light down to a single focal point. In this case, the lens is reflective and has spherical curvature. Looking only at a region small enough (e.g. 2 cm across) to avoid spherical aberration, the focal length of the $d=4$ m sphere is $L=\frac{1}{2}r=\frac{1}{4}d=\frac{1}{4}4=1$ m. So, a 4 m diameter sphere correctly gives a dioptre (curvature) of $\delta=1/L=1/1=1$ D, where D $=m^{-1}$.
Next, accelerate the sphere along its X axis to a velocity of $v=\sqrt{\frac{3}{4}}$ c, which gives a Lorentz factor of $\gamma=2$. That means that both the sphere and the resonant light pattern within it will be compressed to $\frac{1}{2}$ their original lengths along the X axis, from the perspective of a viewer "at rest" relative to the moving sphere.
For the small reflective lens regions around either end of where the X axis crosses the sphere, the pre-acceleration curvature was $\delta_0=1D$ (the zero subscript indicates the rest frame). After acceleration to $\gamma=2$ the sphere becomes an oblate spheroid, and the curvatures of the two reflective lens areas have been reduced to $\delta_1=2D$, where higher dioptre numbers indicate flatter curves. (The proof of that is left as an exercise for the reader, but it's not difficult.)
Now let's examine what happens to the frequency of the light within the sphere. The neat thing about special relativity is that physics must remain invariant for both the observer and the observed system. So, if there were n wavelengths of resonant light crossing the sphere along the X axis prior to it being accelerated, there must also be n wavelengths along that same length after the compression. In other words, the wavelengths of the radiation must also be cut in half along X (only), resulting in twice the frequency as before. That transforms the original X-axis $f_0=280$ THz light of the at-rest sphere into $f_1=560$ THz light in the moving sphere. An observer in the rest frame would see this as bright green.
Observant readers may now be saying "Hey, that can't be right! The Lorentz factor also slows time... so shouldn't the light in the moving sphere be slower and thus less energetic?"
While it is true that time will pass more slowly within the moving sphere, it is not correct to think that this same light will be slower when viewed from the rest frame. For that situation the geometry of the wavelengths wins, and the light looks green. However, a simpler way to think of it is that since the light is being emitted and reflected by an object traveling at $\gamma=2$ (or equivalently $v=\sqrt{\frac{3}{4}}$ c), the ordinary Doppler effect will double its frequency.
(@ColinK has correctly noted that the above explanation glosses over some important complications. Please see his excellent comment for more info. I may try to address that soon.)
Now it's time to put this all together.
The original light and sphere had an eta factor of:
$\eta_0=f_0/\delta_0 = (280 THz)/(1 D) = 280\times{10}^{12}$ HpD
where 1 HpD = 1 Hz/D (Hertz per dioptre).
The moving light and sphere has an eta factor of:
$\eta_1=f_1/\delta_1 = (560 THz)/(2 D) = 280\times{10}^{12}$ HpD.
In other words, the eta factor $\eta$, which relates the Lorentz-transformed electromagnetic waves to the Lorentz-contracted physical mirrors from which they reflect, has remained invariant for this example of $\gamma=2$.
It is not an isolated case. It is easy to show that $\eta$ is a universal invariant of special relativity:
$\forall{v_i}(\eta_i=\frac{f_i}{\delta_i}= C)$
where C is a constant in units of HpD = Hz/D = Hertz per dioptre.
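A tiny numeric sketch of this invariance (not from the original answer), using the answer's convention that both the light frequency and the dioptre number scale by the Lorentz factor $\gamma$; the 280 THz and 1 D starting values are the ones from the example above:

```python
import math

F0, DELTA0 = 280e12, 1.0   # rest-frame frequency (Hz) and curvature (D)

def eta(gamma):
    # Both quantities pick up a factor of gamma, so their ratio
    # (the "eta factor", in Hz per dioptre) is constant.
    return (gamma * F0) / (gamma * DELTA0)

for v in (0.0, math.sqrt(3 / 4), 0.99):       # speeds as fractions of c
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    print(f"gamma = {gamma:.2f}, eta = {eta(gamma):.3e} HpD")
# eta stays at 2.800e+14 HpD for every gamma
```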
Now the remarkable generalization of all of this is that by the same kinds of geometric arguments and application of the "physics must be preserved in both frames" principle, refractive lenses must also fall under the above argument. If a refractive lens has chromatic dispersion (the colored fringes seen in cheap lenses), then the constant C in the above equation will become a frequency-dependent value $C(f)$. Yet the eta invariance remains intact! That is surprising because light dispersion is a pretty complicated phenomenon, yet from the rest frame these messy compressed atoms must nonetheless maintain eta invariance. That is... unexpected.
Thus HpD units not only have real physical meaning, but a meaning that relates directly to the original intent of both the Hertz and dioptre units (versus just being $m/s$ in disguise). This meaning in turn provides an easy way to express an invariant relationship in special relativity that links together the electromagnetic and mechanical Lorentz transformations in an unexpected and non-intuitive fashion.
And finally, despite all the above unexpectedly interesting (to me at least!) SR relationships involved, the HpD unit really did originate as a bit of humor in (as best I could uncover) this xkcd discussion posting back in 2007. So, shrodingersduck from the People's Democratic Republic of Leodensia, wherever you are six years later, I thank you for inadvertently creating an interesting and quite fun opportunity to explore special relativity in a rather unusual context.
Addendum 2013-01-31
The generality of the HpD unit in special relativity can I think be stated even more broadly. So, here goes:
Light frequency, geometric forms, and frequency-dependent refractive indices all change when systems undergo Lorentz transformation, so they are not individually Lorentz invariant. Theorem: If the optical characteristics of an optical system are instead described using HpD (Hertz per Dioptre) and/or its inverse unit DpH (Dioptres per Hertz), the resulting description of its optical properties will remain constant ("eta invariance") regardless of relativistic frame or orientation from which the optical system is analyzed.
That is a theorem only. @ColinK's excellent observation that the Doppler argument I made could be bogus because the shift works differently depending on whether the light is moving with or against the velocity still concerns me. So, I want to look at that a lot more closely and see if I can disprove my own theorem.
Still, wouldn't it be delightful if a unit defined as a joke turned out to be relativistically invariant when the common units for the same phenomena are not?
The other obvious generalization question is this: Does eta invariance (if it exists) apply to other wave phenomena?
And finally, @JoeZeng, I think I misunderstood your question about whether the eta factors (descriptions of optical components using HpD units) are related to the velocity of light. Well, HpD does have dimensional equivalence to a velocity ($m/s$), but if there is a meaningful way to re-interpret an HpD value as a velocity, I sure don't see it. Intriguing question, though... | {
"domain": "physics.stackexchange",
"id": 6240,
"tags": "special-relativity, dimensional-analysis"
} |
Chess game in Python | Question: I have programmed for 2 months, and I began writing a Chess game. I am a beginner programmer in Python, so please assess my code.
class Chess_Board:
def __init__(self):
self.board = self.create_board()
def create_board(self):
board_x=[]
for x in range(8):
board_y =[]
for y in range(8):
board_y.append('.')
board_x.append(board_y)
board_x[7][4] = 'K'
board_x[7][3] = 'Q'
board_x[7][2] = 'B'
board_x[7][1] = 'N'
board_x[7][0] = 'R'
return board_x
class WHITE_KING(Chess_Board):
def __init__(self):
Chess_Board.__init__(self)
self.position_x_WK = 7
self.position_y_WK = 4
self.symbol_WK = 'K'
def move(self):
while True:
try:
print ('give a x and y coordinate for WHITE KING')
destination_x_WK = int(input())
destination_y_WK = int(input())
if self.board[destination_x_WK][destination_y_WK] == '.' :
if ( abs(self.position_x_WK-destination_x_WK) <2 and abs(self.position_y_WK-destination_y_WK) < 2 ):
self.board[self.position_x_WK][self.position_y_WK] = '.'
self.position_x_WK = destination_x_WK
self.position_y_WK = destination_y_WK
self.board[self.position_x_WK][self.position_y_WK] = self.symbol_WK
return self.board
break
else:
print ('your move is invalid, please choose cooridnates again')
continue
except:
pass
class WHITE_QUEEN(Chess_Board):
def __init__(self):
Chess_Board.__init__(self)
self.position_x_WQ = 7
self.position_y_WQ = 3
self.symbol_WQ = 'Q'
def move(self):
while True:
try:
print ('give a x and y coordinate for WHITE QUEEN')
destination_x_WQ = int(input())
destination_y_WQ = int(input())
if self.board[destination_x_WQ][destination_y_WQ] == '.' :
if (destination_x_WQ == self.position_x_WQ or destination_y_WQ==self.position_y_WQ or abs(self.position_x_WQ-destination_x_WQ) == abs(self.position_y_WQ-destination_y_WQ) ):
self.board[self.position_x_WQ][self.position_y_WQ] = '.'
self.position_x_WQ = destination_x_WQ
self.position_y_WQ = destination_y_WQ
self.board[self.position_x_WQ][self.position_y_WQ] = self.symbol_WQ
return self.board
break
else:
print ('your move is invalid, please choose cooridnates again')
continue
except:
pass
class WHITE_ROOK(Chess_Board):
def __init__(self):
Chess_Board.__init__(self)
self.position_x_WR = 7
self.position_y_WR = 0
self.symbol_WR = 'R'
def move(self):
while True:
try:
print ('give a x and y coordinate for WHITE ROOK ')
destination_x_WR = int(input())
destination_y_WR = int(input())
if self.board[destination_x_WR][destination_y_WR] == '.' :
if (destination_x_WR == self.position_x_WR or destination_y_WR==self.position_y_WR ):
self.board[self.position_x_WR][self.position_y_WR] = '.'
self.position_x_WR = destination_x_WR
self.position_y_WR = destination_y_WR
self.board[self.position_x_WR][self.position_y_WR] = self.symbol_WR
return self.board
break
else:
print ('your move is invalid, please choose cooridnates again')
continue
except:
pass
class WHITE_BISHOP(Chess_Board):
def __init__(self):
Chess_Board.__init__(self)
self.position_x_WB = 7
self.position_y_WB = 2
self.symbol_WB = 'B'
def move(self):
while True:
try:
print ('give a x and y coordinate for WHITE BISHOP')
destination_x_WB = int(input())
destination_y_WB = int(input())
if self.board[destination_x_WB][destination_y_WB] == '.' :
if abs(self.position_x_WB-destination_x_WB) == abs(self.position_y_WB-destination_y_WB) :
self.board[self.position_x_WB][self.position_y_WB] = '.'
self.position_x_WB = destination_x_WB
self.position_y_WB = destination_y_WB
self.board[self.position_x_WB][self.position_y_WB] = self.symbol_WB
return self.board
break
else:
print ('your move is invalid, please choose cooridnates again')
continue
except:
pass
class WHITE_KNIGHT(Chess_Board):
def __init__(self):
Chess_Board.__init__(self)
self.position_x_WKN = 7
self.position_y_WKN = 1
self.symbol_WKN = 'N'
def move(self):
while True:
try:
print ('give a x and y coordinate for WHITE KNIGHT')
destination_x_WKN = int(input())
destination_y_WKN = int(input())
if self.board[destination_x_WKN][destination_y_WKN] == '.' :
if abs(self.position_x_WKN-destination_x_WKN)**2 + abs(self.position_y_WKN-destination_y_WKN)**2 == 5 :
self.board[self.position_x_WKN][self.position_y_WKN] = '.'
self.position_x_WKN = destination_x_WKN
self.position_y_WKN = destination_y_WKN
self.board[self.position_x_WKN][self.position_y_WKN] = self.symbol_WKN
return self.board
break
else:
print ('your move is invalid, please choose cooridnates again')
continue
except:
pass
class Engine(Chess_Board):
def __init__(self):
WHITE_KING.__init__(self)
WHITE_QUEEN.__init__(self)
WHITE_ROOK.__init__(self)
WHITE_BISHOP.__init__(self)
WHITE_KNIGHT.__init__(self)
Chess_Board.__init__(self)
def play(self):
print('Please write what figure you choose to move: white_king, white_queen, white_rook, white_bishop'
'or white knight')
while True:
choice=str(input())
if choice == 'white_king':
WHITE_KING.move(self)
break
elif choice == 'white_queen':
WHITE_QUEEN.move(self)
break
elif choice == 'white_bishop':
WHITE_BISHOP.move(self)
break
elif choice == 'white_knight':
WHITE_KNIGHT.move(self)
break
elif choice == 'white_rook':
WHITE_ROOK.move(self)
break
else:
print ('please choose again')
def display(self):
for i in range (8):
for j in range (8):
print (self.board[i][j], end=' ')
print ()
c_engine = Engine()
c_engine.display()
c_engine.play()
c_engine.display()
Answer: This is a lot of work, and I don't have a lot of time, but I thought I'd throw in my two cents.
So, here's what I've got for you:
The Good
Your models are nicely formed. More than a data store, they actually do stuff. This is good practise.
You've compartmentalised the code into objects that are easy to read and follow. Good job.
The Bad
You're violating pep8 all over the place. This is the gold standard for Python development so you really should conform your code to it. Specifically some of the more glaring violations:
Your lines exceed 80 characters a lot
It's print(, not print (
Operators like = are supposed to be surrounded by spaces unless used in a keyword argument, in which case there shouldn't be any spaces.
Your class names are in ALL_CAPS. Don't do that. All caps is meant for constants only.
if statements should end with a : with no spaces to the left or right.
Your variable names need some work
They violate pep8 since you're using all caps in some in whole or in part.
They're sometimes not using words. WKN means nothing to someone who didn't write the code.
Your class names are all caps and have underscores. Again, this is a violation of pep8.
The end of your file has raw logic not wrapped in if __name__ == "__main__":. This means that if someone were to import your file, your program would actually run. This is very bad form.
Too much vertical space. Again, pep8 dictates that there's one blank line before every method, two before a class.
For the most part, it's all pep8 stuff, so that's good news. I didn't actually run the program though, so there may be more that I've missed. I like to go for style & readability first anyway.
You may also want to consider breaking your code out into separate files for readability and to keep the file you're working on short and simple to follow. | {
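To make the repetition point concrete, here is one hedged sketch (Python 3; the class and method names are hypothetical, not drawn from the original post) of how the five near-identical piece classes could collapse into a single base class, since only the name, symbol, start square and legality rule differ between them:

```python
class Piece:
    """Shared move loop; subclasses supply a name, a symbol,
    a start square and a legality rule."""

    def __init__(self, board, name, symbol, x, y):
        self.board, self.name, self.symbol = board, name, symbol
        self.x, self.y = x, y
        board[x][y] = symbol

    def is_legal(self, dx, dy):
        raise NotImplementedError

    def move(self):
        while True:
            print('give a x and y coordinate for', self.name)
            try:
                dest_x, dest_y = int(input()), int(input())
            except ValueError:
                continue
            if (0 <= dest_x < 8 and 0 <= dest_y < 8
                    and self.board[dest_x][dest_y] == '.'
                    and self.is_legal(dest_x - self.x, dest_y - self.y)):
                self.board[self.x][self.y] = '.'
                self.x, self.y = dest_x, dest_y
                self.board[self.x][self.y] = self.symbol
                return
            print('your move is invalid, please choose coordinates again')


class King(Piece):
    def __init__(self, board):
        super().__init__(board, 'white king', 'K', 7, 4)

    def is_legal(self, dx, dy):
        return abs(dx) < 2 and abs(dy) < 2


class Knight(Piece):
    def __init__(self, board):
        super().__init__(board, 'white knight', 'N', 7, 1)

    def is_legal(self, dx, dy):
        return dx * dx + dy * dy == 5
```

The same move loop then serves every piece, and adding the remaining pieces becomes a matter of one small subclass each.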
"domain": "codereview.stackexchange",
"id": 23739,
"tags": "python, beginner, chess"
} |
Can you escape a black hole by going into another (4th) dimension? | Question: I imagine that if there was a 2D black hole on a piece of paper, and something inside the black hole had access to the 3rd dimension, it could just go "upwards" out of the black hole. Could it be possible that something, perhaps a particle, that was inside a black hole could escape it by going into the 4th dimension and back into the 3rd dimension elsewhere in space?
Answer: That all depends on how gravity works in your hypothetical universe. Let's imagine a Flatland black hole.
In flatland (a two dimensional world embedded in a three dimensional one) a two dimensional star collapses and forms a black hole. What happens next? Well, it is at least possible to imagine that in flatland, gravity does not pass out of the 2-d universe into the 3d one. This means that a particle in the 3d world could pass through the disc of the 2d black hole. It would only be "in" the black hole for an instant.
It is equally plausible that gravity causes a curvature of the spacetime not only in the 2-d flatland world, but in the 3d world which contains it. Then the black hole would appear as a sphere, and a particle in the 3d world could not pass through the disc.
There are string theories with more than 3 dimensions. But in these theories, we are not like a flatland universe, because the other dimensions are curled up tighter than a proton. (You can use the metaphor of a long thin straw: it looks one dimensional, but the surface of the straw is in fact 2d. It is flat in one direction but highly curved in another.) In such universes the black hole does pass through all dimensions (as does everything else) and you can't escape it by "going up a dimension".
Even if there was a large flat 4th dimension, that was not affected by gravity, getting into it would be at least as hard as escaping a black hole. Just as a flatlander can't get into the 3d world except with help from that world. | {
"domain": "astronomy.stackexchange",
"id": 5305,
"tags": "black-hole"
} |
Why does taking more readings reduce random error? | Question: So I was tossing a coin
And I did two experiments
Experiment 1:
Tossing the same coin with no fan, with different torque each time, and without much care about the orientation of the coin, 8 times
I got 5 T 3 H
Experiment 2:
Tossing the same coin without a fan, with different torque, but the same orientation of the coin, always heads up while tossing
I got 7 T and 1 H
I know this isn't a large enough data set, but is the conclusion correct?
Taking more readings and averaging reduces random errors because we start doing the experiment with the same habit, thus parameters which are random become more constant and random errors reduce.
Answer: When you add more trials, what actually decreases in experiments is the uncertainty on the average of the trial results. This is called the law of large numbers, and is a fundamental fact of statistics. There are various mathematical proofs of this fact, but what may be more useful is some intuition on why we should expect such a thing.
In each trial of an experiment, what you are actually doing is acquiring information about some random process. Assuming that your experiment is properly controlled, each trial will give you information about the same random process. In this case, the more trials you do, the more total information you have about the process. Having more information should allow you to estimate the outcome of the process more precisely.
An average is a way of estimating the outcome of the process using all of the available information. Therefore, it should make sense that having more trials of a properly-controlled experiment should lead to a more precise average.
In your particular cases, the small number of trials means that you don't have much information about the process you're measuring. As such, the uncertainty on the probability of the coin landing on heads (which is the equivalent of an average in this situation) is going to be large. In particular, for an estimated probability $p$ taken from a sample of $n$ events, the uncertainty in the probability will be on the order of $\sqrt{p(1-p)/n}$.
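As a quick numeric check of that $\sqrt{p(1-p)/n}$ formula (a sketch added here, using only the standard library):

```python
import math

def estimate(successes, n):
    # Estimated probability and its approximate 1-sigma uncertainty.
    p = successes / n
    return p, math.sqrt(p * (1 - p) / n)

for heads in (3, 1):  # experiment 1: 3 heads of 8; experiment 2: 1 of 8
    p, sigma = estimate(heads, 8)
    print(f"p = {p:.3f} +/- {sigma:.3f}")
# p = 0.375 +/- 0.171
# p = 0.125 +/- 0.117
```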
For experiment 1, we have that $p=0.375\pm 0.171$. For experiment 2, we have that $p=0.125\pm0.117$. The uncertainties of these two results overlap (for example, a probability of 0.22 is within the uncertainty range of both results), and so you definitely cannot claim that there is a statistically significant difference between the two results. | {
"domain": "physics.stackexchange",
"id": 52265,
"tags": "measurements, error-analysis, statistics"
} |
Classifying videos with varying length using ConvLSTM2D in tensorflow | Question: I have a collection of videos, where I would like to extract a frame for every second, and then feed them through a ConvLSTM2D for binary classification.
I was under the impression that an LSTM could take varying input sizes, but after many hours of googling it seems like I either need to:
Use padding and masking
Use ragged tensors
Actually use varying input length, but use batch size of 1
I'm not sure how to proceed from here, since I can't find any resources for padding and masking a sequence of images. Ragged tensors are confusing, and I can't find any examples for a sequence of images. When trying to use a batch size of 1, tensorflow still complains that the inputs are not the same size when using model.fit.
The length of the video is actually important, so that's the reason I'm using a variable amount of images, but I could possibly extract a fixed amount of frames.
Any code examples or suggestions appreciated
Answer: I found a solution that works for me.
Since I wanted to avoid using padding and masking, and didn't fully understand ragged tensors, I decided to continue with using varying input lengths.
My training data consists of a list of image stacks between 6-82 frames. When trying to use this directly with model.fit(x=x, y=y, batch_size=1) where x is a list of tensors, tensorflow will complain that the input dimensions have varying size. I thought this didn't matter since the batch size was 1, but apparently tensorflow tries to change the list of image stacks to a tensor.
A way around this, is to pass a training sample for every step using a generator:
class ArtificialDataset(tf.data.Dataset):
    def _generator(samples):
        # Yield one (image_stack, label) pair per step, each kept at
        # its own length, so no padded tensor is ever built.
        for image_stack, output in samples:
            yield (image_stack, output)
And then making sure that the steps per epoch is the size of the data set
training_data = ArtificialDataset(training_samples)
model.fit(training_data, epochs=epochs, steps_per_epoch=len(training_samples))
This way tensorflow never tries to create a tensor with varying inner dimensions. A drawback of this method is that no batching occurs, so training takes a while. In my case this doesn't matter much since the network size and input data already requires me to load very few samples at a time.
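If batching becomes necessary later, one framework-agnostic preparation step is to group the image stacks by frame count, so each group can be batched without padding. A minimal sketch (plain lists stand in for image stacks; the names are mine):

```python
from collections import defaultdict

def bucket_by_length(samples):
    """Group (image_stack, label) pairs by frame count, so that each
    bucket can be batched without any padding or masking."""
    buckets = defaultdict(list)
    for stack, label in samples:
        buckets[len(stack)].append((stack, label))
    return dict(buckets)

samples = [([1, 2, 3], 0), ([4, 5], 1), ([6, 7, 8], 1)]
buckets = bucket_by_length(samples)  # keys are the frame counts 2 and 3
```

Each bucket can then be fed to the model as its own fixed-shape batch.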
An optimization would be to batch same-length videos (or videos with same number of extracted frames). I might do this at a later time, since the implementation of varying batch sizes is too much work right now. | {
"domain": "datascience.stackexchange",
"id": 9130,
"tags": "tensorflow"
} |
The electron configuration and electron density distribution in singlet oxygen | Question: The molecular orbital schemes for singlet ($\mathrm{^1\Delta_g}$) and triplet oxygen ($\mathrm{^3\Sigma_g^-}$) are typically given as shown in the image below.
Figure 1: Molecular orbital schemes of two types of singlet oxygen and triplet oxygen with the highest energy electrons highlighted in red.
It is pretty clear that the dioxygen molecule, if just observing the position of the nuclei, must be $D_\mathrm{\infty h}$ symmetric. This implies that both $\pi^*$ bonds are symmetry equivalent. Therefore, it seems logical that in the $\mathrm{^3\Sigma_g^-}$ triplet state the two orthogonal $\pi^*$ bonds are each populated by a single electron, leading to an overall rotation-symmetric linear molecule.
In the $\mathrm{^1\Delta_g}$ singlet state, this degeneracy seems to be destroyed, just by looking at the MO scheme. We now seem to have one populated $\pi^*$ orbital and one unpopulated one, going by my understanding of this scheme.
What does this imply for the electron density distribution of $\mathrm{^1\Delta_g}$ singlet oxygen? Is it valid to assume that two (arbitrary) opposite sides have a higher electron density than the two other orthogonal sides? How can this be interpreted?
Answer: Probably not a full answer, but hoping to point readers in a useful direction.
If we denote the two $\pi^*$ orbitals as $\pi_x^*$ and $\pi_y^*$ respectively, then the spatial wavefunctions for each state can be written as follows (all lower orbitals are ignored):
$$\begin{align}
{}^1\Sigma_\mathrm g^+ &: 2^{-1/2}[\pi_x^*(1) \pi_y^*(2) + \pi_x^*(2) \pi_y^*(1)] \\
{}^3\Sigma_\mathrm g^- &: 2^{-1/2}[\pi_x^*(1) \pi_y^*(2) - \pi_x^*(2) \pi_y^*(1)] \\
{}^1\!\Delta_\mathrm g &: \begin{cases}\pi_x^*(1) \pi_x^*(2) \\ \pi_y^*(1) \pi_y^*(2) \end{cases}
\end{align}$$
For the ${}^1\!\Delta_\mathrm g$ case, one would expect a spatial degeneracy of 2, because the letter $\Delta$ in the term symbol indicates that the quantum number $|\Lambda| = 2$. Therefore, there is one state with $\Lambda = +2$, and one state with $\Lambda = -2$. On the other hand, the $\Sigma$ term has $|\Lambda| = 0$ and hence is spatially non-degenerate (the $^3\Sigma$ term is triply degenerate due to spin).
(Digression: $\Lambda$ represents the projection of the angular momentum along the internuclear axis. As such, there is no further projection quantum number ("$M_\Lambda$") that can take values $-2, -1, 0, +1, +2$. In comparison, in atoms, $\ell$ indicates the total angular momentum and $m_\ell$ the projection of this angular momentum onto the $z$-axis. For more information about term symbols of diatomic molecules, I suggest reading this link. There are many other sources on the Internet, but many are sloppy with notation, which only leads to confusion down the road.)
So, the MO diagram above - which ties the ${}^1\!\Delta_\mathrm g$ state to only one wavefunction - is incomplete. The ${}^1\!\Delta_\mathrm g$ state corresponds to two possible wavefunctions, which are symmetry-equivalent; there is no "preference" for the $x$-axis over the $y$-axis.
If you want to go further than what I've written, then I'd point you to the case of the boron atom which I asked about earlier, which is exactly analogous to this. The point about boron is that it has one electron in the 2p subshell. Does this go into the $\mathrm p_x$, $\mathrm p_y$, or $\mathrm p_z$ orbital? In the case of dioxygen, do the paired electrons go into the $\pi_x^*$ or $\pi_y^*$ orbital?
As you said, it does not make sense for us to assign the electron(s) to any of the orbitals in particular, as that leads to an asymmetric electron density distribution. However, I must admit that I don't fully understand the CASSCF calculation there, and I never quite got round to following up on that question, so you'll have to ask somebody else about that. | {
"domain": "chemistry.stackexchange",
"id": 8950,
"tags": "electrons, molecular-orbital-theory, electronic-configuration"
} |
Calculating Electric fields of a line charge: where does this term come from? | Question: The electric field at a point $P$ from a line of charge q is defined as the following:
$$dE_x=\frac{1}{4\pi\epsilon_0}\frac{\rho dx}{r^2}\frac{x}{r}$$
$$dE_y=\frac{1}{4\pi\epsilon_0}\frac{\rho dx}{r^2}\frac{y}{r}$$
where $r$ is the distance from $dx$ to point $P$.
Where does the term $\frac xr$ come from in the equation for the $x$ component of the $E$ field?
Answer: This answer might appear a bit needlessly complicated but the process generalises nicely to more complicated questions you may encounter. If anything is unclear here just let me know. If you're already familiar with the mechanics of displacement vectors you can skip to the end.
I considered the following setup to solve this problem. I've only shown a segment of the line of charge, where I've said the charge $q = \rho{dx}$, where $\rho$ is the charge density per unit length.
$\vec{r}_P$ is the displacement vector from the origin to the point P.
$\vec{r}_q$ is the displacement vector from the origin to the charge q.
$\hat{u}$ is the unit vector pointing from the charge q to the point P. This is the direction the electric field points in.
The electric field at $\vec{r}_P$ due to the charge q at $\vec{r}_q$ is the following:
$$ d\vec{E} = \frac{1}{4\pi \epsilon_0}\frac{\rho dx}{|\vec{r}_P - \vec{r}_q|^2}\hat{u}$$
One possible source of confusion here can be the expression $|\vec{r}_P - \vec{r}_q|^2$. This expression in fact first gives us the displacement vector pointing from q to P and then we find its magnitude and square it. This will give us exactly the squared distance between q and P. If you're unsure about why this works consider the following diagram showing vector addition:
So just to reiterate, $\vec{r}_q + \vec{r}_{qP} = \vec{r}_P$, so then we can rearrange this and find $\vec{r}_{qP} = \vec{r}_{P} - \vec{r}_q$, exactly as we stated above.
The last remaining piece of the puzzle is the unit vector $\hat{u}$ which will exactly explain where your $\frac{x}{r}$ and $\frac{y}{r}$ come from. The unit vector is defined as follows, where we normalise it to give us a magnitude of 1.
$$ \hat{u} = \frac{\vec{r}_P - \vec{r}_q}{|\vec{r}_P - \vec{r}_q|}$$
If we now write down $\vec{r}_P$ and $\vec{r}_q$ in terms of the $\hat{i}$ and $\hat{j}$ basis vectors, where y and x are the magnitudes of the respective displacement vectors:
$$ \vec{r}_P = y\hat{j}$$
$$ \vec{r}_q = x\hat{i}$$
Finally we see:
$$ \hat{u} = \frac{\vec{r}_P - \vec{r}_q}{|\vec{r}_P - \vec{r}_q|} = \frac{y\hat{j} - x\hat{i}}{|\vec{r}_P - \vec{r}_q|}$$
Our electric field is now written as:
$$ d\vec{E} = \frac{1}{4\pi \epsilon_0}\frac{\rho dx}{|\vec{r}_P - \vec{r}_q|^2}\frac{y\hat{j} - x\hat{i}}{|\vec{r}_P - \vec{r}_q|}$$
If you look at the x and y components of the electric field (coefficients of $\hat{i}$ and $\hat{j}$ respectively):
$$ dE_x = -\frac{1}{4\pi \epsilon_0}\frac{\rho dx}{|\vec{r}_P - \vec{r}_q|^2}\frac{x}{|\vec{r}_P - \vec{r}_q|}$$
$$ dE_y = \frac{1}{4\pi \epsilon_0}\frac{\rho dx}{|\vec{r}_P - \vec{r}_q|^2}\frac{y}{|\vec{r}_P - \vec{r}_q|}$$
This is exactly the decomposition of the electric field at point P into its x and y components; notice I've dropped the vector notation for the electric field now that we're considering each component separately.
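As a numerical sanity check of these component formulas, one can sum the contributions over a finite line from $-L$ to $L$ (a sketch with $1/4\pi\epsilon_0$ set to 1 and P placed at $(0, y)$; all names are mine):

```python
import math

def field_at_P(y_P, rho=1.0, L=1.0, n=10001):
    """Sum the dE_x and dE_y contributions from a line of charge on
    the x-axis spanning [-L, L], observed at P = (0, y_P)."""
    dx = 2 * L / n
    Ex = Ey = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * dx            # midpoint of each segment
        r = math.hypot(x, y_P)             # |r_P - r_q|
        Ex += -rho * dx / r**2 * (x / r)   # note the minus sign
        Ey += rho * dx / r**2 * (y_P / r)
    return Ex, Ey

Ex, Ey = field_at_P(0.5)
```

The x-contributions cancel pairwise by symmetry, so Ex comes out essentially zero, while Ey agrees with the analytic value $2\rho L/(y\sqrt{L^2+y^2})$.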
This matches what you provided in your question (I've just used some extra vector notation but if you define $r = |\vec{r}_P - \vec{r}_q|$ then we have a match). My expression for the x component of the electric field is negative, however this will be positive if you consider the left hand plane (x < 0). So in fact integration over the whole line charge will result in a net zero x component for the electric field as they cancel. | {
"domain": "physics.stackexchange",
"id": 77224,
"tags": "electromagnetism, electrostatics, charge"
} |
Reactivity of Alkenes with HBr | Question: I am new to organic chemistry (the world of reactions). I got one question on my test, but when I saw it, I had no proper logic to tick the correct answer. When I looked at the solution, it only said, "The more substituted alkene is more reactive." But how are they more reactive? I mean, if they are substituted with alkyl groups, then they should get more stable (due to the +I inductive effect) and should not react. I think I am wrong somewhere, and when I searched on the internet, I found no proper logic for this question.
So, I just want to know the correct logic and mechanism for this question. Please help.
Answer: What you should be looking at is the stability of the carbocation intermediate formed when the strong acid $\ce{HBr}$ protonates the alkene. This stability is more sensitive to alkyl substitution than the stability of the uncharged alkene, so the net effect is that alkyl substitution lowers the activation energy versus having no substitution. Ergo a faster reaction, e.g.
2,3-dimethyl-but-2-ene (two methyl groups at each double-bonded carbon, tertiary cation when protonated)
$>$ but-2-ene (one methyl group, secondary cation)
$>$ ethene (no substitution, primary cation). | {
"domain": "chemistry.stackexchange",
"id": 16239,
"tags": "organic-chemistry, reaction-mechanism, hydrocarbons, hyperconjugation, inductive-effect"
} |
Making a word processor | Question: The word processor I've had in mind would be similar to Microsoft Word and OpenOffice. I am trying to keep my logic separated from the user Interface and that separated from the controller, basically using the MVC (Model View Controller).
Other things I would like reviewed would be code layout: should I try to abstract the code more? Am I using the correct writing surface (JTextArea) so that I can later implement a font style, size change on run time? Also, should I be doing something about thread safety (I understand that JFrames are not thread safe, and I am going to be honest and say that I don't fully understand what this really means, but I am sure it has to do with a single thread running for the user input, graphics and business logic).
Controller:
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.io.File;
public class ProcessEvents {
private WordFrame frame = new WordFrame();
private DataStuff data = new DataStuff();
private DialogBoxes dialogs = new DialogBoxes();
private boolean fileSaved;
String fileName = "";
int fontSize = 0;
public ProcessEvents(WordFrame frame, DataStuff data){
this.frame = frame;
this.data = data;
this.frame.addListener(new wordProcessListener());
}
class wordProcessListener implements ActionListener{
@SuppressWarnings("static-access")
@Override
public void actionPerformed(ActionEvent e) {
if(e.getSource().equals(frame.openMenuItem)){
frame.fileChooser.showOpenDialog(frame);
File f = frame.fileChooser.getSelectedFile();
System.out.println("Command Executed: open");
data.loadFile(f.getAbsoluteFile());
if(data.showText() != null){
System.out.println(data.showText());
frame.textArea.append(data.showText().toString());
}
}
if(e.getSource().equals(frame.FontMenuItem)){
System.out.println("font");
}
if(e.getSource().equals(frame.exitMenuItem)){
dialogs.getConfirmDialog("exitWithoutSave");
}
if(e.getSource().equals(frame.saveMenuItem)){
frame.fileChooser.showSaveDialog(null);
File f = frame.fileChooser.getSelectedFile();
String text = frame.textArea.getText();
data.saveFile(f.getAbsolutePath()+".txt", text);
System.out.println(f.getName());
fileSaved = true;
}
}
}
}
Model:
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
public class DataStuff {
private File file;
String text;
String name;
private File saveFile;
int counter = 0;
FileInputStream fis = null;
FileOutputStream fout = null;
StringBuilder sb = new StringBuilder(4096);
int count = 0;
public void loadFile(File fileName){
this.file = fileName;
try{
fis = new FileInputStream(file);
while ((counter = fis.read()) != -1) {
System.out.print((char) counter);
sb.append((char) counter);
}
}
catch(IOException ex){
System.out.println("file couldn't be opened, or was incorrect or you clicked cancel");
}
finally {
try {
if (fis != null)
fis.close();
} catch (IOException ex) {
ex.printStackTrace();
}
}
}
public StringBuilder showText(){
return sb;
}
public void saveFile(String name, String text) {
this.name = name;
try{
fout = new FileOutputStream(name);
fout.write(text.getBytes());
System.out.println("file saving worked");
}
catch(IOException e){
System.out.println("File failed to save or something went horribly wrong");
}
}
}
View:
import java.awt.Font;
import java.awt.event.ActionListener;
import javax.swing.ImageIcon;
import javax.swing.JFileChooser;
import javax.swing.JFrame;
import javax.swing.JMenu;
import javax.swing.JMenuBar;
import javax.swing.JMenuItem;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;
public class WordFrame extends JFrame{
/**
*
*/
private static final long serialVersionUID = 1L;
private JMenuBar menubar;
private JMenu fileMenu, editMenu, viewMenu;
JMenuItem saveMenuItem, openMenuItem, newMenuItem, exitMenuItem, FontMenuItem;
JTextArea textArea = new JTextArea(1000,900);
private int width = 1280, height = 980;
private JScrollPane scrollbar = new JScrollPane(textArea);
JFileChooser fileChooser = new JFileChooser();
private int textHeight = 12;
private Font yeah = new Font(Font.SANS_SERIF, 2, textHeight);
public WordFrame(){
setUI();
addMenuBar();
textArea.setFont(yeah);
}
public void setUI(){
this.setTitle("Word Processor");
this.setIconImage(new ImageIcon(this.getClass().getResource("Bridge.jpg")).getImage());
this.setSize(width, height);
this.setLocation(0,0);
this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
this.add(scrollbar);
}
public void addMenuBar(){
menubar = new JMenuBar();
fileMenu = new JMenu(" File ");
editMenu = new JMenu("Edit ");
viewMenu = new JMenu("View ");
newMenuItem = new JMenuItem("New");
fileMenu.add(newMenuItem);
fileMenu.addSeparator();
fileMenu.setMnemonic('f');
openMenuItem = new JMenuItem("Open");
fileMenu.add(openMenuItem);
saveMenuItem = new JMenuItem("Save");
fileMenu.add(saveMenuItem);
fileMenu.addSeparator();
exitMenuItem = new JMenuItem("Exit");
fileMenu.add(exitMenuItem);
FontMenuItem = new JMenuItem("Font");
editMenu.add(FontMenuItem);
menubar.add(fileMenu);
menubar.add(editMenu);
menubar.add(viewMenu);
this.setJMenuBar(menubar);
}
public void setFontSize(int i){
this.textHeight = i;
}
public void addListener(ActionListener listener){
openMenuItem.addActionListener(listener);
exitMenuItem.addActionListener(listener);
saveMenuItem.addActionListener(listener);
}
}
Main class:
public class Main {
/*
* @Version: 0.002
* not much in terms of commenting but a lot of this stuff is very obvious
*/
public Main(){
WordFrame mainFrame = new WordFrame();
DataStuff data = new DataStuff();
@SuppressWarnings("unused")
ProcessEvents process = new ProcessEvents(mainFrame, data );
mainFrame.setVisible(true);
}
public static void main(String args[]){
@SuppressWarnings("unused")
Main m = new Main();
}
}
Answer: At face value the code looks OK. I want to focus on one area, though: File Management
You use FileInputStream and FileOutputStream to read and write your files.
Input/Output streams are designed for byte data. You do not have bytes, you have characters.
But, you are trying to shoe-horn these characters into single byte values, and this will cause all sorts of problems for non-ascii characters.... This code here:
fis = new FileInputStream(file);
while ((counter = fis.read()) != -1) {
System.out.print((char) counter);
sb.append((char) counter);
}
is the problem. fis.read() reads just a single byte, and then you shoe-horn that into a char... which it may not be.
ALSO ... doing it that way is probably the slowest possible way to read a file ... ;-)
Readers/Writers are designed for characters, and they do all the smarts of character encoding for you.
By default, Readers/Writers will use the platform encoding for your files. I recommend that you force them to use the UTF-8 encoding so that you have consistency from one platform to the next.
Additionally, you can save a fair amount of error handling if you use the Java 7 try-with-resources structures.
The write code will be something like:
public void saveFile(String name, String text) {
this.name = name;
try (Writer writer = new BufferedWriter(new OutputStreamWriter(
new FileOutputStream(name), StandardCharsets.UTF_8))) {
writer.write(text);
writer.flush();
System.out.println("file saving worked");
} catch(IOException e){
// do at least a little more than just print a useless message
e.printStackTrace();
System.out.println("File failed to save or something went horribly wrong");
}
}
and the read code will look something like:
public void loadFile(File fileName){
// this.file = fileName;
try (BufferedReader reader = new BufferedReader(new InputStreamReader(
new FileInputStream(fileName), StandardCharsets.UTF_8))) {
char[] buffer = new char[8192]; // decent size buffer.
sb.setLength(0);
int len = 0;
while ((len = reader.read(buffer)) >= 0) {
sb.append(buffer, 0, len);
}
} catch(IOException ex){
// do at least a little more than just print a useless message
ex.printStackTrace();
System.out.println("file couldn't be opened, or was incorrect or you clicked cancel");
}
} | {
"domain": "codereview.stackexchange",
"id": 6281,
"tags": "java, object-oriented, mvc"
} |
How does cloud_to_laserscan.cpp work | Question:
I have some more questions regarding cloud_to_scan.cpp. The post was previously posted (http://answers.ros.org/question/1561/doubts-about-pointcloud-to-laser-scan) but it was not answered. I am in need of urgent help.
Qn 1) What is the following statement trying to do? I'm not that sure: if (output->ranges[index] * output->ranges[index] > range_sq) output->ranges[index] = sqrt(range_sq);
Qn 2) Are the x, y and z in metres? Or what are the units for the x, y and z?
Qn 3) Why is the angle = -atan2(x,z) and not angle = -atan2(z,x)?
Originally posted by lakshmen on ROS Answers with karma: 101 on 2011-07-25
Post score: -2
Answer:
All your questions can be answered by reviewing REP 103 Standard Units of Measure and Coordinate Conventions
Originally posted by tfoote with karma: 58457 on 2011-07-25
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Lorenz on 2011-07-27:
At least 3 is also answered. Look at the coordinate axis definitions in the REP and the definition of the tangent function as found in your favorite math book. 1 apparently just limits range readings to some maximum.
Comment by lakshmen on 2011-07-27:
sorry, from REP103, qn 2 was answered but not qn 1 and 3... or maybe i can't find it... need more explanation. sorry. | {
"domain": "robotics.stackexchange",
"id": 6255,
"tags": "ros, pointcloud-to-laserscan"
} |
Is the fact that sound travels faster in metal than in water related to the fact that hitting metal is louder than hitting water? | Question: I am mostly a mathematician but have some physics background. I was tutoring high school physics, and we were covering the speed of sound in various mediums. He noticed that the speed of sound in metal is greater than in water. He said if you hit metal, it's usually pretty loud where if you hit water, it's usually not as loud. "Hitting air" is usually silent and air has an incredibly low speed of sound. Is there a relationship between these two?
Answer: I think there is a potentially dangerous mixture of the principles:
speed of sound propagation is a feature of material (i.e. material related magnitude)
"loudness of the hit" strongly depends on a shape of a specific object (i.e. object related "magnitude")
Intensity of the radiated sound depends on magnitudes that might be related to speed of sound, but that may easily be a misleading clue. Hitting a metal coin and hitting a metal church bell is really not of the same loudness. | {
"domain": "physics.stackexchange",
"id": 29299,
"tags": "acoustics"
} |
Would obstacles cause light diffraction if it has the same refractive index as the surrounding material? | Question: Restatement of title:
Would an obstacle still cause diffraction of light if it has the same refractive index as the surrounding material?
Answer: Generally speaking we use the term diffraction when we have some apparatus that blocks part of the light. So for example a diffraction grating absorbs light except at the slits in it. When no light is being absorbed we normally use the term refraction. They're both the same physics, but the distinction is often convenient. Your question implies the obstacle doesn't absorb light, so your question is about refraction not diffraction.
Anyhow, refraction relies on different regions of the light encountering media with different refractive indices. The differing refractive index causes a path-dependent phase change in the light that leads to refraction. In the case you describe where the refractive index is everywhere constant there will be no refraction.
This is used in the measurement of refractive indices of powders. If you do first year crystallography at university you are likely to do a practical where you measure the refractive index of a powdered mineral by placing the powder in liquids of different refractive indices and measuring the light absorption. When the refractive index of the powder is the same as the refractive index of the liquid the powder becomes virtually invisible and the light absorption falls to almost zero. | {
"domain": "physics.stackexchange",
"id": 21622,
"tags": "optics"
} |
Does time pass fastest in isolated, resting space? | Question: While it is fairly established that both fast movement and the presence of gravity make time pass slower as compared to a system at rest / free of gravity, does that mean that there is no way for time to pass faster than in vacuum, or does general relativity also have "faster" metrics?
To be more precise, is there any frame into which one could go for a while, and on returning to vacuum less time would have passed there?
Answer: You need to be a little careful with your definition of vacuum. For instance, inside a spherical shell of matter spacetime is flat; however, time still runs more slowly than it does outside the shell. I'm assuming you have no such trickery in mind, and by vacuum you mean the usual concept of far (effectively infinitely far) from any matter.
To a good approximation a time interval measured at some other point in the universe is related to the time interval measured by you by:
$$ \Delta t_0 = \Delta t \left( 1 + \frac{2 \Phi}{c^2}\right)^{-1/2} $$
where $t_0$ is your time, $t$ is the time at the other point and $\Phi$ is the Newtonian gravitational potential at that point (relative to you). So for example, relative to infinity at some distance $r$ from the Earth the gravitational potential is:
$$ \Phi = -\frac{GM}{r} $$
and therefore:
$$ \Delta t_0 = \Delta t \left( 1 - \frac{2 GM}{c^2r}\right)^{-1/2} $$
and time runs more slowly as you get nearer to the Earth. Relative to the vacuum in the sense you mean, gravitational potentials are (as far as we know) always negative, so time always runs slow compared to flat spacetime.
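To get a feel for the size of the effect, here is a rough numerical sketch for a clock at the Earth's surface (constants approximate; the setup assumes the formula above with the observer at infinity):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m

def dilation_factor(r):
    """dt_infinity / dt_local for a clock at distance r from Earth."""
    return (1 - 2 * G * M_earth / (c**2 * r)) ** -0.5

f = dilation_factor(R_earth)
seconds_per_year = 365.25 * 24 * 3600
lag = (f - 1) * seconds_per_year   # extra seconds elapsed at infinity per local year
```

The factor exceeds 1 by about $7 \times 10^{-10}$, i.e. a clock at infinity gains roughly 0.02 seconds per year relative to one on the Earth's surface.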
I guess the question is whether there are any special cases e.g. if exotic matter exists or if spacetime has some non-trivial topology. However I know of none. | {
"domain": "physics.stackexchange",
"id": 38094,
"tags": "general-relativity, time, vacuum"
} |
Is there a better than linear lower bound for factoring and discrete log? | Question: Are there any references that provide details about circuit lower bounds for specific hard problems arising in cryptography such as integer factoring, prime/composite discrete logarithm problem and its variant over group of points of elliptic curves (and their higher dimensional abelian varieties) and the general hidden subgroup problem?
Specifically do any of these problems have more than a linear complexity lower bound?
Answer: @Suresh: following your advice, here is my "answer". The status of circuit lower bounds is quite depressing. Here are the "current records":
$4n-4$ for circuits over $\{\land,\lor,\neg\}$, and $7n-7$ for circuits over $\{\land,\neg\}$ and $\{\lor,\neg\}$, computing $\oplus_n(x)=x_1\oplus x_2\oplus\cdots\oplus x_n$; Redkin (1973). These bounds are tight.
$5n-o(n)$ for circuits over the basis with all fanin-2 gates, except the parity and its negation; Iwama and Morizumi (2002).
$3n-o(n)$ for general circuits over the basis with all fanin-2 gates; Blum (1984). Arist Kojevnikov and Sasha Kulikov from Petersburg have found a simpler proof of a $(7/3)n-o(1)$ lower bound. The advantage of their proof is its simplicity, not numerical. Later they gave a simple proof of a $3n-o(1)$ lower bound for general circuits (all fanin-2 gates are allowed), albeit for very complicated functions: affine dispersers. Papers are online here.
$n^{3-o(1)}$ for formulas over $\{\land,\lor,\neg\}$; Hastad (1998).
$\Omega(n^2/\log n)$ for general fanin-$2$ formulas, $\Omega(n^2/\log^2 n)$ for deterministic branching programs, and $\Omega(n^{3/2}/\log n)$ for nondeterministic branching programs; Nechiporuk (1966).
So, your question "Specifically do any of these problems have more than a linear complexity lower bound?" remains widely open (in the case of circuits). My appeal to all young researchers: go forward, these "barriers" are not unbreakable! But try to think in a "non-natural way", in the sense of Razborov and Rudich. | {
"domain": "cstheory.stackexchange",
"id": 1064,
"tags": "cc.complexity-theory, complexity-classes, cr.crypto-security, circuit-complexity"
} |
Switch statements in Rock-Paper-Scissors game | Question: I am very much new to design patterns. I'm trying to modify the Rock-Paper-Scissors game from an example book, where I try to add various design patterns. But I am encountering two similar switch statements which I think contain duplication, and I don't have any idea how to remove it.
An abstract class:
abstract class UserVsComp {
protected static Item item;
abstract protected Item getItem() throws WrongMove;
}
Here are the two switch statements (the first class is for the user's move):
class UserMove extends UserVsComp {
private String move;
public UserMove(String move) { this.move = move; }
public Item getItem() throws WrongMove {
switch (choiceValue(this.move)) {
case 0 : item = new Paper(); break;
case 1 : item = new Scissors(); break;
case 2 : item = new Rock(); break;
default: throw new WrongMove("Wrong move. Game end");
}
return item;
}
The second class is for the computer's move:
class ItemGenerator extends UserVsComp {
public Item getItem() throws WrongMove {
switch((int) (Math.random() * 2)) {
case 0: item = new Paper(); break;
case 1: item = new Scissors(); break;
case 2: item = new Rock(); break;
default: throw new WrongMove("Wrong move. Game end");
}
return item;
}
}
The two switch statements have the same cases (duplication, I guess). I'd like to know if there is any way to remove this duplication.
Answer: Instead of creating a new Item each time you can just use an enum:
public enum Move{
Paper{
public boolean canBeat(Move other){
return other==Rock;
}
},
Scissor{
public boolean canBeat(Move other){
return other==Paper;
}
},
Rock{
public boolean canBeat(Move other){
return other==Scissor;
}
};
public abstract boolean canBeat(Move other);
public boolean losesTo(Move other){
return other.canBeat(this);
}
}
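The computer's switch can disappear too; a sketch (with a simplified standalone copy of the enum so it compiles on its own; the class name is mine):

```java
import java.util.Random;

// Simplified stand-in for the Move enum defined above.
enum Move { Paper, Scissor, Rock }

class ComputerPlayer {
    private static final Random RNG = new Random();

    // Uniform over all three moves. As a side note, the question's
    // (int) (Math.random() * 2) can only return 0 or 1, so case 2
    // (Rock) would never actually be chosen.
    static Move randomMove() {
        Move[] moves = Move.values();
        return moves[RNG.nextInt(moves.length)];
    }
}
```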
Then you can keep them in an array (like what is returned by Move.values()) and index into that.
public Move getItem() throws WrongMove {
    int choice = choiceValue(this.move);
    if (choice < 0 || choice >= Move.values().length)
        throw new WrongMove("Wrong move. Game end");
    else return Move.values()[choice];
} | {
"domain": "codereview.stackexchange",
"id": 11984,
"tags": "java, object-oriented, design-patterns"
} |
TQBF PSPACE-COMPLETE : Why this algorithm is exponential but Savitch's not? | Question: So this is a question pertaining to proofs that a language is $PSPACE$-complete (for TQBF, for example). The idea is to first prove that $L$ is in $PSPACE$ (the easy part) and next to prove it is $PSPACE$-complete. The latter requires demonstrating an algorithm which computes $L$ in polynomial space. This is usually achieved by having recursive calls such that space is re-used.
In the TQBF proof, the equation $\phi_{i+1}(A,B) = \exists Z\, [\phi_i(A,Z) \land \phi_i(Z,B)]$ (where $Z$ is the midpoint) is the default recursive relation for computing TQBF truth. In any standard proof, it is said that $\phi_i(A,B)$ is computed two times, and for $m$ nodes this formula explodes; hence, another recursive relation should be used to bound it.
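For reference, the standard way to keep the formula small (as in Sipser's textbook proof) is to fold the two recursive copies into a single one with a universal quantifier:

$$\phi_{i+1}(A,B) \;=\; \exists Z\; \forall (X,Y) \in \{(A,Z),\,(Z,B)\}:\ \phi_i(X,Y),$$

so each level adds only polynomially many symbols instead of doubling the formula.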
However in Savitch's proof, the recursive relation is: for $Path(a,b,t)$, if $Path(a,mid,t-1)$ AND $Path(mid,b,t-1)$ accept, then ACCEPT. In the proof, it is stated that this relation reuses space.
My question is: why does space explode for the TQBF relation, while for Path it is reused? Both of these relations look more or less the same to me, because both refer to level-$(i-1)$ instances and will need space to store them.
Answer: In the proof of the $\textit{PSPACE-}$completeness of TQBF, we need to construct a formula of some specific type. The recursive midpoint algorithm is easy to implement (and Savitch's Theorem requires finding the $\textit{algorithm}$), while the same idea, if performed in a similar manner, results in an exponential formula (and the completeness proof requires the $\textit{formula}$).
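To make the space-reuse point concrete, here is a small Python sketch of the Savitch-style recursion (helper names are mine):

```python
def _nodes(adj):
    """All vertices mentioned in the adjacency dict."""
    nodes = set(adj)
    for targets in adj.values():
        nodes.update(targets)
    return nodes

def path(adj, a, b, t):
    """Is there a path from a to b of length at most 2**t?
    Only one chain of (a, b, t) frames is alive at any moment: the
    second recursive call runs after the first has returned and freed
    its space, so the total space is O(t * log n), not O(2**t)."""
    if t == 0:
        return a == b or b in adj.get(a, ())
    return any(path(adj, a, mid, t - 1) and path(adj, mid, b, t - 1)
               for mid in _nodes(adj))
```

The formula in the completeness proof, by contrast, must materialise both recursive copies as subformulas that exist at the same time, which is why its size (unlike the algorithm's space) doubles at each level.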
More generally, suppose there's some language $\textit{L}$, an optimal algorithm $A$ deciding $L$, and the smallest formula (maybe quantified) $\varphi$, s.t. $\varphi(x) = 1 \Leftrightarrow x \in L$. For me, it's not very clear why $A\in \textit{PSPACE}$ would imply that $\varphi$ is small (I only see it as a consequence of the $\textit{PSPACE}-$completeness theorem) | {
"domain": "cs.stackexchange",
"id": 14425,
"tags": "complexity-theory, space-complexity, nondeterminism"
} |
Is there something similar to Noether's theorem for discrete symmetries? | Question: Noether's theorem states that, for every continuous symmetry of an action, there exists a conserved quantity, e.g. energy conservation for time invariance, charge conservation for $U(1)$. Is there any similar statement for discrete symmetries?
Answer: For continuous global symmetries, Noether theorem gives you a locally conserved charge density (and an associated current), whose integral over all of space is conserved (i.e. time independent).
For global discrete symmetries, you have to distinguish between the cases where the conserved charge is continuous or discrete. For infinite symmetries like lattice translations the conserved quantity is continuous, albeit a periodic one. So in such case momentum is conserved modulo vectors in the reciprocal lattice. The conservation is local just as in the case of continuous symmetries.
In the case of finite group of symmetries the conserved quantity is itself discrete. You then don't have local conservation laws because the conserved quantity cannot vary continuously in space. Nevertheless, for such symmetries you still have a conserved charge which gives constraints (selection rules) on allowed processes. For example, for parity invariant theories you can give each state of a particle a "parity charge" which is simply a sign, and the total charge has to be conserved for any process, otherwise the amplitude for it is zero. | {
"domain": "physics.stackexchange",
"id": 78736,
"tags": "mathematical-physics, symmetry, group-theory, noethers-theorem, discrete"
} |