anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Implications of null-fields for operator algebra in CFT | Question: In the context of Conformal Field Theory (CFT), I have a primary field $\phi_{(r,s)}$ with a level 1 null-descendant, i.e. $(r,s)=(1,1)$ and $h_{(1,1)}=0$. My goal is to understand how this condition constrains the operator algebra, especially regarding the OPE (Operator Product Expansion).
Looking at the 3-point function, I see the following:
$$\langle L_{-1} \phi_{(1,1)} \phi_1 \phi_2 \rangle = \partial_z \langle \phi_{(1,1)} \phi_1 \phi_2 \rangle, \tag{1}$$
but we know that:
$$L_{-1} \phi_{(1,1)} = 0, \tag{2}$$
which in turn means
$$\langle L_{-1} \phi_{(1,1)} \phi_1 \phi_2 \rangle = 0. \tag{3}$$
We also know that in CFT, the general form of a 3-pt function is (with $h_{(1,1)}=0$):
$$\langle \phi_{(1,1)} \phi_1 \phi_2 \rangle = c_{12}^h (z-z_1)^{-h_1+h_2} (z_1-z_2)^{-h_1-h_2} (z_2-z)^{-h_2+h_1}. \tag{4}$$
Thus equating $(1)$ with $(3)$, and using $(4)$, we find:
$$\left( \frac{-h_1+h_2}{z-z_1} - \frac{-h_2+h_1}{z_2-z} \right) \langle \phi_{(1,1)} \phi_1 \phi_2 \rangle = 0, \tag{5}$$
and therefore that $h_1 = h_2$ or $\langle \phi_{(1,1)} \phi_1 \phi_2 \rangle = 0$.
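Incidentally, the differentiation behind eq. $(5)$ can be machine-checked; here is a quick sympy sketch (my addition, not part of the original derivation) that differentiates eq. $(4)$ and compares against the bracket in eq. $(5)$:

```python
import sympy as sp

z, z1, z2, h1, h2, c = sp.symbols('z z1 z2 h1 h2 c')

# Eq. (4): the 3-point function with h_{(1,1)} = 0
G = c * (z - z1)**(-h1 + h2) * (z1 - z2)**(-h1 - h2) * (z2 - z)**(-h2 + h1)

# Eq. (5): d/dz G should equal the bracket times G
bracket = (-h1 + h2)/(z - z1) - (-h2 + h1)/(z2 - z)
assert sp.simplify(sp.diff(G, z)/G - bracket) == 0
```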
So far so good. Now the claim is that, as a consequence, we can write the following OPE:
$$\phi_{(1,1)} \phi_1 = \sum c_{12}^h \phi_2 + \text{descendants} \tag{6}$$
when the condition $h_1 = h_2$ is satisfied. I don't see that. Can someone explain how to obtain eq. $(6)$ from eq. $(5)$?
Thanks in advance!
Edit:
Here is a picture of my professor's notes; maybe his intentions are clearer that way?
Answer: The note is simply saying this: the three-point function
$$
\langle \phi_{(1,1)} \phi_1 \phi_2 \rangle = c^{h = 0}_{12}\, (z-z_1)^{-h_1+h_2} (z_1-z_2)^{-h_1-h_2} (z_2-z)^{-h_2+h_1} \,,
$$
satisfies a differential equation that implies
$$
c_{12}^0 \,(h_1 - h_2) = 0\,.
$$
So this means either $c_{12}^0 = 0$ or $h_1 = h_2$. The first trivial consequence is that in the OPE $\phi_1 \times \phi_2$
$$
\phi_1 \times \phi_2 = \sum_j c^{h_j}_{12} \,\phi_j
$$
you can find the operator $\phi_{(1,1)}$ (i.e. $c^{0}_{12} \neq 0$) only if $h_1 = h_2$.
But you can say other things: using the associativity of the OPE you can flip this around and look at the OPE $\phi_{(1,1)} \times \phi_1$
$$
\phi_{(1,1)} \times \phi_1 = \sum_j c_{0 1}^j \phi_j \underset{\mathrm{Associativity}}{=}\sum_j c_{1j}^0 \,\phi_j \,.
$$
If $h_j = h_1$ (for example if $\phi_j$ is $\phi_2$ or $\phi_1$ itself), then the operator appears in the OPE, otherwise it does not.
Indeed $\phi_{(1,1)}$ can be thought of as an "identity element" in the OPE algebra: $\phi_j \times \phi_{(1,1)} \sim \phi_j$, which makes this thing very intuitive. | {
"domain": "physics.stackexchange",
"id": 59780,
"tags": "operators, conformal-field-theory"
} |
Adding new dependency after running catkin_create_pkg | Question:
How can I add a new dependency to my package, after the package has been created?
In my case:
1st step: catkin_create_pkg beginner_tutorials std_msgs roscpp
...
Next step: I want to add a dependency to pcl and pcl_ros
Is there an easy way to achieve this?
After editing CmakeLists.txt/package.xml or creating a new package from scratch I get this error when running catkin_make in the root work folder:
CMake Error at /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:72 (find_package):
Could not find a configuration file for package pcl.
Set pcl_DIR to the directory containing a CMake configuration file for pcl.
The file will have one of the following names:
pclConfig.cmake
pcl-config.cmake
Originally posted by Almanakk on ROS Answers with karma: 1 on 2013-09-09
Post score: 0
Answer:
Have a look at the CMakeLists.txt/package.xml and see how e.g. roscpp is used there. Play around a bit. Shouldn't be too hard to figure out.
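For example, a sketch of the kind of edits typically needed (exact names depend on your setup; note that pcl_ros is a catkin package, while the PCL library itself is a plain CMake package found separately, which is what the error above is complaining about):

```cmake
# CMakeLists.txt: add the new catkin components to the existing find_package(catkin ...)
find_package(catkin REQUIRED COMPONENTS roscpp std_msgs pcl_ros)

# PCL itself is not a catkin package, so locate it with its own CMake config
find_package(PCL REQUIRED)
include_directories(${catkin_INCLUDE_DIRS} ${PCL_INCLUDE_DIRS})
```

In package.xml, add the matching dependency tags, e.g. `<build_depend>pcl_ros</build_depend>` and `<run_depend>pcl_ros</run_depend>` (Hydro-era format), then re-run catkin_make from the workspace root.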
Originally posted by jodafo with karma: 365 on 2013-09-10
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by catmanjan on 2022-04-27:
9 years later, surely there is a better answer than this? | {
"domain": "robotics.stackexchange",
"id": 15478,
"tags": "ros, catkin-create-pkg"
} |
Reading a configuration file in c# | Question: The configuration file of my app is similar to,
<?xml version="1.0" encoding="utf-8" ?>
<Items>
<Item Name="Coffee"
Cost="10"
Image="itemCoffee.png" />
<Item Name="Tea"
Cost="10"
Image="itemTea.png" />
<Item Name="Vada"
Cost="10"
Image="itemVada.png" />
</Items>
Just trying to read the above small configuration file and I wrote this method.
public static class Configuration
{
public static T DeSerialize<T>(string filePath)
{
if (!System.IO.File.Exists(filePath))
{
throw new System.IO.FileNotFoundException(filePath);
}
var serializer = new System.Xml.Serialization.XmlSerializer(T);
return (T)serializer.Deserialize(new FileStream(filePath, FileMode.Open));
}
}
Where should I use using in this code? (Because, I never ever dispose the new FileStream that I wrote in this method)
Is this an overkill for reading this simple xml file?
Answer:
I would definitely wrap the FileStream object into a using clause like this:
using (Stream reader = new FileStream(filePath, FileMode.Open))
{
var serializer = new System.Xml.Serialization.XmlSerializer(typeof(T));
return (T)serializer.Deserialize(reader);
}
This way the Dispose method of the FileStream gets called immediately. Otherwise it will be delayed until the FileStream object gets garbage collected. This may not be a big problem in practice in simple applications where you read the configuration only once, but it is good practice to do it anyway.
Also note that you have to use XmlSerializer(typeof(T)) instead of XmlSerializer(T).
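Putting both fixes together, the full method might look like this (a sketch; it assumes the corresponding Item classes with the XML-serialization attributes are defined elsewhere):

```csharp
using System.IO;
using System.Xml.Serialization;

public static class Configuration
{
    public static T DeSerialize<T>(string filePath)
    {
        if (!File.Exists(filePath))
        {
            throw new FileNotFoundException("Configuration file not found.", filePath);
        }

        var serializer = new XmlSerializer(typeof(T));  // typeof(T), not T
        using (Stream reader = new FileStream(filePath, FileMode.Open))
        {
            return (T)serializer.Deserialize(reader);   // stream is disposed on return
        }
    }
}
```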
My opinion is that it is overkill to create a generic method if you have only one kind of configuration file in your app. This method might be hard to read for a maintainer who is not sufficiently comfortable with generics and it has no benefit in making your code DRY-er if you have only one type of configuration file. | {
"domain": "codereview.stackexchange",
"id": 5883,
"tags": "c#, .net"
} |
carry over factor in beam ( moment distribution ) | Question: In the solution, we notice that the 2823.6 is the result of the carry-over factor from BA to AB. Why don't we need to do the same thing for BC to CB?
Answer: Because it is an end pin joint, it is assumed that no moment can occur in the joint.
The moment is carried from BA to AB because AB is a fixed joint, and thus incurs a moment as a result of the moment at joint B.
Check out the Moment Distribution Method Wikipedia page and look at their worked example, it should help clear things up. | {
"domain": "engineering.stackexchange",
"id": 1257,
"tags": "structural-engineering, structural-analysis"
} |
A simple pendulum moving at a relativistic speed - how does the period change? | Question: I've been pondering the precise mechanism of time dilation for the example of a simple pendulum in two different situations:
The observer and ground are at rest in one frame of reference; the pendulum is moving at high speed with respect to that frame.
The observer is at rest in one frame of reference; the pendulum and the ground together move at high speed with respect to that frame.
user8260 has pointed out that in situation 1, in the pendulum's frame $g$ is greater by $\gamma^2$ compared to $g$ in the observer's frame. Thus in the pendulum's frame the period is less than it is in the observer's frame by a factor of $1/\gamma$, just as one would expect from time dilation.
But what about situation 2? Here, compared to the pendulum frame, the observer sees the pendulum with the same length in a stronger gravitational field, yet observes a longer period. Does the inertial mass of the pendulum change differently than its gravitational mass? Also, does the analysis depend on whether the plane of swing is perpendicular or parallel the direction of motion?
Answer: As in physics in general, a suitable choice of coordinates makes our life so much better. Time dilation in this problem is somewhat a more trivial effect, and the transformation of gravitational field is somewhat a more complicated phenomenon. With this in mind, let me reformulate slightly the two situations:
Case 2. Pendulum is at rest with respect to the Earth (and some observer moves with respect to them, observes time dilation etc etc)
Case 1. Pendulum is set above the Earth, which moves relativistically below it (and some observer moves with the Earth, observes time dilation etc)
So, let us settle the physics first, and the observer effects last.
Case 2: Classical physics problem, nothing to settle.
Case 1: From the pendulum's point of view, the gravitational field is generated by a moving body (=>the field is unknown). From the Earth frame, a relativistic body moves in a gravity field (=>the equations of motion are unknown).
One might transform the energy-momentum tensor of the Earth from the Earth rest frame to the pendulum frame, but special care should be taken about the fact that the Earth ceases to be spherical in the new frame (though its density does increase as $\gamma^2$). Additionally, it is not clear a priori that the motion of the Earth doesn't cause any additional forces.
I propose to use a straightforward yet more secure method of transforming the metric tensor from the Earth frame to the pendulum frame, and hence obtain the gravity, acting on the pendulum.
In the Earth rest frame the metric tensor is known to be
$$g_{\mu\nu}=\left(\begin{array}{cccc}
1-2U & 0 & 0 & 0 \\
0 & 1-2U & 0 & 0 \\
0 & 0 & 1-2U & 0 \\
0 & 0 & 0 & -1-2U
\end{array} \right),$$
where $U$ is the Newtonian potential of the Earth. This expression corresponds to the so called weak field limit, when the metric tensor is nearly flat. We use the standard notation of MTW ($c=1$, signature $(+++ -)$, Einstein's summation rule etc) and refer to this book for further details on linearized gravity.
Transformation of the field to the pendulum frame:
Lorentz transformation matrix is given by:
$$
\Lambda_{\mu'}^{~\mu}=\left(\begin{array}{cccc}
\gamma & 0 & 0 & \beta \gamma \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
\beta\gamma & 0 & 0 & \gamma
\end{array} \right),
$$
with $\beta=\dfrac{v}{c}, \gamma=(1-\beta^2)^{-1/2}$ and $v$ being the relative velocity of the pendulum with respect to the Earth rest frame.
The transformed metric tensor is obtained by:
$$g_{\mu'\nu'}=\Lambda_{\mu'}^{~\mu}\Lambda_{\nu'}^{~\nu} g_{\mu\nu}=\left(\begin{array}{cccc}
1-2U\dfrac{1+\beta^2}{1-\beta^2} & 0 & 0 & -\dfrac{4 U \beta}{1-\beta^2} \\
0 & 1-2U & 0 & 0 \\
0 & 0 & 1-2U & 0 \\
-\dfrac{4 U \beta}{1-\beta^2} & 0 & 0 & -1-2U\dfrac{1+\beta^2}{1-\beta^2}
\end{array} \right)$$
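As a sanity check (my addition, not part of the original answer), this matrix multiplication can be verified with sympy, using coordinate order $(x,y,z,t)$ so the boost mixes the first and last components; the three non-trivial entries come out exactly as above:

```python
import sympy as sp

U, beta = sp.symbols('U beta', positive=True)
gamma = 1 / sp.sqrt(1 - beta**2)

# Weak-field metric in the Earth rest frame, coordinate order (x, y, z, t)
g = sp.diag(1 - 2*U, 1 - 2*U, 1 - 2*U, -1 - 2*U)

# Boost mixing the x and t components
L = sp.Matrix([[gamma,      0, 0, beta*gamma],
               [0,          1, 0, 0],
               [0,          0, 1, 0],
               [beta*gamma, 0, 0, gamma]])

gp = L * g * L.T  # g_{mu'nu'} = Lambda_{mu'}^mu Lambda_{nu'}^nu g_{mu nu}

assert sp.simplify(gp[0, 0] - (1 - 2*U*(1 + beta**2)/(1 - beta**2))) == 0
assert sp.simplify(gp[0, 3] - (-4*U*beta/(1 - beta**2))) == 0
assert sp.simplify(gp[3, 3] - (-1 - 2*U*(1 + beta**2)/(1 - beta**2))) == 0
```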
In the pendulum frame (further primes in the indices are omitted!):
It is known that only the term $g_{44}$ determines the Newtonian potential. One can see this by writing out the Lagrangian for the pendulum:
$$
\mathcal{L}=\dfrac{1}{2}g_{\mu\nu} u^\mu u^\nu=\\
=\dfrac{1}{2}((u^1)^2+(u^2)^2+(u^3)^2-(u^4)^2)-\\
-U((u^2)^2+(u^3)^2+4 u^1 u^4 \beta \gamma^2+((u^1)^2+(u^4)^2)\dfrac{1+\beta^2}{1-\beta^2})
$$
Here $u^\mu$ is the 4-velocity of the pendulum. As the latter moves non-relativistically (in its own frame), we may consider $u^4\gg u^1,u^2,u^3$ and $u^4\approx \mathrm{const}$, which leaves:
$$
\mathcal{L}=\dfrac{1}{2}((u^1)^2+(u^2)^2+(u^3)^2)-U(u^4)^2\dfrac{1+\beta^2}{1-\beta^2}
$$
If the pendulum as a whole didn't move with respect to the Earth, we would have $\beta = 0$ and
$$
\mathcal{L}_0=\dfrac{1}{2}((u^1)^2+(u^2)^2+(u^3)^2)-U(u^4)^2
$$
Effectively, therefore, the pendulum in its rest frame experiences the gravitational field magnified by the factor of $\dfrac{1+\beta^2}{1-\beta^2}$. The pendulum frequency is thus magnified by $\dfrac{(1+\beta^2)^{1/2}}{(1-\beta^2)^{1/2}}$.
Remarks: the neglected terms in the lagrangian are either $\dfrac{v}{c}$ or $(\dfrac{v}{c})^2$ smaller than the kept leading terms. Hence, up to $\dfrac{v}{c}$ accuracy the direction of motion doesn't affect the pendulum frequency.
Finally, let's add time dilation to get the final answers. Let the period of the pendulum in the case where the observer, the Earth, and the pendulum do not move with respect to each other be $T_0$. Then:
Case 1: In the pendulum frame, as we have seen it has the period of $\dfrac{(1-\beta^2)^{1/2}}{(1+\beta^2)^{1/2}} T_0$. Then in the observer frame, due to time dilation, the period is $\dfrac{1}{(1+\beta^2)^{1/2}}T_0$.
Case 2: In the pendulum frame the period is $T_0$. In the observer frame the period is $\dfrac{T_0}{(1-\beta^2)^{1/2}}$.
To conclude, the two cases are quite different due to the different physics happening. In one case the observed period changes due to the change of the reference frame, whereas in the other there is an additional factor due to the fact that the gravity of a moving source is not the same as that of a still source. | {
"domain": "physics.stackexchange",
"id": 2714,
"tags": "special-relativity"
} |
Is solubility in Qsp affected by coefficient? | Question: Related to my previous question: Is solubility coefficient affected if ion data is given in Ksp?
$200\,\mathrm{mL}$ of a $0.02\,\mathrm{M}\,\ce{AgNO3}$ solution is added to $200\,\mathrm{mL}$ of a solution containing $\ce{CrO4^{2-}}$ and $\ce{PO4^3-}$ ions. Find both $Q_{\mathrm{sp}}$ values.
Actually the question is asking "Will it precipitate", but I skip it.
Here is my half approach:
$$200\,\mathrm{mL}=2\times 10^{-1}\,\mathrm{L}$$
$$V_{\mathrm{total}}=400\,\mathrm{mL}=4\times 10^{-1}\,\mathrm{L}$$
$$[\ce{Ag+}]=[\ce{CrO4^2-}]=[\ce{PO4^3-}]=\frac{0.02\,\mathrm{M}\cdot 2\times 10^{-1}\,\mathrm{L}}{4\times 10^{-1}\,\mathrm{L}}=0.01\,\mathrm{M}$$
$$\ce{Ag2CrO4 -> 2Ag^{+} +CrO4^{2-}}$$
$$Q_{\mathrm{sp}}(\ce{Ag2CrO4})=[\ce{Ag+}]^{2} \cdot [\ce{CrO4^2-}]$$
$$Q_{\mathrm{sp}}(\ce{Ag2CrO4})=[2s]^{2}[0.01\,\mathrm{M}]$$
I know that to find $K_{\mathrm{sp}}$, $s$ needs to be multiplied by (and raised to the power of) the coefficient from the reaction, but how about $Q_{\mathrm{sp}}$? Does $s$ also need to be multiplied?
I asked this because my teacher said that only to the power is affecting the $Q_{\mathrm{sp}}$.
Answer: You don't need to multiply by $s$, since $Q_\text{sp}$ is the product of the ion concentrations (and you don't multiply by the coefficient in any $Q$; it only appears as an exponent). So the right way to write $Q_\text{sp}$ is
$$Q_{\text{sp}}\:(\ce{Ag_{2}CrO_{4}})=[\ce{Ag^{+}}]^{2}[\ce{CrO_{4}^{2-}}]$$
$$Q_{\text{sp}}\:(\ce{Ag_{2}CrO_{4}})=[0.01]^{2}[0.01] = 10^{-6}$$ | {
"domain": "chemistry.stackexchange",
"id": 3421,
"tags": "inorganic-chemistry, equilibrium, solubility"
} |
Find contiguous integers with a given sum | Question: Introduction
This question was asked in a technical interview; I am looking for some feedback on my solution.
Given a list of integers and a number K, return which contiguous
elements of the list sum to K. For example, if the list is [1, 2, 3, 4, 5] and K is 9, then it
should return [2, 3, 4].
The Ask
My solution works with my test cases, but I would like feedback on how others would approach the problem and where my code could be altered to improve efficiency and runtime. Currently, I have a nested for loop and I believe my solution is O(n²).
Solution
def contigSum(nums, k):
for i, num in enumerate(nums):
accum = 0
result = []
# print(f'Current index = {i}')
# print(f'Starting value = {num}')
for val in nums[i:len(nums)]:
# print(f'accum = {accum}')
result.append(val)
# print(f'accum = {accum} + {val}')
accum += val
if accum == k:
print(f'{result} = {k}')
return 0
# else:
# print(f'accum = {accum}')
print('No match found')
return 1
Test Cases
nums0 = []
k0 = None
contigSum(nums0, k0)
nums6 = [1, 2, 3]
k6 = 99
contigSum(nums6, k6)
nums1 = [1, 2, 3, 4, 5]
k1 = 9
contigSum(nums1, k1)
nums2 = [-1, -2, -3]
k2 = -6
contigSum(nums2, k2)
nums4 = [5, 2, 6, 11, 284, -25, -2, 11]
k4 = 9
contigSum(nums4, k4)
nums5 = [10, 9, 7, 6, 5, 4, 3, 2 ,1]
k5 = 20
contigSum(nums5, k5)
Answer: Unused code
In for i, num in enumerate(nums):, the num variable is used only in your commented code (i is genuinely needed for the slice), but commented code shouldn't exist, so num shouldn't exist either.
To come back to the commented code, I hope you didn't submit your solution with these comments, as this is (in my opinion) a very bad practice. What does commented code mean, after all? These comments could all be replaced by any debugging tool.
Running time
I initially believed your solution runs in \$O(n\log n)\$ time because the nested loop doesn't restart at index 1 every time it runs (which is a good thing), but that reasoning is wrong: the inner loop does roughly $n, n-1, n-2, \ldots$ iterations, which still sums to \$O(n^2)\$ in the worst case.
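For reference, a linear-time alternative exists (my sketch, not part of the original review): keep running prefix sums in a hash map, since nums[i:j] sums to k exactly when prefix[j] - prefix[i] == k. This also handles negative numbers:

```python
def contig_sum(nums, k):
    """Return the first contiguous slice of nums summing to k, or None."""
    seen = {0: 0}               # prefix sum -> earliest index it occurred at
    prefix = 0
    for j, val in enumerate(nums, 1):
        prefix += val
        if prefix - k in seen:  # nums[i:j] sums to k, where i = seen[prefix - k]
            return nums[seen[prefix - k]:j]
        seen.setdefault(prefix, j)
    return None

print(contig_sum([1, 2, 3, 4, 5], 9))   # [2, 3, 4]
```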
Code structure
Usually, in an interview question code structure is pretty important. Right now you have one method that does everything. You should at least have a method that returns your result and one that prints it. Something like :
def main():
nums = ...
k = ...
print(contigSum(nums, k))
def contigSum(nums, k):
...
and contigSum should return the result, not print it. | {
"domain": "codereview.stackexchange",
"id": 32633,
"tags": "python, algorithm, interview-questions"
} |
Semantics: Unit/second versus unit-second | Question: I've been trying to make sense of an unusual bit of mathematical and notational semantics in my electrical engineering studies:
Velocity is typically given in units of metres per second ( $ v = \frac{m}{s} = m \cdot s^{-1} $ ). That is, a velocity of one metre per second is defined as a rate of positional change of one metre in a time period of one second.
However, some examples where this notation isn't the case in electrical engineering include the following:
Charge: $ C = A \cdot s $, Ampere-seconds, One ampere of current passing through a conductor during a time period of one second is equivalent to one coulomb of charge transferred.
Energy: $ J = W \cdot s $, Watt-seconds, One watt of power produced or consumed during a time period of one second is equivalent to one joule of energy produced or consumed.
Magnetic flux: $ Wb = V \cdot s $, Volt-seconds, A change of electromagnetic potential of one volt over a time period of one second generates a magnetic flux of one weber.
Inductance: $ H = \Omega \cdot s $, Ohm-seconds, A change of electrical resistance of one ohm over a time period of one second is equivalent to an inductance of one henry (I think; I may be interpreting this one incorrectly).
These are just a few examples where electrical engineering is chock full of "A rate of something happening in a time period of one second" is basically defined as that something multiplied by time, as opposed to divided by time as in the case of velocity.
As far as I can tell the phrasing is effectively equivalent, but mathematically, the difference is profound, and very, very confusing. Can someone explain the difference between the two?
Answer: Velocity, $m/s$, is a measure of the rate at which something moves. I guess you could define displacement as velocity-seconds, a measure of how far something moves at a given rate over a second. In your examples, it is not rates being defined, but the effect of going at a certain rate over a period of time. Amps are Coulombs per second, so you can define Coulombs as the amount of charge moved over a second at a rate of one amp. If you multiply the rate at which something happens by how long that something happens, you will get how much of that something happened. | {
"domain": "physics.stackexchange",
"id": 41838,
"tags": "units, notation, unit-conversion"
} |
If we lived in a multiverse, what would our universe most likely then be named? | Question: We live in the Milky Way Galaxy, but we don't just call it "The Galaxy", because we know there are multiple different galaxies. So if we lived in a multiverse, it wouldn't make much sense to keep calling our universe "The Universe".
How are names for things like this created, and what would a likely name for our universe be?
Answer: New objects usually get some preliminary name or number according to one or more naming or numbering conventions (e.g. HR numbers for stars). If there aren't too many objects of interest, they are named by the discoverer(s) (e.g. Hubble volume), an institution (e.g. PANSTARRS), an occasional nickname liked by the public (e.g. black hole), or by a majority in a committee or an election (e.g. Hydra).
For our universe I can only speculate; probably a notion of the respective multiverse theory would become part of the name, e.g. "Herman brane", if we take a brane cosmology as an example. | {
"domain": "astronomy.stackexchange",
"id": 277,
"tags": "universe, naming, multiverse"
} |
Double slit for electrons (two beams or one)? | Question: I understand the double-slit experiment for waves, but for electrons, do we have a beam for each slit, so that each beam is responsible for shooting electrons through its own slit?
Or do we have just one beam? Which slit do we point the beam at? If it's aimed at the middle, wouldn't all the electrons hit the wall between the two slits?
Sorry, I am very confused as to how to carry out the experiment for electrons.
Answer: There is only one source or one beam of electrons. They are directed toward the slits, with some hitting the middle and some making contact with the edges of the openings. | {
"domain": "physics.stackexchange",
"id": 65123,
"tags": "electrons, double-slit-experiment"
} |
Easy question about orbital motion | Question: A satellite is orbiting the sun at the distance of $r$ and with velocity $v$ and on a circular orbit. (We name this orbit $O_1$). We want to change its orbit. The new orbit $O_2$ is perpendicular to $O_1$ with the same distance from the sun and hence the same velocity. How much we should change its velocity and energy? ($v<<c$ so it is classical)
Somewhere, this question was answered by saying that we must first zero the horizontal velocity and then give it the vertical velocity, so $2v$ is the change in velocity, and $\Delta E = 0.5 m v^2 + 0.5 m v^2 = m v^2$: the first term for zeroing the horizontal velocity and the second for giving the vertical velocity.
Somewhere else said the "Velocity-Change" vector is $\sqrt{v^2 + v^2} = \sqrt{2} v$ and hence $\Delta E \ = 0.5 m (\sqrt{2}v)^2 = m v^2$.
The energy change is the same for both, but I think their ways of getting it are different. I want to know which one is physically correct (or maybe both are the same thing).
Answer: Where did you get those two responses? I expect this was not a stackexchange network site. I'll restrain myself from derogating other sites. If this was an SE site, let us know.
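A quick numeric check of the vector picture (my addition, not from either source): for two perpendicular velocities of equal magnitude, the change-of-velocity vector has magnitude $\sqrt{2}\,v$, not $2v$:

```python
import numpy as np

v = 30.0                        # an arbitrary orbital speed, for illustration only
v1 = np.array([v, 0.0, 0.0])    # velocity on the original orbit O1
v2 = np.array([0.0, v, 0.0])    # perpendicular velocity on O2, same magnitude
dv = np.linalg.norm(v2 - v1)    # magnitude of the required velocity change

assert np.isclose(dv, np.sqrt(2) * v)
assert np.isclose(0.5 * dv**2, v**2)  # energy of the change per unit mass: ½|Δv|² = v²
```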
The first response is complete nonsense. The person who wrote that response doesn't know about vectors or orbital mechanics. The "somewhere else" response is correct. | {
"domain": "physics.stackexchange",
"id": 39509,
"tags": "classical-mechanics, energy, orbital-motion, velocity"
} |
Getting acceleration due to gravity from dropping ball experiment | Question: This seems like a pretty basic experiment, but I'm having a lot of trouble with it. Basically, I have two timer gates that measure the time between two signals, and I drop a metal ball between them. This way I'm getting distance traveled, and time. The ball is dropped from right above the first gate to make sure the initial velocity is as small as possible (there is no way to make it 0 with this setup/timer). I'm assuming $v$ initial is $0 \frac{m}{s}$. Gates are $1$ meter apart.
Times are pretty consistent, and average result from dropping ball from $1.0$ meters is $0.4003$ seconds.
So now I have $3$ [constant acceleration] equations that I can use to get $g$.
$$d_{traveled} = v_{initial} . t + \frac{1}{2} a t^2$$
$$a = \frac{2d}{t^2}$$
$$a = \frac{2(1.0)}{(0.4003)^2}$$
$$a = 12.48 \frac{m}{s^2}$$
$$v_f^2 = v_i^2 + 2ad$$
$$a = \frac{v_f^2-v_i^2}{2d}$$
$$v_f = \frac{distance}{time}=\frac{1.0}{0.4003}=2.5 \frac{m}{s}$$
$$a = \frac{(2.5 \frac{m}{s})^2}{2(1.0\,m)}$$
$$a = 3.125 \frac{m}{s^2}$$
$$v_f = v_i + at$$
$$a= \frac{v_f-v_i}{t}$$
$$a = \frac{2.5 m/s - 0}{ 0.4003 s}$$
$$a = 6.25 \frac{m}{s^2}$$
I'm getting three different results. And all of them are far from $9.8\frac{m}{s^2}$ .
No idea what I'm doing wrong.
Also, if I would drop that ball from different heights, and plot distance-time graph, how can I get acceleration from that?
Answer: The time you should be getting is $0.4516$ seconds; the measurement is off by about $0.05$ seconds. This is the reason why you are getting $12.48$ instead of $9.8$. This is one of the cases where even a small error in the measurement can give you a very wrong answer. Since the time is squared, it amplifies the error in the result.
Moving on, in your second and third calculations, you used a very wrong formula to get final velocity.
The relation $Velocity=\frac{Distance}{Time}$ can only be used when the motion is uniform (unaccelerated). But since the body is falling under gravity, the motion is accelerated.
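To see this concretely, here is a quick sketch (my addition, not the answerer's): using the correct final speed $v_f = 2d/t$, which follows from the average-velocity relation $d/t = (v_i + v_f)/2$ for constant acceleration from rest, all three equations give the same value:

```python
d, t = 1.0, 0.4003          # metres, seconds (from the experiment)

a1 = 2 * d / t**2           # from d = v_i*t + (1/2)*a*t^2 with v_i = 0

# Average speed is d/t = (v_i + v_f)/2, so v_f = 2*d/t (not d/t as in the question)
v_f = 2 * d / t

a2 = v_f**2 / (2 * d)       # from v_f^2 = v_i^2 + 2*a*d
a3 = v_f / t                # from v_f = v_i + a*t

assert abs(a1 - a2) < 1e-9 and abs(a1 - a3) < 1e-9
print(round(a1, 2))         # 12.48 -- consistent, but still off from 9.8 due to timing
```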
Therefore, the last two calculations will always give wrong results because the equations are misused. However, the equations used in the first calculation are correct. | {
"domain": "physics.stackexchange",
"id": 43037,
"tags": "homework-and-exercises, newtonian-mechanics, experimental-physics, acceleration"
} |
Bacteria trapped in crystal inclusions found 'alive' after 50,000 years - what were they eating all that time? | Question: The phys.org article Biologists find weird cave life that may be 50,000 years old describes the announcement by NASA Astrobiology Institute director Penelope Boston at the 2017 AAAS meeting$^{(1)}$ of micro-organisms found in small inclusions within crystals of hydrated calcium sulphate (gypsum) that had grown while underwater in a cave in Naica, Mexico. It is estimated that some of these organisms have been isolated within inclusions for as long as 50,000 years, and yet can grow and reproduce when carefully extracted and provided with fresh chemosynthetic nutrients.
(BBC Radio interview with Penelope Boston)
This particular discovery has just been formally announced so there is no peer-reviewed material to read yet, but it's possible someone here in Biology SE attended the meeting or has read further about the announcement.
50,000 years is a long time (for a bacteria, not a crystal), and if I understand correctly the energy source for these organisms is chemosynthesis. Put simply, wouldn't they have eventually used up all their food and died? I'm thinking the crystal is a good electrical insulator and the cave was dark, so there couldn't be external energy sources to replenish the oxidation state of the iron or sulphur or whatever they were eating.
The elevated temperature and ubiquitous radiation would have presented a relentless, potential source for DNA damage mechanisms, and repair would require a constant supply of energy. So I'm guessing there had to be some minimal source of energy to keep them viable, if not actually 'alive' for 50,000 years.
Is this thinking roughly correct? If so, what might that source of energy have been?
Below: Giant gypsum crystals in a cave in Naica, Mexico, from here. Note the person for scale at lower right.
(1) Currently 2017 AAAS meeting links are all re-directing to the 2018 meeting, so I can't find a link to the particular talk or session.
Answer: The question is interesting, but I must say it is too early to say anything. But let me tell you what I can.
Dormancy, first of all, is a state in which all of an organism's metabolic activities temporarily stop or slow down, and when I say all, I literally mean it. In layman's terms, dormancy can be considered hibernation at the molecular level. A dormant organism still performs all kinds of activities, but in extreme slow motion. Again, we don't know anything about those microbes, so I can only describe the currently known mechanisms by which they could have survived. Gypsum, i.e. $CaSO_4.2H_2O$, contains sulfate ($SO_4^{2-}$), so those microbes (let me call them X from now on) must metabolize sulfur to survive. Now, known mechanisms of sulfur metabolism include:
sulfur oxidation
sulfate reduction
sulfite reduction
which are summarized in a figure in the original answer.
Now, with sulfate already available from gypsum (considering dissociation) as:
$CaSO_4.2H_2O \rightarrow Ca^{2+} + SO_4^{2-} + 2\hspace{1mm}H_2O$
The reactions it can perform are sulfate reduction and sulfite reduction; some of these mechanisms are shown in a figure in the original answer.
In the presence of $CO_2$, glucose can also be formed as:
$12\hspace{1mm}H_2S + 6\hspace{1mm}CO_2\hspace{1mm}\rightarrow\hspace{1mm}C_6H_{12}O_6 + 6\hspace{1mm}H_2O + 12\hspace{1mm}S$
The required $CO_2$ might come from respiration. During dormancy, this process is highly slowed down, so that X may have survived such a long time by just this process.
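As a quick arithmetic check (my addition) that the chemosynthesis equation above is balanced:

```python
# Atom counts on each side of: 12 H2S + 6 CO2 -> C6H12O6 + 6 H2O + 12 S
lhs = {'H': 12 * 2, 'S': 12, 'C': 6, 'O': 6 * 2}
rhs = {'C': 6, 'H': 12 + 6 * 2, 'O': 6 + 6, 'S': 12}
assert lhs == rhs  # balanced
```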
Again, these are just the known mechanisms; X may have evolved other, possibly even more efficient, mechanisms. The above example was just to suggest how X could have survived in gypsum crystals for as long as 50K years. We don't know anything about X yet. From the article you cite:
If confirmed, the find is yet another example of how microbes can survive in extremely punishing conditions on Earth.
meaning we don't even know yet whether they are really viable or not.
The life forms—40 different strains of microbes and even some viruses—are so weird that their nearest relatives are still 10 percent different genetically. That makes their closest relative still pretty far away, about as far away as humans are from mushrooms, Boston said.
Now, viruses ain't alive. And 90% similarity means a lot of difference, which means they almost certainly have different mechanisms for survival. At last, I would again remind that it is too early to say anything unless some scientific studies are performed and published.
P.S. talking about energy source, even the heat and radiation from environment could be used as a source of energy. Obviously, harnessing energy directly from external heat is pretty difficult, but when one finds fungi using radiation for growth, then you never know! | {
"domain": "biology.stackexchange",
"id": 6677,
"tags": "biochemistry, bacteriology, chemosynthesis"
} |
Maxwell's equations in integral form using differential geometry | Question: So I've been trying to convert from Maxwell's equations in terms of differential forms to the integral versions of Maxwell's equations that we know from vector calculus.
We have, in vector calculus
$$\left\{\begin{align}
\nabla\cdot\mathbf{E}&=~\rho\\
\nabla\times\mathbf B~&=~\mathbf J+\frac{\partial\mathbf E}{\partial t}\\
\nabla\times\mathbf E~&=-\frac{\partial\mathbf B}{\partial t}\\
\nabla\cdot\mathbf{B}~&=~0
\end{align}\right.$$
Which equates to
$$
F = E_i\,{\rm d}t\wedge{\rm d}x^i + \star B_i\,{\rm d}t\wedge{\rm d}x^i\\
J = \star \left( \rho \, {\rm d} t + J_i \,{\rm d} x^i \right)\\
d\star F = J \\
dF = 0
$$
in differential geometry, with implied sum over the $i$ index.
Stokes' theorem in differential geometry tells us that
$$
\int_M \mathrm{d} \omega = \int_{\partial M} \omega
$$
For a given $n-1$ form $\omega$ on manifold $M$ of dimension $n$, with boundary $\partial M$.
So our integral Maxwell's equations in differential geometry are
$$
\int_{\partial M} \star F = \int_M J \\
\int_{\partial M} F = 0
$$
Reducing this expression back to vector calculus for the divergence equations is then an exercise in choosing the correct surfaces. Take the domain
$$
D_{t_0} = \left\{ \left(\begin{array}{c} t_0 \\ x \\ y \\ z\end{array}\right) \in \mathbb{R}^4 \;\;\Bigg| \;\;\left(\begin{array}{c} x \\ y \\ z\end{array}\right) \in D \subset \mathbb{R}^3 \right\}
$$
ie. a domain $D$ in $\mathbb{R}^3$, at a chosen time $t_0$. This is a 3 dimensional manifold.
Let $\imath$ be the inclusion mapping of $D_{t_0}$ in Minkowski spacetime, and then $\imath^*F$ and $\imath^* \star F$ are both 2 forms on a 3 dimensional manifold, and $\imath^*J$ is a 3 form on a 3 dimensional manifold. Now we can apply our integral versions of Maxwell's equations.
Calculating the pull-backs gives us that $\imath^*{\rm d} t = 0$, $\imath^*{\rm d} x^i = {\rm d} x^i$, so
$$
\imath^*F = \star B_i\,{\rm d}x^i\\
\imath^* \star F = \star E_i\,{\rm d}x^i\\
\imath^* J = \rho \; {\rm d} x \wedge {\rm d} y \wedge {\rm d} z
$$
Then the homogeneous divergence equation gives us (with an implied pull-back by the inclusion mapping of $\partial D$ in $D$)
$$
\int_{\partial D} \imath^* F = \int_{\partial D} \star B_i\,{\rm d}x^i = \int_{\partial D} \mathbf{B} \cdot {\rm d} \mathbf{A} = 0
$$
And the inhomogeneous divergence equation;
$$
\int_{\partial D} \imath^* \star F = \int_{\partial D} \star E_i\,{\rm d}x^i = \int_{\partial D} \mathbf{E} \cdot {\rm d} \mathbf{A} = \int_D \imath^* J = \int_D \rho \; {\rm d} x \wedge {\rm d} y \wedge {\rm d} z = \int_D \rho \,\rm{d} V = Q
$$
And so we recover two of the integral versions of Maxwell's equations.
I am, however, a bit lost recovering the two remaining integral versions of Maxwell's equations; those that involve the line integrals around closed loops. Because these equations still involve time derivatives, we've only actually applied Stokes' theorem in 3D and not in 4D space. I'm unsure about how to deal with this, I was thinking about maybe taking a domain of integration something like
$$
D_{x_0} = \left\{ \left(\begin{array}{c} t \\ x_0 \\ y \\ z\end{array}\right) \in \mathbb{R}^4 \;\; \Bigg| \;\;\left(\begin{array}{c} y \\ z\end{array}\right) \in A \subset \mathbb{R}^2, t \in [t_0, t_0 + h] \right\}
$$
(Obviously this could be done for a surface oriented in a general direction rather than in the $y-z$ plane, but let's make the maths more simple by Lorentz invariance) And then taking a limit as $h \rightarrow 0$ to recover the time derivative, but it seems a bit of a messy way to do it really, and I also can't work out how the boundaries of this set match up with the 4D Stokes' theorem and things. If anyone could show me how to do it like this that would be great?
My other thought was about pulling back to three dimensional space before applying Stokes' theorem, but I feel like this is treating the equations a bit un-symmetrically and so there should be some way to keep the 4D Stokes theorem and not add another time derivative, and still reproduce the integral forms of the equations. So does anyone have any better ideas?
Answer: I think your limiting procedure should work.
For the homogeneous Maxwell equations, when we restrict to a surface of constant $x$, the pull back of the field strength looks something like
$$\iota^* F = E^\perp_i dt \wedge dx^i + B_x dy \wedge dz,$$
where $E^\perp$ is the component of $E$ in the $yz$-plane. Now your timelike pillbox integral will have three contributions, one from the sides of the cylinder, and two from the top and bottom. The tangent vectors of the sides are $\partial_t$ and $v^\mu$, the tangent vector of the curve bounding the surface in the $yz$-plane. So the side contribution gives
$$\int dt\oint \mathbf{E}\cdot\mathbf{v}ds,$$
where $s$ is a parameter on the curve. The top and bottom of the integral comes entirely from the $B_x dy \wedge dz$ term, and is just the usual flux of $B$ through the surface. Since the total contribution is zero, we have
$$\int_t^{t+h} dt \oint \mathbf{E}\cdot\mathbf{v}ds + \int_{t+h} \mathbf B\cdot d\mathbf{A} - \int_{t} \mathbf B\cdot d\mathbf{A} = 0$$
Now if you want to take the limit as $h\rightarrow 0$, you can Taylor expand each term and look at the first order in $h$ terms. The first integral just becomes the line integral by the Fundamental Theorem of Calculus. The $\mathbf{B}$ flux integrals can also be seen straightforwardly to approach the time derivative of the flux at first order in $h$. Thus the result is
$$\oint \mathbf{E}\cdot \mathbf{v} ds + \frac{\partial}{\partial t}\int_t \mathbf{B}\cdot d\mathbf{A} = 0$$
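To spell out the $h \to 0$ step (an added detail, not in the original answer), expand each term to first order in $h$:
$$
\int_t^{t+h} dt' \oint \mathbf{E}\cdot\mathbf{v}\,ds = h \oint \mathbf{E}\cdot\mathbf{v}\,ds + O(h^2), \qquad
\int_{t+h} \mathbf{B}\cdot d\mathbf{A} - \int_{t} \mathbf{B}\cdot d\mathbf{A} = h\,\frac{\partial}{\partial t}\int_t \mathbf{B}\cdot d\mathbf{A} + O(h^2).
$$
Dividing the pillbox identity by $h$ and letting $h \to 0$ then gives exactly the relation just stated.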
Applying basically the same procedure to the dual field strength gives
$$\oint \mathbf{B}\cdot \mathbf{v} ds -\frac{\partial}{\partial t}\int_t \mathbf{E}\cdot d \mathbf{A} = \int_t \mathbf{j}\cdot d\mathbf{A}.$$ | {
"domain": "physics.stackexchange",
"id": 23686,
"tags": "homework-and-exercises, differential-geometry, maxwell-equations"
} |
Can a metal be forced to form an anion theoretically? | Question: I know that metals have the capability to lose electrons and form cations, but is it also theoretically possible to supply an electron to a metal so that it forms an anion?
If so, has it ever been done?
I referred this question (Can two metals combine to form a compound?) but could not get a satisfactory solution from that.
Answer: Absolutely! You will find these mostly in electride systems and, of these, mostly in alkali metals.
Here is an example research paper:
"Superalkali-Alkalide Interactions and Ion Pairing in Low-Polarity Solvents", J. Am. Chem. Soc., 2021, 143(10), 3934–3943 (https://pubs.acs.org/doi/10.1021/jacs.1c00115)
Remember, metals carry a positive charge when ionized because it is energetically more favorable for them to lose electrons than to gain them; this is, of course, an oversimplified picture in terms of electron orbitals and shells. If you have a situation in which this is reversed or not possible, you will get a negative metal ion.
"domain": "chemistry.stackexchange",
"id": 15953,
"tags": "electrons, metal, electronic-configuration"
} |
Longest Increasing Subsequence | Question: I am learning dynamic programming and I have written down some code for longest increasing subsequence. I would like to know if there is any case or any area of improvement in the terms of optimization/programming style.
/**
 * LIS = Longest increasing subsequence.
 * Input = [10, 22, 9, 33, 21, 50, 41, 60, 80]
 * Output = [10, 22, 33, 50, 60, 80]
 * Created by gaurav on 1/7/15.
 */
function findSubsequence(arr){
    var allSubsequence = [],
        longestSubsequence = null,
        longestSubsequenceLength = -1;
    for(var i=0;i<arr.length;i++){ //i=1
        var subsequenceForCurrent = [arr[i]],
            current = arr[i],
            lastElementAdded = -1;
        for(var j=i;j<arr.length;j++){
            var subsequent = arr[j];
            if((subsequent > current) && (lastElementAdded<subsequent)){
                subsequenceForCurrent.push(subsequent);
                lastElementAdded = subsequent;
            }
        }
        allSubsequence.push(subsequenceForCurrent);
    }
    for(var i in allSubsequence){
        var subs = allSubsequence[i];
        if(subs.length>longestSubsequenceLength){
            longestSubsequenceLength = subs.length;
            longestSubsequence = subs;
        }
    }
    return longestSubsequence;
}

(function driver(){
    var sample = [87,88,91, 10, 22, 9,92, 94, 33, 21, 50, 41, 60, 80];
    console.log(findSubsequence(sample));
})();
Answer: This bit worries me:
lastElementAdded = -1;
It assumes that the minimum value in the array is zero. But really, an increasing sequence could start with -3456780 or something.
I'd use null or something non-numeric instead. Or simply add the first element automatically, and start the loop at index 1. You could use Number.NEGATIVE_INFINITY, but it has the same problem: A sequence could start at negative infinity.
Also, don't use for..in loops on arrays. A for..in loop enumerates an object's properties; it's not intended for iterating through an array's elements. Instead, use a regular for loop, or a forEach iterator.
Lastly, I'd change allSubsequence to allSubsequences (plural) simply because it is more grammatically correct.
In terms of overall strategy, your current algorithm is doing a bit of unnecessary work. Given the example input, the first 3 subsequences it finds are:
[ 87, 88, 91, 92, 94 ]
[ 88, 91, 92, 94 ]
[ 91, 92, 94 ]
The last two aren't really interesting, since they're just subsequences of the first.
Now, this really isn't a problem for arrays as short as what you've got here. Still, it's a fun exercise, so I tried my hand at it. There's probably an even more elegant solution than what I'm proposing here, though. Algorithms aren't my strong suit, I'm afraid. But here's what I came up with:
Start a sequence with the first element of the input array
Iterate through the array
If a value is greater than the sequence's maximum, append it to sequence
If it's less and it's the first such value we've found, recurse with a subset of the input array, starting at the current index. Store the result.
Return whichever sequence - the current one, or the "fork" - is longer
Kinda hard to explain, actually. Hope the code below will help illustrate:
function findLongestIncreasingSequence(array) {
    var sequence = [],
        fork = null;

    // Always add the first value to the sequence
    sequence.push(array[0]);

    // Reduce the array. Since no initial accumulator is given,
    // the first value in the array is used
    array.reduce(function (previous, current, index) {
        // If the current value is larger than the last, add it to the
        // sequence and return (i.e. check the next value)
        if(current > previous) {
            sequence.push(current);
            return current;
        }
        // If, however, the value is smaller, and we haven't had a fork
        // before, make one now, starting at the current value's index
        if(!fork && current < previous) {
            fork = findLongestIncreasingSequence(array.slice(index));
        }
        // Return the previous value if the current one is less or equal
        return previous;
    });

    // Compare the current sequence's length to the fork's (if any) and
    // return whichever one is larger
    return fork && fork.length > sequence.length ? fork : sequence;
}
Given the example input, you get:
findLongestIncreasingSequence(sample); // => [ 87, 88, 91, 92, 94 ]
Anyway, what it does is more or less this, where √ means the value was added to a sequence, and F means it "forked" and recursed
Initial input:  87 88 91 10 22  9 92 94 33 21 50 41 60 80
====================================================================
1st call:        √  √  √  F  .  .  √  √  .  .  .  .  .  .
2nd call:                 √  √  F  √  √  .  .  .  .  .  .
3rd call:                       √  √  √  F  .  .  .  .  .
4th call:                                √  F  √  .  √  √
5th call:                                   √  √  F  √  √
6th call:                                            √  √  √
Unfortunately, it's not tail-recursive, so call stack depth could be an issue.
And there are probably more optimizations that can be made. For instance, if the current sequence is already 5 items long, there's no reason to fork if the array only has 4 items left. And as I said, there's probably an even cleverer solution overall.
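As a further point of comparison (a sketch added for reference, not part of the original review): the textbook dynamic-programming formulation avoids the fork bookkeeping entirely by letting dp[i] record the length of the longest increasing subsequence ending at index i. It runs in O(n²), and can be pushed to O(n log n) with binary search over sequence tails. Note that on the sample input it finds a length-6 subsequence, [10, 22, 33, 50, 60, 80], which the greedy fork search above misses because each call only forks at the first decrease:

```javascript
// Classic O(n^2) dynamic-programming LIS: dp[i] holds the length of the
// longest increasing subsequence ending at index i, and prev[i] is a
// back-pointer used to reconstruct that subsequence.
function longestIncreasingSubsequence(arr) {
    if (arr.length === 0) return [];
    var dp = new Array(arr.length).fill(1);
    var prev = new Array(arr.length).fill(-1);
    var best = 0; // index where the overall longest subsequence ends
    for (var i = 1; i < arr.length; i++) {
        for (var j = 0; j < i; j++) {
            if (arr[j] < arr[i] && dp[j] + 1 > dp[i]) {
                dp[i] = dp[j] + 1;
                prev[i] = j;
            }
        }
        if (dp[i] > dp[best]) best = i;
    }
    // Walk the back-pointers to rebuild the subsequence in order.
    var seq = [];
    for (var k = best; k !== -1; k = prev[k]) seq.push(arr[k]);
    return seq.reverse();
}

var sample = [87, 88, 91, 10, 22, 9, 92, 94, 33, 21, 50, 41, 60, 80];
console.log(longestIncreasingSubsequence(sample)); // → [ 10, 22, 33, 50, 60, 80 ]
```

Unlike the recursive fork search, this always returns a true longest increasing subsequence, at the cost of the quadratic inner loop.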
The alternative to the above would be to simply map out the indices at which a value is lower than the one preceding it. Then do the same thing (slice the array at those indices, find increasing sequence) one at a time instead of recursively, and compare sequence lengths at the end. Same result. But recursion is more fun :P | {
"domain": "codereview.stackexchange",
"id": 23097,
"tags": "javascript, performance, dynamic-programming"
} |
Javascript Prototypes that represent HTML elements | Question: I've been thinking about making Javascript Prototypes that represent HTML elements.
For example a form prototype with build in ajax requests and form element prototypes. Or a list with list item prototypes.
I think that the biggest benefit of this approach is that it reduces repetitive code.
Here is an example of what I have in mind.
var LightParticle = function() {
    this.setWidth(10);
    this.setHeight(10);
    this.setTop(0);
    this.setLeft(0);
    this.setPosition("absolute");
    this.setBackground("white");
    this.setClassName("LightParticle");
};

LightParticle.prototype = {
    setWidth : function(width) {
        this.width = width;
    },
    getWidth : function() {
        return this.width;
    },
    setHeight : function(height) {
        this.height = height;
    },
    getHeight : function() {
        return this.height;
    },
    setTop : function(top) {
        this.top = top;
    },
    getTop : function() {
        return this.top;
    },
    setLeft : function(left) {
        this.left = left;
    },
    getLeft : function() {
        return this.left;
    },
    setBackground : function(background) {
        this.background = background;
    },
    getBackground : function() {
        return this.background;
    },
    setClassName : function(className) {
        this.className = className;
    },
    getClassName : function() {
        return this.className;
    },
    setElement : function(element) {
        this.element = element;
    },
    getElement : function(element) {
        return this.element;
    },
    setPosition : function(position) {
        this.position = position;
    },
    getPosition : function(position) {
        return this.position;
    },
    setSize : function(size) {
        this.setWidth(size);
        this.setHeight(size);
    },
    getStyle : function() {
        return {
            position: this.getPosition(),
            width : this.getWidth(),
            height : this.getHeight(),
            top : this.getTop(),
            left: this.getLeft(),
            background: this.getBackground()
        }
    },
    getView : function() {
        var element = $("<div></div>");
        element
            .addClass(this.getClassName())
            .css(this.getStyle());
        this.setElement(element);
        return element;
    },
    pulsate : function (speed) {
        var height = this.getHeight();
        var width = this.getWidth();
        var top = this.getTop();
        var left = this.getLeft();
        if(this.getElement().height() == height) {
            height = height * 4;
            width = width * 4;
            top = top - (height/2);
            left = left - (width/2);
        }
        $(this.getElement()).animate({
            "height":height,
            "width": width,
            "top": top,
            "left":left
        }, speed);
        var that = this;
        setTimeout(function(){
            that.pulsate(speed);
        }, speed);
    }
}

function addRandomParticle() {
    try {
        var particle = new LightParticle();
        var seed = Math.floor(Math.random() * 70) + 1;
        particle.setBackground("#" + Math.floor((Math.abs(Math.sin(seed) * 16777215)) % 16777215).toString(16));
        particle.setSize(Math.floor(Math.random() * 70) + 10);
        particle.setTop(Math.floor(Math.random() * $(window).height()));
        particle.setLeft(Math.floor(Math.random() * $(window).width()));
        $('#canvas').append(particle.getView());
        particle.pulsate(Math.floor(Math.random() * 2000) + 500);
    } catch(error) {
        console.log(error.message);
    }
}

$(document).ready(function() {
    try {
        for(var i = 0; i < 100; i++) {
            addRandomParticle();
        }
    } catch(error) {
        console.log(error.message);
    }
});
So far I'm not satisfied with the getters and setters since they have no datatype validation.
Does anyone have any idea how I can improve this? Or does someone know a completely better approach to reduce javascript code/events?
Answer: As dystroy mentioned, the setters and getters are really not JavaScript style, nor are the many try/catch statements in your functions.
However, if you insist on having all these getters and setters, you should create a utility function that generates a getter/setter for you. Since you have default values for most properties in your LightParticle, I was thinking something along the lines of this:
function addGetterSetter( o , property , defaultValue ){
    var postFix = property[0].toUpperCase() + property.slice(1),
        getter = 'get' + postFix,
        setter = 'set' + postFix;

    o[getter] = function(){
        return this[property] || defaultValue;
    }

    o[setter] = function( value ){
        this[property] = value;
        return this;
    }
}
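A hypothetical usage sketch (added for illustration, not from the original answer). One tweak relative to the version above: the getter here guards on `undefined` rather than falsiness, so a deliberately stored falsy value such as 0 is not masked by the default:

```javascript
// Same utility as above, except the getter checks for `undefined`
// explicitly, so a stored 0 or "" is returned instead of the default.
function addGetterSetter(o, property, defaultValue) {
    var postFix = property[0].toUpperCase() + property.slice(1);
    o['get' + postFix] = function () {
        return this[property] !== undefined ? this[property] : defaultValue;
    };
    o['set' + postFix] = function (value) {
        this[property] = value;
        return this; // returning `this` makes the setters chainable
    };
}

var LightParticle = function () {};
addGetterSetter(LightParticle.prototype, 'width', 10);
addGetterSetter(LightParticle.prototype, 'top', 0);

var p = new LightParticle();
console.log(p.getWidth()); // → 10 (default, nothing stored yet)
p.setWidth(40).setTop(5);  // chained setters
console.log(p.getWidth(), p.getTop()); // → 40 5
```

Because the setter returns `this`, configuration calls like the random-particle setup in the question can be chained in one expression.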
And then you would
var LightParticle = function() {},
    prototype = LightParticle.prototype;

addGetterSetter( prototype , 'width' , 10 );
addGetterSetter( prototype , 'height' , 10 );
addGetterSetter( prototype , 'top' , 0 );
addGetterSetter( prototype , 'left' , 0 );
addGetterSetter( prototype , 'position' , 'absolute' );
addGetterSetter( prototype , 'background' , 'white' );
addGetterSetter( prototype , 'className' , 'LightParticle' ); | {
"domain": "codereview.stackexchange",
"id": 7245,
"tags": "javascript, jquery, html"
} |
How can longest isoforms (per gene) be extracted from a FASTA file? | Question: Is there a convenient way to extract the longest isoforms from a transcriptome fasta file? I had found some scripts on biostars but none are functional and I'm having difficulty getting them to work.
I'm aware that the longest isoforms aren't necessarily 'the best' but it will suit my purposes.
The fasta was generated via Augustus. Here is what the fasta file looks like currently (sequence shortened to save space)
>Doug_NoIndex_L005_R1_001_contig_2.g7.t1
atggggcataacatagagactggtgaacgtgctgaaattctacttcaaagtctacctgattcgtatgatcaactcatca
ttaatataaccaaaaacctagaaattctagccttcgatgatgttgcagctgcggttcttgaagaagaaagtcggcgcaagaacaaagaagatagaccg
>Doug_NoIndex_L005_R1_001_contig_2.g7.t2
atggggcataacatagagactggtgaacgtgctgaaattctacttcaaagtctacctgattcgtatgatcaactcatca
The format is as such:
Gene 1 isoform 1
Gene 1 isoform 2
Gene 2 isoform 1
Gene 2 isoform 2
and so forth. There are several genes that have more than one pair of isoforms (up to 3 or 4). There are roughly 80,000 total transcripts, probably 25,000 genes. I would like to extract the single longest isoform for each gene.
Answer: While the solution from https://bioinformatics.stackexchange.com/users/96/daniel-standage should work (after adjusting for possible python3 incompatibility), the following is a shorter and less memory demanding method that uses biopython:
#!/usr/bin/env python
from Bio import SeqIO
import sys

lastGene = None
longest = (None, None)
for rec in SeqIO.parse(sys.argv[1], "fasta"):
    gene = ".".join(rec.id.split(".")[:-1])
    l = len(rec)
    if lastGene is not None:
        if gene == lastGene:
            if longest[0] < l:
                longest = (l, rec)
        else:
            lastGene = gene
            SeqIO.write(longest[1], sys.stdout, "fasta")
            longest = (l, rec)
    else:
        lastGene = gene
        longest = (l, rec)
SeqIO.write(longest[1], sys.stdout, "fasta")
If you saved this as filter.py, then filter.py original.fa > subset.fa would be the command to use. | {
"domain": "bioinformatics.stackexchange",
"id": 87,
"tags": "fasta, isoform, filtering"
} |
Is there a formal definition of `nature` in Biology? | Question: I've been discussing with a friend about Earth Day and at a certain point came out the question
What do you call "nature"?
He said that he considers "nature" basically all matter, including plastics and all kind of man-made materials.
I said that as far as I understand, "nature" is the ensemble of all Earth's ecosystems, thus if something is not part of an ecosystem, it is not "nature".
Is there an official definition of what is considered to be "nature"?
If yes, what is it? If no, is there a reason?
Answer: Is there a definition of "nature"?
I don't think there is any commonly accepted definition of "nature" in biology. To my experience, the term "nature" is actually relatively rarely used in conferences or peer-reviewed papers.
Why is there no formal definition of nature?
The concept of "nature" was developed outside the fields of science and philosophy. As is often the case, concepts in popular culture are used without a proper, complete definition. Even if there is a vague intuition corresponding to a concept, it does not mean that there is any way to make an objective definition of that concept. I think the absence of a formal definition of nature mainly comes from the fact that "nature", as used in popular culture, does not mean much.
Is the term "nature" used in Biology and with which definition?
I just did a quick review of the use of the term "nature" in peer-reviewed articles. It seems that the term "nature" is rarely used, or used only in the abstract or in the first two paragraphs of the introduction, to convey very general (and somewhat inaccurate) ideas.
Often, the term "nature" stands for "essence", "origin" or "intrinsic characteristic" rather than referring to "outside the lab", "landscape" or "ecosystems". Here are some examples of where I found the term "nature":
"Nature" as "Essence"
From the abstract of Gibson and Dworkin 2004
[..] we highlight recent progress in determining the nature and identity of genes that underlie cryptic genetic effects [..]
From the first paragraph of Woolhouse et al. 2002
Failure to recognize the dynamic nature of the interaction could result in misinterpretation [..]
"Nature" as "outside the lab"
First sentence of the second paragraph of Elena and Lenski 2003
Since Darwin’s day, many examples of evolution in action have been studied in nature [..]
Note btw that there is no good (objective) definition of life either. One might want to have a look at Why isn't a virus alive? for a discussion on the definition of life. | {
"domain": "biology.stackexchange",
"id": 5471,
"tags": "definitions"
} |
Is Hartree-Fock a standard mean field approximation? | Question: I have read many times that Hartree-Fock is a mean field approximation, but I struggle to reconcile it with a standard mean field approach. In the simplest form of mean field approximation, we utilize the equality
$$AB = (A-\langle A\rangle)(B - \langle B \rangle) + \langle A \rangle B + A \langle B \rangle - \langle A \rangle \langle B \rangle.\tag{1}$$
We usually assume the first term on the right hand side is negligible and we drop the last term because it is a constant. In a Hartree-Fock approximation we write (see these lecture notes or this question)
$$
c_1^\dagger c_2^\dagger c_3 c_4 \approx
-\langle c_1^\dagger c_3 \rangle c_2^\dagger c_4
-\langle c_2^\dagger c_4 \rangle c_1^\dagger c_3
+\langle c_1^\dagger c_4 \rangle c_2^\dagger c_3
+\langle c_2^\dagger c_3 \rangle c_1^\dagger c_4,\tag{2}
$$
while expecting from the mean field theory
$$
\begin{align}
c_1^\dagger c_2^\dagger c_3 c_4 =-\frac{1}{2}(c_1^\dagger c_3) (c_2^\dagger c_4)+\frac{1}{2}(c_1^\dagger c_4) (c_2^\dagger c_3)\\\approx
\frac{1}{2}\left[-\langle c_1^\dagger c_3 \rangle c_2^\dagger c_4
-\langle c_2^\dagger c_4 \rangle c_1^\dagger c_3
+\langle c_1^\dagger c_4 \rangle c_2^\dagger c_3
+\langle c_2^\dagger c_3 \rangle c_1^\dagger c_4\right].\tag{3}
\end{align}
$$
In other words, application of the mean field theory according to the equation (1) gives an extra factor of $1/2$ as compared to the effective Hamiltonian of the Hartree-Fock approximation. Am I missing anything, or Hartree-Fock is indeed a non-standard mean-field approximation and cannot be obtained from Eq. (1)?
Edit. Essentially, I would like to arrive at the Hartree-Fock approximation when starting from either Eq. (1) or a Hubbard-Stratonovich transformation; or, alternatively, confirm that this is not possible. I am well aware of the standard way of deriving the Hartree-Fock approximation by minimizing the energy of the wave function of a non-interacting fermionic system.
Answer: The spirit and the practice of the mean field theory
The spirit of the MFT is replacing an interacting (multi-particle) Hamiltonian by a non-interacting (single-particle) one, for particles moving in an effective field, thus making the problem solvable. (So Hartree-Fock nicely fills the bill.) In practice there are several "standard" ways to do this, specifically:
Weiss mean field theory, as described in the OP, which is usually applied in the equations of motion
Hubbard-Stratonovich transformation in path integral (I recommend Negele & Orland's book for a concise presentation, although it is deep in the text)
Partial resummation of diagrams in the Feynman-Dyson expansion
My personal impression was always that the trend to call Hartree-Fock a mean field approximation comes from the path integral people, since it requires a leap of imagination to obtain it by the Weiss method. On the other hand, in the Feynman-Dyson expansion the Hartree and Fock terms appear naturally as a result of applying Wick's theorem, but no one (to my knowledge) is calling it mean field in that context.
Ad absurdum derivation
There are four operators in the expression of interest, so let us do mean field honestly, as we would do it for four operators:
$$
c_1^\dagger c_2^\dagger c_3 c_4 =
(\langle c_1^\dagger\rangle + \delta c_1^\dagger) (\langle c_2^\dagger\rangle + \delta c_2^\dagger) (\langle c_3\rangle + \delta c_3)(\langle c_4\rangle +\delta c_4)
$$
This is obviously a problematic expression, since the averages are zero when we are dealing with a particle-number-conserving Hamiltonian, as is often the case - but formally we can write it this way. If we now keep only the terms up to the second order in $\delta c^\dagger,\delta c$, we have
$$
c_1^\dagger c_2^\dagger c_3 c_4 = \langle c_1^\dagger\rangle \langle c_2^\dagger\rangle c_3 c_4 + \langle c_1^\dagger\rangle c_2^\dagger \langle c_3\rangle c_4 + \langle c_1^\dagger\rangle c_2^\dagger c_3 \langle c_4\rangle + c_1^\dagger \langle c_2^\dagger\rangle \langle c_3\rangle c_4 +
c_1^\dagger \langle c_2^\dagger\rangle c_3 \langle c_4\rangle + c_1^\dagger c_2^\dagger \langle c_3\rangle \langle c_4 \rangle
$$
We throw away the first and the last terms, since they are not single-particle ones, and unite the averages under the same bracket, as if they were uncorrelated, arriving at
$$
c_1^\dagger c_2^\dagger c_3 c_4 = -\langle c_1^\dagger c_3\rangle c_2^\dagger c_4 + \langle c_1^\dagger c_4\rangle c_2^\dagger c_3 + \langle c_2^\dagger c_3\rangle c_1^\dagger c_4 - \langle c_2^\dagger c_4\rangle c_1^\dagger c_3,$$
which is the desired result.
This is admittedly a doubtful derivation, but it looks somewhat less fishy when the operators are replaced by Grassmann numbers... and the Feynman-Dyson expansion definitely shows that this approximation is sound.
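To make the step of uniting the averages explicit (an added detail, treating the formal single-operator averages as Grassmann-odd quantities, in the spirit of the remark above), track the signs of two representative terms:
$$
\langle c_1^\dagger\rangle\, c_2^\dagger\, \langle c_3\rangle\, c_4 = -\langle c_1^\dagger\rangle \langle c_3\rangle\, c_2^\dagger c_4 \;\longrightarrow\; -\langle c_1^\dagger c_3\rangle\, c_2^\dagger c_4, \qquad
\langle c_1^\dagger\rangle\, c_2^\dagger c_3\, \langle c_4\rangle = +\langle c_1^\dagger\rangle \langle c_4\rangle\, c_2^\dagger c_3 \;\longrightarrow\; +\langle c_1^\dagger c_4\rangle\, c_2^\dagger c_3 .
$$
One anticommutation (one minus sign) in the first case and two in the second reproduce the signs in the final expression.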
A quote
To close, let me quote Arnold Sommerfeld (although on a different subject):
Thermodynamics is a funny subject. The first time you go through it, you don't understand it at all. The second time you go through it, you think you understand it, except for one or two small points. The third time you go through it, you know you don't understand it, but by that time you are so used to it, it doesn't bother you any more.
So is it with Hartree-Fock and mean field. | {
"domain": "physics.stackexchange",
"id": 83403,
"tags": "second-quantization, mean-field-theory"
} |
Can muonic atoms exist? | Question: Would it be possible in the standard model to have atom like systems in which muons (or tauons) take the place of electrons? Why don't we see more of them?
For instance it could be related to some leptogenesis mechanism, but I don't know much about this subject.
How could the difference between muonic and electronic atoms affect astronomical data?
Correct me if I am wrong, but I guess there is no analogue for protons and neutrons, especially since protons have a very long lifetime.
Answer: Absolutely they can exist. In fact, physicists often create muonic hydrogen to study things like the structure/size of the proton with more accuracy.
The reason we don't see muonic/tauonic atoms in nature is that these particles decay very quickly, whereas the electron, being the lightest of the three generations of leptons, has an essentially infinite lifetime. | {
"domain": "physics.stackexchange",
"id": 67470,
"tags": "particle-physics, atomic-physics, leptons, half-life, leptogenesis"
} |
Tensor Formulation of Maxwell's Equations | Question: I've been reading up about the tensor formulation of Maxwell's Equations of Electromagnetism, and the derivations I have seen (found here: http://www.lecture-notes.co.uk/susskind/special-relativity/lecture-2-3/maxwells-equations/) gives them in covariant form (in vacuo) as:
$$\begin{array}{rcl}\partial_{\mu}F^{\mu \nu} &= &0 \\ \partial_{\mu}\tilde{F}^{\mu \nu} & = & 0 \end{array}$$
Where $F^{\mu \nu}$ is the electromagnetic tensor and $\tilde{F}^{\mu \nu}$ is a modified electromagnetic tensor in which:
$$\begin{array}{rcl} E_m & \to & -B_m \\ B_m & \to & E_m \end{array}$$
I (just about) understand how these arose; however, I'm sure I had previously seen them formulated as a single tensor equation (of a similar form): does anyone know how this looks/arises?
If possible could answers take a more component-based approach, as this is what I am able to work with easiest (don't worry if not though).
Answer: It is possible to combine the two equations into one by defining the Riemann-Silberstein field $\Psi= {\bf E}+i{\bf B}$. Then the time dependent Maxwell equations become
$$
i\partial_t \Psi = -i (\Sigma_x \partial_x+ \Sigma_y \partial_y+\Sigma_z \partial_z)\Psi
$$
where the $\Sigma_{x,y,z}$ are the spin-one version of the Pauli sigma matrices; in the standard Cartesian convention $(\Sigma_i)_{jk} = -i\epsilon_{ijk}$, one has $(\Sigma\cdot\nabla)\Psi = i\,\nabla\times\Psi$, so the equation above is just $\partial_t \Psi = -i\,\nabla\times\Psi$, whose real and imaginary parts are the two vacuum curl equations. The resulting equation is a spin-one version of the Weyl equation. There are a number of subtleties here though, and these equations do not describe static solutions. There is a Wikipedia article on the Riemann-Silberstein vector that contains the details. | {
"domain": "physics.stackexchange",
"id": 41392,
"tags": "electromagnetism, special-relativity, tensor-calculus, maxwell-equations, duality"
} |
Localization and supersymmetry explanation | Question: I'm confused about section 9.3 of the book Mirror Symmetry. In particular, I'm confused about the derivation carried out in equations (9.32) to (9.35) where they claim that the partition function is zero if $h'$ has no zeroes.
Specifically, consider a 0-dimensional QFT with fermionic and bosonic variables, defined by the action
$$S=S_0 - S_1\psi_1\psi_2 \tag{9.25}$$
with $$S_0(x)=\frac{1}{2}[h'(x)]^2\quad\text{and}\quad S_1=h''(x).\tag{9.28}$$
The partition function is $$Z=\int dxd\psi_1d\psi_2 e^{-S}.\tag{9.26}$$
This action has odd symmetries:
$$V_1=\psi_1 \frac{\partial}{\partial x} - h'(x) \frac{\partial}{\partial \psi_2}
\quad\text{and}\quad
V_2 = \psi_2 \frac{\partial}{\partial x} + h'(x) \frac{\partial}{\partial \psi_1}.\tag{9.30'}$$
This leads to the infinitesimal transformations
$$\delta x=\epsilon^1 \psi_1 + \epsilon^2 \psi_2$$
$$\delta \psi_1 = \epsilon^2 h'\tag{9.30}$$
$$\delta \psi_2 = -\epsilon^1 h'.$$
Then, by exploiting these symmetries, they claim that $Z=0$ if $h'$ is everywhere nonzero. This is where my confusion arises.
Is it "legal" to use $$\epsilon^1=\epsilon^2=-\psi_1/h'\tag{1}$$ to change one of the fermionic variables to zero? I thought the $\epsilon$'s had to be infinitesimal for the transformation to be a symmetry and leave the action invariant.
How does this "motivate" the change of variables in equation (9.32) below?
$$\hat{x} := x-\frac{\psi_1 \psi_2}{h'}$$
$$\hat{\psi}_1:=\alpha(x) \psi_1\tag{9.32}$$
$$\hat{\psi}_2 := \psi_1 + \psi_2.$$
How did they come up with (9.33) and (9.34)? Equation (9.33) states that $$S(x, \psi_1, \psi_2) = S(\hat{x},0,\hat{\psi}_2)\tag{9.33}$$
presumably because of the symmetry. But what does $S(\hat{x},0,\hat{\psi}_2)$ actually look like? Equation (9.34) is the transformation of the measure from the above change of variables;
$$dxd\psi_1d\psi_2 = \left(\alpha(\hat{x}) - \frac{h''(\hat{x})}{(h'(\hat{x}))^2}\hat{\psi}_1 \hat{\psi}_2\right) d\hat{x} d\hat{\psi}_1 d\hat{\psi}_2 .\tag{9.34}$$
I'm not quite sure where this came from.
Equation (9.35) comes directly from plugging (9.34) (the change of measure) into the partition function. But where is the total derivative in $\hat{x}$ in the second term
$$\int e^{-S(\hat{x},0,\hat{\psi}_2)} \frac{h''(\hat{x})}{(h'(\hat{x}))^2} \hat{\psi}_1 \hat{\psi}_2 d\hat{x} d\hat{\psi}_1d\hat{\psi}_2?\tag{9.35'} $$
I guess this may be more apparent if I knew what the function $S(\hat{x},0,\hat{\psi}_2)$ looked like.
Answer:
We should first of all realize that the infinitesimal Grassmann-odd parameters $\epsilon^1$ and $\epsilon^2$ are allowed to depend on the variables $x$, $\psi_1$ and $\psi_2$ in the infinitesimal SUSY transformation (9.30). We clearly will need that in eq. (1).
OP asks a good question about the status of finite SUSY transformations. To this end let us consider a subclass of infinitesimal parameters of the form
$$\begin{align}\epsilon^1(x,\psi_1,\psi_2)~=~&\frac{f^1(x)}{h^{\prime}(x)}\psi_1, \cr
\epsilon^2(x,\psi_1,\psi_2)~=~&\frac{f^2(x)}{h^{\prime}(x)}\psi_1, \end{align}\tag{1'} $$
where $f^1(x)$ and $f^2(x)$ are two arbitrary infinitesimal functions. [Here we have used the assumption that $h^{\prime}(x)\neq 0$ for later convenience.] Then the infinitesimal SUSY transformation (9.30) becomes
$$ \begin{align}\delta x ~=~& \frac{f^2(x)}{h^{\prime}(x)} \psi_1 \psi_2,\cr
\delta \psi_1 ~=~& f^2(x) \psi_1 ,\cr
\delta \psi_2 ~=~& -f^1(x) \psi_1 .
\end{align}\tag{9.30''} $$
This can be integrated up to the corresponding finite SUSY transformations$^1$
$$ \begin{align}
\hat{x} ~=~&x+\frac{F^2(x)}{h^{\prime}(x)}\psi_1\psi_2, \cr
\hat{\psi}_1 ~=~& \alpha(x) \psi_1, \qquad\qquad
\alpha(x)~:=~1 + F^2(x),\cr
\hat{\psi}_2 ~=~&\psi_2 - F^1(x) \psi_1 ,
\end{align}\tag{9.30'''}$$
where $F^1(x)$ and $F^2(x)$ are two arbitrary finite functions.
Sketched proof of eq. (9.30'''): The infinitesimal version of eq. (9.30''') is clearly eq. (9.30''). Therefore it is enough to check that the finite SUSY transformation (9.30''') is form-invariant under composition:
$$ \begin{align}
\hat{\hat{x}} ~=~&\hat{x}+\frac{\hat{F}^2(\hat{x})}{h^{\prime}(\hat{x})}\hat{\psi}_1\hat{\psi}_2
~=~\hat{x}+\frac{\hat{F}^2(x)}{h^{\prime}(x)}\hat{\psi}_1\psi_2\cr
~=~&x+\frac{F^2(x)+\hat{F}^2(x)\alpha(x)}{h^{\prime}(x)}\psi_1\psi_2~=~x+\frac{\hat{\alpha}(x)\alpha(x)-1}{h^{\prime}(x)}\psi_1\psi_2, \cr
\hat{\hat{\psi}}_1 ~=~& \hat{\alpha}(\hat{x}) \hat{\psi}_1
~=~ \hat{\alpha}(x) \hat{\psi}_1
~=~ \hat{\alpha}(x) \alpha(x) \psi_1, \cr
\hat{\hat{\psi}}_2 ~=~&\hat{\psi}_2 - \hat{F}^1(\hat{x}) \hat{\psi}_1
~=~\hat{\psi}_2 - \hat{F}^1(x) \hat{\psi}_1\cr
~=~&\psi_2 - \left(F^1(x) + \hat{F}^1(x) \alpha(x)\right) \psi_1 ,
\end{align}$$
End of sketched proof. $\Box$
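The closure of the composition law can also be checked numerically. Below is a small sketch (my own illustration, not part of Ref. 1): elements of the Grassmann algebra generated by $\psi_1,\psi_2$ are stored as coefficient 4-vectors, the functions $F^1, F^2$ are taken to be constants, and $h^{\prime}(x)$ is a fixed number. (Evaluating the hatted $F$'s and $1/h^{\prime}$ at $\hat{x}$ instead of $x$ makes no difference here, since the correction multiplies $\hat{\psi}_1\hat{\psi}_2 \propto \psi_1\psi_2$ and vanishes by nilpotency.)

```python
# Grassmann algebra over {1, p1, p2, p1*p2}, stored as coefficient lists
# [c0, c1, c2, c12]; gmul implements the anticommuting product.
def gmul(a, b):
    return [a[0]*b[0],
            a[0]*b[1] + a[1]*b[0],
            a[0]*b[2] + a[2]*b[0],
            a[0]*b[3] + a[3]*b[0] + a[1]*b[2] - a[2]*b[1]]

def susy(F1, F2, hp, x, p1, p2):
    """Finite SUSY transformation (9.30''') with constant F1, F2 and hp = h'(x)."""
    soul = gmul(p1, p2)                              # the psi1*psi2 piece
    xh  = [x[i] + (F2 / hp) * soul[i] for i in range(4)]
    p1h = [(1 + F2) * c for c in p1]                 # alpha = 1 + F2
    p2h = [p2[i] - F1 * p1[i] for i in range(4)]
    return xh, p1h, p2h

x0, hp = 0.7, 2.0
x, p1, p2 = [x0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]
F1, F2, G1, G2 = 0.3, 0.5, -0.2, 0.4   # G's play the role of the hatted F's

xh, p1h, p2h = susy(F1, F2, hp, x, p1, p2)
xhh, p1hh, p2hh = susy(G1, G2, hp, xh, p1h, p2h)

alpha, alpha_hat = 1 + F2, 1 + G2
assert abs(xhh[3] - (alpha_hat*alpha - 1)/hp) < 1e-12   # x picks up (aa'-1)/h' psi1 psi2
assert abs(p1hh[1] - alpha_hat*alpha) < 1e-12           # the alphas multiply
assert abs(p2hh[1] + (F1 + G1*alpha)) < 1e-12           # F1_tot = F1 + F1_hat * alpha
```

The last assertion confirms that the composed parameter is $F^1 + \hat{F}^1\alpha$, i.e. the transformations of the form (9.30''') close under composition.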
In particular, it is straightforward to check that the action
$$S(\hat{x},\hat{\psi}_1,\hat{\psi}_2)~=~S(x,\psi_1,\psi_2)$$
is invariant under the finite SUSY transformation (9.30''').
It is straightforward to check that the inverse of the finite SUSY transformation (9.30''') reads
$$ \begin{align}
x ~=~&\hat{x}-\frac{F^2(\hat{x})}{\alpha(\hat{x})h^{\prime}(\hat{x})}\hat{\psi}_1\hat{\psi}_2, \cr
\psi_1 ~=~& \frac{1}{\alpha(\hat{x})}\hat{\psi}_1, \cr
\psi_2 ~=~&\hat{\psi}_2 + \frac{F^1(\hat{x})}{\alpha(\hat{x})}\hat{\psi}_1 .\end{align}\tag{9.30''''}$$
[Hint: Start by establishing the middle equation.]
The ansatz (1') [and the ansatz (1)] were chosen because it is cumbersome (but we suspect not impossible) to integrate the infinitesimal SUSY transformations (9.30) directly. It is much easier to only consider subvariations proportional to $\psi_1$, because then we can repeatedly use the nilpotency $\psi_1^2=0$ to simplify. To arrive at the form (9.32) from eq. (9.30''') now pick
$$\begin{align} F^1(x)~=~&F^2(x)~=~-1 , \cr \alpha(x)~:=~&1 + F^2(x) ~=~0,\cr
\hat{\psi}_1~=~& \alpha(x) \psi_1~=~0.\end{align}$$
However, this is a singular transformation, so let us first assume that $F^1$ and $F^2$ are $x$-independent constants different from $-1$, and only at the very end of the calculation take the limit going to $-1$. This choice simplifies some algebra as compared to Ref. 1.
Since Ref. 1 identifies Berezin integration with differentiation from the right, cf. eq. (9.20), the derivatives in the Jacobian supermatrix are right derivatives. The Jacobian supermatrix becomes
$$ \frac{\partial_R(x,\psi_1,\psi_2)}{\partial (\hat{x},\hat{\psi}_1,\hat{\psi}_2)} ~=~\begin{pmatrix} 1+\frac{F^2 h^{\prime\prime}(\hat{x})}{\alpha h^{\prime}(\hat{x})^2}\hat{\psi}_1\hat{\psi}_2 &\ast&\ast \cr 0&\frac{1}{\alpha} &0 \cr 0 & \ast &1 \end{pmatrix}.$$
The superdeterminant/Berezinian becomes
$${\rm sdet}\frac{\partial_R(x,\psi_1,\psi_2)}{\partial (\hat{x},\hat{\psi}_1,\hat{\psi}_2)}
~=~\alpha+\frac{F^2 h^{\prime\prime}(\hat{x})}{ h^{\prime}(\hat{x})^2}\hat{\psi}_1\hat{\psi}_2, $$
which explains eq. (9.34).
Finally let's take the limit. The action
$$S\left(\hat{x},\hat{\psi}_1\!=\!0,\hat{\psi}_2\right)~=~S_0(\hat{x})~=~\frac{1}{2}h^{\prime}(\hat{x})^2$$
makes the integrand of the second term in (9.35') a total derivative wrt. $\hat{x}$:
$$\begin{align} h^{\prime\prime}(\hat{x})~g(h^{\prime}(\hat{x}))
~=~&\frac{dG(h^{\prime}(\hat{x}))}{d\hat{x}},\cr g(y)~:=~&\frac{e^{-y^2/2}}{y^2},\cr G~:=~&\int\!g~:=~\text{antiderivative of }g. \end{align}$$
References:
K. Hori, S. Katz, A. Klemm, R. Pandharipande, R. Thomas, C. Vafa, R. Vakil, and E. Zaslow, Mirror Symmetry, 2003; Sections 9.2-9.3. The pdf file is available here. | {
"domain": "physics.stackexchange",
"id": 39440,
"tags": "quantum-field-theory, supersymmetry, grassmann-numbers, superalgebra"
} |
De-excitation/Excitation of $e^-$ in Bohr Model | Question: My teacher told me that (in his words):
When an $e^-$ is excited in the Bohr Model to the $n^{th}$ energy level, then the $e^-$ stays in this energy level for a very short time of the order of $10^{-8}$ seconds or less. After this time, if any electronic state is vacant in a lower energy level (more stable states), then the electron from the excited state can transit to any of the lower states till it reaches the most stable state $(n=1)$.
Another thing which he told me which sound fishy to me (in my words):
The $1s, 2s$ and $2p$ are depicted below, showing the boundary surfaces of these orbitals where the electron can only be found as per its configuration (the $e^-$ of $1s$ will be found in the inner circle). Firstly, I know that orbitals are only a $90\%$ or so probability region, so we can't assume a fixed boundary (watch out, this creates a problem later); maybe he told us that to explain better.
He told us that when the $e^-$ is in position $B$ (originally in $1s$), it can go to $2s$ by just gaining energy and staying at its original position, because $2s$ encompasses the $1s$ region too. But when it is in position $E$ or $C$ it needs to come back into $1s$, which takes at most a time of the order of $10^{-8}$ seconds (I argue that $1s\wedge2s$ extends to $\infty$). He also told us that when the $e^-$ is in position $A$ it can't go directly into a $p$ orbital, because it first needs to come into the overlapping region $B$ (again, same argument).
Later when I argued and said that $1s$ extends to $\infty$, he accepted but said that the probability outside this region becomes negligible, something he drew like this:
Then he said this is the quantum world and we cannot use our intuition and dismissed the topic.
Question: How can an electron change orbitals staying at same positions, if it can then it must be able to do it all the time, why not only at the time of deexcitation?
Another fishy thing he said was(It might be correct):
We know that an orbital (or orbit, it was) cannot accommodate more than two $e^-$; it is not because of the electronic repulsion, and if we do the mathematical calculation, we'll get more attraction from the nucleus on the third incoming $e^-$ as compared to the repulsion due to the two other previously present electrons. He said that it is based on something high and beyond our course and he can't tell us the whole thing, but that it follows from calculation results indicating a higher, unstable energy of the system.
Question: The thing here is I know pauli's principle from my chemistry, but I'm not convinced of the reasons he gives which I highlighted.
Edit
He meant to explain the fact(first point) with the use of schrodinger wave equation.
Answer: I will stress what the other answers also include. The Bohr model does not have orbitals, but orbits, similar to the planet orbits around the sun. It is the first successful effort to describe the experimental behavior, i.e. the photon spectra, that excited atoms emitted. It postulated steady orbits. A postulate is like an axiom in a theoretical model that fits experimental data.
When we are talking of probabilities, we are in the realm of quantum mechanics. Quantum mechanics is a theory that fits all the data we now have accumulated of how the microcosm of atoms molecules particles works. It started with the Schrodinger equation, whose solutions for the atom gave the Lyman series and went beyond orbits, into orbitals.
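As a concrete illustration (a numerical aside of mine, not part of the original answer), the hydrogen energy levels $E_n = -13.6\,\text{eV}/n^2$, which come out of both the Bohr model and the Schrodinger equation, reproduce the observed spectral series:

```python
# Hydrogen transition wavelengths from E_n = -13.6057 eV / n^2.
RY_EV = 13.6057        # Rydberg energy in eV
HC_EV_NM = 1239.84     # h*c in eV*nm

def wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted in the n_upper -> n_lower transition."""
    dE = RY_EV * (1 / n_lower**2 - 1 / n_upper**2)   # photon energy in eV
    return HC_EV_NM / dE

print(wavelength_nm(2, 1))   # Lyman-alpha, ~121.5 nm (ultraviolet)
print(wavelength_nm(3, 2))   # Balmer-alpha, ~656 nm (visible red)
```

The Lyman series ($n \to 1$) falls entirely in the ultraviolet, while the Balmer series ($n \to 2$) contains the visible hydrogen lines.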
The shapes of the first five atomic orbitals: 1s, 2s, 2p_x, 2p_y, and 2p_z. The colors show the wave function phase. These are graphs of ψ(x, y, z) functions which depend on the coordinates of one electron. To see the elongated shape of ψ(x, y, z)² functions that show probability density more directly, see the graphs of d-orbitals below.
An orbital is a probability distribution for finding a particle at a point in space $(x,y,z)$ at time $t$. Probabilities come in because a prime postulate for the Schrodinger equation is that "the square of the solution describes the probability of finding the particle under consideration". It is an axiom that connects the solutions of the differential equation to physical measurements. That is why quantum mechanics evolved: the classical orbits are no longer valid in the microcosm. If you observe the plot, you will see that the probabilities overlap in space. That is how an electron can change orbitals while at the same position in $(x,y,z,t)$, if energy is supplied to the system in the appropriate form, i.e. a photon, because we are talking of electromagnetic interactions.
How can an electron change orbitals staying at same positions, if it can then it must be able to do it all the time, why not only at the time of deexcitation?
This comes from the postulates of the theory of quantum mechanics and it takes time to develop an intuition. That's the way the microcosm works. What is the same in classical and quantum mechanics are conservations laws. In quantum mechanics they are much more important because there are many more quantum numbers that describe the atoms/molecules/particles that obey conservation laws, than the quantities in classical mechanics.
Energy conservation assures that a de-exciting atom emits a photon, and that a photon is needed to go to a higher energy level. De-excitation happens due to the overlap of the probability distributions around the atom. If an empty level exists, a probability for decaying into it with a photon emission can be given by the solution of the equation. To get the probability distribution experimentally, one has to do a lot of measurements on a lot of atoms set up at the same energy level. This has only recently been done for orbitals of the hydrogen atom.
The microcosm had another surprise for the first researchers who tried to find how molecules behave. They discovered that the quantum number of spin divides all molecules/atoms/particles into two categories: integer spin and half-integer spin. More importantly, ensembles of these particles behave statistically differently depending on whether they have integer spin (bosons) or half-integer spin (fermions). At the level of atomic orbitals, the Pauli exclusion principle assures that no two electrons can occupy the same energy level with the same spin; only pairs of opposite spin can do that. This is an observational fact, and the Pauli exclusion principle is part of the "axioms" necessary to explain the microworld with quantum mechanical solutions of the appropriate equations.
In general one should keep in mind that the weirdness of quantum mechanical behaviors is not a complex imagination of theorists. It is how one can describe experimental numbers and predict new behaviors in the realm of particles. It needs time to develop an intuition. | {
"domain": "physics.stackexchange",
"id": 17058,
"tags": "atomic-physics"
} |
How to label sound events in noisy .wav files | Question: I'm trying to find a way of automatically identifying birdsong in phone-recorded sound files (1 min long). Currently the algorithm I'm using doesn't label all of the events I want it to (it is designed for non-noisy audio files).
I don't have much experience in DSP. I'm hoping that upon looking at my wave-forms someone may have an idea of how to pick out these 'birdsong events'!
I've shown one of the waveforms. The rest can be seen here: imgur.com
Any method would be appreciated!
I've thought about applying a noise reduction algorithm but I suspect this could just interfere with things.
Here is a link to the algorithm I'm using in case you're interested: Github/kylemcdonald
The labelled events are the cross hatched ones.
Thanks in advance for taking the time to look at this!
Answer: "Picking out birdsong events" sounds like a classification problem to me. Of course, you can use concepts from DSP to engineer features, but other than that, I don't see how you can use DSP to segment them.
If you have some labeled data you can try training a classifier. You can use a sliding window of a fixed length and compute a "feature vector" from each segment. One obvious feature to use is the signal energy, e.g. the mean squared value of the samples. You can also try other spectral features that librosa computes, like the spectral centroid.
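A minimal sketch of the sliding-window energy feature in pure NumPy (in practice librosa computes this plus the spectral features for you). The detection rule here, flagging frames whose energy exceeds a multiple of the median frame energy, is my own illustrative choice of noise-floor estimate, not a recommendation from the answer:

```python
import numpy as np

def frame_energy(x, frame_len=1024, hop=512):
    """Mean squared amplitude of each sliding window."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.array([np.mean(x[i*hop : i*hop + frame_len] ** 2) for i in range(n)])

def detect_events(x, frame_len=1024, hop=512, k=10.0):
    """Flag frames whose energy exceeds k times the median (a crude noise floor)."""
    e = frame_energy(x, frame_len, hop)
    return e > k * np.median(e)

# Synthetic sanity check: quiet noise with a loud "chirp" in the middle.
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(16000)
x[8000:9000] += 0.5 * np.sin(2 * np.pi * 3000 * np.arange(1000) / 16000)
flags = detect_events(x)   # True only around the burst
```

On real field recordings you would likely want a more robust threshold (e.g. median absolute deviation) and band-limited energy, since wind and traffic noise concentrate at low frequencies while birdsong sits higher.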
"domain": "dsp.stackexchange",
"id": 6819,
"tags": "fft, audio, noise, algorithms, sound"
} |
GPS Waypoint navigation with robot_localization and navsat_transform_node | Question:
I'm using Clearpath Husky A200 which outputs encoders information, a XSENS IMU (without a good compass) and a Novatel RTK GPS with sub-inch accuracy and I would like to implement a GPS waypoints navigation where the robot must follow a set of given waypoints.
I've read the robot_localization documentation, but I didn't understand how to insert the waypoints as goals for the robot.
I know that I need to use navsat_transform_node to obtain the UTM coordinates and then to set a new goal by converting the GPS waypoint coordinates to UTM. But I don't know how to implement this.
Moreover, I've found this package gps_common which subscribes to /fix topic and outputs a /odom topic. Should I use it as input for the ekf_localization node?
Thank you for your help and support!
Originally posted by Marcus Barnet on ROS Answers with karma: 287 on 2017-01-05
Post score: 2
Original comments
Comment by sonyccd on 2017-01-10:
It is strange to me how something like GPS waypoint following is not clearly documented. I have been trying to make this work on Jackal for some time now and still am having problems. I have gotten about to where you are.
Comment by Marcus Barnet on 2017-01-10:
It is very strange to me, too! This is very strange since there is lots of documentation about all the other ROS topics, but almost nothing about GPS and waypoint. I'm very sad about this since this topic is very important to me.
Comment by Marcus Barnet on 2017-01-10:
All I want, it's to find a short tutorial on how set the waypoints and how to use move_base to make the robot follow them to reach the goal.
Comment by sonyccd on 2017-01-14:
Have you tried this?
http://answers.ros.org/question/170131/simple-gps-guided-vehicle/
It does not say much, for that matter none of the answers say much.
Comment by Marcus Barnet on 2017-01-14:
I already read it but it provides no additional information.. it only links to the robot_localization package which I have already read several times :(
Answer:
Moreover, I've found this package gps_common which subscribes to /fix topic and outputs a /odom topic. Should I use it as input for the ekf_localization node?
navsat_transform_node's job is to convert the GPS data into a nav_msgs/Odometry message so it can be fused into an EKF instance, so you don't need any external packages for that. But that's unrelated to GPS waypoint following. robot_localization and navsat_transform_node are just giving you a state estimate.
move_base requires goals to be in the robot's world frame, or a frame that can be transformed to the world frame. When you run navsat_transform_node, it generates a world_frame->utm transform. Therefore, if you can create a geometry_msgs/PoseStamped or move_base_msgs/MoveBaseGoal message with the UTM coordinates of the goal (and a frame_id of utm), I believe move_base will use that transform to convert it.
So the problem boils down to retrieving UTM coordinates from your GPS coordinates, and then packing those UTM coordinates into a goal message and firing it off to move_base. It occurs to me that this might make a handy service for navsat_transform_node, but free time being what it is(n't), you'll have to do this conversion yourself for now. Fortunately, there are a couple libraries that do this. navsat_transform_node itself uses a header file called navsat_conversions.h (credit to Chuck Gantz for the functionality contained in this header). You can include that header file and call the function I've linked, and it will give you what you want.
You'll obviously have to write a node that subscribes to a NavSatFix and then publishes a goal to move_base, but that should be fairly straightforward.
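For reference, here is a pure-Python transcription of the standard WGS-84 lat/lon → UTM projection formulas, the same math that navsat_conversions.h implements (originally due to Chuck Gantz, following the USGS formulation). The node you write would pack the resulting easting/northing into a geometry_msgs/PoseStamped with frame_id "utm" and publish it as a move_base goal. Treat this as an illustrative sketch, not a drop-in replacement for the tested header:

```python
import math

def ll_to_utm(lat_deg, lon_deg):
    """WGS-84 lat/lon (degrees) -> (easting, northing, zone).

    Northern hemisphere; for southern latitudes add 10,000,000 m to the northing.
    """
    a, e2, k0 = 6378137.0, 0.00669438, 0.9996          # WGS-84 ellipsoid, UTM scale
    ep2 = e2 / (1 - e2)
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    zone = int((lon_deg + 180) // 6) + 1
    lon0 = math.radians((zone - 1) * 6 - 180 + 3)      # zone central meridian

    N = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)
    T = math.tan(lat) ** 2
    C = ep2 * math.cos(lat) ** 2
    A = math.cos(lat) * (lon - lon0)
    # Meridian arc length from the equator
    M = a * ((1 - e2/4 - 3*e2**2/64 - 5*e2**3/256) * lat
             - (3*e2/8 + 3*e2**2/32 + 45*e2**3/1024) * math.sin(2 * lat)
             + (15*e2**2/256 + 45*e2**3/1024) * math.sin(4 * lat)
             - (35*e2**3/3072) * math.sin(6 * lat))
    easting = k0 * N * (A + (1 - T + C) * A**3 / 6
                        + (5 - 18*T + T**2 + 72*C - 58*ep2) * A**5 / 120) + 500000.0
    northing = k0 * (M + N * math.tan(lat) * (A**2 / 2
                     + (5 - T + 9*C + 4*C**2) * A**4 / 24
                     + (61 - 58*T + T**2 + 600*C - 330*ep2) * A**6 / 720))
    return easting, northing, zone
```

A quick check: a point on the equator at a zone's central meridian maps to easting 500,000 m and northing 0, and one degree of latitude north adds roughly 110.5 km of northing (the 110.6 km meridian arc scaled by k0 = 0.9996).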
Originally posted by Tom Moore with karma: 13689 on 2017-01-26
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Marcus Barnet on 2017-01-26:
So, do I need to output a /odometry/filtered by fusing encoders and IMU in ekf_localization_node and then make the navsat_transform_node subscribe to this? Then, how can I use /odometry/gps and /gps/filtered? Do I have to use them to create a new ekf_localization_node istance with only GPS data?
Comment by Tom Moore on 2017-01-26:
/odometry/gps gets fed right back into the same EKF to which it is subscribing. You can run a two-tier setup with just odom and IMU in the first tier and odom, IMU, and GPS in the second, but you don't have to. /gps/filtered just gives you the fused state estimate as GPS coords.
Comment by Tom Moore on 2017-01-26:
Cont'd: You don't need them, which is why the publication is optional. It's just handy if you have some other node that needs your GPS position.
Comment by Marcus Barnet on 2017-01-26:
Thank you a lot! I'm going to try to implement this. Just a question: my GPS system has a sub-inch accuracy while encoders and IMU include always errors (wheel slipping, IMU accumulated errors). Will they affect the GPS readings since they are fused together in ekf_localization_node?
Comment by Marcus Barnet on 2017-01-26:
Continued: I'm asking this since I would like to make the robot consider the goal as reached only when the current GPS coordinates read by the robot will be equal to the given goal GPS coordinates.what happen if the robot moves on a wet surface with a high slipping effect? Will this affect the GPS?
Comment by Tom Moore on 2017-02-01:
If the robot slips, you may see movement in your state estimate in between GPS readings, but then it will snap/jump back to the correct location when the GPS reading arrives.
Comment by Marcus Barnet on 2017-02-22:
@Tom Moore, I'm having problems with my XSENS MTi-10 IMU. It should use the ENU standard as the manual reports, so the YAW should output 0° when facing EAST, but maybe my IMU model does not have orientation output.
Comment by Marcus Barnet on 2017-02-22:
Cont'd: This is a bag file containing /imu/data and /magnetic which are generated by my IMU node. Can you tell me if my IMU output is correct for GPS integration, please? If not, I need to specify the datum parameter with the starting GPS coordinates?
Comment by Marcus Barnet on 2017-02-22:
Maybe it is not possible to integrate GPS data if the IMU doesn't output orientation data? Should I buy another IMU sensor, or do I just need to set the datum parameter by adding the GPS coordinates of the robot's starting point?
Comment by Tom Moore on 2017-02-22:
Your IMU must produce earth-referenced orientation data (i.e., it needs a compass) if you want to use navsat_transform_node. The node has to know your heading so it can figure out which way you are facing in the UTM grid.
Comment by Marcus Barnet on 2017-02-22:
Thank you, so my current IMU is not useful if I want to use this node. I will try to buy another IMU with compass included.
Comment by Marcus Barnet on 2017-02-22:
The /magnetic topic included in my bag is not useful? It seems to contain the magnetic orientation vector. Should the orientation be included in the /imu/data only?
Comment by P_Brand on 2017-04-19:
Hi, I tried to send a move_base_simple/goal (with frame_id: utm) on the Jackal robot. The navsat_transform_node was running, but that didn't work. The move_base node accepts only goals in the odom or map frame. How can I send goals in UTM coordinates to the robot?
"domain": "robotics.stackexchange",
"id": 26650,
"tags": "navigation, ekf, navsat-transform-node, robot-localization, ekf-localization-node"
} |
How do I distribute ros nodes in executable form | Question:
Hi,
For my teaching it would be convenient at times to be able to provide my students with executable ROS nodes, while not giving them the source code. Is there a simple way to do that?
I'm running ROS Groovy under Ubuntu 12.04 and use catkin_make to build my nodes, but I'll be using ROS Indigo under Ubuntu 14.04 from September, so I'm interested in both situations, in case it makes a difference.
I have only extremely basic knowledge of Linux, so maybe the question is very elementary and I don't realize it.
Thanks for any input. Regards.
G. Garcia
Originally posted by ggg on ROS Answers with karma: 61 on 2015-04-01
Post score: 2
Original comments
Comment by autonomy on 2016-03-24:
Were you able to solve this? I'm running into the same issue you mentioned in your comment below, where ROS doesn't find the package in the install directory http://answers.ros.org/question/230072/sourcing-workspace-generated-by-a-catkin_make-install/
Answer:
You cannot just use the build directory. If you want to distribute the binaries only you must invoke catkin_make install which will install everything into a local install folder. Then you can delete the build, devel and src folders, or just zip up only the install folder and distribute that. However, the binaries need to be compiled for the correct architecture that your students computer has and any dependencies you do not have in your workspace when you invoke catkin_make need to also be installed by your students before being used.
If you are only on Ubuntu then you can have bloom generate you a deb-src, and then you can compile that and distribute the .deb file; students can install it with dpkg -i your_distributable.deb. If you follow this part of the pre-release tutorial:
http://wiki.ros.org/bloom/Tutorials/PrereleaseTest#bloom.2BAC8-Tutorials.2BAC8-PrereleaseTest.2BAC8-indigo.Perform_the_pre-release_locally
After the git-buildpackage command you'll have a .deb file that you can install with dpkg -i. This will only work if all of the packages your package depends on are released into ROS. If you want to build multiple closed-sourced packages then you'll need to setup a custom build farm so that you can have a full release lifecycle for your packages, allowing you to release and build binaries for one package and then release yet another package which depends on that package.
Originally posted by William with karma: 17335 on 2015-04-01
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by ggg on 2015-04-03:
Zipping and copying the install does not work. ROS does not find the package.
[rospack] Error: stack/package foo not found
Tried to re-run catkin_make install in student account, but does not work either... | {
"domain": "robotics.stackexchange",
"id": 21328,
"tags": "catkin"
} |
Same photon or different photon? | Question: Consider a typical optical focusing system: A small light source, then a collimating lens, then a focussing lens, and then a detector (e.g. CCD).
Assume that source intensity is so low that only one photon enter collimating lens per second. Today's modern technology is able to produce single-photon light sources. Assume that the dark room (in which the experiment is done) is 100% dark, i.e. detector detects only the photons coming from the source.
A photon that originated from the source is detected by the CCD. Is the detected photon the same as the one which originated from source?
Between source and detector, there are two lenses. When the photon enters the first collimating lens, it will interact with the electrons inside the molecules of the material from which the lens is made, but it will not interact with nuclei of different elements present in the molecules. Is the photon that came out of this collimating lens the same as the photon which entered the collimating lens?
What is the reason that the photon falling on the edge of the focussing lens gets deviated by some angle (i.e. gets focussed) and falls on the detector?
Answer: It is a matter of definition of "same".
Classically one can define the "same" condition for particles by labels stuck on them. Light classically is a wave, and "same" needs a new definition. We apply the everyday definition by identifying the light beam with the source. The light leaving the sun is the same light arriving on earth. The light reflected from the moon is the same light. The light from a candle falling on a mirror is the same light. One can only label light by its source, imo.
Quantum mechanically light is an emergent phenomenon from a confluence of photons, and photons are elementary particles. Quantum mechanical calculations have been very successful in describing elementary particle experiments and are used extensively with success in cosmological models. The simplest basic calculation is a scatter of a particle on another particle , and the photon electron scatter can be represented as:
This diagram is used to calculate the probability of the interaction happening, which is the mathematical modeling at the quantum level. It is the dominant term in calculating the cross section for a photon hitting an electron at energies below particle creation.
Now the question becomes "is it the same photon entering and leaving the scatter diagram". The position is no different than when considering light beams, which also cannot be labeled. The input is a photon and an electron; the output is a photon and an electron. One can define the photon's existence from the source and call it "the same".
When one goes to the apparatus you describe, the electron line becomes off shell due to the interaction with the fields of the lattices it goes through, but the logic is the same. The scatter may be elastic or inelastic, and there is a probability for each case. Through the lens the probability is high to scatter elastically in the direction defined by the macroscopic optical ray.
Thus, if I were doing the experiment and got a photon hit on the CCD, and had a source of photons, I would identify it as the same photon from the source. Of course not all CCD hits come from the light source, as there are cosmics and surrounding radiation, but that will be the noise level of the experiment.
"domain": "physics.stackexchange",
"id": 30781,
"tags": "quantum-mechanics, optics, electromagnetic-radiation, photons, identical-particles"
} |
How does DC circuit deliver energy? | Question: I don't understand how energy is being transferred in a DC circuit. For example here I have a simple circuit powered by a constant voltage. Some form of energy is being transferred from the source to the light bulb and the light bulb lights up.
A constant voltage means there is a constant electric field between the terminals of the light bulb. A constant current means there exists a magnetic field of constant magnitude looping around the wire. If both the electric field and magnetic field is constant, there cannot be EM waves propagating along the wire to deliver energy to the light bulb.
If the energy is delivered by the constant electron drift, something has to move a lot faster than the drift current itself to signal or deliver the energy to the load (otherwise the light bulb wouldn't light up instantaneously). But again, both the E-field and M-field are constant, there cannot be waves propagating.
So what is delivering energy in a DC circuit and how does it work?
Answer: Even though E and B are static, by Poynting's theorem the energy that is being used is transferred via the Poynting vector
$$\vec{S} =\frac{1}{\mu_0} \vec{E} × \vec{B}$$
$\vec{E},\vec{B} \neq 0$ and they are not parallel, meaning there must be a nonzero Poynting vector.
Static fields can also have a Poynting vector. The difference is that for static fields in free space:
$$\nabla \cdot \vec{S} = 0$$
Meaning there is no net energy gain or loss in any region of free space: the amount of energy going into a region equals the amount of energy leaving that region.
Hence energy is being transferred, without "seeing" energy move.
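A quick sanity check (my own worked example, with made-up numbers): for a cylindrical resistor of length $L$ and radius $a$ carrying current $I$ with voltage $V$ across it, $\vec{E}$ is axial and $\vec{B}$ circles the surface, so $\vec{S} = \vec{E}\times\vec{B}/\mu_0$ points radially inward; integrating $|\vec{S}|$ over the side surface recovers the familiar $P = VI$:

```python
import math

mu0 = 4e-7 * math.pi            # vacuum permeability (H/m)
V, I = 12.0, 2.0                # volts across and amps through the resistor
L, a = 0.5, 0.002               # length and radius of the cylinder (m)

E = V / L                            # axial electric field at the surface
B = mu0 * I / (2 * math.pi * a)      # azimuthal magnetic field at the surface
S = E * B / mu0                      # |S|, directed radially inward

P = S * (2 * math.pi * a * L)        # flux of S through the side surface
print(P)                             # 24.0 W, i.e. exactly V*I
```

The geometry factors cancel, so the result $P = VI$ is independent of $L$ and $a$: the field carries the energy in through the sides of the wire, not along the electron drift.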
Edit: imagine the initial flipping of a switch; the EM wave will initially propagate, hitting the charges and doing work. It's just continual constant work once the wave has passed.
"domain": "physics.stackexchange",
"id": 89432,
"tags": "electromagnetism, energy, electric-circuits, power, poynting-vector"
} |
The language of all base-10 integers that are multiples of 9 | Question: If I want to represent all the base 10 integers that are multiples of 9 as a language how do I do so?
Alphabets are finite sets.
Answer: You need to check the definitions. An alphabet is a finite set of characters (or "symbols") that are used to write strings; a string is a sequence of characters from some alphabet.
When you write numbers down, you write them as a sequence of symbols. What symbols are those? That's your alphabet. | {
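Once the alphabet is fixed to the decimal digits, it is easy to see that this language is regular: a DFA with nine states can track the value of the string read so far modulo 9. A small sketch (my own illustration, treating the empty string as outside the language):

```python
def multiple_of_9(s: str) -> bool:
    """DFA over the alphabet {0,...,9}: state = value of the prefix mod 9."""
    if not s or any(c not in "0123456789" for c in s):
        return False
    state = 0
    for c in s:
        # Since 10 ≡ 1 (mod 9), this is the same as summing the digits mod 9.
        state = (state * 10 + int(c)) % 9
    return state == 0

print(multiple_of_9("81"))    # True
print(multiple_of_9("1234"))  # False
```

The observation in the comment is the classic divisibility-by-9 rule: because $10 \equiv 1 \pmod 9$, the transition function collapses to adding each digit mod 9.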
"domain": "cs.stackexchange",
"id": 5894,
"tags": "formal-languages, regular-languages"
} |
How to run electrochemical tests on Microorganism (bacteria in a medium)? | Question: I am trying to run some electrochemical tests(e.g.CV (Cyclic Voltammetry) & EIS (electrical impedance spectroscopy)) using half cell configuration on micro-organisms (e.g.bacteria). The bacteria are cultured in a medium (liquid). The half cell configuration is working, counter, and a reference (AgCl) electrodes.
Usually, it is easy to clip a metal or any thin solid material. However, I don't know how to deal with a solution. Since the beaker will be filled with electrolyte, I am afraid it mixes with bacteria solution causing it to be removed from the electrode. I am avoiding add another material like tape or something as it might alter the measurement.
I am looking for a way to attach the Micro-organism to the working electrode.
Answer: From what I understand, your main issue is connecting a solution in an electrical circuit without changing the composition of the solution due to electrochemical redox reactions.
So, there are two problems here. Firstly, passing direct current (DC) changes the composition of the solution. Secondly, a solution cannot be connected to the bridge like a metallic wire or any other solid conductor.
The first difficulty is resolved by using an alternating current (AC) source of power. The second problem is solved by using a specially designed vessel called a conductivity cell.
"domain": "chemistry.stackexchange",
"id": 15343,
"tags": "electrochemistry, biochemistry, solutions, electrolysis, cyclic-voltammetry"
} |
Why only the spectrum range of radio waves can be reflected from atmospheric layers while higher ranges (Infrared-Visible Light-Ultraviolet) can't? | Question: From communication prospective why can't higher frequency ranges be reflected from atmospheric layers and what is the effect of parameters (like temperature, pressure, etc) of the atmospheric layer on reflection of waves for sky wave and space wave propagation.
Answer: Different materials reflect, absorb, and scatter different wavelengths of light. The ionized layer at the edge of earth's atmosphere happens to reflect a certain range of wavelengths while transmitting others. This is due to the physics of the material.
The reason you can't use the atmosphere to reflect shorter wavelengths for the purposes of communication, is because the atmosphere doesn't reflect those wavelengths in the first place.
The primary parameters affecting the atmosphere's reflectivity are the amount of ionization and the depth of the ionized layer. That largely depends on solar activity. There is a significant difference in this layer between night and day. It is also subject to individual solar phenomena, like flares or coronal mass ejections.
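To put a number on it (an illustrative aside of mine, not part of the original answer): a plasma reflects electromagnetic waves below its plasma frequency $f_p = \frac{1}{2\pi}\sqrt{n_e e^2 / (\varepsilon_0 m_e)}$, where $n_e$ is the free-electron density. With a peak ionospheric density of order $10^{12}\ \mathrm{m^{-3}}$ (an assumed, typical value), this comes out near 10 MHz, squarely in the HF radio band and many orders of magnitude below infrared, visible, or ultraviolet frequencies:

```python
import math

e    = 1.602176634e-19     # electron charge (C)
me   = 9.1093837015e-31    # electron mass (kg)
eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)

def plasma_frequency(n_e):
    """Plasma frequency in Hz for electron density n_e in m^-3."""
    return math.sqrt(n_e * e**2 / (eps0 * me)) / (2 * math.pi)

f = plasma_frequency(1e12)   # assumed daytime F-layer peak density
print(f / 1e6, "MHz")        # roughly 9 MHz
```

Waves above $f_p$ pass through the layer (which is why satellite links and visible light are unaffected), while waves below it are reflected back toward the ground.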
"domain": "engineering.stackexchange",
"id": 1616,
"tags": "telecommunication, wireless-communication"
} |
What is difference between fast neutrons and thermal neutrons? | Question: Recently I was reading about neutron absorption by metals, but there are always two different categories: thermal neutrons and fast neutrons. I don't understand the difference between them!
Most importantly, I want to know whether fast neutrons or thermal neutrons are used for adding neutrons into atomic nuclei (increasing the neutron number).
Answer: Neutrons carry a certain energy, just like every other object in the universe carries energy. Your body might also have more energy during the day than in the evenings.
So a neutron loses energy over time with every interaction (except for cold or ultra-cold neutrons in a hotter environment). Due to their neutral electrical charge, it is almost impossible to accelerate them again. I am attaching here a cosmic-ray neutron spectrum edited from this paper:
The plot shows how incoming high-energy neutrons lose energy through interactions with atoms (e.g. in the atmosphere and ground), passing through classes like fast and epithermal neutrons until they are thermalized. At that point their energy is so low that it equals the thermal energy of the surrounding material. In this thermal domain, the ping-pong interactions with surrounding particles keep their mean energy roughly constant. Note that "thermal" on Earth means something else than in space, as @Vadim pointed out regarding the relationship to temperature.
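For scale (my own worked numbers, not from the original answer): "thermal" at room temperature corresponds to a characteristic kinetic energy of order $k_B T \approx 0.025\ \mathrm{eV}$, which is the conventional 2200 m/s reference speed for thermal neutrons:

```python
import math

kB = 1.380649e-23        # Boltzmann constant (J/K)
mn = 1.67492749804e-27   # neutron mass (kg)
eV = 1.602176634e-19     # J per eV

T = 293.0                      # room temperature (K)
E = kB * T                     # characteristic thermal energy (J)
v = math.sqrt(2 * E / mn)      # corresponding neutron speed

print(E / eV)   # ~0.025 eV
print(v)        # ~2200 m/s
```

Fast neutrons, by contrast, carry energies in the keV-to-MeV range, which is why the two classes interact so differently with nuclei.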
Funny thing is, the likelihood of neutron interaction with atoms (called "cross section") highly depends on the neutron energy. In this plot (also from that paper) you can see that thermal neutrons are much more likely to interact with, e.g., hydrogen H, than high-energy neutrons.
You can look up those cross sections at ENDF. Note that they are different for different interaction processes, e.g., scattering or absorption, but this is another story. | {
"domain": "physics.stackexchange",
"id": 69703,
"tags": "neutrons"
} |
Adding two numbers, each digit as a single element of array | Question: After failing to answer an interview question, I went back home and coded the answer for my own improvement.
The question was: Given two arrays containing a number, but with each element of the array containing a digit of that number, calculate the sum and give it in the same format. There must be no leading zeros in the resulting array.
My main concern is that I am being redundant with my logic, or that my code is not clear.
int *sumOfTwo(int *a1,int *a2, int lengthOfA1, int lengthOfA2, int *lengthDest) {
//I assume there is error checking for null pointers, etc.
int *answer;
int maxSize, minSize, diff;
int a1IsLongest = TRUE;
if(lengthOfA1 > lengthOfA2) {
maxSize = lengthOfA1;
minSize = lengthOfA2;
a1IsLongest = TRUE;
}
else {
maxSize = lengthOfA2;
minSize = lengthOfA1;
a1IsLongest = FALSE;
}
diff = maxSize - minSize;
answer = malloc(maxSize * sizeof(int));
int i, extraTen, result;
extraTen = 0;
*lengthDest = maxSize;
//printf("This is lengthDest: %d\n", *lengthDest);
for (i = maxSize-1; i >= 0; i--) {
//add numbers, check if bigger than 10
//check if arrays are of different sizes we want to access
//the end from each, so need to account for this.
if(diff == 0) {
//arrays are equal in size
result = a1[i] + a2[i] + extraTen;
extraTen = 0;
}
else{
//arrays are different sizes
if(! ((i - diff) < 0)) {
if(a1IsLongest) {
result = a1[i] + a2[i - diff] + extraTen;
extraTen = 0;
}
else {
result = a1[i - diff] + a2[i] + extraTen;
extraTen = 0;
}
}
else {
if(a1IsLongest) {
result = a1[i] + extraTen;
extraTen = 0;
}
else {
result = a2[i] + extraTen;
extraTen = 0;
}
}
}
//the new malloc can only happen here
if (result >= 10) {
//check if at beginning of array
if(i == 0) {
maxSize = maxSize + 1;
*lengthDest = maxSize;
//printf("This is lengthDest: %d\n", *lengthDest);
//overflowing, need to realloc
int *temp = malloc(maxSize * sizeof (int));
int j;
//copy old elements into new array
for(j = 0; j < maxSize; j++) {
//printf("index: %d\n", j);
if(j == 0) {
temp[j] = 1;
}
else if(j == 1) {
temp[j] = result % 10;
}
else {
temp[j] = answer[j-1];
}
}
free(answer);
answer = temp;
return answer;
}
else{
//printf("in the else of the i equals 0.\n");
answer[i] = result % 10;
extraTen = 1;
}
}
else {
//printf(" results is less than 10\n");
answer[i] = result;
}
}
return answer;
}
Answer:
int *sumOfTwo(int *a1,int *a2, int lengthOfA1, int lengthOfA2, int *lengthDest) {
//I assume there is error checking for null pointers, etc.
int *answer;
int maxSize, minSize, diff;
int a1IsLongest = TRUE;
if(lengthOfA1 > lengthOfA2) {
maxSize = lengthOfA1;
minSize = lengthOfA2;
a1IsLongest = TRUE;
}
else {
maxSize = lengthOfA2;
minSize = lengthOfA1;
a1IsLongest = FALSE;
}
diff = maxSize - minSize;
answer = malloc(maxSize * sizeof(int));
This seems more complicated than necessary. Consider
typedef struct Number {
int *digits;
int length;
} Number;
Number *sumOfTwo(const Number *a1, const Number *a2) {
//I assume there is error checking for null pointers, etc.
if (a1->length > a2->length) {
return _sumOfTwo(a1, a2);
} else {
return _sumOfTwo(a2, a1);
}
}
Number *_sumOfTwo(const Number *a1, const Number *a2) {
Number *answer;
answer = malloc(sizeof(*answer));
answer->length = (a1->length + 1);
answer->digits = calloc(answer->length, sizeof(*(answer->digits)));
Now, a1 is always as long as or longer than a2. So you can do away with your temporary variables. So far, it's about the same length, but we've simplified the function signature.
if (answer == NULL) {
// do something
}
if (answer->digits == NULL) {
// do something
}
As noted elsewhere, you should check that malloc and calloc succeed.
int i, extraTen, result;
extraTen = 0;
*lengthDest = maxSize;
//printf("This is lengthDest: %d\n", *lengthDest);
for (i = maxSize-1; i >= 0; i--) {
//add numbers, check if bigger than 10
//check if arrays are of different sizes we want to access
//the end from each, so need to account for this.
if(diff == 0) {
//arrays are equal in size
result = a1[i] + a2[i] + extraTen;
extraTen = 0;
}
else{
//arrays are different sizes
if(! ((i - diff) < 0)) {
if(a1IsLongest) {
result = a1[i] + a2[i - diff] + extraTen;
extraTen = 0;
}
else {
result = a1[i - diff] + a2[i] + extraTen;
extraTen = 0;
}
}
else {
if(a1IsLongest) {
result = a1[i] + extraTen;
extraTen = 0;
}
else {
result = a2[i] + extraTen;
extraTen = 0;
}
}
}
Now this code we can make a lot shorter with fewer checks.
int i = 0;
for (; i < a2->length; i++) {
answer->digits[i] += a1->digits[i] + a2->digits[i];
process_carry(answer->digits, i);
}
for (; i < a1->length; i++) {
answer->digits[i] += a1->digits[i];
process_carry(answer->digits, i);
}
Since a1 is always at least as long as a2, we don't have to check the lengths.
We avoid checking the diff by changing the order from most significant digit at the zero position to the least significant at the zero position. Now we have to use two loops, but we don't have to check anything.
We can use += because calloc zeroes out its elements. If there's a carry, we add to it. If not, there's already a 0 there.
Create process_carry:
const int BASE = 10;
void process_carry(int *digits, int position) {
if (digits[position] >= BASE) {
digits[position] -= BASE;
digits[position+1] += 1;
}
}
You can create a more generic version by replacing the if with a while. Or
void process_carry(int *digits, int position) {
digits[position+1] += digits[position] / BASE;
digits[position] %= BASE;
}
This may be more efficient than using a while.
Note how neither version requires an extraTen variable.
This also uses BASE rather than the magic number 10. Now if you want to change to a different base, you can do so easily.
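To see the reviewer's scheme end to end, here is a compact Python transcription (my sketch, not part of the review) showing the whole flow: zero-filled result one digit longer than the inputs, least-significant-first carry propagation, and stripping at most one leading zero:

```python
def sum_digit_arrays(a1, a2, base=10):
    """Add two numbers stored as digit lists (most significant digit first),
    mirroring the reviewed approach: iterate least-significant-first over a
    zero-initialised result one digit longer than the longer input, then
    strip at most one leading zero."""
    if len(a1) < len(a2):
        a1, a2 = a2, a1                     # a1 is now the longer array
    r1, r2 = a1[::-1], a2[::-1]             # least significant digit at index 0
    out = [0] * (len(r1) + 1)               # plays the role of calloc
    for i in range(len(r1)):
        out[i] += r1[i] + (r2[i] if i < len(r2) else 0)
        out[i + 1] += out[i] // base        # process_carry, division form
        out[i] %= base
    if len(out) > 1 and out[-1] == 0:       # drop the unused leading digit
        out.pop()
    return out[::-1]                        # back to most-significant-first

print(sum_digit_arrays([9, 9], [1]))        # [1, 0, 0]
print(sum_digit_arrays([1, 2, 3], [4, 5]))  # [1, 6, 8]
```

Note how the division form of the carry removes every special case the original C version needed (length difference, final overflow, extraTen).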
//the new malloc can only happen here
if (result >= 10) {
//check if at beginning of array
if(i == 0) {
maxSize = maxSize + 1;
*lengthDest = maxSize;
//printf("This is lengthDest: %d\n", *lengthDest);
//overflowing, need to realloc
int *temp = malloc(maxSize * sizeof (int));
int j;
//copy old elements into new array
for(j = 0; j < maxSize; j++) {
//printf("index: %d\n", j);
if(j == 0) {
temp[j] = 1;
}
else if(j == 1) {
temp[j] = result % 10;
}
else {
temp[j] = answer[j-1];
}
}
free(answer);
answer = temp;
return answer;
}
else{
//printf("in the else of the i equals 0.\n");
answer[i] = result % 10;
extraTen = 1;
}
}
else {
//printf(" results is less than 10\n");
answer[i] = result;
}
}
return answer;
This could be a lot simpler.
if (answer->length > 1 && answer->digits[answer->length - 1] == 0) {
answer->length--;
int *shrunk = realloc(answer->digits, answer->length * sizeof(answer->digits[0]));
if (shrunk == NULL) {
free(answer->digits);
// fail gracefully
} else {
answer->digits = shrunk;
}
}
return answer;
Now we don't have to copy. And we already handled the carry. We just resize the array if we made it too big to start. And then return the result.
Note that this allows for the situation where we are returning zero while stripping off a single leading zero otherwise. | {
"domain": "codereview.stackexchange",
"id": 19039,
"tags": "c"
} |
What makes columnar databases suitable for data science? | Question: What are some of the advantages of columnar data-stores which make them more suitable for data science and analytics?
Answer: A column-oriented database (=columnar data-store) stores the data of a table column by column on the disk, while a row-oriented database stores the data of a table row by row.
There are two main advantages of using a column-oriented database in comparison
with a row-oriented database. The first advantage relates to the amount of data one’s
need to read in case we perform an operation on just a few features. Consider a simple
query:
SELECT correlation(feature2, feature5)
FROM records
A traditional executor would read the entire table (i.e. all the features):
Instead, using our column-based approach we just have to read the columns which we are interested in:
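A back-of-the-envelope sketch (the table dimensions are made up for illustration) of how much less data such a query touches:

```python
# Hypothetical table: 1M rows, 10 fixed-width features of 8 bytes each.
# A row store must scan whole rows even when the query needs 2 columns;
# a column store scans only the 2 requested columns.
n_rows, n_cols, bytes_per_value = 1_000_000, 10, 8
wanted_cols = 2                           # correlation(feature2, feature5)

row_store_read = n_rows * n_cols * bytes_per_value
col_store_read = n_rows * wanted_cols * bytes_per_value
print(row_store_read // col_store_read)   # 5x less I/O for this query
```

The ratio is simply total columns divided by requested columns, so the fewer features a query touches, the bigger the win.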
The second advantage, which is also very important for large databases, is that column-based storage allows better compression, since the data within one specific column is far more homogeneous than the data across all the columns.
The main drawback of a column-oriented approach is that manipulating (lookup, update or delete) an entire given row is inefficient. However, this situation should occur rarely in databases for analytics ("warehousing"), where most operations are read-only, queries rarely read many attributes of the same table, and writes are append-only.
Some RDBMS offer a column-oriented storage engine option. For example, PostgreSQL natively has no option to store tables in a column-based fashion, but Greenplum has
created a closed-source one (DBMS2, 2009). Interestingly, Greenplum is also behind
the open-source library for scalable in-database analytics, MADlib (Hellerstein et al.,
2012), which is no coincidence. More recently, CitusDB, a startup working on high speed, analytic database, released their own open-source columnar store extension for
PostgreSQL, CSTORE (Miller, 2014). Google’s system for large scale machine learning
Sibyl also uses column-oriented data format (Chandra et al., 2010). This trend
reflects the growing interest around column-oriented storage for large-scale analytics.
Stonebraker et al. (2005) further discuss the advantages of column-oriented DBMS.
Two concrete use cases: How are most datasets for large-scale machine learning stored?
(most of the answer comes from Appendix C of: BeatDB: An end-to-end approach to unveil saliencies from massive signal data sets. Franck Dernoncourt, S.M, thesis, MIT Dept of EECS) | {
"domain": "datascience.stackexchange",
"id": 554,
"tags": "databases, tools"
} |
Effect of sampling rate on BER simulation | Question: I am trying to build a bit error rate simulation for a digital baseband signal
(specifically using Manchester coding). The simulator generates a digital waveform from random symbols at a sample rate of at least $f_s = 2/T_l$, where $T_l$ is the chip duration (the duration of the "on" time).
The transmitted signal is mixed with AWGN, based on the desired S/N ratio. The detector correlates each symbol period of the received signal with an ideal 0 waveform and an ideal 1 waveform, and chooses the better match. Each "ideal" waveform is 1 V during the on time and 0 V during the off time. These ideal waveforms are sampled at the same rate as the input.
What I am seeing is that the BER improves when the sampling rate is increased, for a given SNR (see plot). This was unexpected, since I believed that the error rate would be independent of the sampling rate.
Am I missing something in this BER simulation? Shouldn't the error rate be independent of the sampling rate?
Answer: Another hypothesis: as you increase the sample rate you also increase the bandwidth. Depending on how you generate the noise, this probably means that the spectral density of your noise decreases. If the spectral content of your signal stays constant, that means that the in-band SNR goes up. | {
"domain": "dsp.stackexchange",
"id": 8927,
"tags": "signal-analysis, self-study"
} |
How to prove this identity for the complex conjugate of linear operator? | Question: I want to prove the following identity:
$$\langle v|\Omega^{\dagger}|u\rangle = \langle u|\Omega | v \rangle^*$$
How should I go about this? I believe I can prove it when $\Omega$ is hermitian, but I do not know how to prove it in general.
Answer: The complex conjugate operation distributes in reverse order. Let a, b, c be some numbers on which the conjugation can act. Then,
$$
(abc)^* = c^* b^* a^*
$$
For the general case where a, b, c are not numbers but could be a state, an operator, etc., such that the product is a number, one needs to use the Hermitian conjugate, of course. So,
$$\langle u | \Omega | v \rangle^* = | v\rangle^\dagger \Omega^\dagger \langle u |^\dagger $$
and of course, due to the fact that the conjugate of a bra is a ket and vice versa, one can conclude the proof. | {
"domain": "physics.stackexchange",
"id": 48624,
"tags": "quantum-mechanics, operators, hilbert-space, notation, linear-algebra"
} |
Show that if the Lindblad satisfy $\sum_\mu L_\mu L_\mu^\dagger=\sum_\mu L_\mu^\dagger L_\mu$ then the von Neumann entropy increases monotonically | Question: How can we show that when the Lindblad operators satisfy the condition:
$$\sum_{\mu}L_{\mu} L_{\mu}^{\dagger} = \sum_{\mu} L_{\mu}^{\dagger}L_{\mu},\tag{1}$$
the master equation evolution monotonically increases the von Neumann entropy. When measurements are made in the basis in which $\rho$ is diagonal, the von Neumann entropy coincides with the Boltzmann-Shannon entropy.
I have worked in the basis that diagonalises $\rho$, and I have used the necessary condition for the von Neumann entropy to increase monotonically, but I do not see how to proceed to the next step.
Answer: I'll show you how to do it by brute force, since this will demonstrate a lot of techniques that will be useful for you if you have to derive something more complicated.
The Lindblad evolution:
$$
\frac{\textrm{d}\rho}{\textrm{d}t} = -\frac{\textrm{i}}{\hbar}[H,\rho] + \sum_{\mu,\nu} h_{\mu,\nu} \left( L_\mu \rho L_\nu^\dagger -\frac{1}{2}\left\{ L^\dagger_\nu L_\mu ,\rho \right\}\right), \tag{1}
$$
can be re-written in diagonal form:
$$\tag{2}
\frac{\textrm{d}\rho}{\textrm{d}t} = -\frac{\textrm{i}}{\hbar}[H,\rho] + \sum_{\mu} \gamma_{\mu} \left( L_\mu \rho L_\mu^\dagger -\frac{1}{2}\left\{ L^\dagger_\mu L_\mu ,\rho \right\}\right),
$$
and in the special case in which we have:
$$\tag{3}
\sum_\mu L_\mu L_\mu^\dagger = \sum_\mu L_\mu^\dagger L_\mu,
$$
we get:
$$\tag{4}
\frac{\textrm{d}\rho}{\textrm{d}t} = -\frac{\textrm{i}}{\hbar}[H,\rho] + \sum_{\mu} \gamma_{\mu} \left( L_\mu \rho L_\mu^\dagger -\frac{1}{2}\left( L^\dagger_\mu L_\mu\rho + \rho L_\mu L^\dagger_\mu \right)\right).
$$
Let us now look at the von Neumann entropy:
$$
S = - \textrm{Tr}\left( \rho \ln \rho \right),\tag{5}
$$
and its rate of change:
\begin{align}
\frac{\textrm{d}S}{\textrm{d}t} &= - \textrm{Tr}\left( \frac{\textrm{d}\rho}{\textrm{d}t} \ln \rho + \rho \frac{\textrm{d}\ln\rho}{\textrm{d}t}\right)\tag{6}\\
&= - \textrm{Tr}\left( \frac{\textrm{d}\rho}{\textrm{d}t} \ln \rho + \rho \frac{1}{\rho}\frac{\textrm{d}\rho}{\textrm{d}t}\right),\tag{7}\\
&= - \textrm{Tr}\left( \frac{\textrm{d}\rho}{\textrm{d}t} \ln \rho + \frac{\textrm{d}\rho}{\textrm{d}t}\right),\tag{8}\\
&=- \textrm{Tr}\left( \frac{\textrm{d}\rho}{\textrm{d}t} \ln \rho\right) - \textrm{Tr} \left(\frac{\textrm{d}\rho}{\textrm{d}t} \right).\tag{9}\\
\end{align}
Let me start the right-most term for you:
\begin{align}
\textrm{Tr} \left(\frac{\textrm{d}\rho}{\textrm{d}t} \right) & =
\textrm{Tr} \left( -\frac{\textrm{i}}{\hbar}[H,\rho] + \sum_{\mu} \gamma_{\mu} \left( L_\mu \rho L_\mu^\dagger -\frac{1}{2}\left( L^\dagger_\mu L_\mu\rho + \rho L_\mu L^\dagger_\mu \right)\right)\right)\tag{10}\\
& =
\sum_{\mu} \gamma_{\mu}\textrm{Tr} \left( L_\mu \rho L_\mu^\dagger -\frac{1}{2}\left( L^\dagger_\mu L_\mu\rho + \rho L_\mu L^\dagger_\mu \right)\right)\tag{11}\\
& =
\sum_{\mu} \gamma_{\mu} \left( \textrm{Tr}\left(L_\mu \rho L_\mu^\dagger\right) -\frac{1}{2}\textrm{Tr}\left( L^\dagger_\mu L_\mu\rho \right) - \frac{1}{2}\textrm{Tr}\left(\rho L_\mu L^\dagger_\mu \right)\right)\tag{12}\\
&=\sum_{\mu} \gamma_{\mu} \left( \textrm{Tr}\left(L_\mu^\dagger L_\mu \rho \right) -\frac{1}{2}\textrm{Tr}\left( L^\dagger_\mu L_\mu\rho \right) - \frac{1}{2}\textrm{Tr}\left( L^\dagger_\mu L_\mu\rho \right)\right)\tag{13}\\
& = 0. \tag{14}
\end{align}
I used the cyclic property of the trace operator, and the fact that it is linear, in the last set of equations.
We now have:
$$
\frac{\textrm{d}S}{\textrm{d}t} = - \textrm{Tr}\left( \frac{\textrm{d}\rho}{\textrm{d}t} \ln \rho\right) .\tag{15}\\
$$
To do this we have to examine various pieces:
\begin{align}
&\textrm{Tr} \left( [H,\rho]\ln \rho \right)\tag{16}\\
=& \textrm{Tr} \left( H\rho \ln \rho - \rho H \ln \rho\right) \tag{17} \\
= &0. \tag{18}
\end{align}
That works because we have three operators in the argument of the trace operator, and they are all Hermitian (this wouldn't necessarily work if we had the trace of a product of 4 or more Hermitian operators).
Let us now look at another piece:
\begin{align}
&-\frac{1}{2}\textrm{Tr} \left( L^\dagger L \rho \ln \rho \right) -\frac{1}{2}\textrm{Tr} \left( \rho L^\dagger L \ln \rho \right) \tag{19}\\
=& -\textrm{Tr} \left( L^\dagger L \rho \ln \rho \right).
\end{align}
The above result comes from using the cyclic property of the trace for the first term and the ability to equate arbitrary permutations for cases in which the argument of the trace operator is the product of three Hermitian arguments (treating $L^\dagger L$ as a single Hermitian operator).
So now we're left with:
$$
\frac{\textrm{d}S}{\textrm{d}t} = -\sum_\mu \gamma_\mu\left( \textrm{Tr}\left(
L_\mu \rho L_\mu^\dagger \ln \rho \right) - \textrm{Tr}\left(
L_\mu^\dagger L_\mu \rho \ln \rho \right)\right) .\tag{20}\\
$$
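As a purely numerical sanity check of eq. (20) (my addition, not part of the original answer): for Hermitian jump operators, which trivially satisfy condition (3), the rate in eq. (20) does come out non-negative for randomly sampled full-rank states:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy_rate(rho, L, gamma=1.0):
    """Eq. (20) for a single Hermitian jump operator L:
    dS/dt = -gamma * Tr((L rho L^dag - L^dag L rho) ln rho)."""
    w, V = np.linalg.eigh(rho)
    log_rho = V @ np.diag(np.log(w)) @ V.conj().T
    term = L @ rho @ L.conj().T - L.conj().T @ L @ rho
    return -gamma * np.trace(term @ log_rho).real

d = 4
for _ in range(100):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    L_mu = (A + A.conj().T) / 2              # Hermitian jump operator
    B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = B @ B.conj().T
    rho /= np.trace(rho).real                # random full-rank density matrix
    assert entropy_rate(rho, L_mu) >= -1e-9  # dS/dt >= 0 up to roundoff
print("non-negative in all sampled cases")
```

This is evidence, not a proof; the analytic final step is still the exercise posed below.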
That would be 0 if $L_\mu^\dagger = L_\mu^{-1}$ but can you otherwise show that $\frac{\textrm{d}S}{\textrm{d}t}$ never changes sign (i.e. that it's either monotonically increasing or monotonically decreasing)? | {
"domain": "quantumcomputing.stackexchange",
"id": 3295,
"tags": "density-matrix, open-quantum-systems"
} |
What is a virtual state? | Question: In talking about Raman spectroscopy, one finds the Stokes line is simply the difference between the energy of an incoming photon and an emitted photon. This energy corresponds to a vibrational transition in terms of energy. Yet, the light sent at the molecule is usually much higher energy than that vibrational transition (or else we would just be doing IR).
The explanation I've heard is that the state which holds all that energy in between the ground vibrational state and decaying back to the first vibrationally excited state is a virtual state.
From doing some reading, I have gathered that a virtual state is called virtual not because it's imaginary, but because it's not an eigenfunction of the hamiltonian of the system.
Where does this state come from? I've heard that it drops out of some perturbation theory. Because Raman is intricately connected with polarizabilities, I imagine this might be perturbations to the time-dependent Schroedinger equation.
So, to put some actual questions out there:
What is meant by "virtual state"? In what sense is it virtual?
Can anyone show me the perturbation theory where this comes from? Even if it is just a reference, that would be greatly appreciated.
I have also seen this is somehow connected with Feshbach Resonance which I know nothing about, so an answer which addresses that point would be a good one.
Answer: In most Raman experiments, the incident radiation is not near (or at) an absorbing wavelength, and so you will never access a real, honest-to-God excited state (stationary state). (If it was close to an absorbing wavelength, we would be taking about a resonance Raman experiment.)
Now obviously something is going on between the molecule and the incident light. What you are really doing is preparing a superposition state. A superposition of real excited states, or stationary states, of your Hamiltonian. This is what we call a "virtual state," and it is not an eigenstate of your Hamiltonian, so its energy is undefined. (You are free to compute an expectation value, though.) This virtual state doesn't last forever, and, when it decays, scatters radiation away in some direction (it doesn't matter where, unless you are doing an angle-resolved Raman experiment).
(By the way, if you look closely at the equation porphyrin gave, the sum over $v$ virtual states is actually a sum over real $v$ excited stationary states, such that $E_v$ is defined. Otherwise that expression is nonsense. This equation essentially is the sum of overlaps of the initial state into the states that make up your virtual state, and then multiplied by the sum of overlaps of the excited states that make up the virtual state into the final state. This is the Kramers-Heisenberg-Dirac expression for Raman amplitudes.)
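For reference (my addition: this is the schematic structure of the expression alluded to above, up to constants, damping factors, and polarization indices), the KHD Raman amplitude between initial state $|i\rangle$ and final state $|f\rangle$ reads roughly

$$\alpha_{fi} \propto \sum_v \left[ \frac{\langle f|\hat{\mu}|v\rangle \langle v|\hat{\mu}|i\rangle}{E_v - E_i - \hbar\omega} + \frac{\langle f|\hat{\mu}|v\rangle \langle v|\hat{\mu}|i\rangle}{E_v - E_f + \hbar\omega} \right],$$

where the sum runs over real stationary states $|v\rangle$ with well-defined energies $E_v$, $\hat{\mu}$ is the dipole operator, and $\hbar\omega$ is the incident photon energy. Each term becomes large when the incident light is near resonance with a real state, which is exactly the resonance Raman limit mentioned above.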
If the superposition collapses back to the ground state, the scattered radiation will have the same wavelength as the incident light, and we have Rayleigh scattering. If the superposition collapses to an excited vibrational state, we have Stokes Raman. If your initial state was vibrationally excited and you collapse to a lower vibrational state, you get anti-Stokes Raman. In an actual experiment you will observe all types and can see a Rayleigh line in the spectrum. Most spectrometers will filter this wavelength out, however, leaving just the Raman peaks. Usually the virtual state is dominated by the ground state and most of the scattered light is Rayleigh. This is why Raman spectroscopy was too weak to be useful until we had better optical components.
Obviously this isn't the only way to think about Raman. You can use path integrals and/or QED if you want, which is much closer to Wildcat's answer. In fact, I think you must use QED if you want to work out virtual state lifetimes. Here is a good reference if you want to dig a bit deeper:
Williams, Stewart O., and Dan G. Imre. "Raman spectroscopy:
time-dependent pictures." The Journal of Physical Chemistry 92, no. 12
(1988): 3363-3374. | {
"domain": "chemistry.stackexchange",
"id": 6842,
"tags": "quantum-chemistry, spectroscopy"
} |
Simple and practical deterministic algorithm, complicated running time | Question: Very often, if the running time of an algorithm is a complicated expression, the algorithm itself is also complicated and impractical. Each of the cube roots and $\log \log n$ factors in the asymptotic running time tends to add complexity to the algorithm and also hidden constant factors to the running time.
Do we have striking examples in which this rule of thumb fails?
Of course it is easy to find examples of algorithms that are very difficult to implement even though they happen to have a very simple worst-case running time. But what about the converse?
Do we have examples of very simple and practical deterministic algorithms that are easy to implement but happen to have a very complicated expression as its worst-case asymptotic running time?
Please note the keywords "deterministic" and "worst-case"; the analysis of simple randomised algorithms fairly easily leads to complicated expressions.
Of course what is "complicated" is a matter of taste. Anyway, I would prefer to see an expression that is far too ugly to put in the title of your paper. And I would prefer a complicated function of one natural parameter (input size, number of nodes, etc.).
PS. I thought I would not make this a "big-list question", and not CW. I'd like to find a single excellent example (if it exists at all). Hence please post another answer only if you think that it is "better" than any of the answers so far.
Answer: The best example I can think of is an algorithm (described below) to compute the $k$-level in an arrangements of $n$ lines in the plane, i.e. the polygonal line formed by the points that have exactly $k$ lines vertically above it. This is not the most efficient algorithm known for the problem. There are more efficient algorithms with simpler complexities, but I believe this one is more practical than most (if not all) of them. The analysis is probably not tight, because it uses the $k$-level complexity, which is a famous open problem (I think all other terms in the analysis are tight). Even still, I doubt improved bounds for $k$-level would make the running time much simpler. I'll assume $k=n/2$ to write the complexity as a function of $n$ alone.
The algorithm is based on the line sweep paradigm and uses two $(\log n)$-ary kinetic tournaments as kinetic priority queues. Insertions and deletions are performed when a line goes above or below the $k$-level, moving a line from one kinetic tournament to the other. Therefore, there are $O(n^{4/3})$ insertions and deletions (using Dey's bound for the $k$-level complexity). Each event is processed in $O(\log n)$ time and there are $O(n^{4/3} \alpha(n) \log n / \log \log n)$ events (the $\alpha(n)$ comes from the complexity of the upper envelope of arrangements of line segments, while the $\log n / \log \log n$ comes from the height of a $(\log n)$-ary tree). The total running time is
$$O(n^{4/3} \alpha(n) \log^2 n / \log \log n).$$
Please check Timothy Chan's manuscript http://www.cs.uwaterloo.ca/~tmchan/lev2d_7_7_99.ps.gz for more details and references. The $1/\log \log n$ factor can be removed by using a binary (instead of $(\log n)$-ary) kinetic tournament, but the latter actually speeds up the kinetic priority queue in the tests that I performed. The complexity should get a little uglier and worse (while the algorithm will still be practical) if a kinetic heap is used instead of a kinetic tournament (a $\log$ inside a square root should show up). | {
"domain": "cstheory.stackexchange",
"id": 395,
"tags": "ds.algorithms, soft-question, recreational"
} |
Binary/yijing clock in Processing with excessive resource consumption | Question: I've been learning Processing for the last week or two, and I've made a couple of little 'desktop toy' type apps like this one that I'm quite pleased with. However, it just occurred to me today to look at the activity monitor while my app was running, and I was unpleasantly surprised – nobody wants a desktop toy that's consuming more resources than anything else on the system.
So far throughout my coding education/experience I've been told not to worry about optimisation 'yet', and haven't really been offered much guidance re: efficient coding (faculty are mainly interested in code that's easy to mark).
Here's what the app is presenting to the user:
Versus the resources it's consuming:
The source is in two parts: the main .pde and another containing the 'Gua' (hexagram) class.
yijingClock.pde
PGraphics mins;
PGraphics hrs;
float fadeAmount;
static final float fadeMax = 1440; //1440 means 1 step per frame takes 1 minute at 24fps
void setup() {
size(500, 500);
colorMode(RGB, 255, 255, 255, fadeMax);
background(255);
imageMode(CENTER); // All images in the sketch are drawn CENTER-wise
frameRate(24); // Affects the smoothness and speed of fade();
mins = createGraphics(width, height);
hrs = createGraphics(width, height);
noFill();
stroke(100);
//polygon(5, width/2, height/2, width / 4 * 3, height / 4 * 3, -PI / 2); // Draw a static pentagon as the background
fill(0);
fadeAmount = 0;
} // end setup()
void draw() {
//let's fade instead of redrawing the background to expose change over time
fadeAmount = map(System.currentTimeMillis() % 60000, 0, 60000, 1, fadeMax); // new way explicitly ties the fade amount to the real current second
//println(fadeAmount);
fade(fadeAmount);
drawMins();
drawHrs();
}// end draw()
void drawMins() {
Gua gua = new Gua(minute());
mins = gua.drawGua(color(0, 0, 0, constrain(fadeAmount*2, 100, fadeMax)));
image(mins, width/2.0, height/2.0, width/2.5, height/2.5);
}// end drawMins()
void drawHrs() {
float angle = TWO_PI / 5; // To arrange them in a pentagon
float startAngle = -PI / 2; // To put the first point at the top
String binHr = binary(hour(), 5); // Use modulo 12 if we want 12 hour time, but that's a waste of bits imo
char[] binHrArr = reverse(binHr.toCharArray()); // We're reversing this to match the endianness of the yijing, and the arrangement of a clock.
hrs = createGraphics(width, height); // Clearing the previous state
hrs.beginDraw();
hrs.clear(); // Trying to make the background transparent so we can layer things
hrs.imageMode(CENTER);
for (int i = 4; i >= 0; i--) {
PGraphics bit = hourBit(binHrArr[i]);
hrs.image(bit, width/2 + (width/2.1) * cos(startAngle + angle * i),
height/2 + (height/2.1) * sin(startAngle + angle * i),
25, 25);
} // end for
hrs.endDraw();
image(hrs, width/2, height/2, (width/4)*3, (height/4)*3);
}// end drawHrs()
// Returns a full circle for true and a hollow one for false
PGraphics hourBit(char state) {
PGraphics bit = createGraphics(40, 40);
bit.beginDraw(); // Start drawing to this buffer...
bit.imageMode(CENTER);
//bit.clear();
bit.stroke(0);
// Fill colour based on state, 1 = filled.
if (state == '1') {
bit.fill(0, fadeAmount); // They fade in
} else {
bit.fill(255);
} // end if
bit.ellipse(20, 20, 30, 30);
bit.endDraw();
return bit;
} // end hourBit()
/**
Copypasted from https://processing.org/tutorials/anatomy/
Originally i was drawing a pentagon but mainly i just needed to learn
how to get the coordinates of the points.
**/
void polygon(int n, float cx, float cy, float w, float h, float startAngle) {
float angle = TWO_PI/ n;
// The horizontal "radius" is one half the width,
// the vertical "radius" is one half the height
w = w / 2.0;
h = h / 2.0;
beginShape();
for (int i = 0; i < n; i++) {
vertex(cx + w * cos(startAngle + angle * i),
cy + h * sin(startAngle + angle * i));
}
endShape(CLOSE);
}
/**
Draws a semitransparent background, which fades everything on
screen by a given amount
**/
void fade(float amount) {
fill(255, amount); // hardcoding bc 1. im lazy, and 2. this sketch should always be b/w
rectMode(CENTER);
rect(width/2, height/2, width, height);
}// end fade
Gua.pde
class Gua {
int[] yao = new int[6];
PImage img;
int intValue = 0; // Used to cast the binary to base 10 to represent the hexagram that way
static final int height = 300; // Need a good alternative to using these constants here...
static final int width = 300;
static final int lineHeight = height / 12; // the height of a line; voids are equal height to lines
static final int segmentSize = width / 5; // Used to determine the ratio of line to white in yin line; must be an odd number to allow middle to be empty
private int x = 0; // remember that (0,0) is the top left corner. Not using x really.
private int y = 0; // Used when moving down the hexagram while drawing it
/**
Default constructor instantiates a random hexagram. This hexagram exists in the memory,
and must be drawn to the canvas in a seperate operation.
**/
Gua() {
for (int i = 0; i < 6; i++) {
yao[i] = round(random(0, 1));
//print(yao[i]);
}
//println();
} // end Gua()
// Construct a hexagram with a given (base 10) value.
Gua(int value) {
String binary = binary(value, 6);
//println("parsed binary: " + binary);
for (int i = 0; i < binary.length(); i++) {
yao[i] = Character.getNumericValue(binary.charAt(i));
//print(yao[i]);
}
//println();
} // end Gua(int)
// Convenience version comes in black
PGraphics drawGua() {
return drawGua(color(0,0,0));
} // end drawGua()
/** Draw a gua of a given colour **/
PGraphics drawGua(color col) {
PGraphics pg = createGraphics(width, height); // the image buffer we will return
pg.beginDraw();
//pg.clear();
pg.noStroke();
pg.fill(col);
//pg.background(255, alpha(col));
int i = 5; // we have to read them backwards because tradition places the "smallest bit" at the top
for (y = 0; y <= 250; y = y + 50) {
if (yao[i] == 0) { // yin
pg.rect(x, y, (segmentSize * 2), lineHeight); // left side
pg.rect(segmentSize * 3, y, (segmentSize * 2), lineHeight); // right side
//println("-- --");
} // end if
else { // yang
pg.rect(x, y, width, lineHeight);
//println("-----");
} // end else
i--;
} //end for
pg.endDraw();
return pg;
}// end drawGua(colour)
String toString() {
return join(str(yao), "");
}// end toString()
int toInt() {
return unbinary(toString());
}// end toInt()
} // End Gua
I probably don't need to point out that I'm a total beginner, so please, even the most obvious optimisation tips would be appreciated. If the answer is that I've written everything totally wrong and should start over, honestly that would be even better, as I feel I need to start developing good habits ASAP.
Answer: Something you need to learn about performance in Java is that if performance is critical to you, Java might not be the language for you. Your code is not directly translated into machine code; the compiler interprets your code and rewrites it with its own optimization algorithms before translating it to machine code. Of course you can make minor changes to your code for slight performance improvements, but on a general scale, going for performance becomes more a game of "how to trick the compiler" than a pure performance increase through code.
To your actual code
Comments
You're commenting way too much. If this is for your own learning process and no one else has to look at the code, that's fine, but if this is code you are going to share, commenting nearly every line only makes it hard to read and quite annoying. Anyone who didn't pick up coding 2 weeks ago doesn't need to see // end methodName(). You should only write comments when your code does something that's not obvious just from reading it. You're not making a tutorial for people who can't read code; you're writing comments to clarify difficult parts for yourself and your colleagues.
JavaDoc is actually quite useful if you do it right:
/**
* This method connects to the database and persists the ObjectType entity provided.
* @param ObjectType
* @return ObjectType (persisted object)
*/
If you're using an IDE like NetBeans or IntelliJ this JavaDoc will pop up and let you know what the method does and how to use it (just as it will help anyone else using your code). However, you're not writing JavaDoc here; you're writing block comments that explain the code rather than document it.
You're also leaving a lot of scrap comments in your code. If you have to comment it out and it's no longer a WIP, remove unused code.
Access modifiers
It's also bad practice to not add access control modifiers, if it's something you haven't studied yet you can read about it in the oracle tutorial.
Magic numbers and hard-coding
You should also try not to hard-code everything. There's a concept called "magic numbers", and you're using them nearly everywhere. In this for loop you have three:
for (y = 0; y <= 250; y = y + 50) {}
The first one can be somewhat forgiven, since indexing begins at 0 and there might not be any place where you'd want to modify it, but the other two:
What is 250? Where does it come from? Why is it that specific number?
y = y + 50 raises the same questions.
The reason it's important not to use magic numbers is that: a) you'll know what the number represents, and it's easier to follow what you're iterating over; b) the most important thing: if you need to change these numbers, you only change one variable declared at the top in the global scope; otherwise you need to somehow know and remember which magic numbers are related to that value and have to be changed as well.
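To make this concrete, here is the same layout logic with the magic numbers replaced by named constants. It's sketched in Python for brevity (the constant names are my own suggestion; the values 50 and 6 come from the original loop), but the idea translates one-to-one to Processing:

```python
# Names are hypothetical; the values are taken from the original code.
LINE_SPACING = 50   # vertical distance between consecutive yao lines
NUM_YAO = 6         # a hexagram always has six lines

# The original `for (y = 0; y <= 250; y = y + 50)` becomes:
ys = [i * LINE_SPACING for i in range(NUM_YAO)]
print(ys)  # the 250 is now the derived quantity (NUM_YAO - 1) * LINE_SPACING
```

Now changing the line spacing, or drawing a trigram with three lines instead, means touching exactly one declaration.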
And when you hard-code all the values locally, your methods are not dynamic and can't be re-used. It might not seem to matter here, but it's a good habit to build into your coding practice. | {
"domain": "codereview.stackexchange",
"id": 16534,
"tags": "performance, datetime, graphics, memory-optimization, processing"
} |
Do Galilean boosts and Lorentz boosts share the same generators? | Question: Gottfried and Yan's Quantum Mechanics introduces a generator $N$, called the boost, which generates Galileo transformations. I think in other terminology one might say $N$ generates Galilean boosts, but I think that is just terminology. It is defined as
$$\mathbf{N}=\mathbf{P}t-M\mathbf{X}.$$
This matches the definition on Wikipedia of the dynamic mass moment which is part of the same bivector as Angular momentum. That makes me think that mass moment should serve as the generator of Lorentz boosts. Indeed, this Phys.SE question and this Wikipedia page imply to me that it does, as long as $N$ is the same as $K$. So how can these two transformations have the same generator?
Answer: Galilean symmetry is a symmetry of non-relativistic physics. In reality the symmetry of physics is Lorentz invariance, and non-relativistic physics arises as an approximation in the limit $c\rightarrow\infty$.
Acting on functions $f(x^\mu)$ the generator of Lorentz boost can be written as,
\begin{equation}
K_i\equiv M_{0i}=x_0 \partial_i-x_i \partial_0=ct\partial_i+\frac{x^i}{c}\partial_t
\end{equation}
and gives the Lorentz boost by exponentiation, $\Lambda=e^{K_i w}$, with argument $w=\operatorname{arctanh}\,{v/c}$ known as the rapidity.
In the limit $c\rightarrow\infty$ we get,
\begin{equation}
w\rightarrow \frac{v}{c},\quad K_i w\rightarrow K_i^{G} v,\quad K_i^G=t\partial_i
\end{equation}
\begin{equation}
e^{K_i^{G}v} \begin{pmatrix}t\\\vec{x}\end{pmatrix}=\begin{pmatrix}t\\x+vt\end{pmatrix}
\end{equation}
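The limit can also be made tangible numerically: boost the same event for increasingly large $c$ and watch the result approach the Galilean answer. A small Python sketch (the sign convention for the boost direction is an assumption here; sources differ between active and passive forms):

```python
import math

def lorentz_boost(t, x, v, c):
    # Passive boost of an event (t, x) along x:
    # t' = gamma (t - v x / c^2),  x' = gamma (x - v t)
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

t, x, v = 2.0, 5.0, 3.0
for c in (10.0, 1e3, 1e6):
    tp, xp = lorentz_boost(t, x, v, c)
    print(c, tp, xp)  # approaches (t, x - v*t) as c grows
```

For large $c$ the time coordinate stops transforming and the space coordinate shifts by $vt$, which is exactly the action generated by $t\partial_x$.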
When we consider transformations of canonical coordinates and momenta for a particle it works somewhat differently: in that case we can rewrite the Lorentz generator in terms of canonical variables for the relativistic particle as
\begin{equation}
\tilde{K}_i=-ct p^i+\frac{x^i}{c}H
\end{equation}
So canonical functions transform as $f(x,p)\mapsto f(x,p)+w\{f(x,p),\tilde{K}_i\}+\ldots$ Note that it's not just the mass but the energy that is involved in the terms with $x^i$.
However for particle $H=c\sqrt{m^2c^2+\vec{p}^2}\sim mc^2+\frac{p^2}{2m}+O(1/c^2)$
Therefore we get,
\begin{equation}
\tilde{K}_i w\sim v(-t p^i+mx^i)\equiv \tilde{K}_i^G v
\end{equation}
so that on non-relativistic canonical coordinates and momenta it acts as
\begin{equation}
\begin{pmatrix}\vec{x}\\\vec{p}\end{pmatrix}\mapsto\begin{pmatrix}x+vt\\ \vec{p}+mv\end{pmatrix}
\end{equation}
So no, the generators are not the same, but they are connected through the non-relativistic limit. | {
"domain": "physics.stackexchange",
"id": 35498,
"tags": "special-relativity, angular-momentum, group-theory, galilean-relativity"
} |
Mechanical waves edge between material and vacuum | Question: I have been thinking about the propagation of EM waves vs. mechanical waves and some of their odd cases. One such case that I haven't been able to puzzle out is what happens when a mechanical wave reaches the end of a medium (such as the outer layer of the atmosphere) and the beginning of a vacuum - outer space itself.
Edit: for clarity, I created a simple image demonstrating what I thought should happen:
Basically, my theory was that the "edge" particles (or particles with a large amount of space between them) would continue into space with perpetual motion as the force applied to them is not counteracted by particles in front of them.
My question is, after the wave has affected the last molecules on the edge of a vacuum, what happens to it, and, do the molecules on the very edge continue to move indefinitely (or do they return to equilibrium)?
Answer: The materials are elastic, so when the wave reaches the end it reflects: the last layer has no further layer to collide with, and the elastic properties of the material make it bounce back, reflecting the wave. The exception is when the wave energy is so large that it breaks the material; in that case the particles would indeed leave it. If you have a bucket of water and drop in a tiny piece of something, it makes a wave but no water separates; if you drop in something really big, some of the water does separate from the rest. | {
"domain": "physics.stackexchange",
"id": 47189,
"tags": "waves, vacuum"
} |
Design patterns for console commands | Question: I've made a pretty standard console in which you type a command and it does something. However, the issue that comes to my mind is scaling: if I want to add hundreds of commands, I have to manually add a new Command instance for each one individually, which is... less than ideal.
The full code is stored in this GitHub repository.
Program.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using ConsolePlus.Commands;
namespace ConsolePlus
{
public class Program
{
/// <summary>
/// The version of the program.
/// </summary>
public const string Version = "1.0.0.0";
public static CommandRegistry<Command> Registry = new CommandRegistry<Command>();
/// <summary>
/// The application's entry point.
/// </summary>
/// <param name="args">The command-line arguments passed to the program.</param>
public static void Main(string[] args)
{
CommandHandler.RegisterCommands();
Console.WriteLine("ConsolePlus v." + Version);
while (true)
{
Console.Write(">>> ");
string line = Console.ReadLine();
List<string> parts = line.Split(' ').ToList<string>();
string commandName = parts[0];
parts.RemoveAt(0);
string[] commandArgs = parts.ToArray<string>();
try
{
string result = Registry.Execute(commandName, commandArgs);
if (result != null)
{
Console.WriteLine("[{0}] {1}", commandName, result);
}
}
catch (CommandNotFoundException)
{
Console.WriteLine("[ConsolePlus] No such command.");
}
}
}
}
}
CommandRegistry.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ConsolePlus
{
public class CommandRegistry<T>
where T : ICommand
{
private Dictionary<string, T> register;
public CommandRegistry()
{
register = new Dictionary<string, T>();
}
public CommandRegistry(params T[] commands)
: this()
{
foreach (T command in commands)
{
register.Add(command.Name, command);
}
}
public T GetCommand(string name)
{
if (register.ContainsKey(name))
{
return register[name];
}
else
{
throw new CommandNotFoundException(name);
}
}
public string Execute(string name, string[] args)
{
if (register.ContainsKey(name))
{
return register[name].Execute(args);
}
else
{
throw new CommandNotFoundException(name);
}
}
public void RegisterCommand(T command)
{
register.Add(command.Name, command);
}
}
}
ICommand.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ConsolePlus
{
public interface ICommand
{
string Name { get; set; }
string HelpText { get; set; }
bool IsPrivileged { get; set; }
string Execute(string[] args);
}
}
Command.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ConsolePlus
{
public class Command : ICommand
{
public string Name { get; set; }
public string HelpText { get; set; }
public bool IsPrivileged { get; set; }
private Func<string[], string> method;
public Command(string name, bool privileged, string help, Func<string[], string> commandMethod)
{
Name = name;
IsPrivileged = privileged;
HelpText = help;
method = commandMethod;
}
public string Execute(string[] args)
{
return method.Invoke(args);
}
}
}
CommandHandler.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ConsolePlus.Commands
{
public class CommandHandler
{
public static List<Command> AllCommands = new List<Command>
{
new Command("clear", false, ClearCommand.HelpText, ClearCommand.CommandMethod),
new Command("exit", false, ExitCommand.HelpText, ExitCommand.CommandMethod)
};
public static void RegisterCommands()
{
foreach (Command command in AllCommands)
{
Program.Registry.RegisterCommand(command);
}
}
}
}
And just for illustration (the other *Command classes are very similar):
ClearCommand.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ConsolePlus.Commands
{
class ClearCommand
{
public static string HelpText = "Clears the console screen.";
public static string CommandMethod(string[] args)
{
Console.Clear();
return null;
}
}
}
At the moment, to add another command, I have to create another class and add another new Command line to the CommandHandler.RegisterCommands method - not ideal. Reviews of this specifically, and if there are any better ways to do this, would be very useful - though of course, any review is a good thing.
Answer: Why do you have the Command class? Why not make life much simpler for yourself by making AllCommands a List<ICommand>, and having classes like ClearCommand implement ICommand?
You can then query your assembly file for classes where ICommand is implemented; that way you don't even need to fill AllCommands "by hand":
AllCommands = Assembly.GetExecutingAssembly().GetTypes()
.Where(x => x.GetInterfaces().Contains(typeof(ICommand))
&& x.GetConstructor(Type.EmptyTypes) != null)
.Select(x => Activator.CreateInstance(x) as ICommand);
Also, does AllCommands need to be called AllCommands? Can't it just be called Commands?
I would also recommend to remove the setters from the properties in ICommand. That way you'd end up with something like this:
public interface ICommand
{
string Name { get; }
string HelpText { get; }
bool IsPrivileged { get; }
string Execute(string[] args);
}
public class ClearCommand : ICommand
{
public string Name { get { return "clear"; } }
public string HelpText { get { return "Clears the console screen."; } }
public bool IsPrivileged { get { return false; } }
public string Execute(string[] args)
{
Console.Clear();
return null;
}
} | {
"domain": "codereview.stackexchange",
"id": 15880,
"tags": "c#, console"
} |
Why is entropy favorable? | Question: I cannot seem to grasp the logic behind it. We say that more entropy (or more disordered system) is favorable over less entropy. But why? Why is randomness preferred over proper arrangement of atoms/molecules? The only thing I can think of is more degrees of freedom. Molecules in a more random system would have more degrees of freedom, and would thus be favorable. But, again, why? Why would more degrees of freedom be favorable? I've already asked my teachers and looked up internet, but haven't found any satisfactory answer. Also, is there any way you could also explain diffusion by the same concept (diffusion is entropically favorable, right?)?
PS: if this question is better suited for physics stack exchange, I'd be more than happy to have it migrated.
Answer: It appears you're looking for an ELI5-style answer, not an elaborate definition.
Entropy just happens – as long as the universe isn't frozen solid, things will always be moving around, and that movement tends to introduce randomness more than it tends to introduce order.
Consider a deck of cards. Shuffle it. Is it perfectly sorted? No. Why? There are about $8 \times 10^{67}$ (that is, $52!$) ways to arrange a deck of cards, but only one of them is sorted; chances are you didn't shuffle the deck back into perfect order. The number of different available states (e.g. card sequences) shows up in the various formal definitions of entropy.
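Both the count and the odds are easy to check in code. A Python sketch (a 6-card "deck" is used for the simulation so that the sorted arrangement actually shows up; the trial count and seed are arbitrary choices):

```python
import math
import random

print(math.factorial(52))  # ~8.07e67 possible orderings of a real deck

random.seed(1)
deck = list(range(6))      # tiny 6-card deck: 720 orderings, exactly one sorted
trials = 200_000
sorted_count = 0
for _ in range(trials):
    random.shuffle(deck)
    if deck == sorted(deck):
        sorted_count += 1

print(sorted_count / trials)  # close to 1/720, i.e. roughly 0.0014
```

With 52 cards instead of 6, the sorted outcome would essentially never appear in the lifetime of the universe; that lopsidedness of the state count is the intuition behind entropy being "favorable".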
The same principle applies everywhere else in the universe: in the face of random (microscopic) state changes, the odds are stacked in favour of mishmash and against (macroscopic) "order" appearing by mere chance – and overwhelmingly so. | {
"domain": "chemistry.stackexchange",
"id": 9404,
"tags": "physical-chemistry, entropy, free-energy, diffusion"
} |
What's the difference between $\nabla\cdot(\rho v)$ and $\rho(\nabla\cdot v)$ as a physical intuition? | Question: I'm currently learning about substantial derivatives in fluid mechanics and roughly understand how the partial derivative $\frac{\partial\rho}{\partial t}$ and the substantial derivative $\frac{D\rho}{Dt}$ differ. However, I can't get an intuition for the vector notation of the two. What would be a good way to interpret the difference between the two notations, and how does it relate to the difference between the partial and substantial derivatives? Is $\rho$ independent of the coordinates in $\rho(\nabla\cdot v)$?
Answer:
What's the difference between $\nabla\bullet(\rho v)$ and $\rho(\nabla\bullet v)$ as a physical intuition?
Physically, we are often interested in $\rho \vec v$, where $\vec v$ is the fluid velocity field (not the particle velocity) when we want to know how much mass is flowing out of some given volume $V$ with a boundary $\partial V$.
We calculate this as an integral over area $d^2\vec A$ as:
$$
\oint_{\partial V} \rho \vec v \cdot{d^2\vec A} \;,
$$
which can be turned into a volume integral:
$$
\int_V \vec \nabla \cdot \left(\rho\vec v\right)d^3r\;,
$$
where you see that it is the whole thing $\rho \vec v$ that is being differentiated in the integrand: $\vec \nabla \cdot (\rho \vec v)$, which is generally different from $\rho\vec \nabla \cdot \vec v$ when $\rho$ depends on the spatial coordinates.
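The difference between the two expressions is exactly the product-rule term: $\vec\nabla\cdot(\rho\vec v) = \rho\,\vec\nabla\cdot\vec v + \vec v\cdot\vec\nabla\rho$. This can be sanity-checked numerically with central differences; the particular fields below are arbitrary illustrations, not anything physical:

```python
# Numerical check of div(rho*v) = rho*div(v) + v . grad(rho) at one point.

def rho(x, y, z):
    return x * y + z**2

def v(x, y, z):
    return (x**2, y * z, x * z)

def div_rho_v(x, y, z, h=1e-5):
    """Central-difference divergence of the product field rho*v."""
    f = lambda a, b, c: tuple(rho(a, b, c) * comp for comp in v(a, b, c))
    return ((f(x + h, y, z)[0] - f(x - h, y, z)[0])
            + (f(x, y + h, z)[1] - f(x, y - h, z)[1])
            + (f(x, y, z + h)[2] - f(x, y, z - h)[2])) / (2 * h)

x, y, z = 1.3, -0.7, 2.1
div_v = 2 * x + z + x                      # analytic div of v = (x^2, yz, xz)
grad_rho = (y, x, 2 * z)                   # analytic grad of rho = xy + z^2
rhs = rho(x, y, z) * div_v + sum(a * b for a, b in zip(v(x, y, z), grad_rho))

print(abs(div_rho_v(x, y, z) - rhs))       # very close to 0: the identity holds
```

The two expressions coincide only where $\vec v\cdot\vec\nabla\rho = 0$, e.g. for a spatially uniform (incompressible-like) density.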
For example, the change in mass of a fixed volume $V$ has to come from two places: (1) any explicit change in the density with time; (2) flow of mass out of the volume with time. So we have:
$$
\frac{dM}{dt} = \int d^3r \frac{\partial \rho}{\partial t} + \int d^3r \vec \nabla \cdot \left(\rho \vec v\right)=0\;,
$$
and we set this to zero since mass (usually) can't be created out of nothing...
More generally, we are interested in $\rho \vec v$ since we often are interested in quantities per unit mass and the flow of those is usually written as:
$$
\theta \rho \vec v\;,
$$
where $\theta$ is the amount of whatever per unit mass. (e.g., if you are interested in the mass itself, $\theta=1$. e.g., if you are interested in the momentum $\theta =\vec v$.) | {
"domain": "physics.stackexchange",
"id": 98733,
"tags": "fluid-dynamics, differentiation, flow, density"
} |
Given $f: \{0, 1\}^n\to\{0, 1\}^m$, how many qubits are needed to implement the oracle $\mathcal U|x,0\rangle^{\otimes m}=|x,f(x)\rangle$? | Question: Suppose I am given a function $f: \{0, 1\}^n \to \{0, 1\}^m$.
A standard oracle would be of the form $\mathcal{U}|x\rangle|0\rangle^{\otimes m} = |x\rangle|f(x)\rangle$.
I would suspect that this encoding of the function is not always possible with exactly $n+m$ qubits, so my question is how many qubits are needed for it.
(Though I know that any such function can be expressed using NAND gates and translated to Toffoli gates, so there is one loose upper bound of one ancilla qubit for each NAND gate.)
I hope to possibly see some upper bounds for either general or constrained cases of $f$.
Answer: Usually, saying that there is access to an oracle computing $f$ is equivalent to saying that you assume a model in which $f$ can be computed for free. Namely, in your case $f :\{0,1\}^n \rightarrow \{0,1\}^m$, one only has to pay $n+m$ qubits of space for storing the state.
Yet, if you indeed consider the complexity in the general model, then the space complexity might be arbitrarily large. For example, consider a function $f:\{0,1\}^n \rightarrow \{0,1\}$ that decides whether an $n$-bit binary string belongs to some language $L$ with $L \in $ EXPSPACE, where EXPSPACE stands for all the languages that can be decided using at most exponential memory in the length of the input. Then the gate $\ \mathcal{U} \ $ might use $\Theta(2^n)$ ancillas, which can be thought of as exactly the working memory needed by the classical machine to decide $f$. Furthermore, if $L$ is strictly in EXPSPACE, then any implementation of $\ \mathcal{U} \ $ must use an exponential number of ancillas. | {
"domain": "quantumcomputing.stackexchange",
"id": 5119,
"tags": "circuit-construction, oracles"
} |
Image Matching on iOS | Question: Was recommended to migrate this question from SO.
I am building an iOS app that, as a key feature, incorporates image matching. The problem is the images I need to recognize are small orienteering 10x10 plaques with simple large text on them. They can be quite reflective and will be outside(so the light conditions will be variable). Sample image http://i1274.photobucket.com/albums/y425/Chris_Mitchelmore/2_zpsce84d4f3.png
There will be up to 15 of these types of image in the pool and really all I need to detect is the text, in order to log where the user has been.
The problem I am facing is that with the image matching software I have tried, aurasma and slightly more successfully arlabs, they can't distinguish between them as they are primarily built to work with detailed images.
I need to accurately detect which plaque is being scanned and have considered using gps to refine the selection but the only reliable way I have found is to get the user to manually enter the text. One of the key attractions we have based the product around is being able to detect these images that are already in place and not have to set up any additional material.
Can anyone suggest a piece of software that would work(as is iOS friendly) or a method of detection that would be effective and interactive/pleasing for the user.
Sample environment:
http://www.orienteeringcoach.com/wp-content/uploads/2012/08/startfinishscp.jpeg
The environment can change substantially, basically anywhere a plaque could be positioned they are; fences, walls, and posts in either wooded or open areas, but overwhelmingly outdoors.
Answer: I managed to find a solution that is working quite well. I'm not fully optimized yet, but I think it's just a matter of tweaking filters, as I'll explain later on.
Initially I tried to set up OpenCV, but it was very time consuming with a steep learning curve. It did give me an idea, though. The key to my problem is really detecting the characters within the image and ignoring the background, which was basically just noise. OCR was designed exactly for this purpose.
I found the free library tesseract (https://github.com/ldiqual/tesseract-ios-lib) easy to use and with plenty of customizability. At first the results were very random, but applying sharpening, a monochromatic filter and a color invert worked well to clean up the text. Next I marked out a target area on the UI and used that to cut out the rectangle of the image to process. The speed of processing is slow on large images, and this cut it dramatically. The OCR filter allowed me to restrict the allowable characters, and as the plaques follow a standard configuration this improved the accuracy.
So far it's been successful with the grey background plaques, but I haven't found the correct filter for the red and white editions. My goal will be to add color detection and remove the need to feed in the data type. | {
"domain": "dsp.stackexchange",
"id": 706,
"tags": "image-processing, object-recognition, matched-filter"
} |
When recombination in PN junction occurs, which atom becomes an ion? | Question: In an unbiased PN junction, when the carriers recombine to form a depletion layer , it is said that immobile ions are formed.
We know that the conduction band electrons in N type are not associated with any particular atom. So when the conduction band electron diffuses to the P type region, which atom becomes an ion?
Answer: None, really. Such junctions form in semiconductor crystals. Those are remarkable materials. Let's look at electrons in solids first.
Many atoms have weakly bound valence electrons in outer orbits. These orbits would have specific energies if the atom existed in isolation. But when many atoms are packed together and their outer orbits overlap, the energies of these orbits shift slightly. Electrons in those orbits can now have a band of possible energies.
Now those weakly bound electrons are good for carrying currents, but for an electron to hop through the material the overlapping orbits had better not be completely filled: there wouldn't be room for the moving electron. In metals, there is in fact a band which is about half filled. Ideal: there's plenty of room for electrons to move, but also still enough electrons to do the moving.
Semiconductors have a full and an empty band close together, and with some external help (doping, electric fields, etc) this can be used to switch from conducting to non-conducting.
Regardless, as the outer orbits overlap and join to form a band, the electrons in that band no longer belong to a single orbit and therefore to a single atom. At the quantum level, the probability function of each electron is smeared out over the crystal, and thus it's not sensible to ask exactly which atoms are ionized. | {
"domain": "physics.stackexchange",
"id": 14592,
"tags": "semiconductor-physics"
} |
What is the role of Numerical Gradient Computation in Backpropagation algorithm? | Question: I was listening CS231n (2017) lectures and noted that there is a lot of attention to Numerical Gradient Computation (NGC). It starts @5:53 in this video and appears a few times later.
Also, looking at the batch normalization materials (example), I found a lot of attention drawn to exactly the same topic (well, probably because it is the same backpropagation...).
As I understood, the gradients we use in various optimization methods (vanilla SGD, Adam) require us to know the derivative of the activation function. I suppose, if the activation function is complex or we are too lazy to take the derivative analytically, we need to compute the gradient numerically, and that is where we use NGC.
Questions:
Is that the only purpose of NGC in backpropagation?
Isn't it faster to use analytically form of the activation function derivative to calculate gradients?
Answer: The purpose is purely educational. Students who jump straight to mid- or high-level libraries like TensorFlow, Keras, Theano, etc. don't have to compute the gradients themselves. On the one hand, that saves a lot of time, but on the other hand, it makes it very easy to abstract away the learning process.
Here's how Andrej Karpathy puts it (the lecturer of the previous cs231n classes at Stanford):
When we offered CS231n (Deep Learning class) at Stanford, we intentionally designed the programming assignments to include explicit calculations involved in backpropagation on the lowest level. The students had to implement the forward and the backward pass of each layer in raw numpy. Inevitably, some students complained on the class message boards:
“Why do we have to write the backward pass when frameworks in the real
world, such as TensorFlow, compute them for you automatically?”
...
The problem with Backpropagation is that it is a leaky abstraction.
In other words, it is easy to fall into the trap of abstracting away the learning process — believing that you can simply stack arbitrary layers together and backprop will “magically make them work” on your data.
I recommend to read the whole post, it's very interesting.
So, you try to compute gradients manually. And when you do that, you find that it's pretty hard to assess whether the code is right: it's just a raw formula that takes a bunch of floating-point numbers and returns another bunch of floating-point numbers. And here you find the alternative numerical method, which you can compare against, very useful.
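That comparison is the classic "gradient check". A minimal Python sketch for a single sigmoid unit with squared error (the toy loss function and the numbers are illustrative assumptions, not anything from the lecture):

```python
import math

def f(w):
    # toy "loss": squared error of a sigmoid neuron on one (x, y) sample
    x, y = 2.0, 1.0
    s = 1.0 / (1.0 + math.exp(-w * x))
    return (s - y) ** 2

def analytic_grad(w):
    # hand-derived df/dw = 2*(s - y) * s*(1 - s) * x
    x, y = 2.0, 1.0
    s = 1.0 / (1.0 + math.exp(-w * x))
    return 2 * (s - y) * s * (1 - s) * x

def numerical_grad(w, h=1e-5):
    # centered difference: no derivation needed, just two evaluations of f
    return (f(w + h) - f(w - h)) / (2 * h)

w = 0.3
assert abs(analytic_grad(w) - numerical_grad(w)) < 1e-7  # the two agree
```

If the hand-derived gradient had a bug (a dropped factor of x, say), the assertion would fail; that is exactly how the numerical gradient is used to debug backpropagation code.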
Of course, analytical formulas are faster and more precise and they are used in practice whenever possible. But while studying neural networks and back-propagation, it's very useful to get through manual computation at least once. Besides, it sometimes helps to find the bugs. | {
"domain": "cs.stackexchange",
"id": 10547,
"tags": "machine-learning, neural-networks, numerical-algorithms"
} |
Breaking down CNF clauses | Question: I have an assignment where I have to encode a certain problem into conjunctive normal form so that I can solve it with a SAT solver. I have been able to encode my problem correctly, but the solver is quite slow, and most likely the cause is the length of some of my clauses.
So my question is:
How can one break down long clauses into smaller ones and therefore speed up the SAT solver, lets say i have following clause:
$ (x_1 \vee x_2 \vee x_3 \vee ... \vee x_n) $ where $n$ can potentially be a large number.
Is there any way to split this into smaller clauses which would be logically equivalent?
Answer: The reason why the SAT solver is slow is unlikely to be the size of the clauses, but rather the hardness of the problem itself.
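The standard fix is to chain fresh variables so that each clause has at most three literals. A Python sketch, with a brute-force equisatisfiability check for a small clause (DIMACS-style integer literals, negative meaning negated; the variable numbering is my own choice):

```python
from itertools import product

def split_clause(clause, next_var):
    """Split a long clause (list of DIMACS-style int literals) into
    3-literal clauses, using fresh variables numbered from next_var."""
    if len(clause) <= 3:
        return [list(clause)], next_var
    y = next_var
    out = [[clause[0], clause[1], y]]
    for lit in clause[2:-2]:
        out.append([-y, lit, y + 1])
        y += 1
    out.append([-y, clause[-2], clause[-1]])
    return out, y + 1

def clause_true(clause, assign):
    return any(assign[abs(l)] == (l > 0) for l in clause)

# Brute-force equisatisfiability check for (x1 v x2 v x3 v x4 v x5):
orig = [1, 2, 3, 4, 5]
new_clauses, top = split_clause(orig, 6)
aux = list(range(6, top))
for xs in product([False, True], repeat=5):
    assign = {i + 1: xs[i] for i in range(5)}
    satisfiable = any(
        all(clause_true(c, {**assign, **dict(zip(aux, ys))}) for c in new_clauses)
        for ys in product([False, True], repeat=len(aux)))
    assert satisfiable == clause_true(orig, assign)
print(new_clauses)  # [[1, 2, 6], [-6, 3, 7], [-7, 4, 5]]
```

Note this produces an equisatisfiable formula, not a logically equivalent one: the new formula mentions fresh variables, but it is satisfiable exactly when the original clause is.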
In any case, any instance of CNF-SAT can famously be transformed into an equisatisfiable formula with at most three literals per clause by replacing each clause of the form:
$$
( x_1 \vee x_2 \vee \dots \vee x_n)
$$
with:
$$
(x_1 \vee x_2 \vee y_1) \wedge (\neg y_1 \vee x_3 \vee y_2) \wedge \dots \wedge (\neg y_{n-4} \vee x_{n-2} \vee y_{n-3}) \wedge (\neg y_{n-3} \vee x_{n-1} \vee x_n)
$$
where $\{y_i\}_{i \in \mathbb{N}}$ are fresh variables. | {
"domain": "cs.stackexchange",
"id": 8265,
"tags": "optimization, propositional-logic"
} |
Using own bt_navigator in Humble | Question: Hello ROS 2 Navigators,
I am working with the ROS 2 Humble Nav2 stack. There's a tutorial in Nav2 for writing a new Navigator plugin, but I can't find the navigators parameter in the Humble version of Nav2. So I want to know: is it possible to use my own BT navigator instead of NavigateToPose in Humble?
Answer: No, only Iron and newer has the plugin-based navigators -- please upgrade! | {
"domain": "robotics.stackexchange",
"id": 38703,
"tags": "navigation, ros2, plugin, nav2"
} |
Delete directory tree function | Question: I made this function to delete a directory with all its contents recursively. Will it work as expected in a production environment? Is it safe? I don't want to wake up one day with the /home contents gone :D
public static function delTree($dir) {
if(!is_dir($dir)){return false;};
$files = scandir($dir);if(!$files){return false;}
$files = array_diff($files, array('.','..'));
foreach ($files as $file) {
(is_dir("$dir/$file")) ? SELF::delTree("$dir/$file") : unlink("$dir/$file");
}
return rmdir($dir);
}
Note: I use this function internally, meaning there are no client parameters like directory names is taken from the client before I call it, so there is no chance for traversal attacks, and I check the base path with another function before I call it, for example to delete a client folder I do something like this
$clientsFolderPath = $_SERVER['DOCUMENT_ROOT'] . "/../clients";
$clientFolderPath = "$clientsFolderPath/$clientId";
$realBase = realpath($clientsFolderPath);
$realClientDir = realpath($clientFolderPath);
if ( !$realBase || !$realClientDir || strpos($realClientDir, $realBase) !== 0 ){
//error, log , and exit;
} else {
ExtendedSystemModel::delTree($clientFolderPath);
}
Answer: As indicated by the scattered comments on https://stackoverflow.com/q/3349753/2943403, your approach is trustworthy.
scandir() has an advantage over glob() (which is normally handy when trying to ignore . and ..) because glob() will not detect hidden files.
The RecursiveIterator methods are powerful, but it is my opinion that fewer developers possess the ability to instantaneously comprehend all of the calls and flags (and I believe that should weigh in on your decision).
As for your snippet, I would like to clean it up a little.
public static function delTree($dir) {
if (!is_dir($dir)) {
return false;
}
$files = scandir($dir);
if (!$files) {
return false;
}
$files = array_diff($files, ['.', '..']);
foreach ($files as $file) {
if (is_dir("$dir/$file")) {
self::delTree("$dir/$file");
} else {
unlink("$dir/$file");
}
}
return rmdir($dir);
}
I don't use ternary operators when I am not assigning something in that line. For this reason, a classic if-else is cleaner in my opinion. | {
"domain": "codereview.stackexchange",
"id": 34291,
"tags": "php, recursion, security, file-system"
} |
What are the difficulties/challenges against developing a coronavirus vaccine? | Question: Multiple groups of scientists are trying to develop a coronavirus vaccine but they are not yet being fruitful. What challenges or difficulties are there in the process that slowing down and/or causing failure in development of vaccine?
Answer: There are multiple challenges presented, and many of those are not limited to coronavirus vaccine.
As mentioned above, it just takes time. Before a vaccine can be used in patients, clinical trials must be performed to validate the safety and efficacy of the vaccine candidate. A clinical trial includes three phases, which, again, just takes time.
But to really answer your question, what are some challenges of developing a vaccine?
To begin with, from the perspective of lab research, we first need to develop a vaccine candidate. What protein on the virus should we take as an antigen? There are many, many proteins on the virus, and for CoV we know it is a protein called the Spike protein. It is the neutralizing target on the virus, which means that when an antibody binds to this site, it can prevent the virus from infecting cells. Okay, then we have to ask: do we need to do something to this protein to make a vaccine? For example, when we try to manufacture this protein into a vaccine, what if the protein is degraded? Or it somehow becomes a 'bad' protein that simply cannot stimulate immune responses in humans? These are just some examples of roadblocks in developing a vaccine.
Let's suppose we finally work those out. Then what's next? Each candidate vaccine needs to be tested in animals before going into human clinical trials. This step is time-consuming.
Now we are finally ready to start clinical trials. According to the FDA, a vaccine for human trials (as with every drug) needs to be produced under Good Manufacturing Practices (GMP). This is basically a set of standard operating procedures to make sure that the vaccine candidate can be manufactured in a large-scale, quality-controlled manner. There are many, many challenges, like how to purify the protein and how to scale up production. This is part of the reason why the first vaccine in trials in the USA is an mRNA vaccine, and in China a DNA vaccine, neither of which requires complicated protein purification.
As you can see, the list of challenges can go on and on. But there are scientists and doctors working diligently on it. Hopefully we'll get there. | {
"domain": "biology.stackexchange",
"id": 10436,
"tags": "immunology, biotechnology, vaccination, coronavirus"
} |
Is there an undecidable language that is mapping reducible to its complement? | Question: Is there an undecidable language A that is mapping reducible to its complement?
If it is possible: since A is an undecidable language, A's complement must also be an undecidable language. But I don't know whether the class of undecidable languages is closed under complement or not.
Answer: Yes, it is possible. Enumerate the set of all possible Turing machines and let $H$ (resp. $\overline{H}$) be the set of indices of the Turing machines that halt (resp. do not halt) on empty input.
Let $L = \{ \langle T, 1 \rangle \, : T \in H \} \cup \{ \langle T, 0 \rangle \, : T \in \overline{H} \}$.
Clearly $L$ is not decidable but it is possible to reduce $L$ to $\overline{L}$ since $\langle T, r \rangle \in L \iff \langle T, 1-r \rangle \in \overline{L}$. | {
"domain": "cs.stackexchange",
"id": 16216,
"tags": "formal-languages, undecidability"
} |
Silhouette score for optimal k value (k prototype in python) | Question: I am trying to cluster using the k-prototypes algorithm, as my data has both categorical and continuous variables.
I found this answer explaining the elbow method with k prototype
https://stackoverflow.com/a/56218269/9543171
How can I use silhouette score instead of cost for finding optimal k value in k prototype?
Answer: Maybe it's too late, but I found a good resource for your question - silhouette score for k-prototype | {
"domain": "datascience.stackexchange",
"id": 11586,
"tags": "python, clustering"
} |
How do I shorten my Rust code to get integer values from regex named capture groups? | Question: I am completely new to Rust. My first Rust code is a simple text-filter application for parsing a log file and accumulating some information. Here is my code:
//use std::io::{self, Read};
use std::io::{self};
use regex::Regex;
fn main() -> io::Result<()> {
let re_str = concat!(
r"^\s+(?P<qrw1>\d+)\|(?P<qrw2>\d+)",//qrw 0|0
r"\s+(?P<arw1>\d+)\|(?P<arw2>\d+)",//arw 34|118
);
let re = Regex::new(re_str).unwrap();
//let mut buffer = String::new();
//io::stdin().read_to_string(&mut buffer)?;
let buffer = " 0|1 2|3\n 4|5 6|7\n 8|9 10|11\n";
let mut lines_skipped = 0;
let mut m_qrw1:i32 = 0;
let mut m_qrw2:i32 = 0;
let mut m_arw1:i32 = 0;
let mut m_arw2:i32 = 0;
for line in buffer.lines() {
match re.captures(line) {
Some(caps) => {
// I would like to shorten these long lines =>
let qrw1 = caps.name("qrw1").unwrap().as_str().parse::<i32>().unwrap();
let qrw2 = caps.name("qrw2").unwrap().as_str().parse::<i32>().unwrap();
let arw1 = caps.name("arw1").unwrap().as_str().parse::<i32>().unwrap();
let arw2 = caps.name("arw2").unwrap().as_str().parse::<i32>().unwrap();
if qrw1 > m_qrw1 {m_qrw1 = qrw1}
if qrw2 > m_qrw2 {m_qrw2 = qrw2}
if arw1 > m_arw1 {m_arw1 = arw1}
if arw2 > m_arw2 {m_arw2 = arw2}
}
None => {
lines_skipped = lines_skipped + 1;
}
}
}
println!("max qrw1: {:.2}", m_qrw1);
println!("max qrw2: {:.2}", m_qrw2);
println!("max arw1: {:.2}", m_arw1);
println!("max arw2: {:.2}", m_arw2);
Ok(())
}
Playground
This works as expected, but I think those long chained calls I wrote to get the integer values of the regex named capture groups are a bit ugly. How do I make them shorter/more idiomatic Rust? I was advised to use the ? operator instead of the unwrap calls, but I'm not sure how to apply it in this case.
Answer: regex::Captures provides a handy implementation of Index<&str>. This lets you pull named matches out with caps[name]. Combine that with a few std APIs and you can write the same code like this:
use regex::Regex;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let re = Regex::new(concat!(
r"^\s+(?P<qrw1>\d+)\|(?P<qrw2>\d+)",
r"\s+(?P<arw1>\d+)\|(?P<arw2>\d+)",
))
.unwrap();
let names = ["qrw1", "qrw2", "arw1", "arw2"];
let buffer = " 0|1 2|3\n 4|5 6|7\n 8|9 10|11\n";
let mut maximums = [0i32; 4];
for caps in buffer.lines().filter_map(|line| re.captures(line)) {
for (&name, max) in names.iter().zip(&mut maximums) {
*max = std::cmp::max(*max, caps[name].parse()?);
}
}
for (&name, max) in names.iter().zip(&maximums) {
println!("max {}: {:.2}", name, max);
}
Ok(())
}
| {
"domain": "codereview.stackexchange",
"id": 38502,
"tags": "rust"
} |
Why is the momentum of a particle $\gamma mv$? | Question: I am very new to relativity, and as I was going through Resnick & Halliday, I saw that momentum was no longer $mv$, rather $\gamma mv$. This was the proof:
$$p = mv = m \frac{\delta x}{\delta t_0} $$
where $\delta t_0$ is the proper infinitesimal time interval to cover $\delta x$ distance.
Using the time dilation formula, $\delta t = \gamma \delta t_0$ we can then write
$$ p = m \frac{\delta x}{\delta t_0} = m \frac{\delta x}{\delta t} \frac{\delta t}{\delta t_0} = \gamma m \frac{\delta x}{\delta t} $$
However, since $ \frac{\delta x}{\delta t} $ is just the [observed] particle velocity,
$$ p = \gamma mv $$
On the other hand, when I went on to Feynman's book, I saw that he talked about using "the new m" (the relativistic mass) to derive all the equations, and said
$$p = mv = \frac{m_0 v}{\sqrt{1 - \frac{v^2}{c^2}}} = \gamma m_0 v $$
My problem is that if both these proofs are true, then why doesn't one consider the other's argument, id est, why aren't both of these factors considered together? If the mass is really changing, why don't Resnick & Halliday account for it? That way the momentum would turn out to be $p = \gamma^2 mv$
Am I missing something here?
Answer: They're both saying the same thing: the relativistic momentum is given by
$$
\mathbf{p}=\gamma(v)\,m\mathbf{v}
$$
The confusion, it seems, is that you are using Feynman's $m=\gamma m_0$ as equivalent to the $m$ in Resnick & Halliday's text; the actual correlation is Feynman's $m_0$ to Resnick's $m$--both of these terms are the (invariant) rest mass. Loosely, Feynman "attaches" the Lorentz factor ($\gamma$) to the mass while Resnick attaches it to the velocity ($dx^i/d\tau=dx^i/d(t/\gamma)=\gamma v^i$)
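Since both conventions reduce to $\gamma m v$, a quick numerical check (my own sketch, with arbitrary example values, not from the original answer) confirms the two bookkeepings agree:

```python
import math

m, v, c = 1.0, 0.6, 1.0                 # rest mass, speed, c (natural units)
gamma = 1 / math.sqrt(1 - (v / c) ** 2)

p_feynman = (gamma * m) * v   # "relativistic mass" gamma*m times velocity v
p_resnick = m * (gamma * v)   # rest mass m times proper velocity gamma*v

print(math.isclose(p_feynman, p_resnick))  # True: both equal gamma*m*v
```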
Feynman's notation of "relativistic mass,"
$$
m_{\rm rel}=\gamma(v)m
$$
is considered by many to be "outdated" (though there are still some proponents); Resnick & Halliday's notation, on the other hand, is the more modern approach to relativistic velocity. Nonetheless, the two textbooks come out to the same conclusion: $p=\gamma mv$. | {
"domain": "physics.stackexchange",
"id": 23071,
"tags": "special-relativity, mass, momentum, time-dilation"
} |
Does "improper" imply that a system cannot be stable and causal? | Question: This answer and the comments in it made me wonder whether the following statement is true:
If a transfer function is improper, then that system cannot be causal and stable at the same time.
I had thought that this was true for a while. But the other day I wondered why. For example, the transfer function
$$H(s)=\frac{s^2}{s+1}$$
is improper, but the ROC $\{s:\mathrm{Re}(s)>-1\}$ would make it stable (it contains the imaginary axis) and causal (it is a right half-plane).
So how are causality and stability related in improper systems?
Answer: An improper system cannot be causal and stable. If the order of the numerator is greater than the order of the denominator, you'll always have at least one pole at infinity. Consequently, not all poles are in the left half-plane (or inside the unit circle in the case of discrete-time systems).
The system in your example is clearly unstable:
$$H(s)=\frac{s^2}{s+1}=s-1+\frac{1}{s+1}\tag{1}$$
Part of it is an ideal differentiator ($s$), which is unstable.
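The long division in $(1)$ can be reproduced numerically; here is a quick sketch using NumPy's polynomial division (my own illustration, not part of the original answer):

```python
import numpy as np

# H(s) = s^2 / (s + 1); divide the numerator polynomial by the denominator
num = [1, 0, 0]                      # coefficients of s^2
den = [1, 1]                         # coefficients of s + 1
quotient, remainder = np.polydiv(num, den)

print(quotient)    # [ 1. -1.]  ->  s - 1
print(remainder)   # [ 1.]      ->  + 1/(s + 1)
```

The quotient containing a nonzero $s$ coefficient is exactly the ideal-differentiator part that rules out a stable causal realization.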
Also take a look at this related answer. | {
"domain": "dsp.stackexchange",
"id": 6163,
"tags": "linear-systems, transfer-function, stability, causality"
} |
How does Walter White make pure crystal meth using a non-stereospecific reaction? | Question: In the highly-rated TV series, Breaking Bad, Walter White, a high school chemistry teacher recently diagnosed with cancer, takes to making the illicit drug, crystal meth (methamphetamine), by two main routes.
First, along with his RV-driving accomplice, Jesse Pinkman, Mr. White uses the common small-scale route starting with (1S,2S)-pseudoephedrine (the active ingredient in Sudafed®️). This method features the use of an optically active starting material to make an optically active end product, (S)-methamphetamine. However, making (S)-methamphetamine on a large scale is limited because it is hard to get sufficient quantities of (1S,2S)-pseudoephedrine.
In the second route, Mr. White uses his knowledge of chemistry to move to an alternative synthesis starting with phenylacetone (also known as P2P or phenyl-2-propanone):
Racemic methamphetamine was obtained by the Winnebago-based chemists by reductive amination of P2P using methylamine and hydrogen over activated aluminum.
While his blue-colored product is considered by his customers to be exceptionally pure, Mr. White clearly knows about the issue of producing the correct enantiomer. In fact, he raises this topic more than once in the series.
Since the show might not want to tell us the answer, I am wondering what other possible methods Mr. White could have used to obtain an enantiomerically pure product?
Answer: Intriguing question.
First, the best yield would be achieved by selectively producing one enantiomer instead of the other. In this case, White wants D-methamphetamine (powerful psychoactive drug), not L-methamphetamine (Vicks Vapor Inhaler). Reaction processes designed to do this are known as "asymmetric synthesis" reactions, because they favor production of one enantiomer over the other.
The pseudoephedrine method for methamphetamine employs one of the more common methods of asymmetric synthesis, called "chiral pool resolution". As you state, starting with an enantiomerically-pure sample of a chiral reagent (pseudoephedrine) as the starting point allows you to preserve the chirality of the finished product, provided the chiral point is not part of any "leaving group" during the reaction. However, again as you show, phenylacetone is achiral, and so the P2P process cannot take advantage of this method.
There are other methods of asymmetric synthesis, however none of them seem applicable to the chemistry shown or described on TV either; none of the reagents or catalysts mentioned would work as chiral catalysts, nor are they bio- or organocatalysts. Metal complexes with chiral ligands can be used to selectively catalyze production of one enantiomer, however the aluminum-mercury amalgam is again achiral. I don't remember any mention of using organocatalysis or biocatalysis, but these are possible.
The remaining route, then, is chiral resolution; let the reaction produce the 50-50 split, then separate the two enantiomers by some means of reaction and/or physical chemistry. This seems to be the way it works in the real world. The advantage is that most of the methods are pretty cheap and easy; the disadvantage is that your maximum possible yield is 50% (unless you can then run a racemization reaction on the undesirable half to "reshuffle" the chirality of that half; then your yield increases by 50% of the last increase each time you run this step on the undesirable product).
In the case of methamphetamine, this resolution is among the easiest, because methamphetamine forms a "racemic conglomerate" when crystallized. This means, for the non-chemists, that each enantiomer molecule prefers to crystallize with others of the same chiral species, so as the solution cools and the solvent is evaporated off, the D-methamphetamine will form one set of homogeneous crystals and the L-methamphetamine will form another set. This means that all White has to do is slow the evaporation of solvent and subsequent cooling of the pan, letting the largest possible crystals form. Then, the only remaining trick is identifying which crystals have which enantiomer (and as these crystals are translucent and "optically active", observing the polarization pattern of light shone through the crystals will identify which are which). | {
"domain": "chemistry.stackexchange",
"id": 12457,
"tags": "reaction-mechanism, stereochemistry, chemistry-in-fiction"
} |
Is $SU(2)\times U(1) = U(2)$? | Question: In the many textbook of the Standard Model, I encounter the relation
\begin{align}
SU(2)_L \times U(1)_L = U(2)_L.
\end{align}
Here the subscript $L$ means left-handedness (i.e., the chirality of the fermions).
Is the relation above true in the general case? That is, is
\begin{align}
SU(2) \times U(1) = U(2)\ ?
\end{align}
Answer:
The relevant Lie group isomorphism reads
$$\begin{align} U(2)~\cong~&[U(1)\times SU(2)]/\mathbb{Z}_2, \cr
Z(SU(2))~\cong~&\mathbb{Z}_2.\end{align}\tag{1a} $$
In detail, the Lie group isomorphism (1a) is given by
$$U(2)~\ni~ g\quad\mapsto\quad $$
$$ \left(\sqrt{\det g}, ~\frac{g}{\sqrt{\det g}}\right) ~\sim~ \left(-\sqrt{\det g}, ~-\frac{g}{\sqrt{\det g}}\right)$$
$$~\in ~[U(1)\times SU(2)]/\mathbb{Z}_2.\tag{1b}$$
Here the $\sim$ symbol denotes a $\mathbb{Z}_2$-equivalence relation. The $\mathbb{Z}_2$-action resolves the ambiguity in the definition of the double-valued square root.
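The map (1b) can be spot-checked numerically; the following sketch (my own illustration, with an assumed random construction of a $U(2)$ element) verifies that one branch of $g/\sqrt{\det g}$ indeed lands in $SU(2)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# build a random element of U(2): g = exp(i*H) with H Hermitian
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2
w, V = np.linalg.eigh(H)
g = V @ np.diag(np.exp(1j * w)) @ V.conj().T

root = np.sqrt(np.linalg.det(g))     # one branch of the double-valued root
s = g / root                         # candidate SU(2) factor

print(np.isclose(np.linalg.det(s), 1.0))         # det(s) = 1  ->  s in SU(2)
print(np.allclose(s @ s.conj().T, np.eye(2)))    # s is unitary
```

Replacing `root` with `-root` gives the other representative of the same $\mathbb{Z}_2$-equivalence class.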
It seems natural to mention that the Lie group isomorphism (1a) generalizes in a straightforward manner to generic (indefinite) unitary (super) groups
$$\begin{align} U(p,q|m)~\cong~&[U(1)\times SU(p,q|m)]/\mathbb{Z}_{|n-m|}, \cr
Z( SU(p,q|m))~\cong~&\mathbb{Z}_{|n-m|},\end{align}\tag{2a}$$
where
$$\begin{align} p,q,m~\in~& \mathbb{N}_0, \cr n~\equiv~p+q~\neq~&m, \cr
n+m~\geq ~& 1,\end{align}\tag{2b}$$ are three integers. Note that the number $n$ of bosonic dimensions is assumed to be different from the number $m$ of fermionic dimensions. In detail, the Lie group isomorphism (2a) is given by
$$U(p,q|m)~\ni~ g\quad\mapsto\quad $$
$$ \left(\sqrt[|n-m|]{{\rm sdet} g}, ~\frac{g}{\sqrt[|n-m|]{{\rm sdet} g}}\right) ~\sim~ \left(\omega^k~\sqrt[|n-m|]{{\rm sdet} g}, ~\frac{g}{\omega^k~\sqrt[|n-m|]{{\rm sdet} g}}\right)$$
$$ ~\in ~[U(1)\times SU(p,q|m)]/\mathbb{Z}_{|n-m|},\tag{2c}$$
where $$\omega~:=~\exp\left(\frac{2\pi i}{|n\!-\!m|}\right)\tag{2d}$$
is a $|n\!-\!m|$'th root of unity, and $k\in\mathbb{Z}$.
Interestingly, in the case with the same number of bosonic and fermionic dimensions $n=m$, the center
$$ Z( SU(p,q|m))~\cong~U(1) \tag{3a}$$
becomes continuous! I.e. the $U(1)$-center of $U(p,q|m)$ has moved inside $SU(p,q|m)$, and formula (2a) no longer holds! | {
"domain": "physics.stackexchange",
"id": 20284,
"tags": "mathematical-physics, group-theory, topology, lie-algebra, group-representations"
} |
How to install openni_camera & openni_launch package in ROS groovy on Raspberry Pi | Question:
Hello
I am using ROS Groovy on a Raspberry Pi 2 with a bare-bones installation.
I want to obtain the RGB and depth streams from the Kinect using rosrun commands like
rosrun openni_camera openni_node & rosrun image_view image_view image:=rgb/image_raw
For that, I need to install the openni_camera & openni_launch packages in ROS Groovy.
I have tried to do that with apt-get install but was unable to do so.
Is it possible to install these packages on the Pi with ROS? If anyone has done that, please share the steps.
Thanks & Regards
Originally posted by dhiraj on ROS Answers with karma: 1 on 2015-07-24
Post score: 0
Answer:
It's likely that binaries of the packages you mentioned are not built for Groovy. You can build them from source; for example, for openni_camera, clone its source repository in your catkin workspace and build:
cd %YOUR_CATKINWS%/src
git clone https://github.com/ros-drivers/openni_camera.git && cd %YOUR_CATKINWS%
catkin_make
But a better answer would be to use ROSberryPi for Indigo, where better support can be expected for the longer term, unless you have a reason to stick to a distro two generations older.
Originally posted by 130s with karma: 10937 on 2015-10-23
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 22286,
"tags": "ros-groovy, openni-camera"
} |
Heat Energy $\propto 1/R$ or $\propto R$? | Question: I know that heat energy can be calculated by $I^2 R t$. Therefore, it's directly proportional to resistance.
However,
$$I^2 R t = \frac{V^2}{R}t$$
In this case, won't the heat energy be inversely proportional to the resistance? Won't these 2 statements contradict each other?
Answer: You are implicitly assuming that $I$ and $V$ are constant quantities. If one says that $x$ is proportional to $y$, it means $x = ay$ where $a$ is a constant.
But consider what happens when you change $R$: either $V$ or $I$, or both, change. They are not constant.
I think it would be better to say that power is proportional to $I^2$, with coefficient of proportionality $R$, and power is also proportional to $V^2$, with coefficient of proportionality $1/R$.
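A quick numerical illustration (my own sketch, with arbitrary example values) of the two proportionalities:

```python
V = 12.0                      # volts, held fixed by an ideal voltage source
p_v = [V**2 / R for R in (2.0, 4.0)]
print(p_v)                    # [72.0, 36.0]: doubling R halves the power

I = 3.0                       # amperes, held fixed by an ideal current source
p_i = [I**2 * R for R in (2.0, 4.0)]
print(p_i)                    # [18.0, 36.0]: doubling R doubles the power
```

Note that in the two scenarios the circuits are different: holding $V$ fixed while doubling $R$ necessarily changes $I$, and vice versa.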
Changing $I$ or $V$ does not affect the value of $R$ (not in simple analyses anyway). | {
"domain": "physics.stackexchange",
"id": 77429,
"tags": "thermodynamics, electric-current, electrical-resistance, voltage, power"
} |
Calculate grades based on pass/fail percentage | Question: Our school grading scale is from 1..10 with one decimal. If you do nothing you still get a grade of 1.0. A passing grade equals 5.5. A 'cesuur' percentage defines at what percentage of correct answers the 5.5 will be given to a student.
Examples:
Grade(0,100,x) should always result in 1.0
Grade(100,100,x) should always result in 10.0
Grade(50,100,0.5) should result in 5.5
Questions: how can I simplify the code? How can I make it more robust?
Public Function Grade(Points As Integer, MaxPoints As Integer, Cesuur As Double) As Double
Dim passPoints As Integer
Dim maxGrade As Integer
Dim minGrade As Integer
Dim passGrade As Double
Dim base As Double
Dim restPoints As Integer
Dim restPass As Double
passPoints = Cesuur * MaxPoints
maxGrade = 10
minGrade = 1
passGrade = (maxGrade + minGrade) / 2
base = maxGrade - passGrade
If Points < passPoints Then
Grade = 1 + (passGrade - minGrade) * Points / passPoints
Else
restPoints = MaxPoints - Points
restPass = MaxPoints * (1 - Cesuur)
Grade = maxGrade - restPoints * base / restPass
End If
Grade = Round(Grade, 1)
End Function
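As a sanity check, the grading formula above can be transcribed into a short Python sketch (my transcription of the VBA, not part of the original question) and tested against the three examples:

```python
def grade(points, max_points, cesuur):
    # mirror of the VBA Grade function above
    max_grade, min_grade = 10, 1
    pass_points = cesuur * max_points
    pass_grade = (max_grade + min_grade) / 2      # 5.5, the passing grade
    base = max_grade - pass_grade
    if points < pass_points:
        g = min_grade + (pass_grade - min_grade) * points / pass_points
    else:
        rest_points = max_points - points
        rest_pass = max_points * (1 - cesuur)
        g = max_grade - rest_points * base / rest_pass
    return round(g, 1)

print(grade(0, 100, 0.5))     # 1.0
print(grade(100, 100, 0.5))   # 10.0
print(grade(50, 100, 0.5))    # 5.5
```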
Answer: The function's parameters are implicitly passed by reference, which probably isn't the intent since none of the parameters are assigned/returned to the caller.
Signature should pass its parameters by value, like this:
Public Function Grade(ByVal Points As Integer, ByVal MaxPoints As Integer, ByVal Cesuur As Double) As Double
maxGrade and minGrade are only ever assigned once - they're essentially constants and could be declared as such:
Const MAXGRADE As Integer = 10
Const MINGRADE As Integer = 1
I would suggest to declare variables closer to their usage, and perhaps only assign the function's return value in one place.
Variables restPoints and restPass are only ever used with a passing grade, in that Else block. VBA doesn't scope variables at anything tighter than procedure scope, so you could extract a method to calculate a passing grade, but that's borderline overkill - here's what it would look like, with parameter casing to camelCase:
Option Explicit
Private Const MAXGRADE As Integer = 10
Private Const MINGRADE As Integer = 1
Public Function Grade(ByVal points As Integer, ByVal maxPoints As Integer, ByVal cesuur As Double) As Double
Dim passPoints As Integer
passPoints = cesuur * maxPoints
Dim passGrade As Double
passGrade = (MAXGRADE + MINGRADE) / 2
Dim base As Double
base = MAXGRADE - passGrade
Dim result As Double
If points < passPoints Then
result = 1 + (passGrade - MINGRADE) * points / passPoints
Else
result = CalculatePassingGrade(base, points, maxPoints, cesuur)
End If
Grade = Round(result, 1)
End Function
Private Function CalculatePassingGrade(ByVal base As Double, ByVal points As Integer, ByVal maxPoints As Integer, ByVal cesuur As Double) As Double
Dim restPoints As Integer
restPoints = maxPoints - points
Dim restPass As Double
restPass = maxPoints * (1 - cesuur)
CalculatePassingGrade = MAXGRADE - restPoints * base / restPass
End Function | {
"domain": "codereview.stackexchange",
"id": 13710,
"tags": "vba"
} |
How did the expression for the electric field come to be defined as F/q? | Question: How did the expression for the electric field come to be defined as $F/q$?
Answer: Hi and welcome to Physics Stack Exchange. Imagine you want to quantify how much force a (test) charge will feel if you place it somewhere near another charge, without considering the first (test) charge's properties (i.e. charge magnitude). So you want to define a quantity that is independent of the test particle's charge but related to the second charged particle's properties (i.e. charge magnitude again). Well, the most natural way to do that is to divide by the magnitude of the test charge. The resulting quantity is what came to be known as the electric field, caused by the (non-test) charged particle at all points in space.
It has the properties of a field, i.e. a given value (or several values, if we are talking about a vector field) for each point in space, and it also inherits the properties of the (non-test) charged particle, while at the same time neglecting any properties a test particle might have if we placed it anywhere in space, where it would be influenced by the field caused by the non-test charge.
I know it doesn't sound like a lot, but I think this is simply a matter of definition. And the definition makes sense, because fields are essentially space-dependent functions (or vectors, whose components are functions of space). I hope this helps... | {
"domain": "physics.stackexchange",
"id": 89278,
"tags": "forces, electrostatics, electric-fields, charge, history"
} |
Is it possible to create a current by spinning a charged sphere? | Question: If we have a sphere which has surface charge density $\sigma$ and we rotate it about the $z$ axis, will this create a current? Is it possible without any potential difference?
Answer: Yes. Let's assume that the charge density is fixed relative to the surface of a sphere of radius $R$; then spinning the sphere with angular velocity $\vec\omega = \omega \hat{z}$ would create a surface current density $\vec K$ given by
$$
\vec K(\theta, \phi) = \omega R\sin\theta\sigma(\theta, \phi)\hat\phi(\theta, \phi)
$$
where $\theta$ and $\phi$ are the polar and azimuthal spherical coordinates. | {
"domain": "physics.stackexchange",
"id": 7444,
"tags": "electricity, electric-current"
} |
Are the number of states in an NFA the same as the pumping length? | Question: So I was reading a post on the minimum pumping length of a regular language, where Yuval Filmus proved that the minimum pumping length can be smaller than the number of states of a minimal DFA. But what about NFAs? Are NFAs able to give us the minimum pumping length?
For example, say we have the language $L=(10)^*$. Though the minimal DFA for this will have $3$ states, the NFA will have only $2$ states, which is in fact the pumping length of the language. So are NFAs able to give us the exact pumping length of a language?
Answer: Here is a counterexample. Consider the language $L = 1^* + 0^*1^n$. The minimal NFA for $L$ has $n+C$ states for some constant $C$, but every word can be pumped, so the pumping length is 1. | {
"domain": "cs.stackexchange",
"id": 14874,
"tags": "automata, finite-automata, pumping-lemma"
} |
How are Richter magnitudes of past earthquakes estimated? | Question: In reading about historical major earthquakes, in particular, the Great Shaanxi Earthquake that killed approximately 830,000 people in July, 1556, there is a claim made about the approximate Richter scale magnitude:
Later scientific investigation revealed that the magnitude of the quake was approximately 8.0 to 8.3
despite this earthquake obviously pre-dating the development of the Richter Scale.
What do geologists look for to estimate the magnitude of past major earthquakes?
Answer: The approximate magnitude of ancient historic earthquakes may be estimated using a Magnitude/Intensity comparison based on modern earthquakes. Earthquake intensity may be estimated from the descriptions written down by people. The Chinese officials made detailed records of the damage caused by large earthquakes. | {
"domain": "earthscience.stackexchange",
"id": 322,
"tags": "geophysics, seismology, earthquakes"
} |
What is 'degrees of freedom' when using Fourier series to express a periodic waveform? | Question: We can express any desired periodic waveform using Fourier series.
In the book I am studying from, it is said:
'We see that with Fourier series, we can produce any desired periodic waveform and extract its wave number content (via the $a_n$ and $b_n$). But even though we have used an infinite number of plane wave components, we still evidently do not have enough “degrees of freedom” to produce a truly localized wave packet."
I was wondering what the 'degrees of freedom' in this context mean.
For my understanding a wave packet is:
The most general solution of the wave equation can be shown to be given by any (suitably differentiable) function of the form $\phi(x,t)=f(x\pm vt)$ since it satisfies:
$$(\pm v)^2 f''(x\pm vt)=\frac{\partial^2\phi(x,t)}{\partial t^2}=v^2 \frac{\partial^2 \phi(x,t)}{\partial x^2}=v^2 f''(x \pm vt).$$
This implies that any appropriate initial waveform, $f(x)$, can be turned into a solution of $\frac{\partial^2\phi(x,t)}{\partial t^2}=v^2 \frac{\partial^2 \phi(x,t)}{\partial x^2}$, namely $\phi(x,t)=f(x \pm vt)$, which propagates to the right (-) or left (+) with no change in shape.
Such a solution is called a wave packet.
Answer: In this context, degrees of freedom are the independent numbers that represent your waveform. When you specify a Fourier series, there's a countable infinity of such numbers, one for each harmonic. This lets you make a wave packet that will repeat periodically, i.e. the envelope of the waveform will have an infinite number of bumps.
But the usual concept of a wave packet is a single bump. This is impossible to make using a Fourier series. You can reduce the step $\Delta k$ between wavenumbers of the harmonics (thus increasing the amount of numbers—degrees of freedom), which will let you increase period and thus extend the "useful length" of the single wave packet. Only in the limit $\Delta k\to0$ will you get the desired single bump, and this limit corresponds to the continuous Fourier transform, rather than Fourier series. Here you'll have an uncountable infinity of degrees of freedom, and this amount is able to represent the single non-periodic wave packet. | {
"domain": "physics.stackexchange",
"id": 71672,
"tags": "waves, fourier-transform, string, degrees-of-freedom"
} |
AEC-to-WebAssembly compiler in C++ | Question: Now that my new compiler is capable of compiling programs such as the Analog Clock in AEC, I've decided to share the code of that compiler with you, to see what you think about it.
File compiler.cpp:
#include "TreeNode.cpp"
#include "bitManipulations.cpp"
AssemblyCode convertToInteger32(const TreeNode node,
const CompilationContext context) {
auto originalCode = node.compile(context);
const AssemblyCode::AssemblyType i32 = AssemblyCode::AssemblyType::i32,
i64 = AssemblyCode::AssemblyType::i64,
f32 = AssemblyCode::AssemblyType::f32,
f64 = AssemblyCode::AssemblyType::f64,
null = AssemblyCode::AssemblyType::null;
if (originalCode.assemblyType == null) {
std::cerr
<< "Line " << node.lineNumber << ", Column " << node.columnNumber
<< ", Compiler error: Some part of the compiler attempted to convert \""
<< node.text
<< "\" to \"Integer32\", which makes no sense. This could be an "
"internal compiler error, or there could be something semantically "
"(though not grammatically) very wrong with your program."
<< std::endl;
exit(1);
}
if (originalCode.assemblyType == i32)
return originalCode;
if (originalCode.assemblyType == i64)
return AssemblyCode(
"(i32.wrap_i64\n" + std::string(originalCode.indentBy(1)) + "\n)", i32);
if (originalCode.assemblyType == f32)
return AssemblyCode(
"(i32.trunc_f32_s\n" + std::string(originalCode.indentBy(1)) + "\n)",
i32); // Makes little sense to me (that, when converting to an integer,
// the decimal part of the number is simply truncated), but that's
// how it is done in the vast majority of programming languages.
if (originalCode.assemblyType == f64)
return AssemblyCode("(i32.trunc_f64_s\n" +
std::string(originalCode.indentBy(1)) + "\n)",
i32);
std::cerr << "Line " << node.lineNumber << ", Column " << node.columnNumber
<< ", Compiler error: Internal compiler error, control reached the "
"end of the \"convertToInteger32\" function!"
<< std::endl;
exit(-1);
return AssemblyCode("()");
}
AssemblyCode convertToInteger64(const TreeNode node,
const CompilationContext context) {
auto originalCode = node.compile(context);
const AssemblyCode::AssemblyType i32 = AssemblyCode::AssemblyType::i32,
i64 = AssemblyCode::AssemblyType::i64,
f32 = AssemblyCode::AssemblyType::f32,
f64 = AssemblyCode::AssemblyType::f64,
null = AssemblyCode::AssemblyType::null;
if (originalCode.assemblyType == null) {
std::cerr
<< "Line " << node.lineNumber << ", Column " << node.columnNumber
<< ", Compiler error: Some part of the compiler attempted to convert \""
<< node.text
<< "\" to \"Integer64\", which makes no sense. This could be an "
"internal compiler error, or there could be something semantically "
"(though not grammatically) very wrong with your program."
<< std::endl;
exit(1);
}
if (originalCode.assemblyType == i32)
return AssemblyCode(
"(i64.extend_i32_s\n" + // If you don't put "_s", JavaScript Virtual
// Machine is going to interpret the argument as
// unsigned, leading to huge positive numbers
// instead of negative ones.
std::string(originalCode.indentBy(1)) + "\n)",
i64);
if (originalCode.assemblyType == i64)
return originalCode;
if (originalCode.assemblyType == f32)
return AssemblyCode("(i64.trunc_f32_s\n" +
std::string(originalCode.indentBy(1)) + "\n)",
i64);
if (originalCode.assemblyType == f64)
return AssemblyCode("(i64.trunc_f64_s\n" +
std::string(originalCode.indentBy(1)) + "\n)",
i64);
std::cerr << "Line " << node.lineNumber << ", Column " << node.columnNumber
<< ", Compiler error: Internal compiler error, control reached the "
"end of the \"convertToInteger64\" function!"
<< std::endl;
exit(-1);
return AssemblyCode("()");
}
AssemblyCode convertToDecimal32(const TreeNode node,
const CompilationContext context) {
auto originalCode = node.compile(context);
const AssemblyCode::AssemblyType i32 = AssemblyCode::AssemblyType::i32,
i64 = AssemblyCode::AssemblyType::i64,
f32 = AssemblyCode::AssemblyType::f32,
f64 = AssemblyCode::AssemblyType::f64,
null = AssemblyCode::AssemblyType::null;
if (originalCode.assemblyType == null) {
std::cerr
<< "Line " << node.lineNumber << ", Column " << node.columnNumber
<< ", Compiler error: Some part of the compiler attempted to convert \""
<< node.text
<< "\" to \"Decimal32\", which makes no sense. This could be an "
"internal compiler error, or there could be something semantically "
"(though not grammatically) very wrong with your program."
<< std::endl;
exit(1);
}
if (originalCode.assemblyType == i32)
return AssemblyCode(
        "(f32.convert_i32_s\n" + // Again, those who designed the JavaScript
                                 // Virtual Machine had a weird idea that
                                 // integers should be unsigned unless somebody
                                 // makes them explicitly signed via "_s".
std::string(originalCode.indentBy(1)) + "\n)",
f32);
if (originalCode.assemblyType == i64)
return AssemblyCode("(f32.convert_i64_s\n" +
std::string(originalCode.indentBy(1)) + "\n)",
f32);
if (originalCode.assemblyType == f32)
return originalCode;
if (originalCode.assemblyType == f64)
return AssemblyCode("(f32.demote_f64\n" +
std::string(originalCode.indentBy(1)) + "\n)",
f32);
std::cerr << "Line " << node.lineNumber << ", Column " << node.columnNumber
<< ", Compiler error: Internal compiler error, control reached the "
"end of the \"convertToDecimal32\" function!"
<< std::endl;
exit(-1);
return AssemblyCode("()");
}
AssemblyCode convertToDecimal64(const TreeNode node,
const CompilationContext context) {
auto originalCode = node.compile(context);
const AssemblyCode::AssemblyType i32 = AssemblyCode::AssemblyType::i32,
i64 = AssemblyCode::AssemblyType::i64,
f32 = AssemblyCode::AssemblyType::f32,
f64 = AssemblyCode::AssemblyType::f64,
null = AssemblyCode::AssemblyType::null;
if (originalCode.assemblyType == null) {
std::cerr
<< "Line " << node.lineNumber << ", Column " << node.columnNumber
<< ", Compiler error: Some part of the compiler attempted to convert \""
<< node.text
<< "\" to \"Decimal64\", which makes no sense. This could be an "
"internal compiler error, or there could be something semantically "
"(though not grammatically) very wrong with your program."
<< std::endl;
exit(1);
}
if (originalCode.assemblyType == i32)
return AssemblyCode("(f64.convert_i32_s\n" +
std::string(originalCode.indentBy(1)) + "\n)",
f64);
if (originalCode.assemblyType == i64)
return AssemblyCode("(f64.convert_i64_s\n" +
std::string(originalCode.indentBy(1)) + "\n)",
f64);
if (originalCode.assemblyType == f32)
return AssemblyCode("(f64.promote_f32\n" +
std::string(originalCode.indentBy(1)) + "\n)",
f64);
if (originalCode.assemblyType == f64)
return originalCode;
std::cerr << "Line " << node.lineNumber << ", Column " << node.columnNumber
<< ", Compiler error: Internal compiler error, control reached the "
"end of the \"convertToDecimal64\" function!"
<< std::endl;
exit(-1);
return AssemblyCode("()");
}
AssemblyCode convertTo(const TreeNode node, const std::string type,
const CompilationContext context) {
if (type == "Character" or type == "Integer16" or type == "Integer32" or
std::regex_search(
type,
std::regex(
              "Pointer$"))) // Since, in the JavaScript Virtual Machine, you
                            // can't push types of less than 4 bytes (32 bits)
                            // onto the system stack, you need to convert those
                            // to Integer32 (i32). That makes slightly more
                            // sense than the way it is in 64-bit x86 assembly,
                            // where you can put 16-bit and 64-bit values onto
                            // the system stack, but not 32-bit values.
return convertToInteger32(node, context);
if (type == "Integer64")
return convertToInteger64(node, context);
if (type == "Decimal32")
return convertToDecimal32(node, context);
if (type == "Decimal64")
return convertToDecimal64(node, context);
std::cerr << "Line " << node.lineNumber << ", Column " << node.columnNumber
<< ", Compiler error: Some part of the compiler attempted to get "
"the assembly code for converting \""
<< node.text << "\" into the type \"" << type
<< "\", which doesn't make sense. This could be an internal "
"compiler error, or there could be something semantically "
"(though not grammatically) very wrong with your program."
<< std::endl;
exit(-1);
return AssemblyCode("()");
}
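// Illustrative example, not part of the compiler: for a node of the AEC type
// "Integer64" being converted to "Decimal32", "convertTo" dispatches to
// "convertToDecimal32", which wraps the compiled operand in
// "f32.convert_i64_s", producing WebAssembly text such as:
//   (f32.convert_i64_s
//    (i64.const 42)
//   )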
std::string getStrongerType(int, int, std::string,
                            std::string); // Since C++, unlike JavaScript,
                                          // doesn't support function hoisting.
AssemblyCode TreeNode::compile(CompilationContext context) const {
std::string typeOfTheCurrentNode = getType(context);
if (!mappingOfAECTypesToWebAssemblyTypes.count(typeOfTheCurrentNode)) {
std::cerr
<< "Line " << lineNumber << ", Column " << columnNumber
<< ", Internal compiler error: The function \"getType\" returned \""
<< typeOfTheCurrentNode
<< "\", which is an invalid name of type. Aborting the compilation!"
<< std::endl;
exit(1);
}
AssemblyCode::AssemblyType returnType =
mappingOfAECTypesToWebAssemblyTypes.at(typeOfTheCurrentNode);
auto iteratorOfTheCurrentFunction =
std::find_if(context.functions.begin(), context.functions.end(),
[=](function someFunction) {
return someFunction.name == context.currentFunctionName;
});
if (iteratorOfTheCurrentFunction == context.functions.end()) {
std::cerr
<< "Line " << lineNumber << ", Column " << columnNumber
<< ", Internal compiler error: The \"compile(CompilationContext)\" "
"function was called without setting the current function name, "
"aborting compilation (or else the compiler will segfault)!"
<< std::endl;
exit(1);
}
function currentFunction = *iteratorOfTheCurrentFunction;
std::string assembly;
if (text == "Does" or text == "Then" or text == "Loop" or
text == "Else") // Blocks of code are stored by the parser as child nodes
// of "Does", "Then", "Else" and "Loop".
{
if (text != "Does")
context.stackSizeOfThisScope =
0; //"TreeRootNode" is supposed to set up the arguments in the scope
// before passing the recursion onto the "Does" node.
for (auto childNode : children) {
if (childNode.text == "Nothing")
continue;
else if (basicDataTypeSizes.count(childNode.text)) {
// Local variables declaration.
for (TreeNode variableName : childNode.children) {
if (variableName.text.back() != '[') { // If it's not an array.
context.localVariables[variableName.text] = 0;
for (auto &pair : context.localVariables)
pair.second += basicDataTypeSizes.at(childNode.text);
context.variableTypes[variableName.text] = childNode.text;
context.stackSizeOfThisFunction +=
basicDataTypeSizes.at(childNode.text);
context.stackSizeOfThisScope +=
basicDataTypeSizes.at(childNode.text);
assembly += "(global.set $stack_pointer\n\t(i32.add (global.get "
"$stack_pointer) (i32.const " +
std::to_string(basicDataTypeSizes.at(childNode.text)) +
")) ;;Allocating the space for the local variable \"" +
variableName.text + "\".\n)\n";
if (variableName.children.size() and
variableName.children[0].text ==
":=") // Initial assignment to local variables.
{
TreeNode assignmentNode = variableName.children[0];
assignmentNode.children.insert(assignmentNode.children.begin(),
variableName);
assembly += assignmentNode.compile(context) + "\n";
}
} else { // If that's a local array declaration.
int arraySizeInBytes =
basicDataTypeSizes.at(childNode.text) *
variableName.children[0]
.interpretAsACompileTimeIntegerConstant();
context.localVariables[variableName.text] = 0;
for (auto &pair : context.localVariables)
pair.second += arraySizeInBytes;
context.variableTypes[variableName.text] = childNode.text;
context.stackSizeOfThisFunction += arraySizeInBytes;
context.stackSizeOfThisScope += arraySizeInBytes;
assembly += "(global.set $stack_pointer\n\t(i32.add (global.get "
"$stack_pointer) (i32.const " +
std::to_string(arraySizeInBytes) +
")) ;;Allocating the space for the local array \"" +
variableName.text + "\".\n)\n";
if (variableName.children.size() == 2 and
variableName.children[1].text == ":=" and
variableName.children[1].children[0].text ==
"{}") // Initial assignments of local arrays.
{
TreeNode initialisationList =
variableName.children[1].children[0];
for (unsigned int i = 0; i < initialisationList.children.size();
i++) {
TreeNode element = initialisationList.children[i];
TreeNode assignmentNode(
":=", variableName.children[1].lineNumber,
variableName.children[1].columnNumber);
TreeNode whereToAssignTheElement(
variableName.text, variableName.lineNumber,
variableName
.columnNumber); // Damn, can you think up a language in
// which writing stuff like this isn't
// as tedious and error-prone as it is
// in C++ or JavaScript? Maybe some
// language in which you can switch
// between a C-like syntax and a
// Lisp-like syntax at will?
              whereToAssignTheElement.children.push_back(TreeNode(
                  std::to_string(i), variableName.children[1].lineNumber,
                  variableName.children[1].columnNumber));
assignmentNode.children.push_back(whereToAssignTheElement);
assignmentNode.children.push_back(element);
assembly += assignmentNode.compile(context) + "\n";
}
}
}
}
} else
assembly += std::string(childNode.compile(context)) + "\n";
}
assembly += "(global.set $stack_pointer (i32.sub (global.get "
"$stack_pointer) (i32.const " +
std::to_string(context.stackSizeOfThisScope) + ")))";
} else if (text.front() == '"')
assembly += "(i32.const " + std::to_string(context.globalVariables[text]) +
") ;;Pointer to " + text;
else if (context.variableTypes.count(text)) {
if (typeOfTheCurrentNode == "Character")
assembly +=
"(i32.load8_s\n" + compileAPointer(context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Integer16")
assembly +=
"(i32.load16_s\n" + compileAPointer(context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Integer32" or
std::regex_search(typeOfTheCurrentNode, std::regex("Pointer$")))
assembly += "(i32.load\n" + compileAPointer(context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Integer64")
assembly += "(i64.load\n" + compileAPointer(context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Decimal32")
assembly += "(f32.load\n" + compileAPointer(context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Decimal64")
assembly += "(f64.load\n" + compileAPointer(context).indentBy(1) + "\n)";
else {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Internal compiler error: Compiler got into a forbidden "
"state while compiling the token \""
<< text << "\", aborting the compilation!" << std::endl;
exit(1);
}
} else if (text == ":=") {
TreeNode rightSide;
if (children[1].text == ":=") { // Expressions such as "a:=b:=0" or similar.
TreeNode tmp = children[1]; // In case the "compile" changes the TreeNode
// (which the GNU C++ compiler should forbid,
// but apparently doesn't).
assembly += children[1].compile(context) + "\n";
rightSide = tmp.children[0];
} else
rightSide = children[1];
assembly += ";;Assigning " + rightSide.getLispExpression() + " to " +
children[0].getLispExpression() + ".\n";
if (typeOfTheCurrentNode == "Character")
assembly += "(i32.store8\n" +
children[0].compileAPointer(context).indentBy(1) + "\n" +
convertToInteger32(rightSide, context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Integer16")
assembly += "(i32.store16\n" +
children[0].compileAPointer(context).indentBy(1) + "\n" +
convertToInteger32(rightSide, context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Integer32" or
std::regex_search(typeOfTheCurrentNode, std::regex("Pointer$")))
assembly += "(i32.store\n" +
children[0].compileAPointer(context).indentBy(1) + "\n" +
convertToInteger32(rightSide, context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Integer64")
assembly += "(i64.store\n" +
children[0].compileAPointer(context).indentBy(1) + "\n" +
convertToInteger64(rightSide, context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Decimal32")
assembly += "(f32.store\n" +
children[0].compileAPointer(context).indentBy(1) + "\n" +
convertToDecimal32(rightSide, context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Decimal64")
assembly += "(f64.store\n" +
children[0].compileAPointer(context).indentBy(1) + "\n" +
convertToDecimal64(rightSide, context).indentBy(1) + "\n)";
else {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Internal compiler error: The compiler got into a "
"forbidden state while compiling the token \""
<< text << "\", aborting the compilation!" << std::endl;
exit(1);
}
} else if (text == "If") {
if (children.size() < 2) {
std::cerr
<< "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: Corrupt AST, the \"If\" node has less than 2 "
"child nodes. Aborting the compilation (or else we will segfault)!"
<< std::endl;
exit(1);
}
if (children[1].text != "Then") {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: Corrupt AST, the second child of the "
"\"If\" node isn't named \"Then\". Aborting the compilation "
"(or else we will probably segfault)!"
<< std::endl;
exit(1);
}
if (children.size() >= 3 and children[2].text != "Else") {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: Corrupt AST, the third child of the "
"\"If\" node is not named \"Else\", aborting the "
"compilation (or else we will probably segfault)!"
<< std::endl;
exit(1);
}
assembly += "(if\n" + convertToInteger32(children[0], context).indentBy(1) +
"\n\t(then\n" + children[1].compile(context).indentBy(2) +
"\n\t)" +
((children.size() == 3)
? "\n\t(else\n" +
children[2].compile(context).indentBy(2) + "\n\t)\n)"
: AssemblyCode("\n)"));
} else if (text == "While") {
if (children.size() < 2 or children[1].text != "Loop") {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: Corrupt AST, aborting (or else we will "
"segfault)!"
<< std::endl;
exit(1);
}
assembly += "(block\n\t(loop\n\t\t(br_if 1\n\t\t\t(i32.eqz\n" +
convertToInteger32(children[0], context).indentBy(4) +
"\n\t\t\t)\n\t\t)" + children[1].compile(context).indentBy(2) +
"\n\t\t(br 0)\n\t)\n)";
} else if (std::regex_match(text,
std::regex("(^\\d+$)|(^0x(\\d|[a-f]|[A-F])+$)")))
assembly += "(i64.const " + text + ")";
else if (std::regex_match(text, std::regex("^\\d+\\.\\d*$")))
assembly += "(f64.const " + text + ")";
else if (text == "Return") {
if (currentFunction.returnType != "Nothing") {
if (children.empty()) {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: It's not specified what to return from "
"a function that's supposed to return \""
<< currentFunction.returnType
<< "\", aborting the compilation (or else the compiler will "
"segfault)!"
<< std::endl;
exit(1);
}
TreeNode valueToBeReturned = children[0];
if (valueToBeReturned.text == ":=") {
TreeNode tmp =
valueToBeReturned; // The C++ compiler is supposed to forbid
// side-effects in the "compile" method, since
// it's declared as "const", but apparently it
// doesn't. It seems to me there is some bug both
// in my code and in GNU C++ compiler (which is
// supposed to warn me about it).
assembly += valueToBeReturned.compile(context) + "\n";
valueToBeReturned = tmp.children[0];
}
assembly +=
";;Setting for returning: " + valueToBeReturned.getLispExpression() +
"\n";
assembly += "(local.set $return_value\n";
assembly +=
convertTo(valueToBeReturned, currentFunction.returnType, context)
.indentBy(1) +
"\n)\n";
}
assembly += "(global.set $stack_pointer (i32.sub (global.get "
"$stack_pointer) (i32.const " +
std::to_string(context.stackSizeOfThisFunction) +
"))) ;;Cleaning up the system stack before returning.\n";
assembly += "(return";
if (currentFunction.returnType == "Nothing")
assembly += ")";
else
assembly += " (local.get $return_value))";
} else if (text == "+") {
    std::vector<TreeNode> children =
        this->children; // So that the compiler doesn't complain about
                        // "iter_swap" being called in a constant member
                        // function.
if (std::regex_search(children[1].getType(context), std::regex("Pointer$")))
std::iter_swap(children.begin(), children.begin() + 1);
std::string firstType = children[0].getType(context);
std::string secondType = children[1].getType(context);
    if (std::regex_search(
            firstType,
            std::regex(
                "Pointer$"))) // Multiply the second operand by the number of
                              // bytes that the data type the pointer points
                              // to takes. That is, be compatible with
                              // pointers in C and C++, rather than with
                              // pointers in assembly (which allows unaligned
                              // access).
assembly += "(i32.add\n" +
std::string(children[0].compile(context).indentBy(1)) +
"\n\t(i32.mul (i32.const " +
std::to_string(basicDataTypeSizes.at(firstType.substr(
0, firstType.size() - std::string("Pointer").size()))) +
")\n" + convertToInteger32(children[1], context).indentBy(2) +
"\n\t)\n)";
else
assembly +=
"(" + stringRepresentationOfWebAssemblyType.at(returnType) +
".add\n" +
convertTo(children[0], typeOfTheCurrentNode, context).indentBy(1) +
"\n" +
convertTo(children[1], typeOfTheCurrentNode, context).indentBy(1) +
"\n)";
} else if (text == "-") {
std::string firstType = children[0].getType(context);
std::string secondType = children[1].getType(context);
if (!std::regex_search(firstType, std::regex("Pointer$")) and
std::regex_search(secondType, std::regex("Pointer$"))) {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: What exactly does it mean to subtract a "
"pointer from a number? Aborting the compilation!"
<< std::endl;
exit(1);
} else if (std::regex_search(firstType, std::regex("Pointer$")) and
std::regex_search(
secondType,
std::regex("Pointer$"))) // Subtract two pointers as if they
// were two Integer32s.
assembly += "(i32.sub\n" + children[0].compile(context).indentBy(1) +
"\n" + children[1].compile(context).indentBy(1) + "\n)";
else if (std::regex_search(firstType, std::regex("Pointer$")) and
!std::regex_search(secondType, std::regex("Pointer$")))
assembly += "(i32.sub\n" + children[0].compile(context).indentBy(1) +
"\n\t(i32.mul (i32.const " +
std::to_string(basicDataTypeSizes.at(firstType.substr(
0, firstType.size() - std::string("Pointer").size()))) +
")\n" + children[1].compile(context).indentBy(2) +
"\n\t\t)\n\t)\n)";
else
assembly +=
"(" + stringRepresentationOfWebAssemblyType.at(returnType) +
".sub\n" +
convertTo(children[0], typeOfTheCurrentNode, context).indentBy(1) +
"\n" +
convertTo(children[1], typeOfTheCurrentNode, context).indentBy(1) +
"\n)";
} else if (text == "*")
assembly +=
"(" + stringRepresentationOfWebAssemblyType.at(returnType) + ".mul\n" +
convertTo(children[0], typeOfTheCurrentNode, context).indentBy(1) +
"\n" +
convertTo(children[1], typeOfTheCurrentNode, context).indentBy(1) +
"\n)";
else if (text == "/") {
if (returnType == AssemblyCode::AssemblyType::i32 or
returnType == AssemblyCode::AssemblyType::i64)
assembly +=
"(" + stringRepresentationOfWebAssemblyType.at(returnType) +
".div_s\n" +
convertTo(children[0], typeOfTheCurrentNode, context).indentBy(1) +
"\n" +
convertTo(children[1], typeOfTheCurrentNode, context).indentBy(1) +
"\n)";
else
assembly +=
"(" + stringRepresentationOfWebAssemblyType.at(returnType) +
".div\n" +
convertTo(children[0], typeOfTheCurrentNode, context).indentBy(1) +
"\n" +
convertTo(children[1], typeOfTheCurrentNode, context).indentBy(1) +
"\n)";
} else if (text == "<" or text == ">") {
std::string firstType = children[0].getType(context);
std::string secondType = children[1].getType(context);
std::string strongerType;
if (std::regex_search(firstType, std::regex("Pointer$")) and
std::regex_search(secondType, std::regex("Pointer$")))
strongerType =
"Integer32"; // Let's allow people to shoot themselves in the foot by
// comparing pointers of different types.
else
strongerType =
getStrongerType(lineNumber, columnNumber, firstType, secondType);
AssemblyCode::AssemblyType assemblyType =
mappingOfAECTypesToWebAssemblyTypes.at(strongerType);
if (assemblyType == AssemblyCode::AssemblyType::i32 or
assemblyType == AssemblyCode::AssemblyType::i64)
assembly +=
"(" + stringRepresentationOfWebAssemblyType.at(assemblyType) +
(text == "<" ? ".lt_s\n" : ".gt_s\n") +
convertTo(children[0], strongerType, context).indentBy(1) + "\n" +
convertTo(children[1], strongerType, context).indentBy(1) + "\n)";
else
assembly +=
"(" + stringRepresentationOfWebAssemblyType.at(assemblyType) +
(text == "<" ? ".lt\n" : ".gt\n") +
convertTo(children[0], strongerType, context).indentBy(1) + "\n" +
convertTo(children[1], strongerType, context).indentBy(1) + "\n)";
} else if (text == "=") {
std::string firstType = children[0].getType(context);
std::string secondType = children[1].getType(context);
std::string strongerType;
if (std::regex_search(firstType, std::regex("Pointer$")) and
std::regex_search(secondType, std::regex("Pointer$")))
strongerType = "Integer32";
else
strongerType =
getStrongerType(lineNumber, columnNumber, firstType, secondType);
AssemblyCode::AssemblyType assemblyType =
mappingOfAECTypesToWebAssemblyTypes.at(strongerType);
assembly +=
"(" + stringRepresentationOfWebAssemblyType.at(assemblyType) + ".eq\n" +
convertTo(children[0], strongerType, context).indentBy(1) + "\n" +
convertTo(children[1], strongerType, context).indentBy(1) + "\n)";
} else if (text == "?:")
assembly +=
"(if (result " + stringRepresentationOfWebAssemblyType.at(returnType) +
")\n" + convertToInteger32(children[0], context).indentBy(1) +
"\n\t(then\n" +
convertTo(children[1], typeOfTheCurrentNode, context).indentBy(2) +
"\n\t)\n\t(else\n" +
convertTo(children[2], typeOfTheCurrentNode, context).indentBy(2) +
"\n\t)\n)";
else if (text == "not(")
assembly += "(i32.eqz\n" +
convertToInteger32(children[0], context).indentBy(1) + "\n)";
else if (text == "mod(")
assembly +=
"(" + stringRepresentationOfWebAssemblyType.at(returnType) +
".rem_s\n" +
convertTo(children[0], typeOfTheCurrentNode, context).indentBy(1) +
"\n" +
convertTo(children[1], typeOfTheCurrentNode, context).indentBy(1) +
"\n)";
else if (text == "invertBits(")
assembly += "(i32.xor (i32.const -1)\n" +
convertToInteger32(children[0], context).indentBy(1) + "\n)";
else if (text == "and")
assembly += "(i32.and\n" +
convertToInteger32(children[0], context).indentBy(1) + "\n" +
convertToInteger32(children[1], context).indentBy(1) + "\n)";
else if (text == "or")
assembly += "(i32.or\n" +
convertToInteger32(children[0], context).indentBy(1) + "\n" +
convertToInteger32(children[1], context).indentBy(1) + "\n)";
else if (text.back() == '(' and
basicDataTypeSizes.count(
text.substr(0, text.size() - 1))) // The casting operator.
assembly +=
convertTo(children[0], text.substr(0, text.size() - 1), context);
else if (std::count_if(context.functions.begin(), context.functions.end(),
[=](function someFunction) {
return someFunction.name == text;
})) {
function functionToBeCalled = *find_if(
context.functions.begin(), context.functions.end(),
[=](function someFunction) { return someFunction.name == text; });
assembly += "(call $" + text.substr(0, text.size() - 1) + "\n";
for (unsigned int i = 0; i < children.size(); i++) {
if (i >= functionToBeCalled.argumentTypes.size()) {
std::cerr
<< "Line " << children[i].lineNumber << ", Column "
<< children[i].columnNumber
<< ", Compiler error: Too many arguments passed to the function \""
<< text << "\" (it expects "
<< functionToBeCalled.argumentTypes.size()
<< " arguments). Aborting the compilation (or else the compiler "
"will segfault)!"
<< std::endl;
exit(1);
}
assembly +=
convertTo(children[i], functionToBeCalled.argumentTypes[i], context)
.indentBy(1) +
"\n";
}
for (unsigned int i = children.size();
i < functionToBeCalled.defaultArgumentValues.size(); i++) {
if (!functionToBeCalled.defaultArgumentValues[i])
std::cerr
<< "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler warning: The argument #" << i + 1 << " (called \""
<< functionToBeCalled.argumentNames[i]
<< "\") of the function named \"" << text
<< "\" isn't being passed to that function, nor does it have some "
"default value. Your program will very likely crash because of "
"that!"
              << std::endl; // JavaScript doesn't even warn about such errors,
                            // while C++ refuses to compile such a program. I
                            // suppose I should take a middle ground here.
assembly +=
convertTo(TreeNode(std::to_string(
functionToBeCalled.defaultArgumentValues[i]),
lineNumber, columnNumber),
functionToBeCalled.argumentTypes[i], context)
.indentBy(1);
}
assembly += ")";
} else if (text == "ValueAt(") {
if (typeOfTheCurrentNode == "Character")
assembly +=
"(i32.load8_s\n" + children[0].compile(context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Integer16")
assembly +=
"(i32.load16_s\n" + children[0].compile(context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Integer32" or
std::regex_search(typeOfTheCurrentNode, std::regex("Pointer$")))
assembly +=
"(i32.load\n" + children[0].compile(context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Integer64")
assembly +=
"(i64.load\n" + children[0].compile(context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Decimal32")
assembly +=
"(f32.load\n" + children[0].compile(context).indentBy(1) + "\n)";
else if (typeOfTheCurrentNode == "Decimal64")
assembly +=
"(f64.load\n" + children[0].compile(context).indentBy(1) + "\n)";
else {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Internal compiler error: The compiler got into a "
"forbidden state while compiling \"ValueAt\", aborting!"
<< std::endl;
exit(1);
}
} else if (text == "AddressOf(")
return children[0].compileAPointer(context);
else {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: No rule to compile the token \"" << text
<< "\", quitting now!" << std::endl;
exit(1);
}
return AssemblyCode(assembly, returnType);
}
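// Illustrative example, not part of the compiler: a "While" node whose
// condition compiles to "(i32.const 1)" and whose "Loop" body compiles to
// "(nop)" would, per the "While" branch above, produce roughly:
//   (block
//    (loop
//     (br_if 1
//      (i32.eqz
//       (i32.const 1)
//      )
//     )
//     (nop)
//     (br 0)
//    )
//   )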
AssemblyCode TreeNode::compileAPointer(CompilationContext context) const {
if (text == "ValueAt(")
return children[0].compile(context);
if (context.localVariables.count(text) and text.back() != '[')
return AssemblyCode(
"(i32.sub\n\t(global.get $stack_pointer)\n\t(i32.const " +
std::to_string(context.localVariables[text]) + ") ;;" + text +
"\n)",
AssemblyCode::AssemblyType::i32);
if (context.localVariables.count(text) and text.back() == '[') {
if (children.empty()) {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: The array \""
<< text.substr(0, text.size() - 1)
<< "\" has no index in the AST. Aborting the compilation, or "
"else the compiler will segfault!"
<< std::endl;
exit(1);
}
return AssemblyCode(
"(i32.add\n\t(i32.sub\n\t\t(global.get "
"$stack_pointer)\n\t\t(i32.const " +
std::to_string(context.localVariables[text]) + ") ;;" + text +
"\n\t)\n\t(i32.mul\n\t\t(i32.const " +
std::to_string(basicDataTypeSizes.at(getType(context))) + ")\n" +
std::string(convertToInteger32(children[0], context).indentBy(2)) +
"\n\t)\n)",
AssemblyCode::AssemblyType::i32);
}
if (context.globalVariables.count(text) and text.back() != '[')
return AssemblyCode("(i32.const " +
std::to_string(context.globalVariables[text]) +
") ;;" + text,
AssemblyCode::AssemblyType::i32);
if (context.globalVariables.count(text) and text.back() == '[') {
if (children.empty()) {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: The array \""
<< text.substr(0, text.size() - 1)
<< "\" has no index in the AST. Aborting the compilation, or "
"else the compiler will segfault!"
<< std::endl;
exit(1);
}
return AssemblyCode(
"(i32.add\n\t(i32.const " +
std::to_string(context.globalVariables[text]) + ") ;;" + text +
"\n\t(i32.mul\n\t\t(i32.const " +
std::to_string(basicDataTypeSizes.at(getType(context))) + ")\n" +
std::string(convertToInteger32(children[0], context).indentBy(3)) +
"\n\t)\n)",
AssemblyCode::AssemblyType::i32);
}
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: Some part of the compiler attempted to get "
"the assembly of the pointer to \""
<< text
<< "\", which makes no sense. This could be an internal compiler "
"error, or there could be something semantically (though not "
"grammatically) very wrong with your program."
<< std::endl;
exit(1);
return AssemblyCode("()");
}
std::string getStrongerType(int lineNumber, int columnNumber,
std::string firstType, std::string secondType) {
  if (firstType == "Nothing" or secondType == "Nothing") {
    std::cerr << "Line " << lineNumber << ", Column " << columnNumber
              << ", Compiler error: Can't add, subtract, multiply or divide "
                 "with something of the type \"Nothing\"!"
              << std::endl;
    exit(1);
  }
if (std::regex_search(firstType, std::regex("Pointer$")) and
!std::regex_search(secondType, std::regex("Pointer$")))
return firstType;
if (std::regex_search(secondType, std::regex("Pointer$")) and
!std::regex_search(firstType, std::regex("Pointer$")))
return secondType;
  if (std::regex_search(firstType, std::regex("Pointer$")) and
      std::regex_search(secondType, std::regex("Pointer$"))) {
    std::cerr
        << "Line " << lineNumber << ", Column " << columnNumber
        << ", Compiler error: Can't add, multiply or divide two pointers!"
        << std::endl;
    exit(1);
  }
if (firstType == "Decimal64" or secondType == "Decimal64")
return "Decimal64";
if (firstType == "Decimal32" or secondType == "Decimal32")
return "Decimal32";
if (firstType == "Integer64" or secondType == "Integer64")
return "Integer64";
if (firstType == "Integer32" or secondType == "Integer32")
return "Integer32";
if (firstType == "Integer16" or secondType == "Integer16")
return "Integer16";
return firstType;
}
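// Illustrative examples, not part of the compiler, of the promotion rules
// implemented above:
//   getStrongerType(0, 0, "Integer16", "Integer32")        -> "Integer32"
//   getStrongerType(0, 0, "Integer64", "Decimal32")        -> "Decimal32"
//   getStrongerType(0, 0, "CharacterPointer", "Integer64") -> "CharacterPointer"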
std::string TreeNode::getType(CompilationContext context) const {
if (std::regex_match(text, std::regex("(^\\d+$)|(^0x(\\d|[a-f]|[A-F])+$)")))
return "Integer64";
if (std::regex_match(text, std::regex("^\\d+\\.\\d*$")))
return "Decimal64";
if (text == "AddressOf(") {
if (children.empty()) {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: \"AddressOf\" is without the argument!"
<< std::endl;
exit(1);
}
if (children.size() > 1) {
std::cerr
<< "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: Can't take the address of multiple variables!"
<< std::endl;
exit(1);
}
if (children[0].getType(context) == "Nothing") {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: \"AddressOf\" has an argument of type "
"\"Nothing\"!"
<< std::endl;
exit(1);
}
return children[0].getType(context) + "Pointer";
}
if (text == "ValueAt(") {
if (children.empty()) {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: \"ValueAt\" is without the argument!"
<< std::endl;
exit(1);
}
if (children.size() > 1) {
std::cerr
<< "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: Can't dereference multiple variables at once!"
<< std::endl;
exit(1);
}
if (std::regex_search(children[0].getType(context),
std::regex("Pointer$")) == false) {
std::cerr
<< "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: The argument to \"ValueAt\" is not a pointer!"
<< std::endl;
exit(1);
}
return children[0].getType(context).substr(
0, children[0].getType(context).size() - std::string("Pointer").size());
}
if (context.variableTypes.count(text))
return context.variableTypes[text];
if (text[0] == '"') {
    std::cerr << "Line " << lineNumber << ", Column " << columnNumber
              << ", Internal compiler error: The compiler attempted to "
                 "compile a pointer to the string "
              << text
              << " before the string itself has been compiled, aborting the "
                 "compilation!"
              << std::endl;
exit(1);
}
if (text == "and" or text == "or" or text == "<" or text == ">" or
text == "=" or text == "not(" or text == "invertBits(") {
if (children.empty()) {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: The operator \"" << text
<< "\" has no operands. Aborting the compilation (or else we "
"will segfault)!"
<< std::endl;
exit(1);
}
if (children.size() < 2 and text != "not(" and text != "invertBits(") {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: The binary operator \"" << text
<< "\" has less than two operands. Aborting the compilation "
"(or else we will segfault)!"
<< std::endl;
exit(1);
}
    return "Integer32"; // Because "if" and "br_if" in WebAssembly expect an
                        // "i32", so let's adapt to that.
}
if (text == "mod(") {
if (children.size() != 2) {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: \"mod(\" operator requires two integer "
"arguments!"
<< std::endl;
exit(1);
}
if (std::regex_search(children[0].getType(context),
std::regex("^Decimal")) or
std::regex_search(children[1].getType(context),
std::regex("^Decimal"))) {
      std::cerr << "Line " << lineNumber << ", Column " << columnNumber
                << ", Compiler error: Unfortunately, WebAssembly (unlike x86 "
                   "assembly) doesn't support computing the remainder of the "
                   "division of decimal numbers, so we can't support that "
                   "either outside of compile-time constants."
                << std::endl;
exit(1);
}
return getStrongerType(lineNumber, columnNumber,
children[0].getType(context),
children[1].getType(context));
}
if (text == "If" or text == "Then" or text == "Else" or text == "While" or
text == "Loop" or text == "Does" or
text == "Return") // Or else the compiler will claim those
// tokens are undeclared variables.
return "Nothing";
if (std::regex_match(text, std::regex("^(_|[a-z]|[A-Z])\\w*\\[?"))) {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: The variable name \"" << text
<< "\" is not declared!" << std::endl;
exit(1);
}
if (text == "+" or text == "*" or text == "/") {
if (children.size() != 2) {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: The binary operator \"" << text
<< "\" doesn't have exactly two operands. Aborting the "
"compilation (or else we will segfault)!"
<< std::endl;
exit(1);
}
return getStrongerType(lineNumber, columnNumber,
children[0].getType(context),
children[1].getType(context));
}
if (text == "-") {
if (children.size() != 2) {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: The binary operator \"" << text
<< "\" doesn't have exactly two operands. Aborting the "
"compilation (or else we will segfault)!"
<< std::endl;
exit(1);
}
if (std::regex_search(children[0].getType(context),
std::regex("Pointer$")) and
std::regex_search(children[1].getType(context), std::regex("Pointer$")))
return "Integer32"; // Difference between pointers is an integer of the
// same size as the pointers (32-bit).
return getStrongerType(lineNumber, columnNumber,
children[0].getType(context),
children[1].getType(context));
}
if (text == ":=") {
if (children.size() < 2) {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: The assignment operator \":=\" has less "
"than two operands. Aborting the compilation, or else the "
"compiler will segfault."
<< std::endl;
exit(1);
}
if (children[1].getType(context) == "Nothing") {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: Attempting to assign something of the "
"type \"Nothing\" to a variable. Aborting the compilation!"
<< std::endl;
exit(1);
}
return children[0].getType(context);
}
auto potentialFunction =
std::find_if(context.functions.begin(), context.functions.end(),
[=](function fn) { return fn.name == text; });
if (potentialFunction != context.functions.end())
return potentialFunction->returnType;
if (text.back() == '(' and
basicDataTypeSizes.count(text.substr(0, text.size() - 1))) // Casting
return text.substr(0, text.size() - 1);
if (text.back() == '(') {
std::cerr << "Line " << lineNumber << ", Column " << columnNumber
<< ", Compiler error: Function \"" << text
<< "\" is not declared!" << std::endl;
exit(1);
}
if (text == "?:")
return getStrongerType(lineNumber, columnNumber,
children[1].getType(context),
children[2].getType(context));
return "Nothing";
}
The rest of the code is available on my GitHub profile, it's about 4'000 lines long, and I don't think most of it is relevant here.
Answer: How to create an alias for an enum
I see you are declaring 5 constants, like i32, to avoid having to write out a whole enum name, like AssemblyCode::AssemblyType::i32. In C++20, you can make all enum value names directly accessible in a given scope by writing using enum AssemblyCode::AssemblyType. However, if you are stuck with C++11, then the next best thing is still to use using, but just to declare a short alias for the enum type:
using AT = AssemblyCode::AssemblyType;
And then you can use it like so:
if (originalCode.assemblyType == AT::null) {
...
}
Prefer switch when dealing with enums
Instead of having multiple if-statements when checking the value of an enum type variable, prefer using switch. Apart from resulting in slightly more concise code, the advantage is that the compiler will then check if you covered all the possible values of that enum, and if not it will print a warning. This makes it easy to catch mistakes. So:
switch (originalCode.assemblyType) {
case AT::null:
...;
break;
case AT::i32:
return originalCode;
case AT::i64:
...
case AT::f32:
...
case AT::f64:
...
};
How to handle impossible enum values
Technically, a variable of an enum class type should never have a value that's not one of those listed in the declaration of that enum class type. Some compilers will even assume that it can never happen (Clang), but others (GCC) will require you to write some code after the switch, else they will warn you that there is a possibility to reach the end of the non-void function without a proper return.
Instead of calling exit(-1) though, call abort(). -1 is not even a proper exit code, EXIT_FAILURE is (and it's usually positive 1). But exit() will also call a normal exit from the program, whereas abort() has the benefit of doing an abnormal exit, which can be used to trigger a core dump, or if it's already running inside a debugger, it tells the debugger that a bug happened right there where it was called.
There is no need to call return after a function like exit() or abort(), as the compiler knows that those functions will never return.
Don't use std::endl
Prefer using \n instead of std::endl. The latter is equivalent to the former, but also forces the output to be flushed, which is usually not necessary and will only hurt performance. Note that std::cerr is already unbuffered, so there is no need to flush anything when writing to it.
Possibilities to avoid repeating type names
If you have a function that has a well-defined return type, then in the return statement you don't have to repeat that type; the compiler already knows what it is you have to return. So instead of:
return AssemblyCode("()");
You can just write:
return "()";
And in case you needed to pass multiple arguments to the constructor of AssemblyCode, you can use brace notation:
return {"i32.wrap_i64\n" + std::string(originalCode.indentBy(1)) + "\n)", AT::i32};
Create a user defined literal for AssemblyCode
Since your assembly code is strings, consider adding a user defined literal for AssemblyCode. For example:
AssemblyCode operator"" _asm(const char *code, std::size_t) {
return code;
}
This way you can write:
return {"i32.wrap_i64\n"_asm + originalCode.indentBy(1) + "\n)", AT::i32};
Avoid stringly typed code
Your program already deals a lot with strings, but avoid using them for things where there is a more appropriate type. Consider the parameter type of convertTo(). It can only have a few possible values. Create an enum class for it, just like you did for AssemblyType. This makes the code more type safe, allows you to use switch which has benefits as mentioned above, and avoids your CPU having to do lots of string comparisons at run time.
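As an illustration, a hypothetical enum class for the conversion target could look like the sketch below; the names TargetType and wasmName are invented for the example, not taken from the reviewed code base.

```cpp
#include <cstdint>
#include <string>

// Hypothetical target-type enum replacing a string parameter (illustrative
// names only; the real code would enumerate whatever convertTo() accepts).
enum class TargetType : std::uint8_t { i32, i64, f32, f64 };

// With an enum class, the compiler can warn if a switch misses a case,
// and dispatching is an integer comparison instead of string comparisons.
std::string wasmName(TargetType t) {
    switch (t) {
    case TargetType::i32: return "i32";
    case TargetType::i64: return "i64";
    case TargetType::f32: return "f32";
    case TargetType::f64: return "f64";
    }
    return ""; // unreachable for valid enum values
}
```

A call site then reads `convertTo(code, TargetType::i32)` instead of passing a string that has to be compared at run time.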
Avoid regular expressions
“Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems.” — Jamie Zawinski
Regular expressions can be very powerful, but there are many string parsing problems where they are not the right tool. In particular, it's easy to make mistakes, and they can actually have bad performance. If you want to check if a string ends with some text, then in C++20 you would use ends_with():
if (... or type.ends_with("Pointer"))
If you need to support earlier versions of C++, it is quite easy to implement ends_with() yourself without needing regular expressions. Or if you would use an enum class for type, you wouldn't need to deal with strings to begin with.
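For reference, a pre-C++20 stand-in is only a few lines; this is a sketch whose name mirrors the C++20 member function for familiarity:

```cpp
#include <string>

// Pre-C++20 replacement for std::string::ends_with -- no regex needed.
bool ends_with(const std::string &s, const std::string &suffix) {
    return s.size() >= suffix.size() &&
           s.compare(s.size() - suffix.size(), suffix.size(), suffix) == 0;
}
```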
There are other regular expressions you are using, like the ones for checking if something is a literal integer or floating point value. However, I can already see they are wrong; they don't allow negative values and they don't allow scientific notation. Doing this properly quickly results in huge regular expressions, but there is a much simpler solution: use std::stoi() and std::stof() (or if you can use C++17, std::from_chars() is preferred). These functions try to convert a string to int or float, and you can check whether they succeeded.
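For example, with C++17 the integer check could be sketched as follows (isIntegerLiteral is an invented name, not part of the reviewed code):

```cpp
#include <charconv>
#include <string>

// True only if the whole string parses as an integer. Unlike the original
// regex, this accepts negative values and rejects out-of-range numbers.
bool isIntegerLiteral(const std::string &text) {
    int value = 0;
    auto result = std::from_chars(text.data(), text.data() + text.size(), value);
    // Success means no error code and every character was consumed.
    return result.ec == std::errc() && result.ptr == text.data() + text.size();
}
```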
Create more helper functions
There is quite a lot of code duplication that can be avoided by writing more helper functions. You repeated the code to check if something is a pointer by writing out the whole std::regex() call. This is a lot of typing, is hard to read, and if you ever change something in how you represent pointer types, you now have to change a lot of code. Consider writing something like:
bool is_pointer(const char *type) {
...
}
Note that if you move from a stringly typed type to an enum class, having such a helper function would make that transition a lot easier.
You could also create a function (or macro if you want to print line and function numbers and can't use std::source_location yet) for internal compiler errors, so you could write:
if (value != valid) {
ICE("The function foobar() was called with an invalid value");
}
And if you see that ICE() is always called like that within an if-statement, you could make an assert()-like function:
compiler_assert(value == valid, "The function foobar() was called with an invalid value"); | {
"domain": "codereview.stackexchange",
"id": 43431,
"tags": "c++, c++11, compiler, webassembly, aec"
} |
Write to a new, modified Excel file from an original one | Question: I noticed that I've a HUGE bottleneck in this following function of mine. I can't see how to make it faster.
These are the profiling test results (keep in mind that I'm using a PyQt GUI, so times can be stretched):
cProfile Results
def write_workbook_to_file(self, dstfilename):
self.populaterownumstodelete()
# actual column in the new file
col_write = 0
# actual row in the new file
row_write = 0
for row in (rows for rows in range(self.sheet.nrows) if rows not in self.row_nums_to_delete):
for col in (cols for cols in range(self.sheet.ncols) if cols not in self.col_indexes_to_delete):
self.wb_sheet.write(row_write, col_write, self.parseandgetcellvalue(row, col))
col_write += 1
row_write += 1
col_write = 0
I ran a cProfile profiling test and write_workbook_to_file() turned out to be the slowest function in my whole application. parseandgetcellvalue() isn't a problem at all.
Answer: To make it faster you could move this
(cols for cols in range(self.sheet.ncols) if cols not in self.col_indexes_to_delete)
out of the loop, and use enumerate instead of explicitly incrementing variables.
Revised code:
def write_workbook_to_file(self, dstfilename):
self.populaterownumstodelete()
rows = [row for row in xrange(self.sheet.nrows) if row not in self.row_nums_to_delete]
cols = [col for col in xrange(self.sheet.ncols) if col not in self.col_indexes_to_delete]
for row_write, row in enumerate(rows):
for col_write, col in enumerate(cols):
self.wb_sheet.write(row_write, col_write, self.parseandgetcellvalue(row, col))
In fact you could even move enumerate out of the loop, but I like how the code looks quite clean now. | {
"domain": "codereview.stackexchange",
"id": 11094,
"tags": "python, performance, python-2.x, excel"
} |
Global Planner Parameters in Navigation | Question:
Hello,
I was wondering what the parameters below are exactly doing in the global planner (Reference Tutorial). The description is not very clear to me. I tried different values for the parameters in my yaml file, but it didn't seem to be affecting the global planner:
~<name>/planner_window_x (double, default: 0.0)
Specifies the x size of an optional window to restrict the planner to. This can be useful for restricting NavFn to work in a small window of a large costmap.
~<name>/planner_window_y (double, default: 0.0)
Specifies the y size of an optional window to restrict the planner to. This can be useful for restricting NavFn to work in a small window of a large costmap.
and I was wondering if they are visualizable in RViz?
Here is my yaml file
NavfnROS:
allow_unknown: false
planner_window_x: 0.0
planner_window_y: 0.0
default_tolerance: 0.05
visualize_potential: true # false
planner_costmap_publish_frequency: 0.2
Thanks
Originally posted by ROSCMBOT on ROS Answers with karma: 651 on 2014-09-29
Post score: 0
Answer:
I am afraid those parameters are unused. If you download the source code of the navigation stack and run grep -R planner_window *, you will get this:
global_planner/src/planner_core.cpp: private_nh.param("planner_window_x", planner_window_x_, 0.0);
global_planner/src/planner_core.cpp: private_nh.param("planner_window_y", planner_window_y_, 0.0);
global_planner/include/global_planner/planner_core.h: double planner_window_x_, planner_window_y_, default_tolerance_;
navfn/src/navfn_ros.cpp: private_nh.param("planner_window_x", planner_window_x_, 0.0);
navfn/src/navfn_ros.cpp: private_nh.param("planner_window_y", planner_window_y_, 0.0);
navfn/include/navfn/navfn_ros.h: double planner_window_x_, planner_window_y_, default_tolerance_;
Which makes me think that they are declared and initialised, but never used. Maybe it is just part of the API and the current global planner implementation does not use it, or maybe it is just legacy code that will be removed at some point.
Might be a good idea to open an issue on github
Originally posted by Martin Peris with karma: 5625 on 2014-09-30
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Procópio on 2014-09-30:
am I wrong or actually those commands are just specifying the default parameters in case none are given?
Comment by David Lu on 2014-09-30:
Yeah, that looks to be the case. Please open up an issue. Was this a behavior you were looking to get, or just curious what the parameters did?
Comment by ROSCMBOT on 2014-09-30:
Thanks All. The parameters looked like to be performing a special case of the feature I am trying to implement (Explained Here). So yes I was looking to see how they behave.
Comment by Martin Peris on 2014-09-30:
@Procópio Silveira Stein, yes, those commands are just specifying the default parameters in case none are given, but after that they are not used anywhere else in the code. | {
"domain": "robotics.stackexchange",
"id": 19568,
"tags": "ros, navigation, planner, navfn"
} |
Prompt user for DateTime | Question: I'm working on a project where I have to prompt a user a enter several dates. This is the code I came up with for prompting the user for the day, month, and year.
Console.WriteLine("Day: ");
var dateDay = Console.ReadLine();
var dateDayInt = Convert.ToInt32(dateDay);
Console.WriteLine("Month: ");
var dateMonth = Console.ReadLine();
var dateMonthInt = Convert.ToInt32(dateMonth);
Console.WriteLine("Year: ");
var dateYear = Console.ReadLine();
var dateYearInt = Convert.ToInt32(dateYear);
DateTime myDate = new DateTime(dateYearInt, dateMonthInt, dateDayInt, 00, 00, 00, 000);
I'm using this date in a method, so it needs to be in this format.
As you can see, it's a ton of repetitive code. If I even add one or two more dates, my project will become very cluttered. Let's say I need to get 20 dates. Is there an easier way to do this, like only having one prompt per date?
Answer: Note that I'm not very familiar with C#, so the code that I write may not actually compile.
Let's say I need to get 20 dates. Is there an easier way to do this, like only having one prompt per date?
There's a much easier way to do this, and that is to extract this to its own separate method. This method should do just what your code is doing now: prompt the user for the day, the month, and the year, and should return a DateTime.
public static DateTime PromptDateTime()
{
Console.WriteLine("Day: ");
var day = Convert.ToInt32(Console.ReadLine());
Console.WriteLine("Month: ");
var month = Convert.ToInt32(Console.ReadLine());
... year...
return new DateTime(year, month, day, 0, 0, 0, 0);
}
Need to call this method twenty times? Go ahead (you'd probably want to use a loop though). With an extracted method like the one above, you won't have to worry about having cluttered code.
Validate Input
This is how you are getting input:
var dateDay = Console.ReadLine();
var dateDayInt = Convert.ToInt32(dateDay);
What if the user enters something like this?
LOOK MA! NO VALIDATION
What's your code going to do? It's going to throw an error because Convert.ToInt32 doesn't quite know how to parse that.
To validate the user's input, you should use int.TryParse. This method takes a string and an output variable and attempts to parse the string into a number and puts the parsed number in the output variable. If the parse is unsuccessful, then this method returns false.
What this method could look like in action:
var input = Console.ReadLine();
int num;
if(int.TryParse(input, out num))
{
    // Valid!
} else
{
    // Invalid!
}
Of course, this is a little much to put inside this method you have (considering the fact that you'll have to keep looping to get the user's proper input). Therefore, you should extract this into a method.
public static int ReadInteger() {
while(true) {
var input = Console.ReadLine();
int num;
if(int.TryParse(input, out num))
{
return num;
}
}
}
The above method keeps on looping until proper input is received. As you may notice, this is not very suitable for unit testing as STDIN is forced. However, as Dan Lyons commented:
Console.WriteLine utilizes Console.Out for everything - if the method took in a TextWriter, he could use Console.Out in "production", but a StringWriter in test.
Now, with proper validation, your extracted method would look like this:
public static DateTime PromptDateTime()
{
Console.WriteLine("Day: ");
var day = ReadInteger();
Console.WriteLine("Month: ");
var month = ReadInteger();
... year...
return new DateTime(year, month, day, 0, 0, 0, 0);
}
As svick mentioned, this is still technically not enough validation. What if, for the month, the user enters
13
That doesn't make much sense, does it?
This, of course, can lead to some complications. Some months have 30 days and others have 31. Validation in general should be mostly simple; you just check that the month number is between 1 and 12 (or 0 and 11, but be consistent), and that the year number is between 1 and the current year (unless you want to handle BC/BCE, in which case you'd have to do some extra checking).
However, this still doesn't solve the days-in-month problem. A solution could be to create a map/table showing the months and the corresponding amount of days and use that to look up whether the day number is valid. However, then you encounter February (yay!) with which you have to do further validation. Honestly, your best bet would be to go with Zack's solution as it is more idiomatic and would probably be easier to do parsing with (although, I don't know how it handles that days-in-month issue either). | {
"domain": "codereview.stackexchange",
"id": 18615,
"tags": "c#, datetime"
} |
What does all the formula and pictures mean? | Question: https://www.nature.com/articles/s41467-020-17419-7
I am a medical school graduate and I really want to learn AI/ML for computer-aided diagnosis.
I was building a symptom checker and I found this material. It clarifies the drawbacks of associative models for performing differential diagnosis, and it suggests a counterfactual (causal) approach to improve accuracy.
The thing is I couldn't understand what the formulas mean in the article, e.g.:
$$P(D \mid \mathcal{E}; \theta )$$
I really want to know what | and ; are doing here, what do they mean, etc.
I would be really happy if someone could directly answer or just provide me some references to get the general idea quickly.
Here comes the most tricky part...
Answer: According to the provided article,
$$
\begin{equation} \tag{1}
P(D| {\mathcal{E}};\ \theta )
\end{equation}
$$
is the probability of a disease $D$ given findings $\mathcal{E}$ and a model $\theta$ that is used to estimate this probability.
$D$ represents a disease or diseases, and findings $\mathcal{E}$ can
include symptoms, tests outcomes and relevant medical history.
The vertical bar | in the conditional probability notation is read "given that", whereas the semicolon symbol ; tells that we use a model (or parameters for the model) to calculate this probability. For instance, we can define this probability as $P(D| {\mathcal{E}};\ \theta ) = M_{\theta}(\mathcal{E})$, where $M$ can be a neural network with parameters $\theta$ that takes $\mathcal{E}$ as input and returns the probability of $D$.
As we read the article further, we see that Equation 1 is nothing more than the posterior probability from Bayes' theorem:
$$
\begin{equation} \tag{2}
P(D| {\mathcal{E}};\ \theta )=\frac{P({\mathcal{E}}| D;\ \theta )P(D;\ \theta )}{P({\mathcal{E}};\ \theta )}.
\end{equation}
$$
where $P({\mathcal{E}}| D;\ \theta )$ is the likelihood of the findings $\mathcal{E}$ given the disease $D$, $P(D;\ \theta )$ is the prior probability of the disease $D$, and $P({\mathcal{E}};\ \theta )$ is the marginal likelihood (evidence) of the findings $\mathcal{E}$.
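As a quick numerical illustration of Bayes' theorem in Equation (2), here is a sketch in C++; the numbers are invented for the example, not taken from the article.

```cpp
// Posterior P(D|E) from the prior P(D) and the likelihoods P(E|D), P(E|~D).
// The evidence P(E) is expanded by the law of total probability:
// P(E) = P(E|D)P(D) + P(E|~D)P(~D).
double posterior(double prior, double likelihoodD, double likelihoodNotD) {
    double evidence = likelihoodD * prior + likelihoodNotD * (1.0 - prior);
    return likelihoodD * prior / evidence;
}
```

With a hypothetical 1% prevalence and a finding seen in 90% of patients with the disease but only 5% of those without it, the posterior is 0.009/0.0585 ≈ 0.154: the finding raises the probability of the disease about fifteen-fold, yet it stays far below the 90% likelihood.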
As the article suggests, Theorem 2 is related to the Noisy-OR model, which itself is a large area of research. I encourage you to read the references provided in the article to learn more about approaches used by authors. If you have further questions regarding this theorem, I suggest you to open another question. | {
"domain": "ai.stackexchange",
"id": 2968,
"tags": "neural-networks, machine-learning, algorithm, bayesian-networks"
} |
Chromatic number of G+v where G is a cograph | Question: Cograph is a well-know graph that does not have induced $P_4$. My questions are about determining the chromatic number of graphs in the class cograph+v.
Notations:
Denote by $\chi(G)$ the chromatic number of a graph $G$.
Let $V(G)$ denote the vertices of $G$. For any vertex $v\in V(G)$, denote by $N(v)$ the neighbors of $v$ and $N[v]=N(v)+v$ the closed neighborhood of $v$. For $A\subseteq V(G)$, denote by $\langle A \rangle_{G}$ the subgraph of $G$ induced by $A$.
Let $G$ be an undirected simple graph, and $H=G+v$ be a graph such that $G=H-v$, i.e., $G$ is the graph obtained by deleting $v$ from $H$. This type of notation is introduced in the paper Parameterized complexity of vertex colouring. For a graph class $\mathcal{F}$, let $\mathcal{F}+kv$ denote the graphs which can be built by adding at most $k$ vertices to a graph in $\mathcal{F}$.
Question 1:
Is the following statement true?
$\chi(G+v)=\chi(G)+1$ if and only if $\chi(\langle N(v) \rangle)=\chi(G)$, where $G$ is a cograph.
The "if" direction is easy, because $\chi(G+v) \le \chi(G)+1 $ and $\chi(\langle N[v] \rangle) = \chi(\langle N(v) \rangle) + 1$.
For general simple graph $G$, the "only if" direction is not true. For example, let $G$ be a $P_4$ (path of 4 vertices) and $G+v$ be a $C_5$ (cycle of 5 vertices).
Question 2:
If the proposition in question 1 is false. Is it easy to determine $\chi(G+v)$, where $G$ is a cograph.
Update: Jim showed that cograph+v is a subset of the class of perfect graphs. He also actually proved a more general theorem.
For any perfect graph $G$, $\chi(G)=\chi(G-v)+1$ if and only if $\chi(\langle N(v)\rangle) = \chi(G-v)$.
Answer: I think Artem is on the right track with perfection:
As cographs are $P_4$-free, cograph+v is $C_5$-free (and $C_{2k+1}$-free and $\overline{C}_{2k+1}$-free, $k>1$) and so they are perfect graphs.
This means the only thing that is pushing the chromatic number up is clique size. So if $\chi(G+v) = \chi(G) + 1$, it is because v has increased the maximum clique size, which means that $N(v)$ must have had a clique of size $\chi(G)$ and so $\langle N(v)\rangle_G$ had the same chromatic number as $G$.
So it seems the statement in question 1 is true.
But even without it, since cograph+v can't have any large induced cycles, it is contained in the set of weakly chordal graphs. A graph $G$ is a weakly chordal graph if for every cycle of length at least 5 in $G$ and $\overline{G}$, there is a chord. Weakly chordal graphs are a subset of perfect graphs and there are various efficient algorithms for a number of optimization problems on weakly chordal graphs, such as colouring. | {
"domain": "cstheory.stackexchange",
"id": 2562,
"tags": "graph-theory, graph-colouring, perfect-graph, cograph"
} |
Why doesn't $v_{max} = \omega A$ work for this pendulum problem? | Question: I have a pendulum of length $L$ swinging with some angle $\theta$ and period $T$. I want to find the maximum velocity (which occurs at the bottom of the motion). I am using two methods that give me different answers.
First, conservation of energy:
$$ \frac{1}{2}mv_{max}^2 = mgL(1-\cos\theta)$$
Using $T = 2\pi \sqrt{L/g}$, I get:
$$ v_{max} = \sqrt{2gL(1-\cos\theta)} = \frac{T}{2\pi}g\sqrt{2(1-\cos\theta)}$$.
However, I also know that $v_{max} = \omega A$, where the amplitude $A$ is $L(1-\cos\theta)$, which gives
$$ v_{max} = \frac{T}{2 \pi}g(1-\cos\theta)$$
Why do these methods disagree? I believe the first answer is correct, but I am not sure why the second answer is incorrect. If $v_{max} = \omega A$ is the problem, under what circumstances does $v_{max} = \omega A$ break down?
Answer: The second equation is not correct. The amplitude refers to the motion along the arc, $A=L\theta$, or horizontally, $A=L\sin\theta$. These will all agree for small $\theta$, converging to $v_{max}=(T/2\pi)g\theta$. For large amplitudes the system is nonlinear and the motion is not simple harmonic motion (sinusoidal), so $v_{max}=\omega A$ does not apply. | {
"domain": "physics.stackexchange",
"id": 85749,
"tags": "newtonian-mechanics, classical-mechanics, energy-conservation, velocity"
} |
ros2 optionally launch node with no arguments | Question:
I have a node that takes zero or one argument. I want to use ros2 launch arguments to pass either zero or one argument to the launch file which will then get passed down to the node. I'm having a bit of trouble with the 'zero' argument case.
LaunchDescription([
launch.actions.DeclareLaunchArgument(
"my_param",
default_value=[""], # default_value=[], has the same problem
description="optional parameter"
),
launch_ros.actions.Node(
package="my_package",
node_executable="my_node",
arguments=[launch.substitutions.LaunchConfiguration("my_param")]
),
])
When I do this, I can run my launch file with my_param:=foo and foo will get passed down to my node. No problem
When I run my launch file without any arguments, my_node still behaves like it got a positional argument - a single empty string. This is not how I intended to launch the node.
What do I need to do to make my_node behave like it got no arguments when I don't pass any arguments to launch?
The current behavior makes sense - the LaunchConfiguration substitution is getting resolved to an empty string and is getting put into the node's arguments array. What I need is a way to substitute the whole 'arguments' list passed to the node instead of substituting a single element in the list, but the Node object really wants a list of substitutions
Originally posted by Pete B on ROS Answers with karma: 43 on 2019-02-06
Post score: 2
Answer:
There's no way to do this (conditionally include a substitution in a list) right now.
You could do something like this:
LaunchDescription([
launch.actions.DeclareLaunchArgument(
"my_param",
default_value=[""], # default_value=[], has the same problem
description="optional parameter"
),
launch_ros.actions.Node(
package="my_package",
node_executable="my_node",
arguments=[launch.substitutions.LaunchConfiguration("my_param")],
condition=launch.conditions.IfCondition(launch.substitutions.LaunchConfiguration("my_param")),
),
launch_ros.actions.Node(
package="my_package",
node_executable="my_node",
arguments=[],
condition=launch.conditions.UnlessCondition(launch.substitutions.LaunchConfiguration("my_param")),
),
])
Which is obviously silly, but it's the only way to express it at the moment.
As a long term solution, I think we either need conditional Substitutions, where maybe they return None if the condition is false, and code that processes lists of substitutions would ignore (just drop) any substitutions that return None, or some way to process lists of substitutions conditionally (not sure how that would work off hand).
Then you'd be able to do something like this:
LaunchDescription([
launch.actions.DeclareLaunchArgument(
"my_param",
default_value=[""], # default_value=[], has the same problem
description="optional parameter"
),
launch_ros.actions.Node(
package="my_package",
node_executable="my_node",
arguments=[launch.substitutions.LaunchConfiguration("my_param", condition=launch.conditions.IfCondition(launch.substitutions.LaunchConfiguration("my_param")))]
),
])
Or some short hand for self-evaluating the substitution, e.g.:
LaunchDescription([
launch.actions.DeclareLaunchArgument(
"my_param",
default_value=[""], # default_value=[], has the same problem
description="optional parameter"
),
launch_ros.actions.Node(
package="my_package",
node_executable="my_node",
arguments=[launch.substitutions.LaunchConfiguration("my_param", condition=launch.conditions.UnlessEmpty())]
),
])
Originally posted by William with karma: 17335 on 2019-02-06
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by mogumbo on 2019-07-16:
Has there been any progress on this? I think I'm having the same problem. I have
gzserver = launch.actions.ExecuteProcess(
cmd=['gzserver', '--verbose', '-s', 'libgazebo_ros_init.so',
LaunchConfiguration('command_arg1'),
LaunchConfiguration('world_name')],
output='screen'
)
LaunchConfiguration('command_arg1') sometimes evaluates to '' and my launch script fails with [gzserver-1] [Err] [Server.cc:382] Could not open file[]
Comment by newellr on 2019-07-26:
I'm interested in this too. I've opened an issue on ros2/launch. https://github.com/ros2/launch/issues/290 | {
"domain": "robotics.stackexchange",
"id": 32423,
"tags": "ros2, roslaunch"
} |
Why does aluminum foil "electrocute" dental fillings? | Question: This is probably one of the weirdest questions on here. When I was a kid, I put aluminum foil in my mouth (maybe it was still attached to some food or something, I don't remember.)
Anyway, it made my mouth feel like I'd licked a 9-volt battery. Being the natural born scientist, I repeated the procedure with similar results.
Why does this happen? I really think it has something to do with my dental fillings since the sensation occurred near them.
Answer: Ouch... this is actually a known phenomenon. Between the aluminum (Al) foil and the amalgam metal filling in your tooth, you created a little electrochemical (galvanic) cell. Your saliva served as the electrolyte, and the nerves around the filling detected the resulting current. This already "works" with the little splinters of Al foil that chocolate is wrapped in, though the effect is weaker than the 9 V battery you mention.
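For a rough sense of scale, one can bound the open-circuit voltage of such a cell using textbook standard reduction potentials; this sketch crudely treats the amalgam as a mercury electrode, and real saliva concentrations and electrode kinetics make the felt voltage much smaller.

```cpp
// Very rough upper bound for the Al/amalgam galvanic cell, using standard
// reduction potentials in volts: Al3+/Al = -1.66 V, Hg2+/Hg = +0.85 V.
// (Crude model: the amalgam is treated as a plain mercury electrode.)
double cellVoltageUpperBound() {
    const double eAl = -1.66; // anode: aluminum is oxidized
    const double eHg = +0.85; // cathode: reduction at the amalgam surface
    return eHg - eAl;         // about 2.5 V -- same order as a small battery
}
```

Even this crude bound shows why the effect is on the same order as licking a small battery rather than being imperceptible.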
(one reference) | {
"domain": "chemistry.stackexchange",
"id": 7886,
"tags": "everyday-chemistry, electrochemistry"
} |
Linear search algorithm template implementation | Question: This template class is a linear search algorithm with a simple search function. I have tested it with int and char and it seems to be working fine. I would like pointers on how I can make it more efficient and on my coding technique. I do know that the standard library provides this algorithm, i am just doing it for fun and practice
sLinear.h
//Linear search template
#ifndef SLINEAR_H
#define SLINEAR_H
namespace fm
{
template <typename T>
class sLinear
{
T *_list;
T _item;
int _sizeOfList;
public:
sLinear();
sLinear(T *myArray, T item, int size);
~sLinear();
int findItem();
};
//------public methods------
template <typename T>
sLinear<T>::sLinear()
{
_list = nullptr;
_sizeOfList = 0;
}
template <typename T>
sLinear<T>::sLinear(T *myArray, T item, int size)
{
_list = myArray;
_item = item;
_sizeOfList = size;
}
template <typename T>
sLinear<T>::~sLinear()
{
_list = nullptr;
_sizeOfList = 0;
}
template <typename T>
int sLinear<T>::findItem()
{
while (_list != nullptr)
{
for (int i = 0; i < _sizeOfList; i++)
{
if (_list[i] == _item)
{
return i + 1;
}
}
return -1;
}
}
}
#endif
main.h
#include<iostream>
#include "sLinear.h"
using namespace fm;
int main()
{
/* Linear search with integer */
int iList[] = { 1,9,2,6,5,3,7,4,8,0 };
int size = (sizeof(iList) / sizeof((iList[0])));
sLinear<int> mySearch(iList, 8, size);
int k = mySearch.findItem();
if (k == -1)
std::cout << "Item not found!" << std::endl;
else
std::cout << "Item found:" << k << std::endl;
return 0;
}
Answer:
Returning int is dubious; it unnecessarily narrows the range of possible return values. Consider returning an iterator.
The line if (_list[i] == _item) implies that _list must provide an operator[](int), which is very restrictive. A linear search is just like its name implies, linear. It is expected to work on any linear collection, e.g. forward iterator (maybe even on input iterator).
The class sLinear exposes one public method, and keeps no state. There's no reason to make it a class. A standalone
template <typename I>
I findItem(I first, I last, typename I::value_type value) {
....
}
will do as well.
while (_list != nullptr) is very strange. _list never changes; why do you want to loop?
PS: Could you explain a rationale for returning i+1? | {
"domain": "codereview.stackexchange",
"id": 25362,
"tags": "c++, algorithm"
} |
Defining finite sets inductively in a proof assistant? | Question: To represent finite sets within Coq, we either use something like ListSet, which are just definitions on top of list, or we build something like Compcert.Map, and then we define a set A as a map from A to ().
However, neither of these approaches manage to define sets inductively. What I want to know is a way to define a set type in the form of:
Inductive set (A: Type) : Type :=
nil: set A | add: A -> set A -> <fill in the blanks> -> set A
Is it possible to have such an "inductive definition" for finite sets? If not, can I be supplied with a proof of why not?
My intuition is that such a thing is not possible, because an inductive type allows for equational reasoning, while sets cannot be equationally reasoned with:
$$
\texttt{add}~(1, \emptyset) = \texttt{add}(1, \texttt{add}(1, \emptyset)) \quad \text{(union is idempotent.)}
$$
however:
add 1 (add 1 nil) <> add 1 nil
So we will always have to go through some "extensional interface". Unfortunately, I don't know how to prove such a thing!
Answer: There are many variants of finite sets in constructive mathematics. One that can be defined using just inductive definitions, and is therefore amenable to formalization in type theory, is the Noetherian finiteness by Thierry Coquand and Arnaud Spiwack. The idea is to define a set or a type $A$ to be finite if the following holds: every sequence $a : \mathbb{N} \to A$ contains a duplicate. The trick is to express an equivalent condition using inductive definitions, so that we get an induction principle for reasoning about such sets.
The definition of Noetherian finiteness from section 2.3 of the linked paper can be translated to Coq like this:
(* [occurs x l] states that x appears in the list l *)
Inductive occurs {A : Type} : A -> list A -> Type :=
| occurs_head : forall x k, occurs x (cons x k)
| occurs_tail : forall x y k, occurs x k -> occurs x (cons y k).
(* [has_duplicates l] states that [l] has a duplicate, i.e., that an element appears in it twice. *)
Inductive has_duplicates {A : Type} : list A -> Type :=
| has_duplicates_head : forall x l, occurs x l -> has_duplicates (cons x l)
| has_duplicates_tail : forall x l, has_duplicates l -> has_duplicates (cons x l).
(* An auxiliary definition: a list `l` is said to be `noetherian` if it contains a duplicate, or if every extension of `l` by one element is `noetherian`. *)
Inductive noetherian (A : Type) : list A -> Type :=
| N_duplicates : forall l, has_duplicates l -> noetherian A l
| N_step : forall l, (forall a, noetherian A (cons a l)) -> noetherian A l.
Definition NoetherianFinite A := noetherian A nil.
If you're willing to use quotient types or the higher inductive types from homotopy type theory, then you can have a look at the definition of finite sets Finite in the HoTT library. It says that a type X is finite if there is a number n such that X is merely equivalent to the standard finite set {0, 1, ..., n-1}. The word "merely" here means that we truncate the existence, i.e.,
$$\textstyle\mathsf{Finite}\, X \mathrel{{:}{=}}
\sum_{n : \mathbb{N}} \left\| X \simeq \mathsf{Fin}\,n\right\|$$
where $\mathsf{Fin}\,n$ is the standard finite set $\sum_{k : \mathbb{N}} (k < n)$. | {
"domain": "cstheory.stackexchange",
"id": 4995,
"tags": "type-theory, dependent-type, coq, formal-methods"
} |
Validity of ionic resonance structures | Question: What's wrong with ionic resonance structures? I asked one professor about them once and all he commented was "I've seen them too, and I think they're wrong."
Another professor rejected an ionic resonance structure showing the "H+" ion and asked me what phase the experiment was run in, in an attempt to figure out why the authors reported a resonance contributor with a H+ proton. He specifically asked if the experiment was run in the gas phase.
I think this professor lost sight of what resonance structures are, because I know that he emphasizes that bare protons do not exist in solution, and he was probably thinking about how there could be a bare proton shown in a resonance structure if the molecule is in solution. The resonance structure I showed him was this (which I could see him picturing in solution, given that it is nitrous acid).
However, resonance structures are not discrete forms of the molecule in question - they're just contributors and show that there is partial ionic character; that there is a highly electronegative atom that is withdrawing electron density.
So I ask:
1) Are ionic resonance structures valid?
2) If so, why might they be valid, or what counterarguments might you present to someone who argues they are invalid?
Answer: I can't imagine anyone arguing that ionic resonance structures are invalid. They often contribute significantly to the structure of a molecule and help explain its various physical aspects. The following ionic resonance structure contributes significantly to the description of the carbonyl compound. $\ce{NaCl}$ is not totally ionic and is described by a blend of both the ionic (major) and covalent (minor) resonance structures shown below.
Resonance structures with a positive hydrogen (proton) are important in explaining hyperconjugative effects. | {
"domain": "chemistry.stackexchange",
"id": 3243,
"tags": "resonance"
} |
Program transformations for numeric stability | Question: There's tons of research on program transformations for optimization. Is there any research on transformations that improve numeric stability? Examples of such transformations might include:
Transform $\log(\exp(a)+\exp(b))$ into $\max(a,b)+\log(\exp(a-\max(a,b))+\exp(b-\max(a,b)))$
Convert multiplication of an inverse matrix times a vector into the solution to a linear system solver.
Automatically perform multiplications of small numbers in the log domain.
All the tricks I'm aware of for better numeric stability like this are pretty standard and something that every "good" numeric programmer always does. Since the tricks are so standard and always applied, it makes sense that the compiler might be able to do them for us.
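For concreteness, the first transformation in the list can be written out directly (a minimal sketch; the function name is mine):

```python
import math

def logsumexp2(a, b):
    """Numerically stable log(exp(a) + exp(b)) via the max-shift trick."""
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

# The naive form math.log(math.exp(1000.0) + math.exp(999.0)) overflows,
# while the transformed form stays finite:
result = logsumexp2(1000.0, 999.0)
assert abs(result - (1000.0 + math.log(1.0 + math.exp(-1.0)))) < 1e-12
```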
Answer: There actually is some research on improving the numerical stability of floating point expressions, the Herbie project. Herbie is a tool to automatically improve the accuracy of floating point expressions. It's not quite comprehensive, but it will find a lot of accuracy improving transformations automatically.
Cheers,
Alex Sanchez-Stern | {
"domain": "cs.stackexchange",
"id": 5042,
"tags": "compilers, floating-point, numerical-algorithms"
} |
Showing jQuery UI tooltip on focus of text box | Question: Firstly, here's a link to the fiddle.
Basically my situation is that there is a requirement to show tooltips on textboxes on some forms being created for a website to allow employees to update their personal information (among other things).
The customer likes the idea of the tooltip being displayed to the right of the textbox when the user clicks into it rather than jQuery UI's default of when hovering over it. To that end, I created the following code that shows and hides the tooltip when the user clicks into/out of the textbox:
$(function () {
$('.tooltip').each(function () {
var $this,
id,
t;
$this = $(this);
id = this.id;
t = $('<span />', {
title: $this.attr('title')
}).appendTo($this.parent()).tooltip({
position: {
of: '#' + id,
my: "left+190 center",
at: "left center",
collision: "fit"
}
});
// remove the title from the real element.
$this.attr('title', '');
$('#' + id).focusin(function () {
t.tooltip('open');
}).focusout(function () {
t.tooltip('close');
});
});
});
My question is simple, is there a better way of doing this?
Note that I can rely on the inputs always having a unique Id as this is an ASP.Net application.
Answer: The demo is seen here
$('.tooltip').each(function () {
//title can be fetched directly from the element. I know no bugs for to
//require the use of `attr()`. This removes several function calls.
//No need to get the element by id since the current elements you are iterating
//over each are the same DOM elements you are trying to attach events to. The
//`this` in here is where you attach events.
//You are appending the content to the parent, landing the span next to input.
//Instead of doing that, you can use `insertAfter()` which inserts the object in
//context after the specified element, which is the current object in the
//`each()`. It also accepts a DOM element so no need to wrap it in jQuery.
var t = $('<span />', {
title: this.title
}).insertAfter(this).tooltip({
position: {
of: '#' + this.id,
my: "left+190 center",
at: "left center",
collision: "fit"
}
});
this.title = '';
//Now the only need to wrap the current object this is here, when we attach events.
//Since the event methods are merely shorthands of `on(eventName,fn)`, we use
//`on()` instead to lessen function calls. Also, it allows a map of events, which
//results in calling `on()` only once.
$(this).on({
focusin: function () {
t.tooltip('open');
},
focusout: function () {
t.tooltip('close');
}
});
}); | {
"domain": "codereview.stackexchange",
"id": 3837,
"tags": "javascript"
} |
Some confusions on Model selection using cross-validation approach | Question: https://stats.stackexchange.com/questions/11602/training-with-the-full-dataset-after-cross-validation explains the procedure and the importance of doing cross-validation to assess the performance of the method/classifier. I have a few concerns which I could not clearly understand from that answer. It would be immensely helpful if these were clarified.
Consider that I am using Matlab's fisheriris dataset. The variable meas contains 150 examples and 4 features. The variable species contains the labels. I have put the data and labels into a variable: Data = [meas species]. According to the procedure outlined above,
I have split the data set Data using cvpartition into 60/40, where 60% is the Xtrain subset and 40% is a separate Xtest subset.
Using Xtrain I perform k-fold cross-validation, and inside each fold I validate the model using the indices from Xtrain. This loop is used to tune the hyperparameters of the model. I never use Xtest in selecting the hyperparameters. Is my understanding correct?
Confusion 1) The answer in the link says
You build the final model by using cross-validation on the whole set to choose the hyper-parameters and then build the classifier on the whole dataset using the optimized hyper-parameters.
"use the full dataset to produce your final model as the more data you use the more likely it is to generalise well"
I am a bit confused about what dataset and whole set we are referring to, and about how building the final model by using cross-validation on the whole set with the selected hyper-parameters is different from building the classifier on the whole dataset using the hyper-parameters.
I wanted to verify if my understanding of this part is correct or not. Does this statement mean that, using the cross-validated hyper-parameters obtained from Xtrain, the classifier should be built by re-training on the Xtrain subset or on Data?
Should my final model be the one from Data?
Confusion 2) What is the role of the unseen Xtest data set? In papers, is the performance reported on Data or on the untouched Xtest?
Answer: I'm not exactly sure of cvpartition's routine, so I'll try to provide a more generalised answer.
"Whole set" v "full data set"
These are the same in this instance. Model hyperparameter tuning can be done by feeding the full / whole / complete data set into a cross validation process. Inside this cross validation, the data is split into k folds, where (k - 1) folds are used to build a model, and the remaining fold is used to test the model (since the remaining data is essentially unseen to the model). This is repeated k times; results can be averaged and standard deviation calculated.
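The k-fold split described above can be sketched in a few lines of plain code (illustrative only; cvpartition and similar library routines also shuffle, stratify, etc.):

```python
def kfold_indices(n_samples, k):
    """Yield (train_idx, test_idx) index pairs for k-fold cross-validation."""
    # Distribute samples as evenly as possible across the k folds.
    sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        test_idx = list(range(start, start + size))
        train_idx = list(range(0, start)) + list(range(start + size, n_samples))
        yield train_idx, test_idx
        start += size

# e.g. 90 training examples (60% of 150) split into 5 folds:
folds = list(kfold_indices(90, 5))
assert len(folds) == 5
assert all(len(train) + len(test) == 90 for train, test in folds)
# every example is held out exactly once across the folds:
assert sorted(i for _, test in folds for i in test) == list(range(90))
```

A model is fit on each train_idx and scored on the matching test_idx; the hyper-parameter setting with the best average score across the k folds wins.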
Seen v unseen data (a.k.a. modelling and hold out samples)
This is essentially a variation on the cross validation approach above, except the split is done only once: the full data is split into two subsets - Xtrain and Xtest in your case. In this approach, you would build a model using Xtrain only and test it on Xtest (there is no repetition).
So, what's the difference?
Both approaches attempt to do the same thing - somehow validate a model's performance and ability to generalise on unseen data. Cross validation is arguably the stronger technique, especially on smaller data sets.
Once you've successfully parameterised your model (using either approach), building your model on the whole data set is advisable since more data > less data. | {
"domain": "datascience.stackexchange",
"id": 3472,
"tags": "classification, cross-validation"
} |
Hardness of parameterized CLIQUE? | Question: Let $0\le p\le 1$ and consider the decision problem
CLIQUE$_p$
Input: integer $s$, graph $G$ with $t$ vertices and $\lceil p\binom{t}{2} \rceil$ edges
Question: does $G$ contain a clique on at least $s$ vertices?
An instance of CLIQUE$_p$ contains a proportion $p$ out of all possible edges. Clearly CLIQUE$_p$ is easy for some values of $p$. CLIQUE$_0$ contains only completely disconnected graphs, and CLIQUE$_1$ contains complete graphs. In either case, CLIQUE$_p$ can be decided in linear time. On the other hand, for values of $p$ close to $1/2$, CLIQUE$_p$ is NP-hard by a reduction from CLIQUE itself: essentially, it is enough to take the disjoint union with the Turán graph $T(t,s-1)$.
My question:
Is CLIQUE$_p$ either in PTIME or NP-complete for every value of $p$? Or are there values of $p$ for which CLIQUE$_p$ has intermediate complexity (if P ≠ NP)?
This question arose from a related question for hypergraphs, but it seems interesting in its own right.
Answer: I assume that the number $\left\lceil p \binom{t}{2} \right\rceil$ in the definition of the problem CLIQUEp is exactly equal to the number of edges in the graph, unlike gphilip’s comment to the question.
The problem CLIQUEp is NP-complete for any rational constant 0<p<1 by a reduction from the usual CLIQUE problem. (The assumption that p is rational is only required so that $\lceil pN \rceil$ can be computed from N in time polynomial in N.)
Let k ≥ 3 be an integer satisfying both $k^2 \ge 1/p$ and $(1-1/k)(1-2/k) > p$. Given a graph G with n vertices and m edges along with a threshold value s, the reduction works as follows.
If s < k, we solve the CLIQUE problem in $O(n^s)$ time. If there is a clique of size at least s, we produce a fixed yes-instance. Otherwise, we produce a fixed no-instance.
If n<s, we produce a fixed no-instance.
If n ≥ s ≥ k, we add to G a (k−1)-partite graph, each part consisting of n vertices, with exactly $\left\lceil p \binom{nk}{2} \right\rceil - m$ edges, and produce this graph.
Note that case 1 takes $O(n^{k-1})$ time, which is polynomial in n for every p. Case 3 is possible because if n ≥ s ≥ k, then $\left\lceil p \binom{nk}{2} \right\rceil - m$ is nonnegative and at most the number of edges in the complete (k−1)-partite graph $K_{n,\ldots,n}$, as shown in the following two claims.
Claim 1. $\left\lceil p \binom{nk}{2} \right\rceil - m \ge 0$.
Proof. Since $m \le \binom{n}{2}$, it suffices to prove $p \binom{nk}{2} \ge \binom{n}{2}$, or equivalently $pnk(nk-1) \ge n(n-1)$. Since $p \ge 1/k^2$, we have $pnk(nk-1) \ge n(n-1/k) \ge n(n-1)$. QED.
Claim 2. $\left\lceil p \binom{nk}{2} \right\rceil - m \lt n^2 \binom{k-1}{2}$. (Note that the right-hand side is the number of edges in the complete (k−1)-partite graph $K_{n,\ldots,n}$.)
Proof. Since $\lceil x \rceil \lt x+1$ and $m \ge 0$, it suffices to prove $p \binom{nk}{2} + 1 \le n^2 \binom{k-1}{2}$, or equivalently $n^2(k-1)(k-2) - pnk(nk-1) - 2 \ge 0$. Since $p < (1-1/k)(1-2/k)$, we have
$$n^2(k-1)(k-2) - pnk(nk-1) - 2$$
$$\ge n^2(k-1)(k-2) - n \left( n-\frac{1}{k} \right) (k-1)(k-2) - 2$$
$$= \frac{n}{k} (k-1)(k-2) - 2 \ge (k-1)(k-2) - 2 \ge 0.$$
QED.
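The two claims are easy to sanity-check numerically; here is a small script (my own, not from the answer) that verifies them exhaustively for one admissible pair (p, k), over a range of n and all m ≤ C(n, 2):

```python
from fractions import Fraction
from math import ceil, comb

def claims_hold(p, k, n, m):
    """Claim 1: ceil(p*C(nk,2)) - m >= 0; Claim 2: it is < n^2 * C(k-1,2)."""
    added = ceil(p * comb(n * k, 2)) - m
    return 0 <= added < n * n * comb(k - 1, 2)

# p = 1/3 is a rational constant; k = 5 satisfies both conditions on k:
p, k = Fraction(1, 3), 5
assert k * k >= 1 / p and (1 - Fraction(1, k)) * (1 - Fraction(2, k)) > p
# check all instances with n >= s >= k (so n >= k) and m <= C(n, 2):
assert all(claims_hold(p, k, n, m)
           for n in range(k, 30)
           for m in range(comb(n, 2) + 1))
```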
Edit: The reduction in Revision 1 had an error; it sometimes required a graph with negative number of edges (when p was small). This error is fixed now. | {
"domain": "cstheory.stackexchange",
"id": 158,
"tags": "cc.complexity-theory, np, parameterized-complexity, clique"
} |
Understanding Eigenface Face Recognition Algorithm Notations | Question: I have been reading the paper from Matthew A. Turk and Alex Pentland
https://www.cs.ucsb.edu/~mturk/Papers/mturk-CVPR91.pdf
I got stuck in this part. I have a couple of questions.
Assume my images are 256 by 256.
1- What are G1, G2, ..., Gm? Row vectors? Column vectors? (1*65536) or (65536*1)? Or are they 2D vectors (256*256)?
2- Then it says A = [ F1 F2 ... Fm ]. Now I do not get this notation. Again, what is A? A matrix of matrices? Or just a huge column/row vector?
Thanks
Answer: G1, G2, ..., Gm are column vectors of length 65536 (each 256-by-256 image flattened into a column). A = [F1 F2 ... Fm] is then a 65536-by-m matrix whose columns are the mean-subtracted image vectors Fi; the principal components (the eigenfaces) are obtained from the eigenvectors of $AA^T$. | {
"domain": "dsp.stackexchange",
"id": 3025,
"tags": "image-processing, computer-vision"
} |
Given Velocity formula and told to find acceleration, I need help please. | Question:
I tried to do this problem by taking the derivative and it did not get me the right answer. I also tried integrating $v\,dv = a\,ds$ and could not get the right answer. Please help.
Answer: The correct formula to use would be $a = dv/dt = dv/ds \cdot ds/dt = dv/ds \cdot v$. Integration is not the right way to go about using this. Since you are given a formula for $v(s)$, simply compute the derivative at $s=5m$, and multiply that derivative by the value of v itself at $s=5m$. | {
"domain": "physics.stackexchange",
"id": 11585,
"tags": "homework-and-exercises, velocity"
} |
Basic value comparisons | Question: I made a value comparison "programming language" similar to a previous one I made, except this one is based solely on value comparisons.
# Basic value comparison
from operator import *
# Main compare class
class Compare(object):
def __init__(self):
self.OpError = "OpError: Invalid operator or brace.\n"
self.IntError = "IntError: Invalid integers.\n"
self.StrError = "StrError: Invalid strings.\n"
self.LstError = "LstError: Invalid list.\n"
self.BoolError = "BoolError: Invalid boolean.\n"
self.st_brace = "["
self.end_brace = "]"
self.OPERATORS = {
">=": ge,
"<=": le,
"==": eq,
"!=": ne,
">": gt,
"<": lt}
# Compare two integers
def compareint(self, command):
if command[1] == self.st_brace and command[3] in self.OPERATORS and command[5] == self.end_brace:
try:
print self.OPERATORS[command[3]](
int(command[2]), int(command[4]))
except ValueError:
print self.IntError
if command[3] not in self.OPERATORS:
print self.OpError
# Compare two strings
def comparestr(self, command):
if command[1] == self.st_brace and command[3] in self.OPERATORS and command[5] == self.end_brace:
try:
print self.OPERATORS[command[3]](
eval(command[2]), eval(command[4]))
except SyntaxError:
print self.StrError
if command[3] not in self.OPERATORS:
print self.OpError
# Compare two lists
def comparelst(self, command):
if command[1] == self.st_brace and command[3] in self.OPERATORS and command[5] == self.end_brace:
try:
print self.OPERATORS[command[3]](
eval(command[2]), eval(command[4]))
except SyntaxError:
print self.LstError
if command[3] not in self.OPERATORS:
print self.OpError
# Compare two booleans
def comparebool(self, command):
if command[1] == self.st_brace and command[3] in self.OPERATORS and command[5] == self.end_brace:
try:
print self.OPERATORS[command[3]](
eval(command[2]), eval(command[4]))
except NameError:
print self.BoolError
if command[3] not in self.OPERATORS:
print self.OpError
# Dict containing commands
COMMANDS = {
"cmpbool": Compare().comparebool,
"cmplst": Compare().comparelst,
"cmpint": Compare().compareint,
"cmpstr": Compare().comparestr,
}
# Read the inputted commands
def read_command(prompt):
command = raw_input(prompt)
split_cmd = command.split(" ")
if split_cmd[0] in COMMANDS:
COMMANDS[split_cmd[0]](split_cmd)
if split_cmd[0] not in COMMANDS:
print "CmdError: Invalid command.\n"
# Run the program
if __name__ == "__main__":
while True: read_command(">}}>")
Here's how you use it.
>}}>cmpbool [ True == True ]
True
>}}>cmplst [ [""] == [""] ]
True
>}}>cmpint [ 2 == 2 ]
True
>}}>cmpstr [ "Hello" == "Hello" ]
True
What can be improved? How can I shorten the code? Any general improvements are welcome. Just don't change the syntax.
Answer: Removing duplicated code
The same code seems to be duplicated in various places. You can easily define a generic function that you'll be able to reuse in all your definitions.
Here's what I have done (not thoroughly tested):
def generic_compare(self, command, func, error):
if command[1] == self.st_brace and command[3] in self.OPERATORS and command[5] == self.end_brace:
try:
print self.OPERATORS[command[3]](
func(command[2]), func(command[4]))
except (ValueError, SyntaxError, NameError):
print error
if command[3] not in self.OPERATORS:
print self.OpError
# Compare two integers
def compareint(self, command):
self.generic_compare(command, int, self.IntError)
# Compare two strings
def comparestr(self, command):
self.generic_compare(command, eval, self.StrError)
# Compare two lists
def comparelst(self, command):
self.generic_compare(command, eval, self.LstError)
# Compare two booleans
def comparebool(self, command):
self.generic_compare(command, eval, self.BoolError)
You don't need a class
I suggest you have a look at this Python talk called "Stop Writing Classes".
In your situation, there is no need for a class at all:
Your code could be written:
from operator import *
OpError = "OpError: Invalid operator or brace.\n"
IntError = "IntError: Invalid integers.\n"
StrError = "StrError: Invalid strings.\n"
LstError = "LstError: Invalid list.\n"
BoolError = "BoolError: Invalid boolean.\n"
st_brace = "["
end_brace = "]"
OPERATORS = {
">=": ge,
"<=": le,
"==": eq,
"!=": ne,
">": gt,
"<": lt}
def generic_compare(command, func, error):
if command[1] == st_brace and command[3] in OPERATORS and command[5] == end_brace:
try:
print OPERATORS[command[3]](
func(command[2]), func(command[4]))
except (ValueError, SyntaxError, NameError):
print error
if command[3] not in OPERATORS:
print OpError
# Compare two integers
def compareint(command):
generic_compare(command, int, IntError)
# Compare two strings
def comparestr(command):
generic_compare(command, eval, StrError)
# Compare two lists
def comparelst(command):
generic_compare(command, eval, LstError)
# Compare two booleans
def comparebool(command):
generic_compare(command, eval, BoolError)
# Dict containing commands
COMMANDS = {
"cmpbool": comparebool,
"cmplst": comparelst,
"cmpint": compareint,
"cmpstr": comparestr,
}
# Read the inputted commands
def read_command(prompt):
command = raw_input(prompt)
split_cmd = command.split(" ")
if split_cmd[0] in COMMANDS:
COMMANDS[split_cmd[0]](split_cmd)
if split_cmd[0] not in COMMANDS:
print "CmdError: Invalid command.\n"
# Run the program
if __name__ == "__main__":
while True: read_command(">}}>")
Use else and the usual programming structures
Instead of writing :
if split_cmd[0] in COMMANDS:
COMMANDS[split_cmd[0]](split_cmd)
if split_cmd[0] not in COMMANDS:
print "CmdError: Invalid command.\n"
Write:
if split_cmd[0] in COMMANDS:
COMMANDS[split_cmd[0]](split_cmd)
else:
print "CmdError: Invalid command.\n"
Also, you could rewrite:
if command[1] == st_brace and command[3] in OPERATORS and command[5] == end_brace:
try:
print OPERATORS[command[3]](
func(command[2]), func(command[4]))
except (ValueError, SyntaxError, NameError):
print error
if command[3] not in OPERATORS:
print OpError
in a way that highlights that sometimes something will be printed, sometimes not.
Define "pure" functions
By defining your functions in such a way that they have no side-effects (printing results for instance), you make them easier to test. In your case, it's quite simple, you can just return instead of printing.
Separate the concerns
You should split the part handling user input/output from the part computing results. This goes hand in hand with the previous comment.
At this stage, after adding simple tests in the main:
from operator import *
OpError = "OpError: Invalid operator or brace.\n"
IntError = "IntError: Invalid integers.\n"
StrError = "StrError: Invalid strings.\n"
LstError = "LstError: Invalid list.\n"
BoolError = "BoolError: Invalid boolean.\n"
st_brace = "["
end_brace = "]"
OPERATORS = {
">=": ge,
"<=": le,
"==": eq,
"!=": ne,
">": gt,
"<": lt}
def generic_compare(command, func, error):
if command[3] in OPERATORS:
if command[1] == st_brace and command[5] == end_brace:
try:
return OPERATORS[command[3]](
func(command[2]), func(command[4]))
except (ValueError, SyntaxError, NameError):
return error
else:
return OpError
# Compare two integers
def compareint(command):
return generic_compare(command, int, IntError)
# Compare two strings
def comparestr(command):
return generic_compare(command, eval, StrError)
# Compare two lists
def comparelst(command):
return generic_compare(command, eval, LstError)
# Compare two booleans
def comparebool(command):
return generic_compare(command, eval, BoolError)
# Dict containing commands
COMMANDS = {
"cmpbool": comparebool,
"cmplst": comparelst,
"cmpint": compareint,
"cmpstr": comparestr,
}
def evaluate_command(command):
split_cmd = command.split(" ")
if split_cmd[0] in COMMANDS:
return COMMANDS[split_cmd[0]](split_cmd)
else:
return "CmdError: Invalid command.\n"
# Read the inputted commands
def read_command(prompt):
print(evaluate_command(raw_input(prompt)))
# Run the program
if __name__ == "__main__":
assert evaluate_command('cmpbool [ True == True ]')
assert not evaluate_command('cmpbool [ True == False ]')
assert evaluate_command('cmplst [ [""] == [""] ]')
assert not evaluate_command('cmplst [ [""] == ["a"] ]')
assert evaluate_command('cmpint [ 2 == 2 ]')
assert not evaluate_command('cmpint [ 2 == 3 ]')
assert evaluate_command('cmpstr [ "Hello" == "Hello" ]')
assert not evaluate_command('cmpstr [ "Hello" == "HelloW" ]')
#while True: read_command(">}}>")
Make things cleaner
It could be a good idea to feed the comparison functions only the information they need: the first element of the list is not relevant anymore at this stage.
You can just write:
def evaluate_command(command):
split_cmd = command.split(" ")
func, args = split_cmd[0], split_cmd[1:]
if func in COMMANDS:
return COMMANDS[func](args)
else:
return "CmdError: Invalid command.\n"
and then, you need to re-index generic_compare:
def generic_compare(command, func, error):
if command[2] in OPERATORS:
if command[0] == st_brace and command[4] == end_brace:
try:
return OPERATORS[command[2]](
func(command[1]), func(command[3]))
except (ValueError, SyntaxError, NameError):
return error
else:
return OpError
Don't import *
Import * is bad. You can just as easily write:
import operator
OPERATORS = {
">=": operator.ge,
"<=": operator.le,
"==": operator.eq,
"!=": operator.ne,
">": operator.gt,
"<": operator.lt}
Style
In Python, there is a style guide called PEP 8. You'll find various tools to check your code if you want: pep8, pylint, pychecker, pyflakes, etc.
Misc
Usually, using eval in any programming language is quite dangerous. What if I were to evaluate a massive chunk of code with unpredicted side-effects (removing files from my file system, for instance)?
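As an aside, if only literal values (numbers, strings, lists, booleans) ever need to be parsed, the standard library's ast.literal_eval is a safer stand-in for eval, since it parses literals without executing anything:

```python
import ast

def safe_value(text):
    """Parse a Python literal without evaluating arbitrary code."""
    try:
        return ast.literal_eval(text)
    except (ValueError, SyntaxError):
        return None  # not a plain literal

assert safe_value('[1, 2]') == [1, 2]
assert safe_value('"Hello"') == 'Hello'
assert safe_value("__import__('os')") is None  # rejected, never executed
```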
Also, you should perform a lot more checks before accessing random elements from split_cmd as it might be shorter than you expect. | {
"domain": "codereview.stackexchange",
"id": 9308,
"tags": "python, python-2.x, console, dsl"
} |
Does magnetic geometry determine the scaling of a Polywell fusor? | Question: Does magnetic geometry determine the scaling of a Polywell fusor?
Forgive imprecise terminology here - by "magnetic geometry" I mean the configuration of the magnets, the configuration that creates the "Christmas star" geometry of the electron field in the fusor.
On to my question... I've read that the rate of fusion scales at $r^5$, with $r$ being the radius of the reaction chamber. Is this because of the aforementioned magnetic geometry? Or is it because of something more essential, like Bremsstrahlung, or something else?
Answer: The scaling you refer to is derived (though not in great detail) in, for instance, papers [0], [1] and [2] below.
Since the polywell approach relies on a very particular kind of geometry for its proposed operation, it could be said that the magnetic geometry is determining the scaling, but that would be circumventing the question.
Fusion power scales as $P_f \propto B^4R^3$. For the polywell type devices $B\propto R$, and hence $P_f \propto R^7$. One main difference between the polywell approach and regular magnetic confinement devices, is that the polywell mainly aims to confine the electrons magnetically. The electrons then create a confining electrostatic potential to capture the ions. This means that much smaller magnetic fields are needed, and that the main loss of energy is assumed to be through electron losses, which scales as $P_l \propto R^2$, the area of the device. This gives a total fusion gain scaling of $G = P_f/P_l \propto R^5$.
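These power laws can be combined in a toy calculation (the exponents come from the text above; everything else is arbitrary normalization):

```python
def fusion_gain(R):
    """Toy scaling model: P_f ~ B^4 R^3 with B ~ R, losses P_l ~ R^2."""
    B = R                      # polywell assumption: field grows with radius
    P_fusion = B**4 * R**3     # net scaling R^7
    P_loss = R**2              # electron losses through the device surface
    return P_fusion / P_loss   # gain G ~ R^5

# Doubling the device radius should multiply the gain by 2^5 = 32:
assert fusion_gain(2.0) / fusion_gain(1.0) == 32.0
```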
As for the particular polywell design used, the most common so far is the cubic, but some papers have mentioned octahedral and dodecahedral configurations. It is believed that a more spherical configuration will yield better stability, as well as a larger "active" core region compared to the overall size of the device, but the scaling does not rely on the particular polyhedral geometry employed.
It should be stressed that this scaling relies on a lot of as yet unproven assumptions, based on very little (so far) empirical experience from operating this kind of device.
References:
[0]: Bussard, R. W., "Some Physics Considerations of Magnetic Inertial Electrostatic Confinement," Fusion Technology, vol. 19 (1991)
[1]: Bussard, R. W., "Inherent Characteristics of Fusion Power Systems,"
Fusion Technology, vol. 26 (1994)
[2]: Bussard, R. W., "The Advent of Clean Nuclear Fusion," 57th International Astronautical Congress (2006) | {
"domain": "physics.stackexchange",
"id": 12800,
"tags": "energy, radiation, fusion"
} |
joint compatibility branch and bound (JCBB) data association implementation | Question: I would like to implement the joint compatibility branch and bound technique in this link as a method to carry out data association. I've read the paper but am still confused about the function $f_{H_{i}} (x,y)$; I don't know exactly what they are trying to do. They compared their approach with the individual compatibility nearest neighbor (ICNN) method. In ICNN we have the function $f_{ij_{i}} (x,y)$, which is simply the inverse measurement function, or what their paper calls the implicit measurement function. With a laser sensor, given the observations in polar coordinates, the inverse measurement function gives us their Cartesian coordinates. In ICNN everything is clear because we have this function $f_{ij_{i}} (x,y)$, so it is easy to acquire the Jacobian $H_{ij_{i}}$, which is
$$
H_{ij_{i}} = \frac{\partial f_{ij_{i}}}{\partial \textbf{x}}
$$
For example in 2D case and 2D laser sensor, $\textbf{x} = [x \ y \ \theta]$ and the inverse measurement function is
$$
m_{x} = x + r\cos(\phi + \theta) \\
m_{y} = y + r\sin(\phi + \theta)
$$
where $m_{x}$ and $m_{y}$ are the location of a landmark and
$$
r = \sqrt{ (m_{x}-x)^{2} + (m_{y}-y)^{2}} \\
\phi = \operatorname{atan2}\left( m_{y}-y,\; m_{x}-x \right) - \theta
$$
Using jacobian() in Matlab, we can get $H_{ij_{i}}$. Any suggestions?
Answer: Suppose you have three measurements (1, 2, and 3) and four landmarks (a, b, c, d). The joint compatibility is a measure of how well a subset of the measurements associates with a subset of the landmarks.
For example, what is the joint compatibility of (1b, 2d, 3c)? First we construct the implicit measurement functions $f_{ij_i}$ for each correspondence ($f_{1b_1}$, $f_{2d_2}$, $f_{3c_3}$). The joint implicit function $f_{\mathcal{H}_i}$ is simply the vector of the implicit measurement functions; i.e.,
$$
f_{\mathcal{H}_i} := \begin{bmatrix} f_{1b_1} \\ f_{2d_2} \\ f_{3c_3}\end{bmatrix}
$$
This function is linearized in (5) in the linked paper (this requires the Jacobians of $f_{\mathcal{H}_i}$ with respect to the state and measurement). Equation (9) calculates the covariance of $f_{\mathcal{H}_i}$ and (10) uses this covariance, along with the expected value of $f_{\mathcal{H}_i}$ (that is, $h_{\mathcal{H}_i}$), to calculate the joint compatibility (or more specifically, the joint Mahalanobis distance) of this particular set of associations. The Mahalanobis distance forms a chi-square distribution, and the confidence of the association is checked against a threshold (which is dependent on the dimension of the distribution; in this case, it is three).
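As a rough sketch of this per-hypothesis check: assume for simplicity that the joint innovation covariance is diagonal (independent x/y errors per pairing); in the real algorithm it is the dense matrix from Eq. (9), and the function names here are my own.

```python
# Simplified joint compatibility test: diagonal joint covariance is an
# assumption for illustration only; see Eq. (9) of the paper for the
# full cross-correlated version.

# 95% chi-square critical values, indexed by degrees of freedom.
CHI2_95 = {2: 5.991, 4: 9.488, 6: 12.592}

def jointly_compatible(innovations, variances):
    """innovations: list of (dx, dy) residuals z - h(x), one per pairing.
    variances: matching list of (var_x, var_y) per pairing."""
    d2 = 0.0
    for (dx, dy), (vx, vy) in zip(innovations, variances):
        d2 += dx * dx / vx + dy * dy / vy  # joint Mahalanobis distance
    dof = 2 * len(innovations)             # 2-D measurements
    return d2 <= CHI2_95[dof]

# A hypothesis like (1b, 2d, 3c) with small residuals is accepted...
print(jointly_compatible([(0.1, -0.2), (0.0, 0.1), (0.2, 0.1)],
                         [(0.05, 0.05)] * 3))  # -> True
# ...while one spurious pairing with a large residual rejects the set.
print(jointly_compatible([(0.1, -0.2), (0.0, 0.1), (1.5, 1.5)],
                         [(0.05, 0.05)] * 3))  # -> False
```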
What I described above is how you check a single set of associations. The real trick is how to check "all" (you don't usually need to check all of them) of the possible associations and pick the one that (a) has the maximum likelihood, AND (b) maximizes the number of associations. The reason why you want to maximize the number of associations is because (from the paper):
"The probability that a spurious pairing is jointly compatible with all the pairings of a given hypothesis decreases as the number of pairings in the hypothesis increases."
The "branch and bound" part of JCBB is how you efficiently traverse through the search space to find the best set of associations. | {
"domain": "robotics.stackexchange",
"id": 590,
"tags": "slam, ekf, mapping, data-association"
} |
how to extract meaning from signal (distance estimation)? | Question: I have 3 time series representing detected RSSI signal power from 3 emitting devices. The devices are at 1 meter distance from the receiver, and plotting the 3 time series gives the following results:
My question is : Is there a way, a general mechanism, a filter, or an algorithm for processing these signal, so that I can find some common regularities between them to which I can assign the distance 1 meter, and predict that distance for different time-series?
Answer: In general, you can apply a filter with some time constant T which will give you a more reliable RSSI value, but even then the RSSI will only be an accurate indication of distance under certain assumptions.
There are two types of filters you could use here, a moving average low pass filter which would effectively produce a smoothed out version of the RSSI waveform or a non-linear modal-mean or median filter. An example of an easily implemented moving average Finite Impulse Response (FIR) filter would be h=ones(N)./N where N is the order of the filter that is directly proportional to the Group Delay (or time constant) of the filter (for instance a [0.25,0.25,0.25,0.25] N=4 filter). Applied over a sliding window of N samples, this filter would return an average, slow changing, estimate of the RSSI curve which you could then associate with distance. Similarly, the non-linear filter is applied over a sliding window of size N but in this case you just estimate the modal mean (i.e. the most frequently occurring RSSI value of the N available in the sample) or the median value. The median filter would tend to remove the spikes and could possibly be a better alternative for this application.
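A minimal pure-Python sketch of the two smoothing options just described, applied over a sliding window of N samples (the sample data is made up):

```python
# Moving-average (FIR, h = ones(N)/N) vs. median smoothing of an RSSI
# trace, each evaluated causally once N samples are available.
from statistics import median

def moving_average(rssi, N):
    return [sum(rssi[i - N + 1:i + 1]) / N for i in range(N - 1, len(rssi))]

def median_filter(rssi, N):
    # Non-linear filter: better at rejecting isolated spikes
    return [median(rssi[i - N + 1:i + 1]) for i in range(N - 1, len(rssi))]

rssi = [-60, -62, -59, -90, -61, -60]   # one spurious spike at -90 dBm
print(moving_average(rssi, 4))  # spike smeared across the whole window
print(median_filter(rssi, 4))   # spike largely suppressed
```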
The relationship between RSSI and distance would be established through the Free Space Path Loss but the main assumptions here would be that a) there is a direct line-of-sight between transmitter and receiver and b) the RSSI is accurately estimated by the device. In other words, if you start moving the device in space, it is not necessary that the distance estimation will remain accurate. You might also want to check this link as well.
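To make the RSSI-to-distance step concrete, here is a sketch of the standard log-distance path-loss inversion; the reference value `rssi_at_1m` and the path-loss exponent `n` are assumptions that must be calibrated per device and environment (`n = 2` is the ideal free-space value):

```python
# Invert RSSI(d) = RSSI(1 m) - 10 n log10(d) for the distance d (metres).
# rssi_at_1m = -60 dBm and n = 2 are illustrative assumptions only.

def estimate_distance(rssi, rssi_at_1m=-60.0, n=2.0):
    return 10 ** ((rssi_at_1m - rssi) / (10.0 * n))

print(estimate_distance(-60.0))  # -> 1.0 (at the calibration distance)
print(estimate_distance(-72.0))  # roughly 4 m in ideal free space
```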
Hope this helps. | {
"domain": "dsp.stackexchange",
"id": 2788,
"tags": "filters, algorithms, estimation"
} |
Searching for a solvate that changes its solvation property | Question: I'm looking for a solvate that changes its solvation property when being dried.
One example of such a material is gypsum:
At temperatures up to 250 °C it becomes γ-anhydrite, which will slowly hydrate to a hemihydrate.
At temperatures above 250 °C it becomes β-anhydrite, which will not hydrate again.
However, the temperature range is too high to use it as an example in my thesis.
Answer: http://nvlpubs.nist.gov/nistpubs/jres/086/jresv86n2p181_A1b.pdf
http://www.phasediagram.dk/binary/magnesium_sulfate.htm
Sodium sulfate hydrates,
http://iopscience.iop.org/0022-3727/41/21/212002/fulltext/
http://www.springerimages.com/Images/Chemistry/1-10.1007_s00269-008-0256-0-0
DOI:10.1209/0295-5075/102/28003
http://ej.iop.org/images/0295-5075/102/2/28003/Full/epl15377f5_online.jpg
http://www.phys.tue.nl/nfcmr/PhD-Saidov-2012.pdf
====
http://patentimages.storage.googleapis.com/US7638109B2/US07638109-20091229-D00001.png
http://www.genchem.com/properties.asp
sodium carbonate | {
"domain": "chemistry.stackexchange",
"id": 1144,
"tags": "solubility, solvents"
} |
Hearing and neurons - do ears have a sampling period? | Question: From what I have read, outer hair cells in the human ear amplify incoming signals and inner hair cells "pick up" the signals and generate action potentials. However, neurons have refractory periods during which they cannot fire again. Does this mean that the human ear has a "sampling period" within which it cannot "pick up" sounds?
Answer: Inner hair cells (IHC) do not fire action potentials themselves. It's the auditory nerve that synapses with the IHC that generates action potentials. The firing rate of the auditory nerve can be as high as a few hundred Hz, with a refractory period as short as 1 ms or so (depending on the animal).
However, it is important to note that the signal is not sampled at this rate. As you may know, the auditory signal is first transformed to the "frequency" domain through the physical structure of cochlea. Each inner hair cell therefore is roughly only encoding the relative strength of a frequency band. There are presumably inner hair cells in the human cochlea that correspond to a range centered around 18 kHz for example, but neither the neurotransmitters of the corresponding IHC nor the auditory nerve can fire at 18 kHz. Nevertheless, the amplitude modulation at this high frequency is what is transmitted.
Also, thinking of neural firing as a "sampling period" is not always a good analogy. There are debates about this, but it could be that the precise timing of action potentials carries a large amount of information about the stimulus (perhaps not so much in the early auditory system).
If you want to see some computational modeling work for inner ear, IHC, and auditory nerve, I recommend the Meddis IHC model:
C. J. Sumner, E. A. Lopez-Poveda, L. P. O'Mard, R. Meddis. A revised model of the inner-hair cell and auditory-nerve complex, J. Acoust. Soc. Am. 111 (5) 2002 | {
"domain": "biology.stackexchange",
"id": 1148,
"tags": "neuroscience, human-ear"
} |
Converting a binary string to hexadecimal using Rust | Question: Once again, I am reinventing the wheel here to understand the fundamentals of Rust. In a previous question I requested feedback on my function that performed a hexadecimal to binary conversion. In this snippet, I am seeking some constructive feedback on converting from binary to hexadecimal.
fn main() {
let hexadecimal_value = convert_to_hex_from_binary("10111");
println!("Converted: {}", hexadecimal_value);
}
fn convert_to_hex_from_binary(binary: &str) -> String {
let padding_count = 4 - binary.len() % 4;
let padded_binary = if padding_count > 0 {
["0".repeat(padding_count), binary.to_string()].concat()
} else {
binary.to_string()
};
let mut counter = 0;
let mut hex_string = String::new();
while counter < padded_binary.len() {
let converted = to_hex(&padded_binary[counter..counter + 4]);
hex_string.push_str(converted);
counter += 4;
}
hex_string
}
fn to_hex(b: &str) -> &str {
match b {
"0000" => "0",
"0001" => "1",
"0010" => "2",
"0011" => "3",
"0100" => "4",
"0101" => "5",
"0110" => "6",
"0111" => "7",
"1000" => "8",
"1001" => "9",
"1010" => "A",
"1011" => "B",
"1100" => "C",
"1101" => "D",
"1110" => "E",
"1111" => "F",
_ => "",
}
}
Answer: A couple of things:
Any time you have a function which could fail, it should return an Option<T>. Ask yourself, if someone calls convert_to_hex_from_binary("foobar") and gets back "", is that reasonable? They will need to manually check that their input makes sense, or that the output makes sense every time. Static checking of these errors is part of the joy of Rust.
With that it mind, change to_hex like so:
fn to_hex(b: &str) -> Option<&str> {
match b {
"0000" => Some("0"),
"0001" => Some("1"),
"0010" => Some("2"),
"0011" => Some("3"),
"0100" => Some("4"),
"0101" => Some("5"),
"0110" => Some("6"),
"0111" => Some("7"),
"1000" => Some("8"),
"1001" => Some("9"),
"1010" => Some("A"),
"1011" => Some("B"),
"1100" => Some("C"),
"1101" => Some("D"),
"1110" => Some("E"),
"1111" => Some("F"),
_ => None,
}
}
You are iterating over a known range of values to get counter. Instead of using a while loop, you can use a range. Since you're now returning None from to_hex you can now shortcut returning if something is wrong with None.
Your inner loop now looks like this:
let mut hex_string = String::new();
for counter in (0..padded_binary.len()).step_by(4) {
match to_hex(&padded_binary[counter..counter + 4])
{
Some(converted) => hex_string.push_str(converted),
None => return None
};
}
Your function signature and return type also need to match Option<String>:
fn convert_to_hex_from_binary(binary: &str) -> Option<String> {
...
Some(hex_string)
}
Your padded_binary definition can be simplified, since repeating "0" zero times is exactly the same as not repeating it.
The definition is simply:
let padded_binary = ["0".repeat(padding_count), binary.to_string()].concat();
A couple of non Rust specific things:
Consider writing some tests.
String representations of binary numbers often begin with '0b'. For example 0b1101 == 13. You might want to consider checking for this prefix on the input string and trimming it.
You may have already thought about this, but consider if you want to trim whitespace or leave it to the function caller. | {
"domain": "codereview.stackexchange",
"id": 36003,
"tags": "beginner, reinventing-the-wheel, rust, converting, number-systems"
} |
How to derive $|0\rangle=\frac{1}{\sqrt{2}}(|+\rangle+|-\rangle)$? | Question: When learning measurement basis, my teacher told us $|0\rangle=\frac{1}{\sqrt{2}}(|+\rangle+|-\rangle)$ and said that we can derive it ourselves. Along this, he also mentioned $|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$.
I understand that when we visualize those vectors on a bloch sphere, $|0\rangle$ lies in between $|+\rangle$ and $|-\rangle$, and if we normalize the coefficient, we would get $\frac{1}{\sqrt{2}}$. However, I'm confused how we know that the phase is + ($|+\rangle+|-\rangle$) instead of -? Is this just a definition for $|0\rangle$ or is it backed by a deeper reason?
Answer: You can do it via substitution, or via expansion into vectors and comparison. However, for this and other expansions, I find the use of the identity operator, which is diagonal in every basis, the most informative:
$$|0\rangle=I|0\rangle=(|+\rangle\langle+|+|-\rangle\langle-|)|0\rangle$$
$$=\langle+|0\rangle|+\rangle+\langle-|0\rangle|-\rangle=\frac{1}{\sqrt{2}}|+\rangle + \frac{1}{\sqrt{2}}|-\rangle$$ | {
"domain": "quantumcomputing.stackexchange",
"id": 2986,
"tags": "quantum-state, textbook-and-exercises, superposition"
} |
calculate torsion in a different sized shaft | Question: First off, I know this seems like a homework assignment, but it is not; these are calculations I need to do at work, and it has been a long time since I studied this material. I need help to calculate whether a given clutch can handle a given torque, but I'm not sure if my calculations are right.
What I did so far:
On the left there is a motor; on the right is the clutch with a connector.
I've built an FBD (free body diagram):
and ended up with this eq:
$$\tau_{max} = \frac{T r_1}{J} = \frac{2T}{\pi r_1^3} = \frac{2 \times 2.5}{\pi \times (6.05\times 10^{-3})^3} = 7.2\ [\mathrm{MPa}]$$
but I don't know if this is the right way to calculate it, and I don't know how to check whether the point connecting the two diameters is strong enough.
Answer: Your calculation is generally correct.
The basic formula is:
$$\tau_{max} = \frac{T r_1}{J}$$
Although the torque is constant ($2.5\ [Nm]$), what changes are the radii and the polar second moment of area $J$.
For solid circular sections
$$J_{solid} = \frac{\pi r^4}{2}$$
For hollow circular sections
$$J_{hollow} = \frac{\pi \left(r_o^4 - r_i^4\right)}{2}$$
where:
$r_o$: the external radius
$r_i$: the internal radius
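A quick numerical check of the maximum shear stress from the question, using the polar moment of a solid circular shaft, $J = \pi r^4/2$ (this is the form consistent with the $\tau = 2T/(\pi r^3)$ expression and the 7.2 MPa result in the question):

```python
# Maximum shear stress in a circular shaft under torsion, tau = T r / J.
import math

def tau_max_solid(T, r):
    J = math.pi * r**4 / 2.0           # polar second moment, solid shaft
    return T * r / J                   # equivalent to 2 T / (pi r^3)

def tau_max_hollow(T, r_o, r_i):
    J = math.pi * (r_o**4 - r_i**4) / 2.0
    return T * r_o / J

T = 2.5          # Nm, torque from the question
r = 6.05e-3      # m, shaft radius from the question
print(tau_max_solid(T, r) / 1e6)  # -> about 7.2 (MPa)
```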
If you are worried about increases in stress due to the change in diameter, you can look at stress concentration factors. | {
"domain": "engineering.stackexchange",
"id": 4063,
"tags": "mechanical-engineering, motors, torque"
} |
MailQueue implementation with auto start - stop | Question: My previous question covered a small portion of the MailQueue.
I have now finished the MailQueue, which has the ability to start and stop itself. I also implemented some additional sender threads for when the load becomes greater, because at some points we send over 15000 mails in a short time. The previous MailQueue needed 2 hours to empty the queue after the mail was added.
public enum MailQueue implements Runnable {
INSTANCE;
private JavaMailSender sender;
private boolean run = false;
private final ConcurrentLinkedQueue<MimeMessage> mailsToSend = new ConcurrentLinkedQueue<MimeMessage>();
private final ConcurrentLinkedQueue<MimeMessage> errorRun = new ConcurrentLinkedQueue<MimeMessage>();
private final Map<MimeMessage, MailException> mailsWithErrors = new ConcurrentHashMap<MimeMessage, MailException>();
private static final Logger LOGGER = LoggerFactory.getLogger(MailQueue.class);
private static final int WAIT_FAILURE_TIME = 120000;
private static final int MAX_THREADS_SEND_MAIL = 4;
private static final int MAX_ELEMENTS_BEFORE_NEW_THREAD = 25;
private static final AtomicInteger CURRENT_THREADS_SEND_MAIL = new AtomicInteger(0);
@Override
public void run() {
run = true;
CURRENT_THREADS_SEND_MAIL.getAndIncrement();
while (run) {
while (mailsToSend.peek() != null) {
int currentThreads = CURRENT_THREADS_SEND_MAIL.get();
if (currentThreads < MAX_THREADS_SEND_MAIL && mailsToSend.size() > (MAX_ELEMENTS_BEFORE_NEW_THREAD * currentThreads)) {
new Thread(this).start();
}
MimeMessage message = mailsToSend.remove();
sendMessage(message);
}
}
if (CURRENT_THREADS_SEND_MAIL.decrementAndGet() < 1) {
getErrorThread().start();
}
run = false;
}
/**
* Adding a mail to the Queue.
* When Queue is not started, it will start.
* @param message to send.
* @return true is mail is successfully added to the Queue
*/
public boolean addMail(MimeMessage message) {
boolean result = mailsToSend.add(message);
if (!run) {
new Thread(this).start();
}
return result;
}
/**
* Adding a mail to the Queue.
* When Queue is not started, it will start.
* @param messages to send.
* @return true is mail is successfully added to the Queue
*/
public boolean addMails(Set<MimeMessage> messages) {
boolean result = mailsToSend.addAll(messages);
if (!run) {
new Thread(this).start();
}
return result;
}
/**
* Removes a specific mail from the error list.
* @param message to remove
* @throws MessagingException When there is a fault with getting recipients for logging.
* Mail is NOT removed when this error comes up.
*/
public void removeMailFromError(MimeMessage message) throws MessagingException {
LOGGER.info("Removed mail to " + message.getRecipients(Message.RecipientType.TO)[0].toString()
+ "\nWith title : " + message.getSubject() + "from error queue. \nError was : " + mailsWithErrors.remove(message).getMessage());
}
/**
* Starts a new Thread, to try sending the erroneous mails again.
*/
public void startErrorThread() {
getErrorThread().start();
}
/**
* Try to send this specific mail from error list.
* @param message to send
* @return True if mail was send.
*/
public boolean trySingleErrorMail(MimeMessage message) {
if (sendMessage(message)) {
LOGGER.trace("erroneous mail succesfull send", mailsWithErrors.remove(message));
return true;
}
return false;
}
private Thread getErrorThread() {
return new Thread(new Runnable() {
@Override
public void run() {
wait(WAIT_FAILURE_TIME);
tryErrorsAgain();
}
private void wait(int time) {
try {
Thread.sleep(time);
} catch (InterruptedException ex) {
LOGGER.error("sleep interrupted.", ex);
}
}
});
}
private void tryErrorsAgain() {
errorRun.addAll(mailsWithErrors.keySet());
while (errorRun.peek() != null) {
MimeMessage message = errorRun.remove();
if (sendMessage(message)) {
MailException exception = mailsWithErrors.remove(message);
if (exception != null) {
LOGGER.trace("Errorneous mail succesfull send.", exception);
}
}
}
}
private boolean sendMessage(MimeMessage message) {
MailException exception;
try {
sender.send(message);
return true;
} catch (MailException e) {
try {
LOGGER.error("sending mail failed " + String.valueOf(message.getRecipients(Message.RecipientType.TO)[0]), e);
} catch (MessagingException ex) {
LOGGER.error("This error shouldn't happen.", ex);
}
exception = mailsWithErrors.put(message, e);
if (exception != null) {
LOGGER.trace("Added duplicated mail in errors", e);
}
}
return false;
}
public MimeMessage createMimeMessage() {
return sender.createMimeMessage();
}
public MailQueue setSender(JavaMailSender sender) {
this.sender = sender;
return this;
}
public Map<MimeMessage, MailException> getMailsWithErrors() {
return mailsWithErrors;
}
public Collection<MimeMessage> getToSend() {
return Collections.unmodifiableList(Arrays.asList(mailsToSend.toArray(new MimeMessage[0])));
}
public boolean isRun() {
return run;
}
}
Answer: private boolean run = false;
A verb as a variable name seems weird to me. Especially as a boolean. Try to make your booleans adjectives, questions or statements instead. In this case, I'd go for running.
You'd get an aptly named isRunning method from this too, rather than isRun.
public enum MailQueue implements Runnable {
INSTANCE;
What the...
This is clever abuse of language mechanics, and I don't like it for that specific reason.
Do it the proper way. Have one class that keeps track of the tasks and one class that does the tasks. Not this self-forking madness where you keep a reference to the main instance by a enum variable.
Thread-safety
I was wondering about the thread-safety of your code, so I tested if threads start directly after the start method is called.
for(int threads = 0; threads < 10; threads++){
final int thred = threads;
System.out.println("Creating thread "+thred);
new Thread(new Runnable(){
@Override
public void run()
{
System.out.println("Thread "+thred);
}
}).start();
System.out.println("Created thread "+thred);
}
The output?
Creating thread 0
Created thread 0
Creating thread 1
Created thread 1
Creating thread 2
Created thread 2
Thread 0
Thread 1
Creating thread 3
Thread 2
Created thread 3
Creating thread 4
Created thread 4
Thread 3
Creating thread 5
Thread 4
Created thread 5
Creating thread 6
Thread 5
Created thread 6
Creating thread 7
Created thread 7
Creating thread 8
Thread 6
Created thread 8
Thread 7
Creating thread 9
Thread 8
Created thread 9
Thread 9
Oh dear. It seems I can create three threads before any threads have even started running.
It is not required for the JVM to start running your thread when you create it.
Thus, you can exceed your max threads here:
@Override
public void run() {
run = true;
CURRENT_THREADS_SEND_MAIL.getAndIncrement();
while (run) {
while (mailsToSend.peek() != null) {
int currentThreads = CURRENT_THREADS_SEND_MAIL.get();
if (currentThreads < MAX_THREADS_SEND_MAIL && mailsToSend.size() > (MAX_ELEMENTS_BEFORE_NEW_THREAD * currentThreads)) {
new Thread(this).start();
}
MimeMessage message = mailsToSend.remove();
sendMessage(message);
}
}
if (CURRENT_THREADS_SEND_MAIL.decrementAndGet() < 1) {
getErrorThread().start();
}
run = false;
}
There's a full queue already of 15000 mails (remember, threads are allowed to be suspended indefinitely, so I can add 15k mails before the VM starts your thread). First thread is created and run. It increments to 1. We put the limit at 2. It sees there's mail, and currently 1 thread. It adds a new thread and sends a message.
The message sending is done, but the other thread hasn't started yet. So we create a new thread.
Repeat until we have ~14975 threads.
That was a single thread breaking your code - so synchronization is not gonna help.
Though whatever you do, you'll want to have synchronization as well.
while (mailsToSend.peek() != null) {
int currentThreads = CURRENT_THREADS_SEND_MAIL.get();
if (currentThreads < MAX_THREADS_SEND_MAIL && mailsToSend.size() > (MAX_ELEMENTS_BEFORE_NEW_THREAD * currentThreads)) {
new Thread(this).start();
}
MimeMessage message = mailsToSend.remove();
sendMessage(message);
}
There's 100 mails. Thread cap at 3 threads.
Thread 1 grabs a mail, starts a thread, sends a mail.
Thread 2 grabs a mail, retrieves the thread counter, suspends.
Thread 1 returns from sending mail, grabs a mail, retrieves the thread counter, suspends.
Thread 2 creates a thread, sends a mail.
Thread 1 creates a thread, sends a mail.
You now have 4 threads.
So how do we fix this?
First, we need synchronization for starting a new thread.
At the top of the class:
private static final Object lockObject = new Object();
And in the run method:
while (mailsToSend.peek() != null) {
MimeMessage message = mailsToSend.remove();
synchronized(lockObject){
int currentThreads = CURRENT_THREADS_SEND_MAIL.get();
if (currentThreads < MAX_THREADS_SEND_MAIL && mailsToSend.size() > (MAX_ELEMENTS_BEFORE_NEW_THREAD * currentThreads)) {
new Thread(this).start();
}
}
sendMessage(message);
}
Hurray, synchronization!
Also, I just murdered your throughput (for every mail acquired, a thread must get a lock and free a lock). Clearly, this situation doesn't work.
... additionally, I just realized that you have this:
while (run) {
while (mailsToSend.peek() != null) {
int currentThreads = CURRENT_THREADS_SEND_MAIL.get();
if (currentThreads < MAX_THREADS_SEND_MAIL && mailsToSend.size() > (MAX_ELEMENTS_BEFORE_NEW_THREAD * currentThreads)) {
new Thread(this).start();
}
MimeMessage message = mailsToSend.remove();
sendMessage(message);
}
}
That's a spinloop. If there's no mail left, check if there's more mail.
And you have 100 threads spinlooping.
Poor server.
So how are we gonna keep it thread-safe AND performant?
Well, first we need to get rid of the spin loop. If there's mail, the queue should start, and if mail is gone, the queue should stop.
For the start condition, you can modify addMail to check if there are threads running at all. This is a pain when dealing with 15000 mails, so consider having a scheduler that periodically checks if the mail queue is empty (every 5 seconds?).
For the stop condition, well, once the queue is empty, kill your threads. It's as simple as that.
You could even have a single "main" thread that doesn't die, then you could get rid of the scheduler. This "main" thread could sleep for 5 seconds if it didn't find any mail.
...
I took another look at your code. I'm still not fully understanding the enum hack. Are you... having multiple threads running the same runnable?
... what. (By the way, this gives you yet another bug, where a previously waiting-for-start thread will get started and set run to true).
Next, now that we're rid of the spin loop, we need to fix synchronization for starting new threads.
It's not possible to precisely limit the amount of created threads based on the size of the queue. For that, you'd need to determine the size of mailsToSend, but you can't do that without locking access to mailsToSend. That would require synchronization... which would give you all this locking madness for grabbing a single mail.
You're better off just doing as you do now, mostly - use mailsToSend.size() and just naively trust it. Besides, a few extra threads don't matter that much - either you have work to do and could use some extra threads, or you're not doing much and can handle having to clean up a mess.
What's more concerning is fixing the case where more threads than the cap are created. To fix this, you can use double checked locking:
private static Thread threadIncreaser;
private static final Object threadIncreaserLockObject = new Object();
...
//in addMail/addMails
if(threadIncreaser == null){
synchronized(threadIncreaserLockObject){
if(threadIncreaser == null){
threadIncreaser = new Thread(new Runnable(){ ... });
threadIncreaser.start();
}
}
}
With the runnable as such:
public void run(){
int size = mailsToSend.size();
int currentThreads = CURRENT_THREADS_SEND_MAIL.get();
while (currentThreads < MAX_THREADS_SEND_MAIL && size > (MAX_ELEMENTS_BEFORE_NEW_THREAD * currentThreads)) {
new Thread(MailQueue.INSTANCE).start();
currentThreads++;
}
}
This way, only 1 thread is responsible for running mailsToSend.size(). This should save a lot of checking mailsToSend.size(), which is basically iterating over all the mails.
And that's both the thread starting and spin looping fixed. | {
"domain": "codereview.stackexchange",
"id": 13330,
"tags": "java, multithreading, queue, email"
} |
Complexity of calculating independence number of a hypergraph | Question: Let $G$ be a "hypergraph", a collection of vertices $V=\{v_1,v_2,\ldots,v_n\}$ and a collection of "hyperedges" $E=\{e_1,e_2,\ldots,e_m\}$, where $e_i\subseteq V$ and unlike normal edges, an edge may contain more than two vertices.
An "independent set" (http://en.wikipedia.org/wiki/Independent_set_(graph_theory)) is a collection of vertices, $U$, that does not fully contain any of the hyperedges: $e_i\not\subseteq U$. The "independence number" or "maximum independent set size" is the size of the largest independent set in the graph $G$.
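As a sketch of this definition, here is a brute-force computation of the independence number (exponential in $|V|$, so only usable on tiny instances; it is meant to pin down the definition, not to be efficient):

```python
# Independence number of a hypergraph: the size of the largest vertex
# subset U that does not fully contain any hyperedge e (e is not a
# subset of U). Brute force over all subsets, largest first.
from itertools import combinations

def independence_number(V, E):
    for k in range(len(V), -1, -1):
        for U in combinations(V, k):
            Uset = set(U)
            if not any(e <= Uset for e in E):   # no hyperedge inside U
                return k
    return 0

V = [1, 2, 3, 4]
E = [{1, 2, 3}, {3, 4}]
print(independence_number(V, E))  # -> 3, e.g. U = {1, 2, 4}
```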
I know that deciding whether there is an independent set of size $k\in\mathbb{N}$ in some normal graph $G$ is NP-complete. If I am not mistaken, this holds for hypergraphs as well. However, calculating the independence number is not known to be in NP. Even approximating it is not known to be in NP.
First, is there a more specific complexity class for calculating the independence number than NP-Hard?
Second, how much harder is it for a hypergraph? Again, is there a complexity class more specific?
For related questions, a recent dissertation has been helpful to me: https://escholarship.org/uc/item/79t9b162.
Thanks!
Answer: did not find a proof of complexity class harder than "NP hard" for this problem (ie the presumably more complex hypergraph version of the problem does not seem to have been proven harder than the graph version) however did find the following. Saket has recent research in the area. results in complexity theory in active areas of research tend to be highly specialized and in the form "for limited hypergraph types [x], the following improved complexity bound [y] is shown." (ie more as approximability results, & more specialized than what you request but it can be mined for nearest desirable results/refs.)
basically hypergraphs while an old mathematical concept are a more recent area of complexity theory research and there is a large active/ongoing research program of determining complexity of operations around them and how those complexities relate to corresponding graph operation complexities, and translating theorems and knowledge about graphs into their hypergraph analogs.
Hardness of Finding Independent Sets in 2-Colorable Hypergraphs and of Satisfiable CSPs Saket (Dec 2013)
This work revisits the PCP Verifiers used in the works of Håstad [Hås01], Guruswami et al. [GHS02], Holmerin [Hol02] and Guruswami [Gur00] for satisfiable MAX-E3-SAT and MAX-Ek-SET-SPLITTING, and independent set in 2-colorable 4-uniform hypergraphs. We provide simpler and more efficient PCP Verifiers to prove ... improved hardness results ...
Hardness of Maximum Independent Set in Structured Hypergraphs Saket powerpoint slides overview | {
"domain": "cs.stackexchange",
"id": 2589,
"tags": "complexity-theory, graphs"
} |
Does light travel forever in a medium? | Question: Light will travel forever in a vacuum. What about light in a medium? Does it travel forever too, but at a slower rate. Let’s say I run a fiber optic cable across the universe. I know it will travel slower in the cable, but will it eventually stop, or will it continue across the universe?
Answer: Even the best optical fiber has some attenuation. The very popular SMF-28 type, for example, has a maximum attenuation of 0.18 dB/km at 1550 nm, which is the optimum wavelength for attenuation.
That means that after a mere 1000 km, the power that entered the fiber will be reduced by 180 dB. If you started with 1 W, you'd have only an attowatt remaining. Whether you consider an arbitrarily large attenuation factor "stopped" or not is a semantic choice, but practically it means no receiver in the world could detect your signal after even 1000 km.
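The dB arithmetic above is easy to verify directly (attenuation in dB is a power ratio of $10^{-\mathrm{dB}/10}$):

```python
# Optical power remaining after propagating through a lossy fiber.
def power_out(p_in_w, atten_db_per_km, length_km):
    total_db = atten_db_per_km * length_km
    return p_in_w * 10 ** (-total_db / 10.0)

# 0.18 dB/km over 1000 km = 180 dB: 1 W in leaves roughly 1e-18 W,
# i.e. about an attowatt.
print(power_out(1.0, 0.18, 1000.0))
```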
Most of the attenuation is due to absorption by impurities in the glass, like iron and hydroxide ions. So in principle you might be able to eliminate this and imagine a medium with much lower loss than optical fiber. But even then you'd have other issues to contend with such as the small attenuation due to the glass itself, scattering from the glass atoms, etc. You might be able to transmit across 1000 km, but not "across the universe". | {
"domain": "physics.stackexchange",
"id": 47954,
"tags": "electromagnetic-radiation, fiber-optics"
} |
Javascript code to make Matrix wallpaper | Question: I am a JavaScript noob. I wrote this program for the sake of learning JavaScript in a better way. The code displays a Matrix-style live wallpaper in the browser.
Here is the code
<!DOCTYPE html>
<html>
<head>
<title>Hacker Live Wallpaper</title>
</head>
<body>
<div id="wallpaper"></div>
<script type="text/javascript" async="true">
var pageEndCounter = 0;
document.getElementsByTagName('body')[0].style.background = "black";
wallpaper = document.getElementById('wallpaper');
wallpaper.style.color = "green";
setInterval(function() {
pageEndCounter++;
if (pageEndCounter >= 35) {
wallpaper.innerHTML = " ";
pageEndCounter = 0;
}
wallpaper.innerHTML += "</br>";
for (var i = 0; i < 5; i++) {
for (var j = returnRandomInt(2); j > 0; j--) {
wallpaper.innerHTML += " ";
}
wallpaper.innerHTML += returnRandomInt();
for (var j = returnRandomInt(2); j > 0; j--) {
wallpaper.innerHTML += " ";
}
}
}, 100);
function returnRandomInt(c) {
if (c == undefined) {
return Math.floor(Math.random() * 100);
} else
return Math.floor(Math.random() * 100 / c);
}
</script>
</body>
</html>
It stutters too much on Chrome and Firefox.
Any suggestions to make the code run faster and smoother would be kindly appreciated.
Thanks.
Answer: I think you want to print 5 random numbers per line, spaced at random distances. Instead of using so many &nbsp; entities in your code, you can use spans/divs with random margins/padding etc.; since there will be fewer DOM elements to parse, it should work much faster.
setInterval(printLine, 100);
function printLine() {
pageEndCounter++;
if (pageEndCounter >= 35) {
wallpaper.innerHTML = " ";
pageEndCounter = 0;
}
wallpaper.innerHTML += "<br>";
for (var i = 0; i < 5; i++) {
var padding = returnRandomInt(2);
var col = '<span style="display:inline-block;width:20%;margin-left:' + padding + '%;">'
+ returnRandomInt() + '</span>';
wallpaper.innerHTML += col;
}
} | {
"domain": "codereview.stackexchange",
"id": 32096,
"tags": "javascript, performance"
} |