anchor | positive | source dict |
|---|---|---|
messages from subfolders | Question:
Hi,
I'm writing a Package, that has to contain multiple message definitions.
In order to manage the amount I divided the messages in subfolders:
/msg/test
/msg/test2
I've already checked that this works (CMakeLists.txt):
add_message_files( FILES
test/test_msg.msg )
However: I still have to list every .msg in add_message_files. Is there a way to add whole folders (every .msg in a particular subfolder of /msg)?
cheers
Originally posted by Reiner on ROS Answers with karma: 61 on 2015-06-29
Post score: 1
Answer:
Looking at the docs for genmsg (where add_message_files(..) is coming from) and at the code (specifically: here) it would seem that if you only pass the DIRECTORY argument to add_message_files(..) it will search for .msg files in that directory. So that would remove the need to list all files explicitly.
I'm not sure whether it does that recursively though (most likely not, as GLOB_RECURSE is not used), so that would require calling add_message_files(DIRECTORY ..) multiple times (once for each subdir of $pkg_dir/msg in your specific case) and I don't know whether that is supported.
Note btw that this 'auto-discovery of msg files' uses GLOB-ing, which is sort-of frowned upon in the CMake community (see Best way to specify sourcefiles in CMake on stackoverflow fi).
As an aside: making things explicit in software (engineering) is actually a good thing, as it avoids depending on implicit assumptions in your components (ie: "message my_message.msg will be there, as it is in the directory I have catkin search for messages"). If you GLOB, a missing msg file will cause an error at a much later time (while compiling any sources depending on the message) than if you explicitly listed it (existence of .msg files is checked at configuration time here) and it will also be more easily understood (ie: message file not found vs some/path/to/my_source.cpp:LINE:CHAR: fatal error: my_msgs/my_msg.h: No such file or directory which can also be caused by missing or misstated dependencies).
Originally posted by gvdhoorn with karma: 86574 on 2015-06-29
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 22034,
"tags": "ros, add-message-files, message, message-generation"
} |
Is there a quick way to roughly estimate how quickly a flood will move downstream? | Question: At the time of this post, there was urgent danger of spillway failure at the Oroville Dam in California, with evacuations in place for hundreds of thousands, and even larger populations further downstream in the Sacramento metro area. There are 300 large dams in the world. So it would seem quite useful to have a method to quickly estimate how quickly a large flood might move downstream.
Obviously detailed hydrological models are the only way to properly incorporate the varied topography and other unique factors in each situation.
But it would seem useful for a lot of folks, whether tens of miles downstream of a mountain stream (such as might've been vital as the Big Thompson flood started), dozens of miles down a valley from a dammed waterway (like those further down the Feather/Sacramento Rivers tonight), or hundreds of miles down a flooding major river (such as the 1993 Mississippi Flood), especially given the varying warning networks in place around the world, to have a rough scale of how quickly an unexpected flood might translate downstream their way. Is it a matter of hours, days, weeks? A rough worst-case scenario of how quickly they may have to react is what is desired.
Would seem a reasonable formula for a loose estimate might reasonably include the volume of water flowing, the expected channel size, and the elevation change (or alternatively, just the speed of the water... but if you knew that, you really don't need any special formula!).
Does such a formula like this exist?
Or are there good empirical estimates for those different situations I presented?
Or is each case just so unique that there's no hope of such an estimate?
Answer: My first thought would be to use Manning equation as an approximation. It does not take into account the effect of a dam burst providing excess water and immediate flooding, although for larger scales (in terms of river reach) this is likely less important.
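To make the Manning approximation concrete, here is a rough Python sketch of the SI form $v = \frac{1}{n} R^{2/3} \sqrt{S}$. The roughness coefficient, hydraulic radius, and slope below are illustrative assumptions, not data for any real channel:

```python
def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean open-channel flow velocity in m/s (SI form of Manning's equation)."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Assumed values: a rough natural channel, 3 m hydraulic radius, 1 m drop per km
n = 0.035    # Manning roughness (typical table value for a natural stream)
R = 3.0      # hydraulic radius in metres (flow area / wetted perimeter)
S = 0.001    # dimensionless channel slope

v = manning_velocity(n, R, S)
hours_per_100_km = 100e3 / v / 3600.0
print(f"v ~ {v:.2f} m/s; 100 km in ~ {hours_per_100_km:.1f} h")
```

Note that the flood wave itself typically travels faster than the mean flow (for a wide channel, kinematic wave theory puts the celerity at roughly $5/3$ of the Manning velocity), so the mean velocity understates how quickly the crest arrives.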
Detailed hydrologic models may not be the right answer anyway, especially given the tradeoff against the work involved in building them. A detailed forecasting system with a model and real-time data would be best, but a quick approximation often gives a good guess for far less work. | {
"domain": "earthscience.stackexchange",
"id": 1179,
"tags": "hydrology, flooding, dams"
} |
What's the difference between static and stagnation properties? | Question: Difference between static and stagnation properties.
Answer: Stagnation properties are those possessed by a volume element of gas after it has been decelerated from the free stream velocity to zero. In this case, the kinetic energy it originally had has been converted into potential energy of compression, which means its density and temperature have increased above the static values. | {
"domain": "physics.stackexchange",
"id": 52671,
"tags": "thermodynamics, fluid-dynamics"
} |
Why is beryllium 8 ($ ^8_4 Be $) such an unstable element? | Question: And by unstable I mean a half-life of $ 6.7 \cdot 10^{-17} s$.
And what is exactly the criterion used to say an element is stable or unstable? Where do we draw the line?
Answer: One point of view is that the state of two alpha particles is not bound. A bit like the system of two protons or system of two neutrons.
The other point of view is that once you form a resonance (could be viewed as a kind of a ground state of $^{8}Be$) it immediately (in the moment of formation?) "sees" an energetically more favored state of two separate alpha particles and -since the interaction is strong - it can decay very fast to the final state (see also Fermi golden rule).
Addendum:
If the interaction $\langle \text{fin}|H|\text{init}\rangle$ between the initial and final states (hypothetically) was electromagnetic, the half-life of the state could be like $10^{-12} s$ and longer (if the transition multipole is too high or the two states are too different). These are the times one frequently finds when an excited nucleus cools down by $\gamma$ emission.
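As a rough cross-check of "decays very fast", the half-life quoted in the question can be converted into a decay width through $\Gamma = \hbar \ln 2 / t_{1/2}$. A back-of-the-envelope sketch with rounded constants:

```python
import math

HBAR_EV_S = 6.582e-16   # reduced Planck constant in eV*s (rounded)
t_half = 6.7e-17        # s, the half-life quoted in the question

# Decay width Gamma = hbar * ln(2) / t_half
gamma_eV = HBAR_EV_S * math.log(2) / t_half
print(f"width ~ {gamma_eV:.1f} eV")
```

A width of several eV is enormous next to typical electromagnetic $\gamma$-decay widths, which is another way of seeing that the strong interaction drives this decay.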
Criterion:
(I forgot)
Stability is usually meant as stability against decay. Look at https://www-nds.iaea.org/relnsd/vcharthtml/VChartHTML.html - all black ones are stable; there are not too many of them. What you mean is probably "stable or radioactive" or something in this sense. Many can decay by $\beta^+$ or $\beta^-$, but from the perspective of the strong interaction, they are quite stable. You can define unbound nuclei, and one speaks about the neutron drip line or proton drip line. Alpha decay of heavy nuclei is a bit of a different process, only mentioned here. | {
"domain": "physics.stackexchange",
"id": 53007,
"tags": "quantum-mechanics, nuclear-physics, radioactivity, half-life"
} |
What is the difference between precession and spin angles? | Question: I was recently introduced to Euler Angles in a Dynamics course, but I am confused on the difference between precession and spin angles. Both precession and spin consist in rotating a coordinate system about the $z$-axis, which means that they have the same transformation matrices.
Answer:
Both precession and spin consist in rotating a coordinate system about the $z$-axis, which means that they have the same transformation matrices.
Yes, but in between the two there is a rotation about $x$. (Note: Some use $y$ for the middle rotation). This makes all the difference in the world for describing the orientation of a world with respect to the ecliptic, the original usage of Euler rotation sequences.
A bit overly simplified, the initial rotation about the original $z$-axis is the precession angle. Precession for a planet is very slow. The second rotation about the once-rotated $x$-axis is the nutation angle. For the Earth, this closely corresponds to the nearly constant obliquity. The final rotation about the twice-rotated $z$ axis is the daily rotation angle.
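The point that both $z$-rotations use the same matrix form, yet are not interchangeable, can be checked numerically: swapping the precession and spin angles in a $z$-$x$-$z$ sequence generally produces a different composite rotation. A small sketch with arbitrary angles:

```python
import numpy as np

def Rz(a):
    """Rotation matrix about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(a):
    """Rotation matrix about the x-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

prec, nut, spin = 0.3, 0.7, 1.1          # arbitrary angles in radians

R1 = Rz(prec) @ Rx(nut) @ Rz(spin)       # precession first, spin last
R2 = Rz(spin) @ Rx(nut) @ Rz(prec)       # same three matrices, roles swapped

print(np.allclose(R1, R2))               # False: the order in the sequence matters
```

So even though `Rz` is the same function in both slots, the intervening $x$-rotation makes the first and third angles play different physical roles.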
With two exceptions, given any orientation, the Euler sequence that rotates the initial $x$, $y$, and $z$ axes to the $X$, $Y$, and $Z$ axes that correspond to the desired orientation is unique, with the constraint that the rotations about $z$ are between 0° (inclusive) and 360° (exclusive) and the intermediate rotation about the once-rotated $x$-axis is between 0° and 180° (exclusive of both).
The two exceptions occur when the intermediate rotation about the once-rotated $x$-axis is 0° or 180°. These situations are called "gimbal lock." Here the distinction between precession and rotation is completely arbitrary; the standard approach is to arbitrarily set one of the two to zero. | {
"domain": "physics.stackexchange",
"id": 51385,
"tags": "rotational-dynamics, reference-frames, rigid-body-dynamics, precession"
} |
How do I add a new stereo or monocular camera to ROS? | Question:
Do I need to write my own "driver"? (That is code that interfaces with the camera APIs and publishes to ROS?)
In that case, what do I publish?
Do I give only raw image output or is there a way to publish a stereo pair?
Are there any examples for how to do this, e.g. from some other stereo cameras?
What is the easiest way to do this?
copied from ros-users thread: http://code.ros.org/lurker/message/20110215.125025.1315d930.en.html
Originally posted by mmwise on ROS Answers with karma: 8372 on 2011-02-15
Post score: 2
Answer:
Do I need to write my own "driver"?
Yes, unless someone has already written one.
That is code that interfaces with the camera APIs and publishes to ROS?
Yes.
In that case, what do I publish? Do I give only raw image output or is there a way to publish a stereo pair?
There are standard ROS ways to publish stereo data.
http://www.ros.org/wiki/stereo_image_proc
http://www.ros.org/wiki/camera_calibration
http://www.ros.org/wiki/image_pipeline
Are there any examples for how to do this, e.g. from some other stereo cameras?
http://www.ros.org/wiki/bumblebee2
http://www.ros.org/wiki/videre_stereo_cam
What is the easiest way to do this?
The extra ROS wrapping for publishing camera data does not involve much additional code, but does require a basic understanding of the image pipeline. So, study the interfaces and look at examples of other drivers, then ask more questions.
Answers provided by Jack O'Quin
Originally posted by tfoote with karma: 58457 on 2011-02-15
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 4744,
"tags": "ros, camera-drivers"
} |
Is there any physical evidence for motion? | Question: Let's say that we have 2 tennis balls in space, one being in motion (say, pushed by an astronaut), and the other one still.
If we could take a snapshot of both tennis balls, would there be any evidence that could suggest that one is moving and the other one is still? Is there anything happening, at the atomic level or bigger, being responsible for the motion?
If there isn't, and both balls are absolutely identical, then how come one is still and the other one moving? Where does the difference of motion come from?
Answer: According to classical physics: no. It is impossible to tell how fast something is moving from a snapshot.
According to special relativity: yes. If we choose a frame of reference where one of the balls is at rest then only that ball will look normal. The other ball is moving in this frame so it will be length contracted. If its rest length is $L$ then its length will now be $L\sqrt{1-v^2/c^2}$. Since $1-v^2/c^2<1$ the ball will be shorter in the direction it is moving.
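To put a number on the special-relativity case: assuming a tennis ball with a 6.7 cm rest length moving at half the speed of light (both values invented for the example):

```python
import math

C = 299_792_458.0   # speed of light, m/s

def contracted_length(rest_length, v):
    """Length along the direction of motion: L * sqrt(1 - v^2/c^2)."""
    return rest_length * math.sqrt(1.0 - (v / C) ** 2)

L_rest = 0.067                       # 6.7 cm rest length (assumed)
L_moving = contracted_length(L_rest, 0.5 * C)
print(f"{L_moving * 100:.2f} cm")    # about 5.80 cm in the snapshot frame
```

At everyday speeds the factor $\sqrt{1-v^2/c^2}$ is indistinguishable from 1, which is why the classical answer above is "no" for any practical snapshot.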
According to quantum mechanics: yes? In quantum mechanics particles are described by a wavefunction $\psi(x)$ which (handwavingly) says how much of the particle is present at a certain point. A tennis ball is also described by a wavefunction which you can get by combining all the wavefunctions of its atoms. The wavefunction actually contains all the information you can possibly know about an object, including its velocity. So if you could pause time and look at the wavefunction you would have enough information to know its (most likely) velocity. In real life you can't actually look at wavefunctions: you have to perform an experiment to extract information from the wavefunction. At this point you might wonder if that still counts as taking a snapshot. | {
"domain": "physics.stackexchange",
"id": 62510,
"tags": "kinematics, experimental-physics, reference-frames, relative-motion"
} |
What does the term "railed" mean in signal processing? | Question: I'm having trouble finding a definition of "railed" that relates to signal processing.
Am I correct in my guess that this term is in fact from this field?
My signal data comes from an EEG device. The lightly documented open source software I'm using doesn't define it, but it shows that term when there is no signal data being displayed.
Is that all it means (no data)? Or does it mean something like the signal being read is too great to be displayed or correctly measured?
Answer: A railed signal, or a railing signal, seems to indicate a flatline. On BIOPAC, Railing signal (flatline) says:
When the amplified signal for any given channel exceeds the range -10 to +10 volts, the signal will rail. You will see a straight line at -10 or +10 volts (more likely the reading will be close to 9.99 volts). The MP system is designed to work only in the range -10 to +10 volts.
The signal could rail for several reasons (which are not exclusive)...
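As a data-side illustration, the snippet below flags samples pinned near an assumed ±10 V rail; the tolerance and the test signals are made up, not tied to any particular EEG device:

```python
import numpy as np

RAIL = 10.0   # saturation limit in volts, matching the quoted MP system range
TOL = 0.05    # how close to the rail still counts as railed (an assumption)

def railed_fraction(signal):
    """Fraction of samples pinned at either rail."""
    return float(np.mean(np.abs(signal) >= RAIL - TOL))

t = np.linspace(0.0, 1.0, 1000)
clean = 2.0 * np.sin(2 * np.pi * 10 * t)                           # well in range
clipped = np.clip(40.0 * np.sin(2 * np.pi * 10 * t), -RAIL, RAIL)  # amp saturates

print(railed_fraction(clean), railed_fraction(clipped))
```

A channel reporting a railed fraction near 1.0 is flatlined at the rail — exactly the "straight line at ±10 V" described above.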
From Amplifiers: What do rail-to-rail and single supply mean?
With respect to analog signals, a “rail” is a boundary that a signal has to work within. | {
"domain": "dsp.stackexchange",
"id": 7569,
"tags": "signal-analysis"
} |
Python to sum values in a column | Question: I've created a Python code that reads the data from an excel file using Pandas.
Code for your reference:-
import pandas as pd
def myFunc():
file = r"C:\Documents\myFile.xlsx"
new_dataframe = pd.read_excel(file,'Sheet1')
new_dataframe.fillna(value="No Data Found",inplace=True)
print(new_dataframe)
myFunc()
Current Output:-
Name date amount_used
0 P1 2018-07-01 40.0
1 P1 2018-07-01 40.0
2 P1 2018-07-15 40.0
3 P2 2018-08-01 20.0
4 P2 2018-09-15 50.0
5 P2 2018-08-15 40.0
6 P3 2018-08-10 20.0
7 P3 2018-08-10 50.0
8 P3 2018-08-10 40.0
In the final output, I need to sum the amount_used column based on Name and date column.
Expected Output:-
Name date amount_used
0 P1 2018-07-01 80.0
1 P1 2018-07-15 40.0
2 P2 2018-08-01 20.0
3 P2 2018-08-15 40.0
4 P2 2018-09-15 50.0
5 P3 2018-08-10 110.0
How can I achieve this using pandas?
Answer: You can use groupby and then sum
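A minimal sketch of that groupby-and-sum, using the data from the question:

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["P1", "P1", "P1", "P2", "P2", "P2", "P3", "P3", "P3"],
    "date": ["2018-07-01", "2018-07-01", "2018-07-15", "2018-08-01",
             "2018-09-15", "2018-08-15", "2018-08-10", "2018-08-10", "2018-08-10"],
    "amount_used": [40.0, 40.0, 40.0, 20.0, 50.0, 40.0, 20.0, 50.0, 40.0],
})

# Sum amount_used per (Name, date) pair; as_index=False keeps them as columns
result = df.groupby(["Name", "date"], as_index=False)["amount_used"].sum()
print(result)
```

`groupby` sorts the group keys by default, so the output rows come out ordered by Name and then date.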
Take a look at https://stackoverflow.com/questions/39922986/pandas-group-by-and-sum | {
"domain": "datascience.stackexchange",
"id": 5497,
"tags": "python, pandas"
} |
Are retrograde capture orbits "easier" than prograde capture orbits? | Question: After reading up on irregular moons in the solar system - moons that are thought to be captured - most seem to be in retrograde orbits around their parent body. That led me to wonder whether objects are more easily captured into retrograde orbits than prograde orbits - say, a prograde approach is more likely to gravitationally slingshot the object away from the parent body before capture, whereas a retrograde approach is more likely to result in capture before flinging the object away.
When viewing the capture from the perspective of the parent body, an object that is moving retrograde past the body appears to slow down as it interacts with the gravity well of the body, whereas an object moving prograde past the body appears to accelerate in the same frame of reference.
Is there any validity to this, or is that just a flaw in reasoning?
Answer:
Are retrograde capture orbits “easier” than prograde capture orbits?
The answer is not just yes, but a rather emphatic yes. This is why the irregular moons of Jupiter predominantly have retrograde orbits, and why all of the outer moons of Jupiter have retrograde orbits. This is also why NASA has been interested in capturing an asteroid and putting into a distant retrograde orbit about the Earth's Moon.
Unfortunately, there are no nice, closed-form solutions to explain why this is the case. This result comes from perturbation theory and simulations galore. Prograde orbits with a semi-major axis greater than about 1/3 the Hill sphere radius tend to be markedly unstable. Retrograde orbits can be stable out to the Hill sphere radius, and can be relatively stable even beyond the Hill sphere radius.
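The "1/3 of the Hill sphere radius" rule of thumb can be made concrete with a back-of-the-envelope number for Jupiter; the constants below are rounded values, and this is a sketch, not a dynamical simulation:

```python
AU = 1.496e11            # metres per astronomical unit

def hill_radius(a, m_body, m_central):
    """Hill sphere radius: a * (m / (3 M))**(1/3)."""
    return a * (m_body / (3.0 * m_central)) ** (1.0 / 3.0)

# Jupiter about the Sun (rounded masses and semi-major axis)
r_hill = hill_radius(5.204 * AU, 1.898e27, 1.989e30)
print(f"Hill radius ~ {r_hill / AU:.2f} AU; "
      f"rough prograde stability limit ~ {r_hill / 3.0 / AU:.2f} AU")
```

Jupiter's distant retrograde irregulars orbit well beyond that prograde limit, which is the pattern the answer describes.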
A lot of work has been done in the last couple of decades in analyzing distant retrograde orbits. Search for "distant retrograde orbit" in scholar.google.com and you will get a large number of recent hits. This search shows the results from just this year (2014). | {
"domain": "physics.stackexchange",
"id": 16422,
"tags": "gravity, orbital-motion"
} |
Can GR be reformulated in terms of invariant observables? | Question: Question
So recently I was thinking about this: How many scalars are available in $4$ dimensions in General Relativity (without being redundant)? For example, with metric we can construct the following scalar:
$$ g^{\mu \nu} g_{\mu \nu} = 4 $$
is the same as:
$$ (g^{\mu \nu} \otimes g^{\rho \kappa}) \cdot (g_{\mu \nu} \otimes g_{\rho \kappa} ) = 16 $$
We also have scalars like curvature, torsion, inner product of the riemann tensor with itself, etc.
Motivation
My motivation for doing so is as follows: GR is currently formulated through (rank-$2$ symmetric) tensors as:
$$ R_{\mu \nu} - \frac{1}{2} R g_{\mu \nu} = \frac{8 \pi G}{c^4} T_{\mu \nu} $$
Hence any solution of the above automatically satisfies:
$$ (R_{\mu \nu} - \frac{1}{2} R g_{\mu \nu}) (R^{\mu \nu} - \frac{1}{2} R g^{\mu \nu}) = \bigg(\frac{8 \pi G}{c^4}\bigg)^2 T_{\mu \nu} T^{\mu \nu} $$
But note the latter equation is written purely in invariant observables. I was wondering if General Relativity could also be written purely in terms of observables? If not, how many short are we? Can the remaining variables be expressed as something invariant that is not a scalar (not sure if it would be a tensor either)?
Answer: If you define invariant observables to be relativistic scalars formed from the Riemann tensor, then the answer is no, it is not possible to formulate GR in terms of invariant observables. As a counterexample, all curvature invariants vanish for any gravitational plane-wave solution in 3+1 dimensions. This is basically because an observer who chases the wave at high velocity can see the energy and amplitude of the wave Doppler shifted down to an arbitrarily low level. For more on this, see Schmidt, http://arxiv.org/pdf/gr-qc/9404037v1.pdf .
Re the classification of scalar invariants formed from the Riemann tensor (without taking derivatives), see https://en.wikipedia.org/wiki/Carminati%E2%80%93McLenaghan_invariants . If you take derivatives, the number of scalar invariants you can form is infinite.
I think what this really tells us is that relativistic scalars are not rich enough to describe all the things we can actually observe in GR. For instance, if a black hole merger results in the radiation of gravitational waves, then it's reasonable to talk about an observable which is the intensity of those waves in the frame of reference of a distant observer at rest with respect to the source. This is an observable that can't be described as a relativistic scalar, but it's clearly an observable -- LIGO did observe such a thing. | {
"domain": "physics.stackexchange",
"id": 53087,
"tags": "general-relativity, differential-geometry, observables, invariants"
} |
Entropy and Dissipation | Question: I've grown accustomed to thinking of entropy as something that reduces the amount of mechanical work a system can produce. However, this does not seem to be reflected in the mathematics.
For example, consider a system consisting of an ideal gas that may exchange work and heat with its surroundings. The Helmholtz free energy of the gas is then
$$ F = U - TS. $$
Now, suppose the system undergoes an arbitrary transformation in $(V,p)$ space. Then, we have
\begin{align} dF & = q - w - TdS - SdT \\ & \le -w - SdT. \end{align}
What I find striking is that the change in entropy does not affect the ability of the system to do mechanical work. Or, at the very least, we cannot determine by how much the change in entropy degrades the energy of the system.
Answer:
I've grown accustomed to thinking of entropy as something that reduces the amount of mechanical work a system can produce
You are referring to entropy that is generated as a result of irreversible work or irreversible heat transfer. That results in less mechanical work being produced than the case where entropy is strictly transferred as a result of a reversible transfer of heat.
Consider the example of a reversible isothermal expansion of an ideal gas versus an irreversible expansion between the same two equilibrium states, both involving a constant temperature surroundings. See the figure below. (Note, for the irreversible path the term "isothermal" refers to only to the temperature of the system at the boundary with the surroundings. The temperature within the system varies spatially).
For the reversible path the external pressure is slowly (quasi-statically) reduced so that the temperature of the gas remains constant (i.e. $PV=$constant) with the gas in thermal and mechanical equilibrium with the surroundings at all times. The reversible work, $w_{rev}$, is the area under the reversible path.
For the irreversible path in the constant temperature surroundings, the external pressure is suddenly reduced to the final pressure and the gas allowed to expand rapidly (and irreversibly) at constant external pressure until thermal and mechanical equilibrium is reached at the final state. The irreversible work, $w_{irr}$, is the area under the irreversible path.
Note that the area under the irreversible path is less than under the reversible path. Thus
$$w_{irr}\lt w_{rev}$$
In contrast to the reversible path, for the irreversible path heat is added only during the constant external pressure rapid expansion whereas in the reversible path heat is added throughout the expansion. The less heat added over the irreversible path results in less energy available for performing mechanical work.
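A quick numerical sketch of the two paths described above, for one mole of ideal gas doubling its volume at 300 K (the state values are illustrative):

```python
import math

R_GAS = 8.314            # J/(mol*K)
T = 300.0                # K, constant-temperature surroundings
V1, V2 = 0.010, 0.020    # m^3, assumed initial and final volumes

# Reversible isothermal expansion: w_rev = n R T ln(V2/V1)
w_rev = R_GAS * T * math.log(V2 / V1)

# Irreversible expansion against the constant final pressure P2 = n R T / V2
P2 = R_GAS * T / V2
w_irr = P2 * (V2 - V1)

print(f"w_rev ~ {w_rev:.0f} J, w_irr ~ {w_irr:.0f} J")  # w_irr < w_rev
```

The gap between the two numbers is the work "lost" to the entropy generated within the system along the irreversible path.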
Note that the change in entropy of the system between the initial and final equilibrium states is the same for both the reversible and irreversible path, as it must be since entropy is a state function independent of the path. However, for the irreversible path, the change in entropy of the system consists of two components, entropy transferred from the surroundings plus entropy generated within the system due to the rapid irreversible expansion. (see @Chet Miller comment for a reference). For the reversible path the entropy change of the system consists only of the reversible transfer of entropy.
For the irreversible path there is less heat transferred to the system from the surroundings for doing work. This means, for the irreversible path, the decrease in entropy of the surroundings will be less than the increase in entropy of the system. Thus
$$\Delta s_{tot}=\Delta s_{sys}+\Delta s_{surr}\gt 0$$
For the reversible path the increase in entropy of the system equals the decrease in entropy of the surroundings. Thus
$$\Delta s_{tot}=\Delta s_{sys}+\Delta s_{surr}= 0$$
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 97621,
"tags": "thermodynamics, energy, entropy"
} |
Why don't dragonflies' wings collapse? | Question: How do dragonflies manage to fly at such high speeds without their wings collapsing? Their wings are thinner than paper, but they do not even flutter. What gives them their strength?
Answer: Wootton (1992) reviewed the anatomy and biomechanics of insect wings. Basically the wing is a lightweight but strong scaffolding of veins supporting a thin membrane. The veins are composed of a sandwich of cuticle with a potential space in between. The membrane is also a double layer, but without the space.
In the venous space circulate hemolymph and often nerves and tracheae. The Wikipedia image is pretty good:
The nerves carry sensory information and the tracheae oxygen. The hemolymph is continuous with the body and thus able to circulate and hydrate the wing (important for maintaining flexibility). As Wootton says:
desiccation destroys both compliancy and toughness, and a dry cuticle would be mechanically disastrous
So by maintaining a flexible tissue, insects have strong and tough wings that remain light enough. | {
"domain": "biology.stackexchange",
"id": 243,
"tags": "physiology, entomology, biophysics, anatomy, flight"
} |
Time Series Classification for 1 hour blocks | Question: I am doing some analysis on time series.
The time series would consist of 3 channels and contain 5 minute interval data.
What I want is to be able to give it a 1 hour block of 5 minute interval data and have it categorise the block based on the entire hour, picking up patterns in how the time series looks for each of the categories as per the training data.
I have many 1 hour series of 5 minute interval data which is classified to a particular category, and I want to be able to have a deep learning model which can detect the pattern between these samples and be able to determine for new samples which categories they belong to.
**Could you please recommend a type of deep learning model which is capable of this?**
Maybe I don't understand LSTMs, but to my understanding they provide a prediction for each point in a time series based on the points that occur before it, and would therefore give a series of predictions, whereas I want one prediction for each hour.
I appreciate any help that you can provide to help me understand this better.
Thank you.
Answer: Correct me if I am wrong, but the problem you describe here sounds like a classification problem, not time series forecasting. You just want to know what class each 1-hour block of data belongs to. If this is the case, you can try using a CNN with 1-dimensional convolutions and 3 channels. | {
"domain": "datascience.stackexchange",
"id": 6350,
"tags": "machine-learning, classification, time-series, lstm"
} |
One-hot vector for fixed vocabulary | Question: Given a vocabulary with $|V|=4$ and V = {I, want, this, cat}, for example.
How does the bag-of-words representation with this vocabulary and one-hot encoding look like regarding example sentences:
You are the dog here
I am fifty
Cat cat cat
I suppose it would look like this
$V_1 = \begin{pmatrix}
0 \\
0 \\
0 \\
0 \\
\end{pmatrix}$
$V_2 = \begin{pmatrix}
1 \\
0 \\
0 \\
0 \\
\end{pmatrix}$
$V_3=\begin{pmatrix}
0 \\
0 \\
0 \\
1 \\
\end{pmatrix}$
But what exactly is the point of this representation? Does it show the weakness of one-hot encoding with a fixed vocabulary, or did I miss something?
Answer: library(quanteda)
mytext <- c(oldtext = "I want this cat")
dtm_old <- dfm(mytext)
dtm_old
newtext <- c(newtext = "You are the dog here")
dtm_new <- dfm(newtext)
dtm_new
dtm_matched <- dfm_match(dtm_new, featnames(dtm_old))
dtm_matched
$V_1$
Document-feature matrix of: 1 document, 4 features (100.0% sparse).
features
docs i want this cat
newtext 0 0 0 0
$V_2$
Document-feature matrix of: 1 document, 4 features (75.0% sparse).
features
docs i want this cat
newtext 1 0 0 0
$V_3$
Document-feature matrix of: 1 document, 4 features (75.0% sparse).
features
docs i want this cat
newtext 0 0 0 3
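For readers not using R, the same fixed-vocabulary counting can be sketched in plain Python; replacing each count with min(count, 1) would give the strict one-hot version:

```python
VOCAB = ["i", "want", "this", "cat"]   # the fixed vocabulary V

def bow(sentence):
    """Bag-of-words counts over the fixed vocabulary; OOV words are dropped."""
    tokens = sentence.lower().split()
    return [tokens.count(word) for word in VOCAB]

print(bow("You are the dog here"))  # [0, 0, 0, 0]
print(bow("I am fifty"))            # [1, 0, 0, 0]
print(bow("Cat cat cat"))           # [0, 0, 0, 3]
```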
Of course when using a "one hot" vectorizer, "cat" in $V_3$ would be 1 (instead of the count). | {
"domain": "datascience.stackexchange",
"id": 8635,
"tags": "one-hot-encoding, bag-of-words"
} |
Difference between infinite state machines and Turing machines | Question: Finite state machines (FSM) are strictly less powerful than Turing machines (TM). But this is not the case with infinite state machines (ISM). For example, every TM can be embedded into some ISM. The opposite (that for every ISM there exists a TM that can be embedded within the ISM), however, is not true. We can construct a counterexample from any FSM by adding an infinite number of states and no transitions.
I have two questions:
Are all ISMs equivalent to either a FSM or TM? (For example, does there exist an ISM that can recognize a context-free grammar, but nothing more powerful?)
Is there an algorithm for determining how powerful an ISM is?
EDIT: If such an algorithm doesn't exist, are there any reasonable heuristics or rules of thumb?
Answer:
A pushdown automaton is a particular case of ISM, so it is possible to recognize a context-free language with ISM.
How would you give your ISM to the algorithm? Anyway, a lot of problems are already undecidable for Pushdown Automata (a particular case of ISM), like equivalence or universality, so I guess the answer would be no for any reasonable question here. | {
"domain": "cstheory.stackexchange",
"id": 2471,
"tags": "automata-theory, turing-machines"
} |
Compressing saturated steam | Question: If you have a saturated steam (quality = 1.0) volume that is compressed adiabatically, will the steam condense to follow the saturation curve or superheat?
I found in an old textbook: The Principles of Thermodynamics (1899) on Google the following passage:
For compression, adiabatic:
If we start with pure saturated steam, without admixture of water, it will be superheated by compression.
If the initial steam weight is greater than that of the water, steam is generated by the compression.
If there is more water than steam, steam is condensed during compression.
I am still at a bit of a loss as to why this makes physical sense and how it is properly captured in an energy balance (i.e., $\frac{\text{d}U}{\text{d}t} = -p\frac{\text{d}V}{\text{d}t}$).
Answer: An ideal adiabatic compression of steam is a reversible process. This means that the state of the fluid will follow lines of constant entropy. An alternative view is that this is an ideal turbine in reverse, where the initial assumption is $ds = 0$ (adiabatic expansion in reverse).
For the T-s diagram constant entropy will put you at steam or liquid depending on the condition of the steam-water.
If x > 0.5 then compression will lead to steam
If x < 0.5 then compression will lead to liquid
If x ~ 0.5 then compression will stay at a roughly 50/50 mixture until the critical point is reached (the maximum of the saturation curve).
Or for the Mollier P-h diagram. If you start at "B", compressing the fluid will follow the red line and go to superheated steam. | {
"domain": "engineering.stackexchange",
"id": 1067,
"tags": "thermodynamics"
} |
Why does the charge density at the centre of a nucleus decrease with increasing mass number? | Question: I have been learning about the Woods-Saxon approximation for charge distribution in the nucleus; it is given by
$$\rho_{ch}(r)=\frac{\rho_{ch}^0}{1+e^{(r-a)/b}}$$
The value of $\rho_{ch}^0$ decreases with increasing mass number A, but it is not immediately clear to me why; the distribution is subject to a normalisation condition, but it is
$$\int \rho_{ch}(\textbf{r}) \text{d}^3\textbf{r}=Ze$$
so it is not that the charge distribution is normalised by dividing by the total charge of the nucleus, as I initially thought. Is there some physical explanation for why $\rho_{ch}^0$ decreases with mass number?
Answer: For light nuclei, up to, say, neon ($Z=10$) the numbers of neutrons and protons in the stable isotopes are roughly equal. As $Z$ increases the ratio of neutrons to protons increases, reaching about 1.5 for lead ($Z=82$).
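This can be checked numerically from the normalisation condition alone. Assuming the common empirical parameterisation $a \approx 1.2\,A^{1/3}$ fm and $b \approx 0.5$ fm (illustrative values, not fitted to data), the central density forced by $4\pi \int r^2 \rho_{ch}(r)\,\text{d}r = Ze$ falls from oxygen-16 to lead-208 as $N/Z$ grows:

```python
import math

def rho0(Z, A, r0=1.2, b=0.5, rmax=20.0, dr=0.001):
    """Central charge density (units of e/fm^3) fixed by the normalisation
    4*pi * integral of r^2 / (1 + exp((r - a)/b)) dr = Z, with a = r0 * A**(1/3)."""
    a = r0 * A ** (1.0 / 3.0)
    integral, r = 0.0, dr / 2.0
    while r < rmax:   # midpoint rule out to rmax femtometres
        integral += 4.0 * math.pi * r * r / (1.0 + math.exp((r - a) / b)) * dr
        r += dr
    return Z / integral

print(rho0(8, 16), rho0(82, 208))   # the second (heavier) value is smaller
```

With these assumed parameters both densities come out near 0.05 e/fm³, with lead lower than oxygen, mirroring the trend the question asks about.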
Treating the nucleons in nuclei naively as close-packed spheres of equal radius, the nuclear volume will be proportional to $A=Z+N$, whereas the charge is proportional to $Z$. The mean nuclear charge density is therefore proportional to $\frac ZA =\frac1{1+\tfrac NZ}$, so as $Z$ increases and $\frac NZ$ increases, the nuclear charge density decreases. | {
"domain": "physics.stackexchange",
"id": 86889,
"tags": "nuclear-physics, charge, atomic-physics"
} |
Best fuel for supercar engines | Question: To experienced supercar owners/engineers:
What is the best fuel for a turbocharged 6 liter V8, 7.6 liter V10, or 9.2 liter V12 (Audi R8 or Lamborghini Aventador type cars) in terms of speed/horsepower efficiency and minimization of wear on the engine? Mileage efficiency can be discarded (it can do 4 gallons per mile... not a problem :) ).
Answer: Use the fuel specified by the manufacturer in the car's manual and/or near the gas cap. Likely a High Octane Fuel based on your question.
If you're referring to Octane levels... The higher the Octane, the harder it is to ignite the fuel. In the case of High Compression Engines (typically found in sports cars like those you mention in your question), lower octane fuel can detonate on the compression stroke before the piston actually reaches the point where the spark plug fires. This is not good. Higher octane fuels will not pre-detonate under higher compression.
Newer engines can handle lower octane, but performance will suffer.
You can read more here: A bit about early detonation | {
"domain": "engineering.stackexchange",
"id": 959,
"tags": "energy-efficiency, fuel-economy"
} |
outdoor autonomous robot | Question: I have searched on Google and found some sensors that I could use, such as sonar, gyroscope, accelerometer, magnetometer, GPS and 2D lidar (which is less expensive, around $300).
I have a cost budget, so I couldn't go for the expensive devices.
I'm new to robotics, so I want to ask for some recommendations.
I want to build a robot which satisfies the following features:
It works outdoors. Look at the image above, with some description here:
blue line is the border.
red line is the path.
black line is the building.
When the robot is at point A and I input point B, the robot should be able to find a path from point A to B.
If point X is blocked, it should be able to find a way to point B via point Y.
The ground is not flat, so the robot may need to sense higher ground levels to avoid collisions and recalculate its position.
It works day and night.
Therefore, my questions are:
What is the best device/sensor I should get to satisfy the features above?
Or is there any technique I could use?
Please share with me; maybe there is something I missed in my research.
GPS is good, but the error is around 6 m; that is too large.
Fusion of gyroscope, accelerometer and magnetometer to get a correct position and direction is quite difficult (oblu, maybe).
Findings:
I found two devices that could help me with that:
ZED stereo camera, which works from 0.5 m up to 20 m ($499).
Intel RealSense 435i, which works from 0.1 m up to 10 m with IR ($200).
Neither works well in dark areas, even with IR, but I think lighting could be installed to help with that.
Any other device recommendations?
Answer: This is a broad topic you've opened up here. My contribution:
It all comes down to the robot specs. For example, is it a wheeled robot or a legged one? What size of terrain shall it explore: on the scale of meters, centimeters, tens of meters? I think you should write down all the questions you can think of; that will help you specify your sensor options.
I think you need a redundant sensor system. For adequate precision it is inevitable. Multiple sensors that are inaccurate on their own can be combined to give accurate results. Also, your sensor data and control data should be piped through some sort of filtering algorithm to increase performance. It adds complexity, but that is the tradeoff for accuracy.
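As an illustration of such a filter, here is a minimal 1D Kalman filter fusing a drifting odometry-like increment stream with coarse GPS-like position fixes. All noise levels and the motion model below are made up for the sketch:

```python
import random

def kalman_1d(odometry_steps, gps_fixes, q=0.05, r=9.0):
    """Minimal 1D Kalman filter: predict with odometry increments, correct
    with GPS-like absolute fixes. q = process noise variance, r = measurement
    noise variance; both values are assumptions of this sketch."""
    x, p = 0.0, 1.0                     # state estimate and its variance
    estimates = []
    for u, z in zip(odometry_steps, gps_fixes):
        x, p = x + u, p + q             # predict
        k = p / (p + r)                 # Kalman gain
        x, p = x + k * (z - x), (1.0 - k) * p   # correct
        estimates.append(x)
    return estimates

random.seed(0)
true_pos = [float(i) for i in range(1, 101)]            # robot moves 1 m per step
odo = [1.0 + random.gauss(0.0, 0.1) for _ in true_pos]  # drifting odometry steps
gps = [t + random.gauss(0.0, 3.0) for t in true_pos]    # coarse GPS fixes (~3 m noise)
est = kalman_1d(odo, gps)

gps_err = sum(abs(z - t) for z, t in zip(gps, true_pos)) / len(true_pos)
kf_err = sum(abs(e - t) for e, t in zip(est, true_pos)) / len(true_pos)
print(round(gps_err, 2), round(kf_err, 2))
```

With these (arbitrary) noise settings, the fused estimate ends up with a noticeably smaller mean error than the raw GPS fixes alone, which is the whole point of combining individually inaccurate sensors.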
You could, for example, use a nice IMU sensor; it provides 3D accelerometer, gyroscope and compass data, which you can use to estimate your relative position, speed and acceleration.
Are you using a wheeled robot? You could use the wheel rotation (odometry) to determine the relative path your robot has travelled. It may be quite inaccurate, given the terrain, wheel skidding, etc.
If you have the two above, you can also add an IR or ultrasound sensor to increase precision.
You can still use a cheap mono camera to estimate speed and direction.
Also important is how you utilize your sensor data. Building a good map from the sensor data could help a lot in navigation, together with software that does some extra analysis of the shapes on the map.
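To make the map-plus-planning idea concrete, here is a tiny breadth-first path planner on a hypothetical occupancy grid (a real system would use a costmap and A* or similar). If the direct route is blocked, the returned path detours through the gap, like going to point B via point Y:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (1 = blocked).
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:   # walk back to the start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# A at (0,0), B at (2,4); the direct corridor is walled off except one gap.
grid = [
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0],   # wall with a gap on the right (the "via point Y")
    [0, 0, 0, 0, 0],
]
path = bfs_path(grid, (0, 0), (2, 4))
print(path)
```

The same planner re-run on an updated grid (after a sensor marks a cell as blocked) gives the replanning behaviour the question asks for.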
Probably most of the things above were already known to you; what I tried to show here is that if you want to achieve high, or even just adequate, performance, you can't get away with a single mighty sensor.
As a concrete example: my master's work was a multilegged robot with a single sweeping ultrasound sensor. It had an algorithm that built up a map based on the sensor input and ran the analysis on it. It actually solved relatively complex static and dynamic maze problems on the real robot hardware. But its accuracy was flawed, because with a single sensor the calculated position of a legged robot accumulates error that cannot be corrected against a measured position without additional sensors.
I hope I can at least give you some ideas of thinking. | {
"domain": "robotics.stackexchange",
"id": 1835,
"tags": "mobile-robot, sensors, localization, sensor-fusion"
} |
How many times can we recycle one sheet of paper? | Question: It's really interesting to see the process of recycling paper (as shown in the picture below), but at the same time I was wondering: how many times can one sheet of paper be recycled?
Answer: Well, the answer depends on the type of paper used, how much is wasted, and the process used. Below is an article that might be helpful in answering the question:
Some industry sources estimate that an ordinary sheet of paper made from cellulose fibers derived from wood can survive only four to six trips through the recycling process. The Environmental Protection Agency puts the figure at five to seven times.
It is not surprising that the rigors of remanufacturing take a toll on the fibers.
Ideally, paper for recycling is separated into types, because paper with long fibers, like white office paper, offers the most flexibility for recycling, while newsprint, with its shorter fibers, is usually reserved for making more newsprint and other low-quality papers.
The paper is shredded and chopped, then subjected to a mixture of chemicals and water and heated as it is repulped. It is centrifuged and screened to remove impurities; de-inked with more chemicals; then sprayed onto a wire screen, drained, dried and squeezed through heated rollers.
With each step, the fibers become shorter, coarser and stiffer, so that eventually, recycled fiber needs to be mixed with virgin fiber to make paper of the desired quality.
According to the American Forest and Paper Association, 63.4 percent of the paper consumed in the United States was recovered for recycling in 2009. C. CLAIBORNE RAY
SOURCES: New York Times - http://www.nytimes.com/2010/12/21/science/21qna.html?_r=0 | {
"domain": "chemistry.stackexchange",
"id": 3780,
"tags": "everyday-chemistry, cleaning"
} |
Why is the opening in the Anglo-Australian Telescope's dome so small? | Question: Many older or "classic" telescope domes have a horizon-to-zenith opening in the dome, and this helps speed up the thermal equilibration between the inside and outside air, decreasing turbulence and its effects upon resolution.
Source
Source
But the Anglo-Australian Telescope has a second mechanism that tightly constrains the vertical extent of the opening as well, leaving just a tiny hole barely big enough for the telescope to see the sky.
What are the benefits of each of these, and how do the minimal aperture domes deal with temperature differences?
below: Screen shot from the video A 2dF night at the Anglo-Australian Telescope found here.
Answer: This is a two-part windscreen designed to minimize the effects of windshake on the telescope and to avoid the deterioration in image quality that wind would cause. The AAT is in a tall six-story dome on a pretty exposed part of Siding Spring Mountain and so is likely more affected by wind gusts. Initially there were issues with the mount being too flexible and having a lower resonant frequency than expected, which was in the range that can be excited by wind buffeting (this is mentioned in the bio of Harry Minnett, a CSIRO engineer who worked on the AAT). This may have led to the more extensive windscreen, but windscreens, particularly upper ones, are fairly common and exist at several other observatories. As an example, the pointing limits page of the Isaac Newton Telescope discusses the sky area that is available with and without lowering the upper windshield.
"domain": "astronomy.stackexchange",
"id": 3427,
"tags": "observatory, angular-resolution, weather, atmospheric-effects"
} |
How does the Italian Percolator work? | Question: This morning, as with every morning, I had my coffee. However, today it was burnt, because I slightly overfilled the water. I use an Italian Percolator on a gas top.
Normally, you put it on to boil, and once you hear the water bubbling, the coffee is ready, and chamber C will be full of coffee.
As you can see, you fill chamber A with water, loosely fill B with coffee, and the water then rises through B, up through the red pipe, and flows over into C. However, as the picture shows, there is a small valve on A. If you fill the water above this valve, you get horrible coffee. When you hear it bubbling, very little coffee has risen into chamber C, and you have to wait a long time for it to do this, while water also escapes from between chambers B and C (which screw together).
Why does this happen? I cannot think of why there should be a valve to release what is presumably steam. Surely the machine should work regardless? Why does the steam have to be released for the mechanism to work properly - is it linked to the pressure of the system?
Answer: I have never used one, but I read about it on the Italian Percolator (Moka Pot) Wikipedia Page.
The valve is a pressure relief, just like on a pressure cooker; it is for safety and probably not involved in the burnt result. You may want to clean it with some vinegar to be sure it is in good operating condition, but it is probably fine, since your other brews have been turning out well.
Further down the wiki page it says:
When the lower chamber is almost empty, bubbles of steam mix with the upstreaming water, producing a characteristic gurgling noise. This "strombolian phase" allows a mixture of superheated steam and water to pass through the coffee, which leads to undesirable results, and therefore brewing should be stopped as soon as this stage is reached.
Perhaps filling the water higher changes your timing, and higher temperature steam is reaching your coffee grounds.
If I had to guess, without your anecdote, I would say that for the same amount of time an overfilled pot would give a more dilute, lower-temperature brew, and an underfilled pot a dark, burnt brew.
You may have to time yourself, sacrifice a few cups, and/or get the thermocouple meter out ;-) Good Luck! | {
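To put a rough number on the "higher temperature steam" idea: water's boiling point rises with pressure, which a sketch using the Antoine equation shows. The constants below are standard values for water, strictly valid only up to about 100 C, so treat the figure at 1.5 atm as indicative rather than exact:

```python
import math

# Antoine equation for water: log10(P_mmHg) = A - B / (C + T_degC).
# These constants (valid roughly 1-100 C, stretched slightly here) are
# assumptions of this sketch.
A, B, C = 8.07131, 1730.63, 233.426

def saturation_temp_c(p_atm):
    """Boiling temperature of water (deg C) at the given absolute pressure."""
    p_mmhg = p_atm * 760.0
    return B / (A - math.log10(p_mmhg)) - C

print(saturation_temp_c(1.0))   # about 100 C at sea level
print(saturation_temp_c(1.5))   # noticeably hotter once the pot pressurises
```

Even a modest half-atmosphere of extra pressure pushes the water past 110 C before it reaches the grounds, which is consistent with overextraction and a burnt taste.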
"domain": "engineering.stackexchange",
"id": 1036,
"tags": "heat-transfer, pressure, heating-systems"
} |
How can I fix "Unable to handle 'index' format version '2', please update rosdistro" | Question:
When I run some tools I get an error like this:
... Unable to handle 'index' format version '2', please update rosdistro ...
For example rosdep update gives this:
ERROR: Rosdep experienced an error: Unable to handle 'index' format version '2', please update rosdistro
Please go to the rosdep page [1] and file a bug report with the stack trace below.
[1] : http://www.ros.org/wiki/rosdep
rosdep version: 0.10.24
Traceback (most recent call last):
File "/usr/lib/pymodules/python2.7/rosdep2/main.py", line 121, in rosdep_main
exit_code = _rosdep_main(args)
File "/usr/lib/pymodules/python2.7/rosdep2/main.py", line 264, in _rosdep_main
return _no_args_handler(command, parser, options, args)
File "/usr/lib/pymodules/python2.7/rosdep2/main.py", line 272, in _no_args_handler
return command_handlers[command](options)
File "/usr/lib/pymodules/python2.7/rosdep2/main.py", line 437, in command_update
error_handler=update_error_handler)
File "/usr/lib/pymodules/python2.7/rosdep2/sources_list.py", line 433, in update_sources_list
for d, dist in get_index().distributions.items():
File "/usr/lib/pymodules/python2.7/rosdep2/rosdistrohelper.py", line 58, in get_index
_RDCache.index = rosdistro.get_index(_RDCache.index_url)
File "/usr/lib/pymodules/python2.7/rosdistro/__init__.py", line 109, in get_index
return Index(data, base_url)
File "/usr/lib/pymodules/python2.7/rosdistro/index.py", line 50, in __init__
assert int(data['version']) == 1, "Unable to handle '%s' format version '%d', please update rosdistro" % (Index._type, int(data['version']))
AssertionError: Unable to handle 'index' format version '2', please update rosdistro
Also bloom-release ... gives this:
Traceback (most recent call last):
File "/usr/bin/bloom-release", line 9, in <module>
load_entry_point('bloom==0.4.4', 'console_scripts', 'bloom-release')()
File "/usr/lib/pymodules/python2.7/bloom/commands/release.py", line 797, in main
args.new_track, not args.non_interactive, args.pretend)
File "/usr/lib/pymodules/python2.7/bloom/commands/release.py", line 590, in perform_release
release_repo = get_release_repo(repository, distro)
File "/usr/lib/pymodules/python2.7/bloom/commands/release.py", line 207, in get_release_repo
url = get_repo_uri(repository, distro)
File "/usr/lib/pymodules/python2.7/bloom/commands/release.py", line 179, in get_repo_uri
release_file = get_release_file(distro)
File "/usr/lib/pymodules/python2.7/bloom/commands/release.py", line 157, in get_release_file
_rosdistro_release_files[distro] = rosdistro.get_release_file(get_index(), distro)
File "/usr/lib/pymodules/python2.7/bloom/commands/release.py", line 150, in get_index
_rosdistro_index = rosdistro.get_index(rosdistro.get_index_url())
File "/usr/lib/pymodules/python2.7/rosdistro/__init__.py", line 109, in get_index
return Index(data, base_url)
File "/usr/lib/pymodules/python2.7/rosdistro/index.py", line 50, in __init__
assert int(data['version']) == 1, "Unable to handle '%s' format version '%d', please update rosdistro" % (Index._type, int(data['version']))
AssertionError: Unable to handle 'index' format version '2', please update rosdistro
This version of bloom is '0.4.4', but the newest available version is '0.4.7'. Please update.
roslocate info <pkg> --distro hydro also produces warnings:
# rosdistro.get_cached_release() has been deprecated in favor of the new function rosdistro.get_cached_distribution() - please check that you have the latest versions of the Python tools (e.g. on Ubuntu/Debian use: sudo apt-get update && sudo apt-get install --only-upgrade python-bloom python-rosdep python-rosinstall python-rosinstall-generator)
# rosdistro.get_release_cache() has been deprecated in favor of the new function rosdistro.get_distribution_cache() - please check that you have the latest versions of the Python tools (e.g. on Ubuntu/Debian use: sudo apt-get update && sudo apt-get install --only-upgrade python-bloom python-rosdep python-rosinstall python-rosinstall-generator)
# rosdistro.get_source_file() has been deprecated in favor of the new function rosdistro.get_distribution_file() - please check that you have the latest versions of the Python tools (e.g. on Ubuntu/Debian use: sudo apt-get update && sudo apt-get install --only-upgrade python-bloom python-rosdep python-rosinstall python-rosinstall-generator)
How can I resolve these errors and warnings?
Originally posted by William on ROS Answers with karma: 17335 on 2014-01-24
Post score: 4
Answer:
These errors are occurring because the files which describe the packages which are released into ROS distributions were changed as part of the REP-0141 (http://www.ros.org/reps/rep-0141.html) roll out. These files are hosted for the ROS community on github.com (https://github.com/ros/rosdistro), and are accessed via a python library called rosdistro (https://github.com/ros-infrastructure/rosdistro).
In order to resolve these errors you must at least upgrade the Python library rosdistro to a version greater than or equal to 0.3.0. However, it is highly recommended to update all of the tools which directly use rosdistro as well. Upgrade instructions differ between Ubuntu/Debian and other systems; only use the Ubuntu instructions on Ubuntu.
For Ubuntu and Debian
On Ubuntu and Debian you can upgrade these packages by running these apt-get commands in the terminal:
sudo apt-get update
sudo apt-get install --only-upgrade python-rosdistro python-rosdep python-rosinstall python-rosinstall-generator python-bloom
The above commands will first fetch a fresh list of available packages and then upgrade any of the listed packages, but only if they are already installed. For example, if you do not have python-bloom currently installed, this command will not install it.
For Other Systems
For non-Debian-based systems you can use Python's pip tool to install upgrades. If you have already installed one of the affected packages with pip, then you can upgrade it with this command:
# Do not run this on Ubuntu, it can cause you to not get updates automatically from apt-get
sudo pip install --upgrade <package_name>
Replace <package_name> above with the package you want to upgrade. These are the affected packages and the versions they should be at:
rosdistro >= 0.3.0
rosdep >= 0.10.25
rosinstall >= 0.7.2
rosinstall_generator >= 0.1.6
bloom >= 0.4.7
You can see which packages you have installed with pip freeze.
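A quick way to compare what pip freeze reports against the minimum versions listed above is a simple numeric tuple comparison. This is only a sketch for plain x.y.z versions; real tooling would use pkg_resources or the packaging module:

```python
def version_tuple(v):
    """'0.10.25' -> (0, 10, 25); enough for the simple x.y.z versions above.
    Plain string comparison would get '0.10.24' < '0.3.0' wrong."""
    return tuple(int(part) for part in v.split("."))

# Minimum versions from the list above.
MINIMUM = {
    "rosdistro": "0.3.0",
    "rosdep": "0.10.25",
    "rosinstall": "0.7.2",
    "rosinstall_generator": "0.1.6",
    "bloom": "0.4.7",
}

def needs_upgrade(package, installed):
    return version_tuple(installed) < version_tuple(MINIMUM[package])

# The rosdep version from the traceback above (0.10.24) is too old:
print(needs_upgrade("rosdep", "0.10.24"))
print(needs_upgrade("bloom", "0.4.7"))
```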
If you have trouble installing bloom from pip with this error or a similar one:
% sudo pip install --upgrade bloom
Requirement already up-to-date: bloom in /Library/Python/2.7/site-packages
Requirement already up-to-date: catkin-pkg>=0.1.14 in /Library/Python/2.7/site-packages (from bloom)
Requirement already up-to-date: distribute in /Library/Python/2.7/site-packages (from bloom)
Could not find any downloads that satisfy the requirement empy in /Library/Python/2.7/site-packages (from bloom)
Some externally hosted files were ignored (use --allow-external empy to allow).
Downloading/unpacking empy (from bloom)
Cleaning up...
No distributions at all found for empy in /Library/Python/2.7/site-packages (from bloom)
Storing debug log for failure in /Users/william/Library/Logs/pip.log
Then you will have to add the --allow-external empy option to your pip command.
Originally posted by William with karma: 17335 on 2014-01-24
This answer was ACCEPTED on the original site
Post score: 8
Original comments
Comment by Kamiccolo on 2014-01-27:
Apparently after recent Ubuntu 12.04 ros-groovy-* amd64 updates, packages are broken in the repository ( http://packages.ros.org precise/main amd64 Packages ). Or just something doesn't update automagically.
Comment by tfoote on 2014-01-27:
@Kamiccolo The repositories for precise amd64 should be up to date. Please open a new question with your errors.
Comment by AndrewLawson on 2014-02-04:
I had the same problem as William so I tried what you suggested, but it didn't fix it; I still get the same error.
Comment by AndrewLawson on 2014-02-04:
Got it. I'm running Ubuntu, but I tried using pip to update rosdistro instead. Worked perfectly.
Comment by tfoote on 2014-02-04:
If you have installed things from pip you need to upgrade via pip. It is our recommendation not to install via pip and use apt so that it will automatically update with everything else. http://answers.ros.org/question/49143/problems-with-rqt-groovy-ubuntu/?answer=49153#post-id-49153 has a good description.
Comment by mfran89 on 2014-02-06:
Thanks for help !
Comment by Ans on 2014-05-13:
Im running Ubuntu Core 13.04, on a Udoo, and trying to install ROS following this guide http://wiki.ros.org/hydro/Installation/UDOO
I'm still having the same problem with "update rosdep" except i've rosdep v. 0.10.21.
Please help, I've been stuck here for a day now!
Comment by tfoote on 2014-05-13:
@Ans please ask your own question and show the output of all the above commands so we can help you. | {
"domain": "robotics.stackexchange",
"id": 16764,
"tags": "ros, rosdep, bloom-release, rosdistro, rosinstall"
} |
Why does selenophene not undergo aromatic substitution? | Question: Other derivatives like thiophene and pyrrole undergo electrophilic substitution, but why not selenophene? Is it because selenophene does not have a strong enough electron-donating effect and does not have its electrons in conjugation, making the ring anti-aromatic?
Answer: The Se atom has such diffuse valence orbitals compared with carbon that there is not good pi overlap, so we don't really have a conjugated cycle. By definition when there isn't a conjugated cycle, it's "not aromatic", not anti-aromatic or non-aromatic as they both require cyclic conjugation. Words involving 'aromatic' are tricky. | {
"domain": "chemistry.stackexchange",
"id": 5281,
"tags": "organic-chemistry, aromatic-compounds, reactivity, heterocyclic-compounds"
} |
openni_tracker_clean_transformation | Question:
Dear all.
Just wondering whether it might be possible to clean the last transformation from a "New User N".
In other words, when I stand in front of the Xtion Pro live I get the transformation from "/openni_depth_frame" to "/right_hip_1" then I move away from the sensor, but it keeps giving me the last transformation. And the function
listener.waitForTransform("/openni_depth_frame", "/right_hip_1", right_hip_point_1.header.stamp, ros::Duration(3))
keeps returning true, and I can't use transformations from other new users.
In advance I appreciate your help.
Originally posted by acp on ROS Answers with karma: 556 on 2013-06-11
Post score: 0
Answer:
I am not using Xtion Pro, but for kinect, but it is the same openni_tracker.
I would say that in fact you are making two mistakes:
as soon as you leave the frame, the transformation "/openni_depth_frame" to "/right_hip_1" disappears in almost no time (less than two seconds). However, if you are using RViz, there is a parameter you can change: in RViz, in the Displays panel, under TF you have "Frame Timeout". Try setting this to 0, and you will no longer see the transforms as soon as you leave the frame.
The second mistake concerns your use of listener.waitForTransform: as you can see in this tutorial, the third argument is the moment at which you want to know what the transform was. Here, it seems that you always ask for a transform in the past, at the last moment you found the right hip, thus leading to a seemingly always-existing transform. Try replacing right_hip_point_1.header.stamp with ros::Time::now(), and you should not have the problem anymore.
The combination of the two thus leads you to think the transform is still there after you left the frame, whereas in fact it is not :-)
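The stamp-versus-now point can be mimicked without ROS at all. The toy buffer below only records publication times, which is a gross simplification of how tf actually interpolates, but it shows why querying at the stamp of the last detection always succeeds while querying at the current time fails once the user is gone:

```python
# Toy model (no ROS needed) of why waitForTransform(..., old_stamp, ...)
# keeps succeeding: tf answers "was there a transform at time t?", so asking
# at the stamp of the *last* detection always succeeds.
BUFFER_LENGTH = 10.0   # tf keeps roughly 10 s of history (assumed here)

class ToyTfBuffer:
    def __init__(self):
        self.stamps = []           # times at which a transform was published

    def publish(self, t):
        self.stamps.append(t)

    def can_transform(self, query_time, slop=0.1):
        """True if a transform exists near query_time and is still buffered."""
        return any(abs(s - query_time) <= slop for s in self.stamps
                   if query_time - s <= BUFFER_LENGTH)

buf = ToyTfBuffer()
for t in range(5):                 # user tracked from t = 0..4, then walks away
    buf.publish(float(t))

last_stamp = 4.0
now = 8.0                          # 4 s after the user left
print(buf.can_transform(last_stamp))   # the old stamp always matches
print(buf.can_transform(now))          # no fresh transform exists
```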
Let me know,
Bests regards,
Steph
Originally posted by Stephane.M with karma: 1304 on 2013-06-11
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by acp on 2013-06-11:
Hi Step, thanks for your answer, I have tried to change the parameter 'Frame Timeout' under 'TF' to zero, but for some reason rviz does not allow me, it always end up with 1.
Comment by Stephane.M on 2013-06-11:
Ok, I always put 1 also :-S Sorry for the mistake ^^ By the way, is it "normal" now ? If yes, please mark my answer as correct. If not, let me know the remaining problems (edit your question with a [EDIT] tag, so I can also modify my answer) Bests
Comment by acp on 2013-06-11:
Hi Steph, well, some how, I can not use the edit tag. But it seems to be that it is working now.
Comment by acp on 2013-06-12:
I have a question, what is the best to track, the hip, shoulder etc....thanx :)
Comment by Stephane.M on 2013-06-12:
If my answer solved the problem, mark it as the correct answer please.
Then, for another question, open a new question, never ask a question as an answer or comments ;-)
Comment by acp on 2013-06-13:
I think I have mark it as correct question, if not where can i do that? :)
Comment by Stephane.M on 2013-06-13:
Near my post, on the left, there are two arrows, with a "1" between. And just under a "v" inside a grey circle, you click there to mark my answer as correct :-) | {
"domain": "robotics.stackexchange",
"id": 14512,
"tags": "ros"
} |
Are rates a scalar, a vector or both? | Question: Are all rates in physics a scalar, a vector or both?
It seem to me like all rates in science are vectors.
Examples of rate that are vectors are rate of charge flow, rate of heat transfer, rate of mass flow, rate of change of displacement and rate of water entering a water tank (+ve for water entering the tank and -ve for water leaving the tank).
Are there rates in science that are scalar?
Thanks
Answer: A rate is the same shape as the underlying quantity.
For example, the rate of change of position (which is a vector) is the velocity vector.
But the rate of temperature change (which is a scalar) is also a scalar.
Derivatives and integrals with respect to a scalar, like time, do not change the shape of a quantity.
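A small finite-difference sketch makes the point: differentiating with respect to time leaves a vector a vector and a scalar a scalar. The `rate` helper below is hypothetical, written just for this illustration:

```python
def rate(f, t, h=1e-6):
    """Finite-difference rate of change of f at time t; the result has the
    same shape (scalar or vector) as f's value."""
    f0, f1 = f(t), f(t + h)
    if isinstance(f0, tuple):                      # vector quantity
        return tuple((b - a) / h for a, b in zip(f0, f1))
    return (f1 - f0) / h                           # scalar quantity

position = lambda t: (3.0 * t, 2.0 * t, 0.0)       # vector: rate is velocity
temperature = lambda t: 20.0 + 0.5 * t             # scalar: rate is a scalar

v = rate(position, 1.0)       # a 3-component velocity vector
dTdt = rate(temperature, 1.0) # a plain number, deg per unit time
print(v, dTdt)
```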
"domain": "physics.stackexchange",
"id": 95046,
"tags": "differential-geometry, vectors, definition, differentiation, vector-fields"
} |
Charge Distribution - Trefoil | Question: I'm trying to create a charge distribution in 3d in the shape of a trefoil knot. The trefoil has the following parametric equations.
$$x = \sin\theta + 2\sin2\theta
\\y = \cos\theta - 2\cos2\theta
\\ z = -\sin3\theta $$
I want to create $\rho(r)$ in standard cartesian coordinates. Using standard Dirac delta functions I believe the following equation is generally correct.
$$\rho(r') = \delta(x' - (\sin\theta + 2\sin2\theta) )\delta(y'-(\cos\theta - 2\cos2\theta))\delta(z'-(-\sin3\theta))$$
However I'm getting a little tripped up with the constant value I should be placing in front of my three delta functions. I know I should have something like $Q/4\pi\epsilon_{0}$ but looking at other examples online has me a little bit confused. Since I'm in cartesian coordinates I don't have to include a Jacobian term from a coordinate change do I? As well any clarification about $Q$ and $q$ in terms of their meanings with regards to charge would be helpful. Right now I think one is the average charge and the lower case is a discrete charge, but I'm a little unclear as to when I use them. I think since I have a continuous distribution I use $Q$ but again I'm not sure. Any clarification would be much appreciated!
Answer: There are a few issues to unpack here.
First, note that what you have defined is not really the charge density. Let's write out this quantity explicitly:
\begin{equation}
\hat{\rho}(x',y',z', \theta) = \lambda \delta(x'-x(\theta)) \delta(y'-y(\theta)) \delta(z'-z(\theta)),
\end{equation}
Note that I have defined $\hat\rho$ with an explicit dependence on $\theta$. This isn't actually really the charge density, $\rho$. A fixed value of $\theta$ is a single mathematical point on the curve. If the total charge of the whole curve is finite, then the charge at any one infinitesimally small point is actually zero. The total charge density should really "sum" the charge densities from every point on the curve. I put "sum" in scare quotes because the curve is continuous, so we really should integrate. The actual charge density (assuming the charge is uniformly distributed with $\theta$ -- see the footnote below for more on this) is$^\star$
\begin{equation}
\rho(x', y', z') = \int_0^{2\pi} {\rm d} \theta \hat{\rho}(x', y', z', \theta) = \lambda \int_0^{2\pi} {\rm d} \theta \delta(x'-x(\theta)) \delta(y'-y(\theta)) \delta(z'-z(\theta))
\end{equation}
Now, let's apply a little dimensional analysis to our expression for the charge density. $\rho$ needs to have dimensions of charge density (charge per unit volume). Meanwhile, $\delta(\alpha)$ has the same dimensions as $1/\alpha$. Since $x$, $y$, and $z$ all have units of length, the product $\delta(x'-x(\theta)) \delta(y'-y(\theta)) \delta(z'-z(\theta))$ has units of $1/({\rm length})^3$, or $1/{\rm volume}$. Note I've implicitly defined the functions $x(\theta)$, $y(\theta)$, and $z(\theta)$ to be the formulas given in your question; e.g. $z(\theta)= -\sin 3\theta$. Finally, ${\rm d} \theta$ has the same dimensions as $\theta$. Therefore, for dimensional consistency, $\lambda$ must have dimensions of charge per unit $\theta$. In general, you will need to worry about how you choose to parameterize your curve in order to fix $\lambda$. In this case, clearly $0\leq \theta < 2\pi$ is dimensionless. Therefore, $\lambda$ has units of charge.
Now we can think about how to fix $\lambda$. The total charge of a system, which I will denote as $Q_{\rm tot}$, is the volume integral of the charge density
\begin{equation}
Q_{\rm tot} \equiv \int {\rm d} x' {\rm d} y' {\rm d} z' \rho(x', y', z')
\end{equation}
Incidentally, it does not matter what symbol we use to denote this quantity. We can use $Q_{\rm tot}$, $Q$, or $q$, or any other symbol, so long as we are clear on the meaning. There is no intrinsic difference between the notation $Q$ and $q$. I will stick with $Q_{\rm tot}$ in this answer, however, for clarity.
Putting this all together,
\begin{eqnarray}
Q_{\rm tot} &=& \lambda \int_0^{2\pi} {\rm d} \theta \int {\rm d} x' {\rm d} y' {\rm d} z' \delta(x'-x(\theta)) \delta(y'-y(\theta)) \delta (z'-z(\theta)) \\
&=& \lambda \int_0^{2\pi} {\rm d} \theta\\
&=& 2\pi \lambda
\end{eqnarray}
Therefore
\begin{equation}
\lambda = \frac{Q_{\rm tot}}{2\pi}
\end{equation}
$^\star$ This step hides a very important subtlety, since it's not completely obvious what measure factor to use in the integral over $\theta$. Ultimately this requires some physical input -- how is the charge distributed on the trefoil? I'm assuming here that the charge density is uniform as a function of $\theta$, based on the way you phrased the question. However, if the charge density is uniform as a function of length, then you need to include a Jacobian factor converting from arc length to $\theta$ in this step, and carry it through consistently. Depending on how you define things, the dimensions of $\lambda$ may be charge per length, rather than charge, in this case.
In a bit more detail, first define the arc length of the curve as a function of $\theta$
\begin{equation}
L(\theta) = \int_0^\theta {\rm d}\vartheta \sqrt{\dot{x}(\vartheta)^2 + \dot{y}(\vartheta)^2 + \dot{z}(\vartheta)^2}
\end{equation}
where $\dot{x}(\theta)\equiv {\rm d}x/{\rm d}\theta$. The total length of the curve is $L_t = L(2\pi)$.
Then instead of the expression for $\rho$ above, we want to use
\begin{equation}
\rho(x',y',z') = \lambda \int_0^{2\pi} {\rm d} \theta \left|\frac{{\rm d}L}{{\rm d} \theta}\right| \hat{\rho}(x',y',z',\theta)
\end{equation}
Equivalently we could define $\hat{\rho}$ so it was a function of $L$ instead of $\theta$ (e.g. parameterize the curve by its arc length), and then define $\rho$ as an integral over $L$ (with no Jacobian factor) instead of an integral over $\theta$ (with a Jacobian factor)
\begin{equation}
\rho(x',y',z') = \lambda \int_0^{L_t} {\rm d} L \hat{\rho}(x',y',z',L)
\end{equation}
After this, one can follow the logic given in the body of the answer. I believe after turning the crank, the net result should be that $\lambda=Q_{\rm tot}/L_t$.
I believe (though I didn't check explicitly) if you pick, say, 100 uniformly spaced values of $\theta$ and plot the $x,y,z$ coordinates, you'll find that the points tend to "bunch up" near the "curvy parts" of the trefoil, and "spread out" on the "straight parts." On the other hand, if you pick 100 uniformly spaced values of $L(\theta)$ and plot the $x,y,z$ coordinates, the points will be uniformly distributed along the curve with no bunching or spreading out (by construction). Intuitively, the reason you need to worry about this Jacobian factor is to account for the difference in these discretizations. | {
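The footnote's quantities are easy to check numerically. For this trefoil, $\dot x = \cos\theta + 4\cos 2\theta$, $\dot y = -\sin\theta + 4\sin 2\theta$, $\dot z = -3\cos 3\theta$, and a midpoint-rule sum gives the total arc length $L_t$, hence $\lambda = Q_{\rm tot}/L_t$ for charge uniform in length. The script below is a sketch, not part of the original answer:

```python
import math

def speed(theta):
    """|dr/dtheta| for x = sin t + 2 sin 2t, y = cos t - 2 cos 2t, z = -sin 3t."""
    dx = math.cos(theta) + 4.0 * math.cos(2.0 * theta)
    dy = -math.sin(theta) + 4.0 * math.sin(2.0 * theta)
    dz = -3.0 * math.cos(3.0 * theta)
    return math.sqrt(dx * dx + dy * dy + dz * dz)

def total_length(n=100000):
    """Midpoint-rule estimate of L_t = integral of speed over 0..2*pi."""
    dth = 2.0 * math.pi / n
    return sum(speed((i + 0.5) * dth) for i in range(n)) * dth

Q = 1.0
L_t = total_length()
lam_uniform_in_length = Q / L_t             # charge per unit arc length
lam_uniform_in_theta = Q / (2.0 * math.pi)  # charge per unit theta
print(L_t, lam_uniform_in_length, lam_uniform_in_theta)
```

Sampling points uniformly in $\theta$ versus uniformly in $L(\theta)$ (by inverting the cumulative length) and plotting them shows the bunching near the curvy parts described above.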
"domain": "physics.stackexchange",
"id": 83536,
"tags": "electrostatics, charge, coordinate-systems, dirac-delta-distributions"
} |
Implications of $NP = \Sigma_2 P$ for PH collapse | Question: A simple fact is that $P = NP \to P = coNP$, which follows from the observation that $P$ is closed under complement.
I am having trouble seeing that an analogous statement is true at higher levels of $PH$. For example, is it known that $NP = \Sigma_2 P$ implies $NP = \Pi_2 P$? If so, is there an easy proof? Would such a statement have any other interesting implications (For example $NP = coNP$)?
It seems somewhat likely to me that this is true, based on the observation that $NP = \Sigma_2 P$ means that $\exists \forall$ quantifier patterns can be replaced with $\exists$ quantifier patterns, and so all higher levels of $PH$ would at least collapse to $\Pi_2 P$.
Answer: The answer is yes.
Notice that $NP=\Sigma_1$ and that for all $i$, $\Sigma_i=\Sigma_{i+1} \Rightarrow \Sigma_i=\Pi_i$. This is true since $\Pi_i\subseteq \Sigma_{i+1}=\Sigma_i$. The inclusion $\Pi_i\subseteq\Sigma_{i+1}$ holds since if $\bar{L}\in\Sigma_i$ then you have $x\notin L \iff \exists v_1\in \left\{0,1\right\}^{p(|x|)}\ldots Q_iv_i\in\left\{0,1\right\}^{p(|x|)}\, M(x,v_1,\ldots,v_i)=1$, or equivalently $x\in L \iff \forall v_1\in \left\{0,1\right\}^{p(|x|)}\ldots\bar{Q_i}v_i\in\left\{0,1\right\}^{p(|x|)}\, M(x,v_1,\ldots,v_i)=0$. So now your $\Sigma_{i+1}$ verifier for $L$ will ignore its first witness $v_1$, run $M(x,v_2,\ldots,v_{i+1})$, and flip the answer. It remains to show $\Sigma_i\subseteq\Pi_i$: let $L\in\Sigma_i$; then $\bar{L}\in\Pi_i\subseteq\Sigma_{i+1}$, and since $\Sigma_i=\Sigma_{i+1}$ you get $\bar{L}\in\Sigma_i$, i.e. $L\in\Pi_i$.
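The same argument, laid out as a display for readability:

```latex
% Assume \Sigma_i = \Sigma_{i+1}.  First, padding with a dummy leading
% quantifier gives
\Pi_i \;\subseteq\; \Sigma_{i+1} \;=\; \Sigma_i .
% Then, for any L,
L \in \Sigma_i
  \;\Longrightarrow\; \bar{L} \in \Pi_i \subseteq \Sigma_{i+1} = \Sigma_i
  \;\Longrightarrow\; L \in \Pi_i ,
% so together \Sigma_i = \Pi_i.
```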
Back to your question. If $NP=\Sigma_2$ (i.e. $\Sigma_1=\Sigma_2$) then $\Sigma_1=\Pi_1$. Let $L\in\Pi_2$; then $\bar{L}\in\Sigma_2\Rightarrow \bar{L}\in\Sigma_1\Rightarrow \bar{L}\in\Pi_1\Rightarrow L\in\Sigma_1$, so $L\in NP$ as required. The opposite direction is clear (for all $i$, $\Sigma_i\subseteq\Pi_{i+1}$). | {
"domain": "cs.stackexchange",
"id": 5120,
"tags": "complexity-theory, complexity-classes"
} |
Using another device instead of RC transmitter | Question: I want to make a PC-controlled quadrotor. All the tutorials/projects are made with an RC receiver. I want to use an Arduino or XBee instead of the RC receiver for PC control of the quadrotor. How can I do this?
Note: I have arduino, beaglebone, xbee, hc-05, KK2 and multiwii parts.
Answer: The input to the autopilot is a 50Hz PWM signal with pulse width varying from 1000 µs to 2000 µs (this is with APM; it should be the same with KK2). In the case of an RC transmitter this signal is modulated on a 2.4GHz channel and the PWM values are mapped to the output of a potentiometer on the remote.
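To make the 1000–2000 µs range concrete, here is a hedged Python sketch of how a PC-side sender could clamp and frame such pulse widths. The byte framing (sync byte plus big-endian shorts) is an invented convention, not any standard protocol — the sketch on the receiving microcontroller just has to agree with it:

```python
import struct

PWM_MIN, PWM_MAX = 1000, 2000   # pulse width in microseconds

def encode_channels(channels):
    """Clamp each channel to the valid pulse-width range and pack the values
    into one binary frame: a 0xA5 sync byte followed by one unsigned
    big-endian 16-bit value per channel (made-up framing convention)."""
    clamped = [min(max(int(c), PWM_MIN), PWM_MAX) for c in channels]
    return b"\xa5" + struct.pack(">%dH" % len(clamped), *clamped)

# On the PC this frame would then be written to the XBee's serial port,
# e.g. with pyserial (not run here; port name is illustrative):
#   ser = serial.Serial("/dev/ttyUSB0", 57600)
#   ser.write(encode_channels([1500, 1500, 1000, 1500]))
```

Clamping before transmission guards the autopilot against out-of-range pulses if the PC-side code misbehaves.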
Now, you can keep an Arduino on your quadrotor with an XBee, and generate the same signal using the 'Servo' library (it also generates a signal of the same frequency). And on your PC you can write a Python script to transmit PWM values through an XBee connected to your serial port. | {
"domain": "robotics.stackexchange",
"id": 908,
"tags": "pid, quadcopter"
} |
Gazebo Real Time Factor going down | Question:
Hi, I have a problem with Gazebo.
I'm running an autonomous car simulator in Gazebo, plus another node that I made for ground removal from the lidar sensor.
When I run this node the real time factor of Gazebo drops from 1 down to 0.3. I understand that this could be because of the load on the CPU, but the fact is that none of the CPU cores is at full load and the overall CPU utilization is around 20%; I can run other, heavier processes without bringing it down.
Is there a way to keep this up or force the real time factor to 1?
Thank you
Originally posted by manderino on ROS Answers with karma: 11 on 2020-05-19
Post score: 1
Answer:
Hi @manderino,
The main problem is (almost certainly) not Gazebo itself but the bottleneck generated by the point-cloud generation. I am sure you are using a ray plugin instead of a gpu_ray. The ray plugin forces Gazebo to generate the point clouds with the CPU, and that is a very slow task.
Change the Gazebo plugin to use a gpu_ray and you will maintain the real time factor near 1.
The velodyne_simulator package contains an implementation of a gpu_ray lidar plugin.
Originally posted by Weasfas with karma: 1695 on 2020-06-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 34979,
"tags": "gazebo, ros-melodic"
} |
Creating a Generic Single Linked List | Question: This follows from a post I made here. I made changes according to the answer I accepted in the following link. I tried to make all the changes necessary that I could. I just want to see if there are any other further adjustments that I need to have for this generic single linked list.
Here is my header file:
#ifndef LinkedList_hpp
#define LinkedList_hpp
#include <iostream>
template<class T>
class SingleLinkedList {
private:
template<class S>
struct Node {
S data;
Node<S>* next;
};
Node<T>* head;
Node<T>* tail;
public:
SingleLinkedList() : head(nullptr), tail(nullptr) {}
SingleLinkedList(SingleLinkedList const& value) : head(nullptr), tail(nullptr) {
for(Node<T>* loop = value->head; loop != nullptr; loop = loop->next) {
createNode(loop->data);
}
}
SingleLinkedList& operator=(SingleLinkedList const& rhs) { SingleLinkedList copy(rhs);}
void swap(SingleLinkedList& other) noexcept {
using std::swap;
swap(head, other.head);
swap(tail, other.tail);
}
~SingleLinkedList(){
Node<T>* nodePtr = head;
while(nodePtr != nullptr) {
Node<T>* nextNode = nodePtr->next;
nodePtr = nextNode;
delete nodePtr;
}
}
void createNode(const T&& theData) {
Node<T>* temp = new Node<T>;
temp->data = std::move(theData);
temp->next = nullptr;
if(head == nullptr) {
head = temp;
tail = temp;
temp = nullptr;
}
else {
tail->next = temp;
tail = temp;
}
}
void display(std::ostream& str = std::cout) {
Node<T>* temp = head;
while(temp != nullptr) {
str << temp->data << "\t";
temp = temp->next;
}
delete temp;
}
void insert_start(const T& theData) {
Node<T>* temp = new Node<T>;
temp->data = theData;
temp->next = head;
head = temp;
delete temp;
}
void insert_position(int pos, const T& theData) {
Node<T>* previous = new Node<T>;
Node<T>* current = new Node<T>;
Node<T>* temp = new Node<T>;
current = head;
for(int i = 1; i < pos; i++) {
previous = current;
current = current->next;
}
temp->data = theData;
previous->next = temp;
temp->next = current;
}
void delete_first() {
Node<T>* temp = head;
head = head->next;
delete temp;
}
void delete_last() {
Node<T>* previous = nullptr;
Node<T>* current = nullptr;
current = head;
while(current->next != nullptr) {
previous = current;
current = current->next;
}
tail = previous;
previous->next = nullptr;
delete current;
}
void delete_position(int pos) {
Node<T>* previous = new Node<T>;
Node<T>* current = new Node<T>;
current = head;
for(int i = 1; i < pos; i++) {
previous = current;
current = current->next;
}
previous->next = current->next;
}
bool search(const T& x) {
struct Node<T>* current = head;
while (current != NULL) {
if (current->data == x)
return true;
current = current->next;
}
return false;
}
friend std::ostream& operator<<(std::ostream& str, SingleLinkedList& data) {
data.display(str);
return str;
}
};
#endif /* LinkedList_hpp */
Here is the main.cpp file that tests this header file:
#include <iostream>
#include "LinkedList.hpp"
int main(int argc, const char * argv[]) {
SingleLinkedList<int> obj;
obj.createNode(2);
obj.createNode(4);
obj.createNode(6);
obj.createNode(8);
obj.createNode(10);
std::cout<<"\n--------------------------------------------------\n";
std::cout<<"---------------Displaying All nodes---------------";
std::cout<<"\n--------------------------------------------------\n";
std::cout << obj << std::endl;
std::cout<<"\n--------------------------------------------------\n";
std::cout<<"-----------------Inserting At End-----------------";
std::cout<<"\n--------------------------------------------------\n";
obj.createNode(55);
std::cout << obj << std::endl;
std::cout<<"\n--------------------------------------------------\n";
std::cout<<"----------------Inserting At Start----------------";
std::cout<<"\n--------------------------------------------------\n";
obj.insert_start(50);
std::cout << obj << std::endl;
std::cout<<"\n--------------------------------------------------\n";
std::cout<<"-------------Inserting At Particular--------------";
std::cout<<"\n--------------------------------------------------\n";
obj.insert_position(5,60);
std::cout << obj << std::endl;
std::cout<<"\n--------------------------------------------------\n";
std::cout<<"----------------Deleting At Start-----------------";
std::cout<<"\n--------------------------------------------------\n";
obj.delete_first();
std::cout << obj << std::endl;
std::cout<<"\n--------------------------------------------------\n";
std::cout<<"----------------Deleting At End-----------------";
std::cout<<"\n--------------------------------------------------\n";
obj.delete_last();
std::cout << obj << std::endl;
std::cout<<"\n--------------------------------------------------\n";
std::cout<<"--------------Deleting At Particular--------------";
std::cout<<"\n--------------------------------------------------\n";
obj.delete_position(4);
std::cout << obj << std::endl;
std::cout << std::endl;
obj.search(8) ? std::cout << "Yes" << std::endl : std::cout << "No" << std::endl;;
return 0;
}
Answer: Code Review
Header guards are not the same as the class name.
Also there is no namespace.
#ifndef LinkedList_hpp
#define LinkedList_hpp
Now that you have the Node inside the SingleLinkedList, it does not need to be separately templated.
template<class T>
class SingleLinkedList {
private:
template<class S>
struct Node {
S data;
Node<S>* next;
};
Node<T>* head;
Node<T>* tail;
Simplify to this:
template<class T>
class SingleLinkedList {
private:
struct Node {
T data;
Node* next;
};
Node* head;
Node* tail;
This seems to be missing a swap (and a return).
SingleLinkedList& operator=(SingleLinkedList const& rhs)
{
SingleLinkedList copy(rhs);
// After you make a copy you need to swap the copy with
// the current value to change this.
// swap(copy);
// This needs a return statement.
// return *this;
}
Good try at the destructor.
But there seems to be an ordering issue.
~SingleLinkedList(){
Node<T>* nodePtr = head;
while(nodePtr != nullptr) {
Node<T>* nextNode = nodePtr->next;
// You should delete nodePtr here.
// Then once you have deleted you can move to
// the next item in the chain by assigning
// nextNode to nodePtr
nodePtr = nextNode;
// To fix simply move this delete above the previous line
delete nodePtr;
}
}
Move semantics are a complement to normal (copy) semantics. Things cannot always be moved, so you should provide this in addition to the normal copy version, not instead of it.
void createNode(const T&& theData) {
So I would have both functions:
void createNode(const T& theData)
void createNode(T&& theData)
Note: The && is not used with const as you destroy the internal content as part of the move operation.
You don't want to delete in display().
Also the display can be marked const as it should not change the state of the object.
void display(std::ostream& str = std::cout) const {
// ^^^^^ add const to mark the function as non mutating.
Node<T>* temp = head;
while(temp != nullptr) {
str << temp->data << "\t";
temp = temp->next;
}
// This line is not needed.
// You did not create anything so you don't need
// to delete it.
delete temp;
}
No need to delete in insert_start() as you delete in the destructor.
void insert_start(const T& theData) {
Node<T>* temp = new Node<T>;
temp->data = theData;
temp->next = head;
head = temp;
// Remove this.
delete temp;
}
If the list is empty then this is a problem.
void delete_first() {
Node<T>* temp = head;
head = head->next;
delete temp;
}
Normally we have a method called empty() that allows us to check if the list is empty before deleting.
bool empty() const {return head == nullptr;}
Now a user is responsible for checking before a call to delete.
while(!list.empty()) {
list.delete_first();
}
You remove the item from the list here.
But I don't see a call to delete so I suspect you are leaking a node here.
void delete_position(int pos) {
Node<T>* previous = new Node<T>;
Node<T>* current = new Node<T>;
current = head;
for(int i = 1; i < pos; i++) {
previous = current;
current = current->next;
}
previous->next = current->next;
}
Reference
Here is basic reference implementation I came up with using your code as a base:
#include <ostream>
#include <utility>
namespace ThorsAnvil
{
template<typename T>
class SinglyLinkedList
{
struct Node
{
T data;
Node* next;
};
Node* head;
Node* tail;
public:
SinglyLinkedList()
: head(nullptr)
, tail(nullptr)
{}
~SinglyLinkedList()
{
while(head != nullptr) {
deleteHead();
}
}
SinglyLinkedList(SinglyLinkedList const& copy)
: SinglyLinkedList()
{
for(Node* loop = copy.head; loop != nullptr; loop = loop->next) {
addTail(loop->data);
}
}
SinglyLinkedList& operator=(SinglyLinkedList const& rhs)
{
SinglyLinkedList copy(rhs);
swap(copy);
return *this;
}
SinglyLinkedList(SinglyLinkedList&& move) noexcept
: SinglyLinkedList()
{
swap(move);
}
SinglyLinkedList& operator=(SinglyLinkedList&& move) noexcept
{
swap(move);
return *this;
}
void swap(SinglyLinkedList& other) noexcept
{
using std::swap;
swap(head, other.head);
swap(tail, other.tail);
}
friend void swap(SinglyLinkedList& lhs, SinglyLinkedList& rhs)
{
lhs.swap(rhs);
}
void addTail(T const& value)
{
Node* newValue = new Node{value, nullptr};
if (tail != nullptr) {
tail->next = newValue;
}
tail = newValue;
if (head == nullptr) {
head = newValue;
}
}
void addHead(T const& value)
{
Node* newValue = new Node{value, head};
head = newValue;
if (tail == nullptr) {
tail = newValue;
}
}
// Assumes there is data in list.
// Users responsibility to validate by calling empty()
void deleteHead()
{
Node* old = head;
head = head->next;
delete old;
}
// Assumes there is data in list.
// Users responsibility to validate by calling empty()
void deleteTail()
{
Node* prev = nullptr;
Node* curr = head;
while(curr->next != nullptr) {
prev = curr;
curr = curr->next;
}
tail = prev;
if (prev == nullptr) {
head = nullptr;
}
else {
prev->next = nullptr;
}
delete curr;
}
bool empty() const {return head == nullptr;}
void display(std::ostream& str) const
{
for(Node* loop = head; loop != nullptr; loop = loop->next) {
str << loop->data << "\t";
}
str << "\n";
}
friend std::ostream& operator<<(std::ostream& str, SinglyLinkedList const& data)
{
data.display(str);
return str;
}
};
} | {
"domain": "codereview.stackexchange",
"id": 30751,
"tags": "c++, linked-list"
} |
Is there a background independent closed string field theory? | Question: Analogous to the background independent open string field theory by Witten. If there isn't, what are the main stumbling blocks preventing its construction?
Answer: An original article is
Ashoke Sen, Barton Zwiebach, Quantum Background Independence of Closed String Field Theory (arXiv:hep-th/9311009)
An old spr comment by Sabbir Rahman gives a survey of the history of some of these developments.
More references are here. | {
"domain": "physics.stackexchange",
"id": 3336,
"tags": "string-theory, research-level, string-field-theory"
} |
Vectors, Component Addition, and Significant Figures | Question: I have two vectors $\vec{A}$ and $\vec{B}$ and I need to find the x- and y-components of $\vec{C} = \vec{A} + \vec{B}$. Here's what I have so far:
$$|\vec{A}| = 50.0 \mathrm{m}, \theta = -20.0^\circ$$
$$|\vec{B}| = 70.0 \mathrm{m}, \theta = 50.0^\circ$$
$$C_x = |\vec{A}| \cos (\theta_A) + |\vec{B}| \cos (\theta_B)$$
$$C_y = |\vec{A}| \sin (\theta_A) + |\vec{B}| \sin (\theta_B)$$
Now, according to my professor, this is the solution for $C_x$:
$$C_x = 50.0 \cos(-20.0^\circ) + 70.0 \cos(50.0^\circ)$$
$$C_x = 46.98 + 45.0$$
$$C_x = 92.0\ \mathrm{m}$$
What I'm wondering is how the rounding works here. I got $46.98$ for $50.0 \cos(-20.0^\circ)$ and $44.99$ for $70.0 \cos(50.0^\circ)$. Why is $44.99$ rounded to $45.0$? If anything, shouldn't it be rounded to $45.00$? What am I missing here?
Answer: I think the real question is actually posed most directly in your comment (so you might want to consider editing some of this into the original question):
I'm more concerned about understanding whats going on and making sure that I know how to do it. I've heard that you shouldn't worry about significant figure rules until you have your final answer. How many decimal places should you round a number like 50.0 cos(-20.0) to? Do you always round to 2 decimals as in my problem?
Yes, you are correct that you should never actually round a number off until you are done with the calculation. However, when you are writing out your intermediate steps, it's common practice to write rounded values, rather than copying every digit your calculator shows you, just to avoid burdening the reader with a lot of extra digits that don't really add anything interesting. Keep in mind that this convention only affects what you write. You still keep the number to full precision in your calculator.
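A tiny sketch of that workflow using your own numbers (a calculator or any language works the same way — the point is that only the *written* values are rounded, never the stored ones):

```python
import math

# Keep every intermediate value at full precision; round only what you write.
ax = 50.0 * math.cos(math.radians(-20.0))  # ~46.98463 (you'd write 46.98)
bx = 70.0 * math.cos(math.radians(50.0))   # ~44.99513 -- note this rounds
                                           # *up* to 45.0, not down to 44.99
cx = ax + bx                               # ~91.97977, still unrounded

final = f"{cx:.1f}"                        # round once, at the very end
```

The second comment is also the resolution of the $44.99$ vs $45.0$ puzzle: $44.995\ldots$ rounds up, so writing $44.99$ is a truncation, not a rounding.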
As for choosing the number of digits to write out, you can use the significant figure rules, which go like so:
Addition and subtraction: find the last significant digit of each number, and choose the one with the larger place value. That place should be the last significant digit of your result. Another way to think of this rule is that a digit in the sum (or difference) is not significant unless both digits that were added to produce it are significant. So, using red to designate insignificant digits:
$$\begin{align*}3&.146309\\+2&.71\\ =5&.85\color{red}{6309}\end{align*}$$
So you would round to the last significant digit in this case, i.e. you would write out $5.86$. But if you use this result again:
$$\begin{align*}5&.85\color{red}{6309}\\+4&.93101\\ =10&.78\color{red}{7319}\end{align*}$$
This time you would again round to the last significant digit, and write out $10.79$.
Multiplication and division: your result should have as many significant digits as whichever of the numbers you're multiplying or dividing has the fewest.
$$\begin{align*}&253.1\\\div &45\\ = &5.6\color{red}{2444\ldots}\end{align*}$$
and if you multiply this by the earlier result,
$$\begin{align*}&5.6\color{red}{2444\ldots}\\\times &10.78\color{red}{7319}\\ = &60.\color{red}{67268\ldots}\end{align*}$$
These rules are a simplification of a slightly more complex (but more accurate) system, the error propagation rules, which physicists normally use in research. Unfortunately, the error propagation rules for functions like the sine and cosine can't be simplified quite so easily, so in practice people often just use the multiplication/division rule (write out the fewest number of significant digits) for everything else not mentioned here. Of course, you have to remember that, except for final results, it's really not that important how many digits you write, since you should never be rounding "behind the scenes" in your calculator anyway. | {
"domain": "physics.stackexchange",
"id": 1585,
"tags": "homework-and-exercises, vectors, error-analysis"
} |
Trying to find a better way of finding something rather than looping through a list from top to bottom. | Question: I am having a slight issue here: I am trying to think of a faster way of finding two elements from two different lists that match. The problem is that if I have lists which both have 1000 elements (rules) and the very last one [index 1000] matches, then in order to find it I have to loop through both lists from top to bottom. What I have is fine and it works, however it's not very efficient. Could anyone perhaps suggest a better way (if there is any)?
The loop in this method is where the iteration through the lists happens. For further reference, below this method you will find what ContextRule and Context are.
public ContextRule match(Context messageContext, ContextRuleList contextRules) {
ContextRule matchedContextRule = null;
for(ContextRule contextRule : contextRules) {
if(this.match(messageContext, contextRule)) {
matchedContextRule = contextRule;
break;
}
}
if(matchedContextRule == null) {
matchedContextRule = this.getDefaultContextRule();
}
return matchedContextRule;
}
match() method which does comparason.
private boolean match(Context messageContext, ContextRule contextRule) {
return match(contextRule.getMessageContext().getUser(), messageContext.getUser())
&& match(contextRule.getMessageContext().getApplication(), messageContext.getApplication())
&& match(contextRule.getMessageContext().getService(), messageContext.getService())
&& match(contextRule.getMessageContext().getOperation(), messageContext.getOperation());
}
private boolean match(String value, String contextValue) {
return value.equals(ContextRuleEvaluator.WILDCARD) || value.equals(contextValue);
}
Context ( which is an interface )
public interface Context {
public String getUser();
public void setUser(String user);
public String getApplication();
public void setApplication(String application);
public String getService();
public void setService(String service);
public String getOperation();
public void setOperation(String operation);
}
And finally ContextRule
public interface ContextRule {
public Context getMessageContext();
public int getAllowedConcurrentRequests();
}
any help or suggestions appreciated.
Answer: Can your context rules be precomputed into any sort of hierarchical match tree once at application initialization?
Map<String, Map<String, List<ContextRule>>> contextRuleMap;

public boolean match(Context context, Map<String, Map<String, List<ContextRule>>> contextRuleMap)
{
    Map<String, List<ContextRule>> rulesByApplication = contextRuleMap.get(context.getUser());
    if (rulesByApplication == null) {
        return false;    // no rules precomputed for this user
    }
    List<ContextRule> candidates = rulesByApplication.get(context.getApplication());
    if (candidates == null) {
        return false;    // no rules precomputed for this application
    }
    for (ContextRule contextRule : candidates) {
        // Existing logic for the remaining 'match' of service and operation
    }
    return false;
}
You can pre-compute your 'match tree' to as many 'levels deep' as you need to by User -> Application -> Service -> Operation to achieve the necessary performance. | {
"domain": "codereview.stackexchange",
"id": 5004,
"tags": "java, performance"
} |
Do electrons coming out of a lightbulb (and going back into the circuit) slow down? | Question: Do electrons coming out of a lightbulb (and going back into the circuit) slow down?
The electrons enter the light bulb filament with relatively high kinetic energies. As they travel through the filament they collide with metal atoms transferring much of their kinetic energy to the metal. This energy raises the temperature of the metal. The metal in turn radiates this energy as electromagnetic waves, many in the visible spectrum.(Source 1)
and
Each light bulb results in a loss of electric potential for the charge. This loss in electric potential corresponds to a loss of energy as the electrical energy is transformed by the light bulb into light energy and thermal energy. (Source 2)
My understanding of the above sources is that after the electrons come out of the lightbulb, they have less electrical potential (Voltage) than when they entered the lightbulb (as per Source 2).
Does this mean the electrons travel slower (as they have lost kinetic energy) between the bulb and the positive terminal of the battery (compared the the negative terminal of the battery to the bulb)?
Does this in term mean the current (rate of flow of charge) is less in the 2nd half of the circuit (i.e. between the bulb going towards the positive terminal) compared to the first half (between the negative terminal and the bulb)?
If not, what is the actual difference between the electrons coming into the bulb and going out of the bulb back into the circuit? If you could explain the difference in terms of terms voltage/current/charge but also what physically happens to the electrons e.g. do they travel faster, slower, that would be useful.
Answer: Several models have been proposed in previous answers for the transfer of electric potential energy to heat and light within the filament.
But I don't think that they addressed the most perceptive part of your question. You asked, what is the actual difference between electrons coming into the bulb and going out of the bulb? Any model for the energy transfer within the bulb must be consistent with the answer to that question. And the answer is that they are spaced further apart when they leave the bulb, and are drifting a little faster.
With the greater spacing between them, they must drift faster in order to maintain the same flow rate around the circuit. And they must be spaced further apart, because potential energy of any kind is always the energy associated with the position within a field. Electrons forced to be closer together have had energy transferred to the field between them by force exerted through a distance (as is always the case for energy transfer). When the electrons move apart, the energy is transferred out of the field, again by force exerted through a distance. Voltage is often referred to as electrical pressure, and that is a very apt term. Pressure is energy per unit volume.
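The "spaced further apart but drifting faster" statement is just current continuity: in a series loop the same current $I = n e A v_d$ crosses every cross-section, so if the carrier density $n$ drops, the drift speed $v_d$ must rise. A short sketch with purely illustrative numbers (the real density change in a wire is minuscule):

```python
# Current continuity: I = n * e * A * v_drift at every cross-section.
e = 1.602e-19      # elementary charge, C
I = 0.5            # current, A (illustrative)
A = 1.0e-6         # wire cross-section, m^2 (illustrative)

def drift_speed(n):
    """Drift speed (m/s) implied by carrier density n (carriers/m^3)."""
    return I / (n * e * A)

n_before = 8.5e28            # copper-like free-electron density
n_after = n_before * 0.99    # 1% lower density downstream (illustrative)
v_before = drift_speed(n_before)
v_after = drift_speed(n_after)   # ~1% faster, carrying the same current
```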
In voltage sources, electrons are compressed at one end and rarefied at the other by various means. When a circuit is completed, the electric field is not some kind of disembodied phantom that appears instantaneously, it is established as a wave of pressure which propagates through the circuit, compressing all of the electrons.
We know that the voltage drops along the length of a bulb filament or other uniform physical resistor in proportion to the length. This implies that spacing and drift speed increase continuously through the length of the filament. That's why I say that any model of the particle-by-particle mechanism for transfer of electric potential energy to thermal and light energy in a resistor must explain the spacing and drift speed increases. | {
"domain": "physics.stackexchange",
"id": 34923,
"tags": "electricity, electric-circuits, electrons"
} |
I'm practicing turning for loops into recursive functions. what do you think? | Question: I am trying to turn this function:
collection = ['hey', 5, 'd']
for x in collection:
print(x)
Into this one:
def printElement(inputlist):
newlist=inputlist
if len(newlist)==0:
Element=newlist[:]
return Element
else:
removedElement=newlist[len(inputlist)-1]
newlist=newlist[:len(inputlist)-1]
Element=printElement(newlist)
print(removedElement)
collection = ['hey', 5, 'd']
printElement(collection)
It works, but I wonder if it's okay that there's no "return" line under "else:"
Is this as "clean" as I can make it?
Is it better code with or without the newlist?
Answer:
but I wonder if it's okay there's no "return" line under "else:"
Yes, that's OK. You don't need to return anything from your function if you don't want to. In fact, in the interest of consistency, you may as well remove the thing returned in the if block too:
def printElement(inputlist):
newlist=inputlist
if len(newlist)==0:
return
else:
removedElement=newlist[len(inputlist)-1]
newlist=newlist[:len(inputlist)-1]
Element=printElement(newlist)
print(removedElement)
collection = ['hey', 5, 'd']
printElement(collection)
Is it better code with or without the newlist?
Assigning new things to inputlist won't modify it outside of the function, so there's no harm in doing so. May as well get rid of newlist.
def printElement(inputlist):
if len(inputlist)==0:
return
else:
removedElement=inputlist[len(inputlist)-1]
inputlist=inputlist[:len(inputlist)-1]
Element=printElement(inputlist)
print(removedElement)
collection = ['hey', 5, 'd']
printElement(collection)
You don't use Element after assigning it, so you may as well not assign it at all.
def printElement(inputlist):
if len(inputlist)==0:
return
else:
removedElement=inputlist[len(inputlist)-1]
inputlist=inputlist[:len(inputlist)-1]
printElement(inputlist)
print(removedElement)
collection = ['hey', 5, 'd']
printElement(collection)
You don't really need to modify inputlist, since you only use it once after modifying it. Just stick that expression straight into the printElement call. And now that inputlist is never modified, you can get rid of removedElement too, and just inline its expression in the print function.
def printElement(inputlist):
if len(inputlist)==0:
return
else:
printElement(inputlist[:len(inputlist)-1])
print(inputlist[len(inputlist)-1])
collection = ['hey', 5, 'd']
printElement(collection)
Fun fact: for any list x, x[len(x)-1] can be shortened to x[-1]. Same with x[:len(x)-1] to x[:-1].
def printElement(inputlist):
if len(inputlist)==0:
return
else:
printElement(inputlist[:-1])
print(inputlist[-1])
collection = ['hey', 5, 'd']
printElement(collection)
Since the first block unconditionally returns, you could remove the else and just put that code at the function level, without changing the code's behavior. Some people find this less easy to read. Personally, I like my code to have the least amount of indentation possible.
def printElement(inputlist):
if len(inputlist)==0:
return
printElement(inputlist[:-1])
print(inputlist[-1])
collection = ['hey', 5, 'd']
printElement(collection)
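As a runnable aside (assuming CPython, whose default recursion limit is 1000 frames): the finished version works fine on short lists, but every element costs one stack frame, which is what makes deep recursion fragile here.

```python
import sys

def print_element(inputlist):
    # Same recursion as the final refactor above (snake_case naming).
    if len(inputlist) == 0:
        return
    print_element(inputlist[:-1])
    print(inputlist[-1])

print_element(['hey', 5, 'd'])   # fine: three frames deep at most

# A long enough list overflows the interpreter's recursion limit:
overflowed = False
try:
    print_element(list(range(sys.getrecursionlimit() + 100)))
except RecursionError:
    overflowed = True
```

That stack cost is the practical argument for preferring the original for loop on long inputs.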
That's about as compact as you can get, with a recursive solution. You should probably just stick with the iterative version, for a few reasons:
Fewer lines
More easily understood
Doesn't raise a "maximum recursion depth exceeded" error on long lists (the recursive version overflows CPython's default recursion limit of 1000 frames) | {
"domain": "codereview.stackexchange",
"id": 4033,
"tags": "python, recursion"
} |
Is cyclohexane‐1,3‐dicarboxylic acid a correct IUPAC name? | Question: I did the numbering by taking all the functional groups in principal chain (scheme A), but my teacher did it differently and proposed the name cyclohexane‐1,3‐dicarboxylic acid (scheme B):
The principal chain in my teacher's answer is shorter. Also, he did not put any $\ce{-COOH}$ in the principal chain which he could have done.
Where am I wrong?
Answer: The answer to this question is actually pretty interesting, I think. @user55119 's comment that the principal chain is the cyclohexane ring is correct -- but it seems like you understand that that's what the teacher is saying, and you're asking, "why?" The best short answer, too, is imho in your comments: @MaxW 's "naming organic compounds is just screwy" is the bottom-line answer, because there are so many crazy special cases to deal with that it sometimes seems like sorcery to come up with them. But if one is willing to dig into it a little, 99% of them do actually have consistent reasoning behind them -- even if coming up with that reasoning can feel pretty byzantine right up to the point where you get it. This can be particularly true for cyclic compounds, which can in some cases even have more than one valid IUPAC name.
In this particular case, the reason your answer is incorrect is because what you defined with your numbering isn't a chain.
Per IUPAC, a chain is a sequence of linked units that is bounded by precisely two "boundary units" -- either a branch point or an "end unit", which is itself defined by IUPAC as a unit* of the macromolecule that is bound to only one other unit.
Now, this clearly causes problems with cyclic compounds, so the IUPAC has some caveats included within the definition of a chain -- one being that a cyclic macromolecule with no end groups is also a chain. So, a chain is either linear and has two boundary groups terminating it, or it's strictly cyclical and includes no end units. That means a cycloalkane -- including the cyclohexane in your example -- is a valid chain.
Your numbering, however, contains one boundary group. The unit at position 1 is a valid boundary group, as it is connected to only one other group. The unit at position 7, however, is not a boundary group; it is connected to a total of two other units (#6 and #2), and worse, it's a non-boundary that you can't continue counting from either since you've already counted the other molecule it attaches to!
"Maybe it solves the problem if I start from the unit I called #3, and leave the hanging methyl group to be the last part of the chain instead of starting from it -- so the chain would start at current #3 and end at current #1: 3-4-5-6-7-2-1." (You may want to be looking at your illustration while you read this; it'll probably make the counting I am referring to clearer than just imagining it.) No, that also doesn't work; that obviously just puts the problem at the other end -- it still has only one boundary unit, because while the unit at the end of that new "chain" attempt, #1, is only connected to one other unit, the first unit in the "chain" (#3) is connected to two units, not one, so is not a boundary unit. Still stuck!
And of course, you can try counting from your #2 instead of your #1 or #3, but then you go all the way around and never have a way to include that methyl group hanging off of #2 -- you'd have to "step" onto #2 again, effectively counting it twice, to then "step" onto the tail CH. (I wish I could make a video of this "pointing" at each unit in the count; it would come out much clearer, I think!)
So in effect, the only way to count the components of the macromolecule and get a coherent chain out of it is to start on #2 and end on the connection between #7 and #2 -- meaning the most practical solution is exactly the caveat that IUPAC settled on in specifying that a cyclic alkane can itself also be counted as a chain.
From there, I imagine you can figure out the rest: the cyclohexane as a valid chain is also the longest chain, you start numbering from the functional group, etc. Hope that helped explain the teacher's answer!
Edit/addendum: Some in the comments have asked for authoritative sourcing of the above. Good point, and thank you for bringing to my attention that I stated what I used for sourcing but neither fully cited it nor offered links. First post here on the chemistry StackExchange; please forgive my error.
IUPAC's authoritative reference work for terminology is their so-called Gold Book -- the Compendium of Chemical Terminology.
IUPAC. Compendium of Chemical Terminology, 2nd ed. (the "Gold Book"). Compiled by A. D. McNaught and A. Wilkinson. Blackwell Scientific Publications, Oxford (1997). Online version (2019-) created by S. J. Chalk. ISBN 0-9678550-9-8. https://doi.org/10.1351/goldbook.
I cite various definitions from within the Gold Book, the following two being of particular importance:
IUPAC - chain (Note 2). https://goldbook.iupac.org/terms/view/C00946
IUPAC - end-group. https://goldbook.iupac.org/terms/view/E02092
To address one of the comments in particular: it should be noted that nowhere in the Gold Book is there an authoritative definition of the term "ring." That would make the claim that "according to IUPAC a ring cannot be a chain," already dubious based on the definition I cited above, even less likely to be the case -- they certainly aren't going to "prohibit" something they don't even define.
The complaint about the definition of "chain" referring to "polymers" as if it were a notation of exclusivity and implying that the Gold Book's definition somehow contradicts or does not apply to the Blue Book should be pretty handily refuted with a bit more reading in both the Blue and Gold books. The Blue Book states it doesn't deal with preferred naming of polymers, but does still define the systematic naming of organic structures whether polymeric or not. (Blue Book P-11, p.3) The Gold Book defines "polymer," very simply and with no caveats, as "a substance composed of macromolecules." Since "chain" and "polymer" thus both use the term "macromolecules," one would naturally be led to read that definition as well, where along with the (hopefully) uncontroversial definition of being a high-molecular mass chain of repeated low-molecular mass components, one might stumble onto Note 2:
"If a part or the whole of the molecule has a high relative molecular mass and essentially comprises the multiple repetition of units derived, actually or conceptually, from molecules of low relative molecular mass, it may be described as either macromolecular or polymeric, or by polymer used adjectivally."
No clearer acknowledgement of the lack of some "hard line" apparently perceived by some between polymers and "regular" organic macromolecules should be necessary.
When it comes to systematic naming, especially, polymers aren't a "special class" of substance which are somehow privy to different definitions and procedures -- indeed, the Blue Book tells us that's precisely the opposite of the case. So, for those strictly systematic purposes (which is the entire point of the question, and should be in most introductory academic settings because the point isn't to have students memorize some esoteric rules but instead to teach them something about the structure of molecules and their function within a larger compound) - the definition of "chain" given applies just fine. In fact, the definition of "polymer" is expressly defined in the Gold Book, again IUPAC's authoritative text, such that it could apply to any simple alkane with a side branch for the purposes of that text. Rather than second-guess clearly written prose, I'd rather assume the working group knew what it was doing there by leaving it vague and allowing common usage to define terms that are inherently ambiguous. (Also see: What is the difference between Alkanes and Polymers?) They seem more aware that strict legalism doesn't serve their purposes than most of the practitioners of their work do.
And for those of you who skipped this addendum entirely: good for you. As interesting as some of the contentions below are, it seems generally agreed by all that none actually change the correct answer to the original question asked anyway; whatever your take on the semantic issues around the Blue and Gold (and Purple!) books that took up half the comments, they're just that -- semantics. One can think that an answer is based on a definition that is "irrelevant" due to its source, which I find an odd and not particularly useful criticism (but hey, that's just me); but thinking then that said definition is "incorrect" is false logic. If someone who has never seen a math book in their life believes 2+2=4 because some hot dog vendor on a street corner told them so, it doesn't make 2+2 equal anything other than 4. Fortunately for them, just because their source has "no relevance" doesn't mean that they can't have a grasp of basic math. | {
"domain": "chemistry.stackexchange",
"id": 15094,
"tags": "organic-chemistry, nomenclature, carbonyl-compounds, cyclohexane"
} |
Would adding acetic acid from household vinegar to washing machine make it too dilute to be effective? | Question: Many people advise adding a cup of acetic acid (vinegar) to one's laundry during the clean cycle. From my calculations, considering one cup of acetic acid and a top-loading machine (at 40 gallons), it would seem this would completely dilute the concentration to next to nothing.
Is this correct or am I missing something? Someone had mentioned something about acetic acid being a weak acid, but I'm not sure what that means in this context.
Answer: As a weak acid, acetic acid will act as a buffer in the washing machine solution.
Following similar calculations from the first answer of the question, What's the pH of Vinegar when it generally contains 5% Acetic acid?, we can calculate the concentration and pH of acetic acid in the washing machine solution. 1 cup is ~0.237 L and 40 gal is ~152 L. Using the concentration of acetic acid in vinegar from the post above (0.847 mol/L):
(0.847 mol/L)(0.237 L) = (X mol/L)(152 L), X = 0.00132 mol/L
pH of the washing machine solution = 0.5[4.76 - log(0.00132)] = 3.82
So, although the acetic acid is very dilute, there is still a significant pH drop from the addition of 1 cup of white vinegar. Changes in pH can strongly affect the solubility of many compounds, which would seem to be helpful in the cleaning process. Furthermore, the denaturing of proteins is promoted at lower pH, so lowering the pH could be particularly useful in the removal of protein-based stains (grass, bodily fluid, egg, etc.). | {
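The dilution and weak-acid pH arithmetic above is easy to check numerically. A quick sketch using the same input figures (0.847 mol/L acetic acid in vinegar, 1 cup ≈ 0.237 L, 40 gal ≈ 152 L) and the weak-acid approximation pH = ½(pKa − log c):

```python
import math

c_vinegar = 0.847                  # mol/L acetic acid in ~5% vinegar
v_cup, v_machine = 0.237, 152.0    # volumes in litres

c = c_vinegar * v_cup / v_machine  # concentration after dilution
pH = 0.5 * (4.76 - math.log10(c))  # weak-acid approximation, pKa = 4.76

print(f"c = {c:.5f} mol/L, pH = {pH:.2f}")   # c = 0.00132 mol/L, pH = 3.82
```

So a ~640-fold dilution still lands the wash water almost three pH units below neutral, consistent with the conclusion above.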
"domain": "chemistry.stackexchange",
"id": 5243,
"tags": "acid-base, everyday-chemistry, concentration"
} |
Lego Digital Designer doesn't export the right model. Ideas? | Question:
Today I've downloaded the Lego Digital Designer from the Lego website and then generated some designs. When I exported the .lxf and .ldr files and proceeded to convert them to .urdf using the lxf2urdf converter I got this message:
Unable to handle 'robot.lxf': 89
I also tried re-generating the urdf files with the lxf/ldf files that come with the nxt_robots_* packages and everything went fine, but if I open and save/export them with LDD (without making any changes) then I also have a similar error message.
Note that in the example above #89 is the link that the converter can't handle, but I noticed that that link doesn't seem to exist, and I checked that the latest (... 86, 87, 88) links generated match with the latest lines at the .ldr file and also with the latest bricks I added with the designer.
So I think it's just that the folks at Lego have updated the .lxf format and the lxf2urdf parser just can't understand the ending of the file... This is really a problem because the urdf generated doesn't have any joints, just links :(
Until someone updates it, anyone knows which version of the LDD works (if any)?
I'd try to upload tomorrow the .lxf file that I've designed in case someone wants to try it (and maybe give me the proper .urdf file :P)
Thanks!
Miguel.
Update: Here are the files! Could anyone try them?
robot.lxf
robot.ldr
robot.urdf
Update: I think now I know what's wrong. When I use the wide rims to build a robot they are not exported to the .ldr file. Actually, if I build a robot with these wide rims LDD will generate an empty (WTF!?) .ldr file... Because of this, lxf2urdf doesn't recognize the bricks and fails to convert the model. I don't know why this happens, but I'm out of ideas. I tried with the latest LDD version (4.2.something), with 4.0 and with 3.1.
Originally posted by Capelare on ROS Answers with karma: 202 on 2012-03-20
Post score: 1
Answer:
I solved this by replacing the "ldraw.xml" file that comes with the LDD software with the one hosted here. Apparently the original file is outdated.
This seems to solve the problem :)
Originally posted by Capelare with karma: 202 on 2012-03-27
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Capelare on 2012-03-27:
Someone please mark this answer as correct, since I don't have enough karma points.
Comment by Bence Magyar on 2012-03-27:
Could you please modify the NXT ROS wiki tutorial page accordingly? I also had the same problem but I had to move on so skipped this step but would like to get back to it sometime.
Comment by Capelare on 2012-03-28:
Bence I went to the nxt_lxf2urdf wiki page to do so and noticed that this was already noted... I guess I skipped that part when I read the wiki -_-"
Comment by mdtobi on 2013-06-24:
Thanks for the link. I'm using the Lego Digital Designer 4.3.6 and the ldraw file from the wiki page produced an error ("unable to export file"). But the file you linked to is working well. | {
"domain": "robotics.stackexchange",
"id": 8662,
"tags": "ros, nxt, nxt-ros"
} |
Does Standard ML validate (CBV) eta equivalence? | Question: $\eta$ equality of functions is fundamental in their Category-theoretic semantics but in practice even "functional" languages include "impure" features that violate it.
Note that this is not an issue of CBN vs CBV, in CBN eta equality $M = \lambda x. M x$ should hold for any term $M : A \to B$ whereas in CBV this will be restricted to values $V = \lambda x. V x$. The CBV $\eta$ equality has a perfectly good category-theoretic explanation, see Levy's Call-by-push-value.
Haskell includes seq, which can distinguish between (\x -> error "") and error ""
OCaml includes operations like "physical equality" (==), which will distinguish fun x -> x from fun y -> (fun x -> x) y because == corresponds to pointer equality.
So I wonder do any "real" languages satisfy eta for observational equivalence at all?
All the better if it is known that a (correct) compiler implementation performs eta reductions just based on the type.
The only example I know of is Coq which includes it as a definitional equality as of a recent version. Maybe similar languages (Agda, Idris) do as well. These are not surprising because they include an equality type and it is very frustrating to prove equality of functions without $\eta$ or the stronger(?) extensionality principle.
My main question is about Standard ML specifically which I think does not have pointer equality for functions. Does it satisfy the CBV $\eta$ law? Furthermore do compilers for SML such as MLton use $\eta$ when optimizing programs?
Answer: @xuq01's guess is (extremely surprisingly!) wrong. The CBV eta rule described in the question is sound in Standard ML: the value v is contextually equivalent to fn x => v x.
There are two caveats to this.
First, in specific implementations this equivalence might not be sound: in SML/NJ, you could use Unsafe.cast to detect physical equality of function values. However, unsafe features are not part of the standard.
Second, eta with functions and sums interacts in a slightly subtle way, due to clausal definitions and incomplete pattern matching. In particular, the definitions:
fun f (SOME x) (SOME y) = x + y
and
val g = fn (SOME x) => (fn (SOME y) => x + y)
are not contextually equivalent. This is because the call f NONE will return a function, and g NONE will raise an error.
- f NONE;
val it = fn : int option -> int
- g NONE;
uncaught exception Match [nonexhaustive match failure]
raised at: stdIn:2.32
The reason this happens is operationally obvious, and of course a call-by-push-value/polarization setup makes this choice explicable. But if you really want to seriously understand clausal definitions, you are led to the idea that clausal definitions should be primary, and the usual term formers should be derived notions. See Abel, Pientka, Thibodeau and Setzer's Copatterns: programming infinite structures by observations for an example of a polarized calculus in this style. (See also Paul Levy's Jumbo Lambda-Calculus, for another calculus based on this observation.) | {
"domain": "cstheory.stackexchange",
"id": 4429,
"tags": "pl.programming-languages, type-theory, compilers"
} |
What is meant by polarised protons? | Question: Really short question, but I cannot find anything on the internet.
What is meant proton polarisation?
Is it to do with the spin of the proton?
I guess the spin of the proton is obtained from the vector addition of the 3 quarks' individual spins, so it can't be 0...
Can we have unpolarised protons then?
Thanks!
Answer: Protons and their spin 1/2 were measured and used in experiments long before quarks were a gleam in theorists' eyes.
Spin was studied with the Stern Gerlach experiment
The Stern–Gerlach experiment involves sending a beam of particles through an inhomogeneous magnetic field and observing their deflection. The results show that particles possess an intrinsic angular momentum that is closely analogous to the angular momentum of a classically spinning object, but that takes only certain quantized values.
Polarized protons are protons in a beam with the spins oriented in one direction, up for example.
There are experiments that try to explore how the spin of the proton is built up by the spins of the constituent quarks and gluons, and these need polarized proton beams. | {
"domain": "physics.stackexchange",
"id": 11684,
"tags": "quantum-spin"
} |
Electrolysis of Strongly Basic Water | Question: What happens when strongly basic water is electrolyzed? I know that the electrolysis of water in strongly acidic solution involves hydrogen protons; will these protons still be involved in strongly basic solution?
Answer: In a basic solution $\ce{H2O}$ will act as the $\ce{H}$-source for the hydrogen evolution reaction. The reaction will look something like:
\begin{equation}
\ce{2H2O + 2e- -> 2OH- + H2}
\end{equation} | {
"domain": "chemistry.stackexchange",
"id": 1370,
"tags": "electrochemistry, redox"
} |
Compute the squared overlap between different given qubit states | Question: I was checking this problem from the book. And here is an example, but I think it's wrong. If it is not wrong can you please explain how did they derive it?
As per my working, it should be one. But it seems they are doing something fishy here.
To be honest I would be surprised if this book is giving wrong solutions.
Answer: Let's decompose each of the calculation.
By having in mind that $\langle 0 | 0\rangle = \langle 1 | 1\rangle = 1$ and $\langle 0 | 1\rangle =\langle 1 | 0\rangle = 0$, we have:
\begin{align*} &\langle \psi_0 | \psi_1\rangle = \frac{-1}{2} \langle 0 | 0\rangle - \underbrace{\frac{\sqrt{3}}{2}\langle 0 | 1\rangle}_{=0} = -\frac{1}{2} \\
& \Longrightarrow |\langle \psi_0 | \psi_1\rangle|^2 = \left( \frac{-1}{2} \right)^2 = \frac{1}{4}
\end{align*}
You do the exact same thing for $|\langle \psi_0 | \psi_2\rangle|^2$, the only thing changing being the sign in front of $\frac{\sqrt{3}}{2}$, but since it "becomes" 0, nothing changes.
As for the last one :
\begin{align*}
\langle \psi_1 | \psi_2\rangle &= \left( \frac{-1}{2} \langle 0 | - \frac{\sqrt{3}}{2}\langle 1 |\right) \left( \frac{-1}{2} | 0 \rangle + \frac{\sqrt{3}}{2} |1 \rangle \right) \\
&= \frac{1}{4}\underbrace{\langle 0 | 0\rangle}_{=1} - \frac{3}{4}\underbrace{\langle 1 | 1\rangle}_{=1} -\underbrace{\frac{\sqrt{3}}{4}\langle 0 | 1\rangle + \frac{\sqrt{3}}{4}\langle 1 | 0\rangle}_{=0} \\
&= -\frac{1}{2} \\
& \Longrightarrow |\langle \psi_1 | \psi_2\rangle|^2 = \left( \frac{-1}{2} \right)^2 = \frac{1}{4}
\end{align*}
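As a numerical cross-check, the three states implied by the bra-ket expansions above ($\psi_0 = |0\rangle$, $\psi_1 = -\frac{1}{2}|0\rangle - \frac{\sqrt{3}}{2}|1\rangle$, $\psi_2 = -\frac{1}{2}|0\rangle + \frac{\sqrt{3}}{2}|1\rangle$, written as column vectors -- my reading of the overlaps, since the book's statement isn't reproduced here) give the same three answers with numpy:

```python
import numpy as np

psi0 = np.array([1.0, 0.0])
psi1 = np.array([-0.5, -np.sqrt(3) / 2])
psi2 = np.array([-0.5,  np.sqrt(3) / 2])

for a, b in [(psi0, psi1), (psi0, psi2), (psi1, psi2)]:
    # np.vdot conjugates its first argument, i.e. it computes <a|b>
    print(round(abs(np.vdot(a, b)) ** 2, 10))   # 0.25 each time
```

Each pairwise squared overlap comes out to 1/4, matching the derivation.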
Please tell me if there is something you don't understand in this, I hope this helps ! :) | {
"domain": "quantumcomputing.stackexchange",
"id": 2261,
"tags": "quantum-state, mathematics, textbook-and-exercises, linear-algebra"
} |
How to forecast a timeseries with geolocation data? | Question: I have created a dataset with my geolocations from the last three months. The data set contains longitude, latitude, and timestamp, with a frequency of every 5 minutes. Based on this data, I want to predict my (geo)location for up to two weeks into the future. I would like to end up with several predictions with a degree of certainty.
I see two options:
Translate it into a classification problem with discrete target locations such as home, workplace, gym. Furthermore, I would have to create features that describe when I was at some place, basically translating the temporal dimension into features. These features could include hour of day, day of the week, etc. I'm not sure if this method is able to capture a temporal trend, though.
Use forecasting models to forecast/predict the actual latitude and longitude coordinates. Then translate the predicted latitude/longitude into a location such as 'home' or 'work'. The biggest problem I see here is that this will give me only one location, instead of a list of locations with respective certainties.
I am looking for more/better suggestions how I might do this. Thanks in advance!
Answer: I see multiple possibilities, here:
In General
Some general remarks first:
When designing your model, you should take recurring patterns into account: there will probably be a 24 h pattern (for example, the probability of being at work is similar every 24 h, while 12 h after being at work, the probability of being at work will be quite low). There might also be a 7-day pattern (e.g. every Wednesday evening one might go for sports). This can be done by extracting different features from the timestamp (the hour, the weekday) or by choosing a suitable kernel / distance function.
There are variants: you can either predict all locations you might visit in the next 2 weeks (independent of when) or you might predict for each time (e.g. each day / each hour or even every 5 minutes) where you might be. With many approaches, both variants might be possible.
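The timestamp-to-feature step in the first remark can be sketched in a few lines. The sine/cosine encoding is an addition of mine (not stated above); it keeps the 24 h cycle continuous across midnight, which a raw hour feature does not:

```python
import math
from datetime import datetime

def time_features(ts: datetime) -> dict:
    """Turn one timestamp into cyclic and categorical features."""
    angle = 2 * math.pi * (ts.hour + ts.minute / 60) / 24
    return {
        "hour_sin": math.sin(angle),   # 24 h cycle, continuous at midnight
        "hour_cos": math.cos(angle),
        "weekday": ts.weekday(),       # 0 = Monday, for the 7-day pattern
    }

print(time_features(datetime(2024, 1, 3, 18, 30)))   # a Wednesday evening
```

Applied to every 5-minute record, this yields a feature table that either a plain classifier or the sequence models mentioned further down can consume.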
Finite set of locations
This is basically your option 1. Just some more ideas:
You might consider applying some clustering to your recorded data to find recurring locations and use these as target locations.
You do not need to transform the temporal dimension into features. There are techniques that can handle time series as input (e.g. recurrent neural networks like LSTMs or transformer networks)
Rasterization
You could put a raster over your area of interest (e.g. your city). It is up to you to choose an appropriate cell size. Now you can predict for each cell the probability that you will visit it. This will create a kind of heat map.
Choosing the raster-approach allows you to handle your data as a series of images, which allows for techniques such as CNNs.
Gaussian Processes / Kriging
Gaussian Processes (a.k.a. Kriging in the field of geostatistics) allow you to learn a probability distribution over the spatiotemporal space (which seems to be what you are looking for). Unfortunately, they come with some disadvantages:
Gaussian Processes are better with interpolation than with extrapolation. You might get some strong uncertainties.
Gaussian Processes are computationally expensive. You probably have too much data and might have to subsample or compress it.
Originally, they are used for unbounded regression with Gaussian distributions. You are looking more for classification (will you be there?). This can also be done, but requires some extra steps.
Note: These are just some approaches and directions to look into. | {
"domain": "datascience.stackexchange",
"id": 11596,
"tags": "machine-learning, data-mining, data-science-model, forecasting, geospatial"
} |
How does even water rise along a glass plate? | Question: So, I was studying about general properties of matter and topics like surface tension. I came across the phenomenon of water rising along a glass plate like in the picture. I looked for some mathematical interpretation of this on the internet and in some books.
I found some mathematical understanding of the phenomenon in the book Capillarity and Wetting Phenomena: Drops, Bubbles, Pearls, Waves and also elaborate answers on StackExchange like this one: How far can water rise above the edge of a glass?
But I decided to find the height along which the water climbs on the glass by balancing forces on the infinitely long water element:
It is to be noted that the height of this water element is $h$ and it has an infinite length in the horizontal direction.
Now the pressure force $P$ can be calculated as $P=\int_0^h \rho gz dz=\frac{1}{2}\rho g h^2 $
On balancing forces in the horizontal direction, we get $$P+S =S\sin \theta$$ $$\Rightarrow \frac{1}{2}\rho g h^2= S(\sin \theta -1)$$ which is surely a contradiction as the term on the left-hand side is bound to be positive. Hence I believe that I have apparently disproved the fact that water would rise along the glass plate. But I also know that water does rise, as is evident from daily experience. So, where does my math go wrong?
Answer: If depth z is measured downward from the upper tip of the meniscus, then at depth z below the tip, the liquid pressure is given as:
$$p(z)=p(0)+\rho g z$$where p(0) is the liquid pressure at the tip (It is not equal to atmospheric pressure because of the curved interface between the liquid and the atmosphere).
And at depth z = h, representing the lower flat surface of the liquid, the pressure is atmospheric: $$p(h)=p_a=p(0)+\rho g h$$So, combining these two equations, we get: $$p(z)=p_a-\rho g(h-z)$$From this, it follows that the pressure force on the fluid (per unit width) from the left boundary in your figure (acting to the right) is $$P=p_ah-\rho g \frac{h^2}{2}$$And the force (per unit width) from the air on the right boundary of your fluid (acting to the left) is just $p_ah$. So the net force on the fluid (acting to the right) is just $-\rho g h^2/2$. The remainder of your analysis is correct. | {
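The boundary-force bookkeeping in the answer can be checked symbolically; here is a small sympy sketch of the same integration (symbol names match the answer):

```python
import sympy as sp

z, h, rho, g, p_a = sp.symbols("z h rho g p_a", positive=True)

p = p_a - rho * g * (h - z)            # pressure profile derived above
left = sp.integrate(p, (z, 0, h))      # force per unit width from the left boundary
right = p_a * h                        # atmospheric push from the right boundary
net = sp.simplify(left - right)        # net horizontal force per unit width

print(net)   # net = -rho*g*h**2/2, the result quoted in the answer
```

The atmospheric terms cancel, leaving exactly the $-\rho g h^2/2$ the answer arrives at; the contradiction in the question comes from using gauge pressure on one boundary but ignoring the atmospheric force on the other.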
"domain": "physics.stackexchange",
"id": 65630,
"tags": "homework-and-exercises, forces, water, fluid-statics, surface-tension"
} |
Relative motion. Setting course of closest approach | Question: Let $r_{P/Q}$ be the position vector of $\overrightarrow P$ relative to vector $\overrightarrow Q$ and $v_{P/Q}$ the velocity vector of $\overrightarrow P$ relative to $\overrightarrow Q$.
Suppose $|v_Q| > |v_P|$ and you want to set the direction of $v_P$ such that $|r_{P/Q}|$ becomes minimal at some point in time. According to the text I have, doing so requires that $v_P \cdot v_{P/Q} = 0$
Sorry for the horrendous image but I hope the idea is clear. $v_P$ could be any direction and the blue circle represents all possible directions of $v_p$
Anyway, my problem lies in that I do not understand why this is the necessary condition for the closest approach.
Could someone enlighten me?
If you know of a resource containing information relevant to this question, that would also be great.
Edit: I would add more detail but unfortunately there isn't much more that I know. Of course there are two angles where this works and I guess you choose the one depending on the initial positions of the two objects.
Edit: I'm really sorry but I didn't label the image properly which resulted in the post being confusing
Answer: If $\vec{v}_P$ is parallel to $\vec{v}_Q$ and $|\vec{v}_Q| > |\vec{v}_P|$ then the distance will always increase. So minimizing the component of speed parallel to the motion of Q is critical. This is stated as $\vec{v}_Q \cdot \vec{v}_P = 0$. | {
"domain": "physics.stackexchange",
"id": 13989,
"tags": "homework-and-exercises, newtonian-mechanics, vectors, relative-motion"
} |
if $L_1$ and $L_2$ are languages over the same alphabet and $L_1 \cap L_2$ is context free, at least one of them must be context free | Question: I am having a hard time understanding if this would be true or false, can someone point me in the right direction?
Answer: One way to deal with this kind of problems is to check some easy and extreme cases. Also, you would like to have a few concrete instances of the concepts involved. You should try to construct simple counterexamples with the instances. If you have found just one counterexample, great, you have found the answer. If you cannot found counterexamples by various constructions, you might have gained some understanding why it might be correct. You may have to repeat a few times.
What are some instances of non-context-free languages? Here is one. $\{0^{n^2}\mid n\ge1\}$. Or another one $\{1^{n^2}\mid n\ge 1\}$. Or another one $\{0^{n^2+1}\mid n\ge1\}$. How about their intersections?
In general, a condition that stipulates the type of the intersection of two languages is a very weak restriction to each of the languages. There are a lot of room for each of them to vary without changing their intersection.
Interested readers can enjoy the following simple exercises. $L_1$ and $L_2$ are are assumed to be over the same alphabet for all exercises.
Exercise 1. If $L_1 \cap L_2$ is finite, must at least one of them be finite?
Exercise 2. If $L_1 \cap L_2$ is regular, must at least one of them be regular?
Exercise 3. If $L_1 \cap L_2$ is context sensitive, must at least one of them be context sensitive?
Exercise 4. If $L_1 \cap L_2$ is decidable, must at least one of them be decidable?
Exercise 5. Raise a similar question. | {
"domain": "cs.stackexchange",
"id": 12821,
"tags": "regular-languages, context-free, closure-properties"
} |
Creation and annihilation operators in Hamiltonian | Question: If I find a Hamiltonian $H = \sum_{k} \varepsilon_k a_k^{\dagger} a_k + \sum_k V_k a_k^{\dagger} a_k$ then I was wondering:
As far as I know this is many-body theory, and so these operators act on symmetrized or antisymmetrized states respectively, but I am not sure what would determine, in this case, which states we symmetrize or antisymmetrize.
So which states do these operators $a_k$ actually create and annihilate? I guess that the answer is eigenstates of the Hamiltonian, but actually the spectrum of the Hamiltonian does not have to be discrete, so I don't know this.
EDIT: It was suggested that these are the eigenstates of the kinetic part of the Hamiltonian, but then we have the problem, that the spectrum is not necessarily discrete (think of unrestricted motion in $x$ direction)
Answer: The Hilbert space of the states in this case is the Fock space. It is a linear space "constructed" by acting by the creation operators $a^\dagger_k$ on the vacuum state $|0\rangle$, which has the property $a_k|0\rangle=0$. All other states are related so that the commutation relations between $a,a^\dagger$ are satisfied.
The individual states like $a^\dagger_{k_1}a^\dagger_{k_2}|0\rangle$ are not in general eigenstates of the Hamiltonian. Usually they correspond to the eigenstates of a "free part of the Hamiltonian", while the eigenstates of the full Hamiltonian are in general linear combinations of these basic vectors.
Note that the "free part" is not uniquely defined. One can have the same system looking different when written in terms of different creation and annihilation operators, and then there will be a nontrivial homomorphism between the two different Fock spaces; this is called a Bogoliubov transformation.
"domain": "physics.stackexchange",
"id": 17858,
"tags": "quantum-mechanics, operators, hilbert-space, many-body, second-quantization"
} |
Time-reversal procedure for spin | Question: What's the physical reason/explanation for the fact when time is reversed then, in addition to momentum of fermion, spin is also reversed?
Answer: If the spin is an actual magnetic moment, then its behavior under time reversal is simply similar to that of classical magnetization, which changes sign. Think of magnetic fields and dipoles as generated by electric currents. Under time reversal the currents reverse direction and so do the corresponding magnetic fields or dipoles. At quantum level, spin reversal goes hand in hand with the reversal of orbital and total angular momentum (orbital and total magnetic moments), and with CPT symmetry.
This lecture gives a nice presentation: Time reversal | {
"domain": "physics.stackexchange",
"id": 24117,
"tags": "angular-momentum, quantum-spin, time-reversal-symmetry"
} |
Looking for a simplified explanation for why you cannot measure velocity and pin point location | Question: Because of uncertainty, you cannot measure both velocity and exact position. Is this because when you measure the position of a particle, it is freezing it in its frame of reference? When measuring velocity, you are measuring the particle as it moves through its frame of reference?
Would this mean that each moment in time is simply a Planck Time slice, and then another Planck Time slice?
Answer: Here is a way to see this relationship from a physical standpoint.
Imagine we want to pinpoint the position of an electron. To do so, we shoot another electron at it and see how it bounces off. By back-tracking the trajectories of the two particles after the collision, we can determine where the electron was at that moment, but note the following: because the collision caused it to move, the electron isn't there any more. What to do?
Well, we could lob our electron bullet more gently at the target, so the target will move less when struck. But slowing down the electron bullet means reducing its energy, and that means increasing its equivalent wavelength. And that means the bullet becomes a less precise tool for pinpointing the location of the target. What to do?
Well, we could increase the velocity of the electron bullet, which shortens its wavelength and increases its precision, but then it will hit the target harder and send it out of position with a significant velocity.
This means that anything we do to more precisely locate the electron not only affects its location but also affects its velocity, and anything we do to disturb its position and velocity less reduces the accuracy of our position measurement.
For very very tiny things like electrons, we are stuck with this fundamental tradeoff and we can't do anything about it. Luckily, the bigger the object becomes, the less important this effect is and so for things like baseballs and cars we do not have to worry about it at all. | {
"domain": "physics.stackexchange",
"id": 69912,
"tags": "spacetime, time, heisenberg-uncertainty-principle"
} |
Tracking 1400+ client codes, multi-threading nightmare | Question: I am looking for a bit of help. I have to cycle through 1400 client codebases on my server, checking which version of the software each one is on and looking for customizations to the code, so those aren't lost when we apply updates. I tried to do it serially; as you might have guessed, that took forever. I then tried running it in parallel; as you might have expected, I am getting "OS Error, too many open files". I tried to resolve this by throwing in "sleep(random())", which works, but this seems convoluted and slows down the process considerably. It could also be that my code is complete garbage; any insight would be great.
I am using the git CLI to get the version; this seems heavy. Is there a better way to pull tags?
Here I get a list of the clients and do some editing to get the pathing:
def full_check(client_list, cur_vers):
for c in client_list:
if sc.venue_check(c):
url = sc.surl(c[0])
if url is not None:
path = f'/home/ubuntu/site-files/{url}'
# Maximizing output, this cut run time by 90%
# Hard to log though...
nschools.append({
'path': path,
'cur_vers': cur_vers,
'client': c[0],
'schema_version': c[2]
})
# Here, I am then looping through the paths I get, I put a sleep to stop it from bugging out
# about OS File open Limits
for s in nschools:
print(s['path'])
sleep(random())
t = threading.Thread(target=vers_check, args=(s['path'], s['cur_vers'], s['client'], s['schema_version']))
t.start()
Here I am going through and doing some version checking:
def vers_check(path, cur_vers, client, schema_version):
"""
This Checks the Schools Release.
TODO Add logging and Return Functionality.
:param client:
:param schema_version:
:param path:
:param school:
:param cur_vers:
:return:
"""
os.chdir(path)
try:
vers = subprocess.check_output(["git", "describe", "--tags"]).strip().decode('utf-8')
raw = str(vers).strip('-')
rawsplit = raw.split('-')
st = rawsplit[0]
if str(st) != cur_vers:
cschools.append({
'client': client,
'schema_version': schema_version,
'sw_version': st,
'path': path,
'is_update': False
})
log.log('warn', 'Version Check', f'The School {client} is on Software Version {st} and Schema Version {schema_version} Which is Out Of Date. Current Version is {cur_vers}')
else:
cschools.append({
'client': client,
'schema_version': schema_version,
'sw_version': st,
'path': path,
'is_update': True
})
log.log('info', 'Version Check', f'The School is on Software Version {st} and Schema Version {schema_version} Which is Up To Date. Current Version is {cur_vers}')
except subprocess.CalledProcessError as e:
log.log('err', 'Failed Version Check', f'We were unable to get the version for {client}', e)
Answer: First, I'm pretty sure that it's impossible for
os.chdir(path)
to meaningfully apply different working directories per thread; and even if it were possible it would be a bad idea. subprocess supports a cwd parameter directly that you should use.
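A minimal sketch of that change (the function name and error handling here are mine, not the poster's):

```python
import subprocess

def git_describe(path):
    """Return the latest tag description for the repo at `path`,
    or None if the command fails (not a repo, git missing, ...)."""
    try:
        out = subprocess.check_output(
            ["git", "describe", "--tags"],
            cwd=path,                      # per-call working directory; no os.chdir needed
            stderr=subprocess.DEVNULL,
        )
        return out.strip().decode("utf-8")
    except (subprocess.CalledProcessError, OSError):
        return None
```

Because cwd is passed per call, many threads can safely query different repositories at the same time.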
If you were opening files yourself, you would want to make a semaphore where the lock only applies to the section of the code where the file is opened. However, you aren't opening the files - it's probably your invocation of git that opens the most files.
I think the only practical way around this is to just limit the number of concurrent threads. You could either estimate this ahead of time, or keep spinning up threads until you hit the first OSError and start no new threads, running one-out-one-in until your work pool is complete.
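A sketch of the "limit the number of concurrent threads" option using the standard library's ThreadPoolExecutor (the worker here is a stand-in for the poster's vers_check, not the real implementation):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def check_version(path):
    # Stand-in for the real git-based check.
    return {'path': path, 'is_update': True}

def check_all(paths, max_workers=16):
    """Run check_version over all paths with at most max_workers
    threads alive at once, so open-file limits are never exceeded."""
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(check_version, p) for p in paths]
        for future in as_completed(futures):
            results.append(future.result())
    return results
```

Collecting results through futures also removes the need for the shared cschools list and the per-thread sleep(random()) workaround.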
If you deeply care about performance, subprocess-ing out to git should be replaced with in-process computation; either calling into a library or manually processing entries in .git (I have not researched either). | {
"domain": "codereview.stackexchange",
"id": 40702,
"tags": "python, git"
} |
Higher order functions in Python using reduce technique | Question:
For the below query, show that both summation and product are
instances of a more general function, called accumulate, with the
following signature:
def accumulate(combiner, start, n, term):
"""Return the result of combining the first n terms in a sequence."""
"*** YOUR CODE HERE ***"
Accumulate takes as arguments the same arguments term and n as
summation and product, together with a combiner function (of two
arguments) that specifies how the current term is to be combined with
the accumulation of the preceding terms and a start value that
specifies what base value to use to start the accumulation. Implement
accumulate and show how summation and product can both be defined as
simple calls to accumulate:
def summation_using_accumulate(n, term):
"""An implementation of summation using accumulate.
>>> summation_using_accumulate(4, square)
30
"""
"*** YOUR CODE HERE ***"
def product_using_accumulate(n, term):
"""An implementation of product using accumulate.
>>> product_using_accumulate(4, square)
576
"""
"*** YOUR CODE HERE ***"
Below is the solution:
from operator import mul, add
def accumulate(combiner, start, n, f):
"""Return the result of combining the first n terms in a sequence."""
total = start #Result of summation gets stored here
i = 1 #Initial value of sequence
while i <= n:
total = combiner(total, f(i))
i = i + 1
return total
def summation_using_accumulate(n, f):
"""An implementation of summation using accumulate.
>>> summation_using_accumulate(4, square)
30
"""
return accumulate(add, 0, n, f)
def product_using_accumulate(n, f):
"""An implementation of product using accumulate.
>>> product_using_accumulate(4, square)
576
"""
return accumulate(mul, 1, n, f)
def square(x):
return mul(x, x)
print("product_using_accumulate: ",product_using_accumulate(4, square))
print("summation_using_accumulate: ",summation_using_accumulate(4, square))
print(accumulate(add, 0, 4, square))
print(accumulate(mul, 1, 4, square))
I have tested this code and it looks good to me.
My questions:
Does the solution look incorrect in any aspect?
Any feedback on naming conventions?
Any feedback on coding style?
Answer: Here is the implementation I think you were being led towards:
##from functools import reduce # if Python 3.x
from operator import add, mul
def accumulate(combiner, start, n, f):
"""Return the result of combining the first n terms in a sequence."""
## return reduce(combiner, (f(i+1) for i in range(n)), start) # <- built-in version
total = start
for i in range(n):
total = combiner(total, f(i+1))
return total
def summation_using_accumulate(n, f):
"""An implementation of summation using accumulate.
>>> summation_using_accumulate(4, square)
30
"""
return accumulate(add, 0, n, f)
def product_using_accumulate(n, f):
"""An implementation of product using accumulate.
>>> product_using_accumulate(4, square)
576
"""
return accumulate(mul, 1, n, f)
def square(x):
return mul(x, x)
That's how you can
show how summation and product can both be defined as simple calls to accumulate
i.e. simply by doing it, which gives the results required:
>>> product_using_accumulate(4, square)
576
>>> summation_using_accumulate(4, square)
30
Also, note the use of for and range, which is easier and much less error-prone than manually incrementing values in a while loop.
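The commented-out reduce line is worth seeing on its own; this is the same accumulate collapsed into a single expression, equivalent to the loop version:

```python
from functools import reduce
from operator import add, mul

def accumulate(combiner, start, n, term):
    """Combine term(1) .. term(n) into one value, starting from `start`."""
    return reduce(combiner, (term(i) for i in range(1, n + 1)), start)

def square(x):
    return x * x
```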
So the answers to your specific questions:
No; accumulate, product_using_accumulate and summation_using_accumulate were all wrong, but you've fixed that now;
No; not now that you've removed Currentterm (which should have been current_term - per the style guide, variable names are lowercase_with_underscores); and
Yes; you need more spaces, e.g. return accumulate(add,0,n,f) should be return accumulate(add, 0, n, f). | {
"domain": "codereview.stackexchange",
"id": 8472,
"tags": "python"
} |
Attempt to solve Mondrian Puzzle with C++ | Question: I am looking for help in code proofreading. The program is trying to solve this question: Fit non-congruent rectangles into an array_size x array_size square grid. What is the smallest difference possible between the areas of the largest and the smallest rectangles?
I have a class of Rectangles, which I try to fit in a class called Boards. The function AutoInsert is basically the algorithm, where I have a linked list consisting of both Boards & Rectangles that adds and removes potential Rectangles from the Boards. This program works for small numbers of global variable array_size.
I appreciate any help at all: even if you don't understand the question or my programming, if you recognize any bad practices in my programming, please do tell.
#include<iostream>
#include<math.h>
#include<vector>
#include<time.h>
#include<stdlib.h>
#include <cstdio>
#include <ctime>
using namespace std;
const int array_size = 15; //array_size x array_size square
const int upperbound = ceil(3+array_size/log(array_size)); //given in reddit link
const int SQ = pow(array_size,2);
char getRandom(){
int n=rand()%78;
char c=(char)(n+49);
return c;
}
class Rect{
private:
int l,w,use; //int use: 0=haven't used, 1=using, 2=congruent use, 3=never use again ... should I add a possible use number w. respect to wnBounds?
int coords[4]; //top left x,tly,brx,bry
char random;
public:
Rect(){
l=0;
w=0;
use=0;
for(int i=0;i<4;i++){
coords[i]=-1;
}
}
void setL(int l){this->l=l;}
void setW(int w){this->w=w;}
void setUse(int use){this->use=use;}
void setCoords(int coords[]){
for(int i=0;i<4;i++){
this->coords[i]=coords[i];
}
}
int getL(){return l;}
int getW(){return w;}
int getArea(){return l*w;}
int getUse(){return use;}
int *getCoords(){return coords;}
int tlx(){return coords[0];}
int tly(){return coords[1];}
int brx(){return coords[2];}
int bry(){return coords[3];}
char setPiece(char c){random=c;}
char getPiece(){return random;}
};
class Rboard{
private:
bool inRange(int c[4]); //defensive functions
bool rectAtLoc(int c[4]); //defense
bool canUse(Rect n); //defense
Rect p[SQ]; //all possible rectangles, p[pow(array_size,2)-1] is the square so be careful
Rect first; //initial Rect
int difference;
public:
Rboard();
Rboard(Rect n); //initialize Rboard with first Rectangle, it will be up to Rsort to input good rectangles
bool coords[array_size][array_size]; //main coords, all begin as false, true is if occupied
void setFirst(Rect n){first=n;}
void editPoss(int c[4],int n);
Rect returnP(int n){return p[n];}
int getIndex(int l, int w);
int getDiff();
int spaceLeft();
vector<int> wnBound(); //returns rectangles within upperbound of initial rectangle
vector<int> pp(); //returns the Index of possible rectangles which can be placed
vector<int> pc(int i); //even index = top left x, odd index=y, can figure out rest of coords of rectangle from this information.
void display();
};
Rboard::Rboard(){
difference=upperbound;
int i=0;
for(int i=0;i<array_size;i++){
for(int j=0;j<array_size;j++){
coords[i][j]=false;
}
}
for(int j=0;j<array_size;j++){
for(int k=0;k<array_size;k++){
p[i].setL(j+1);
p[i].setW(k+1);
i++;
}
}
}
Rboard::Rboard(Rect n){
Rboard();
first=n;
}
int Rboard::getIndex(int l, int w){
for(int i=0;i<SQ;i++){
if((p[i].getL()==l)&&(p[i].getW()==w)){
return i;
}
}
}
void Rboard::editPoss(int c[4],int n){
Rect g=p[n];
g.setCoords(c);
g.setL(p[n].getL());
g.setW(p[n].getW());
if(canUse(g)){
p[n].setPiece(getRandom());
p[n].setCoords(c);
p[n].setUse(1);
if(g.getL()!=g.getW()){
p[getIndex(g.getW(),g.getL())].setUse(2);
}
for(int i=p[n].tly();i<=p[n].bry();i++){
for(int j=p[n].tlx();j<=p[n].brx();j++){
coords[j][i]=true;
}
}
}
}
bool Rboard::inRange(int c[4]){
for(int i=0;i<4;i++){
if((c[i]>=array_size)||(c[i]<0)){
return false;
}
}
return true;
}
bool Rboard::rectAtLoc(int c[4]){
for(int i=c[0];i<=c[2];i++){
for(int j=c[1];j<=c[3];j++){
if(coords[i][j]){
return true;
}
}
}
return false;
}
bool Rboard::canUse(Rect n){
if((n.getUse()==0)&&(inRange(n.getCoords()))&&(rectAtLoc(n.getCoords())==false)){
return true;
}
return false;
}
int Rboard::getDiff(){
int oriDiff=difference, counter=0, minN=SQ+1, maxN=-1; //biggest number, smallest number
for(int i=0;i<SQ;i++){
if(p[i].getUse()==1){
if(p[i].getArea()>maxN){
maxN=p[i].getArea();
}
if(p[i].getArea()<minN){
minN=p[i].getArea();
}
counter++;
}
}
if((counter<=1)&&(spaceLeft()==0)){
return oriDiff;
}else{
return maxN-minN;
}
}
int Rboard::spaceLeft(){
int sum=0;
for(int i=0;i<array_size;i++){
for(int j=0;j<array_size;j++){
if(coords[i][j]){
sum+=1;
}
}
}
return (SQ-sum);
}
vector<int> Rboard::wnBound(){
vector<int> output;
for(int i=0;i<SQ;i++){
if(abs(p[i].getArea()-first.getArea())<=upperbound){
output.push_back(i);
}
}
return output;
}
vector<int> Rboard::pc(int i){
vector<int> output;
int l=p[i].getL(),w=p[i].getW(),area=p[i].getArea(),counter=0;
for(int a=0;a<(array_size-l+1);a++){
for(int b=0;b<(array_size-w+1);b++){
for(int c=0;c<w;c++){
for(int d=0;d<l;d++){
if(!coords[b+c][a+d]){
counter++;
}
}
}
if(counter==area){
output.push_back(b);
output.push_back(a);
}
counter=0;
}
}
return output;
}
vector<int> Rboard::pp(){
vector<int> indexes;
for(int i=0;i<wnBound().size();i++){
int j=wnBound()[i];
if(p[j].getUse()==0){
if(spaceLeft()>=p[j].getArea()){
if(pc(j).size()!=0){
indexes.push_back(j);
}
}
}
}
return indexes;
}
void Rboard::display(){
for(int i=0;i<array_size;i++){
for(int j=0;j<array_size;j++){
if(coords[j][i]){
for(int a=0;a<SQ;a++){
if((p[a].tlx()<=j)&&(p[a].brx()>=j)&&(p[a].tly()<=i)&&(p[a].bry()>=i)){
cout<<p[a].getPiece()<<" ";
}
}
}else{
cout<<0<<" ";
}
}
cout<<endl;
}
}
class Rnode{
private:
Rect piece;
Rboard state;
Rnode *next;
public:
Rnode(Rect p, Rboard s, Rnode *n){piece=p; state=s; next=n;}
Rect getPiece(){return piece;}
Rboard getState(){return state;}
Rnode* getNext(){return next;}
void setPiece(Rect p){piece=p;}
void setState(Rboard s){state=s;}
void setNext(Rnode *next1){next=next1;}
};
class Rsort{
private:
Rnode *root;
vector<Rect> best;
int diff;
public:
Rsort(){
root=NULL;
diff=upperbound+1;
};
void setFirst(int i);
bool isLoser(Rboard b);
bool isDonut(Rboard b);
//to do: keep track of duplicates. e.j., 1x1+1x2+3x1 has been calculated... make sure don't branch to other 6!-1 combinations.
void autoInsert(Rnode *c,double duration,int maxDepth);
void autoFirst();
void display();
};
void Rsort::setFirst(int i){
Rboard board;
Rect f=board.returnP(i);
int coords[4]={0,0,f.getW()-1,f.getL()-1};
f.setCoords(coords);
board.editPoss(coords,i);
board.setFirst(f);
if(root!=NULL){
root=NULL;
}
root=new Rnode(f,board,NULL);
}
bool Rsort::isLoser(Rboard b){ //optimizes by 2 seconds
vector<int> pp=b.pp(),dup;
int sum=0,dupS=0;
for(int i=0;i<pp.size();i++){
if((b.returnP(pp[i]).getL())!=(b.returnP(pp[i]).getW())){
dup.push_back(b.returnP(pp[i]).getArea());
}
sum+=b.returnP(pp[i]).getArea();
}
for(int i=0;i<dup.size();i++){
dupS+=dup[i];
}
return ((sum-dupS/2)<b.spaceLeft()); //if sum of areas of possible rectangles < spaceLeft, this board will be a loser
}
bool Rsort::isDonut(Rboard b){ //check if there is an non-patchable "hole"
bool invBoard[array_size+2][array_size+2];
vector<int> p=b.wnBound();
for(int x=0;x<(array_size+2);x++){
invBoard[x][0]=true;
invBoard[x][array_size+1]=true;
}
for(int y=1;y<(array_size+1);y++){
invBoard[0][y]=true;
for(int x=1;x<(array_size+1);x++){
invBoard[x][y]=!b.coords[x-1][y-1]; //made coords public for this reason
}
invBoard[array_size+1][y]=true;
}
for(int i=0;i<p.size();i++){
int l=b.returnP(i).getL(),w=b.returnP(i).getW(),area=b.returnP(i).getArea(),counter=0;
for(int a=0;a<(array_size-l+3);a++){
for(int b=0;b<(array_size-w+3);b++){
for(int c=0;c<w;c++){
for(int d=0;d<l;d++){
if(invBoard[b+c][a+d]){
counter++;
}
}
}
if(counter==area){
return 0;
}
counter=0;
}
}
}
return 1;
}
void Rsort::autoInsert(Rnode *c,double duration,int maxDepth){ //add terminating counter for recursion?
if(isDonut(c->getState())||(maxDepth>=7)){ //max number of rectangles you want: change #. Need to define a function
//to calculate this based on array_size and upperbound. This variable
//alone has stopped my program from crashing at high values of array_size.
//I suspect this due to the stack overflow error.
return;
}
if(duration>=15.0){ //timer x.0 seconds => gives better solutions the longer the timer
root=NULL;
return;
}
clock_t start;
double d;
start = clock();
vector<int> pp=c->getState().pp(), pc;
c->setNext(NULL);
if(pp.size()==0){
if(c->getState().spaceLeft()==0){
if(c->getState().getDiff()<diff){
diff=c->getState().getDiff();
best.clear();
for(int a=0;a<SQ;a++){
if(c->getState().returnP(a).getUse()==1){
best.push_back(c->getState().returnP(a));
}
}
display();
c->getState().display();
}
}
}else if((root!=NULL)&&(!isLoser(c->getState()))){
for(int i=0;i<pp.size();i++){ //better restrictions here please
pc=c->getState().pc(pp[i]);
for(int j=0;j<pc.size();j+=2){
if(j==2){ //to cut down runtime
break;
}
Rect n;
Rboard n1=c->getState();
int coords[4]={pc[j],pc[j+1],pc[j]+c->getState().returnP(pp[i]).getW()-1,pc[j+1]+c->getState().returnP(pp[i]).getL()-1};
n.setL(c->getState().returnP(pp[i]).getL());
n.setW(c->getState().returnP(pp[i]).getW());
n.setCoords(coords);
n1.editPoss(coords,pp[i]);
Rnode *newnode=new Rnode(n,n1,NULL);
c->setNext(newnode);
d=(clock()-start)/(double)CLOCKS_PER_SEC;
autoInsert(c->getNext(),(duration+d),(maxDepth+1)); //remove +d if you have all the time in the world to wait for output
}
}
}
}
void Rsort::autoFirst(){
//to do: only iterate through non congruent rectangles, and interquartile range of that.
for(int i=SQ/4;i<3*SQ/4;i++){ //interquartile range
cout<<i<<endl;
setFirst(i);
autoInsert(root,0,0);
}
}
void Rsort::display(){
cout<<diff<<endl;
for(int i=0;i<best.size();i++){
cout<<best[i].getL()<<" x "<<best[i].getW()<<endl;
}
}
int main(){
srand(time(NULL));
Rsort r;
Rboard t;
r.isDonut(t);
r.autoFirst();
return 0;
}
Answer: Look at your compiler's warnings
When I try to compile your code, the compiler gives two warnings: Rect::setPiece() doesn't have a return statement, and Rboard::getIndex() is missing a return statement after the loop.
The first warning can be fixed by simply changing the return type of Rect::setPiece() to void. The second warning might be harmless; the assumption is that Rboard::getIndex() will always be called with values for l and w that match one of the rectangles. But what if it doesn't? Then the for loop will end, and a bogus value will be returned. If this is never supposed to happen, just throw an exception there: the compiler warning will go away, and if your code ever does the wrong thing, you will hopefully get a helpful error message.
Try to write more C++
In general, your code looks very much like C with classes, and doesn't make good use of the features that the C++ language and its standard library provide. Try to find more C++-like ways to write your code. That doesn't mean "write templates, use inheritance, and overload every operator you possibly can"; rather, try to make better use of the STL, and use features like range-based for, auto, and so on, to help you write more concise code.
Use a proper random number generator
If you can use C++11 or later, use the functions from <random> to generate random numbers, instead of using the rather bad rand() function from C.
While it is not so important for this particular code, srand(time(NULL)) will only generate a new seed every second, which might be bad if your code is run multiple times per second, or multiple instances of the code are started in parallel. Also, rand() % N will, for most values of N, not give you a uniformly distributed random number. There are ways around both issues, but the C++11 RNG functions take care of this for you.
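For example, getRandom() could be rewritten along these lines (same output range as the original, which maps 0–77 onto characters starting at '1'):

```cpp
#include <random>

// Drop-in replacement for the rand()-based getRandom().
char get_random_char() {
    // One engine for the whole program, seeded once from the OS.
    static std::mt19937 engine{std::random_device{}()};
    // Uniform over [0, 77], unlike rand() % 78 which is slightly biased.
    static std::uniform_int_distribution<int> dist(0, 77);
    return static_cast<char>(dist(engine) + 49);
}
```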
Code style
Every programmer has his/her own favourite way of formatting their source code. While there is no right or wrong, you are using a very dense style, omitting spaces almost wherever possible. I would suggest you use spaces after punctuation (such as ;), and spaces around operators (such as =, <, and so on). For example this line:
if((b.returnP(pp[i]).getL())!=(b.returnP(pp[i]).getW())){
It's hard to see that this is comparing the results of two function calls. Just adding some spaces (and removing some superfluous parentheses) results in:
if (b.returnP(pp[i]).getL() != b.returnP(pp[i]).getW()) {
And when you are initializing arrays, you can also put each element on its own line. So this line for example:
int coords[4]={pc[j],pc[j+1],pc[j]+c->getState().returnP(pp[i]).getW()-1,pc[j+1]+c->getState().returnP(pp[i]).getL()-1};
Will become:
int coords[4] = {
pc[j],
pc[j+1],
pc[j] + c->getState().returnP(pp[i]).getW() - 1,
pc[j+1] + c->getState().returnP(pp[i]).getL() - 1,
};
Also, declare one variable per line, so:
int l=p[i].getL(),w=p[i].getW(),area=p[i].getArea(),counter=0;
Becomes:
int l = p[i].getL();
int w = p[i].getW();
int area = p[i].getArea();
int counter = 0;
Use descriptive variable and function names
It should be possible to determine what a variable or function does by looking at its name. There are some commonly used abbreviations, such as i for a loop index, x, y and z for coordinates, but otherwise you should not use abbreviations.
Instead of l and w, write length and width. Instead of SQ, write array_elements. Or better yet, array_size, and split the original array_size into array_length and array_width. This way, you'll be able to handle non-square boards.
Instead of Rboard, name your class either RectangleBoard or just Board. And what do Rboard::pc() and Rboard::pp() do? Even looking at the code I have no idea what those abbreviations mean.
Move member variable initialization to the declaration
It's generally best to move initialization of variables as close as possible to their declaration. For example, in class Rect, instead of initializing the private member variables inside the constructor, just write:
class Rect {
private:
int l = 0;
int w = 0;
...
Here it is not too important, but if you have multiple constructors, or have a lot of member variables, it will become clear that this is better.
Here, you might get rid of the constructor altogether this way.
Don't store redundant information
Your class Rect stores the top-left and bottom-right coordinates of the rectangle, and its length and width. There is also nothing in that class that prevents these pieces of information from being in conflict with each other. Either store both coordinates, or one coordinate and the length and width. Your getters and setters should take care of calculating the required information if necessary.
Avoid using an array to store coordinates
Unless you are going to store many-dimensional coordinates, it's usually better to just name the coordinates x and y, or in this case, if you don't want to store width and height, x1, y1, x2 and y2. The reason is that it's easy to make mistakes when you store the coordinates in an int[4]: did you store the coordinates in the aforementioned order, or was it x1, x2, y1, y2? Being explicit here avoids issues.
Even better is to define a struct coordinate {int x; int y;}, or use a library like GLM that provides you with various vector and matrix types, including all kinds of useful functions that operate on them.
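Combining this with the previous point, here is a sketch of a rectangle type that stores one corner plus the size and derives the other corner, so the two can never disagree (the names are suggestions, not from the original code):

```cpp
struct Coordinate {
    int x;
    int y;
};

// Stores only the top-left corner and the size; the bottom-right
// corner is computed on demand, so no redundant state can drift.
struct Rectangle {
    Coordinate top_left;
    int width;   // extent in x
    int length;  // extent in y

    Coordinate bottom_right() const {
        // -1 because the original code treats coordinates as inclusive.
        return {top_left.x + width - 1, top_left.y + length - 1};
    }
    int area() const { return width * length; }
};
```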
Use a single function to get/set multiple variables, if that is the typical use case
Instead of having separate functions setL(int l) and setW(int w), which you will always call in pairs, create a single function set_size(int l, int w). Of course, if you use a struct for coordinates, then you will automatically write code like that.
Use std::vector instead of arrays where appropriate
In class Rboard, you declare an array Rect p[SQ]. You are not always using all elements. It makes much more sense to make this a std::vector<Rect> p. This way, you can add elements to the vector as needed, you don't have to have a member variable in class Rect to tell you whether the rectangle is in use or not.
Avoid if (foo) return true; else return false;
Just directly return foo. For example Rboard::canUse() can be simplified to:
bool Rboard::canUse(Rect n) {
return n.getUse() == 0 && inRange(n.getCoords()) && !rectAtLoc(n.getCoords());
}
Use const references where appropriate
Passing large classes by value might result in expensive copies, and might even trigger some undesired behaviour, depending on how these classes are implemented. Use const references to avoid that. For example, Rboard::canUse() can be rewritten as:
bool Rboard::canUse(const Rect &n) {
... // no need to change anything in the implementation
}
Use std::list<> instead of writing your own linked lists
Your class Rnode implements a linked list. Let the STL do that for you! Remove the member variable Rnode *next, the function setNext(), and in class Rsort use std::list<Rnode> nodes instead of Rnode *root. Then instead of having to do things like Rnode *newnode = new Rnode(...) and setNext(newnode), you can just write nodes.push_back(...). As a bonus, this will take care of deleting the memory for you, which you forgot to do.
Use nullptr instead of NULL
NULL is C, nullptr is C++.
Use '\n' instead of std::endl
When you want to end a line, output a '\n' instead of using std::endl. The latter is the same but also flushes the output, which might slow down your program.
Optimize your Rboard::display() function
Your function to display a board is quite inefficient: it has complexity O(array_size⁴). The reason is that for every position, you check every possible square if it is covering that position. It is better to create a 2D array that represents the board, and then for each square, draw it onto that representation, and at the end write out the whole array. This reduces the complexity to O(array_size²). For example:
void Rboard::display() {
    char output[array_size][array_size];
    memset(output, '0', sizeof output); // unoccupied cells show as '0' (needs <cstring>)
    for (auto &rect: p) {
        if (rect.getUse() != 1) {
            continue; // only draw rectangles that are actually placed on the board
        }
        char piece = getRandom();
        for (int y = rect.tly(); y <= rect.bry(); y++) { // coordinates are inclusive
            for (int x = rect.tlx(); x <= rect.brx(); x++) {
                output[y][x] = piece;
            }
        }
    }
    for (int y = 0; y < array_size; y++) {
        cout.write(output[y], array_size);
        cout.put('\n');
    }
}
Note that the above function also no longer needs the member variable char random in class Rect.
Use an enum for Rect::use
Instead of using an int to represent different states, make them explicit by using an enum, preferably even an enum class if you can use C++11 or later. For example:
class Rect {
public:
enum class UseType {
UNUSED,
USED,
CONGRUENT_USE
};
private:
UseType use = UseType::UNUSED;
...
public:
void setUse(UseType use) {
this->use = use;
}
UseType getUse() {
return use;
}
Then later in the code, you can for example write setUse(Rect::UseType::CONGRUENT_USE). That's very verbose, but it's clear from just that line of code what the intention is, whereas setUse(2) leaves the reader searching through the code to find out what 2 means. Also, you can now no longer accidentally set an invalid value, like setUse(9).
"domain": "codereview.stackexchange",
"id": 32461,
"tags": "c++"
} |
Decide if a language has a word of a given size | Question: Suppose that $L$ is some language over the alphabet $\Sigma$. I was asked to show that the following languages is decidable:
$$L' = \{w \in \Sigma^* | \text{ there exists a word } w'\in L \text{ such that } |w'| \leq |w| \}$$
I.e., $w \in L'$ if $L$ has some word of length at most $|w|$.
The way I was thinking to show that is by observing that $L \cap\Sigma^{|w|}$ is finite, and $(L \cap \Sigma) \cup (L \cap \Sigma^2) \cup \ldots\cup (L\cap \Sigma^{|w|})$ is finite too, hence decidable. But the main thing I am struggling with is: how can any algorithm for $L'$ know whether some $u \in L$? This is undecidable in general, so it's unclear to me how any algorithm for $L'$ can verify that some word is indeed in $L$.
Answer: There are two cases:
$L$ is empty. In this case, $L' = \emptyset$ is trivially decidable.
$L$ is non-empty. Let $m$ be the minimum length of a word in $L$. Then $L'$ consists of all words of length at least $m$, and is again trivially decidable (in constant time!).
As you can see, you never actually need an algorithm for $L$.
Similarly, the following language is always decidable:
$$L'' = \{w \in \Sigma^* \mid \text{ there exists a word $w' \in L$ such that $|w'| \geq |w|$}\}.$$
There are now three cases:
$L$ is empty. In this case, $L'' = \emptyset$ is trivially decidable.
$L$ is infinite. In this case, $L'' = \Sigma^*$ is again trivially decidable.
$L$ is finite. Let $M$ be the maximum length of a word in $L$. Then $L''$ consists of all words of length at most $M$, and is again trivially decidable (in constant time).
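Put compactly, with $m$ and $M$ as defined above (this summary is mine, not part of the original answer):
$$L' = \begin{cases} \emptyset & \text{if } L = \emptyset \\ \{w \in \Sigma^* : |w| \ge m\} & \text{otherwise} \end{cases} \qquad L'' = \begin{cases} \emptyset & \text{if } L = \emptyset \\ \Sigma^* & \text{if } L \text{ is infinite} \\ \{w \in \Sigma^* : |w| \le M\} & \text{if } L \text{ is finite and non-empty} \end{cases}$$
Each right-hand side is a regular language, hence decidable, even though we may have no effective way to determine which case holds.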
These are examples of non-constructive proofs, which you might not like. Instead of starting a discussion here, I refer you to this question. | {
"domain": "cs.stackexchange",
"id": 16242,
"tags": "formal-languages, computability, undecidability"
} |
Game in Pythonista App | Question: I've written this Whack-a-Mole-like game on my iPad in the app Pythonista. Keeping that in mind, it uses a different graphical interface from the one I have usually seen used, Tkinter. I'm a beginner and this is my first major project, so any tips on how to code more efficiently and improve this program would be greatly appreciated.
import random
import ui
from time import sleep
import console
import sys
#turn button to "on" state
def turn_on(sender):
sender.title = 'on'
sender.tint_color = ('yellow')
#reverts button to off state
def turn_off(sender):
sender.title = 'off'
sender.tint_color = 'light blue'
#briefly turns button to "on" state then reverts button to original state
def blink(sender):
turn_on(sender)
sleep(.5)
turn_off(sender)
#When the button is tapped, it reverses the current state of the button
def button_tapped(sender):
turn_off(sender)
#Describes game
def game_Description():
console.alert("Game objective:", 'Click the button to turn the light off before the next light turns on', "Ready")
#Pops up when user loses game
def game_Over_Alert():
play_Again = console.alert('Game Over!','','Play Again?')
return play_Again
#Checks to see if all lights are off
def check_Lights():
if button.title == 'off':
if button2.title == 'off':
if button3.title == 'off':
if button4. title == 'off':
return True
#Turns off all lights
def all_Off():
turn_off(button)
turn_off(button2)
turn_off(button3)
turn_off(button4)
#Increase score by 1 and display
def increase_Score(x):
x += 1
score.title = 'Score: %d' % x
return x
#setting UI and buttons
view = ui.View()
view.name = 'Light Panel'
view.background_color = 'black'
button = ui.Button(title = 'off')
button.center = (view.width*.2, view.height*.5)
button.flex = 'LRTB'
button.action = button_tapped
view.add_subview(button)
button2 = ui.Button(title = 'off')
button2.center = (view.width*.4, view.height*.5)
button2.flex = 'LRTB'
button2.action = button_tapped
view.add_subview(button2)
button3 = ui.Button(title = 'off')
button3.center = (view.width*.6, view.height*.5)
button3.flex = 'LRTB'
button3.action = button_tapped
view.add_subview(button3)
button4 = ui.Button(title = 'off')
button4.center = (view.width*.8, view.height*.5)
button4.flex = 'LRTB'
button4.action = button_tapped
view.add_subview(button4)
scoreCount = 0
score = ui.Button(title = 'Score: 0')
score.center = (view.width*.5, view.height*.75)
score.flex = 'LRTB'
view.add_subview(score)
#Set up display
view.present('sheet')
#Runs the game and handles the function
def play_Game():
scoreCount = 0
speed = 2.55
game_Description()
random.seed()
while check_Lights():
x = random.randint(0,3)
if x == 0:
turn_on(button)
elif x == 1:
turn_on(button2)
elif x == 2:
turn_on(button3)
else:
turn_on(button4)
scoreCount = increase_Score(scoreCount)
sleep(speed)
if speed >= 1.5:
speed-= .07
if game_Over_Alert():
all_Off()
play_Game()
play_Game()
view.close()
Answer: Here are some general tips and guidelines:
Do read the PEP8 guide – Loads of good information in that one
Use docstring, not comments for functions – Docstrings will possibly show up in your IDE when you hover or read documentation on functions, so move your function comments to docstrings like this:
def turn_on(sender):
"""Switch button to 'on' state."""
...
Allow vertical space between functions – Between functions it is recommended to add two new lines, which helps separate the functions, and allows for using single new lines within functions to group code statements there.
Be consistent in naming – You vary your function naming style from button_tapped() to game_Description(). I would strongly suggest being consistent in naming (and in other style aspects as well), and the most pythonic way for functions is snake_case.
Switch to new style string formatting – Instead of the % operator, you are better off using 'Score: {}'.format(x). The former is deprecated, and might be removed in the future.
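A quick illustration of the two styles, which produce the same string (f-strings, available from Python 3.6, are a further option):

```python
x = 7
old_style = 'Score: %d' % x        # printf-style % operator
new_style = 'Score: {}'.format(x)  # str.format, the recommended form here

assert old_style == new_style == 'Score: 7'
```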
Don't intermix top level code and functions – You have a section of top level code before the play_game() (aka play_Game()), which is kind of hidden. A good piece of advice is to not have any top level code besides the following at the bottom of the file:
if __name__ == '__main__':
main()
That is, group most of the top level code in a main() function, and gather the view initialisation in another function. (Not quite sure how this works in Pythonista, but alternatively, you'll have a single call to main() at the top level.)
Don't use global variables – If possible, it is always better to pass values to functions, instead of using them globally. When restructuring into a view initialisation function, this possibly introduces a new concept of returning multiple values from a function (see code below).
Feature: Your code always increases score, tapped or not – Your current code always gives you an increase in score, independent of whether you actually tapped anything or not. If, however, you didn't tap anything, you'll get a False result from the check_lights() test. This is kind of hiding the game logic. I'd move the test around a little, to increase understanding of the game logic.
Indentation errors in presented code – I'm surprised no-one commented on this yet, but the code as presented doesn't work: there is faulty indentation on the x = random.randint(0, 3) line, and likewise on the speed -= .07 line. Both lines should be indented. A trick to remember for your next post here on Code Review is to mark the entire code block (when editing your question/answer) and hit Ctrl+K. (Do not edit the code now, though, as you've gotten some answers.)
Reorganise buttons into a list – The other answers have touched upon this already, but to take it one step further, you could use a list holding all of your buttons, and this would simplify much of your logic.
Include the 0 before floats – It is not necessary, and this is a minor detail, but I recommend keeping the 0 in front of floats. That is 0.07, instead of .07. (I kept misreading it as 0.7, and didn't quite see the use of increasing the speed twice before stopping the speed increase...)
Implementing all of these changes (and removing the increase score function, as it kind of hid a main concept of the game) we arrive at the following code:
import random
import ui
from time import sleep
import console
import sys


def turn_on(button):
    """Turn button to "on" state."""
    button.title = 'on'
    button.tint_color = 'yellow'


def turn_off(button):
    """Revert button to "off" state."""
    button.title = 'off'
    button.tint_color = 'light blue'


def blink(button):
    """Briefly switch button state "on" and then "off"."""
    turn_on(button)
    sleep(0.5)
    turn_off(button)


def button_tapped(button):
    """When the button is tapped, turn the button "off"."""
    turn_off(button)


def game_description():
    """Describe game objective in alert box."""
    console.alert("Game objective:",
                  "Click the button to turn the light off before the next light turns on",
                  "Ready")


def game_over_alert():
    """Alert box stating game over, and asking for a new game."""
    return console.alert('Game Over!', '', 'Play Again?')


def check_lights(buttons):
    """Check if all buttons are turned off."""
    return all(button.title == 'off' for button in buttons)


def all_off(buttons):
    """Turn off all buttons."""
    for button in buttons:
        turn_off(button)


def add_button(view, width_percentage, height_percentage,
               button_action=None, title='off'):
    """Add a button positioned by percentages, with given title."""
    button = ui.Button(title=title)
    button.center = (view.width * width_percentage, view.height * height_percentage)
    button.flex = 'LRTB'
    button.action = button_action
    view.add_subview(button)
    return button


def initialise_view():
    """Initialise view, and return view and buttons."""
    # Default settings for view
    view = ui.View()
    view.name = 'Light Panel'
    view.background_color = 'black'

    # Create "mole" buttons
    mole_buttons = [add_button(view, 0.2 * (i + 1), 0.5,
                               button_action=button_tapped) for i in range(4)]
    score_button = add_button(view, 0.5, 0.75, title='Score: 0')

    return view, mole_buttons, score_button


def play_game(view, mole_buttons, score_button):
    """Main game loop. Handles game, and restarting of game."""
    score_count = 0
    speed = 2.55
    game_description()
    random.seed()

    while True:
        turn_on(mole_buttons[random.randint(0, 3)])

        # Allow player to whack a mole
        sleep(speed)

        # Check if they hit the mole, that is check all lights are off
        if check_lights(mole_buttons):
            score_count += 1
            score_button.title = 'Score: {}'.format(score_count)
        else:
            # They failed hitting the mole
            break

        # Change speed a little
        if speed >= 1.5:
            speed -= 0.07

    if game_over_alert():
        all_off(mole_buttons)
        play_game(view, mole_buttons, score_button)


def main():
    view, mole_buttons, score_button = initialise_view()
    view.present('sheet')
    play_game(view, mole_buttons, score_button)
    view.close()


if __name__ == '__main__':
    main()
I haven't installed Pythonista, yet, so I'm not able to test this code, but hopefully it should work nicely. | {
"domain": "codereview.stackexchange",
"id": 16882,
"tags": "python, beginner, game, python-3.x"
} |
"Ease the Array" challenge | Question: I'm working on this challenge:
Given an array of integers of size N. Assume ‘0’ as invalid number and all other as valid number. Write a program that modifies the array in such a way that if next number is valid number and is same as current number, double the current number value and replace the next number with 0. After the modification, rearrange the array such that all 0’s are shifted to the end and the sequence of the valid number or new doubled number is maintained as in the original array.
Examples:
Input : arr[] = {2, 2, 0, 4, 0, 8}
Output : 4 4 8 0 0 0
Input : arr[] = {0, 2, 2, 2, 0, 6, 6, 0, 0, 8}
Output : 4 2 12 8 0 0 0 0 0 0
Input:
The first line of the input contains an integer T, denoting the number of test cases. Then T test case follows. First line of each test contains an integer N denoting the size of the array. Then next line contains N space separated integers denoting the elements of the array.
Output:
For each test case print space separated elements of the new modified array on a new line.
Constraints:
1 ≤ T ≤ 10³
1 ≤ N ≤ 10⁵
Example:
Input:
2
5
2 2 0 4 4
5
0 1 2 2 0
Output:
4 8 0 0 0
1 4 0 0 0
I managed to get my code to work for the given input and output cases. However, when I submit it, I get a "time limit exceeded" message.
#include <stdio.h>

int main() {
    //code
    int x,q,n,i,j,k,m,arr[100000],d,t;
    scanf("%d",&x);
    for(q=0;q<x;q++)
    {
        scanf("%d",&n);
        for(t=0;t<n;t++)
        {
            scanf("%d",&arr[t]);
        }
        for(i=0;i<n;i++)
        {
            if(i>0)
            {
                if(arr[i]==arr[i-1])
                {
                    arr[i-1]=arr[i]+arr[i-1];
                    arr[i]=0;
                }
            }
        }
        j=n-1;
        k=0;
        while(k != j)
        {
            if(arr[k]==0)
            {
                for(m=0;k+m+1<=j;m++)
                {
                    arr[k+m]=arr[k+m+1];
                }
                arr[j]=0;
                j--;
            }
            else
            {
                k++;
            }
        }
        for(d=0;d<n;d++)
        {
            printf("%d",arr[d]);
            printf(" ");
        }
        printf("\n");
    }
    return 0;
}
Answer: This is a decent start. Here are some things that could be improved.
Naming
Your variable names are single letters that as far as I can tell have no relationship to what they represent. The variable x is the number of test cases to run. You should name it numTestCases or testCases. q is the specific test case you're currently running, so testCase or testCaseIndex would be a better name. n is the number of inputs to read, so numInputs would be a good name. And arr tells us very little other than that it is an array. How about inputs? I would rename i to inputIndex, as it's for iterating over the inputs. d could be outputIndex.
Functions
There are 4 different things your code does:
Reads the inputs from the user (or stdin)
Finds duplicate items in the array and combines them
Moves 0s to the end of the array
Prints the results
Each of those should be a different function with a clear name.
Space Efficiency
You have an array declared as int arr[100000]. Assuming a typical 32-bit int implementation, that's ~400 KB. On my system, the default stack size is 1MB. That means that single array takes up nearly half the stack. If you had just two more of those, you'd get a stack overflow just running the app. Since it's only 1 allocation at the start of the program and not a repeating allocation, I recommend using malloc() to allocate the array and freeing it after you've finished.
Time Efficiency
The main slowdown in your code is the part that copies the remaining elements when you find a 0 in the array. There are built-in functions in the standard library to do this task for you. In this case, this loop:
for(m=0;k+m+1<=j;m++)
{
    arr[k+m]=arr[k+m+1];
}
can be replaced with a single line:
memmove(&arr[k], &arr[k+1], sizeof(arr[0]) * (j - (k + 1) + 1));
This is significantly more efficient. The library function (which is sometimes not even a function) can use SIMD instructions and other speedups to make it much faster than you're likely to manage on your own. And it's portable. When I run the test with just that single change, it passes.
There are other ways you could speed this up, too. Instead of copying the remaining portion of the array at every 0, you could read up to the first 0. Mark that index. Read up to the next 0. Move only the values between those 2 indexes back, and repeat. This way you never move any element in the array more than once. | {
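The "move each element at most once" idea described above can be sketched language-agnostically; here is a Python version using the equivalent "write pointer" formulation (a sketch of the technique, not the C answer's exact code), which is a single O(n) pass with no extra memory:

```python
def shift_zeros(arr):
    """Stably move all zeros to the end, touching each element at most once."""
    write = 0
    # Copy every non-zero element forward to the next free slot
    for read in range(len(arr)):
        if arr[read] != 0:
            arr[write] = arr[read]
            write += 1
    # Zero out the tail that is left over
    for i in range(write, len(arr)):
        arr[i] = 0
    return arr
```

The same structure translates directly to C: two index variables over the same array, so no element is ever shifted more than once, unlike the repeated block moves in the original inner loop.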
"domain": "codereview.stackexchange",
"id": 30869,
"tags": "c, array, time-limit-exceeded"
} |
Does the second law of thermodynamics hold under non-equilibrium conditions? | Question: The question is just that: is the second law of thermodynamics always valid under non-equilibrium conditions?
The origin of life involves several chemical reactions that are thermodynamically not feasible $(+ \Delta G)$. To explain these reactions, non-equilibrium thermodynamics is invoked. Hence, my question.
Answer: If you define the system consistently, the 2nd law always holds. If you define the system inconsistently, you can violate any law of physics you like.
Case 1: The system is always the refrigerator in isolation, before, during, and after the thought experiment. All the normal laws of physics apply, but nothing happens.
Case 2: The system is the refrigerator and the room and the power plant and the electrical grid and the river that cools the power plant and the atmosphere of the planet, before, during, and after the thought experiment. All the normal laws of physics apply, interesting things happen, but the model is too complex to be useful.
Case 3: The system is the refrigerator in isolation before we run the thought experiment, work and heat magically enter and leave the system during the thought experiment, and then at the end of the thought experiment we just look at the refrigerator in isolation again. It's useful to set up the problem this way, packing all the unknown behavior of a complicated system into something simple and controllable like "work across the boundary". But we can't draw any conclusions about the laws of physics based on the beginning and ending states of "the refrigerator in isolation", because all the interesting physics stuff happened in a totally different system that includes wherever the work and heat came from and went to.
If we were being ontologically correct, we shouldn't ever use case 3. Instead...
Case 4: The system is always the refrigerator, a power supply thermally isolated from the refrigerator, and a heat sink in thermal contact with the refrigerator. All the normal laws of physics apply, interesting things happen, and we can model them.
To reiterate: if you take option 3 and try to infer laws of physics based on the start and end states of an inconsistently isolated system, you can violate whatever you want.
The system is a box with a ball rolling around in it. I allow magic transport of stuff across the boundary (I take the ball out of the box) and then I look at the box again. The ball is gone! The Second Law, gone! The First Law, gone! Newton's laws of motion, gone! Quantum mechanics, gone! | {
"domain": "physics.stackexchange",
"id": 84437,
"tags": "thermodynamics, statistical-mechanics, entropy, non-equilibrium"
} |
GPS message: compute ephemeris data and pseudo-ranges from subframes | Question: I have raw GPS data (see end of the question for details if you think the protocols are relevant), and I need to extract/compute ephemeris and pseudoranges (my goal is to replace the recursive least squares that the receiver solves to compute the position with a home-brewed method based on sensor fusion). Concerning the ephemeris, I have access to the first 3 subframes of each of the 25 frames of the GPS message. Documentation/books/etc. that I have found only vaguely mention that the first subframe contains clock data, and the other two contain ephemeris data. However, none of them precisely says what the words in these subframes are, and how I can use them to compute orbital information (I'm assuming that I only want satellite positions?). Can anyone give me pointers to some references on how to do this? Or (even better), is there any open-source code that already implements this?
I really appreciate your help.
Details on the data: They have been transmitted by a Ublox EVK-7p using the UBX RXM-EPH and RXM-RAW protocols.
RXM-EPH: satellite ID (4 bytes), HOW word (4 bytes), followed by three 32-byte arrays corresponding to subframes 1 to 3 of the GPS message.
RXM-RAW: time of week, week, number of satellites, reserved (?), carrier phase, pseudo-range, Doppler, and so on....
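Based purely on the RXM-EPH field layout described above (and assuming the little-endian byte order used by the UBX protocol; the function and variable names here are my own), splitting a payload into its raw subframe words could look like this:

```python
import struct

def parse_rxm_eph(payload):
    """Split a 104-byte RXM-EPH payload into svid, HOW word and 3 subframes.

    Layout assumed from the question: u4 satellite ID, u4 HOW word, then
    three 32-byte blocks (8 words of 4 bytes each) for subframes 1 to 3.
    """
    if len(payload) != 8 + 3 * 32:
        raise ValueError('expected a 104-byte RXM-EPH payload')
    svid, how = struct.unpack_from('<II', payload, 0)       # '<' = little-endian
    subframes = [struct.unpack_from('<8I', payload, 8 + 32 * n)
                 for n in range(3)]
    return svid, how, subframes
```

Recovering the actual orbital parameters then means slicing the documented bit fields out of these words and applying the scale factors given in the GPS interface control document.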
Update: I found this which seems to answer most of my questions. It is from 1995 though.
Answer: The GPS standard has been the same since the first GPS satellites were launched, so sources from 1995 are very relevant. In fact, that is the document I would point you to. One of the best libraries for decoding GPS is gnss-sdr. You can look at how they are doing protocol decoding in order to get an idea of how to implement it.
Other docs I would point you to are the official GPS website and the interface control documents. There is a ton of useful information there.
"domain": "robotics.stackexchange",
"id": 1434,
"tags": "sensors, gps, pseudo-ranges, ephemeris"
} |
What is the use of line integral in physics? | Question: Where and how is it used? Why is it used in Gauss's law?
Answer:
A line integral is used whenever one needs to figure out how much of a vector has accumulated along a path. The classic example is that work is expressed as the line integral $W=\int {\vec F}\cdot d{\vec x}$, which captures that idea of "accumulating work along the path", but the concept can show up in a lot of places.
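As a concrete (hypothetical) illustration, the work integral can be approximated numerically by chopping the path into small segments and accumulating $\vec F \cdot d\vec x$ segment by segment:

```python
import numpy as np

def work_along_path(force, path):
    """Approximate W = integral of F . dx along a polyline of points.

    `force` maps a position vector to a force vector; `path` is an
    (N, 2) or (N, 3) array of successive points along the path.
    """
    total = 0.0
    for p0, p1 in zip(path[:-1], path[1:]):
        midpoint = 0.5 * (p0 + p1)            # sample the force mid-segment
        total += np.dot(force(midpoint), p1 - p0)
    return total
```

For a constant force of 3 N along x over a 2 m straight path this accumulates to 6 J, independent of how finely the path is subdivided.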
Gauss's law is not phrased in terms of a line integral, but rather a surface integral. Here, the concept is capturing "I have a ball, and I need to calculate how much electric field is leaving the ball". You do this by calculating the little amount of electric field leaving each section of the ball, and then adding it all up. This is naturally done with a surface integral.
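The surface-integral version can be approximated the same way: tile the ball's surface, take the outward component of the field on each tile, and add everything up. A hedged sketch (for the radial field $\vec E = \hat r / r^2$ of a unit point charge in Gaussian-style units, the total should come out near $4\pi$ regardless of the sphere's radius):

```python
import numpy as np

def flux_through_sphere(field, radius, n=100):
    """Approximate the flux of `field` out of a sphere by summing E . n dA."""
    thetas = np.linspace(0.0, np.pi, n)                       # polar angle
    phis = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)   # azimuth
    dtheta = np.pi / (n - 1)
    dphi = 2.0 * np.pi / n
    total = 0.0
    for t in thetas:
        for p in phis:
            normal = np.array([np.sin(t) * np.cos(p),
                               np.sin(t) * np.sin(p),
                               np.cos(t)])                    # outward unit normal
            dA = radius**2 * np.sin(t) * dtheta * dphi        # area of this tile
            total += np.dot(field(radius * normal), normal) * dA
    return total
```

For the radial field above, the result is independent of the chosen radius, which is exactly the content of Gauss's law.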
In general, if you have a bunch of varying vector quantities, and you want to calculate numbers with them over a distance that is larger than a single point, you are likely going to have to do some sort of vector calculus. | {
"domain": "physics.stackexchange",
"id": 80741,
"tags": "electromagnetism, gauss-law, integration"
} |
Why not use just one electric motor in a car? | Question: Instead of using an alternator to charge the battery, and a starter to start the engine, why not just put one electric motor which is always connected with the engine? When we want to start the motor, we supply electricity to it; and when the engine is started, it spins the motor to generate electricity and charge the battery.
Answer: One problem with generating electrical energy from a car engine is that the RPMs are constantly changing, and consequently the power output of an electrical generator would vary with engine speed. An alternator is very good at responding to these variations; the current in the rotor can be quickly adjusted to change the power output at a given RPM. I don't know how you would do the same with an electric motor as a generator, but I imagine that it would involve rejecting some of the surplus energy as waste, which would be inefficient.
Another thing to consider is that starting an engine and using it to generate electrical energy each require controlling very different amounts of electrical power. When starting the engine, a high amount of electrical power is needed, but only for a few seconds. When using the engine to generate power, typically only a small amount of power is needed, but it has to be generated continuously, as long as the engine is running. This means that the starter motor has to be built to withstand much higher current than the alternator, and the alternator has to be built to operate much longer than the starter.
It all comes down to the fact that the starter and the generator each have different design requirements. You could use one device that does the job of both, but it might be something of a "jack of all trades, master of none". The best solution for the time being seems to be the use of a separate starter and generator. That may change as technology advances though. | {
"domain": "engineering.stackexchange",
"id": 462,
"tags": "mechanical-engineering, electrical-engineering, electric-vehicles"
} |
Is there a working implementation of "Navigation Among Movable Obstacles" for a bi-pedal robot? | Question: I would like to have a better understanding of work in the field of "Navigation Among Movable Obstacles". I started off with Michael Stilman's thesis under James Kuffner, but that has not yet sated my appetite.
I am currently trying to simulate a scenario where debris (Tables and Table parts) from a disaster scenario block pathways. The debris forms part of a movable obstacle. The robot which will be used is a bipedal humanoid.
The thesis describes an approach to define the search space of possible actions leading from the start point to the goal. However, it assumes a mobile robot which works via gliding.
I think the state space definitions would change for a bipedal robot, which is why I wonder what other work is being done in this field. Perhaps the work of other research groups could give me clues as to how to design and perhaps reduce the search space for a bipedal humanoid robot.
An implementation of Navigation among Movable Obstacles would also aid me in understanding how to reduce the search space of possible actions.
So does anyone know of a working implementation of Navigation among movable obstacles?
Any supporting information about other professors or research groups working on similar problems would also be very useful.
I hope this edit is sufficient for the problem description.
Answer: Speaking about navigation among moving obstacles, look at Jur van den Berg's thesis. Also, LaValle provided a tutorial at ICRA12 http://msl.cs.uiuc.edu/~lavalle/icra12/ and the videos are here http://techtalks.tv/events/105/ but neither mentions the problem of bipeds.
"domain": "robotics.stackexchange",
"id": 44,
"tags": "navigation"
} |
Definition of space complexity when algorithm cycles. | Question: I'm reading side by side my class notes and Papadimitriou's Computational Complexity book. At this point they are talking about space complexity. They give rules for computing the space used by an algorithm that runs on a multi-tape Turing machine:
We count the cells used.
If we don't write on the input tape, these cells don't count.
If the output cells are written from left to right, they don't count.
The final requirement is expressed differently in Papadimitriou's book and in my notes. In the book it is written:
The cursor of the input string does not wander off into the blank
symbols after the end of the input. It is a useful technical requirement, but not necessary.
In my notes:
In an algorithm where space is counted, there can exist computations
that never end, but one can always transform the algorithm into
another that doesn't cycle.
So how does one measure the space complexity of an algorithm that may cycle forever? Are these statements equivalent to each other?
Answer:
So how does one measure the space complexity of an algorithm that may cycle forever?
When we say that some language $L$ belongs to some complexity class, it is assumed that a TM (algorithm) decides $x \in L$ using a finite amount of space and time. In other words, the TM/algorithm terminates, i.e. it may not cycle forever. However, an algorithm that loops forever may use finite space, but it still cannot decide.
But I don't understand the following statement
In an algorithm where space is counted, there can exist computations that never end, but one can always transform the algorithm into another that doesn't cycle. | {
"domain": "cs.stackexchange",
"id": 9547,
"tags": "turing-machines, space-complexity"
} |
Code to analyse text gets stuck if too much data | Question: I've made the following VBA script to analyse text recurrence in a huge batch of descriptions.
For a small part of the batch the code runs smoothly, but when I include everything it tends to lose control, get stuck, and both Excel and the VBE freeze.
What I did to avoid this (at least most of the time) is to include temporisation (DoEvents) and use the Immediate Window to show that the code is still "alive":
If Int(i / 1000) = i / 1000 Then
    Debug.Print i
Else
    If Int(i / 100) = i / 100 Then
        DoEvents
    Else
    End If
End If
I guess there are better ways to handle that kind of behavior in VBA, but I don't know.
Here is the full code, which is probably improvable:
Sub test_usedW()
    Dim A()
    A = get_most_used_words_array(An_Array, 1, True)
End Sub
Function get_most_used_words_array(ByVal ArrayToAnalyse As Variant, Optional ByVal ColumnToAnalyse As Integer = 1, Optional OutputToNewSheet As Boolean = False) As Variant
    Dim A() As String, _
        wb As Workbook, _
        wS As Worksheet, _
        Dic As Scripting.Dictionary, _
        DicItm As Variant, _
        NbMaxWords As Integer, _
        TpStr As String, _
        Results() As Variant, _
        DicItm2 As Object, _
        R(), _
        iA As Long, _
        i As Long, _
        j As Long, _
        k As Long, _
        c As Range

    Set wb = ThisWorkbook
    Set Dic = CreateObject("Scripting.Dictionary")
    Dic.CompareMode = TextCompare
    NbMaxWords = 5

    '--1--Scan the array
    For iA = LBound(ArrayToAnalyse, 1) To UBound(ArrayToAnalyse, 1)
        If ArrayToAnalyse(iA, ColumnToAnalyse) <> vbNullString Then
            '--2--Standardise the descriptions for more "conformity"
            ArrayToAnalyse(iA, ColumnToAnalyse) = CleanStr(ArrayToAnalyse(iA, ColumnToAnalyse))
            A = Split(ArrayToAnalyse(iA, ColumnToAnalyse), " ")
            DoEvents
            '--1--Add single words
            For i = LBound(A) To UBound(A)
                TpStr = CleanStr(A(i))
                If Len(TpStr) > 3 Then
                    If Not Dic.exists(TpStr) Then
                        Dic.Add TpStr, TpStr
                    Else
                        DoEvents
                    End If
                Else
                End If
            Next i
            '--1--Add expressions (several words)
            If NbMaxWords < 10 Then
                For i = LBound(A) To UBound(A)
                    For k = 2 To NbMaxWords
                        j = 0
                        TpStr = vbNullString
                        Do While j <= k And i + j <= UBound(A)
                            TpStr = TpStr & " " & CleanStr(A(i + j))
                            j = j + 1
                        Loop
                        TpStr = CleanStr(TpStr)
                        If Len(TpStr) > 3 Then
                            If Not Dic.exists(TpStr) Then
                                Dic.Add TpStr, TpStr
                            Else
                                DoEvents
                            End If
                        Else
                            DoEvents
                        End If
                    Next k
                Next i
            End If
        Else
        End If
    Next iA

    'Results = Application.Transpose(Dic.Items)
    ReDim Results(Dic.Count - 1)
    For i = 0 To Dic.Count - 1
        Results(i) = Dic.Items(i)
        If Int(i / 1000) = i / 1000 Then
            Debug.Print i
        Else
            If Int(i / 100) = i / 100 Then
                DoEvents
            Else
            End If
        End If
    Next i

    ReDim R(1 To UBound(Results), 3)
    Debug.Print "UBound(Results) : " & UBound(Results)
    For i = 1 To UBound(Results)
        R(i, 0) = Results(i) ', 1)
        R(i, 2) = Len(R(i, 0))
        For iA = LBound(ArrayToAnalyse, 1) To UBound(ArrayToAnalyse, 1)
            If ArrayToAnalyse(iA, ColumnToAnalyse) <> vbNullString Then
                'Refine the counting? Exclusive? InStr(" " & search & " ")?
                If InStr(1, ArrayToAnalyse(iA, ColumnToAnalyse), R(i, 0)) Then R(i, 1) = R(i, 1) + 1
                If InStr(1, ArrayToAnalyse(iA, ColumnToAnalyse), " " & R(i, 0) & " ") Then R(i, 3) = R(i, 3) + 1
            Else
            End If
        Next iA
        If Int(i / 1000) = i / 1000 Then
            Debug.Print i
        Else
            If Int(i / 100) = i / 100 Then
                DoEvents
            Else
            End If
        End If
    Next i
    DoEvents

    If OutputToNewSheet Then
        Set wS = wb.Worksheets.Add
        wS.Activate
        'ws.Range("A1").Resize(UBound(R, 1), UBound(R, 2)).Value = R
        For i = LBound(R, 1) To UBound(R, 1)
            For j = LBound(R, 2) To UBound(R, 2)
                If InStr(1, R(i, j), "=") Then
                    wS.Cells(i + 1, j + 1) = "'" & R(i, j)
                Else
                    wS.Cells(i + 1, j + 1) = R(i, j)
                End If
            Next j
        Next i
        DoEvents
    Else
    End If
    DoEvents
    get_most_used_words_array = R
End Function
And the functions to "simplify" text:
Function CleanStr(ByVal TheString As String)
    Dim SpA() As String
    Dim SpB() As String
    Dim i As Integer
    Const AccChars = "–| - | -|- |-| / | /|/ | . | .|. | , | ,|, | ) | )|) | ( | (|( |=| | | "
    Const RegChars = " | | | | |/|/|/|.|.|.|,|,|,|)|)|)|(|(|(|'=| | | "
    SpA = Split(AccChars, "|")
    SpB = Split(RegChars, "|")
    For i = LBound(SpA) To UBound(SpA)
        TheString = Replace(TheString, SpA(i), SpB(i))
    Next i
    CleanStr = StripAccent(Trim(Trim(TheString)))
End Function
Function StripAccent(ByVal TheString As String)
    Dim A As String * 1
    Dim B As String * 1
    Dim i As Integer
    Const AccChars = "àáâãäåçèéêëìíîïðñòóôõöùúûüýÿŠŽšžŸÀÁÂÃÄÅÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖÙÚÛÜÝ"
    Const RegChars = "aaaaaaceeeeiiiidnooooouuuuyySZszYAAAAAACEEEEIIIIDNOOOOOUUUUY"
    For i = 1 To Len(AccChars)
        A = Mid(AccChars, i, 1)
        B = Mid(RegChars, i, 1)
        TheString = Replace(TheString, A, B)
    Next i
    StripAccent = TheString
End Function
Answer: First:
Simple speed-enhancements
The 3 lowest hanging fruit in the VBA performance garden are
Application.ScreenUpdating = False
Application.EnableEvents = False
Application.Calculation = xlCalculationManual
Personally, I have the following standard Methods for dealing with those:
Option Explicit

Public varScreenUpdating As Boolean
Public varEnableEvents As Boolean
Public varCalculation As XlCalculation

Public Sub StoreApplicationSettings()
    varScreenUpdating = Application.ScreenUpdating
    varEnableEvents = Application.EnableEvents
    varCalculation = Application.Calculation
End Sub

Public Sub DisableApplicationSettings()
    Application.ScreenUpdating = False
    Application.EnableEvents = False
    Application.Calculation = xlCalculationManual
End Sub

Public Sub RestoreApplicationSettings()
    Application.ScreenUpdating = varScreenUpdating
    Application.EnableEvents = varEnableEvents
    Application.Calculation = varCalculation
End Sub
Which will return the settings to whatever they were before your sub runs. But, if you really want to do it properly, this question is a much better implementation.
And now, in rough order of when I encounter things in your code, these are my thoughts:
Your interruption check could be a lot better
If Int(i / 1000) = i / 1000 Then
    Debug.Print i
Else
    If Int(i / 100) = i / 100 Then
        DoEvents
    Else
    End If
End If
Personally, I prefer Mod() as in If i Mod 100 = 0 Then ...
Also, did you intend to call DoEvents every 100 iterations except for every 1000th iteration?
If not, it should be:
If i Mod 100 = 0 Then
    DoEvents
    If i Mod 1000 = 0 Then Debug.Print i
End If
On this note, i is not a very useful thing to Debug.Print. If somebody else runs your program (or if you have more than one thing to print to the Immediate Window) then it's going to be very difficult to figure out what is going on.
I recommend something like: Debug.Print "[Name of procedure / loop / some other descriptor] - Iteration Counter: " & i
Since it's in a For Loop, you already know how many iterations it should run for, so you should probably include that as well.
This:
For i = 0 To Dic.Count - 1
    Results(i) = Dic.Items(i)
    If Int(i / 1000) = i / 1000 Then
        Debug.Print i
    Else
        If Int(i / 100) = i / 100 Then
            DoEvents
        Else
        End If
    End If
Next i
Then Becomes:
For i = 0 To Dic.Count - 1
    Results(i) = Dic.Items(i)
    If i Mod 100 = 0 Then
        DoEvents
        If i Mod 1000 = 0 Then Debug.Print "Copy Dic to Results Array - Iteration Counter: " & i & " / " & Dic.Count - 1
    End If
Next i
And rather than seeing this in your immediate window:
1000
2000
3000
4000
You'll see
Copy Dic to Results Array - Iteration Counter: 1000 / 4192
Copy Dic to Results Array - Iteration Counter: 2000 / 4192
Copy Dic to Results Array - Iteration Counter: 3000 / 4192
Copy Dic to Results Array - Iteration Counter: 4000 / 4192
Much more useful.
Be Explicit
Sub is not just Sub; it is actually (implicitly) Public Sub
Same with Function --> Public Function
And Dim A --> Dim A As Variant
Methods should be Public or Private
Variables should have an explicit type (even if that type is intended to be Variant).
You do at least appear to be declaring your variables, so +1 for that.
Don't abuse the _ operator.
Dim A() As String, _
    wb As Workbook, _
    wS As Worksheet, _
    Dic As Scripting.Dictionary, _
    DicItm As Variant, _
    NbMaxWords As Integer, _
    TpStr As String, _
    Results() As Variant, _
    DicItm2 As Object, _
    R(), _
    iA As Long, _
    i As Long, _
    j As Long, _
    k As Long, _
    c As Range
Why do you want all these declarations on the same line?
Just declare them separately like so:
Dim A() As String
Dim wb As Workbook
Dim ws As Worksheet
Dim Dic As Scripting.Dictionary
Dim DicItm As Variant
Dim NBMaxWords as Integer
etc.
Now, you don't have to spend precious development time fiddling around with alignments and the inevitable missing / mis-typed _s that will crop up.
Good naming is really, really important
To quote developers far more experienced than I:
"There are only two hard things in computer science:
cache invalidation, naming things, and off-by-one errors."
Good names should be Clear, Concise and Unambiguous.
Variables should sound like what they are.
ArrayToAnalyse is a good name. It is "the array this function needs to analyse". Awesome.
TpStr is not. I haven't got the faintest idea what this thing is or what it's meant to represent. I just spent a minute looking for it in your code to try and figure it out and I've still got no idea what it really is, except that it invariably gets "cleaned" and then added to your dictionary.
A() and R() are particularly bad. I know they're arrays (due to their declaration) but I've got no idea what they're meant to be used for.
When I see A = Split(ArrayToAnalyse(iA, ColumnToAnalyse), " ") in your code, how am I meant to know that it should be A and not R?
Whereas if A was called, say, splitString and R was called resultsStorage then it's much easier to spot errors. (I don't actually know what R should be called, your names make it difficult to figure out what's actually going on and why).
Also,
Standard VBA Naming conventions have camelCase for local variables, and PascalCase only for sub/function names and Module/Global Variables. This allows you to tell at a glance if the variable you're looking at is local to your procedure, or coming from somewhere else.
So:
Dim localScope as Variant
Private ModuleScope as Variant
Public GlobalScope as Variant
Public/Private Const CONSTANT_VALUE as String = "This value never changes"
Public Sub DoThisThing (ByRef firstParameter as Variant)
Following standard conventions is good because it allows other developers to easily read and understand your code.
"domain": "codereview.stackexchange",
"id": 18516,
"tags": "vba, error-handling, excel, time-limit-exceeded"
} |
How do eukaryotes terminate transcription? (clarification on Campbell Biology) | Question: I'm having trouble understanding how eukaryotes terminate transcription. Studying Campbell Biology (pg. 342, 10th ed.), I read:
In eukaryotes, RNA polymerase II transcribes the polyadenylation signal sequence on DNA, which specifies a polyadenylation signal (AAUAAA) in the pre-mRNA. This is called a "signal" because once this stretch of six RNA nucleotides appears, it is immediately bound by certain proteins in the nucleus.
The text continues with:
Then, at a point about 10-35 nucleotides downstream from the AAUAAA, these proteins cut it free from the polymerase, releasing the pre-mRNA.
Does this mean pre-mRNA continues to be synthesized for 10-35 nucleotides after AAUAAA is transcribed before finally being cut off?
Thanks so much.
Answer: According to Lewin's Genes XI - Krebs et al., on Eukaryotic Transcription, RNA Splicing, and Processing:
It is not clear whether RNA polymerase II actually engages in a termination event at a specific site. It is possible that its termination is only loosely specified. In some transcription units, termination occurs more than 1000 bp downstream of the site, corresponding to the mature 3' end of the mRNA (which is generated by cleavage at a specific sequence). Instead of using specific terminator sequences, the enzyme ceases RNA synthesis within multiple sites located in rather long "terminator regions." The nature of the individual termination sites is largely unknown.
It does go on to say,
The site of cleavage/polyadenylation in most pre-mRNAs is flanked by two cis-acting signals: an upstream AAUAAA motif, which is located 11 to 30 nucleotides from the site, and a downstream U-rich or GU-rich element. The AAUAAA is needed for cleavage and polyadenylation because deletion or mutation of the AAUAAA hexamer prevents generation of the polyadenylated 3' end (though in plants and fungi there can be considerable variation from the AAUAAA motif).
I should add that the significance of 11 to 30 (or, as you said, 10 to 35) nucleotides is that it tells you the approximate size of the protein complex sitting on the RNA and where the active site of the protein is. The domain that recognizes the termination sequence is about 10 to 35 bases wide. It is likely that the variation is due to the ability of the RNA sequence following the termination sequence to form a stem-loop structure.
Campbell takes a necessarily basic and less nuanced approach to the information that they provide on molecular biology. The approach is to provide some details and show the evolutionary commonalities between a diverse range of biological systems. The approach is okay as it gets you into the frame of mind of thinking about the interconnectedness of biological systems.
As you progress in your studies you will see that the reality and specifics are far more complex and exception based than was presented in introductory courses. But the more advanced courses spend an entire semester focused on what Campbell covers in one chapter. Campbell is accurate but incomplete out of necessity. | {
"domain": "biology.stackexchange",
"id": 4525,
"tags": "genetics, dna, rna, transcription"
} |
About the three point function at one loop order | Question: Could someone explain how exactly you calculate the trace of the three-point function at one loop in QED? The expression is 1. a (2) in the following link:
http://learn.tsinghua.edu.cn:8080/2010310800/Webpage/files/Peskin_Schroeder/Peskin_Chap10.pdf
Answer: The source you linked to looks for the divergent part of the integral in a high energy limit ($m \to 0$). If you want to compute the whole thing, here's how you would do it. Focus on just one of the two traces:
1) "Rationalize" all the propagators by doing something like
$$
\frac{1}{\gamma^\mu k_\mu - m} = \frac{\gamma^\mu k_\mu + m}{k^2 - m^2}
$$
1)[Result] Now the numerator is something like
$$ \prod_i \left\{ \left( \gamma^{\mu_i} p_{i,\mu_i} + m_i \right) \text{ or } \gamma^{\mu_i} \right\} $$
i.e. a product of terms that are either a gamma matrix or consist of some slashed momentum with a possible mass term.
2) Expand this product term by term. For example, pick $m$ every time, or pick the slashed momentum every time.
2)[Result] Now the matrix part looks like
$$
\sum_{terms} m^n p_{1,\mu_1} \cdots p_{j,\mu_j} tr \left[ \gamma^{\mu_1} \cdots \gamma^{\mu_i} \cdots \gamma^{\mu_j} \right]
$$
Some subset of the gamma matrices in the trace are contracted with the momenta four-vectors, but there are a number of un-contracted gamma matrices. In fact, for this example there should be three.
3) For each term, use the following trace identities to simplify (there are others, but these are the most relevant ones):
trace of an odd number of gamma matrices is 0
$ tr \left[ \gamma^\mu \gamma^\nu \right] = 4 g^{\mu \nu} $
$ tr \left[ \gamma^\mu \gamma^\nu \gamma^\rho \gamma^\sigma \right] = 4 \left( g^{\mu \nu} g^{\rho \sigma} - g^{\mu \rho} g^{\nu \sigma} + g^{\mu \sigma} g^{\nu \rho} \right)$
the cyclic property: $ tr \left[ \gamma^\mu \cdots \gamma^\nu \right] = tr \left[ \gamma^\nu \gamma^\mu \cdots \right] $
You might frequently need to use other gamma matrix identities such as:
$$ \left\{ \gamma^\mu, \gamma^\nu \right\} = 2 g^{\mu \nu} $$
You'll need this when figuring out the trace of a product of 6, 8, etc. gamma matrices, in which case you can use the anticommutation relation to move a gamma matrix all the way across, hop it back with the cyclic property, and then rearrange terms to write the formula for the trace of N gamma matrices in terms of traces of N-2 gamma matrices. This is how you'd prove the identity for the trace of 4 gamma matrices, for example.
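As a sanity check, the trace identities above can be verified numerically with an explicit matrix representation. The sketch below (my addition, not part of the original answer) uses the Dirac representation of the gamma matrices; the choice of representation is an assumption, but the traces themselves are representation-independent:

```python
import numpy as np

# Pauli matrices and the Dirac representation of the gamma matrices
# (one possible choice; the trace identities hold in any representation).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

gamma = [np.block([[I2, Z2], [Z2, -I2]])]                        # gamma^0
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]   # gamma^1..3
g = np.diag([1.0, -1.0, -1.0, -1.0])                             # metric (+,-,-,-)

# tr[gamma^mu] = 0 (trace of an odd number of gammas vanishes)
for G in gamma:
    assert np.isclose(np.trace(G), 0)

# tr[gamma^mu gamma^nu] = 4 g^{mu nu}
for mu in range(4):
    for nu in range(4):
        assert np.isclose(np.trace(gamma[mu] @ gamma[nu]), 4 * g[mu, nu])

# tr[gamma^mu gamma^nu gamma^rho gamma^sigma]
#   = 4 (g^{mu nu} g^{rho sigma} - g^{mu rho} g^{nu sigma} + g^{mu sigma} g^{nu rho})
for mu in range(4):
    for nu in range(4):
        for rho in range(4):
            for sig in range(4):
                lhs = np.trace(gamma[mu] @ gamma[nu] @ gamma[rho] @ gamma[sig])
                rhs = 4 * (g[mu, nu] * g[rho, sig]
                           - g[mu, rho] * g[nu, sig]
                           + g[mu, sig] * g[nu, rho])
                assert np.isclose(lhs, rhs)

print("trace identities verified")
```

The same setup can be used to spot-check any individual contracted term before trusting a long hand computation.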
3) [Result] Now the expression is a sum of terms involving Lorentz contractions of the various momenta involved.
I don't mean these steps to be the exact order you should compute things in. Doing things in exactly this order is definitely not the most efficient way to proceed. But, this is the overall scheme. | {
"domain": "physics.stackexchange",
"id": 9671,
"tags": "homework-and-exercises, quantum-electrodynamics, trace"
} |
stereo_proc NO subscription so NO stereo_images | Question:
Hi everybody
I was wondering if someone has ever seen something like that.
I'm using two uEye USB camera with a personnal arduino trigger.
Everything works fine. The calibration setup is OK.
But when I try to launch the stereo_proc node,
I can clearly see with rxgraph (or rosnode info) that stereo/proc has no subscriptions.
And according to the stereo_image_proc tutorial I should have
Subscriptions:
/stereo/left/image_raw [sensor_msgs/Image]
/stereo/left/camera_info [sensor_msgs/CameraInfo]
/stereo/right/image_raw [sensor_msgs/Image]
/stereo/right/camera_info [sensor_msgs/CameraInfo]
And I'm thinking that's why I can not go further with my stereo bench.
Here is my launch file.
<launch>
<!-- first camera and associated image pipeline -->
<group ns="stereo_test" >
<!-- left camera -->
<node pkg="cameraUEYE" type="cameraUEYE_node" name="cameraUEYE_left_node" >
<param name="guid" value="1" />
<param name="frame_id" value="cam_left" />
<param name="use_ros_time" value="true" />
<param name="use_external_trigger" value="true" />
<param name="frame_rate" value="50" />
<param name="camera_info_url" value="file:///home/vvittori/.ros/camera_info/UEYE_USB_1.yaml"/>
<remap from="camera" to="left" />
</node>
<!-- right camera -->
<node pkg="cameraUEYE" type="cameraUEYE_node" name="cameraUEYE_right_node" >
<param name="guid" value="2" />
<param name="frame_id" value="cam_right" />
<param name="use_ros_time" value="true" />
<param name="use_external_trigger" value="true" />
<param name="frame_rate" value="50" />
<param name="camera_info_url" value="file:///home/vvittori/.ros/camera_info/UEYE_USB_2.yaml"/>
<remap from="camera" to="right" />
</node>
<!-- STEREO IMAGE PROC -->
<node pkg="stereo_image_proc" type="stereo_image_proc" respawn="false" name="proc" args="_approximate_sync:=True" />
</group>
<!-- monitoring and configuration tools -->
<node pkg="rxtools" type="rxconsole" name="rxconsole" />
<node pkg="dynamic_reconfigure" type="reconfigure_gui"
name="reconfigure_gui" />
</launch>
Any hints are welcomed !
Thanks
Regards
EDIT :
Thanks to Patrick I was able to understand that part in the wiki page! (Cf. Patrick's answer)
I was still blocked at the rosnode info stereo/proc results that they give us.
Anyway I've tried what he told me with the following command line (I have changed stereo_test to stereo in my launch file)
ROS_NAMESPACE=stereo rosrun stereo_image_proc stereo_image_proc __name:=PROC
and
rosrun image_view stereo_view stereo:=/stereo image:=image_rect
as written in the wiki page.
Then I did see what you told me: right after I launched the previous command, it subscribed to
[ INFO] [1334565947.793592670]: Subscribing to:
* /stereo/left/image_rect
* /stereo/right/image_rect
* /stereo/disparity
and yet I had the following error that occurred:
OpenCV Error: Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows) in Mat, file /tmp/buildd/libopencv-2.3.1+svn6514+branch23/modules/core/src/matrix.cpp, line 303
OpenCV Error: Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows) in Mat, file /tmp/buildd/libopencv-2.3.1+svn6514+branch23/modules/core/src/matrix.cpp, line 303
terminate called after throwing an instance of 'cv::Exception'
what(): /tmp/buildd/libopencv-2.3.1+svn6514+branch23/modules/core/src/matrix.cpp:303: error: (-215) 0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows in function Mat
Aborted
Do you know what it could be?
Thanks
Regards
V.v
EDIT 2 :
Here is a bag file of the following topics when the OpenCV error occurs. (Sorry for the external link, I don't have enough karma ...)
http://dl.free.fr/felDpzi5V
rosbag record -O stereo_proc_fail /stereo/right/camera_info /stereo/left/camera_info /stereo/left/image_raw /stereo/right/image_raw /stereo/left/image_rect /stereo/right/image_rect
Best Regards
Originally posted by Vincent_V on ROS Answers with karma: 91 on 2012-04-13
Post score: 1
Original comments
Comment by joq on 2012-04-13:
Shouldn't the namespace be stereo and not stereo_test?
Comment by Vincent_V on 2012-04-15:
It was a "test" choice but I think that it is not a big deal as long as you specify it in the ROS_NAMESPACE. I may be wrong, I'll check it!
Comment by Eric Perko on 2012-04-16:
Are your camera_info and image sizes set properly? Could you include a short bagfile with your cameras streaming images and infos that reproduces that OpenCV assertion?
Comment by Vincent_V on 2012-04-16:
I don't know what "properly set" means to you? The bag file you asked for is now in the main post. Thank you
Answer:
Have you subscribed to any of stereo_image_proc's advertised topics (e.g. stereo_test/disparity)? To avoid unnecessary computation (both in stereo_image_proc and the source camera driver nodes) it only subscribes on demand. From the wiki docs:
All processing is on demand. Color
processing is performed only if there
is a subscriber to a color topic.
Rectification is performed only if
there is a subscriber to a rectified
topic. Disparities and point clouds
are generated only if there is a
subscriber to a relevant topic. While
there are no subscribers to output
topics, stereo_image_proc unsubscribes
from the image_raw and camera_info
topics.
Originally posted by Patrick Mihelich with karma: 4336 on 2012-04-13
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 8962,
"tags": "stere-image-proc"
} |
The spin and weight of a primary field in CFT | Question: A primary field in Conformal Field Theory transforms as
$$\phi (z,\bar{z}) =\left(\frac{dz}{dz'} \right)^h \left(\frac{d\bar{z}}{d\bar{z}'} \right)^\bar{h}\phi (z',\bar{z}') $$
under a conformal transformation.
I read in chapter 2 page 41 in Strings, Conformal Fields and M-theory by M.Kaku that $h+\bar{h}$ is called a conformal weight and $h-\bar{h}$ a conformal spin.
What is the motivation, especially for the spin-one, for these names?
Answer: Both $h$ and $\tilde{h}$ are usually called weights. Their sum, $\Delta=h+\tilde{h}$ is the (scaling) dimension of the operator, while the difference, $s=h-\tilde{h}$ is called the spin. This is due to their association with scale transformations (dilatations) and rotations, respectively. To see this, note that the dilatation operator is given by $D=z\partial+\bar{z}\bar{\partial}$ and the rotation operator by $L=z\partial-\bar{z}\bar{\partial}$. The eigenvalues of a primary under these transformations are given by its scaling dimension $\Delta$ and its spin $s$. | {
"domain": "physics.stackexchange",
"id": 17998,
"tags": "string-theory, soft-question, quantum-spin, terminology, conformal-field-theory"
} |
Ansatz for a looped open string | Question: Trying to think of a suitable ansatz for a 2-dimensional open string where both endpoints are attached to a $D0$ brane at $(0,0)$, creating a loop.
The ansatz is for $F'(u)$, since Dirichlet boundary conditions impose
$\left|\frac{d\vec{F}(u)}{du}\right|^2=1$ for all $u$.
The string $\vec{X}(t,\sigma)$ is given by
$\vec{X}(t,\sigma)=\frac{1}{2}\left(F(ct+\sigma)-F(ct-\sigma)\right)$
I know that $F'(u)$ must be of the form $[\cos(\alpha),\sin(\alpha)]$ such that the first condition is satisfied, but I'm having trouble trying to think of how to construct an ansatz.
A similar problem is in Zwiebach's A First Course in String Theory, problems 7.5 & 7.6, but there the open string is attached to two $D0$ branes - one at $(0,0)$ and one at $(a,0)$. The (successful) ansatz for that case is given by
$\vec{F}'(u)=\left(\cos\left[\gamma\cos\frac{\pi u }{\sigma_1}\right],\sin\left[\gamma\cos\frac{\pi u }{\sigma_1}\right]\right)$
Is there a way i can transform this such that the two endpoints converge at (0,0)? Or is there something easy i am missing to make a looped string?
EDIT
I have now found that for periodicity to be satisfied I also need that
$F(u+2\sigma_1)-F(u)=(2a,0)$
Which implies
$\int_{0}^{2\sigma_1}F'(u)du=(2a,0)$
I will try setting $a$ to zero and see if this helps in finding my $F'(u)$.
Answer: It can be shown that the ansatz for $x$ equates to a Bessel function of the first kind (with a change of variable for $x$).
$$
\frac{a}{\sigma_1}=\frac{1}{2\pi}\int_{0}^{2\pi}\cos(\gamma\cos(x))dx=J_0(\gamma)
$$
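The identity above can be checked numerically. The sketch below (my addition, not part of the original answer) sums $J_0$ from its power series and averages $\cos(\gamma\cos x)$ over a period with the trapezoid rule; the value used for the first Bessel zero is the standard numerical one:

```python
import math

def j0_series(x, terms=40):
    # J_0(x) = sum_k (-1)^k (x/2)^{2k} / (k!)^2
    return sum((-1) ** k * (x / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def mean_cos_gamma_cos(gamma, n=4096):
    # (1/2pi) * integral_0^{2pi} cos(gamma cos x) dx by the trapezoid rule
    # (which converges very fast for smooth periodic integrands)
    h = 2 * math.pi / n
    return sum(math.cos(gamma * math.cos(i * h)) for i in range(n)) / n

first_zero = 2.404825557695773  # first zero of J_0, known numerical value

# the average of cos(gamma cos x) really is J_0(gamma)
for gamma in (0.5, 1.0, 2.0, first_zero):
    assert abs(mean_cos_gamma_cos(gamma) - j0_series(gamma)) < 1e-9

# a = 0 requires J_0(gamma) = 0, i.e. gamma sitting at a Bessel zero
assert abs(j0_series(first_zero)) < 1e-12
print("J0 identity and first Bessel zero verified")
```

With scipy available, `scipy.special.jn_zeros(0, k)` would give the higher zeros needed for the other animations.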
By setting $a$ to zero, we see that $\sigma_1$ is then arbitrary and we merely need to locate the zeros of the Bessel functions of the first kind. This is an intriguing example as there are an infinite number of zeros. We can set the $\gamma$ in $x$ and the $\gamma$ in $y$ to different values as long as each is a solution to $J_0(\gamma)=0$.
One of the funkier graphs I found was setting $\gamma$ in $x$ to be the 9th Bessel zero, and the $\gamma$ in $y$ to be the 8th Bessel zero. Putting $t=\sigma_1 \cdot 1.7$ we get the following graph:
Now going to try and animate this somehow.
EDIT 2
I managed to animate it!!
I present to you... Toby Functions (pending patent).
Since there are infinite Bessel zeros, there are $\infty !$ (you can change $\gamma$ for $x$ and $y$) possible animations!! If you like I can give you the code for this and you can have a play around - just message me for it or I'll post it if there is anyone out there reading this...
Note: For a correct string solution, $\gamma_1$ must equal $\gamma_2$ such that we satisfy the condition for:
$\left(\frac{dF}{du}\right)^2=1 \implies \cos^2(\theta)+\sin^2(\theta)=1$
So we have a possible $\infty !$ animations, but only $\infty$ correct string animations. | {
"domain": "physics.stackexchange",
"id": 39373,
"tags": "string-theory"
} |
Beyond Standard Model | Question: (I am a mathematician, not a physicist, so I don't know what to read in the physics literature, I don't know the journals, where the surveys are, etc...). I read in standard newspapers, so not in scientific journals, that physicists in elementary particle physics were very worried about the future of this field of research because only machines the size of our galaxy could help to choose among all possible extensions of the Standard Model. Is this serious? To make this question a little bit more serious: do there exist papers (probably yes) giving a general picture (for a mathematician, not a physicist!) of the possible theories beyond the Standard Model?
Answer: Yes, the point about (some of) the energy scales requiring "an accelerator the size of the galaxy" is serious. But only in certain cases, and only if we stick with the accelerator designs we currently use; with other designs it's possible we'll be able to reach the required energies here on Earth.
In addition, we already use astronomy to perform particle physics experiments. Remember the scare stories about the LHC "creating a black hole that would destroy Earth"? These weren't taken seriously in the physics community because collisions 10 to 1,000 times more energetic happen in the atmosphere every few minutes-to-days when cosmic rays impact.
We study particles (usually muons) from these collisions, neutrinos from the Sun and many other particles from astronomical events. It's likely that if beyond-standard-model particles exist, and are beyond the practicalities of human-made accelerators, then we'll eventually find them in high energy astronomical events.
Your first stop in researching beyond-standard-model particles should probably be supersymmetry (SUSY). Start with wikipedia and then take a look at arXiv.org for a vast range of papers. | {
"domain": "physics.stackexchange",
"id": 34109,
"tags": "particle-physics, soft-question"
} |
What is the most efficient means of warming a building with a high ceiling? | Question: Consider a large auditorium, a church, or some other very large, essentially one roomed building with a high ceiling. Suppose that the building has many entrances which enable cold air in/hot air out and traffic in and out of the building is unavoidably large.
I would imagine that any attempts to control the temperature of such a large building would be very inefficient in terms of energy and cost, particularly because warm air rises.
Assuming that it's very cold outside and we're interested primarily in keeping the building warm at the ground level so that it is comfortable for humans to work and interact, what is the best method to keep such a large, essentially one roomed building with a high ceiling warm when outdoor cold air exposure is frequent and unavoidable?
When I say "best," I'm interested in balancing energy, maintenance, and monetary costs over the life of the building.
Answer: Your question sort of has two parts: How to supply heat, and how to keep it in.
Large open rooms with high ceilings are most efficiently warmed with radiant ceiling heat. Warm air rises, which renders forced-air systems inefficient because the pumped heat ends up at the ceiling and the coldest part of the room is near the floor where you actually want the heat. Radiant floor systems are limited to about 87F because they are in contact with occupants, and so their peak output may not be enough to keep the space comfortable. They also lose more heat to convection than radiant ceilings. (See this ref.)
As for keeping heat in, besides solid insulation/barriers: air doors (a.k.a. air curtains) are the standard solution in high traffic passages. | {
"domain": "engineering.stackexchange",
"id": 45,
"tags": "civil-engineering, building-physics, energy, hvac"
} |
Comparison of voltage to gravitational potential | Question: I'm on my way understanding what Voltage is, and came across this great video explaining the concept of electrical potential using an analogy to gravitational potential. I'll write what I understood from it and what I didn't.
Gravitation
With gravitation, we associate an object with a gravitational potential energy, measured in Joules, that it possesses when placed at some point in space. This is how much work the object can do on its way down to the lowest position possible, or conversely, how much work is needed to lift it up to where it is now. This value is a function of the height, the mass of the object, and the gravitational strength.
We associate gravitational potential to a position in space, measured in Joules per kg, which tells us how much potential energy 1 kg will hold when placed in that position. This value is a function of the strength of the gravitational field, and the height of that position above another position to which an object can fall.
When we place an object in some height, say 2 meters, and let it fall to a height of 0.5 meter, each kg of that object is losing 1.5 * 9.8 Joules of potential gravitational energy when it gets to the lower position. The object will fall because nature tends to lower potential energy.
So we can say that two points in space are associated with a gravitational potential difference, which is how much work 1 kg of mass will do when it falls from the higher to the lower.
Electricity
Now let’s talk about electricity. Instead of mass we talk about charge, and instead of gravitation we talk about electric field.
A charge, when placed in an electric field, is associated with electrical potential energy, measured in Joules, that it possesses when held in that place. This is how much work that charge can do when released and repelled by the electric field, or conversely, how much work needs to be done to get it to this place. This value is a function of the strength of the electric field, the size of the charge (number of coulombs), and the distance from the charge creating the field.
We associate electrical potential to a position in space, measured in Joules per Coulomb. This is how much potential energy 1 Coulomb of charge will hold when placed in that position. This value is a function of the strength of the electrical field and the distance from the charge creating it.
As with gravity, when we place a charge in an electrical field - a place with some electrical potential associated with it - and let it be repelled to a point that has less electrical potential, each coulomb of charge will lose the difference of electric potential between the two points. If the difference in electrical potential between the two points is 9 Joules/Coulomb, each coulomb will lose 9 Joules of electrical potential energy when moved from the higher potential point to the lower.
Voltage
Voltage is the difference in electric potential between two points. A battery's voltage for example is the difference in electrical potential between its two poles. This difference is basically the amount of Joules of energy 1 Coulomb has when placed at the positive pole more than it has when placed at the negative pole. A coulomb placed at the positive side of a 9 Volt battery can do 9 Joules of work more than a coulomb at the negative side. The prof. in the video compares it to a ball placed 2 meters above the ground, which we hold above a table placed 0.5 meter above the ground. There's a difference of gravitational potential between the two points: the top has 2 * 9.8 Joules/kg and the bottom has 0.5 * 9.8 Joules/kg. When the ball is released, each kg will lose 1.5 * 9.8 ≈ 14.7 Joules of gravitational PE.
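The arithmetic of the ball-and-table example can be written out explicitly. This is a trivial sketch of my own (not from the video); the last two lines are the electrical analogue:

```python
g = 9.8                      # gravitational field strength, N/kg
h_top, h_table = 2.0, 0.5    # heights of the release point and the table, m

# gravitational potential at each point, J/kg, measured from the ground
phi_top = g * h_top
phi_table = g * h_table

# each kg falling from 2 m to the table loses the *difference* in potential
dphi = phi_top - phi_table
assert abs(dphi - 14.7) < 1e-9   # 1.5 * 9.8 J per kg

# electrical analogue: each coulomb crossing a 9 V difference loses 9 J
voltage = 9.0   # V, i.e. J/C
charge = 1.0    # C
assert charge * voltage == 9.0

print(f"{dphi} J/kg, {charge * voltage} J/C")
```

In both cases only the difference between the two points enters; the reference level cancels out.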
What I don't understand
My question is as follows: Consider the analogy to the ball placed above a table. Eventually, the bottom point (i.e. the table) has zero gravitational potential energy. The professor presents it as a point with some gravitational potential, but as far as we're concerned it has no potential, since once the ball reaches that point it will have no potential energy: it can't fall anymore. So when we talk about gravitational potential, we are really talking about the potential of one point (some point from which the object can fall), and that potential is a function of the height above that bottom point. We can express that height as the difference of the distances of the two points from a third point, but what is the use of this?
I guess my question comes down to this: We say that Voltage is the difference in electrical potential between two points, and we define the electrical potential of a position as the amount of Joules of work that can be done by 1 Coulomb of charge placed in that position. From this I understand that for a given position there is some potential, regardless of any other point, and voltage is the difference in electrical potential between two points. Put another way, the professor in the video explains that when we say that this battery is 9 Volts, we're actually saying that one Coulomb placed at the positive side can do 9 Joules of work more than a Coulomb placed at the negative side. From that I understand that there is some amount of work that a Coulomb at the positive side can do, and another amount of work that a Coulomb at the negative side can do, and the difference between them is the voltage of the battery.
But I understand that it's not the case... Can someone clear this for me?
Answer: The electrostatic force and the gravitational force are both conservative forces, meaning that the work done by the force (change in kinetic energy) is independent of the path taken and only depends on the starting and ending positions. The negative of the work done by a conservative force is defined as a difference in potential energy.
The electric field $\vec E$ is defined as the force per unit charge, and voltage is the potential energy per unit charge (joule per coulomb) for an electrostatic field. $V = -\int_{r_a}^{r_b} \vec E \cdot d \vec r$ where $V$ is the difference in voltage from position $r_a$ to $r_b$.
Similarly, $PE_g = -\int_{r_a}^{r_b} \vec F_g \cdot d \vec r$ where $PE_g$ is the difference in gravitational potential energy from position $r_a$ to $r_b$, and $\vec F_g$ is the force of gravity.
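The line-integral definition can be checked numerically for the field of a point charge, where the closed form is known. This is a sketch of my own; the source charge and radii below are illustrative values, not from the video:

```python
import math

k = 8.9875517923e9   # Coulomb constant, N m^2 / C^2
q = 1e-9             # source charge, C (illustrative)
ra, rb = 0.1, 0.5    # start and end radii, m

# V = -int_{ra}^{rb} E dr with E = k q / r^2, evaluated by the midpoint rule
n = 200_000
h = (rb - ra) / n
integral = sum(k * q / (ra + (i + 0.5) * h) ** 2 for i in range(n)) * h
dV = -integral

# closed form: V(rb) - V(ra) = k q (1/rb - 1/ra); here it is negative,
# since moving away from a positive charge lowers the potential
assert math.isclose(dV, k * q * (1 / rb - 1 / ra), rel_tol=1e-6)
print(f"V(rb) - V(ra) = {dV:.4f} V")
```

The same midpoint-rule check works for the gravitational formula by swapping $kq/r^2$ for $GMm/r^2$.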
With respect to your question, a 9V battery has a difference in voltage of 9 V across its terminals. The voltage of either terminal with respect to say the earth is undefined unless one of the terminals is connected to earth, in which case that terminal has the same voltage as earth, which is typically taken as $0$ volts. No matter the absolute potential of either terminal with respect to earth, the difference in voltage between the battery terminals is 9 V. A charge of $q$ coulombs (positive) experiences a change of voltage of -9 V (a change in potential energy of -9*q joules) when moving from the positive to the negative terminal, typically through an electrical circuit attached to the battery. For a purely resistive circuit the decrease in potential energy is equal to the change in internal energy (heating) of the resistance. (In reality negatively charged particles, electrons, move from the negative terminal to the positive terminal but the effect is the same as described for a positive charge.)
"domain": "physics.stackexchange",
"id": 79611,
"tags": "electrostatics, newtonian-gravity, potential, potential-energy, voltage"
} |
How can i modify FASTA headers in a multi fasta file using BioPython SeqIO | Question: I have a multi fasta file similar to this (relatively new here so uncertain of the best way to present this; I have gone for an output and the code I used to make it - belt and braces...):
# create a test dataset
with open('test_dat.txt', 'w') as test_dat:
l1 = [">AAA a aa", ">BBB b bb", ">CCC c cc"]
l2 = ["ATGATG", "GACGAC", "TTGTTG"]
for i in range(len(l1)):
test_dat.write(l1[i] + "\n")
test_dat.write(l2[i] + "\n")
# which outputs something like this
>AAA a aa
ATGATG
>BBB b bb
GACGAC
>CCC c cc
TTGTTG
I want to modify the fasta headers in my fasta file by adding some text to the beginning and, ideally, stripping out spaces (as I was advised that white space in the headers could cause problems for me down the line). My plan was to do this using BioPython and SeqIO.
I started like this:
# my attempt at altering the fasta headers using SeqIO
from Bio import SeqIO
# open my test data and create an output file
with open('test_dat.txt', 'r') as inputs, open('test_dat_out.txt', 'w') as outputs:
# create dictionary of the fasta sequences using SeqIO
my_dict = SeqIO.to_dict(SeqIO.parse(inputs, "fasta"))
# renaming my fasta headers - adding "TEST~~~TEST~~~" to front of header and remove white space
for v in my_dict.values(): # using the values() to loop through...
v.description = "TEST~~~TEST~~~" + v.description.replace(" ", "_")
# write result to file
SeqIO.write(my_dict.values(), outputs, 'fasta')
which gave me a result with some duplication in the fasta header:
#result i want
>TEST~~~TEST~~~AAA_a_aa
ATGATG
#result i get
>AAA TEST~~~TEST~~~AAA_a_aa
ATGATG
If I modify both the .id and .description in exactly the same way, as suggested here, but without removing white space:
# renaming my fasta headers - adding "TEST~~~TEST~~~" to front of header
for v in my_dict.values(): # using the values() to loop through...
v.id = "TEST~~~TEST~~~" + v.id
v.description = "TEST~~~TEST~~~" + v.description
I get almost what I want, but the header still has spaces:
>TEST~~~TEST~~~AAA a aa
ATGATG
I have tried to make the .id and .description match by adding a "_" to the end of .id, or by adding .replace(" ", "_") to both .id and .description (despite there being no spaces in .id), but I just can't seem to get exactly the output I want.
I would really appreciate any pointers on this.
Answer: You're very close, but unfortunately for legacy reasons this aspect of the Biopython parser is not very intuitive. The key point is the .id is anything up until the first white space character. One day I hope the .description and .id attributes are replaced by a single attribute like .header. Anyway, here is a solution for your case:
from Bio import SeqIO
to_add = "TEST~~~TEST~~~"
with open("out.fa", "w") as outputs:
for r in SeqIO.parse("in.fa", "fasta"):
r.id = (to_add + r.description).replace(" ", "_")
r.description = r.id
SeqIO.write(r, outputs, "fasta")
Output on your sample input:
>TEST~~~TEST~~~AAA_a_aa
ATGATG
>TEST~~~TEST~~~BBB_b_bb
GACGAC
>TEST~~~TEST~~~CCC_c_cc
TTGTTG | {
"domain": "bioinformatics.stackexchange",
"id": 1429,
"tags": "fasta, python"
} |
Computing the number of permutations of n consecutive numbers | Question: I'm just starting to learn C++ and have been experimenting with various functions. Here is one that computes the number of permutations of n consecutive numbers as well as all possible permutations provided n. I'd love to receive any form of tips and criticisms on improving this function.
#include <string>
#include <iostream>
#include <vector>
#include <sstream>
#include <fstream>
using namespace std;
string rotateRt(string input);
vector<string> permute(int num);
int main(){
cout << "n: ";
int n;
cin >> n;
vector<string> permutations(permute(n));
ofstream myfile;
myfile.open("rosalind_perm.txt");
myfile << permutations.size() << endl;
for(int i = 0; i < permutations.size(); i++){
myfile << permutations.at(i) << endl;
}
myfile.close();
}
/* Rotate input string right by 1 character */
string rotateRt(string input){
int len = input.length();
string a, b;
if(len == 0){
return "";
}
else if(len == 1){
return input;
}
else{
a = input.substr(0, len - 2);
b = input.substr(len - 1) + " ";
return b + a;
}
}
/* To compute permutations we compute the permutations of num - 1,
append num to the front of each permutation, and find every cyclic
rotation. */
vector<string> permute(int num){
vector<string> permutations;
ostringstream convert;
if(num == 1){
permutations.push_back("1");
}
else{
vector<string> prevPermutations(permute(num - 1));
int size = prevPermutations.size();
string prefix, suffix, word;
convert << num;
prefix = convert.str();
convert.str("");
for(int i = 0; i < size; i++){
suffix = prevPermutations.at(i);
word = prefix + " " + suffix;
int length = word.length();
for(int j = 0; j < length / 2 + 1; j++){
permutations.push_back(word);
word = rotateRt(word);
}
}
}
return permutations;
}
Answer: In addition to Jamal's good points:
1: Consider sorting your #includes alphabetically. It's not a big issue, but I think it looks more tidy, and it's easier to see if something is already included or not.
2: Consider typedef vector<string> StringCollection (or some other meaningful name). This makes it easier to change the underlying data structure later, and makes the code look cleaner.
3: If you are using C++11, I'd write the for loop like this:
for (auto const& number : permutations) {
myfile << number << endl;
}
Since you are learning, I see no reason not to get into C++11 right away.
4: Tutorials and the like often use mySomething as identifiers. my is a bad identifier prefix. Instead, write what role it has or what it represents. outputFile is much better than myFile.
5: Initialize variables right away. The std::ofstream constructor takes a filename, so there is no need to first initialize and then call open().
6: Instantiate variables as late as possible. In your rotateRt() (which should be called rotateRight(), by the way), you don't have to create your strings unless the last else triggers. Change it to this:
else {
string a = input.substr(0, len - 2);
string b = input.substr(len - 1) + " ";
return b + a;
}
The same applies to for example ostringstream convert.
When you instantiate a variable as late as possible, you avoid constructing it unless you really have to. "As late as possible" implies as close as possible to the relevant context, which makes the code easier to read.
7: Consider rewriting permute() to reduce nesting.
8: Your code would normally be better suited as a class. You don't have to think too much about this yet, since you just started with C++. It's a good exercise to refactor this to use classes later.
9: In the case of int len = input.length();, you should use std::size_t instead of int.
10: Prefer C++-style comments: // (Effective C++ explains why, so I won't repeat it here.)
11: Use more whitespace. Especially more vertical whitespace. Insert an empty line in logical places to make your code much more readable.
12: You don't modify or copy the string you pass to rotateRt, so it should be reference-to-const instead: rotateRt(string const& input). (If you prefer, you can write const string&, it's the same thing.)
13: Consider rewriting permute() so that it's not recursive. Recursive functions that create a couple of objects can quickly chew up a lot of memory, and are often not very efficient. In short, your permute() function can be improved a great deal.
14: Consider refactoring the file name out into a constant:
const string filename = "rosalind_perm.txt";
This way it's easier to find if you have to change it, and by using the constant you are guaranteed you will only have to change it in one location. | {
"domain": "codereview.stackexchange",
"id": 4049,
"tags": "c++, beginner, combinatorics"
} |
Cannot launch most of the virtual environments from gazebo_world | Question:
I was trying to run a simulation in a virtual environment in Gazebo. Therefore I launched a virtual environment from gazebo_worlds, such as office_world.launch. The following error message occurred:
Msg Waiting for master[ INFO] [1345626416.332008069]: waitForService: Service [/gazebo/set_physics_properties] has not been advertised, waiting...
Msg Connected to gazebo master @ http://localhost:11345
[ INFO] [1345626419.675247663, 0.107000000]: waitForService: Service [/gazebo/set_physics_properties] is now available.
[ INFO] [1345626419.713744662, 2.769000000]: Starting to spin physics dynamic reconfigure node...
and the gazebo window didn't appear
I also tried to launch empty_world.launch, and it succeeded. Other launch files such as simple_office2.launch, simple_world.launch, wg_world.launch... also failed.
How can I solve this problem or is there any other launch file that provides a complete virtual environment for simulation?
BTW I run Fuerte on Ubuntu 12.04
---------------------------------edit-------------------------------------
When I tried to launch simple_office2.launch, I got the following error:
Segmentation fault (core dumped)
[gazebo-2] process has died [pid 6666, exit code 139, cmd /opt/ros/fuerte/stacks/simulator_gazebo/gazebo/scripts/gazebo /opt/ros/fuerte/stacks/simulator_gazebo/gazebo_worlds/worlds/simple_office2.world __name:=gazebo __log:=/home/albert/.ros/log/99159028-edc2-11e1-b88b-5404a6dc3a5e/gazebo-2.log].
log file: /home/albert/.ros/log/99159028-edc2-11e1-b88b-5404a6dc3a5e/gazebo-2*.log
I've added a new line in the launch file
<node name="gazebo_gui" pkg="gazebo" type="gui" respawn="false" output="screen"/>
Originally posted by Albert K on ROS Answers with karma: 301 on 2012-08-21
Post score: 1
Answer:
make sure the following line is in the launch file
<node name="gazebo_gui" pkg="gazebo" type="gui" respawn="false" output="screen"/>
Originally posted by seth_g with karma: 178 on 2012-08-23
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Albert K on 2012-08-23:
Thanks for the help. I could launch office_world.launch successfully. But when I tried to apply it to the other failed launch files, there were still some launch files that had errors. I've posted the new problem.
Comment by seth_g on 2012-08-24:
simple_office2 crashes for me as well. after digging I found it has something to do with the body:map world tags. the other world files that include the body:map tags have them commented out. unless you really want to use that world file, I wouldn't worry about it. you could always report a bug?
Comment by Albert K on 2012-08-26:
Do you have any idea about the body:map tag? As you say, there isn't such tag in other world file. I wonder if it is the reason that when I open rviz, the global fixed frame can't find "/map" and reports status error. Just a wild guess.
Comment by seth_g on 2012-09-11:
It's my understanding that it is a tag used to describe a map model within Gazebo. I haven't tried to use it nor have I explored the API for it, sorry. as for your fixed frame, I can't really say why it reports with an error. you might try /base_link instead of /map? | {
"domain": "robotics.stackexchange",
"id": 10710,
"tags": "ros, simulation, gazebo-worlds"
} |
Run packages from Workstation? | Question:
We are using TurtleBot. There is a laptop directly connected to the TurtleBot. In addition, there is a workstation which connects to the laptop through wireless connection.
According to my understanding, some packages (e.g. teleop) are run from the laptop, while some can be run from the workstation (e.g. rviz). Since we have a powerful workstation, it makes sense to run heavy packages (e.g. opencv, navigation) from the workstation. However, it seems that opencv, amcl, move_base, etc. are now run from the laptop (or am I wrong?)
Hence, my question is, is it possible to run these packages from the workstation? Thank you very much indeed for your help.
Originally posted by Chik on ROS Answers with karma: 229 on 2013-04-02
Post score: 0
Answer:
Yes, it is. Presuming your network is set up correctly, you just need to launch the appropriate file (roslaunch turtlebot_navigation gmapping_demo.launch, for example) on your workstation instead of the laptop. One thing you should be mindful of is the bandwidth requirements. Specifically, if the packages you're running on the workstation rely on point cloud data instead of simple images or the LaserScan generated by the pointcloud_to_laserscan node, you'll run into issues.
Originally posted by Ryan with karma: 3248 on 2013-04-03
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 13657,
"tags": "ros, turtlebot, workstation"
} |
Latent heat and energy transfer | Question: Inside an insulated vessel at 1 atm there is water at 100°C and a metal rod at 100°C; since the temperature gradient is zero, there is no net heat transfer and the water stays liquid. In other words, in order to transmit latent heat, there must be a temperature difference. Am I right?
Answer:
In other words, in order to transmit latent heat, there must be a
temperature difference. Am I right ?
That is correct. Heat is energy transfer due solely to temperature difference.
Since you asked this question specifically in connection with latent heat, I think you may be confusing the requirements for heat transfer to occur with the results of that heat transfer in terms of temperature changes.
If the rod were at a temperature greater than 100 C then it could transfer heat to the water. But since the water is at its boiling point, that transfer would not initially cause an increase in the temperature of the water. Instead it would convert some liquid water to vapor (steam) at constant temperature and pressure. That heat is then called the latent heat of vaporization.
On the other hand, if the water was below 100 C at 1 atm, heat transfer from the rod to the water would initially increase the temperature of the water until it reached its boiling point. The amount of heat required to raise the temperature of the water would be determined by the specific heat of water.
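As a rough numerical illustration of the two cases above (using the common textbook values $c \approx 4186\ \mathrm{J/(kg\,K)}$ and $L_v \approx 2.26 \times 10^6\ \mathrm{J/kg}$ for water, and a starting temperature of 20°C):

$$Q_\text{heat} = mc\,\Delta T = (1\ \mathrm{kg})(4186\ \mathrm{J/(kg\,K)})(80\ \mathrm{K}) \approx 3.3 \times 10^5\ \mathrm{J}$$

$$Q_\text{vaporize} = mL_v = (1\ \mathrm{kg})(2.26 \times 10^6\ \mathrm{J/kg}) \approx 2.3 \times 10^6\ \mathrm{J}$$

So vaporizing a kilogram of water at its boiling point takes roughly seven times as much heat as bringing it from 20°C up to 100°C in the first place.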
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 68111,
"tags": "thermodynamics, water"
} |
Upgrade from Gazebo 7 to 7.1 | Question:
How can I upgrade from Gazebo 7.0 to 7.1? I am using ROS Kinetic and Ubuntu 16.04
Originally posted by Ash_100 on Gazebo Answers with karma: 9 on 2017-10-03
Post score: 0
Original comments
Comment by wentz on 2017-10-04:
Does it have to be 7.1? I think the latest version of gazebo7 is something around 7.8.1.
Answer:
Follow the alternative installation here:
http://gazebosim.org/tutorials?cat=install&tut=install_ubuntu&ver=7.0
Originally posted by chapulina with karma: 7504 on 2017-10-03
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Ash_100 on 2017-10-06:
I tried this. It upgrades to Gazebo7 not Gazebo7.1.
Comment by chapulina on 2017-10-06:
Do you see 'packages.osrfoundation' with the command 'cat /etc/apt/sources.list.d/gazebo-stable.list'? If you're getting from there, you should actually get the latest version, which is 7.8.1 | {
"domain": "robotics.stackexchange",
"id": 4185,
"tags": "gazebo"
} |
Angular 2 service to upload a batch job and poll for results | Question: I have to write a web interface for one of my homebrew projects. Since I'm not familiar with HTML and CSS, I've decided to take a heavy-duty framework such as Angular 2. This is my first experiment - I'm uploading a batch job to a remote servlet with a POST request, getting a tracking ID, and then polling that ID at a 100 ms interval. When the remote batch completes its job, the user of this service will be able to get the result via a Promise. Since I've written this without any systematic learning of the underlying technologies, I'm not sure if my approach is correct at all.
import { Injectable } from '@angular/core';
import {Http, Response} from '@angular/http';
import 'rxjs/add/operator/toPromise';
import 'rxjs/add/operator/map';
enum Status { SUCCESS, FAIL }
export class Upload {
constructor(public readonly status: Status,
public readonly message: string,
public readonly timestamp: Date)
{}
}
@Injectable()
export class EmeraldBackendStorageService {
constructor(private http: Http) { }
upload(fileToUpload: any): Promise<Upload> {
let input = new FormData();
input.append("file", fileToUpload);
return this.http.post("/emerald/storage/submit-content", input)
.map((response: Response) => JSON.parse(response.text()))
.toPromise().then((serverAnswer: any) => {
if (serverAnswer['success']) {
return this.subscribeSubmitStatusTracker(serverAnswer['trackingId'] as number);
} else {
throw new Error('POST request failed');
}
}).catch((err) => {
return new Upload(Status.FAIL, err, new Date())
});
}
private subscribeSubmitStatusTracker(trackingId: number) : Promise<Upload> {
return this.http.get("/emerald/storage/submit-status/" + trackingId)
.map((response: Response) => JSON.parse(response.text()))
.toPromise().then((serverAnswer: any) => {
if (serverAnswer['status'] == 'PENDING') {
return new Promise<Upload>((resolve) => {
setTimeout(() => resolve(this.subscribeSubmitStatusTracker(trackingId)), 5000);
});
} else if (serverAnswer['status'] == 'SUCCESS') {
let ts = serverAnswer['timestamp'] as number;
return new Upload(Status.SUCCESS, null, new Date(ts));
} else {
throw new Error('Server can\'t process uploaded file');
}
}).catch((err) => {
return new Upload(Status.FAIL, err, new Date())
})
}
}
Answer: First of all, there are some stylish issues in the code:
let input = new FormData();
// `input` is never reassigned so you should use `const` instead of `let`:
const input = new FormData();
input.append("file", fileToUpload);
// you should use single-quotes for strings:
input.append('file', fileToUpload);
this.http.get("/emerald/storage/submit-status/" + trackingId)
// you can use string interpolation here:
this.http.get(`/emerald/storage/submit-status/${trackingId}`)
if (serverAnswer['status'] == 'PENDING') { }
// you should use triple-equals:
if (serverAnswer['status'] === 'PENDING') { }
and some semicolons are missing.
If you use @angular/cli with this project, I suggest ng lint to point out and fix those little issues.
In the Upload class, we are expecting a message as a string but in case of failure we are passing an Error:
.catch((err: Error) => new Upload(Status.FAIL, err.message, new Date()));
Moreover, about the Upload class, I think it can be misleading that the timestamp member is typed as a Date; if it's a Date it should be named date, or else it should be typed as a number.
Then I think you can simplify the code with interfaces representing the server responses, here is a fixed and commented version of the code:
import { Injectable } from '@angular/core';
import { Http, Response } from '@angular/http';
import 'rxjs/add/operator/toPromise';
import 'rxjs/add/operator/map';
// it's possible to use string enum since typescript 2.4
// https://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-4.html
export enum Status { PENDING = 'PENDING', SUCCESS = 'SUCCESS', FAIL = 'FAIL'}
export interface ContentResponse {
readonly success: boolean;
readonly trackingId: number;
}
export interface StatusResponse {
readonly status: Status;
readonly timestamp: number;
}
export class Upload {
constructor(public readonly status: Status,
public readonly message: string,
public readonly date: Date) { // renamed timestamp -> date
}
}
@Injectable()
export class EmeraldBackendStorageService {
constructor(private http: Http) {
}
upload(fileToUpload: any): Promise<Upload> {
const input = new FormData();
input.append('file', fileToUpload);
return this.http.post('/emerald/storage/submit-content', input)
// we don't need to manually json parse the response,
// response has already a method for that exact purpose
.map((response: Response) => response.json())
// we assume that we get a ContentResponse from the server
// so it's easier to use our serverAnswer
.toPromise().then((serverAnswer: ContentResponse) => {
if (!serverAnswer.success) {
throw new Error('POST request failed');
}
return this.subscribeSubmitStatusTracker(serverAnswer.trackingId);
}).catch((err: Error) => new Upload(Status.FAIL, err.message, new Date()));
}
private subscribeSubmitStatusTracker(trackingId: number): Promise<Upload> {
return this.http.get(`/emerald/storage/submit-status/${trackingId}`)
.map((response: Response) => response.json())
.toPromise().then((serverAnswer: StatusResponse) => {
if (serverAnswer.status === Status.PENDING) {
return new Promise<Upload>((resolve) => {
setTimeout(() => resolve(this.subscribeSubmitStatusTracker(trackingId)), 5000);
});
} else if (serverAnswer.status === Status.SUCCESS) {
return new Upload(Status.SUCCESS, null, new Date(serverAnswer.timestamp));
} else {
throw new Error(`Server can't process uploaded file`);
}
}).catch((err: Error) => new Upload(Status.FAIL, err.message, new Date()));
}
}
Finally, an api returning a Promise<Upload> and the use of the toPromise operator is clearly not the angular way. In angular almost everything is an observable so the users of your api will expect observables.
As suggested by NewtonCode, you should use rxjs to achieve the functionality and return an Observable<Upload>.
Here is a rewrite with observables in mind:
import { Injectable, InjectionToken, Inject } from '@angular/core';
import { Http, Response } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/of';
import 'rxjs/add/observable/interval';
import 'rxjs/add/observable/throw';
import 'rxjs/add/operator/catch';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/mergeMap';
import 'rxjs/add/operator/filter';
import 'rxjs/add/operator/take';
export enum Status { PENDING = 'PENDING', SUCCESS = 'SUCCESS', FAIL = 'FAIL'}
export interface ContentResponse {
readonly success: boolean;
readonly trackingId: number;
}
export interface StatusResponse {
readonly status: Status;
readonly timestamp: number;
}
// I need to configure the interval of time between request for testing purpose,
// because `fakeAsync`seems to have an issue with setInterval in Observable
// https://github.com/angular/angular/issues/10127
export interface EmeraldConfig {
readonly intervalBetweenTrackingRequests: number;
}
export const EMERALD_CONFIG = new InjectionToken<EmeraldConfig>('EmeraldConfig');
export class Upload {
constructor(public readonly status: Status,
public readonly message: string,
public readonly date: Date) {
}
}
export const PostRequestFailedErrMsg = `POST request failed`;
export const UploadNotProcessedErrMsg = `Server can't process uploaded file`;
@Injectable()
export class EmeraldBackendStorageService {
constructor(private http: Http, @Inject(EMERALD_CONFIG) private config: EmeraldConfig) {
}
upload(fileToUpload: any): Observable<Upload> {
const input = new FormData();
input.append('file', fileToUpload);
return this.http.post('/emerald/storage/submit-content', input)
.map((response: Response) => response.json())
// we emit an error if the upload is a failure
.map((serverAnswer: ContentResponse): ContentResponse => {
if (!serverAnswer.success) {
throw new Error(PostRequestFailedErrMsg);
}
return serverAnswer;
})
// then we return an observable emitting the succesful tracking response
.mergeMap((serverAnswer: ContentResponse): Observable<StatusResponse> => this.track(serverAnswer.trackingId))
.map((serverAnswer: StatusResponse): Upload => new Upload(Status.SUCCESS, null, new Date(serverAnswer.timestamp)))
// finally we recover, but only if it's
// a upload not processed error or a post request failed error
.catch((err: Error) => (err.message === UploadNotProcessedErrMsg || err.message === PostRequestFailedErrMsg) ?
Observable.of(new Upload(Status.FAIL, err.message, new Date())) : Observable.throw(err));
}
private track(trackingId: number): Observable<StatusResponse> {
// we create an observable that emits every X ms
return Observable.interval(this.config.intervalBetweenTrackingRequests)
// then with mergeMap we are calling the tracking service every X ms and emitting the responses with an observable
.mergeMap(() => this.http.get(`/emerald/storage/submit-status/${trackingId}`))
.map((response: Response): StatusResponse => response.json())
// we discard the pending responses
.filter((serverAnswer: StatusResponse) => serverAnswer.status !== Status.PENDING)
// and complete our observable if we get a success or fail response
.take(1)
// finally, we emit an error if the response is fail
.map((serverAnswer: StatusResponse): StatusResponse => {
if (serverAnswer.status !== Status.SUCCESS) {
throw new Error(UploadNotProcessedErrMsg);
}
return serverAnswer;
});
}
}
As you can see, it's quite different from the promise-based code, but you will see with practice that observables are more powerful than promises.
You can easily limit the number of calls to the tracking service:
return Observable.interval(this.config.intervalBetweenTrackingRequests)
.take(3) // only 3 attempts
.mergeMap(() => this.http.get(`/emerald/storage/submit-status/${trackingId}`))
.map((response: Response): StatusResponse => response.json())
.filter((serverAnswer: StatusResponse) => serverAnswer.status !== Status.PENDING)
.take(1)
Or add a timeout to the entire process:
upload(fileToUpload: any): Observable<Upload> {
const input = new FormData();
input.append('file', fileToUpload);
return this.http.post('/emerald/storage/submit-content', input)
.map((response: Response) => response.json())
... // all the operations
.timeout(20000);
}
Bonus
the spec of the observable version:
import { async, TestBed } from '@angular/core/testing';
import { BaseRequestOptions, Response, Http, ResponseOptions } from '@angular/http';
import { MockBackend } from '@angular/http/testing';
import {
EmeraldBackendStorageService, Status, StatusResponse,
ContentResponse, Upload, EMERALD_CONFIG, UploadNotProcessedErrMsg, PostRequestFailedErrMsg
} from './emerald-backend-storage.service';
describe('EmeraldBackendStorageService', () => {
let emerald: EmeraldBackendStorageService;
const id3AndSuccessContentResponse: ContentResponse = {success: true, trackingId: 3};
const id0AndFailContentResponse: ContentResponse = {success: false, trackingId: 0};
const timestamp400AndSuccessTrackingResponse: StatusResponse = {status: Status.SUCCESS, timestamp: 400};
const timestamp400AndPendingTrackingResponse: StatusResponse = {status: Status.PENDING, timestamp: 400};
const timestamp5400AndSuccessTrackingResponse: StatusResponse = {status: Status.SUCCESS, timestamp: 5400};
const timestamp5400AndFailTrackingResponse: StatusResponse = {status: Status.FAIL, timestamp: 5400};
function respond(contentResponse: ContentResponse, ...statusResponses: Array<StatusResponse>): void {
const mockBackend = TestBed.get(MockBackend);
mockBackend.connections.subscribe(connection => {
const response = connection.request.url === '/emerald/storage/submit-content' ?
contentResponse : statusResponses.shift();
connection.mockRespond(new Response(new ResponseOptions({body: response})));
});
}
beforeEach(() => {
TestBed.configureTestingModule({
providers: [
EmeraldBackendStorageService,
{
provide: Http,
useFactory: (backend, defaultOptions) => new Http(backend, defaultOptions),
deps: [MockBackend, BaseRequestOptions]
},
{
provide: EMERALD_CONFIG,
useValue: {delay: 5}
},
MockBackend,
BaseRequestOptions
]
});
emerald = TestBed.get(EmeraldBackendStorageService);
});
it(`should return an observable with an Upload{'SUCCESS', null, 400},
when successfully uploading a file
and the tracking response is {'SUCCESS', 400}`,
async(() => {
respond(id3AndSuccessContentResponse, timestamp400AndSuccessTrackingResponse);
emerald.upload('myFile').subscribe((upload: Upload) => {
expect(upload.status).toBe(Status.SUCCESS);
expect(upload.date.getTime()).toBe(400);
expect(upload.message).toBeNull();
});
}));
it(`should return an observable with an Upload{'SUCCESS', null, 5400},
when successfully uploading a file
and the tracking responses are {'PENDING', 400} then {'SUCCESS', 5400}`,
async(() => {
respond(id3AndSuccessContentResponse,
timestamp400AndPendingTrackingResponse, timestamp5400AndSuccessTrackingResponse);
emerald.upload('myFile').subscribe((upload: Upload) => {
expect(upload.status).toBe(Status.SUCCESS);
expect(upload.date.getTime()).toBe(5400);
expect(upload.message).toBeNull();
});
}));
it(`should return an observable with an Upload{'FAIL', 'POST request failed', ...},
when upload response is {false, 0}`,
async(() => {
respond(id0AndFailContentResponse);
emerald.upload('myFile').subscribe((upload: Upload) => {
expect(upload.status).toBe(Status.FAIL);
expect(upload.message).toBe(PostRequestFailedErrMsg);
});
}));
it(`should return an observable with an Upload{'FAIL', 'Server can't process uploaded file', ...},
when successfully uploading a file
and the tracking responses are {'PENDING', 400} then {'FAIL', 5400}`,
async(() => {
respond(id3AndSuccessContentResponse,
timestamp400AndPendingTrackingResponse, timestamp5400AndFailTrackingResponse);
emerald.upload('myFile').subscribe((upload: Upload) => {
expect(upload.status).toBe(Status.FAIL);
expect(upload.message).toBe(UploadNotProcessedErrMsg);
});
}));
}); | {
"domain": "codereview.stackexchange",
"id": 27036,
"tags": "beginner, promise, typescript, angular-2+"
} |
Irregularity of $\{a^{b+cd} : d \in \mathbb{N}\}$ | Question: I was solving some basic problems about the theory of machines and automata. The topic was about pumping lemma, but I could not solve the below question and prove that it is not regular.
$$L=\{a^{b+cd} \mid \text{$b$ and $c$ are constant, $d \in \{0,1,2,3,…\}$}\}$$
This is my incomplete solution:
Assume for contradiction that $L$ is a regular language. Since $L$ is infinite, we can apply the pumping lemma.
Let $p$ be the integer in the pumping lemma. Pick a string $w$ such that $w ∈ L$ and $|w| \ge p$.
Example: pick $a ^ {1 + p}$ and $b,c = 1$.
But now how should I get to a contradiction?
I will be grateful for any help.
Answer: I think that the language is regular. Aside from counting $a$'s $b$ times, instead of thinking of counting $c$ times $d$ (counting a known number of times something unknown), we can think of counting $d$ times $c$ (counting an unknown number of times something known). That is, we see $L$ as
$$
L = a^ba^{cd} = a^b({a^c})^d
$$
for two fixed $b$ and $c$.
We can draw a simple deterministic automaton for it:
Furthermore, we can see that the language is context free, as we can show a context-free grammar for it:
\begin{align*}
&S = B ~ D ~ . \\
&B = a^b. \\
&D = a^cD ~|~ \epsilon.
\end{align*}
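As a concrete instance of the grammar and automaton above (taking, for illustration, $b = 2$ and $c = 3$):

$$L = \{\, a^{2+3d} : d \ge 0 \,\} = \{aa,\ a^5,\ a^8,\ \dots\},$$

which is matched by the regular expression $aa(aaa)^*$: a finite prefix followed by a simple loop, exactly what a DFA with $b + c$ states can recognize.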
Then, since this language is defined on the singleton alphabet $\Sigma = \{a\}$, we know by Parikh's theorem that this language is regular. | {
"domain": "cs.stackexchange",
"id": 19759,
"tags": "regular-languages, proof-techniques, pumping-lemma"
} |
Acceptable Use of the goto Statement? | Question: So, I made a console interface. It prompts users with several options they can enter. 1 for the first option, 2 for the second option, etc. Using the switch statement, each option will bring them to a different screen. Now, after any of the options are entered and the users are brought to a particular screen, how do I get them back to the selection screen with those options? Would it be acceptable for me to do something along the lines of
selectionScreen:
printf("Enter 1 for the First Screen\n");
printf("Enter 2 for the Second Screen\n");
printf("Enter 3 for the Third Screen\n\n");
scanf("%d", &selection);
switch (selection)
{
case 1:
//do and print stuff here
for (i = 0 ; ; i ++)
{
//do and print stuff here
while(GetAsyncKeyState(VK_TAB))
{
goto selectionScreen;
}
printf("Enter the Tab key to go back the selection screen.\n\n");
printf("Enter any key besides Tab to reload!\n\n");
getchar();
system("CLS");
}
break;
case 2:
//do and print stuff here
for (i = 0; ; i++)
{
//do and print stuff here
while(GetAsyncKeyState(VK_TAB))
{
goto selectionScreen;
}
printf("Enter the Tab key to go back the selection screen.\n\n");
printf("Enter any key besides Tab to reload!\n\n");
getchar();
system("CLS");
}
break;
case 3:
//do and print stuff here
for (i = 0 ; ; i++)
{
//do and print stuff here
while(GetAsyncKeyState(VK_TAB))
{
goto selectionScreen;
}
printf("Enter the Tab key to go back the selection screen.\n\n");
printf("Enter any key besides Tab to reload!\n\n");
getchar();
system("CLS");
}
break;
default:
//do stuff here
}
Or is there a more favorable way of doing this?
Answer: I'm unsure if the posted code is exactly what's intended functionally, but the following would do the equivalent:
do
{
printf("Enter 1 for the First Screen\n");
printf("Enter 2 for the Second Screen\n");
printf("Enter 3 for the Third Screen\n\n");
scanf("%d", &selection);
if ( selection < 1 || selection > 3 )
break;
switch (selection)
{
case 1:
//do and print stuff here
break;
case 2:
//do and print stuff here
break;
case 3:
//do and print stuff here
break;
}
}
while ( GetAsyncKeyState(VK_TAB) );
I really can't remember the last time I used a goto. I think it was in a FORTRAN class. | {
"domain": "codereview.stackexchange",
"id": 4776,
"tags": "c"
} |
Camera Pose Calibration: problem with multicam_capture_exec.py | Question:
Hi all!
I'm trying to calibrate 2 home-made stereo cameras (4 cameras in total) with the camera_pose toolkit, and I'm having problems with multicam_capture_exec.py.
I receive the following error:
Waiting for Server
Traceback (most recent call last):
File "/opt/ros/fuerte/stacks/camera_pose/camera_pose_calibration/src/camera_pose_calibration/multicam_capture_exec.py", line 43, in <module>
from camera_pose_calibration.robot_measurement_cache import RobotMeasurementCache
File "/opt/ros/fuerte/stacks/camera_pose/camera_pose_calibration/src/camera_pose_calibration/robot_measurement_cache.py", line 34, in <module>
import roslib; roslib.load_manifest('pr2_calibration_executive')
File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/launcher.py", line 62, in load_manifest
sys.path = _generate_python_path(package_name, _rospack) + sys.path
File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/launcher.py", line 93, in _generate_python_path
m = rospack.get_manifest(pkg)
File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 167, in get_manifest
return self._load_manifest(name)
File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 210, in _load_manifest
retval = self._manifests[name] = parse_manifest_file(self.get_path(name), self._manifest_name)
File "/usr/lib/pymodules/python2.7/rospkg/rospack.py", line 202, in get_path
raise ResourceNotFound(name, ros_paths=self._ros_paths)
rospkg.common.ResourceNotFound: pr2_calibration_executive
[capture_exec-24] process has died [pid 12415, exit code 1, cmd /opt/ros/fuerte/stacks/camera_pose/camera_pose_calibration/src/camera_pose_calibration/multicam_capture_exec.py baseline12/left baseline12/right baseline6/left baseline6/right/ request_interval:=interval_filtered __name:=capture_exec __log:=/home/fabio/.ros/log/ad23e236-b07d-11e1-afbb-14dae9c3e9c0/capture_exec-24.log].
log file: /home/fabio/.ros/log/ad23e236-b07d-11e1-afbb-14dae9c3e9c0/capture_exec-24*.log
Any idea?
My launch file:
<launch>
<!-- MACHINES -->
<machine name="rosboromir" address="rosboromir" user="fabio" env-loader="/opt/ros/fuerte/env.sh">
</machine>
<machine name="rosfaramir" address="rosfaramir" user="fabio" env-loader="/opt/ros/fuerte/env.sh">
</machine>
<!-- LAUNCH BASELINE6 ON ROSBOROMIR -->
<!-- Driver nodelet -->
<node machine="rosboromir" ns="baseline6" name="stereo_image_proc_bsl6" pkg="stereo_image_proc" type="stereo_image_proc"></node>
<node machine="rosboromir" ns="baseline6" name="uvc_camera_bsl6" pkg="uvc_camera" type="stereo_node">
<param name="left/device" value="/dev/video1" />
<param name="right/device" value="/dev/video2" />
<param name="fps" value="30" />
<param name="skip_frames" value="3" />
<param name="width" value="320" />
<param name="height" value="240" />
<param name="frame_id" value="baseline6" />
</node>
<!-- LAUNCH BASELINE12 ON ROSFARAMIR -->
<!-- Driver nodelet -->
<node machine="rosfaramir" ns="baseline12" name="stereo_image_proc_bsl12" pkg="stereo_image_proc" type="stereo_image_proc"></node>
<node machine="rosfaramir" ns="baseline12" name="uvc_camera" pkg="uvc_camera" type="stereo_node">
<param name="left/device" value="/dev/video1" />
<param name="right/device" value="/dev/video2" />
<param name="fps" value="15" />
<param name="skip_frames" value="1" />
<param name="width" value="640" />
<param name="height" value="480" />
<param name="frame_id" value="baseline12" />
</node>
</launch>
Command line for calibration:
roslaunch camera_pose_calibration calibrate_4_camera.launch camera1_ns:=baseline12/left camera2_ns:=baseline12/right camera3_ns:=baseline6/left camera4_ns:=baseline6/right/ checker_rows:=10 checker_cols:=5 checker_size:=0.0015
EDIT:
In the file calibrate_4_cameras.launch, the lines that give me the error are the following:
<!-- generate robot measurements -->
<node pkg="camera_pose_calibration" type="multicam_capture_exec.py" name="capture_exec"
args="$(arg camera1_ns) $(arg camera2_ns) $(arg camera3_ns) $(arg camera4_ns)" output="screen">
<param name="cam_info_topic" value="camera_info" />
<remap from="request_interval" to="interval_filtered" />
</node>
Do I need those lines to have the cameras calibrated? What do they do?
EDIT2:
I've followed the stack trace and I've found that multicam_capture_exec.py calls robot_measurement_cache.py. At line 34 it tries to import a package that is not available:
import roslib; roslib.load_manifest('pr2_calibration_executive')
I've tried commenting out those lines and the error goes away, but I'm still not able to get any output on /tf.
Originally posted by oleo80 on ROS Answers with karma: 61 on 2012-06-06
Post score: 1
Answer:
In Fuerte, pr2_calibration has been split into two (slightly refactored) stacks: calibration and pr2_calibration. The pr2_calibration package always had its API marked "unstable", but it appears that this code was developed against the pr2_calibration_executive (which has gone away). You can find a (possibly updated) robot_measurement_cache.py in calibration_launch/src/calibration_executive.
Originally posted by fergs with karma: 13902 on 2012-06-11
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Athoesen on 2014-01-08:
Did they ever solve this problem? I'm trying to use camera_pose_calibration in Hydro with 3 Kinects and it's not going well.
Comment by fergs on 2014-01-14:
I'm not aware of anyone maintaining the camera_pose_calibration stack, so I very much doubt that it has been fixed.
Comment by Athoesen on 2014-01-15:
In that case, what method are people using for multiple Kinects to combine them in RViz? I've looked all over and I can't find anything on the wiki or just by Googling. And asking on here has been unresponsive. | {
"domain": "robotics.stackexchange",
"id": 9706,
"tags": "ros, pose, camera-pose-calibration, camera"
} |
Typical reaction yield for combustion of hydrogen | Question: I am writing an essay on the chemistry of rocket fuel, and I was wondering what, if any, would be a commonly accepted value for the actual yield of the reaction between hydrogen, $\ce{H2},$ and oxygen, $\ce{O2}.$ I would also love it if you could cite some sources so I can find more information on this exothermic reaction:
$$\ce{2H2 + O2 -> 2H2O + heat}$$
Answer: The equilibrium constant for the reaction $\ce{2H2 + O2 <=> 2H2O}$ is colossal, equal to $\mathrm{2.4 \times 10^{47}}$ at $\mathrm{500\ K}$ (source). This would ensure virtually 100% yield for the combustion of a stoichiometric mixture of oxygen and hydrogen in equilibrium conditions. However, a rocket engine is a reaction vessel that is far from equilibrium, and therefore oxygen could remain unreacted. This is in fact a problem, as it could damage the engine at its operating temperature. This, among other reasons, is why oxygen/hydrogen mixtures for rocket engines are actually very hydrogen-rich, containing up to twice the stoichiometric amount of hydrogen required.
So in practice, taking everything into account, rocket engines running on oxyhydrogen fuel are designed to have a practically 100% combustion yield based on the limiting reagent, oxygen, while possessing a substantial excess of hydrogen fuel. There is a strong engineering pressure to maximise energy output by ensuring the oxygen is completely consumed; unreacted oxygen is a waste of lift capacity, storage space, materials, cost, etc. | {
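As a rough back-of-the-envelope check (not from the source, just the standard thermodynamic relation between the equilibrium constant and the standard reaction free energy), the quoted value of $K$ corresponds to

```latex
\Delta G^\circ = -RT\ln K
\approx -\left(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}}\right)\left(500\ \mathrm{K}\right)\ln\!\left(2.4\times10^{47}\right)
\approx -4.5\times10^{5}\ \mathrm{J\,mol^{-1}},
```

i.e. roughly $-450\ \mathrm{kJ}$ per mole of reaction as written, which is why the equilibrium lies so overwhelmingly on the product side.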
"domain": "chemistry.stackexchange",
"id": 12854,
"tags": "reference-request, hydrogen, fuel"
} |
Converting std::string to int without Boost | Question: I'm trying to convert an std::string to an int with a default value if the conversion fails. C's atoi() is way too forgiving and I've found that boost::lexical_cast is quite slow in the case where the cast fails. I imagine it's because an exception is raised. I don't have C++11, so stoi() is out.
The Delphi function StrToIntDef is the ideal, and it's also available in C-Builder (where I'm working currently). But I want something more portable that works with std::string.
I've created the following function, which uses atoi() but first tests for error conditions. It also allows for leading and trailing spaces, which are harmless in my situation.
int stringtoIntDef(const std::string & sValue, const int & DefaultValue) {
// convert a std::string to integer with a default value returned
// in the case where the string doesn't represent a valid integer
// - accepts leading or trailing spaces as valid
bool hasDigits = false;
bool TrailingSpace = false;
for (std::string::size_type k = 0; k < sValue.size(); ++k) {
if ((sValue[k] == ' ') || (sValue[k] == '\t')) {
TrailingSpace = hasDigits;
} else if (sValue[k] >= '0' && sValue[k] <= '9') {
if (TrailingSpace) {
return DefaultValue;
} else {
hasDigits = true;
}
} else if ((sValue[k] == '-') && !hasDigits) {
hasDigits = true; // this protects against "--"
} else {
return DefaultValue;
}
}
return atoi(sValue.c_str());
}
In my testing, I've compared it against StrToIntDef and it is just as fast, but lexical_cast is much slower in the case where the default is returned. For 1000 iterations, lexical_cast took 5 seconds while the other 2 weren't measurable.
For my lexical_cast function, I've used this:
try {
return boost::lexical_cast<int>(sValue);
}
catch (boost::bad_lexical_cast &) {
return DefaultValue;
}
Are there any gotchas I might have missed here?
Answer: This is a follow up to your second follow up. I just have a few comments, and you have two bugs.
stringtoIntDef is a strange name. In particular, why are Int and Def capitalized but not To? I would go either full camelCase, or I would use the standard library convention of under_scores.
You never delete[] s which means you have a memory leak.
I actually meant in my comment that you could check the end pointer from strtol to see if all that follows is either nothing or 0. I should have been more specific. An example of this is below.
If cstdlib was included (as it should be) rather than stdlib.h, then strtol needs to be std::strtol. The cheader-style headers guarantee declarations in the std namespace, but they do not guarantee declarations in the global namespace. (Every C++ implementation I've ever seen does both, but they are not required to provide the global-namespace declarations.)
Define variables as late as possible. In other words, declare result when you're defining it to be strtol's return value.
If you don't want a variable to change after its initial value, make it const. This keeps you from accidentally changing the value, and it makes your intentions more clear to anyone who inherits the code.
You must provide a base of 10 to strtol, not 0. 0 is the default and it means to try to parse a number as base 10 unless it's prefixed with either 0x (for hex) or 0 (for octal). Assuming you want just decimal, it must be 10.
If you do want the default behavior, don't put the 0. Just let it default to 0. Default behavior and an empty default value have a nice symmetry to them, and it allows for future change of what the default sentinel value is without breaking consuming code.
Since it's never modified anywhere, str should be passed by constant reference, not reference.
If you were going to copy the string (don't! memory allocation is both unnecessary here and is very slow compared to the rest of the operations we're using), I would take the string by value and modify the parameter. That would allow for moving in C++11, and it would conveniently handle the memory allocation for you. The (potentially major) downside to this approach would be that obviously invalid strings could cause a rather large copy. Imagine if you passed a string of pure whitespace. All that whitespace would be copied for nothing other than to get immediately removed.
We both missed the case of an empty string. strtol output can be used to detect it (begp == endp), but neither of us checked for that :).
I still disagree with your decision to hide the ability to check for errors, but if that's the route you want to take, I would do something like this:
int string_to_int(const std::string& str, const int& DefaultValue) {
errno = 0;
const char* str_beg = str.c_str();
const char* str_end = str_beg + str.size();
char* p = NULL;
const int result = std::strtol(str_beg, &p, 10);
if (p == str_beg) { return DefaultValue; }
for (; p != str_end && std::isspace(static_cast<unsigned char>(*p)); ++p) { /* eat trailing spaces */ }
if (p != str_end || errno != 0) {
return DefaultValue;
}
return result;
}
(This could probably be optimized quite a bit, but it would likely be compiler/system specific, and it would be tedious.) | {
"domain": "codereview.stackexchange",
"id": 5013,
"tags": "c++, converting, boost"
} |
Explanation of the Graetz circuit | Question: My knowledge of circuits is pretty rudimentary and I've never really understood circuits, so I'm having trouble with the concept of Graetz circuits:
When you register the voltage on the resistor R on a screen of an oscilloscope, you get:
Why is this exactly? As I said, my knowledge of circuits is very limited. My idea:
The sign at the bottom of the circuit means it is an AC source. If the current goes from left to right it will be stopped by the diode above and let through by the diode below (to the left of the resistor), and vice versa if it goes from right to left. Is this correct?
Answer: You've correctly deduced how the circuit works. This particular configuration is better known as a bridge rectifier and is often packaged as a single component containing 4 diodes. There are two uses for this - rectifying alternating current as depicted in your question, and creating circuits that can handle direct current with reversed polarity (for instance, in the event a battery is inserted backwards). | {
"domain": "physics.stackexchange",
"id": 6091,
"tags": "electric-circuits, electronics, electrical-engineering"
} |
Is it possible to install ROS 32bit and a 64bit on the 64bit ubuntu? | Question:
I have a third party 32 bit library that I want to encapsulate with a ROS node/application. The problem is that I am running Fuerte (64bit) on Ubuntu 12.04 LTS (also 64bit). The GNU C (or GCC) and Ubuntu 32bit libraries were easy to get (apt-get ia32xxx command line), but I am struggling to get the ROS fuerte 32bit libraries.
Originally posted by Mfumbesi on ROS Answers with karma: 46 on 2013-08-16
Post score: 3
Answer:
I was interested in this myself and could not find a straightforward way to allow 64 bit and 32 bit debs to coexist. The most common solutions you'll hear are:
Create a chroot environment, install all the 32 bit debs, and build/run from that chroot environment.
or
Utilize a virtual machine such as VirtualBox. There is a tool called "vagrant" that allows you to run headless VirtualBox instances from the command line.
Hope that helps!
Edit: Regarding your second comment
Yes, you cannot simply move binaries built against 32 bit libraries onto a machine with the 64 bit libraries installed. Yes, if the 32 bit versions of the libraries were installed, it would not be a problem. However, as I had stated earlier...it's not very easy to have both versions of the libraries installed side by side when using prepackaged debians.
The most common trick is to use a "chroot" environment. "chroot" stands for change root and allows you to turn some folder in your file system into a fake root that is jailed/sandboxed from the rest of the system. Using the "chroot" command, you can enter the jailed root environment. In there you can install 32 bit versions of all the debians you need. Checkout your code within the chroot environment, build it there, and most importantly...RUN IT FROM THERE.
The other option is using virtualization. You'll incur a little bit of overhead for your application, but it's negligible on modern processors.
Here's a great Ubuntu answer explaining chroot a little further:
http://askubuntu.com/questions/29665/how-do-i-apt-get-a-32-bit-package-on-a-64-bit-installation
Originally posted by mirzashah with karma: 1209 on 2013-08-16
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Mfumbesi on 2013-08-19:
I used a second machine that has the 32bit fuerte ROS and built my application on it. When I try and run it on the 64bit uBuntu it complains about missing libraries.
Comment by mirzashah on 2013-08-20:
Please move this to the comments section.
Comment by Mfumbesi on 2013-08-23:
Finally bit the bullet and went with the chroot option. It is working now.
Comment by mirzashah on 2013-08-27:
Glad to hear it! I find it frustrating that the file system hierarchy doesn't allow installing multiple architectures on the same machine...but it can make things a lot more complex. It has to do with the nature of FHS and dealing with way too much complexity for too little gain. | {
"domain": "robotics.stackexchange",
"id": 15282,
"tags": "ros-fuerte, ubuntu-precise, ubuntu"
} |
Paradox in the Hellmann-Feynman Theorem | Question: The Hellmann-Feynman Theorem says
$$\tag{1} \frac{d E_\lambda}{d \lambda} ~=~ \bigg\langle \psi(\lambda) \bigg| \frac{d H_\lambda}{d \lambda} \bigg| \psi(\lambda) \bigg\rangle$$
where $H_\lambda$ is a Hamiltonian parametrized by $\lambda$. Here, $| \psi(\lambda) \rangle$ is an eigenstate of $H_\lambda$ with energy $E_\lambda$.
Naively, if $\frac{d E_\lambda}{d \lambda} = 0$ for all $\lambda$ and $| \psi(\lambda) \rangle$, I would suppose that $\frac{d H_\lambda}{d \lambda} = 0$, because the states $| \psi(\lambda) \rangle$ form a complete set.
However, I can construct a Hamiltonian where this is not true:
$$\tag{2} H_\lambda ~=~ \frac{1}{2}
\begin{pmatrix}
E_1+E_2 & (E_2-E_1) e^{-i \lambda} \\
(E_2-E_1) e^{i \lambda} & E_1+E_2 \\
\end{pmatrix}$$
The eigenvalues are $E_1$ and $E_2$, independent of $\lambda$, so while $\frac{d E_\lambda}{d \lambda} = 0$, it's clear that $\frac{d H_\lambda}{d \lambda} \neq 0$.
What is the flaw in my logic?
For those who are interested, the motivation for this question is the Dirac Hamiltonian for graphene (and specifically, the quantized conductivity in the Integer Quantum Hall Effect and all that good stuff). If $\vec{k}=(k_x,k_y)=k(\cos(\lambda),\sin(\lambda))$, then the Hamiltonian for graphene is
$$\tag{3} H(k,\lambda) ~=~ \hbar v_F k
\begin{pmatrix}
0 & e^{-i \lambda} \\
e^{i \lambda} & 0 \\
\end{pmatrix}$$
Then I replaced the linear band structure with two flat bands of energy $E_1$ and $E_2$ to get equation (2).
Answer: I don't think one can conclude $\langle \phi | \frac{d H_{\lambda}}{d \lambda} | \phi \rangle = 0$ for an arbitrary state $|\phi\rangle$,
hence $\frac{dH_{\lambda}}{d\lambda}=0$, from the zero values of the HF theorem.
Since $\langle \phi | \frac{d H_{\lambda}}{d\lambda} | \phi \rangle = \sum_{ij} c^*_i c_j \langle i | \frac{dH_{\lambda}}{d\lambda} | j \rangle$, where $|i\rangle$ is the $i$-th eigenstate of the Hamiltonian $H_{\lambda}$ and $|\phi\rangle = \sum_i c_i |i\rangle$, the zero values of the HF theorem only guarantee that the diagonal terms of this sum vanish; the off-diagonal matrix elements are unconstrained.
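This can be checked directly on the $2\times 2$ example from the question (a short verification, using the eigenvectors of $H_\lambda$ computed by hand, with $H_\lambda|\psi_1\rangle = E_1|\psi_1\rangle$ and $H_\lambda|\psi_2\rangle = E_2|\psi_2\rangle$):

```latex
|\psi_1\rangle = \frac{1}{\sqrt 2}\begin{pmatrix}-e^{-i\lambda}\\ 1\end{pmatrix},\qquad
|\psi_2\rangle = \frac{1}{\sqrt 2}\begin{pmatrix}e^{-i\lambda}\\ 1\end{pmatrix},\qquad
\frac{dH_\lambda}{d\lambda} = \frac{E_2-E_1}{2}\begin{pmatrix}0 & -i e^{-i\lambda}\\ i e^{i\lambda} & 0\end{pmatrix},
```

and a direct computation gives

```latex
\left\langle \psi_1 \middle| \frac{dH_\lambda}{d\lambda} \middle| \psi_1 \right\rangle
= \left\langle \psi_2 \middle| \frac{dH_\lambda}{d\lambda} \middle| \psi_2 \right\rangle = 0,
\qquad
\left\langle \psi_1 \middle| \frac{dH_\lambda}{d\lambda} \middle| \psi_2 \right\rangle = \frac{i\,(E_2-E_1)}{2} \neq 0,
```

so the Hellmann-Feynman theorem is satisfied with $\frac{dE_\lambda}{d\lambda}=0$ even though $\frac{dH_\lambda}{d\lambda}$ has nonvanishing (off-diagonal) matrix elements.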
"domain": "physics.stackexchange",
"id": 10381,
"tags": "quantum-mechanics"
} |
Meaning of angular velocity in a rotating system | Question: When you study the motion of a rigid body you have $\vec\omega$, the vector associated to angular velocity. In the case you are using Euler angles and want a quick formula for the rotational kinetic energy you switch to a system that rotates with the body and express the components of $\vec\omega_{[e]}$ in terms of a basis $[e]$ attached to the principal axes of the body.
However, what's the meaning of $\vec\omega_{[e]}$? If you are in a rotating system the body should seem still, so there should be no angular velocity. Moreover, if we consider that $\vec v = \vec \omega \times \vec r$, which should remain valid in any basis, $\vec v_{[e]}$ should be zero in the rotating system, so $\vec \omega_{[e]}$ should also be zero...
I know I am confusing many things but could you clarify to me this point?
Answer: It is simple. $\vec{\omega}_{[e]}$ are not the components of angular velocity seen in the reference frame attached to the rigid body itself. As you point out, that angular velocity is zero.
It is the result of mathematical manipulation. You have a set of relations between the basis vectors of the inertial frame and the rotating frame, and you use that to write $\vec{\omega}$ in terms of the basis vectors of the rotating frame to simplify calculation. The physical meaning of $\vec{\omega}$ is still the angular velocity seen in the inertial frame of reference.
Why is the mathematical formalism for change of basis not sufficient here? Because both the change-of-basis matrix and the definition of (angular) velocity involve an external parameter: time. In general relativity, time and space are merged together, and every vector in the 4-dimensional spacetime has both temporal and spatial parts. In that case, all vectors transform nicely as mathematics dictates.
Back in classical mechanics, because of the special status of time, there is no general formula transforming physical quantities from one frame to another with relative rotational motion. Angular velocity is a special case, however. The transformation is as simple as
$$\boldsymbol{\omega}=\boldsymbol{\omega}'+\boldsymbol{\Omega},$$
where $\boldsymbol{\Omega}$ is the relative angular velocity of the primed frame with respect to the unprimed one. | {
"domain": "physics.stackexchange",
"id": 3917,
"tags": "classical-mechanics, rotational-dynamics, vectors, angular-velocity"
} |
How can a catalyst be selective if it does not change the equilibrium constants? | Question: \begin{align}
\ce{CO(g) + 3H2(g) &->[Ni] CH4(g) + H2O(g)}\\
\ce{CO(g) + 2H2(g) &->[Cu/ZnO-Cr2O3] CH3OH(g) }\\
\ce{CO(g) + H2(g) &->[Cu] HCHO(g) }
\end{align}
As we see here, using different catalysts in the reaction between carbon monoxide and hydrogen yields different products. Is this in contradiction to the following description of the properties of catalysts?
However, it is very important to keep in mind that the addition of a catalyst has no effect whatsoever on the final equilibrium position of the reaction. It simply gets it there faster.
https://courses.lumenlearning.com/introchem/chapter/the-effect-of-a-catalyst/
Adding a catalyst makes absolutely no difference to the position of equilibrium
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Equilibria/Le_Chateliers_Principle/Le_Chatelier%27s_Principle_Fundamentals
If a catalyst is not supposed to affect the reaction's final equilibrium position how do we explain the catalyst selectivity seen here?
I saw a similar question (Selectivity of catalysts) but it wasn't addressed directly at this principle (and unanswered still).
Answer:
If a catalyst is not supposed to affect the reaction's final equilibrium position how do we explain the catalyst selectivity seen here?
If you wait long enough so that all three reactions attain equilibrium, the presence or absence of catalysts has no effect on the product mixtures.
In the examples, however, the reactions without catalysts are all slow. If you add one catalyst but not the other two, one reaction will reach equilibrium and you can work up the product mixtures, which will contain little or none of the other possible products.
The fancy way of describing this is to say that the product ratio is under kinetic control (rather than thermodynamic control). Here are two specific scenarios where reaction conditions determine product identity: Kinetic and thermodynamic control of sulphonation of toluene and Thermodynamic vs Kinetic Sulphonation of Naphthalene
"domain": "chemistry.stackexchange",
"id": 16033,
"tags": "reaction-mechanism, equilibrium, catalysis"
} |
Proof of Space Hierarchy Theorem incompatible with Linear Speed Up Theorem for time | Question: In this proof of the Space Hierarchy Theorem the following language is defined
$$
L = \{ (\langle M \rangle, 10^k) : M \mbox{ does not accept } (\langle M \rangle, 10^k) \mbox{ using space } \le f(|\langle M \rangle, 10^k|) \}.
$$
And then stated:
Now, for any machine $M$ that decides a language in space $o(f(n))$, $L$ will differ in at least one spot from the language of $M$. Namely, for some large enough $k$, $M$ will use space $\le f(|\langle M \rangle, 10^k|)$
on $(\langle M \rangle, 10^k)$ and will therefore differ at its value.
I get the diagonalization argument, but what makes me wonder according to the linear speed up theorem for space we have [see for example Hopcroft/Ullman, 71]:
If $L$ is accepted by an $S(n)$ space-bounded Turing machine with $k$ storage tapes, then for any $c > 0$, $L$ is accepted by a $cS(n)$ space-bounded Turing machine.
As shown in the linked Wikipedia article, $L$ is accepted by a Turing machine using $f(n)$-bounded space; then by linear speed up $L$ is also accepted by some machine $M'$ using at most $\frac{f(n)}{2}$ tape cells. But then we get a contradiction on the input $(\langle M' \rangle, 10^k)$: if $(\langle M' \rangle, 10^k)$ is accepted by $M'$, then $(\langle M' \rangle, 10^k) \notin L$ by definition of $L$, contradicting $L(M') = L$. And if $(\langle M' \rangle, 10^k)$ is not accepted by $M'$, then $(\langle M' \rangle, 10^k) \in L$ by definition of $L$, but $(\langle M' \rangle, 10^k) \notin L(M') = L$, again a contradiction.
My reasoning seems to be fine, but it should not be... what am I missing here? In my argument the problem lies with the speed up: essentially it shows that if speed up is possible, then we can separate $f$ and $cf$ for $0 < c < 1$, i.e., that speed up is not possible. The only valid conclusion would be that linear speed up is not possible, but that is certainly not true...
Answer: The Wikipedia proof is bogus.
As you have proven, the description for $L$ is incomplete. It should read:
$$
L = \{ (\langle M \rangle, 10^k) : M \mbox{ does not accept } x = (\langle M \rangle, 10^k) \mbox{ using space } \le f(|x|)
\; \textbf{and time} \; \le 2^{f(|x|)} \}
$$
This corrects the proof because now $M$ (or $M'$) can still accept $x = (\langle M \rangle, 10^k)$ and take space $\le f(|x|)$. Actually, $(\langle M \rangle, 10^k) \in L$ has a quite reasonable explanation in that $M$ attempts to simulate $M$, which (as a subroutine) attempts to simulate $M$ again, and so forth and so on, until eventually $2^{f(|x|)}$ steps elapse and the whole procedure is interrupted. Hence, $M$ does not accept $x$ using space $\le f(|x|)$ and time $\le 2^{f(|x|)}$, although it does accept it taking space $\le f(|x|)$ (well, perhaps actually $O(f(|x|))$) and time $> 2^{f(|x|)}$.
As a matter of fact, the purported algorithm for $L$ in the article explicitly states it runs its simulation with a $2^{f(|x|)}$ stopwatch. This is precisely the detail that is missing from the definition of $L$. | {
"domain": "cs.stackexchange",
"id": 13454,
"tags": "complexity-theory, turing-machines, computability, space-complexity"
} |
Why do farts stink, but perfume does not? | Question: Why is it that natural things like farts, poop, halitosis etc., from which we are always surrounded smell "bad"; whereas manufactured products, such as perfume or glue smell "good" to most of us?
In my understanding, things smell "bad", because they are not good for us. But how are farts not good? And shouldn't artificial things like perfume or glue, which are relatively new to us, smell "bad"?
Answer:
Why is it that natural things like farts, poop, halitosis etc., from
which we are always surrounded smell "bad"; whereas manufactured
products, such as perfume or glue smell "good" to most of us?
In my understanding, things smell "bad", because they are not good for
us. But how are farts not good? And shouldn't artificial things like
perfume or glue, which are relatively new to us, smell "bad"?
Kind of a strange question but here is a quick and rough answer... There are at least three things to think of here.
Firstly, yes often natural things that are good for you have evolved to smell attractive, but not all natural things are good for you. Therefore a fart should not smell good just because it is natural. And it is likely that breathing farts non stop is not good for you.
Secondly, perfumes etc. have been chemically manipulated or designed to smell nice by human manufacturers - the preference for such smells existed before the specific perfume, the manufacturer simply designed a fragrance that was attractive.
And finally... Not everything exists because of selection. Farts probably smell as a by-product of the process of digestion rather than because of selection for avoiding them. Gas is produced during the digestive process, and the amount and odor of the gas (determined by its constituents) will likely depend on the efficiency of digestion and on what is being digested. The dominant selection force affecting the odor is then probably related to digestive efficacy, and selection against odor would only be (relatively) very weak.
It's better to digest food and fart a bit than it is to not digest but smell nice; digestion is quite important!
"domain": "biology.stackexchange",
"id": 1711,
"tags": "evolution"
} |
How to downsample a matrix in columns ? MATLAB | Question: I am studying wavelets in image processing. I would like to learn how to downsample a matrix in columns using MATLAB. I have used the downsample(x,n) command to downsample the given matrix in rows. But how to do the same in columns?
x =
1 2 3
4 5 6
7 8 9
10 11 12
y = downsample(x,2)
y =
1 2 3
7 8 9
But I would like to get the following result.
y=
1 3
4 6
7 9
10 12
How to do it in MATLAB? Please help.
Answer: Maybe off-topic here, but anyway!
Try this:
y = downsample(x',2)'
Transposing turns the columns into rows, downsample then keeps every other row, and the final transpose restores the original orientation.
"domain": "dsp.stackexchange",
"id": 2282,
"tags": "image-processing, matlab"
} |
Tools to analyze RNA-seq data | Question: I hope this is a good place to ask such a question. I have to do some data analysis on RNA-seq data from human cells. I am currently searching for tools to help me with that. Specifically, I would need some tools to analyze the gene expression from the data: something to help me plot the expression of selected genes in each fastq file and compare the differences in expression, with the possibility to export the results, or some command line interface for scripting. Basically I need something where I can put a fastq file and perhaps also a human genome annotation file as input and get gene expression as output. I have looked at bioconductor and its packages and at Wikipedia's List of RNA-Seq bioinformatics tools. I suppose some of these tools should be able to do what I need, but I have been unable to find out which one and how they should be used to achieve that. Could someone please give me some advice?
Answer: You will likely need a tool to "map" the reads on the reference genome.
You may find such a reference genome, together with annotations, here:
ftp://ussd-ftp.illumina.com/.
Mapping tools such as bowtie2 or bwa take fastq files and reference genomes and output mapping results in a format called sam.
You then have a lot of options to estimate gene expression.
You can write your own algorithm to parse sam format and estimate normalized read counts on each gene.
You can combine more or less low-level tools such as samtools, pysam, htseq with some scripting to do this.
You can use tools that do the counting (like bedtools or htseq-count) and differential expression analysis (like deseq2).
In the last case, I would advice to start from the documentation of the final tool to find out what are the tools you need to generate the output of the preceding step.
It is very likely you will use some R or Python, or use the web platform galaxy for some of the steps.
Edits
As mentioned by @scribaniwannabe in this answer, the paper about the Tuxedo suite of tools gives a good example of the steps to carry out an RNA-seq analysis using recent tools (as of October 2016).
As @Student T reminds in this answer, RNA-seq data contain reads that can come from exon-exon junctions, so the read mapper has to be set up in such a way that it does not discard reads that fail to map contiguously along their entire length on the genome. To my knowledge, HISAT2 and CRAC do this by default. Bowtie2 needs special settings.
"domain": "biology.stackexchange",
"id": 11643,
"tags": "bioinformatics, rna-sequencing"
} |
Can why electrons exist in shells be explained by the Pauli exclusion principle? | Question: Do you know the Pauli exclusion principle? 'No two particles can be in the same quantum state at once.'
Well, can you use that principle to explain why electrons stay in shells, and why electrons in separate shells can never get closer than a certain distance to electrons in another shell? I learned somewhere that what the Pauli exclusion principle is really saying is that two separate fermions should be indistinguishable, or something very similar to that. Well, I'd get why that happens in the first shell (because there are only 2 electrons and they have opposite spins), but how could that apply to shells with more electrons, like for example the 2nd shell which has 4 electrons? I also think that there's an equation describing this and I'd love to know what that equation is. Any help would be greatly appreciated.
Answer: The state of a bound electron in an atom is described by four quantum numbers:
The principal quantum number $n$ which takes integer values $1,2,3, \dots$ and determines which shell the electron is in.
The azimuthal quantum number $l$ which takes integer values from $0$ to $n-1$ and determines the angular momentum of the electron.
The magnetic quantum number $m_l$ which takes integer values from $-l$ to $l$.
The spin quantum number $s$ which takes values $\pm \frac 1 2$.
The Pauli exclusion principle then prevents two bound electrons having exactly the same set of values for these four quantum numbers.
In shell $1$ we have $n=1$, $l=0$, $m_l=0$ and $s=\pm \frac 1 2$. So there are at most two electrons in shell $1$.
In shell $2$ we have $n=2$ and $l=0, 1$. When $l=0$ then $m_l=0$ and $s=\pm \frac 1 2$, which allows up to $2$ electrons. When $l=1$ then $m_l=-1, 0, 1$ and $s=\pm \frac 1 2$, which allows up to $6$ electrons. So there are at most $8$ electrons in shell $2$.
And, in general, there can be at most $2n^2$ electrons in shell $n$.
"domain": "physics.stackexchange",
"id": 71910,
"tags": "quantum-mechanics, electrons, atomic-physics, orbitals, pauli-exclusion-principle"
} |
Quantum state of a harmonic oscillator given energy and probability | Question: Hi, I've been going through exercises without solutions, so I would like to ask for some feedback on whether I am solving this right:
A one-dimensional quantum harmonic oscillator with mass $m$ and frequency $\omega$ is in a state at $t=0$ described by a linear combination with real coefficients of the first two eigenstates of the Hamiltonian.
Knowing that the probability of measuring the energy of the system to be $\frac{3}{2} \hbar \omega$ is equal to $\frac{4}{5}$, write down the state of the system.
I don't get why the energy $\frac{3}{2} \hbar \omega$ is given, my first thought was that it just meant that the first two states are: $| 0 \rangle$ with energy $\frac{1}{2} \hbar \omega$ and $| 1 \rangle$ with energy $\frac{3}{2} \hbar \omega$. But I thought this was implicit.
What makes me doubt is also that the initial state should be of the form: $| \psi \rangle = a_0 | 0 \rangle + a_1 | 1 \rangle $. The problem gives this information: $|a_1|^2 = \frac{4}{5}$, but there is no way to know if $a_0$ and $a_1$ are positive or negative, right?
My answer would then be just: $| \psi \rangle = \pm \frac{1}{\sqrt{5}} | 0 \rangle \pm \sqrt{\frac{4}{5}} | 1 \rangle $
But then there are other questions following and I feel I should have a definite answer for the initial state to continue. Am I missing something?
Answer: You aren't missing anything! You are certainly correct that someone only telling you the probabilities in this basis has not given you enough information to tell you the relative phases between the components in the superposition (your $\pm$ is one example of a phase ambiguity). And the energy is given just so you have to know offhand whether that energy refers to the first state, second state, etc (trying to quiz you twice in one question). | {
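Spelling out the normalization arithmetic behind the answer (nothing new, just the step made explicit):

```latex
|a_0|^2 + |a_1|^2 = 1,\qquad |a_1|^2 = \frac{4}{5}
\quad\Longrightarrow\quad
|a_0|^2 = \frac{1}{5},
```

and since the coefficients are real, $a_0 = \pm\frac{1}{\sqrt 5}$ and $a_1 = \pm\frac{2}{\sqrt 5}$. The overall sign is an unobservable global phase, so only the relative sign between $a_0$ and $a_1$ distinguishes physically different states.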
"domain": "physics.stackexchange",
"id": 97030,
"tags": "quantum-mechanics, homework-and-exercises, harmonic-oscillator, quantum-states"
} |
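The undetermined relative sign in the superposition above is physically meaningful, which is why later parts of such an exercise can hinge on it. A minimal numerical check in the truncated $\{|0\rangle, |1\rangle\}$ basis (assumed units $\hbar = m = \omega = 1$):

```python
import numpy as np

# The relative sign between the components flips the sign of <x>.
a = np.array([[0.0, 1.0],
              [0.0, 0.0]])           # annihilation operator on the {|0>, |1>} subspace
x = (a + a.T) / np.sqrt(2)           # position operator, x = (a + a^dagger)/sqrt(2)

a0, a1 = 1 / np.sqrt(5), 2 / np.sqrt(5)  # |a1|^2 = 4/5 as given
expvals = {}
for s in (+1, -1):
    psi = np.array([a0, s * a1])
    expvals[s] = psi @ x @ psi
print(expvals)  # <x> = +/- 4/(5*sqrt(2)): opposite signs for the two choices
```

Under time evolution $\langle x\rangle(t)$ oscillates as $\cos(\omega t)$ with this amplitude, so the two sign choices give observably different trajectories; the exercise either fixes the phase elsewhere or the $\pm$ must be carried along explicitly.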
Do I need to 'neutralise' iron/steel/metal with a base after removing rust with acid? | Question: I've never asked a question here before but these forums seem to have some extremely knowledgeable people so I thought I'd give it a try. I know it's a 1st grade question but I honestly don't have the first idea about this. So, I have a motorcycle, the fuel tank was completely full of rust, extremely bad. I've finally managed to get the tank clean by using a combination of things, mainly white vinegar (at least I know this is acetic acid 5%). I got this idea from watching many Youtube videos and reading online articles. I'm now at the stage where I need to get the vinegar out and clean and dry the tank (preventing as much flash rusting as I can).
My question is... without exception, all of the resources I've seen/read have said "after removing the vinegar, it's important to neutralise the fuel tank with baking soda/washing liquid" - I was planning to just follow this as it sounds good! This is apparently to 'stop the vinegar reaction / return to a safe pH level'. I've seen how quickly flash rust takes hold and I'm wondering if using baking soda is really necessary. Surely it would need to be in the tank for a few minutes at least for it to do anything? - What exactly is this doing? Looking into this I've also read that baking soda and vinegar will create a salt, which is concerning as I'll then need to make sure any salt is thoroughly removed. I could understand the idea to, say, neutralise the vinegar itself before disposal (maybe?), but does the vinegar have any lasting effect on the steel after it's been removed?
As a side question, is there any recommendation on the best possible ways to prevent flash rust? I've seen/read many ideas which I think probably work to varying degrees of success, mainly to get the tank dry as quickly as possible then coat it with an oil/wd40/fogging oil/kerosene. I've just discovered water based corrosion inhibitors and was thinking these might be the ultimate best option, so after going through whatever process I need to, as a final step I would rinse with a corrosion inhibitor?
Answer: You might want to migrate this question to Motor Vehicle Maintenance and Repair SE for a more practical and less theoretical range of answers.
That being said, plain water is sufficient for rinsing out any remaining acetic acid. (It's a gas tank, not lab equipment.) However, unless you take further steps to stop the tank from rusting while in service, your efforts will all be wasted. The tank rusted because of water in fuel, most likely due to condensation of water from humid air as the tank "breathes". You now have bare, clean steel with no protection from future rust.
Forget the oil / kerosene / what have you. Its protective effect will last only until the first fuel-up, when the gasoline will dissolve it.
I suggest that you coat the interior of the tank with a product that is designed for the purpose, such as POR-15 Fuel Tank Sealer. I've used it. It works. It will completely coat the inside of the tank with a hard polymer coating that is impervious to fuel and water, and it will prevent any future rust. | {
"domain": "chemistry.stackexchange",
"id": 17068,
"tags": "acid-base, corrosion, iron"
} |
Can a proton and an electron annihilate in a gravitational field? | Question: According to this Physics.SE comment, it is gravitationally allowed, though very unlikely, for a proton and an electron to annihilate yielding two photons.
Is that correct?
If so, why? (In particular, why does semiclassical gravity allow nonconservation of baryon number?)
Do gravitational waves violate conservation of baryon number? is somewhat related, but the currently accepted answer discusses only black holes and the cosmological baryon asymmetry problem, neither of which is relevant to this question.
Answer: At first I thought that Ron Maimon is talking of the equivalent of the $β+$ decay which happens in proton rich nuclei, the energy taken from the binding energy balance:
$p \to n + e^+ + \nu_e$
In the case of the answer to the question about Hawking- and Unruh-like radiation from large gravitational bodies that are not black holes, this should also be taken into account, with the energy provided by the gravitational field. This too is very improbable because of the tiny gravitational coupling entering the necessary Feynman diagrams. In this case there is no baryon annihilation.
But the answer's "This eventually may happen when the proton decays," is based on GUTs (grand unified theories). Protons do not decay in the standard model.
Ron is commenting on this sentence, and this implies GUTs.
So it is not effective quantized gravity which allows such a process; in GUTs protons decay, so baryon number is not conserved. The process that Ron refers to (as also proton decay) can only occur within a GUT.
"domain": "physics.stackexchange",
"id": 56253,
"tags": "quantum-field-theory, particle-physics, gravity, conservation-laws, qft-in-curved-spacetime"
} |
CMakeLists.txt vs package.xml | Question: I've read this link: https://answers.ros.org/question/217475/cmakeliststxt-vs-packagexml/
But still, I can't understand very clearly.
When I try to compile a ROS project with the command catkin_make --install, how and when is the package.xml used? How and when is the CMakeLists.txt used? Must both of them be used?
Answer: Regarding CMakeLists.txt, it
is the input to the CMake build system for building software packages.
You can think of this file as simply being a list of the CMake build targets and commands for your specific package. It may help to read this to get a general idea.
For package.xml, it
defines properties about the package such as the package name, version numbers, authors, maintainers, and dependencies on other catkin packages.
EDIT: According to an answer in How do I write package.xml? the package.xml is used during the install process to help resolve dependencies.
If you are missing any dependencies and they are not listed in the package.xml file before running catkin_make --install, then your build will either fail or not work as expected. | {
"domain": "robotics.stackexchange",
"id": 1699,
"tags": "ros"
} |
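To make the division of labour concrete, here is a minimal sketch of a catkin CMakeLists.txt (the package name my_pkg and its dependencies are hypothetical, not taken from the question):

```cmake
# CMakeLists.txt -- read by CMake when catkin_make configures/builds:
# it declares build targets and how to compile and link them.
cmake_minimum_required(VERSION 2.8.3)
project(my_pkg)

find_package(catkin REQUIRED COMPONENTS roscpp std_msgs)
catkin_package(CATKIN_DEPENDS roscpp std_msgs)

include_directories(${catkin_INCLUDE_DIRS})
add_executable(my_node src/my_node.cpp)
target_link_libraries(my_node ${catkin_LIBRARIES})
```

The matching package.xml would declare the same dependencies as `<build_depend>`/`<run_depend>` entries plus the name/version/maintainer metadata; tools such as rosdep read that file to install missing system dependencies before the build, which is why a dependency omitted there can make catkin_make --install fail even though CMakeLists.txt is correct.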
Renormalizing Chaos: Transition in a Logistic Map | Question: I am currently trying to understand the analysis of a logistic-like map $$f_\mu (x) = 1-\mu x^2$$
after section 2.2 in "Renormalization Methods" by A. Lesne.
As I understand it, the physical situation is that $f_\mu$ has exactly $2^n$ attractors $x_1,\dots,x_{2^n}$ in a certain region $\mu\in [0,\mu_c)$. Since one can order them such that $f_\mu(x_i) = x_{i+1}$ and the number of attractors repeatedly doubles, this is called the "period-doubling scenario".
However, at $\mu_c\approx 1.401$, this behavior can no longer be found, and one might interpret this as a critical point, since the system appears to be chaotic from there on; in some sense, the number of iterations it takes until a point $x$ is reached again becomes infinite.
Here is a sketch of the situation:
At certain points $\mu_i$, $i = 1, 2, \dots$, the number of attractors doubles until $\mu_c$ is reached, where this periodicity can no longer be found. $N$ has been chosen to be sufficiently large.
To understand the transition from the periodic point of view, Lesne conjectures that some $\delta$ exists such that $$\lim_{i\rightarrow \infty}\delta^i (\mu_c - \mu_i)=A\neq0\ .$$
Then, it is stated that $\delta \approx 4.67$ is somehow universal and can be derived using a renormalization approach of the form $R\left[ f\right] \propto f^k$, with $R$ being an operator that in the end contains all the information about the system.
Two things are unclear to me:
How can one use the renormalization operator $R$ to analyze the critical behaviour, thus $\delta$?
and
Is there a systematic way to find an appropriate $R$ better than just looking at self-similarities of some graphs?
Thank you in advance
Answer: The best reference for this is Feigenbaum's original article, reprinted in "Universality in Chaos" by Cvitanovic. The point is that when you iterate a map, every time you period double, you fold up the function one more time. The behavior is dominated by the solution to the following equation:
$$ \alpha g(g(x/\alpha)) = g(x)$$
Which says that $g$ iterated with itself and rescaled (both in the domain and range) looks just like $g$. The function $g$ is shifted relative to $f$, so that its maximum is at 0, not at some point between 0 and 1, which means you don't have to follow the critical point under iteration. You can solve this condition more easily by imposing the symmetry $g(x)=g(-x)$ and using a Taylor expansion, and this gives $g$ and $\alpha$, and $\alpha$ is the scale exponent. Everything about the critical behavior is determined by $g$, and this is described best in the original article.
"domain": "physics.stackexchange",
"id": 1681,
"tags": "renormalization, chaos-theory"
} |
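The cascade and the value $\delta \approx 4.67$ can also be checked numerically, without the full renormalization machinery. A sketch (this is not Feigenbaum's functional-equation method; it locates the superstable parameters $\mu_n$, where the critical point $x=0$ is periodic with period $2^n$, and forms ratios of their spacings — the Newton seeds and the cutoff $n \le 8$ are pragmatic choices):

```python
# f_mu(x) = 1 - mu*x^2; superstable mu_n solves f_mu^(2^n)(0) = 0 for n >= 1.

def g(mu, n):
    """Return f_mu^(2^n)(0): iterate the map 2^n times from the critical point."""
    x = 0.0
    for _ in range(2 ** n):
        x = 1.0 - mu * x * x
    return x

def superstable(seed, n, tol=1e-12, h=1e-8):
    """Solve g(mu, n) = 0 by Newton's method with a numerical derivative."""
    mu = seed
    for _ in range(100):
        val = g(mu, n)
        step = val * h / (g(mu + h, n) - val)
        mu -= step
        if abs(step) < tol:
            break
    return mu

# mu = 0 (degenerate period-1 case) and mu = 1 (period 2) are known exactly.
mus = [0.0, 1.0]
for n in range(2, 9):
    seed = mus[-1] + (mus[-1] - mus[-2]) / 4.669  # extrapolate the next seed
    mus.append(superstable(seed, n))

deltas = [(mus[i] - mus[i - 1]) / (mus[i + 1] - mus[i])
          for i in range(1, len(mus) - 1)]
print(deltas[-1])  # the ratios approach the Feigenbaum constant 4.6692...
```

The spacings $\mu_{n+1} - \mu_n$ shrink geometrically by $\approx 4.669$ per doubling, which is exactly the universal $\delta$ discussed above; the accumulation point of the `mus` list is $\mu_c \approx 1.4012$.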
Requirement of vector 'b' in the definition of Phase Estimation Sampling (PES) | Question: In this paper (last paragraph, page 3) by Wocjan and Zhang, the definition of PES requires vector/bit string b.
The phase estimation problem (PE) very much inspires the definition.
I cannot understand the vector b requirement in the PES definition.
I guess it has some connection to the 'distance/measure' defined in the sample space. Perhaps vector b brings a weight factor to account for uniformity in the definition (or, states/vectors near b need to be estimated more precisely than vectors far away from b).
Answer: The discussion in question appears to be discussing usage of the Quantum Phase Estimation algorithm when we do not have access to an eigenstate $|\eta_j \rangle$ of the unitary matrix $U$ in question. This is almost always the case in practice, as one may assume that the task of obtaining an eigenstate is similarly as hard as finding an eigenvalue $\lambda_j = e^{2\pi i\phi_j}$, which is the main purpose of QPE.
The paper you have linked suggests using a bit string-encoded state $|b\rangle$ as an approximate eigenstate $|\eta_j \rangle$, such that the probability of success now depends on the squared overlap: $P_{\text{success}} \propto |\langle b|\eta_j\rangle|^2$.
Using a bit string encoded initial state sometimes works quite well. Indeed, when using QPE to estimate an eigenvalue of a Hamiltonian H (which we encode in a unitary matrix U) in quantum chemistry, we call such a state the Hartree-Fock state. The ordered bit string corresponds to the occupation of molecular orbitals as determined by the Hartree-Fock method. | {
"domain": "quantumcomputing.stackexchange",
"id": 5586,
"tags": "quantum-algorithms, complexity-theory, quantum-phase-estimation, bqp"
} |
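The claim that the readout distribution is weighted by the squared overlaps can be checked with a small linear-algebra simulation of textbook QPE. This is a toy setup of my own choosing (a diagonal $U$ with two eigenphases, both exactly representable with $t$ bits, and an input $|b\rangle$ that is not an eigenstate):

```python
import numpy as np

t = 3
N = 2 ** t
phis = np.array([1 / 8, 5 / 8])             # eigenphases phi_j of U (t-bit exact)
c = np.array([np.sqrt(0.2), np.sqrt(0.8)])  # overlaps <eta_j|b>

x = np.arange(N)
# Amplitudes after the controlled-U^(2^k) ladder:
# A[x, j] = c_j * exp(2*pi*i*phi_j*x) / sqrt(N)
A = c * np.exp(2j * np.pi * np.outer(x, phis)) / np.sqrt(N)

# Inverse QFT on the ancilla register, then read it out
Finv = np.exp(-2j * np.pi * np.outer(x, x) / N) / np.sqrt(N)
p = (np.abs(Finv @ A) ** 2).sum(axis=1)     # sum over the system register
print(np.round(p, 3))  # peak 0.2 at y=1 (phi=1/8) and 0.8 at y=5 (phi=5/8)
```

With exactly representable phases, the probability of reading $y_j = 2^t \phi_j$ is exactly $|\langle \eta_j|b\rangle|^2$; for phases that are not $t$-bit exact, each peak spreads over neighbouring outcomes but keeps the same total weight.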