anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
What happens if we expand the alphabet of a given Turing Machine? | Question:
If we expanded the alphabet Sigma = { 0, 1, u, > } to Sigma' = { 0, 1, 2, u, > }, how would the table change? We're just learning about Turing Machines in class and I'm a bit confused. Is the table in the picture calculated in some way?
Answer: That’s a table of transitions, and it determines what the program does when it encounters a given symbol on its tape while in a given state. This program, if I’m reading it right, accepts strings containing two consecutive 1 symbols before the first space.
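To make the idea concrete, a transition table can be modelled in code. The machine below is a hypothetical reconstruction (the table in the original picture is not reproduced here, and the blank symbol u and start marker > are omitted for brevity): it accepts on seeing two consecutive 1 symbols, and shows how adding a symbol 2 just means adding one entry per state.

```python
# A transition table modelled as a dict mapping
# (state, symbol) -> (next_state, symbol_to_write, head_move).
# Hypothetical sketch: scan right in "s1" until a 1 is read,
# move to "s2", and accept on a second consecutive 1.
table = {
    ("s1", "0"): ("s1", "0", "R"),
    ("s1", "1"): ("s2", "1", "R"),
    ("s2", "0"): ("s1", "0", "R"),
    ("s2", "1"): ("accept", "1", "R"),
}

# Extending the alphabet with "2" means adding one transition per state
# saying what to do on reading a 2 -- here, ignore it and keep moving:
table[("s1", "2")] = ("s1", "2", "R")
table[("s2", "2")] = ("s2", "2", "R")

def accepts(tape: str) -> bool:
    """Run the machine until it accepts or falls off the end of the tape."""
    state, pos = "s1", 0
    while state != "accept" and pos < len(tape):
        key = (state, tape[pos])
        if key not in table:       # no transition defined: reject
            return False
        state, _write, _move = table[key]
        pos += 1                   # every rule above moves right
    return state == "accept"
```

Note that "ignoring" 2 by staying in s2 means a string like 121 is still accepted, exactly as the stay-in-state transitions described here imply.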
If there were another symbol, it would be necessary to define what the program would do on encountering it. For example, to ignore it, there would be a pair of transitions saying that, if the program is in state s1 and reads a 2, it should stay in state s1 and move to the right, and that if it is in state s2 and reads a 2, it should stay in state s2 and move to the right. | {
"domain": "cs.stackexchange",
"id": 5282,
"tags": "computability, turing-machines"
} |
Can you suggest some good resources for learning Mathematica for physics? | Question: I am into HEP. I know nothing about Mathematica and have to start from the basics. HEP requires a lot of computation for Feynman diagrams etc. Could you please suggest some good resources to start learning Mathematica, and some libraries that I would require?
Answer: I'll begin with tutorials
1) Mathematica's help documentation is more than sufficient for you; it contains detailed descriptions of all the commands and everything you need
2) www.wolfram.com/mathematica/ this is Mathematica's official site, where you will find loads of tutorials and video tutorials right from the basics
3) check out blogs like http://www.sunnyguha.tk/?cat=6 which contain the basics of Mathematica.
Regarding the packages you'll need for HEP:
1) Feyncalc
2) http://library.wolfram.com/infocenter/TechNotes/4580/
Hope you find it helpful | {
"domain": "physics.stackexchange",
"id": 8373,
"tags": "resource-recommendations, computational-physics, software"
} |
If an antenna must be $\frac{1}{4}$ of the wavelength, how can car antennas be so small? | Question: If the transmission antenna has to be $\frac{1}{4}$ of the wavelength, how can the car antennas' size be much less than that and properly receive the radio signal?
Answer: The idea behind the quarter wavelength antenna is that it is self-resonant: it is "tuned". You can however use an antenna of any size to pick off some electromagnetic energy - and you can tune the antenna by adding some inductance in series (or inductance and capacitance). The reason that you tune an antenna is simply this: you want it to have real impedance, which happens exactly at resonance. When this happens, then all the power that is incident on the antenna ends up going into the electronics; when the impedance is complex, the current either lags or leads the voltage, and this results in reflection of the power (and less power going into the amplifier).
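As a rough numeric sketch of that tuning idea (the component values here are illustrative assumptions, not from the original answer): an electrically short whip looks capacitive, and the series inductance that makes the impedance real at the carrier frequency follows from setting X_L = X_C.

```python
import math

f = 100e6        # carrier frequency, Hz (hypothetical FM-band value)
C = 10e-12       # assumed effective capacitance of a short whip, F

# At resonance the inductive reactance cancels the capacitive one:
#   2*pi*f*L = 1/(2*pi*f*C)  =>  L = 1/((2*pi*f)**2 * C)
L = 1 / ((2 * math.pi * f) ** 2 * C)

X_C = 1 / (2 * math.pi * f * C)   # capacitive reactance, ohms
X_L = 2 * math.pi * f * L         # reactance of the tuning coil, ohms

print(f"L = {L*1e9:.1f} nH, X_C = {X_C:.0f} ohm, X_L = {X_L:.0f} ohm")
```

With these assumed numbers the loading coil comes out in the hundreds of nanohenries, which is why a small series coil can "tune" a physically short antenna.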
You can't get around the fact that you are "capturing" energy from less area in space, so it will be less efficient. On the other hand having an antenna that is too long doesn't buy you anything as the signal from one part of the antenna would cancel the signal from another.
This is one of the beautiful things about the Yagi antenna: the multiple elements are spaced in such a way that they cause constructive interference along a particular axis, giving you both gain and directionality - in essence you are capturing energy from a larger volume of space.
Little anecdote: from wikipedia article on Yagi-Uda antenna
The Yagi was first widely used during World War II for airborne radar sets, because of its simplicity and directionality. Despite its being invented in Japan, many Japanese radar engineers were unaware of the design until very late in the war, partly due to rivalry between the Army and Navy. The Japanese military authorities first became aware of this technology after the Battle of Singapore when they captured the notes of a British radar technician that mentioned "yagi antenna". Japanese intelligence officers did not even recognise that Yagi was a Japanese name in this context. When questioned, the technician said it was an antenna named after a Japanese professor.
I have heard it said that the inventor was executed during WW-II by the Japanese because his invention "helped the enemy". I can't find a reference to support that. Apparently he wasn't even the inventor - it was Uda. But Yagi published it, patented it, and even sold the rights to Marconi (before the war). It would be karma if he ended up paying the ultimate price for stiffing the real inventor... "No, it wasn't me! But Professor, your name is on the patent... off with his head." In fact it is apocryphal - a bit more searching reveals that he died in 1976. | {
"domain": "physics.stackexchange",
"id": 16702,
"tags": "radio, antennas"
} |
publishing data from LabView through rosbridge(v2) | Question:
Hi,
I am using the ROS toolkit in LabVIEW to publish an odometry message to a particular topic. When rosbridge (v2) is running I can see there is a connection between LabVIEW and ROS (I am given a connection ID, while when rosbridge is not running I get 0). But no data is published to the topic; the topic does not appear in my topic list.
Any ideas of what could be the issue? According to Clearpath Robotics, the only thing that needs to be done on the ROS side to connect LabVIEW and ROS is to install rosbridge. It might be, though, that rosbridge is where the data gets stuck.
Thank you
Originally posted by av_roboticslab on ROS Answers with karma: 1 on 2014-03-26
Post score: 0
Answer:
Unfortunately, the toolkit works with rosbridge v1 only. My days have been a little too busy to upgrade, retest, and re-release everything.
Originally posted by Ryan with karma: 3248 on 2014-03-26
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by av_roboticslab on 2014-03-27:
thank you for responding. I was wondering whether you know of any way rosbridge v1 can run on hydro. In the documentation it says it works with versions up to and including groovy
I thought to give it a try but wasn't successful. | {
"domain": "robotics.stackexchange",
"id": 17432,
"tags": "ros, rosbridge, publish"
} |
Deriving the Fourier transform of cosine and sine | Question: In this answer, Jim Clay writes:
... use the fact that $\mathcal F\{\cos(x)\} = \frac{\delta(w - 1) + \delta(w + 1)}{2}$ ...
The expression above is not too different from $\mathcal F\{\cos(2\pi f_0t)\}=\frac{1}{2}(\delta(f-f_0)+\delta(f+f_0))$.
I have been trying to obtain the latter expression by using the standard definition of the Fourier transform $X(f)=\int_{-\infty}^{+\infty}x(t)e^{-j2\pi ft}dt$, but all I end up with is an expression quite different from what's apparently the answer.
Here's my work:
\begin{align}
x(t)&=\cos(2\pi f_0t)\\
\Longrightarrow \mathcal F\left\{x(t)\right\}&=\int_{-\infty}^{+\infty}\cos(2\pi f_0t)e^{-j2\pi ft}dt\\
&=\int_{-\infty}^{+\infty}\frac 12 \left(e^{-j2\pi f_0t}+e^{j2\pi f_0t}\right)e^{-j2\pi ft}dt\\
&=\frac{1}{2}\int_{-\infty}^{+\infty}\left(e^{-j2\pi f_0t}e^{-j2\pi ft}+e^{j2\pi f_0t}e^{-j2\pi ft}\right)dt\\
&=\frac{1}{2}\int_{-\infty}^{+\infty}\left(e^{-j2\pi t\left(f_0+f\right)}+e^{-j2\pi t\left(f-f_0\right)}\right)dt\\
&=\frac{1}{2}\left(\int_{-\infty}^{+\infty}e^{-j2\pi t(f_0+f)}dt+\int_{-\infty}^{+\infty}e^{-j2\pi t(f-f_0)}dt\right)
\end{align}
This is where I'm stuck.
Answer: Your work is OK except for the problem that the Fourier transform of
$\cos(2\pi f_0 t)$ does not exist in the usual sense of a function of $f$,
and we have to extend the notion to include what are called distributions,
or impulses, or Dirac deltas, or (as we engineers are wont to do, much
to the disgust of mathematicians) delta functions. Read
about the conditions that must be satisfied in order for the Fourier
transform $X(f)$ of the signal $x(t)$ to exist (in the usual sense)
and you will see that $\cos(2\pi f_0 t)$ does not have a Fourier transform
in the usual sense.
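One way to see what goes wrong numerically (a sketch added here, not part of the original answer): truncate the integral to $[-T, T]$. The value at $f = f_0$ grows like $T$ without bound, while values away from $\pm f_0$ stay bounded, which is the hallmark of an emerging impulse.

```python
import numpy as np

f0 = 1.0  # tone frequency, Hz

def truncated_ft(f: float, T: float, n: int = 200_001) -> float:
    """Real part of the integral of cos(2*pi*f0*t) e^{-j 2 pi f t} over [-T, T]."""
    t = np.linspace(-T, T, n)
    x = np.cos(2 * np.pi * f0 * t) * np.exp(-2j * np.pi * f * t)
    dt = t[1] - t[0]
    # composite trapezoidal rule, written out explicitly
    return float((x.sum() - 0.5 * (x[0] + x[-1])).real * dt)

peak_50 = truncated_ft(f0, 50)     # ~50: grows linearly with T
peak_100 = truncated_ft(f0, 100)   # ~100: still growing, no finite limit
off_peak = truncated_ft(1.5, 100)  # stays small: no impulse away from f0
```

Doubling $T$ doubles the value at $f = f_0$, so the "transform" never converges there in the ordinary sense; the mass piles up into an impulse.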
Turning to your specific question, once you understand that impulses
are defined only in terms of how they behave as integrands in an
integral, that is, for $a < x_0 < b$,
$$\int_{a}^{b} \delta(x-x_0)g(x)\,\mathrm dx = g(x_0)$$
provided that $g(x)$ is continuous at $x_0$, then
it is easier to deduce the Fourier transform of
$$\cos(2\pi f_0 t)
= \frac{1}{2}\left[e^{j2\pi f_0 t} + e^{-j2\pi f_0 t}\right]$$
by musing on the fact that
$$\int_{-\infty}^\infty \delta(f-f_0)e^{j2\pi ft}\,\mathrm df = e^{j2\pi f_0t}$$
and so it must be that $\cos(2\pi f_0 t)$ is the inverse
Fourier transform of $\displaystyle \frac{1}{2}\left[\delta(f-f_0) + \delta(f+f_0)\right]$. | {
"domain": "dsp.stackexchange",
"id": 1286,
"tags": "continuous-signals, fourier"
} |
Are the human genes' promoters all known? | Question: It seems to be a basic question, but I couldn't find a certain answer.
The human genome has been known for more than a decade, and is available through several data providers such as the NCBI. For the genes' promoters, however, there seems to be much less information. There are some known characteristics (like the TATA box, which appears in ~25% of human genes), some prediction methods (like CpG site appearance), some rough estimates for the location (100-1000 bp long, in the adjacent 2 kbp upstream of the TSS), but I couldn't find any data set with a closed list of human genes' promoters.
Is this really yet to exist?
Answer: Nope. The human genome is still quite unexplored: new genes are still being discovered, and the annotation of non-protein-coding regions (which include promoters) is still far from complete. For example, look at the Statistics comparison of the current and previous Human GENCODE Release; it clearly shows that the annotation of the human genome is still an ongoing process. | {
"domain": "biology.stackexchange",
"id": 6107,
"tags": "human-genetics, genomics, t7-promoter"
} |
Question on ascending $k$-tuples of naturals whose sum is at most $S$ | Question: Let $k$ and $S$ be fixed non-negative integers. Let us regard the following set of tuples
$\{ (x_1,\dots,x_k)| x_i \leq x_{i+1}, \sum_j x_j \leq S \}$
I have got some questions on this set.
Is there a name for this set? It is similar to an arbitrary partition of an integer.
Is there a nice formula that provides the number of elements in this set? It may be recursive.
Is there an easy to compute index for each of the elements, such that in turn any element can be computed from the index (easily)?
Of course, there are brute force algorithms for the last two questions. I am looking for something neater.
Answer:
Let's add one more number $x_{k+1} = S - \sum_i{x_i}$ and assume none of the $x_i$ is zero. Now $(x_1, \ldots, x_{k+1})$ is what is called a $(k+1)$-part integer partition of $S$. So your tuples are integer partitions of the numbers $0, \ldots, S$ into at most $k$ parts (at most, because some $x_i$ could be 0).
Counting partitions is a big topic but a recurrence is not very hard to formulate. Let $q(n,k)$ be the number of partitions of $n$ into at most $k$ parts. The number of your tuples is then $1 + \sum_{n = 1}^{S}{q(n, k)}$. For the recurrence you have two cases:
a) There are $k$ nonzero $x_i$. Then if you subtract one from each $x_i$ you get a partition of $n-k$ with at most $k$ parts. So there is a bijection between partitions of $n$ with exactly $k$ parts and partitions of $n-k$ with at most $k$ parts.
b) There are fewer than $k$ nonzero $x_i$. The number of such tuples is $q(n, k-1)$.
The recurrence you get is $q(n, k) = q(n-k, k) + q(n, k-1)$. The initial conditions are $q(0, k) = 1$ and $q(n, 0) = 0$ for $n>0$ (and $q(n, k) = 0$ for $n < 0$).
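For reference, a direct transcription of this recurrence (a sketch; the brute-force cross-check against the original tuple definition is mine, not the answerer's):

```python
from functools import lru_cache
from itertools import combinations_with_replacement

@lru_cache(maxsize=None)
def q(n: int, k: int) -> int:
    """Number of partitions of n into at most k parts."""
    if n < 0:
        return 0
    if n == 0:
        return 1
    if k == 0:
        return 0
    return q(n - k, k) + q(n, k - 1)

def count_tuples(k: int, S: int) -> int:
    """Number of ascending k-tuples of naturals with sum <= S."""
    return 1 + sum(q(n, k) for n in range(1, S + 1))

def brute(k: int, S: int) -> int:
    """Enumerate nondecreasing tuples directly, for a sanity check."""
    return sum(1 for xs in combinations_with_replacement(range(S + 1), k)
               if sum(xs) <= S)
```

For example, with $k = 3$ and $S = 6$ both counts agree (23 tuples), since `combinations_with_replacement` enumerates exactly the nondecreasing tuples.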
As to 3., I think $q(n, k)$ is at least on the order of $n^k$ which would imply that there aren't very succinct representations. | {
"domain": "cstheory.stackexchange",
"id": 1147,
"tags": "co.combinatorics, nt.number-theory"
} |
How to get the starting location of a robot using an IMU sensor (without using GPS) | Question: I have a robot and I need to get its initial location. Then, when we move the robot away from that location and give a command like "HOME", it should go back to that initial location. But I don't understand how to get the initial location of the robot.
Answer: An IMU gives you linear acceleration and rotational speed. It doesn't give a position.
You can integrate the output of an IMU to get a linear position and angular orientation (the pose), but you need to choose initial speeds and positions.
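A minimal 1-D sketch of that double integration (the accelerometer samples are fabricated, and zero initial speed and position are assumed):

```python
# Dead-reckon position from accelerometer samples by integrating twice.
# Synthetic data: constant 2 m/s^2 for one second, sampled at 1 kHz.
dt = 0.001                    # sample period, s
accel = [2.0] * 1000          # fake IMU readings, m/s^2

v, p = 0.0, 0.0               # chosen initial speed and position ("home")
for a in accel:
    v += a * dt               # integrate acceleration -> velocity
    p += v * dt               # integrate velocity -> position

# Closed form for comparison: p = 0.5 * a * t^2 = 0.5 * 2 * 1^2 = 1 m
print(f"estimated displacement from home: {p:.3f} m")
```

In this convention, "returning home" just means driving the integrated position back toward zero, which is why the choice of initial conditions matters.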
Typically, these choices are zero. If you decided to choose zero for your initial/default pose, then returning "home" just means trying to get those values to go back to zero again. | {
"domain": "robotics.stackexchange",
"id": 1822,
"tags": "mobile-robot, imu, navigation, first-robotics, dead-reckoning"
} |
audio_play on turtlebot: messages being sent, but no audio output | Question:
Hello,
I'm trying to get audio streaming back from my turtlebot and was delighted to find the audio_common package which provides audio_play and audio_capture. Sadly, it doesn't seem to be working.
I'm following the tutorial found here: http://www.ros.org/wiki/audio_common/Tutorials/Streaming%20audio
Setup
On my turtlebot, I installed sound_common with "apt-get install ros-electric-sound-drivers"
My microphone and speakers are configured correctly, I'm able to record and play back sounds.
The command to test my audio setup works. It echoes sounds from the microphone.
gst-launch-0.10 alsasrc ! audioconvert ! audioresample ! alsasink
I launched both audio_capture and audio_play as per the tutorial
$ roslaunch audio_capture capture.launch
$ roslaunch audio_play play.launch
Problem and Troubleshooting
No sound output.
Both scripts launched without error.
I've verified that the nodes are connected using rxgraph
"rostopic echo audio" streams audio messages.
I can see the microphone input level bouncing, but the output level is doing nothing.
I've used sound_play to verify that I can play sounds and my volume settings are correct.
From what I can tell, everything should be working. But I still get no output. Any ideas on how to troubleshoot this?
Thanks,
-Brian
Originally posted by brianpen on ROS Answers with karma: 183 on 2012-01-26
Post score: 0
Answer:
Have you tried the example in the turtlebot_sounds package?
Originally posted by Ryan with karma: 3248 on 2012-01-26
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by brianpen on 2012-01-31:
hmm... thanks for the suggestion, I'll look at that. any suggestions for troubleshooting codec issues?
Comment by tfoote on 2012-01-28:
Your problem is likely that audio_play cannot process the audio you have chosen due to a codec issue.
Comment by brianpen on 2012-01-27:
Yes, it works correctly and plays the sounds.
Comment by mathijsdelangen on 2014-02-24:
Have you been able to solve this problem? I am having the same kind of issues at the moment using audio_play. However, in the beginning I have sound playback, but it stops after a little while. | {
"domain": "robotics.stackexchange",
"id": 8011,
"tags": "ros, audio, audio-common"
} |
Gravitational waves detection, any news? | Question: Is the detection of gravitational waves a reality with nowadays technology?
Are there recent news?
Answer: Unfortunately, gravitational waves have not been detected yet.
There are a number of Earth-bound detectors planned and already in operation (e.g. LIGO, GEO 600, Virgo, NANOGrav and others). As for space-borne detectors, ESA is working on the Next Gravitational-Wave Observatory (NGO) after NASA pulled out of the LISA project in April 2011 due to funding problems. ESA's LISA Pathfinder mission is scheduled to launch in June 2013 to test technologies to be used by NGO.
Keep an eye on the pages and blogs of these projects if you'd like to stay up to date on their progress. If gravitational waves are detected, the discovery will no doubt be widely announced. | {
"domain": "physics.stackexchange",
"id": 2087,
"tags": "experimental-physics, gravitational-waves, ligo"
} |
Python: lru_cache makes LeetCode's dungeon game slower | Question: I'm trying to implement a recursive + memoized version of LeetCode's Dungeon Game question.
I tried to use @lru_cache():
________________________________________________________
Executed in 14.22 secs fish external
usr time 14.13 secs 101.00 micros 14.13 secs
sys time 0.04 secs 498.00 micros 0.04 secs
And comment it to make it unavailable:
________________________________________________________
Executed in 11.73 secs fish external
usr time 11.65 secs 123.00 micros 11.65 secs
sys time 0.04 secs 556.00 micros 0.04 secs
It sounds a lot like the lru_cache won't help in this case, just wondering if anything that I'd missed?
from typing import List
from functools import lru_cache
class Solution:
    def calculateMinimumHP(self, dungeon: List[List[int]]) -> int:
        height = len(dungeon)
        width = len(dungeon[0])
        @lru_cache()
        def helper(start_x, start_y, acc, min):
            cur = dungeon[start_y][start_x]
            acc = cur + acc
            if cur < 0 and acc < min:
                min = acc
            if start_x == width - 1 and start_y == height - 1:
                return min
            if start_x < width - 1:
                right_res = helper(start_x+1, start_y, acc, min)
            else:
                right_res = float("-inf")
            if start_y < height - 1:
                down_res = helper(start_x, start_y+1, acc, min)
            else:
                down_res = float("-inf")
            ret = max(down_res, right_res)
            return ret
        res = helper(0,0,0,0)
        return 1 if res > 0 else abs(res)+1
def main():
    sol = Solution()
    long_case = [[2,-8,-79,-88,-12,-87,-5,-56,-55,-42,18,-91,1,-30,-36,42,-96,-26,-17,-69,38,18,44,-58,-33,20,-45,-11,11,15,-40,-92,-62,-51,-23,20,-86,-2,-90,-64,-100,-42,-16,-55,29,-62,-81,-60,7,-5,31,-7,40,19,-53,-81,-77,42,-87,37,-43,37,-50,-21,-86,-28,13,-18,-65,-76],
[-67,-23,-62,45,-94,-1,-95,-66,-41,37,33,-96,-95,-17,12,30,-4,40,-40,-89,-89,-25,-62,10,-19,-53,-36,38,-21,1,-41,-81,-62,3,-96,-17,-75,-81,37,32,-9,-80,-41,-13,-58,1,40,-13,-85,-78,-67,-36,-7,48,-16,2,-69,-85,9,15,-91,-32,-16,-84,-9,-31,-62,35,-11,28],
[39,-28,1,-31,-4,-39,-64,-86,-68,-72,-68,21,-33,-73,37,-39,2,-59,-71,-17,-60,4,-16,-92,-15,10,-99,-37,21,-70,31,-10,-9,-45,6,26,8,30,13,-72,5,37,-94,35,9,36,-96,47,-61,15,-22,-60,-96,-94,-60,43,-48,-79,19,24,-40,33,-18,-33,50,42,-42,-6,-59,-17],
[-95,-40,-96,42,-49,-3,6,-47,-38,31,-25,-61,-18,-52,-80,-55,29,27,22,6,29,-89,-9,14,-77,-26,-2,-7,-2,-64,-100,40,-52,-15,-76,13,-27,-83,-70,13,-62,-54,-92,-71,-65,-18,26,37,0,-58,4,43,-5,-33,-47,-21,-65,-58,21,2,-67,-62,-32,30,-4,-46,18,21,2,-5],
[-5,34,41,11,45,-46,-86,31,-57,42,-92,43,-37,-9,42,-29,-3,41,-71,13,-8,37,-36,23,17,-74,-12,-55,-18,-17,-13,-76,-18,-90,-5,14,7,-82,-19,-16,44,-96,-88,37,-98,8,17,9,-2,-29,11,-39,-49,-95,20,-33,-37,-42,42,26,-28,-21,-44,-9,17,-26,-27,24,-60,-19]]
    print(sol.calculateMinimumHP(long_case))
if __name__ == '__main__':
    main()
Answer: With your example input, the function helper gets called with 400568 different parameters. According to its documentation lru_cache by default caches the parameters and results of the last 128 function calls.
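A quick way to check whether a cache is actually earning its keep is `cache_info()` (a toy sketch with a different function, not the dungeon code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded; functools.cache is an alias since 3.9
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(30)
# With an unbounded cache every subproblem is computed exactly once:
# 31 misses (fib(0)..fib(30)) and zero evictions.
print(fib.cache_info())
```

If `misses` keeps climbing while `hits` stays near zero, the cache is being defeated, either by a too-small `maxsize` or (as in the dungeon code, where `acc` and `min` vary per call) by arguments that almost never repeat.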
Try setting maxsize to something more reasonable, or use a non-lru cache (@lru_cache(maxsize=None), or simply @cache). | {
"domain": "codereview.stackexchange",
"id": 42178,
"tags": "python, performance, algorithm, cache"
} |
Why Is the Smallest Operable Data Type in Most Programming Languages One Byte? | Question: Why is the smallest operable data type in most programming languages one byte in size? Is it possible to operate on a single bit? If it is possible, how is it done in practice?
Answer: That is mostly for historical reasons. See Wikipedia:Byte. Operating on single bits is of course possible. How to do it depends on the programming language. In most languages you have bit-level operators for AND, OR, XOR and so forth that you can use to manipulate single bits in a byte. In C you can use bit-field structs to define data types that are (logically) smaller than a byte. | {
"domain": "cs.stackexchange",
"id": 6908,
"tags": "programming-languages, computer-architecture"
} |
Application of the Woodward-Hoffmann rules to a [14+2] cycloaddition (follow-up) | Question: This is a follow-up question to another question I asked earlier.
I put some of this information into my own notes just so I can get my head around this.
So just to clarify: does the Woodward-Hoffmann rule given below only apply to ground-state reactions, so that the rule cannot be applied to the above reaction scheme because one of the components is in an excited state?
A ground-state pericyclic change is symmetry-allowed when the total number of (4q+2)s and (4r)a components is odd.
EDIT: (1) Correct labeling of the suprafacial components, (2) Removal of orbital phases, (3) Stereochemistry situated at the correct stereocentres.
Answer: For reactions in the excited state, a modified statement of the Woodward-Hoffmann rule applies:
A pericyclic change in the first excited state is symmetry allowed when the total number of (4q+2)s and (4r)a components is even.
The only difference is that we're now looking for the total to be even rather than odd. | {
"domain": "chemistry.stackexchange",
"id": 8124,
"tags": "organic-chemistry, pericyclic"
} |
Converting string of comma-separated items into 2d table | Question: I'm writing the following:
def make_table(data: str, w: int) -> List[List]:
    pass
data is a string containing items separated by ,. Some items may be empty, so consecutive commas are possible; the input can also start/end with a comma, which means that the first/last item is empty.
w stands for width and is the number of columns of the output table. We're assuming the input is valid.
Examples:
In [1]: make_table("a,b,c,d,e,f,g,h,i",3)
Out[1]: [['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]
In [2]: make_table(",,1,,1,,1,,",3)
Out[2]: [['', '', '1'], ['', '1', ''], ['1', '', '']]
My first working solution is this:
def make_table(data: str, w: int) -> list:
    data = data.split(",")
    return [data[i*w:(i+1)*w] for i in range(len(data)//w)]
What I don't like here is that
range(len(...)) gives me bad memories
I feel like this could be done prettier.
I know numpy could do it, but that's overkill. I'm looking through std libs but don't see anything related.
My second solution is more efficient but a little roundabout, I was looking for some lazy split solution but found only some rejected proposals. I did this:
def coma_split(data: str):
    i, j = 0, -1
    while True:
        i, j = j + 1, data.find(",", j + 1)
        if j != -1:
            yield data[i:j]
        else:
            yield data[i:]
            break
def make_3_column_table(data: str) -> list:
    g = coma_split(data)
    return list(zip(g,g,g))
But here I don't know how to make zip take an arbitrary number of references to g.
How can I improve any of those?
Answer:
I was looking for some lazy split solution but found only some rejected proposals
If you are looking for a lazy split solution, omit the list(...) call in the return statement, and the zip object will be returned.
def make_3_column_table(data: str) -> Iterator[tuple[str]]:
    g = coma_split(data)
    return zip(g,g,g)  # No list() call here
It looks like you are looking for the grouper recipe of itertools.
from more_itertools import grouper
def make_table(data: str, w: int) -> list[tuple[str]]:
    return list(grouper(data.split(','), w))
Or without the additional library (and without the fill padding as you assert the data is well formed):
def make_table(data: str, w: int) -> list[tuple[str]]:
    return list(zip(*[iter(data.split(','))] * w))
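For instance, the iterator-replication version reproduces the examples from the question (note the rows come back as tuples):

```python
def make_table(data: str, w: int) -> list:
    # zip pulls w items at a time from w references to the *same* iterator
    return list(zip(*[iter(data.split(','))] * w))

print(make_table("a,b,c,d,e,f,g,h,i", 3))
print(make_table(",,1,,1,,1,,", 3))
```

Because all `w` positional arguments are the same exhausted-in-lockstep iterator, each tuple consumes the next `w` items, which is exactly the row-chunking behaviour wanted here.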
Note that the return type is not the requested original List[List] return type, but neither was the output of make_3_column_table(...).
Again, if a lazy option is desired with an Iterable[...] return type instead of the list[...] return type, then omit the list(...) calls from the return statements and adjust the return signature. | {
"domain": "codereview.stackexchange",
"id": 42650,
"tags": "python, python-3.x"
} |
Conceptualising the continuous-time unit impulse function as the derivative of the unit step | Question: This is a very newbie question.
I just watched Lecture 3 of Oppenheim's Signals course, and he defines the continuous-time impulse function as the derivative of the unit step function, like so:
$$ \delta_\Delta(t)=\frac {du_\Delta(t)}{dt}$$
and that $ \delta(t)= \delta_\Delta(t) $ as $\Delta \to 0$
He claims that the derivative is equal to 1 no matter the value of $\Delta$, because that derivative can be interpreted as the area of a rectangle with sides $\Delta$ and $\frac 1 \Delta$
I can't conceptualise this the way the function $u_\Delta(t)$ is drawn at all. If the function is linear, that is, $y = mx + b$ passes through the origin, meaning $b=0$, and we can see that it has the point $(\Delta, 1)$ we can easily tell that $m= \frac 1 \Delta$ and that should be the derivative.
Can someone explain to me the error in my line of thought? Why is the derivative the area of a square?
Answer: Let $\delta_\Delta(t) = \begin{cases}\frac{1}{\Delta} && -\frac{\Delta}{2} \le t \le \frac{\Delta}{2} \\ 0 && \mathrm{otherwise}\end{cases}$.
Then just integrate it! $u_\Delta(t)$ must be zero for $t < -\frac{\Delta}{2}$, it must be 1 for $t > \frac{\Delta}{2}$, and it must be a straight line of slope $\frac{1}{\Delta}$ in between:
$$u_\Delta(t) = \begin{cases}
0 && t < -\frac{\Delta}{2} \\
\frac{1}{\Delta}\left(t + \frac{\Delta}{2}\right) && -\frac{\Delta}{2} \le t \le \frac{\Delta}{2} \\
1 && t > \frac{\Delta}{2}
\end{cases}$$
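A quick numerical sanity check of this picture (my sketch, not part of the original answer): the derivative of the ramp has height 1/Delta, but it always integrates to 1, whatever Delta is.

```python
import numpy as np

def u(t: np.ndarray, d: float) -> np.ndarray:
    """Ramp approximation of the unit step with transition width d."""
    return np.clip(t / d + 0.5, 0.0, 1.0)

for d in (0.5, 0.1, 0.02):
    t = np.linspace(-1.0, 1.0, 200_001)
    dt = t[1] - t[0]
    delta_d = np.gradient(u(t, d), dt)  # numerical derivative of the ramp
    area = delta_d.sum() * dt           # stays ~1 for every d
    print(f"d={d}: peak ~ {delta_d.max():.1f} (= 1/d), area ~ {area:.4f}")
```

As d shrinks the pulse gets taller and narrower while its area stays pinned at 1, which is exactly the behaviour the impulse inherits in the limit.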
Note that you can whomp up almost any $\delta_\Delta(t)$ to suit the problem at hand. Just choose a function of $\Delta$ that's zero outside of some bounds and that integrates to 1. It can be triangular, a half-sine, raised sine, etc. It's generally easier not to bother, but if you feel compelled to go back to basics and find things in the limit as $\Delta \to 0$, you can. | {
"domain": "dsp.stackexchange",
"id": 9281,
"tags": "continuous-signals"
} |
What does 'physically real' mean in QM? | Question: Unlike classical physics, anything that is experimentally observable is physically real in quantum mechanics, at least that's how I interpret it. So can I say the graviton is not real because we cannot experimentally detect any physical properties from it, just like the virtual particles? The Higgs boson was only recently discovered, so is it only real recently? What does 'physically real' actually mean in quantum mechanics anyway? Sounds like because engineering can't catch up to theory we will say they are not real? I'm confused...
Answer:
Unlike classical physics, anything that is experimentally observable is physically real in quantum mechanics
This is true for both quantum mechanics and classical mechanics. If you can experimentally observe something, or there is a physical phenomenon that can be detected, then it is real.
So can I say graviton is not real because we cannot experimentally detect
We can say we have a hypothesis that seems to suggest gravitons are real, but we cannot confirm this without an actual observation of gravitons.
just like the virtual particles?
They are termed virtual particles because their lifetime and energy are constrained by the energy/time uncertainty relation. This lifetime is extremely small, hence the term virtual. Their existence is not permanent like the case for other particles.
Higgs boson was recently discovered so it is real only recently?
This is exactly the same situation as above for gravitons, only for the Higgs particle we have already detected it. It is real, and prior to its detection it was hypothesised.
What does 'physically real' actually mean in quantum mechanics anyway?
It means it is physically measurable or detectable. It is tangible, and corresponds to what are called Hermitian operators in quantum mechanics. | {
"domain": "physics.stackexchange",
"id": 75981,
"tags": "quantum-mechanics, virtual-particles"
} |
Do the cells of any multicellular lifeforms discard their genetic material after differentiating? | Question: There are many types of cells which will never again divide. Some of them may not need DNA to perform their function. Are there any cases where the DNA is discarded after a final differentiation?
Answer: I recommend looking into terminal differentiation.
As one extreme example of terminal differentiation, xylem cells that form plant vascular tissues are simply dead. This is a case where not only the genome but also physiologic activity in these cells is irrelevant to their function.
A more famous case is red blood cell enucleation, wherein the maturing cell expels its nucleus (and with it its genome) in a process resembling an asymmetric cell division.
In other words, yes, this happens. | {
"domain": "biology.stackexchange",
"id": 11955,
"tags": "genetics, development, differentiation"
} |
Complexity of finding large grid minors | Question: What is the complexity of finding the largest $k\times k$ grid graph that is a minor of a given graph $G$? It is FPT in $k$, and it seems likely to be NP-hard (or NP-complete in a decision version asking whether there exists such a minor for given $k$) but I don't know of a published proof.
There are some papers on constant factor approximations to this problem, e.g.:
Demaine, E.D.; Hajiaghayi, M.; Kawarabayashi, K. (2005). Algorithmic graph minor theory: Decomposition, approximation, and coloring. FOCS, 637–646.
Gu, Q.-P.; Tamaki, H. (2011). Constant-factor approximations of branch-decomposition and largest grid minor of planar graphs in $O(n^{1+\epsilon})$ time. Theoretical Computer Science 412 (32): 4100–4109.
Is it hard to $(1+\epsilon)$-approximate, for some $\epsilon>0$?
Answer: If I understood the problem well, perhaps this is an idea for a reduction from the Hamiltonian path problem: given $G$ with $|V| = n$ and source and target nodes $s, t \in V$, you can extend it by adding an $(n-1) \times n$ "full" grid graph, with the bottom-left node of the last row connected to $s$ and the bottom-right node of the last row connected to $t$.
Then connect each of the $n-2$ remaining nodes of the last row to each of the nodes in $V \setminus \{s,t\}$, adding $(n-2)^2$ edges.
The resulting graph $G'$ has an $n \times n$ grid minor if and only if there is a Hamiltonian path from $s$ to $t$ in the source graph $G$.
In the following picture a no instance (left) and a yes instance (right) with the corresponding $n \times n$ grid graph minor. | {
"domain": "cstheory.stackexchange",
"id": 2161,
"tags": "graph-theory, np-hardness, approximation-hardness, graph-minor"
} |
Electromechanically open and close a small, hinged door? | Question: I have a chicken coop I plan on modifying to automatically open and close the door at dawn and dusk.
What sort of actuator or mechanism would be appropriate for operating the small, side-hinged door?
Constant power is not available (solar) plus it needs enough holding force to keep foxes out.
Answer: Rod & Nut drive
A threaded rod and captive nut, with either rod or nut driven rotationally by a motor is liable to offer a good solution. Because:
Power level is set by thread pitch and attachment point to the door. While "the bigger the better" always helps, almost any "sensible" size of motor should be able to be used. I say "sensible" to eliminate utterly tiny motors such as pager vibrator motors. But usually anything in the 100 milliWatt to 100 Watt range COULD be used. Lower wattage requires longer time.
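As a back-of-envelope sizing sketch (all numbers below are illustrative assumptions, not from the original answer): the door's linear speed is just thread pitch times shaft speed, so the pitch choice sets the open/close time.

```python
# Back-of-envelope sizing for a threaded-rod door drive.
# All values are illustrative assumptions, not measured figures.
pitch_mm = 1.25          # pitch of a standard M8 coarse thread, mm/rev
rpm = 600                # assumed geared motor output speed, rev/min
travel_mm = 150          # assumed travel needed at the attachment point

speed_mm_per_s = pitch_mm * rpm / 60.0
time_s = travel_mm / speed_mm_per_s
print(f"door travels {travel_mm} mm in about {time_s:.0f} s "
      f"({speed_mm_per_s:.1f} mm/s)")
```

A dozen seconds to open or close is perfectly acceptable for a coop door, which is why even a small motor with a fine pitch works: you trade speed for force and for the zero-power holding described below.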
The threaded rod provides positive locking at zero power with no prospect of "overhauling". As long as the door will not flex under Foxy assault when pulled solidly into its frame then dinner is off.
Level of travel can be set to suit by length of rod and mounting. Any door size up to a full domestic house door could be handled in this manner - so a small coop door is well within capability.
A cordless drill motor + gearbox is liable to be an excellent drive unit. These can usually be operated from 12V (9V to 18V units) and usually have a two stage reduction gearbox. They usually have a reversing switch which is not useful in this context if remote operation is required. To use, dismantle the drill, bypass or remove the switch and feed voltage to the two motor wires directly. Reversing the polarity reverses direction.
Here is a Barn-door star tracker which illustrates the principle well.
Their calculations page is here - overkill in this case but potentially useful.
A zillion versions of how you might do this - each image links to a webpage
RF Link: As a bonus, a radio link using 2 x Arduinos and 2 RF modules, with a range of 10's of metres to a few kilometres can be constructed for about $10 in components all up. Ask if interested. This also applies to the wiper based system below.
Wiper motor & mechanism
A possible solution depending on power availability is a wind screen wiper motor and mechanism. These are made to sweep a wiper arm across a sometimes dirty windscreen with considerable drag force. Units made for truck use are substantially more powerful.
A typical automotive unit is rated at 50-100 Watts at 12V but will operate at lower voltage with reduced power. I have some Indian made truck wiper units rated at about 300 Watts!
A 12V motor can be operated from a very small lead acid battery - say 12 V x 1.2 Ah. These can be charged by solar power. The battery should be maintained at a constant 13.7 Volts. You can obtain dedicated regulators for this purpose - PB137 is similar to the standard LM317 but rated at 13.7 Volts out.
PB137 in stock Digikey $1.27 in 1's
PB137 Data Sheet
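As a rough sanity check on running a wiper motor from such a small battery, the sketch below budgets the energy. All numbers (motor draw, cycle time) are illustrative assumptions:

```python
# Rough energy budget: 12 V, 1.2 Ah battery driving a ~50 W wiper motor.
battery_energy_j = 12 * 1.2 * 3600      # V * Ah * (s/h) -> joules (~51.8 kJ)
motor_power_w = 50.0                    # assumed motor draw
cycle_time_s = 10.0                     # assumed time to open or close the door

energy_per_cycle_j = motor_power_w * cycle_time_s
cycles_per_charge = battery_energy_j / energy_per_cycle_j
print(f"~{cycles_per_charge:.0f} door operations per full charge")
```

At two operations a day that is weeks of margin, so a small solar panel keeping the battery topped up at 13.7 V is plenty.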
Note that a wiper mechanism is liable to have substantial backlash and depending on which way the door swings, may allow entry by pressing on the door. If the door is external and swings closed into a frame then it is likely to be Fox proof (for many values of Fox).
An electrically latching mechanism could be added.
Wiper mechanisms often have auto-stop points using internal switches. These usually allow stopping only at one end of travel. A similar arrangement can be implemented with microswitches and diodes. A switch is arranged to break current when it is operated during motor operation, but the switch has an oppositely oriented diode connected across it. Initially the diode has no effect and the motor stops. When polarity is reversed the diode conducts and moves the mechanism out of the 'dead spot'. Be sure that switches used are rated for the desired current in DC motor operation - this is more demanding than AC operation at the same current.
I have implemented systems using rod & nut & cordless drills, and also wiper motor & mechanisms. Wiper based is easiest if it meets the needs. | {
"domain": "engineering.stackexchange",
"id": 3383,
"tags": "electrical-engineering"
} |
Can we see our solar system in the past from earth? | Question: Since our solar system is moving through the milky way galaxy, if we point our telescope to a certain point where we determine that the solar system was a certain time ago, will we see our solar system?
Answer: No. You don't see past versions of yourself as you walk around your house. Why would you expect to see past versions of the solar system?
The finite speed of light means that when we look at other stars and other galaxies we see them as they were when the light that reaches us now was emitted - which can be thousands of years ago for stars in our galaxy, or millions or billions of years ago for other galaxies. But we don't see older versions of our own solar system. | {
"domain": "physics.stackexchange",
"id": 91284,
"tags": "speed-of-light, time, solar-system, telescopes"
} |
Repopulation after a mass extinction | Question: Is it possible to restart the whole human species with fewer than 10 individuals?
Let's say that the whole human species was wiped off the surface of the earth by a catastrophe and only 8 different couples survived. Could they restart the human species?
Answer: The estimate is probably going to depend on the source, but it's generally at least an order of magnitude more than that.
Back in 2002, John Moore, an anthropologist at the University of Florida, calculated that a starship could leave Earth with 150 passengers on a 2000-year pilgrimage to another solar system, and upon arrival, the descendants of the original crew could colonize a new world there—as long as everyone was careful not to inbreed along the way. [...]
A starting population of 40,000 people maintains 100 percent of its variation, while the 10,000-person scenario stays relatively stable too. So, Smith concludes that a number between 10,000 and 40,000 is a pretty safe bet when it comes to preserving genetic variation.
For the general concept see minimum viable population. | {
"domain": "biology.stackexchange",
"id": 9628,
"tags": "genetics, human-genetics, population-genetics, population-biology"
} |
Mechanism to prevent multiple publishers simultaneously publishing contradictory messages to a topic? | Question:
Is there a way to prevent multiple publishers simultaneously publishing messages to the same topic? I think I'm looking for a way for a node to lock access to a topic and release it when it's done. While a node holds the lock, no other nodes should be able to publish to this topic and their requests to lock the resource should fail. I realize this idea contradicts the principles of pub-sub, so perhaps I'm thinking about this in the wrong way.
The cmd_vel (geometry_msgs/Twist) topic is a pertinent example as interleaving messages from multiple publishers containing drastically different velocities could cause rather undesirable vibrations in the robot.
The obvious solution is to simply not run multiple nodes that can publish to the same topic, but I would like to run multiple nodes that could publish this topic at any time depending on user input. E.g. the operator could nudge a joystick while a trajectory planner is running.
I was wondering if anyone else had thought of an elegant solution to this problem or approached the problem differently before I reinvented the wheel. remote_mutex looks promising, but there is precious little information about it, and no mention of a repository as far as I can tell.
Originally posted by grouchy on ROS Answers with karma: 53 on 2020-04-13
Post score: 0
Answer:
I realize this idea contradicts the principles of pub-sub, so perhaps I'm thinking about this in the wrong way.
yes, what you suggest does indeed seem like it clashes with the idea of anonymous publish-subscribe.
Afaik, in ROS 1, this is not supported natively.
In ROS 2, with SROS2, you could probably use access control exposed by the ROS 2 DDS-Security integration to set up a static form of what you describe (static, as afaik, access control cannot be changed at runtime, perhaps @ruffsl can confirm this).
As to the remote_mutex page you found: from the description this sounds completely voluntary (ie: cannot be enforced with nodes that have not been written to take it into account) and it's also a package from 2011. A quick search shows github.com/pandora-auth-ros-pkg has a copy, but again, I doubt its utility as it has to be integrated into nodes -- which limits its use to nodes under your control.
E.g. the operator could nudge a joystick while a trajectory planner is running.
While not a perfect solution (as rogue publishers could still publish Twists to any topic), the typical approach I've seen is to remap the default cmd_vel to something else, place a cmd_vel_mux between Twist producers and the consumer and then configure the mux intelligently. The way the Kobuki/Turtlebot2 control stack is configured is an example of this: kobuki/Tutorials/Kobuki's Control System. The Kobuki uses yocs_cmd_vel_mux for this.
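For illustration, a mux of this kind is typically configured with a prioritized list of input topics. The sketch below mimics the yocs_cmd_vel_mux YAML layout from memory, so treat the exact key names and topic names as assumptions and check the package documentation before use:

```yaml
# Illustrative only -- verify key names against the yocs_cmd_vel_mux docs.
subscribers:
  - name:     "Joystick"           # operator nudges win ...
    topic:    "joy_cmd_vel"
    timeout:  0.1                  # seconds without a message before releasing
    priority: 10                   # ... because they have the highest priority
  - name:     "Trajectory planner"
    topic:    "planner_cmd_vel"
    timeout:  0.5
    priority: 1
publisher: "cmd_vel"               # the single muxed output the base listens to
```

The mux then forwards only the highest-priority active input, which gives the "joystick overrides planner" behaviour without any publisher needing to hold a lock.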
By remapping cmd_vel on the consumers side, rogue publishers which assume cmd_vel is being listened to will not achieve anything, and producers which play nice can be muxed appropriately by the muxer.
But as ROS 1 has no access control (by design), it'll stay security through obscurity at best.
Edit: re: remote_mutex: reading the code and assuming a subscriber is under your control, you could potentially use the mutex_guard to make the subscriber ignore messages coming in from sources which have not previously acquired it. The mutex_guard has a field holder, which seems to contain the name of the entity (?) which currently holds the lock.
For messages coming from sources which don't hold the lock, ignoring them would result in the behaviour you describe.
It would however seem to couple nodes in at least two of the three dimensions described in #q203129. Personally, that seems like a substantial cost.
Originally posted by gvdhoorn with karma: 86574 on 2020-04-13
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by gvdhoorn on 2020-04-13:
Just thinking out loud here, but: by changing the ROS Master implementation, you could extend it to return only a specific list of publishers for certain topics. As data exchange is peer-to-peer, subscribers have to actively make connections with publishers. If a publisher does not appear in the list of publishers for a certain topic, subscribers should not create connections to them.
Not very maintainable (as you're now running a custom version of ROS Master), but it would allow you to do what you describe. It would again be a form of security-through-obscurity, as nothing is really prevented, just hidden. But at the roscpp and rospy level, nodes would not be able to setup connections to nodes they cannot see, so even with nodes not under your control this could work. It's a poor-mans' version almost of the access control in SROS 2.
This may be something more easily done with vapor_master, but that's really just an assumption.
Comment by stevemacenski on 2020-04-13:
There’s also a muxing program included in topic_tools. It was recommended for awhile for cmd_vel specifically for multiple clients but its unclear to me how common it was in practice. But that’s an option.
Comment by grouchy on 2020-04-13:
Thank you for taking the time to carefully consider my question.
I had considered this issue orthogonal to security. For the sake of simplicity I assume none of the nodes in the system are nefarious, if only for the context of this problem. Assuming all publishing nodes behave themselves and are under my control, a voluntary based system like remote_mutex might be acceptable or even preferable as it would support backwards compatibility with standard tools like rosbag playback (albeit without any arbitration, but at least it'll work).
Using a muxing proxy node is an interesting idea. It's good it works without modifying the nodes so it'll work with stock tools like ros_control and MoveIt. It also allows priorities rather than first come first serve like the mutex. As Steve says it already exists too in topic_tools! The only feature it lacks is the ability for a publishing node to know when it hasn't got access to the hardware so it can respond accordingly.
Comment by grouchy on 2020-04-13:
Thanks for turning me onto vapour_master, it's pretty interesting for plenty of reasons. Otherwise as you say running a custom master could present other challenges, and I'd quite like to make it easier to migrate to ROS2 one day.
Comment by gvdhoorn on 2020-04-14:
I had considered this issue orthogonal to security
well, access control is what you are describing in your OP. That is a part of security and safety.
For the sake of simplicity I assume none of the nodes in the system are nefarious
they don't need to be bad actors. I've seen what you describe happen many times to users who did not configure their systems correctly. Nothing nefarious there, just simple misconfiguration.
As Steve says it already exists too in topic_tools!
I would still recommend the yocs versions -- they are not specific to the Turtlebot/Kobuki stacks and I believe are nicer in their implementation and ROS API. Be aware they got renamed, so it's yocs_cmd_vel_mux, not cmd_vel_mux.
The only feature it lacks is the ability for a publishing node to know when it hasn't got access to the hardware so it can respond accordingly.
this sounds like a coordination level task, which I would make the responsibility ..
Comment by gvdhoorn on 2020-04-14:
.. not of the nodes you are talking about, but of another node (or set of nodes) which are responsible for coordination of the application (at whichever level). yocs_velocity_smoother publishes the active input (on the private active topic), so whatever is coordinating your application should be able to take that into account and act accordingly (or make other nodes act accordingly).
Comment by ruffsl on 2020-04-21:
Just to confirm, the access control policy enforced in SROS2 with Secure DDS is in fact static, and must be defined at design time. I'd concur with @gvdhoorn suggestion of using a mux. One could perhaps modify the mux to publish a select topic that states the current input selected (based on input priority, or other logic, etc), that the input publishers could subscribe to and discern if they are currently being subsumed.
Comment by gvdhoorn on 2020-04-22:
One could perhaps modify the mux to publish a select topic that states the current input selected
The yocs implementation of the mux does this. | {
"domain": "robotics.stackexchange",
"id": 34760,
"tags": "ros, navigation, ros-melodic, topics"
} |
How close are we to creating Ex Machina? | Question: Are there any research teams that attempted to create or have already created an AI robot that can be as close to intelligent as these found in Ex Machina or I, Robot movies?
I'm not talking about full awareness, but an artificial being that can make its own decisions and perform physical and intellectual tasks that a human being can do?
Answer: We are absolutely nowhere near, nor do we have any idea how to bridge the gap between what we can currently do and what is depicted in these films.
The current trend for DL approaches (coupled with the emergence of data science as a mainstream discipline) has led to a lot of popular interest in AI.
However, researchers and practitioners would do well to learn the lessons of the 'AI Winter' and not engage in hubris or read too much into current successes.
For example:
Success in transfer learning is very limited.
The 'hard problem' (i.e. presenting the 'raw, unwashed environment' to the machine and having it come up with a solution from scratch) is not being addressed by DL to the extent that it is popularly portrayed: expert human knowledge is still required to help decide how the input should be framed, tune parameters, interpret output etc.
Someone who has enthusiasm for AGI would hopefully agree that the 'hard problem' is actually the only one that matters. Some years ago, a famous cognitive scientist said "We have yet to successfully represent even a single concept on a computer".
In my opinion, recent research trends have done little to change this.
All of this perhaps sounds pessimistic - it's not intended to. None of us want another AI Winter, so we should challenge (and be honest about) the limits of our current techniques rather than mythologizing them. | {
"domain": "ai.stackexchange",
"id": 94,
"tags": "research, agi"
} |
Does matter made out of ions radiate more energy than matter made out of atoms? | Question: I hope someone could correct the mistakes I'm making in my speech. Thanks.
Some say that when matter heats up, its atoms start accelerating faster in random directions, exchanging energy by bumping into each other.
Accelerating atoms inside a material are supposed to emit EM waves. So when matter heats up, it starts emitting more energy as EM radiation. If we accelerate a free charge it starts emitting light. The same is true for accelerating ions, which are just atoms with an unbalanced charge. Does the same happen for atoms if they are not bound in a material?
That seems to me like both electrons and protons emit light as they accelerate in the same directions inside the atom which is bound to a material. But their EM waves destructively interfere, so they should emit nothing. But wouldn't that mean they lose energy anyway because they've all emitted it?
Other people say that heated atoms don't really accelerate but the electrons jump to a higher energy state, and when they fall down, they emit light. But that would mean that accelerating ions then shouldn't emit EM waves.
What is correct interpretation then?
Both can't be correct.
Answer: Accelerated charged particles emit electromagnetic radiation. The emitted power is proportional to the square of the acceleration. Given a fixed charge, then an accelerating Lorentz force produces an acceleration inversely proportional to mass and so radiated power is inversely proportional to the square of the mass.
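To put a number on that mass scaling, here is a quick Python comparison of the radiated power of an electron versus a proton subject to the same force. This is an illustrative aside using the standard mass ratio, not part of the original answer:

```python
# Radiated power ~ a^2 ~ (F/m)^2 at fixed force and charge magnitude, so
# P_electron / P_proton = (m_proton / m_electron)^2.
mass_ratio = 1836.15            # m_proton / m_electron (dimensionless)
power_ratio = mass_ratio ** 2
print(f"An electron radiates ~{power_ratio:.2e} times more than a proton")
```

A bare proton radiates over three million times less than an electron under the same force, and heavier ions less still, which is why ionic radiation by this mechanism is usually negligible.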
As ions are much more massive than free electrons, their radiative output (by this mechanism) can usually be neglected. There is no question of constructive or destructive interference here because there is no reason that the light emitted by separate particles should have any particular phase relationship.
Heated atoms and ions do move faster and heat may result in electrons occupying higher energy states in both (if the ions have any remaining electrons). You appear to be confused between two mechanisms of producing radiation. One consists of acceleration of free charged particles - free-free emission that could be thermal in nature or caused by external fields; the other is transitions within atoms/ions - bound-bound transitions between bound energy states.
In general both these things (and bound-free and free-bound radiation) are occurring. | {
"domain": "physics.stackexchange",
"id": 40863,
"tags": "electromagnetism, thermodynamics"
} |
Rotational Kinetic Energy Conservation | Question: Consider a chain around two gears, one of radius $r_1$ and the other of radius $r_2$. Say the gear $r_1$ is attached to a rotational device that delivers torque $\tau$. After a quarter cycle of rotation you have input energy $E = \tau\cdot\pi/4$ into the system.
Say you have the same system but this time gear $r_2$ is replaced with another gear of radius $r_3$ where $r_3>r_2$. Again you spin it with torque $\tau$ for a quarter cycle so you have the same energy in the system.
My questions are:
Would system 2 (with gear $r_3$) be spinning faster than system 1 (with gear $r_2$)? I think it would since there is a larger gear.
If it is spinning faster, how is that justifiable? You input the same energy into both the systems but one is spinning faster than the other.
Thanks for any help.
Edit for clarification:
I'm asking about the angular velocity of the first gear in both systems
The rotational device is concentrically connected to the first gear
Answer: Let's make some simplifying assumptions here:
The gears are much lighter than the chain, so we can assume all of the mass is located on the outside of the gears in the chain itself.
The chains wrap all the way around the gear. This is probably less realistic, but this way we can treat the system as two thin hoops that are constrained to spin at the same linear velocity. I don't think this messes up the overall analysis.
The chain has a uniform linear mass density $\lambda$.
Therefore, a gear of radius $R$ will have a mass of $m=2\pi R\lambda$ and a moment of inertia of $I=mR^2=2\pi R^3\lambda$. Additionally, given the constraint of the gears being connected by the chain, it must be that the gears have the same linear velocity $v=\omega_1R_1=\omega_2R_2$ at their edges.
The kinetic energy of the two-gear system will then be
$$K=\frac12I_1\omega_1^2+\frac12I_2\omega_2^2=\pi\lambda R_1^2(R_1+R_2)\omega_1^2$$
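Solving that expression for $\omega_1$ at fixed kinetic energy makes the comparison explicit. The sketch below uses assumed values for $\lambda$, $R_1$ and $K$ purely for illustration:

```python
import math

def omega1(K, lam, R1, R2):
    """Angular speed of gear 1 from K = pi*lam*R1^2*(R1 + R2)*omega1^2."""
    return math.sqrt(K / (math.pi * lam * R1**2 * (R1 + R2)))

K, lam, R1 = 10.0, 1.0, 0.1           # joules, kg/m, metres (illustrative)
w_small = omega1(K, lam, R1, R2=0.2)  # smaller second gear
w_large = omega1(K, lam, R1, R2=0.4)  # larger second gear
print(f"omega1 with R2=0.2: {w_small:.1f} rad/s; with R2=0.4: {w_large:.1f} rad/s")
```

Equal energy input, but the system with the larger second gear turns more slowly because more of the chain's mass sits at the rim of the larger hoop.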
So as you can see, for the same amount of work, the larger $R_2$ is, the smaller $\omega_1$ will be. Therefore, the larger the second gear the slower everything will rotate. | {
"domain": "physics.stackexchange",
"id": 72159,
"tags": "energy, angular-momentum, rotational-kinematics"
} |
What is non-thermal plasma? | Question: I read about non-thermal plasma, but I still have some questions:
The ions and neutral particles are not in thermal equilibrium with the electrons; does that mean that the overall temperature is low, like in neon signs?
Does the non-thermal plasma have relatively low ionization percentages, like 2 or 3%? Or can higher ionization be achieved?
Is the super cold plasma the only way to get a non-thermal plasma?
Answer: Non-thermal is a broad catch-all for energy distributions that are not Maxwellian. The reason it is interesting is that if it's not Maxwellian, it's not in thermodynamic equilibrium, so some work can be extracted from it while it relaxes, or the energy differences between electrons and ions produce interesting or desirable phenomena.
So to answer your questions:
Very hot plasmas can still be non-thermal; one hopeful example is the inertial electrostatic confinement plasma approach known as Polywell, where they are hoping to get a very sharp energy distribution of plasma.
1% actually is a very appreciable ionization percentage. Gases will have plasma-like behaviors with ionization percentages as low as .01%.
sort of referenced in 1.
Hope this was helpful. | {
"domain": "physics.stackexchange",
"id": 5372,
"tags": "homework-and-exercises, thermodynamics, statistical-mechanics, plasma-physics, non-equilibrium"
} |
Installing groovy from source fails | Question:
Hello!
I'm trying to install ros-groovy from source in an open-nao1.14 virtual machine (a gentoo derivative for NAO). The idea is to be able to then copy&paste the contents of /opt/ros/groovy and associated dependencies and have ROS working on an actual NAO.
After downloading ros-comm I execute (as per the instructions on the wiki )
./src/catkin/bin/catkin_make_isolated --install --install-space /opt/ros/groovy
But when the script calls make install installation fails because catkin's
setup.py cannot recognise the option --install-layout=deb which the script is automatically adding. This is the exact error message:
+ /usr/bin/env PYTHONPATH=/opt/ros/groovy/lib/python2.7/dist-packages:/home/nao/ros/build_isolated/catkin/lib/python2.7/dist-packages:/opt/ros/groovy/lib/python2.7/dist-packages:/opt/ros/pydeps/lib/python2.7/site-packages/ CATKIN_BINARY_DIR=/home/nao/ros/build_isolated/catkin /usr/bin/python /home/nao/ros/src/catkin/setup.py build --build-base /home/nao/ros/build_isolated/catkin install --install-layout=deb --prefix=/opt/ros/groovy --install-scripts=/opt/ros/groovy/bin
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: option --install-layout not recognized
CMake Error at catkin_generated/safe_execute_install.cmake:4 (message):
execute_process(/home/nao/ros/build_isolated/catkin/catkin_generated /python_distutils_install.sh)
returned error code
Call Stack (most recent call first):
cmake_install.cmake:115 (INCLUDE)
Any ideas on how to get it to compile?
Thanks!
Update: Here's the output of the commands requested.
Calling catkin's build & install command straight from python:
$ python setup.py build install --install-layout=deb
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: option --install-layout not recognized
Python version:
$ /usr/bin/python --version
Python 2.7.2
Update 2: setup.py instructions
Common commands: (see '--help-commands' for more)
setup.py build will build the package underneath 'build/'
setup.py install will install the package
Global options:
--verbose (-v) run verbosely (default)
--quiet (-q) run quietly (turns verbosity off)
--dry-run (-n) don't actually do anything
--help (-h) show detailed help message
--no-user-cfg ignore pydistutils.cfg in your home directory
Options for 'install' command:
--prefix installation prefix
--exec-prefix (Unix only) prefix for platform-specific files
--home (Unix only) home directory to install under
--user install in user site-package
'/home/nao/.local/lib/python2.7/site-packages'
--install-base base installation directory (instead of --prefix or --
home)
--install-platbase base installation directory for platform-specific files
(instead of --exec-prefix or --home)
--root install everything relative to this alternate root
directory
--install-purelib installation directory for pure Python module
distributions
--install-platlib installation directory for non-pure module distributions
--install-lib installation directory for all module distributions
(overrides --install-purelib and --install-platlib)
--install-headers installation directory for C/C++ headers
--install-scripts installation directory for Python scripts
--install-data installation directory for data files
--compile (-c) compile .py to .pyc [default]
--no-compile don't compile .py files
--optimize (-O) also compile with optimization: -O1 for "python -O", -O2
for "python -OO", and -O0 to disable [default: -O0]
--force (-f) force installation (overwrite any existing files)
--skip-build skip rebuilding everything (for testing/debugging)
--record filename in which to record list of installed files
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
Originally posted by Miguel S. on ROS Answers with karma: 1114 on 2013-01-11
Post score: 2
Answer:
This is strange. Can you please run in catkin:
$ python setup.py build install --install-layout=deb
and post the result? It should still give an error, but maybe a different one.
Also tell us the result of
/usr/bin/python --version
/usr/bin/python --help install
My current best guess is that you use a very old python version (2.5.x) which does not have the --install-layout option.
Possibly you can set SETUPTOOLS_DEB_LAYOUT to OFF. Note there is no need for you to run catkin_make_isolated, just using catkin_make (or only cmake and make) should be fine. E.g.:
src/catkin/bin/catkin_make -DSETUPTOOLS_DEB_LAYOUT=OFF
Update:
Sorry, what we need is rather the output of
/usr/bin/python setup.py --help install
when invoked in the catkin folder. But still good to know you have found a solution to make it work....
Update2: It seems someone else already had the same problem on Gentoo with fuerte:
https://code.ros.org/lurker/message/20120426.064234.46e82b4e.en.html
Since, as you say, the NAO also uses a Gentoo derivative, I think it is likely that there is something in the overall buildchain in Gentoo that behaves differently from Ubuntu, maybe a different cmake version.
Originally posted by KruseT with karma: 7848 on 2013-01-11
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Miguel S. on 2013-01-16:
I got it to compile by invoking ./src/catkin/bin/catkin_make -DSETUPTOOLS_ARG_EXTRA="" and then calling make install from the build directory
Comment by KruseT on 2013-01-16:
Created a ticket here: https://github.com/ros/catkin/issues/314
Comment by KruseT on 2013-01-16:
Thanks for log, so for you, python setup.py really does not even offer --install-layout in the help. Weird.
Comment by Miguel S. on 2013-01-16:
It may not be a bug after all. The code compiles just fine with ./src/catkin/bin/catkin_make -DSETUPTOOLS_DEB_LAYOUT=OFF on a clean install. It seems catkin_make only updates the cmake variables if you haven't called it before; which is why I thought the command didn't work the first time around.
Comment by KruseT on 2013-01-16:
Yes, it's what the ticket said (also what Dirk said in his response). | {
"domain": "robotics.stackexchange",
"id": 12371,
"tags": "ros, installation, setup.py, ros-groovy, source"
} |
c# QuadTree yielding properly | Question: I have implemented a QuadTree of my own, and I'm afraid I didn't use yield properly when querying my tree; my fear is that I create O(HN) iterators.
Could you direct me how to better improve the performance of the following code?
QuadTree
public IEnumerable<T> Query(string leafId, Circle shape)
{
// Gets the relevant parent by a leafId Query (this is O(lg H))
var startNode = GetParentIntersecting(leafId, shape);
return startNode.Query(shape);
}
QuadTreeNode
public IEnumerable<T> Query(IShape shape)
{
var results = new HashSet<T>();
foreach (var content in _contents)
{
if (shape.Contains(content.GetPosition()))
{
//Debug.Log(content.GetPosition());
yield return content;
//results.Add(content);
}
}
if (IsLeaf)
yield break;
foreach (var node in Nodes)
{
if (node.IsEmpty && node.IsLeaf)
continue;
foreach (var result in node.Query(shape))
{
results.Add(result);
};
}
}
Answer: You use yield properly; it's OK to yield from inside nested loops. But all the code after your first foreach is kind of useless - it's not doing anything useful.
First, the yield break is useless because there are no more yields following it; no more return values are produced whether IsLeaf is true or false. Iteration ends anyway.
Second, you fill up results with something, but the content of results is lost when the method exits. | {
"domain": "codereview.stackexchange",
"id": 37006,
"tags": "c#, performance, iterator"
} |
$\frac{dt}{d\tau}=\gamma$ in special relativity | Question: I hope this is not too silly a question: We often see
$$\frac{dt}{d\tau}=\gamma=\frac{1}{\sqrt{1-v^2}},$$ taking $c=1$.
Problem:
I don't understand why...
In the Minkowski metric, using the $(-+++)$ signature and taking $c=1$,
$$ds^2=-dt^2+d\vec x^2\\
d\tau^2=-ds^2\\
\implies d\tau^2=dt^2-d\vec x^2\\
\implies 1=\left(\frac{dt}{d\tau}\right)^2-v^2\\
\implies \frac{dt}{d\tau}=\sqrt{1+v^2}\neq \frac{1}{\sqrt{1-v^2}}$$
What has gone wrong with my reasoning?
Answer: You made a simple error: $dx/d\tau\neq v$! Start from your equation
$$
d\tau^2 = dt^2 - d\vec x^2
$$
Now, divide both sides by $dt$ not $d\tau$ to get
$$
\left(\frac{d\tau}{ dt}\right)^2 = 1-v^2
$$
which gives
$$
\frac{d\tau}{dt} = \frac{1}{\gamma}
$$
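(A quick numeric spot-check of this relation, as an illustrative aside: pick any $v < 1$ and compare $d\tau/dt$ with $1/\gamma$.)

```python
import math

v = 0.6                              # any speed below c, with c = 1
gamma = 1 / math.sqrt(1 - v**2)      # Lorentz factor, here 1.25
dtau_dt = math.sqrt(1 - v**2)        # from the derivation above, here 0.8
print(dtau_dt, 1 / gamma)            # the two values agree
```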
as desired. | {
"domain": "physics.stackexchange",
"id": 73115,
"tags": "homework-and-exercises, special-relativity"
} |
What are the current contenders/most promising approaches to High Tc Superconductivity? | Question: I want to know what kinds of things theorists are currently looking at. Specifically, I want to know more about the promise that field-theoretic methods are showing. I am studying superconductivity at an advanced undergraduate/beginning graduate level. Links to review papers/papers appreciated. Thanks.
Answer: First note that high $T_C$ superconductivity may not have a general mechanism that works for all materials, in contrast to the case of regular superconductivity and BCS theory. I mention this because there is a historical interplay between superconductivity and magnetism in these materials, as in the Meissner effect.
Currently there are (roughly) two families of high $T_C$ superconductors, the cuprate family and the iron pnictides, which are material with different magnetic properties. Therefore the most promising approach might depend on the material in question.
For the better-studied case of the cuprates I would recommend these lectures from Tremblay. Not a review paper, so not really addressing what is on the cutting edge, but a very pedagogical introduction to the subject, noting what ideas seem relevant | {
"domain": "physics.stackexchange",
"id": 14463,
"tags": "quantum-field-theory, condensed-matter, superconductivity"
} |
JS Progress Bar Widget | Question: Demo of the widget: http://jsfiddle.net/slicedtoad/Lywvbsf4/
It's a progress bar that shows a list of steps and which one is being completed as well as allowing previous steps to be revisited.
It will be used to display the user's form completion progress on a web app that favors a client side approach (forms are submitted to the server at the last step).
I'd like feedback specifically on the widget based approached I took. I had the goals:
simple interface
low coupling to the rest of the app
easy to extend the functionality (it should be easy to add more callbacks to handle new functionality, for example)
no code bloat from
legacy support (the app is only supported on FF and Chrome)
unnecessary portability and safety (i.e. it needs to be safe in this app, not every app)
I ended up making a lot of fairly arbitrary decisions, which I would like critiqued.
Widget:
var progressBar = function(options){
    this.init(options);
};
progressBar.prototype.init = function(options){
this.id = options.id; // ID of the div that will store the progress bar
this.$node;
this.steps = options.steps;
this.current = 0; // current step
this.previousClickedCallback = options.previousClicked;
this.draw(); // populate $node
$(this.id).append(this.$node); // append $node to the DOM
this.$node.on("click",".step-finished .step-number",
$.proxy(this.previousClick,this));
this.next();
};
progressBar.prototype.previousClick = function(e){
this.previousClickedCallback($(e.target).html());
};
progressBar.prototype.draw = function(){
this.$node = $("<ol class='progressbar container'></ol>");
var html = "";
for(var step in this.steps){
html +=
"<li class='step'>" +
"<div class='step-number'>"+(parseInt(step)+1)+"</div>" +
"<div class='step-line'></div>" +
"<div class='step-label-wrap'>" +
"<label class='step-label'>"+this.steps[step]+"</label>" +
"</div>" +
"</li>";
}
this.$node.html(html);
};
progressBar.prototype.goToStep = function(step){
console.log("gotostep: ",step);
if(step>=1 && step<=this.steps.length+1){
this.current = parseInt(step);
// Reset progress bar status
this.$node.find(".step-current").removeClass("step-current");
this.$node.find(".step-finished").removeClass("step-finished");
// Set current step
this.$node.find(".step:nth-child("+this.current+")").addClass("step-current");
// Set all previous steps to finished
for(var i = step-1; i > 0; i--){
this.$node.find(".step:nth-child("+i+")").addClass("step-finished");
}
}
};
progressBar.prototype.next = function(){
if(this.current<this.steps.length+1){
this.goToStep(this.current+1);
}
};
progressBar.prototype.previous = function(){
if(this.current>=2){
this.goToStep(this.current-1);
}
};
Example usage:
var goBack = function(step){
//cache current form or whatever
pbar.goToStep(step);
};
var pbar = new progressBar({
id:"#ProgressBar",
steps:["Date","Items","Preview","Details","Confirm"],
previousClicked:function(step){goBack(step);}
});
$("#next").on("click",function(){pbar.next()});
$("#previous").on("click",function(){pbar.previous()});
CSS (minimal):
ol.progressbar {
list-style-type: none;
padding-left:0;
}
.progressbar.container {
display: flex;
align-items: flex-end;
padding-top: 22px;
}
.progressbar .step {
flex-grow:1;
position:relative;
text-align: center;
z-index:1;
}
.progressbar .step-number {
position:relative;
z-index:10;
width: 20px;
height: 20px;
border-radius: 10px;
background-color:white;
display:inline-block;
text-align: center;
line-height: 20px;
border:1px solid;
}
.progressbar .step-finished .step-number:hover{
background-color: lightblue;
cursor: pointer;
}
.progressbar .step-finished .step-number{
background-color: green;
}
.progressbar .step-current .step-number{
background-color:lightblue;
}
.progressbar .step-label-wrap {
position: absolute;
width:100%;
top: -22px;
}
.progressbar .step-label {
padding: 0px 1px;
}
.progressbar .step-line {
height: 2px;
background-color: black;
width:100%;
position:absolute;
top: 50%;
left:50%;
z-index:5;
}
.progressbar .step:last-of-type .step-line{
display:none;
}
Answer: Stuff I noticed (not in any strict order)
This is not what I expected, given the name "progress bar". Yes, the name makes perfect sense word-for-word, but "progress bar" usually means something very specific - and different. Even non-coders, I think, wouldn't call this a progress bar, because progress bars are those things that just fill up (in one direction only) over time. I'd perhaps call this a "checklist" instead, since it has discrete steps.
This would probably work well as a jQuery plugin, which would also be the more conventional approach. jQuery plugins also have a whole list of conventional patterns for setting options and such, which would make a lot of sense here. Use such conventions to your advantage; saves you the trouble of reinventing it all.
Constructor functions should be PascalCase, i.e. ProgressBar - not progressBar.
Why not do initialization in the constructor? It's pretty much its job, yet you delegate that to an init function.
You can clean up your code by defining the prototype object wholesale, instead of prefixing everything with constructor.prototype.:
function ProgressBar() {
// ...
}
ProgressBar.prototype = {
next: function () { ... },
previous: function () { ... },
// ...
};
Warning, side-effects and mixed responsibilities:
this.draw(); // populate $node
$(this.id).append(this.$node); // append $node to the DOM
So draw builds an element hierarchy in memory. But it doesn't return it. It assigns it to a variable, which some other code then has to go grab. But that only makes sense if draw has for sure been called. And called only once! Otherwise things break.
Point is, it's a tangled mess. A better approach would be to have draw be in charge of inserting the elements, or make draw simply return the elements it creates like a factory function. Right now, it's neither here nor there. Personally, I'd pick the latter, call the function build rather than "draw", and not make it a prototype method, but an internal private function in the constructor.
Then again, I'd much rather define my checklist in the HTML as a regular ol element, and simply use the JS to add the behavior (not the content or structure) for that list. Right now, you have HTML in your JavaScript, when those things should be kept separate. Structure in markup, style in CSS, and behavior in JavaScript. Yes, the lines do of course blur a lot, and many widgets do inject large chunks of HTML, but I'm wary of those approaches. One might ask why it doesn't inject all the CSS too, while it's at it.
I'm not saying you should throw it all out, but do consider separating things.
Don't leave console.log statements in production code. At least not without safety checks/polyfills to ensure that console.log actually exists and is a function. Sure, you're only targeting a few browsers, but I'll get into that later.
Why is there only a callback option for going back? Why not use events instead, and fire them off for any state change? jQuery provides a nice, simple custom event API you can use. If you want callbacks instead of events, then I'd say add them now. You say it's easy to add such things, so why not? You may find it's not as easy as you think, or that a different interface might be even better.
Besides, there's something weird going on with previousClick being a function that gets called only by external code - but which then invokes a callback, that's also defined externally. Why is the ProgressBar involved in that at all? And why does it pass a bunch of HTML as a string to the callback? Also, it'll fail immediately, if you call previousClick but haven't passed in a callback when instantiating the object.
Outside the code, this caught my eye.
low coupling to the rest of the app
[...]
no code bloat from
legacy support (the app is only supported on FF and Chrome)
unnecessarily [sic] portability and safety (i.e. it needs to be safe in this app, not every app)
So... your code is actually very coupled to the app after all. It may not be coupled through individual parts of the code, but the result is the same: You're prevented from easily using the widget elsewhere, and maintainability may be a hassle (if, say, the rest of the app gets updated to run on more browsers, you have that much more code to test/update). Pretty much the same things that'd apply if the code was tightly coupled to the app. Portability is basically decoupling on a larger scale.
Some of your decisions, like only having a callback for going backwards also betray coupling. Coupling in the sense that if you didn't need it for this particular app, in this particular context, you just didn't add it. Again, that's fair, but don't then say it's got low coupling.
Specifically because you aimed for a self-contained little widget, that should be all the more reason to also aim for compatibility and portability. Sure, there's the usual cost/benefit weighing you'll have to do, and maybe it really doesn't make sense to support more browsers. However, the smaller and the more self-contained the project, the easier it is to ensure good browser support. In fact, I'd wager the JS will run just fine on most browsers as-is; any issues will likely all be CSS-related. If I were you, I'd at least have tested it in other browsers, just out of curiosity.
At any rate, don't trick yourself into thinking it's decoupled, because in a sense it really, really isn't. | {
"domain": "codereview.stackexchange",
"id": 9601,
"tags": "javascript, jquery, html, css"
} |
What is the importance of the Fermi sphere? | Question: I am very confused about the meaning of the Fermi sphere. I understand that it is the k-space analogue of the energy levels and Fermi energy in real space, but I don't know why this is important. What I have understood is that the relation between the Fermi sphere and the real-space energy levels is similar to that between the crystal lattice in real space and the reciprocal lattice, but could anyone explain the significance of the Fermi sphere in a very clear way?
Answer: Consider a system of electrons confined in a cube of side length $L=V^{1/3}$. Let's assume that this "electron gas" is diluted enough that we can neglect the electron-electron interactions. This means that we have to solve the Schroedinger's equation for a free particle:
$$-\frac{\hbar^2}{2m} \nabla^2 \psi(\mathbf r) = E\ \psi(\mathbf r)\tag{1}\label{1}$$
Moreover, since we are interested in the bulk properties of the material, we will assume periodic boundary conditions (PBC):
$$\psi(x+L,y,z) = \psi(x,y,z)\\
\psi(x,y+L,z) = \psi(x,y,z)\\
\psi(x,y,z+L) = \psi(x,y,z) \tag{2}\label{2}$$
It is well known that the solution of Eq. \ref{1} is a plane wave:
$$\psi_{\mathbf k}(\mathbf r) = \frac 1 {\sqrt{V}} e^{i \mathbf k \cdot \mathbf r} \tag{3}\label{3}$$
with energy
$$E(\mathbf k) = \frac{\hbar^2k^2}{2m} \tag{4}\label{4}$$
If you apply the conditions \ref{2} to the solution \ref{3}, you will get
$$e^{i k_x L}=e^{i k_y L}=e^{i k_z L}=1 \tag{5}\label{5}$$
and therefore
$$k_\alpha = \frac{2 \pi n_\alpha} L \ \ \ (\alpha =x,y,z) \tag{6}\label{6}$$
where $n_\alpha$ are integers. Therefore, the allowed wavevectors form a discrete "grid" in reciprocal space (figure below - from D.J. Griffiths, Introduction to Quantum Mechanics). Notice how reciprocal space comes out naturally because of the relation \ref{4} between the energy of an electron and its wavevector.
At $T=0$, the electrons will occupy the lowest available energy levels, starting with $\mathbf k = \mathbf 0$. Since electrons are fermions with spin $1/2$, to satisfy Pauli's exclusion principle we can accommodate only two of them in every energy level. Therefore, starting from $\mathbf k=\mathbf 0$, we can imagine placing 2 electrons at every point in reciprocal space allowed by eq. \ref{6}. If the number of electrons is very large, it is easy to see that the result of this filling looks very much like a sphere: this is what we call the Fermi sphere.
The radius of this sphere, $k_F$, is related to energy by equation \ref{4}. The energy of the electrons on the surface of the Fermi sphere is the Fermi energy:
$$E_F= \frac{\hbar^2k_F^2}{2m} \tag{7}\label{7}$$
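As a quick numerical sanity check, eq. \ref{7} combined with the standard free-electron relation $k_F=(3\pi^2 n)^{1/3}$ (see e.g. Ashcroft-Mermin), where $n$ is the conduction-electron density, reproduces the familiar Fermi energy of copper, about 7 eV:

```python
import math

# Physical constants in SI units
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # one electron-volt in joules

def fermi_energy(n):
    """Free-electron Fermi energy E_F = hbar^2 k_F^2 / (2 m),
    with k_F = (3 pi^2 n)**(1/3) for electron density n in m^-3."""
    k_F = (3 * math.pi**2 * n) ** (1 / 3)
    return hbar**2 * k_F**2 / (2 * m_e)

n_Cu = 8.47e28  # conduction-electron density of copper, m^-3
print(fermi_energy(n_Cu) / eV)  # ~7.0 eV, matching tabulated values
```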
References
For a detailed discussion of Fermi surfaces in general, see for example Ashcroft-Mermin, Solid State Physics. | {
"domain": "physics.stackexchange",
"id": 46333,
"tags": "statistical-mechanics, condensed-matter, solid-state-physics"
} |
how can I record the same number of messages from different topics? | Question:
I want to build a database that takes each image together with its joystick (joy) and IMU (angle) data, so I need to receive the same number of messages from each topic to have synchronized data.
I ran the command below in order to record RGB images, depth images, joystick commands, and IMU data.
$> rosbag record -O subset /camera/depth/image_raw /camera/rgb/image_raw /joy /mobile_base/sensors/imu_data_raw
then
$> rosbag info subset.bag
which shows me the information about my recorded bag file
path: subset.bag
version: 2.0
duration: 21.9s
start: Mar 11 2018 16:31:17.29 (1520771477.29)
end: Mar 11 2018 16:31:39.20 (1520771499.20)
size: 857.5 MB
messages: 3113
compression: none [605/605 chunks]
types: sensor_msgs/Image [060021388200f6f0f447d0fcd9c64743]
sensor_msgs/Imu [6a62c6daae103f4ff57a132d6f95cec2]
sensor_msgs/Joy [5a9ea5f83505693b71e785041e67a8bb]
topics: /camera/depth/image_raw 652 msgs : sensor_msgs/Image
/camera/rgb/image_raw 540 msgs : sensor_msgs/Image
/joy 991 msgs : sensor_msgs/Joy
/mobile_base/sensors/imu_data_raw 930 msgs : sensor_msgs/Imu
As you see, there is a different number of messages for each topic. So, how can I record the same number for each?
and one more thing
any help please?
Thanks in advance
Originally posted by khadija on ROS Answers with karma: 25 on 2018-03-11
Post score: 1
Answer:
Have a look at the documentation in the ros wiki: rosbag:
-l NUM, --limit=NUM
Only record NUM messages on each topic.
This is however not(!) what you really want to use, as you will most likely not record the IMU message that belongs to the image data, since the IMU will (probably) have a much higher frequency. A Time Synchronizer is a better choice.
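The real synchronization should be done with the Time Synchronizer linked above, but the underlying idea — pairing messages from two topics whose timestamps agree within a small tolerance ("slop") — can be sketched in plain Python. Names and numbers here are illustrative, not the ROS API:

```python
def pair_by_timestamp(images, imu_msgs, slop=0.05):
    """Pair each image with the IMU message closest in time, keeping only
    pairs whose timestamps differ by at most `slop` seconds.
    Both arguments are lists of (timestamp, data) tuples."""
    pairs = []
    for t_img, img in images:
        # IMU message with the smallest time difference to this image
        t_imu, imu = min(imu_msgs, key=lambda m: abs(m[0] - t_img))
        if abs(t_imu - t_img) <= slop:
            pairs.append((img, imu))
    return pairs

# Example: a ~30 Hz camera against a 100 Hz IMU --
# every image finds one nearby IMU sample, so the counts match.
images = [(0.000, "img0"), (0.033, "img1"), (0.066, "img2")]
imu = [(i * 0.01, "imu%d" % i) for i in range(10)]
print(pair_by_timestamp(images, imu))
# [('img0', 'imu0'), ('img1', 'imu3'), ('img2', 'imu7')]
```

In a live system, message_filters does this matching (plus queueing) for you and calls a single callback with one message from each topic, which is what gives you equal counts per topic.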
Originally posted by NEngelhard with karma: 3519 on 2018-03-11
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by khadija on 2018-03-12:
First of all, thank you for your replay.
Your first suggestion is works but it doesn't adequate because I don't know how many messages I need to accomplish the desired task.
could you help me please how can I use the time synchronizer in my case, cuz I'm new with both ROS and python!
Comment by NEngelhard on 2018-03-12:
The wiki page I linked already contains an example in python. Where exactly is your problem?
Comment by khadija on 2018-03-13:
I saw the link, but I could not understand how I should use it. I ran the Python file but nothing happened! I wonder whether this file records my data or not, and what I should change in it. Sorry if this is a stupid question, but I just started with ROS and Python. Any help is appreciated.
Comment by khadija on 2018-04-11:
@NEngelhard hello again! Could you tell me please what I should put in the callback function? Should I record my data within the callback function, or should I read a previously recorded bag file inside the callback function? | {
"domain": "robotics.stackexchange",
"id": 30269,
"tags": "rosbag, ros-kinetic"
} |
Why is RTS/CTS optional? | Question: I was reading in Wikipedia about it and one of the main things that were pointed out was that it is only an optional mechanism. This bugs me more than it should as I cannot find any information that I can understand about it.
Can anyone explain to me like I am a five-year-old why they are optional?
Answer: RTS/CTS are signals used to control data flow when the input and output rates are different. When the rates are identical then there is no need for flow control. | {
"domain": "cs.stackexchange",
"id": 10561,
"tags": "algorithms, computer-networks, communication-protocols, protocols"
} |
Reinforcement learning: Discounting rewards in the REINFORCE algorithm | Question: I am looking into the REINFORCE algorithm for reinforcement learning. I am having trouble understanding how rewards should be computed.
The algorithm from Sutton & Barto:
What does G, 'return from step t' mean here?
Return from step t to step T-1, i.e. R_t + R_(t+1) + ... + R_(T-1)?
Return from step 0 to step t?, i.e. R_0 + R_1 + ... + R_(t)?
Answer:
What does G, 'return from step t' mean here?
Return from step t to step T-1, i.e. R_t + R_(t+1) + ... + R_(T-1)?
Return from step 0 to step t?, i.e. R_0 + R_1 + ... + R_(t)?
Neither, but (1) is closest.
$$G_t = \sum_{i=t+1}^T R_i$$
i.e. the sum of all rewards from step $t+1$ to step $T$.
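That indexing convention is easy to check with a short sketch (undiscounted case, matching the sum above; variable names are mine, and a discount factor can be folded in via gamma):

```python
def returns_from_rewards(rewards, gamma=1.0):
    """Given the episode rewards [R_1, ..., R_T], return [G_0, ..., G_{T-1}],
    where G_t = R_{t+1} + gamma*R_{t+2} + ... (G_T itself would be 0)."""
    G = 0.0
    out = []
    for r in reversed(rewards):  # accumulate backwards from the episode end
        G = r + gamma * G
        out.append(G)
    return out[::-1]

# For rewards R_1=1, R_2=2, R_3=3 (T=3): G_0 = 6, G_1 = 5, G_2 = R_T = 3
print(returns_from_rewards([1.0, 2.0, 3.0]))  # [6.0, 5.0, 3.0]
```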
You are possibly confused because the loop for REINFORCE goes from $0$ to $T-1$. However, that makes sense due to the one step offset from return to the sum of rewards. So $G_{T-1} = R_T$ and $G_{T} = 0$ always (there is no future reward possible at the end of the episode). | {
"domain": "datascience.stackexchange",
"id": 5862,
"tags": "reinforcement-learning, policy-gradients"
} |
Why is ozone a greenhouse gas if it absorbs UV radiation? | Question: Ozone is considered a greenhouse gas, even though it absorbs shortwave (UV) radiation from the Sun, which does not fit the definition of greenhouse gases, which are supposed to:
"absorbs and emits radiant energy within the thermal infrared range"
(Quote from Wikipedia)
PS: This is an adaptation of a question asked in the private discussion board of a first year university course on Earth Sciences, that I'm copying here so a wider community can benefit from it and the corresponding answer. As suggested by the moderators, I'll donate any reputation received from this question in the form of a bounty awarded to exceptional answers (in particular by new or low-rep users).
Answer: Tropospheric (near-surface) ozone is a powerful greenhouse gas, even in trace amounts. Stratospheric ozone, or the ozone layer, is opaque to UV rays coming in and opaque to IR rays going out, so it has both warming and cooling effects. The net effect of a thickening of the ozone layer is a small warming (with some uncertainty), so the ozone layer is not a very strong heat-trapping driver, but ozone in the lower atmosphere is, and it's a strong heat-trapping gas too.
Ozone is very reactive and as a result it has a very short atmospheric lifetime (hours to days), but what ozone has is an equilibrium concentration in the atmosphere (337 ppb). Its concentration is maintained in the troposphere by sunlight hitting trace elements in the atmosphere.
The majority of tropospheric ozone formation occurs when nitrogen
oxides (NOx), carbon monoxide (CO) and volatile organic compounds
(VOCs), such as xylene, react in the atmosphere in the presence of
sunlight.
NOx and VOCs are called ozone precursors.
Motor vehicle exhaust, industrial emissions, and chemical solvents are
the major anthropogenic sources of these chemicals
If, for example, you were to release several billion tons of ozone on the surface of a planet to increase its greenhouse effect, the effect wouldn't last long, because ozone is for the most part an unstable molecule. It needs to be constantly recreated because it tends to react chemically and stop being ozone very quickly for an atmospheric gas. As noted in the quote above, motor vehicle exhaust has increased the atmospheric material that ozone gets created from in our lower atmosphere, so there are man-made reasons for the increase in tropospheric ozone concentration, from 237 ppb in 1750 to 337 ppb today.
This reference from the Wikipedia article, notes that the Ozone layer has a minor cooling effect, (Source article here), quote from footnote 6
Radiative forcing for tropospheric ozone is taken from the 5th column
of Table 8.6 of IPCC (2013).
http://www.climatechange2013.org/images/report/WG1AR5_Chapter08_FINAL.pdf
The "current" value in that table refers to a global average. Note, in
the row immediately below the number for tropospheric forcing, the
stratospheric forcing is given as negative 0.05 W/m2
The 82 page IPCC report on radiative forcing from which that article references has this to say on it's introductory remarks on Ozone:
The total RF estimated from modelled ozone changes is 0.35 (0.15 to
0.55) W m–2, with RF due to tropospheric ozone changes of
0.40 (0.20 to 0.60) W m–2 and due to stratospheric ozone changes of –0.05 (–0.15 to +0.05) W m–2. Ozone is not emitted directly into the
atmosphere but is formed by photochemical reactions. Tropospheric
ozone RF is largely attributed to anthropogenic emissions of methane
(CH4), nitrogen oxides (NOx), carbon monoxide (CO) and non-methane
volatile organic compounds (NMVOCs), while stratospheric ozone RF
results primarily from ozone depletion by halocarbons. Estimates are
also provided attributing RF to emitted compounds. Ozone-depleting
substances (ODS) cause ozone RF of –0.15 (–0.30 to 0.0) W m–2, some of
which is in the troposphere. Tropospheric ozone precursors cause ozone
RF of 0.50 (0.30 to 0.70) W m–2, some of which is in the stratosphere;
this value is larger than that in AR4. There is robust evidence that
tropospheric ozone also has a detrimental impact on vegetation
physiology, and therefore on its CO2 uptake, but there is a low
confidence on quantitative estimates of the RF owing to this indirect
effect. RF for stratospheric water vapour produced by CH4 oxidation is
0.07 (0.02 to 0.12) W m–2. The RF best estimates for ozone and stratospheric water vapour are either identical or consistent with the
range in AR4. {8.2, 8.3.3, Figure 8.7}
From page 3 of this link, chapter 8, page 661 of the AR5 report.
What the report basically says is that tropospheric ozone has a significant greenhouse effect, 0.4 watts per square meter as a result of the man-made increase since 1750, which is nearly equal to the warming from the methane increase, despite measurably lower total concentration and a lower percentage increase. (note, there are considerable measurement uncertainties, 0.2 to 0.6 is given as the range). But even the low end of that estimate makes ozone in the lower atmosphere a powerful greenhouse gas.
The Ozone layer (0.05 watts per square meter cooling estimate) corresponds to a thinner ozone layer today compared to 1750 (as far as I can tell, I didn't see that spelled out), but assuming that's the case, a thicker ozone layer should (could) provide a small amount of warming, but 0.05 watts per square meter variation is pretty insignificant. | {
"domain": "earthscience.stackexchange",
"id": 1373,
"tags": "climate-change, atmospheric-radiation, ozone, greenhouse-gases"
} |
Renormalization of the photon propagator at loop-level | Question: I am trying to understand the photon propagator renormalization procedure, followed in M. Srednicki's book Quantum Field Theory. Specifically, I am reading Chapter 62, titled "Loop Corrections in Spinor Electrodynamics" and I am focused on the renormalization of the photon propagator at the loop level.
The procedure is as follows:
Write down the full QED Lagrangian, comprised of the free Lagrangian, the interaction term and the counter-terms. The interaction term contains the renormalization constant $Z_1$ and the bare charge (I think, although the author does not discuss bare fields or bare masses/charges). The counter-term Lagrangian is given by
$$\mathcal{L_{\text{c.t}}}=i(Z_2-1)\bar{\Psi}{\partial\!\!\!/}\Psi
-(Z_m-1)m\bar{\Psi}\Psi-\frac{1}{4}(Z_3-1)F_{\mu\nu}F^{\mu\nu}$$
Label the contributions to the photon propagator from the counter terms and all the loop level diagrams as $i\Pi^{\mu\nu}(k)=i\Pi(k^2)(g^{\mu\nu}k^2-k^{\mu}k^{\nu})$ and calculate them by including the diagram with the closed fermion loop and the counter term diagram.
Perform the calculations to the very end and isolate the divergent behavior by analytically continuing the number of spacetime dimensions to $d=4-\varepsilon$. The divergences should be $\mathcal{O}(\varepsilon^{-1})$. For that step, the electron charge is redefined in a way such that the dimensionality of the latter is absorbed into a factor $\mu$, i.e. $e\rightarrow e\tilde{\mu}^{\epsilon/2}$.
Cancel those divergences by choosing the counter-term $Z_3$ appropriately. Finiteness of the loop-corrected photon propagator implies that the counter-term must contain an $\mathcal{O}(\varepsilon^{-1})$ contribution to cancel the divergent part of the loop (associated with the analytic continuation of the number of spacetime dimensions), and imposing $\Pi(0)=0$ yields the exact form of the counter-term $Z_3$
$$Z_3=1-\frac{e^2}{6\pi^2}\bigg[\frac{1}{\varepsilon}-\ln(m/\mu)\bigg]+\mathcal{O}(e^4)$$
where $\mu^2=4\pi e^{-\gamma}\tilde{\mu}^2$.
I have three questions about the above steps:
Why do we impose $\Pi(0)=0$? And how is this associated with the fact that we are choosing the "on-shell renormalization scheme"? Could this choice be exchanged for some other choice capable of fixing the counter-term? I realize that there are three renormalization schemes, called "on-shell", "MS" and "MS-bar", that each of them removes the UV divergences of the theory in a certain way (depending each time on some specific conditions, I guess), and that some conditions are needed in order to match the experimental results with the loop corrections, but I fail to see how this is applied here...
Is the substitution $e\rightarrow e\tilde{\mu}^{\epsilon/2}$ what we call charge renormalization, i.e. the same as starting with a bare charge $e_0$ in the Lagrangian and simply substituting the bare charge $e_0$ with $e\tilde{\mu}^{\epsilon/2}$? This is what I read in other QFT books (e.g. Mandl's book) and I would like to make the correspondence with those books that I have read...
Why is there not an anti-fermion diagram? A diagram in which the closed fermion loop is comprised of anti-fermionic propagators instead of fermionic ones? Shouldn't there be such a contribution as well?
I also have a bonus question: what if my Lagrangian contained another set of spinor fields, that correspond to another species of spin 1/2 particles. Then, we would add to the previous Lagrangian the following terms
$$i\bar{\psi}{\partial\!\!\!/}\psi-\tilde{m}\bar{\psi}\psi+
i(\tilde{Z_2}-1)\bar{\psi}{\partial\!\!\!/}\psi-(\tilde{Z}_m-1)\tilde{m}\bar{\psi}\psi+
\tilde{Z_1}\tilde{e}\bar{\psi}{A\!\!\!/}\psi$$
where $\psi$ represents the new fermionic field (as opposed to $\Psi$), and $\tilde{e}$, $\tilde{m}$ represents its charge and its mass (respectively). Then, I find that $Z_3$ would be fixed in the following way
$$Z_3=1-\frac{e^2}{6\pi^2}\bigg[\frac{1}{\varepsilon}-\ln(m/\mu)\bigg]
-\frac{\tilde{e}^2}{6\pi^2}\bigg[\frac{1}{\varepsilon}-\ln(\tilde{m}/\mu)\bigg]
+\mathcal{O}(e^4)$$
Is my "extension" of the photon propagator renormalization to the case in which there are two fermionic fields correct? If not, why?
Any help or comments will be appreciated.
Answer:
Setting $\Pi(0)$ to any other value also would have been valid. There are infinitely many schemes and all of them can in principle be used to make the same physical predictions. See two previous answers for some explanation of this. Alternatively, you can pick another scale $\Lambda_1$ and prescribe $\Pi(q^2 = \Lambda_1^2)$ as your renormalization condition. To use results at one scale to make predictions at another scale just requires that renormalized observables are finite functions of renormalized couplings. But the on-shell scheme has the further property that the renormalized mass is also the physical mass, similar to what we do at tree level. This makes it a common pedagogical choice.
Replacing $e$ with $e \tilde{\mu}^{\epsilon / 2}$ (where the $4\pi e^{-\gamma}$ represents the bar in MS-bar) ensures that $e$ is a dimensionless coupling in $d = 4 - \epsilon$. However it is indeed part of renormalization so I would've preferred to write $e_0 = Z_3 \tilde{\mu}^{\epsilon / 2} e$ all in one go. The explicit power of $\mu$ in the Lagrangian is the classical contribution to the running coupling while powers of $e$ in $Z_3 - 1$ are the quantum contributions. Recall that the interaction Lagrangian comes with powers of $i/\hbar$. This justifies the common phrasing that $\tilde{\mu}^{\epsilon / 2}$ is classical because you would still need it even if you were not including any powers of the interactions and therefore not seeing any loops or divergences to cancel. As you say, there are other regulators where it never shows up.
The four components of a Dirac fermion are associated with particles and anti-particles of both helicities but these are states of external particles. The propagator has to do with the field itself and there's only one Dirac field in your action. Said another way, the propagator has two indices which both run from 1 to 4 so the effects of particles and anti-particles are both captured. In other reading you might come across some "anti-propagators" but these refer to anti-time ordered fields and do not play a role in this formalism. You can try computing the Fourier transform of
\begin{align}
\left < T \left [ \psi_\alpha(x) \bar{\psi}_\beta(y) \right ] \right > = \left < T \left [ \bar{\psi}_\beta(y) \psi_\alpha(x) \right ] \right >.
\end{align}
with a mode expansion and you will see that $a$, $a^\dagger$, $b$ and $b^\dagger$ are all involved. You can call this a "fermion propagator", "anti-fermion propagator" or "fermion and anti-fermion propagator" as desired. But whatever you call it, it's clear that this is the only one you need from inserting $e A_\mu \bar{\psi} \gamma^\mu \psi$ a bunch of times and using Wick's theorem.
The bonus is basically a "check my work" question but I do not see any problem with the expression you wrote down. | {
"domain": "physics.stackexchange",
"id": 95822,
"tags": "quantum-field-theory, quantum-electrodynamics, renormalization, propagator"
} |
Is it ok to comment out joint state publisher node from turtlebot package? | Question:
When I comment it out, I get an error showing there is no transformation between base_footprint and the left and right wheels.
How can I remove this error? Will the error go away if I give some joint velocities to the TurtleBot wheels?
My problem is this: I need to connect a TurtleBot arm to the TurtleBot, so I called the xacro file of the TurtleBot arm from turtlebot_library.urdf.xacro. When I launch the TurtleBot in RViz, the TurtleBot with the arm comes up with the joint_state_publisher Python tab, where I can control the joint states of the arm joints and the bot wheels.
But when I run the arm controller, the arm vibrates back and forth between the position given by joint_state_publisher and the one given by the arm controller. I searched for how to stop this and found on a website that I should comment out joint_state_publisher if an arm controller is used. But when I comment it out, an error shows up on the TurtleBot tires (when launched in RViz) saying there is no transformation between the left and right wheels and base_footprint... (the vibration of the arm stops, though, and the controller works fine)
So is there any way to stop publishing the joint states of the arm other than commenting out the joint_state_publisher of the TurtleBot, or will the error be resolved if I send some position messages to the TurtleBot wheels?
Please help...thanks in advance
Originally posted by npa on ROS Answers with karma: 5 on 2015-12-02
Post score: 0
Original comments
Comment by Morgan on 2015-12-02:
where do you see the error? In RViz or somewhere else? Can you copy the full text and path of the file you have modified?
Comment by npa on 2015-12-06:
in rviz.... i just commented out joint_state_publisher node in rviz launch file
Answer:
It's not recommended to comment it out.
It's ok to comment it out if you don't want transforms in your system. As you've observed, without it publishing the joint states, robot_state_publisher cannot publish the transforms for the different links.
So if you publish the wheel states then you will be replicating the joint_state_publisher.
Why do you want to comment out the joint state publisher?
Originally posted by tfoote with karma: 58457 on 2015-12-02
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 23139,
"tags": "ros, joint, turtlebot, state, publisher"
} |
How is the distribution of Argo floats managed over the Earth's oceans? | Question: The BBC News article Climate change: Oceans 'soaking up more heat than estimated' mentions Argo floats as one technology to measure the temperature of the oceans, and links to argo.ucsd.edu where there is the map below that looks surprisingly uniform.
I had thought that an initial distribution of floating objects would, over the course of years, end up clumpy and gyre-bound. Is there something that is actively managing this uniformity, or at least passively maintaining it?
From the UCSD link:
What is Argo?
Argo is a global array of 3,800 free-drifting profiling floats that measures the temperature and salinity of the upper 2000 m of the ocean. This allows, for the first time, continuous monitoring of the temperature, salinity, and velocity of the upper ocean, with all data being relayed and made publicly available within hours after collection.
Positions of the floats that have delivered data within the last 30 days :
Answer: I think there are two factors which prevent the floats from forming clusters:
The floats' lifespan. Batteries supposedly last for four years, then the floats become debris which sink to the bottom of the ocean (yeah, not that great from an environmentalist point of view). So the lifespan of the battery prevents the floats from clustering.
The introduction of new floats. Derelict floats are replaced by new ones to keep the network working. Naturally, these new floats are placed somewhere in the ocean where they don't immediately end up in a gyre but will most likely provide the greatest possible diversity of data.
"domain": "earthscience.stackexchange",
"id": 1587,
"tags": "temperature, measurements, ocean-currents"
} |
Difference in titration curves | Question: In my titration curve for HCl being titrated with NaOH, the initial pH-value was plotted as 2.1 according to a pH-meter. When I found the equivalence point and calculated the concentration of HCl, it was 0.075 M. From that, I calculated what the initial pH-value should be, by -lg(0.075), which is around 1.12. That is way off from the original pH-value I got.
Why did this happen? Is it because of the risk that I calibrated the pH-meter incorrectly? If that is the case, will the x-value of the equivalence point also be wrong? For the latter, I feel like the answer would be no: if the pH-meter was calibrated wrong, then all of the values would be 0.9 pH higher than they should be, so the curve would only differ in the y-direction, but would be exactly the same in the x-direction. Is that correct?
Thanks in advance!
Answer: Maybe your electrode is incorrectly calibrated. But it may be another effect: you should know that at high acid concentrations, the pH value is not obtained by taking the logarithm of the concentration of $\ce{H3O+}$. At high acid concentrations, the concentration should be replaced by the activity, which can be rather different from the concentration. Just for fun, try this experiment: take your $\ce{HCl}$ solution with an electrode showing pH $2.1$. Add an equal volume of saturated $\ce{NaCl}$ solution (which is neither acidic nor basic). You will not believe your eyes when reading your electrode: the pH value goes down to $1$, as if diluting the initial solution produced an increase in concentration!
The activity is sometimes described as the amount of $\ce{H3O+}$ ions divided by the volume of the so-called "free water" (and not by the volume of the solution). In concentrated solutions the "free water" is the small amount of water that is free to move, and not attracted to and fixed around the ions. One liter of $1 M$ $\ce{HCl}$ should contain much less than $1$ liter of free water, maybe about $0.1$ liter of free water, or less.
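A quick numeric sketch of the figures in the question (an added illustration; it assumes ideal behavior, i.e. activity equal to concentration, which is exactly what the answer says breaks down at high concentration):

```python
import math

def ideal_ph(concentration_molar):
    """pH of a strong monoprotic acid, assuming activity ~= concentration."""
    return -math.log10(concentration_molar)

# The asker's 0.075 M HCl, versus the measured 2.1
print(round(ideal_ph(0.075), 2))  # 1.12
```

The gap between this ideal 1.12 and the measured 2.1 is what the answer attributes to calibration error and/or activity effects.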
"domain": "chemistry.stackexchange",
"id": 14745,
"tags": "acid-base, experimental-chemistry, analytical-chemistry, titration"
} |
How publish an "YUV422 packed" image, aravis + basler | Question:
Hello,
I am using aravis driver+ basler camera + Camara_aravis node.
My camera can only give images with format:
-ARV_PIXEL_FORMAT_YUV_422_PACKED
-ARV_PIXEL_FORMAT_YUV_422_YUYV_PACKED
-ARV_PIXEL_FORMAT_BAYER_BG_8
-ARV_PIXEL_FORMAT_BAYER_BG_12
I have changed Camara_aravis node
msg.encoding = sensor_msgs::image_encodings::YUV422 ;
msg.step = g_width*2;
But when I subscribe to the image:
rosrun image_view image_view image:=/cam1/image_raw theora
I get an image with wrong colors (for example, blue shows up as yellow or orange...).
I think that the problems comes from ARV_PIXEL_FORMAT_YUV_422_PACKED format which is different of sensor_msgs::image_encodings::YUV422.
How can I fix this?
If I use ARV_PIXEL_FORMAT_BAYER_BG_8 with sensor_msgs::image_encodings::BAYER_BGGR8. I get a crash:
terminate called after throwing an instance of 'cv_bridge::Exception'
what(): Unsupported conversion from [bayer_bggr8] to [rgb8]
Aborted (core dumped)
What is the best way to publish images of this format?
My regards,
Originally posted by Filipe Santos on ROS Answers with karma: 346 on 2013-04-02
Post score: 0
Answer:
Ok... my fault. It is solved.
I should use http://www.ros.org/wiki/image_proc
and also run
rosrun image_view image_view image:=/cam1/image_raw (i.e. the same command, but without theora)
regards
Originally posted by Filipe Santos with karma: 346 on 2013-04-02
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13641,
"tags": "ros, image"
} |
What is the loudest possible sound? | Question: For a long time, Wikipedia has said that the loudest possible sound is 191 dB SPL, as this corresponds to 1 atmosphere of pressure peak-to-peak, and anything above this would be clipped at vacuum on the negative peaks, and is therefore classed as a "shockwave" rather than "sound". (Though Wikipedia also defines a shock wave as a wave moving faster than the speed of sound, regardless of amplitude.)
It gives no references, however, and I've since learned that pressure waves in air are always non-linear, and the science of acoustics assumes linearity and small pressure levels to simplify calculations. So the wave will already be distorted before this pressure level.
So is there a commonly-held definition of when distortion becomes too great to consider a wave "sound"? At what dB SPL is it? Is it possible to calculate the amplitude that a sine wave in air would be distorted by 1% THD, for instance?
Answer: The "commonly held definition" is the wikipedia one... it's not so much a question of distortion as a question of whether the wave is symmetrical - that is, it should not result in a net motion of gas. It is possible to construct a sinusoidal pressure wave with a peak pressure of 2 atm and a valley of 0 - from a displacement perspective this is a distorted wave, but from a pressure perspective it is not.
However, you cannot do that if the peak pressure is greater than 2 atmospheres.
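A quick numeric sketch of where the 191 dB figure comes from (an added illustration; it assumes the standard 20 µPa SPL reference and an undistorted sine whose acoustic pressure amplitude is 1 atm, so the absolute pressure swings between 0 and 2 atm as described above):

```python
import math

P_ATM = 101325.0  # 1 atmosphere in pascals
P_REF = 20e-6     # standard SPL reference pressure (20 micropascals)

# RMS pressure of an undistorted sine whose amplitude is 1 atm
p_rms = P_ATM / math.sqrt(2)

spl_db = 20 * math.log10(p_rms / P_REF)
print(round(spl_db, 1))  # 191.1
```

So 191 dB SPL is the RMS level of the largest symmetric sine wave that never reaches vacuum on its negative peaks.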
"domain": "physics.stackexchange",
"id": 36777,
"tags": "pressure, acoustics, shock-waves"
} |
Encoding strings using periodic table abbreviations | Question: I took a JavaScript challenge to finish a task related to the logo of "Breaking Bad", where the letters of your first name and last name are spelled out with elements of the periodic table and their respective atomic numbers. I wrote the code below; any suggestions to improve performance, or any best coding practices?
function Process() {
var ellist = {
"h": "1",
"he": "2",
"li": "3",
"be": "4",
"b": "5",
"c": "6",
// ... (remaining elements elided)
"Lv":"116",
"Uus":"117",
"Uuo":"118"
};
var fname = document.getElementById("firstname").value;
var lname = document.getElementById("lastname").value;
var splits = fname.split("");
var value;
for (var i = 0; i < splits.length; i++) {
var onevalue = fname.indexOf(splits[i]);
var singlev = fname.substring(onevalue, onevalue + 1);
var doublev = fname.substring(onevalue, onevalue + 2);
var triplev = fname.substring(onevalue, onevalue + 3);
if (ellist[splits[i]] || ellist[doublev] || ellist[triplev]) {
value = splits[i];
if (ellist[doublev] || ellist[triplev]) {
value = ellist[doublev];
if (ellist[triplev]) {
value = ellist[triplev];
// some code here
}
// some code here
}
// some code here
}
}
Using the Process() function which contains the logic. The object ellist contains the list of elements of the periodic table with their atomic numbers. The first name is taken from a textbox on the webpage and stored in fname, and similarly the last name in lname; the for loop contains the code which checks whether the first name contains strings that match elements of the periodic table. Any suggestions?
Answer:
Any suggestions?
Yes, a few.
First off, split your function into parts (SRP), to separate the view (DOM elements and their values) from the logic (finding element names in strings).
var splits = fname.split("");
for (var i = 0; i < splits.length; i++) {
var onevalue = fname.indexOf(splits[i]);
That doesn't make much sense to me. Don't you expect onevalue == i? If not, you might annotate this explicitly and/or make the comparison. Maybe it's inside the "some code"?
var doublev = fname.substring(onevalue, onevalue + 2);
var triplev = fname.substring(onevalue, onevalue + 3);
Notice that these will have the same value as singlev in the last [two] iterations of your loop, where the end is outside of the string.
if (ellist[splits[i]] || ellist[doublev] || ellist[triplev]) {
value = splits[i];
if (ellist[doublev] || ellist[triplev]) {
value = ellist[doublev];
if (ellist[triplev]) {
value = ellist[triplev];
Ouch. Simplify this to
if (triplev in ellist) {
value = ellist[triplev];
} else if (doublev in ellist) {
value = ellist[doublev];
} else if (splits[i] in ellist) { // are you sure you don't want `singlev`?
value = splits[i];
} | {
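As a further sketch (not from the original review, shown in Python for brevity; the element table here is only a small sample of the full 118-symbol dictionary), the whole matching idea can be written as a greedy longest-match-first scan:

```python
# Greedy longest-match-first spelling of a name using element symbols.
# ELEMENTS is only a small sample; a full solution would carry all 118
# symbols, keyed in lowercase.
ELEMENTS = {"h": 1, "he": 2, "li": 3, "be": 4, "b": 5, "c": 6,
            "n": 7, "o": 8, "na": 11, "al": 13, "k": 19, "ca": 20}

def spell(name):
    name = name.lower()
    result, i = [], 0
    while i < len(name):
        for size in (3, 2, 1):              # prefer the longest symbol
            chunk = name[i:i + size]
            if chunk in ELEMENTS:
                result.append((chunk, ELEMENTS[chunk]))
                i += len(chunk)
                break
        else:                               # no symbol starts here
            i += 1                          # skip this character
    return result

print(spell("Alban"))  # [('al', 13), ('b', 5), ('n', 7)]
```

Note that a greedy scan can miss spellings that would require choosing a shorter symbol first; handling those cases needs backtracking, which is beyond this sketch.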
"domain": "codereview.stackexchange",
"id": 4803,
"tags": "javascript, performance, strings"
} |
Must a partial halt decider be a pure function of its inputs? | Question: Must a partial halt decider be a pure function of its inputs?
A partial halt decider correctly decides the halt status of some of its inputs.
I am trying to write C code that would be acceptable to computer scientists in the field of the theory of computation.
In computer programming, a pure function is a function that has the following properties:
(1) The function return values are identical for identical arguments (no variation with local static variables, non-local variables, mutable reference arguments or input streams).
(2) The function application has no side effects (no mutation of local static variables, non-local variables, mutable reference arguments or input/output streams).
https://en.wikipedia.org/wiki/Pure_function#Compiler_optimizations
I created a partial halt decider that is able to tell when it is called in infinitely nested simulation. It can only do this if it has a static memory variable to keep track of the simulation of its input across recursive invocations.
Does this still meet the Halting Problem requirement that the function must be a pure function of its inputs?
Answer: A decision algorithm is required to behave as an "observationally pure" function. In other words, its externally observable behavior (for someone who can run it on inputs of their choosing and observe what it outputs) must be consistent with it being a pure function.
Presumably, unless you specify otherwise, any normal reader of your definition of "partial halt decider" would assume that such an algorithm is also required to be observationally pure.
Why? Because the algorithm is supposed to compute a mathematical function, and a mathematical function is pure, so any algorithm that correctly computes it also must be observationally pure.
There is no requirement that the particular implementation of the algorithm be pure in the sense you have listed. It is OK to have code that, for instance, defines a local variable and then overwrites its value. For instance, a Turing machine can overwrite what is written on the tape. Nonetheless, from the perspective of any external observer who only sees the input-output behavior of the Turing machine, it remains observationally pure. | {
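A tiny sketch of that distinction (an added illustration, in Python rather than C): a memoized function mutates internal state, yet its input-output behavior is indistinguishable from that of a pure function, so it remains observationally pure:

```python
# Internal mutable state: an implementation detail invisible to callers.
_cache = {}

def fib(n):
    """Memoized Fibonacci: impure inside, observationally pure outside."""
    if n not in _cache:
        _cache[n] = n if n < 2 else fib(n - 1) + fib(n - 2)
    return _cache[n]

# Identical arguments always yield identical results, regardless of how
# often or in what order the function has been called before.
print(fib(30))  # 832040
```

The cache changes across calls, but no external observer who only sees inputs and outputs can detect that.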
"domain": "cs.stackexchange",
"id": 18993,
"tags": "computability, c, theory"
} |
Stream text scanning | Question: I've written some utilities to read text from streams. One of these, the ReadTo method, is to read text from a specified stream until a given text is encountered, behaving like string.IndexOf, but with streams.
Really, it's quite fast. But I'd like to know if anyone knows any way to make it even faster, as it's used in a very performance-critical part of a program I'm developing.
/// <summary>
/// Provides utilities to manage streams.
/// </summary>
public static class StreamUtils
{
private const int DefaultBufferSize = 1024;
/// <summary>
/// Enumerate buffers from a specified stream.
/// </summary>
/// <param name="stream">The stream to read.</param>
/// <param name="bufferSize">The size of each buffer.</param>
/// <param name="count">How many bytes to read. Negative values mean read to end.</param>
/// <returns></returns>
/// <exception cref="ArgumentException"></exception>
/// <exception cref="ArgumentNullException"></exception>
/// <exception cref="IOException"></exception>
/// <exception cref="NotSupportedException"></exception>
/// <exception cref="ObjectDisposedException"></exception>
public static IEnumerable<byte[]> EnumerateBuffers(this Stream stream, int bufferSize = DefaultBufferSize, long count = -1)
{
byte[] buffer = new byte[bufferSize];
do
{
long read = stream.Read(buffer, 0, bufferSize);
if (read < 1)
break;
if (count > -1)
{
count -= read;
if (count < 0)
read += count;
}
if (read == bufferSize)
yield return buffer;
else
{
byte[] newBuffer = new byte[read];
Buffer.BlockCopy(buffer, 0, newBuffer, 0, (int)read);
yield return newBuffer;
break;
}
} while (true);
}
/// <summary>
/// Enumerate substrings from a specified stream.
/// </summary>
/// <param name="stream">The stream to read.</param>
/// <param name="bufferSize">The length of each substring.</param>
/// <returns></returns>
/// <exception cref="ArgumentException"></exception>
/// <exception cref="ArgumentNullException"></exception>
/// <exception cref="IOException"></exception>
/// <exception cref="DecoderFallbackException"></exception>
/// <exception cref="NotSupportedException"></exception>
/// <exception cref="ObjectDisposedException"></exception>
public static IEnumerable<string> EnumerateSubstrings(this Stream stream, int bufferSize = DefaultBufferSize) => stream.EnumerateSubstrings(Encoding.Default, bufferSize);
/// <summary>
/// Enumerate substrings from a specified stream.
/// </summary>
/// <param name="stream">The stream to read.</param>
/// <param name="encoding">The encoding to use.</param>
/// <param name="bufferSize">The length of each substring.</param>
/// <returns></returns>
/// <exception cref="ArgumentException"></exception>
/// <exception cref="ArgumentNullException"></exception>
/// <exception cref="IOException"></exception>
/// <exception cref="DecoderFallbackException"></exception>
/// <exception cref="NotSupportedException"></exception>
/// <exception cref="ObjectDisposedException"></exception>
public static IEnumerable<string> EnumerateSubstrings(this Stream stream, Encoding encoding, int bufferSize = DefaultBufferSize) => from byte[] buffer in stream.EnumerateBuffers(bufferSize) select encoding.GetString(buffer);
/// <summary>
/// Read the current stream until a specified string is encountered.
/// </summary>
/// <param name="stream">The source stream.</param>
/// <param name="separator">The string that marks the end.</param>
/// <param name="bufferSize">The size of the buffers.</param>
/// <returns></returns>
/// <exception cref="ArgumentException"></exception>
/// <exception cref="ArgumentNullException"></exception>
/// <exception cref="IOException"></exception>
/// <exception cref="DecoderFallbackException"></exception>
/// <exception cref="NotSupportedException"></exception>
/// <exception cref="ObjectDisposedException"></exception>
public static string ReadTo(this Stream stream, string separator, int bufferSize = DefaultBufferSize) => stream.ReadTo(separator, Encoding.Default, bufferSize);
/// <summary>
/// Read the current stream until a specified string is encountered.
/// </summary>
/// <param name="stream">The source stream.</param>
/// <param name="separator">The string that marks the end.</param>
/// <param name="encoding">The encoding to use.</param>
/// <param name="bufferSize">The size of the buffers.</param>
/// <returns></returns>
/// <exception cref="ArgumentException"></exception>
/// <exception cref="ArgumentNullException"></exception>
/// <exception cref="IOException"></exception>
/// <exception cref="DecoderFallbackException"></exception>
/// <exception cref="NotSupportedException"></exception>
/// <exception cref="ObjectDisposedException"></exception>
public static string ReadTo(this Stream stream, string separator, Encoding encoding, int bufferSize = DefaultBufferSize)
{
// This method requires seeking, so ensure that the specified stream supports it.
if (!stream.CanSeek)
throw new NotSupportedException();
// This StringBuilder will build the resulting text. Using this to avoid too many string reallocations.
StringBuilder text = new StringBuilder();
bool hasSuffix = false;
string endingSeparator = null;
// Retrieve how many bytes is the specified separator long. This will be necessary to handle some seekings on the stream.
int separatorByteLength = encoding.GetByteCount(separator);
// Iterate through each substring in the stream. Each one is a buffer converted to a string using a specified encoding.
foreach (string substring in stream.EnumerateSubstrings(encoding, bufferSize))
{
// Retrieve how many bytes is the current substring long. Again, useful for seekings.
int substringByteLength = encoding.GetByteCount(substring);
// Check out whether the previous substring had a suffix.
if (hasSuffix)
{
// If it had, then verify whether the current substring starts with the remaining part of the separator.
if (substring.StartsWith(separator.Substring(endingSeparator.Length)))
{
// In that case, seek till before the separator and break the loop.
stream.Seek(substringByteLength - encoding.GetByteCount(endingSeparator), SeekOrigin.Current);
break;
}
// If the code reached here, then the previous suffix were not part of a separator, as the whole of the separator cannot be found.
hasSuffix = false;
text.Append(endingSeparator);
}
// If the current substring starts with the separator, just skip it and break the loop, so the StringBuilder will only contain previous substrings.
if (substring.StartsWith(separator))
break;
{
// Check out whether the current substring contains the separator.
int separatorIndex = substring.IndexOf(separator);
if (separatorIndex != -1)
{
// If that's the case, take this substring till the previously found index, ...
string newSubstring = substring.Remove(separatorIndex);
// ...then seek the current stream before the separator, ...
stream.Seek(encoding.GetByteCount(newSubstring) - substringByteLength, SeekOrigin.Current);
// ...and finally append the new substring (the one before the separator) to the StringBuilder.
text.Append(newSubstring);
break;
}
}
// Check out whether the current substring ends with the specified separator.
if (substring.EndsWith(separator))
{
// If it does, go back as many bytes as the separator is long within the stream.
stream.Seek(-separatorByteLength, SeekOrigin.Current);
// Then, append this substring till before the specified separator to the StringBuilder.
text.Append(substring.Remove(substring.Length - separator.Length));
break;
}
// Sometimes, it might happen that the separator is divided between the current substring and the next one.
// So, see whether the current substring ends with just one part (even one only character) of the separator.
endingSeparator = separator;
do
// Remove the last character from the 'ending separator'.
endingSeparator = endingSeparator.Remove(endingSeparator.Length - 1);
// If the ending separator isn't empty yet and the current substring doesn't end with it,
// continue the loop.
while (!(endingSeparator.Length == 0 || substring.EndsWith(endingSeparator)));
// At this time, the ending separator will be an initial part of the specified separator,
// which is a 'suffix' of the current substring.
// Push the length of the suffix on the stack, so I'll avoid to call the Length getter accessor multiple times.
int suffixLength = endingSeparator.Length;
// If the suffix is empty, that means the current string doesn't end with even just a part of the separator.
// Therefore, just append the current string to the StringBuilder.
if (suffixLength == 0)
text.Append(substring);
else
{
// If rather the suffix isn't empty, then mark this with the boolean hasSuffix and
// append the current substring only till before the suffix.
hasSuffix = true;
text.Append(substring.Remove(substring.Length - suffixLength));
}
}
return text.ToString();
}
}
Usage sample:
string text = "Hello world, this is a test";
Encoding encoding = Encoding.UTF8;
using (Stream stream = new MemoryStream(encoding.GetBytes(text)))
{
string substring = stream.ReadTo(", this", encoding);
// substring == "Hello world"
}
Answer: By altering/specifying the string comparison from the default StringComparison.CurrentCulture to StringComparison.Ordinal you can win a lot.
Also note that sometimes Buffer.BlockCopy(buffer, 0, newBuffer, 0, (int)read) is slower than Array.Copy(buffer, 0, newBuffer, 0, (int)read).
I ran your code with a Stopwatch, added these small changes, then ran it a few times again and again and averaged the results.
original 401986 ticks
updated 101224 ticks
If you really want to know whether you gain something, instrument it with BenchmarkDotNet, a cool NuGet package that shows where "it hurts".
This is by no means all you could do. I would start by removing things that cause "newing up" classes, as that takes a lot of time, and work with Span<T> or even ReadOnlySpan<T>. The LINQ method you use is one such case: it generates a class in memory that eventually has to be collected by the GC, and over time this starts to slow things down.
Here is your code with the small tweaks
public static class StreamUtils
{
private const int DefaultBufferSize = 1024;
/// <summary>
/// Enumerate buffers from a specified stream.
/// </summary>
/// <param name="stream">The stream to read.</param>
/// <param name="bufferSize">The size of each buffer.</param>
/// <param name="count">How many bytes to read. Negative values mean read to end.</param>
/// <returns></returns>
/// <exception cref="ArgumentException"></exception>
/// <exception cref="ArgumentNullException"></exception>
/// <exception cref="IOException"></exception>
/// <exception cref="NotSupportedException"></exception>
/// <exception cref="ObjectDisposedException"></exception>
public static IEnumerable<byte[]> EnumerateBuffers(this Stream stream, int bufferSize = DefaultBufferSize, long count = -1)
{
byte[] buffer = new byte[bufferSize];
do
{
long read = stream.Read(buffer, 0, bufferSize);
if (read < 1)
break;
if (count > -1)
{
count -= read;
if (count < 0)
read += count;
}
if (read == bufferSize)
yield return buffer;
else
{
byte[] newBuffer = new byte[read];
Array.Copy(buffer, 0, newBuffer, 0, (int)read);
//Buffer.BlockCopy(buffer, 0, newBuffer, 0, (int)read);
yield return newBuffer;
break;
}
} while (true);
}
// A very simple and efficient memmove that assumes all of the
// parameter validation has already been done. The count and offset
// parameters here are in bytes. If you want to use traditional
// array element indices and counts, use Array.Copy.
[System.Security.SecuritySafeCritical] // auto-generated
[ResourceExposure(ResourceScope.None)]
[MethodImplAttribute(MethodImplOptions.InternalCall)]
internal static extern void InternalBlockCopy(Array src, int srcOffsetBytes,
Array dst, int dstOffsetBytes, int byteCount);
/// <summary>
/// Enumerate substrings from a specified stream.
/// </summary>
/// <param name="stream">The stream to read.</param>
/// <param name="bufferSize">The length of each substring.</param>
/// <returns></returns>
/// <exception cref="ArgumentException"></exception>
/// <exception cref="ArgumentNullException"></exception>
/// <exception cref="IOException"></exception>
/// <exception cref="DecoderFallbackException"></exception>
/// <exception cref="NotSupportedException"></exception>
/// <exception cref="ObjectDisposedException"></exception>
public static IEnumerable<string> EnumerateSubstrings(this Stream stream, int bufferSize = DefaultBufferSize)
=> stream.EnumerateSubstrings(Encoding.Default, bufferSize);
/// <summary>
/// Enumerate substrings from a specified stream.
/// </summary>
/// <param name="stream">The stream to read.</param>
/// <param name="encoding">The encoding to use.</param>
/// <param name="bufferSize">The length of each substring.</param>
/// <returns></returns>
/// <exception cref="ArgumentException"></exception>
/// <exception cref="ArgumentNullException"></exception>
/// <exception cref="IOException"></exception>
/// <exception cref="DecoderFallbackException"></exception>
/// <exception cref="NotSupportedException"></exception>
/// <exception cref="ObjectDisposedException"></exception>
public static IEnumerable<string> EnumerateSubstrings(this Stream stream, Encoding encoding, int bufferSize = DefaultBufferSize)
=> from byte[] buffer in stream.EnumerateBuffers(bufferSize) select encoding.GetString(buffer);
/// <summary>
/// Read the current stream until a specified string is encountered.
/// </summary>
/// <param name="stream">The source stream.</param>
/// <param name="separator">The string that marks the end.</param>
/// <param name="bufferSize">The size of the buffers.</param>
/// <returns></returns>
/// <exception cref="ArgumentException"></exception>
/// <exception cref="ArgumentNullException"></exception>
/// <exception cref="IOException"></exception>
/// <exception cref="DecoderFallbackException"></exception>
/// <exception cref="NotSupportedException"></exception>
/// <exception cref="ObjectDisposedException"></exception>
public static string ReadTo(this Stream stream, string separator, int bufferSize = DefaultBufferSize) => stream.ReadTo(separator, Encoding.Default, bufferSize);
/// <summary>
/// Read the current stream until a specified string is encountered.
/// </summary>
/// <param name="stream">The source stream.</param>
/// <param name="separator">The string that marks the end.</param>
/// <param name="encoding">The encoding to use.</param>
/// <param name="bufferSize">The size of the buffers.</param>
/// <returns></returns>
/// <exception cref="ArgumentException"></exception>
/// <exception cref="ArgumentNullException"></exception>
/// <exception cref="IOException"></exception>
/// <exception cref="DecoderFallbackException"></exception>
/// <exception cref="NotSupportedException"></exception>
/// <exception cref="ObjectDisposedException"></exception>
public static string ReadTo(this Stream stream, string separator, Encoding encoding, int bufferSize = DefaultBufferSize)
{
// This method requires seeking, so ensure that the specified stream supports it.
if (!stream.CanSeek)
throw new NotSupportedException();
// This StringBuilder will build the resulting text. Using this to avoid too many string reallocations.
StringBuilder text = new StringBuilder();
bool hasSuffix = false;
string endingSeparator = null;
// Retrieve how many bytes is the specified separator long. This will be necessary to handle some seekings on the stream.
int separatorByteLength = encoding.GetByteCount(separator);
// Iterate through each substring in the stream. Each one is a buffer converted to a string using a specified encoding.
foreach (string substring in stream.EnumerateSubstrings(encoding, bufferSize))
{
// Retrieve how many bytes is the current substring long. Again, useful for seekings.
int substringByteLength = encoding.GetByteCount(substring);
// Check out whether the previous substring had a suffix.
if (hasSuffix)
{
// If it had, then verify whether the current substring starts with the remaining part of the separator.
if (substring.StartsWith(separator.Substring(endingSeparator.Length),StringComparison.Ordinal))
{
// In that case, seek till before the separator and break the loop.
stream.Seek(substringByteLength - encoding.GetByteCount(endingSeparator), SeekOrigin.Current);
break;
}
// If the code reached here, then the previous suffix were not part of a separator, as the whole of the separator cannot be found.
hasSuffix = false;
text.Append(endingSeparator);
}
// If the current substring starts with the separator, just skip it and break the loop, so the StringBuilder will only contain previous substrings.
if (substring.StartsWith(separator,StringComparison.Ordinal))
break;
{
// Check out whether the current substring contains the separator.
int separatorIndex = substring.IndexOf(separator,StringComparison.Ordinal);
if (separatorIndex != -1)
{
// If that's the case, take this substring till the previously found index, ...
string newSubstring = substring.Remove(separatorIndex);
// ...then seek the current stream before the separator, ...
stream.Seek(encoding.GetByteCount(newSubstring) - substringByteLength, SeekOrigin.Current);
// ...and finally append the new substring (the one before the separator) to the StringBuilder.
text.Append(newSubstring);
break;
}
}
// Check out whether the current substring ends with the specified separator.
if (substring.EndsWith(separator,StringComparison.Ordinal))
{
// If it does, go back as many bytes as the separator is long within the stream.
stream.Seek(-separatorByteLength, SeekOrigin.Current);
// Then, append this substring till before the specified separator to the StringBuilder.
text.Append(substring.Remove(substring.Length - separator.Length));
break;
}
// Sometimes, it might happen that the separator is divided between the current substring and the next one.
// So, see whether the current substring ends with just one part (even one only character) of the separator.
endingSeparator = separator;
do
// Remove the last character from the 'ending separator'.
endingSeparator = endingSeparator.Remove(endingSeparator.Length - 1);
// If the ending separator isn't empty yet and the current substring doesn't end with it,
// continue the loop.
while (!(endingSeparator.Length == 0 || substring.EndsWith(endingSeparator,StringComparison.Ordinal)));
// At this time, the ending separator will be an initial part of the specified separator,
// which is a 'suffix' of the current substring.
// Push the length of the suffix on the stack, so I'll avoid to call the Length getter accessor multiple times.
int suffixLength = endingSeparator.Length;
// If the suffix is empty, that means the current string doesn't end with even just a part of the separator.
// Therefore, just append the current string to the StringBuilder.
if (suffixLength == 0)
text.Append(substring);
else
{
// If rather the suffix isn't empty, then mark this with the boolean hasSuffix and
// append the current substring only till before the suffix.
hasSuffix = true;
text.Append(substring, 0, substring.Length - suffixLength);
}
}
return text.ToString();
}
} | {
"domain": "codereview.stackexchange",
"id": 35870,
"tags": "c#, strings, search, stream"
} |
Balanced set coloring | Question: Let $\{S_1, S_2, ..., S_m\}$ be a collection of subsets of some universe $U$, where each $S_i$ has even size (so does $U$).
We want to color the elements of $U$, either red or blue, such that each $S_i$ has as many blue elements as red elements (every set is balanced in terms of colors).
I am more interested in structural results than algorithmic ones. The goal is to understand what properties of the sets make such a coloring always possible (in particular, whether specific properties that we identified are sufficient).
Does anyone know references of that type?
The problem seems related to set cover, hitting set, and hypergraph coloring, but none of the results I found so far actually address a similar question. Please feel free to suggest an alternative name or formulation in the comments.
Answer: I think this question is closely related to the term discrepancy.
Here is the defintion.
Given a universe $U$ a collection of sets $\mathcal{A}=\{S_i\}$ and a function $\varphi:U\to\{-1,1\}$. For $S\in\mathcal{A}$ define $\varphi(S)=\sum_{v\in S} \varphi(v)$, and $\mathrm{disc}(\mathcal{A},\varphi)=\max_{S\in\mathcal{A}} |\varphi(A)|$.
Finally, define $\mathrm{disc}(\mathcal{A})=\min_{\varphi:U\to\{-1,1\}} \mathrm{disc}(A,\varphi)$.
For example, it can be shown using the probabilistic method (see first reference) that
$$
\mathrm{disc}(\mathcal{A})\leq \sqrt{2|U|\ln(2|\mathcal{A}|)}
$$
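To make the definition concrete, here is a small Monte Carlo sketch (my own illustration, not from the references): for a random set system, a uniformly random $\pm 1$ coloring lands below the bound above with positive probability, so trying a handful of random colorings and keeping the best already works.

```python
import math
import random

def discrepancy(sets, coloring):
    # disc(A, phi): the worst color imbalance over all sets
    return max(abs(sum(coloring[v] for v in s)) for s in sets)

random.seed(0)
n, m = 50, 20
universe = range(n)
sets = [set(random.sample(universe, 10)) for _ in range(m)]

bound = math.sqrt(2 * n * math.log(2 * m))

# Try a handful of uniformly random +/-1 colorings and keep the best;
# the probabilistic argument says one of them should land below the bound.
best = min(
    discrepancy(sets, {v: random.choice((-1, 1)) for v in universe})
    for _ in range(100)
)
print(best, "<=", round(bound, 2))
```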
I think better results are also known. More can be found in
The Probabilistic Method by Noga Alon and Joel H. Spencer (Chapter 13 in the 4th edition).
Geometric Discrepancy by Jiřı́ Matoušek. | {
"domain": "cstheory.stackexchange",
"id": 5784,
"tags": "graph-colouring, set-cover, hypergraphs, hitting-set"
} |
Is singularity at the exact centre of a black hole? | Question: I've read that all paths inside the EH lead to singularity. ALL paths. Even the ones pointing away from it, right? Because there's NO pointing away from singularity, since ALL paths point to it.
So how could it be at the exact center? Or am I misunderstanding "center" here?
Answer: Inside the event horizon all timelike paths lead to the singularity. A timelike path is one along which you never travel at a speed greater than c.
A static black hole is spherically symmetric; any asymmetries are radiated away as gravitational waves as the black hole forms. Therefore the singularity must be at the centre or it would break the spherical symmetry. | {
"domain": "physics.stackexchange",
"id": 6859,
"tags": "black-holes, singularities"
} |
How to simplify these delegate functions? | Question: I'm looking for a way to simplify this code, because I could develop more overloads for TryThis I made the string and int both of class Nullable so that in each overloaded function, the catch block could return the same value.
The problem is I need, if possible, no overloads of TryThis. The function overloads are both identical, except for the type of delegate they are passed. Is there some kind of variable that would encompass any delegate that can be executed?
class Program
{
delegate int MyIntReturn();
delegate string MyStringReturn();
static private MyIntReturn ReadInt = () => {return int.Parse(Console.ReadLine()); };
static private MyStringReturn ReadString = () => { return Console.ReadLine(); };
static private Nullable<int> TryThis(MyIntReturn MyAction)
{
try
{
return MyAction();
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
return null;
}
}
private static Nullable<string> TryThis(MyStringReturn MyAction)
{
try
{
return MyAction();
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
return null;
}
}
}
Answer: Generics and Delegates. Note that you can't return null in this modified version of TryThis, so we use the default(T) expression to return whatever's most sensible.
class Program
{
private delegate T TypeReturn<T>();
static private TypeReturn<int> ReadInt = () => int.Parse(Console.ReadLine());
static private TypeReturn<string> ReadString = () => Console.ReadLine();
static private T TryThis<T>(TypeReturn<T> MyAction )
{
try
{
return MyAction() ;
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
return default (T);
}
}
} | {
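For comparison (my addition, not part of the original answer): in a dynamically typed language the same wrapper needs no generic parameter at all, since any callable can be passed in directly.

```python
# A Python analogue of the generic TryThis<T>: the default value plays
# the role of default(T).
def try_this(action, default=None):
    """Run action(); return its result, or default if it raises."""
    try:
        return action()
    except Exception as exc:
        print(exc)
        return default

print(try_this(lambda: int("42")))    # 42
print(try_this(lambda: int("oops")))  # prints the ValueError, then None
```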
"domain": "codereview.stackexchange",
"id": 2596,
"tags": "c#, delegates"
} |
Factors of $c$ in the Hamiltonian for a charged particle in electromagnetic field | Question: I've been looking for the Hamiltonian of a charged particle in an electromagnetic field, and I've found two slightly different expressions, which are as follows:
$$H=\frac{1}{2m}(\vec{p}-q \vec{A})^2 + q\phi $$
and also
$$H=\frac{1}{2m}(\vec{p}-\frac{q}{c}\vec{A})^2 + q\phi $$
with $\vec{p}$ the momentum, $q$ the electric charge, $\vec{A}$ the vector potential, $\phi$ the scalar potential and $c$ the speed of light.
So basically the difference is in the term $1/c$ multiplying $\vec{A}$, present in the second form (which I use in my lectures) but not in the first one (used by Griffiths in Introduction to Quantum Mechanics to treat the Aharonov-Bohm effect). Why does this difference exist and what does it mean? And how does the term $1/c$ affect the dimensional analysis (the units) of the problem?
Answer: The missing $1/c$ in your first expression is simply a consequence of the units used. The second expression is in Gaussian units while the first one is in either SI units or in natural units. In the latter system of units (natural units) certain constants like $\hbar$ and $c$ have a numerical value of 1, so they can be left out of the equations.$^1$ This is common practice in physics and it doesn't change anything about the dimensional analysis of the problem, as long as you keep in mind that you're working with those natural units.
The same goes for any other system as well. Every system of units $A$ is consistent with any other system of units $B$ as long as you yourself are consistent in their usage and correctly transform everything between $A$ and $B$ when desired.
So there is no fundamental difference between a dimensional analysis in SI, Gaussian or natural units, as long as you keep in mind what units you're working with. The units themselves will (obviously) vary between systems, but dimensional analysis in one system will be entirely consistent with dimensional analysis in another.$^2$
$^1$ Note that this is not the case for SI units. As is rather well-known, the numerical value of $c$ in SI units is about $3\times10^8$ $(\mathrm{m/s})$. The reason for the absence of $1/c$ in SI units is a conventional difference. Wikipedia has a comparison between Gaussian and SI units explaining the major differences here.
$^2$ Perhaps one important note concerning Gaussian and SI units here is that due to the different conventions, it can be more difficult to transform between them. E.g. making an equation dimensionless in SI units, might yield a non-dimensionless equation when transformed into Gaussian units.
One example is when we consider Gauss's law in Gaussian units divided by the free charge density: $(1/\rho)\vec{\nabla}\cdot\vec{E} = 4\pi$. The quantity on the left-hand side is dimensionless in Gaussian units, but not in SI units, where it is $(1/\rho)\vec{\nabla}\cdot\vec{E} = 1/\epsilon_0$. So you have to watch out for that when transforming your equations. Dimensional analysis may therefore also yield seemingly different results in SI or Gaussian units, but there is no problem if you remember the conventional differences and, again, stay consistent. | {
"domain": "physics.stackexchange",
"id": 6218,
"tags": "electromagnetism, hamiltonian-formalism, units, hamiltonian"
} |
Is there a mathematical method to determine if noise is Gaussian? | Question: Is there a mathematical method to determine if a signal's noise is Gaussian?
The only way I know so far is to analyze the histogram and layover a Gaussian distribution to visually determine if the distribution is Gaussian. I would like to know if there is a mathematical way to determine if the noise is Gaussian and how accurate the result is.
Answer: There are several statistical tests of whether a time series is Gaussian, although in statistics, the term "tests for normality" is usually how you search for them.
The NIST EDA site is a good place to look, and the probability plot is better for shorter data sets than the sample histogram.
http://www.itl.nist.gov/div898/handbook/eda/section3/probplot.htm
Near the bottom of the page, there are references to q-q plots, KS, Chi squared, and other goodness of fit tests. You can find ample information about them on the web and replicating here isn't going to add anything.
Matlab has qqplot and prob plot in the Statistics toolbox, and the qqplot with a single argument is specific to Gaussian distributions. SAS has all these tests. R has the tests.
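If you want a quick self-contained check before reaching for one of those packages, a moment-based statistic such as Jarque–Bera is easy to compute by hand (packages like SciPy also ship normality tests). This sketch, with made-up sample data, compares Gaussian and uniform noise:

```python
import math
import random

def jarque_bera(xs):
    """Jarque-Bera statistic: large values suggest non-Gaussian data."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

random.seed(1)
gaussian = [random.gauss(0, 1) for _ in range(2000)]
uniform = [random.uniform(-1, 1) for _ in range(2000)]

# Under the Gaussian hypothesis JB is approximately chi-squared with
# 2 dof, so JB > 5.99 rejects normality at the 5% level.
print(jarque_bera(gaussian))  # typically small
print(jarque_bera(uniform))   # large: uniform noise has kurtosis 1.8, not 3
```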
I recommend this book, written by 2 Engineers, and they cover several tests including for things like independence, and stationarity. The book is oriented towards the practical, minimum of mathematics.
Bendat, Julius S., and Allan G. Piersol. Random data: analysis and
measurement procedures. Vol. 729. John Wiley & Sons, 2011.
The wrinkle with these tests is that they don't conform to a Signal plus Noise scenario. The tests generally assume that the time series is all Gaussian or not. A constant mean isn't a problem. Signals are not usually Gaussian, and a simple test can't tell the difference.
Signal processing operations such as a DFT, tend to manifest central limit theorem effects on data, so you need to be aware that even linear transformations will not preserve a non Gaussian pdf.
It should be also noted that from a practical perspective, Gaussianity isn't black and white. Algorithms that have Gaussian assumptions usually work well even if the Gaussianity assumption is not strictly valid. Things like bi-modality and non-symmetry are more important to know about. Cauchy (heavy tails) like noise and multiplicative noise are also important to know about. | {
"domain": "dsp.stackexchange",
"id": 5718,
"tags": "noise, gaussian"
} |
How do I calculate lumens of a specific RGB light source from watts per square meter per steradian? | Question: Context
I am trying to understand irradiance calculations for interior lighting in Blender software and got stuck with inputting light strength in lumens, because the lights in the software take watts per square meter per steradian. We are talking only about visible light. Virtual lights in the software have strength that is set in watts per square meter per steradian and an RGB color in linear color space with REC.709 primaries(610nm, 555nm and 465nm) although brightness of typical light sources are usually given in lumens.
Example
So let's say I have a spot light that has a strength of 3 watts per square meter per steradian and it's color is defined by digital RGB color values of 3 wavelenghts: 610nm, 555nm and 465nm. Let's say the color is expressed as RGB(0.499938, 0.367322, 0.220984). Its cone angle is 45 degrees. Let's say it's radius is 0.025m. How many lumens is that? How do I calculate that?
Let's say I want that same light to have a strength of 900 lumens. How many watts per square meter per steradian is that? How do I calculate that?
Answer: The conversion factor you need is the luminous efficacy. You need to know the full spectrum of the light to compute it. Tristimulus values (such as Rec.709 RGB) aren't enough, unless you make additional assumptions. For example, if you happen to know that the light has a blackbody spectrum, then you can work out the temperature from the RGB color and the luminous efficacy from the temperature.
If you assume that the illuminant is a mixture of 610nm, 555nm and 465nm monochromatic lights then you can calculate the luminous efficacy as well, but that assumption is extremely unlikely to be true. The Rec.709 primaries aren't monochromatic lights of those or any other wavelengths, and even if they were, RGB values elsewhere in the cube wouldn't necessarily come from spectra that are linear mixture of those lights.
All of this is irrelevant, though, since Blender shouldn't be using watts to begin with, it should be using lumens. If the rigid-body simulation demands masses in pounds, you shouldn't use the local gravity where the scene is set to do the conversion. You should use whatever fixed conversion factor they're using internally.
Unfortunately for my theory, this page on blender.org has a table of suggested lumen-watt conversions in which the ratio is nowhere close to any fixed value. Either they don't know what they're talking about or I don't.
My advice, for what it's worth, is to use a simple fixed factor like 300 lm/W. As long as you're consistent, the value shouldn't matter since it's equivalent to a change in the overall exposure. | {
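As a hedged illustration of the arithmetic with that fixed-factor approach, here is a sketch using the question's numbers. The 300 lm/W factor, the flat-disc source geometry, and reading "45 degrees" as the full cone angle are all my assumptions, not established facts:

```python
import math

radiance = 3.0                      # W / (m^2 sr), from the question
radius = 0.025                      # m, emitting disc radius
half_angle = math.radians(45 / 2)   # assuming 45 deg is the full cone angle
efficacy = 300.0                    # lm/W, the assumed fixed factor

area = math.pi * radius ** 2                            # emitting area, m^2
solid_angle = 2 * math.pi * (1 - math.cos(half_angle))  # cone solid angle, sr
radiant_flux = radiance * area * solid_angle            # W
luminous_flux = radiant_flux * efficacy                 # lm
print(round(luminous_flux, 3))

# Inverting: radiance needed for a target of 900 lm
target_lm = 900.0
needed_radiance = target_lm / (efficacy * area * solid_angle)
print(round(needed_radiance, 1))
```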
"domain": "physics.stackexchange",
"id": 100414,
"tags": "visible-light"
} |
Calculation in field theory | Question: I am little bit rusty in field theoretical calculations. I am reading the book by Altland Condensed matter field theory 2nd ed. On page 15, he derives the funcional derivative of the action:
$$
S[\phi+\epsilon\theta] - S[\phi] = \int_Md^mx\left(\mathcal{L}(\phi+\epsilon\theta, \partial_\mu\phi+\epsilon\partial_\mu\theta) - \mathcal{L}(\phi,\partial_\mu\phi)\right)\\ \stackrel{?}{=} \int_M d^mx \left[\frac{\partial\mathcal{L}}{\partial\phi^i}\theta^i + \frac{\partial\mathcal{L}}{\partial\partial_\mu\phi^i}\partial_\mu\theta^i \right]\epsilon + \mathcal{O}(\epsilon^2).
$$
Naturally, my question is how to obtain the second equality.
Answer: We're just Taylor expanding $\mathcal L$. Hopefully you agree that
$$
f(x+\delta x, y+\delta y)=f(x,y)+\frac{\partial f}{\partial x}\delta x+\frac{\partial f}{\partial y}\delta y + \ ...
$$
Similarly, we have
$$
\mathcal L(\phi+\epsilon\theta,\dot\phi+\epsilon\dot\theta)=\mathcal{L}(\phi,\dot\phi)+\frac{\partial\mathcal L}{\partial \phi}\epsilon\theta+\frac{\partial\mathcal L}{\partial \dot\phi}\epsilon\dot\theta +\ ...
$$
where I've simply made the substitutions
\begin{align}
f\leftrightarrow\mathcal L\\
x\leftrightarrow \phi\\
y\leftrightarrow \dot\phi\\
\delta x \leftrightarrow \epsilon \theta\\
\delta y \leftrightarrow \epsilon\dot\theta
\end{align}
If you plug in this expression for $\mathcal L(\phi+\epsilon\theta,\dot\phi+\epsilon\dot\theta)$ into your expression for the integral, you get the final result. | {
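A quick numerical sanity check of the expansion, using a sample Lagrangian of my own choosing (not from the book):

```python
# For L(phi, phidot) = phidot**2/2 - phi**2/2, the difference
# L(phi + eps*theta, phidot + eps*thetadot) - L(phi, phidot) should
# match the first-order term up to O(eps^2).
def L(phi, phidot):
    return 0.5 * phidot ** 2 - 0.5 * phi ** 2

phi, phidot = 1.3, 0.7        # field value and derivative at some point
theta, thetadot = 0.4, -0.2   # arbitrary variation
eps = 1e-4

exact = L(phi + eps * theta, phidot + eps * thetadot) - L(phi, phidot)
# dL/dphi = -phi, dL/dphidot = phidot
first_order = (-phi * theta + phidot * thetadot) * eps

print(exact, first_order)  # agree to O(eps^2)
```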
"domain": "physics.stackexchange",
"id": 43197,
"tags": "homework-and-exercises, lagrangian-formalism, field-theory, variational-calculus"
} |
Dynamics of circular motion of the bob in a conical pendulum | Question: The question has asked us to find the velocity and tension in the string whose lenght is L. Let 'r' be the radius of the circular path of the bob. I think the free body diagram looks like this
I want to calculate tension first
In the diagram ,
T = mg·cosθ and
mg = T·cosθ
From the diagram of the entire system,
cosθ = √(L^2 - r^2) / L
So my answer depends on which equation I use. The second one is the correct one. How do I know which equation to use? The answer for velocity is also different for the two equations.
Is there something like I should only resolve one of these two? Since the tension's component is what I need to find the centripetal force, do I not really need to resolve mg?
Answer: Hope this helps
So you have:
$\Sigma F_y$: $T\cos\theta = mg$ (since the mass is stationary in the y-direction)
$\Sigma F_x$: $T\sin\theta = ma = \frac{mv^2}{r}$ (since the sine component is responsible for the centripetal force of the circular motion)
From Pythagoras you know that $\sin\theta = \frac{r}{L}$, so
$T\sin\theta = T\frac{r}{L} = \frac{mv^2}{r}$
By dividing $T\frac{r}{L} = \frac{mv^2}{r}$ by $T\cos\theta = mg$ you get
$\frac{r}{L\cos\theta} = \frac{v^2}{g r}$
By rearranging the above equation:
$v = r\sqrt{\frac{g}{L\cos\theta}}$, which is the tangential speed of the mass.
The angular speed is $\omega = \frac{v}{r}$, so
$\omega = \sqrt{\frac{g}{L\cos\theta}}$
The tension in the string can either be written as
$T = \frac{m v^2 L}{r^2}$
or
$T = \frac{mg}{\cos\theta}$
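Putting those formulas into a short script (the numerical values are my own sample inputs, not from the question):

```python
import math

m = 0.5   # bob mass, kg
L = 1.2   # string length, m
r = 0.4   # radius of the circular path, m
g = 9.81  # m/s^2

cos_theta = math.sqrt(L ** 2 - r ** 2) / L  # from the cone geometry
sin_theta = r / L
v = r * math.sqrt(g / (cos_theta * L))      # tangential speed
omega = math.sqrt(g / (cos_theta * L))      # angular speed
T = m * g / cos_theta                       # string tension

# Consistency check: the horizontal component of T supplies m v^2 / r
print(T * sin_theta, m * v ** 2 / r)  # the two values agree
```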
Hope it helped | {
"domain": "physics.stackexchange",
"id": 45309,
"tags": "homework-and-exercises, newtonian-mechanics"
} |
Python - Predicting the probability of 6 Heads or Tails in a row from a set sample size | Question: I'm wondering whether there is a way to make this even more efficient or reduce the number of variables.
import random
numberOfStreaks = 0
results = []
head_streak = ['H'] * 6
tail_streak = ['T'] * 6
sample_size = 1000000
for i, experimentNumber in enumerate(range(sample_size)):
# Code that creates a list of 100 'heads' or 'tails' values.
results.append(random.choice(('H', 'T')))
# Code that checks if there is a streak of 6 heads or tails in a row.
try:
temp = results[i-5:]
if temp == head_streak or temp == tail_streak:
numberOfStreaks += 1
except:
pass
print('Chance of streak: %s%%' % (numberOfStreaks / sample_size))
Answer: # Code that creates a list of 100 'heads' or 'tails' values.
results.append(random.choice(('H', 'T')))
This comment is severely misleading: the code does not create a list of 100 values, it creates an infinitely growing list that extends up to sample_size values by the time the program terminates.
Independently of the misleading comment, this is a bad idea, and can be avoided by limiting the size of the results list in some way (del results[:-6], or results = results[-6:], I'm not sure which is better). This would also obsolete the temp variable, because the results array would no longer contain extra flips.
try:
temp = results[i-5:]
if temp == head_streak or temp == tail_streak:
numberOfStreaks += 1
except:
pass
Bare except statements are a bad idea. Bare except:pass statements even more so. Among other problems, it means that if you press Ctrl-C while your code is executing that section, the code won't exit.
It's not clear what exception you are trying to catch (results[i-5:] doesn't throw an error if results is less than five items long; it just truncates the list), so I can't suggest a direct replacement, but I would recommend either catching a specific exception, or removing the try-catch entirely.
Python lists natively support negative indexing, so you can simplify results[i-5:] to results[-6:] and remove the i variable entirely. As suggested by the question asker in the comments, this makes the enumerate call unnecessary.
The i variable will then be unused. It's clearer to name variables you don't use as _, so it's easy to tell that they aren't used.
Full code:
import random
numberOfStreaks = 0
results = []
head_streak = ['H'] * 6
tail_streak = ['T'] * 6
sample_size = 1000000
for _ in range(sample_size):
# Code that generates another 'heads' or 'tails' value
results.append(random.choice(('H', 'T')))
# Code that checks if there is a streak of 6 heads or tails in a row.
results = results[-6:]
if results == head_streak or results == tail_streak:
numberOfStreaks += 1
print('Chance of streak: %s%%' % (numberOfStreaks / sample_size)) | {
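As a further variation (my suggestion, beyond the original review), `collections.deque` with `maxlen` caps the window size automatically, so no manual re-slicing is needed:

```python
import random
from collections import deque

random.seed(42)
sample_size = 100_000
streak_len = 6
window = deque(maxlen=streak_len)  # old flips fall off the left end
number_of_streaks = 0

for _ in range(sample_size):
    window.append(random.choice('HT'))
    # a streak means the window is full and holds only one distinct symbol
    if len(window) == streak_len and len(set(window)) == 1:
        number_of_streaks += 1

print(number_of_streaks / sample_size)  # roughly 1/32 for fair flips
```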
"domain": "codereview.stackexchange",
"id": 38632,
"tags": "python, python-3.x, random"
} |
Is the specific heat of salt the same as Na + CL when dissolved? | Question: I had this question when my friends were discussing how salt influences boiling time. When crystal salt is dissolved into water, it turns into a single Na and a single Cl atom. If knew the total heat capacity of the salt I want to add to my water, could I be confident, that when dissolved, total heat capacity I added to the water is equal to the undissolved salt's heat capacity?
On one hand my intuition tells me that heat capacity should remain the same. Salt is salt.
On the other, doesn't the salt lose a rotational vibration mode when is dissolves into its two monatomic elements?
I am not trained in chemistry, so sorry if this is a naive question.
Thanks!
Answer: Salt does not dissolve to $\ce{Na}$ and $\ce{Cl}$ atoms, but to hydrated $\ce{Na+}$ and $\ce{Cl-}$ ions, so its heat capacity is very different from that of solid $\ce{NaCl}$.
And yes, it loses the vibration modes of the crystal lattice (not rotational modes, as there is nothing to rotate in the lattice).
There are also vibrations of the coordination bonds to water. Dissolved ions also have translational degrees of freedom, which are missing in solids.
Even for the identical substance, its heat capacity in the solid and liquid state differs. E.g. liquid water has about 2x the heat capacity of ice (4.2 versus 2.1 J/g/K).
"domain": "chemistry.stackexchange",
"id": 13649,
"tags": "aqueous-solution"
} |
Energy in a Solenoid? | Question:
Consider a circuit consisting of a battery, a resistor and a solenoid inductor. Then, the emf $\mathcal{E}$, is defined as:
$$\mathcal{E} = L\frac{di}{dt} + iR$$
Multiplying both sides by $i$ gives:
$$\mathcal{E}i = Li\frac{di}{dt} + i^2R$$
The term on the left side gives the rate at which the battery does work. Since the second term on the right side gives the rate at which energy appears as thermal energy in the resistor, the second term gives the rate at which magnetic potential energy is stored in the magnetic field.
Therefore $$\frac{dU_B}{dt} = Li\frac{di}{dt}$$
$$\int^{U_B}_{0} dU_B = \int^i_0 Li\text{ }di$$
$$U_B = \frac{1}{2}Li^2$$
Q1) I'm assuming they're finding the energy in the steady state. I thought the current was constant in the steady state, so shouldn't $\frac{di}{dt}$ be zero?
Q2) Why isn't the emf:
$$\mathcal{E} = -L\frac{di}{dt} + iR$$
Since the self-induced emf generated by an inductor tries to oppose the flow of current, shouldn't the emf be the opposite way?
Q3)The bounds of the integral: $U_B$ and $i$. How are they related? Are they the energy and current at the same point in time $t$? Or is $U_B$ the energy at any point in time and $i$ the current at some other point in time (not necessarily the same times)?
Answer: 1) With a constant and DC power source eventually the solenoid will become fully 'charged'. At that point its 'resistance' term vanishes because it no longer produces an emf against the battery. At this point, the $\frac{di}{dt}$ term will be zero, because the current isn't changing.
2) When you cut power, the magnetic flux is no longer maintained by the current. However, the flux will try to stay constant, so that means the current will continue as it did before, powered by the magnetic field of the solenoid.
3) $U_B$ is simply all the $dU_B$ added up over time. You'd have to integrate until $t =\infty$, aka until the current behaves as if there's no solenoid at all (the integral at the righthand side). | {
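A numerical illustration of point 3 (the component values are my own samples): integrating $Li\,\frac{di}{dt}$ over the charging transient of an RL circuit reproduces $\frac{1}{2}Li^2$ once the current has settled.

```python
import math

E, R, L = 12.0, 4.0, 0.5  # battery emf (V), resistance (ohm), inductance (H)

def current(t):
    # standard charging curve of an RL circuit: i(t) = (E/R)(1 - e^{-Rt/L})
    return E / R * (1 - math.exp(-R * t / L))

dt = 1e-4
t_end = 1.5  # many time constants (tau = L/R = 0.125 s), i.e. t -> "infinity"
U = 0.0
for k in range(int(t_end / dt)):
    t = k * dt
    i = current(t)
    didt = (current(t + dt) - i) / dt
    U += L * i * didt * dt  # accumulate dU_B = L i (di/dt) dt

i_final = current(t_end)
print(U, 0.5 * L * i_final ** 2)  # the two agree closely
```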
"domain": "physics.stackexchange",
"id": 12305,
"tags": "electromagnetism, electricity, electric-circuits, induction, inductance"
} |
Couldn't find file menu in Gazebo | Question:
My OS is ubuntu 14.04, ROS indigo, the version of Gazebo is 2.2.3 and I follow the tutorials in http://learn.turtlebot.com/.
When I learned to edit map in Gazebo, I couldn't find the file menu.
How can I get file menu back or is there other ways to save map?
Originally posted by ROS_learner on ROS Answers with karma: 16 on 2016-04-26
Post score: 0
Answer:
I finally found it.
As noted in http://gazebosim.org/tutorials?cat=guided_b&tut=guided_b2: "Some Linux desktops hide application menus. If you don't see the menus, move your cursor to the top of the application window, and the menus should appear."
Finally, I maximized the Gazebo window, put the cursor at the top, and GOT IT!
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 24472,
"tags": "ros"
} |
MySQL CLI: Create a MySQL user & DB | Question: Please review my Bash code to create an authenticated, all privileged MySQL user plus a DB with the same name, through the MySQL CLI:
read -sp "DB user password:" dbrootp_1 && echo
read -sp "DB user password again:" dbrootp_2 && echo
read -sp "DB user password:" dbuserp_1 && echo
read -sp "DB user password again:" dbuserp_2 && echo
if [ "$dbrootp_1" != "$dbrootp_2" ]; then echo "Values unmatched" && exit 1; fi
if [ "$dbuserp_1" != "$dbuserp_2" ]; then echo "Values unmatched" && exit 1; fi
cat <<-DBSTACK | mysql -u root -p"$dbrootp"
CREATE USER "$domain"@"localhost" IDENTIFIED BY "$dbuserp";
CREATE DATABASE "$domain";
GRANT ALL PRIVILEGES ON "$domain".* TO "$domain"@"localhost";
DBSTACK
Answer: Just executing the code, I'd receive 4 different inputs like this:
DB user password:
DB user password again:
DB user password:
DB user password again:
which would make me think that something went wrong the first time, and I'd input the same password again. You are not making it obvious that the user should input the root password first.
echo "Enter root user credentials"
read -sp "root user password:" dbrootp_1 && echo
read -sp "root user password again:" dbrootp_2 && echo
echo "Enter new user credentials"
read -sp "new user password:" dbuserp_1 && echo
read -sp "new user password again:" dbuserp_2 && echo
makes it slightly better.
if [ "$dbrootp_1" != "$dbrootp_2" ]; then echo "Values unmatched" && exit 1 fi
if [ "$dbuserp_1" != "$dbuserp_2" ]; then echo "Values unmatched" && exit 1 fi
You may wish to convey exactly which pair of passwords was mismatched. It is more of a UX debate though. You might not want to do that, as it might cause a security vulnerability! However, look at
function error_quit() {
local EXITCODE=$1
local MESSAGE="$2"
>&2 echo "$MESSAGE"
exit $EXITCODE
}
[[ "$dbrootp_1" != "$dbrootp_2" ]] && { error_quit 1 "Values unmatched"; }
[[ "$dbuserp_1" != "$dbuserp_2" ]] && { error_quit 1 "Values unmatched"; }
Splitting into a function gives you an added benefit so that in future you might expand the script, and use the same minimal exit setup instead of copying it everywhere.
The >&2 directs the error to stderr stream. | {
"domain": "codereview.stackexchange",
"id": 29477,
"tags": "mysql, bash, console"
} |
Header file points to wrong package? | Question:
Hi,
we have our own costmap package but didn't change the name of it.
So we have two costmap packages with the same name.
The problem is when opening with QTCreator, the include in costmap.cpp points me to the original installed header file.
But compiling is NO problem, although it shouldn't compile when really using the original header file!
So QT has something wrong here.
Next thing is that when we want to compile a package that uses the costmap package, it fails because of undefined references in the costmap package. So now, the header file is really pointing to the original header file and not to ours.
Any hints on what we can do to point to the correct file?
A roscd costmap points me to our package.
Originally posted by madmax on ROS Answers with karma: 496 on 2014-03-03
Post score: 0
Answer:
You're not really concrete about what exactly you have set up and how/where you are getting which errors.
So, here is what you should have done:
Ignore QTCreator (for now). It's just some GUI. If the setup is right for ROS, qt creator should be made to pick that up later on.
Assuming your costmap is an overlay of the ROS one, its dependencies should use yours. You can check that this is what actually happens during compile and link. roscd costmap being yours suggests this is correct. Whatever goes wrong you have to review explicitly. If you have a "compatible" costmap as your overlay that should work.
Originally posted by dornhege with karma: 31395 on 2014-03-03
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 17143,
"tags": "catkin"
} |
Why is angular frequency $\omega = \sqrt{\frac{k}{m}}$ dimensionally correct? | Question: So I'm learning about simple harmonic motion, and I came to the part where the differential equation
$$\frac{\mathrm d^2x}{\mathrm dt^2} = -\frac{k}{m} x$$
is solved and simplified to
$$x(t) = A\cos(\omega t - \phi)$$
So here, I don't get why the angular frequency equals the following value
$$\omega = \sqrt{\frac{k}{m}}$$
I tried to see if there is any evident reasoning for why this is dimensionally correct (especially with the square root). I already searched different posts here on Physics where it's explained, but the maths behind them is too complicated for me, and they also didn't answer why this is dimensionally correct.
Answer: Seeing that it is dimensionally correct should be easy. Just plug the units and check that they match.
$F=kx$, so $k$ must be newtons/meters. Remember that newtons are $kg\ m/s^2$. So, check that
$$ \frac{k}{m}=\frac{N/m}{kg}=\frac{kg/s^2}{kg}=\frac{1}{s^2}=s^{-2}$$
So the square root indeed has units of $s^{-1}$, which is angular frequency. Dimensionally, it fully makes sense.
If you ask why this is so, well, do not try to relate it to angular frequency too soon. You just notice that the value $\sqrt{\frac{k}{m}}$ will appear very often, so you decide to give it a name. Let's call it $\omega$. You could call it by any other letter, but later on you will see that it is closely related to an angular velocity, so it is a good name. Just that.
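A quick numerical check that $\omega=\sqrt{k/m}$ really carries units of $s^{-1}$ as an angular frequency: integrating $m\ddot{x}=-kx$ for one period $T=2\pi\sqrt{m/k}$ should return the oscillator to its starting point (the sample values are mine):

```python
import math

m, k = 2.0, 8.0
omega = math.sqrt(k / m)     # 2.0 rad/s; the units work out to 1/s
period = 2 * math.pi / omega

x, v = 1.0, 0.0  # release from amplitude A = 1 at rest
dt = 1e-4
t = 0.0
while t < period:
    v += (-k / m * x) * dt  # semi-implicit (symplectic) Euler step
    x += v * dt
    t += dt

print(x)  # back near the starting amplitude 1.0
```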
"domain": "physics.stackexchange",
"id": 54657,
"tags": "classical-mechanics, harmonic-oscillator, frequency, dimensional-analysis, oscillators"
} |
VBA Function Reads Active Live Data Feed and Translates Product Codes | Question: The following Function is one of the main analysis functions of a larger subroutine. The subroutine is responsible for translating codes that look like this "1|G|XNYM:O:LO:201611:P:44:+1/XNYM:O:LO:201611:C:51:+1" to something similar to this "LIVE WTI American X16 44.00/51.00 Strangle".
There are multiple scenarios and this is just one example, but I really wish to refactor this entire function to make it much more streamlined and clean. I struggled to combine conditionals and there are a lot of repetitions. How should I clean this up? The overall subroutine this is a part of is triggered by a worksheet change event, which can occur every few seconds; efficiency is absolutely key. Below the function I will post the various support functions this function makes calls to, so it's clear what is going on.
Main Function
Public Function TwoLegStructureAnalysis(ByVal tradeStructure As String, ByVal liveOptionBool As Boolean) As String
'Trades with two legs analysis (two leg including hedged trades)
Dim tradeLegStructureArray() As String, hedgeSplitArray() As String, firstOptionLegArray() As String, secondOptionLegArray() As String
Dim assemblyString As String
Dim sameStrikeBool As Boolean
tradeLegStructureArray() = Split(tradeStructure, "/")
If UCase(Mid(tradeLegStructureArray(0), 6, 1)) = "O" And UCase(Mid(tradeLegStructureArray(1), 6, 1)) = "F" Then
'Hedged single Option trades
'Bifurcates the hedge by colon to split out delta and future
hedgeSplitArray() = Split(tradeLegStructureArray(1), ":")
assemblyString = GetOptionCodes(Mid(tradeLegStructureArray(0), 8, 2)) & " " & TranslateExpirationDate(Mid(tradeLegStructureArray(0), 11, 6)) _
& " " & Format(GetOptionStrike(tradeLegStructureArray(0), liveOptionBool), "##0.00") & " " & GetCallOrPut(Mid(tradeLegStructureArray(0), 18, 1)) & " x" & Format(hedgeSplitArray(UBound(hedgeSplitArray)), "##0.00") _
& " | " & Abs((hedgeSplitArray(UBound(hedgeSplitArray) - 1) * 100)) & "d"
ElseIf UCase(Mid(tradeLegStructureArray(0), 6, 1)) = "O" And UCase(Mid(tradeLegStructureArray(1), 6, 1)) = "O" Then
'Two leg LIVE structures
firstOptionLegArray() = Split(tradeLegStructureArray(0), ":")
secondOptionLegArray() = Split(tradeLegStructureArray(1), ":")
'different two leg structures
If firstOptionLegArray(4) = secondOptionLegArray(4) Then
'Call Spreads/Put Spreads
assemblyString = "LIVE " & GetOptionCodes(Mid(tradeLegStructureArray(0), 8, 2))
'Same expirations
If firstOptionLegArray(3) = secondOptionLegArray(3) Then
Select Case Val(firstOptionLegArray(UBound(firstOptionLegArray))) + Val(secondOptionLegArray(UBound(secondOptionLegArray)))
Case 0
'No ratio
assemblyString = assemblyString & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
Format(secondOptionLegArray(5), "##0.00")
Case Else
assemblyString = assemblyString & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
Format(secondOptionLegArray(5), "##0.00") & " " & Abs(firstOptionLegArray(UBound(firstOptionLegArray))) & "x" & Abs(secondOptionLegArray(UBound(secondOptionLegArray)))
End Select
ElseIf firstOptionLegArray(3) <> secondOptionLegArray(3) Then
'Horizontal
Select Case Val(firstOptionLegArray(UBound(firstOptionLegArray))) + Val(secondOptionLegArray(UBound(secondOptionLegArray)))
Case 0
'again no ratio
assemblyString = assemblyString & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
TranslateExpirationDate(secondOptionLegArray(3)) & " " & Format(secondOptionLegArray(5), "##0.00")
Case Else
'Ratios
assemblyString = assemblyString & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
TranslateExpirationDate(secondOptionLegArray(3)) & " " & Format(secondOptionLegArray(5), "##0.00") & " " & Abs(firstOptionLegArray(UBound(firstOptionLegArray))) & "x" & _
Abs(secondOptionLegArray(UBound(secondOptionLegArray)))
End Select
End If
'Determines callspread or Put Spread
If GetCallOrPut(firstOptionLegArray(4)) = "Call" Then assemblyString = assemblyString & " CS" Else assemblyString = assemblyString & " PS"
'''''''''''''''
ElseIf firstOptionLegArray(4) <> secondOptionLegArray(4) Then
'Straddle/Strangle/Fence
'Same expirations
If firstOptionLegArray(3) = secondOptionLegArray(3) Then
If Val(firstOptionLegArray(UBound(firstOptionLegArray))) + Val(secondOptionLegArray(UBound(secondOptionLegArray))) = 0 Or _
Val(firstOptionLegArray(UBound(firstOptionLegArray))) + Val(secondOptionLegArray(UBound(secondOptionLegArray))) <= -1 Then
'fences
Select Case Val(firstOptionLegArray(UBound(firstOptionLegArray))) + Val(secondOptionLegArray(UBound(secondOptionLegArray)))
Case 0
'No ratio
assemblyString = "LIVE " & GetOptionCodes(Mid(tradeLegStructureArray(0), 8, 2)) & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
Format(secondOptionLegArray(5), "##0.00") & " Fence"
Case -1 To -10
'Ratio
assemblyString = "LIVE " & GetOptionCodes(Mid(tradeLegStructureArray(0), 8, 2)) & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
Format(secondOptionLegArray(5), "##0.00") & " " & Abs(firstOptionLegArray(UBound(firstOptionLegArray))) & "x" & Abs(secondOptionLegArray(UBound(secondOptionLegArray))) & " Fence"
End Select
ElseIf Val(firstOptionLegArray(UBound(firstOptionLegArray))) = Val(secondOptionLegArray(UBound(secondOptionLegArray))) Or _
Val(firstOptionLegArray(UBound(firstOptionLegArray))) + Val(secondOptionLegArray(UBound(secondOptionLegArray))) >= 3 Then
'No ratio straddle/strangle
'Same strike straddle/differentstrike strangle
If firstOptionLegArray(5) = secondOptionLegArray(5) Then
assemblyString = GetOptionCodes(Mid(tradeLegStructureArray(0), 8, 2)) & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & " Straddle"
ElseIf firstOptionLegArray(5) <> secondOptionLegArray(5) Then
Select Case Val(firstOptionLegArray(UBound(firstOptionLegArray))) + Val(secondOptionLegArray(UBound(secondOptionLegArray)))
Case 2
assemblyString = "LIVE " & GetOptionCodes(Mid(tradeLegStructureArray(0), 8, 2)) & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
Format(secondOptionLegArray(5), "##0.00") & " Strangle"
Case 3 To 10
assemblyString = "LIVE " & GetOptionCodes(Mid(tradeLegStructureArray(0), 8, 2)) & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
Format(secondOptionLegArray(5), "##0.00") & " " & Abs(firstOptionLegArray(UBound(firstOptionLegArray))) & "x" & Abs(secondOptionLegArray(UBound(secondOptionLegArray))) & " Strangle"
End Select
End If
End If
'Horizontal/Different Expirations
ElseIf firstOptionLegArray(3) <> secondOptionLegArray(3) Then
If Val(firstOptionLegArray(UBound(firstOptionLegArray))) + Val(secondOptionLegArray(UBound(secondOptionLegArray))) = 0 Or _
Val(firstOptionLegArray(UBound(firstOptionLegArray))) + Val(secondOptionLegArray(UBound(secondOptionLegArray))) <= -1 Then
'fences
Select Case Val(firstOptionLegArray(UBound(firstOptionLegArray))) + Val(secondOptionLegArray(UBound(secondOptionLegArray)))
Case 0
'No ratio
assemblyString = "LIVE " & GetOptionCodes(Mid(tradeLegStructureArray(0), 8, 2)) & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
TranslateExpirationDate(secondOptionLegArray(3)) & " " & Format(secondOptionLegArray(5), "##0.00") & " Fence"
Case -1 To -10
'Ratio
assemblyString = "LIVE " & GetOptionCodes(Mid(tradeLegStructureArray(0), 8, 2)) & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
TranslateExpirationDate(secondOptionLegArray(3)) & " " & Format(secondOptionLegArray(5), "##0.00") & " " & Abs(firstOptionLegArray(UBound(firstOptionLegArray))) & "x" & Abs(secondOptionLegArray(UBound(secondOptionLegArray))) & " Fence"
End Select
ElseIf Val(firstOptionLegArray(UBound(firstOptionLegArray))) = Val(secondOptionLegArray(UBound(secondOptionLegArray))) Or _
Val(firstOptionLegArray(UBound(firstOptionLegArray))) + Val(secondOptionLegArray(UBound(secondOptionLegArray))) >= 3 Then
'strangle
If firstOptionLegArray(5) <> secondOptionLegArray(5) Then
Select Case Val(firstOptionLegArray(UBound(firstOptionLegArray))) + Val(secondOptionLegArray(UBound(secondOptionLegArray)))
Case 2
assemblyString = "LIVE " & GetOptionCodes(Mid(tradeLegStructureArray(0), 8, 2)) & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
TranslateExpirationDate(secondOptionLegArray(3)) & " " & Format(secondOptionLegArray(5), "##0.00") & " Strangle"
Case 3 To 10
assemblyString = "LIVE " & GetOptionCodes(Mid(tradeLegStructureArray(0), 8, 2)) & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
TranslateExpirationDate(secondOptionLegArray(3)) & " " & Format(secondOptionLegArray(5), "##0.00") & " " & Abs(firstOptionLegArray(UBound(firstOptionLegArray))) & "x" & Abs(secondOptionLegArray(UBound(secondOptionLegArray))) & " Strangle"
End Select
End If
End If
End If
End If
Else
assemblyString = "Nothing"
End If
TwoLegStructureAnalysis = assemblyString
End Function
Support Functions
Public Function GetOptionCodes(ByVal optionType As String) As String
Select Case UCase(optionType)
Case "LO"
GetOptionCodes = "WTI American"
Case "OH"
GetOptionCodes = "HO American"
Case "OB"
GetOptionCodes = "RB American"
Case "LN"
GetOptionCodes = "NG European"
End Select
End Function
Public Function TranslateExpirationDate(ByVal expirationDate As Double) As String
Select Case CInt(Right(expirationDate, 2))
Case 1
TranslateExpirationDate = "F" & Mid(expirationDate, 3, 2)
Case 2
TranslateExpirationDate = "G" & Mid(expirationDate, 3, 2)
Case 3
TranslateExpirationDate = "H" & Mid(expirationDate, 3, 2)
Case 4
TranslateExpirationDate = "J" & Mid(expirationDate, 3, 2)
Case 5
TranslateExpirationDate = "K" & Mid(expirationDate, 3, 2)
Case 6
TranslateExpirationDate = "M" & Mid(expirationDate, 3, 2)
Case 7
TranslateExpirationDate = "N" & Mid(expirationDate, 3, 2)
Case 8
TranslateExpirationDate = "Q" & Mid(expirationDate, 3, 2)
Case 9
TranslateExpirationDate = "U" & Mid(expirationDate, 3, 2)
Case 10
TranslateExpirationDate = "V" & Mid(expirationDate, 3, 2)
Case 11
TranslateExpirationDate = "X" & Mid(expirationDate, 3, 2)
Case 12
TranslateExpirationDate = "Z" & Mid(expirationDate, 3, 2)
End Select
End Function
Public Function GetCallOrPut(ByVal legOption As String) As String
'Translates C to Call and P to Put in option Structure
If legOption = "C" Then
GetCallOrPut = "Call"
ElseIf legOption = "P" Then
GetCallOrPut = "Put"
End If
End Function
Public Function GetOptionStrike(ByVal tradeStructure As String, ByVal liveOptionBool As Boolean) As Double
'Finds option strike within structure Code and separates it out. Split
Dim structureArray() As String
structureArray() = Split(tradeStructure, ":", , vbTextCompare)
Select Case liveOptionBool
Case True
GetOptionStrike = structureArray(UBound(structureArray))
Case False
GetOptionStrike = structureArray(UBound(structureArray) - 1)
End Select
End Function
Public Function CountTradeLegSeparators(ByVal tradeStructure) As Integer
Dim findChar As String, replaceChar As String
findChar = "/"
replaceChar = ""
CountTradeLegSeparators = Len(tradeStructure) - Len(Replace(tradeStructure, findChar, replaceChar))
End Function
Answer: The TranslateExpirationDate function looks like it could use a little map - a simple Static array that gets initialized the first time the function is called:
Public Function TranslateExpirationDate(ByVal expirationDate As Double) As String
Static map(1 To 12) As String
If map(1) = vbNullString Then
map(1) = "F"
map(2) = "G"
map(3) = "H"
map(4) = "J"
map(5) = "K"
map(6) = "M"
map(7) = "N"
map(8) = "Q"
map(9) = "U"
map(10) = "V"
map(11) = "X"
map(12) = "Z"
End If
Dim integerPart As Integer
integerPart = CInt(Right$(expirationDate, 2))
TranslateExpirationDate = map(integerPart) & Mid$(expirationDate, 3, 2)
End Function
And then if you later need to map 42 to "W", all you need to add is map(42) = "W" and you're done - no need for a new Case block, no need to copy+paste anything.
Ditto with GetOptionCodes:
Public Function GetOptionCodes(ByVal optionType As String) As String
Static map As Collection
If map Is Nothing Then
Set map = New Collection
map.Add "WTI American", "LO"
map.Add "HO American", "OH"
map.Add "RB American", "OB"
map.Add "NG European", "LN"
End If
GetOptionCodes = map(optionType)
End Function
Now these lookups are \$O(1)\$ (instant) instead of \$O(n)\$ (worst-case you need to evaluate every Case block to get your value) and as a bonus, you get stronger validation: if optionType isn't mapped, a runtime error occurs. In TranslateExpirationDate, if the integerPart is out of range, an index out of bounds runtime error occurs. The calling code should handle that.
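The same lookup-table idea, sketched in Python for readers outside VBA (the month and option codes are taken from the code above; the function name is invented for illustration):

```python
# Futures month codes and option codes, mirroring the VBA maps above
MONTH_CODES = {1: "F", 2: "G", 3: "H", 4: "J", 5: "K", 6: "M",
               7: "N", 8: "Q", 9: "U", 10: "V", 11: "X", 12: "Z"}
OPTION_CODES = {"LO": "WTI American", "OH": "HO American",
                "OB": "RB American", "LN": "NG European"}

def translate_expiration(month, year_digits):
    # An unmapped month raises KeyError, playing the same role as the
    # VBA runtime error: invalid input blows up instead of producing trash.
    return MONTH_CODES[month] + year_digits

print(translate_expiration(6, "13"))  # M13
print(OPTION_CODES["LO"])             # WTI American
```

Both lookups are constant-time, and extending them is a one-line change, just like adding map(42) = "W" in the VBA version.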
I'd do similar for GetCallOrPut, so as to make sure something blows up in case of invalid input: it's better to blow up than keep running and produce trash output!
I've only glanced at the main procedure; at a glance, it seems like it's doing quite a lot of things - consider extracting each block into its own dedicated, more specialized function.
You'll want to use the strongly-typed string functions here (e.g. prefer Mid$ over Mid; Left$ over Left, Right$ over Right, UCase$ over UCase... see the whole list here), because the versions without the $ return a Variant that needs to be implicitly converted - if you're after performance, use the strongly-typed ones. Or stringly-typed, rather.
Then, divide & conquer: extract functions, one by one, each more and more specialized - and then you'll be looking at things like taking this:
Select Case Val(firstOptionLegArray(UBound(firstOptionLegArray))) + Val(secondOptionLegArray(UBound(secondOptionLegArray)))
Case 2
assemblyString = "LIVE " & GetOptionCodes(Mid(tradeLegStructureArray(0), 8, 2)) & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
TranslateExpirationDate(secondOptionLegArray(3)) & " " & Format(secondOptionLegArray(5), "##0.00") & " Strangle"
Case 3 To 10
assemblyString = "LIVE " & GetOptionCodes(Mid(tradeLegStructureArray(0), 8, 2)) & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
TranslateExpirationDate(secondOptionLegArray(3)) & " " & Format(secondOptionLegArray(5), "##0.00") & " " & Abs(firstOptionLegArray(UBound(firstOptionLegArray))) & "x" & Abs(secondOptionLegArray(UBound(secondOptionLegArray))) & " Strangle"
End Select
And turning it into this:
Dim result As String
result = "LIVE " & GetOptionCodes(Mid(tradeLegStructureArray(0), 8, 2)) & " " & TranslateExpirationDate(firstOptionLegArray(3)) & " " & Format(firstOptionLegArray(5), "##0.00") & "/" & _
TranslateExpirationDate(secondOptionLegArray(3)) & " " & Format(secondOptionLegArray(5), "##0.00")
Select Case Val(firstOptionLegArray(UBound(firstOptionLegArray))) + Val(secondOptionLegArray(UBound(secondOptionLegArray)))
Case 3 To 10
result = result & Abs(firstOptionLegArray(UBound(firstOptionLegArray))) & "x" & Abs(secondOptionLegArray(UBound(secondOptionLegArray)))
End Select
result = result & " Strangle"
And then all that's missing is a little cleanup to reduce horizontal scrolling, and perhaps introduce a number of local variables to further reduce redundant function calls, and improve readability a bit more. | {
"domain": "codereview.stackexchange",
"id": 22236,
"tags": "vba, excel"
} |
Python replace values in list | Question: I often write code like this:
newarguments = []
for argument in arguments:
if parser.is_label(argument):
argument = str(labels[argument])
newarguments.append(argument)
but this feels really unpythonic. Is there a better way to loop over a list and replace certain values?
Answer: A common way to collapse a list "mapping" code block like this is to use a list comprehension combined with a ternary conditional operator:
newarguments = [str(labels[argument]) if parser.is_label(argument) else argument
for argument in arguments]
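As a runnable sketch of that comprehension - `Parser.is_label` and `labels` are hypothetical stand-ins invented here, since the question doesn't show their implementations:

```python
# Hypothetical stand-ins for the question's parser and label table
class Parser:
    def is_label(self, s):
        return s.startswith(":")

parser = Parser()
labels = {":loop": 4, ":end": 9}
arguments = ["mov", ":loop", "add", ":end"]

# Conditional expression inside a list comprehension
newarguments = [str(labels[argument]) if parser.is_label(argument) else argument
                for argument in arguments]
print(newarguments)  # ['mov', '4', 'add', '9']
```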
List comprehensions are also generally faster (reference). | {
"domain": "codereview.stackexchange",
"id": 26697,
"tags": "python"
} |
Sending email notifications, each to a different set of subscribers | Question: Here is a code which sends email notification on new comments.
<?php
namespace Rib\Src\Apps\ContentNotification\ContentNotificationControllers;
use Rib\Src\Apps\ContentNotification\Email\NewContentNotificationEmail;
use Rib\Src\Services\DbSql;
use Rib\Src\Services\Email;
class JobSendNotificationsCron
{
public function sendEmails()
{
$db = ( new DbSql() )->db()->getConnection();
# Collect jobs to process
$jobsToProcess = $db->table( 'content_notifications_jobs' )
->where( 'status', '=', 'active' )
->orderBy( 'last_mailing_time', 'asc' )
->limit( 5 )
->get()->toArray() ?? null;
# Treat each job individually....
foreach ( $jobsToProcess as $job ) {
# Get all users that subscribed to be notified about new content in a particular job
$usersToNotify = $db->table( 'content_notifications_subscribers' )
->where( 'content_id', '=', $job->content_id )
->get()->toArray() ?? null;
# ... and send notification to each user subscribed to the job
foreach ( $usersToNotify as $user ) {
# Send email confirmation email
Email::getMailer()->send( NewContentNotificationEmail::setContent( $job, $user ) );
}
# Update job table. Set job to inactive so it is not treated if no new answer was posted.
# And set last_mailing_time to also skip jobs that had a mailing before delay was expired.
$db->table( 'content_notifications_jobs' )->update(
[
'status' => 'inactive',
'last_mailing_time' => time()
]
);
}
}
}
I don't like this code because there are SQL calls in a loop.
I'm thinking of refactoring this code. What would be the best way to refactor it by moving the SQL out of the loop?
Answer: Is there any reason for this to be a concrete class/method?
You only seem to be considering the happy path.
What happens if a DB connection failure occurs? Have you considered passing the DB connection/object as a dependency to this class or method such that this class/method is guaranteed to get a connection in a proper state?
Have you considered using a join to get notification/subscriber information in a single query and not have to query in loops?
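To make the join suggestion concrete, here is a small sqlite3 sketch; the schema below is a guess based on the table and column names in the question, not the real one:

```python
import sqlite3

# Hypothetical in-memory schema mirroring the question's table names
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE content_notifications_jobs (content_id INTEGER, status TEXT);
CREATE TABLE content_notifications_subscribers (content_id INTEGER, email TEXT);
INSERT INTO content_notifications_jobs VALUES (1, 'active'), (2, 'active');
INSERT INTO content_notifications_subscribers VALUES
    (1, 'a@x.test'), (1, 'b@x.test'), (2, 'c@x.test');
""")

# One query instead of one subscriber query per job
rows = db.execute("""
    SELECT j.content_id, s.email
    FROM content_notifications_jobs j
    JOIN content_notifications_subscribers s ON s.content_id = j.content_id
    WHERE j.status = 'active'
    ORDER BY j.content_id, s.email
""").fetchall()
print(rows)  # [(1, 'a@x.test'), (1, 'b@x.test'), (2, 'c@x.test')]
```

The mailing loop then iterates over `rows` once, with no queries inside it.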
Email::getMailer()->send(...)
Do you really need to instantiate mailer here every time? Should mailer be a passed dependency?
Do you not need a unique key for performing your update? Right now it looks like you are updating every record on the table.
Do you really want to be updating unix timestamps into the table as opposed to using appropriate MySQL datetime or timestamp fields? | {
"domain": "codereview.stackexchange",
"id": 25109,
"tags": "php, mysql, email, eloquent"
} |
why the number of filter coefficients in FIR filter has to be an odd number? | Question: I calculated the order of the FIR filter to be 31 (N). So, the number of coefficients has to be 32 (N+1). So, I have to increase it to 33 to make it an odd number.
Why the number of filter coefficients is required to be an odd number?
Answer: I think it is about having a linear phase. Having a linear phase is often what you want because it means the delay introduced by the filter will be the same across all frequencies. Then, if you want a linear-phase filter, you need a symmetrical arrangement of coefficients around the centre coefficient. As for why you have to have a symmetrical arrangement of coefficients around the centre to be of linear phase, I need to verify something, then I will update the answer. :)
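The symmetry-implies-linear-phase claim is easy to check numerically: for a symmetric impulse response, multiplying the frequency response by $e^{j\omega(N-1)/2}$ should leave a purely real number (a constant group delay of $(N-1)/2$ samples). A stdlib-only Python sketch:

```python
import cmath

# A symmetric FIR impulse response with an odd number of taps (N = 7)
h = [1, 2, 3, 4, 3, 2, 1]
N = len(h)
assert h == h[::-1]  # symmetric about the centre coefficient

def H(w):
    # DTFT of h at angular frequency w (radians/sample)
    return sum(h[n] * cmath.exp(-1j * w * n) for n in range(N))

# Linear phase means H(w) = exp(-1j*w*(N-1)/2) * A(w) with A(w) real,
# so removing the linear phase term should leave a purely real number.
for w in [0.1, 0.5, 1.0, 2.0]:
    residual = H(w) * cmath.exp(1j * w * (N - 1) / 2)
    assert abs(residual.imag) < 1e-9
print("constant group delay of", (N - 1) / 2, "samples")  # 3.0 samples
```

Note that a symmetric filter with an even tap count is still linear-phase, but its delay $(N-1)/2$ is then a half-sample; an odd, symmetric tap count gives an integer-sample delay, which is often why an odd length is requested.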
Edit: Jason R got the answer while I was looking for it! +1 | {
"domain": "dsp.stackexchange",
"id": 2136,
"tags": "filter-design"
} |
Questions about the Dynamic Window Approach in ROS | Question:
Hi ROS community! Some questions:
Is the use of the Base_Local_Planner with dwa=true deprecated? I've read that the dwa_local_planner is more efficient.
What is the difference between TR and DWA in the base_local_planner? I understand the differences explained in the Wiki; however, looking at trajectory_planner.cpp, I don't see them.
The only difference I see in the source code is that each choice defines the dynamic window velocity limits using sim_period or sim_time (which are both customizable parameters).
I suspect dwa=true also works as the TR algorithm (with non-circular trajectories) but I would like a second opinion.
The original DWA method is a goal-directed reactive method, while this implementation uses the path_cost (based on the MapGrid class) to evaluate the cost of each local trajectory. Is this method documented in any known paper or it is considered a minor modification of the method made by WG?
Thanks!
Originally posted by Pablo Iñigo Blasco on ROS Answers with karma: 2982 on 2011-05-23
Post score: 3
Answer:
OK, you're actually right here because dt is being computed based on the simulation granularity, and not the controller_frequency. This is important for safety, but in some cases, you're right that it would lead to a very short TR-style simulation before the desired velocity is reached. You're also right that the dwa_local_planner doesn't do this TR simulation up to the desired speed, though to be more correct in terms of simulating what the robot will actually do based on acceleration limits, it probably should. However, I doubt that this short TR simulation, or lack thereof in the dwa_local_planner, has any noticeable effect on performance. If a change were to be made, it'd probably be most proper to have the dwa_local_planner use the acceleration limits when forward simulating to ramp up to its desired speed.
Originally posted by eitan with karma: 2743 on 2011-05-25
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 5639,
"tags": "ros, navigation, base-local-planner, dwa-local-planner"
} |
Implementation of a type-safe generic dynamic array in C | Question: I'm new to C and was trying to write a generic dynamic array which is type safe. I'm not sure if I pulled it off in the best way possible though.
dynarray.h:
#ifndef h_DynArray
#define h_DynArray
#define DYNAMIC_ARR_SIZE 10 // Default size for dynamic arrays
#define DYNAMIC_ARR_GROWTHRATE 2 //Growthrate for dynamic arrays
#define DYNAMIC_ARR_FREE_ON_ERROR 1
#define DYNAMIC_ARR_KEEP_ON_ERROR 0
struct DynArray_Options
{
char freeOnError; // If set, will free the contents of the array when an error is caught, otherwise the contents remain
} DynArray_Options;
struct $DynArray
{
size_t size; // Number of elements in array
size_t capacity; // Capacity of array
unsigned char* data; // Pointer to data
struct DynArray_Options options; // Array options
size_t typeSize; // sizeof(type)
};
void $DynArray_Create(struct $DynArray* arr, size_t typeSize);
void $DynArray_Free(struct $DynArray* arr);
void $DynArray_EmptyPush(struct $DynArray* arr);
void $DynArray_Push(struct $DynArray* arr, void* value);
void $DynArray_Pop(struct $DynArray* arr);
void $DynArray_RemoveAt(struct $DynArray* arr, size_t index);
void $DynArray_Shrink(struct $DynArray* arr);
void $DynArray_Reserve(struct $DynArray* arr, size_t size);
/*
* Defines a DynArray of type(tag).
*/
#define DynArray(tag) DynArray$##tag
/*
* Utility macros for getting functions for type(tag).
*/
#define DynArray_ReinterpretCast(tag) DynArray$##tag##_ReinterpretCast
#define DynArray_Create(tag) DynArray$##tag##_Create
#define DynArray_Free(tag) DynArray$##tag##_Free
#define DynArray_EmptyPush(tag) DynArray$##tag##_EmptyPush
#define DynArray_Push(tag) DynArray$##tag##_Push
#define DynArray_Pop(tag) DynArray$##tag##_Pop
#define DynArray_RemoveAt(tag) DynArray$##tag##_RemoveAt
#define DynArray_Shrink(tag) DynArray$##tag##_Shrink
#define DynArray_Reserve(tag) DynArray$##tag##_Reserve
#define DynArray_Decl(type, tag) \
$DynArray_Decl_Type(type, tag) \
static inline void DynArray$##tag##_Create(struct DynArray(tag)* arr) \
{ \
$DynArray_Create(&arr->$arr, sizeof(type)); \
} \
$DynArray_Decl_Func(type, tag) \
$DynArray_Decl_Func_Push(type, tag)
#define $DynArray_Decl_Type(type, tag) \
struct DynArray(tag) \
{ \
union \
{ \
struct $DynArray $arr; \
struct \
{ \
size_t size; \
size_t capacity; \
type* values; \
struct DynArray_Options options; \
}; \
}; \
};
#define $DynArray_Decl_Func(type, tag) \
static inline struct DynArray(tag) DynArray$##tag##_ReinterpretCast(void* arr) \
{ \
struct DynArray(tag) dst; \
memcpy(&dst, arr, sizeof dst); \
return dst; \
} \
static inline void DynArray$##tag##_Free(struct DynArray(tag)* arr) \
{ \
$DynArray_Free(&arr->$arr); \
} \
static inline void DynArray$##tag##_EmptyPush(struct DynArray(tag)* arr) \
{ \
$DynArray_EmptyPush(&arr->$arr); \
} \
static inline void DynArray$##tag##_Pop(struct DynArray(tag)* arr) \
{ \
$DynArray_Pop(&arr->$arr); \
} \
static inline void DynArray$##tag##_RemoveAt(struct DynArray(tag)* arr, size_t index) \
{ \
$DynArray_RemoveAt(&arr->$arr, index); \
} \
static inline void DynArray$##tag##_Shrink(struct DynArray(tag)* arr) \
{ \
$DynArray_Shrink(&arr->$arr); \
} \
static inline void DynArray$##tag##_Reserve(struct DynArray(tag)* arr, size_t size) \
{ \
$DynArray_Reserve(&arr->$arr, size); \
}
#define $DynArray_Decl_Func_Push(type, tag) \
static inline void DynArray$##tag##_Push(struct DynArray(tag)* arr, type value) \
{ \
$DynArray_Push(&arr->$arr, &value); \
} \
/*
* The following is used to define the "raw" version of DynArray
* which uses a custom Create function to assign sizeof(type).
*/
$DynArray_Decl_Type(unsigned char, raw)
static inline void DynArray$raw_Create(struct DynArray(raw)* arr, size_t typeSize)
{
$DynArray_Create(&arr->$arr, typeSize);
}
$DynArray_Decl_Func(unsigned char, raw)
static inline void DynArray$raw_Push(struct DynArray(raw)* arr, void* value)
{
$DynArray_Push(&arr->$arr, value);
}
#endif
dynarray.c:
#include <stdlib.h>
#include <string.h>
#include "dynarray.h"
void $DynArray_Create(struct $DynArray* arr, size_t typeSize)
{
arr->data = malloc(typeSize * DYNAMIC_ARR_SIZE);
arr->size = 0;
arr->capacity = DYNAMIC_ARR_SIZE;
arr->typeSize = typeSize;
arr->options.freeOnError = DYNAMIC_ARR_FREE_ON_ERROR;
}
void $DynArray_Free(struct $DynArray* arr)
{
free(arr->data);
arr->data = NULL;
}
inline void $DynArray_ErrorFree(struct $DynArray* arr)
{
if (arr->options.freeOnError)
{
free(arr->data);
arr->data = NULL;
}
}
void $DynArray_EmptyPush(struct $DynArray* arr)
{
if (arr->data)
{
if (arr->size == arr->capacity)
{
size_t newCapacity = (size_t)(arr->capacity * DYNAMIC_ARR_GROWTHRATE);
if (newCapacity == arr->capacity) ++newCapacity;
void* tmp = realloc(arr->data, arr->typeSize * newCapacity);
if (tmp)
{
arr->data = tmp;
arr->capacity = newCapacity;
++arr->size;
}
else
{
$DynArray_ErrorFree(arr);
}
}
else
{
++arr->size;
}
}
}
void $DynArray_Push(struct $DynArray* arr, void* value)
{
$DynArray_EmptyPush(arr);
if (arr->data) memcpy(arr->data + (arr->size - 1) * arr->typeSize, value, arr->typeSize);
}
void $DynArray_Pop(struct $DynArray* arr)
{
if (arr->data)
{
if (arr->size > 0)
{
--arr->size;
}
}
}
void $DynArray_RemoveAt(struct $DynArray* arr, size_t index)
{
if (arr->data)
{
if (arr->size > 1 && index > 0 && index < arr->size)
{
size_t size = arr->size - 1 - index;
if (size != 0) memmove(arr->data + index * arr->typeSize, arr->data + (index + 1) * arr->typeSize, size * arr->typeSize);
--arr->size;
}
}
}
void $DynArray_Shrink(struct $DynArray* arr)
{
if (arr->data)
{
size_t newCapacity = arr->size;
if (newCapacity != arr->capacity)
{
void* tmp = realloc(arr->data, arr->typeSize * newCapacity);
if (tmp)
{
arr->data = tmp;
arr->capacity = newCapacity;
++arr->size;
}
else
{
$DynArray_ErrorFree(arr);
}
}
}
}
void $DynArray_Reserve(struct $DynArray* arr, size_t size)
{
if (arr->data)
{
size_t newCapacity = arr->size + size;
if (newCapacity > arr->capacity)
{
void* tmp = realloc(arr->data, arr->typeSize * newCapacity);
if (tmp)
{
arr->data = tmp;
arr->capacity = newCapacity;
arr->size = size;
}
else
{
$DynArray_ErrorFree(arr);
}
}
else
{
arr->size = size;
}
}
else
{
void* tmp = malloc(arr->typeSize * size);
if (tmp)
{
arr->data = tmp;
arr->capacity = size;
arr->size = size;
}
}
}
usage:
DynArray_Decl(int, int) // Declaration outside
int main()
{
struct DynArray(int) intArr;
DynArray_Create(int)(&intArr);
for (size_t i = 0; i < 10; i++)
{
DynArray_Push(int)(&intArr, i);
}
printf("size: %i\n", intArr.size);
printf("%i\n", intArr.values[2]);
}
I'm also not too sure if I utilized union properly in the declaration ($DynArray_Decl_Type) or if that produces undefined behaviour.
Answer: Big task
OP's goal of "trying to write a generic dynamic array which is type safe." is admirable, but not a good task for someone new to C.
I recommend starting with writing a generic dynamic array for void *.
Later, research _Generic.
Not so generic
Approach relies on types not having spaces.
Try struct DynArray(long long) llArr;
Stand-alone failure
#include "dynarray.h" fails as dynarray.h requires prior #include <>. Put those in dynarray.h.
Use #include "dynarray.h" first, before other includes, in dynarray.c to test.
Why 10
Rather than some magic number 10, Rework to start with size 0.
// #define DYNAMIC_ARR_SIZE 10
Allow $DynArray_Free(NULL)
This, like free(NULL), simplifies clean-up
void $DynArray_Free(struct $DynArray* arr) {
if (arr) {
free(arr->data);
arr->data = NULL;
}
}
More reasonable max line length
// if (size != 0) memmove(arr->data + index * arr->typeSize, arr->data + (index + 1) * arr->typeSize, size * arr->typeSize);
vs.
if (size != 0) {
memmove(arr->data + index * arr->typeSize,
arr->data + (index + 1) * arr->typeSize,
size * arr->typeSize);
}
$ is not part of the standard C identifier character set
$ not needed anyways. Consider dropping it.
Unneeded if()
In $DynArray_Reserve(), code starts with unneeded if (arr->data) test. realloc(NULL, ...) is OK.
If ptr is a null pointer, the realloc function behaves like the malloc function for the specified size.
Uniform naming
Rather than DYNAMIC_ARR_..., match DYNARRAY_....
Other than that, very good naming scheme.
Allow pushing const data
// void $DynArray_Push(struct $DynArray* arr, void* value)
void $DynArray_Push(struct $DynArray* arr, const void* value)
Not always an error
$DynArray_Shrink() calls $DynArray_ErrorFree(arr) even if arr->size == 0.
Bug??
In $DynArray_Shrink(), code has ++arr->size;. Maybe --arr->size;?
Pedantic: check overflow
// Add
if (arr->capacity > SIZE_MAX/DYNAMIC_ARR_GROWTHRATE) Handle_Error();
size_t newCapacity = (size_t)(arr->capacity * DYNAMIC_ARR_GROWTHRATE);
Use correct specifier
I also suspect code was not compiled with all warnings enabled. Save time, enabled them all.
// printf("size: %i\n", intArr.size);
printf("size: %zu\n", intArr.size);
Info hiding
Consider only declaring struct $DynArray in dynarry.h and putting the definition in dynarray.c. User need not see the members. Create access functions if member access needed. | {
"domain": "codereview.stackexchange",
"id": 42418,
"tags": "beginner, c, type-safety"
} |
Rendering stars in 3D space - AbsMag to OpenGL scale values | Question: I am using the AMNH's Digital Universe stars.speck file to render stars in 3D space. The speck file contains parsec-scale coordinates, where our Sun is at 0,0,0. It also lists the AbsMag values - i.e. the luminosity from 10 parsecs away. I now want to scale my star textures in OpenGL, so the resulting star sizes are accurate. How could I convert the AbsMag inverse logarithmic scale to a scale value of 0-infinity, where 0.0 means the star goes away, and 1.0 is no change? Obviously I can't make the Sun stay at 1, as that would make it huge.
Answer: "Accurate star size" is a problem. Obviously you do not have the resolution on your screen for accurate angular resolution of stars (they would require an absurdly fine resolution), and the dynamic range of brightness is also too low -- stars range over many orders of magnitude in brightness, far more than a screen can show. Deeply annoying. What one might want to do is to make each star have a brightness and screen size that is proportional to its actual brightness in the sky to give the same feeling as the sky would give. This is still very tough, since screens have different gamma correction. To top it off, the human eye actually has a pretty logarithmic response to light (this is why magnitudes and gamma correction make sense).
Here is a rough idea. A star with AbsMag of $M$ has an actual luminosity of $$L = L_\odot 10^{0.4(M_\odot - M)}$$ where $L_\odot$ is the luminosity of the sun and $M_\odot$ is the absolute magnitude of the sun. Things get easy if we just count luminosity in terms of $L_\odot$, making the sun one unit of luminosity.
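A quick numeric sanity check of this formula, taking $M_\odot = 4.83$ (the value used in the final formula of this answer):

```python
M_SUN = 4.83  # absolute magnitude of the Sun, as used later in this answer

def luminosity(abs_mag):
    # L in solar units: L = 10 ** (0.4 * (M_sun - M))
    return 10 ** (0.4 * (M_SUN - abs_mag))

print(luminosity(4.83))   # 1.0 -- the Sun itself
print(luminosity(-0.17))  # ~100: five magnitudes brighter means 100x the luminosity
```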
A spot of radius $r$ on the screen with luminance $l$ radiates power as $P=\pi r^2 l$. That luminance is due to the gamma-corrected luma value $V$ the computer displays: $l = K V^\gamma$.
Assuming the radius changes with luminosity too as $r(L)$, I would try $r(L)=r_0 L^a$ where $a\approx 0.6$ (but this is guesswork) and the sun has radius $r_0$ pixels.
So trying to put this together, we get the pixel brightness as $$V = V_0 (L/ r(L)^2)^{1/\gamma} = V'_0 L^{(1-2a)/\gamma}$$ where $V_0$ and $V'_0$ are the pixel brightnesses used for the sun in this model. Basically this squashes the actual luminosity with $a$ (representing using bigger spots for brighter stars, not requiring as intense pixels) and $\gamma$ (to correct for the screen and the eye). What values to use will largely be trial and error unless you want to try to use screen photometry equipment.
So the full formula converting from absolute magnitude to pixel value would be: $$V = V_0 10^{0.4(4.83 - M)(1-2a)/\gamma}.$$ | {
"domain": "astronomy.stackexchange",
"id": 3938,
"tags": "absolute-magnitude"
} |
Finding velocity $v$ and position $r$, given a time $t$ under the acceleration of a gravitational force | Question: I was messing with the maths, when I tried to find the velocity as a function of time, $v(t)$, and the position, also, as a function of time, $r(t)$ under the gravity force.
$$ m \ddot{r} = -G \frac{Mm}{r^2}$$
$$ \ddot{r} = - \frac{GM}{r^2} $$
In order to find out the velocity, you will have to integrate over both sides.
$$\dot{r} = \int{ -\frac{GM}{r^2} \; \mathrm d t}$$
$$\dot{r} = -GM \int{ \frac{1}{r^2} \; \mathrm d t}$$
The tricky thing you have to take into account is that $r$ is a function of time, $r(t)$, so I'm not sure how to continue the integration from here. I've come across this post and also this one.
It talks about how to solve this exact problem by multiplying both sides of the equations by $\dot{r}$ and then, integrate.
$$\ddot{r} \dot{r} = - \frac{GM}{r^2} \dot{r}$$
$$\int{ \ddot{r} \dot{r} \; \mathrm d t }= - GM \int{\frac{\dot{r}}{r^2}
\; \mathrm d t}$$
The thing is, I don't understand exactly how the steps (in the other posts) are done, it seems to me there are big jumps in between the steps, so I can't follow very well the integration process.
It would be nice, if someone could explain me, in depth, how to get the position $r(t)$ and $v(t)$ by integrating over this formulas.
Answer: I'm certain that it isn't possible to write out $r$ and $v$ as functions of $t$, at least explicitly, meaning you can't isolate $r$ or $v$. The problem lies in the form of the final solution that you get after integration.
So, you're stuck on how to integrate from here:
$\int \ddot{r}\dot{r} dt = -GM \int \frac{\dot{r}}{r^2} dt$
Use the following equalities:
$\frac{d}{dt} \bigg[\frac{\dot{r}^2}{2}\bigg] = \dot{r}\ddot{r}$
$\frac{d}{dt} \bigg[\frac{1}{r}\bigg] = -\frac{\dot{r}}{r^2}$
After, it shouldn't be difficult to obtain the following relation, which is shown in one of the posts you've mentioned.
$\dot{r} = \sqrt{v_0^2 + 2GM\big[\frac{1}{r} - \frac{1}{r_0}\big]}$
What you would want to do next is divide both sides by the square root and integrate over $t$, noting that the integration variable should then change as:
$\int \frac{\dot{r}}{\sqrt{v_0^2 + 2GM\big[\frac{1}{r} - \frac{1}{r_0}\big]}} dt = \int \frac{1}{\sqrt{v_0^2 + 2GM\big[\frac{1}{r} - \frac{1}{r_0}\big]}} dr$
This is the part where our jobs become difficult because although I know the solution, and I will share it with you, I do not know the steps to take to get it. I just thought I'd share it so you can get a feel for implicit solutions, the ones where you can't isolate what you want, namely $r(t)$.
$\frac{2GM}{(v_0^2-\frac{2GM}{r_0})^\frac{3}{2}} \tanh^{-1} \Bigg[\frac{\sqrt{v_0^2 + 2GM\big[\frac{1}{r} - \frac{1}{r_0}\big]}}{\sqrt{v_0^2 - \frac{2GM}{r_0}}}\Bigg] + r\frac{\sqrt{v_0^2 + 2GM\big[\frac{1}{r} - \frac{1}{r_0}\big]}}{v_0^2-\frac{2GM}{r_0}} = t + c_1$
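Even without untangling the implicit solution, the intermediate energy relation $\dot{r} = \sqrt{v_0^2 + 2GM\big[\frac{1}{r} - \frac{1}{r_0}\big]}$ is easy to check numerically. Below is a Python sketch (the units with $GM = 1$ and the initial conditions are arbitrary assumptions of the example) that integrates $\ddot{r} = -GM/r^2$ with velocity-Verlet and tests the relation:

```python
import math

GM = 1.0                      # units chosen so GM = 1 (an assumption)
r0, v0 = 2.0, 0.1             # arbitrary initial radius and radial speed
r, v = r0, v0
dt = 1e-5

# velocity-Verlet integration of  r'' = -GM / r^2  over t in [0, 1]
for _ in range(100_000):
    v_half = v + 0.5 * dt * (-GM / r**2)
    r += dt * v_half
    v = v_half + 0.5 * dt * (-GM / r**2)

# the energy relation derived above:  v^2 = v0^2 + 2GM (1/r - 1/r0)
lhs = v * v
rhs = v0 * v0 + 2.0 * GM * (1.0 / r - 1.0 / r0)
print(abs(lhs - rhs))         # tiny: the relation holds along the motion
```

If the printed residual is not tiny, the step size `dt` is too large; note the check is independent of the closed-form solution above.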
I hope I helped a bit! | {
"domain": "physics.stackexchange",
"id": 81622,
"tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, integration, calculus"
} |
CX gate with Hadamard | Question: Let's say we have a CX with a Hadamard gate on the control qubit and any state at the target qubit; will the target necessarily become a superposition of two states?
Best.
Answer: Assuming your control is $|0\rangle$ to begin with. Then after application of Hadamard, the control is:
$$\frac{|0\rangle + |1\rangle}{\sqrt 2}.$$
Now using this as control and applying $X$ gate to the target, say $|0\rangle$, you get:
$$\frac{|0\rangle|0\rangle + |1\rangle |1\rangle}{\sqrt 2}$$
Now, the system is entangled and is in a superposition of the states $|00\rangle$ and $|11\rangle$. Because it is entangled, it does not make sense to separately talk about the target state and ask if it is in superposition.
If you measure out the target, you will notice it is in a classical mixture (not a quantum superposition) of states $|0\rangle$ and $|1\rangle$ represented as follows:
$$\frac{1}{2}|0\rangle\langle0| + \frac{1}{2}|1\rangle\langle1|$$
And looking at the outcome, you will see a basis state (I assume this is what you mean by simple state): $|0\rangle$ or $|1\rangle$, so yes, on measuring you will see a simple state.
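These states can be checked with a few lines of linear algebra. Below is a plain-Python sketch (the basis ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$, with the control as the left qubit, is an assumed convention) that applies $H\otimes I$ and then $CX$ to $|00\rangle$:

```python
import math

def matvec(m, v):
    # multiply a matrix (list of rows) by a column vector
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

s = 1 / math.sqrt(2)

# H on the control qubit, identity on the target, in the 4-dim basis
H_I = [[s, 0,  s,  0],
       [0, s,  0,  s],
       [s, 0, -s,  0],
       [0, s,  0, -s]]

# CX: flip the target when the control (left qubit) is |1>
CX = [[1, 0, 0, 0],
      [0, 1, 0, 0],
      [0, 0, 0, 1],
      [0, 0, 1, 0]]

psi = matvec(CX, matvec(H_I, [1, 0, 0, 0]))   # CX (H|0> (x) |0>)
print(psi)   # amplitudes ~ [0.707, 0, 0, 0.707]
```

The resulting amplitude vector is $(1/\sqrt 2, 0, 0, 1/\sqrt 2)$, i.e. the entangled state $(|00\rangle + |11\rangle)/\sqrt 2$ written above.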
A similar thing happens for $|1\rangle$. Any state that is not an eigenstate of $X$ will lead to some sort of entanglement as shown above. However, like @John Garmon said, if you use an eigenstate of $X$, say $|-\rangle$, on applying $CX$ the states becomes:
$$\frac{|0\rangle|-\rangle - |1\rangle|-\rangle}{\sqrt 2}$$
$$= \frac{|0\rangle - |1\rangle}{\sqrt 2} |-\rangle$$
And the target state is not entangled, and is a 'simple' state with respect to the $|+\rangle, |-\rangle$ basis. | {
"domain": "quantumcomputing.stackexchange",
"id": 819,
"tags": "quantum-gate, quantum-state"
} |
Floating-point to String Conversion with Given Precision for Fractional Part | Question: Faced with converting floating-point values obtained from a sensor to strings on an embedded system to transmit over UART, I came up with the following dbl2str() to handle either float or double input. The accuracy of the last digit in the fractional part wasn't important as the floating-point values were from a temperature sensor on an MSP432. The intent was to avoid loading stdio.h and math.h.
The double value, an adequately sized buffer and then precision for the fractional-part are parameters to the function:
/**
* convert double d to string with fractional part
* limited to prec digits. s must have adequate
* storage to hold the converted value.
*/
char *dbl2str (double d, char *s, int prec);
The approach is:
Handle 0.0 case where integer-part is '0' and pad fractional part to prec '0's, return at that point.
Save sign flag (1-negative, 0-positive), set padding variable zeros equal to prec, change sign of floating-point value to positive if negative.
Nul-terminate temp string and fill from end with fractional-part conversion, subtracting 1 from zeros on each iteration, and after leaving conversion loop, pad to remaining zeros.
Add separator '.' and continue to fill temp string with integer-part conversion.
if sign add '-' to front of temp string.
copy temp string to buffer and return pointer to buffer.
(note: the range of floating-point values is from roughly -50.0 to 200.00 so INF was not protected against, nor was exhaustion of the 32-byte buffer a consideration)
The code with test case is:
#include <stdio.h>
#include <stdint.h>
#define FPMAXC 32
/**
* convert double d to string with fractional part
* limited to prec digits. s must have adequate
* storage to hold the converted value.
*/
char *dbl2str (double d, char *s, int prec)
{
if (d == 0) { /* handle zero case */
int i = 0;
*s = '0'; /* single '0' for int part */
s[1] = '.'; /* separator */
for (i = 2; i < 2 + prec; i++) /* pad fp to prec with '0' */
s[i] = '0';
s[i] = 0; /* nul-terminate */
return s;
}
char tmp[FPMAXC], *p = tmp + FPMAXC - 1; /* tmp buf, ptr to end */
int sign = d < 0 ? 1 : 0, /* set sign if negative */
mult = 1; /* multiplier for precision */
unsigned zeros = prec; /* padding zeros for fp */
uint64_t ip, fp; /* integer & fractional parts */
if (sign) /* work with positive value */
d = -d;
for (int i = 0; i < prec; i++) /* compute multiplier */
mult *= 10;
ip = (uint64_t)d; /* set integer part */
fp = (uint64_t)((d - ip) * mult); /* fractional part to prec */
*p = 0; /* nul-terminate tmp */
while (fp) { /* convert fractional part */
*--p = fp % 10 + '0';
fp /= 10;
if (zeros) /* decrement zero pad */
zeros--;
}
while (zeros--) /* pad remaining zeros */
*--p = '0';
*--p = '.';
if (!ip) /* no integer part */
*--p = '0';
else
while (ip) { /* convert integer part */
*--p = ip % 10 + '0';
ip /= 10;
}
if (sign) /* if sign, add '-' */
*--p = '-';
for (int i = 0;; i++, p++) { /* copy to s with \0 */
s[i] = *p;
if (!*p)
break;
}
return s;
}
int main (void) {
char buf[FPMAXC];
double d = 123.45678;
printf ("% 8.3lf => %8s\n", d, dbl2str (d, buf, 3));
d = -d;
printf ("% 8.3lf => %8s\n", d, dbl2str (d, buf, 3));
d = 0.;
printf ("% 8.3lf => %8s\n", d, dbl2str (d, buf, 3));
d = 0.12345;
printf ("% 8.3lf => %8s\n", d, dbl2str (d, buf, 3));
d = -d;
printf ("% 8.3lf => %8s\n", d, dbl2str (d, buf, 3));
d = 123.0;
printf ("% 8.3lf => %8s\n", d, dbl2str (d, buf, 3));
d = -d;
printf ("% 8.3lf => %8s\n", d, dbl2str (d, buf, 3));
}
The function does what I intended, but would like to know if there are any obvious improvements that can be made with a slight-eye on optimization.
Program Output
./bin/dbl2str
123.457 => 123.456
-123.457 => -123.456
0.000 => 0.000
0.123 => 0.123
-0.123 => -0.123
123.000 => 123.000
-123.000 => -123.000
Answer:
The intent was to avoid loading stdio.h and math.h.
if there are any obvious improvements that can be made with a slight-eye on optimization.
Consider float rather than double
Avoid splitting string processing
Separate processing for the integer part and fraction is not needed. A simple alternative is to create a scaled integer and then process that integer "right to left" (least to most).
mult type
A limiting factor is the width of the type. The code uses int, which is 16-bit on some embedded machines. To match the rest of the code's wide-integer usage, uint64_t mult makes more sense.
Offload padding
dbl2str(double d, char *s, int prec) might as well handle space padding, thus allowing a simple puts() rather than printf ("%8s\n", dbl2str (d, buf, 3));
Such as
dbl2str(double d, char *s, int width, int prec)
Minor: Parameter order
Maybe instead of double d, char *s, int prec, follow the sprintf() order char *s, int prec, double d as a more familiar idiom.
Rounding
The code cost to do basic rounding is not high. I recommend it.
Temperature and -0.000
When reporting temperature, seeing -0.0 can be informative.
Consider allowing for its potential appearance with a signbit(d) test rather than d < 0, since -0.0 may occur in the input or arise due to rounding.
Size limited string
Early in a project, data is often not what one thinks. A double-to-string function that uses buffer overflow protection would pay for itself in reduced debugging - better than risking UB.
I did not see a need for special zero handling.
See do loop below.
Some of the above ideas with a modified OP's code
#include <assert.h>
#include <math.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <stdbool.h>
#define FPMAXC 32
char* dbl2str2(size_t n, char s[n], int width, int prec, double d) {
assert(prec >= 0 && prec <= 9); // Or some other upper bound
assert(width >= 0 && (unsigned ) width < n);
char tmp[FPMAXC];
char *p = tmp + FPMAXC - 1;
*p = '\0';
// Or use conventional code for signbit, fabs
bool sign = signbit(d);
int w = sign + 1; // count characters used: sign, ','
d = fabs(d) * 2.0; // * 2 for rounding
for (int p = 0; p < prec; p++) {
d *= 10.0;
}
uint64_t i64 = (uint64_t) d;
i64 = (i64 + i64 % 2) / 2; // round
do {
if (prec-- == 0) {
*(--p) = '.';
}
if ((unsigned) ++w >= n) {
*s = 0; // Number too big - add error code here as desired.
return s;
}
*(--p) = (char) (i64 % 10 + '0');
i64 /= 10;
} while (prec >= 0 || i64);
if (sign) {
*(--p) = '-';
// w++ counted above
}
while (w++ < width) {
*(--p) = ' ';
}
return memcpy(s, p, (size_t) (tmp + FPMAXC - p));
}
Output (with "%8s" changed to "%s" and #define dbl2str( d, s, prec) dbl2str2(sizeof(s), (s) , 8, (prec), (d)))
123.457 => 123.457
-123.457 => -123.457
0.000 => 0.000
0.123 => 0.123
-0.123 => -0.123
123.000 => 123.000
-123.000 => -123.000
Code not heavily tested, yet good enough to give some alternative ideas. | {
"domain": "codereview.stackexchange",
"id": 40604,
"tags": "c, floating-point"
} |
Time dependence of generalized coordinates and virtual displacement | Question: The Cartesian coordinates of particles are related to the generalized coordinates via a transformation (for the $x$ component of the $j$-th particle) as:
$$x_j = x_j(q_1, q_2, \ldots, q_N, t)$$
What I can't understand is why in the virtual displacement which occurs in constant time i.e. $\delta t=0$ isn't zero? We can write the virtual displacement as:
$$\delta x_j = \sum_{i=1}^N \frac{\partial{x_j}}{\partial{q_i}}\cdot \delta q_i $$
but because the generalized coordinates can also be considered functions of time then:
$$\delta x_j = \sum_{i=1}^N \frac{\partial{x_j}}{\partial{q_i}}\cdot \dot{q_i} \cdot \delta t$$
If time is frozen isn't virtual displacement also $0$?
Answer: Consider a system of $N$ material points described by position vectors ${\bf x}_1, \ldots, {\bf x}_N$ in a reference frame ${\cal R}$. These position vectors are not free to assume any configuration in the physical space, but they are constrained to satisfy some constraints which, possibly, may depend on time,
$$f_j({\bf x}_1, \ldots, {\bf x}_N, t) =0\quad j=1,\ldots, c < 3N\:.\tag{1}$$
If these functions are smooth and satisfy a condition of functional independence (I do not want to enter into the details), we can choose $n:= 3N-c$ abstract coordinates $q^1,\ldots, q^n$ which can be used to embody the constraints into the formalism. This result holds locally around every admitted configuration and around a given time $t$.
As a matter of fact, we can locally (in space and time) represent the position vectors ${\bf x}_i$, $i=1,2,\ldots, N$, as known functions of the said free coordinates.
$${\bf x}_i = {\bf x}_i(q^1,\ldots, q^n,t)\tag{2}$$
When $q^1,\ldots, q^n, t$ vary in their domain (an open set in $\mathbb{R}^{n+1}$), the vectors ${\bf x}_i(q^1,\ldots, q^n,t)$ automatically satisfy the constraints (1). The admissible configurations are therefore determined by the free coordinates $q^1,\ldots, q^n$ at each time $t$.
Now we pass to the notion of virtual displacement compatible with the set of constraints (1). It is defined by fixing $t$ and computing the differential of the functions ${\bf x}_i$ as functions of the remaining variables. The virtual displacement of the system at time $t_0$ around a permitted configuration determined by $q_0^1,\ldots, q_0^n$ is the set of $N$ vectors in the real space
$$\delta {\bf x}_i = \sum_{k=1}^n \left.\frac{\partial {\bf x}_i}{\partial q^k}\right|_{(q^1_0,\ldots, q_0^n,t_0)}\delta q^k\:, \quad i=1,\ldots, N\:.$$
Above, the numbers $\delta q^k\in {\mathbb R}$ are arbitrary, not necessarily "infinitesimal" (which does not mean anything!).
Example
Consider a point of position vector ${\bf x}$, constrained to live on a circle of radius $r=\sqrt{1+ct^2}$, where $c>0$ is a known constant.
This circle is centered on the origin and stays in the plane $z=0$.
Here we have just two constraints
$$f_1({\bf x}, t)=0, \quad f_2({\bf x}, t)=0$$
where
$$f_1({\bf x}, t) := x^2+ y^2+ z^2 - (1+ct^2)\:, \quad
f_2({\bf x}, t) := z $$
if ${\bf x}= x{\bf e}_x+ y{\bf e}_y+ z{\bf e}_z$
Locally we can use, for instance, the coordinate $q^1=x$ to describe a portion of circle and we have
$$x = q^1\:, \quad y = \sqrt{(1+ct^2) - x^2}\:, \quad z=0\:.$$
The relations (2) here read
$${\bf x}(q^1,t)= q^1 {\bf e}_1+ \sqrt{(1+ct^2) - (q^1)^2}{\bf e}_2$$
The virtual displacements at time $t_0$ are the vectors of the form
$$\delta {\bf x} = \delta q^1 {\bf e}_1+ \frac{\delta q^1}{\sqrt{(1+ct_0^2) - (q^1)^2}}{\bf e}_2$$
for every choice of $\delta q^1$.
The geometric meaning of $\delta {\bf x}$ should be evident: it is nothing but a vector (of arbitrary length) tangent to the circle at time $t_0$ emitted by a configuration determined by the value $q^1$.
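This tangency can be illustrated numerically. The Python sketch below (the parameter values $c$, $t_0$, $q^1$ are arbitrary assumptions) builds a virtual displacement from $\partial {\bf x}/\partial q^1$ at frozen time and checks that the constraint is preserved to first order:

```python
import math

c, t0, q1 = 0.5, 2.0, 0.3     # assumed constants and configuration

def f(x, y, t):
    # constraint: the point lies on the circle of radius sqrt(1 + c t^2)
    return x * x + y * y - (1.0 + c * t * t)

# parametrization x(q^1, t) from the example
x = q1
y = math.sqrt(1.0 + c * t0 * t0 - q1 * q1)

# virtual displacement at frozen time t0: (dx/dq1, dy/dq1) * delta_q
dy_dq = -q1 / math.sqrt(1.0 + c * t0 * t0 - q1 * q1)
dq = 1e-6
delta = (dq, dy_dq * dq)

# change of the constraint along the virtual displacement
df = f(x + delta[0], y + delta[1], t0) - f(x, y, t0)
print(abs(df))   # O(dq^2): tangent to the frozen-time circle
```

The first-order change of $f$ vanishes because the virtual displacement is tangent to the circle at the frozen time $t_0$; the residual is of order $\delta q^2$.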
REMARK It is worth stressing that varying $t$, the circle changes! Virtual displacement are defined at given time $t$.
The discussed example is actually quite general. The virtual displacement are always vectors which are tangent to the manifold of admitted configurations at given time $t_0$.
The continuation of the Lagrangian approach (once the validity of the postulate of ideal constraints is assumed, and interactions are introduced, for instance defined by a Lagrangian) consists of finding the evolution of the system not in terms of curves $${\bf x}_i = {\bf x}_i(t)\:, \quad i=1\,\ldots, N$$ in the physical space. The motion is described directly in terms of free coordinates, i.e., curves $$q^k=q^k(t)\:, \quad k=1,\ldots, n$$
Just at this level it makes sense to introduce the notation $\dot{q}^k = \frac{dq^k}{dt}$, because here we have a curve $t$-parametrized describing the evolution of the system. Before this step $q^1,\ldots, q^n$ and $t$ are independent variables.
A posteriori, if we have a description of the motion of the system in terms of free coordinates, we also have the representation in the physical space by composing these curves with the universal (= independent of any possible motion) relations (2),
$${\bf x}_i(t) = {\bf x}_i(q^1(t), \ldots, q^n(t), t)\:, \quad i=1\,\ldots, N\tag{3}$$
It is finally interesting to compare virtual displacements with real displacements when we have a motion $$q^k=q^k(t)\:, \quad k=1,\ldots, n\:.$$
In the physical space, the velocities with respect to the reference frame ${\cal R}$ are given by taking the derivative with respect to $t$ of (3),
$${\bf v}_i(t) = \sum_{k=1}^n \frac{\partial {\bf x}_i}{\partial q^k}\frac{dq^k}{dt} + \frac{\partial {\bf x}_i}{\partial t}\:.$$
An approximate displacement ascribed to an interval of time $\Delta t$ is
$$\Delta {\bf x}_i = {\bf v}_i(t)\Delta t = \sum_{k=1}^n \frac{\partial {\bf x}_i}{\partial q^k}\frac{dq^k}{dt}\Delta t + \frac{\partial {\bf x}_i}{\partial t}\Delta t\:,$$
which can be rephrased to
$$\Delta {\bf x}_i = \sum_{k=1}^n \frac{\partial {\bf x}_i}{\partial q^k}\Delta q^k + \frac{\partial {\bf x}_i}{\partial t}\Delta t\:. \tag{4}$$
This identity has to be compared with the definition of virtual displacement
$$\delta {\bf x}_i = \sum_{k=1}^n \frac{\partial {\bf x}_i}{\partial q^k}\delta q^k\:.$$
Even if we choose $\delta q^k = \Delta q^k$, the right-hand sides are different in view of the term $ \frac{\partial {\bf x}_i}{\partial t}\Delta t$ which accounts for a part of the displacement, in real motion, due to the fact that constraints may depend on $t$ explicitly, as in the example above. | {
"domain": "physics.stackexchange",
"id": 86540,
"tags": "classical-mechanics, lagrangian-formalism, coordinate-systems, constrained-dynamics, displacement"
} |
How long does it take for pictures by Hubble to arrive on Earth? | Question: Recently, there was news that Hubble took a high definition picture of the Andromeda galaxy. I wanted to know how long does a high definition picture from Hubble takes to arrive on Earth; if at all possible, how does this process occur?
Answer: Hubble transmits about 120 GB of data per week (source: http://hubblesite.org/the_telescope/hubble_essentials/quick_facts.php). One picture from the original camera was only 640k pixels (this camera was built a long time ago), but the images you see are composited from many, many images. Today, it uses WFC3 - an 8-megapixel camera. Even so, a single image from the Hubble takes very little time to transmit - but depending on which composite image (dataset) you are talking about, the answer could be "a very long time".
The time for the radio signal to travel back to Earth depends on the distance to the transmitter, but since its orbit is only 569 km above the Earth's surface, it could be as little as 2 ms. From the furthest point it will be about 4000 km away, so it would take 13 ms for the signal to reach Earth (speed of light ~300,000 km/s). Note that the article linked below says that Hubble doesn't in fact send data to Earth - instead it sends it to the geosynchronous Tracking and Data Relay Satellite System (TDRSS) twice a day, which will add some delays.
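A back-of-the-envelope check of these figures (a Python sketch; the 2 bytes per pixel for a raw frame is an assumption, not a Hubble specification):

```python
SPEED_OF_LIGHT_KM_S = 300_000.0

# one-way light travel time for the quoted nearest/farthest distances
t_near_ms = 569.0 / SPEED_OF_LIGHT_KM_S * 1000.0
t_far_ms = 4000.0 / SPEED_OF_LIGHT_KM_S * 1000.0
print(f"nearest: {t_near_ms:.1f} ms, farthest: {t_far_ms:.1f} ms")

# rough time to move one raw 8-megapixel frame at the *average* throughput
# implied by 120 GB/week (assuming 2 bytes per pixel)
throughput_Bps = 120e9 / (7 * 24 * 3600)
image_seconds = (8e6 * 2) / throughput_Bps
print(f"one raw frame: about {image_seconds:.0f} s")
```

This matches the quoted ~2 ms and ~13 ms light-travel figures; the roughly one-minute frame time uses the week-averaged rate, and the instantaneous downlink rate is higher.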
A more detailed answer is at http://en.wikipedia.org/wiki/Hubble_Space_Telescope#Transmission_to_Earth
And I just found this cool picture (source):
As for the size of the Andromeda data (the Panchromatic Hubble Andromeda Treasury), you can start at https://archive.stsci.edu/prepds/phat/ and dig down to the data files from there - for example, when you drill down to brick 1, you will come to https://archive.stsci.edu/pub/hlsp/phat/brick01/ and you'll see a long, long list of files that amounts to about 94 GB of data. And that's just for one brick. There are 23 bricks. But note that these are not raw image files as sent by Hubble. It seems reasonable to think it took a few months though. | {
"domain": "physics.stackexchange",
"id": 19012,
"tags": "telescopes"
} |
For what kind of data are hash table operations O(1)? | Question: From the answers to (When) is hash table lookup O(1)?, I gather that hash tables have $O(1)$ worst-case behavior, at least amortized, when the data satisfies certain statistical conditions, and there are techniques to help make these conditions broad.
However, from a programmer's perspective, I don't know in advance what my data will be: it often comes from some external source. And I rarely have all the data at once: often insertions and deletions happen at a rate that's not far below the rate of lookups, so preprocessing the data to fine-tune the hash function is out.
So, taking a step out: given some knowledge about data source, how can I determine whether a hash table has a chance of having $O(1)$ operations, and possibly which techniques to use on my hash function?
Answer: There are several techniques that guarantee that lookups will always require O(1) operations, even in the worst case.
How can I determine whether a hash table has a chance of having O(1)
operations, and possibly which techniques to use on my hash function?
The worst case happens when some malicious attacker (Mallory) deliberately gives you data that Mallory has specifically selected to make the system run slow.
Once you have picked some particular hash function, it's probably over-optimistic to assume Mallory will never find out which hash function you picked.
Once Mallory discovers which hash function you picked, if you allow Mallory to give you lots of data to be inserted into your hash table using that hash function, then you are doomed: Mallory can internally rapidly generate billions of data items, hash them with your hash function to find which data items are likely to collide, and then feed you millions of one-in-a-thousand data items that are likely to collide, leading to lookups that run much slower than O(1).
All the techniques that guarantee "O(1) lookups even in the worst case" avoid this problem by doing a little bit of extra work on each insertion to guarantee that, in the future, every possible lookup can succeed in O(1) time.
In particular, we assume (worst case) that Mallory will sooner or later discover which hash function we are using; but he only gets a chance to insert a few data items before we pick a different hash function -- tabulation hashing or some other universal hashing -- one that we specially select such that all the data we have so far can be looked up in 2 or 3 probes -- i.e., O(1).
Because we randomly select this function, we can be fairly sure that Mallory won't know what function we picked for a while.
Even if Mallory immediately gives us data that, even with this new hash function, collides with previous data,
we can then pick yet another a fresh new hash function such that, after rehashing,
all the previous data he and everyone else has fed us can now be looked up in 2 or 3 probes in the worst case -- i.e., O(1) lookups in the worst case.
It's fairly easy to randomly select a new hash function and rehash the entire table often enough to guarantee that each lookup is always O(1).
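To make the pick-a-new-function-and-rehash idea concrete, here is a toy Python sketch. Using a random salt as a stand-in for drawing a new function from a universal family, and a fixed chain-length threshold, are simplifying assumptions:

```python
import random

class RehashingTable:
    """Toy separate-chaining table that picks a fresh random salt
    (a stand-in for a new universal hash function) whenever any chain
    grows past MAX_CHAIN, so adversarially colliding keys cannot keep
    lookups slow. A sketch, not production code."""
    MAX_CHAIN = 4

    def __init__(self, nbuckets=16):
        self.nbuckets = nbuckets
        self.salt = random.getrandbits(64)
        self.buckets = [[] for _ in range(nbuckets)]

    def _index(self, key):
        # salting makes the bucket index unpredictable to an attacker
        return hash((self.salt, key)) % self.nbuckets

    def insert(self, key, value):
        chain = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)
                return
        chain.append((key, value))
        if len(chain) > self.MAX_CHAIN:
            self._rehash()

    def lookup(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

    def _rehash(self):
        items = [kv for chain in self.buckets for kv in chain]
        self.nbuckets *= 2
        self.salt = random.getrandbits(64)  # new, secretly chosen function
        self.buckets = [[] for _ in range(self.nbuckets)]
        for k, v in items:
            self.buckets[self._index(k)].append((k, v))
```

Because the salt is chosen at random after the adversary's keys arrive, a set of keys that collided under the old function is overwhelmingly unlikely to keep colliding under the new one.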
While this guarantees that each lookup is always O(1), these techniques, when inserting the Nth item into a hash table that already contains N-1 items, can occasionally require O(N) time for that insert.
However, it is possible to design the system such that,
even when Mallory deliberately gives you new data that, using the new hash function, collides with previous data, the system can accept lots of items from Mallory and others before it needs to do a full O(N) rebuild.
Hash table techniques that pick-a-new-function-and-rehash in order to guarantee O(1) lookups, even in the worst case, include:
cuckoo hashing guarantees that each key lookup succeeds with at most 2 hash calculations and 2 table lookups.
hopscotch hashing guarantees that each key lookup succeeds after inspecting at most a small number H (perhaps H=32) of consecutive entries in the table.
dynamic perfect hashing -- the 1994 paper by Dietzfelbinger is the first one I've read that pointed out that, even though it rehashes "frequently" in order to guarantee that each key lookup always succeeds with 2 hash calculations and 2 lookups, it's possible to do a full rehash so rarely that even though each full rehash uses O(n) time, the expected average cost of insertions and deletions is O(1) amortized.
Data Structures/Hash Tables | {
"domain": "cs.stackexchange",
"id": 4770,
"tags": "data-structures, runtime-analysis, hash-tables, dictionaries"
} |
found \N in my data does not count as missing values in r | Question: So scrolling through my columns I find \N embedded. I need to count them, but I get an error. Would it be considered a missing value, it's new to me.
# Check whether attribute HeadOfState in country has any missing values, and if so, how many.
country$headOfState[country$headOfState==""] <- NA
country$headOfState[country$headOfState==\N] <- NA
sum(is.na(country$headOfState))
# Check whether attribute IndepYear in country has any missing values, and if so, how many.
country$IndepYear[country$indepYear==""] <- NA
country$indepYear[country$indepYear=='\N'] <- NA
sum(is.na(country$indepYear))
Error: unexpected input in "country$headOfState[country$headOfState==\"
country$headOfState[country$headOfState==\N] <- NA
Error: unexpected input in "country$headOfState[country$headOfState==\"
Answer: The reason its shorws an error, bacause '\' is a part of base regex expressions in R.
As states here:
The metacharacters in extended regular expressions are . \ | ( ) [ { ^ $ * + ?
So a comparison like this
country$indepYear=='\N'
or this (it's not a valid comparison at all)
country$headOfState==\N
will throw an error. You need to "escape" the "\" symbol.
Try something like this instead if you want to replace "\N" with NA:
country$indepYear[country$indepYear=='\\N'] <- NA
If you need just to count them, you can use this approach:
sum(country$indepYear=='\\N')
Hope this helps. | {
"domain": "datascience.stackexchange",
"id": 4208,
"tags": "r, rstudio"
} |
Electric field in a shell in asymmetric situation | Question: Suppose there is a metallic shell having a uniform charge Q spread over it uniformly. Now we bring a charge Q1 near it.
What is the field at the centre of the shell and in general anywhere in the shell?
By Gauss's law we can say that flux through any closed surface in the shell is zero but because of the asymmetry, I don't think we can say anything more.
What would happen if the shell was not made of a conductor?
What would be the field at the centre if the metallic shell was deformed?
If possible, please try to give an answer not involving complicated maths since I have knowledge of only first order differential equations
Answer: Consider a conductor of any shape filled inside with a conducting material and having charge Q on it. Now bring a charge Q1 near it. The charges on the outer surface of conductor will align so that field inside is zero everywhere. Now after the electrostatic condition has been established, I scoop out a cavity inside the conductor. Assuming perfect vacuum, notice that the forces on any of the charges on the conductor or on the charge Q1 do not change. This means that the charge distribution is unchanged after scooping out cavity. So even after scooping out cavity, the field inside the cavity is still zero everywhere. Now I can make this cavity large enough so that the remaining part is just a shell and this argument will still be valid. So in your case also, field inside shell is zero everywhere.
Now by this reasoning it is clear why this will not necessarily hold in the case of a non-conducting material.
This result can be generalized further, although the rigorous proof is through the Laplace equation. Consider a conducting shell( any shape) with a cavity having some charge (which implies there is equal and opposite charge on the inner surface of cavity) and there is also some charge outside the conductor and on the outer surface of the conductor. We find that net field due to charge on outer surface and due to charge outside the shell is zero everywhere inside the cavity and inside the conducting material. Also net field due to charge inside the cavity and the charge on the surface of cavity is zero everywhere outside the cavity. The conductor sort of acts as a shield against the 2 fields. This phenomenon is called electrostatic shielding | {
"domain": "physics.stackexchange",
"id": 80317,
"tags": "electrostatics, electric-fields"
} |
Bioavailability -- what is the effect of absorption rate? | Question: I learnt about Bioavailability and this is the definition given by many sources:
...the fraction (%) of an administered drug that reaches the systemic circulation.
(https://en.m.wikipedia.org/wiki/Bioavailability)
In some other places, it was defined as:
Bioavailability (F) is defined as the rate and extent to which the active constituent or active moiety of a drug is absorbed from a drug product and reaches the circulation.
This 2nd definition is supported by sources including:
Essentials of Medical Pharmacology by KD Tripathi; 8th Edition; Page 22 (photo attached)
https://www.sciencedirect.com/topics/medicine-and-dentistry/bioavailability
https://www.msdmanuals.com/professional/clinical-pharmacology/pharmacokinetics/drug-bioavailability
However, I find it very disturbing to accept that the rate of absorption affects Bioavailability. And this is my reasoning:
Let's say you have 2 drugs, A & B
20g of A is administered and a total of 10g is seen in systemic circulation where A is absorbed at a rate of 1g/hr (taking 10hrs to be fully absorbed at 10g)
20g of B is administered and a total of 10g is seen in systemic circulation and B is absorbed at a rate of 2g/hr (taking 5hrs to be fully absorbed at 10g)
I am assuming both drugs experience same degree of 1st pass Metab and that every other factor that affects Bioavailability is constant.
Isn't the bioavailability of both drugs same at 50% irrespective of the difference in rate of absorption?
Please help.
PS: This source, https://www.sciencedirect.com/topics/medicine-and-dentistry/drug-bioavailability#:~:text=Drug%20bioavailability%20is%20the%20fraction,which%20the%20drug%20is%20absorbed , says:
Bioavailability does not take into account the rate at which the drug is absorbed
I am even more confused!
Answer: I think you will frequently find that measures in biology are not defined in a consistent manner, largely because experimental constraints often require a different operational definition. People prefer to keep the same term for the underlying concept, but recognize that different operational definitions may give slightly different answers. I'd consider this in the realm of "all models are wrong, but some are useful".
In the case you describe here, I would argue this is a conflict between a theoretical and practical version of the same concept.
For drugs, the concept of bioavailability generally applies to different dosing strategies and contrasts intravenous injection (bioavailability = 100% because the entire injection is immediately introduced to systemic circulation) with other routes (e.g., oral, transdermal).
Operationally, though, people don't typically measure bioavailability by counting individual molecules in the systemic circulation versus not. Instead, they measure serum concentrations at multiple time points, calculate the area under the curve, and use a ratio of area under the curve to determine bioavailability.
Here's an example of how that might look from Wikipedia:
For the IV curve, you just have decay kinetics; for PO you have absorption and decay. To compare them, you integrate to get area under the curve which has units of concentration*time, and look at the ratio of PO to IV to report PO bioavailability.
Sort of the "null hypothesis" or basic assumption for kinetics in pharmacology is that everything is single exponential unless you find otherwise. That is, rates of absorption and clearance are directly proportional to concentration in different compartments, or in other words every molecule behaves independently like you expect for any process driven by diffusion or unsaturated enzyme kinetics. Under that assumption, this AUC ratio is equivalent to "fraction of an administered drug that reaches the systemic circulation" and it's also true that "Bioavailability does not take into account the rate at which the drug is absorbed": it doesn't matter "when" a molecule enters the systemic circulation, if you calculate AUC you will at some point have that molecule enter and then leave and it will spend exactly the same amount of average time in circulation as a molecule entering at any other time.
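A toy calculation can make this concrete. The sketch below (a Python sketch; the one-compartment model and all parameter values are assumptions of the example) builds an IV curve and two oral curves with different absorption rates $k_a$, then recovers $F$ from the AUC ratio; the answer is the same $F$ regardless of how fast absorption is:

```python
import math

def auc(ts, cs):
    # trapezoidal area under a concentration-time curve
    return sum((cs[i] + cs[i + 1]) / 2.0 * (ts[i + 1] - ts[i])
               for i in range(len(ts) - 1))

D, V, ke, F_true = 100.0, 10.0, 0.2, 0.5     # assumed dose, volume, rates
ts = [i * 0.05 for i in range(4001)]         # time grid, 0 .. 200 h
iv = [D / V * math.exp(-ke * t) for t in ts] # IV bolus: pure elimination

ratios = {}
for ka in (0.5, 2.0):                        # slow vs fast absorption
    # one-compartment oral (Bateman) curve with true bioavailability F
    po = [F_true * D * ka / (V * (ka - ke))
          * (math.exp(-ke * t) - math.exp(-ka * t)) for t in ts]
    ratios[ka] = auc(ts, po) / auc(ts, iv)
print(ratios)   # both close to F_true = 0.5, independent of ka
```

Under these linear-kinetics assumptions the AUC ratio is insensitive to $k_a$; a saturable elimination step would break this equivalence.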
However, drugs aren't always going to perfectly obey exponential kinetics, and another meaning of bioavailability is just "area under the curve", whatever the contributions to the area under the curve might be. That might include a drug that enters circulation very fast saturating an enzyme that metabolizes it, leading to a slower clearance for a drug absorbed quickly versus slowly. If that's the case, your AUC ratio no longer precisely represents bioavailability in terms of "fraction of an administered drug that reaches the systemic circulation" but it retains a meaning of bioavailability in terms of "dose*time".
Your definition:
Bioavailability (F) is defined as the rate and extent to which the active constituent or active moiety of a drug is absorbed from a drug product and reaches the circulation.
is consistent with this practical measurement issue from AUCs.
In contrast, if you are using some sort of pharmacological model with explicit terms for absorption, decay (including higher-order decay terms), etc, you would want to use the theoretical definition for bioavailability.
I'll add that when you read:
Bioavailability does not take into account the rate at which the drug is absorbed
I would interpret this as trying to emphasize that bioavailability is not about peak concentration. In medical practice, this is very important to understand, whereas in a pharmacokinetic sense the other nuances may also become important. | {
"domain": "biology.stackexchange",
"id": 12047,
"tags": "pharmacology, pharmacokinetics"
} |
Sorted linked-list in C (self-compiling) | Question: I needed a sorted-list that I can iterate over to render elements in layers. Right now, delete_item deletes the first node with a matching priority it finds but I intend to delete specific list items in my implementation.
I didn't know if I needed the list to be doubly linked, but it turns out it makes deleting an item from the list easier. I've also seen people initialize the memory with calloc; is that important in this case?
There's also probably a way to give each list item a unique identifier (apart from its pointer) so that I can delete a specific item instead of deleting the first occurrence. For now, I'll just use the pointer...
Also, what do you think of my self-compiling trick?
//usr/bin/gcc ${0##*/} -o temp && ./temp ; rm temp 2>/dev/null ; exit
#include <stdio.h>
#include <stdlib.h>
struct node {
struct node *next;
struct node *prev;
int priority;
};
struct node *head;
void add_item(int priority)
{
struct node *curr = head;
struct node *prev = NULL;
while((curr != NULL) && (curr->priority < priority)) {
prev = curr;
curr = curr->next;
}
struct node *new = malloc(sizeof(struct node));
new->priority = priority;
new->next = curr;
new->prev = prev;
if(prev != NULL) {
prev->next = new;
} else {
head = new;
}
}
void remove_item(int priority)
{
struct node *curr = head;
while((curr != NULL) && (curr->priority != priority)) {
curr = curr->next;
}
if(!curr)
return;
if(curr == head) {
head = curr->next;
} else {
curr->prev->next = curr->next;
if(curr->next)
curr->next->prev = curr->prev;
}
free(curr);
}
void delete_list()
{
struct node *curr;
while(head != NULL) {
curr = head;
head = head->next;
free(curr);
}
}
void print_items()
{
int member = 0;
for(struct node *iter = head; iter != NULL; iter = iter->next)
{
printf("Member: %d -> priority: %d\n", member, iter->priority);
member++;
}
}
int main(int argc, char** argv)
{
printf("This file is self-compiling\n");
add_item(1);
add_item(1);
add_item(2);
add_item(3);
add_item(1);
add_item(5);
add_item(4);
add_item(3);
remove_item(1);
remove_item(4);
remove_item(5);
add_item(5);
print_items();
delete_list();
}
Answer: Bug when inserting item
In add_item(), you are missing this code, which fixes the prev pointer of the node after the one you just inserted:
if (curr != NULL)
curr->prev = new;
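The same fix, sketched in Python as a language-neutral illustration of the corrected sorted insert (not the poster's code; the marked line is the missing back-link):

```python
class Node:
    def __init__(self, priority):
        self.priority = priority
        self.prev = None
        self.next = None

def add_item(head, priority):
    """Sorted insert into a doubly linked list; returns the (possibly new) head."""
    curr, prev = head, None
    while curr is not None and curr.priority < priority:
        prev, curr = curr, curr.next
    new = Node(priority)
    new.next, new.prev = curr, prev
    if prev is not None:
        prev.next = new
    else:
        head = new
    if curr is not None:
        curr.prev = new   # the fix: back-link from the following node
    return head

head = None
for p in [3, 1, 2]:
    head = add_item(head, p)
# list is now 1 <-> 2 <-> 3 with consistent prev pointers in both directions
```

Without the marked line, every node inserted before an existing node leaves that node's prev pointer stale, which is exactly what remove_item later relies on.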
Bug when removing item
In remove_item(), if the node to be removed is the head node, you fail to fix up the prev pointer of the second node. In other words, you need to do the following code in both the head and non-head cases:
if (curr->next)
curr->next->prev = curr->prev; | {
"domain": "codereview.stackexchange",
"id": 26321,
"tags": "c, linked-list"
} |
Gazebo throws "Segmentation fault" after installation from binaries | Question: System info:
OS: Ubuntu 22.04.1 LTS
CPU: AMD Ryzen 7 4800H with Radeon Graphics
GPU: NVIDIA GeForce GTX 1650
NVIDIA driver: 520 (proprietary)
I just installed ROS2 Humble + Gazebo with sudo apt install ros-humble-desktop-full, and then when I try to execute the ign gazebo shapes.sdf line from the official tutorial I get these errors:
Stack trace (most recent call last):
#31 Object "/lib/x86_64-linux-gnu/libruby-3.0.so.3.0", at 0x7ff8f29ab327, in
#30 Object "/lib/x86_64-linux-gnu/libruby-3.0.so.3.0", at 0x7ff8f2b4031c, in rb_vm_exec
#29 Object "/lib/x86_64-linux-gnu/libruby-3.0.so.3.0", at 0x7ff8f2b3aca6, in
#28 Object "/lib/x86_64-linux-gnu/libruby-3.0.so.3.0", at 0x7ff8f2b37fd5, in
#27 Object "/lib/x86_64-linux-gnu/libruby-3.0.so.3.0", at 0x7ff8f2b35c44, in
#26 Object "/lib/x86_64-linux-gnu/libruby-3.0.so.3.0", at 0x7ff8f2a81a2e, in
#25 Object "/lib/x86_64-linux-gnu/libruby-3.0.so.3.0", at 0x7ff8f29ac9bc, in rb_protect
#24 Object "/lib/x86_64-linux-gnu/libruby-3.0.so.3.0", at 0x7ff8f2b44c71, in rb_yield
#23 Object "/lib/x86_64-linux-gnu/libruby-3.0.so.3.0", at 0x7ff8f2b4031c, in rb_vm_exec
#22 Object "/lib/x86_64-linux-gnu/libruby-3.0.so.3.0", at 0x7ff8f2b3aca6, in
#21 Object "/lib/x86_64-linux-gnu/libruby-3.0.so.3.0", at 0x7ff8f2b37fd5, in
#20 Object "/lib/x86_64-linux-gnu/libruby-3.0.so.3.0", at 0x7ff8f2b35c44, in
#19 Object "/usr/lib/x86_64-linux-gnu/ruby/3.0.0/fiddle.so", at 0x7ff8ee11344b, in
#18 Object "/lib/x86_64-linux-gnu/libruby-3.0.so.3.0", at 0x7ff8f2b03098, in rb_nogvl
#17 Object "/usr/lib/x86_64-linux-gnu/ruby/3.0.0/fiddle.so", at 0x7ff8ee112d6b, in
#16 Object "/lib/x86_64-linux-gnu/libffi.so.8", at 0x7ff8ee0b7492, in
#15 Object "/lib/x86_64-linux-gnu/libffi.so.8", at 0x7ff8ee0bae2d, in
#14 Object "/usr/lib/x86_64-linux-gnu/libignition-gazebo6-ign.so.6.12.0", at 0x7ff8ed61986c, in runGui
#13 Object "/lib/x86_64-linux-gnu/libignition-gazebo6-gui.so.6", at 0x7ff8ed45e917, in ignition::gazebo::v6::gui::runGui(int&, char**, char const*, char const*, int, char const*)
#12 Object "/lib/x86_64-linux-gnu/libignition-gazebo6-gui.so.6", at 0x7ff8ed45c39c, in ignition::gazebo::v6::gui::createGui(int&, char**, char const*, char const*, bool, char const*, int, char const*)
#11 Object "/lib/x86_64-linux-gnu/libignition-gui6.so.6", at 0x7ff8ec1cb048, in ignition::gui::Application::Application(int&, char**, ignition::gui::WindowType)
#10 Object "/lib/x86_64-linux-gnu/libQt5Widgets.so.5", at 0x7ff8ebc4ecec, in QApplicationPrivate::init()
#9 Object "/lib/x86_64-linux-gnu/libQt5Gui.so.5", at 0x7ff8eab36b6f, in QGuiApplicationPrivate::init()
#8 Object "/lib/x86_64-linux-gnu/libQt5Core.so.5", at 0x7ff8ec4f0b16, in QCoreApplicationPrivate::init()
#7 Object "/lib/x86_64-linux-gnu/libQt5Gui.so.5", at 0x7ff8eab33c07, in QGuiApplicationPrivate::createEventDispatcher()
#6 Object "/lib/x86_64-linux-gnu/libQt5Gui.so.5", at 0x7ff8eab325ee, in QGuiApplicationPrivate::createPlatformIntegration()
#5 Object "/usr/lib/x86_64-linux-gnu/qt5/plugins/platforms/libqxcb.so", at 0x7ff8ee0df522, in
#4 Object "/lib/x86_64-linux-gnu/libQt5XcbQpa.so.5", at 0x7ff8e43cadaf, in QXcbIntegration::QXcbIntegration(QStringList const&, int&, char**)
#3 Object "/lib/x86_64-linux-gnu/libQt5XcbQpa.so.5", at 0x7ff8e43c7722, in QXcbConnection::QXcbConnection(QXcbNativeInterface*, bool, unsigned int, char const*)
#2 Object "/lib/x86_64-linux-gnu/libQt5XcbQpa.so.5", at 0x7ff8e43cdcab, in
#1 Object "/lib/x86_64-linux-gnu/libxkbcommon-x11.so.0", at 0x7ff8e41e18cd, in xkb_x11_keymap_new_from_device
#0 Object "/lib/x86_64-linux-gnu/libxkbcommon-x11.so.0", at 0x7ff8e41e108d, in
Segmentation fault (Address not mapped to object [0x21aa120b8])
If I install Gazebo manually following the latest docs, I get a similar error from the gz sim command.
Answer: The problem was with my terminal emulator (Alacritty), which was installed with snap. Using the default terminal (or re-installing Alacritty from source) fixed the problem.
"domain": "robotics.stackexchange",
"id": 2584,
"tags": "ros, gazebo"
} |
Signal Acquisition in Compressed Sensing | Question: I'm trying to wrap my head around compressed sensing, so I've been reading this intro to the topic.
I'm completely keeping up when they discuss exploiting the sparsity of a signal in some domain to compress it. I also think I get why you'd want to choose a sensing matrix that is incoherent with the basis in which your signal is sparse.
The point where my understanding breaks down is the "Undersampling and Sparse Signal Recovery" section.
Concretely, let's say I have a signal f of length n. I have a sensing matrix which is incoherent with the basis in which f is sparse that is m x n, and m < n.
If I understand correctly, I'm supposed to measure the signal by taking its inner product with each row of my sensing matrix, so I'll end up with samples of length m.
As I understand it, compressed sensing is useful in cases when sampling at the Nyquist/Shannon frequency is prohibitively expensive or slow. I must be missing something really fundamental, but didn't I have to do the work to acquire all samples of f so that it could be correlated with my sensing matrix? As I understand it, I've compressed the data now, which is great, but I didn't save any energy in the acquisition process.
Answer: Just to restate everything for clarity, the Compressed Sensing problem is defined as the following: given a signal $x$ of length $N$, we measure the projection of $x$ by some projection operator, $\Phi$ of size $M \times N$,
$$ y = \Phi x,$$
where $y$ contains our $M$ measurements of the original signal. We could also state this in terms of the inner products of $x$ with the rows of $\Phi$,
$$ y_i = \left< \phi_i, x\right>.$$
In the CS sampling procedure we have an inherent dimensionality reduction since we are taking a projection of $x$. The degree of this dimensionality reduction is usually referred to in terms of the ratio $\frac{M}{N}$ (aka the subsampling ratio or "Subrate"). By obtaining $y$ during acquisition we are simultaneously performing acquisition and dimensionality-reduction rather than full-resolution sampling followed by compression (DCT, DWT, etc.). The measurements $y$ are read off of the sensor, quantized (still an open problem, but decent results can be obtained through simple scalar quantization. See Laska's dissertation for more novel approaches), entropy coded (pick your flavor), and then transmitted or stored.
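A tiny NumPy sketch of this acquisition model (the dimensions and the Gaussian $\Phi$ are illustrative choices, not prescribed by the theory):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 5       # signal length, number of measurements, sparsity

# A K-sparse signal (here, sparse directly in the canonical basis)
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Random Gaussian projection operator: M x N, with M < N
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Acquisition and dimensionality reduction happen in one step
y = Phi @ x

print(y.shape, M / N)  # (64,) 0.25 -- the subrate M/N
```

Only the $M$ numbers in `y` would be quantized, entropy coded, and stored or transmitted; recovering $x$ from `y` is the job of the (separate) sparse reconstruction algorithm.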
You touch on a good key point when it comes to CS signal acquisition, namely, if the calculation of this projection is costly then the advantage of CS for acquisition systems would appear small. However, depending on the context and type of signal, such projections can be accomplished in the analog domain (requiring no computation). The single-pixel camera (SPC) is a great example of this.
Researchers have also been able to greatly reduce sampling times for MRI using CS techniques. Specifically, conventional MRI techniques sample along different radial lines within the frequency domain. Each radial line is a measurement of the MRI device requiring some sampling time. Traditionally, for higher resolution MRI, more radial lines must be acquired, incurring longer acquisition time (problematic for the MRI of small children). However, this process can also be characterized as linear projections within the frequency domain. Because of this, CS techniques can be used to recover MRI images from many fewer measurements, allowing for high resolution MRI with much shorter MRI appointments. This was, in fact, the context in which CS was first applied. For more info, this paper by Lustig et al might be a good starting point.
In summation, the usefulness of CS to a particular context really depends on how you implement the projection. Getting it right can require some out-of-the-box thinking to come up with a novel sensing strategy. Thankfully, frameworks such as the SPC are generalizeable to a wide range of different signal contexts, so we don't have to re-invent the wheel every time. | {
"domain": "dsp.stackexchange",
"id": 498,
"tags": "compressive-sensing"
} |
Need help understanding 16QAM constellation diagram | Question: Is the identification of each point on the constellation diagram arbitrary as long as the sender and receiver know the mapping?
I have seen some diagrams which order in an outward spiral, while others go left to right. Does it matter?
Answer: Yes and no.
The mapping is arbitrary as long as the receiver correctly determines which constellation point a symbol is. If the receiver makes a mistake, though, it is most likely going to pick a "neighbor" constellation point (i.e. a constellation point that is only one spot away). It is highly unlikely that a correctly implemented receiver will pick a constellation point far away from the transmitted point. Communication systems try to mitigate the impact of the "neighbor" mistakes by minimizing the Hamming distance between neighbors. A map where all of the nearest neighbors have a Hamming distance of 1 is called a Gray map. If you use a Gray map almost all of the symbol errors will only result in one bit error. | {
"domain": "dsp.stackexchange",
"id": 564,
"tags": "modulation, demodulation"
} |
C# screen recording | Question: I'm developing a screen recorder as an application log. The recording should capture a full-HD screen at 25 fps, and on request it should save the last 60 seconds of the recording. It is for an embedded app, but the app is built for 32-bit, and during development the recording app threw an OutOfMemory exception while generating the video when I used ImageFormat.Png.
I solved this problem - I had forgotten to dispose of a bitmap created while looping over the list of images. But it has to work alongside the rest of the application (which is not well written). Is there a way to optimize the memory usage of the recording?
I found a template using an array, but I changed the array to a list since a list is easier for me to work with than an array. Is that a problem in this case? Does adding/removing items with a list instead of an array consume more memory or CPU?
Recorder:
using System;
using System.Collections.Generic;
using System.Drawing;
using System.IO;
using System.Linq;
using System.Windows.Forms;
namespace ScreenRecord
{
public class ScreenRecorder
{
private Rectangle bounds = new Rectangle(0, 0, 1920, 1080);
private Timer fpsTimer = new Timer();
private List<Tuple<byte[], DateTime>> Buffer = new List<Tuple<byte[], DateTime>>();
private int BufferLengthSeconds = 60;
public List<Tuple<byte[], DateTime>> SaveVideoLog() {
List<Tuple<byte[], DateTime>> tempBuffer = new List<Tuple<byte[], DateTime>>(Buffer);
Buffer.Clear();
return tempBuffer;
}
private void AddBitmap(Bitmap bitmap) {
try {
DateTime currentTime = DateTime.Now;
DateTime minTime = currentTime.AddSeconds(- BufferLengthSeconds);
//Buffer.RemoveAll(frame => frame.Item2 < minTime);
for (int i = 0; i < Buffer.Count; i++) {
if (Buffer.First().Item2 < minTime)
Buffer.RemoveAt(0);
else
break;
}
using (MemoryStream ms = new MemoryStream()) {
bitmap.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
Buffer.Add(Tuple.Create(ms.ToArray(), currentTime));
}
}
catch (Exception ex) {
Console.WriteLine(ex.Message);
}
}
public void RecordVideo()
{
using (Bitmap bitmap = new Bitmap(bounds.Width, bounds.Height))
{
using (Graphics g = Graphics.FromImage(bitmap))
{
//Add screen to bitmap:
g.CopyFromScreen(new Point(bounds.Left, bounds.Top), Point.Empty, bounds.Size);
}
//Save screenshot:
AddBitmap(bitmap);
}
}
public ScreenRecorder()
{
fpsTimer.Interval = 40;
fpsTimer.Start();
fpsTimer.Tick += new EventHandler(timer1_Tick);
}
private void timer1_Tick(object sender, EventArgs e)
{
RecordVideo();
}
}
}
Video coder:
using System;
using System.Collections.Generic;
using System.Drawing;
using System.IO;
using System.Linq;
using Accord.Video.FFMPEG;
namespace ScreenRecord
{
public static class VideoConverter
{
public static void ConvertToVideoAndSave(List<Tuple<byte[], DateTime>> source, string destinationPath) {
if (!source.Any())
return;
using (MemoryStream ms = new MemoryStream(source.First().Item1)) {
Image frame = Image.FromStream(ms);
ConvertToVideoAndSave(source, destinationPath, frame.Width, frame.Height);
}
}
public static void ConvertToVideoAndSave(List<Tuple<byte[], DateTime>> source, string destinationPath, int width, int height) {
ConvertToVideoAndSave(source, destinationPath, width, height, 25);
}
public static void ConvertToVideoAndSave(List<Tuple<byte[], DateTime>> source, string destinationPath, int width, int height, int framerate) {
ConvertToVideoAndSave(source, destinationPath, width, height, framerate, VideoCodec.H264);
}
public static void ConvertToVideoAndSave(List<Tuple<byte[], DateTime>> source, string destinationPath, int width, int height, int framerate, VideoCodec codecs) {
if (!source.Any()) {
return;
}
using (VideoFileWriter writer = new VideoFileWriter()) {
writer.Open(destinationPath, width, height, framerate, codecs);
DateTime startTime = source.First().Item2;
foreach (Tuple<byte[], DateTime> frame in source) {
using (MemoryStream ms = new MemoryStream(frame.Item1))
using (Bitmap videoFrame = new Bitmap(ms))
writer.WriteVideoFrame(videoFrame, frame.Item2 - startTime);
}
writer.Close();
}
}
}
}
Main form with two buttons:
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;
namespace ScreenRecord
{
public partial class MainForm : Form
{
ScreenRecorder rec;
public MainForm()
{
InitializeComponent();
}
void Button1Click(object sender, EventArgs e)
{
rec = new ScreenRecorder();
}
void Button2Click(object sender, EventArgs e)
{
VideoConverter.ConvertToVideoAndSave( rec.SaveVideoLog(), "D:\\Test.mp4");
}
}
}
Answer: I don't think this is the best way of capturing the screen, but I'll leave it to someone with more experience in the area to comment on that.
What I will comment on are rather obvious performance issues assuming the overall code stays as it is:
List<Tuple<byte[], DateTime>> tempBuffer = new List<Tuple<byte[], DateTime>>(Buffer)
This copies the list for no obvious reason. The only reason I can think of is to decouple it from an async reader, but by the looks of it everything is sync here. And even so, you can just switch the references without copying everything.
Use UtcNow instead of Now, it is both more performant and makes more sense
Buffer.RemoveAt(0) shifts the whole list to the left, so your for loop is O(n^2)
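The RemoveAt(0) point is worth a sketch: removing from the front of an array-backed list shifts every remaining element, so a queue-like structure fits this rolling buffer better. Below is the same frame-expiry logic with a double-ended queue in Python (illustrative only; in C#, a Queue<T> with Enqueue/Dequeue plays the same role):

```python
from collections import deque
from datetime import datetime, timedelta, timezone

BUFFER_SECONDS = 60
frames = deque()  # entries: (jpeg_bytes, timestamp); append right, expire left

def add_frame(jpeg_bytes):
    now = datetime.now(timezone.utc)          # UTC, as suggested above
    cutoff = now - timedelta(seconds=BUFFER_SECONDS)
    while frames and frames[0][1] < cutoff:   # O(1) per expired frame
        frames.popleft()
    frames.append((jpeg_bytes, now))

add_frame(b"\xff\xd8fake-jpeg")
print(len(frames))  # 1
```

Expiring from the left of a deque is O(1) per frame, versus O(n) per call for a front removal on a contiguous list.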
ms.ToArray() copies the entire memory stream. | {
"domain": "codereview.stackexchange",
"id": 41884,
"tags": "c#, performance, image, memory-optimization, video"
} |
Heat of hydrogenation of 1,3-substituted cyclohexanes | Question:
Of the two molecules in question, (C) should be more unstable, as it is in the cis-configuration. The t-butyl and methyl groups being on the same side of the ring would exert a larger steric force on each other compared to the trans-configuration. However, (D) is given as the correct answer.
How is a trans-1,3-disubstituted cyclohexane more unstable than the corresponding cis-isomer? I drew the chair projections of both compounds, and the cis one has both groups on the same side, which would cause 1,3-diaxial interactions, as mentioned here.
Answer: First of all, as Jan pointed out in the comments, better nomenclature for these compounds is syn- and anti- instead of cis- and trans-, respectively. However, since the OP is more familiar with the cis-/trans- nomenclature, I'll continue with it in my answer.
Each substituted cyclohexane ring (e.g., the di-substituted ones here) has two chair conformers, as depicted in the OP's bottom diagram of methylcyclohexane (a mono-substituted example). In one conformer the methyl group is in the equatorial position (left-hand structure), and in the other it is in the axial position (right-hand structure). Because the two 1,3-diaxial interactions cost approximately an additional $2 \times \pu{3.8 kJ/mol}$ of steric strain energy, the predominant conformer is the equatorial one.
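To put a number on that preference (a back-of-the-envelope estimate assuming room temperature, $T = \pu{298 K}$, and the $2 \times \pu{3.8 kJ/mol}$ figure above):

```latex
K = \frac{[\text{axial}]}{[\text{equatorial}]}
  = e^{-\Delta G^\circ / RT}
  = e^{-7600 / (8.314 \times 298)}
  \approx 0.047
```

i.e. roughly 95 : 5 in favor of the equatorial conformer of methylcyclohexane.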
Now, if you put a tert-butyl group in the $\ce{C3}$-equatorial position of equatorial-methylcyclohexane (left-hand structure, vide supra), you get the cis-1,3-disubstituted compound (C). On the other hand, if you put a tert-butyl group in the $\ce{C3}$-equatorial position of axial-methylcyclohexane (right-hand structure), you get the trans-1,3-disubstituted compound (D). Because it is so bulky, the tert-butyl group is always in the equatorial position in a cyclohexane ring (a 1,3-diaxial interaction with one hydrogen would cost approximately an additional $\pu{11.4 kJ/mol}$ of steric strain energy), as depicted in the following diagram (the larger the substituent, the more the equatorially substituted conformer is favored):
The predominant conformer of the cis-isomer (C) has both substituents in equatorial positions (the left-hand structure of (C)). The other conformer (obtained by a ring flip; the right-hand structure of (C)) has both substituents in axial positions. Thus, as a rule of thumb, the conformer with both substituents in equatorial positions is the more stable one.
On the other hand, both conformers of the trans-isomer (D) have one substituent in an equatorial position and the other in an axial position. The conformer with the tert-butyl group equatorial (left-hand structure of (D)) is the more stable of the two, because the tert-butyl group is much larger than the methyl group. However, because the predominant conformer of the cis-isomer (C) has both groups equatorial, (C) is more stable than (D), whose axial methyl group adds extra steric strain energy.
"domain": "chemistry.stackexchange",
"id": 12984,
"tags": "organic-chemistry, cyclohexane"
} |
How do I create a direct current with a magnet? The magnet is not to be moved in the direction of the wire | Question: How do I create a direct current with a magnet? The magnet is not to be moved in the direction of the wire. In fact, I'm looking for the opposite of the drawing at https://commons.wikimedia.org/wiki/File:RechteHand.png
Update
1st: The background is that the German Wikipedia article Magnetismus contains the sentence "Any movement of electric charges produces a magnetic field." For a wire this is proven by experiment: even a direct electrical current produces a magnetic field. This was illustrated in the drawing above. My first doubt is: does a set of freely flowing electrons, in the absence of any external fields, produce a magnetic field of the same strength as in the wire? In such a case I only trust experimental facts.
2nd: If a DC in a straight wire produces a magnetic field, the reason for this must be EM induction. But there are two strange facts. EM induction always connects three components, two of which produce the third. These are the electrical current, the magnetic field, and the movement of the wire or of the magnetic field. But the straight wire in the Wikipedia definition under point one does not move.
To test the Wikipedia definition, I remembered that the vector components of EM induction form a cross product. This is why I am asking for help in finding out what the reverse of the process in the drawing linked above is.
Answer: As I look at the image, the question, and the physics, I see that the only logical answers are:
A - the copper is twisted or coiled, technically violating the rule that "the magnet is not to be moved in the same direction",
OR
B - there is some sort of incline or slope on the N/S divider; an offset, I'm guessing, is also ruled out (because the image is illustrated as perpendicular).
Otherwise you are charging the copper vertically, and since the copper is narrow and long, the charge wouldn't be worth anything. This is, as far as I know, impossible.
"domain": "physics.stackexchange",
"id": 14430,
"tags": "electromagnetism, induction"
} |
Converting age and sex variables to a 64-unit dense layer | Question: I am studying a preprint for my own learning (https://www.medrxiv.org/content/medrxiv/early/2020/04/27/2020.04.23.20067967.full.pdf) and I am befuddled by the following detail of the neural network architecture:
This is in accord with the paper's description of the architecture (p. 5):
Age and sex were input into a 64-unit hidden layer and was concatenated with the other branches.
How can the two scalars of age and sex be implemented as a 64-unit dense layer?
Answer: Convert them into numbers (using one-hot vectors or direct numerical representations) and then concatenate them. Then, you can pass them through the Dense layer. | {
"domain": "ai.stackexchange",
"id": 2647,
"tags": "convolutional-neural-networks, papers, embeddings"
} |
How can we differentiate between respiration and breathing? | Question: I am a student of 10th grade, and I eagerly want to learn biology. What is the difference between respiration and breathing?
Answer: There are two uses of the term respiration: physiological respiration and cellular respiration
Physiological respiration involves the intake of outside oxygen and its distribution to the tissues of the body. Breathing is a part of physiological respiration and functions to bring oxygen into the lungs and expel carbon dioxide.
Cellular respiration is a chemical process by which energy is obtained within individual cells from biomolecules like glucose. In aerobic respiration, oxygen is used. Cells can also use fermentation and, for some, anaerobic respiration to obtain energy. | {
"domain": "biology.stackexchange",
"id": 3850,
"tags": "physiology, respiration, breathing"
} |
PCAs and Kleene's Recursion Theorem | Question: I might need some help with the following question.
Given a Partial Combinatory Algebra, we can define the fixed point combinator $Y := [\lambda^{*}xy.y(xxy)][\lambda^{*}xy.y(xxy)]$. How does this relate to Kleene's recursion theorem, a.k.a. the fixed point theorem?
In the setting of Kleene's first PCA, i.e. the PCA of computable functions on $\mathbb{N}$, given a (partial) computable function $f = \varphi_c$, the fixed point combinator satisfies $Yc = c(Yc)$. As I understand it this means that taking $d := Yc$, it translates to $f(d) = \varphi_c(d) = cd = d$, i.e. $f$ having a fixed point.
However Kleene's recursion theorem originally gives a weaker assertion, namely that for every total computable function $g$ there is some $n$ such that $\varphi_{g(n)} \simeq \varphi_n$ (cf. Odifreddi - Classical Recursion Theory, Theorem II.2.10).
This really confuses me and I couldn't make up my mind what to do about it. I hope someone can help me out. Anyway, thank you for your time.
Answer:
As I understand it this means that taking $d := Yc$ it translates to $f(d)=\varphi_{c}(d)=cd=d$, ie. $f$ having a fixed point.
Not every recursive function has a fixed point in the sense of $f(n)=n$ - for example, $f(n)=n+1$. Therefore, there must be something wrong with this proof. As noted in comments, this proof works only if $d=Y c$ is defined.
As you've noticed, you can work around this issue by using a variant of the Y combinator:
Just taking Ycc′=c(Yc)c′ doesn't seem to make your problem any better
But it does! To avoid confusion, I'll call this combinator Z. We have
$Zcc′=c(Zc)c′$.
Let's take a function $f=\varphi_c$ and let $d = Z c$, just like in the previous proof. Now, $d$ is guaranteed to be defined. We have
$d c' \equiv c d c'$
By the definition of application in the Kleene's first algebra this means:
$\varphi_{d}(c') \simeq \varphi_{\varphi_{c}(d)}(c')$
$\varphi_{d}(c') \simeq \varphi_{f(d)}(c')$
$\varphi_{d} \simeq \varphi_{f(d)}$
which is the Kleene's recursion theorem.
Well, literally the same problem arises in the proof of the Recursion theorem in Odifreddi's Classical Recursion Theory, Theorem II.2.10. There he takes $b$ to be a code satisfying $\varphi_{b}(e) \simeq f(\varphi_e(e))$.
No, $b$ is defined in Theorem II.2.10 using the equation:
$\varphi_{\varphi_{b}(e)} \simeq \varphi_{f(\varphi_e(e))}$
Importantly, $b$ is a code - it is a well-defined natural number that encodes a specific program. Furthermore, for every $e$, $\varphi_b(e)$ is also a code - it is not the result of running $f(\varphi_e(e))$, it is merely a code of a function which given $n$, runs the function $\varphi_{f(\varphi_e(e))}$ on $n$. (This difference mirrors the difference between Y and Z combinators.) | {
"domain": "cs.stackexchange",
"id": 14555,
"tags": "computability"
} |
Start a nodelet roscpp (Tegra SGM nodelet) | Question:
I have a ROS package which builds an image_proc-like node for CUDA SGM.
I have built the package and the nodelet compiles, but I am not sure how to run it. I have looked through the ROS tutorials, but there isn't a clear explanation of how to run a nodelet in the generic case.
The nodelet is exported as follows
#include <pluginlib/class_list_macros.h>
PLUGINLIB_EXPORT_CLASS(tegra_stereo::TegraStereoProc, nodelet::Nodelet)
After this, how do I call this nodelet? Since I do not have a cpp file like image_proc or stereo_image_proc to directly call the method, is there a way to launch the nodelet, and how do I launch it?
Thanks!
Originally posted by ashwath1993 on ROS Answers with karma: 70 on 2017-07-07
Post score: 1
Answer:
There are three ways to run a nodelet:
Run the nodelet standalone using roslaunch:
<launch>
<node name="my_nodelet" pkg="nodelet" type="nodelet" args="standalone tegra_stereo/TegraStereoProc" />
</launch>
or rosrun:
rosrun nodelet nodelet standalone tegra_stereo/TegraStereoProc
Run the nodelet as part of a nodelet manager. This is described on the ROS wiki here: http://wiki.ros.org/nodelet/Tutorials/Running%20a%20nodelet
Write a node wrapper, and then build and run it as you would a node. Here's what node wrapper code typically looks like:
#include <ros/ros.h>
#include <nodelet/loader.h>
int main(int argc, char **argv){
ros::init(argc, argv, "tegra_stereo_node");
nodelet::Loader nodelet;
nodelet::M_string remap(ros::names::getRemappings());
nodelet::V_string nargv;
std::string nodelet_name = ros::this_node::getName();
nodelet.load(nodelet_name, "tegra_stereo/TegraStereoProc", remap, nargv);
ros::spin();
return 0;
}
If any of these methods fail with an error message that pluginlib can't find your nodelet, make sure that you defined your nodelet in an XML file and exported it in your package.xml. Instructions to do so are on the ROS wiki here: http://wiki.ros.org/nodelet/Tutorials/Porting%20nodes%20to%20nodelets
Bonus Methods
If you're willing to install another package, Southwest Research Institute created swri_nodelet to make some of these tasks easier.
Run the nodelet standalone using roslaunch:
<launch>
<node name="my_nodelet" pkg="swri_nodelet" type="nodelet" args="tegra_stereo/TegraStereoProc standalone" />
</launch>
or rosrun:
rosrun swri_nodelet nodelet tegra_stereo/TegraStereoProc standalone
Run the nodelet with a nodelet manager using roslaunch:
<launch>
<node name="manager" pkg="nodelet" type="nodelet" args="manager" />
<node name="my_nodelet" pkg="swri_nodelet" type="nodelet" args="tegra_stereo/TegraStereoProc my_manager"/>
</launch>
(Note that this syntax makes it easy to switch launch files back and forth between manager and standalone using args.)
Automatically create a node wrapper using swri_nodelet. This takes a couple of steps but saves you from writing boilerplate C++. Also, because it links the nodelet with the executable at compile time, you catch linker errors at compile time, instead of dealing with inscrutable runtime errors. Here's how to do it:
In your nodelet's C++ file, replace the pluginlib macro with this:
#include <swri_nodelet/class_list_macros.h>
SWRI_NODELET_EXPORT_CLASS(tegra_stereo TegraStereoProc)
In your CMakeList, build your library and add a nodelet executable like this:
# (Add swri_nodelet to your find_package(catkin))
add_library(my_nodelet_library src/TegraStereoProc.cpp)
target_link_libraries(my_nodelet_library ${catkin_LIBRARIES})
swri_nodelet_add_node(tegra_stereo_proc tegra_stereo TegraStereoProc)
target_link_libraries(tegra_stereo_proc my_nodelet_library)
Originally posted by Ed Venator with karma: 1185 on 2017-07-07
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by ashwath1993 on 2017-07-12:
Thank you! This cleared a lot of confusions regarding running nodelets!
Comment by longlongago on 2018-01-14:
Thank you. It is very clear. Are there any differences between these methods? | {
"domain": "robotics.stackexchange",
"id": 28308,
"tags": "roscpp, nodelet"
} |
Halting problem with extra input | Question: Can there be a function HALT(f, y) so that:
There is some x such that f(x) halts iff there is some y such that HALT(f, y) returns true;
There is some x such that f(x) doesn't halt iff there is some y such that HALT(f, y) returns false;
HALT always halts and returns a boolean
? It makes many of the old contradiction arguments break down. E.g., if f(x) loops forever when HALT(f, x) is true, HALT may still return true for half of the x values.
Answer: I assume you mean "Turing machine" by "function", otherwise it is meaningless to say a function halts on some input.
Suppose $\mathrm{HALT}$ exists, then we construct a Turing machine $H$ as follows:
On input $\langle M, w\rangle$ where $M$ is an (encoding of) Turing machine:
Construct a Turing machine $N_{\langle M, w\rangle}$ with input $x$ as follows: run $M$ on $x$ and return what $M$ returns.
Run $\mathrm{HALT}$ on $\langle N_{\langle M, w\rangle}, 0\rangle$ and return what $\mathrm{HALT}$ returns.
Note that the result of running $N_{\langle M, w\rangle}$ has nothing to do with its input $x$. If $M$ halts on $w$, $N_{\langle M, w\rangle}$ halts on all inputs, thus $\mathrm{HALT}(N_{\langle M, w\rangle}, 0)$ returns true. Otherwise $N_{\langle M, w\rangle}$ halts on no input, thus $\mathrm{HALT}(N_{\langle M, w\rangle}, 0)$ returns false.
Now we can see $M$ halts on $w$ if and only if $H$ accepts $\langle M, w\rangle$. Since $H$ always halts, $H$ is a decider for the normal halting problem, a contradiction!
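The construction above can be sketched in Python (purely illustrative — a real HALT with these properties cannot exist, which is the whole point):

```python
def make_H(HALT):
    """Build a decider H for the ordinary halting problem from a
    hypothetical HALT with the properties assumed in the question."""
    def H(M, w):
        # N ignores its input x: it halts on *every* x iff M halts on w,
        # and on *no* x otherwise.
        def N(x):
            return M(w)
        # Hence "N halts on some x" <=> "M halts on w", so a single
        # call to HALT answers the ordinary halting question.
        return HALT(N, 0)
    return H
```

Since H would decide the ordinary halting problem, no such HALT can be implemented.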
So $\mathrm{HALT}$ does not exist. | {
"domain": "cs.stackexchange",
"id": 12901,
"tags": "computability, halting-problem"
} |
Relativistic Doppler Effect Rest Time vs Time Dilation Proper Time | Question: For time dilation, both observers see the other as experiencing a time dilation symmetrically based on relative velocity to each other. In the twin paradox, however, one must be careful of which twin is the "proper" or rest time, and this is based off of acceleration to know who is truly moving. I know this, yet my question is: does the Relativistic Doppler Effect follow these same rules?
Does the Doppler effect take into account who has the proper time, or is the shifted frequency based solely on the relative velocity of the two frames, regardless of "proper time"? For example, I would think that in the twin paradox both twins would see each other's signals as red-shifted, since they are moving away from each other. But how could this be the case if their time dilations aren't symmetrical, given that only one is in an inertial reference frame, and considering that the relativistic Doppler shift is originally derived from Lorentz transformations / time dilation in the form
$\lambda_0 = (c+u)T_0$
Is this $T_0$ a different proper time than in the normal time dilation?
Answer: The flaw is in this part of your argument:
In the twin paradox, however, one must be careful of which twin is the "proper" or rest time, and this is based off of acceleration to know who is truly moving.
While there is no acceleration yet, both twins are "truly moving" and each one sees the time of the other running slower. There is no paradox in the fact that A sees the time of B running slower while B sees the time of A running slower.
With the Doppler effect, the same principle applies. In fact, this is true not only for light, but even for sound. Imagine two trains on parallel tracks going away from each other. If they sound a signal of the same pitch, people on each train would hear its pitch higher than the pitch of the other train. Similarly, each twin would see the light coming from the other twin red shifted.
Relativistic Doppler effect
$$\dfrac{\lambda_o}{\lambda_s}=\dfrac{f_s}{f_o}=\sqrt{\dfrac{1+\beta}{1-\beta}}$$
Where
$$\beta=\dfrac{v}{c}$$
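Numerically, for a purely radial recession (illustrative):

```python
import math

def doppler_ratio(beta):
    """lambda_observed / lambda_source for a source receding at v = beta * c."""
    return math.sqrt((1 + beta) / (1 - beta))

# Each twin, receding from the other at half the speed of light,
# sees the other's light stretched by the same factor:
print(doppler_ratio(0.5))  # sqrt(3), about 1.732
```

The symmetry is explicit: only the relative β enters, so each twin computes the same redshift for the other during the outbound leg.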
There also is a transverse effect, which is beyond the scope of your question. | {
"domain": "physics.stackexchange",
"id": 43927,
"tags": "special-relativity, time-dilation, doppler-effect"
} |
Deuterated solvents vs. regular solvents | Question: Working on dissolving aspartame, I was wondering: if it dissolves well in methanol, will this mean that it dissolves well in deuterated methanol?
Answer: In general, it is safe to assume that solubilities in non-deuterated and deuterated solvents are much the same, and this would be the logical solvent of choice if looking for a NMR solvent.
However, there is a small difference in relative solubilities between H/D solvents, and you can gauge potential for differences based on the dielectric properties of the two. By memory, I think H2O/D2O have the largest difference (which is still pretty small).
Another factor that can cause differences in solubilities comes from the relative purity of the two solvents. For instance, many prepackaged deuterated solvents are often drier than bulk bench solvents, and this can sometimes cause significant solubility differences, especially in hygroscopic solvents like DMSO. I've seen countless examples of people claiming a compound was readily soluble in DMSO, only to have it sit like a brick when trying to dissolve in a freshly cracked vial of DMSO-d6. | {
"domain": "chemistry.stackexchange",
"id": 4599,
"tags": "organic-chemistry, nmr-spectroscopy"
} |
Particular targets of high angular resolution infrared telescopes | Question: I work in the field of infrared interferometry, specifically instrumentation.
As such, I need to be aware of the science goals of such an instrument. Is there a quick list of important and contemporary targets that such an instrument of high angular resolution would be pointed at?
(By "high", I mean objects that subtend a small angle on the sky.)
Answer: The most obvious targets are protostar disks (T Tauri, Serpens FIRS 1), the Milky Way's central black hole Sgr A*, and nearby large stars (Betelgeuse).
The MPIfR interferometry group web page has an impressive sample | {
"domain": "physics.stackexchange",
"id": 3091,
"tags": "astronomy, telescopes, observational-astronomy, instrument, infrared-radiation"
} |
The approximate values of density for a flame with respect to heights | Question: I'm aware that the difference in density with respect to height is low, but I'm in need of the values for a project: in which I explain that the continuity equation of fluid dynamics will work well for gases as an approximation since the density difference is not high for different heights of a flame.
p.s.: the project is a video explaining the tapering of flame. I also would like to know if showing the values of temperature, in case I can't get the values for density, would be acceptable.
Answer:
The approximate values of density for a flame with respect to heights
I'm aware that the difference in density with respect to height is low, but I'm in need of the values for a project: in which I explain that the continuity equation of fluid dynamics will work well for gases as an approximation since the density difference is not high for different heights of a flame.
Check out: "Jet flame heights, lift-off distances, and mean flame surface density for extensive ranges of fuels and flow rates", (DOI link: https://doi.org/10.1016/J.COMBUSTFLAME.2015.09.009) by Derek Bradley, Philip H. Gaskell, Xiaojun Gu, and Adriana Palacios for formulas to make these calculations:
"The correlations are based on a vast experimental data base, covering 880 flame heights. They encompass pool fires and flares, as well as choked and unchoked jet flames of CH$_4$, C$_2$H$_2$, C$_2$H$_4$, C$_3$H$_8$, C$_4$H$_{10}$ and H$_2$, over a wide range of conditions. Supply pressures range from 0.06 to 90 MPa, discharge diameters from 4·10$^{-4}$ to 1.32 m, and flame heights from 0.08 to 110 m.".
See also: "The size of flames from natural fires", (11 October 2007) by P.H. Thomas. Original paper source: Fire Safety Science Digital Archive. "This paper discusses one of the least studied features of natural fires - the length of the turbulent flames rising from the burning fuel.".
An answer about backpressure: Why does exit pressure matches back pressure in a converging diverging nozzle?
PS: The project is a video explaining the tapering of flame.
Here's a great video from NASA: "NASA’s new High Dynamic Range Camera Records Rocket Test", not much tapering in that view but it's shot in HDR and shows the turbulence of the burning. | {
"domain": "physics.stackexchange",
"id": 49534,
"tags": "thermodynamics, fluid-dynamics, density, continuum-mechanics, fluid-statics"
} |
Verifying Newton's 2nd Law | Question: Background/Experiment setup: In one of my classes a mass (0.04 kg) was hung from a pulley and attached to a much heavier mass (around 0.4 kg) that rest on an airtrack. A thread connected the hanging mass with the one on the track.
The hanging mass would be dropped and the acceleration of the mass on the airtrack would be calculated. Over 5 further trials the hanging mass was increased by 0.01 kg each time and the acceleration calculated.
Then I divided gravity by each calculated acceleration so I ended up with 6 new numbers. Then for each mass I divided $m_1$ ($0.4$ kg) by each hanging mass. Then I plotted $\frac{m_1}{m_2}$ as the $x$ axis of a graph and $\frac ga$ as the $y$ axis.
Then I found a line of best fit:
$y = 1.0004x + 0.99 $
$r^2 = 0.9994. $
So it produces a very straight line, the data line up very well and produce a great line of best fit.
Question: How does this verify Newton's 2nd law? I understand that it is $F=ma$, and the graph does show that as mass increases so does acceleration proportionally. Is that all that is needed to verify it? I'm having a hard time in my lab write-up actually explaining how this setup verifies Newton's 2nd law.
Thanks for reading!
Answer: Here's how your equation implies Newton's 2nd law holds.
The equation you discovered looks to me like $$\frac{g}{a}=\frac{m_1}{m_2}+1.$$ This equation can be re-arranged and written: $$m_2g=(m_1+m_2)a.$$
The left-hand side is the magnitude of the gravitational force acting on the smaller hanging mass $m_2$. The right-hand side is the total mass of the two-mass system multiplied by its common acceleration.
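As a quick numerical check (using the masses from the setup described in the question), the fitted line $y = x + 1$ is exactly what this rearrangement predicts:

```python
g = 9.81   # m/s^2
m1 = 0.4   # glider mass on the air track, kg

for m2 in [0.04, 0.05, 0.06, 0.07, 0.08, 0.09]:  # hanging masses, kg
    a = m2 * g / (m1 + m2)   # acceleration predicted by the 2nd law
    # The plotted relation: g/a equals m1/m2 + 1 identically
    assert abs(g / a - (m1 / m2 + 1)) < 1e-9
```

Your measured slope (1.0004) and intercept (0.99) match the predicted slope 1 and intercept 1 to within experimental error.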
Hopefully that is enough for you to see that what you discovered is Newton's 2nd law. Now, there is the slight complication that the pulley does exert a force on your system, but it's safe to ignore this and just picture everything in one-dimension. | {
"domain": "physics.stackexchange",
"id": 9636,
"tags": "homework-and-exercises, newtonian-mechanics"
} |
Why should a neutrino being nearly massless mean that they travel near the speed of light? | Question: It seems to me that photons travel only at the speed of light due to some intrinsic property of photons but once a particle has mass, its mass (irrespective of how small this mass is) should have nothing to do with how fast it travels and indeed, a neutrino could in theory be caught maybe in some sort of Penning trap and be essentially stationary. I know that electrons have very little mass and can move very fast and also move very slowly.
If a particle has mass much less than that of an electron (as a neutrino does), is this some special case where it becomes somehow more "photon-like?"
One guess that occurs to me is that when neutrinos are produced it is due to very energetic processes, so that at the time they come into existence they are already moving near the speed of light, and since they do not interact with normal matter very much, there is no way that they get slowed down. Furthermore, if they are detected, it is because they interact in a way that causes them to combine with another particle and so they no longer exist in a "free state" -- this implies maybe that they only exist with the speed they have at production time. But this does not mean that perhaps some way could be developed to either create them in a process wherein they have less initial velocity or to in fact slow them down and confine them.
Answer:
One guess that occurs to me is that when neutrinos are produced it is due to very energetic processes so that at the time they come into existence they are already moving near the speed of light and since they do not interact with normal matter very much, there is no way that they get slowed down.
This is essentially correct!
Often, neutrinos and antineutrinos are produced in nuclear processes such as $\beta$ decay, where the typical energies are of the order of the MeV, which (as @rob describes in his answer) means they have Lorentz factors of the order of a million or more.
The fact that we have so far only seen energetic neutrinos is in part due to the specifics of how they interact: their interaction cross-section increases linearly with energy, therefore low-energy neutrinos interact even less than the high-energy ones!
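For a rough sense of scale (assuming, purely for illustration, a neutrino mass of about 0.1 eV/c²):

```python
E_eV = 1.0e6   # a typical 1 MeV beta-decay neutrino, in eV
m_eV = 0.1     # assumed neutrino mass in eV/c^2 (illustrative)

gamma = E_eV / m_eV                 # Lorentz factor, E / (m c^2)
beta = (1 - 1 / gamma**2) ** 0.5    # v/c

print(gamma)      # ten million
print(1 - beta)   # ~5e-15: indistinguishable from light speed in practice
```

So even a "mere" MeV of energy leaves such a light particle moving within parts in 10¹⁴ of c.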
Nevertheless, low-energy neutrinos are thought to exist, for instance in the cosmic neutrino background. | {
"domain": "physics.stackexchange",
"id": 86556,
"tags": "special-relativity, mass, velocity, neutrinos"
} |
Polarization true or false: Repeating circular filters is different from repeating linear filters? | Question: Begin with standard abbreviations for polarized filters:
H = Horizontal,
V = Vertical,
D = Diagonal,
R = Right circular.
HHH means three horizontal filters in a row. Start with totally unpolarized light.
H transmits 50% as does HH, HHH, HHHH, etc.
Similarly, V, VV, VVV, etc. And D,DD, DDD, etc.
HD transmits 25%. As do DV, HR, and RD.
But HV transmits 0%, and HDV counterintuitively transmits 12.5% (I call HDV the famous three-filter trick).
Now, if R consists of H followed by a quarter wave plate, then R transmits 50%, but RR transmits 25%, RRR transmits 12.5% RRRR transmits 6.25%, etc., unlike V, VV, VVV, VVVV etc.
Is this true? Or do RR, RRR, RRRR, etc. all transmit 50%? Is there another form of circular polarizer that does work like linear polarizers when repeated?
Answer: You are exactly correct in your first assertion: Each R stage in an RRR... sequence will diminish the light further.
That's not because there is anything inherently different about circular basis states versus linear states, but because we cheat a bit in how we make circular polarizers, since they start with linear polarizers that are then followed by quarter wave plates. It's a bit like a mismatch of hose connections, since you keep converting back and forth between the two basis sets (linear versus circular).
Can you build a full-pass circular polarizer? Sure.
The first thing to keep in mind is that like linear polarizers, quarter plates have a definite orientation to them in how they operate on light. Along one side-to-side axis of a quarter plate, light moves faster and in one linear polarization, while along the other perpendicular axis light moves slower and with the opposite polarization. They call it a "quarter plate" because its thickness is just right for delaying the slower-moving light by exactly one fourth of a wavelength of light. The shift remains (mostly) proportional to wavelength across different wavelengths of light, so the neat thing about a quarter plate is that it works even with white light of many different frequencies.
A circular polarizer consists of two parts. The initial linear polarizer, say $V$ for vertical, that absorbs 50% of non-polarized light and transmits the other 50% as polarized light.
The second part is a quarter plate whose fast axis is rotated 45$^\circ$ away from the vertical axis of the polarizer. That 45$^\circ$ angle allows the quarter plate to split half of the polarized light into fast mode, and the other half into slow mode. For an observer looking into the oncoming light, a quarter plate that is rotated 45$^\circ$ to the right of the polarization axis of the linear polarizer produces light that rotates clockwise as seen by the observer. That is the right circular polarized light. Call that case the $Q$ quarter plate.
If instead the fast axis of the quarter plate is rotated 45$^\circ$ to the left of the linear polarizer axis, the resulting light rotates left. Call that left circular polarized light (and if I got that switched, someone please flag me!), and call that orientation of the quarter plate $Q^-$.
So, with that in mind, here's the trick for creating true cascading circular polarizers: Simply add a $Q^-$ in front of the linear $V$, in addition to the original $Q$ plate that comes after the polarizer. The resulting sandwich is $Q^-VQ$, versus $VQ$ for ordinary circular polarizers.
The reason this works is that the new plate is 90$^\circ$ out-of-phase with the quarter plate from the circular polarizer in front of it, and so cancels out the action of that plate, since the two components are slowed and sped up in equal and opposite proportions by the two out-of-sync quarter plates.
(As the asker Jim Graber neatly noticed, another way to do this is to add a three-quarters plate in front of the polarizer, that is, to create a $QQQVQ$ sandwich. That also gives a null action by changing all the phases by one full wavelength! Neat trick, Jim.)
So here's how it works. For ordinary mixed polarization light entering $Q^-VQ$ from the left, the first $Q^-$ stage just shifts random phase and so has no major impact on the light. The remaining two stages $VQ$ then act like a conventional circular polarizer that, as expected, diminishes the light by 50%. So, the $Q^-VQ$ sandwich clearly qualifies as a true circular polarizer.
Now send that light through another $Q^-VQ$ sandwich. This time the initial $Q^-$ plate has something meaningful to work with! It cancels the actions of the $Q$ plate from the first sandwich, and so converts the light back to vertical linear polarized light. Since that light is now fully matched to the $V$ plate of the second sandwich, all of it gets through. The final $Q$ plate in the second sandwich then recreates circular polarized light, again at nominally 100% efficiency.
You can repeat this indefinitely, with the internal pairs of $Q$ and $Q^-$ plates in successive sandwiches always canceling out and ensuring 100% transmission of vertically polarized light, but always with the last plate converting back to circular.
Now at this point you may be thinking "yeah, but don't you have to keep all of these supposedly true circular polarizers oriented just like vertical linear polarizers?" The cool answer is no, you don't. What's important is that the initial $Q^-$ plate is properly oriented relative to the $V$ polarizer, not to the circular polarized light that is entering it from the front. That light is rotationally symmetric and will convert into linear polarized light at any angle, with that angle depending only on how the $Q^-$ plate is oriented.
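These transmission figures can be checked with a short Jones-calculus computation (a numpy sketch assuming ideal elements: $V$ is the vertical-polarizer projector and the quarter-wave plates sit at ±45°):

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

V = np.array([[0, 0], [0, 1]], dtype=complex)    # vertical linear polarizer
D = np.diag([1, -1j])                            # quarter-wave retardance
Q  = rot(np.pi / 4)  @ D @ rot(-np.pi / 4)       # QWP, fast axis at +45 deg
Qm = rot(-np.pi / 4) @ D @ rot(np.pi / 4)        # QWP, fast axis at -45 deg

def transmission(M):
    """Intensity transmission of unpolarized light through Jones matrix M."""
    return 0.5 * np.trace(M.conj().T @ M).real

R  = Q @ V        # ordinary circular polarizer: V then QWP
Rt = Q @ V @ Qm   # "true" circular polarizer sandwich Q- V Q

print(transmission(R))          # 0.5
print(transmission(R @ R))      # 0.25  -- each extra R costs another half
print(transmission(R @ R @ R))  # 0.125
print(transmission(Rt @ Rt))    # 0.5   -- the Q-/Q pair cancels, no extra loss
```

The cancellation shows up algebraically too: $Q^- Q = -i\,I$, an identity up to a global phase, which is why cascaded $Q^- V Q$ sandwiches behave like cascaded linear polarizers.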
So, the bottom line to your question is this: Yes, you can make true circular polarizers quite easily using the same materials normally used, the only difference being the addition of that initial $Q^-$ plate. They don't do that in ordinary circular polarizers because it would be a very rare event to need cascaded circular polarizers. So instead of adding the cost of another quarter plate in front that would serve no real purpose, they build it in a way that provides perfectly serviceable circular polarized light at a significantly lower cost. | {
"domain": "physics.stackexchange",
"id": 4175,
"tags": "optics, polarization"
} |
Energy/Momentum required to deflect earth from its orbit | Question: This question occurred to me when thinking about the firepower people have on earth.
How hard is it to change the characteristics of the orbit of the earth when an explosion happens on its surface, using conservation of momentum?
I know already that the orbit of the earth is not so accurate already and fluctuates a lot, causing the so-called "leap second".
So my question is: How could such a thing be measured/quantified?
Can we say something like: The explosion energy required to deflect the eccentricity of the earth's orbit by 1%. And how much would that be?
Any ideas?
Answer: So if we consider the 1% change to be directly inward (toward the sun) we can look at the gravitational potential energy difference between these two potential levels. This would be equal to 1% of the Earth's current gravitational potential energy. The gravitational potential energy is given by the equation
\begin{equation}
U = -\frac{GmM}{r}
\end{equation}
If we substitute the mass of earth, $5.97\times10^{24}~\mathrm{kg}$, and the mass of the sun, $1.99\times10^{30}~\mathrm{kg}$, and use the semi-major axis of the earth's orbit, $149,598,261~\mathrm{km}$, as the orbital radius, we find that the magnitude of the earth's orbital potential energy is approximately
\begin{equation}
U = 5.30\times10^{33}~\mathrm{Joules}
\end{equation}
1% of this energy is
\begin{equation}
5.30\times10^{31}~\mathrm{J}
\end{equation}
This is a lot of energy, and I find it sometimes difficult to comprehend very large orders of magnitude. To better understand this energy we can first compare this figure to the energy of the largest nuclear weapon ever detonated by humankind, the Tsar Bomba, a nuclear weapon with a yield of approximately $225~\mathrm{PJ}$ or $2.25\times10^{18}~\mathrm{J}$. In order to change the energy of the earth's orbit by 1% using these weapons we would need to direct the equivalent energy of roughly 23.6 trillion Tsar Bombas on the earth inward with respect to its orbit around the sun. This is still a very large number and I wanted to see if I could think of a better comparison.
The meteor that impacted the earth approximately 65 million years ago, initiating the mass extinction of the dinosaurs, is thought to have had an impact energy equivalent to $4.23\times10^{23}~\mathrm{J}$. In order to decrease the earth's orbital energy by 1% we would need the energy equivalent of approximately 126 million Chicxulub impactors.
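These figures are straightforward to reproduce (SI units, same inputs as above):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m_earth = 5.97e24      # kg
m_sun = 1.99e30        # kg
r = 1.49598261e11      # semi-major axis of earth's orbit, m

U = G * m_earth * m_sun / r   # magnitude of the orbital potential energy
one_percent = 0.01 * U

print(f"U           = {U:.2e} J")                    # ~5.3e33
print(f"Tsar Bombas = {one_percent / 2.25e18:.2e}")  # tens of trillions
print(f"Chicxulubs  = {one_percent / 4.23e23:.2e}")  # ~1.3e8
```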
Based on the figures above, and assuming my math is correct, it would seem that none of the actions taken by man thus far would have any appreciable effect on the earth's orbit. | {
"domain": "physics.stackexchange",
"id": 12502,
"tags": "orbital-motion, energy-conservation, momentum, conservation-laws, earth"
} |
Guessing Game (Heads or Tails) | Question: You guess heads or tails by clicking one of the buttons on easyGUI. If it's right, you will get a "good job" message. If it's wrong, you will get a "wrong" message! After that, there is an option to play again.
Please give me some feedback on how I can make my code better, if there is room for improvement.
import random
import time
import easygui
import sys
while True:
    rand = random.choice(["Heads", "Tails"])
    firstguess = easygui.buttonbox("Pick one", choices= ["Heads", "Tails"])
    if firstguess == rand:
        easygui.msgbox("Wow you win!")
    else:
        easygui.msgbox("Sorry you guessed wrong!")
    time.sleep(2)
    answer = easygui.buttonbox("Play again?", choices=["Yes","No"])
    if answer == "Yes":
        pass
    else:
        break
easygui.msgbox("Ok, see you later!")
sys.exit(0)
Answer: Your script isn't bad. But there is always room for improvement:
It's bad to mix tabs and spaces. The fact that your code broke when you pasted it here shows why. Good editors have a setting that will help you with this. (It is recommended that you use spaces only.)
You don't need sys.exit(0) at the end of your script, since Python will exit automatically when it reaches the end.
Choose your variable names carefully. rand is not very descriptive and firstguess is plain incorrect (every guess, not just the first, gets assigned to this variable).
Don't repeat yourself. Don't write ["Heads", "Tails"] twice, because if you have to change it for whatever reason, you have to remember to change it in two places. Store it in a variable.
You can simplify this:
if answer == "Yes":
    pass
else:
    break
To this:
if answer != "Yes":
    break
I don't know easygui, but it seems you should use a
ynbox
when you're asking "yes or no?". So you can change this:
answer = easygui.buttonbox("Play again?", choices=["Yes","No"])
if answer != "Yes":
    break
To this:
play_again = easygui.ynbox("Play again?")
if not play_again:
    break
Or simply:
if not easygui.ynbox("Play again?"):
    break
With all the changes made and after applying the rules from the Python style guide PEP8, your code looks like this:
import random # alphabetic order
import time
import easygui # third-party modules after standard-library modules
CHOICES = ["Heads", "Tails"] # ALL_CAPS for variables whose value never changes
while True:
    result = random.choice(CHOICES)
    guess = easygui.buttonbox("Pick one", choices=CHOICES)
    if guess == result:
        easygui.msgbox("Wow you win!")
    else:
        easygui.msgbox("Sorry you guessed wrong!")
    time.sleep(2)
    if not easygui.ynbox("Play again?"):
        break
easygui.msgbox("Ok, see you later!") | {
"domain": "codereview.stackexchange",
"id": 4488,
"tags": "python, beginner, game"
} |
Shannon theorem and zero padding | Question: Sorry if I ask a basic question but I am a bit lost.
I have a data set in the frequency domain; I need to go to the time domain to perform time-gating and then go back to frequencies. In order to do this, I compute an inverse Discrete Fourier Transform.
Shannon's theorem tells us that two times the difference between the greatest and the least frequency, respectively noted as $f_N$ and $f_0 ( \ne 0)$, contained in the signal must be less than the sample rate in the time domain ($F_t$). Do I need to do some zero padding to avoid aliasing?
Without zero padding in the frequency domain, what is the relation between $F_t$ and $f_N, f_0$? My guess is that $F_t=f_N-f_0$. Is that correct?
If I must do some zero padding, should I do it this way:
$$(0,\dots,0,f_0,\dots,f_N,0 \dots,0)$$
With enough zeros such that the first frequency contained in the padded signal is $0$ and the last is $2 f_N$ ?
If my guess above is correct, then the new sample rate is high enough to avoid aliasing, because it will be $F_t=2f_N>2(f_N-f_0)$.
Thanks for your help.
Answer: You'll not experience aliasing – in the end, the IDFT result per definition only contains frequency components that can be represented by the original frequency domain.
If you will, just stop worrying about aliasing: the DFT and inverse DFT are just base change matrices in $\mathbb C^N$, and are invertible.
Aliasing can only happen when something is non-invertible. This is not the case here; so, no problem.
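A quick numpy demonstration of that invertibility — the DFT/IDFT round trip is exact up to floating-point error, so no information is lost:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=64) + 1j * rng.normal(size=64)  # synthetic frequency-domain data

x = np.fft.ifft(X)       # to the time domain
X_back = np.fft.fft(x)   # and back again

# The round trip is exact: the (I)DFT is just an invertible change of
# basis in C^N, so nothing can alias away.
assert np.allclose(X, X_back)
```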
What you might see is leakage, but you're already taking care of that by windowing. | {
"domain": "dsp.stackexchange",
"id": 6543,
"tags": "sampling, dft"
} |
Is this over-fitting or something else? | Question: I recently put together an entry for the House Prices Kaggle competition for beginners. I decided to try my hand at understanding and using XGBoost.
I split Kaggle's 'training' data into 'training' and 'testing'. Then I fit and tuned my model on the new training data using KFold CV and got a score with scikit's cross_val_score using a KFold with shuffle.
The average score on the training set with this cross-validation was 0.0168 (mean squared log error).
Next, with the fully tuned model, I check its performance on the never before seen 'test' set (not the final test set for the Kaggle leader board). The score is identical after rounding.
So, I pat myself on the back because I've avoided over-fitting... or so I thought. When I made my submission to the competition, my score became 0.1359, which is a massive drop in performance. It amounts to being a solid 25 grand wrong on my house price predictions.
What could be causing this, if not overfitting?
Here is the link to my notebook, if it helps: https://www.kaggle.com/wesleyneill/house-prices-walk-through-with-xgboost
Answer: I'm not an avid Kaggler, but I do remember a case where the evaluation set for time-related data was randomly sampled (which favored nearest-neighbor approaches, since exact duplicates could exist).
I'm not sure whether there are clues on the evaluation data this time (perhaps you can tell). But a possible overfit could be time related.
If the test set is just a random subsample of the test/train part, and the evaluation part is not randomly sampled, but is for instance a holdout of the year 2011, you can still learn rules specific to the time dimension without noticing it in your test set.
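A sketch of the difference, using a hypothetical `YrSold` column like the one in the House Prices data:

```python
import pandas as pd

# Toy frame: 20 sales per year, 2006-2010 (illustrative only).
df = pd.DataFrame({"YrSold": [2006, 2007, 2008, 2009, 2010] * 20,
                   "SalePrice": range(100)})

# Random split: every year appears in both halves, so time-specific
# rules learned in training still "work" on the test rows.
random_test = df.sample(frac=0.2, random_state=0)

# Time-based holdout: evaluate only on the final year, the way a
# chronologically sampled leaderboard set would.
train = df[df["YrSold"] < 2010]
holdout = df[df["YrSold"] == 2010]
```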
A possible way of tackling that would be to resample the test set accordingly. | {
"domain": "datascience.stackexchange",
"id": 7493,
"tags": "xgboost, cross-validation, overfitting, kaggle"
} |
Inconsistency of Equations while finding Velocity in Satellite projection from one planet to another | Question: Here's a question that's creating some doubt to me.
Suppose there are 2 big spheres A and B of mass M and mass 4M, each of radius, R separated by a distance of 6R.
An object of mass, m is projected from the surface of A. What should be the minimum velocity of the body with which it should be projected so that it just reaches the surface of B.
A try to the question:-
We'll first find the neutral point where the gravitational forces by both the objects cancel out each other.
For this, we give a little displacement to the object/satellite, $d\vec{r}$ which is in direction from A to B.
Also, a unit vector $\hat{r}$ is assigned in the same direction.
Suppose the gravitational forces of the masses be $\vec{F_A}$ and $\vec{F_B}$. They are given below:
$\vec{F_A}$ = - $G\frac{Mm}{r^2}\hat{r}$
$\vec{F_B}$ = $G\frac{4Mm}{x^2}\hat{r}$
Negative sign is not present in $\vec{F_B}$ because $\hat{r}$ is in the direction of the force $\vec{F_B}$.
Now, at neutral point, P
$\vec{F_A}$ = - $\vec{F_B}$
$F_A = F_B$
$G\frac{Mm}{r^2} = G\frac{4Mm}{x^2}$ [Here, $x = 6R - r$]
$G\frac{Mm}{r^2} = G\frac{4Mm}{(6R-r)^2}$
$4r^2 = (6R - r)^2$
r = 2R
From this point, P ($r = 2R$), the gravitational force $F_B$ is sufficient to attract the satellite to the surface of B.
Let $W_A$ and $W_B$ be the work done by the gravitational forces $\vec{F_A}$ and $\vec{F_B}$ separately from the surface of A to point P.
Work done by force $\vec{F_A}$:
$dW_A = \vec{F_A}\cdot d\vec{r}$
$dW_A = F_A \, dr \cos 180°$
$dW_A = -F_A \, dr$ ---------Eq(a)
In equation a,
For limits,
Now, when object is at surface of A, r = R
And, when object is at neutral point P, r = 2R
$$\int \, dW_A = \int\limits_{R}^{2R} - F_A \, dr$$
$$W_A = - \int\limits_{R}^{2R} G\frac{Mm}{r^2} \, dr$$
$$W_A = -GMm \int\limits_{R}^{2R} \frac{1}{r^2} \, dr$$
$$W_A = -GMm \biggl[\frac{-1}{r}\biggr]_{R}^{2R} $$
$$W_A = -GMm \biggl[\frac{-1}{2R}-\frac{-1}{R}\biggr] $$
$$W_A = -\frac{GMm}{2R} $$
$$W_A = {\color{violet}{\int\limits_{R}^{2R} - F_A \, dr}} = {\color{pink}{-\frac{GMm}{2R}}} $$
Both equations, violet and the pink one are satisfying each other, infering that the work done by the gravitational force $\vec{F_A}$ will be negative.
Work done by force $\vec{F_B}$:
$dW_B = \vec{F_B}\cdot d\vec{r}$
$dW_B = F_B \, dr \cos 0°$
$dW_B = F_B \, dr$ ---------Eq(b)
In equation b,
For limits,
Now, when object is at surface of A, r = R -----> x = 6R - r = 5R
And, when object is at neutral point P, r = 2R -----> x = 6R - r = 4R
$$\int \, dW_B = \int\limits_{R}^{2R} F_B \, dr$$
$$W_B = \int\limits_{R}^{2R} G\frac{4Mm}{(6R - r)^2} \, dr$$
$$W_B = 4GMm\int\limits_{5R}^{4R} \frac{1}{x^2} \, dr$$
$$W_B = 4GMm \biggl[\frac{-1}{x} \biggr]_{5R}^{4R} $$
$$W_B = 4GMm \biggl[\frac{-1}{4R}-\frac{-1}{5R}\biggr] $$
$$W_B = 4GMm \biggl[\frac{-1}{20R} \biggr] $$
$$W_B = -\frac{GMm}{5R} $$
$$W_B = {\color{orange}{\int\limits_{R}^{2R} F_B \, dr}} = {\color{cyan}{-\frac{GMm}{5R}}}$$
So, here's my doubt:-
The orange equation implies that the work by gravitational force $\vec{F_B}$ is positive (why?), as it was derived from equation (b), and in that equation the angle between force $\vec{F_B}$ and displacement $d\vec{r}$ was 0°. And it is set equal to the cyan equation.
But in the cyan equation, there is a negative sign, which tells us that the work by gravitational force $\vec{F_B}$ is negative.
Hence, the orange equation is not consistent with the cyan equation.
My Doubt: Why?
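When I evaluate the two integrals numerically (taking $G = M = m = R = 1$, just as a check), the orange integral really does come out positive:

```python
# Midpoint-rule check with G = M = m = R = 1 (illustrative units only).
def integrate(f, a, b, n=100_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

W_A = integrate(lambda r: -1.0 / r**2, 1.0, 2.0)                # Eq(a) integrand
W_B_orange = integrate(lambda r: 4.0 / (6.0 - r)**2, 1.0, 2.0)  # orange integrand

print(W_A)          # ~ -0.5 = -GMm/(2R), matching the pink result
print(W_B_orange)   # ~ +0.2 = +GMm/(5R): positive, not -GMm/(5R)
```

So the numeric value disagrees with my analytic cyan result, which is exactly the inconsistency I'm asking about.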
Well, there's a lot still to be done, as we need to find the velocity with which the satellite should be projected. But before that, I need to add the two work terms, $W_A$ and $W_B$, and set the sum equal to the change in kinetic energy of the satellite to find the velocity of the body (if I'm not wrong).
But, I'm stuck here. So please help.
OK, I have done a lot in the post, but the doubt is a lot like the last one asked in this post: Work Done by Gravitational Force.
The difference is only that in that post I had a problem associated with the direction of the radial vector $d\vec{r}$. But here, I don't think there's any problem with that.
So, please tell me why the orange equation is not consistent with the cyan equation?
Answer: In your transition between the following two equations
$$W_B = \int\limits_{R}^{2R} G\frac{4Mm}{(6R - r)^2} \, dr$$
$$W_B = 4GMm\int\limits_{5R}^{4R} \frac{1}{x^2} \, dr$$
you've forgotten a minus sign due to $dr=-dx$. With the substitution $x = 6R - r$ done correctly, $W_B = -4GMm\int_{5R}^{4R} \frac{dx}{x^2} = +\frac{GMm}{5R}$, which is positive and consistent with your orange equation. | {
"domain": "physics.stackexchange",
"id": 44670,
"tags": "homework-and-exercises, newtonian-gravity, work, conservative-field"
} |