anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Objects rotating and rolling without slipping | Question: The question below confused the hell out of me. It's pretty much straightforward up until the point of where to use which radius. I know that I'll have to use the formula Tr = Iα, then substitute T = mg - ma (from Newton's second law for the falling yo-yo) and α = a/r. In which part of the formula should I use the radius of the axle, and when should I use the radius of the yo-yo? Thanks in advance. I'll leave you with the question:-
In 1993 a giant 400-kg yo-yo with a radius of 1.5 m was dropped from a crane at a height of 57 m. One end of the string was tied to the top of the crane, so the yo-yo unwound as it descended. Assuming that the axle of the yo-yo had a radius of 0.10 m, estimate its linear speed at the end of the fall.
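A quick energy-conservation estimate (my own sketch; it assumes the yo-yo is a uniform disk and takes g = 9.8 m/s², neither of which the question states):

```python
from math import sqrt

# Energy conservation: m*g*h = (1/2)*m*v**2 + (1/2)*I*w**2,
# with I = (1/2)*m*R**2 for a uniform disk (outer radius R) and
# w = v/r because the string unwinds from the axle (radius r).
# The mass cancels: v = sqrt(2*g*h / (1 + R**2 / (2*r**2)))
R, r, h, g = 1.5, 0.10, 57.0, 9.8

v = sqrt(2 * g * h / (1 + R**2 / (2 * r**2)))
print(round(v, 2))  # 3.14 m/s -- slow, because R**2/(2*r**2) = 112.5 is huge
```

Almost all of the potential energy goes into spinning the disk rather than translating it, which is why the answer comes out so small.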
Answer: The 1.5 m radius you'll use for the moment of inertia. The 0.1 m radius of the axle will relate the linear speed to the angular speed. | {
"domain": "physics.stackexchange",
"id": 29933,
"tags": "newtonian-mechanics, rotational-kinematics"
} |
ROSLAUNCH ON MATLAB | Question:
Hello all, I want to start a node using roslaunch from MATLAB. The node is defined on the remote system, and I've initialized a global node in MATLAB with the ROS master on the remote system. How do I accomplish this? Thanks in advance!
Originally posted by AnandGeorge on ROS Answers with karma: 56 on 2017-02-18
Post score: 1
Original comments
Comment by gvdhoorn on 2017-02-18:
I'm not sure, but I don't think the robotics toolbox (I assume you're using that) has any support for roslaunch.
If you can start your node / script in Matlab from the command line, then you could use a simple bash script that roslaunch can start.
Answer:
I FOUND A WAY!
posting the answer here, behold, it may look ugly!
The answer lies with the command system(command) in matlab. With that, you can run any command that you'd run in a terminal.
The first thing one would do is to simply run:
system('roslaunch xyz.launch');
But sadly it's not that simple!
Since MATLAB exports its own little world of LD_LIBRARY_PATH, you have to overwrite LD_LIBRARY_PATH when you want to execute a command that uses the LD_LIBRARY_PATH that is normally set in your environment outside of MATLAB. If you want to know what it is, run "echo $LD_LIBRARY_PATH" in a command line.
For me the output was: "/home/XXXX/catkin_ws/devel/lib:/opt/ros/kinetic/lib:/opt/ros/kinetic/lib/x86_64-linux-gnu". I will refer to it by the name LD_path in the examples below.
So to launch the launch file you have to run:
system(['export LD_LIBRARY_PATH="LD_path";' 'roslaunch xyz.launch']);
The thing is that it will stall the execution of any script you put it in since the command returns only after you terminate it...
To be able to continue the script we need to use the "&" character at the end of the command like the following example.
system(['export LD_LIBRARY_PATH="LD_path";' 'roslaunch xyz.launch &']);
Now the program runs in the background, but you won't be able to terminate it at the end of a script or function.
To be able to do so, we add the "echo $!" string at the end of the command, which prints the PID of the background process; system() returns it in cmdout. We can then kill the process by issuing the kill command. Example below:
[status,cmdout] = system(['export LD_LIBRARY_PATH="LD_path";' 'roslaunch xyz.launch & echo $!']);
system(['kill ' cmdout]);
If you want to make sure the process is killed after a "Ctrl-C" input, the onCleanup() matlab function is pretty useful.
It creates an object that executes the function passed in parameter when the object is deleted.
finishup = onCleanup(@() myCleanupFun(cmdout));
"myCleanupFun(cmdout)" being a function I wrote as follows:
function myCleanupFun(cmdout)
system(['kill ' cmdout])
end
In the case of a script file, since the variables are not deleted upon its termination, you have to call it inside a function. All the variables being local to the function workspace, they are deleted when the function returns, thus deleting the "finishup" variable and killing the process.
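For comparison, here is the same launch-in-background / kill-by-PID pattern in plain Python (my own sketch; a harmless `sleep 30` stands in for `roslaunch xyz.launch`, and `subprocess` plays the role of MATLAB's `system`):

```python
import subprocess
import time

# Start a long-running command in the background
# (stand-in for `roslaunch xyz.launch &`)
proc = subprocess.Popen(["sleep", "30"])
print("started PID", proc.pid)  # analogous to capturing `echo $!` in cmdout

# ... do other work while the process runs ...
time.sleep(0.1)

# Equivalent of issuing `kill <pid>` at cleanup time
proc.terminate()
proc.wait(timeout=5)
print("terminated, return code:", proc.returncode)
```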
Originally posted by wonwon0 with karma: 73 on 2018-05-22
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by manuelmelvin on 2019-07-07:
Thanks for the solution. It worked for me. But I have problem with terminating/stopping the launched file. Can you please help me by explaining more, how to stop/terminate the launched node from MATLAB itself? I also tried " system("pkill roslaunch") ", but it kills all the nodes launched.
Comment by Yew Hon Ng on 2019-09-10:
The same method works for rosrun as well, and I assume it works for pretty much all ROS commands. I used it for dynamic_reconfigure and it works like wonder. Thanks!
Comment by wonwon0 on 2019-11-01:
Sorry for the delay... @manuelmelvin If you have a MATLAB script and want to follow the solution I gave to terminate the process (roslaunch xyz.launch), you have to call your script from a function to ensure the variable "finishup" is deleted (thus triggering the execution of the "myCleanupFun()" function).
To do so, you create a function that does:
function my_script_launcher()
my_scipt_that_launches_the_launch_file;
end
If inside your script you call the finishup = onCleanup(@() myCleanupFun(cmdout)); line, once the script finishes it will stop only the xyz.launch execution, not all nodes.
Comment by man469 on 2022-11-06:
Hello, is your MATLAB installed on Ubuntu or Windows? I tried your method on Windows, but it doesn't seem to work. Looking forward to your reply. Thank you!!
Comment by wonwon0 on 2022-11-07:
Hello, this answer was made for Linux systems. I suspect it can be adapted for Windows, but I am not sure. | {
"domain": "robotics.stackexchange",
"id": 27055,
"tags": "matlab, roslaunch"
} |
Cost of greater than 1, is there an error? | Question: I'm computing cost in the following way:
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(y, y_)
cost = tf.reduce_mean(cross_entropy);
For the first cost, I am getting 0.693147, which is to be expected on a binary classification when parameters/weights are initialized to 0.
I am using one_hot labels.
However, after completing a training epoch using stochastic gradient descent I am finding a cost of greater than 1.
Is this to be expected?
Answer: The following piece of code does essentially what TF's softmax_cross_entropy_with_logits function does (cross-entropy on softmaxed y_ and y):
import scipy as sp
import numpy as np
def softmax(x):
e_x = np.exp(x - np.max(x))
return e_x / e_x.sum(axis=0)
def crossentropy(true, pred):
epsilon = 1e-15
pred = sp.maximum(epsilon, pred)
pred = sp.minimum(1-epsilon, pred)
ll = -sum(
true * sp.log(pred) + \
sp.subtract(1,true) * \
sp.log(sp.subtract(1, pred))
) / len(true)
return ll
==
true = [1., 0.]
pred = [5.0, 0.5]
true = softmax(true)
pred = softmax(pred)
print true
print pred
print crossentropy(true, pred)
==
[ 0.73105858 0.26894142]
[ 0.98901306 0.01098694]
1.22128414101
As you can see there is no reason why crossentropy on binary classification cannot be > 1 and it's not hard to come up with such example.
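A minimal pure-Python illustration of the same point (my own sketch, independent of TensorFlow): a single confidently wrong prediction already pushes the per-example loss well past 1.

```python
from math import exp, log

def softmax(xs):
    # Numerically stable softmax: shift by the max before exponentiating
    m = max(xs)
    es = [exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

true = [1.0, 0.0]            # one-hot label: class 0
pred = softmax([0.0, 5.0])   # logits strongly favor the wrong class

# Categorical cross-entropy: -sum(t * log(p)) over the true class
ce = -sum(t * log(p) for t, p in zip(true, pred) if t > 0)
print(ce)  # about 5.0 -- cross-entropy is unbounded above, so > 1 is normal
```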
** Crossentropy above is calculated as in https://www.kaggle.com/wiki/LogarithmicLoss, softmax as in https://en.wikipedia.org/wiki/Softmax_function
UPD: there is a great explanation of what it means when logloss is > 1 at SO: https://stackoverflow.com/a/35015188/1166478 | {
"domain": "datascience.stackexchange",
"id": 1203,
"tags": "machine-learning, tensorflow, gradient-descent, cost-function"
} |
Generating lists populated with random objects | Question: For the purposes of unit-testing in Python, I've written a very simple function for generating variable-length lists populated with random objects:
constructors = [
object,
dict,
lambda: bool(randint(0, 1)),
lambda: randint(1, 1000),
lambda: ''.join(choice(digits) for _ in range(randint(1, 10)))
]
choices = (
lambda _: choice(constructors)(),
lambda off: fake_list(off),
lambda off: tuple(fake_list(off))
)
def fake_list(off = None):
'''Creates a list populated with random objects.'''
off = (off or 0) + 1
return [choice(choices)(off) for _ in range(max(randint(1, 10) - off, 0))]
Lists generated with this function typically look something like this:
[(269, '6'), [{}, 990, 347, <object object at 0x7f51b230b130>, {}, [{}, '91921063', 302, '0047953', {}, ()], True], '70262', True]
Do you see any problems with it? Is there a more efficient/elegant way of doing this that you could suggest?
Quality of the generated primitives (strings, integers) is not of priority (i.e. string.digits should be enough).
Answer: Requirements may vary, so I find the "hardcoded" constructors hard to work with. Instead I would define a function that accepts such constructors as an argument.
I would also define the function returning the list comprehension as an inner function, so the off parameter is hidden from the public interface (the end-user does not need to know such implementation details). Besides, off is a terrible name for such a variable when all it does is track the recursion depth; better to call it depth, then.
Another thing that could be worth taking as a parameter (even though a sensible default would make sense most of the time) is the maximum length of the inner lists (and thus the maximum allowed recursion level):
import random
from string import digits
def fake_list(items, max_size=10):
"""Create a list populated with random objects constructed from `items`"""
choices = (
lambda _: random.choice(items)(),
lambda depth: random_list(depth),
lambda depth: tuple(random_list(depth)),
)
def random_list(depth):
return [
random.choice(choices)(depth + 1)
for _ in range(max(random.randint(1, max_size) - depth, 0))
]
if max_size < 1:
raise ValueError('minimal size of output should be at least 1')
return random_list(0)
def test():
constructors = [
object,
dict,
lambda: bool(random.randint(0, 1)),
lambda: random.randint(1, 1000),
lambda: ''.join(
random.choice(digits)
for _ in range(random.randint(1, 10))),
]
print(fake_list(constructors))
if __name__ == '__main__':
test()
That way I can call:
fake_list([lambda: float('-inf'), lambda: range(random.randint(-100, -10), random.randint(10, 100)), set])
if I need to. | {
"domain": "codereview.stackexchange",
"id": 30959,
"tags": "python, random"
} |
Generalised representation of a 2x2 Positive Operator-Valued Measure | Question: Let {$E_{i}$} be a set of 2x2 POVM operators, satisfying $\sum_{i}E_{i}=\mathbb{I_{2x2}}$.
We know that a general 2x2 Hermitian matrix (say, $H$) can be represented by
$$
H =
\left[{\begin{array}{cc}
a_{0}+a_{3} & a_{1}-ia_{2} \\
a_{1}+ia_{2} & a_{0}-a_{3}
\end{array}}\right]=a_{0}\mathbb{I_{2x2}}+a_{1}\sigma_{1}+a_{2}\sigma_{2}+a_{3}\sigma_{3}
$$
where the quantities $a_{k}$ are real, and $\sigma_{i}$'s represent the Pauli matrices.
Is there a similarly compact way to represent $E_{i}$ [basically satisfying (i) the Hermitian property and (ii) positive semi-definiteness], possibly with added constraints?
Answer: By applying extra constraints on the above generalised 2x2 Hermitian matrix to satisfy the positive semi-definiteness criterion, we can arrive at a generalised representation for a 2x2 POVM operator, say $E_{i}$.
The constraint is that the Hermitian matrix should have only non-negative
eigenvalues [1].
Let $E_{i}$ be
$$
E_{i} =
\left[{\begin{array}{cc}
a_{i0}+a_{i3} & a_{i1}-ia_{i2} \\
a_{i1}+ia_{i2} & a_{i0}-a_{i3}
\end{array}}\right].
$$
The characteristic equation for the above matrix would be
$$
\left|{\begin{array}{cc}
\lambda-(a_{i0}+a_{i3}) & -(a_{i1}-ia_{i2}) \\
-(a_{i1}+ia_{i2}) & \lambda-(a_{i0}-a_{i3})
\end{array}}\right|=0
$$
The roots of the characteristic polynomial should be non-negative,
$$
\lambda^{2}-2a_{i0}\lambda+a_{i0}^{2}-a_{i3}^{2}-a_{i1}^{2}-a_{i2}^{2}=0\\
(\lambda-a_{i0})^{2}=a_{i3}^{2}+a_{i1}^{2}+a_{i2}^{2}\\
\lambda=\pm k+a_{i0}
$$
where $k=\sqrt{a_{i1}^{2}+a_{i2}^{2}+a_{i3}^{2}}$. Hence, we have the condition:
$$
a_{i0} \ge k
$$
because $\lambda_{min}=-k+a_{i0}$.
By definition, $\sum_{i}E_{i}=\mathbb{I_{2x2}}$. In total, for $N$ POVM elements we have $4+N$ constraints:
$\sum_{i}a_{i0}=1$
$\sum_{i}a_{i1}=0$
$\sum_{i}a_{i2}=0$
$\sum_{i}a_{i3}=0$
$\forall i,\ a_{i0} \ge \sqrt{a_{i1}^{2}+a_{i2}^{2}+a_{i3}^{2}}$
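A numerical sanity check of the final condition (my own pure-Python sketch). For a 2x2 Hermitian matrix, positive semi-definiteness is equivalent to a non-negative trace and determinant, and here $\det E_i = a_{i0}^2 - (a_{i1}^2+a_{i2}^2+a_{i3}^2)$, so $\det E_i \ge 0$ is exactly $a_{i0} \ge k$:

```python
from math import sqrt

# Pauli decomposition E = a0*I + a1*s1 + a2*s2 + a3*s3 as an explicit matrix
def E(a0, a1, a2, a3):
    return [[complex(a0 + a3, 0), complex(a1, -a2)],
            [complex(a1, a2),     complex(a0 - a3, 0)]]

# A 2x2 Hermitian matrix is PSD iff trace >= 0 and det >= 0;
# here det = a0**2 - (a1**2 + a2**2 + a3**2), i.e. det >= 0 <=> a0 >= k.
def is_psd(m):
    tr = (m[0][0] + m[1][1]).real
    det = (m[0][0] * m[1][1] - m[0][1] * m[1][0]).real
    return tr >= 0 and det >= -1e-12

a1, a2, a3 = 0.3, -0.2, 0.1
k = sqrt(a1**2 + a2**2 + a3**2)
print(is_psd(E(k + 0.01, a1, a2, a3)))  # True:  a0 >= k
print(is_psd(E(k - 0.01, a1, a2, a3)))  # False: a0 <  k
```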
[1] Weisstein, Eric W. "Positive Semidefinite Matrix." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/PositiveSemidefiniteMatrix.html | {
"domain": "physics.stackexchange",
"id": 47281,
"tags": "operators, quantum-information, hilbert-space, measurements, density-operator"
} |
Thermal energy of a system | Question: If I had a hot cup of tea and I added to it cold milk, what would happen is that tea will lose some of its thermal energy to the milk. But why do we notice that our liquid is no longer as hot as before adding milk? After all, the total thermal energy of the system (the cup) is conserved even if there has been a transfer of heat between the two. Suppose the cup is an isolated system.
Answer: Before mixing the average kinetic energy of the molecules which make up the tea is greater than the average kinetic energy of the molecules which make up the milk.
This is restatement of the fact that the tea is hotter than the milk as the temperature of a substance depends on the average kinetic energy of the molecules which make up the substance.
When you mix the tea and the milk together the molecules of the tea and the milk collide and eventually reach an average kinetic energy (temperature) which is less than the average kinetic energy (temperature) of the tea before mixing and is greater than the average kinetic energy (temperature) of the milk before mixing.
So the collisions between the molecules redistribute the kinetic energy amongst the molecules and what is called thermal equilibrium is reached.
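The familiar result can be made quantitative with a simple heat balance (a sketch with example figures of my own; tea and milk are both treated as water, so the common specific heat cancels):

```python
# Heat balance: m1*c*(T1 - Tf) = m2*c*(Tf - T2)
# =>  Tf = (m1*T1 + m2*T2) / (m1 + m2)   (c cancels)
m_tea, T_tea = 0.20, 90.0    # kg, deg C  (assumed example values)
m_milk, T_milk = 0.05, 5.0

T_final = (m_tea * T_tea + m_milk * T_milk) / (m_tea + m_milk)
print(T_final)  # 73.0 -- the total thermal energy is conserved, but it is
                # now shared among more molecules, so the mixture is cooler
```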
If there is no loss of heat or chemical reaction, then the kinetic energy of the hot tea plus the kinetic energy of the cold milk is equal to the kinetic energy of the mixture, but on average the tea molecules have less kinetic energy after mixing and the milk molecules have more kinetic energy after mixing. | {
"domain": "physics.stackexchange",
"id": 31911,
"tags": "thermodynamics, energy-conservation, temperature"
} |
Electrolysis involving K+ and Li+ ions | Question:
Need clarification on Q31a) in the image.
Possibly a dumb question but the book's answer might be wrong. The book says lithium ions will be reduced instead of potassium ions. How is this true when K+ ions are stronger oxidants?
Their answer for part d even reiterates that Li+ will react in preference to K+, so I think I may be wrong... not sure how, though.
Thanks
Answer: $\ce{Li+}$ 's low standard reduction potential value applies to aqueous solutions only.
$\ce{Li+}$ has very high polarising power due to its small size (and in turn a high hydration energy). This is why $\ce{Li+}$ has a very low reduction potential compared to other alkali metals in aqueous solutions.
When you deal with a molten salt, there is no water, so the idea of hydration energy makes no sense anymore. Therefore, the electrochemical series ordering for aqueous solutions isn't applicable to a melt of alkali metal salts.
As you go down the alkali metal group, the ionisation energy decreases. The tendency of an atom to give its electron (or get oxidised) increases. Therefore, potassium oxidises more easily than lithium or in other words, potassium has a lesser tendency to get reduced compared to lithium.
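The ionisation-energy trend invoked here is easy to check against standard tabulated values (kJ/mol; the figures below are common textbook numbers, not from the original answer):

```python
# First ionisation energies of the alkali metals, kJ/mol (textbook values)
first_IE = {"Li": 520, "Na": 496, "K": 419, "Rb": 403, "Cs": 376}

order = ["Li", "Na", "K", "Rb", "Cs"]
values = [first_IE[m] for m in order]
print(values)  # strictly decreasing down the group:
               # K gives up its electron more easily than Li,
               # so in the melt Li+ is reduced in preference to K+
```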
As lithium has a higher tendency to get reduced than potassium, lithium gets deposited instead of potassium. | {
"domain": "chemistry.stackexchange",
"id": 8300,
"tags": "electrochemistry, electrolysis"
} |
Shadow of a Jovian moon over the Great Red Spot | Question: Where can I find pictures of the shadow of any of the Jovian moons partially covering the Great Red Spot?
A series of such pictures over time would even be better. The idea is to learn more about the structure of the storm by observing how the shadow behaves as it is passing over the boundaries and body of the Great Red Spot storm.
Answer: Because of the current orientation of the plane of the satellites' orbits, only Io's shadow falls on the Red Spot. Since Io is the fastest moving of Jupiter's major moons, its shadow must fall on the Red Spot fairly often. I'd scan the various archives of Jupiter images to look for images that meet this criterion:
http://atmos.nmsu.edu/Jupiter/jupiter.html
http://www.arksky.org/alpo/index.php
http://www.damianpeach.com/jupiter.htm
Interesting idea! | {
"domain": "physics.stackexchange",
"id": 3211,
"tags": "astronomy, moon, imaging, jupiter"
} |
Symmetric Square | Question: I was given an assignment to ensure that a list was symmetric.
I built it in a way that worked and it passed the tests. But I wanted to see if it could've been written more efficiently and still readable.
Original question
A list is symmetric if the first row is the same as the first column,
the second row is the same as the second column and so on. Write a
procedure, symmetric, which takes a list as input, and returns the
boolean True if the list is symmetric and False if it is not.
My solution
def symmetric(master):
master_count = len(master)
if not master: # checks to see if the list is empty
return True
if master_count != len(master[0]): #check to make sure the list is a square
return False
i = 0
while i < master_count:
j = 0
while j < master_count:
print str(master[i][j]) +" "+str(master[j][i])
if master[i][j] != master[j][i]:
return False
j = j + 1 # close the loop
i = i + 1 # close the loop
return True
Tests & expected responses
print symmetric([[1, 2, 3],
[2, 3, 4],
[3, 4, 1]])
#>>> True
print symmetric([["cat", "dog", "fish"],
["dog", "dog", "fish"],
["fish", "fish", "cat"]])
#>>> True
print symmetric([["cat", "dog", "fish"],
["dog", "dog", "dog"],
["fish","fish","cat"]])
#>>> False
print symmetric([[1, 2],
[2, 1]])
#>>> True
print symmetric([[1, 2, 3, 4],
[2, 3, 4, 5],
[3, 4, 5, 6]])
#>>> False
print symmetric([])
#>>> True
Answer: You could use range (or xrange for Python 2) to make the code shorter and clearer.
So instead of
i = 0
while i < master_count:
j = 0
while j < master_count:
print str(master[i][j]) +" "+str(master[j][i])
if master[i][j] != master[j][i]:
return False
j = j + 1 # close the loop
i = i + 1 # close the loop
We have
for i in range(master_count):
for j in range(master_count):
if master[i][j] != master[j][i]:
return False
Actually we're doing twice the amount of work we need to. If master[i][j] == master[j][i], we don't need to check the opposite:
for i in range(master_count):
for j in range(i + 1, master_count):
if master[i][j] != master[j][i]:
return False
Alternatively, you could use all and a generator expression:
return all(master[i][j] == master[j][i]
for i in range(master_count)
for j in range(i + 1, master_count))
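One more idiomatic option, not mentioned in the original answer: compare the matrix with its own transpose, since `zip(*matrix)` yields the columns. This also returns False for non-square input and True for `[]`, consistent with the early-return in the question's code:

```python
def symmetric(matrix):
    # A list-of-lists is symmetric iff it equals its own transpose;
    # zip(*matrix) produces the columns as tuples, so convert them to lists.
    return [list(col) for col in zip(*matrix)] == matrix

print(symmetric([[1, 2, 3], [2, 3, 4], [3, 4, 1]]))           # True
print(symmetric([[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]))  # False
```

This trades the early-exit efficiency of the loop versions for brevity, which is usually a fine deal at these sizes.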
I would also reconsider the variable names, e.g. matrix instead of master, dim or n instead of master_count. | {
"domain": "codereview.stackexchange",
"id": 12094,
"tags": "python, matrix, homework"
} |
How can I add 'foxy' as an acceptable version-tag on answers.ros.org | Question:
The Instruction for tags is:
"You must at least tag the rosdistro you are using, such as indigo, kinetic, lunar, melodic, or ardent."
The error text if you fail to comply is:
"At least one of the following tags is required : boxturtle, cturtle, diamondback, electric, fuerte, groovy, hydro, indigo, jade, kinetic, lunar, melodic, noetic, r2b3, ardent, bouncy, crystal, dashing, eloquent, ros1 or ros2"
Shouldn't these match? And why no love for Foxy? :)
Originally posted by dawonn_haval on ROS Answers with karma: 103 on 2020-08-07
Post score: 0
Answer:
This is something that only the ROS Answers admins can do.
This may change with the Askbot version, but there should be a list of "Mandatory tags" on this page:
https://answers.ros.org/settings/FORUM_DATA_RULES/
Originally posted by chapulina with karma: 366 on 2020-08-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by dawonn_haval on 2020-08-07:
How does one go about notifying an admin about such a thing? Is there an issue tracker somewhere?
Comment by chapulina on 2020-08-07:
Here it is: https://github.com/ros-infrastructure/answers.ros.org | {
"domain": "robotics.stackexchange",
"id": 35386,
"tags": "ros, ros2, meta"
} |
What dictates the efficiency of a semiconductor? | Question: Semiconductors can be used for a heat exchange but are less efficient than a Freon air-conditioning system. What dictates this efficiency?
Answer: The efficiency of semiconductors for refrigeration (Peltier junctions) is established by the ratio of the junction's thermal conductivity to its electrical conductivity. Here is why:
For best performance, you would like the junction to conduct heat poorly, so the cold generated on one side of the junction is not immediately cancelled by the heat generated on the other.
At the same time, you also want the electrical conductivity of the junction to be as high as possible, so $I^2R$ losses in the junction are minimized.
Unfortunately, the thermal and electrical conductivities are tied together in a way that makes it very difficult to maximize one while simultaneously minimizing the other.
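This trade-off is conventionally packaged into the dimensionless thermoelectric figure of merit $ZT = S^2\sigma T/\kappa$. The numbers below are rough ballpark values for a bismuth-telluride junction (my own illustration, not from the original answer):

```python
# ZT = S^2 * sigma * T / kappa : electrical conductivity in the numerator,
# thermal conductivity in the denominator -- exactly the ratio in the answer.
S = 200e-6      # Seebeck coefficient, V/K   (ballpark for Bi2Te3)
sigma = 1.0e5   # electrical conductivity, S/m
kappa = 1.5     # thermal conductivity, W/(m K)
T = 300.0       # operating temperature, K

ZT = S**2 * sigma * T / kappa
print(ZT)  # about 0.8 -- good vapor-compression hardware is far more efficient
```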
Materials scientists try to overcome this effect by coming up with weird junction compositions from rarely-visited corners of the periodic table, but with limited success. | {
"domain": "physics.stackexchange",
"id": 80502,
"tags": "semiconductor-physics, scale-invariance"
} |
Specify a random process such that $R_Y[0]=3+u$, $R_Y[1]=-2+u$, and $R_Y[k]=u$ otherwise | Question: There is a problem where I should specify a real-valued random process $Y[n]$ such that the autocorrelation function $R_Y[k]$ satisfies
$$R_Y[0]=3+u,\ R_Y[1]=R_Y[-1]=-2+u,\ \text{and}\ R_Y[k]=u, |k|>1.$$
The first subproblem is to find a feasible set of $u$ with $|u|>0$. I found the region $-\frac{1}{2}\leq u$.
The main problem is to specify a random process $Y[n]$.
Considering that $R_Y[k]=u$ for $|k|>1$, I thought $Y[n]$ is likely to have a DC component. Besides, since the autocorrelation function has distinct values for $k=0,1$, I set
$$Y[n]=a\,w[n]+b\,w[n-1]+c$$
for constants $a,b,c$, where $w[n]$ is white Gaussian noise with zero mean and unit variance, with autocorrelation $R_w[k]=\delta[k]$. But the final result is that $a,b$ are imaginary numbers, which makes $Y[n]$ complex-valued.
Is there any way I can get such a random process? The problem says:
a) specify a random process with the given autocorrelation function (that is, specify the stochastic generation mechanism for the process).
b) Is there a unique random process with the given autocorrelation function? If the answer is no, identify possible sources of difference between the various random processes that all have this given autocorrelation function.
Answer: If I read your problem correctly, $R_Y[k]$ is given by
$$R_Y[k]=3\delta[k]-2\big(\delta[k-1]+\delta[k+1]\big)+u\tag{1}$$
The power spectrum is the DTFT of $R_Y[k]$, which is given by
$$S_Y(\omega)=3-4\cos(\omega)+2\pi u\,\delta(\omega),\qquad -\pi\le\omega<\pi\tag{2}$$
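A quick numerical check of that claim (my own sketch; it evaluates only the smooth part of $(2)$, since the delta term acts at the single point $\omega = 0$):

```python
from math import cos, pi

# The term 2*pi*u*delta(w) acts only at w = 0, so for S_Y(w) >= 0 the
# smooth part 3 - 4*cos(w) would have to be non-negative -- it is not.
ws = [2 * pi * k / 1000 - pi for k in range(1001)]
smooth = [3 - 4 * cos(w) for w in ws]
print(min(smooth))  # -1.0, attained at w = 0
```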
From $(2)$ it is obvious that $S_Y(\omega)\ge 0$ cannot be satisfied, regardless of the value of $u$. Hence, $R_Y[k]$ is not a valid autocorrelation function, and, consequently, there is no solution to your problem. | {
"domain": "dsp.stackexchange",
"id": 10827,
"tags": "autocorrelation, random-process"
} |
Optimizing MD5 OpenSSL Implementation in C for Precomputed Hash | Question: I am trying to port a piece of code from Python to C. What the code does is that it generates a list of precomputed MD5 hash from a plaintext wordlist.
The code also has its variations in SHA-1 and SHA-2 families in all 3 programming languages. The structure is the same.
I originally wrote the code in Bash, which was slow, so I ported it to Python, which was significantly faster. Now I have ported the code to C, hoping that it will run even faster. However, even with the -Ofast flag in gcc, the code still runs slower than the Python version (the difference in execution time increases dramatically with the input size).
I have also doubted about the efficiency of the OpenSSL crypto libraries, but it seems to me that they are relatively well-established after reading through their Documentation.
I am guessing that it is the nested loop that I implemented in the C version of the code that is slowing the whole thing down.
Any suggestion to increase the performance?
Bash Version:
#!/bin/bash
while read line
do
printf $line | md5
done
Python Version:
import hashlib
infile = 'wordlist'
outfile = open("precomputed","a")
with open(infile, "r") as inf:
for line in inf:
outfile.write(hashlib.md5(line.strip().encode('utf8')).hexdigest()+'\n')
C Version:
//-----------Libraries
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <openssl/md5.h>
//------------------------------Main Function------------------------//
int main()
{
//------Define infile, outfile, file length. Define string to be read.---//
FILE *infile, *outfile;
char *string = NULL;
size_t len = 0;
ssize_t read;
//------Open File stream for read(r) and write (w). Error Handling.--//
//Part of MD5 Hash Function (Taken out of While Loop for Optimization)
int md5;
unsigned char result[MD5_DIGEST_LENGTH];
//
infile = fopen("file.txt", "r");
if (infile == NULL)
exit(EXIT_FAILURE);
outfile = fopen("MD.txt","w");
if (outfile == NULL)
exit(EXIT_FAILURE);
//-------------Read line-by-line in using a while loop.--------------//
while ((read = getline(&string, &len, infile)) != -1) {
string[strcspn(string, "\n")] = 0; // Remove newline '\n'
//-------------------------MD5 Hash Function-------------------------//
MD5(string, strlen(string), result);
//output
for(md5 = 0; md5 < MD5_DIGEST_LENGTH; md5++)
fprintf(outfile,"%02x",result[md5]); //convert the hash to hex
fprintf(outfile,"\n"); //newline for the output file
}
free(string); //free string
fclose(infile); // close file streams
fclose(outfile);
exit(EXIT_SUCCESS); //Program Ends
}
I can provide the execution time with different size of input if requested. Any help will be appreciated.
Answer: I'll start with a review of your current code before suggesting possible performance
improvements.
Various comments in your code do not add information and can be removed, for example
//------------------------------Main Function------------------------//
int main()
free(string); //free string
exit(EXIT_SUCCESS); //Program Ends
Always use curly braces { } with if-statements, even if the if or else part consists only
of a single statement. That helps to avoid errors if the code is edited later.
Declare variables at the narrowest scope where they are used, and not at the top
of the function. For example
FILE *infile = fopen("file.txt", "r");
if (infile == NULL) {
exit(EXIT_FAILURE);
}
FILE *outfile = fopen("MD.txt","w");
if (outfile == NULL) {
exit(EXIT_FAILURE);
}
or
for (int md5 = 0; md5 < MD5_DIGEST_LENGTH; md5++) { ... }
Declaring md5 at the top does not increase the performance.
Some variable names can be improved (of course this is partially opinion-based):
char *string is actually the current line.
size_t len is not the length of the current string,
but the capacity of the buffer (re)allocated by getline().
ssize_t read is – strictly speaking – not the number of bytes read into the
buffer because it excludes the NUL character.
int md5 is not an MD5 value but an index into the buffer containing the
MD5 hash.
I would rename unsigned char result[MD5_DIGEST_LENGTH] to md5hash.
Fix compiler warnings:
MD5(string, strlen(string), result);
// Passing 'char *' to parameter of type 'const unsigned char *' converts between pointers to integer types with different sign
You already use different exit codes to indicate success or failure of the program,
which is good. In addition, it is helpful to print some message (to standard error)
in the error case.
According to the C11 standard, the declaration of main should be one of
int main(void) { /* ... */ }
int main(int argc, char *argv[]) { /* ... */ }
(see for example What should main() return in C and C++?
on Stack Overflow). The final return statement can be omitted, there is an implicit
return 0.
Performance improvements
After reading a line from the input file, the string is traversed twice:
while ((read = getline(&string, &len, infile)) != -1) {
string[strcspn(string, "\n")] = 0; // Remove newline '\n'
MD5(string, strlen(string), result);
// ...
}
First to find a terminating newline character, and then again to determine the length.
This is not necessary because getline() returns the number of characters written to
string, i.e. a newline character can only be at position read - 1:
while ((read = getline(&string, &len, infile)) != -1) {
// Remove trailing newline character
if (read > 0 && string[read - 1] == '\n') {
read -= 1;
string[read] = 0;
}
MD5(string, read, result);
// ...
}
However, the impact of this change depends on the line length, and I could not observe
a significant difference in my test.
In order to find further performance bottlenecks, I profiled the program now with
Xcode/Instruments (using the input file generated by ./crunch 7 7 1234567890)
This immediately revealed that most of the time is spent in fprintf().
Possible reasons are:
String formatting is slow.
All stdio print operations are thread-safe, and therefore have to acquire and release locks for each call.
The solution is to:
Write a custom function for converting the MD5 hash to a hex string.
Call printf once for the entire string instead of once per byte, to reduce the number of function calls.
On my computer (a 1.2 GHz Intel Core m5 MacBook) this reduced the time for processing
the above file from 24.5 seconds to 4.4 seconds.
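For reference, the same nibble-lookup idea sketched in Python (illustration of the technique only; in real Python code you would simply call `bytes.hex()`, which the check below compares against):

```python
def hex_string(data):
    # Manual nibble lookup, same trick as the C hexString() further down
    digits = "0123456789abcdef"
    return "".join(digits[b >> 4] + digits[b & 0x0F] for b in data)

print(hex_string(b"\x00\xabM"))  # 00ab4d -- matches bytes.hex()
```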
Putting it together
With all those modifications, we have
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <openssl/md5.h>
// Format the data as a hexadecimal string. The buffer must have
// space for `2 * length + 1` characters.
const char *hexString(unsigned char *data, size_t length, char *buffer) {
const char *hexDigits = "0123456789abcdef";
char *dest = buffer;
for (size_t i = 0; i < length; i++) {
*dest++ = hexDigits[data[i] >> 4];
*dest++ = hexDigits[data[i] & 0x0F];
}
*dest = 0;
return buffer;
}
int main(void) {
FILE *infile = fopen("file.txt", "r");
if (infile == NULL) {
perror("Cannot open input file");
exit(EXIT_FAILURE);
}
FILE *outfile = fopen("MD.txt","w");
if (outfile == NULL) {
perror("Cannot open output file");
exit(EXIT_FAILURE);
}
// Read file line-by-line
char *line = NULL;
size_t linecap = 0;
ssize_t lineLength;
while ((lineLength = getline(&line, &linecap, infile)) != -1) {
if (lineLength > 0 && line[lineLength - 1] == '\n') {
// Remove newline character
lineLength -= 1;
line[lineLength] = 0;
}
// Compute MD5 hash
unsigned char md5hash[MD5_DIGEST_LENGTH];
MD5((unsigned char*)line, lineLength, md5hash);
// Print hash as hex string
char hexBuffer[2 * MD5_DIGEST_LENGTH + 1];
fputs(hexString(md5hash, MD5_DIGEST_LENGTH, hexBuffer), outfile);
fputc('\n', outfile);
}
free(line);
// Close output files
fclose(infile);
fclose(outfile);
}
Further suggestions
The names of input and output file are compiled into the program, which makes it inflexible. Possible alternatives are
Pass the file names as arguments on the command line, or
Make your program read from standard input and write to standard output.
Implement a -h option to show a short help/usage of the program. | {
"domain": "codereview.stackexchange",
"id": 30254,
"tags": "performance, algorithm, c, openssl"
} |
Integrating a laplacian over $\mathbb{R}^2$ (related to gravitational lensing) | Question: I am reading somebody's notes on gravitational lensing, and I come across the following:
\begin{equation}
\kappa = \frac{1}{2} \nabla^2_x \Psi
\end{equation}
\begin{equation}
\Rightarrow \Psi = \frac{1}{\pi} \int_{\mathbb{R}^2}\kappa(\vec{x}')\ln|\vec{x}-\vec{x}'| \text{d}^2 x'.
\end{equation}
This is kind of a math question, I guess, but if anyone would care to explain this to me I would appreciate it.
FYI, here $\Psi$ is effective lensing potential and $\kappa$ is convergence, and $\vec{x}$ is like a nondimensionalized vector in the lens plane, e.g. $\vec{\xi}/\xi_0$ where $\vec{\xi}$ is an actual vector (dimension of length) that lives in the lens plane, and $\xi_0$ can be any length scale.
Answer: Let's start with a different problem: a charge distribution in 3D. We can write down the electric potential created by this charge distribution by imagining it as a bunch of point charges and integrating over all of them. The potential of a unit point charge ($q = 1$) at a point $\vec{x}'$ is
$$
V(\vec{x}) = \frac{1}{4 \pi \epsilon_0} \frac{q}{|\vec{x} - \vec{x}'|}.
$$
which implies that the potential due to a charge distribution $\rho(\vec{x})$ is
$$
V(\vec{x}) = \frac{1}{4 \pi \epsilon_0} \int_{\mathbb{R}^3} \rho(\vec{x}') \frac{1}{|\vec{x} - \vec{x}'|} d^3 x'. \qquad \qquad (1)
$$
This is because the potential equation is linear, so the solutions for individual point charges can be superposed to give (1).
But we also know that the potential and the charge density are related by
$$
\nabla^2 V = - \frac{\rho}{\epsilon_0}. \qquad \qquad (2)
$$
This implies that if we need to solve an equation of the form (2), we can find the solution by treating the function $\rho(\vec{x})$ as a collection of "point charges" and calculating their combined potential.
In 2-D, the basic idea is the same, but now the "potential" due to a unit point charge is
$$
V(\vec{x}) = -\frac{1}{2 \pi} \ln |\vec{x} - \vec{x}'|.
$$
(You can prove this using Gauss's Law in 2D if you want.) This means that if we want to solve the equation
$$
\nabla^2 V= 2\kappa(\vec{x}),
$$
then by analogy, the solution to this equation is
$$
V(\vec{x}) = \frac{1}{\pi} \int_{\mathbb{R}^2} \kappa(\vec{x}') \ln |\vec{x} - \vec{x}'| \, d^2 \vec{x}'.
$$
This technique of solving differential equations in this way is known as the method of Green's functions; essentially, if you have a linear differential equation of the form
$$
\mathcal{D} \phi = \rho(x)
$$
then you can solve the equation for $\rho(\vec{x})$ equal to a delta function $\delta(\vec{x} - \vec{x}')$, and then use superposition to parlay this into a general solution in the form of an integral over the "charge distribution". | {
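As a sanity check on the 2-D kernel (an editor's sketch, not part of the original answer), sympy confirms that $\ln|\vec{x}|$ is harmonic away from the origin — exactly the property that makes it the Green's-function kernel of the 2-D Laplacian, up to normalisation:

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
V = sp.log(sp.sqrt(x**2 + y**2))          # 2-D point-source potential ln|x - x'|
laplacian = sp.diff(V, x, 2) + sp.diff(V, y, 2)
assert sp.simplify(laplacian) == 0        # harmonic everywhere except the origin
```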
"domain": "physics.stackexchange",
"id": 46904,
"tags": "homework-and-exercises, differential-equations, greens-functions, gravitational-lensing"
} |
Nodehandle destroyed and shutdown the ros node | Question:
Hi all
I was wondering what happens to previously created subscriptions, publications, and services after the NodeHandle is destroyed. I have experienced such a case: I create a NodeHandle in the constructor of a class, and in my main() I just create an object of that class. The constructor is called when the object is created, so the NodeHandle starts the node, subscribes and publishes, and then shuts the node down, since the life cycle of the NodeHandle is confined to the constructor. After that, the NodeHandle (let's assume it is the only NodeHandle in this example) is destroyed and shutdown() is invoked.
Will this work? Will the publish and subcribe work in this way below? Thank you all.
class A {
public:
    A()
    {
        ros::NodeHandle private_nh("~");
        pub = private_nh.advertise...
        sub_ = private_nh.subscribe...
    }
};
int main()
{
ros::init(...);
A a;
ros::spin();
}
Originally posted by dalishi on ROS Answers with karma: 89 on 2014-03-19
Post score: 1
Original comments
Comment by dalishi on 2014-03-19:
The nodehandle declared in the constructor should end its life cycle (destroyed) after the constructor function has been successfully executed, which according to the http://wiki.ros.org/roscpp/Overview/Initialization%20and%20Shutdown, the destroy of last nodehandle instance will kill all open subscriptions, publications, service calls, and service servers. Please correct me if I am wrong.
Answer:
You can always make sure there exists another NodeHandle outside of your class, e.g. in the main function, which is initialized before you create the object and lives as long as you want the node to live. However, as @Wolf points out, I see no benefit in not rather storing a node handle as a (private) member of your class and avoid potential bugs of your class depending on the existance of an external node handle.
Originally posted by demmeln with karma: 4306 on 2014-03-20
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by dalishi on 2014-03-20:
Hi demmeln, thanks for your reply. The reason I raised this problem is I have seen NavfnRos as in navfn_ros.cpp of the navigation stack, it declared temporary NodeHandle in the constructor and no other member NodeHandle nor external NodeHandle in the main(), which confused me. I will take your advice. | {
"domain": "robotics.stackexchange",
"id": 17350,
"tags": "ros, nodehandle, shutdown, publisher"
} |
URL-safe pseudorandom string generator in C# | Question: public static class SimpleToken
{
const string TOKENALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-.";
static string NewToken(int length = 16)
{
var rnd = new RNGCryptoServiceProvider();
var tokenBytes = new byte[length];
rnd.GetBytes(tokenBytes);
var token =
Enumerable
.Range(0, length)
.Select(i => TOKENALPHABET[tokenBytes[i] % TOKENALPHABET.Length])
.ToArray();
return new String(token);
}
}
I needed a quick and dirty(?) way to generate long urls for onetime use. It's a simple login-scheme where a user enters his e-mail and gets a one-time URL for logging in. The URL is discarded after one use.
I.e. http://example.com/tokenlogin/3cuzLkh8GcANjqnWcijEeJIHphHx6ZDwfj-2XTR4bfkkqmzmmFYAY2tWsZWST1.5
Answer:
const string TOKENALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-.";
I learnt the hard way that the problem with using . in a URL token sent by e-mail is that certain mail clients (Outlook in particular) will attempt to auto-detect URLs in a plain text email, but will exclude a trailing . from the inferred URL, so when your user clicks on the auto-generated link they send an invalid token. I suggest that you change . to _.
static string NewToken(int length = 16)
Length in what units? Generally with cryptographic stuff it's clearer to explicitly use bits as the unit of length.
var rnd = new RNGCryptoServiceProvider();
As already noted in comments, this is IDisposable and the standard using pattern is preferred.
var token =
Enumerable
.Range(0, length)
.Select(i => TOKENALPHABET[tokenBytes[i] % TOKENALPHABET.Length])
.ToArray();
Firstly, this could be simplified to
var token =
tokenBytes
.Select(b => TOKENALPHABET[b % TOKENALPHABET.Length])
.ToArray();
But secondly, by only using 6 bits per byte you're throwing away 25% of the entropy which the system just produced for you. On busy servers, cryptographic-grade entropy is a valuable resource and you should only request as much as you need. | {
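To illustrate the bit accounting (a sketch in Python rather than C#, with made-up names): a 64-symbol alphabet consumes exactly 6 bits per character, and since 64 = 2^6, taking 6-bit slices introduces no modulo bias. A 16-character token then needs only 96 bits of entropy, not 128:

```python
import secrets

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"

def new_token(length=16):
    # Request exactly 6 bits of entropy per character, nothing wasted.
    bits = secrets.randbits(6 * length)
    return "".join(ALPHABET[(bits >> (6 * i)) & 0x3F] for i in range(length))

token = new_token()
assert len(token) == 16 and all(c in ALPHABET for c in token)
```

Note the alphabet here also swaps `.` for `_`, per the mail-client caveat above.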
"domain": "codereview.stackexchange",
"id": 29653,
"tags": "c#, random, url"
} |
Given two optimal policies, is an affine combination of them also optimal? | Question: If there are two different optimal policies $\pi_1, \pi_2$ in a reinforcement learning task, will the linear combination (or affine combination) of the two policies $\alpha \pi_1 + \beta \pi_2, \alpha + \beta = 1$ also be an optimal policy?
Here I give a simple demo:
In a task, there are three states $s_0, s_1, s_2$, where $s_1, s_2$ are both terminal states. The action space contains two actions $a_1, a_2$.
An agent will start from $s_0$, it can choose $a_1$, then it will arrive $s_1$,and receive a reward of $+1$. In $s_0$, it can also choose
$a_2$, then it will arrive $s_2$, and receive a reward of $+1$.
In this simple demo task, we can first derive two different optimal policies $\pi_1$, $\pi_2$, where $\pi_1(a_1|s_0) = 1$, $\pi_2(a_2 | s_0) = 1$. The combination of $\pi_1$ and $\pi_2$ is
$\pi: \pi(a_1|s_0) = \alpha, \pi(a_2|s_0) = \beta$. $\pi$ is an optimal policy, too, because any policy in this task is optimal.
Answer: Short answer
Two policies are different if they take different actions in a specific state $s$ (or they give different probabilities of taking those actions in $s$). There can be more than one optimal policy for a given value function: this only happens when two actions have the same value in a given state. Nevertheless, both policies lead to the same expected return. So, although they take different actions, those actions lead to the same expected return, so it doesn't matter which one you take: both actions are optimal.
Long answer
There are a few important points that need to be understood before understanding that an affine combination of optimal policies is also optimal.
A policy $\pi$ is optimal if and only if $v_\pi(s) \geq v_{\pi'}(s)$, for all states $s \in S$ and $\pi' \neq \pi \in \Pi$ [1];
In that case, we denote $\pi$ as $\pi_*$ and $\pi_* = \pi \geq \pi'$, for all $\pi' \neq \pi \in \Pi$.
In simple words, a policy is optimal if it leads to more or equal expected return, in all states, with respect to all other policies
Optimal policies share the same state and state-action value functions [1, 2], i.e. $v_*$ and $q_*$, respectively
In other words, if $\pi_1$ and $\pi_2$ are optimal policies, then $v_{\pi_1}(s) = v_{\pi_2}(s) = v_{\pi_*}(s)$ and $q_{\pi_1}(s, a) = q_{\pi_2}(s, a) = q_{\pi_*}(s, a)$, for all $s \in S$ and $a \in A$
Consequently, two optimal policies $\pi_1$ and $\pi_2$ can differ in state $s$ (i.e. $\pi_1$ takes action $a_1$ and $\pi_2$ takes action $a_2$ and $a_1 \neq a_2$) if and only if there exist actions $a_1$ and $a_2$ in $s$ such that
\begin{align}
v_{*}(s)
&= q_{\pi_1}(s, a_1) \\
&= q_{\pi_1}(s, a_2) \\
&= q_{\pi_2}(s, a_2) \\
&= q_{\pi_2}(s, a_1) \\
&= \max _{a \in \mathcal{A}(s)} q_{\pi_{*}}(s, a) \\
&= \max _{a \in \mathcal{A}(s)} q_{\pi_1}(s, a) \\
&= \max _{a \in \mathcal{A}(s)} q_{\pi_2}(s, a) \tag{1} \label{1}
\end{align}
This holds for deterministic (i.e. policies that always take the same action in a given state, i.e. they give probability $1$ to one action) and stochastic (give non-zero probability only to optimal actions) optimal policies
So, two different optimal policies $\pi_1$ and $\pi_2$ lead to the same expected return, for all states. Given that optimality is defined in terms of expected return, then, if $a_1 = \pi_1(s) \neq \pi_2(s) = a_2$, for some state $s$, then, it doesn't matter whether you take $a_1$ or $a_2$, because both lead to the same expected return. So, as written in this answer, you can either take action $a_1$ or $a_2$: both are optimal in terms of expected returns and this follows from equation \ref{1} above.
In this simple demo task, we can first derive two different optimal policies $\pi_1$, $\pi_2$, where $\pi_1(a_1|s_0) = 1$, $\pi_2(a_2 | s_0) = 1$. The combination of $\pi_1$ and $\pi_2$ is
$\pi: \pi(a_1|s_0) = \alpha, \pi(a_2|s_0) = \beta$. $\pi$ is an optimal policy, too, because any policy in this task is optimal.
Yes, correct. The reason is simple. In your case, $\pi_1$ and $\pi_2$ give probability $1$ to one action, $a_1$ and $a_2$ respectively, so they must give probability $0$ to any other actions. $\pi$ will give a probability $\alpha$ to action $a_2$ and probability $\beta$ to action $a_1$, but, given that $a_1$ and $a_2$ lead to the same expected return (i.e. they are both optimal), it doesn't matter whether you take $a_1$ or $a_2$, even if $\alpha \ll \beta$ (or vice-versa). | {
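The demo task can also be checked numerically (an editor's sketch of the toy MDP above): since both actions terminate immediately with reward $+1$, every mixture weight gives the same expected return.

```python
def expected_return(alpha, r_a1=1.0, r_a2=1.0):
    # Mixture policy: pi(a1|s0) = alpha, pi(a2|s0) = 1 - alpha.
    # Both actions terminate immediately, so no discounting is needed.
    return alpha * r_a1 + (1 - alpha) * r_a2

assert all(expected_return(a) == 1.0 for a in (0.0, 0.25, 0.5, 1.0))
```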
"domain": "ai.stackexchange",
"id": 2439,
"tags": "reinforcement-learning, proofs, policies, optimal-policy, optimality"
} |
HackerEarth Girlfriend's Demand challenge, solved two ways | Question: I solved the Girlfriend's Demand challenge on HackerEarth. In summary:
Input: First line of the input contains a string S. Next line contains an integer Q, the number of queries. Each of the next Q lines contains a test case consisting of 2 integers, a and b.
Output: If S is repeated infinitely, would string positions a and b contain the same character (using 1-based indexing)? For each query, print “Yes” or “No”.
Constraints:
1 ≤ |S| ≤ 105
1 ≤ Q ≤ 105
1 ≤ a, b ≤ 1018
Sample Input:
vgxgp
3
2 4
2 5
7 14
Sample Output:
Yes
No
Yes
My first solution:
public static void main(String args[] ) throws Exception {
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
String item=br.readLine();
StringBuffer sb=new StringBuffer();
int len = item.length();
int inputs = Integer.parseInt(br.readLine());
while(inputs-->0)
{
String strarr[] =null;
strarr = br.readLine().split(" ");
long a=Long.parseLong(strarr[0]);
long b=Long.parseLong(strarr[1]);
a=a-1;
b=b-1;
a=a%len;
b=b%len;
if(item.charAt((int)a)==item.charAt((int)b))
sb.append("Yes\n");
else
sb.append("No\n");
}
System.out.println(sb);
}
While this gave me an output in almost 5.2954 seconds for various inputs, when I tried the same inputs on this version of the code that I wrote:
public static void main(String args[] ) throws Exception {
BufferedReader reader= new BufferedReader(new InputStreamReader(System.in));
String demanded=reader.readLine();
int length = demanded.length();
int T=Integer.parseInt(reader.readLine());
for(int i = 0;i < T; i++){
String[] pairedAAndB=reader.readLine().split(" ");
long a = ((Long.parseLong(pairedAAndB[0]) - 1) % length);
long b = ((Long.parseLong(pairedAAndB[1]) -1) %length);
System.out.println((demanded.charAt((int)a)==demanded.charAt((int)b)) ? "Yes" : "No");
}
}
It took around 15.7819 seconds. What is the best way to convert long to int optimally? What I'm not able to understand is why the second code snippet takes so much time when the two versions are almost identical. There must be something about the second version that I'm not understanding.
Answer: Slowness caused by System.out.println
The difference between the first and second programs isn't the parsing or long/int conversion. The difference is that in the first program, you use a StringBuffer to build your output into one big string (which is good). In the second program, you call System.out.println() for each of your 100000 lines of output. This takes a significant amount of time.
The current code
I don't see any problem with your first program. It runs about as well as can be expected. There are some strange indentations in your code and a few places where I would have added some spaces, but otherwise I don't see other problems. I played around with it and got it to run about 8% faster by changing the parsing, but it wasn't really significant. For your reference, here is the modified code:
public static void main(String args[] ) throws Exception {
BufferedReader br = new BufferedReader(
new InputStreamReader(System.in));
String item = br.readLine();
int len = item.length();
int numInputs = Integer.parseInt(br.readLine());
StringBuffer output = new StringBuffer(numInputs * 4);
while (numInputs-- > 0) {
String line = br.readLine();
int space = line.indexOf(' ');
long a = Long.parseLong(line.substring(0, space));
long b = Long.parseLong(line.substring(space+1));
a = (a-1) % len;
b = (b-1) % len;
if (item.charAt((int)a) == item.charAt((int)b))
output.append("Yes\n");
else
output.append("No\n");
}
System.out.println(output);
} | {
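As an aside, the index arithmetic that both versions share can be checked against the sample input (a quick Python sketch by the editor, not part of the review): position p (1-based) of the infinitely repeated string is `S[(p - 1) % len(S)]`.

```python
S = "vgxgp"

def same_char(a, b, s=S):
    # Positions a and b of the infinite repetition hold the same character
    # iff the reduced indices agree.
    return s[(a - 1) % len(s)] == s[(b - 1) % len(s)]

queries = [(2, 4), (2, 5), (7, 14)]
assert [same_char(a, b) for a, b in queries] == [True, False, True]
```

This matches the sample output Yes / No / Yes.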
"domain": "codereview.stackexchange",
"id": 13819,
"tags": "java, performance, programming-challenge, comparative-review, integer"
} |
Why will my URDF model not load into Gazebo? | Question:
I have developed a simple robot model using a urdf file and several meshes and I am able to load it up in RViz without issue. However, whenever I try to load it into Gazebo it will not appear. I have tried using several methods such as roslaunch and inserting the model into Gazebo itself.
Here is the urdf file:
<?xml version="1.0"?>
<robot name="Bilkins">
<link name="chassis">
<visual>
<geometry>
<mesh filename="package://bilkins/meshes/chassis.dae" scale="0.1 0.1 0.1" />
</geometry>
<origin rpy="0.0 0 0" xyz="0 0 0"/>
</visual>
<collision>
<geometry>
<mesh filename="package://bilkins/meshes/chassis.dae" scale="0.1 0.1 0.1" />
</geometry>
<origin rpy="0.0 0 0" xyz="0 0 0"/>
</collision>
<inertial>
<mass value="0.2" />
<inertia ixx="0.4" ixy="0.0" ixz="0.0" iyy="0.4" iyz="0.0" izz="0.2"/>
</inertial>
</link>
<link name="rear_axle">
<visual>
<geometry>
<mesh filename="package://bilkins/meshes/rear_axle.dae" scale="0.1 0.1 0.1" />
</geometry>
<origin xyz="-1.3 0 -0.5" />
</visual>
<collision>
<geometry>
<mesh filename="package://bilkins/meshes/rear_axle.dae" scale="0.1 0.1 0.1" />
</geometry>
<origin xyz="-1.3 0 -0.5" />
</collision>
<inertial>
<mass value="0.14" />
<inertia ixx="0.4" ixy="0.0" ixz="0.0" iyy="0.4" iyz="0.0" izz="0.2"/>
</inertial>
</link>
<link name="front_axle">
<visual>
<geometry>
<mesh filename="package://bilkins/meshes/front_axle.dae" scale="0.1 0.1 0.1" />
</geometry>
<origin xyz="1.3 0 -0.5" />
</visual>
<collision>
<geometry>
<mesh filename="package://bilkins/meshes/front_axle.dae" scale="0.1 0.1 0.1" />
</geometry>
<origin xyz="1.3 0 -0.5" />
</collision>
<inertial>
<mass value="0.1" />
<inertia ixx="0.4" ixy="0.0" ixz="0.0" iyy="0.4" iyz="0.0" izz="0.2"/>
</inertial>
</link>
<link name="front_left_wheel">
<visual>
<geometry>
<mesh filename="package://bilkins/meshes/front_left_wheel.dae" scale="0.1 0.1 0.1" />
</geometry>
<origin xyz="1.3 0.9 -0.5" />
</visual>
<collision>
<geometry>
<mesh filename="package://bilkins/meshes/front_left_wheel.dae" scale="0.1 0.1 0.1" />
</geometry>
<origin xyz="1.3 0.9 -0.5" />
</collision>
<inertial>
<mass value="0.02" />
<inertia ixx="0.4" ixy="0.0" ixz="0.0" iyy="0.4" iyz="0.0" izz="0.2"/>
</inertial>
</link>
<link name="front_right_wheel">
<visual>
<geometry>
<mesh filename="package://bilkins/meshes/front_right_wheel.dae" scale="0.1 0.1 0.1" />
</geometry>
<origin xyz="1.3 -0.9 -0.5" />
</visual>
<collision>
<geometry>
<mesh filename="package://bilkins/meshes/front_right_wheel.dae" scale="0.1 0.1 0.1" />
</geometry>
<origin xyz="1.3 -0.9 -0.5" />
</collision>
<inertial>
<mass value="0.02" />
<inertia ixx="0.4" ixy="0.0" ixz="0.0" iyy="0.4" iyz="0.0" izz="0.2"/>
</inertial>
</link>
<joint name="rear_axle_to_chassis" type="continuous">
<parent link="chassis"/>
<child link="rear_axle" />
<axis xyz="0 1 0" />
<origin xyz="1.3 0 0.5" />
</joint>
<joint name="front_axle_to_chassis" type="revolute">
<axis xyz="0 0 1" />
<limit effort="1000.0" lower="-0.1" upper="0.1" velocity="0.5"/>
<parent link="chassis"/>
<child link="front_axle" />
<origin xyz="-1.3 0 0.5" />
</joint>
<joint name="front_left_wheel_to_axle" type="continuous">
<parent link="front_axle"/>
<child link="front_left_wheel" />
<axis xyz="0 1 0" />
<origin xyz="0 -0.9 0" />
</joint>
<joint name="front_right_wheel_to_axle" type="continuous">
<parent link="front_axle"/>
<child link="front_right_wheel" />
<axis xyz="0 1 0" />
<origin xyz="0 0.9 0" />
</joint>
</robot>
And here is the model.config file that was made for it
<?xml version="1.0"?>
<model>
<name>Bilkins</name>
<version>1.0</version>
<sdf>urdf/bilkins.urdf</sdf>
<author>
<name>Andrew Wilkinson</name>
<email>armwilkinson@gmail.com</email>
</author>
<description>
A small recon drone
</description>
</model>
This is the repository I have been using for my project:
https://github.com/ARMWilkinson/bilkinsrobot
I am using Melodic on Ubuntu 18.04
Originally posted by ARMWilkinson on ROS Answers with karma: 3 on 2020-09-29
Post score: 0
Answer:
Hi @ARMWilkinson,
First of all, you do not need a Gazebo model, just a URDF or, preferably, a Xacro description. Having said that, what you are doing is only loading the robot_description into the parameter server; that is the reason why only Rviz is able to see it, since Rviz reads the description from that parameter.
To be able to generate the model in Gazebo you need to call a special Python script, the "spawner", which is in charge of generating a model in the Gazebo simulation from the robot_description you load. I recommend you check this tutorial to learn more about this.
To put things clear, imagine you have your robot_description and want to simulate it in Gazebo, Ideally you will have something like:
<launch>
<arg name="model" default="$(find bilkins)/urdf/bilkins.urdf"/>
<arg name="gui" default="true" />
<arg name="rvizconfig" default="$(find bilkins)/rviz/urdf.rviz" />
<arg name="output" default="log"/>
<!-- Coordinates to spawn model -->
<arg name="x" default="0.0"/>
<arg name="y" default="0.0"/>
<arg name="z" default="0.01"/>
<arg name="roll" default="0.0"/>
<arg name="pitch" default="0.0"/>
<arg name="yaw" default="0.0"/>
<!-- Load model in parameter server -->
<param name="robot_description" command="$(find xacro)/xacro $(arg model)" />
<!-- Generate joint_states and tf_tree -->
<node if="$(arg gui)" name="joint_state_publisher" pkg="joint_state_publisher_gui" type="joint_state_publisher_gui" />
<node unless="$(arg gui)" name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher" />
<node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher" />
<node name="rviz" pkg="rviz" type="rviz" args="-d $(arg rvizconfig)" required="true" />
<!-- Run a python script to the send a service call to gazebo_ros to spawn a URDF robot -->
<node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model" respawn="false" output="$(arg output)"
args="-urdf -model bilkins_robot -param robot_description -x $(arg x) -y $(arg y) -z $(arg z) -R $(arg roll) -P $(arg pitch) -Y $(arg yaw)"/>
</launch>
So, in order, what you have is: Load URDF in parameter server -> Generate robots TF_tree and joint_states -> Spawn model in Gazebo.
Hope this helps you.
Regards.
Originally posted by Weasfas with karma: 1695 on 2020-09-30
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by ARMWilkinson on 2020-09-30:
Thanks, I have tried implementing this but I when I use roslaunch I get an error stating:
RLException: [/home/andrew/catkin_ws/src/bilkinsrobot/launch/gazebo.launch] requires the 'output' arg to be set
The traceback for the exception was written to the log file
Comment by Weasfas on 2020-10-01:
Ops, I forgot to include that argument. That is on me, sorry. I updated my original answer.
Comment by ARMWilkinson on 2020-10-01:
No worries, I really appreciate the help. The error's no longer appearing now, however, whenever I use roslaunch it opens the model up in RViz.
Comment by ARMWilkinson on 2020-10-01:
Actually scratch that, I've managed to get it loading into Gazebo now. Thank you so much for your help. | {
"domain": "robotics.stackexchange",
"id": 35583,
"tags": "ros, gazebo, urdf, ros-melodic, model"
} |
A nozzle and a diffusor are pointed upwards and have the same water pressure applied at their bottom. Why does the water come out at the same height? | Question: A water tank has two pipes installed on the outside near the bottom which point upwards towards the open top of the tank. One is a nozzle, the other a diffusor. According to the hydrostatic paradox, they shoot the water out to the same height as the water level in the big tank. How come? Why doesn't the law of continuity apply, in which the velocity increases at the exit of the nozzle due to the smaller cross section? I thought the water out of the nozzle is faster and shoots up higher than that from the pipe with the wider diameter. I don't understand if and to what extent the hydrostatic paradox aligns with the law of continuity here. According to my textbook, the velocity out of the pipes is the same as the weight force of the water due to gravity.
Answer: Regardless of if a nozzle or a diffuser were to be attached, according to Torricelli's Equation, the velocity of the liquid particles which are just released from a hydrostatic position will be $v = \sqrt{2gh}$ where $h$ is the height difference between the height of the liquid particle and that of fluid in the tank.
Here in this pic, I have used energy conservation where $dm$ mass is the mass of the sheet of liquid at the surface of nozzle/diffuser and $x$ is the height above the surface which the liquid will rise.
Since $x = h$, the liquid present at the surface of the nozzle/diffuser will shoot out to the same height as the initial height of liquid in the tank.
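The equality $x = h$ can also be checked symbolically (an editor's sketch; dissipative forces ignored, as in the answer):

```python
import sympy as sp

g, h = sp.symbols("g h", positive=True)
v = sp.sqrt(2 * g * h)   # Torricelli: exit speed from a pressure head of height h
x = v**2 / (2 * g)       # rise height of a jet launched straight up at speed v
assert sp.simplify(x - h) == 0   # the jet rises exactly back to the tank level
```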
Things to note are that here, viscosity, air drag and any other dissipative forces are being ignored and if performed in real life, the height gain would be way less than the initial height of liquid in the tank. But the height gained by the liquid should be almost the same in case of both the nozzle and the diffuser
You didn't really provide any information about your textbook so that I could refer to it for what you meant by "the velocity out of the pipes is the same as the weight force of the water due to gravity" but I hope my explanation helped sort out your doubt. | {
"domain": "physics.stackexchange",
"id": 99421,
"tags": "pressure, fluid-statics"
} |
2D Spatial Fourier Transform on a pressure field | Question: I'm doing some research on Fourier Near-field Acoustic Holography (NAH). The basic theory behind Fourier NAH is that you take a sound pressure field measurement in a 2D plane p(x,y), 2D spatial Fourier transform the sound pressure field into the wavenumber domain where its spatial variation of phase is known, multiply by a propagator and inverse transform back to pressure at the propagated plane.
I'm getting confused with the fact that pressure is usually a time varying signal i.e. p(t) = ...
Fourier transforming pressure would give us the temporal frequency i.e. P(ω). I don't understand how you can measure pressure in the space domain (x,y) and then get to the wavenumber domain with no time involved. Do you need to apply one Fourier Transform to get to the frequency domain, and then another FT to get to the wavenumber domain?
Answer: Time domain representation
As Max has presented in their answer, the acoustic pressure variation in two dimensions that you are interested in is given as a function of time $t$ and spatial (Cartesian in our case) coordinates $x$ and $y$, denoted by $p \left( x, y, t \right)$.
Now, consider taking a snapshot of the function at some arbitrary time $t_{1}$. You can visualise that as a two-dimensional scalar field $p \left( x, y, t_1 \right)$. For a simple monochromatic plane acoustic wave of (temporal) frequency 100 Hz, with propagation direction $60^{\circ}$ (measuring counterclockwise from the x-axis) it will be as the image shown below.
In the same way, you could possibly add another monochromatic component to this signal in the same direction. For an additional component of 180 Hz this would look like the next image.
Now, to make it more generic you could make the second component travel in a different direction. The next images show an example of the second component (still 180 Hz temporal frequency) travel in a direction of $110^{0}$. The first image shows the two components (it is compressed in the y direction due to plotting issues, apologies) and the second their superposition.
You could possibly imagine what the case would be if you were to add more and more frequency components with various directions of propagation.
Now, if you take a closer look at all the images you'll see that there's some distinct repetition patterns both in the $x$ and $y$ axes. These repetition patterns constitute the basis of the spatial frequencies. The spatial frequency, "encoded" in the wavenumber shows how many repetitions you get in a spatial dimension per distance measure. In two dimensions the wavenumber can be "split" into two components (in the 2D case and three for the 3D case), one for each axis. For linear acoustic waves (and I believe this is the general case for all linear waves) the equation $k^{2} = k_{x}^{2} + k_{y}^{2}$ must hold. This means that you can calculate the plane wave as the superposition of two plane waves, one traveling in the x-axis direction and the other in the y-axis direction.
Next, think of the two waves separately. If you were to look at them in a single dimension (each one in its own axis that is) you would see the following
Of course, if you change the $y$ dimension offset for the $x$ component and similarly the $x$ offset for the $y$ component you'll see a phase shift.
Frequency domain representation
From a purely mathematical perspective you can indeed Fourier transform those components and find the frequencies they represent. These are the spatial frequencies of the pressure waves. As you may understand, for time invariant signals, a change in the time variable will only introduce a phase shift to the (spatial) frequency components.
Thus, the respective two-dimensional Fourier transforms of the signals will show the frequencies of each dimension (for $t = constant$). The 2D Fourier transform of a monochromatic plane wave of frequency 100 Hz traveling in the direction of the x axis is
Note that the components are drawn as squares. Since this is a Discrete Fourier Transform, these represent frequency bins.
Some things to note here:
In order to make the frequency components more visible I had to zoom-in on the centre of the transformed domain.
You can see that this plane wave has frequency components only on the horizontal axis (apologies for the axes not being visible here).
The "usual" negative components can be seen as the left side frequency component of the decomposition like in the temporal case.
You can spot energy in neighboring bins and this is again due to the (spatial) window. Effectively, in order to eliminate them completely your (spatial) domain must go to infinity or your spatial frequency has to be an integer multiple of the window length (spatial domain). This is the case in temporal domain too.
The visualisation is for the amplitude of the frequency components ($\sqrt{z \cdot z^{*}}$ where, $z$ is a complex number representing the Fourier coefficient for each frequency bin).
Now, we can visualise the Fourier transform of the first plane wave we've shown, with (temporal) frequency 100 Hz traveling in the direction of $60^{o}$. This would look like (the same zooming has been done here too)
In this figure you can better see the leakage effect due to the fact that the spatial frequencies of the two wavenumbers $k_{x}$ and $k_{y}$ are not representing frequencies that are an integer multiple of the respective dimensions. Additionally, you can see that now you have a frequency component on the plane whose location does not coincide with either of the axes (but it does coincide with the propagation direction of the wave). Its x coordinate corresponds to the (spatial) frequency of the $k_{x}$ wavenumber and its y coordinate to the (spatial) frequency of the $k_{y}$ wavenumber.
Similarly, you can see the spectrums of the two other cases ($f_{1}$ = 100 Hz, $f_{2}$ = 180 Hz and angles $60^{\circ}$ for both waves in the first case and $60^{\circ}$ and $110^{\circ}$ in the second case) in the next two figures.
In this way, you can analyse/decompose each two-dimensional plane wave into the superposition of two one-dimensional plane waves and by Fourier transforming their temporal snapshots you can get the spatial frequency components. If you place the spatial components for each temporal frequency on a two-dimensional grid you end up with the two-dimensional Fourier transform. Of course, the transform performs this exact process in a more concise way but the aforementioned procedure can work to provide intuition on the interpretation of the results.
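The snapshot-then-transform procedure is easy to reproduce numerically (an editor's sketch with arbitrary grid values): sample a plane wave at a fixed time and locate its peak in the 2-D spatial spectrum.

```python
import numpy as np

N = 64                          # grid points per side (unit square)
fx, fy = 5, 8                   # integer cycles per side -> no spectral leakage
x = np.arange(N) / N
X, Y = np.meshgrid(x, x, indexing="ij")
p = np.cos(2 * np.pi * (fx * X + fy * Y))   # snapshot p(x, y, t = const)
P = np.fft.fft2(p)
i, j = np.unravel_index(np.argmax(np.abs(P)), P.shape)
# A real cosine produces a conjugate pair of peaks at (fx, fy) and (N-fx, N-fy),
# i.e. the bin coordinates are the spatial frequencies of k_x and k_y.
assert (i, j) in [(fx, fy), (N - fx, N - fy)]
```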
Additionally, the Fourier transform calculates the phase angle for each point in space, which we haven't shown here. Nevertheless I believe it is quite straight forward to do it. You just calculate the two one-dimensional wave components for each point in space (for each propagating wave) and you use the phase of the component on this position's offset like we did with the amplitude. | {
"domain": "dsp.stackexchange",
"id": 10825,
"tags": "fourier-transform, frequency-domain, acoustics, spatial"
} |
Formatting date difference in human-friendly terms | Question: I've written a function to take the difference between two dates and display it in a particular format. I've used a lot of if-else statements, which seems a little messy. Is there a way to shorten this up? Would using the ternary operator make sense?
$date1 = '2018-04-30 10:36:29';
$date2 = '2018-04-30 10:35:29';
echo dateDiff($date1, $date2);
function dateDiff($date1, $date2)
{
$date_1 = new DateTime($date1);
$date_2 = new DateTime($date2);
$diff = $date_1->diff($date_2);
if($diff->days > 365){
return $date_1->format('Y-m-d');
}
elseif($diff->days < 366 AND $diff->days > 7){
return $date_1->format('M d');
}
elseif($diff->days > 2 AND $diff->days < 8){
return $date_1->format('L - H:i');
}
elseif($diff->days == 2) return "Yesterday ".$date_1->format('H:i');
elseif($diff->days < 2 AND $diff->days > 0 OR $diff->days == 0 AND $diff->h > 1) return $date_1->format('H:i');
elseif($diff->days == 0 AND $diff->h < 1 AND $diff->i >= 1) return $diff->i." min ago";
elseif($diff->days == 0 AND $diff->h < 1 AND $diff->i < 1) return "just now";
else return $error = "Error!";
}
Answer: Chained ternary expressions in PHP are a major pain in the ass — don't use them!
You have a lot of tests that are redundant, since earlier tests will have already eliminated longer periods. The error case at the end also seems pointless.
Use consistent indentation and braces for readability and safety — please don't skimp.
function dateDiff($date1, $date2)
{
$date_1 = new DateTime($date1);
$date_2 = new DateTime($date2);
$diff = $date_1->diff($date_2);
if ($diff->days > 365) {
return $date_1->format('Y-m-d');
} elseif ($diff->days > 7) {
return $date_1->format('M d');
} elseif ($diff->days > 2) {
return $date_1->format('L - H:i');
} elseif ($diff->days == 2) {
return "Yesterday ".$date_1->format('H:i');
} elseif ($diff->days > 0 OR $diff->h > 1) {
return $date_1->format('H:i');
} elseif ($diff->i >= 1) {
return $diff->i." min ago";
} else {
return "Just now";
}
}
Be sure to use consistent capitalization for "yesterday" and "just now". | {
"domain": "codereview.stackexchange",
"id": 44116,
"tags": "php, datetime, formatting"
} |
Angular Momentum Operators Non-Degenerate | Question: Typically one writes simultaneous eigenstates of the angular momentum operators $J_3$ and $J^2$ as $|j,m\rangle$, where
$$J^2|j,m\rangle = \hbar^2 j(j+1)|j,m\rangle$$
$$J_3 |j,m\rangle = \hbar m|j,m\rangle$$
There seems to be an implicit assumption that the eigenvalues of these operators are non-degenerate. I can't immediately see how this is obvious. Could someone point me in the direction of a reference, or clarify it in an answer? Apologies if I've missed something trivial!
Answer: The degeneracy or non-degeneracy of these states depends on the problem's hamiltonian as well as on the system's Hilbert space, so this question doesn't really have an answer. However, the angular momentum states will typically be degenerate in the sense that multiple (linearly independent) states will have the same angular momentum characteristics.
For a simple case, consider a single particle in 3D with a spherically symmetric potential. Then you can decompose the wavefunction into radial and angular parts and the latter can always be assumed to have well-defined total and $z$-component angular momenta: you can always write
$$\Psi(\mathbf{r})=\psi(r)Y_{lm}(\theta,\phi).$$
However, you still need to deal with the radial wavefunction, and that will typically have an infinity (either discrete or discrete + continuum) of energy eigenstates. In that sense the angular momentum states are "degenerate", though of course the energies can depend on $l$.
On a more general, representation theoretic sense, this is still true. If you have some system of particles in 3D then you can always decompose the total system Hilbert space into a direct sum of subspaces with well-defined $J^2$, within which the $J_3$ eigenstates are a good basis. That much is the theorem. However, this doesn't say anything about how many such subrepresentations there will be, what their total angular momentum can be, or even whether it's a good idea to make such a decomposition in the first place (which it won't if the system has other, stronger symmetries!).
What that means in practice is that you need to add a third quantum "number" to your states to get uniquely defined states. This is usually done by notations of the form
$$|\alpha,j,m\rangle$$
where $\alpha$ stands for "all the other quantum numbers of the problem" and therefore will generally be an ordered tuple of numbers. (In the hydrogen atom, for example, it suffices to take $\alpha=n$, the principal quantum number.) This index $\alpha$ then tells you which of the many $J^2=\hbar^2j(j+1)$ representations the state belongs to. To see this notation in action see e.g. these notes on the Wigner-Eckart theorem.
Edit: a word on ladder operators.
Angular momentum ladder operators are linear combinations of angular momentum components ($J_\pm=J_1\pm i J_2$) and since representations are invariant under the action of $\mathbf{J}$, that means the action of $J_\pm$ on a state with well-defined $\alpha$ and $j$ will take it to a state with the same $\alpha$ and $j$ (i.e. in the same subrepresentation).
What this means is that you can define the ladder operators without worrying about what subrepresentation they act on - since their action is the same on all - and then restrict your attention to a fixed subrepresentation with no consequence. When you consider superpositions of states from different representations (like you would if you have an arbitrary radial wavefunction, for instance), the ladder operators work like they should on the different $|\alpha,j,m\rangle$ states, and by linearity this is enough to see how they should behave.
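Concretely, within a fixed $(\alpha, j)$ subspace the ladder operators act via the standard matrix elements (a standard result, quoted here for reference):

$$J_{\pm}\,|\alpha, j, m\rangle = \hbar\sqrt{j(j+1) - m(m \pm 1)}\;|\alpha, j, m \pm 1\rangle$$

Note that the coefficient is independent of $\alpha$, which is exactly why the subrepresentation label can be ignored when working with the ladder operators.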
The take-home message is that the angular momentum algebra works fine no matter how many representations you have. If you want to find that out, though, then you do need to worry about exactly what your system looks like. | {
"domain": "physics.stackexchange",
"id": 5699,
"tags": "quantum-mechanics, mathematical-physics, angular-momentum, eigenvalue"
} |
Find missing numbers in a range, with duplicate numbers in a sorted array | Question: Suggestions for cleanup and optimization request.
This question is a follow-up question on this post.
Lets assume the range of
1, 2, 3, 4.
This function takes in an array of repeated elements, [1,1,4,4] and the output is [2, 3].
The question marked as duplicate takes an input of [1, 4] and output is [2, 3].
While the output is the same, the input differs.
Please note this difference between the two questions, as it can cause confusion.
final class Variables {
private final int x;
private final int y;
Variables(int x2, int y2) {
this.x = x2;
this.y = y2;
}
public int getX() {
return x;
}
public int getY() {
return y;
}
@Override public String toString() {
return "x = " + x + " y = " + y;
}
}
public final class FindMissingRepeating {
/**
     * Takes as input a sorted array in which exactly two values in the range
     * repeat once, and returns the two values that are missing.
     *
     * If any of the above conditions does not hold for the input, the results are unpredictable.
*
* Example of input:
* range: 1, 2, 3, 4
* input array [1, 1, 4, 4]
* the variables should contain 2 missing elements ie 2 and 3
*
* @param a : the sorted array.
*/
public static Variables sortedConsecutiveTwoRepeating (int[] a, int startOfRange) {
if (a == null) throw new NullPointerException("a1 cannot be null. ");
int low = startOfRange;
int high = low + (a.length - 1);
int x = 0;
int i = 0;
boolean foundX = false;
while (i < a.length) {
if (a[i] < low) {
i++;
} else {
if (a[i] > low) {
if (foundX) {
return new Variables(x, low);
} else {
x = low;
foundX = true;
}
}
low++;
i++;
}
}
int val = foundX ? x : high - 1;
return new Variables(val, high);
}
public static void main(String[] args) {
int[] ar1 = {1, 2, 2, 3, 4, 4, 5};
Variables v1 = FindMissingRepeating.sortedConsecutiveTwoRepeating (ar1, 1);
System.out.println("Expected x = 6, y = 7 " + v1);
int[] ar2 = {2, 2, 4, 4, 5, 6, 7};
Variables v2 = FindMissingRepeating.sortedConsecutiveTwoRepeating (ar2, 1);
System.out.println("Expected x = 1, y = 3 " + v2);
int[] ar3 = {3, 3, 4, 4, 5, 6, 7};
Variables v3 = FindMissingRepeating.sortedConsecutiveTwoRepeating (ar3, 1);
System.out.println("Expected x = 1, y = 2 " + v3);
int[] ar4 = {1, 3, 3, 4, 4, 6, 7};
Variables v4 = FindMissingRepeating.sortedConsecutiveTwoRepeating (ar4, 1);
System.out.println("Expected x = 2, y = 5 " + v4);
}
}
Answer: Firstly, there are two classes, so don't present them as a single block of code. Place a heading with the first class name, then that class's code, then the second class name, then its code. That makes it easier to copy-paste into Eclipse when looking at the code.
Why have you named the parameters x2 and y2 while still using the this keyword? Why make the variable names different in the first place? It just adds confusion. Name them x and y and then use the this keyword. Period.
You are throwing a null pointer exception when a is null and the message is a1 is null. That will become confusing... Also you don't need to throw that explicitly. The assignment to high will throw the exception and give you a proper stacktrace already without any extra code.
The while loop can easily be made a for loop, which will limit the scope of the extra variable (i in this case). Note that you are doing i++ in both the if and the else branches. That shows you didn't take the time to look at your own code.
You passed startOfRange to the function and forgot that it is a variable also. If you have any intention of not changing it then make it explicit and declare the parameter as final in the method's signature.
Also, if I am correct, you have used x = 0 only to make it one less than startOfRange so that your algorithm works. That means your function will break the moment you decide startOfRange to be anything other than 1. Also, using such magic numbers and variable names like x is a bad idea. Really bad idea.
Take some time to refactor the code yourself. It pays if your code is easy to understand by other programmers. Your code can be changed to this and I think now it is much easier to understand the flow of execution of the current program.
public static ExVariables sortedConsecutiveTwoRepeating(int[] a,
final int startOfRange) {
int low = startOfRange;
int high = low + (a.length - 1);
int x = low - 1;
boolean foundX = false;
for (int i = 0; i < a.length; i++) {
if (a[i] > low) {
if (foundX) {
return new ExVariables(x, low);
}
x = low;
foundX = true;
}
if (a[i] >= low) {
low++;
}
}
if (foundX) {
return new ExVariables(x, high);
} else {
return new ExVariables(high - 1, high);
}
}
I think it should be possible to refactor it more so that the return comes only inside the for loop but I didn't take the time to do that.
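For comparison, here is a set-based formulation (a Python sketch, not a drop-in replacement for the Java above) that trades O(n) extra space for obvious correctness:

```python
def missing_two(a, start):
    """Return the two values of range(start, start + len(a)) absent from `a`.

    Assumes `a` holds len(a) sorted values from that range with exactly
    two values duplicated once (so exactly two values are missing).
    """
    return sorted(set(range(start, start + len(a))) - set(a))

print(missing_two([1, 2, 2, 3, 4, 4, 5], 1))  # [6, 7]
```

The in-place scan above avoids this extra set, but the set version makes the contract of the function very explicit.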
Hope this helps. | {
"domain": "codereview.stackexchange",
"id": 5152,
"tags": "java, optimization, array, sorting"
} |
Body falling off a smooth hemisphere - simple but I'm missing something. What is it? | Question: So here's the question that I'm stuck on.
A particle slides from the top of a smooth hemispherical surface of radius $R$ which is fixed on a horizontal surface. If it separates from the hemisphere at a height $h$ from the horizontal surface the speed of the particle when it leaves the surface is:
I can simply apply Conservation of Mechanical Energy (or the Work-Energy Theorem, if you think of it that way) and get the first option as my answer: $v=\sqrt{2g(R-h)}$
But, if I make a free body diagram at any arbitrary angle from the vertical I make the following equation- $mg\cos(x) - N= (mv^2)/R$
The particle leaves the surface when $N=0$
This gives us $v=\sqrt{Rg\cos(x)}$ and as $R\cos(x) =h\Rightarrow v=\sqrt{gh}$, which isn't an option.
Maybe I'm missing some concepts about circular motion and I can't understand what it is? Please help.
Answer: Actually, both of them are true! The one with
$$v=\sqrt{2g(R-h)}$$
and $$v=\sqrt{gh}$$
But how? See the leaving condition
$$mg\cos\theta =\frac{mv^2}{R}\Rightarrow \cos \theta =\frac{v^2}{Rg}$$
and from the energy conservation at two points
$$v^2=2g(R-h)$$
$$\Rightarrow \cos\theta =\frac{2(R-h)}{R}$$
If you look at the diagram, You will find from simple geometry that
$$\cos \theta =\frac{h}{R}=\frac{2(R-h)}{R}$$
$$\Rightarrow h=\frac{2R}{3}$$
If you put this into both of the expressions for $v$, You find the same result.
$$v=\sqrt{\frac{2}{3}Rg}$$ | {
"domain": "physics.stackexchange",
"id": 75274,
"tags": "homework-and-exercises, newtonian-mechanics, forces, energy-conservation, free-body-diagram"
} |
Handling Categorical Features on NGBoost | Question: Recently I have been doing some research on NGBoost, but I could not see any parameter for categorical features. Is there any parameter that I missed?
__init__(self, Dist=<class 'ngboost.distns.normal.Normal'>, Score=<class 'ngboost.scores.MLE'>, Base=DecisionTreeRegressor(ccp_alpha=0.0, criterion='friedman_mse', max_depth=3,
| max_features=None, max_leaf_nodes=None,
| min_impurity_decrease=0.0, min_impurity_split=None,
| min_samples_leaf=1, min_samples_split=2,
| min_weight_fraction_leaf=0.0, presort='deprecated',
| random_state=None, splitter='best'), natural_gradient=True, n_estimators=500, learning_rate=0.01, minibatch_frac=1.0, verbose=True, verbose_eval=100, tol=0.0001)
https://github.com/stanfordmlgroup/ngboost
Answer: It does not support them at the time of writing (support will come, just as it eventually did in the xgboost ecosystem).
Given that it's a boosting method, it's worth looking at the history of XGBoost and the subsequent CatBoost and LightGBM. XGBoost's implementation of gradient boosting did not handle categorical features because it did not have to; it was sufficient as it was. What made XGBoost special was its use of Hessian information: where other implementations (e.g. sklearn's GBM in Python) used just gradients, XGBoost also used second-order (Hessian) information when boosting, which in turn made it much faster.
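Until native categorical support lands, the usual workaround is to encode categories numerically before fitting. Below is a minimal stdlib-only one-hot sketch (in practice you'd likely use pandas.get_dummies or scikit-learn's OneHotEncoder); the resulting all-numeric rows can then be passed to NGBoost's fit:

```python
def one_hot(rows, col):
    """Replace categorical column `col` of each row with one 0/1 indicator
    column per category (categories sorted for a deterministic layout)."""
    cats = sorted({r[col] for r in rows})
    out = []
    for r in rows:
        encoded = r[:col] + r[col + 1:]  # keep the numeric features as-is
        encoded += [1.0 if r[col] == c else 0.0 for c in cats]
        out.append(encoded)
    return out, cats

X = [[3.2, "red"], [1.5, "blue"], [2.0, "red"]]
X_enc, cats = one_hot(X, col=1)
# X_enc rows are now all-numeric, suitable as input to a boosting model.
```

Note that unseen categories at prediction time must be handled explicitly (this sketch would simply encode them as all zeros if you reuse `cats`).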
TL;DR: handling categorical features is not just a matter of convenience, it is also important for speed. This feature will come eventually (especially if we contribute!); for the time being, NGBoost is all about representing uncertainty. | {
"domain": "datascience.stackexchange",
"id": 6786,
"tags": "machine-learning, boosting, ensemble, natural-gradient-boosting, ngboost"
} |
Finding the color of crystal violet theoretically | Question: I am trying to find the color of an organic molecule, crystal violet, theoretically.
I suspect that the color is due to the large conjugated system. My question is, how would I go about finding the energy levels of such a system? I know how to solve the Schrodinger equation for a particle in a ring or a box (i.e. benzene, or a poly-diene) to solve for energy levels and find the lowest-wavelength transition. But how would I do it for this sort of molecule?
Answer: For a reliable and experimentally comparable answer, you really need to use a numerical method rather than a pencil-and-paper estimation. For an organic system of this size you'll probably be safe using time-dependent density functional theory (TDDFT). Most computational chemistry packages have this built in. Off the top of my head, I know that Gaussian (commercial; the standard for organic molecules like this) and GAMESS (free) both have TDDFT built in.
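For intuition before reaching for TDDFT, the asker's particle-in-a-box idea can be pushed through numerically. The sketch below uses the 1D free-electron model; the electron count and box length are illustrative placeholders, not values fitted to crystal violet:

```python
import math

H = 6.626e-34    # Planck constant, J*s
ME = 9.109e-31   # electron mass, kg
C = 2.998e8      # speed of light, m/s

def homo_lumo_wavelength(n_electrons, box_length):
    """Lowest-energy transition wavelength (m) for a 1D particle-in-a-box
    filled with n_electrons pi-electrons, two per level."""
    homo = n_electrons // 2
    lumo = homo + 1
    delta_e = (lumo**2 - homo**2) * H**2 / (8 * ME * box_length**2)
    return H * C / delta_e

# Illustrative numbers only: 8 pi-electrons in a 1.0 nm box
lam = homo_lumo_wavelength(n_electrons=8, box_length=1.0e-9)
```

Even this toy model lands the transition in the near-UV/visible region, which is why the free-electron picture is a useful first estimate before doing Hückel theory or TDDFT.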
If you're intent on using a toy method you could use Hückel theory as orthocresol suggested. You could in principle tune the parameters of such a model until the excitation energy matches the experimental result, but I don't see what additional physical insight this would give you. | {
"domain": "chemistry.stackexchange",
"id": 11442,
"tags": "quantum-chemistry"
} |
Does `lockstep` argument only affect sensor plugin? | Question:
Gazebo offers an option, --lockstep, to update sensor readings using exactly the update_rate defined in the sensor plugin description. Can other kinds of plugins, such as world or model plugins, also be affected by this option?
I think they can, because these plugins also define the OnUpdate method, which in my opinion is affected by the update_rate parameter.
Originally posted by jianwu on Gazebo Answers with karma: 3 on 2021-01-26
Post score: 0
Answer:
A plugin may or may not use the update event. A plugin could, for example, spin up a thread and perform asynchronous operations. You'll have to look at each plugin that you're using to see if it uses the update event.
Originally posted by nkoenig with karma: 7676 on 2021-01-26
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by jianwu on 2021-01-27:
Thanks for your answer! So you mean that if a plugin has the update event, it will be affected by the --lockstep argument?
Comment by nkoenig on 2021-01-27:
Yes, the lockstep will control when Gazebo takes a simulation step, which in turn affects when events are triggered.
Comment by jianwu on 2021-01-27:
Thanks a lot. | {
"domain": "robotics.stackexchange",
"id": 4583,
"tags": "gazebo-plugin, gazebo-9"
} |
rviz screen problem | Question:
hello,
I'm working with rviz and when I run rosrun rviz rviz I get the screen like this. I searched a lot and I tried several solutions but none helped.
I'm using NVIDIA C61 [GeForce 7025 / nForce 630a] graphics card with nvidia-304 driver installed. I also use ubuntu 14.04 and ROS indigo 1.11.20.
any suggestions would be highly appreciated.
thank you in advance
Originally posted by Ghazal on ROS Answers with karma: 1 on 2016-07-30
Post score: 0
Answer:
It looks like a default RViz screen to me...are you sure there's a problem?
You might need to set a robot state publisher or other information sources, then select and/or set up appropriate displays/panels...
Originally posted by kramer with karma: 1470 on 2016-07-30
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Ghazal on 2016-07-30:
thank you for your reply.
If you take a look at these photos you will see what my problem is. I guess the previous video was not of good quality!
link text
link text
Comment by kramer on 2016-07-31:
I can't make out the contents of those images. The resolution is fine (about 1080 x X), but it looks like the scan lines are interleaved. Can you just do a screen capture rather than taking a picture of the screen? (Try the 'screenshot' program.)
Comment by NEngelhard on 2016-07-31:
"you will get what my problem is" or... you just explain it yourself.
Comment by Ghazal on 2016-07-31:
I didn't know how to explain my problem, that's why I used photos. The problem is exactly what you said! When I run rviz it looks like the scan lines are interleaved and the colors are somehow mixed. At first I tried the screenshot program but the output is just a black screen that can't be opened.
Comment by Ghazal on 2016-07-31:
and the point is that all my screen becomes like this not only the rviz window and I have to restart my pc to get the screen back and be able to use it again.
Comment by kramer on 2016-08-01:
Ah. That seems like less of a ROS problem and more of a graphics card issue. But I don't know and it's out of my general knowledge area. Sorry, hope someone else can help...
Comment by Ghazal on 2016-08-01:
yeah I think the problem is with my graphics card as well but I only have this problem with rviz. by the way, thank you :) | {
"domain": "robotics.stackexchange",
"id": 25408,
"tags": "ros, rviz"
} |
Is data preprocessing necessary and important in deep learning? | Question: I really wonder whether data preprocessing is really necessary and important in deep learning.
It's really hard to state clearly the difference between machine learning and deep learning. By definition, the difference is whether or not humans intervene in the process of finding features. Machine learning needs humans to intervene for feature extraction, so it must involve data preprocessing. So we do PCA, denoising, data balancing, and so on.
Preprocessing is clearly important in machine learning. But in deep learning, is it really important to preprocess the data? I'm not asking about things such as embeddings in NLP; text embedding is necessary to transform text into something a computer can recognize.
I've been really confused about this while studying deep learning. Some people say that it's important to do preprocessing, and others say that it isn't.
For example, consider a situation in computer vision. Let's think about denoising, e.g. erasing salt-and-pepper noise in an image. Is it important to denoise the input image? Those who favor preprocessing say that noise consists of unexpected pixel values, so if we remove it we get a cleaner image, which brings better performance. Those who disagree give two reasons.
First, even if we could get a 100% clear original image without any noise, comparing the denoised image with that clear original shows they are not exactly the same image. Besides, we can't obtain an image without any noise at all, for physical reasons.
Second, deep learning shows better performance when the amount of input data is large. If humans intervene in feature extraction, it is inefficient to do so with human labor. Deep learning can do feature extraction by itself and doesn't need humans for it.
I apologize if that isn't a good example for my question, so I'll give another one.
Think about the Iris dataset. In machine learning, we do PCA (or something similar) and select informative features (e.g. sepal and petal width and length) as input to the model. But deep learning doesn't require selecting features: if we have a good deep learning model, we can feed it the whole data.
As a last example, it is estimated that GPT-3 was trained on more than 300 billion sentences. I don't think the training dataset was preprocessed, because human-written sentences may contain typos, wrong grammar, slang, abbreviations, etc. Preprocessing away typos, bad grammar, and so on would have consumed a lot of human labor and cost.
So... does deep learning really need preprocessing?
Answer: Building off of @Lelouch's comment -- adding augmentations like noise and performing preprocessing can help make a model robust, especially if your dataset is limited. However, if your dataset is huge to begin with (e.g., containing 300 billion sentences), then that's not really necessary as there's enough edge cases in your training data.
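For instance, salt-and-pepper noise can itself be injected as a training-time augmentation rather than something to scrub away. A minimal stdlib-only sketch (the "image" here is just a list of lists of grey values):

```python
import random

def salt_and_pepper(img, p, seed=0):
    """Return a copy of `img` with roughly a fraction `p` of pixels
    flipped to 0 (pepper) or 255 (salt); the input is left untouched."""
    rng = random.Random(seed)  # seeded for reproducible augmentation
    noisy = [row[:] for row in img]
    for r in range(len(noisy)):
        for c in range(len(noisy[r])):
            u = rng.random()
            if u < p / 2:
                noisy[r][c] = 0
            elif u < p:
                noisy[r][c] = 255
    return noisy

image = [[128] * 8 for _ in range(8)]  # a flat grey 8x8 toy image
noisy = salt_and_pepper(image, p=0.2)
```

Applying such corruptions on the fly during training exposes the model to the noisy edge cases that a small clean dataset would otherwise lack.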
For your comparison to PCA/feature selection, deep models do that automatically to some extent. Early layers can be thought of as "feature-extractors" for later layers. This means that your model can be more prone to overfitting, but that can be fixed with data augmentation (or just by adding more data). | {
"domain": "ai.stackexchange",
"id": 3883,
"tags": "deep-learning, data-preprocessing"
} |
Why are helium resonance lines called "resonance lines"? | Question: Examples of the use of the term:
Formation of the helium extreme-UV resonance lines
On the Formation of the Resonance Lines of Helium in the Sun (unpaywalled)
Formation of the helium EUV resonance lines.
When I hear of a resonant state I think of a free particle incident on a potential where it spends a lot of time then leaves, but the electrons responsible for these lines (HeI 584 Å, HeII 304 Å) start in bound states below their binding energy and are just transitions from one bound state to another.
Since most emission lines from atomic transitions are not called resonance lines:
Why are these lines different?
What is particularly resonant about them?
Answer: It appears to be a conventional label that is applied to transitions between the ground state and another energy level (some definitions specify the first excited level) of an atom and is used in all the physical sciences, not just astrophysics.
e.g. He I (58.4 nm) is a transition from $^1$P to the $^1$S ground state.
In fact, all atomic/ionic transitions can be considered resonant phenomena, but the term "resonance line" is applied only to these particular types of transition, perhaps because they are usually the strongest lines in the spectrum from that species, since in most cases the ground state is the most populated. So there is nothing "particularly resonant" about them.
A "resonance line" is:
A spectral line caused by an electron jumping between the ground state and the first energy level in an atom or ion. It is the longest-wavelength line produced by a jump to or from the ground state. Because the majority of electrons are in the ground state in many astrophysical environments, and because the energy required to reach the first level is the least needed for any transition, resonance lines are the strongest lines in the spectrum for any given atom or ion.
https://www.oxfordreference.com/view/10.1093/oi/authority.20110803100415843
And here it is used in that context in astrophysics: (Morton 2013)
The tabulation emphasizes resonance lines, i.e., lines whose lower level is the ground state.
Or in Chemistry: From https://goldbook.iupac.org/terms/view/R05341
The radiative decay of an excitation level may proceed to the neutral ground state and would thus occur at the same energy as the corresponding line in the absorption spectrum. Such a line is called a resonance line and the process is called resonance emission.
Or from the Basic Atomic Spectroscopic Handbook:(https://www.nist.gov/pml/basic-atomic-spectroscopic-data-handbook)
The strongest persistent lines usually include one or more resonance lines, i.e., transitions to the ground level or term. | {
"domain": "astronomy.stackexchange",
"id": 5551,
"tags": "the-sun, spectroscopy, terminology, stellar-atmospheres"
} |
Bidirectional Dictionary | Question: The management of bidirectional mappings is a reoccuring topic. I took the time to write an (hopefully) efficient implementation.
using System;
using System.Collections;
using System.Collections.Generic;
namespace buka_core.misc
{
/// <summary>
///
/// File Bijection.cs
///
/// Provides an implementation of a discrete bijective mapping
///
/// The inverses are created using shallow copies of the underlying datastructures, which leads to
/// the original object and all its derived inverses being modified if one object changes. For this
/// reason the class implements the interface ICloneable which allows the user to create deep copies
///
    /// The class also implements the interface IDictionary which provides easy access to the prototype
///
/// </summary>
/// <typeparam name="T_Proto">Datatype of keys for the prototype</typeparam>
/// <typeparam name="T_Inv">Datatype of keys for its inverse</typeparam>
public class Bijection<T_Proto, T_Inv> : ICloneable, IDictionary<T_Proto, T_Inv>
{
/// <summary>
/// Creates an empty discrete bijective mapping
/// </summary>
public Bijection()
{
}
/// <summary>
/// Used internally to efficiently generate inverses
/// </summary>
/// <param name="proto">The prototype mapping</param>
/// <param name="inverse">Its inverse mapping</param>
private Bijection(IDictionary<T_Proto, T_Inv> proto, IDictionary<T_Inv, T_Proto> inverse)
{
_Proto = proto;
_Inv = inverse;
}
/// <summary>
/// Indexer to insert and modify records
/// </summary>
/// <param name="key">Object for which the corresponding dictionary entry should be returned</param>
/// <returns>The value that key maps to</returns>
public T_Inv this[T_Proto key]
{
get
{
if (!_Proto.ContainsKey(key))
{
throw new KeyNotFoundException("[Bijection] The key " + key + " could not be found");
}
return _Proto[key];
}
set
{
this.Add(key, value);
}
}
/// <summary>
/// Returns a bijection for which keys and values are reversed
/// </summary>
public Bijection<T_Inv, T_Proto> Inverse
{
get
{
if (null == _inverse)
{
_inverse = new Bijection<T_Inv, T_Proto>(_Inv, _Proto);
}
return _inverse;
}
}
private Bijection<T_Inv, T_Proto> _inverse = null; // Backer for lazy initialisation of Inverse
/// <summary>
/// Prototype mapping
/// </summary>
private IDictionary<T_Proto, T_Inv> _Proto
{
get
{
if (null == _proto)
{
_proto = new SortedDictionary<T_Proto, T_Inv>();
}
return _proto;
}
/* private */
set
{
_proto = value;
}
}
private IDictionary<T_Proto, T_Inv> _proto = null; // Backer for lazy initialisation of _Proto
/// <summary>
/// Inverse prototype mapping
/// </summary>
private IDictionary<T_Inv, T_Proto> _Inv
{
get
{
if (null == _inv)
{
_inv = new SortedDictionary<T_Inv, T_Proto>();
}
return _inv;
}
/* private */
set
{
_inv = value;
}
}
private IDictionary<T_Inv, T_Proto> _inv = null; // Backer for lazy initialisation of _Inv
#region Implementation of ICloneable
/// <summary>
/// Creates a deep copy
/// </summary>
public object Clone()
{
return new Bijection<T_Proto, T_Inv>(
new SortedDictionary<T_Proto, T_Inv>(_Proto),
new SortedDictionary<T_Inv, T_Proto>(_Inv)
);
}
#endregion
#region Implementation of IDictionary<T_Proto, T_Inv>
public ICollection<T_Proto> Keys => _Proto.Keys;
public ICollection<T_Inv> Values => _Proto.Values;
public int Count => _Proto.Count;
public bool IsReadOnly => _Proto.IsReadOnly;
public bool Contains(KeyValuePair<T_Proto, T_Inv> item)
{
return _Proto.Contains(item);
}
public bool ContainsKey(T_Proto key)
{
return _Proto.ContainsKey(key);
}
public void Clear()
{
_Proto.Clear();
_Inv.Clear();
}
public void Add(T_Proto key, T_Inv value)
{
if (_Proto.ContainsKey(key))
{
_Inv.Remove(_Proto[key]);
}
if (_Inv.ContainsKey(value))
{
throw new ArgumentException("[Bijection] The inverse already maps " + value + " to " + _Inv[value]);
}
_Proto.Add(key, value);
_Inv.Add(value, key);
}
public void Add(KeyValuePair<T_Proto, T_Inv> item)
{
this.Add(item.Key, item.Value);
}
public bool Remove(T_Proto key)
{
if (_Proto.ContainsKey(key))
{
bool removed_inv = _Inv.Remove(_Proto[key]);
bool removed_proto = _Proto.Remove(key);
return (removed_proto && removed_inv); // == true
}
else
{
return false;
}
}
public bool Remove(KeyValuePair<T_Proto, T_Inv> item)
{
return this.Remove(item.Key);
}
public bool TryGetValue(T_Proto key, out T_Inv value)
{
return _Proto.TryGetValue(key, out value);
}
public void CopyTo(KeyValuePair<T_Proto, T_Inv>[] array, int arrayIndex)
{
_Proto.CopyTo(array, arrayIndex);
}
public IEnumerator<KeyValuePair<T_Proto, T_Inv>> GetEnumerator()
{
return _Proto.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return _Proto.GetEnumerator();
}
#endregion
#region Overrides
public override bool Equals(object obj)
{
            Bijection<T_Proto, T_Inv> obj_bijection = (obj as Bijection<T_Proto, T_Inv>);
            if (null == obj_bijection) return false;
if (this.Count != obj_bijection.Count) return false;
if (!_Proto.Equals(obj_bijection._Proto)) return false;
if (!_Inv.Equals(obj_bijection._Inv)) return false;
return true;
}
public override int GetHashCode()
{
return _Proto.GetHashCode();
}
public override string ToString()
{
return _Proto.ToString();
}
#endregion
}
}
Instances would be used as follows
Bijection<int, string> b = new Bijection<int, string>();
b[1] = "frog";
b[2] = "fish";
b[3] = "dog";
b[5] = "cat";
b[8] = "snake";
b[13] = "crocodile";
Console.WriteLine(b.Inverse["crocodile"]);
Console.WriteLine(b[13]);
Any feedback/suggestions are welcome. Is it reasonable to keep the object and its inverse tied like this, or would it be unexpected behavior that changing the inverse also changes the original object?
Answer:
public T_Inv this[T_Proto key]
{
get
{
if (!_Proto.ContainsKey(key))
{
throw new KeyNotFoundException("[Bijection] The key " + key + " could not be found");
}
return _Proto[key];
}
set
{
this.Add(key, value);
}
For get: I would just rely on the behavior of _Proto[TKey] - because you're not adding any new or extended behavior with your code.
For set: I would just do:
_Proto[key] = value;
_Inv[value] = key;
because you're not adding to the dictionary, you're setting.
Update: As JAD points out in his comment, this isn't consistent either, because it could lead to orphans in _Inv. So be careful.
public void Add(T_Proto key, T_Inv value)
{
if (_Proto.ContainsKey(key))
{
_Inv.Remove(_Proto[key]);
}
_Proto.Add(key, value);
_Inv.Add(value, key);
}
There is something wrong with the workflow or logic here:
Let's say _Proto.ContainsKey(key) returns true, then you remove the value from the inverse. But if _Proto.ContainsKey(key) is true, _Proto.Add(key, value) will throw an exception, and you then have an inconsistent Bijection object - because the existing inverse was removed while the proto was not.
Further: doing this:
Bijection<string, int> b = new Bijection<string, int>();
b["a"] = 1;
b.Add("b", 1);
The call b.Add("b", 1) will throw an exception because _Inv already has a key of 1 - but now b.Proto contains entries for both "a" and "b" with the value of 1, while b.Inv only has the entry 1 = "a".
You'll have to ensure that there always is a one-to-one correspondence between key and value, and ensure that the Bijection object stays consistent even if an invalid operation is performed on it.
Update
I can see, that you've updated the Add() method after I've copied the code to my IDE, so the above relates to the first version.
The new version:
public void Add(T_Proto key, T_Inv value)
{
if (_Proto.ContainsKey(key))
{
_Inv.Remove(_Proto[key]);
}
if (_Inv.ContainsKey(value))
{
throw new ArgumentException("[Bijection] The inverse already maps " + value + " to " + _Inv[value]);
}
_Proto.Add(key, value);
_Inv.Add(value, key);
}
however, doesn't do the trick either, because it will still throw an exception if _Proto contains the key, leaving the dictionaries out of sync.
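One way to keep both dictionaries in sync is to validate *both* directions before mutating either one, so a failed Add leaves the object untouched. A sketch of that invariant in Python (my own illustration, not code from the review; the C# translation is direct):

```python
class Bijection:
    """Two-way map that stays consistent even when add() fails."""
    def __init__(self):
        self.proto = {}  # key -> value
        self.inv = {}    # value -> key

    def add(self, key, value):
        # Check both mappings first; mutate only once the whole
        # operation is guaranteed to succeed (add is idempotent).
        if key in self.proto and self.proto[key] != value:
            raise KeyError(f"key {key!r} already maps to {self.proto[key]!r}")
        if value in self.inv and self.inv[value] != key:
            raise KeyError(f"value {value!r} already maps to {self.inv[value]!r}")
        self.proto[key] = value
        self.inv[value] = key
```

With this ordering, a conflicting call such as adding ("b", 1) after ("a", 1) raises without leaving an orphan in either dictionary.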
public bool Remove(T_Proto key)
{
if (_Proto.ContainsKey(key))
{
bool removed_inv = _Inv.Remove(_Proto[key]);
bool removed_proto = _Proto.Remove(key);
return (removed_proto && removed_inv); // == true
}
else
{
return false;
}
}
You can simplify this by using TryGetValue():
public bool Remove(T_Proto key)
{
if (_Proto.TryGetValue(key, out T_Inv value))
{
_Proto.Remove(key);
_Inv.Remove(value);
return true;
}
return false;
} | {
"domain": "codereview.stackexchange",
"id": 35893,
"tags": "c#, hash-map, .net-core"
} |
Friedmann equations for an open and accelerating expansion | Question: What is the difference between an open and a flat accelerating expansion with regard to the FLRW equations?
Do we only change the density parameters and the equation of state for dark energy? Or are the FLRW equations modified for an open accelerating expansion?
Answer: The homogeneity and isotropy hypotheses lead to the FLRW metric for the universe, namely :
$$ds^2 = c^2dt^2 - a(t)^2\left [ \dfrac{dr^2}{1+kr^2/R^2}+r^2 d\Omega^2 \right ]$$ where $R$ is the curvature radius of the universe and $k$ can take the values -1, 0, or 1 for a spherical, flat or hyperbolic universe. This result is a direct consequence of the cosmological principle.
In order to derive the relationship between these parameters and the universe content, one has to apply the Einstein equations. One thus obtains the so-called Friedmann equations :
$$\left\{\begin{matrix}
\dot{a}^2-\dfrac{8 \pi G}{3c^2} \displaystyle \sum_i \rho_i a^2 & = & \dfrac{kc^2}{R^2} \\
\dfrac{d}{dt}\left ( \rho_i a^3 \right ) & = & -P_i \dfrac{d}{dt} \left (a^3 \right) \\
\end{matrix}\right.$$
Where $\rho_i$ and $P_i$ denote the energy density and pressure of each component of the universe, whose behavior is determined by their state equation (e.g. $P = 0$ for dust-like matter, $P=\rho/3$ for radiation, $P=-\rho$ for dark-energy.)
The present value of $a$ is 1. Evaluating the first equation at the present time $t_0$, and using the definition of the Hubble constant $H_0 = \dot{a}(t_0)/a(t_0) = \dot{a}(t_0)$, we find :
$$H_0^2 - \dfrac{8 \pi G}{3c^2} \displaystyle \sum_i \rho_i = \dfrac{kc^2}{R^2}$$
It is convenient to define the critical density as $\rho_c = 3c^2 H_0^2/(8\pi G)$, so that :
$$\rho_c - \displaystyle \sum_i \rho_i = \dfrac{3c^2}{8 \pi G} \dfrac{kc^2}{R^2}$$
This result means that the universe is spherical if the total energy density exceeds $\rho_c$, flat if they are equal, and hyperbolic if the total density is lesser. This is how the curvature is related to the universe content. You can see that this doesn't depend on the density distribution (i.e. how much is dark energy, how much is everything else).
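As a numerical sanity check (a sketch; SI constants and H_0 = 67.7 km/s/Mpc are assumed values), the critical energy density $\rho_c = 3c^2H_0^2/(8\pi G)$ evaluates to roughly $8 \times 10^{-10}\ \mathrm{J/m^3}$:

```python
import math

# Assumed constants (SI)
c = 299_792_458.0              # speed of light, m/s
G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
H0 = 67.7 * 1000 / 3.086e22    # Hubble constant in s^-1 (67.7 km/s/Mpc)

# Critical energy density from the first Friedmann equation
rho_c = 3 * c**2 * H0**2 / (8 * math.pi * G)   # J/m^3

# In this answer's convention, rho_total > rho_c corresponds to a
# spherical universe, rho_total < rho_c to a hyperbolic one.
def curvature_term(rho_total):
    return rho_c - rho_total   # proportional to k c^4 / R^2

print(f"rho_c = {rho_c:.2e} J/m^3")
```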
One way to quantify the departure of the universe from a flat geometry is to evaluate the curvature density parameter $\Omega_k = (\rho_c-\rho)/\rho_c$. The most stringent limits on $\Omega_k$ are obtained from the analysis of the CMB anisotropies and are compatible with a flat universe ($|\Omega_k| < 5 \times 10^{-3}$, arXiv:1502.01589), but there exist other ways to measure this parameter using standard candles and standard rulers. | {
"domain": "physics.stackexchange",
"id": 34470,
"tags": "general-relativity, cosmology, astrophysics, space-expansion"
} |
Will Proving or Disproving of any of the following have effects on Chemistry in general? | Question: I am working on a project relating to-
https://en.wikipedia.org/wiki/Millennium_Prize_Problems
I wanted to list the effects of them being proved or disproved in different aspects of science and maths. It was fairly easy to find such results on physics and maths as the questions primarily come up from them however I couldn't find any reference to effects it would have on our understanding of chemistry.
Is it because there will be no effect at all and that they are the least concern to chemists (in terms of applicability) or if there are then what are they?
Answer: All right, let's sum it up.
P vs NP, if solved the way that pretty much everybody expects (that is, $\rm P\ne NP$), will have no far-reaching consequences, because that's what we have been thinking for quite a while now. If it would miraculously happen to be otherwise (that is, $\rm P=NP$), that would be quite a shock to many fields, especially to computer science, and by extension, to computational chemistry. But I don't believe in miracles.
Navier–Stokes (no matter how it will turn out) will have consequences in fluid dynamics, and by extension, maybe in some relatively narrow areas of chemical technology.
Yang–Mills will cause repercussions in the areas of physics which are the most distant from chemistry.
The rest of the problems, as far as I understand, have no bearing whatsoever on our material world. | {
"domain": "chemistry.stackexchange",
"id": 12586,
"tags": "theoretical-chemistry"
} |
Is there any way to survive a solar winter like in the movie Sunshine? | Question: Is there any way to survive a solar winter like in the movie Sunshine?
A solar winter is where, for some reason, the sun loses its capacity to produce radiation (heat etc.). It doesn't lose everything, but some of its radiation energy (say 50%). That causes the earth to cool down, causing the next "ice age".
Answer: Food could be grown using UV lights, powered by nuclear fission.
We could probably do it.
But it would be the space equivalent of a human being on a life-support machine - all our time and energy would be consumed just with survival, so while humans as a species might survive, our society, culture and science would probably slow down to a crawl, or disappear completely. Most other species would die off, so it would be a pretty dismal future. | {
"domain": "physics.stackexchange",
"id": 3754,
"tags": "astrophysics"
} |
Why do we only initially feel the bending of a hair? | Question: In my neurobiology class, my professor prompted us to take the tip of our pencil and fold or push down a single strand of hair on our arm or leg in the opposite direction that it grows in. He then commented on how we are able to detect the initial stimulus, but how after a while we no longer feel the hair being bent.
My question is thus, why does this occur? Why would we suddenly no longer be able to feel the stimulus of the hair being forced back?
Answer: Short answer
Touch receptors become unresponsive after continuous stimulation, a process called neural adaptation.
Background
There are various touch receptors in the skin (Fig. 1). One of them is the hair follicle receptor. These receptors form sensory nerve fibers that wrap around each hair bulb. Bending the hair stimulates the nerve endings.
All the afferent fibers of the skin receptors in the skin show neural adaptation. A rough classification of the various skin receptors is grouping them in rapidly and slowly adapting fibers.
Rapidly adapting fibers respond mainly to pressure changes. Typical examples are the Pacinian and Meissner's corpuscles, but also the hair follicle receptor.
Fig. 1. shows a static stimulation of the various receptors. Upon initial stimulation, the hair follicle receptor (Fig. 1a) vigorously fires action potentials. Because the hair follicle receptor rapidly adapts, however, it soon stops responding when the hair is continuously, statically bent in that same direction. Once the stimulus is removed it fires another volley of action potentials because the hair moves again.
In other words, the hair follicle receptor responds to strokes, stretch and skin vibrations, as it is a novelty detector; it detects change, just like Pacinian and Meissner's corpuscles do (Fig. 1b,c). This is opposed to Merkel cells and Ruffini endings (Fig. 1d,e) - these receptors are designed to continuously fire action potentials upon a static pressure stimulus (Fig. 1), although even these receptors eventually give up and become unresponsive.
Fig. 1. The skin receptors and neural adaptation. source: Delmas et al., (2011)
Reference
- Delmas et al., Nature Rev Neurosci (2011); 12: 139-15 | {
"domain": "biology.stackexchange",
"id": 5271,
"tags": "neurophysiology, touch"
} |
Solving recurrence relation where the $f(n)$ has some constant factor $k$ where $0 < k < 1$ | Question: I am trying to see if a recurrence relation where $f(n)$ has some constant factor $k$, e.g. $f(n)=kn$ where $0 < k < 1$, is $O(n)$. I am reaching a different result depending which route I take. Given the following recurrence relation:
$$T(n)=2T(\frac{n}{2})+f(n)$$
$$T(n)=2T(\frac{n}{2})+kn$$
Since $0 < k < 1$, we can represent $kn=n^c$, where $0 < c < 1$
This falls under the case 1 of the Master Theorem, because $a=2, b=2$, and therefore $log_b a = log_2 2 = 1 > c$.
It's $O(n)$.
But if I try to unfold the recurrence:
$$\begin{split}T(n) & = 2T(\frac{n}{2})+kn \\
& = 4T(\frac{n}{4})+2kn \\
& = 8T(\frac{n}{8})+3kn \\
& = ... \\
& = 2^cT(\frac{n}{2^c})+ckn \\
\end{split}$$
When $\frac{n}{2^c}=1$, $n=2^c$, then $log_2 n = c$.
So now it's $T(n) = nT(1) + kn log_2 n$, which is $O(n log_2 n)$. Now I am confused.
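A quick numerical check of the unfolded recurrence (a sketch; $T(1) = 1$ is an assumed base case) confirms that the driving term $kn$ with $0 < k < 1$ still produces the closed form $T(2^m) = 2^m + k\,m\,2^m$, i.e. $\Theta(n \log n)$ growth:

```python
import math
from functools import lru_cache

K = 0.5  # any constant with 0 < k < 1

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 2 T(n/2) + k*n, with T(1) = 1, for n a power of two
    if n == 1:
        return 1.0
    return 2 * T(n // 2) + K * n

# Unfolding predicts T(2^m) = 2^m * T(1) + k * m * 2^m
for m in (4, 10, 16):
    n = 2 ** m
    assert math.isclose(T(n), n + K * m * n)
```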
Answer: The error is where you claim that $kn=n^c$ where $0<c<1$. This is not correct. Even when $k<1$, it is still the case that $kn=\Theta(n)$; it is not $O(n^c)$ for any $c<1$. (Check the definition of big-O notation.) | {
"domain": "cs.stackexchange",
"id": 14570,
"tags": "recurrence-relation, master-theorem"
} |
Minimum set cover with incompatible sets | Question: I'm interested in a variant of minimum set cover where some sets are ``incompatible'' (they can't be chosen simultaneously).
To state it more formally:
We have a finite base set $X$ and a family $\mathcal{R}$ of subsets of $X$. We also have an undirected graph $G$ with vertex set $\mathcal{R}$ and edges representing incompatibilities between sets. The goal is to find a minimum set cover of $X$ which is also an independent set of $G$.
Does this problem have a name? Or is it a special case of a studied problem? Or can it be reduced to a well-studied problem with a small blowup in runtime? (by "small blowup" I mean not simply polynomial, but preferably a small degree polynomial).
I'm particularly interested in the geometric variant where $X$ is a set of points and the sets in $\mathcal{R}$ are defined as the intersection of $X$ with some simple geometric ranges (in which case tools such as VC-dimension, $\epsilon$-nets, range-searching and so on might come in handy for approximation algorithms). But I welcome any relevant reference.
Answer: Your problem can be stated as a minimum weight maximal independent set problem.
Construction:
Construct a bipartite graph $G = (L,R,E)$, where the right partition $R$ corresponds to $\mathcal{R}$ and the left partition $L$ corresponds to $X$. An edge $(u,v) \in E$ iff an element $u$ is contained in the set $v$.
Let $G'$ be the updated graph obtained by adding edges within $R$ based on the incompatibility of vertices in $R$.
Furthermore, assign a weight of $\infty$ on each vertex of $L$ and weight $1$ on each vertex of $R$. Also, attach a new pendant vertex of weight $0$ to each vertex in $R$. Let $G_{w}$ be this new graph.
Now, your problem can be stated as finding minimum weight maximal independent set problem (MMIS) on $G_{w}$. It is a standard problem discussed here. Your problem has a feasible solution if and only if MMIS has a finite value. Also, your problem has a solution of size $k$ if and only if MMIS has a value $k$.
Correctness:
Suppose, a feasible instance of the independent set cover problem is $S_{1}, \dotsc, S_{k}$. Then, the maximal independent set contains vertices in $R$ corresponding to $S_{1}, \dotsc, S_{k}$ and pendant vertices corresponding to the remaining sets. It is easy to see that it is an independent set and it is also maximal. Moreover, the weight of the independent set is exactly $k$.
Similarly, you can prove the other direction, i.e., if there is a maximal independent set of value $k$, there is an independent set cover of size $k$.
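The construction can be checked by brute force on a tiny instance (a sketch; the example and all names are mine): $X = \{1,2\}$, sets $A=\{1\}$, $B=\{2\}$, $C=\{1,2\}$, with $A$ and $C$ incompatible. The minimum independent set cover is $\{C\}$ of size 1, and the minimum-weight maximal independent set on $G_w$ also has value 1:

```python
from itertools import combinations

INF = float("inf")
# Vertices of G_w: elements (weight INF), sets (weight 1), pendants (weight 0)
weights = {"x1": INF, "x2": INF, "A": 1, "B": 1, "C": 1,
           "pA": 0, "pB": 0, "pC": 0}
edges = {("x1", "A"), ("x2", "B"), ("x1", "C"), ("x2", "C"),  # membership
         ("A", "C"),                                          # incompatibility
         ("A", "pA"), ("B", "pB"), ("C", "pC")}               # pendant edges

def adjacent(u, v):
    return (u, v) in edges or (v, u) in edges

def is_independent(s):
    return all(not adjacent(u, v) for u, v in combinations(s, 2))

def is_maximal(s):
    # Every vertex outside s must have a neighbour inside s
    return all(any(adjacent(u, v) for v in s) for u in weights if u not in s)

best = INF
V = list(weights)
for r in range(len(V) + 1):
    for s in combinations(V, r):
        if is_independent(s) and is_maximal(s):
            best = min(best, sum(weights[u] for u in s))
print(best)  # minimum-weight maximal independent set value
```

The optimum here is the set {C, pA, pB}: the pendants of the unused sets are taken for free, and the infinite weights keep element vertices out of any finite solution.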
Note: Instead of assigning $\infty$ value to the vertices in $L$, you can also assign some sufficiently large finite value, say $|\mathcal{R}|+1$. That will also work. | {
"domain": "cs.stackexchange",
"id": 18434,
"tags": "algorithms, reference-request, np-hard, set-cover"
} |
Getting parallel items in dependency resolution | Question: I have implemented a topological sort based on the Wikipedia article which I'm using for dependency resolution, but it returns a linear list. What kind of algorithm can I use to find the independent paths?
Answer: I assume that an edge $(u,v)$ means that $u$ has to be executed before $v$. If this is not the case, turn around all edges. I furthermore assume that you are less interested in paths (those are already given by the DAG) than in a good execution strategy given the dependencies.
You can easily adapt the topological sort procedure: instead of appending, merge all items of the same "depth" to one set. You get a list of sets, each of which contains items you can execute/install in parallel. Formally, the sets $S_i$ are defined thus for the graph $G = (V,E)$:
$\qquad \displaystyle \begin{align}
S_0 &= \{ v \in V \mid \forall u \in V. (u,v) \notin E \} \\
S_{i+1} &= \{v \in V \mid \forall u \in V. (u,v) \in E \to u \in \bigcup_{k=0}^i S_k \}
\end{align}$
You can then execute your tasks like this (let's assume there are $k$ sets):
for i=0 to k
parallel foreach T in S_i
execute T
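The level sets themselves can be computed with a level-by-level variant of Kahn's algorithm. A sketch in Python (assuming the graph is given as adjacency lists mapping each task to its successors, i.e. an edge $(u,v)$ means $u$ runs before $v$):

```python
from collections import defaultdict

def dependency_levels(graph):
    """Group DAG vertices into sets S_0, S_1, ... whose members can
    run in parallel; each level depends only on earlier levels."""
    indeg = defaultdict(int)
    for u in graph:
        indeg.setdefault(u, 0)
        for v in graph[u]:
            indeg[v] += 1
    level = [v for v in indeg if indeg[v] == 0]  # S_0: no prerequisites
    levels = []
    while level:
        levels.append(set(level))
        nxt = []
        for u in level:
            for v in graph.get(u, ()):
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        level = nxt
    return levels

# Example: a first; b and c in parallel; then d
print(dependency_levels({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))
```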
Of course, this does not yield maximum throughput if tasks take different amounts of time: two parallel, independent linear chains sync after each element. To circumvent this, I suggest you work directly on the DAG with a parallel traversal starting in the source nodes -- those in $S_0$ -- and syncing/forking in nodes with several incoming/outgoing edges:
parallel foreach T in S_0
recursive_execute T
where
recursive_execute T {
atomic { if T.count++ < T.indeg then return }
execute T
parallel foreach T' in T.succ
recursive_execute T'
}
and T.count is a simple counter holding the number of predecessors of T which have been executed already, T.indeg the number of predecessors and T.succ the set of successors. | {
"domain": "cs.stackexchange",
"id": 312,
"tags": "algorithms, graphs, parallel-computing, scheduling"
} |
How much does temperature affect the time of sunrise? | Question: Please forgive me if this is a dumb question, or if my understanding of basic physics is wrong. Please feel free to correct me.
As I understand it, if the Earth didn't have any atmosphere, then the time of sunrise would be the point when the Sun's rays approached your position at a tangent. For example, imagine that the Earth is a perfect sphere, with you standing on top, and the (apparent) motion of the Sun was it travelling clockwise as follows (scales completely wrong, but hopefully the concept is correct)...
Now, assume for simplicity that the Earth's atmosphere is of constant density, and starts at a defined height above the planet's surface; then (if I understand correctly), the Sun's rays would be refracted as they entered the atmosphere, meaning that you would see the sun slightly earlier (exaggerated)...
Obviously, this is very simplified, not least because the atmosphere is a gas, and therefore of variable density, presumably being less dense the higher you went. I imagine that the variation in density would mean that the rays appeared to curve, rather than take a sudden turn as shown above.
My question is, how much difference does the air temperature make to the amount of refraction, which in turn affects the time at which you would see the sunrise? My feeling is that if it were cold across the Earth, then the air would be more dense, resulting in a greater degree of refraction, and so an earlier sunrise. By contrast, a higher temperature would mean lower density, less refraction and a later sunrise.
Anyone able to give me some estimates of how much difference you would expect to see between a warm summer's day and a cold winter's day, assuming normal parameters for "cold" and "warm" for our planet?
Answer: The biggest cause of refraction is the change in density of the atmosphere with altitude, not changes caused by weather conditions at the surface.
There are formulas to calculate this effect assuming standard values of temperature and pressure at ground level. The apparent change in position of the sun in those conditions is about the same as the sun's visible diameter.
The time difference this causes depends on the angle at which the sun rises above the horizon, which depends where you are on the earth and what time of the year it is. If the sun rises vertically, the time difference is about 2 minutes, but if it rises at a shallow angle to the horizon it may be much longer.
Changes in air temperature and pressure also have an effect, which is easy to observe (from the known position of the stars, not just by observing the sun) but difficult to predict in a useful way. As a consequence of this, it is not very useful to predict sunrise and sunset times to more accuracy than the nearest minute.
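The ~2 minute figure for a vertically rising sun follows from a back-of-the-envelope estimate (a sketch; the standard horizontal refraction of about 34 arcminutes is an assumed input): the sun moves along its apparent path at 15 degrees per hour, so the sunrise shift is the refraction angle divided by the vertical rate of rise.

```python
import math

REFRACTION_DEG = 34 / 60          # typical refraction at the horizon, ~0.57 deg
SOLAR_RATE_DEG_PER_MIN = 15 / 60  # apparent solar motion, 15 deg per hour

def sunrise_shift_minutes(rise_angle_deg):
    """Minutes of apparent early sunrise, for a sun rising at the
    given angle to the horizon (90 = vertically)."""
    vertical_rate = SOLAR_RATE_DEG_PER_MIN * math.sin(math.radians(rise_angle_deg))
    return REFRACTION_DEG / vertical_rate

print(round(sunrise_shift_minutes(90), 1))  # vertical rise: about 2 minutes
print(round(sunrise_shift_minutes(30), 1))  # shallow rise: noticeably longer
```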
See https://en.wikipedia.org/wiki/Atmospheric_refraction | {
"domain": "physics.stackexchange",
"id": 55336,
"tags": "temperature, refraction, earth, atmospheric-science, sun"
} |
Add edges to undirected graph to make connected and minimize longest path | Question: I am trying to find an efficient algorithm to solve the following problem: Given an undirected disconnected graph, I want to add as few edges as possible to make the graph connected while minimizing the number of vertices on the longest path in the resulting connected graph. As a result I need the number of vertices on the longest simple path in the resulting graph.
For example, given a graph with the following edges:
1 0
2 0
4 3
5 3
There are multiple ways to add edges to make it a connected graph. But since I want to minimize the number of vertices on the longest path in the resulting graph, I have to add an edge between 0 and 3. Now the path with the most vertices contains 4 vertices (2 -> 0 -> 3 -> 5). If I had added an edges between 2 and 4 then the path with the most vertices would've been 1 -> 0 -> 2 -> 4 -> 3 -> 5.
Another example, given a graph with the following edges:
0 1
1 2
3 4
5 6
Here two edges need to be added: between 1 and 3 and between 3 and 5. The path with the most vertices then contains 5 vertices.
My approach was to first use DFS to identify all components. Then, for each component, I find the longest path in that component, I take the vertex that is in the middle of each of those paths and connect them. This results in one connected graph. In that resulting graph I can find the path with the most vertices. I suspect that there must be a more efficient approach to this, especially since it needs to work for a disconnected graph with at most $10^6$ vertices. Hopefully someone has a better idea on how to solve this.
Answer: Clearly, the number of added edges is one less than the number of connected components in the graph, since we can decrease this number by one by adding an edge and we seek the minimum number of added edges.
The trick is to add the edges between central vertices of the connected components. A central vertex of a connected component is a vertex that has minimum distance to the farthest vertex to this vertex in the component (we call this distance the eccentricity). So it is a vertex that minimises the eccentricity.
Here goes the algorithm. In each step we find the connected components of the graph and we handle them as different graphs say $G_1, \dots, G_l$. Note that the task is to add $l-1$ edges between these graphs in a way that makes them connected. Each time we add an edge we will combine two graphs and decrease $l$ by one until it gets to the value 1.
After we have found the connected components we find a central vertex in each of them (central vertices might not be unique, but we can choose any of them). We keep for each component a central vertex and the value of its eccentricity. Now in each step we combine two graphs; we distinguish two cases.
- The minimum eccentricities are equal, then any of the two central vertices can be chosen as a central for the new graph and the new eccentricity is the previous one plus one. (Try to prove the correctness and optimality of this statement).
- The minimum eccentricities differ. Keep the central with the greater eccentricitiy. Its eccentricity does not change. (Also try to prove this).
The eccentricities of each component can be computed using a BFS from each vertex, in total quadratic time in the size of the input graph. Each iteration can be implemented in constant time (for example, keep a stack of pairs (central, eccentricity), and in each step pop two elements, unify them and push the new pair back onto the stack). Each step decreases the size of the stack by one and hence we end after $l-1$ steps.
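The merge rules can be sketched directly (names are mine), applied here to the first example from the question, the components {0, 1, 2} and {3, 4, 5}, each of which has a center of eccentricity 1:

```python
from collections import deque

def eccentricities(adj, comp):
    """BFS from every vertex of one component: vertex -> eccentricity."""
    out = {}
    for s in comp:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        out[s] = max(dist.values())
    return out

def merge_components(centers):
    """centers: list of (central_vertex, eccentricity), one per component.
    Returns the edges to add and the final central eccentricity."""
    added, centers = [], list(centers)
    while len(centers) > 1:
        (c1, e1), (c2, e2) = centers.pop(), centers.pop()
        added.append((c1, c2))
        if e1 == e2:
            centers.append((c1, e1 + 1))  # equal: keep either, ecc grows by one
        else:
            centers.append((c1, e1) if e1 > e2 else (c2, e2))  # keep the larger
    return added, centers[0][1]

# First example from the question: edges 1-0, 2-0 and 4-3, 5-3
adj = {0: [1, 2], 1: [0], 2: [0], 3: [4, 5], 4: [3], 5: [3]}
eccA = eccentricities(adj, [0, 1, 2])
eccB = eccentricities(adj, [3, 4, 5])
cA = min(eccA, key=eccA.get)  # vertex 0, eccentricity 1
cB = min(eccB, key=eccB.get)  # vertex 3, eccentricity 1
added, final_ecc = merge_components([(cA, eccA[cA]), (cB, eccB[cB])])
```

One edge between the centers 0 and 3 is added, and the final central eccentricity is 2, matching the 4-vertex longest path (2 -> 0 -> 3 -> 5) from the question.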
The correctness of the previous two cases implies the correctness of the algorithm and that it does not matter in which order you process the components. Hence we get a total quadratic running time. There are some heuristics to improve the running time of computing central vertices of connected components (which is the only quadratic factor in our algorithm and hence the bottleneck), including pruned BFS. However, as far as I know, this is the best achievable asymptotic running time. | {
"domain": "cs.stackexchange",
"id": 14969,
"tags": "algorithms, graphs, efficiency, connected-components"
} |
Volatile nature due to hydrogen bonding | Question: Why does intramolecular hydrogen bonding make an organic (or possibly inorganic as well(?)) compound volatile.
What I think is that it might be due to decrease in the solubility of the compound as intermolecular hydrogen bonding cannot be done after that.
Answer: "Volatile" usually refers to ease of evaporation, high vapor pressure, so it is a property of a pure substance while solubility is a property of the combination of two or more substances.
An intramolecular hydrogen bond makes a compound more volatile because the charge imbalances are offset internally instead of by forming an interaction with another molecule.
Interactions between different molecules of the substance would lower volatility, but intramolecular hydrogen bonds prevent the hydrogen bond donor and acceptor of one molecule from interacting with another molecule. | {
"domain": "chemistry.stackexchange",
"id": 2882,
"tags": "physical-chemistry, hydrogen-bond"
} |
Two body problem with given start positions and velocities | Question: I am trying to program an $n$-body problem simulation. To calculate the position and velocity after a time-step I want to split it into multiple two-body problems. Now I am stuck, trying to find the velocity and position of two bodies (ignoring all the others) after one time-step with given mass, start position and velocity.
I am able to calculate the force between the two, but I am unable to solve the differential equation
$$
\ddot{\vec{r}}_{12}(t) = -G M \frac{\vec{r}_{12}(t)}{|\vec{r}_{12}(t)|^3} \\
M := m_1 + m_2 \\
\vec{r}_{12}(t) := \vec{r}_1(t) - \vec{r}_2(t)
$$
to calculate the position relative to each other.
What is $\vec{r}(t)$ and how can I calculate it? And is there another and maybe faster way to get the positions and velocities?
Edit: As @Sofia pointed out, this was not clear: $\vec{r_1}$ and $\vec{r_2}$ are the location vectors of the two masses.
Answer:
Now I am stuck, trying to find the velocity and position of two bodies (ignoring all the others) after one time-step with given mass, start position and velocity.
You get the final velocity at impact $v$ with
$$v=\sqrt{\int_{r_1}^{r_2} \left(\frac{2 G M}{r^2}+\frac{v_0^2}{r_2-r_1}\right) \, \text{d}r}$$
Where $r_2$ is the initial distance and $r_1$ the final (so if you would have the moon falling on the earth, $r_2$ would be the distance from center to center and $r_1$ the radius of the earth plus the radius of the moon).
If you want the velocities at a specific distance in free fall just replace your $r_1$ with the distance you need.
The time $t$ until mass A hits mass B is
$$t=\int_{r_1}^{r_2} \frac{1}{\sqrt{2 G (M_1+M_2) \left(\frac{1}{r}-\frac{1}{r_2}\right)+v_0^2}} \, \text{d}r$$
Those are the analytical solutions. If you want to solve it differentially you need to go numeric. As a numerical example, take two point masses separated by a distance of 6371 km, one with the mass of the earth and the other with the mass of the moon: they will meet in 889 sec, and the heavy mass will move x = 77 km to the right while the light one moves R-x to the left. If you want the velocity or acceleration at time t, just replace x1[t] by x1'[t] or x1''[t] in the numerical solution.
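For the numerical route, a symplectic scheme such as kick-drift-kick leapfrog handles the relative-coordinate equation $\ddot{\vec r} = -GM\,\vec r/|\vec r|^3$ well, because it conserves energy over long runs. A minimal sketch of my own (not the code referenced in this answer; units with $G M = 1$ are assumed):

```python
import math

def leapfrog_orbit(r, v, dt, steps, gm=1.0):
    """Integrate d^2 r/dt^2 = -GM r/|r|^3 with kick-drift-kick leapfrog."""
    x, y = r
    vx, vy = v
    def acc(x, y):
        d3 = (x * x + y * y) ** 1.5
        return -gm * x / d3, -gm * y / d3
    ax, ay = acc(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # kick (half step)
        x += dt * vx;        y += dt * vy          # drift (full step)
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # kick (half step)
    return (x, y), (vx, vy)

# Sanity check: a circular orbit r0 = (1, 0), v0 = (0, 1) has period 2*pi,
# so after one period the body should return close to (1, 0).
r_end, v_end = leapfrog_orbit((1.0, 0.0), (0.0, 1.0), dt=1e-3,
                              steps=round(2 * math.pi / 1e-3))
```

Once the relative coordinate is integrated, the individual positions follow from the center of mass: $\vec r_1 = \vec R + \frac{m_2}{M}\vec r_{12}$ and $\vec r_2 = \vec R - \frac{m_1}{M}\vec r_{12}$.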
I am trying to program a n-body problem simulation
You can also check out my 4 Body simulations at http://yukterez.ist.org/3kp/3k3D.html, http://yukterez.ist.org/4kp2 and http://bit.ly/1aMbgGH - the code is more or less self explaining. | {
"domain": "physics.stackexchange",
"id": 19747,
"tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, simulations"
} |
Why does the scattering cross section equal to the sum over all differential cross section; including the incident angle? | Question: According to Beer Lambert's law, the intensity of light passing through a homogeneous medium diminishes at a rate proportional to the incident intensity; i.e.
$$
\frac{dI(s)}{ds} = -I(s)\sigma\, ,
$$
where $s$ is the parameter for the length of path taken by the light, and $\sigma$ is the volume extinction coefficient. Now assuming that the extinction happens purely from scattering, furthermore, assume that only single scattering events take place. The usual method to determining $\sigma$ is to sum the differential scattering cross sections $d\sigma$,
$$
\sigma = \int d\sigma = \oint\frac{d\sigma}{d\Omega}\, d\Omega = \int_0^{2\pi}\, d\phi\, \int_0^\pi \sin\theta\, d\theta \frac{d\sigma}{d\Omega}(\phi,\theta)\, ,
$$
where the last equality was taken from wiki (https://en.wikipedia.org/wiki/Cross_section_(physics)).
My question is, assuming that the incident light is coming in from an angle of $\theta=\pi$, why do we include $\theta=0$ in the integral? Isn't it the case that any light that scatters into the $\theta=0$ angle contributes to the intensity $I(s)$? Instead, shouldn't the scattering cross section equation be the one shown below?
$$
\sigma = \int^{2\pi}_0d\phi\, \lim_{\Theta\rightarrow 0} \int_\Theta^\pi\sin\theta\, \frac{d\sigma}{d\Omega}(\phi, \theta)
$$
Answer: Consider a plane wave propagating inside a homogeneous medium. Its distribution of directions will be
$$\sigma_I(\vec\omega)=I \delta(\vec\omega-\vec\omega_0),$$
where $\vec\omega_0$ is the direction of its wave vector, and $\delta$ is the Dirac delta distribution. Any single scattering on a scattering center $\vec r$ will result in an outgoing wave, which cannot have a finite number of finite-amplitude$^\dagger$ delta-like singularities, because the scattering center itself is finite, and due to this finite size any sharp outgoing wave packet will diverge due to diffraction.
This means that integral of this outgoing wave over any solid angle $\varepsilon$ will have the limit of $0$ as $\varepsilon\to0$. On the other hand, integrating the incoming wave around $\vec\omega_0$ will yield a nonzero limit as the integration domain shrinks to zero (due to the properties of Dirac delta distribution). As your last equation only contains the former wave, but not the latter, this equation would be equivalent to the first one.
This will continue to be true when multiple scattering orders are taken into account. The only case where this analysis may break down is when the medium is organized: e.g. when it's a crystal or a quasi-crystal. In this case the interference between scattered fields (of all scattering orders) of individual scatterers will result in an outgoing wave with delta-like peaks in distribution of directions. But in this case Beer–Lambert law will be inapplicable anyway: $\sigma$ may be finite while extinction due to scattering is zero.
$^\dagger$By amplitude of a delta-like singularity $A\delta(x)$ I mean its factor $A$. | {
"domain": "physics.stackexchange",
"id": 62007,
"tags": "electromagnetism, classical-mechanics, scattering, scattering-cross-section"
} |
What is abnormal Beckmann rearrangement? | Question: What is abnormal Beckmann rearrangement? I read this term in one of my solution papers but I am not able to find relevant literature for the same. Answers linking to some source, or giving the relevant theory are appreciated.
Answer: The abnormal Beckmann rearrangement differs from the normal one in that a nitrile is the product of the reaction. As compiled by William Reusch (formerly Michigan State University) in his freely accessible virtual textbook of organic chemistry, the assumed mechanism of the abnormal Beckmann rearrangement is e.g.,
(image credit)
For comparison, the normal Beckmann rearrangement yields an amide:
(image credit, same page) | {
"domain": "chemistry.stackexchange",
"id": 15960,
"tags": "organic-chemistry, reaction-mechanism, molecular-structure, rearrangements"
} |
What's the furthest distance that something could travel and eventually come back to Earth? | Question: Imagine u shoot a photon into the sky at a mirror far away in space, and you want the photon to bounce off the mirror and eventually come back to you. Considering the cosmological constant, what's the furthest that mirror could be (at the moment you shot the photon) and still have the photon get to it, bounce back and eventually return?
I know the mirror would start moving away, and the photon would have to catch up with it. I think it would have to be around 15 BLY away by the time the photon reached it (I'm not sure), but I don't know how far it would have to be at the start to be 15 BLY away by then.
Answer: There's an easy way to work out the distance to the mirror at the present time, if it moves with the Hubble flow: imagine that instead of being emitted by us, the light is emitted by the mirror image of the Earth and simply passes through the mirror to us. The farthest comoving distance that the mirror Earth could be is the same as the farthest present-day distance from which light emitted in the present day can ever reach us; assuming ΛCDM is correct, that's around 16.5 billion light years. The comoving distance to the mirror (and the metric distance to it in the present day) is exactly half that.
Working out the metric distance to the mirror at the time of reflection is trickier. If (nonstandard terminology warning) $D(a)$ is the comoving distance light can travel from scale factor $a$ to $∞$, then the scale factor at which the light hits the mirror is $D^{-1}\left(\frac12 D(1)\right)$. If all energy was dark energy and the expansion was exactly exponential then the solution would be $2$ exactly. Using central parameters from here I get $a\approx 2.09$, so the mirror distance is a bit higher, around 17 billion light years.
For what it's worth there's an exact expression for this function in a flat perfect dust+dark energy universe:
$$D(a) = \frac{c}{a \sqrt{Ω_Λ} H_0} \; {}_2F_1\left(\frac13,\frac12;\frac43;-\frac{Ω_m}{a^3Ω_Λ}\right)$$
where ${}_2F_1$ is the hypergeometric function. | {
"domain": "astronomy.stackexchange",
"id": 4940,
"tags": "cosmological-inflation, cosmological-horizon"
} |
Viability of a Fayet Iliopoulos term in the MSSM | Question: Why is a Fayet-Iliopoulos term $-kD$ irrelevant or subdominant in the in the MSSM (Minimal Susy Standard Model)?
According to Martin (A Supersymmetry Primer, p.70) it's because squarks and sleptons don't have a mass term in the superpotential. Considering that we want a positive value for the scalar potential (in order to have susy breaking):
$$V=\sum_i |m_i|^2 |\phi_i|^2 +1/2 (k-g\sum_i q_i |\phi_i|^2)^2$$
It seems to me that we can achieve, with every possible VEV for the scalar fields, a positive value for the scalar potential, even without the mass term. What am I missing?
Answer: Maybe I've understood the problem. In the minimum we have (only one scalar field for simplicity):
$$\frac{dV}{d\phi}=0=\phi [ m^2 -kgq+g^2q^2 \phi^2]$$
If $m=0$ we are forced to choose a Mexican hat potential with a maximum at $\phi=0$ and two degenerate minima. So we are forced to have a nonzero VEV for the scalar fields. If these scalar fields are sleptons and squarks, this implies that color and EM must be broken (and we don't want this). | {
"domain": "physics.stackexchange",
"id": 21703,
"tags": "quantum-field-theory, supersymmetry, beyond-the-standard-model, mssm"
} |
Water pressure on a floodlight in a swimming pool | Question: I have a circular swimming pool with a diameter of $4.5\;\text{m}$. The water height is usually about $1\;\text{m}$. Somewhere in this pool is a single floodlight, circular as well, diameter $25\;\text{cm}$.
I would like to calculate how much pressure this floodlight is exposed to (the pool is above ground).
My first approach was to measure how deep the floodlight is: the bottom of it is $70\;\text{cm}$ beneath the surface.
------- pool top
|
-------| water surface
|
|
< floodlight upper end
<
< floodlight lower end
|
_______| pool bottom
The distance from the pool bottom to the floodlight's lower end is $30\;\text{cm}$, and from the floodlight's lower end to the water surface $70\;\text{cm}$. The diameter, as mentioned, is $25\;\text{cm}$.
The water below the floodlight should be safe to ignore, so I assumed the pool is $70\;\text{cm}$ high and the floodlight is located right at the bottom. A pool with $4.5\;\text{m}$ diameter and $70\;\text{cm}$ high water contains about 11.1 tons of water ($11.13\;\text{m}^3$).
With a base area of $15.9\;\text{m}^2$ and a lateral surface of $9.9\;\text{m}^2$ this makes $25.8\;\text{m}^2$ of surface that the water is in contact with and that it can put pressure on (of course more pressure at the bottom than the top).
If I assume that the pressure per $\text{cm}^2$ is equal across the whole surface, this would equal $\frac{11100\;\text{kg}}{258000\;\text{cm}^2} \cdot 491\;\text{cm}^2 = 21.12\;\text{kg}$ of mass on the whole floodlight - but it's obvious that I can't calculate it like that.
Given that the floodlight is located at the bottom of my fictional $70\;\text{cm}$ high swimming pool, I could just calculate the amount of mass per $\;\text{cm}^2$ floor $(\frac{11100\;\text{kg}}{159000\;\text{cm}^2} = 0.07\;\frac{\text{kg}}{\;\text{cm}^2})$ and could generalize that "as the floodlight is close to the floor, mass per $\;\text{cm}^2$ would be equal enough in order to just calculate $0.07\frac{\;\text{kg}}{\;\text{cm}^2} * 491\;\text{cm}^2 = 34.3\;\text{kg}$".
Disclaimer: I know that pressure is measured in $\frac{\;\text{N}}{\;\text{m}^2}$, but as we're talking about stationary water on earth, the difference is just taking the gravity into account or not - if I can calculate $\;\text{kg}$ or $\;\text{N}$, I can calculate the other one as well.
So the question is: Can I generalize like above or am I on the wrong track? If so, how can I calculate the pressure on the floodlight?
Answer: You should ignore the sides of the pool when calculating the pressure. This is because the weight of the water must be fully balanced by the upwards force exerted by the pool's bottom. The isotropy of the water then ensures that an exactly equal amount of force is exerted regardless of the direction. (Newton's third law is satisfied because the force on one wall is exactly cancelled by the opposite wall.)
The upshot of this is that you can safely use the highschool formula, $p=\rho g \,\Delta h$, regardless of which way your surface is facing. Furthermore, as Mike mentions, you can safely take the pressure at the centre of the spotlight, and symmetry and linearity will do the rest of the work for you.
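Plugging the pool's dimensions into $p=\rho g\,\Delta h$ at the centre of the floodlight takes only a few lines (assuming $\rho = 1000\;\mathrm{kg/m^3}$ and $g = 9.81\;\mathrm{m/s^2}$):

```python
import math

RHO, G = 1000.0, 9.81           # assumed water density (kg/m^3) and gravity (m/s^2)

depth = 0.70 - 0.125            # depth of the floodlight centre, m
p = RHO * G * depth             # gauge pressure at the centre, Pa
area = math.pi * 0.125 ** 2     # area of the 25 cm circular face, m^2
force = p * area                # total force on the face, N

print(round(p), round(force), round(force / G, 1))  # ~5641 Pa, ~277 N, ~28.2 kg
```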
This means that the pressure at $\Delta h=(70-12.5)\,\mathrm{cm}=57.5\,\mathrm cm$ is $p=5.6\,\mathrm{kPa}$ above atmospheric pressure, for a total force of $276\,\mathrm{N}=28\,\mathrm{kg}·g$. | {
"domain": "physics.stackexchange",
"id": 23063,
"tags": "homework-and-exercises, pressure, fluid-statics"
} |
Is complexity class $\Sigma^1_1$ from polynomial hierarchy decidable? | Question: It is within the polynomial hierarchy, so I assumed it is decidable. A picture like this further hinted in that direction.
Yet, in a paper on page 2 I read
Satisfiability of [...] is highly undecidable (i.e., $\Sigma^1_1$-hard)
which implies that $\Sigma^1_1$-hard problems are undecidable. Do the authors mean something else by "highly undecidable", or are $\Sigma^1_1$-hard problems really undecidable?
In the latter case, why is it included inside decidable PSPACE?
(or am I misunderstanding things...)
Answer: The confusion lies in using the same notation for the polynomial hierarchy and the arithmetical hierarchy.
The similarities are obvious, both notations talk about formulas with increasing quantifier complexity. The important distinction is that the polynomial hierarchy talks about bounded quantification, i.e. your space is limited (the witnesses must be of polynomial size). If the quantification is not bounded you have much more power, e.g. a formula for "program $p$ halts on input $x$" can be written in a $\Sigma_1$ form, which means that the halting problem is in the first level of the arithmetical hierarchy.
To avoid confusion the levels of the polynomial hierarchy are actually denoted by $\Sigma_i^p$, however when it is obvious from the context the $p$ is usually omitted. Note that the superscript in $\Sigma_1^1$ means that the quantification is over sets, which puts it above any level in the arithmetical hierarchy with quantification over the naturals. | {
"domain": "cs.stackexchange",
"id": 10722,
"tags": "complexity-theory, complexity-classes"
} |
Computing two-electron integrals with an STO-3G basis set | Question: I am trying to implement a restricted Hartree-Fock calculation using an STO-3G basis set, for fun. I managed to perform this calculation where only $\mathrm{1s}$ orbitals are present ($\ce{H2}$ and $\ce{HeH+}$) as explained in Szabo and Ostlund's book. In this book, authors give explicit formulas for overlap, kinetic, nuclear-electron, and electron-electron integrals for $\mathrm{1s}$ orbitals and they work correctly.
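For context, the $s$-only electron-electron integral referred to here has a closed form over unnormalized primitive $1s$ Gaussians (the standard $(ss|ss)$ result with the Boys function $F_0$, as given, e.g., in Szabo and Ostlund's appendix); a reference sketch:

```python
import math

def F0(t):
    # Boys function of order zero; F0(0) = 1
    if t < 1e-12:
        return 1.0 - t / 3.0
    return 0.5 * math.sqrt(math.pi / t) * math.erf(math.sqrt(t))

def dist2(P, Q):
    return sum((p - q) ** 2 for p, q in zip(P, Q))

def eri_ssss(a, A, b, B, c, C, d, D):
    """(ab|cd) over unnormalized primitive 1s Gaussians with exponents a..d at centers A..D."""
    p, q = a + b, c + d
    P = [(a * x + b * y) / p for x, y in zip(A, B)]   # Gaussian product centers
    Q = [(c * x + d * y) / q for x, y in zip(C, D)]
    prefac = 2.0 * math.pi ** 2.5 / (p * q * math.sqrt(p + q))
    damp = math.exp(-a * b / p * dist2(A, B) - c * d / q * dist2(C, D))
    return prefac * damp * F0(p * q / (p + q) * dist2(P, Q))
```

With all four centers coincident this reduces to $2\pi^{5/2}/\big((a+b)(c+d)\sqrt{a+b+c+d}\big)$, a convenient sanity check.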
In order to generalize my calculation to systems containing $\mathrm{2s}$ and $\mathrm{2p}$ orbitals (for $\ce{H2O}$ and $\ce{N2}$), I used the general formulas I found in Cook's book for the electron-nuclear and electron-electron integrals. In this case, I obtain results that are slightly different from Szabo's book:
$$E_\text{tot}(\ce{H2O}) = -74.4442002765 \text{ a.u.}$$
instead of
$$E_\text{tot}^\text{Szabo}(\ce{H2O}) = -74.963 \text{ a.u.}$$
This is obviously problematic since orbital energies suffer from the same error and this leads to an erroneous ionization potential (0.49289045 a.u. instead of 0.391 a.u., a difference of approximately 63 kcal$\cdot$mol$^{-1}$).
Since I checked my code multiple times and I wrote the two-electron computation code from scratch twice, I was wondering if there is a typo in Cook's book. Is there a good reference where I can find the (correct) formula to compute two-electron integrals of Gaussian functions (in Cartesian coordinates) with arbitrary angular momenta? At the moment I am not looking for a very efficient (recursive) algorithm to perform this task, I only need an exact formula like the one proposed in Cook's book.
Sources:
[1] Szabo and Ostlund, Modern Quantum Chemistry, Dover, 1989.
[2] Cook, Handbook of Computational Chemistry, Oxford University Press, 1998.
Answer: Actually there is a mistake in the analytical expression in Cook's book. On his web page he has a PDF with the corrected version
http://spider.shef.ac.uk/
Maybe this solves your problem, but I would also recommend implementing the Obara-Saika scheme or Rys quadrature, since they are really much more efficient. If you are programming in Python, you might have a look at the PyQuante project, which implements all this stuff.
Concerning Obara-Saika, you might also read about the Head-Gordon-Pople scheme. It is in principle an adapted version of Obara-Saika which reduces the FLOP count. | {
"domain": "chemistry.stackexchange",
"id": 4594,
"tags": "quantum-chemistry, computational-chemistry, orbitals, basis-set, ab-initio"
} |
$H = e^{i\pi/4} \sqrt{iNOT}$? | Question: In the paper Valley qubit in Gated MoS$_2$ monolayer quantum dot, a description of how a $NOT$ gate would be performed on a qubit in the described device is given.
The authors say that in the described implementation the operation performed is an $iNOT$ gate, and that the Hadamard operation can be implemented by performing half of the $iNOT$ Rabi transition. Particularly, the authors say that $H = e^{i\pi/4}\sqrt{iNOT}$, (actually they say $H = e^{i\pi/4}\sqrt{NOT}$, but I assume this is a typo).
Most generally, my question is: Does $H = e^{i\pi/4}\sqrt{iNOT}$? I cannot work it out.
My confusion may stem from my lack of understanding concerning why the implemented operation corresponds to $iNOT$ instead of simply $NOT$. My understanding is that $iNOT = i\sigma_x$. I'd appreciate any insight you have on this as well. Thank you.
Answer: It cannot be the case that $H=e^{i\pi/4}\sqrt{iNOT}$. Whatever your interpretation of $iNOT$ (I'd agree with your definition), just square the thing. $H^2=I$, the identity, and so it is certainly not the case that
$$
I=e^{i\pi/2}iNOT.
$$
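This is quick to verify numerically. Taking $iNOT = i\sigma_x$ and the principal square root $\sqrt{i\sigma_x} = (I + i\sigma_x)/\sqrt{2}$ (which one can check squares to $i\sigma_x$), the candidate operator squares to $-\sigma_x$, not the identity:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], complex)               # NOT gate (sigma_x)
H = np.array([[1, 1], [1, -1]], complex) / np.sqrt(2)

S = (I2 + 1j * X) / np.sqrt(2)    # a square root of iX: S @ S == 1j * X
M = np.exp(1j * np.pi / 4) * S    # the claimed "Hadamard"

assert np.allclose(H @ H, I2)     # Hadamard squares to the identity...
assert np.allclose(S @ S, 1j * X)
assert np.allclose(M @ M, -X)     # ...but M squares to -X, so M cannot equal H
```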
It is true, however, that if you perform the sequence that would give you an operation such as $NOT$ or $iNOT$, and you only evolve for half the time, that gives you something that achieves an equivalent result to the Hadamard. It's usually referred to as a beam-splitter. For example, if a theorist writes about a Mach-Zehnder interferometer, they usually write down a sequence of Hadamard - phase - Hadamard, whereas an experimentalist will use beam splitter - phase - beam splitter. It achieves the same practical task (e.g. identify if the phase is 0 or $\pi$) although the interpretation of the results is different as which output is which gets switched. | {
"domain": "quantumcomputing.stackexchange",
"id": 1416,
"tags": "quantum-gate, experimental-realization, hadamard"
} |
Position of Gyroscope on rigid body | Question: Does the position of a gyroscope affect the sensor's reading? For example, if I place a gyroscope at the centre of gravity of a body and another at one corner, will the angle measurements be different for the two gyroscopes, or does position not affect the sensor measurements?
Answer: To my understanding, gyroscopes output angle changes relative to their frame. As such, if you place them on a solid and undeformable body, it shouldn't matter where they are placed.
There are some cases where gyroscopes are coupled with magnetometers in order to give absolute measurements based on the earth's magnetic field; however, that is not the norm. | {
"domain": "engineering.stackexchange",
"id": 3585,
"tags": "automotive-engineering"
} |
What would be acceptable size of list for O(n!) algorithm | Question: I was wondering what the limitations of an algorithm that runs in O(n!) are. For example, let's take an algorithm that generates all permutations of a list. Now what I am wondering is what the upper bound would roughly be for the size of the list. I understand that this can vary based on RAM/memory. Let's say that this would be run client-side and be expected to work on basically any machine a user would have at home.
Thanks in advance for any insight.
Answer: It depends a bit on the constants hidden in the $O$, so no general answer is possible. In my experience you can aim for the low teens. 13! is about six billion. A computer can do about a billion things in a second, so a runtime of a couple of minutes seems reasonable if you only want to enumerate permutations. 15! is already more than two hundred times bigger, so most likely whatever you want to do will take too long.
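These ballpark figures can be sketched in a few lines (assuming, as above, roughly $10^9$ elementary operations per second and one unit of work per permutation; real constants are larger):

```python
import math

OPS_PER_SEC = 1e9   # assumption: about a billion elementary operations per second

for n in (11, 12, 13, 15):
    ops = math.factorial(n)
    print(f"{n}! = {ops:>16,d}  -> ~{ops / OPS_PER_SEC:.1f} s at 1e9 ops/s")
```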
On my machine counting to 11! in Python takes 14 seconds and counting to 12! takes two and a half minutes. Counting to 13! would hence take about half an hour. Counting to 15! would take about a week. | {
"domain": "cs.stackexchange",
"id": 8195,
"tags": "time-complexity"
} |
Why in MuSiC (DoA) don't we search for minima of the inverse (pseudo)spectrum? | Question: I have started working on a Direction-of-Arrival estimation problem and I am testing various algorithms. I came across the Multiple Signal Classification (MUSIC) method. In all textbooks I have looked at, the final step of the method is to search for maxima in the pseudo-spectrum given by
$$ P \left(\theta \right) = \frac{1}{a^{H}\left(\theta\right) V_{n} V_{n}^{H} a\left(\theta\right)}$$
where $P\left(\theta\right)$ is the spatial pseudo-spectrum, $a\left(\theta\right)$ is the array manifold of the array under consideration, $[ \cdot]^{H}$ denotes Hermitian transposition and $V_{n}$ is the matrix whose columns form the noise subspace (the noise eigenvectors).
Now, my question is, why should we look for maxima in $P\left(\theta\right)$ and not for minima in $\frac{1}{P\left(\theta\right)}$? Is there something I am missing here? I believe that the implementation of the algorithm could potentially benefit from avoiding this division.
Any insights and/or hints are most welcome. Thanks in advance.
Answer: Let's define the null-spectrum $Q(\theta)$ and pseudo-spectrum $P(\theta)$ as
$$Q(\theta) = a(\theta)^H{v_n}{v_n^H}a(\theta)$$
$$P(\theta) = \frac{1}{Q(\theta)}$$
The choice to find the maxima in $P(\theta)$ comes down to personal preference. It is more intuitive to associate a peak in a signal to some phenomenon being observed. This is especially so when reading a text, where the author is trying to teach you how the algorithm works.
You can very well search for the minima in $Q(\theta)$ instead of the maxima in $P(\theta)$ and get the same answer.
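A small numpy sketch makes this concrete (hypothetical setup: an 8-element half-wavelength ULA and two assumed source directions, $-20°$ and $35°$ — none of these values come from the question). The global minimum of $Q(\theta)$ and the global maximum of $P(\theta)=1/Q(\theta)$ land on the same angle:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 200                        # sensors in the ULA, snapshots
true_deg = [-20.0, 35.0]             # assumed source directions (illustrative)

def steer(theta_deg):
    # steering vector of a half-wavelength-spaced uniform linear array
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(M))

A = np.column_stack([steer(t) for t in true_deg])
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

R = X @ X.conj().T / N               # sample covariance
_, V = np.linalg.eigh(R)             # eigenvalues in ascending order
Vn = V[:, : M - len(true_deg)]       # noise-subspace eigenvectors

grid = np.linspace(-90.0, 90.0, 1801)
Q = np.array([np.linalg.norm(Vn.conj().T @ steer(t)) ** 2 for t in grid])
P = 1.0 / Q                          # the usual MUSIC pseudo-spectrum

# minima of Q and maxima of P coincide
assert grid[np.argmin(Q)] == grid[np.argmax(P)]
```

Both searches return the same DOA estimates; the only practical difference is the avoided division.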
Nothing is stopping you from using the minima. In general there is no significant advantage to using either, except for avoiding division operations and potential divide-by-zero, which is usually fine in real systems where some kind of noise is present. Still, certain systems and their implementation can make using one over the other more appropriate. | {
"domain": "dsp.stackexchange",
"id": 9537,
"tags": "doa"
} |
Where does the Higgs Boson fit in the three generations of charged particle? | Question: I am reading a book called "Gauge Fields, Knots and Gravity" by Baez et al.
In the first chapter, the authors explain that there are three generations of charged particles:
First: electron, electron neutrino, up/down quarks
Second: muon, muon neutrino, charm/strange quarks
Third: tau, tau neutrino, top/bottom quarks
Afterwards the author mentions that another charged particle, called the Higgs boson, is rumored to exist that is neither a quark nor a lepton and explains the relation between EM and the weak force. This was written in 1994.
Now that the Higgs Boson has been discovered, I wonder if it fits anywhere in the "3-generation" scheme provided by the author? If the book was to be updated now, where would Higgs Boson lie? As some sort of 4th generation particle?
Answer: Leptons and quarks are fermions. (Fermions are particles with half-integer spins.)
You can, like the author has, divide them into three generations on the basis of their masses.
The Higgs boson is a boson. (Bosons are particles with integer spins.)
The Higgs boson (which happens to be electrically neutral) is part of a completely different category of particles, and it cannot be a part of the three generations you've mentioned.
Also, I suppose the author was talking about fermions and not charged particles, because neutrinos are electrically neutral.
Also remember that electric neutrality is not a criterion which determines whether particles are fermions or not. There are bosons which are electrically charged too (like the $W^+$ and $W^-$ bosons that @Timaeus mentioned). | {
"domain": "physics.stackexchange",
"id": 22287,
"tags": "particle-physics, higgs"
} |
Message definition value in roscpp | Question:
It's trivial to get the concatenated definition of a message in roscpp, provided that the message is compiled-in:
#include <iostream>
#include <nav_msgs/Odometry.h>
int main(int argc, char* argv[])
{
std::cout << ros::message_traits::Definition<nav_msgs::Odometry>::value();
return 0;
}
However, I'd like to be able to get arbitrary message definition strings from the workspace, by message type string. Now, I know I can use ros::package::getPath, track dependencies, and build up the concatenation myself, but I'm wondering if the canonical one created in the python message generator gets cached somewhere accessible as well?
The use case is for rosserial_server, which uses topic_tools::ShapeShifter to advertise topics and messages which are not known at compile-time (but are proxied from clients who did know about them... the clients declare the topic type and MD5, but don't store the entire message string). Currently this is being addressed with a shim rospy node, but I'm interested in eliminating that component and doing everything in the C++ binary.
So first, is there some way to do this already? If not, is this functionality which should go somewhere general rather than just sticking it in the rosserial_server codebase?
Originally posted by mikepurvis on ROS Answers with karma: 1153 on 2013-09-14
Post score: 0
Answer:
You could give cpp_introspection a try. The package cannot introspect arbitrary message types, but only messages from packages for which an introspection library has been built before. It is therefore no solution for rosserial_server, but probably for others with similar problems.
There is no documentation available so far. Here is a simple usage example:
# add this to your CMakeLists.txt:
find_package(cpp_introspection REQUIRED)
introspection_add(nav_msgs RECURSIVE)
You can then examine the package and messages in a C++ program like that:
#include <iostream>
#include <introspection/introspection.h>
using namespace cpp_introspection;
int main(int argc, char **argv) {
load("path/to/introspections/libs");
MessagePtr msg = messageByDataType("nav_msgs/Odometry");
if (!msg) {
std::cerr << "not found" << std::endl;
return 1;
}
std::cout << "Definition of " << msg->getDataType() << ":" << std::endl;
std::cout << msg->getDefinition() << std::endl;
return 0;
}
Check the message.h header to get an impression of the full message API.
As far as I know there is no built-in mechanism in ROS or roscpp that allows introspection of messages in C++ without having to include the headers or invoke external programs. The cleanest solution would be to automatically compile a light-weight library for each message package by extending gencpp and use pluginlib or a similar mechanism to load them dynamically on demand.
Originally posted by Johannes Meyer with karma: 1266 on 2013-09-15
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by mikepurvis on 2013-09-16:
Very interesting—looks like a much more ambitious project, perhaps for the core roscpp team to take on at some point if there is sufficient demand. | {
"domain": "robotics.stackexchange",
"id": 15525,
"tags": "ros"
} |
Tool for translating PDAs to CFGs | Question: We know that all pushdown automata are representable using context-free grammars. Furthermore, there is an algorithm to construct a CFG from any PDA (e.g. Sipser's proof in Introduction to the Theory of Computation).
Are there any tools which do this translation? I.e. I can put in a set of transition functions and it will return an equivalent CFG.
Answer: JFLAP is pretty nice and can do this. See here: http://www.cs.duke.edu/csed/jflap/. | {
"domain": "cstheory.stackexchange",
"id": 261,
"tags": "automata-theory, fl.formal-languages, grammars, software"
} |
Why don't qubits continuously rotate in the $z$ direction due to free time evolution? | Question: If we have a physical qubit with energy eigenstates $|0\rangle$ and $|1\rangle$ with energy separation $\Delta E$ its Hamiltonian in the absence of any interaction is
$$H=\frac{\Delta E}{2}\sigma_z$$
the time evolution operator is $U=e^{-iHt/\hbar}$, so why doesn't a qubit left to itself just continuously rotate about the $z$ axis? And if it does, how is this not a problem? It seems to me that after some time $t$, a $\sigma_z$ gate will be applied whether we want it or not!
Answer: It does, unless you have a way to tune your Hamiltonian such that $\Delta$ becomes zero. Since a tunable Hamiltonian is something you usually want in a quantum computer implementation, this should not be a problem.
If this term is non-switchable, it just means that the basis in which you are working is continuously rotating, and you have to keep track of that and implement your "active" gates in a correspondingly rotated basis, or exactly at the time when the evolution due to your Hamiltonian has cancelled out. (Essentially, this is like working in the interaction picture.)
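A quick numerical sketch (units with $\hbar = 1$ and an arbitrary splitting, both assumptions for illustration) shows the free $z$-rotation explicitly: after $t = \pi/\Delta E$ the free evolution has enacted a $\sigma_z$ up to a global phase, taking $|+\rangle$ to $|-\rangle$:

```python
import numpy as np

dE = 1.0                                              # energy splitting, units with hbar = 1
H = 0.5 * dE * np.array([[1, 0], [0, -1]], complex)   # (dE/2) * sigma_z

def U(t):
    # H is diagonal, so exp(-iHt) acts elementwise on the diagonal
    return np.diag(np.exp(-1j * np.diag(H) * t))

plus = np.array([1, 1], complex) / np.sqrt(2)
minus = np.array([1, -1], complex) / np.sqrt(2)

psi = U(np.pi / dE) @ plus
assert abs(abs(minus.conj() @ psi) - 1) < 1e-12       # |+> has rotated into |->
```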
(By the way, it is debatable if you should call this "in the absence of any interaction", since this level splitting can very well seen as some type of interaction, and will in many cases depend on some external parameter.) | {
"domain": "quantumcomputing.stackexchange",
"id": 936,
"tags": "experimental-realization"
} |
Difference between shells, subshells and orbitals | Question: What are the definitions of these three things and how are they related? I've tried looking online but there is no concrete answer online for this question.
Answer: Here's a graphic I use to explain the difference in my general chemistry courses:
All electrons that have the same value for $n$ (the principal quantum number) are in the same shell
Within a shell (same $n$), all electrons that share the same $l$ (the angular momentum quantum number, or orbital shape) are in the same sub-shell
When electrons share the same $n$, $l$, and $m_l$, we say they are in the same orbital (they have the same energy level, shape, and orientation)
So to summarize:
same $n$ - shell
same $n$ and $l$ - sub-shell
same $n$, $l$, and $m_l$ - orbital
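The grouping in this summary can be spelled out by enumerating the quantum numbers directly (a small illustrative script; the letters s, p, d, f label $l = 0, 1, 2, 3$):

```python
subshell_letter = "spdf"   # letters for l = 0, 1, 2, 3

for n in range(1, 4):                        # shells n = 1..3
    orbitals = [(n, l, ml)
                for l in range(n)            # sub-shells: l = 0..n-1
                for ml in range(-l, l + 1)]  # orbitals: ml = -l..l
    labels = [f"{n}{subshell_letter[l]}" for l in range(n)]
    print(f"shell n={n}: sub-shells {labels}, {len(orbitals)} orbitals")
# each shell holds n**2 orbitals, i.e. up to 2*n**2 electrons
```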
Now, in the other answer, there is some discussion about spin-orbitals, meaning that each electron would exist in its own orbital. For practical purposes, you don't need to worry about that - by the time those sorts of distinctions matter to you, there won't be any confusion about what people mean by "shells" and "sub-shells." For you, for now, orbital means "place where up to two electrons can exist," and they will both share the same $n$, $l$, and $m_l$ values, but have opposite spins ($m_s$). | {
"domain": "chemistry.stackexchange",
"id": 2062,
"tags": "quantum-chemistry, electrons, electronic-configuration, orbitals"
} |
How to set joint axis for both parts attached to a hinge? | Question:
Hi,
I'm trying to implement an interface to a different physics engine than the four already available in Gazebo. In my case, an axis must be specified for each part attached to the joint. For instance, in the example tutorial "Making a mobile robot" found on the Gazebo website (it's a differential-drive robot with two motorized wheels), I would need to specify, for both revolute joints, the axis on the wheel AND the axis on the chassis.
When I look at the code for the physics interface there is this method :
Joint::SetAxis(const unsigned int _index, const math::Vector3 &_axis)
which takes the axis value from the SDF and saves it into the physics engine data structure. My understanding is that the _index parameter gives me the part this axis is related to, but I can't see how to specify that index in an SDF, if it is possible.
If someone can tell me I am on the right direction or give me a hint about how to solve this, it will be truly appreciated!
Thanks to all!
Originally posted by dbrodeur on Gazebo Answers with karma: 38 on 2017-05-01
Post score: 0
Answer:
The _index parameter in any method from the Joint class is for specifying the coordinate ID, which is an unsigned int value less than GetAngleCount(). Thus, in the case of a Hinge, GetAngleCount() returns 1 and the only possible index value is 0.
In Hinge2, GetAngleCount() returns 2 and the only possible index values are 0 and 1.
SetAxis has only one value per angle and it applies to both attached parts. If a different engine needs to specify different values, then an SDF tag (like the existing engine-specific ones) should be used to specify that particular property of the specific engine.
Originally posted by dbrodeur with karma: 38 on 2017-05-05
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 4094,
"tags": "sdformat"
} |
strange choice of constant when showing $an + b = O(n^2)$ ("Introduction to Algorithms" book) | Question: In the third edition of Introduction to Algorithms, the authors state:
... when $a$ > 0, any linear function $an+b$ is $O(n^2)$, which is
easily verified by taking $c = a + |b|$ and $n_0 = max(1, -b/a).$
[In the above, $c$ and $n_0$ come from the standard definition of big-oh:
$f(n)$ is in $O(g(n))$ if there exist positive constants $c$ and $n_0$ such that $0 \le f(n) \le cg(n)$ for all $n \ge n_0$.]
I have no trouble accepting that $an+b \in O(n^2)$, but I'm questioning the motivation/origin of the weird choice of $n_0$.
If we assume that $ n \ge 1$, then, since $a > 0$, we have $an^2 \ge an$ and since $n^2 \ge 1$ we have $|b|n^2 \ge |b| \ge b$, implying that $(a+|b|)n^2 \ge an+b$. So it seems that $n_0$ could simply have been taken as $1$. Or am I messing up somewhere?
Answer: Because of the requirement $an+b>0$. The definition requires $0 \leq f(n)$. | {
"domain": "cs.stackexchange",
"id": 9505,
"tags": "algorithm-analysis, asymptotics"
} |
Counting inversions in C++ via mergesort and vectors | Question: I have a text file that contains 100,000 numbers from 1 ~ 100,000 in an unsorted manner (no duplicates).
The name of the file is "IntegerArray.txt"
My task is to count the number of inversions in the text file. An inversion is a pair of elements a, b where a comes before b, but a > b. Thus "1 2 3 6 4 5" contains two inversions (6 > 4 and 6 > 5).
I implemented it with the merge sorting method.
The following is my code:
#include <fstream>
#include <iostream>
#include <limits>
#include <vector>
using namespace std;

long long mergeAndCount(vector<int>& vec, int p, int q, int r) {
long long count = 0;
vector<int> L, R;
for (int i = p; i <= q; ++i) L.push_back(vec[i]);
for (int j = q + 1; j <= r; ++j) R.push_back(vec[j]);
L.push_back(numeric_limits<int>::max()); //sentinel element
R.push_back(numeric_limits<int>::max());
int i = 0, j = 0;
for (int k = p; k <= r; ++k) {
if (L[i] <= R[j]) {
vec[k] = L[i];
++i;
} else {
vec[k] = R[j];
++j;
if (L[i] != L.back() && R[j] != R.back())
count += q - p + 1 - i;
}
}
return count;
}
long long inversion(vector<int>& vec, int p, int r) {
long long count = 0;
if (p < r) {
int q = (p + r) / 2;
count = inversion(vec, p, q);
count += inversion(vec, q + 1, r);
count += mergeAndCount(vec, p, q, r);
}
return count;
}
int main() {
ifstream infile("IntegerArray.txt");
int a;
vector<int> vec;
while (infile >> a)
vec.push_back(a);
cout << inversion(vec, 0, vec.size()-1);
return 0;
}
The result from the above code is 2407905288, which is correct.
The answer from the brute-force version below is also 2407905288, which confirms the result (it took me more than 10 minutes to run this code):
long long inversion(vector<int>& vec, int p, int r) {
long long count = 0;
for (int i = 0; i < vec.size(); ++i)
for (int j = i + 1; j < vec.size(); ++j)
if (vec[i] > vec[j])
++count;
return count;
}
Any improvements in runtime or efficiency would be much appreciated.
Answer: What kind of efficiency?
You ask for efficiency, but don't clarify whether you want programmer or processor efficiency. And, at least sometimes, the two can both be improved simultaneously by leveraging the right pre-existing tools, so it's worth considering programmer efficiency.
Programmer efficiency comes in at least three general categories:
Writing less code:
I would be strongly tempted to confirm whether this can be written to leverage std::sort with a custom type that implements a counter in its overload to std::swap. If you can transform the problem like this, you don't have to worry about coding a merge sort correctly, and should have less code.
Writing clearer code:
The arguments to mergeAndCount are named vec, p, q, and r, and have no comments that clarify the meaning of p, q, and r (inversion's arguments p and r suffer similarly). Furthermore mergeAndCount's local variables are named L and R. What do these mean? The parameters appear to be local indices for special spots in the vector; perhaps they could be named accordingly.
As an extension of "naming things is important", consider also renaming inversion to use a verb instead of a noun. Something like invert is typically better than inversion; something that indicates what it's doing could be better still. Perhaps sortAndCount?
Sometimes avoiding raw loops can help. For example, the loops that fill L and R could be written with std::copy. But better yet, you could construct them in place with the range constructor:
vector<int> L(&vec[p], &vec[q + 1]);
vector<int> R(&vec[q + 1], &vec[r + 1]);
Two side comments here: your use of a final index rather than one past final (closed range instead of open range) conflicts with the standard library. If main called inversion with .size() instead of .size() - 1, and everything else was updated to match, a number of + 1 and/or - 1 items should disappear, starting with the three + 1 instances in my little code excerpt.
Second, I'm uncertain what I think of the use of a sentinel as opposed to verifying the index location. Doesn't it have a side effect that would harm lists with that number? While the current problem definition excludes use of the sentinel, problems have a tendency of changing over time; attempts to reuse code may result in using code that isn't suited for the new problem.
Making it harder to screw up:
The indices you pass into inversion and mergeAndCount represent very important values that never change. You should consider marking them const in order to avoid losing the ends of your ranges.
On the flip side, processor efficiency comes from many things that often compete with programmer efficiency. Here are a couple that could be relevant.
Avoiding allocations:
Often allocating in a loop is a source of run-time inefficiency. Note, for example, that std::merge requires input and output ranges, rather than allocating its own helper ranges internally. This allows std::sort to allocate them once and reuse them as necessary. (This comment conflicts with the above suggestion to use the range constructor.)
Memory access patterns:
Scanning through memory linearly is often the fastest way to access it. Your code does a pretty good job of this already, so without profiling there's little I can offer here. | {
"domain": "codereview.stackexchange",
"id": 17564,
"tags": "c++, sorting, mergesort, vectors"
} |
How do fields change when you transform spacetime coordinates? | Question: In field theory, I have seen the notion that when you infinitessimaly transform your coordinates in the form $$x'_\mu = x_\mu + \delta x_\mu$$
The fields transform as
$$\phi(x) \rightarrow \phi'(x') = \phi(x) + \delta\phi(x) + \partial_\mu\phi\, \delta x^\mu$$
Why do the fields transform this way? I am a bit confused about what $\phi'(x')$ really means here. Isn't $\phi'$ already the transformed field after you perform the coordinate transformation? Why isn't it something like $\phi(x') = \phi(x) + \delta\phi(x) + \partial_\mu\phi\, \delta x^\mu$, where on the LHS you instead have an unprimed field?
Answer: We should distinguish between two different things:
We can define a new field by $\phi'(x)\equiv \phi(x+\delta x)$. Then $\phi'(x)$ and $\phi(x)$ are two different configurations of the scalar field, both expresed in the same coordinate system.
We can write the original field $\phi$ in a new coordinate system: $\phi'(x')\equiv\phi(x)$. Then $\phi'(x')$ and $\phi(x)$ both describe the same configuration of the scalar field, expressed in two different coordinate systems. This is the case shown in the question.
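A concrete instance of case 2 (my own illustration, not from the original answer): take an infinitesimal translation $x'^\mu = x^\mu + a^\mu$. Demanding that both descriptions refer to the same configuration gives

```latex
\phi'(x') = \phi(x)
\quad\Longrightarrow\quad
\phi'(x) = \phi(x - a) \approx \phi(x) - a^\mu \partial_\mu \phi(x),
% so the change in functional form at a fixed coordinate value is
\delta\phi(x) \equiv \phi'(x) - \phi(x) = -a^\mu \partial_\mu \phi(x).
```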
This might be a little more clear if we write $x(p)$ for the coordinates of the point $p$. Then the two cases above look like this:
$\phi'(x(p))= \phi(x(p'))$
$\phi'(x'(p)) = \phi(x(p))$.
In the first case, $x(p')$ is a new point expressed in the original coordinate system. In the second case, $x'(p)$ is the original point expressed in a new coordinate system. The second case is the one shown in the question. Both cases define a new function $\phi'$ of the list of coordinates (list of real numbers), but only the first case defines a new function of spacetime (function of $p$). In the second case, we're just writing the same function of spacetime (function of $p$) in two different ways. | {
"domain": "physics.stackexchange",
"id": 82867,
"tags": "field-theory, coordinate-systems, calculus"
} |
Do we need to do multiple times deep learning and average ROC? | Question: Do we need to run deep learning multiple times and average the ROC (AUC)?
Since we might get a different ROC (AUC) every round we train and test (with Keras),
is it necessary to average the ROC (AUC) over multiple rounds of training and testing?
(Or should we just choose the best round?)
Answer: No, you don't need to do that. Do probability calibration (temperature scaling) on your trained network. Here is a link to do that. | {
"domain": "datascience.stackexchange",
"id": 9227,
"tags": "machine-learning, deep-learning, keras"
} |
Bidirectional jerk motion on a stopping vehicle | Question: A stopping vehicle (say a car) has an apparent retardation (which may/may not be constant in magnitude) when force via brakes is applied.
I travel by subway trains, and I noticed an odd phenomenon. The thing about such trains (might be irrelevant) is that, being light-weight, their motion somewhat mimics that of cars, and the effects of motion are more apparent as one usually stands in such trains. The thing I noticed was that I could feel a force pulling me in the initial direction of motion, as the train slowed down. That obviously is the inertia. But as soon as the train halted, I noticed a secondary jerk...this time in the opposite direction, and it was kind of short lasting.
I'm curious to know, what causes this secondary jerk backwards as soon as a vehicle comes to rest. I'm guessing it has something to do with the reaction force by the brakes which overcome the forward motion and provide an impulse backwards. But then it has to have a proper force 'mirror' as per the third law of motion. Also, there never is any intentional backwards motion here (the drivers are precise, I guess).
So what could it really be?
Answer: "Jerk" is indeed the correct term to describe both the experience and the cause, which is a (sudden) change in acceleration.
If the car decelerates at a constant rate, there is a constant force on you from the safety belt, to prevent you hitting the windscreen. If the car were to maintain the same deceleration when it had reached zero velocity then it would immediately start going backwards. However, the car does not move backwards. The deceleration changes from a constant value to zero in a very short time. But there is still a force on you from the springiness in the safety belt, which is under tension, throwing you back into the seat, which is no longer accelerating backwards. The force on you from the safety belt changes suddenly as you are flung back into the seat, as it did if the braking also started suddenly; the sudden change in the force on you is what causes the discomfort.
The same effect happens (in reverse) when a car accelerates. During constant acceleration your seat pushes you forward with a constant force. The seat is padded for comfort, so that, as with the safety belt, the force on you does not change suddenly. When the driver pulls his foot off the accelerator to change gear, the car and seat suddenly stop accelerating. The springy seat is still pushing you forward, so you accelerate forward away from the seat. The force on you drops to zero suddenly as you "jerk" forward. If the seat had not been padded, the change in the force on you would have been even more sudden, even more uncomfortable.
The Wikipedia article explains it this way :
A highly reproducible experiment to demonstrate jerk is as follows. Brake a car starting at a modest speed in two different ways:
apply a constant, modest force on the pedal till the car comes to a halt, only then release the pedal;
apply the same, constant, modest force on the pedal, but just before the halt, reduce the force on the pedal, optimally releasing the pedal fully, exactly when the car stops.
The reason for the by-far-bigger jerk in 1 is a discontinuity of the acceleration, which is initially at a constant value, due to the constant force on the pedal, and drops to zero immediately when the wheels stop rotating.
Note that there would be no jerk if the car started to move backwards with the same acceleration. Every experienced driver knows how to start and how to stop braking with low jerk. See also below in the motion profile, segment 7: Deceleration ramp-down.
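In symbols, scenario 1 can be sketched like this (my own shorthand, with $\Theta$ the unit step and $\delta$ the Dirac delta):

```latex
% Constant braking that switches off abruptly when the car halts at t_0:
a(t) = -a_0\,\Theta(t_0 - t)
\quad\Longrightarrow\quad
j(t) = \frac{da}{dt} = a_0\,\delta(t - t_0)
% The jerk is an impulse concentrated at the instant the wheels stop,
% which is the sudden change of force the passenger feels.
```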
The situation on the braking train is explained by muscular control (see Wikipedia article : Physiological Effects).
In place of the safety belt your muscles supply a braking force when you hold onto a rail or hanging strap to avoid falling forward. When the train reaches zero velocity but does not move backwards this force is no longer required. However, the time over which the change in force is required is too short for your muscular control system to respond to, with the result that you involuntarily throw yourself backwards.
Whereas a constant force is tolerable if not too large, a sudden change in force can be very uncomfortable. Such discomfort is exploited in roller-coasters to enhance the thrill of the ride. Not only "jerk" (rate of change of acceleration) but also higher derivatives such as "jounce" (rate of change of jerk) are desirable and carefully designed. | {
"domain": "physics.stackexchange",
"id": 32680,
"tags": "newtonian-mechanics, forces, acceleration, inertia, jerk"
} |
Building a wormhole | Question: We regularly get questions about wormholes on this site. See for example Negative Energy and Wormholes and How would you connect a destination to a wormhole from your starting point to travel through it?. Various wormhole solutions are known, of which my favourite is Matt Visser's wormhole because it's closest to what every schoolboy (including myself many decades ago) thinks of as the archetypal wormhole.
The trouble is that Visser has pulled the same trick as Alcubierre of starting with the required (local) geometry and working out what stress-energy tensor is required to create it. So Visser can tell us that if we arrange exotic string along the edges of a cube the spacetime geometry will locally look like a wormhole, but we know nothing about what two regions of spacetime are connected.
My question is this: suppose I construct a Visser wormhole by starting in Minkowski spacetime with arbitrarily low densities of exotic matter and gradually assembling them into the edges of a cube, how would the spacetime curvature evolve as I did so?
I'm guessing that I would end up with something like Wheeler's bag of gold spacetime. So even though I would locally have something that looked like a wormhole it wouldn't lead anywhere interesting - just to the inside of the bag. I'm also guessing that my question has no answer because it's too hard to do any remotely rigorous calculation. Still, if anyone does know of such calculations or can point me to references I would be most interested.
Answer: It's a bit hard to exactly construct a stress-energy tensor similar to a wormhole in normal space since part of the assumption is that the topology isn't simply connected, but consider the following scenario :
Take a thin-shell stress-energy tensor such that
$$T_{\mu\nu} = \delta(r - a) S_{\mu\nu}$$
with $S_{\mu\nu}$ the Lanczos surface energy tensor, where the Lanczos tensor is similar to a thin-shell wormhole. For a static spherical wormhole, that would be
\begin{eqnarray}
S_{tt} &=& 0\\
S_{rr} &=& - \frac{2}{a}\\
S_{\theta\theta} = S_{\varphi\varphi} &=& - \frac{1}{a}\\
\end{eqnarray}
If we did this by the usual cut and paste method (cutting a ball out of spacetime before putting it back in, making no change to the space), the Lanczos tensor would be zero due to the normal vectors being the same (there's no discontinuity in the derivatives). But we're imposing the stress-energy tensor by hand here. This is a static spherically symmetric spacetime, for which we can use the usual metric
$$ds^2 = -f(r) dt^2 + h(r) dr^2 + r^2 d\Omega^2$$
with the usual Ricci tensor results :
\begin{eqnarray}
R_{tt} &=& \frac{1}{2 \sqrt{hf}} \frac{d}{dr} \frac{f'}{\sqrt{hf}} + \frac{f'}{rhf}\\
R_{rr} &=& - \frac{1}{2 \sqrt{hf}} \frac{d}{dr} \frac{f'}{\sqrt{hf}} + \frac{h'}{rh^2}\\
R_{\theta\theta} = R_{\varphi\varphi} &=& -\frac{f'}{2rhf} + \frac{h'}{2rh^2} + \frac{1}{r^2} (1 - \frac{1}{h})
\end{eqnarray}
Using $R_{\mu\nu} = T_{\mu\nu} - \frac 12 g_{\mu\nu} T$ (this will be less verbose), we get that $T = -\delta(r - a) [2(ah)^{-1} + 2 (ar^2)^{-1}]$, and then
\begin{eqnarray}
\frac{1}{2 \sqrt{hf}} \frac{d}{dr} \frac{f'}{\sqrt{hf}} + \frac{f'}{rhf} &=& \delta(r - a) \frac{1}{a} (\frac{1}{h} + \frac{1}{r^2}) \\
- \frac{1}{2 \sqrt{hf}} \frac{d}{dr} \frac{f'}{\sqrt{hf}} + \frac{h'}{rh^2} &=& \delta(r - a) \frac{1}{a}[\frac{1}{h} + \frac{1}{r^2} - 1]\\
-\frac{f'}{2rhf} + \frac{h'}{2rh^2} + \frac{1}{r^2} (1 - \frac{1}{h}) &=& \delta(r - a) \frac{1}{a}[\frac{1}{h} + \frac{1}{r^2} - 1]
\end{eqnarray}
This is fairly involved and I'm not gonna solve such a system, so let's make one simplifying assumption : just as for the Ellis wormhole, we'll assume $f = 1$, which simplifies things to
\begin{eqnarray}
0 &=& \delta(r - a) \frac{1}{a} (\frac{1}{h} + \frac{1}{r^2}) \\
\frac{h'}{rh^2} &=& \delta(r - a) \frac{1}{a}[\frac{1}{h} + \frac{1}{r^2} - 1]\\
\frac{h'}{2rh^2} + \frac{1}{r^2} (1 - \frac{1}{h}) &=& \delta(r - a) \frac{1}{a}[\frac{1}{h} + \frac{1}{r^2} - 1]
\end{eqnarray}
The only solution for the first line would be $h = - r^2$, but then this would not be a metric of the proper signature. I don't think there is a solution here (or if there is, it will have to involve a fine choice of the redshift function), which I believe stems from the following problem :
From the Raychaudhuri equation, we know that in a spacetime where the null energy condition is violated, there is a divergence of geodesic congruences. This is an important property of wormholes : in the optical approximation, a wormhole is just a divergent lens, taking convergent geodesic congruences and turning them into divergent ones. This is fine if the other side of the wormhole is actually another copy of the spacetime, but if it leads back into flat space, this might be a problem (once crossing the wormhole mouth, the area should "grow", not shrink as it would do here).
A better example, and keeping in line with the bag of gold spacetime, is to consider a thin-shell wormhole that still has trivial topology. Take the two manifolds $\mathbb R^3$ and $\mathbb S^3$. By the Gauss Bonnet theorem, a sphere must have a part in which it has positive curvature (hence focusing geodesics). Then perform the cut and paste operation so that we have the spacetime
$$\mathcal M = \mathbb R \times (\mathbb R^3 \# S^3)$$
Through some topological magic, this is actually just $\mathbb R^4$. The thin-shell approximation is easily done here, and it will give you the proper behaviour : geodesics converge onto the mouth, diverge upon crossing the mouth, then go around the inside of the sphere for a bit before possibly getting out.
From there, it's possible to take various other variants, such as smoothing out the mouth to make it more realistic (which will indeed give you a bag of gold spacetime), as well as adding a time dependence to obtain this spacetime from flat Minkowski space. | {
"domain": "physics.stackexchange",
"id": 47702,
"tags": "general-relativity, wormholes, exotic-matter"
} |
Do all forces act in the same way where gravity is close to zero? | Question: Suppose that I put in the outer space (where gravity from other bodies is negligible) a large, perfectly round sphere totally filled with water. At the bottom (even though "bottom" doesn't make much sense in the deep space) of the sphere there are several marbles, all touching each other.
If I make the sphere rotate, all marbles should stay exactly where they are. Right?
Is that true regardless of the force that I apply?
And what happens, instead, if I strongly shake the sphere?
Answer: Wrong. You are neglecting the viscosity of the water. Friction from the inner wall of the sphere will move the water, which will in turn move the marbles and they will likely rotate around the sphere, though at a speed slower than the rotation of the sphere.
If your sphere was on Earth, then the only reason the marbles stay at the bottom is gravity pulling them down.
If you shake the sphere (even assuming that the walls do not touch the marbles), then they will move in this situation too due to the force of the water flowing around the sphere. | {
"domain": "physics.stackexchange",
"id": 6628,
"tags": "gravity, forces, space"
} |
Does the frame in which the CMB is isotropic violate the Copernican Principle? | Question: The Copernican Principle states that Earth is not at a special place in the Universe, and by extension, that there are no "special places" in the Universe (per homogeneity of the universe, aka the cosmological principle). However, the frame in which the CMB is isotropic appears pretty special:
There is exactly one reference frame in which the CMB is isotropic
It's independent of the observer's motion
Every observer agrees which reference frame that is
It's not trivial, since it makes the CMB simpler (and the CMB underpins much of modern cosmology)
Does the frame in which the CMB is isotropic violate the Copernican Principle? If so, why do we still believe in the Copernican Principle?
Answer: The CMB frame provides a privileged foliation of spacetime by spacelike hypersurfaces, essentially by defining a universal cosmic time via some function $t_c=f(T_\text{CMB})$. A given “slice” $t_c=\mathrm{const}$ could then be interpreted as the space part of spacetime.
Copernican Principle as stated by OP means that “there are no special places”. Applied to cosmological models we could identify “places” with points of space within a slice $t_c=\mathrm{const}$. And indeed if averaged over sufficiently large scale these slices appear to be homogeneous spaces, every point of it is just like any other point.
We see that cosmological models that assume spatial homogeneity do satisfy Copernican Principle in the sense outlined above.
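As an aside (a standard result, not part of the original answer): the CMB frame is found operationally by boosting until the dipole vanishes. An observer moving with speed $\beta = v/c$ relative to it sees, to first order,

```latex
T(\theta) \approx T_0 \left(1 + \beta \cos\theta\right),
% so every observer can identify the same frame, yet no point of *space*
% is singled out, only a state of motion.
```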
Note that the “time” part of spacetime is not homogeneous: every moment (on a cosmological timescale) is unique, unlike the moments before or after; at the very least they differ in the value of the CMB temperature. | {
"domain": "physics.stackexchange",
"id": 68461,
"tags": "cosmology, cosmic-microwave-background"
} |
Do particles have spin because there exist spinor representations for the Lorentz group? | Question: I am reading Peskin and Schroeder's An introduction to field theory. They first describe the spinor representation of the Lorentz group, and then they mention the fact that different particles have different spins, and go on to describe the angular momentum associated with those particles. This is for the spinor representation of the Lorentz group, but: if there are other representations of the Lorentz group, like the spinor representation, will those other representations have another angular-momentum-like quantity associated with them?
Answer: The spin of a quantum field is related to the representation of the Lorentz group they transform under: scalar fields transform under the trivial representation, spinors transform under the spinorial representation, gauge bosons under the vectorial representation, gravitons (if they exist) under the second-rank tensorial representation...
If you restrict to spatial rotations, any infinitesimal transformation can be written in terms of the infinitesimal generator of the transformation, namely the [total] angular momentum: $$\phi(x) \to \phi'(x') = \left(1 - \frac{i}{2} \omega_{\mu\nu}J^{\mu\nu}\right)\phi(x)$$
What is the source of the change between the original and the rotated fields? There are two:
The transformation of the points $x$ to $x'$: this contribution is present in every field, and the related generator is what we call orbital angular momentum
If the field has more than one component, these components transform in each other in a non-trivial way. The generator associated with this fact is called spin, and naturally, depends on the representation of the Lorentz group for the field.
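Schematically (conventions roughly as in Peskin and Schroeder; signs and factors vary between texts):

```latex
% Scalar field: only the orbital part acts,
J^{\mu\nu} = L^{\mu\nu} = i\left(x^\mu \partial^\nu - x^\nu \partial^\mu\right).
% Multi-component field: a matrix (spin) part mixing the components is added,
J^{\mu\nu} = L^{\mu\nu} + S^{\mu\nu},
\qquad\text{e.g. for a Dirac spinor } S^{\mu\nu} = \tfrac{i}{4}\left[\gamma^\mu, \gamma^\nu\right].
```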
EDIT: To clarify, in every representation the conserved quantity is spin. The only difference is the value of the spin of particles: Scalar fields have spin 0, spinorial fields have spin 1/2, gauge bosons have spin 1 and second-rank tensor have spin 2. | {
"domain": "physics.stackexchange",
"id": 22571,
"tags": "special-relativity, angular-momentum, quantum-spin, group-representations, spinors"
} |
How does a microelectrode work? | Question: On Wikipedia, the entire microelectrode page states only the following:
A microelectrode is an electrode of very small size, used in electrophysiology for either recording of neural signals or electrical stimulation of nervous tissue. Initially, pulled glass pipette microelectrode was used with later introduction of insulated metal wires. These microelectrodes are made from inert metals with high Young modulus such as tungsten, stainless steel, platinum and iridium oxide and coated with glass or polymer insulator with exposed conductive tips. More recent advances in lithography yielded to silicon based microelectrodes.
Can someone provide a more elaborate explanation of how it works?
Like ideally, I would like something as specific as the usual descriptions you see in introductory chemistry books for how emf works in a battery. In other words, I know how electric potential works, but I am not understanding the exact mechanism of the microelectrode works.
If there are many types of microelectrodes, just give me the properties of one involving small shafts of hollow glass filled with a conductive salt solution.
Pictures would be great.
Answer: A microelectrode is quite literally a small electrode and they come in a variety of shapes.
The glass pipette electrode you are specifically referring to is mostly used for patch clamp experiments. Patch clamp experiments are performed using various configurations:
Source: Leica
So basically there is the cell-attached configuration, where a patch of cell membrane is sucked slightly into the pipette. Upon stronger suction the membrane is ruptured and the whole-cell patch clamp configuration is obtained. Here, access is gained to the cell, but a high-impedance seal is maintained between pipette and cell membrane. Lastly, there is a configuration where the patch is broken off the cell to exclude effects of the entire cell. This patch configuration is often used to obtain single-channel recordings. Depending on the exact technique an inside-out or outside-out patch configuration is reached (see the picture above).
Hence, the glass pipette is basically only there to obtain a proper seal (cell-attached), to gain access to the cell (whole-cell), or to break off a piece of membrane. Notably, in whole-cell mode the pipette provides a means to administer compounds into the cell (think ions or drugs).
The recording electrode is situated inside the pipette and is a simple metal wire (platinum or platinum/iridium) and nothing different from any microelectrode as you rightfully state in your question. The electrode wire simply picks up a voltage difference (or current flow) between the interior of the cell (in whole-cell mode) and the exterior. The exterior is measured using a reference electrode outside the cell.
Furthermore, current can be injected to stimulate the cell, or to current-clamp the cell at a specific current. Likewise, voltage can be applied to voltage-clamp the cell. Therefore, an inert material is often chosen such as platinum or platinum/iridium to maintain integrity and prevent electrolytic breakdown. Electrodes are made out of metal to provide a low-resistance means to pick up currents or voltage-differences.
Voltage differences or currents can be amplified with an amplifier, registered and analyzed. See the following picture for the basic recording setup:
Source: Robinson et al. (2013)
Reference
- Robinson et al. Front Neur Circuits 2013; 7: 1-7 | {
"domain": "biology.stackexchange",
"id": 3732,
"tags": "neuroscience, electrophysiology"
} |
Do clouds float on anything? | Question: Clouds of the same type have such a consistent altitude, it appears as if they're floating on an invisible layer. Is this true?
Put another way, is there a significant change in air composition at the point where clouds float/form, or is that altitude determined primarily by air pressure?
Answer: Clouds are made up of very tiny liquid water droplets that have small fall speeds. This means that with just a little bit of upward velocity in the wind, the cloud water can remain aloft and appear to float. Clouds are not always static blobs -- if you watch some clouds you will see that air can flow through them, condensing into new cloud where it goes in and evaporating into clear air on its way out (easily visible in mountain wave clouds).
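To put a rough number on "small fall speeds" (my own back-of-envelope estimate using Stokes drag, valid for droplets up to a few tens of microns):

```latex
% Stokes terminal velocity of a water droplet of radius r in air:
v_t = \frac{2 r^2 g \left(\rho_w - \rho_a\right)}{9 \eta}
% With r = 10\,\mu\mathrm{m},\ \rho_w \approx 10^3\,\mathrm{kg/m^3},\
% \eta \approx 1.8\times 10^{-5}\,\mathrm{Pa\,s}: \quad v_t \approx 1\,\mathrm{cm/s},
% easily offset by weak updrafts.
```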
The reason clouds tend to have a consistent base altitude is due to how temperature and water vapor behave in the boundary layer. On sunny, windy days the boundary layer is well mixed and potential temperature and water vapor mixing ratio are constant with height. With a constant potential temperature the actual temperature will decrease at about 10$^\circ$C per kilometer of height. With constant specific humidity and decreasing temperature as we go upward this means the relative humidity is increasing. At some height the relative humidity becomes 100% and we call this height the lifted condensation level (LCL), which will also be the height of the cloud bases. If surface characteristics are similar across an area, so will be the LCL and this will make all the clouds in the area have similar bases. You were on the right track with the significant change in air composition, but it is a thermodynamic change with pressure playing a significant role. | {
"domain": "earthscience.stackexchange",
"id": 578,
"tags": "meteorology, clouds"
} |
Visitor Pattern in C++ 11 | Question: I have referred some class diagram to actually create visitor.
#include <iostream>
using namespace std; // I know it should not be used, but for simplicity
class Book;
class Pen;
class Computer;
class ICartVisitor
{
public:
virtual int Visit(Book &) = 0;
virtual int Visit(Pen &) = 0;
virtual int Visit(Computer &) = 0;
};
class IElement
{
public:
virtual int accept (ICartVisitor& cartVisitor) = 0;
virtual int getPrice () = 0;
};
class Book : public IElement
{
public:
int accept (ICartVisitor& cartVisitor)
{
return cartVisitor.Visit(*this);
}
int getPrice ()
{
return 100;
}
};
class Pen : public IElement
{
public:
int accept (ICartVisitor& cartVisitor)
{
return cartVisitor.Visit(*this);
}
int getPrice ()
{
return 5;
}
};
class Computer : public IElement
{
public:
int accept (ICartVisitor& cartVisitor)
{
return cartVisitor.Visit(*this);
}
int getPrice ()
{
return 10000;
}
};
class CartVisitor : public ICartVisitor
{
public:
//inlining all the functions for simplicity
int Visit(Book & book)
{
return book.getPrice();
}
int Visit(Pen & pen)
{
return pen.getPrice();
}
int Visit(Computer & computer)
{
return computer.getPrice();
}
};
int main(int argc, char* argv[])
{
int total = 0;
ICartVisitor *cartVisitor = new CartVisitor();
total += cartVisitor->Visit(*(new Computer()));
total += cartVisitor->Visit(*(new Book()));
total += cartVisitor->Visit(*(new Pen()));
cout<<total;
return 0;
}
Answer: Stop using new
// No No No
ICartVisitor *cartVisitor = new CartVisitor();
Just create a local object:
CartVisitor cartVisitor; // a concrete type; ICartVisitor is abstract
Again. You don't need to dynamically create objects.
total += cartVisitor->Visit(*(new Computer()));
// In this case use a named automatic object: a temporary cannot bind to
// the non-const reference parameter of Visit().
Computer computer;                     // Automatic object that is
total += cartVisitor->Visit(computer); // correctly destroyed.
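Putting those points together, here is a minimal compilable sketch (trimmed stand-ins for the question's classes, not the full redesign; Visit takes const& here so that temporaries can bind to it):

```cpp
#include <cassert>

// Trimmed stand-ins for the question's classes. Visit takes const& so the
// temporaries below can bind to it (a plain non-const & could not).
struct Book     { int getPrice() const { return 100; } };
struct Pen      { int getPrice() const { return 5; } };
struct Computer { int getPrice() const { return 10000; } };

struct CartVisitor {
    int Visit(const Book& b)     { return b.getPrice(); }
    int Visit(const Pen& p)      { return p.getPrice(); }
    int Visit(const Computer& c) { return c.getPrice(); }
};

int cartTotal()
{
    CartVisitor cartVisitor;                // automatic storage, no delete
    int total = 0;
    total += cartVisitor.Visit(Computer()); // temporaries, destroyed at ';'
    total += cartVisitor.Visit(Book());
    total += cartVisitor.Visit(Pen());
    return total;
}
```

With these prices cartTotal() returns 10105, and nothing leaks.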
Visitor pattern
The visitor pattern is good when your class hierarchy is stable and does not change much (or at all).
Also the interface that indicates you accept a visitor usually only has the accept() method. Seems like you are mixing two interfaces into IElement.
class IElement
{
public:
virtual int accept (ICartVisitor& cartVisitor) = 0;
virtual int getPrice () = 0;
};
I don't see anything wrong with your visitor pattern. But I would not expect it to return an int (it's just a bit unexpected). This is because usually when you visit an object it means that object will have sub-objects that need to be visited. Rather than returning state via the calls to visit() you mutate the state of the visitor object. When you are finished visiting all the objects, just query the visitor about its state.
class ICartVisitor
{
public:
virtual void Visit(ShoppingCart& cart) = 0;
virtual void Visit(Book &) = 0;
virtual void Visit(Pen &) = 0;
virtual void Visit(Computer &) = 0;
};
Then an implementation would look like this:
class CartVisitor : public ICartVisitor
{
int costOfItemsInCart;
public:
CartVisitor()
: costOfItemsInCart(0)
{}
int getCostOfCart() const {return costOfItemsInCart;}
void Visit(ShoppingCart & cart)
{
// Nothing happens you can't buy a cart.
// Note it is the accept on the shopping cart that
// calls the visit on all the items in the cart.
// The visitor does not need to know anything about
// the carts implementation. It just knows that each
// object knows about its own sub objects and will be
// passed to them via the accept method.
}
void Visit(Book & book)
{
// Book does not need a virtual `getPrice()` method
// Being the visitor of Book you know that it is a book
// and its whole interface. You don't need to implement
// a limited interface (you have the whole Book interface).
// You just need to have a book with `getPrice()` method on it.
// for this particular implementation of the Visitor.
// Other visitors may not need the price (they may be calculating
// the weight of the cart for determining shipping costs).
costOfItemsInCart += book.getPrice();
}
void Visit(Pen & pen)
{
costOfItemsInCart += pen.getPrice();
}
void Visit(Computer & computer)
{
costOfItemsInCart += computer.getPrice();
}
}; | {
"domain": "codereview.stackexchange",
"id": 16859,
"tags": "c++, object-oriented, c++11, design-patterns"
} |
Surprising Results in Complexity (Not on the Complexity Blog List) | Question: What were the most surprising results in complexity?
I think it would be useful to have a list of unexpected/surprising results. This includes both results that were surprising and came out of nowhere and also results that turned out different than people expected.
Edit: given the list by Gasarch, Lewis, and Ladner on the complexity blog (pointed out by @Zeyu), let's focus this community wiki on results not on their list. Perhaps this will lead to a focus on results after 2005 (as per @Jukka's suggestion).
An example: Weak Learning = Strong Learning [Schapire 1990]: (Surprisingly?) Having any edge over random guessing gets you PAC learning. Led to the AdaBoost algorithm.
Answer: Here is the guest post by Bill Gasarch with help from Harry Lewis and Richard Ladner:
http://blog.computationalcomplexity.org/2005/12/surprising-results.html | {
"domain": "cstheory.stackexchange",
"id": 116,
"tags": "cc.complexity-theory, big-list"
} |
Time derivative of expectation value of observable is always zero (quantum mechanics) | Question: In my book about quantum mechanics it state that the time derivative of an arbitrary observable is:
$$\frac{d}{dt}\langle A \rangle = \frac{1}{i\hbar} \langle [A,H] \rangle + \bigg{\langle }\frac{dA}{dt} \bigg{\rangle} $$
with $H$ being the Hamiltonian. They derived this equation by using the product rule of differentiation for the bra $\langle \psi|$ , the ket $|\psi\rangle$ and the operator $A$ and by using the Schrodinger equation (+ its conjugate form). However, when I used the product rule on only the bra $\langle \psi|$ and the ket $A|\psi\rangle$ I get the following: $$\frac{d}{dt}\langle A \rangle = \bigg{(}\frac{d}{dt} \langle \psi|\bigg{)} A|\psi\rangle + \langle \psi| \bigg{(}\frac{d}{dt} (A|\psi\rangle)\bigg{)} = -\frac{1}{i\hbar} \langle \psi|HA|\psi\rangle + \frac{1}{i\hbar} \langle \psi|HA|\psi\rangle = 0$$
Here, for the second term, I used the Schrodinger equation on the state $A|\psi\rangle$. What did I do wrong ?
Thanks in advance !
Answer: I think this is a nice question. It ultimately boils down to the following:
If $i\hbar\frac{d}{dt}|\psi\rangle = H|\psi\rangle$, then why does $i \hbar\frac{d}{dt}\big(A|\psi\rangle\big) \neq H\big(A|\psi\rangle\big)$, since $A|\psi\rangle$ is also a valid state vector?
The answer is a bit subtle. The time evolution of a quantum mechanical state takes the form of a path through the underlying Hilbert space - that is, a function
$$\psi: \mathbb R\rightarrow \mathcal H$$
$$t \mapsto \psi(t)\in \mathcal H$$
The Schrodinger equation tells us that the physical paths through the Hilbert space are such that
$$i\hbar\psi'(t)= H\big(\psi(t)\big)$$
In particular, the time derivative acts on the function $\psi$, while the Hamiltonian operator acts on the state vector $\psi(t)$. The standard Dirac notation obscures this by writing
$$i\hbar\frac{d}{dt}|\psi\rangle = H|\psi\rangle$$
from which it is easy to get the mistaken impression that it makes sense to differentiate a state vector with respect to time.
Armed with this clarification, the answer is that $\psi(t)$ being a physical path does not guarantee that $A\big(\psi(t)\big)$ is a physical path. The latter is merely the image of a physical path under the action of the function (operator) $A$.
This concept is not reserved for quantum mechanics. Think about classical physics. Newton's law applied to a free particle yields $\frac{d^2}{dt^2} x = 0$. Does this imply that $\frac{d^2}{dt^2}f(x) = 0$ for some arbitrary function $f$? Certainly not - for example, consider $f(x)=x^2$.
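The classical counterexample can be checked in one line:

```latex
% With \ddot{x} = 0 but \dot{x} \neq 0 (a free particle in motion):
\frac{d^2}{dt^2}\,x^2 \;=\; \frac{d}{dt}\bigl(2 x \dot{x}\bigr)
\;=\; 2\dot{x}^2 + 2x\ddot{x} \;=\; 2\dot{x}^2 \;\neq\; 0 .
```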
If $\psi(t)$ is a physical path, then one has that
$$\frac{d}{dt}(A\psi(t)) = \frac{\partial A}{\partial t} \psi(t) + A \psi'(t) = \frac{\partial A}{\partial t}\psi(t) + A\big(\frac{1}{i\hbar}H\psi(t)\big)$$
Inserting this into the expectation value then yields the correct result,
$$\begin{align}\frac{d}{dt}\langle \psi(t),A\psi(t)\rangle &= \langle \psi'(t),A\psi(t)\rangle + \langle \psi(t),\frac{\partial A}{\partial t}\psi(t)\rangle + \langle \psi(t),A\psi'(t)\rangle\\&=-\frac{1}{i\hbar}\langle H\psi,A\psi\rangle +\frac{1}{i\hbar}\langle \psi,AH\psi\rangle + \left\langle\frac{\partial A}{\partial t}\right\rangle\\&=-\frac{1}{i\hbar}\langle \psi,HA\psi\rangle +\frac{1}{i\hbar}\langle\psi,AH\psi\rangle + \left\langle\frac{\partial A}{\partial t}\right\rangle\\&=\frac{1}{i\hbar}\left\langle[A,H]\right\rangle + \left\langle\frac{\partial A}{\partial t}\right\rangle\end{align}$$ | {
"domain": "physics.stackexchange",
"id": 68863,
"tags": "quantum-mechanics, operators, schroedinger-equation"
} |
How can an irreversible isothermal process exist? | Question: An isothermal process is one in which temperature of the gas is constant throughout the process.
From my physics textbook:
the state of a gas is described by specifying its pressure $p$, volume $V$ and temperature $T$. if these parameters can be uniquely specified at a time, we say that the gas is in thermodynamic equilibrium.
This is always the case in a reversible process, but if we cannot specify temperature during an irreversible process, how can we say that during an irreversible isothermal process, the temperature of gas remains constant throughout?
Answer:
how can we say that during an irreversible isothermal process, the temperature of gas remains constant throughout?
As @Chet Miller pointed out in his answer, the term "isothermal", or constant temperature, in connection with an irreversible process refers only to the temperature at the boundary between the system and surroundings, where the surroundings is considered to be an ideal (constant temperature) thermal reservoir. Since there can only be one temperature at a specific location, and given that the temperature of an ideal thermal reservoir at the boundary is considered constant (which in reality it isn't), then so must the temperature of the gas at the boundary be considered constant.
As a practical matter, for want of a better term, one uses the term isothermal for an irreversible process carried out in constant-temperature surroundings to distinguish it from a reversible process carried out in the same constant-temperature surroundings. See the figure below.
For the reversible isothermal expansion the external pressure is gradually reduced so that both the pressure and temperature throughout the gas, not just at the boundary with the surroundings, is always in equilibrium with the surroundings.
For the irreversible process the external pressure is suddenly reduced (e.g., suddenly removing a weight from atop a piston) and the gas is allowed to expand rapidly (irreversibly) until pressure and temperature equilibrium is reestablished. During the irreversible expansion the temperature and pressure of the gas are only the same as the surroundings at the boundary (pressure because of Newton's 3rd law). Beyond the boundary, however, temperature and pressure gradients exist.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 94781,
"tags": "thermodynamics, equilibrium, reversibility"
} |
Is a computer without RAM, but with a disk, equivalent to one with RAM? | Question: Memory is used for many things, as I understand. It serves as a disk-cache, and contains the programs' instructions, and their stack & heap. Here's a thought experiment. If one doesn't care about the speed or time it takes for a computer to do the crunching, what is the bare minimum amount of memory one can have, assuming one has a very large disk? Is it possible to do away with memory, and just have a disk?
Disk-caching is obviously not required. If we set up swap space on the disk, program stack and heap also don't require memory. Is there anything that does require memory to be present?
Answer: Sure. In principle, given appropriate hardware, you could have just a disk, with everything stored on disk. Any time the CPU did a load or store instruction, there could be some hardware that turns that into a disk read or write. It'd be extremely slow: on a magnetic disk, each seek takes about 10ms, so you could do about 100 random-access reads and writes per second.
Some systems have flash memory mapped into their address space. Flash memory provides non-volatile (persistent) storage. So, in some ways this resembles what you mention -- though those systems usually have RAM as well. | {
"domain": "cs.stackexchange",
"id": 6235,
"tags": "computability, memory-management, memory-hardware"
} |
What is the axial tilt of a planet measured relative to? | Question: I am very much a beginner on the astronomy front but I understand about planets having different axial tilts, hence why Venus turns the opposite direction from the Earth and Uranus turns sideways.
However, I am confused as to what the axial tilt bearing is measured from. For example, if all planets were in a perfect line from the sun, would they spin perfect in alignment on their own various axial tilts? Would some still be 'off-set' by a few degrees to one side or the other?
Do we actually know please?
Answer: If you look at the Astronomical Almanac for the year 2011 as an example, in the table at the top of page E3, you find two measures of axial tilt. The third column is the declination of the planet's north pole for the mean equinox and equator of date 2011 January 0, 0 hours terrestrial time. So this is measured with respect to the Earth's orbit. The last column is the inclination of the planet's equator to the planet's orbit.
If you look at the right ascension column in the same table, you see that on that particular date the right ascensions are all different. | {
"domain": "astronomy.stackexchange",
"id": 968,
"tags": "planet, rotation"
} |
All output functions of a truth table | Question:
http://www.cim.mcgill.ca/~langer/273/3-notes.pdf
I can name one more: XNOR. But besides these, what other output functions are there?
Answer: Suppose you try to come up with a new output function, which you call $A\otimes B$, with new, custom behaviour. You will have to come up with four values: $0\otimes 0, 0 \otimes 1, 1 \otimes 0, 1 \otimes 1$. One arbitrary example is: $0\otimes 0 = 0, 0 \otimes 1 = 1, 1 \otimes 0 = 0, 1 \otimes 1 = 0$. Any sequence of zeroes and ones will do (this is the answer to your question). So you can write a string of four zeroes or ones, and that will represent your function. For example, my function is represented with $0100$. How many such strings are there? Well, try enumerating them: $0000,0001,0010,0011,\ldots$ Hence the remark that there are $2^{4}=16$ possible functions.
Not all of these functions are equally useful. The example I gave is equivalent to $A\otimes B = \neg A \wedge B$. In fact, every (binary) output function can be written as a combination of the three functions $\neg, \wedge, \vee$, i.e. NOT, OR and AND.
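A short enumeration makes the counting concrete; a sketch (the dictionary encoding of $\otimes$ is my own):

```python
from itertools import product

# A binary output function is fixed by its outputs on the four input
# pairs (0,0), (0,1), (1,0), (1,1), so there are 2**4 = 16 of them.
inputs = list(product([0, 1], repeat=2))
tables = list(product([0, 1], repeat=4))
print(len(tables))  # 16

# The example function 0100 (called A ⊗ B above) coincides with (NOT A) AND B.
otimes = dict(zip(inputs, (0, 1, 0, 0)))
for a, b in inputs:
    assert otimes[(a, b)] == int((not a) and b)
```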
Consider functions that use three arguments, $A,B,C$. The truth table of such a function has $8$ entries, so there are a total of $2^{8}=256$ functions on three variables. Can you find a formula for the number of functions on $n$ variables? The example functions in your excerpt are all examples of commutative functions, which are functions $\circ$ where $A\circ B = B \circ A$ for every value of $A$ and $B$. Can you think of a noncommutative function? | {
"domain": "cs.stackexchange",
"id": 6798,
"tags": "propositional-logic"
} |
Are there animals who do things just for pleasure like humans? | Question: In a debate my father said that only humans do things that are not directly linked to survival (painting, playing musical instruments, singing, etc.). He said that this is because we are "special" and not like other animals on the planet.
I said that this is only because we have much more developed brain than other animals. But now I'm also interested in one question.
Are there animals who do similar things (not exactly the same, of course) not for mating/survival but just because they like it?
Answer: I see that Gerardo has said effectively what I was going to: animals don't do things because they have reasoned out that these things will help them survive; they engage in behaviors because these behaviors lead to reward circuitry response and animals who find general categories of behavior rewarding have tended to survive better. Sometimes behaviors that are not directly related to survival trigger the same circuitry.
(P.S.: this is exactly what is happening in humans, too.)
Masturbation is one example, as above; so are animals that 'adopt' either conspecific or heterospecific juveniles who don't belong to them, which is not uncommon in many species. (Barn cats in particular do this often; aside from all the cute Youtube pictures of cats who think ducklings are kittens, it's very common for queens with kittens about the same age to co-nurse them.) You can argue that sociality and taking pleasure in altruistic activities with no immediate personal benefit is a survival adaptation, but that's not a calculation the animal is consciously making at the time.
Animals often have strong aesthetic preferences, especially for conspecifics; female and male animals often prefer traits that don't have anything obvious to do with survival in potential mates, and there is an entire discipline of sexual selection in biology devoted to teasing out why. It's worth noting, however, that in many species it turns out that females have preferences that vary widely for a number of traits, and that female preferences do not always line up to male reality...
Outside of naturalistic social behavior, numerous examples of elephants and chimpanzees creating drawings or paintings in their enclosures without any conscious reinforcement from their keepers have proved interesting reading. There are also examples of animals who keep "pets", including the famous case of Koko the gorilla and her pet kittens. You may find this article, with descriptions and citations of several animals who may or may not be engaging in "art" for its own sake, to be a useful read. | {
"domain": "biology.stackexchange",
"id": 7527,
"tags": "ethology"
} |
How to find the resolution of a spectrum? | Question: I have been tasked to find the spectral resolution of some synthetic spectra (wavelength in Angstroms vs. flux) of different stars and degrade them to the resolution of observed spectra. But I am not sure how to find the resolution of a given spectrum?
I tried calculating the values for $\dfrac{\lambda}{\Delta\lambda}$ and averaging them, but it is giving me a value that is a lot larger than I expected.
Edit:
I have added a part of an observed spectrum below. I have calculated the dLambda value and the value $\frac{\lambda}{\Delta\lambda}$ (named Resolution here). The resolution should be around 30000 (taken using VLT UVES) but I am getting a value that is 10 times larger.
Answer: You really need to find the resolution that the synthetic spectra were generated at. This isn't something you should be trying to find from the spectra themselves.
From the Table you have shown, you appear to have just divided the wavelength by the bin size to get 330,000 for the resolving power. (NB Resolving power is the wavelength divided by the FWHM of a resolution element. The resolution is the FWHM of the resolution element.)
It is highly unlikely that this gives the right result because then your synthetic spectra would be undersampled.
However, if you can assume that the line broadening in the spectra is dominated by the imposed resolution of the spectra, rather than the intrinsic broadening of the lines and turbulence present in the atmosphere itself - which might be true for relatively low resolution spectra - then a possible procedure is to take the Fourier transform.
The FT will be that of the convolution of random delta functions with the line spread function. The result will be the product of the FT of the random delta functions and that of a Gaussian (which is also a Gaussian). This should give you a total product that has a Gaussian envelope. The inverse of the frequency-space width of this Gaussian envelope should then be an estimate of the Gaussian sigma corresponding to your resolution.
Perhaps an easier rough way of doing this is to try and find what look like isolated absorption lines and fit them with Gaussian functions. The FWHM of those Gaussians is approximately your resolution (it is actually an upper limit, because it assumes the intrinsic broadening is negligible).
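As a sketch of that line-fitting idea (every number below is invented, not from a real spectrum): build a Gaussian absorption line of known sigma, read the FWHM off the half-depth crossings, and form lambda/FWHM.

```python
import math

# Synthetic Gaussian absorption line; all numbers are illustrative only.
lambda0, sigma = 5000.0, 0.07                      # Angstroms
wav = [4998.0 + 0.001 * i for i in range(4001)]    # 1 mA sampling
depth = [0.5 * math.exp(-0.5 * ((w - lambda0) / sigma) ** 2) for w in wav]

# FWHM measured directly from the half-depth crossings of the profile.
half = max(depth) / 2.0
crossing = [w for w, d in zip(wav, depth) if d >= half]
fwhm = crossing[-1] - crossing[0]

print(fwhm)             # close to 2*sqrt(2*ln 2)*sigma ~ 0.165 A
print(lambda0 / fwhm)   # resolving power, roughly 30000
```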
A final thought is that perhaps the resolution is limited by the number density of pixels in the synthetic spectra. In which case, it is possible that the spectra have been binned to have a resolution (FWHM) of 2 pixels? But from the information you have given I suspect that the resolution element is more like 10 pixels. | {
"domain": "astronomy.stackexchange",
"id": 6365,
"tags": "observational-astronomy, spectroscopy, spectra"
} |
Can BRAT be used for text classification annotation? | Question: BRAT (brat rapid annotation tool) can be used for named-entity annotation:
Can BRAT be used for text classification annotation? I.e., given the text, annotate whether it belongs to some classes?
Answer: According to their documentation, BRAT does a lot of things, but text classification isn't one of them. BRAT is "too powerful" for that. I'd recommend you use a tool like Prodigy instead.
This should do what you're looking for. | {
"domain": "datascience.stackexchange",
"id": 2428,
"tags": "nlp"
} |
Wheel Odometry Covariance | Question:
Hi,
I am using robot_localization. I'm using the typical setup
EKF1 fuses encoders + IMU
EKF2 fuses encoders + IMU + GPS
My question is about encoder covariances. The robot_localization documentation says
If the odometry provides both position and linear velocity, fuse the linear velocity.
If the odometry provides both orientation and angular velocity, fuse the orientation.
Following this advice I am fusing the orientation that the wheel encoders report, but I don't know what to set the encoders pose covariance to. Logically I would think the pose covariance should grow every time you move. It would just keep growing and growing without bound. But when I look at the example pose covariance diagonals of the diff_drive_contoller package it says to use static values
pose_covariance_diagonal: [0.001, 0.001, 1000000.0, 1000000.0, 1000000.0, 0.03]
I'm very confused because diff_drive_controller seems like a commonly used package but those covariances don't make sense to me.
Originally posted by shoemakerlevy9 on ROS Answers with karma: 545 on 2018-07-27
Post score: 1
Answer:
Okay, I'll give my opinion on some of the statements you've made (not sure I can hit everything).
Unless the orientation estimate that your wheel odometry provides is based on an internal imu, I would disagree with fusing the orientation over the angular velocity. Wheel odometry orientation will drift over time and throw off your yaw estimate. If your imu is calibrated properly (magnetometer) it should be able to provide you with a better orientation estimate. I'm not sure why it says that in the r_l docs, maybe I'm misinterpreting it and someone else can shed some light.
Regarding the high diff_drive_controller covariances: I assume the inflated covariance values correspond to Z, roll, and pitch. For a vehicle navigating in 2D, the robot would not be measuring these values. I think that in older versions of robot_localization and robot_pose_ekf, it was common practice to inflate the covariance for measurements which you did not want to include in the measurement update (high covariance means measurements will have low impact on state estimate in update). Now with robot_localization you can set in your configuration which measurements you want to fuse, as outlined here.
Regarding your statement "Logically I would think the pose covariance should grow every time you move": I assume you are referring to the covariance of position odometry coming from wheel encoders - logically I would agree with this because most of the time the position odometry on a vehicle is just coming from integration, so over time this estimate would accumulate error. All I can say on this is that if you only fuse velocity information coming from the wheel odometry you shouldn't need to worry about setting these covariances. What you'll find if you only fuse linear / angular wheel velocity in your first EKF is that the covariance for your X and Y pose coming out of that EKF does grow over time (as you'd expect).
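The variance growth described here can be illustrated with a toy simulation (a sketch; the noise level and time step are arbitrary): integrate a noisy velocity measurement of a stationary robot many times and watch the spread of the integrated position grow with the number of steps.

```python
import random

random.seed(0)
SIGMA_V = 0.1   # std-dev of the velocity measurement noise (arbitrary)
DT = 0.1        # integration time step (arbitrary)

def integrated_position(n_steps):
    # Dead-reckon a robot that is actually standing still: integrate
    # a noisy velocity measurement n_steps times.
    pos = 0.0
    for _ in range(n_steps):
        pos += random.gauss(0.0, SIGMA_V) * DT
    return pos

def variance(samples):
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

trials = 2000
var_early = variance([integrated_position(10) for _ in range(trials)])
var_late = variance([integrated_position(100) for _ in range(trials)])
print(var_early, var_late)  # variance grows roughly linearly with step count
```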
Originally posted by stevejp with karma: 929 on 2018-07-27
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Tom Moore on 2018-07-30:
@shoemakerlevy9, yes, that is bad/outdated advice. It only makes sense if the orientation data is absolute, and not subject to drift. If you are fusing wheel encoder data, just fuse the velocities.
r_l never used the "inflate covariances to ignore" model; just disable that variable in the config.
Comment by jayess on 2018-07-30:
@Tom Moore I'm guessing that your comment was intended for @shoemakerlevy9
Comment by Tom Moore on 2018-07-30:
Yes, apologies. Updated.
Comment by jayess on 2018-07-30:
No worries
Comment by robot_new_user on 2019-08-27:
@jayess @Tom Moore @stevejp can you please provide some suggestions on how to estimate the covariance for the velocities if those are not available beforehand? | {
"domain": "robotics.stackexchange",
"id": 31388,
"tags": "navigation, ros-kinetic, robot-localization"
} |
Given a connected graph with edges >= vertices, find an algorithm in O(n + m) that orients the edges such that every vertex has indegree of at least 1 | Question: I'm not exactly sure how to approach this problem. I was thinking we would need to detect a vertex in a cycle, then run DFS on it while also orienting the edges along each vertex u in the cycle, to be directed from u to some vertex v incident to it. But I am not sure what edges cases I might be overlooking, or whether the idea is sufficient. Below is one such example.
Answer: Step 1. Find any spanning tree $T$ for the graph $G=(V,E)$. Number of its edges is $|V|-1$.
Step 2. Find any edge $\{u,v\} \in E$, which doesn't belong to the tree $T$. It can be done because $|E| \ge |V|$.
Step 3. Choose the vertex $v$ as "root" for the tree $T$ and orient all the edges in the tree $T$ "from" this root.
Step 4. Orient the edge $\{u,v\}$ from the vertex $u$ to the vertex $v$. Arbitrarily orient all the remaining edges, which aren't oriented yet. | {
"domain": "cs.stackexchange",
"id": 15951,
"tags": "algorithms, graphs"
} |
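The four-step construction in the answer above can be sketched in pure Python (the example graph at the end is my own, not the one from the question's figure):

```python
from collections import deque

def orient_edges(n, edges):
    """Orient the edges of a connected graph with at least as many edges
    as vertices so every vertex ends up with indegree >= 1 (O(n + m))."""
    adj = {i: [] for i in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    # Step 1: spanning tree T via BFS (n - 1 tree edges).
    tree, seen, q = set(), {0}, deque([0])
    while q:
        a = q.popleft()
        for b in adj[a]:
            if b not in seen:
                seen.add(b)
                tree.add(frozenset((a, b)))
                q.append(b)

    # Step 2: some edge {u, v} outside T exists since len(edges) >= n.
    u, v = next((a, b) for a, b in edges if frozenset((a, b)) not in tree)

    # Step 3: root T at v and orient every tree edge away from v,
    # giving each non-root vertex indegree 1.
    oriented, seen, q = [], {v}, deque([v])
    while q:
        a = q.popleft()
        for b in adj[a]:
            if frozenset((a, b)) in tree and b not in seen:
                seen.add(b)
                oriented.append((a, b))
                q.append(b)

    # Step 4: orient {u, v} as u -> v (covers the root), then orient the
    # remaining non-tree edges arbitrarily.
    oriented.append((u, v))
    chosen = frozenset((u, v))
    for a, b in edges:
        e = frozenset((a, b))
        if e not in tree and e != chosen:
            oriented.append((a, b))
    return oriented

# Example graph (my own): a 5-cycle plus one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
orientation = orient_edges(5, edges)
print(orientation)
```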
Cross Validation how to determine when to Early Stop? | Question: When using "K-Fold Cross Validation" for Neural Net, do we:
1. Pick and save initial weights of the network randomly (let's call it $W_0$)
2. Split data into $N$ equal chunks
3. Train model on $N-1$ chunks, validating against the left-out chunk (the $K$'th chunk)
4. Get validation error and revert the weights back to $W_0$
5. Shift $K$ by 1 and repeat from 3.
6. Average out the validation errors, to get a much better understanding of how the network will generalize using this data.
7. Revert back to $W_0$ one last time, and train the network using the ENTIRE dataset
Question 1:
I realize 7 is possible, because we have a very good understanding of how network will generalize with the help of step 6. - Is this assumption correct?
Question 2:
Is reverting back to the initial $W_0$ a necessity, else we would overfit? (revert like we do in step 4. and 7.)
Question 3, most important:
Assume we've made it to step 7, and will train the model using ENTIRE data. By now, we don't intend to validate it after we will finish. In that case how do we know when to stop training the model during step 7?
Sure, we can train with same number of epochs as we did during Cross validation. But then how can we be sure that Cross Validation was trained with an appropriate number of epochs in the first place?
Please notice - during steps 3, 4, 5 we only had the $K$'th chunk to evaluate Training vs Validation loss. The $K$'th chunk is very small, so during the actual Cross-Validation it was unclear when to Early-Stop... To make things worse, it will be even more difficult in the case of Leave-One-Out (also known as All-But-One), where the held-out chunk is a single training example
Answer: First, I would like to emphasize that cross-validation by itself does not give you any insights about overfitting. That would require comparing training and validation errors over the epochs. Typically you make such a comparison by eye, and you can start with one train/validation split.
Question 1: By getting validation error N times, you develop a reasonable (whether it is very or not very good is a question) understanding of how your network will perform (= what error it will give) on the new unseen data.
Often you do cross validation as a part of grid search of hyper-parameters. Then averaging errors at step 6 is mainly for choosing the best hyper-parameters: you believe that the hyper-parameters are best if the corresponding network produces the smallest average validation error. Simultaneously this error is your estimation on what error the final model will give you on the new data.
If you want, you can proceed with your exploration and compare validation errors (for one and the same hyper-parameter set) with each other, calculate standard deviation in order to get further insights.
The concept "model generalizes well" is more related to the absence of overfitting. To make this conclusion, you need to compare train and validation errors. If they are close to each other, then the model generalizes well. This is not directly related to cross validation.
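The split/retrain procedure from the question (steps 2 through 7) can be sketched without any ML framework; here the "model" is deliberately trivial (a mean predictor standing in for a network re-initialized to $W_0$ each fold) and the data are invented:

```python
import random

random.seed(42)
# Toy dataset: noisy line. The "model" is just the mean target value.
data = [(x, 2.0 * x + random.gauss(0.0, 0.5)) for x in range(20)]
N = 5  # number of folds

def fit(rows):
    # stands in for retraining a network from the same initial weights W0
    return sum(y for _, y in rows) / len(rows)

def mse(model, rows):
    return sum((y - model) ** 2 for _, y in rows) / len(rows)

fold_size = len(data) // N
errors = []
for k in range(N):
    val = data[k * fold_size:(k + 1) * fold_size]          # held-out chunk k
    trn = data[:k * fold_size] + data[(k + 1) * fold_size:]
    errors.append(mse(fit(trn), val))                      # steps 3-5

cv_estimate = sum(errors) / N      # step 6: average validation error
final_model = fit(data)            # step 7: retrain on the ENTIRE dataset
print(cv_estimate, final_model)
```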
Question 2: The only purpose of taking the whole dataset is to train the model on more data. And more data is always good. On the other hand, if you are happy with one of the models produced during cross-validation, you can take it.
Question 3: You can take the number of epochs as one of the parameters in grid-search of hyper-parameters. This search usually goes with cross-validation inside. When at step 7 you take the whole data set, you take more data. Thus, overfitting, if at all, at this stage can only be reduced. If it bothers you that each chunk is small, replace K-fold cross validation with, for example, K times 50/50 train/test splits. And I would never worry about leave-one-out. It was developed for small (very small) datasets. For a neural net to be good, you typically need a large or very large dataset. | {
"domain": "datascience.stackexchange",
"id": 8740,
"tags": "cross-validation"
} |
How to find the distance between stars | Question: I wanted to calculate the distance between the end points of the Orion's belt (Alnitak and Mintaka).
I searched in the internet and I found out their angular separation to be 2.736°.
Then I searched for the actual distances of the two stars from Earth; and it turned to be 1260 light years for Alnitak and 1200 light years for Mintaka.
Then I used the cosines law to find the actual distance between the two stars. I reached 84 light years as the answer.
Questions:
Is the cosines law a good way to find the actual distance between two celestial objects in such far distances?
Is there another way to find the actual distance between two celestial objects?
Can we find the actual distance between two celestial bodies if we don't know the angular distance between them?
Answer:
If you know the distances of both objects to the Earth and the angular separation between them, using the law of cosines is the only reasonable option.
Suppose you know the distances of both objects to the Earth, but not the angular separation. Since you probably also know the coordinates of both stars $\alpha_1, \ \delta_1$ and $\alpha_2, \ \delta_2$, first calculate the angular separation $\theta$ between the two stars using:
$$\cos \theta = \sin\delta_1 \sin\delta_2 + \cos\delta_1 \cos\delta_2 \cos(\alpha_1 - \alpha_2)$$
Now you know the cosine of the angle of separation $\boldsymbol{\cos \theta}$ and you can use the law of cosines to calculate the distance between the two objects.
$$d=\sqrt{d_1^2+d_2^2-2\,d_1 d_2 \cos\theta}$$
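Plugging the asker's numbers into the law of cosines gives a quick sanity check (the distances and angular separation are the values quoted in the question):

```python
import math

d1, d2 = 1260.0, 1200.0        # distances to Alnitak and Mintaka (light years)
theta = math.radians(2.736)    # angular separation from the question

d = math.sqrt(d1**2 + d2**2 - 2.0 * d1 * d2 * math.cos(theta))
print(d)  # roughly 84 light years, matching the asker's result
```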
Of course, in general, if only the distances to the Earth are known, but neither the angular separation nor the coordinates, it is not possible to calculate the distance between them.
Best regards.
PD. A related question, when objects are at cosmological distances: Comoving distance between two points [(RA1, Dec1, z1) and (RA2, Dec2, z2)] | {
"domain": "astronomy.stackexchange",
"id": 6950,
"tags": "distances, angular-resolution"
} |
Java Inventory System | Question: I'm often applying to jobs just to test out my skills. I have recently applied to one, had to submit a Java problem, but got rejected. Could someone please review my application briefly and tell me what was the worst mistake in my code? I am interested in improving myself but it's hard for me to figure out what I need improvements on.
Here is the problem:
Introduction
Assume you are working on a new warehouse inventory management system
named IMS. IMS will be responsible for the inventory tracking within
physical, single site warehouses. IMS will track the named physical
location of a product within the warehouse and the inventory level of
each product. IMS will be deployed to busy warehouses supporting many
pickers and restockers working with individual terminals and clients.
Updates to inventory levels will be handled in real time to prevent
pickers trying to pick a product that is out of stock.
Assumptions
Each product will be stored at one and only one named location within
the warehouse. Inventory adjustments may be additive (restocks) or
subtractive (picks). No additional product information needs to be
tracked beyond location and level.
Problem
In Java, implement the picking and restocking routines for the IMS
system. The IMS interface will be the first
component to be implemented; it has already been distributed to other
teams which depend on it, so all relevant domain objects will have to
conform to it.
GitHub
Here are the important parts:
IMS.java (given to us, to create an inventory)
public interface IMS {
PickingResult pickProduct(String productId, int amountToPick);
RestockingResult restockProduct(String productId, int amountToRestock);
}
Inventory.java (implements IMS and it's methods)
import java.util.Hashtable;
import java.util.Set;
public class Inventory implements IMS {
private Hashtable<String, Product> products = new Hashtable<String, Product>();
private Hashtable<String, Location> locations = new Hashtable<String, Location>();
public void addLocation(Location location) {
locations.put(location.getName(), location);
}
public void addProduct(Product product, Location location, int amount) {
products.put(product.getName(), product);
location.restockProduct(product.getName(), amount);
}
/*
* returns location that has the product when you pass a product ID to it
*/
public Location getLocationByProductID(String productId) throws Exception {
Set<String> keys = locations.keySet();
Location result = null;
for (String key : keys) {
Location current = locations.get(key);
if (current.hasProduct(productId)) {
if (result == null) {
result = current;
} else {
// oh uh, we found the product in two locations, warning the
// user
throw new Exception("product found in two locations");
}
}
}
return result;
}
public void displayInventory() {
Set<String> keys = locations.keySet();
for (String key : keys) {
Location current = locations.get(key);
System.out.println(current.getName());
current.displayInventory();
}
System.out.println("");
}
@Override
public PickingResult pickProduct(String productId, int amountToPick) {
Location loc = null;
Product product = products.get(productId);
// transaction data
boolean transactionSuccess = false;
String transactionMessage = "";
try {
loc = getLocationByProductID(productId);
if (loc == null) {
throw new Exception("Product " + productId + " wasn't found in any location");
}
int amount = loc.getAmountOfProduct(productId);
if (amount < amountToPick) {
throw new Exception("We do not have enough products for this transaction (quantity available: " + amount
+ "), please restock the product " + productId + "!");
}
loc.pickProduct(productId, amountToPick);
transactionSuccess = true;
transactionMessage = "We have successfully picked " + amountToPick + " items of " + productId;
} catch (Exception e) {
transactionMessage = e.getMessage();
}
return new PickingResult(transactionSuccess, transactionMessage, loc, product, amountToPick);
}
@Override
public RestockingResult restockProduct(String productId, int amountToRestock) {
Location loc = null;
Product product = products.get(productId);
// transaction data
boolean transactionSuccess = false;
String transactionMessage = "";
try {
loc = getLocationByProductID(productId);
if (loc == null) {
throw new Exception("Location wasn't found");
}
loc.restockProduct(productId, amountToRestock);
transactionSuccess = true;
transactionMessage = "We have successfully restocked " + amountToRestock + " items of " + productId;
} catch (Exception e) {
transactionMessage = e.getMessage();
}
return new RestockingResult(transactionSuccess, transactionMessage, loc, product, amountToRestock);
}
}
Location.java
import java.util.Hashtable;
import java.util.Set;
public class Location {
private String name;
private Hashtable<String, Integer> amountByProductID = new Hashtable<String, Integer>();
public Location(String name) {
this.name = name;
}
public void restockProduct(String name, Integer amount) {
Integer previousAmount = getAmountOfProduct(name);
amountByProductID.put(name, previousAmount + amount);
}
/*
* returns false if we don't have enough product
*/
public boolean pickProduct(String name, Integer amount) {
Integer previousAmount = getAmountOfProduct(name);
if (previousAmount < amount) {
// not enough items
return false;
}
amountByProductID.put(name, previousAmount - amount);
return true;
}
public boolean hasProduct(String productId) {
return amountByProductID.get(productId) != null;
}
public Integer getAmountOfProduct(String productId) {
Integer amount = amountByProductID.get(productId);
return amount != null ? amount : 0;
}
public String getName() {
return this.name;
}
public void displayInventory() {
Set<String> keys = amountByProductID.keySet();
for (String productId : keys) {
Integer amount = amountByProductID.get(productId);
System.out.println(" " + productId + ": " + amount.toString());
}
}
}
(I renamed a few variables just so that it's not searchable by other candidates).
Answer: Maps
A HashMap or ConcurrentHashMap (if you require the thread-safety) is preferred over the legacy Hashtable, as mentioned in the Javadoc:
As of the Java 2 platform v1.2, this class was retrofitted to implement the Map interface, making it a member of the Java Collections Framework. Unlike the new collection implementations, Hashtable is synchronized. If a thread-safe implementation is not needed, it is recommended to use HashMap in place of Hashtable. If a thread-safe highly-concurrent implementation is desired, then it is recommended to use ConcurrentHashMap in place of Hashtable.
Also, if you need to iterate through the entries of a Map, there is the Map.entrySet() method to do so. Therefore, instead of:
Set<K> keys = map.keySet();
for (K key : keys) {
V value = map.get(key);
// do something with key and value
}
It will be more efficient (not to mention compact) to do this:
for (Entry<K, V> entry : map.entrySet()) {
// use temporary variables if required
K key = entry.getKey();
V value = entry.getValue();
// do something with key and value
}
edit: If you only need to iterate through the values, there is values() as pointed out by @njzk2's comment:
for (V value : map.values()) {
// do something with value
}
Throwing Exceptions
For example:
try {
loc = getLocationByProductID(productId);
if (loc == null) {
throw new Exception("Location wasn't found");
}
loc.restockProduct(productId, amountToRestock);
transactionSuccess = true;
transactionMessage = "We have successfully restocked " + amountToRestock +
" items of " + productId;
} catch (Exception e) {
transactionMessage = e.getMessage();
}
return new RestockingResult(transactionSuccess, transactionMessage,
loc, product, amountToRestock);
You may want to consider using a custom Exception, e.g. LocationNotFoundException, so that you can describe what the specific error is more programmatically. That will also eliminate the unusually wide catch (Exception e) clause, which is sometimes frowned upon because it is practically a catch-all and hints that the codebase is not even sure what can be thrown-and-caught here.
Also, throwing and then catching an Exception immediately in this style seems like using exceptions to control the program flow. It is as if you are using it to skip the last few lines, i.e. for constructing a 'successful' RestockingResult payload. It is arguably better to simply return an 'unsuccessful' RestockingResult payload once an invalid location is encountered.
BTW, were you instructed on what PickingResult and RestockingResult should 'look like'?
edit
int vs Integer
Using the primitive int is often recommended over Integer as it prevents the (wrong?) usage of null, unless you really require something to represent the absence of an int. For example, in your method below:
public void restockProduct(String name, Integer amount) {
Integer previousAmount = getAmountOfProduct(name);
amountByProductID.put(name, previousAmount + amount);
}
Sure, you actually do a null check in getAmountOfProduct(String), but there's nothing to stop a caller from accidentally passing in amount as null. 0 + null will result in a NullPointerException.
Also, if you happen to be on Java 8, there's the nicer Map.merge(K, V, BiFunction) method:
public void restockProduct(String name, int amount) {
amountByProductID.merge(name, Integer.valueOf(amount), Integer::sum);
}
This uses the method reference Integer.sum(int, int) to add any existing value with the new value for the given key.
Type inference for generic instance creation
Since Java 7, type inference for generic instance creation, aka diamond operator, can be used for simplification:
// This
Map<KeyType, ValueType> map = new HashMap<>();
// Instead of
Map<KeyType, ValueType> map = new HashMap<KeyType, ValueType>(); | {
"domain": "codereview.stackexchange",
"id": 15921,
"tags": "java, interview-questions"
} |
Fetch an array result set using a mysqli wrapper class | Question: I have a PHP class:
<?php
//connectionclass.php
class connectionclass{
public $conn;
public $warn;
public $err;
function __construct(){
$this->connect();
}
private function connect(){
$this->conn = @ new mysqli('localhost', 'sever_user', 'user_password');
if ($this->conn->connect_error) {
$this->conn = FALSE;
$this->warn = '<br />Failed to connect database! Please try again later';
}
}
public function get_data($qry){
$result = $this->conn->query($qry);
if ($result->num_rows>=1) {
while($row=$result->fetch_assoc()){
$rows[] = $row;
}
return $rows;
} else {
$this->err = $this->conn->error;
return FALSE;
}
}
}
?>
and a PHP page:
<?php
//login.php
include('/include/connectionclass.php');
$db = new connectionclass();
$query = "SELECT * FROM USERS WHERE user_country='India'";
$data = $db->get_data($query);
$rownum = count($data);
for($i=0;$i<$rownum;$i++){
echo $data[$i]['name'].'- '.$data[$i]['age'].'<br />';
}
?>
Is there an easier way to iterate over $data in the PHP page?
Answer: First, you can use mysqli_result::fetch_all in your connectionclass::get_data() instead of manually looping and building the result array.
Second, you can use foreach to easily iterate through the results. | {
"domain": "codereview.stackexchange",
"id": 23233,
"tags": "php, object-oriented, mysqli, wrapper"
} |
Convert PoseWithCovariance to Transform | Question:
Is it possible to convert geometry_msgs/PoseWithCovariance to geometry_msgs/Transform and how?
Originally posted by Ugnius Malukas on ROS Answers with karma: 18 on 2017-09-15
Post score: 0
Answer:
If covariance is not important:
geometry_msgs::PoseWithCovariance poseWithCovarianceMsg;
tf::Transform transform;
transform.setRotation(tf::Quaternion(poseWithCovarianceMsg.pose.orientation.x,
poseWithCovarianceMsg.pose.orientation.y,
poseWithCovarianceMsg.pose.orientation.z,
poseWithCovarianceMsg.pose.orientation.w));
transform.setOrigin(tf::Vector3(poseWithCovarianceMsg.pose.position.x,
poseWithCovarianceMsg.pose.position.y,
poseWithCovarianceMsg.pose.position.z));
geometry_msgs::Transform transformMsg;
tf::transformTFToMsg(transform, transformMsg);
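Since the conversion is just a field-for-field copy with the covariance dropped, the mapping can also be sketched language-agnostically — here in Python with plain dicts standing in for the ROS message types (no ROS required; field names mirror the messages):

```python
# Stand-ins for geometry_msgs/PoseWithCovariance and geometry_msgs/Transform.
pose_with_covariance = {
    "pose": {
        "position": {"x": 1.0, "y": 2.0, "z": 3.0},
        "orientation": {"x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0},
    },
    "covariance": [0.0] * 36,  # simply discarded by the conversion
}

transform = {
    "translation": dict(pose_with_covariance["pose"]["position"]),
    "rotation": dict(pose_with_covariance["pose"]["orientation"]),
}

assert transform["translation"] == {"x": 1.0, "y": 2.0, "z": 3.0}
assert transform["rotation"]["w"] == 1.0
```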
Originally posted by Ugnius Malukas with karma: 18 on 2017-09-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 28861,
"tags": "ros"
} |
Existence of creation and annihilation operators | Question: In a multiple particle Hilbert space (any space of any multi-particle system), is it sufficient to define creation and annihilation operators by their action (e.g. mapping an n-particle state to an n+1-particle state), or does one have to do anything else, like "proving existence"? Intuitively, one can a priori say it exists if you can write it down, but I don't know.
Answer: For a theory with a mass gap, it is sufficient to define the creation and annihilation operators by their action, but these operators will not have local properties in general, they won't be the Fourier transform of fields that obey microcausality or come from a local Lagrangian.
When particles are spread out over all space, they have no interaction. Their S-matrix element is
$$ S = I + i A $$
Where I is the identity, so has delta-functions making the incoming k's equal to the outgoing k's, while A has only one overall delta function for energy momentum conservation (A might be the identity on some of the particles and not the identity on others, but this is still fewer delta functions). This means that two infinite plane waves have delta-function scattering in the forward direction, but only a smooth distribution in the off-forward direction. The interpretation is just that as you make the beam of particles more tenuous, the number of collisions goes to zero.
This argument fails to a certain extent in infrared divergent theories (like quantum electrodynamics), because you need to make a nonlocal field with every charged particle. In this case, you need to prove existence, since it isn't clear that the same Hilbert space includes both a zero electron state and a one electron state with an infinite range field (although the zero electron state must contain electron-positron states which are net neutral, and so have a shorter range field).
But in any relativistic field theory with a mass gap, if you have a vacuum and a one particle state, you can define creation and annihilation operators which make n-particle (necessarily noninteracting) states. Then you can define a quantum field from these operators. This quantum field will not have local interactions in general, since the particles can be H-atoms, or pions, they don't have to be pointlike photons.
The issue with constructing quantum field theories is to make sure that the field for the elementary particles is local, and this sometimes requires different degrees of freedom, like quarks, which are not asymptotic. In this case, you can't define a quark creation operator, because the quark's infinite range strong field is an infinite mass string, and is definitely outside the Hilbert space of the theory. | {
"domain": "physics.stackexchange",
"id": 4276,
"tags": "quantum-mechanics, mathematical-physics, particle-physics, operators, hilbert-space"
} |
Memoizing in PostScript | Question: The pusherr and poperr procedures maintain an internal stack as a lisp-like
linked-list. It's a little slower than an earlier stackwise
version according to naive testing. It allocates lots of 2- and 3-element
arrays for its data. But the code using them becomes very readable IMO with this
approach.
The memo-func function generates a procedure body specified by the
template but with the DICT replaced by the dict argument.
The resulting procedure simply pushes the local dict, then pushes an error handler for the /undefined
error, then just does load, and pops the handler and the dict. If load does not find the needed value, it will signal an /undefined error which will be caught by our handler.
The error handler calls the procedure named /default and saves the input and output as a new definition in the local dict.
Improvements?
%!
<<
/errorstack null % [ {errordict/errname{handler}} null ]
/pusherr {
errordict 3 1 roll 3 copy pop 2 copy get % ed /n {new} ed /n {old}
3 array astore cvx errorstack 2 array astore /errorstack exch store
put
}
/poperr {
errorstack dup null ne {
aload pop /errorstack exch store
exec put
} if
}
/memo-func {
{
DICT begin
/undefined { pop dup default dup 3 1 roll def } pusherr
load
poperr
end
}
dup length array copy
dup 3 2 roll 0 exch put cvx
}
>> begin
/fib <<
0 1
1 1
/default { dup 2 sub fib exch 1 sub fib add }
>> 100 dict copy memo-func def
0 1 100 { fib = } for
The 100 dict copy allows you to also set the initial size of the dictionary to determine how many entries you can add before it needs to rehash.
This code was previously posted in comp.lang.postscript at the end of a thread which includes the stackwise version mentioned above.
Answer: Bug
poperr leaves a null on the stack if errorstack is null. Better:
/poperr {
errorstack null ne {
errorstack aload pop /errorstack exch store
exec put
} if
}
This leaves the stack clean and makes extra calls to poperr harmless. If desirable, some action could be put in the else case, but since this code is mucking about with error handlers, it is unclear whether signalling an error is correct here.
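For comparison, the same memo-table-with-default idea reads naturally in Python (names here are hypothetical, not a translation of the PostScript):

```python
def memo_func(table, default):
    """On a cache miss, compute the value via default and store it in table."""
    def lookup(n):
        if n not in table:
            table[n] = default(n)
        return table[n]
    return lookup

# The {0: 1, 1: 1} entries play the role of the 0 1 / 1 1 pairs above.
fib = memo_func({0: 1, 1: 1}, lambda n: fib(n - 2) + fib(n - 1))

assert fib(10) == 89
```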
"domain": "codereview.stackexchange",
"id": 29081,
"tags": "fibonacci-sequence, memoization, postscript"
} |
Can we see the Fourier transform as a filtering operation? | Question: Is the Fourier Transform (FT) a filtering operation?
Is it possible to graphically represent this transform?
I know that a signal can be represented in the frequency domain, but I want to know if the FT operation can be "drawn" as a coefficient series.
The FT matrix is composed by $N\times N$ elements, and I can apply it to the input sequence of length $N$ in order to obtain an output sequence of length $N$. But I want to understand if there is another way to see the FT matrix in one dimension.
Answer: The Fourier transform is linear. In its discrete version, its result can be interpreted as the result of $N$ different (complex) filters (whose impulse responses are sines with different frequencies) run in parallel, whose outputs are subsampled by a factor of $N$. In other words, a critically sampled complex filter bank.
If you think of a traditional linear filter, that weights samples of a signal, then you cannot really interpret an FT as a "single" filter.
It is possible to graphically represent this transform for 1D signals, using spiral-like or phasor representations to express their phase, as depicted in What is the physical significance of negative frequencies?: | {
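A small numpy sanity check of the filter-bank view (a sketch only): correlating the signal with each of the $N$ complex exponentials reproduces the DFT bin for bin.

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
x = rng.normal(size=N)

# One "filter" per output bin: correlate x against the complex
# exponential at frequency k/N, for k = 0..N-1.
bank = np.array([
    np.sum(x * np.exp(-2j * np.pi * k * np.arange(N) / N))
    for k in range(N)
])

assert np.allclose(bank, np.fft.fft(x))
```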
"domain": "dsp.stackexchange",
"id": 4090,
"tags": "filters, fourier-transform, linear-systems"
} |
Data that's not missing is called...? | Question: Is there a standard term for data that are not missing? I.e. is it called non-missing, present, or something else?
Answer: Depends on context, but I would probably go for "observed" (vs. "unobserved"). A suitable direct antonym of "missing" might be "extant".
"domain": "datascience.stackexchange",
"id": 1166,
"tags": "data-cleaning, terminology, missing-data"
} |
Math equation generator program | Question: I am building this program in Python to generate 10 random arithmetic questions of either multiplication, addition or subtraction, and then save users' scores to a .txt file. I just wanted some help condensing the code, as I'm really stuck on what to do with it.
How can I combine the function (e.g. multiplication, addition and subtraction) into one?
import random
import time
import sys
ans = 0 #variable to hold question answer
question = 0 #question number
user_score = 0 #user's score
userInput = int() #where user enters the answer
lastName = str() #holds last name
firstName = str() #holds first name
form = str() #holds user's form
def function(score,name): #writes user's information to a .txt file
sumOfStudent = (name + ' scored ' + str(user_score))
classNameTxt = (className, '.txt.')
f = open(className, 'a')
f.write(sumOfStudent + form + '\n')
f.close()
def multiplication(): #creates a multiplication question
global ans
numberOne, numberTwo = random.randint(0,20), random.randint(0,20)
print("What is" , numberOne , "*" , numberTwo)
ans = (numberOne * numberTwo)
def subtraction(): #creates a subtraction question
global ans
numberOne, numberTwo = random.randint(0,20), random.randint(0,20)
print("What is" , numberOne , "-" , numberTwo)
ans = (numberOne - numberTwo)
def addition(): #creates a addition question
global ans
numberOne, numberTwo = random.randint(0,20), random.randint(0,20)
print("What is" , numberOne , "+" , numberTwo)
ans = (numberOne + numberTwo)
operation = [multiplication,subtraction,addition] #holds all of the opperators
randOperation = random.choice(operation) #chooses a random operator
lastName = input("Please enter your surname: ").title()
firstName = input("Please enter your first name: ").title()
className = input("Please enter your form: ").title()
print()
def main(): #main game loop - ask questions and checks it against answer, stops are a give amount of questions
question = 0
user_score = 0
randOperation = random.choice(operation)
while True:
try:
randOperation()
randOperation = random.choice(operation)
if question >= 10:
break
userInput = int(input("Enter the answer: "))
if userInput == ans:
print("Correct!" + "\n")
user_score += 1
question += 1
else:
print("Incorrect!" + "\n")
question += 1
except ValueError:
print("I'm sorry that's invalid")
question += 1
main() #initializes the function
print(firstName, lastName , "you scored" , user_score , "out of 10") #shows the user's score and name
user_name = firstName + ' ' + lastName
function(user_score,user_name)
def endMenu():
while True:
try:
options = int(input('''Press '1' to view users' scores,
press '2' to restart the test,
press '3' to exit the game,
Enter option here: '''))
except ValueError:
print("I'm sorry that was invalid...")
if options == 3: #exits the game...
sys.exit()
elif options == 2: #starts the game loop again because it's in a function
main()
elif options == 1: #displays everything on the .txt file
f = open('userScore.txt', 'r')
print(f.read())
print()
endMenu()
else:
print("Sorry, I don't understand. Please try again...")
print()
endMenu()
endMenu()
Answer: Besides the obvious code repetition that you mentioned, a few salient issues deserve mentioning:
Your main() function is not really the main code of your program. Rather, you have some free-floating code outside of any function, and the main function is actually called endMenu(), which is surprising.
The endMenu() function is improperly recursive. The while True loop should suffice.
You use global variables a lot. While firstName and lastName are somewhat excusable (they are entered once by the user, and never change subsequently), you really shouldn't use globals for transient state like ans, question, etc.
function is the least-informative name possible for a function.
To address those issues, as well as the code repetition you mentioned, I'd write the program like this:
import operator
import random
OPERATIONS = [
('+', operator.add),
('-', operator.sub),
('*', operator.mul),
]
def random_question(binary_operations, operand_range):
"""Generate a pair consisting of a random question (as a string)
and its answer (as a number)"""
op_sym, op_func = random.choice(binary_operations)
n1 = random.randint(min(operand_range), max(operand_range))
n2 = random.randint(min(operand_range), max(operand_range))
question = '{} {} {}'.format(n1, op_sym, n2)
answer = op_func(n1, n2)
return question, answer
def quiz(number_of_questions):
"""Ask the specified number of questions, and return the number of correct
answers."""
score = 0
for _ in range(number_of_questions):
question, answer = random_question(OPERATIONS, range(0, 21))
print('What is {}'.format(question))
try:
user_input = float(input("Enter the answer: "))
except ValueError:
print("I'm sorry that's invalid")
else:
if answer == user_input:
print("Correct!\n")
score += 1
else:
print("Incorrect!\n")
return score
def identify_user():
# TODO, as an exercise for you
pass
def display_score(first_name, last_name, class_name):
# TODO, as an exercise for you
pass
def menu():
# TODO, as an exercise for you
pass
def main():
first_name, last_name, class_name = identify_user()
while True:
menu_choice = menu()
if menu_choice == 1: # Display score
display_score(first_name, last_name, class_name)
elif menu_choice == 2: # Run quiz
QUESTIONS = 10
score = quiz(QUESTIONS)
print('{first_name} {last_name}, you scored {score} out of {QUESTIONS}'.format(**locals()))
elif menu_choice == 3: # Exit
break
else:
print("Sorry, I don't understand. Please try again...")
print()
if __name__ == '__main__':
main()
Observe how there are no global variables: each function accepts parameters and returns specific output values. | {
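The move that collapses the three near-identical functions is the table of (symbol, function) pairs; a quick standalone check of that idea (operands chosen arbitrarily):

```python
import operator
import random

OPERATIONS = [('+', operator.add), ('-', operator.sub), ('*', operator.mul)]

random.seed(0)  # fixed seed only so the sketch is reproducible
sym, func = random.choice(OPERATIONS)
a, b = 6, 3

question = 'What is {} {} {}'.format(a, sym, b)
answer = func(a, b)

# whichever operator was drawn, the symbol and function stay in sync
assert answer == {'+': 9, '-': 3, '*': 18}[sym]
```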
"domain": "codereview.stackexchange",
"id": 11603,
"tags": "python, beginner, random, quiz"
} |
can training for too long lead to overfitting? I am not sure about the specifics of this | Question: does training for a large number of epochs lead to overfitting? I am concerned about this as I am getting an accuracy of nearly 1 on val and training dataset when I am training for 50 epochs
Answer: Yes, training for a large number of epochs can lead to overfitting. This is because after a point the model starts learning noise from the training set. After a certain number of epochs, the majority of what has to be learnt is already learnt, and if you continue past that point, the noise present in the dataset starts affecting the model.
Based on your question, you think 50 epochs is not that many but how many epochs to set also depends on your dataset and what model you are using. If you have a large enough dataset, 50 epochs can be too much. Similarly if you have a small dataset, 50 epochs might not be enough.
On the same note, if you have a neural network with a lot of parameters (for example, GPT-2 or GPT-3), you don't need that many epochs, as the model is large and complex enough to learn from the data in just a few epochs. But if you have a relatively smaller neural network, then you might need to increase the epochs so that the model has sufficient iterations to learn from the data.
I would advise using learning curves to visualize how your model is performing for a given number of epochs. sklearn has a learning_curve function (in sklearn.model_selection) for that purpose.
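To make the learning-curve idea concrete, here is a toy numpy sketch (model, data and hyperparameters are all invented for illustration): record training and validation error after every epoch of gradient descent, then plot the two histories against epoch count.

```python
import numpy as np

rng = np.random.default_rng(0)
x_tr = np.linspace(0, 1, 20)
y_tr = np.sin(2 * np.pi * x_tr) + rng.normal(0, 0.2, x_tr.size)  # noisy training set
x_va = np.linspace(0, 1, 100)
y_va = np.sin(2 * np.pi * x_va)                                  # clean validation set

degree = 9                           # deliberately over-flexible model
X_tr = np.vander(x_tr, degree + 1)
X_va = np.vander(x_va, degree + 1)

w = np.zeros(degree + 1)
lr = 0.1
train_err, val_err = [], []
for epoch in range(5000):
    residual = X_tr @ w - y_tr
    w -= lr * X_tr.T @ residual / x_tr.size   # one gradient-descent epoch
    train_err.append(np.mean(residual ** 2))
    val_err.append(np.mean((X_va @ w - y_va) ** 2))

# Training error only ever falls; watching where val_err bottoms out is
# exactly the early-stopping signal a learning curve gives you.
assert train_err[-1] < train_err[0]
```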
"domain": "datascience.stackexchange",
"id": 11719,
"tags": "machine-learning, deep-learning"
} |
How are photons arranged in a light ray? | Question: Title, or more specifically: if a constant light source is directed onto a flat surface, would the photons spread equally over said surface? In addition, if, say, 10 photons reached the surface at the beginning, would 10 photons still be reaching the surface a split second later?
This question came to me after watching a photoelectric effect experiment presented by Khan Academy. Any help is very much appreciated.
Answer: Well, I think your question is in fact about the flux of photons, namely, the number of photons reaching a certain cross section during a certain time interval. Say, a million photons reaching a one square meter surface in one second.
Back to your question, the answer actually depends on how intense this flux of photons is. If we are talking about a laser beam falling on a spot behind a focusing lens, then yes, the number of photons reaching this spot in one second (or one millisecond, or even one femtosecond) will be (more or less) the same, meaning that the fluctuations of this number are much lower than the number itself.
This is about photons reaching a whole spot on the surface. The distribution of photons across the spot is defined by the properties of the light source. Most lasers produce so called Gaussian laser beams, which means that intensity distribution (and hence photon flux distribution) follows Gauss law (https://en.wikipedia.org/wiki/Gaussian_beam).
The above becomes less and less applicable when we turn to weaker light sources and shorter times. This is something that statistical optics is about, and yes, under certain conditions the (very small) number of photons reaching a certain spot during some (very short) time interval may vary significantly.
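The statistical point can be illustrated with Poisson counting statistics — relative fluctuations scale as $1/\sqrt{N}$, so they matter for dim sources and vanish for bright ones (the mean counts below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def relative_fluctuation(mean_photons, trials=20000):
    """Std/mean of simulated photon counts per time interval."""
    counts = rng.poisson(mean_photons, size=trials)
    return counts.std() / counts.mean()

dim = relative_fluctuation(10)            # ~10 photons per interval
bright = relative_fluctuation(1_000_000)  # ~10^6 photons per interval

# relative fluctuation ~ 1/sqrt(N): the bright beam's is far smaller
assert bright < dim / 100
```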
Hope that helps. | {
"domain": "physics.stackexchange",
"id": 87450,
"tags": "quantum-mechanics, photoelectric-effect"
} |
Does Newton's Third Law specify direction or not? | Question: On Wikipedia and in OpenStax it is said that Newton's Third Law involves the concept of direction in its statement, but in this answer it is said that the direction aspect is not given in the statement of the third law and requires extra experimental ideas.
The particular reason I'm concerned about this is that we could derive the conservation of angular momentum if the third law does have the direction aspect in its statement, so does it or does it not?
Answer:
$\mathbf{F_1} = - \mathbf{F_2}$ according to the third law, but the forces are not along the line connecting the two bodies.
"domain": "physics.stackexchange",
"id": 97655,
"tags": "newtonian-mechanics, forces"
} |
If isospin is conserved under strong interactions, why is it represented by SU(2)? | Question: As far as I know from my readings, SU(2) is the representation group of isospin symmetry, which reflects a deep symmetry of the strong force that conserves flavor.
Isospin symmetry is broken under weak interactions. On the other hand, the Standard Model is assumed to have SU(3)xSU(2)xU(1) symmetry, where the SU(2) refers to the weak interaction. So if it refers to weak interactions and isospin is not conserved, why is it represented by an SU(2) group? What is conserved rather than flavor now? I'm sure I misunderstand something; please help me figure it out.
Answer: (Strong) $SU(2)_F$ isospin is a global $u\leftrightarrow d$ flavor symmetry of the strong force (but not a symmetry of the EM force, and hence only an approximate symmetry).
The $SU(2)_F$ sits inside an approximate global $SU(3)_F$ flavor symmetry of the $u$, $d$ and $s$ quark, cf. e.g. this Wikipedia page.
(Strong) isospin is different from the weak isospin and the standard model gauge group $SU(3)_C\times SU(2) \times U(1)$. | {
"domain": "physics.stackexchange",
"id": 19515,
"tags": "gauge-theory, group-theory, group-representations, strong-force, isospin-symmetry"
} |
How many gigabases of DNA are there on earth? | Question: The human genome is about 770 MB, the C. elegans genome is about 100 MB, the yeast S. cerevisiae is about 12 MB. Different other genomes have been sequenced: how many GB of genomic DNA we have now?
Let's say we would like to make a Noah hard-disk ark: how much space would it take to represent the genomes of all known species on Earth? Is there a way to provide an estimate?
I'm also interested in the total biodiversity: for instance, if two species each have 1 GB genomes and have half of their DNA in common this would count as 1.5 GB.
Answer: If you simply take one order of insects, Coleoptera, there are just under 400,000 described species, with estimates from 850,000 to 4,000,000 species in total in just this order. The number of primate species is under 1,000. If an assumption of, say, 10 MB per genome were accurate, then just adding in the low-end estimate of 850,000 beetle species at 10 MB each puts us at 8,500 GB (8,500,000 MB), which is far beyond the single-GB range.
So, we have a broad estimate of non-bacterial species — plants, animals, etc. — at, say, 8,700,000.
In a survey of 1 gram of soil, Jason Gans found approximately 1,000,000 bacterial species.
So the total species count is impossible to estimate with any accuracy at this time, let alone the total genome content.
Even for something as "common" as a giraffe, there are up to 9 sub-species with genome differences within each subspecies.
So, once we get them all described, we can then work on the genome sequence for each and get you some answers!
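The beetle back-of-envelope above, written out (the 10 MB-per-genome figure is an assumption carried over from the question, not a measurement):

```python
beetle_species_low = 850_000   # low-end species estimate for Coleoptera
mb_per_genome = 10             # assumed average genome size in MB
total_gb = beetle_species_low * mb_per_genome / 1000  # MB -> GB

assert total_gb == 8_500.0
```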
"domain": "biology.stackexchange",
"id": 9066,
"tags": "genetics, dna, bioinformatics"
} |
Doubt on Motional EMF | Question: For the motional EMF of a rod of resistance r, connected to a resistance R and moved with velocity v, is the setup equivalent to:
1) the battery, r and R all connected in parallel, or
2) all in series?
I think it should be 1), as V will be the potential across the rod, but I have seen questions treating it as the latter.
Answer: Suppose our setup looks like this:
where $r$ is the resistance between the ends of the rod and $R$ is the external resistance. You don't specify which direction the field is in so I've arbitrarily chosen it to be going into the page.
The motion of the rod at a velocity $\mathbf v$ causes a Lorentz force $\mathbf F_e = -e\mathbf v \times \mathbf B$ on the electrons in the rod, so the force on the electrons points downwards:
So the electrons will travel round the circuit like this:
Note that the electrons flow in series through the rod then through the external resistance $R$ then back into the rod again, so the equivalent circuit looks like this: | {
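Since the rod and the external resistance carry the same current in series, the numbers follow from $\mathcal{E} = BLv$ and $I = \mathcal{E}/(R + r)$ — a quick check with made-up values:

```python
# All values are illustrative, not taken from the question.
B = 0.5    # magnetic field, T
L = 0.2    # rod length, m
v = 3.0    # rod speed, m/s
R = 10.0   # external resistance, ohm
r = 2.0    # rod resistance, ohm

emf = B * L * v          # motional EMF across the rod, V
current = emf / (R + r)  # series circuit: resistances simply add

assert abs(emf - 0.3) < 1e-12
assert abs(current - 0.025) < 1e-12
```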
"domain": "physics.stackexchange",
"id": 64387,
"tags": "electromagnetism, electromagnetic-induction"
} |
What is the magnitude of acceleration called? | Question: According to the Wikipedia page on velocity:
The scalar absolute value (magnitude) of velocity is called speed
and according to the Wikipedia page on acceleration:
Accelerations are vector quantities (in that they have magnitude and direction)
thus I am wondering:
What is the magnitude of acceleration called?
Answer: There is no separate word to distinguish the vector of acceleration from its magnitude.
The same is true with the word force, which is also both a vector and often described by the same word when talking of its magnitude.
Velocity and speed seem to be the exception, probably because speed is an everyday term: "speed of going from town A to town B" implies curving roads and changing vectors.
"domain": "physics.stackexchange",
"id": 68602,
"tags": "kinematics, acceleration, terminology, vectors"
} |