| anchor | positive | source |
|---|---|---|
Finding the smartest set from an array of numbers | Question: Is there any way to make this program run in better time? While running, it takes 1 second for the sample test case to pass and 5-10 seconds for the rest of the test cases.
Problem statement
A smart-set is a set of distinct numbers in which all the elements have the same number of 1s in their binary form. The set of all smallest elements from each smart-set
that can be formed from a given array of distinct positive numbers is known as the smartest-set.
So given an array of distinct numbers, output the elements of the smartest-set in ascending sorted order.
Example
Let the array be {6 , 2 , 11 , 1 , 9 , 14 , 13 , 4 , 18}.
In binary form the set is {110, 010, 1011, 0001, 1001, 1110, 1101, 0100, 10010}.
The smart-sets are {1, 2, 4}, {6, 9, 18}, {11, 13, 14}.
The smartest-set is {1,6,11} as each element is the smallest element from each smart-set.
Input Format
The first line of input consists of an integer t. This is the number of test cases. For each test case,
the first line of input contains an integer n. Here n is the number of elements in the array. The next line contains n space separated distinct integers which are the elements
of the array.
Output Format
The output will be the space separated integer elements of the smartest-set in ascending order.
Constraints
0 < t < 1000 (This is the number of test cases )
2 < n < 10000 (This is the number of integer elements of the array)
1 < Xi < 100000 (This is the size of each element of the array)
SAMPLE STDIN 1
3
9
6 2 11 1 9 14 13 4 18
3
7 3 1
3
1 2 4
SAMPLE STDOUT
1 6 11
1 3 7
1
Code
test_case = input()
for case_num in range(int(test_case)):
    num_of_elements = input()
    arr = input()
    dictn = {}
    #print (num_of_elements, arr)
    for bin_values in list(map(int, arr.split())):
        count = 0
        for occurence in [int(x) for x in list('{0:0b}'.format(bin_values))]:
            if occurence == 1:
                count = count + 1
        dictn[bin_values] = count
    v = {}
    for key, value in sorted(dictn.items()):
        v.setdefault(value, []).append(key)
    lst = []
    for key, value in v.items():
        x = min(value)
        lst.append(str(x))
    s = ' '
    s = s.join(lst)
    print(s)
Answer: Review
Don't print() but return variables, so other functions can reuse the outcome.
Split up your code into functions, for reusability and to make it easier to test parts of your code.
Write test cases using the unittest module rather than doing it from the CLI every time.
There are some code improvements as well,
You could benefit from the collections.defaultdict module
An easier way to count the "1"s in the binary format is: str(bin(ele)[2:]).count("1")
You can benefit from list or dict comprehension, see PEP202
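A small sketch (the helper names here are made up for illustration) confirming that the digit-by-digit loop from the original code and the suggested one-liner count the same number of 1 bits:

```python
# Two equivalent ways to count the "1"s in a number's binary form.
# bin(n) returns e.g. '0b1011'; the '0b' prefix contains no '1' for
# positive n, so counting on the full string is also safe.

def count_ones_loop(n):
    # digit-by-digit loop, as in the original code
    return sum(1 for digit in '{0:b}'.format(n) if digit == '1')

def count_ones_str(n):
    # the reviewer's suggested one-liner
    return str(bin(n)[2:]).count("1")

for n in [6, 2, 11, 1, 9, 14, 13, 4, 18]:
    assert count_ones_loop(n) == count_ones_str(n) == bin(n).count("1")
print("all counts agree")
```

(Python 3.10+ also offers `int.bit_count()` for the same purpose.)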
Alternative
import unittest
from collections import defaultdict
def smartest_set(array):
smart_sets = defaultdict(list)
for element in array:
num_of_ones = str(bin(element)[2:]).count("1")
smart_sets[num_of_ones].append(element)
# If you'd really want the output be printed, you can add a print statement in the function
result = [min(e) for e in smart_sets.values()]
print(result)
return result
class Test(unittest.TestCase):
def test_smartest_set(self):
array = [6, 2, 11, 1, 9, 14, 13, 4, 18]
expected = [1, 6, 11]
self.assertEqual(expected, smartest_set(array))
# More tests here...
if __name__ == "__main__":
unittest.main()
Or a slightly faster approach storing the minimum value of the counts only
def smartest_set_2(array):
    smarts = {}
    for ele in array:
        num_of_ones = str(bin(ele)[2:]).count("1")
        smarts[num_of_ones] = min(ele, smarts.get(num_of_ones, float('inf')))
    return smarts.values() | {
"domain": "codereview.stackexchange",
"id": 31072,
"tags": "python, programming-challenge, array"
} |
Would someone versed in relative motion consider it more accurate to say that a car slammed into, or collided with, a wall? | Question: I've read that it's just as correct to think of A moving toward B (i.e. A doesn't move, but B does) as it is to think about B as moving toward A (again, A doesn't move, but B does) or as the two moving toward each other.
If I imagine a universe in which only two objects exist - then that seems like it would be the case. However, in that universe there isn't really a point of reference to say whether one object isn't moving.
Is there a useful and accurate way to think about these kinds of relations?
Thank you
-Hal.
Answer: Any point may be chosen as a reference point, as speed is always relative to something else.
Let's imagine I'm driving on the highway, going 10 mph relative to the car I'm overtaking. Unfortunately for me, the other car was moving at the speed limit (60 mph) and there was a speed camera, which gave me a ticket for going 70 mph relative to the camera.
If there were only 2 objects in a universe, the only things one could establish for sure are the speed of one object relative to the other (and vice versa), and the direction of movement.
One can't establish that both (or neither) objects are moving, because one can't define relative to what they are moving. And to describe speed, one needs a reference. | {
"domain": "physics.stackexchange",
"id": 15190,
"tags": "relative-motion"
} |
How to approach stationarity in Hamiltonian mechanics? | Question: The analogue of the action in Hamiltonian mechanics is
$$ S [ q, p] =\int_{t_1}^{t_2} [p_\alpha (t^\prime) \dot{q}_\alpha (t^\prime) - H (q_\alpha (t^\prime),p_\alpha (t^\prime),t^\prime)]d t^\prime. $$
How would one go about determining the conditions under which the action is stationary with respect to variations of the $q_\alpha, p_\alpha$? I assumed that you consider the variations $\delta q_\alpha$ and $\delta p_\alpha$ but I can't see how to deal with terms of the form $\delta p_\alpha \; \dot{q}_\alpha (t^\prime)$.
Answer: Assuming that $\delta p$, $\delta q$ are zero at $t=t_0, t_1$ so that we can ignore integrated out bits, we have
$$
\delta S= \int_{t_0}^{t_1} \left\{\left(- \frac{d p}{dt} -\frac{\partial H}{\partial q}\right)\delta q(t) +\left(\frac{dq}{dt}- \frac{\partial H}{\partial p}\right)\delta p(t)\right\}dt
$$
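The only nontrivial step in getting to this form is an integration by parts of the $p_\alpha\,\delta\dot{q}_\alpha$ term (a sketch, indices suppressed):
$$
\int_{t_0}^{t_1} p\,\delta\dot{q}\,dt = \Big[p\,\delta q\Big]_{t_0}^{t_1} - \int_{t_0}^{t_1} \dot{p}\,\delta q\,dt = -\int_{t_0}^{t_1} \dot{p}\,\delta q\,dt,
$$
where the boundary term vanishes because $\delta q$ is zero at the endpoints; the $\dot{q}\,\delta p$ term needs no such manipulation and is collected directly.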
so stationarity needs
$$
\frac{d p}{dt} =-\frac{\partial H}{\partial q}\\
\frac{d q}{dt} =+\frac{\partial H}{\partial p},
$$
i.e. Hamilton's equations. | {
"domain": "physics.stackexchange",
"id": 76098,
"tags": "homework-and-exercises, classical-mechanics, hamiltonian-formalism, variational-principle, variational-calculus"
} |
What are some good textbooks on material and crystal physics? | Question: I am looking for good physics textbooks on materials and crystal physics. I do not mind if they are a bit general, but they must cover useful topics. However, the specific topics I would like some suggestions for are piezoelectricity, ferroelectricity and ferromagnetism in crystals and materials.
I would like them to be as rigorous as possible, as I need the theoretical frameworks in which these phenomena occur rather than a focus on engineering/application.
Answer: As usual one should start with the fundamental Solid State books for which I recommend
Introduction to Solid State Physics - Kittel
Solid State Physics - Ashcroft & Mermin
Solid-State Physics: An Introduction to Principles of Materials Science -
Ibach & Lüth
For books concerning magnetism I can list some names, though I cannot say much about any of them because I did not read them thoroughly myself:
Fundamentals of Magnetism - Getzlaff
Magnetism, Basics and Applications - Stefanita
Magnetism, From Fundamentals to Nanoscale Dynamics - Stohr & Siegmann
Modern Theory of Magnetism in Metals and Alloys - Kakehashi
Quantum Theory of Magnetism - Nolting & Ramakanth
I also made a quick search for books on piezoelectricity on SpringerLink and here is the list. For these books I cannot say anything at all, since I did not even open their covers.
Piezoelectricity: Evolution and Future of a Technology - Heywang & Lubitz & Wersing
Special Topics in the Theory of Piezoelectricity - Yang
An Introduction to the Theory of Piezoelectricity - Yang
Advanced Mechanics of Piezoelectricity - Qin | {
"domain": "physics.stackexchange",
"id": 47685,
"tags": "resource-recommendations, material-science, crystals, ferromagnetism, piezoelectric"
} |
What does electrostatic energy stored in a region mean? | Question: Recently my teacher taught me about energy density (energy per unit volume) and told us the formula for the same.
This was explained by taking "Energy" as Energy of a parallel plate Capacitor and dividing it by the volume contained within the capacitor.
After this, I was shown how to integrate the energy density over the volume of a region to get the energy contained in that region.
Now I understand that separating two attracting charges requires energy, which is stored as the potential energy of the "capacitor plates".
But when assigned to a non material object, like energy in a "region" I can't understand what it means at all, nor can I form any intuition regarding this.
Please help me out if you know what this means.
Answer:
Now I understand that separating two attracting charges requires
energy, which is stored as the potential energy of the "capacitor
plates".
The energy is stored in the electric field between the capacitor plates.
But when assigned to a non material object, like energy in a "region"
I can't understand what it means at all, nor can I form any intuition
regarding this.
It isn't any "region". It is a region where a field exists. For the capacitor, the energy is stored in the electric field of the region. Similarly, in the case of gravity, the energy is stored in the gravitational field of the region.
Could you tell me what "Energy stored in an electric field" is? What is
its practical interpretation?
One definition of energy is it is "the capacity for doing work". An electric field (and gravitational field) has the capacity to do work.
Suppose you have an charged air capacitor, i.e., a capacitor having two plates separated by an air gap. Energy is stored in the electric field in the gap. As proof, if you could place an electron somewhere in the gap it would experience a force due to the electric field causing it to accelerate towards the positively charged plate. That electric force has done work on the electron giving it kinetic energy.
Again, suppose we have a capacitor that was charged by a battery and then the battery removed. There remains a voltage across the charged capacitor and energy stored in the electric field between the plates. If you now connect say a resistor across the capacitor terminals a current will flow causing heat dissipation in the resistor. Where did the energy come from to create the current and heat? From the electric field between the capacitor plates. Eventually the current will stop when the voltage falls to zero and all the energy stored in the electric field of the capacitor has been used. That initial stored energy was
$$E=\frac{CV^2}{2}$$
Where $V$ = the initial voltage of the fully charged capacitor and $C$ is the capacitance of the capacitor.
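As a numerical sanity check (the plate area, gap, and voltage below are arbitrary made-up values), $CV^2/2$ should equal the field energy density $\frac{1}{2}\varepsilon_0 E^2$ multiplied by the gap volume:

```python
# Energy stored in a parallel-plate air capacitor, computed two ways:
# (1) circuit form E = C V^2 / 2, (2) field form u = eps0 * E^2 / 2 times gap volume.
# Plate area, separation, and voltage are illustrative values.

eps0 = 8.854e-12   # vacuum permittivity, F/m
A = 0.01           # plate area, m^2 (assumed)
d = 1e-3           # plate separation, m (assumed)
V = 100.0          # voltage across plates, V (assumed)

C = eps0 * A / d                 # parallel-plate capacitance
E_circuit = 0.5 * C * V**2       # energy from the circuit formula

field = V / d                    # uniform field in the gap
u = 0.5 * eps0 * field**2        # energy density stored in the field
E_field = u * (A * d)            # density times gap volume

print(E_circuit, E_field)        # the two values agree
```

The agreement is exact (up to floating-point rounding), which is the point of the answer: the "region" carrying the energy is precisely the volume where the field exists.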
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 70684,
"tags": "electrostatics, energy, charge, potential-energy, capacitance"
} |
Is a corpse more flexible than a living person? | Question: I read that the majority of humans have the muscular-skeletal potential to perform the splits that you see many gymnasts perform. The reason a living person with no flexibility training can not achieve the splits is because the person's central nervous system (CNS) automatically constricts the muscle fibers of the legs when the legs move beyond the person's normal range of motion. I read that this automatic CNS response is called the stretch reflex, and it is the CNS's way to protect the body from performing potentially dangerous movements.
So my question is: does this mean a body with a non-functioning CNS is very flexible? For example, if I could not perform splits while I was alive, could someone position my corpse into the splits before the event of rigor mortis?
Answer: According to the science in this article: healthunify.com/why-cant-everybody-cant-do-a-split/, not really. Flexibility such as splits are only possible by lengthening the skeletal muscle fibers (in the case of splits, this includes the hamstring and iliopsoas). Which means that splits aren't solely dependent on how your CNS functions.
Furthermore, with death eventually comes rigor mortis, where, due to the body's chemical changes after death (lack of ATP production, etc.: https://biologydictionary.net/rigor-mortis/), the body becomes rigid, which makes it even more difficult for a dead person to do splits than a live one without seriously damaging the dead person's body/muscles. | {
"domain": "biology.stackexchange",
"id": 10703,
"tags": "death, central-nervous-system"
} |
Deriving infinitesimal time dilation for arbitrary motion from Lorentz transformations | Question: I'm trying to derive the infinitesimal time dilation relation $dt = \gamma d\tau$, where $\tau$ is the proper time, $t$ the coordinate time, and $\gamma = (1-v(t)^2/c^2)^{-1/2}$ the time dependent Lorentz factor. The derivation is trivial if one starts by considering the invariant interval $ds^2$, but it should be possible to obtain the result considering only Lorentz transformations. So, in my approach I am using two different reference frames $(t,x)$ will denote an intertial laboratory frame while $(t',x')$ will be the set of all inertial frames momentarily coinciding with the observed particle, i.e. the rest frame of the particle. These frames are related by $$t' = \gamma \left(t-\frac{Vx}{c}\right),\quad x' = \gamma \left( x - V t\right),$$ where $V$ is some nonconstant (i.e. time dependent) parameter which is, hopefully, the velocity of the particle in the laboratory frame. Treating $x$, $t$ and $V$ as independent variables (for now) and taking the differential of the above relations, I obtain $$dt' = \gamma \left(dt-\frac{Vdx}{c}\right) - \frac{\gamma^3}{c^2}(x-Vt)dV,$$ and $$dx' = \gamma \left( dx - V dt\right) - \gamma^3 \left(t-\frac{Vx}{c}\right) dV.$$ Imposing either the definition of the rest frame $dx'=0$ or (what should be equivalent) $dx = Vdt$, the only way in which i obtain $dt = \gamma dt'$ is if $dV=0$. So, the derivation breaks badly at some point or I must be wrong in using some of the above equations. Which one is it?
Answer: The Lorentz transformation you used is a Lorentz transformation for a fixed (constant) $V$, the relative velocity between two frames, so by the construction $dV=0$. Frames with non-constant velocities are not "inertial" and they don't belong to special relativity. You may still measure the proper time along an accelerating world line but the coordinate system in which the accelerating observer would always have $x=0$ isn't inertial, so it's not good for a simple formulation of the laws of physics in special relativity. The proper time of an accelerating observer is simply obtained by dividing his world line to infinitesimal pieces and summing (integrating) their proper time. It can't be computed without an integral which is complicated in general. Because you haven't considered any integrals of complicated functions, you haven't been (correctly) calculating the proper time of an accelerating observer, so considering $dV\neq 0$ couldn't have served any useful purpose.
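The "divide the world line into infinitesimal pieces and integrate" prescription is easy to check numerically; below is a minimal sketch in units with $c=1$ (the velocity profile, step count, and function names are my own illustrative choices), verifying that a constant speed reproduces $\tau = T/\gamma$:

```python
# Proper time along a world line: tau = integral of sqrt(1 - v(t)^2) dt  (c = 1).
# For constant v this must reduce to tau = T / gamma, which we check with
# a simple midpoint-rule integration.
import math

def proper_time(v, t0, t1, steps=100000):
    dt = (t1 - t0) / steps
    tau = 0.0
    for i in range(steps):
        t = t0 + (i + 0.5) * dt           # midpoint of each time slice
        tau += math.sqrt(1.0 - v(t)**2) * dt
    return tau

T = 10.0
v_const = 0.6                              # constant speed, gamma = 1.25
tau = proper_time(lambda t: v_const, 0.0, T)
gamma = 1.0 / math.sqrt(1.0 - v_const**2)
print(tau, T / gamma)                      # should agree closely
```

The same integrator handles an accelerating world line (any `v(t)` with $|v|<1$), which is exactly the general case the answer says requires an integral.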
Moreover, the very idea to differentiate the Lorentz transformations is a bit redundant. Without a loss of generality, you could have additively shifted the coordinates so that $0=x=t=x'=t'$ around the point you're interested in. When you do so, $dx,dt,dx',dt'$ are really nothing else than $x,t,x',t'$ that are just assumed to be infinitely small. The time dilation is then simply derived by setting e.g. $x=0$ which reduces the first equation to $t'=\gamma t$. It's really the same thing as $dt'=\gamma \cdot dt$ when $t,t'$ are infinitesimal. The $\gamma$ factor is on the proper side of the equation because $x=0$ means that the unprimed coordinates are those for which $x=0$ is the world line of the moving object. That means that $t$ is the time measured in the rest frame of the moving object and indeed, we have $\tau_{\rm proper}=t=t'/\gamma$ where $t'$ is the time measured from some other frame. (Of course, the same calculation may be done passively and/or with the opposite interpretation of primed and unprimed systems – but one isn't allowed to mix these two approaches inconsistently.) | {
"domain": "physics.stackexchange",
"id": 6609,
"tags": "time, relativity"
} |
Problem compiling package 3d_navigation in electric | Question:
I'm using ros-electric on Ubuntu 10.04 and I tried to install 3d_navigation package.
I download the package by commands as follow:
svn co https://code.ros.org/svn/wg-ros-pkg/branches/trunk_diamondback/sandbox/3d_navigation/
svn co https://alufr-ros-pkg.googlecode.com/svn/trunk/octomap_mapping
svn co https://code.ros.org/svn/wg-ros-pkg/stacks/motion_planning_common/branches/arm_navigation_metrics
Then I ran rosmake on the folder and was told that pose_follower and sbpl could not be found. So I downloaded them with the following commands:
hg clone https://kforge.ros.org/navigation/experimental
svn co https://code.ros.org/svn/wg-ros-pkg/stacks/motion_planners/trunk/sbpl
Then I ran rosmake on 3d_navigation again, and this error message occurred:
[ rosmake ] Last 40 lines [ planning_models: 4.9 sec ] [ octomap_ros: 4.5 sec ] [ motion_planning_msgs: 1.4 sec ] [ 3 Active 155/167 Complete ]
{-------------------------------------------------------------------------------
[ 0%] Built target rospack_genmsg_libexe
make[3]: Entering directory `/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/build'
make[3]: Leaving directory `/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/build'
[ 0%] Built target rosbuild_precompile
make[3]: Entering directory `/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/build'
make[3]: Leaving directory `/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/build'
make[3]: Entering directory `/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/build'
[ 50%] Building CXX object CMakeFiles/planning_models.dir/src/kinematic_model.o
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp: In member function ‘bool planning_models::KinematicModel::addModelGroup(const planning_models::KinematicModel::GroupConfig&)’:
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:242: error: wrong number of template arguments (1, should be 3)
/usr/include/boost/detail/container_fwd.hpp:84: error: provided for ‘template<class Key, class Compare, class Allocator> struct std::set’
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:242: error: invalid type in declaration before ‘;’ token
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:250: error: request for member ‘insert’ in ‘joint_set’, which is of non-class type ‘int’
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:253: error: wrong number of template arguments (1, should be 3)
/usr/include/boost/detail/container_fwd.hpp:84: error: provided for ‘template<class Key, class Compare, class Allocator> struct std::set’
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:253: error: expected initializer before ‘it’
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:254: error: ‘it’ was not declared in this scope
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:254: error: request for member ‘end’ in ‘joint_set’, which is of non-class type ‘int’
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp: In member function ‘planning_models::KinematicModel::LinkModel* planning_models::KinematicModel::constructLinkModel(const urdf::Link*)’:
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:424: error: ‘ROS_ASSERT’ was not declared in this scope
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp: In member function ‘shapes::Shape* planning_models::KinematicModel::constructShape(const urdf::Geometry*)’:
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:449: error: ‘ROS_ASSERT’ was not declared in this scope
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp: In constructor ‘planning_models::KinematicModel::JointModelGroup::JointModelGroup(const std::string&, const std::vector<const planning_models::KinematicModel::JointModel*, std::allocator<const planning_models::KinematicModel::JointModel*> >&, const std::vector<const planning_models::KinematicModel::JointModel*, std::allocator<const planning_models::KinematicModel::JointModel*> >&, const planning_models::KinematicModel*)’:
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:1301: error: wrong number of template arguments (1, should be 3)
/usr/include/boost/detail/container_fwd.hpp:84: error: provided for ‘template<class Key, class Compare, class Allocator> struct std::set’
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:1301: error: invalid type in declaration before ‘;’ token
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:1305: error: request for member ‘insert’ in ‘group_links_set’, which is of non-class type ‘int’
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:1312: error: request for member ‘insert’ in ‘group_links_set’, which is of non-class type ‘int’
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:1318: error: wrong number of template arguments (1, should be 3)
/usr/include/boost/detail/container_fwd.hpp:84: error: provided for ‘template<class Key, class Compare, class Allocator> struct std::set’
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:1318: error: expected initializer before ‘it’
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:1319: error: ‘it’ was not declared in this scope
/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/src/kinematic_model.cpp:1319: error: request for member ‘end’ in ‘group_links_set’, which is of non-class type ‘int’
make[3]: *** [CMakeFiles/planning_models.dir/src/kinematic_model.o] Error 1
make[3]: Leaving directory `/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/build'
make[2]: *** [CMakeFiles/planning_models.dir/all] Error 2
make[2]: Leaving directory `/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/build'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/bcddivad/code/ros/bcddivad_3d_navigation/arm_navigation_metrics/planning_models/build'
-------------------------------------------------------------------------------}
How can I solve this?
Originally posted by bcddivad on ROS Answers with karma: 25 on 2012-04-22
Post score: 0
Answer:
The 3d_navigation wiki page clearly states that the instructions only work for diamondback; also see this related question and answers. A version for electric or fuerte is currently being worked on. You could also try what is available at https://kforge.ros.org/Sushi/trac (but there's no official release and support yet).
In any case, get rid of the arm_navigation branch you installed (third line from your checkouts) as that is not needed for electric, and be prepared to fix some code yourself since a few things changed.
Originally posted by AHornung with karma: 5904 on 2012-04-22
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 9075,
"tags": "ros, 3d-navigation, ros-electric, rosmake, sbpl"
} |
Phonons vs Normal mode | Question: What is the difference between Phonons and Normal mode? My professor told me that they are the same and that one can get the other from some derivation (I'm a bit unsure if he really said the 'derivation part' because he said more details around that which I did not fully grasp). If it is the case that there is no difference between Phonons and Normal mode then how come there exist two very different words for the same thing?
Below I will make some reference to what I have read that to me makes it seem that there is no difference between Phonons and Normal mode:
"The quanta of lattice vibrations are known as phonons" - Introductory to solid state physics Second Edition by H.P. Myers $\tag{1}$
"A normal mode of an oscillating system is a pattern of motion in which all parts of the system move sinusoidally with the same frequency and with a fixed phase relation." - Wikipedia $\tag{2}$
Related posts:
phonons-and-modes
what-is-the-difference-between-normal-mode-and-just-mode
difference-between-mechanical-modes-and-phonons - this one almost answers my question but then I emphasize on "If it is the case that there is no difference between Phonons and Normal mode then how come there exist two very different words for the same thing?"
Answer: Normal mode describes the shape of the vibration pattern — how the system oscillates, and with what frequency. Phonons describe another aspect of the vibration: amplitude. In particular, in a coherent state mean number of phonons in a given normal mode is proportional to squared amplitude of vibration of this normal mode. | {
"domain": "physics.stackexchange",
"id": 82601,
"tags": "solid-state-physics, phonons, normal-modes"
} |
Ranking skills depending on similarity | Question: I need to rank human skills depending on their similarity to the input skill. So if I enter "Dutch language", I want the list like this:
0.97 Dutch
0.86 Dutch lessons
0.55 Frisian
0.50 Flemish
0.27 German language
I have a database of around 4500 human skills (ranging from "programming in C" to "baking almond cake") with 600 manually categorized. I can already find corresponding article on BabelNet and pull domain, categories and related terms.
Example skill with the data from BabelNet:
name:"photography"
categories:
0:"Photography"
1:"French_inventions"
2:"Optics"
3:"1822_introductions"
manualCategory:"art & music"
domains:
ART_ARCHITECTURE_AND_ARCHAEOLOGY:1
compounds:
0:"digital_photography"
1:"landscape_photography"
2:"photographic_developing"
3:"motion_photography"
4:"nature_photography"
...
48:"photographic_plates"
otherForms:
0:"still_photography"
1:"photo"
2:"photos"
3:"photographed"
4:"photographers"
...
20:"Photographer"
Can you suggest me the approach or at least steer in the right direction?
Answer: Pretty late but I'm surprised this wasn't answered more. "Cosine similarity" is a great technique to try, though simply letting users search with a hard string and then ranking by popularity isn't so bad (e.g. "dutch" brings up everything with "dutch" in it, though I would discard mid-word matches, so "ball" wouldn't return "football", but would return "ball room dancing").
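A rough sketch of the cosine-similarity idea (the skill strings and the character-trigram featurization below are my own illustrative choices, not part of the original data): represent each skill as a vector of trigram counts and rank candidates by cosine against the query.

```python
# Cosine similarity between skill names using character trigram counts.
# Trigrams are a cheap way to make "Dutch" and "Dutch lessons" overlap
# without exact string matching; the skill list is illustrative.
import math
from collections import Counter

def trigrams(text):
    text = text.lower()
    return Counter(text[i:i+3] for i in range(len(text) - 2))

def cosine(a, b):
    va, vb = trigrams(a), trigrams(b)
    dot = sum(va[g] * vb[g] for g in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

query = "dutch"
skills = ["Dutch", "Dutch lessons", "German language", "baking almond cake"]
ranked = sorted(skills, key=lambda s: cosine(query, s), reverse=True)
print(ranked)   # most similar skills first
```

A production version would add the BabelNet categories, compounds, and other forms as extra vector dimensions, so that semantically related skills overlap even when their surface strings do not.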
I'd say that in any approach a main issue will be deduplicating previously entered skills input by users that weren't quite standardized. You could also try replacing the candidate skills with versions that have different synonyms substituted at search time, e.g. "soccer coaching" might be stored also as "football coaching" if most of your content is from Europeans.
Sometimes extreme accuracy might not be the best goal, though... You may want to encourage users to explore new skills that they never knew existed! Not sure what your needs are...
Whatever you settle on, it might be worth building a semi-hand-crafted test set of queries and relevant results so that you can see if the performance is terrible (Google precision and recall in the context of search results). | {
"domain": "datascience.stackexchange",
"id": 937,
"tags": "machine-learning, classification, nlp, text-mining, similarity"
} |
custom rosdep rules | Question:
I am migrating to fuerte and I am having problem with rosdep 2. We have a bunch of packages and we used to have a bunch of rosdep rules to support our packages. I am wondering what I am supposed to do with rosdep2?
I tried creating /etc/ros/rosdep/sources.list.d/30-myrosdep.list with a line such as: "yaml file://path/to/my/rosdep.yaml", but it was rejected.
Originally posted by brice rebsamen on ROS Answers with karma: 1001 on 2012-12-17
Post score: 1
Answer:
Hi Brice,
the URI needs a hostname, so it should look like this:
file://localhost/home/martin/code/rosdistro/rosdep/base.yaml
The instructions from the official documentation tell you to first push to your forked github repo and then directly use your github links, but I prefer testing my rules locally (using a set of rules like the one above) before pushing to github.
You should still create a github account, fork the repository etc. (as explained in the page I linked to) and edit the forked files. Your sources.list.d entry should point to those files. That way, when you're satisfied, you can submit a pull request so everybody can use your rosdep rules.
Originally posted by Martin Günther with karma: 11816 on 2012-12-17
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by dornhege on 2012-12-18:
It might also be a good idea to check if there already is a rule in rosdep. I found many packages actually already in there.
Comment by joq on 2012-12-18:
An additional slash should also work: file:///home/martin/code/rosdistro/rosdep/base.yaml.
Comment by Martin Günther on 2012-12-18:
@joq: It should, but it doesn't. Rosdep insists on a hostname even for file:// schemes; this is a bug in rosdep. | {
"domain": "robotics.stackexchange",
"id": 12138,
"tags": "rosdep"
} |
What is the correct formulation of momentum balance for a body of continuum? | Question: What is the correct form of the momentum balance equation for a continuum body $\mathscr{B}$, whose set of particles is fixed and which occupies volume $V(t)$ at time $t$?
\begin{align}
&\frac{\mathrm{d}}{\mathrm{d}t}\int_\mathscr{B} u_i\,\mathrm{d}m = F_i
\end{align}or\begin{align}
&\frac{\mathrm{d}}{\mathrm{d}t}\int_{V(t)} u_i\,\rho\,\mathrm{d}V = F_i
\end{align}?
where $u_i$ is velocity, and $F_i$ is the applied force to $\mathscr{B}$.
It is strange that the two forms seem to be correct, although $\mathscr{B}$ does not change with time, while $V(t)$ does.
Answer: This should just be a matter of notation and both formulations are equal. (Perhaps you give a reference to where that notation is from.) Whether you write down the mass element as $\text{d}m$ or $\rho\, \text{d}V$, whether you parametrize the body by points in a manifold (which is my guess as to what $\mathcal{B}$ refers to) or via the volume $V\left(t\right)$ it occupies at time $t$ (this should be the image of what Truesdell and Noll call a configuration of the body $\mathcal{B}$ (a smooth homeomorphism into $\mathbb{R}^3$)) - as long as you know what you are doing with this notation both are fine. | {
"domain": "physics.stackexchange",
"id": 93307,
"tags": "momentum, differentiation, continuum-mechanics, navier-stokes, solid-mechanics"
} |
Rotating and Shifting image doesn't change FFT2 | Question: I have an image and its Fourier transform.
When I rotate it, its Fourier transform rotates too, but I can't figure it out. Why does this happen?
On the other hand, when I shift the image, its fourier transform doesn't change. As I know, time shifting means frequency shifting. Am Iwrong?
Answer: Time shift corresponds to a phase shift in frequency domain, not a frequency shift. Since you display only the magnitude of the 2D FFT before and after the shift, you do not observe the phase shift.
$$f(x) \Rightarrow F(w) \\ f(x-a) \Rightarrow e^{-j2\pi wa} F(w)
$$
Rotation in space corresponds to a rotation in frequency domain as you observe.
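This is easy to check numerically. Here is a small numpy sketch (the image is just random data) showing that a circular shift leaves the FFT magnitude untouched while the phase changes:

```python
import numpy as np

# A test image with some structure.
rng = np.random.default_rng(0)
img = rng.random((32, 32))

F = np.fft.fft2(img)
F_shifted = np.fft.fft2(np.roll(img, shift=(5, 3), axis=(0, 1)))

# A (circular) shift leaves the magnitude spectrum unchanged;
# only the phase picks up the factor e^{-j 2 pi w a}.
print(np.allclose(np.abs(F), np.abs(F_shifted)))      # True
print(np.allclose(np.angle(F), np.angle(F_shifted)))  # False
```

Displaying `np.abs(F)` before and after the shift therefore shows identical pictures, which is exactly the observation in the question.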
For proofs, see these notes on Some Properties of Fourier Transform | {
"domain": "dsp.stackexchange",
"id": 4326,
"tags": "image-processing, fourier-transform"
} |
Minimizing profile curvature of 2D image | Question: I have a 2D grayscale image representing terrain heights. I am searching
for a (fast) algorithm to minimize the profile curvature of this image at selected scales (in the scale-space sense).
Profile curvature is a function of the partial derivatives:
$$P_c = - \frac{(dx^2 \times dxx + 2 \times dx \times dy \times dxy + dy^2 \times dyy)}{((dx^2 + dy^2) \times (dx^2 + dy^2 + 1)^{1.5})}$$
and its geometrical meaning is nicely explained by this image:
I came up with a completely ad-hoc iterative approach to minimize curvature
at a specified scale, but I am a little bit lost when trying to apply this across
all scales (I am trying to utilize Gaussian / Laplacian image pyramids). It works by adjusting heights, adding a height offset according to the profile curvature at each point (the height offset is negative in convex areas and positive in concave areas).
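As a rough illustration, here is a minimal numpy sketch of the $P_c$ formula above and one step of this kind of height adjustment. The sign convention, step size, and the small epsilon guarding flat areas are assumptions, not the asker's exact code:

```python
import numpy as np

def profile_curvature(z):
    # Finite-difference sketch of the P_c formula; dx stands for
    # dz/dx, dxx for the second derivative in x, and so on.
    dy, dx = np.gradient(z)          # axis 0 = y, axis 1 = x
    dyy, _ = np.gradient(dy)
    dxy, dxx = np.gradient(dx)
    num = dx**2 * dxx + 2 * dx * dy * dxy + dy**2 * dyy
    den = (dx**2 + dy**2) * (dx**2 + dy**2 + 1) ** 1.5
    return -num / (den + 1e-12)      # epsilon avoids division by zero on flats

def flatten_step(z, rate=0.5):
    # One iteration of the ad-hoc adjustment: the offset is negative in
    # convex areas and positive in concave ones (rate is an assumption).
    return z + rate * profile_curvature(z)
```

Iterating `flatten_step` at one pyramid level corresponds to the single-scale procedure described above.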
Results for single scale are not completely hopeless though:
Original data:
Processed data:
The result of this procedure should be the formation of sharp V-shaped ridges
and valleys.
Is there any mathematically sound approach to solving this kind of problem?
Any tips are appreciated!
Answer:
I am in search for (fast) algorithm to minimize profile curvature of this image on selected scales (in scale-space sense).
...
... to minimize curvature at a specified scale, [...], trying to apply this across all scales (I am trying to utilize Gaussian / Laplacian image pyramids). It works by adjusting heights by adding a height offset according to the profile curvature at a given point (the height offset is negative in convex areas and positive in concave areas).
Modifications of curvature are basically modifications of a "signal's" dynamics.
In effect, what you are trying to say is "If the signal is rising (or falling) faster (or slower) than some threshold then...do something different". This "do something different" can be anything from "don't respond" to "respond slower" or "respond faster" and so on.
In one dimension, the Douglas-Peucker algorithm does more or less what you are after.
The algorithm takes only one parameter which you can relate to the scale of observation but essentially modifies the local slope so that it remains within certain limits.
Variants that do surface simplification, or mesh simplification exist too (or this, in the case of contours) which use some expression of "cost" or "stress" to decide which element to remove during the simplification.
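For the one-dimensional case, a minimal recursive sketch of the classic Douglas-Peucker simplification looks like this (`epsilon` is its single tolerance parameter, which is what you can relate to the scale of observation):

```python
import numpy as np

def douglas_peucker(points, epsilon):
    """Simplify a polyline, keeping points farther than `epsilon`
    from the chord between the current endpoints."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    d = end - start
    norm = np.hypot(d[0], d[1])
    if norm == 0.0:                      # degenerate chord: distance to a point
        dist = np.hypot(points[:, 0] - start[0], points[:, 1] - start[1])
    else:                                # perpendicular distance to the chord
        dist = np.abs(d[0] * (points[:, 1] - start[1])
                      - d[1] * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dist))
    if dist[idx] > epsilon:              # split at the worst offender, recurse
        left = douglas_peucker(points[: idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])       # everything within tolerance
```

With a large `epsilon` a noisy profile collapses to its endpoints; with a small one the significant ridge points survive, which is the V-shaped-ridge behaviour the question is after.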
If you are "tied" to a height map (i.e. implicit 3D data), then you are looking at removing data (i.e. decimation) and interpolating / filtering. For example, you can use your $P_c$ to "mark" the areas of high or low curving slope in one pass and compute the "fill-in" values of each patch on a second pass. For examples of these approaches, you might want to take a look at this or this or more generally this.
Hope this helps. | {
"domain": "dsp.stackexchange",
"id": 6028,
"tags": "image-processing"
} |
To find the equation of motion in $q$ co-ordinate when Hamiltonian is given and to find the set of Canonical Co-ordinates | Question:
Using the derivative of the Hamiltonian with respect to the momentum coordinate, the time derivative of the position coordinate is found, but there is still a $p$ in the result. How can it be eliminated?
Answer: General recipe:
Hamilton's equations give $\dot{p},\,\dot{q}$ as functions of $p,\,q$.
Differentiate $\dot{q}$ to get $\ddot{q}$ by the chain rule.
Rearrange the formula for $\dot{q}$ to give $p$ as a function of $q,\,\dot{q}$.
Use that to rewrite $\dot{p}$, then $\ddot{q}$. | {
"domain": "physics.stackexchange",
"id": 80985,
"tags": "classical-mechanics, coordinate-systems, hamiltonian-formalism, hamiltonian"
} |
Why does my blood taste like rust? | Question: I thought it is just me, but when I searched it on Google, it revealed that there are many people who experience this:
Why is it that whenever I taste my own blood, I always think it tastes like rust? I have never eaten or tasted rust, so how can I relate something to it?
Answer: What you are calling "taste" is actually produced by the olfactory sense – "smell." True taste, which requires contact of a substance with your tongue, is limited to sensations of sweetness, sourness, saltiness, bitterness, and umami. The chemistry of taste is certainly quite interesting, but anything beyond those "flavors" is produced by smell (via far more complex physiological processes). You have certainly smelled rust (or, more accurately, various oxidation states of iron that exist in blood, likely from wet or weathered steel). So you are, in fact, correctly associating the smell of iron oxides from two different sources: steel and hemoglobin. | {
"domain": "chemistry.stackexchange",
"id": 4112,
"tags": "everyday-chemistry"
} |
Does the growth in size of hippocampus of London taxi drivers affect their net number of brain cells? | Question: In the paper, London Taxi Drivers and Bus Drivers: A Structural MRI
and Neuropsychological Analysis, in the journal Hippocampus, the authors state that changes in the hippocampus of taxi drivers in London result from their specific occupation. The authors stated this was more pronounced than in bus drivers.
What accounts for the grey matter change? Do their brains have more brain cells than other people's? Or does the increase in size of the posterior hippocampus mean some other region of their brain must have fewer brain cells, to provide enough to grow the hippocampus?
Answer: The study you linked was very interesting, and there is a reason that the authors of this paper and many others like it refer to grey matter density or grey matter volume.
From another study on grey matter (density), Neurolinguistics: structural plasticity in the bilingual brain,
Whether grey-matter reorganization in this region is related to changes in neuropil, neuronal size, dendritic or axonal arborization will be revealed by methods other than whole-brain magnetic resonance imaging.
Meaning, there's really no way to know without autopsy and histological studies whether this means more neurons, increase in neuron size, more dendrites, more connections between neurons, etc. So, the answer to your question is, no one knows exactly.
Histopathological studies are usually done on diseased patients (e.g. Alzheimer's), not healthy taxi drivers.
However, one mouse study of loss of hippocampal grey matter did correlate MRI studies and histopathological studies, with the result that
no changes in the number or volumes of the somas of neurons, astrocytes or oligodendrocytes were detected. A loss of synaptic spine density of up to 60 % occurred on different-order dendrites in the ACC and hippocampus...
Loss is not the same as gain, however, so the definitive cause cannot be ascertained yet.
Stress-Induced Grey Matter Loss Determined by MRI Is Primarily Due to Loss of Dendrites and Their Synapses | {
"domain": "biology.stackexchange",
"id": 9420,
"tags": "neuroplasticity"
} |
Basic question of General relativity about covariant derivative | Question: I was reading the book of Wald on General relativity. And in the page number (33) he derives the equation for the action of $\nabla_{a}$ over a tensor of rank $(k,l)$.
This is the equation (3.1.14):
$$\nabla_{a} T^{b_{1}...b_{k}}{}_{c_{1}...c_{l}}= \tilde{\nabla}_{a} T^{b_{1}...b_{k}}{}_{c_{1}...c_{l}}
+\sum_{i}C^{b_{i}}_{ad}T^{b_{1}...d...b_{k}}{}_{c_{1}...c_{l}} -\sum_{j}C^{d}_{ac_{j}}T^{b_{1}...b_{k}}{}_{c_{1}...d...c_{l}}.
$$
On the next page he says that the most important application of equation (3.1.14) arises in the case where $\tilde{\nabla}_{a}$ is the ordinary derivative operator $\partial_{a}$; in that case $C^{c}_{ab}$ is equal to the Christoffel symbol.
My question is: what happens if I choose $\nabla_{a}=\partial_{a}$? In that case, is $C^{c}_{ab}$ also equal to the Christoffel symbols?
Answer: $\nabla_a$ is a covariant derivative; in this context $\tilde\nabla_a$ is used in place of $\partial_a$. The $C$ coefficients are the connection coefficients, and another name for them is Christoffel symbols. If you take the partial derivative $\partial_a$ of the $(k,l)$ tensor instead of the covariant derivative, i.e. set $\nabla_a=\partial_a$ as you say, the derivative is no longer covariant and the connection terms/Christoffel symbols in the identity disappear.
Or, more precisely, choosing $\nabla_a=\partial_a$ means replacing the covariant derivative of the tensor with the ordinary partial derivative. In the ordinary derivative there is no notion of the Christoffel symbols; in your context they are zero, ${C^c_{ab}}=0$.
After replacing $\nabla_a$ with $\partial_a$:
$$\nabla_{a} T^{b_{1}...b_{k}}{}_{c_{1}...c_{l}}=\partial_{a} T^{b_{1}...b_{k}}{}_{c_{1}...c_{l}}=
\tilde{\nabla}_{a} T^{b_{1}...b_{k}}{}_{c_{1}...c_{l}}.
$$ | {
"domain": "physics.stackexchange",
"id": 66851,
"tags": "general-relativity, differential-geometry, differentiation"
} |
Noetic python raw Image publisher (from video) | Question:
Hi everyone,
I have spent nearly 2 full days with this problem now and have some extensive research, but just can't figure it out.
My apologies for being a burden; I usually hesitate to waste anyone's time. Before asking any questions I give it a good amount of effort.
I'm attempting to read a video stream from a file with opencv and then publish every single frame on a topic.
The messages should be of type sensor_msgs/Image.
Once I have this completed, I will write a subscriber on a remote host, but I have to get the publisher done first, which is where I'm struggling.
I have tried the below program, which executes without errors until I go into rviz and add a new item by topic; that's when the application crashes.
I'm not sure at this point if the publisher works and the problem is on rviz side, or if the publisher is not properly set-up.
PLEASE help!
ROS noetic on Ubuntu 20.04 (native, not in a VM)
#! /usr/bin/env python
import rospy
import cv2
from std_msgs.msg import String
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
import sys
VERBOSE=True
bridge = CvBridge()
def main(args):
rospy.init_node('video_stream', anonymous=True)
image_publisher = rospy.Publisher("/output/image_raw/stream", Image, queue_size=10)
video_capture = cv2.VideoCapture('../video/ros.mp4')
width = video_capture.get(3) # float `width`
height = video_capture.get(4) # float `height`
if VERBOSE:
print("initiated node and publisher")
while not rospy.is_shutdown():
try:
ret, frame = video_capture.read()
#cv_image = bridge.imgmsg_to_cv2(frame, "bgr8")
vmsg = Image
vmsg.header = rospy.Time.now()
vmsg.height = height
vmsg.width = width
vmsg.encoding = "rgb8"
vmsg.is_bigendian = True
vmsg.step = 3
vmsg.data = frame
# Publish new image
image_publisher.publish(vmsg)
except KeyboardInterrupt:
break
print("Shutting down")
# </while>
cv2.destroyAllWindows()
return 0
if __name__ == '__main__':
try:
main(sys.argv)
except KeyboardInterrupt:
print("Shutting down")
Here is the error output I get when connecting it in rviz:
publish_video.py
initiated node and publisher
Traceback (most recent call last):
File "./publish_video.py", line 58, in <module>
main(sys.argv)
File "./publish_video.py", line 44, in main
image_publisher.publish(vmsg)
File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 882, in publish
self.impl.publish(data)
File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 1066, in publish
serialize_message(b, self.seq, message)
File "/opt/ros/noetic/lib/python3/dist-packages/rospy/msg.py", line 152, in serialize_message
msg.serialize(b)
TypeError: serialize() missing 1 required positional argument: 'buff'
Originally posted by USAotearoa on ROS Answers with karma: 78 on 2021-12-15
Post score: 1
Original comments
Comment by ljaniec on 2021-12-15:
Can you first listen to a topic in a terminal first to see if the first few Image messages are structured correctly? Shouldn't the line with vmsg = Image have a () on the end (for calling the constructor)?
Answer:
Figured it out
#!/usr/bin/env python3
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2
def publish_message():
# Node is publishing to the video_frames topic using
# the message type Image
pub = rospy.Publisher('video_frames', Image, queue_size=10)
# Tells rospy the name of the node.
# Anonymous = True makes sure the node has a unique name. Random
# numbers are added to the end of the name.
rospy.init_node('video_pub_py', anonymous=True)
# Go through the loop 10 times per second
rate = rospy.Rate(10) # 10hz
# Create a VideoCapture object
# The argument '0' gets the default webcam.
#cap = cv2.VideoCapture(0)
cap = cv2.VideoCapture('../video/ros.mp4')
# Used to convert between ROS and OpenCV images
br = CvBridge()
# While ROS is still running.
while not rospy.is_shutdown():
# Capture frame-by-frame
# This method returns True/False as well
# as the video frame.
ret, frame = cap.read()
if ret == True:
# Print debugging information to the terminal
rospy.loginfo('publishing video frame')
# Publish the image.
# The 'cv2_to_imgmsg' method converts an OpenCV
# image to a ROS image message
pub.publish(br.cv2_to_imgmsg(frame))
# Sleep just enough to maintain the desired rate
rate.sleep()
if __name__ == '__main__':
try:
publish_message()
except rospy.ROSInterruptException:
pass
Originally posted by USAotearoa with karma: 78 on 2021-12-15
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by osilva on 2021-12-15:
Thank you for your efforts and posting the answers so others can benefit.
Comment by osilva on 2021-12-15:
Also you can accept your answer by clicking on the check mark.
Comment by fegaeze on 2022-07-25:
Had to sign up to comment! This was useful in solving my problem. Thank you. | {
"domain": "robotics.stackexchange",
"id": 37253,
"tags": "ros, opencv, image, publish"
} |
Temperature effect on metal implant | Question: How much would a metal implant (a rod inside the tibia bone) be affected inside the body by the temperature outside the body? How would I find out the temperature of the implant, considering that homeostasis keeps the body at 37 °C? The metal I'm thinking about is a shape memory alloy called nickel-titanium. Thanks in advance.
Answer: Your body keeps metal implants at body temperature. An external temperature change severe enough to penetrate to the implant would require your body to, in effect, shut down. In the absence of such dramatic temperature changes, the nickel titanium will be maintained at the temperature of the surrounding bone. Its advantage over other metals is that it is slightly elastic, like bone, and acts like bone when there is strain as a result of stress that deforms the bone.
Although nickel titanium alloy (nitinol) has been used for many bio-medical purposes, nickel allergy may appear in humans who have been sensitized to the metal through repeated contact. This propensity is controlled by passivation, a process that isolates corrosive substances from the surface of the biomedical prosthesis or device. The surface oxidation layer created by passivation should be robust enough to avoid micro cracks during flexing that might expose the nickel in nitinol to your body. | {
"domain": "physics.stackexchange",
"id": 92738,
"tags": "material-science"
} |
Wick Rotation in Curved space | Question: So over time I have learned to do exhaustive searches before asking things here.
Wick rotations are useful if you are working in QFT and want to make statements about the thermodynamics of some physical thing you are probing. You want to talk about thermodynamics, and maybe statistics, and like most people you would think: oh, a Wick rotation is the tool for that. Now that we are in curved space, do you just throw this tactic in the dust bin? And yes, I have read up a few things, but it seems I need a proper pointer to where to find how to attack this thing properly. Just really curious.
Answer: Matt Visser's How to Wick rotate generic curved spacetime is a great reference on this subject, which basically summarizes a lot of folklore on the subject.
Addendum (Summary of Paper). This turns out to be an important problem in quantum gravity and QFT in curved spacetime for the obvious reason ("How do we know the usual tricks still work in curved spacetime?").
Visser re-frames the Wick rotation in a more coordinate independent way (the naive $t\mapsto it$ prescription gives incorrect solutions even for de Sitter spacetime).
The more general Wick rotation analytically continues the metric while leaving the local coordinate charts invariant. When we restrict our attention back to flat spacetime, this approach recovers the usual QFT-textbook prescription for Wick rotations.
What does it look like? Well, suppose $g_{L}(-,-)$ is the metric tensor (using MTW-style notation), and $V$ is a non-vanishing timelike vector field. So in $-+++$ signature, $g_{L}(V,V)<0$. The Wick rotation amounts to swapping out this Lorentzian metric for:
$$\tag{1}g_{\epsilon} = g_{L} + i\epsilon\frac{V\otimes V}{g_{L}(V,V)}$$
and using this $g_{\epsilon}(-,-)$ metric everywhere instead.
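As a tiny numerical sanity check of Eq (1) in the flat-space case (with an assumed small regulator $\epsilon$):

```python
import numpy as np

eps = 1e-3
g_L = np.diag([-1.0, 1.0, 1.0, 1.0])   # flat metric in -+++ signature
V = np.array([1.0, 0.0, 0.0, 0.0])     # non-vanishing timelike vector field

gVV = V @ g_L @ V                       # g_L(V, V) = -1 < 0, as required
g_eps = g_L + 1j * eps * np.outer(V, V) / gVV

# Only the tt component is deformed: g_eps = diag(-1 - i*eps, 1, 1, 1),
# i.e. the usual i*eps regulator of flat-space propagators.
print(np.diag(g_eps))
```

This makes explicit that the prescription touches only the component of the metric along $V$.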
How do we recover the usual Wick rotation? Well, use flat spacetime, so $g_{L}\mapsto\eta_{L}$ and $V\mapsto(1,0,0,0)$. Then the propagator for the scalar field, for example, becomes
$$\tag{2}\Delta_{F}(P) = \frac{-i}{\eta_{\epsilon}(P,P)+m^{2}}$$
where $\eta_{\epsilon}$ is the Wick-rotated metric tensor for flat spacetime. Eq (2) is precisely what you'd find in any generic textbook on QFT. So, good, this generalized procedure --- i.e., Eq (1) --- recovers the usual results we want. | {
"domain": "physics.stackexchange",
"id": 15688,
"tags": "general-relativity, mathematical-physics, differential-geometry, wick-rotation, qft-in-curved-spacetime"
} |
On dual nature of light? | Question: It is said that light exhibits a dual nature: in some instances as particles (photons) and in other instances as electromagnetic waves. How do we detect which nature light is exhibiting, and why is only one nature exhibited at a time? Why can't there be photons moving in a wave-like motion? The Michelson-Morley experiment tried to refute the existence of the ether, but don't all waves require a medium to propagate?
Answer:
How do we detect which nature light is exhibiting, and why is only one nature exhibited at a time?
We perform an experiment and make some observations. We then see whether those observations show wave-like or particle-like behavior of the light involved.
For example, when we see interference or diffraction effects, we call those wave-like behaviors. When we see that only discrete amounts of energy can be absorbed from or emitted into the electromagnetic field, we call that a particle-like behavior.
But really, light doesn't sometimes behave like a particle and sometimes behave like a wave. It always behaves like light. It's only that some of the behaviors of light are analogous to the behavior of particles in classical physics and others are analogous to the behavior of waves in classical physics.
why can't there be photons moving in a wave-like motion?
Because we observe behavior (for example diffraction) that isn't consistent with this model.
don't all waves require a medium to propagate?
This just isn't true. Many of the waves that pre-1900 physicists were familiar with (for example, sound) propagated through physical media.
But that doesn't mean it's not possible for some other type of wave (light) to propagate without a medium. | {
"domain": "physics.stackexchange",
"id": 62299,
"tags": "visible-light, electromagnetic-radiation, wave-particle-duality, aether"
} |
Spell Checker program using dictionary implementation | Question: This is a small program written for a small assignment to check spelling. The assignment asks for the following output:
A list of words that are misspelled in the file
A count of the whitespace delimited words contained in the file
For all words that appear in the file, a count of the number of times they appear. This should be in alphabetical order
A list of the top five words in terms of frequency of appearance from the file.
My program runs successfully. I just want to get some suggestions to improve it.
import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.Scanner;
public class SpellChecker {
private final HashDict dict;
private final HashDict wordFile;
final static String dictionary = "dict.txt";
final static String file = ("big_flat_file.txt");
/**
* Constructor of spellChecker
*/
public SpellChecker() {
dict = new HashDict<>();
wordFile = new HashDict<>();
read(dictionary);
}
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
SpellChecker checker = new SpellChecker();
int wordCount = checker.count(file);
System.out.println("The file contains following misspelled words: ");
checker.spellCheck();
System.out.println("The file contains " + wordCount + " words in total");
System.out.println();
System.out.println("The frequency of all words are listed below: ");
checker.wordFreq();
}
/**
* read dictionary into a hashed dictionary
*
* @param fileName the file of dictionary
*/
public void read(String fileName) {
File theFile = new File(fileName);
try {
Scanner reader = new Scanner(theFile);
while (reader.hasNext()) {
String input = reader.next();
dict.add(input, 0);
}
} catch (FileNotFoundException e) {
System.out.print("file not found");
}
}
/**
* add every word into a hashed dictionary as the key, its frequency as
* value, and count the total words
*
* @param fileName a given .txt file
* @return an integer of total words in the file
*/
public int count(String fileName) {
File theFile = new File(fileName);
int totalCount = 0;
try {
Scanner sc = new Scanner(theFile);
while (sc.hasNext()) {
int freq = 0;
String word = sc.next().replaceAll("[^A-Za-z]+", "").toLowerCase();
totalCount++;
if (wordFile.contains(word)) {
freq = (int) wordFile.getValue(word.toLowerCase());
}
freq++;
wordFile.add(word, freq);
}
} catch (FileNotFoundException ex) {
}
return totalCount;
}
/**
* Check every word in the file to see if it is misspelled by comparing it
* with the dictionary. Ignore all the single letters. Print out the word
* that is not contained in the dictionary
*/
public void spellCheck() {
Iterator traverse = wordFile.getKeyIterator();
while (traverse.hasNext()) {
String e = (String) traverse.next();
if (!e.matches("[A-Za-z]{1}")) {
if (!dict.contains(e)) {
System.out.println(e);
}
}
}
}
/**
* Count the time of all words appear in the file, and list them in
* alphabetical order.
* List top five words that appear most.
*/
public void wordFreq() {
Iterator traverse = wordFile.getKeyIterator();
ArrayList<String> list = new ArrayList<>();
int top1 = 0, top2 = 0, top3 = 0, top4 = 0, top5 = 0;
int top1Index = 0, top2Index = 0, top3Index = 0, top4Index = 0,
top5Index = 0;
while (traverse.hasNext()) {
String e = (String) traverse.next();
list.add(e);
Collections.sort(list, String.CASE_INSENSITIVE_ORDER);
}
// find the top 5 words that appear most frequent
for (int i = 0; i < list.size(); i++) {
System.out.println(list.get(i) + " " + wordFile.getValue(list.get(i)));
int freq = (int) wordFile.getValue(list.get(i));
if (freq > top1) {
top1 = freq;
top1Index = i;
} else if (freq > top2) {
top2 = freq;
top2Index = i;
} else if (freq > top3) {
top3 = freq;
top3Index = i;
} else if (freq > top4) {
top4 = freq;
top4Index = i;
} else if (freq > top5) {
top5 = freq;
top5Index = i;
}
}
System.out.println();
System.out.println("The top 5 frequent used words in the file are");
System.out.println(list.get(top1Index) + " " + top1);
System.out.println(list.get(top2Index) + " " + top2);
System.out.println(list.get(top3Index) + " " + top3);
System.out.println(list.get(top4Index) + " " + top4);
System.out.println(list.get(top5Index) + " " + top5);
}
}
The HashDict is my own written chained dictionary and is checked by my instructor. I am not going to put it here, but I will if somebody asks for it.
----------------------------------update code--------------------------------------
Following are my updated code based on suggestions I've got.
import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.Scanner;
public class SpellChecker {
private final HashDict dict;
private final HashDict wordFile;
final static String dictionary = "dict.txt";
/**
* Constructor of spellChecker
*/
public SpellChecker() {
dict = new HashDict<>();
wordFile = new HashDict<>();
readDict(dictionary);
}
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
Scanner in = new Scanner(System.in);
System.out.println("What is the filename?");
String inputFileName = in.nextLine();
SpellChecker checker = new SpellChecker();
int wordCount = checker.loadFileAndCount(inputFileName);
checker.printMisspelledWords();
System.out.println();
System.out.println("The file contains " + wordCount + " words in total");
System.out.println();
checker.wordFreq();
}
/**
* readDict dictionary into a hashed dictionary
*
* @param fileName the file of dictionary
*/
public void readDict(String fileName) {
File theFile = new File(fileName);
try {
Scanner reader = new Scanner(theFile);
while (reader.hasNext()) {
String input = reader.next();
dict.add(input, 0);
}
} catch (FileNotFoundException e) {
System.out.print("file not found");
}
}
/**
* add every word into a hashed dictionary as the key, its frequency as
* value, and loadFileAndCount the total words
*
* @param fileName a given .txt file
* @return an integer of total words in the file
*/
public int loadFileAndCount(String fileName) {
File theFile = new File(fileName);
int totalCount = 0;
try {
Scanner sc = new Scanner(theFile);
while (sc.hasNext()) {
int freq = 0;
String word = sc.next().replaceAll("[^A-Za-z]+", "").toLowerCase();
totalCount++;
if (wordFile.contains(word)) {
freq = (int) wordFile.getValue(word.toLowerCase());
}
freq++;
wordFile.add(word, freq);
}
} catch (FileNotFoundException ex) {
System.out.println("File not found.");
}
return totalCount;
}
/**
* Check every word in the file to see if it is misspelled by comparing it
* with the dictionary. Ignore all the single letters. Print out the word
* that is not contained in the dictionary
*/
public void printMisspelledWords() {
Iterator traverse = wordFile.getKeyIterator();
System.out.println("The file contains following misspelled words: ");
while (traverse.hasNext()) {
String e = (String) traverse.next();
if (!(e.length() == 1 && Character.isLetter(e.toCharArray()[0]))) {
if (!dict.contains(e)) {
System.out.println(e);
}
}
}
}
/**
* Count the time of all words appear in the file, and list them in
* alphabetical order. List top five words that appear most.
*/
public void wordFreq() {
Iterator traverse = wordFile.getKeyIterator();
ArrayList<String> list = new ArrayList<>();
int[] topFreq = new int[5];
int[] topFreqIndex = new int[5];
while (traverse.hasNext()) {
String e = (String) traverse.next();
list.add(e);
}
Collections.sort(list, String.CASE_INSENSITIVE_ORDER);
System.out.println("The frequency of all words are listed below: ");
// print out all words and their frequencies, and
// find the top 5 words that appear most frequent
for (int i = 0; i < list.size(); i++) {
int freq = (int) wordFile.getValue(list.get(i));
System.out.println(list.get(i) + " " + freq);
for (int m = topFreq.length - 1; m >= 0; m--) {
if (freq > topFreq[m]) {
if (m == topFreq.length - 1) {
topFreq[m] = freq;
topFreqIndex[m] = i;
} else {
int tempFreq = topFreq[m];
int tempFreqIndex = topFreqIndex[m];
topFreq[m] = freq;
topFreqIndex[m] = i;
topFreq[m + 1] = tempFreq;
topFreqIndex[m + 1] = tempFreqIndex;
}
}
}
}
System.out.println();
System.out.println("The top 5 frequent used words in the file are");
for (int m = 0; m < topFreq.length; m++) {
System.out.println(list.get(topFreqIndex[m]) + " " + topFreq[m]);
}
}
}
Most of the changes are in the wordFreq method.
The output is like this:
run:
What is the filename?
big_flat_file.txt
The file contains following misspelled words:
// all misspelled words
The file contains 6629 words in total
The frequency of all words are listed below:
// all words and their frequency. TL
The top 5 frequent used words in the file are
of 262
and 248
to 246
the 183
167
One thing that concerns me is that the last top frequent word is a blank space. The blank space also appears in the word count, so it is not the topFreq method's problem. I am not sure why it is counted as a word.
Answer: I don't know Java well, so I'll just be writing about more general ideas.
Zero One Infinity Rule: This is a guideline which says: "Allow none of foo, one of foo, or any number of foo." This applies in your wordFreq() method. When you have a lot of variables like top1, top2, etc. you're giving yourself a lot more opportunity to make a typo, and there's more code you have to change if you want the top 10 instead of the top 5. Instead, store these in an array (or ArrayList, or whatever the most appropriate container is)
Bug: I'm not convinced wordFreq is correct. I'm unsure of the ethics or the CodeReview policy on describing bugs for homework assignments, so I will be cautious and say nothing more at this time.
Performance: Look at
while (traverse.hasNext()) {
String e = (String) traverse.next();
list.add(e);
Collections.sort(list, String.CASE_INSENSITIVE_ORDER);
}
First, why do you need this to be sorted? Second, I suspect resorting the list every time you add a new element is wasteful. Of course, it will have no noticeable impact in a program this small, but it is good to be mindful of algorithmic inefficiencies.
I/O: To improve flexibility, avoid hard-coding pathnames. It would be better to accept them as command-line arguments. You could use the hard-coded names as default values if the user doesn't supply an argument. This is helpful if, for example, you want to use a script to run your program against a lot of input files as a test of some sort.
Error handling: If you are going to hardcode file names, it would be good to make that clear in your exception text. Right now if it can't find the dictionary, I will just see the message "file not found", and I have no way to know what you want without opening the code. Including the name of the file not found in the error message will make the program easier to use.
public int count(String fileName) {
try {
...
} catch (FileNotFoundException ex) {
}
}
If the intent is to keep going with an empty word list if there is no file, this needs to be more explicit. A comment would help clarify. Or better, an explicit check to see if the file exists. (I am unsure how using exceptions as control-flow is considered in the Java world. It is usually frowned upon). If this is not the intent, the error needs to be handled.
Naming: I found your variable names to be mostly clear. It confused me that spellCheck wrote output though. printMisspelledWords would be much more clear. In general, it is recommended for methods to be verbs and classes and objects to be nouns.
Some of your names are vague. I had to keep checking which file read() was supposed to read. It is surprising that count() not only returns the word count, but loads the word list too. You could be more clear by having functions like loadDictionary(), loadWordList(), and getWordCount().
"domain": "codereview.stackexchange",
"id": 30560,
"tags": "java, strings, hash-map"
} |
Moment of inertia of a rod, what is wrong? | Question: The moment of inertia (MOI) of a rod that rotates around its center is $\frac{1}{12} m l^2$, while a rod that rotates around its end is $\frac{1}{3} ml^2$, as listed here.
That doesn't sound right - please see the image. A rod that rotates around its center can be viewed as two rods rotating around a common end point. Each rod's MOI is $\frac{1}{3} ml^2$, so the two rods together would have MOI $\frac{2}{3} ml^2$.
What is wrong?
Answer: Start with the moment of inertia (about one end) of a rod of length $L/2$ and mass $m/2$:
$$ I = \frac{1}{3}\frac{m}{2}\left(\frac{L}{2}\right)^2 = \frac{mL^2}{24} $$
Multiply by two, to get a rod of length $L$ and mass $m$ pivoted about the middle and you get:
$$ I = \frac{mL^2}{12} $$
You forgot to allow for the doubling/halving of the mass. | {
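Both textbook values follow directly from the defining integral $I = \int x^2\,dm$ with uniform linear density $m/L$:

```latex
I_{\text{end}} = \int_0^L x^2\,\frac{m}{L}\,dx = \frac{1}{3}mL^2,
\qquad
I_{\text{center}} = \int_{-L/2}^{L/2} x^2\,\frac{m}{L}\,dx = \frac{1}{12}mL^2 .
```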
"domain": "physics.stackexchange",
"id": 18697,
"tags": "moment-of-inertia"
} |
Why is it so hard to simulate up and down positions on headphones? | Question: I'm wondering why simulating front, back, left, and right source positions on headphones is easy, but up and down is not (maybe even impossible?).
Is it due to the shape of the ear? But shouldn't we be able to modify the input signal according to the shape of the ear and then simulate up and down movements? Why don't binaural impulse responses simulate up and down correctly on headphones?
Answer: Front/back is actually very hard too, at least for stationary simulation without head tracking.
The main reason is simple: Left/Right is done by looking at the differences between the two ear signals, i.e. interaural level differences and interaural time differences. If a source is located to the left, the sound will arrive earlier on the left ear and will be louder (depending on frequency).
In the median plane (front, up, back, down), the ear signals are almost identical. The only information available is pinna cues, shoulder reflections, interaural changes due to micro-rotations of the head, small asymmetries, contextual information, etc. That's a lot less robust, so the localization acuity is in general a lot worse. It's also harder to model since it depends a lot on the individual human, and one-size-fits-all doesn't work well.
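To make the left/right cue concrete, here is a rough numerical sketch of the interaural time difference using the classic spherical-head (Woodworth) approximation; the head radius and speed of sound below are typical textbook values, not measurements.

```python
import math

# Woodworth spherical-head approximation for the interaural time
# difference (ITD): ITD = r * (theta + sin(theta)) / c, for a source at
# azimuth angle theta (0 = straight ahead in the median plane).
r = 0.0875  # assumed head radius, metres
c = 343.0   # speed of sound, m/s

def itd(theta):
    return r * (theta + math.sin(theta)) / c

print(itd(math.pi / 2) * 1e3)  # source hard left/right: roughly 0.66 ms
print(itd(0.0))                # median plane (front/up/back/down): 0 s
```

The zero at `theta = 0` is exactly the problem described above: every direction in the median plane produces the same (null) binaural timing cue.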
"domain": "dsp.stackexchange",
"id": 8067,
"tags": "impulse-response, 3d"
} |
How can I show the relations between travel destinations? | Question: I'm trying to do a project about email marketing. I'm working at a tourism company and I want to make best-destination suggestions for the clients. But I need to see the relations between destinations.
Example: How many people visited Dublin and then visited London?
My question: How can I best analyse this relation between the cities, given data about traveler itineraries?
I want to send email offers to clients who went to London and didn't go to Dublin (assuming a strong relation between London and Dublin).
Answer: You can try graph databases (Neo4j, OrientDB, etc.). Store locations and the connections between them as nodes and edges. Then do analysis over the graph data. Based on your needs, you can use additional attributes (like counts) and assign weights to edges. Neo4j supports collaborative filtering as well.
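If a graph database is overkill at first, the core co-occurrence counts can be sketched in plain Python; the itineraries below are made-up toy data.

```python
from collections import Counter
from itertools import combinations

# Toy itineraries (hypothetical data): one time-ordered list of cities
# per client.
trips = [
    ["dublin", "london", "paris"],
    ["dublin", "london"],
    ["london", "paris"],
    ["dublin", "paris"],
]

# Count ordered co-visits: "visited A and later visited B".
pairs = Counter()
for cities in trips:
    for a, b in combinations(cities, 2):
        pairs[(a, b)] += 1

print(pairs[("dublin", "london")])  # 2 clients went Dublin then London

# Clients who visited London but never Dublin: candidates for a Dublin offer.
targets = [i for i, cities in enumerate(trips)
           if "london" in cities and "dublin" not in cities]
print(targets)  # [2]
```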
"domain": "datascience.stackexchange",
"id": 337,
"tags": "statistics"
} |
Does redshift depend on wavelength? | Question: I came across this equation on Wikipedia:
$$z=\frac{\lambda_{\text{observed}} - \lambda_{\text{emitted}}}{\lambda_{\text{emitted}}}$$
and it got me thinking: if I measure a wavelength of $720 \;\text{nm}$ compared to a rest wavelength of $656 \;\text{nm}$, that gives me a redshift of about $0.098$.
So every wavelength gets shifted by the same absolute amount, in this case $720 - 656 = 64 \;\text{nm}$? Does that mean that the redshift value is different for every wavelength?
Or, if not, does every wavelength get shifted by a different amount for a fixed redshift value?
How do you calculate the velocity of a body in these scenarios?
Answer: Every wavelength doesn't get redshifted by the same absolute amount.
Every wavelength is multiplied by a factor of $1+z$. | {
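A quick numerical check of this, reusing the numbers from the question (656 nm rest, 720 nm observed):

```python
# Each wavelength is multiplied by the factor (1 + z); the absolute shift
# in nanometres therefore differs from line to line.
rest, observed = 656.0, 720.0     # numbers from the question
z = (observed - rest) / rest
print(round(z, 3))                # ~0.098

other_rest = 400.0                # a different (hypothetical) rest line
print(other_rest * (1 + z))       # ~439 nm: shifted ~39 nm, not 64 nm
```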
"domain": "physics.stackexchange",
"id": 82868,
"tags": "velocity, wavelength, doppler-effect, redshift"
} |
How to solve for the torque when there is more than one lever arm? | Question: I have encountered this problem in my Giancoli physics textbook and I would like to seek your kind guidance regarding a simple point of confusion of mine.
In this problem, we are asked to find the components of the force exerted by the hinge on the door, and we are given the dimensions of that door. Finding the vertical component was fine, no problems encountered. But the problem lies within finding the horizontal components.
In the solutions manual, it used different lever arms for each force involved, is this acceptable? If so, can you make me understand why so.
Also, I tried to draw a schematic representation of a possible explanation; can you please tell me whether it is logical or not?
Answer: The lever arm for the counter clockwise torque about the bottom hinge is the perpendicular distance between the line of action of the gravitational force and the bottom hinge. The clockwise torque about the bottom hinge is due to the horizontal reaction at the top hinge where the lever arm is the distance between the hinges. Thus the lever arms are different. Your schematic looks correct regarding the lever arms.
But while their lever arms are correct, all their solutions are incorrect because the problem is what is called "statically indeterminate". Statically indeterminate means the equations of equilibrium are insufficient to solve for the reactions. The underlying physical reason is there are redundant reactions.
Take for example the vertical reactions at the hinges. Their equation
$$\sum F_{y}=F_{Ay}+F_{By}-mg=0$$
is correct. But then they incorrectly conclude that
$$F_{Ay}=F_{By}=\frac{1}{2}mg$$
The first equation tells us that any combination of $F_{Ay}+F_{By}$ that equals $mg$ is possible, up to and including that $F_{Ay}=0$ or $F_{By}=0$. The vertical reactions of the hinges are redundant. You can remove either one and the door could still be in vertical equilibrium, as long as the single remaining hinge, as well as the door and door jam where the hinges are screwed into, are capable of supporting the entire load.
Next consider rotational equilibrium. In their free body diagram of the door they fail to include the moment reactions possible at the hinges. I am using the term "moment" instead of "torque" because that's the term used in statics to describe a torque (at least it was when I took statics over 50 years ago).
A frictionless hinge offers no moment reaction to the movement of the door perpendicular to the plane of the door, i.e., no moment reaction to the swinging of the door. But the hinges, or more properly the hinges plus the door jam where the hinges are screwed into, do provide a moment reaction to the weight of the door since the hinges cannot rotate in the plane of the door. In effect, the hinges are what are called "cantilever supports". A cantilever support provides vertical, horizontal and moment reactions.
So the sum of the torques (moments) about hinge B equation should be
$$mg\frac{w}{2}-F_{Ax}(h-2d)-M_{B}=0$$
where $M_{B}$ is the clockwise moment reaction of hinge B.
Similarly, the sum of the moments about hinge A are
$$mg\frac{w}{2}-F_{Bx}(h-2d)-M_{A}=0$$
where $M_{A}$ is the clockwise moment reaction of hinge A.
Finally, we have
$$F_{Ax}=F_{Bx}$$
Note that we now have three equations and four unknowns ($F_{Ax}$, $F_{Bx}$, $M_A$, and $M_B$). Once again we have redundant reactions. In this case the redundant reactions are the moment reactions of the hinges. They are redundant because, once again, we can eliminate either hinge and the door could still be in rotational equilibrium, as long as the remaining hinge (and the door itself) is strong enough to provide the necessary moment reaction. Which is why one would never have a door with a single hinge to provide all of the support.
When a problem is statically indeterminate, one has to include additional equations relating to the deformations of the structure and the material properties. That is the subject of mechanics of deformable solids.
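One way to see the indeterminacy numerically: assemble the four equilibrium equations above into a coefficient matrix for the six unknowns and check its rank. The parameter values below are arbitrary placeholders; only the structure of the equations matters.

```python
import numpy as np

# Unknowns ordered as [F_Ax, F_Bx, F_Ay, F_By, M_A, M_B].
m, g, w, h, d = 10.0, 9.8, 0.9, 2.0, 0.3   # made-up door dimensions/mass
lever = h - 2 * d

A = np.array([
    [0.0,    0.0,    1.0, 1.0,  0.0,  0.0],  # F_Ay + F_By = m g
    [1.0,   -1.0,    0.0, 0.0,  0.0,  0.0],  # F_Ax = F_Bx
    [-lever, 0.0,    0.0, 0.0,  0.0, -1.0],  # moments about hinge B
    [0.0,   -lever,  0.0, 0.0, -1.0,  0.0],  # moments about hinge A
])

print(np.linalg.matrix_rank(A))  # rank 4 < 6 unknowns: indeterminate
```

Rank 4 with 6 unknowns leaves a two-parameter family of reactions, which is exactly the redundancy described above.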
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 85789,
"tags": "homework-and-exercises, newtonian-mechanics, forces, equilibrium"
} |
Spring rotated in uniform circular motion | Question: Why does a spring stretch when rotated in uniform circular motion?
The horizontal rod containing the massless spring (stiffness $k$) and a block of mass $m$ is rotated uniformly about point P.
The centripetal force is provided by the elastic (restoring) force of the stretched spring, which is $kx$, where $x$ is the elongation of the spring compared to its rest position.
Everything looks very neat, very decent.
I am listing the problem as seen from two reference frames.
1. Rotating frame (as seen from rod).
As seen from the rod, a centrifugal force acts outwards. This will cause the block to move towards the right, hence stretching the spring. The centrifugal force will be balanced by the spring force acting to the left. Everything seems fine.
2. Inertial Frame (ground)
I am unable to understand why the spring stretches in the first place. There seems to be no reason why it should stretch: there is no force acting on the block in the outward direction. There is a centripetal force acting on the block when it is undergoing rotation, but that force is towards the center. So shouldn't the spring contract?
Answer: For the block to move in a circle, there must be a centripetal force (as you have said)--something needs to be constantly pulling the block toward the center of the circle if it is to move in a circle. The thing that does the pulling in this case is the spring. (In other words, the source of the centripetal force is the spring. Without the spring, there is no centripetal force, assuming there is no friction between the block and the rod.)
If this is not clear, imagine what would happen if the spring was not attached to the block. When the rod begins to rotate, the block will soon slide off the end of the rod because we know from Newton's First Law that the block will try to move in a straight line unless we apply a force to change its direction. If the spring is attached to the block, however, when the block tries to go straight and slide off the end of the rod, the spring will stretch and apply a force toward the center of the circle. That will prevent the block from flying off the end of the rod. | {
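In the inertial frame you can even compute how much the spring stretches in steady state: it elongates until its tension $kx$ supplies exactly the centripetal force $m\omega^2(L_0+x)$. A small numerical sketch, where $L_0$ is an assumed natural length and all numbers are made up for illustration:

```python
# Steady-state elongation: spring tension k*x must equal the required
# centripetal force m*omega^2*(L0 + x).
m, k, L0, omega = 0.5, 200.0, 0.3, 10.0  # kg, N/m, m, rad/s (made up)

x = m * omega**2 * L0 / (k - m * omega**2)  # valid while k > m*omega^2
radius = L0 + x
print(x)                             # elongation: 0.1 m with these numbers
print(k * x, m * omega**2 * radius)  # both 20.0 N: the forces balance
```

Note the condition `k > m*omega^2`: if the rotation is too fast for the spring, no steady elongation exists and the block flies outward.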
"domain": "physics.stackexchange",
"id": 25532,
"tags": "newtonian-mechanics, reference-frames, inertial-frames, centripetal-force"
} |
Remove duplication in Ruby map with conditional entries | Question: I have a nasty (IMO) piece of code for a datatable configuration; how do I best remove the duplication here?
I've tried the obvious things I know (collapsing into one statement and trying inline conditionals), but it isn't working, and I can't help thinking there must be a more elegant Ruby way here.
if show_status == "true"
if show_requested_by == "true"
records.map do |record|
{
warning: as_warning(record),
updated_at: as_time_ago(record.updated_at),
status: as_status(record.status),
user: record.user.nil? ? "" : record.user.formal_name,
manager: record.manager.nil? ? "" : record.manager.formal_name,
first_date: as_date(record.first_date),
last_date: as_date(record.last_date),
days: as_number(record.total_days),
hours: as_number(record.total_hours),
units: as_number(record.total_units),
value: as_number(record.total_value),
comment: as_comment(record.comment),
actions: as_actions(record)
}
end
else
records.map do |record|
{
warning: as_warning(record),
updated_at: as_time_ago(record.updated_at),
status: as_status(record.status),
manager: record.manager.nil? ? "" : record.manager.formal_name,
first_date: as_date(record.first_date),
last_date: as_date(record.last_date),
days: as_number(record.total_days),
hours: as_number(record.total_hours),
units: as_number(record.total_units),
value: as_number(record.total_value),
comment: as_comment(record.comment),
actions: as_actions(record)
}
end
end
else
if show_requested_by == "true"
records.map do |record|
{
warning: as_warning(record),
updated_at: as_time_ago(record.updated_at),
user: record.user.nil? ? "" : record.user.formal_name,
manager: record.manager.nil? ? "" : record.manager.formal_name,
first_date: as_date(record.first_date),
last_date: as_date(record.last_date),
days: as_number(record.total_days),
hours: as_number(record.total_hours),
units: as_number(record.total_units),
value: as_number(record.total_value),
comment: as_comment(record.comment),
actions: as_actions(record)
}
end
else
records.map do |record|
{
warning: as_warning(record),
updated_at: as_time_ago(record.updated_at),
manager: record.manager.nil? ? "" : record.manager.formal_name,
first_date: as_date(record.first_date),
last_date: as_date(record.last_date),
days: as_number(record.total_days),
hours: as_number(record.total_hours),
units: as_number(record.total_units),
value: as_number(record.total_value),
comment: as_comment(record.comment),
actions: as_actions(record)
}
end
end
end
Answer: Another way to do this would be to inline the conditionals, returning nil when they are not met, and then use compact or compact! to remove those entries at the end. Something like:
records.map do |record|
{
warning: as_warning(record),
updated_at: as_time_ago(record.updated_at),
status: show_status == 'true' ? as_status(record.status) : nil,
user: show_requested_by=='true' ? (record.user.nil? ? "" : record.user.formal_name) : nil,
manager: record.manager.nil? ? "" : record.manager.formal_name,
first_date: as_date(record.first_date),
last_date: as_date(record.last_date),
days: as_number(record.total_days),
hours: as_number(record.total_hours),
units: as_number(record.total_units),
value: as_number(record.total_value),
comment: as_comment(record.comment),
actions: as_actions(record)
}.compact
end
Note: If any of your entries can be nil (and it is important to keep them in the hash) then this won't work.
Update: As an alternative that handles nils that are expected you could also go for something like:
records.map do |record|
{
status: show_status == 'true' ? as_status(record.status) : nil,
user: show_requested_by=='true' ? (record.user.nil? ? "" : record.user.formal_name) : nil,
}.compact
.merge(
warning: as_warning(record),
updated_at: as_time_ago(record.updated_at),
manager: record.manager.nil? ? "" : record.manager.formal_name,
first_date: as_date(record.first_date),
last_date: as_date(record.last_date),
days: as_number(record.total_days),
hours: as_number(record.total_hours),
units: as_number(record.total_units),
value: as_number(record.total_value),
comment: as_comment(record.comment),
actions: as_actions(record)
)
end
Although, if the user or status should legitimately be nil, they still won't be handled correctly.
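For comparison (not Ruby, just to show the build-then-filter pattern is general), the same idea in Python, with hypothetical record fields:

```python
# Build the mapping with conditional entries set to None, then drop the
# Nones -- the analogue of Ruby's Hash#compact. Field names are made up.
def row(record, show_status, show_requested_by):
    entry = {
        "status": record["status"] if show_status else None,
        "user": record.get("user", "") if show_requested_by else None,
        "manager": record.get("manager", ""),
    }
    return {k: v for k, v in entry.items() if v is not None}

print(row({"status": "open", "user": "ann", "manager": "bob"}, True, False))
# {'status': 'open', 'manager': 'bob'}
```

The same caveat applies as in Ruby: a value that is legitimately None/nil would be silently dropped.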
"domain": "codereview.stackexchange",
"id": 28889,
"tags": "ruby, array"
} |
Vector Potential and Zero Divergence | Question: In the following problem, and many others in magnetostatics (Griffiths' book), I'm asked to check that $\nabla \cdot \mathbf{A} = 0$. The book says "It is always possible to make the vector potential divergenceless" and also that "we are at liberty to pick that as we see fit". I think I missed that the book was imposing the Coulomb gauge condition.
If $\mathbf{B}$ is uniform, show that $\mathbf{A}(\mathbf{r}) = -\frac{1}{2}(\mathbf{r} \times \mathbf{B})$ works.
Why do I have to check that $\nabla \cdot \mathbf{A} = 0$?
Couldn't $\nabla \cdot \mathbf{A}$ be anything else?
Answer: You're right that $\nabla \cdot \mathbf{A}$ could be anything, but the relationship between $\mathbf{A}$ and $\mathbf{B}$ you show is only true in the Coulomb gauge.
It's straightforward to show that $\mathbf{B} = \nabla \times \mathbf{A}$ by taking the curl of that equation. However, I can also write
$\mathbf{A}(\mathbf{r}) = -\frac{1}{2} (\mathbf{r} \times \mathbf{B}(\mathbf{r})) + \nabla \phi(\mathbf{r})$
for any scalar potential $\phi$, and $\mathbf{B} = \nabla \times \mathbf{A}$ is still true, but $\nabla \cdot \mathbf{A}=0$ is not in general true.
I don't have that book, so I'm not sure if the phrasing in the question makes things confusing. | {
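A quick symbolic check of both claims for the potential in the question, choosing (arbitrarily) a uniform field along $z$:

```python
import sympy as sp

x, y, z, B0 = sp.symbols('x y z B0')
r = sp.Matrix([x, y, z])
B = sp.Matrix([0, 0, B0])            # uniform field; direction chosen arbitrarily
A = -sp.Rational(1, 2) * r.cross(B)  # the candidate vector potential

div_A = sum(A[i].diff(v) for i, v in enumerate((x, y, z)))
curl_A = sp.Matrix([
    A[2].diff(y) - A[1].diff(z),
    A[0].diff(z) - A[2].diff(x),
    A[1].diff(x) - A[0].diff(y),
])
print(div_A)     # 0: the Coulomb-gauge condition holds for this choice
print(curl_A.T)  # recovers B, so curl A = B as required
```

Adding any gradient $\nabla\phi$ to `A` would leave `curl_A` unchanged but generally spoil `div_A`, which is exactly the gauge freedom described above.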
"domain": "physics.stackexchange",
"id": 37387,
"tags": "electromagnetism, magnetostatics"
} |
How do we know distant stars still exist | Question: The Pleiades are 450 light years away. Thus, when we see their twinkle in the night sky, the light that hits our retina emanated from the year 1569 or so. On that proviso, how can we be sure that the stars are still alive, given that we are seeing light from hundreds of years ago?
Answer: We don't know from sensory evidence. As far as we can tell, no information ever moves faster than the speed of light, so we are always at least 450 years out of date for anything happening in the Pleiades.
The only evidence we have is inferential: As far as we can tell (and we've looked real hard at this) the world works the same both here and at the Pleiades. And as far as we can tell (and, once again, we've looked real hard at this) stars like those in the Pleiades don't just disappear or go out in only 450 years -- it takes millions of years.
So, based on what we observe of them as of 450 years ago, and based on what we know of how stars work, we conclude that they are pretty much the same "today" -- though, really, what we're concluding is that when we observe them 450 years from now, they'll still look pretty much like they do today.
This is all inferential, but I'd bet just about anything on it. (If only I were here in 450 years to collect!) | {
"domain": "astronomy.stackexchange",
"id": 3470,
"tags": "star, observational-astronomy"
} |
What makes PROLOG Turing-complete? | Question: I know that it can be proven PROLOG is Turing-complete by constructing a program that simulates a Turing machine like this:
turing(Tape0, Tape) :-
perform(q0, [], Ls, Tape0, Rs),
reverse(Ls, Ls1),
append(Ls1, Rs, Tape).
perform(qf, Ls, Ls, Rs, Rs) :- !.
perform(Q0, Ls0, Ls, Rs0, Rs) :-
symbol(Rs0, Sym, RsRest),
once(rule(Q0, Sym, Q1, NewSym, Action)),
action(Action, Ls0, Ls1, [NewSym|RsRest], Rs1),
perform(Q1, Ls1, Ls, Rs1, Rs).
symbol([], b, []).
symbol([Sym|Rs], Sym, Rs).
action(left, Ls0, Ls, Rs0, Rs) :- left(Ls0, Ls, Rs0, Rs).
action(stay, Ls, Ls, Rs, Rs).
action(right, Ls0, [Sym|Ls0], [Sym|Rs], Rs).
left([], [], Rs0, [b|Rs0]).
left([L|Ls], Ls, Rs, [L|Rs]).
Source
However, I'm wondering which parts of the PROLOG language one could strip away (esp. function symbols, clause overloading, recursion, unification) without losing Turing completeness. Are function symbols by themselves enough for Turing completeness?
Answer: It's a fairly reliable rule of thumb that Turing-completeness depends on the ability to construct answers or intermediate values of unrestricted "size" and the ability to loop or recurse an unrestricted number of times. If you have those two things, you probably have Turing-completeness. (More specifically, if you can construct Peano arithmetic, then you certainly have Turing-completeness!)
Let's assume for the moment that you've already stripped arithmetic. We'll also assume that you don't have any non-logical features like atom_chars, assert, and so on, which enable general shenanigans.
If you stripped out function symbols, you can't construct answers or intermediates of unrestricted size; you can only use atoms which appear in the program and the query. As a result, the set of all possible solutions to any query is finite, so taking the least fixed point of the program/query will always terminate. Datalog (a relational database query language based on Prolog) works on this principle.
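The termination claim can be sketched concretely: with no function symbols, only finitely many ground atoms exist, so naive bottom-up (least-fixed-point) evaluation must stop. A toy illustration in Python (the edge/path program is a made-up example, not real Prolog):

```python
# Naive Datalog-style fixpoint evaluation. Facts and derived facts are
# tuples; with a finite atom universe the loop must reach a fixed point.
facts = {("edge", "a", "b"), ("edge", "b", "c")}

def step(db):
    new = set(db)
    # rule: path(X, Y) :- edge(X, Y).
    new |= {("path", x, y) for (p, x, y) in db if p == "edge"}
    # rule: path(X, Z) :- path(X, Y), edge(Y, Z).
    new |= {("path", x, z)
            for (p, x, y) in db if p == "path"
            for (q, y2, z) in db if q == "edge" and y2 == y}
    return new

db = facts
while True:
    nxt = step(db)
    if nxt == db:   # no new facts derivable: least fixed point reached
        break
    db = nxt

print(("path", "a", "c") in db)  # True
```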
Similarly, if you restricted Prolog to primitive recursion only (that includes no recursion as a degenerate case), then the amount of recursion that you can do is bounded by the size of the query, so all computation terminates. So you need general recursion for Turing-completeness.
And, of course, if you have general recursion, you can cut a whole bunch of features and retain Turing-completeness, including general unification (construction and top-level pattern matching is sufficient), negation, and the cut. | {
"domain": "cs.stackexchange",
"id": 13532,
"tags": "programming-languages, turing-completeness, logic-programming, prolog"
} |
How does Schur's Lemma mean that the Dirac representation is reducible? | Question: In chapter 3 of Peskin and Schroeder, when they're talking about "Dirac Matrices and Dirac Field Bilinears," they introduce $\gamma^{5}$ and give some properties of it. One of the properties is $[\gamma^{5},S^{\mu\nu}]=0$. Then they say that this means the Dirac representation must be reducible, "since eigenvectors of $\gamma^{5}$ whose eigenvalues are different transform without mixing (this criterion for reducibility is known as Schur's Lemma)."
I've looked at the wikipedia page for Schur's Lemma, and at various math notes online about Schur's lemma, and I don't see the relevance here. I understand Schur's Lemma to be something like this: that if you have an irreducible representation of a algebra on a vector space, and a linear operator on that vector space commutes with that representation for every element in the algebra, then the linear operator is either 0 or invertible.
How does this reduce down to "since eigenvectors of $\gamma^{5}$ whose eigenvalues are different transform without mixing"?
Answer: The reasoning is supposed to go as follows:
$\gamma^5$ commutes with all algebra elements, hence with the whole image of the algebra representation.
$\gamma^5$ has at least two different eigenvalues, meaning it is not a scalar multiple of the identity.
If the representation of the $S^{\mu\nu}$ (that form the Lorentz algebra $\mathfrak{so}(1,3)$) were irreducible, $\gamma^5$ would be a scalar multiple of the identity by Schur's lemma, which would contradict 2.
Therefore, the representation of the $S^{\mu\nu}$ must be reducible.
Caveat: The Dirac representation is irreducible as the representation of the Clifford algebra, see e.g. this question and its answers. | {
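For the skeptical reader, both ingredients of the argument (the commutation and the two distinct eigenvalues) can be checked numerically. The matrices below are the standard chiral-basis gamma matrices; the basis choice is mine, and any basis gives the same result.

```python
import numpy as np

# Chiral (Weyl) basis gamma matrices, gamma^5 = i g0 g1 g2 g3, and
# S^{mu nu} = (i/4)[gamma^mu, gamma^nu].
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g = [np.block([[Z2, I2], [I2, Z2]])]                        # gamma^0
g += [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]  # gamma^1..3

g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]

def S(mu, nu):
    return 0.25j * (g[mu] @ g[nu] - g[nu] @ g[mu])

max_comm = max(np.abs(g5 @ S(m, n) - S(m, n) @ g5).max()
               for m in range(4) for n in range(4))
eigs = sorted(np.linalg.eigvalsh(g5).round(12))
print(max_comm, eigs)  # commutators vanish; eigenvalues are -1, -1, 1, 1
```

A non-scalar matrix commuting with the whole representation is exactly what Schur's lemma forbids for an irreducible representation, hence reducibility.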
"domain": "physics.stackexchange",
"id": 31710,
"tags": "representation-theory, dirac-matrices"
} |
Average days on market calculation | Question: I have the following method which calculates average days on market for a sale type segment:
def average_days_on_market_sale_type_segment
# on market days
on_market_days = {}
# 1. get last year sales - note that the dates need to be sequential in time (oldest in time comes first, most recent in time commes last)
last_year_sales = self.real_property_sale_projects.where(status: 'reviewed')
.where('deal_date between ? and ?', Time.at(Time.now - 1.years), Time.now)
.map(&:real_property_sale)
# 2. loop each sale type, convert sale description string to symbol
last_year_sales.each do |sale|
sd = sale.sale_description.parameterize.underscore.to_sym
on_market_days[sd] ||= { count: 0, days: 0 }
on_market_days[sd][:count] += 1
# get the days on market value
on_market_days[sd][:days] += sale.real_property_sale_project.days_on_market
end
# 3. summarize average
summary = {}
on_market_days.each do |k, v|
summary[k.to_s.humanize] = (v[:days].to_f / v[:count].to_f).round
end
# 4. return summarized result
# it should contain each sale description type as a key and average day as a number
summary
end
Is there a more elegant way to achieve the same thing? Does the solution contain any performance issues/gotchas?
Answer:
Use 1.year.ago to get the start of the time range.
Create a method on the sale model that returns the description in the form you want (i.e. underscored symbol). On the other hand, is it necessary? You turn it back into a string afterward. If you're relying on parameterize + underscore to "normalize" slightly different strings that mean the same, you should probably normalize your database with a migration instead. If the strings are already normalized, just use 'em directly.
Use #group_by to group by sales type.
Use #fdiv if you want to avoid integer math.
The record-fetching is a bit complex, but we'll get to that. Looking just at the average calculation, you can do this instead:
averages = last_year_sales
.group_by(&:description) # I'm assuming that there's no real need to symbolize the description
.map do |description, sales|
sum = sales.map { |sale| sale.real_property_sale_project.days_on_market }.reduce(:+)
[description, sum.fdiv(sales.count)]
end
Hash[averages]
Overall though, your database seems to be laid out in a way that makes this query extra tricky.
You're fetching all associated real_property_sale_projects (I'll just say "projects"), and then - for each of those - you fetch an associated real_property_sale? But when you have to do the averaging, the sale doesn't contain all the necessary information - the days_on_market number is attached to the project.
And you're doing all this in a 3rd model altogether it seems. So that's pretty complex.
However, you can also do it all in the database - it's exceedingly good at aggregating data. For instance, this, I think, should work:
self.real_property_sale_projects
.joins(:real_property_sale)
.where(status: "reviewed")
.where("real_property_sale_projects.deal_date between ? and ?", 1.year.ago, Time.now)
.group("real_property_sales.description")
.average(:days_on_market)
That should give you the result. There may be a nicer way to write it. | {
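The grouping logic itself is language-agnostic; for comparison, here is the same sum/count/average pattern sketched in Python with made-up records:

```python
from collections import defaultdict

# records are (description, days_on_market) pairs -- hypothetical data
# mirroring the Ruby example above.
records = [("Arm's Length", 30), ("Arm's Length", 50), ("Foreclosure", 10)]

totals = defaultdict(lambda: [0, 0])  # description -> [sum, count]
for desc, days in records:
    totals[desc][0] += days
    totals[desc][1] += 1

averages = {d: round(s / n) for d, (s, n) in totals.items()}
print(averages)  # {"Arm's Length": 40, 'Foreclosure': 10}
```

As the answer says, though, pushing this into the database with GROUP BY is usually preferable to doing it in application code.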
"domain": "codereview.stackexchange",
"id": 16829,
"tags": "ruby, ruby-on-rails"
} |
Meeting Scheduler | Question: I am building a simple meeting scheduler application in C#. I have written the following code, which is working fine.
void Main()
{
var sc = new MeetingScheduler();
sc.Schedule();
}
public class Meeting : IComparable<Meeting>
{
public DateTime StartTime { get; set; }
public DateTime EndTime { get; set;}
public int Duration { get; set;}
//duration in minutes
public Meeting(DateTime startTime, int duration)
{
this.StartTime = startTime;
this.Duration = duration;
this.EndTime = startTime.AddMinutes(duration);
}
public int CompareTo(Meeting o)
{
if (this.EndTime.CompareTo(o.StartTime) < 0)
{
return -1;
}//end time is before the other's start time
if (this.StartTime.CompareTo(o.EndTime) > 0)
{
return 1;
}//start time is after the other's end time
return 0;
}
public override String ToString()
{
return "meeting {from " + StartTime + ", minutes=" + Duration + "}";
}
}
public class MeetingScheduler
{
private List<Meeting> meetings = new List<Meeting>();
public Meeting BookRoom(Meeting meeting)
{
if (meetings.Count == 0)
{
meetings.Add(meeting);
return null;
}
else
{
int pos = -Array.BinarySearch(meetings.ToArray(), meeting);
if (pos > 0)
{
meetings.Insert(pos - 1, meeting);
return null;
}
else
{
return meetings[-pos];
}
}
}
public List<Meeting> GetMeetings()
{
return meetings;
}
public void Schedule()
{
MeetingScheduler meetingScheduler = new MeetingScheduler();
Meeting[] meetingsToBook = new Meeting[]
{
new Meeting(new DateTime(2017,09,22,8,30,0), 15),
new Meeting(new DateTime(2017,09,22,8,44,0), 15),
new Meeting(new DateTime(2017,09,22,9,10,0), 60),
};
foreach (Meeting m in meetingsToBook)
{
Meeting oldMeeting = meetingScheduler.BookRoom(m);
if (oldMeeting != null)
{
Console.WriteLine("Could not book room for " + m + " because it collides with " + oldMeeting);
}
}
Console.WriteLine("meetings booked: " + meetingScheduler.GetMeetings().Count());
foreach (Meeting m in meetingScheduler.GetMeetings())
{
Console.WriteLine(m.StartTime + "-> " + m.Duration + " mins");
}
}
}
I would appreciate some review comments and suggestions on my implementation.
Answer: Possible Ideas to consider
Returning null values unnecessarily when you should be returning a meeting.
Instantiating a MeetingScheduler and then instantiating the same class again inside its own Schedule() method. You can simply use this instead.
You could use public properties rather than GetMeetings() to retrieve the meetings. Using a method is probably more OOP so you’ve done well there.
You can create a method which checks for clashes with other meetings.
Perhaps try working through this refactored code. Hope this helps.
Refactored Code:
internal class Program
{
private static void Main(string[] args)
{
MeetingScheduler sc = new MeetingScheduler();
sc.Schedule();
}
public class Meeting : IComparable<Meeting>
{
public DateTime StartTime { get; set; }
public DateTime EndTime { get; set; }
public int Duration { get; set; }
//duration in minutes
public Meeting(DateTime startTime, int duration)
{
this.StartTime = startTime;
this.Duration = duration;
this.EndTime = startTime.AddMinutes(duration);
}
public int CompareTo(Meeting o)
{
if (this.EndTime.CompareTo(o.StartTime) < 0)
{
return -1;
}//end time is before the other's start time
if (this.StartTime.CompareTo(o.EndTime) > 0)
{
return 1;
}//start time is after the other's end time
return 0;
}
public override String ToString()
{
return "meeting {" + "from " + StartTime + ", minutes=" + Duration + '}';
}
public bool CollidesWith(Meeting other)
{
if ((thisEndsWhileOtherIsRunning(other)) || thisBeginsWhileOtherIsRunning(other))
{
return true;
}
else
{
return false;
}
}
private bool thisBeginsWhileOtherIsRunning(Meeting other)
{
return this.StartTime >= other.StartTime && this.StartTime <= other.EndTime;
}
private bool thisEndsWhileOtherIsRunning(Meeting other)
{
return this.EndTime >= other.StartTime && this.EndTime <= other.EndTime;
}
}
public class MeetingScheduler
{
private List<Meeting> meetings = new List<Meeting>();
public void BookRoom(Meeting meeting)
{
if (meetings.Any(m => m.CollidesWith(meeting)))
{
Console.WriteLine("Could not book room for " + meeting + " because it collides with another meeting");
}
else
{
meetings.Add(meeting);
Console.WriteLine("Add meeting: {0} ", meeting);
}
}
public void Schedule()
{
Meeting[] meetingsToBook = new Meeting[]{
new Meeting(new DateTime(2017,09,22,8,30,0), 15),
new Meeting(new DateTime(2017,09,22,8,44,0), 15),
new Meeting(new DateTime(2017,09,22,9,10,0), 60),
};
foreach (Meeting m in meetingsToBook)
{
this.BookRoom(m);
}
PrintMeetings();
}
private bool NoMeetingsBooked()
{
return meetings.Count == 0;
}
public List<Meeting> GetMeetings()
{
return meetings;
}
private void PrintMeetings()
{
Console.WriteLine("meetings booked: " + this.GetMeetings().Count());
foreach (Meeting m in this.GetMeetings())
{
Console.WriteLine(m.StartTime + "-> " + m.Duration + " mins");
}
}
}
} | {
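A final observation: the collision test is ordinary interval overlap. Two closed intervals [s1, e1] and [s2, e2] overlap iff s1 <= e2 and s2 <= e1. A language-neutral sketch (in Python, with minutes-since-midnight standing in for DateTime):

```python
# Interval-overlap check equivalent to CollidesWith above.
def collides(s1, e1, s2, e2):
    return s1 <= e2 and s2 <= e1

def book(meetings, start, duration):
    end = start + duration
    if any(collides(start, end, s, s + d) for s, d in meetings):
        return False
    meetings.append((start, duration))
    return True

meetings = []
print(book(meetings, 8 * 60 + 30, 15))  # True  (08:30 for 15 min)
print(book(meetings, 8 * 60 + 44, 15))  # False (overlaps 08:30-08:45)
print(book(meetings, 9 * 60 + 10, 60))  # True
```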
"domain": "codereview.stackexchange",
"id": 27587,
"tags": "c#"
} |
Sign of force while solving the equation of motion for an ideal spring-block system | Question: I've a rather simple confusion about restoring forces, and it has been bothering me for a small while now.
One of the simplest restoring forces, is the one described by Hooke's law :
$$\vec{F}=-k\vec{x}$$
Suppose, I'm stretching a spring to the right of the equilibrium, and I've defined the right side to be the positive direction, denoted by the unit vector $\hat{i}$. Hence I can write the above equation as :
$$\vec{F}=-kx\hat{i}=kx(-\hat{i})$$
This shows that the force acts to the left side, as expected. However, this is where my confusion begins.
In order to solve this equation, we compare this to Newton's second law. Hence, we write $$\vec{F}=m\ddot{x}\hat{i}=kx(-\hat{i})$$
Hence, $$m\ddot{x}=-kx$$
My problem is, why do we write $\vec{F}=m\ddot{x}\hat{i}$ and not $m\ddot{x}(-\hat{i})$ since we already know that the force works in the negative $\hat{i}$ direction ? Why do we inherently consider the force to work in the positive $\hat{i}$ direction and then write something like, the force in the positive $\hat{i}$ direction is $-kx\hat{i}$ ? This seems like saying, the force in the positive direction is negative, and so, the force must be in the negative direction.
Couldn't we have directly said that the force in the negative direction is positive, and defined $\vec{F}=-m\ddot{x}{\hat{i}}$ ?
Is this some convention in Newton's law, where $\vec{F}$ is always defined as $+m\ddot{x}\hat{i}$, so that force in the direction of positive displacement is assumed positive, and only if $m\ddot{x}$ turns out negative can we say the force acts in the negative direction?
It doesn't seem to matter whether the force is acting along the left or the right, in the equation of motion, we always take $\vec{F}=m\ddot{\vec{x}}$ along whichever direction has been considered to be positive.
In my SHM sum, if I had considered the exact same system, but considered the left direction to be positive i.e $\hat{i}$, then the displacement would be negative, if I stretch the spring to the right side. However, in that case too,
$$\vec{F}=-k\vec{x}=-k(-x)\hat{i}=kx\hat{i}$$
However, what would be the LHS in this scenario? Should I write $\vec{F}=m\ddot{x}\hat{i}$ as before? But this would not give me the correct equation of motion, as there is a missing negative sign.
My guess is, since $x$ is negative here, the acceleration $\ddot{x}$ is also negative, and this is where the extra negative sign comes from.
Another alternative guess is that, if the force is defined to be $\vec{F}=m\ddot{x}\hat{x}$ where $\hat{x}$ is the direction of positive displacement, then in our sum, $\hat{x}=-\hat{i}$.
So, my confusion is regarding why $\vec{F}$ is taken to be positive and written as $m\ddot{x}\hat{x}$, even when we know that the force is acting in the negative direction?
Answer: Ignoring the absolute directions for a moment, things accelerate in the same direction they’re pushed, and springs resist deforming. This explains the lack and presence of a minus sign, respectively, in $\vec{F}=m\ddot{\vec{x}}$ and $\vec{F}=-k\vec{x}$. That’s all you need.
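To see this concretely, here is a small numerical sketch (my addition, not part of the original answer): integrating $m\ddot{x}=-kx$ once with "right" called positive and once with "left" called positive gives exact mirror-image trajectories, with no hand-inserted minus sign ever needed in $F=ma$ — the sign of $x$ carries the direction.

```python
# Semi-implicit Euler integration of m*xdd = -k*x. Flipping which direction is
# "positive" just negates x and v; the restoring force's sign comes from x itself.
m, k, dt, steps = 1.0, 4.0, 1e-3, 5000

def simulate(x0, v0):
    x, v, xs = x0, v0, []
    for _ in range(steps):
        a = -k * x / m   # F = -k x and F = m a, with no extra sign inserted
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

right_positive = simulate(1.0, 0.0)    # spring stretched one unit to the right
left_positive = simulate(-1.0, 0.0)    # same physical setup, axis flipped
assert all(abs(a + b) < 1e-12 for a, b in zip(right_positive, left_positive))
```

The two runs agree point by point up to an overall sign, which is exactly the coordinate flip.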
The problem arises because you decide on a direction when you replace $\vec{x}$ with $x\hat{i}$. It looks like this ends up confusing you because $\hat{i}$ always points in the positive direction, whereas you know that $\vec{F}$ doesn’t; thus, it’s compelling to add or remove minus signs. But any negatives are taken care of in the sign of $x$ and $\ddot{x}$. So you must resist this compulsion to avoid a sign error. | {
"domain": "physics.stackexchange",
"id": 85311,
"tags": "newtonian-mechanics, forces, classical-mechanics, harmonic-oscillator, conventions"
} |
Get main RenderPanel from Display | Question:
Hey,
I need to grab a pointer to rviz main RenderPanel - the one that renders the scene - from a Display plugin.
I've managed to grab a RenderPanel but in some cases there are more than one and I haven't found a way to identify them yet.
Originally posted by StefanFabian on ROS Answers with karma: 46 on 2016-08-18
Post score: 0
Answer:
Okay, I just accidentally found out how to get it while searching for a way to get the ViewController.
You get it as follows:
Your class should have a DisplayContext* property called context_.
Through this property we can access the ViewManager by calling context_->getViewManager() that can give us a reference to the RenderPanel by calling getRenderPanel().
So in short to grab the RenderPanel you simply call:
rviz::RenderPanel* panel = context_->getViewManager()->getRenderPanel();
Originally posted by StefanFabian with karma: 46 on 2016-08-27
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 25549,
"tags": "rviz"
} |
Why is the contribution of a path in Feynmans path integral formalism $\sim e^{(i/\hbar)S[x(t)]}$? | Question: In the book "Quantum Mechanics and Path Integrals" Feynman & Hibbs state that
The probability $P(b,a)$ to go from point $x_a$ at the time $t_a$ to the point $x_b$ at the time $t_b$ is the absolute square $P(b,a) = \|K(b,a)\|^2$ of an amplitude $K(b,a)$ to go from $a$ to $b$. This amplitude is the sum of contributions $\phi[x(t)]$ from each path. $$ K(b,a) = \sum_{\text{paths from $a$ to $b$}} \phi[x(t)].\tag{2-14}$$ The contributions of a path has a phase proportional to the action $S$: $$ \phi[x(t)] = \text{const}\ e^{(i/\hbar)S[x(t)]}.\tag{2-15}$$
Why must the contribution of a path be $\sim e^{(i/\hbar)S[x(t)]}$? Can this be somehow derived or explained? Why can't the contribution of a path be something else e.g. $\sim \frac{S}{\hbar}$, $\sim \cos(S/\hbar)$, $\log(S/\hbar)$ or $e^{- (S[x(t)]/\hbar)^2}$ ?
Edit: I have to admit that in the first version of this question, I didn't exclude the possibility to derive the contribution of a path directly from Schrödinger's equation. So answers along this line are valid although not so interesting. I think when Feynman developed his formalism his goal was to find a way to quantize systems, which cannot be treated by Schrödinger's equation, because they cannot be described in terms of a Hamiltonian (e.g. the Wheeler-Feynman absorber theory). So I think a good answer would explain Feynman's Ansatz without referring to Schrödinger's equation, because I think Schrödinger's equation can only handle a specific subset of all the systems that can be treated by Feynman's more general principle.
Answer: There are already several good answers. Here I will only answer the very last question, i.e., if the Boltzmann factor in the path integral is $f(S(t_f,t_i))$, with action $$S(t_f,t_i)~=~\int_{t_i}^{t_f} dt \ L(t)\tag{1},$$ why is the function $f:\mathbb{R}\to\mathbb{C}$ an exponential function, and not something else?
Well, since the Feynman "sum over histories" propagator should have the group property
$$\begin{align} K(x_3,t_3;x_1,t_1) ~=~&\cr
\int_{-\infty}^{\infty}\mathrm{d}x_2 \ K(x_3,t_3;x_2,t_2)& K(x_2,t_2;x_1,t_1),\end{align}\tag{2}$$
one must demand that
$$\begin{align}f(S(t_3,t_2))&f(S(t_2,t_1)) \cr~=~& f(S(t_3,t_1)) \cr~=~& f(S(t_3,t_2)+S(t_2,t_1)).\end{align}\tag{3}$$
In the last equality of eq. (3) we used the additivity of the action (1). Eq. (3) implies that
$$f(0)~=~f(S(t_1,t_1)) ~=~ 1.\tag{4}$$
(The other possibility $f\equiv 0$ is physically un-acceptable.)
So the question boils down to: How many continuous functions $f:\mathbb{R}\to\mathbb{C}$ satisfy $$f(s)f(s^{\prime}) ~=~f(s+s^{\prime})\quad\text{and}\quad f(0) ~=~1~?\tag{5}$$
Answer: The exponential function!
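A quick numerical sanity check before the proof (my addition, not part of the original answer; a real constant $c$ stands in for $i/\hbar$ so plain floats suffice):

```python
import math

# Check that f(s) = exp(c*s) satisfies eq. (5): f(s)f(s') = f(s+s') and f(0) = 1.
c = 0.7  # stand-in for i/hbar; taken real here so math.exp suffices
f = lambda s: math.exp(c * s)

assert abs(f(1.3) * f(2.1) - f(3.4)) < 1e-9
assert f(0.0) == 1.0

# And the limit in eq. (7): (1 + c*s/n)**n -> exp(c*s) as n grows.
s = 2.0
errors = [abs((1 + c * s / n) ** n - math.exp(c * s)) for n in (10, 100, 10000)]
assert errors[0] > errors[1] > errors[2]
print(errors)
```

The same algebra goes through unchanged with complex $c=i/\hbar$, which is the case relevant to the propagator.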
Proof (ignoring some mathematical technicalities): If $s$ is infinitesimally small, then one may Taylor expand
$$\begin{align}f(s) ~=~& f(0) + f^{\prime}(0)s +{\cal O}(s^{2}) \cr ~=~& 1+cs+{\cal O}(s^{2}) \end{align}\tag{6}$$
with some constant $c:=f^{\prime}(0)$. Then one calculates
$$\begin{align} f(s) ~=~&\lim_{n\to\infty}f(\frac{s}{n})^n\cr
~=~&\lim_{n\to\infty}\left(1+\frac{cs}{n}+o(\frac{1}{n})\right)^n\cr
~=~&e^{cs},\end{align} \tag{7}$$
i.e., the exponential function! $\Box$ | {
"domain": "physics.stackexchange",
"id": 57865,
"tags": "quantum-mechanics, path-integral, action, propagator, partition-function"
} |
Overlapping atomic radii in data of experimentally observed protein assemblies | Question: I am looking at experimentally observed configurations of viral capsid proteins like that of the tobacco mosaic virus. https://www.rcsb.org/structure/6R7M
When taking the atom centers of a monomer and applying the transformations specified in the .pdb file, I get a structure where the VDW radii of different proteins overlap. I have tested a variety of sets of radii and it happens for all of them. I tested with Bondis radii, ProtOr radii and several radii taken from popular force fields (AMBER, PARSE, CHARMM).
I am wondering:
What is the least wrong set of radii to choose here and why?
Is there a popular way to resolve this?
The Clashscore depends on overlaps within a protein right? What radii are chosen for this measure?
Thank you in advance
Ivan
Answer: Looking at the validation report for model 6R7M, we see that there are no symmetry-related clashes listed. (Although there are two clashes within the asymmetric unit, HA of ASP20 - HD3 of PRO21 and HG13 of ILE95 - HB2 of ALA111)
The relevant portion of the PDB's validation explanation page is here:
All-atom contacts within the ASU are calculated by the Reduce and Probe programs within MolProbity (Word et al., 1999; Chen et al., 2010). This method was developed to quantify the detailed non-covalent fit of atomic interactions within or between molecules (H-bonds, favorable van der Waals, and steric clashes). Since most such interactions involve H atoms on one or both sides, all hydrogens must be present or added (Reduce optimizes rotation of OH, SH, NH3, etc. within H-bond networks, but methyls stay staggered). At present, in order to ensure comparable scores between NMR and X-ray, hydrogen atoms are removed from the analysed structure, and replaced by a different set placed by Reduce in idealised and optimized nuclear-H positions. All-atom unfavorable overlaps ≥0.4Å are then identified as clashes, using van der Waals radii tuned for the nuclear H positions suitable for NMR (rather than the electron-cloud H positions suitable for X-ray).
(bolding mine)
The main takeaway is that MolProbity does not consider that amount of overlap to be a problem. So to your second bullet point, I don't think anything needs to be resolved.
As for which radii should be used, I don't have good advice. It looks like that is still being debated. However, looking up the original paper for the MolProbity tools used by the PDB for validation, the radii for O and non-carbonyl C are 1.4 and 1.75 Angströms, respectively. That sums to 3.15, I measure 2.92 between the atomic centers, so slightly less overlap than you got.
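The arithmetic in the previous paragraph, spelled out (my sketch; radii and distance as quoted above):

```python
# VDW overlap from the MolProbity radii quoted above (values in Angstroms).
# MolProbity only flags overlaps of at least 0.4 A as clashes.
r_O, r_nonCarbonyl_C = 1.4, 1.75
center_distance = 2.92
overlap = (r_O + r_nonCarbonyl_C) - center_distance
print(f"overlap = {overlap:.2f} A")  # 0.23 A, below the 0.4 A clash cutoff
assert overlap < 0.4
```

So the observed contact sits comfortably under the clash threshold.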
My initial assumption was that these would be poorly-resolved residues near the edge of the macromolecule and that the VDW overlap would be due to fitting errors, but it looks pretty good to me: | {
"domain": "biology.stackexchange",
"id": 12520,
"tags": "molecular-biology, protein-structure"
} |
How was the neutron's spin measured? | Question: In 2012 it was asked, How to measure the spin of a neutral particle. I'm not sure that the answer "Neutron spin can be measured in a Stern Gerlach setup." was really carried out. So what for techniques are required to measure the spin of a neutron?
Answer: The spin of the neutron was measured by the Stern-Gerlach experiment by Sherwood, Stephenson and Bernstein (1954) (sadly paywalled, free links welcome),
Abstract: A neutron beam was polarized by total reflection from a magnetized iron mirror. The beam was then analyzed by passing it through an inhomogeneous magnetic field. From the deflection pattern obtained, it is inferred that the resultant neutron spin in the polarized beam was parallel to the magnetic field applied to the mirror. Thus the nuclear and magnetic scattering amplitudes for iron are of the same sign when the neutron spin and electronic spin are oppositely directed, and conversely.
So the techniques would be the same for any other Stern-Gerlach apparatus. Note that neutral particles will still have an intrinsic magnetic moment, so the Stern-Gerlach experiment would result in the split pattern. | {
"domain": "physics.stackexchange",
"id": 23830,
"tags": "experimental-physics, quantum-spin, neutrons, precession"
} |
What is the difference between options protein and replication in the NCBI database? | Question: After checking the NCBI help page, I am still unclear about the difference between protein and replication interactions for HIV.
http://www.ncbi.nlm.nih.gov/genome/viruses/retroviruses/hiv-1/interactions/
Answer: Interactions denote protein-protein interactions, which means physical association between proteins. By nature, these networks/graphs are undirected.
Replication interactions (actually a not very good term) denote gene regulatory interactions that affect HIV replication. These sets also include the regulatory effects of HIV genes on host genes (and hence the terminology is unsuitable). These networks are directed as well as signed (positive or negative interactions i.e. activation or inhibition respectively).
You can see for yourself, from the dropdown boxes and the interaction lists in this page. | {
"domain": "biology.stackexchange",
"id": 4214,
"tags": "bioinformatics, proteins, virology, protein-interaction"
} |
Is it possible to create a shockwave gun (Like in Minority Report) | Question: I love the gun in Minority Report. It seems to send a shockwave that can throw people and objects back. The gun has a rotating mechanism that triggers the action and loads a charge.
Here it is in action: http://gph.is/29xazEx
I Googled shockwave guns but it was mostly pressurized air inside of plastics tubes. Very uncool and mostly fit to destroy flowers or topple pencils.
For it to be usable as a sidearm it would have to be:
Light
Portable
Produce sufficient force WITHOUT actually exploding
NOT throw the user back or break his arm
Now would it be possible to create this gun in real life - assuming you had an army of researchers?
If not, what are the physical impossibilities?
Thanks for indulging me.
Answer: The device doesn't have to generate shock waves. A vortex ring generator will do. When I was in grad school (1960s) I attended a lecture where a demonstration was presented that was basically an empty coffee can with the top and bottom removed and a rubber membrane secured over one end with a strong rubber band. A clamp around the can served as a handle to hold it in place. The lecturer lit a cigarette and blew smoke into the can and then hit the rubber membrane with a rubber hammer while holding the can stationary with his other hand. This sent a 15 cm diameter smoke ring propagating across the lecture hall. He then had a person wear a hat while standing at the far end of the lecture hall and showed that a properly aimed vortex ring could knock the hat off (estimated distance of 10 m).
This isn't the kind of "gun" shown in your linked video, but it does give an indication of what could be done with a simple vortex ring generator. Of course (as with all guns) recoil would have to be taken into account which would probably compromise some of your requirements. | {
"domain": "physics.stackexchange",
"id": 73328,
"tags": "forces, pressure, explosions, shock-waves"
} |
How does the coil rotate? | Question: I came across this question in one of my tests. But I was unable to understand which way does the coil actually rotate ?
Any visualizations would be quite helpful. As per the given question I thought it rotates in an out of the plane of the screen about the upper side of the frame. But I am not so sure ?
Answer: You don't need to think of the whole loop all at once. Each of the four legs of the "loop" can be considered one at a time.
Also, the problem is in a sense two-dimensional, since the way it is specified the loop is fixed such that only torque about the x-axis matters. So if it is easier, you can just draw two-dimensional diagrams only showing y and z axis and just sort of forget about the x axis.
For each leg you have a magnetic force contribution to the torque and a gravitational contribution to the torque.
The leg running parallel to the x-axis with the current running in the negative x direction is fixed in place (it is the axis of rotation as specified in the problem). So take the origin at some (any) point on this leg. This leg thus does not contribute to the torque.
So now you only have to consider the other three legs (one running parallel to the x-axis with current in the positive x direction and and two not parallel to the x-axis).
The two legs that are not running parallel to the x-axis do not contribute any torque due to the magnetic force. This is because $\vec \tau = \vec r \times L(\vec I\times\vec B)$, but for these two legs $\vec I$ changes sign in the torque calculation, but $\vec r$ does not.
However, these two legs both contribute equally to the torque due to the gravitation force (gravity is in the $-\hat k$ direction for both--you should not have trouble calculating this torque).
Now there is just one last leg to consider: the leg running parallel to the x axis, with the current running parallel to the x axis. The torque due to gravity is again in the same direction as the last two legs (but of a different magnitude--you should not have trouble calculating it). I think the total torque due to gravity (all three legs) should come out to something like
$$
-\sqrt{2}Lmg\;,
$$
where the negative sign in my convention means it is causing it to fall back towards the ground.
The torque due to the magnetic field for the final leg we are considering is non-zero, but can be calculated straightforwardly (since effectively in the 2d world the $\vec r$ is constant--in the 3d world $\vec r$ isn't constant as you integrate along the leg, but this doesn't matter because the variable part doesn't contribute torque along x). I think it comes out to:
$$
+\frac{4IL^2}{\sqrt{2}}\;.
$$
In equilibrium these two contributions sum to zero, which allows you to solve for $I$ as:
$$
I = \frac{mg}{2L}\;.
$$
The units might look a little weird, but that is because B as specified in the problem is unitless.
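Plugging the claimed torques back together is an easy numerical check (my sketch, with arbitrary m, g, L; B unitless as specified in the problem):

```python
import math

# Gravity torque -sqrt(2)*L*m*g balances the magnetic torque +4*I*L**2/sqrt(2)
# exactly when I = m*g/(2*L), as derived above.
m, g, L = 0.3, 9.8, 0.5
I = m * g / (2 * L)
net_torque = -math.sqrt(2) * L * m * g + 4 * I * L**2 / math.sqrt(2)
assert abs(net_torque) < 1e-12
print(I, net_torque)
```

Indeed $4IL^2/\sqrt{2} = 4\,(mg/2L)\,L^2/\sqrt{2} = \sqrt{2}\,mgL$, cancelling the gravity term.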
I'm not positive that I worked out all the details correctly, but this is probably at least good enough to get you started and you can check the details as you solve the problem on your own. | {
"domain": "physics.stackexchange",
"id": 75611,
"tags": "homework-and-exercises, electromagnetism, electromagnetic-induction"
} |
What fraction of dry land is below sea level? | Question: Someone just asked me if it would be practical to counter the rise of sea level by pumping water into storage on land. It struck me that if there is enough land below sea level, this would require construction of aquifers, but would not require using energy to pump the water as the destination of the water would be below its current level. But I have no idea how much land below sea level is available.
Answer: This question is intriguing. At the current rate of sea rise (about 3 mm per year), all of the Earth's dry land below sea level could absorb only about a decade's worth of the added water. There are an estimated 49 countries around the globe with at least some real estate below 0 metres elevation, but the sum of all the potential volume (around 7,500 km^3) is insignificant compared to the volume of water generated by glacial and ice cap melt: "A meter of sea level rise is a volume 50 times greater than all of the depressions that are below sea level in the world", according to this link: http://mountainmystery.com/2015/08/17/hiding-rising-seas-in-sunken-deserts/. | {
"domain": "earthscience.stackexchange",
"id": 564,
"tags": "geophysics, ocean, climate, sea-level, water"
} |
How can I transform a Result into a Bool in Q# + C# environment? | Question: I'm writing a Q# program and I'm calling an operation defined in the program that outputs a Result (One or Zero).
For example something like this:
output = QuantumOperation.Run(sim).Result;
I want to convert now output to a bool within C#, but I don't know how.
One way to go around this is to change the output of QuantumOperation to be a bool and transforming the result at the end of the operation with ResultAsBool(result). However, I want to maintain the output of the operation to be a Result. So ideally the conversion should happen within the C# host code.
Answer: Q# type Result is represented by C# class Microsoft.Quantum.Simulation.Core.Result; in particular, it has constants Result.Zero and Result.One that you can compare the return of your operation to.
using Microsoft.Quantum.Simulation.Core;
boolOutput = (QuantumOperation.Run(sim).Result == Result.One); | {
"domain": "quantumcomputing.stackexchange",
"id": 1133,
"tags": "q#"
} |
Are the unperturbed hydrogen wavefunctions solutions of the fine structure Schrödinger equation? | Question: The hydrogen atom Hamiltonian, with fine structure effects included, is $$H = \frac {p^2}{2m}-\frac{e^2}{r}-\frac{p^4}{8m^3c^2}+\frac{e^2}{2m^2c^2}\frac{\mathbf{S\cdot L}}{r^3}+\frac{\pi}{2}\frac{e^2\hbar ^2}{m^2c^2}\delta(r).$$ Where the first two terms are the unperturbed Hamiltonian, $H_0$, the third term is the relativistic correction, $H_{rel}$, the fourth term is the spin-orbit coupling, $H_{s.o}$, and the final term is the Darwin correction, $H_D$. Correct me if I'm wrong, but all of these terms commute with the square of the total angular momentum operator $\mathbf J^2= \mathbf S^2+\mathbf L^2 +2\mathbf {S\cdot L},$ and so the eigenfunctions $\psi_{njm}$ of $H_0$ (in the coupled basis) should diagonalize the entire Hamiltonian. Is this correct?
Answer: You are not correct.
Your Hamiltonian commutes with $\vec J=\vec L+\vec S$ but $H_0$ does not commute with $p^4/8m^3c^2$ or with the $\delta(r)$ term.
With $(n\ell m)$ referring to unperturbed hydrogen states, the perturbation will mix $n$ values via the $p^4$ and $\delta(r)$ terms, and the spin-orbit term will mix $\ell$ values, because two states with $\Delta \ell=1$ (like $\ell_1=1$ and $\ell_2=2$) can each combine with $s=1/2$ to form states with the same $j$ (like $j=3/2$). | {
"domain": "physics.stackexchange",
"id": 90239,
"tags": "quantum-mechanics, perturbation-theory, hydrogen"
} |
Catkin and overlays | Question:
Just to recap the workspace_overlaying tutorial, I have two catkin based workspaces catkin_A and catkin_B. I want to configure them so that the catkin_A overlays ROS standard packages and catkin_B overlays catkin_A. Assuming that both have been properly initialised and filled with packages as follows
$ source /opt/ros/hydro/setup.bash
$ mkdir -p ~/catkin_X/src
$ cd ~/catkin_X/src; catkin_init_workspace
So to set up the overlay structure i do
$ source /opt/ros/hydro/setup.bash
$ cd ~/catkin_A
$ catkin_make
and
$ source ~/catkin_A/devel/setup.bash
$ cd ~/catkin_B
$ catkin_make
In order to use the catkin_B workspace I have to run $ source ~/catkin_B/devel/setup.bash. The $ source ~/catkin_A/devel/setup.bash step is the crucial part, as the tutorial says. The question is: do I have to do it every time I call catkin_make in catkin_B, to clear catkin_B from the environment?
How about catkin_make in catkin_A: if I have previously used catkin_B, do I have to source /opt/ros/hydro/setup.bash first?
I'm asking this question as the overlay setup gets broken from time to time, rosrun starts looking in /opt/ros/hydro for nodes that should be overlaid in catkin_A.
Originally posted by liborw on ROS Answers with karma: 801 on 2013-08-22
Post score: 1
Original comments
Comment by Dirk Thomas on 2013-08-22:
Btw. the invocation of catkin_init_workspace is not necessary. That will be implicitly done by catkin_make.
Comment by liborw on 2013-08-22:
Thanks I didn't know that.
Answer:
No. It sets bash environment variables.
You need to run it once for each time you open a new shell (or terminal). If you almost always use catkin_B, just source it in your ~/.bashrc, which automatically runs each time you open a new shell.
Originally posted by joq with karma: 25443 on 2013-08-22
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by liborw on 2013-08-22:
I probably wasn't clear so I updated the question. | {
"domain": "robotics.stackexchange",
"id": 15338,
"tags": "ros, catkin, overlay"
} |
Is it NP-hard to _play_ minesweeper perfectly? | Question: This paper shows that it is NP-hard "to determine if there is some pattern of mines in the blank squares that give rise to the numbers seen."
If there is a way to "lead a perfect player into" such positions, then it would easily follow that it is also NP-hard to play minesweeper perfectly. However, I do not see any way to rule out the possibility of an algorithm that plays minesweeper perfectly by never going into the sort of positions that would be produced by applying that paper's reduction to hard instances of SAT.
Is playing minesweeper perfectly known to be NP-hard?
Answer: Just an extended comment; in
Allan Scott, Ulrike Stege, Iris van Rooij, Minesweeper May Not Be NP-Complete but Is Hard Nonetheless
the authors face the problem I mentioned in my comment above.
From the abstract:
In volume 22 of The Mathematical Intelligencer, Richard
Kaye published an article entitled "Minesweeper is NP-Complete."
We point out an oversight in Kaye's analysis
of this well-known game. As a consequence, his
NP-completeness proof does not prove the game to be hard.
We present here an improved model of the game,
which we use to show that the game is indeed a hard problem;
in fact, we show that it is co-NP-complete.
....In this article, we investigate the complexity of the
deterministic part of playing Minesweeper. In our
analysis, we assume that the player is given a
consistent Minesweeper board configuration.
That is, when completely revealed,
the board presented has in each square either
a mine or a number corresponding to the correct number
of neighboring mines.
Further, we assume an ideal player who makes no
mistakes in reasoning. Note that these assumptions imply
that the player can only lose (try to open a square containing a mine) when forced
to guess. Therefore, our optimal player who follows
the strategy described previously will avoid guessing whenever possible.
However, to do so he must be able to decide
whether it is possible to make progress on the board
without guessing. This problem is what we call the
Minesweeper Inference problem.
...
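As an aside (my illustration, not from either paper), Kaye's consistency question is easy to brute-force on tiny boards, which makes the decision problem concrete — the papers are about the fact that this blows up exponentially in general:

```python
from itertools import product

# Minesweeper consistency on a tiny board: is there an assignment of mines to
# the unknown squares that reproduces the revealed numbers? Board entries:
# an int = revealed count, None = unknown. Revealed squares carry no mines.
def consistent(board):
    rows, cols = len(board), len(board[0])
    unknowns = [(r, c) for r in range(rows) for c in range(cols)
                if board[r][c] is None]
    numbered = [(r, c) for r in range(rows) for c in range(cols)
                if board[r][c] is not None]
    for mines in product([False, True], repeat=len(unknowns)):
        mine_at = dict(zip(unknowns, mines))
        ok = True
        for (r, c) in numbered:
            count = sum(mine_at.get((r + dr, c + dc), False)
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
            if count != board[r][c]:
                ok = False
                break
        if ok:
            return True
    return False

print(consistent([[1, None], [None, None]]))  # True: one mine fits the "1"
print(consistent([[0, None], [None, 2]]))     # False: "0" and "2" conflict
```

The loop over all $2^{\text{unknowns}}$ assignments is exactly the naive certificate search that the NP-hardness results say cannot, in general, be shortcut.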
There is also a recent paper on arXiv that seems relevant (I just found it now):
Michiel de Bondt, The computational complexity of Minesweeper
Abstract: We show that the Minesweeper game is PP-hard, when the object is to locate all mines with the highest probability. When the probability of locating all mines may be infinitesimal, the Minesweeper game is even PSPACE-complete. In our construction, the player can reveal a boolean circuit in polynomial time, after guessing an initial square with no surrounding mines, a guess that has 99 percent probability of success. Subsequently, the mines must be located with a maximum probability of success.
Furthermore, we show that determining the solvability of a partially uncovered Minesweeper board is NP-complete with hexagonal and triangular grids as well as a square grid, extending a similar result for square grids only by R. Kaye. Actually finding the mines with a maximum probability of success is again PP-hard or PSPACE-complete respectively.
Our constructions are in such a way that the number of mines can be computed in polynomial time and hence a possible mine counter does not provide additional information. The results are obtained by replacing the dyadic gates in [3] by two primitives which makes life more easy in this context. | {
"domain": "cstheory.stackexchange",
"id": 2559,
"tags": "np-hardness, board-games"
} |
Are elementary particles as old as the universe? | Question: Lets take quarks for example. Are the quarks - that constitute the universe - as old as the universe? Or are particles always created? Or were they all created when the universe was born?
Answer:
Are elementary particles as old as the universe?
Some of them are and some of them aren’t.
The quarks and electrons in most atoms have been around since the Big Bang. The photons of the cosmic microwave background have been traveling across the universe for almost 14 billion years.
On the other hand, every kind of elementary particle can be created and destroyed. It is particularly easy to create massless particles because they can have arbitrarily small amounts of energy. A radio antenna, for example, creates scads of low-energy photons. The Sun creates enormous numbers of photons and neutrinos.
It takes more energy to create particles with mass, because their minimum energy is $mc^2$. But we do this all the time in accelerators. A high energy photon passing by an atom can pair-create an electron and a positron. Two colliding protons can create multiple quark-antiquark pairs. Or a Higgs boson. Etc. Feynman diagrams represent particles interacting and sometimes appearing, disappearing, or turning into a different kind of particle.
Relativistic quantum field theory was designed with particle creation and destruction as a core feature. A quantum field is a field of operators that create and destroy particles! This is very different from, say, the non-relativistic quantum mechanics of the Schrodinger equation.
In conclusion, some elementary particles are very old, and others are brand new. | {
"domain": "physics.stackexchange",
"id": 66437,
"tags": "cosmology, universe, quarks, elementary-particles"
} |
Bridge, We've Got a Problemo | Question: Now that we can move across the bridge, it's time to finally set up the problem so that it can be solved. A reminder of what the assignment is:
Welcome to the Bridge Crossing Problem. Person Pn can cross the bridge
in n minutes. Only one or two persons can cross at a time because it
is dark, and the flashlight must be taken on every crossing. When two
people cross, they travel at the speed of the slowest person. Devise a
sequence of crossings so that all four people get across the bridge in
no more than 17 minutes.
Here's what I've come up with in a class form. Any suggestions for improvement?
BridgeProblem.java:
package bridge;
import java.util.List;
import java.util.ArrayList;
/**
* This class represents the Bridge Crossing problem.
* It provides an introductory message describing the problem,
* stores the problem's possible moves and current state, and
* tests for whether the problem has been successfully solved.
* @author syb0rg
*/
public class BridgeProblem
{
private List<BridgeMove> moves;
private BridgeState currentState;
/**
* The bridge problem constructor should create the initial bridge state
* object and store it as the problem's current state.
* It should also create the 10 valid bridge move objects and store them
* on an accessible list.
*/
public BridgeProblem()
{
String[] moveNames = new String[]{"P1 crosses alone",
"P2 crosses alone",
"P5 crosses alone",
"P10 crosses alone",
"P1 crosses with P2",
"P1 crosses with P5",
"P1 crosses with P10",
"P2 crosses with P5",
"P2 crosses with P10",
"P5 crosses with P10"};
currentState = new BridgeState(Position.WEST,
Position.WEST,
Position.WEST,
Position.WEST,
Position.WEST,
0);
moves = new ArrayList<BridgeMove>();
for (int i = 0; i < moveNames.length; ++i)
{
moves.add(new BridgeMove(moveNames[i]));
}
}
/**
* Getter (accessor) for this problem's introduction string.
* @return the introduction string
*/
public String getIntroduction()
{
return "Welcome to the Bridge Crossing Problem.\n\n" +
"Person Pn can cross the bridge in n minutes.\n" +
"Only one or two persons can cross at a time because it is dark,\n" +
"and the flashlight must be taken on every crossing.\n" +
"When two people cross, they travel at the speed of the slowest person.\n" +
"Devise a sequence of crossings so that all four people get across\n" +
"the bridge in no more than 17 minutes.\n\n";
}
/**
* Getter (accessor) for this problem's list of valid bridge move objects.
* @return the list of bridge moves
*/
public List<BridgeMove> getMoves()
{
return moves;
}
/**
* Tests for whether the current state of this problem indicates that the
* problem has been successfully solved.
* @return true if the problem has been solved, false otherwise
*/
public boolean success()
{
return currentState.getTimeSoFar() <= 17
&& currentState.getP1Position() == Position.EAST
&& currentState.getP2Position() == Position.EAST
&& currentState.getP5Position() == Position.EAST
&& currentState.getP10Position() == Position.EAST
&& currentState.getFlashlightPosition() == Position.EAST;
}
/**
* Getter (accessor) for this problem's current bridge state.
* @return the current state
*/
public BridgeState getCurrentState()
{
return currentState;
}
/**
* Setter (mutator) for this problem's current bridge state.
* @param currentState the state to be made this problem's current state
*/
public void setCurrentState(BridgeState currentState)
{
this.currentState = currentState;
}
}
Answer: Why are you creating all these classes if you end up hardcoding all the values anyway?
There are two parts especially that I would make more flexible:
String[] moveNames = new String[]{"P1 crosses alone",
"P2 crosses alone",
"P5 crosses alone",
"P10 crosses alone",
"P1 crosses with P2",
"P1 crosses with P5",
"P1 crosses with P10",
"P2 crosses with P5",
"P2 crosses with P10",
"P5 crosses with P10"};
And:
return currentState.getTimeSoFar() <= 17
&& currentState.getP1Position() == Position.EAST
&& currentState.getP2Position() == Position.EAST
&& currentState.getP5Position() == Position.EAST
&& currentState.getP10Position() == Position.EAST
&& currentState.getFlashlightPosition() == Position.EAST;
These things indicate, to me, some things that I would change:
A BridgeMove should not be constructed from a String. It makes more sense to construct it with a varargs parameter, BridgeMove(String... names); descriptions such as "P1 crosses with P2" could then be the result of calling the toString method (or a getDescription method) of that class.
Even better than using Strings for a person would be a class Person with two fields: name and time.
A BridgeState should contain a Map<String, Position> where the keys are the name of the person and the value is the position of that person.
Don't hardcode the parameters of the problem in your constructor, add a factory method that creates an instance with those parameters instead.
Example of factory method: (this can be improved further in various ways)
public static BridgeProblem createStandardProblem() {
List<Person> persons = new ArrayList<>();
Person p1 = new Person("Adam", 1);
persons.add(p1);
Person p2 = new Person("Bert", 1);
persons.add(p2);
...
List<BridgeMove> moves = new ArrayList<>();
moves.add(new BridgeMove(p1));
moves.add(new BridgeMove(p2));
moves.add(new BridgeMove(p1, p2));
...
return new BridgeProblem(persons, moves);
}
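Along the same lines, the success check can loop over all persons instead of hardcoding one getter per person. This is only a sketch: the Map<String, Position> state and the isSuccess name follow the suggestions above, while the class shown here is otherwise hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SuccessCheck {
    public enum Position { WEST, EAST }

    // Loop over every person's position instead of four hardcoded getters;
    // the Map<String, Position> layout follows the BridgeState suggestion above.
    public static boolean isSuccess(Map<String, Position> positions,
                                    Position flashlight, int timeSoFar) {
        if (timeSoFar > 17 || flashlight != Position.EAST) {
            return false;
        }
        for (Position p : positions.values()) {
            if (p != Position.EAST) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Position> positions = new LinkedHashMap<>();
        positions.put("P1", Position.EAST);
        positions.put("P2", Position.EAST);
        positions.put("P5", Position.EAST);
        positions.put("P10", Position.EAST);
        System.out.println(isSuccess(positions, Position.EAST, 17)); // true
        positions.put("P5", Position.WEST);
        System.out.println(isSuccess(positions, Position.EAST, 17)); // false
    }
}
```

With this shape, adding a fifth person changes the data, not the success check.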
It's quite easy to make these changes, which would make your code much cleaner. Perhaps more importantly: use a loop in your success method (which should perhaps be named isSuccess, by the way) to check whether all your persons are at the eastmost position. | {
"domain": "codereview.stackexchange",
"id": 18180,
"tags": "java, object-oriented, homework"
} |
inertia tensor of rigid body in generalized coordinate frame? | Question: Assuming we know the inertial tensor of a homogeneous rigid body about a coodinate frame at its COM and aligned to it principal axes, how do we find the inertial tensor for the body in some other general coordinate frame which has a linear transformation (4x4) (which accounts for both rotation and translation) T from the principal C.F at the COM ?
Answer: The general 4×4 transformation matrix has a structure where the top-left 3×3 submatrix holds the rotation plus any scaling factors. If there is no scaling, then extract this matrix $\mathrm{R}$ from $$\text{transform}= \left[ \matrix{ \mathrm{R} & \vec{t} \\ \vec{0}^\intercal & 1 } \right] $$
Then you do the standard
$$ \mathrm{I}_{\rm world} = \mathrm{R}\,\mathrm{I}_{\rm body} \mathrm{R}^\intercal $$
This assumes the transformation is defined in the local -> world sense.
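A quick numerical check of this similarity transform (a pure-Python sketch, no particular library assumed): rotating a diagonal body-frame tensor by 90° about the z-axis should simply swap the x and y moments of inertia.

```python
def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

# Body-frame inertia tensor (diagonal in the principal axes) and a
# 90-degree rotation about the z-axis
I_body = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]]
R = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]

# I_world = R * I_body * R^T: the x and y moments swap, z is unchanged
I_world = matmul(matmul(R, I_body), transpose(R))
print([I_world[i][i] for i in range(3)])  # [2.0, 1.0, 3.0]
```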
Now if you want to use the parallel axis theorem to move the mass moment of inertia (MMOI) definition to a new point, then you use the following rule
$$ \mathrm{I}_{\rm world} = \mathrm{R}\,\mathrm{I}_{\rm body} \mathrm{R}^\intercal - m [\vec{t}\times] [\vec{t}\times]$$
where $[\vec{t}\times]$ is the 3×3 skew-symmetric cross product operator:
$$\pmatrix{x\\y\\z} \times = \left[ \matrix{0 & -z & y\\ z & 0 & -x \\ -y & x & 0} \right] $$
such that $\vec{a} \times \vec{b}$ becomes the vector/matrix product $[\vec{a} \times] \vec{b}$. | {
"domain": "physics.stackexchange",
"id": 57034,
"tags": "rotational-dynamics, reference-frames, moment-of-inertia"
} |
Does a diatomic molecule falling into a black hole dissociate? | Question: I've just answered Dipping a Dyson Ring below the event horizon, and while I'm confident my answer is correct I'm less certain about the exact consequences. To simplify the situation consider a diatomic molecule falling into a Schwarzschild black hole with its long axis in a radial direction.
The inner atom cannot interact with the outer atom because no radially outwards motion is possible, not even for electromagnetic fields travelling at the speed of light. However I find I'm uncertain exactly what the outer atom would experience.
We know that the outer atom feels the gravitational force of the black hole even though gravitational waves cannot propagate outwards from the singularity. That's because it's experiencing the curvature left behind as the black hole collapsed. Would the same be true for the interaction of the outer atom with the inner atom? Would it still feel an electromagnetic interaction with the inner atom because it's interacting with the field (or virtual photons if that's a better description) left behind by the inner atom? Or would the inner atom effectively have disappeared?
If the latter, presumably the fanciful accounts of observers falling into the black hole (large enough to avoid tidal destruction) are indeed fancy since it's hard to see how any large scale organisation could persist inside the event horizon.
Later:
I realise I didn't ask what I originally intended to. In the above question my molecule is freely falling and the question arose from a situation where the object within the event horizon is attempting to resist the inwards motion. I'll have to go away and re-think my question, but thanks to Dmitry for answering what I asked even if it wasn't what I meant :-)
Answer: I might be mistaken, but:
Even though the molecule is inside the event horizon (relative to a distant observer), from the point of view of the molecule the event horizon is still ahead of it, and has not yet been reached. The inner atom would still be able to communicate with the outer atom well after they've both crossed the event horizon from our point of view.
It's only when the molecule is much closer to the singularity (i.e. about to be spaghettified), that the inner atom will disappear into the event horizon relative to the outer atom; any bond between them will be broken, and any charge of the inner atom will be added to the charge of the black hole. | {
"domain": "physics.stackexchange",
"id": 10026,
"tags": "general-relativity"
} |
Noble gas configuration and valence shell electrons | Question: I know that the maximum number of electrons within a shell is equal to 2n^2. I would think that the noble gases would reflect this but that isn’t always the case
n = 1
2(1^2) = 2, this is the atomic number of He ✅
n = 2
2(2^2) = 8, plus the 2 e from n = 1, this is 10, which is the atomic number for Ne ✅
n = 3
2(3^2) = 18, this is the atomic number for Ar, but I realized that the 3d orbital is empty. The 3rd energy level should hold a total of 18 e, and the total number of electrons up to the 3rd energy level should include the 10 e from the previous two. I understand that this is because of the Aufbau principle, but it got me confused about how to think about noble gases. I thought they were supposed to have the maximum number of valence electrons, but then what about group 12, lutetium, and lawrencium? Just based on their electron configurations, every orbital they have is completely occupied. I know these are very different from noble gases. I'm just wondering why, especially because these elements can be very reactive (right?).
Edit: I forgot to mention that after n = 3, nothing matches up at all.
n = 4
2(4^2) = 32, For Kr, Z = 36
n = 5
2(5^2) = 50, For Xe, Z = 54
n = 6
2(6^2) = 72, For Rn, Z = 86
And lastly, Idk if Oganesson is even considered a noble gas but it’s in the same group so I’ll put it here.
n = 7
2(7^2) = 98, For Og, Z = 118
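For reference, the cumulative 2n^2 shell capacities can be tabulated against the noble-gas atomic numbers (a quick sketch; the atomic numbers are hardcoded for comparison):

```python
# Shell capacity is 2n^2; the cumulative totals would equal noble-gas
# atomic numbers only if shells filled strictly in order of n.
noble_gas_Z = {1: 2, 2: 10, 3: 18, 4: 36, 5: 54, 6: 86, 7: 118}

running = 0
for n in range(1, 8):
    running += 2 * n**2
    match = "matches" if running == noble_gas_Z[n] else "differs"
    print(f"n={n}: cumulative 2n^2 = {running}, Z = {noble_gas_Z[n]} ({match})")
```

Only n = 1 and n = 2 match; from n = 3 on, the cumulative totals (28, 60, ...) overshoot the actual atomic numbers.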
Answer: The number of orbitals per shell (n) is n^2. At most there are two electrons per orbital, each with opposite spin. However, due to nuclear screening, the subshells (n and l), and as a result the orbitals (n, l, and m_l), are not of the same energy.
When you get to more complicated situations, such as the third row transition metals, and you see exceptions such as chromium and copper (and their respective group members) having ns^1 (n-1)d^5 and ns^1 (n-1)d^10 configurations, the added stability from half-filled and fully filled subshells outweighs pairing the ns electron as it is more distant from the nucleus and less stabilized by screening.
With lutetium, the configuration is [Xe] 4f^14 5d^1 6s^2; for lawrencium, the configuration is [Rn] 5f^14 7s^2 7p^1.
In terms of reactivity, valence electrons further from the nucleus are easier to remove, so reactivity can vary. With regard to these rare earth metals you mentioned, I am not as familiar and have limited knowledge of them. But, in terms of filling orbitals, you are correct to follow Aufbau (f will come before d and be (n-1) lower than the respective d, and d will be (n-1) lower than the respective s/p).
Does this help? :) | {
"domain": "chemistry.stackexchange",
"id": 15576,
"tags": "physical-chemistry, transition-metals, electronic-configuration, noble-gases"
} |
Is this tall, white tree in central Australia a kind of Eucalyptus? | Question: This is near Alice Springs NT, Australia, about -24°S latitude, 550m. Area is arid but it's in a maintained park.
I don't know the exact size, but the photo is hand-held from the ground and clearly the tree is quite tall.
I only have this one photo to work with (it's not mine). I've added some cropped sections of the same original image below for easier viewing.
It reminded me of the tree I saw in the linked video below, which is likely a Eucalyptus considering the koala context.
I don't need an exact species, just confirmation of my genus hunch.
Oh! No koalas were seen in the tree eating leaves at the time, which would have made a "differential diagnosis" easier.
From Koala Bears and Eucalyptus - Periodic Table of Videos at about 02:10 (Sir Poliakoff reminds us a few times in the video (himself as well) that koalas are not "bears"!)
Answer: This tree is ilwempe (Corymbia aparrerinja) or Ghost Gum. It is endemic to Central Australia where you saw it, and depicted in the paintings of local aboriginal people.
https://en.wikipedia.org/wiki/Corymbia_aparrerinja
Before the 1990s all trees of this form were in the genus Eucalyptus (and usually called gum trees); since then many have been put into the genera Corymbia and Angophora | {
"domain": "biology.stackexchange",
"id": 10716,
"tags": "species-identification"
} |
geometrical optics | Question: I am doing an experiment now.
My structure has 700nm periodicity.
The surface of the unit structure is sloped at 45 degrees from the normal incidence angle.
My laser is 500 nm.
I can expect there will be diffraction.
I wonder if geometrical optics is applicable in my case?
Or do I just need to consider the diffraction?
I am asking this question because I have something that I cannot explain.
Answer: It depends on what you mean by whether "geometrical optics can be applicable".
Geometric optics will work fine with a system with a grating like the one you mention, although you need to know the rules for calculating the directions and strengths of the transmitted / reflected rays. There are three main points to heed here:
In general, several separate rays will emerge from the incidence of one ray;
Their directions are governed by the Bragg Resonance Condition.
Their relative strengths are governed by the actual shape of the periodic structure: these are related to the amplitudes of the components in the Fourier series describing the periodic index variation.
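Point 2 can be made concrete for the numbers in the question (a sketch assuming normal incidence; the 45° facet slope influences the relative strengths of the orders, point 3, rather than their directions):

```python
import math

period = 700e-9      # grating period in metres
wavelength = 500e-9  # laser wavelength in metres

# Grating equation at normal incidence: period * sin(theta_m) = m * wavelength.
# Only orders with |sin(theta_m)| <= 1 propagate; the rest are evanescent.
orders = []
for m in range(-5, 6):
    s = m * wavelength / period
    if abs(s) <= 1:
        orders.append((m, math.degrees(math.asin(s))))

for m, theta in orders:
    print(f"order {m:+d}: {theta:6.2f} degrees")
```

So besides the specular m = 0 beam there are two first-order beams near ±45.6°; all higher orders are evanescent.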
The reason that geometric optics works well here is that a ray stands for essentially a plane wave i.e. a wavefront that has minimal aberration over length scales that are typically much longer than the length scales in the grating you mention. Plane waves incident on a grating give rise to a set of plane waves, one for each integer solution in the Bragg resonance law. | {
"domain": "physics.stackexchange",
"id": 19946,
"tags": "homework-and-exercises, optics, geometric-optics"
} |
Single trace partition function | Question: I would be glad if someone can help me understand the argument in appendix B.1 and B.2 (page 76 to 80) of this paper.
The argument in B.1 supposedly helps understand how the authors in that paper got from equation 3.6 to 3.8 on page 18 of the same paper. These 3 equations form the crux of the calculation done in this paper and unfortunately I am unable to see this.
In the appendix B.2 they calculate certain rational functions which are completely mysterious to me. Like I can't understand what it means to say that $\frac{1}{1-x}$ is the partition function of the operator $\partial$. Similarly one can see such functions in the equations B.7, B.8 and B.10 of that paper.
Curiously these polynomials had also appeared on page 6 and 7 in this paper long before the above paper. I am completely unable to understand how the series of polynomials between equation 15 to 20 of this paper were gotten and what they mean.
I haven't seen any book ever discuss the methods being used here. I would be glad to hear of some pedagogic and expository references on the background of all this.
Answer: Their convention for the partition function is explained in the second paragraph before the equation (B.7) of the paper by Shiraz et al. It is
$$z(x) = \sum_{operators} x^{\Delta_{operator}}$$
Note that this is just a different way of writing the usual $\mbox{Tr }\exp(-\beta H)$ if you identify $\exp(-\beta)\equiv x$ and $\Delta\equiv H$. Yes, the dimension is the same thing as the Hamiltonian (of the radial quantization) and it is often helpful to avoid exponentials and write powers of $x$ only, so therefore the exponential redefinition of $\beta$ vs $x$. The trace is the summation over the basis.
They're calculating the partition function of a whole theory, not the $\partial$ operator itself. So the partition function is the sum over operators, as described above. In this simple case, the operators are $\partial_i \partial_j \dots \phi$, i.e. arbitrary derivatives of $\phi$ by $d$ different partial derivative symbols.
The derivatives with respect to different directions commute with each other and are completely independent. So imagine $d=1$ for a while, only one direction. In that case, you have operators
$$\phi, \partial \phi, \partial^2 \phi, \dots$$
and their dimensions are
$$\Delta=0,1,2,\dots$$
plus the dimension of $\phi$ if it were nonzero. The partition sum is the sum of $x^\Delta$ over these operators which means
$$1+x+x^2+\dots = \frac{1}{1-x}.$$
The sum is obtained as geometric series. Note that the coefficients of the Taylor expansion are simply equal to one: there is no source where you could have gotten something else.
Now, the operators in the $d$-dimensional space may be obtained by acting with some derivatives in the 1st direction; some in 2nd, and so forth, on $\phi$. So the space of operators is a tensor product of spaces from each of the $d$ directions, and the partition sum is therefore the product of the partition sums from the individual directions, i.e. the $d$-th power of $1/(1-x)$.
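The counting behind the $d$-th power can be checked directly: the coefficient of $x^k$ in $1/(1-x)^d$ equals the number of monomials $\partial_1^{a_1}\cdots\partial_d^{a_d}\phi$ with $a_1+\dots+a_d=k$, which by stars and bars is $\binom{k+d-1}{d-1}$. A quick sketch:

```python
from math import comb

def power_series_coeffs(d, kmax):
    # Convolve the truncated geometric series [1, 1, ..., 1] with itself d times,
    # i.e. expand 1/(1-x)^d up to degree kmax.
    coeffs = [1] + [0] * kmax          # series for d = 0 is just 1
    geom = [1] * (kmax + 1)            # 1/(1-x) truncated at degree kmax
    for _ in range(d):
        coeffs = [sum(coeffs[i] * geom[k - i] for i in range(k + 1))
                  for k in range(kmax + 1)]
    return coeffs

d, kmax = 3, 5
coeffs = power_series_coeffs(d, kmax)
print(coeffs)  # [1, 3, 6, 10, 15, 21]

# Coefficient of x^k counts derivative monomials with a_1 + ... + a_d = k
assert coeffs == [comb(k + d - 1, d - 1) for k in range(kmax + 1)]
```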
There are other factors multiplying the total partition function but you haven't asked about it, and I can't explain every detail in a 50-page paper you haven't asked about. But yes, the other paper you mentioned almost certainly uses the same basic insight about the geometric series. | {
"domain": "physics.stackexchange",
"id": 406,
"tags": "quantum-field-theory, gauge-theory"
} |
Coloration of substances because of F-Centers | Question: In case of metal deficient defects in the crystal lattice of a metal, the positive voids are filled by electrons.
How does this result in the coloration of a substance?
Does this condition arise only in the case of metals?
Answer: For a simple explanation, we can attribute the coloration of the substance to the absorption of light by the electrons in the voids, which leaves the complementary color visible (as is suggested by http://rspa.royalsocietypublishing.org/content/204/1078/406)
However, a more rigorous explanation could be found here: http://ptp.oxfordjournals.org/content/4/2/181.full.pdf
For the second question, it is written in Ceramic Materials: Science and Engineering that "the term color center is now applied to any defect, including an impurity, that produces color in an insulator". It also notes that the original observation was made by the production of F centers, which suggests that the modern interpretation of the phenomenon applies to the defects which product color in an insulator.
I'm not sure if I answered your question exactly as you hoped, but if you look into the resources, there is a wealth of knowledge there.
"domain": "chemistry.stackexchange",
"id": 3153,
"tags": "crystal-structure, solid-state-chemistry, color"
} |
What is the maximum rate of a coherent demodulator implemented in a PC? | Question: I would like to implement my own xPSK coherent demodulator on a traditional PC connected to an acquisition board.
I am wondering what kind of bitrate it is possible to reach when processing only with the microprocessor.
Is it in the range of 100 kbit/s, 1 Mbit/s, or 10 Mbit/s?
Answer: I think the question should really be phrased in ksamples or Msamples per second. The answer varies according to how many samples the system can process in real time, the modulation order ($m$), and the oversampling factor ($OS$). In digital communication systems, receivers typically require more processing power than transmitters, so the bottleneck will probably be how complex your receiver is.
For a basic setup (uncoded transmission), you can fetch the I/Q signal at up to 5-10 Msample/sec. Assuming that you set OS=4, you can receive roughly 1-2.5 Msymbols/sec. If, for example, you then use 8-PSK (each symbol conveying 3 bits), the throughput in the ideal case is 3-7.5 Mbits/sec. Unfortunately, you should also insert some PHY frame headers as well as pilot sequences and time guards between consecutive frames to accommodate a well-synchronized communication system, so the throughput may drop by about 20%. | {
"domain": "dsp.stackexchange",
"id": 10583,
"tags": "digital-communications, demodulation"
} |
What would happen if you tried to use oil as fuel in a fusion reactor? | Question: At first, this question seemed silly, but there might be some sense to it. OpenAI's GPT algorithm suggested to me that using oil in fusion technology could be a breakthrough. I thought about it for a bit, and my first thought is that the oil might combust, but the chemical combustion energy is small in comparison to the energy of nuclear fusion, so it probably wouldn't do much good. However, it does have hydrogen (it is a hydrocarbon).
Now, I wonder if the molecules of the oil would hold the hydrogen in place, so freely roaming atoms of hydrogen might more readily fuse to it during high heat? My thought is no, at least not for long, because the extreme temperatures of fusion would break the molecules into atoms. I wonder though if the atoms being close together (as molecules of hydrogen in the oil) might improve the rate of fusion, even though there are other elements mixed in? What are your thoughts, do you think oil would have any advantages over using hydrogen (particularly deuterium) in a fusion reactor?
Edit:
Another angle I haven't considered is the newly discovered C-N-O fusion cycle which is a catalyst for fusing hydrogen into helium. I wonder if that makes using oil more feasible?
https://youtu.be/jPE2IWnpCgs
Answer: At energies high enough to smash hydrogens together hard enough to make them fuse, the oil molecules that contain them have long since been torn to pieces. This means the scheme holds no advantage. | {
"domain": "physics.stackexchange",
"id": 73057,
"tags": "nuclear-physics, atoms, hydrogen, fusion"
} |
Why are there no ellipsoidal drums? | Question: It occurred to me today that all drums I could think of have circular heads. It then occurred to me that perhaps an elliptical drumhead would produce different overtones. Do the overtones produced by an elliptical drumhead not sound pleasant? And as a more general question, how does a general drumhead reverberate? What are the equations for a drumhead described by a certain polar curve's, its standing waves?
Answer: I suspect the actual answer is something boring like ease of manufacture and tuning.
However, one can work out the solutions for the wave equation in elliptic coordinates, perform a separation of variables and end up with a system of differential equations admitting Mathieu functions as solutions. The boundary value problem can be solved relatively straightforwardly [1-4].
(Following [3]) In elliptic coordinates $\xi,\eta$, $x=f \cosh(\xi)\cos(\eta), y=f\sinh(\xi)\sin(\eta)$ where $f$ is the distance from the origin to the foci $(\pm f,0)$. $0\leq \xi < \infty$ is the "radial" coordinate constant on ellipses and $0\leq \eta < 2\pi$ is the "polar" coordinate constant on hyperbolas. The solutions of the wave equation $$\psi(\xi,\eta,t)=T(t)R(\xi)\Theta(\eta)$$ split into the time part $$T''(t)+k^2\nu^2T=0,$$ and the two spatial parts $$R''(\xi)-(\alpha-2q\cosh(2\xi))R(\xi)=0$$ (modified Mathieu equation) and $$\Theta''(\eta)+(\alpha-2q\cos(2\eta))\Theta(\eta)=0$$ (ordinary Mathieu equation) where $-k^2, \alpha$ are the constants of separation and $q=k^2f^2/4$.
One interesting difference from circular membranes is that for each mode there is an even and odd mode, and they oscillate with different frequencies [3]. So if you stimulate one of the modes it is likely to produce a mix of two frequencies that likely do not have a nice rational ratio, and hence does not sound very harmonious. By changing the eccentricity they can likely be made to fit [4], but I suspect this will not make the whole spectrum harmonious.
[1] E. Mathieu, Mémoire sur le mouvement vibratoire d'une membrane de forme elliptique, J. Math. Pures Appl., vol. 13, pp. 137-203, 1868. http://sites.mathdoc.fr/JMPA/PDF/JMPA_1868_2_13_A8_0.pdf
[2] http://booksite.elsevier.com/9780123846549/Chap_Mathieu.pdf
[3] http://optica.mty.itesm.mx/pmog/Papers/P001.pdf
[4] http://www.altenberg.com/peter/pdfs/mathJournalSamp.pdf | {
"domain": "physics.stackexchange",
"id": 42362,
"tags": "waves, acoustics"
} |
Lower bound and worst case scenario | Question: We know that a lower bound is the minimum amount of work needed to solve a problem. So for a given problem, say x, there is a best algorithm (the most efficient algorithm to solve this problem), say algorithm y; then the lower bound calculated from this algorithm y is the least time in which problem x can be solved. So why do we calculate the lower bound for this algorithm on the worst-case input? Why not on the best-case input? I mean, the lower bound is the minimum amount of work, which therefore occurs in the best-case scenario. Every time I see a decision-tree argument for the lower bound of some sorting algorithm, the phrase "the worst-case lower bound is blah blah blah" is always used, which confuses me so much! Someone please fix my understanding :(
Answer: When analyzing algorithms it makes little sense to consider the best-case scenario as it is very often trivial and not very informative.
You can convince yourself that almost every algorithm can be adapted to have a best-case complexity of $O(n)$, where $n$ is the size of the input, by simply running a preliminary check that verifies if the input instance belongs to some class of instances for which the solution is trivial.
Just to give a concrete example: the best case for every sorting algorithm can be made $O(n)$ if you just check whether the input sequence of numbers is already sorted.
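That preliminary check is a one-line scan; for instance, wrapping any $O(n \log n)$ sort (a Python sketch):

```python
def sort_with_best_case_check(seq):
    # O(n) preliminary scan: if the input is already sorted, return a
    # copy immediately; otherwise fall back to a full O(n log n) sort.
    if all(seq[i] <= seq[i + 1] for i in range(len(seq) - 1)):
        return list(seq)
    return sorted(seq)

print(sort_with_best_case_check([1, 2, 3]))  # [1, 2, 3]  (best case, one scan)
print(sort_with_best_case_check([3, 1, 2]))  # [1, 2, 3]  (falls back to sorting)
```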
The focus is often on the worst-case complexity. Once you decide that you want to compare algorithms with respect to their worst-case complexity, it also makes sense to ask how quickly a problem can be solved.
It is usually impossible to give a sharp bound to the time needed to solve a problem, therefore one seeks upper and lower bound.
An upper bound of $O(f(n))$ tells you that there is some algorithm that solves the problem in $O(f(n))$ worst-case time.
A lower bound of $\Omega(g(n))$ tells you that no conceivable algorithm can take $o(g(n))$ time to solve the problem.
Just to be clear: a lower bound on the time needed to solve a problem is expressed as a function of the input size $n$ and is the smallest amount of work that is necessary to solve all instances of size $n$. Intuitively (not a formal definition) you can think of it as $\min_{A \in \mathcal{A}} \max_{I \in \mathcal{I}_n} T(A,I)$ where $\mathcal{A}$ is the set of all possible algorithms that solve your problem, $\mathcal{I}_n$ is the set of all instances of size $n$, and $T(A,I)$ is the time needed by $A \in \mathcal{A}$ to solve instance $I \in \mathcal{I}_n$.
What you seem to have in mind instead is $\min_{A \in \mathcal{A}} \min_{I \in \mathcal{I}_n} T(A,I)$.
A lower bound for a problem is useful to establish how "hard" that problem is to solve (problems that require more time to solve are harder, this would make little sense if we looked at the easiest instances instead), and to measure how far an algorithm is from being optimal.
For example Merge Sort solves the sorting problem in the optimal amount of time because its running time (asymptotically) matches the $\Omega(n \log n)$ lower bound for the sorting problem (in the comparison-based model). | {
"domain": "cs.stackexchange",
"id": 15881,
"tags": "algorithms, lower-bounds, decision-tree"
} |
no matching function for call to ‘ros::NodeHandle::advertiseService(const char [12], bool (Eddie::*)(), Eddie* const)’ | Question:
Hi I am getting the following error:
**error: no matching function for call to ‘ros::NodeHandle::advertiseService(const char [12], bool (Eddie::*)(), Eddie* const)’**
Below is the code:
**This is my eddie.cpp class:**
#include "eddie.h"
Eddie::Eddie()
{
  // get_version_srv_ is a ros::ServiceServer declared in my .h file
  // node_handle_ is a ros::NodeHandle declared in my .h file
  get_version_srv_ = node_handle_.advertiseService("get_version", &Eddie::get_board_version, this);
  node_handle_.param<std::string>("serial_port", port, port);
}
Eddie::~Eddie()
{
}
bool Eddie::get_board_version(parallax_eddie_robot::get_version::Request &req, parallax_eddie_robot::get_version::Response &res)
{
  // DO STUFF
  return true;
}
// MAIN FUNCTION
int main(int argc, char** argv)
{
  ROS_INFO("Initializing the robot's control board");
  ros::init(argc, argv, "parallax_board");
  Eddie eddie;
  ros::Rate loop_rate(10);
  return 0;
}
**Here is my eddie.h file:**
#ifndef _EDDIE_H
#define _EDDIE_H
#include <ros/ros.h>
#include <fcntl.h>
#include <termios.h>
#include <semaphore.h>
#include <string>
#include <sstream>
#include <map>
class Eddie {
public:
  Eddie();
  virtual ~Eddie();
private:
  sem_t mutex;
  struct termios tio;
  int tty_fd;
  ros::NodeHandle node_handle_;
  ros::ServiceServer get_version_srv_;
  bool get_board_version(parallax_eddie_robot::get_version::Request &req, parallax_eddie_robot::get_version::Response &res);
};
#endif
**With the new changes described in the post below I get the following errors**
**Also modified above code (from original post) to reflect the new changes I have made**
/home/opslab/fuerte_workspace/sandbox/parallax_eddie_robot/include/eddie.h:36:26: error: ‘parallax_eddie_robot’ has not been declared
/home/opslab/fuerte_workspace/sandbox/parallax_eddie_robot/include/eddie.h:36:69: error: expected ‘,’ or ‘...’ before ‘&’ token
/home/opslab/fuerte_workspace/sandbox/parallax_eddie_robot/src/eddie.cpp:7:98: error: no matching function for call to ‘ros::NodeHandle::advertiseService(const char [12], bool (Eddie::*)(int), Eddie* const)’
/home/opslab/fuerte_workspace/sandbox/parallax_eddie_robot/src/eddie.cpp:7:98: note: candidates are:
/opt/ros/fuerte/include/ros/node_handle.h:821:17: note: template<class T, class MReq, class MRes> ros::ServiceServer ros::NodeHandle::advertiseService(const string&, bool (T::*)(MReq&, MRes&), T*)
/opt/ros/fuerte/include/ros/node_handle.h:859:17: note: template<class T, class MReq, class MRes> ros::ServiceServer ros::NodeHandle::advertiseService(const string&, bool (T::*)(ros::ServiceEvent<MReq, MRes>&), T*)
/opt/ros/fuerte/include/ros/node_handle.h:898:17: note: template<class T, class MReq, class MRes> ros::ServiceServer ros::NodeHandle::advertiseService(const string&, bool (T::*)(MReq&, MRes&), const boost::shared_ptr<X>&)
/opt/ros/fuerte/include/ros/node_handle.h:938:17: note: template<class T, class MReq, class MRes> ros::ServiceServer ros::NodeHandle::advertiseService(const string&, bool (T::*)(ros::ServiceEvent<MReq, MRes>&), const boost::shared_ptr<X>&)
/opt/ros/fuerte/include/ros/node_handle.h:975:17: note: template<class MReq, class MRes> ros::ServiceServer ros::NodeHandle::advertiseService(const string&, bool (*)(MReq&, MRes&))
/opt/ros/fuerte/include/ros/node_handle.h:1011:17: note: template<class MReq, class MRes> ros::ServiceServer ros::NodeHandle::advertiseService(const string&, bool (*)(ros::ServiceEvent<MReq, MRes>&))
/opt/ros/fuerte/include/ros/node_handle.h:1045:17: note: template<class MReq, class MRes> ros::ServiceServer ros::NodeHandle::advertiseService(const string&, const boost::function<bool(MReq&, MRes&)>&, const VoidConstPtr&)
/opt/ros/fuerte/include/ros/node_handle.h:1083:17: note: template<class S> ros::ServiceServer ros::NodeHandle::advertiseService(const string&, const boost::function<bool(S&)>&, const VoidConstPtr&)
/opt/ros/fuerte/include/ros/node_handle.h:1111:17: note: ros::ServiceServer ros::NodeHandle::advertiseService(ros::AdvertiseServiceOptions&)
/opt/ros/fuerte/include/ros/node_handle.h:1111:17: note: candidate expects 1 argument, 3 provided
/home/opslab/fuerte_workspace/sandbox/parallax_eddie_robot/src/eddie.cpp: At global scope:
/home/opslab/fuerte_workspace/sandbox/parallax_eddie_robot/src/eddie.cpp:86:31: error: ‘bool Eddie::get_board_version’ is not a static member of ‘class Eddie’
/home/opslab/fuerte_workspace/sandbox/parallax_eddie_robot/src/eddie.cpp:86:31: error: ‘parallax_eddie_robot’ has not been declared
/home/opslab/fuerte_workspace/sandbox/parallax_eddie_robot/src/eddie.cpp:86:75: error: ‘req’ was not declared in this scope
/home/opslab/fuerte_workspace/sandbox/parallax_eddie_robot/src/eddie.cpp:86:80: error: ‘parallax_eddie_robot’ has not been declared
/home/opslab/fuerte_workspace/sandbox/parallax_eddie_robot/src/eddie.cpp:86:125: error: ‘res’ was not declared in this scope
/home/opslab/fuerte_workspace/sandbox/parallax_eddie_robot/src/eddie.cpp:86:128: error: expression list treated as compound expression in initializer [-fpermissive]
/home/opslab/fuerte_workspace/sandbox/parallax_eddie_robot/src/eddie.cpp:87:1: error: expected ‘,’ or ‘;’ before ‘{’ token
Originally posted by kbedolla on ROS Answers with karma: 48 on 2012-11-21
Post score: 0
Original comments
Comment by kbedolla on 2012-11-21:
I made the necessary changes and made my get_board_version function take in the necessary parameters so the function signature looks like the following: bool get_board_version(parallax_eddie_robot::get_version::Request &req, parallax_eddie_robot::get_version::Response &res);
Comment by kbedolla on 2012-11-21:
However, I still get the same error along with new errors: I will put the list of errors above by editing the original post.
Comment by kbedolla on 2012-11-22:
Thanks petermilani. However, I am still getting the "‘parallax_eddie_robot’ has not been declared" error. As well as "error: ‘bool Eddie::get_board_version’ is not a static member of ‘class Eddie'". Also an error saying that my parameters for the get_version_function 'was not declared in this scope'
Answer:
Fixed it! Long story short, I forgot to include the header for my service in my header file. Thanks for everyone's answers!
Originally posted by kbedolla with karma: 48 on 2012-11-24
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 11829,
"tags": "ros"
} |
use prony method for damped exponentials | Question: Let us consider the representation of a damped exponential model by the Prony method; here is the source code:
y=zeros(1,N);
for i=1:N
    y(i)=x(800*i);                 % take every 800th sample of x
end
d=zeros(1,N/2);
for i=1:N/2
    d(i)=y(i+N/2);                 % right-hand side: second half of y
end
D=zeros(N/2,N/2);
for i=1:N/2
    for j=N/2:-1:1
        D(i,-j+N/2+1)=y(i+j-1);    % linear prediction data matrix
    end
end
a=pinv(D)*d';                      % least-squares linear prediction coefficients
muhat=roots([1,-a']);              % roots of the characteristic polynomial (the modes)
U=zeros(N,N/2);
for i=1:N
    for j=1:N/2
        U(i,j)=muhat(j,1)^(i-1);   % Vandermonde matrix built from the modes
    end
end
C=pinv(U)*y';                      % least-squares complex amplitudes
The model equation, its description, and the solving procedure were given as images in the original post (not reproduced here); the key steps are:
Find the roots of the characteristic polynomial formed from the linear prediction coefficients.
Solve the original set of linear equations to yield the estimates of the exponential amplitude and sinusoidal phase
I want the following: given a signal, the inputs should be its length and L; that is, the function should have the following form
function [amplitudes,damping_factor,phase,frequency]=prony(y,n,L)
%n-length(y)
%L-number of complex exponential,
How can I continue? How should I change the given code for my case? Thanks in advance.
Answer: You have C and U, and C can be written as C_i = (A_i/2)*exp(j*phase_i).
U_i can be written as exp((damp_i + j*2*pi*f_i)*T).
It should be easy to extract the information from that; T is the sampling period.
%to get damping factor i and frequency i use something like this
%assume damping factor at sample 1 is 0.3, frequency at 1 is 20
%and out sampling frequency is 500 Hz
damp1 = [0.3 0.4 0.5];
f1 = [20 30 40];
T = 1/500;
%assume this is your ui vector from the algorithm
ui = exp((damp1 + j*2*pi*f1)*T);
%get the damping and frequencies
lv1 = log(ui);
r_damp1 = real(lv1)/T;
r_f1 =imag(lv1)/(T*2*pi);
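The same round trip can be sketched in Python; the values below are the hypothetical ones used in the MATLAB snippet above (damping factors, frequencies in Hz, 500 Hz sampling), not outputs of the Prony fit itself:

```python
import cmath

# Hypothetical values, mirroring the MATLAB snippet above.
damp = [0.3, 0.4, 0.5]
freq = [20.0, 30.0, 40.0]
T = 1.0 / 500.0  # sampling period for fs = 500 Hz

# Forward model: the u_i values the Prony fit would return.
ui = [cmath.exp((d + 2j * cmath.pi * f) * T) for d, f in zip(damp, freq)]

# Inversion: take the complex log and split into real/imaginary parts.
logs = [cmath.log(u) for u in ui]
r_damp = [l.real / T for l in logs]
r_freq = [l.imag / (2 * cmath.pi * T) for l in logs]

print(r_damp)  # recovers [0.3, 0.4, 0.5] up to floating-point error
print(r_freq)  # recovers [20.0, 30.0, 40.0] up to floating-point error
```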
%assume this is your Ci vector from algorithm
Ai = [0.1 0.2 0.3];
phasei=[pi/8 pi/4 pi/2];
Ci = (Ai/2).*exp(j*phasei);
%extract phase and amplitude
r_Amp = abs(Ci)*2;
r_phase = angle(Ci); | {
"domain": "dsp.stackexchange",
"id": 1673,
"tags": "matlab, frequency-spectrum, periodic"
} |
What exactly is voltage in this case? | Question: My textbook states the following:
Voltage is the same across each component of the parallel circuit.
However, I am confused about the exact meaning of voltage in this case: is "voltage" used interchangeably with "potential difference"?
Answer: Yes, the two terms are used interchangeably here; the voltage across a component is the potential difference between its terminals.
"domain": "physics.stackexchange",
"id": 72219,
"tags": "electricity, electric-circuits, voltage"
} |
What is the expression for strong field Schwarzschild circular orbit perihelion precession? | Question: How can you calculate how large the perihelion shift should be under Schwarzschild conditions in general relativity for minimally disturbed circular orbits in the strong field limit?
I think there is a functioning expression that works well in the weak fields of our solar system.
What would the perihelion shift be for a minimally disturbed circular orbit just above the minimum stable circular orbit at $r=6GM/c^2$ and, for instance, at $r=9GM/c^2$, $r=12GM/c^2$ and $r=15GM/c^2$ ?
Answer: The periapsis precession for a circular geodesic with radius $r$ in the Schwarzschild metric is
$$2\pi\left(\sqrt{\frac{r}{r-6M}}-1\right)$$
The answer for a generic bound geodesic with (dimensionless) semilatus rectum $p$ and eccentricity $e$ is:
$$ 4\sqrt{\frac{p}{p-6+2e}}K\left(\sqrt{\frac{4e}{p-6+2e}}\right)-2\pi,$$
where $K$ is the complete elliptic integral of the first kind.
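For the radii asked about in the question, the circular-orbit expression can be evaluated directly; a minimal sketch in Python (r in units of GM/c^2, result in radians per orbit):

```python
import math

def circular_precession(r_over_M: float) -> float:
    """Periapsis advance per orbit (radians) for a near-circular
    Schwarzschild geodesic at radius r, with r in units of GM/c^2."""
    return 2 * math.pi * (math.sqrt(r_over_M / (r_over_M - 6)) - 1)

for r in (9, 12, 15):
    print(r, circular_precession(r))
# r=9  -> approx 4.60 rad per orbit
# r=12 -> approx 2.60 rad per orbit
# r=15 -> approx 1.83 rad per orbit
# The shift diverges as r -> 6 (the innermost stable circular orbit),
# as the square root in the formula suggests.
```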
These formulas, of course, assume that the orbit is a geodesic, and therefore that the orbiting body is a test particle whose own gravitational influence can be neglected.
Calculation of corrections due to the mass of the orbiting body can be found here and here. | {
"domain": "physics.stackexchange",
"id": 98531,
"tags": "general-relativity, black-holes, orbital-motion, geodesics"
} |
Inscribed_radius is not updated by footprint? | Question:
As a continuation of this question, I understand that inscribed_radius of a robot is calculated solely based on the robot footprint. Also, my understanding is that inscribed_radius is initialized here.
If my understanding is correct, then inscribed_radius will be minimum of itself and the footprint calculation. See here
Since the inscribed_radius has been initialized as 0.1, then if the footprint inscribed radius is larger than that, inscribed_radius will always remain 0.1. This is consistent with what I have seen in my testing as well.
Thanks,
Rico
Originally posted by RicoJ on ROS Answers with karma: 41 on 2020-08-26
Post score: 0
Answer:
I understand that inscribed_radius of a robot is calculated solely based on the robot footprint.
You are absolutely right: the inscribed_radius is calculated in the function calculateMinAndMaxDistances that you linked. The part about the inscribed_radius being initialized at 0.1 is correct too, but that is just a default value in case you don't call calculateMinAndMaxDistances.
then inscribed_radius will be minimum of itself and the footprint calculation
You've missed something in the function; the first instruction is:
min_dist = std::numeric_limits<double>::max();
So the inscribed_radius isn't equal to 0.1 anymore (std::numeric_limits<double>::max() returns DBL_MAX which is the maximum possible value for a double : 1.79769e+308, check here for more details). It's set to this value in order to make sure that when you call :
min_dist = std::min(min_dist, std::min(vertex_dist, edge_dist));
min_dist can only end up equal to std::min(vertex_dist, edge_dist) (or stay at DBL_MAX); since vertex_dist and edge_dist are calculated from the footprint, the inscribed_radius changes according to the footprint.
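The idea can be sketched in Python (this mirrors the logic, not the actual costmap_2d source):

```python
import math

def _point_segment_distance(px, py, x1, y1, x2, y2):
    # Distance from (px, py) to the segment (x1, y1)-(x2, y2).
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(px - x1, py - y1)
    t = ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))

def min_footprint_distance(footprint):
    """Start from the largest representable value (the DBL_MAX trick above),
    then shrink to the closest vertex/edge distance from the robot origin."""
    min_dist = float("inf")
    n = len(footprint)
    for i in range(n):
        x1, y1 = footprint[i]
        x2, y2 = footprint[(i + 1) % n]
        vertex_dist = math.hypot(x1, y1)
        edge_dist = _point_segment_distance(0.0, 0.0, x1, y1, x2, y2)
        min_dist = min(min_dist, vertex_dist, edge_dist)
    return min_dist

# A 2x2 m square footprint centred on the origin: closest edges are 1 m away.
print(min_footprint_distance([(1, 1), (1, -1), (-1, -1), (-1, 1)]))  # 1.0
```

With that square footprint the result is 1.0 regardless of any 0.1 default, which is exactly why the initial value never survives a call to the real function.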
This is consistent with what I have seen in my testing as well
You might not have changed the footprint (or only a little) when testing this. I tried with a circular footprint and I got these results:
robot_radius = 0.14 >> inscribed_radius = 0.13731
robot_radius = 1.14 >> inscribed_radius = 1.1181
Originally posted by Delb with karma: 3907 on 2020-08-27
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 35470,
"tags": "navigation, ros-melodic, ros-kinetic, costmap, move-base"
} |
Login system based off of another site | Question: I have created a PHP login system based off of this site. My main concern is: Is it secure? I chose that because to me it looked secure, but a 2nd or 3rd opinion never hurts. The idea of the project is to have something I can just pull into a new project directory and start off from there depending on my needs, so it does have a few unused files.
Authentification Class
<?php
class authentification extends SQLQUERY
{
/*
* Hash code taken from:
* Password hashing with PBKDF2.
* Author: havoc AT defuse.ca
* www: https://defuse.ca/php-pbkdf2.htm
*/
/*
* Authentifications EXCEPTIONS (1000)
* 1001 : Login already exists
* 1002 : Email already exists
* 1003 : Current user is not a guest (for registration)
* 1004 : Submitted registration password is too common
* 1005 : Submitted registration password is too short
* 1100 : User password not in correct format (when this is raised, we force user to enter a new password, if it was valid to start with)
* This allows for including this code with an existing database
* 1101 : User password is expired
* 1102 : User is not properly activated
* 1200 : Access violation - User is not admin
*
*/
// These constants may be changed without breaking existing hashes.
const PBKDF2_HASH_ALGORITHM = "sha256";
const PBKDF2_ITERATIONS = 1000;
const PBKDF2_SALT_BYTES = 24;
const PBKDF2_HASH_BYTES = 24;
const HASH_SECTIONS = 4;
const HASH_ALGORITHM_INDEX = 0;
const HASH_ITERATION_INDEX = 1;
const HASH_SALT_INDEX = 2;
const HASH_PBKDF2_INDEX = 3;
const MINIMUM_PASS_LENGTH = 1;
const ADMIN_CAN_LOGIN = 1; //DISABLED RETURNS WRONG PASSWORD ON ALL ADMIN LOGIN ATTEMPTS
const SESSION_TIMEOUT = 86400; //IN SECONDS
const DEFAULT_LANG_ID = "dl_basicl"; //Session key for default language
public $login = '';
public $email = '';
public $pass = '';
private $sess_id = '';
private $user_id = -1;
public function __construct($db, $login, $email, $pass, $user_id = -1)
{
SQLQUERY::__construct($db);
$this->login = $login;
$this->email = $email;
$this->pass = $pass;
//for when it is already known from a global
$this->user_id = $user_id;
//set user session
$this->sess_id = session_id();
if(!$this->sess_id) {
$this->sess_id = $PHPSESSID;
}
}
/**
*
* @return boolean checks if CheckCredentials are valid.
*/
public function login()
{
$this->sess_clear();
if($this->CheckCredentials()){
$this->sess_write();
$this->auth_user_update_date();
$this->auth_user_login_count();
return true;
}
return false;
}
/**
*
* @return boolean Logs the user out
*/
public function logout()
{
return $this->sess_delete();
}
/**
*
* @return array returns the user infos
* Array returns id, login, user_level, default_lang, email
*/
public function checkuser()
{
$this->sess_clear();
$id = $this->sess_read();
$this->user_id = $id ? $id : -1;
$this->auth_user_update_date();
return $this->get_user_infos();
}
/**
*
* @return boolean Gets whether user should login or if it is first admin setup
*/
public function shouldLogAdmin()
{
if($this->IsLoginAdmin())
{
return $this->HasPassword();
}
else
{
//if it is not an admin, return false
throw new Exception("Current user is not an admin", 1200);
}
}
/**
*
* @return boolean Sets the first admin password
*/
public function setAdminPassword()
{
$rs = SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthUpdateFirstAdminPass()), array($this->create_hash(), $this->login));
return true;
}
/**
* Updates password through user id from the panel
* @return boolean updates a new password based on the user id
*/
public function updatePassword()
{
$rs = SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthUpdatePassword()), array($this->create_hash(), $this->user_id));
return true;
}
/**
*
* @return boolean Sets a new password for an expired account
*/
public function setNewPassword()
{
if($this->is_common_pass($this->pass))
{
throw new Exception("Submitted registration password is too common", 1004);
}
if(strlen($pass) < self::MINIMUM_PASS_LENGTH)
{
throw new Exception("Submitted registration password is too short", 1005);
}
$rs = SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthUpdateNewPassword()), array($this->create_hash(), $this->login));
return true;
}
/**
*
* @return boolean True on user registration
*/
public function registerUserWithAdmin( $login, $email, $pass)
{
$register = $this->create_new_user($login, $email, $pass);
return $register;
}
/**
*
* @param int $active Default new user active status - 1 if unset anywhere.
* @return boolean
*/
public function registerUser($active = 1)
{
//if not, check if user can register (is not logged in already)
if($this->user_id != -1)
{
throw new Exception("Current user is not a guest (for registration)", 1003);
}
//if not register a new user
$register = $this->create_new_user($active, $this->login, $this->email, $this->pass);
if($register)
{
$this->user_id = $register;
}
return true;
}
/**
* Checks if a given email actually exists for an active non-admin user in the system
* @return boolean
*/
public function emailExists()
{
$rs = SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthUsersByEmail()), array($this->email));
while(isset($rs->fields) && !$rs->EOF)
{
if($rs->fields[1] > 0)
{
return true;
}
$rs->MoveNext();
}
return false;
}
/**
* @TODO email can login
* Allows the user to login through a mail sent in his email
* This is no less secure than letting them reset their password
* Login count validation blocks further login attempts
* @return boolean
*/
public function emailLogin()
{
/*$this->sess_clear();
if( @TODO email can login ){
$this->sess_write();
$this->auth_user_update_date();
$this->auth_user_login_count();
return true;
}*/
return false;
}
/***********************************
* NON-PUBLIC FUNCTIONS START HERE *
***********************************/
/**
*
* @return String The encoded password with the format: algorithm:iterations:salt:hash
*/
private function create_hash()
{
$pass = $this->pass;
//create unique salt
$salt = base64_encode(mcrypt_create_iv(self::PBKDF2_SALT_BYTES, MCRYPT_DEV_URANDOM));
return self::PBKDF2_HASH_ALGORITHM . ":" . self::PBKDF2_ITERATIONS . ":" . $salt . ":" .
base64_encode( $this->pbkdf2(self::PBKDF2_HASH_ALGORITHM,$pass,$salt,self::PBKDF2_ITERATIONS,self::PBKDF2_HASH_BYTES,true) );
}
/**
*
* @param String $good_hash
* @return boolean Validates the given password
*/
private function validate_password($good_hash)
{
$params = explode(":", $good_hash);
if(count($params) < self::HASH_SECTIONS)
{
return false;
}
$pbkdf2 = base64_decode($params[self::HASH_PBKDF2_INDEX]);
return $this->slow_equals( $pbkdf2, $this->pbkdf2($params[self::HASH_ALGORITHM_INDEX],$this->pass, $params[self::HASH_SALT_INDEX], intval($params[self::HASH_ITERATION_INDEX]),strlen($pbkdf2),true));
}
/**
*
* @param String $a
* @param String $b
* @return boolean Compares two strings $a and $b in length-constant time.
*/
private function slow_equals($a, $b)
{
$diff = strlen($a) ^ strlen($b);
for($i = 0; $i < strlen($a) && $i < strlen($b); $i++)
{
$diff |= ord($a[$i]) ^ ord($b[$i]);
}
return $diff === 0;
}
/**
* PBKDF2 key derivation function as defined by RSA's PKCS #5: https://www.ietf.org/rfc/rfc2898.txt
* @param String $algorithm - The hash algorithm to use. Recommended: SHA256
* @param String $password - The password.
* @param String $salt - A salt that is unique to the password.
* @param int $count - Iteration count. Higher is better, but slower. Recommended: At least 1000.
* @param int $key_length - The length of the derived key in bytes.
* @param boolean $raw_output - If true, the key is returned in raw binary format. Hex encoded otherwise.
* @return String A $key_length-byte key derived from the password and salt.
*
* Test vectors can be found here: https://www.ietf.org/rfc/rfc6070.txt
*
* This implementation of PBKDF2 was originally created by https://defuse.ca
* With improvements by http://www.variations-of-shadow.com
*/
private function pbkdf2($algorithm, $password, $salt, $count, $key_length, $raw_output = false)
{
$algorithm = strtolower($algorithm);
if(!in_array($algorithm, hash_algos(), true))
{
die('PBKDF2 ERROR: Invalid hash algorithm.');
}
if($count <= 0 || $key_length <= 0)
{
die('PBKDF2 ERROR: Invalid parameters.');
}
$hash_length = strlen(hash($algorithm, "", true));
$block_count = ceil($key_length / $hash_length);
$output = "";
for($i = 1; $i <= $block_count; $i++) {
// $i encoded as 4 bytes, big endian.
$last = $salt . pack("N", $i);
// first iteration
$last = $xorsum = hash_hmac($algorithm, $last, $password, true);
// perform the other $count - 1 iterations
for ($j = 1; $j < $count; $j++) {
$xorsum ^= ($last = hash_hmac($algorithm, $last, $password, true));
}
$output .= $xorsum;
}
if($raw_output)
{
return substr($output, 0, $key_length);
}
else
{
return bin2hex(substr($output, 0, $key_length));
}
}
/**
*
* @return integer Checks password and username for login method. If found, returns the id
*/
private function CheckCredentials()
{
if($this->CanLogin())
{
$rs = SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthGetUserPassword()), array($this->login));
if(!$rs->EOF){
if($this->validate_password($rs->fields[1]) && $rs->fields[2] > 0)
{ //check if password is good and user is not inactive
$this->user_id = $rs->fields[0];
if($rs->fields[2] == 2)
{//if pass is correct but was set to expire, provoke change
throw new Exception("Password has expired", 1101);
}
if($rs->fields[2] >= 3)
{//if pass is correct but user is not yet verified
throw new Exception("User is not properly activated", 1102);
}
return true;
}
elseif($rs->fields[1] == $this->pass )
{//if pass is in plain text in db then it must be changed
throw new Exception("Password format is incorrect", 1100);
}
}
}
return false;
}
/**
* Checks if attempting to log in as admin and returns false if cannot admin login
* Normal users can still log in
*
* @return boolean Returns if the login can login and always fails if ADMIN_CAN_LOGIN is 0
*/
private function CanLogin()
{
$alogin = $this->IsLoginAdmin();
if(self::ADMIN_CAN_LOGIN == 0 && $alogin)
{
return false;
}
return true;
}
/**
*
* @return boolean Returns whether the login is admin or not
*/
private function IsLoginAdmin()
{
$rs = SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthGetRootUsers()), array());
while($rs->fields && !$rs->EOF)
{
if($this->login == $rs->fields[0])
{
return true;
}
$rs->MoveNext();
}
return false;
}
/**
*
* @return boolean Returns whether the login has a password
*/
private function HasPassword()
{
$rs = SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthGetUserPassword()), array($this->login));
if(strlen($rs->fields[1]) > 0)
{
return true;
}
return false;
}
/**
*
* @return integer Returns the id_user from active sessions table accordingly his session id
*/
private function sess_read()
{
$rs = SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthUserActiveSession()), array($this->sess_id));
if(!$rs->EOF)
{
return $rs->fields[0];
}
return false;
}
/**
*
* @return boolean Writes the session id and user id to the table
*/
private function sess_write()
{
$ip_address = $_SERVER["REMOTE_ADDR"];
$file = $_SERVER["REQUEST_URI"];
if(!$file) {
$file= $_SERVER["SCRIPT_NAME"];
}
SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthDeleteActiveSession()), array($this->sess_id));
//insert
SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthInsertSession()), array($this->user_id, $this->sess_id, $file));
return true;
}
/**
*
* @return boolean Flushes a session
*/
private function sess_delete()
{
SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthDeleteActiveSession()), array($this->sess_id));
return true;
}
/**
*
* @param date $time
* @return boolean Clears expired sessions
*/
private function sess_clear()
{
SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthDeleteOldSession()), array(self::SESSION_TIMEOUT));
return true;
}
/**
* Updates the user_id variable only for the object
*
* @return boolean true when the user is not a guest
*/
private function update_user()
{
$this->user_id = sess_read();
if(!$this->user_id)
{
$this->user_id = -1;
return false;
}
return true;
}
/**
*
* @return boolean Updates user session date
*/
private function auth_user_update_date()
{
$date = date("YmdHis");
$file = $_SERVER["REQUEST_URI"];
if(!$file) {
$file= $_SERVER["SCRIPT_NAME"];
}
SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthUpdateSession()), array($file, $this->sess_id, $this->user_id));
//if not a guest
if($this->user_id > 0){
SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthUpdateUserLastSeen()), array($this->user_id));
//SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthUpdateUserActive()), array(1, $this->user_id));
}
return true;
}
/**
*
* @return boolean Updates user login count
*/
private function auth_user_login_count()
{
//if not a guest
if($this->user_id > 0){
SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthUpdateUserLoginCount()), array($this->user_id));
}
return true;
}
/**
*
* @return array The user infos
*/
private function get_user_infos()
{global $lang;
$arr = array();
if($this->user_id == -1)
{
//load guest infos
$arr["id"] = -1;
$arr["login"] = isset($lang) ? $lang["guest"] : "guest";
$arr["user_level"] = 0;
$arr["default_lang"] = isset($_SESSION["lrdl"]) ? $_SESSION["lrdl"] : 1;
$arr["email"] = "";
}
else
{
$rs = SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthGetUserInfos()), array($this->user_id));
//rs has id, login, user_level, default_lang, email
while(isset($rs->fields) && !$rs->EOF)
{
$arr["id"] = $rs->fields[0];
$arr["login"] = $rs->fields[1];
$arr["user_level"] = $rs->fields[2];
$arr["default_lang"] = $rs->fields[3];
$arr["email"] = $rs->fields[4];
$rs->MoveNext();
}
}
return $arr;
}
/**
*
* @return integer The newly create user ID. False if none.
* @throws Exception 1001, 1002, 1004
*/
private function create_new_user($active, $login, $email, $pass = "")
{
$rs = SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthGetDupeRegistrations()), array($this->login, $this->email));
if($rs->fields && !$rs->EOF){
if($rs->fields[1] == $this->login)
{
throw new Exception("Login already exists", 1001);
}
else
{
throw new Exception("Email already exists", 1002);
}
}
else
{
if($pass == "")
{
$this->pass = $this->random_password();
}
else
{
if($this->is_common_pass($pass))
{
throw new Exception("Submitted registration password is too common", 1004);
}
if(strlen($pass) < self::MINIMUM_PASS_LENGTH)
{
throw new Exception("Submitted registration password is too short", 1005);
}
}
$pass = $this->create_hash();
$default_lang = isset($_SESSION[self::DEFAULT_LANG_ID]) ? $_SESSION[self::DEFAULT_LANG_ID] : 1;
$rs = SQLQUERY::Execute(SQLQUERY::Prepare(SQLQUERY::AuthInsertNewUser()), array($active, $default_lang, $login, $pass, $email));
return SQLQUERY::LastID();
}
return false;
}
/**
*
* @return string A random password
*/
private function random_password()
{
//make sure random pass is at least 8 characters long
$min = self::MINIMUM_PASS_LENGTH;
if($min < 8)
{
$min = 8;
}
$alphabet = "abcdefghijklmnopqrstuwxyzABCDEFGHIJKLMNOPQRSTUWXYZ0123456789";
$pass = array(); //remember to declare $pass as an array
$alphaLength = strlen($alphabet) - 1; //put the length -1 in cache
for ($i = 0; $i < $min; $i++)
{
$n = rand(0, $alphaLength);
$pass[] = $alphabet[$n];
}
if($this->is_common_pass($pass))
{
return $this->random_password();
}
else
{
return implode($pass); //turn the array into a string
}
}
/**
* Checks if a password is in the top 1050 common passwords list
* @param string password The password to check
* @return bool True if the password is common
*/
private function is_common_pass($pass)
{
$common_password = array("char limit");
if( in_array($pass, $common_password) )
{
return true;
}
return false;
}
}
?>
SQL Query Class
<?php
/**
* SQL Queries
* This is meant to be used with ADODB which allows setting params
*/
class SQLQUERY {
private $db;
public function __construct($db) {
$this->db = $db;
}
/**
*
* @param String $sql
* @return Object Returns object created by prepare statement
*/
public function Prepare($sql)
{
return $this->db->Prepare($sql);
}
/**
*
* @param Object $o Object that has been prepared
* @param array $array Array of parameters
* @return RecordSet The result of the query.
*/
public function Execute($o, $array)
{
return $this->db->Execute($o, $array);
}
/**
* @return RecordSet The last inserted id.
*/
public function LastID()
{
return $this->db->Insert_ID();
}
/********************************************
* START OF QUERIES LIST *
********************************************/
protected function AuthGetUserPassword()
{
return "SELECT id, password, active FROM ".USERS_TABLE." WHERE login = ".$this->db->Param('a')."";
}
protected function AuthUpdateUserLastSeen()
{
return "UPDATE ".USERS_TABLE." SET date_last_seen = NOW() WHERE id = ".$this->db->Param('a')."" ;
}
protected function AuthUpdateUserLoginCount()
{
return "UPDATE ".USERS_TABLE." SET login_count = login_count + 1 WHERE id = ".$this->db->Param('a')."" ;
}
protected function AuthUpdateUserActive()
{
return "UPDATE ".USERS_TABLE." SET active = ".$this->db->Param('a')." WHERE id = ".$this->db->Param('b')." and active = ".$this->db->Param('c')." ";
}
protected function AuthUserActiveSession()
{
return "SELECT id_user FROM ".ACTIVE_SESSIONS_TABLE." WHERE session = ".$this->db->Param('a')."" ;
}
protected function AuthDeleteActiveSession()
{
return "DELETE FROM ".ACTIVE_SESSIONS_TABLE." WHERE session = ".$this->db->Param('a')." " ;
}
protected function AuthDeleteOldSession()
{
return "DELETE FROM ".ACTIVE_SESSIONS_TABLE." WHERE UNIX_TIMESTAMP(update_date) < UNIX_TIMESTAMP(now())-".$this->db->Param('a')." " ;
}
protected function AuthInsertSession()
{
return "INSERT INTO ".ACTIVE_SESSIONS_TABLE." (id_user, session, file, update_date)
VALUES ( ".$this->db->Param('a').", ".$this->db->Param('b').", ".$this->db->Param('c').", NOW()) ";
}
protected function AuthUpdateSession()
{
return "UPDATE ".ACTIVE_SESSIONS_TABLE."
SET update_date = NOW(), file=".$this->db->Param('a')."
WHERE session=".$this->db->Param('b')." AND id_user = ".$this->db->Param('c')."";
}
protected function AuthGetRootUsers()
{
return "SELECT login FROM ".USERS_TABLE." WHERE user_level = 100";
}
protected function AuthGetUserInfos()
{
return "SELECT id, login, user_level, default_lang, email FROM ".USERS_TABLE." WHERE id = ".$this->db->Param('a')." ";
}
protected function AuthUpdateFirstAdminPass()
{
return "UPDATE ".USERS_TABLE." SET date_registration = NOW(), password=".$this->db->Param('a')." WHERE login=".$this->db->Param('b')." AND user_level = 100 AND password = '' ";
}
protected function AuthUpdatePassword()
{//updates through ID
return "UPDATE ".USERS_TABLE." SET password=".$this->db->Param('a')." WHERE id=".$this->db->Param('b')." ";
}
protected function AuthUpdateNewPassword()
{//updates through login (which is unique key in DB) and user cannot have been deactivated purposefully
return "UPDATE ".USERS_TABLE." SET password=".$this->db->Param('a').", active = 1 WHERE login=".$this->db->Param('b')." AND active > 1 ";
}
protected function AuthUsersByEmail()
{
return "SELECT login, active FROM ".USERS_TABLE." WHERE email = ".$this->db->Param('a')." AND user_level < 100";
}
protected function AuthGetDupeRegistrations()
{
return "SELECT id, login, email FROM ".USERS_TABLE." WHERE login = ".$this->db->Param('a')." OR email = ".$this->db->Param('b')." ";
}
protected function AuthInsertNewUser()
{
// a b c d e
return "INSERT INTO ".USERS_TABLE." (active, default_lang, date_registration, user_level, login, password, email)
VALUES ( ".$this->db->Param('a').", ".$this->db->Param('b').", NOW(), 1, ".$this->db->Param('c').", ".$this->db->Param('d').", ".$this->db->Param('e').") ";
}
}
?>
Most of the above is called by the auth.php script in the root directory.
Answer: Preamble
As a matter of security, I can safely say that you would fail a professional security audit in seconds. You should be including the hashing library as an external component. You also need to forget everything you know about querying databases and start again.
Also, look at Model-View-Controller. The fact you have a templating engine and separate classes for Database work implies that you are attempting MVC, but the implementation itself is nothing like MVC.
SQLQUERY
As for the code here itself, your SQLQUERY class leaves a lot to be desired.
First: the naming. SQLQUERY makes me instantly think that this is a subclass of PDOStatement. But it isn't. It's a wrapper around a PDO object.
Consider the following:
class DatabaseWrapper {
private $db;
const USERS_TABLE = "`users`";
public function __construct(PDO $db) {
$this->setInstance($db);
}
public function &getInstance() {
return $this->db;
}
protected function setInstance(PDO $db) {
$this->db = $db;
}
}
It type-hints the database object in the constructor for dependency injection.
It allows you to get a direct reference to the PDO object that is injected.
Likewise, it also allows to you set it later (and allows classes that extend it to do the same)
The users table is a class constant.
As for your queries themselves...
You're going to have to never, under any circumstances, use string interpolation (which is not too much of an issue in your implementation, except for the fact that you are using your controller as a model and your model as an array...)
Remove the ->Param() method and instead use $this->db->prepare().
This means that:
protected function AuthUsersByEmail()
{
return "SELECT login, active FROM ".USERS_TABLE." WHERE email = ".$this->db->Param('a')." AND user_level < 100";
}
Would instead become:
protected function getUsersByEmail($email, $level = 100) {
$sql = "SELECT login, active FROM ". static::USERS_TABLE . " WHERE email = ? AND user_level < ?";
return $this->prepare($sql, [$email, $level]);
}
Where prepare is the following:
protected function prepare($sql, array $bindings = []) {
try {
$prepare = $this->db->prepare($sql);
$prepare->execute($bindings);
return $prepare->fetchAll();
} catch (PDOException $e) {
throw new RuntimeException("prepare failed: $sql, " . $e->getMessage());
}
}
In short, you should make your SQLQUERY class into a model that fetches data for you.
Also something I've noticed is that your entire project will be generating notices like crazy because of poor PHP.
parent::__construct() should be used to call a parent constructor. using SQLQUERY::__construct() implies that it is a static method, and PHP will complain.
protected functions are not static! In your entire authentication class, you are simply trying to call protected (and hence internal) functions from the global scope as public static methods.
This system would need a fairly hefty redesign in order to be considered for use in production environments.
You need to split up your files, and huge files are pretty much unreadable. Look at composer for autoloading your classes.
Also, have you tried using the built-in password hashing functions that PHP provides? (password_hash, password_verify, password_needs_rehash). You should really stick with vetted implementations of cryptographic functions that have been extensively peer-reviewed.
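The same advice (use a vetted primitive rather than hand-rolling it) holds in any language; as an illustration of the pattern only, not PHP, Python's standard library exposes PBKDF2 and a constant-time comparison directly, in the same "algo:iterations:salt:hash" spirit as the reviewed class:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000) -> str:
    # Store algorithm, iteration count, salt and hash together.
    salt = os.urandom(24)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"sha256:{iterations}:{salt.hex()}:{dk.hex()}"

def verify_password(password: str, stored: str) -> bool:
    algo, iterations, salt_hex, hash_hex = stored.split(":")
    dk = hashlib.pbkdf2_hmac(algo, password.encode(),
                             bytes.fromhex(salt_hex), int(iterations))
    # hmac.compare_digest is a constant-time comparison,
    # playing the role of slow_equals() in the reviewed code.
    return hmac.compare_digest(dk.hex(), hash_hex)

print(verify_password("hunter2", hash_password("hunter2")))  # True
print(verify_password("wrong", hash_password("hunter2")))    # False
```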
I strongly, strongly recommend that you consult PSR0, PSR1 and PSR2. | {
"domain": "codereview.stackexchange",
"id": 6393,
"tags": "php, mysql, security, authentication"
} |
What is the importance of higher generations of particles in cosmology? | Question: Although there are 6 types of quarks and 3 leptons according to the standard model, most of them are unstable and rapidly decay to the lighter, first generation analogues. Only the up and down quarks and the electron make up a substantial fraction of the universe.
If those heavier analogues did not exist, would the universe be much different from how it is right now? Would it affect, for instance, the life cycle of stars or the abundance of elements in the universe? What is the importance of higher generations of particles in cosmology?
Answer: From [1]: "Nature is very good at blowing up stars. We [theoretical physicists] are not." That's a reference to the fact that we still don't completely understand why core-collapse supernovae explode. We do know that most ($\sim$ 99%) of the radiated energy from a core-collapse supernova is in the form of neutrinos [2], and we do know that this includes significant contributions from all flavors (generations) of neutrinos [2][3]. The core-collapse environment is so extreme that neutrino-neutrino interactions are significant [4]. According to [5], the core-collapse environment is the only known environment in which significant neutrino-neutrino interactions can be observed.
Altogether, this raises the possibility that neutrinos — and the fact they come in multiple flavors — might have a significant influence on core-collapse supernova explosions. So this might provide an interesting answer to your question, insofar as the existence and features of core-collapse supernovae are important for cosmology. The Standard Model of particle physics is not mathematically consistent unless each generation is complete — which means that if multiple generations of neutrinos exist, then multiple generations of the other leptons and quarks are also required.$^\dagger$
As far as I know, the relevance of multiple neutrino flavors to the supernova explosion mechanism is still an open question. One recent study [6] says this about the role of multiple neutrino generations:
Neutrino oscillations [which require multiple flavors] do impact the dynamics of the simulations [of core-collapse supernovae], but do not cause the explosion to occur.
Further research may tell us how significant (or not) this "impact" really is.
Footnote:
$^\dagger$ The details of the connection between generations and neutrino flavors might turn out to be more subtle, because neutrinos are massless in the original (renormalizable) Standard Model. Explaining the observed neutrino masses requires something beyond that original model, which could conceivably change the simple "one neutrino flavor per generation" picture. This, like the details of how core-collapse supernova work, is still an active area of research.
References:
[1] Slide 22 in "The Neutrino Mechanism of Core-Collapse Supernovae" (https://www.astro.princeton.edu/~burrows/classes/541/NeutrinoMechv2.pdf)
[2] Page 127 in "Introduction to neutrino physics" (https://cds.cern.ch/record/677618/files/p115.pdf)
[3] Page 5 in "Supernova Signatures of Neutrino Mass Ordering" (https://dukespace.lib.duke.edu/dspace/bitstream/handle/10161/15940/sn_mo.pdf)
[4] Page 7 in "Supernova Neutrinos: Theory" (https://arxiv.org/abs/1604.07332)
[5] First page in "Theory and Phenomenology of Supernova Neutrinos" (https://aip.scitation.org/doi/pdf/10.1063/1.4915560)
[6] Last slide in "The Effects of Neutrino Oscillations on Core-Collapse Supernova Explosions" (https://indico.ectstar.eu/event/47/contributions/979/attachments/705/926/SNCrossroads_Stapleford.pdf), dated 2019 | {
"domain": "physics.stackexchange",
"id": 66447,
"tags": "standard-model"
} |
Hope to disable collision between Robot model and object model While the Lidar could still detect the Object in Gazebo | Question: Right now I have added this code to the collision element of the Object model sdf file, which allows the robot could go through the object very very slowly. Is there a way to make the robot have no collision with the object while I could still use Lidar to detect the objects in the Gazebo?
0
0
<collide_without_contact>true</collide_without_contact>
Answer: If you are on Gazebo Classic (e.g. Gazebo 11), you can try to use collide_without_contact:
See here for the spec,
See here for an example.
Unfortunately, this does not seem to be implemented yet for 'new Gazebo' (i.e. 'Gazebo Sim', e.g. Gazebo Fortress, Garden or Harmonic).
A possible alternative (both for Gazebo Classic as well as Gazebo Sim) is to use a collision bitmask:
See here for documentation
(It is a Gazebo Classic documentation page, but I think the usage is identical in Gazebo Sim),
See the gz-sim GitHub repository for an example demo world. | {
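As an illustrative sketch (not from the answer; the mask values are arbitrary placeholders, and exact tag placement should be checked against the linked documentation): giving the robot's and the object's collision elements non-overlapping bitmasks suppresses contacts between them while leaving their other collisions, and ray-sensor (Lidar) detection, unaffected.

```xml
<!-- Object model (illustrative): bitmask 0x02 -->
<collision name="object_collision">
  <geometry> ... </geometry>
  <surface>
    <contact>
      <collide_bitmask>0x02</collide_bitmask>
    </contact>
  </surface>
</collision>

<!-- Robot model (illustrative): bitmask 0x01.
     0x01 & 0x02 == 0, so robot and object generate no contacts,
     but both still collide with default-mask bodies like the ground. -->
<collision name="base_collision">
  <geometry> ... </geometry>
  <surface>
    <contact>
      <collide_bitmask>0x01</collide_bitmask>
    </contact>
  </surface>
</collision>
```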
"domain": "robotics.stackexchange",
"id": 38749,
"tags": "navigation, gazebo, collision"
} |
Calculating the period of a quasi-circular orbit | Question: In solving an exercise I had to find the equation of the quasi-circular orbits of an object with the potential $V(r)=-\alpha r^{-1-\eta}$ and I expressed it as:
$$r(\phi)=\frac{r_c}{1+\epsilon \cos(\phi\sqrt{1-\eta})}$$
Where $r_c$ is the radius of the circular orbit and $\epsilon$ depends on the initial conditions.
Now (among other things) I am asked about the period of the motion. I thought that in order to find the period I should integrate $\phi(t)$ using the conservation of angular momentum $L$ in the form $\dot\phi(t)=\frac{L}{mr^2(\phi)}$. This integration isn't easy at all and, in my opinion, can only be approximated.
However, the author of the exercise wrote that the period can be found easily by $mr_c^22\pi/T=L$ but he doesn't explain why. My question is where does this formula come from and whether it is exact or just an approximation.
Answer: I think "quasi-circular" is a misleading name for this problem. Perhaps "quasi-elliptical" would be better? I say this because this problem does, in fact, contain a closed circular orbit (the radius of which you have called $r_c$). For that orbit you can find the period using Kepler's second law, which gives the result you show.
An interesting way to approach this problem is to consider only small radial oscillations about the circle. Find the oscillation frequency by expanding the effective potential about the minimum. See how this is related to the frequency of the circular orbit. What happens when $\eta\rightarrow 0$? You should find that in that case (inverse square force law), the small-oscillation frequency is identical to the circular orbit frequency. This is another way of seeing that the orbits must be elliptical. But, for non-zero $\eta$, this is no longer the case. For small $\eta$, you instead get near-ellipses that just fail to close. These precessing ellipses are described by the radial formula you gave.
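To sketch the suggested computation (my addition, using the question's potential $V(r)=-\alpha r^{-1-\eta}$ and the effective potential $V_\mathrm{eff}=L^2/2mr^2+V(r)$):

$$V_\mathrm{eff}(r)=\frac{L^2}{2mr^2}-\alpha r^{-1-\eta},\qquad V_\mathrm{eff}'(r_c)=0\;\Rightarrow\; L^2=m\alpha(1+\eta)\,r_c^{1-\eta}$$

$$m\omega_r^2=V_\mathrm{eff}''(r_c)=\frac{3L^2}{mr_c^4}-\alpha(1+\eta)(2+\eta)\,r_c^{-3-\eta}=\alpha(1+\eta)(1-\eta)\,r_c^{-3-\eta}$$

$$m\omega_\varphi^2=\frac{L^2}{mr_c^4}=\alpha(1+\eta)\,r_c^{-3-\eta}\;\Rightarrow\;\frac{\omega_r}{\omega_\varphi}=\sqrt{1-\eta}$$

For $\eta\to 0$ the two frequencies coincide (closed ellipses), and the ratio $\sqrt{1-\eta}$ is the same factor that appears in the question's $r(\phi)$.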
"domain": "physics.stackexchange",
"id": 3993,
"tags": "homework-and-exercises, classical-mechanics, perturbation-theory"
} |
What are the consequences of mixing Ferric Chloride Solution, distilled vinegar, baking soda and water? | Question: I was attempting to etch a blade with a ferric chloride solution. I did not have enough so I filled a glass with vinegar and water (3 parts vinegar to 1 part water) then added 2 oz. of ferric chloride solution. I added baking soda later to neutralize the ferric chloride solution, but was met with a deep red foam. I quickly added more baking soda and flushed the solution down a deep sink in my basement. I rinsed out the sink and glass with water and continued to add baking soda to neutralize any ferric chloride solution that had been spread by the red foam. What reaction occurred, and is it something I need to worry about?
Answer: I suspect the red salt you are seeing is Iron (III) carbonate, which was likely created from, as you noted, the neutralization of aqueous Iron (III) chloride with Baking Soda (in excess?) per the reactions:
$\ce{FeCl3 (aq) + 3 NaHCO3 (aq) -> 3 NaCl (aq) + Fe(HCO3)3 (aq)}$
$\ce{2 Fe(HCO3)3 (aq) -> Fe2(CO3)3 (s) + 3 H2O (l) + 3 CO2 (g)}$
$\ce{Fe(HCO3)3 (aq) + 3 HAc (aq) -> Fe(Ac)3 (s) + 3 H2O (l) + 3 CO2 (g)}$
where the last reaction could also lead to Iron (III) acetate from the vinegar presence, which is also subject to further neutralization by the Baking Soda. | {
"domain": "chemistry.stackexchange",
"id": 14777,
"tags": "reaction-mechanism, safety"
} |
Does airglow intensity systematically change during the night? | Question: Airglow is caused, among other factors, by recombination of atoms ionized during the day. This makes me think that during the night concentration of these ions should reduce, lowering intensity of airglow.
But is this reduction of intensity actually measurable, or is the recombination so slow as to preserve airglow at almost the same intensity?
Answer: There is a measurable decay of airglow radiance during the night. An example is the measurement of $\mathrm{OH}$ infrared emissions ($\sim2\,\mathrm{\mu m}$ wavelength) done on Mauna Kea in ref. 1 (figure not reproduced here).
References
S. K. Ramsay, C. M. Mountain, T. R. Geballe, Non-thermal emission in the atmosphere above Mauna Kea, Monthly Notices of the Royal Astronomical Society, Volume 259, Issue 4, December 1992, Pages 751–760, https://doi.org/10.1093/mnras/259.4.751 | {
"domain": "earthscience.stackexchange",
"id": 2671,
"tags": "atmosphere, upper-atmosphere"
} |
Equations of motion describing a great circle | Question:
I'd like to argue that equations of motions of the form
$$\ddot \varphi = 0 \quad \text{and} \quad \ddot\theta = \sin\theta\cos\theta\dot\varphi^2$$
describe a great circle.
I think the standard argument goes something like this:
$$\ddot\varphi =0\quad \Longrightarrow\quad \dot\varphi = const. =:\omega\quad \Longrightarrow\quad \varphi(t)=\omega t + \varphi_0.$$
We can now fix an initial condition for $\theta$, let's say, $\theta(t=0)=\pi/2$. With this we get
$$\ddot \theta(t=0)= 0\quad \Longrightarrow \quad \theta(t)=\frac{\pi}{2},\quad \forall t,$$
which would describe a great circle.
And this last implication is where I get lost. What exactly is the argument that guarantees here that $\theta$ is constant in all time? It seems to be related to $\dot\varphi = const.$ but I just can't formulate a satisfying argument why $\dot \theta = 0$.
Answer: Note that we can perform a fairly standard trick with the $\theta$ equation:
$$2\dot \theta \ddot \theta = 2\sin(\theta)\cos(\theta)\dot \theta \dot \varphi^2$$
$$\implies \frac{d}{dt}\left(\dot \theta ^2\right) = \frac{d}{dt}\left(\dot \varphi^2\sin^2(\theta)\right)$$
since $\dot\varphi$ is constant. Therefore we have that
$$\dot\theta^2 = \dot\varphi^2\sin^2(\theta)+C$$
$$\implies \dot\theta = \pm \sqrt{ \dot\varphi^2\sin^2(\theta)+C}$$
This is a bit easier to work with. Note also that you must fix two initial conditions for $\theta$, not just one. | {
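For completeness (my addition, a sketch of how the relation above pins down $\theta$, assuming the two initial conditions $\theta(0)=\pi/2$ and $\dot\theta(0)=0$):

$$0=\dot\theta(0)^2=\dot\varphi^2\sin^2\!\tfrac{\pi}{2}+C\;\Rightarrow\;C=-\dot\varphi^2$$

$$\dot\theta^2=\dot\varphi^2\left(\sin^2\theta-1\right)\le 0\;\Rightarrow\;\dot\theta\equiv 0,\;\;\sin^2\theta\equiv 1,$$

so $\theta(t)=\pi/2$ for all $t$: the equator, a great circle.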
"domain": "physics.stackexchange",
"id": 67414,
"tags": "homework-and-exercises, classical-mechanics, symmetry, geodesics, equations-of-motion"
} |
Evaluating the average time complexity of a given bubblesort algorithm. | Question: Considering this pseudo-code of a bubblesort:
FOR i := 0 TO arraylength(list) STEP 1
switched := false
FOR j := 0 TO arraylength(list)-(i+1) STEP 1
IF list[j] > list[j + 1] THEN
switch(list,j,j+1)
switched := true
ENDIF
NEXT
IF switched = false THEN
break
ENDIF
NEXT
What would be the basic ideas I would have to keep in mind to evaluate the average time-complexity? I have already calculated the worst and best cases, but I am stuck on how to evaluate the average complexity of the inner loop in order to form the equation.
The worst case equation is:
$$
\sum_{i=0}^n \left(\sum_{j=0}^{n -(i+1)}O(1) + O(1)\right) = O(\frac{n^2}{2} + \frac{n}{2}) = O(n^2)
$$
in which the inner sigma represents the inner loop, and the outer sigma represents the outer loop. I think that I need to change both sigmas due to the "if-then-break"-clause, which might affect the outer sigma but also due to the if-clause in the inner loop, which will affect the actions done during a loop (4 actions + 1 comparison if true, else just 1 comparison).
For clarification on the term average-time: This sorting algorithm will need different amounts of time on different lists (of the same length), as the algorithm might need more or fewer steps through/within the loops until the list is completely in order. I am trying to find a mathematical (non-statistical) way of evaluating the average number of rounds needed.
For this I assume every input order to be equally likely.
Answer: For lists of length $n$, average usually means that you have to start with a uniform distribution on all $n!$ permutations of [$1$, .., $n$]: that will be all the lists you have to consider.
Your average complexity would then be the sum of the number of steps for all lists divided by $n!$.
For a given list $(x_i)_i$, the number of steps of your algorithm is $nd$ where $d$ is the greatest distance between an element $x_i$ and its rightful location $i$ (but only if it has to move to the left), that is $\max_i(\max(1,i-x_i))$.
Then you do the math: for each $d$ find the number $c_d$ of lists with this particular maximal distance, then the expected value of $d$ is:
$$\frac1{n!}\ \sum_{d=0}^n{\ dc_d}$$
And that's the basic thoughts without the hardest part which is finding $c_d$. Maybe there is a simpler solution though.
EDIT: added `expected' | {
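To complement the analysis, here is a brute-force check (my own addition, in Python) of the uniform-average setup: enumerate all $n!$ permutations for small $n$, run the early-exit bubblesort from the question, and average the number of outer passes.

```python
from itertools import permutations

def bubble_passes(lst):
    """Early-exit bubblesort from the question; returns the number of outer passes used."""
    a = list(lst)
    n = len(a)
    for i in range(n):
        switched = False
        for j in range(n - i - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                switched = True
        if not switched:
            return i + 1  # early exit: this pass made no swaps
    return n

def average_passes(n):
    """Uniform average of the pass count over all n! input orders."""
    perms = list(permutations(range(n)))
    return sum(bubble_passes(p) for p in perms) / len(perms)

for n in range(2, 7):
    print(n, average_passes(n))
```

This only verifies the expectation numerically for small $n$; the combinatorial question of finding $c_d$ in closed form remains.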
"domain": "cs.stackexchange",
"id": 10833,
"tags": "algorithms, time-complexity, sorting, average-case"
} |
How does the number of free electrons in a material relate to its ability to reflect? | Question: From this Wikipedia article on critical frequency, it turns out that there is a limiting frequency at or below which the radio waves are reflected back to the earth. The reason for such a frequency is said to be electron limitation, described by wiki as "The inadequacy of the existing number of free electrons to support reflections at higher frequency" — but how do electrons in the ionosphere reflect EM waves?
Answer: Electrons in ionosphere can be described as charged particles in a harmonic potential (due to positive ions). That is, they experience elastic force $-m\omega^2 x$ in addition to force from the external EM wave; here $\omega$ is called the plasma frequency and is a natural frequency of oscillation in such plasma. It depends on the concentration of the electrons (in the model there are also positive ions in the plasma so the medium is neutral).
It turns out from analysis of the equations of motion of such a model that if the frequency of the external EM wave $\Omega$ is much lower than $\omega$, then the electrons oscillate in sync with the force due to the external EM wave and thus produce an EM wave of their own, a so-called secondary wave. When we add the waves together, the net EM field looks as if a very weak wave passes through, and a wave of similar strength to the primary one gets reflected back. So we have near-total reflection. For low-frequency radiation, the ionosphere behaves as a shiny polished layer of metal, a mirror.
When the EM wave frequency gets close to $\omega$, the plasma begins to absorb a lot of the EM wave's energy; it gets hotter, and only a very weak EM wave gets reflected and transmitted.
But if $\Omega$ is much higher than $\omega$, then the electrons cannot keep up with the external wave, so their oscillations have only a very small amplitude and produce only a very weak secondary EM wave, which can be neglected. The result is that the original EM wave passes through the plasma layer almost unchanged.
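As a rough numerical illustration (my own sketch, not part of the answer): the plasma frequency is $\omega=\sqrt{n e^2/(\varepsilon_0 m_e)}$, and plugging in a typical F-layer electron density of order $10^{12}\,\mathrm{m^{-3}}$ gives a critical frequency near 10 MHz, which is why HF radio reflects while VHF/FM passes through.

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602176634e-19    # C
E_MASS   = 9.1093837015e-31   # kg
EPS0     = 8.8541878128e-12   # F/m

def plasma_frequency_hz(n_e):
    """Plasma frequency f = omega / 2*pi for electron density n_e in m^-3."""
    omega = math.sqrt(n_e * E_CHARGE**2 / (EPS0 * E_MASS))
    return omega / (2 * math.pi)

def reflected(wave_freq_hz, n_e):
    """Waves below the plasma frequency are (nearly totally) reflected."""
    return wave_freq_hz < plasma_frequency_hz(n_e)

n_e = 1e12  # rough F-layer peak density, m^-3 (order-of-magnitude assumption)
print(f"plasma frequency ~ {plasma_frequency_hz(n_e)/1e6:.1f} MHz")
print("5 MHz HF reflected?", reflected(5e6, n_e))
print("100 MHz FM reflected?", reflected(100e6, n_e))
```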
"domain": "physics.stackexchange",
"id": 75802,
"tags": "electromagnetism, waves"
} |
Most scalable pseudotime ordering algorithm | Question: What algorithms for linear Pseudotime trajectory construction (diffusion-based) are the most scalable to large datasets?
I'm currently using Slingshot based on the recommendation in this manuscript: https://www.biorxiv.org/content/biorxiv/early/2018/03/05/276907.full.pdf
Slingshot is nice and easy to use, but it turns out it is terrible at scaling to big datasets.
For 2000 highest variable genes and a random sampling of a variable number of cells in my ~80% sparse expression matrix I get the following computation times in R (96 GB RAM):
#cells / Time (min)
100 / 0.7
1000 / 4.3
2000 / 12.3
5000 / 71.1
At this rate it would take nearly 1.5 days to process 200,000 cells and 4.5 months to process 2 million cells based on 2,000 HVG.
I feel like Slingshot is limited by a fundamentally unnecessary comparison of diffusion for all cells against all cells, rather than preclustering and then finer resolution of pseudotime across cluster edges.
What algorithms are better, but still implement a similar realization of pseudotime ordering? I just need a single linear trajectory.
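A quick sanity check of the extrapolation above (my own sketch, not part of the original question; it assumes runtime follows a power law $t \propto n^k$, fitted to the timing table in log-log space):

```python
import math

# (cells, minutes) measured for Slingshot, from the table above
data = [(100, 0.7), (1000, 4.3), (2000, 12.3), (5000, 71.1)]

# Least-squares fit of log(t) = k*log(n) + b
xs = [math.log(n) for n, _ in data]
ys = [math.log(t) for _, t in data]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - k * mx

def predicted_minutes(n):
    """Extrapolated runtime for n cells under the power-law assumption."""
    return math.exp(b) * n ** k

print(f"fitted exponent k ~ {k:.2f}")
print(f"predicted for 200,000 cells: {predicted_minutes(200_000) / 60 / 24:.1f} days")
```

With only four points the fit is crude, but it gives the same order of magnitude as the hand extrapolation in the question.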
Answer: As it turns out, pseudotime algorithms are exceedingly difficult to parallelize. Pseudotime does not scale linearly from a subset of the data to the full dataset, and as such a coarse-grained approach of parallelizing pseudotime calculations on random bins of the data, followed by bin merging, does not work. These values will never converge on the actual full-dataset solution.
I put in a pretty good effort trying to parallelize slingshot, but with no success. I realize my code may be of limited utility without more explanation, but I'm not putting more time into this. FWIW:
This code:
Finds computationally optimal bin size for slingshot based on a number of genes and features
Randomly splits the SingleCellExperiment object into that number of bins and runs slingshot on each bin
Orients each bin so that a cell with a pseudotime of 1 in the first bin is similar to a pseudotime of 1 in every bin. Pseudotime can get flipped depending on the sampled subset.
Merges all bins into a single matrix so each cell has an assigned pseudotime based on randomly sampled bins.
Repeats steps #2 to #5 with a different random seed for binning every time.
Measures convergence towards actual solution at three pseudotime values, where convergence is defined as correlation of cell indices along the trajectory.
I abandoned the project after observing that more iterations do not change the convergence towards the actual solution. While pseudotime values close to 0 recapitulated the actual solution quite well, pseudotime values towards 100 failed to correlate at all with the actual solution. There was no convergence with more iterations.
Maybe this will be helpful to somebody, which is why I'm posting.
# Parallel Iterative Sampling with Slingshot
# pissshot is a coarse-grained estimator of the actual slingshot solution, and iteratively converges towards the actual solution
# pissshot randomly partitions cells in a large dataset into bins; bin size is determined by finding the computationally optimal number of cells
# pissshot is run independently on each bin, pseudotime vectors are aligned across all bins, and then all bins are merged
# pissshot repeats this process, and the average of the pseudotime assignments for each cell is taken at the end of each iteration.
# The convergence of the model at the end of each iteration is calculated by running slingshot on a small sample of adjacent cells at a random pseudotime interval and measuring correlation with the predicted model
# When satisfactory convergence is reached, pissshot returns a SingleCellExperiment object, just as slingshot would
pissshot <- function(sce, num_iterations = 20){
cat("Running blingshot on", dim(sce)[2],"cells and",dim(sce)[1],"features\n")
cat(" Step 1/5: Finding computationally optimal increment sizes\n ")
# increment_size <- FindOptimalIncrementSize(sce)
increment_size <- 688
num_increments <- ceiling(dim(sce)[2]/increment_size)-1
cat("\n ...Increment size of",increment_size,"cells is computationally optimal\n")
cat(" Step 2/5: Course-grained pseudotime assignment across",num_increments,"increments")
ps <- list()
convergenceArr <- c()
for(iteration in seq(from=1,to=num_iterations,by=1)){
cat("\n ... iteration",iteration,"...\n")
cat(" randomly assigning cells to",num_increments,"bins\n 0%.")
bins <- PartitionBins(sce, increment_size = increment_size, seed = 7*iteration)
cat("100%\n running slingshot on each bin\n 0%")
cell_pseudotimes <- c()
for(b in seq(from=1,to=length(bins),by=1)){
bin_with_pseudotime <- suppressMessages(slingshot(bins[[b]]))
# Find right cell and left cell in pseudotime array (cell with highest/lowest pseudotime values)
pcells <- as.data.frame(colData(bin_with_pseudotime)$slingPseudotime_1)
rownames(pcells) <- colnames(bin_with_pseudotime)
pcells <- data.frame(lapply(pcells, function(x) as.numeric(as.character(x))), check.names = F, row.names = rownames(pcells))
pcells.sorted <- pcells[order(-pcells[,1]), , drop = FALSE]
right_cell_ID <- rownames(pcells.sorted)[dim(pcells.sorted)[1]]
left_cell_ID <- rownames(pcells.sorted)[1]
right_cell_pos <- match(right_cell_ID,colnames(bins[[b]]))
left_cell_pos <- match(left_cell_ID,colnames(bins[[b]]))
right_cell_logcounts <- assays(bins[[b]])$logcounts[,right_cell_pos]
left_cell_logcounts <- assays(bins[[b]])$logcounts[,left_cell_pos]
if(b==1&&iteration==1){
ref_right_cell_logcounts <- right_cell_logcounts
ref_left_cell_logcounts <- left_cell_logcounts
}
# Figure out whether this pseudotime in this bin has flipped relative to reference (first bin), and if so, flip pseudotime values
cor_opposite_ends <- mean(cor(right_cell_logcounts,ref_left_cell_logcounts),cor(left_cell_logcounts,ref_right_cell_logcounts))
cor_same_ends <- mean(cor(right_cell_logcounts,ref_right_cell_logcounts),cor(left_cell_logcounts,ref_left_cell_logcounts))
flipped <- FALSE
if(cor_opposite_ends > cor_same_ends){
flipped <- TRUE
}
if(flipped==TRUE){
# find max pseudotime in colData(bin_with_pseudotime)$slingPseudotime
# Recalculate pseudotime in bin_with_pseudotime as max-value
pseudotimes <- colData(bin_with_pseudotime)$slingPseudotime_1
colData(bin_with_pseudotime)$slingPseudotime_1 <- max(pseudotimes)-pseudotimes
}
cells_with_pseudotime <- as.data.frame(colData(bin_with_pseudotime)$slingPseudotime_1,colnames(bin_with_pseudotime))
cell_pseudotimes <- rbind(cell_pseudotimes, cells_with_pseudotime)
cat(".")
}
cat("100%\n")
ps[[iteration]] <- cell_pseudotimes[order(row.names(cell_pseudotimes)), , drop = FALSE]
pss <- data.frame(ps)
if(iteration>1){
convergence10 <- measureConvergence(pss, sce = sce, ptime = 10, increment_size = increment_size)
convergence50 <- measureConvergence(pss, sce = sce, ptime = 50, increment_size = increment_size)
convergence99 <- measureConvergence(pss, sce = sce, ptime = 99, increment_size = increment_size)
cat(" convergence at 10:",convergence10,"... at 50:",convergence50,"...at 100:",convergence99,"\n")
convergenceArr <- rbind(convergenceArr,c(iteration,convergence10,convergence50,convergence99))
}
# Iterate until the model converges towards the actual slingshot solution, within an indicated fraction (i.e. convergence = 0.02)
# The convergence measure compares how similar the model is to the actual slingshot solution, i.e. a convergence of 0.02 means that cell pseudotime values are within 2% of the actual solution
# Assess this by selecting a pseudotime at random, analyzing the following increment_size cells, and comparing accuracy of pseudotime measurement across that trajectory with the composite of the course-grained analysis
# need to study how convergence changes over pseudotime
}
# remove cells with really high standard deviations
return(pss)
# return(convergenceArr)
}
measureConvergence <- function(mat, sce = sce, ptime = 45, increment_size = 500){
# this line is throwing an error
mat[,"avg"] <- apply(mat[,1:dim(mat)[2]],1,mean)
mat <- mat[order(mat[,"avg"]), , drop = FALSE]
# select a random pseudotime increment
mat <- mat[mat[, "avg"] > ptime,]
mat <- mat[1:increment_size,]
cell_IDs <- rownames(mat)
# pull out this list of cell_IDs from the sce object and run slingshot
sce2 <- suppressMessages(slingshot(sce[,cell_IDs]))
cpcells <- as.data.frame(colData(sce2)$slingPseudotime_1)
rownames(cpcells) <- colnames(sce2)
cpcells <- data.frame(lapply(cpcells, function(x) as.numeric(as.character(x))), check.names = F, row.names = rownames(cpcells))
# sort cpcells by increasing pseudotime
cpcells <- cpcells[order(cpcells[,1]), , drop = FALSE]
ordered_cells <- cbind(rownames(mat),rownames(cpcells))
# get the index of each cell in the dataframe, run a correlation on how well the indices line up
cell_indices <- c()
for(cell in cell_IDs){
new_indices <- c(which(ordered_cells[,1] == cell), which(ordered_cells[,2] == cell))
cell_indices <- rbind(cell_indices,new_indices)
}
convergence_measure <- cor(cell_indices[,1],cell_indices[,2])
plot(cell_indices[,1],cell_indices[,2])
return(convergence_measure)
}
PartitionBins <- function(sce, increment_size = 500, seed = 123){
bins <- c()
while(dim(sce)[2]>=increment_size){
if(dim(sce)[2]<increment_size*2){
bins <- c(bins, sce)
sce <- sce[,1]
}
else{
set.seed(seed)
train_ind <- sample(seq_len(ncol(sce)), size = increment_size)
bins <- c(bins, sce[,train_ind])
sce <- sce[,-train_ind]
cat(".")
}
}
return(bins)
}
FindOptimalIncrementSize <- function(sce){
# Find the computationally optimal increment_size for grainy alignments using 5 increment points (100, 500, 1000, 2500)
res <- c(100,250,500,750,1000,1500,2500,5000)
runtimes <- c()
for(i in res){
runtimes <- c(runtimes, suppressMessages(system.time(slingshot(sce[,0:i])))["elapsed"])
}
mat <- as.data.frame(t(rbind(res,runtimes)))
c <- summary(lm(mat$runtimes ~ poly(mat$res, 2, raw=TRUE)))$coefficients[,"Estimate"]
predicted_runtimes <- c()
for(i in seq(from=100, to=5000, by=1)){
predicted_runtime <- (c[1] + c[2]*i + c[3]*i^2)*(dim(sce)[2]/i)
predicted_runtimes <- rbind(predicted_runtimes,c(i,predicted_runtime))
cat(".")
}
increment_size <- predicted_runtimes[which.min(predicted_runtimes[,2]),1]
return(increment_size)
} | {
"domain": "bioinformatics.stackexchange",
"id": 1472,
"tags": "r, scrnaseq, statistics"
} |
Steric hindrance of an unbonded electron pair | Question: This question is half inspired by this question, and half inspired by the structure of molecules like hydrazine and hydrogen peroxide.
When I was looking at the aforementioned molecules, I started trying to reason as to why the lowest energy conformation of these molecules are what they are.
$\hspace{34ex}$
In hydrazine, the lowest energy conformation is not the one in which the unbonded electron pairs are anti-periplanar. This makes me think that hydrogens have a greater steric hindrance than unbonded electron pairs do.
$\hspace{34ex}$
In hydrogen peroxide, however, the lowest energy conformation is not the one in which the hydrogens are anti-periplanar. This contradicts what I reasoned from the structure of hydrazine.
Being confused by this, I tried a different approach—what about the cyclohexyl carbanion? Surely using $\Delta G = -RT \ln{K_\mathrm{eq}}$ the relative steric hindrance of an unbonded electron pair and a hydrogen atom could be determined. What I became unsure about, however, was how rapid inversion might affect this, and additionally whether or not there is even any data regarding the major and minor conformations of the cyclohexyl carbanion (because I am unsure if it even exists in such a way that this can be measured).
So is my reasoning flawed? Does equilibrium data for the cyclohexyl carbanion exist?
Answer: Since I just covered this material in depth in my physical organic chemistry class, I figured I'd write up an answer.
Electronegative elements lower the energies of all molecular orbitals to which they contribute, of which their low-lying $\sigma ^{*}$ orbitals are of particular importance. Electronegative atom lone pair MOs, which have little bonding character and are also (relatively) high-lying in energy, are very close in energy and will donate their electrons to the low lying empty orbitals, producing a stabilizing interaction. Molecular conformations are in part shaped by these interactions, which is known as the donor-acceptor effect.
Hydrogen peroxide is a classic example of this effect. In addition to sterics predicting that the hydrogens be anti-periplanar, this conformation also minimizes the molecule's net dipole (another stabilizing effect), yet its most stable conformation adopts anti-clinal geometry. This can essentially be thought of as a compromise between sterics and dipole minimization, and the donor-acceptor effect:
$\hspace{4.3cm}$
The $\sigma ^{*} (\ce{O-H})$ is an excellent acceptor, and oxygen's highest energy unbonded pair is one of the strongest donating. The lower energy lone pair occupies a $\sigma\text{-out}$ orbital, and the higher energy lone pair occupies a pure $\mathrm{p}$ orbital. The mixing of this orbital with the $\sigma ^{*} (\ce{O-H})$ orbital favors a $90º$ dihedral angle, but due to steric and dipole effects, the actual angle is $\approx 120º$.
$\hspace{5.3cm}$
In hydrazine, the $- 2.5 \ \mathrm{kcal \ mol^{-1}}^{[1]}$ preference for the two $\mathrm{n}(\ce{N})$ orbitals being syn-clinal over anti-periplanar is also explained by the donor-acceptor effect. When the $\mathrm{n}(\ce{N})$ are syn-clinal, they are each eclipsed by the $\sigma ^{*} (\ce{N-H})$ orbitals, and orbital mixing occurs. The effects of this are further seen in the $\ce{N-N}$ bond length difference between the two conformations: $1.448\ \mathrm{Å}$ in the syn structure and $1.489\ \mathrm{Å}$ in the anti structure. The shortening of the bond length in the syn structure is a direct result of the multiple bond character introduced by the $\mathrm{n}(\ce{N}) \rightarrow \sigma ^{*} (\ce{N-H})$ interactions$^{[1]}$.
$^{[1]}$ Wilcox, C.; Bauer, S. Journal of Molecular Structure: THEOCHEM 2003, 625 (1-3), 1–8. | {
"domain": "chemistry.stackexchange",
"id": 6404,
"tags": "molecular-structure, covalent-compounds, stereoelectronics"
} |
Is the vapour-like gas produced from ice water vapour? | Question: We know that ice is a condensed form of water, but when it is kept on a tray, we can see a vapour-like gas produced from the surface. Is that water vapour or not?
If it is then why mass of water remains same?
I am a bit confused :-(
Answer:
If it is then why mass of water remains same?
The total mass remains the same, but it's now made up of both the gaseous and solid forms of water. The vapour you see reduces the mass of the ice, but only very slightly, so it's very hard to measure the difference.
A frost-free freezer illustrates this very well. In frost-free freezers, the cabinet is kept cold by circulating cold, dry air through it. That air carries away any vapour from the ice. The vapour is released outside the freezer, so that water is lost from the inside.
If you put some uncovered ice in such a freezer, you will see that, over weeks or months, the ice gradually disappears. If you were able to collect the water vapour that is released outside, you'd see it carries away all the mass that's lost inside. | {
"domain": "physics.stackexchange",
"id": 77160,
"tags": "thermodynamics, water"
} |
When does $\hbar \rightarrow 0$ provide a valid transition from quantum to classical mechanics? When and why does it fail? | Question: Lets look at the transition amplitude $U(x_{b},x_{a})$ for a free particle between two points $x_{a}$ and $x_{b}$ in the Feynman path integral formulation
$U(x_{b},x_{a}) = \int_{x_{a}}^{x_{b}} \mathcal{D} x e^{\frac{i}{\hbar}S}$
($S$ is the classical action). It is often said that one gets classical mechanics in the limit $\hbar \rightarrow 0$. Then only the classical action is contributing, since the terms with non-classical $S$ cancel each other out because of the heavily oscillating phase. This sounds reasonable.
But when we look at the Heisenberg equation of motion for an operator $A$
$\frac{dA}{dt} = \frac{1}{i \hbar} [A,H]$
the limit $\hbar \rightarrow 0$ does not make any sense (in my opinion) and does not reproduce classical mechanics. Basically, the whole procedure of canonical quantization does not make sense:
$\{\cdots,\cdots\} \rightarrow \frac{1}{i \hbar} [\cdots,\cdots]$
I don't understand, when $\hbar \rightarrow 0$ gives a reasonable result and when not. The question was hinted at here: Classical limit of quantum mechanics. But the discussion was only dealing with one particular example of this transition. Does anyone has more general knowledge about the limit $\hbar \rightarrow 0$?
Answer: The theory of deformation quantization provides a framework in which the
quantum to classical transition can be carried out and understood.
According to this theory, for (practically) any quantum system, one can find (possibly non-uniquely) a Poisson manifold $\mathcal{M}$ (phase space) equipped with an associative product called the "star product" such that the quantum observables are represented by
smooth functions on $\mathcal{M}$ and the quantum operator product is given by the star product.
Furthermore, the star product of two functions has a formal power series in $\hbar$
$f\star g = \sum_{k=0}^{\infty} \hbar^k B_k(f,g)$
Such that:
$B_0(f,g) = fg$
$B_1(f,g)-B_1(g, f) = \{f,g\}$, (Poisson bracket)
Thus we obtain:
$f\star g - g\star f = \hbar\{f,g\} + \sum_{k=2}^{\infty} \hbar^k (B_k(f,g)-B_k(g,f))$
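As a concrete illustration (my addition; flat phase space $\mathbb{R}^2$ with coordinates $x,p$, in a common convention that keeps the factor of $i$ explicit rather than absorbing it into the $B_k$), the Moyal product realizes this expansion:

$$f\star g = f\,\exp\!\left[\frac{i\hbar}{2}\left(\overleftarrow{\partial_x}\overrightarrow{\partial_p}-\overleftarrow{\partial_p}\overrightarrow{\partial_x}\right)\right]g = fg+\frac{i\hbar}{2}\{f,g\}+O(\hbar^2)$$

$$x\star p = xp+\frac{i\hbar}{2},\qquad p\star x = xp-\frac{i\hbar}{2}\;\Rightarrow\; x\star p-p\star x = i\hbar,$$

which is the canonical commutation relation, recovered without operators, and which manifestly vanishes as $\hbar\rightarrow 0$.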
Please notice that, according to the deformation philosophy, the quantum observables are just functions on the phase space, exactly as the classical observables are, and all the quantum noncommutativity is provided by the star product. Thus if we define $\hat{f} = \frac{\hbar}{i} f$, we get the required classical limit.
It should be emphasized that this procedure can be carried out even for quantum systems defined by matrix algebras; for example, an appropriate phase space for spin is the two-sphere $S^2$; please see the following article by Moreno and Ortega-Navarro. Moreover, Kontsevich in his seminal work provided a constructive method to build this star product on every finite-dimensional Poisson manifold; please see the following Wikipedia page.
It is also worthwhile to mention that there are efforts to generalize
the deformation construction to field theories and incorporate renormalization into it, please see the following work by Dito. | {
"domain": "physics.stackexchange",
"id": 32838,
"tags": "quantum-mechanics, classical-mechanics, path-integral"
} |
How to send tf data from multiple namespaces to Rviz? | Question:
My setup: I have set up a launch file with two groups defining the namespaces for two robots, and launching rviz together with a joint_state_publisher, robot_state_publisher, ned_static_transform_publisher and a robot_description parameter.
The goal is to simulate multiple robots, each in its own namespace and visualize in Rviz.
Each robot sends tf transforms with sendTransform() from tf.TransformBroadcaster(), but they all end up on the same /tf topic when seen in rqt_graph; in rviz this means my single robot model keeps jumping between the two robots' separate positions, instead of there being a separate model for each.
I have tried to use the tf_prefix, but that does not do anything. I am using Indigo. I have found that it might be because that method has not been valid since Hydro. See [1].
I have searched quite a bit around via Google and directly on answers.ros.org, but only found older posts saying to use tf prefixes, and also a lot saying that it does not work any more. But I have not found a solution. I have thought about writing another node that takes the poses from the two robots and lets it send the transforms.
I hope someone has another and more elegant solution than the one I just proposed. Any pointers or help is appreciated.
[1] http://answers.ros.org/question/12877/tf-on-multiple-robots-gets-crowded/?answer=19017#post-id-19017
Originally posted by nickoe on ROS Answers with karma: 3 on 2014-10-24
Post score: 0
Answer:
The principle of the tf_prefix still works, but the implementation is different. AFAIK the currently recommended method is to simply name tf frames "correctly", i.e. robot_a's base_link frame should just be named /robot_a/base_link. The TF library is agnostic to that.
The robot_state_publisher however needs to prefix frames correctly. If it doesn't this won't work easily.
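As a minimal illustration of that naming convention (my own sketch, not from the original thread; `prefixed_frame` is a hypothetical helper, not part of tf), the prefixing logic boils down to a small pure function that any broadcaster node can reuse, feeding it the namespace from `rospy.get_namespace()`:

```python
def prefixed_frame(namespace, frame_id):
    """Prefix a tf frame id with the node's namespace,
    e.g. ('/robot_a/', 'base_link') -> 'robot_a/base_link'."""
    ns = namespace.strip("/")
    return ns + "/" + frame_id if ns else frame_id

# In a broadcaster node one would then call something like:
#   br.sendTransform(trans, rot, stamp,
#                    prefixed_frame(rospy.get_namespace(), "base_link"),
#                    prefixed_frame(rospy.get_namespace(), "odom"))
print(prefixed_frame("/robot_a/", "base_link"))  # robot_a/base_link
```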
Originally posted by dornhege with karma: 31395 on 2014-10-24
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by nickoe on 2014-10-24:
Ok, thank you for your reply. Would that mean that I should somehow make my node figure out what namespace it is in and prefix that to the frame id? And what should I modify for robot_state_publisher?
<node name="robot_state_publisher" pkg="robot_state_publisher" type="state_publisher" />
Comment by nickoe on 2014-10-26:
Ok, I did just that by making the node sending the tf prefix the frame id with the namespace using the method in http://answers.ros.org/question/62620/programatically-get-nodes-namespace/ I did nothing to robot_state_publisher. | {
"domain": "robotics.stackexchange",
"id": 19839,
"tags": "ros, rviz, multiple-machines, ros-indigo, transform"
} |
Molecular chirality and optical rotation | Question: Why does having molecular chirality result in optical rotation? The dissymmetry or chirality of molecules translates to the rotation of plane-polarized light, the magnitude and direction depending on the concentration and the nature of the substance. But why does molecular chirality cause the rotation of plane-polarized light?
If this rotation is due to the asymmetry around the bond, the different molecules are oriented in all possible directions (randomly) and hence the light will fall on them in all different orientations, and should ultimately be undeflected since for every orientation of the molecule, we can reverse the orientation such that the light appears to be falling on the molecule from a direction other than the one for our original molecule.
Moreover, why does inverting the configuration invariably reverse the direction of rotation of light?
Answer:
since for every orientation of the molecule, we can reverse the orientation such that the light appears to be falling on the molecule from a direction other than the one for our original molecule.
This is false.
Let's take 2-butanol. For this stereoisomer, light is turning clockwise when viewed from the right side (I'm not sure of this, but we can assume).
Here, the disks are there to clarify the direction from which we are viewing the entire setup. Here, the "face" of the disks is to the right so we are viewing it from the right.
By the principle of reversibility of light, if light enters from the other side, we get the reverse path:
Rotate this diagram (by $180^\circ$ around an axis perpendicular to your screen):
Note that the disks are being viewed from the other side (the left) now. But we can flip them:
Take a close look at this. This is the same molecule as the first, just oriented backwards. And looking from the right, it still gives a clockwise twist to the angle.
So while rays of light incident on different orientations of a molecule may differ in the exact value of the angle of rotation, rays of light incident on a reversed (not stereoisomeric, just spatially rotated) molecule have the same direction and value of the angle of rotation and do not cancel.
"domain": "chemistry.stackexchange",
"id": 627,
"tags": "organic-chemistry, stereochemistry, symmetry, chirality"
} |
Are Pompeii and Herculaneum unique? | Question: Has anyone ever found or gone looking for similar locations, i.e. volcanic eruption sites in which unfortunate victims – human and non-human – have been entombed in the volcanic ash, with the possibility of revealing their forms by producing casts from the voids? Such sites, if they exist, could reveal exciting new knowledge about ancient peoples and animals.
Answer: Probably the best known is more recent, the 1902 eruption of Mt. Pelée on Martinique, where 30,000 people were killed by pyroclastic flows. I don't know the extent of burial - it appears that the city may have been destroyed more by the ash cloud than the dense part of the flow. | {
"domain": "earthscience.stackexchange",
"id": 630,
"tags": "volcanology, paleontology, volcanic-hazard, archaeology, pyroclastic-flows"
} |
K&R Exercise 3-3. Expands shorthand notations (e.g., a-z to abc..xyz, 0-9 to 012..789) | Question: I have been learning C with K&R Book 2nd Ed. So far I have completed quite a few exercises.
For the following exercise (Chapter 3, Ex-3.3):
Exercise 3-3. Write a function expand(s1, s2) that expands shorthand
notations like a-z in the string s1 into the equivalent complete list
abc...xyz in s2. Allow for letters of either case and digits, and be
prepared to handle cases like a-b-c and a-z0-9 and -a-z. Arrange that
a leading or trailing - is taken literally.
I have written this solution. I would like to know how to improve it.
#include <stdio.h>
#include <ctype.h>
#define MAXLINE 1024
int get_line(char line[], int maxline);
void expand(const char s1[], char s2[]);
int match(int start, int end);
int
main(void)
{
char s1[MAXLINE];
char s2[MAXLINE];
while (get_line(s1, MAXLINE) > 0) {
expand(s1, s2);
printf("%s", s2);
}
return (0);
}
/**
* Here I have tried to write a loop equivalent to the loop seen
* previously in chapter 1. (without using && and ||,
* as specified in chapter 2 of the book, exercise 2.2).
*
* for (i = 0; i < lim-1 && (c = getchar()) != EOF && c != '\n'; ++i)
* ...
**/
int
get_line(char s[], int lim)
{
int c, i;
i = 0;
while (--lim > 0) {
c = getchar();
if (c == EOF)
break;
if (c == '\n')
break;
s[i++] = c;
}
if (c == '\n')
s[i++] = c;
s[i] = '\0';
return (i);
}
void
expand(const char s1[], char s2[])
{
int i, j, ch;
for (i = j = 0; (ch = s1[i++]) != '\0'; ) {
if (s1[i] == '-' && match(s1[i-1], s1[i+1])) {
for (i++; ch < s1[i]; ) {
s2[j++] = ch++;
}
} else
s2[j++] = ch;
}
s2[j] = '\0';
}
int
match(int start, int end)
{
return ((isdigit(start) && isdigit(end)) ||
(islower(start) && islower(end)) ||
(isupper(start) && isupper(end)));
}
These are a few of the tests that I did with the program that I've written.
a-z
abcdefghijklmnopqrstuvwxyz
a-b-c
abc
a-z0-9
abcdefghijklmnopqrstuvwxyz0123456789
-a-z
-abcdefghijklmnopqrstuvwxyz
A-Z
ABCDEFGHIJKLMNOPQRSTUVWXYZ
0-9
0123456789
-A-D
-ABCD
0-7
01234567
a-h
abcdefgh
Answer: General Observations
The code generally looks good.
An experienced C programmer would probably use pointers rather than indexing through the array.
When unit testing code such as the functions int match(int start, int end) and void expand(const char s1[], char s2[]) it is generally better to create the strings to be tested in the code rather than reading in the strings, you should also prepare strings that are the expected output of the functions. Automated tests are better because they are reproducible.
One of the problems with using the K&R book is that it predates the introduction of the bool type into standard C. If I was writing this code I would include stdbool.h and have match return a bool instead of an int.
On Windows 10 using Visual Studio 2022 there seems to be a bug, the program never terminates when a new line is entered without any text.
Prefer C Standard Library Functions
The code includes the function get_line(char s[], int lim); however, there are standard C library functions that can perform this operation, one being char *fgets(char *str, int count, FILE *stream). Using library functions is generally preferred over writing your own function because they don't need debugging and may perform better than the function you write.
Code Organization
Function prototypes are very useful in large programs that contain multiple source files, and in that case they belong in header files. In a single-file program like this it is better to put the main() function at the bottom of the file and all the functions that get used, in the proper order, above main(). Keep in mind that every line of code written is another line of code where a bug can crawl into the code.
Variable Names
The variable names s and lim are not as descriptive as they could be; for instance, I might rename lim to buffer_size.
Alternate Implementation
#include <ctype.h>
#include <stdio.h>
#include <stdbool.h>
#include <string.h>
#define MAXLINE 1024
bool
match(int start, int end)
{
return ((isdigit(start) && isdigit(end)) ||
(islower(start) && islower(end)) ||
(isupper(start) && isupper(end)));
}
void
expand(const char s1[], char s2[])
{
int i, j, ch;
for (i = j = 0; (ch = s1[i++]) != '\0'; ) {
if (s1[i] == '-' && match(s1[i - 1], s1[i + 1])) {
for (i++; ch < s1[i]; ) {
s2[j++] = ch++;
}
}
else
s2[j++] = ch;
}
s2[j] = '\0';
}
int
main(void)
{
char s1[MAXLINE];
char s2[MAXLINE];
while (fgets(s1, MAXLINE, stdin) != NULL) {
expand(s1, s2);
printf("%s", s2);
}
return (0);
} | {
"domain": "codereview.stackexchange",
"id": 43821,
"tags": "beginner, c, formatting, io"
} |
How could the gold and the passengers both be saved in the final sequence of the 1969 movie, Italian Job? | Question: At the end of the Italian Job, the cast are left in this predicament; the bus they are travelling on with a pile of gold has veered over the edge of a cliff and is balanced precariously with the gang members on one side and the gold on the exposed side.
Any attempt to leave the bus will condemn the gold to fall hundreds of feet down the Italian mountainside and, it is assumed, attract the police who are looking for it.
Video here: https://www.youtube.com/watch?v=HZCaSyid4m0
The gang leader played by Michael Caine, optimistically declares in the last line, from the floor of the tipping bus: "Hang on lads - I've got an idea!"
How could the team save themselves and the gold using real life physics?
Edit: It is obvious that the total tonnage of gold in the bus weighs more than the combined body weight of those in the back of the bus which renders the final scene implausible but for the sake of this question assume the masses are equal.
Answer: If the people move their center of mass further back, perhaps by pressing against the back wall and/or hanging off the back of bus, the person who weighs the least could crawl over to the gold and pass it to the rest of the people until all of the mass of the gold is transferred and they can exit the bus with it.
It's essentially just playing with levers.
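To make the lever idea concrete, here is a toy torque-balance sketch (my own illustration with made-up numbers, not from the original answer): the bus pivots at the cliff edge, and it stays put as long as the people's torque about that pivot at least matches the gold's.

```python
def tips_forward(m_gold, x_gold, m_people, x_people):
    """True if the torque of the gold (hanging x_gold metres past the
    pivot) exceeds the counter-torque of the people x_people metres
    behind it. Masses in kg, distances in metres."""
    return m_gold * x_gold > m_people * x_people

# Equal masses, as the question's edit asks us to assume:
print(tips_forward(300, 2.0, 300, 1.0))   # True: gold farther out, bus tips
print(tips_forward(300, 1.0, 300, 2.0))   # False: people hang farther back
```

Moving the people's centre of mass rearward increases the safety margin, which is exactly what lets the lightest person inch forward to ferry the gold back.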
If you want to check out the math, look here. | {
"domain": "physics.stackexchange",
"id": 38929,
"tags": "homework-and-exercises, newtonian-mechanics, torque"
} |
Which node reads the configuration file? | Question:
Hello,
I am working with NXT-ROS. To define the sensors and actuators, you need to create a robot.yaml file. My question is, how is this file used? Is it read by some other node? By which?
I think it might be the node nxt_ros, but I am not really sure.
Thanks
Originally posted by mikelom on ROS Answers with karma: 13 on 2015-03-27
Post score: 1
Answer:
You are correct as per the tutorial:
<node pkg="nxt_ros" type="nxt_ros.py" name="nxt_ros" output="screen" respawn="true"> <rosparam command="load" file="$(find learning_nxt)/robot.yaml" /> </node>
This launch file XML takes the robot.yaml file and loads the values to the ROS parameter server. It does this using the <rosparam> tags. Since it is nested in the <node> tags it sets all the parameters as private names under the nxt_ros node.
Hope this helps.
Originally posted by aak2166 with karma: 593 on 2015-03-27
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 21270,
"tags": "actuator, nxt, yaml, nxt-ros, sensor"
} |
What does "conformally coupled scalar" mean? | Question: "Conformally coupled scalar $\phi$" - I encounter it a lot, but I can't find what it means.
Answer: A minimally coupled free scalar field is described by the action
$$
S[g,\phi] = \frac{1}{2} \int d^D x \sqrt{g} g^{ab} \partial_a \phi \partial_b \phi .
$$
However, this theory is not conformally invariant (i.e. invariant under Weyl transformations). In particular, if I rescale $g \to \Omega^2 g$ (where $\Omega$ is a function), then the action transforms as
$$
S[\Omega^2 g,\phi] = \frac{1}{2} \int d^D x \Omega^{D-2} \sqrt{g}g^{ab} \partial_a \phi \partial_b \phi .
$$
where the $\Omega^D$ factor comes from $\sqrt{g}$ and $\Omega^{-2}$ comes from $g^{ab}$. Clearly, the action is not conformally invariant (unless $D=2$). One way to fix this is to rescale $\phi$ as well by $\phi \to \Omega^{-\frac{1}{2}(D-2)} \phi$. We then find
$$
S[\Omega^2 g,\Omega^{-\frac{1}{2}(D-2)} \phi] = \frac{1}{2} \int d^D x \sqrt{g} [ g^{ab} \partial_a \phi \partial_b \phi + {\cal O} \left( \partial \Omega \right) ] .
$$
As is clear, the action is still not invariant under Weyl transformations because of terms proportional to the derivative of $\Omega$. To fix this, we add an extra term to the original action
$$
S[g,\phi] = \frac{1}{2} \int d^D x \sqrt{g} \left( g^{ab} \partial_a \phi \partial_b \phi + \xi R[g] \phi^2 \right) . \tag{1}
$$
It is then easy to show that for an appropriate choice of the constant $\xi$, this action is conformally invariant. A scalar field which couples to the background metric $g$ according to (1) is known as a "conformally coupled scalar" for obvious reasons.
Note that the action for a conformally coupled scalar reduces to the action of a minimally coupled scalar in flat spacetime. However, the two theories have different stress tensors.
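For concreteness (this remark is not in the original answer, but the value is standard), in $D=4$ the conformal coupling constant is $\xi = 1/6$, giving the familiar conformally coupled action:
$$
\xi\Big|_{D=4} = \frac{4-2}{4(4-1)} = \frac{1}{6} , \qquad S[g,\phi] = \frac{1}{2} \int d^4 x \sqrt{g} \left( g^{ab} \partial_a \phi \partial_b \phi + \frac{1}{6} R[g] \phi^2 \right) .
$$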
HW for the reader:
Show that the action (1) is conformally invariant iff. $\xi = \frac{D-2}{4(D-1)}$.
Find the stress tensor for the action (1) and show that it differs from the stress tensor of the minimally coupled scalar in flat spacetime. Verify that the stress tensor for the conformally coupled scalar is traceless but the one for minimally coupled scalars is not (tracelessness of the stress tensor is a requirement for Weyl invariance). | {
"domain": "physics.stackexchange",
"id": 87968,
"tags": "lagrangian-formalism, field-theory, definition, conformal-field-theory, qft-in-curved-spacetime"
} |
Simple Scala money library | Question: for some fun, I decided to start working on something that let me work with Money calculations. I realise there are libraries out there like Joda Money, but I'm doing this mainly for fun.
I was hoping to get a review on the way I've started to construct the library.
GitHub
private def calculate(that: Money)(f: (BigDecimal, BigDecimal) => BigDecimal): Either[String, Money] = (this.currency, that.currency) match {
case (c, c1) if c == c1 => new Right(this.copy(this.currency, f(this.amount, that.amount)))
case _ => new Left(Money.COMPARE_ERROR_MESSAGE)
}
private def compare(that: Money)(f: (BigDecimal, BigDecimal) => Boolean): Either[String, Boolean] = (this.currency, that.currency) match {
case (c, c1) if c == c1 => new Right(f(this.amount, that.amount))
case _ => new Left(Money.COMPARE_ERROR_MESSAGE)
}
These two methods are incredibly similar except for the return type. Could someone suggest a way to improve this?
Answer: First thoughts:
If an operation can fail, it is generally good practice to make this explicit in the API by returning a Try instead of an Either.
The duplicate code seems to stem from the fact that Money operations are only valid on Currencies of the same type. Scala gives you the power to have this check at compile time.
Example Code:
object Money {
sealed trait Currency
case object GBP extends Currency
}
case class Money[T <: Money.Currency](amount: BigDecimal) {
private def calculate(that: Money[T])(f: (BigDecimal, BigDecimal) => BigDecimal): Money[T] = {
copy(amount = f(this.amount, that.amount))
}
private def compare(that: Money[T])(f: (BigDecimal, BigDecimal) => Boolean): Boolean = {
f(this.amount, that.amount)
}
} | {
"domain": "codereview.stackexchange",
"id": 20729,
"tags": "scala, finance, scalaz"
} |
General relativity: is curvature of spacetime really required or just a convenient representation? | Question: I'm not really far into the general theory of relativity but already have an important question: are there formulations that can do without spacetime curvature and describe the general theory of relativity, and all associated gravitational effects, in global Cartesian coordinates? Idea: Einstein chose spacetime curvature so that one did not have to build gravitational effects into the rest of the physics equations, like Maxwell's equations. I assume that spacetime curvature provides a convenient way to apply physics laws not related to gravity unmodified locally, because at small distances we may use special relativity as an approximation.
Please correct me everywhere I am wrong.
Answer: The curvature of spacetime is a property of the spacetime manifold itself, it is not related to any particular choice of coordinates. The very essence of relativity lies in the Einstein field equations, which, colloquially, tell you that the energy-matter content of spacetime determines its curvature and metric. There is no formulation of general relativity known to me that avoids the idea of curved spacetime.
But this is not worrying - generally, such a geometric theory of physics is the kind we want. For classical mechanics, such a geometric picture is given by the Hamiltonian formalism and its symplectic manifolds. For gauge theories describing the other fundamental forces except for gravity, the geometric picture is given by the theory of principal bundles and the (e.g. electromagnetic) physical fields are amenable to be described as the curvature of such bundles (and wonderfully analogous to formulating GR with jet and frame bundles). For gravity, the geometric picture is that it is spacetime itself that has curvature. We don't even want to get rid of a description in terms of curvature, generally speaking.
Just because you can locally choose frames such that, at least at one point, the metric is flat, and hence you have SR, this is different from saying that the curvature is not required - a spacetime is flat only when there is a choice of coordinates such that the metric is flat everywhere. | {
"domain": "physics.stackexchange",
"id": 77583,
"tags": "general-relativity, spacetime, curvature"
} |
Why do we have only one DNA? | Question: A very interesting thought crossed my mind:
Why is it that our body has only a single form of DNA, and what would happen if we had multiple forms of DNA?
Answer: We have just one type of DNA, because humans develop from a single cell. This single cell contains only type of DNA and cell division makes exact copies of it. Since humans (and basically all other organisms) form by cell division, they therefore have to have the same one type of DNA as the first cell.
The question then is, why should the first cell have only one type of DNA. This is probably because the molecules of DNA "float" freely in the nucleus and having two different "types" of DNA could lead to some weird interactions between the different types.
It would be possible, theoretically, to have more nuclei, each with a different type of DNA, but I don't really see why any organism would want to do that. It would make genetic errors much more probable and it seems that all organisms live all right with just one type of DNA.
*I write "a different type of DNA", but what I really mean (and what I think you also meant) is "a different way of storing genetic information". In a way, humans have two different ways of doing that - RNA and DNA, but only DNA is used for long-term storage, since it is more stable.
A different way could be to e. g. use a different molecule than DNA, maybe different bases than A, C, G, T; or use a different sugar than (deoxy-)ribose or maybe a different link than the phosphate groups. This makes for a very exciting field of study (and I think the part with more bases has already been done somewhere.) | {
"domain": "chemistry.stackexchange",
"id": 11591,
"tags": "dna-rna"
} |
The game of life with a truly infinite board | Question: The game of life is often implemented by representing the board as a 2D boolean array. This doesn't scale very well to larger boards -- it starts to consume lots of memory, and without some separate mechanism to keep track of a list of live cells, you have to visit each board cell on each iteration. This implementation just keeps a list of live cells to represent the board state; the board "size" is limited only by the maximum of an integer.
import Data.List as L
import Data.Map as M
type Coo = (Int,Int)
type Board = Map Coo Int
moveBoard::Coo->Board->Board
moveBoard (dx,dy) = M.mapKeysMonotonic (\(x,y)->(x + dx, y + dy))
countNeighbors::Board->Board
countNeighbors b =
unionsWith (+) [ moved (-1, -1), moved (0, -1), moved (1, -1),
moved (-1, 0), moved (1, 0),
moved (-1, 1), moved (0, 1), moved (1, 1) ]
where moved (dx, dy) = moveBoard (dx, dy) b
lifeIteration::Board->Board
lifeIteration b = M.union birth survive
where neighbors = countNeighbors b
birth = M.map (const 1) (M.filter (==3) neighbors)
survive = M.intersection b (M.filter (==2) neighbors)
glider = M.fromList $ L.map (\(x,y)->((x,y),1::Int)) ([(1,1),(1,2),(1,3),(2,3),(3,2)]::[(Int,Int)])
Answer: Edit: This answer was given when the reviewed code looked quite differently.
Any specific questions? Here's what stands out for me:
Why do toList in emptyNeighbors, then go back to Set again? You could simply use Data.Set.map there.
countNeighbor is very inefficient: the filter operation always iterates over all life cells, and you are calling it three times per existing cell! That's unneeded, as you only ever care about a handful of neighbourhood cells.
My idea to fix issue 2 would be to build a Map of the neighbour count of every cell. If you represent the board as a Map with only 1 cells in it, that can be done pretty efficiently using mapKeysMonotonic and unionsWith:
type Coo = (Int, Int)
type Board = Map Coo Int
moveBoard :: Coo -> Board -> Board
moveBoard (dx,dy) = M.mapKeysMonotonic (\(x, y) -> (x+dx, y+dy))
countNeighbours :: Board -> Board
countNeighbours b =
unionsWith (+) [ moved (-1) (-1), moved 0 (-1), moved 1 (-1)
, moved (-1) 0 , moved 1 0
, moved (-1) 1 , moved 0 1 , moved 1 1 ]
where moved dx dy = moveBoard (dx,dy) b
Note that usage of mapKeysMonotonic is only safe because the order of coordinates doesn't change when we add a constant. Effectively, this means that the library can simply replace the concrete coordinates without any internal resorting.
The iteration is then a simple matter of using filter, map and intersection over the result:
lifeIteration :: Board -> Board
lifeIteration b = M.union birth survive
where neighbours = countNeighbours b
birth = M.map (const 1) (M.filter (==3) neighbours)
survive = M.intersection b (M.filter (==2) neighbours)
Changing your formulation slightly by having a life cell with 3 neighbours "rebirth" instead of survive, as that's a bit simpler to write.
Also note that this is a bit "clever" by taking advantage of the fact that intersection always returns the value of the first Map, therefore I don't need to do another M.map (const 1) step in there.
I hope this is helpful to you. | {
"domain": "codereview.stackexchange",
"id": 1497,
"tags": "haskell, game-of-life"
} |
Question on capacitor and springs | Question: The question is as follows:
One of the plates of a charged parallel plate capacitor is connected to a non-conducting spring of stiffness K while the other plate is fixed. The other end of the spring is also fixed. In equilibrium, the distance between the plates is d, which is twice the elongation of the spring. If the length of the spring is halved by cutting it, the distance between the plates in equilibrium will be (consider that in both cases the spring is at its natural length if the capacitor is uncharged)
Here is my approach:
Now the force of attraction between plates of capacitor is F=Q²/2A€ where Q is charge on capacitor plates, and A is the area of plates of capacitor and € permittivity of medium.
Thus the force does not depend on the distance between the plates. Thus for this problem F should be constant as Q or A does not change.
Now spring force =Kx where x is the elongation. It is given 2x=d
In equilibrium, F=Kx
Now when spring is halved the spring constant (stiffness) becomes 2K. Now in equilibrium, F=2Kx1 where x1 is new elongation.
Therefore x1=x/2=d/4.
Now I don't see how I can get the new distance between plates as the natural length of spring is not mentioned. Please help.
Answer: As you have stated in your problem, the force between the capacitor plates is a constant for small separation distances
$$F_c=\frac{Q^2}{2\epsilon A}$$
where $Q$ is the magnitude of charge on one plate, $\epsilon$ is permittivity of the dielectric, and $A$ is the area of one of the plates.
By Hooke's law, the magnitude of the spring force is given by
$$F_s=kx$$
where $k$ is the spring constant and $x$ is the distance the spring is displaced from its equilibrium value (defined as $x=0$). Therefore, the new equilibrium position is just where these forces are equal:
$$F_c=F_s$$
$$\frac{Q^2}{2\epsilon A}=kx_{eq}$$
$$x_{eq}=\frac{Q^2}{2\epsilon Ak}$$
You can use this expression, the other given information, and what you have stated about the spring constants to compare the equilibrium positions in each given case.
Now the next thing we need to do is to express the new plate separation in terms of things we know or have found. Let's assume a system where we have fixed the end of the spring not attached to a plate and we have also fixed the position of the bottom plate. So essentially the only thing that can move is the plate attached to the spring, and the spring is able to be stretched. Something that is also important to note is that the end of the spring without the plate and the bottom plate are now separated by a fixed distance independent of the properties of the spring or the capacitor.
In any of the scenarios there are three relevant lengths:
The resting (unstretched) length of the spring $L$
The equilibrium length of the spring $x_{eq}$. i.e. the distance the top plate has moved from the unstretched state
The separation between the plates $d$
If you were to draw a diagram out, you would see that no matter what, the sum of these three things must be constant.
$$L+x_{eq}+d=constant$$
Therefore, we can easily compare scenario 1 with scenario 2:
$$L_1+x_1+d_1=L_2+x_2+d_2$$
We just need to solve this for $d_2$. If you look through the above work and information given in the problem, we can rewrite every other term in this equation in terms of given variables. I will leave this to you as to not work out the entire problem here. | {
"domain": "physics.stackexchange",
"id": 51191,
"tags": "homework-and-exercises, newtonian-mechanics, electrostatics, capacitance, spring"
} |
Why does mint oil feel cold on the skin? | Question: Putting (Japanese) mint oil on the skin produces a cool feeling.
You can experience this when adding it to your bath or using a spray with mint oil on your skin. The cool feeling occurs even if the actual temperature of the spray or bath is quite warm.
Where does this sensory impression come from?
Answer: Anything related to mint usually contains menthol. What does it do? It triggers the TRPM8 ion channels, causing your skin's cold receptor to become sensitive, and causing it to overfire.
This causes the brain (receiving cold signals from skin) to feel cold, and that's why you feel cold.
I have summarised it in a few sentences, but as with any scientific discovery, it took quite a few years to fully understand what was going on. NCBI have summarised this quite nicely over here (http://www.ncbi.nlm.nih.gov/books/NBK5238/) | {
"domain": "biology.stackexchange",
"id": 3849,
"tags": "human-biology, senses"
} |
Has this metric (which seems like flat space but isn't) been studied before | Question: I am investigating the metric
$ds^2 = -dt^2 + (1+C)dr^2 + r^2 d\theta^2 + r^2 \sin^2\theta d\phi^2, $
where $C$ is a constant. This intuitively seems like flat space but actually has a non-zero Kretschmann scalar, which goes as $K = 4C^2/\left((1+C)^2 r^4\right)$ and therefore diverges at the origin. Has this metric been studied before?
N.B. By transforming into Cartesian coordinates, we can write this metric as
$-dt^2 + dx^2 + dy^2 + dz^2 + C \frac{\left(x dx + y dy + zdz\right)^2}{x^2
+ y^2 + z^2},$
which is clearly not flat space.
Answer: As has been said in the comments, it's a conical geometry with the 'tip' at $r=0$.
Let's assume that $1+C>0$, because otherwise you'll have a signature $(--++)$ metric, which is weird. Apart from that, the time direction does not play a rôle here, so I'll ignore it for now.
By a rescaling of the $r$ coordinate, you can get the spatial metric
$$\text{d} s^2 = \text{d} r^2+ \frac{r^2}{1+C} \text{d} \Omega^2 \,.$$
Here, $\text{d} \Omega^2$ is the usual spherical volume element. Hence, a sphere around the origin with radius $r=R=\text{const.}$ will have an area of
$$\frac{4\pi R^2}{1+C}\,,$$
that is, it will be smaller (for $C>0$, larger otherwise) than in flat space.
A two-dimensional analogy that is simpler to visualise is an actual cone: take a 2d plane, cut out a wedge with tip at the origin and opening angle ('deficit angle') $\phi=2\pi \frac{C}{1+C}$ and glue the edges. Then you end up with a conical singularity in otherwise flat space, which you can notice by looking at circles surrounding the origin.
"domain": "physics.stackexchange",
"id": 39359,
"tags": "general-relativity, metric-tensor"
} |
Is there a soft fabric cellphone rays can't penetrate? | Question: Do there exist any clothing-like fabrics which could effectively block the radiation from a cellphone?
Answer: It turns out that you need comparatively little thickness of a conductive material to effectively block the GHz frequencies of mobile phones. The two mechanisms that do this are reflection of the waves and then secondly, absorption of the waves in the conductive material through Ohmic heating.
In answering this question, I arrived at a formula for the power attenuation caused by a sheet of aluminium film.
$$\left[\frac{E_t}{E_i}\right]^2 \simeq \left[4 \frac{\eta_{\rm Al}}{\eta_0} \exp(-t/\delta)\right]^2 \simeq 0.22 \omega^{-1} \exp(-44 \omega^{1/2} t),$$
where $\delta = (2/\mu_r \mu_0 \sigma \omega)^{1/2}$ is the skin depth, $\sigma$ is the conductivity ($3.5\times 10^7$ S/m for Al), $t$ is the foil thickness in metres and $\omega$ is the angular frequency in s$^{-1}$.
So at 1 GHz ($\omega \simeq 6\times 10^{9}$ s$^{-1}$) a sheet of metallised film, coated with aluminium with a thickness of $\sim 10^{-6}$ m, would offer a power attenuation factor of $\sim 10^{-12}$.
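As a sanity check (my own sketch; the prefactors 0.22 and 44 come from the approximate formula above, which already bundles in aluminium's conductivity), the quoted $\sim 10^{-12}$ figure is easy to reproduce numerically:

```python
import math

def power_attenuation(omega, t):
    """Approximate transmitted/incident power ratio for an aluminium
    film of thickness t (metres) at angular frequency omega (1/s),
    using the approximate formula quoted in the answer."""
    return 0.22 / omega * math.exp(-44.0 * math.sqrt(omega) * t)

# 1 GHz (omega ~ 6e9 s^-1), one micron of aluminium:
print(power_attenuation(6e9, 1e-6))   # ~1.2e-12
```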
That would easily do the job - but there is a caveat. You cannot have holes or tears in the fabric that are even nearly comparable with the wavelength of the radiation (30 cm). Punctures would probably be ok, but you would need to have a way of sealing it up. Note that you could not breathe inside a bag made of this substance!
Other possibilities beside metallised film can be found by looking up conductive textiles. A brief search suggests that you can get these fabrics at reasonable prices. The calculations of their effectiveness will vary with the conductivity and thickness of the material, but should be comparable with the calculations above. | {
"domain": "physics.stackexchange",
"id": 35139,
"tags": "radiation, microwaves"
} |
How does relativity affect the electric force? | Question: I enquired after the effect of gravity at relativisic speed and, though some comments simply suggested that "...is resolved with the velocity addition formula just like all the others", the excellent answer by John Rennie showed that it is way too complicated to find, since GR is involved.
That question was closed as off topic, probably since I posed a concrete problem (setting $v$ at $0.866 c$ and acceleration at $1 \text{m}/\text{s}^2$) and someone thought it was a HW question. I'll not make the same mistake now and simply ask what happens to the electric acceleration when a charge is travelling at near $c$: is the velocity addition formula alone enough to calculate the effective increase of velocity, since GR is not involved? If not, what is the formula I need?
If you have some data about concrete experiments, please give some or a link; I suppose that, unlike gravity, these situations are examined daily in detail at places like the LHC.
Answer: This can be done very easily without worrying about relativistic addition of velocities because if a charge $q$ crosses a potential difference $V$ then its energy changes by $\Delta E = qV$.
A quick aside: the LHC actually uses RF cavities to accelerate the protons, but let's leave aside that complication and just assume the charged particle repeatedly crosses a potential difference $V$.
The energy of a relativistic particle is:
$$ E^2 = p^2c^2 + m^2c^4 \tag{1} $$
where $p$ is the relativistic momentum:
$$ p = \gamma mv = \frac{mv}{\sqrt{1 - v^2/c^2}} $$
and with a bit of algebra we can rearrange this into an expression for the velocity in terms of the energy:
$$ v = c\,\sqrt{1 - \frac{m^2c^4}{E^2}} \tag{2} $$
So if we start with some energy $E$ and cross a potential difference $V$ the energy changes from $E$ to $E+qV$. So if our initial velocity is:
$$ u = c\,\sqrt{1 - \frac{m^2c^4}{E^2}} $$
then the final velocity is:
$$ v = c\,\sqrt{1 - \frac{m^2c^4}{(E+qV)^2}} $$
And if this occurs over some short distance $\ell$ we can approximate the acceleration using the SUVAT equation:
$$ v^2 = u^2 + 2as $$
to get:
$$ a = \frac{c^2}{2\ell} \left(1 - \frac{m^2c^4}{(E+qV)^2} - 1 + \frac{m^2c^4}{E^2}\right) $$
which for reasons that will shortly become clear I'm going to rearrange to:
$$ a = \frac{m^2c^6}{2\ell} \left( \frac{\frac{qV}{E}\left(2 + \frac{qV}{E}\right)}{E^2\left(1+\frac{qV}{E}\right)^2} \right) \tag{3} $$
You could be forgiven for wondering where on Earth all this algebra is going, but I'm now going to make the assumption that the energy gained in crossing our electric field, $qV$, is much less than the total energy $E$. This is reasonable even for a stationary particle because remember that $E$ includes the rest mass energy, so its minimum value is $mc^2$ even when the particle is stationary. Since with this assumption $qV/E \ll 1$ our equation (3) simplifies drastically to:
$$ a \approx \frac{m^3c^6}{E^3} \left( \frac{qV}{m\ell} \right) $$
Suppose our particle is stationary then the energy is $E=mc^2$ and our equation becomes:
$$ a_0 = \frac{qV}{m\ell} $$
And this is just the classical acceleration of a charge $q$ in a field gradient $V/\ell$. So we can write our final equation as:
$$ a = \frac{m^3c^6}{E^3} a_0 \tag{4} $$
where $a_0$ is the acceleration at non-relativistic speeds. And there's our result. The acceleration we observe in the lab is less than the acceleration at non-relativistic speeds by a factor of $m^3c^6/E^3$.
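As a numerical sanity check, equation (4) can be compared against the exact SUVAT estimate built from equation (2). This is a sketch; the electron and the field values ($V = 100$ V over $\ell = 1$ cm, $\gamma = 2$) are chosen purely for illustration:

```python
import math

# Physical constants (SI): an electron crossing V = 100 V over l = 1 cm
c = 2.998e8
m = 9.109e-31        # electron mass, kg
q = 1.602e-19        # elementary charge, C
V, l = 100.0, 0.01

gamma = 2.0
E = gamma * m * c**2                 # total relativistic energy

def speed(E):
    # eq (2): v = c * sqrt(1 - m^2 c^4 / E^2)
    return c * math.sqrt(1.0 - (m * c**2 / E)**2)

u, v = speed(E), speed(E + q * V)
a_exact = (v**2 - u**2) / (2 * l)    # SUVAT estimate, no approximation
a_0 = q * V / (m * l)                # classical acceleration
a_approx = (m * c**2 / E)**3 * a_0   # eq (4)

print(a_exact, a_approx)
```

The two accelerations agree to a relative error of order $qV/E$, and both are suppressed by $1/\gamma^3 = 1/8$ relative to $a_0$.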
If you haven't completely lost the will to live by now there is one more step we can take. For very highly relativistic particles the energy is much greater than the rest mass and our original energy equation (1) becomes:
$$ E \approx pc = \gamma m v c $$
Substituting this in our equation (4) we get:
$$ \frac{a}{a_0} \approx \frac{c^3}{\gamma^3v^3} $$
And since for highly relativistic particles $v \approx c$ we end up with:
$$ \frac{a}{a_0} \approx \frac{1}{\gamma^3} \tag{5} $$ | {
"domain": "physics.stackexchange",
"id": 38787,
"tags": "electromagnetism, relativity"
} |
Commutation of creation and annihilation operators with opposite momentum | Question: So we have the rules for creation and annihilation operators.
\begin{equation}
\left[ a_\textbf{p}, a_\textbf{q}^\dagger \right] = \delta_{\textbf{p},\textbf{q}}
\end{equation}
and
\begin{equation}
\left[ a_\textbf{p}, a_\textbf{q} \right] = \left[ a^\dagger_\textbf{p}, a_\textbf{q}^\dagger \right] = 0
\end{equation}
I just have the question, does this mean that
\begin{equation}
\left[ a_\textbf{p}, a_\textbf{-p}^\dagger \right] = 0
\end{equation}
Despite having the same momentum, just in the opposite direction?
What if I was to apply the creation operator $a^\dagger_\textbf{-p}$ on the state $\left|n, \textbf{p} \right>$?
If the commutator does equal zero, is this an example of right handed and left handed particles (opposite helicity) not interacting?
Thank you for reading.
Answer: Yes, it's zero, since $\delta_{\textbf{p},-\textbf{p}}=0$ for $\textbf{p} \neq 0$. There is nothing special about $\textbf{p}$ and $-\textbf{p}$. Described in relatively moving coordinates these states would not have opposite momenta.
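This can be checked numerically on truncated Fock spaces, treating the $\textbf{p}$ and $-\textbf{p}$ modes as independent factors of a tensor product (a generic sketch, not tied to any particular field theory):

```python
import numpy as np

N = 6  # truncation of each oscillator's Fock space

# single-mode annihilation operator: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
I = np.eye(N)

# modes p and -p live on independent factors of the tensor product
a_p     = np.kron(a, I)      # a on the p factor
adag_mp = np.kron(I, a.T)    # a-dagger on the -p factor

comm = a_p @ adag_mp - adag_mp @ a_p
print(np.abs(comm).max())    # 0.0 -- independent modes commute
```

Both orderings reduce to the same Kronecker product $a \otimes a^\dagger$, so the commutator vanishes identically.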
"What if I was to apply the creation operator $a^\dagger_\textbf{-p}$ on the state $\left|n, \textbf{p} \right>$?"
You would get the $n+1$ particle state $\left|1, -\textbf{p} \right>\left|n, \textbf{p} \right>$. Each momentum state can be thought of as an independent harmonic oscillator. | {
"domain": "physics.stackexchange",
"id": 33862,
"tags": "homework-and-exercises, quantum-field-theory"
} |
ROS2 Foxy Gazebo spawn_entity [SystemPaths.cc:459] File or path does not exist [""] | Question:
I am having trouble getting an xacro / URDF model to spawn in Gazebo, however I am able to get it working in rviz2.
ROS2, Foxy, Gazebo-11.3.0, x86.
I am providing a minimal example below.
I am getting the following on stdout
[gazebo-3] [Wrn] [SystemPaths.cc:459] File or path does not exist [""] [model://test_robot/src/description/meshes/test_robot.stl]
[gazebo-3] [Err] [Visual.cc:2956] No mesh specified
[gazebo-3] [Wrn] [SystemPaths.cc:459] File or path does not exist [""] [model://test_robot/src/description/meshes/test_robot.stl]
[gazebo-3] [Err] [Visual.cc:2956] No mesh specified
[gazebo-3] [Wrn] [SystemPaths.cc:459] File or path does not exist [""] [model://test_robot/src/description/meshes/test_robot.stl]
I have attempted to add the meshes folder to GAZEBO_MODEL_PATH with no luck.
Here is my working directory
.
├── CMakeLists.txt
├── include
│ └── test_robot
├── launch
│ ├── gazebo.launch.py
│ └── rviz.launch.py
├── package.xml
├── rviz
│ └── urdf_config.rviz
└── src
└── description
├── meshes
│ └── test_robot.stl
└── test_robot.xacro
7 directories, 7 files
My CMakeLists
cmake_minimum_required(VERSION 3.5)
project(test_robot)
# Default to C99
if(NOT CMAKE_C_STANDARD)
set(CMAKE_C_STANDARD 99)
endif()
# Default to C++14
if(NOT CMAKE_CXX_STANDARD)
set(CMAKE_CXX_STANDARD 14)
endif()
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
# find dependencies
find_package(ament_cmake REQUIRED)
install(
DIRECTORY src launch rviz
DESTINATION share/${PROJECT_NAME}
)
if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
# the following line skips the linter which checks for copyrights
# uncomment the line when a copyright and license is not present in all source files
#set(ament_cmake_copyright_FOUND TRUE)
# the following line skips cpplint (only works in a git repo)
# uncomment the line when this package is not in a git repo
#set(ament_cmake_cpplint_FOUND TRUE)
ament_lint_auto_find_test_dependencies()
endif()
ament_package()
My xacro (yes I am aware there is no collision mesh, but I don't think that is relevant):
<?xml version="1.0" encoding="UTF-8"?>
<robot name="test_robot" xmlns:xacro="http://www.ros.org/wiki/xacro">
<link name="base_link">
<inertial>
<origin xyz="0 0 0" rpy="0 0 0"/>
<mass value="1"/>
<inertia
ixx="1" ixy="0" ixz="0"
iyy="1" iyz="0"
izz="1"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="package://test_robot/src/description/meshes/test_robot.stl"/>
</geometry>
<material name="grey"/>
</visual>
</link>
</robot>
My gazebo launch file
"""
Spawn Robot Description
"""
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument, ExecuteProcess, IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import Command, LaunchConfiguration
from launch_ros.actions import Node
from launch_ros.substitutions import FindPackageShare
def generate_launch_description():
test_robot_description_share = FindPackageShare(package='test_robot').find('test_robot')
default_model_path = os.path.join(test_robot_description_share, 'src/description/test_robot.xacro')
robot_state_publisher_node = Node(
package='robot_state_publisher',
executable='robot_state_publisher',
parameters=[{'robot_description': Command(['xacro ', LaunchConfiguration('model')])}]
)
# GAZEBO_MODEL_PATH has to be correctly set for Gazebo to be able to find the model
spawn_entity = Node(package='gazebo_ros', executable='spawn_entity.py',
arguments=['-entity', 'my_test_robot', '-topic', '/robot_description'],
output='screen')
return LaunchDescription([
DeclareLaunchArgument(name='model', default_value=default_model_path,
description='Absolute path to robot urdf file'),
robot_state_publisher_node,
spawn_entity,
ExecuteProcess(
cmd=['gazebo', '--verbose','worlds/empty.world', '-s', 'libgazebo_ros_factory.so'],
output='screen'),
])
Originally posted by vinny on ROS Answers with karma: 291 on 2021-03-07
Post score: 1
Answer:
Use the file://$(find) syntax.
<?xml version="1.0" encoding="UTF-8"?>
<robot name="test_robot" xmlns:xacro="http://www.ros.org/wiki/xacro">
<link name="base_link">
<inertial>
<origin xyz="0 0 0" rpy="0 0 0"/>
<mass value="1"/>
<inertia
ixx="1" ixy="0" ixz="0"
iyy="1" iyz="0"
izz="1"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="file://$(find test_robot)/src/description/meshes/test_robot.stl"/>
</geometry>
<material name="grey"/>
</visual>
</link>
</robot>
Originally posted by vinny with karma: 291 on 2021-03-08
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 36176,
"tags": "gazebo, ros2, urdf, xacro"
} |
Creating embed for Discord reading from dictionary | Question: I have been working on a script where I create a payload of dicts with different values such as store, name, etc., which you will see very soon. The idea is that with the payload I send to this script, it should check whether the values are in the payload (dict) and, if so, add them to a Discord embed. As simple as it sounds.
I have done something like this:
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import time
from threading import Thread
from typing import Dict
import pendulum
from discord_webhook import DiscordEmbed, DiscordWebhook
from loguru import logger
mixed_filtered = {
'test1': 'https://discordapp.com/api/webhooks/529345345345345/6h4_yshmNDKktdT-0VevOhqdXG9rDhRWclIfDD4jY8IbdCQ5-kllob-k1251252151',
'test2': 'https://discordapp.com/api/webhooks/529674575474577/6h4_yshmNDKktdT-0VevOhqdXG9rDhRWclIfDD4jY8IbdCQ5-kllob-fhgdfdghfhdh'
}
mixed_unfiltered = {
'test1': 'https://discordapp.com/api/webhooks/12412421412412/6h4_yshmNDKktdT-0VevOhqdXG9rDhR12412412414Q5-kllob-kI2jAxCZ5PdIn',
'test2': 'https://discordapp.com/api/webhooks/529617352682110997/6h4_yshmNDKktdT-0VevOhqdXG912412412412IbdCQ5-kllob-kI2jAxCZ5PdIn'
}
def create_embed(payload: dict) -> None:
# -------------------------------------------------------------------------
# Name of the product, URL of the product & New! or Restock! url
# -------------------------------------------------------------------------
embed = DiscordEmbed(
url=payload["link"],
description=payload["status"],
color=8149447
)
# -------------------------------------------------------------------------
# Image product
# -------------------------------------------------------------------------
embed.set_thumbnail(
url=payload["image"]
)
# -------------------------------------------------------------------------
# Store Name
# -------------------------------------------------------------------------
embed.add_embed_field(
name="Site",
value=f'{payload["store"]}'
)
# -------------------------------------------------------------------------
# The price of the product
# -------------------------------------------------------------------------
if payload.get("price"):
embed.add_embed_field(
name="Price",
value=payload["price"]
)
# -------------------------------------------------------------------------
# If store is Nike
# -------------------------------------------------------------------------
if "Nike" in payload["store"]:
if payload.get("nikeStatus"):
embed.add_embed_field(
name="\u200b",
value="\u200b"
)
embed.add_embed_field(
name="Status",
value=payload["nikeStatus"]
)
# -------------------------------------------------------------------------
# Nike Sales Channel
# -------------------------------------------------------------------------
if payload.get("salesChannel"):
embed.add_embed_field(
name="Sales Channel",
value="\n".join(payload["salesChannel"])
)
# -------------------------------------------------------------------------
# Sizes available
# Add extra spaces for sizes to make it cleaner for discord embed
# -------------------------------------------------------------------------
if payload.get("sizes"):
payload["stock"] = sum(v for v in payload["sizes"].values() if v)
payload["sizes"] = [f"{k} - ({v})" if v else k for k, v in payload["sizes"].items()]
# If we have stock in values then sum it up
embed.add_embed_field(
name="\u200b",
value="\u200b"
)
characterCount, i = 0, 0
for j, item in enumerate(payload["sizes"]):
# There is a limitation for Discord where if we reach over 1020 characters for one embed column.
# IT will throw a error. Now I check if the characters count is less than 900 then we create a new embed.
if len(item) + characterCount > 900:
embed.add_embed_field(
name="Sizes",
value="\n".join(payload["sizes"][i:j])
)
characterCount, i = len(item), j
else:
characterCount += len(item)
if characterCount:
embed.add_embed_field(
name="Sizes",
value="\n".join(payload["sizes"][i:])
)
embed.add_embed_field(
name="\u200b",
value="\u200b"
)
embed.add_embed_field(
name="\u200b",
value="\u200b"
)
# -------------------------------------------------------------------------
# If store is footlocker
# -------------------------------------------------------------------------
if "Footlocker" in payload["store"]:
if payload.get("stockLoaded"):
embed.add_embed_field(
name="Stock Loaded",
value=payload["stockLoaded"].upper()
)
if payload.get("styleCode"):
embed.add_embed_field(
name="\u200b",
value="\u200b"
)
embed.add_embed_field(
name="Style Code",
value=payload["styleCode"]
)
# -------------------------------------------------------------------------
# Release date for the product
# -------------------------------------------------------------------------
if payload.get("releaseDate"):
embed.add_embed_field(
name="Release Date",
value=payload["releaseDate"].to_datetime_string()
)
# -------------------------------------------------------------------------
# Stock keeping unit etc. 508214-660
# -------------------------------------------------------------------------
if payload.get("sku"):
embed.add_embed_field(
name="SKU",
value=payload["sku"]
)
# -------------------------------------------------------------------------
# Total stock of the product
# -------------------------------------------------------------------------
if payload.get("stock"):
embed.add_embed_field(
name="Total Stock",
value=payload["stock"]
)
# -------------------------------------------------------------------------
# Login/Cart/Checkout shortcut links
# -------------------------------------------------------------------------
embed.add_embed_field(
name="Shortcuts Links",
value=f'{" | ".join(shortcuts for shortcuts in payload["shortcut"])}'
)
# -------------------------------------------------------------------------
# Quick task for bots
# -------------------------------------------------------------------------
if payload.get("quicktask"):
embed.add_embed_field(
name="Quick Tasks",
value=f'{" | ".join(shortcuts for shortcuts in payload["quicktask"])}'
)
# -------------------------------------------------------------------------
# Footer timestamp
# -------------------------------------------------------------------------
embed.set_footer(
text=f'AutoSnkr | {pendulum.now("Europe/Stockholm").format("YYYY-MM-DD [[]HH:mm:ss.SSSS[]]")}'
)
# -------------------------------------------------------------------------
# Set title on the embed
# -------------------------------------------------------------------------
if payload.get('stock') and payload.get('name'):
embed.title = f'({payload["stock"]}) {payload["name"]}'
elif payload.get('name'):
embed.title = payload["name"]
else:
embed.title = payload.get('link')
# -------------------------------------------------------------------------
# Send payload/embed to Discord Notification function
# -------------------------------------------------------------------------
collection = mixed_filtered if payload["keyword"] else mixed_unfiltered
for region, discord_collection in collection.items():
webhook = DiscordWebhook(
url=discord_collection,
username="AutoSnkr Monitor",
)
webhook.add_embed(embed)
# Adding thread so each URL can post as fast as possible without needing to wait for each other
Thread(
target=post_embed,
args=(
payload,
region,
webhook
)
).start()
def post_embed(payload: Dict, region: str, webhook: DiscordWebhook) -> None:
success: bool = False
while not success:
try:
response = webhook.execute()
success = response.ok
# If we get a 429, retry after a short delay
if response.status_code == 429:
sleep_time = int(response.headers["retry-after"]) / 1000
logger.debug(f"Rate limit -> Retrying in {sleep_time} seconds")
time.sleep(sleep_time)
continue
# any response other than a 429 or a 200 OK is an error.
if not response.ok:
# FIXME Add discord notficiation and raise exception
pass
logger.info(f"Succesfully sent to Discord Reporter -> {region}")
except Exception as err:
# FIXME Add discord notficiation and raise exception
pass
if __name__ == '__main__':
create_embed(
{
"store": "Basket4ballers",
"link": "https://www.basket4ballers.com/en/pg/26471-nike-pg5-bred-cw3143-101.html",
"name": "Nike PG5 Bred",
"price": "EUR 119.9",
"image": "https://cdn1.basket4ballers.com/114821-large_default/nike-pg5-bred-cw3143-101.jpg",
"sizes": {
"[EU 38.5](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205049)": 1,
"[EU 39](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205052)": 1,
"[EU 40](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205055)": 3,
"[EU 40.5](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205058)": 4,
"[EU 41](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205061)": 9,
"[EU 42](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205064)": 11,
"[EU 42.5](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205067)": 11,
"[EU 43](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205070)": 16,
"[EU 44](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205073)": 21,
"[EU 44.5](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205076)": 15,
"[EU 45](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205079)": 20,
"[EU 45.5](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205082)": 7,
"[EU 46](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205085)": 17,
"[EU 47](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205088)": 7,
"[EU 47.5](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205091)": 5,
"[EU 48](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205094)": 3,
"[EU 48.5](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205097)": 2,
"[EU 49.5](https://www.basket4ballers.com/?controller=cart&add=1&as=true&qty=1&id_product=26471&token=e4d64f25476dcee4b08744d382dc405b&ipa=205100)": 1},
"shortcut": ["[Login](https://www.basket4ballers.com/en/authentification?back=my-account)",
"[Cart](https://www.basket4ballers.com/en/commande)",
"[Checkout Delivery](https://www.basket4ballers.com/en/commande?step=1)",
"[Checkout Shipping Service](https://www.basket4ballers.com/en/commande)",
"[Checkout PAyment](https://www.basket4ballers.com/en/commande)"],
"webhook": "mixed",
"status": "Restock!",
"keyword": True
}
)
The mock data is at the very bottom but in the future I will instead send the payload to the function.
I wonder what I can do to have less code that still does the job. I feel like there should be a much cleaner way to do this than what I did, but I'm looking forward to seeing what can be improved :)
Let me know if there is any missing information. The script should be runnable by copy-pasting it, but make sure to create your own Discord webhooks to test the embed. I unfortunately needed to modify them so no one can spam me :)
Answer: If I were a smart man I'd give up on recommending that you stop using a payload dictionary for internal data representation and instead use classes, but I'm not a smart man. Please. I implore you. We're not in JavaScript - objects don't have to be dictionaries. This could be well-represented by a class for product, and a class for product size.
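For illustration, one possible shape for the classes suggested here (field names are taken from the question's payload; this is a sketch, not a drop-in replacement):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SizeEntry:
    label: str             # e.g. "EU 42" with its add-to-cart link
    stock: Optional[int]   # None when the stock count is unknown

@dataclass
class Product:
    store: str
    link: str
    name: str
    status: str
    image: str
    price: Optional[str] = None
    sizes: List[SizeEntry] = field(default_factory=list)

    @property
    def total_stock(self) -> int:
        # replaces the ad-hoc summing done inside create_embed
        return sum(s.stock or 0 for s in self.sizes)

p = Product(store="Basket4ballers", link="https://...", name="Nike PG5 Bred",
            status="Restock!", image="https://...",
            sizes=[SizeEntry("EU 42", 11), SizeEntry("EU 43", 16)])
print(p.total_stock)   # 27
```

With a structure like this, the `payload.get(...)` checks become simple attribute access, and store-specific fields can live on subclasses.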
Otherwise:
if payload.get("price") should be replaced by if 'price' in payload if it were only that statement; but since you actually use it,
if payload.get("price"):
embed.add_embed_field(
name="Price",
value=payload["price"]
)
should become
price = payload.get('price')
if price is not None:
embed.add_embed_field(name='Price', value=price)
More broadly: your create_embed is a presentation function but mixes in logic concerns such as stock summation, and store-specific logic (i.e. Footlocker). That should be separated.
Your
# FIXME Add discord notficiation and raise exception
first of all has a typo - notficiation -> notification - and second of all, while this is waiting to be fixed it's of crucial importance that you not swallow exceptions. Part of development and debugging is seeing errors, and your code breaks that. So replace your pass with a raise in the meantime. | {
"domain": "codereview.stackexchange",
"id": 41242,
"tags": "python, python-3.x, discord"
} |
Get translation of prismatic joints through C++ API | Question:
Hi all,
I am making a ROS plugin for a cylindrical manipulator. I need to get the translation of the prismatic joint in order to report the position of the end effector. I looked through the Joint API (http://osrf-distributions.s3.amazonaws.com/gazebo/api/dev/classgazebo_1_1physics_1_1Joint.html) and failed to see a function that would give me this information. I found GetAngle, but that is meaningless for a prismatic joint. The SliderJoint API didn't provide any help either. How do I get the translation of a prismatic joint through the C++ API?
Thanks in advance!
Originally posted by Robocop87 on Gazebo Answers with karma: 13 on 2015-03-28
Post score: 1
Answer:
The function is probably badly named, but I believe GetAngle for a prismatic joint returns the joint's displacement in meters. See this issue.
I did a quick test with the GUI. Insert the Simple Arm model and on the World tree, select the arm_wrist_lift_joint, which is prismatic. Expand axis1 and you'll see that the joint's lower and upper limits are -0.8 m and 0.1 m respectively. Now expand the joint control panel on the right and apply some forces to move this joint up and down, and you'll see the value of angle_0 go from one joint limit to the other.
Originally posted by chapulina with karma: 7504 on 2015-03-28
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Robocop87 on 2015-03-29:
Great, thank you for your answer! I agree, badly named function.
Comment by Robocop87 on 2015-03-30:
One other question, the GetAngle function returns a math::Angle type which can be resolved to a number using math::Angle::Degree or math::Angle::Radian. If it is a prismatic joint, which do I use? I'll assume radian but I'm not sure.
Comment by chapulina on 2015-03-30:
Radian seems to be the one. | {
"domain": "robotics.stackexchange",
"id": 3741,
"tags": "gazebo"
} |