anchor | positive | source |
|---|---|---|
How do I add a tool to an inverse kinematic solver | Question: This may be a very basic question, however I am having great difficulty to find the right answer.
I have functioning Python code for the inverse kinematics of a 6-axis UR robot. For any transformation matrix in space I get the corresponding joint values. However, I do not know how I can add the TCP transformation to the calculation. I have a tool attached to the robot and the IK solution should of course account for that. How do I do this? Can I simply transform the coordinates before I send them to the inverse kinematics function, or does this need to happen inside the IK function? This feels like it should be super easy, but I have been drawing a blank for the last few days.
So far I have taken the dot product of the coordinates I want to approach and the TCP transformation. However, this only works without any rotation; as soon as rotation is added, it no longer works. What am I missing? My googling skills have failed me so far.
Answer: A fixed (not dependent on any changing parameter) coordinate system transformation can be defined between the TCP and the "end of the robot" coordinate system. Depending on how you formulate CS transformations this can be a 4x4 transformation matrix or a double quaternion or any other form of expressing coordinate system transformations.
Two possibilities exist which are equivalent in results but one of them may be preferred based on current implementation:
Transform the TCP coordinates to "end of robot" coordinates using the fixed transformation. Since you probably want to transform also orientation not just position, the simplest would be to take the unit vectors of the desired TCP coordinate system axes and transform them to capture both position and rotation. That maps all unit vectors to the coordinate system you have the inverse kinematics for, you just need to rewrite the unit vector coordinates to the convention you use to describe orientation (e.g. Euler angles)
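In code, the first option is just matrix composition: given the desired TCP pose, right-multiply by the inverse of the fixed tool transform to get the flange ("end of robot") pose, and feed that to the existing IK unchanged. A minimal NumPy sketch (the names T_tool, T_target and the helper function are illustrative assumptions, not from any particular library):

```python
import numpy as np

def flange_pose_for_tcp_target(T_target, T_tool):
    """Return the flange pose to hand to the existing IK routine.

    T_target : desired 4x4 homogeneous TCP pose in the robot base frame
    T_tool   : fixed 4x4 flange->TCP transform

    Since T_target = T_flange @ T_tool, we get T_flange = T_target @ inv(T_tool).
    """
    return T_target @ np.linalg.inv(T_tool)

# Example tool: offset 0.1 m along the flange z axis, rotated 30 deg about z.
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
T_tool = np.array([[c, -s, 0, 0.0],
                   [s,  c, 0, 0.0],
                   [0,  0, 1, 0.1],
                   [0,  0, 0, 1.0]])

T_target = np.eye(4)
T_target[:3, 3] = [0.4, 0.2, 0.3]  # where the TCP should end up

T_flange = flange_pose_for_tcp_target(T_target, T_tool)

# Composing the flange pose with the tool recovers the target exactly.
assert np.allclose(T_flange @ T_tool, T_target)
```

This also shows why a plain dot product of positions fails once rotation is involved: the tool offset lives in the flange frame, so rotation and translation must be composed together as one homogeneous transform, which the matrix inverse handles.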
Add the fixed transformation to the inverse kinematics problem. You can just add another fixed transformation to the kinematics problem. This will not affect the math, since it is a matrix (if you used the 4x4 matrices) with all elements known. | {
"domain": "robotics.stackexchange",
"id": 2040,
"tags": "robotic-arm, kinematics, inverse-kinematics"
} |
What is known on violations of unitarity or locality? | Question: Recently the amplituhedron became a hot topic. I realized that two of the central pillars that QFT is based on, unitarity and locality, no longer play an important part (due to gravitational interactions).
I would like to get more details on why we should give up these seemingly innocent assumptions. In particular: are black holes the only source of concern here?
Answer: The authors of the work on the amplituhedron (Trnka and Arkani-Hamed) have said that locality and unitarity are "emergent" in this formalism for N=4 Super Yang Mills. The way that locality and unitarity appear in these theories is certainly interesting, but I think the word "emergent" here is misused.
The usual usage of the word emergence in physics comes from the theory of complex systems, where emergent phenomena arise from the complex interactions of many parts in some limit, e.g. on large scales. This means that the emergent description is approximate when viewed at the detailed level of the system. An example is the thermodynamic laws of entropy and temperature, which emerge from the statistical physics of molecules.
In N=4 Super Yang Mills locality and unitarity are exact. When you start from the amplituhedron formalism they may not be apparent initially, but they are derived from the mathematical description. There is no complexity and no approximation; in fact the amplituhedron may be a simpler description than the usual QFT one, and it certainly leads to simpler calculations. I think the word "derived" is much more appropriate here than "emergent".
In the question you mention gravitational interactions. Super Yang Mills is not directly a theory of gravity. It is thought to be dual to a theory of gravity due to AdS/CFT but the amplituhedron has not yet cast any light on the duality so that is not the important point here.
Nima Arkani-Hamed likes to compare the situation with the discovery of indeterminism in quantum mechanics. Before QM was discovered physicists had recast classical mechanics as a principle of least action. In that formalism determinism was no longer directly apparent but it could be derived as an exact principle by solving the equations. Later the principle of least action was deformed by quantum mechanics and then determinism was only an emergent approximation in the classical limit. Here the word emergent is appropriate.
The amplituhedron formalism in this analogy is really just like the principle of least action. Locality and unitarity are still exact but are derived. Nima Arkani-Hamed sometimes makes it sound like they have reached the point analogous to quantum mechanics where locality and unitarity are emergent but that does not yet seem to be the case at least in the work he has talked about.
It is widely thought that locality and possibly even unitarity could be emergent in quantum gravity. Black holes are relevant here but it is not just about large scale black holes. There are arguments that locality fails at small scales because measurement fails when you probe at the Planck scale. The argument is that the energy required becomes so large that small black holes would form and prevent the measurement. Unitarity may fail due to black hole information loss but there is not a consensus on that question. Nima Arkani-Hamed has another argument that unitarity must ultimately fail because all systems are finite.
Super Yang Mills is not a theory of quantum gravity (unless you count AdS/CFT) so there is no reason to expect a breakdown of locality. There has been some progress in applying some of the theory of scattering amplitudes to N=8 supergravity which is a theory of quantum gravity. If that can be related to the amplituhedron we may get a better idea of whether or not locality and/or unitarity are really emergent. If this involved some deformation of the amplituhedron so that locality and/or unitarity are not exact then the analogy with indeterminism in quantum mechanics would be complete. It is too soon to say if things will work out that way or not. | {
"domain": "physics.stackexchange",
"id": 9637,
"tags": "quantum-field-theory, quantum-gravity, locality, unitarity, amplituhedron"
} |
Cannot reproduce nvidia-docker2 installation from the tutorial | Question:
Hello, I've been trying to get OpenGL to work with nvidia-docker2 to run Gazebo in Docker.
I've started to repeat the steps outlined in "1.2 nvidia-docker2" section of this http://wiki.ros.org/docker/Tutorials/Hardware%20Acceleration#nvidia-docker2 tutorial.
First, I've found some mistakes in it:
One should do RUN DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata; the current installation command does not work -- the installation process hangs when selecting the location area
The package name lsb_release should be lsb-release
One needs to also install gnupg before running apt-key
After fixing these I got my Docker image built. But the last step, which should display RViz, tells me
root@2e605489cabd:/# rosrun rviz rviz
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
No protocol specified
qt.qpa.screen: QXcbConnection: Could not connect to display :0
Could not connect to any X display.
The host system is Ubuntu 16.04.
Originally posted by Ilya Kuzovkin on ROS Answers with karma: 56 on 2018-06-28
Post score: 3
Answer:
Found the answer. On my system I was running docker from under root (sudo docker instead of just docker inside run_my_image.sh).
To allow the root user access to the running X server, one must run the following command (on the host machine):
$ xhost +si:localuser:root
Now I am able to run RViz and Gazebo.
Originally posted by Ilya Kuzovkin with karma: 56 on 2018-06-28
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 31111,
"tags": "ros, ros-melodic, ubuntu, docker, opengl"
} |
Product of an odd number of Dirac $\gamma$ matrices | Question: Suppose $n$ is an odd number. Why can we write $a\llap{/}_1 a\llap{/}_2 ... a\llap{/}_n$ as
$$a\llap{/}_1 a\llap{/}_2 ... a\llap{/}_n = V_\mu \gamma^\mu + A_\mu \gamma^\mu \gamma_5$$
for some $V_\mu, A_\mu$?
I know that $\gamma^{\mu} \gamma^\nu=g^{\mu \nu}-i\sigma^{\mu \nu}$ and I tried to write $a\llap{/}_1 a\llap{/}_2 ... a\llap{/}_n$ as $a_{1\mu}a_{2\nu}… a_{n-1\alpha} \gamma^{\mu} \gamma^\nu…\gamma^\alpha a\llap{/}_n$ and use $\gamma^{\mu} \gamma^\nu=g^{\mu \nu}-i\sigma^{\mu \nu}$. It gives a term with $a_1 \cdotp a_2...a_{n-2} \cdotp a_{n-1} a\llap{/}_n$ but then there are terms with products of $\sigma^{\mu \nu}$ that I don't know how to deal with. I don't know if this is the best approach to the problem.
Answer: It follows from the basic anticommutation relation for the $\gamma$-matrices, $\{\gamma_\mu,\gamma_\nu\}=2g_{\mu\nu}$, that (i) two different $\gamma$-matrices anticommute, and that (ii) the square of a single $\gamma$-matrix is plus or minus the unit matrix. Then the statement in the question follows from the following claim: the product of an odd number of $\gamma$-matrices equals, up to a numerical factor, a single $\gamma$-matrix or a product of three $\gamma$-matrices.
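Before the general proof, the claim is easy to check numerically for $n=3$: build explicit $\gamma$-matrices, project out $V_\mu$ and $A_\mu$ with the standard trace identities $\mathrm{Tr}(\gamma^\mu\gamma^\nu)=4g^{\mu\nu}$ and $\mathrm{Tr}(\gamma^\mu\gamma^\nu\gamma_5)=0$, and verify that the two pieces reconstruct the product. A sketch in the Dirac representation with metric $\mathrm{diag}(1,-1,-1,-1)$:

```python
import numpy as np

# Dirac representation, metric g = diag(1, -1, -1, -1)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]]).astype(complex)
def gi(s): return np.block([[0 * I2, s], [-s, 0 * I2]])
gamma = [g0, gi(sx), gi(sy), gi(sz)]               # gamma^mu (upper index)
g5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
metric = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(a):
    # a-slash = a_mu gamma^mu, lowering the index of a with the metric
    a_lower = metric @ a
    return sum(a_lower[mu] * gamma[mu] for mu in range(4))

rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal(4) for _ in range(3))
M = slash(a) @ slash(b) @ slash(c)                 # product of an odd number (3)

# Project out the vector and axial pieces via trace identities:
# Tr(gamma^mu gamma^nu) = 4 g^{mu nu}, Tr(gamma^mu gamma^nu gamma5) = 0.
V = np.array([np.trace(M @ gamma[mu]) / 4 for mu in range(4)])       # V^mu
A = np.array([np.trace(M @ g5 @ gamma[mu]) / 4 for mu in range(4)])  # A^mu

# Rebuild V_mu gamma^mu + A_mu gamma^mu gamma5 and compare with M.
M_rebuilt = sum(metric[mu, mu] * (V[mu] * gamma[mu] + A[mu] * gamma[mu] @ g5)
                for mu in range(4))
assert np.allclose(M, M_rebuilt)
```

The reconstruction succeeds because, as the answer argues, the odd part of the Clifford algebra is spanned by $\gamma^\mu$ and $\gamma^\mu\gamma_5$ alone.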
The proof is by reduction of $n$. Consider a product $\gamma^{\mu_1}\dotsb\gamma^{\mu_n}$ with odd $n\geq 5$. Since the indices on the $\gamma$-matrices take values from the range $0,\dotsc,3$, there must be a pair of indices, $\mu_i$ and $\mu_j$ with $i<j$, such that $\mu_i=\mu_j$ and all $\mu_k$ with $i<k<j$ are different from $\mu_i=\mu_j$. Using the above properties (i) and (ii), the matrices $\gamma^{\mu_i}$ and $\gamma^{\mu_j}$ can be removed from the product, possibly up to changing the overall sign. One then continues to reduce the number of $\gamma$-matrices in the product in the same manner until it becomes $n=3$. If all the remaining $\gamma$-matrices are different, we are done. If there is still one pair of matrices with equal indices, the whole product is equivalent to a single $\gamma$-matrix. Done. | {
"domain": "physics.stackexchange",
"id": 69789,
"tags": "dirac-equation, dirac-matrices, clifford-algebra"
} |
Completely autonomous traversal of a planar graph | Question: I have to program an autonomous robot to traverse through a grid given in the following figure.
But the main problem is that the nodes to visit are not known beforehand; they will be received by the bot in real time.
E.g., after reaching node 19, the bot now has to go to node 6. The shortest path (19-17-7-6) can be calculated by Dijkstra's algorithm, but I don't know how to make the robot traverse that path.
Any ideas?
Edit: Sorry for not making the question clear enough.
I am facing a problem in determining the current position and the direction the robot is facing, so I can't define the set of commands (turn left/right/forward) to traverse to the next desired node.
I am thinking about an extra array for previously visited nodes and the current node and an extra variable for facing direction.
But for that I will have to define command sets for each node from each and every node.
Any better ideas?
Answer: You are absolutely correct that Dijkstra's Shortest Path Algorithm can tell you the correct path for the robot to follow. The problem seems to be that you cannot tell where the robot is, and what actions to make the robot take to get to the next node.
There's no "right" answer here, but I can offer some guidance about how I'd do it.
Where is the robot
If the robot can observe the configuration of the outgoing edges, then he can narrow down his location to a set of possible nodes. For example, if there are two outgoing edges, he can only be in 2, 3, 5, 6, 8, 9, 11, or 12. Similarly for three outgoing edges, and four outgoing edges uniquely identifies the center.
Since you say you have to deal with the robot's orientation when moving, it is probably safe to assume the robot can measure its orientation or at least discover it relative to the current configuration of edges. If the robot has some way of knowing his orientation, then the number of edges and their cardinal direction would help even more. For example, with two edges facing north and south, then we know the robot is at 3,5,9, or 11.
Furthermore, if the robot knew the history of possible locations, then we can also incorporate that information. If we knew the robot had two outgoing edges facing east and west, then he moved to the westmost edge and now had two edges facing north and south, then we know the robot is now at 3 and came from 2. What's cool here is that the robot did not know from just the edges; it was the action of moving that caused it to figure out where it was.
I might do that as follows:
Keep a separate list of all nodes and their edges. Assume all nodes are "unmarked"
At the start
Observe the number of outgoing edges at the current node
Mark all nodes with a different number of edges
Observe the outgoing edge orientations, and mark all nodes which have different orientation (i.e., if we notice an edge going north, mark all nodes which don't have an edge going north).
At this point, we know the robot is only in an unmarked node
Travel along an arbitrary edge (but not back to start)
Check the new outgoing edges. Mark all nodes which have different outgoing edges.
Additionally, mark all nodes which have the same edges but are not connected to an unmarked node
Repeat this process until only one unmarked node remains. Without orientation, this will never happen though (for example, 4 and 10 are identical up to orientation)
If you have no orientation information, and no way to observe outgoing edges, then history is all you have. If you know what node you started at, then by all means, you must come up with a complete sequence of actions to get the robot to the desired node. Here's how we might do that.
Getting there
Let's assume the robot starts at 1 and has to get to 21. I'd program a small subroutine that did the following.
Call Dijkstra's algorithm to get the shortest path.
Until reached:
Turn the robot toward the next node
take the edge
Update the robot's current node
Update the next node from the list in Dijkstra's output
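The steps above can be sketched as follows (the dict-based graph, the dijkstra helper, and the turn_toward/take_edge commands are all assumed stand-ins for the real hardware interface; FakeRobot just exercises the bookkeeping):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path over a dict-of-dicts graph {node: {neighbor: cost}}."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

def follow_path(robot, path):
    """Drive the robot edge by edge; the robot tracks its own node."""
    for next_node in path[1:]:
        robot.turn_toward(next_node)   # assumed low-level command
        robot.take_edge(next_node)     # assumed low-level command
        robot.current_node = next_node

# A stand-in robot for testing the bookkeeping (real code would move motors).
class FakeRobot:
    def __init__(self, start):
        self.current_node = start
        self.visited = [start]
    def turn_toward(self, node): pass
    def take_edge(self, node): self.visited.append(node)

# The 19 -> 6 example from the question, encoded as a tiny subgraph.
graph = {19: {17: 1}, 17: {19: 1, 7: 1}, 7: {17: 1, 6: 1}, 6: {7: 1}}
robot = FakeRobot(19)
follow_path(robot, dijkstra(graph, 19, 6))
assert robot.visited == [19, 17, 7, 6]
```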
Using this, the robot always knows his current node and next node. We only assumed that we knew the starting orientation. | {
"domain": "robotics.stackexchange",
"id": 921,
"tags": "mobile-robot, automatic, dynamic-programming"
} |
tf2 tutorial: follower turtle makes circles, doesn't follow leader | Question:
Hello all.
I was following the TF2 tutorial and reached the listener part. I have implemented the code accordingly, but I noticed my outcome is very different from the one shipped with:
sudo apt-get install ros-$ROS_DISTRO-turtle-tf2 ros-$ROS_DISTRO-tf2-tools ros-$ROS_DISTRO-tf
Before I continue more, here is what I have written so far:
#include <iostream>
#include <string>
#include <functional>
#include <ros/ros.h>
#include <tf2_ros/transform_listener.h>
#include <turtlesim/Spawn.h>
#include <geometry_msgs/Twist.h>
#include <geometry_msgs/TransformStamped.h>
int main(int argc, char **argv)
{
ros::init(argc, argv, "tf2_demo_listener_node");
ros::NodeHandle n;
ros::service::waitForService("spawn");
ros::ServiceClient spawner = n.serviceClient<turtlesim::Spawn>("spawn");
turtlesim::Spawn turtle;
turtle.request.x = 2;
turtle.request.y = 2;
turtle.request.theta = 0;
turtle.request.name = "turtle2";
spawner.call(turtle);
ros::Publisher publisher = n.advertise<geometry_msgs::Twist>("turtle2/cmd_vel", 10);
tf2_ros::Buffer buffer;
tf2_ros::TransformListener listener(buffer);
ros::Rate rate(10.0);
while (n.ok())
{
geometry_msgs::TransformStamped transform_stamped;
try
{
transform_stamped = buffer.lookupTransform("turtle2", "turtle1",
ros::Time(0),
ros::Duration(0.1));
}
catch (const std::exception &e)
{
ROS_WARN("%s", e.what()); // pass as an argument, not as the format string
ros::Duration(0.2).sleep();
continue;
}
geometry_msgs::Twist vel_msg;
//Note
// here we are converting the cartesian coordinate (x,y) to polar coordinates(r,theta):
// for theta we have arctan(y,x) which would be atan2(y,x) (its in radian) and
// for r we have sqrt(x^2 + y^2)
auto theta_radian = atan2(transform_stamped.transform.translation.y,
transform_stamped.transform.translation.x);
auto r = sqrt(pow(transform_stamped.transform.translation.x, 2) +
pow(transform_stamped.transform.translation.y, 2));
vel_msg.angular.z = 4.0 * theta_radian; //4.0 1.0
vel_msg.linear.x = 0.5 * r;
publisher.publish(vel_msg);
rate.sleep();
}
return 0;
}
Now if I run this, this is the pattern I get:
and ultimately it ends up spinning next to the turtle1 like this:
Now if I run the roslaunch turtle_tf2 turtle_tf2_demo.launch it performs neatly and nicely follows turtle1 everywhere until it reaches it:
Now my question is, what is it that I'm doing wrong that does not yield the very same result? Seemingly the code is the same (although implemented in different languages, C++ vs Python); however, the effect is very different.
What am I missing here?
Originally posted by Rika on ROS Answers with karma: 72 on 2021-07-06
Post score: 0
Original comments
Comment by Delb on 2021-07-06:
I tried your code and it's running as the tutorial so your issue is probably somewhere else. Are you running everything as in the tutorial ? Have you changed anything else ?
Also, you tried turtle_tf2_demo.launch but it's using the python script, can you try turtle_tf2_demo_cpp.launch to see if it's working fine ?
Comment by Ranjit Kathiriya on 2021-07-06:
I tried running your code, it is working fine for me also.
https://github.com/ros/geometry_tutorials/tree/indigo-devel/turtle_tf2
Have a look at this package, I think you will find the solution to your problem. I think your code is perfect.
Comment by Rika on 2021-07-07:
Thank you both very much. yes you were right the problem was somewhere else. I posted an answer below. but I'd also appreciate if someone could shed some light on why broadcaster needs to be only one?
Answer:
OK, I found where my problem had stemmed from. The code here is OK, as others kindly pointed out in the comments. What was wrong was the broadcaster that I had written for sending the turtle1/turtle2 transformations.
Basically I was instantiating a new broadcaster in my callback, and this was the culprit. I had to either make it static or use one instance for this to work (why is that an issue? I still don't know).
So this is my broadcaster I provided some explanation concerning the issue in the comments of callback_fn:
#include <iostream>
#include <string>
#include <functional>
#include <ros/ros.h>
#include <tf2_ros/transform_broadcaster.h>
#include <geometry_msgs/TransformStamped.h>
#include <turtlesim/Pose.h>
#include <tf/LinearMath/Quaternion.h>
int main(int argc, char** argv)
{
ros::init(argc, argv, "tf2_broadcaster_node");
ros::NodeHandle n;
ros::NodeHandle tmp_node("~");
std::string turtle_name = "";
if (!tmp_node.hasParam("turtle"))
{
if (argc != 2)
{
ROS_ERROR("Number of arguments do not match! must provide a turtle's name as the argument!");
return -1;
}
else
{
// Get the second argument which is the turtle name (the first argument is the application's exe path)
turtle_name = std::string(argv[1]);
std::cout<<"turtle_name is: "<<turtle_name<<std::endl;
}
}
else
{
// if we already have a turtlename in our parameter server, fetch it!
tmp_node.getParam("turtle", turtle_name);
std::cout<<"turtle_name: "<<turtle_name<<std::endl;
}
// see the comment below, either uncomment this or
// the static one below. see the explanation for more details
//tf2_ros::TransformBroadcaster broadcaster;
auto callback_fn = [&](turtlesim::PoseConstPtr pose)
{
// read https://answers.ros.org/question/381937/
// short story: it doesn't have to be defined as static or be shared!
// what matters is that they shouldn't be created/destroyed very fast
// that is every few milliseconds e.g.
// or as jvdhoorn says :
// "What is important is to make sure it has a sane lifecycle
// (ie: it does not get created and destroyed every few milliseconds)."
// otherwise, you will end-up with all sorts of nonsensical weird happenings
// previously I forgot about this and when I ran the listener app which relies
// on this, the trutle2 was moving erratically, in circles, it would get close to
// turtle1, but not always and not as fast, it would take a lot of time and huge
// circles! remove static and see it yourself.
// (run tf2_transform_carrot_demo.launch and see the result)
static tf2_ros::TransformBroadcaster broadcaster;
geometry_msgs::TransformStamped transform;
tf::Quaternion qtr;
// since we want all the nodes to be linked together, here we are specifying
// one parent called "world" for this node. any other nodes that have this as
// parent will be able to communicate with this node. and because our turtles
// need to be linked in order to communicate through tf2 this is necessary
transform.header.frame_id = "world";
transform.header.stamp = ros::Time::now();
//this means we are naming our frame after turtlename
// and our parent frame is world, this allows us to
// be with other frames that already exist and
// world is their parent (i.e. this way we can have multiple turtles
// roaming in the same world)
transform.child_frame_id = turtle_name;
// as turtlebot only moves in 2d coord, we only
// have x, and y and not z.
transform.transform.translation.x = pose->x;
transform.transform.translation.y = pose->y;
transform.transform.translation.z = 0;
// again turtlebot only moves in 2d, so there is no roll or pitch
// we only have yaw. remember that angles are in radian in ros.
qtr.setRPY(0, 0, pose->theta);
transform.transform.rotation.w = qtr.w();
transform.transform.rotation.x = qtr.x();
transform.transform.rotation.y = qtr.y();
transform.transform.rotation.z = qtr.z();
// Note:
// sendTransform and StampedTransform have opposite ordering of parent and child.
// Note2:
// You can also publish static transforms on the same pattern by instantiating
// a StaticTransformBroadcaster (from tf2_ros/static_transform_broadcaster.h) instead
// of a TransformBroadcaster. The static transforms will be published on the
// /tf_static topic (as opposed to /tf) and will be sent only when required (latched topic)
// and not periodically. For more details see here
broadcaster.sendTransform(transform);
ROS_INFO("turtlename: %s", turtle_name.c_str());
};
ROS_INFO("-turtlename: %s", turtle_name.c_str());
ros::Subscriber subscriber = n.subscribe<turtlesim::Pose>(turtle_name + "/pose", 1000, callback_fn);
ros::spin();
return 0;
}
Doing so results in a perfect game of turtle hide and seek:
Also, I'd be grateful if someone could explain why there has to be one instance of the broadcaster for this to work, and what happens when there isn't (why the erratic behavior?).
Update:
OK, I asked the question here and here is the answer.
In short:
gvdhoorn:
All you need to do is make sure you
create your ros::TransformBroadcaster
in a scope which lives longer than
that of the callback which is using
it. For the code you show in #q381845,
that would basically mean the scope of
main(..) (as you are using lambdas to
implement your callbacks). In more
traditional code, you'd make the
broadcaster a member variable of a
class, or keep it in global scope
(although that comes with its own
disadvantages).
Thanks a lot
Originally posted by Rika with karma: 72 on 2021-07-07
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 36653,
"tags": "ros, c++, ros-kinetic, ubuntu, tf2"
} |
Convert minutes portion of time to decimal | Question: I am making an hours calculator app. It takes a start time, end time and time taken for lunch.
For the start and end time it takes a four-digit hh:mm time. For example: 10:20.
I have made a function which converts the time into a decimal,
so 10:20 = 10.33. The function works, but I feel it looks a little heavy and wondered if anyone has any suggestions on how I could make it better...
const minuteConverter = time => {
let h = Number(time.split(':')[0]);
let m = Math.round((1 / 60 * (Number(time.split(':')[1])) + Number.EPSILON) * 100) / 100;
let mConverted = Number(m.toString().split('.')[1])
return Number(`${h}.${mConverted}`)
};
console.log(minuteConverter('10:20'))
The time must be output as a number to two decimal places. For example,
'10:20' >> 10.33
'9:45' >> 9.75
'15:33' >> 15.55
Answer:
Use const instead of let to declare variables if the value doesn't change.
You are executing time.split(':') twice.
A short method to convert a string to a number is the unary +.
JavaScript has the toFixed() method to format a number to a fixed number of digits:
function minuteConverter(time) {
const [h, m] = time.split(':');
const value = +h + +m / 60;
// toFixed() returns a string; the unary + coerces it back to a number,
// matching the requirement that the result be output as a number
return +value.toFixed(2);
} | {
"domain": "codereview.stackexchange",
"id": 39415,
"tags": "javascript, datetime, functional-programming, ecmascript-6"
} |
Keplerian Telescope Exit Pupil Location - What's the Basis for Its Formula? | Question: For a simple two-lens Keplerian telescope, this is the formula for the location of the exit pupil:
$$z'=\frac{f_2}{f_1}(f_1+f_2)$$
Where $z'$ is the distance to the exit pupil location (i.e. eye relief), $f_1$ is the focal length of the objective, and $f_2$ is the focal length of the eyepiece lens.
I've found the equation, in one incarnation or another, in the following three sources:
Handbook of Optics, Third Edition, Volume 1
Handbook of Optical Design, Third Edition
an opti202L lab manual of unknown origins
I've done some work on the equation and it appears to be applying the thin lens equation to the eyepiece lens:
$$\frac{1}{f_2}=\frac{1}{z'}-\frac{1}{z}$$
Where $z=-(f_1+f_2)$. This is the part I don't understand. The only way I've seen the thin lens equation used for systems of lenses, $z$ was treated as the distance to the previous lens' image formation point. This, however, suggests that the eyepiece is looking at the objective lens itself and not the image formed by the objective.
How does this work, and what's the rationale for using it to find the exit pupil location?
Answer: The exit pupil of any optical system is defined as the image of the controlling aperture stop (AS) formed by the subsequent optical components. The controlling aperture stop is the optical component which limits the maximum cone of rays from an object point which can actually be processed by the whole system (basically it controls how much light from each point actually makes it all the way through). In a system like a camera we deliberately set the AS by having an iris which opens and closes, but in the absence of an iris one of the lenses limits the maximum angle. In the case of a simple telescope this is generally the main front lens, and the exit pupil is its image formed by the eyepiece lens as your formula states. Though if you put too small an eyepiece lens then this could be the AS itself in which case it will also be the exit pupil.
In general the exit pupil is designed to form at the position where the eye will be placed and further that it will be the same size as the pupil of the eye itself, this ensures that the maximum possible amount of light can be transmitted from the object to the viewer and that the image fills the eye. The exit pupil limits the cone angle of rays illuminating each point of the image.
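It is straightforward to verify numerically that imaging the objective through the eyepiece with the thin-lens equation reproduces the handbook formula (a quick sketch; the focal lengths chosen here are arbitrary):

```python
f1, f2 = 200.0, 25.0            # objective and eyepiece focal lengths (mm)

# The objective sits one tube length, f1 + f2, in front of the eyepiece,
# so the object distance for this imaging step is z = -(f1 + f2).
z = -(f1 + f2)

# Thin-lens equation for the eyepiece: 1/f2 = 1/z' - 1/z
z_prime = 1.0 / (1.0 / f2 + 1.0 / z)

# Closed-form exit pupil (eye relief) location from the handbooks
z_prime_formula = (f2 / f1) * (f1 + f2)

assert abs(z_prime - z_prime_formula) < 1e-9   # both give 28.125 mm here
```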
The aperture stop should not be confused with the field stop which controls the field of view of the system. This aperture limits the maximum angular distance an object can be from the optical axis and still be seen. Its image in the subsequent components is called the Exit Window and effectively defines the total size of the final image. To an observer on the image side this window appears to limit the area of the image. | {
"domain": "physics.stackexchange",
"id": 16669,
"tags": "optics, geometric-optics, telescopes, lenses"
} |
Electrostatic energy affected by dielectric medium | Question:
An isolated metallic object is charged in vacuum to a potential $V_0$ using a suitable source, its electrostatic energy being $W_0$. It is then disconnected from the source and immersed in a large volume of dielectric with dielectric constant $K$. The electrostatic energy of the sphere in the dielectric is:
I know that in a general case, electrostatic energy $E=\frac{Q^2}{8\pi\epsilon_0R}$. I always believed that an outside dielectric medium has no effect on the internal electrical energy, since it cannot affect the charge distribution. However, this question's answer is not $W_0$...
My question has a situation similar to this question but that question does not talk about energy changes. Please can anyone elaborate exactly why the metallic object's electrostatic energy changes. Thank you!
Answer: Electrostatic energy does not only depend on the charge on the object. It depends on the work done in bringing charge to it from infinity, where the potential energy is presumed to be zero.
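For the sphere of the question this can be made explicit (a sketch, assuming the charge $Q$ stays fixed once the source is disconnected, and using the field of a charged sphere in a dielectric, $E=Q/(4\pi K\epsilon_0 r^2)$):

$$W=\int_R^\infty \frac{K\epsilon_0 E^2}{2}\,4\pi r^2\,dr=\frac{Q^2}{8\pi K\epsilon_0 R}=\frac{W_0}{K}$$

So the stored energy drops by a factor of $K$, precisely because the field between the surface and infinity is reduced by the medium.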
The polarization of the dielectric medium affects the electric field between the surface of the metal object and infinity. Therefore it also affects the work done in bringing charge from infinity. | {
"domain": "physics.stackexchange",
"id": 45253,
"tags": "electrostatics, voltage"
} |
Java MVC Implementation | Question: I am trying to write an MVC based application from scratch using Java 8 and Swing... I am having some difficulties with this.
I want to know if I am going the correct way about this...
My controller code is:
public class Controllerv2 {
private final Map<String, View> views = new HashMap<>();
private final Map<String, Command> commands = new HashMap<>();
private final ModelContainer model = new ModelContainer(this);
public void addView(String key, View view) {
views.put(key, view);
}
public View getView(String key) {
return views.get(key);
}
public void showView(String key) {
getView(key).showForm();
}
public void addCommand(String commandString, Command cmd) {
commands.put(commandString, cmd);
}
public void executeControllerCommand(String commandString) {
commands.get(commandString).execute();
}
}
Views are subclassed from this abstract class:
public abstract class View {
protected final JDialog dialog = new JDialog();
protected final GUIAction guiAction = new GUIAction();
protected final Controllerv2 controls;
public View(Controllerv2 suppliedController) {
this.controls = suppliedController;
}
// Builds the form's components
protected abstract void initComponents();
// Package up the fields into an object of type T and return
public abstract <T> T readFields();
// Self explanatory functions
public void showForm() {dialog.setVisible(true);}
public void hideForm() {dialog.setVisible(false);}
// For fields that need updating
public abstract void updateFields(String[] msg);
}
This is an example of a concrete View:
public final class VBasexLoginR extends View {
private JButton connectButton;
private JLabel connectionLabel;
private JTextField hostField;
private JLabel hostLabel;
private JLabel passwordLabel;
private JButton pingButton;
private JTextField portField;
private JLabel portLabel;
private JPasswordField pwField;
private JLabel titleLabel;
private JTextField uidField;
private JLabel userId;
public VBasexLoginR(Controllerv2 c) {
super(c);
initComponents();
}
@Override
protected void initComponents() { /* Boilerplate to set up the form */ }
@Override
public BaseXCredentials readFields() {
return new BaseXCredentials(hostField.getText(), portField.getText(), uidField.getText(), pwField.getPassword());
}
@Override
public void updateFields(String[] msg) {
connectionLabel.setText(msg[0]);
}
}
Am I going in the right direction with this?
Also, I think I may need different models, one for running queries on BaseX and then ones for working with the results. Thus far I have...
public class ModelContainer {
private final Map<String, Model> models = new HashMap<>();
private final Map<String, ModelCommand> commands = new HashMap<>();
private final Controllerv2 controller;
ModelContainer(Controllerv2 suppliedController) {
this.controller = suppliedController;
}
public void addModel(String key, Model model) {
models.put(key, model);
}
public Model getModel(String key) {
return models.get(key);
}
public void addCommand(String key, ModelCommand cmd) {
commands.put(key, cmd);
}
public void executeModelCommand(String key) {
commands.get(key).execute();
}
}
Views can be accessed through the getView method in controls
Is this correct or is this overkill?
EDIT -- Taking Mat's Mug's response into account, I changed the code to reflect the suggested naming conventions.
Answer: I'm only going to scratch the surface here, and let more seasoned java reviewers take a stab at this one, for I've never written a line of java-8 - but some concepts transcend language barriers.
Naming in particular.
Signatures can, and should talk.
public void addView(String viewString, View view) {
views.put(viewString, view);
}
public View getView(String viewString) {
return views.get(viewString);
}
Take this line:
public void addView(String viewString, View view) {
The name viewString is pretty confusing. One has to look at how it's implemented to go "oh, that's a key for a hashmap!". How about this instead?
public void addView(String key, View view) {
Now there's no ambiguity whatsoever, and getView(String key) becomes crystal-clear.
Consistency is King
This is just a minor thing, but I'd prefer consistency in the choice of verbs used to describe an action. For example, if a form is a view and a form gets shown, then a view should be shown too, not opened:
public void openView(String viewString) {
getView(viewString).showForm();
}
Would be
public void showView(String viewString) {
getView(viewString).showForm();
}
Same with commands - if a command executes, I wouldn't want the verb run to refer to the same thing, I'd prefer it be consistent:
public void runControllerCommand(String commandString) {
commands.get(commandString).execute();
}
Would be:
public void executeControllerCommand(String commandString) {
commands.get(commandString).execute();
}
Single-letter identifiers
Don't. This:
ModelContainer(Controllerv2 c) {
this.controls = c;
}
Is pretty darn confusing. Does c stand for controller, or for controls? Actually there seems to be something fishy going on here: "controls" is a nice name for a collection of controls, but you have it as... a Controllerv2 (is there a Controllerv3? Avoid "versioning" classes like this, especially in the early development stages). Notice the puzzled expression on my face? ;)
Anyway, that c would probably be better off as controller.
Constructor arguments
I like this part - final makes your intent very clear:
private final Map<String, Model> models = new HashMap<>();
private final Map<String, ModelCommand> commands = new HashMap<>();
private final Controllerv2 controls;
ModelContainer(Controllerv2 c) {
this.controls = c;
}
I'd push it a step further though, and provide a way to initialize the class with models and commands, like this:
private final Map<String, Model> models;
private final Map<String, ModelCommand> commands;
private final Controllerv2 controller;
ModelContainer(Controllerv2 controller, Map<String, Model> models, Map<String, ModelCommand> commands) {
this.controller = controller;
this.commands = commands;
this.models = models;
}
And now the client code doesn't need to call addCommand 20 times when there are that many commands to add.
Indentation
Your indentation is a little off, too. Whenever a code file ends like this:
}
}
You know something's gone wrong. I'd blame it on the Java-style braces but that's just the c# mug talking, just select all but the signature and last closing brace, and give that Tab button some lovin' ;) | {
"domain": "codereview.stackexchange",
"id": 8973,
"tags": "java, mvc, java-8"
} |
Mean field approach to toric code | Question: Toric code is one of the few exactly solvable model in condensed matter, however, like the paper (http://arxiv.org/abs/1104.5485) that uses SU(2) slave fermion to "solve" Kitaev's honeycomb model, is it possible to use the same approach to mean-field toric code? It seems that for each plaquette or star operator in toric code's Hamiltonian, it needs 8 fermions since they are 4-spin interaction terms, which is pretty daunting if we try the mean-field approach. Any comments will be welcomed.
Answer: Yes, there is a similar mean-field approach to the toric code. It was introduced in Sections 10.2 and 10.3 of Xiao-Gang Wen's book, and is known as the Wen plaquette model.
In the toric code model, the qubits are defined on the links, which is not very convenient in terms of the spin liquid language. So the first step is to redefine a smaller square lattice, such that the qubits are on the sites. This is illustrated in the following figure as going from the left (toric code model) to the right (Wen plaquette model).
The toric code model (on the left) is given by the following Hamiltonian
$$H=-\sum_v Q_v-\sum_f B_f,$$
where $v$ stands for the vertex and $f$ stands for the face (plaquette) on the square lattice. The qubits are defined on the link. The stabilizers $Q_v$ and $B_f$ depicted in the figure are given by
$$Q_v=\sigma_1^z\sigma_2^z\sigma_3^z\sigma_4^z, B_f=\sigma_5^x\sigma_6^x\sigma_7^x\sigma_8^x.$$
The same definition is just translated throughout the lattice. In the Wen plaquette model (on the right), the qubits are defined on the site. We represent the Pauli operators on a single qubit by drawing lines through the site (one orientation for $\sigma^z$, the orthogonal one for $\sigma^x$; the original figures are omitted), such that a ring around the plaquette stands for the operator
$$O_p = \sigma_1^z\sigma_2^x\sigma_3^z\sigma_4^x.$$
It is not hard to figure out that under a basis transformation of the qubits, the toric code Hamiltonian becomes
$$H=-\sum_p O_p,$$
where $p$ stands for the plaquette of the smaller square lattice. The plaquettes in the smaller lattice are naturally partitioned into red and blue plaquettes following the check-board pattern. The operator $O_p$ surrounding a red (blue) plaquette corresponds to the $Q_v$ ($B_f$) operator in the toric code model.
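As a quick aside (my addition, not part of the original answer): the exact solvability mentioned in the question rests on all stabilizers commuting. Two Pauli strings commute iff they act non-trivially with different Paulis on an even number of shared sites, and a $Q_v$ and a $B_f$ always share either 0 or 2 links. A minimal check of that parity rule:

```python
def commute(p, q):
    # Pauli strings like "ZZII": they commute iff the number of sites
    # where both act non-trivially with different Paulis is even.
    clashes = sum(1 for a, b in zip(p, q) if a != 'I' and b != 'I' and a != b)
    return clashes % 2 == 0

print(commute("ZZII", "XXII"))  # two shared links -> True  (commute)
print(commute("ZZII", "IXXI"))  # one shared link  -> False (anticommute)
```

The same parity argument shows why every vertex operator commutes with every face operator on the full lattice.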
Now every qubit can be considered as a spin-1/2 on each site, and we are ready to run the standard SU(2) projective construction for the spin liquid. We introduce four Majorana spinons on each site (equivalent to two complex spinons), denoted as $\chi_i^0,\chi_i^1,\chi_i^2,\chi_i^3$. They can be arranged in a matrix form as
$$F_i=\sigma^0\chi_i^0+\mathrm{i}\sigma^1\chi_i^1+\mathrm{i}\sigma^2\chi_i^2+\mathrm{i}\sigma^3\chi_i^3.$$
The SU(2) spin and SU(2) gauge transformation simply acts on $F_i$ as left and right SU(2) rotations respectively. Then the onsite spin operator can be fractionalized as
$$\sigma_i^a = \mathrm{Tr}F_i^\dagger \sigma^a F_i, (a=1,2,3),$$
under the gauge constraint that $\chi_i^0\chi_i^1\chi_i^2\chi_i^3=1$. Or explicitly, we have $\sigma_i^z=\mathrm{i}\chi_i^0\chi_i^3-\mathrm{i}\chi_i^1\chi_i^2$ and $\sigma_i^x=\mathrm{i}\chi_i^0\chi_i^1-\mathrm{i}\chi_i^2\chi_i^3$, which can be graphically represented as follows:
Under this fractionalization, the ring operator $O_p$ simply becomes a product of 8 Majorana fermions around the plaquette,
$$O_p=\chi_1\chi_2\chi_3\chi_4\chi_5\chi_6\chi_7\chi_8.$$
The arrangement of the Majorana fermions is shown below.
One can then take the mean field decomposition in the following manner
$$O_p=\langle\chi_1\chi_2\rangle\langle\chi_3\chi_4\rangle\langle\chi_5\chi_6\rangle\langle\chi_7\chi_8\rangle.$$
The corresponding mean field Hamiltonian $H_\text{MF}$ simply describes Majorana fermions dimerized across each link.
$$H_\text{MF}=\sum_{\langle i j \rangle\in\text{x-link}}\mathrm{i}u\chi_i^1\chi_j^3+\sum_{\langle i j \rangle\in\text{y-link}}\mathrm{i}u\chi_i^0\chi_j^2$$
So the fermion spectrum is fully gapped, as each pair of Majorana fermions forms a bound state independently. Because Majorana fermions are real, the mean field ansatz also breaks the SU(2) gauge structure down to $Z_2$, thus resulting in a $Z_2$ spin liquid, described by the emergent $Z_2$ topological order at low energy. | {
"domain": "physics.stackexchange",
"id": 26056,
"tags": "condensed-matter, anyons, spin-models"
} |
Moment of inertia of square frame | Question: A rod of length $l$ and mass $m$ has $ml^2/12$ as moment of inertia about an axis through its center of mass.
(source: draw.to)
Say now we take four identical copies of the rod above and form a square frame, whose center of mass lies exactly at the geometric center of the square. How can we then use the moment of inertia of a single rod to calculate the moment of inertia of the entire square frame?
(source: draw.to)
I recently began learning about moment of inertias so I am not really sure how to go about doing this.
Answer: You can use the parallel axis theorem to work out the moment of inertia of a rod of length $l$ with its centre of mass displaced from the axis of rotation by $\frac{l}{2}$, then multiply this value by four to get the moment of inertia of the whole square.
The parallel axis theorem is:
\begin{equation}
I = I_{cm} + md^2
\end{equation}
Where $I$ is the moment of inertia when the object has been displaced, $I_{cm}$ is the moment of inertia of the object when the axis of rotation passes through the centre of mass and $d$ is the distance it is displaced.
For each rod in your square we have:
\begin{align}
I_{cm} &= \frac{1}{12} ml^2 \\
d &= \frac{l}{2}\\
I &= ml^2 \left(\frac{1}{12} + \frac{1}{4} \right) = \frac{ml^2}{3}
\end{align}
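A quick numeric sanity check of the per-rod value (my addition, taking $m = l = 1$ and exact fractions):

```python
from fractions import Fraction

m, l = 1, 1                                  # unit mass and length
I_cm = Fraction(1, 12) * m * l**2            # rod about its own centre
I_rod = I_cm + m * (Fraction(1, 2) * l)**2   # parallel axis theorem, d = l/2
print(I_rod)  # -> 1/3, i.e. m*l^2/3 per rod
```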
So multiplying by four gives:
\begin{equation}
I_{square} = \frac{4ml^2}{3}
\end{equation} | {
"domain": "physics.stackexchange",
"id": 18349,
"tags": "homework-and-exercises, moment-of-inertia"
} |
Web scraper for restaurant ratings | Question: I'm new to using type hinting in Python. I've used it for a small scraper I had to build (see code below), everything works fine and mypy gives no errors. However I'm sure there are better ways to write this (avoiding the repetition between the Ratings and ScrapedData tuples, better way to handle the Literal in function signature). Any feedback is greatly appreciated, even on other aspects of the code.
I'm using Python 3.7 so I don't think I can use TypedDict.
import os
import requests
import lxml.html
import pandas as pd
from lxml.html import HtmlElement
from requests import Session
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
from enum import Enum
from typing import List, Optional, NamedTuple
from typing_extensions import Literal
from multiprocessing import Pool
HEADER = {"User-Agent": "Mozilla/5.0"}
TITLE_XPATH = '//div[@class="review-title"]'
REVIEW_XPATH = '//section[@class="review-container"]'
SENTIMENT_XPATH = '//div[@class="left-header"]'
RATING_XPATH = '//section[@itemprop="reviewrating"]'
SUBJECT_XPATH = './/div[@class="subject"]'
STAR_XPATH = './/span[@class="or-sprite-inline-block common_yellowstar_desktop"]'
POSITIVE_XPATH = './/div[contains(@class, "smiley_smile")]'
NEUTRAL_XPATH = './/div[contains(@class, "smiley_ok")]'
NEGATIVE_XPATH = './/div[contains(@class, "smiley_cry")]'
class Evaluation(Enum):
POSITIVE: int = 1
NEUTRAL: int = 0
NEGATIVE: int = -1
NONE: None = None
class Ratings(NamedTuple):
taste: Optional[int] = None
environment: Optional[int] = None
service: Optional[int] = None
hygiene: Optional[int] = None
value: Optional[int] = None
class ScrapedData(NamedTuple):
url: str
title: Optional[str] = None
review: Optional[str] = None
sentiment: Literal[
Evaluation.POSITIVE, Evaluation.NEUTRAL, Evaluation.NEGATIVE, Evaluation.NONE
] = Evaluation.NONE
taste: Optional[int] = None
environment: Optional[int] = None
service: Optional[int] = None
hygiene: Optional[int] = None
value: Optional[int] = None
class Scraper:
def __init__(self, url_file: str) -> None:
if not os.path.exists(url_file):
raise OSError("File Not Found: %s" % url_file)
with open(url_file, "r") as fp:
self.urls = [_.strip() for _ in fp.readlines()]
self.data: list = []
@staticmethod
def __requests_retry_session(
retries: int = 3,
backoff_factor: float = 0.3,
status_forcelist: tuple = (500, 502, 504),
session: Session = None,
) -> Session:
"""
Handles retries for request HTTP requests params are similar to those
for requests.packages.urllib3.util.retry.Retry
https://www.peterbe.com/plog/best-practice-with-retries-with-requests
"""
session = session or requests.Session()
retry = Retry(
total=retries,
read=retries,
connect=retries,
backoff_factor=backoff_factor,
status_forcelist=status_forcelist,
)
adapter = HTTPAdapter(max_retries=retry)
session.mount("http://", adapter)
session.mount("https://", adapter)
return session
@staticmethod
def __safe_extract_text(elements: List[HtmlElement]) -> Optional[str]:
"""
Returns the text content of the first element extracted from Xpath or None if none has been found
:param elements:
The result of a call to .xpath on the tree
:return: the string extracted or None if there are no elements
"""
if len(elements) > 0:
return elements[0].text_content()
else:
return None
@staticmethod
def __extract_sentiment(
elements: List[HtmlElement]
) -> Literal[
Evaluation.POSITIVE, Evaluation.NEUTRAL, Evaluation.NEGATIVE, Evaluation.NONE
]:
if len(elements) < 1:
return Evaluation.NONE
element = elements[0]
if len(element.xpath(POSITIVE_XPATH)) > 0:
return Evaluation.POSITIVE
elif len(element.xpath(NEUTRAL_XPATH)) > 0:
return Evaluation.NEUTRAL
elif len(element.xpath(NEGATIVE_XPATH)) > 0:
return Evaluation.NEGATIVE
return Evaluation.NONE
@staticmethod
def __extract_ratings(elements) -> Ratings:
if len(elements) < 1:
return Ratings()
element = elements[0]
rating_subjects = element.xpath(SUBJECT_XPATH)
if len(rating_subjects) != 5:
return Ratings()
extracted_ratings = Ratings(
*[len(_.xpath(STAR_XPATH)) for _ in rating_subjects]
)
return extracted_ratings
def scrape_page(self, url: str) -> ScrapedData:
print("Scraping : %s" % url)
r = self.__requests_retry_session().get(url, headers=HEADER, timeout=10)
tree = lxml.html.fromstring(r.content)
# Extract title
title = self.__safe_extract_text(tree.xpath(TITLE_XPATH))
# Extract review
review = self.__safe_extract_text(tree.xpath(REVIEW_XPATH))
# Extract overall sentiment
sentiment = self.__extract_sentiment(tree.xpath(SENTIMENT_XPATH))
# Extract specific grades
ratings = self.__extract_ratings(tree.xpath(RATING_XPATH))
return ScrapedData(
url, title, review, sentiment.value, *ratings._asdict().values()
)
def scrape(self) -> None:
p = Pool(5)
self.data = p.map(self.scrape_page, self.urls)
p.terminate()
p.join()
def save(self, output_file: str = "content.csv"):
data = pd.DataFrame(self.data)
data.to_csv(output_file, index=None)
if __name__ == "__main__":
s = Scraper("reviewsurl.csv")
s.scrape()
s.save()
Answer: Optional
I find it less useful to include None in an enum like Evaluation, and more useful to write Optional[Evaluation] where appropriate. It's useful to be able to choose whether you have a value that cannot be None at a certain point, or otherwise, based on context.
In other words, this:
sentiment: Literal[
Evaluation.POSITIVE, Evaluation.NEUTRAL, Evaluation.NEGATIVE, Evaluation.NONE
] = Evaluation.NONE
can just be
sentiment: Optional[Evaluation] = None
The same goes for the return value of __extract_sentiment.
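A minimal sketch of that shape (my own trimmed-down version, not the poster's actual code; the element inspection is stubbed out): with `Optional[Evaluation]` there is no need for a `NONE` member, and mypy forces callers to handle the missing case explicitly.

```python
from enum import Enum
from typing import List, Optional

class Evaluation(Enum):
    POSITIVE = 1
    NEUTRAL = 0
    NEGATIVE = -1

def extract_sentiment(elements: List[object]) -> Optional[Evaluation]:
    # None now means "no sentiment element found on the page"
    if not elements:
        return None
    return Evaluation.POSITIVE  # real logic would inspect elements[0]

print(extract_sentiment([]))  # -> None
```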
File existence
I find this:
if not os.path.exists(url_file):
raise OSError("File Not Found: %s" % url_file)
to be redundant. open will do that for you.
Lists
Since you're learning about type hinting: what is this a list of?
self.data: list = []
Similarly, this:
status_forcelist: tuple = (500, 502, 504)
is probably
status_forcelist: Tuple[int, ...] = (500, 502, 504)
Inner lists
extracted_ratings = Ratings(
*[len(_.xpath(STAR_XPATH)) for _ in rating_subjects]
)
should be
extracted_ratings = Ratings(
*(len(_.xpath(STAR_XPATH)) for _ in rating_subjects)
)
In other words, unpack a generator, not a materialized list. Also, never call a variable _ if you actually use it. | {
"domain": "codereview.stackexchange",
"id": 38400,
"tags": "python, python-3.x, web-scraping"
} |
Are telescope eyepieces ever made with a diopter adjustment (similar to DSLRs)? | Question: I am learning about eyepieces, and "eye relief" and what sorts of eyepieces an amateur may want for their purposes, and was wondering if there are eyepieces with diopter adjustments, or if, due to the optical systems in reflecting and refracting telescopes, diopter adjustment like those in a DSLR viewfinder are never found in the designs of eyepieces.
Answer: Are there eyepieces with diopter adjustments? The answer is both yes and no.
Diopters are not needed on telescopes for simple spherical myopia (near-sighted or far-sighted vision issues) but may be needed if a person suffers from cylindrical myopia (astigmatism) and wants to be able to use the telescope without wearing prescription eyeglasses.
For near-sighted & far-sighted vision issues there is generally no need of a diopter adjustment on a telescope as there is on an SLR or DSLR camera for spherical correction. Also, cameras generally don't have a way to compensate for astigmatism ... but it turns out you can get eyepiece adapters that compensate when using a telescope (more on that later).
Why? To answer the question, it's important to first recognize what the diopter adjustment does on a DSLR camera.
On a DSLR, the "SLR" stands for "Single Lens Reflex" because the camera is designed to allow the photographer to look through the same lens that will be used to capture the photo (as opposed to viewfinder or rangefinder cameras... or even twin-lens reflex cameras where the photographer looks through a separate lens used to "approximate" the image that will be captured).
If you remove the lens and inspect the top of the mirror chamber, you'll notice that the mirror is shining light up onto a piece of frosted glass. The glass will often have etched marks to serve as aids when taking the photograph.
The distance that light must travel through the lens, into the camera body, onto the reflex mirror and up onto the frosted glass (focusing screen) is actually the same as the distance through the camera and onto the imaging sensor (when the mirror swings clear to capture a photograph).
This means that when you look through the viewfinder, you're technically not looking "through the lens" ... you're actually looking at a back-lit projection screen (the frosted glass).
This means you could have an image which is precisely focused on the focusing screen (and also precisely focused for the imaging sensor) but appears out of focus due to eyesight issues.
The Diopter Adjustment allows you to adjust for near-sighted/far-sighted issues (spherical myopia). To adjust, manually put the camera out of focus and point at something with no contrast (such as a blue sky outdoors or a plain white wall if indoors) and adjust the diopter until the etched markings on the focus screen are as sharp as possible. Once you've done this, you can be confident that if the image projected onto the focus screen is well-focused, your eyes will also perceive it as being well-focused. It also means you may not need to wear your prescription eyeglasses when using the camera.
Telescopes
With a telescope, you aren't looking at an image on a focusing screen (such as frosted glass)... you really are looking through the telescope. When doing visual observing, there is no camera imaging sensor involved.
This means if you are near-sighted or far-sighted (both of which are spherical corrections) you can just adjust the telescope's focus to compensate. There is no need for a diopter adjustment.
Spherical myopia means the curvature of your eye is spherical... but is either slightly too flat ... or slightly too round to focus accurately at certain distances (resulting in a person being near-sighted or far-sighted).
But there is also cylindrical myopia (aka astigmatism). This means that instead of having uniform curvature (spherical curvature) on your eye, your eye's curvature is a bit more barrel-shaped where the curvature in one direction is stronger than the orthogonal direction.
TeleVue (and so far as I know ... only TeleVue offers such a thing) makes a product called Dioptrx. This is an adapter that fits onto many TeleVue eyepieces (not all TeleVue eyepieces are compatible with Dioptrx). It is designed for those who suffer from astigmatism.
You would need to know your corrective lens prescription to get the correct Dioptrx adapter (and since your right vs. left eyes will each have their own prescription, you need to decide which Dioptrx to order based on the eye you prefer to use when viewing through the telescope).
If you follow the link to the TeleVue Dioptrx page, you'll notice that the instructions offer helpful information on how to read your eyeglass prescription and how to identify the cylindrical correction needed for your eye. But you'll also notice it doesn't matter if the correction needed is positive or negative, and also that the axis can be ignored. This is because you can't just rotate the lenses in your eyeglasses ... they have to be mounted in your eyeglasses frames in the correct orientation to work properly. But you can rotate the telescope eyepiece. This is why the positive/negative and axis values can all be ignored (TeleVue marks the mounting ring of the Dioptrx with letters ... so once you decide which axis offers the sharpest view, you note the letter at the top and just make sure to always rotate the eyepiece to the same orientation relative to your eyes when using it).
The Dioptrx astigmatism corrector allows you to use your telescope without needing to wear prescription eyeglasses. If your astigmatism if very weak (the cylindrical correction is a low value ... such as 0.50 ... then Dioptrx may not be needed. But if your correction is very high (e.g. 2.00 or more) then Dioptrx will probably be a big help. | {
"domain": "astronomy.stackexchange",
"id": 5009,
"tags": "optics"
} |
what does the log directory consists of? | Question:
Hey, I'm a beginner to ROS. After running nodes earlier, I got some .log files in the log directory. I want to know what they basically are. What info do they contain?
Originally posted by charvi on ROS Answers with karma: 50 on 2019-02-27
Post score: 0
Answer:
Are you walking through the tutorials? That's a great way to learn and you will quickly come across the use of some logging statements like ROS_INFO in C++ or rospy.loginfo in Python.
The output from these messages can end up in several places depending on your configuration etc.
Have a look at the introduction pages:
https://wiki.ros.org/roscpp/Overview/Logging
https://wiki.ros.org/rospy_tutorials/Tutorials/Logging
Originally posted by knxa with karma: 811 on 2019-03-01
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by charvi on 2019-03-03:
Thank you. | {
"domain": "robotics.stackexchange",
"id": 32551,
"tags": "ros-kinetic"
} |
Energy use by muscles, actual work done by muscles and more | Question: Lately, I've started exercising in the gym and outside. I've also started to look at the details of food I eat.
Food usually has a label saying the amount of energy is inside it. For example, some chocolate says it has 400 kilocalorie for 100 grams.
I have some questions about that:
Does the label say the total amount of chemical energy that exists inside the food, which will be released if burned, or the energy available to the human body after it is digested? (Because digesting requires energy, the food would then actually contain MORE energy than written.)
When I'm riding on a static bike, it has a screen which counts the kcal I burn. It has a thing connected to my arm, measuring my heart beat. After about an hour of riding, it said I used about 700 kcal. Is it the amount of work I did, i.e if there was a battery connected to the bicycle, I'd generate 700 kcal of electricity, OR, the amount of energy my body LOST, because muscle efficiency is far from 100% (so, I lost 700 kcal, but generated only 150 kcal of electricity, for example)
When doing a physical activity, the heart rate and breathing rate increase. Let's say I'm riding a bicycle or running. What's the ratio between energy consumed by the muscles and energy consumed by the lungs or heart?
Answer:
I speak only for the U.S. regulations: the calorie labels on wrappers refer to the energy released when burned. Sometimes these are inaccurate. Many dieticians recommend calculating the calories based on weights of protein, carbohydrates and fats in the serving: 4 kcal in each gram of protein and carbohydrate and 9 kcal in each gram of fat in your food. Multiply carbs, proteins and fats by the appropriate value and add them up.
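That 4/4/9 rule is just a weighted sum; here is a tiny sketch (my addition; the gram figures are made up for illustration):

```python
def kcal_from_macros(protein_g: float, carb_g: float, fat_g: float) -> float:
    # 4 kcal per gram of protein and carbohydrate, 9 kcal per gram of fat
    return 4 * protein_g + 4 * carb_g + 9 * fat_g

# A hypothetical 100 g chocolate bar with 5 g protein, 50 g carbs, 20 g fat:
print(kcal_from_macros(5, 50, 20))  # -> 400 kcal
```

The invented macro split above happens to land on the 400 kcal per 100 g figure from the question.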
The reading on the bike does not take into account your pulse. Just the raw work done turning the pedals. Some very smart bikes do take pulse into account, but the often inaccurate heart monitors lead to inaccurate calculations of total energy.
The heart and diaphragm (muscle primarily responsible for breathing) do use more energy when exercising than when you are sitting still. But that increase is small relative to the increase in the primary muscle sets involved in the exercise. Your resting heart rate is probably around 70 bpm, your exercising heart rate is probably around 160-180 bpm. So it has increased its rate and energy consumption by about 3 times. Your skeletal muscles maintain a basal metabolic rate during inactivity (one purpose of this is your body needs the heat from slightly tensed muscles to thermoregulate itself) - but during exertion, their metabolic rate increases enormously. Exercise increases their energetic demands by whole orders of magnitude.
The actual value of the (energy used by heart) / (energy used by skeletal muscles during exercise) depends on what exercise you do and how efficiently you do it. The change in the ratio between inactivity and activity depends on how much muscle mass you have (more muscle uses more energy during inactivity) and how strenuously you exercise. But the point is that the increased energetic demands by skeletal muscles far outpace the 3x increase in energetic demands by the heart and diaphragm - so the latter accounts for pretty much all total energy consumption at even moderate activity levels. | {
"domain": "biology.stackexchange",
"id": 387,
"tags": "human-biology, metabolism, muscles"
} |
Connect 4 minimax does not make the best move | Question: I'm trying to implement an algorithm that would choose the optimal next move for the game of Connect 4. As I just want to make sure that the basic minimax works correctly, I am actually testing it like a Connect 3 on a 4x4 field. This way I don't need the alpha-beta pruning, and it's more obvious when the algorithm makes a stupid move.
The problem is that the algorithm always starts the game with the leftmost move, and also during the game it's just very stupid. It doesn't see the best moves.
I have thoroughly tested methods makeMove(), undoMove(), getAvailableColumns(), isWinningMove() and isLastSpot() so I am absolutely sure that the problem is not there.
Here is my algorithm.
NextMove.java
private static class NextMove {
final int evaluation;
final int moveIndex;
public NextMove(int eval, int moveIndex) {
this.evaluation = eval;
this.moveIndex = moveIndex;
}
int getEvaluation() {
return evaluation;
}
public int getMoveIndex() {
return moveIndex;
}
}
The Algorithm
private static NextMove max(C4Field field, int movePlayed) {
// moveIndex previously validated
// 1) check if moveIndex is a final move to make on a given field
field.undoMove(movePlayed);
// check
if (field.isWinningMove(movePlayed, C4Symbol.BLUE)) {
field.playMove(movePlayed, C4Symbol.RED);
return new NextMove(BLUE_WIN, movePlayed);
}
if (field.isWinningMove(movePlayed, C4Symbol.RED)) {
field.playMove(movePlayed, C4Symbol.RED);
return new NextMove(RED_WIN, movePlayed);
}
if (field.isLastSpot()) {
field.playMove(movePlayed, C4Symbol.RED);
return new NextMove(DRAW, movePlayed);
}
field.playMove(movePlayed, C4Symbol.RED);
// 2) moveIndex is not a final move
// --> try all possible next moves
final List<Integer> possibleMoves = field.getAvailableColumns();
int bestEval = Integer.MIN_VALUE;
int bestMove = 0;
for (int moveIndex : possibleMoves) {
field.playMove(moveIndex, C4Symbol.BLUE);
final int currentEval = min(field, moveIndex).getEvaluation();
if (currentEval > bestEval) {
bestEval = currentEval;
bestMove = moveIndex;
}
field.undoMove(moveIndex);
}
return new NextMove(bestEval, bestMove);
}
private static NextMove min(C4Field field, int movePlayed) {
// moveIndex previously validated
// 1) check if moveIndex is a final move to make on a given field
field.undoMove(movePlayed);
// check
if (field.isWinningMove(movePlayed, C4Symbol.BLUE)) {
field.playMove(movePlayed, C4Symbol.BLUE);
return new NextMove(BLUE_WIN, movePlayed);
}
if (field.isWinningMove(movePlayed, C4Symbol.RED)) {
field.playMove(movePlayed, C4Symbol.BLUE);
return new NextMove(RED_WIN, movePlayed);
}
if (field.isLastSpot()) {
field.playMove(movePlayed, C4Symbol.BLUE);
return new NextMove(DRAW, movePlayed);
}
field.playMove(movePlayed, C4Symbol.BLUE);
// 2) moveIndex is not a final move
// --> try all other moves
final List<Integer> possibleMoves = field.getAvailableColumns();
int bestEval = Integer.MAX_VALUE;
int bestMove = 0;
for (int moveIndex : possibleMoves) {
field.playMove(moveIndex, C4Symbol.RED);
final int currentEval = max(field, moveIndex).getEvaluation();
if (currentEval < bestEval) {
bestEval = currentEval;
bestMove = moveIndex;
}
field.undoMove(moveIndex);
}
return new NextMove(bestEval, bestMove);
}
The idea is that the algorithm takes in the arguments of a currentField and the lastPlayedMove. Then it checks if the last move somehow finished the game. If it did, I just return that move, and otherwise I go in-depth with the subsequent moves.
Blue player is MAX, red player is MIN.
In each step I first undo the last move, because it's easier to check if the "next" move will finish the game than to check if the current field is finished (this would require analyzing all possible winning options in the field). After the check, I just redo the move.
For some reason this doesn't work. I have been stuck on this for days! I have no idea what's wrong... Any help greatly appreciated!
EDIT
I'm adding the code how I'm invoking the algorithm.
@Override
public int nextMove(C4Game game) {
C4Field field = game.getCurrentField();
C4Field tmp = C4Field.copyField(field);
int moveIndex = tmp.getAvailableColumns().get(0);
final C4Symbol symbol = game.getPlayerToMove().getSymbol().equals(C4Symbol.BLUE) ? C4Symbol.RED : C4Symbol.BLUE;
tmp.dropToColumn(moveIndex, symbol);
NextMove mv = symbol
.equals(C4Symbol.BLUE) ?
max(tmp, moveIndex) :
min(tmp, moveIndex);
int move = mv.getMoveIndex();
return move;
}
Answer: I suspect that you'll have to remove this code:
if (field.isWinningMove(movePlayed, C4Symbol.BLUE)) {
field.playMove(movePlayed, C4Symbol.RED);
return new NextMove(BLUE_WIN, movePlayed);
}
from the max() method, and remove this code:
if (field.isWinningMove(movePlayed, C4Symbol.RED)) {
field.playMove(movePlayed, C4Symbol.BLUE);
return new NextMove(RED_WIN, movePlayed);
}
from the min() method.
In the first case, you're checking whether the move that RED just made was a winning move. You don't want to check there whether it was a winning move for BLUE, because it wasn't BLUE who just made that move; it was RED. The same applies the other way around in the second case.
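To see the corrected terminal-check shape in isolation, here is a minimal sketch (my addition, in Python and using tic-tac-toe rather than Connect 4, purely to keep it short): on entry we only ask whether the player who just moved has won.

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def won(board, player):
    return any(all(board[i] == player for i in line) for line in WIN_LINES)

def minimax(board, to_move):
    other = 'O' if to_move == 'X' else 'X'
    # `other` just moved, so we only check whether *other* won here,
    # never whether `to_move` somehow won on other's turn.
    if won(board, other):
        return 1 if other == 'X' else -1   # X maximizes, O minimizes
    if ' ' not in board:
        return 0                           # board full: draw
    scores = []
    for i, cell in enumerate(board):
        if cell == ' ':
            board[i] = to_move
            scores.append(minimax(board, other))
            board[i] = ' '                 # undo, as in the Java code
    return max(scores) if to_move == 'X' else min(scores)

print(minimax(list('XX OO    '), 'X'))  # -> 1: X wins by playing index 2
```

With perfect play from the empty board, `minimax(list(' ' * 9), 'X')` evaluates to 0 (a draw), though that search takes a moment.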
Additionally, the initial call into the algorithm seems overly complicated. I am not sure what the intended use of the tmp variable there is, or that dropToColumn() call. I would rewrite it to be more like:
@Override
public int nextMove(C4Game game) {
C4Field field = game.getCurrentField();
NextMove mv = null;
if(game.getPlayerToMove().getSymbol().equals(C4Symbol.BLUE)){
mv = max(field, -1);
}
else{
mv = min(field, -1);
}
return mv.getMoveIndex();
}
This will require an adaptation of the max() and min() methods such that they skip the whole checking-for-wins thing if the previous movePlayed equals -1.
With the code you currently have there, you do not perform a minimax search for the optimal move in the current game state; instead you first arbitrarily modify the current game state using that tmp.dropToColumn() call, and perform the minimax search in that arbitrarily modified game state. The optimal move to play in such an arbitrarily-modified game state will tend not to be the optimal move in the game state that you really are in. | {
"domain": "ai.stackexchange",
"id": 736,
"tags": "game-ai, minimax, java"
} |
Is every unambiguous grammar regular? | Question: While searching for an answer to this question I found out that there is an unambiguous grammar for every regular language.
But is there a regular language for every unambiguous grammar? How can I prove that this is/isn't true?
Answer: The following grammar is unambiguous yet generates a non-regular language:
$$ S \to aSb \mid \epsilon $$ | {
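To see both halves of the claim concretely, here is a small Python sketch (illustrative only): every derivation of this grammar yields a string of the form $a^n b^n$, and each such string has exactly one derivation, so the grammar is unambiguous; yet $\{a^n b^n : n \ge 0\}$ is the textbook non-regular language (by the pumping lemma, a finite automaton cannot count the $a$'s).

```python
# Sketch: strings generated by S -> aSb | epsilon, and a membership
# test for the language { a^n b^n : n >= 0 }.

def generate(n):
    """The unique string derived with n applications of S -> aSb."""
    return "a" * n + "b" * n

def in_language(s):
    """Membership test for { a^n b^n : n >= 0 }."""
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n

for n in range(5):
    assert in_language(generate(n))

# Each string corresponds to exactly one choice of n, hence exactly one
# parse tree: the grammar is unambiguous even though the language is
# not regular.
```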
"domain": "cs.stackexchange",
"id": 16912,
"tags": "formal-languages, formal-grammars, proof-techniques, ambiguity"
} |
What does an output of -I for all amino acids mean in a psi-blast pssm matrix file? | Question: I have run psi-blast using the NR database, remotely, with one iteration, for several sequences to calculate an evolutionary profile (PSSM) for each of those sequences.
However, many of the PSSM files contain lines of -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I rather than numerical values. What does it mean?
> Last position-specific scoring matrix computed
> A R N D C Q E G H I L K M F P S T W Y V
> 1 G -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I
> 2 N 4 -1 -2 -2 0 -1 -1 0 -2 -1 -1 -1 -1 -2 -1 1 0 -3 -2 0
> 3 K -2 -1 4 4 -3 0 1 -1 0 -3 -4 0 -3 -3 -2 0 -1 -4 -3 -3
> 4 E 0 -3 -3 -3 9 -3 -4 -3 -3 -1 -1 -3 -1 -2 -3 -1 -1 -2 -2 -1
> 5 K -2 -2 1 6 -3 0 2 -1 -1 -3 -4 -1 -3 -3 -1 0 -1 -4 -3 -3
> 6 A -1 0 0 2 -4 2 5 -2 0 -3 -3 1 -2 -3 -1 0 -1 -3 -2 -2
> 7 D -2 -3 -3 -3 -2 -3 -3 -3 -1 0 0 -3 0 6 -4 -2 -2 1 3 -1
> 8 R 0 -2 0 -1 -3 -2 -2 6 -2 -4 -4 -2 -3 -3 -2 0 -2 -2 -3 -3
> 9 Q -2 0 1 -1 -3 0 0 -2 8 -3 -3 -1 -2 -1 -2 -1 -2 -2 2 -3
> 10 K -1 -3 -3 -3 -1 -3 -3 -4 -3 4 2 -3 1 0 -3 -2 -1 -3 -1 3
> 11 V -1 2 0 -1 -3 1 1 -2 -1 -3 -2 5 -1 -3 -1 0 -1 -3 -2 -2
> 12 V -1 -2 -3 -4 -1 -2 -3 -4 -3 2 4 -2 2 0 -3 -2 -1 -2 -1 1
> 13 S -1 -1 -2 -3 -1 0 -2 -3 -2 1 2 -1 5 0 -2 -1 -1 -1 -1 1
> 14 D -2 0 6 1 -3 0 0 0 1 -3 -3 0 -2 -3 -2 1 0 -4 -2 -3
> 15 L -1 -2 -2 -1 -3 -1 -1 -2 -2 -3 -3 -1 -2 -4 7 -1 -1 -4 -3 -2
> 16 V -1 1 0 0 -3 5 2 -2 0 -3 -2 1 0 -3 -1 0 -1 -2 -1 -2
> 17 A -1 5 0 -2 -3 1 0 -2 0 -3 -2 2 -1 -3 -2 -1 -1 -3 -2 -3
> 18 L 1 -1 1 0 -1 0 0 0 -1 -2 -2 0 -1 -2 -1 4 1 -3 -2 -2
> 19 E 0 -1 0 -1 -1 -1 -1 -2 -2 -1 -1 -1 -1 -2 -1 1 5 -2 -2 0
> 20 G 0 -3 -3 -3 -1 -2 -2 -3 -3 3 1 -2 1 -1 -2 -2 0 -3 -1 4
> 21 A -3 -3 -4 -4 -2 -2 -3 -2 -2 -3 -2 -3 -1 1 -4 -3 -2 11 2 -3
> 22 L -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
> 23 D -2 -2 -2 -3 -2 -1 -2 -3 2 -1 -1 -2 -1 3 -3 -2 -2 2 7 -1
> 24 M -1 0 0 1 -3 4 4 -2 0 -3 -3 1 -1 -3 -1 0 -1 -2 -2 -2
> 25 Y 0 -3 -3 -3 9 -3 -4 -3 -3 -1 -1 -3 -1 -2 -3 -1 -1 -2 -2 -1
> 26 K -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4 -4
> 27 L -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
> 28 D -1 -2 -3 -3 -1 -2 -3 -4 -3 3 3 -3 2 0 -3 -2 -1 -2 -1 2
> 29 N -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I
> 30 S -2 1 -1 -1 -2 -1 -2 -2 -1 -1 -1 -1 -2 0 -1 0 0 -1 -1 -3
> 31 R 4 0 -2 1 4 -1 -3 0 -3 0 -3 -4 4 -1 0 -1 -3 -1 0 -4
> 32 Y -3 -1 -3 -4 -3 -3 -2 -3 -1 -3 -1 -1 -3 -3 -3 -1 -1 -1 -3 -2
Answer: It means -inf. It has to do with how the PSSM score is calculated. | {
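When post-processing such files, a simple workaround is to map `-I` to minus infinity (or a large negative sentinel) while parsing. A Python sketch, assuming the 20-column ASCII layout shown above (`parse_pssm_row` is a hypothetical helper, not part of any BLAST toolkit):

```python
import math

def parse_pssm_row(line):
    """Parse one data row of a PSI-BLAST ASCII PSSM.

    Assumed columns: position, residue, then 20 per-amino-acid scores.
    '-I' entries stand for minus infinity.
    """
    fields = line.split()
    pos, residue = int(fields[0]), fields[1]
    scores = [-math.inf if f == "-I" else float(f) for f in fields[2:22]]
    return pos, residue, scores

pos, res, scores = parse_pssm_row(
    "1 G -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I -I")
assert scores[0] == -math.inf
```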
"domain": "bioinformatics.stackexchange",
"id": 599,
"tags": "blast, sequence-homology, phylogenetics, pssm, psi-blast"
} |
2019 ROS distribution | Question:
Hello,
Can anyone tell me the name of the next ROS distribution that will be released in May?
Regards.
Originally posted by joelsonmiller on ROS Answers with karma: 5 on 2019-03-11
Post score: 0
Answer:
Can anyone tell me the name of the next ROS distribution that will be released in May?
Noetic Ninjemys (from here, search for "noetic").
Originally posted by gvdhoorn with karma: 86574 on 2019-03-11
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 32629,
"tags": "ros"
} |
Transforming NodaTime ZoneIntervals | Question: Version 2 may be found here.
The company I work for has customers around the globe. I work with a time-series database that contains manufacturing process data for each customer site. I was asked to provide daily averages for the past 2 years. Requesting averages from the 3rd party time-series database is easy. The difficulty is that each request needs to be issued specific for each site's time zone.
NodaTime's ZoneInterval provides me some information, but I need to transform it for my 3rd party database. The calls to the time series database expects start and end times in UTC, and you may ask for the summaries to be returned in evenly spaced TimeSpan intervals - think hours here not a "day". This is easy enough for most days during the year except for any DST transition days where the day length is not 24 hours.
Here is the ZonedDateRange.cs class used to perform the custom transformation:
using System;
using System.Collections.Generic;
using System.Linq;
using NodaTime;
using NodaTime.TimeZones;
namespace NodaTime_Zoned_Ranges
{
public class ZonedDateRange
{
public enum DayState { Standard, DST, SpringForward, FallBack }
public DateTimeZone Zone { get; private set; }
public DayState State { get; private set; }
public LocalDate StartDay { get; private set; }
public LocalDate EndDay { get; private set; }
public ZonedDateTime ZoneStart => Zone.AtStartOfDay(StartDay);
public ZonedDateTime ZoneEnd => Zone.AtStartOfDay(EndDay.PlusDays(1));
public DateTime UtcStart => ZoneStart.ToDateTimeUtc();
public DateTime UtcEnd => ZoneEnd.ToDateTimeUtc();
public double HoursPerDay => IsTransitionDay ? (UtcEnd - UtcStart).TotalHours : 24;
public int DaysInRange => IsTransitionDay ? 1 : (int)((ZoneEnd - ZoneStart).TotalDays);
// -1 = Falling back DAY, +1 Spring Forward DAY, 0 means no transition occurring BUT the day still could be DST.
public int Transition => (State == DayState.FallBack) ? Backward : (State == DayState.SpringForward) ? Forward : None;
public bool IsTransitionDay => (Transition != None);
public const int Backward = -1;
public const int Forward = 1;
public const int None = 0;
// Private constructor forces using static factory.
private ZonedDateRange() { }
// A list should be fairly small. Consider U.S. Central Time for an entire calendar year. There will only be 5 items in the list.
// 1) CST from Jan 1 to the day before Spring forward.
// 2) Spring Forward transition day (one day is both start and end)
// 3) CDT from day after Spring Forward and day before Fall Back.
// 4) Fall Back transition day (again, only 1 day in range)
// 5) CST after Fall Back day
// The most important thing is that all days in a range will have the same length.
// That way you can safely average in whatever that length is.
public static IEnumerable<ZonedDateRange> GenerateRanges(DateTimeZone zone, Instant anchorInstant, int days)
{
if (zone == null)
{
throw new ArgumentNullException(nameof(zone));
}
var anchorDay = anchorInstant.InZone(zone).Date;
// If days is negative, anchorInstant is the endDay and we go back in time to get the start day.
// Otherwise, anchorDay is the anchorInstant and we go forward in time to get the end day.
var inclusiveStartDay = (days < 0) ? anchorDay.PlusDays(days) : anchorDay;
var inclusiveEndDay = (days < 0) ? anchorDay : anchorDay.PlusDays(days);
return GenerateRanges(zone, inclusiveStartDay, inclusiveEndDay);
}
public static IEnumerable<ZonedDateRange> GenerateRanges(DateTimeZone zone, LocalDate inclusiveStartDay, LocalDate inclusiveEndDay)
{
if (zone == null)
{
throw new ArgumentNullException(nameof(zone));
}
// Small adjustment to add an extra day to the inclusive end day.
// When working with LocalDate(s) that are inclusive, we generally start at the start of the start day
// but want to end at the END of the end day, which is really the start of the next day following the
// end day.
var exclusiveEndDay = inclusiveEndDay.PlusDays(1);
var startInstant = inclusiveStartDay.AtStartOfDayInZone(zone).ToInstant();
var endInstant = exclusiveEndDay.AtStartOfDayInZone(zone).ToInstant();
// Just in case the start or end day occurs on transition day, we pad each endpoint with a few days.
// We will later prune away this padding.
var pad = Duration.FromDays(5);
var padStartInstant = startInstant.Minus(pad);
var padEndInstant = endInstant.Plus(pad);
var intervals = zone.GetZoneIntervals(padStartInstant, padEndInstant).ToList();
// Take care of easy cases.
// Check count returned instead of custom SupportsDaylightSavingsTime method.
// E.g. Argentina supported DST in the past, but since 2010 has been on Standard time only.
if (intervals.Count == 1)
{
yield return Create(zone, inclusiveStartDay, exclusiveEndDay, DayState.Standard);
yield break;
}
for (var index = 0; index < intervals.Count; index++)
{
var interval = ClampInterval(intervals[index], padStartInstant, padEndInstant);
// Chop off the Start and End dates, since those are transition days.
// That is move Start ahead 1 day, and move End back 1 day.
var currStartDate = interval.Start.InZone(zone).Date.PlusDays(1);
var currEndDate = interval.End.InZone(zone).Date.PlusDays(-1);
var endLength = zone.HoursInDay(interval.End);
var endState = DayState.Standard;
if (endLength > NodaConstants.HoursPerDay)
{
endState = DayState.FallBack;
}
else if (endLength < NodaConstants.HoursPerDay)
{
endState = DayState.SpringForward;
}
var startState = (endState == DayState.FallBack) ? DayState.DST : DayState.Standard;
var range = Create(zone, currStartDate, currEndDate, startState);
AdjustEndPoints(range, inclusiveStartDay, exclusiveEndDay);
if (IsOkayToOutput(range))
{
yield return range;
}
var endTransitionDate = interval.End.InZone(zone).Date;
range = Create(zone, endTransitionDate, endTransitionDate, endState);
AdjustEndPoints(range, endTransitionDate, endTransitionDate);
if (IsOkayToOutput(range))
{
yield return range;
}
}
}
private static void AdjustEndPoints(ZonedDateRange range, LocalDate startDay, LocalDate endDay)
{
if (range.StartDay < startDay)
{
range.StartDay = startDay;
}
if (range.EndDay > endDay)
{
range.EndDay = endDay;
}
}
private static bool IsOkayToOutput(ZonedDateRange range) => (range.UtcEnd > range.UtcStart);
private static ZoneInterval ClampInterval(ZoneInterval interval, Instant start, Instant end)
{
var outstart = start;
var outend = end;
if (interval.HasStart && outstart < interval.Start)
{
outstart = interval.Start;
}
if (interval.HasEnd && interval.End < outend)
{
outend = interval.End;
}
return new ZoneInterval(interval.Name, outstart, outend, interval.WallOffset, interval.Savings);
}
private static ZonedDateRange Create(DateTimeZone zone, LocalDate startDate, LocalDate endDate, DayState state)
{
var range = new ZonedDateRange
{
Zone = zone,
StartDay = startDate,
EndDay = endDate,
State = state
};
return range;
}
// This alters the StartDate and UtcStartTime so you may want to perform this on a Clone().
internal void AdjustStartDateForward(LocalDate adjustedStartDate)
{
if (adjustedStartDate < StartDay || adjustedStartDate > EndDay)
{
throw new Exception($"The {nameof(adjustedStartDate)} must be exclusively within the current StartDate and EndDate.");
}
AdjustDates(adjustedStartDate, EndDay);
}
// This alters the EndDate and UtcEndTime so you may want to perform this on a Clone().
internal void AdjustEndDateBackward(LocalDate adjustedEndDate)
{
if (adjustedEndDate < StartDay || adjustedEndDate > EndDay)
{
throw new Exception($"The {nameof(adjustedEndDate)} must be exclusively within the current StartDate and EndDate.");
}
AdjustDates(StartDay, adjustedEndDate);
}
private void AdjustDates(LocalDate adjustedStart, LocalDate adjustedEnd)
{
StartDay = adjustedStart;
EndDay = adjustedEnd;
}
public ZonedDateRange Clone()
{
var clone = new ZonedDateRange();
clone.Zone = Zone;
clone.State = State;
clone.StartDay = StartDay;
clone.EndDay = EndDay;
return clone;
}
}
}
Here is the Extensions.cs class for a few convenient extensions:
using System;
using NodaTime;
namespace NodaTime_Zoned_Ranges
{
public static class Extensions
{
// For DST Transition days, hours will be less than or greater than 24.
public static double HoursInDay(this DateTimeZone zone, Instant instant)
{
if (zone == null)
{
return NodaConstants.HoursPerDay;
}
var day = instant.InZone(zone).LocalDateTime.Date;
var bod = zone.AtStartOfDay(day);
var eod = zone.AtStartOfDay(day.PlusDays(1));
return (eod.ToInstant() - bod.ToInstant()).TotalHours;
}
/// <summary>
/// Preferred format of ISO 8601 time string.
/// Unlike Round Trip format specifier of "o", this format will suppress decimal seconds
/// if the input time does not have subseconds.
/// </summary>
public const string DateTimeExtendedIsoFormat = "yyyy-MM-ddTHH:mm:ss.FFFFFFFK";
/// <summary>
/// Returns an ISO-8601 compliant time string.
/// If the input Kind is Local and TimeZoneInfo.Local is not "UTC", then the output string will contain a time zone offset.
/// Unlike ToString("o"), if the input time does not contain subseconds, the output string will omit subseconds.
/// </summary>
/// <param name="time">DateTime</param>
/// <returns>String</returns>
public static string ToIsoString(this DateTime time)
{
// TimeZoneInfo MUST use Equals method and not == operator.
// Equals compares values where == compares object references.
if (time.Kind == DateTimeKind.Local && TimeZoneInfo.Local.Equals(TimeZoneInfo.Utc))
{
// Don't use time zone offset if Local time is UTC
time = DateTime.SpecifyKind(time, DateTimeKind.Utc);
}
return time.ToString(DateTimeExtendedIsoFormat);
}
}
}
Finally, here is Program.cs for some quick and dirty testing:
using System;
using NodaTime;
namespace NodaTime_Zoned_Ranges
{
class Program
{
static void Main(string[] args)
{
var zoneIds = new string[] { "Central Brazilian Standard Time", "Singapore Standard Time" };
var startDay = new LocalDate(2018, 1, 1);
var endDay = new LocalDate(2019, 12, 31);
foreach (var zoneId in zoneIds)
{
var zone = DateTimeZoneProviders.Bcl.GetZoneOrNull(zoneId);
ZoneTest(zone, startDay, endDay);
}
Console.WriteLine("\n\nPress ENTER key");
Console.ReadLine();
}
private static void ZoneTest(DateTimeZone zone, LocalDate startDay, LocalDate endDay)
{
Console.WriteLine($"\n\n*** TEST FOR ZONE: {zone.Id} , Start:{startDay} , End:{endDay}\n");
var startInstant = startDay.AtStartOfDayInZone(zone).ToInstant();
var endInstant = endDay.PlusDays(1).AtStartOfDayInZone(zone).ToInstant();
Console.WriteLine("NodaTime DateTimeZone.GetZoneIntervals");
var intervals = zone.GetZoneIntervals(startInstant, endInstant);
var i = 0;
foreach (var interval in intervals)
{
Console.WriteLine($" [{i++}]: {interval}");
}
Console.WriteLine("\nCustom ZonedDateRange");
i = 0;
var ranges = ZonedDateRange.GenerateRanges(zone, startDay, endDay);
foreach (var range in ranges)
{
Console.WriteLine($" [{i++}]: {range.State,13}: [{range.UtcStart.ToIsoString()}, {range.UtcEnd.ToIsoString()}] HoursPerDay: {range.HoursPerDay}");
}
}
}
}
Here is sample Console Window output:
*** TEST FOR ZONE: Central Brazilian Standard Time , Start:Monday, January 1, 2018 , End:Tuesday, December 31, 2019
NodaTime DateTimeZone.GetZoneIntervals
[0]: Central Brazilian Daylight Time: [2017-10-15T03:59:59Z, 2018-02-18T02:59:59Z) -03 (+01)
[1]: Central Brazilian Standard Time: [2018-02-18T02:59:59Z, 2018-11-04T03:59:59Z) -04 (+00)
[2]: Central Brazilian Daylight Time: [2018-11-04T03:59:59Z, 2019-02-17T03:00:00Z) -03 (+01)
[3]: Central Brazilian Standard Time: [2019-02-17T03:00:00Z, EndOfTime) -04 (+00)
Custom ZonedDateRange
[0]: DST: [2018-01-01T03:00:00Z, 2018-02-17T03:00:00Z] HoursPerDay: 24
[1]: FallBack: [2018-02-17T03:00:00Z, 2018-02-18T04:00:00Z] HoursPerDay: 25
[2]: Standard: [2018-02-18T04:00:00Z, 2018-11-04T03:59:59.999Z] HoursPerDay: 24
[3]: SpringForward: [2018-11-04T03:59:59.999Z, 2018-11-05T03:00:00Z] HoursPerDay: 23.0000002777778
[4]: DST: [2018-11-05T03:00:00Z, 2019-02-16T03:00:00Z] HoursPerDay: 24
[5]: FallBack: [2019-02-16T03:00:00Z, 2019-02-17T04:00:00Z] HoursPerDay: 25
[6]: Standard: [2019-02-17T04:00:00Z, 2020-01-02T04:00:00Z] HoursPerDay: 24
[7]: Standard: [2020-01-06T04:00:00Z, 2020-01-07T04:00:00Z] HoursPerDay: 24
*** TEST FOR ZONE: Singapore Standard Time , Start:Monday, January 1, 2018 , End:Tuesday, December 31, 2019
NodaTime DateTimeZone.GetZoneIntervals
[0]: Malay Peninsula Standard Time: [StartOfTime, EndOfTime) +08 (+00)
Custom ZonedDateRange
[0]: Standard: [2017-12-31T16:00:00Z, 2020-01-01T16:00:00Z] HoursPerDay: 24
Press ENTER key
Based on the output, I hope you can see why I need to perform the transform. For Brazil, I can make 8 specific summary calls to my 3rd party database, each with differing UTC start and end times, as well as day length. For Singapore, you can see I can get very specific UTC times from an interval that has no start or end time.
I have no specific question other than the always implied question of "Please review my code for readability and performance."
Answer: Aside: the zone intervals reported by Noda Time look somewhat broken to me; that may be due to them coming from the Windows time zone database. I'll need to look into why the transitions appear to happen on "the second before the start of the hour" rather than exactly on the hour.
I haven't had time to look at this completely, but a few minor suggestions:
Naming
You're using "day" a lot where I'd use "date". I find that less ambiguous, because a "day" can mean both a period and a date. I've adjusted the code below assuming that.
GenerateRanges
var inclusiveStartDate = (days < 0) ? anchorDate.PlusDays(days) : anchorDate;
var inclusiveEndDate = (days < 0) ? anchorDate : anchorDate.PlusDays(days);
That would be simpler, IMO: add the days unconditionally and then just take the min/max:
var anchorPlusDays = anchorDate.PlusDays(days);
var inclusiveStartDate = LocalDate.Min(anchorDate, anchorPlusDays);
var inclusiveEndDate = LocalDate.Max(anchorDate, anchorPlusDays);
Extensions
I'd personally use separate extension classes for code using NodaTime types, and code using BCL types.
AdjustEndpoints
I'd probably try to make your ZonedDateRange completely immutable (removing the need for Clone), and instead have WithStartDate, WithEndDate methods, then make AdjustEndpoints something like this:
private static ZonedDateRange AdjustEndPoints(
ZonedDateRange range, LocalDate startDate, LocalDate endDate) =>
range.WithStartDate(LocalDate.Max(range.StartDate, startDate))
.WithEndDate(LocalDate.Min(range.EndDate, endDate));
(The WithStartDate and WithEndDate methods can return "this" if the argument is equal to the current value.) | {
"domain": "codereview.stackexchange",
"id": 37348,
"tags": "c#, datetime, nodatime"
} |
How long does it take a black hole to eat a star? | Question: I presume the answer is that it depends on the mass and size of the star and black hole and how they approach either other, but I was wondering if somebody could provide some rough bounds (e.g. hours vs thousands of years) theorized or based on historical observed data.
By "eating" I mean the time it takes a star to go through the event horizon of the black hole and potentially reach the singularity.
Answer: One of the main ways black holes are noticed is by looking at a solar system where the star appears to move as though it were in a binary star system (i.e. two stars) when only one star is seen. In these situations, depending on the distances, the black hole "feeds" off the original star, and a stream of stellar plasma is slowly peeled off the star into the black hole.
This matter can sometimes form a very vivid accretion disk, that can be observed using telescopes (see Herbig–Haro object). This process can take a very long time, on the order of millions of years. However, of course, a rogue black hole could enter a star system head on and collide right with the sun and "suck it up," which would happen rather quickly (to an outside observer). | {
"domain": "physics.stackexchange",
"id": 816,
"tags": "black-holes, astrophysics, accretion-disk"
} |
Binary Search Tree implementation (OOP/classic pointers) | Question: In my implementation of a binary search tree, most functions take a pointer to a node (because of their recursive nature). So I found myself having to overload them, and the overloaded versions form somewhat of a public interface to the functionalities of the class. Is the design of the class good or bad? If it's bad please suggest an alternative.
(Please note: I haven't set the copy constructor and a copy assignment operator as deleted because I plan on writing a custom version of those very soon)
Another thing I'd like to take your input on is my use of classic pointers. Should I replace them with unique_ptrs?
Here's the header:
/////////BSTree.h//////////
#pragma once
#include <iostream>
#include <algorithm>
#include <vector>
#include <memory>
#include <limits> //needed for std::numeric_limits in findMin/findMax
template <typename T>
struct node {
T key;
node<T>* left;
node<T>* right;
};
template <typename T>
class BSTree {
public: //interface
BSTree();
~BSTree();
void insert(T item);
void buildTree(std::vector<T> v);
bool find(T item);
T findMin();
T findMax();
void traverseInOrder();
void traversePreOrder();
void traversePostOrder();
void destroyTree();
void erase(T item);
int height();
private: //implementation
void insert(node<T>*& t, T item);
node<T>* find(node<T>* t, T item);
void traverseInOrder(node<T>* t);
void traversePreOrder(node<T>* t);
void traversePostOrder(node<T>* t);
node<T>* findMin(node<T>* t);
node<T>* findMax(node<T>* t);
void destroyTree(node<T>*& t);
bool erase(node<T>*& t, T item);
int height(node<T>* t);
node<T>* root_;
};
template <typename T>
BSTree<T>::BSTree()
{
root_ = nullptr;
}
template <typename T>
BSTree<T>::~BSTree()
{
destroyTree();
}
template <typename T>
void BSTree<T>::insert(node<T>*& t,T item)
{
if (t == nullptr) //correct spot found or first insertion
{
t = new node<T>;
t->key = item;
t->left = nullptr;
t->right = nullptr;
}
//look for correct spot in right or left subtree recursively
else if (item > t->key)
insert(t->right, item);
else
insert(t->left, item);
} //O(h) on average, O(n) worst case
template <typename T>
node<T>* BSTree<T>::find(node<T>* t, T item)
{
if (t == nullptr) //empty tree
return nullptr;
else if (item == t->key) //item found
return t;
//item not found; look for right and left subtrees recursively
else if (item < t->key)
return find(t->left,item);
else
return find(t->right, item);
} //O(h) worst case
template <typename T>
node<T>* BSTree<T>::findMin(node<T>* t)
{
if (t == nullptr) //guard against an empty tree
return nullptr;
node<T>* min = t;
while (min->left != nullptr) //min node is the leftmost node
min = min->left;
return min;
} //O(h) worst case
template <typename T>
node<T>* BSTree<T>::findMax(node<T>* t)
{
if (t == nullptr) //guard against an empty tree
return nullptr;
node<T>* max = t;
while (max->right != nullptr) //max node is the rightmost node
max = max->right;
return max;
} //O(h) worst case
template <typename T>
void BSTree<T>::traverseInOrder(node<T>* t)
{
if (t != nullptr)
{
traverseInOrder(t->left);
std::cout << t->key << " ";
traverseInOrder(t->right);
}
} //O(n)
template <typename T>
void BSTree<T>::traversePreOrder(node<T>* t)
{
if (t != nullptr)
{
std::cout << t->key << " ";
traversePreOrder(t->left);
traversePreOrder(t->right);
}
} //O(n)
template <typename T>
void BSTree<T>::traversePostOrder(node<T>* t)
{
if (t != nullptr)
{
traversePostOrder(t->left);
traversePostOrder(t->right);
std::cout << t->key << " ";
}
} //runs in O(n)
template <typename T>
bool BSTree<T>::erase(node<T>*& t, T item)
{
if (t == nullptr) //no deletion
return false;
else if (item > t->key) //item not found yet; look in the right subtree
return erase(t->right, item);
else if (item < t->key) //look in the left subtree
return erase(t->left, item);
else //item found
{
if (t->left == nullptr && t->right == nullptr) //item is contained in a leaf node
{
delete t;
t = nullptr;
}
else if (t->left == nullptr) //node has only a right child
{ //replace the node with its right child
node<T>* del = t;
t = t->right;
delete del;
}
else if (t->right == nullptr) //node has only a left child
{ //replace the node with its left child
node<T>* del = t;
t = t->left;
delete del;
}
else //node containing the item has both its children
{ //replace the node to delete with the min from the right subtree
node<T>* temp = findMin(t->right);
t->key = temp->key;
erase(t->right, t->key);
} //alternatively we can replace the node to delete with the max node from the left tree
return true; //item found and deleted
}
} //requires O(h) time on average and O(n) worst case
template <class T>
int BSTree<T>::height(node<T>* t)
{
if (t == nullptr) //empty tree has a height of 0
return 0;
else if (t->left == nullptr && t->right == nullptr) //leaf has height of 1
return 1;
else //calculate height recursively using the forumula:
return (1 + std::max(height(t->left), height(t->right)));
} //runs in O(h)
template <typename T>
void BSTree<T>::destroyTree(node<T>*& t)
{ //destruction must occur from leafs all the way up to root
if (t != nullptr)
{
destroyTree(t->left);
destroyTree(t->right);
delete t;
t = nullptr;
}
} //runs in O(n)
template <typename T>
void BSTree<T>::insert(T item)
{
insert(root_, item);
}
template <typename T>
void BSTree<T>::buildTree(std::vector<T> v)
{
for (auto item : v)
insert(item);
} //this runs in O(m*h) on average and O(m*n) worst case (where m denotes vector length)
template <typename T>
bool BSTree<T>::find(T item)
{
return find(root_, item) != nullptr;
}
template <typename T>
T BSTree<T>::findMin()
{
node<T>* min;
min = findMin(root_);
if (min != nullptr)
return min->key;
return std::numeric_limits<T>::max();
}
template <typename T>
T BSTree<T>::findMax()
{
node<T>* max;
max = findMax(root_);
if (max != nullptr)
return max->key;
return std::numeric_limits<T>::min();
}
template <typename T>
void BSTree<T>::traverseInOrder()
{
traverseInOrder(root_);
}
template <typename T>
void BSTree<T>::traversePreOrder()
{
traversePreOrder(root_);
}
template <typename T>
void BSTree<T>::traversePostOrder()
{
traversePostOrder(root_);
}
template <typename T>
void BSTree<T>::erase(T item)
{
erase(root_, item);
}
template <class T>
int BSTree<T>::height()
{
return height(root_);
}
template <typename T>
void BSTree<T>::destroyTree()
{
destroyTree(root_);
}
Here's a demo:
#include <iostream>
#include <vector>
#include "BSTree.h"
int main() {
BSTree<int> h;
std::vector<int> v = {9,5,7,99,3};
h.buildTree(v); //construct tree out of vector
h.insert(-2); //atomic insertion
h.insert(50);
std::cout << "The height of the tree: " << h.height() << std::endl;
std::cout << "Min element: " << h.findMin() << std::endl;
std::cout << "Max element: " << h.findMax() << std::endl;
h.traversePreOrder(); std::cout << std::endl;
h.traversePostOrder(); std::cout << std::endl;
h.traverseInOrder(); std::cout << "(this output is sorted)" << std::endl;
//find returns true if the item is present, hence:
if (h.find(5))
std::cout << "5 is found" << std::endl;
if (h.find(55)) //evaluates to false
std::cout << "55 is found";
h.erase(5);
if (!h.find(5)) //notice the '!'
std::cout << "5 is no longer in tree";
h.destroyTree();
return 0;
}
Answer: Memory Management
If you don't have any special reason for using raw pointers, smart pointers can save you from a lot of headaches.
Passing Parameters
Passing non-trivial parameters (like std::vector) by value degrades performance.
API
Except for some homework assignments, one never uses a tree simply for printing values to the console. Your traverse methods should provide a way to iterate over the elements and do anything with each element.
A standard approach would be to provide iterators, but you could at least accept a callback and simply invoke it as you're traversing the elements.
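As an illustration of the callback idea (sketched in Python for brevity; in the C++ code this would be a template parameter or a `std::function` argument), the traversal applies a caller-supplied action instead of hard-coding `std::cout`:

```python
# Sketch: in-order traversal parameterized by a visitor callback,
# so the caller decides what to do with each key.
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def traverse_in_order(node, visit):
    """Apply `visit` to each key in sorted order instead of printing."""
    if node is not None:
        traverse_in_order(node.left, visit)
        visit(node.key)
        traverse_in_order(node.right, visit)

root = Node(5, Node(3), Node(7, None, Node(9)))
out = []
traverse_in_order(root, out.append)
assert out == [3, 5, 7, 9]
```

Printing then becomes just one possible callback, and collecting into a container, summing, or searching are others, with no change to the tree class.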
If you're not exposing any public method that takes or returns a Node, I would hide the structure, as it is an implementation detail.
"domain": "codereview.stackexchange",
"id": 24062,
"tags": "c++, binary-search"
} |
How does a horse extract the energy it needs from a relatively small digestive system? | Question: Recently I saw the Inside Nature's Giants episode on the horse, and was fascinated by its internal organisation. My question is this: horses have very large lungs, but a relatively small stomach. Yet other herbivores, like the hippopotamus, have a very large digestive system even though they need less energy than a horse. How does the horse manage this? How does it get that much energy with such a small digestive system, especially as a herbivore?
Answer: The digestive system of a horse is by no means small:
They have 15 to 21 m (50 to 70 ft) of small intestine, with a capacity of 38 to 45 L.
They have a 1.2 m (4 ft) long caecum that holds 26 to 30 L.
They have 3.0 to 3.7 m (10 to 12 ft) of colon, capacity up to 76 L.
(Figures sourced from Equine Anatomy (Wikipedia))
Certainly horse stomachs are comparatively small as they are hindgut fermenters, but that is not where most energy and nutrients are absorbed. Some food passes from the stomach to the small intestines before it is fully digested, which enables more continuous eating to keep up with their energy and nutrition requirements. Most proteins, carbohydrates, and fats are absorbed in the horse's small intestine. Some nutrients are absorbed in the large intestine as well, see Is there nutrient absorption in the large intestine of hindgut fermenters?.
The hippopotamus is a pseudoruminant, having a 3 chambered stomach (unlike the 4 chambers of a cow). Hippos are therefore considered to be foregut fermenters, and are not comparable to horses. In terms its digestive tract, the hippo is much more similar to a cow than a horse. | {
"domain": "biology.stackexchange",
"id": 6448,
"tags": "anatomy, digestive-system, digestion"
} |
"A Dynamical Theory of the Electromagnetic Field": where does the intrinsic energy equation come from? | Question: Maybe my question looks simple, but I am stuck.
Below is a photo from Maxwell's paper on the dynamical theory of the electromagnetic field. I put a red square around the equation that I can't see how to derive from the one above it. If we suppose L, M, N are constant, shouldn't the whole equation be zero? Where is my mistake?
Answer: From the equation above the one in the box (the equality is not explicit in the paper, but we know it's zero, right?): $$ \frac{1}{2} \frac{d}{dt} \left(L x^2 + 2 M x y + N y^2 \right) + \frac{1}{2} \frac{dL}{dt} x^2+ \frac{dM}{dt} x y + \frac{1}{2} \frac{dN}{dt} y^2 = 0$$
If $L, M, N$ are constants, then $$\frac{dL}{dt} = \frac{dM}{dt} = \frac{dN}{dt} = 0,$$ and only the term $$\frac{1}{2} \frac{d}{dt} \left(L x^2 + 2 M x y + N y^2 \right) =0 $$
survives. Indeed, THIS should be zero, but note that the boxed equation has no time derivative! Using the fundamental theorem of calculus:$$ \int dt \frac{d}{dt}\left[ \frac{1}{2}\left(L x^2 + 2 M x y + N y^2 \right) \right]= \text{const} \equiv E $$
Therefore
$$\frac{1}{2} L x^2 + M x y + \frac{1}{2} N y^2= E .$$
Edit: I set the left-handed side to zero because the paper says these represent the work that is not converted into heat, so I believe they are assumed to be conserved. | {
"domain": "physics.stackexchange",
"id": 43849,
"tags": "electromagnetism, maxwell-equations"
} |
Functional Integral in QFT | Question: I am reading Peskin and Schroeder's chapter on functional methods. They propose the amplitude:
$$
\langle \phi_b(\vec{x})|e^{-iHT}| \phi_a(\vec{x})\rangle
=
\int \mathcal{D}\phi\mathcal{D}\pi
\exp \bigg[
i\int_0^T \left(
\pi \dot\phi - \frac{1}{2}\pi^2 - \frac{1}{2} (\nabla\phi)^2-V(\phi)
\right) \bigg]
$$
They then said that we can complete the square and integrate over $\mathcal{D}\pi$ integral. How is this done?
I was able to complete the square: $\pi\dot\phi - \frac{1}{2} \pi^2=-\frac{1}{2}(\pi-\dot\phi)^2+\frac{1}{2}\dot\phi^2$ and rewrite the amplitude as:
$$
\langle \phi_b(\vec{x})|e^{-iHT}| \phi_a(\vec{x})\rangle
=
\int \mathcal{D}\phi
\exp \bigg[
i\int_0^T \left(
\frac{1}{2} \partial_\mu\phi\partial^\mu\phi-V(\phi)
\right) \bigg]
\int \mathcal{D}\pi \exp \bigg[ -\frac{1}{2}i \int^T_0 (\pi-\dot\phi)^2 \bigg]
$$
I am only confused about the functional integral (mathematically), how is it done?
Answer: You do the following
$$\int D\pi \exp[ -\frac12 i \int_0^T (\pi-\dot \phi)^2] =\int D\pi \exp[-\frac12 i\int_0^T \pi^2]= constant$$
where I've shifted $\pi\to \pi+\dot \phi$ which leaves the measure $D\pi$ invariant. The result is a $\phi$ independent multiplicative constant. It's not quite well-defined; you'll need to determine this constant by other means, if necessary. | {
"domain": "physics.stackexchange",
"id": 72893,
"tags": "quantum-field-theory, path-integral"
} |
Why is acceleration directed inward when an object rotates in a circle? | Question: Somebody (in a video about physics) said that the acceleration points inward if you rotate a ball on a rope around yourself.
The other man (an ex Navy SEAL, on YouTube too) said that it obviously points outward, because if you release the ball, it flies in an outward direction.
Then somebody said that the second man doesn't know physics; acceleration goes in.
Answer: As a rule of thumb: when somebody states that something is obvious you should really doubt everything he says. Especially if he is an ex navy seal :)
Think about the ball moving in a circle: Newton's first law of dynamics states that if an object is left alone, meaning: the object is not subjected to forces, it will keep moving with the same velocity. Remember that velocity is a vector, so this statement means that the object left alone would also keep the same direction of motion.
But in the case of a ball moving in a circle its direction of motion of course changes with time; this must imply that the ball is subjected to a force (remember that a force $\vec{F}$ creates an acceleration $\vec{a}$ according to the second law of dynamics: $\vec{F}=m\vec{a}$). Ok, but does the force pull inward or outward? (That is analogous to asking: is the acceleration directed inward or outward?) Well, think again about the velocity of the ball: as time passes the velocity curves inward, and this must mean that the acceleration is directed inward.
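That claim is easy to verify numerically. A small sketch in Python (the radius and angular speed are arbitrary choices): differentiating uniform circular motion twice gives a vector pointing at the centre, with magnitude $\omega^2 R$.

```python
import math

R, omega = 2.0, 3.0  # arbitrary radius (m) and angular speed (rad/s)

def pos(t):
    """Position of the ball moving uniformly on a circle of radius R."""
    return (R * math.cos(omega * t), R * math.sin(omega * t))

def accel(t, dt=1e-4):
    # Central second difference: a ~ (r(t+dt) - 2 r(t) + r(t-dt)) / dt^2
    (xp, yp), (x0, y0), (xm, ym) = pos(t + dt), pos(t), pos(t - dt)
    return ((xp - 2 * x0 + xm) / dt ** 2, (yp - 2 * y0 + ym) / dt ** 2)

t = 0.4
(ax, ay), (x, y) = accel(t), pos(t)
# The acceleration is anti-parallel to the position vector (it points inward)
# and its magnitude is omega**2 * R, the centripetal acceleration.
```

Releasing the ball just freezes the current velocity vector, which is tangent to the circle, matching the answer's point.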
But why then, if you let the ball go free, does it move outward? The answer is that it doesn't really move outward; it simply begins moving in a straight line again, since you are no longer applying force to it, as the first principle of dynamics states. Everything is consistent. Of course, moving in a straight line in this context means moving away from the previous location of the rotational motion, so an observer has the impression of the ball moving away from the center, when the ball is, as stated, simply continuing its motion with the velocity it had at the time of release. | {
"domain": "physics.stackexchange",
"id": 74302,
"tags": "acceleration, vectors, rotational-kinematics, centripetal-force"
} |
Decibels, acoustic intensity and sound perception | Question: I know the formula: $I_{\rm dB} = 10\log_{10}\left(\frac{I}{I_0}\right)$ with $I$ being the acoustic intensity in $[W/m^2]$ and $I_0 = 10^{-12} [W/m^2]$ being the human threshold of hearing. Also, I am told that "the sound intensity is doubled every 3 dB". But since this measurement is the logarithm of a ratio, I don't get it:
What physical value is doubled exactly?
Is it related to the "sound perception" that we have? If it is, I find it strange because I really don't have the feeling that the sound is "twice as loud" when I set my speaker to 3 more dB.
If the answer to the previous question is no, what is the actual relation between that formula and the "feeling of loudness"?
Answer:
What physical value is doubled exactly?
It is the sound intensity, as in the quote. The sound intensity is a specific physical quantity, representing the power carried along by the sound wave per unit area, measured in watts per meter-squared (W/m$^2$). The way to think about this is that if there is a "spherical" speaker emitting sound waves, and the amount of energy emitted per second is $P\times1$ second (so that $P$ is the power), then that amount of energy gets spread out over a larger and larger sphere, centered on the source, as the wave moves away from the source. Dividing the power by the surface area of that sphere gives a sort-of "density" of power. This quantity is relevant because it is the one that determines how "loud" a sound is.
Note that if we increase the intensity by a factor of 2, then
$$
\textrm{SIL}_{\textrm{new}} - \textrm{SIL}_{\textrm{old}}
=(10\,\textrm{dB})\log_{10}\left(\frac{I_{\textrm{new}}}{I_0}\right)
-(10\,\textrm{dB})\log_{10}\left(\frac{I_{\textrm{old}}}{I_0}\right)
=(10\,\textrm{dB})\log_{10}\left(\frac{I_{\textrm{new}}}{I_{\textrm{old}}}\right)
=(10\,\textrm{dB})\log_{10}2\approx3\,\textrm{dB}\,.
$$
Is it related to the "sound perception" that we have? If it is, I find it strange because I really don't have the feeling that the sound is "twice as loud" when I set my speaker to 3 more dB.
It is related to the sound perception but in a complicated way. "Doubling the intensity" does not translate into "perceived as twice as loud". I'll put the basics below.
If the answer to the previous question is no, what is the actual relation between that formula and the "feeling of loudness"?
So, here we go.
The human hearing apparatus does not respond the same to all frequencies. It is easier for us to hear sounds at frequencies near 1000 - 5000 Hz than it is to hear sounds with frequencies significantly lower or higher than this. An accumulation of experiments has led to a set of standard equal loudness curves, shown below (from Wikipedia).
The way to understand the figure is as follows. The frequency of a pure tone is on the horizontal axis, and the sound pressure level (viz., sound intensity level or decibel level) is on the vertical axis. A single point in the space therefore represents a particular pure tone played at a particular frequency and particular intensity. The red curves represent notes that humans perceive as being the same loudness. (You can do this exercise yourself.) What the curves show is that if you have a low-frequency sound playing, you have to play it at a higher intensity than a sound played at, say, 1000 Hz, for it to be perceived as equally loud. Similarly for the high-frequency sounds.
The red curves are also known as equal-phon curves: we come up with a new quantity, called the phon, which acts like the decibel scale but takes into account this difference in the frequency response of our hearing apparatus. The phon scale is set so that a change of 10 phons equals a change of 10 decibels (roughly) for 1000 Hz sounds.
Now, this gets us closer to the subjective experience of sound, and the final step is that studies also show that if a sound increases by 10 phons, then we perceive that sound as being twice as loud. So finally, there is the sone scale, which is a scale that doubles every time the number of phons increases by 10.
So! - the relationship between the bare physical facts of the sound wave---i.e., its intensity, which is in a sense a measure of how "spread-out" the energy is in the wave---and the subjective experience of the loudness of the sound is somewhat complicated. But, we can trace back through all of the relationships above to make the following statement:
Roughly speaking, for sounds played at 1000 Hz, if the intensity increases by a factor of 10, we perceive the sound as being twice as loud. (This is because doubling the perceived loudness corresponds to doubling the number of sones, which corresponds to increasing the phon level by 10, which corresponds to increasing the dB level by 10 (because the sound is at 1000 Hz!), which corresponds to increasing the intensity by a factor of 10.)
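That chain of conversions can be sketched in a few lines of Python (a rough illustration for the 1 kHz case only; the function names are made up):

```python
import math

def intensity_to_db(i, i0=1e-12):
    """Sound intensity level in dB, relative to the hearing threshold i0 (W/m^2)."""
    return 10 * math.log10(i / i0)

def sones_at_1khz(db):
    """Perceived loudness in sones for a 1000 Hz tone: phons ~ dB at 1 kHz,
    loudness doubles every 10 phons, and 40 phons is defined as 1 sone."""
    return 2 ** ((db - 40) / 10)

# Doubling the intensity only adds about 3 dB ...
double_intensity = intensity_to_db(2e-6) - intensity_to_db(1e-6)
# ... while doubling the *perceived* loudness takes ten times the intensity (+10 dB).
double_loudness = intensity_to_db(1e-5) - intensity_to_db(1e-6)
```

This matches the everyday observation in the question: +3 dB on a speaker is noticeable but nowhere near "twice as loud".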
However, at other frequencies, this relationship is not quite the same, and one needs to trace through this sequence of relationships again. | {
"domain": "physics.stackexchange",
"id": 93137,
"tags": "acoustics, intensity"
} |
Pythonic way for validating and categorizing user input | Question: In multiple parts of my program, user input needs to be validated and categorized. As these "validation trees" can get pretty unwieldy, I've started splitting them out in different functions, for example like this:
from enum import Enum, auto
class InputCat(Enum):
INVALID = auto()
HELP = auto()
SELECT = auto()
GOTO_PAGE = auto()
GOTO_PLAYER = auto()
def _goto_player(user_input: str) -> InputCat:
"""Checks if string is of type GOTO_PLAYER."""
for char in user_input[1:]:
if not (char.isalpha() or char in {' ', '-'}):
print("Invalid input: if character after '>' is alpha, all other characters must be alpha, ' ' or '-'.")
return InputCat.INVALID
return InputCat.GOTO_PLAYER
def _goto_page(user_input: str) -> InputCat:
"""Checks if string is of type GOTO_PAGE."""
for char in user_input[1:]:
if not char.isnumeric():
print("Invalid input: if character after '>' is numeric, all other characters must be numeric too.")
return InputCat.INVALID
return InputCat.GOTO_PAGE
def _goto(user_input: str) -> InputCat:
"""Checks if string is of type GOTO_PAGE or GOTO_PLAYER."""
if len(user_input) == 1:
print("Invalid input: need more input after '>'.")
return InputCat.INVALID
if user_input[1].isnumeric():
return _goto_page(user_input)
elif user_input[1].isalpha():
return _goto_player(user_input)
else:
print("Invalid input: character after '>' must be alphanumeric.")
return InputCat.INVALID
def _select(user_input: str) -> InputCat:
"""Checks if string is of type SELECT."""
for char in user_input:
if not char.isnumeric():
print("Invalid input: if first character is numeric, all other characters must be numeric too.")
return InputCat.INVALID
return InputCat.SELECT
def _help(user_input: str) -> InputCat:
"""Checks if string is of type HELP."""
if len(user_input) > 1:
print("Invalid input: when using '?', no other characters are allowed.")
return InputCat.INVALID
return InputCat.HELP
def get_category(user_input: str) -> InputCat:
"""Checks if string is of type HELP, SELECT, GOTO_PAGE, GOTO_PLAYER or INVALID."""
if not user_input:
print('Invalid input: no input.')
return InputCat.INVALID
if user_input[0] == '?':
return _help(user_input)
elif user_input[0].isnumeric():
return _select(user_input)
elif user_input[0] == '>':
return _goto(user_input)
else:
print("Invalid input: first char must be alphanumeric, '>' or '?'.")
return InputCat.INVALID
I was wondering if this is a readable and 'pythonic' way of doing this? If not, what would be better alternatives?
Answer: Don't print promiscuously. You've broken your problem down into several
small functions, which is a good general instinct. But you've undermined those
functions by polluting them with side effects -- printing in this case.
Whenever feasible, prefer program designs that rely mostly on data-oriented
functions (take data as input and return data as output, without causing side
effects). Try to consolidate your program's necessary side effects in just a
few places -- for example, in a top-level main() function (or some
equivalent) that coordinates the interaction with the user.
Don't over-engineer. You have described a simple usage pattern: ? for
help; N for select; >N for goto-page, and >S for goto-player (where N
is an integer and S is a string of letters). Validating that kind of input
could be done reasonably in various ways, but none of them require so much
scaffolding. The small demo below uses (INPUT_CAT, REGEX) pairs. For more
complex input scenarios, you could use
callables (functions or classes) instead of regexes.
Resist the temptation for fine-grained dialogue with users. My impression
is that you want to give the user specific feedback when their input is
invalid. For example, if the user enters ?foo, instead of providing a
general error message (eg, "Invalid input") you say something specific
("Invalid input: when using '?', no other characters are allowed."). That
specificity requires more coding on your part and reading/understanding on the
part of users as you pepper them with different flavors of invalid-reply
messages. But is it really worth it? I would suggest that the answer is no.
Instead of fussing with all of those details, just provide clear documentation
in your usage/help text. If a user provides bad input, tell them so in a
general way and, optionally, remind them how to view the usage text.
When feasible, let a data structure drive the algorithm rather than
logic. Your current implementation is based heavily on conditional
logic. With the right data structure (INPUT_RGXS in the demo below),
the need for most of that logic disappears.
import sys
import re
from enum import Enum, auto
class InputCat(Enum):
HELP = auto()
SELECT = auto()
GOTO_PAGE = auto()
GOTO_PLAYER = auto()
INVALID = auto()
INPUT_RGXS = (
(InputCat.HELP, re.compile(r'^\?$')),
(InputCat.SELECT, re.compile(r'^\d+$')),
(InputCat.GOTO_PAGE, re.compile(r'^>\d+$')),
(InputCat.GOTO_PLAYER, re.compile(r'^>\w+$')), # Adjust as desired.
)
def main(args):
# Printing occurs only at the program's outer edges.
for user_input in args:
ic = get_input_category(user_input)
if ic == InputCat.INVALID:
print(f'Invalid reply: {user_input!r}')
else:
print(ic)
def get_input_category(user_input: str) -> InputCat:
# A data-oriented function.
for ic, rgx in INPUT_RGXS:
if rgx.search(user_input):
return ic
return InputCat.INVALID
if __name__ == '__main__':
main(sys.argv[1:]) | {
"domain": "codereview.stackexchange",
"id": 44771,
"tags": "python, beginner, validation"
} |
Virtual Address and Physical Address Space | Question: A machine has 48-bit virtual addresses and 32-bit physical addresses. Pages are 8K. How many entries are needed for a conventional page table?
The answer is 2^35 entries
Why not (2^32) / (2^13) = 2^19 entries...?
My doubt here is why does the number of page table entries depend on virtual address space rather than physical address space?
What does it mean to have more virtual address space than physical address space? The process can only access as many pages as are present in the physical memory. So it would seem to make sense to have the virtual and physical address spaces be the same size. But why have a larger virtual address space?
Does it mean 2 different virtual address entries are mapped to the same physical location in the same page table? I guess that would lead to inconsistency, right? Can someone help me get the bigger picture here? I really need to internalize this concept.
Answer: When we are using Virtual Memory, we are translating from a Virtual Address Space to a Physical Address Space.
To understand why it requires $2^{35}$ entries, consider the page size. In order to byte-address an $8$ KB page, we require $13$ bits, since $8\,\mathrm{KB} = 2^{13}$ bytes. Subtracting these offset bits from the $48$-bit virtual address leaves $48-13=35$ bits. These are the bits that are used for indexing into the page table to determine the address of the page or 'frame' in main memory.
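The arithmetic generalizes directly; a small sketch in Python (the helper name is made up):

```python
def page_table_entries(virtual_bits, page_size_bytes):
    """Entries in a conventional single-level page table: one per virtual page,
    i.e. 2**(virtual_bits - offset_bits)."""
    offset_bits = page_size_bytes.bit_length() - 1  # exact log2 for powers of two
    return 2 ** (virtual_bits - offset_bits)

# 48-bit virtual addresses with 8 KB pages -> 2**(48 - 13) = 2**35 entries,
# regardless of the 32-bit physical address size.
entries = page_table_entries(48, 8 * 1024)
```

Note that the physical address width only affects the size of each entry (the frame number it must hold), not the number of entries.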
The reason for having more virtual memory is that programs were getting so large that they couldn't fit in main memory. So in order to allow a greater degree of multiprogramming, virtual memory was introduced. A program could have a 'large' memory to itself. However, only the required parts would be loaded into memory as and when required, a concept referred to as Demand Paging. | {
"domain": "cs.stackexchange",
"id": 12018,
"tags": "operating-systems, memory-management, virtual-memory, paging"
} |
RPG character creation implementation | Question: This is a little, as-yet-unfinished RPG I made with C#. It runs well, but I'm wondering if there are things I could improve to make it clearer or more optimal. I'm basically asking for some feedback and suggestions.
The program is obviously not finished but I'm asking now before I move on so I know if I got some serious design or logic issues there. I'm a beginner in programming so keep that in mind when answering.
class Main
{
static void Main(string[] args)
{
Console.ForegroundColor = ConsoleColor.White;
Console.WriteLine("1. New Game\n" +
"2. Load Game\n" +
"3. Exit Game\n");
string choice = Console.ReadLine();
bool correctChoice = false;
Hero newHero = new Hero();
while (correctChoice == false)
{
switch (choice)
{
case "1":
CharacterCreation.createChar(newHero);
correctChoice = true;
break;
case "2":
Console.WriteLine("This feature is not yet implemented.");
correctChoice = false;
break;
case "3":
Environment.Exit(0);
break;
default:
Console.WriteLine("Please enter a correct value.");
break;
}
if (correctChoice == true)
break;
else
choice = Console.ReadLine();
}
Console.WriteLine("\n Your character is {0}, a {1}.", newHero.heroName, newHero.heroClass);
Console.ReadLine();
}
}
class CharacterCreation
{
public static void createChar(Hero newHero)
{
Console.WriteLine("\n We will proceed in character creation.");
Console.WriteLine("Please enter your character name:");
newHero.heroName = Console.ReadLine();
Console.WriteLine("\nNow choose your character's class: \n" +
"Warrior......1\n" +
"Mage........2\n" +
"Rogue........3");
string classChoice = Console.ReadLine();
bool correctChoice = false;
while (correctChoice == false)
{
switch (classChoice)
{
case "1":
newHero.heroClass = "Warrior";
correctChoice = true;
break;
case "2":
newHero.heroClass = "Mage";
correctChoice = true;
break;
case "3":
newHero.heroClass = "Rogue";
correctChoice = true;
break;
default:
Console.WriteLine("Please enter a correct value.");
break;
}
if (correctChoice == false)
classChoice = Console.ReadLine();
else break;
}
Hero.ClassSelect(newHero);
}
}
class Character
{
public int charHealth, charMaxHealth, charStr, charInt, charAgi;
}
class Hero : Character
{
public string heroName, heroClass;
public int level, experience, gold;
public static void ClassSelect(Hero newHero)
{
newHero.charAgi = 10;
newHero.charInt = 10;
newHero.charMaxHealth = 100;
newHero.experience = 0;
newHero.gold = 0;
newHero.level = 1;
switch (newHero.heroClass)
{
case "Warrior":
newHero.charMaxHealth = newHero.charMaxHealth * 115 / 100;
newHero.charStr = newHero.charStr * 12 / 100;
break;
case "Mage":
newHero.charInt = newHero.charInt * 12 / 100;
break;
case "Rogue":
newHero.charAgi = newHero.charAgi * 12 / 100;
break;
}
newHero.charHealth = newHero.charMaxHealth;
}
}
Answer: I haven't done much work in C#, but from a general coding perspective...
class Main
{
static void Main(string[] args)
{
// lotsa stuff....
}
}
These (the class and method both) really ought to be public, as I believe that most application runners/executables will be looking for public methods. Additionally, a Main() method should be mostly empty, containing just enough code to launch the rest of the application (whether this submits an additional process or not).
Console.WriteLine() // Many times
Ideally, never call out to the console specifically. What you should be doing is injecting some sort of output/input stream. It's fine if these actually come from the console, but explicitly tying it to the console itself is frowned upon.
Console.WriteLine("1. New Game\n" +
"2. Load Game\n" +
"3. Exit Game\n");
So... you're using WriteLine() to write multiple lines at a time? Seems a little counter-intuitive. I'm slightly torn between recommending using individual calls (and potentially having output re-ordered because of some threading issue), or just using Write()...
while (correctChoice == false)
{
// stuff
}
Using booleans like this is frowned upon; don't compare the boolean with a value, use the boolean itself (that's what it's for) - so, it should be while (!correctChoice) { ... }. Additionally, the name is somewhat ambiguous: possibly availableOptionChosen?
switch (choice)
{
// lotsa stuff
}
Look into something called the Strategy Pattern. While the use of a switch works out for simple things, software patterns are something to always keep in mind.
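The code under review is C#, but the idea can be sketched compactly in Python (all names here are made up): replace the switch with a table of interchangeable handlers, a function-flavored variant of the Strategy Pattern.

```python
def new_game():
    return "starting a new game"

def load_game():
    return "load not implemented yet"

def exit_game():
    return "exiting"

# Each menu option delegates to an interchangeable "strategy";
# adding an option means adding a table entry, not another case label.
MENU = {"1": new_game, "2": load_game, "3": exit_game}

def handle(choice):
    action = MENU.get(choice)
    return action() if action else "please enter a correct value"
```

In C# the same shape falls out of a Dictionary of delegates, or of small classes implementing a common interface.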
case "2":
Console.WriteLine("This feature is not yet implemented.");
correctChoice = false;
break;
Strictly speaking, correctChoice = false; shouldn't be necessary.
case "3":
Environment.Exit(0);
break;
Using Environment.Exit(0); should end the running thread. However (if this is anything like Java's System.exit()) this may have unintended consequences. It would be much better to simply return to the next level.
default:
Console.WriteLine("Please enter a correct value.");
break;
You check the 'remaining' cases. Good; generally speaking, you should always do so, even if it's impossible (some of those should maybe only be 'asserts', used for verification-time checks).
Console.ReadLine();
}
}
You read a line immediately before the program exits. Why?
class CharacterCreation
{
// Lotsa stuff
}
This is semantically a Builder-pattern, so the class name is a little odd - perhaps CharacterCreator?
public static void createChar(Hero newHero)
{
// Stuff
}
Why the abbreviation of 'Character' (and why all the abbreviations, generally)? Also, if it's named 'create...', you shouldn't be passing it in - it should be the return type (alternatively, call it setupCharacter(), and pass in the Hero). Additionally, newHero is an awkward name here - just use hero (you don't have an oldHero you're using, do you?).
while (correctChoice == false)
{
switch (classChoice)
{
The stuff I mentioned about your main loop applies here as well. This is especially relevant for the class selection, if you're eventually going for something like DnD (which can have hundreds of classes).
class Character
{
public int charHealth, charMaxHealth, charStr, charInt, charAgi;
}
Always put member definitions on their own lines. Always make them private, and provide equivalent public properties. Don't abbreviate unnecessarily, seek to be understandable. Don't prefix variable names with their enclosing class' name (especially abbreviated) - it's noise.
class Hero : Character
{
// Acceptable
}
This works, although it may become awkward in the future.
public static void ClassSelect(Hero newHero)
{
newHero.charAgi = 10;
newHero.charInt = 10;
newHero.charMaxHealth = 100;
newHero.experience = 0;
newHero.gold = 0;
newHero.level = 1;
switch (newHero.heroClass)
{
case "Warrior":
newHero.charMaxHealth = newHero.charMaxHealth * 115 / 100;
newHero.charStr = newHero.charStr * 12 / 100;
break;
case "Mage":
newHero.charInt = newHero.charInt * 12 / 100;
break;
case "Rogue":
newHero.charAgi = newHero.charAgi * 12 / 100;
break;
}
newHero.charHealth = newHero.charMaxHealth;
}
This, though, belongs in CharacterCreation (that's what it's there for). Additionally, instead of
newHero.charXxx = newHero.charXxx * 12 / 100;
you should probably just be setting it to the desired value. Especially because that's probably a bug, since a warrior will start out with a higher Int (10) than a mage
((10 * 12) / 100) = 120 / 100 = 1.2 -> 1
which I doubt is what you want. | {
"domain": "codereview.stackexchange",
"id": 2918,
"tags": "c#, beginner, console, role-playing-game"
} |
How do I phase fold the light curve for a variable star? | Question: I have some observational data for a star where I've done aperture photometry to get a partial period. I understand that you need to use other techniques to estimate a period for stars whose period is longer than one night of observing. I know phase folding your data is quite popular. How does one go about phase folding data? Right now, I have multiple days of data plotted that look similar to the picture of the plot I've attached below.
The X-axis is time (Julian date). T1 is the target star, and c2, c3, and c4 are check stars.
Answer: "Phase folding" is just a procedure whereby you replace the time-axis values by t % P, where P is the period and % is the modulo operator that returns the remainder of t/P.
There are a number of procedures/algorithms to find the period. The most commonly used in astronomy is probably the Lomb-Scargle periodogram for unevenly spaced data. I highly recommend reading VanderPlas 2018 as a way of understanding the technique. There are commonly used python implementations in scipy and astropy. | {
"domain": "astronomy.stackexchange",
"id": 6402,
"tags": "star, observational-astronomy, data-analysis, photometry, light-curve"
} |
The relationship between symplectomorphism, canonical transformations, and the symplectic group | Question: This is a follow up to this question.
In the answer by Qmechanic, they state that the symplectic group, $Sp(2n,\mathbb{R})$, is the group of linear, time-independent canonical transformations.
If we consider a canonical transformation as a symplectomorphism on phase space (as per V. I. Arnold here), how can we restrict this to linear transformations? Since linearity is only defined if the phase space has a vector space structure, which in general it doesn't. More generally, how can we arrive at the symplectic group, from the symplectomorphisms on phase space?
Answer: Let there be given a $2n$-dimensional symplectic manifold $(M,\omega)$.
OP is right that geometrically speaking the symplectic group $Sp(2n,\mathbb{R})$ is only guaranteed to be canonically/manifestly imbedded into the groupoid of symplectomorphisms if the manifold $M$ is a vector space.
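To make the linear case concrete (a standard fact, stated here for the flat case $M = \mathbb{R}^{2n}$ with the canonical symplectic form):

```latex
\omega(u,v) = u^{T} J v, \qquad
J = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix},
\qquad
S^{*}\omega = \omega
\;\iff\; (Su)^{T} J (Sv) = u^{T} J v \;\;\forall u,v
\;\iff\; S^{T} J S = J .
```

The last condition is exactly the defining relation of $Sp(2n,\mathbb{R})$, so on a vector space the linear symplectomorphisms are precisely the symplectic group.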
Given a preferred/distinguished/fixed/fiducial Darboux/canonical coordinate system $(q^1,\ldots,q^n,p_1,\ldots,p_n)$ with an origin, one can use it to define a linear structure in a neighborhood $U\subseteq M$, and in this way locally define a notion of linear transformations. (This is the pragmatic attitude of my linked Phys.SE answer.) | {
"domain": "physics.stackexchange",
"id": 76070,
"tags": "classical-mechanics, differential-geometry, hamiltonian-formalism, phase-space, poisson-brackets"
} |
Question regarding part of the proof for the typical subspace theorem | Question: Part three (going by N&C page 544) states that
$$tr(S(n)\rho^{\otimes n})=tr(S(n)\rho^{\otimes n}P(n,\epsilon))+tr(S(n)\rho^{\otimes n}(I-P(n,\epsilon))).$$
Now I understand how the term on the left of + goes to 0 as n $\to \infty$. However, I am confused about how the term on the right does. N&C states that you can set
$$0 \le tr(S(n)\rho^{\otimes n}(I-P(n,\epsilon))) \le tr(\rho^{\otimes n}(I-P(n,\epsilon))) \rightarrow 0\,\,\text{ as } n\to \infty.$$
I don't quite understand why this is the case. My only assumption is that the eigenvalues of $\rho^{\otimes n}(I-P(n,\epsilon))$ are bounded in such a way that as $n \to \infty$ it will go to zero. However, I am unsure how to go about calculating this bound, though I assume it is of a similar form to the eigenvalues of $\rho^{\otimes n}P(n,\epsilon)$, namely $2^{-n(S(\rho)-\epsilon)}$
Answer: By part 1, we have that for any $\delta > 0$, then for sufficiently large $n$, $tr( \rho ^ {\otimes n} P(n, \epsilon)) \geq 1 - \delta$.
This means that $tr( \rho ^ {\otimes n} P(n, \epsilon)) \rightarrow 1$ as $n \rightarrow \infty$, since it is at most 1. Consequently $tr(\rho^{\otimes n}(I-P(n,\epsilon))) = 1 - tr(\rho^{\otimes n} P(n,\epsilon)) \rightarrow 0$, and the sandwich inequality above then forces $tr(S(n)\rho^{\otimes n}(I-P(n,\epsilon)))$ to 0 as well. | {
"domain": "quantumcomputing.stackexchange",
"id": 1735,
"tags": "error-correction, nielsen-and-chuang"
} |
What's the point of Pauli's Exclusion Principle if time and space are continuous? | Question: What does the Pauli Exclusion Principle mean if time and space are continuous?
Assuming time and space are continuous, identical quantum states seem impossible even without the principle. I guess saying something like: the closer the states are the less likely they are to exist, would make sense, but the principle is not usually worded that way, it's usually something along the lines of: two identical fermions cannot occupy the same quantum state
Answer: Real particles are never completely localised in space (except possibly in the limit case of a completely undefined momentum), due to the uncertainty principle. Rather, they are necessarily in a superposition of a continuum of position and momentum eigenstates.
Pauli's Exclusion Principle asserts that they cannot be in the same exact quantum state, but a direct consequence of this is that they tend to also not be in similar states.
This amounts to an effective repulsive effect between particles.
You can see this by remembering that to get a physical two-fermion wavefunction you have to antisymmetrize it.
This means that if the two single wavefunctions are similar in a region, the total two-fermion wavefunction will have nearly zero probability amplitude in that region, thus resulting in an effective repulsive effect.
To see this more clearly, consider the simple 1-dimensional case, with two fermionic particles with partially overlapping wavefunctions.
Let's call the wavefunctions of the first and second particles $\psi_A(x)$ and $\psi_B(x)$, respectively, and let us assume that their probability distributions have the form:
The properly antisymmetrized wavefunction of the two fermions will be given by:
$$
\Psi(x_1,x_2) = \frac{1}{\sqrt2}\left[ \psi_A(x_1) \psi_B(x_2)- \psi_A(x_2) \psi_B(x_1) \right].
$$
For any pair of values $x_1$ and $x_2$, $\lvert\Psi(x_1,x_2)\rvert^2$ gives the probability of finding one particle in the position $x_1$ and the other particle in the position $x_2$.
Plotting $\lvert\Psi(x_1,x_2)\rvert^2$ we get the following:
As you can clearly see from the picture, for $x_1=x_2$ the probability vanishes. This is an immediate consequence of Pauli's exclusion principle: you cannot find the two identical fermions in the same position state.
But you also see that the closer $x_1$ is to $x_2$, the smaller the probability, as must be the case by continuity of the wavefunction.
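This can be checked numerically. A small sketch in Python, assuming two unit-width Gaussian orbitals centred at $\mp 1$ (the centres and widths are arbitrary choices, not taken from the original figure):

```python
import math

def psi(x, center, width=1.0):
    """Normalized 1-D Gaussian orbital; a stand-in for psi_A, psi_B."""
    norm = (math.pi * width ** 2) ** -0.25
    return norm * math.exp(-((x - center) ** 2) / (2 * width ** 2))

def two_fermion(x1, x2, c_a=-1.0, c_b=1.0):
    """Antisymmetrized two-particle amplitude Psi(x1, x2)."""
    return (psi(x1, c_a) * psi(x2, c_b)
            - psi(x2, c_a) * psi(x1, c_b)) / math.sqrt(2)

# |Psi|^2 vanishes on the diagonal x1 == x2 and grows as the two
# positions separate -- the effective "repulsion" described above.
p_same = two_fermion(0.3, 0.3) ** 2   # identically zero by antisymmetry
p_near = two_fermion(-0.1, 0.1) ** 2
p_far = two_fermion(-1.0, 1.0) ** 2
```

Swapping the minus sign for a plus (bosons) removes the diagonal zero, which is the whole difference between the two statistics.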
Addendum: Can the effect of Pauli's exclusion principle be thought of as a force in the conventional $F=ma$ sense?
The QM version of what is meant by force in the classical setting is an interaction mediated by some potential, like the electromagnetic interaction between electrons.
This corresponds to additional terms in the Hamiltonian, which says that certain states (say, same charges very close together) correspond to high-energy states and are therefore harder to reach, and vice versa for low-energy states.
Pauli's exclusion principle is conceptually entirely different: it is not due to an increase of energy associated with identical fermions being close together, and there is no term in the Hamiltonian that mediates such an "interaction" (important caveat here: these "exchange forces" can be approximated to a certain degree as "regular" forces).
Rather, it comes from the inherently different statistics of many-fermion states: it is not that identical fermions cannot be in the same state/position because there is a repulsive force preventing it, but rather that there is no physical (many-body) state associated with them being in the same state/position.
There simply isn't: it's not something compatible with the physical reality described by quantum mechanics.
We naively think of such states because we are used to reasoning classically, and cannot wrap our heads around what the concept of "identical particles" really means.
Ok, but what about things like degeneracy pressure then?
In some circumstances, like in dying stars, Pauli's exclusion principle really seems to behave like a force in the conventional sense, counteracting the gravitational force and preventing white dwarfs from collapsing into a point.
How do we reconcile the above described "statistical effect" with this?
What I think is a good way of thinking about this is the following:
you are trying to squish a lot of fermions into the same place.
However, Pauli's principle dictates a vanishing probability of any pair of them occupying the same position.
The only way to reconcile these two things is that the position distribution of any fermion (say, the $i$-th fermion) must be extremely localised at a point (call it $x_i$), different from all the other points occupied by the other fermions.
It is important to note that I just cheated for the sake of clarity here: you cannot talk of any fermion as having an individual identity: any fermion will be very strictly confined in all the $x_i$ positions, provided that all the other fermions are not.
The net effect of all this is that the properly antisymmetrized wavefunction of the whole system will be a superposition of lots of very sharp peaks in the high dimensional position space.
And it is at this point that Heisenberg's uncertainty comes into play: a very peaked distribution in position means a very broad distribution in momentum, which means very high energy, which means that the more you want to squish the fermions together, the more energy you need to provide (that is, classically speaking, the harder you have to "push" them together).
To summarize: due to Pauli's principle the fermions try so hard to not occupy the same positions, that the resulting many-fermion wavefunction describing the joint probabilities becomes very peaked, highly increasing the kinetic energy of the state, thus making such states "harder" to reach.
Here (and links therein) is another question discussing this point. | {
"domain": "physics.stackexchange",
"id": 34885,
"tags": "quantum-mechanics, spacetime, wavefunction, fermions, pauli-exclusion-principle"
} |
Circular motion of a ball suspended from the rim of a rotating table | Question: Suppose there is a ball with radius $r$ suspended from the rim of a circular spinning table with radius $R$ by a string of length $l$ which makes an angle $\theta$ with the vertical axis.
If I want to calculate the centripetal force of the ball, is it equal to:
$M_{\rm Ball} (R+l\sin \theta) \omega $ ?
Answer:
Consider a diagram of the given setup, and write $R' = R + (l+r)\sin \theta$ for the radius of the horizontal circle traced by the centre of mass of the hanging ball. Now let's try to define its coordinates as a function of time:
$$ r(t) = (R'\cos (\omega t), R'\sin(\omega t),-(l+r)\cos \theta ) $$
Differentiating this result twice with respect to time yields the acceleration as a function of time
$$ a(t)= (-\omega^{2} R'\cos (\omega t), -\omega^{2} R'\sin(\omega t),0) $$
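As a quick sanity check, the differentiation can also be done numerically (an illustrative Python sketch; the values of $R'$ and $\omega$ below are arbitrary example numbers, not from the problem):

```python
import math

# Verify numerically that differentiating r(t) twice reproduces the
# centripetal acceleration formula a(t) = -omega^2 * R' * (cos, sin).
R_prime = 1.5      # R + (l + r)*sin(theta), radius of the horizontal circle
omega = 2.0        # angular speed (rad/s)

def position(t):
    return (R_prime * math.cos(omega * t), R_prime * math.sin(omega * t))

def second_derivative(f, t, h=1e-4):
    # central finite difference, applied component-wise
    f0, f1, f2 = f(t - h), f(t), f(t + h)
    return tuple((a - 2 * b + c) / h**2 for a, b, c in zip(f0, f1, f2))

t = 0.7
ax, ay = second_derivative(position, t)
assert abs(ax + omega**2 * R_prime * math.cos(omega * t)) < 1e-5
assert abs(ay + omega**2 * R_prime * math.sin(omega * t)) < 1e-5
print(math.hypot(ax, ay))  # magnitude ~ omega**2 * R_prime
```

Agreement of the finite difference with the analytic expression confirms that the magnitude of the centripetal acceleration is $\omega^2 R'$, directed toward the axis.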
Now combine the components of the acceleration to give you the direction and magnitude of the acceleration of the centre of mass of the hanging object. Hope it helped ! | {
"domain": "physics.stackexchange",
"id": 57357,
"tags": "homework-and-exercises, rotational-dynamics, rotational-kinematics, centripetal-force"
} |
Magnetic flux (and flux in general) | Question: The general interpretation of flux as I understand it (and please correct me if I'm wrong) is that it represents how much something is going through another (surface or volume (and perhaps lines?)), I'll quote Khanacademy :
Magnetic flux is a measurement of the total magnetic field which passes through a given area. Source
Considering that magnetism is a force, I very well understand that we only want the force that is pushing in the direction of the infinitesimal surface and keeping in mind the definition given before, it seems much logical to me to use this :
$$\iint \frac{\mathbf{B}\cdot\mathbf{dS}}{|\mathbf{dS}|}$$
We find the direction with the dot product but take off the surface and then we sum up the force. I probably am misunderstanding the flux definition and hope someone would have the kindness to clear this up.
Edit : This integral can't be done since we no longer have an infinitesimal to integrate with respect to.
My problem with this is that when I'm thinking that we're kind of distributing the force over that $|\mathbf{dS}|$ we'll be losing "strength", $|\mathbf{B}|*0.00000000.......1$, I hope you're getting what I mean.
Edit 2 : I got it, I was thinking wrong from the beginning by ignoring the units: a fractional amount of surface still represents something, because a square metre is a finite quantity (a collection of points, dare I say), and thus fractions of it are still finite quantities ($n\to\infty\in \mathbb{N}$ points forming an area). What had escaped my thought is that we are actually adding the strength "$n$ times"; I was blind to the unit.
Thank you for your time!
Answer: So you could try to do this, if $\text d\mathbf S$ is an infinitesimal area element whose magnitude is its area and direction normal to the surface. However this is not the standard definition of flux, and as you have seen this presents some problems. The standard form is to not divide by the magnitude of the area element:
$$\Phi=\iint\mathbf B\cdot\text d\mathbf S$$
To more easily compare this to what you have written, we can express the integral as
$$\Phi=\iint\mathbf B\cdot\left(\hat n|\text d\mathbf S|\right)$$
i.e. we have explicitly written out the magnitude $|\text d\mathbf S|$ and direction $\hat n$ of the area element $\text d\mathbf S$. Therefore, your definition becomes
$$\Phi_{you}=\iint\frac{\mathbf B\cdot\left(\hat n|\text d\mathbf S|\right)}{|\text d\mathbf S|}=\iint\mathbf B\cdot\hat n$$
And this doesn't really make any sense. You will have an infinite sum of terms that do not go to $0$, and so your proposed flux approaches infinity.
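This divergence can be seen concretely in one dimension (an illustrative Python sketch, not part of the original answer): with the $\text dx$ factor the Riemann sums of $f(x)=x$ on $[0,1]$ settle toward $1/2$, while without it they grow without bound.

```python
# Riemann sums of f(x) = x on [0, 1], with and without the dx factor.
def riemann_with_dx(n):
    dx = 1.0 / n
    return sum((i * dx) * dx for i in range(n))   # converges to 1/2

def riemann_without_dx(n):
    dx = 1.0 / n
    return sum(i * dx for i in range(n))          # grows without bound

for n in (10, 100, 1000):
    print(n, riemann_with_dx(n), riemann_without_dx(n))
```

The first column of sums approaches $1/2$ as $n$ grows; the second grows roughly like $n/2$, mirroring the divergence of the proposed flux.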
This relates to your worry of "losing strength", but this is what we do in integrals we first see in introductory calculus:
$$I=\int_a^b f(x)\ \text dx$$
this is the limit of an infinite sum, so we need the $\text dx$ in order to "remove the strength" so we have a converging sum. More explicitly we want
$$I=\lim\limits_{N\to\infty}\sum_{i=0}^Nf(x_i)\Delta x$$
where $x_i=a+i\Delta x$ and $\Delta x=\frac{b-a}{N}$ so that $\Delta x\to0$ as $N\to\infty$. If we divide by our $\Delta x$ we get
$$I=\lim\limits_{N\to\infty}\sum_{i=0}^Nf(x_i)=\lim\limits_{N\to\infty}\sum_{i=0}^Nf\left(a+\frac{b-a}{N}i\right)$$
which does not converge. | {
"domain": "physics.stackexchange",
"id": 57191,
"tags": "electromagnetism, magnetic-fields, vectors, vector-fields"
} |
Does there exist a sentence of first-order logic that is satisfiable only in infinite models that do not have a finite algorithmic representation? | Question: There exist sentences of first-order logic that are satisfiable and are satisfiable only by models of infinite size. However, all such sentences I can think of are satisfied by infinite models that can be generated by a non-terminating algorithm of a finite size.
For example, there exists a sentence of first-order logic that is satisfied only by models of infinite size:
$$ \exists x \lnot \exists y S ( y, x ) \land $$
$$ \forall x \exists y S ( x, y ) \land $$
$$ \forall x \forall y \forall z ( ( S ( x, z ) \land S ( y, z) ) \to x = y ) $$
A model for this sentence can be generated (listed, enumerated) by a finite algorithm that never terminates - just list the elements of the infinite model in a sequence one after another and make $S$ hold only for successive pairs of such elements.
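For concreteness, such a never-terminating listing could be sketched like this (an illustrative Python generator; the tuple encoding of facts is my own choice, not part of the question):

```python
from itertools import count, islice

def enumerate_model():
    # Endlessly list the domain elements 0, 1, 2, ... and make S hold
    # exactly for successive pairs, as described in the question.
    for n in count():
        yield ("element", n)        # list the next domain element
        yield ("S", n, n + 1)       # S holds for the successive pair

# In practice one only ever inspects a finite prefix of the stream:
print(list(islice(enumerate_model(), 6)))
```

Any finite prefix of the stream is produced in finite time, which is exactly the sense of "generated by a finite algorithm that never terminates" used above.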
Does there exist a sentence of first-order logic that is satisfiable only in some infinite models for which there does not exist a finite algorithm that generates them?
Answer: One possible answer to your question is Tennenbaum's Theorem. It states that no non-standard model of arithmetic can have only computable operations, i.e. either addition or multiplication will be non-computable given any recursive description of the elements of the model.
Now the remaining difficulty is having a single formula which has as models only non-standard models of arithmetic. Note that while arithmetic isn't finitely axiomatizable, there are conservative extensions of PA which are finite.
Now it's easy to build an infinite axiomatization of (an extension of) PA which has only non-standard models: add to PA a constant $c$ and the axioms
$$ c\neq 0,\ c\neq S(0),\ c\neq S(S(0)),\ldots$$
There's a trick to turn this into a finite axiom. Add the binary predicate $\mathrm{NS}$ to your language, along with the axiom
$$ \forall xy,\ \mathrm{NS}(x, y)\leftrightarrow x\neq y\wedge \mathrm{NS}(x, S(y))$$
Then the above schema is expressed by $\mathrm{NS}(c, 0)$.
Important edit: It is crucial that the induction schema (as axiomatized by your finite theory) does not apply to formulas containing $\mathrm{NS}$! Otherwise it is quite easy to show $\forall y,\ c\neq y$ and conclude $c\neq c$, a contradiction. I think that if you exclude those formulas from the induction principle, the above reasoning should go through without trouble.
"domain": "cstheory.stackexchange",
"id": 2736,
"tags": "lo.logic"
} |
Does "completeness" of operator fields in QFT have a counterpart in non-relativistic QM? | Question: In section II.1.2 of Haag's Local Quantum Physics, he lists the Wightman axioms of QFT, in particular describing an axiom (F) called "completeness":
F. Completeness. By taking linear combinations of products of the operators $\Phi(f)$ one should be able to approximate any operator acting on $\mathcal H$. This may be expressed by saying that $\mathcal D$ contains no subspace which is invariant under all $\Phi(f)$ and whose closure is a proper subspace of $\mathcal H$. Alternatively one may say that there exists no bounded operator which commutes with all $\Phi(f)$ apart from the multiples of the identity operator (Schur's lemma).
$\mathcal D$ refers to a dense subspace of the Hilbert space $\mathcal H$ on which $\Phi$ reduces to an operator-valued function on spacetime, rather than an operator-valued distribution.
In other words, all operators on the Hilbert space are approximated by expressions of the form $a_1\phi(x_1)\cdots\phi(x_{n_1})+a_2\phi(x_1)\cdots\phi(x_{n_2})+\cdots$. I can see the appeal of such an axiom, but it seems to come out of nowhere. Is there a counterpart of this in "ordinary" (non-relativistic) quantum mechanics (NRQM)?
For example, considering a single non-relativistic free particle in one dimension without spin, can all operators on $L^2(\mathbb R)$ be approximated by linear combinations of products of $\hat x$ and $\hat p$? Is this a fact of a NRQM that has always been there, but just isn't important in that context? If it is new in QFT, is there a conceptual reason why it should hold there but not NRQM?
Answer: The condition that there shouldn't be any invariant subspace is just the statement that the representation of the algebra of observables is irreducible (hence the reference to Schur's lemma in your quote, which is a statement about equivariant maps between irreducible representations). "Completeness" is just a somewhat unusual choice of name for this property.
It appears in ordinary QM for example when we use the Stone-von Neumann theorem to fix the representation of $x$ and $p$ as multiplication and differentiation on $L^2(\mathbb{R})$ - the statement of the theorem is that this is the unique irreducible representation of the CCR.
Irreducibility of the representation of the algebra of observables is equivalently the statement that there are no superselection sectors, see also this answer of mine. | {
"domain": "physics.stackexchange",
"id": 92718,
"tags": "quantum-mechanics, quantum-field-theory, hilbert-space, operators"
} |
Hi, I am trying to install ROS hydro on Ubuntu 12.04.05 LTS. But it seems that hydro repository is not available? | Question:
I added this Repository:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu precise main" > /etc/apt/sources.list.d/ros-latest.list'
As I tried it for the other repository that listed in ROS hydro installation.
But when I want to install ros hydro :
sudo apt-get install ros-hydro-desktop-full
It returns
Unable to locate package ros-hydro-desktop-full
Everything was working fine before last May!
Originally posted by Rony_s on ROS Answers with karma: 25 on 2019-07-18
Post score: 0
Answer:
Because of what is explained in #q325039, all older releases of ROS were moved to the snapshot repositories.
You'll want to read SnapshotRepository and setup your system to use the final snapshot of hydro for precise.
Edit: there is actually a Hydro snapshot available, even though SnapshotRepository does not list it.
Originally posted by gvdhoorn with karma: 86574 on 2019-07-18
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by tfoote on 2019-07-18:
The list is only for periodic snapshots. There are final snapshots for all EOL distros. | {
"domain": "robotics.stackexchange",
"id": 33477,
"tags": "ros-hydro"
} |
A Sub-string Extractor with Specific Keywords Parameter Implementation in C# | Question: I am trying to implement a sub-string extractor with a "start keyword" and an "end keyword", where the extracted result runs from (but excluding) the given start keyword to (but excluding) the end keyword. For example:
| Input String | Start Keyword | End Keyword | Output |
| --- | --- | --- | --- |
| "C# was developed around 2000 by Microsoft as part of its .NET initiative" | "" (empty string) | "" (empty string) | "C# was developed around 2000 by Microsoft as part of its .NET initiative" |
| "C# was developed around 2000 by Microsoft as part of its .NET initiative" | "" (empty string) | ".NET" | "C# was developed around 2000 by Microsoft as part of its" |
| "C# was developed around 2000 by Microsoft as part of its .NET initiative" | "C#" | "" (empty string) | "was developed around 2000 by Microsoft as part of its .NET initiative" |
| "C# was developed around 2000 by Microsoft as part of its .NET initiative" | "C#" | ".NET" | "was developed around 2000 by Microsoft as part of its" |
| "C# was developed around 2000 by Microsoft as part of its .NET initiative" | ".NET" | "" (empty string) | "initiative" |
| "C# was developed around 2000 by Microsoft as part of its .NET initiative" | "" (empty string) | "C#" | "" (empty string) |
| "C# was developed around 2000 by Microsoft as part of its .NET initiative" | ".NET" | "C#" | "" (empty string) |
The experimental implementation
The experimental implementation is as below.
private static string GetTargetString(string stringInput, string startKeywordInput, string endKeywordInput)
{
int startIndex;
if (String.IsNullOrEmpty(startKeywordInput))
{
startIndex = 0;
}
else
{
if (stringInput.IndexOf(startKeywordInput) >= 0)
{
startIndex = stringInput.IndexOf(startKeywordInput) + startKeywordInput.Length;
}
else
{
return "";
}
}
int endIndex;
if (String.IsNullOrEmpty(endKeywordInput))
{
endIndex = stringInput.Length;
}
else
{
if (stringInput.IndexOf(endKeywordInput) > startIndex)
{
endIndex = stringInput.IndexOf(endKeywordInput);
}
else
{
return "";
}
}
// Check startIndex and endIndex
if (startIndex < 0 || endIndex < 0 || startIndex >= endIndex)
{
return "";
}
if (endIndex.Equals(0).Equals(true))
{
endIndex = stringInput.Length;
}
int TargetStringLength = endIndex - startIndex;
return stringInput.Substring(startIndex, TargetStringLength).Trim();
}
Test cases
string test_string1 = "C# was developed around 2000 by Microsoft as part of its .NET initiative";
Console.WriteLine(GetTargetString(test_string1, "", ""));
Console.WriteLine(GetTargetString(test_string1, "", ".NET"));
Console.WriteLine(GetTargetString(test_string1, "C#", ""));
Console.WriteLine(GetTargetString(test_string1, "C#", ".NET"));
Console.WriteLine(GetTargetString(test_string1, ".NET", ""));
Console.WriteLine(GetTargetString(test_string1, "", "C#"));
Console.WriteLine(GetTargetString(test_string1, ".NET", "C#"));
The output of the above test cases.
C# was developed around 2000 by Microsoft as part of its .NET initiative
C# was developed around 2000 by Microsoft as part of its
was developed around 2000 by Microsoft as part of its .NET initiative
was developed around 2000 by Microsoft as part of its
initiative
If there is any possible improvement, please let me know.
Answer: Maybe the last if-statement could be simplified by removing the Equals(true), for Equals(0) already returns a bool, doesn’t it?
Edit:
Actually, I think you could skip the whole if block because if endIndex is 0 it couldn’t bypass the if-statement before, could it?
If endIndex is 0 and startIndex is 0 or greater, an empty string will be returned because startIndex >= endIndex.
If startIndex is less than 0 then empty string will be returned.
So how could endIndex be 0 at the last if-statement? | {
"domain": "codereview.stackexchange",
"id": 40368,
"tags": "c#, strings"
} |
Turtlebot on willowgarage world in ros hydro | Question:
Hi, I am new to ROS. I am just wondering how to run the turtlebot gazebo simulator with willowgarage.world in ROS hydro.
I have found a link related to my question, but it wasn't enough for me because it is not a ROS hydro solution.
Originally posted by ali ekber celik on ROS Answers with karma: 114 on 2013-11-16
Post score: 0
Answer:
You can use the same launch file as for turtlebot_empty_world.launch. You should copy the launch file to a package you've created (any package will do). Then change the line
<arg name="world_name" value="$(find turtlebot_gazebo)/worlds/empty.world"/>
to
<arg name="world_name" value="worlds/willowgarage.world"/>
I just tested it and it works fine on my computer. Good luck!
Originally posted by Angus with karma: 438 on 2013-11-16
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by ali ekber celik on 2013-11-17:
Unfortunately it did not work for me. When I run the roslaunch command, only an empty world and a turtlebot appear on the gazebo screen. Should I change the empty_world.launch file? Thanks.
Comment by Angus on 2013-11-17:
So you've copied "turtlebot_empty_world.launch" file into a new package, then renamed the launch file, then launched it? Are you sure you're not launching the old one?
Can you get the mud world to load in gazebo using the launch file here: http://gazebosim.org/wiki/Tutorials/1.9/Using_roslaunch_Files_to_Spawn_Models
Comment by ali ekber celik on 2013-11-17:
I have just done all you said, respectively. But when I started my new ROS package with my renamed launch file, there was only an empty world and a turtlebot on the gazebo screen. Also I tried the roslaunch command with the 4 different worlds specified in your link, but only willowgarage.world did not work. The other ones ran successfully.
Should i do any other configuration for willowgarage to run on ros hydro?
Thanks.
Comment by Angus on 2013-11-17:
Can you launch empty_world then go to "insert" in Gazebo and try inserting Willow Garage manually?
Comment by jorge on 2013-11-17:
Just a hint: willowgarage world is massive, and takes much longer to load in my PC. Maybe you run out of RAM?
Comment by ali ekber celik on 2013-11-18:
My programs run with 6 GB of RAM. Maybe I must try another map. Could you advise another map in gazebo? Actually I need a world just like a room.
Comment by jorge on 2013-11-18:
You can easily do a simple maze-like map with gazebo's building editor (Ctrl + B).
We did this for showing Turtlebot2 navigation in gazebo: https://github.com/turtlebot/turtlebot_simulator/blob/hydro-devel/turtlebot_gazebo/worlds/playground.world) | {
"domain": "robotics.stackexchange",
"id": 16178,
"tags": "gazebo, turtlebot, ros-hydro"
} |
What are the purposes of granulocytes in acute inflammation? | Question: I heard the phrase
Neutrophilic leucocytes are kings in the acute inflammation.
Neutrophils are granulocytes, while not all leucocytes are granulocytes.
I think this statement refers to the fact that leucocytes have neutrophilic properties, instead of being neutrophils itself.
Acute inflammation players are, I think
Granulocytes, monocytes and leucocytes
while in chronic inflammation similar player but the main factor is macrophages so the phrase which I heard
Macrophages are the kings of chronic inflammation.
I think granulocytes are related to the Healing and Proliferation.
Eosinophils for instance in the allergic responses of acute inflammation.
I think granulocytes are phagocytosed by the macrophages and this way spread their functions through macrophages around.
Probably, neutrophiles are attached to the leucocytes such that they spread their function through leucocytes (so called neutrophilic leucocytes).
Let's cover first this, since we can after understanding it go to the chronic one.
What are the purposes of granulocytes in acute inflammation?
Answer: MattDMo's answer
Essentially, [neutrophilic granulocytes] are the first infiltrating
cells to arrive at the site of injury at the beginning of acute
inflammation (bacterial infection, environmental exposure, etc.).
which I agree with. | {
"domain": "biology.stackexchange",
"id": 1972,
"tags": "immunology, pathology, inflammation"
} |
cannot launch on multiple computers | Question:
Using Ros Electric and Ubuntu 11.10
I'm attempting to run a launch file from my control computer and have some nodes run locally and some on the remote computer.
My launch files looks like this:
<launch>
<machine name="Controller" address="localhost" default="true" >
<env name="ROSLAUNCH_SSH_UNKNOWN" value="1" />
</machine>
<node some_local_nodes />
<include file="$(find remote_pkg)/launch/app.launch"/>
</launch>
<launch>
<machine name="MYREMOTE" address="MYREMOTE" default="never" user="me" password="my_password">
</machine>
<node machine="MYREMOTE" my_remote_app_nodes />
</launch>
What I get when I run my roslaunch command is:
remote[MYREMOTE-0] starting roslaunch
remote[MYREMOTE-0]: creating ssh connection to MYREMOTE:22, user[me]
remote[MYREMOTE-0]: failed to launch on MYREMOTE:
MYREMOTE is not in your SSH known_hosts file.
Please manually:
ssh me@MYREMOTE
then try roslaunching again.
If you wish to configure roslaunch to automatically recognize unknown
hosts, please set the environment variable ROSLAUNCH_SSH_UNKNOWN=1
[MYREMOTE-0] killing on exit
unable to start remote roslaunch child: MYREMOTE-0
I know that the remote computer is in my known_hosts file and I followed the instructions here but I continue to get the same error.
I've tried it with/out passwords and with/out the ROSLAUNCH_SSH_UNKNOWN=1
Any suggestions?
Originally posted by DocSmiley on ROS Answers with karma: 127 on 2012-10-02
Post score: 0
Original comments
Comment by Lorenz on 2012-10-02:
So log in on the other computer using exactly the same host name and user as specified in the launch file works and you are not asked for a password and it's in the known_hosts file with exactly that name?
Answer:
Try " export ROSLAUNCH_SSH_UNKNOWN = 1"
Originally posted by ysongros with karma: 16 on 2014-08-26
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 11207,
"tags": "ros, ssh, multi-machine"
} |
SPOJ "It's a murder!" challenge | Question: I wrote a solution for SPOJ "It's a murder!". I'm exceeding the time limit even after n log(n) solution and fast I/O.
Given a list of numbers, I need to calculate the sum of the sum of the previously encountered numbers that are smaller than the current number. For example, given 1 5 3 6 4, the answer is
(0) + (1) + (1) + (1 + 5 + 3) + (1 + 3) = 15
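A brute-force restatement of this definition (an illustrative Python sketch, O(n²), far too slow for the judge but useful to pin down exactly what is being computed):

```python
def murder_sum(values):
    # For each element, add the sum of all strictly smaller earlier elements.
    total = 0
    for i, v in enumerate(values):
        total += sum(u for u in values[:i] if u < v)
    return total

print(murder_sum([1, 5, 3, 6, 4]))  # -> 15
```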
My code has complexity n log(n) and is pretty much similar to how we calculate the number of inversions. The function merge in the code below merges two sorted arrays and the function merge_sort is the basic call to the merge sort procedure. The array sum1 stores the cumulative sum of elements in array1 whose index is strictly less than the current index. The variable ans stores the final answer. How can I make my code more efficient?
#include <bits/stdc++.h>
using namespace std ;
//Declaration of global variables
int array[100000] , array1[100000] , array2[100000] ;
long long int sum1[100000] ;
long long int ans ;
void merge_sort(int left , int right) ;
void merge(int left , int mid , int right) ;
int main()
{
int t,counter,n,i ;
// t is the number of testcases
scanf("%d",&t) ;
for(counter=0;counter<t;counter++)
{
// n is the number of elements in the array
scanf("%d",&n) ;
for(i=0;i<n;i++)
{
scanf("%d",&array[i]) ;
}
// ans hold the final answer and so it is initialized to 0 for every test case
ans =0 ;
merge_sort(0 , n-1) ;
printf("%lld\n",ans );
}
}
void merge(int left , int mid , int right)
{
int index , index1 , index2 ;
// array1 is used to store the elements from left to mid
// array2 is used to store the elemetns from mid+1 to right
// sum1 holds the sum of elements whose index is less than current index in array1 so that sum1[0] is always 0 .
// sum1 is initialised to 0
memset(sum1,0,sizeof(sum1)) ;
// copying into array1 from left to mid
index1 = 0 ;
for(index=left;index<mid+1;index++)
{
if(index1!=0)
{
sum1[index1] = sum1[index1-1] + array1[index1-1] ;
}
array1[index1] = array[index] ;
index1++ ;
}
// copying into array2 from mid+1 to right
index2 = 0;
for(index=mid+1;index<right+1;index++)
{
array2[index2] = array[index] ;
index2++ ;
}
//merging the two arrays array1 and array2 and adding to the variable ans array[index1] if array1[index1] < array2[index2]
index1 = 0 ;
index2 = 0 ;
index = left ;
while((index1<mid-left+1)&&(index2<right-mid))
{
if(array1[index1]<array2[index2])
{
array[index] = array1[index1] ;
index++ ;
index1++ ;
}
else if(array1[index1]>=array2[index2])
{
ans = ans + sum1[index1];
array[index] = array2[index2] ;
index++ ;
index2++ ;
}
}
if(index1<mid-left+1)
{
while(index1<mid-left+1)
{
array[index] = array1[index1] ;
index++ ;
index1++ ;
}
}
else if(index2<right-mid)
{
while(index2<right-mid)
{
ans = ans + sum1[index1-1] + array1[index1-1];
array[index] = array2[index2] ;
index++ ;
index2++ ;
}
}
}
void merge_sort(int left , int right)
{
// Typical merge sort procedure
if(left==right)
{
}
else
{
int mid = (left+right)/2 ;
merge_sort(left , mid) ;
merge_sort(mid+1 , right) ;
merge(left , mid , right) ;
}
}
Answer: Your main issue is this line:
memset(sum1,0,sizeof(sum1)) ;
The whole array is being initialized to zero, even if you are only dealing with a range of one element. You really only need to initialize the first element:
sum1[0] = 0; // initialize here
// copying into array1 from left to mid
index1 = 0 ;
for(index=left;index<mid+1;index++)
{
if(index1!=0)
{
sum1[index1] = sum1[index1-1] + array1[index1-1] ;
}
The rest is already assigned proper values. | {
"domain": "codereview.stackexchange",
"id": 20800,
"tags": "c++, programming-challenge, time-limit-exceeded, mergesort"
} |
Dirac notation - specific acting orientation for operators | Question: I have this doubt:
Imagine two operators $A$ and $B$ and the state $\psi$.
I know that the following statement is true:
$$\langle\psi| A|\psi\rangle^*=\langle\psi| A^\dagger|\psi\rangle$$
But is it correct to write:
$$ \langle\psi|AB|\psi\rangle^*=\langle\psi|B^\dagger A^\dagger|\psi\rangle=\langle\psi|B^\dagger A^\dagger\psi\rangle\hspace{15pt}?$$
This doubt came to me because I was doing some exercises and applied this identity. I found out that what I reached was wrong. I tried to find the error and only this came to my head.
Answer: Dirac notation is ill-suited for non-self-adjoint operators. Here's why:
Let $(-,-)$ be the inner product on our Hilbert space. The expectation value of $AB$ is then
$$ \langle AB \rangle_\psi = (\psi,AB\psi)$$
by definition, and Dirac notation writes $\langle \psi \vert AB \vert \psi \rangle$ for this. But, in this notation, it is no longer clear to which side the operator $AB$ acts - one could as well interpret this expression as meaning $(BA\psi,\psi)$, which is not the same if $A,B$ are not self-adjoint. So, by
$$ \langle \psi \vert A \vert \psi \rangle^\ast = \langle \psi \vert A^\dagger \vert \psi \rangle$$
you really mean
$$ (\psi,A\psi)^\ast = (A\psi,\psi) = (\psi,A^\dagger\psi)$$
where the last equality is by definition of the adjoint.
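The defining property of the adjoint, together with the product rule $(AB)^\dagger = B^\dagger A^\dagger$, is easy to check numerically with small complex matrices (an illustrative Python sketch, not part of the original answer; the matrices are arbitrary non-self-adjoint examples):

```python
# Check (psi, M psi)* = (psi, M† psi) and (AB)† = B† A† for 2x2 matrices,
# with inner product (u, v) = sum_i conj(u_i) v_i.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dagger(M):
    # conjugate transpose
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M))]

def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

A = [[1 + 2j, 0.5j], [3.0, 1 - 1j]]
B = [[0.0, 2 - 1j], [1j, 4.0]]
psi = [1.0, 1 - 3j]

M = matmul(A, B)
assert abs(inner(psi, matvec(M, psi)).conjugate()
           - inner(psi, matvec(dagger(M), psi))) < 1e-9
BA = matmul(dagger(B), dagger(A))
assert all(abs(dagger(M)[i][j] - BA[i][j]) < 1e-9 for i in range(2) for j in range(2))
print("both identities hold")
```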
So, examining the expression with $AB$, we find
$$ (\psi,AB\psi)^\ast = (AB\psi,\psi) = (\psi,(AB)^\dagger\psi) = (\psi,B^\dagger A^\dagger\psi)$$
and thus
$$ \langle \psi \vert AB \vert \psi \rangle^\ast = \langle\psi\vert B^\dagger A^\dagger \vert \psi \rangle $$
if all operators are interpreted as acting on the states to their right. However, since this is not usually understood - for self-adjoint operators it doesn't matter, and many texts freely switch the direction of the action of the operators whenever convenient - you should refrain from using Dirac notation for operators which are not self-adjoint. | {
"domain": "physics.stackexchange",
"id": 26203,
"tags": "quantum-mechanics, operators, hilbert-space, notation"
} |
Differentiating between Titration, Buffer, diluted buffer, Deionized H₂O | Question: I had a chemistry lab where we had to titrate three solutions with $\ce{HCl}$: a buffer, a diluted buffer and deionized water. My partner and I didn't label the charts we made, so now I have to figure out which one is which.
The way I see it is that, from the Henderson-Hasselbalch equation:
$\mathrm{p}\ce{H} = \mathrm{p}K_a + \log\left(\ce{[A^{-}]/[HA]}\right)$,
the diluted buffer should have a larger slope at the beginning than the original buffer solution.
My question is, is my logic correct, and how do I differentiate between the original buffer and the deionized water?
Answer: What is the expected pH of deionized water and what would the pH be after adding a few drops of acid? For both buffer solutions, there is an equivalence point, how would you use that to tell them apart? Usually deionized water is slightly acidic. I kind of mentioned that water doesn't have an equivalence point. Is there a graph without one?
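To put rough numbers on these hints (my own illustrative Python sketch; the acetate-like pKa of 4.76 and all concentrations are made-up example values, not from the lab):

```python
import math

# pH after adding 0.001 mol/L of strong acid to each of the three solutions.
pKa = 4.76
added = 0.001  # mol/L of HCl

def buffer_pH(conc):
    # Henderson-Hasselbalch: the acid consumes A- and produces HA
    return pKa + math.log10((conc - added) / (conc + added))

print("buffer (0.1 M):  ", round(buffer_pH(0.10), 3))
print("diluted (0.01 M):", round(buffer_pH(0.01), 3))
print("deionized water: ", round(-math.log10(added), 3))  # pH 7 -> 3
```

The concentrated buffer barely moves, the diluted buffer moves noticeably more (consistent with the larger initial slope suggested in the question), and deionized water jumps by about four pH units.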
What is the purpose of a buffer? A diluted buffer would have the same characteristic, but in a smaller amount. Water would be missing this characteristic. You should be able to figure out which graph is which now. | {
"domain": "chemistry.stackexchange",
"id": 1484,
"tags": "acid-base, titration"
} |
Why is sodium such a common ion in ion tails? | Question: I was doing research about ion tails of planetary bodies and noticed that ion tails composed of sodium were common. For example, Mercury and the Moon both have ion tails made of sodium. Why is sodium, as opposed to other ions, commonly seen in ion tails?
Answer: From http://science.nasa.gov/science-news/science-at-nasa/2000/ast26oct_1/:
"When a Leonid meteoroid hits the Moon it vaporizes some dust and
rock," explains Jody Wilson of the Boston University Imaging Science
Team. "Some of those vapors will contain sodium (a constituent of Moon
rocks) which does a good job scattering sunlight. If any of the impact
vapors drift over the lunar limb, we may be able to see them by means
of resonant scattering. They will glow like a faint low-pressure
sodium street lamp."
Also, relative to other elements, sodium has a low ionization energy. | {
"domain": "astronomy.stackexchange",
"id": 509,
"tags": "the-moon, planet"
} |
Best practice for properties initialization using constructor | Question: I am declaring a "class" using a "constructor method". In both samples, properties uuid and dateCreated are initialized but in different places.
I would like to know the "best practice" in this scenario and why (also in terms of readability).
Solution A
define([
'dojo/_base/declare',
'dojo/_base/lang',
], function (declare, lang) {
return declare('xxx.xxx.Command', null, {
uuid: utilis.UUID(), // IS GOOD PRACTICE?
dateCreated: Date.now(), // IS GOOD PRACTICE?
action: null,
properties: null,
constructor: function (action, receiver, properties) {
this.action = action;
this.receiver = String(receiver);
this.properties = properties;
}
});
});
Solution B
define([
'dojo/_base/declare',
'dojo/_base/lang',
], function (declare, lang) {
return declare('xxx.xxx.Command', null, {
uuid: null,
dateCreated: null,
action: null,
properties: null,
constructor: function (action, receiver, properties) {
this.uuid = utilis.UUID();
this.dateCreated = Date.now();
this.action = action;
this.receiver = String(receiver);
this.properties = properties;
}
});
});
Answer: I know very little about Dojo, but for me it makes sense to keep the constructor as small as possible. I also wouldn't predefine properties with null values, and would go with this:
define([
'dojo/_base/declare',
'dojo/_base/lang',
], function (declare, lang) {
return declare('xxx.xxx.Command', null, {
uuid: utilis.UUID(),
dateCreated: Date.now(),
constructor: function (action, receiver, properties) {
this.action = action;
this.receiver = String(receiver);
this.properties = properties;
}
});
}); | {
"domain": "codereview.stackexchange",
"id": 11863,
"tags": "javascript, dojo"
} |
catkin_add_gtest usage | Question:
Hello everyone, the turtlebot_description package has a CMakeLists.txt as below:
cmake_minimum_required(VERSION 2.8.3)
project(turtlebot_description)
find_package(catkin REQUIRED COMPONENTS urdf xacro)
catkin_package(
CATKIN_DEPENDS urdf xacro
)
install(DIRECTORY robots
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
)
install(DIRECTORY meshes
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
)
install(DIRECTORY test
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
)
install(DIRECTORY urdf
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
)
catkin_add_gtest(${PROJECT_NAME}_test_urdf test/test_urdf.cpp)
During the build, I get an error message as below:
| CMake Error at /media/Build/next/build/tmp/sysroots/duovero/usr/share/catkin/cmake/test/tests.cmake:17 (message):
| catkin_add_gtest() is not available when tests are not enabled. The CMake
| code should only use it inside a conditional block which checks that
| testing is enabled:
|
| if(CATKIN_ENABLE_TESTING)
|
| catkin_add_gtest(...)
|
| endif()
It's obvious that the unit test should only be called when testing is enabled. However, the last commit on this code was 7 months ago. I am sure someone would have caught the error by now. So I began looking for the correct use case for this macro, and the ROS doc doesn't mention much about it. Can someone verify that it indeed needs to check whether testing is enabled?
Originally posted by Adam YH Lee on ROS Answers with karma: 53 on 2014-04-16
Post score: 2
Answer:
Confirmed; that is the correct use of catkin_add_gtest()
You should open this as a bug against the turtlebot_description package.
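For reference, the guarded call in that CMakeLists.txt would look something like this (a sketch that simply applies the fix the error message asks for; the rest of the file is unchanged):

```cmake
if(CATKIN_ENABLE_TESTING)
  catkin_add_gtest(${PROJECT_NAME}_test_urdf test/test_urdf.cpp)
endif()
```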
Originally posted by ahendrix with karma: 47576 on 2014-04-16
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Adam YH Lee on 2014-04-17:
great thank you! | {
"domain": "robotics.stackexchange",
"id": 17673,
"tags": "catkin"
} |
moving a catkin ws requires recompile because of absolute paths? | Question:
Hi all,
I am looking to offload compilation of our catkin ws to a CI server (recompile takes over an hour on our ARM boards). Most of the building blocks are in place now. I only just found out about an inconvenient issue.
What I want to do:
build the ws on our CI server (e.g. on a nightly basis)
tar the build and devel folders
on a local development machine, download the tarred build / devel folders from CI server
remove local build / devel folders and extract the CI tar folders in place.
build the catkin ws. Only the diffs since the CI job need to be built on the local machine.
Currently I am more or less at the 5th step. When I build it, it complains that the folders of ws, build, source, and devel space do not match:
Error: Attempting to build a catkin workspace using build space: "/home/koenlek/catkin_ws/build" but that build space's most recent configuration differs from the commanded one in ways which will cause problems. Fix the following options or use `catkin clean -b` to remove the build space:
- install_space: /builds/k.lekkerkerker/catkin_ws/install (stored) is not /home/koenlek/catkin_ws/install (commanded)
- source_space: /builds/k.lekkerkerker/catkin_ws/src (stored) is not /home/koenlek/catkin_ws/src (commanded)
- workspace: /builds/k.lekkerkerker/catkin_ws (stored) is not /home/koenlek/catkin_ws (commanded)
- devel_space: /builds/k.lekkerkerker/catkin_ws/devel (stored) is not /home/koenlek/catkin_ws/devel (commanded)
If I search for the hardcoded CI path /builds/k.lekkerkerker/catkin_ws in the build and devel folders I get thousands of matches. And from these commands I find that there are also matches on binary level:
grep -HIrn "/builds/k.lekkerkerker/catkin_ws" > /tmp/file1
grep -Hrn "/builds/k.lekkerkerker/catkin_ws" > /tmp/file2
diff /tmp/file1 /tmp/file2
If I remove the build folder as suggested in the error, I effectively need to do a clean build, so I don't gain any time. As there are changes on the binary level, I doubt whether a search and replace will work... So I am pretty much stuck here.
Is there a way to force catkin to work with relative paths? Or can I tell it that it has moved? Or can you cleverly search & replace stuff (I don't know how to solve that on the binary level). Or should I maybe get creative with chroot (with which I have no experience).
A solution would be to make the CI server build in the same folder as where we will use the ws on the local computers, but I don't really like that approach. We could also get creative with symlinks to fake some stuff, but I don't like that either.
I use ROS Indigo on Ubuntu 14.04, with catkin tools (instead of catkin_make) 0.3.1. A quick check with catkin_make shows the same absolute path issue.
Originally posted by koenlek on ROS Answers with karma: 432 on 2015-12-31
Post score: 1
Original comments
Comment by gvdhoorn on 2015-12-31:
I'm not sure what you're trying to do is a supported use case (transferring build dirs is often a hairy operation, whatever the build system). You might be interested in catkin_tools#240 and 251.
Comment by Mani on 2015-12-31:
Is generating debs an option here?
Answer:
I don't think that CMake supports moving the build (or source) folder to a different location. Since catkin is based on CMake the same "limitation" applies.
Originally posted by Dirk Thomas with karma: 16276 on 2015-12-31
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 23328,
"tags": "catkin"
} |
Precise Meaning of Grand Canonical Partition Function | Question: Wikipedia cites the grand canonical partition function as
$$\mathcal{Z}=\sum_i e^{-\beta (E_i-\mu N_i)},$$
where $i$ denotes each available microstate of the system, $N_i$ the number of particles in that microstate, and $E_i$ the energy of that microstate.
From my lecture notes, I have the following definition
$$\mathcal{Z'}=\sum_{N=0}^{N_{tot}}\sum_i e^{-\beta (E_i-\mu N)}.$$
Here, we said $N$ changes from zero to some fixed number of total particles available to the system ($N_{tot}$), and each state has energy $E_i$.
To my understanding, the grand canonical ensemble allows for the number of particles in a system to change. The definition of $\mathcal{Z'}$ seems to suggest exactly that: we allow the system to have any number of particles in the range $0$ to $N_{tot}$, and for each $N$ we sum over all possible states those $N$ particles can be in. On the contrary, the definition of $\mathcal{Z}$ from Wikipedia seems to suggest that the number of particles for each microstate $i$ is fixed: it seems that we simply sum over each microstate, taking into account its energy and number of particles.
I am confused as to what the grand canonical ensemble really means in terms of particles being allowed to vary. Is my definition of $\mathcal{Z'}$ incorrect? Or is my understanding flawed?
Answer: The two sums are exactly the same.
In your lecture notes, the sum over $N$ is written down explicitly. For each $N$, you have a different set of microstates that you are summing over, all of them having exactly $N$ particles. In Wikipedia, the sum over $N$ is implicit in the sum over $i$, since in Wikipedia's notation, each microstate comes with the number of particles $N_i$ as an associated variable.
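Written out, the equivalence is just a regrouping of the sum over microstates by particle number, restricting the inner sum to microstates $i$ with $N_i = N$:
$$\mathcal{Z}=\sum_i e^{-\beta (E_i-\mu N_i)}=\sum_{N=0}^{N_{tot}}\sum_{i\,:\,N_i=N} e^{-\beta (E_i-\mu N)}=\mathcal{Z'}.$$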
Why would you use the grand canonical partition function? Your intuition is correct.
Suppose you have an open system which particles can move into and out of. This means it's connected to another system containing particles of the same type.
Now, suppose you want to consider the thermodynamics of this open system. A canonical partition function isn't good enough because the number of particles varies, so you need a grand canonical partition function. | {
"domain": "physics.stackexchange",
"id": 44774,
"tags": "statistical-mechanics, partition-function"
} |
How many oscillating fields exist in a light wave? | Question: Is light made from one magnetic and one electric field which are oscillating perpendicular to each other? In textbooks, it says that the light can be seen to oscillate in every plane perpendicular to direction of travel of the wave. This would imply that light has many oscillating electric and magnetic fields in a singular wave? What does it therefore mean by polarisation?
Answer: Disclaimer: It looks like we're talking in terms of classical electromagnetism, so that's what I'm going to do here. If you go all quantum mechanical or relativistic, the answers will be consistent, but different.
The simplest EM wave will have one oscillating E field and one oscillating B (magnetic) field. That wave will be polarized (i.e. E field is wiggling in one specific direction) and monochromatic (i.e. perfect sine wave).
Technically you could argue that the "simplest wave" is circularly polarized instead of linearly polarized, but for the purposes of your question, it doesn't matter. I'm just covering my pedantic backside by saying that.
Anything more complicated than your monochromatic, linearly polarized sine wave can usually be thought of as a COMBINATION of simple EM waves. For example, if you want a perfectly polarized light source that's more than a single frequency, you just take a bunch of simple polarized monochromatic waves of different frequency and add them together.
Here's an example of what I'm talking about plotted in desmos.
This is a bunch of waves. Each is polarized in the same direction (i.e. up and down). Each has a different frequency.
Here's what the actual wave would look like (because the total E field at any given location comes from the SUM of all your separate individual E fields)
New example, if you want monochromatic but unpolarized light, then get a bunch of simple EM waves (i.e. sine waves where the E field is oscillating in a particular direction), each polarized in a different direction, and add them together. The only constraint is that for each simple single-wave building block, the B field has to be perpendicular to the E field. And both the E and B field have to be perpendicular to the direction your simple sine wave is traveling.
Here's an example of a lot of simple EM waves of different polarizations being added together. (Made in python)
These are all the simple sine waves considered separately.
And this would be the scalar E field you'd get when they're all added together.
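The python figure can be sketched along these lines with just the standard library (the wave count, random angles, and seed here are arbitrary stand-ins, not the values behind the original plots): each component is a monochromatic sine polarized at some angle in the transverse plane, and the total transverse E field is the component-wise sum.

```python
import math
import random

random.seed(0)  # arbitrary seed so the sketch is reproducible

n_points = 500
z = [i * 4 * math.pi / n_points for i in range(n_points)]  # positions along the travel direction

n_waves = 10  # arbitrary number of simple component waves
Ex = [0.0] * n_points
Ey = [0.0] * n_points
for _ in range(n_waves):
    theta = random.uniform(0.0, math.pi)      # polarization angle in the transverse plane
    phase = random.uniform(0.0, 2 * math.pi)  # relative phase of this component
    for k, zk in enumerate(z):
        a = math.cos(zk + phase)              # simple monochromatic sine wave
        Ex[k] += a * math.cos(theta)
        Ey[k] += a * math.sin(theta)

# magnitude of the summed transverse E field at each point
E_mag = [math.hypot(x, y) for x, y in zip(Ex, Ey)]
```

Summing many such components with random polarization angles is exactly the sense in which unpolarized light is still "one E field and one B field" at every point.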
So...
Is light made from one magnetic and one electric field?
Yes. Light is made from one B field and one E field. Technically, that's all you ever get, since the E and B fields that we actually see are the TOTAL E and B fields, where everything gets summed together at every point.
Is light made from one magnetic and one electric field which are oscillating perpendicular to each other?
It depends. E and B fields of individual simple EM waves must be perpendicular to each other. But if you have multiple individual simple EM waves of different polarizations all in the same place, then your TOTAL E and B fields must be a sum of all of them.
In textbooks, it says that the light can be seen to oscillate in every plane perpendicular to direction of travel of the wave.
That's correct. This is really just a different way of saying that the direction of oscillation of the E field must be perpendicular to the B field, and the two must be perpendicular to the direction the simple EM wave travels. So the plane perpendicular to the direction of travel contains all the different polarizations your simple EM wave COULD have.
This would imply that light has many oscillating electric and magnetic fields in a singular wave?
Hopefully this question has answered itself at this point. If you have a simple EM wave, then it only has one polarization (i.e. direction the E field wiggles). Real light is normally multiple simple EM waves all added together. I wouldn't think of that as multiple E and B fields in a singular wave. I'd think of it as multiple WAVES added together in a singular E and B field. I hope the distinction makes sense, because I think it's important.
What does it therefore mean by polarisation?
Polarization means "the direction the E field wiggles". If you have a bunch of simple EM waves all wiggling in the same direction, your resulting light will be polarized. If MOST of the simple EM waves are pointing in one direction but SOME are pointing in a different direction, then your light is MOSTLY polarized. If your simple EM waves are pretty much pointing in any direction allowed (where "allowed directions" are ones that lie on the plane perpendicular to the direction the light is traveling), then the light is unpolarized. | {
"domain": "physics.stackexchange",
"id": 88184,
"tags": "waves, visible-light"
} |
Meaning of equation for CNN probabilities | Question:
So the first equation above refers to a CNN (rather a committee of CNNs) for image classification. I am unable to understand exactly what the author is trying to do in the first equation.
So far, I think they're calculating the index of the max likelihood probabilities for all committees, then adding up the probabilities for all committees for those indices, and finally taking the maximum index.
But this seems overly convoluted and I'm not really sure. Could someone clarify this?
Answer: I agree the equation might not be clear, but you can decompose it into something like the following:
First, the term $\operatorname{argmax}_k p^i (y=k|\mathbf{x})$ tells you which label has the higher probability from model $i$ given the input object $\mathbf{x}$.
Then, this "iterates" over all models in the Committee, computing for each the label that is most likely.
Finally finding which label is the most common one (that $\operatorname{argmax}_j$) at the end.
Also, it helps to think about it in pseudo-code
def get_label(CNNs, x, num_classes):
    labels = [0] * num_classes  # each position is a vote count for label $j$
    for pCNNi in CNNs:
        predictions = pCNNi(x)
        label_i = predictions.index(max(predictions)) # this is the argmax_k
        labels[label_i] += 1
    return labels.index(max(labels)) # this is the argmax_j | {
"domain": "datascience.stackexchange",
"id": 4673,
"tags": "cnn, image-classification, notation"
} |
Do repeated sentences impact Word2Vec? | Question: I'm working with domain-oriented documents in order to obtain synonyms using Word2Vec.
These documents are usually templates, so sentences are repeated a lot.
1k of the unique sentences represent 83% of the text corpus; while 41k of the unique sentences represent the remaining 17% of the corpus.
Can this unbalance in sentence frequency impact my results? Should I sub-sample the most frequent sentences?
Answer: Are the sentences exactly the same, word for word? If that is the case I would suggest removing the repeated sentences, because they might bias the word2vec model: repeating the same sentence overweights those examples, since it ends up inflating the frequency of those words in the model. But it might be the case that this works in your favor for finding synonyms. Subsample all the unique sentences, not just the most frequent ones, to keep the model balanced.
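A minimal stdlib sketch of that dedup/sub-sampling step (the function name and the max_repeats cap are illustrative assumptions; run it on the corpus before training word2vec):

```python
from collections import Counter

def cap_repeats(corpus, max_repeats=1):
    """Keep at most max_repeats copies of each exact sentence (a sentence is a list of tokens)."""
    seen = Counter()
    kept = []
    for sentence in corpus:
        key = tuple(sentence)
        if seen[key] < max_repeats:
            seen[key] += 1
            kept.append(sentence)
    return kept

corpus = [["foo", "bar"], ["foo", "bar"], ["foo", "bar"], ["baz", "qux"]]
print(cap_repeats(corpus))  # [['foo', 'bar'], ['baz', 'qux']]
```

Raising max_repeats above 1 lets frequent template sentences keep some extra weight instead of removing the imbalance entirely.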
I would also suggest looking at the FastText model, which builds on word2vec by adding character-level n-grams. It is easy to train using gensim and has some prebuilt functions like model.most_similar(positive=[word], topn=<number of matching words>) to find the nearest neighbors based on the word embeddings; you can also use model.similarity(word1, word2) to easily get a similarity score between 0 and 1. These functions might be helpful for finding synonyms. | {
"domain": "datascience.stackexchange",
"id": 6317,
"tags": "machine-learning, nlp, word2vec, word-embeddings"
} |
Addition of acid and base to buffer system | Question: So I was doing my chemistry assignment and became stuck. Can I get some help?
a) Calculate the pH of a buffer system that contains 0.40 M of NH3(aq) and 0.50 M of NH4Cl(aq) . Note that the Kb value of NH3(aq) is 1.8×10−5.
My ans for delta pH: 9.158362492
b) Determine the change in pH if 2.50mL of 0.100 M HCl is added to 0.040 L of the buffer system described in part a).
My ans for delta pH:
c) Determine the change in pH if 2.50mL of 0.100 M NaOH is added to 0.040 L of the buffer system described in part a).
My ans for delta pH:
Answer: Ok, let's beat this to death with ICE tables.
(a) Calculate the pH of a buffer system that contains 0.40 M of $\ce{NH3(aq)}$ and 0.50 M of $\ce{NH4Cl(aq)}$ . Note that the $K_\beta$ value of $\ce{NH3(aq)}$ is $1.8\times10^{−5}$.
Some observations:
We want the pH, not the pOH.
For $\ce{NH4^+}$, $K_\alpha = \dfrac{K_\mathrm{w}}{K_\beta} = \dfrac{1.00\times10^{-14}}{1.8\times10^{−5}} = 5.556\times10^{-10},\quad \mathrm{p}K_\alpha = 9.2553 $
Since $[\ce{NH4+(aq)}] > [\ce{NH3(aq)}]$, the solution will be slightly more acidic than the $\mathrm{p}K_\alpha$ alone would indicate (pH just below $\mathrm{p}K_\alpha$)
We'll assume that the equilibrium between $\ce{NH4+(aq)}$ and $\ce{NH3(aq)}$ doesn't shift so that the Henderson-Hasselbalch approximation can be used.
$\begin{array}{|c|c|c|} \hline
\ & \ce{NH3} & \ce{NH4^+} \\ \hline
\text{I} & \pu{0.400 M} & \pu{0.500 M} \\
\text{C} & 0 & 0 \\
\text{E} & \pu{0.400 M} & \pu{0.500 M} \\ \hline
\end{array}\\$
The Henderson-Hasselbalch approximation gives us a method to approximate the pH of the weakly acidic buffer solution as follows:
$$\mathrm{pH} \approx \mathrm{p}K_\mathrm{a} + \log\dfrac{\ce{[NH3]}}{\ce{[NH4+]}}$$
$$ = 9.2553 + \log\dfrac{0.40}{0.50} = 9.1584 \ce{->[Round] = 9.16}$$
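A quick stdlib check of this arithmetic (a sketch; the same helper covers parts (b) and (c) below by swapping in the millimole amounts, since the ratio is unitless):

```python
import math

Kb = 1.8e-5
Kw = 1.0e-14
pKa = -math.log10(Kw / Kb)  # ~9.2553 for NH4+

def buffer_pH(base_amount, acid_amount):
    """Henderson-Hasselbalch; amounts may be in M or mmol because only the ratio matters."""
    return pKa + math.log10(base_amount / acid_amount)

print(round(buffer_pH(0.40, 0.50), 4))  # part (a): 9.1584
```

Using the full-precision pKa rather than the rounded 9.2553 can shift the fourth decimal place of the later answers slightly.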
(b) Determine the change in pH if 2.50 mL of 0.100 M HCl is added to 0.040 L of the buffer system described in part (a).
Some observations:
Again we need to determine the pH, which we can subtract from the pH in part (a) to get the change in pH.
Let's just work in millimoles (mmol). Since molarity is moles/volume, and the volume for $\ce{NH4+(aq)}$ and $\ce{NH3(aq)}$ is the same, the volume term just cancels. This saves some work calculating the dilutions.
HCl is a strong acid and shifts the equilibrium according to the reaction:
$$\ce{NH3 + H+ -> NH4+}$$
We'll assume that the equilibrium between $\ce{NH4+(aq)}$ and $\ce{NH3(aq)}$ doesn't shift further so that the Henderson-Hasselbalch approximation can be used.
$\begin{array}{|c|c|c|} \hline
\ & \ce{NH3} & \ce{NH4^+} \\ \hline
\text{I} & \pu{16 mmol} & \pu{20 mmol} \\
\text{C} & \pu{-0.25 mmol} & \pu{+0.25 mmol} \\
\text{E} & \pu{15.75 mmol} & \pu{20.25 mmol} \\ \hline
\end{array}\\$
$$\mathrm{pH} \approx \mathrm{p}K_\mathrm{a} + \log\dfrac{\pu{mmol \ce{NH3}}}{\pu{mmol \ce{NH4^+}}}$$
$$ = 9.2553 + \log\dfrac{15.75}{20.25} = 9.1462$$
Therefore $\Delta\mathrm{pH} = 9.1462 - 9.1584 = -0.0122 \ce{->[round]} -0.01$
(c) Determine the change in pH if 2.50mL of 0.100 M NaOH is added to 0.040 L of the buffer system described in part (a).
Some observations:
Again we need to determine the pH, which we can subtract from the pH in part (a) to get the change in pH.
Again, let's just work in millimoles (mmol).
NaOH is a strong base and shifts the equilibrium according to the reaction:
$$\ce{NH4+ + OH- -> NH3 + H2O}$$
We'll assume that the equilibrium between $\ce{NH4+(aq)}$ and $\ce{NH3(aq)}$ doesn't shift further so that the Henderson-Hasselbalch approximation can be used.
$\begin{array}{|c|c|c|} \hline
\ & \ce{NH3} & \ce{NH4^+} \\ \hline
\text{I} & \pu{16 mmol} & \pu{20 mmol} \\
\text{C} & \pu{+0.25 mmol} & \pu{-0.25 mmol} \\
\text{E} & \pu{16.25 mmol} & \pu{19.75 mmol} \\ \hline
\end{array}\\$
$$\mathrm{pH} \approx \mathrm{p}K_\mathrm{a} + \log\dfrac{\pu{mmol \ce{NH3}}}{\pu{mmol \ce{NH4^+}}}$$
$$ = 9.2553 + \log\dfrac{16.25}{19.75} = 9.1706$$
Therefore $\Delta\mathrm{pH} = 9.1706 - 9.1584 = +0.0122 \ce{->[round]} +0.01$ | {
"domain": "chemistry.stackexchange",
"id": 13545,
"tags": "acid-base, everyday-chemistry"
} |
Downsampling impact on complex phase | Question: For my application, I have to downsample a bandpass complex signal which spectrum is located on the second Nyquist zone.
Knowing that this processing will cause a spectrum inversion, what would be the additional side effects (SNR reduction, less amplitude ..) ?
Will the phase's linearity be conserved (supposing that the pass-band anti aliasing filter has a linear phase) ? Does anyone have a reference detailing those side effects of the downsampling ?
Thanks for any help.
Mourad.F
Answer: Decimating a signal (selecting every Dth sample and discarding the rest) does not distort the signal within the passband in any way other than to cause aliases from higher frequencies to fold into the signal bandwidth. Depending on how we model the system the phase may be affected, since $z^{-n}$ is replaced with $z^{-n/D}$, but the phase will still be linear, so the signal itself will not be distorted. (In multi-rate control loop modelling applications, for example, I had to pay attention to this change in phase slope.) For further information on this side detail see the last paragraph.
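As a quick numeric illustration of that phase-slope point (the sample rate and test frequency below are arbitrary assumptions): a one-sample delay at the original rate and the corresponding $1/D$-sample delay at the decimated rate give the same phase at the same analog frequency in Hz.

```python
import math

fs = 48000.0   # original sample rate (arbitrary choice)
D = 2          # decimation factor
f = 3000.0     # analog test frequency in Hz (below the new Nyquist of fs/(2D))

# phase of a one-sample delay z^-1 at frequency f, at the original rate
phase_before = -2 * math.pi * f / fs

# after decimating by D the same physical delay is 1/D of a sample
# at the new rate fs/D, i.e. z^(-1/D)
fs_new = fs / D
phase_after = -2 * math.pi * (f / fs_new) * (1.0 / D)

print(math.isclose(phase_before, phase_after))  # True
```

The normalized phase slope changes by a factor of D, but the phase versus analog frequency in Hz is unchanged.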
Therefore an "anti-alias" filter is required, no different than what is done in an A/D application, just in the digital domain (decimation is a digital-to-digital converter).
The ideal decimation filter would pass the passband with no distortion while completely rejecting all the energy in the image bands. If that could feasibly be done then the decimation operation itself would be perfect (no effect on the signal in the passband). Actual performance depends on how well we design the decimation filter. So in your case, of a signal in the second Nyquist zone (that once sampled will be in the first Nyquist zone with spectrum inverted as you describe) will not go through any further distortion with a subsequent decimation (if linear phase filter is used the result will be linear phase).
Since the folding areas are in distinct bands, a multi-band filter is a good choice for decimation filter design (firls and firpm in Matlab support the design of multiband filters).
As far as the impact on noise due to decimation itself, see this post that further details how the variance of the noise is not affected by the decimation: Properties of Up- and Downsampling
However the devil is in the details, as with any digital design you must pay attention to quantization effects, filter realization and passband distortion etc but this is not due to decimation specifically.
To further explain what I touched on in the first paragraph regarding the change in phase slope, consider a unit delay prior to a decimate by 2. A unit delay (with transfer function $z^{-1}$) has a constant magnitude over all frequencies, and a phase shift that varies from 0 to $2\pi$ as the frequency goes from DC to the sampling rate. After the decimate by two, the transfer function of the unit delay becomes $z^{-1/2}$, which has a constant magnitude over all frequencies with a phase shift that varies from 0 to $\pi$ as the frequency goes from DC to the new sampling rate. So in the digital domain where we work with normalized frequencies, the slope of the phase has changed, but with regard to the related analog frequency in Hz, the slope is the same: at our new sampling frequency in Hz we see the same phase shift we got at that particular frequency prior to decimation. | {
"domain": "dsp.stackexchange",
"id": 6686,
"tags": "snr, downsampling, aliasing, linear-phase"
} |
Java recursive sort verification function for a linked list | Question: It should return true if there is no element that follows, or if the following elements are greater than their predecessors.
public class Element {
private int value;
private Element next;
public boolean isSorted(){
if (this.next == null)
return true;
else return (this.value < this.next.value && this.next.isSorted());
}
}
What do you guys think of it?
Answer: With this implementation it is impossible for a list with duplicate elements to be sorted. However, that possibility is easy enough to support:
return (next == null)
|| ((value <= next.value) && next.isSorted());
Now, empty lists are explicitly sorted by the first clause. Lists with an element strictly greater than a previous element are not sorted by the second. And the third says that if the first two values are sorted and the rest of the list is sorted, the list as a whole is sorted.
This works because this uses short circuit Boolean operators. So when next is null, it doesn't bother to check the rest of the statement (true || is always true). And when the current value is greater than the next, it doesn't need to continue (false && is always false). Finally, if it makes it to the last clause, it can simply return the result of it. Because it knows that the first clause was false and the second was true; otherwise, it would never have reached the third.
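Putting the pieces together, a self-contained sketch of the duplicate-friendly version could look like this (the constructor here is added for illustration; the original snippet only shows the fields):

```java
// Sketch of the Element class with the duplicate-friendly recursive check.
class Element {
    private int value;
    private Element next;

    // Illustrative constructor -- not part of the original snippet.
    Element(int value, Element next) {
        this.value = value;
        this.next = next;
    }

    boolean isSorted() {
        // An empty tail is sorted; otherwise the head pair must be
        // non-decreasing and the rest of the list sorted.
        // Short-circuit evaluation stops the recursion early.
        return (next == null)
            || ((value <= next.value) && next.isSorted());
    }
}
```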
Alternately, you could change the name from isSorted to isIncreasing which would better describe what you are actually doing. If that is what you actually wanted to do.
Moving from a recursive solution on Element to an iterative solution on the linked list would also make sense. | {
"domain": "codereview.stackexchange",
"id": 40062,
"tags": "java, recursion"
} |
Makefile and directory structure | Question: I am looking for improvements for this basic makefile and directory structure. I know this is overkill and could be compiled with one line in a flat directory structure, but I want to use this as a learning exercise.
CC=g++
CFLAGS=-Wl,--no-as-needed -std=c++11
LIBS=-lpthread
SRC=$(PWD)/src
BUILDDIR=$(PWD)/build
OUTDIR=$(BUILDDIR)/bin
TEMPDIR=$(BUILDDIR)/tmp
MKDIR_P = mkdir -p
.PHONY: directories clean run
all: directories $(OUTDIR)/terminal
$(OUTDIR)/terminal: $(TEMPDIR)/main.o $(TEMPDIR)/terminal.o
$(CC) $(CFLAGS) $(TEMPDIR)/terminal.o $(TEMPDIR)/main.o $(LIBS) -o $(OUTDIR)/terminal
$(TEMPDIR)/main.o: $(SRC)/main.cpp $(SRC)/terminal.hpp
$(CC) -c $(CFLAGS) $(SRC)/main.cpp -o $(TEMPDIR)/main.o
$(TEMPDIR)/terminal.o: $(SRC)/terminal.cpp $(SRC)/terminal.hpp
$(CC) -c $(CFLAGS) $(SRC)/terminal.cpp -o $(TEMPDIR)/terminal.o
directories: ${OUTDIR} $(TEMPDIR)
$(OUTDIR):
${MKDIR_P} $(OUTDIR)
$(TEMPDIR):
${MKDIR_P} $(TEMPDIR)
clean:
rm -rf $(BUILDDIR)
run:
$(OUTDIR)/terminal
Answer:
Stem rules:
Spelling out the compilation rule for each object file is tedious and error-prone. A single stem rule
$(TEMPDIR)/%.o: $(SRC)/%.cpp
$(CXX) $(CFLAGS) -c $< -o $@
takes care of all of them.
Autodependencies:
Spelling out dependencies for each object file is tedious and error-prone. gcc can generate them for you:
$(TEMPDIR)/main.d: $(SRC)/main.cpp
$(CXX) $(CFLAGS) -MM -MT $(TEMPDIR)/main.o $< -o $@
-include $(TEMPDIR)/main.d
Of course we don't want to spell it out for each source file, which naturally leads us to the next step.
Use macros. Should you add more source files, only FILES needs to be modified.
FILES = main.cpp terminal.cpp
TARGET = $(OUTDIR)/terminal
SOURCES = $(patsubst %,$(SRC)/%,$(FILES))
OBJECTS = $(patsubst %.cpp,$(TEMPDIR)/%.o,$(FILES))
DEPS = $(patsubst %.cpp,$(TEMPDIR)/%.d,$(FILES))
$(TEMPDIR)/%.o: $(SRC)/%.cpp
$(CXX) $(CFLAGS) -c $< -o $@
$(TEMPDIR)/%.d: $(SRC)/%.cpp
$(CXX) $(CFLAGS) -MM -MT $(TEMPDIR)/$*.o $< -o $@
-include $(DEPS)
$(TARGET): $(OBJECTS)
$(CXX) $(LDFLAGS) $(OBJECTS) -o $(TARGET) | {
"domain": "codereview.stackexchange",
"id": 11586,
"tags": "c++, make"
} |
Why did the process of sleep evolve in many animals? What is its evolutionary advantage? | Question: The process of sleep seems to be very disadvantageous to an organism as it is extremely vulnerable to predation for several hours at a time. Why is sleep necessary in so many animals? What advantage did it give the individuals that evolved to have it as an adaptation? When and how did it likely occur in the evolutionary path of animals?
Answer: This good non-scholarly article covers some of the usual advantages (rest/regeneration).
One of the research papers they mentioned (they linked to press release) was Conservation of Sleep: Insights from Non-Mammalian Model Systems by John E. Zimmerman, Ph.D.; Trends Neurosci. 2008 July; 31(7): 371–376. Published online 2008 June 5. doi: 10.1016/j.tins.2008.05.001; NIHMSID: NIHMS230885. To quote from the press release:
Because the time of lethargus coincides with a time in the round worms’ life cycle when synaptic changes occur in the nervous system, they propose that sleep is a state required for nervous system plasticity. In other words, in order for the nervous system to grow and change, there must be down time of active behavior. Other researchers at Penn have shown that, in mammals, synaptic changes occur during sleep and that deprivation of sleep results in a disruption of these synaptic changes. | {
"domain": "biology.stackexchange",
"id": 10091,
"tags": "evolution, sleep"
} |
Is P=NP relative to the halting oracle? | Question: Consider the following variant $\mathscr{H}$ of the halting oracle: given the code $e$ for an ordinary Turing machine and an input $n$ to it, we let $\mathscr{H}(\langle e,n\rangle) = \langle 0,0\rangle$ if the result $\varphi_e(n)$ of the execution of $e$ on $n$ is undefined (does not terminate), and $\langle 1,\varphi_e(n)\rangle$ if $\varphi_e(n)$ is defined. (As far as computability goes, the first component is enough, of course, but since I'm going to measure complexity I think it makes more sense to include $\varphi_e(n)$ in the result.)
Now consider Turing machines having access to $\mathscr{H}$ as an oracle. Even though $\mathscr{H}$ is not computable, we can define relativized complexity classes such as $\mathbf{P}^{\mathscr{H}}$, $\mathbf{NP}^{\mathscr{H}}$, $\mathbf{EXPTIME}^{\mathscr{H}}$, etc., as usual. (I hope there is no hidden difficulty in exactly how the oracle is queried. I'm interested in time complexity, and I think my definition of $\mathscr{H}$ means that the only thing that really matters is the number of calls to the oracle, since any computable function is computed “for free”.)
Question: Is $\mathbf{P}^{\mathscr{H}} = \mathbf{NP}^{\mathscr{H}}$?
(Of course, even an informal argument explaining why the question should be just as hard to answer as the case without oracle counts as an answer!)
Motivation: I'm aware that there are computable oracles making the relativized version of $\mathbf{P}=\mathbf{NP}$ true or false, and in fact quite easy either way (see Papadimitriou, Computational Complexity (1994), theorems 14.4 and 14.5, neither proof seems to adapt here), but I'm confused as to what happens when we relativize the theory of complexity to non-computable oracles (cross-links to this question as well as this question of mine which was perhaps too vague and this other one concernings a purposefully “weak” oracle, might be relevant here). The specific oracle $\mathscr{H}$ tries to formalize the idea that all computable functions are given “for free”, so complexity relative to it seems like a reasonable candidate for a complexity theory of $\mathbf{0'}$ (which I hope would turn out to be much easier than that of $\mathbf{0}$).
Answer: $\text{P}^\mathcal{H} = \text{NP}^\mathcal{H} = \text{PSPACE}^\mathcal{H}$ as noted in the linked answer (note that the query tape counts as space). Specifically, using $n$ calls to the halting oracle, we can compute the first $n$ bits of the Chaitin's constant. Using these bits, we can compute the oracle output for all queries of length $ < ≈n$ and decide the space-bounded problem — in unlimited time without assistance or in $n^{O(1)}$ time using another call to the oracle.
The equality of the complexity classes stems not from the complexity of $\mathcal{H}$, but its closure. Let $U\mathcal{H}$ be the halting oracle, with the machine+input encoded in unary; let $U\mathcal{H}'$ also allow querying the $i$th bit of machine output (for machines that halt) with $i$ in binary. We have $\text{P}^\mathcal{H} = \text{EXP}^{U\mathcal{H}} = \text{EXP}^{U\mathcal{H'}}$. Now, for every oracle $A$, $\text{P}^{\text{EXP}^A} = \text{PSPACE}^{\text{EXP}^A}$. Roughly speaking, an $\text{EXP}^A$ oracle is sufficiently closed relative to itself; using determinacy (see here) it can be shown that for every Borel property $p$ there is $B$ such that whether $\text{EXP}^{B,A}$ satisfies $p$ is independent of $A$.
To explore P vs NP for different representations of $0'$, let $Ω$ be the Chaitin's constant, and $UΩ$ be the Chaitin's constant with bits queried sequentially (or bit index in unary). The Chaitin's constant (machine halting probability) encodes the halting problem, but looks like random data. A caveat is that $Ω$ and I think $\text{P}^Ω$, $\text{NP}^Ω$, $\text{P}^{UΩ}$, and $\text{NP}^{UΩ}$ depend on the choice of (reasonable) encoding.
We have:
$\text{P}^Ω ≠ \text{NP}^Ω$ (the polynomial hierarchy is infinite relative to a random oracle)
$\text{P}^{UΩ} = \text{NP}^{UΩ}$ iff NP = RP
$\text{P}^{U\mathcal{H}} = \text{NP}^{U\mathcal{H}}$ iff NP is in P/poly ($U\mathcal{H}$ is powerful but sparse)
$\text{P}^{U\mathcal{H}'} = \text{NP}^{U\mathcal{H}'} = \text{PSPACE}^{U\mathcal{H}'}$
Also, the distinction between decision oracles and function oracles is relevant to fine-grained complexity but not for P vs NP, as restricting to decision oracles gives at most a quadratic slowdown.
Despite $\text{P}^\mathcal{H} = \text{NP}^\mathcal{H}$, nondeterminism can reduce the number of oracle calls: just two calls suffice for $\mathcal{H}$, one to certify haltings and another to certify nonhaltings. By contrast, deterministically computing the $n$th binary digit of Chaitin's constant requires (even with unbounded time) $n-O(\log n)$ calls to the halting oracle. This is by the incompressibility of Chaitin's constant and its capture of the halting problem; the $O(\log n)$ is there because $n$ itself (if lucky) can give a logarithmic number of bits to jump-start the search for $Ω$. (And for unbounded time, using the function version of $\mathcal{H}$ does not reduce the required number of calls.)
"domain": "cstheory.stackexchange",
"id": 5751,
"tags": "cc.complexity-theory, computability, oracles"
} |
Edit link and joint names in the editor | Question:
Hello list,
Is it possible to initial set joint and link names or edit them afterwards in the editor?
Or should that be done in the sdf-file directly?
I want to create a complex robot in the editor and then later convert it into a urdf to use it from ROS.
During this development I would like more meaningful names for the links and joints.
Thanks in advance,
Sietse Achterop
Originally posted by Sietse on Gazebo Answers with karma: 25 on 2018-10-03
Post score: 0
Answer:
You can edit the joint name in the 'joint inspector' after you right click on the selected part. You can't rename the links, though.
Originally posted by kumpakri with karma: 755 on 2018-10-03
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Sietse on 2018-10-05:
Thanks. It is a pity that there seems to be no ongoing work on better integrating urdf and sdf.
"domain": "robotics.stackexchange",
"id": 4326,
"tags": "gazebo"
} |
Interpreting a formula tattoo | Question: I saw this tattoo on the Internet and I'm curious as to what compound is represented by it.
Answer: The molecule seen in the tattoo is a protein-based mammalian hormone called oxytocin, commonly called "the love hormone" because it is involved in several aspects of sexual reproduction.
Image Source | {
"domain": "chemistry.stackexchange",
"id": 146,
"tags": "organic-chemistry, structural-formula, identification"
} |
How to generate graphs with a Hamiltonian path? | Question: I need to create a graph generator for my next project. Generally, algorithms try to find a Hamiltonian path in a graph. So I can create a graph generator, generate a graph, and then decide whether the graph has a Hamiltonian path or not. If not, I can regenerate the graph, but this is not a cool way.
I want my generated graphs to always have a Hamiltonian path. My graph has two additional conditions:
Vertices can only have 2, 3, or 4 edges.
Possible number of vertices follow the sequence 6, 10, 15, 20, 25...n-5,n where n % 5 = 0.
How should I start, and how do I achieve this easily?
Answer: The simplest solution is the one suggested by Yuval Filmus. That is, take a simple path on $n$ vertices, and finish the construction after you have added some number of edges. Clearly, it is easy to enforce the maximum degree condition as well.
Alternatively, you could generate graphs whose structure guarantees the existence of a Hamiltonian path. For example, as shown by Tutte, every 4-connected planar graph has a Hamiltonian path (see e.g., Wikipedia for the statement). Then, a high-performance tool for generating planar graphs is plantri. My feeling is that this is unnecessarily complicated for your use case, but it gives you another possible approach you might be interested in later on.
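The first suggestion (lay down a path, then add edges) is easy to implement directly. Below is a minimal Python sketch; the function name, the `extra` parameter, and the repair loop for the path endpoints are my own additions, and choosing an `n` from your allowed sequence is left to the caller:

```python
import random

def generate_graph(n, extra=10, seed=None):
    """Random graph on n vertices with a guaranteed Hamiltonian path
    and every vertex degree in the range 2..4."""
    rng = random.Random(seed)
    path = list(range(n))
    rng.shuffle(path)                 # this permutation IS the Hamiltonian path
    edges, deg = set(), [0] * n

    def add(u, v):
        # add edge {u, v} only if it is new and respects the degree cap of 4
        e = (min(u, v), max(u, v))
        if u == v or e in edges or deg[u] >= 4 or deg[v] >= 4:
            return False
        edges.add(e)
        deg[u] += 1
        deg[v] += 1
        return True

    for u, v in zip(path, path[1:]):  # path edges first; always succeed
        add(u, v)
    for u in range(n):                # path endpoints have degree 1: raise to >= 2
        while deg[u] < 2:
            cands = [v for v in range(n)
                     if v != u and deg[v] < 4
                     and (min(u, v), max(u, v)) not in edges]
            if not cands:
                break
            add(u, rng.choice(cands))
    for _ in range(extra):            # optional extra edges, still capped at degree 4
        add(rng.randrange(n), rng.randrange(n))
    return sorted(edges), path
```

Because the path is laid down first and edges are only ever added, the returned `path` is a Hamiltonian path of the final graph by construction; only the degree-2 lower bound needs the small repair loop.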
"domain": "cs.stackexchange",
"id": 16778,
"tags": "graphs, hamiltonian-path"
} |
Missing genes and normalisation of RSEM output using EBSeq | Question: Without going into too much background, I just joined up with a lab as a bioinformatics intern while I'm completing my masters degree in the field. The lab has data from an RNA-seq they outsourced, but the only problem is that the only data they have is preprocessed from the company that did the sequencing: filtering the reads, aligning them, and putting the aligned reads through RSEM. I currently have output from RSEM for each of the four samples consisting of: gene id, transcript id(s), length, expected count, and FPKM. I am attempting to get the FASTQ files from the sequencing, but for now, this is what I have, and I'm trying to get something out of it if possible.
I found this article that talks about how expected read counts can be better than raw read counts when analyzing differential expression using EBSeq; it's just one guy's opinion, and it's from 2014, so it may be wrong or outdated, but I thought I'd give it a try since I have the expected counts.
However, I have just a couple of questions about running EBSeq that I can't find the answers to:
1: In the output RSEM files I have, not all genes are represented in each file (about 80% of them are); for the ones that aren't, should I remove them before analysis with EBSeq? It runs when I do, but I'm not sure whether the result is correct.
2: How do I know which normalization factor to use when running EBSeq? This is more of a conceptual question rather than a technical question.
Thanks!
Answer: Yes, that blog post does represent just one guy's opinion (hi!) and it does date all the way back to 2014, which is, like, decades in genomics years. :-) By the way, there is quite a bit of literature discussing the improvements that expected read counts derived from an Expectation Maximization algorithm provide over raw read counts. I'd suggest reading the RSEM papers for a start[1][2].
But your main question is about the mechanics of running RSEM and EBSeq. First, RSEM was written explicitly to be compatible with EBSeq, so I'd be very surprised if it does not work correctly out-of-the-box. Second, EBSeq's MedianNorm function worked very well in my experience for normalizing the library counts. Along those lines, the blog you mentioned above has another post that you may find useful.
But all joking aside, these tools are indeed dated. Alignment-free RNA-Seq tools provide orders-of-magnitude improvements in runtime over the older alignment-based alternatives, with comparable accuracy. Sailfish was the first in a growing list of tools that now includes Salmon and Kallisto. When starting a new analysis from scratch (i.e. if you ever get the original FASTQ files), there's really no good reason not to estimate expression using these much faster tools, followed by a differential expression analysis with DESeq2, edgeR, or sleuth.
1Li B, Ruotti V, Stewart RM, Thomson JA, Dewey CN (2010) RNA-Seq gene expression estimation with read mapping uncertainty. Bioinformatics, 26(4):493–500, doi:10.1093/bioinformatics/btp692.
2Li B, Dewey C (2011) RSEM: accurate transcript quantification from RNA-Seq data with or without a reference genome. BMC Bioinformatics, 12:323, doi:10.1186/1471-2105-12-323. | {
"domain": "bioinformatics.stackexchange",
"id": 44,
"tags": "rna-seq, differential-expression, ebseq, rsem"
} |
Bring phone numbers in consistent format | Question: I'm using this code to bring my phone numbers in a consistent format.
Desired: +(country code)phone number
Possible patterns:
01721234567 -> change to desired pattern
00491234567 -> change to desired pattern
+4912345678 -> do nothing, already desired pattern
This is what I use:
String number = allContactNumbers.get(i).get(j);
number = number.replaceAll("[^+0-9]", ""); // All weird characters such as /, -, ...
String country_code = getResources().getString(R.string.countrycode_de);
if (number.substring(0, 1).compareTo("0") == 0 && number.substring(1, 2).compareTo("0") != 0) {
number = "+" + country_code + number.substring(1); // e.g. 0172 12 34 567 -> + (country_code) 172 12 34 567
}
number = number.replaceAll("^[0]{1,4}", "+"); // e.g. 004912345678 -> +4912345678
For some reason I'm not happy with it though. I hope there is somebody to tell me whether my code is written properly!
Answer: There are two aspects to this, the general design, and the implementation.
Implementation
The entire thing should be extracted as a function. The first line of code:
String number = allContactNumbers.get(i).get(j);
indicates that this code is being run inside a loop (i,j). You need to extract it to be:
String number = normalizePhoneNumber(allContactNumbers.get(i).get(j));
and the function would look something like:
public String normalizePhoneNumber(String number) {
......
return normalized;
}
Right, about that function, putting your code in it ends up with:
public String normalizePhoneNumber(String number) {
number = number.replaceAll("[^+0-9]", ""); // All weird characters such as /, -, ...
String country_code = getResources().getString(R.string.countrycode_de);
if (number.substring(0, 1).compareTo("0") == 0 && number.substring(1, 2).compareTo("0") != 0) {
number = "+" + country_code + number.substring(1); // e.g. 0172 12 34 567 -> + (country_code) 172 12 34 567
}
number = number.replaceAll("^[0]{1,4}", "+"); // e.g. 004912345678 -> +4912345678
return number;
}
Design
OK, about the design.... I believe this is a problem. Handling phone numbers is much more complicated than what you have.... it is really a challenging problem.
It is easy enough to strip off the junk, but it gets hard really fast. For example, this is the Queen of England's land-line number (not kidding):
Public Information Officer
Buckingham Palace
London SW1A 1AA
Tel (during 9am - 5pm (GMT) Monday to Friday): (+44) (0)20 7930 4832. Please note, calls to this number may be recorded.
As a Canadian, the international dialing code for this would be:
011442079304832
How will your script translate that into:
+442079304832
I think you need to revise your plan for this problem, it is more complicated than you realize. | {
"domain": "codereview.stackexchange",
"id": 6901,
"tags": "java, regex"
} |
Multiple machine communication from Hydro to Groovy | Question:
Hi all,
I'm currently running on Hydro and I was trying to set up a multiple machines network with my lab robot. However, the robot is running ROS Groovy, and what we found was that my computer running Hydro was able to receive information from the nodes running on the robot computers, but was unable to send the same information back to the robots.
On the robot computers, the nodes running on my Hydro computer would be detected through rostopic list, but nothing would actually be received (ran tests with rostopic echo/bw/hz and saw nothing in all three tests).
We've been following the network setup found here. When we troubleshot with netcat, we were able to see the text from my computer on the robot computer. There was no issue when my teammate ran the same test from his Groovy laptop with the robot computers.
So is there a compatibility issue when it comes to sending information through nodes from Hydro to Groovy?
Thanks
Originally posted by jakemorris on ROS Answers with karma: 13 on 2013-12-07
Post score: 1
Answer:
There is no general incompatibility between Groovy and Hydro. They can transparently exchange messages.
An exception is if the specific message definition has been changed between the two distributions. But this is only the case for a very small number of messages, and it is obviously not the case in your scenario since one direction seems to work fine.
I guess one of your machines needs a different ROS_IP / ROS_HOSTNAME to make it work. It is likely that you would have the same issue when both computers would use the same distro.
Originally posted by Dirk Thomas with karma: 16276 on 2013-12-07
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 16375,
"tags": "ros, multiple, ros-groovy, ros-hydro"
} |
Find relationships between the voltage difference across capacitor and resistor | Question: I'm trying to learn Kirchhoff's laws, but I'm stuck on how to solve a circuit that has both a resistor and a capacitor. How would you proceed?
Answer: I'd start by writing down Kirchhoff's voltage law. Then I'd think about how to calculate the voltage across each element in the circuit and insert those into Kirchhoff's voltage law.
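To make that concrete for the simplest case, a series RC circuit driven by a constant source $V$: KVL gives $V - iR - q/C = 0$, and with $i = dq/dt$ this is a first-order ODE whose solution is the familiar exponential charging curve. A small sketch (the component values are made up purely for illustration):

```python
# Series RC circuit, KVL: V = i*R + q/C, with i = dq/dt
# => dq/dt = (V - q/C) / R, exact solution q(t) = C*V*(1 - exp(-t/(R*C)))
import math

V, R, C = 5.0, 1000.0, 1e-6    # 5 V source, 1 kOhm, 1 uF (illustrative values)
tau = R * C                    # time constant

def q_exact(t):
    return C * V * (1.0 - math.exp(-t / tau))

# Forward-Euler integration of the same ODE as a sanity check
dt, q = tau / 10000.0, 0.0
for _ in range(50000):         # integrate out to 5 time constants
    q += dt * (V - q / C) / R

print(q / (C * V))             # fraction of full charge after 5*tau, ~0.993
```

Once $q(t)$ is known, the capacitor voltage is $q/C$ and the resistor voltage is the remainder $V - q/C$, which is exactly the relationship the question asks about.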
"domain": "physics.stackexchange",
"id": 56206,
"tags": "homework-and-exercises, electric-circuits, electrical-resistance, capacitance, voltage"
} |
Is "ice point" the same as "freezing point"? | Question: I am using an old book and they almost use both terms interchangeably sometimes. Do they mean the same thing? Similiarly for "steam point" and "boiling point"
Answer: Ice point is the freezing point of water.
Freezing point is the point where liquid turns to solid.
The difference is that in the case of the ice point the substance is always water.
(Note that there is a slight discrepancy of 0.01 K between the triple point of water and the melting point, freezing point, and/or ice point. Technically, the ice point is the point where water and ice are in equilibrium.)
See e.g. http://www.thefreedictionary.com/freezing+point, http://www.thefreedictionary.com/ice+point. | {
"domain": "chemistry.stackexchange",
"id": 191,
"tags": "thermodynamics, terminology"
} |
Calculate height of a projectile in rotating spaceship reference frame | Question: In a rotating reference frame, or in a rotating spaceship, the apparent gravity felt by an object is the centrifugal force. Any moving object also experiences a Coriolis force. When I want to do calculations with a projectile (an object thrown across the circle in such a rotating frame), how do I take into account that the Coriolis acceleration constantly changes? I have been doing some calculations using the projectile equation and changing the acceleration variable to the Coriolis and centrifugal acceleration. But obviously, this is not correct. Lastly, how can I derive the maximum height of such projectile in this rotating frame of reference, given launch angle, angular velocity, and the radius of the circle?
Any advice would be appreciated. Thank you.
Answer:
The position vector $~\mathbf r~$ is
$$\mathbf r=\mathbf R+\mathbf R',$$
where
$$\mathbf R=\left[\begin{array}{c} -\rho\,\sin\theta \\ -\rho\,\cos\theta \\ 0 \end{array}\right]$$
$$\mathbf R'=\left[\begin{array}{c} X \\ X\tan\alpha-\dfrac{g\,X^{2}}{2\,v^{2}\cos^{2}\alpha} \\ 0 \end{array}\right]$$
so that
$$\mathbf r=\left[\begin{array}{c} -\rho\,\sin\theta+X \\ -\rho\,\cos\theta+X\tan\alpha-\dfrac{g\,X^{2}}{2\,v^{2}\cos^{2}\alpha} \\ 0 \end{array}\right]$$
you have one generalized coordinate, which is $~X~$; you can now obtain the equation of motion with:
$$T=\frac m2\mathbf{\dot{r}}\cdot \mathbf{\dot{r}} \\
U=-m\,g\,\mathbf r_y$$
and the external forces due the rotation
$$\mathbf F_s=\left[\begin{array}{c} m\left(-\omega^{2}r_{x}-2\,\omega\,v_{y}\right) \\ m\left(-\omega^{2}r_{y}+2\,\omega\,v_{x}\right) \\ 0 \end{array}\right]$$
with Euler-Lagrange you obtain the differential equation
$$\ddot X(\tau)=f(X(\tau)~,\dot X(\tau)~,\omega~,v~,\alpha,\rho)$$
$\alpha~$ launch angle
$v~$ launch velocity
$\omega~$ angular velocity
$~\rho~$ circular radius
with the solution $~X(\tau)~$
Numerical simulation
I stop the simulation if $~x'(\tau)\,\ge 2\,\rho\sin(\theta)~$ | {
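The same kind of simulation can also be sketched without Lagrangian machinery: in the rotating frame the only accelerations are centrifugal, $\omega^2\mathbf r$, and Coriolis, $-2\,\boldsymbol\omega\times\mathbf v$, so a simple step integrator launched from the rim recovers the trajectory and its maximum height (distance risen above the rim). All parameter values below are illustrative, not from the answer:

```python
import math

def max_height(rho=100.0, omega=0.5, v0=10.0, alpha=math.radians(60.0), dt=1e-4):
    """Projectile in a rotating habitat (rotation about z at rate omega).
    Rotating-frame accelerations: a = omega^2 * r (centrifugal, outward)
    plus the Coriolis term (2*omega*vy, -2*omega*vx) for omega along +z."""
    # start on the rim at (0, -rho); locally "up" is +y (toward the axis)
    x, y = 0.0, -rho
    vx, vy = v0 * math.cos(alpha), v0 * math.sin(alpha)
    min_r = rho
    while math.hypot(x, y) <= rho:       # until the projectile lands on the rim
        ax = omega**2 * x + 2.0 * omega * vy
        ay = omega**2 * y - 2.0 * omega * vx
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        min_r = min(min_r, math.hypot(x, y))
    return rho - min_r                   # greatest distance risen above the rim
```

For small launch speeds this approaches the flat-floor estimate $v^2\sin^2\alpha/(2\,\omega^2\rho)$ with $g_\text{eff}=\omega^2\rho$; at higher speeds the Coriolis term visibly bends the arc and shifts the apex, which is exactly why no simple closed-form height formula survives in the rotating frame.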
"domain": "physics.stackexchange",
"id": 76900,
"tags": "newtonian-mechanics, reference-frames, inertial-frames, projectile, coriolis-effect"
} |
Is it possible that the force due to gravity varies based on density? | Question: It took a while to find relativity and the various subtle (or unmeasurable) effects it has within our universe. Is it possible that, although Newton's gravitational force equation leaves out any notion of density or size, this too might impact the forces acting around us?
I noticed this question already, but I am not satisfied with its answer. I wonder if two objects, A and B, having the same mass but (for lack of a better term) relativistically different densities, would exert slightly different gravitational forces on a third object C which is far enough from A and B (and equally distant from both) to negate any "diffusion" effects on the gravitational fields of A and B?
In other words, is it possible that the equations we have are "good enough", and we just don't have examples (yet) that introduce the same kind of errors as those that confirmed the theory of relativity?
If not, why not? Is there any particular reason aside from "these are the equations we have"?
I realize that finding the mass of something like a black hole would be at best impractical, if not entirely impossible, without assuming that its mass directly determines its gravitational field strength.
Answer: The answer is No and the reason is the equivalence principle which says that there exist natural units in which the gravitational mass (the mass $m$ in $F=GMm/r^2$) is equal to the inertial mass (the mass $m$ in $F=ma$) for all objects in the Universe. This is equivalent to the statement that all objects, regardless of their composition, density, and other properties, accelerate by the same acceleration in a given gravitational field (any gravitational field).
This equivalence principle is the starting point behind Einstein's general theory of relativity which describes gravity as the curvature of spacetime (and which is fully respected by more modern theories of gravity, especially string theory). This principle is also experimentally verified at the relative accuracy level of order $10^{-17}$ which is incredibly accurate (one may compare two different materials which have noticeably different densities etc. and the acceleration is still perfectly the same). In principle, it's always possible that some experimental deviations will be found by finer experiments in the future (but that's true about any insight in physics). However, it's not just the absence of any experimental hints that undermines any idea that the gravitational force should depend on anything else aside from mass; it is also the complete absence of candidate theories that would be compatible with the basic observations and where the deviation would be implanted as anything more than just an ugly, partially inconsistent, unjustified, and numerically small deformation of a beautiful, consistent, robust, justified theory. | {
"domain": "physics.stackexchange",
"id": 50785,
"tags": "gravity, density"
} |
Reducing a 2D array to its longest unique series that aren't contained within any other | Question: I have an array whose members are arrays containing strings. Each array has 5 elements, some of which may be empty but always at the end of the array. The order of the strings matters because their hierarchical labels—call them catLists. I need all the longest unique lists, ie. some of the shorter catLists are contained within longer ones, and for my purposes the shorter ones are redundant.
Here's a sample of the data:
// in fact my $catLists array is hundreds of elements long and changing weekly
$catLists = array(
0 => array('500ml', '750ml', '', '', ''),
1 => array('sml', 'med', 'lrg', 'xlg', 'xxl'),
2 => array('big', 'bigger', 'biggest', '', ''),
3 => array('sml', 'med', 'lrg', '', ''),
4 => array('big', '', '', '', ''),
5 => array('300ml', '500ml', '750ml', '1L', ''),
6 => array('1L', '750ml', '', '', ''));
// but if this were the list, then my desired output would be
$reducedCatLists = array(
0 => array('sml', 'med', 'lrg', 'xlg', 'xxl'),
1 => array('big', 'bigger', 'biggest'),
2 => array('300ml', '500ml', '750ml', '1L'),
3 => array('1L', '750ml'));
Although sequence matters, position doesn't, which is why, for my purposes, $catLists[0] would be considered contained within $catLists[5] but $catLists[6] wouldn't.
I've no idea how to do this efficiently; right now I'm reducing every array to a string and then comparing substrings, but it's not clear to me that this is a good way to do this (self-taught programmer who leans toward design rather than math—no CS here at all).
$catList_strings = array();
$reduced_catLists = array();
foreach ($catLists as $i => $catList) {
$stringified = implode("",$catList);
$contained = false;
foreach ($catList_strings as $cl_str) {
if ( strpos($stringified, $cl_str) ) $contained = true;
}
if (!$contained) {
$catList_strings[] = $stringified;
$reduced_catLists[] = array_filter($catList);
}
}
When this goes live I'd really like it not to be a burden to the server.
Answer: Bug
Try the following input:
$catLists = array(
0 => array('sml', 'med', 'lrg', 'xlg', 'xxl'),
1 => array('smlmed', 'lrg', '', '', ''));
I think that this will produce just one row of output, although it should produce two (these arrays are distinct). To avoid this, you should check the arrays after finding matching strings or use some character that would never appear in the strings to join them. If you change the join character, note that you may have to trim out the empty portion before generating the string.
For example, the colon (:) would work in your example:
$stringified = ':' . implode(':', $catList) . ':';
but would produce strings like :big::::: when you probably just want :big:.
Avoid unnecessary variables
$contained = false;
foreach ($catList_strings as $cl_str) {
if ( strpos($stringified, $cl_str) ) $contained = true;
}
if (!$contained) {
$catList_strings[] = $stringified;
$reduced_catLists[] = array_filter($catList);
}
You can just say
foreach ($catList_strings as $cl_str) {
if ( strpos($stringified, $cl_str) ) {
continue 2; // contained in an earlier list: skip to the next $catList
}
}
$catList_strings[] = $stringified;
$reduced_catLists[] = array_filter($catList);
This is also more performant, as it stops comparing as soon as it finds a match; the original code kept scanning the rest of the list even after finding one.
Consider using an associative array
You can make this more direct. Something like:
function array_contains($base, $candidate, $start = 0) {
if (count($base) > count($candidate) - $start) {
return false;
}
foreach ($base as $v) {
if ($v != $candidate[$start++]) {
return false;
}
}
return true;
}
$indexesOf = array();
$reducedCatLists = array();
foreach ($catLists as $i => $catList) {
$trimmedList = array_filter($catList);
foreach ($trimmedList as $j => $cat) {
if (isset($indexesOf[$cat])) {
if (0 == $j) {
foreach ($indexesOf[$cat] as $index => $start) {
if (array_contains($trimmedList, $catLists[$index], $start)) {
continue 3; // contained in an earlier list: skip it
}
}
}
} else {
$indexesOf[$cat] = array();
}
$indexesOf[$cat][$i] = $j;
}
$reducedCatLists[] = $trimmedList;
}
This uses a dictionary lookup to search for arrays that could potentially contain the current one.
Bug 2
Perhaps this isn't an issue, but the array order matters here. You only check arrays that you've already seen. I do this as well. Not sure how much it matters, but
$catLists = array(
0 => array('500ml', '750ml', '', '', ''),
1 => array('300ml', '500ml', '750ml', '1L', ''));
and
$catLists = array(
0 => array('300ml', '500ml', '750ml', '1L', ''),
1 => array('500ml', '750ml', '', '', ''));
will return different results.
If you want to get the same result from both inputs, then you have to loop through the array twice. The first time, you build the lookup array. The second time, you build the reduced list.
Test
If you have historically accurate values for $catLists, run a test where you time how long it takes to process the data each way.
Note that if it only changes weekly, it may be reasonable to save $reducedCatList when it first appears. That would be unlikely to cause significant load to the server, as you can choose to do it at a time when most legitimate users don't use the server. Something like 3 AM is great, but even a time like 3 PM might work. I don't know what patterns your customers follow. | {
"domain": "codereview.stackexchange",
"id": 13565,
"tags": "php, array, sorting"
} |
Vacuum polarization and initial state radiation correction for the cross section | Question: I saw that the experimental Born cross section measured for (e+e- -> Hadrons) for positron-electron particle physics experiments such as the (BESIII or BaBar experiments) is corrected by initial state radiation and vacuum polarization factors. The cross section is given by:
$$
\sigma^B=\frac{N^\text{obs}}{\mathcal{L}_\text{int}(1+\delta^r)(1+\delta^\upsilon)\varepsilon \mathcal{B}}
$$
where $(1+\delta)$ stands for the correction factors.
Can anyone give me a simplified explanation on why the cross sections are corrected like this? or maybe provide me an easy to understand (without having a strong background in theory) reference?
Answer: This question seems to come down to just a basic question of terminology.
I recognize that people often say things like "corrections to the cross section", but this phraseology is a bit sloppy, I guess, and may cause confusion.
What that phrase really means is "corrections to a less accurate approximation of the cross section."
The universe doesn't make approximations, so the observed cross section is the true cross section and there can be no "corrections" to it.
However, we aren't capable of computing the exact cross section. We always have to make some approximations. There are various approximations that are commonly made.
First, if you imagine taking the Taylor expansion of the exact cross section in terms of the coupling constant(s), we only compute up to some order in that expansion. The lowest order corresponds to Feynman diagrams with no loops, or "tree level" diagrams. For important processes that are to be measured in an experiment, a lot of effort goes in to computing further terms in that expansion, which diagramatically corresponds to adding loops to the tree level diagrams. These loop diagrams are corrections to the tree level calculation that make the result closer to (but still not exactly equal to) the true cross section. Again, they are not "corrections" to the true cross section, which would obviously be a meaningless thing to say.
The other issue that requires approximation is that if we want to measure the production of, say, a certain B meson at an experiment, we can never force the scattering process to produce only that B meson and nothing else. In general, when we detect a B meson there will also be any number of other particles that were also produced in the same interaction. So the total cross section for B meson production is the sum of $\sigma_B + \sigma_{B, X} + \sigma_{B, X, Y}+\dots$ where $X,Y,\dots$ are anything else. These other things could be initial or final state radiation (extra gauge bosons coming off the initial or final particles), other baryons from the hard scattering, etc. In practice, it's only feasible to compute the cross section including up to some finite number of extra particles. Including these extra particles is again a correction to the no-extra-particle approximation that corrects our calculation to be closer to the true total cross section that includes all possible additions.
As a final comment, these two kinds of corrections (loops and extra radiation) are actually not independent of each other. They are intimately related and one needs to do both to get consistent results due to an issue known as infrared divergences. I won't get into that here, but it's something you can read about another time. | {
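As for the mechanics of the formula in the question, applying the correction factors is plain arithmetic once the analysis provides them. A sketch with hypothetical numbers (none are from a real experiment):

```python
# sigma_B = N_obs / (L_int * (1 + delta_r) * (1 + delta_v) * eff * BR)
# All numbers below are made up purely to illustrate the bookkeeping.
N_obs   = 1200.0   # observed signal events
L_int   = 500.0    # integrated luminosity, pb^-1
delta_r = -0.08    # initial-state-radiation correction factor
delta_v = 0.05     # vacuum-polarization correction factor
eff     = 0.40     # detection efficiency
BR      = 0.30     # branching fraction of the reconstructed decay

sigma_B = N_obs / (L_int * (1 + delta_r) * (1 + delta_v) * eff * BR)
print(round(sigma_B, 3))   # Born cross section in pb
```

The point of the division is that each factor converts observed events back toward the "bare" process: the luminosity sets the exposure, efficiency and branching fraction undo losses in reconstruction, and the $(1+\delta)$ factors undo the radiative and vacuum-polarization effects described above.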
"domain": "physics.stackexchange",
"id": 74942,
"tags": "particle-physics, experimental-physics, scattering-cross-section, particle-accelerators"
} |
new view controller in rviz (fixed to the center of the target frame): how to rotate with the frame? | Question:
I wrote a new view controller for rviz. Heavily based on FPS, but cannot be translated, so that it stays attached to the target frame (I copied from FPS and suppressed the code related to translation).
I attach it to a frame inside the car (think base_link + offset), so that it gives a view from the driver's perspective.
But when the car turns, the camera does not turn with the car. I want it to be always aligned with the target frame, plus whatever yaw and pitch I give it with the mouse.
How to achieve that please?
Originally posted by brice rebsamen on ROS Answers with karma: 1001 on 2013-08-23
Post score: 0
Original comments
Comment by brice rebsamen on 2013-08-23:
cool. Sadly I'm still using Fuerte... I'm planning to upgrade in a couple of months, I'll use your plugin then. Meanwhile I see if I can do some backport...
Answer:
I recently wrote a view controller which does exactly that. It's part of the oculus plugin: https://github.com/ros-visualization/oculus_rviz_plugins/blob/groovy-devel/src/fixed_view_controller.cpp
Originally posted by dgossow with karma: 1444 on 2013-08-23
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 15347,
"tags": "rviz, ogre"
} |
FMO analysis of the [3,3] Sigmatropic reaction regarding geometry. Is it accurate? | Question: While studying the reasons behind the stereospecificity of [3,3] Sigmatropic reactions, I ran into an image portraying both possible transition states for the reaction: The Chair TS and Boat TS. The complete FMO orbitals were only drawn for the Chair TS and the final geometries of the products was shown for both transition states:
I find it a little bit hard sometimes to visualize the three dimensional transitions that result in a specific configuration, so I tried to complete the FMO analysis with the orbitals for the Boat TS and to draw the sigma-bond rotations necessary to create constructive overlap between the lobes. I came up with the following:
On the Chair TS we see the σ* shaded lobe is perpendicular to the shaded lobe of the π HOMO from the double bond next to it (having a 3D model would make it easier to see because in the drawing they seem aligned, but they are actually perpendicular). The sigma-bond would then have to rotate 90º inward for effective constructive interaction, "throwing" the R-Group outwards and resulting in the E-configuration seen in the product. The other σ* lobe also rotates 90º in the same direction (I forgot to draw the arrow there, but it would be the same as the one beside the shaded lobe), but for some reason this rotation results in the regular "cyclic-like" geometry for that double bond (Z-configuration?).
If it wasn't hard enough to visualize on the Chair TS, it gets worse on the Boat TS. I came up with the orbital configuration as seen in the image. First, I got a little confused about the σ* shading, because the whole premise for the shading had to do with constructive overlap where the double bonds would occur. But there I had to draw opposite phases on the lower part. That, I believe, is because initially they were aligned with the same phases (as in the Chair TS), but the conformation twisting threw the constructive phases in opposite directions. Anyways, to reach constructive overlap the sigma-bond next to the substituted carbon would have to rotate close to 90º counter-clockwise, while the other sigma-bond would have to rotate almost 180º. That explains why this transition state is so unfavorable, but I'm still struggling a bit with the geometry. The 180º rotation sounds like it would result in the unsubstituted double bond pointing out of the "cyclic-like" shape as seen in the product. But I can't clearly see how the 90º rotation throws the R-Group inward in a Z-configuration geometry. Instead, I came up with another possible orbital configuration that might be the correct one:
In that orbital configuration the sigma-bond of the substituted carbon wouldn't have to rotate, and we would have the R-Group pointing inward like in the product.
Which orbital configuration is correct, if any? Is my reasoning about the sigma-bond rotations and the resulting products geometry correct? Do you have any tips when it comes to visualizing geometrical issues on more complicated molecules?
Answer: For chair-like transition states the angle at which you draw your chair can make a significant difference to how easy it is to visualise stereoelectronic interactions. Sadly, there is no real way to know which is the best way to draw it, except by either trial and error, or prior experience (or leveraging on other people's experience, i.e. copying it from somebody else!).
If you rotate the chair a bit so that the forming bond is in front (as opposed to at the side), then you can have something like this:
You don't need to overthink how the sigma bond rotates; this is a sigmatropic rearrangement, not an electrocyclic reaction where there are conrotatory/disrotatory modes. The geometry at the substituted carbon definitely changes (from tetrahedral to trigonal planar), so there is some kind of "flattening" as the reaction proceeds, but all you really need to pay attention to is the relative disposition of the R substituent and the rest of the carbon chain: here they are clearly trans to one another, which is an (E)-alkene. This is stereospecific, so if you start with the conformation shown on the left, you will end up with a trans alkene. There isn't a different "rotation mode" that leads to the cis alkene.
The main way of getting a cis, or (Z)-alkene, is not actually via a boat TS, but rather via a chair TS with the R substituent axial instead of equatorial. Note that this is a different conformation from the previous image, so again it's not about which direction the sigma bond rotates in, but rather the original location of the substituent (axial/equatorial). The orbital diagram is exactly the same, so there's no need to redraw it, but it should be easy enough to see that this leads to the reverse stereochemistry.
The boat TS is trickier to draw. This is my preferred representation:
but I am not entirely sure which TS is higher in energy (i.e. whether the substituent prefers to point up or point down), even after consulting several references. I don't think it's a matter of great significance; after all, the boat TS usually only plays a very minor role, as far as I know. | {
"domain": "chemistry.stackexchange",
"id": 12290,
"tags": "organic-chemistry, stereochemistry, molecular-orbital-theory, pericyclic"
} |
Why is black a better emitter of heat than white? | Question: I know that black absorbs light and converts it into heat, which makes it a good emitter of radiant heat, while white reflects it. Let's say I place 2 cups, 1 black and 1 white, of the same material, in a dark room: which cools faster? Why is black a better emitter? Is it because it converts light to heat?
Answer: The existence of equilibrium demands that emissivity is equal to absorptivity.
As long as a body is, say, emitting more energy than absorbing, its temperature will be decreasing: therefore, as thermal equilibrium implies that all parts of the system share the same temperature, for equilibrium to be reachable, the body's emissivity must equal its absorptivity. See Kirchhoff's law of thermal radiation.
Another point to consider is that being black or white are object characteristics with respect to visible frequencies of light, while at room temperature most of the emission/absorption happens at lower frequencies (infrared). For example, the infrared pictures of the aluminum box below make it evident that the emissivities of its white and black surfaces are very similar (as explained in its manual).
Source: Wikipedia
Answer:
Now, if the cups are "black" and "white" at infrared, then the black cup will cool faster, since it's emitting energy through radiation at a higher rate than the white one (and absorbing less than it emits, since the environment is cooler). | {
"domain": "physics.stackexchange",
"id": 42824,
"tags": "thermodynamics, visible-light, thermal-radiation"
} |
Leaving a stellar black hole's orbit | Question: Say Earth is orbiting a black hole 1900 km in diameter, 1.5E33 kg in mass, and with a gravitational pull of 110,858,725,761.77 m/s^2. It's around 40 AU away; Earth's orbital speed is 129,305.15664 m/s. (I did the calculations, it makes sense)
I'm writing an extra credit assignment for my 10th grade physics class. I need the earth to leave its current orbit and fly outward at a speed around its current orbital velocity (but not caused by a major collision because I want the Earth to stay intact).
I'm too far invested in my assignment and I don't want to change anything too major, we haven't learned much about black holes and special relativity so keep that in mind in any answers.
Thanks, please correct me if I made a foolish mistake.
Answer: As the orbit speeds are far below $c$, relativity doesn't matter. The fact that it is a black hole doesn't matter either-it could be a star of the same mass. The gravitational field of a spherical object is the same as a mass point at the center as long as you are outside the object. It takes a lot to change the momentum of the earth that much. The only thing I can think of to eject earth from orbit is to have another massive body come flying through to slingshot the earth away from the black hole. Does that work for you?
Added: Here is how I would think about it. You need an orbit simulator to get it really right. I believe there is a free version of Satellite Tool Kit and there must be others on the web. For simplicity, let everything happen in a plane. You need a big body, call it A, to transfer this much momentum to the earth. I would guess 30-100 earth masses, but you might wind up with more. Figure out what velocity change you want the earth to have. I am hoping we can think about the two events separately and let the simulator patch them together. The first is A falling in from infinity on a parabolic trajectory. When it passes earth's orbit, it will be travelling $\sqrt 2$ times faster than earth. We get to pick the direction, which will be set by the angular momentum of the system. The second is the scattering of A and earth, which we analyze in their CM frame, ignoring the black hole. I think it will be so fast that this will be OK. Decide what velocity change (as a vector) you want the earth to feel. Find a set of parameters (incoming velocity and miss distance) in the A-earth frame that results in that velocity change for earth. Now find an angle for A to cross earth's orbit that results in the proper relative velocity of A and earth in their CM and direct A to cross earth's orbit at the correct distance from earth. Try it and see if it works. You only have three parameters: the mass of A, the angular momentum of A around the black hole, and how far ahead of earth A will cross the orbit before you allow for the A-earth interaction.
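As a quick numerical sanity check of the figures in the question (a sketch; constants rounded, using G = 6.67e-11), the circular orbital speed and the √2-times-faster parabolic speed of the incoming body A come out as:

```python
import math

G = 6.67e-11        # gravitational constant, m^3 kg^-1 s^-2 (rounded)
M = 1.5e33          # black hole mass from the question, kg
AU = 1.496e11       # astronomical unit, m
r = 40 * AU         # orbital radius, m

v_circ = math.sqrt(G * M / r)     # circular orbital speed, ~1.293e5 m/s
v_para = math.sqrt(2) * v_circ    # speed of a body falling in from infinity

print(v_circ, v_para)
```

This reproduces the ~129,305 m/s orbital speed quoted in the question.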
Let me know how it works. No guarantees. | {
"domain": "physics.stackexchange",
"id": 14217,
"tags": "homework-and-exercises, black-holes, orbital-motion, escape-velocity"
} |
Sampling among constrained partitions | Question: I'm working on a clustering problem and want to sample partitions (possible clustering solutions) among a set of constrained ones.
Here is the problem: I have a set of objects $O=\{o_1,\ldots,o_n\}$ and would like to sample among reflexive, symmetric and transitive relations $R \subseteq O \times O$ such that the samples satisfy a set of must-link/cannot-link constraints. More specifically, for some pairs $o_i,o_j$ I require either that $(o_i,o_j) \in R$ or that $(o_i,o_j) \not\in R$. Alternatively, I could see it as the problem of sampling graphs that are disjoint unions of cliques, with pre-specified arcs or absences of arcs.
Rejection sampling is unlikely to work in practice, as the number of rejections would quickly become prohibitive as $n$ and the set of constraints grow large enough (something we can expect).
Do any of you know if this problem has been treated, or any easy way to solve it? Uniformity within the set of samples is desirable, but not absolutely necessary.
Thanks.
Answer: A reflexive, symmetric and transitive relation $R$ on $O$ is the same as a partition of $O$. If $(o_i,o_j) \in R$ then $o_i,o_j$ are in the same partition, and so can be merged. If $(o_i,o_j) \notin R$ then $o_i,o_j$ are not allowed to be in the same partition. After merging, we reach the following problem:
Given a graph, partition the set of vertices so that each partition is an independent set.
Equivalently (up to a permutation of colors):
Given a graph, find a proper coloring.
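One concrete (non-uniform) way to sample such a proper coloring is to greedily peel off independent sets of random size, sketched below in Python. This is only a sketch: must-link pairs are assumed to have been merged beforehand, names are illustrative, and the Poisson size draw is replaced by a simple exponential stand-in.

```python
import random

def sample_partition(n, cannot_link, mean_block=2.0, rng=random):
    """Greedily partition range(n) into independent sets of the
    cannot-link graph.  A sketch; the result is NOT uniformly sampled."""
    conflicts = {v: set() for v in range(n)}
    for u, w in cannot_link:
        conflicts[u].add(w)
        conflicts[w].add(u)
    remaining = set(range(n))
    blocks = []
    while remaining:
        # random target size for the next block (exponential stand-in
        # for a Poisson draw with constant expectation)
        target = 1 + int(rng.expovariate(1.0 / mean_block))
        block, forbidden = [], set()
        for v in sorted(remaining):
            if len(block) == target:
                break
            if v not in forbidden:   # v conflicts with nobody chosen so far
                block.append(v)
                forbidden |= conflicts[v]
        remaining -= set(block)
        blocks.append(block)
    return blocks
```

Every returned block is an independent set by construction, so all cannot-link constraints hold in every sample.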
Intuitively, it seems that a random partition will have many parts, so you can try to generate a random sample in the following way. Greedily sample a "small" independent set whose size you choose in advance as a Poisson random variable with constant expectation, remove it from the graph, and repeat. Of course, this won't be a uniform sample. You can gather statistics in some small cases to see what is a good choice of parameters in this procedure, and whether the results are good enough for your purposes. | {
"domain": "cs.stackexchange",
"id": 8660,
"tags": "sampling, partitions"
} |
What is the name for 'cheating' with data, i.e. letting machine learning models use out-of-sample data? | Question: What is the term for abusing data and machine learning methods to get better results than normally (with proper training and test set)? Some of my co-workers call it data-mining but I do not believe that is the correct word?
Answer: I found the word: data snooping | {
"domain": "datascience.stackexchange",
"id": 11314,
"tags": "machine-learning, data-mining, data"
} |
When I run roscore, it has the following error | Question:
Hi:
When I use the roscore command, I get the following error. Please help me.
... logging to /home/turtlebot/.ros/log/b271445a-f9f2-11e0-ad43-485d607d0ff5/roslaunch-turtlebot8-4104.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
Traceback (most recent call last):
File "/opt/ros/diamondback/stacks/ros_comm/tools/roslaunch/src/roslaunch/__init__.py", line 247, in main
p.start()
File "/opt/ros/diamondback/stacks/ros_comm/tools/roslaunch/src/roslaunch/parent.py", line 254, in start
self._start_infrastructure()
File "/opt/ros/diamondback/stacks/ros_comm/tools/roslaunch/src/roslaunch/parent.py", line 212, in _start_infrastructure
self._start_server()
File "/opt/ros/diamondback/stacks/ros_comm/tools/roslaunch/src/roslaunch/parent.py", line 163, in _start_server
self.server.start()
File "/opt/ros/diamondback/stacks/ros_comm/tools/roslaunch/src/roslaunch/server.py", line 371, in start
code, msg, val = xmlrpclib.ServerProxy(self.uri).get_pid()
File "/usr/lib/python2.6/xmlrpclib.py", line 1199, in __call__
return self.__send(self.__name, args)
File "/usr/lib/python2.6/xmlrpclib.py", line 1489, in __request
verbose=self.__verbose
File "/usr/lib/python2.6/xmlrpclib.py", line 1228, in request
h = self.make_connection(host)
File "/usr/lib/python2.6/xmlrpclib.py", line 1305, in make_connection
return httplib.HTTP(host)
File "/usr/lib/python2.6/httplib.py", line 1020, in __init__
self._setup(self._connection_class(host, port, strict))
File "/usr/lib/python2.6/httplib.py", line 657, in __init__
self._set_hostport(host, port)
File "/usr/lib/python2.6/httplib.py", line 682, in _set_hostport
raise InvalidURL("nonnumeric port: '%s'" % host[i+1:])
InvalidURL: nonnumeric port: ''
Originally posted by HelenJiang on ROS Answers with karma: 1 on 2011-10-18
Post score: 0
Answer:
You can get this error by setting your ROS_HOSTNAME or ROS_IP with a value including a colon. The ROS_HOSTNAME or ROS_IP should not have a colon in it such as ROS_IP=192.168.1.1
For more information on environment variables see http://www.ros.org/wiki/ROS/EnvironmentVariables
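The last frames of the traceback can be reproduced in isolation: httplib splits host:port at the last colon and then requires the port part to be numeric, so a stray trailing colon (e.g. from a malformed ROS_IP) leaves an empty, nonnumeric port string. A simplified sketch of that parsing logic:

```python
def split_host_port(host, default_port=80):
    """Simplified version of the host:port parsing that httplib performs."""
    i = host.rfind(':')
    if i >= 0:
        port = host[i + 1:]
        if not port.isdigit():
            # this is the check behind InvalidURL("nonnumeric port: ...")
            raise ValueError("nonnumeric port: '%s'" % port)
        return host[:i], int(port)
    return host, default_port

print(split_host_port("192.168.1.1"))   # no colon: fine
try:
    split_host_port("192.168.1.1:")     # stray colon, as in a bad ROS_IP
except ValueError as e:
    print(e)                            # nonnumeric port: ''
```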
Originally posted by tfoote with karma: 58457 on 2011-10-18
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 7009,
"tags": "roscore"
} |
Scheduling Algorithm for Video Player | Question: Background:
In a nutshell: I coded a video player application (on Linux). I used a proprietary library/API; with this, video and audio frames come in a well-defined format. After decoding (via the API), I call a scheduling() function for video and audio frames and the library does the rest. For argument's sake I oversimplified the example.
Now let's say we switched to an open source API (like Libav); with this I have everything similar to before, except the scheduling functions.
The Question:
Short Version:
I need help to devise a scheduling algorithm to use in my video player.
Long Version:
I have video frames and audio frames with timestamps (which can be converted to wall-clock time via a simple function). The naive approach I took was rendering frames in the same order, immediately as they arrive. This looks like it works, but over time a lip-sync problem occurs (as expected).
I don't quite get what the scheduling function I used before was actually doing, and since it comes as closed source (pre-compiled) I can't check it.
Notes:
The original function takes these parameters (if these give some idea/help):
theFrame (in): frame to display
displayTime (in): time at which to display the frame, in timeScale units
displayDuration (in): duration for which to display the frame, in timeScale units
timeScale (in): time scale for displayTime and displayDuration
I have all of this parameters/values for each frame.
Update:
What is a scheduling algorithm supposed to do?
What I know is, the scheduling algorithm buffers frames and outputs them (video or audio) at the scheduled time.
What are its outputs?
Audio and video use different functions, but they basically do the same thing: output video or audio. In the simplest terms, thanks to this function you see the video and hear the audio in the player application, correctly in sync.
What are the requirements?
1) Frame to output (video or audio). i.e. YUV Frame Buffer or a PCM packet for audio.
2) Timestamp (stamped on encoder side). i.e. 64bit signed integer, after simple conversion function gives a wall clock kinda time (HH:MM:SS.ms).
3) Timing information (stream time_base and frame duration) i.e. time_base for video : 1(num)/50(den=framerate), duration is tricky let's say it's 1/framerate secs (20 ms in this example)
What is the "overtime lips sync problem"?
Video and audio go out of sync. In other words, video and audio don't match anymore.
What I'm asking for is the generic logic of a scheduling algorithm; I don't want to bore you with details, so I kept it simple.
Let me ask different way:
Think I have a file (like a SRT subtitle file) it has text on every line and timestamp. And I wanna print it on linux terminal line by line according to it's timestamps.
My naive thinking will be:
1) Start a timer when program starts, load the file (SRT) on the memory.
2) Parse lines and timestamps.
3) Find the line (in the SRT) with the timestamp closest to our timer and print it.
4) Repeat step (3) until EOF
In my original question/problem, I can't load the whole media file; it can be a live stream or a very big file. I can only buffer a few seconds' worth of frames (e.g. 25 frames each for video and audio).
So what should I do ?
Answer: If frames are stored in order:
Read the next frame. Delay until the time when you are supposed to display it. Display it. Repeat.
If the timestamp is in the past, you can either display that frame now and immediately read the next frame, or you can drop that frame and display the next frame.
If frames are not stored in order:
Store the frames in a priority queue, with the time when it is supposed to be displayed as the key. Then, repeat the following: extract the next frame from the priority queue (a DeleteMin operation), wait until the time when you are supposed to display it, display it, then repeat.
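A minimal Python sketch of this priority-queue loop (names are illustrative; a real player would also feed the queue incrementally from its few-seconds buffer and run a loop like this per stream, sharing one start clock so video and audio stay in sync):

```python
import heapq
import time

def run_scheduler(frames, render, clock=time.monotonic, sleep=time.sleep):
    """Sketch of a presentation-time scheduler.

    `frames` is an iterable of (display_time, frame) pairs, with
    display_time in seconds relative to the start of playback.
    """
    heap = list(frames)
    heapq.heapify(heap)                  # DeleteMin by display time
    start = clock()
    while heap:
        display_time, frame = heapq.heappop(heap)
        delay = display_time - (clock() - start)
        if delay > 0:
            sleep(delay)                 # wait until the frame is due
        # a late frame (delay <= 0) is rendered immediately here;
        # dropping it instead is the other option mentioned above
        render(frame)
```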
Or, just sort the frames by the time when they are supposed to appear and apply the simple algorithm at the beginning of my answer. | {
"domain": "cs.stackexchange",
"id": 11687,
"tags": "scheduling, video"
} |
Triple Delta Potential in Quantum Mechanics | Question: I am facing a problem in Quantum Mechanics, and I would really appreciate your help in continuing to solve it.
The problem is the old usual problem of a particle subject to a potential, which this time has the form
$$V(x) = \alpha \delta(x^3+2ax^2-a^2x - 2a^3)$$
And we need to find the energies and the wave function normalization.
So first of all I used the well known identity for the Dirac Delta Distribution in order to write the potential as
$$V(x) = \alpha \left(\frac{1}{6a^2}\delta(x-a) + \frac{1}{2a^2}\delta(x+a) + \frac{1}{3a^2}\delta(x+2a)\right)$$
By the way, we can take $\alpha = 1$ if needed.
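The identity used here, $\delta(f(x)) = \sum_i \delta(x-x_i)/|f'(x_i)|$ over the simple roots $x_i$ of $f$, can be checked symbolically (a sketch using sympy, assuming $a > 0$):

```python
import sympy as sp

x = sp.symbols('x', real=True)
a = sp.symbols('a', positive=True)

f = x**3 + 2*a*x**2 - a**2*x - 2*a**3
roots = sp.solve(f, x)          # the cubic factors as (x - a)(x + a)(x + 2a)
weights = [1 / sp.Abs(f.diff(x).subs(x, r)) for r in roots]
print(list(zip(roots, weights)))   # weights 1/(6a^2), 1/(2a^2), 1/(3a^2)
```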
From here, a simple sketch of the potential highlights $4$ regions:
$$\begin{cases}
x < -a \\
-a < x < +a\\
a < x < 2a \\
x > 2a
\end{cases}
$$
But my first doubt is: shall I split the second region into two other regions like
$$\begin{cases}
-a < x < 0\\
0< x < +a
\end{cases}
$$
or not?
Also, I attempted to write down the general solution of the EVEN wave function case, and I got stuck also because of the previous regions question. I think I shall go for
$$\psi_e(x) = \begin{cases}
A e^{-kx} ~~~~~ x > 2a \\
A e^{kx} ~~~~~ x < -a \\
\ldots
\end{cases}
$$
Where the $\ldots$ represent my doubts about how to write the general solution in those cases...
I would really be grateful for any help or clarification about this!
Answer:
From here, a simple sketch of the potential highlights $4$ regions:
$$\begin{cases}
x < -a \\
-a < x < +a\\
a < x < 2a \\
x > 2a
\end{cases}
$$
This is wrong. A delta function $\delta(x + x_0)$ has a peak at $-x_0$, not at $x_0$. You've flipped the sign of $x$.
But my first doubt is: shall I split the second region into two other regions like
$$\begin{cases}
-a < x < 0\\
0< x < +a
\end{cases}
$$
or not?
No, you don't. I guess you're doing this out of habit, because you've seen that done in other problems, but think about it: what is special about $x = 0$? Why should something happen there at all? Why not also split at $x = a/2$ or $x = \pi^\pi a$?
You only need to split the solution at $x = 0$ if the potential actually changes there. In many problems, the potential is set up this way for convenience, but it doesn't happen here.
Also, I attempted to write down the general solution of the EVEN wave function case, and I got stuck also because of the previous regions question. I think I shall go for
$$\psi_e(x) = \begin{cases}
A e^{-kx} ~~~~~ x > 2a \\
A e^{kx} ~~~~~ x < -a \\
\ldots
\end{cases}
$$
Where the $\ldots$ represent my doubts about how to write the general solution in those cases...
Again, I guess you're trying an even wavefunction out of habit, but this is not right. If the potential is even or odd, it can be shown that the energy eigenstates can be chosen to be even or odd. But the potential you're dealing with here is neither. If you demand your solution is even, you will get no solution at all.
I would really be grateful for any help or clarification about this!
To the left of the first delta function, take a growing exponential. To the right of the last delta function, take a decaying exponential. In both of the two intermediate regions, take superpositions of growing and decaying exponentials. | {
"domain": "physics.stackexchange",
"id": 49878,
"tags": "quantum-mechanics, homework-and-exercises, wavefunction, schroedinger-equation, dirac-delta-distributions"
} |
What is the frequency transfer function when $x(t) = e^{-3t}u(t)$ and $y(t) = 2u(t)[e^{-t}-e^{-4t}]$ | Question: What I did was take the Fourier transform of both $x(t)$ and $y(t)$ and then divided $Y(j\omega)/X(j\omega)$. So
$$Y(j\omega) = \frac{2}{1+j\omega}-\frac{2}{4+j\omega}\quad\text{and}\quad X(j\omega) = \frac{1}{3+j\omega}$$
However, the answer is of the form: $\displaystyle \frac{A_1}{1+j\omega}+\frac{A_2}{4+j\omega}$ where the $A$'s are constants. I am not getting the solution in this form. I got $Y(j\omega)$ in this form, but not $H(j\omega)$.
Am I doing something wrong?
Answer: Your answer is correct; it's just in a different form than the given answer. In order to get your answer in the given form, do the following:
rewrite $Y(j\omega)$ by combining the two terms: $Y(j\omega)=\displaystyle\frac{N(j\omega)}{(j\omega+1)(j\omega+4)}$, where $N(j\omega)$ is a (very simple) polynomial. I'm sure you know how to obtain $N(j\omega)$.
write the frequency response as $\displaystyle H(j\omega)=\frac{N(j\omega)(j\omega+3)}{(j\omega+1)(j\omega+4)}$ and use partial fraction expansion to obtain the answer in the given form.
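The algebra can be checked symbolically (a sketch using sympy, writing $s$ for $j\omega$):

```python
import sympy as sp

s = sp.symbols('s')
Y = 2/(1 + s) - 2/(4 + s)     # combines to 6/((s + 1)(s + 4))
X = 1/(3 + s)
H = sp.simplify(Y / X)        # 6(s + 3)/((s + 1)(s + 4))
print(sp.apart(H, s))         # partial fractions: 4/(s + 1) + 2/(s + 4)
```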
Alternatively, you could also determine $h(t)$ via inverse Fourier transform of $H(j\omega)$ as you obtained it (and noting that multiplication by $j\omega$ corresponds to differentiation in the time domain). You will see that $h(t)$ is in the form $A_1e^{-t}u(t)+A_2e^{-4t}u(t)$, which directly leads to $H(j\omega)$ in the given form. | {
"domain": "dsp.stackexchange",
"id": 4863,
"tags": "fourier-transform"
} |
Ln2 Dewar Filling | Question: For cooling of a measurement device (HPGe) I need around 30 liters of LN2 (liquid nitrogen). I'm a hobbyist, but familiar with the basic rules of LN2 (eye shield, gloves, ventilation, pressure danger), and I also have a professional O2 sensor at hand.
My current problem is that I own a special 30 L dewar for the detector. In the region where I live, suppliers of LN2 will not fill my dewar because, firstly, they want to sell (rent) their own storage units, and because of liability issues, since my dewar is old ;)
Even if they could fill it, the dewar including LN2 would weigh over 35 kg, which is kinda heavy to lift to the third floor by staircase.
My question now is: can I fill the dewar from smaller dewars? The suppliers rent out 10 L dewars. Can I "pour" the LN2 via a funnel into my dewar? Or is there any other solution that does not require expensive devices?
Or is it possible to fill via a hose and gravity (by placing the source dewar higher than the target dewar and letting the LN2 flow by gravity, so only one pumping action is needed to start the process)?
Answer: Mechanically yes; considering the weight, pouring from a smaller Dewar is easier than from a larger one. It takes some practice to aim, of course. Using an additional plastic funnel consumes some of the LN2, because the funnel obviously has to be cooled before LN2 passes the bottom of the chute.
Not knowing your experimental setup: would a means of cooling the detector other than LN2 be acceptable for you? Some ICCD cameras, for example, cool their sensor with a Peltier element, which in turn is cooled with water from a liquid thermostat. This reliably allows detector temperatures around $\pu{-20 ^\circ{}C}$. Or, as a less expensive approach, why not back your sensor with a trough made of metal and fill the trough with a mixture of dry ice and isopropanol?
Re pumping LN2: The larger mobile Dewars I've seen deliver LN2 by their own pressure like a spray can:
In the simplest form, besides a pressure indicator (I) and a safety valve (S), they have a metal tubing (H) reaching to the inner bottom of the vessel. H is removed to fill the tank, but in the idle state, H is shut and there is a small, occasional hissing pressure release from the safety valve S because some of the LN2 still evaporates. To obtain the LN2, you place your small Dewar below the hose H and open the valve of H; the pressure in the tank then pushes the liquid through the hose toward the Dewar. There may be additional valves to optionally increase the pressure above the LN2, and additional indicators to report the remaining amount of LN2, etc.; all omitted in the drawing for clarity:
(modified illustration, based on one here). | {
"domain": "chemistry.stackexchange",
"id": 15390,
"tags": "liquids"
} |
Addition of a noble gas - Le Chatlier | Question: So, regarding a generic reaction (for the sake of argument, let us assume that it is completely gas phase), we have Le Chatelier's Principle. Let us use this example:
$$\ce{2SO2(g) + O2(g) <=> 2SO3(g)} \quad\quad \Delta H < 0$$
Now, we have the questions: oxygen gas is added, heat is added, a noble gas is added, etc.
For the addition of oxygen gas, we obviously shift right, and for heat, we shift left.
For a noble gas, I would expect no shift to happen. Indeed, this is what the SAT's seem to say, and I completely agree. However, my teacher seems to have the conception that when one adds a noble gas to a reaction such as the above, the space available to the molecules decreases, and the reaction does shift/speed up.
I have a test tomorrow, and if she gives me a question like this, I am planning to write that there is no shift or rate change. Now, if she marks this wrong, I need a formal argument to support my claim.
Am I correct in saying that with the addition of a noble gas, there is no change in either equilibrium shift or rate change? Why?
Answer: See, in an equilibrium reaction in the gas phase at constant temperature, noble gases can be added in two ways:
At constant volume in the system
At constant pressure in the system.
Effect of adding noble gases at constant volume
At constant temperature and volume, if a noble gas is added, the total number of molecules, i.e., the amount of substance in the system, increases (or you can say the 'number of moles' increases, though that phrasing isn't strictly correct; have you ever heard of a 'number of kilograms'?). As a result, the total pressure of the system also increases, but there is no change in the partial pressures of the reactants and products (because the total pressure increases while the mole fractions decrease). So, there is no effect on the equilibrium from adding noble gases to a system at constant temperature and volume.
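The constant-volume case is plain ideal-gas arithmetic, $p_i = n_iRT/V$: each partial pressure depends only on that species' own amount, so an added inert gas leaves every $p_i$ untouched while the total pressure rises. A sketch with illustrative numbers:

```python
R, T, V = 8.314, 298.0, 1.0              # J/(mol K), K, m^3 (illustrative)
n = {"SO2": 2.0, "O2": 1.0, "SO3": 2.0}  # mol, illustrative amounts

p_before = {sp: ni * R * T / V for sp, ni in n.items()}
total_before = sum(p_before.values())

n["Ar"] = 3.0                            # add an inert gas at constant V and T
p_after = {sp: ni * R * T / V for sp, ni in n.items()}

# each reactant/product partial pressure is unchanged...
assert p_before["SO2"] == p_after["SO2"]
# ...but the total pressure has gone up
assert sum(p_after.values()) > total_before
```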
Effect of adding noble gases at constant pressure
At constant temperature and pressure, addition of a noble gas to the system leads to an increase in volume (because the amount of substance increases). As a result, the partial pressures of the reactants and products decrease (total pressure is constant but the mole fractions decrease), and hence the sum of the partial pressures of the reactants and products also decreases. Here, according to Le Chatelier's principle, the equilibrium shifts in the direction in which the number of molecules, and hence the volume, increases. | {
"domain": "chemistry.stackexchange",
"id": 8361,
"tags": "equilibrium"
} |
Messed up units! | Question: In the article, Environment-assisted quantum transport, $\gamma$ is a constant equal to $2\pi (kT/\hbar)\cdot E_{R}/(\hbar\omega_{c})$, where $T$ is the temperature and $k$ is the Boltzmann constant. Supposedly, this constant should be in units of $\rm cm^{-1}$, but when I do calculations with different values of temperature (in K), my units end up in $\rm cm/s^2$. How can the values end up in $\rm cm^{-1}$?
Answer:
Supposedly, this constant should be in units of cm^-1 but when I do calculations with different values of temperature (in K), my units end up in cm/s^2.
You did something wrong. $kT$ has units of energy. Assuming $E_R$ also has units of energy and $\omega_c$ has units of inverse time means $2\pi kT/\hbar\, E_R/(\hbar\omega_c)$ has units of inverse time -- in other words, frequency.
How can the values end up in cm-1?
Wavenumber (units of inverse length) is oftentimes used in lieu of frequency (units of inverse time) for convenience.
Edit in response to comments
The reason I said "for convenience" is multi-fold. One reason is that it is very convenient to drop those physical constants. For example, the article could have said
For the FMO complex, the reorganization energy is found to be $E_R = 35\,\text{cm}^{-1}\,\hbar c$ [29] and the cutoff $\omega_c = 150\,\text{cm}^{-1}\,c$, inferred from ...
There's no reason for being that pedantically correct. Dropping those conversion factors that involve $\hbar$, $c$, and $k$ is "convenient". The numerical value of each of those constants is one in natural units. Note that this system of natural units ($\hbar = c = k = 1$) leaves one free dimension-full quantity to be resolved. Whether one uses electron-volts or centimeters, the conversion factors between mass, length, time, and temperature, are obvious. Dropping those conversion factors is "convenient" in the sense that the paper is easier to write, and it's easier to read if you know what's going on. (Physics journal papers aren't written for lay audiences. They're written for other physicists to read.)
Another reason for my use of "convenient" is that the numbers that result are humanly convenient, at least in this case. An energy of 35 cm-1 is a much more "convenient" value than is its SI equivalent, 6.95×10-22 joules. The same goes with regard to frequency. A value of 105 cm-1 is a much more convenient value than is its SI equivalent, 3.15×1012 hertz.
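The conversions quoted in this paragraph are easy to reproduce (a sketch with rounded constants): an energy of $\tilde\nu$ in cm$^{-1}$ means $E = hc\tilde\nu$, and a frequency quoted in cm$^{-1}$ means $\nu = c\tilde\nu$.

```python
h = 6.626e-34   # Planck constant, J s
c = 2.998e10    # speed of light in cm/s (cm, to match cm^-1)

E_R = 35 * h * c     # 35 cm^-1 read as an energy, in joules
nu = 105 * c         # 105 cm^-1 read as an ordinary frequency, in hertz

print(E_R)   # ~6.95e-22 J
print(nu)    # ~3.15e12 Hz
```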
As a sanity check that this is exactly what the paper is doing, consider the following in the cited paper:
For the above spectral density the rate turns out to be $γ_φ(T) = 2\pi \frac {kT}{\hbar}\frac{E_R}{\hbar \omega_c}$. This gives a rough estimate for the dephasing rate at room temperature of around 300 cm−1 ...
Room temperature in this system of units is about 205 cm−1 (or being pedantically correct, $205\,\text{cm}^{-1}\,\hbar c/k$). With this, $2\pi (205\,\text{cm}^{-1}) \frac {35}{150}$ is about 300 cm-1. | {
"domain": "physics.stackexchange",
"id": 22576,
"tags": "quantum-mechanics, homework-and-exercises, units, si-units"
} |
How to prevent the second publisher to publish on the same topic? | Question:
Hello everyone,
I am working with ROS2 in my project and I know ROS/ROS2 is capable of handling multiple publishers and subscribers on the same topic. Due to some limitations in my project, I would like to prevent a second publisher from publishing on the same topic. It needs to be, in a way, one publisher per topic.
Basically, is it something doable in the ROS environment? If so, is there any option or notification infrastructure that notifies the first publisher or any other node?
Thanks in advance.
Originally posted by ali ekber celik on ROS Answers with karma: 114 on 2021-02-11
Post score: 1
Original comments
Comment by mgruhler on 2021-02-12:
an observation: you tagged this question as ROS1 and ROS2, spanning multiple distros (noetic&melodic vs. foxy&eloquent). Anything specific this refers to?
Comment by skpro19 on 2021-02-12:
twist_mux package does something similar for geometry_msgs::Twist messages.
Answer:
No, the intent of the Publish-Subscribe pattern is to allow exactly that. See also the ROS1 wiki and the ROS2 wiki for background information.
Note that you could work around that by using the [getNumPublishers function](http://docs.ros.org/en/noetic/api/roscpp/html/classros_1_1Subscriber.html#ac98e5ff9790e8e5cc11453948ea128ee) of subscribers (in ROS1 at least) to check if this is actually fine. Even though that would then not allow you to do a rostopic pub for testing purposes...
You need to make sure manually that this requirement is met.
Originally posted by mgruhler with karma: 12390 on 2021-02-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by ali ekber celik on 2021-02-12:
Thanks for your answer. I will take look at rclcpp API more and probably the method you told will help me somehow.
Comment by ali ekber celik on 2021-02-15:
By the way, the corresponding method of getNumPublishers() in ROS2 is count_publishers()
Comment by variable on 2022-01-06:
does count_publishers() get the total publishers of a topic, or the number of publishers for a node? | {
"domain": "robotics.stackexchange",
"id": 36076,
"tags": "ros, ros2, c++"
} |
Does effective theory have the same meaning in particle and condensed matter physics | Question: I have a naive question about the meaning of effective theory in particle physics and condensed matter physics.
In particle physics, from what I know, the effective theory comes from the Wilsonian renormalization. We start from a theory with a fundamental cutoff $\Lambda$, e.g. Planck scale, and consider a smaller cutoff $\Lambda'$, e.g. LHC scale, $\Lambda'<\Lambda$. The Lagrangian then reshuffles into an infinite series including all possible powers of the fields. The theory with the $\Lambda'$ cutoff is called the effective theory (possibly keeping only a few leading terms).
In condensed matter theory, I heard the effective theory means neglect some details, like the BCS theory.
Roughly speaking, "effective" in both fields has a similar meaning: we ignore something ($>\Lambda'$ in particle physics, or uninteresting details in condensed matter).
My question is: strictly speaking, do the two uses of "effective" in particle and condensed matter physics have the same meaning? Can we start from QED, lower the cutoff, and arrive at BCS theory? (Perhaps such a crazy attempt will never work out...) Or is there any deeper connection than these heuristic similarities?
Answer: Yes, the term "effective action" has the same meaning in particle physics and condensed matter physics. After all, the discovery of the concepts of the Renormalization Group by Ken Wilson and others was being done simultaneously in both disciplines and the exchange of ideas was fruitful in both directions.
On the other hand, what the effective theories actually are depends on the context i.e. on the disciplines, too. In particle physics, people study effective actions for excitations of the vacuum – most of the environment is the vacuum. That's why any effective action is still a relativistic theory. On the other hand, condensed matter physics studies dynamics in a different environment or medium – various solids – so the Lorentz symmetry is spontaneously broken and one obtains non-relativistic effective field theories.
But the Cooper pairs in the BCS theory are exactly analogous to hadrons in particle physics – two examples of degrees of freedom in an effective theory that is valid beneath an energy scale but ignores some detailed phenomena that are only seen with high energies (per excitation).
In particle physics, the quantity deciding about the scale is roughly speaking $|p_\mu|$ – in the $c=1$ units, it doesn't matter which component of the energy-momentum we consider. In condensed matter physics, one must distinguish them because they don't obey $E\sim pc$ in general due to the breaking of relativity by the environment. The usual quantity deciding about the scale is the energy, not the momentum.
One must also be ready for different conventions. In condensed matter physics, they use the opposite sign of the beta-function for running couplings, for example, as they imagine the flow to go in the opposite direction. This is a purely sociological difference, much like the difference between mostly-plus and mostly-minus metric tensors. | {
"domain": "physics.stackexchange",
"id": 9486,
"tags": "quantum-field-theory, renormalization, terminology, approximations, 1pi-effective-action"
} |
Convert Ternary Expression to a Binary Tree | Question: This is programming question I came across (geeksforgeeks) :
Given a string that contains ternary expressions. The expressions may be nested; the task is to convert the given ternary expression to a binary tree. Any feedback is appreciated.
All the required classes are added as members.
I am using a simple recursive approach to build the tree.
Each time a '?' expression is encountered, the next element is added as the left node. Similarly, each time a ':' expression is encountered, the next element is added as the right node.
Input : string expression = a?b:c
Output : a
/ \
b c
Input : expression = a?b?c:d:e
Output : a
/ \
b e
/ \
c d
The Code:
public class TernaryExpressionToBinaryTree {
public class Node {
private Character data;
private Node left;
private Node right;
public Node(Character data) {
this.data = data;
}
}
public class BST {
public Node getRoot() {
return root;
}
public void setRoot(Node root) {
this.root = root;
}
public Node root;
public void addNode(Character data) {
root = addNode(root, data);
}
private Node addNode(Node node, Character data) {
if (node == null) {
return new Node(data);
}
if (data < node.data) {
node.left = addNode(node.left, data);
} else {
node.right = addNode(node.right, data);
}
return node;
}
}
/*
Preorder traversal
*/
public void displayTree(Node node) {
if (node != null) {
System.out.print(node.data + " | ");
displayTree(node.left);
displayTree(node.right);
}
}
public Node buildTreeFromTernaryExpression(Node node, String expression, int index) {
// check corner cases
if (expression == null || expression.trim().isEmpty() || index < 0 || index > expression.length()) {
return null;
}
// if its is a valid character
if (node == null || index == 0 || expression.charAt(index) != '?' || expression.charAt(index) != ':') {
node = new Node(expression.charAt(index));
}
//if it is a valid expression ? or :
if ((index + 1) < expression.length()-1 && expression.charAt(index + 1) == '?') {
node.left = buildTreeFromTernaryExpression(node.left, expression, index + 2);
}
if ((index + 1) < expression.length()-1 && expression.charAt(index + 1) == ':') {
node.right = buildTreeFromTernaryExpression(node.right, expression, index + 2);
}
return node;
}
public static void main(String args[]) {
TernaryExpressionToBinaryTree ternaryExpressionToBinaryTree = new TernaryExpressionToBinaryTree();
Node root = ternaryExpressionToBinaryTree.buildTreeFromTernaryExpression(null, "a?b?c:d:e", 0);
ternaryExpressionToBinaryTree.displayTree(root);
}
}
Answer:
You can delete the BST class. You don't use it anywhere.
Error handling looks weird to me. I suggest throwing an exception if the input is invalid. You also don't check all possible cases. For instance, your code prints a tree ? | a | b | for the input ??a:b, which is clearly invalid. I'd recommend either adding proper error handling or just dropping it altogether (the problem statement says that the string is a ternary expression, anyway).
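To illustrate what proper validation could look like, here is a compact sketch of the same recursive construction with explicit error checks (in Python for brevity; the node layout, function names, and error messages are my own choices, not a reference implementation):

```python
def build_ternary_tree(expr):
    """Build a binary tree from a ternary expression such as 'a?b?c:d:e'.

    Raises ValueError on malformed input instead of silently
    returning a wrong tree.
    """
    if not expr or expr[0] in "?:":
        raise ValueError("invalid ternary expression: %r" % expr)

    def parse(i):
        # expr[i] is expected to be an operand character here
        if expr[i] in "?:":
            raise ValueError("operand expected at index %d" % i)
        node = {"data": expr[i], "left": None, "right": None}
        if i + 1 < len(expr) and expr[i + 1] == "?":
            node["left"], i = parse(i + 2)       # true branch
            if i + 1 >= len(expr) or expr[i + 1] != ":":
                raise ValueError("':' expected after index %d" % i)
            node["right"], i = parse(i + 2)      # false branch
        return node, i

    root, last = parse(0)
    if last != len(expr) - 1:
        raise ValueError("trailing characters after index %d" % last)
    return root


def preorder(node):
    """Preorder traversal, matching the displayTree output order."""
    if node is None:
        return []
    return [node["data"]] + preorder(node["left"]) + preorder(node["right"])
```

A Java port of this structure would replace each validation branch with a thrown IllegalArgumentException, as suggested above.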
Comments should not repeat the code. If you have something like // if it is a valid character, it's a good indicator that the following check should be moved to a separate method with a proper name (something like private boolean isValidCharacter(Node node, String expression, int index). | {
"domain": "codereview.stackexchange",
"id": 26105,
"tags": "java, recursion, tree"
} |
Helper methods for working with the WPF Dispatcher | Question: I've written the following small helper class for use in my WPF applications. It comes up from time to time that I need to display message boxes, or otherwise interact with the UI from a non UI thread.
public static class ThreadContext
{
public static void InvokeOnUiThread(Action action)
{
if (Application.Current.Dispatcher.CheckAccess())
{
action();
}
else
{
Application.Current.Dispatcher.Invoke(action);
}
}
public static void BeginInvokeOnUiThread(Action action)
{
if (Application.Current.Dispatcher.CheckAccess())
{
action();
}
else
{
Application.Current.Dispatcher.BeginInvoke(action);
}
}
}
Here is a small example of how this might be used.
public static void MyFunction()
{
ThreadContext.InvokeOnUiThread(
delegate()
{
MessageBox.Show("Hello world!");
});
}
This works, but declaring the delegate the way I do seems overly verbose.
Is there anyway to make the syntax less verbose while still allowing for arbitrary functions to be passed? Is there anything else that you would suggest to improve this solution -- including an entirely different solution?
Answer: Lambda expressions to the rescue! I saw somebody griping about this syntax recently on a blog and found it unfortunate they were disseminating this. The cleaner, new and happy way to define a delegate is quite simply:
The method signature without a name or types, so for example:
Sum(int a, int b)
becomes:
(a, b)
Follow this with our trusty lambda operator => and then our method body! This may be in a statement block between our trusty {} or just in a single line the same way you can put a single line after an if or while etc.
So in conclusion there are 3 ways to create a method:
Standard (must be declared as class member level):
public int Sum(int a, int b)
{
return a+b;
}
Delegate (can be declared and instantiated as a method local):
delegate(int a, int b)
{
return a+b; // god help me if this syntax is correct, I haven't created a delegate in years
}
Lambda expression (can be declared and instantiated as a method local):
(a, b) => { return a+b; };
Back to your example...
ThreadContext.InvokeOnUiThread(
delegate()
{
MessageBox.Show("Hello world!");
});
In lambda expression format this becomes:
ThreadContext.InvokeOnUiThread(() => { MessageBox.Show("Hello world!"); });
Or if you prefer:
Action actionToInvokeOnUiThread = () => { MessageBox.Show("Hello world!"); };
ThreadContext.InvokeOnUiThread(actionToInvokeOnUiThread); | {
"domain": "codereview.stackexchange",
"id": 24222,
"tags": "c#, multithreading, wpf"
} |
Are there ways to genetically increase mutation rate of E.coli? | Question: Are there ways to increase mutation rate of E.coli using genetic modifications? I know possible ways it can be done without genetic modifications by exposing the cells to stressful conditions like anaerobic and radiation but what are some genetic ways?
Thanks!
Answer: Mutations are caused by the insertion of the wrong bases during replication, or by chemical changes to bases, either by chemical agents or by radiation.
The rate of mutations could be increased by exchanging the E.coli polymerases with other polymerases displaying higher error rates (or with less proofreading capabilities). A lot of errors are quickly repaired via different mechanisms, knocking out all or a few of those mechanisms might result in an E.coli strain with an increased mutation rate (if they survive).
https://www.ncbi.nlm.nih.gov/books/NBK9900/
A more elaborate way would be to engineer E.coli to contain a pathway that synthesizes a mutagenic compound. The easiest way to do this would be a single enzyme that generates reactive oxygen species, as those also cause DNA damage. You could also delete some of the catalases and other enzymes that remove reactive oxygen species instead.
https://www.ncbi.nlm.nih.gov/pubmed/16824196 | {
"domain": "biology.stackexchange",
"id": 7051,
"tags": "genetics, cell-biology, ecoli"
} |
Convolve a SED with a filter. Is convoluting the mathematical operation? | Question: I know that in order to get the Flux of a star (or something else) in a particular filter from its SED (luminosity per unit wavelength), I need to convolve the spectrum (SED) with the filter response. Most of the formulas I see to do this are
$$F_{b}=\dfrac{\int f(\lambda) T_{b}(\lambda)d\lambda}{\int T_{b}(\lambda)d\lambda}\,,$$
where $f(\lambda)$ is the SED, and $T_b$ is the filter response in band $b$.
This formula seems very different from a mathematical convolution that I would write as
$$f*g(\lambda)=\int_{-\infty}^{\infty} f(x) g(\lambda-x)dx\,.$$
Are these two convolutions the same thing? Or is the "astronomy convolution" a different thing (e.g. SED Fitter python package)?
Answer: To get the flux of an SED through a particular filter, you actually multiply the SED by the filter's response. Talking about convolution in this context is a bit of a misnomer.
Basically, for each wavelength, you look at what fraction of the light will go through the filter, and you sum up the values you get at those wavelengths.
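For a discretely sampled SED and filter curve, that weighted average is just a few lines of numpy. The sketch below is illustrative only (the function name and grid are mine, not from the SED Fitter package or any other library):

```python
import numpy as np

def band_flux(wavelength, sed, response):
    """Filter-weighted mean flux: integral of f*T over integral of T."""
    dlam = np.gradient(wavelength)   # bin widths; also handles uneven grids
    return np.sum(sed * response * dlam) / np.sum(response * dlam)

# Example: a flat SED observed through a boxcar filter simply returns
# the SED level itself, as expected for a normalized weighted average.
wavelength = np.linspace(4000.0, 7000.0, 301)                    # angstroms
sed = np.full_like(wavelength, 2.0)                              # flat f(lambda)
response = ((wavelength > 5000) & (wavelength < 6000)).astype(float)
flux = band_flux(wavelength, sed, response)                      # -> 2.0
```

Note there is no shifted kernel $g(\lambda - x)$ anywhere: each wavelength's flux is weighted by the response at that same wavelength.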
The division by $\int T_{b}(\lambda)d\lambda$ is just a normalization term. | {
"domain": "astronomy.stackexchange",
"id": 4305,
"tags": "spectroscopy, luminosity, photometry, mathematics"
} |
What is an awkward constant and is there a relation to ad-hoc hypotheses? | Question: [I'm not entirely sure whether this is the right board, since it is not a technical but a soft question].
I'm reading the textbook "Spin Dynamics - Basics of Nuclear Magnetic Resonance", 2nd ed., (2008), by Malcolm H. Levitt. On page 24, induced magnetism is introduced as follows:
[...] The equilibrium value of the induced magnetic moment is often proportional to the applied magnetic field B, and has the same direction. In SI units, this relationship is written as follows:
$$\mu_{induced}=\mu_0^{-1}V\chi \vec{B}$$
where $\mu_0 = 4\pi \cdot 10^{−7} \textrm{H}~\textrm{m}^{−1}$
is an awkward constant, called the magnetic constant or vacuum permeability.
Now, I wonder: what is an awkward constant? I assume it is not a technical term, but rather that the author finds the introduction of this constant far-fetched, counter-intuitive, or vague? I've heard people calling Einstein's cosmological constant $\Lambda$ ad hoc. Is this related, and if yes, how; and if no, what makes the ad-hoc-ness different?
Answer: Here "awkward" simply has its dictionary meaning. From Google,
awk·ward (adj.)
causing difficulty; hard to do or deal with.
deliberately unreasonable or uncooperative.
causing or feeling embarrassment or inconvenience.
not smooth or graceful; ungainly.
uncomfortable or abnormal.
The writer evidently feels that the introduction of the constant makes the following development, on the whole, more ungainly and inconvenient. This is subjective: where some see an ungainly constant, others might see a useful handle for keeping dimensional analysis checks on the maths. That's all the story here, really.
This also has no relation to the description of the cosmological constant as 'ad hoc', where the latter term also has its dictionary meaning. (Specifically, the cosmological constant feels like it's a bit of a hack contrived to make the field equations match the observed reality.)
Ever so rarely, on occasion, words in physics texts simply mean what they mean everywhere else. | {
"domain": "physics.stackexchange",
"id": 25605,
"tags": "soft-question, magnetic-fields, terminology"
} |
How exactly do EM waves work? | Question:
It’s a fairly simple question. Basically, my physics teacher said that EM waves work like in the image I posted here. An electric field creates a magnetic field in the same place, then this magnetic field creates an electric field in the place forward. However, intuitively, I think this is wrong, but I’m not 100% sure.
I know that electric fields can create magnetic fields and vice versa, but I always thought they would happen at the same place, same time. Intuitively, I think EM waves are like this animation here: https://youtu.be/oZZ4wKYtVl8
Basically, continuous disturbances that just happen to occur both in magnetic and electric fields at the same time.
Which interpretation is correct?
Answer: You cannot find an electrical wave without a magnetic wave. There is a reason why they are called electromagnetic waves: they are just one wave, and their effect is observed in different dimensions (one operates in X while the other operates in Y).
I think your teacher was trying to explain to you the concept of magnetic flux. In order to see a current, you need to have a change in magnetic field. You can read up on Maxwell's equations and Faraday's law.
Your diagram is correct, and the video, which is just the dynamic version of the image, is also correct. The correct word to use would be "change" rather than "disturbance". It happens at the same time: if one field exists then the other field also exists, and they cause each other to exist. You can imagine that there is no time gap, if that is what's bothering you.
"domain": "physics.stackexchange",
"id": 44362,
"tags": "electromagnetism, visible-light, waves, electromagnetic-radiation"
} |
Another cash machine with special bill values in Java | Question: Task:
An ATM dispenses banknotes with the following values/denominations: {5, 30, 35, 40, 150, 200}. The user should be able to repeatedly enter an amount that is divisible by 5 without remainder. The machine should dispense the combination of banknotes that matches the amount using the smallest possible number of notes. The use of recursive calls is not allowed.
Questions:
Can you take a look at my code? Are the variable names unique? Is the approach ok, or is there anything to improve? What is the best way to calculate the depth constant?
/**
* This utility method calculates the minimal amounts of bills (5, 30, 35, 40, 150, 200) needed for a given amount.
* The amount argument should be between 0 and 10000 and be divisible by 5.
*
* @param amount the amount (between 0 and 10000 and divisible by 5) to be patched
* @return an array with the minimal amounts of bills needed
*/
public static int[] patch(final int amount) {
if (amount < 0 || amount > 10000 || amount % 5 != 0) {
throw new IllegalArgumentException("Invalid amount");
}
final int[] bills = {5, 30, 35, 40, 150, 200};
final int depth = Math.min(amount / bills[0] + 1, 50);
// (How can you calculate the best depth here?)
int bestCount = Integer.MAX_VALUE;
int[] bestResult = new int[bills.length];
int[] indexes = new int[bills.length];
a:
for (int i = 0; ; ) {
// calculates the current count of bills and sum
int count = 0;
int sum = 0;
for (int j = 0; j < indexes.length; j++) {
count += indexes[j];
sum += indexes[j] * bills[j];
}
// checks if the current count is the best, and terminates this iteration if no further solution can be found
if (amount - sum <= 0) {
if (amount - sum == 0 && count <= bestCount) {
bestCount = count;
System.arraycopy(indexes, 0, bestResult, 0, indexes.length);
}
indexes[i] = depth - 1;
}
// increments the indexes (or leaves the entire loop)
indexes[i]++;
if (indexes[i] == depth) {
do {
indexes[i] = 0;
i++;
// overflow protection , time to leave
if (i == indexes.length) {
break a;
}
} while (indexes[i] == depth);
indexes[i]++;
i = 0;
}
}
return bestResult;
}
Answer: cite your references
This is a well known problem.
But you do not appear to be using a well known solution technique.
That is, this code is "tricky".
(It seems to be using Dynamic Programming, but I'm at a loss
to describe the meaning of what it stores.)
Tell us what approach you're using,
perhaps by citing a blog, textbook, or wikipedia article.
Then the Gentle Reader can follow along,
tracing the corresponding elements between article and code,
to verify that each element is correctly implemented.
As written, you have needlessly made that a more difficult process.
repetition
Thank you for the introductory javadoc.
* This ....
* The amount argument should be between 0 and 10000 and be divisible by 5.
*
* @param amount the amount (between 0 and 10000 and divisible by 5) to be patched
DRY
up that second sentence; simply delete it.
It's completely redundant with the very helpful @param amount description.
Why do we keep our code DRY?
Suppose the max increased to 20_000 a few months down the road.
Now we have to not only keep revised code + @param doc in sync,
but we have to chase down other instances as well.
We try to minimize the number of occurrences that might drift out of sync.
That makes all of our documentation more believable,
even documentation for shopworn code.
design of Public API
final int[] bills = {5, 30, 35, 40, 150, 200};
It's kind of weird that the function signature doesn't accept such an array,
given that the return value is very much in terms of these denominations.
Alternatively we should give some distinctive name to this currency system,
and use a related name for the function, showing it is a special purpose
function that is tightly linked to that system.
On which topic: This function is named patch ?!? What?
I can't imagine how that's relevant.
I would have expected a name like dispenseBills or perhaps makeChange.
diagnostic message
throw new IllegalArgumentException("Invalid amount");
This is some lovely error checking, you can certainly retain it as-is.
But imagine that application code triggered this error,
and some poor maintenance engineer reading the web logs
needs to track down the details, to figure out which component
did the Wrong Thing and should be repaired.
First question that comes to mind will be "what was the amount?"
Did it run afoul of the negative check? Too big? Lacked a factor of five?
We can make that debugging process a little easier by
throwing an error with formatted string which includes the amount in question.
dynamic allocation
final int depth = Math.min(amount / bills[0] + 1, 50);
This expression does not seem well motivated.
The worst case would be min(2_001, 50),
and those two numbers don't seem to be of very different magnitude to me;
it's just a "small" difference in memory allocation, a factor of forty.
We made a
promise
up in the javadoc, we will deliver "minimal amounts of bills"
if caller did the right thing.
And we verified the right thing when we did the precondition check.
Turning a total function into partial one, to save a little RAM,
doesn't seem wise and in any event it certainly requires documentation
of this new potential failure mode.
Better to design such details out of the system.
You don't want to allocate zeros which typically will never be used?
Fine. Rather than using a "hard to size" static datastructure such as array,
prefer a dynamic datastructure such as HashMap.
With Integer keys, it offers you the same interface.
// (How can you calculate the best depth here?)
If you choose to retain depth then you'll want to include a
GCD calculation.
Imagine we drop the "divisible by 5" requirement,
report "impossible" by returning all zero counts or throwing an error,
and adopt this alternate currency system:
final int[] altBillValues = {29, 30, 35, 40, 150, 200};
Then even with the 10_000 ceiling,
we would definitely need a much greater maximum depth.
while loop
This is a bit weird:
a:
for (int i = 0; ; ) {
...
if ( ... ) {
break a;
You had an opportunity to name that location, but used meaningless a instead.
Sigh!
Here is more standard control flow that you might use:
i = 0;
while ( ... ) {
...
That leaves us with deciding what predicate the while should use, different from the while (true) that OP essentially uses.
I am a little surprised that we're not comparing "amount dispensed so far"
against "amount we were asked to dispense".
This all seems to be fallout from the decision to artificially impose
a "max depth" constraint, something that was not present in the original
problem definition.
Perhaps the OP algorithm always terminates, but from a cursory reading
there's no obvious loop variant that always drives us closer to
termination. If the maximum allowed depth was "small", say 4, it appears
the algorithm could return bestResult containing wrong bill counts.
The current control structure does not inspire confidence in correctness.
Resizing the array when needed, or relying on a HashMap, would simplify the code.
positive infinity
This is very clear:
int bestCount = Integer.MAX_VALUE;
These? Less so.
indexes[i] = depth - 1;
...
indexes[i]++;
if (indexes[i] == depth) {
We seem to be dancing around the notion of positive infinity here,
in a needlessly complex way. If a comparison of indexes[i] >= depth
denotes "solution impossible" we could get away with slightly less
complex code. Designing such artifacts out of the solution entirely
seems preferable.
naming
This potentially is a good identifier:
final int[] bills = {5, 30, 35, 40, 150, 200};
But it turns out to be a little on the vague side,
given that bills are central to the problem we're solving.
Consider using billValues instead.
amount is a good parameter name.
Again a little on the vague side, consider desiredAmount. No biggie.
bestCount is a good name, but we maybe don't need it at all,
since it's redundant with sum of bestResult.
A helper function could recompute it as needed.
OTOH indexes is a truly terrible name, which should have been billCounts.
It appears many times, and obscures the meaning of the code.
With such a rename, rather than "result" go for parallel structure
with a name like bestCounts or bestBillCounts.
The identifiers i, j, count, sum are all perfect as-is.
obscure tests
if (amount - sum <= 0) {
if (amount - sum == 0 && count <= bestCount) {
It's unclear why that's the best way to phrase the checks, instead of:
if (amount <= sum) {
if (amount == sum && count <= bestCount) {
It might have been the best way if there was a difference
we were tracking through the code, perhaps a difference that is
part of a loop variant, or that appears in a cited reference.
But as it stands it's just a cognitive speed bump.
This codebase appears to achieve a subset of its design goals.
I would not be willing to delegate or accept maintenance tasks on it.
Recommend "discard and begin again, with a cited standard algorithm",
rather than "refactor the existing code".
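For reference, the standard bottom-up dynamic-programming solution to this change-making problem looks roughly as follows (sketched in Python for brevity; a Java port is mechanical). It runs in O(amount × denominations) and needs no depth heuristic at all:

```python
def min_notes(amount, bills=(5, 30, 35, 40, 150, 200)):
    """Return a {denomination: count} dict using the fewest notes,
    or None if the amount cannot be dispensed exactly."""
    INF = float("inf")
    best = [0] + [INF] * amount          # best[a] = fewest notes summing to a
    last = [None] * (amount + 1)         # last bill used to reach a
    for a in range(1, amount + 1):
        for b in bills:
            if b <= a and best[a - b] + 1 < best[a]:
                best[a] = best[a - b] + 1
                last[a] = b
    if best[amount] == INF:
        return None                      # amount not representable
    counts = {b: 0 for b in bills}
    a = amount
    while a:
        counts[last[a]] += 1
        a -= last[a]
    return counts
```

This is the "cited standard algorithm" route: the best[] table plays the role of the DP array mentioned below, and the greedy-looking backtrack over last[] recovers the actual bill counts.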
You might choose to tackle a simpler variant first.
Ignore the "no recursion!" requirement, and produce a
well-tested
recursive solution. With that in hand, you'll be in
a good position to refactor it into a solution where you
manually maintain a stack or you maintain a
DP array. | {
"domain": "codereview.stackexchange",
"id": 45180,
"tags": "java, algorithm, iteration"
} |
Should I use keras or sklearn for PCA? | Question: Recently I saw that there is some basic overlap of functionality between keras and sklearn regarding data preprocessing.
So I am a bit confused about whether I should introduce a dependency on another library like sklearn for basic data preprocessing, or stick with only keras, as I am using keras for building my models.
I would like to know the difference for scenarios like
which is good for production
which will give me better and faster response
is there any issue with introducing dependency over other libraries for just 1 or 2 functionality
which has a better compatibility with other tools like tensorboard or libraries like matplotlib, seaborn etc.
Thanks in advance.
Answer:
which is good for production
They are both good. sklearn can be used in production just as much as tensorflow.keras.
which will give me better and faster response
I think that doesn't really depend on the library, but rather on the size of your models and of your datasets. That's what really matters. Both libraries can be used to create very optimized and fast models.
is there any issue with introducing dependency over other libraries for just 1 or 2 functionality
There are no issues in using sklearn and tensorflow.keras together. In the ML/Data Science world they are probably the two most common tools. No worries about that!
which has a better compatibility with other tools like tensorboard or libraries like matplotlib, seaborn etc.
Well, keras is now a branch of tensorflow (it's tensorflow.keras). TensorBoard is designed specifically for it. Other than that, all other visualization libraries such as matplotlib and seaborn are perfectly compatible.
Final thoughts:
use sklearn and keras in sequence without problems. Data preprocessing steps can draw on a lot more libraries; don't worry about using one more, especially if it's a very solid and popular one such as sklearn.
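For intuition, PCA itself is only a few lines of linear algebra. The plain-numpy sketch below computes essentially what sklearn.decomposition.PCA does under the hood (the function name and variables are mine, purely illustrative):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project X onto its top principal components via SVD --
    the same quantities sklearn's PCA fit/transform would give."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:n_components]            # principal axes, one per row
    scores = (X - mean) @ components.T        # reduced representation
    return scores, components, mean

# Points lying on a line in 2-D compress losslessly to one component:
X = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
scores, components, mean = pca_reduce(X, 1)
X_reconstructed = scores @ components + mean  # recovers X exactly here
```

So whichever library you call, the operation is the same; pick the one that fits your pipeline.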
However, you might want to substitute PCA with autoencoders. That's arguably the best dimensionality reduction technique: it's non-linear, meaning it can carry more information with fewer variables, and it can be implemented in tensorflow.keras. In that way you'd have a neural network that generates a compressed representation of your data, and another that makes the prediction. That's just a suggestion of course; you know your task better than anyone else. | {
"domain": "datascience.stackexchange",
"id": 7732,
"tags": "deep-learning, keras, scikit-learn, feature-engineering, pca"
} |
Proving that test particles in GR follow spacetime geodesics | Question: My question is pretty much in the title. According to this paper, this is not exactly proven rigorously yet. What I don't understand is what exactly is not proven. If I'm not too wrong, a test particle is just a particle which does not affect the ambient gravitational field.
According to this Paper, it can be easily shown that the eqn. Of motion for a point particle in a curved space (having metric $g_{\mu \nu}$), can be found by considering the action to be just,
$$S = -mc \int ds \, ,$$
where
$$ds^2 = -g_{\mu \nu}dx^{\mu}dx^{\nu}$$
This would give the geodesic equation as the equation of motion.
Now, if we assume that the particle is somewhat massive, couldn't we just modify the metric linearly, by superposing a mass coupled metric as,
$$g'_{\mu \nu} = g_{\mu \nu} + m h_{\mu \nu}$$
Considering the same form of the action above, we should get the same geodesic equation along with a subsidiary equation (which has a mass coupling). In the limit $m \rightarrow 0$, the subsidiary term should vanish, leaving the original geodesic equation.
So what I want to know is: what exactly is the unsolved part of this problem? I failed to understand it from the text itself. It would be great if someone can point that out.
Answer: Proving that test particles in GR follow geodesics is far from trivial. A core aspect of the problem is that point particles do not make sense in general relativity (or similarly non-linear theories). Hence an argument that starts by assuming a point particle (such as the one given by the OP) cannot be fully trusted.
That being said, I know of (at least) two methods that rigorously prove that massive objects follow geodesics in the limit that their mass goes to zero.
The first starts by assuming an object that is sufficiently compact compared to the curvature length scale(s) of the background spacetime in which it moves (in particular, if the background spacetime is a black hole, you assume that the object's mass is much smaller than that of the background black hole and that the size of the small object is of a similar order of magnitude as its Schwarzschild radius, i.e. its size scales linearly with its mass). You can then use Einstein's equation and matched asymptotic expansions to find a systematic expansion of the equations of motion for the object's worldline as an expansion in the ratio of these length scales (i.e. the ratio of the masses in the case of a black hole background). The zeroth order term in this expansion is simply the geodesic equation in the background spacetime.
This general framework for solving the equations of motion of a 2-body system in GR is known as the "gravitational self-force" approach. A good recent review (that also deals with the problem of point particles in GR) was published last year by Barack and Pound (arXiv:1805.10385; see section 3 and specifically 3.5).
The second approach (developed by Weatherall and Geroch) starts from energy-momentum distributions that are a) conserved and b) satisfy the dominant energy condition. One can then show that sufficiently localized concentrations of energy "track" geodesics. (See arXiv:1707.04222; Theorem 3 gives a precise statement of their result.)
"domain": "physics.stackexchange",
"id": 58403,
"tags": "general-relativity, lagrangian-formalism, metric-tensor, variational-principle, geodesics"
} |
jQuery toggle script | Question: I have been working on a toggle script, but I find myself repeating my code. Is there a way of combining it all?
jQuery(document).ready(function($) {
var topContainer = $("#alles");
var topButton = $(".abutton");
topButton.click(function() {
topContainer.slideToggle(1200, 'easeInOutQuart');
$(".hideable").slideToggle(1200, 'easeInOutQuart');
});
var topContainer2 = $("#voorbeelden");
var topButton2 = $(".bbutton");
topButton2.click(function() {
topContainer2.slideToggle(1200, 'easeInOutQuart');
$(".hideable").slideToggle(1200, 'easeInOutQuart');
});
var topContainer3 = $("#contact");
var topButton3 = $(".cbutton");
topButton3.click(function() {
topContainer3.slideToggle(1200, 'easeInOutQuart');
$(".hideable").slideToggle(1200, 'easeInOutQuart');
});
});
.container {
display:block;
width:500px;
margin:0 auto;
}
.hideable {
width: 500px;
margin: 0 auto;
}
#alles {
float:left;
width:500px;
height:500px;
background:#ff4000;
}
#voorbeelden {
float:left;
width:500px;
height:100px;
border-top:1px solid #ddd;
background:#cc6600;
}
#contact {
float:left;
width:500px;
height:100px;
border-top:1px solid #ddd;
background:#bb3300;
}
.toggleable {
display:none;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="container">
    <header>
        <h1>stuff</h1>
        <h2>more stuff</h2>
    </header>
    <div id="main" class="hideable"><img src="imgs/rk.svg" alt="image"></div>
    <div id="alles" class="toggleable"></div>
    <div id="voorbeelden" class="toggleable"></div>
    <div id="contact" class="toggleable"></div>
    <ul>
        <li>
            <a href="#" class="abutton">
                <h2 class="orsp-title">a</h2>
                <span class="orsp-category">stuff</span>
            </a>
        </li>
        <li>
            <a href="#" class="bbutton">
                <h2 class="orsp-title">b</h2>
                <span class="orsp-category">stuff</span>
            </a>
        </li>
        <li>
            <a href="#" class="cbutton">
                <h2 class="orsp-title cbutton">c</h2>
                <span class="orsp-category">stuff</span>
            </a>
        </li>
    </ul>
</div> <!--/container-->
Answer: First, let's clean up your code a bit more to reduce it to its basics (for a demo, see http://jsfiddle.net/bjelline/azfXy/):
<div id="alles" class="toggleable">alles content</div>
<div id="voorbeelden" class="toggleable">voorbeelden content</div>
<div id="contact" class="toggleable">contact content</div>
<ul>
    <li><a href="#" class="abutton">show alles</a></li>
    <li><a href="#" class="bbutton">show voorbeelden</a></li>
    <li><a href="#" class="cbutton">show contact</a></li>
</ul>
There are three buttons that are supposed to toggle three divs in another part of the page.
First, we have to create a connection between the buttons and the divs. That's easy: you already made them links to #, so we can just add the id of the corresponding div there. Also, I changed the class on the three buttons to be the same, since we have the href to distinguish them:
<div id="alles" class="toggleable">alles content</div>
<div id="voorbeelden" class="toggleable">voorbeelden content</div>
<div id="contact" class="toggleable">contact content</div>
<ul>
    <li><a href="#alles" class="button">show alles</a></li>
    <li><a href="#voorbeelden" class="button">show voorbeelden</a></li>
    <li><a href="#contact" class="button">show contact</a></li>
</ul>
Now handling the click is the same for all buttons: prevent the normal functioning of the link, read out the href, and toggle the corresponding div:
$('.button').click(function (event) {
    event.preventDefault();
    var id = $(this).attr('href');
    $(id).slideToggle(200);
});
Working Demo at http://jsfiddle.net/bjelline/ZmAK4/
And while you are learning jQuery, always try to uphold two principles:
the page should still be readable without javascript (progressive enhancement)
put all JavaScript into <script> tags or .js files, not in HTML attributes
How can we make the page usable without JavaScript? Easy: without JavaScript, everything should be visible. How do we achieve that? Don't use CSS to hide the .toggleable elements; use the first line of JavaScript instead:
$('.toggleable').hide();
P.S. Now I'll try to add the image that indicates whether any of the divs is visible (for a demo, see http://jsfiddle.net/bjelline/HmCWr/):
$('.button').click(function (event) {
    event.preventDefault();
    var id = $(this).attr('href');
    $(id).toggle();
    $('.hideable').toggle( $('.toggleable:visible').length > 0 );
});
Here $('.toggleable:visible').length counts the number of divs that are visible. If at least one is visible, the image is shown; otherwise it is hidden (with the method toggle(true/false)).
Notice that I removed the slide animation, to avoid the following problem: with the slide animation, the div that is just closing still counts as open.
If you insist on the slide animation, you will probably need JavaScript variables to record the open/close state of each div. | {
"domain": "codereview.stackexchange",
"id": 4537,
"tags": "javascript, beginner, jquery, html, css"
} |
Function to return the sum of multiples within a range | Question: Project Euler problem 1 says:
Find the sum of all the multiples of 3 or 5 below 1000.
I have written a program that will return the sum of the multiples of any list of positive integers. For example, if I wanted to know the sum of every integer that is a multiple of 3 or 5 and less than 10, I would add 3, 5, 6, and 9 to get 23.
My function can do this for any number of positive integers, between any range that starts from 1.
# Function to return sum of multiples that are within a certain range
def MultiplesSum(multiple, end, start=1):
    mSum = 0
    naturals = range(start, end)
    offset = -start
    if start == 0:
        raise ZeroDivisionError('Cannot start with the number 0')
    for num in multiple:
        for i in naturals:
            if i % num == 0 and naturals[i+offset] != 0:
                # print mSum,"+", i,"=",i+mSum
                mSum += i
                naturals[i+offset] = 0
            i += 1
    return mSum

problem1 = MultiplesSum([3,5,2], 16)
print problem1
# prints 88
Answer: Simpler implementation
You don't need naturals at all, nor offset.
You could make the outer loop iterate over the range(start, end),
and for each value check if it can be divided by any one of nums,
and add to the sum.
msum = 0
for i in range(start, end):
    for num in nums:
        if i % num == 0:
            msum += i
            break
return msum
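As a side note (my own sketch, not part of the review above): the inner loop with break behaves like any(), so the whole computation can also be collapsed into a single expression:

```python
nums, start, end = [3, 5], 1, 10

# Loop version, as suggested above
msum = 0
for i in range(start, end):
    for num in nums:
        if i % num == 0:
            msum += i
            break

# Equivalent single expression: any() short-circuits just like the break
compact = sum(i for i in range(start, end) if any(i % n == 0 for n in nums))

assert msum == compact == 23  # 3 + 5 + 6 + 9
```

Whether the expression form is clearer than the loop is a matter of taste; both do the same work.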
API
The interface of your function is not intuitive:
The range start parameter goes after the range end parameter. That's not intuitive. Consider how Python's range works:
With one arg, the first arg is the end of the range
With two args, the first arg is the start, the second is the end of the range. It's natural that way
I suggest following that example
The first arg is called "multiple", singular. It's not a multiple. It's a collection of numbers. So nums would be a better name, and it would make it easier for readers to understand how to use this function
Input validation
In this part:
if start == 0:
    raise ZeroDivisionError('Cannot start with the number 0')
The zero division error is inappropriate. You did not divide by zero,
and it won't make sense for callers to catch this exception and handle it.
In other languages this should be an IllegalArgumentException, in Python ValueError would be more appropriate.
The real problem is not division by zero, but inappropriate use of the function.
But then, why is start=0 inappropriate? I see nothing wrong with that.
What would be wrong is having 0 in nums.
That's what you should validate instead.
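One possible sketch of that validation (the snake_case name and the error message here are my own, not from the original code):

```python
def multiples_sum(nums, end, start=1):
    """Sum of integers in [start, end) divisible by at least one of nums."""
    if 0 in nums:
        # 0 in nums is what would actually trigger a ZeroDivisionError in i % num
        raise ValueError('nums must not contain 0')
    msum = 0
    for i in range(start, end):
        if any(i % num == 0 for num in nums):
            msum += i
    return msum

print(multiples_sum([3, 5], 10))  # 23
```

Note that start=0 is now accepted without complaint: range(0, end) simply includes 0, which adds nothing to the sum.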
Coding style
The recommended practice is to use snake_case for function names, not CamelCase. See more coding style recommendations in PEP8. | {
"domain": "codereview.stackexchange",
"id": 13961,
"tags": "python, programming-challenge"
} |
Python - Random dice | Question: I am a fairly new programmer, and I stumbled upon a beginner practice project, a random dice number generator. The code works, but I would like to know if there is a more efficient way to write it.
#Random dice
import random
#returns a number (side of the dice)
def roll_dice():
    print (random.randint(1, 6))
print("""
Welcome to my python random dice program!
To start press enter!
Whenever you are over, type quit.
""")
flag = True
while flag:
    user_prompt = input(">")
    if user_prompt.lower() == "quit":
        flag = False
    else:
        print("Rolling dice...\nYour number is:")
        roll_dice()
Answer: You can make it look nicer!
Instead of writing comments in front of your functions, do it with docstrings:
def roll_dice():
    """Print a number between 1 and 6 (side of the dice)"""
    print(random.randint(1, 6))
You can observe that I made a couple of changes to this:
removed the extra space you had in your print() function
added the docstring I mentioned above
modified the content of docstring (your function doesn't return anything, it just prints a random number). A beginner programmer might get the wrong idea.
used 4-space indentation instead of two.
added two blank lines before the function definition
Now, let's try to make it even better!
You have two magic numbers, 1 and 6. Since you put the logic inside a function, let's make use of it and define those as arguments to our function:
def roll_dice(min_dice, max_dice):
    """Print a number between min_dice and max_dice (side of the dice)"""
    print(random.randint(min_dice, max_dice))
The above has the advantage of easier customization of your program. More, you can abstract things further and give those arguments default values:
def roll_dice(min_dice=1, max_dice=6):
    """Print a number between min_dice and max_dice (side of the dice)"""
    print(random.randint(min_dice, max_dice))
More, you can go a step further and make the function actually produce a value for its caller: just replace the print() call with return.
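To illustrate why that matters (my own sketch, following the suggestion): once roll_dice returns its value instead of printing it, results can be combined, for example to roll two dice:

```python
import random


def roll_dice(min_dice=1, max_dice=6):
    """Return a number between min_dice and max_dice (side of the dice)."""
    return random.randint(min_dice, max_dice)


# Because the value is returned rather than printed, results compose:
two_dice = roll_dice() + roll_dice()
assert 2 <= two_dice <= 12
print(two_dice)
```

A print-only version could not be reused this way; the caller would have no value to work with.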
Okay, so far so good. I think we've managed to handle this part pretty well. Let's move on.
First, let's apply the changes we did in the first part, to this one too:
print("""
Welcome to my python random dice program!
To start press enter!
Whenever you are over, type quit.
""")
flag = True
while flag:
    user_prompt = input("> ")
    if user_prompt.lower() == "quit":
        flag = False
    else:
        print("Rolling dice...\nYour number is:")
        roll_dice()
What I don't like about this, is the fact that you didn't wrap the logic inside a function. Let's do that first:
def play():
    print("""
Welcome to my python random dice program!
To start press enter!
Whenever you are over, type quit.
""")
    flag = True
    while flag:
        user_prompt = input("> ")
        if user_prompt.lower() == "quit":
            flag = False
        else:
            print("Rolling dice...\nYour number is:")
            roll_dice()
The changes that I'd like to make to this function are the following:
remove the flag variable
move the intro message out of it
def play():
    while True:
        user_prompt = input("> ")
        if user_prompt.lower() == "quit":
            return False
        else:
            print("Rolling dice...\nYour number is: {}".format(roll_dice()))
Moving next, let's build our main() function:
def main():
    print("Welcome to my python random dice program!\n"
          "To start press ENTER!\n"
          "Whenever you are over, type quit.\n")
    play()
Last but not least, let's call our main function:
if __name__ == "__main__":
    main()
You can see I added an extra check: if __name__ == "__main__". With this check, the code only executes when you run the module as a program, and not when someone imports your module to call your functions themselves.
The full code:
import random


def roll_dice(min_dice=1, max_dice=6):
    """Return a number between min_dice and max_dice (side of the dice)"""
    return random.randint(min_dice, max_dice)


def play():
    """Return False if user enters 'quit'. Otherwise, print a random number"""
    while True:
        user_prompt = input("> ")
        if user_prompt.lower() == "quit":
            return False
        else:
            print("Rolling dice...\nYour number is: {}".format(roll_dice()))


def main():
    print("Welcome to my python random dice program!\n"
          "To start press ENTER!\n"
          "Whenever you are over, type quit.\n")
    play()


if __name__ == "__main__":
    main() | {
"domain": "codereview.stackexchange",
"id": 25645,
"tags": "python, dice"
} |