| anchor | positive | source |
|---|---|---|
Hawking Radiation from a Physical Black Hole | Question: For a distant observer, a physical black hole takes an infinite amount of time to collapse, because time is redshifted near the Schwarzschild radius. Instead of the matter crossing the horizon, it will just pancake closer and closer to the $r_s$, without ever forming a horizon. On the other hand, Hawking's derivation relies on the existence of a bona fide horizon. Can the horizon radiate before it even exists?
Answer: While the original derivation of the phenomenon of Hawking radiation (HR) relied on the existence of an eternal future event horizon, it has since been established that its existence is more of a mathematical convenience for the calculations than a strict requirement for HR.
First, let us clear a misunderstanding:
For a distant observer, a physical black hole takes an infinite amount of time to collapse, because time is redshifted near the Schwarzschild radius. Instead of the matter crossing the horizon, it will just pancake closer and closer to the $r_s$, without ever forming a horizon.
If we consider only classical general relativity (i.e. no quantum effects), then by the clock of a distant observer, the event horizon forms in a finite amount of time. In the quote above, the OP is forgetting that $r_s$ is not some external constant but is instead determined by all the matter that forms the black hole. To illustrate, let us consider a particle of small mass $\delta m$ falling into an already formed black hole of mass $M$. The particle does not have to reach $r=r_{s}= 2 M$ to cross the horizon (we are using GR units). Instead it only has to reach $r= 2 (M + \delta m) = r'_{s}$, and this takes only a finite amount of time (by that same external observer's clock), on the order of $M \ln(M/\delta m)$. By that time the event horizon would have expanded to its new value $r_s'= 2(M + \delta m)$, enveloping the particle.
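As a rough numerical illustration of that time scale (my own sketch, not from the original answer; it uses the conversion factor $GM_\odot/c^3 \approx 4.9\times10^{-6}\,\mathrm{s}$ to turn geometric-unit masses into seconds):

```python
import math

# Order-of-magnitude sketch: horizon-crossing time t ~ M * ln(M / delta_m)
# in geometric units (G = c = 1), converted to seconds via
# M_sun -> G*M_sun/c**3 ~ 4.93e-6 s.
M_SUN_SECONDS = 4.93e-6

def crossing_time_seconds(M_solar, delta_m_solar):
    M = M_solar * M_SUN_SECONDS
    dm = delta_m_solar * M_SUN_SECONDS
    return M * math.log(M / dm)

# A 1 kg particle (~5e-31 M_sun) falling into a 10 M_sun black hole:
t = crossing_time_seconds(10.0, 5e-31)  # a few milliseconds by the distant clock
```

Even for such an absurdly tiny mass ratio, the logarithm only reaches ~70, so the horizon envelops the particle in milliseconds by the external observer's clock.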
Nevertheless, the existence of bona fide event horizons is far from established once we take quantum corrections into account. Depending on how the full theory of quantum gravity works, event horizons may form very late during the collapsing object's evolution, or they may not form at all. For example, in [1] Hajicek showed that it is possible to have an evaporating black hole spacetime with an apparent horizon but without an event horizon, and such a spacetime would still have Hawking radiation. Furthermore, in [2] it has been shown that whenever there is an apparent horizon, not necessarily of gravitational nature, together with a basic quantumness of physics, there would be Hawking radiation. So one could observe HR in various analogue black holes, where an effective metric with an apparent horizon emerges that describes the propagation of the relevant degrees of freedom.
But even the apparent horizon is not really necessary for HR. What is needed is an approximately exponential relationship between the affine parameters of null geodesics approaching the black-hole-like object from past null infinity and escaping from it to future null infinity [3].
The following spacetime diagram of a physical “black hole-like” spacetime formed as a result of a star collapse illustrates this property:
We have two null geodesics escaping to a future null infinity $\mathscr{I}^{+}$ with a separation $u_2 - u_1$ (measured by affine parameters at $\mathscr{I}^{+}$, but it is translatable to a difference in signal arrival time for a static observer far away from the black hole). Tracing these geodesics back in time, we note that they arrived from the past null infinity $\mathscr{I}^{-}$, but their separation there $U_2-U_1$ is exponentially smaller. It is this exponential scaling that is the necessary and sufficient condition for the existence of Hawking radiation, and not the global geometric properties of a spacetime (such as the existence of an event horizon). For the “usual” black hole spacetimes the approximate relationship between parameters $U$ and $u$ at $\mathscr{I}^{-}$ and $\mathscr{I}^{+}$ is:
$$U \approx U_H - A e^{-\kappa_H u},$$ as $u\to +\infty$, with $\kappa_H$ being the surface gravity at the horizon; in general (for example, for an evaporating black hole), instead of a constant $\kappa_H$ there would be a function $\kappa(u)$. Once this exponential scaling is quantified, and if a suitable adiabatic condition is fulfilled, it is possible to write the coefficients of the time-dependent Bogoliubov transformation, with the local properties of the exponential scaling relation controlling the temperature of the radiation. It is expected that physical black hole spacetimes would satisfy the requirements for HR even if various quantum corrections are included.
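As a sketch of the numbers involved (my own illustration, assuming the standard Schwarzschild relations $\kappa_H = 1/(4M)$ and $T_H = \kappa_H/2\pi$ in geometric units, which are consistent with, but not stated in, the text above):

```python
import math

# Surface gravity and Hawking temperature for a Schwarzschild hole,
# G = c = hbar = k_B = 1. In SI units T_H ~ 6.2e-8 K for one solar mass
# (approximate literature value), scaling as 1/M.
T_SUN_KELVIN = 6.2e-8

def surface_gravity(M):
    return 1.0 / (4.0 * M)

def hawking_temperature(M):
    # temperature controlled by the exponential-scaling rate kappa_H
    return surface_gravity(M) / (2.0 * math.pi)

def hawking_temperature_kelvin(M_solar):
    return T_SUN_KELVIN / M_solar
```

The point of the exponential relation above is precisely that $\kappa$ (and hence the temperature) is set by the local scaling rate, not by global horizon structure.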
This exponential scaling offers an interesting additional point of view on Hawking radiation and related phenomena (Unruh and Gibbons–Hawking effects): thermal radiation arises as a result of quantum vacuum fluctuations being subjected to a relativistic exponential scaling transformation [4]. This point of view is suitable for treating situations without clear geometric features such as an event (or apparent) horizon.
[1] Hajicek, P. (1987). Origin of Hawking radiation. Physical Review D, 36(4), 1065, doi:10.1103/PhysRevD.36.1065.
[2] Visser, M. (2003). Essential and inessential features of Hawking radiation. International Journal of Modern Physics D, 12(04), 649-661, arXiv:hep-th/0106111.
[3] Barcelo, C., Liberati, S., Sonego, S., & Visser, M. (2011). Hawking-like radiation from evolving black holes and compact horizonless objects. Journal of High Energy Physics, 2011(2), 1-30, arXiv:1011.5911.
[4] Hu, B. L. (1996). Hawking-Unruh thermal radiance as relativistic exponential scaling of quantum noise. Thermal Field Theory and Applications, edited by Y. X. Gui, F. C. Khanna and Z. B. Su (Singapore, World Scientific, 1996), 249-260, arXiv:gr-qc/9606073. | {
"domain": "physics.stackexchange",
"id": 54495,
"tags": "thermodynamics, general-relativity, black-holes, hawking-radiation, qft-in-curved-spacetime"
} |
To find the impulse response using the difference equation | Question: A causal linear time-invariant filter has transfer function
a) Denote the input signal by x[n] and the output signal by y[n]. Find the difference
equations for the filter.
f) Find the impulse response of the system. (Eliminate j (the imaginary unit) from
your answer.)
I used the partial fractions method and obtained the following solution.
However, it was time-consuming. Is there a way I can simplify this difference equation to get the impulse response of the IIR filter and arrive at the same solution? Any help will be appreciated (as I am not familiar with solving by difference equations).
Answer: There is no real shortcut for computing the impulse response. Partial fractions is the standard way to do it. However, in the case of a second-order transfer function as in your example, the result can be found in tables of $\mathcal{Z}$-transform pairs. E.g., if you use the last two correspondences of this table, you can pretty quickly write down the result.
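For a cross-check, a partial-fraction result can always be verified by iterating the difference equation directly on a unit impulse. A sketch with a hypothetical second-order transfer function (the one in the question is not shown): $H(z) = 1/(1 - 0.9z^{-1} + 0.2z^{-2})$, whose partial-fraction expansion gives $h[n] = 5(0.5)^n - 4(0.4)^n$ for $n \ge 0$:

```python
def impulse_response(b, a, n_samples):
    """Iterate y[n] = sum_k b[k] x[n-k] - sum_{k>=1} a[k] y[n-k], assuming a[0] == 1."""
    x = [1.0] + [0.0] * (n_samples - 1)   # unit impulse
    y = [0.0] * n_samples
    for n in range(n_samples):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc
    return y

# y[n] = x[n] + 0.9 y[n-1] - 0.2 y[n-2]
h = impulse_response(b=[1.0], a=[1.0, -0.9, 0.2], n_samples=10)
```

The iterated values match the closed form term by term, and the causality point below is automatic here: the iteration produces nothing before $n = 0$.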
By the way, your result can't be correct (at least not for negative $n$), because you know that the system is causal, so its impulse response must be zero for $n<0$. | {
"domain": "dsp.stackexchange",
"id": 8046,
"tags": "discrete-signals, infinite-impulse-response, homework"
} |
Parsing prospero parameters | Question: Prospero is a URI scheme. There you have fields and values.
Could you check this code and give common suggestions or even test cases for JUnit tests?
(The documentation is in German.)
/**
* Gibt einen Prospero-Parameter aus der URL zurück. Dabei wird nur der Prospero-Part geprüft.
*
* @param url Die URL, darf nicht <code>null</code> sein.
* @param name Der Name des Parameters, darf kein '=' enthalten, darf nicht <code>null</code>
* sein.
* @return Den Wert des Parameters oder
* <dl>
* <dt><code>null</code></dt>
* <dd>wenn der Parameter ohne <code>=</code> angegeben wurde.</dd>
* <dt><code>error</code>-Parameter</dt>
* <dd>wenn kein solcher Parameter gefunden wurde</dd>
* </dl>
*/
public static String getProsperoParam(URL url, String name, String error) {
String path = url.getPath();
String[] split = path.split(";", 2);
if (split.length == 2) {
String params = split[1];
for (String param : params.split("&")) {
String[] paramParts = param.split("=", -1);
if (paramParts.length == 1) {
if (name.equals(paramParts[0])) {
return null;
}
}
if (paramParts.length == 2) {
if (name.equals(paramParts[0]))
return paramParts[1];
}
if (paramParts.length > 2) {
if (name.equals(paramParts[0])) {
return param.substring(name.length() + 1);
}
}
}
}
return error;
}
I successfully tested these cases:
assert URLTools.getProsperoParam(new URL("http://a:a@a.a.a/a.a?a=a#a"), "a", null) == null;
assert URLTools.getProsperoParam(new URL("http://a:a@a.a.a/a.a;a=4?a=a#a"), "a", null).equals("4");
assert URLTools.getProsperoParam(new URL("http://a:a@a.a.a/a.a;a=4=a?a=a#a"), "a", null).equals("4=a");
assert URLTools.getProsperoParam(new URL("http://a:a@a.a.a/a.a;a=4=a&m=3?a=a#a"), "m", null).equals("3");
String missingString = "Missing!";
assert URLTools.getProsperoParam(new URL("http://a:a@a.a.a/a.a;a=4=a&m=3?a=a#a"), "e", missingString) == missingString;
assert URLTools.getProsperoParam(new URL("http://a:a@a.a.a/a.a;e?a=a#a"), "e", "asdf") == null;
assert URLTools.getProsperoParam(new URL("http://a:a@a.a.a/a.a;e=?a=a#a"), "e", null).equals("");
Answer: Check input
Your comments mention that url shouldn't be null, so it's best to make sure that it really isn't:
if (url == null) {
throw new IllegalArgumentException("url cannot be null");
}
Nested if statements
if (paramParts.length == 1) {
if (name.equals(paramParts[0])) {
return null;
}
}
You repeat this pattern three times. It can be written as:
if (paramParts.length == 1 && name.equals(paramParts[0])) {
return null;
}
Although I would rewrite your if statements like this:
if (paramParts.length > 0 && name.equals(paramParts[0])) {
switch(paramParts.length) {
case 1:
return null;
case 2:
return paramParts[1];
default:
return param.substring(name.length() + 1);
}
}
And are you sure that paramParts.length == 2 (now case 2) is really needed? I think the other cases cover it.
Use early return
If you return early, you can reduce nesting:
if (split.length != 2) {
return error;
}
Tests
The specification for Prospero says:
each field/value pair is separated from each other and from the rest of the URL by a ";" (semicolon).
You are never testing if this would work, and it wouldn't:
String prosperoParam = prospero.getProsperoParam(new URL("http://a:a@a.a.a/a.a;a=a;b=b"), "b", null);
assertEquals(prosperoParam, "b");
If you want to change the spec, I would document it in the Javadoc (something like: "See RFC 4157, but note that this implementation uses '&' instead of ';' for the separation of field/value pairs; it still uses ';' for the separation between the first field/value pair and the rest of the URL").
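As a language-neutral sketch of what spec-compliant parsing would look like (written in Python for brevity, not a drop-in replacement for the reviewed Java; names are illustrative):

```python
def get_prospero_param(path, name, error=None):
    """Look up a Prospero parameter, using ';' as the pair separator per RFC 4157."""
    parts = path.split(";", 1)
    if len(parts) != 2:
        return error
    for param in parts[1].split(";"):       # ';' between pairs, per the spec
        key, sep, value = param.partition("=")
        if key == name:
            return value if sep else None   # bare field -> None, mirroring the Java
    return error
```

`str.partition` also sidesteps the three-way `length` branching: everything after the first '=' is the value, however many '=' characters it contains.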
Also, sometimes you use == to compare strings in your tests, I would replace it with equals. | {
"domain": "codereview.stackexchange",
"id": 10079,
"tags": "java, parsing, unit-testing, url"
} |
Ambiguity in the direction of dissolution equilibrium | Question: The following reaction is usually carried out to test for the carbonate anion in an inorganic salt;
$$\ce{CO3^2- +BaCl2/CaCl2 -> BaCO3/CaCO3(s) + 2 Cl-}$$
The resulting carbonate is stated to be an insoluble precipitate, and also to be dissolvable in mineral acids like $\ce{HCl}$.
How does that happen? If the initial test is to actually work, the dissolution equilibrium should lie to the right, in favour of the carbonate being formed. The dissolution requires the metal ions in the carbonate's solid lattice to break up and get solvated by the water molecules, ending up just like the strong electrolyte that we started with ($\ce{BaCl2}$ or $\ce{CaCl2}$).
If you're going to just take some hydrochloric acid and treat the precipitate with it, and the reaction we just saw works out in the reverse, what does that mean about the equilibrium constant? It's supposed to lie to one side no matter what you start with, right?
Answer: The ion $\ce{CO3^{2-}}$ introduced in the first reaction must be produced by dissolving a soluble carbonate like $\ce{Na2CO3}$. On the other hand, $\ce{BaCl2}$ or $\ce{CaCl2}$ have to be dissolved in water to react. If they are introduced as solids, these substances will not react, unless they slowly get dissolved. Nobody really knows why these chlorides are soluble in water while the corresponding carbonates are insoluble. So the precipitation reaction should be rewritten as $$\ce{Ba^{2+} + CO3^{2-} <=> BaCO3(s)} \ (1)$$ $$\ce{Ca^{2+} + CO3^{2-} <=> CaCO3(s)} \ (2)$$ Usually the equilibrium is displaced to the right-hand side, and a mixture of some carbonate plus a solution containing Ba or Ca produces a dense precipitate of carbonate. But by adding an acidic solution, containing the ions $\ce{H+}$ (or $\ce{H3O^+}$), the following reaction happens, which consumes the acidic ions: $$\ce{2 H+ + CO3^{2-} -> CO2(g) + H2O}$$ Here the ions $\ce{CO3^{2-}}$ are all destroyed and produce a gas which leaves the solution. This displaces the equilibria (1) and (2) to the left-hand side, and the precipitates disappear. | {
"domain": "chemistry.stackexchange",
"id": 15528,
"tags": "inorganic-chemistry, equilibrium, precipitation, identification"
} |
Speedup cython dictionary counter | Question: I wrote a simple Cython script to optimize collections.Counter-style dictionary counting and the Python zip implementation (the main input is a list of tuples). Is there a way to speed it up?
%%cython --annotate
cimport cython
import numpy as np
cimport numpy as np
from collections import defaultdict
@cython.boundscheck(False)
@cython.wraparound(False)
def uniqueCounterListCython(list x not None):
cdef:
Py_ssize_t i,n
n = len(x)
dx = defaultdict(int)
for i from 0 <= i < n:
dx[x[i]] += 1
return dx
@cython.boundscheck(False)
@cython.wraparound(False)
def zipCython(np.ndarray[long,ndim=1] x1 not None, np.ndarray[long,ndim=1] x2 not None):
cdef:
Py_ssize_t i,n
n = x1.shape[0]
l=[]
for i from 0 <= i < n:
l.append(((x1[i],x2[i])))
return l
Sample input -
uniqueCounterListCython(zipCython(np.random.randint(0,3,200000),np.random.randint(0,3,200000)))
EDIT:
Found a kind of trivial way to speed things up - just merge the two functions:
@cython.boundscheck(False)
@cython.wraparound(False)
def uniqueCounterListCythonWithZip(np.ndarray[long,ndim=1] x1 not None, np.ndarray[long,ndim=1] x2 not None):
cdef:
Py_ssize_t i,n
n = x1.shape[0]
dx = defaultdict(int)
for i from 0 <= i < n:
dx[((x1[i],x2[i]))] += 1
return dx
Any more suggestions?
Answer: You don't give us much context for this problem, so it's unclear to me exactly what you are trying to achieve. But in your example, you have a pair of NumPy arrays containing integers in the range 0–2, and you seem to want to count the number of occurrences of each pair of values.
So I suggest encoding pairs of integers in the range 0–2 into a single integer in the range 0–8, using numpy.bincount to do the counting, and then using numpy.reshape to decode the result, like this:
>>> import numpy as np
>>> x, y = np.random.randint(0,3,200000), np.random.randint(0,3,200000)
>>> counts = np.bincount(x * 3 + y).reshape((3, 3))
>>> counts
array([[22282, 22093, 22247],
[22084, 22295, 22396],
[22012, 22243, 22348]])
A quick check that I got the encoding/decoding right:
>>> counts[0,2] == np.count_nonzero((x == 0) & (y == 2))
True
This runs much faster than the code in your question (assuming I have interpreted your profile screenshots correctly):
>>> from timeit import timeit
>>> timeit(lambda:np.bincount(x * 3 + y).reshape((3, 3)), number=1000)
2.7519797360000666 | {
"domain": "codereview.stackexchange",
"id": 6315,
"tags": "python, optimization, numpy, cython"
} |
Can't see some topics for planning and navigation | Question:
Hi, I'm running ROS Melodic on Ubuntu 18.04.
I'm following this tutorial to perform slam and navigation on my own robot: http://emanual.robotis.com/docs/en/platform/turtlebot3/navigation/#run-navigation-nodes
I can create a good map using gmapping, but after that I can't navigate.
This is my launch file for navigation:
<!--Start base control of robot and rviz-->
<include file="$(find robot)/launch/start_robot_pc.launch">
<arg name="fixed_frame" value="map"/>
</include>
<!--Start kinect and fake scan-->
<include file="$(find robot)/launch/kinect.launch"/>
<!-- Load map previously created -->
<node pkg="map_server" name="map_server" type="map_server" args="$(arg map_file)"/>
<!-- AMCL -->
<include file="$(find robot)/launch/includes/amcl.launch.xml"/>
<!-- move_base -->
<include file="$(find robot)/launch/includes/move_base.launch.xml">
<!--param name="base_local_planner" value="dwa_local_planner/DWAPlannerROS" /-->
</include>
This is my move_base launch:
<launch>
<node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
<param name="base_local_planner" value="dwa_local_planner/DWAPlannerROS"/>
<rosparam file="$(find robot)/param/costmap_common_params.yaml" command="load" ns="global_costmap" />
<rosparam file="$(find robot)/param/costmap_common_params.yaml" command="load" ns="local_costmap" />
<rosparam file="$(find robot)/param/local_costmap_params.yaml" command="load" />
<rosparam file="$(find robot)/param/global_costmap_params.yaml" command="load" />
<!--rosparam file="$(find robot)/param/base_local_planner_params.yaml" command="load" /-->
<rosparam file="$(find robot)/param/dwa_local_planner_params.yaml" command="load" />
<rosparam file="$(find robot)/param/move_base_params.yaml" command="load" />
<remap from="cmd_vel" to="/cmd_vel"/>
<remap from="odom" to="odom"/>
</node>
</launch>
I can estimate 2D pose of robot but when I use "2D Nav Goal" arrow, nothing happens.
I run rostopic list and I get only this topics starting with /move_base:
/move_base/current_goal
/move_base/goal
/move_base_simple/goal
I checked tutorial's topics and I saw this:
/move_base/DWAPlannerROS/global_plan
/move_base/DWAPlannerROS/local_plan
/move_base/NavfnROS/plan
/move_base/current_goal
/move_base/global_costmap/costmap
/move_base/global_costmap/costmap_updates
/move_base/goal
/move_base/local_costmap/costmap
/move_base/local_costmap/costmap_updates
/move_base/local_costmap/footprint
/move_base_simple/goal
Why can't I see the same topics?
UPDATE:
I have dissected the tutorial's code and discovered that I saw those topics because rviz was subscribed to them, but no node was the publisher.
So why doesn't the navigation work for me?
Originally posted by FabioZ96 on ROS Answers with karma: 48 on 2019-09-05
Post score: 0
Original comments
Comment by pavel92 on 2019-09-05:
Do you get any warnings/errors when you launch your navigation launch file? If so, can you please share the terminal output? Otherwise it is hard to debug from this point where the problem might be. My initial lucky guess would be that you are missing a transform from base_link to map which does not allow costmap initialization.
Comment by FabioZ96 on 2019-09-05:
This is the warning I get:
Timed out waiting for transform from \base_footprint to \map to become available before running costmap, tf error: canTransform: target_frame \map does not exist. canTransform: source_frame \base_footprint does not exist..
If I don't set "map" as the fixed frame, most of rviz's visualizations don't work.
And I can see neither DWAPlannerROS nor NavfnROS.
Answer:
That is because you are missing the transform from \base_footprint to \map, so your costmap and planner plugins are not started. You need to check which transforms are generated by your localization (amcl). It might be base_link instead of base_footprint to map, or it might not be created at all. In the first case, in your costmap_common_params or local/global_costmap_params files, change robot_base_frame: base_footprint to robot_base_frame: base_link. The other case is that amcl can't create the transform for some reason. You can check the existing transforms by running:
rosrun rqt_tf_tree rqt_tf_tree
If you just want to see if move_base launches properly and get all of the topics then you can temporary test this by creating a static transform:
rosrun tf static_transform_publisher 0.0 0.0 0.0 0.0 0.0 0.0 1.0 map base_footprint 100
On another note, you can remove the .xml from amcl.launch.xml and move_base.launch.xml in your navigation launch file since it is obsolete.
Originally posted by pavel92 with karma: 1655 on 2019-09-06
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by FabioZ96 on 2019-09-06:
Thank you so much. Now it works and I can see maps and trajectories.
But I get some warning and the robot doesn't move properly.
[ WARN] [1567787328.029768159]: Parameter max_trans_vel is deprecated (and will not load properly). Use max_vel_trans instead.
[ WARN] [1567787328.030064495]: Parameter min_trans_vel is deprecated (and will not load properly). Use min_vel_trans instead.
[ WARN] [1567787328.030338975]: Parameter max_rot_vel is deprecated (and will not load properly). Use max_vel_theta instead.
[ WARN] [1567787328.030609905]: Parameter min_rot_vel is deprecated (and will not load properly). Use min_vel_theta instead.
[ WARN] [1567787328.031128466]: Parameter rot_stopped_vel is deprecated (and will not load properly). Use theta_stopped_vel instead.
In move_base I load the DWA params: <rosparam file="$(find robot)/param/dwa_local_planner_params.yaml" command="load" /> and in this file I tried to rename the parameters as suggested in the warnings, but I still get these warnings.
Comment by FabioZ96 on 2019-09-06:
What I mean is that the robot receives little impulses of velocity that don't allow it to move forward. So periodically the algorithm tries to rotate it in place, but the rotation is very slow.
Comment by gvdhoorn on 2019-09-07:
On another note, you can remove the .xml from amcl.launch.xml and move_base.launch.xml in your navigation launch file since it is obsolete.
unless the goal was to hide those launch files from the roslaunch auto-complete ..
Comment by pavel92 on 2019-09-08:
@FabioZ96 This is a totally new problem that you are facing and I would suggest to open a new question and post your dwa_local_planner_params config there. If the answer above solved your initial problem you can mark it as true.
As gvdhoorn has mentioned, you can keep the .xml extension if your goal is to hide it from roslaunch auto-completion.
Comment by FabioZ96 on 2019-09-09:
Ok, I opened a new question here: https://answers.ros.org/question/332487/robot-move-in-little-jerks/ | {
"domain": "robotics.stackexchange",
"id": 33729,
"tags": "navigation, ros-melodic"
} |
catkin_package(CATKIN_DEPENDS ...) vs. find_package(catkin REQUIRED COMPONENTS ...) | Question:
Hi,
Let's say I have a package A that depends on sensor_msgs. I have to add sensor_msgs to both
find_package(catkin REQUIRED COMPONENTS sensor_msgs)
catkin_package(CATKIN_DEPENDS sensor_msgs)
What is the difference between the two? What purpose do each of them solve? For instance, I noticed that message_generation has to be in the find_package call while the message_runtime has to be in catkin_package call. Why are they not in both?
Thank you.
Originally posted by 2ROS0 on ROS Answers with karma: 1133 on 2017-05-03
Post score: 7
Answer:
find_package() finds dependencies for this package.
catkin_package(CATKIN_DEPENDS) declares dependencies for packages that depend on this package.
For example: message_generation is needed by this package to build messages, while message_runtime is needed by dependent packages to use the already generated messages.
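A minimal CMakeLists.txt sketch putting both calls together (package names are illustrative, echoing the question's example):

```cmake
# Package "A" depends on sensor_msgs:
find_package(catkin REQUIRED COMPONENTS sensor_msgs)  # resolved when *this* package builds
catkin_package(CATKIN_DEPENDS sensor_msgs)            # exported to packages that depend on A

# The message asymmetry follows the same build-time vs. run-time split:
#   find_package(catkin REQUIRED COMPONENTS message_generation)  # to generate messages here
#   catkin_package(CATKIN_DEPENDS message_runtime)               # for downstream users of them
```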
Originally posted by kmhallen with karma: 1416 on 2017-05-03
This answer was ACCEPTED on the original site
Post score: 7 | {
"domain": "robotics.stackexchange",
"id": 27803,
"tags": "ros, catkin, catkin-package, find-package, cmake"
} |
Kill/shutdown other nodes | Question:
Hi,
I have a launch file that starts several nodes, one of which processes a video file and publishes messages to topics on a per-video-frame basis. Other nodes listen to those topics and process the received messages. I would like to run this launch file in a script with different video files passed as launch file parameters. The issue is that the listening nodes do not know when the publisher node terminates, and thus roslaunch does not return, since the listening nodes are still running even though no more data will be published on the topics.
I can think of setting timers in the listening nodes and calling signal_shutdown() on them if the time elapsed since the last received message exceeds some predefined threshold, but I suppose there may be better solutions. I cannot send a shutdown signal from one node to terminate another node within the ROS API, right? What options (other than a timer) do I have in order to implement that architecture in ROS?
Originally posted by Alexey Ozhigov on ROS Answers with karma: 31 on 2012-12-28
Post score: 2
Original comments
Comment by yigit on 2012-12-28:
I don't understand the problem very well. If you know the id of the roslaunch process, for example, is killing that process enough for you?
Comment by Alexey Ozhigov on 2012-12-28:
The problem is that if you have a launch file with several nodes, all of which perform some processing of a fixed data set of predefined size, all of the nodes should know when the processing task they are running is finished so that all the nodes terminate. Yes, I am now doing kill by PID.
Answer:
In the roslaunch xml include the option required="true" as shown here.
Originally posted by SL Remy with karma: 2022 on 2012-12-28
This answer was ACCEPTED on the original site
Post score: 5
Original comments
Comment by joq on 2012-12-28:
Good answer. I didn't know about that option. | {
"domain": "robotics.stackexchange",
"id": 12219,
"tags": "ros, roslaunch, node"
} |
How do CNNs use a model and find the object(s) desired? | Question: Background: I'm studying CNN's outside of my undergraduate CS course on ML. I have a few questions related to CNNs.
1) When training a CNN, we desire tightly bounded/cropped images of the desired classes, correct? I.e. if we were trying to recognize dogs, we would use thousands of images of tightly cropped dogs. We would also feed images of non-dogs, correct? These images are scaled to a specific size, i.e. 255x255.
2) Let's say training is complete. Our model's accuracy seems sufficient, with no problems. From here, let's have a large, HD image of a non-occluded dog running through a field with various obstacles. With a typical NN and some data, we just take the model, cross it with some input, and bam it's going to output some class. How will the CNN view this large image, and then 'find' the dog? Do we run some type of preprocessing on the image to partition it, and feed the partitions?
Answer: Though there could be a very detailed explanation for this question, I will try to explain it in as few words as possible.
1) Cropping the images to a particular size isn't a necessary condition, and neither is scaling. But put it this way: it doesn't matter whether a dog is represented in a B&W image or an RGB image, because a convolutional network learns features in the images that are independent of color. Scaling and resizing help to limit the pixel values to between 0 and 1.
2) Once you have trained your CNN model, it has learned all the features, like edges, etc., needed to recognize a dog in an image. Because the model has learned the features, it acquires certain properties like translation invariance, which means that no matter where you position a dog in the image, it's still a dog and has the same features. How does the model recognize it? It checks for the features of a dog, learned during training, no matter what the size of the new image is, where the dog is in the image, or what the dog is doing.
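To make the "finding" concrete, here is a brute-force sliding-window sketch (my own illustration, not from the answer; `score_window` is a hypothetical stand-in for a trained CNN classifier, and modern detectors replace this scan with region proposals or fully convolutional evaluation):

```python
import numpy as np

def sliding_window_detect(image, window, stride, score_window):
    """Return the (row, col) of the window the classifier scores highest."""
    best, best_pos = -np.inf, None
    rows, cols = image.shape[:2]
    for r in range(0, rows - window + 1, stride):
        for c in range(0, cols - window + 1, stride):
            s = score_window(image[r:r + window, c:c + window])
            if s > best:
                best, best_pos = s, (r, c)
    return best_pos

# Toy stand-in "classifier": score a patch by its mean brightness.
img = np.zeros((64, 64))
img[32:48, 16:32] = 1.0   # a bright 16x16 "object"
pos = sliding_window_detect(img, window=16, stride=8, score_window=np.mean)
```

Each window is resized (here they already match) and scored by the classifier; the highest-scoring window localizes the object, which is one simple answer to the "partition and feed" question.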
For an in-depth understanding you can refer to the following resources:
http://neuralnetworksanddeeplearning.com/chap6.html
http://cs231n.github.io/convolutional-networks/ | {
"domain": "datascience.stackexchange",
"id": 10535,
"tags": "neural-network, computer-vision, convolutional-neural-network, beginner"
} |
$Q$ Transfer via Radiation Formula | Question: According to the formula:
$$
\frac{\Delta Q}{\Delta t}=\sigma\epsilon A T^4
$$
What does $T$ refer to in a situation where I am modelling the power of radiation from air at some temperature to a surface of emissivity $\epsilon$?
Is it the temperature difference? And if so, would the equation look like this?
$$
\frac{\Delta Q}{\Delta t}=\sigma\epsilon A (T_{2} - T_{1})^4
\quad\text{or}\quad
\frac{\Delta Q}{\Delta t}=\sigma\epsilon A (T_{2}^4 - T_{1}^4)
$$
Further, what would the equation look like if the surface had greater temperature than the air?
Edit: the source of the formula is from the IBO Exam Data Booklet (https://ibphysics.org/wp-content/uploads/2016/01/annotated-physics-data-booklet-2016.pdf)
Answer: The term $\sigma\varepsilon A T^4$ is used to model the graybody output power—i.e., the outgoing radiative heat flux from some surface with emissivity $\varepsilon$, area $A$, and temperature $T$, without considering any radiative input.
If that surface entirely faces an environment at temperature $T_\mathrm{env}$, then the net output is typically modeled as $\sigma\varepsilon A(T^4-T_\mathrm{env}^4)$ because that environment itself radiates heat toward the surface. (Alternatively, the net rate of heat gain at the surface can be modeled as $\sigma\varepsilon A(T_\mathrm{env}^4-T^4)$.)
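A numerical sketch of the net-exchange form above (my own illustrative values, not from the answer):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiative_loss(emissivity, area, T_surface, T_env):
    """Net power (W) radiated by a surface entirely facing an environment at T_env."""
    return SIGMA * emissivity * area * (T_surface**4 - T_env**4)

# 1 m^2 surface at 310 K facing 290 K surroundings, emissivity 0.9:
p = net_radiative_loss(0.9, 1.0, 310.0, 290.0)   # positive: surface loses heat
q = net_radiative_loss(0.9, 1.0, 290.0, 310.0)   # negative: surface gains heat
```

The sign answers the question's last point directly: if the surface is hotter than the air, the same $T^4 - T_\mathrm{env}^4$ expression simply comes out positive (a net loss), and negative in the opposite case.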
Special care may be needed to treat a surrounding environment of gas only (as in our atmosphere), as some wavelengths may go unabsorbed, with some radiation passing essentially unimpeded to outer space ($T\approx 0\,\mathrm{K}$). This can cause the effective temperature of the atmosphere for radiative heat transfer calculations (the so-called "sky temperature") to differ from the actual air temperature.
All these topics are discussed in detail in introductory heat transfer textbooks, e.g., Incropera & DeWitt's Fundamentals of Heat and Mass Transfer. | {
"domain": "physics.stackexchange",
"id": 92124,
"tags": "thermodynamics, electromagnetic-radiation, temperature, thermal-radiation"
} |
Are there equations for placing sounds in the stereo field using delays? | Question: I'm thinking of the typical stereo widener (Haas, binaural) effects here, those that go beyond the typical pan knob to create an "out of the headphones" experience.
These processors can be automated to move sounds around, but I was wondering whether there is mathematics that describes how to set the delays in order to move the sound to a certain spot in an angle-like fashion.
Such as in this plug-in:
What about modelling the "distance" from the listener?
Answer: Spherical head, distant source
For a spherical head with the ears at opposite points on its surface, a planar wave from a distant sound source reaches an ear on a straight line through air or, if blocked by the head, partially via the shortest path along the surface of the sphere. If the sound source is at the same vertical coordinate $z$ as the ears, we can think of the head as a circle on the $x, y$ plane:
For simplicity, the head is a unit circle centered at $x = 0, y = 1$, turned so that the nose moves by an angle $\alpha$ counterclockwise from the origin. We are looking for the distances the sound still has to travel after the planar wavefront first reaches the head. There is a shortest straight line path (red dashed) of length $1-\sin(\alpha)$ to the right (red) ear. The shortest path (white dashed) to the left (white) ear consists of a straight line segment of length $1$ plus a curved path of length $\alpha$. The difference between the lengths to the two ears is $\alpha + 1 - (1 - \sin(\alpha)) = \sin(\alpha) + \alpha$. This holds for $0 \le \alpha \le \frac{\pi}{2}$, but because the function is antisymmetrical, it applies also to $-\frac{\pi}{2} \le \alpha \le 0$ giving a negative value for that range.
Modifying the equation to include the speed of sound $c$ and the head radius $r$, the inter-aural time difference (ITD) turns out as:
$$\text{ITD}(\alpha) = \frac{r}{c} (\sin(\alpha) + \alpha)$$
The formula is applicable for angles $-\frac{\pi}{2} \le \alpha \le \frac{\pi}{2}$, where $\alpha = 0$ means the source is straight ahead and $\alpha = \frac{\pi}{2}$ means it is 90° to the right, if the left channel is delayed for positive ITD. The plotted $\sin(\alpha) + \alpha$ part swings from $-\frac{\pi}{2} - 1$ to $\frac{\pi}{2} + 1$, so for a realistic maximum ITD of 0.6 ms (for distant sources) one can use simply:
$$\text{ITD}(\alpha) = \frac{0.6\ \text{ms}}{\frac{\pi}{2} + 1} (\sin(\alpha) + \alpha) \approx 0.23\ \text{ms}\ (\sin(\alpha) + \alpha)$$
For sources that are behind, before using the formula, add or subtract $\pi$ to/from $\alpha$ to bring it to $-\frac{\pi}{2}\ldots\frac{\pi}{2}$ and flip its sign.
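As a quick numerical sketch (Python used purely for illustration), the distant-source formula, including the 0.6 ms scaling and the front/back folding just described, becomes:

```python
import math

MAX_ITD = 0.6e-3   # seconds; realistic maximum ITD for distant sources

def itd_far(alpha):
    """ITD in seconds for source angle alpha in radians (-pi..pi).
    0 = straight ahead, pi/2 = 90 degrees to the right."""
    if alpha > math.pi / 2:          # behind-right: subtract pi, flip sign
        alpha = -(alpha - math.pi)
    elif alpha < -math.pi / 2:       # behind-left: add pi, flip sign
        alpha = -(alpha + math.pi)
    return MAX_ITD / (math.pi / 2 + 1) * (math.sin(alpha) + alpha)
```

Note that itd_far(0) is 0, itd_far(pi/2) hits the 0.6 ms maximum exactly, and rear angles mirror their frontal counterparts.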
Spherical head, nearby source
If one wants to take into account the distance to the source, the wavefront becomes spherical, or circular on the $x, y$ plane. Let's look at a head with just one ear considered:
The head is again a unit circle, but this time centered at the origin. The source (blue) is on the negative $y$ axis at a distance $d$ from the center of the head. The ear (red) is rotated by an angle $\alpha$ counterclockwise from the positive $x$ axis, or clockwise if $\alpha$ is negative. We shall constrain $-\frac{\pi}{2} \le \alpha \le \frac{\pi}{2}$ so that the shortest air path to the ear is always via the right side of the head as pictured. At the time the sound first reaches the head, the wavefront (blue) is circular with radius $d-1$. We don't need that for the calculations, because we can use the time the sound leaves the source as the starting point. There is some negative angle $\beta$ such that if $\alpha \le \beta$, there is a direct air path of length $\sqrt{\cos(\alpha)^2 + \left(d + \sin(\alpha)\right)^2}$ to the ear located at $x = \cos(\alpha)$, $y = \sin(\alpha)$. Otherwise the sound travels a distance $D = \sqrt{\cos(\beta)^2 + \left(d + \sin(\beta)\right)^2}$ to the point $x = \cos(\beta)$, $y = \sin(\beta)$ where the straight path and the circle meet tangentially, and then continues along a curved path of length $\alpha-\beta$. There is a right triangle (pink) that enables us to solve for $\beta$ using the Pythagorean theorem:
$$d^2 = D^2 + 1^2\\
\Rightarrow \beta = - \text{asin}(\frac{1}{d}),\ D = \sqrt{d^2-1}$$
To recap, modifying the equations to include head radius $r$ and speed of sound $c$:
$$-\frac{\pi}{2} \le \alpha \le \frac{\pi}{2},\\
T_l = \left\{\begin{array}{ll}\frac{r}{c}\sqrt{\cos(\alpha)^2 + \left(\frac{d}{r} + \sin(\alpha)\right)^2}&\text{if } \alpha \le - \text{asin}(\frac{r}{d}),\\
\frac{r}{c}\left(\sqrt{(\frac{d}{r})^2-1} + \alpha + \text{asin}(\frac{r}{d})\right)&\text{otherwise,}\end{array}\right.\\
T_r = \left\{\begin{array}{ll}\frac{r}{c}\sqrt{\cos(\alpha)^2 + \left(\frac{d}{r} - \sin(\alpha)\right)^2}&\text{if } \alpha \geq \text{asin}(\frac{r}{d}),\\
\frac{r}{c}\left(\sqrt{(\frac{d}{r})^2-1} - \alpha + \text{asin}(\frac{r}{d})\right)&\text{otherwise,}\end{array}\right.\\
\text{ITD}(\alpha) = T_l(\alpha) - T_r(\alpha)$$
Again, for sources that are behind, before using the formula, add or subtract $\pi$ to/from $\alpha$ to bring it to $-\frac{\pi}{2}\ldots\frac{\pi}{2}$ and flip its sign.
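Transcribing these piecewise formulas directly (with assumed values r = 8.75 cm and c = 343 m/s, which are not from the text) gives a function that can be checked against the distant-source result:

```python
import math

def itd_near(alpha, d, r=0.0875, c=343.0):
    """Near-field ITD in seconds; alpha in -pi/2..pi/2 radians, d = distance in m (d > r)."""
    beta = math.asin(r / d)            # tangent angle from the pink right triangle
    D = math.sqrt((d / r) ** 2 - 1.0)  # straight tangent segment, in head radii

    def T(a):                          # travel time to the ear at angle a
        if a <= -beta:                 # direct air path
            return (r / c) * math.sqrt(math.cos(a) ** 2 + (d / r + math.sin(a)) ** 2)
        return (r / c) * (D + a + beta)   # tangent segment plus arc

    return T(alpha) - T(-alpha)        # T_r(alpha) equals T_l(-alpha) by symmetry

# For a far source the result approaches (r/c)*(sin(alpha) + alpha):
far = (0.0875 / 343.0) * (math.sin(0.5) + 0.5)
```

For d = 100 m the two definitions already agree to within a few hundredths of a microsecond.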
For increasing distances, this definition of ITD approaches the previous one: | {
"domain": "dsp.stackexchange",
"id": 3398,
"tags": "audio, delay"
} |
What sort of "mass" is explained by the Higgs mechanism? | Question: When I asked this question (probably in a less neutral form) to physicists, their answer was something along the lines that it's not gravity (i.e. unrelated to gravitons) but inertial mass. (So I wondered whether this is an analogous mechanism to gravitons, only that it explains inertia.) Now after some weeks of thinking (and reading) about this, I think I finally figured out what they were trying to tell me. This is related to the following comment for a similar question:
Have you made up your mind on what "mass" of a particle means to you in that question? Maybe that will help.
For me, the obvious candidates what "mass" might mean are
gravitational mass
inertial mass
rest mass
My current guess is that the Higgs mechanism explains why "other particles" (only fermions, or also other bosons?) have a non-zero rest mass. (I imagine it's some form of explanation for potential energy related to the mere presence of the particle, even in the absence of "interactions" with other particles.) However, at least some of the (popular science) explanations really seem to try to explain something related to motion and inertia, and I got the answer "inertial mass" so often that I wonder whether it's actually really the "inertial mass" (of fermions) that is "directly explained" by the Higgs mechanism (this doesn't preclude that this explanation might be "translated" into something equivalent to rest mass).
Answer: Think of the Higgs mechanism as affecting rest-mass. This is the mass that a particle has when it is sitting still (you can weigh it to figure it out).
Think of gravity as affecting energy. More energy = more gravitational force. So an electron that is moving very quickly has a total energy equal to its rest energy ($E = mc^2$) plus its kinetic energy.
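To put rough numbers on that (standard constants, not taken from the answer): the electron's rest energy is about 0.511 MeV, and a fast electron's total energy is the rest energy scaled by the Lorentz factor $\gamma$:

```python
m_e = 9.1093837015e-31      # electron mass, kg
c = 2.99792458e8            # speed of light, m/s
MeV = 1.602176634e-13       # joules per MeV

rest_energy = m_e * c**2 / MeV           # ~0.511 MeV
gamma = 1.0 / (1.0 - 0.9**2) ** 0.5      # Lorentz factor at v = 0.9c
total_energy = gamma * rest_energy       # rest energy + kinetic energy, ~1.17 MeV
```

In the annihilation example below, the two photons together carry at least $2 \times 0.511$ MeV even though neither photon has any rest mass.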
Consider an electron and a positron. These both have rest-mass. When they collide and turn into two photons, all rest-mass is gone. Energy is conserved, so the system still weighs the same at all times. But the Higgs mechanism only affects the electron and positron, not the photons. | {
"domain": "physics.stackexchange",
"id": 4179,
"tags": "mass, higgs"
} |
Which one is the correct representation of the Landau levels? | Question: Sometimes the Landau levels in a finite 2D sample are drawn as in the figure below:
where the energy $E$ is graphed against the width $x$ of the sample in real space where $x=0$ and $x=W$ are the two edges of it. But sometimes it is also graphed in the $E$ vs $k$ plane as in the figure below:
Which one of them is the correct representation of the Landau levels?
Answer: Both. Remember (or note) that in the Landau gauge $\vec A = xB\hat y$, the energy eigenfunctions of a 2D particle in a uniform magnetic field are plane waves in $y$ and exponentially localized around $x=k \ell_B^2$, with $\ell_B \equiv \sqrt{\frac{\hbar}{eB}}$ and $\hbar k$ the momentum in the $y$-direction. As a result, you could either describe the energy of these states via the $x$-coordinate about which they are localized, or via $k = x/\ell_B^2$. | {
"domain": "physics.stackexchange",
"id": 75938,
"tags": "condensed-matter, solid-state-physics, quantum-hall-effect"
} |
Modulating an oscillator by gain, pitch, and offset | Question: I have an Audio Host application that sends blocks of samples to a DLL I'm building. It sends 44100 samples per second (i.e. a sample rate of 44100 Hz), block by block, processing 16 distinct Voices.
It processes an Oscillator, which plays a basic sine wave, modulated at Audio Rate by Gain, Pitch and Offset.
Here's the C++ code I have (I'm on MSVC, Release/x86 Configuration with /O2 (/Ot) optimized settings):
#include <iostream>
#include <cstring>
#include <cmath>
#include <chrono>
const int voiceSize = 16;
const int bufferSize = 256;
const double pi = 3.141592653589793238;
const double twopi = 2 * pi;
double sampleRate = 44100.0;
double noteFrequency = 130.81278;
double hostPitch = 1.0;
#define BOUNDED(x,lo,hi) ((x) < (lo) ? (lo) : (x) > (hi) ? (hi) : (x))
class Param
{
public:
int mControlRate = 1;
double mValue = 0.5;
double mProcessedValues[voiceSize][bufferSize];
double *pModValues;
Param(double min, double max) {
mMin = min;
mMax = max;
mRange = max - min;
}
inline double GetMin() { return mMin; }
inline double GetRange() { return mRange; }
inline double GetProcessedVoiceValue(int voiceIndex, int sampleIndex) { return mProcessedValues[voiceIndex][sampleIndex]; }
inline void AddModulation(int voiceIndex, int blockSize) {
double *pModVoiceValues = &pModValues[voiceIndex * bufferSize];
double *pProcessedValues = mProcessedValues[voiceIndex];
int i = 0;
for (int sampleIndex = 0; sampleIndex < blockSize; sampleIndex += mControlRate, i++) {
pProcessedValues[i] = BOUNDED(mValue + pModVoiceValues[sampleIndex], 0.0, 1.0);
}
}
private:
double mMin, mMax, mRange;
double mProcessedVoicesValues[voiceSize][bufferSize];
};
class Oscillator
{
public:
double mRadiansPerSample = twopi / sampleRate;
double ln2per12 = std::log(2.0) / 12.0;
double mPhase[voiceSize];
Param *pGain, *pOffset, *pPitch;
Oscillator() {
pGain = new Param(0.0, 1.0);
pOffset = new Param(-900.0, 900.0);
pPitch = new Param(-48.0, 48.0);
// reset osc phase (start at 0.0)
for (int voiceIndex = 0; voiceIndex < voiceSize; voiceIndex++) {
Reset(voiceIndex);
}
}
~Oscillator() {
delete pGain;
delete pOffset;
delete pPitch;
}
void Reset(int voiceIndex) {
mPhase[voiceIndex] = 0.0;
}
void ProcessVoiceBlock(int voiceIndex, int blockSize, double noteFrequency, double *left, double *right) {
// local copy
double phase = mPhase[voiceIndex];
double offsetMin = pOffset->GetMin();
double offsetRange = pOffset->GetRange();
double pitchMin = pPitch->GetMin();
double pitchRange = pPitch->GetRange();
// precomputed data
double bp0 = noteFrequency * hostPitch;
// process block values
for (int sampleIndex = 0; sampleIndex < blockSize; sampleIndex++) {
double value = (sin(phase)) * pGain->GetProcessedVoiceValue(voiceIndex, sampleIndex);
*left++ += value;
*right++ += value;
// next phase
phase += BOUNDED(mRadiansPerSample * (bp0 * GetPitchWarped(pPitch->GetProcessedVoiceValue(voiceIndex, sampleIndex), pitchMin, pitchRange) + GetOffsetWarped(pOffset->GetProcessedVoiceValue(voiceIndex, sampleIndex), offsetMin, offsetRange)), 0, pi);
while (phase >= twopi) { phase -= twopi; }
}
// revert local copy
mPhase[voiceIndex] = phase;
}
inline double GetOffsetWarped(double normalizedValue, double min, double range) { return min + normalizedValue * range; }
inline double GetPitchWarped(double normalizedValue, double min, double range) { return exp((min + normalizedValue * range) * ln2per12); }
};
class MyPlugin
{
public:
double gainModValues[voiceSize][bufferSize];
double offsetModValues[voiceSize][bufferSize];
double pitchModValues[voiceSize][bufferSize];
Oscillator oscillator;
MyPlugin() {
// link mod arrays to params
oscillator.pGain->pModValues = gainModValues[0];
oscillator.pOffset->pModValues = offsetModValues[0];
oscillator.pPitch->pModValues = pitchModValues[0];
// some fancy data for mod
for (int voiceIndex = 0; voiceIndex < voiceSize; voiceIndex++) {
for (int sampleIndex = 0; sampleIndex < bufferSize; sampleIndex++) {
gainModValues[voiceIndex][sampleIndex] = sampleIndex / (double)bufferSize;
}
}
}
void ProcessDoubleReplace(int blockSize, double *bufferLeft, double *bufferRight) {
// init buffer
memset(bufferLeft, 0, blockSize * sizeof(double));
memset(bufferRight, 0, blockSize * sizeof(double));
// voices
for (int voiceIndex = 0; voiceIndex < voiceSize; voiceIndex++) {
// envelopes - here's where mod values will change, at audio rate
// add mod to params
oscillator.pGain->AddModulation(voiceIndex, blockSize);
oscillator.pOffset->AddModulation(voiceIndex, blockSize);
oscillator.pPitch->AddModulation(voiceIndex, blockSize);
// osc buffer
oscillator.ProcessVoiceBlock(voiceIndex, blockSize, noteFrequency, bufferLeft, bufferRight);
}
}
};
int main(int argc, const char *argv[]) {
double bufferLeft[bufferSize];
double bufferRight[bufferSize];
MyPlugin myPlugin;
// audio host call
int numProcessing = 1024 * 50;
int counterProcessing = 0;
std::chrono::high_resolution_clock::time_point pStart = std::chrono::high_resolution_clock::now();
while (counterProcessing++ < numProcessing) {
int blockSize = 256;
myPlugin.ProcessDoubleReplace(blockSize, bufferLeft, bufferRight);
// do somethings with buffer
}
std::chrono::high_resolution_clock::time_point pEnd = std::chrono::high_resolution_clock::now();
std::cout << "execution time: " << std::chrono::duration_cast<std::chrono::milliseconds>(pEnd - pStart).count() << " ms" << std::endl;
}
Considering I'm running 16 Voices simultaneously, it takes a huge amount of CPU for a simple Gain/Pitch/Offset modulation.
I hope I can do it better. Any tips/suggestions? Vectorizing?
Note: I could use std::clamp and C++17, but it doesn't matter a lot here: nothing really changes with that expedient.
Answer: counting flops
Let's unpack the inner loop, where all the work is being done. I am repeating the whole thing here, wrapping to 80 columns and adding some indentation to make it easier to read.
double value = sin(phase) *
pGain->GetProcessedVoiceValue(voiceIndex, sampleIndex);
*left++ += value;
*right++ += value;
// next phase
phase += BOUNDED(
mRadiansPerSample * (
bp0 * GetPitchWarped(pPitch->GetProcessedVoiceValue(voiceIndex,
sampleIndex), pitchMin, pitchRange)
+ GetOffsetWarped(pOffset->GetProcessedVoiceValue(voiceIndex,
sampleIndex), offsetMin, offsetRange)
),
0, pi
);
while (phase >= twopi) { phase -= twopi; }
// . . .
inline double GetOffsetWarped(double normalizedValue, double min, double range)
{
return min + normalizedValue * range;
}
inline double GetPitchWarped(double normalizedValue, double min, double range)
{
return exp((min + normalizedValue * range) * ln2per12);
}
Let's see if I've got this straight. This is your program, as I understand it. You have 16 independent oscillators, each with their own time-varying frequency and gain. The frequency is a function of a pitch and an offset frequency. For now, I'll define pitch as separate from frequency (omega) in more or less the same manner you have done, switching to python for brevity: omega = omega_C * 2**(pitch/12), where omega_C = TAU*440*2**(-2 + 3/12) is the frequency of the C below middle C. So pitch is in semitones and frequency is in rad/s. In numpy terms, you are doing the following for each oscillator:
# code sample 0
# Rescale user inputs.
pitches = pitch_min + pitches_raw*pitch_range
omega_offsets = omega_offset_min + omega_offsets_raw*omega_offset_range
gains = gain_min + gains_raw*gain_range
omegas = omega_C * 2**(pitches/12) + omega_offsets
values = gains*np.sin(phase0 + np.cumsum(omegas)*dt)
That is basically the entire program. First, the constants.
dt is the time in seconds between samples. dt = 1/sample_rate, sample_rate = 44100. I have written the cumulative sum using this dt to illustrate the direct correspondence to the integral over a time-varying omega that the sum is approximating.
phase0 is the oscillator's phase at the beginning of the block you are currently processing.
gains, pitches, and omega_offsets are three floating point user inputs, gains and pitches possibly being mapped to an aftertouch MIDI keyboard with a pitch bend and omega_offsets mapped to a sweet MIDI slider, all being sampled at the audio sample rate. They are being represented as numpy vectors, hence the esses. Gains range from 0 to 1, pitches range from 4 octaves below omega_C to 4 octaves above omega_C, and omega_offsets range from -900 Hz to 900 Hz. (Incidentally, your gain min and max are not being used!) That offset frequency is super weird and kinda cool. That makes no sense from a conventional music theory standpoint. With any offset at all, pitch will no longer match up with octaves. I'd actually like to hear what that sounds like.
You could have made it easier for the reader to figure all that out. If I didn't already have some idea what to look for, I would have been completely lost. You should find a way to get your code to look as much like code sample 0 as possible.
Anyway, how fast should this loop run? How many flops does this loop translate to? Counting the number of operations per sample and using code sample 0 as a guide, I count 5 +s, 7 *s, an exp and a sin. Adding another + for when the 16 voices are added, that's 6 +s. Using this benchmark as a rough guideline, that translates to 6 + 7 + 10 + 15 = 38 flops per sample per voice. Multiplying by 16 voices and 44100 samples per second, that's 27 megaflops. A 1 GHz processor has a thousand cycles to do just 27 of those calculations. You should be seeing almost zero CPU on your toaster. There's something else going on here.
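To make the correspondence concrete, here is a runnable Python/numpy version of code sample 0 for a single voice and a single 256-sample block. The parameter ranges mirror the C++ (±48 semitones, ±900 Hz), the modulation inputs are arbitrary illustrative values, and the per-sample clamping of the phase increment is omitted for brevity:

```python
import numpy as np

sample_rate = 44100.0
note_freq = 130.81278            # Hz, as in the C++ source
block = 256

def render(phase0, gain_raw, pitch_raw, offset_raw):
    """One voice, one block; the *_raw inputs are normalized 0..1 vectors."""
    pitch = -48.0 + pitch_raw * 96.0           # semitones
    offset = -900.0 + offset_raw * 1800.0      # Hz
    freq = note_freq * 2.0 ** (pitch / 12.0) + offset
    dphase = 2.0 * np.pi * freq / sample_rate  # per-sample phase increment
    phases = phase0 + np.cumsum(dphase)
    return gain_raw * np.sin(phases), phases[-1] % (2.0 * np.pi)

gain_ramp = np.arange(block) / block           # the "fancy data" ramp from MyPlugin
out, phase_end = render(0.0, gain_ramp,
                        np.full(block, 0.5),   # pitch_raw 0.5 -> 0 semitones
                        np.full(block, 0.5))   # offset_raw 0.5 -> 0 Hz offset
```

Each line maps onto a line of code sample 0, which makes the flop count above straightforward to verify.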
In fact, there's nothing wrong with your code. It was your benchmark. See the epilogue. But I left all these suggestions here anyway if for some reason you want your code to go even faster.
small loops
Your code might be slower than it could be because your inner loop is too big. Loops run fastest when they are as small as possible. This is because the fastest memory your system has—registers—is in extremely limited supply. If your loop is too big, the system runs out of registers and it has to use slower memory to run the loop. You can break up that inner loop into smaller loops—one per line of code in code sample 0 should work OK. In theory, that should make your code run faster. But the compiler is actually able to figure out most of that, since everything is contained in a single file. It is able to inline whatever it wants and reorder execution however it wants, because it knows what every function is actually doing. This idea is closely related to vectorisation.
vectorising
You're doing the same operations blockSize times. You should be able to speed that up with simd. The Intel MKL has simd-accelerated vectorised sin(), etc. that you can use. Code sample 0 gives you a basic idea of what the vectorised code is going to look like. The MKL docs don't give an example, so I'll give one now. As a reminder, you really don't have to do this. See the epilogue.
#include <mkl.h>
// determines the size of a static array.
#define SASIZE(sa) (sizeof(sa)/sizeof(*sa))
// . . .
float phases[] = {1.f, 2.f, 3.f, 4.f, 5.f};
float sin_phases[SASIZE(phases)];
vsSin(SASIZE(phases), phases, sin_phases);
// Now, sin_phases[i] = sin(phases[i]). All of the sin()s were computed
// all at once, about 4 or 8 at a time using simd instructions.
float
This is all audio data, so you don't need doubles. Floats will work just fine, and they'll be way way faster, especially after you get simd working.
integral phase
You can store and manipulate phase using uint32_ts. It is naturally modular with modulus 2**32, so you never need to do any fmods or subtractions, especially repeated subtractions in a loop.
while (phase >= twopi) { phase -= twopi; } // idk about this...
That there is some weirdness. The mod 2**32 happens automatically when converting from float, adding, subtracting, whatever, so it should be a lot faster and a lot more accurate. You should only convert back to float phase before a sin operation or something like that. Here's an example. I also included an example of how to do the same modulo operation in all floating point numbers.
#include <cstdio> // Hey, I like *printf().
#include <cinttypes>
// These are actually templated C++ math functions. The c doesn't
// stand for C, I guess...
#include <cmath>
using namespace std;
#define TAU 6.283185307179586
// fmodf, but always rounding toward negative infinity.
// The output will always be in [0, y). This is not the case
// with fmod(). See man fmod.
float fmodf_rd(float x, float y)
{
float r = fmod(x, y);
return r < 0.f? r + y : r;
}
uint32_t integral_phase(float phi)
{
return phi / (float)TAU * 0x1p32f;
}
float float_phase(uint32_t phi_i)
{
return (float)phi_i * (float)TAU * 0x1p-32f;
}
int main(int argc, const char *argv[])
{
// Phases, in radians.
float alpha = TAU/12., beta = atan(2.f);
uint32_t alpha_i = integral_phase(alpha), beta_i = integral_phase(beta);
// Phase calculations in radians and floating point:
float gamma = fmodf_rd(5.f*alpha - 7.f*beta, (float)TAU);
// The same phase calculation in integral phase. Note there is no
// mod operation at all.
uint32_t gamma_i = 5*alpha_i - 7*beta_i;
printf("difference = %.9g\n", (double)(float_phase(gamma_i) - gamma));
return 0;
}
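The same trick can be sketched in Python, masking to 32 bits by hand since Python integers do not wrap; this shows the accumulate-then-convert flow without any explicit mod-TAU step:

```python
import math

TAU = 2.0 * math.pi
MASK = 0xFFFFFFFF                 # emulate uint32_t wraparound

def integral_phase(phi):
    """Radians -> 32-bit integral phase."""
    return int(phi / TAU * 2.0**32) & MASK

def float_phase(phi_i):
    """32-bit integral phase -> radians in [0, TAU)."""
    return phi_i * TAU / 2.0**32

alpha_i = integral_phase(TAU / 12.0)
beta_i = integral_phase(math.atan(2.0))
# The mod-TAU wrap happens "for free" via the 32-bit mask.
gamma_i = (5 * alpha_i - 7 * beta_i) & MASK
gamma = float_phase(gamma_i)
```

The result agrees with the all-floating-point fmod version to within a few units of the 32-bit phase quantization.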
Epilogue: An M. Night Shyamalan twist.
This has been bothering me enough that I finally ran your code. You were saying you were getting slow results, but your code, though a little funky, did not look particularly slow actually. So I assumed it was something wrong with your benchmark. I nuked the C++ chrono badness and replaced it with dtime(), a wrapper I wrote around clock_gettime(). dtime() returns the current value of CLOCK_MONOTONIC in seconds as a double. I also redid the benchmark math, outputting the performance in terms of %CPU. This is the diff:
--- mod.cpp- 2018-11-20 03:19:11.091296221 -0500
+++ mod.cpp 2018-11-20 03:39:39.529689861 -0500
@@ -1,7 +1,21 @@
-#include <iostream>
+#include <stdio.h>
+#include <time.h>
+#include <assert.h>
+
#include <cstring>
#include <cmath>
-#include <chrono>
+
+double timespec_to_d(struct timespec *ts)
+{
+ return (double)ts->tv_sec + 1e-9*(double)ts->tv_nsec;
+}
+double dtime(void)
+{
+ struct timespec ts;
+ int res = clock_gettime(CLOCK_MONOTONIC, &ts);
+ assert(res == 0);
+ return timespec_to_d(&ts);
+}
const int voiceSize = 16;
const int bufferSize = 256;
@@ -154,7 +168,7 @@
// audio host call
int numProcessing = 1024 * 50;
int counterProcessing = 0;
- std::chrono::high_resolution_clock::time_point pStart = std::chrono::high_resolution_clock::now();
+ double t0 = dtime();
while (counterProcessing++ < numProcessing) {
int blockSize = 256;
myPlugin.ProcessDoubleReplace(blockSize, bufferLeft, bufferRight);
@@ -162,6 +176,8 @@
// do somethings with buffer
}
- std::chrono::high_resolution_clock::time_point pEnd = std::chrono::high_resolution_clock::now();
- std::cout << "execution time: " << std::chrono::duration_cast<std::chrono::milliseconds>(pEnd - pStart).count() << " ms" << std::endl;
+ double dt_busy = (dtime() - t0)/numProcessing/bufferSize,
+ dt_total = 1./sampleRate;
+ printf("CPU = %.6g %%\n", dt_busy/dt_total);
+ return 0;
}
dt_busy is the total amount of time your code takes to process a single sample. dt_total, the time between samples, is the total amount of time your code has to process a single sample. Divide the two and you predict %CPU usage while your DLL is running.
aaaaaaand this is the output:
% g++ -o mod mod.cpp
% ./mod
CPU = 0.105791 %
%
When running as a plugin with realtime user input streams, your code will use .1% CPU. It was your benchmark the entire time. | {
"domain": "codereview.stackexchange",
"id": 32679,
"tags": "c++, performance, audio, signal-processing"
} |
Ray reflection with a triangle | Question: I'm trying to reflect a ray off a triangle using Processing and I'm not sure I got the code right. I'm using two bits of pseudocode from two locations:
Intersection of a Line with a Plane from Paul Bourke's site.
Reflecting a vector from 3DKingdoms site.
Here's my code so far:
PVector[] face = new PVector[3];
float ai = TWO_PI/3;//angle increment
float r = 300;//overall radius
float ro = 150;//random offset
PVector n;//normal
Ray r1;
void setup(){
size(500,500,P3D);
for(int i = 0 ; i < 3; i++) face[i] = new PVector(cos(ai * i) * r + random(ro),random(-50,50),sin(ai * i) * r + random(ro));
r1 = new Ray(new PVector(-100,-200,-300),new PVector(100,200,300));
}
void draw(){
background(255);
lights();
translate(width/2, height/2,-500);
rotateX(map(mouseY,0,height,-PI,PI));
rotateY(map(mouseX,0,width,-PI,PI));
//draw plane
beginShape(TRIANGLES);
for(PVector p : face) vertex(p.x,p.y,p.z);
endShape();
//normals
PVector c = new PVector();//centroid
for(PVector p : face) c.add(p);
c.div(3.0);
PVector cb = PVector.sub(face[2],face[1]);
PVector ab = PVector.sub(face[0],face[1]);
n = cb.cross(ab);//compute normal
line(c.x,c.y,c.z,n.x,n.y,n.z);//draw normal
pushStyle();
//http://paulbourke.net/geometry/planeline/
//line to plane intersection u = N dot ( P3 - P1 ) / N dot (P2 - P1), P = P1 + u (P2-P1), where P1,P2 are on the line and P3 is a point on the plane
PVector P2SubP1 = PVector.sub(r1.end,r1.start);
PVector P3SubP1 = PVector.sub(face[0],r1.start);
float u = n.dot(P3SubP1) / n.dot(P2SubP1);
PVector P = PVector.add(r1.start,PVector.mult(P2SubP1,u));
strokeWeight(5);
point(P.x,P.y,P.z);//point of ray-plane intersection
//vector reflecting http://www.3dkingdoms.com/weekly/weekly.php?a=2
//R = 2*(V dot N)*N - V
//Vnew = -2*(V dot N)*N + V
//PVector V = PVector.sub(r1.start,r1.end);
PVector V = PVector.sub(r1.start,P);
PVector R = PVector.sub(PVector.mult(n,3 * (V.dot(n))),V);
strokeWeight(1);
stroke(0,192,0);
line(P.x,P.y,P.z,R.x,R.y,R.z);
stroke(192,0,0);
line(r1.start.x,r1.start.y,r1.start.z,P.x,P.y,P.z);
stroke(0,0,192);
line(P.x,P.y,P.z,r1.end.x,r1.end.y,r1.end.z);
popStyle();
}
void keyPressed(){ setup(); }//reset
class Ray{
PVector start = new PVector(),end = new PVector();
Ray(PVector s,PVector e){ start = s ; end = e; }
}
I've tried to keep it simple and comment it. Still, the reflected vector (drawn in green) doesn't seem to be quite right:
I'm not sure if the reflected vector should meet the normal. Should I normalize/scale some vectors at some point?
Answer: Turns out I needed to normalize the normal first:
n.normalize();
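The reason this matters: the reflection formula $R = 2(V \cdot N)N - V$ assumes $N$ is a unit vector; with $|N| = s$, the first term picks up a factor $s^2$, which skews the result. A minimal numpy check (hypothetical vectors, not taken from the sketch above):

```python
import numpy as np

def reflect(v, n):
    n = n / np.linalg.norm(n)       # normalize first -- the crucial fix
    return 2.0 * v.dot(n) * n - v   # R = 2*(V dot N)*N - V

v = np.array([1.0, 1.0, 0.0])       # from the hit point back toward the ray start
n = np.array([0.0, 5.0, 0.0])       # deliberately non-unit surface normal
r = reflect(v, n)                   # mirrored across the normal: [-1, 1, 0]
```

A reflected vector should also preserve length, which is another quick property to assert when debugging.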
full code for reference:
PVector[] face = new PVector[3];
float ai = TWO_PI/3;//angle increment
float r = 300;//overall radius
float ro = 150;//random offset
PVector n;//normal
Ray r1;
void setup() {
size(500, 500, P3D);
for (int i = 0 ; i < 3; i++) face[i] = new PVector(cos(ai * i) * r + random(ro), random(-50, 50), sin(ai * i) * r + random(ro));
r1 = new Ray(new PVector(-100, -200, -300), new PVector(100, 200, 300));
}
void draw() {
background(255);
lights();
translate(width/2, height/2, -500);
rotateX(map(mouseY, 0, height, -PI, PI));
rotateY(map(mouseX, 0, width, -PI, PI));
//draw plane
beginShape(TRIANGLES);
for (PVector p : face) vertex(p.x, p.y, p.z);
endShape();
//normals
PVector c = new PVector();//centroid
for (PVector p : face) c.add(p);
c.div(3.0);
PVector cb = PVector.sub(face[2], face[1]);
PVector ab = PVector.sub(face[0], face[1]);
n = cb.cross(ab);//compute normal
n.normalize();
line(c.x, c.y, c.z, n.x, n.y, n.z);//draw normal
pushStyle();
//http://paulbourke.net/geometry/planeline/
//line to plane intersection u = N dot ( P3 - P1 ) / N dot (P2 - P1), P = P1 + u (P2-P1), where P1,P2 are on the line and P3 is a point on the plane
PVector P2SubP1 = PVector.sub(r1.end, r1.start);
PVector P3SubP1 = PVector.sub(face[0], r1.start);
float u = n.dot(P3SubP1) / n.dot(P2SubP1);
PVector P = PVector.add(r1.start, PVector.mult(P2SubP1, u));
strokeWeight(5);
point(P.x, P.y, P.z);//point of ray-plane intersection
//vector reflecting http://www.3dkingdoms.com/weekly/weekly.php?a=2
//R = 2*(V dot N)*N - V
//Vnew = -2*(V dot N)*N + V
//PVector V = PVector.sub(r1.start,r1.end);
PVector V = PVector.sub(r1.start, P);
PVector R = PVector.sub(PVector.mult(n, 2 * (V.dot(n))), V);
strokeWeight(1);
stroke(0, 192, 0);
line(P.x, P.y, P.z, R.x, R.y, R.z);
stroke(192, 0, 0);
line(r1.start.x, r1.start.y, r1.start.z, P.x, P.y, P.z);
stroke(0, 0, 192);
line(P.x, P.y, P.z, r1.end.x, r1.end.y, r1.end.z);
popStyle();
}
void keyPressed() {
setup();
}//reset
class Ray {
PVector start = new PVector(), end = new PVector();
Ray(PVector s, PVector e) {
start = s ;
end = e;
}
} | {
"domain": "codereview.stackexchange",
"id": 1533,
"tags": "computational-geometry, processing, raytracing"
} |
Why does my Gazebo GUI look completely different from what I see in the docs? | Question:
I want to get started with Gazebo, so I installed Gazebo Garden on Ubuntu 22.04 and opened it alongside a basic tutorial. The GUI I see on my laptop looks completely different from what I see in the docs though. The first two images below are what I see. Below that is what I find in the docs. Are the docs old and am I seeing the new UI? Or what am I doing wrong here?
Also, in the About window I see I've got version 7 Garden. As far as I understand, Gazebo Garden is the latest version, but I also see versions up to 11, so I don't understand how I can have Garden (the latest stable version) and yet version 7 (which I'd guess is a very old version). What's going on there?
From the docs:
Originally posted by kramer65 on Gazebo Answers with karma: 3 on 2023-06-11
Post score: 0
Answer:
Gazebo 11 is a version of old Gazebo, now known as Gazebo-classic. The new Gazebo used to be called Ignition, but due to trademark reasons, it has been renamed back to Gazebo (see https://community.gazebosim.org/t/a-new-era-for-gazebo/1356). The latest version of the new Gazebo collection is Garden and contains the simulator gz-sim v7.x.
Originally posted by azeey with karma: 704 on 2023-06-12
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 4708,
"tags": "gazebo-7"
} |
Problem for backwards moving vehicle in hills | Question: A vehicle driving in hilly terrain with a manual transmission can roll backwards when starting on a slope. To avoid this problem, would a ratchet-and-pinion mechanism be suitable, or not?
Answer: There is a system called a "hill holder" in cars which engages the parking brake on a hill while stopped. When the clutch is engaged to drive up the hill, the parking brake is slowly released by the mechanism. Manual transmission Subarus had this feature years ago. | {
"domain": "engineering.stackexchange",
"id": 3184,
"tags": "mechanical-engineering, automotive-engineering"
} |
Semi-playable chess game in Ruby | Question: I made a simple, semi-playable command line Chess game for my class exercise.
At this point, this game can do the following things:
Print out the updated board representation after every move
Players can take turns
Players can actually move pieces
The code is pretty long so I don't really expect people to read all of it. However, if you can somehow skim through it and provide some quick feedback, I would be really grateful! I just started learning Ruby and really want to be conscious of how I arrange objects and follow the SOLID principles.
Besides that, I also have one quick question: Is it recommended to call a lot of functions within one function? Sometimes I call 5 or 6 functions within the initialization method of a class. I am worried that might cause some trouble in the future. However, if I don't do that, I will have to call those functions one by one in the main app.rb, which would make the code there longer. Which approach is generally better?
App.rb
require_relative("lib/input.rb")
require_relative("lib/Board.rb")
require_relative("lib/Chess.rb")
require_relative("lib/Rook.rb")
require_relative("lib/Pawn.rb")
require_relative("lib/King.rb")
require_relative("lib/Bishop.rb")
require_relative("lib/Knight.rb")
require_relative("lib/Queen.rb")
puts "Welcome to this boring Chess Game!"
puts "It is so boring. U sure u want to play? y/n "
enter_game = gets.chomp.upcase
puts "you don't have a choice" if enter_game == "N"
puts "-----------------------"
puts "-----------------------"
puts "X: 1,2,3,4,5,6,7,8"
puts "Y: 1,2,3,4,5,6,7,8"
puts "Sample input: 58 would be Black King(bKing)"
puts "-----------------------"
puts "-----------------------"
board1 = Board.new
board1.show
play_condition = true
while play_condition
ori_cor = board1.ask_for_ori
board_move_validity_ori = board1.check_board_valid(ori_cor)
# a convert function might be more useful here because we are reusing it again and again and also it's pretty simple to set up
# puts "value 0 #{index_x}"
# puts index_y
if !board_move_validity_ori
# puts "false condition"
board1.prompt_valid
next
end
#puts index_start_x
#puts index_start_y
des_cor = board1.ask_for_des
board_move_validity_des = board1.check_board_valid(des_cor,1)
if !board_move_validity_des
board1.prompt_valid
next
end
array_ori = board1.convert(ori_cor)
array_des = board1.convert(des_cor)
start_x = array_ori[0]
start_y = array_ori[1]
# obtaining x,y index from the string input
index_start_x = array_ori[2]
index_start_y = array_ori[3]
final_x = array_des[0]
final_y = array_des[1]
index_final_x = array_des[2]
index_final_y = array_des[3]
#p array_ori
#p array_des
intended_piece = board1.grid[index_start_x][index_start_y]
#puts "can move" if intended_piece.can_move?(final_x,final_y)
#puts piece_name # pice name obtained
#puts intended_piece
piece_move_validity = intended_piece.can_move?(final_x,final_y)
#puts piece_move_validity
if piece_move_validity == true
puts "can move"
# des_status = board1.check_empty(index_final_x,index_final_y)
# if des_status == "empty"
#puts "start moving"
board1.move(intended_piece,index_start_x,index_start_y,index_final_x,index_final_y)
# puts "call move function"
# else
# # compare
# puts "call compare function"
# end
else
board1.prompt_valid
end
board1.show
end
Board.rb
class Board
include Input
attr_reader :grid
def initialize
@turn = 0
bR = Rook.new(1,8,"black","B5")
bK = Knight.new(2,8,"black","B4")
bB = Bishop.new(3,8,"black","B3")
bQ = Queen.new(4,8,"black","B2")
bKing = King.new(5,8,"black","B1")
bB2 = Bishop.new(6,8,"black","B3")
bK2 = Knight.new(7,8,"black","B4")
bR2 = Rook.new(8,8,"black","B5")
wR = Rook.new(1,1,"white","W5")
wK = Knight.new(2,1,"white","W4")
wB = Bishop.new(3,1,"white","W3")
wQ = Queen.new(4,1,"white","W2")
wKing = King.new(5,1,"white","W1")
wB2 = Bishop.new(6,1,"white","W3")
wK2 = Knight.new(7,1,"white","W4")
wR2 = Rook.new(8,1,"white","W5")
bP1 = Pawn.new(1,7,"black","BP")
bP2 = Pawn.new(2,7,"black","BP")
bP3 = Pawn.new(3,7,"black","BP")
bP4 = Pawn.new(4,7,"black","BP")
bP5 = Pawn.new(5,7,"black","BP")
bP6 = Pawn.new(6,7,"black","BP")
bP7 = Pawn.new(7,7,"black","BP")
bP8 = Pawn.new(8,7,"black","BP")
wP1 = Pawn.new(1,2,"white","WP")
wP2 = Pawn.new(2,2,"white","WP")
wP3 = Pawn.new(3,2,"white","WP")
wP4 = Pawn.new(4,2,"white","WP")
wP5 = Pawn.new(5,2,"white","WP")
wP6 = Pawn.new(6,2,"white","WP")
wP7 = Pawn.new(7,2,"white","WP")
wP8 = Pawn.new(8,2,"white","WP")
@grid = [
[bR,bK,bB,bQ,bKing,bB2,bK2,bR2],
[bP1,bP2,bP3,bP4,bP5,bP6,bP7,bP8],
["00","00","00","00","00","00","00","00"],
["00","00","00","00","00","00","00","00"],
["00","00","00","00","00","00","00","00"],
["00","00","00","00","00","00","00","00"],
[wP1,wP2,wP3,wP4,wP5,wP6,wP7,wP8],
[wR,wK,wB,wQ,wKing,wB2,wK2,wR2],
]
end
def check_empty(final_x,final_y)
if @grid[final_x][final_y] == "00"
"empty"
else
@grid[final_x][final_y]
end
end
def show
puts "Current Board Status:"
@grid.each do |row|
row.each do |element|
if element == "00"
print "|#{element}|"
else
print "|#{element.name}|"
end
end
print "\n"
end
end
def prompt_valid
puts "Plz input valid coordinates for the piece"
puts "-----------------------"
puts "-----------------------"
puts "X: 1,2,3,4,5,6,7,8"
puts "Y: 1,2,3,4,5,6,7,8"
puts "Sample input: 58 would be B1 / Black King(bKing)"
puts "-----------------------"
puts "-----------------------"
end
def move piece, index_start_x, index_start_y, index_des_x, index_des_y
#puts index_start_x,index_start_y,index_des_x,index_des_y
#puts "destination original value #{@grid[index_des_x][index_des_y]}"
#puts "destination become #{@grid[index_des_x][index_des_y].name}"
#puts "original is #{@grid[index_des_x][index_des_y].name}"
@grid[index_des_x][index_des_y] = piece.clone
@grid[index_des_x][index_des_y].start_x = index_start_y + 1
@grid[index_des_x][index_des_y].start_y = 8 - index_start_x
@grid[index_start_x][index_start_y] = "00"
if @turn ==1
@turn = 0
elsif @turn == 0
@turn = 1
end
end
def compare
end
end
Input.rb
module Input
def ask_for_ori
puts "-------------------------"
puts "-------------------------"
if @turn ==0
puts "White chess turn"
else
puts "Black chess turn"
end
puts "Input the coordinates of the piece you want to move"
puts "-------------------------"
puts "------------------------"
input_cor = gets.chomp
end
def ask_for_des
puts "-------------"
puts "Input the coordinates of the destination"
input_des = gets.chomp
end
def check_board_valid(input_cor, des = 0) # whether input is within the range
array1 = input_cor.split("")
if des == 0
if (@grid[8-(array1[1].to_i)][(array1[0].to_i)-1].name =~ /[W]/ ) && @turn == 0
puts "first con"
true
elsif (@grid[8-(array1[1].to_i)][(array1[0].to_i)-1].name =~ /[B]/) && @turn ==1
puts "second con"
true
else
puts "It's not your turn yet piece of shit"
return false
end
end
#puts @grid[8-(array1[1].to_i)][(array1[0].to_i)-1]
if (1..8).include?(array1[0].to_i) && (1..8).include?(array1[1].to_i) && array1.length ==2
if @grid[8-(array1[1].to_i)][(array1[0].to_i)-1] == "00" && des ==0
puts "This slot has an empty piece.pls try again"
return false
end
return true
else
return false
end
end
def convert input_cor
start_x = input_cor.split("")[0].to_i
start_y = input_cor.split("")[1].to_i
# obtaining x,y index from the string input
index_start_x = 8 - start_y
#puts "after math equal #{index_start_x}"
index_start_y = start_x -1
array1 = start_x,start_y,index_start_x,index_start_y
#p array1
#array1
end
end
For the sake of not making this page looks too horrifying, other source code is included in my git. Please check it out if you have the time.
Answer: First of all, some variable names can be more descriptive.
It looks like you're using tabs for indentation. The Ruby convention is to use 2 spaces.
It's conventional to use single quotes if there is no interpolation in the string.
Constructions like that one:
if @x_diff <= 1 && @y_diff <= 1
true
else
false
end
Can be simplified to:
@x_diff <= 1 && @y_diff <= 1
You're using @turn as some kind of "flag" with two possible states. Why not use true/false? That piece of code can be rewritten in a more concise way:
if @turn == 1
@turn = 0
elsif @turn == 0
@turn = 1
end
Like that:
@turn = false
# ...
@turn = !@turn # true now plays the role of 1, false the role of 0
"00" is used for "placeholder" of empty square. It will be easier to use false or nil. And use "00" only for rendered part.
Updated @grid and Board#show can be rewritten in that way:
@grid = [
[bR,bK,bB,bQ,bKing,bB2,bK2,bR2],
[bP1,bP2,bP3,bP4,bP5,bP6,bP7,bP8],
[nil] * 8,
[nil] * 8,
[nil] * 8,
[nil] * 8,
[wP1,wP2,wP3,wP4,wP5,wP6,wP7,wP8],
[wR,wK,wB,wQ,wKing,wB2,wK2,wR2],
]
#...
def show
puts "Current Board Status:"
@grid.each do |row|
row.each { |element| print "|#{element ? element.name : '00'}|" }
print "\n"
end
end
Board#check_board_valid has a bunch of responsibilities. It looks more like "functional-styled" code. It can be separated into simple methods:
def check_board_valid(input_cor, des = 0) # whether input is within the range
input_cor = input_cor.split('').map(&:to_i)
return unless coord_valid?(input_cor) # maybe add some message here
figure = @grid[8 - input_cor.last][input_cor.first - 1]
return puts('This slot has an empty piece.pls try again') unless figure
if des == 0 # this condition still can be simplified
if whites_move?(figure)
puts 'first con'
true # probably, that line can be removed after overall refactoring
elsif blacks_move?(figure)
puts 'second con'
true # probably, that line can be removed after overall refactoring
else
puts 'It\'s not your turn yet piece of shit'
return false
end
end
end
private
def coord_valid?(input_cor)
input_cor.size == 2 && input_cor.all? { |coord| (1..8).include?(coord) }
end
def whites_move?(figure)
figure.is_white? && !@turn
end
def blacks_move?(figure)
figure.is_black? && @turn
end
# at "lib/Chess.rb"
class Chess
#...
def is_black?
self.name =~ /[B]/
end
def is_white?
self.name =~ /[W]/
end
end
That's it for now. I hope you get the main idea.
"domain": "codereview.stackexchange",
"id": 20561,
"tags": "object-oriented, ruby, chess"
} |
Problem of Klein-gordon Equation | Question: Ryder in his QFT book writes in eqn (2.20):
Probability density, $\rho = \frac{i\hbar}{2m}(\phi^*\frac{\partial \phi}{\partial t} - \phi \frac{\partial \phi^*}{\partial t})$
Then in the next paragraph he writes: Since the Klein-Gordon equation is second order, $\phi$ and $\frac{\partial \phi}{\partial t}$ can be fixed arbitrarily at a given time, so $\rho$ may take on negative values...
What does he mean by this line?
Answer: He means that since it is a second order differential equation, it is completely determined by two sets of information, namely the value of $\phi$ and ${\dot \phi}$ at some time $t=t_0$. In other words, $\phi(t_0)$ and ${\dot \phi}(t_0)$ parameterize the solution. We can therefore choose them to take any values, some of which will imply a negative value of $\rho$.
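To make this concrete, here is a quick numerical sketch (with $\hbar = m = \omega = 1$, purely illustrative values, and a spatially uniform field for simplicity): the negative-frequency solution $\phi = e^{+i\omega t}$ yields a negative $\rho$.

```python
# Check that the negative-frequency solution phi(t) = exp(+i*omega*t)
# gives a negative Klein-Gordon density rho (hbar = m = omega = 1 here).
import cmath

hbar, m, omega, t = 1.0, 1.0, 1.0, 0.3

phi = cmath.exp(1j * omega * t)
phi_dot = 1j * omega * phi  # time derivative of phi

rho = (1j * hbar / (2 * m)) * (phi.conjugate() * phi_dot
                               - phi * phi_dot.conjugate())

print(rho.real)  # ~ -1.0, i.e. rho = -(hbar*omega/m)|phi|^2 < 0
```

The positive-frequency solution $e^{-i\omega t}$ gives the same magnitude with a positive sign, which is the sign ambiguity Ryder is pointing at.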
"domain": "physics.stackexchange",
"id": 9158,
"tags": "quantum-field-theory, klein-gordon-equation"
} |
Are there other finite element types besides the usual? | Question: By finite element type I specifically mean element geometry.
I am aware of Beam/Strut, Quad, Tri, Tet & Hex but are there other types, eg a hexagonal prism, rhombihedral etc.. ?
Answer: Certain geometries can benefit from polyhedral elements or elements with edge degrees of freedom. I can think of three main developments in that direction:
1) Voronoi cell finite elements, e.g., https://www.sciencedirect.com/science/article/pii/0045794994904359
2) Isogeometric elements, e.g., https://www.sciencedirect.com/science/article/pii/S0045782504005171
3) Finite elements based on exterior calculus, e.g., https://www.cambridge.org/core/journals/acta-numerica/article/finite-element-exterior-calculus-homological-techniques-and-applications/1A2AEB067BCA561D9ED6D674026539B9
These elements are still being actively designed, but attention today is mostly on isogeometric and exterior calculus-based elements, primarily because of meshing issues and the need to solve multiphysics problems and to avoid element locking.
"domain": "engineering.stackexchange",
"id": 1887,
"tags": "finite-element-method, meshing"
} |
Sorting criteria for Kruskal's algorithm | Question: I am studying Kruskal's algorithm. Is the only acceptable sorting criterion to sort edges from the lowest weighted edge to the greatest weighted edge? I ask this because I assume that if the algorithm used an alternative sorting criterion, say greatest to lowest, then it would not produce an MST. My reasoning is that edges with larger weights would be added first.
Any help with this concept would be greatly appreciated.
Answer: Yes, for Kruskal's you have to sort it in ascending order of weights, for precisely the reason that you gave.
There is, however, another algorithm, namely Reverse-Delete which is essentially the opposite of Kruskal's. You start with the original graph, and process edges in descending order of weight. Then we remove each edge while the graph is still connected. | {
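To illustrate (a minimal sketch, not from the original post): Kruskal's with the edges sorted in ascending order of weight, using a small union-find to reject cycle-forming edges.

```python
# Minimal Kruskal's: edges MUST be processed in ascending weight order.
def kruskal(n, edges):
    """n: number of vertices (0..n-1); edges: list of (weight, u, v)."""
    parent = list(range(n))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):  # ascending weights: essential for an MST
        ru, rv = find(u), find(v)
        if ru != rv:               # this edge joins two components, no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]
mst = kruskal(4, edges)
print(sum(w for w, _, _ in mst))  # 6: the edges of weight 1, 2 and 3
```

Greedily adding edges in descending order instead would not, in general, produce an MST; descending order is the starting point of Reverse-Delete, which removes each heaviest edge as long as the graph stays connected.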
"domain": "cs.stackexchange",
"id": 18254,
"tags": "algorithms, graphs, weighted-graphs, minimum-spanning-tree"
} |
cannot install rosserial in kinetic | Question:
I have tried to install rosserial for Kinetic. But when I try to install ros_lib into the Arduino environment, it gives the error message [rospack] Error: package 'rosserial_arduino' not found.
Originally posted by renjith on ROS Answers with karma: 28 on 2016-06-17
Post score: 0
Original comments
Comment by vmatos on 2016-07-07:
Could you explain all the steps you took?
If you downloaded the source and built the package, have you sourced setup.bash?
Comment by renjith on 2016-07-07:
I actually figured out the problem a while back. I didn't add the setup file to .bashrc, so whenever I opened a new terminal I got this error. Still, thanks vmatos.
Answer:
I don't see any release for Kinetic of rosserial on wiki/rosserial or status_page/ros_kinetic_default.html?q=serial, so I would think not being able to install it is expected.
Originally posted by gvdhoorn with karma: 86574 on 2016-06-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 24968,
"tags": "arduino, rosserial, ros-kinetic"
} |
Simple neural-network simulation in C++ (Round 2) | Question: Intro
Yesterday I posted this question. Since then, I've updated my code to incorporate these suggestions. I've also removed the dependence on C++11. Finally, I've made the following changes that get me closer to my overall goal:
Rather than iterate over different values for dt within my script, I have dt specified on the command line. Specifically, an integer is specified on the command line that corresponds to (1 +) an index in dt_array. This allows me to process different values of dt in parallel using the Sun Grid Engine.
Rather than use a single value for I_syn_bar, I now iterate over 100 values of I_syn_bar.
If you read through the current state of my script below, you'll see that I'm writing to disk 100 text files per dt. When I set n_x to 2 instead of 100, the script is very fast: 6 s on my machine. But when I set n_x to 100, and submit the script as a job to the SGE, it takes ~1 hour to complete (more than 6 s * 50). Hence, there seems to be some penalty being imposed on me for the heavy file I/O I'm using (in addition to the general SGE overhead).
My goal now is to change the code so that I'm writing the data for all 100 values of I_syn_bar, but in fewer files. I have a 2D matrix for each value of I_syn_bar. In order to write data for multiple values of I_syn_bar to the same text file, I need a 3D object of some kind (and a strategy for writing this object to file). Another constraint I have is that I need these files to be able to be read into Python.
Code
#include <math.h>
#include <vector>
#include <string>
#include <fstream>
#include <iostream>
#include <iterator>
#include <Eigen/Dense>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdlib.h>
#include <sstream>
using Eigen::MatrixXd;
using Eigen::ArrayXd;
bool save_mat(const MatrixXd& pdata, const std::stringstream& file_path)
{
std::ofstream os(file_path.str().c_str());
if (!os.is_open())
{
std::cout << "Failure!" << std::endl;
return false;
}
os.precision(11);
const int n_rows = pdata.rows();
const int n_cols = pdata.cols();
for (int i = 0; i < n_rows; i++)
{
for (int j = 0; j < n_cols; j++)
{
os << pdata(i, j);
if (j + 1 == n_cols)
{
os << std::endl;
}
else
{
os << ",";
}
}
}
os.close();
return true;
}
std::string get_save_file()
{
std::string dan_dir;
struct stat statbuf;
if (stat("/home/daniel", &statbuf) == 0 && S_ISDIR(statbuf.st_mode))
{
dan_dir = "/home/daniel/Science";
}
else if (stat("/home/dan", &statbuf) == 0 && S_ISDIR(statbuf.st_mode))
{
dan_dir = "/home/dan/Science";
}
else if (stat("/home/despo", &statbuf) == 0 && S_ISDIR(statbuf.st_mode))
{
dan_dir = "/home/despo/dbliss";
}
std::string save_file = "/dopa_net/results/hansel/test/test_hansel";
save_file = dan_dir + save_file;
return save_file;
}
double f(const double t, const double tau_1, const double tau_2)
{
return tau_2 / (tau_1 - tau_2) * (exp(-t / tau_1) - exp(-t / tau_2));
}
ArrayXd set_initial_V(const double tau, const double g_L, const double I_0,
const double theta, const double V_L, const int N,
const double c)
{
const double T = -tau * log(1 - g_L / I_0 * (theta - V_L));
ArrayXd V(N);
for (int i = 0; i < N; i++)
{
V(i) = V_L + I_0 / g_L * (1 - exp(-c * (i - 1) / N * T / tau));
}
return V;
}
int main(int argc, char *argv[])
{
// Declare variables set inside loops below.
double t;
double I_syn_bar;
int i;
std::stringstream complete_save_file;
// Declare and initialize constant parameters.
const int n_x = 100;
const double x_min = 0; // uA / cm^2.
const double x_max = 1; // uA / cm^2.
const double x_step = (x_max - x_min) / (n_x - 1); // uA / cm^2.
const double tau_1 = 3.0; // ms.
const double tau_2 = 1.0; // ms.
const int N = 128;
const double dt_array[3] = {0.25, 0.1, 0.01}; // ms.
const char* task_id = argv[argc - 1];
const int task_id_int = task_id[0] - '0';
const double dt = dt_array[task_id_int - 1];
const double tau = 10; // ms.
const double g_L = 0.1; // mS / cm^2.
const double I_0 = 2.3; // uA / cm^2.
const double theta = -40; // mV.
const double V_L = -60; // mV.
const double c = 0.5;
const double C = 1; // uF / cm^2.
const int sim_t = 10000; // ms.
const int n_t = sim_t / dt;
const std::string save_file = get_save_file();
// Save V for each I_syn_bar, for the dt specified on the command line.
for (double I_syn_bar = x_min; I_syn_bar < x_max; I_syn_bar += x_step)
{
MatrixXd V(N, n_t);
V.col(0) = set_initial_V(tau, g_L, I_0, theta, V_L, N, c);
double I_syn = 0; // uA / cm^2.
ArrayXd t_spike_array = ArrayXd::Zero(N);
i = 1;
for (double t = dt; t < sim_t; t += dt)
{
ArrayXd prev_V = V.col(i - 1).array();
ArrayXd current_V = prev_V + dt * (-g_L * (prev_V - V_L) + I_syn +
I_0) / C;
V.col(i) = current_V;
I_syn = 0;
for (int j = 0; j < N; j++)
{
if (current_V(j) > theta)
{
t_spike_array(j) = t;
V(j, i) = V_L;
}
I_syn += I_syn_bar / N * f(t - t_spike_array(j), tau_1, tau_2);
}
i++;
}
complete_save_file << save_file << dt << "_" << I_syn_bar << ".txt";
save_mat(V, complete_save_file);
complete_save_file.str("");
complete_save_file.clear();
}
return 0;
}
Timing Information
---------------------------------------------
| n_x | command-line arg | SGE? | Time |
---------------------------------------------
| 2 | 1 | no | 6 s |
---------------------------------------------
| 2 | 1 | yes | 30 s |
---------------------------------------------
| 100 | 1 | no | 10 m 16 s |
---------------------------------------------
| 100 | 1 | yes | 53 m 5 s |
---------------------------------------------
Answer: If you skim the relevant part of the paper which this analysis is attempting to replicate (cf. the link in the Round 1 question), you'll see that my code saves an intermediate result on the way to the final result that is plotted in Figure 1. Figure 1 plots sigma_N, whereas I'm saving V here.
For each I_syn_bar, V is a matrix of size number of neurons by number of time points. This is a lot of data to save. sigma_N takes up much less space: a single double for each I_syn_bar.
I made two fairly small changes to my code:
I followed the advice in the comments and replaced all appearances of endl with "\n".
I carried out the analysis through the computation of sigma_N and saved sigma_N instead of V.
sigma_N, in my new code, is defined as follows (within, but at the very end of, the loop over I_syn_bar):
// Compute A_N.
A_N = V.colwise().mean();
// Compute delta_N.
delta_N = A_N.square().mean() - pow(A_N.mean(), 2);
// Compute delta.
V_squared = V.unaryExpr(std::ptr_fun(square));
V_squared_time_mean = V_squared.rowwise().mean();
V_time_mean = V.rowwise().mean();
V_time_mean_squared = V_time_mean.square();
delta = (V_squared_time_mean - V_time_mean_squared).mean();
// Compute sigma_N.
sigma_N_array[j] = delta_N / delta;
The Crux of this Answer
Let me restate this: In the code in the Round 2 question above, I'm saving 100 matrices of size N X n_t to disk, where N is 128 and n_t is 40000 (if dt is 0.25). As I mention in the question, all this file-writing is causing the code to be undesirably slow.
MY SOLUTION HERE is to carry the analysis a step farther. Rather than save all these matrices, I save the variable they will eventually be used to compute: sigma_N. Whereas V (what was originally saved) is 100 X 128 X 40000, sigma_N is 100 X 1.
Although the extra analysis step adds a tiny amount of time, this is far outweighed by the time saved by not having to write all those large matrices.
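Incidentally, regarding the constraint that the saved files must remain readable from Python: the comma-separated format written by save_mat (one matrix row per line) loads directly with NumPy, and a single-column sigma_N file loads the same way. A self-contained sketch, with a tiny stand-in file in place of the real output path (the path and values below are illustrative only):

```python
# The comma-separated format written by save_mat (one matrix row per line)
# loads directly with NumPy. The file written here is a tiny stand-in for
# the real output of the C++ program; the path and values are illustrative.
import numpy as np

with open("V_demo.txt", "w") as f:
    f.write("-60,-59.5,-59.1\n"
            "-60,-59.8,-59.4\n")

V = np.loadtxt("V_demo.txt", delimiter=",")
print(V.shape)  # (2, 3): rows = neurons, columns = time points
```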
If you don't understand what sigma_N is, please, as I said, consult the earlier question and the paper linked there. If you are unwilling to do this, or otherwise don't understand what I'm saying, please ask me to clarify a particular point. I am happy to do that.
Timing Information
----------------------------------------------
| n_x | command-line arg | SGE? | Time |
----------------------------------------------
| 2 | 1 | no | 0.4 s |
----------------------------------------------
| 2 | 1 | yes | 1.2 s |
----------------------------------------------
| 100 | 1 | no | 41.4 s |
----------------------------------------------
| 100 | 1 | yes | 49.1 s |
----------------------------------------------
| 100 | 1-3 (in parallel) | yes | 18 m 28 s |
----------------------------------------------
Note, with regard to the last line of the table above, that the code is dramatically slower with 3 as the command-line argument (dt = 0.01) than with 2 or 1.
Caveat
Everything about the Round 1 question, the Round 1 answer, the Round 2 question, and this answer is great and wonderful when it comes to considering factors that influence the speed of a C++ script.
However, it turns out that this script overlooks a number of considerations that are needed in order to replicate Figure 1. I've made tweaks to the script in a few places to handle this. Since these considerations are outside the scope of both this question and the Round 1 question, I'm not going to describe them here. | {
"domain": "codereview.stackexchange",
"id": 16254,
"tags": "c++, performance, numerical-methods, neural-network, eigen"
} |
Does a single moving charge produce magnetic field in an empty universe? | Question: If we have a singular charge in a universe and it is not stationary can it produce a magnetic field? Because even though it's moving there will be no relative motion and no electric force will act on it.
Answer: According to our current understanding of electromagnetism, a single moving charge in an empty universe would not produce a static magnetic field in the reference frame where the charge is at rest.
Here's why:
Magnetic field arises from relative motion: Magnetism is a consequence of the theory of relativity. It's not an absolute property of a single moving charge. The magnetic field is created by the relative motion between the observer and the charge.
Reference frame dependence: If you are in the reference frame where the charge is stationary, you wouldn't observe a magnetic field because there's no relative motion. However, an observer in a different reference frame where the charge is moving would detect a magnetic field.
However, there's a twist:
Time-dependent magnetic field: Even in the reference frame where the charge is at rest, there can be a time-dependent magnetic field. As the charge accelerates or decelerates, a changing electric field is produced, and according to Maxwell's equations, a changing electric field creates a changing magnetic field. So, a single moving charge wouldn't produce a static magnetic field in its rest frame, but it could produce a time-dependent magnetic field during acceleration/deceleration. | {
"domain": "physics.stackexchange",
"id": 100627,
"tags": "electromagnetism, magnetic-fields, electric-current"
} |
Why is $\delta[an]=\frac 1 a \delta[n]$ in discrete time? | Question:
Why is $\delta[an]=\frac 1 a \delta[n]$ in discrete time? Prove.
Hi, I want to prove it but I don't know what facts to rely on. In continuous time we have an integral, and the dt in that integral turns into dt/a for the continuous-time delta function. But in discrete time, we don't have dt. So how can it be proved?
Answer: It's actually good that you couldn't prove it because the claim is wrong. Note that in discrete time we have $\delta[n]=1$ for $n=0$ and $\delta[n]=0$ for $n\neq 0$. So if $a\neq 0$, you simply get $\delta[an]=\delta[n]$ (note that $a$ must be an integer for this to make sense). If $a=0$ then $\delta[an]=1$ (for all $n$). | {
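A quick numerical sketch confirms this: for any integer $a \neq 0$, $an = 0$ exactly when $n = 0$, so the unit sample is unchanged.

```python
# The discrete-time unit sample: delta[n] = 1 at n == 0, else 0.
def delta(n):
    return 1 if n == 0 else 0

# For any integer a != 0, a*n == 0 exactly when n == 0,
# so delta[a*n] == delta[n], with no 1/a factor.
for a in (1, 2, -3, 7):
    for n in range(-5, 6):
        assert delta(a * n) == delta(n)

# The degenerate case a == 0 gives delta[0] == 1 for every n.
print(all(delta(0 * n) == 1 for n in range(-5, 6)))  # True
```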
"domain": "dsp.stackexchange",
"id": 8727,
"tags": "discrete-signals, impulse-response"
} |
Identifying Compression and Tension as well as Hogging | Question: I'm looking for clarification on how the solution identified the moment as hogging, as well as the axial stress as compressive.
Also, I'm confused as to how the solution came to the conclusion that the bottom of the cross-section is in compressive stress, whilst the top of the cross-section is in tensile stress.
Please reference the photo below with the red boxes.
Thank you!
Answer: Once again, the superposition principle is our friend. It basically means we can look at each load in isolation, find the individual results, and then add them all together to get the final result.
In this case, we are actually dealing with two loads:
There's obviously the force $P$;
And then there's the bending moment $P\cdot e$, where $e$ is the eccentricity of the load $P$ in relation to the beam's neutral axis. Since the beam is rectangular with height 200 mm, we trivially know the neutral axis is at 100 mm. And since the load is only 30 mm from the bottom, we know $e=100-30=70\text{ mm}$.
Now let's look at these loads in isolation:
The force $P$
This one is pretty obvious. The force alone (remember, we're not yet considering the bending moment effect, so you can think of the force as acting at the neutral axis) obviously causes uniform compression throughout the beam. This means the beam remains perfectly straight, just a bit shorter. Not much to discuss here, I believe.
The bending moment $P\cdot e$
Herein lies the rub.
The bending moment is counter-clockwise in this case. This seems pretty intuitive to me, so I won't spend any time demonstrating this.
The issue is how the beam deflects given this bending moment.
Looking at the free end, it can rotate in one of two ways: clockwise or counter-clockwise. It once again seems pretty intuitive to me that the beam will rotate counter-clockwise, but here's a demonstration, just in case it isn't to you:
Let's transform the bending moment into a force couple: two opposing forces which cancel each other out other than generating a bending moment. A classic example is a crank valve.
So, if you had a crank valve at the end of a beam, and you applied the forces shown above, how would you expect it to rotate? It seems clear the beam would rotate counter-clockwise. By this I mean that the free end (and indeed any other slice along the beam) would rotate counter-clockwise, such that the top fiber moves a bit to the left and the bottom fiber moves to the right.
And that tells us all we need to know: if the top fiber moves to the left, then the total length of that fiber has increased (from 700 mm to... 701 mm or whatever). And we know that if an element is lengthened, it must be under tension1. Likewise, if the bottom fiber moved to the right, that fiber's length shortened and it is therefore under compression.
Total result
So, summing both loads' results, we get that the top fiber is under compression due to the force and under tension due to the bending moment. Whether the final result is compression or tension will depend on which stress is greater.
The bottom fiber is under compression due to both loads, so that's simple enough.
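To put numbers on the superposition (h = 200 mm and e = 70 mm come from the problem; the width b and load magnitude P below are hypothetical, chosen only to show the signs):

```python
# Superpose axial and bending stresses at the extreme fibers.
# Sign convention: tension positive, compression negative.
h = 200.0         # section height, mm (from the problem)
e = h / 2 - 30.0  # eccentricity = 70 mm (from the problem)
b = 100.0         # section width, mm (hypothetical)
P = 10_000.0      # axial load, N (hypothetical)

A = b * h            # cross-sectional area
I = b * h**3 / 12.0  # second moment of area
c = h / 2.0          # distance from neutral axis to extreme fiber
M = P * e            # hogging moment produced by the eccentric load

sigma_axial = -P / A                    # uniform compression from P alone
sigma_top = sigma_axial + M * c / I     # hogging stretches the top fiber
sigma_bottom = sigma_axial - M * c / I  # and shortens the bottom fiber

print(sigma_top, sigma_bottom)  # top > 0 (tension), bottom < 0 (compression)
```

With these illustrative numbers the bending stress (1.05 MPa) exceeds the axial stress (0.5 MPa), so the top ends up in net tension; whether that happens in general depends on which stress is greater, exactly as discussed below.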
Hogging
As for why the moment is considered hogging, well, that's just the technical definition of hogging moment.
When the beam bends in such a way that it forms concavity downwards (cup-shaped) it is called as sagging. Whereas the bending which results in convexity upwards (like a hump); is known as hogging. [source]
Basically, positive moment is sagging, negative moment is hogging.
And if you remember your sign conventions:
You can see that counter-clockwise moments on the left side of the beam are negative, and therefore hogging.
1 Or possibly under a positive thermal load, if you want to be a pedant. | {
"domain": "engineering.stackexchange",
"id": 2784,
"tags": "mechanical-engineering, structural-engineering, civil-engineering"
} |
The Higgs field a new Luminiferous aether? | Question: As of this writing it has been made clear to me that classical physics' luminiferous aether was a terribly poor descriptor of space. With the advent of Special Relativity and General Relativity, that aether ceased to be applicable. Today I began reading on the Higgs boson, and the Higgs field. How are those theories different from or similar to the seemingly irrational luminiferous aether?
P.S.
I am a first year student at Bakersfield Community College. If this question seems naive, you now have an explanation.
(EDIT)
To me the similarity rests upon their fundamental influence regarding static positions in space. I currently only accept models in which static spatial positions are impossible. Contrastingly, relative positions seem quite reasonable. In my humble opinion, my naivety inhibits me from absorbing and understanding the Higgs boson. If they actually are similar, public protest would probably be more apparent. However, I have put forth my best effort to pose a direct question here.
P.S. The questions Where does space go when it falls? and What happens to time lost when matter accelerates? come much earlier in my studies, and they seem relevant.
Answer: Here comes a simple minded answer by an experimentalist.
The luminiferous ether was discarded because it violated special relativity. It presupposed a fixed reference frame of the ether against which everything moved. In special relativity there exists no absolute frame of reference, and special relativity has been vindicated many times experimentally.
The Higgs field, as also the vacuum sea in general, comes from quantum field theory formulations of the interactions of elementary particles. All quantum field theories are consistent with special relativity, and thus the Higgs field is also consistent with special relativity. It therefore cannot play the role of the luminiferous ether, and the same holds for the vacuum sea, which is seething with virtual particle/antiparticle pairs.
At the level of your studies this should suffice. | {
"domain": "physics.stackexchange",
"id": 6654,
"tags": "standard-model, higgs, aether"
} |
Particle as wave, stable? | Question: I've started reading about the wave-particle duality but, after a few steps, reached a dead end:
Schrodinger equation solutions for a free particle are sums of terms of the form:
$$\psi(\mathbf{r}, t) = Ae^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}$$
however, a single element of this form cannot be normalized and thus cannot exist alone. That is, a particle must be a sum of several terms, a wave packet. (TODO: verify a Gaussian wave packet normalizes :-).
The restriction:
$$ \omega = \frac{\hbar k^2}{2m} $$
applies to the previous wave function. That means that each component of the wave packet has a different propagation speed. As a consequence, the particle spreads.
The question: do particles tend to disperse (dissolve)? If so, how to explain the stable existence of protons, fermions, ...?
Answer:
The question: do particles tend to disperse (dissolve)? If so, how to explain the stable existence of protons, fermions, ...?
It is important to have the correct frame of reference: in physics, the mathematical models we are using are not "explaining" the data; that is, the data is not created by the mathematics but is modeled by the mathematics. Successful physics models are the predictive ones, not just the descriptive ones.
Schrodinger's equation has plane wave solutions, and it is true that a plane wave cannot be normalized, so it is evident that a single plane wave cannot be used to model free particles. Schrodinger's equation is very successful (predictive) in modeling particles in potential wells, and the theory of quantum mechanics that developed from it has in its postulates that the wavefunctions describing particles are probability distributions.
The modeling of single free particles with these plane (probability) waves uses the wavepacket solutions. These can be made narrow enough to restrict the particle to the dimensions experiments have measured, and thus to model stable particles like protons and electrons if necessary for visualization.
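As an aside, the TODO in the question, verifying that a Gaussian wave packet normalizes, is easy to check numerically (a sketch in arbitrary units; the width value is arbitrary):

```python
# Check that a Gaussian wave packet is normalized: integral |psi|^2 dx = 1.
import numpy as np

sigma = 1.3                          # packet width, arbitrary units
x = np.linspace(-40.0, 40.0, 20001)  # grid wide enough to capture the tails
psi = (2.0 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4.0 * sigma**2))

dx = x[1] - x[0]
norm = np.sum(np.abs(psi) ** 2) * dx  # simple Riemann sum
print(norm)  # ~ 1.0: the packet normalizes, unlike a single plane wave
```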
Fortunately, physics models of quantum mechanics have progressed to quantum field theories, and it is not necessary to describe mathematically single free particles in order to predict the behavior of experiments. Experimental predictions are done with QFT calculations using Feynman diagrams as you will learn if you continue your studies.
Maybe this answer of mine will help. | {
"domain": "physics.stackexchange",
"id": 71299,
"tags": "quantum-mechanics, particle-physics, wave-particle-duality"
} |
Equation for predicting contrail formation | Question: I am working on a data visualisation of airline flight paths and their probability of forming contrails.
Given weather data for a specific area (location, time, temperature, pressure and humidity) and aircraft data (location, time, altitude, speed and engine efficiency)
Is there an equation that will give me a rough estimate of the probability contrails will form at this point?
Answer: Schmidt-Appleman criterion (SAC) is used to model contrail formation. I advise reading in full Ulrich Schumann's article "A contrail cirrus prediction model" (http://www.geosci-model-dev.net/5/543/2012/gmd-5-543-2012.pdf) to get all the relevant formulae and literature references.
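As a rough orientation before diving into the reference, the core of the criterion can be sketched numerically. This is a simplified sketch following the approximations popularized by Schumann; all the numerical inputs (emission index, fuel heating value, propulsion efficiency, the curve-fit coefficients) are assumptions for illustration, not values taken from this answer:

```python
# Sketch of the Schmidt-Appleman criterion (SAC). The slope G of the
# exhaust mixing line in vapour-pressure/temperature space, and a
# curve-fit threshold temperature (coefficients as given in Schumann's
# 1996 approximation; treat them as assumptions to be checked).
import math

def mixing_line_slope(p, eta, EI_H2O=1.25, Q=43.0e6, cp=1004.0, eps=0.622):
    """Slope G [Pa/K] of the exhaust mixing line.

    p      : ambient pressure [Pa]
    eta    : overall propulsion efficiency (dimensionless)
    EI_H2O : water-vapour emission index [kg per kg fuel]
    Q      : specific heat of combustion [J per kg fuel]
    cp     : isobaric heat capacity of air [J/(kg K)]
    eps    : ratio of molar masses of water and air
    """
    return EI_H2O * cp * p / (eps * Q * (1.0 - eta))

def threshold_temperature(G):
    """Approximate maximum ambient temperature [deg C] for contrail
    formation (curve fit, valid for G > 0.053 Pa/K)."""
    lnG = math.log(G - 0.053)
    return -46.46 + 9.43 * lnG + 0.720 * lnG ** 2

# Illustrative cruise conditions: ~250 hPa, efficiency ~0.35.
G = mixing_line_slope(p=25000.0, eta=0.35)
T_c = threshold_temperature(G)
# Contrails are possible (humidity permitting) when the ambient
# temperature is below T_c, which lands near -40 deg C here.
```

The ambient humidity then decides persistence; for the full treatment (including the humidity condition and persistent-contrail criteria), Schumann's paper remains the reference.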
While it is possible to transcribe the formulae here, the reference cited does a better job of presenting the state of the art (2012). | {
"domain": "earthscience.stackexchange",
"id": 319,
"tags": "climate-change, nwp"
} |
Qiskit's classifier is not optimising the weights | Question: I am using qiskit's VQC to build a classifier. The dimensionality of the data is 2 and the number of classes is 4. The feature map I used is ZZFeatureMap and the ansatz is RealAmplitudes. The entanglement and reps are "full" and 2 respectively for both the feature map and the ansatz. The optimizer used is 'COBYLA'. Please find below the respective code.
When I try to fit the classifier, it is not optimising the weights and the loss is always "nan". It is stopping after 7 iterations.
What am I doing wrong, and how can I train a classifier using qiskit's VQC? Thank you!
Answer: The issue was with version 0.3.0 of the qiskit_machine_learning library. The fix is to install version 0.4.0. More details about the issue and how to install 0.4.0 are in this answer. | {
"domain": "quantumcomputing.stackexchange",
"id": 3484,
"tags": "qiskit, optimization, quantum-enhanced-machine-learning"
} |
Inconsistency with electrostatic energy formulas | Question: The energy of point charge configuration can be written as:
$$W = \frac{1}{2}\sum_{i=1}^{n}q_{i}V(r_{i}) \, ,$$
which can take both positive and negative values.
However, when we integrate the equation to get the energy of a continuous charge distribution:
$$W = \frac{1}{2}\int\rho Vd\tau \Rightarrow W = \frac{\epsilon_{0}}{2}\left [ \int E^{2} d\tau + \oint VE\cdot da\right ] \, .$$
Taking the volume of integration to be all of space, the second term vanishes:
$$W = \frac{\epsilon_{0}}{2}\int E^{2}d\tau \, .$$
This formula can take only positive values.
So there is a discrepancy between the two formulas. What caused the discrepancy?
According to Griffith's Introduction to Electrodynamics, in the former equation $V(r_{i})$ represents the potential due to all charges but $q_{i}$, whereas in the latter it is the full potential. But why would the original charge have any potential when there is no other charge already present?
Answer: The difference is the zero point. When summing over charges, the reference is a state in which these charges are infinitely separated. Those are still distinct, localized charges, just separated from each other.
When integrating $E^2$ over all space, the reference state has all charge separated. Even the individual charges from the first method are broken up, so that $E=0$ everywhere.
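The sign difference is easy to see numerically. Below is a simple illustration (values are arbitrary, chosen only to show the sign): the point-charge sum for a plus/minus pair is negative, whereas the $E^2$ integral is manifestly non-negative (and for true point charges it also contains the divergent self-energy):

```python
# W = (1/2) * sum_i q_i * V(r_i), where V(r_i) is the potential at
# charge i due to all the *other* charges. For opposite charges this
# sum is negative: energy is released assembling them from infinity.
k = 8.99e9              # Coulomb constant [N m^2 / C^2]
q1, q2 = 1e-9, -1e-9    # a +/- pair of 1 nC charges
r = 0.01                # separation [m]

V1 = k * q2 / r         # potential at charge 1 due to charge 2
V2 = k * q1 / r         # potential at charge 2 due to charge 1
W = 0.5 * (q1 * V1 + q2 * V2)   # reduces to k*q1*q2/r here, about -9e-7 J

# By contrast, (eps0/2) * integral of E^2 over all space can never be
# negative: its reference state is charge smeared out so that E = 0.
```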
The second method has its reference at an "absolute zero", so to speak. The first method has its reference with a lot of positive energy needed to gather the individual charges. That's why the first method can have negative values. | {
"domain": "physics.stackexchange",
"id": 26701,
"tags": "energy, electrostatics"
} |
Proof of Coulomb's law in two and higher dimensions | Question: I have found that the Coulomb force in two dimensions varies with $\frac 1 r$:
\begin{equation}\tag{1}F=\frac{1}{2\pi\epsilon}\cdot\frac{q_1q_2}{r}\end{equation}
But I was not able to prove it. I think it can be proved using the Laplace equation and other further modifications.
One question related to this topic is answered here:
2 dimensional Coulomb's law equation
But the proof has not been given, rather a general introduction is given. Also, some links related to Green's function have been provided, which I didn't understand. (If anyone can elaborate on this, that would be helpful)
So my question is:
How can one prove Coulomb's law in two dimensions, or in generalised $N$ dimensions?
And Coulomb's law depends upon $ \vec{r} $ only and,
\begin{equation}\tag{2}F=\frac{1}{4\pi\epsilon}\cdot\frac{q_1q_2}{r^2}\end{equation}
so we can put in the value of $ r^2 $ (distance squared) in either 2 or 3 dimensions; why would the equation change, given that it does not depend upon $\Theta $ (theta) or $\Phi $ (phi)? So intuitively the equation should not change?
Is Gauss law applicable only in 3 dimensions or valid for any dimension, because the Gauss divergence theorem is only for 3 dimensions?
Also, please write some other insightful details if comes across.
Answer: As with all derivations, it depends on what you want to treat as fundamental. Typically we would derive Coulomb's law from the Maxwell equations, so we're trying to solve
$$\nabla\cdot \mathbf{E} = -\nabla^2 \varphi = q\delta(\mathbf{x})/\epsilon_0\qquad (1)$$
In $n$ spatial dimensions and in Cartesian coordinates $(x_1,\ldots,x_n)$, this becomes
$$\sum_{k=1}^n \frac{\partial^2}{\partial x_k^2} \varphi = -\frac{q}{\epsilon_0}\delta(\mathbf x)\qquad(2)$$
Because this problem has spherical symmetry, we can move to hyperspherical coordinates. If we do so, we will find that$^\dagger$
$$\frac{1}{r^{n-1}}\frac{\partial }{\partial r}\left(r^{n-1} \frac{\partial\varphi}{\partial r}\right) = -\frac{q}{\epsilon_0} \delta(r)\qquad (3)$$
Away from $r=0$, we would therefore have that $$\frac{\partial}{\partial r}\left(r^{n-1} \frac{\partial \varphi}{\partial r}\right)=0 \implies r^{n-1} \frac{\partial \varphi}{\partial r} = c$$ for some constant $c$, and therefore that $\varphi = c\ r^{2-n}+d$ (unless $n=2$, in which case we'd have a logarithm). The constant $d$ can be set to zero by demanding that the potential vanish at infinity (this is an arbitrary choice, but a convenient one). The constant $c$ can be determined by using the divergence theorem to integrate $(1)$ over a hypersphere of radius $R$. Because of the spherical symmetry, the left hand side would be the surface area of the $(n-1)$-sphere of radius $R$ times $\varphi'(R)$:
$$-\frac{2\pi^{n/2}}{\Gamma(n/2)}R^{n-1} \varphi'(R)=\left(\frac{2\pi^{n/2}(n-2)}{\Gamma(n/2)}\right) c$$
while the right hand side is simply equal to $q/\epsilon_0$ because of the delta function. As a result,
$$\varphi(r) = \frac{\Gamma(n/2)}{2(n-2)\pi^{n/2}\epsilon_0} \frac{q}{r^{n-2}}\qquad (4)$$
In $n=3$ dimensions, we have $\Gamma(3/2)=\sqrt{\pi}/2$ so this reduces to the familiar case
$$\varphi^{(3)}(r) = \frac{1}{4\pi\epsilon_0} \frac{q}{r} \implies \mathbf{E}^{(3)}(r) = \frac{1}{4\pi\epsilon_0} \frac{q}{r^2}\hat r$$
In 4-dimensions, $\Gamma(2)=1$ so we would have
$$\varphi^{(4)}(r) = \frac{1}{4\pi^2 \epsilon_0} \frac{q}{r^2} \implies \mathbf{E}^{(4)}(r) = \frac{1}{2\pi^2 \epsilon_0} \frac{q}{r^3} \hat r$$
In the other direction, for $n=1$ we have $\Gamma(1/2)=\sqrt{\pi}$ and so
$$\varphi^{(1)}(r) = -\frac{1}{2\epsilon_0} q r \implies \underbrace{\mathbf{E}^{(1)}(r)=\frac{1}{2\epsilon_0} q \hat r}_{\text{constant}}$$
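As a quick check of these special cases, the prefactor in Eq. (4), $\Gamma(n/2)/(2(n-2)\pi^{n/2})$, can be evaluated with the standard library and compared against the familiar constants quoted above:

```python
# Verify that the n-dimensional Coulomb prefactor reproduces
# 1/(4*pi) for n = 3 and 1/(4*pi^2) for n = 4.
import math

def coulomb_prefactor(n):
    """Coefficient of q / (eps_0 * r^(n-2)) in n spatial dimensions (n > 2)."""
    return math.gamma(n / 2) / (2 * (n - 2) * math.pi ** (n / 2))
```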
So intuitively the equation should not change?
The problem is that $\nabla^2$ changes in higher dimensions, so if you reuse the familiar form of Coulomb's law then it will not obey the Maxwell equations. Assuming you'd like to treat the latter as more fundamental, we need to use Gauss' law to find the more general form of Coulomb's law.
Is Gauss law applicable only in 3 dimensions or valid for any dimension, because the Gauss divergence theorem is only for 3 dimensions?
The divergence theorem holds in an arbitrary number of dimensions. If we assume that Gauss' law holds in an arbitrary number of dimensions, then we find Coulomb's law as I did above. Of course, Gauss' law is a physical statement, not a purely mathematical one, so there's no way to mathematically prove that it holds for all dimensions.
$^\dagger$This expression should not be taken too literally, as the delta function at the origin has some pathological issues in spherical coordinates. The spirit of this equation is that we will find the solution for $r\neq 0$, and obtain the remaining undetermined constant by integrating $(1)$. | {
"domain": "physics.stackexchange",
"id": 71786,
"tags": "electrostatics, gauss-law, spacetime-dimensions, coulombs-law"
} |
Picking a guitar string of fixed length to get any nth harmonic, is it possible? | Question: In physics textbooks, we can calculate the nth harmonic of a vibrating string of fixed length. How can we do this on a real guitar?
For example, if I just pick a single open string, how can I get any arbitrary harmonic for this?
More precisely,
the first harmonic has 2 nodes and 1 antinode
the second harmonic has 3 nodes and 2 antinodes
the nth harmonic has n+1 nodes and n antinodes
Answer: There is a technique called flageolet where you damp the string with a finger laid lightly on the string at the node of a higher harmonic. You do not press the string to the fretboard but just damp it at a position where there is a node of the specific harmonic.
When you now pluck the string, all harmonics which do not have a node at the specified position are damped strongly; the ones which do have nodes there are not. So if you let the finger rest briefly after plucking and then release the string, you will hear only specific higher harmonics (and the pattern of nodes can clearly be seen).
This technique is easy for the first three or four harmonics; for the first three the nodes coincide with frets (the twelfth, the seventh and the fifth). The fourth one is very close to the fourth fret. Higher harmonics are technically quite difficult to produce cleanly, as the finger placement must be very precise and is not indicated by a fret.
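As a quick numerical illustration of why those frets work (equal-temperament fret positions assumed; here "partial n" means the mode dividing the string into n equal parts, so the answer's "first harmonic" above is the 2nd partial):

```python
# Compare fret positions with flageolet node positions, both as
# fractions of the scale length measured from the nut.
def fret_position(k):
    """Distance of fret k from the nut, as a fraction of scale length."""
    return 1.0 - 2.0 ** (-k / 12.0)

def first_node(n):
    """First internal node of the n-th partial (string split into n parts)."""
    return 1.0 / n

# (fret, partial) pairings used in the answer: 12th/7th/5th frets match
# the 2nd/3rd/4th partials almost exactly; the 4th fret only roughly
# matches the 5th partial, which is why it is harder to play cleanly.
for fret, partial in [(12, 2), (7, 3), (5, 4), (4, 5)]:
    print(fret, round(fret_position(fret), 4), round(first_node(partial), 4))
```

Running this shows the 12th fret sits exactly at the half-way node, while the 4th fret misses the 1/5 node by a few millimetres on a full-scale guitar.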
Another technique to excite the higher harmonics is to use a loudspeaker and a frequency generator, and let resonance do the job for you.
On a side note, flageolet is commonly used to tune a guitar: since most adjacent strings are tuned a fourth apart, the fourth harmonic of the lower-pitched string matches the third harmonic of the higher-pitched string, and the beat frequency of high notes is faster (and therefore easier to hear precisely).
Also the third and fourth harmonics of the low E string are at the frequencies of the b and the high e strings. | {
"domain": "physics.stackexchange",
"id": 22353,
"tags": "waves, string, harmonics"
} |
Alternative way for checking an anagram pair | Question: I'm writing a program to check whether a pair of Strings (entered by the user) are anagrams. I've already done it the standard way, sorting both strings with Arrays.sort(), but I also thought of an alternative version not requiring any sorting and ended up with this:
import java.util.Scanner;
public class Anagrams {
public static void main(String[] args) {
String s1, s2;
try (Scanner sc = new Scanner(System.in)) {
System.out.println("Enter the two words: ");
s1 = sc.nextLine().toLowerCase().replace(" ", "");
s2 = sc.nextLine().toLowerCase().replace(" ", "");
}
if (s1.length() == s2.length() && isAnagram(s1, s2)) {
System.out.format("%s is an anagram of %s.", s1, s2);
} else {
System.out.format("'%s' and '%s' are not anagrams.", s1, s2);
}
}
public static boolean isAnagram(String s1, String s2) {
return (sumOfChars(s1) == sumOfChars(s2));
}
public static int sumOfChars(String s) {
int sum = 0;
for (char c : s.toCharArray()) {
sum += c;
}
return sum;
}
}
I know this isn't perfect but it does work with most of the anagram pairs that exist in the dictionary (that I've tested). Can I improve anything here or should I forgo this?
Answer: This won't ever work entirely correctly. What you did here can also be seen as calculating a hash of the 2 Strings. And just as you cannot use a hash to compare equality of 2 objects, you can't use your sumOfChars to compare whether 2 strings are anagrams or not. (A simple counterexample is the pair of strings "ac" and "bb".)
On the other hand, that doesn't mean this idea is useless, for exactly the same reason that we still provide a hashCode method when implementing equals.
If the 2 strings are more often NOT anagrams than they ARE anagrams, it can be a lot cheaper to first check if the sums are equal, and after that check for actual "anagram equality".
I suggest to update your isAnagram method to something like this:
public static boolean isAnagram(String s1, String s2) {
if (s1.length() != s2.length()) return false;
if (sumOfChars(s1) != sumOfChars(s2)) return false;
return sameLetterCount(s1, s2);
}
Notice that I put the length() check in this method as well. The idea is actually the same. You start with a really cheap test that returns early in certain cases (strings of different length). Then you do a slightly more expensive test that rules out most (but still not all) cases to return early again.
When those tests didn't fail, you got some good candidates to do the expensive actual checking for containing the same letters.
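The final "same letter count" step, which the answer only names, could be sketched as follows (shown in Python for brevity; a Java version would typically count occurrences into an int[26] array or a HashMap):

```python
# Multiset comparison: two strings are anagrams iff every character
# occurs the same number of times in both.
from collections import Counter

def same_letter_count(s1, s2):
    return Counter(s1) == Counter(s2)

# The sum-of-chars pre-check lets "ac" vs "bb" through (ASCII values
# 97 + 99 = 98 + 98 = 196), but the full count rejects it:
print(same_letter_count("ac", "bb"))          # False
print(same_letter_count("listen", "silent"))  # True
```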
I have written it in my example so that it calls a different method. But you might as well just implement some letter counting algorithm right after the 2 if checks. | {
"domain": "codereview.stackexchange",
"id": 29148,
"tags": "java, beginner, strings"
} |
Concatenate several CSV files in a single dataframe | Question: I currently have 600 CSV files (and this number will grow) of 50K lines each, which I would like to put into one single dataframe.
I did this; it works well and takes 3 minutes:
colNames = ['COLUMN_A', 'COLUMN_B',...,'COLUMN_Z']
folder = 'PATH_TO_FOLDER'
# Dictionary of the type of each CSV column that is not a string
dictTypes = {'COLUMN_B' : bool,'COLUMN_D' :int, ... ,'COLUMN_Y':float}
try:
# Get all the column names, if it's not in the dict of type, it's a string and we add it to the dict
dictTypes.update({col: str for col in colNames if col not in dictTypes})
except:
print('Problem with the column names.')
# Function to parse dates from string to datetime, passed to the read_csv method
cache = {}
def cached_date_parser(s):
if s in cache:
return cache[s]
dt = pd.to_datetime(s, format='%Y-%m-%d', errors="coerce")
cache[s] = dt
return dt
# Concatenate each df in finalData
allFiles = glob.glob(os.path.join(folder, "*.csv"))
finalData = pd.DataFrame()
finalData = pd.concat([pd.read_csv(file, index_col=False, dtype=dictTypes, parse_dates=[6,14],
date_parser=cached_date_parser) for file in allFiles ], ignore_index=True)
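As an aside, the manual dict cache above can be replaced by the standard library's functools.lru_cache. Here is a sketch (datetime.strptime is used instead of pd.to_datetime only so the snippet is self-contained; swap it back for the real code):

```python
# Memoized date parser using the stdlib instead of a hand-rolled cache.
from datetime import datetime
from functools import lru_cache

@lru_cache(maxsize=None)
def cached_date_parser(s):
    try:
        return datetime.strptime(s, "%Y-%m-%d")
    except (ValueError, TypeError):
        return None  # mimic errors="coerce"
```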
It takes one minute less without the date parsing. So I was wondering if I could improve the speed, or whether this is a standard amount of time for this number of files. Thanks!
Answer: Here is my untested feedback on your code. Some remarks:
Encapsulate the functionality as a named function. I assumed folder_path as the main "variant" your calling code might want to vary, but your use case might "call" for a different first argument.
Use PEP8 recommendations for variable names.
Comb/separate the different concerns within the function:
gather input files
handle column types
read CSVs and parse dates
Depending on how much each of those concerns grows in size over time, multiple separate functions could organically grow out of these separate paragraphs, ultimately leading to a whole utility package or class (depending on how much "instance" configuration you would need to preserve, moving the column_names and dtypes parameters to object attributes of a class XyzCsvReader's __init__ method.)
Concerning the date parsing: probably the bottleneck is not whether you cache, but how often you invoke the heavy machinery behind pd.to_datetime. My guess is that calling it only once at the end, with infer_datetime_format enabled, will be much faster than calling it once per row (even with your manual cache).
import glob
import os
import pandas as pd
def read_xyz_csv_folder(
folder_path,
column_names=None,
dtypes=None):
all_files = glob.glob(os.path.join(folder_path, "*.csv"))
if column_names is None:
column_names = [
'COLUMN_A',
'COLUMN_B', # ...
'COLUMN_Z']
if dtypes is None:
dtypes = {
'COLUMN_B': bool,
'COLUMN_D': int,
'COLUMN_Y': float}
dtypes.update({col: str for col in column_names
if col not in dtypes})
result = pd.concat((
pd.read_csv(file, index_col=False, dtype=dtypes)
for file in all_files),
ignore_index=True)
# untested pseudo-code, but idea: call to_datetime only once
result['date'] = pd.to_datetime(
result[[6, 14]],
infer_datetime_format=True,
errors='coerce')
return result
# use as
read_xyz_csv_folder('PATH_TO_FOLDER')
Edit: as suggested by user FMc in their comment, switch from a list comprehension to a generator expression within pd.concat to not create an unneeded list. | {
"domain": "codereview.stackexchange",
"id": 45202,
"tags": "python, python-3.x, csv, pandas"
} |
The magnetic field inside a loop | Question: At the center of a long solenoid the B-field is constant, but inside a single loop it is obviously not.
Do you know of any picture that shows with colours the intensity of the field in the various regions? Is the center the strongest point or not?
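For the on-axis part of the question, a quick sanity check of the standard textbook formula $B(z)=\mu_0 I R^2 / \big(2(R^2+z^2)^{3/2}\big)$ shows the axial field peaking at the loop's center (loop radius and current below are illustrative values):

```python
# On-axis magnetic field of a circular current loop: maximal at z = 0
# and decreasing monotonically with |z|. Off-axis values need numerics.
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability [T m / A]

def b_on_axis(z, I=1.0, R=0.1):
    return MU0 * I * R ** 2 / (2 * (R ** 2 + z ** 2) ** 1.5)

# At the center this reduces to the familiar B(0) = mu0 * I / (2 R).
```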
Answer: On the axis, one can show analytically that it's strongest at the center. Off the axis, one needs to use a computer. The result shows that, as expected, it is strongest near the loop. | {
"domain": "physics.stackexchange",
"id": 38906,
"tags": "magnetic-fields"
} |
Peskin and Schroeder's QFT book page 289 | Question: On page 289 of Peskin and Schroeder's QFT book, the first three paragraphs derive the functional formalism of $\phi^4$ theory. But the book omits many details (I think), so I have some trouble here.
Going from the free Klein-Gordon theory to $\phi^4$ theory:
$$ \mathcal{L}=\mathcal{L}_0-\frac{\lambda}{4 !} \phi^4. $$
Assuming $\lambda$ is small, we expand
$$\exp \left[i \int d^4 x \mathcal{L}\right]=\exp \left[i \int d^4 x \mathcal{L}_0\right]\left(1-i \int d^4 x \frac{\lambda}{4 !} \phi^4+\cdots\right). $$
Here I thought the book uses an approximation: since $\phi^4$ doesn't commute with $\mathcal{L}_0$ (there is $\pi$ inside $\mathcal{L}_0$), then according to the Baker-Campbell-Hausdorff (BCH) formula the book is omitting higher orders in $\lambda$. Is this right?
The book further says on p. 289:
"Making this expression in both the numerator and denominator of (9.18), we see that each is expressed entirely in terms of free-field correlation functions. Moreover, since $$ i \int d^3 x \mathcal{L}_{\mathrm{int}}=-i H_{\mathrm{int}},$$ we obtain exactly the same expression as in (4.31)."
I am really troubled by this; can anyone explain it for me?
Here eq. (9.18) is
$$\left\langle\Omega\left|T \phi_H\left(x_1\right) \phi_H\left(x_2\right)\right| \Omega\right\rangle=$$
$$\lim _{T \rightarrow \infty(1-i \epsilon)} \frac{\int \mathcal{D} \phi~\phi\left(x_1\right) \phi\left(x_2\right) \exp \left[i \int_{-T}^T d^4 x \mathcal{L}\right]}{\int \mathcal{D} \phi \exp \left[i \int_{-T}^T d^4 x \mathcal{L}\right]} . \tag{9.18} $$
And eq. (4.31) is
$$\langle\Omega|T\{\phi(x) \phi(y)\}| \Omega\rangle=\lim _{T \rightarrow \infty(1-i \epsilon)} \frac{\left\langle 0\left|T\left\{\phi_I(x) \phi_I(y) \exp \left[-i \int_{-T}^T d t H_I(t)\right]\right\}\right| 0\right\rangle}{\left\langle 0\left|T\left\{\exp \left[-i \int_{-T}^T d t H_I(t)\right]\right\}\right| 0\right\rangle} . \tag{4.31} $$
Answer: In the path integral formalism, $\phi$ is not an operator, it is an integration variable. In other words, inside the integral $\int D\phi$, $\phi$ is just an ordinary classical field. So there’s no need to worry about Baker-Campbell-Hausdorff and such. | {
"domain": "physics.stackexchange",
"id": 91088,
"tags": "quantum-field-theory, operators, path-integral, interactions, correlation-functions"
} |
Difference between scattering amplitude and scattering length | Question: I'm studying neutron scattering theory and I noticed that one usually writes the scattered wave as a spherical wave: $$\psi \sim \frac{-b}{r}e^{ikr}$$ where $b$ is known as the scattering length. From usual scattering theory (for instance Sakurai's chapter) one shows that the scattered wave can be written as (suppose $\vec{k} \parallel \hat{z}$): $$
\psi \sim e^{ikz}+f(\theta, \phi)\frac{e^{ikr}}{r}.
$$
So my doubts:
1) Why (in the first eq.) do we forget about the transmitted plane wave? Is it just not important now?
2) What is the relationship between the scattering amplitude $f$ and $b$? For instance, in Sakurai's book a scattering length "$a$" is basically defined as a certain limit (page 403), and then it turns out that $\sigma_{tot}= 4 \pi a^2$, which I see is also a property of $b$! So I would say they're the same... but in the book $a$ reduces to $f$ only in the case of $S$-wave scattering with $l=0$, while from the first and second equations I'd say $b=-f(\theta, \phi)$ no matter what $l$ is! So what is their connection?
Thanks!!
Answer: With regard to your first question, the transmitted plane wave doesn't undergo any scattering from the potential. This is made explicit by the representation of the scattered wavefunction as a sum of an incident planewave and an outgoing spherical wave. As such, the transmitted wave doesn't have anything to tell us about the scattering event. All of that information is contained in the outgoing spherical wave.
As to the second question, your physical intuition is correct. By replacing $f(\theta, \phi) = -b$ we've explicitly taken the low energy limit and assumed that only the s-wave channel is important for this process. Imagine that you have a scattering potential with rotational symmetry, then expanding the scattering amplitude in terms of partial waves $$f(\theta) = \sum_{l=0}^\infty(2l+1)f_l P_l(\cos{\theta}),$$ you can see that if $f(\theta) = -b$ then we must have $f_0 = -b$ and $f_l = 0, \ l>0$ (recall that the Legendre polynomials are orthogonal, so they must vanish here term by term and not merely as a sum). If we wanted to include higher partial waves then there would have to be some non-spherically symmetric terms in the scattering amplitude.
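A quick numerical check of the statement above, using the standard low-energy s-wave amplitude $f(k)=1/(-1/a - ik)$ (in units where $a=1$, an assumption for illustration): the total cross-section $\sigma = 4\pi|f|^2$ indeed tends to $4\pi a^2$ as $k\to 0$.

```python
# s-wave cross-section sigma(k) = 4*pi / (1/a^2 + k^2) from the
# unitary low-energy amplitude f(k) = 1 / (-1/a - i k).
import math

def sigma(k, a=1.0):
    f = 1.0 / complex(-1.0 / a, -k)
    return 4 * math.pi * abs(f) ** 2

# k -> 0 limit: sigma -> 4*pi*a^2, the geometric result quoted above.
```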
In general, one has to specify a scattering length for each partial wave separately. If we let the s-wave scattering length be $a_s$ and neglect all higher partial waves, but do not take the 0 energy limit, then $$f(k) = \frac{1}{k\cot{\delta_s(k)}-ik} \approx \frac{1}{-\frac{1}{a_s}-ik}.$$ The full expansion for $$k \cot{\delta_s(k)} = -\frac{1}{a_s} + \frac{1}{2} r_s k^2 + \mathcal{O}(k^4)$$ is known as the Effective Range Expansion. You can see that for $k \to 0$ you recover the expressions you quoted for the scattering amplitude and total cross-section . | {
"domain": "physics.stackexchange",
"id": 24130,
"tags": "quantum-mechanics, nuclear-physics, scattering, neutrons"
} |
Speed of electrons in matter | Question: I'm trying to shed light on these questions:
At what speed do electrons orbit the nucleus of an atom?
Does the speed of an electron vary between different shells or levels?
Is it true, as I have heard, that electrons in low-level orbits move at higher speeds, approaching c? If so, gaining speed would literally increase their mass (by relativity); those electrons would then gain higher mass, perhaps get closer to the nucleus, keep growing in speed, gaining mass, getting closer... Would this result in something like an electron falling into the nucleus?
Answer: First of all, it is important that you do not mistake the electrons for objects that are moving nicely on perfect circular orbits. A correct answer can only be obtained through quantum mechanical calculations.
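That said, quantum mechanics does give a characteristic speed scale. For a hydrogen-like ion with nuclear charge $Z$, a standard result is that the 1s electron's rms speed is roughly $v \sim Z\alpha c$, with $\alpha \approx 1/137$ the fine-structure constant (this is an added order-of-magnitude sketch, not a classical orbital speed):

```python
# Characteristic 1s electron speed (as a fraction of c) for a
# hydrogen-like ion with nuclear charge Z: v/c ~ Z * alpha.
ALPHA = 1 / 137.036  # fine-structure constant

def v_over_c(Z):
    return Z * ALPHA

# Hydrogen (Z=1): v/c ~ 0.007, comfortably non-relativistic.
# Mercury (Z=80): v/c ~ 0.58, so inner-shell electrons in heavy atoms
# are genuinely relativistic (hence relativistic quantum chemistry).
```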
Perhaps take a look at this older post: How is the energy of an electron-shell related to the speed of electrons in that shell?
Or
How fast do electrons travel in an atomic orbital? | {
"domain": "physics.stackexchange",
"id": 40987,
"tags": "electrons, relativity, speed"
} |
Is it possible in principle to observe all galaxies in the observable universe? | Question: Are there fundamental physical limitations that prevent the observation and cataloging of all (or almost all) galaxies in the observable universe? If there are no physical limitations, then what technological conditions are necessary to achieve such a goal?
Answer: In principle, so long as the galaxy is on our past light cone and emits light, there's no fundamental reason we can't detect that light (barring "statistical accidents" like that galaxy happening to be occluded by a closer galaxy).
In practice, we are very far from being able to measure every galaxy. Here are some practical considerations that place limits on our ability to do so. Surveys typically have to trade off between sky coverage and depth. The depth of a survey is limited by the power of the telescope you are using, the amount of time you are willing to wait for one image, the angular resolution of the telescope, and the spectral range of the survey (since the intensity of the light we receive decreases as one over the square of the distance to the galaxy, the angular size of galaxies decreases with distance, and galaxies may emit in different bands and, if they are far enough away, will be at such a high redshift that they may move out of the observing band). If you take less time per image, you can cover a wider area, but will not be able to go as deep.
You also have to worry about the Milky Way; of course it's much more difficult to observe light from galaxies that has to pass through the Milky Way than light which does not. Furthermore, it matters what kind of observation you want to make. Typically you want to measure a redshift, so some kind of spectral measurement is needed. Ideally you would measure the spectrum from each galaxy and compute the redshift by looking at individual lines, but this is very time consuming. Many surveys measure photometric redshift, which is much easier than taking a full spectrum but also has a much larger uncertainty.
The LSST will be the state of the art in the next 10 years or so for galaxy surveys. It plans to get photometric redshifts for $10^{10}$ galaxies ranging from the local group to redshifts up to and even larger than 6 over $10^4$ square degrees. | {
"domain": "physics.stackexchange",
"id": 84542,
"tags": "astronomy, galaxies, observable-universe"
} |
Is an Isothermal Process really possible? Heat cannot convert into work with 100% efficiency! | Question: We know that heat is an energy of higher entropy (Diffused/Dispersed Energy). Work is the energy of lower entropy (Concentrated form of Energy). We know that entropy cannot reduce on its own, but we see that in an isothermal process, the heat seems to convert into work with 100% efficiency. How is that possible? Doesn't it violate the Law of Entropy?
Is an Isothermal process really possible which can convert heat into work with 100% efficiency?
Some say that it is possible in theory but not in practice. My question is: if a process like an isothermal process violates the idea of entropy, how could it even be theoretically possible?
Kindly help.
Answer: Heat from a single reservoir can, in principle, be converted to work entirely - during isothermal expansion done reversibly. 2nd law does not prohibit this. It says only that it is not possible to have a cyclic process where the system returns to the original state, and have net heat conversion to work with efficiency higher than the Carnot efficiency.
In isothermal expansion of gas, the gas does work and its volume increases. At this stage, there is no limit to efficiency, it can be 100%. However, this isothermal expansion can't be maintained indefinitely in practice, at some point it has to change, and the system has to be brought back to the original volume or close to it.
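To make the isothermal stage concrete: for a reversible isothermal ideal-gas expansion, $\Delta U = 0$, so the heat absorbed equals the work done, $Q = W = nRT\ln(V_2/V_1)$, with no entropy-law violation (the gas's entropy rises by exactly $Q/T$). A small numerical check with illustrative values:

```python
# Reversible isothermal expansion of an ideal gas: compare the analytic
# work nRT ln(V2/V1) with a direct numerical integral of P dV.
import math

n, R, T = 1.0, 8.314, 300.0   # mol, J/(mol K), K (illustrative values)
V1, V2 = 1.0e-3, 2.0e-3       # m^3

W_analytic = n * R * T * math.log(V2 / V1)

# Midpoint-rule integral of P(V) = nRT/V over [V1, V2]
N = 100_000
dV = (V2 - V1) / N
W_numeric = sum(n * R * T / (V1 + (i + 0.5) * dV) * dV for i in range(N))

# Since Delta U = 0 at constant T, all of this work is supplied as heat:
# Q = W, i.e. 100% conversion *for this non-cyclic step*.
```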
During the return, the system has to give off some heat to lower temperature reservoir, so that the system can return to the original state (so that expansion and working can continue after that). This limits efficiency of the conversion of heat into work. | {
"domain": "physics.stackexchange",
"id": 96974,
"tags": "thermodynamics, temperature, entropy"
} |
Can mirrors give rise to chromatic aberration? | Question: Chromatic aberration is a type of image defect arising because the refractive index of a medium differs for different wavelengths of light. In almost all sources I have read so far, this type of optical aberration is discussed only with respect to lenses.
The following text is from this webpage:
For many years, chromatic abberation was a sufficiently serious problem for lenses that scientists tried to find ways of reducing the number of lenses in scientific instruments, or even eliminating them all together. For instance, Isaac Newton developed a type of telescope, now called the Newtonian telescope, which uses a mirror instead of a lens to collect light.
The above statement hints that chromatic aberration is not present in mirrors, given their extensive use in place of lenses a long time ago. But is the chromatic aberration eliminated completely, or only partially, by the use of mirrors? I have this doubt since refraction takes place even in mirrors, in addition to reflection, in the glass layer (present in front of the silvered surface). Or in other words, can mirrors give rise to chromatic aberration?
Answer: Reflecting light from a back-surface glass mirror is equivalent to passing the light through a glass plate of twice the thickness of the mirror glass (because light reflecting from a back-surface glass mirror has to pass through the glass in both directions).
A very thin light beam passing through a flat glass plate at an angle will be refracted in one direction as it enters the glass, then will be refracted in the other direction as it exits the glass, so the portions of the beam outside the glass will be parallel, but slightly offset from each other.
Because the refractive index of glass is wavelength dependent, the amount of offset will be slightly different for beams with different wavelengths. That means the red component of a reflected image will be slightly offset from the blue component, for example. So, the reflection of, e.g., a white cube at a shallow angle will have a slight rainbow effect on its edges.
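The size of this effect can be estimated from the standard lateral-offset formula for a beam crossing a flat plate of thickness $t$ at incidence angle $\theta$: $d = t\sin\theta\,\big(1 - \cos\theta/\sqrt{n^2-\sin^2\theta}\big)$. The glass thickness and dispersion values below are assumed, typical-crown-glass numbers chosen only for illustration:

```python
# Wavelength-dependent lateral offset of a beam passing through a flat
# glass plate (the double pass through a back-surface mirror behaves
# like a plate of twice the glass thickness).
import math

def lateral_offset(t, theta, n):
    s = math.sin(theta)
    return t * s * (1 - math.cos(theta) / math.sqrt(n * n - s * s))

t = 0.004                 # 4 mm glass path (double pass through ~2 mm)
theta = math.radians(45)  # 45 degree viewing angle
d_red = lateral_offset(t, theta, 1.513)   # assumed n at red wavelengths
d_blue = lateral_offset(t, theta, 1.528)  # assumed n at blue wavelengths
# d_blue - d_red is the small chromatic separation of the reflected
# image: tens of micrometres here, visible as a faint colour fringe.
```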
A front-surface mirror does not have that problem. | {
"domain": "physics.stackexchange",
"id": 100133,
"tags": "optics, reflection, refraction, geometric-optics"
} |
Would covering part of Lake Mead with large sheets of bubble wrap reduce the amount of water lost to evaporation? | Question: I have been recently thinking about how the water level of Lake Mead keeps dropping due to its high water evaporation rate brought on by recent years of historical drought conditions. The continuous lowering of Lake Mead threatens to shut down Hoover Dam and also threatens the water supply for Las Vegas, since they get most of their drinking water from Lake Mead.
As a short term solution to this problem, I am thinking that it might be worthwhile for the U.S. Army Corps of Engineers to start laying down long sheets of bubble wrap over the surface of Lake Mead to reduce the rate of water evaporation. Large quantities of water vapor would be trapped beneath the sheets of bubble wrap and this should reduce the amount of water lost to evaporation.
I am sure large amounts of bubble wrap could be quickly manufactured especially if the U.S. government were to subsidize bubble wrap manufacturers around the nation, and if they also were to pay for the transport of this bubble wrap from these factories to Lake Mead. The U.S government could also pay for the costs of ships and crews deploying the bubble wrap out on the lake.
Lake Mead is 247 square miles in size and it may be too expensive to cover all of its surface with sheets of bubble wrap. Yet, even if say only 33% of the lake's surface could be covered, this should still have a significant impact on reducing the amount of water lost to evaporation.
Once the drought comes to an end and the water level of Lake Mead has risen back to its normal level, these ships would go back out on the lake to collect the bubble wrap, and it could be stored in warehouses for future use if the need arises again.
Would covering part of Lake Mead with large sheets of bubble wrap reduce the amount of water lost to evaporation?
Answer: Something similar has been done with plastic balls. I'd guess they're much easier to manufacture, since it doesn't have to be a continuous sheet. It sounds like there is some local water savings, and the time to outweigh the manufacturing water use isn't a huge concern in this context.
Edit: that's evidently not why they have the balls there, but they do say it can reduce evaporation by 85-90% nevertheless.
That being said, I'm not sure what the cost vs benefit would look like. The highest possible value--likely much higher than realistic--for reduced evaporation would be about one-seventh of a very low estimate of the inflow. The losses are dominated by actual use, not evaporation.
Discharge from a Colorado River gage not far upstream of Lake Mead seems to typically be in the range of 10,000-15,000 cfs (about 280 cms at the low end). Not accounting for any inflows between that gage and Lake Mead, so that's a doubly low estimate.
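A quick, hedged sanity check of the unit conversions used in this answer (the 247 mi² lake area comes from the question; the 2 m/yr evaporation and the low-end 10,000 cfs discharge are the figures quoted here):

```python
# Unit-conversion check of the figures in this answer.  Inputs: 247 mi^2
# lake area (from the question), 2 m/yr evaporation and a low-end
# 10,000 cfs discharge (from the answer).
M2_PER_SQ_MILE = 2.58999e6
SECONDS_PER_YEAR = 365.25 * 24 * 3600
CFS_TO_CMS = 0.0283168           # cubic feet/s -> cubic metres/s

inflow_cms = 10_000 * CFS_TO_CMS                      # low-end gage discharge
lake_area_m2 = 247 * M2_PER_SQ_MILE
evap_cms = (2.0 * lake_area_m2) / SECONDS_PER_YEAR    # 2 m/yr over the lake

print(round(inflow_cms), round(evap_cms))   # 283 41
print(round(inflow_cms / evap_cms, 1))      # 7.0 -> evaporation ~ 1/7 of inflow
```

The numbers reproduce the answer's "about 280 cms", "about 40 cms" and "one-seventh" figures.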
Mean annual potential evapotranspiration from Water Years 2010-2019 (from here) at Lake Mead seems to be about 2 m. If the lake's evaporation was exactly equal to that maximum, and all of it was stopped by the covering, then this would equate to about 40 cms, on average. (That's a remote sensing estimate, so I'm not sure how accurate it is over a large water body; I know some algorithms struggle with that.) | {
"domain": "earthscience.stackexchange",
"id": 2480,
"tags": "climate-change, hydrology, water, geoengineering, water-vapour"
} |
Does a truck stop faster if the stack on the back of truck is stable or if it moves forward? | Question: My question in the title might not be very descriptive so I am re-writing it here:
If there is a truck in motion and it has a stack of hay (let's suppose) on the back. Now if the truck comes to a sudden stop, will it stop faster if the force exerted by the truck on the hay overcomes the friction force (another wording: will it stop faster if the hay slips forward), or will it stop faster if the hay stays in place?
EDIT:
I do not know where everyone is getting the tied-down-or-not option from. The hay isn't tied down (sorry for misguiding). There are two situations: it slides on the truck's back, or it doesn't slide on the truck's back. In which situation does the truck stop faster?
Answer: On the whole, static friction is higher than dynamic friction. This means that if you can brake without your wheels skidding, you will come to a halt more quickly. So let's assume that the truck brakes without skidding, and see where that gets us.
Let's assume that your truck has weight $W = Mg$ with a haystack with additional weight $w = mg$ on top. Coefficient of (static) friction between wheels and road is $\mu_1$, and friction between haystack and truck bed is $\mu_2$.
The normal force on the tires is $W+w$ and the maximum static friction is $\mu_1(W+w)$.
If the wheels stop in a distance $D$ and the hay stack can slide an additional distance $d$ on the truck bed, then the work done to bring the entire assembly to a halt (which must be equal to the kinetic energy we started with) is
$$\mu_1 (W+w) D + \mu_2 w d$$
If there was no movement of the haystack, $d=0$ and the second term would disappear.
So if we have to do the same amount of work to bring the truck to a halt in both cases, we can set the distance of the no-slide truck to $D'$ and we get the expression:
$$\mu_1 (W+w) D + \mu_2 w d = \mu_1(W+w) D'\\
D' = D + \frac{\mu_2 w }{\mu_1(W+w)}d$$
Since the second term is always greater than zero, the stopping distance will be longer for the truck that has the hay stack tied down (or otherwise not sliding). | {
"domain": "physics.stackexchange",
"id": 21612,
"tags": "newtonian-mechanics, kinematics, friction, inertia"
} |
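A quick numeric illustration of the stopping-distance relation derived in the answer above, $D' = D + \frac{\mu_2 w}{\mu_1(W+w)}d$; every value below is made up purely for illustration:

```python
# Numeric check of D' = D + (mu2 * w) / (mu1 * (W + w)) * d from the
# answer above.  Every value here is hypothetical.
g = 9.81
M, m = 10_000.0, 1_000.0   # truck and haystack masses, kg
W, w = M * g, m * g        # weights
mu1, mu2 = 0.7, 0.3        # tyre/road and hay/bed friction coefficients
D, d = 30.0, 2.0           # wheel stopping distance and hay slide, metres

D_prime = D + (mu2 * w) / (mu1 * (W + w)) * d
print(D_prime > D)         # True: the non-sliding load stops later
print(round(D_prime, 2))
```

With these numbers the difference is only a few centimetres, but the sign always matches the answer's conclusion: $D' > D$.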
Any idea to implement a generic controller for a Quadcopter? | Question:
I'm trying right now to implement a quadcopter. I wrote the URDF file defining the flying robot, and it moves pretty well in RViz using the odometry information.
But here is the first problem.
I'm using move_base as a controller to move the quadcopter around, but when trying to set goals in free space (with a predefined height) it only moves on a planar surface, even though in move_base one can set the robot as holonomic (free to move in all 6 DOF).
My idea was to define one more frame (/frame_footprint), or "the shadow" of the robot, that always moves in the plane, and let it be controlled by the move_base pkg, which works pretty well (see the following image):
With another transformation I would move the last frame (/frame_link) with respect to /base_footprint. Since it can rotate and move on the z axis, it will take the height of the copter as input.
But now I have the first question:
How can I control the height? Are there some controller in ROS for a 6DOF flying robot?
Any suggestion for a controller?
Originally posted by Andromeda on ROS Answers with karma: 893 on 2014-09-02
Post score: 1
Answer:
hector_quadrotor provides what you describe (controller, URDF model and more, like complete gazebo simulation including dynamics), so I'd recommend taking a look at that. As demonstrated in the indoor SLAM demo video, most capabilities required for indoor navigation as required by move_base are already available, so it should be relatively easy to create a move_base config. Unfortunately I'm not aware of an existing one.
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-09-03
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Andromeda on 2014-09-03:
Vielen Dank Stefan!
That's what i wanted!!
Comment by Andromeda on 2014-09-09:
Hi,
the code is really difficult to understand. There are some guidelines or documents that describe how you developed such a package?
Regards | {
"domain": "robotics.stackexchange",
"id": 19274,
"tags": "transform"
} |
Get data from binance api and save to ClickHouse DB | Question: I did the following
Created DB and table in prepate_table function
Extracted data from binance in GetHistoricalData function
Saved the data to DB
My code works fine but I want to optimize my code and remove redundant steps
My solution
import os
import pandas as pd
import time
import clickhouse_driver
import datetime
from binance.client import Client
# Binance test_key https://testnet.binance.vision/key/generate
API_KEY = "--"
API_SECRET = "--"
def GetHistoricalData(
timedelta_days=10,
ticker="BTCUSDT",
kline_interval=Client.KLINE_INTERVAL_1HOUR
):
# Calculate the timestamps for the binance api function
untilThisDate = datetime.datetime.now()
sinceThisDate = untilThisDate - datetime.timedelta(days=timedelta_days)
client = Client(API_KEY, API_SECRET)
client.API_URL = 'https://testnet.binance.vision/api'
candle = client.get_historical_klines(ticker, kline_interval, str(sinceThisDate), str(untilThisDate))
# Create a dataframe to label all the columns returned by binance so we work with them later.
df = pd.DataFrame(candle, columns=['dateTime', 'open', 'high', 'low', 'close', 'volume', 'closeTime', 'quoteAssetVolume', 'numberOfTrades', 'takerBuyBaseVol', 'takerBuyQuoteVol', 'ignore'])
# as timestamp is returned in ms, let us convert this back to proper timestamps.
df.dateTime = pd.to_datetime(df.dateTime, unit='ms').dt.strftime("%Y-%m-%d %X")
df.set_index('dateTime', inplace=True)
# Get rid of columns we do not need
df = df.drop(['quoteAssetVolume', 'numberOfTrades', 'takerBuyBaseVol','takerBuyQuoteVol', 'ignore'], axis=1)
return df
def prepare_table():
client = clickhouse_driver.Client.from_url(f'clickhouse://default:{os.getenv("CLICK_PASSWORD")}@localhost:9000/crypto_exchange')
# field names from binance API
client.execute('''
CREATE TABLE IF NOT EXISTS historical_data_binance
(
dateTime DateTime,
closeTime Int64,
open Float64,
high Float64,
low Float64,
close Float64,
volume Float64,
kline_type String,
ticker String
) ENGINE = Memory
''')
return client
def insert_data(client, insert_data, db_name="crypto_exchange", table_name="historical_data_binance"):
"""
insert_data = {
"dateTime": dateTime,
"closeTime": closeTime,
"open": open,
"high": hign,
"low": low,
"close": close,
"volume": volume,
"kline_type": kline_type,
"ticker": ticker
}
"""
columns = ', '.join(insert_data.keys())
query = 'insert into {}.{} ({}) values'.format(db_name, table_name, columns)
data = []
data.append(insert_data)
client.execute(query, data)
client_db = prepare_table()
hist_data = GetHistoricalData(kline_interval=Client.KLINE_INTERVAL_1HOUR, ticker="BTCUSDT",)
for row in hist_data.iterrows():
data = row[1].to_dict()
data["dateTime"] = datetime.datetime.strptime(row[0], "%Y-%m-%d %X")
data["closeTime"] = int(data["closeTime"])
data["open"] = float(data["open"])
data["high"] = float(data["high"])
data["low"] = float(data["low"])
data["close"] = float(data["close"])
data["volume"] = float(data["volume"])
data["kline_type"] = Client.KLINE_INTERVAL_1HOUR
data["ticker"] = "BTCUSDT"
insert_data(client_db, data)
What can be improved?
Answer:
By PEP8, GetHistoricalData should be get_historical_data; likewise for your local variables like until_this_date
Introduce PEP484 type hints, for instance timedelta_days: Real (if it's allowed to be floating-point) or int otherwise
You should not be redefining 'https://testnet.binance.vision/api'; that's already defined as BaseClient.API_TESTNET_URL - see the documentation. This also suggests that you should be using testnet=True upon construction.
This date conversion:
df.dateTime = pd.to_datetime(df.dateTime, unit='ms').dt.strftime("%Y-%m-%d %X")
is only half a good idea. Datetime data should be stored in machine format rather than as rendered user-presentation data, so keep the to_datetime and drop the strftime.
When you call df.drop, pass inplace=True since you overwrite df anyway.
Your use of iterrows broken down into individual ClickHouse insert statements is going to be slow. ClickHouse supports multi-row values() syntax. Try making insertion batches that use this syntax to reduce the total number of inserts that you perform. | {
"domain": "codereview.stackexchange",
"id": 42391,
"tags": "python, database, pandas"
} |
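The batching advice at the end of the answer above could look roughly like this sketch. The column set follows the question's table, the helper name is hypothetical, and clickhouse_driver's Client.execute accepts a list of dicts after an INSERT ... VALUES statement:

```python
from datetime import datetime

def build_batch(klines, kline_type, ticker):
    """Turn raw kline rows into a list of dicts for ONE batched insert.
    Column names follow the table definition in the question."""
    rows = []
    for dt, op, hi, lo, cl, vol, close_time in klines:
        rows.append({
            "dateTime": dt,
            "closeTime": int(close_time),
            "open": float(op),
            "high": float(hi),
            "low": float(lo),
            "close": float(cl),
            "volume": float(vol),
            "kline_type": kline_type,
            "ticker": ticker,
        })
    return rows

klines = [  # two made-up hourly candles
    (datetime(2019, 12, 24, 0), "7150.1", "7180.0", "7140.2", "7165.3", "812.5", 1577145599999),
    (datetime(2019, 12, 24, 1), "7165.3", "7190.7", "7160.0", "7188.9", "640.1", 1577149199999),
]
rows = build_batch(klines, "1h", "BTCUSDT")

# Then a single round-trip replaces len(rows) separate inserts:
# client.execute(
#     "INSERT INTO crypto_exchange.historical_data_binance "
#     "(dateTime, closeTime, open, high, low, close, volume, kline_type, ticker) VALUES",
#     rows,
# )
```

One insert per batch instead of one per row removes most of the network and parsing overhead that iterrows-driven single inserts incur.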
Removal of the DC Component of Gabor Filters | Question: I was recently studying Gabor filters and came to know that they have a shortcoming: they have a small but non-negligible DC component.
Now my question is: why is it necessary to remove the DC component (using log-Gabor)?
If there are any other limitation of gabor filters then please do tell me.
thanks..
Answer: A Gabor filter is formed by modulating a sine/cosine wave with a Gaussian: the centre frequency of the filter is specified by the frequency of the sine/cosine, and the bandwidth of the filter is specified by the width of the Gaussian (frequencies inside a certain range pass, while those outside it are attenuated). In other words, more than one Gabor filter is needed to cover the spectrum.
A Gabor filter can be designed for a bandwidth of 1 octave maximum.
You can find more details about Gabor and log-Gabor filters. | {
"domain": "dsp.stackexchange",
"id": 1507,
"tags": "image-processing, gabor"
} |
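A small numerical illustration of the DC-component issue discussed in the record above: the sum of a filter's taps is its response to a constant (DC) input, and for an even (cosine) Gabor kernel that sum is not zero. The parameters below are arbitrary:

```python
import math

# Sum the taps of a 1-D even (cosine) Gabor kernel.  A zero-DC filter
# would sum to zero; a real Gabor kernel does not, which is the
# shortcoming the question refers to.  sigma and f are arbitrary.
sigma, f = 2.0, 0.25
taps = [math.exp(-x * x / (2 * sigma**2)) * math.cos(2 * math.pi * f * x)
        for x in range(-10, 11)]
dc = sum(taps)
print(dc > 0.01)   # True: small but clearly non-zero DC component
```

A log-Gabor filter, defined directly in the frequency domain with zero gain at zero frequency, avoids this residual DC response by construction.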
Applying a phase shift to a coherent state vs a phase-space rotation (Mach-Zehnder) | Question: I'm having some trouble with the physical implementation of a phase shift vs a rotation in phase-space for a coherent state.
Say I have a laser pulse which yields a coherent state $|\alpha\rangle$. I then put that light into a Mach-Zehnder interferometer with a 50/50 beamsplitter, where one of the arms adds an additional phase of $\phi$. Does that mean the state is
$|\frac{e^{i\phi}\alpha}{\sqrt{2}}\rangle$
or is it
$e^{i\phi}|\frac{\alpha}{\sqrt{2}}\rangle$
I thought it was the first instance, but then thinking about it, I know MZIs can create destructive interference so when $\phi = \pi$, the output state has zero amplitude, which would suggest we get $|\alpha\rangle - |\alpha\rangle$ and not $|\alpha\rangle + |-\alpha\rangle$
Which leads me to my next question, if I had state $|\alpha\rangle$ and wanted to transform it to $|-\alpha\rangle$, how would I go about physically implementing that? Can it be done with a MZI?
Answer: It is the first case: The MZ produces
$$ |\alpha\rangle\rightarrow \left| \frac{1}{\sqrt{2}} \exp(i\phi)\alpha \right\rangle . $$
After the last beam splitter you then get
$$ |\alpha_{out}\rangle = \left| \frac{1}{2} \exp(i\phi)\alpha
+ \frac{1}{2} \exp(-i\phi)\alpha\right\rangle
= \left| \cos(\phi)\alpha \right\rangle . $$
For $\phi=\pi/2$, it would become the vacuum state $|0\rangle=|\text{vac}\rangle$. | {
"domain": "physics.stackexchange",
"id": 98477,
"tags": "optics, quantum-optics, coherent-states"
} |
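A numeric check of the answer's Mach-Zehnder relation above, using its symmetric $\pm\phi$ phase convention; the input amplitude is arbitrary:

```python
import cmath, math

# The answer's MZ relation with symmetric +phi / -phi arm phases:
# output amplitude = cos(phi) * alpha.
alpha = 1.5 + 0.5j   # arbitrary input coherent-state amplitude

def mz_output(alpha, phi):
    return 0.5 * cmath.exp(1j * phi) * alpha + 0.5 * cmath.exp(-1j * phi) * alpha

for phi in (0.0, math.pi / 4, math.pi / 2):
    assert abs(mz_output(alpha, phi) - math.cos(phi) * alpha) < 1e-12

print(abs(mz_output(alpha, math.pi / 2)) < 1e-12)  # True: vacuum at phi = pi/2
```

This is just classical interference of the two complex amplitudes, which is exactly what happens to the coherent-state parameter $\alpha$ at the recombining beam splitter.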
ICP scan matching failure | Question:
Hello, ROS people.
I am using ROS laser_scan_matcher but faced this ICP failure.
Please have a look the following video.
YouTube video
Is there anyone who encountered this and any recommendations?
Cheers.
Originally posted by enddl22 on ROS Answers with karma: 177 on 2012-06-04
Post score: 2
Answer:
(Warning: Shameless plug ahead)
Not sure about ICP, but you could also try hector_mapping and see if it works better. We normally get good results with the Hokuyo UTM-30LX scanner and similar scenarios like that shown in your video.
/edit: Updated results with hector_mapping. I found that increasing the scan queue size from 5 to 25 gets rid of all errors, resulting in a nice map without inconsistencies. It's an interesting question why that is though, considering that it takes only 4-8% of CPU (probably idling waiting for transforms). Map looks like this:
The results can be reproduced using this launch file. Similar to this tutorial, just start the launch file in one terminal:
roslaunch hector_slam_launch postproc_qut_logs.launch
and then play back your bagfile in another terminal:
rosbag play 2.5D_bag02.bag --clock
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2012-06-05
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by enddl22 on 2012-06-08:
Thank you for reply. I tried to run with hector_mapping but it keeps complaining "lookupTransform base_footprint to laser timed out. Could not transform laser scan into base_frame." I might need time to run it with. If you could run it with the linked bag file, it would be great to see results.
Comment by enddl22 on 2012-06-08:
The bag file link is https://dl.dropbox.com/u/12446150/2.5D_bag02.bag
Thank you. | {
"domain": "robotics.stackexchange",
"id": 9669,
"tags": "ros"
} |
Test cases for an elevator simulator | Question: I'm trying to test an elevator simulator program that runs in the console and requires user interaction. At the moment, my tests look like this:
gem 'minitest', '>= 5.0.0'
require 'minitest/spec'
require 'minitest/autorun'
require_relative 'elevator'
class SingleFloorElevator < Elevator
@@moves = ["2", "N"].each
def get_input; @@moves.next end
end
class MultiFloorElevator < Elevator
@@moves = ["2", "Y", "7", "Y", "4", "N"].each
def get_input; @@moves.next end
end
class InvalidElevetorOne < Elevator
@@moves = ["-2", "2"].each
def get_input; @@moves.next end
end
class InvalidElevatorTwo < Elevator
@@moves = ["2000", "5"].each
def get_input; @@moves.next end
end
class ElevatorTest < MiniTest::Test
def test_can_accept_and_move_to_floor
e = SingleFloorElevator.new
e.run
assert_equal(e.current_floor, 2)
end
def test_can_change_direction
e = MultiFloorElevator.new
e.run
assert_equal(e.going_down, true)
assert_equal(e.going_up, false)
end
describe "it can maintain a list of floor numbers" do
it "cannot travel below the ground floor" do
e = InvalidElevetorOne.new
e.enter_floor
assert_equal(e.floors.size, 1)
assert_equal(e.floors, [2])
end
it "cannot travel higher than the top floor" do
e = InvalidElevatorTwo.new
e.enter_floor
assert_equal(e.floors.size, 1)
assert_equal(e.floors, [5])
end
end
end
As you can see, I'm overriding the Elevator class (more importantly the get_input method, which grabs user input from the console) multiple times according to what I need to test. It all works perfectly fine, but I was wondering if there was a tidier way of doing things?
Answer: I kind of wish you had posted the elevator class as well, but it's kind of interesting how you much you can tell about the CUT (code under test) from looking at the just the tests themselves.
First, your elevator shouldn't be responsible for getting what floor it should go to from the user. Your program should tell it where to go. If you inverted control, you wouldn't have to subclass the code under test in order to test it.
Which brings me to my second point...
If you have to mock the CUT, you're no longer actually testing anything. You can't have confidence that your real class is going to behave as you expect it to, because you've very explicitly changed how it behaves. You're not testing your elevator. You're testing your mock elevators, which is no test at all.
Sure, you are testing the elevator. I know that, but certainly, this class has too many responsibilities. It shouldn't be getting user input from the console. How could you possibly reuse the elevator class in a GUI application later? | {
"domain": "codereview.stackexchange",
"id": 16980,
"tags": "object-oriented, ruby, unit-testing"
} |
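A minimal Ruby sketch of the inversion-of-control idea from the answer above: the elevator is handed an input source instead of reading the console itself, so tests can pass a canned enumerator without subclassing. The class name and body here are hypothetical, not the original Elevator implementation:

```ruby
# Sketch of the inversion-of-control idea from the answer: the elevator no
# longer reads the console itself, it is handed an input source.  The class
# below is hypothetical, not the original Elevator implementation.
class InjectableElevator
  attr_reader :current_floor

  def initialize(input)
    @input = input          # anything that responds to #next
    @current_floor = 0
  end

  def run
    loop do
      @current_floor = Integer(@input.next)   # target floor
      break if @input.next.upcase == "N"      # "another floor? (Y/N)"
    end
  end
end

# Production code would pass a console-backed source; tests pass a canned one:
elevator = InjectableElevator.new(["2", "Y", "7", "N"].each)
elevator.run
puts elevator.current_floor   # 7
```

With the input source injected, the test subclasses in the question become plain constructor arguments, and the console-reading responsibility moves out of the elevator entirely.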
Listing Docker Images without registry URI | Question: I have a private container registry behind a firewall that will need some Google registry images. To make sure things are updated sanely, I'm writing a mirror script in Bash. I have one line that lists the images without their registry URIs, but it's overly complicated and I am piping through 3 awk statements.
docker images | awk '{printf("%s\:%s\n", $1, $2)}' | awk 'NR>1' | awk -F '/' '{print $3}'
First awk statement:
$ docker images | awk '{printf("%s\:%s\n", $1, $2)}'
REPOSITORY:TAG
gcr.io/google_containers/skydns:2015-10-13-8c72f8c
Second awk statement:
$ docker images | awk '{printf("%s\:%s\n", $1, $2)}' | awk 'NR>1'
gcr.io/google_containers/skydns:2015-10-13-8c72f8c
Third awk statement:
$ docker images | awk '{printf("%s\:%s\n", $1, $2)}' | awk 'NR>1' | awk -F '/' '{print $3}'
skydns:2015-10-13-8c72f8c
How can I reduce this to one awk statement?
Answer: It looks like the purpose of the second command is to skip the REPOSITORY:TAG table header, which would always be the first line of the docker images output. You can therefore combine the first two awk commands as:
awk 'NR > 1 { printf("%s:%s\n", $1, $2) }'
To keep just the trailing part of the REPOSITORY field, you could do gsub(/.*\//, "", $1). Note that that is not the same as awk -F '/' '{print $3}', but likely better.
Therefore, I would write the solution as
docker images | awk 'NR > 1 { gsub(/.*\//, "", $1); printf("%s:%s\n", $1, $2); }' | {
"domain": "codereview.stackexchange",
"id": 16341,
"tags": "bash, awk"
} |
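The combined command from the answer above can be exercised on simulated `docker images` output (the image names below are just examples):

```shell
# The combined awk command, run against simulated `docker images` output
# (image names here are just examples):
printf '%s\n' \
  'REPOSITORY TAG IMAGE ID' \
  'gcr.io/google_containers/skydns 2015-10-13-8c72f8c abc123' \
  'ubuntu 18.04 def456' |
  awk 'NR > 1 { gsub(/.*\//, "", $1); printf("%s:%s\n", $1, $2); }'
# skydns:2015-10-13-8c72f8c
# ubuntu:18.04
```

Names without any slash pass through unchanged, since the `/.*\//` pattern simply fails to match.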
Generate credit cards dataset for locating number region | Question: Currently I'm working on a project for scanning credit cards and extracting text from them. First of all I decided to preprocess my images with some filters like thresholding, dilation and some other operations, but that was not successful for OCR on every credit card. So I learned a lot, and I found a solution like this for number-plate recognition that is very similar to my project.
In the first step I want to generate a random dataset like my cards to locate the card-number region, and for every card that I've generated I cropped two images: one that has numbers and one that has not. I generated 2000 images for each card.
so I have some images like this:
(does not have numbers)
(has numbers)
And after generating my dataset I used this model with tensorflow to train my network.
model = models.Sequential()
model.add(layers.Conv2D(8, (5, 5), padding='same', activation='relu', input_shape=(30, 300, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(16, (5, 5), padding='same', activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(32, (5, 5), padding='same', activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(2, activation='softmax'))
Here is my plot for 5 epochs.
I get almost 99.5% accuracy, and it seems to be wrong; I think I have some kind of overfitting in my data. Does it work correctly, or is my model overfitted? And how can I generate a dataset for this purpose?
Answer: I am assuming the question you are asking is how to prevent over-fitting while keeping accuracy at a maximum. Your graph does show that your model over-fits.
There are a couple of different methods to prevent over-fitting from happening. You can specify training to stop after a certain number of epochs. In your case it seems to be 2 or 3 epochs. Take care, as a new initialization might require more or fewer epochs to reach optimal accuracy, so you need to run your model a couple of times to determine the correct number of epochs.
You can also specify the accuracy you want your model to reach before you stop training. This can be dangerous, as a local minimum could be found below your expected accuracy, resulting in your training running indefinitely. Also, your model could have become more accurate than your expected accuracy but stopped prematurely upon reaching your stopping condition.
You can combine the two: terminate training once it reaches your expected accuracy, otherwise terminate it once it reaches a certain number of epochs. This way you prevent your training from running indefinitely.
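The combined stopping rule just described can be sketched in a few lines; the thresholds and the accuracy history below are hypothetical:

```python
# Sketch of the combined rule: stop once a target accuracy is reached,
# otherwise stop when the epoch budget runs out.  All numbers hypothetical.
def should_stop(epoch, accuracy, max_epochs=5, target_acc=0.97):
    return accuracy >= target_acc or epoch >= max_epochs

history = [0.62, 0.81, 0.93, 0.975, 0.985]   # simulated per-epoch accuracy
stopped_at = next(e for e, acc in enumerate(history, start=1)
                  if should_stop(e, acc))
print(stopped_at)   # 4 -> halts before the over-fitting regime sets in
```

In a real training loop the check runs once per epoch, right after the validation accuracy is computed.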
In Tensorflow you would want to build your own custom training seen here
The method you are interested in is
epoch_accuracy = tfe.metrics.Accuracy()
In their example they get the accuracy after each epoch; you would want to create your own batches and apply it there.
If you really want to dive deeper, you can implement methods for the model to detect when it starts over-fitting, such as looking at the standard deviation of the accuracy across a certain measure (usually epochs, but your model starts over-fitting after 5 so it's not a good measure in your case; maybe take the standard deviation over a certain number of trained batches). If the accuracy leaves the standard-deviation band, then it might be a good idea to stop training. The problem with this technique is that your model has to over-fit a bit to know it needs to stop. The other problem is that your model might seem to be over-fitting when it was just a dip in accuracy, and be stopped prematurely. (There are methods around this, but I can't recall them.)
Another technique is to add dropout. This introduces noise and pushes your model out of a local minimum. It also forces nodes that were ignored to be used and optimized. This surprisingly works well, but is not a surefire way to prevent over-fitting. This technique is built into Tensorflow and doesn't need custom training. You can find more reading about Tensorflow dropout here; they also have more techniques on how to overcome over-fitting, but their solution is to change the model, and your model seems to be fine.
tf.keras.layers.Dropout(0.2),
These are just some techniques; there are a lot more out there. Unfortunately there is no concrete method to prevent over-fitting, but steps can be taken to stop training well before it occurs. The first two techniques are mostly used together. | {
"domain": "ai.stackexchange",
"id": 1359,
"tags": "neural-networks, convolutional-neural-networks, tensorflow, datasets, optical-character-recognition"
} |
How do you identify the elastic limit and yield point on a stress/strain graph? | Question:
I understand what these points are, but I'm struggling to identify E and Y on certain graph shapes.
With graph B I can identify Y, but there doesn't seem to be a clear elastic limit between P and Y.
With graph C there doesn't seem to be a clear yield point or elastic limit, so I'm not sure where I would place them.
Graph D doesn't follow Hooke's Law at all, so there is no limit of proportionality. In this case, where would the elastic limit and yield point go?
Note: This is not a specific homework question, in case anyone thinks the question is too specific. This is a general question about the placement of points on a graph that I need to be able to do for my A-Level course.
Answer: The elastic limit is the point beyond which the material will not return to its original length when the load is removed. Yield point is essentially the same, except that it is usually defined when the permanent strain reaches a particular level such as 0.2%.
Neither the elastic limit nor the yield point can be identified from a graph in which the load is continuously increased. In order to identify these points the load must be removed. So there is no way of identifying these points from the graphs you have been given.
The elastic limit is not related to the limit of proportionality, which can be identified from such a graph. In some cases, of which graph D is an example, the approximately-linear region does not pass through the origin. The limit of proportionality will then be at the end of this region. | {
"domain": "physics.stackexchange",
"id": 41645,
"tags": "elasticity, continuum-mechanics, stress-strain"
} |
Which phylum appeared most recently | Question: I'm aware that our earliest records of many major animal and plant phyla come from the Cambrian or Precambrian periods, and I'm also vaguely aware of some of the objections raised with general concept of phyla. With this in mind, I'm curious which of widely accepted biological phyla appeared most recently, and what evidence do we have of their relatively recent appearance?
I'm most interested in animals, but I'd also welcome any information about other organisms.
Answer: In my view, we simply don't have good enough data to answer this question. The fossil evidence is too sparse prior to the Cambrian and the evidence that we do have suggests that the phyla were already too separated. Meanwhile, the depth of time and the different lifecycles and circumstances of the species involved mean that any "genetic clocks" we might use are likely to be too poor at keeping time and, indeed, different attempts have delivered different estimates of the time of divergence. | {
"domain": "biology.stackexchange",
"id": 2466,
"tags": "evolution, taxonomy, phylogenetics, cladistics"
} |
4-digit 5's complement of a negative number | Question: Let $n = -13, \ k = -24$
How do I find the 4-digit 5's complement of each number? What would be the result of $n + k$ in complement representation?
I understand how to calculate $n$-digit, 2's complement. I convert it to base 2, invert and add one.
Also, with positive numbers, let's say $n = 13, \ k = 24$, the 4-digit 5's complement would be $(5542)_{10}$ and $(5531)_{10}$. Correct? What would their addition be?
Answer: Let me demystify $b$'s complement.
Let $b \geq 2$ be an integer. Given a $d$-digit integer $x$ in base $b$, we want to find a $d$-digit integer $y$ such that $y \equiv -x \pmod{b^d}$. Why is this useful? Suppose that $z$ is also a $d$-digit integer, and assume that $z-x \geq 0$. Then $z-x \equiv z+y \pmod{b^d}$, and so if we add $z$ and $y$ and ignore the carry, we will get $z-x$, achieving subtraction.
How do we find $y$, the $b$'s complement of $x$? We take $y = b^d-x = (b^d-1-x)+1$. Now the representation of $b^d-1$ in base $b$ consists of $d$ digits of $b-1$, so $b^d-1-x$ is obtained by inverting all the digits of $x$ (subtracting them from $b-1$). This is $b$'s complement: the $b$'s complement $y$ of $x$ is obtained by inverting all digits and adding 1. | {
"domain": "cs.stackexchange",
"id": 15370,
"tags": "base-conversion, numeral-representations"
} |
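The construction in the answer above can be made concrete for the question's numbers ($b = 5$, $d = 4$); note that base-5 digits run 0-4:

```python
# The answer's construction for b = 5, d = 4: subtract each base-5 digit
# from b-1 = 4, then add 1.
def to_base(n, b, d):
    """Low-level d-digit base-b representation of n, most significant first."""
    digits = []
    for _ in range(d):
        digits.append(n % b)
        n //= b
    return digits[::-1]

def b_complement(x, b, d):
    inverted = [b - 1 - dig for dig in to_base(x, b, d)]
    value = sum(dig * b**i for i, dig in enumerate(inverted[::-1]))
    return to_base(value + 1, b, d)

c13 = b_complement(13, 5, 4)   # 13 = 0023_5 -> invert -> 4421 -> +1
c24 = b_complement(24, 5, 4)   # 24 = 0044_5 -> invert -> 4400 -> +1
print(c13, c24)                # [4, 4, 2, 2] [4, 4, 0, 1]

# Adding the complements mod 5^4 represents (-13) + (-24) = -37:
val = lambda ds: sum(dig * 5**i for i, dig in enumerate(ds[::-1]))
print(to_base((val(c13) + val(c24)) % 5**4, 5, 4))   # [4, 3, 2, 3] = 5^4 - 37
```

So $n = -13$ is represented as $4422_5$, $k = -24$ as $4401_5$, and their sum (dropping the carry) as $4323_5$, which is indeed $5^4 - 37$.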
Does chemical energy contribute to mass? | Question: Does chemical energy contribute to the mass of an object? I don't mean the bond energy, but the possible energy that could be released (i.e. Does an atom of oxygen and a molecule of hydrogen (H2) have more mass together (even when not bonded) than the sum of the masses separately (assuming each one is considered in empty space)?)
Answer: I believe I now understand the question - and my earlier comment and the link for a potential duplicate does not apply.
If I'm right, you are actually asking this question:
If you take two containers - one with hydrogen, and one with oxygen -
and you weigh them separately; then allow the gases to mix, will the
mass of the mixture be different?
If the atoms react, the chemical energy released will result in a corresponding loss of mass for the combined system; see for example this earlier question or the answer to another related question.
If the atoms did not react, there is no change in the chemical energy; but interestingly, there is a change in the entropy of the system. And entropy is closely related to energy. Speculating here - my thermodynamics is very very rusty:
<speculation>
In fact, if you have a perfectly reversible engine, you can do work by changing the amount of entropy: $\Delta U = T\Delta S$. This means that the small change in entropy as you mix your gases will result in a small change in energy, and this should produce a change in mass.
</speculation>
I'm willing to be proven wrong on this last point. As I said - thermodynamics rusty. | {
"domain": "physics.stackexchange",
"id": 25449,
"tags": "nuclear-physics, atomic-physics, physical-chemistry, mass-energy, binding-energy"
} |
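For the question's hydrogen/oxygen example above, the size of the effect can be estimated with $\Delta m = E/c^2$; the roughly 286 kJ released per mole of H2 burned is a standard textbook enthalpy value assumed for this sketch, not a figure from the answer:

```python
# Order-of-magnitude estimate: assume burning one mole of H2 (forming one
# mole of water, ~18 g) releases about 286 kJ (assumed textbook enthalpy),
# and convert that energy to mass via E = mc^2.
c = 2.998e8                 # speed of light, m/s
energy_j = 286e3            # J released per mole of H2 (assumed)
delta_m = energy_j / c**2   # kg of mass lost by the reacting system

print(delta_m)              # ~3.2e-12 kg per mole
print(delta_m / 0.018)      # ~1.8e-10 relative change -- far too small to weigh
```

A relative change of parts in $10^{10}$ explains why chemical mass defects are never seen on a laboratory balance, even though they are real.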
Suggestions on Turtlebot- Create or Roomba? | Question:
Dear All,
I will soon be entering my PhD studies and would like to work with ROS vision libraries for SLAM. I am really interested in the TurtleBot since it is low-cost and efficient enough for using vision systems and doing SLAM. My question is: which is better for the TurtleBot, Create or Roomba? I am in Japan and can get a Create with little effort here. Also, if in the future I want to add a stereo camera to my TurtleBot, will that be possible?
I heard the TurtleBot with a Roomba is a little tricky to get started with. I am new to ROS and need something which gives me a good head start.
So, iRobot Create or Roomba, and why?
Originally posted by Vegeta on ROS Answers with karma: 340 on 2012-09-22
Post score: 0
Answer:
Create is a better option for the TurtleBot. If you opt for the Roomba you may have to tweak the parts to fit it. Since you say that you are just starting with ROS and the TurtleBot, do not waste time with the Roomba. I suggest you buy a standard TurtleBot to start off with. You have lots of cool apps for it in ROS and well-laid-out tutorials to keep you advancing. I worked on the TurtleBot with a Create and found it perfect (never worked with a Roomba though). And also, you can add a stereo camera to your TurtleBot in the future. So, go for Create! Wish you good luck!
Originally posted by prasanna.kumar with karma: 1363 on 2012-09-22
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by dejanpan on 2012-09-23:
Some more insightful info also here: http://ros-users.122217.n3.nabble.com/iRobot-Roomba-vs-Create-td2019380.html. | {
"domain": "robotics.stackexchange",
"id": 11113,
"tags": "ros, roomba, turtlebot, create, stereo"
} |
Scraping next page using BeautifulSoup | Question: I have created a script for article scraping - it finds title, subtitle, href-link, and the time of publication. Once retrieved, information is converted to a pandas dataframe, and the link for the next page is returned as well (so that it parses page after page).
Everything works as expected, though I feel there should be an easier -or more elegant- way of loading a subsequent page within main function.
import requests
import pandas as pd
from bs4 import BeautifulSoup
from time import sleep
def read_page(url):
r = requests.get(url)
return BeautifulSoup(r.content, "lxml")
def news_scraper(soup):
BASE = "https://www.pravda.com.ua"
container = []
for i in soup.select("div.news.news_all > div"):
container.append(
[
i.a.text, # title
i.find(class_="article__subtitle").text, # subtitle
i.div.text, # time
BASE + i.a["href"], # link
]
)
dataframe = pd.DataFrame(container, columns=["title", "subtitle", "time", "link"])
dataframe["date"] = (
dataframe["link"]
.str.extract("(\d{4}/\d{2}/\d{2})")[0]
.str.cat(dataframe["time"], sep=" ")
)
next_page = soup.select_one("div.archive-navigation > a.button.button_next")["href"]
return dataframe.drop("time", axis=1), BASE + next_page
def main(START_URL):
print(START_URL)
results = []
soup = read_page(START_URL)
df, next_page = news_scraper(soup)
results.append(df)
while next_page:
print(next_page)
try:
soup = read_page(next_page)
df, next_page = news_scraper(soup)
results.append(df)
except:
next_page = False
sleep(1)
return pd.concat([r for r in results], ignore_index=True)
if __name__ == "__main__":
df = main("https://www.pravda.com.ua/archives/date_24122019/")
assert df.shape == (120, 4) # it's true as of today, 12.26.2019
Answer: Optimization and restructuring
Function's responsibility
The initial approach makes the read_page function depend on both the requests and BeautifulSoup modules (though BeautifulSoup functionality is not actually used there). Then a soup instance is passed to the news_scraper(soup) function. To reduce dependencies, let the read_page function fetch the remote webpage and just return its contents r.content. That also uncouples news_scraper from soup instance arguments and allows passing any markup content, making the function more uniform.
Namings
BASE = "https://www.pravda.com.ua" within the news_scraper function is essentially acting like a local variable. But since it is a constant, it should be moved out to the top level and renamed to the more meaningful BASE_URL = "https://www.pravda.com.ua".
i is not a good variable name for a document element in for i in soup.select("div.news.news_all > div"). Good names are node, el, article ...
The main function is better renamed to news_to_df to reflect the actual intention. Also, in main(START_URL) - don't give arguments uppercased names; it should be start_url.
Parsing news items and composing "date" value
As you parse webpages (HTML pages), specifying html.parser or html5lib (not lxml) is preferable when creating the BeautifulSoup instance.
Extracting an article's publication time with the generic i.div.text would be fragile, as the parent node div.article could potentially contain other child div nodes with text content. Therefore, the search query should be more exact: news_time = el.find(class_='article__time').text. Instead of assigning, traversing and dropping the "time" column and aggregating:
dataframe["date"] = (
dataframe["link"]
.str.extract("(\d{4}/\d{2}/\d{2})")[0]
.str.cat(dataframe["time"], sep=" ")
)
- all of that can be eliminated and the date column can be calculated at once by combining the extracted date value (powered by the precompiled regex pattern DATE_PAT = re.compile(r'\d{4}/\d{2}/\d{2}')) and the news_time value.
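A minimal standalone sketch of that combination (the href and time values here are hypothetical stand-ins for one parsed article):

```python
import re

# precompiled once at module level instead of re-parsing the pattern per row
DATE_PAT = re.compile(r'\d{4}/\d{2}/\d{2}')

# hypothetical sample values standing in for one parsed article
href = "/news/2019/12/24/7235717/"
news_time = "23:36"

date = f'{DATE_PAT.search(href).group()} {news_time}'
print(date)  # 2019/12/24 23:36
```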
Instead of collecting a list of lists - a more robust way is to collect a list of dictionaries like {'title': ..., 'subtitle': ..., 'date': ..., 'link': ...}, as that prevents confusing the order of values against a strict list of column names.
Furthermore, instead of appending to a list, the sequence of needed dictionaries can be efficiently produced with a generator function. See the full implementation below.
The main function (new name: news_to_df)
The while next_page: loop is turned into while True:.
except: - do not use a bare except; at least catch the basic Exception class: except Exception:.
The repeated blocks of read_page, news_scraper and results.append(df) statements can be reduced to a single block (see below). One subtle nuance is that the ultimate "next" page will have '/archives/' in its a.button.button_next.href path, signaling the end of paging. It's worth handling that situation explicitly:
if next_page == '/archives/':
break
The final optimized solution:
import requests
import pandas as pd
from bs4 import BeautifulSoup
from time import sleep
import re
BASE_URL = "https://www.pravda.com.ua"
DATE_PAT = re.compile(r'\d{4}/\d{2}/\d{2}')
def read_page(url):
r = requests.get(url)
return r.content
def _collect_newsitems_gen(articles):
for el in articles:
a_node = el.a
news_time = el.find(class_='article__time').text
yield {'title': a_node.text,
'subtitle': el.find(class_="article__subtitle").text,
'date': f'{DATE_PAT.search(a_node["href"]).group()} {news_time}',
'link': f'{BASE_URL}{a_node["href"]}'}
def news_scraper(news_content):
soup = BeautifulSoup(news_content, "html5lib")
articles = soup.select("div.news.news_all > div")
next_page_url = soup.select_one("div.archive-navigation > a.button.button_next")["href"]
df = pd.DataFrame(list(_collect_newsitems_gen(articles)),
columns=["title", "subtitle", "date", "link"])
return df, f'{BASE_URL}{next_page_url}'
def news_to_df(start_url):
next_page = start_url
results = []
while True:
print(next_page)
try:
content = read_page(next_page)
df, next_page = news_scraper(content)
results.append(df)
if next_page == '/archives/':
break
except Exception:
break
sleep(1)
return pd.concat(results, ignore_index=True)
if __name__ == "__main__":
df = news_to_df("https://www.pravda.com.ua/archives/date_24122019/")
assert df.shape == (120, 4) # it's true as of today, 12.26.2019
If printing the final resulting df with print(df.to_string()), the output would look like below (with the middle part cut to make it a bit shorter):
https://www.pravda.com.ua/archives/date_24122019/
https://www.pravda.com.ua/archives/date_25122019/
https://www.pravda.com.ua/archives/
title subtitle date link
0 Голова Закарпаття не зрозумів, за що його звіл... Голова Закарпатської обласної державної адміні... 2019/12/24 23:36 https://www.pravda.com.ua/news/2019/12/24/7235...
1 Стало відомо коли відновлять будівництво об'єк... На зустрічі представників керівництва ХК Київм... 2019/12/24 22:41 https://www.pravda.com.uahttps://www.epravda.c...
2 ВАКС продовжив арешт Гримчаку до 14 лютого Вищий антикорупційний продовжив арешт для коли... 2019/12/24 22:25 https://www.pravda.com.ua/news/2019/12/24/7235...
3 Економічні новини 24 грудня: транзит газу, зни... Про транзит газу, про зниження "платіжок", про... 2019/12/24 22:10 https://www.pravda.com.uahttps://www.epravda.c...
4 Трамп: США готові до будь-якого "різдвяного по... Президент США Дональд Трамп на тлі побоювань щ... 2019/12/24 22:00 https://www.pravda.com.uahttps://www.eurointeg...
5 У податковій слідчі дії – електронні сервіси п... Державна податкова служба попереджає, що елект... 2019/12/24 21:55 https://www.pravda.com.ua/news/2019/12/24/7235...
6 Мінфін знизив ставки за держборгом до 11% річних Міністерство фінансів знизило середньозважену ... 2019/12/24 21:31 https://www.pravda.com.uahttps://www.epravda.c...
7 Україна викреслила зі списку на обмін ексберку... Російський адвокат Валентин Рибін заявляє, що ... 2019/12/24 21:13 https://www.pravda.com.ua/news/2019/12/24/7235...
8 Посол: іспанський клуб покарають за образи укр... Посол України в Іспанії Анатолій Щерба заявив,... 2019/12/24 20:45 https://www.pravda.com.uahttps://www.eurointeg...
9 Міністр енергетики: "Газпром" може "зістрибнут... У Міністерстві енергетики не виключають, що "Г... 2019/12/24 20:03 https://www.pravda.com.uahttps://www.epravda.c...
10 Зеленський призначив Арахамію секретарем Націн... Президент Володимир Зеленський затвердив персо... 2019/12/24 20:00 https://www.pravda.com.ua/news/2019/12/24/7235...
...
110 Уряд придумав, як захистити українців від шкод... Кабінет міністрів схвалив законопроєкт, який з... 2019/12/25 06:54 https://www.pravda.com.ua/news/2019/12/25/7235...
111 Кіберполіція та YouControl домовилися про спів... Кіберполіція та компанія YouControl підписали ... 2019/12/25 06:00 https://www.pravda.com.ua/news/2019/12/25/7235...
112 В окупованому Криму продають прикарпатські яли... У центрі Сімферополя, на новорічному ярмарку п... 2019/12/25 05:11 https://www.pravda.com.ua/news/2019/12/25/7235...
113 У США схожий на Санту чоловік пограбував банк,... У Сполучених Штатах чоловік з білою, як у Сант... 2019/12/25 04:00 https://www.pravda.com.ua/news/2019/12/25/7235...
114 У Росії за "дитячу порнографію" посадили блоге... Верховний суд російської Чувашії засудив до тр... 2019/12/25 03:26 https://www.pravda.com.ua/news/2019/12/25/7235...
115 Уряд провів екстрене засідання через газові пе... Кабінет міністрів у вівторок ввечері провів ек... 2019/12/25 02:31 https://www.pravda.com.ua/news/2019/12/25/7235...
116 Нова стратегія Мінспорту: розвиток інфраструкт... Стратегія розвитку спорту і фізичної активност... 2019/12/25 02:14 https://www.pravda.com.ua/news/2019/12/25/7235...
117 Милованов розкритикував НБУ за курс гривні та ... Міністр розвитку економіки Тимофій Милованов р... 2019/12/24 01:46 https://www.pravda.com.uahttps://www.epravda.c...
118 Російські літаки розбомбили школу в Сирії: заг... Щонайменше 10 людей, в тому числі шестеро – ді... 2019/12/25 01:04 https://www.pravda.com.ua/news/2019/12/25/7235...
119 Ліквідація "майданчиків Яценка": Зеленський пі... Президент Володимир Зеленський підписав закон,... 2019/12/25 00:27 https://www.pravda.com.ua/news/2019/12/25/7235...
P.S. From Ukraine with love ... | {
"domain": "codereview.stackexchange",
"id": 36967,
"tags": "python, web-scraping, beautifulsoup"
} |
Is there any implementation of Extended Isolation Forest algorithm in R/Python? | Question: I am using isofor package for regular Isolation Forest but I came by an article about Extended Isolation Forest and need your advise which package has this functions implemented in R/Python.
Answer: There is a package on GitHub called "Extended Isolation Forest for Anomaly Detection"; I used it a couple of months ago and it seemed to work. As for how accurate or how buggy it is, I'm not sure, but if anything seems off you can check the source code for errors in the implementation of the paper Extended Isolation Forest by Hariri et al. | {
"domain": "datascience.stackexchange",
"id": 4344,
"tags": "python, r, decision-trees, ensemble-modeling"
} |
The Conservation Laws in Peskin and Schroeder (page 309) | Question: I'm working on the Conservation Laws in Peskin (page 309), but I was confused for it.
In last section, I know that
Classical: the action is stationary.i.e. $\delta S =0$ when $\phi(x)\rightarrow \phi(x)+\epsilon(x)$, so we obtain Euler-Lagrange equation
Quantum: the generating functional is invariant.i.e. $\delta Z[J]=0$ when $\phi(x)\rightarrow \phi(x)+\epsilon(x)$, so we obtain Dyson-Schwinger equations
However, when I started to derive Noether's theorem, I got confused.
When $\phi(x)\rightarrow \phi(x)+\epsilon(x)\Delta\phi_a(x)$ (Eq.9.93), in the classical case and in the quantum case, which is stationary?
What's the difference between $\phi(x)\rightarrow \phi(x)+\epsilon(x)$ and $\phi(x)\rightarrow \phi(x)+\epsilon(x)\Delta\phi(x)$, they're both infinitesimal transformations. Why would one give DS equations and one give Noether theorem? What are the differences and connections between them?
Answer:
The core of OP's question seems to be the following question:
What's the difference between (1) the infinitesimal variations to derive EL equations, and (2) the infinitesimal symmetry variations in Noether's theorem?
This is explained in this related Phys.SE post.
Concerning the Schwinger-Dyson (SD) equations, they are derived using a vertical translation symmetry (which is formally of type 2) in the target space of the fields.
Other SD-like equations can be derived using other symmetry variations (2) if they exist. This is e.g. useful to derive Ward identities. | {
"domain": "physics.stackexchange",
"id": 72033,
"tags": "quantum-field-theory, lagrangian-formalism, field-theory, path-integral, noethers-theorem"
} |
How do Hadamard and CNOT gates work on Qiskit SDK? Why is the output reversed? | Question: Here is the code that I have been using on IBM Q Experience (should be the latest version of Qiskit). From my understanding it seems like the outputs of Hadamard and CNOT gates are reversed in a 2-qubit system in Qiskit: A Hadamard gate operating on state 0 actually acts on state 1 and vice versa, similarly with a CNOT gate. Is there something wrong with my understanding? The comments in my code summarize the output results I find surprising.
%matplotlib inline
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, execute, Aer, IBMQ
from qiskit.compiler import transpile, assemble
from qiskit.tools.jupyter import *
from qiskit.visualization import *
from iqx import *
from qiskit.quantum_info import Statevector
sv01 = Statevector.from_label('01')
print(sv01)
plot_state_qsphere(sv01.data)
h0circuit = QuantumCircuit(2)
h0circuit.h(0)
h0circuit.draw("mpl")
final01 = sv01.evolve(h0circuit)
print(final01)
plot_state_qsphere(final01.data)
#Shouldn't the output be state 1/sqrt(2) (|01> + |11>) ????
#And why is the phase on the state |00> pi?
#Shouldn't the phase be pi for |01> instead on the qsphere (ambiguous, probably doesn't matter)?
h1circuit = QuantumCircuit(2)
h1circuit.h(1)
h1circuit.draw("mpl")
final01 = sv01.evolve(h1circuit)
print(final01)
plot_state_qsphere(final01.data)
#Shouldn't the output be state 1/sqrt(2) (|00> - |01>) ????
#Conclusion of the above. An H gate on qubit 0 seems to apply to qubit 1. An H gate on qubit 1 seems to apply to qubit 0.
sv01 = Statevector.from_label('01')
print(sv01)
plot_state_qsphere(sv01.data)
c0xCircuit = QuantumCircuit(2)
c0xCircuit.cx(0,1)
c0xCircuit.draw('mpl')
final01 = sv01.evolve(c0xCircuit)
print(final01)
plot_state_qsphere(final01.data)
#Shouldn't this result be |01> ?
c1xCircuit = QuantumCircuit(2)
c1xCircuit.cx(1,0)
c1xCircuit.draw('mpl')
final01 = sv01.evolve(c1xCircuit)
print(final01)
plot_state_qsphere(final01.data)
#Shouldn't this result be |11> ?
#Conclusion of the above. The cnot (0,1) gate actually acts as a cnot (1,0) gate. The cnot (1,0) gate actually
#acts as a cnot (0,1) gate. The states 0 and 1 are flipped somehow.
Answer: This has to do with the fact that Qiskit uses little-endian ordering: in Qiskit's convention, higher qubit indices are more significant.
Thus the CNOT (CX) gate matrix representation in Qiskit is actually:
$$
CX\ q_0, q_1 =
I \otimes |0\rangle\langle0| + X \otimes |1\rangle\langle1| =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0
\end{pmatrix}
$$
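This matrix can be checked directly with numpy; the sketch below is my own construction (not Qiskit code), building $CX\ q_0,q_1$ from the tensor products above, where the left factor of each Kronecker product acts on the higher-index qubit $q_1$:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P0 = np.array([[1, 0], [0, 0]])  # |0><0|
P1 = np.array([[0, 0], [0, 1]])  # |1><1|

# little-endian: q0 is the least significant bit, so the projector on the
# control q0 sits in the RIGHT factor of the Kronecker product
CX_q0_q1 = np.kron(I, P0) + np.kron(X, P1)
print(CX_q0_q1)

# '01' means q1=0, q0=1 -> basis index 0b01 = 1; CX(control=q0) flips q1,
# giving '11' (basis index 0b11 = 3)
sv01 = np.zeros(4)
sv01[0b01] = 1
out = CX_q0_q1 @ sv01
print(out)
```

Running this reproduces the matrix shown above and confirms that cx(0, 1) maps the label '01' to '11', exactly the behaviour observed in the question.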
Here is the link to Qiskit's documentation. | {
"domain": "quantumcomputing.stackexchange",
"id": 2269,
"tags": "quantum-gate, programming, qiskit"
} |
Is the CMB rest frame special? Where does it come from? | Question: It seems that we are moving relative to the universe at the speed of ~ 600 km/s.
This is the speed of our galaxy relative to the cosmic microwave background.
Where does this rest frame come from? Is this special in any way (i.e., an absolute frame)?
EDIT: I think the more important question is "where does the CMB rest frame come from?".
Answer: I found this answer at Professor Douglas Scott's FAQ page. He researches CMB and cosmology at the University of British Columbia.
How come we can tell what motion we have with respect to the CMB? Doesn't this mean there's an absolute frame of reference?
The theory of special relativity is based on the principle that there are no preferred reference frames. In other words, the whole of Einstein's theory rests on the assumption that physics works the same irrespective of what speed and direction you have. So the fact that there is a frame of reference in which there is no motion through the CMB would appear to violate special relativity!
However, the crucial assumption of Einstein's theory is not that there are no special frames, but that there are no special frames where the laws of physics are different. There clearly is a frame where the CMB is at rest, and so this is, in some sense, the rest frame of the Universe. But for doing any physics experiment, any other frame is as good as this one. So the only difference is that in the CMB rest frame you measure no velocity with respect to the CMB photons, but that does not imply any fundamental difference in the laws of physics.
“Where does it come from?” is also answered:
Where did the photons actually come from?
A very good question. We believe that the very early Universe was very hot and dense. At an early enough time it was so hot, ie there was so much energy around, that pairs of particles and anti-particles were continually being created and annihilated again. This annihilation makes pure energy, which means particles of light - photons. As the Universe expanded and the temperature fell the particles and anti-particles (quarks and the like) annihilated each other for the last time, and the energies were low enough that they couldn't be recreated again. For some reason (that still isn't well understood) the early Universe had about one part in a billion more particles than anti-particles. So when all the anti-particles had annihilated all the particles, that left about a billion photons for every particle of matter. And that's the way the Universe is today!
So the photons that we observe in the cosmic microwave background were created in the first minute or so of the history of the Universe. Subsequently they cooled along with the expansion of the Universe, and eventually they can be observed today with a temperature of about 2.73 Kelvin.
EDIT:
@starwed points out in the comments that there may be some confusion as to whether someone in the rest frame is stationary with respect to the photons in the rest frame. I found a couple more questions on Professor Scott's excellent email FAQ page to clarify the concept.
In your answer to the "How come we can tell what motion we have with respect to the CMB?" question, there is one more point that could be mentioned. In an expanding universe, two distant objects that are each at rest with respect to the CMB will typically be in motion relative to each other, right?
The expansion of the Universe is certainly an inconvenience when it comes to thinking of simple pictures of how things work cosmologically! Normally we get around this by imagining a set of observers who are all expanding from each other uniformly, i.e. they have no "peculiar motions", only the "Hubble expansion" (which is directly related to their distance apart). These observers then define an expanding reference frame. There are many different such frames, all moving with some constant speed relative to each other. But one of them can be picked out explicitly as the one with no CMB dipole pattern on the sky. And that's the absolute (expanding) rest frame!
Assumptions: From most points in the universe, one will measure a CMBR dipole. Thus, one would have to accelerate to attain a frame of reference "at rest" relative to the CMBR. Questions: Having attained that "rest frame", would one not have to accelerate constantly to stay at rest (to counter attraction of all the mass scattered around the universe)? [abridged]
I think the assumption is wrong, and therefore the question doesn't need to be asked.
The fact that there's a CMB dipole (one side of the sky hotter and the other side colder than the average) tells us that we are moving at a certain speed in a certain direction with respect to the "preferred" reference frame (i.e. the one in which there is no observed dipole). To get ourselves into this dipole-free frame we just have to move with a velocity which cancels out the dipole-producing velocity. There's no need to accelerate (accept the rapid acceleration you'd need to do to change velocity of course).
Our local motion (which makes us move relative to the "CMB frame" and hence gives us a dipole to observe) is caused by nearby clusters and superclusters of galaxies pulling us around. It's true that over cosmological timescales these objects are also moving. And so if we wanted to keep ourselves always in the dipole-free frame we'd have to make small adjustments to our velocity as we moved and got pulled around by different objects. But these changes would be on roughly billion year timescales. And so to get into the frame with no CMB dipole basically just requires the following 3 steps: (1) observe today's dipole; (2) move towards the coldest direction at just the right speed to cancel the dipole; and (3) maintain basically that same velocity forever. | {
"domain": "physics.stackexchange",
"id": 46981,
"tags": "cosmology, reference-frames, cosmic-microwave-background"
} |
Do superluminal worldlines constitute closed time-like curves under the right conditions? | Question: A closed time-like curve is defined as a world-line that returns to its starting point, in the $x$, $y$, $z$, $t$ coordinates. So, for a chronology respecting observer, an object traversing a closed time-like curve would appear to violate causality. Under certain conditions (Tolman et al.), superluminal propagation of signals also appears to violate causality.
Does this indicate that for a tachyonic anti-telephone situation, the worldline of a superluminal tachyon could be considered as a closed time-like curve?
Answer: No. The world-line of a tachyon is spacelike. That's the definition of a tachyon. That means it can't be a closed timelike curve.
The existence of CTCs is a property of the spacetime, not of the particles inhabiting it. Minkowski space simply doesn't have CTCs. | {
"domain": "physics.stackexchange",
"id": 61130,
"tags": "special-relativity, relativity, faster-than-light, tachyon, closed-timelike-curve"
} |
Teleop based control of a custom vehicle | Question:
I am making a ground vehicle which can be controlled over wifi using a keyboard. So rather than worrying about all the custom interfaces that I would have to create, I thought of modifying the turtlebot teleop interface. Any help on how to proceed would be great.
Originally posted by incognito on ROS Answers with karma: 5 on 2012-12-19
Post score: 0
Answer:
Here is my implementation of teleop for a carlike robot. Instead of Twist (which has 6 variables) I use a simple Drive message containing only acceleration and steering angle. Hope it is helpful somehow.
#include "ros/ros.h"
#include "drive_base/Drive.h"
#include <sstream>
#include <termios.h>
#include <signal.h>
#include <fcntl.h>
#include <unistd.h>   // read()
#include <cstring>    // memcpy()
#include <cstdlib>    // exit()
#include <cstdio>     // puts(), perror()
#define KEYCODE_A 0x61
#define KEYCODE_D 0x64
#define KEYCODE_S 0x73
#define KEYCODE_W 0x77
class TeleopRoboCVKeyboard
{
private:
drive_base::Drive cmd;
ros::NodeHandle n_;
ros::Publisher drive_pub_;
public:
void init()
{
//Clear out our cmd - these values are roundabout initials
cmd.Acceleration = 0;
cmd.Angle = 0;
drive_pub_ = n_.advertise<drive_base::Drive>("command", 100);
ros::NodeHandle n_private("~");
}
~TeleopRoboCVKeyboard() { }
void keyboardLoop();
};
int kfd = 0;
struct termios cooked, raw;
void quit(int sig)
{
tcsetattr(kfd, TCSANOW, &cooked);
exit(0);
}
int main(int argc, char** argv)
{
ros::init(argc, argv, "drive_cmd_publisher");
TeleopRoboCVKeyboard tpk;
tpk.init();
signal(SIGINT,quit);
tpk.keyboardLoop();
return(0);
}
void TeleopRoboCVKeyboard::keyboardLoop()
{
char c;
bool dirty=false;
// get the console in raw mode
tcgetattr(kfd, &cooked);
memcpy(&raw, &cooked, sizeof(struct termios));
raw.c_lflag &=~ (ICANON | ECHO);
// Setting a new line, then end of file
raw.c_cc[VEOL] = 1;
raw.c_cc[VEOF] = 2;
tcsetattr(kfd, TCSANOW, &raw);
puts("Reading from keyboard");
puts("---------------------------");
puts("Use 'WS' to forward/back");
puts("Use 'AD' to left/right");
for(;;)
{
// get the next event from the keyboard
if(read(kfd, &c, 1) < 0)
{
perror("read():");
exit(-1);
}
switch(c)
{
// Walking
case KEYCODE_W:
puts("throttle");
cmd.Acceleration = cmd.Acceleration + 10;
dirty = true;
break;
case KEYCODE_S:
puts("brake");
cmd.Acceleration = cmd.Acceleration - 10;
dirty = true;
break;
case KEYCODE_A:
puts("turn left");
cmd.Angle = cmd.Angle - 0.1;
dirty = true;
break;
case KEYCODE_D:
puts("turn right");
cmd.Angle = cmd.Angle + 0.1;
dirty = true;
break;
}
if (dirty == true)
{
drive_pub_.publish(cmd);
}
}
}
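For reference, the custom drive_base/Drive message used above would be a minimal two-field .msg definition along these lines (the field names match the code; the float64 types are an assumption, since the original .msg file is not shown):

```
# Drive.msg (assumed definition)
float64 Acceleration
float64 Angle
```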
Originally posted by Peter Listov with karma: 338 on 2012-12-19
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by SL Remy on 2012-12-20:
You can choose to update only two of the 6 geometry_msgs/Twist parameters (the angular z for the "left and right" and the linear x for the "forward and back") | {
"domain": "robotics.stackexchange",
"id": 12167,
"tags": "control, wifi, teleop, robot, keyboard-teleop"
} |
Why can we write $\rho=\sum_\mu q_\mu|\varphi_\mu\rangle\!\langle\varphi_\mu|$ iff $q\preceq \mathrm{spec}(\rho)$? | Question: Exercise 2.6 in Preskill's notes (chapter 2, around page 48, pdf available here) asks to prove that an arbitrary state $\rho=\sum_i p_i |\alpha_i\rangle\!\langle\alpha_i|$, where $p_i$ and $|\alpha_i\rangle$ are obtained from the spectral decomposition of $\rho$, can be also decomposed as
$$\rho = \sum_\mu q_\mu |\varphi_\mu\rangle\!\langle \varphi_\mu|,$$
for some ensemble of pure states $|\varphi_\mu\rangle$, if and only if $q\preceq p$, that is, if and only if there is some doubly stochastic matrix $D$ such that $q_\mu=\sum_i D_{\mu i}p_i$.
As a hint, he recalls that the two decompositions above are simultaneously possible if and only if
$$\sqrt{q_\mu}|\varphi_\mu\rangle = \sum_i \sqrt{p_i} V_{\mu i}|\alpha_i\rangle,$$
for some unitary $V$. He also says we can use Horn's lemma: if $q\preceq p$ then $q=Dp$ with $D_{\mu i}=|U_{\mu i}|^2$ for some unitary $U$.
I understand why the statements in the hints hold, but I'm not sure how to apply them to the given exercise. I tried using the above relation to isolate $q_\mu$, but then I get
$$\sqrt{q_\mu} = \sum_i \sqrt{p_i} V_{\mu i}\langle \varphi_\mu|\alpha_i\rangle
\implies q_\mu = \sum_{ij} \sqrt{p_i p_j} V_{\mu i}\bar V_{\mu j} \langle \varphi_\mu|\alpha_i\rangle \langle\alpha_j|\varphi_\mu\rangle,$$
which I'm not sure how to reframe as $q=Dp$.
On the other hand, taking the expectation value over $|\alpha_i\rangle$, I get the relation
$$p_i = \sum_\mu q_\mu |\langle\alpha_i|\varphi_\mu\rangle|^2,$$
but again, this amounts to $p=A q$ for some $A$ about which we only know the columns sum to one. Besides, even if $A$ was doubly stochastic, the relation would be in the wrong direction.
The Schur-Horn theorem (if $\rho$ is Hermitian then $\operatorname{diag}(\rho)\preceq \operatorname{spec}(\rho)$) also seems relevant, but the diagonal of $\rho$ in the above notation would have elements $q'_i = \sum_\mu q_\mu |\langle i|\varphi_\mu\rangle|^2$, so we'd get $q'\preceq p$, which is not quite the same as $q\preceq p$ as far as I can tell.
By $\operatorname{spec}(\rho)$ I mean the vector whose elements are the eigenvalues of $\rho$.
Answer: We have
$\sqrt{q_\mu}\langle\alpha_j|\varphi_\mu\rangle = \sum_i \sqrt{p_i} V_{\mu i}\langle\alpha_j|\alpha_i\rangle = \sqrt{p_j} V_{\mu j}$.
Thus $q_\mu |\langle\alpha_j|\varphi_\mu\rangle|^2 = p_j |V_{\mu j}|^2$ and
$q_\mu \sum_{j}|\langle\alpha_j|\varphi_\mu\rangle|^2 = \sum_{j}p_j |V_{\mu j}|^2$.
Hence $q_\mu = \sum_{j}p_j |V_{\mu j}|^2$, since $\sum_{j}|\langle\alpha_j|\varphi_\mu\rangle|^2 = \langle\varphi_\mu|\varphi_\mu\rangle = 1$.
That is, it's the same unitary that appears in the HJW theorem and in Horn's lemma.
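As a quick numerical sanity check of that statement (not part of the proof): for a random unitary $V$, the matrix $D_{\mu j}=|V_{\mu j}|^2$ is indeed doubly stochastic, so $q = Dp$ is again a probability distribution, majorized by $p$. A numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# random unitary via QR decomposition of a random complex matrix
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
V, _ = np.linalg.qr(A)

D = np.abs(V) ** 2             # D_{mu j} = |V_{mu j}|^2
p = rng.dirichlet(np.ones(n))  # a random "spectrum" (probability vector)
q = D @ p

# doubly stochastic: rows and columns each sum to 1 (unit-norm rows/columns of V)
print(D.sum(axis=0), D.sum(axis=1))
print(q, q.sum())
```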
In the other way, suppose we are given distribution $q$ such that $q_\mu = \sum_{j}p_j |V_{\mu j}|^2$ for some unitary $V$. As in the HJW theorem we can consider purification $\sum_i \sqrt{p_i} |\alpha_i\rangle \otimes V|i\rangle = \sum_i \sqrt{p_i} |\alpha_i\rangle \otimes \sum_k V_{k i}|k\rangle = \sum_k \big(\sum_i \sqrt{p_i} V_{k i} |\alpha_i\rangle \big) \otimes |k\rangle$.
Then $\sqrt{q_k}|\varphi_k\rangle := \sum_i\sqrt{p_i} V_{k i} |\alpha_i\rangle$. | {
"domain": "quantumcomputing.stackexchange",
"id": 3209,
"tags": "quantum-state, textbook-and-exercises, majorization"
} |
Material Derivative on a Surface Element | Question: I'm recently working on material derivatives, defined as follows.
$$\frac{D }{Dt}=\frac{\partial }{\partial t}+v\cdot \nabla $$
It is easy to show that a material(or convective) derivative of an infinitesimal vector can be shown as
$$\frac{D \delta x}{Dt}=\delta x \cdot \nabla v $$
Now, if there is a surface element defined as $\delta S = \delta x \times \delta y $, I have to show that this has the following relationship
$$\frac{D \delta S}{Dt}=(\nabla \cdot v)\delta S-(\nabla v) \cdot
\delta S $$
I am trying to solve this by directly using the levi-civita symbol which gives
$$(\frac{D \delta S}{Dt})_{i} = (\frac{D \delta x}{Dt} \times \delta y + \delta x \times \frac{D \delta y}{Dt})_{i} = \varepsilon_{ijk}[(\delta x\cdot \nabla v)_{j} \delta y_{k}+\delta x_{j}(\nabla v \cdot \delta y)_{k}] = \varepsilon_{ijk}[\nabla v_{jl}\delta y_{k}\delta x_{l}+\nabla v_{kl}\delta y_{l}\delta x_{j}]$$
where $\nabla v_{ij} = \frac{\partial v_{j}}{\partial q_{i}}$
I could not go further, so I wrote down the result in the similar form,
$$(\frac{D \delta S}{Dt})_{i}=(\nabla \cdot v)\delta S_{i}-((\nabla v) \cdot
\delta S)_{i} = \nabla v_{jj}\delta S_{i}-\nabla v_{ji}\delta S_{j} $$
and this is all I have for now. Could anyone give me a clue to proving the equation above?
Answer: It can be solved by using the fact that $\varepsilon_{ijk} a_j b_k = - \varepsilon_{ijk} a_k b_j$.
\begin{align*}
\frac{D \delta S_i}{D t} &= \varepsilon_{ijk}(\nabla v_{jl} \delta y_k \delta x_l + \nabla v_{kl} \delta y_l \delta_j) \\
&= \varepsilon_{ijk}(\nabla v_{jl} \delta y_k \delta x_l - \nabla v_{jl} \delta y_l \delta_k) \\
&= \varepsilon_{ijk}(\delta_{ln} \delta_{km} - \delta_{kn} \delta_{lm}) \nabla v_{jl} \delta x_n \delta y_m \\
&= \varepsilon_{ijk}\varepsilon_{plk}\varepsilon_{pnm} \nabla v_{jl} \delta x_n \delta y_m \\
&= (\delta_{ip} \delta_{jl} - \delta_{il} \delta_{jp}) \nabla v_{jl} \delta S_p \\
&= \nabla v_{jj} \delta S_i - \nabla v_{ji} \delta S_j
\end{align*} | {
"domain": "physics.stackexchange",
"id": 82356,
"tags": "fluid-dynamics, tensor-calculus"
} |
Voltage drops in series - less work to do the same job | Question: So I have been trying to explain electricity to younger siblings and it surprised me how little I know of what actually goes on.
My question comes in the form of a consideration:
Consider a circuit with a battery and a bulb. The battery has a voltage $V$. The potential difference across the bulb is then also $V$. This is the work that is done to move a unit charge across the bulb. Now consider two identical bulbs in series in the same circuit. It now only takes $\frac{V}{2}$ of work to move a unit charge across each bulb.
I may be thinking about this entirely wrong, but if I take the number of bulbs to infinity, then a current will still flow (?) and the amount of work to move a unit charge across the bulb tends to $0$? As such, it takes increasingly less work to do the same job.
It seems to allude to something like a perpetual motion machine. So why bother spending any work doing it at all?
Of course, I realise I must have a misconception somewhere, but it's just where that is that I am not too sure.
Answer:
I take the number of bulbs to infinity, then a current will still flow (?)
Each bulb is a resistor.
If you take the number of bulbs to infinity, then the equivalent resistance of the series combination of bulbs goes to infinity. Therefore the current through the bulbs goes to zero.
... and the amount of work to move a unit charge across the bulb tends to 0? As such, it takes increasingly less work to do the same job.
The amount of work done per unit charge is going to zero, but the rate that charge is moving through the system also goes to zero. In fact it goes to zero even faster than the work done per unit charge goes to zero, with the result that the amount of power transferred from the battery to the series combination of bulbs also goes to zero, and therefore the amount of light produced also goes to zero. | {
"domain": "physics.stackexchange",
"id": 67311,
"tags": "electricity, electric-circuits, perpetual-motion"
} |
Why is matter drawn into a black hole not condensed into a single point within the singularity? | Question: When we speak of black holes and their associated singularity, why is matter drawn into a black hole not condensed into a single point within the singularity?
Answer: The many comments have covered the main points about the question, but I thought it would be worthwhile explaining how the behaviour is calculated. If we solve the Einstein equation for a point mass we get the Schwarzschild metric:
$$ds^2 = -\left(1-\frac{2M}{r}\right)dt^2 + \left(1-\frac{2M}{r}\right)^{-1}dr^2 + r^2 d\Omega^2$$
All equations look scary to non-nerds, but don't worry too much about the details. The key points are that the equation involves time, $dt$, and the distance from the centre of the black hole, $dr$, and it calculates a quantity called the line element, $ds$.
The time, $t$, and radial distance, $r$, are the physical quantities measured by an observer outside the black hole, i.e. that's you and me. They are exactly what you would think i.e. the time is what we measure with a stopwatch. The radial distance is what you'd get by measuring the circumference of a circle round the black hole and dividing by $2\pi$ (because the circumference of a circle is $2\pi r$).
The line element, $ds$, is a bit more abstract, but for our purposes $ds$ gives the time as measured by someone falling into the black hole. This is called the proper time and is usually written as $\tau$. You've probably heard that time slows down as you approach the speed of light, and you get a similar effect here (as dmckee mentioned in a comment). That means the time measured by someone falling into the black hole, i.e. the proper time $\tau$, is not the same as the time $t$ that we measure when watching the black hole from the outside.
The point of all this stuff is that you can use the metric to calculate how long it takes to fall from some distance, $R$, outside the black hole to the event horizon. First of all let's calculate this for the person falling into the black hole. This means we have to calculate the proper time, $\tau$. You can find this in any book on GR, or by Googling, and the result is:
$$ \Delta \tau = \frac{2M}{3} \left[ \left( \frac{r}{2M} \right)^{3/2} \right] ^R _{2M} $$
Again, don't worry about the detail. As long as we know the mass of the black hole, $M$, and the starting distance, $R$, we just feed these into a calculator and it gives us $\Delta\tau$, which is the time measured by the person falling into the black hole. The time obviously depends on how far away you start and how big the black hole is, but it's just a finite number of seconds. In fact if we use $r = 0$ in the expression above we can calculate how long it takes to fall through the event horizon and on to the singularity at the centre of the black hole.
So, the point to note down at this stage is that the person falling into the black hole reaches the event horizon, and indeed the singularity, in a finite time.
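To put a number on this, here is a small sketch evaluating the $\Delta\tau$ expression quoted above for a solar-mass black hole. The conversion factor $GM_\odot/c^3 \approx 4.93\times10^{-6}\,\mathrm{s}$ and the starting radius of ten Schwarzschild radii are assumed purely for illustration:

```python
# Evaluate the proper-time expression quoted above for a solar-mass black
# hole. In GR units M carries dimensions of time once divided by c^3/G;
# GM_sun/c^3 ~ 4.93e-6 s is the assumed conversion factor.
M = 4.93e-6            # one solar mass, expressed in seconds
R = 10 * (2 * M)       # start the fall from ten Schwarzschild radii

# Delta tau = (2M/3) * [ (r/(2M))**(3/2) ] evaluated from r = 2M to r = R
tau = (2 * M / 3) * ((R / (2 * M)) ** 1.5 - 1.0)
print(f"proper time to reach the horizon: {tau:.2e} s")  # a fraction of a millisecond
```

A fraction of a millisecond on the infalling clock, in sharp contrast with the divergent $\Delta t$ computed next for the outside observer.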
The next step is to calculate the time measured by you and me sitting outside the black hole. This is a bit more involved, but we end up with an expression:
$$ dt = \frac {-(\epsilon + 2M)^{3/2}d\epsilon} {(2M)^{1/2}\epsilon} $$
where $\epsilon$ is the distance from the event horizon, i.e. $\epsilon = r - 2M$. Integrating this to find the time to reach the event horizon is a bit messy, but if we restrict ourselves to distances very near the event horizon we find:
$$\Delta t \propto \ln(\epsilon)$$
but $\epsilon$ is the distance from the event horizon, so it's zero at the event horizon, and $\ln(\epsilon)$ diverges as $\epsilon \to 0$. That means that the time you and I measure for our astronaut to reach the event horizon, $\Delta t$, is infinite.
And this is why you get the apparently paradoxical result about falling into black holes. The time measured by the person falling in, $\Delta\tau$, is finite but the time measured by people outside the black hole, $\Delta t$, is infinite. | {
"domain": "physics.stackexchange",
"id": 3468,
"tags": "general-relativity, black-holes, singularities"
} |
Semi-supervised classification with SelfTrainingClassifier: no training after calling fit() | Question: I am practicing semi-supervised learning, at the moment experimenting with sklearn.semi_supervised.SelfTrainingClassifier. I found a dataset for multiclass classification (tweet sentiment classification into 5 sentiment categories) and randomly removed 90% of the targets.
Since it is textual data, preprocessing is needed: I applied CountVectorizer() and created a sklearn.pipeline.Pipeline with the vectorizer and the self-training classifier instance.
For the base estimator of the self-training classifier I used RandomForestClassifier.
My problem is, when running the below script, no training happens. The argument verbose is set to True so if any iteration happened, I would see its output. Also when inspecting the predicted labels, they are identical to the initial ones, confirming that despite no errors showing, something is not in order.
The full code:
import pandas as pd
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelEncoder
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
# Coronavirus dataset from Kaggle: https://www.kaggle.com/datatattle/covid-19-nlp-text-classification
# For this semi-supervised demonstration, only train file is used.
df = pd.read_csv("./datasets/Corona_NLP_Train.csv", encoding='latin-1')
# subsample the dataset (purely for efficiency, i.e. running the examples quicker)
df = df.sample(frac=0.1)
print("Original data shape: ", df.shape)
# Unlabeled data must be denoted by -1 in the target column. Since original data is labeled, we remove labels for 90% of target
rand_indices = df.sample(frac=0.90, random_state=0).index
# create new 'Sentiment_masked' column
df['Sentiment_masked'] = df['Sentiment']
df.loc[rand_indices, 'Sentiment_masked'] = -1
# check original 'Sentiment' distribution
print("Original (unaltered) sentiment distribution:\n", df['Sentiment'].value_counts())
# check masked sentiment distribution
print("Masked sentiment distribution:\n", df['Sentiment_masked'].value_counts())
X = df['OriginalTweet']
y = df['Sentiment_masked']
stclf = SelfTrainingClassifier(
base_estimator = RandomForestClassifier(n_estimators = 100),
threshold = 0.9,
verbose = True)
pipe = Pipeline([('vectorize', CountVectorizer()), ('model', stclf)])
pipe.fit(X, y)
And I returned the updated/modified labels using:
pd.Series(pipe['model'].transduction_).value_counts()
which yielded:
-1 3704
Positive 117
Negative 93
Neutral 79
Extremely Positive 72
Extremely Negative 51
i.e. the exact same as what df['Sentiment_masked'].value_counts() yielded earlier.
What am I missing here?
Answer: The reason you are not seeing any verbose output from the model fitting, and no change in the model's labels, is that the threshold you are currently using is too high, which doesn't allow the model to add any new pseudo-labels to the dataset. Decreasing the threshold (e.g. to 0.7) does show output with the number of labels added in each iteration:
End of iteration 1, added 54 new labels.
End of iteration 2, added 163 new labels.
End of iteration 3, added 310 new labels.
End of iteration 4, added 576 new labels.
End of iteration 5, added 982 new labels.
End of iteration 6, added 1350 new labels.
End of iteration 7, added 249 new labels.
End of iteration 8, added 12 new labels.
End of iteration 9, added 3 new labels.
End of iteration 10, added 1 new labels.
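The gating behaviour itself is easy to reproduce with a stripped-down, pure-Python sketch of a self-training loop. The nearest-centroid "classifier", the confidence formula, and all numbers here are invented for illustration; this is not sklearn's implementation:

```python
from statistics import mean

def self_train(labelled, unlabelled, threshold, max_iter=10):
    """Repeatedly fit class centroids, pseudo-label every point whose
    confidence clears `threshold`, and stop once nothing new is added."""
    labelled, pool, pseudo = list(labelled), list(unlabelled), {}
    for _ in range(max_iter):
        classes = sorted({y for _, y in labelled})
        centroids = {c: mean(x for x, y in labelled if y == c) for c in classes}
        added = []
        for x in pool:
            dists = sorted((abs(x - m), c) for c, m in centroids.items())
            d_near, best = dists[0]
            d_far = dists[-1][0]
            conf = d_far / (d_near + d_far)   # crude stand-in for predict_proba
            if conf >= threshold:
                added.append((x, best))
        if not added:                          # analogous stopping rule: nothing confident
            break
        for x, c in added:
            labelled.append((x, c))
            pseudo[x] = c
            pool.remove(x)
    return pseudo

labelled = [(0.0, "neg"), (10.0, "pos")]
unlabelled = [1.0, 2.0, 5.0, 8.0, 9.0]
print(self_train(labelled, unlabelled, threshold=0.95))  # {} - bar too high, nothing added
print(self_train(labelled, unlabelled, threshold=0.80))  # four points get pseudo-labels
```

With the bar at 0.95 no prediction ever clears it, so the loop stops immediately with an empty pseudo-label set — exactly the symptom in the question. At 0.80 four of the five points are labelled, while the ambiguous midpoint never receives a pseudo-label.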
The reason you are not seeing any change when getting the value counts for the different labels is that the model doesn't actually add the newly generated pseudo-labels to the original dataset. It only adds the labels internally (see the source code) and after fitting returns the class itself, which now contains the classifier trained on the original dataset plus the pseudo-labels. This is stored in the base_estimator_ attribute, which is then called when predicting on new data (e.g. see the predict method of sklearn.semi_supervised.SelfTrainingClassifier). | {
"domain": "datascience.stackexchange",
"id": 10449,
"tags": "machine-learning, classification, multiclass-classification, semi-supervised-learning"
} |
Identify this indoor plant with strange stem-like leaves | Question: It appears that this plant branches multiple times from the base of its stem; however, I'm not sure if the base stems are from one organism. It becomes yellowish in color at the tip of each stem-like leaf; one node can branch into 3-4 leaves at a time.
Additional information:
As for the specimen location: I am currently living in Japan and this plant was bought as a gift for me. It was probably imported from another country, since it doesn't seem to be native here. As for maintenance: I was told to just cut the tip of the leaves if they began to fade in color and die, and to water the plant at least once a week.
Answer: Looks like a Pencil Tree (Euphorbia tirucalli), sometimes called "pencil plant", "pencil cactus" or "milkbush." This "leafless-looking" plant is not actually a cactus, but rather a Euphorbia (i.e., a member of the Euphorbiaceae family), and should exude a white "milky" latex when broken.
CAUTION: THIS IS A POISONOUS PLANT [See symptoms here].
Description: A succulent, spineless shrub with round, green branches; leaves clustered at the tip of the branches and soon dropping
Although it eventually can grow into a small-sized tree in its natural habitat, this plant is found more commonly as a medium-sized house plant or landscaping "shrub."
Range: Tropical Africa and India
Euphorbia tirucalli is very well adapted to semi-arid conditions, but also occurs in both dry and moist forest, savanna and shrub land, and withstands salt stress associated with coastal conditions, but no frost. It occurs from sea-level up to 2500 m altitude. It grows well on a wide variety of light-textured, neutral to acidic soils. It is commonly associated with human settlements and becomes naturalized rapidly. It is locally common and often occurs in groups. [Source]. | {
"domain": "biology.stackexchange",
"id": 6746,
"tags": "species-identification, botany, trees"
} |
Unitary time evolution operator and the corresponding Hamiltonian | Question: Could someone tell me how to find the eigenvalues $(E)$ of a time independent Hamiltonian $(H)$ if the eigenvalues of the corresponding unitary time evolution operator $U$ $\left(=e^{itH/\hbar}\right)$ are given, at a particular instant of time $t_0$, in the form $e^{i\theta(t_0)}$. That is, I need the relation between $E$ and $\theta(t_0)$.
Answer: I will try to give a more general answer: given an operator $A$, how can we interpret a function of the operator $f(A)$?
There are two possible options:
i) if the function $f(x)$ admits a power-series expansion about $x=0$, we can use this to define $f(A) = \sum_n \frac{f^{(n)}(0)}{n!} A^n$, and we know how to take integer powers of operators - we just apply the operator $n$ times.
ii) if the operator $A$ can be diagonalized and has eigenstates $|a\rangle$ with eigenvalues $a$ where the eigenstates form a basis, then we can define $f(A)$ via its operation on the eigenstates $f(A)|a\rangle = f(a) |a\rangle$. Then, for any state $|\psi\rangle$ we can know how $f(A)$ acts upon it by expanding $|\psi\rangle$ in the eigenbasis $|\psi\rangle = \sum_a c_a |a\rangle$ and then acting with $f(A)$ on each of the states in the expansion.
If $A$ both has a basis of eigenvectors and $f(x)$ has a power series, it is easy to see that the two definitions are consistent with one another. If neither applies, then the function may be ill-defined (for example, $\ln(a^{\dagger})$, where $a^{\dagger}$ is the harmonic oscillator raising operator, is not well defined).
Now for the case at hand -- you have the function $U = \exp(-i H t/\hbar)$ of the Hamiltonian. As both definitions can be applied here, we can choose to focus on the second. It is clear that the eigenvectors of $U$ are the eigenvectors of $H$, and if $H$ has an eigenvalue $E$ when acting on a state, $H|\psi_E\rangle = E |\psi_E\rangle$, then $U|\psi_E\rangle = \exp(-i H t/\hbar) |\psi_E\rangle = \exp(-iEt/\hbar)|\psi_E\rangle$. Comparing with the given eigenvalue $e^{i\theta(t_0)}$ at $t = t_0$ yields $\theta(t_0) = -Et_0/\hbar \pmod{2\pi}$, i.e. $E = -\hbar\,\theta(t_0)/t_0$ up to integer multiples of $2\pi\hbar/t_0$ (with the sign convention $U = e^{+iHt/\hbar}$ from the question, the sign of $\theta$ flips). | {
"domain": "physics.stackexchange",
"id": 84871,
"tags": "quantum-mechanics, homework-and-exercises, operators, hamiltonian, time-evolution"
} |
Concise way to say "increases with n or some term of n" | Question: I'm writing a thesis proposal and one of the systems involved has unknown complexity. It's not a focus of the proposal, but I wanted to include a line like this, as speculation:
Presumably the complexity of each new connection scales with the number of rules.
Unfortunately I don't know if asymptotic complexity (space and time) will be O($\log{n}$), O($n$), O($n\log{n}$) or even O($n^{99}$). But I do doubt that it will be constant, so increasing $n$ will have some effect on it.
What is a good, accurate way to write this concisely? I'm asking here and not at English Language and Usage because the CS accuracy is essential.
Answer: "Presumably the complexity of each new connection is an increasing function of the number of rules."
But you don't mean complexity. Problems have complexity, algorithms have running time (or, more generally, resource usage), functions have growth rate. | {
"domain": "cs.stackexchange",
"id": 3853,
"tags": "complexity-theory, terminology"
} |
Given a regular language, calculate its equivalence classes | Question: I was given the following regular language:
For any $n$, the language $L_{n}$ consists of all words over $Σ = \{0, 1\}$ whose $n$th character
from the end is 1.
I know it's regular because it can be expressed as a regular expression ($\Sigma^{*}1\Sigma^{n-1}$), and I constructed a DFA and an NFA for it. I need to calculate its equivalence classes. I thought that the equivalence classes will be words that have 1 in the same place later in the word - for example one of the classes will be $L(\Sigma^{*}1\Sigma^{n-2})$, because no matter what digit will be added to all these words, the new words will belong to the original language. But I'm not sure I'm on the right track. I would love it if I could understand the process of thinking on this kind of question.
Answer: Whether a word and its extensions belongs to $L_n$ or not depends on its final $n$ letters (we'll see later what happens if a word is shorter than $n$ letters). Indeed, we can "probe" all of these letters using the following: for $i=1,\ldots,n$, we have $w1^{n-i} \in L_n$ iff the $i$th letter from the end is $1$. This shows that if two words have different length-$n$ suffixes, then they definitely belong to different equivalence classes.
In contrast, we can check that any two words with the same length-$n$ suffix are equivalent. Indeed, suppose $x,y$ are such words, and consider a word $z$. If $|z| \leq n-1$ then $xz \in L_n$ iff the $(n-|z|)$th letter from the end of $x$ is $1$. Since $x,y$ have the same length-$n$ suffixes, the condition for $yz \in L_n$ is identical. If $|z| \geq n$, then whether $xz \in L_n$ or not depends only on $z$, and in particular the same holds for $yz \in L_n$.
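The suffix argument can also be checked mechanically for a small $n$. The brute-force sketch below (illustrative only) groups words by their membership behaviour under all extensions of length at most $n$, which by the probing argument is enough to separate the equivalence classes; the membership test itself takes care of words shorter than $n$:

```python
from itertools import product

N = 2  # which L_n to examine

def in_L(w):
    """w is in L_N iff its Nth symbol from the end is '1'."""
    return len(w) >= N and w[-N] == "1"

def words(max_len):
    """All binary words of length 0..max_len."""
    for k in range(max_len + 1):
        for t in product("01", repeat=k):
            yield "".join(t)

extensions = list(words(N))      # probing extensions z with |z| <= N

def signature(w):
    """Membership pattern of w.z over all short extensions z."""
    return tuple(in_L(w + z) for z in extensions)

classes = {signature(w) for w in words(5)}
print(len(classes))   # 2**N classes, one per (zero-padded) length-N suffix
```

Running this with `N = 2` finds exactly four classes, matching the count of length-$2$ suffixes of $0^2 w$.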
Finally, what about words which are shorter than $n$ letters? Recall the test we had above: $w1^{n-i} \in L_n$ iff the $i$th letter from the end is $1$. If $|w| < i$ then $w1^{n-i} \notin L_n$, and so $w$ behaves as if the $i$th letter from the end is $0$. Pursuing this route, we find that the equivalence class of $w$ depends on the length-$n$ suffix of $0^nw$, and this definition works whether $|w| \geq n$ or not. | {
"domain": "cs.stackexchange",
"id": 15058,
"tags": "formal-languages, regular-languages"
} |
Microscopic derivation of the Josephson effect | Question: In section 18.7 of Bruus & Flensberg the authors provide a microscopic derivation of the Josephson effect.
The hamiltonian on both sides of the tunnelling junction is just the typical BCS hamiltonian, on one side (with fermion operators $c$)
\begin{equation}
H_c = \sum_{k,\sigma} \epsilon_k c_{k,\sigma}^\dagger c_{k,\sigma} - \sum_{k}(\Delta e^{-i\phi_c}c^\dagger_{k,\uparrow}c^\dagger_{-k,\downarrow}+\Delta e^{i\phi_c} c_{-k,\downarrow}c_{k,\uparrow})
\end{equation}
and on the other side (with fermion operators $f$)
\begin{equation}
H_f = \sum_{k,\sigma} \epsilon_k f_{k,\sigma}^\dagger f_{k,\sigma} - \sum_{k}(\Delta e^{-i\phi_f}f^\dagger_{k,\uparrow}f^\dagger_{-k,\downarrow}+\Delta e^{i\phi_f} f_{-k,\downarrow}f_{k,\uparrow})
\end{equation}
Here we let the gap parameter for both superconductors have the same magnitude $\Delta$ but different phases $\phi_c$ and $\phi_f$.
We then introduce a tunnelling Hamiltonian coupling the two superconductors
\begin{equation}
H_t = \sum_{k,p,\sigma} (tc^\dagger_{k\sigma} f_{p,\sigma}+t^* f^\dagger_{p\sigma} c_{k,\sigma})
\end{equation}
We can deal with the phases by introducing a gauge transformation $c\rightarrow e^{-i\phi_c/2} c$ and $f \rightarrow e^{-i\phi_f/2} f$ so that the tunnelling coefficients acquire a phase $t \rightarrow e^{-i\phi/2} t$ with $\phi=\phi_c-\phi_f$. Then we see that the Josephson current is
\begin{equation}
I_J = \langle I \rangle = -2e \langle \dot{N} \rangle = -2e \bigg\langle \frac{\partial H_t}{\partial \phi}\bigg\rangle
\end{equation}
where we used the Heisenberg equations of motion $\dot{N} = -i[N,H]$ and the fact that $N=-i\frac{\partial}{\partial \phi}$. The current can be calculated perturbatively using the Dyson series
\begin{equation}
\exp(-\beta H) = \exp(-\beta H_0)\bigg[1-\int_0^\beta d\tau \ \hat{H}_t(\tau)\bigg]+O\big(H_t^2\big)
\end{equation}
where $\hat{H}_t(\tau)$ is in the interaction picture. Then
\begin{equation}
I_J = -2e\bigg[\bigg\langle \frac{\partial H_t}{\partial \phi}\bigg\rangle_0-\frac{1}{2}\bigg\langle \int_0^\beta d\tau \ \hat{H}_t(\tau) H_t\bigg\rangle_0+\ ...\bigg]
\end{equation}
where $\langle \rangle_0$ denotes thermal averaging with respect to $H_0=H_c+H_f$. The first order contribution is
\begin{equation}
\bigg\langle \frac{\partial H_t}{\partial \phi}\bigg\rangle_0 = \frac{\partial}{\partial \phi}\text{Tr}(e^{-\beta H_0} H_t) = \frac{\partial}{\partial \phi} \sum_m e^{-\beta E^0_m} \langle m|H_t|m\rangle
\end{equation}
which I presume (but I am really not too sure about this) vanishes since $H_t$ moves an odd number of fermions from one superconductor to the other, so its action on any eigenstate $|m\rangle$ of the BCS hamiltonian will produce a state orthogonal to it. The second order contribution on the other hand is given as
\begin{equation}
\frac{\partial}{\partial \phi} \int_0^\beta d\tau \ (t^2 e^{i\phi} \mathcal{F}_{\downarrow \uparrow}(\textbf{k},\tau) \mathcal{F}^*_{\downarrow \uparrow}(\textbf{p},-\tau)+c.c.)
\end{equation}
where we defined $\mathcal{F}_{\downarrow \uparrow}(\textbf{k},\tau)=-\langle \mathcal{T} c^\dagger_{k,\downarrow}(\tau)c^\dagger_{-k,\uparrow}(0)\rangle$. However, I have a hard time making sense of this equation. I find that
\begin{equation}
\frac{1}{2}\frac{\partial}{\partial \phi} \int_0^\beta d\tau \ \bigg \langle \sum_{k,p,\sigma}\sum_{k',p',\sigma'} (tc^\dagger_{k\sigma} f_{p,\sigma}+t^* f^\dagger_{p\sigma} c_{k,\sigma})(tc^\dagger_{k'\sigma'} f_{p',\sigma'}+t^* f^\dagger_{p'\sigma'} c_{k',\sigma'})\bigg \rangle
\end{equation}
but I'm not sure how this simplifies to the result by Bruus & Flensberg.
Answer: First, in the last equation of your problem, the $t$ there should be $te^{i\phi}$; then it's clear why the result only contains the anomalous Green's function $\mathcal{F}(\textbf{k},\tau)$: terms like $\left\langle tc^\dagger_{k\sigma} f_{p,\sigma}\,t^* f^\dagger_{p'\sigma'} c_{k',\sigma'}\right\rangle$ have no $\phi$ dependence, so you can drop them. What's left is
\begin{equation}
\frac{1}{2}\frac{\partial}{\partial \phi} \int_0^\beta d\tau\ t^2 \sum_{k,p,\sigma}\sum_{k',p',\sigma'} \Big( e^{i\phi}\left\langle c^\dagger_{k\sigma} f_{p,\sigma}c^\dagger_{k'\sigma'} f_{p',\sigma'}\right\rangle+e^{-i\phi}\left\langle f^\dagger_{p\sigma} c_{k,\sigma} f^\dagger_{p'\sigma'} c_{k',\sigma'}\right\rangle \Big)
\end{equation}
Evaluate this by Wick's theorem, counting the spin indices properly, and finally Fourier transform back to real space to get the quoted result; carrying out the $\tau$ integral then yields the familiar Josephson relation $I_J = I_c \sin\phi$. | {
"domain": "physics.stackexchange",
"id": 87418,
"tags": "superconductivity, josephson-junction"
} |
Understanding notation regarding particles states and wavefunctions | Question: In the development in my notes of second quantisation I have a problem in understanding notation.
We start by considering a basis $\psi_i(\mathbf{r})$ for the Hilbert space of single particle wavefunctions and then build it to a basis of the Hilbert space of two particles (bosons or fermions). We then generalise this to the case of $N$ particles.
For $N$-particle systems the wavefunction can be written as $\psi_{i_1,\dots,i_N}(\mathbf{r}_1,\dots,\mathbf{r}_N)$. We want to build a basis similar to the case of two particles $\dots$ Let $\psi_{i_1,\dots,i_N}(\mathbf{r}_1,\dots,\mathbf{r}_N)$ be the many particle state in which the single particle states $i_1,\dots,i_N$ are occupied (for bosons some or all of the states may coincide). One might at first consider the product state $\psi_{i_1}(\mathbf{r}_1)\dots \psi_{i_N}(\mathbf{r}_N)$ $\dots$
I cannot understand what this means physically. If we know there is a particle (or particles) in state $i_1$, why do we need $\mathbf{r}_1$? Does the state $i_1$ not specify position? Also, what exactly is meant by "state"? I thought that the wavefunction was the state. Furthermore, how does this relate to a site?
Answer:
If we know there is a particle(s) in state $i_1$ why do we need
$r_1$? Does the state $i_1$ not specify position?
The fact that the particle is in a state $|\psi\rangle$ does not specify the particle's position - it specifies the wave-function
$$
\langle x | \psi \rangle = \psi(x)
$$
from which one can find
$$|\psi(x)|^2$$
This is the probability density for observing the particle $\psi$ at position $x$ - a distribution over $x$. It does not specify that the particle $\psi$ is at the position $x$.
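To connect this back to the product state in the notes: for identical particles the bare product $\psi_{i_1}(\mathbf{r}_1)\psi_{i_2}(\mathbf{r}_2)$ is not yet an admissible basis state; it must be (anti)symmetrised. For two particles in distinct single-particle states $i_1 \neq i_2$, the standard construction is

$$\psi_{i_1 i_2}(\mathbf{r}_1,\mathbf{r}_2)=\frac{1}{\sqrt{2}}\Big[\psi_{i_1}(\mathbf{r}_1)\psi_{i_2}(\mathbf{r}_2)\pm\psi_{i_2}(\mathbf{r}_1)\psi_{i_1}(\mathbf{r}_2)\Big]$$

with $+$ for bosons and $-$ for fermions. The indices $i_1,i_2$ say which single-particle states are occupied; the coordinates $\mathbf{r}_1,\mathbf{r}_2$ are merely the arguments at which the resulting many-body wavefunction is evaluated.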
what exactly does this mean by state, I thought that the wavefunction
was the state?
The wave-function $\psi(x)$ is one representation of the state - a particular choice of (position) basis states, in this case
$$|\psi\rangle = \int dx \psi(x) |x\rangle$$
This is similar to representing a wavefunction as vector $\psi_i$ in some basis states $|i\rangle$, e.g.
$$|\psi\rangle = \sum \psi_i |i\rangle$$ | {
"domain": "physics.stackexchange",
"id": 21497,
"tags": "quantum-mechanics, quantum-field-theory, notation"
} |
Boiling water efficiency | Question: What is the most efficient method to boil water without thermal energy (e.g. burning gas)? Reducing pressure reduces the energy required, but is there something on a molecular level that is the most efficient?
Are microwaves the most efficient, or does generating microwaves take more energy than boiling the water?
Answer: As you may have realized, if you need to change the state of something, energy is required. Efficiency requires a definition of what is "useful," in this case you could be aiming for the phase change or the increased temperature.
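As a rough sense of scale, the phase change dominates the energy budget. The handbook values below (specific heat $\approx 4.19\ \mathrm{kJ/(kg\,K)}$, latent heat $\approx 2257\ \mathrm{kJ/kg}$, a 20 °C starting temperature) are assumed for illustration:

```python
# Rough energy budget for boiling 1 kg of water at atmospheric pressure.
# Assumed handbook values, for illustration only.
c_p = 4.186        # kJ/(kg*K), specific heat of liquid water
h_fg = 2257.0      # kJ/kg, latent heat of vaporisation at 100 C
m = 1.0            # kg

sensible = m * c_p * (100.0 - 20.0)   # warm the water from 20 C to 100 C
latent = m * h_fg                     # actually turn it into steam
print(f"sensible: {sensible:.0f} kJ, latent: {latent:.0f} kJ")
```

Roughly 335 kJ of sensible heat versus 2257 kJ of latent heat: whichever "useful" outcome you pick, most of the energy goes into the phase change itself.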
If you seek the increased temperature, look to put the water in a place where energy can enter easily but has difficulty exiting. Think of a hot car in the sun: light enters, turns into infrared and heat, and has difficulty exiting.
If you seek the phase change, you could be looking for the reduction of liquid or the increase of gas. If you need the reduction of liquid, you just need to have some place for it to go - just exposing it to "dry" air is enough to promote evaporation.
If you need the gas, that is the most interesting case and what steam tables are good for. You need to find the state that has the division of phases you seek (x% liquid, (100-x)% gas) and differs least from your initial state in energy. Then you can strive to remove the gas, extracting as much energy from the removed, pure gas as possible without putting it into a different phase. Generally, this will involve reducing pressure and waiting for the fluid to absorb energy from the surroundings (increased volume should eventually result in lower temperature) until the desired liquid-gas division forms, then slowly sucking out just the gas (pushing it out will cause it to liquefy from the increased pressure!).
Depending on environmental conditions, you may find it difficult to create the pistons and seals necessary for this. At room temperature, it should be as simple as filling a partially expanded piston with water and hanging a weight from it so that the weight tries to expand the piston. The hard part is the vacuum pump you need to pull the gas out (raising the weight). | {
"domain": "engineering.stackexchange",
"id": 4811,
"tags": "thermodynamics, energy, steam"
} |
Source Control and custom VBA Code Exporter | Question: Context
My team and I code in Excel VBA most of the time. Writing code in itself is pretty enjoyable, but combining our work in a shared repository is pretty painful. To ease the pain of combining/merging binary files, we have decided to help ourselves and create the Source Code Manager module.
Key features:
The user should be able to export code modules from the workbook to any location.
The user should be able to import code modules from any location back into the workbook.
Today, I would like from all of you to investigate the former.
Note: if you want to export your code, you might have to select Trust access to the VBA project object model in the Developer Macro Settings (File - Options - Trust Center - Trust Center Settings). Otherwise, code will not be exported!
Note2: Everything related to code management I will purposefully try to keep in one module, to make it as shareable as possible. Sometimes I will intentionally break OO principles, such as creating a new class and moving a set of closely related functionalities into it.
If anything needs an explanation, let me know. That might be a good indication that a piece I wrote is not easy to follow, which I'm actively trying to avoid!
Dependencies
To start working with this code you need to add the following references to your VB Project:
Microsoft Scripting Runtime
Microsoft Visual Basic for Application Extensibility 5.3
Microsoft ActiveX Data Object 6.1 Library
This project also requires additional modules, which can be found in External modules:
ArrayH
Exception
Tools
Client code
In its simplest form, you have to call one method, SourceControlH.ExportProjectComponents, which requires two parameters: the source from which components will be exported (of type Workbook) and the location where components will be stored (of type String).
Public Sub Start()
SourceControlH.ExportProjectComponents ThisWorkbook, ThisWorkbook.Path & "\src"
End Sub
The following code shows how I use the SourceControlH module with the Workbook_AfterSave event, which lets me export the entire VBProject when the workbook is saved.
Private Sub Workbook_AfterSave(ByVal Success As Boolean)
Dim ExportFolder As String
ExportFolder = ThisWorkbook.Path & "\src"
If Fso.FolderExists(ExportFolder) = False Then
Fso.CreateFolder ExportFolder
End If
SourceControlH.ExportProjectComponents ThisWorkbook, ExportFolder
End Sub
Under the hood
This section will present the actual code which does all the magic.
SourceControlH.bas
Option Explicit
'@Folder("Helper")
Private Const ModuleName As String = "SourceControlH"
' Path to the folder where components will be saved.
Private pExportFolderPath As String
' Indicates if empty components should be exported or not.
Private pExportEmptyComponents As Boolean
Public Property Get ExportEmptyComponents() As Boolean
ExportEmptyComponents = pExportEmptyComponents
End Property
Public Property Let ExportEmptyComponents(ByVal Value As Boolean)
pExportEmptyComponents = Value
End Property
' Exports and saves project's components, from Source workbook
' to the location which is specified in Path argument.
' If Source.VBProject is protected, throw an InvalidOperationException.
' If the target path does not exist, or if the path does not point to a folder,
' throw a DirectoryNotFoundException.
Public Sub ExportProjectComponents(ByVal Source As Workbook, ByVal Path As String)
Const MethodName = "ExportProjectComponents"
If Source.VBProject.Protection = vbext_pp_locked Then
Exception.InvalidOperationException "Source.VBProject.Protection", _
"The VBA project in this workbook is protected; " & _
"therefore, it is not possible to export the components. " & _
"Unlock your VBA project and try again. " & ModuleName & "." & MethodName
End If
If Tools.Fso.FolderExists(Path) = False Then
Exception.DirectoryNotFoundException "Path", ModuleName & "." & MethodName
End If
pExportFolderPath = NormalizePath(Path)
Dim Cmp As VBComponent
For Each Cmp In GetExportableComponents(Source.VBProject.VBComponents)
ExportComponent Cmp
Next Cmp
End Sub
Private Function GetExportableComponents(ByVal Source As VBIDE.VBComponents) As Collection '<VbComponents>
Dim Output As New Collection
Dim Cmp As VBIDE.VBComponent
For Each Cmp In Source
If IsExportable(Cmp) Then
Output.Add Cmp
End If
Next Cmp
Set GetExportableComponents = Output
Set Cmp = Nothing
Set Output = Nothing
End Function
Private Function IsExportable(ByVal Component As VBIDE.VBComponent) As Boolean
' Check if component is on the list of exportable components.
If ArrayH.Exists(Component.Type, ExportableComponentsTypes) = False Then
IsExportable = False
Exit Function
End If
If IsComponentEmpty(Component) = False Then
IsExportable = True
Exit Function
End If
If pExportEmptyComponents = True Then
IsExportable = True
Exit Function
End If
IsExportable = False
End Function
Private Property Get ExportableComponentsTypes() As Variant
ExportableComponentsTypes = Array(vbext_ct_ClassModule, vbext_ct_MSForm, vbext_ct_StdModule, vbext_ct_Document)
End Property
' Indicates if component is empty by checking number of code lines.
' Files which contain just Option Explicit will be counted as empty.
Private Function IsComponentEmpty(ByVal Source As VBIDE.VBComponent) As Boolean
If Source.CodeModule.CountOfLines < 2 Then
IsComponentEmpty = True
ElseIf Source.CodeModule.CountOfLines = 2 Then
Dim Ln1 As String: Ln1 = Source.CodeModule.Lines(1, 1)
Dim Ln2 As String: Ln2 = Source.CodeModule.Lines(2, 1)
IsComponentEmpty = (VBA.LCase$(Ln1) = "option explicit" And Ln2 = vbNullString)
Else
IsComponentEmpty = False
End If
End Function
Private Sub ExportComponent(ByVal Component As VBIDE.VBComponent)
' Full name means - name of the component with an extension.
Dim FullName As String: FullName = GetComponentFullName(Component)
Dim ExportPath As String: ExportPath = pExportFolderPath & FullName
Component.Export ExportPath
End Sub
' To avoid problems with saving components, add backslash
' at the end of folder path.
Private Function NormalizePath(ByVal Path As String) As String
NormalizePath = Path & IIf(Path Like "*\", vbNullString, "\")
End Function
Private Property Get ComponentTypeToExtension() As Dictionary
Dim Output As New Dictionary
With Output
.Add vbext_ct_ClassModule, "cls"
.Add vbext_ct_MSForm, "frm"
.Add vbext_ct_StdModule, "bas"
.Add vbext_ct_Document, "doccls"
.Add vbext_ct_ActiveXDesigner, "ocx"
End With
Set ComponentTypeToExtension = Output
End Property
Private Function GetComponentFullName(ByVal Component As VBIDE.VBComponent) As String
GetComponentFullName = Component.Name & "." & ComponentTypeToExtension.Item(Component.Type)
End Function
External modules
Modules with code which are used by the SourceControlH module. These have to be included in your project as well.
Exception.cls
Option Explicit
'@Exposed
'@Folder("Lapis")
'@PredeclaredId
' The exception that is thrown when a null reference (Nothing in Visual Basic)
' is passed to a method that does not accept it as a valid argument.
Public Sub ArgumentNullException(ByVal ParamName As String, ByVal Message As String)
Err.Raise 513, , "Value cannot be null." & vbNewLine & vbNewLine & _
"Additional information: " & Message & vbNewLine & vbNewLine & _
"Parameter: " & ParamName
End Sub
' The exception that is thrown when one of the arguments provided to a method is not valid.
Public Sub ArgumentException(ByVal ParamName As String, ByVal Message As String)
Err.Raise 518, , "An exception of type ArgumentException was thrown." & vbNewLine & vbNewLine & _
"Additional information: " & Message & vbNewLine & vbNewLine & _
"Parameter: " & ParamName
End Sub
' The exception that is thrown when a method call is invalid for the
' object's current state.
Public Sub InvalidOperationException(ByVal ParamName As String, ByVal Message As String)
Err.Raise 515, , "An exception of type InvalidOperationException was thrown." & vbNewLine & vbNewLine & _
"Additional information: " & Message & vbNewLine & vbNewLine & _
"Parameter: " & ParamName
End Sub
' Occurs when an exception is not caught.
Public Sub UnhandledException(ByVal Message As String)
Err.Raise 517, , "An exception of type UnhandledException was thrown." & vbNewLine & vbNewLine & _
"Additional information: " & Message & vbNewLine & vbNewLine
End Sub
' The exception that is thrown when a file or directory cannot be found.
Public Sub DirectoryNotFoundException(ByVal ParamName As String, ByVal Message As String)
Err.Raise 520, , "An exception of type DirectoryNotFoundException was thrown." & vbNewLine & vbNewLine & _
"Additional information: " & Message & vbNewLine & vbNewLine & _
"Parameter: " & ParamName
End Sub
Tools.bas
Option Explicit
'@Folder("Lapis")
Private Const ModuleName As String = "Tools"
'@Ignore EncapsulatePublicField
Public Fso As New FileSystemObject
' Returns number of text lines based on the specified Stream.
' Throws an ArgumentNullException when Stream is set to nothing.
' Throws an ArgumentException when Stream is closed.
Public Function LinesCount(ByVal Stream As ADODB.Stream) As Long
Const MethodName = "LinesCount"
If Stream Is Nothing Then
Exception.ArgumentNullException "Stream", ModuleName & "." & MethodName
End If
If Tools.IsStreamClosed(Stream) Then
Exception.ArgumentException "Stream", "Stream is closed. " & ModuleName & "." & MethodName
End If
Dim Ln As Long
Stream.Position = 0
Do While Stream.EOS <> True
Stream.SkipLine
Ln = Ln + 1
Loop
LinesCount = Ln
End Function
' Determines if TextStream is closed.
' There is no property of TextStream object (like object.IsClosed) to know directly if Stream is closed
' or not. To know TextStream's state, method will attempt to read cursor position. If it fails
' (throws an error), that will mean Stream is not readable (closed).
' Throws an ArgumentNullException when Stream is set to nothing.
Public Function IsStreamClosed(ByVal Stream As ADODB.Stream) As Boolean
Const MethodName = "IsStreamClosed"
If Stream Is Nothing Then
Exception.ArgumentNullException "Stream", ModuleName & "." & MethodName
End If
On Error Resume Next
Dim i As Long: i = Stream.Position
If Err.Number = 91 Then
IsStreamClosed = True
On Error GoTo 0
ElseIf Err.Number = 0 Then
IsStreamClosed = False
Else
' Other, unexpected error occurred. This error has to be moved upward.
Exception.UnhandledException ModuleName & "." & MethodName
End If
End Function
ArrayH.bas
Option Explicit
'@Folder("Helper")
' Item parameter has to be a simple type.
' Arr has to have only one dimension.
Public Function Exists(ByVal Item As Variant, ByRef Arr As Variant) As Boolean
Exists = (UBound(Filter(Arr, Item)) > -1)
End Function
Conclusion
What do you think about this code?
I'm looking forward to hearing your thoughts about this piece. Within a week, I will also submit an import functionality for review.
Answer: The overall first impression is a very good one. Procedures are small, focused, generally well-named, everything is pretty much in its place - well done!
What follows is a series of observations, and suggestions / how I'd go about "fixing" them.
Controlling object lifetime
In my view, it's important to be able to reliably know whether any object pointer is valid at any given point in the code that needs to dereference that pointer: for any object we create and consume, we want to be able to control when it's created, and when it's destroyed.
So while this is procedural code and we're not going to fuss much about coupling, we can still flag the auto-instantiated object:
'@Ignore EncapsulatePublicField
Public Fso As New FileSystemObject
While convenient, a global-scope auto-instantiated FSO object isn't something I'd recommend. Kudos for early-binding (side note: consider qualifying the library, e.g. As Scripting.FileSystemObject), but like anything accessing external resources (e.g. database connection, file handle, etc.), IMO its scope and lifetime should be as limited as possible. With a global-scope As New declaration, you give VBA the entire control over that object's actual lifetime.
Alternatively, a With block could hold the object reference in a tight, well-defined local scope, and we wouldn't even need to declare a variable for it:
Private Sub Workbook_AfterSave(ByVal Success As Boolean)
Dim ExportFolder As String
ExportFolder = ThisWorkbook.Path & "\src"
With New Scripting.FileSystemObject
If Not .FolderExists(ExportFolder) Then
.CreateFolder ExportFolder
End If
End With
SourceControlH.ExportProjectComponents ThisWorkbook, ExportFolder
End Sub
Note that the {bool-expression} = False condition is redundant - comparing a Boolean expression to a Boolean literal is always redundant: Not {bool-expression} is more idiomatic, more concise, and more expressive.
Portability
VBA code that doesn't need to be tied to a particular specific VBA host application's object model library, should avoid such dependencies.
Public Sub ExportProjectComponents(ByVal Source As Workbook, ByVal Path As String)
The Source parameter should be a VBProject object, not a Workbook; by taking in an Excel.Workbook dependency, the module becomes needlessly coupled with the Excel object model: if you needed to reuse this code in the future for, say, a Word VBA project, you'd need to make changes.
Consistency
Qualifying members is nice! Consistently qualifying members is better :)
If Tools.Fso.FolderExists(Path) = False Then
Why is this Fso qualified with the module name, but not the one in ThisWorkbook? Without Rubberduck to help, a reader would need to navigate to the definition to make sure it's referring to the same object.
But, then again, I'd New up the FSO on the spot, and let VBA claim that pointer as soon as it's no longer needed:
With New Scripting.FileSystemObject
If Not .FolderExists(Path) Then
Exception.DirectoryNotFoundException "Path", ModuleName & "." & MethodName
End If
End With
Other notes
I like your centralized approach to error-raising very much! I find the term "exception" a bit misleading though (if it's an exception, where's my stack trace?), and the procedure names read like properties. I'd propose something like this:
Errors.OnDirectoryNotFound "Path", ModuleName & "." & MethodName
It removes the doubled-up "Exception" wording from the instruction, and the On prefix is reminiscent of the .NET convention to name event-raising methods with that prefix.
The Exception module being a class feels a bit wrong, even more so given the @PredeclaredId Rubberduck annotation, which presumably was synchronized and indicates the class has a VB_PredeclaredId = True attribute value: the class is never instantiated, only its default instance is ever invoked. The .NET equivalent is a static class with static methods, and the idiomatic VBA equivalent is a standard procedural module.
Of course Public Sub procedures in a standard module would be visibly exposed as macros in Excel, and using a class module prevents that... but so does Option Private Module!
Side note, there's a spelling error in this message:
Exception.InvalidOperationException "Source.VBProject.Protection", _
"The VBA project, in this workbook is protected " & _
"therefor, it is not possible to export the components. " & _
"Unlock your VBA project and try again. " & ModuleName & "." & MethodName
The comma after The VBA project is superfluous, there should be a dot after is protected, and so therefor should be Therefore, capital-T.
That said, VBA project protection can easily be programmatically thwarted, so with a little tweaking I think you could make this macro a bad boy that can just unlock a locked project to export it - but yeah, prompting the user with "oops, it's locked, try again" is probably the more politically-correct way to go about handling project protection.
I'm not finding any uses for the LinesCount function, and it validating whether the stream is open strikes me as weird: raising this error would clearly only ever happen because of a bug, and should be a Debug.Assert check, if present at all.
If ArrayH.Exists(Component.Type, ExportableComponentsTypes) = False Then
That H again? I'm starting to think it just stands for Helper, which is a code smell in itself. Once more, this condition would read better as If Not ArrayH.Exists(...) Then, but I'd like to point out that these helper methods feel very much like what would be extension methods in .NET-land, and ArrayExt.Exists - or better, a fully spelled-out ArrayExtensions.Exists would raise fewer eyebrows. Kudos for avoiding the trap of just dumping all "helper" procedures and functions into some Helpers bag-of-whatever module.
' Path to the folder where components will be saved.
Private pExportFolderPath As String
' Indicates if empty components should be exported or not.
Private pExportEmptyComponents As Boolean
This very much Systems Hungarian p prefix is distracting: there's no Hungarian Notation anywhere in the code and it reads like a charm - yes, naming is hard. Yes, naming is even harder in a case-insensitive language like VBA (or VB.NET).
You could make a simple, private data structure to hold the configuration state, and with that you wouldn't need any prefixing scheme:
Private Type ConfigState
ExportFolderPath As String
WillExportEmptyComponents As Boolean
End Type
Private Configuration As ConfigState
Note that because Export is a noun in the String variable, but a verb in the Boolean one, a distinction is necessary IMO. Adding a Will prefix to the Boolean name clarifies everything, I find. And now you can have properties named exactly after the ConfigState members, without any prefixing scheme - note the Rubberduck annotation opportunity for a @Description annotation, too:
'@Description("Indicates if empty components should be exported or not.")
Public Property Get WillExportEmptyComponents() As Boolean
WillExportEmptyComponents = Configuration.WillExportEmptyComponents
End Property
Public Property Let WillExportEmptyComponents(ByVal Value As Boolean)
Configuration.WillExportEmptyComponents = Value
End Property
Speaking of Rubberduck opportunities, the @Folder organization can be enhanced - the "using @Folder annotations" page on the project's wiki describes how the annotation can be used to create subfolders:
'@Folder("Parent.Child.SubChild")
We have SourceControlH and ArrayH modules under '@Folder("Helper"), some Tools module (FWIW "Tools" has the exact same smell as "Helper" does) under '@Folder("Lapis"); the Exception module is under '@Folder("Lapis") as well, which means the tree structure looks like this:
- [Helper]
- ArrayH
- SourceControlH
- [Lapis]
- Tools
- Exception
Not sure what Lapis means, but the contents of the Tools module have this "whatever couldn't neatly fit anywhere else" bag-of-whatever feeling to them. What I wonder, though, is why there's no clear dedicated SourceControl folder.
I'm not going to claim a more OOP approach would even be warranted here (procedural is perfectly fine), but the basis for sticking to procedural feels wrong: it's not a self-contained module, it has dependencies and must be packaged as a "bunch of modules that need to be imported together" anyway.
Having a Helpers.SourceControl folder would give you the dedicated space to cleanly split responsibilities while keeping the components neatly regrouped (in Rubberduck's Code Explorer toolwindow, that is).
' Full name means - name of the component with an extension.
Dim FullName As String: FullName = GetComponentFullName(Component)
I've seen Microsoft claim using the : instruction separator like this is "good practice" and "helps transition to VB.NET syntax". I'm not buying it at all. It looks awful and crowded. That comment is also very informative: it reads "this GetComponentFullName procedure needs a better name". In the Excel object model, FullName includes not only the file extension, but also the full path: your version of "full" isn't quite as "full" as it should be. In fact, FullName is actually nothing more than a fileName:
Dim fileName As String
fileName = GetComponentFileName(Component)
Kudos here:
.Add vbext_ct_Document, "doccls"
This file extension is compatible with Rubberduck's own handling of document modules. By default, the VBIDE API exports document modules with a .cls file extension, which makes them import as class modules: to import them back into a Worksheet module, or into ThisWorkbook, you need some special handling, and that different file extension works great.
Source-controlling VBA code is hard, because the code in a document module (e.g. Worksheet) can very well include references to objects that exist in the host document, like ListObject tables and whatnot - and these can't really be under source control (not without having the whole host document under source control too!). Worksheet layout can't be restored from source code, unless the worksheet layout is itself actually coded: this means a VBA project restored from source control can't really ever fully restore a project without the original host document anyway. So, kudos for tackling this thorny issue.
Note that the last few pre-release builds of Rubberduck include bulk import/export functionality that does everything your code does, out of the box, without requiring programmatic access to the VBIDE Extensibility library, and without needing to share and manage versions for a SourceControlH module across devs and projects: | {
"domain": "codereview.stackexchange",
"id": 36788,
"tags": "vba"
} |
DFT and multiplication/convolution equivalence | Question: Is there a simple or potentially intuitive explanation for why, with the DFT, vector multiplication in one domain is equivalent to circular convolution of the transforms of the vectors in the other domain?
Since a DFT is just multiplication by a (special) square matrix, what about this matrix and matrix multiply allows the above duality?
Answer: As you correctly say, the DFT can be represented by a matrix multiplication, namely by the Fourier matrix $\mathbf{F}$. On the other hand, the DFT "transforms" a cyclic convolution into a multiplication (all Fourier transform variants, such as the DFT, DTFT, and FT, have a similar property of transforming convolution to multiplication) and vice versa.
To understand this in the matrix picture, note that (circular) convolution with a given sequence can also be represented by a matrix multiplication. More specifically, this is a circulant matrix, a special kind of Toeplitz matrix.
So $\mathbf{y} = \mathbf{c} * \mathbf{x}$, with $*$ the cyclic convolution, can be written as
$\mathbf{y} = \mathbf{C}(\mathbf{c}) \mathbf{x}$, with $\mathbf{C}$ denoting the circulant matrix formed from the entries of the vector $\mathbf{c}$.
If we "transform" this equation with the DFT (i.e. multiplication by $\mathbf{F}$) we obtain
$\widehat{\mathbf{y}} = \mathbf{F} \, \mathbf{C}(\mathbf{c}) \,\mathbf{F}^H\, \widehat{\mathbf{x}}$
with $\widehat{\mathbf{y}} =\mathbf{F}\mathbf{y}$ and $\widehat{\mathbf{x}} =\mathbf{F}\mathbf{x}$ the respective DFTs (note $\mathbf{F}^H$ represents the IDFT).
The point is now that $\mathbf{F} \mathbf{C}(\mathbf{c}) \,\mathbf{F}^H$ is always a diagonal matrix, because all circulant matrices are diagonalized by the Fourier matrix. This means that the eigenvectors of circulant matrices are just given by the rows of the Fourier matrix.
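This diagonalization is easy to check numerically. A small sketch (assuming NumPy is available; not part of the original answer):

```python
# Build a random circulant matrix C(c) and verify that the unitary DFT
# matrix F diagonalizes it, with eigenvalues equal to the DFT of c.
import numpy as np

N = 8
rng = np.random.default_rng(0)
c = rng.standard_normal(N)

# Circulant matrix: column k is c cyclically shifted down by k,
# so that C @ x computes the cyclic convolution c * x.
C = np.column_stack([np.roll(c, k) for k in range(N)])

# Unitary DFT matrix F.
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

D = F @ C @ F.conj().T  # should come out diagonal

assert np.allclose(D - np.diag(np.diag(D)), 0)  # off-diagonal terms vanish
assert np.allclose(np.diag(D), np.fft.fft(c))   # diagonal = DFT of c
```

The same check passes for any $N$ and any $\mathbf{c}$, since every circulant matrix shares this set of eigenvectors.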
This is of course consistent with the convolution picture,
because the DFT transforms the convolution to an elementwise multiplication. Moreover, the diagonal elements of this matrix are just the DFT of $\mathbf{c}$ or, equivalently, the eigenvalues of the circulant matrix formed from $\mathbf{c}$. | {
"domain": "dsp.stackexchange",
"id": 7823,
"tags": "fft, fourier-transform, dft"
} |
Heat equation and discontinuous thermal conductivity in the finite volume method | Question: I am looking for a rigorous way to deal with discontinuous thermal conductivity $K(x)$ in the heat equation (e.g. on the boundary of two different materials)
$$ u' = \nabla_x\cdot (K(x)\nabla_x u) + \text{\{possible other terms\}},$$
where $u(t,x)$ is the temperature. Clearly the equation breaks down at the discontinuity since the derivative of $K$ does not exist; what boundary condition of the non-discretized equation above replaces the discontinuity?
When the equation is discretized, for a basic finite difference scheme at the boundary it is possible, in a loose sense, to use the "average value" (is there a derivation somewhere?)
$$K(x)\approx (1/2) (K(x+1)+K(x)).$$
For finite volume scheme not at the boundary
$$Au' = \int\int\int (\nabla_x\cdot K\nabla_x u)dA$$
$$=\int\int (K\nabla_x u) \cdot \bar ndS$$
$$=K\int \int\nabla_xu \cdot \bar n dS$$
Let's say a finite difference scheme is applicable, although there could be more generality
$$\approx K \sum_k (u(x_1,...,x_k+1,...,x_d) + u(x,...,x_k-1,...,x_d)-2u(x))$$
The average value approximation then yields
$$\approx \sum_k \frac{1}{2}(K(x)+K(x_1,...,x_k+1,...,x_d)) \cdot (u(x_1,...,x_k+1,...,x_d) -u(x)) +...,$$
for the finite volume method, is it valid on the boundary? References to books are welcome.
Answer: Put a grid point $x_j$ at the interface between the two regions. Then $$k^+\frac{T_{j+1}-T_j}{\Delta x}=k^-\frac{T_j-T_{j-1}}{\Delta x}$$That is, the heat flux is continuous at the interface. | {
"domain": "physics.stackexchange",
"id": 95985,
"tags": "fluid-dynamics, thermal-conduction"
} |
Mammal Population Estimates as a Fermi problem | Question: Can we estimate the order of magnitude of the number of mammals on Earth? Can this be treated as a Fermi problem or not?
Answer: Yes and no. Yes in the sense that I believe a Fermi-type scrutiny of the problem quickly tells you that you need very detailed data.
What may be derivable from such an analysis is an estimate of the total mammal biomass. A kilogram of living mammal has roughly the same energy needs whatever mammal it belongs to, although there would need to be a "fudge factor" to account for the animal's size: small animals tend to have very high surface-area-to-mass ratios and thus high metabolic rates. Looking at the numbers of livestock farmers can successfully sustain per unit area in various parts of the world, you might hazard a guess as to what mammal biomass there could be in the various climates.
But - and here is where you would be forced to shift from a Fermi type analysis to detailed research to get anything sensible - you would need to estimate what proportion of animals of each size makes up the mammal biomass. Is eight tonnes of mammal biomass one African Bull Elephant, or 160 000 mice at 50 grams each? Here I think we strike a problem: I can't see any way of "estimating" the size distribution, without detailed knowledge of population dynamics in each ecology.
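To put a rough number on that contrast, one can use Kleiber's law (basal metabolic rate scaling roughly as $M^{3/4}$); the prefactor below is an often-quoted approximate fit for mammals, my assumption rather than part of the answer:

```python
# Kleiber's law sketch: P ~ a * M^(3/4), with a ~ 3.4 W/kg^0.75 (approximate).
A = 3.4  # W / kg^0.75, assumed prefactor

def metabolic_power(mass_kg, count=1):
    """Total basal metabolic power of `count` animals of mass `mass_kg`."""
    return count * A * mass_kg ** 0.75

elephant = metabolic_power(8000)             # one 8-tonne elephant
mice = metabolic_power(0.05, count=160_000)  # the same biomass as mice

# Same biomass, ~20x the energy need: 160000**0.25 == 20.
print(f"elephant ~ {elephant:.0f} W, mice ~ {mice:.0f} W, "
      f"ratio ~ {mice / elephant:.0f}x")
```

So eight tonnes of mice need roughly twenty times the food of one eight-tonne elephant, which is exactly why the size distribution cannot be skipped.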
Also of relevance may be the Biology Stack Exchange Question "How Many Mice Are There On The Earth?". | {
"domain": "physics.stackexchange",
"id": 17391,
"tags": "homework-and-exercises, order-of-magnitude, estimation"
} |
Does Earth get gravity due to its spinning? | Question: Consider my example below.
Example: 1. A man is standing on grass (a point) on the Earth's surface.
2. He jumps.
3. He comes back down and finds the same grass (point) under his feet. Right?
If the Earth is spinning, then while he is in the air (during the jump), the grass (point) should have moved to the right or left, depending on the Earth's direction of rotation, right? But that is not what happens in a real scenario.
Newton called this "gravity", and the definition would be: the force that attracts a body towards the centre of the Earth, or towards any other physical body having mass.
But from my example, the man seems to stay attached to the Earth's surface by some force due not only to the Earth's massive body, but also to its spinning.
Because if gravity were only about mass, the man returning to the Earth's surface would make sense, but how could he end up settling down on the same place on the Earth's surface where he stood before he jumped?
I am really curious about the answer.
Answer: What you have to realize is, both the man and the ground beneath him are travelling at the same speed. What that means is, yes, the ground underneath the man is travelling east very quickly, but the man is also travelling east just as quickly.
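To attach rough numbers (an illustrative sketch; the equatorial figures and the half-second airtime are my assumptions, not part of the answer):

```python
import math

R = 6.378e6  # m, Earth's equatorial radius
T = 86164.0  # s, sidereal day

v_surface = 2 * math.pi * R / T  # eastward speed of the ground at the equator

t_air = 0.5  # s, assumed airtime of a jump
drift = v_surface * t_air

print(f"ground speed ~ {v_surface:.0f} m/s")
print(f"during the jump the ground moves ~ {drift:.0f} m east")
print("...and so does the jumper, who left the ground with that same eastward velocity.")
```

Both the ground and the jumper cover those hundreds of metres eastward together, so neither "sees" the other move.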
The gravitational force causes the man to come back down, but it has nothing to do with making sure he stays over the same point on the earth's surface. He will ALREADY stay over the same patch of earth, due to his speed eastward. | {
"domain": "physics.stackexchange",
"id": 23889,
"tags": "newtonian-gravity, reference-frames, earth, relative-motion"
} |
Can I further optimize this solution for HackerRank's “Making Candies”? | Question: My C++ solution for HackerRank's "Making Candies" problem, reproduced below, is as optimized as I can make it and appears similar to what others say are optimal solutions. However, six of the test cases still fail due to timing out. I would be interested to know if there are any significant opportunities to optimize my code that I missed.
I'm guessing that I'm either missing some way to simplify the computation (perhaps part of it can be precomputed and stored in a lookup table?), or there's some way to compute the answer without using a loop.
std::ios::sync_with_stdio(false);
long m, w, p, n;
std::cin >> m >> w >> p >> n;
for (long candies = 0, passes = 0, total = LONG_MAX; ; ++passes) {
const auto production = __int128{m} * w;
const long goal = n - candies;
const long passes_needed = goal/production + !!(goal%production);
const long subtotal = passes + passes_needed;
if (passes_needed <= 2) {
std::cout << subtotal;
return 0;
}
if (total < subtotal) {
std::cout << total;
return 0;
}
total = subtotal;
candies += production;
if (candies >= p) {
const auto d = std::div(candies, p);
long budget = d.quot; candies = d.rem;
const long diff = w - m;
if (diff < 0) {
const long w_hired = std::min(budget, -diff);
budget -= w_hired;
w += w_hired;
} else if (diff > 0) {
const long m_purchased = std::min(budget, diff);
budget -= m_purchased;
m += m_purchased;
}
const long half = budget >> 1;
m += half + (budget & 1);
w += half;
}
}
Answer: I figured out how to get the failing test cases passing, with a little help from G. Sliepen's answer. It turns out I needed to calculate the number of passes needed to buy another machine or worker, and return early, as G. Sliepen suggested, if that number equaled or exceeded the number of passes needed to complete the order without any purchases.
But, in addition to returning early, I also could leverage that calculation to advance the algorithm forward to the point where we buy something. This addition is what ultimately allowed the program to pass all the test cases.
The relevant snippet:
const std::int64_t passes_needed_to_buy =
candies < p ? (p - candies) / production : 0;
if (passes_needed_to_buy >= passes_needed - 1) {
// We won't buy anything, so return early
std::cout << total;
return 0;
}
// Skip forward to when we buy something
passes += passes_needed_to_buy;
candies += production * passes_needed_to_buy;
The full program:
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <iostream>
#ifdef __SIZEOF_INT128__
using Int128 = __int128;
#else
#include <boost/multiprecision/cpp_int.hpp>
using Int128 = boost::multiprecision::int128_t;
#endif
int main() {
std::ios::sync_with_stdio(false);
std::int64_t m, w, p, n;
std::cin >> m >> w >> p >> n;
assert(m > 0 && m <= 1'000'000'000'000);
assert(w > 0 && w <= 1'000'000'000'000);
assert(p > 0 && p <= 1'000'000'000'000);
assert(n > 0 && n <= 1'000'000'000'000);
for (std::int64_t candies = 0, passes = 0, total = INT64_MAX; ; ++passes) {
const auto production = Int128{m} * w;
const auto goal = n - candies;
const std::int64_t passes_needed =
goal/production + !!(goal%production);
const auto subtotal = passes + passes_needed;
if (passes_needed <= 2) {
std::cout << subtotal;
return 0;
}
if (total < subtotal) {
std::cout << total;
return 0;
}
total = subtotal;
candies += production;
const std::int64_t passes_needed_to_buy =
candies < p ? (p - candies) / production : 0;
if (passes_needed_to_buy >= passes_needed - 1) {
std::cout << total;
return 0;
}
passes += passes_needed_to_buy;
candies += production * passes_needed_to_buy;
const auto d = std::div(candies, p);
auto budget = d.quot; candies = d.rem;
const auto diff = w - m;
if (diff < 0) {
const auto w_hired = std::min(budget, -diff);
budget -= w_hired;
w += w_hired;
} else if (diff > 0) {
const auto m_purchased = std::min(budget, diff);
budget -= m_purchased;
m += m_purchased;
}
const auto half = budget >> 1;
m += half + (budget & 1);
w += half;
}
} | {
"domain": "codereview.stackexchange",
"id": 41005,
"tags": "c++, performance, time-limit-exceeded, mathematics, complexity"
} |
Sed script that removes specific lines in `seq` output | Question: I have a file to process with sed but I am not quite familiar with the commands with capital letters used for multi-line patterns. I use seq to test the script and I have converted it to the problem described below. The sed script and expected output are also attached. I believe the script can be written in a much better way but I am not sure how to do it.
Problem description:
Filter the output of seq $m, where $m is a given integer, removing the \$n^{\rm th}\$ line if any of lines \$n-2\$, \$n-1\$, \$n\$, or \$n+1\$ contains the digit \$7\$.
Sed script (together with the seq pipe, note this is GNU sed):
seq "$m"|sed ':c N;N;N;:a N;s/.*\n.*7.*\n.*\n.*\n//g;tc;P;s/^[^\n]*\n//g;ba;'
I believe setting two labels (a and c) is not necessary.
Edit: There seems to be a nicer alternative, based on the answer to Delete 5 Lines Before and 6 Liens After Pattern Match Using sed, as follows,
seq "$m"|sed 'N;/7/!{P;D};:b N;s/\n/&/3;Tb;d'
This avoids writing out a lot of N; and \n's explicitly. Still I believe it can be improved.
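As an aside (not part of the sed question itself), a plain reference implementation of the rule is handy for checking any variant of the script against the sample:

```python
def filter_sevens(m):
    """Keep line n (1-based) of seq m unless any of lines n-2..n+1 contains '7'."""
    lines = [str(i) for i in range(1, m + 1)]
    kept = []
    for idx, line in enumerate(lines):
        window = lines[max(0, idx - 2):idx + 2]  # lines n-2, n-1, n, n+1
        if not any('7' in s for s in window):
            kept.append(line)
    return kept

print('\n'.join(filter_sevens(100)))
```

Its output for m=100 matches the 52-line sample given in the question.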
Sample output for m=100
1
2
3
4
5
10
11
12
13
14
15
20
21
22
23
24
25
30
31
32
33
34
35
40
41
42
43
44
45
50
51
52
53
54
55
60
61
62
63
64
65
82
83
84
85
90
91
92
93
94
95
100
Related: Extension of Game of Sevens Challenge at CodeGolf
Answer: N;/7/!{P;D}
Nice job there, this very concisely (and clearly) allows you to capture both lines \$n\$ and \$n+1\$.
:b N;s/\n/&/3;Tb;d
Now the loop that follows it is mostly redundant. You are essentially appending lines of input until you are left with \$4\$ lines in total. You already have \$2\$ in your pattern space and need \$2\$ more, i.e. lines \$n-1\$ and \$n-2\$, which can simply be expressed in N;N followed by a d. | {
"domain": "codereview.stackexchange",
"id": 31133,
"tags": "bash, sed"
} |
cell uptake prediction | Question: I'm generating random molecules and I would like to know if they are able to pass through the cell membrane. Are there any ways (preferably computational) to predict the cellular uptake of an organic molecule of about 55 atoms?
Answer: Absorption, distribution, metabolism, excretion, and toxicity (ADMET) can be somewhat computationally defined, but the field is largely secretive due to the money involved in drug discovery. FAF-Drugs2 will do basic ADMET. The Schrödinger Suite provides more in-depth ADMET for a fee. | {
"domain": "biology.stackexchange",
"id": 1049,
"tags": "cell-membrane, organic-chemistry"
} |
How to split tasks between interrupts and the main loop on a bare metal controller? | Question: I'm working on a robotics project where I have 3 services running. I have my sensor DAQ, my logic ISR (motor controller at 20kHz) and my EtherCAT slave controller.
DAQ and EtherCAT run in the idle loop and the logic runs during an interrupt. The logic does some calculations and controls the motor. The EtherCAT service (kinda like CANbus) runs together with my DAQ in the idle loop. I can not run the DAQ in the interrupt because that leaves me with less than 100ns for the EtherCAT service to run.
I'm not sure whether this is the right way to do this especially considering all the scary things i've read regarding data corruption when using interrupts.
Does anyone have some nice ideas on how to handle these services?
I'm running all my code on a Zynq 7020 (on the ARM Cortex) and it's written in C++.
Here is an example of my code:
/**
* Get all sensor data
*/
void Supervisor::communication(void) {
// Get all the sensors data
dispatchComm.getData(&motorAngle, &motorVelocity, &jointAngle, &springAngle, &tuningParameter);
}
/**
* Run all the logic
*/
void Supervisor::logic(void) {
dispatchLogic.calculate(tuningParameter, motorAngle, motorVelocity, jointAngle, springAngle);
dispatchLogic.getData(&angle, &magnitude);
// Dispatch values to the motor drive
dispatchComm.setMotorDriveSetpoint(angle, magnitude);
dispatchComm.writeToPiggyback((uint32_t) (tuningParameter), motorAngle, motorVelocity);
}
Answer: Some very general advice.
My first impression from your question is that you are struggling with the fact that no hardware provides infinite resources. You can either slow down your control loops or look for better hardware (which may not exist).
If your control has nested loops, then run each one as slowly as possible. Are you controlling position at 20kHz? Pull that out and run it at 500Hz (a little faster if your mechanical mechanism is designed for high performance). Is your DAQ measuring current for the controller? Try running your current control at 5kHz but commutating at 20kHz.
All those numbers assume a meso-scale device using electromagnetic actuators.
Regarding interrupts, the rule of thumb is to do as little as possible, as quickly as possible, inside the interrupt and everything else outside it. However it feels like you might be using a timer interrupt to substitute for a real-time task scheduler (nothing wrong with that), in which case, you will want to do everything needed for that cycle in the interrupt. Add a time-overrun check at the end so that if your interrupt is taking longer than 50us you know about it and can adjust your strategy or optimize code.
If you are writing interrupts then you need to go learn the details about data corruption issues. They are not scary and they are straightforward to manage on all modern processors. You can protect data in your idle loop by disabling interrupts in critical sections or using higher level synchronization constructs if you really need to.
A final thought: even though you are coding in C++, you should read the processor datasheet and instruction set so that you have an idea of the capabilities and limitations of the hardware, if you have not already.
EDIT:
From the limited code you added it looks like high level control for a series elastic actuator. The spring will dominate the mechanical time constant of the actuator system and you probably do not need that loop to run at 20kHz in lockstep with your motor commutation. | {
"domain": "robotics.stackexchange",
"id": 1076,
"tags": "c++, interrupts"
} |
Infinite gravity "train" | Question: I am trying to build a simple physics engine, and when I implemented gravity, I could see this:
You can imagine these as planets attracting each other, so in this situation they can just keep going infinitely...
So I want to know if it is a mistake in my engine or if this really could work in the universe :) Thanks
Answer: Assuming that there are no other bodies in the simulated universe besides those four circles, we can define a system of bodies such that there are no external forces acting on the system.
The four circles may exert forces on each other, but the center of mass of the system of the four circles will not accelerate because there is no external body from which a force could cause an acceleration.
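One way to sanity-check an engine against this argument (a toy sketch with made-up units and Plummer softening, assuming NumPy; it is not the asker's engine):

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.standard_normal((4, 2))  # four equal-mass bodies in a plane
vel = np.tile([1.0, 0.0], (4, 1))  # all drifting +x together
mass = np.ones(4)
G, dt, soft = 1.0, 1e-3, 0.1       # softening avoids close-encounter blow-ups

com_v0 = vel.mean(axis=0).copy()   # equal masses: COM velocity is the mean
for _ in range(1000):
    acc = np.zeros_like(pos)
    for i in range(4):
        for j in range(4):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / (r @ r + soft) ** 1.5
    vel += acc * dt
    pos += vel * dt

# Internal forces cancel pairwise, so the centre-of-mass velocity is unchanged.
assert np.allclose(vel.mean(axis=0), com_v0)
```

If this assertion ever fails in a real engine, the gravity update is injecting momentum and the "train" is a bug rather than physics.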
With that logic, it is entirely possible for the four circles to keep moving in a straight line. However, it is unlikely (though still plausible) for the circles to be lined up as they are because the system would be more stable if the four circles bunched up in a ball as they move in a straight line. | {
"domain": "physics.stackexchange",
"id": 45427,
"tags": "newtonian-mechanics, gravity, newtonian-gravity, perpetual-motion"
} |
What exactly does it mean to embed classical data into a quantum state? | Question: As the title states.
I am a Machine Learning Engineer with a background in physics & engineering (post-secondary degrees). I am reading the Tensorflow Quantum paper. They say the following within the paper:
One key observation that has led to the application of quantum computers to machine learning is their ability to perform fast linear algebra on a state space that grows exponentially with the number of qubits. These quantum accelerated linear-algebra based techniques for machine learning can be considered the first generation of quantum machine learning (QML) algorithms tackling a wide range of applications in both supervised and unsupervised learning, including principal component analysis, support vector machines, k-means clustering, and recommendation systems. These algorithms often admit exponentially faster solutions compared to their classical counterparts on certain types of quantum data. This has led to a significant surge of interest in the subject. However, to apply these algorithms to classical data, the data must first be embedded into quantum states, a process whose scalability is under debate.
What is meant by the sentence "However, to apply these algorithms to classical data, the data must first be embedded into quantum states"?
Are there resources that explain this procedure? Any documentation or links to additional readings would be greatly appreciated as well.
Thanks in advance!
Note: I did look at this previous question for reference. It helped. But if anyone can provide more clarity from a more foundational first principles view (ELI5 almost), I would be appreciative
How do I embed classical data into qubits?
Answer: First, it is instructive to ask oneself: "how does classical data get into my computer?" In a classical computer, your data is always stored in bits. Because calculations in base 2 are not very straightforward for most people, there are abstractions like int types for integers and float types for rational numbers, with the associated math operations readily abstracted for the user -- which means that you can easily add, multiply, divide and so on.
Now, on a quantum computer you run into a fundamental problem: Qubits are really expensive. When I say really expensive, this does not only mean that building a quantum computer costs a fortune, but also that in current applications you only have a handful of them (Google's quantum advantage experiment used a device with 53 qubits) -- which means that you have to economize your use of them. In machine learning applications you usually use single precision floating point numbers, which use 32 bits. This means a single "quantum float" would also need 32 qubits, which means that state of the art quantum computers can't even be used to add two floating point numbers together due to the lack of qubits.
But you can still do useful stuff with qubits, and this is because they have additional degrees of freedom! One particular thing is that you can encode an angle (which is a real parameter) bijectively into a single qubit by putting it into the relative phase
$$
| \theta \rangle = \frac{1}{\sqrt{2}}(|0\rangle + \mathrm{e}^{i\theta} |1\rangle)
$$
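A minimal sketch of this phase encoding as a 2-component state vector (assuming numpy; the helper name `phase_embed` is made up for illustration):

```python
import numpy as np

def phase_embed(theta):
    """Encode a real angle theta into the relative phase of one qubit:
    |theta> = (|0> + e^{i theta} |1>) / sqrt(2)."""
    return np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)

state = phase_embed(np.pi / 3)
norm = np.abs(state[0])**2 + np.abs(state[1])**2   # stays normalized: 1
recovered = np.angle(state[1] / state[0])          # reads back pi/3
```

The encoding is bijective on an interval of length 2π: the angle can be recovered from the relative phase of the two amplitudes.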
And this is the heart of embedding data into quantum states. You simply can't do the same thing you would be doing on a classical computer due to a lack of sufficient qubit numbers and therefore you have to get creative and use the degrees of freedom of qubits to get your data into the quantum computer. To learn more about very basic embeddings, you should have a look at this paper. One particular example I want to highlight is the so-called "amplitude embedding" where you map the entries of a vector $\boldsymbol{x}$ into the different amplitudes of a quantum state
$$
| \boldsymbol{x} \rangle \propto \sum_i x_i | i \rangle
$$
There is no equals sign because the state needs to be normalized, but this is not important for the understanding. The special thing about this particular embedding is that it embeds a vector with $d$ elements into $\log_2 d$ qubits, which is a nice feature in our world where qubits are expensive! | {
"domain": "quantumcomputing.stackexchange",
"id": 2021,
"tags": "quantum-algorithms, quantum-state, programming, experimental-realization, embedding"
} |
What's the difference between spinor and spin? | Question: Some related information might be found here: What is the difference between a spinor and a vector or a tensor? and Wikipedia seemed to have an explanation but was not very clear.
From what I read, in the Dirac equation the spinor seemed to be a way of factorizing the coefficients of the wave function, while the spin was a component of the spinor?
Could you tell me what's the difference between spinor and spin?
Answer: The spin of a particle is its intrinsic angular momentum, a vector quantity unrelated to any actual rotation of the particle. As you learned in quantum mechanics its magnitude is quantized as $\sqrt{s(s+1)}\hbar$ and any of its three components as $m_s\hbar$. Here $s=0,1/2,1,3/2,2,...$ is the principal spin quantum number and $m_s$, which ranges from $+s$ to $-s$ in steps of 1, is the secondary spin quantum number.
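The quantization rules above are easy to tabulate; here is a small sketch (plain Python, working in units of ħ):

```python
from math import sqrt

def spin_magnitude(s):
    """Magnitude of the spin angular momentum, sqrt(s(s+1)), in units of hbar."""
    return sqrt(s * (s + 1))

def m_values(s):
    """Allowed secondary quantum numbers m_s, from +s down to -s in steps of 1."""
    count = int(round(2 * s)) + 1
    return [s - k for k in range(count)]

# Example: a spin-1/2 particle has |S| = sqrt(3)/2 hbar and m_s in {+1/2, -1/2}
mag = spin_magnitude(0.5)
ms = m_values(0.5)
```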
When talking about a particle or field, often the value of $s$ is called “the spin”. But sometimes, if one is focused on, say, the $z$-component of the angular momentum, $m_s$ is called “the spin”; it should really be called “the $z$-component of the spin”, but that gets tedious.
So “the spin” can mean (1) the intrinsic angular momentum; (2) the quantum number $s$ that specifies the magnitude of that angular momentum; (3) a component (usually the $z$-component) of that angular momentum; (4) the quantum number $m_s$ that specifies that component.
In quantum field theory, the quanta of various kinds of fields have various spins. A scalar field is said to be “spin 0” because its quanta have $s=0$. A spinor field is said to be “spin 1/2” because its quanta have $s=1/2$. A vector field is said to be “spin 1” because its quanta have $s=1$. A tensor field with two indices is said to be “spin 2” because its quanta have $s=2$.
In QFT, you can also think about spin more abstractly in terms of how the field transforms under spatial rotations rather than how much intrinsic angular momentum its quanta have. This connection should not be too surprising, because the conservation of angular momentum is related to invariance under rotations.
Under rotations of the coordinate system, the fields have to transform according to a “representation” of the rotation group. The relevant mathematics is the theory of Lie groups (like the rotation group $SO(3)$ and the larger Lorentz and Poincaré groups) and their representations. A representation is a set of linear transformations in an abstract vector space (often a complex one) of arbitrary dimension that compose in the same way that the abstract group elements do. (“In the same way” means their composition is homomorphic to the group composition.)
Scalar, spinor, vector, and tensor fields are different representations of the same rotation group, in different dimensions. Here the dimensions are not physical dimensions like height and width but “field dimensions”. For example, a Dirac spinor has four components. Under rotation, they mix together linearly, similarly to how $x$, $y$, and $z$ spatial coordinates mix together linearly under a rotation. This four-dimensional complex vector space is an abstract representation space, not a geometric space like spacetime. Weyl and Majorana spinors have two components and “live” in a two-dimensional abstract representation space.
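A hallmark of the spin-1/2 representation is that a rotation by 2π multiplies a spinor by −1 (only a 4π rotation is the identity), while a vector returns to itself after 2π. A quick numerical check (assuming numpy; uses the z-rotation $U(\theta) = e^{-i\theta\sigma_z/2}$, which is diagonal):

```python
import numpy as np

def rot_z_spinor(theta):
    # exp(-i theta sigma_z / 2) = diag(e^{-i theta/2}, e^{+i theta/2})
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

U_2pi = rot_z_spinor(2 * np.pi)   # minus the identity: a full turn flips the sign
U_4pi = rot_z_spinor(4 * np.pi)   # the identity: two full turns restore the spinor
```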
The word “spinor” is used to mean several different but closely related things: (1) A particular representation with $s=1/2$; (2) An element in the representation space, i.e., a multi-component field that transforms according to this representation; (3) A particle that is a quantum of such a field.
A fuller discussion of spinors would get into projective representations, covering groups, and other arcana of group theory. Hopefully this intro will get you oriented to the simpler ideas first. | {
"domain": "physics.stackexchange",
"id": 62916,
"tags": "special-relativity, angular-momentum, quantum-spin, definition, spinors"
} |
Phase factors of eigenstates for a time-dependent Hamiltonian | Question: For a time-dependent Hamiltonian, the Schrödinger equation is given by
$$i\hbar\frac{\partial}{\partial t}|\alpha;t\rangle=H(t)|\alpha;t\rangle,$$
where the physical time-dependent state $|\alpha;t\rangle$ is given by
$$|\alpha;t\rangle = \sum\limits_{n}c_{n}(t)e^{i\theta_{n}(t)}|n;t\rangle$$
and
$$\theta_{n}(t)\equiv -\frac{1}{\hbar}\int_{0}^{t}E_{n}(t')dt'.$$
Here $e^{i\theta_{n}(t)}$ is the phase factor that has been pulled out from the eigenstate-expansion coefficients of $|\alpha;t\rangle$.
Why is $\theta_{n}(t)$ given by
$$\theta_{n}(t)\equiv -\frac{1}{\hbar}\int_{0}^{t}E_{n}(t')dt'?$$
Answer: This makes things more convenient, but it is ultimately not necessary.
The choice is motivated by trying to generalize the phase factor $e^{-iE_n t/\hbar}$ from the time-independent theory, and more specifically it is the solution to $i\hbar\, \partial_t e^{i\theta_n(t)} = E_n(t)\, e^{i\theta_n(t)}$. This means, in turn, that it is chosen so that the combination $e^{i\theta_n(t)}|n;t⟩$ obeys the Schrödinger equation - so long as you forget that $i\hbar \partial_t |n;t⟩$ might also be nonzero, or in other words that it obeys the Schrödinger equation if you ignore nonadiabatic effects.
However, it is crucial to realize that the choice carries no physical meaning, since that phase is only one of the factors of the actual coefficient:
$$
\tilde c_n(t) = c_n(t) e^{i\theta_n(t)}.
$$
It is only the full $\tilde c_n(t)$ that gives you the probability amplitude contribution of $|n;t⟩$ to $|\alpha;t⟩$, and those are the physical dynamics. The factorization into a known factor $e^{i\theta_n(t)}$ and a variable $c_n(t)$ is for convenience: it allows us to put in explicitly the dynamics which we have already solved, and therefore it allows us to use the $c_n(t)$ to solve for the remaining unknown dynamics.
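The defining property of $\theta_n(t)$, namely that $e^{i\theta_n(t)}$ solves $i\hbar\,\partial_t e^{i\theta_n(t)} = E_n(t)\,e^{i\theta_n(t)}$, can be verified numerically for a toy $E_n(t)$. A sketch (assuming $\hbar = 1$ and a linear-in-time eigenvalue; not part of the original answer):

```python
import numpy as np

hbar = 1.0
E = lambda t: 1.3 + 0.7 * t                        # toy eigenvalue E_n(t)
theta = lambda t: -(1.3 * t + 0.35 * t**2) / hbar  # -(1/hbar) * integral of E
phase = lambda t: np.exp(1j * theta(t))

t, dt = 0.8, 1e-6
# centered finite difference approximates i*hbar * d/dt of e^{i theta(t)}
lhs = 1j * hbar * (phase(t + dt) - phase(t - dt)) / (2 * dt)
rhs = E(t) * phase(t)                              # lhs matches rhs
```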
It is also important to note that the $c_n(t)$ are not trivial quantities, either: they represent non-adiabatic transitions as well as non-adiabatic phases that act in addition to the $e^{i\theta_n(t)}$. However, defining the $\theta_n(t)$ that way allows us to use the $c_n(t)$ to focus explicitly on these (complicated) nonadiabatic effects, and put the (solved) adiabatic dynamics to bed early. | {
"domain": "physics.stackexchange",
"id": 35448,
"tags": "quantum-mechanics, schroedinger-equation, time-evolution"
} |
Building a controlled NOT gate which is controlled by the qubit it is acting on | Question: I have a question, which I have thought about for a while now, and can't seem to figure out.
I have a quantum circuit and I would like to construct a unitary gate acting on a qubit $q$ that acts as a NOT gate on $q$ if $q = |1\rangle$, and leaves $q$ invariant if $q = |0\rangle$. In other words, I would like the NOT gate to act on $q$ AND be controlled by $q$. I am not sure how to implement this? Is there a sequence of gates that gives this required result?
I tried to implement it by finding a unitary operator so that $|0\rangle$ goes to $|0\rangle$ and $|1\rangle$ also goes to $|0\rangle$ (which is equivalent to the above). I found the matrix that does this, but it is not unitary (and I don't think there is any unitary matrix that does this operation).
I feel like I am missing something obvious here, but can't figure it out. Any help would be appreciated! Thanks!
Answer: The operation you want is impossible, because it is not reversible. It sends both $|1\rangle$ and $|0\rangle$ to $|0\rangle$, so afterwards, if you see $|0\rangle$, you can't tell whether it came from $|1\rangle$ or from $|0\rangle$. So it is not reversible, therefore not unitary, therefore not possible.
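The irreversibility argument can be made concrete with the 2×2 matrix of the desired map (a sketch assuming numpy):

```python
import numpy as np

# The desired operation sends |0> -> |0> and |1> -> |0>,
# i.e. both columns of the matrix are (1, 0):
M = np.array([[1.0, 1.0],
              [0.0, 0.0]])

gram = M.conj().T @ M     # a unitary U would give U^dagger U = I; this does not
det = np.linalg.det(M)    # zero determinant: the map is not even invertible
```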
The operation that is closest to what you want is the Reset operation, which effectively discards the qubit's value by swapping in a fresh $|0\rangle$ qubit. | {
"domain": "quantumcomputing.stackexchange",
"id": 4054,
"tags": "quantum-circuit"
} |
MAC1 simulator/debugger | Question: In constructing an answer to this question, I wrote a small debugger/simulator for the MAC-1 instruction set.
There are a number of simulators already out there, but most are either too old to be compiled and run using a modern version of Java or C or C++, or they are intended to exercise the machine down to the internal register level, which was not of interest for this particular question.
My goals in creating this (other than just to have some fun) were to allow for the simple extension or modification of the instruction set and to have a functional if very simple debugging environment. Those old enough to remember the ddt debugger from CP/M back in the 1970s will notice some similarities.
A non-goal was run-time performance, so in particular, the search mechanism for instructions is a simple linear search. With only a handful of instructions, there seemed little point in optimizing this.
mac1.cpp
#include <iostream>
#include <iomanip>
#include <cstdint>
#include <algorithm>
#include <sstream>
#include <initializer_list>
#include <stdexcept>
constexpr unsigned MEMSIZE = 4096;
uint16_t extract(std::string &str) {
uint16_t word = 0;
try {
word = std::stoul(str, 0, 16);
}
    catch (const std::invalid_argument &) {
}
return word;
}
static std::string hex4(uint16_t x) {
std::stringstream s;
s << std::setfill('0') << std::setw(4) << std::hex << x;
return s.str();
}
struct Regs {
Regs() :
halt{true},
ac{0},
pc{0},
sp{0},
mem{0}
{}
uint16_t &m(uint16_t offset) {
offset &= (MEMSIZE -1);
return mem[offset];
}
const uint16_t &m(uint16_t offset) const {
offset &= (MEMSIZE -1);
return mem[offset];
}
bool halt; // currently halted if true
uint16_t ac; // accumulator
uint16_t pc; // program counter
uint16_t sp; // stack pointer
uint16_t mem[MEMSIZE]; // memory
};
class Instruction {
public:
Instruction(std::string name, std::string bits, std::string desc, void (*action)(Regs &r, uint16_t &x))
: mnemonic_{name},
description_{desc},
action_{action},
          argmask_{0},
          mask_{0},
          pattern_{0}
{
bitsToMask(bits);
}
friend std::ostream &operator<<(std::ostream &out, const Instruction& inst) {
return out
<< inst.mnemonic_ << '\t'
<< inst.maskToBits() << '\t'
<< inst.description_ << '\n';
}
bool match(uint16_t word) const {
return (word & mask_) == pattern_;
}
bool name(std::string &name) const {
return name == mnemonic_;
}
bool exec(Regs &r, uint16_t &x) const {
action_(r, x);
return true;
}
bool needsArg() const {
return argmask_;
}
void list(uint16_t &w) const {
std::cout << mnemonic_;
if (argmask_) {
std::cout << " 0x" << hex4(argmask_ & w);
}
std::cout << '\n';
}
uint16_t assemble(uint16_t &w) const {
return (w & ~mask_) | pattern_;
}
private:
void bitsToMask(const std::string &bits) {
uint16_t maskval = 1u << 15;
for (int ch : bits) {
switch(ch) {
case '1':
pattern_ |= maskval;
// drop through to '0' case
case '0':
mask_ |= maskval;
maskval >>= 1;
break;
case 'x':
maskval >>= 1;
argmask_ = 0xfff;
break;
case 'y':
argmask_ = 0xff;
maskval >>= 1;
break;
case ' ':
break;
default:
std::cout << "Can't understand '" << ch << "' in mask for " << mnemonic_ << "\n";
}
}
}
std::string maskToBits() const {
std::stringstream s;
for (uint16_t maskval = (1u << 15); maskval; maskval >>= 1) {
if (mask_ & maskval) {
s << ((mask_ & pattern_) ? '1' : '0');
} else {
s << 'x';
}
}
s << '\n';
return s.str();
}
std::string mnemonic_;
std::string description_;
void (*action_)(Regs &r, uint16_t &x);
uint16_t argmask_;
uint16_t mask_;
uint16_t pattern_;
};
static const Instruction instructions[]{
Instruction("LODD","0000 xxxx xxxx xxxx","load direct", [](Regs &r, uint16_t &x){r.ac = r.m(x);}),
Instruction("STOD","0001 xxxx xxxx xxxx","store direct", [](Regs &r, uint16_t &x){r.m(x) = r.ac;}),
Instruction("ADDD","0010 xxxx xxxx xxxx","add direct", [](Regs &r, uint16_t &x){r.ac = r.ac + r.m(x);}),
Instruction("SUBD","0011 xxxx xxxx xxxx","subtract direct", [](Regs &r, uint16_t &x){r.ac = r.ac - r.m(x);}),
Instruction("JPOS","0100 xxxx xxxx xxxx","jump positive", [](Regs &r, uint16_t &x){if (static_cast<int16_t>(r.ac) >= 0) r.pc = x;}),
Instruction("JZER","0101 xxxx xxxx xxxx","jump zero", [](Regs &r, uint16_t &x){if (r.ac == 0) r.pc = x;}),
Instruction("JUMP","0110 xxxx xxxx xxxx","jump always", [](Regs &r, uint16_t &x){r.pc = x;}),
Instruction("LOCO","0111 xxxx xxxx xxxx","load constant", [](Regs &r, uint16_t &x){r.ac = x;}),
Instruction("LODL","1000 xxxx xxxx xxxx","load local", [](Regs &r, uint16_t &x){r.ac = r.m(r.sp + x);}),
Instruction("STOL","1001 xxxx xxxx xxxx","store local", [](Regs &r, uint16_t &x){r.m(x + r.sp) = r.ac;}),
Instruction("ADDL","1010 xxxx xxxx xxxx","add local", [](Regs &r, uint16_t &x){r.ac = r.ac + r.m(r.sp + x);}),
Instruction("SUBL","1011 xxxx xxxx xxxx","subtract local", [](Regs &r, uint16_t &x){r.ac = r.ac - r.m(r.sp + x);}),
Instruction("JNEG","1100 xxxx xxxx xxxx","jump negative", [](Regs &r, uint16_t &x){if (static_cast<int16_t>(r.ac) < 0) r.pc = x;}),
Instruction("JNZE","1101 xxxx xxxx xxxx","jump nonzero", [](Regs &r, uint16_t &x){if (r.ac != 0) r.pc = x;}),
Instruction("CALL","1110 xxxx xxxx xxxx","call a procedure",[](Regs &r, uint16_t &x){r.sp = r.sp - 1; r.m(r.sp) = r.pc; r.pc = x;}),
Instruction("PSHI","1111 0000 0000 0000","push indirect", [](Regs &r, uint16_t &){r.sp = r.sp - 1; r.m(r.sp) = r.m(r.ac);}),
Instruction("POPI","1111 0010 0000 0000","pop indirect", [](Regs &r, uint16_t &){r.m(r.ac) = r.m(r.sp++);}),
Instruction("PUSH","1111 0100 0000 0000","push onto stack", [](Regs &r, uint16_t &){r.m(--r.sp) = r.ac;}),
Instruction("POP ","1111 0110 0000 0000","pop from stack", [](Regs &r, uint16_t &){r.ac = r.m(r.sp); r.sp++;}),
Instruction("RETN","1111 1000 0000 0000","return from a procedure",[](Regs &r, uint16_t &){r.pc = r.m(r.sp++);}),
Instruction("SWAP","1111 1010 0000 0000","swap ac and sp", [](Regs &r, uint16_t &){std::swap(r.ac, r.sp);}),
Instruction("INSP","1111 1100 yyyy yyyy","increment sp", [](Regs &r, uint16_t &x){r.sp += (x & 0xff);}),
Instruction("DESP","1111 1110 yyyy yyyy","decrement sp", [](Regs &r, uint16_t &x){r.sp -= (x & 0xff);}),
Instruction("HALT","1111 1111 yyyy yyyy","halt", [](Regs &r, uint16_t &){r.halt = true;}),
};
class Mac1
{
public:
Mac1() :
verbose{true},
regs()
{}
friend std::ostream& operator<<(std::ostream &out, const Mac1& mic) {
return out
<< "PC = 0x" << hex4(mic.regs.pc)
<< " SP = 0x" << hex4(mic.regs.sp)
<< " AC = 0x" << hex4(mic.regs.ac)
<< '\n';
}
void setPC(uint16_t word) {
regs.pc = word;
}
void setAC(uint16_t word) {
regs.ac = word;
}
void setSP(uint16_t word) {
regs.sp = word;
}
void dumpmem(uint16_t loc=0, size_t sz=MEMSIZE, std::ostream& out = std::cout) const {
if ((loc > MEMSIZE) || (loc+sz > MEMSIZE))
return;
for (size_t i = loc; i < sz+loc; ++i) {
if (i%8 == 0) {
out << "\n" << hex4(i) << ": ";
}
out << hex4(regs.m(i)) << ' ';
}
out << '\n';
}
const Instruction *match(uint16_t word) const {
for (const auto &inst : instructions) {
if (inst.match(word)) {
return &inst;
}
}
return nullptr;
}
void step() {
uint16_t word = regs.m(regs.pc++);
uint16_t x = word & 0xfff;
const Instruction *inst = match(word);
if (inst != nullptr) {
inst->exec(regs, x);
if (verbose) {
inst->list(word);
}
return;
}
// no instruction found, so halt
regs.halt = true;
}
void run() {
bool oldverbose = verbose;
verbose = false;
for (regs.halt = false; !regs.halt; step())
{}
verbose = oldverbose;
}
void list(uint16_t ptr, unsigned n) {
for ( ; n; --n, ++ptr) {
std::cout << hex4(ptr) << ": ";
uint16_t word = regs.m(ptr);
const Instruction *inst = match(word);
if (inst != nullptr) {
inst->list(word);
} else {
std::cout << "??? 0x" << hex4(word);
}
}
}
void modifymem(uint16_t ptr) {
std::cout << "Modifying memory starting at address 0x" << hex4(ptr) << ". Enter `q` to quit\n";
std::string cmd;
while (std::cin >> cmd && cmd[0] != 'q') {
regs.m(ptr++) = extract(cmd);
}
}
void assemble(uint16_t ptr) {
std::string cmd;
std::string arg;
std::cout << "Assembling starting at address 0x" << hex4(ptr) << ". Enter `q` to quit\n";
uint16_t word;
while (std::cin >> cmd && cmd.length() > 1) {
if (cmd[0] == 'q') { // quit
return;
}
for (const auto &inst : instructions) {
if (inst.name(cmd)) {
if (inst.needsArg()) {
std::cin >> arg;
word = extract(arg);
} else {
word = 0u;
}
regs.m(ptr) = inst.assemble(word);
std::cout << hex4(ptr) << ": ";
inst.list(word);
++ptr;
}
}
}
}
void load(uint16_t loc, size_t sz, uint16_t *data) {
if ((loc > MEMSIZE) || (data == nullptr) || (loc+sz > MEMSIZE))
return;
        std::copy(data, &data[sz], &regs.m(loc));
}
private:
bool verbose;
Regs regs;
};
void help() {
std::cout << "Commands: [h]elp, [?], [a]ssemble, [d]ump, [g]o, [l]ist, [m]emory, [r]egisters, [s]tep, [q]uit\n"
"rpc XXXX, rsp XXXX, rac XXXX, dXXXX, lXXXX\n";
}
int main()
{
Mac1 mac1;
uint16_t word;
help();
std::cout << "> ";
std::string command;
while (std::cin >> command) {
switch(std::tolower(command[0])) {
case 'h':
case '?':
help();
break;
case 'a': // assemble
command = command.substr(1);
word = extract(command);
mac1.assemble(word);
break;
case 'g': // go
mac1.run();
break;
case 'd': // dump memory
command = command.substr(1);
word = extract(command);
mac1.dumpmem(word, 0x20, std::cout);
break;
case 'l': // list
command = command.substr(1);
word = extract(command);
mac1.list(word, 16);
break;
case 'm': // modify memory
command = command.substr(1);
word = extract(command);
mac1.modifymem(word);
break;
case 's':
mac1.step();
// fall through to 'r'
case 'r':
if (command == "rpc") {
std::string arg;
std::cin >> arg;
word = extract(arg);
mac1.setPC(word);
} else if (command == "rac") {
std::string arg;
std::cin >> arg;
word = extract(arg);
mac1.setAC(word);
} else if (command == "rsp") {
std::string arg;
std::cin >> arg;
word = extract(arg);
mac1.setSP(word);
}
std::cout << mac1;
break;
case 'q':
return 0;
default:
std::cout << "Sorry, I don't know '" << command << "'\n";
}
std::cout << "> ";
}
}
This is all compiled with g++ as:
g++ -Wall -Wextra -pedantic -std=c++14 mac1.cpp -o mac1
A sample test script is this:
test.mac
m 0 3 50 109 q
a50 LODD 1 PUSH LODD 2 PUSH LODD 3 PUSH CALL 100 INSP 3 STOD 0 HALT FF q
a100 LODL 1 SUBL 2 JNEG 105 LODL 2 STOL 1 LODL 1 SUBL 3 JNEG 10A LODL 3 RETN LODL 1 RETN q
rpc 50
rsp 20
g
d
r
q
This puts three numbers, 0x3, 0x50 and 0x109 into locations 1, 2 and 3 in memory. It then assembles a short program at location 0x50 which calls a subroutine at location 0x100 which finds the smallest of the three passed numbers and returns it in the ac register. The script runs this short program, displays the first few bytes of memory (memory location 0 will contain the smallest of the three numbers), displays the register contents and then quits.
The resulting output should look like this:
Commands: [h]elp, [?], [a]ssemble, [d]ump, [g]o, [l]ist, [m]emory, [r]egisters, [s]tep, [q]uit
rpc XXXX, rsp XXXX, rac XXXX, dXXXX, lXXXX
> Modifying memory starting at address 0x0000. Enter `q` to quit
> Assembling starting at address 0x0050. Enter `q` to quit
0050: LODD 0x0001
0051: PUSH
0052: LODD 0x0002
0053: PUSH
0054: LODD 0x0003
0055: PUSH
0056: CALL 0x0100
0057: INSP 0x0003
0058: STOD 0x0000
0059: HALT 0x00ff
> Assembling starting at address 0x0100. Enter `q` to quit
0100: LODL 0x0001
0101: SUBL 0x0002
0102: JNEG 0x0105
0103: LODL 0x0002
0104: STOL 0x0001
0105: LODL 0x0001
0106: SUBL 0x0003
0107: JNEG 0x010a
0108: LODL 0x0003
0109: RETN
010a: LODL 0x0001
010b: RETN
> PC = 0x0050 SP = 0x0000 AC = 0x0000
> PC = 0x0050 SP = 0x0020 AC = 0x0000
> >
0000: 0003 0003 0050 0109 0000 0000 0000 0000
0008: 0000 0000 0000 0000 0000 0000 0000 0000
0010: 0000 0000 0000 0000 0000 0000 0000 0000
0018: 0000 0000 0000 0000 0057 0050 0050 0003
> PC = 0x005a SP = 0x0020 AC = 0x0003
>
Note that commands are not echoed, so this only shows the program's responses. Comments welcome.
Here's a live link where you can try the program.
Answer:
Memory is register?
The schematics (see the first link) clearly indicate that the memory is not a part of the controller. I'd rather dedicate the private address/data/access-mode registers to the CPU, and let the RAM be simulated on its own.
The schematics also don't specify the address bus width, and/or the semantics of misaligned access. The simulation seems to assume the host behaviour; comments are welcome.
static_cast
Is it warranted? r.ac & (1 <<(mem_width - 1)) seems to convey the same semantics in a more explicit manner.
Inconsistency
The actions for the CALL and PSHI instructions explicitly decrement with r.sp = r.sp - 1;, whereas RETN, PUSH, POP and POPI use the ++/-- operators. Not that it affects the behaviour, but it surely draws attention. | {
"domain": "codereview.stackexchange",
"id": 17633,
"tags": "c++, simulation, c++14"
} |
Viruses: Adaptation to a new host through repeated host jumps | Question: A friend told me, during a 3 minute discussion, that viruses that are endemic in host $A$ and make repeated jumps to host $B$ but can't be transmitted between individuals of species $B$, may slowly adapt (through these repeated jumps) to be able to be transmitted between individuals of host $B$ and become epidemic.
I don't know much about epidemiology. I don't understand how a virus population that is endemic to host $A$ may adapt to host $B$ through repeated jumps, while the viruses that jump to host $B$ are a dead end because they cannot be transmitted any further. Or are these viruses able to jump back to host $A$ to bring back their newly acquired adaptations to host $B$?
Also, I might misunderstand the meaning of "repeated host jumps transmission". I first thought it meant repeated jumps from a reservoir population in host $A$ to host $B$, but it is also possible that it describes the dynamics of a virus population that is adapted to jump from species to species, gaining this ability by jumping again and again. But then, how could a virus species become adapted to jumping from species to species? I'm a bit confused…. Can you give me some hints about this process of cross-species transmission through repeated jumps?
Answer:
A friend told me, during a 3 minute discussion, that viruses that are
endemic in host A and make repeated jumps to host B but can't be
transmitted between individuals of species B, may slowly adapt
(through these repeated jumps) to be able to be transmitted between
individuals of host B and become epidemic.
This is...mostly true. A good example is avian influenza - there are a number of human infections linked to contact with infected poultry, but the virus is not particularly good at sustaining long chains of human-to-human transmission, likely due to differences in the respiratory tract between humans and birds.
I don't know much about epidemiology. I don't understand how a virus
population that is endemic to host A may adapt to host B with repeated
jumps while the viruses that jump to host B are dead end because they
cannot be transmitted further more. Or Are these viruses able to jump
back to host A to bring back their newly acquired adaptations to host
B?
What they mean by repeated jumps (or what they should mean) is not a virus hopping back and forth, but a virus going from A to B many times. This is my problem with your friend's statement: it's not so much a "slow adaptation" as, well, random chance.
Imagine that there's a mutation in Virus A, which lives in chickens, that will make it well adapted to live in human beings. It's an extremely rare mutation, so the dominant strain of Virus A is maladapted for human to human transmission. That means most Virus A infections in humans will be the non-productive kind, that will infect a person, but not go on to infect other people. But, with enough rolls of the dice, a human might get the human-favored Virus A, and since there's now intense selective pressure for that strain of the virus within its new human host, it might establish itself and take off. | {
"domain": "biology.stackexchange",
"id": 2175,
"tags": "evolution, virology, virus, medicine, epidemiology"
} |
Normal forces while external forces are also present | Question: Frictional force ($f$) is calculated as the product of the coefficient of friction and the normal force: $$f=\mu N$$
If an external force $F_{ext}$ is applied to a block (at rest) at an angle $\theta$ with the horizontal, then would $N = mg$ (the weight of the block) or $N = mg + F_{ext}\sin\theta$? (I want to know the effect on friction due to the change in $N$.)
Answer: Sum of forces in the $y$-direction $=$
$$F\sin \theta + N - mg$$
(using the standard Cartesian axes). Although you haven't stated it explicitly, I will assume that the block has no acceleration along the $y$-axis (if it leaves the surface, then $N = 0$). Hence
$$F\sin \theta + N - mg = 0$$
So $N = mg - F\sin\theta$ when the applied force points above the horizontal; if it points below the horizontal, the sign flips and $N = mg + F\sin\theta$. Since $f = \mu N$, the available friction decreases in the first case and increases in the second. | {
"domain": "physics.stackexchange",
"id": 72761,
"tags": "newtonian-mechanics, forces, friction, free-body-diagram"
} |
GPS track angle (Course over ground) | Question:
I have a question about the track angle. I use gpsd_client (for CTurtle), and the fix topic (gps_common/GPSFix), besides latitude, longitude, altitude, speed, and DOP factors, also gives the track angle, whose definition in the fix topic is, in short, Direction in degrees from north. So I searched and found that this angle is the direction of travel relative to the ground (the same as Course Over Ground). In ground robots we can assume that the track equals the heading, which in marine vessels must be corrected for wind and sea current (http://en.wikipedia.org/wiki/Course_%28navigation%29).
One definition of track angle is at:
http://www.ehow.com/about_5584477_gps-track-angle-definition.html
So my question is: how is the track angle measured? The compass convention measures this angle from true north in the clockwise direction, so North is 0° and East is 90°. But I think ROS uses the opposite: 0° is North and 90° is West (counter-clockwise direction).
I read that all coordinate systems in ROS are right-handed (and that the counter-clockwise direction is the positive direction).
If someone uses the track angle and knows in which direction from North it is measured (CCW (left) or CW (right)), please share that knowledge with me and the other ROS users on the forum.
What I think is that the track angle is measured from North (0°) toward West (90°) in the counter-clockwise direction. (My GPS receiver is the LS20030 from Locosys.)
Originally posted by Jurica on ROS Answers with karma: 97 on 2011-05-16
Post score: 1
Answer:
So, the two major conventions for GPS are NED (North-East-Down) and ENU (East-North-Up). Both of these are right-handed. NED is commonly used in aerial vehicles, where everything of interest is below, and ENU is commonly used in ground-based vehicles, where altitude is of interest.
Heading is represented in the NED coordinate system. This means that your heading, Phi, is 0 in the North direction, and is positive in the East direction, the same as a compass heading.
ROS works better following the ENU convention, as it more closely matches the body coordinate frame (X-forward,Y-left,Z-up).
Basically, you probably want to choose a convention for your work, and rotate the course-over-ground heading into your chosen coordinate frame. Looking at the datasheet for your receiver, it looks like course over ground is given as a compass heading (NED) from True north or Magnetic north, whichever you choose to use.
To get this into ENU coordinates, you need to rotate out of the NED convention. This can be accomplished using quaternions, rotation matrices, etc.
Assuming that your heading is represented in Radians, with North = 0, Pi to -Pi
Subtract pi/2 from your heading (phi -= pi/2)
Negate your heading (phi = -phi)
Check the wrap on your heading (re-wrap to Pi to -Pi)
I'm not sure if a checkwrap function exists in ROS's geometry libraries, but you can get an example from Our EKF Functions, where we had to deal with the same problem.
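The three steps above can be packaged as a small helper (hypothetical function name, a sketch; note the transform is its own inverse, so the same function maps a compass/NED heading to an ENU angle and back):

```python
import math

def compass_to_enu(phi):
    """Convert a heading in radians (0 = North, positive clockwise toward East)
    to an ENU math angle (0 = East, positive counter-clockwise),
    re-wrapped to the interval (-pi, pi]."""
    phi = phi - math.pi / 2      # 1. subtract pi/2
    phi = -phi                   # 2. negate
    while phi > math.pi:         # 3. check the wrap
        phi -= 2 * math.pi
    while phi <= -math.pi:
        phi += 2 * math.pi
    return phi

# North (0) -> pi/2, East (pi/2) -> 0, South (pi) -> -pi/2
```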
Originally posted by mjcarroll with karma: 6414 on 2011-05-16
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by mjcarroll on 2011-05-18:
We just did exactly that, but without the GPS heading. Here is our MATLAB implementation: https://github.com/mjcarroll/mech7710-final
Comment by Jurica on 2011-05-17:
So if I want the true heading of the robot I must add track + 90° (or -(270° - track), which gives the same result), and I finally have the odometry frame at the right angle relative to the global GPS (UTM) frame. (I plot odometry in UTM and GPS x,y in UTM and they match, so I think that is good.)
Comment by Jurica on 2011-05-17:
I think I will use the ENU system, so the track angle which the GPS gives (fix topic) is the angle showing how far the y-axis is rotated from north in the right (clockwise) direction. The robot (P3AT) heading is 0° and it turns about the x-axis (what ROS gives is a negative angular velocity, so it turns in the clockwise direction).
Comment by Jurica on 2011-05-17:
I plan to use the GPS track angle for fusion with the odometry heading, so I use odometry + GPS in an EKF, and I plan to fuse measurements from the GPS like x, y (in the UTM system), the GPS track angle, and the speed which the GPS gives (SOG). First a realization in MATLAB, and then I hope in ROS.
Comment by mjcarroll on 2011-05-17:
We recently ran into the same issue on our project. I would say that ENU seems to work "better" with the ROS body-frame coordinate conventions.
Comment by Jurica on 2011-05-16:
Thanks for your answer. Yes, it seems that heading is in the NED convention, so I must choose which one I will use and then convert one coordinate system to the other.
"domain": "robotics.stackexchange",
"id": 5582,
"tags": "ros, gpsd-client"
} |
What happens if an object has more kinetic energy than the Gravitational Binding Energy? | Question: So the binding energy of an object is the amount of energy needed to move it an infinite distance away from another mass to essentially “escape” its gravitational field. But what happens if you give the object more energy than the binding energy. Will the object go “past infinity”?
Answer: I'd just like to slightly refine John and Mike's answers. If the initial kinetic energy equals or exceeds the binding energy, then the particle speed asymptotically approaches a constant, never stopping and "turning around". Of course, in the very special case that the KE equals the binding energy, that constant speed is precisely zero.
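The energy bookkeeping here is easy to make concrete. A minimal Newtonian sketch (the function name and the constants are illustrative, not from the question):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def asymptotic_speed(v0, M, r):
    """Speed the object approaches as it recedes to infinity.

    Energy conservation: 0.5*v0**2 - G*M/r = 0.5*v_inf**2.
    Returns None if the object is bound (KE below the binding energy).
    """
    specific_energy = 0.5 * v0**2 - G * M / r
    if specific_energy < 0:
        return None  # bound: the object eventually falls back
    return math.sqrt(2 * specific_energy)
```

Launching at exactly the escape speed sqrt(2*G*M/r) gives an asymptotic speed of zero, matching the "very special case" above; any surplus kinetic energy survives as a nonzero limiting speed.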
Note that though the particle never stops moving, it's always, always a finite distance from the other mass. Not only can it not go "past" infinity, it can't ever get there! | {
"domain": "physics.stackexchange",
"id": 3850,
"tags": "gravity, binding-energy"
} |
Are my computations of the forward and backward pass of a neural network with one input, hidden and output neurons correct? | Question: I have computed the forward and backward passes of the following simple neural network, with one input, hidden, and output neurons.
Here are my computations of the forward pass.
\begin{align}
net_1 &= xw_{1}+b \\
h &= \sigma (net_1) \\
net_2 &= hw_{2}+b \\
{y}' &= \sigma (net_2),
\end{align}
where $\sigma(x) = \frac{1}{1 + e^{-x}}$ (sigmoid) and $L=\frac{1}{2}\sum(y-{y}')^{2}$
Here are my computations of backpropagation.
\begin{align}
\frac{\partial L}{\partial w_{2}}
&=\frac{\partial net_2}{\partial w_2}\frac{\partial {y}' }{\partial net_2}\frac{\partial L }{\partial {y}'} \\
\frac{\partial L}{\partial w_{1}} &=
\frac{\partial net_1}{\partial w_{1}} \frac{\partial h}{\partial net_1}\frac{\partial net_2}{\partial h}\frac{\partial {y}' }{\partial net_2}\frac{\partial L }{\partial {y}'}
\end{align}
where
\begin{align}
\frac{\partial L }{\partial {y}'}
& =\frac{\partial (\frac{1}{2}\sum(y-{y}')^{2})}{\partial {y}'}=({y}'-y) \\
\frac{\partial {y}' }{\partial net_2}
&={y}'(1-{y}')\\
\frac{\partial net_2}{\partial w_2}
&= \frac{\partial(hw_{2}+b) }{\partial w_2}=h \\
\frac{\partial net_2}{\partial h}
&=\frac{\partial (hw_{2}+b) }{\partial h}=w_2 \\
\frac{\partial h}{\partial net_1}
& =h(1-h) \\
\frac{\partial net_1}{\partial w_{1}}
&= \frac{\partial(xw_{1}+b) }{\partial w_1}=x
\end{align}
The gradients can be written as
\begin{align}
\frac{\partial L }{\partial w_2 } &=h\times {y}'(1-{y}')\times ({y}'-y) \\
\frac{\partial L}{\partial w_{1}}
&=x\times h(1-h)\times w_2 \times {y}'(1-{y}')\times ({y}'-y)
\end{align}
The weight update is
\begin{align}
w_{i}^{t+1} \leftarrow w_{i}^{t}-\alpha \frac{\partial L}{\partial w_{i}}
\end{align}
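The final gradient expressions can be sanity-checked with a central-difference numerical gradient; here is a minimal sketch (separate biases `b1`, `b2` are used for generality, and all names are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, b1, w2, b2):
    h = sigmoid(x * w1 + b1)       # hidden activation
    y_pred = sigmoid(h * w2 + b2)  # output activation
    return h, y_pred

def loss(x, y, w1, b1, w2, b2):
    _, y_pred = forward(x, w1, b1, w2, b2)
    return 0.5 * (y - y_pred) ** 2

def analytic_grads(x, y, w1, b1, w2, b2):
    h, y_pred = forward(x, w1, b1, w2, b2)
    delta2 = (y_pred - y) * y_pred * (1 - y_pred)  # dL/dnet2
    delta1 = delta2 * w2 * h * (1 - h)             # dL/dnet1
    return x * delta1, h * delta2                  # dL/dw1, dL/dw2

def numeric_grad(f, p, eps=1e-6):
    """Central-difference estimate of df/dp."""
    return (f(p + eps) - f(p - eps)) / (2 * eps)
```

If the chain-rule expressions are right, the analytic and numeric gradients should agree to several decimal places for any parameter values.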
Are my computations correct?
Answer: One important point I missed in my first review: the error is a summation, so its derivative is also a summation.
About the offsets (biases) $b$: usually they are different in each cell (unless fixed to some value, such as 0). Thus, replace them with $b_1$ and $b_2$. Moreover, they should be optimized in the same way as the weights. | {
"domain": "ai.stackexchange",
"id": 466,
"tags": "neural-networks, backpropagation, calculus, forward-pass"
} |
Why is lead sulfide found in nature, whereas lead oxide is less common? | Question: This question probably applies to other heavier metals as well. The only rationalization I can figure out is that lead in the +2 oxidation state (which is most common) is a borderline soft acid and prefers to take electrons from a soft sulfur instead of trying to react with a hard oxygen.
Is this the right direction? Thank you in advance.
Answer: Lead(II) and sulfide ions are large and polarizable, making them soft in the context of hard/soft acid/base theory (HSAB), which states that ions with similar hardness will form stronger bonds. The oxide ion is hard and therefore has a tendency to form weaker interactions with Pb(II).
HSAB theory has been discussed on Chem.SE in this question and this answer. | {
"domain": "chemistry.stackexchange",
"id": 629,
"tags": "inorganic-chemistry, transition-metals, hsab"
} |
Is there a simple method to send goal for frontier_exploration? | Question:
Hello,
I am using frontier_exploration on ROS Hydro, Ubuntu 12.04, with a Pioneer P3-DX to explore. I can successfully send exploration goals by clicking the publish point tool and drawing a polygon point by point, but it's really troublesome to click the points to make the polygon for the next goal. Is there a simpler method where I can just draw a rectangle? I do not understand how to do that in RViz. Can someone help? Thanks
Alex
Originally posted by AlexR on ROS Answers with karma: 654 on 2014-12-03
Post score: 1
Original comments
Comment by syafiqsalam on 2015-04-19:
Can you share how to use frontier_exploration?
Answer:
Honestly, short of writing an RViz plugin, the 'Publish Point' tool is the easiest thing to do. If you open the 'Tool Properties' panel, you can set the PP tool to not disable after each click, which would make it less frustrating.
Originally posted by paulbovbel with karma: 4518 on 2015-05-14
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 20237,
"tags": "ros, frontier-exploration, ros-hydro"
} |
Yang-Mills Hamiltonian: why do we use the Weyl (temporal) gauge? | Question: Do you know why, in the quantization of $SU(2)$ Yang-Mills gauge theory, the Weyl (temporal) gauge is always chosen to derive the Hamiltonian?
Is it possible to fix another gauge?
Answer: Formally, the gauge-invariant observables do not depend on the choice of gauge-fixing condition (such as, e.g., Lorenz gauge, Coulomb gauge, axial gauge, temporal gauge, etc). Similarly, the Hamiltonian can formally be gauge-fixed in any gauge.
However, it is my understanding that to avoid the Gribov problem, an algebraic (rather than a differential) gauge-fixing condition is preferred. See also the footnote on p. 15 in S. Weinberg, Quantum Theory of Fields, Vol 2. | {
"domain": "physics.stackexchange",
"id": 5340,
"tags": "quantum-field-theory, gauge-theory, hamiltonian, yang-mills, gauge"
} |
Bash script to reconfigure the screen orientation when entering or exiting tablet mode | Question: I'm writing a bash script for my tablet laptop to flip the screen depending on a file in Manjaro Linux. My current script is the following:
while true
do
state=$(cat /sys/devices/platform/thinkpad_acpi/hotkey_tablet_mode) #This file indicates if it's in tablet mode or not
if [[ $state == 0 && $oldState == 1 ]]
then
xrandr --output LVDS1 --rotate normal
xsetwacom --set "$a" Rotate none
xsetwacom --set "$b" Rotate none
xsetwacom --set "$c" Rotate none
oldState=0
elif [[ $state == 1 && $oldState == 0 ]]
then
xrandr --output LVDS1 --rotate inverted
xsetwacom --set "$a" Rotate half
xsetwacom --set "$b" Rotate half
xsetwacom --set "$c" Rotate half
oldState=1
/home/eto/scripts/backlight 0
fi
sleep 1s
done
The code in the if statements doesn't matter; what I'm worried about is the overall logic of how it checks the tablet mode. Is there a better way to check the tablet-mode indicator file than a while loop? I feel like this is a very inefficient approach, as it reads the disk quite frequently, but I don't see how else I could do it. Any input is appreciated as I'm very much an amateur with coding.
Answer: Good
I like most of what you've done so far as is:
Using [[ (double square brackets) for conditionals is a good practice.
Using $() for command substitution instead of the classic backticks is also a good modern shell practice.
Most of your variable substitutions are quoted. This is a good habit in case the variable contains spaces it won't get broken up by shell parsing.
Could be better
There are some minor things I'd improve:
include the #! line at the top
typically you see folks add their thens to the end of the previous line like if [[ cond ]]; then. Your outer loop could also be done as while true; do.
the comment gets sort of lost out on the right. Why not put it on the line above?
Think about
The script is getting invoked every second and chewing up some amount of CPU. There's probably not much CPU being burned in this case, but one thing to consider is whether this sort of polling could be avoided. Linux includes inotify as a mechanism to let processes know when a file has changed, so they don't have to constantly check the file themselves. This is available in the shell through inotifywait. If the file you're checking works with inotifywait, you could avoid the sleep.
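A sketch of what that could look like, assuming `inotifywait` (from the inotify-tools package) is installed; `apply_state` is an illustrative stand-in for your xrandr/xsetwacom calls:

```shell
#!/usr/bin/env bash
MODE_FILE=/sys/devices/platform/thinkpad_acpi/hotkey_tablet_mode

# Map the 0/1 file contents to a rotation keyword for xrandr.
apply_state() {
    case "$1" in
        0) echo normal ;;
        1) echo inverted ;;
        *) echo unknown ;;
    esac
}

# Guard so the loop only starts where the sysfs file actually exists.
if [[ -e $MODE_FILE ]]; then
    while inotifywait -qq -e modify "$MODE_FILE"; do
        xrandr --output LVDS1 --rotate "$(apply_state "$(cat "$MODE_FILE")")"
    done
fi
```

One caveat: inotify support for sysfs files varies by kernel, so if inotifywait never fires on this path, the polling loop (perhaps with a longer sleep) remains the fallback.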
A bonus you get beyond conserving CPU from skipping sleep is that the script should run with lower latency. Instead of needing to wait for up to a second to detect the change your script would be invoked within a small fraction of a second. | {
"domain": "codereview.stackexchange",
"id": 36277,
"tags": "beginner, bash, linux"
} |