| anchor | positive | source |
|---|---|---|
When does voltage drop occur? | Question: Why or when does it occur in a circuit? What does it imply when you speak of a voltage drop across a resistor? (Obviously, it probably means that the current's voltage before the resistor is higher than the voltage after the resistor, but why does this drop occur?)
Answer: Voltage is the unit of electric potential, the electric potential difference (in your case, the potential difference between the two ends of resistor in a circuit) can be called the voltage drop.
The potential difference produces an electric field $\vec{E}$, and the direction of $\vec{E}$ points from high potential to low potential. The electric field applies a force on charged particles (i.e. electrons in circuits) such that the electrons are driven by this force and move, thereby producing a current. So you can see the potential (voltage) difference is the reason why there is a current. By the way, you cannot say the "current's voltage", since the current is defined as $I = dQ/dt$. That is, it only describes the flow of charge per unit time.
When electrons move through a resistor they are scattered by other electrons and nuclei, causing the electrons to lose some of their kinetic energy. But the presence of the electric field then accelerates the electrons again. We can calculate the average kinetic energy statistically, and assume the electrons move at a single average velocity.
Thus after each collision there is a loss of kinetic energy (it is converted to heat), which is recovered through the work done by the electric field. And this work is equal to the potential energy difference. You can see that the electrons have the same kinetic energies both when they enter and when they leave the resistor, but different potential energies. So we can say the voltage drop across the two ends of a resistor is caused by the potential energy difference. | {
"domain": "physics.stackexchange",
"id": 1636,
"tags": "electromagnetism, electricity, voltage"
} |
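The collision-and-reacceleration picture in the answer above is essentially the Drude model; a minimal numeric sketch (my own illustration — the relaxation time and field value below are assumed, order-of-magnitude numbers, not from the original post):

```python
# Drude-model sketch of the answer's picture: the field accelerates
# electrons between collisions, giving a steady drift velocity
# v_d = e * E * tau / m. The relaxation time tau and field E are
# assumed, typical order-of-magnitude values for a copper wire.
e = 1.602e-19        # electron charge, C
m = 9.109e-31        # electron mass, kg
tau = 2.5e-14        # assumed mean time between collisions, s
E = 0.1              # assumed electric field inside the wire, V/m

v_drift = e * E * tau / m
n = 8.5e28           # conduction-electron density of copper, 1/m^3
J = n * e * v_drift  # resulting current density, A/m^2
print(v_drift, J)    # drift speed comes out tiny (sub-mm/s scale)
```

The point of the numbers: even though the field drives a macroscopic current density, the average drift velocity of the electrons is minuscule compared with their thermal speeds.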
Color filters that subtract just one color? | Question: Below is how your normal color filters work:
They let only a single color/wavelength pass. But if this is used, say, in photography, everything would just look very blue with a blue filter, like below:
I wanted to ask if any filters exist that Subtract just one color/wavelength, and let all others pass. So a blue "subtractive" filter, let's say, will block off only blue and let all others pass. This is cool because this would preserve the original color constitution of the object, rather than just making everything look like the color of the filter (as we saw above).
So, is this possible? Do such "Subtractive" filters exist? If so, please link.
Answer: The type of filter you describe is called a notch filter. They generally work using interferometry i.e. they are designed to produce destructive interference at a single wavelength (or small band of wavelengths). The result is their transmission spectrum has a notch in it at the required wavelength, hence the name.
A quick Google found these examples of notch filter spectra (taken from this document): | {
"domain": "physics.stackexchange",
"id": 41776,
"tags": "visible-light, waves, electromagnetic-radiation"
} |
How to predict a class for a file given a small number of files for training? | Question: I have been given the following data:
20 example CSV files, each labeled as belonging to one of six fixed classes, say A, B, C, D, E, F.
Each file has roughly 20000 rows and 10 floating point columns.
Within each file, the values seem pretty noisy, but the relationships between pairs of columns seem pretty linear (but noisy).
I have not been given any domain knowledge related to the content of the files, except A) the files are likely experimental measurements, and B) that the order of the records should not have any effect on the classifier; i.e. classification would be invariant under permuting rows of files.
I have been asked to see if there is a useful way to predict (with, for example, accuracy > 0.8) a class label for a previously unseen file.
At first I thought it was going to be a no-brainer, given the total number of records over all the files.
But as I got into it, it seemed more and more like I really had only 20 training examples, and it really felt as if I were exhausting them pretty quickly and data dredging.
It feels difficult.
I am wondering if there is a standard approach in a situation like this.
Thanks for any help!
Answer: Given that most classes have fewer than four samples, it is not useful to do a train/test split, even though such a split is normally the most useful way to assess generalization.
One option could be to craft rules by hand. Explore the data and manually construct rule-based logic for each of the classes. | {
"domain": "datascience.stackexchange",
"id": 10931,
"tags": "classification, multiclass-classification"
} |
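Since the question requires row-permutation invariance, one natural baseline is to summarize each file with order-independent statistics before classifying. This is a hypothetical sketch of that idea (not from the original answer — the nearest-neighbour rule and the toy data are illustrative assumptions):

```python
# Hypothetical sketch: permutation-invariant per-file features plus a
# 1-nearest-neighbour classifier, evaluated with leave-one-out CV,
# which suits a setting with only ~20 training files.
import numpy as np

def file_features(rows):
    """Summarize a (n_rows, n_cols) array by statistics that are
    invariant under row permutation: per-column means/stds and the
    upper triangle of the column correlation matrix."""
    means = rows.mean(axis=0)
    stds = rows.std(axis=0)
    corr = np.corrcoef(rows, rowvar=False)
    iu = np.triu_indices_from(corr, k=1)
    return np.concatenate([means, stds, corr[iu]])

def loo_accuracy(X, y):
    """Leave-one-out CV with a 1-nearest-neighbour rule."""
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                    # exclude the held-out file
        correct += y[np.argmin(d)] == y[i]
    return correct / len(X)

# Toy demo: two classes whose columns differ only in their correlation,
# mimicking the "pairwise linear but noisy" structure described above.
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(10):
        a = rng.normal(size=2000)
        b = (0.9 if label else -0.9) * a + 0.1 * rng.normal(size=2000)
        X.append(file_features(np.column_stack([a, b])))
        y.append(label)
X, y = np.array(X), np.array(y)
print(loo_accuracy(X, y))
```

Because the per-file features collapse 20000 rows into a handful of numbers, the effective sample size really is 20, matching the questioner's worry.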
Why doesn't $x$ reach a constant for a block experiencing $v^n$ resistive force? | Question: I am stuck on the Exercise 3.5 of Newtonian Dynamics by R. Fitzpatrick:
A block of mass $m$ slides along a horizontal surface which is lubricated with heavy oil such that the block suffers a viscous retarding force of the form
$$F = - c\,v^n,$$
where $c>0$ is a constant, and $v$ is the block's instantaneous velocity. If the initial speed is $v_0 $ at time $t=0$, find $v$ and the displacement $x$ as functions of time $t$. Also find $v$ as a function of $x$. Show that for $n=1/2$ the block does not travel further than $2\,m\,v_0^{3/2}/(3\,c)$.
The last part of the question asks to show that for $n=1/2$ the block does not travel further than $2mv_0^{3/2}/(3c)$.
We start from Newton's second law
$$ m \frac{d^2x}{dt^2} = m \frac{dv}{dt} = m v \frac{dv}{dx}= -cv^n. $$
Separating variables gives
$$ \int_{v_0}^{v} \frac{dv'}{(v')^{n-1}} = -\frac{c}{m} \int_0^x dx', $$
$$ v^{-n+2} = v_0^{-n+2} - \frac{(-n+2)cx}{m}. $$
Plugging $n=1/2$,
$$ v^{3/2} = v_0^{3/2} - \frac{3cx}{2m}. $$
Setting the velocity to zero (this must be the case if the block stops moving),
$$ x =\frac{2m v_0^{3/2}}{3c}, $$
which is the desired result.
The problem arises when I try to solve for $x$ in terms of $t$. Now,
$$ m \frac{dv}{dt} = -cv^n, $$
$$ \int_{v_0}^{v} \frac{dv'}{(v')^n} = -\int_0^t \frac{c}{m} dt', $$
$$ \frac{1}{v^{n-1}} = \frac{1}{v_0^{n-1}} - \frac{(-n+1)c}{m} t. $$
Raising everything to the $1/(1-n)$ power (of course, assuming that $n \ne 1$),
$$ v = \left( \frac{1}{v_0^{n-1}} - \frac{(-n+1)c}{m} t \right)^\frac{1}{1-n}.$$
Plugging $n=1/2$ gives:
$$ \frac{dx}{dt} = \left( v_0^{1/2} -\frac{c}{2m} t \right)^2. $$
Let's separate the variables and try to integrate,
$$ \int_0^x dx = \int_0^t \left( v_0^{1/2} - \frac{c}{2m} t' \right)^2 dt', $$
$$ x_{\mathrm{f}} = \int_0^{\infty} \left( v_0^{1/2} - \frac{c}{2m} t' \right)^2 dt'. $$
I've plugged in $t = \infty$ because it seems to me that the block must stop by this time if it's going to stop at all. The problem is that the integral on the right-hand side won't converge! So $x$ has no finishing point, which contradicts the first part of the solution. What's going on here?
Answer: From
$$\dfrac{dx}{dt}=\left(v_0^{1/2}-\dfrac{c}{2m}{t}\right)^2$$
and
$$v(t_f)=\left.\dfrac{dx}{dt}\right|_{t=t_f}=0$$
you should be able to get a finite bound on your last integral.
EDIT: one possible reason for which your final integral doesn't properly converge comes from an earlier step. Indeed, you moved from:
$$m\dfrac{dv}{dt}=-cv^n$$
to:
$$\dfrac{dv}{v^n}=-\dfrac{c}{m}dt$$
The big caveat here is of course that this is only valid for $v\neq 0$. And in fact, the physical solution tells us that $v=0$ forever when $t_f$ is reached! | {
"domain": "physics.stackexchange",
"id": 20469,
"tags": "homework-and-exercises, newtonian-mechanics"
} |
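A numeric sanity check of the accepted answer's point (my own sketch, with arbitrary made-up values of $m$, $c$, $v_0$): integrating $v(t)$ only up to the stopping time $t_f = 2m\sqrt{v_0}/c$, where $v$ first reaches zero, reproduces the bound $x_f = 2m v_0^{3/2}/(3c)$; past $t_f$ the separated solution is invalid and the physical velocity stays zero.

```python
# Integrate v(t) = (sqrt(v0) - c t / (2 m))**2 only up to the stopping
# time t_f = 2 m sqrt(v0) / c. Past t_f, separation of variables breaks
# down (it assumed v != 0) and the physical solution is v = 0.
import numpy as np

m, c, v0 = 2.0, 0.5, 3.0                  # arbitrary illustrative values
t_f = 2.0 * m * np.sqrt(v0) / c           # time at which v(t) = 0

t = np.linspace(0.0, t_f, 200_001)
v = (np.sqrt(v0) - c * t / (2.0 * m)) ** 2
# trapezoidal rule for the displacement up to t_f
x_f = np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t))

x_exact = 2.0 * m * v0 ** 1.5 / (3.0 * c)
print(x_f, x_exact)                       # the two agree closely
```

This makes the answer's EDIT concrete: the divergence came from integrating a formula past the point where its derivation stopped being valid.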
Is anisotropic radioactivity really impossible? | Question: What if the nucleus has a magnetic moment, and also the electron shell has one? I suspect, in this case, the orientation of the nucleus could be "fixed" by the electron shell.
Maybe a mono-crystal of such a material would have an anisotropic radioactivity. Thus, its radioactivity wouldn't be the same in all directions.
I think relatively many requirements would need to be fulfilled:
The nucleus must be radioactive.
It must have a magnetic moment.
The electron shell must have also a magnetic moment.
The orientations of the atoms in the crystal structure should be unidirectional.
Is it possible? Does any such crystal already exist? Maybe the crystal of a radioactive isotope of some ferromagnetic element could fulfill all of these?
If not, why not?
Answer: The idea that you propose is possible. In fact, there is a very famous example of your setting: the Wu experiment.
Chien-Shiung Wu at the US National Bureau of Standards prepared a thin surface of ${}^{60}\mathrm{Co}$. This isotope decays by beta decay, producing one electron and one antineutrino. Due to the small magnetic moment of the nuclei, they had to cool the surface to 0.003 K, and then the sample was magnetized in a uniform magnetic field.
The result was quite amazing: the emitted electrons showed a preference to be emitted opposite to the direction of the nuclear spin, which proves that parity symmetry is violated. But that's another story...
More info: Wu experiment (wikipedia) | {
"domain": "physics.stackexchange",
"id": 31705,
"tags": "radioactivity"
} |
move_base to use map published on a certain topic | Question:
Hello, I'm having trouble trying to make my move_base constantly subscribe to a certain topic to load current SLAM-made map. There are some related questions here like this, this, this but none of them helped.
I mean that I would like move_base node to update map it is using with the one that is published to a given topic and periodically updated.
I'm using visual SLAM. I have PCL data but can't use it directly, as I have to perform some space-clearing operations. The ready-to-use map is published as a ROS message of type nav_msgs/OccupancyGrid and correctly visualized in rViz.
What's problematic here is that I can successfully use a static_map loaded from file, but I have no idea how to use a dynamically published map instead of the direct sensor stream.
Here are my current configuration files:
global_costmap:
global_frame: "map"
robot_base_frame: "base_link"
transform_tolerance: 20.0
update_frequency: 2.0
publish_frequency: 1.0
rolling_window: true
always_send_full_costmap: true
static_map: false
map_type: costmap
Local costmap:
local_costmap:
global_frame: "odom"
robot_base_frame: "base_link"
transform_tolerance: 10.0
update_frequency: 1.0
publish_frequency: 0.5
rolling_window: true
always_send_full_costmap: true
static_map: false
map_type: costmap
# ---- Following most likely replaced by static map layer
width: 1.0
height: 1.0
resolution: 0.01
Common parameters:
plugins:
# - {name: static_map, type: "costmap_2d::StaticLayer"}
- {name: inflation, type: "costmap_2d::InflationLayer"}
- {name: sonar, type: "range_sensor_layer::RangeSensorLayer"}
Static map layer:
map_topic: "/map_topic"
first_map_only: false
So at the moment I don't use static map layer and everything loads correctly except that blank costmaps are published (global and local). No error is shown.
Otherwise, if I enable the static map layer and set the global and local costmaps to use a rolling window and a non-static map, the move_base node is stuck at:
[ INFO] [1528620968.885129620]: Using plugin "static_map"
[ INFO] [1528620968.904921357]: Requesting the map...
I'm sure that topic name is correct.
In the static map layer there are the parameters map_topic and first_map_only defined, but they seem not to work as I expected. If I set the global and local costmaps' parameters as:
static_map: true
rolling_window: false
and run map_server with an appropriate file (previously saved map), everything works well. But it's not the way I would like to do it.
Any tips are welcome.
Originally posted by rayvburn on ROS Answers with karma: 78 on 2018-06-10
Post score: 1
Answer:
There was a conflict in parameter names. I'd been trying to load the costmap static layer's static_map parameters into the move_base/global_costmap/static_map namespace, but there was already a single static_map parameter in /move_base/global_costmap.
Also, it's probably better not to put all used plugins into common_costmap parameters but into separate files (for local and global).
What I did in global costmap's .yaml file was renaming namespace from static_map to static_layer (see question post)
global_costmap:
plugins:
- {name: static_layer, type: "costmap_2d::StaticLayer"}
# ... other parameters
and then in move_base's .launch file:
<rosparam file="$(find diff_drive_mapping_robot_navigation)/config/static_map_params.yaml"
command="load"
ns="global_costmap/static_layer" />
Hope it will somehow help someone and save them some time.
Originally posted by rayvburn with karma: 78 on 2018-06-10
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by madgrizzle on 2020-04-07:
Is it possible to buy you a cup of coffee? Thanks for this answer. It is exactly the problem I was facing and driving me crazy.. until I found this post. | {
"domain": "robotics.stackexchange",
"id": 30994,
"tags": "navigation, ros-kinetic, ubuntu, move-base, costmap-2d"
} |
Question on power | Question: A wind turbine produces a power $P$ when the wind speed is $v$. Assuming that the efficiency of the turbine is constant, the best estimate for the power produced when the wind speed becomes $2v$ is
(1) $2P$
(2) $4P$
(3) $6P$
(4) $8P$
My doubt here is that power $P = F \cdot v$, and basically if I write $F = m\,\frac{dv}{dt}$ in this expression I get a dependence of $P$ directly proportional to $v^2$, which means the answer should be $4P$, but the answer comes to $8P$. Why is that? What other relations can be used in linking power to velocity?
Thanks in advance.
Answer: Consider the energy being transferred,
$$
E = \frac{1}{2}mv^2
$$
At $2v$, the mass of air impinging on the turbine per unit time also doubles, so effectively
$$
E\propto v^3
$$
as
$m = \rho V = \rho Avt$, where $\rho$ = air density, $A$ = area, and $t$ = time | {
"domain": "physics.stackexchange",
"id": 13803,
"tags": "homework-and-exercises, power, renewable-energy"
} |
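The answer's scaling argument can be checked numerically (a sketch; the density and area values below are arbitrary): the kinetic power carried by wind of density $\rho$ through area $A$ at speed $v$ is $\tfrac{1}{2}\rho A v^3$, so doubling $v$ multiplies the power by 8.

```python
# Sketch of the scaling in the answer: P = (1/2) * rho * A * v**3 is the
# kinetic power in the wind (rho and A values here are arbitrary).
def wind_power(rho, area, v):
    # mass flow through the rotor disc per second is rho * area * v,
    # and each kilogram of air carries kinetic energy v**2 / 2
    return 0.5 * rho * area * v ** 3

rho, area, v = 1.225, 50.0, 10.0
print(wind_power(rho, area, 2 * v) / wind_power(rho, area, v))  # → 8.0
```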
rqt_graph not finding QT4 | Question:
Ubuntu 14.04 LTS / 64 bit.
ROS/Gazebo Jade installed per instructions at "http://wiki.ros.org/jade/Installation/Ubuntu"
I'm working through the tutorials, in order. Everything went well until the first use of "rqt_graph" in the turtle tutorial. This produced the error:
rosrun rqt_graph rqt_graph
Traceback (most recent call last):
File "/opt/ros/jade/lib/rqt_graph/rqt_graph", line 8, in <module>
sys.exit(main.main(sys.argv, standalone='rqt_graph.ros_graph.RosGraph'))
File "/opt/ros/jade/lib/python2.7/dist-packages/rqt_gui/main.py", line 59, in main
return super(Main, self).main(argv, standalone=standalone, plugin_argument_provider=plugin_argument_provider, plugin_manager_settings_prefix=str(hash(os.environ['ROS_PACKAGE_PATH'])))
File "/opt/ros/jade/lib/python2.7/dist-packages/qt_gui/main.py", line 336, in main
from python_qt_binding import QT_BINDING
File "/opt/ros/jade/lib/python2.7/dist-packages/python_qt_binding/__init__.py", line 55, in <module>
from .binding_helper import loadUi, QT_BINDING, QT_BINDING_MODULES, QT_BINDING_VERSION # @UnusedImport
File "/opt/ros/jade/lib/python2.7/dist-packages/python_qt_binding/binding_helper.py", line 265, in <module>
getattr(sys, 'SELECT_QT_BINDING_ORDER', None),
File "/opt/ros/jade/lib/python2.7/dist-packages/python_qt_binding/binding_helper.py", line 84, in _select_qt_binding
QT_BINDING_VERSION = binding_loader(required_modules, optional_modules)
File "/opt/ros/jade/lib/python2.7/dist-packages/python_qt_binding/binding_helper.py", line 139, in _load_pyqt
_named_import('PyQt4.%s' % module_name)
File "/opt/ros/jade/lib/python2.7/dist-packages/python_qt_binding/binding_helper.py", line 106, in _named_import
module = builtins.__import__(name)
AttributeError: 'module' object has no attribute '__import__'
For Python 2.7, that's a standard error. There is no
builtins.__import__
in Python 2.7.6. That feature is in Python 3, but not 2.7.6. Try this:
>python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import builtins
>>> builtins.__import__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute '__import__'
Now, in Python 2.7.9, it's different. There, "builtins" isn't defined, so the code at line 36 does
import __builtin__ as builtins
after which "builtins.__import__" is defined.
This looks like an incompatibility between "/opt/ros/jade/lib/python2.7/dist-packages/python_qt_binding/binding_helper.py" and Python 2.7.6.
Although that file is under the dist-packages directory for ROS Jade's Python library, it's not a standard Python component; it's a ROS component.
See "http://wiki.ros.org/python_qt_binding".
Note that on this Ubuntu system, Qt isn't installed in the system Python. That's what you get with the base Ubuntu configuration. So it's possible that this works on systems where Qt is already present and the fancy binding loader isn't needed. I'm not sure. But the "_named_import" function in that module can't work with Python 2.7.6, which is the default with Ubuntu 14.04 LTS, even with all current updates.
Originally posted by John Nagle on ROS Answers with karma: 41 on 2015-06-07
Post score: 1
Original comments
Comment by John Nagle on 2015-06-07:
QT4 is installed in the stock Python. "import PyQt4" will work. It's only "binding helper" that seems to be broken. Incidentally, installing Python 2.7.9 as the default Python is not an option; that apparently breaks some Ubuntu 14.04 tools.
Comment by John Nagle on 2015-06-07:
From the Python documentation: "__import__ is an advanced function that is not needed in everyday Python programming, unlike importlib.import_module()." It's an internal function which changes between Python releases. Using it in binding_helper is a bug. Importlib should be used instead.
Answer:
Reported as an issue/bug: https://github.com/ros-visualization/python_qt_binding/issues/24
Originally posted by John Nagle with karma: 41 on 2015-06-08
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 21857,
"tags": "ros, python, qt4, rqt-graph, ros-jade"
} |
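The fix suggested in the comments — use importlib instead of the version-dependent builtins.__import__ — can be sketched as follows (illustrative only, not the actual python_qt_binding code):

```python
# Portable named import using importlib, as the comments recommend,
# instead of reaching for the internal builtins.__import__ hook.
import importlib

def named_import(name):
    """Import a dotted module path and return the module object."""
    return importlib.import_module(name)

math_mod = named_import("math")
print(math_mod.sqrt(16))  # → 4.0
```

importlib.import_module has a stable public API across Python 2.7 and 3, which is exactly what a binding loader like this needs.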
Why do we experience time flow? | Question: I had read that a photon travelling at the speed of light does not experience time flow (time stops for it); I had also read somewhere that we are travelling at the speed of light. Why do we experience flow of time then?
Answer: Your question raises a common and interesting misconception.
Time does not slow down from the perspective of a moving observer. The time dilation effect means that where a moving observer passes between two stationary clocks, the time experienced by the observer is less than the time difference recorded on the clocks. From the perspective of the moving traveller, time is passing at its usual rate- it is the clocks that are out of synchronisation.
The effects of SR are entirely reciprocal. From the point of view of a reference frame moving relative to Earth at 0.9999999c, time on Earth appears to have almost stopped, yet we do not experience frozen time. Our experience of time remains constant regardless of how quickly we are moving relative to any other frames in which our time seems to be more or less dilated. | {
"domain": "physics.stackexchange",
"id": 80016,
"tags": "special-relativity, speed-of-light, time, inertial-frames, observers"
} |
Prevent Laserscan from clearing custom costmap_2d layer | Question:
Hello,
I am attempting to write my own custom costmap_2d layer plugin for use in my global costmap. I have been following this tutorial http://wiki.ros.org/costmap_2d/Tutorials/Creating%20a%20New%20Layer.
Specifically I am adding obstacles to the costmap via custom sensors and these appear correctly in rviz in my costmap layer when I turn off the laser scanner.
However, when I am using the obstacle_layer with a laser scanner as a sensor source the laser scan is always clearing my custom obstacles in the costmap ( The obstacles appear behind my robot but not in front where the scanner is facing).
My question is this: can I set costmap_2d::LETHAL_OBSTACLE to some other value that can't be cleared by the laser? Or conversely, can I set up the laser scanner obstacle_layer in such a way that it does not affect these custom obstacles?
Thanks
Edit 1:
@billy and @Procópio thank you for the quick responses. It sounds like the combination_method is really what I am looking for, however I am not sure exactly where that goes? Here is my original costmap configs:
costmap_common.yaml:
footprint: [ [-1.5, 0.9], [3.0, 0.9], [3.0, -0.9], [-1.5, -0.9] ]
robot_base_frame: base_footprint
transform_tolerance: 0.5
resolution: 0.05
update_frequency: 4.0
publish_frequency: 3.0
#layer definitions (used as plugins for local and global params)
static_layer:
enabled: true
map_topic: "/map"
subscribe_to_updates: true
obstacle_layer:
enabled: true
obstacle_range: 8.0
raytrace_range: 8.0
inflation_radius: 0.2
track_unknown_space: false
observation_sources: laser
laser: {data_type: LaserScan, sensor_frame: hokuyo_link_main, clearing: true, marking: true, topic: /main_body/scan_filtered}
inflation_layer:
enabled: true
cost_scaling_factor: 10.0
inflation_radius: 3.0
custom_layer:
enabled: true
Costmap_global.yaml
global_frame: map
rolling_window: false
track_unknown_space: true
static_map: true
update_frequency: 2.0
publish_frequency: 1.0
plugins:
- {name: static_layer, type: "costmap_2d::StaticLayer"}
- {name: obstacle_layer, type: "costmap_2d::VoxelLayer"}
- {name: inflation_layer, type: "costmap_2d::InflationLayer"}
- {name: custom_layer, type: "custom_layer::CustomLayer", output: "screen"}
Correct me if I am wrong but it sounds like what you are saying is I should add "combination_method: 0" to my custom_layer arguments list in my costmap_global.yaml file? Ie something like the following:
plugins:
- {name: static_layer, type: "costmap_2d::StaticLayer"}
- {name: obstacle_layer, type: "costmap_2d::VoxelLayer"}
- {name: inflation_layer, type: "costmap_2d::InflationLayer"}
- {name: custom_layer, type: "custom_layer::CustomLayer", output: "screen", combination_method: 0}
This should have the effect of allowing my custom_layer to overwrite the layers above it in my plugins list? Also, are the combination_method and other layer settings documented somewhere?
Thanks again for the help.
Edit 2:
After a bit more testing it looks like the solution is a combination of things. Both the order of the plugins and the use of the combination_method param were important. In my case the obstacles from my custom layer had indeed been added to the costmap, the problem was that my inflation layer came before my custom layer in the plugins list, therefore I was not seeing the inflated obstacles in rviz. I was also incorrect in setting "combination_method: 0" for my custom layer, that clearly has no effect since I am not using that param in my layer. I am posting the final working configuration in case someone else runs into this same problem:
costmap_common_final.yaml:
footprint: [ [-1.5, 0.9], [3.0, 0.9], [3.0, -0.9], [-1.5, -0.9] ]
robot_base_frame: base_footprint
transform_tolerance: 0.5
resolution: 0.05
update_frequency: 4.0
publish_frequency: 3.0
#layer definitions (used as plugins for local and global params)
static_layer:
enabled: true
map_topic: "/map"
subscribe_to_updates: true
obstacle_layer:
enabled: true
combination_method: 1
obstacle_range: 8.0
raytrace_range: 8.0
inflation_radius: 0.2
track_unknown_space: false
observation_sources: laser
laser: {data_type: LaserScan, sensor_frame: hokuyo_link_main, clearing: true, marking: true, topic: /main_body/scan_filtered}
inflation_layer:
enabled: true
cost_scaling_factor: 10.0
inflation_radius: 3.0
custom_layer:
enabled: true
Costmap_global_final.yaml
global_frame: map
rolling_window: false
track_unknown_space: true
static_map: true
update_frequency: 2.0
publish_frequency: 1.0
plugins:
- {name: static_layer, type: "costmap_2d::StaticLayer"}
- {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}
- {name: custom_layer, type: "custom_layer::CustomLayer"}
- {name: inflation_layer, type: "costmap_2d::InflationLayer"}
Also for some reason I had my laser scanner set to a VoxelLayer which I changed back to ObstacleLayer to make sure that wasn't the problem.
Thanks again @Procópio for sending me in the right direction
Originally posted by biglotusturtle on ROS Answers with karma: 165 on 2017-07-20
Post score: 0
Original comments
Comment by Procópio on 2017-07-21:
please, post your costmap configs.
Answer:
I had a similar question. Below is a link to the question with a description of how I fixed it. It's all about setting up the layers properly and then setting the 'clearing' setting to false.
For me it worked for what I needed. The laser would no longer clear the obstacles... but with the way I have it set up, once the robot moves far from the obstacle and that area no longer appears in the costmap, the obstacle will be gone when the robot goes back to that area. In my use case, that is perfect, but it may be an issue for you.
http://answers.ros.org/question/251095/costmap-clears-obstacles-when-it-should-not/
Originally posted by billy with karma: 1850 on 2017-07-20
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Procópio on 2017-07-21:
complementing the answer, the order of the layers and the combination_method matters. a combination_method: 0 will overwrite the layer above, while a combination_method: 1 will sum the layers | {
"domain": "robotics.stackexchange",
"id": 28403,
"tags": "navigation, laser, costmap-2d"
} |
What modelling tools are available to design a robot | Question: I am planning to build a robot.
1) What free or low cost robot modelling tools exist.
Answer: Here are some that I know of:
Solidworks student edition:
http://www.solidworks.com/sw/education/cad-faq-students.htm
(Free for students who use a school's privately assigned password, otherwise $149)
AutoCAD Student version: http://students.autodesk.com/ (Free for students)
Google Sketchup: http://www.sketchup.com/products/sketchup-make (Free)
FreeCAD: http://www.freecadweb.org/ (Free) | {
"domain": "robotics.stackexchange",
"id": 219,
"tags": "design"
} |
Estimate the length of poly-A tails from randomly-primed RNAseq data | Question: So a poly-A tail is a long chain of adenine nucleotides that is added to a messenger RNA (mRNA) molecule during RNA processing to increase the stability of the molecule.
For my project, I would like to estimate the length of poly-A tails from randomly-primed RNAseq data.
Any ideas how this could be done? Are there algorithms/tools which currently do this?
Perhaps I would need to model this somehow? All suggestions appreciated
Answer: Just look for polyA tracts at the end of sequences, and count them if they're larger than ~18 bases. I've done this with MinION cDNA reads by mapping the polyA adapter sequence (with an elongated polyA sequence) to the reads using LAST, and working out the length of aligned sequence.
Unfortunately, because the cDNA adapter finishes with a TVN sequence, rather than TTT, binding of the adapter is not necessarily a good representation of the true polyA length. I expect that Illumina adapters will have similar problems, if not worse (due to the read length limits).
It might be possible to use direct-RNA sequencing to better categorise the polyA lengths, but I've yet to look at the public direct-RNA datasets. Just in case you're interested in looking yourself, here's a direct RNA dataset for NA12878 from the wonderful nanopore-WGS consortium:
https://github.com/nanopore-wgs-consortium/NA12878/blob/master/RNA.md
Here's the consortium's graph from that summary document showing polyA length: | {
"domain": "bioinformatics.stackexchange",
"id": 319,
"tags": "rna-seq, software-recommendation, rna, algorithms"
} |
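The first suggestion in the answer ("look for polyA tracts at the end of sequences") can be sketched in a few lines. This is my own illustration, not the LAST-based pipeline the answer describes; real pipelines tolerate mismatches, while this exact-run version is deliberately minimal:

```python
# Measure the run of A's at the 3' end of each read and report runs at
# or above a threshold (~18 bases, per the answer); shorter runs are
# treated as no tail. Mismatch-free matching keeps the sketch simple.
def polya_tail_length(read, min_len=18):
    """Return the length of the trailing poly-A run, or 0 if the run is
    shorter than min_len."""
    n = 0
    for base in reversed(read.upper()):
        if base != "A":
            break
        n += 1
    return n if n >= min_len else 0

reads = [
    "ACGTACGT" + "A" * 25,   # tail of 25 A's -> reported
    "ACGTACGT" + "A" * 10,   # tail too short -> 0
    "ACGTACGTTTTT",          # no tail -> 0
]
print([polya_tail_length(r) for r in reads])  # → [25, 0, 0]
```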
How to implement quantum gate from matrix in Q# | Question: Is it possible to implement a quantum gate from a matrix in Q#, the equivalent of unitary function in Qiskit ? My final goal is to implement cirq CZPowGate in Q#.
Thank you.
Answer: There's a feature request for a Q# operation to apply an arbitrary unitary operation given its representation as a matrix; if you're interested, please go on and leave a comment on that request!
In the meantime, though, it's actually really straightforward to implement the same unitary operation as cirq.CZPowGate in Q#.
open Microsoft.Quantum.Math as Math;
operation ApplyCZPow(t : Double, control : Qubit, target : Qubit) : Unit is Adj + Ctl {
Controlled R1([control], (t * Math.PI(), target));
}
This uses the R1 operation provided with the Microsoft.Quantum.Intrinsic namespace together with the Controlled keyword to apply the same rotation as cirq.CZPowGate.
To see how this works, note that as per the Cirq documentation for CZPowGate, that instruction is represented by the unitary matrix
$$
U_{\text{CZPowGate}}(t) = \left(\begin{matrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & e^{i \pi t}
\end{matrix}\right).
$$
This is a special case of a controlled operation with a single control qubit, as it has the pattern
$$
\Lambda(U) = \left(\begin{matrix}
I & 0 \\
0 & U
\end{matrix}\right) = |0\rangle\langle0| \otimes I + |1\rangle\langle1| \otimes U
$$
for some unitary matrix $U$. In particular, $U$ in this case represents a rotation that leaves the $|0\rangle$ state alone but that applies a phase to the $|1\rangle$ state; precisely the action of the R1 operation.
In Q#, the Controlled keyword can be used to get the controlled version of any controllable operation (that is, an operation with is Ctl in its signature). Each Controlled operation takes as its first input an array of controls, and takes all of the original inputs to the uncontrolled operation as the second input. For example, CNOT(control, target) in Q# is shorthand for Controlled X([control], target).
Similarly, a Toffoli in Q# can be written as Controlled X([control1, control2], target) and the Fredkin operation can be written as Controlled SWAP([control], (target1, target2)).
Using the same pattern here works since R1 is a controllable operation. We can check that we get what we expect by using the Microsoft.Quantum.Diagnostics.DumpOperation operation to check the unitary representation of our new ApplyCZPow operation (run online without installing):
open Microsoft.Quantum.Diagnostics as Diag;
operation DumpApplyCZPow(t : Double) : Unit {
Diag.DumpOperation(2, ApplyToFirstTwoQubitsCA(ApplyCZPow(t, _, _), _));
} | {
"domain": "quantumcomputing.stackexchange",
"id": 1891,
"tags": "programming, q#, matrix-representation"
} |
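The matrix identity in the answer can be cross-checked numerically, independent of Q# or Cirq (a sketch with an arbitrary exponent t):

```python
# Check with plain numpy that diag(1, 1, 1, exp(i*pi*t)) -- the
# CZPowGate matrix -- equals the controlled version of the R1(pi*t)
# phase gate, i.e. |0><0| (x) I + |1><1| (x) R1.
import numpy as np

t = 0.37                                   # arbitrary exponent
phase = np.exp(1j * np.pi * t)

cz_pow = np.diag([1, 1, 1, phase])         # U_CZPowGate(t)

I2 = np.eye(2)
r1 = np.diag([1, phase])                   # R1 leaves |0>, phases |1>
p0 = np.diag([1, 0])                       # |0><0|
p1 = np.diag([0, 1])                       # |1><1|
controlled_r1 = np.kron(p0, I2) + np.kron(p1, r1)

print(np.allclose(cz_pow, controlled_r1))  # → True
```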
Improving spectrogram resolution in Python? | Question: I'm using the specgram() function in matplotlib to generate spectrograms of speech wave files in Python, but the output is always of vastly inferior quality to what my normal transcription software, Praat, can generate. For example, the following call:
specgram(
fromstring(spf.readframes(-1), 'Int16'),
Fs=framerate,
cmap=cm.gray_r,
)
Generates this:
While Praat, working on the same audio sample with the following settings:
View range: 0-8000Hz
Window length: 0.005s
Dynamic range: 70dB
Time steps: 1000
Frequency steps: 250
Window shape: Gaussian
Generates this:
What am I doing wrong? I've tried fiddling with all the specgram() parameters, but nothing seems to improve the resolution. I have virtually no experience with FFTs.
Answer: Here are the matplotlib.specgram parameters
matplotlib.mlab.specgram(x,
NFFT=256,
Fs=2,
detrend=<function detrend_none at 0x1dd6410>,
window=<function window_hanning at 0x1e0b1b8>,
noverlap=128,
pad_to=None,
sides='default',
scale_by_freq=None)
The parameters provided in the question description need to be converted to comparable mpl.specgram
parameters. The following is an example of the mapping:
View range: 0-8000Hz     ->  Fs=16000
Window length: 0.005s    ->  NFFT = int(Fs*0.005) = 80
                             noverlap = int(Fs*0.0025) = 40
Dynamic range: 70dB      ->  n/a
Time steps: 1000         ->  n/a
Frequency steps: 250     ->  n/a
Window shape: Gaussian   ->  default window is hanning; change to gaussian
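To actually switch from the default hanning window to a Gaussian one, you can build a window array of length NFFT and pass it to specgram via its window parameter. A sketch (the width std = NFFT/6 is an assumed value you would tune, not Praat's exact shape):

```python
import numpy as np

Fs = 16000
NFFT = int(Fs * 0.005)  # 80-sample window, matching the mapping above

# Gaussian window of length NFFT, centered in the window; the std is
# an assumed tuning knob, not a value taken from Praat.
std = NFFT / 6.0
n = np.arange(NFFT)
window = np.exp(-0.5 * ((n - (NFFT - 1) / 2.0) / std) ** 2)

# then, for example:
# pylab.specgram(x, NFFT=NFFT, Fs=Fs, noverlap=NFFT // 2, window=window,
#                cmap=pylab.get_cmap('Greys'))
```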
If you use 8ms you will get a power of 2 FFT (128). The following is the description of the Praat settings from their website
View range (Hz) :
the range of frequencies to display. The standard is 0 Hz at the bottom and 5000 Hz at
the top. If this maximum frequency is higher than the Nyquist frequency of the Sound
(which is half its sampling frequency), some values in the spectrogram will be zero,
and the higher frequencies will be drawn in white. You can see this if you record a
Sound at 44100 Hz and set the view range from 0 Hz to 25000 Hz.
Window length :
the duration of the analysis window. If this is 0.005 seconds (the standard),
Praat uses for each frame the part of the sound that lies between 0.0025 seconds
before and 0.0025 seconds after the centre of that frame (for Gaussian windows,
Praat actually uses a bit more than that). The window length determines the
bandwidth of the spectral analysis, i.e. the width of the horizontal line in
the spectrogram of a pure sine wave (see below). For a Gaussian window, the -3 dB
bandwidth is 2*sqrt(6*ln(2))/(π*Window length), or 1.2982804 / Window length.
To get a 'broad-band' spectrogram (bandwidth 260 Hz), keep the standard window
length of 5 ms; to get a 'narrow-band' spectrogram (bandwidth 43 Hz), set it to
30 ms (0.03 seconds). The other window shapes give slightly different values.
Dynamic range (dB) :
All values that are more than Dynamic range dB below the maximum (perhaps after
dynamic compression, see Advanced spectrogram settings...) will be drawn in white.
Values in-between have appropriate shades of grey. Thus, if the highest peak in
the spectrogram has a height of 30 dB/Hz, and the dynamic range is 50 dB (which
is the standard value), then values below -20 dB/Hz will be drawn in white, and
values between -20 dB/Hz and 30 dB/Hz will be drawn in various shades of grey.
Link to Praat settings
The OP's question might be about the contrast difference between the Praat
spectrogram and the mpl (matplotlib) specgram. Praat has a Dynamic range setting
which affects the contrast; the mpl function has no equivalent setting/parameter. However, mpl.specgram does return the 2D array of power levels (the spectrogram), so the
dynamic range could be applied to the returned array and re-plotted.
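As a sketch of that idea (the function name and the 1e-12 floor are illustrative): convert the returned power array to dB, then floor everything more than the chosen dynamic range below the peak, mimicking Praat's white cut-off:

```python
import numpy as np

def apply_dynamic_range(power, dynamic_range_db=70.0):
    """Clip a spectrogram power array so that values more than
    dynamic_range_db below the peak are floored (drawn 'white')."""
    db = 10.0 * np.log10(np.maximum(power, 1e-12))  # avoid log(0)
    floor = db.max() - dynamic_range_db
    return np.clip(db, floor, None)

# usage sketch:
# Pxx, freqs, bins, im = pylab.specgram(x, NFFT=NFFT, Fs=Fs, noverlap=noverlap)
# pylab.pcolormesh(bins, freqs, apply_dynamic_range(Pxx), cmap='Greys')
```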
The following is a code snippet to create the plots below. The example is ~1m15s speech with a chirp from 20Hz-8000Hz.
import numpy
import pylab
import wave
import array
pylab.close('all')
w1 = wave.open('example_no_noise.wav')
w2 = wave.open('example_noise.wav')
# hmmm, probably a better way to do this, scipy.io function?
x1 = numpy.array(array.array('h', w1.readframes(w1.getnframes())))
x2 = numpy.array(array.array('h', w2.readframes(w2.getnframes())))
x1 = x1 / (2.**(16-1)) # normalize
x2 = x2 / (2.**(16-1)) # normalize
Fs = 16000.
NFFT = int(Fs*0.005) # 5ms window
noverlap = int(Fs*0.0025)
pylab.figure(1)
pylab.specgram(x1, NFFT=NFFT, Fs=Fs, noverlap=noverlap,
cmap=pylab.get_cmap('Greys'))
pylab.title('Full 1m15s example min noise')
pylab.figure(2)
pylab.specgram(x2, NFFT=NFFT, Fs=Fs, noverlap=noverlap,
cmap=pylab.get_cmap('Greys'))
pylab.title('Full 1m15s example more noise')
pylab.figure(3); n=2100*176;
pylab.specgram(x2[n:n+256*256], NFFT=NFFT, Fs=Fs, noverlap=noverlap,
cmap=pylab.get_cmap('Greys'))
pylab.title('Full ~4s example min noise')
pylab.figure(4); pylab.plot(x1[n:n+256*256]) | {
"domain": "dsp.stackexchange",
"id": 194,
"tags": "fft, spectrogram, python"
} |
Get the prediction probability using prediction function | Question: I'm new to SVM models. I took custom SVM classifier from the github. In there standard predict function was overwritten by custom predict function.
def predict(self, instances):
    predictions = []
    for instance in instances:
        # class is determined based on the sign: -1 or 1
        predictions.append(int(np.sign(self.weight_vector.dot(instance.T)[0] + self.bias)))
    return predictions
I want to get prediction probability of each class. [-1 probability, 1 probability]
I know from standard SVM predict_proba function I can get the probability. Since this is a customized SVM how do I get that? Can somebody help me.
Thank you
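(For anyone with the same question: one common recipe is Platt-style scaling, i.e. a logistic squashing of the raw decision value w·x + b. The sketch below uses a fixed scale instead of coefficients fitted on held-out data, so its outputs are pseudo-probabilities only:)

```python
import numpy as np

def predict_proba(weight_vector, bias, instances, scale=1.0):
    """Map raw SVM decision values to [P(class=-1), P(class=+1)].

    Uses a fixed logistic squashing of the margin; proper Platt scaling
    would fit `scale` (and an offset) on held-out data.
    """
    scores = instances @ weight_vector + bias
    p_pos = 1.0 / (1.0 + np.exp(-scale * scores))
    return np.column_stack([1.0 - p_pos, p_pos])

w = np.array([1.0, -1.0])
probs = predict_proba(w, 0.0, np.array([[3.0, 1.0], [1.0, 3.0]]))
# each row sums to 1; larger margins give probabilities nearer 0 or 1
```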
Answer: I followed the scikit-learn source at https://github.com/scikit-learn/scikit-learn/blob/fd237278e/sklearn/linear_model/_base.py#L293 and was able to get the predicted probabilities. | {
"domain": "datascience.stackexchange",
"id": 9468,
"tags": "svm, predict"
} |
[ROS2] Passing additional arguments to component constructor | Question:
Hi,
I am using ROS2 Foxy on Ubuntu 20.04.
I am trying to create a node using components using manual composition.
I would like to pass additional arguments during the creating of the component objects. The following files were created.
###Node Code
#include <memory>
#include "test_pkg/test_component.hpp"
#include "rclcpp/rclcpp.hpp"
int main(int argc, char * argv[])
{
setvbuf(stdout, NULL, _IONBF, BUFSIZ);
rclcpp::init(argc, argv);
rclcpp::executors::MultiThreadedExecutor exec;
rclcpp::NodeOptions options;
RCLCPP_INFO(rclcpp::get_logger("rclcpp"), "Adding node...");
std::string test_ID = "J1";
auto test_node = std::make_shared<test_composition::Test>(test_ID, options);
exec.add_node(test_node);
RCLCPP_INFO(rclcpp::get_logger("rclcpp"), "Spinning");
exec.spin();
rclcpp::shutdown();
return 0;
}
###Component Header
#ifndef COMPOSITION__TEST_COMPONENT_HPP_
#define COMPOSITION__TEST_COMPONENT_HPP_
#include <test_pkg/visibility_control.h>
#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/float32.hpp>
#include <std_srvs/srv/set_bool.hpp>
namespace test_composition
{
class Test : public rclcpp::Node
{
public:
COMPOSITION_PUBLIC
explicit Test(const std::string &test_ID, const rclcpp::NodeOptions &options);
protected:
void timer_callback();
void client_output();
private:
rclcpp::Publisher<std_msgs::msg::Float32>::SharedPtr test_publisher;
rclcpp::TimerBase::SharedPtr test_publisher_timer;
size_t count_;
std::string test_name;
};
}
#endif
###Component Code
#include "test_pkg/test_component.hpp"
#include <chrono>
#include <memory>
#include <iostream>
#include <utility>
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/float32.hpp"
#include "std_srvs/srv/set_bool.hpp"
using namespace std::chrono_literals;
namespace test_composition
{
Test::Test(const std::string &test_ID, const rclcpp::NodeOptions &options)
: Node(test_ID, options)
{
RCLCPP_INFO(get_logger(), "Configuring Node...");
test_name = test_ID;
test_publisher = create_publisher<std_msgs::msg::Float32>(test_name + "/speed", 0);
test_publisher_timer = create_wall_timer(500ms, std::bind(&Test::timer_callback, this));
}
void Test::timer_callback()
{
auto msg = std::make_unique<std_msgs::msg::Float32>();
msg->data = ++count_;
test_publisher->publish(std::move(msg));
}
}
#include "rclcpp_components/register_node_macro.hpp"
RCLCPP_COMPONENTS_REGISTER_NODE(test_composition::Test)
I want to create multiple nodes using the Test class with different "test_ID" and therefore need to pass the additional argument to the constructor. But when I try to build the program using colcon build, I get the following error.
###Error
error: no matching function for call to ‘test_composition::Test::Test(const rclcpp::NodeOptions&)’
146 | { ::new((void *)__p) _Up(std::forward(__args)...); }
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home//user/test_ws/src/test_pkg/src/test_component.cpp:16:5: note: candidate: ‘test_composition::Test::Test(const string&, const rclcpp::NodeOptions&)’
16 | Test::Test(const std::string &test_ID, const rclcpp::NodeOptions &options)
| ^~~~
/home/user/test_ws/src/test_pkg/src/test_component.cpp:16:5: note: candidate expects 2 arguments, 1 provided
make[2]: *** [CMakeFiles/test_component.dir/build.make:63: CMakeFiles/test_component.dir/src/test_component.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:107: CMakeFiles/test_component.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
Without the additional (test_id) argument the program gets build. I have also tried doing the same with a simple node without composition and it build correctly.
Any help is greatly appreciated. Thank you!
Originally posted by Cryoschrome on ROS Answers with karma: 75 on 2022-04-14
Post score: 0
Answer:
The issue is caused by your component not having the correct constructor signature required for a component. Components must have a constructor that takes rclcpp::NodeOptions as the only argument. So, simply add another constructor that calls the constructor you already have to fix this issue.
In your header file,
COMPOSITION_PUBLIC
explicit Test(const rclcpp::NodeOptions &options);
COMPOSITION_PUBLIC
explicit Test(const std::string &test_ID, const rclcpp::NodeOptions &options);
In your source file,
Test::Test(const rclcpp::NodeOptions &options)
: Test("", options)
{}
Test::Test(const std::string &test_ID, const rclcpp::NodeOptions &options)
: Node(test_ID, options)
{
RCLCPP_INFO(get_logger(), "Configuring Node...");
test_name = test_ID;
test_publisher = create_publisher<std_msgs::msg::Float32>(test_name + "/speed", 0);
test_publisher_timer = create_wall_timer(500ms, std::bind(&Test::timer_callback, this));
}
Originally posted by ijnek with karma: 460 on 2022-04-15
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 37579,
"tags": "ros"
} |
Is possible to replace the photon as a force carrier of electromagnetic force with a sfermion-fermion relation pair model? | Question:
force carriers or messenger particles or intermediate particles are particles that give rise to forces between other particles.
I read that
A field's spin is determined by how it transforms when you rotate the coordinates by $\Lambda$:
A scalar doesn't transform: $\phi(x) \to \phi(\Lambda x)$
A vector transforms like the coordinates: $A(x) \to \Lambda A(\Lambda x)$
I want to ask whether it is possible to replace the messenger-particle (intermediate-particle) model, because it leads us to always perceive one particular use of that intermediation, which often suggests physical limits which in reality could be overcome (in a topological way, not with direct physical access).
Intermediate particles are gauge bosons, and that means vector bosons, because we are talking about force carriers. But this is typically a 'wrestling' approach; we could instead introduce a pragmatic interest in a topological approach, without having to always think with the muscles.
In fact, if you 'replace' the photon as the force carrier of the electromagnetic force with a sfermion-fermion relation-pair model, we can unblock scalar access without 'forcing' the system in a force-carrier way, in order to move from vectors to modules.
a sfermion is a hypothetical spin-0 superpartner particle (sparticle) of its associated fermion. Each particle has a superpartner with spin that differs by $1/2$. Fermions in the SM have spin-$1/2$ and therefore sfermions have spin 0
We should consider the fermion-sfermion pair, not just the sfermion
I set this Equivalence relation
photon = fermion-sfermion relation pair
Answer: Physics has numerous examples of alternative or dual descriptions, but I think you're asking for too many things at once here: you want electromagnetism to be described by a scalar, and for the scalar to be a superpartner of charged particles, and for electromagnetism to be a topological supersymmetric effect. If you're hoping for something equivalent to quantum electrodynamics (QED) and satisfying all those properties, there's no reason to think that it exists. And if you think we are free to just make up new theories of electromagnetism, you underestimate just how well-tested QED is.
This kind of radical speculation is a two-edged sword. Anyone who goes on to make unexpected discoveries must be able to consider possibilities that look unlikely, and really probe whether they could still be true. But there is also a risk of getting stuck on a personally favored hypothesis and neglecting to learn about what actually works, especially if one is an amateur speculator.
A fermion and a sfermion together form a chiral superfield. A chiral superfield can interact with itself (Wess-Zumino) but the fermion is Majorana, with no distinct antiparticle. Alternatively, one could keep the vector photon and posit a spin 1/2 partner. But again, this 'photino' is uncharged, inherited from the fact that the photon also has no charge, it does not interact with itself (unlike the more complicated weak bosons and gluons).
As Pierre Fayet has written, at the beginning of supersymmetry, when people sought supersymmetric pairs among the known particles, it was more natural to seek the photon's superpartner in the uncharged neutrino, and the electron's superpartner in the charged W boson. But none of those ideas worked out either.
A relatively elementary problem, too, is that there are many charged particles: electron, muon, tau, the proton and charged pions (or the various quarks). Is the scalar photon a superpartner of all of these? Or (as in the supersymmetric standard model) is there a different superpartner for each elementary fermion, in which case how do you get the same interaction for all charged fermions?
There are various interesting ways to combine topology and supersymmetry and electromagnetism. I suppose it is remotely possible that composite photons could somehow emerge from topological interactions among chiral superfields. That's the closest I can get to your vision. | {
"domain": "physics.stackexchange",
"id": 54221,
"tags": "photons, supersymmetry, bosons"
} |
Tetrahedron resistor and splitting of current | Question: I found in this link something I can't understand:
Consider a current flowing out of point a below. The current must split up into three equal portions, since all three
branches from point a are connected to the same resistance. Thus, the currents in branches ab, ac, and ad must be
equal.
This is the premise needed to remove the resistor bd.
But why "the same resistance"? Why does it say that the current splits equally between the three branches? This also seems to contradict the last simplification step: how can the current be the same in the branches if their resistances are 2R, 2R and R?
I believe that there's an unneeded assumption. To remove the bd resistor we can just say that the currents in ab and ac (not in ad!) are the same, because either way the journey to d costs 2R.
Is it wrong?
Answer: The solution provided contradicts itself. The three branches from point a are not connected to the same resistance - path ad requires passage through only 1 resistor, while a path passing through b or c requires passage through at least 2 resistors. Further, the statement that the voltages at points b, c, and d are the same is impossible. If this were the case, no current would flow along bd or cd. Since all currents were already said to be equal, no current could flow along ad either, and no current would flow anywhere.
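This can be checked numerically with nodal analysis: set every edge to R = 1, drive node a at 1 V with d grounded, and write Kirchhoff's current law at b and c (a sketch with unit values, not the original problem's numbers):

```python
import numpy as np

# Tetrahedron: nodes a, b, c, d fully connected, each edge with unit
# conductance.  Fix V_a = 1, V_d = 0; the unknowns are V_b and V_c.
# KCL at b: (V_b - V_a) + (V_b - V_c) + (V_b - V_d) = 0  ->  3 V_b - V_c = 1
# KCL at c: (V_c - V_a) + (V_c - V_b) + (V_c - V_d) = 0  ->  3 V_c - V_b = 1
A = np.array([[3.0, -1.0],
              [-1.0, 3.0]])
rhs = np.array([1.0, 1.0])
Vb, Vc = np.linalg.solve(A, rhs)
# Vb == Vc == 0.5: b and c sit at the same potential (so no current flows
# in bc), but neither equals V_d = 0, so current *does* flow along bd and cd.
```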
However, since paths $a\rightarrow b \rightarrow d$ and $a\rightarrow c \rightarrow d$ are equivalent, we can state that the voltages at points b and c are equal. This means that no current will flow along bc, so we can disregard that resistor. The subsequent work and the final effective circuit drawn are correct. | {
"domain": "physics.stackexchange",
"id": 36834,
"tags": "electric-circuits, symmetry, electrical-resistance"
} |
$k$-Opt TSP Local Search is NOT exact when $k = \lceil |V|/2 \rceil$ | Question: I've been self-studying the book Algorithms by Papadimitriou, Dasgupta and Vazirani.
I am having a hard time with a question about local search involving the traveling salesman problem (TSP).
We'll say a local search algorithm is exact if it always returns a globally optimal solution.
Consider a local search algorithm for TSP that uses neighborhoods defined by $k$-change: two tours $T_0$ and $T_1$ are neighbors if one can delete $j \leqslant k$ edges from $T_0$ and add back another $j$ edges to obtain $T_1$.
This is known as the $k$-Opt algorithm.
It's easy to see and the book itself discusses how for low values of $k$ (relative to the number $n$ of vertices), $k$-Opt may get stuck on locally optimal solutions that are not globally optimal.
In other words, $k$-Opt is not exact for these values of $k$.
The exercise I'm interested in claims that
$k$-Opt with $k = \lceil n/2 \rceil$ is not exact.
I've tried my hand at it and searched around, but couldn't manage it.
I found the paper "Some Examples of Difficult Traveling Salesman Problems", available here (and Papadimitriou is one of the authors by the way).
It contains a family of examples for which $k$-Opt with $k < (3n/8)$ is not exact, and uses a particular kind of subgraph (which it calls the diamond) to impose a structure on feasible tours.
I've tried to replicate and extend something like this with little success.
Answer: Answer to the first question:
Please see the following figure for $n = 8$. All the red and black edges have weight $1$. The green edge $(C,H)$ has weight $2$. And, the blue edges have weight $0$. The edges that are not there have weight $\infty$; I have not drawn these edges for simplicity.
Now, note that the cycle $(A,B,C,D,E,F,G,H)$ is the optimal tour with weight $5$. Also, the tour formed by the colored edges (red, green, and blue) has weight $6$. Note that the tour formed by these colored edges is locally optimal to $\lceil n/2 \rceil$ change but it is not globally optimal.
Why colored tour is locally optimal?
Proof: For the sake of contradiction assume that it is not locally optimal. It means that there exist at most four edges in the tour that can be replaced to obtain a smaller weight tour. Observe that to obtain a smaller weight tour, the edge $(C,H)$ must be replaced and the blue edges can not to be replaced. If we replace edge $(C,H)$, then we must add the edges $(C,B)$ and $(A,H)$; otherwise, the weight of the tour will become $\infty$.
After adding these edges, we can not keep edges $(B,D)$ and $(A,G)$ in the tour. Therefore, we must add edges $(D,E)$ and $(G,F)$; otherwise, the weight of the tour will become $\infty$. Now, we must remove edges $(B,E)$ and $(A,F)$.
To complete the tour, we need to add edge $(A,B)$; however, we can not add any more edges since we have already added $4 = n/2$ new edges. Therefore, a tour can only be formed by adding edges $(B,F)$ and $(E,A)$. It makes the weight of the tour $\infty$.
This proves that the tour formed by colored edges is locally optimal.
The technique can be easily generalized to the general value of $n$. For example, see the following figure for $n = 14$. Here, again, the tour formed by the colored edges is locally optimal to $\lceil n/2 \rceil$ change but it is not globally optimal. | {
"domain": "cs.stackexchange",
"id": 19662,
"tags": "algorithms, graphs, traveling-salesman, hamiltonian-circuit"
} |
How do I set up and for a skid steer robot? | Question:
I am attempting to model a Pioneer3-AT mobile robot in Gazebo and control it from ROS. I have been reasonably successful except for one final touch, rotation control.
If I set the MaxForce() for each wheel joint to something very high such as 50, the robot will turn in place very close to the commanded rate. But with this setting any linear motion request causes the robot to do a flip.
If I reduce the MaxForce() for each wheel to 3, the linear response is very close to what I see on the real robot but the rotational control is very slow, perhaps only 1% of the correct speed.
This makes me think that I need to configure mu, mu2, slip1, slip2 for each wheel properly to get the desired performance but I'm not sure how to choose reasonable values. After many frustrating rounds of guess-and-check, the best I was able to get is a robot that kinda responds to the desired velocity commands but slides all over the place (+/- 1m) when attempting to rotate in place.
Additionally, when I watch the joints rotating in Gazebo, I see that each of the wheels will 'slip' for a very short time and this is probably the source of the sliding. I wonder if I could use a single force to rotate both wheels on the same side of the robot. (like using a belt between two wheels on the same side of the robot driven from a single motor.) I think this might help too.
Relevant Files:
model.sdf
ros_model_plugin.cc
screenshot of robot http://i45.tinypic.com/20a29ns.png
Originally posted by dereck on Gazebo Answers with karma: 86 on 2013-02-23
Post score: 2
Original comments
Comment by ZdenekM on 2013-05-22:
Hi. I'm very interested in model of P3-AT. If you have updated version - can you share it somehow? Many thanks.
Answer:
Hi,
You should also fix the inertia values. These can be computed with the help of this wikipedia page:
For example, if you choose the chassis to be a cuboid, each diagonal term uses the two box dimensions perpendicular to its axis (where m = mass, w = width along x, d = depth along y, h = height along z):
ixx = 1/12*m*(d^2+h^2), iyy = 1/12*m*(w^2+h^2), izz = 1/12*m*(w^2+d^2)
<link name="chassis">
<inertial>
<mass>14.0</mass>
<inertia>
<ixx>1/12*m*(d^2+h^2)</ixx>
<ixy>0.0</ixy>
<ixz>0.0</ixz>
<iyy>1/12*m*(w^2+h^2)</iyy>
<iyz>0.0</iyz>
<izz>1/12*m*(w^2+d^2)</izz>
</inertia>
</inertial>
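SDF expects literal numbers in those fields, so in practice you evaluate the formula yourself; a small helper (the example mass matches the snippet above, but the chassis dimensions are made up, not real P3-AT measurements):

```python
def box_inertia(m, w, d, h):
    """Diagonal inertia of a solid cuboid (width w along x, depth d
    along y, height h along z), about its centre of mass."""
    ixx = m / 12.0 * (d**2 + h**2)
    iyy = m / 12.0 * (w**2 + h**2)
    izz = m / 12.0 * (w**2 + d**2)
    return ixx, iyy, izz

# e.g. a 14 kg chassis, 0.5 m x 0.38 m x 0.22 m (made-up dimensions):
ixx, iyy, izz = box_inertia(14.0, 0.5, 0.38, 0.22)
```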
As for mu, mu2, slip1, slip2 they are explained here, notice that the values are between [0..1]. For more information check out this ODE page and just search for mu, mu2, slip1, slip2 it will be more deeply explained.
Cheers
Originally posted by AndreiHaidu with karma: 2108 on 2013-02-24
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by dereck on 2013-02-26:
Thank you very much for the inertia wiki link. That's exactly what I needed! The mu, mu2, slip1, slip2 links do tell me what each parameter is for, but would you have any links that talk about how to configure them?
Comment by AndreiHaidu on 2013-02-26:
Hi, well in your model.sdf file you change the values between 0 and 1. 0 - meaning no friction and 1 - maximum friction. But you should know that in gazebo there is no rotating friction. So you just have to play with the values to see which fits you best. (was this what you where asking for? if not please explain more with what you meant by "how to configure them". Cheers
Comment by dereck on 2013-02-28:
So, its just a guess and check operation then? Thank you!
Comment by AndreiHaidu on 2013-03-01:
I guess it is faster than figuring out values after these formulas http://www.ode.org/ode-latest-userguide.html#sec_3_11_1 :) | {
"domain": "robotics.stackexchange",
"id": 3064,
"tags": "gazebo-1.4, skid-steer, sdformat"
} |
Number of Images from pair of plane mirrors? | Question: Basically not exactly a homework, but I came across this problem.
I have two plane mirrors angled at 60 degree angle. An object is placed in between symmetrically. What will be the number of total images obtained?
What I mean is that there will be images of images, how can we count them (If at all), and how are they related to angle between the mirrors? Any help will be appreciated.
Answer: You can solve this problem the same way that you solve most problems with compound optical systems in simple geometrical optics, that is:
Find the image(s) produced by the first element in the system
Use those images as the objects for the next element in the system, find the resulting images.
Repeat step 2 as necessary.
In this case, there is a complication due to the fact that you essentially have two paths in parallel to keep track of. More explicitly, find the images in each of the mirrors. Next, using those images as objects, find their images in the other mirror. You will see that due to the geometry, at a certain point, you will run out of objects, because there are no more images generated in front of a mirror.
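Carrying out this procedure reproduces the standard textbook rule: when 360/θ is an even integer n, there are n − 1 images regardless of where the object sits, so for θ = 60° you get 5 images. A tiny sketch of that rule (the odd-n and asymmetric cases need separate handling and are skipped here):

```python
def image_count(theta_deg):
    """Number of images for two plane mirrors at angle theta degrees,
    valid when 360/theta is an even integer (then the object's exact
    position between the mirrors doesn't matter)."""
    n = 360 / theta_deg
    assert n == int(n) and int(n) % 2 == 0, "rule stated only for even 360/theta"
    return int(n) - 1

# image_count(60) -> 5, image_count(90) -> 3
```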
See also http://www.physicsclassroom.com/class/refln/u13l2f.cfm for some helpful diagrams and pictures. | {
"domain": "physics.stackexchange",
"id": 48907,
"tags": "homework-and-exercises, optics, reflection"
} |
What evidence exists for string theory viability? | Question: I know that string theory is still under heavy development, and as such it still can't make predictions (or not that many predictions anyways).
On the other hand, it is clear from the number of years this theory has been under development and from the large number of theoretical physicists studying it, that it is considered a good and viable candidate as a quantum gravity theory.
So, what is the evidence that this is true? Why is it considered such a good candidate as the correct quantum gravity theory?
Without wanting to sound inflammatory in the least, it has been under heavy development for a very long time and it's still not able to make predictions, for example, or still makes outlandish statements (like extra dimensions) that would require a high amount of experimental evidence to be accepted. So - if so many people believe it is the way to go, there have to be good reasons, right? What are they?
Answer: Some random points in support of ST, with no attempt to be systematic or comprehensive. I will not get into a long discussion, if someone does not find this convincing I'd advise them not to work on the subject. I also don't have the time to elaborate or justify the claims below, just take them at face value, or maybe you can ask more specific follow-up questions.
ST incorporated naturally and effortlessly all of the mathematical structures underlying modern particle physics and gravity. It does so many times in surprising and highly non-trivial ways, many times barely surviving difficult consistency checks. Certainly, anyone with any familiarity with the subject has a strong feeling that you get out much more than you put in.
ST quantizes gravity, and that form of quantization avoids all the difficulties that more traditional approaches encounter. This is also surprising: it was developed originally as a theory of the strong interactions, and when people discovered it contains quantized gravity they spent years trying to get rid of it, with no success. As a theory of quantum gravity, it passes many consistency checks, failure of any of them would invalidate the whole framework, for example in providing a microscopic description of a large class of black holes.
ST extends the calculational tools available to us to investigate interesting physical systems, many times the only such calculational techniques available. Again, it does so in novel and unexpected ways. It therefore provides a natural language to extend quantum field theory, to models which include quantized gravity, and (relatedly) models with strong interactions. Many calculations using that language are simpler and more natural than other methods, it seems therefore to be the right language to discuss a large class of physical systems.
ST contains in principle all the ingredients needed to construct a model for particle physics, though this has proven to be more difficult than originally thought. But, in view of the above, even if it turns out not to provide a model for beyond the standard model physics, it is certainly useful enough for many physicists to decide to spend their time studying it.
Of course, other people may have their own reasons to find the subject interesting, I'm only speaking for myself. | {
"domain": "physics.stackexchange",
"id": 28257,
"tags": "string-theory"
} |
Where's the official Python API documentation for TF? | Question:
The web article http://wiki.ros.org/tf only seems to point to the C++ API documentation (e.g. the following sub-article http://wiki.ros.org/tf/Overview/Data%20Types only contains C++ examples). I would like to use Python instead. I would like to read the official Python documentation for tf. Where can I find it?
In other ROS Wiki pages, there's usually the possibility to see both the Python and C++ version of the code or documentation. Why doesn't http://wiki.ros.org/tf provide such a duality?
I know there are a few tf tutorials (e.g. http://wiki.ros.org/tf/Tutorials/Writing%20a%20tf%20broadcaster%20%28Python%29) which use Python, but I would like to see all other examples and documentation in Python. There's this page http://mirror.umd.edu/roswiki/doc/diamondback/api/tf/html/python/tf_python.html, but it doesn't seem to be official.
I have also seen that there's also a new Python interface to the new tf library, i.e. to tf2, which is called tf2_ros. However, the documentation is pretty much empty. For example, the class TransformBroadcaster is not documented. Why? Why aren't people putting some effort on the documentation?
Furthermore, there's also the following article http://wiki.ros.org/tf/TfUsingPython, but there's no link from the web page http://wiki.ros.org/tf to it, besides being very poorly written and very incomplete.
Originally posted by nbro on ROS Answers with karma: 372 on 2018-04-07
Post score: 0
Answer:
All of the pages listed by @nbro are linked directly from the Package Links linkbox to the right of the Package Summary header on the tf2_ros wiki page:
which takes you to:
All ROS pkgs with (Doxygen) API docs should have those links.
You should not need to search docs.ros.org yourself (but you can, of course).
Edit1: there is actually a tutorial about this: Navigating the ROS wiki.
Originally posted by gvdhoorn with karma: 86574 on 2018-04-08
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by nbro on 2018-04-10:
It's incredible I have never noticed "Code API" on that side of a ROS wiki article. If I am the only one who missed it, then shame on me, but if there are many other people in my situation, shame on the designers of the ROS wiki.
Comment by gvdhoorn on 2018-04-10:
"shame"? Really?
Comment by tfoote on 2018-04-10:
@gvdhoorn FYI I've given @nbro an official warning relating to this and a few other posts. https://github.com/ros/geometry/issues/161#issuecomment-380206764
Comment by gvdhoorn on 2018-04-10:
To be clear: I don't mind frank discussions and stating of opinions (I'm Dutch, I actually appreciate it). I do believe however that one needs to be careful not to come off as harsh or abrasive. This could just be a matter of the 'email effect', but expressions such as "shame on .." are a bit much.
Comment by nbro on 2018-04-12:
Yes, I understand that there may be a tutorial. My point is: there should not exist the need for a tutorial for such a basic functionality. Don't you also think?
Comment by gvdhoorn on 2018-04-12:
Every person is different. What may be apparent or intuitive to one could be opaque to another. I guess the tutorial / guide exists to explain how the wiki is layed out so that it becomes easier to find what you are looking for.
I can't comment on whether or not the current layout is bad or not, ..
Comment by gvdhoorn on 2018-04-12:
.. as I feel I'm too accustomed to all of this already to really have an objective opinion.
Comment by nbro on 2018-04-12:
There are fields like human-computer interaction (HCI) and user-experience design (UXD) which exist to solve this kind of problems.
Comment by gvdhoorn on 2018-04-12:
Certainly. Perhaps you know of a way to get them involved? This is largely a community based project, so improvements should really come from the community.
Comment by vschmidt on 2018-04-12:
This discussion seems to revolve around where the existing documentation can be found. But it doesn’t, in my opinion really address the original questions, which point to the fact that the existing documentation could use some work.
Comment by gvdhoorn on 2018-04-13:
I'm not sure: the title of the post by the OP is:
Where's the official Python API documentation for TF?
that does seem to ask for where it can be found. | {
"domain": "robotics.stackexchange",
"id": 30568,
"tags": "ros, ros-kinetic, documentation, transform"
} |
Fourier heat conduction and temperature gradient | Question: I am studying introductory heat behaviour, and I came across a rather elementary problem. Say there is a wall with a constant temperature differential maintained between its two sides. I read that a linear temperature gradient results within the wall, but I don't know how I would derive/figure this out.
I can think of one numeric way to do this, splitting the wall into many many tiny elements, setting them all to any temperature T_0, and then performing an iterative numeric pattern around every element, averaging surrounding temperature. This doesn't use very much reasoning, and I was hoping for a more Physics-based answer. Any ideas?
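The element-averaging scheme described above can be sketched in a few lines (an illustration, not part of the original post; the node count and boundary temperatures are arbitrary choices). It relaxes to exactly the linear profile:

```python
# One-dimensional wall split into N nodes; the two faces are held at fixed
# temperatures and every interior node is repeatedly replaced by the average
# of its neighbours (Jacobi relaxation).
N = 11
T_left, T_right = 0.0, 100.0
T = [0.0] * N                      # every element starts at T_0 = 0
T[0], T[-1] = T_left, T_right

for _ in range(5000):              # iterate until the profile stops changing
    T = [T[0]] + [0.5 * (T[i - 1] + T[i + 1]) for i in range(1, N - 1)] + [T[-1]]

# the steady state is the linear profile predicted by Fourier's law
expected = [T_left + (T_right - T_left) * i / (N - 1) for i in range(N)]
assert max(abs(a - b) for a, b in zip(T, expected)) < 1e-6
```

The averaging rule is just a discrete statement that the heat flux into each element equals the flux out, which is why it converges to the same linear answer the physics gives.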
Answer:
This doesn't use very much reasoning, and I was hoping for a more Physics-based answer. Any ideas?
The linear temperature gradient is simply the consequence of Fourier's Law of heat conduction:
$$\vec{q}=-k\frac{dT}{dx}$$
where $\vec{q}$ is the local heat flux density, equal to the product of the thermal conductivity $k$ and the negative of the local temperature gradient $\frac{dT}{dx}$. Integrated, $T$ becomes a linear function of $x$, so:
$$\frac{\Delta T}{\Delta x}=-\frac{\vec{q}}{k}$$
If $\vec{q}=\text{constant}$. | {
"domain": "physics.stackexchange",
"id": 27578,
"tags": "thermodynamics"
} |
How to create a map data for navigation stack using kinect | Question:
I'd like to ask: how do I use pointcloud_to_laserscan, and can the rgbdslam package be used for creating a map for the navigation stack?
I'd like to create map data with a Kinect sensor for navigation. I've already read some tutorials and found that I have to use laser scan data rather than point cloud data. pointcloud_to_laserscan can convert the data, but the usage is not described in the wiki. Could you teach me how to use it?
And I found that rgbdslam can create a map using a Kinect.
Is this package able to generate a map for the navigation stack?
Originally posted by moyashi on ROS Answers with karma: 721 on 2012-06-24
Post score: 0
Original comments
Comment by prince on 2012-06-25:
Had looked at this similar question http://answers.ros.org/question/9450/how-to-setup-pointcloud_to_laserscan/
Comment by Felix Endres on 2012-06-26:
Also see this: http://answers.ros.org/question/12995/2d-map-from-octomap/
Answer:
RGBDSlam is the best choice if you have both the RGB and depth streams, but Depthimage_to_laserscan is better if you only have the depth data (my project is like this).
Depthimage_to_laserscan requires two things, the depth image topic and the camera_info topic (see monocular camera calibration tutorials). You'll also need to set up a coordinate frame to reference the kinect to. For example, with my stationary kinect 360, I use these:
rosrun tf static_transform_publisher 0 0 0 0 0 0 map camera_depth_frame 20
rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/kinect/mono16/image_raw camera_info:=/kinect/mono16/camera_info _scan_height:=20 _scan_time:=0.167 _range_min:=0.75 _range_max:=2.5
You'd probably want to use a larger scan height if possible, but my slow little Raspberry Pi cannot handle much more than 640x40 at 6 fps, so I'm limited in height.
After that, you'll see a new /scan topic. Use that in place of a laser scanner as described in the navigation stack tutorials: http://wiki.ros.org/navigation/Tutorials/RobotSetup
Originally posted by TimboInSpace with karma: 141 on 2015-03-23
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9928,
"tags": "ros, slam, navigation, pointcloud-to-laserscan"
} |
Adding acid to hydrates | Question: What would happen if I added $\ce{Na2CO3 \cdot n(H2O)}$ to a solution of $\ce{HCl}$? Could I consider the reaction the same as just $\ce{Na2CO3 + HCl -> NaCl + CO2 + H2O}$?
Answer: There is no difference in the properties of salt hydrates and non-hydrates after they are dissolved in water, as they are hydrated to the same state. So, yes, you can.
However, there is a difference in the reactions of salt hydrates and non-hydrates outside water solutions.
"domain": "chemistry.stackexchange",
"id": 132,
"tags": "inorganic-chemistry"
} |
Does zinc react with nitrogen? | Question: Does zinc react with nitrogen?
Is this reaction possible:
$\ce{3Zn + N2 -> Zn3N2}$
Or maybe there is another reaction?
Answer: Wikipedia gives several synthetic routes to zinc nitride. One of these routes is electrical discharge between zinc electrodes in a nitrogen atmosphere. Synthesis without electrical discharge generally uses ammonia as the ultimate nitrogen source. | {
"domain": "chemistry.stackexchange",
"id": 14575,
"tags": "metal"
} |
What is the speed of light in case of Critical Angle? | Question: When light travels from an optically denser to a rarer medium, it bends away from the normal and at a specific angle of incidence, the angle of refraction is ${90}^{\circ}$. When the angle of incidence is equal to the Critical angle, the refracted ray grazes the path of separation of the two media. So at this angle, the speed of light would be that of the denser medium or that of the rarer medium?
Answer: Based on this answer, the transmission coefficients of Fresnel's equations are $0$ and thus no light is refracted. Therefore any light that exists travels at the speed of the denser medium, because all of the light got reflected.
However, if light is travelling along a boundary (more generally), the beam has an actual width, so anything in the rarer medium will travel at the speed light travels in that medium, and vice versa for the denser medium. No light can travel in the zero-width boundary itself.
"domain": "physics.stackexchange",
"id": 99232,
"tags": "optics, visible-light, speed-of-light, reflection, refraction"
} |
What does it mean to transform as a scalar or vector? | Question: I'm working through an introductory electrodynamics text (Griffiths), and I encountered a pair of questions asking me to show that:
the divergence transforms as a scalar under rotations
the gradient transforms as a vector under rotations
I can see how to show these things mathematically, but I'd like to gain some intuition about what it means to "transform as a" vector or scalar. I have found definitions, but none using notation consistent with the Griffiths book, so I was hoping for some confirmation.
My guess is that "transforms as a scalar" applies to a scalar field, e.g. $T(y,z)$ (working in two dimensions since the questions in the book are limited to two dimensions).
It says that if you relabel all of the coordinates in the coordinate system using:
$$\begin{pmatrix}\bar{y} \\ \bar{z}\end{pmatrix} = \begin{pmatrix}\cos\phi & \sin\phi \\ -\sin\phi & \cos\phi\end{pmatrix} \begin{pmatrix}y \\ z\end{pmatrix}$$
so $(\bar{y},\bar{z})$ gives the relabeled coordinates for point $(y,z)$, then: $$\bar{T}(\bar{y},\bar{z}) = T(y,z)$$
for all y, z in the coordinate system, where $\bar{T}$ is the rotated scalar field. Then I thought perhaps I'm trying to show something like this?
$$\overline{(\nabla \cdot T)}(\bar{y},\bar{z})=(\nabla \cdot T)(y,z) $$
where $\overline{(\nabla \cdot T)}$ is the rotated gradient of $T$.
The notation above looks strange to me, so I'm wondering if it's correct. I'm also quite curious what the analogous formalization of "transforms as a vector field" would look like.
Answer: There are a number of ways of mathematically formalizing the notions "transforming as a vector" or "transforming as a scalar" depending on the context, but in the context you're considering, I'd recommend the following:
Consider a finite number of types of objects $o_1, \dots, o_n$, each of which lives in some set $O_i$ of objects, and each of which is defined to transform in a particular way under rotations. In other words, given any rotation $R$, for each object $o_i$ we have a mapping which, when acting on objects in $O_i$, tells us what happens to them under the rotation $R$:
\begin{align}
o_i \mapsto o_i^R = \text{something we specify}
\end{align}
For example, if $o_1$ is just a vector $\mathbf r$ in three dimensional Euclidean space $\mathbb R^3$, then one would typically take
\begin{align}
\mathbf r \mapsto \mathbf r^R = R\mathbf r.
\end{align}
Each mapping $o_i\mapsto o_i^R$ is what a mathematician would call a group action of the group of rotations on the set $O_i$ (there are more details in defining a group action which we ignore here). Once we have specified how these different objects $o_i$ transform under rotations, we can make the following definition:
Definition. Scalar under rotations
Let any function $f:O_1\times O_2\times\cdots \times O_n\to \mathbb R$ be given; we say it is a scalar under rotations provided
\begin{align}
f(o_1^R, \dots o_n^R) = f(o_1, \dots o_n).
\end{align}
This definition is intuitively just saying that if you "build" an object $f$ out of a bunch of other objects $o_i$ whose transformation under rotations you have already specified, then the new object $f$ which you have constructed is considered a scalar if it doesn't change when you apply a rotation to all of the objects it's built out of.
Example. The dot product
Let $n=2$, and let $o_1 = \mathbf r_1$ and $o_2 = \mathbf r_2$ both be vectors in $\mathbb R^3$. We define $f$ as follows:
\begin{align}
f(\mathbf r_1, \mathbf r_2) = \mathbf r_1\cdot \mathbf r_2.
\end{align}
Is $f$ a scalar under rotations? Well let's see:
\begin{align}
f(\mathbf r_1^R, \mathbf r_2^R)
= (R\mathbf r_1)\cdot (R\mathbf r_2)
= \mathbf r_1\cdot (R^TR\mathbf r_2)
= \mathbf r_1\cdot \mathbf r_2
= f(\mathbf r_1, \mathbf r_2)
\end{align}
so yes it is!
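This rotational invariance is easy to verify numerically; here is a minimal two-dimensional sketch (the vectors and angle are arbitrary choices, not from the original answer):

```python
import math

def rot(phi, v):
    """Apply the 2-D rotation R(phi) to the vector v."""
    c, s = math.cos(phi), math.sin(phi)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

r1, r2, phi = (1.0, 2.0), (-0.5, 3.0), 0.9

# f(r1^R, r2^R) == f(r1, r2): the dot product is a scalar under rotations
assert abs(dot(rot(phi, r1), rot(phi, r2)) - dot(r1, r2)) < 1e-12
```

The check succeeds for any angle, which is the numerical counterpart of $R^T R = I$ in the derivation above.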
Now what about a field of scalars? How do we define such a beast? Well we just have to slightly modify the above definition.
Definition. Field of scalars
Let any function $f:O_1\times\cdots \times O_n\times\mathbb R^3\to \mathbb R$ be given. We call $f$ a field of scalars under rotations provided
\begin{align}
f(o_1^R, \dots, o_n^R)(R\mathbf x) = f(o_1, \dots, o_n)(\mathbf x).
\end{align}
You can think of this as simply saying that the rotated version of $f$ evaluated at the rotated point $R\mathbf x$ agrees with the unrotated version of $f$ evaluated at the unrotated point. Notice that this is formally the same as the equation you wrote down, namely $\bar T(\bar x, \bar y) = T(x,y)$.
Example. Divergence of a vector field
Consider the case that $\mathbf v$ is a vector field. Rotations are conventionally defined to act on vector fields as follows (I'll try to find another post on physics.SE that explains why):
\begin{align}
\mathbf v^R(\mathbf x) = R\mathbf v(R^{-1}\mathbf x)
\end{align}
Is its divergence a scalar field? Well to make contact with the definition we give above, let $f$ denote the divergence, namely
\begin{align}
f(\mathbf v)(\mathbf x) = (\nabla\cdot \mathbf v)(\mathbf x)
\end{align}
Now notice that using the chain rule we get (we use Einstein summation notation)
\begin{align}
(\nabla\cdot\mathbf v^R)(\mathbf x)
&= \nabla\cdot\big(R\mathbf v(R^{-1}\mathbf x)\big)\\
&= \partial_i(R_{ij}v_j(R^{-1}\mathbf x)) \\
&= R_{ij} \partial_i(v_j(R^{-1}\mathbf x)) \\
&= R_{ij}(R^{-1})_{ki}(\partial_k v_j)(R^{-1}\mathbf x)\\
&= (\nabla\cdot \mathbf v)(R^{-1}\mathbf x)
\end{align}
which implies that
\begin{align}
(\nabla\cdot\mathbf v^R)(R\mathbf x) = (\nabla\cdot \mathbf v)(\mathbf x),
\end{align}
but the left hand side is precisely $f(\mathbf v^R)(R\mathbf x)$ and the right side is $f(\mathbf v)(\mathbf x)$ so we have
\begin{align}
f(\mathbf v^R)(R\mathbf x) = f(\mathbf v)(\mathbf x).
\end{align}
This is precisely the condition that $f$ (the divergence of a vector field) be a scalar field under rotations.
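The same identity can be spot-checked with finite differences; a small two-dimensional sketch (the sample field $\mathbf v(x,y) = (xy,\ x+y^2)$, the angle, and the evaluation point are illustrative assumptions):

```python
import math

def rot(phi, p):
    """Apply the 2-D rotation R(phi) to the point/vector p."""
    c, s = math.cos(phi), math.sin(phi)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def v(p):
    """Sample vector field v(x, y) = (x*y, x + y^2); its divergence is 3y."""
    x, y = p
    return (x * y, x + y * y)

def v_rot(phi, p):
    """Rotated field, as defined in the answer: v^R(x) = R v(R^{-1} x)."""
    return rot(phi, v(rot(-phi, p)))

def div(field, p, h=1e-6):
    """Central-difference divergence of a 2-D field at point p."""
    x, y = p
    dvx = (field((x + h, y))[0] - field((x - h, y))[0]) / (2 * h)
    dvy = (field((x, y + h))[1] - field((x, y - h))[1]) / (2 * h)
    return dvx + dvy

phi, x0 = 0.7, (1.3, -0.4)
lhs = div(lambda p: v_rot(phi, p), rot(phi, x0))  # (div v^R)(R x)
rhs = div(v, x0)                                  # (div v)(x)
assert abs(lhs - rhs) < 1e-5
```

Both sides come out to $3y = -1.2$ at the chosen point, matching the claim that the divergence is a scalar field under rotations.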
Extension to vectors and vector fields.
To define a vector under rotations, and a field of vectors under rotations, we follow a very similar procedure, but instead we have functions $\mathbf f:O_1\times O_2\times\cdots \times O_n\to \mathbb R^3$ and $\mathbf f:O_1\times O_2\times\cdots \times O_n\times\mathbb R^3\to \mathbb R^3$ respectively (in other words, the right hand side of the arrow gets changed from $\mathbb R$ to $\mathbb R^3$), and the defining equations for a vector and a field of vectors become
\begin{align}
\mathbf f(o_1^R, \dots o_n^R) = R\,\mathbf f(o_1, \dots o_n).
\end{align}
and
\begin{align}
\mathbf f(o_1^R, \dots, o_n^R)(R\mathbf x) = R\,\mathbf f(o_1, \dots, o_n)(\mathbf x)
\end{align}
respectively. In other words, there is an extra $R$ multiplying the right hand side. | {
"domain": "physics.stackexchange",
"id": 18715,
"tags": "vectors, representation-theory, covariance"
} |
Transform Bitmap object into jagged array of RGB tuple values | Question: I've been working on this module with the assistance of codereview, and based on the responses and my own research, this is my project so far. The goal behind the code is that it starts with a .NET bitmap object, extracts the byte array of image data from the bitmap, then transforms it into an array of arrays or matrix.
The idea behind putting the bitmap data into the matrix is that I want to be able to use a simple representation of a coordinate, like Array[10][20], to get the RGB value for the pixel located at the (10, 20) coordinate. Rather than using a proper 2D array, I'm choosing to use an array of arrays, or "jagged" array, for speed concerns.
Your assistance with sanity, style, F# conventions, and any other problems you spot is appreciated in helping further refine this algorithm.
open System
open System.Drawing
open System.Drawing.Imaging
open System.Runtime.InteropServices
module ImageSearch =
let LoadBitmapIntoArray (pBitmap:Bitmap) =
let tBitmapData = pBitmap.LockBits( Rectangle(Point.Empty, pBitmap.Size),
ImageLockMode.ReadOnly,
PixelFormat.Format24bppRgb )
let tImageArrayLength = Math.Abs(tBitmapData.Stride) * pBitmap.Height
let tImageDataArray = Array.zeroCreate<byte> tImageArrayLength
Marshal.Copy(tBitmapData.Scan0, tImageDataArray, 0, tImageArrayLength) // copy from unmanaged bitmap memory into the managed array
pBitmap.UnlockBits(tBitmapData)
pBitmap.Width, tImageDataArray
let Transform2D ( (pArrayWidth:int), (pArray:byte[]) ) =
let tHeight = pArray.Length / ( pArrayWidth * 3 ) // 3 bytes per RGB
let tImageArray = [|
for tHeightIndex in [ 0 .. tHeight - 1] do
let tStart = tHeightIndex * pArrayWidth * 3 // 3 bytes per pixel, so the row stride is width * 3
let tFinish = tStart + pArrayWidth * 3 - 1
yield [|
for tWidthIndex in [tStart .. 3 .. tFinish] do
yield ( pArray.[tWidthIndex]
, pArray.[tWidthIndex + 1]
, pArray.[tWidthIndex + 2] )
|]
|]
tImageArray
let SearchBitmap (pSmallBitmap:Bitmap) (pLargeBitmap:Bitmap) =
let tSmallArray = Transform2D <| LoadBitmapIntoArray pSmallBitmap
let tLargeArray = Transform2D <| LoadBitmapIntoArray pLargeBitmap
let mutable tMatchFound = false
Answer: By convention, F# function names are camelCase, for example transform2D instead of Transform2D.
Hungarian notation is dead and buried. For example, use bitmap instead of pBitmap.
If ImageSearch is a top-level module, declarations do not have to be indented.
Your for...in expressions can be written without brackets. For example, instead of
for tHeightIndex in [ 0 .. tHeight - 1] do
write
for heightIndex in 0 .. height - 1 do
Similarly, instead of
for tWidthIndex in [tStart .. 3 .. tFinish] do
write
for widthIndex in start .. 3 .. finish do
This is not just a style issue, the different versions actually compile to different IL.
The final expression of a function is its return value, so instead of
let tHeight = ...
let tImageArray = [| ... |]
tImageArray
you can write
let height = ...
[| ... |]
In type annotations, it's conventional to have a space on either side of the :, for example instead of
let Transform2D ( (pArrayWidth:int), (pArray:byte[]) ) =
write
let transform2D (width : int, array : byte[]) =
I can't actually back up this last point. MSDN and FSharp.Core don't seem to stick to one style, even in the same function
let isProperSuperset (x:Set<'T>) (y: Set<'T>) = SetTree.psubset x.Comparer y.Tree x.Tree | {
"domain": "codereview.stackexchange",
"id": 8645,
"tags": "image, f#"
} |
Move_base PointCloud2 | Question:
I get the following error with move_base
Client [/move_base] wants topic /zed/point_cloud/cloud_registered to have datatype/md5sum [sensor_msgs/PointCloud/d8e9c3f5afbdd8a130fd1d2763945fca], but our version has [sensor_msgs/PointCloud2/1158d486dd51d683ce2f1be655c3c181]. Dropping connection.
Obviously it does not like PointCloud2. Is this a configuration problem or do I need to do a conversion to PointCloud?
Originally posted by phil333 on ROS Answers with karma: 16 on 2017-10-12
Post score: 0
Original comments
Comment by andymcevoy on 2017-10-13:
maybe this or this question would help?
Answer:
Found a solution, there is a node which has a configurable launch file in order to convert from PointCloud2 to PointCloud or the other way round:
https://github.com/pal-robotics-forks/point_cloud_converter
Originally posted by phil333 with karma: 16 on 2017-10-13
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 29070,
"tags": "navigation, pcl, move-base, pointcloud"
} |
How does virtual particle explanation of Hawking radiation contradict with consistent loop description? | Question:
the picture of virtual particle pairs is categorically not the right way to think about Hawking radiation. Quite obviously it must be wrong, because it is a loop level effect, and loops in QFT have to close, which they don't in this heuristic picture. (in the comment by Edward Hughes from What are the virtual particles generated during the Hawking radiation?)
Can anyone explain how virtual particle explanation of Hawking radiation goes against consistent loop description - why loop is not being closed?
Answer: It is true that vacuum loops of particle-antiparticle generation can have a measurable effect on cross sections and other measurements, for interactions that already occur, in higher-order Feynman diagrams.
For example:
Two example eighth-order Feynman diagrams that contribute to the electron self-interaction. The horizontal line with an arrow represents the electron while the wavy lines are virtual photons, and the circles represent virtual electron–positron pairs.
The loops are particle antiparticle pairs, so they have the quantum numbers of the vacuum. The energy for their mathematical existence comes from the line of the real electron, which has an energy and momentum vector on mass shell.
It is a heuristic picture to talk about vacuum loops generating e+e- pairs at the black hole horizon, because the leaving partner (e+ or e-) takes away energy and momentum, which must be supplied by the ambient fields about the horizon, ultimately by the energy lost from the black hole. This is a better diagram:
Feynman diagram of the Hawking process for particle creation by a black hole. The heavy, zigzag solid lines represent the worldlines of electrons ($e^-$) and of a positron ($e^+$). The heavy, vertical dashed line on the left represents the event horizon of the black hole, and the light, vertical dashed line on the right represents an effective emissive surface, for the case of electron emission. Points A and B represent pair creation and pair annihilation, respectively [12]. The "Compton layer" has a thickness on the order of the Compton wavelength, $h/mc$. The coordinate system being used here is that of a freely falling observer orbiting around the black hole just outside of its horizon.
Here is how Hawking% describes it :
Just outside the event horizon [of a black hole] there will be virtual pairs of particles, one with negative energy and one with positive energy. The negative particle is in a region which is classically forbidden but it can tunnel [emphasis added] through the event horizon to the region inside the black hole where the Killing vector which represents time translations is spacelike. In this region the particle can exist as a real particle with a timelike momentum vector even though its energy relative to infinity as measured by the time translation Killing vector is negative. The other particle of the pair, having a positive energy, can escape to infinity where it constitutes a part of the thermal emission described above.”
The vacuum loops, assumed to exist due to the uncertainty principle, are an easy graphic for the appearance of particle pairs, but not really true, since in order for a real particle to appear, energy and momentum must be supplied, and there will no longer be a loop.
% page 202 of the work referred to as ref. 1 here
"domain": "physics.stackexchange",
"id": 47490,
"tags": "black-holes, feynman-diagrams, hawking-radiation, virtual-particles, qft-in-curved-spacetime"
} |
Model Plugin with QT objects, is it possible? | Question:
Can Qt objects (e.g. a QMainWindow) be declared within a ModelPlugin, and if so, can an example be provided? I want to create a plugin to control a model's joints so that when the model is inserted in Gazebo, a new window with personalized controls is displayed.
In the header file, alongside the class that inherits from ModelPlugin, I tried declaring another class that inherits from QMainWindow. It compiles just fine, but when the plugin is included in an SDF model and I insert it into Gazebo, it fails to load the plugin with an undefined symbol error for the class inheriting from QMainWindow.
Originally posted by PML on Gazebo Answers with karma: 15 on 2015-03-15
Post score: 0
Answer:
Hi,
I am not sure if you can do it from a Model Plugin, if you manage let me know ;)
However, you can access the QT parts via the GUI overlay.
Cheers,
Andrei
Originally posted by AndreiHaidu with karma: 2108 on 2015-03-16
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by nkoenig on 2015-03-16:
You can't use Qt objects from a model plugin. As Andrei mentioned, a GUI overlay is the correct way to go.
Comment by PML on 2015-03-16:
Alright, thanks for the answers.
The only way to interact with a model in this case is by using the transport layer, correct? | {
"domain": "robotics.stackexchange",
"id": 3734,
"tags": "gazebo"
} |
Best place to take off vacuum for distillation | Question: I have seen vacuum take-off adapters that are essentially plugs, which could be placed at the top of a three-way above the fractionating column, and also straight and bent vacuum take-offs with an inner stem that seem meant to be placed right before the collection vessel.
Would there be any difference in which place is better to vacuum from? Would it be best to maybe use a T-connector on the vacuum line and vacuum the top of the fractionating column as well as the collecting flask?
Answer: Assuming that you are using a typical distillation setup (boiling container, fractionating column, distillation head, condensation tube, collecting adapter, and collecting vessel), and that the "three-way" you are talking about is the distillation head, it is much better to put the vacuum take-off adapter right before your collecting vessel. The reason is that this setup creates a vapor flow that goes through your condensation tube, and most of your fractions will be condensed and collected.
If you attach the vacuum take off adapter on top of your distillation head, most of your vapor will take shortcut to your pump through your adapter instead of going to the condensation tube and the collecting vessel (this direction is a dead end).
Actually the top of the distillation head is the place to put your thermometer to measure the temperature of vapor, not for anything else. | {
"domain": "chemistry.stackexchange",
"id": 554,
"tags": "purification"
} |
Better Way to Show and Hide Items in Project | Question: I have this code snippet below that I am looking for a way to make smaller if possible. Any help or suggestions would be appreciated, thanks!
Private Sub ShowAlternateRows(Show As Boolean)
If Show = True Then
splitter.Visible = True
lblAlternateActualPercent.Visible = True
lblAltEstimated.Visible = True
lblStaticAltEstimated.Visible = True
Grid.Cols.Item("Alternate Type").Visible = True
Grid.Cols.Item("Alternate Start").Visible = True
Grid.Cols.Item("Alternate End").Visible = True
Grid.Cols.Item("Alternate Hours").Visible = True
Grid.Cols.Item("Alternate Burn").Visible = True
Grid.Cols.Item("Start Estimated").Visible = False
Grid.Cols.Item("End Estimated").Visible = False
Grid.Cols.Item("Start Actual").Visible = False
Grid.Cols.Item("End Actual").Visible = False
Grid.Cols.Item("Estimated Hours").Visible = False
Grid.Cols.Item("Actual Hours").Visible = False
Grid.Cols.Item("Burn Estimated").Visible = False
Grid.Cols.Item("Object Count").Visible = False
Grid.Cols.Item("Request Count").Visible = False
Grid.Cols.Item("Resource Count").Visible = False
lblStaticActual.Visible = False
lblActual.Visible = False
lblActualPercent.Visible = False
lblStaticEstimated.Visible = False
lblEstimated.Visible = False
lblStaticFiltered.Visible = False
lblFiltered.Visible = False
lblFilteredPercent.Visible = False
Else
splitter.Visible = False
lblAlternateActualPercent.Visible = False
lblAltEstimated.Visible = False
lblStaticAltEstimated.Visible = False
lblStaticActual.Visible = True
lblActual.Visible = True
lblActualPercent.Visible = True
lblStaticEstimated.Visible = True
lblEstimated.Visible = True
lblStaticFiltered.Visible = True
lblFiltered.Visible = True
lblFilteredPercent.Visible = True
Grid.Cols.Item("Start Estimated").Visible = True
Grid.Cols.Item("End Estimated").Visible = True
Grid.Cols.Item("Start Actual").Visible = True
Grid.Cols.Item("End Actual").Visible = True
Grid.Cols.Item("Estimated Hours").Visible = True
Grid.Cols.Item("Actual Hours").Visible = True
Grid.Cols.Item("Burn Estimated").Visible = True
Grid.Cols.Item("Object Count").Visible = True
Grid.Cols.Item("Request Count").Visible = True
Grid.Cols.Item("Resource Count").Visible = True
Grid.Cols.Item("Alternate Type").Visible = False
Grid.Cols.Item("Alternate Start").Visible = False
Grid.Cols.Item("Alternate End").Visible = False
Grid.Cols.Item("Alternate Hours").Visible = False
Grid.Cols.Item("Alternate Burn").Visible = False
End If
End Sub
Answer: Assuming OPTION STRICT OFF (since I don't know the type of Grid.Cols.Item):
Private Sub ShowAlternateRows(show As Boolean)
Dim grpShow =
{
splitter, lblAlternateActualPercent,
lblAltEstimated, lblStaticAltEstimated,
Grid.Cols.Item("Alternate Type"), Grid.Cols.Item("Alternate Start"),
Grid.Cols.Item("Alternate End"), Grid.Cols.Item("Alternate Hours"),
Grid.Cols.Item("Alternate Burn")
}
Dim grpHide =
{
Grid.Cols.Item("Start Estimated"), Grid.Cols.Item("End Estimated"),
Grid.Cols.Item("Start Actual"), Grid.Cols.Item("End Actual"),
Grid.Cols.Item("Estimated Hours"), Grid.Cols.Item("Actual Hours"),
Grid.Cols.Item("Burn Estimated"), Grid.Cols.Item("Object Count"),
Grid.Cols.Item("Request Count"), Grid.Cols.Item("Resource Count"),
lblStaticActual, lblActual,
lblActualPercent, lblStaticEstimated,
lblEstimated, lblStaticFiltered,
lblFiltered, lblFilteredPercent
}
For Each ctrl in grpShow
ctrl.Visible = show
Next
For Each ctrl in grpHide
ctrl.Visible = Not show
Next
End Sub | {
"domain": "codereview.stackexchange",
"id": 4775,
"tags": "vb.net"
} |
Algorithm Analysis of Three nested loop | Question: I'm trying to figure out Time function and Big O of a nested loop code,
public static int example5(int[ ] first, int[ ] second) { // assume equal-length arrays
int n = first.length, count = 0;
for (int i=0; i < n; i++) { // loop from 0 to n-1
int total = 0;
for (int j=0; j < n; j++) // loop from 0 to n-1
for (int k=0; k <= j; k++) // loop from 0 to j
total += first[k];
if (second[i] == total) count++;
}
return count;
}
According to my calculations the time function should lie in $n^4$, but the right answer is $n^3$.
Here are my solution steps,
Constant operations = 2 times
i-loop = 0 to n-1 = n times (although it's 2n+2, but I would pick the most higher order term)
Constant operation = 1 time
j-loop = 0 to n-1 = n x n times = $n^2$ times
k-loop = 1+2+3+...+n-1+n = n x n x $n\frac{n+1}{2}$ times = $\frac{n^4 + n^3}{2}$ times
What I'm doing wrong, can someone please point out.
Answer: Three for loops gets you to n^3.
EDIT:
for (int i=0; i < n; i++) // first n
..etc..
for (int j=0; j < n; j++) // second n
...etc...
Outer two are n, that makes n^2 so far. Next look at inner loop:
for (int k=0; k <= j; k++) // third n
Innermost counts as another n for big O purposes even if running k to j instead of k to n. You're still on the order of n^3.
For that inner loop you're doing 1 + 2 + 3 ... loops as j iterates, so strictly speaking it's not exactly n^3 iterations, but n * n * (n+1)/2, but for big O purposes don't let that throw you off, it's n^3, the (n+1)/2 piece is just another layer of n. If n was 1,000,000 you'll end up with "on the order of" 10^18 loops.
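To double-check, one can count the innermost-statement executions directly (a Python sketch, not part of the original answer):

```python
def iterations(n):
    """Count how many times `total += first[k]` executes in example5."""
    count = 0
    for i in range(n):              # outer i-loop: n passes
        for j in range(n):          # j-loop: n passes
            for k in range(j + 1):  # k-loop: j + 1 passes
                count += 1
    return count

n = 100
assert iterations(n) == n * n * (n + 1) // 2   # exactly n * n(n+1)/2
assert abs(iterations(n) / n**3 - 0.5) < 0.01  # i.e. Theta(n^3)
```

For n = 100 this gives 505,000 executions, about n³/2, confirming the n³ order rather than n⁴.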
Hope this helps. | {
"domain": "cs.stackexchange",
"id": 16081,
"tags": "algorithms, complexity-theory, time-complexity, algorithm-analysis, runtime-analysis"
} |
What experiment should be conducted? | Question: I'm self-studying physics and came to this question in my textbook:
A passenger in a moving bus with no windows notices that a ball that had been at rest on the aisle suddenly starts to move towards the rear of the bus. Think of two different possible explanations and devise a way to decide which is correct.
Now the two possible scenarios are clearly (1) the bus is speeding up and (2) The bus starts traveling uphill. But I do not know what experiment should be devised to see which it is. I feel like it has something to do with inertial frames. i.e. the first option for the motion of the ball is because the inertial frame is accelerated whereas the second is because all of a sudden the force of gravity began acting on the ball in the horizontal direction (w.r.t. the surface of the bus).
But I am having a hard time thinking of an experiment that can deduce which is occurring. Obviously one "experiment" is to look at the speedometer, but I have a feeling that is not what the point of the question is. I was thinking of maybe dropping the ball? And then observing its flight path? If the bus is accelerating then the ball will fall straight down whereas if the bus is traveling uphill then the ball will have a slanted path toward the ground. Would this work and/or is there a better approach?
Answer: Given the two scenarios:
The bus is moving at steady state speed on a flat ground and suddenly accelerates forward
The bus is moving at a steady state speed on a flat ground and suddenly encounters an upward slope
You can note that the force diagram on the ball would be different given each situation.
If we are living in scenario (1), you would expect that when hung from a scale that could rotate freely (like those you can find at a grocery store), the scale would tilt towards the back of the bus, and show a force higher than simply the weight of the ball itself. This is because the scale has to resist not only the acceleration due to gravity, but also the extra horizontal acceleration from putting your foot on the gas pedal (and you could use the Pythagorean theorem to calculate this increase in force, due to the vectors being orthogonal).
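The scale reading in scenario (1) can be sketched numerically; the mass and acceleration values below are illustrative assumptions, not from the original answer:

```python
import math

m, g = 0.50, 9.81  # illustrative ball mass (kg) and gravitational acceleration (m/s^2)
a = 2.0            # assumed forward acceleration of the bus (m/s^2)

# While the bus accelerates, the scale must resist gravity and the horizontal
# acceleration; the two are orthogonal, so the magnitude of the reading
# follows from the Pythagorean theorem.
reading_accelerating = m * math.hypot(g, a)

weight = m * g     # what the scale reads at steady speed

assert reading_accelerating > weight
```

With these numbers the accelerating-bus reading exceeds the plain weight by about 2%, which a grocery-store scale could plausibly resolve.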
However, if we are living in scenario (2), you would imagine that this same rotating scale would simply tilt towards the back of the bus, but read no increase in force from the ball. This is because we are moving at steady-state speed, and the coordinates of our system are merely rotated due to the hill. | {
"domain": "physics.stackexchange",
"id": 98889,
"tags": "newtonian-mechanics, reference-frames, inertial-frames, inertia"
} |
Can Boolean circuits of polylog depth represent all Boolean functions? | Question: Consider a Boolean circuit using (2-input) logical-and, (2-input) logical-or and logical-not as basic components. The depth of the Boolean circuit is the length of the longest path from the input to the output. I wonder if Boolean circuits with a depth Polylog in the number of inputs are sufficient to express any Boolean function. I only know that a depth of $O(n)$ is sufficient ($n$ is the number of inputs), by using the disjunctive normal form to construct a Boolean circuit.
Note that this question is different from the $NC$ complexity, since in $NC^i$ the size of the Boolean circuit is also constrained to be polynomial, while this question does not constrain the size. Thank you!
Answer: A $k$-ary circuit of depth $d$ has size at most $k^d$, hence polylog-depth circuits of fan-in $2$ have quasipolynomial size. Thus, the vast majority of Boolean functions cannot be computed by such circuits, as most functions require exponential circuit size $\Omega(2^n/n)$. | {
"domain": "cs.stackexchange",
"id": 20032,
"tags": "circuits, boolean-complexity"
} |
Hypercharge of the Higgs field | Question: I am puzzled by the hypercharge of the Higgs field. Under the entry "Higgs Mechanism" in Wikipedia, it is written:
Its (weak hypercharge) U(1) charge is 1
However, under the entry "Higgs Boson" in Wikipedia, it is written:
... while the field has charge +1/2 under the weak hypercharge U(1) symmetry
Moreover, on page 527 of Srednicki's textbook "Quantum Field theory", it is written:
... and the complex scalar field $\varphi$, known as the Higgs field, in the representation $(2, -\frac{1}{2})$
here the hypercharge becomes $-\frac{1}{2}$. How do these different hypercharge values come about? And, generally, how is the hypercharge of a field determined?
Answer: It is a non-issue, predicated on two conventions.
The historical convention defines it as
$ Y_{\rm W} = 2(Q - T_3)$, as in the Gell-Mann–Nishijima formula of the strong interactions, for a conserved quantity. There, it was frequently used for strange particles, so the hypercharge could get to be 2, −2, etc., and a normalization like this one was warranted. In the weak interactions, thus, the weak hypercharge is defined as twice the average charge of a weak isomultiplet (where the average $T_3$ vanishes).
However, the more practical younger generation use $ Y_{\rm W} = (Q - T_3)$, instead, so the average charge of the isomultiplet, so, e.g., for right-handed fermions, weak isosinglets, the hypercharge is the charge, without daffy gratuitous 2s in the way. But it is only a matter of convention, and references such as the one you quote also specify the convention, as they should.
Response to comment on conventions Recall both the Higgs entry in WP (Peskin & Schroeder conventions), and Srednicki's text are "modern", so the hypercharge is the average charge of the weak isomultiplet. Since, however, P&S put the v.e.v. downstairs in the Higgs doublet, that is the neutral component, so the upper one is charge +1, hence hypercharge 1/2. By contrast, Srednicki puts the v.e.v. upstairs, (87.4), so the lower component has charge -1, hence hypercharge -1/2. The averaging halves the units since one of the two components is neutral! A rule of thumb: to unconfuse yourself on such conventions, always, always, always write down the Yukawa term that generates a mass for the charged lepton through its v.e.v. and monitor the charges and hypercharges of all fields, so the term conserves charge and hypercharge, as I'm sure you were trained to do in class. | {
"domain": "physics.stackexchange",
"id": 38320,
"tags": "standard-model, conventions, representation-theory, higgs, electroweak"
} |
What is meant by "vastness of space, which now filled a volume of a hundred million light years"? | Question: I am reading the book Cosmic biology: How life could evolve on other worlds (citation below) and do not understand the meaning of the following paragraph from page 4 (emphasis mine):
It took 200,000 years or more for the universe to cool to 2700 °C, a temperature at which the protons and neutron-proton aggregates could start capturing the free flying electrons to form the first atoms of hydrogen and helium. As the dense fog of electrons condensed into newly formed atoms, the opaqueness cleared and photons could stream without interruption across the vastness of space, which now filled a volume of a hundred million light years.
I am under the assumption that a light-year is a unit of measurement for distance and spatial volume is expressed as though it fills a cube (e.g., 100 ly^3). Even if this practice of denoting volume was unintentionally omitted, the claim does not satisfy my attempt to calculate the spherical volume:
Given 200,000 years of elapsed time since Big Bang -> photons could have traveled a distance of 200,000 ly in any direction.
Formula for the volume of a sphere: $$\frac{4}{3} \pi r^3 = \frac43 \pi (200,000^3) \approx 3.35\times 10^{16}\ \mathrm{ly}^3.$$
This result is much larger than "[one] hundred million light years."
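The arithmetic can be checked in a couple of lines (Python, just reproducing the numbers above):

```python
from math import pi

r = 200_000                # naive light-travel distance, in light-years
V = 4 / 3 * pi * r ** 3    # volume of a sphere, in cubic light-years
print(f"{V:.3e}")          # ≈ 3.351e+16
```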
Am I misreading this paragraph?
Irwin, Louis Neal, and Dirk Schulze-Makuch. Cosmic biology: How life could evolve on other worlds. Springer Science & Business Media, 2010.
Answer: I think this is at best a weird typo, but more likely a confusion by the authors (who are not cosmologists or astronomers, but a biologist and a geologist).
The book is 10 years old, but our view of the timeline of the evolution of the Universe was pretty much the same then. Note that the calculation of the size of the Universe as a function of time is a bit more complicated than you assert, because it expands while photons are traveling.
The three numbers that the authors give ($200\,000\,\mathrm{yr}$, $2\,700\,^\circ\mathrm{C}$, and 100 million lightyears (Mlyr)) do not correspond to the same epochs, as I will discuss here:
Recombination began at t ~ 200,000 yr
The process the authors discuss is known as the recombination of hydrogen, followed by the decoupling of light from the gas. At an age of $200\,000\,\mathrm{yr}$, the Universe had cooled to $4\,500\,\mathrm{K}$, and atoms began to recombine. At this point, the region that is the observable Universe today had a radius of almost 30 Mlyr, i.e. a volume of some $10^5\,\mathrm{Mlyr}^3$. However, the observable Universe at that time was much smaller, because light hadn't traveled so far — the radius was less than half a Mlyr.
Photons decoupled at T ~ 2,700 ºC
The photons didn't decouple yet, though. Exactly when this happened is a somewhat extended period of time, but can be taken to be the time when the mean free path of a photon is of the order of the size of the observable Universe. This happened around when the Universe was $380\,000\,\mathrm{yr}$, at which time it had cooled to $3000\,\mathrm{K}$, or $2\,700\,^\circ\mathrm{C}$. At this time the radius of today's observable Universe was still only around 40 Mlyr, while the observable Universe at that time was still less than 1 Mlyr.
The diameter of the Universe reached 100 Mlyr much later
When the Universe was roughly half a million years old, the radius of today's observable Universe had expanded to 50 Mlyr, so its diameter was 100 Mlyr. But firstly, this was quite a lot later than the recombination/decoupling, and secondly it doesn't make much sense to refer to the size of today's observable Universe, since that had no physical significance at that time. The radius of the then-observable Universe didn't reach 100 Mlyr until the Universe was almost 20 million years old.
Summary in a figure
The figure below shows the progression of recombination, from ~100% ionized gas when the Universe was around $200\,000\,\mathrm{yr}$ old, to ~100% neutral gas when it was $\sim 400\,000\,\mathrm{yr}$ old. I calculated this by solving the Saha equation assuming a pure hydrogen gas, i.e. neglecting helium, but this is a minor contribution so I think it should be okay.
Along the line you see corresponding temperatures in red, and ratios between the photon mean free path and Hubble distance (~size of the Universe) in green. When this ratio is of order unity, photons decouple. The secondary $x$ axis shows the corresponding redshift. | {
"domain": "astronomy.stackexchange",
"id": 4700,
"tags": "big-bang-theory"
} |
Shor's implementation problem on qiskit | Question:
If q4-7 are all supposed to be eigenstates of the operation, why is it just that q7 is in $|1\rangle$? Shouldn't all qubits 4 to 7 be in the $|1\rangle$ state to kick back phases from the $Rz$ gates applied?
I'm trying to see what's inside each of those controlled operations, and as observed below it's a bunch of swap gates. How do swap gates do any meaningful computation? How do they kickback an important phase? I was expecting same quantum gates applied on the eigenstate such that an important phase gets kicked up, so I don't see the point in swap gates.
Answer: First question
There are two implementations of the Shor algorithm. The first one is depicted in this picture:
In this case, a function $a^x \mod N$ is implemented in $U_f$ gate and you are trying to find period in such function.
Another implementation (yours) uses phase estimation for finding the period. See this implementation on Wikipedia, for example.
In the first case, all qubits are initialized to $|0\rangle$. In the second case, as you can see in the Wikipedia link, the qubits used as input to the operator whose phase is estimated are initialized to $|1\rangle$. It seems that the $X$ gates are partly included in the controlled operators (sorry, I have not enough time to reconstruct the operators).
Second question
As the operators are controlled ones, the bottom qubits are entangled with the top qubits, which allows phase kick-back. As a result, you are able to measure the phase on the top qubits.
Concerning swap gates: even such gates are able to perform computation. You can see here how swap gates implement the modular multiplication used in the Shor algorithm. | {
"domain": "quantumcomputing.stackexchange",
"id": 2840,
"tags": "programming, qiskit, shors-algorithm"
} |
Appropriate way to store data in R | Question: I have data, which looks like this:
These data are only for one subject. I will have a lot more.
These data will be analyzed in R.
Now I'm storing them like this:
subject <- rep(1, times = 24)
measurement <- factor(x = rep(x = 1:3, each = 8),
labels = c("Distance", "Frequency", "Energy"))
speed <- factor(x = rep(x = 1:2, each = 4, times = 3),
labels = c("speed1", "speed2"))
condition <- factor(x = rep(x = 1:2, each = 2, times = 6),
labels = c("Control", "Experm"))
Try <- factor(x = rep(x = 1:2, times = 12),
labels = c("Try1", "Try2"))
result <- c(1:8,
11:18,
21:28)
dt <- data.frame(subject, measurement, speed, condition, Try, result)
What is the appropriate way to store these data in R (in a data frame)?
Answer: Without more information all I can say is that:
the way you're storing it is fine in general
you can further transform/store your data depending on your use case
To expand on #2, if I want to study Distance vs Energy across all subjects, then I would format my data like this:
> library(reshape2)
> dt2 <- dt[dt$measurement %in% c('Distance','Energy'),]
> dt_cast <- dcast(dt2, subject+Try~measurement+speed+condition, value.var='result')
The transformed data (dt_cast) would then look like:
subject Try Distance_speed1_Control Distance_speed1_Experm Distance_speed2_Control
1 1 Try1 1 3 5
2 1 Try2 2 4 6
Distance_speed2_Experm Energy_speed1_Control Energy_speed1_Experm Energy_speed2_Control
1 7 21 23 25
2 8 22 24 26
Energy_speed2_Experm
1 27
2 28
Allowing me to, for example, look at the relationship between the Distance_speed1_Control vs Energy_speed1_Control columns.
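For readers more at home in Python, the same wide reshape can be sketched with pandas (my own translation of the toy data and the dcast call above, not part of the original answer):

```python
import pandas as pd

# Rebuild the toy data from the question (subject 1 only).
df = pd.DataFrame({
    "subject": [1] * 24,
    "measurement": ["Distance"] * 8 + ["Frequency"] * 8 + ["Energy"] * 8,
    "speed": (["speed1"] * 4 + ["speed2"] * 4) * 3,
    "condition": (["Control"] * 2 + ["Experm"] * 2) * 6,
    "Try": ["Try1", "Try2"] * 12,
    "result": list(range(1, 9)) + list(range(11, 19)) + list(range(21, 29)),
})

# Equivalent of dcast(dt2, subject+Try ~ measurement+speed+condition):
dt2 = df[df["measurement"].isin(["Distance", "Energy"])]
wide = dt2.pivot_table(index=["subject", "Try"],
                       columns=["measurement", "speed", "condition"],
                       values="result")
```

The columns of `wide` match the `Distance_speed1_Control` etc. columns in the R output above, just as a MultiIndex instead of flattened names.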
Basically, subset/aggregate your data and then use dcast to get the rows and columns you need. | {
"domain": "datascience.stackexchange",
"id": 203,
"tags": "r, dataset, data-formats"
} |
Is my first MVC architecture set up to standards? | Question: I just started learning about the MVC architecture, and I started to create my own MVC framework, built the way I like it most.
Here I've got index.php, indexcontroller.php and view.php. I'm not too sure if what I am doing is right, or follows best practices. So I would like to know if there is anything I missed so far, or what I could improve.
Index.php (updated)
//report all php errors
error_reporting(E_ALL);
//define site root path
define('ABSPATH', dirname(__FILE__));
//include functions
foreach(glob(ABSPATH . '/functions/*.php') as $filename) {
require $filename;
}
//set config array
$config = parse_ini_file(ABSPATH . '/config.ini', true);
$config = json_decode(json_encode($config));
//auto load classes
spl_autoload_register('autoloadCore');
spl_autoload_register('autoloadController');
spl_autoload_register('autoloadModel');
//url to class router
Glue::stick((Array) $config->routes);
IndexController
class IndexController extends BaseController {
private $_view;
public function GET() {
$this->index();
}
public function POST() {
//don't handle post
}
public function __construct() {
$this->_view = new View();
}
public function index() {
$this->_view->welcome = 'Welcome!';
$this->_view->render('index');
}
}
View
class View {
private $_vars = [];
public function __set($index, $value) {
$this->_vars[$index] = $value;
}
function render($fileView) {
$path = ABSPATH . '/view/' . $fileView . '.php';
if(!file_exists($path)) {
throw new Exception('View not found: '. $path);
}
foreach($this->_vars as $key => $value) {
$$key = $value;
}
include ($path);
}
}
Answer: MVC approaches differ between languages, platforms and frameworks. They are usually termed "MV* frameworks" because they don't exactly follow strict MVC. But we call them MVC nonetheless.
Separate core logic from configurable logic
The way I understand your code, it looks like you've gone over CodeIgniter (or something similar). I assume all your requests run through index.php where the initial logic runs, like the helper loading, routing etc. For that, I suggest you separate the autoloader list and routing list to different files. You can load them via something like include. That way, you don't accidentally modify the core logic.
// autoload.php
$autoload = array(
'autoloadCore',
'autoloadController',
'autoloadModel'
);
// routes.php
$routes = array(
'/' => 'Index',
'/signup' => 'SignUp',
'/login' => 'Login'
);
Additionally, the word "Controller" might not be necessary for the routing. You already know that they always go to controllers. You might want to do that in the underlying logic instead, and keep it easy on the configurable parts.
Route lists cons
One thing to note with this routing strategy is that whenever you add a controller, you always need to list down the route. This approach is not that nice, especially when you are going to be handling a hundred routes (and trust me, it ain't a walk in the park).
Auto-route + custom routes
Why not automatically look for controllers based on a predefined convention (like CodeIgniter)? A route of /foobar/bam would route to FoobarController and execute the bam method. As for custom routes, you can map it like so:
$routes = array(
'autobam' => 'foobar/bam' // A route of /autobam executes the same as /foobar/bam
);
And the flow goes like:
- Parse route
- Check for match under custom
- If match, convert to equivalent route
- Use non-custom/equivalent route for locating the controller
- If none, throw error
- Else, load and execute | {
"domain": "codereview.stackexchange",
"id": 7298,
"tags": "php, object-oriented, design-patterns, mvc"
} |
NP-Complete problems that admit an efficient algorithm under the promise of a unique solution | Question: I was recently reading a very nice paper by Valiant and Vazirani which shows that if $\mathbf{NP \neq RP}$, then there can not be an efficient algorithm to solve SAT even under the promise that it is either unsatisfiable or has a unique solution. Thus showing that SAT does not admit an efficient algorithm even under the promise of there being at most one solution.
Through a parsimonious reduction (a reduction that preserves the number of solutions), it is easy to see that most NP-complete problems (I could think of) also do not admit an efficient algorithm even under the promise of there being at most one solution (unless $\mathbf{NP = RP}$). Examples would be VERTEX-COVER, 3-SAT, MAX-CUT, 3D-MATCHING.
Hence I was wondering if there was any NP-complete problem that was known to admit a poly-time algorithm under a uniqueness promise.
Answer: No NP-complete problem is known to admit a polynomial-time algorithm under a uniqueness promise. The Valiant–Vazirani theorem applies to any known natural NP-complete problem.
For all known NP-complete problems, there is a parsimonious reduction from 3SAT. Oded Goldreich states that "all known reductions among natural $NP$-complete problems are either parsimonious or can be easily modified to be so" (Computational Complexity: A Conceptual Perspective by Oded Goldreich).
| {
"domain": "cstheory.stackexchange",
"id": 5243,
"tags": "np-hardness, np-complete, unique-solution, promise-problems"
} |
Understanding the definition of a path integral | Question: In the Book "Quantum Mechanics and Path Integrals" by Feynman & Hibbs the path integral is approximated (page 32 and following) by
$$
K(b,a)\approx\int...\int\int\phi[x(t)]dx_1dx_2...dx_{N-1}\tag{2.20}
$$
with $b=(x_b,t_b)$ and $a=(x_a,t_a)$ being the start and endpoints of the path and $$\phi[x(t)]=const\cdot e^{(i/\hbar)S[x(t)]}=const\cdot e^{(i/\hbar)\int_{t_a}^{t_b} L[x(t),v(t),t]dt}.\tag{2.15}$$
Now I don't quite get this approximation.
First of all, I assume that the $dx_1dx_2...dx_{N-1}$ integrals have to be executed first, and only after that should the $dt$ integral in $\phi[x(t)]$ (or rather in $S[x(t)]$) be executed. Is that right?
And the second thing is that I don't get the meaning behind the $dx_1dx_2...dx_{N-1}$ integrals themselves (each is integrated from $-\infty$ to $\infty$ according to Wikipedia). So in the book the path was divided into straight lines between $x_k$ and $x_{k+1}$ of equal length, with $x_0=x_a$ and $x_N=x_b$. That's why I would have thought the integration would not go from $-\infty$ to $\infty$ but rather from $x_k$ to $x_{k+1}$. So the integral would then look something like this
$$K(b,a)\approx\int_{x_{N-1}}^{x_N}...\int_{x_1}^{x_2}\int_{x_0}^{x_1}\phi[x(t)]dx_0dx_1...dx_{N-1} $$
Could someone explain to me in an easy way why that is not the case?
Answer:
The action functional $S[t\mapsto x(t)]$ with its $dt$-integration is the continuum limit of a discretization $S[x_0,\ldots, x_N]$ that is implicit on the RHS of eq. (2.20), so in eq. (2.20) there is strictly speaking no $dt$-integration to perform, cf. OP's question. It only emerges in the continuum limit.
OP's proposal does not make sense as written, since integration limits can only refer to integrations performed later, not the other way around. Anyway, Feynman's point is that we should sum over all histories, i.e. the integration variables should cover the whole target space, i.e. position space $\mathbb{R}$ from $-\infty$ to $\infty$.
References:
R.P. Feynman & A.R. Hibbs, Quantum Mechanics and Path Integrals, 1965; chapters 2 + 3 + 4.
W. Dittrich & M. Reuter, Classical and Quantum Dynamics, 6th ed, 2020; chapters 19 + 20. | {
"domain": "physics.stackexchange",
"id": 89032,
"tags": "quantum-mechanics, definition, path-integral"
} |
How to prove Exact cover problem is NP Complete using Vertex Cover problem? | Question: Using reduction theorem in NP, we want to prove that Exact cover is NPC by reducing it from Vertex Cover Problem. It is easy to derive it from SAT, but we can't find a solution yet to derive it from Vertex Cover.
Answer: You can reduce from vertex cover in cubic graphs since it is known to be $\textrm{NP}$-hard. Let $G=(V,E)$ be the graph in the instance of vertex cover and, for $v \in V$, call $E_v = \{ (v,u) \mid (v,u) \in E \}$ the set of edges incident to $v$ in $G$.
Fix $1 \le k \le |V|$ and construct an instance of the exact cover problem as follows:
The elements to be covered are those in $E$ plus $k$ additional elements $x_1, \dots, x_k$.
There are $7k$ sets for each vertex $v \in V$, namely all sets $X \cup \{x_i\}$ for $X \in 2^{E_v} \setminus \{ \emptyset \}$ and $i=1,\dots,k$. Let $S_v$ be the collection of such sets for $v$.
Finally, there are $k$ sets $\{x_1\}, \{x_2\}, \dots, \{x_k\}$.
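To make the construction concrete, here is a small Python sketch (the function name and the encoding of the dummy elements are mine) that builds the exact-cover instance from a graph and budget $k$:

```python
from itertools import combinations

def exact_cover_instance(V, E, k):
    """Build the exact-cover instance from graph (V, E) and budget k.

    Elements: every edge plus dummy elements x_1..x_k.  Sets: for each
    vertex v, each nonempty subset of its incident edges E_v together
    with one dummy (at most 7 subsets per vertex in a cubic graph),
    plus the k singletons {x_i}.
    """
    dummies = [("x", i) for i in range(1, k + 1)]
    elements = set(E) | set(dummies)
    sets = [frozenset({d}) for d in dummies]
    for v in V:
        E_v = [e for e in E if v in e]
        for r in range(1, len(E_v) + 1):
            for subset in combinations(E_v, r):
                for d in dummies:
                    sets.append(frozenset(subset) | {d})
    return elements, sets
```

On a triangle with $k=2$, for example, each degree-2 vertex contributes $3 \cdot 2 = 6$ sets, for $18 + 2 = 20$ sets in total.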
If there is an exact cover $C$ for this instance, then we must have $|C| \le k$ (since each set contains one of $x_1, \dots, x_k$) and hence there is a vertex cover of $G$ of size at most $k$. Namely: $\{ v \in V \mid S_v \cap C \neq \emptyset\}$.
Suppose now that there is a vertex cover $C$ of $G$ with at most $k$ vertices. Let $C=\{v_1, \dots, v_\ell \}$.
We now show that the above exact cover instance admits an exact cover. Arbitrarily assign each edge $(u,v)$ of $G$ to one of the vertices in $\{u,v\} \cap C$. For a vertex $v_i \in C$, let $A_i$ be the set of edges assigned to $v_i$. For $i>\ell$ let $A_i = \emptyset$.
The sought exact cover contains all sets $A_i \cup \{x_i\}$ for $i=1,\dots, k$. Notice that if $A_i \neq \emptyset$ then $A_i \cup \{x_i\} \in S_v$.
To conclude the proof you just need to observe that exact cover belongs to $\textrm{NP}$ since the cover itself is a certificate that can be checked in polynomial-time. | {
"domain": "cs.stackexchange",
"id": 15078,
"tags": "algorithms, complexity-theory, np-complete, nondeterminism, p-vs-np"
} |
body acceleration in contact between two bodies | Question:
Hey,
I am trying to simulate a wheel (cylinder) on a surface (ground_plane). I launch Gazebo with an SDF file which contains both models and give it 3 seconds to come to rest, with parameters Kp = 380 and Kd = 10. I can see the spring come to rest when I look at the output of the position and velocity commands for the z axis (the GetWorldCoGPose() and GetWorldCoGLinearVel() Gazebo API commands), but when I use the GetWorldLinearAccel() command I get an acceleration z value of zero although the wheel is moving.
Does somebody know why? Maybe GetWorldLinearAccel() is not the right command?
Thanks in advance
Originally posted by shpower on Gazebo Answers with karma: 13 on 2014-12-25
Post score: 0
Answer:
Link::GetWorldLinearAccel was broken for quite a while, though it has recently been fixed if you build from source. It will be fixed in a new release of gazebo in the next month or so.
Originally posted by scpeters with karma: 2861 on 2014-12-29
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 3694,
"tags": "contact"
} |
Plotting Quantum Tunneling the real way | Question: Often when looking at Quantum Tunneling graphs something similar is shown:
showing an exponentially decaying wave function inside the barrier. But these graphs seem to be hand-arranged. So I ask myself how to plot a real piecewise function for quantum tunnelling.
First of all the wave functions for all areas can be written as:
$\mathbf{I}: \quad \psi_1 = A_1\,e^{\textstyle -i\,k_1\,x}+B_1\,e^{\textstyle i\,k_1\,x} \quad \mathbf{II}: \quad \psi_2 = A_2\,e^{\textstyle -i\,k_2\,x}+B_2\,e^{\textstyle i\,k_2\,x} \quad \mathbf{III}: \quad \psi_3 = A_3\,e^{\textstyle -i\,k_1\,x} $
After using some boundary conditions ($\psi_1(0) = \psi_2(0) \quad \psi_1(d) = \psi_3(d) $ etc.) the Amplitude $A_3$ for area $\mathbf{III}$ can be formulated: $A_3 = \dfrac{4\,k_1\,k_2\,e^{\textstyle i\,k_1\,d}}{(k_1+k_2)^2\,e^{\textstyle i\,k_2\,d}-(k_1-k_2)^2\,e^{\textstyle -i\,k_2\,d}}$
Which is directly linked to the transmission coefficient $T = |A_3|^2$, approximated fairly by $T \approx 16\,e^{\textstyle -2\,\kappa\,d}$ (writing $k_2 = i\kappa$ inside the barrier)
From this point the exponential decay can be intuitively comprehended, but what about plotting the probability density $|\psi_i|^2$ for all waves as a function of the travelled distance $x$? Then I do not get the popular graph that I showed above. In general $|\psi|^2$ seems to oscillate anyhow, so it is never a monotonically decreasing function.
Answer: The picture you posted actually looks like a fairly accurate one. I'm not sure what you mean by "arranged". In any case, the wave function-squared $|\psi|^2$ does decrease exponentially and monotonically in region II.
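One way to get the honest piecewise plot is to solve the matching conditions numerically rather than by hand. Here is a minimal Python sketch (my own, with units $\hbar = 2m = 1$ assumed; region II is written with real exponentials, equivalent to an imaginary $k_2 = i\kappa$):

```python
import numpy as np

# Units with hbar = 2m = 1; E < V0, so the particle tunnels.
E, V0, d = 1.0, 2.0, 1.5
k1 = np.sqrt(E)          # wavenumber in regions I and III
kap = np.sqrt(V0 - E)    # decay constant in region II

# Incident amplitude A1 = 1; unknowns: B1 (reflected), C, D (inside), F (transmitted).
# Continuity of psi and psi' at x = 0 and x = d gives a 4x4 linear system.
e_p, e_m, e_k = np.exp(kap * d), np.exp(-kap * d), np.exp(1j * k1 * d)
M = np.array([
    [-1,       1,          1,          0],             # psi  continuous at 0
    [1j * k1,  kap,       -kap,        0],             # psi' continuous at 0
    [0,        e_p,        e_m,       -e_k],           # psi  continuous at d
    [0,        kap * e_p, -kap * e_m, -1j * k1 * e_k]  # psi' continuous at d
], dtype=complex)
B1, C, D, F = np.linalg.solve(M, np.array([1, 1j * k1, 0, 0], dtype=complex))

def psi(x):
    """Piecewise wave function; plot abs(psi(x))**2 over a grid of x."""
    if x < 0:
        return np.exp(1j * k1 * x) + B1 * np.exp(-1j * k1 * x)
    if x <= d:
        return C * np.exp(kap * x) + D * np.exp(-kap * x)
    return F * np.exp(1j * k1 * x)

T = abs(F) ** 2   # transmission coefficient; here 1/cosh^2(kap*d) ≈ 0.18
```

Sampling `abs(psi(x))**2` on a grid then reproduces the textbook picture: oscillations before the barrier (interference with the reflected wave), near-monotonic exponential decay inside, and the constant value $T$ beyond.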
Maybe an animation can help to understand the dynamics; I found one on Wikipedia which shows the tunneling quite nicely. | {
"domain": "physics.stackexchange",
"id": 86839,
"tags": "quantum-mechanics, quantum-tunneling"
} |
gazebo plugin undefined symbol | Question:
Hi all,
I am trying to make a gazebo plugin work now. The plugin compiles just fine, but when I run it, it gives the following error:
undefined symbol: _ZN6gazebo18GazeboRosApiPlugin16spawnGazeboModelERN11gazebo_msgs18SpawnModelRequest_ISaIvEEERNS1_19SpawnModelResponse_IS3_EE
Does anyone know what causes the problem? Thank you!!
Originally posted by shuai on ROS Answers with karma: 46 on 2012-11-19
Post score: 1
Answer:
Figured it out... I forgot to uncomment the definition of the spawnGazeboModel function. I think the problem is that I declared the function but didn't define it, so the library cannot actually use it.
Originally posted by shuai with karma: 46 on 2012-11-19
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 11797,
"tags": "gazebo, plugin"
} |
Pulsar rotation linear velocity | Question: I recently came upon millisecond pulsars. I knew of pulsars, but I had never guessed that they could spin so fast.
I believe I had read about a pulsar rotating at about 10-15Hz.
But I just read about J1748-2446ad, which rotates at the astonishing 716Hz.
Brief research and I find its radius is about 16km, making its rotation circumference:
$c = 2πr ≈ 100.5km$
Given that the pulsar rotates at 716Hz, its linear velocity at the surface should be:
$v = 716c/s ≈ 71900km/s$
Assuming these calculations are nearly correct, could it be that any point on the rotation circumference moves at about 24% of the speed of light? Can this be possible?
Thanks for any answers that will satisfy my curiosity.
Answer: I'm not sure where you get the 16 km radius from. It's possible, although 10 km is a more usually assumed radius for a neutron star. It might be a little bigger because of the rotation rate.
Is it possible? Well, yes it is because the rotation rate is unambiguously 716 Hz and the radius of a neutron star cannot be much smaller than about 10 km before becoming unstable and collapsing into a black hole.
Millisecond pulsars spin very fast because they accrete material from a companion. The accreted material lands on the surface with angular momentum and this adds to the angular momentum of the neutron star, spinning it up. The angular rotation rate is just the total angular momentum divided by the moment of inertia, and because the latter is proportional to the square of the radius, and neutron stars are small, this leads to a very fast rotation rate.
You probably shouldn't be all that surprised. For comparison, if you drop something from a long way off onto the surface of the neutron star, it will be travelling at speeds of about $0.5c$ when it impacts the surface. | {
"domain": "astronomy.stackexchange",
"id": 6416,
"tags": "pulsar"
} |
Variation - Electric pressure on a sphere? | Question: I solved the following question(Answer is correct):
Find the force with which two hemisperical parts of a uniformly charged hollow sphere repel each other?(charge density: +$\sigma$)
Answer:
Radial force on strip is:
$$dF=\left(\frac{\sigma^2}{2\epsilon_0}\right)(2\pi R^2\sin\theta d\theta)$$
Total force on hemisphere is:
$$F=\int dF\cos\theta=\int_0^{\frac{\pi}2}\frac{\pi R^2\sigma^2}{\epsilon_0}\sin\theta\cos\theta d\theta$$
$$F=\frac{\pi R^2\sigma^2}{2\epsilon_0}$$
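As a quick numerical sanity check of the angular integral above (a midpoint-rule sketch in Python):

```python
import math

# Integral of sin(t)*cos(t) over [0, pi/2] by the midpoint rule;
# analytically it equals 1/2, which gives F = pi R^2 sigma^2 / (2 eps0).
n = 100_000
h = (math.pi / 2) / n
integral = h * sum(math.sin((i + 0.5) * h) * math.cos((i + 0.5) * h)
                   for i in range(n))
print(integral)  # ≈ 0.5
```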
My question is: the hollow sphere is now uniformly charged on one half with charge density $+\sigma_1$, and its other half is charged at density $+\sigma_2$. Find the force with which the two halves repel.
Similiar to previous question force can be given as:
$$F=\int \frac1{4\pi\epsilon_0}\sigma_1\sigma_2dS_1dS_2$$
But it doesn't give the answer, or I must say it cannot be manipulated into an integrable form. Note that this is not a problem of integration. It is simple manipulation; as my teacher says, he suggests using the previous problem.
Answer
$$F=\frac{\pi R^2\sigma_1\sigma_2}{2\epsilon_0}$$
Answer: Hm, I would give superposition a go.
Start from a uniformly charged sphere with surface charge density $\sigma_{1}$ on both hemispheres; this is an instance of the first problem, so the force between its halves is known. Call this answer $F_{11}$. Now split the charge on one hemisphere as $\sigma_{1} = \sigma_{2} + \sigma_{3}$, with $\sigma_{3} = \sigma_{1}-\sigma_{2}$, i.e. regard it as two superposed hemispheres.
Now, the force on the superposed hemisphere is, by superposition, the force on the hemisphere with charge $\sigma_{2}$ (which is what we want to know!), plus the force on the one with charge $\sigma_{3}$. Call these $F_{12}$ and $F_{13}$, respectively.
The crucial insight now is that the ratio of these forces must be the same as the ratio of the charges of the respective hemispheres, because (also a consequence of the superposition principle) the force on a system, all other things being equal, scales with the charge on it! This means that:
$$\frac{F_{12}}{F_{13}} = \frac{\sigma_{2}}{\sigma_{3}}$$
and
$$F_{12}+\frac{\sigma_{3}}{\sigma_{2}} F_{12} = F_{11}$$
Therefore,
$$F_{12} = \frac{F_{11}}{1+\frac{\sigma_{3}}{\sigma_{2}}} = \frac{\pi R^{2} \sigma_{1} \sigma_{2}}{2 \epsilon_{0}}$$
Which looks like a reasonable result to me (for one thing, it checks with the case of $\sigma_{1} = \sigma_{2}$). | {
"domain": "physics.stackexchange",
"id": 15597,
"tags": "homework-and-exercises, forces, electrostatics"
} |
How were auto encoders used to initialize deep neural networks? | Question: In a document on deep learning about auto encoders, it is said that these networks were used back from 2006 to 2010 for deep neural network initialization.
Can somebody explain how this was done?
Answer: There were a few different techniques. One popular one was stacked autoencoders, where each layer was trained separately.
Essentially this was done by progressively growing the autoencoder, two layers at a time (one encode layer, plus equivalent decode layer), followed by complete training at each step of growth.
If learning from a fixed training set, you could store the encoded representation of the whole dataset so far as input into next stage of training, saving some computation when building up the layers.
After training each encoder layer separately you could use the weights of the encoder section of the autoencoder as the starting weights of the deep NN. Intuitively this made sense as you would have a representation of the input that you knew could be used to reconstruct it, and that typically was compressed, so should in theory have extracted salient details from the training data population. On top of these pre-trained layers, you may add one or two new layers that implemented whatever classification or regression task that you needed the final NN to perform. Then you would train with the labelled data - this is similar to fine-tuning networks and transfer learning that is still done nowadays.
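To make the weight reuse concrete, here is a toy sketch in plain numpy (my own illustration, not the historical recipe: real stacked autoencoders used nonlinear layers and layer-by-layer training): train a small linear autoencoder, then keep its encoder weights as the first-layer initialization of a supervised network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions lying in a 3-D subspace.
basis = rng.normal(size=(3, 8))
X = rng.normal(size=(200, 3)) @ basis

def train_autoencoder(X, hidden, steps=2000, lr=0.02):
    """Train a (linear) autoencoder by plain gradient descent on the
    reconstruction error; constant factors are absorbed into lr."""
    n = X.shape[1]
    W_enc = rng.normal(scale=0.1, size=(n, hidden))
    W_dec = rng.normal(scale=0.1, size=(hidden, n))
    for _ in range(steps):
        H = X @ W_enc                  # encode
        err = H @ W_dec - X            # reconstruction error
        g_dec = H.T @ err / len(X)
        g_enc = X.T @ (err @ W_dec.T) / len(X)
        W_enc -= lr * g_enc
        W_dec -= lr * g_dec
    return W_enc, W_dec

W_enc, W_dec = train_autoencoder(X, hidden=3)
loss = np.mean((X @ W_enc @ W_dec - X) ** 2)

# "Pre-training": the learned encoder weights now seed the first hidden
# layer of a supervised network; fresh output layers are stacked on top
# and the whole thing is fine-tuned on the labelled data.
pretrained_first_layer = W_enc
```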
The results from this pre-training stage could be worthwhile. It is still a valid technique if you have a lot of unlabelled data, and a relatively small amount of labelled data. However, the introduction of ReLU and careful controls on weight initialisation meant that deep networks could often be trained more directly. Recent additions such as skip connections and batch normalisation have further improved more direct training approaches.
Here is an example with code, using TensorFlow. | {
"domain": "datascience.stackexchange",
"id": 2583,
"tags": "deep-learning, autoencoder"
} |
Is it possible that the axis of rotation of a planet also rotates? | Question: Assume that there is a planet that rotates around an axis. Is it possible that this axis also rotates around another axis?
For example, the planet is $\{(x, y, z)| x^2 + y^2 + z^2 \leq 1\}$ and this rotates around $\vec{OP} = (0, 0, 1)$ at first. Then the point $P$ rotates around $y$-axis, $P=(\sin{t}, 0, \cos{t})$.
Answer: It sounds like you are referring to precession. The body rotates about a primary axis of rotation, but this axis also rotates around a second (precessionary) axis.
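The composition in the question can be checked directly with rotation matrices. A small numpy sketch (Rodrigues' formula; the angle values are arbitrary):

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a unit axis, via Rodrigues' formula."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

t = 0.7
# Precession: the spin axis OP = (0, 0, 1) is itself rotated about the y-axis,
P = rot([0, 1, 0], t) @ np.array([0.0, 0.0, 1.0])
# giving P = (sin t, 0, cos t), exactly the question's parametrization.

# The daily spin about the instantaneous axis leaves that axis fixed:
spin = rot(P, 2.0)
```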
The Earth does exactly this. We rotate around our primary axis (which points roughly in the direction of the star Polaris) once per day. This axis itself sweeps round in the sky with a period of around 26,000 years. | {
"domain": "physics.stackexchange",
"id": 48580,
"tags": "newtonian-mechanics, rotational-dynamics, reference-frames, planets"
} |
How to Interpret this set of Latitude and Longitude? | Question: I have a house's coordinates that I know is in LA, Ventura, or Orange county, California and it says the latitude and longitude are
latitude longitude
34144442 -118654084
These seem really big. How can I find the address this corresponds to?
Answer: It looks like your coordinates are written in decimal degrees but with the decimal points omitted.
34.144442 -118.654084 corresponds to 34°08'40.0"N 118°39'14.7"W.
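That conversion can be sketched in a few lines of Python (assuming the listed values are decimal degrees scaled by 10^6):

```python
def to_dms(decimal_degrees):
    """Convert decimal degrees to a (degrees, minutes, seconds) tuple."""
    sign = -1 if decimal_degrees < 0 else 1
    d = abs(decimal_degrees)
    degrees = int(d)
    minutes = int((d - degrees) * 60)
    seconds = (d - degrees - minutes / 60) * 3600
    return sign * degrees, minutes, round(seconds, 1)

# The listed values look like decimal degrees with the decimal point dropped:
lat = 34144442 / 10**6
lon = -118654084 / 10**6
print(to_dms(lat))   # (34, 8, 40.0)
print(to_dms(lon))   # (-118, 39, 14.7)
```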
Google maps shows this to be a location just outside LA. | {
"domain": "earthscience.stackexchange",
"id": 1157,
"tags": "measurements, mapping"
} |
Correcting duplicate names in generic array - follow up | Question: Follow-up question of: Correcting duplicate names in array
I have an array of file names. For example:
FileContent[] files =
{
new FileContent() {Content = threeItems, Name = "one.zip" },
new FileContent() {Content = fiveItems, Name = "one.zip" },
new FileContent() {Content = sevenItems, Name = "one.zip" },
new FileContent() {Content = threeItems, Name = "two.zip" },
new FileContent() {Content = fiveItems, Name = "two.zip" },
new FileContent() {Content = sevenItems, Name = "two.zip" },
};
Model:
public sealed class FileContent
{
public byte[] Content { get; set; }
public string Name { get; set; }
}
And I've developed the following method to correct duplicate names. What the method does is just change the duplicate names, appending an incremented value to each subsequent duplicate. For example, my method ChangingDuplicateNames(FileContent[] files) corrects the previous array to:
FileContent[] files =
{
new FileContent() {Content = threeItems, Name = "one.zip" },
new FileContent() {Content = fiveItems, Name = "one(1).zip" },
new FileContent() {Content = sevenItems, Name = "one(2).zip" },
new FileContent() {Content = threeItems, Name = "two.zip" },
new FileContent() {Content = fiveItems, Name = "two(1).zip" },
new FileContent() {Content = sevenItems, Name = "two(2).zip" },
};
And implementation of ChangingDuplicateNames(FileContent[] files) is:
private FileContent[] ChangingDuplicateNames(FileContent[] files)
{
//Creating a dictionary to store duplicated values. "Key" of dictionary
//is duplicated name, "Value" of dictionary is number to add for name
Dictionary<string, int> duplicateNames = files.GroupBy(x => x.Name)
.Where(group => group.Count() > 1)
.ToDictionary(grouped => grouped.Key, grouped => 0);
if (duplicateNames.Count == 0)
return files;
int namesLength = files.Length;
string actualName = string.Empty;
for (int indexArray = 0; indexArray < namesLength; indexArray++)
{
int value;
bool isDuplicate = duplicateNames
.TryGetValue(files[indexArray].Name, out value);
if (isDuplicate)
{
actualName = files[indexArray].Name;
if (value == 0)
files[indexArray].Name = files[indexArray].Name;
else
{
//Adding increment to the next duplicate name
string fileNameWithoutExtension = Path
.GetFileNameWithoutExtension(files[indexArray].Name);
string fileExtension = Path
.GetExtension(files[indexArray].Name);
files[indexArray].Name = fileNameWithoutExtension + "(" + value + ")"
+ fileExtension;
}
duplicateNames[actualName] = ++value;
}
}
return files;
}
And my question is: Is it possible to improve this algorithm? I mean, could this code be smaller?
Maybe I should not iterate through the whole array of names, but I cannot figure out how I can change the names in files without iterating through the whole array. Thanks in advance.
Answer: Because we are dealing now with objects, we need to create a copy of each FileContent to achieve the same result as in your previous question.
Instead of having var currentFile = file where file had been a string, we now use var currentFile = new FileContent() { Content = file.Content, Name = file.Name };.
In addition, we are using the HashSet<T> only for look up and return an IEnumerable<FileContent> instead of an array.
private IEnumerable<FileContent> ChangingDuplicateNames(FileContent[] files)
{
var hashSet = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
foreach (var file in files)
{
var currentFile = new FileContent() { Content = file.Content, Name = file.Name };
int counter = 0;
while (!hashSet.Add(currentFile.Name))
{
currentFile.Name = CreateFileName(file.Name, ref counter);
}
yield return currentFile;
}
}
and call it like so
FileContent[] result = ChangingDuplicateNames(files).ToArray(); | {
"domain": "codereview.stackexchange",
"id": 25549,
"tags": "c#, beginner, algorithm, file"
} |
What is plasmon really? Is it a charge density wave of electron gas or an EM wave that exists across the metal surface? | Question: Sometimes plasmons are defined as collective plasma oscillations of the free electron gas in a metal. Therefore, plasmons must be a periodic modulation of electron charge density in the metal. But sometimes, plasmons are defined as electromagnetic waves that exist on the surface of metals and decays inside. What is plasmon really? Is it a charge density wave of electron gas or an EM wave that exists across the metal surface?
Answer: A plasmon is both of these things! With the charge-density wave comes an associated EM wave across the metal surface (or vice-versa). These are two sides of the same coin of a plasmon. You can't have one without the other. | {
"domain": "physics.stackexchange",
"id": 70559,
"tags": "condensed-matter, solid-state-physics, metals, plasmon, collective-excitations"
} |
Why is Li-Fi faster than regular wifi? | Question: I was wondering: basically every radio transmission works with electromagnetic waves, which travel at the same speed whether I modulate the information at a low frequency or at a high frequency such as is done with Li-Fi.
What makes Li-Fi faster? What makes any wireless transmission faster or slower if, in the end, I send my information with an electromagnetic wave that travels at the same speed?
What makes Li-Fi special?
Answer: The wave propagation speed is only relevant when you start approaching the theoretical limit of that type of wave's ability to transmit information.
For an analogy, consider people having a conversation in a room full of air. They're using sound waves to transmit information. The speed at which those waves propagate is the same for every speaker, in that particular medium. Does that imply that everyone transmits information at the same speed?
The answer is no, of course not. The information is overlaid on the sound wave in a particular way, and we call that speech. You can transmit the same information at the same speed across a wide range of frequencies. You can speak faster or slower without varying the pitch of your voice. (In fact, some people can transmit speech faster than you can receive it.) You can use different languages that might allow you to communicate particular ideas at different rates.
The theoretical maximum speed at which information can be transmitted via audible speech is only loosely related to the speed of propagation of an audio wave. Put the same conversation underwater and the waves propagate four or five times faster but you don't even notice because the time required for the wave to propagate is so many orders of magnitude less than the time required to encode and decode a single datum.
Digital signals using EM waves are a much more specialized technology, compared to the human systems that enable verbal communication. But we're still far from any theoretical limit on the rate at which EM waves could possibly transmit information. Each of the 802.11 protocols used by your Wi-Fi device has a theoretical maximum bitrate, based on multiple elements of its design. At one point those were measured in Mb/s, now they're measured in Gb/s and there's plenty of room to keep going. A question over on Physics SE asks about the upper limit for fiber optics (which also use EM waves, of course) and you can see from the answers that, at Gb/s for Wi-Fi or Li-Fi, we're still many orders of magnitude away from hitting a ceiling on transfer speeds.
That's why the wave propagation speed isn't as meaningful as you expect. As far as what makes Li-Fi faster than Wi-Fi, I'm not even sure that's the case in isolation; practically speaking, what seems to be driving the development of the former is over-saturation of the frequencies used by the latter. That's more of an issue of congestion than anything else, which means that if Li-Fi had been developed first, the situation might be reversed, and popular science mags might be writing the same BS headlines about Wi-Fi instead.
"Wireless at light speed." Give me a break. | {
"domain": "engineering.stackexchange",
"id": 943,
"tags": "telecommunication, wireless-communication, wifi"
} |
Complex Scalar Field Conjugate Momentum | Question: Consider a complex scalar field $\psi(x)$ with Lagrangian density
$$
\mathcal{L} = \partial_\mu\psi^* \partial^\mu\psi - M^2\psi^*\psi.
$$
From the Lagranigan density we obtain the momentum $\pi = \dot\psi^*$. How do you obtain this? I expanded $\partial_\mu\psi = \partial _{0}\psi -\nabla\psi$ and then took the conjugate $\partial_\mu\psi^{*} = \partial _{0}\psi^{*} -\nabla\psi^{*}$ and then multiply by $\partial^\mu\psi = \partial^{0}\psi -\nabla\psi$ but at the end I get that the canonical momentum is:
$\partial _{0}\psi^{*} -\nabla\psi^{*}$
what is wrong?
Answer: Do not confuse multiplication with the idea of a dot product. The problem at hand, of constructing the kinetic term in components, is not equivalent to multiplying $(a+b)(c+d)$. Rather, we have two vectors whose components are $\langle \partial_0\psi,\nabla\psi\rangle$ (the spatial components could be written out as well) and we are computing a dot product as defined by the Minkowski metric, which has a different sign somewhere (from what you've written it looks like you're taking the $(+,-,-,-)$ convention, so I'll use that here).
So writing out the dot product, we would actually have
$$
\eta^{\mu\nu}\partial_\mu\psi^*\partial_\nu\psi=\partial_0\psi^*\partial_0\psi-\nabla\psi^*\cdot\nabla\psi.
$$
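The resulting momentum can be verified symbolically; a sympy sketch treating the field, its conjugate, and their derivatives as independent variables, with a single symbol standing in for the spatial gradient:

```python
import sympy as sp

# Treat the field, its conjugate, and their derivatives as independent
# symbols; `gradpsi` stands in for the full spatial gradient.
psi, psi_c, psidot, psidot_c, gradpsi, gradpsi_c, M = sp.symbols(
    'psi psi_c psidot psidot_c gradpsi gradpsi_c M')

# Lagrangian density in components, (+,-,-,-) signature:
L = psidot_c * psidot - gradpsi_c * gradpsi - M**2 * psi_c * psi

pi = sp.diff(L, psidot)    # canonical momentum conjugate to psi
print(pi)                  # psidot_c, i.e. the time derivative of psi*
```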
From here, the conjugate momentum follows from the standard expression
$$
\pi=\frac{\partial\mathcal{L}}{\partial\dot\psi}=\dot\psi^*.
$$ | {
"domain": "physics.stackexchange",
"id": 73377,
"tags": "lagrangian-formalism, momentum, field-theory"
} |
Motor Controller Configuration | Question: With a group of students we are building an exoskeleton for paraplegics. I am part of the Software & Control department and we are designing the motor controller configuration. We have several requirements:
Size is one of our main requirements. We want the exoskeleton to be as slim as possible, and this is true for all the components as well, so we want the motor controller to be as small as possible.
The motor controller has to work with a brushless DC motor.
A setpoint will be sent from a microcontroller to the motor controller.
Two or three absolute encoders need to be connected; this depends on the design of the joint.
We are either going to use position or torque control.
We need one encoder for the motor angle, and one for the joint angle. We might implement Series Elastic Actuation in our joint, and then we want to be able to measure the deflection in the spring, and thus need two encoders for that.
A continuous power output in the range of 700 - 1200 Watt.
While exploring several off-the-shelf possibilities, we came across the Elmo Gold Twitter and the IOMI Pro. One of the problems with these boards is the number of absolute encoders that can be connected. Both the Elmo and the IOMI board can have either one or no absolute encoders connected.
We came up with a solution so that we are able to connect multiple encoders. In our exoskeleton we are going to use an EtherCAT Master-Slave configuration. The high-level control (e.g. the state machine) is on the EtherCAT Master and sends joint angle setpoints. Our idea is to use an EtherCAT Slave as a sort of second controller, which gets the joint setpoint and the joint encoder values, calculates the motor setpoint and sends this out to a certain off-the-shelf controller like the Elmo Gold Twitter or the IOMI Pro.
My question is the following: Is this even a good solution? And what are other solutions to this problem? Are there even better alternatives for a motor controller? Might it be a better idea to build and program your own motor controller? (Please bear in mind that we have limited experience in that area.)
I thank you all in advance for your reply!
Cheers!
Nathan
Answer: "My question is the following"
Is this even a good solution?
No, not really. As BendingUnit22 mentions, a motor controller... controls a motor. One motor. The biggest problem I see with this is that you've got the EtherCAT slave on the feedback line, trying to do signal processing, instead of piping the feedback directly to the motor controller.
The motor controller is setup for one motor. Use one motor controller for each motor you want to use.
What are other solutions to this problem?
Open-ended design problems are off-topic (you could join us in Robotics Chat and ask there if you like). That said, you haven't mentioned how you are converting from a brushless DC motor to the joint, but I'm going to guess you've got some kind of fixed gearing between the motor and the joint.
Your options there are to either put an encoder on the joint and do processing on the feedback signal to convert to motor units (as you've depicted), or you could put the encoder on the motor and do processing on the reference signal to convert the desired joint angle to motor units. My preference, personally speaking, would be to put the encoder on the motor. This means that all of your processing (converting between joint/motor units) is done before the motor-control loop. Processing introduces delays and those delays (negatively) affect control.
Something to think about, but I'm not going to speculate on your options here beyond pointing that out. You can leverage the knowledge of the fixed gearing between motor and joint to quickly scale desired joint angles to desired motor units and vice-versa, depending on where you want to put the encoder. I believe most BLDC motors come with encoders already, though generally not absolute encoders, which you would need.
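The joint/motor unit conversion described here is just a scale by the fixed gear ratio, applied before the motor-control loop rather than inside the feedback path; a sketch with made-up numbers:

```python
# Hypothetical numbers: a BLDC motor driving the joint through fixed 50:1 gearing.
GEAR_RATIO = 50.0                      # motor revolutions per joint revolution

def joint_to_motor_angle(joint_deg):
    """Scale a desired joint angle to a motor-side reference."""
    return joint_deg * GEAR_RATIO

def motor_to_joint_angle(motor_deg):
    """Scale motor encoder feedback back to joint units."""
    return motor_deg / GEAR_RATIO

# A 30-degree joint setpoint becomes a 1500-degree motor command, computed
# once before the motor-control loop runs:
print(joint_to_motor_angle(30.0))      # 1500.0
```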
Are there even better alternatives for a motor controller?
Terms like "good" and "better" are for you to decide; they have no quantifiable meaning on their own. Once you set a list of specifications, you can rank options by how well they meet or exceed each of the items on your list of specifications. If you can rank your specifications then it should become obvious that some options are "better" for your application than others, but again that's all dependent on the list of specifications that you generate.
I'll add here that I think a lot of people put cost as a specification, which is generally advisable, but they put too much emphasis on it. What matters more to you, cost or quality documentation? Cost or that the motor utilizes industry standards (NEMA bolt patterns, standard shaft sizing, reasonable power interface)? Cost or supplier's return policy? Cost or availability?
What good is a cheap motor when you can't bolt it to any standard accessories or gearboxes? What good is a cheap motor that arrives DOA and then you can't exchange it, or when the exchange process takes 3 months?
I ask because these are all (absolutely colossal) problems I've actually faced on academic design projects. Some of these oversights nearly killed a couple (graded) projects I worked on.
Might it be a better idea to build and program your own motor controller?
Generally speaking, yeah, I think that's a great project. Unfortunately for you, that's not the project you're supposed to be working on. You stated that you're part of the software and control department. The other people on this project are counting on you to get a motor controller done and programmed so the other teams can get their portion of the project completed.
A frame that can't be actuated doesn't help anyone and looks bad on everyone.
If this was your own pet project, then I'd say go for it, but because you're working in a team environment, where there are other people depending on you to complete your work in a timely manner, I'd say save that project for when you have time and try to find an off-the-shelf product that meets your needs for now.
If your teams are big enough you might be able to split this project off for yourself, after you help acquire at least passable motor controllers. The problem with designing your own is that you need to:
Interface with the motor encoder, in all the formats the commercial motor controller could (or, at a bare minimum, in all the formats for the encoders you're using).
Design the schematic for the power electronics capable of providing 1.2kW.
Design the PCB for the schematic, including layout and trace size. 1.2kW isn't "high power" by any means, but it's enough that you're likely to blow up a couple boards before you get the design right.
Design the housing/mount footprint for the completed circuit.
Actually assemble and solder the PCB.
Program the PCB, including the packet processing for EtherCAT if you go that route, the input processing and conditioning for the encoder feedbacks, and then the actual control algorithm for the motor speed control.
This might "only" be a couple weeks of working full time, but there's generally a couple weeks of down time between completing the PCB design and receiving the boards, and like I mentioned I would bet money you'll blow at least one board. You have to have everything perfect; all the component datasheets read correctly, transcribed to schematics correctly, transcribed to PCB correctly, components placed correctly, and then soldered correctly, and then programmed correctly, for the thing to work.
If you blow a board because you didn't place or solder or program something correctly, then you're out the turnaround time on the replacement parts.
If you blow a board because you didn't get the schematic right, then you're out the turnaround time on replacement boards, which again is probably a couple weeks.
How do you know the difference between blown by bad schematic and blown by bad construction? Good question. That'll probably take at least a week or two.
And the whole time, the rest of the project is stalled while everyone else on the team waits for the motor controllers.
You stated you or your team ("we") don't have much experience in that area. I don't know how you were thinking you might build and test your own motor controller circuit, but a breadboard is absolutely out of the question. Breadboards can only handle 0.5 to 1 Amp max, so whatever circuit you're testing with one is certainly not a 1.2kW motor controller, or even a 700W motor controller. | {
"domain": "robotics.stackexchange",
"id": 1240,
"tags": "motor, microcontroller"
} |
Is this correct for pressure exerted by a liquid? | Question:
$P_0$ is atmospheric pressure and
$P_1$ is height×density×gravity.
So, is the diagram correct in saying that
Total pressure in the liquid = $P_0+P_1$,
since the direction of the $P_0$ vector is downwards?
Answer: Firstly, pressure is not a vector so don't go by the directions.
Talking about pressure, it increases with depth below the surface. So you write the pressure at any point at depth $h$ as
$P_\text{depth} = P_0+h \rho g$
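As a quick numeric check of this formula (assuming fresh water, ρ = 1000 kg/m³, and standard atmospheric pressure):

```python
P0 = 101325.0       # standard atmospheric pressure, Pa
rho = 1000.0        # density of fresh water, kg/m^3
g = 9.81            # gravitational acceleration, m/s^2

def pressure_at_depth(h):
    """Total (absolute) pressure at depth h metres below the surface, in Pa."""
    return P0 + h * rho * g

print(pressure_at_depth(10.0))   # ~199425 Pa: about double atmospheric at 10 m
```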
So your answer is correct | {
"domain": "physics.stackexchange",
"id": 75477,
"tags": "fluid-dynamics, pressure, vectors, fluid-statics"
} |
Can't install (Failed to fetch http://packages.ros.org/ros/ubuntu/dists/precise/Release ) | Question:
I've followed the install tutorial several times but I keep getting the following error:
W: Failed to fetch http://packages.ros.org/ros/ubuntu/dists/precise/Release Unable to find expected entry '/etc/apt/sources.list.d/ros-latest.list/binary-amd64/Packages' in Release file (Wrong sources.list entry or malformed file)
Originally posted by jd on ROS Answers with karma: 62 on 2012-07-27
Post score: 0
Original comments
Comment by allenh1 on 2012-07-27:
Have you tried deleting that file? Checked your Internet connection? Are you certain that you got the keys?
Comment by jd on 2012-07-28:
I deleted that file and started over again. It seems to be working now. Thanks!
Comment by allenh1 on 2012-07-28:
No problem! Glad I could help!
Answer:
From the comments, the solution to this problem is to run the following:
sudo rm -f /etc/apt/sources.list.d/ros-latest.list
Then just start the install tutorial over again.
Originally posted by allenh1 with karma: 3055 on 2012-07-28
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 10401,
"tags": "ros"
} |
Which is the easiest way to do Object Detection | Question:
Right now I've been searching the web and found that the best packages are roboearth and tabletop_object_detector, but I specifically have to recognize doors and people, so I'm not sure these are the best stacks for recognizing these two classes. I've found pr2_doors and openni_tracker, but would really appreciate some help from someone who has actually accomplished recognition of doors or people.
Originally posted by ctguell on ROS Answers with karma: 63 on 2013-09-12
Post score: 4
Answer:
Both methods won't work for doors or people. The tabletop object detector is based on the assumption that all objects stand on a flat horizontal surface that can be used to segment the items (the tabletop). This is not the case for doors and hardly for people. The RoboEarth object recognizer is based on local image features (SURF), which means that it only works with non-deformable textured objects.
I'm not aware of any out-of-the box package for your use case, but there are components. openni_tracker is able to detect persons, opencv should provide methods for recognizing faces, and building a door detector using PCL should be fairly easy. As always, it depends on your setting, i.e. which sensors, which kinds of doors, open/closed, ...
Originally posted by moritz with karma: 2673 on 2013-09-12
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 15513,
"tags": "ros, object-detection, 3d-object-recognition, roboearth"
} |
Does the body move with constant velocity or accelerate? | Question: Take the case in which a body has a constant applied force of 5 N and the friction present is 3 N.
Does the body accelerate in this case, or does it move with uniform velocity?
Answer: Newton's law has net force equals mass times acceleration. In your case
$$ 5 - 3 = m a $$
so what do you think happens? $a=0$ or $a \neq 0$? | {
"domain": "physics.stackexchange",
"id": 74138,
"tags": "newtonian-mechanics, newtonian-gravity, kinematics"
} |
Usefulness of consensus number | Question: What information and usefulness does knowing the consensus number of a shared object give me?
Answer: You probably cannot do much better than quote the abstract of Herlihy's original paper:
A wait-free implementation of a concurrent data object is one that guarantees that any process can complete any operation in a finite number of steps, regardless of the execution speeds of the other processes. The problem of constructing a wait-free implementation of one data object from another lies at the heart of much recent work in concurrent algorithms, concurrent data structures, and multiprocessor architectures. First, we introduce a simple and general technique, based on reduction to a consensus protocol, for proving statements of the form, “there is no wait-free implementation of X by Y.” We derive a hierarchy of objects such that no object at one level has a wait-free implementation in terms of objects at lower levels. In particular, we show that atomic read/write registers, which have been the focus of much recent attention, are at the bottom of the hierarchy: they cannot be used to construct wait-free implementations of many simple and familiar data types. Moreover, classical synchronization primitives such as test&set and fetch&add, while more powerful than read and write, are also computationally weak, as are the standard message-passing primitives. Second, nevertheless, we show that there do exist simple universal objects from which one can construct a wait-free implementation of any sequential object. | {
"domain": "cs.stackexchange",
"id": 15731,
"tags": "consensus, shared-memory"
} |
If $x(t)$ be a known trajectory does the $x(-t)$ represent the retracing trajectory? | Question: Assertion
If there is time-reversal invariance, Newton's law (for a system described by one generalized coordinate $q$) $$m\frac{d^2}{dt^2}q(t)=F\Big(q(t)\Big)\tag{a}$$ implies that if $q(t)$ is a solution, $q(-t)$ is also a solution i.e., $$m\frac{d^2}{dt^2}q(-t)=F\Big(q(-t)\Big)\tag{a1}$$ The operational implementation of time-reversal $t\to -t$ requires doing the following: $$q\to q,~~\text{and}~~\dot{q}\to -\dot{q}\tag{b}$$ to the instantaneous values of $q$ and $\dot{q}$ by which the system is made to retrace the path.
A concrete example
Suppose a harmonic oscillator starts from right extreme position A with initial conditions $x=a$ and $\dot{x}=0$ at $t=0$. Therefore, its trajectory is given by $$x(t)=a\cos\omega t.\tag{1}$$ It reaches the mean position B at time $t=T/4$ at which $x=0$ and $\dot{x}=-a\omega$. Now the prescription of the time-reversal (Eq. (b)) requires that at $t=T/4$, we set $x=0$ and $\dot{x}=+a\omega$, and verify whether the trajectory is retraced. It's easy to check that with these as the initial conditions, the solution becomes $$x(t)=-a\cos\omega t\tag{2}$$ which indeed represents the retracing path BA traversed from B to A, i.e., the trajectory (2) is opposite to the trajectory described by (1).
$\bullet$ However, note that the retracing trajectory (2) cannot be obtained from trajectory (1) by simply sending $t\to -t$, and therefore, I have the following question.
Question
If $x(t)$ represents a continuous trajectory AB traversed in the direction from A to B, which trajectory should $x(-t)$ be identified with? From my example, it appears that $x(-t)$ is not the retracing trajectory.
Answer: This is just a semantics question.
$x(-t)$ is the time reversed trajectory of $x(t)$. Physically, for $t > 0$, you can imagine this trajectory as that of a particle that starts moving at negative $t$, then is time reversed (intuitively, 'hits a mirror') at $t = 0$.
the trajectory that "reverses the path" of $x(t)$ by time reversal at time $t_0$ is $x(-t + 2 t_0)$. Note that whenever we talk about "time reversal" alone, we always mean time reversal at time $t = 0$.
if time reversal symmetry holds and $x(t)$ is a solution, $x(-t)$ is a solution
if time reversal symmetry and time translational symmetry hold, and $x(t)$ is a solution, then the "reversed path" $x(-t + 2 t_0)$ is a solution
Most of the time, when time reversal symmetry is present, time translational symmetry is also present, so we don't bother to distinguish these two concepts.
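Applying this to the oscillator in the question: a sympy check that the path reversed at $t_0 = T/4$, namely $x(-t + 2t_0)$, is exactly the retracing trajectory $-a\cos\omega t$:

```python
import sympy as sp

a, w, t = sp.symbols('a omega t', positive=True)
x = a * sp.cos(w * t)              # trajectory (1): x(t) = a*cos(omega*t)

t0 = sp.pi / (2 * w)               # t0 = T/4, with T = 2*pi/omega
reversed_path = x.subs(t, -t + 2 * t0)

print(sp.simplify(reversed_path))  # -a*cos(omega*t): trajectory (2), retracing B->A
```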
On the other hand, consider a particle in the electric field $E(t) = E_0 (t /t_0)^2$. This electric field obeys time reversal symmetry (about the time $t = 0$, as by convention) but it breaks time translational symmetry. Therefore, if $x(t)$ is a solution, so is $x(-t)$, but $x(-t + 2 t_0)$ is not, as you can check. | {
"domain": "physics.stackexchange",
"id": 48515,
"tags": "newtonian-mechanics, harmonic-oscillator, time-reversal-symmetry"
} |
How do we know the laws of physics are the same throughout the universe? | Question: How do we know the laws of physics are the same throughout the universe? Intuitively I would say they would vary in two natural ways: the constants in the equations may vary or the math in the equations may vary. As a guess they could change over a long time.
What is the farthest radius we can prove from Earth, with absolute certainty, that the laws of physics do not vary? I am aware this may not be a radius but a more complex shape that cannot be simply described by a radius.
The nearest answer I can think of for a radius is a guess. And that guess is based on the farthest physics experiment we have done from earth, which I think is an experiment with mirrors on the moon. Therefore if we assume (I don't know if this assumption is 100% reasonable) all physics laws hold because this experiment works. Then the radius is the distance to the moon. This doesn't give a concrete answer for the radius, merely an educated guess.
Answer: Let's start in the middle:
What is the farthest radius we can prove from Earth, with absolute certainty, that the laws of physics do not vary?
Zero. Proofs are found in mathematics and court rooms, and are impossible in natural science. The best we can do is have falsifiable theories. This holds for every description of reality - there's no "proof" even for the Laws of Gravity.
So, what could we observe that would tell us that physical constants or relationships between physical quantities are different in other parts of the universe, or at other times during its existence?
Gravity: For galaxy clusters, we have independent mass measurements from several different sources that agree within their (admittedly large) error bars. Gravitational lensing, velocity dispersion of the member galaxies and X-ray temperatures are all in agreement. So the laws of gravity seem to work even at redshifts up to 0.5 or even higher.
Atomic physics: We observe highly redshifted objects. The wavelength of the light emitted by these objects is made longer by the expansion of the universe. Observing redshifted spectral lines of different chemical elements (or molecules) tells us that atomic physics worked the same when and where this light was emitted. If the transition levels between electron orbits had changed over time, we would get different redshifts for the same objects depending on what element's spectral line we observe.
Nucleosynthesis: Shortly after the big bang, the temperature lowered such that protons and neutrons were no longer created and destroyed constantly. A free neutron has a half-life of about 10 minutes before it decays into a proton, an electron, and an antineutrino. Our theories predict that we'd get a helium (2x proton, 2x neutron) content in the universe of about 25% (the rest of the "normal" matter being essentially all hydrogen), and that is indeed what we observe. Now, the helium content is dependent both on the matter density at the time this took place and on the half-life of the neutron. From other observations (BAO come to mind) we are fairly certain that we got the matter density about right, which leaves only a small wiggle room for the half-life of the neutron, and hence for changes in the weak force.
We've covered gravity, electromagnetism, and the weak force. I don't know any good test for the strong force.
For a change of natural laws over time, we can look at the isotope distribution in rocks here on earth. We should be able to tell whether the decay rate of various elements was different at earlier times by looking at how many of each of their decay products are around.
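The isotope argument rests on exponential decay: if decay constants had been different in the past, today's ratio of parent to daughter nuclei would not match the rock's age inferred by other means. A sketch with a half-life roughly that of potassium-40 (numbers are illustrative):

```python
import math

def remaining_fraction(t, half_life):
    """Fraction of parent nuclei surviving after time t (exponential decay)."""
    decay_constant = math.log(2) / half_life
    return math.exp(-decay_constant * t)

half_life = 1.25e9            # years: roughly potassium-40, for illustration
t = 2.5e9                     # a rock two half-lives old
parent = remaining_fraction(t, half_life)
daughter = 1.0 - parent       # assuming all decay products are retained

print(parent, daughter)       # ~0.25 ~0.75: a 1:3 parent/daughter ratio
```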
To summarize, we cannot say with "absolute certainty", but what we observe seems to indicate that natural laws are the same throughout the universe. | {
"domain": "astronomy.stackexchange",
"id": 2765,
"tags": "cosmology"
} |
What do the intervals between groups of arrays in microarray gene expression data images mean? | Question: This may be a dumb question, but I am not familiar with microarray experiments at all. In this image, what does each of the 16 big squares mean, and what are the black intervals between them? I know each small square corresponds to a gene expression value, but I don't know what each separated group of expression values means. Oftentimes I see those separated expression values in microarray dataset images. I would be happy if someone could explain. Thanks!
Answer: The borders provide visual cues for the image analysis software to know which spot is which. The spots are also not printed all at once, but by a series of print heads, and the spaces allow for a small amount of error in the alignment of the print heads. It also allows for humans to more easily eyeball the results - the chip map can be printed in groups, and if you're looking to see what gene X did in your experiment, and know it's in the 2nd row, in the 2nd block, row 8, column 4, you can find it more easily than just knowing it's in row 73, column 27. | {
"domain": "biology.stackexchange",
"id": 6093,
"tags": "gene-expression, microarray"
} |
Is alcohol-based sanitizer flammable? | Question: After applying sanitizer, can your hands catch fire? If yes, after what time is it safe to use your hands, say in the kitchen?
One more question: if sanitizers are flammable, the flammable component is alcohol. However, doesn't the alcohol evaporate as soon as we apply the sanitizer? Then how can they remain flammable on our hands?
Answer: Firstly, we should realise that sanitizers should not be considered pure alcohols; alcohol is indeed a major component, but many other important components are also present. A typical sanitizer contains these things -
Isopropyl alcohol/ethyl alcohol, glycerin, water, carbomer (stabilizer) and some other components, mostly related to cosmetics.
Now, I will answer your questions one by one.
After applying sanitizers, can they catch fire?
Yes, there is 60%-70% alcohol content, which is highly flammable. Organic compounds like alcohols, alkanes etc. have very high enthalpies of combustion ($\Delta_\mathrm{c}H^\circ$ = -2000 kJ/mol for isopropyl alcohol) and they can evaporate very easily due to the heat of our hands ($\Delta_\mathrm{vap}H^\circ$ = 40 kJ/mol for isopropyl alcohol).
As soon as we apply alcohol, it evaporates so why is it still flammable?
This is actually wrong. As I mentioned earlier, sanitizers are not only alcohols; in fact, the 'carbomer' present acts as a stabilizer to emulsify/dissolve all the components in it. These stabilizers prevent the alcohol from evaporating immediately and allow it to disinfect our hands fully. Also, after applying sanitizer, many of you might feel your hands becoming sticky; that is because of the glycerin and other cosmetic products in it, which act like a moisturizer. So it generally takes about 30-40 seconds for all (or most) of the alcohol to evaporate, and the sanitizer remains flammable for that long.
A thing to keep in mind is that the residue (mostly glycerin) is also flammable (though not that much; its flammability rating is 1), so even after the alcohol has evaporated, you aren't fully safe from catching fire.
If you are working in the kitchen, there is no actual need to use sanitizer, as you can always wash your hands in the sink with soap and water.
I hope this answer cleared some of your doubts, stay safe :). | {
"domain": "chemistry.stackexchange",
"id": 14010,
"tags": "everyday-chemistry"
} |
NFA or DFA for strings that contain the substring ab exactly twice? | Question: Given the language with alphabet: $\{a, b, c\}$
Draw an NFA or DFA for all strings that contain the substring $ab$ exactly twice and at least one $c$.
I'm stuck with "exactly twice $ab$". Can somebody give me some ideas? It would also be great if you could suggest a regular expression for this language.
Answer: To answer these kinds of questions, try to break your problem down into smaller parts.
Can you find an NFA that accepts all words that contain the substring $ab$ exactly once?
If you have NFAs for the languages $L_1$ and $L_2$, can you assemble them into a new NFA that accepts $L_1\circ L_2 =\{uv\mid u\in L_1 \text{ and } v\in L_2\}$?
Can you find an NFA that accepts all words that contain a $c$?
If you have NFAs for the languages $L_1$ and $L_2$, can you assemble them into a new NFA that accepts $L_1\cap L_2 =\{u \mid u\in L_1 \text{ and } u\in L_2\}$?
Steps 2. and 4. are well-known closure constructions for regular languages. It's not hard to come up with these constructions yourself, but they are also covered in most textbooks.
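Independent of the automaton construction, it can help to pin down the target language with a direct membership test (a hypothetical helper, not an automaton). Note that occurrences of $ab$ can never overlap, since the two letters differ, so a simple substring count suffices:

```python
def in_language(w: str) -> bool:
    """Membership test: exactly two occurrences of 'ab' and at least one 'c'.

    Occurrences of 'ab' cannot overlap (the letters differ), so str.count works.
    """
    return w.count("ab") == 2 and "c" in w

# A few sample words over {a, b, c}:
print(in_language("abcab"))    # True: two 'ab's and a 'c'
print(in_language("abab"))     # False: no 'c'
print(in_language("abcabab"))  # False: three 'ab's
```

Such a checker is also handy for testing a candidate NFA or regular expression against random words.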
As pointed out in the comments, the concatenation is in your case more subtle, since the first string might end with $a$ and the second one might start with $b$. This can be fixed by setting $L_1:=\{w \mid \text{$w$ contains $ab$ once in the end}\}$. In particular, you can always split the words of your language exactly after the first occurrence of $ab$. | {
"domain": "cs.stackexchange",
"id": 2001,
"tags": "formal-languages, regular-languages, finite-automata, regular-expressions"
} |
Thermodynamic potential and partition function | Question: I am a bit confused by the relation between thermodynamic potential and partition functions.
From my understanding, we can generate all thermodynamic quantities by taking partial derivatives of the thermodynamic potential (or of $\ln(Z)$, where $Z$ is the partition function). Thus, I was expecting them to be proportional to each other.
For canonical ensembles, the partition function is $Z=\sum e^{-\beta E}$ and the corresponding thermodynamic potential is the Helmholtz free energy $F=-k_BT\ln(Z)$.
Similarly, for the grand canonical ensembles, the partition function is $Z=\sum e^{-\beta E-\alpha N}$ and the corresponding thermodynamic potential is the grand potential $J=-k_BT\ln(Z)$.
However, for the microcanonical ensemble, it does not seem to work anymore. The partition function is $Z=\sum1=\Omega$, so it is the entropy which is proportional to $\ln(Z)$. However, the thermodynamic potential is the internal energy $U$ in this case. It seems a bit weird to me.
I am wondering whether there is some deep reason behind this? Thanks!
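For reference, the standard microcanonical relations I have in mind are Boltzmann's entropy and the temperature that follows from it:

$$ S(U,V,N) = k_B \ln \Omega(U,V,N), \qquad \frac{1}{T} = \left(\frac{\partial S}{\partial U}\right)_{V,N} $$

so here the derivatives are taken with respect to $U$, with $S$ playing the role of the potential.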
Answer: The thermodynamic potential for microcanonical ensemble should be $S$ instead of $U$, for two reasons: first, by definition microcanonical ensemble describes an isolated system whose energy is conserved. So in this sense, $U$ is treated like an external condition (like the temperature of a canonical ensemble). Second, thermodynamic potentials satisfy extremal principles: that is, the thermodynamic equilibrium state is the minimum/maximum of the potential. The entropy $S$ does exactly that for isolated systems. | {
"domain": "physics.stackexchange",
"id": 86283,
"tags": "thermodynamics, statistical-mechanics, partition-function, quantum-statistics"
} |
How to call gazebo_msg/GetLinkState service for a sensor mounted on a robot? | Question:
Hi all,
I have the following problem:
To make my own robot in gazebo (a diff-drive with a laser and a kinect) I wrote the following urdf:
https://github.com/schizzz8/lucrezio_simulation_environments/blob/master/urdf/lucrezio_with_logical.urdf.xacro
Now, I'd like to call gazebo_msgs/GetLinkState service to retrieve the pose of the sensor with respect to the gazebo world.
If I do that, that's the output I get:
In fact, in the gazebo window, the robot seems to be made of just a few links:
How can I have gazebo to see also camera_link in order to retrieve its pose?
Thanks.
Originally posted by federico.nardi on Gazebo Answers with karma: 23 on 2018-06-05
Post score: 0
Answer:
I see you have a lot of fixed joints in your URDF. When converted to SDF, links connected by fixed joints are lumped together into a single link for performance, and as a consequence, the link names change.
One thing you could try would be to use the preserveFixedJoint tag in your URDF, so that the fixed joints are preserved, and link names and poses don't change, for example:
<joint name="camera_joint" type="fixed">
...
</joint>
<gazebo reference='camera_joint'>
<preserveFixedJoint>true</preserveFixedJoint>
</gazebo>
Originally posted by chapulina with karma: 7504 on 2018-06-05
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by federico.nardi on 2018-06-06:
I tried your solution, but nothing changes. Btw, I pushed the fix to the repo so you can check if I'm doing that right. To test it, you can run: roslaunch lucrezio_simulation_environments robot.launch (now the urdf is lucrezio.urdf.xacro)
Comment by federico.nardi on 2018-06-06:
Of course, if I change the joint type to "continuous" then I can see it. My question is: is that a possible solution? In the end, if I'm guaranteed that no one moves camera, I can be fine with this hack.
Comment by chapulina on 2018-06-06:
I think you may be using an older version of Gazebo which doesn't support the preserve tag... In any case, using a revolute joint with zero limits should work fine even if it is moved. | {
"domain": "robotics.stackexchange",
"id": 4275,
"tags": "gazebo-camera, gazebo-sensor"
} |
Migration of older rosdep.yaml bash entries to Groovy | Question:
What is the preferred method for migrating the old multi-line bash script entries in rosdep.yaml files? REP-0111 says that compatibility was eliminated in Fuerte:
http://ros.org/reps/rep-0111.html#backwards-compatibility
Multi-line entries in rosdep.yaml used to be interpreted as bash files, allowing fancy manipulations of dependencies. Is there a migration path for these types of entries? If they are no longer permitted, shouldn't the rosdistro default files eliminate any bash-style entries?
https://github.com/ros/rosdistro/blob/master/rosdep/base.yaml
I saw that a similar question was asked here, but no real solution was given. And placing a sudo requiring make install in a package makefile can get swallowed up in rosmake.
Originally posted by cmansley on ROS Answers with karma: 198 on 2013-01-29
Post score: 1
Answer:
Embedded bash commands are no longer supported by rosdep2.
I don't believe there is any migration path for those rules. So, they should be removed from the rosdistro files.
I would be happy for someone to convince me this answer is wrong.
Originally posted by joq with karma: 25443 on 2013-01-30
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by cmansley on 2013-01-30:
I understand that they are no longer supported. I am asking what is the migration path? Custom packages have issues with sudo?
Comment by joq on 2013-09-17:
In most cases, the preferred migration path is to create 3rd-party packages instead of downloading and manipulating source code. http://wiki.ros.org/bloom/Tutorials/ReleaseThirdParty . | {
"domain": "robotics.stackexchange",
"id": 12630,
"tags": "rosdep, ros-groovy"
} |
Why are the axial bonds of PF5 longer than those of the equatorial bonds? (Hybridization) | Question: The axial bonds of $\ce{PF5}$ are longer than those of the equatorial positions.
One explanation is that the axial bonds experience more repulsion
than the equatorial ones, and this leads to longer bonds.
Another explanation is that the axial bonds are p/d hybrids, whereas the equatorial bonds are s/p hybrids.
I am able to comprehend and appreciate the first explanation. However, I am unable to understand how there are differences in the hybrids. I had always considered d hybrids a dubious argument at best, and I believed the hybridization would lead to degenerate orbitals.
However, the explanation provided would suggest otherwise, that there are different hybridized orbitals of different energies at different portions of a single hybridized atom (excluding lone pairs. I understand those being non-hybridized in a hybridized atom).
Thus, my question is how the hybrid argument that was presented works and how would one be able to realize the different hybridized states of an atom in a molecule (as an approximation).
Answer: Older textbooks decided to use a d-orbital formalism for elements of the third and lower (in the periodic table) periods to explain hypervalency. Nowadays, most of these compounds are explained with multi-centre bonds.
I could go stealing pictures and explanations from ron’s great answer, but I’m not going to. Rather, I’m just going to quickly explain the gist and redirect you to that answer.
If you take away two of the fluorines, then you can imagine the $\ce{PF3}$ fragment to be sp²-hybridised with a remaining p-orbital containing two electrons. (This description is wrong, as can be seen from the pyramidal structure of $\ce{PCl3}$ and its analogues, but we’ll keep it as a working basis.) The p-orbital can now interact with two further fluorine atoms that approach it from above and below with an electron each, in the way that is shown in the image in ron’s answer. The resulting three orbitals will thereby create a four-electron three-centre bond. This bond can also be described mesomerically, by assuming a $\ce{P-F}$ single bond, a positive charge on phosphorus and a negative charge on the other fluorine, with a second mesomeric structure as a mirror image of the first.
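In orbital terms this can be sketched as follows (a schematic only; the signs depend on the phase conventions chosen for the phosphorus p-orbital $p$ and the two axial fluorine orbitals $f_1$, $f_2$):

$$ \psi_\text{bonding} \sim p + (f_1 - f_2), \qquad \psi_\text{nonbonding} \sim f_1 + f_2, \qquad \psi_\text{antibonding} \sim p - (f_1 - f_2) $$

The four electrons fill the bonding and nonbonding orbitals while the antibonding one stays empty, which is what makes the three-centre arrangement stable.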
From this mesomeric consideration you can see that the bond order of the axial $\ce{P-F}$ bonds is not 1 but rather closer to 0.5. The greater the order the shorter the bond so the smaller the order the longer the bond — and there you have your result. | {
"domain": "chemistry.stackexchange",
"id": 3625,
"tags": "physical-chemistry, bond, hybridization, valence-bond-theory"
} |
Is it possible to get lasing from every luminescent medium? | Question: Let's assume that there is a cavity with a pair of mirrors and, between them, a gain medium which luminesces under some external excitation/pumping. Let the absolute quantum yield of the gain medium be quite high - 50%, the luminescence decay approximately 100 µs, and the full width at half maximum about 100 nm, with the maximum at 500 nm (something like blue-green light). All these parameters are quite realistic. So with such parameters, what do we need to get lasing? And a second question: what would the output spectrum look like?
Answer: Relevant link - not an exact duplicate. It discusses the need for population inversion, and how to achieve it. A system with fewer than three (four if you want to limit power loss) energy levels is unlikely to be able to sustain population inversion, so while you could get "stimulated emission", it's unlikely that the output of such a medium would be coherent (there would be too many opportunities for spontaneous emission that destroy the coherence). | {
"domain": "physics.stackexchange",
"id": 27771,
"tags": "optics, visible-light, laser"
} |
change object column to numeric above cardinality threshold pandas | Question: Is the following an acceptable way to change an object column to numeric if the cardinality threshold (cardinality_threshold) is breached? I think it would be good to check whether the column values are numeric, though. Thanks!
data = {
'col1':['1', '2', '3'],
'col2':['1', '1', '2'],
}
df = pd.DataFrame(data)
df
cardinality_threshold = 2
cols = [i for i in df.columns]
for i in cols:
if(len(df[i].value_counts()) > cardinality_threshold) :
df[i] = pd.to_numeric(df[i]) # , errors = 'coerce'
print(df.info())
Answer:
Use DataFrame.nunique to vectorize the cardinality check
Use DataFrame.apply to convert to_numeric (not vectorized, but more idiomatic than loops)
Use uppercase for CARDINALITY_THRESHOLD per PEP8 style for constants
CARDINALITY_THRESHOLD = 2
breached = df.columns[df.nunique() > CARDINALITY_THRESHOLD]
df[breached] = df[breached].apply(pd.to_numeric, errors='coerce')
>>> df.dtypes
# col1 int64
# col2 object
# dtype: object
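Putting this together with the question's data, a self-contained run of the vectorized version (a demo sketch) looks like:

```python
import pandas as pd

# The question's example frame: both columns start as object (strings)
df = pd.DataFrame({
    'col1': ['1', '2', '3'],
    'col2': ['1', '1', '2'],
})

CARDINALITY_THRESHOLD = 2

# Vectorized cardinality check, then a single conversion pass
breached = df.columns[df.nunique() > CARDINALITY_THRESHOLD]
df[breached] = df[breached].apply(pd.to_numeric, errors='coerce')

print(df.dtypes)  # col1 converts to int64, col2 stays object
```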
I think it would be good to check if column values are numeric though.
Note that to_numeric already skips numeric columns, so it's simplest to just let pandas handle it.
If you still want to explicitly exclude numeric columns:
Use DataFrame.select_dtypes to get the non-numeric columns
Use Index.intersection to get the non-numeric breached columns
breached = df.columns[df.nunique() > CARDINALITY_THRESHOLD]
non_numeric = df.select_dtypes(exclude='number').columns
cols = non_numeric.intersection(breached)
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce') | {
"domain": "codereview.stackexchange",
"id": 43154,
"tags": "python, pandas"
} |
callback function for advertise is not called | Question:
I have a publisher node and I would like to get a callback when a subscriber connects. I followed the example in the documentation, but I get a compilation warning and no callback. For example, I start a rostopic echo before I launch my node, and I do receive the message, but the publisher is never notified that a subscriber connected.
Here is the warning:
In function ‘int main(int, char**)’: .../AppTestCallbackSubscription.cpp:20:88: warning: the address of ‘void connectCallback(const ros::SingleSubscriberPublisher&)’ will always evaluate as ‘true’ [-Waddress]
Here is the code:
#include <ros/ros.h>
#include <std_msgs/String.h>
//typedef boost::function<void(const SingleSubscriberPublisher&)> SubscriberStatusCallback;
void connectCallback(const ros::SingleSubscriberPublisher& pub)
{
ROS_INFO("Subscriber connected");
}
int main(int argc, char** argv)
{
const std::string TOPIC_NAME = "/foo";
ros::init(argc, argv, "MyPublisher");
ros::NodeHandle nh("~");
// None of these two lines triggers the callback
//ros::Publisher pub = nh.advertise<std_msgs::String>(TOPIC_NAME, 10, connectCallback, ros::SubscriberStatusCallback(), ros::VoidConstPtr(), false);
ros::Publisher pub = nh.advertise<std_msgs::String>(TOPIC_NAME, 10, connectCallback);
// Gives time to the subscribers to connect
ros::Time maxTime(ros::Time::now()+ros::Duration(1)); // Will wait at most 1000 ms
while(ros::Time::now()<maxTime)
{
ros::spinOnce();
}
std_msgs::String msg;
msg.data = "Hello World!";
pub.publish(msg);
ROS_INFO("Message published");
return 0;
}
Originally posted by Benoit Larochelle on ROS Answers with karma: 867 on 2012-11-14
Post score: 1
Answer:
Here is the solution found by @Lorenz and fixed (in the documentation) by @Dirk Thomas.
An explicit type cast is required in front of the callback function:
Instead of: handle.advertise<std_msgs::Empty>("my_topic", 1, connectCallback);
Use: handle.advertise<std_msgs::Empty>("my_topic", 1, (ros::SubscriberStatusCallback)connectCallback);
Originally posted by Benoit Larochelle with karma: 867 on 2012-12-11
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 11747,
"tags": "ros, callback, publisher"
} |
What does this paragraph mean? | Question: I'm reading this paper about an algorithm to measure image sharpness, and am confused by these sentences in the 6th paragraph:
It is well known that the attenuation of high-frequency content can lead to an image that appears blurred. One way to measure this effect is to examine the image’s magnitude spectrum $M(f)$, which is known to fall inversely with frequency, i.e. $M(f)\propto f^{-\alpha}$, where $f$ is the frequency and $-\alpha$ is the slope of the line $\log{M}\propto-\alpha \log{f}$.
I don't know what frequency this is referring to as the images this paper and algorithm are concerned with are grayscale, does anyone have a clue?
Answer: Detail in images requires higher-frequency basis functions. The frequency in this case is measuring fluctuations in intensity as a distance is traversed. With a lot of detail, there are a lot of fluctuations, and thus higher frequencies.
Tamp down the higher frequency and you lose detail, i.e. the image blurs.
The dampening is measured (described) best on a log scale.
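As an illustration of that log-log relationship, the slope $-\alpha$ can be estimated with a straight-line fit of $\log M$ against $\log f$. A sketch using a synthetic spectrum with known $\alpha = 2$:

```python
import numpy as np

# Synthetic magnitude spectrum M(f) = f^-2, i.e. alpha = 2
f = np.arange(1.0, 101.0)
M = f ** -2.0

# Fit log M = -alpha * log f + const; the slope recovers -alpha
slope, intercept = np.polyfit(np.log(f), np.log(M), 1)
print(round(-slope, 6))  # recovered alpha, here 2.0
```

On a real image one would use the radially averaged magnitude spectrum of the 2D FFT in place of the synthetic `M`; a sharper image keeps more high-frequency energy, giving a shallower slope.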
Consider this answer, Honey Mustard. | {
"domain": "dsp.stackexchange",
"id": 9019,
"tags": "image-processing, magnitude"
} |
How to compute ROOK Polynomials for NxM Matrices | Question: How to compute ROOK Polynomials for NxM Matrices for k objects?
Answer: See the answers to this question.
Your case is a lot easier, just choose $k$ of the $m$ columns and then you have
$n (n-1)\ldots (n-k+1)$ ways to put the $k$ rooks. So the coefficient of $x^k$ is
$\displaystyle r_k = \binom{m}{k} n (n-1)\ldots (n-k+1) = \binom{m}{k}\binom{n}{k}k!$.
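For completeness, the closed form above can be evaluated directly (a small sketch):

```python
from math import comb, factorial

def rook_coefficient(n: int, m: int, k: int) -> int:
    """Number of ways to place k non-attacking rooks on an n x m board."""
    return comb(m, k) * comb(n, k) * factorial(k)

# Rook polynomial of the 2x2 board: 1 + 4x + 2x^2
print([rook_coefficient(2, 2, k) for k in range(3)])  # [1, 4, 2]
```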
The provided link also gives a dynamic programming approach to compute $r_k$. | {
"domain": "cstheory.stackexchange",
"id": 335,
"tags": "co.combinatorics, polynomials, permutations"
} |
IIR Filter - calculating the phase response | Question: Using formulae from Audio EQ Cookbook from here it was easy to implement biquad IIR filters in C++, i.e. to calculate coefficients b0, b1, b2, a0, a1, a2.
Now, I want to calculate the phase response for a given IIR filter. I have read a number of papers about IIR design and read the websites (DSP Guide), but could not find a way to do it. How can I calculate and specify the phase of an IIR filter during the design?
Answer: Finding the phase response of a biquad at a specific frequency is simple. Recall the transfer function of a biquad:
$$
H(z) = \frac{b_0 + b_1z^{-1} + b_2z^{-2}}{a_0 + a_1z^{-1} + a_2z^{-2}}
$$
The frequency response of a system can be calculated by letting $z = e^{j\omega}$, where $\omega$ is a normalized frequency in the range $[-\pi, \pi)$. SO, it would look like this:
$$
H(e^{j\omega}) = \frac{b_0 + b_1e^{-j\omega} + b_2e^{-j2\omega}}{a_0 + a_1e^{-j\omega} + a_2e^{-j2\omega}}
$$
Because of the complex exponentials, the value of $H(e^{j\omega})$ will be complex. The phase response at the frequency $\omega$ is just the phase angle of the resulting complex number. The magnitude response at the same frequency is likewise equal to the magnitude of the number.
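A direct numerical evaluation of this (a sketch, using the cookbook's coefficient names) looks like:

```python
import cmath

def biquad_response(b0, b1, b2, a0, a1, a2, w):
    """Evaluate H(e^{jw}) for a biquad at normalized frequency w (rad/sample).

    Returns (magnitude, phase in radians)."""
    z1 = cmath.exp(-1j * w)   # z^-1
    z2 = cmath.exp(-2j * w)   # z^-2
    H = (b0 + b1 * z1 + b2 * z2) / (a0 + a1 * z1 + a2 * z2)
    return abs(H), cmath.phase(H)

# Sanity check: a pure one-sample delay H(z) = z^-1 has
# unit magnitude and phase -w at every frequency.
mag, ph = biquad_response(0, 1, 0, 1, 0, 0, 0.5)
print(mag, ph)  # 1.0, -0.5
```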
The only other detail you might need is how to arrive at $\omega$: given a signal sampled at sample rate $f_s$ Hz, if you want to know the frequency response at a given frequency $f$ Hz, you can use the above equation, and let:
$$
\omega = \frac{2 \pi f}{f_s}
$$ | {
"domain": "dsp.stackexchange",
"id": 2854,
"tags": "filters, infinite-impulse-response, dsp-core"
} |
Displaying a conflict marker | Question: I'm displaying a conflict marker for each actor for each day, if the current day is in their conflicts:
<li v-for="date in next_three_months">
<template v-for="actor in cast">
<template v-for="conflict in actor.conflicts">
<i v-if="date.datestamp==conflict.datestamp" class="is-conflict"></i>
</template>
</template>
</li>
This works, but it just feels like a lot of code (despite it already being a dozen or so lines shorter than how I was doing it before using Vue JS).
Here's my Vue.js:
new Vue({
el: '#schedule-builder',
data: {
cast: [
{{ cast_list }}
{
actor: {
id: "{{ actor_id }}",
name: "{{ actor_name or 'TBD' }}",
},
conflicts: [
{{ conflict_calendar }}
{ datestamp: "{{ value }}" },
{{ /conflict_calendar }}
],
},
{{ /cast_list }}
],
next_three_months: [
{
datestamp: '{{ now | format:Y-m-d }}',
},
{{ loop times="90" }}
{
datestamp: '{{ now | modify_date:+1 day | format:Y-m-d }}',
},
{{ /loop }}
],
Answer: This post has been around for too long and I don't want it to be a zombie any longer... You haven't replied to our questions in comments but we see you have been on SE sites lately so I am going to go ahead with the code as is.
As was pointed out in comments there appears to be some type of rendering (perhaps server-side) that is mixed in with the JavaScript code, which is not declared, so I wouldn't really be able to try running the code even if I wanted to :/ I could try to dig through your StackOverflow posts to find relevant code but I don't have time for that.
For the code you posted, it doesn't really seem like "a lot" but that depends on perspective... if you really wanted, you could abstract the code in the v-if (i.e. v-if="date.datestamp==conflict.datestamp") to a method and call it instead of having the logic in the markup.
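For example (a sketch with a hypothetical method name), the comparison could become a pure function registered under methods, which also makes it testable in isolation:

```javascript
// Pure comparison, kept out of the markup
function isConflict(date, conflict) {
  return date.datestamp === conflict.datestamp;
}

// On the component it would be registered as:
//   methods: { isConflict: isConflict }
// and the template would read:
//   <i v-if="isConflict(date, conflict)" class="is-conflict"></i>

console.log(isConflict({ datestamp: '2020-01-01' }, { datestamp: '2020-01-01' })); // true
```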
I question whether <template> is the best tag to use in the for loops...While there are no restrictions on permitted content within a <template> tag, it seems odd to have one nested inside the other. Would a simple <div> suffice for one or both? | {
"domain": "codereview.stackexchange",
"id": 32623,
"tags": "javascript, iteration, vue.js"
} |
Game engine ObjectFactory class | Question: As part of a game engine, I've developed a template class for a factory which manages objects of one specific type. The factory handles initialization, destruction, and accessing previously created objects by string identifier.
It allows forwarding parameters to object constructors with variadic arguments, and is interfaced with a few freestanding functions within the namespace. Since each object type has its own factory, factories are created automatically and managed by the Game class (in the actual code Game is a singleton and handles a bunch of stuff but in the following simplified code, Game just deletes all the automatically created factories for us).
The following code can be copied to a file and compiled, it runs as expected and includes a simple example. Compile with C++11 enabled.
#include <iostream>
#include <map>
#include <vector>
#include <functional>
#include <memory>
namespace TGE
{
class FactoryBase
{
public:
virtual void destroy() = 0;
};
class Game
{
public:
void manageFactory(FactoryBase* factory)
{
_factoryStack.push_back(factory);
}
void destroy()
{
while(!_factoryStack.empty())
{
_factoryStack.back()->destroy();
_factoryStack.pop_back();
}
}
private:
std::vector<FactoryBase*> _factoryStack;
} MyGame;
// Forward declare template function for friendship later in ObjectFactory
template<class ObjectClass>
std::vector<std::string> objectList();
template<class ObjectClass>
class ObjectFactory : public FactoryBase
{
public:
void destroy()
{
if(_instance != nullptr)
delete _instance;
}
static ObjectFactory<ObjectClass>* instance()
{
if(_instance == nullptr)
_instance = new ObjectFactory<ObjectClass>();
return _instance;
}
template<typename... Args>
std::shared_ptr<ObjectClass> createObject(std::string id, Args&&... args)
{
objectMap[id] = std::shared_ptr<ObjectClass>
(new ObjectClass(std::forward<Args>(args)...));
return objectMap[id];
}
std::shared_ptr<ObjectClass> getObject(std::string id)
{
if(objectMap.find(id) != objectMap.end())
return objectMap[id];
return nullptr;
}
void deleteObject(std::string id)
{
if(objectMap.find(id) != objectMap.end())
objectMap.erase(id);
}
private:
ObjectFactory<ObjectClass>() { MyGame.manageFactory(this); }
~ObjectFactory<ObjectClass>() {}
static ObjectFactory<ObjectClass>* _instance;
friend std::vector<std::string> objectList<ObjectClass>();
std::map<std::string, std::shared_ptr<ObjectClass>> objectMap;
};
template<class ObjectClass>
ObjectFactory<ObjectClass>* ObjectFactory<ObjectClass>::_instance = nullptr;
template<class ObjectClass, typename... Args>
std::shared_ptr<ObjectClass> construct(std::string id, Args&&... args)
{
return ObjectFactory<ObjectClass>::instance()->createObject(id, std::forward<Args>(args)...);
}
template<class ObjectClass>
std::shared_ptr<ObjectClass> acquire(std::string id)
{
return ObjectFactory<ObjectClass>::instance()->getObject(id);
}
template<class ObjectClass>
void destruct(std::string id)
{
ObjectFactory<ObjectClass>::instance()->deleteObject(id);
}
template<class ObjectClass>
std::vector<std::string> objectList()
{
std::vector<std::string> keys;
for(auto itr : ObjectFactory<ObjectClass>::instance()->objectMap)
{
keys.push_back(itr.first);
}
return keys;
}
}
class Rectangle
{
public:
Rectangle(int w, int h) : width(w), height(h) {}
void setVals(int w, int h)
{
width = w;
height = h;
}
void getVals()
{
std::cout << width << 'x' << height << std::endl;
}
private:
int width, height;
};
int main()
{
auto myRectangle = TGE::construct<Rectangle>("MyRectangle", 5, 10);
myRectangle->getVals();
myRectangle->setVals(10, 5);
TGE::acquire<Rectangle>("MyRectangle")->getVals();
TGE::MyGame.destroy();
return 0;
}
Answer: Nicely done program.
Must do
Add a virtual destructor to FactoryBase.
Suggestions
objectList() can easily be implemented without the friend construct if ObjectFactory<ObjectClass> has a function, such as ObjectFactory<ObjectClass>::getObjectList(), that returns the right object.
template<class ObjectClass>
std::vector<std::string> objectList()
{
return ObjectFactory<ObjectClass>::instance()->getObjectList();
}
You have most functions named consistently -- with a verb as the first term -- except objectList(). That should be changed to getObjectList(). | {
"domain": "codereview.stackexchange",
"id": 12981,
"tags": "c++, game, c++11, template, abstract-factory"
} |
Backbone collection filter | Question: I've got this Collection in my Backbone application:
var AnswersCollection = Backbone.Collection.extend({
model: Answer,
initialize: function() {
console.log('Hello from new answers collection');
},
getCorrect: function() {
return this.where({correct: true});
},
getWrong: function() {
return this.where({correct: false});
},
randomSet: function(correct, wrong) {
arr1 = _.sample(this.getCorrect(), correct);
arr2 = _.sample(this.getWrong(), wrong);
result = _.shuffle(arr1.concat(arr2));
coll = new AnswersCollection(result)
return coll;
}
});
This works; randomSet is called by a View when the config has certain options or the user clicks the 'rerender' button, but I'm wondering whether I can improve this code.
The flow looks like this
Get randomly selected X correct answers
Get randomly selected Y wrong answers
Shuffle all answers
Create an array of answers that can be passed to a new Collection
This code doesn't look too cool; maybe you have some ideas on how to make it cooler? :)
Answer: Reviewing code for coolness is a first ;)
Some observations:
initialize: if all you really want to do there is Hello World, then you should take it out. initialize is meant for initializing your model with answers.
getCorrect and getWrong are short one-liners that are called once; you should inline them.
arr1, arr2, coll and even result are pretty terrible names; how about correctAnswers, wrongAnswers and answers? And don't create coll at all, since you could return the result of new AnswersCollection immediately.
The most serious problem though is that you are not declaring the prior mentioned variables with var, that is bad.
Also, from a coolness perspective, braces on their own line is where it's at.
I would counter-propose this:
var AnswersCollection = Backbone.Collection.extend(
{
model: Answer,
randomSet: function(correct, wrong)
{
var correctAnswers = _.sample(this.where({correct: true}), correct),
wrongAnswers = _.sample(this.where({correct: false}), wrong),
answers = _.shuffle(correctAnswers.concat(wrongAnswers));
return new AnswersCollection( answers );
}
}); | {
"domain": "codereview.stackexchange",
"id": 5907,
"tags": "javascript, array, backbone.js"
} |
A recursive_transform_reduce Function for Various Type Arbitrary Nested Iterable Implementation in C++ | Question: This is a follow-up question for A population_variance Function For Various Type Arbitrary Nested Iterable Implementation in C++. Thanks to G. Sliepen's answer, I am trying to implement the mentioned recursive_transform_reduce function here.
The usage description
Similar to std::transform_reduce, the purpose of the recursive_transform_reduce function is to apply a function (unary operation) to each element in the given range, then reduce the results with a binary operation. There are four parameters in the recursive_transform_reduce function (the version without an execution policy). The first one is an input range, the second one is the initial value of the reduction result, the third one is a function (unary operation) to apply to each element, and the final one is a binary operation for the reduction.
The experimental implementation
The experimental implementation of recursive_transform_reduce template function is as below.
// recursive_transform_reduce implementation
template<class Container, class ValueType, class UnaryOp, class BinaryOp = std::plus<ValueType>>
requires (std::ranges::range<Container>) && (!is_elements_iterable<Container>)
constexpr auto recursive_transform_reduce(const Container& input, ValueType init, const UnaryOp& unary_op, const BinaryOp& binop = std::plus<ValueType>())
{
for (const auto& element : input) {
auto result = unary_op(element);
init = binop(init, result);
}
return init;
}
template<class Container, class ValueType, class UnaryOp, class BinaryOp = std::plus<ValueType>>
requires (std::ranges::range<Container>) && (is_elements_iterable<Container>)
constexpr auto recursive_transform_reduce(const Container& input, ValueType init, const UnaryOp& unary_op, const BinaryOp& binop = std::plus<ValueType>())
{
return std::transform_reduce(std::begin(input), std::end(input), init, binop, [&](auto& element) {
return recursive_transform_reduce(element, init, unary_op, binop);
});
}
// With execution policy
template<class ExPo, class Container, class ValueType, class UnaryOp, class BinaryOp = std::plus<ValueType>>
requires ((std::is_execution_policy_v<std::remove_cvref_t<ExPo>>) && (std::ranges::range<Container>) && (!is_elements_iterable<Container>))
constexpr auto recursive_transform_reduce(ExPo execution_policy, const Container& input, ValueType init, const UnaryOp& unary_op, const BinaryOp& binop = std::plus<ValueType>())
{
for (const auto& element : input) {
auto result = unary_op(element);
init = binop(init, result);
}
return init;
}
template<class ExPo, class Container, class ValueType, class UnaryOp, class BinaryOp = std::plus<ValueType>>
requires ((std::is_execution_policy_v<std::remove_cvref_t<ExPo>>) && (std::ranges::range<Container>) && (is_elements_iterable<Container>))
constexpr auto recursive_transform_reduce(ExPo execution_policy, const Container& input, ValueType init, const UnaryOp& unary_op, const BinaryOp& binop = std::plus<ValueType>())
{
return std::transform_reduce(execution_policy, std::begin(input), std::end(input), init, binop, [&](auto& element) {
return recursive_transform_reduce(execution_policy, element, init, unary_op, binop);
});
}
Test cases
With the recursive_transform_reduce function here, the population_variance function from the previous question can be improved as below.
template<typename T>
concept can_calculate_variance_of = requires(const T & value)
{
(std::pow(value, 2) - value) / std::size_t{ 1 };
};
template<typename T>
struct recursive_iter_value_t_detail
{
using type = T;
};
template <std::ranges::range T>
struct recursive_iter_value_t_detail<T>
: recursive_iter_value_t_detail<std::iter_value_t<T>>
{ };
template<typename T>
using recursive_iter_value_t = typename recursive_iter_value_t_detail<T>::type;
// population_variance function implementation (with recursive_transform_reduce template function)
template<class T = double, class Container>
requires (is_recursive_sizeable<Container>&& can_calculate_variance_of<recursive_iter_value_t<Container>>)
auto population_variance(const Container& input)
{
auto mean = arithmetic_mean<T>(input);
return recursive_transform_reduce(std::execution::par,
input, T{}, [mean](auto& element) {
return std::pow(element - mean, 2);
}, std::plus<T>()) / recursive_size(input);
}
Note: recursive_iter_value_t is referred from How to solve requires clause is incompatible.
The improved version of arithmetic_mean function implementation:
template<typename T>
concept is_recursive_reduceable = requires(T x)
{
recursive_reduce(x, T{});
};
template<typename T>
concept is_recursive_sizeable = requires(T x)
{
recursive_size(x);
};
template<class T = double, class Container> requires (is_recursive_sizeable<Container>)
auto arithmetic_mean(const Container& input)
{
return (recursive_reduce(input, T{})) / (recursive_size(input));
}
The implementation of recursive_reduce function:
template<class T, class ValueType, class Function = std::plus<ValueType>>
auto recursive_reduce(const T& input, ValueType init, const Function& f)
{
return f(init, input);
}
template<class Container, class ValueType, class Function = std::plus<ValueType>>
requires is_iterable<Container>
auto recursive_reduce(const Container& input, ValueType init, const Function& f = std::plus<ValueType>())
{
for (const auto& element : input) {
auto result = recursive_reduce(element, ValueType{}, f);
init = f(init, result);
}
return init;
}
The recursive_size function implementation:
// recursive_size implementation
template<class T> requires (!std::ranges::range<T>)
auto recursive_size(const T& input)
{
return 1;
}
template<class T> requires (!is_elements_iterable<T> && std::ranges::range<T>)
auto recursive_size(const T& input)
{
return input.size();
}
template<class T> requires (is_elements_iterable<T>)
auto recursive_size(const T& input)
{
return std::transform_reduce(std::begin(input), std::end(input), std::size_t{}, std::plus<std::size_t>(), [](auto& element) {
return recursive_size(element);
});
}
The test of population_variance function is like:
std::vector<double> test_vector{ 1, 2, 3, 4, 5 };
std::cout << "recursive_size of test_vector: " << recursive_size(test_vector) << std::endl;
std::cout << "population_variance of test_vector: " << population_variance(test_vector) << std::endl;
// std::vector<std::vector<double>> case
std::vector<decltype(test_vector)> test_vector2;
test_vector2.push_back(test_vector);
test_vector2.push_back(test_vector);
test_vector2.push_back(test_vector);
std::cout << "recursive_size of test_vector2: " << recursive_size(test_vector2) << std::endl;
std::cout << "population_variance of test_vector2: " << population_variance(test_vector2) << std::endl;
auto test_vector3 = n_dim_container_generator<10, std::vector, decltype(test_vector)>(test_vector, 3);
std::cout << "recursive_size of test_vector3: " << recursive_size(test_vector3) << std::endl;
std::cout << "population_variance of test_vector3: " << population_variance(test_vector3) << std::endl;
Then, the execution output:
recursive_size of test_vector: 5
population_variance of test_vector: 2
recursive_size of test_vector2: 15
population_variance of test_vector2: 2
recursive_size of test_vector3: 295245
population_variance of test_vector3: 2
Another test of population_variance function:
std::vector<double> test_vector{ 1, 2, 3 };
std::cout << "recursive_size of test_vector: " << recursive_size(test_vector) << std::endl;
std::cout << "population_variance of test_vector: " << population_variance(test_vector) << std::endl;
// std::vector<std::vector<double>> case
std::vector<decltype(test_vector)> test_vector2;
test_vector2.push_back(test_vector);
test_vector2.push_back(test_vector);
test_vector2.push_back(test_vector);
std::cout << "recursive_size of test_vector2: " << recursive_size(test_vector2) << std::endl;
std::cout << "population_variance of test_vector2: " << population_variance(test_vector2) << std::endl;
auto test_vector3 = n_dim_container_generator<10, std::vector, decltype(test_vector)>(test_vector, 3);
std::cout << "recursive_size of test_vector3: " << recursive_size(test_vector3) << std::endl;
std::cout << "population_variance of test_vector3: " << population_variance(test_vector3) << std::endl;
The execution output of the above test code:
recursive_size of test_vector: 3
population_variance of test_vector: 0.666667
recursive_size of test_vector2: 9
population_variance of test_vector2: 0.666667
recursive_size of test_vector3: 177147
population_variance of test_vector3: 0.666667
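Those outputs are easy to sanity-check independently of the C++ code. A short Python sketch using the standard library (statistics.pvariance computes the population variance, i.e. dividing by N rather than N-1):

```python
import statistics

flat = [1, 2, 3, 4, 5]
print(statistics.pvariance(flat))          # equals 2, as in the first test

# Nesting copies of the same data leaves the population variance unchanged,
# which is why test_vector2 and test_vector3 also report 2.
print(statistics.pvariance(flat * 3))      # still equals 2

print(round(statistics.pvariance([1, 2, 3]), 6))   # 0.666667, as in the second test
```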
A Godbolt link is here.
All suggestions are welcome.
The summary information:
Which question is this a follow-up to?
A population_variance Function For Various Type Arbitrary Nested Iterable Implementation in C++
What changes have been made in the code since the last question?
I am trying to implement a recursive_transform_reduce function here and use it in population_variance function.
Why is a new review being asked for?
Please check the implementation of recursive_transform_reduce function and if there is any possible improvement, please let me know.
Answer: Ensure init is only used once
Since you pass the value of init to recursive calls to recursive_transform_reduce(), the value of init ends up being added potentially more than once to the output. For example:
recursive_transform_reduce(std::vector<std::vector<int>>{{},{}}, 1, std::identity())
This will return 3 instead of 1. To fix it, instead of passing init, pass ValueType{} to the recursive call. Although of course that assumes there is a default constructible null element.
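The double counting is easy to reproduce in any language. Here is a short Python sketch of the same recursion (nested lists standing in for nested ranges, `+` as the binary op, the unary op omitted for brevity), with a flag toggling between the buggy behaviour (init passed to recursive calls) and the suggested fix (a null element passed instead):

```python
def recursive_reduce(nested, init, pass_init_down=True):
    """Reduce a nested list with +, mimicking the reviewed C++ template.

    With pass_init_down=True the recursive calls receive `init` again,
    so it is added once per sub-container (the bug); with False they
    receive the null element 0 (the suggested fix).
    """
    if not isinstance(nested, list):
        return init + nested
    total = init
    for element in nested:
        sub_init = init if pass_init_down else 0
        total += recursive_reduce(element, sub_init, pass_init_down)
    return total

print(recursive_reduce([[], []], 1, pass_init_down=True))   # 3: init counted 3 times
print(recursive_reduce([[], []], 1, pass_init_down=False))  # 1: init counted once
```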
Consider using advice given in earlier answers
I think you should follow the advice given to you in the answer to the question you reference about a recursive iter_value_t:
Have one unconstrained template
Avoid the requires clause if possible by using the concept name instead of class in the template parameter list
Also again, both your templates do a std::transform_reduce(); you just wrote one of them out manually. But only one of the variants has to do the heavy lifting; the other can just deal with a single value. Then it becomes:
template<class Input, class T, class UnaryOp, class BinaryOp = std::plus<T>>
constexpr auto recursive_transform_reduce(const Input& input, T init, const UnaryOp& unary_op, const BinaryOp& binop = std::plus<T>())
{
return binop(init, unary_op(input));
}
template<std::ranges::range Input, class T, class UnaryOp, class BinaryOp = std::plus<T>>
constexpr auto recursive_transform_reduce(const Input& input, T init, const UnaryOp& unary_op, const BinaryOp& binop = std::plus<T>())
{
return std::transform_reduce(std::begin(input), std::end(input), init, binop, [&](auto& element) {
return recursive_transform_reduce(element, T{}, unary_op, binop);
});
}
I also changed the names of the types to match std::transform_reduce. | {
"domain": "codereview.stackexchange",
"id": 40185,
"tags": "c++, recursion, c++20"
} |
What actually is interval of two notes? | Question: In my book, it is written that if the two notes have frequency $p$ and $q$ then the interval between them is $p/q$. But while solving a question, I found the answer which is like $q/p$. After surfing different sites I saw there is something like high frequency to low frequency ratio. So I am really confused and want to know what actually right fact is.
Answer: It's hard to see precisely what you are asking. One of the main tuning notes A is 440 Hz for instance. The A 1 octave higher is 880 Hz, 1 octave lower is 220 Hz, etc. The E that is one perfect fifth above A440 is 1.5x or 660 Hz.
However, equal temperament is used on a piano keyboard, where each adjacent key is $^{12} \sqrt 2$ higher than the one below it. (I should say on a computerized keyboard this can be the case. On real life pianos, the tuning is a bit different than equal temperament, partly in order to make the most commonly used keys more in tune).
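Those numbers are quick to verify. A small Python check of the ratios mentioned above (A4 = 440 Hz as the reference; note that the equal-tempered fifth comes out slightly flat of the just 3/2 ratio):

```python
A4 = 440.0                      # tuning reference, Hz

octave_up = 2 * A4              # 880 Hz: the interval ratio is 880/440 = 2
just_fifth = A4 * 3 / 2         # 660 Hz: a just perfect fifth, ratio 3/2

# Equal temperament: every semitone multiplies frequency by 2**(1/12),
# so a perfect fifth (7 semitones) is slightly flat of the 3/2 ratio.
et_fifth = A4 * 2 ** (7 / 12)
print(round(et_fifth, 2))       # 659.26 Hz, vs 660 Hz in just intonation
```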
This video may help you more than this explanation
https://youtu.be/1Hqm0dYKUx4 | {
"domain": "physics.stackexchange",
"id": 90436,
"tags": "waves, acoustics, frequency"
} |
Can the brain detect the passage of a neutrino? | Question: On a few occasions either in bed or sitting around a fire, with my eyes closed, I rarely but sometimes see a very quick fast flash of white and then, with my eyes still closed, the flash disappears immediately. It happens so fast that I sit up and rethink if it was even real. But I know it is real because I have had it happen to me many times in my life. I have also asked other people if it happens to them and 4/5 replied back saying that they had experienced the flash before.
Is it possible for a neutrino to pass the brain and in response produce the white flash? After all the brain is made of 73% water and neutrino detectors are predominantly water.
I tried submitting this question on biology.stackexchange and I was told that questions like these belonged on the physics.stackexhange site.
Answer: The cross-section for neutrino interactions is energy dependent.
For solar neutrinos at $\sim 0.4$ MeV, which would dominate any neutrinos likely to interact with a brain (cosmic background neutrinos have far lower energies), the cross-sections are $\sigma \sim 10^{-48}$ m$^2$, for both leptonic processes (elastic scattering from electrons) and neutrino-nucleon interactions.
The mean free path of a neutrino will be given by $l \sim (n\sigma)^{-1}$, where $n$ is number of interacting target particles per cubic metre and $\sigma$ is the cross-section.
If your head is basically water with a density of 1000 kg/m$^3$, then there are $n_e = 3.3\times10^{29}\ m^{-3}$ of electrons and about $6 \times 10^{29} m^{-3}$ of nucleons.
Including both nucleonic and leptonic processes, the mean free path is $\sim 10^{18}\ m$.
So unless your head is 100 light years wide, there is little chance of any individual neutrino interacting with it.
This is only one part of the calculation though - we need to know how many neutrinos are passing through your head per second. The neutrino flux from the Sun is about $7\times 10^{14}$ m$^{-2}$ s$^{-1}$. If your head has an area of about 400 cm$^2$, then there are $3\times 10^{13}$ neutrinos zipping through your brain every second.
Thus if we take $x=20$ cm as the path length through your head, there is a chance $\sim x/l$ of any given neutrino interacting, where $l$ is the mean free path calculated earlier.
This probability multiplied by the neutrino flux through your head gives a rate of about $6\times 10^{-6}$ interactions per second, or roughly one every two days.
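That order-of-magnitude chain is easy to re-run. A Python sketch using exactly the figures quoted above (flux, head area, path length, and mean free path are the answer's estimates, not precise values):

```python
flux = 7e14          # solar neutrinos per m^2 per s
area = 400e-4        # head cross-section: 400 cm^2 expressed in m^2
path = 0.20          # path length through the head, m
mfp = 1e18           # mean free path in water at ~0.4 MeV, m

rate = flux * area * (path / mfp)   # interactions per second
print(rate)                          # ~5.6e-6 per second
print(1 / rate / 86400)             # ~2 days between interactions
```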
Whether that would produce any perceptible effect in your brain needs to be shunted back to Biology SE. If we require it (or rather scattered electrons) to produce Cherenkov radiation in the eyeball, then this needs $>5$ MeV neutrinos and so the rate would reduce to 1 per 100 days or even lower due to the smaller number of neutrinos at these energies and the smaller volume of water in the eyeball.
EDIT:
In fact my original answer may be over-optimistic by an order of magnitude since water only acts as a good detector (via Cherenkov radiation) for neutrinos above energies of 5 MeV. Solar neutrinos are predominantly lower energy than this. My calculation ignored atmospheric neutrinos which are produced in far fewer numbers (but at higher energies $\sim 0.1-10$ GeV). The cross-section for these is 4-6 orders of magnitude higher, but I think they are produced in so much lower numbers that they don't contribute.
Conclusion: it doesn't have anything to do with neutrinos. The rate would be too low, even if they could be perceived. | {
"domain": "physics.stackexchange",
"id": 20305,
"tags": "estimation, neutrinos, biophysics, perception, cosmic-rays"
} |
Why is the second law of thermodynamics not symmetric with respect to time reversal? | Question: The question might have some misconceptions/ sloppy intuition sorry if that's the case (I'm not a physicist).
I seem to have the intuition that given a system of $N$ charged particles in 3D space colliding (under the effect of gravitational forces and electrostatic forces) elastically with each other, then the evolution of this system is symmetric with respect to time reversal. In the sense that if I record a video of the evolution of this mechanical system and then play it backwards then the resulting video will look like something that can happen in our universe. If this intuition is correct, then it should be easy to prove mathematically from the uniqueness theorem of ordinary differential equations.
I also seem to have the idea that statistical mechanics is nothing but the situation described above with $N$ being very large (particles in a gas are moving under the effect of gravitational and van der Waals forces and nothing else, no?). Thus, I would expect that the evolution of a thermodynamic system with respect to time should be symmetric with respect to time reversal. However this seems to contradict the second law of thermodynamics. Where did I go wrong?
After seeing some of the responses to my question I wish to add the following :
I am NOT trying to refute the second law mathematically (lol :D). As you can see above I don't provide any mathematical proofs . I specifically said "If my intuition is correct, then it should be easy to prove mathematically ". That means I am skeptical about my own intuition because: 1) I don't back it up with a proof, 2) it is in contradiction with a well established law such as the second law.
Answer: The arrow of time in thermodynamics is statistical.
Suppose you have a deterministic system that maps from states that can have character $X$ or character $Y$, to other states that can have character $X$ or character $Y$. The system is such that, for a randomly selected state $X_n$ or $Y_n$, the probability that the system will map it uniquely and deterministically to a state with character $Y$ is $10^9$ times larger than the probability that the system will map it uniquely to a state with character $X$.
Then, given any state $X_n$ or $Y_n$ and the number of times $N$ we have iterated the system, we can run time backward by reversing the iteration of the system and get the corresponding past state, because each state is mapped uniquely and deterministically.
However, if we can only measure the character of the system, we might note that the system originated in a state with character $X$, and, after an unknown number of iterations of the system, it was in character $Y$.
We would correctly note that states with character $X$ always evolve into states with character $Y$ if you wait a while. We could call this the "X-to-Y law" and express it mathematically. If we start with a certain number $x$ of states with character $X$ and number $y$ of states with character $Y$, then after iterations $N$,
$x = 10^{-9N}x_0$ and $y = y_0 + x_0-x$.
However, there is no corresponding "Y-from-X law". If we don't know $N$ and $Y_n$ exactly, we can only speak statistically. And statistically, the chances are overwhelming that, given some state with character $Y$, the state at some previous iteration also had character $Y$. This means we can't reverse the direction of time in our mathematical expression of the "X-to-Y law".
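To see just how lopsided such statistics become at macroscopic particle numbers, here is a small back-of-the-envelope sketch (the half-room setup is my own illustration, not from the answer): the chance that N independent molecules are all found in one half of a room at a given instant is (1/2)**N.

```python
import math

# Probability that N independent molecules are all found in, say, the left
# half of a room at one instant: (1/2)**N.  Computed in log10 to avoid
# underflow at macroscopic N (the last value is Avogadro's number).
for n in (10, 100, 6.022e23):
    log10_p = n * math.log10(0.5)
    print(f"N = {n:.3g}: P ~ 10^{log10_p:.3g}")
```

For even 100 molecules the probability is already ~10^-30; at Avogadro-scale N it is effectively zero, which is why the reverse process is never observed even though the microscopic dynamics allow it.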
A more plain language explanation:
Suppose you have an oxygen tank and a nitrogen tank in a room and their mass ratio is the same as the mass ratio as that of air. The room pressure is assumed to always equalize with ambient pressure and temperature.
The 2nd law of thermodynamics says that, if you open the tanks and wait half an hour, the oxygen and nitrogen will all be out and the air will be exactly the same as it was before.
The time-reversed 2nd law of thermodynamics says that, any time you're in a room with normal air in it, somebody must have opened an oxygen and nitrogen tank half an hour ago. | {
"domain": "physics.stackexchange",
"id": 83181,
"tags": "thermodynamics, entropy, time-evolution, time-reversal-symmetry, arrow-of-time"
} |
Physical Interpretation of Relationship Between Hall Conductivity and Berry Curvature? | Question: Why is the Hall conductivity in a 2D material
$$\tag{1} \sigma_{xy}=\frac{e^2}{2\pi h} \int dk_x dk_y F_{xy}(k)$$
where the integral is taken over the Brillouin Zone and $F_{xy}(k)$ is the Berry curvature of the filled bands?
What is the physical interpretation of this equation?
Also, can we re-parametrize all of the filled states by another pair of variables $A$ and $B$ and conclude that
$$\tag{2} \sigma_{xy}=\frac{e^2}{2\pi h} \int F(A,B)dAdB$$
where $F(A,B)$ is the Berry curvature with respect to the $A$ and $B$ parameter space?
Answer: The formula follows from the Kubo formula of conductivity (based on the linear response theory), which is discussed in this question: Kubo Formula for Quantum Hall Effect and in the references therein. Starting from the Kubo formula (set $e=\hbar=1$)
$$\tag{1}\sigma_{xy}=i\sum_{E_m<0<E_n}\frac{\langle m|v_x|n\rangle\langle n|v_y|m\rangle-\langle m|v_y|n\rangle\langle n|v_x|m\rangle}{(E_m-E_n)^2},$$
where $|m\rangle$ is the single particle eigen state of the eigen energy $E_m$, i.e. $$\tag{2} H|m\rangle = E_m|m\rangle.$$
Let us take the momentum derivative $\partial_\boldsymbol{k}$ on both sides of Eq. (2), we have
$$\tag{3}(\partial_{\boldsymbol{k}}H)|m\rangle + H\partial_{\boldsymbol{k}}|m\rangle = (\partial_{\boldsymbol{k}}E_m)|m\rangle + E_m \partial_{\boldsymbol{k}}|m\rangle.$$
Then overlap with $\langle n|$ from left, Eq. (3) becomes
$$\tag{4}\langle n|(\partial_{\boldsymbol{k}}H)|m\rangle + E_n\langle n|\partial_{\boldsymbol{k}}|m\rangle = (\partial_{\boldsymbol{k}}E_m)\langle n|m\rangle + E_m \langle n|\partial_{\boldsymbol{k}}|m\rangle.$$
Here we have used $\langle n|H = E_n\langle n|$. If $|m\rangle$ and $|n\rangle$ are different eigen states (for $E_m\neq E_n$ in Eq. (1)), their overlap should vanish, i.e. $\langle n|m\rangle=0$. Also note that $\partial_{\boldsymbol{k}} H$ is nothing but the velocity operator $\boldsymbol{v}=\partial_{\boldsymbol{k}} H$ by definition. So Eq. (4) can be reduced to
$$\tag{5} \langle n|\boldsymbol{v}|m\rangle = (E_m - E_n) \langle n|\partial_{\boldsymbol{k}}|m\rangle.$$
Substitute Eq. (5) to Eq. (1) (restoring the $x$, $y$ subscript), we have
$$\tag{6} \sigma_{xy}=-i\sum_{E_m<0<E_n}\big(\langle m|\partial_{k_x}|n\rangle\langle n|\partial_{k_y}|m\rangle - \langle m|\partial_{k_y}|n\rangle\langle n|\partial_{k_x}|m\rangle\big).$$
On the other hand, the Berry connection is defined as $\boldsymbol{A}=i\langle m|\partial_{\boldsymbol{k}}|m\rangle$, and the Berry curvature is $F_{xy}=(\partial_{\boldsymbol{k}}\times\boldsymbol{A})_z=\partial_{k_x}A_{y}-\partial_{k_y}A_{x}$. Given that $(\partial_\boldsymbol{k}\langle m|)|n\rangle = - \langle m|\partial_\boldsymbol{k}|n\rangle$ (integration by part), we can see
$$\tag{7} F_{xy}= -i \sum_n\big( \langle m|\partial_{k_x}|n\rangle\langle n|\partial_{k_y}|m\rangle - \langle m|\partial_{k_y}|n\rangle\langle n|\partial_{k_x}|m\rangle\big) +i \langle m|\partial_{k_x}\partial_{k_y}-\partial_{k_y}\partial_{k_x}|m\rangle.$$
The last term will vanish as the partial derivatives commute with each other. So, by comparing with Eq. (6), we end up with
$$\tag{8} \sigma_{xy}=\sum_{E_m<0}F_{xy}\sim\int_\text{BZ} d^2k F_{xy}.$$
This means that the Hall conductance is simply the sum of the Chern numbers, i.e. the total Berry flux through the Brillouin zone (BZ), for all the occupied bands. Of course, we are free to re-parameterize the momentum space by another pair of variables and the total Berry flux through the BZ will not change (as it is coordinate independent).
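Eq. (8) can be checked numerically on a concrete lattice model. The sketch below (my own addition, not from the answer) uses the two-band Qi-Wu-Zhang model and the Fukui-Hatsugai lattice discretization of the Berry curvature: link variables are built from band eigenvectors on a discretized BZ, and the Chern number is the total Berry flux divided by 2π.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def h(kx, ky, m):
    # Two-band Qi-Wu-Zhang Hamiltonian; the parameter m selects the phase.
    return np.sin(kx) * SX + np.sin(ky) * SY + (m + np.cos(kx) + np.cos(ky)) * SZ

def chern_number(m, n=40):
    """Chern number of the lower band via the Fukui-Hatsugai lattice method."""
    ks = 2 * np.pi * np.arange(n) / n
    u = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(h(kx, ky, m))
            u[i, j] = vecs[:, 0]                     # lower (filled) band
    total = 0.0
    for i in range(n):
        for j in range(n):
            ip, jp = (i + 1) % n, (j + 1) % n
            # product of link variables around one plaquette of the BZ grid
            w = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                 * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            total += np.angle(w)
    return total / (2 * np.pi)

print(chern_number(1.0))   # magnitude 1: topological phase, quantized sigma_xy
print(chern_number(3.0))   # close to 0: trivial phase, no Hall response
```

The integer is insensitive to the grid size or any re-parameterization of the BZ, illustrating the coordinate independence mentioned above.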
So what is the physical meaning of $F_{xy}$? $F_{xy}$ is an effective magnetic field in the momentum space (perpendicular to the $xy$-plane along the $z$-direction). We know that for the magnetic field $\boldsymbol{B}$ in the real space, a charged particle moving in it will experience the Lorentz force, such that the equation of motion reads $\dot{\boldsymbol{k}}=\dot{\boldsymbol{r}}\times \boldsymbol{B}$. Now switching to the momentum space, we just need to interchange the momentum $\boldsymbol{k}$ and the coordinate $\boldsymbol{r}$, and replace $\boldsymbol{B}$ by $\boldsymbol{F}$ (note that the symbol $\boldsymbol{F}$ here denotes the Berry curvature, not the force), which leads to
$$\tag{9} \dot{\boldsymbol{r}}=\dot{\boldsymbol{k}}\times \boldsymbol{F}$$
So what is $\dot{\boldsymbol{r}}$? It is the velocity of the electron, which is proportional to the electric current $\boldsymbol{j}$. And what is $\dot{\boldsymbol{k}}$? It is the force acting on the electron (because the force is the rate that the momentum changes with time), which is proportional to the electric field strength $\boldsymbol{E}$, so Eq. (9) implies
$$\tag{10} \boldsymbol{j} \sim \boldsymbol{E}\times \boldsymbol{F}.$$
Therefore the Berry curvature $F_{xy}$ at each momentum point simply gives the Hall response of the single-particle state at that momentum. So the Hall conductivity of the whole electron system should be the sum of the Berry curvature over all occupied states, which is stated in Eq. (8). | {
"domain": "physics.stackexchange",
"id": 10396,
"tags": "condensed-matter, solid-state-physics, perturbation-theory, quantum-hall-effect, berry-pancharatnam-phase"
} |
Ion concentration in acid and base | Question: Hello here is a question that is a bit confusing I hope you can help me solve it and understand it:
Determine the hydronium and hydroxide ion concentrations in a solution
that is $10^{-4} M \ce{Ca(OH)2} \text{ AND } 10^{-4} M \ce{HCl}$
Note: the "and" is actually capitalised in my book..
So what I understood from this question is that they are both in one solution. Now I thought of two ways to solve it:
A) the concentrations are equal and both are strong, which means they would neutralise each other completely and end up as a neutral solution, so both $\ce{H3O+}$ and $\ce{OH-}$ will be $10^{-7}$.
B) the calcium base has 2 $\ce{OH-}$ while HCl has 1 $\ce{H+}$, so the base concentration is double that of the acid, which means the solution is going to be basic, but I don't know how I can solve this to find the actual concentrations.
Which way is right and when does an acid and base really react to become neutral?
Answer: Your second (B) way of thinking is correct.
In the solution there are twice as many OH- ions as H+ ions.
All the $\ce{H+}$ from HCl will react with half the $\ce{OH-}$ from calcium hydroxide.
Half of the $\ce{OH-}$ will remain ($10^{-4}\ \mathrm{M}\ \ce{OH-}$).
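Carrying those steps through numerically (a quick Python sketch; $K_w = 10^{-14}$ at 25 °C is assumed):

```python
Kw = 1e-14                 # water autoionization constant at 25 C (assumed)

c_caoh2 = 1e-4             # mol/L Ca(OH)2; each formula unit releases 2 OH-
c_hcl = 1e-4               # mol/L HCl; each releases 1 H+

oh_total = 2 * c_caoh2             # 2e-4 M OH- before neutralization
oh_left = oh_total - c_hcl         # neutralization consumes 1e-4 M of OH-
h3o = Kw / oh_left                 # from Kw = [H3O+][OH-]

print(oh_left)   # ~1e-4 M OH- remaining
print(h3o)       # ~1e-10 M H3O+, i.e. a basic solution
```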
Then use $K_w$ to determine the final concentration of H+. | {
"domain": "chemistry.stackexchange",
"id": 2739,
"tags": "acid-base, concentration"
} |
Rotation operator for a point in a coordinate system linearly derived from Cartesian coordinates | Question: For some experimental and practical reason, I have created a new coordinate system in the form
$$x^\prime_i=T_{ij}x_j$$
where $T_{ij}$ isn't a square matrix. $x_i$ is standard Cartesian coordinates, and $x^\prime_j$ is a point in the new system. I have to mention that the new system's axes are not linearly independent. So the last relation can be written as
$$\left(\matrix{x_0^\prime\\x_1^\prime\\x_2^\prime\\x_3^\prime}\right)=\left( \matrix{T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \\ T_{41} & T_{42} & T_{43} } \right)\cdot \left(\matrix{x\\y\\z}\right)$$
The matrix $T_{ij}$ is well defined.
What I need is a rotation operator that will transform a point in the primed system, as the standard rotation operator does. So say I have the standard rotation matrix in Cartesian coordinates around the z-axis:
$$R_{ij}= \left( \matrix{\cos{\theta}&-\sin{\theta}&0\\ \sin{\theta}&\cos{\theta}&0\\0&0&1} \right)$$
So to rotate a point in Cartesian coordinates, we use the standard operator formula:
$$P^\prime_i=R_{ij}P_j$$
where $P_j$ is the point before rotation, and $P^\prime_i$ is the point after rotation.
How can I write this rotation formula for a point in the new coordinates system that uses 4 points? How will the rotation matrix look like? I expect a rotation matrix that is $4\times4$, but I don't know how to derive it. Please help in that.
Answer: To be consistent with notation, I use the $x'$ for the transformation to the new system and $\tilde{x}$ for the rotation. Thus, as you defined
$$x'_i=T_{ij}x_j,$$
$$\tilde{x}_i=R_{ij}x_j.$$
We know that
$$\tilde{x}'_i=T_{ij}\tilde{x}_j=T_{ij}R_{jk}x_k.\tag{1}$$
You are looking for the transformation matrix $Q_{ij}$, such that
$$\tilde{x}'_i=Q_{ij}x'_j,$$
or
$$\tilde{x}'_i=Q_{ij}x'_j=Q_{ij}T_{jk}x_k\tag{2}$$
Naively, one could now write from (1) and (2)
$$T R = Q T ,$$
$$Q=T R T^{-1}.$$
However, $T$ is not a square matrix and is not invertible. In other words, such a matrix $Q$ cannot be determined uniquely this way.
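One natural (but, as noted, non-unique) choice is to replace $T^{-1}$ by the Moore-Penrose pseudoinverse, $Q = T R T^{+}$. When $T$ has full column rank, $T^{+}T = I_3$, so $QT = TR$ holds exactly and $Q$ rotates any primed point that came from Cartesian space. A numpy sketch (the specific $T$ here is a random full-column-rank example, since the question's $T$ is not given):

```python
import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 3))       # stand-in 4x3 map; full column rank

# One consistent (but not unique) 4x4 rotation operator for primed coordinates:
Q = T @ R @ np.linalg.pinv(T)

x = np.array([1.0, 2.0, 3.0])         # a point in Cartesian coordinates
# Rotating in primed space agrees with mapping the rotated Cartesian point,
# because pinv(T) @ T = I when T has full column rank.
print(np.allclose(Q @ (T @ x), T @ (R @ x)))   # True
```

The remaining freedom in $Q$ lives in the left null space of $T$, which is where the extra constraints mentioned below would come in.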
Or, looking at it as
$$T_{ij}R_{jk} = Q_{pq}T_{qr}$$
You know that these are twelve equations, because both product matrices are $4\times3$. $Q_{ij}$ is a $4\times4$ matrix with, thus, $16$ unknowns. In other words, there are infinitely many possibilities. Unless, of course, you add constraints. | {
"domain": "physics.stackexchange",
"id": 13254,
"tags": "coordinate-systems, linear-algebra"
} |
why is friction acting towards the centre in a level curved road? | Question: We all know that friction is a force that opposes motion and acts opposite to the direction of motion, but on a level curved road it becomes the centripetal force and points towards the centre. Why?
Answer: According to the Newton's first law, in the absence of an external force, a car would move along a straight line. When the front wheels of the car are turned to follow a curved road, the car is blocked from moving straight by the friction, serving as that external force.
So, we can say that, if it was not for the friction force, opposing the natural straight movement of the car, the car would not be able to turn or stay on a curved road.
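As a quantitative aside (my own illustrative numbers, not from the answer): since friction is what supplies the centripetal force on a flat curve, the maximum cornering speed follows from $\mu m g \ge m v^2 / r$.

```python
import math

g = 9.81        # m/s^2
mu = 0.7        # assumed tyre-road friction coefficient (dry asphalt)
r = 50.0        # assumed curve radius, m

# Static friction supplies the centripetal force:
#   mu * m * g >= m * v**2 / r   =>   v_max = sqrt(mu * g * r)
v_max = math.sqrt(mu * g * r)
print(round(v_max, 1))   # 18.5 m/s, about 67 km/h
```

Above this speed the required centripetal force exceeds what friction can provide, and the car slides outward along a straighter path.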
The direction of the friction force is normal to the wheels and their trajectory. So, we can say that the friction force acts along the radius of the trajectory curve, pointing to its center, which makes it a centripetal force. | {
"domain": "physics.stackexchange",
"id": 51613,
"tags": "newtonian-mechanics, forces, friction, free-body-diagram, centripetal-force"
} |
Monkeying around with Survey Monkey and Asp.Net Mvc | Question: Intro
I've been a desktop dev for a long time now, and have never really had to monkey with web development until very recently. I have a need to do some custom integration with Survey Monkey's Api and as a "Hello World" I came up with this simple ASP.Net MVC app that queries Survey Monkey for a list of my surveys and displays some real basic info about them. (Nothing I couldn't get from just going to their site, but hey!, I'm learning here.)
I have no idea what I'm doing, so any and all feedback would be greatly appreciated. I'd be particularly interested in some way to cache the results from the API request so that I don't have to fetch it each time the page loads.
The Tech
C# 6.0
MVC 5.0
Ninject MVC for dependency injection.
Survey Monkey API wrapper for C#.
The Code
The API wrapper library needs an API Key & Token to connect to Survey Monkey. In order to keep them out of the GitHub repo, I created some environment variables and a factory class that knows how to access them and return an instance of SurveyMonkeyApi.
./SurveyMonkeyApiFactory.cs
namespace SurveyMonkeyPlayground
{
public interface ISurveyMonkeyApiFactory
{
SurveyMonkeyApi Create();
}
public class SurveyMonkeyApiFactory : ISurveyMonkeyApiFactory
{
public SurveyMonkeyApi Create()
{
string apiKey = Environment.GetEnvironmentVariable("SM_APIKEY");
string token = Environment.GetEnvironmentVariable("SM_TOKEN");
return new SurveyMonkeyApi(apiKey, token);
}
}
}
Next I created a model that is a subset of the full Survey object from the 3rd party library.
./Models/SurveyModel.cs
namespace SurveyMonkeyPlayground.Models
{
public class SurveyModel
{
public string Name { get; set; }
public int ResponseCount { get; set; }
public string Url { get; set; }
}
}
And then a controller to handle requests for a list of all surveys, and details about any given survey.
./Controllers/SurveysController.cs
namespace SurveyMonkeyPlayground.Controllers
{
public class SurveysController : Controller
{
private ISurveyMonkeyApiFactory _apiFactory;
public SurveysController(ISurveyMonkeyApiFactory factory)
{
_apiFactory = factory;
}
// GET: Surveys
public ActionResult Index()
{
var surveyMonkey = _apiFactory.Create();
var surveys = surveyMonkey.GetSurveyList()
.Select(s => new SurveyModel() { Name = s.Nickname, ResponseCount = s.NumResponses, Url = s.AnalysisUrl });
return View(surveys);
}
// GET: Surveys/Details?name=SomeSurveyName
public ActionResult Details(string name)
{
var surveyMonkey = _apiFactory.Create();
var survey = surveyMonkey.GetSurveyList()
.Where(s => s.Nickname == name)
.Select(s => new SurveyModel() { Name = s.Nickname, ResponseCount = s.NumResponses, Url = s.AnalysisUrl })
.First();
return View(survey);
}
}
}
And finally, some views to display this all.
./Views/Surveys/Index.cshtml
@model IEnumerable<SurveyMonkeyPlayground.Models.SurveyModel>
@{
ViewBag.Title = "Surveys";
}
<h2>Surveys</h2>
<table class="table">
<tr>
<th>
@Html.DisplayNameFor(model => model.Name)
</th>
<th>
@Html.DisplayNameFor(model => model.ResponseCount)
</th>
<th></th>
</tr>
@foreach (var item in Model) {
<tr>
<td>
<a href="@item.Url">@item.Name</a>
</td>
<td>
@Html.DisplayFor(modelItem => item.ResponseCount)
</td>
<td>
@Html.ActionLink("Details", "Details", new { Name = item.Name})
</td>
</tr>
}
</table>
./Views/Surveys/Details.cshtml
@model SurveyMonkeyPlayground.Models.SurveyModel
@{
ViewBag.Title = "Surveys";
}
<h2>@Model.Name Details</h2>
<div>
<hr />
<dl class="dl-horizontal">
<dt>
@Html.DisplayNameFor(model => model.Name)
</dt>
<dd>
@Html.DisplayFor(model => model.Name)
</dd>
<dt>
@Html.DisplayNameFor(model => model.ResponseCount)
</dt>
<dd>
@Html.DisplayFor(model => model.ResponseCount)
</dd>
<dt>
Result Analysis Url
</dt>
<dd>
<a href="@Html.DisplayFor(model => model.Url)">@Html.DisplayFor(model => model.Url)</a>
</dd>
</dl>
</div>
<p>
@Html.ActionLink("Back to List", "Index")
</p>
Answer: This looks like a typo to me:
<dd>
<a href="@Html.DisplayFor(model => model.Url)">@Html.DisplayFor(model => model.Url)</a>
</dd>
Shouldn't it just be:
<dd>
<a href="@Model.Url">@Html.DisplayFor(model => model.Url)</a>
</dd>
One thing I find really useful when dealing with projecting to model classes is either use automapper, or write an extension method:
public static class SurveyExtensions
{
    public static IEnumerable<SurveyModel> ProjectToSurveyModel(this IEnumerable<SurveyMonkey.Survey> source)
{
// null check
return source.Select(s => new SurveyModel
{
Name = s.Nickname,
ResponseCount = s.NumResponses,
Url = s.AnalysisUrl
});
}
}
Then you can simplify your controller:
// [Route("Surveys")]
public ActionResult Index()
{
var surveyMonkey = _apiFactory.Create();
var surveys = surveyMonkey.GetSurveyList()
.ProjectToSurveyModel();
return View(surveys);
}
I've added in a commented out Route attribute just in case that floats your boat. I personally prefer attribute routing because I like to see the URL in place. Especially because one of the people I work with is mad for clean URLs, so you can do:
[Route("something-important/{id}/collection-of-stuff/{collectionId}")]
public ActionResult Xyz(int id, int collectionId)
And it's really obvious what's happening without having to change files and look through the route table. I probably wouldn't do it on actions with obvious routes though (like index).
You can read more about attribute routing here. | {
"domain": "codereview.stackexchange",
"id": 14798,
"tags": "c#, asp.net-mvc, razor"
} |
Rotating reference frames | Question: I'm trying to understand the equations that govern velocity in a rotating reference frame...
\begin{equation}
v_i = (\frac{dr}{dt})_r + \Omega \times r .
\end{equation}
I'd like to build a simple simulation of a rocket taking off from earth with some constant inertial velocity, say: $v_i=[1,0,0]^T$.
I assume some $\Omega$ value to represent the rotation of the earth about z, say $\Omega=[0,0,1]$.
And then solve for the perceived velocity in the rotating frame:
\begin{equation}
(\frac{dr}{dt})_r = v_i - \Omega \times r .
\end{equation}
What I expect, after integrating velocity into position, would be an outwardly rotating spiral showing the relative position of the "rocket" to an observer in the rotating earth frame. What I see, from a simple simulink sim, is quite different.
My sim:
The output:
Thoughts?
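A minimal self-contained sketch (pure Python, forward Euler; the values for $\Omega$ and $v_i$ are the ones assumed above) of what the correctly transformed equation produces. Note that the inertial velocity is first rotated into the rotating frame: feeding a constant, untransformed $v_i$ into $(dr/dt)_r = v_i - \Omega \times r$ yields a circle about a fixed point rather than the expected spiral, which may be what the Simulink model is doing:

```python
import math

OMEGA = 1.0              # assumed rotation rate about z, rad/s
V_I = (1.0, 0.0, 0.0)    # assumed constant inertial-frame velocity

def rotating_frame_rate(r, t):
    """dr/dt in the rotating frame: R^T v_i - Omega x r (rotation about z)."""
    c, s = math.cos(OMEGA * t), math.sin(OMEGA * t)
    # rotate the inertial velocity into the rotating frame (R^T = rotation by -Omega*t)
    vx = c * V_I[0] + s * V_I[1]
    vy = -s * V_I[0] + c * V_I[1]
    # Omega x r = (-Omega*r_y, Omega*r_x, 0), so subtracting it adds (+Omega*r_y, -Omega*r_x, 0)
    return (vx + OMEGA * r[1], vy - OMEGA * r[0], V_I[2])

def simulate(t_end=5.0, dt=1e-3):
    """Forward-Euler integration from the origin; returns the final position."""
    r = [0.0, 0.0, 0.0]
    steps = int(round(t_end / dt))
    for k in range(steps):
        drdt = rotating_frame_rate(r, k * dt)
        for i in range(3):
            r[i] += dt * drdt[i]
    return r
```

The radius grows linearly ($|r| = vt$) while the direction winds backwards relative to the rotation: an outward spiral, exactly what an observer fixed to the rotating Earth should see.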
Answer: $\def\m{\mathbf}$
Coordinate vector of a point in static frame: $r^s$
Coordinate vector of the same point in rotating frame: $r^r$
(Pure rotation, both frames have the same origin.)
Coordinate transformation (rotation matrix): $R$
The matrix is orthogonal,
i.e., $R^TR=RR^T=\m1$ (the unit matrix)
Important property: $\m0=\frac{d}{dt} \m1 = \frac{d}{dt} (R^TR) = \dot R^T R + R^T \dot R$
That means the matrix $\m\Omega := R^T\dot R$ is anti-symmetric $\m\Omega = -\m\Omega^T$ (with only three relevant components $\Omega_1 := \m\Omega_{32}, \Omega_2 := \m\Omega_{13}, \Omega_3:=\m\Omega_{21}$) and the products $\m\Omega v$ can be expressed with the vector $\Omega=(\Omega_1,\Omega_2,\Omega_3)$ as $\Omega\times v$.
Coordinate vector in rotating frame:
$$
r^s = R\cdot r^r
$$
Velocity, time-derivative in the static frame:
$$
v_s^s := \frac{d r^s}{dt} = \dot R\;r^r + R\;\dot r^r
$$
Apply $R^T$ to this equation:
$$
R^T v_s^s = R^T\dot R\;r^r + \dot r^r
$$
You see you transform the velocity $v_s^s$ into the rotating frame (the same where also $r^r$ lives).
The right name in our nomenclature for the time derivative calculated in the static frame and transformed into the rotating one would be $v_s^r = R^T v_s^s$.
With this you get your formula
$$
v_s^r = \Omega \times r^r + \dot r^r.
$$ | {
"domain": "physics.stackexchange",
"id": 12537,
"tags": "newtonian-mechanics, rotational-dynamics, reference-frames, simulations, inertial-frames"
} |
Does capacitance between two point charges lead to a paradox? | Question: Is it possible to have a capacitance in a system of two point charges? Since there is a potential energy between them and they both have charges then we can divide the charge by the potential and get capacitance.
However, capacitance is supposed to depend only on geometry so should therefore be zero. How does one resolve this paradox?
Answer:
How does one resolve this paradox?
As a general prelude to this answer, I would like to mention that it is well known that classical point charges lead to some unresolvable paradoxes in classical EM. Personally, I do not consider this an indication of an inconsistency in classical EM, but an indication that classical point particles themselves are inconsistent. So what remains here is to determine if this specific case is an instance of an inconsistency.
However, capacitance is supposed to depend only on geometry so should therefore be zero.
It is true that the capacitance depends only on the geometry, but that does not immediately imply that it should be zero. A pair of point charges does have some geometry, specifically the distance, $s$, between them. So all we can say from this is that the capacitance should be some function of the distance between them $C=C(s)$. While we could indeed have $C(s)=0$, that is by no means guaranteed.
Since there is a potential energy between them and they both have charges then we can divide the charge by the potential and get capacitance.
This is actually a little bit incorrect. The potential energy between two point charges is undefined. You can extract an infinite amount of energy from a system of two point charges simply by letting them get sufficiently close together. This is one of the major problems of classical point charges and this fact leads to many of the genuine paradoxes.
If we naively plug in infinity then we get $$C=\frac{Q}{V}=\frac{Q}{\infty}=0$$
Of course, since $\infty$ is not a real number, this method is more than a little suspect. But the voltage at the surface of a spherical charge $Q$ of radius $R$ is $$V=\frac{Q}{4\pi \epsilon_0 R}$$ so $$\lim_{R\rightarrow 0}\frac{Q}{V}=0$$ This result then gives a bit more confidence in the $C=0$ result.
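As a quick numeric sanity check (a sketch; the relation $C = Q/V = 4\pi\epsilon_0 R$ follows directly from the surface-voltage formula above):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def sphere_capacitance(radius_m):
    # C = Q / V with V = Q / (4*pi*eps0*R)  =>  C = 4*pi*eps0*R
    return 4 * math.pi * EPS0 * radius_m

# C shrinks linearly with R: already ~1e-20 F at atomic scales
for R in (1e-2, 1e-6, 1e-10):
    print(f"R = {R:.0e} m  ->  C = {sphere_capacitance(R):.3e} F")
```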
So, although this particular aspect of classical point charges does reach close to the root of many paradoxes, it does seem that $C=0$ is not itself paradoxical. | {
"domain": "physics.stackexchange",
"id": 84677,
"tags": "electrostatics, charge, capacitance"
} |
Internal teeth involute gear pitch circles | Question: I'm having some trouble figuring out how to design the corresponding internal gear for any particular spur gear. With spur gears, meshing is accomplished by having two gears with the same modulus (MOD) or, in the US, the same diametral pitch. The spacing between the two gears is such that their pitch circles touch.
Looking at an internal gear the tooth profile for an internal involute gear is the inverse of the external involute. However if I try to create such a system (in Fusion 360), I get some interference issues:
What I've done is create two spur gears, both MOD = 1, one with 24 teeth which I've just simply inverted to create an internal ring gear and the other with 12 teeth. My questions:
The pitch circles are marked in green. Is the pitch circle for the internal gear the same as the spur gear that I used in inverse to create it? Or is it moved somewhat. The proper relative gear positions should be where the pitch circles touch so that's a main issue.
Although it doesn't look too bad, the left and right teeth in the image are interfering. I think if the addendum of the internal gear's teeth was reduced this would help matters. It seems that the larger the ratios (eg: if the spur gear was 8 teeth instead) the more the gears would crash and perhaps the more the addendum should be reduced.
In addition I could add backlash; would this help the interference issue?
Are there specific rules to design an internal involute gear for a particular MOD?
Answer: First of all, yes the pitch circle is the same for an internal gear as it is for the equivalent external gear. Second, note that indeed the addendum refers to how much the teeth extend inward of the pitch circle for an internal gear (instead of outward, as is the case for external gears). Vice versa for the dedendum. For standard gears, both internal and external, the addendum $a$ and dedendum $d$ are as follows:
$$a=m=\frac{1}{D_p}$$
$$d=1.25m=\frac{1.25}{D_p}$$
$m$ is the module (in millimetres) and $D_p$ is the diametral pitch (in inches$^{-1}$). The units of the addendum and dedendum are consistent with that of the module or diametral pitch, depending what you chose.
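These proportions are easy to tabulate; here is a short sketch (the `pitch_diameter` helper, using pitch diameter $= mN$, is added for convenience and is an assumption beyond the formulas above):

```python
def tooth_proportions(module_mm):
    """Standard full-depth proportions for a module-m gear: a = m, d = 1.25 m."""
    return {"addendum": module_mm, "dedendum": 1.25 * module_mm}

def pitch_diameter(module_mm, teeth):
    # pitch diameter = module * number of teeth (same circle for the
    # internal ring gear as for the equivalent external gear)
    return module_mm * teeth

# the MOD 1 gears from the question: 24-tooth internal ring, 12-tooth pinion
ring = pitch_diameter(1.0, 24)    # 24 mm pitch circle
pinion = pitch_diameter(1.0, 12)  # 12 mm pitch circle
dims = tooth_proportions(1.0)     # addendum 1.0 mm, dedendum 1.25 mm
```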
Therefore, it looks like you will need to increase the dedendum for the internal gear to $1.25m$, and make sure the addendum is only $m$. This will then prevent these interference issues, and will generally be fine as long as any external gears are not significantly larger than half the diameter of the internal gear they are inside of. | {
"domain": "engineering.stackexchange",
"id": 1310,
"tags": "gears"
} |
Double semion model on a square lattice | Question: We consider the double semion model proposed in Levin and Wen's paper
http://arxiv.org/abs/cond-mat/0404617
http://journals.aps.org/prb/abstract/10.1103/PhysRevB.71.045110
In their paper, the double semion model is defined on a honeycomb lattice.
Now I am trying to study the same model on a square lattice.
Question 1: Is the following Hamiltonian correct?
$$H=-\sum_{\textrm{vertex}} \prod_{k \in \textrm{vertex}}\sigma_{k}^{z} + \sum_{\textrm{plaquette}} \left[ \prod_{j \in \textrm{legs}} i^{(1-\sigma_{j}^{z})/2} \right] \prod_{k \in \textrm{plaquette}} \sigma_{k}^{x}.$$
On the figure there are totally 8 green legs around each plaquette.
As shown in Levin and Wen's paper, the ground state of the double semion model is the equal-weight superposition of all closed loops, and each loop contributes a minus sign. Given a loop configuration, the wave function component is given by $(-1)^{\textrm{number of loops}}$. If we have an even (odd) number of loops, the wave function component of this configuration is $+1$ ($-1$). On the honeycomb lattice everything looks fine. But I am confused about the state on the square lattice when the strings are crossing.
Question 2: For the following two configurations, should we regard them as one loop or two loops? Do they have the same amplitude in the ground state wave function?
Here we consider a $3 \times 3$ torus, i.e., we have periodic boundary conditions on both directions. The red line denotes the string, i.e., the spin is $\left| \downarrow \right\rangle$ on each red link.
This is configuration I.
This is configuration II.
Answer: The defining property of the double semion model is the nature of the ground state as a superposition of loop pattern with alternating signs, and not the form of its Hamiltonian. As you noticed, it is not clear how to count loops on a square lattice. As far as I see, this is one reason why the string-net models are defined on honeycomb lattices, since it allows to count loops unambiguously. (In fact, any trivalent graph would do.)
If you want to define a way to count loops on a square lattice, one way is to "decorate" it such that it becomes a trivalent lattice, this is, you replace every four-valent vertex by two three-valent vertices with an edge inbetween. The state of the extra edge is uniquely determined by the state on the surrounding edges, and thus, this gives you a way to count loops on the square lattice. In the same way, you can map the honeycomb Hamiltonian to a new Hamiltonian on the square lattice. Note, however, that this mapping will necessarily break some lattice symmetry.
Your Hamiltonian is rotationally invariant, so I suspect it is not the correct Hamiltonian. I have not analyzed it carefully, but you could try to exactly diagonalize it on a 4x4 lattice and check the ground subspace. Alternatively, you can study different moves to go from one configuration to another and check if they all give the same phase (I suspect that no, and there will be cancellations). For this, of course, you first have to pick a convention of how to count loops. | {
"domain": "physics.stackexchange",
"id": 19394,
"tags": "condensed-matter, research-level, topological-order"
} |
Digital integrator | Question: I have been implementing a control software where I need to calculate a magnetic flux based on the measurement of the phase voltages of a three phase grid (basically three sinewaves) according to the equation $$\psi = \int_0^t u(\tau)\mathrm{d}\tau$$.
It seemed to be a pretty simple task at first glance. So I have written the following piece of code (trapezoidal integration rule) and passed a perfect sinewave through it:
#include <math.h>

#define PI 3.14159265f
#define N  64

float integrate(float T, float xk)
{
    static float intk_1 = 0; // I(k-1)
    static float xk_1 = 0;   // x(k-1)
    float intk = intk_1 + T/2*(xk_1 + xk);
    intk_1 = intk;
    xk_1 = xk;
    return intk;
}

int main(void)
{
    float x, y;
    for (int k = 0; k < N; k++) {
        x = sinf(k*2*PI/N);
        y = integrate(1.0f, x); // T = 1 sample period
    }
    return 0;
}
I have realized that the integration outcome depends on the time instant where the integration starts.
For example, in case the integration starts at the time instant where the phase voltage has zero value, the integrator output has a large offset which occurs already in the ideal case, i.e. perfect analog channels, no noise and so on.
In case the integration starts at the time instant where the phase voltage has its positive maximum, the integrator output in the ideal case doesn't have the offset. Further complications occur in case I use real analog channels with offsets and noise.
What I have been looking for is some robust integration method which will be immune mainly against the offsets so that I can calculate the magnetic flux according to the above mentioned equation. One idea which I have is to use some digital filter with appropriate frequency characteristics. But I am not sure whether it is a good idea. Can anybody help?
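The starting-phase dependence can be reproduced with a short pure-Python sketch of the same trapezoidal integrator (the DC offset it measures, $\cos\theta/\omega$, is exactly what the answer below derives):

```python
import math

def trapezoid_cumsum(samples, dt):
    """Running trapezoidal integral of a sampled signal."""
    out, acc = [], 0.0
    for prev, cur in zip(samples, samples[1:]):
        acc += dt / 2.0 * (prev + cur)
        out.append(acc)
    return out

def integrator_offset(theta, w=2.0 * math.pi, periods=50, spp=128):
    """Mean integrator output over whole periods, starting at phase theta."""
    dt = (2.0 * math.pi / w) / spp   # spp = samples per period
    n = periods * spp
    xs = [math.sin(w * k * dt + theta) for k in range(n + 1)]
    ys = trapezoid_cumsum(xs, dt)
    return sum(ys) / len(ys)

# starting at a zero crossing (theta = 0) leaves a DC offset of cos(0)/w = 1/(2*pi);
# starting at the positive peak (theta = pi/2) leaves none
```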
Answer: I'm a bit rusty with solving equations using the Laplace transform so please be indulgent if I made a mistake.
your signal is
$$ x = \sin(\omega t + \theta)$$ and you want to find
$$ y = \int \sin(\omega t + \theta)\,dt $$ You can rewrite $x$ as $$ x = a\sin(\omega t) + b\cos(\omega t)$$
In Laplace
$$ y(s) = \frac{1}{s} \{\frac{a\omega + bs}{s^2 + \omega^2}\} $$
Using partial fraction expansion we get
$$ y(s) = \frac{a}{\omega s} + \frac{b}{s^2 + \omega^2} - \frac{\frac{as}{\omega}}{s^2+\omega^2} $$
Going back in the time domain we get
$$ y(t) = \frac{a}{\omega} + \frac{b}{\omega} \sin(\omega t) - \frac{a}{\omega} \cos(\omega t) $$
since $a$ is related to your initial phase, $a = \cos(\theta)$, that's why you get an offset. | {
"domain": "dsp.stackexchange",
"id": 10157,
"tags": "algorithms, math, integration, numerical-algorithms"
} |
SMOTE oversampling for class imbalanced dataset introduces bias in final distribution | Question: I have a problem statement where percentage of goods (denoted by 0) is 95%, and for bads (denoted by 1) it is 5% only. One way is to do under sampling of goods so that model understands the patterns properly for both the segment. But going with under sampling is leading to high loss of data which will directly lower down my model performance. Hence I have opted for over sampling of bads, but over sampling has its own problem too:
Check this code snippet:
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state = 33)
x = train_data.drop(['target'], axis = 1)
y = train_data[['target']]
x_new, y_new = sm.fit_sample(x, y)
y.target.value_counts() # 0 -> 26454 1-> 2499
y_new.target.value_counts() # 0 -> 26454 1-> 26454
After oversampling I get equal numbers of goods and bads, but the problem is that the variable distributions get affected.
For example, for the 'age' variable, the bucket-wise distribution for goods is
1 - 25 years - 20%
26 - 50 years - 35%
50+ years - 45%
and distribution of bad is (Before OverSampling):
1 - 25 years - 50%
26 - 50 years - 30%
50+ years - 20%
But post oversampling the distribution of bads is changing:
1 - 25 years - 40%
26 - 50 years - 35%
50+ years - 25%
So now this variable's distribution for Good vs Bad is not as discriminative as it was earlier (before oversampling).
Is there any way to do oversampling that does not affect my variable distributions?
Answer: Class imbalance is a frequent problem in machine learning, and techniques to balance the data are usually of two flavors: undersampling the majority, oversampling the minority, or both.
One can always partition the data according to some variables and separately oversample each partition so as to maintain some measure (e.g. a given data distribution). In the same way that separate oversampling can be achieved for only $1$ variable, separate oversampling can be achieved for $n$ variables. Of course more complex, but certainly doable. For example, one takes all distinct combinations of variables (or ranges of variables for continuous variables) and separately oversamples each such cluster in order to maintain the given data distribution.
The above is a straightforward technique, although one should note that if the minority class does not have enough samples there is no guarantee that the given data distribution reflects the (true) underlying data distribution (in other words, it may not constitute a representative sample in the statistical sense). So for these cases, oversampling the whole data, without extra assumptions about the underlying distribution, is a maximally unbiased method in the statistical sense.
There is some research lately on hybrid and intelligent methods for (oversampling) class imbalance problems without introducing bias during the process. The following references will provide the relevant background:
Cross-Validation for Imbalanced Datasets: Avoiding Overoptimistic and Overfitting Approaches, October 2018
Although cross-validation is a standard procedure for performance
evaluation, its joint application with oversampling remains an open
question for researchers farther from the imbalanced data topic. A
frequent experimental flaw is the application of oversampling
algorithms to the entire dataset, resulting in biased models and
overly-optimistic estimates. We emphasize and distinguish overoptimism
from overfitting, showing that the former is associated with the
cross-validation procedure, while the latter is influenced by the
chosen oversampling algorithm. Furthermore, we perform a thorough
empirical comparison of well-established oversampling algorithms,
supported by a data complexity analysis. The best oversampling
techniques seem to possess three key characteristics: use of cleaning
procedures, cluster-based example synthetization and adaptive
weighting of minority examples, where Synthetic Minority Oversampling
Technique coupled with Tomek Links and Majority Weighted Minority
Oversampling Technique stand out, being capable of increasing the
discriminative power of data
Learning from Imbalanced Data, 9, SEPTEMBER 2009
With the continuous expansion of data availability in many
large-scale, complex, and networked systems, such as surveillance,
security, Internet, and finance, it becomes critical to advance the
fundamental understanding of knowledge discovery and analysis from raw
data to support decision-making processes. Although existing knowledge
discovery and data engineering techniques have shown great success in
many real-world applications, the problem of learning from imbalanced
data (the imbalanced learning problem) is a relatively new challenge
that has attracted growing attention from both academia and industry.
The imbalanced learning problem is concerned with the performance of
learning algorithms in the presence of under represented data and
severe class distribution skews. Due to the inherent complex
characteristics of imbalanced data sets, learning from such data
requires new understandings, principles, algorithms, and tools to
transform vast amounts of raw data efficiently into information and
knowledge representation. In this paper, we provide a comprehensive
review of the development of research in learning from imbalanced
data. Our focus is to provide a critical review of the nature of the
problem, the state-of-the-art technologies, and the current
assessment metrics used to evaluate learning performance under the
imbalanced learning scenario. Furthermore, in order to stimulate
future research in this field, we also highlight the major
opportunities and challenges, as well as potential important research
directions for learning from imbalanced data.
Data Sampling Methods to Deal With the Big Data Multi-Class Imbalance Problem, 14 February 2020
The class imbalance problem has been a hot topic in the machine
learning community in recent years. Nowadays, in the time of big data
and deep learning, this problem remains in force. Much work has been
performed to deal to the class imbalance problem, the random sampling
methods (over and under sampling) being the most widely employed
approaches. Moreover, sophisticated sampling methods have been
developed, including the Synthetic Minority Over-sampling Technique
(SMOTE), and also they have been combined with cleaning techniques
such as Editing Nearest Neighbor or Tomek’s Links (SMOTE+ENN and
SMOTE+TL, respectively). In the big data context, it is noticeable
that the class imbalance problem has been addressed by adaptation of
traditional techniques, relatively ignoring intelligent approaches.
Thus, the capabilities and possibilities of heuristic sampling methods
on deep learning neural networks in big data domain are analyzed in
this work, and the cleaning strategies are particularly analyzed. This
study is developed on big data, multi-class imbalanced datasets
obtained from hyper-spectral remote sensing images. The effectiveness
of a hybrid approach on these datasets is analyzed, in which the
dataset is cleaned by SMOTE followed by the training of an Artificial
Neural Network (ANN) with those data, while the neural network output
noise is processed with ENN to eliminate output noise; after that, the
ANN is trained again with the resultant dataset. Obtained results
suggest that best classification outcome is achieved when the cleaning
strategies are applied on an ANN output instead of input feature space
only. Consequently, the need to consider the classifier’s nature when
the classical class imbalance approaches are adapted in deep learning
and big data scenarios is clear.
Hope these notes help. | {
"domain": "datascience.stackexchange",
"id": 7854,
"tags": "machine-learning, data, class-imbalance, smote"
} |
A function of a constraint in a mechanics question not equaling zero? | Question: Is this an error (I wrote what I think should be written), or is it okay that the constraint functions do not equal zero?
I thought constraint functions are of the form $g(x,\dot {x},t)=0$, so in this case we should have $g_{jk} (x, t)= (x_j-x_k)^2-l^2_{jk}(t)=0$, but here it seems they took $g _{jk}(x, t)=(x_j-x_k)^2$, which doesn't generally equal zero. Am I missing something?
Answer: Using the Lagrange equation for a constrained system (1)
\begin{equation}
\frac{\partial \mathscr L}{\partial q^i} -\frac d{dt}\bigg(\frac{\partial \mathscr L}{\partial \dot q^i}\bigg) -\sum _{j=1}^{m} \lambda _j \frac{\partial \phi _j}{\partial q^i}=0 \qquad (1)
\end{equation}
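Explicitly, writing the constraint as $\phi_{jk}(x,t)=(x_j-x_k)^2-l_{jk}^2(t)=0$, only its gradient enters (1), and the $l_{jk}^2(t)$ term has zero gradient with respect to $x$:
$$
\frac{\partial \phi_{jk}}{\partial x_j}
=\frac{\partial}{\partial x_j}\Big[(x_j-x_k)^2-l_{jk}^2(t)\Big]
=2(x_j-x_k)
=\frac{\partial}{\partial x_j}(x_j-x_k)^2 .
$$
So dropping the $-l_{jk}^2(t)$ term changes the value of the constraint function but not the constraint forces.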
We note that since $l_{ij}\neq l_{ij}(x)$ we do not lose information regarding the constraint by eliminating this term in this case. | {
"domain": "physics.stackexchange",
"id": 26835,
"tags": "homework-and-exercises, constrained-dynamics, textbook-erratum"
} |
Brayton cycle: heat calculation for isobaric processes | Question: I have a brayton cycle and want to calculate the efficiency and the back work ratio. In my textbook it is stated that the heat added at process 3-4 happens at constant pressure (isobaric) and therefore we can write $q_{added}=h_4-h_3$. It is also stated that this result is calculated via the first law of thermodynamics using the steady-flow equation: $$\dot{m}(h_3+\frac{C_3^2}{2} + Z_3g)+\dot{Q}+\dot{W}=\dot{m}(h_4+\frac{C_4^2}{2} + Z_4g)$$ Then they note that we can assume that $C_3=C_4$ and $Z_3=Z_4$. Now the question is that I have no clue how they find the result from this equation. I would think that we could write: $$\dot{Q}+\dot{W}=\dot{m}(h_4-h_3)$$ $$q+w=h_4-h_3$$ and that since this is an isobaric process we can write $w=-\int_{V_3}^{V_4}p dv = -p\int_{V_3}^{V_4}dv=-p[V_4-V_3]$. But then we immediately find that $q=h_4+pV_4-[h_3+pV_3]$, which is not the desired result. I have been baffled with this so any help is greatly appreciated on how I can find the correct result.
Answer: Are you aware that the W in the open system version of the first law of thermodynamics does not include all the work, but only the shaft work? It omits the work done to push the fluid into and out of the control volume. The latter is included separately in the h's. In the heat exchange you are analyzing, the shaft work is zero. So the heat added per unit mass is just equal to the specific enthalpy change between the inlet and outlet of the exchanger.
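Spelled out, with $\dot W$ meaning shaft work only (zero across a heat exchanger) and the kinetic and potential terms cancelling since $C_3=C_4$ and $Z_3=Z_4$:
$$
\dot m h_3+\dot Q+\underbrace{\dot W_{\text{shaft}}}_{=\,0}=\dot m h_4
\quad\Longrightarrow\quad
q_{added}=\frac{\dot Q}{\dot m}=h_4-h_3 .
$$
The $-\int p\,dv$ boundary work from the closed-system picture is the flow work, which is already contained in $h=u+pv$; inserting it again is what produced the spurious $pv$ terms in the question.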
Wasn't this important information covered when they taught you about the open system version of the 1st law? | {
"domain": "physics.stackexchange",
"id": 49415,
"tags": "homework-and-exercises, thermodynamics, heat-engine"
} |
Power Variation on the Time Energy Uncertainty Principle | Question: The time-energy uncertainty principle is typically formulated in quantum mechanics as:
$\Delta E \Delta T \ge h/4\pi$
Where ΔE is uncertainty in energy, and ΔT is the standard deviation in the time taken for the expectation value to change. By dimensional analysis, it would appear from a crude analysis that the above equation is equivalent to:
$\Delta P \Delta T^{2} \ge h/4\pi$
If:
$\Delta P = \Delta E / \Delta T$
Is the second expression correct? My memory of my quantum mechanics class is rusty, but I believe there is a subtlety here I am not accounting for.
Answer: $\Delta E$ and $\Delta T$ mean two different things in the two equations.
In the first equation, you might have a photon or electron. You want to know the values for $E$ and $T$. It is impossible to know them exactly. There is an inherent uncertainty to both. You can arrange things so one uncertainty is small, but that necessarily increases the other uncertainty. $\Delta E$ and $\Delta T$ are the uncertainties.
In the second equation, you might have a process that produces energy. $\Delta E$ is the amount of energy produced in time interval $\Delta T$. | {
"domain": "physics.stackexchange",
"id": 94135,
"tags": "quantum-mechanics, heisenberg-uncertainty-principle"
} |