| anchor | positive | source |
|---|---|---|
What is the intuition behind a Free-Choice Petri Net? | Question: So I've found this definition in a paper called Time and Petri Nets, Popova-Zeugmann, L., but I am unable to understand the intuition behind it.
N is a free-choice net (FC net) if each shared place is the only pre-
place of its post-transitions, i.e., $\text{if } t, t' \in p^{\bullet} \text{ then } {}^{\bullet}t = \{p\} = {}^{\bullet}t'$.
Could someone give me an example of what this allows or disallows one to do?
Answer: The literature on Petri nets has many papers that really aim to teach the concepts used.
In the case of free-choice Petri net, such an introduction can be found in the paper Structure Theory of Petri Nets: the Free Choice Hiatus by Eike Best, in the ACPN (Advanced Course on Petri Nets) 1986. (An online copy is here.)
If anywhere, the intuition behind free-choice nets should be explained there, but that doesn't really happen, and I haven't seen such an explanation elsewhere, either.
As far as there is an intuition behind free-choice nets, it is expressed by their name: whenever there is a choice, say, between step B and step C (that is to say, we have a place A with an arc to transition B and an arc to transition C), that choice is free, meaning neither B nor C is subject to any additional conditions (that is to say, they cannot have additional input places).
Rather than focusing on intuition, the paper describes free-choice nets as follows, and I think this is the standard way to explain them:
One of the reasons Petri nets interest us is that they offer descriptions of concurrent system behaviour that can be statically analyzed. We can prove certain properties of the behaviour of Petri nets.
However, for unrestricted Petri nets, certain important properties are extremely hard to analyze; for instance, the question whether the net is live, or whether a given marking can be reached. So it is interesting to study restricted forms of Petri nets for which such analysis is easier.
One possible restriction is to say: no place may have more than one input transition, or more than one output transition. This type of Petri net is called an S-net. This eliminates parallelism and basically turns the net into a state machine. For instance, we can no longer write down a process in which the first thing to happen is A, then B and C in whatever order, and, finally, D. We can't put B and C in parallel.
Another restriction is the reverse: no transition may have more than one input place or more than one output place. This type of Petri net is called a T-net or marked graph. Now, we do have parallelism, but we no longer have choice. For instance, we can no longer write down a process in which the first thing to happen is A, then either B or C, and, finally, D. We can't have a choice between B and C.
Both restrictions are very severe, so it's interesting to look at compromises. Free-choice nets are just that: to quote Wikipedia, they are the nets in which every arc from a place to a transition is either the only arc from that place or the only arc to that transition. I.e. there can be both concurrency (parallelism) and conflict (choice), but not at the same time. So this is a proper generalization of S-nets and T-nets.
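To make the definition concrete, here is a small Python sketch (the dictionary representation and function name are my own, not from the paper) that tests the condition: whenever a place feeds more than one transition, it must be the only input place of each of them.

```python
def is_free_choice(pre):
    """pre maps each transition to the set of its input places.

    The net is free-choice if, whenever a place p feeds two different
    transitions, p is the *only* input place of each of them.
    """
    posts = {}  # place -> set of transitions it feeds
    for t, places in pre.items():
        for p in places:
            posts.setdefault(p, set()).add(t)
    for p, ts in posts.items():
        if len(ts) > 1 and any(pre[t] != {p} for t in ts):
            return False
    return True

# Choice between B and C is free: place 'A' is their only input place.
free = {"B": {"A"}, "C": {"A"}}
# Not free-choice: 'C' has an extra input place besides the shared 'A'.
not_free = {"B": {"A"}, "C": {"A", "D"}}
```

In the second net, the choice at 'A' is not free: firing C also depends on a token in 'D'.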
Certain properties of the behaviour of Petri nets that are hard or impossible to decide for arbitrary Petri nets become doable for free-choice Petri nets.
So the prime motivation for free-choice nets is not any particular intuition, but the fact that they allow certain forms of static analysis of behaviour. | {
"domain": "cs.stackexchange",
"id": 13799,
"tags": "petri-nets"
} |
About an upside down cup of water against atmosphere pressure | Question: There is an experiment we learned from high school that demonstrated how atmosphere pressure worked.
Fill a cup of water and put a cardboard on top of it, then turn it upside-down, the water will not fall out. The explanation said this was because the atmosphere pressure was greater than the water pressure, which holds the water up.
I believed this explanation once, until I found some points that confused me:
1. Is the water pressure in the cup really smaller than the atmosphere pressure?
This is what we have been taught throughout our lives. However, consider an object in the water below sea level: it experiences the pressure from the water plus the atmospheric pressure. So the pressure in the water below sea level must be greater than the atmospheric pressure. Even if the water is contained in a cup, the pressure wouldn't change. Is this true?
I read a webpage which gave an explanation that excluded pressure as the reason. If the cup is fully filled, the compressibility of water is much smaller than that of air, and the surface tension of the water keeps the air out of the cup. So the water is held in the cup. This explains the problem. But I still want to ask whether the water pressure is smaller or greater than the atmospheric pressure in this situation.
2. When the cup is half filled with water, why does it still hold?
Most articles and opinions I saw argue against this: they all agree that the water will not fall only if the cup is entirely filled. But I did the experiment myself, and the water stayed in the cup even though it was not fully filled. Actually, even with a small amount of water, as long as it covers the opening of the cup and the cardboard on it, the water stays in the cup.
Even though the compressibility of water is much smaller, the air inside the cup provides plenty of compressibility, so how come it still holds?
Answer: I assume you meant "cardboard", not "cupboard".
When the cup is full of water, it is empty of air, and water is relatively incompressible.
So when you turn it over, in order for the water to leak out, the cardboard would have to move a small amount away from the edge of the cup, which it cannot do without expanding the water slightly, which the incompressibility of the water does not allow.
So if the seal around the edge of the cup is good, you cannot move the cardboard without reducing the pressure in the cup, and the air pressure outside is not being reduced, so the air pressure outside holds it in place.
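To put rough numbers on this (my own illustrative values, not from the original answer): the pressure exerted by a short water column is tiny compared with atmospheric pressure, so only a very small pressure reduction inside the cup is needed for the outside air to win.

```python
rho = 1000.0   # water density, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
h = 0.10       # assumed 10 cm water column

p_column = rho * g * h    # ~981 Pa from the water's weight
p_atm = 101325.0          # standard atmospheric pressure, Pa
ratio = p_atm / p_column  # atmosphere exceeds the column pressure ~100x
```

So the cardboard only has to withstand about 1% of atmospheric pressure to keep a 10 cm column in place.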
If you did this in a vacuum (ignoring that the water would boil) it would not work. The cardboard would just fall off.
It is essentially no different from pressing the cardboard against a wet plate of glass, where it sticks unless you can somehow inject some air into the space, say by inserting a needle. | {
"domain": "physics.stackexchange",
"id": 70677,
"tags": "newtonian-mechanics, fluid-dynamics, pressure, home-experiment, surface-tension"
} |
ROS and nxt-python-2.x | Question:
Are there any plans, or is there work under way, to update the ROS NXT stack to use nxt-python-2.x? I would like to make use of the new sensor support in version 2.
Originally posted by Rodrigo Gutierrez on ROS Answers with karma: 23 on 2012-01-14
Post score: 0
Answer:
There are no active development efforts on the nxt stack. Unfortunately, nxt-python version 2 came out shortly after our stack was developed. If someone would like to take up the charge and do so, please contact me.
Originally posted by tfoote with karma: 58457 on 2012-01-17
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 7892,
"tags": "nxt"
} |
Variant of the Sokhotski–Plemelj theorem | Question: I am aware of the Sokhotski–Plemelj theorem (I have also heard people referring to it as the "Dirac identity") which states that in the limit $\eta\rightarrow 0^+$
$$\frac{1}{x\pm i\eta}=\mathcal P\frac{1}{x}\mp i\pi\delta(x) \, .$$
Now, I am reading the book "Solid State Physics" by G. Grosso and G. Pastori Parravicini which states on page 430 that using the above formula it can "easily be proved" that
$$\frac{\hbar\omega}{E_j-E_i-\hbar\omega-i\eta}=
\frac{E_j-E_i}{E_j-E_i-\hbar\omega-i\eta}-1\, .$$
However, I fail to see how the latter formula follows from the former. Is there a trick that I am missing here?
Answer: Noting the simple relation
$$
\frac{\Delta E_{i,j}}{\Delta E_{i,j}-\hbar\omega-i\eta}-\frac{\Delta E_{i,j}-\hbar\omega-i\eta}{\Delta E_{i,j}-\hbar\omega-i\eta}=\frac{\hbar\omega+i\eta}{\Delta E_{i,j}-\hbar\omega-i\eta}
$$
and by Sokhotski-Plemelj theorem
$$
\lim_{\eta\rightarrow 0^+} \frac{i\eta}{\Delta E_{i,j}-\hbar\omega-i\eta}=0
$$
because, if the limit
$$
\lim_{\eta\rightarrow0^+}\int_{a}^{b}\frac{f(x)}{x\pm i\eta}dx
$$
exists (that is what the theorem says), then
$$
\lim_{\eta\rightarrow0^+}\int_{a}^{b}\frac{i\eta f(x)}{x\pm i\eta}dx= \left(\lim_{\eta\rightarrow0^+} i\eta\right)\left( \lim_{\eta\rightarrow0^+}\int_{a}^{b}\frac{f(x)}{x\pm i\eta}dx\right)=0
$$
for any test function $f(x)$. | {
"domain": "physics.stackexchange",
"id": 28192,
"tags": "condensed-matter, mathematical-physics"
} |
Counting distinct lists in a list | Question: I'm trying to remove the duplicates from a list of lists and count the list after removing the duplicates:
seq = [[1,2,3], [1,2,3], [2,3,4], [4,5,6]]
new_seq = [[1,2,3], [2,3,4], [4,5,6]]
count = 3
My code takes around 23 seconds for around 66,000 lists in a list
How can I make my code faster?
def unique(seq):
new_seq = []
count = 0
for i in seq:
if i not in new_seq:
new_seq.append(i)
count += 1
return count
Answer: Your function is slow because it is O(n²): each element being added to new_seq has to be compared against every previously added element.
To deduplicate a sequence, use a set. Constructing the set is only O(n) because it uses hashing.
Then, to obtain the size of the set, use len().
def unique(seq):
return len(set(tuple(element) for element in seq)) | {
"domain": "codereview.stackexchange",
"id": 19883,
"tags": "python, performance, array"
} |
What would the chemical name be for C13H8Cl3NO | Question: Formula
C13H8Cl3NO
SMILES
C1=C(C(=CC(=C1)Cl)Cl)N(C(C2=CC=CC=C2)=O)Cl
I found the diagram on the left in a book and drew the one on the right using
https://pubchem.ncbi.nlm.nih.gov/edit2/index.html
And got the SMILES description from that.
Any clues as to what might be an IUPAC name and the formal way to write it out? Have I even drawn it correctly? I could not do it exactly, because the two rings kept connecting if I followed the same orientation. I don't know enough about double-ring compounds to even hazard a guess.
Is there an automatic naming engine out there?
A compound called British Impregnite found in The Scientific method by Louis F. Fieser p.137
Answer: Well, let's reconstruct that starting from the very right side, where it says $\ce{C_6H_5}$. The ring and the $\ce{CO}$ group would be a benzaldehyde if it had an $\ce{H}$ instead of the $\ce{N}$, right? Or a benzoic acid if it had an $\ce{OH}$ instead of the $\ce{N}$. So what would it be if it had an $\ce{NH_2}$ group? It would be a benzamide. If the $\ce{N}$ is substituted with, for example, a chlorine, we call that N-chlorobenzamide. And now we have another $\ce{N}$-centered ligand, the second phenyl ring. That ring's position where it is connected to the rest of the molecule is numbered 1, so with chlorines in positions 2 and 4 that makes a 2,4-dichlorophenyl.
Summarizing we get N-chloro-N-(2,4-dichlorophenyl)benzamide | {
"domain": "chemistry.stackexchange",
"id": 11505,
"tags": "organic-chemistry, nomenclature, molecular-structure"
} |
Can one generate a signal with an asymmetric power spectral density? | Question: My question is as follows. I know that classically speaking, if you have some time varying signal $X(t)$ its power spectral density $S_X(f)$ will be symmetric around $f=0$ as the signal is real. This is why your spectrum analyzer only shows the positive part, the negative part is redundant.
But if one has a complex signal (such as often encountered in quantum mechanics) the PSD can be asymmetric around 0. This for example leads to the asymmetry between absorption and emission in atoms: atoms have spontaneous emission (emission in the absence of photons in the environment) but not spontaneous absorption.
What I am interested in is if there is a way to physically make such a complex signal with asymmetric PSD. Maybe this can be done by using two channels that output the signal with some phase difference? I've tried looking for some source material on this but I have not been successful.
Answer: @user129412, there are many ways to generate both continuous signals and discrete sequence having asymmetric PSDs. One way, for discrete sequences, is shown in the following diagram.
$x_r(n)$ is a narrow bandpass signal having spectral energy in the vicinity of $±f_c$ Hz. The complex-valued $x_c(n)$ output sequence is your desired asymmetrical sequence having spectral energy only in the vicinity of $+f_c$ Hz. (If your Hilbert transformer has M taps, where M is odd, then the upper path delay must be (M-1)/2 samples to achieve time synchronization between the two signal paths.) If you desire sequences whose asymmetrical spectra are centered at zero Hz, many ways to obtain such sequences can be found at:
https://www.dsprelated.com/showarticle/153.php | {
"domain": "dsp.stackexchange",
"id": 4302,
"tags": "fourier-transform, noise, power-spectral-density"
} |
How to access the raw time-series dataset of GOES X-Ray Flux? | Question: I would like to access the time-series dataset of X-Ray Flux obtained by GOES. I expect that the flux units to be Watts / square-meter; this is because of the figure below (obtained from this Nature paper).
I first tried accessing the GOES-R dataset, which contains netcdf (aka .nc) files. I followed the steps in this post on stackoverflow but was unable to open any of the netcdf files.
I also tried accessing the flux dataset from NOAA SWPC; I clicked the "data" tab and clicked the "json" link, but this only contains data from the last week or so. I would prefer the full dataset spanning a few decades. The same site hosts an FTP link; the text files present there have data fields for "year", "month", and "day", but not "hour", "minute", and "second".
I notice that most of the data products available are fluxes averaged over a time-interval; I prefer the raw time-series if possible.
I am hopeful that a representation of this dataset exists in the form of a text-file (such as this one provided by RHESSI). Regardless, I am not sure how to access this dataset. I wonder if the data I am seeking is in the netcdf files that I am unable to open, or if there is another way to do this - perhaps with astroquery or astropy?
TLDR: I am looking for the GOES dataset containing the time-series (containing year, month, day, hour, minute, and second) of flux (measured as Watts / square-meter). How can I access this raw data?
Answer: You can use SunPy, check the Retrieving and analyzing GOES X-Ray Sensor (XRS) data example. There it explains how to download data with a time range and if wanted selecting the GOES satellite number.
If you would like the raw data in CSV format, then you can get it from the official archive for GOES data. There you can find full and averaged data in CSV format up to March 4th, 2020 (it stops then because GOES 16 and later are part of GOES-R). The link looks like this:
https://satdat.ngdc.noaa.gov/sem/goes/data/full/YYYY/MM/goesXX/csv/gXX_xrs_2s_YYYYMMDD_YYYYMMDD.csv
Where XX is the satellite number. For example, this is the 2 s cadence CSV file from GOES 15 for March 4th, 2021.
For GOES-R I don't believe there's a CSV archive, but SunPy can query and download it, as shown in the example on the gallery.
Once you download the file with SunPy, you can load, plot, manipulate, and convert that timeseries to any other format that astropy tables support (including CSV).
This is an example of how to query the data and convert it into CSV. This is only for one day, but you could extend it to a different time range if wanted.
from sunpy import timeseries as ts
from sunpy.net import Fido
from sunpy.net import attrs as a
tstart = "2020-06-21 01:00"
tend = "2020-06-21 23:00"
result = Fido.search(a.Time(tstart, tend), a.Instrument("XRS"), a.goes.SatelliteNumber(16))
goes_16_files = Fido.fetch(result)
goes_16 = ts.TimeSeries(goes_16_files, concat=True)
goes_table = goes_16.to_table()
goes_table.write('goes_16_20200621.csv', format='csv') | {
"domain": "astronomy.stackexchange",
"id": 5977,
"tags": "python, solar-flare, raw-data, flux"
} |
Graphical tool to check controller performance (MoveIt, ros_control)? | Question:
Hi,
is there a graphical tool to plot current joint_states vs desired joint states which are output by moveit?
I want to visually evaluate the performance of the hardware controller / check if my velocity or acceleration params are reasonable.
Is rqt_plot the best choice? And what would be the topic for the desired joint states?
Thanks,
Chris
Originally posted by cschindlbeck on ROS Answers with karma: 56 on 2021-05-04
Post score: 0
Answer:
Is rqt_plot the best choice?
PlotJuggler is probably easier.
And what would be the topic for the desired joint states?
For the out-of-the-box (OotB) set of controllers from ros_control, you could check the state topic.
Those typically carry messages from control_msgs, which include JointControllerState, JointTrajectoryControllerState and PidState.
Originally posted by gvdhoorn with karma: 86574 on 2021-05-04
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by cschindlbeck on 2021-05-04:
Thanks, when i use Gazebo, i can check the state from my position controllers (desired vs acutal), when i do plain moveit and use the fake_controller, there is no such state, is there a way to check in vanilla moveit?
Comment by gvdhoorn on 2021-05-04:
when i do plain moveit and use the fake_controller, there is no such state, is there a way to check in vanilla moveit?
Afaik, fake_controller doesn't lead to anything actually controlling anything. It's just kinematic play-out with a 'perfect robot' model. There are no dynamics.
You could check the JointState messages published by it (IIRC it does that), but that would not really tell you anything I believe. Just what the state of the play-out is. | {
"domain": "robotics.stackexchange",
"id": 36397,
"tags": "ros, moveit, ros-melodic, ros-control, joint-states"
} |
Data analysis before feeding to ML pipeline | Question: I'm new to machine learning and I've been working through a dataset of ~3000 records with ~100 features. I've been hand rolling Python and R scripts to analyse the data. For example, plotting the distribution of each feature to see how normal it is, identify outliers, etc. Another example is plotting heatmaps of the features against themselves to identify strong correlations.
Whilst this has been a useful learning exercise, going forward I suspect there are tools that automate a lot of this data analysis for you, and produce the useful plots and possibly give recommendations on transforms, etc? I've had a search around but don't appear to be finding anything, I guess I'm not using the right terminology to find what I'm looking for.
If any useful open source tools for this kind of thing spring to mind that would be very helpful.
Answer: This is a great question because indeed there are many tools out there to make this part of the process faster.
I have used a number of them, but I usually stick to the following two:
Pandas profiling
Facets
You can also search for alternatives
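As a minimal hand-rolled starting point using only pandas and NumPy (my own sketch, with synthetic data standing in for a real dataset), the kind of summary these tools automate looks like:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "a": rng.normal(size=200),
    "b": rng.normal(size=200),
})
# 'c' is deliberately constructed to correlate strongly with 'a'
df["c"] = df["a"] * 2 + rng.normal(scale=0.1, size=200)

summary = df.describe()  # per-feature distribution stats (count, mean, std, quartiles)
corr = df.corr()         # the matrix behind a correlation heatmap

# Flag strongly correlated feature pairs, ignoring the diagonal
strong = corr.abs().gt(0.8) & ~np.eye(len(corr), dtype=bool)
```

Tools like pandas-profiling essentially bundle this (plus plots, missing-value reports, and warnings) into a single report call.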
Hopefully, the community can help complement my answer. | {
"domain": "ai.stackexchange",
"id": 2950,
"tags": "machine-learning, python, datasets, data-preprocessing, r"
} |
Adding lines into CMakeList.txt makes an error. I need help | Question:
Hello,
Firstly, I need to say I'm a total beginner with ROS. I managed to install it, and my professor wanted me to complete these tasks:
-Create a ROS package with your name.
-Create two nodes, a chatter which publishes std_msgs and a listener
to listen to chatter.
-The message you are required to send will be assigned for each one
separately.
-The listener should get the message and print it using ROS_INFO
function.
So I managed to create a workspace, but I got stuck at a step that wants me to add these lines to my_package/CMakeLists.txt.
The lines are:
add_executable(name_of_node_chatter src/01.cpp)
target_link_libraries(name_of_node_chatter ${catkin_LIBRARIES})
add_dependencies(name_of_node_chatter my_package_generate_messages_cpp)
link text
But when I add them to CMakeLists.txt like this:
link text
After I save the file, I try to run the catkin_make command in the terminal. It gives me this error:
link text
I don't know what to do. Please, can you help me fix this problem?
Originally posted by Karandiru on ROS Answers with karma: 3 on 2020-03-23
Post score: 0
Original comments
Comment by karenchiang on 2020-03-24:
Hi, is the target you want to build name_of_node_listener or name_of_node_chatter? In the screenshot of CMakeLists.txt you shared, it's written name_of_node_listener, but here you wrote name_of_node_chatter. Could you please add your entire CMakeLists.txt to your main question text? You can use the edit button/link for that. It would be easier for people to review. Thank you.
Comment by Karandiru on 2020-03-24:
I guess I made a huge mistake: without defining the chatter, I put the listener code in CMakeLists.txt, which is why the terminal gives me that 01.cpp error. In this case, 01.cpp is the chatter and 02.cpp is the listener, and I was trying to run the code without the chatter. Thank you for your answer. @tcchiang and also @nkhedekar
Comment by jayess on 2020-03-24:
@tcchiang can you please update your question with a copy and paste of the errors instead of using images? Please see the Support page.
Comment by karenchiang on 2020-03-25:
@jayess, I'm sorry but I am not the original poster. Please tag @Karandiru, thanks.
Comment by jayess on 2020-03-25:
@tcchiang Sorry about that. @Karandiru can you please update your question with a copy and paste of the errors instead of using images?
Comment by Karandiru on 2020-03-25:
I cannot access Ubuntu right now, which is why I cannot copy the errors from the terminal. I edited the lines; the problem is still the same. @jayess
Answer:
Hello,
You should have added those three lines after find_package(). The order does matter in CMakeLists.txt. Please see wiki/catkin - CMakeLists.txt - Overall Structure and Ordering.
I think your package does not contain any messages/services/actions files so you don't really need
add_dependencies(name_of_node_chatter my_package_generate_messages_cpp)
If you do, you need to add generate_messages(DEPENDENCIES ...). The message generation targets are created by generate_messages(DEPENDENCIES ...). Using add_dependencies(name_of_node_chatter my_package_generate_messages_cpp) is to ensure that my_package_generate_messages_cpp is built before the name_of_node_chatter target.
If your target depends on other messages/services/actions from other packages, even though your package doesn't build any messages/services/actions at compile time, you should add
add_dependencies(name_of_node_chatter ${catkin_EXPORTED_TARGETS})
More details about add_dependencies(...) can be found here.
I think it's always safe to add a dependency on catkin_EXPORTED_TARGETS.
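For reference, a minimal CMakeLists.txt with the ordering described above might look like the following. The package name and source file are taken from the question; the roscpp/std_msgs components and the use of catkin_EXPORTED_TARGETS (instead of the message-generation target, since the package builds no messages) are my assumptions:

```cmake
cmake_minimum_required(VERSION 3.0.2)
project(my_package)

# 1. Find dependencies first
find_package(catkin REQUIRED COMPONENTS roscpp std_msgs)

# 2. Declare the catkin package
catkin_package()

# 3. Include directories
include_directories(${catkin_INCLUDE_DIRS})

# 4. Targets come after all of the above
add_executable(name_of_node_chatter src/01.cpp)
target_link_libraries(name_of_node_chatter ${catkin_LIBRARIES})
add_dependencies(name_of_node_chatter ${catkin_EXPORTED_TARGETS})
```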
Originally posted by karenchiang with karma: 241 on 2020-03-25
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Karandiru on 2020-03-28:
Hello,
First of all, thank you for your explanatory answer. But even though I followed your instructions, the terminal gave me another error. I don't know how to post terminal output other than as a link, but this is my CMakeLists file with the lines put in order like you said: link text, and this is the terminal output of catkin_make: link text. I hope you can still help me.
Comment by karenchiang on 2020-03-28:
Hello,
You shouldn't have added the lines inside the find_package() function. I couldn't tell you exactly where you should add them, because you didn't show the whole contents of CMakeLists.txt. If you are using the auto-generated CMakeLists.txt, then they should be added after include_directories(...). I strongly suggest you really look through this and this.
Comment by karenchiang on 2020-03-28:
You don't need to post a terminal output as a link. Please copy-paste the terminal error messages and the CMakeList.txt into your original question, select the lines, and press ctrl+k or click the preformatted text button (the one with 101010 on it). | {
"domain": "robotics.stackexchange",
"id": 34626,
"tags": "ros-melodic, cmake"
} |
Bayesian neural net with non-probabilistic data? | Question: Is it possible to construct a Bayesian neural network without probability distributions as the dependent variable, for the purpose of predictive modeling?
I mean, if I'd like to infer a specific value, like y (e.g. y=5), from a vector of explanatory variables X (e.g. X=[3,5,1.3,(.....)]),
the Bayesian neural network infers a distribution with mean ymean and standard deviation sigma (e.g. ymean=5, sigma=0.5).
Does it even make sense? Is the loss function of the neural net able to work by comparing y with ymean, without taking sigma into account?
partial answer:
I think sigma is the result of the distributions in the weight matrices of the neural net, and it should work. But I want to be sure and to understand.
PS: I work in ecology, so getting a probability distribution as result would serve my goals.
Answer: It is possible to predict a single value from a Bayesian Neural Network. Given a set of input data, conduct the forward pass to generate the resulting probability distribution. Then convert that probability distribution to a single specific value in one of the following common ways:
Sample - Take a random sample. That random sample will automatically be weighted by the posterior probability distribution. This type of sampling is similar to Thompson sampling.
Use a measure of central tendency. Given that posterior probability distribution, calculate the most useful measure of central tendency (e.g., mean, median, or mode). | {
"domain": "datascience.stackexchange",
"id": 7259,
"tags": "neural-network, predictive-modeling, bayesian, bayesian-networks"
} |
Filtering triangles | Question: I'm playing with Haskell's list comprehensions, tuples, and foldr.
The exercise I've given myself is to find all triangles with a perimeter of length 26 in the set of triangles whose edges range from 1 to 10 in length.
main = do
print triangles
triangles = [(a,b,c) | a <- [1..10], b <- [1..10], c <- [1..10], hasPerimeterOf 26 (a,b,c)]
-- Determines whether a triangle has a perimeter of targetPerimeter
hasPerimeterOf targetPerimeter triangle = targetPerimeter == perimeter triangle
-- Calculates the perimeter of a triangle
perimeter triangle = foldr (+) 0 (toList(triangle))
-- Converts a tuple of three to a list
toList (a,b,c) = a : b : c : []
Is there a nicer way to convert a homogenous tuple to a list?
Generally, are there ways to simplify this code?
Answer: The sum of the lengths of any two sides of a triangle must be greater than the length of the third side. Therefore there is no need to evaluate hasPerimeterOf for all combinations of a, b and c. You can simplify it by adding another filter:
triangles = [(a,b,c) | a <- [1..10], b <- [1..10], c <- [1..10], c < a + b, hasPerimeterOf 26 (a,b,c)]
Generally, are there ways to simplify this code?
I would simply use pattern matching on the tuple in perimeter like this:
perimeter (a,b,c) = a + b + c
hasPerimeterOf seems redundant here, anyway. Why not simply compare the perimeter with the desired value? It will be very readable and concise that way. The final code:
main = do
print triangles
triangles = [(a,b,c) | a <- [1..10], b <- [1..10], c <- [1..10], trianglePerimeter (a,b,c) == 26]
trianglePerimeter (a,b,c) = a + b + c
Note that I replaced the comment with a more meaningful function name.
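If duplicate orderings such as (1,2,4) and (2,4,1) should count as the same triangle, one way (my own sketch, not part of the original answer) is to enforce an ordering on the sides directly in the generators, which also keeps the triangle-inequality filter valid:

```haskell
triangles :: [(Int, Int, Int)]
triangles =
  [ (a, b, c)
  | a <- [1 .. 10]
  , b <- [a .. 10]   -- a <= b
  , c <- [b .. 10]   -- b <= c, so each side multiset appears once
  , c < a + b        -- triangle inequality (c is the largest side)
  , a + b + c == 26
  ]
```

With the bounds 1..10 and perimeter 26, this yields exactly the four triangles (6,10,10), (7,9,10), (8,8,10) and (8,9,9).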
EDIT:
As 200_success rightfully pointed out in the comment, the c < a + b filter leads to awkward results, because it removes some duplicate triangles, leaving other duplicates untouched. In my opinion, if triangles (1,2,4) and (2,4,1) are the same, then the final result should contain only one triangle for each unique set of sides. Otherwise, the mentioned filter has to be removed. | {
"domain": "codereview.stackexchange",
"id": 19282,
"tags": "beginner, haskell"
} |
Application of higher-order function in Python | Question:
If \$f\$ is a numerical function and \$n\$ is a positive integer, then
we can form the \$n\$th repeated application of \$f\$, which is
defined to be the function whose value at \$x\$ is
\$f(f(...(f(x))...))\$. For example, if \$f\$ adds 1 to its argument,
then the \$n\$th repeated application of \$f\$ adds \$n\$. Write a
function that takes as inputs a function \$f\$ and a positive integer
\$n\$ and returns the function that computes the \$n\$th repeated
application of \$f\$:
def repeated(f, n):
"""Return the function that computes the nth application of f.
f -- a function that takes one argument
n -- a positive integer
>>> repeated(square, 2)(5)
625
>>> repeated(square, 4)(5)
152587890625
"""
"*** YOUR CODE HERE ***"
Below is the solution:
from operator import mul
def repeated(f, n):
"""Return the function that computes the nth application of f.
f -- a function that takes one argument
n -- a positive integer
>>> repeated(square, 2)(5)
625
>>> repeated(square, 4)(5)
152587890625
"""
def g(x):
i = 1
while i <= n:
x, i = f(x), i + 1
return x
return g
def square(x):
return mul(x, x)
print(repeated(square,4)(2))
I've tested it and it looks fine.
Can I optimise this code better? Do you think I can use better names instead of i & g?
Answer: Nice docstring.
Your loop is too complicated, and not idiomatic Python. Use range(n) to repeat n times:
def repeated(f, n):
"""Docstring here"""
def g(x):
for _ in range(n):
x = f(x)
return x
return g
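As an aside (my own sketch, not from the original answer), the same loop can be written with functools.reduce; whether that is clearer than the explicit loop is a matter of taste:

```python
from functools import reduce

def repeated(f, n):
    """Return the function that computes the nth repeated application of f."""
    # reduce threads x through f once per element of range(n)
    return lambda x: reduce(lambda acc, _: f(acc), range(n), x)

def square(x):
    return x * x
```

Like the loop version, this handles n = 0 naturally by returning its argument unchanged.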
Your repeated() function works fine for functions that accept a single argument, like square(x) (which could just be written x * x, by the way). However, it fails for higher-arity functions, such as
def fib_iter(a, b):
return b, a + b
To handle multi-argument functions…
def repeated(f, n):
"""Docstring here"""
def g(*x):
for _ in range(n):
x = f(*x) if isinstance(x, tuple) else f(x)
return x
return g
However, there is a bug in that: repeated(square, 0)(2) would return a tuple (2,) rather than an int 2. To work around that special case…
def repeated(f, n):
"""Docstring here"""
def g(*x):
for _ in range(n):
x = f(*x) if isinstance(x, tuple) else f(x)
return x
def unpack(*x):
result = g(*x)
if isinstance(result, tuple) and len(result) == 1:
return result[0]
else:
return result
return unpack | {
"domain": "codereview.stackexchange",
"id": 12518,
"tags": "python, programming-challenge, higher-order-functions"
} |
Confusion in fixing DH frames | Question: I am analyzing a concept of a surgical robot with 4 revolute joints and one sliding joint. I am not able to fix the coordinate frame for last prismatic joint.
Following are the schematics of robot and DH frames I have fixed
As the X4 axis and the X5 axis intersect each other, I am not able to capture the joint distance variable (d_i) for the slider. How can I fix this?
Following are the parameters for rest of the frames
Updated Frames:
Answer: The updated image solves the problem. You did not consider the end-effector coordinate frame earlier.
Also, the crosses (going into the page) in the diagrams should be replaced by dots (coming out of the page), because the crosses don't obey the right-hand rule if you are using a right-handed coordinate system.
"domain": "robotics.stackexchange",
"id": 1497,
"tags": "robotic-arm, kinematics, forward-kinematics, dh-parameters, frame"
} |
Is a pulsar EM-radiation affected by diffraction? | Question: Is a pulsar's EM radiation affected by diffraction? As I understand Gaussian beam theory, there should exist different diffraction (or, if you want, diffusion) angles for different wavelengths, for example blue and red. So if the angle for red is wider, should we see at first a red tone and after a while the blue pulse from that pulsar? Or could the higher frequency even have a smaller probability of being observed, since its radiation's diffraction cone covers a smaller cross-sectional area?
Answer: Yes, absolutely. The electromagnetic radiation from pulsars is affected by diffraction at many levels. Pulsars are neutron stars that emit directional radiation (mostly at radio frequencies). Just as you would expect for any kind of electromagnetic radiation, Maxwell's equations apply, and so does the diffraction phenomenon.
Just as the aperture size of a telescope limits its resolution, the size of the emitting region of a pulsar limits the collimation of the beamed radio emission.
For a monochromatic spherical wave solution $e^{ikr}/r$ propagating in the $z$-direction, the Gaussian model that describes the beam width $w$ can be derived via the substitution $x \rightarrow x$, $y \rightarrow y$, $z \rightarrow z - iz_0$. It yields:
\begin{equation}
w(z) = w_0 \left( 1 + z^2 / z_0^2 \right)^{1/2}
\end{equation}
Where $z_0= \pi w_0^2 / \lambda$.
You can already see from the dependence on $\lambda$ that the beam width is chromatic (i.e. it depends on the frequency). This appears clearly if we determine the asymptotic angular spread of the beam (or diffraction angle $\theta_\mathrm{D}$):
\begin{equation}
\tan(\theta_\mathrm{D}) = \lim_{z\rightarrow \infty} \frac{w(z)}{z} = \frac{\lambda}{\pi w_0}
\end{equation}
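To see this chromatic spread numerically, here is a quick sketch; the beam-waist value is purely illustrative, not a pulsar measurement:

```python
import math

w0 = 10.0  # illustrative beam waist (emitting-region size), metres

def spread_angle(wavelength):
    """Asymptotic diffraction angle: theta_D = atan(lambda / (pi * w0))."""
    return math.atan(wavelength / (math.pi * w0))

# "Red" radio emission (2 cm) spreads about twice as much as "blue" (1 cm):
print(spread_angle(0.02) / spread_angle(0.01))  # ~2.0
```

In the small-angle regime the spread is simply proportional to $\lambda$, which is why the ratio comes out at the ratio of the wavelengths.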
You can see on this figure from Hopf et al. 1975 the theoretical shape of the beam expected for a pulsar:
Thus, you are completely right that the angle of "red" radiation would be greater than the one from the "blue" radiation (in fact, we are talking about radio emission, so "red" and "blue" would be, for example, $\lambda=2$ cm v.s. $\lambda=1$ cm). In fact, Hopf et al. 1975 present a figure in which they show the spectral energy variation with respect to the frequency for different angles. You can see a sharp high-frequency cutoff for $\theta_0 > \theta_\mathrm{N}$:
I must add that the intrinsic broadening of the beam and modification of the spectrum is accompanied by many diffraction and refraction effects that occur along the line of sight between the pulsar and the observer.
Interstellar scintillation of pulsars is a well known effect, it is similar to the scintillation of visible stars (due to atmospheric effects). The scintillation of pulsars is due to small-scale irregularities of the turbulent, ionized interstellar plasma upon which radio electromagnetic waves are scattered.
Interstellar diffraction effects include (time scale ~minute):
Angular broadening
Temporal broadening
Intensity scintillations in time and frequency
Interstellar refraction effects include (time scale ~weeks):
Angular wandering of the apparent sources position
Dispersive and geometric time-of-arrival variations
Slow intensity variations
Modification of the apparent brightness distribution
These diffractive and refractive scintillation effects are very convenient for astrophysicists, because they allow them to probe the electron density spectrum of the interstellar medium with pulsars.
source: Cordes 1986 | {
"domain": "physics.stackexchange",
"id": 79708,
"tags": "electromagnetic-radiation, astronomy, diffraction, pulsars"
} |
Any problem in P can be reduced to the language of odd integers | Question: Given $A=\left\{n\in \mathbb{N} \mid \text{$n$ is odd}\right\}$, we want to prove that if $S \in P$ then there is a Karp reduction from $S$ to $A$.
My attempt:
If $S \in P$ we can solve $S$ with a reduction that converts an input for $S$ into an input for $A$ in polynomial time, but I don't know how to define the function formally or how to show that it is a reduction.
Answer: Let $S$ be any language in $\mathsf{P}$. You are looking for a function $f$ with the following properties:
$f$ can be computed in polynomial time.
If $x \in S$ then $f(x) \in A$.
If $x \notin S$ then $f(x) \notin A$.
Since $S$ is in $\mathsf{P}$, we can determine whether $x \in S$ in polynomial time. Therefore, the reduction $f$ can work as follows:
Determine whether $x \in S$.
If $x \in S$ then output something in $A$.
If $x \notin S$ then output something not in $A$.
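For concreteness, here is a sketch in Python. The decider `decide_S` is hypothetical; it stands in for the polynomial-time algorithm that exists because $S \in \mathsf{P}$:

```python
def karp_reduction(x, decide_S):
    """Map x into A (the odd numbers) iff x is in S.

    decide_S is a hypothetical polynomial-time decider for S.
    """
    return 1 if decide_S(x) else 2  # 1 is odd (in A); 2 is even (not in A)

# Toy example with S = "strings of even length" (clearly in P):
is_even_length = lambda x: len(x) % 2 == 0
print(karp_reduction("ab", is_even_length))   # 1, an odd number
print(karp_reduction("abc", is_even_length))  # 2, an even number
```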
You take it from here. | {
"domain": "cs.stackexchange",
"id": 18050,
"tags": "complexity-theory, reductions, np, polynomial-time-reductions"
} |
What wind speeds and gusts can usually damage houses or trees? | Question: In my region wind speed increases in autumn. It reaches 30 km/h, with gusts up to 40 km/h. I want to know at which wind and gust speeds small damage can occur to normal window glass (we mostly use PVC for the window frames).
Tonight I noticed that a ventilator fan was damaged because the wind blew against its direction of rotation and caused it to stop. Also, my TV antenna was completely destroyed (I couldn't find it).
I want to know the wind and gust speeds that can cause damage to a typical house, outdoor cables, antennas, and windows.
Thanks
Answer: This Beaufort Scale of Wind Force image may help answer your question.
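As a rough numeric guide, the Beaufort number can be estimated from the commonly quoted empirical relation $v = 0.836\,B^{3/2}$ (with $v$ in m/s); the sketch below applies it to the speeds in the question:

```python
def beaufort_number(speed_kmh):
    """Estimate the Beaufort force for a wind speed, using the
    empirical relation v = 0.836 * B**1.5 (v in m/s)."""
    v = speed_kmh / 3.6
    return (v / 0.836) ** (2 / 3)

print(round(beaufort_number(30)))  # 5 (fresh breeze)
print(round(beaufort_number(40)))  # 6 (strong breeze)
```

Both values sit well below the gale and storm forces on the scale, where damage to structures typically begins.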
source: strleng.blogspot.com | {
"domain": "earthscience.stackexchange",
"id": 1236,
"tags": "wind"
} |
How is Pascal's law applied while the fluid moves in a hydraulic lift example? | Question: I think I have some misconception about Pascal's law. My understanding is that Pascal's law applies while the fluid is at rest, and thus the pressure is transmitted equally and acts in all directions. But what's really confusing me is the hydraulic lift example. Since the fluid is pushed down by piston 1 and moves upwards pushing piston 2, how can we still talk about Pascal's law while the fluid is moving (i.e. dynamic pressure would be present and thus the pressure would not really be equal at the pistons, based on Bernoulli's principle)?
Answer:
Since the fluid is pushed down by piston 1 and moves upwards pushing piston 2, how can we still talk about Pascal's law while the fluid is moving (i.e. dynamic pressure would be present and thus the pressure would not really be equal at the pistons, based on Bernoulli's principle)?
Strictly speaking, you are absolutely correct. Technically while the lift is moving the fluid is flowing and Pascal's law is not applicable. However, if the lift is rising slowly then the deviations from Pascal's law are small and the example is valid.
In physics, everything is an approximation to some degree or another. So we cannot avoid them and indeed we need to embrace approximations. Even our most fundamental laws are probably approximations. But it is important to always recognize the approximations we are making and consider their impact.
Approximations are useful insofar as they simplify the math, and clarify the major effects. Approximations are problematic insofar as they produce errors that exceed our desired level of accuracy.
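Under the static approximation, Pascal's law reduces the lift to one line of arithmetic; a sketch with made-up numbers (not taken from the question):

```python
def output_force(input_force, input_area, output_area):
    """Static hydraulic lift: pressure is equal at both pistons,
    so F2 = F1 * (A2 / A1) by Pascal's law."""
    return input_force * (output_area / input_area)

# Illustrative: 100 N on a 1 cm^2 piston drives a 50 cm^2 piston.
print(output_force(100.0, 1e-4, 50e-4))  # ~5000 N
```

A full fluid-dynamic analysis would add corrections of order the dynamic pressure, which is negligible for a slowly moving lift.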
For example, I recently re-worked the hydraulics on my tractor to add a grapple for moving farm debris easily. If I wanted to determine how tightly my grapple could grip a load, I could simply take my hydraulic pump specified pressure, the diameter of the cylinder, and the lever arm of the grapple itself. This would give me a good approximation of the grapple's strength using Pascal's law. I only need to know that strength to within 100 lb force or so, and the tractor's hydraulic flow is slow. So the difference between Pascal's law alone and a full fluid dynamic analysis is negligible for my needs. Pascal's law is completely valid for determining if I can grab a specific log. | {
"domain": "physics.stackexchange",
"id": 82069,
"tags": "fluid-dynamics, pressure, fluid-statics"
} |
Why no longitudinal electromagnetic waves? | Question: According to wikipedia and other sources, there are no longitudinal electromagnetic waves in free space. I'm wondering why not.
Consider an oscillating charged particle as a source of EM waves. Say its position is given by $x(t) = \sin(t)$. It is clear that at any point on the $x$ axis, the magnetic field is zero. But there is still a time-varying electric field (more or less sinusoidal in intensity, with a "DC offset" from zero), whose variations propagate at the speed of light. This sounds pretty wave-like to me. Why isn't it? Is there perhaps a reason that it can't transmit energy?
A very similar question has already been asked, but it used a "rope" analogy, and I feel that the answers overlooked the point that I'm making.
Answer: I think this is partly a question of vocabulary, and partly a reflection of the fact that the longitudinal Coulomb oscillations you describe fall off so rapidly with distance. (Basically $1/r^2$ instead of $1/r$.) Therefore they are usually called "near field effects" and are totally dominated by the transverse "waves" after a distance of only a very few wavelengths. Nevertheless, they do exist, even in a vacuum, and they do extend to infinity, just very, very weakly. | {
"domain": "physics.stackexchange",
"id": 90394,
"tags": "electromagnetism, waves, electromagnetic-radiation"
} |
Ideal Rolling Motion on Surface with Friction | Question: I have a question concerning ideal rolling motion on a surface containing friction.
By ideal rolling motion, I mean that the tangential velocity of the rolling object is the same as the velocity of the rolling object's center of mass.
If an object is rolling on a flat surface with friction, the only force that produces a net force is the force of friction, acting in the direction opposite to the motion. Thus, the net force on the rolling object is in the direction opposite to the motion, so it would seem that the center of mass would accelerate in the direction opposite to that of its motion.
On the other hand, the work done by the friction is 0, since the friction is applied over no distance (a point of the rolling wheel only touches the ground for an infinitesimally small amount of time).
Is there any way to reconcile the seemingly different results that the dynamics and work analyses produce? Any guidance would be appreciated.
Answer: "A point of the rolling wheel only touches the ground for an infinitesimally small amount of time", so in that amount of time you only do a differential (that is, infinitesimal) amount of work, $dW=F\,dt$; now add the infinite number of differential time intervals into a finite time interval and you get a finite amount of work: $W=\int F\,dt=F\Delta t$ (I am not sure how familiar you are with calculus)
"domain": "physics.stackexchange",
"id": 17566,
"tags": "rotational-dynamics, torque, rotation"
} |
Lexer written in Rust | Question: So I ported a lexer I wrote in C++ over to Rust, as I'm starting to learn Rust. Since I'm very new, though, I don't know any idioms or good practices in Rust, so if anyone could point out some (probably obvious) issues I'd be very thankful.
This is my code:
use std::cmp::PartialEq;
use std::fmt::Debug;
#[derive(Debug, PartialEq)]
pub enum Token {
EOF,
Ident(String),
Str(String),
FNum(f64),
INum(u64),
Assign,
BLsh,
BRsh,
BURsh,
If,
Else,
Elif,
Loop,
Stop,
Skip,
Yes,
No,
Nope,
Fun,
Ret,
And,
Not,
Or,
LBrack,
LBrace,
LPar,
RBrack,
RBrace,
RPar,
Semi,
Comma,
Get,
Concat,
IDiv,
FDiv,
Add,
Minus,
Mul,
Mod,
BXOr,
BAnd,
BOr,
IE,
EQ,
NE,
LT,
LE,
GT,
GE,
}
macro_rules! peek_char {
($e:expr) => {
match $e.peek().cloned() {
Some(c) => c,
None => {
return Token::EOF;
}
}
};
}
pub fn get_token(iterator: &mut std::iter::Peekable<std::str::Chars>) -> Token {
let mut next: char = peek_char!(iterator);
let mut current_token: String = String::new();
if next.is_whitespace() {
loop {
next = peek_char!(iterator);
if !next.is_whitespace() {
break;
}
iterator.next();
}
get_token(iterator)
} else if next == '#' {
loop {
next = peek_char!(iterator);
if next == '\n' {
break;
}
iterator.next();
}
get_token(iterator)
} else if next.is_alphabetic() {
loop {
next = match iterator.peek().cloned() {
Some(c) => c,
None => {
break;
}
};
if !(next.is_alphanumeric() || next == '_') {
break;
}
iterator.next();
current_token.push(next);
}
match current_token.as_str() {
"if" => Token::If,
"else" => Token::Else,
"elif" => Token::Elif,
"loop" => Token::Loop,
"stop" => Token::Stop,
"skip" => Token::Skip,
"yes" => Token::Yes,
"no" => Token::No,
"nope" => Token::Nope,
"fun" => Token::Fun,
"return" => Token::Ret,
"and" => Token::And,
"not" => Token::Not,
"or" => Token::Or,
_ => Token::Ident(current_token),
}
} else if next == '"' {
iterator.next();
loop {
next = peek_char!(iterator);
let to_add: char = if next == '\\' {
iterator.next();
next = peek_char!(iterator);
match next {
't' => '\t',
'b' => '\x08',
'n' => '\n',
'r' => '\r',
'f' => '\x0c',
'"' => '"',
'\\' => '\\',
_ => panic!("unknown escaped character"),
}
} else if next == '"' {
iterator.next();
break;
} else {
next
};
iterator.next();
current_token.push(to_add);
}
Token::Str(current_token)
} else if next.is_digit(10) {
loop {
next = match iterator.peek().cloned() {
Some(c) => c,
None => {
break;
}
};
if !(next.is_digit(10) || next == '.') {
break;
}
iterator.next();
if next == '.' && current_token.contains('.') {
panic!("multiple decimal points in number");
}
current_token.push(next);
}
if current_token.contains('.') {
Token::FNum(
current_token
.parse::<f64>()
.expect("error reading float literal"),
)
} else {
Token::INum(
u64::from_str_radix(&current_token, 10).expect("error reading integer literal"),
)
}
} else if next == '/' {
iterator.next();
next = match iterator.peek().cloned() {
Some(c) => c,
None => {
return Token::FDiv;
}
};
if next == '/' {
iterator.next();
Token::IDiv
} else {
Token::FDiv
}
} else if next == '?' {
iterator.next();
next = peek_char!(iterator);
iterator.next();
if next == '=' {
Token::IE
} else {
panic!("unknown character");
}
} else if next == '!' {
iterator.next();
next = peek_char!(iterator);
iterator.next();
if next == '=' {
Token::NE
} else {
panic!("unknown character");
}
} else if next == '=' {
iterator.next();
next = match iterator.peek().cloned() {
Some(c) => c,
None => {
return Token::Assign;
}
};
if next == '=' {
iterator.next();
Token::EQ
} else {
Token::Assign
}
} else if next == '<' {
iterator.next();
next = match iterator.peek().cloned() {
Some(c) => c,
None => {
return Token::LT;
}
};
if next == '=' {
iterator.next();
Token::LE
} else if next == '<' {
iterator.next();
Token::BLsh
} else {
Token::LT
}
} else if next == '>' {
iterator.next();
next = match iterator.peek().cloned() {
Some(c) => c,
None => {
return Token::GT;
}
};
if next == '=' {
iterator.next();
Token::GE
} else if next == '>' {
iterator.next();
next = match iterator.peek().cloned() {
Some(c) => c,
None => {
return Token::BRsh;
}
};
if next == '>' {
iterator.next();
Token::BURsh
} else {
Token::BRsh
}
} else {
Token::GT
}
} else {
iterator.next();
match next {
'[' => Token::LBrack,
'{' => Token::LBrace,
'(' => Token::LPar,
']' => Token::RBrack,
'}' => Token::RBrace,
')' => Token::RPar,
';' => Token::Semi,
'.' => Token::Get,
',' => Token::Comma,
'+' => Token::Add,
'-' => Token::Minus,
'*' => Token::Mul,
'%' => Token::Mod,
'$' => Token::Concat,
'^' => Token::BXOr,
'&' => Token::BAnd,
'|' => Token::BOr,
_ => panic!("unknown character"),
}
}
}
Answer: A couple of general notes:
I’m not sure why you used a macro for peek_char!. I’m relatively certain this could have been a standard function.
Take some time to learn about slices.
You have a single, several hundred line function there. It’s hard to read and follow. Extract lots of well named functions to improve readability.
If you panic inside your function, there’s not really any way for the person calling your function to handle it. This should return a Result<Token, ParseError>. | {
"domain": "codereview.stackexchange",
"id": 37388,
"tags": "beginner, parsing, rust, lexical-analysis"
} |
What does "conjugation of coordinates" mean with respect to GF(4) (quantum) codes | Question: In On the classification of all self-dual additive codes over $\textrm{GF}(4)$ of length up to 12 by Danielsen and Parker, they state:
Two self-dual additive codes over $\textrm{GF}(4)$, $C$ and $C^\prime$, are equivalent if and only if the codewords of $C$ can be mapped onto the codewords of $C^\prime$ by a map that preserves self-duality. Such a map must consist of a permutation of coordinates (columns of the generator matrix), followed by multiplication of coordinates by nonzero elements from $\textrm{GF}(4)$, followed by possible conjugation of coordinates.
with the previous definition
Conjugation of $x \in \textrm{GF}(4)$ is defined by $\bar{x} = x^2$.
I am confused what "conjugation of coordinates" means in this context. To me "coordinates" would normally refers to the code matrix's columns, or equivalently the quantum code's qubits. However, here it seems to be referring to the alphabet of the code, or equivalently the Pauli operators of the code's stabilizer generators. If this is the case, what operation does "conjugation of coordinates" represent with respect to the code's stabilizer generators?
Answer:
I am confused what "conjugation of coordinates" means in this context.
Conjugating coordinates of $\mathcal C$ is equivalent to setting some diagonal elements of $\Gamma$ to 1.
Read Theorem 12 on pages 8 and 9 for an understanding of the usage; this is further explained on page 15 (last paragraph):
"As mentioned before, the set of self-dual linear codes over GF(4) is a subset of the self-dual additive codes of Type II. Note that conjugation of single coordinates does not preserve the linearity of a code. It was shown by Van den Nest $^{[25]}$ that the code $\mathcal C$ generated by a matrix of the form $\Gamma + \omega I$ can not be linear. However, if there is a linear code equivalent to $\mathcal C$, it can be found by conjugating some coordinates. Conjugating coordinates of $\mathcal C$ is equivalent to setting some diagonal elements of $\Gamma$ to 1. Let $A$ be a binary diagonal matrix such that $\Gamma + A + \omega I$ generates a linear code. Van den Nest $^{[25]}$ proved that $\mathcal C$ is equivalent to a linear code if and only if there exists such a matrix $A$ that satisfies $\Gamma^2 + A\Gamma + \Gamma A + \Gamma + I = 0$. A similar result was found by Glynn et al. $^{[12]}$. Using this method, it is easy to check whether the LC orbit of a given graph corresponds to a linear code. However, self-dual linear codes over GF(4) have already been classified up to length 16, and we have not found a way to extend this result using the graph approach."
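To make the conjugation map concrete, here is a small sketch of GF(4) arithmetic, encoding $0, 1, \omega, \omega^2$ as $0, 1, 2, 3$ with the standard relation $\omega^2 = \omega + 1$:

```python
# Multiplication table for GF(4) = {0, 1, w, w^2}, encoded as 0, 1, 2, 3.
MUL = [
    [0, 0, 0, 0],
    [0, 1, 2, 3],
    [0, 2, 3, 1],  # w * w = w^2, w * w^2 = w^3 = 1
    [0, 3, 1, 2],
]

def conj(x):
    """Conjugation in GF(4): x -> x^2."""
    return MUL[x][x]

# Conjugation fixes the subfield GF(2) = {0, 1} and swaps w with w^2:
print([conj(x) for x in range(4)])  # [0, 1, 3, 2]
```

Applied to a coordinate of a codeword, this swaps $\omega$ and $\omega^2$ while leaving $0$ and $1$ untouched, which is why it can destroy linearity while preserving additivity.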
References:
[12] D. G. Glynn, T. A. Gulliver, J. G. Maks, M. K. Gupta, The geometry of additive quantum codes, submitted to Springer-Verlag, 2004. Book
[25] M. Van den Nest, Local Equivalence of Stabilizer States and Codes, Ph.D. thesis, K. U. Leuven, Leuven, Belgium, May 2005. .PDF (English starts on page 22) | {
"domain": "quantumcomputing.stackexchange",
"id": 203,
"tags": "error-correction, stabilizer-code"
} |
Partition points in a plane with a straigth line | Question: Given are a 2D plane and a array of points in this plane, with every point having an integer value assigned.
Is there an algorithm which, when given a ratio a/b, divides the plane with a straight line, so that the values of the points are distributed as close as possible to the given ratio?
Points may be on the dividing line; they are then counted toward the 'left/upper' partition.
Answer: There is a brute-force $O(n^3)$ algorithm.
There are $\binom n 2 = \frac {n(n-1)} 2$ lines through pairs of points. If a line passes through $k$ points, there are $2(k+1)$ ways to rotate the line by a negligible angle so that no points lie on it, and each rotation partitions the $k$ points differently.
For each pair of vertices (v1, v2),
rotate the space so that v1.y = v2.y = 0
let UP be the sum of values of vertices with positive y
let DOWN be the same for negative y
let L be a list of vertices with y = 0, sorted by x
For each of L.length+1 ways to cut L in two,
let LEFT be the sum of values of vertices in L to the left of the cut
let RIGHT be the same for vertices to the right
check whether one of UP + LEFT, DOWN + RIGHT or UP + RIGHT, DOWN + LEFT
is a better partition than your previous best. | {
"domain": "cs.stackexchange",
"id": 2203,
"tags": "algorithms, computational-geometry, partitions"
} |
Difference in viscosity-concentration and reduced viscosity-concentration graphs | Question: I prepared 3 polymer solutions with concentrations of 100, 50 and 25 ppm in order to measure their viscosity. When I plot the concentration vs. viscosity graph in Excel, I get a perfect line; that is, the average viscosity is directly proportional to the concentration. However, when I plot a graph of reduced viscosity vs. concentration, I find that the points do not lie on a line but are scattered. In short, the values of the reduced viscosity are not directly proportional to the concentration, as was the case for the viscosity. To calculate the reduced viscosity, I used the following relation:
$$
\eta_{\text{red}}=\frac{\eta_0-\eta_s}{\eta_s\cdot c}
$$
where $\eta_0$ is the average viscosity of the solution, $\eta_s$ the average viscosity of the solvent and $c$ the concentration of the solution.
My professor asked me to prepare another solution as according to him this is not normal and that reduced viscosity should also vary more or less directly with the concentration.
My question is: since my viscosity vs. concentration graph is a straight line, how will dividing each viscosity by its respective concentration give another straight line, given that the quotients differ? Or did I calculate the reduced viscosity incorrectly (with the formula I provided above)?
Answer: The reduced viscosity is normally defined as:
$$ \eta_r = \frac{\eta - \eta_0}{\eta_0\phi} $$
where $\eta$ is the viscosity of the solution, $\eta_0$ is the viscosity of the solvent and $\phi$ is the volume fraction of the polymer. However it's fine to use the concentration of the polymer $c$ as this just multiplies everything by a constant.
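As an illustration of why the intercept matters, here is a sketch with synthetic data; the numbers are invented for the example, not the questioner's measurements:

```python
eta_s = 1.0                          # solvent viscosity (arbitrary units)
cs = [25e-6, 50e-6, 100e-6]          # concentrations

# Synthetic solution viscosities following a Huggins-type expansion,
# eta = eta_s * (1 + [eta]*c + k_H*([eta]*c)**2), with [eta] = 400:
intrinsic = 400.0
etas = [eta_s * (1 + intrinsic * c + 0.3 * (intrinsic * c) ** 2) for c in cs]

# Reduced viscosity at each concentration:
eta_red = [(eta - eta_s) / (eta_s * c) for eta, c in zip(etas, cs)]

# Extrapolate the linear trend back to c = 0; the intercept is the
# intrinsic viscosity [eta]:
slope = (eta_red[1] - eta_red[0]) / (cs[1] - cs[0])
intercept = eta_red[0] - slope * cs[0]
print(intercept)  # ~400
```

Note that the reduced viscosity here is linear in $c$ but not proportional to it, which matches the expected graph.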
If you graph the reduced viscosity against concentration you would expect the graph to look like:
So it will be a straight line but it won't go through the origin. The $y$-intercept, shown as $[\eta]$, is called the intrinsic viscosity and gives you information about the size and shape of the polymer molecules. | {
"domain": "physics.stackexchange",
"id": 30812,
"tags": "physical-chemistry, viscosity, polymers, rheology"
} |
Why have Majorana fermions not been detected? | Question: There are two types of fermions - Dirac's and Majorana's. Majorana's fermions are their own antiparticles and they have not been detected yet. Sometimes, it is conjectured that e.g. neutrinos could be Majorana fermions.
However, it seems to me that there should be no Majorana fermions in the current Universe. When a particle and its antiparticle meet, they annihilate each other. Since there is no distinction between particles and antiparticles in the case of Majorana fermions, a bunch of Majorana fermions should completely annihilate (maybe one particle can survive in case there was an odd number of particles in the bunch). This should have happened shortly after the Big Bang, when all types of particles were created and the Universe was still small. Perhaps we could create Majorana fermions in accelerators; however, I suppose we should not be able to detect them in Nature.
Does my reasoning make sense? If not, what am I missing?
Answer:
Since there is no distinction between particles and antiparticles in the case of Majorana fermions, a bunch of Majorana fermions should completely annihilate
It is not that simple to model a baryon-dominated universe. The same reasoning applied to Dirac fermions leads to the problem of why there is baryon asymmetry in the observed universe.
According to the Big Bang theory, equal amounts of matter and anti-matter were initially created. When matter and anti-matter come into contact, they annihilate into pure energy, producing photons and nothing else. The relic of this primordial annihilation is the Cosmic Microwave Background, the 2.7 Kelvin radiation that fills the entire Universe. But not all of the matter annihilated into photons: about one out of every billion quarks survived and originated the Universe as we know it today. How could some matter survive the primordial annihilation?
So it is not a matter just for Majorana neutrinos. If you read the link you will see that the question of their existence is still open to research. | {
"domain": "physics.stackexchange",
"id": 75210,
"tags": "particle-physics, antimatter, majorana-fermions"
} |
Kepler space telescope undetected planets | Question: The Kepler space telescope detects planets based on the dip in brightness caused by planets moving past the star.
Wouldn’t that mean that there are an unknown amount of planets that have an orbit that wouldn’t be detected because their orbits don’t cross that path between the star and the telescope?
Answer: That's right. The inclination of the orbital plane around stars is considered to be random throughout the galaxy, so the planets we can detect by the transit method are just a tiny fraction of the planets we should expect in our stellar neighbourhood.
The transit method allows for planetary detection only when the line of sight from Earth to the system is contained, or almost contained, in the orbital plane of the planet. This means that only a tiny range of orbital inclinations on each star are good for detection.
Why did I say almost? Because there is some range of inclinations that still would yield a transit. This range is not fixed, and it depends on the distance of the planet to its host star. As you can see in this diagram:
Planet A is closer to the star and thus creates a wider shadow. If an observer is located in that shadowed region far away it can detect planet A. Planet B instead is farther from the star and thus its shadow is narrower. It is interesting to note, that even if both planets here share the exact same orbital plane there are places from where you would only detect planet A and never detect planet B (see the green arrows). This is the reason we have a bias towards planets orbiting closer to their star.
This effect is in fact quite strong: consider our Solar System from an exoplanetary perspective. If you were located in a random star in the sky, what are the chances you would spot an Earth transit? Well, it turns out that it is way more probable to detect a Mercury transit, even if Mercury is the smallest planet, just because of its vicinity to the Sun. A recent paper showed this diagram of the regions of the sky where some alien inhabitants would spot a transit for each of our planets:
As you can see Mercury has the wider strip. Also it's interesting to note that due to these differences in the size of the orbits (let's use the semi-major axis, $a$, as a reference) and due to small differences in orbital inclinations there is no place in the entire sky from which an alien could detect simultaneously more than four of our planets by the transit method. No place in the universe where all the Solar System's planets would be detectable.
The detection method also depends on the relative sizes of the star, $R_s$, and the planet $R_p$: A larger star has a larger disk (as viewed from Earth) that can be easily photobombed by a planet and a larger planet can photobomb more easily if it is larger.
The result is that the probability of detecting a planet increases as we increase both/either $R_p$ and $R_s$ and increases as we decrease the distance to the host star $a$. The relation is then of this form:
$P \sim (R_s+R_p)/a$
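Plugging in standard values for the Sun's and Earth's radii and 1 AU reproduces the alignment probability for an Earth analogue quoted later in this answer:

```python
R_sun = 6.957e8    # m
R_earth = 6.371e6  # m
a = 1.496e11       # m (1 AU)

p = (R_sun + R_earth) / a
print(f"{p:.2%}")    # ~0.47% chance of a transit-friendly alignment
print(round(1 / p))  # ~213 similar planets missed per one detected
```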
This relation imposes several observational biases. We can see exoplanets that are large and closer to their star, but we can't see planets that are small and farther. That is the reason the first detected exoplanets are the so-called hot Jupiters: giant planets much closer to their stars than Mercury is to the Sun. This diagram shows all the exoplanet detections plotted on size vs. orbital distance:
As you can see, small planets are only detectable if they have very small orbits around their stars. We have yet to find a planet the size of Earth (quite small) and with a 365 day orbital period (1 AU distance) using the transit method. There is no reason to think that this is representative of the overall population of planets. The black region of the plot is probably filled with dots, but our instruments can't scout that region yet.
The Kepler telescope had a camera with a field of view on which it could detect more than half a million stars, but the actual number of stars monitored during the mission was around 150,000 stars (these stars had good signals and were perfect targets for the mission). For these 150,000 stars Kepler found 2,345 exoplanets distributed in 1,205 stars. So we can say that for each star targeted by Kepler, the average probability of finding some planets there is around $0.8\;\%$. That should give you an estimate of the occurrence of orbital inclinations that result in transits.
The truth is that this number is too small, because Kepler has several more biases. For example, Kepler only confirmed planets after three transits were detected. Since the Kepler mission lasted for four years and four months we can say that in the best case scenario Kepler was able to detect a planet with an orbital period as long as two years and two months, but this is not even the case since for that to happen a transit should have been detected just at the beginning of the mission, halfway, and at the exact end of it, and this coincidence didn't happen. Thus Kepler had no chance to discover any planet with periods longer than two years (enough for Earth, but not enough for our Jupiter for example), even if the orbital inclination matched perfectly for the transit. So you might expect more possible transits than those actually portrayed by the Kepler telescope.
In fact, for planets close to their stars, it has been estimated that the probability of a random alignment to allow a transit gets up to $10\;\%$. For the case of stars as big as our Sun and planets at the same distance as Earth, the probability of this random occurrence drops to $0.47 \;\%$. So with all the diversity of planets (in terms of sizes and distances to their host star) it is reasonable to expect a $0.8\;\%$ detection rate for Kepler (if we also add the time restriction to observe three transits).
A $0.47\;\%$ is an amazing number! It means that for every Earth-like planet we detect by the transit method we should expect another 213 Earth-like planets orbiting other stars that are undetectable by the transit method.
This kind of reasoning has been expanded. We have many difficulties in detecting them, but if you mathematically model that difficulty and the corresponding biases associated with the known instruments, and you assume random configurations, you can see that each discovery yields statistical significance to the amount of possible planets that are really out there. There are so many detections now that we can finally establish with statistical confidence that there are more planets than stars in our galaxy (even if we have probed an infinitesimal fraction of the entire population); even if this was something that could be expected, we now have strong evidence for it thanks to Kepler. This means that there could be around a trillion or more planets just in the Milky Way. We are now also able to establish some statistical constraints on the occurrence of Earth-like planets (orbiting in the habitable zone of their Sun-like star) thanks to Kepler. There are probably around 11 billion planets in our galaxy with these specifications.
TL;DR
There are many more planets than the ones we can detect by the transit method, between 10 and 100 times more depending on the size and orbital period of the planet you are searching for. | {
"domain": "astronomy.stackexchange",
"id": 3857,
"tags": "planet, kepler"
} |
A question regarding momentum of inertia of L-shaped bar | Question: I have trouble deriving moment of inertia $\Theta_A$ of the bar rotating around the point A shown in the image.
The answer of the problem says:
$\Theta_A=\frac{m}{3}\frac{a^2}{3}+\left[\frac{(2a)^2}{12}+(a^2+a^2)\frac{2m}{3}\right]$.
I understand the first part of RHS of the equation which is the moment of inertia of the shorter member. However, I have trouble deriving the second part which is the moment of inertia of the longer member.
Can someone give me a detailed procedure of the derivation?
Ref:Dietmar Gross et al, Dynamics-Formulas and Problems, Springer
Answer: The first term inside the bracket is missing a factor of mass.
To derive the moment of inertia of the rod of length $2a$, the parallel axes theorem can be applied.
If $I$ is the moment of inertia about the axis through the hinge (which lies outside the rod), and $I_{c}$ is the moment of inertia about the axis through the centre of mass of the rod, with $d$ being the perpendicular distance between the axes, the theorem states that
$I=I_{c}+Md^{2}$
Using the well known result for the moment of inertia of a rod
$I_{c} = \frac{2m}{3}\cdot\frac{(2a)^2}{12} = \frac{2ma^2}{9}$
And,
$Md^2 = \frac{2m}{3}\cdot(a^2 + a^2) = \frac{4ma^2}{3}$
Observe here that the distance between the CM axis and the hinge axis is $d = \sqrt{2a^2}$ by Pythagoras' theorem.
Now we get
$ I = \frac{2ma^2}{9} + \frac{4ma^2}{3} = \frac{14ma^2}{9}$
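As a quick numerical cross-check of this arithmetic (a sketch added here, not part of the original answer), the exact fractions can be verified in a few lines:

```python
from fractions import Fraction

# Work in units of m*a^2 by setting m = a = 1 (exact rational arithmetic).
m = Fraction(1)
a = Fraction(1)

M = Fraction(2, 3) * m       # mass of the longer rod: 2m/3
L = 2 * a                    # its length: 2a

I_c = M * L**2 / 12          # rod about its own centre: M L^2 / 12
d2 = a**2 + a**2             # squared CM-to-hinge distance (Pythagoras)

I = I_c + M * d2             # parallel axis theorem
print(I)                     # 14/9, i.e. 14 m a^2 / 9
```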
This is the moment of inertia of the longer rod. The given answer appears to be incorrect. | {
"domain": "physics.stackexchange",
"id": 96966,
"tags": "homework-and-exercises, newtonian-mechanics, rotational-dynamics, reference-frames, moment-of-inertia"
} |
Set the parameters of a Erdos-Renyi graph generator to get a specific mean degree | Question: I'm trying to reproduce the synthetic networks (graphs) described in some papers.
The topic is the same as a previous question of mine, but with a different focus.
It is stated that the Erdos-Renyi model was used to create $2$ networks with average degrees $\langle k_a \rangle$ and $\langle k_b \rangle$.
In the first paper, the average degree is $k = 4$ , while the number of nodes $n$ is 50000.
In the second paper the average degree is not called $k$, but it's stated that the mean degree is $1.999$ for $n = 200$ (in Fig. 2), while it is $2.45$ for $n = 4000 $ and $ n = 6000$ (in Fig. 7).
I looked for libraries implementing the Erdos-Renyi algorithm and they seem to require different parameters than average degree. One is NetworkX, another is igraph. They work in similar ways and ask for:
$n$ - number of nodes
$0 \leq p \leq 1$ - the probability for drawing an edge between two arbitrary vertices
$m$ - the number of edges in the graph (in alternative to $p$, only in igraph)
How can I calculate the settings to generate a graph with the same average degree as the ones described in the papers?
Here are the references:
Catastrophic cascade of failures in interdependent networks, Buldyrev et al. 2010, with a separately provided Supplementary Information
Small Cluster in Cyber Physical Systems, Huang et al. 2014
Catastrophic cascade of failures in interdependent networks, Havlin et al. 2010, this is on the Arxiv and somewhat clarifies the first
Note that these papers used generating functions to analytically study some properties of those graphs. However, they also run simulations on those models, so they must have generated those networks somehow.
Answer: Average degree and mean degree are the same. In the $G(n,m)$ model, the average degree is $2m/n$. In the $G(n,p)$ model, the expected average degree is $np$. The actual average degree has normal distribution with mean $np$ and standard deviation $\sqrt{2(1-\tfrac{1}{n})p(1-p)}$, so it is pretty close to $np$ with high probability.
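These formulas translate directly into generator parameters: to target an average degree $\langle k\rangle$, pass $p = \langle k\rangle/(n-1)$ (so each vertex's expected degree is exactly $\langle k\rangle$; $\langle k\rangle/n$ works just as well for large $n$) to NetworkX's gnp_random_graph, or $m = \operatorname{round}(\langle k\rangle n/2)$ to gnm_random_graph. A dependency-free sketch of the $G(n,p)$ case (the helper function is mine, not from the answer):

```python
import random

def gnp_average_degree(n, p, seed=42):
    """Sample a G(n, p) graph and return its average degree 2m/n."""
    rng = random.Random(seed)
    edges = 0
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                edges += 1
    return 2 * edges / n

n, target_k = 2000, 4.0
p = target_k / (n - 1)       # expected degree of every vertex is (n-1)p = target_k
avg = gnp_average_degree(n, p)
print(avg)                   # close to 4.0 with high probability
```

With the $G(n,m)$ variant the average degree $2m/n$ is exact by construction, which is convenient when reproducing a reported mean degree.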
When $p=c/n$ for fixed $c$, the degree of a single vertex has roughly Poisson distribution (with mean $c$); formally, as $n\to\infty$ the distribution tends to Poisson. In practice, this means that for "small" $c$ and "large" $n$, the degree will be Poisson. Furthermore, while there is some dependency between vertices, we still expect the entire empirical degree distribution to be close to the same Poisson distribution (i.e., we expect the fraction of vertices having degree $d$ to be roughly $e^{-c} c^d/d!$); this is probably proved formally in the literature.
When $p$ is constant, the degree of a single vertex has roughly Gaussian distribution, and it is probably the case that the joint distribution is also close to the appropriate multivariate Gaussian. | {
"domain": "cs.stackexchange",
"id": 4210,
"tags": "algorithms, graphs, randomness, sampling, modelling"
} |
Physical meaning of $\langle nlm|\hat{z}|n'l'm'\rangle$ | Question: I'm working on a quantum mechanics problem with some friends and we're trying to make an argument using symmetry rather than maths. What would the physical interpretation of $\langle nlm|\hat{z}|n'l'm'\rangle$ be for two states of the Hydrogen atom?
Answer: The simplest interpretation would be as a transition amplitude. If an electric dipole oriented along the $z$ axis interacted with your atom, the probability of
transition from $\vert n'l'm'\rangle$ to $\vert nlm\rangle$ would be proportional to
\begin{align}
\vert \langle nlm\vert \hat z \vert n'l'm'\rangle\vert^2\, .
\end{align}
Note that the operator $\hat z$ can only connect states with $l'=l\pm 1$ and $m'=m$; there is no selection rule restricting the principal quantum number $n$. If the orientation was different there could be a change in the magnetic quantum number. A transition to a state of higher energy is absorption, whereas a transition to a state of lower energy is emission.
"domain": "physics.stackexchange",
"id": 79726,
"tags": "quantum-mechanics, hilbert-space, wavefunction, symmetry, hydrogen"
} |
How to perform a projective measurement on one component of a composite system? | Question: For simplicity, let $|\phi\rangle|\psi\rangle\in\Bbb C^2\otimes\Bbb C^2$. I know how to compute the projective measurement $\{P_m\}_m$ of $|\phi\rangle|\psi\rangle$ on $\Bbb C^2\otimes\Bbb C^2$, but I wonder how to measure the first component $|\phi\rangle$ of $|\phi\rangle|\psi\rangle$ with respect to a projective measurement $\{P_m\}_m$ on $\Bbb C^2$. And I also wonder whether the second component collapses after the measurement. What will be the resulting state? PS. I haven't seen the explanation in N&C's book. A reference is also welcomed.
Answer: If you perform a local measurement $\{P_m\}$ on the first system only, then the global measurement is given by the projectors $P_m \otimes \mathbb{I}$ where $\mathbb{I}$ is the identity matrix.
Consequently, if you perform a local measurement on a product state $|\phi\rangle\otimes|\psi\rangle$, then the state of the second system is not disturbed as the post-measurement state is simply
$$
\frac{(P_m \otimes \mathbb{I})( |\phi\rangle \otimes|\psi\rangle)}{\| (P_m \otimes \mathbb{I})(|\phi\rangle\otimes|\psi\rangle)\|_2} =\frac{P_m |\phi\rangle}{\|P_m |\phi\rangle\|_2} \otimes|\psi\rangle.
$$
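The product-state case is easy to verify numerically. This small sketch (my addition, in plain Python rather than anything from a reference) applies $P_0\otimes\mathbb{I}$ with $P_0 = |0\rangle\langle 0|$ and checks that the second factor is untouched:

```python
import math

def kron(u, v):
    """Kronecker product of two single-qubit state vectors."""
    return [a * b for a in u for b in v]

def project_first_on_0(state):
    """Apply P0 (x) I: keep the |00>, |01> amplitudes, zero the rest."""
    return [state[0], state[1], 0.0, 0.0]

def normalize(state):
    n = math.sqrt(sum(abs(x) ** 2 for x in state))
    return [x / n for x in state]

phi = [0.6, 0.8]                             # qubit 1
psi = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # qubit 2

post = normalize(project_first_on_0(kron(phi, psi)))
print(post)   # equals |0> (x) |psi>: qubit 2 is undisturbed
```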
In contrast, if you measure an entangled state, then this is not true anymore. For instance, take the well-known 2-qubit Bell state
$$
|\phi^+\rangle = \frac{1}{\sqrt 2} \big( | 00 \rangle + | 11 \rangle \big).
$$
Measuring the first system in the computational basis will collapse this either into $| 00 \rangle$ or $| 11\rangle$ with probability $1/2$ each. Thus, the (reduced) state of the second system depends on the outcome of the measurement on the first one. | {
"domain": "quantumcomputing.stackexchange",
"id": 2658,
"tags": "measurement, textbook-and-exercises"
} |
Is cyclobutadiene antiaromatic? | Question: Cyclobutadiene is very unstable. But, some sources claim that this instability can be attributed to other factors such as ring and angle strain rather than antiaromaticity.
According to some, cyclobutadiene is simply non-aromatic (as opposed to antiaromatic) because it doesn't even have a fully conjugated pi system.
What is/are the real reason(s) for its instability?
Answer: If we go back to your earlier question on Frost diagrams (I've reproduced the key figure below), we see why simple molecular orbital theory or the "$4n+2$" rule suggests that benzene is aromatic while cyclobutadiene is antiaromatic.
Forming a planar, conjugated, 6-membered ring and placing 6 π-electrons in it creates a π-system that is energetically more stable than 3 conjugated C=C bonds (e.g. in hexa-1,3,5-triene). On the other hand, forming a planar, conjugated, 4-membered ring and placing 4 π-electrons in it creates a π-system that is energetically less stable than 2 conjugated C=C bonds (e.g. in butadiene). For these reasons we say that the first system, benzene, is "aromatic", while the second system, cyclobutadiene, is "antiaromatic". Other measurements and physical phenomenon such as reactivity, bond lengths, ring currents, etc. support these conclusions.
Free cyclobutadiene has been observed as a transient intermediate. Further theoretical analysis suggests that due to its lack of aromaticity it will distort from a square structure to a rectangular one with alternating single and double bonds (Jahn–Teller effect).
sources claim that this instability can be attributed to other factors such as ring and angle strain rather than antiaromaticity
It turns out that other systems involving the cyclobutadiene skeleton have been prepared and studied. Derivatives of both the cyclobutadiene dication[1] ($4n+2$, $n=0$) and the cyclobutadiene dianion[2] ($4n+2$, $n=1$) have been prepared and studied. Their stability is much less than that of benzene, but much more than that of cyclobutadiene. Placing two charges in these small rings results in extremely high coulombic repulsions in both the dianion and dication, and this may be a significant part of the explanation as to why they are less stable than say, benzene. The fact that they exist as planar structures, with appropriate ring currents and are more stable than cyclobutadiene suggests that the instability observed with cyclobutadiene is not due to ring strain and angle strain alone. Rather the aromatic stabilization - antiaromatic destabilization suggested by simple MO theory seems to be involved.
References
Olah, G. A.; Staral, J. S. Novel aromatic systems. 4. cyclobutadiene dications. J. Am. Chem. Soc. 1976, 98 (20), 6290–6304. DOI: 10.1021/ja00436a037.
Takanashi, K.; Inatomi, A.; Lee, V. Y.; Nakamoto, M.; Ichinohe, M.; Sekiguchi, A. Tetrakis(trimethylsilyl)cyclobutadiene dianion alkaline earth metal salts: new members of the 6π-electron aromatics family. Eur. J. Inorg. Chem. 2008, 2008 (11), 1752–1755. DOI: 10.1002/ejic.200800066. | {
"domain": "chemistry.stackexchange",
"id": 2069,
"tags": "organic-chemistry, aromaticity"
} |
Should I split class or keep it all in one? | Question: Currently, in my project I have one class that is dedicated to all of the queries for the IBM i and converts the results into usable models for my .NET code. There are already 12 methods in this class and there will be many, many more to come. This is for a tool that pulls data from our IBM i and pushes it to our internet database server.
Should I split this class up? Should I make it a partial class and put the code across multiple files? Should I do multiple classes?
Just wondering what the best practice is on something like that.
Currently this is the framework I have in place
class IbmIDatabase
{
private DateTime ZERO_DATE = new DateTime(1900, 1, 1, 0, 0, 0);
private string _connString = String.Empty;
SystemCodeRepository scr = new SystemCodeRepository();
/// <summary>
/// Initializes a new instance of the <see cref="IbmIDatabase"/> class.
/// </summary>
public IbmIDatabase()
{
}
#region System Codes
/// <summary>
/// Gets all system codes.
/// </summary>
/// <returns></returns>
public IEnumerable<SystemCode> GetAllSystemCodes()
{
}
#endregion System Codes
#region Citations
/// <summary>
/// Gets all citations.
/// </summary>
/// <returns></returns>
public IEnumerable<ParkingTicket> GetAllCitations()
{
}
#endregion Citations
#region Queue
/// <summary>
/// Gets the queued records.
/// </summary>
/// <returns></returns>
public IEnumerable<QueuedRecord> GetQueuedRecords()
{
}
/// <summary>
/// Marks the queue as processed.
/// </summary>
/// <param name="id">The id.</param>
public void MarkQueueAsProcessed(int id)
{
}
#endregion Queue
#region Utility Bill Customer
/// <summary>
/// Gets the utility bill customer.
/// </summary>
/// <param name="id">The customer id.</param>
/// <returns></returns>
public Customer GetUtilityBillCustomer(int id)
{
}
/// <summary>
/// Gets the utility bill customer.
/// </summary>
/// <returns></returns>
public IQueryable<Customer> GetUtilityBillCustomer()
{
}
#endregion
#region Utility Bill Details
/// <summary>
/// Gets the customer history.
/// </summary>
/// <param name="id">CustomerId</param>
/// <returns></returns>
public IQueryable<BillHistory> GetCustomerHistory(int id)
{
}
/// <summary>
/// Gets the customer history.
/// </summary>
/// <returns></returns>
public IQueryable<BillHistory> GetCustomerHistory()
{
}
#endregion
#region Utility Bill Payment Details
/// <summary>
/// Gets the customer payment history.
/// </summary>
/// <param name="id">The id.</param>
/// <returns></returns>
public IQueryable<BillPaymentHistory> GetCustomerPaymentHistory(int id)
{
}
/// <summary>
/// Gets the customer payment history.
/// </summary>
/// <returns></returns>
public IQueryable<BillPaymentHistory> GetCustomerPaymentHistory()
{
}
#endregion
#region Utility Bill Summary
/// <summary>
/// Gets the customer summary.
/// </summary>
/// <param name="id">CustomerId</param>
/// <returns></returns>
public IEnumerable<BillSummary> GetCustomerSummary(int id)
{
}
/// <summary>
/// Gets the customer summary.
/// </summary>
/// <returns></returns>
public IQueryable<BillSummary> GetCustomerSummary()
{
}
#endregion
}
Within each method can be quite a few lines of code (up to many 100 or so?). I left one of the longest ones so far in the sample code.
Answer: This class has a well defined role and by keeping it intact you are satisfying the Single Responsibility Principle. So, I would leave it like that. Perhaps you might want to look into how you could refactor some common code out of each method and into a separate utility class, but that's all. | {
"domain": "codereview.stackexchange",
"id": 1121,
"tags": "c#, classes"
} |
How is "Band Intensity" related to absorption coefficient | Question: I am interested in the linear absorption of $762\,\rm nm$ light near a transition of molecular oxygen. I need to find some experimental numbers that will tell us how far the $762\,\rm nm$ light will propagate before getting absorbed. Specifically, I want to know the e-folding length, $\gamma^{-1}$ (the length over which the intensity will drop by $e^{-1}$). I believe this is also called the optical depth when using Beer-Lambert law.
My main problem is that I do not know the definitions of experimentally measured quantities and how they relate to the e-folding length. I was reading "Atmospheric Propagation of Radiation" by Frederick Smith and page 61 says that for the inverse wavelength $\lambda^{-1}=13\,120.909\,\rm cm^{-1}$ the Band Intensity is $1.95\times10^{-22}\,\rm cm$. In "Laser Remote Chemical Analysis" they call it the integrated band intensity for this line but with units of cm-molecule (basically the same thing).
Does anyone know how the band intensity relates to the e-folding or absorption length?
Our best guess based on physical and dimensional arguments is that the e-folding length will go like $\gamma^{-1} \propto 1 / (B N \Delta\lambda)$ where $B$ is the band intensity with units of $\rm cm$, $N$ is the number density with units of $\rm cm^{-3}$, and $\Delta\lambda$ is the line width of the transition with units of $\rm cm$.
Answer: What the question refers to as "band intensity" is also referred to as "line strength" $S$. To calculate an absorption coefficient $k$ from $S$, a line shape function $f(\nu - \nu_0)$ is needed, where $\nu_0$ is the center of the line.
$$k = Sf(\nu - \nu_0)$$
Then "optical depth" = $ku$, where $u$ is called "path length" but is really a measure of the absorbing substance in the path.
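For the original question about the e-folding length, $k = Sf$ can be combined with the number density $N$ of absorbers: the intensity decays as $e^{-kNz}$, so $\gamma^{-1} = 1/(kN)$. Below is a sketch at line center with a unit-area Lorentzian line shape; apart from the line strength quoted in the question, the numbers are illustrative assumptions, not measured values:

```python
import math

S = 1.95e-22       # line strength from the question, cm (per molecule)
N = 5.0e18         # absorber number density, cm^-3 (assumed)
hwhm = 0.05        # Lorentzian half-width at half-maximum, cm^-1 (assumed)

# A unit-area Lorentzian peaks at 1/(pi * HWHM):
f0 = 1.0 / (math.pi * hwhm)   # cm (i.e. per cm^-1)

k0 = S * f0                   # peak absorption cross-section per molecule, cm^2
alpha = k0 * N                # absorption coefficient, cm^-1
efold = 1.0 / alpha           # e-folding length, cm
print(efold)
```

Multiplying alpha by a geometric path length $L$ reproduces the optical depth $ku$ of the answer, with $u = NL$.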
See pages 15 and 16 of this lecture for more information: http://irina.eas.gatech.edu/EAS8803_Fall2009/Lec5.pdf
and also: http://nit.colorado.edu/atoc5560/week4.pdf | {
"domain": "physics.stackexchange",
"id": 13346,
"tags": "electromagnetism, optics, atomic-physics, spectroscopy"
} |
Implementing StrStr() | Question: This is a naive implementation - I know that there are algorithms like KMP
However I was trying to implement it as best as I can.
string test = "giladdarmonwhatareyoudoing";
int index = StrStr(test, "are");
//Returns an index to the first occurrence of str2 in str1,
//or a -1 pointer if str2 is not part of str1.
public int StrStr(string test, string strToFind)
{
for (int i = 0; i < test.Length; i++)
{
if (test[i] == strToFind[0])
{
int j;
for (j = 0; j < strToFind.Length; j++)
{
if (test[i + j] != strToFind[j])
{
break;
}
}
if (j == strToFind.Length)
{
return i;
}
}
}
return -1;
}
Answer: Algorithm
Your implementation has some serious bugs and unexpected behaviour.
If passing String.Empty as second parameter, it throws an IndexOutOfRangeException
If passing the unique ending part of the first parameter appended by at least one character to the second parameter, it throws an IndexOutOfRangeException
string test = "giladdarmonwhatareyoudoing";
int index = StrStr(test, "ng1");
If you are passing null as either first or second parameter a NullReferenceException is thrown. Here it would be better to throw an ArgumentNullException by using a guard clause.
if (test == null) { throw new ArgumentNullException("test"); }
if (strToFind == null) { throw new ArgumentNullException("strToFind"); }
your implementation could be improved by
checking if strToFind.Length > test.Length
checking if test.Length - i > strToFind.Length
Naming
You shouldn't use Hungarian notation. Consider renaming strToFind to searchTerm or searchArgument.
Refactoring
After extracting the inner loop to a separate method, removing the now unneeded if (test[i] == strToFind[0]) and implementing the above we will get
public int StrStr(string value, string searchArgument)
{
if (value == null) { throw new ArgumentNullException("value"); }
if (searchArgument == null) { throw new ArgumentNullException("searchArgument"); }
if (searchArgument.Length == 0) { return 0; }
int searchLength = searchArgument.Length;
int length = value.Length;
if (searchLength > length) { return -1; }
for (int i = 0; i < length; i++)
{
if (length - i < searchLength) { return -1; }
if (IsMatchAtIndex(value, searchArgument, i)) { return i; }
}
return -1;
}
private bool IsMatchAtIndex(String value, String searchArgument, int startIndex)
{
for (int j = 0; j < searchArgument.Length; j++)
{
if (value[startIndex + j] != searchArgument[j])
{
return false;
}
}
return true;
}
I prefer the above, but you could also add
if (length - i < searchLength) { return -1; }
inverted as condition to the for loop like
for (int i = 0; i < length && length - i >= searchLength; i++)
{
if (IsMatchAtIndex(value, searchArgument, i)) { return i; }
} | {
"domain": "codereview.stackexchange",
"id": 11150,
"tags": "c#, strings"
} |
scikit-learn RandomForestClassifier always hits 100% test accuracy | Question: I have been playing with a toy problem to compare the performance and behavior of several scikit-learn classifiers.
In brief, I have one continuous variable X (which contains two samples of size N, each drawn from a distinct normal distribution) and a corresponding label y (either 0 or 1).
X is built as follows:
# Subpopulation 1
s1 = np.random.normal(mu1, sigma1, n1)
l1 = np.zeros(n1)
# Subpopulation 2
s2 = np.random.normal(mu2, sigma2, n2)
l2 = np.ones(n2)
# Merge the subpopulations
X = np.concatenate((s1, s2), axis=0).reshape(-1, 1)
y = np.concatenate((l1, l2))
n1, n2: number of data points in each sub-population;
mu1, sigma1, mu2, sigma2: mean and standard deviation of each population from which the sample is drawn.
I then split X and y into training and test set:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.25)
And then I fit a series of models, for instance:
from sklearn import svm
clf = svm.SVC()
# Fit
clf.fit(X_train, y_train)
or, alternatively (full list in the table at the end):
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier()
# Fit
rfc.fit(X_train, y_train)
For all models, I then calculate the accuracy on the training and the test sets. For this I implemented the following function:
def apply_model_and_calc_accuracies(model):
# Calculate accuracy on training set
y_train_hat = model.predict(X_train)
a_train = 100 * sum(y_train == y_train_hat) / y_train.shape[0]
# Calculate accuracy on test set
y_test_hat = model.predict(X_test)
a_test = 100 * sum(y_test == y_test_hat) / y_test.shape[0]
# Return accuracies
return a_train, a_test
I compare the algorithms by changing n1, n2, mu1, sigma1, mu2, sigma2 and checking the accuracies of the training and test sets. I initialize the classifiers with their default parameters.
To make a long story short, the Random Forest Classifier always scores 100% accuracy on the test set, no matter what parameters I set.
If, for instance, I test the following parameters:
n1 = n2 = 250
mu1 = mu2 = 7.0
sigma1 = sigma2 = 3.0,
I merge two completely overlapping subpopulations into X (they still have the correct label y associated to them). My expectation for this experiment is that the various classifiers should be completely guessing, and I would expect a test accuracy of around 50%.
In reality, this is what I get:
| Algorithm | Train Accuracy % | Test Accuracy % |
|----------------------------|------------------|-----------------|
| Support Vector Machines | 56.3 | 42.4 |
| Logistic Regression | 49.1 | 52.8 |
| Stochastic Gradient Descent | 50.1 | 50.4 |
| Gaussian Naive Bayes | 50.1 | 52.8 |
| Decision Tree | 100.0 | 51.2 |
| Random Forest | 100.0 | *100.0* |
| Multi-Layer Perceptron | 50.1 | 49.6 |
I don't understand how this is possible. The Random Forest classifier never sees the test set during training, and still classifies with 100% accuracy.
Thanks for any input!
Upon request, I paste my code here (with only two of the originally tested classifiers and less verbose outputs).
import numpy as np
import sklearn
import matplotlib.pyplot as plt
# Seed
np.random.seed(42)
# Subpopulation 1
n1 = 250
mu1 = 7.0
sigma1 = 3.0
s1 = np.random.normal(mu1, sigma1, n1)
l1 = np.zeros(n1)
# Subpopulation 2
n2 = 250
mu2 = 7.0
sigma2 = 3.0
s2 = np.random.normal(mu2, sigma2, n2)
l2 = np.ones(n2)
# Display the data
plt.plot(s1, np.zeros(n1), 'r.')
plt.plot(s2, np.ones(n2), 'b.')
# Merge the subpopulations
X = np.concatenate((s1, s2), axis=0).reshape(-1, 1)
y = np.concatenate((l1, l2))
# Split in training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.25)
print(f"Train set contains {X_train.shape[0]} elements; test set contains {X_test.shape[0]} elements.")
# Display the test data
X_test_0 = X_test[y_test == 0]
X_test_1 = X_test[y_test == 1]
plt.plot(X_test_0, np.zeros(X_test_0.shape[0]), 'r.')
plt.plot(X_test_1, np.ones(X_test_1.shape[0]), 'b.')
# Define a commodity function
def apply_model_and_calc_accuracies(model):
# Calculate accuracy on training set
y_train_hat = model.predict(X_train)
a_train = 100 * sum(y_train == y_train_hat) / y_train.shape[0]
# Calculate accuracy on test set
y_test_hat = model.predict(X_test)
a_test = 100 * sum(y_test == y_test_hat) / y_test.shape[0]
# Return accuracies
return a_train, a_test
# Classify
# Use Decision Tree
from sklearn import tree
dtc = tree.DecisionTreeClassifier()
# Fit
dtc.fit(X_train, y_train)
# Calculate accuracy on training and test set
a_train_dtc, a_test_dtc = apply_model_and_calc_accuracies(dtc)
# Report
print(f"Training accuracy = {a_train_dtc}%; test accuracy = {a_test_dtc}%")
# Use Random Forest
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier()
# Fit
rfc.fit(X, y)
# Calculate accuracy on training and test set
a_train_rfc, a_test_rfc = apply_model_and_calc_accuracies(rfc)
# Report
print(f"Training accuracy = {a_train_rfc}%; test accuracy = {a_test_rfc}%")
Answer: rfc.fit(X, y) should be rfc.fit(X_train, y_train)
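The effect is easy to reproduce without scikit-learn. The toy "model" below (my illustration, not the original code) memorizes every (x, y) pair it is fit on, which is effectively what an unconstrained forest can do; fitting on the full dataset and scoring on the test rows then gives perfect accuracy even though the labels are pure noise:

```python
import random

random.seed(0)

X = list(range(1000))
y = [random.randint(0, 1) for _ in X]       # labels are pure noise

X_train, y_train = X[:750], y[:750]
X_test, y_test = X[750:], y[750:]

# Bug: "fit" on ALL the data, like rfc.fit(X, y)
memory = dict(zip(X, y))
acc_buggy = sum(memory[x] == t for x, t in zip(X_test, y_test)) / len(X_test)

# Fix: fit on the training split only, like rfc.fit(X_train, y_train)
memory_ok = dict(zip(X_train, y_train))
acc_fixed = sum(memory_ok.get(x, 0) == t for x, t in zip(X_test, y_test)) / len(X_test)

print(acc_buggy, acc_fixed)   # 1.0 versus roughly 0.5
```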
You are simply memorizing the entire dataset with RandomForestClassifier. | {
"domain": "datascience.stackexchange",
"id": 7274,
"tags": "scikit-learn, random-forest, accuracy"
} |
$E^2 = (mc^2)^2 + (pc)^2$: What units are used to measure $E$, $m$, $c$ and $p$? | Question: \begin{equation}
E^2 = (mc^2)^2 + (pc)^2
\end{equation}
If I am using this equation to figure out the energy of something, what units would I use? Would it be the metric system? I.e. kilograms for $m$, meters per second for $p$, kilometers per second for $c$? And what units of measurement are used for $E$?
Answer: Any consistent system will do. That's the entire point of systems of units--if you stick to one, you don't need to worry about the units too much. And it never happens that a certain equation only works in a certain system*.
In this case, you would use joules ($\:\mathrm{J}\equiv\:\mathrm{kg\:m^2\:s^{-2}}$), the metric unit of energy. If you were using the cgs system, $m$ would be in grams, $p$ would be in $\:\mathrm{g\:cm\:s^{-1}}$, $c$ would be in centimetres per second, and $E$ would be in ergs ($\:\mathrm{erg}\equiv\:\mathrm{g\:cm^2\:s^{-2}}$).
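For instance, sticking to SI units throughout (a quick sketch of mine; the momentum value is an arbitrary illustration, the constants are the usual SI/CODATA values):

```python
import math

c = 2.99792458e8        # speed of light, m/s
m = 9.1093837015e-31    # electron rest mass, kg
p = 1.0e-22             # momentum, kg m/s (arbitrary illustrative value)

E = math.sqrt((m * c**2) ** 2 + (p * c) ** 2)   # energy in joules
print(m * c**2)   # rest energy, about 8.187e-14 J (0.511 MeV)
print(E)          # total energy, slightly larger because p > 0
```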
*Physical constants may change. Also, some equations have some constants set to one (e.g. Planck units, Gaussian units), so they may disappear entirely. For example, if $c=1$ (Planck units), the equation becomes $E^2=m^2+p^2$. | {
"domain": "physics.stackexchange",
"id": 5508,
"tags": "energy, units, si-units"
} |
Using the given identities, find the inverse DTFT | Question: Using the given identities,
$$ a^nu[n] \Longleftrightarrow \frac{1}{(1-ae^{-jw})}$$
and
$$\delta[n-k]\Longleftrightarrow e^{-jwk}$$
Find the inverse DTFT of,
$$ H(e^{jw}) = B\cdot\frac{e^{-jw}}{(1-ae^{-jw})}$$
my attempt:
$$h[n] = B\cdot\delta[n-1]a^nu[n]$$
It seems straightforward enough: just plug in the inverse DTFT. This is not correct though, as there is no delta to be found in the correct solution.
The correct answer is:
$$h[n] = Ba^{n-1}u[n-1]$$
It's like the delta disappeared and the delta convolution property was used.
$$x[n]*\delta[n-1] = x[n-1]$$
But using the identities above, I do not want to use convolution. I am confused why the delta disappeared and the signals it was multiplied with became shifted, any help in understanding how to get the correct answer using the identities would be appreciated!
Answer: As pointed out in the comments, it is important for you to know and understand that multiplication in one domain (time or frequency) corresponds to convolution in the other domain. And, as a consequence, multiplication with $e^{-j\omega}$ in the Fourier domain corresponds to a delay of one sample in the time domain (i.e., convolution with $\delta[n-1]$). So you have
$$\begin{align}\mathcal{F}^{-1}\left\{B\frac{e^{-j\omega}}{1-ae^{-j\omega}}\right\}[n]&=B\cdot\mathcal{F}^{-1}\left\{\frac{e^{-j\omega}}{1-ae^{-j\omega}}\right\}[n]\quad (\textrm{due to linearity of }\mathcal{F}^{-1})\\&=B\cdot\mathcal{F}^{-1}\left\{\frac{1}{1-ae^{-j\omega}}\right\}[n-1]\\&=B\cdot a^{n-1}u[n-1]\end{align}$$
where I used $\mathcal{F}^{-1}$ to denote the inverse discrete-time Fourier transform. | {
"domain": "dsp.stackexchange",
"id": 6681,
"tags": "fourier-transform, transform, fourier, dtft"
} |
How to make a mutable Range in Ruby? | Question: How can I make these methods better?
Range.class_eval do
def addto_begin(x)
return self.begin + x..self.end
end
def addto_end(x)
return self.begin..self.end+x
end
end
Right now, I have to type:
x = 1..10
x = x.addto_begin(3)
to change the begin of the range. What I want to be able to do is:
x = 1..10
x.addto_begin(3)
How do I do this?
Answer: Since ranges are immutable you would need to create your own range class that encapsulates the basic range object. This class will give you that mutability and still keep the methods associated with the basic range object.
class Myrange
attr_accessor :range
def initialize(a,b=nil,exc=false)
if a.is_a? Range
@range = a
else
@range = Range.new(a,b,exc)
end
end
def addto_begin(x)
Myrange.new(@range.begin + x..@range.end)
end
def addto_end(x)
Myrange.new(@range.begin..@range.end+x)
end
def addto_begin!(x)
@range = @range.begin + x..@range.end
Myrange.new(@range)
end
def addto_end!(x)
@range = @range.begin..@range.end+x
Myrange.new(@range)
end
def to_s
@range.to_s
end
def inspect
self.to_s
end
def method_missing(*args,&blk)
@range.send(*args,&blk)
end
end
x = Myrange.new(1,10)
x = x.addto_begin(3)
p x #=> 4..10
y = x.addto_begin!(3)
p x #=> 7..10
p y.addto_end(3) #=> 7..13
p x.include? 9 #=> true :: Range methods still work | {
"domain": "codereview.stackexchange",
"id": 1709,
"tags": "ruby"
} |
Compressive Sensing vs. Sparse Coding | Question: There apparently are different terminologies used to refer to the same field called "compressive sensing" such as (see this wiki page): compressed sensing, compressive sampling, or sparse sampling. I wonder about "sparse sensing" though!
Nonetheless, and after some internet search, what people refer to as "sparse coding" does not seem to refer to the "compressive sensing" field the way the other terminologies cited above do.
Is there really a difference between compressive sensing and sparse coding?
What about dictionary learning?
Answer: A couple of reference works offer an explanation:
A neurological interpretation described in Scholarpedia
Stanford's Unsupervised Feature Learning and Deep Learning tutorial
If we look at the definition of the term in the context of dictionary learning, for example in K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation, the term is defined:
Sparse coding is the process of computing the representation coefficients $\mathbf x$ based on the given signal $\mathbf y$ and the dictionary $\mathbf D$.
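As a toy illustration of that definition (my sketch, not from the paper): given a small dictionary $\mathbf D$, sparse coding looks for the representation of $\mathbf y$ that uses as few atoms as possible. Here a brute-force search over single atoms finds a 1-sparse code; real sparse coders (OMP, basis pursuit, etc.) scale the same idea to larger supports:

```python
def one_sparse_code(D, y, tol=1e-9):
    """Return x with a single nonzero entry such that D x == y, if one exists."""
    rows, cols = len(D), len(D[0])
    for j in range(cols):
        col = [D[r][j] for r in range(rows)]
        # best scalar coefficient for this atom (least squares)
        c = sum(a * b for a, b in zip(col, y)) / sum(a * a for a in col)
        if all(abs(c * col[r] - y[r]) < tol for r in range(rows)):
            x = [0.0] * cols
            x[j] = c
            return x
    return None   # y is not 1-sparse in this dictionary

D = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]   # three atoms in R^2 (an overcomplete dictionary)
y = [2.0, 2.0]

print(one_sparse_code(D, y))   # [0.0, 0.0, 2.0]: y = 2 * third atom
```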
So sparse coding is the operation of finding a sparse representation of a given signal in a given dictionary. In relation to compressed sensing this seems to me to be the most relevant interpretation of the term. As such, sparse coding is closely related to compressed sensing, but compressed sensing specifically deals with finding the sparsest solution to an under-determined set of linear equations which, as the theory shows, is the correct solution in this case with high probability. Sparse coding is then more general in the sense that it does not necessarily deal with an under-determined set of equations. | {
"domain": "dsp.stackexchange",
"id": 5643,
"tags": "sampling, compressive-sensing, compression, sparsity"
} |
Why can only moving metals (aluminium) reflect the magnetic field of a magnet? | Question: A maglev train rides because the magnets in the train are reflecting their magnetic field off the aluminium rail, so it is lifted up. Also, when you rotate an aluminium plate and hold a magnet above it, it will reflect its own magnetic field (north or south) and repel it.
In a way you can compare it with looking in a mirror: you can see yourself because light reflects off the mirror back to you. But in the case of a mirror you don't need any movement, so why is movement needed in the case of magnetic fields?
Answer: The point that you are missing is that light, which is a type of electromagnetic wave, consists of oscillating electric and magnetic fields. So your assertion is not correct. In the case of a metal, the light causes the free electrons to oscillate and reradiate what you call the reflected light.
I am not sure about the use of the term reflection in the context of the maglev. If you move the north pole of a magnet towards a metal plate, a force is produced on the free electrons, which then move, and so a current is produced. That induced (eddy) current produces the equivalent of a north pole, which repels the incoming north pole. This process generates heat because a current is flowing in the aluminium, which has resistance. If the aluminium is replaced by a superconductor then there is no energy loss, and engineers think that this may in the future be the method used for magnetic levitation.
"domain": "physics.stackexchange",
"id": 28452,
"tags": "electromagnetism"
} |
Permission denied /opt/ros/electric/stacks/nxt/learning_nxt | Question:
Using Ubuntu 10.04 Lucid and ROS Electric. I had successfully installed the NXT packages for ROS and tested them with my brick. Everything seemed to be going well. I'm trying to do the NXT tutorials:
Entered: roscd nxt
Brought me to: /opt/ros/electric/stacks/nxt
Entered: roscreate-pkg learning_nxt rospy nxt_ros
Returned:
Traceback (most recent call last):
File "/opt/ros/electric/ros/bin/roscreate-pkg", line 35, in <module>
roscreate.roscreatepkg.roscreatepkg_main()
File "/opt/ros/electric/ros/tools/roscreate/src/roscreate/roscreatepkg.py", line 125, in roscreatepkg_main
create_package(package, author_name(), depends, uses_roscpp=uses_roscpp, uses_rospy=uses_rospy)
File "/opt/ros/electric/ros/tools/roscreate/src/roscreate/roscreatepkg.py", line 63, in create_package
os.makedirs(p)
File "/usr/lib/python2.6/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/opt/ros/electric/stacks/nxt/learning_nxt'
Before this I had been using roscreate-pkg just fine with the regular ROS tutorials. Now that I'm trying to use it with the NXT packages, it seems to not work.
EDIT 1: In reply to Lorenz, thank you. That is one thing that has confused me about the differences between Fuerte and Electric. In the installing Electric tutorial page, overlays aren't mentioned and I feel like my bashrc isn't setup the way it should be, even though I followed the instructions verbatim from the install and configuration tutorials. The last three lines of my bashrc are currently (before applying Lorenz's update):
source /opt/ros/electric/setup.bash
export ROS_PACKAGE_PATH=~/ros_workspace:/opt/ros/electric/stacks
export ROS_WORKSPACE=~/ros_workspace
I also don't understand why roscreate-pkg works fine when I use it without NXT in mind.
EDIT 1.1: I ran all of Lorenz's suggested commands verbatim. After trying to run the command: rosws set ~/ros_overlay/sandbox
I get this:
Add element:
{'other': {'local-name': '/home/noah/ros_overlay/sandbox'}}
Continue(y/n): y
Overwriting /home/noah/ros_workspace/.rosinstall
Traceback (most recent call last):
File "/usr/local/bin/rosws", line 66, in <module>
sys.exit(rosinstall.rosws_cli.rosws_main(sys.argv))
File "/usr/local/lib/python2.6/dist-packages/rosinstall/rosws_cli.py", line 519, in rosws_main
return ws_commands[command](workspace, args)
File "/usr/local/lib/python2.6/dist-packages/rosinstall/multiproject_cli.py", line 318, in cmd_set
shutil.move(os.path.join(config.get_base_path(), self.config_filename), "%s.bak"%os.path.join(config.get_base_path(), self.config_filename))
File "/usr/lib/python2.6/shutil.py", line 264, in move
copy2(src, real_dst)
File "/usr/lib/python2.6/shutil.py", line 99, in copy2
copyfile(src, dst)
File "/usr/lib/python2.6/shutil.py", line 52, in copyfile
fsrc = open(src, 'rb')
IOError: [Errno 2] No such file or directory: '/home/noah/ros_workspace/.rosinstall'
Notice the last line.
EDIT 1.2: All of the commands ran fine up to: rosws set ~/ros_overlay/sandbox
The output of the following command is below: ls -l ~/ros_workspace
drwxr-xr-x 11 noah noah 4096 2012-06-25 09:18 beginner_tutorials
I found the .rosinstall in the ~/ros_overlay instead of ~/ros_workspace
Is there a way to move it or modify the command?
Also, I haven't changed anything since I first entered the rosws set command, but now there is a new error output for it:
ERROR: Ambiguous workspace: ROS_WORKSPACE=/home/noah/ros_workspace, /home/noah/ros_overlay/.rosinstall
EDIT 1.3: I feel like the issue isn't with my workspaces or whatnot. I feel like it is with the actual NXT Tutorial. The tutorial tells me to roscd to the NXT directory and then run the create package command. By me being in the NXT directory, /opt/ros/electric/stacks/nxt, won't running roscreate-pkg attempt to create my package in the current directory instead of my workspace? In the Creating Package tutorial, the user is told to roscd into their workspace before creating the package.
Originally posted by Nezteb on ROS Answers with karma: 32 on 2012-06-25
Post score: 0
Original comments
Comment by Lorenz on 2012-06-25:
That's interesting. The rosws init should actually create the .rosinstall file in ~/ros_workspace. Did you get any errors while executing rosws init? After executing it, what's the output of ls -l ~/ros_workspace?
Comment by dornhege on 2012-06-25:
re EDIT 1: Your .bashrc had a workspace setup correctly (for electric). The reason for the error is that you tried to create a package in the installed nxt stack instead of your home workspace. This is always bad and independent of electric or fuerte.
Comment by Lorenz on 2012-06-25:
As I said at the end of my question, you should call roscreate in ~/ros_overlay/sandbox. Ignore what the NXT tutorial or any other tutorial says about where to create packages :) and always use your sandbox dir. The NXT tutorial probably assumes an installation from source.
Answer:
You should never try to edit, create or change files in /opt/ros/.... Instead, create an overlay in your home directory. To create an overlay for electric, just execute the following commands:
sudo apt-get install python-pip
sudo pip install -U rosinstall
rosws init ~/ros_overlay /opt/ros/electric
mkdir ~/ros_overlay/sandbox
source ~/ros_overlay/setup.bash
rosws set ~/ros_overlay/sandbox
Now open your ~/.bashrc and change the line
source /opt/ros/electric/setup.bash
to
source ~/ros_overlay/setup.bash
Now close and re-open your terminal and execute the command roscreate-pkg in ~/ros_overlay/sandbox.
Originally posted by Lorenz with karma: 22731 on 2012-06-25
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Lorenz on 2012-06-25:
Ah. Sorry. I screwed up the source line. Edited my question to fix it. Of course you should source the setup.bash in ros_overlay, not in ros_workspace.
Comment by Lorenz on 2012-06-25:
There was another mistake in the commands above. I forgot that you have to source setup.bash before calling rosws set, instead your ROS_WORKSPACE variable is not set correctly. Sorry for that. | {
"domain": "robotics.stackexchange",
"id": 9939,
"tags": "ros, nxt, permission, ros-electric"
} |
In Geology - True Dip versus Apparent Dip: When dip is estimated along a line of section, how is that dip determined in relation to true dip? | Question: The map presents a summary example of topography and mapped units X and Y. From the information presented, can someone help me determine or estimate the actual dip along the line of section, A-A'? I am somewhat uncertain regarding the steps to be taken given the information presented on this example map. What, specifically, is the difference between true dip and apparent dip in regard to A-A'?
Answer: This is an excellent question. The answer requires that one visualize a 3-dimensional space with a dip plane, a vertical plane coincident with, and aligned along, our line of interest, and the trace that our line of interest will make on the dip plane where the vertical plane and dip plane intersect. These aspects are already projected onto a two-dimensional map. In this answer, true dip is taken as the dip measured or estimated perpendicular to the strike. Dip is always given a compass heading in the down direction of the dip. Apparent dip is any other dip measured or estimated, and similarly given a compass heading. Also note that rise and fall are somewhat synonymous terms regarding elevation change over a measured horizontal distance.
We can readily identify, on the provided geologic map, two marked geologic units, X and Y, topographic elevation contours at 400 ft and 300 ft, geologic-unit contacts or boundaries denoting the top of unit X, contact of unit X with unit Y (which is also the bottom of unit X, or the top of unit Y), and bottom of unit Y. Presumed units W above, and Z below, are unmarked. North is indicated, and a graduated 1000 ft scale is given, marked in 250 ft divisions. Also, the trace of a line of section, A - A', is shown on the map. This is our line of interest. Within the given context of the question, the dip of line A - A' that is desired is of the projection of line A - A' onto a formation contact; or equivalently for a vertical section along A - A', the apparent dip of the formations in that vertical projection. Otherwise, the true dip of line A - A' in the map plane is zero.
To make this exercise easier to do we need the following -
drafting straight edge
protractor
engineer's scale (to proportion our map scale for measured distance)
drafting pencil (2 or 2H)
vinyl eraser for goofs/corrections
a drafting brush helps keep the drafting surface clean
The explanation, herein, will use and explain measurements by hand so that the process will be clear. A piece of drafting vellum or tracing paper placed over our map is sometimes easier for sketching work and correcting errors or noting scratch calculations.
First, we have to determine the true dip and strike of these units by structure-contouring one of the contact surfaces. An obvious choice is to use the contact boundary between units X and Y. Notice that this contact crosses the 300 ft topographic elevation contour in 3 locations on our map. Consequently, the elevation of the contact is 300 ft at these locations. We can free-hand sketch a structure contour through these three points, but they are close enough to being along a straight line that we can use a straight edge to sketch-in a 300 ft contour line for the structure contour through these points. The alignment of this contour is along the strike of the contact between units X and Y. Take note of the compass alignment of this contour, and take note of the compass alignment of our line of interest, A - A'. Of particular interest is the noted angle between the compass headings of these lines.
The following were determined from the map: 1) alignment of A - A' is N 60 deg E (or S 60 deg W), and 2) alignment of 300 ft structure contour, N 75 deg E (or S 75 deg W). The angle between these lines is the difference between their headings, or 15 deg. The true dip of the X - Y contact is to the northwest. In other words, the true dip heading of the X - Y contact is perpendicular to the northeast - southwest alignment of the strike. The alignment of the strike is N 75 deg E (or S 75 deg W).
To continue, we note that the contact of interest crosses the 400 ft topographic elevation contour at 4 locations, all essentially in a linear alignment parallel to the 300 ft structure contour, and somewhat to the southeast. We, therefore, use our straight edge to again sketch-in the parallel alignment of the 400 ft-elevation structure contour. The dip of this contact surface is determined by the arctan(rise/run), wherein the rise is the difference in elevation between the contours, or 100 ft, and the run is determined from the map distance measured perpendicularly between 400 and 300 ft contours, or about 400 ft. Doing the math, the dip is determined as approximately 14 deg. Note the true dip is to the northwest, or 14 deg true dip directed, or aligned at, N 15 deg W. In other words, the map direction of the true dip is perpendicular to the strike, and by using the rise/run we have determined the true dip angle is approximately 14 deg.
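The rise-over-run arithmetic here, together with the 15 deg angle between the section line and the strike found above, can be checked quickly. This is just a numeric verification of the values read from the map (100 ft contour interval, ~400 ft map spacing):

```python
import math

rise = 100.0   # ft, contour interval between the 300 and 400 ft structure contours
run = 400.0    # ft, map distance measured perpendicular between the contours
beta = 15.0    # deg, angle between the line of section and the strike

# True dip, measured perpendicular to strike
true_dip = math.degrees(math.atan(rise / run))

# Apparent rise along the section line, then the apparent dip over the same run
apparent_rise = rise * math.sin(math.radians(beta))
apparent_dip = math.degrees(math.atan(apparent_rise / run))

print(round(true_dip, 1))        # -> 14.0 deg
print(round(apparent_rise))      # -> 26 ft
print(round(apparent_dip, 1))    # -> 3.7 deg
```

These reproduce the figures derived by hand in this answer.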
Second, we take note of some simple 3-dimensional aspects related to the dip and strike of our X - Y contact, and the map direction of our line of interest. We are interested in knowing the apparent dip traced by our line of interest on the X - Y contact surface. The line of strike is a horizontal line that has no dip, and we know the rise and run used to determine the contact dip. Essentially, any line of strike (such as a tangent to a structure contour) defines a level line. But the path of our line of interest is tilted against the contact surface. The dip of line A - A' projected on the contact surface is given by the intersection of a vertical plane containing A - A' with the contact surface. The slope of the line of interest along the contact plane is an apparent dip, the value of which we seek.
Let's examine some details. When looking at our map, we are looking perpendicular to a horizontal-plane projection of mapped features. Our line of section, A - A', is in a vertical plane that is perpendicular to our map plane. We have to visualize how this line of section is projected onto the contact surface. Obviously, the line of section on the contact surface is defined by the intersection of the vertical plane with the contact surface.
Imagine we are facing down-dip while standing on our line of interest where it crosses the 300 ft elevation structure contour of our contact surface. This would be just to the east of where our line of interest crosses the 300 ft topographic elevation contour; we are standing on our line of interest in the very basal section of formation X. To the left, our line of interest can be imaginarily seen tracing a path that is rising gently along the surface of the X - Y contact; to the right descending gently along the contact surface. We already have mapped information on the dip and strike that is useful: the map distance between the 300 and 400 ft structure contours of the contact surface is a run of 400 ft, consequently resulting in an elevation change (rise) of 100 ft. The apparent dip of our line of interest can be determined trigonometrically from the application of this information, and the angle made by the heading of the line of interest with the X - Y contact line of strike. Let's see how.
The line of true dip on our X - Y contact is aligned down-elevation on the X - Y contact plane perpendicular to the strike. From where we stand, we can scribe an imaginary circular arc of radius 400 ft on the map plane that crosses our line of interest and the line of strike defined by the 300 ft elevation structure contour of our contact surface. The radius of the circular arc is the distance of the run used in determining the dip. The angle measured between the line of interest and the line of strike of the X - Y contact surface is 15 deg. This was determined as the difference, in degrees, of their heading alignments. The apparent vertical rise (or fall) of our line of interest along the plane of the X - Y contact will be the sine of the measured angle between the map heading of the line of interest and strike heading of the X - Y contact (sin of 15 deg) times the known vertical rise (100 ft) of the contact true dip. This gives a vertical rise (or fall) of about 26 ft. Taken along a run of 400 ft, the vertical angle is the arctan(26/400), or about 3.7 deg. Along our line of interest, therefore, the magnitude and direction of this apparent dip is 3.7 deg aligned at N 60 deg E. Our map units, therefore, projected onto the vertical plane of the A - A' line of section, would show an apparent dip of about 3.7 deg. | {
"domain": "earthscience.stackexchange",
"id": 2451,
"tags": "geology, mapping, structural-geology"
} |
Generate Image with Artificial intelligence | Question: I am pretty new to Artificial Intelligence programming, however I do understand the basic concept.
I have an idea in my mind:
Import a JPEG Image,
Convert this Image into a 2D Array (x,y values + r g b values).
Then create a second array with the same (x,y) values, with rgb all set to 0,0,0.
Now I want to build an AI layer which will try to lower the error factor between the arrays until they are equal (the rgb values in the second array equal those in the first array, i.e. error factor 0).
I would prefer to do it in Java. Any suggestions for libraries or examples that can help me get started? Thanks for any help.
Answer: For recreating an image exactly the same as the original, you can use an autoencoder. This basically uses AI layers to encode the image's raw pixel values into a vector of floats, drastically shrinking the representation. Afterwards another AI layer expands the dimensions back to the original image. The method does not require labels, as it only refers to the image itself to encode it to a vector of features. For implementing this in Java, there are not a lot of resources. However, you can check this library out: https://deeplearning4j.org/
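As a language-agnostic sketch of the autoencoder idea (not DL4J; a tiny linear autoencoder trained by gradient descent on made-up low-rank data, with all sizes and names invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Made-up "images": 200 samples of 16 pixel values with true rank 4,
# so a 4-float bottleneck can in principle reconstruct them well
X = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 16)) / 4

n_hidden = 4                              # bottleneck: 16 values -> 4 floats
W_enc = rng.standard_normal((16, n_hidden)) * 0.1
W_dec = rng.standard_normal((n_hidden, 16)) * 0.1

initial_mse = np.mean((X @ W_enc @ W_dec - X) ** 2)

lr = 0.1
for _ in range(500):
    Z = X @ W_enc                         # encode: 16 -> 4
    X_hat = Z @ W_dec                     # decode: 4 -> 16
    err = X_hat - X
    # Plain gradient descent on the mean squared reconstruction error
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_mse = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(initial_mse, final_mse)             # reconstruction error shrinks with training
```

This is exactly the "lower the error factor between the two arrays" loop described in the question, with the bottleneck forcing the network to learn a compact encoding.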
For the implementation, see this: https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/unsupervised/variational/VariationalAutoEncoderExample.java
For the method, you can see this python tutorial and implement it in java. https://towardsdatascience.com/autoencoders-in-keras-c1f57b9a2fd7
For generating completely new images, you can try a GAN (Generative Adversarial Network). This generates completely new images from a random noise image. The noise image is passed through a generator, which is a CNN (convolutional neural network), to get a result image. The result image is then fed to a discriminator (a CNN as well) to classify whether that image is fake or real. The generator and discriminator compete and slowly get better. For a Java implementation, see this: https://github.com/wmeddie/dl4j-gans
Hope I can help you and have a nice day! | {
"domain": "ai.stackexchange",
"id": 1461,
"tags": "machine-learning, training, neurons, java"
} |
&lt;gazebo/gui/GuiPlugin.hh&gt; is missed? | Question:
When I built the source file from the GUI Overlay tutorial, there was a fatal error that <gazebo/gui/GuiPlugin.hh> is missing. So I looked into the include directory and found there was no such header file. I searched on the Internet and hardly got anything, so I am stuck here and need some help. I installed gazebo-4.0 from the source code a few days ago and am still not familiar with it. Did I miss something when I installed it? Or something else? Thanks.
Originally posted by BenWashburn on Gazebo Answers with karma: 7 on 2014-10-28
Post score: 0
Original comments
Comment by nkoenig on 2014-10-29:
Which source file? And, can you post the console output?
Comment by BenWashburn on 2014-10-29:
Thanks. https://bitbucket.org/osrf/gazebo/src/default/examples/plugins/gui_overlay_plugin_spawn/ Here's the link of the source code. After the cmake step, I met the following errors when make it--fatal error: gazebo/gui/GuiPlugin.hh: No such file or directory.-- In the path /usr/local/include/gazebo4.0/gazebo, there exist other header files like <gui.hh> and <Plugin.hh> but no <GuiPlugin.hh>. So I have no idea what has gone wrong.
Comment by BenWashburn on 2014-10-29:
Here's the total console output:
In file included from /home/ben/gazebo_gui_spawn/build/moc_GUIExampleSpawnWidget.cxx:9:0:
/home/ben/gazebo_gui_spawn/build/../GUIExampleSpawnWidget.hh:21:35: fatal error: gazebo/gui/GuiPlugin.hh: No such file and directory
#include <gazebo/gui/GuiPlugin.hh>
compilation terminated
make[2]: *** [CMakeFiles/gui_example_spawn_widget.dir/moc_GUIExampleSpawnWidget.cxx.o] error 1
make[1]: *** [CMakeFiles/gui_example_spawn_widget.dir/all] error 2
make: *** [all] error 2
Answer:
The GuiPlugin.hh header file is not present in gazebo4 and earlier. You must use the default branch (or gazebo_5.* or later branches once they are created).
Originally posted by scpeters with karma: 2861 on 2014-10-30
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by BenWashburn on 2014-10-30:
Thanks a lot. That's quite helpful.
Comment by mohammadkassemzein on 2018-11-09:
So how can I use "Overlay Gui" in earlier version of gazebo (e.g. gazebo2)
Comment by chapulina on 2018-11-09:
You can't, you must upgrade Gazebo. | {
"domain": "robotics.stackexchange",
"id": 3660,
"tags": "gazebo"
} |
Origin of position-momentum asymmetry in quantum mechanics | Question: Elementary quantum theory teaches there exists a symmetry between position space and momentum space - you are free to switch by Fourier transform between position eigenvectors or momentum eigenvectors to express the wave function of a particle.
In articles on decoherence I have read, but don't understand, that this symmetry is broken by environmental interaction and position becomes the preferred basis (or the preferred observable is the position of the system).
Have I understood this correctly? If so is there a simple way to understand the origin of this asymmetry between position and momentum (or an SE post where this has already been discussed at a simple level)?
Answer: The asymmetry arises from the measurement apparatus. Which basis is chosen depends on what kind of environmental interaction you have.
In general this is formalised using the von-Neumann measurement scheme. It describes how a pointer gets entangled with the state variable, where the pointer is an approximately classical object.
The Hamiltonian that causes this entanglement to occur then determines which basis of the quantum system can be distinguished using the measurement apparatus consisting of the pointer and the interaction (e.g. the screen and the magnet respectively, in the Stern-Gerlach experiment, where the spin states become entangled with the position states on the screen).
Note that this only describes how the system and the pointer entangle, i.e. how correlations between them occur. From the comments below this answer I conclude that the OP's question is actually about how decoherence happens dynamically. The compulsory reference on this is Nieuwenhuizen et al., where they solve models that can describe real measurement processes. Please note that this does not have to yield the position basis as the preferred one, in fact in the particular Curie-Weiss model that is solved in the paper it is the spin basis again (simply because spin is easy to deal with). | {
"domain": "physics.stackexchange",
"id": 33116,
"tags": "quantum-mechanics, momentum, hilbert-space, fourier-transform, decoherence"
} |
Will it produce an alternating magnetic field with respect to time, if I hybridize a soft magnet with self-decaying (radioactive) material? | Question: Radioactive material emits electromagnetic waves (gamma rays).
Soft magnets have low magnetic coercivity, right?
Suppose I hybridize a very soft magnet alongside a strong radioactive material; will it produce an alternating magnetic field?
If yes, how strong will it be?
Answer: Short answer, No.
Longer: gamma rays are of MeV energies. The magnetization of the material is a solid state phenomenon of order of electron volts. The gamma will go right through the material, or interact with a nucleus. It will not see the long wavelength structure of the magnetic field.
In addition, gammas will be emitted randomly and in random directions, so it would not work even if the previous paragraph were not true. | {
"domain": "physics.stackexchange",
"id": 25033,
"tags": "electromagnetism, electromagnetic-radiation, electromagnetic-induction"
} |
How to calculate the frequency of a signal without knowing the sampling frequency using Matlab | Question: I have a signal and I am using the Matlab command pwelch to calculate the frequency of the signal, but the frequency I obtain changes as I change the sampling frequency.
For example, when using a sampling frequency of 8000 [samples/sec], the frequency appears to be 1 kHz, while using a 16000 [samples/sec] sampling frequency the frequency of the signal appears to be 2 kHz.
Which is the correct frequency? And is there another method to calculate the frequency of a signal without prior knowledge of the sampling frequency?
Answer: If you do not have the information about the sampling frequency $F_s$ of your digital data, the best option is to talk about dimensionless relative frequencies $f$, or reduced frequency. The frequencies you observe on periodograms will be $f = F/F_s$, where $F$ would be the true frequency (hoping you have no aliasing). This amounts to saying that your sampling period is $1$ (dimensionless) and that the maximum observable frequency in your signal is $1/2$.
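A small numeric illustration of this point (pure numpy rather than pwelch; the tone and lengths are made up): the samples themselves only determine the dimensionless peak frequency, and a value in Hz appears only once an $F_s$ is assumed.

```python
import numpy as np

# A tone at relative frequency f = F/Fs = 0.125 cycles/sample
# (e.g. 1 kHz sampled at 8 kHz, or equally 2 kHz sampled at 16 kHz)
n = np.arange(1024)
x = np.sin(2 * np.pi * 0.125 * n)

# Locate the spectral peak in FFT bins
spectrum = np.abs(np.fft.rfft(x))
k_peak = int(np.argmax(spectrum))
f_rel = k_peak / len(x)               # dimensionless reduced frequency

print(f_rel)                          # 0.125, regardless of any assumed Fs
print(f_rel * 8000, f_rel * 16000)    # 1000.0 Hz vs 2000.0 Hz
```

The same samples map to 1 kHz or 2 kHz depending only on the assumed sampling rate, which is exactly the behaviour described in the question.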
However, it is likely that the actual sampling frequency can be obtained from the data file, the experiment or sensor, a recorded phenomenon with known frequency (like the 50 or 60 Hz power) or the person who gave you that signal. | {
"domain": "dsp.stackexchange",
"id": 2971,
"tags": "matlab, frequency"
} |
Electromagnetic tensor in cylindrical coordinates from scratch | Question: I want to calculate the electromagnetic tensor components in cylindrical coordinates. Suppose I did not know that those components are given in Cartesian coordinates by
$$(F^{\mu \nu})=
\begin{pmatrix}
0 & E_x & E_y & E_z \\
-E_x & 0 & B_z & -B_y \\
-E_y & -B_z & 0 & B_x \\
-E_z & B_y & -B_x & 0
\end{pmatrix}.$$
I want to derive the result in the same manner I did in the Cartesian coordinates case, i.e., using that $F^{ \mu \nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$, where $A^\alpha=(V,\vec{A})$, $\vec{B} = \nabla \times \vec{A}$ and $\vec{E} = -\nabla V - \partial \vec{A} / \partial t$. Using the formulas for curl and gradient in cylindrical coordinates, we find
$$
\vec{E} = - \left( \frac{\partial V}{\partial r} + \frac{\partial A_r}{\partial t} \right)\hat{r} \
- \left( \frac{1}{r}\frac{\partial V}{\partial \phi} + \frac{\partial A_\phi}{\partial t} \right)\hat{\phi}
- \left( \frac{\partial V}{\partial z} + \frac{\partial A_z}{\partial t} \right)\hat{z}
$$
and
$$
\vec{B} = \left( \frac{1}{r}\frac{\partial A_z}{\partial \phi} - \frac{\partial A_\phi}{\partial z} \right)\hat{r} \
+\left(\frac{\partial A_r}{\partial z} - \frac{\partial A_z}{\partial r} \right)\hat{\phi} \
+\frac{1}{r}\left(\frac{\partial (r A_\phi)}{\partial r} - \frac{\partial A_r}{\partial \phi} \right)\hat{z}. \
$$
The invariant interval is given by $ds^2 = -dt^2 + dr^2 + r^2 d\phi^2 + dz^2$, (with $c=1$). Therefore, the metric tensor reads
$$(g_{\mu \nu})=
\begin{pmatrix}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & r^2 & 0\\
0 & 0 & 0 & 1
\end{pmatrix},$$
and its inverse is
$$(g^{\mu \nu})=
\begin{pmatrix}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1/r^2 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}.$$
Which implies $\partial^0 = -\partial_0$, $\partial^1 = \partial_1$, $\partial^2 = \frac{1}{r^2}\partial_2$ and $\partial^3 = \partial_3$.
So, for example,
$$
F^{ 01} = \partial^0 A^1 - \partial^1 A^0 = -\partial_0 A^1 - \partial_1 A^0 = -\frac{\partial A_r}{\partial t}-\frac{\partial V}{\partial r} = E_r,
$$
which is reassuring.
Now,
$$
F^{02} = \partial^0 A^2 - \partial^2 A^0 = -\partial_0 A^2 - \frac{1}{r^2}\partial_2 A^0 = -\frac{\partial A_\phi}{\partial t}-\frac{1}{r^2}\frac{\partial V}{\partial \phi}.
$$
However, I cannot identify this quantity with any component of the electric field. This last expression looks almost like $E_\phi$, except for an extra $\frac{1}{r}$ multiplying $\partial V / \partial \phi$. What went wrong here?
Answer: The problem is that there is a mismatch between the vector basis that you are using to write the 4-vector potential and the one you are using for your metric, whose basis vectors are not of unit length. The standard (orthonormal) cylindrical basis should give a Minkowski metric, since we don't really have curvature in this case. The only difference is then a $1/r$ factor in the $\phi$ component.
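This bookkeeping can be checked symbolically. The sketch below (sympy, following the question's own conventions) substitutes the coordinate-basis component $A^2 = A_\phi/r$ and confirms $F^{02} = E_\phi/r$:

```python
import sympy as sp

t, r, phi, z = sp.symbols('t r phi z')
V = sp.Function('V')(t, r, phi, z)
A_phi = sp.Function('A_phi')(t, r, phi, z)   # physical (unit-basis) component

# Contravariant coordinate-basis component: A^2 = A_phi / r
A2 = A_phi / r

# F^{02} = -d_t A^2 - (1/r^2) d_phi A^0, with A^0 = V
F02 = -sp.diff(A2, t) - sp.diff(V, phi) / r**2

# Physical field component from the question's gradient formula
E_phi = -(sp.diff(V, phi) / r + sp.diff(A_phi, t))

# The mismatch disappears once the basis is handled consistently
print(sp.simplify(F02 - E_phi / r))          # 0
```

With the replacement $A^2 \rightarrow A_\phi/r$ the extra $1/r$ is accounted for, consistent with the non-unit coordinate basis.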
Therefore, in order to be consistent, you need to replace $A^2 \rightarrow \frac{A_{\phi}}{r}$. Then, you will see that $F^{02}=\frac{E_{\phi}}{r}$, which, again, is consistent with your metric. | {
"domain": "physics.stackexchange",
"id": 66605,
"tags": "electromagnetism, special-relativity, metric-tensor"
} |
nav2d Exploration failed with hectormapping | Question:
Hello,everyone
In the tutorial3.launch, I remove the mapper node and use hectormapping instead.
After roslaunch tutorial3.launch, in a new terminal, I type "rosservice call /StartMapping 3" and press enter, and the robot can run. Then I type "rosservice call /StartExploration 2" and press enter; the terminal in which I ran roslaunch tutorial3.launch shows these messages:
Is the robot out of the map?
Exploration failed, could not get current position.
I added some logging in the code, finding that mCurrentMap.getIndex(current_x, current_y, i)=false, mHasNewMap=false, getMap()=false.
here is my launch files,
tutorial3.launch:
<!-- Start Stage simulator with a given environment -->
<node name="Stage" pkg="stage_ros" type="stageros" args="$(find nav2d_tutorials)/world/tutorial.world">
<param name="base_watchdog_timeout" value="0" />
</node>
<!-- Start the Operator to control the simulated robot -->
<node name="Operator" pkg="nav2d_operator" type="operator" >
<remap from="scan" to="base_scan"/>
<rosparam file="$(find nav2d_tutorials)/param/operator.yaml"/>
<rosparam file="$(find nav2d_tutorials)/param/costmap.yaml" ns="local_map" />
</node>
<node pkg="hector_mapping" type="hector_mapping" name="hector_height_mapping" output="screen">
<param name="scan_topic" value="base_scan" />
<param name="base_frame" value="base_link" />
<param name="odom_frame" value="odom" />
<param name="map_frame" value="map"/>
<param name="laser_frame" value="base_laser_link"/>
<param name="output_timing" value="false"/>
<param name="map_pub_period" value="0.5"/>
<param name="update_factor_free" value="0.45"/>
<param name="map_update_distance_thresh" value="0.02"/>
<param name="map_update_angle_thresh" value="0.1"/>
<param name="map_resolution" value="0.05"/>
<param name="map_size" value="1024"/>
<param name="map_start_x" value="0.5"/>
<param name="map_start_y" value="0.5"/>
<remap from="map" to="map"/>
</node>
<!-- Start the Navigator to move the robot autonomously -->
<node name="Navigator" pkg="nav2d_navigator" type="navigator">
<rosparam file="$(find nav2d_tutorials)/param/navigator.yaml"/>
</node>
<node name="GetMap" pkg="nav2d_navigator" type="get_map_client" />
<node name="Explore" pkg="nav2d_navigator" type="explore_client" />
<node name="SetGoal" pkg="nav2d_navigator" type="set_goal_client" />
<!-- Pioneer model for fancy visualization -->
<!-- Comment this out if you do not have the package 'p2os' available! -->
<include file="$(find p2os_urdf)/launch/pioneer3at_urdf.launch" />
<node name="front_left_wheel" pkg="tf" type="static_transform_publisher" args="0 0 0 0 0 0 p3at_front_left_hub p3at_front_left_wheel 100" />
<node name="front_right_wheel" pkg="tf" type="static_transform_publisher" args="0 0 0 0 0 0 p3at_front_right_hub p3at_front_right_wheel 100" />
<node name="back_left_wheel" pkg="tf" type="static_transform_publisher" args="0 0 0 0 0 0 p3at_back_left_hub p3at_back_left_wheel 100" />
<node name="back_right_wheel" pkg="tf" type="static_transform_publisher" args="0 0 0 0 0 0 p3at_back_right_hub p3at_back_right_wheel 100" />
<!-- RVIZ to view the visualization -->
<node name="RVIZ" pkg="rviz" type="rviz" args=" -d $(find nav2d_tutorials)/param/tutorial3.rviz" />
Thank you!!
Originally posted by ROSYHB on ROS Answers with karma: 3 on 2017-02-13
Post score: 0
Answer:
The navigator gets the map via a service call. If this service is named differently in hector_mapping, you have to set the navigator's map_service parameter accordingly. The same goes for frame names that are published by hector_mapping.
Originally posted by Sebastian Kasperski with karma: 1658 on 2017-02-15
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by ROSYHB on 2017-02-18:
Thanks for your reply.I've found that navigator can get map via a service call after I change the map_service to "dynamic_map" in ros.yaml. | {
"domain": "robotics.stackexchange",
"id": 27001,
"tags": "ros, nav2d, hector-slam"
} |
Does spacetime interval at event horizon become light-like for all trajectories? | Question: I have been reading up on a few papers against the black hole paradigm, specifically ECO, and I came across the argument raised by them that at r = 2GM, ds does indeed vanish. Is this correct? Please provide a clarification or point me to some paper regarding this. Thanks.
Answer:
Does spacetime interval at event horizon become light-like for all
trajectories?
No. If a geodesic starts time-like, it will stay so. The same goes for light-like and space-like geodesics. But only light-like geodesics can stay at the horizon (if they are directed radially outwards), so the only objects that can keep a constant radial coordinate there are photons. But you will never be as fast or faster than a photon in your vicinity, so if you send a light signal you will never be able to keep up with or overtake it, neither outside nor inside the horizon.
Please provide a clarification or point me to some paper regarding
this.
In this paper for example we find the quote "a timelike geodesic stays timelike and similarly for the spacelike and null geodesics", for more information see page 7. | {
"domain": "physics.stackexchange",
"id": 64612,
"tags": "general-relativity, gravity, black-holes, spacetime, event-horizon"
} |
Why does the Superheterodyne AM Receiver use an envelope detector? | Question: I'm currently learning about the Superheterodyne AM Receiver and I don't understand why does it use an envelope detector (as suggested here) rather than taking the signal back to its baseband frequency through a local oscillator. That seems more natural to me because it's already using one to take the signal to the intermediate frequency.
Answer: The AM receiver could indeed use a second local oscillator to mix down to baseband, and this would work fine if there were no frequency offset between the transmitter and receiver. Otherwise, the baseband signal would carry not only the amplitude variation of the AM but also a phase rotation given by the frequency offset and relative delay (consider a signal that happened to be received with a 90-degree phase offset: the result out of a single real mixer, after low-pass filtering, would be 0!). So the receiver would require two additional mixers driven in quadrature to be able to determine and remove these phase and frequency offsets. The phase and frequency offsets must be recovered and removed to intelligibly recover the desired amplitude variation; this is known as coherent demodulation. This can be and often is done, at an increase in complexity but also an increase in performance, with a higher SNR achievable.
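To make the contrast concrete, here is a minimal sketch of the envelope-detector path in plain Python (the parameters, a 10 kHz carrier with a 200 Hz tone at 50% modulation depth, are made up for illustration): rectify, then low-pass by averaging over whole carrier cycles. Note that no phase or frequency recovery is needed anywhere.

```python
import math

fs = 100_000                      # sample rate (Hz)
fc = 10_000                       # carrier frequency (Hz)
fm = 200                          # message tone (Hz)
n = 5_000                         # 50 ms of signal

t = [i / fs for i in range(n)]
message = [0.5 * math.sin(2 * math.pi * fm * ti) for ti in t]
# Standard AM: carrier scaled by (1 + m(t)), modulation depth 50%
am = [(1 + m) * math.cos(2 * math.pi * fc * ti) for m, ti in zip(message, t)]

# Envelope detector: rectify, then average over exactly two carrier cycles
rectified = [abs(x) for x in am]
win = 2 * fs // fc                # 20 samples = 2 carrier periods
envelope = []
for i in range(n):
    lo, hi = max(0, i - win // 2), min(n, i + win // 2)
    envelope.append(sum(rectified[lo:hi]) / (hi - lo))

# The full-cycle average of |cos| is about 2/pi, so rescale and drop the DC term
recovered = [e * math.pi / 2 - 1 for e in envelope]
```

A coherent receiver would instead need I and Q mixers plus a phase-tracking loop before the message could be read off; the point of the envelope detector is that none of that machinery appears above.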
Such a frequency offset is inevitable, since the transmitter and receiver run on different reference clocks that cannot possibly be at exactly the same frequency, and the difference can be significant for low-cost hardware subject to temperature variations etc. If there is motion between transmitter and receiver, this adds a further Doppler offset. For software radios, where the translation from IF to baseband together with carrier recovery can be done digitally, coherent demodulation would likely be the favored approach, for the flexibility the implementation provides (and for all the other functions that motivated going down this path to begin with; certainly not for a fixed AM receiver). For a radio that only needs to demodulate the AM, a simple envelope (power) detector on the IF output (or, depending on the carrier frequency, directly on the RF signal) is far simpler and therefore a lower-cost, smaller and lower-power solution. | {
"domain": "dsp.stackexchange",
"id": 10670,
"tags": "modulation, frequency-response, amplitude-modulation"
} |
Why can't we directly measure internal energy and entropy? | Question: I learned about entropy in chemistry, I saw that we can measure $\Delta H$, but can't directly measure the $H$. So I wondered: Why can’t we directly measure $H$? But I couldn’t find the exact reason.
I found that entropy is related to internal energy. So I searched about internal energy, and I could find a simple reason for why we can’t. We can’t directly measure $H$ or internal energy because when we try to measure the internal energy, the internal energy changes. When I found this, I thought that it may be related to the uncertainty principle.
So, why we can’t directly measure internal energy or entropy. And is it related to the uncertainty principle?
Answer: In general it is difficult to quantify the absolute values of diverse thermodynamical potentials of matter due to their complicated structure, however, for simple-structured matter it is possible. The best example for this is the classical (not quantum!) ideal gas where the internal energy $U$ and enthalpy $H$ are given by
$$ U = c_V N k_\text{B} T \quad \text{and} \quad H = c_p N k_\text{B} T$$
where $c_V$ and $c_p$ are the dimensionless specific heat capacities at constant volume and at constant pressure, respectively (BTW: $c_p = c_V + 1$ for an ideal gas). Furthermore, $N$ is the number of particles and $k_\text{B}$ the Boltzmann constant. The formula for the total entropy is a bit more complicated and holds only for a monatomic ideal gas:
$$ S = Nk_\text{B} \left( \ln \left(\frac{V}{N\lambda^3}\right) + \frac{5}{2} \right) $$
where $\lambda = \frac{h}{\sqrt{2\pi m k_\text{B} T}}$. (Here, we indeed get a quantum-mechanical constant. This is related to the statistical meaning of entropy, which is proportional to the logarithm of the number of micro-states possible for a single macroscopic state. Through counting the micro-states, quantum mechanics comes in.)
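For the monatomic case the absolute entropy can actually be evaluated numerically. A small sketch in Python (the helium data and the comparison against the tabulated value of roughly 126 J/(mol K) at room temperature are my additions, not from the text above):

```python
import math

# CODATA constants (SI)
h = 6.62607015e-34        # Planck constant (J s)
kB = 1.380649e-23         # Boltzmann constant (J/K)
NA = 6.02214076e23        # Avogadro constant (1/mol)

def sackur_tetrode_molar(m, T, p):
    """Molar entropy of a monatomic ideal gas in J/(mol K)."""
    lam = h / math.sqrt(2 * math.pi * m * kB * T)   # thermal de Broglie wavelength
    v_per_particle = kB * T / p                     # V/N from the ideal gas law
    return NA * kB * (math.log(v_per_particle / lam ** 3) + 2.5)

m_He = 4.0026e-3 / NA     # mass of one helium atom (kg)
S = sackur_tetrode_molar(m_He, 300.0, 101325.0)    # roughly 126 J/(mol K)
```

The result for helium at 300 K and 1 atm comes out close to the tabulated standard molar entropy, which illustrates that for simple-structured matter the absolute value really is computable.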
The question of the uncertainty of thermodynamic potentials is not related to Heisenberg's uncertainty principle. It is related to thermodynamic fluctuations. Like any kind of matter, an ideal gas is in permanent energy exchange with the outer world, so the formulas given above are actually mean values; the actual values fluctuate. However, due to the large number of particles contained in a macroscopic amount of matter, these fluctuations are very small relative to the corresponding total macroscopic values. So if a limited number of gas particles (for instance, those close to the wall of the container of the ideal gas) exchange energy with the outer world, positive or negative, this does not substantially change the total amount of internal energy, and the mean value (taken over a time scale typical for macroscopic physics) remains constant.
Last note: All given formulas are to be understood for equilibrium thermodynamic states. | {
"domain": "physics.stackexchange",
"id": 67007,
"tags": "thermodynamics, energy, heisenberg-uncertainty-principle"
} |
Do photons violate the uncertainty principle, given that they have a constant speed $c$ with no uncertainty? | Question: I have a very basic understanding of quantum physics, but as I understand it the uncertainty principle says that the more precisely you know a particle momentum and the less you know the particle's position.
But I wonder with the photon: given that the velocity is a constant $c$ so there is no uncertainty at all in the speed (and so in the momentum), does that mean for a photon that the uncertainty of the position is "infinite"?
Answer: As explained in If photons have no mass, how can they have momentum?, it is impossible to assign photons a classical momentum $p=mv$, because their mass is zero.
Instead, the photon momentum is determined by its wavelength $\lambda$ via
$$
p = \frac h\lambda,
$$
where $h$ is Planck's constant. This means that the only way to have a completely determined momentum (i.e. $\Delta p=0$) is to have a completely determined wavelength, and that can only happen if the wavepacket has infinite extent (because, if it doesn't, what is the wavelength at the edge of the wavepacket?). Thus, the photon momentum is fully compatible with the Heisenberg uncertainty principle. | {
"domain": "physics.stackexchange",
"id": 51005,
"tags": "quantum-mechanics, photons, heisenberg-uncertainty-principle"
} |
Is the exponent in floating point numbers signed or unsigned | Question: This may be sound like a stupid question, because, of course, it can be negative and, as Wikipedia states:
A numeric variable is signed if it can represent both positive and negative numbers, and unsigned if it can only represent non-negative numbers (zero or positive numbers).
But is it really? Because it doesn't work like a normal signed integer: it doesn't really have a sign; it seems to me it's just an unsigned integer starting at a negative point. I guess the first bit could indicate whether it's negative xor positive/zero.
For people who don't remember CS class:
exponent of -127: 0000 0000
exponent of zero: 0111 1111
exponent of one: 1000 0000
exponent of 128: 1111 1111
Answer: This is mostly a matter of interpretation. For IEEE 754 floating point numbers (one of the most common implementations), an exponent bias is used:
In IEEE 754 floating point numbers, the exponent is biased in the engineering sense of the word – the value stored is offset from the actual value by the exponent bias. Biasing is done because exponents have to be signed values in order to be able to represent both tiny and huge values, but two's complement, the usual representation for signed values, would make comparison harder.
To solve this problem the exponent is biased before being stored, by adjusting its value to put it within an unsigned range suitable for comparison.
For single-precision floating point numbers, the exponent field has 8 bits; the effective exponent ranges from -126 to 127, while the internal (stored) exponent ranges from 1 to 254, meaning that 127 has been added to the effective exponent.
(This means it is valid to say that the exponents of floating point numbers are stored as unsigned integers, even though this is a matter of interpretation.)
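The biased storage can be inspected directly by reinterpreting a float's bits; a short illustrative sketch in Python (the helper name is mine):

```python
import struct

def float_fields(x):
    """Split a single-precision float into (sign, stored exponent, mantissa)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    stored_exp = (bits >> 23) & 0xFF   # biased: effective exponent + 127
    mantissa = bits & 0x7FFFFF
    return sign, stored_exp, mantissa

# 1.0 = 1.0 * 2^0, so the effective exponent 0 is stored as 0 + 127 = 127
# 0.5 = 1.0 * 2^-1, stored as -1 + 127 = 126
```

Stored exponent 255 (all ones) is reserved for infinities and NaNs, and 0 (all zeros) for zeros and denormals, which is exactly the pair of special exponents the next paragraph describes.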
In addition to the reasons outlined above, another good reason for this encoding is that two special exponents are required for floating point encoding according to IEEE 754: Floating point numbers can also store the values Infinity, NaN and denormalized values which would be too small to store them in the usual format.
The special constants used are the internal exponents $0$ and $2^r-1$ (where $r$ is the number of bits of the exponent), so in binary $0...0$ and $1...1$. These values are easy to recognize. Note that these exponents belong to effective exponents that are barely ever used. If we had used the internal exponent $0$ without shifting the range, it would be the same as the effective exponent $0$, making it difficult to encode common number ranges. | {
"domain": "cs.stackexchange",
"id": 8448,
"tags": "floating-point"
} |
The temperature a liquid would boil: question incorrectly formulated or not? | Question: I have met a question in a high school physics book which I think is incorrectly formulated.
The question is this: In order to reach boiling temperature, a certain liquid requires twice the amount of energy compared to water. At what temperature does it boil?
The book says the answer is 200 Celsius.
It seems wrong to me because it does not mention the specific heat of the liquid. Knowing only the amount of heat provided is not enough to know the temperature it will reach, I think.
Am I wrong or is the book wrong?
Answer: The book is wrong in the sense that it has made a lot of assumptions without actually accounting for all of them. So you are right that the question has incomplete information.
The inaccuracies in the question are as follows:
In such questions, you must refer to the principle of calorimetry.
$$m_1s_1t_1=H=m_2s_2t_2$$ where $H=$heat absorbed by the body, $m=$mass of that body, $s=$specific heat of the body, $t=$change in temperature of body and the indices $1,2$ refer to the two bodies in contact
The question does not mention the mass of the liquid, whether it is the same as that of water or anything else.
In a similar manner, the question does not mention the specific heat of the liquid, whether it is the same as that of water or not.
Also no mention of initial temperature of liquid.
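To make the hidden assumptions explicit, here is a small sketch in Python (the ratio-based formulation and parameter names are mine, not the book's), solving $Q = m s \Delta t$ for the final temperature:

```python
def boiling_temp(energy_ratio, s_ratio=1.0, m_ratio=1.0, T0=0.0, T_water_boil=100.0):
    """Boiling temperature of the liquid, from Q = m * s * (T_b - T0).

    energy_ratio: heat needed relative to heating the water
    s_ratio, m_ratio: specific heat and mass relative to the water
    T0: common initial temperature (degrees C)
    """
    # Q_liquid = energy_ratio * Q_water
    # m_ratio * s_ratio * (T_b - T0) = energy_ratio * (T_water_boil - T0)
    return T0 + energy_ratio * (T_water_boil - T0) / (s_ratio * m_ratio)

# Only with equal mass, equal specific heat and T0 = 0 degrees C does twice
# the energy give the book's 200 degrees C:
book_answer = boiling_temp(2.0)
# With, say, half the specific heat, the same heat ratio gives 400 degrees C:
other_liquid = boiling_temp(2.0, s_ratio=0.5)
```

The answer only comes out as 200 °C when every ratio is silently set to 1 and the initial temperature to 0 °C, which is exactly the missing information complained about above.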
So the question is quite vague. | {
"domain": "physics.stackexchange",
"id": 27892,
"tags": "thermodynamics, temperature, phase-transition"
} |
How to trigger a stereo camera on Gazebo Ignition with ROS2? | Question:
Hello,
I'd like to trigger a stereo acquisition from ROS Galactic with Gazebo Ignition (Fortress).
I don't find any documentation. Could someone provide me an example ?
I've found https://github.com/ros-simulation/gazebo_ros_pkgs/blob/galactic/gazebo_plugins/include/gazebo_plugins/gazebo_ros_camera.hpp but I'm not sure if those plugins are for ignition or gazebo classic...
Thanks in advance
Update
Ignition plugins are here: https://github.com/gazebosim/gz-sim/. Sensors have been fused into a single Sensor.cc.
To create a stereo camera you need two sensor tags with type "camera", as described here: https://github.com/gazebosim/gz-sim/blob/3b6c0f2a0bc71c5ff391852615609deec3c03674/src/systems/sensors/Sensors.cc#L739-L770
The Load() function parses the tag: https://github.com/gazebosim/sdformat/blob/a51a76edd3b5d423b24034c4288fcda1d122c700/src/Camera.cc#L201-L404
To trigger cameras, you need to set the <triggered> tag to true and give a <trigger_topic>.
In the end, my urdf looks like this:
<sensor type="camera" name="right">
<update_rate>10.0</update_rate>
<always_on>true</always_on>
<ignition_frame_id>camera_link_optical</ignition_frame_id>
<pose>0 -0.12 0 0 0 0</pose>
<topic>/model/mmx/stereo_camera/right/image_raw</topic>
<camera name="right">
<triggered>true</triggered>
<trigger_topic>/trigger</trigger_topic>
<horizontal_fov>1.3962634</horizontal_fov>
<image>
<width>600</width>
<height>600</height>
<format>R8G8B8</format>
</image>
<clip>
<near>0.02</near>
<far>300</far>
</clip>
</camera>
</sensor>
<sensor type="camera" name="left">
<topic>/model/mmx/stereo_camera/left/image_raw</topic>
<update_rate>10.0</update_rate>
<always_on>true</always_on>
<ignition_frame_id>mmx/camera_link_optical</ignition_frame_id>
<pose>0 0 0 0 0 0</pose>
<camera name="left">
<triggered>true</triggered>
<trigger_topic>/trigger</trigger_topic>
<horizontal_fov>1.3962634</horizontal_fov>
<image>
<width>600</width>
<height>600</height>
<format>R8G8B8</format>
</image>
<clip>
<near>0.02</near>
<far>300</far>
</clip>
</camera>
</sensor>
There is no message on the topic (checked with ign topic -t /model/mmx/stereo_camera/left/image_raw -e), which means the camera is in trigger mode (not constantly streaming messages on the topic).
But the trigger_topic doesn't appear when I list topics with ign topic -l
And if I publish a message on the trigger_topic with ign topic -t "/trigger" -m ignition.msgs.Empty -p " ", the cameras are not triggered (still no image message).
Originally posted by Clement on Gazebo Answers with karma: 3 on 2022-08-16
Post score: 0
Original comments
Comment by kakcalu13 on 2022-08-16:
The link you provided is ROS1.
Would this link big helpful for you? https://github.com/gazebosim/ros_gz/tree/foxy/ros_ign_gazebo_demos
Instead of Foxy, use Galactic
Comment by Clement on 2022-08-16:
Indeed it was the old gazebo plugins. The new ones are https://github.com/gazebosim/gz-sim. It seems that all the sensors have been fused into a single sensor: https://github.com/gazebosim/gz-sim/tree/ign-gazebo6/src/systems/sensors. But I can't make it work.
Answer:
OK it works...
I found this page https://community.gazebosim.org/t/new-releases-2022-06-28-fortress/1486 on which there is a screenshot of a trigger command:
So the issue was that I wasn't sending a Boolean message.
The command should have been: ign topic -t "/trigger" -m Boolean -p "data: true" -n 1
Originally posted by Clement with karma: 3 on 2022-08-17
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Clement on 2022-08-17:
I have another question, is gazebo ignition being abandoned?
There is no documentation at all.
It took me almost two days to figure out how to create a stereo bench and trigger a pair of images, and only by chance.
Are people all moving to Webots? | {
"domain": "robotics.stackexchange",
"id": 4667,
"tags": "gazebo-camera, gazebo-ignition"
} |
Hydrogen evolution during thermal spray | Question: Firstly, let me state that I'm no chemist. My specialty is in thermally applied coatings such as electric-arc metalizing.
During the course of my career I've always been presented with vague warnings to use "proper" ventilation to prevent hydrogen buildup within a contained area when spraying Al alloys.
Could someone please explain the process that creates hydrogen gas when arcing between two Al electrodes?
Also, what would the "proper" ventilation be? Would dry dust collection/ventilation suffice or would wet collection methods be required? Would this same reaction occur when using other alloys commonly used in thermal deposition processes?
Answer: The culprit in the undesired production of hydrogen gas from electric-arc metalizing is water vapor, as it is the only significant source of hydrogen atoms in the atmosphere. The concentration of water vapor is then of course a major factor in how much hydrogen gas can be produced. In very humid conditions the air can contain as much as a few percent water vapor. The chemical reaction for the conversion of water vapor to hydrogen gas under the conditions of this process is as follows:
$$\ce{H2O <=> H2 + 1/2O2}$$
A few percent water vapor may not seem like enough to produce explosive levels of hydrogen gas, but hydrogen gas becomes explosive in air at levels as low as $\pu{4\%}$.
A recirculating dust filtration system, dry or wet, is not an appropriate system for removing hydrogen gas. The only reliable way to prevent the possibility of hydrogen buildup in a closed area is with an appropriate fume hood (or similar approved device) that vents the gases out of the building.
I can't give an exhaustive list of the metals/alloys that could result in hydrogen production via this process, although base metals like copper, lead, nickel and zinc have the potential to do so. | {
"domain": "chemistry.stackexchange",
"id": 8712,
"tags": "experimental-chemistry, safety, hydrogen"
} |
Drawing 800+ circles in a space-themed shooter game | Question: I am making a space-themed shooter game, and I need to render many "lasers" at the same time. I am using the library Slick2D, a wrapper around LWJGL. Unfortunately, when there are a lot of these "lasers" on the screen at one time, the game drops from its usual 60 FPS to around 30 or even 20.
Things I've noticed:
Not calling drawSelf = no lag
Not calling .fill = no lag
As is = lag as stated in title
public void drawSelf(Graphics g){
int[] pos = tr.toSlick(blast.getPosition());
g.setColor(Color.green);
Circle circle = new Circle(pos[0], pos[1], tr.xscale/2);
g.fill(circle);
}
public int[] toSlick(float x, float y){
int [] output = new int[2];
output[0] = (int)(x*xscale+width/2);
output[1] = (int)(-1*y*yscale+height/2);
return output;
}
tr is the coordinate transformer object (I'm using jbox2d physics, but that's not impacting anything), and blast is the "Body".
Can I have some suggestions on making this code faster or am I stuck because of Slick's internals?
Answer: By looking at documentation, I found another method that is faster at drawing many circles.
I replaced last 2 lines in drawSelf with:
g.fillOval(pos[0],pos[1],tr.xscale,tr.yscale);
and I got a lot less lag. Turns out it did have to do with internals. | {
"domain": "codereview.stackexchange",
"id": 15353,
"tags": "java, performance, game, graphics"
} |
Removing Single Points from a sensor_msgs::PointCloud2 | Question:
Hi,
I am trying to build a dynamic mask for robot systems using a Kinect2. My Idea is to use a naive euklidean distance search to remove the robot from the world. (I want to have a hole in the cloud where the robot is)
Unfortunately I don't know how to do that. Can you help me?
This is the part that needs help:
for (sensor_msgs::PointCloud2ConstIterator<float> it(output, "x"); it != it.end(); ++it) {
// TODO: do something with the values of x, y, z
try {
transformStamped = tfBuffer.lookupTransform("kinect2_rgb_optical_frame", "RobotCenter",ros::Time(0),
ros::Duration(2));
CamRobX = transformStamped.transform.translation.x;
CamRobY = transformStamped.transform.translation.y;
CamRobZ = transformStamped.transform.translation.z;
CamPointX = it[0];
CamPointY = it[1];
CamPointZ = it[2];
distance = sqrt(pow((CamPointX - CamRobX), 2) + pow((CamPointY - CamRobY), 2) + pow((CamPointZ - CamRobZ), 2));
}
catch (tf2::TransformException &ex) {
ROS_WARN("%s", ex.what());
ros::Duration(1.0).sleep();
continue;
}
// distance refers to a radius of a frame in the middle of the room everything around this frame has to dissapear
if(distance <0.4){
}
else{
}
}
pub.publish(output);
Originally posted by Arthur_Ace on ROS Answers with karma: 13 on 2019-02-20
Post score: 0
Original comments
Comment by gvdhoorn on 2019-02-20:
Do you absolutely want to solve/implement this yourself, or would a pkg/plugin that does this also be ok?
Answer:
Given you've written the majority of the code for this, it's only a tiny bit of work to get it working. Just to give a bit of background, there are two different types of point cloud that can be represented by PCL: dense structured point clouds and sparse point clouds.
In dense point clouds, the points are stored in a grid structure: an NxM matrix of 3D points, with invalid points represented as vectors of NaNs. These types of point cloud are generally created by depth cameras and stereo vision algorithms.
In sparse point clouds, the points are represented as a flat list of points in no specific order; these are more commonly generated by LIDAR sensors.
Getting back to your question, the Kinect produces dense structured point clouds, so you will be able to 'remove' a point by simply setting its x, y and z values to NaN. This means that the grid structure of the points is preserved.
Hope this helps.
Update
You can set the values of a point using the iterator you are using in your loop as shown below:
it[0] = std::numeric_limits<float>::quiet_NaN();
it[1] = std::numeric_limits<float>::quiet_NaN();
it[2] = std::numeric_limits<float>::quiet_NaN();
The iterator functions like a pointer to the pcl::Point allowing you to get and set its values.
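Putting the pieces together, here is a hedged sketch of the masking step (written against a flat x,y,z buffer so the logic is easy to test in isolation; with the real message you would run the same distance test inside your loop over a non-const PointCloud2Iterator on a copied cloud, and do the lookupTransform once before the loop rather than once per point):

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Set every point within `radius` of the robot centre (cx, cy, cz) to NaN,
// preserving the grid structure of a dense cloud.
void maskSphere(std::vector<float>& xyz,
                float cx, float cy, float cz, float radius) {
    const float nan = std::numeric_limits<float>::quiet_NaN();
    for (std::size_t i = 0; i + 2 < xyz.size(); i += 3) {
        const float dx = xyz[i] - cx;
        const float dy = xyz[i + 1] - cy;
        const float dz = xyz[i + 2] - cz;
        if (std::sqrt(dx * dx + dy * dy + dz * dz) < radius) {
            xyz[i] = xyz[i + 1] = xyz[i + 2] = nan;  // "remove" the point
        }
    }
}
```

Hoisting the transform lookup out of the per-point loop also avoids calling lookupTransform tens of thousands of times per cloud, which would dominate the processing time.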
Originally posted by PeteBlackerThe3rd with karma: 9529 on 2019-02-21
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Arthur_Ace on 2019-02-21:
That clears things up, thank you! Unfortunately I don't know how and where to set the NAN-Value
also, this is already a sensor_msgs_PointCloud2. Can I do the same here?
Comment by PeteBlackerThe3rd on 2019-02-21:
Yes this should work. I've updated my answer with an example for you now.
Comment by Arthur_Ace on 2019-02-21:
I tried that, but when I try to compile (catkin_make) the following error code appears.
it seems like the iterator won't let me override the values for some reason although it should...
Comment by Arthur_Ace on 2019-02-21:
error: assignment of read-only location ‘it.sensor_msgs::PointCloud2ConstIterator::.sensor_msgs::impl::PointCloud2IteratorBase<float, const float, const unsigned char, const sensor_msgs::PointCloud2_std::allocator<void >, sensor_msgs::PointCloud2ConstIterator>::operator’
Comment by gvdhoorn on 2019-02-21:\
the iterator won't let me override the values for some reason although it should...
no, it shouldn't. It's a ConstIterator ..
Comment by Arthur_Ace on 2019-02-21:
So, how can I circumvent this?
Comment by PeteBlackerThe3rd on 2019-02-21:
Ahh if your working with the message passed to a callback it will be a const (immutable) type. You should be able to create a non-const copy of the message and work on that:
sensor_msgs::PointCloud2 newCloud = *cloud_parameter; | {
"domain": "robotics.stackexchange",
"id": 32505,
"tags": "ros, c++, ros-melodic, pcl, pointcloud"
} |
I/Q scatter plot of raised cosine filter in QPSK | Question: I am using a raised cosine filter in my QPSK receiver and I see this IQ diagram of the signal, but it seems off to me. Can someone explain why this happens?
This is the MATLAB code I used to generate it
% Parameters
M = 4; % QPSK
symbolRate = 1e6; % 1 megabaud
fs = 2 * symbolRate; % Sampling frequency (oversampling)
numSymbols = 1000; % Number of symbols
rolloff = 0.35; % Roll-off factor for Raised Cosine filter
% Generate random bits
dataBits = randi([0 1], numSymbols*log2(M), 1);
% QPSK Modulation
qpskModulator = comm.QPSKModulator('BitInput', true);
modulatedSignal = qpskModulator(dataBits);
% Raised Cosine Filter
txFilter = comm.RaisedCosineTransmitFilter('RolloffFactor', rolloff, ...
'OutputSamplesPerSymbol', fs/symbolRate);
txSignal = txFilter(modulatedSignal);
scatterplot(txSignal)
Answer: This is likely due to a fractional timing offset, as could be expected depending on the delay of the pulse-shaping filter (likely a half-sample offset if the filter was generated with an even number of samples), combined with plotting every sample at two samples per symbol in the scatter plot.
This can be confirmed by resampling by two using Matlab’s resample function and then selecting every fourth sample with a sample offset to make up for that half sample delay. | {
"domain": "dsp.stackexchange",
"id": 12443,
"tags": "digital-communications, filter-design, qpsk"
} |
Do emitted photons have a well defined frequency or just a spread as per HUP? | Question: I have read this question:
Conservation of Energy in photon exchange between two atoms
where Kurshal Shah in a comment says:
As per the energy-time uncertainty relation, the emitted photon does not have a definite energy, but a spread. You need to account for that too.
No well-defined frequency for a wave packet?
where Ben Crowell says:
Let's say an isolated atom emits a photon. The excited state in the atom has some lifetime τ. Through the energy-time uncertainty relation, that gives the excited state some uncertainty in energy δE∼h/τ (not the same as ΔE, which is a difference in energy between atomic states). The photon then has the same uncertainty δE in its energy, which corresponds to an uncertainty in frequency. The photon isn't in an eigenstate of energy.
Yes, when you measure the energy of the photon, you get a random outcome. However, there is a quantum-mechanical correlation between this energy and the energy of the atom, so that energy is exactly conserved (not just statistically, on an average basis).
Now energy must be conserved in a closed QM system.
In physics and chemistry, the law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time.
https://en.wikipedia.org/wiki/Conservation_of_energy
But if the energy of the emitted photon has some uncertainty as per QM, but the emitting atom's energy levels are specific energy levels, an excited and a ground state, then the difference between the energy levels must be a specific energy level, which should be the energy level of the photon.
Now if I detect (observe, absorb) this photon, then I will get a specific frequency, an eigenvalue. What I do not understand is, why the source of the photon, that is the emitting atom has a definite energy level difference between excited and ground state.
that gives a specific energy level difference
the photon that is emitted must have this specific energy level, since the electron lost this specific kinetic energy to go from excited to ground level
but the questions say, that the photon does have an uncertainty in its energy level
when I detect (observe, absorb) this photon, I get a certain eigenvalue
Question:
Does the photon have a well defined frequency or not?
Answer: Not sure about the definition of a photon (that part gets me confused too) but here is what I understand about energy conservation in such scenarios :
If the photon is emitted in some superposition of energies distributed according to some $f(e)$ (I'll assume discrete energy) then the joint system of the photon and the emitting atom should be described by the entangled state $$|\psi\rangle=\sum_ef(e)|e\rangle_{photon}|E-e\rangle_{atom}$$
where $E$ is the initial energy of the atom (assuming it was in an energy eigenstate) and the kets in the sum are energy eigenstates. Then, when the photon's energy is measured both the atom and the photon collapse into a consistent energy conserving product state. It does not make sense for the photon to be in energy superposition while the atom is in energy eigenstate. | {
"domain": "physics.stackexchange",
"id": 59902,
"tags": "quantum-mechanics, electromagnetism, photons, energy-conservation, heisenberg-uncertainty-principle"
} |
'rosdep init' error when running without internet | Question:
Hi
I am trying to follow catkin tutorial for hydro. I realise 'sudo rosdep init' and rosdep update command need connection to internet repository. We have a local deb repository of ubuntu & ros in lab and are not connected directly to net.
Is there a work around of this problem?
EDIT 2:
I have created /etc/ros/rosdep/sources.list.d . In this directory I copied https://github.com/ros/rosdistro/tree/master/rosdep/sources.list.d/20-default.list. I modified https URL with local URI as file:///hoe/rosdep/base.yaml etc.
Other files are copied from https://github.com/ros/rosdistro/tree/master/rosdep .
I have set up 20-default.list properly, and rosdep update can read from it. After generating a HIT for the listed yaml files, rosdep starts looking for the net with the following prompt:
query rosdistro index
https://raw.github.com/ros/rosdistro/master/index.yaml
This file I have added as one of the yaml files!
My current sources.list.d/20-default.list files have following contents
yaml file:///mnt/store/rosdep/osx-homebrew.yaml osx
#yaml https://github.com/ros/rosdistro/raw/master/rosdep/osx-homebrew.yaml osx
# generic
#yaml https://github.com/ros/rosdistro/raw/master/rosdep/base.yaml
yaml file:///mnt/store/rosdep/base.yaml
#yaml https://github.com/ros/rosdistro/raw/master/rosdep/python.yaml
yaml file:///mnt/store/rosdep/python.ymal
#gbpdistro https://github.com/ros/rosdistro/raw/master/releases/fuerte.yaml fuerte
I do not have any entry related to hydro, which IMHO rosdep is trying to access.
This link https://github.com/ros/rosdistro/raw/master/releases/hydro.yaml does not work! I do not know whether gbpdistro resolves this into an actual link. A fuerte.yaml is available at https://github.com/ros/rosdistro/tree/master/releases, but nothing related to hydro or groovy. This is probably because rosdep was updated in hydro. There is a hint in 20-default.list, which reads as follows:
newer distributions (Groovy, Hydro, ...) must not be listed anymore, they are being fetched from the rosdistro index.yaml instead
I have no idea how to set it up. I need help in resolving this issue.
thanks
prince
Originally posted by prince on ROS Answers with karma: 660 on 2014-02-24
Post score: 1
Answer:
Hi
rosdep uses a source.list file (pretty much like apt does):
On my system it's located here:
/etc/ros/rosdep/sources.list.d/20-default.list
In this .list file you can find several links to github-hosted yaml files.
Try creating the file by hand if it doesn't exist
Make the necessary changes in the sources.list file, and probably in the .yaml files, so rosdep update can work without Internet access...
Edit: Here's the content of the file. cat /etc/ros/rosdep/sources.list.d/20-default.list
# os-specific listings first
yaml https://github.com/ros/rosdistro/raw/master/rosdep/osx-homebrew.yaml osx
yaml https://github.com/ros/rosdistro/raw/master/rosdep/gentoo.yaml gentoo
# generic
yaml https://github.com/ros/rosdistro/raw/master/rosdep/base.yaml
yaml https://github.com/ros/rosdistro/raw/master/rosdep/python.yaml
yaml https://github.com/ros/rosdistro/raw/master/rosdep/ruby.yaml
gbpdistro https://github.com/ros/rosdistro/raw/master/releases/fuerte.yaml fuerte
gbpdistro https://github.com/ros/rosdistro/raw/master/releases/groovy.yaml groovy
gbpdistro https://github.com/ros/rosdistro/raw/master/releases/hydro.yaml hydro
For the yaml files, just fetch the files from github since they're too big to post here.
Edit2 (I can't reply until someone else replies first): You're right, it looks like the gbpdistro lines are useless (rosdep update throws "Ignore legacy gbpdistro '<ros_distro_here>'"). (It's funny I have them, since my setup is only two weeks old...)
Actually, the <ros_distro> specifics can be found in the index.yaml file you linked. There:
[...]
hydro:
distribution: hydro/distribution.yaml
distribution_cache: http://ros.org/rosdistro/hydro-cache.yaml.gz
[...]
But I'm not sure you'll need this since you won't have any Internet access to fetch the git sources (for the ROS packages).
From there, I see two options:
You trick rosdep, changing your /etc/hosts file and setting up a webserver (where you put the file rosdep tries to download) so it'll use yours and not try to access the Internet
You can try the command lines options (see Here) to enforce the update so rosdep is happy and then provides you the system dependencies management you need.
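To illustrate the sources-list editing mentioned above (a sketch only: the mirror path /tmp/rosdep-mirror and the copied list location are placeholders of my own, and you should verify that your rosdep version accepts file:// URLs before relying on this):

```shell
# Mirror the yaml files locally, then rewrite a copy of the sources list so
# every entry points at the local mirror instead of github.com.
mkdir -p /tmp/rosdep-local
cat > /tmp/rosdep-local/20-default.list <<'EOF'
yaml https://github.com/ros/rosdistro/raw/master/rosdep/base.yaml
yaml https://github.com/ros/rosdistro/raw/master/rosdep/python.yaml
EOF
sed -i 's#https://github.com/ros/rosdistro/raw/master#file:///tmp/rosdep-mirror#' \
    /tmp/rosdep-local/20-default.list
cat /tmp/rosdep-local/20-default.list
```

After checking the rewritten list, you would copy it over /etc/ros/rosdep/sources.list.d/20-default.list and place the downloaded yaml files under the mirror directory.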
Originally posted by RedoXyde with karma: 66 on 2014-02-25
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by prince on 2014-02-25:
Can you post contents of this file other relevant yaml files?
Comment by prince on 2014-02-25:
I do not have gbpdistro in the 20-default.list. I have edited question to reflect my observations. | {
"domain": "robotics.stackexchange",
"id": 17079,
"tags": "rosdep, ros-hydro"
} |
Time reversal and parity symmetry | Question: I was previously under the misapprehension that time $T$ and parity $P$ symmetries in conjunction ($PT$) were a reflection in $(3+1)$-dimensional space-time, where
$$P: \vec x \to -\vec x$$
$$T: t \to -t$$
but, in fact, because it has determinant $(-1)^4=1$, $PT$ is merely a rotation (by $\pi$). Is this correct? Shouldn't the symmetry we seek be a reflection in space time, to generalize parity so that it includes time?
In other words, if $P$ is a reflection in $3$ spatial dimensions, why isn't $PT$ defined to be a reflection in $(3+1)$-dimensional space-time?
My thought is that $P$ is an intuitive symmetry in spatial dimensions. In the spirit of relativity, why don't we generalize $P$ from a reflection in space to a reflection in space-time (called, say, $PT$)? Why are $P$ and $T$ reflections considered separately? It's problematic to me because their combination ($PT$) is not a reflection. I suppose I know that $P$ and $T$ are separate because only $T$ must be anti-unitary etc...
Answer: The $P$, $T$ and $PT$ transformations belong to the discrete part of the Lorentz group. They map the connected components of the Lorentz group onto one another; for example, $PT$ maps the proper orthochronous component $L^{\uparrow}_{+}$ onto $L^{\downarrow}_{+}$. In general they cannot be represented as special cases of rotations, which belong to the subgroup of continuous transformations: acting with any continuous transformation matrix cannot take a representation out of the orthochronous component. So, by its very nature, $PT$ cannot be a rotation. | {
"domain": "physics.stackexchange",
"id": 62777,
"tags": "symmetry, time-reversal-symmetry, parity, cpt-symmetry"
} |
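To make the answer's point concrete (an illustrative addition, not part of the original answer): acting on the coordinates $(t, \vec x)$, the combined transformation is $PT = -I_4$, so
$$\det(PT) = (-1)^4 = +1, \qquad (PT)^{0}{}_{0} = -1.$$
A proper orthochronous transformation must satisfy $\det\Lambda = +1$ and $\Lambda^{0}{}_{0} \geq 1$. $PT$ therefore shares the determinant of a rotation, but its negative time-time component places it in $L^{\downarrow}_{+}$, a component that no continuous path of rotations starting from the identity can reach.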
What is the effect of putting salt on water during electrolysis | Question: I want to decompose water into hydrogen and oxygen by putting carbon electrodes in the water, but it looks very slow. What if I put in sodium chloride? Would adding $\ce{NaCl}$ have an effect on the output gases (hydrogen and oxygen)?
Answer: Yes, adding salt has an effect during electrolysis: it produces chlorine at the anode, which is dangerous in larger amounts. In place of sodium chloride you can use sodium hydroxide, which you can get from stores and which is safer here, or sodium bicarbonate (baking soda).
Good luck! | {
"domain": "chemistry.stackexchange",
"id": 14330,
"tags": "physical-chemistry, electrochemistry, electrolysis"
} |
Proving regular languages are closed under a form of interleaving | Question: I've got the following definitions:
$$\mathrm{Interleave}\,(x,y) = \{w_1\dots w_n\mid w_i\in\{x_i,y_i\} \text{ for }i=1, \dots, |x|\}$$
when $x$, $y$ and $w$ are words with $|x|=|y|$ and $w_i$ means the $i$-th letter in $w$.
$$\mathrm{Interleave}\,(L_1,L_2)\ \ = \!\!\!\!\bigcup_{\substack{x\in L_1,\ y\in L_2,\\ |x|=|y|}}\!\!\!\! \mathrm{Interleave}\,(x,y)$$
when both $L_1$ and $L_2$ are languages.
I have to prove that if I know that $L_1$ and $L_2$ are both regular languages, $\mathrm{Interleave}\,(L_1,L_2)$ is a regular language as well.
I have absolutely no idea how to do it.
Thanks in advance.
Answer: Hint. Because $L_1$ and $L_2$ are both regular, you know they're accepted by NFAs (or DFAs; it doesn't matter) $M_1$ and $M_2$, respectively. To show that $\mathrm{Interleave}\,(L_1,L_2)$ is regular, show that it's accepted by some NFA $M$. For each character it receives, $M$ can decide to act either like $M_1$ or like $M_2$. | {
"domain": "cs.stackexchange",
"id": 4633,
"tags": "regular-languages, finite-automata, closure-properties"
} |
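To flesh out the hint in the previous answer (an illustrative sketch of my own, not from the original post): since $|x| = |y| = |w|$, both machines advance one step per input character, so the product NFA tracks a pair of states and, for each character $c$, guesses whether $c$ came from $x$ (advancing $M_1$ on $c$ and $M_2$ on a guessed symbol) or from $y$ (symmetrically). Simulating that NFA directly:

```python
# Product-NFA simulation for Interleave(L1, L2), where L1 and L2 are given
# by total DFAs (delta[state][symbol] -> state). The set `current` holds all
# reachable (state-of-M1, state-of-M2) pairs.
def accepts_interleave(d1, start1, acc1, d2, start2, acc2, alphabet, w):
    current = {(start1, start2)}
    for c in w:
        nxt = set()
        for q1, q2 in current:
            for a in alphabet:                     # guessed symbol for the "other" word
                nxt.add((d1[q1][c], d2[q2][a]))    # c was taken from x
                nxt.add((d1[q1][a], d2[q2][c]))    # c was taken from y
        current = nxt
    return any(q1 in acc1 and q2 in acc2 for q1, q2 in current)

# Example: L1 = L2 = {"ab"}; then w1 must be 'a' and w2 must be 'b',
# so Interleave(L1, L2) = {"ab"}. State 3 is a dead state.
d = {0: {'a': 1, 'b': 3}, 1: {'a': 3, 'b': 2},
     2: {'a': 3, 'b': 3}, 3: {'a': 3, 'b': 3}}
print(accepts_interleave(d, 0, {2}, d, 0, {2}, {'a', 'b'}, "ab"))  # True
print(accepts_interleave(d, 0, {2}, d, 0, {2}, {'a', 'b'}, "ba"))  # False
```

The NFA itself has state set $Q_1 \times Q_2$, which is finite, so the construction also yields the regularity proof directly.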
Upgrade security and code quality of a website contact form | Question: I have a simple website contact form created back in 2017.
The form was developed using PHP, PHPMailer, jQuery, HTML and CSS.
I would like to make sure the code is up to modern standards and secure.
I would also like to switch from PHP to AJAX for success and error messaging (in order to avoid a page refresh). But that matter is probably out of scope here so I'll leave it out of the question.
Here's a detailed breakdown:
(1) The form sends messages from site users to site managers.
(2) It uses HTML for structure and CSS for presentation.
(3) User input is sent via SMTP using PHPMailer.
(4) A textarea box, with a character counter, is provided for users to submit their message.
(5) If the character count is exceeded, a jQuery script prevents the form from being submitted.
(6) Google reCAPTCHA is installed to reduce spam.
(7) HTML and standard browser functions are used for user input validation on the client-side.
(8) PHP functions are used for user input validation and sanitization on the server-side.
(9) PHP handles success and error messaging to the user.
The form works fine, as far as I can tell.
I'm looking for some guidance here:
The SMTP/PHPMailer set-up requires a username and password embedded in the code. Is there a more secure way?
Any suggestions for improving the overall quality and efficiency of this code?
Thank you.
HTML, CSS & jQuery
// text area character counter
// displays total characters allowed
// displays warning at defined count (currently 150)
// disables submit button when < 0
// max characters that can be input set by maxlength attribute in HTML
(function($) {
$.fn.charCount = function(submit, options){
this.submit = submit;
// default configuration properties
var defaults = {
allowed: 1250,
warning: 150,
css: 'counter',
counterElement: 'span',
cssWarning: 'warning',
cssExceeded: 'exceeded',
counterText: ''
};
var options = $.extend(defaults, options);
function calculate(obj,submit){
submit.attr("disabled", "disabled");
var count = $(obj).val().length;
var available = options.allowed - count;
if(available <= options.warning && available >= 0){
$(obj).next().addClass(options.cssWarning);
} else {
$(obj).next().removeClass(options.cssWarning);
}
if(available < 0){
$(obj).next().addClass(options.cssExceeded);
} else {
$(obj).next().removeClass(options.cssExceeded);
submit.removeAttr("disabled");
}
$(obj).next().html(options.counterText + available);
};
this.each(function() {
$(this).after('<'+ options.counterElement +' class="' + options.css + '">'+ options.counterText +'</'+ options.counterElement +'>');
calculate(this, submit);
$(this).keyup(function(){calculate(this,submit)});
$(this).change(function(){calculate(this,submit)});
});
};
})(jQuery);
$(document).ready(function(){
$("#comments").charCount($("#submit"));
});
#contact-form {
display: flex;
flex-direction: column;
width: 50%;
font: 1rem/1.5 arial, helvetica, sans-serif;
margin: 0 auto 1em !important;
}
#contact-form > div {
display: flex;
flex-direction: column;
margin-bottom: 10px;
}
#contact-form > div:not(.send-fail-notice):not(#counter-container) {
width: 70%;
}
#contact-form > .ad2 { align-self: flex-start; }
/* label formatting */
#contact-form > div > label {
margin-bottom: 2px;
}
/* asterisk formatting */
#contact-form > div > label > span {
color: #f00;
font-size: 1.5em;
line-height: 1;
}
/* input field formatting */
#contact-form input,
#contact-form textarea {
border: 1px solid #ccc;
background: #fcfcfc;
padding: 2px 5px;
resize: none;
font-family: arial, helvetica, sans-serif;
}
#contact-form input:focus,
#contact-form textarea:focus {
border: 1px solid #777;
}
/* textarea and character counter */
#contact-form > #counter-container { }
#contact-form > #counter-container > .counter {
font-size: 1.5em;
color: #ccc;
}
#contact-form > #counter-container .warning {
color: orange;
}
#contact-form > #counter-container .warning::after {
content: " approaching limit";
font-size: 1em;
}
#contact-form > #counter-container .exceeded {
color: red;
}
#contact-form > #counter-container .exceeded::after {
content: " form won't submit";
font-size: 1em;
}
/* submit button formatting */
#contact-form > button {
align-self: flex-start;
padding: 5px 15px;
cursor: pointer;
border: 1px solid #ccc;
background-color: #f1f1f1;
background-image: linear-gradient(#f1f1f1, #fafafa);
}
#contact-form > button:hover {
background-image: linear-gradient(#e1e1e1, #eaeaea);
}
/* form errors */
.send-fail-notice {
flex-direction: row !important;
align-items: center;
padding: 15px;
background-color: #ffc;
border: 2px solid red;
}
.send-fail-notice > span {
color: #e13a3e;
font-weight: bold;
margin-left: 10px;
}
<form method="post" id="contact-form">
<?php echo $send_fail_one ?>
<?php echo $send_fail_two ?>
<div>
<label for="name">Name <span>*</span></label>
<input type="text" name="name" id="name" maxlength="75" value="<?php echo $name ?>" required>
</div>
<div>
<label for="email">E-mail <span>*</span></label>
<input type="email" name="email" id="email" maxlength="75" value="<?php echo $email ?>" required>
</div>
<div id="subject-line">
<label for="subject">Subject</label>
<input type="text" name="subject" id="subject" maxlength="75" value="<?php echo $subject ?>">
</div>
<div id="counter-container">
<label for="comments">Message <span>*</span></label>
<textarea name="comments" id="comments" maxlength="1500" cols="25" rows="5" required><?php echo $comments ?></textarea>
</div>
<div class="g-recaptcha" data-sitekey="6LdruToUAAAAAO4EZua5CV6_jREeCpv8knqklYa9" style="width: auto;"></div>
<button type="submit" name="submit" id="submit">Send Message</button>
</form>
PHP
// CONTACT FORM PROCESSING SCRIPT
// set default values (prevents undefined variable error)
$send_fail_one = null;
$send_fail_two = null;
$name = null;
$email = null;
$subject = null;
$comments = null;
// Load PHPMailer (v 6.5.3 02/02/2022)
// Import PHPMailer classes into the global namespace
// These must be at the top of your script, not inside a function
use PHPMailer\PHPMailer\PHPMailer;
use PHPMailer\PHPMailer\SMTP;
use PHPMailer\PHPMailer\Exception;
if ($_SERVER["REQUEST_METHOD"] == "POST") {
// Load PHPMailer
require 'PHPMailer/src/Exception.php';
require 'PHPMailer/src/PHPMailer.php';
require 'PHPMailer/src/SMTP.php';
// Create new PHPMailer instance; passing `true` enables exceptions
$mail = new PHPMailer(true);
$mail->CharSet = "UTF-8";
// SMTP Debugging
//SMTP::DEBUG_OFF = off (for production use)
//SMTP::DEBUG_CLIENT = client messages
//SMTP::DEBUG_SERVER = client and server messages
$mail->SMTPDebug = SMTP::DEBUG_SERVER;
$mail->Debugoutput = 'html';
// SMTP settings
$mail->isSMTP();
$mail->Host = 'smtp.gmail.com';
$mail->SMTPAuth = true;
$mail->SMTPSecure = PHPMailer::ENCRYPTION_SMTPS;
$mail->Port = 465;
$mail->Username = 'demo-purposes@yahoo.com';
$mail->Password = 'FakePasswordForDemoPurposes';
$mail->setFrom('demo-purposes@yahoo.com');
$mail->addAddress('demo-purposes@yahoo.com');
$mail->isHTML(true);
// Sanitize & Validate Input
// Trim all $_POST values
/* Using FILTER_SANITIZE_SPECIAL_CHARS as opposed to FILTER_SANITIZE_STRING because:
* (1) FILTER_SANITIZE_SPECIAL_CHARS will disarm code but still display it.
* (2) FILTER_SANITIZE_STRING removes all code, leaving no trace, and all code input is lost (e.g., anything with brackets is lost).
* Hence, with (1) we can identify users trying to submit code.
* Exception: textarea field ('comments') uses FILTER_SANITIZE_STRING because otherwise nl2br (keep line breaks) doesn't work. */
$name = trim(filter_input(INPUT_POST, 'name', FILTER_SANITIZE_SPECIAL_CHARS));
$email = filter_input(INPUT_POST, 'email', FILTER_SANITIZE_EMAIL);
$subject = trim(filter_input(INPUT_POST, 'subject', FILTER_SANITIZE_SPECIAL_CHARS));
$comments = nl2br(filter_input(INPUT_POST, 'comments', FILTER_SANITIZE_STRING));
// Email Body
$message = <<<HTML
<br>
<table style="border: 1px solid #e1e1e1; background-color: #fafafa; margin: 0 auto; width: 75%; padding: 10px 20px;">
<tr><td style="padding: 10px 0;"><b>NAME</b></td></tr>
<tr><td style="border-bottom: 1px solid #e1e1e1; padding-bottom: 15px;">$name</td></tr>
<tr><td style="padding: 10px 0;"><b>E-MAIL</b></td></tr>
<tr><td style="border-bottom: 1px solid #e1e1e1; padding-bottom: 15px;">$email</td></tr>
<tr><td style="padding: 10px 0;"><b>SUBJECT</b></td></tr>
<tr><td style="border-bottom: 1px solid #e1e1e1; padding-bottom: 15px;">$subject</td></tr>
<tr><td style="padding: 10px 0;"><b>MESSAGE</b></td></tr>
<tr><td style="padding-bottom: 15px;">$comments<td></tr>
</table>
HTML;
$mail->Subject = 'Message Received =?utf-8?B?4oCT?= Website Contact Form';
$mail->Body = $message;
// verify recaptcha response
$url = "https://www.google.com/recaptcha/api/siteverify";
$privatekey = "xxx";
$response = file_get_contents($url."?secret=".$privatekey."&response=".$_POST['g-recaptcha-response']."&remoteip=".$_SERVER['REMOTE_ADDR']);
$data = json_decode($response);
// if recaptcha verification is a success...
if(isset($data->success) AND $data->success==true) {
// ... and if phpmailer does not send (for whatever reason, such as a wrong SMTP password)...
if (!$mail->send()) {
// then show this error message:
$send_fail_one = <<<ERROR
<div class="send-fail-notice">
<img src="images/warning-sign.gif" height="44" width="50" alt="error sign">
<span>ERROR. Message not sent.<br>Please try again or contact us at <i><u>bireg</u></i> <i><u>at</u></i> <i><u>outlook</u></i> <i><u>dot</u></i> <i><u>com</u></i>.</span>
</div>
ERROR;
} else {
// ...otherwise delivery is successful and page re-directs to thank you / confirmation page
header('Location: https://www.yahoo.com');
}
} else {
// if re-captcha verification fails, show this error message:
$send_fail_two = <<<ERROR
<div class="send-fail-notice">
<img src="images/warning-sign.gif" height="44" width="50" alt="error sign">
<span>ERROR. Message not sent.<br>Please check the anti-spam box.</span>
</div>
ERROR;
}
}
Answer: Contact forms don't have to be a security risk, but it all depends on what you do with the submitted data. There are basically two types of security you need to keep in mind:
1. The server
You don't store anything in a database, or sent the e-mails yourself. This takes away one of the biggest threats to your server. You've left out a lot of code, so it is still possible you do store the data, but clearly you don't want this to be discussed here.
2. The visitor
Users run a much bigger risk with this contact form. Apart from their comment, they have to supply their name and email address. Why exactly?
Any smart user would, of course, not use their own name and e-mail address. Any email address will do. So, why do you ask for it?
You could make supplying a name and email address optional.
You are also using Google's hidden reCAPTCHA, which hands over a lot of user data to Google.
Read: Google reCAPTCHA and the GDPR: a possible conflict?
If the reCAPTCHA misses any of that information Google has a second chance by peeking in the mails you so kindly send via their mail server. You don't leave your visitors any choice, if they want to comment they could as well be sending their comments straight to Google.
Why would this be a problem? Google stores this data in the USA. And the US government likes big data. They use it. It's easy. Edward Snowden showed us how. Foreigners have even less protection under US law.
Also Read: Exclusive: Government Secretly Orders Google To Identify Anyone Who Searched A Sexual Assault Victim’s Name, Address Or Telephone Number.
Google is one of the major reasons there's no privacy on the internet, and they actively work, along with Facebook, to keep it that way. They pursue legitimate business interests, but the result is quite damaging.
Most people, including me, too often ignore this obvious problem. We concentrate on all the technical issues, but there's a whole other side to this story. | {
"domain": "codereview.stackexchange",
"id": 42924,
"tags": "javascript, php, jquery, html, ajax"
} |
Inverse Fourier transform from plots | Question: Given plots of magnitude and phase of a Fourier transform, how can I obtain the analytical formula for the signal?
I know that $$X_1(j\omega) = 3j\omega[H(\omega+3\pi)-H(\omega-3\pi)]$$ but I can't figure out a way from this point to the answer, which is $$x_1(t)=\frac{3}{\pi t^2}[3\pi t\cos(3\pi t)-\sin(3\pi t)]$$
Answer: In this case, it is quite simple. In general, it may be more difficult. I'm not sure what you are getting at with $H(j \omega)$ since you don't define it anywhere, but you are trying to get the analytical time domain expression for the signal from the plots. This is possible by noticing that
$$X_1(j \omega) = |X_1(j \omega)| \exp\left( j \angle X_1(j \omega)\right).$$
In this case, due to the plots' simplicity, we can generate a formula by reading directly from the axes. In this case,
$$X_1(j \omega) = 3 j \omega \left[ u(\omega + 3 \pi) - u(\omega - 3 \pi) \right],$$
where $u(t)$ is the heaviside step function.
Now, we can calculate the Inverse Fourier Transform with the formula,
$$ x_1(t) = \frac{1}{2 \pi} \int_{-\infty}^{\infty} X_1(j \omega) \exp\left(j \omega t \right) d \omega. $$
Plugging in our $X_1(j \omega)$ and simplifying leads us to the expression
$$ x_1(t) = \frac{1}{2 \pi} \int_{-3 \pi}^{3 \pi} 3 j \omega \exp\left(j \omega t \right) d \omega.$$
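As a numerical sanity check (my addition; it assumes nothing beyond the expressions above), the truncated integral can be approximated with the trapezoidal rule and compared against the closed form quoted in the question:

```python
import cmath
import math

A = 3 * math.pi  # the integration limits are +/- 3*pi

def x1_closed(t):
    # closed form quoted in the question
    return 3 / (math.pi * t * t) * (A * t * math.cos(A * t) - math.sin(A * t))

def x1_numeric(t, m=200000):
    # trapezoidal rule for (1/(2*pi)) * integral_{-A}^{A} 3j*w*exp(j*w*t) dw
    dw = 2 * A / m
    total = 0j
    for k in range(m + 1):
        w = -A + k * dw
        weight = 0.5 if k in (0, m) else 1.0
        total += weight * 3j * w * cmath.exp(1j * w * t)
    return (total * dw / (2 * math.pi)).real

for t in (0.1, 0.5, 1.0):
    print(t, x1_closed(t), x1_numeric(t))  # the two values agree closely
```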
I assume that you can evaluate this integral. If you need more assistance, let me know. | {
"domain": "dsp.stackexchange",
"id": 4631,
"tags": "fourier-transform"
} |
SDL image error while building gmapping | Question:
While building the gmapping package on my embedded system (odroid-x), I got this error:
Building CXX object CMakeFiles/image_loader.dir/src/image_loader.o
/home/linaro/ros/navigation/map_server/src/image_loader.cpp:43:27: fatal error: SDL/SDL_image.h: No such file or directory
It seems that I do not have the SDL_image.h file. If I download the file, where do I have to place it? However, it runs just fine on my laptop. Can anyone tell me what the problem is? Thanks!
Originally posted by Andrick on ROS Answers with karma: 21 on 2013-07-25
Post score: 0
Original comments
Comment by Peter Zhao on 2015-05-07:
Have you solved the problem?If you have ,please tell me how to solve it .Thanks a lot.
Answer:
I had the same issue when building the navigation stack. You can use Synaptic to install not only the package libsdl-image but also libsdl-image-dev!
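On Ubuntu/Debian the same packages can also be installed from the command line; the usual names for the SDL 1.2 image library are shown below, but check them for your release (e.g. with apt-cache search libsdl-image) since package names can vary:

```
sudo apt-get install libsdl-image1.2 libsdl-image1.2-dev
```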
Originally posted by zq7734509 with karma: 26 on 2015-06-15
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 15045,
"tags": "navigation, gmapping"
} |
Choosing nonzero entries from an array so no pair in same row or column | Question: Suppose we have an $n\times n$ array $A$ of non-negative real numbers in which the sum of each row and each column is $1$. We want to find $n$ entries of the array $(x_1,y_1), \dots, (x_n,y_n)$ such that
each $A[x_i,y_i]>0$ and
no pair $(x_i,y_i),(x_j,y_j)$ are in the same row or column (i.e., $i\neq j$ implies that $x_i\neq x_j$ or $y_i\neq y_j$ or both).
How can we prove that such a set of entries always exists?
Here is an example solution with a $3\times3$ table.
Answer: The Birkhoff–von Neumann theorem states that a doubly stochastic matrix (a matrix with non-negative entries in which rows and columns sum to 1) can be written as a convex combination of permutation matrices (0/1 matrices which contain precisely one 1 in each row and column). This immediately implies your result.
If you don't want to assume this theorem, you can use Hall's theorem directly. Given your table $A$, define a bipartite graph by having $n$ row-vertices, $n$ column-vertices, and connecting row $i$ and column $j$ if the $(i,j)$th entry is non-zero. Your goal is to find a perfect matching in the graph.
According to Hall's theorem, the graph contains a perfect matching if for all subsets $S$ of row vertices, the number of neighbors of $S$ is at least $|S|$. That is, given a set $S$ of rows, we need to show that the set
$$T = \{ j : A_{ij} \neq 0 \text{ for some $i \in S$}\}$$
contains at least $|S|$ columns. Indeed, the sum of the entries in the rows in $S$ is exactly $|S|$, and the sum of the entries in the columns in $T$ is exactly $|T|$. Therefore the sum of entries in $S \times T$ is at most $|T|$. However, by definition this sum must equal $|S|$ (since all other entries in the rows in $S$ are zero), and so $|S| \leq |T|$, which is what we wanted.
This argument shows that you can use standard algorithms for finding maximum matchings in bipartite graphs to actually find your $n$ entries.
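As a concrete sketch of that last remark (my own illustration, not from the original answer), Kuhn's augmenting-path algorithm run on the bipartite graph of strictly positive entries finds the required cells:

```python
# Find n positive entries of a doubly stochastic matrix, one per row and
# column, by computing a perfect matching with augmenting paths.
def positive_diagonal(A):
    n = len(A)
    match_col = [None] * n              # match_col[j] = row matched to column j

    def try_row(i, seen):
        # Try to match row i, recursively re-matching displaced rows.
        for j in range(n):
            if A[i][j] > 0 and j not in seen:
                seen.add(j)
                if match_col[j] is None or try_row(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    for i in range(n):
        if not try_row(i, set()):
            return None                 # impossible for doubly stochastic A
    return [(match_col[j], j) for j in range(n)]

A = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
print(positive_diagonal(A))  # e.g. [(0, 0), (2, 1), (1, 2)]: one cell per row/column
```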
We are now at an excellent position to prove the Birkhoff–von Neumann theorem. The proof is by induction on the number of non-zero entries. The argument above shows that there are at least $n$ non-zero entries, and in that case it's not hard to see that the matrix must be a permutation matrix, completing the base case.
Now consider an arbitrary doubly stochastic $A$ with more than $n$ non-zero entries, and find a matching $(i,\pi(i))$ in $A$. This matching corresponds to some permutation matrix $P$. Define $\alpha = \min_i A(i,\pi(i)) > 0$. The matrix $B = \frac{A - \alpha P}{1 - \alpha}$ is also doubly stochastic (since subtracting $\alpha P$ removes exactly $\alpha$ from each row and column), with at least one more zero entry. By induction, $B$ is a convex combination $\sum_t \beta_t P_t$ of permutation matrices. Hence so is $A$: $$A = \alpha P + (1-\alpha) \sum_t \beta_t P_t.$$
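The induction translates directly into an algorithm. Here is a minimal sketch of my own (brute-forcing the positive diagonal over all permutations for simplicity; for real use you would substitute a bipartite-matching routine):

```python
from itertools import permutations

def birkhoff(A, eps=1e-9):
    # Repeatedly peel off alpha * P, exactly as in the inductive proof.
    n = len(A)
    A = [row[:] for row in A]           # work on a copy
    parts = []
    while max(max(row) for row in A) > eps:
        # a permutation supported on positive entries exists by the theorem
        perm = next(p for p in permutations(range(n))
                    if all(A[i][p[i]] > eps for i in range(n)))
        alpha = min(A[i][perm[i]] for i in range(n))
        parts.append((alpha, list(perm)))
        for i in range(n):
            A[i][perm[i]] -= alpha
    return parts                        # coefficients sum to 1

B = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
print(birkhoff(B))  # two permutation matrices, each with coefficient 0.5
```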
The Birkhoff–von Neumann theorem can be stated in many different ways. It states that the set of doubly stochastic matrices is the convex hull of the permutation matrices. It also states (with some more work) that the vertices in the polytope of doubly stochastic matrices are the permutation matrices. | {
"domain": "cs.stackexchange",
"id": 5796,
"tags": "proof-techniques, matrices, search-problem"
} |
Can this code be improved regarding map within a for loop | Question: I am looking for the best performance from this code. The point is, I don't have a clear idea whether a map integrated in a for loop is a good idea. The problem is that the array is a nested object coming from a DB that I don't have any control over. Could there be a better version?
The code does the following:
The for loop populates data into divs using info from the Incoming array. That info varies; for example, one day will have 3 elements, another only one. The first div in a row should get the data from the first object in the incoming array, the second will get the second, and so on.
let [...divMulti] = document.querySelectorAll('.insert')
let Incoming = [{ day: [{ el: '1' }, { el: '2' }, { el: '3' }] }, { day: [{ el: '1' }] }, { day: [{ el: '1' }] }]
let data1 = Incoming.map(element => {
return element.day
})
for (let i = 0; i < divMulti.length; i++) {
data1[i].map(el => {
let item = `<div>Element${i}</div>`
divMulti[i].querySelector('.here').insertAdjacentHTML('afterbegin', item)
})
}
<div class='insert' >parentDiv
<div class='here'></div>
</div>
<div class='insert' >parentDiv
<div class='here'></div>
</div>
<div class='insert' >parentDiv
<div class='here'></div>
</div>
Answer: array.map() is meant to convey that you're transforming the values of one array into another in a different array. If you're intending to do a side-effect (like printing to the console, or manipulating the DOM) iteratively and not return a result, array.forEach() conveys this intention better.
There's also nothing wrong with a nested loop either. If you feel like it's much more readable to write loops than other methods, then go for it. Just make sure to be consistent and name your variables in a way that's not confusing.
Unless you're dealing with hundreds of thousands of data, or needing to render this in 60fps, I wouldn't worry so much about performance. What I would like you to worry about is code readability, and consistency.
In situations like these, I often prefer flattening the data so that each step is just an operation on an array (I get lost in nested loops). Whenever I want to debug a result in the chain, I can just pop it out into a variable for inspection. This is just my way of keeping the code readable; yours may be different.
let [...divMulti] = document.querySelectorAll('.insert')
let Incoming = [
{ day: [{el: '1'}, {el: '2'}, {el: '3'}] },
{ day: [{el: '4'}] },
{ day: [{ el: '5' }] }
]
// Just a utility function you find in other languages. Creates an
// array of pairs from two arrays.
const zip = (a, b) => a.map((k, i) => [k, b[i]]);
// Create insert-incoming pairs.
// [
// [insert[0], incoming[0]],
// [insert[1], incoming[1]],
// [insert[2], incoming[2]],
// ]
zip(divMulti, Incoming)
// This step then returns here-day.el pairs
// [
// [here[0], day[0].el],
// [here[0], day[1].el],
// [here[0], day[2].el],
// [here[1], day[0].el],
// [here[2], day[0].el],
// ]
.flatMap(([div, incomingItem]) => {
const here = div.querySelector('.here')
return incomingItem.day.map(dayItem => [here, dayItem.el])
})
// foreach to convey that we're just looping, not expecting a return.
// insert each el value to its corresponding here.
.forEach(([here, el]) => {
here.insertAdjacentHTML('afterbegin', `<div class="item">Element${el}</div>`)
})
.insert { border: 1px solid red; padding: 5px }
.here { border: 1px solid green; padding: 5px }
.item { border: 1px solid blue; padding: 5px }
<div class='insert'>parentDiv
<div class='here'></div>
</div>
<div class='insert'>parentDiv
<div class='here'></div>
</div>
<div class='insert'>parentDiv
<div class='here'></div>
</div> | {
"domain": "codereview.stackexchange",
"id": 41601,
"tags": "javascript"
} |
ros2_control - Created node by ControllerInterface gets nodename of controller_manager no matter what (diff_drive_controller) | Question:
Hi!
I started to work with ros2_control and want to set it up with a diff_drive_controller. Therefore I wrote an Actuator that implements the handling of my motors.
The issue I now have is that when loading the controller, the newly created node gets the name of the controller_manager, not the passed controller_name. Therefore I have a) two nodes with the same name and b) the defined parameters for the diffdrive_controller cannot be loaded because a node with that name does not exist (caused by a) :) )
To find the cause of the issue I cloned, step by step, the ros2_controllers and then the ros2_control GitHub repos into my workspace to be able to at least add some debug messages.
So my setup is now the following:
ros:foxy docker image with updated packages (apt update && apt upgrade)
/ros2_ws/src contains { ros2_controllers, ros2_control, myrobot_control, myrobot_description }
whereas the launchfile from myrobot_control looks like that
#!/usr/bin/env python3
import os
import xacro
import launch
from launch_ros.actions import Node
from launch import LaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.conditions import IfCondition, UnlessCondition
from launch.actions import DeclareLaunchArgument, GroupAction, ExecuteProcess, IncludeLaunchDescription
from launch.substitutions import LaunchConfiguration, PythonExpression
from ament_index_python.packages import get_package_share_directory, get_package_prefix
def get_parsed_urdf_string():
shared_dir = get_package_share_directory('myrobot_description')
path = os.path.join(shared_dir, 'urdf', 'myrobot.xacro')
return xacro.process(path)
def generate_launch_description():
urdf_description_string = get_parsed_urdf_string()
robot_description = { 'robot_description': urdf_description_string }
myrobot_controller_config = os.path.join(
get_package_share_directory("myrobot_control"),
"config",
"controller.yaml",
)
return LaunchDescription([
Node(
package="robot_state_publisher",
executable="robot_state_publisher",
name="robot_state_publisher",
output="screen",
parameters=[robot_description],
),
Node(
package="controller_manager",
executable="ros2_control_node",
name="controller_manager",
parameters=[ {"update_rate": 1 },
robot_description,
myrobot_controller_config ],
output={
"stdout": "screen",
"stderr": "screen",
},
),
])
and the controller.yaml
controller_manager:
ros__parameters:
update_rate: 1 # Hz
diffdrive_controller:
type: diff_drive_controller/DiffDriveController
diffdrive_controller:
left_wheel_names: [ 'front_left_wheel_joint', 'rear_left_wheel_joint' ]
right_wheel_names: [ 'front_right_wheel_joint', 'rear_right_wheel_joint' ]
wheel_separation: 0.51
wheels_per_side: 2
wheel_radius: 0.141
wheel_separation_multiplier: 1.0
left_wheel_radius_multiplier: 1.0
right_wheel_radius_multiplier: 1.0
publish_rate: 50.0
enable_odom_tf: False
Moreover, like initially said, I added some debug outputs to ros2_controls controller_interface.cpp
return_type
ControllerInterface::init(const std::string & controller_name)
{
RCLCPP_INFO(rclcpp::get_logger("ControllerInterface"), "!!!!!!!!!! controller_name: %s", controller_name.c_str());
node_ = std::make_shared<rclcpp::Node>(
controller_name,
rclcpp::NodeOptions().allow_undeclared_parameters(true));
lifecycle_state_ = rclcpp_lifecycle::State(
lifecycle_msgs::msg::State::PRIMARY_STATE_UNCONFIGURED, "unconfigured");
std::shared_ptr<rclcpp::Node> node2 = std::make_shared<rclcpp::Node>("test", rclcpp::NodeOptions().allow_undeclared_parameters(true));
std::shared_ptr<rclcpp::Node> node3 = std::make_shared<rclcpp::Node>("test", "controller_manager", rclcpp::NodeOptions().allow_undeclared_parameters(true));
std::shared_ptr<rclcpp::Node> node4 = std::make_shared<rclcpp::Node>("test");
std::shared_ptr<rclcpp::Node> node5 = std::make_shared<rclcpp::Node>("test", "controller_manager");
RCLCPP_INFO(rclcpp::get_logger("ControllerInterface"), "controller_name: %s, nodename: %s, fqn: %s", controller_name.c_str(), node_->get_name(), node_->get_fully_qualified_name());
RCLCPP_INFO(rclcpp::get_logger("ControllerInterface"), "controller_name: %s, nodename: %s, fqn: %s", controller_name.c_str(), node2->get_name(), node2->get_fully_qualified_name());
RCLCPP_INFO(rclcpp::get_logger("ControllerInterface"), "controller_name: %s, nodename: %s, fqn: %s", controller_name.c_str(), node3->get_name(), node3->get_fully_qualified_name());
RCLCPP_INFO(rclcpp::get_logger("ControllerInterface"), "controller_name: %s, nodename: %s, fqn: %s", controller_name.c_str(), node4->get_name(), node4->get_fully_qualified_name());
return return_type::OK;
}
Based on that infos I start the launchfile like
ros2 launch myrobot_control myrobot_control_launch.py
and try to load the diffdrive_controller in another terminal with
ros2 control load_controller diffdrive_controller
The output I get now is
[ros2_control_node-2] [INFO] [1620293060.380857877] [controller_manager]: update rate is 1 Hz
[ros2_control_node-2] [INFO] [1620293064.272236880] [controller_manager]: Loading controller 'diffdrive_controller'
[ros2_control_node-2] [INFO] [1620293064.275526854] [ControllerInterface]: !!!!!!!!!! controller_name: diffdrive_controller
[ros2_control_node-2] [WARN] [1620293064.275653258] [rcl.logging_rosout]: Publisher already registered for provided node name. If this is due to multiple nodes with the same name then all logs for that logger name will go out over the existing publisher. As soon as any node with that name is destructed it will unregister the publisher, preventing any further logs for that name from being published on the rosout topic.
[ros2_control_node-2] [WARN] [1620293064.289270181] [rcl.logging_rosout]: Publisher already registered for provided node name. If this is due to multiple nodes with the same name then all logs for that logger name will go out over the existing publisher. As soon as any node with that name is destructed it will unregister the publisher, preventing any further logs for that name from being published on the rosout topic.
[ros2_control_node-2] [WARN] [1620293064.319312104] [rcl.logging_rosout]: Publisher already registered for provided node name. If this is due to multiple nodes with the same name then all logs for that logger name will go out over the existing publisher. As soon as any node with that name is destructed it will unregister the publisher, preventing any further logs for that name from being published on the rosout topic.
[ros2_control_node-2] [WARN] [1620293064.334594738] [rcl.logging_rosout]: Publisher already registered for provided node name. If this is due to multiple nodes with the same name then all logs for that logger name will go out over the existing publisher. As soon as any node with that name is destructed it will unregister the publisher, preventing any further logs for that name from being published on the rosout topic.
[ros2_control_node-2] [INFO] [1620293064.351882523] [ControllerInterface]: controller_name: diffdrive_controller, nodename: controller_manager, fqn: /controller_manager
[ros2_control_node-2] [INFO] [1620293064.351914271] [ControllerInterface]: controller_name: diffdrive_controller, nodename: controller_manager, fqn: /controller_manager
[ros2_control_node-2] [INFO] [1620293064.351937679] [ControllerInterface]: controller_name: diffdrive_controller, nodename: controller_manager, fqn: /controller_manager/controller_manager
[ros2_control_node-2] [INFO] [1620293064.351946740] [ControllerInterface]: controller_name: diffdrive_controller, nodename: controller_manager, fqn: /controller_manager
[ros2_control_node-2] [INFO] [1620293064.393450838] [DiffDriveController]: namespace: /, controller_name: diffdrive_controller, nodename: controller_manager
^C[WARNING] [launch]: user interrupted with ctrl-c (SIGINT)
For some reason the node is changing its name to "controller_manager", ignoring or overwriting the name passed to it.
Am I doing something wrong, or is there a known issue?
Originally posted by Webadone on ROS Answers with karma: 89 on 2021-05-06
Post score: 0
Original comments
Comment by shonigmann on 2021-05-06:
This feels similar to an issue I recently had with rviz... If you remove the name="controller_manager" line from your launch file, does it resolve your node-name-clashing issue?
Comment by Webadone on 2021-05-07:
Ohh, indeed that solves the issue! Thanks a lot! But is that by design, or is that a bug? At the least, it would make sense to document somewhere that this can happen.
Answer:
As mentioned in the comments, setting the node name argument will set the name of all nodes created by the node as well. The issue is that, if you look in the source of the Node() class, setting the name argument is essentially equivalent to using --ros-args --remap __node:=<new_name>. This remapping seems to hold for all nodes generated by the node itself, as they are within the scope of the original remap.
Whether that's by design or it's an unintended "feature", I'm not entirely sure, but I've seen the same misuse of the name remap in a number of packages now, and I think the current implementation is pretty non-intuitive. In any case, I think most people would be better off using either namespaces with default node names, or node composition.
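As a concrete illustration, the fix suggested in the comments amounts to dropping the name argument (or using a namespace instead). A hypothetical launch-file fragment (a sketch only — the package, executable, and parameter-file paths are placeholders, and running it requires a ROS 2 installation):

```python
# Hypothetical excerpt from myrobot_control_launch.py.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='controller_manager',
            executable='ros2_control_node',
            # No name=... argument here: passing one would remap __node for
            # every node this process creates, causing the name clash above.
            namespace='myrobot',
            parameters=['config/myrobot_controllers.yaml'],  # placeholder path
        ),
    ])
```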
Originally posted by shonigmann with karma: 1567 on 2021-05-07
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 36411,
"tags": "ros, ros2, rclcpp, diff-drive-controller"
} |
Calling ros::NodeHandle::subscribe with a topic name constructed by sprintf | Question:
My topic names are supposed to change when I use different sensors. Therefore I want my node to subscribe to different topics depending on parameters on the parameter server. I tried to use this in my C++ node:
char left_camera_topic[50]
std::string camera_type;
n.getParam("/neuromorphic_stereo/config/camera_type",
camera_type);
sprintf(left_camera_topic,"/%s_left/events",camera_type);
ros::Subscriber leftCam =
n.subscribe(left_camera_topic_string,
subscriberCallback);
with
void subscriberCallback(const
dvs_msgs::EventArray& msg);
But building this gets me the error
error: no matching function for call
to 'ros::NodeHandle::subscribe(const
char [50], void (&)(const
EventArray&))' ros::Subscriber
leftCam =
n.subscribe(left_camera_topic_string,
subscriberCallback);
Is it not possible to construct the topic names like this, or am I doing something wrong?
Originally posted by max11gen on ROS Answers with karma: 164 on 2018-11-15
Post score: 0
Original comments
Comment by gvdhoorn on 2018-11-15:
Why did you delete your question?
Comment by max11gen on 2018-11-15:
@gvdhoorn I'm sorry, I deleted the question before you posted your answer because I just made a stupid mistake: I forgot the buffer-size argument of the subscribe function. Adding that argument makes the code work, and your answer therefore doesn't solve the problem.
Comment by max11gen on 2018-11-15:
@gvdhoorn I could, of course, reopen the question and you adjust your answer to fit the solution. And thanks for the remapping suggestion, I'll have a look at that.
Comment by gvdhoorn on 2018-11-15:
I would suggest to re-open the question, and answer your own question. Then accept that answer.
Answer:
Is it not possible to construct the topic names like this, or am I doing something wrong?
Raw char arrays are not std::strings, so that's why you get that error.
If you really must use char, you could wrap it in a std::string(..) ctor.
But I don't understand why you are using char in the first place: std::string is a type 'natively' supported by the ros::param API.
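A minimal sketch of the std::string approach (the helper name is made up, not from the original post):

```cpp
#include <cassert>
#include <string>

// Build "/<camera_type>_left/events" with std::string concatenation --
// no sprintf, no raw char buffer, and the result can be passed straight
// to NodeHandle::subscribe.
std::string make_topic(const std::string& camera_type) {
    return "/" + camera_type + "_left/events";
}
```

The remapping advice below still applies; this only shows the string handling.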
And an observation:
My topic names are supposed to change when I use different sensors. Therefore I want my node to subscribe to different topics depending on parameters on the parameter server.
Unless you are creating and destroying subscriptions at runtime (so have to change these during the entire runtime of your program, not just in the initialisation phase), don't do this. Don't parameterise topic names like this.
I would suggest to use remapping. See #q303611 for a (very) high-level overview of what that does.
Originally posted by gvdhoorn with karma: 86574 on 2018-11-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by max11gen on 2018-11-15:
The reason for me to use char-arrays instead of strings is that strings don't work as buffer in sprintf (at least I couldn't figure out, how. Kind of new to c++... ;))
Comment by gvdhoorn on 2018-11-15:
But with std::string, the + operator can be used to concatenate strings, so no need for sprintf(..) it would seem.
But I would still not use parameters for topic names, as I wrote in my answer.
Comment by max11gen on 2018-11-19:
You were right in the end, gvdhoorn. Using the + operator instead of printf worked just fine. Thanks for your help. | {
"domain": "robotics.stackexchange",
"id": 32052,
"tags": "ros, ros-kinetic, roscpp, qtcreator"
} |
Eulerian angle understanding | Question: I have a lot of confusion with Euler angles, so first of all I would like to address something I don't understand from the book, and maybe that will shed some light on the intuition behind Euler angles.
Here is the book's explanation of Euler angles:
The coordinate system labeled O-123 is defined by the three principal axes fixed to the rigid body and rotates with it. The coordinate system fixed in space is labeled Oxyz. A third, rotating system, O$x'y'z'$, providing a connection between the principal axes attached to the body and the axes fixed in space, is also defined, as follows: the $z'$-axis coincides with the 3-axis of the body (its symmetry axis); the $x'$-axis is defined by the intersection of the body's 1-2 plane with the fixed xy plane. This is called the line of nodes.
So what I don't understand is: is the system O$x'y'z'$ always changing with time, or is it fixed?
Answer: If the rigid body is rotating then in general the primed axes will be changing with time. An easier way to see this is to look at the Euler angles themselves as in this diagram. If, for instance, $\alpha$ is changing, then both the line of nodes (the $N$-axis in the diagram) and the $z'$-axis (the $Z$-axis in the diagram) are changing. | {
"domain": "physics.stackexchange",
"id": 24155,
"tags": "reference-frames, coordinate-systems, geometry"
} |
How can things around us have different colours if they have specific emission spectra? | Question:
Objects appear in different colours because they absorb some colours (wavelengths) and reflect or transmit other colours. The colours we see are the wavelengths that are reflected or transmitted.
As far as I understand, when an atom absorbs a photon, one of its electrons gets excited (= unstable). So it drops back to the ground state, emitting that energy in the form of photons of a specific colour.
This means that we should find objects to be in certain colours.
But, for example, hydrogen gas is colourless.
How is this possible? Shouldn't Hydrogen gas have a specific colour related to its emission spectra?
I was reading an article on this topic today (https://www.khanacademy.org/science/chemistry/electronic-structure-of-atoms/bohr-model-hydrogen/a/spectroscopy-interaction-of-light-and-matter), and this came into my mind.
Can anyone please help in clearing my understanding?
Answer: At room temperature, hydrogen gas exists as diatomic molecules that do not significantly absorb light in the visible spectrum, i.e., 400 nm to 700 nm. So hydrogen gas at room temperature is colorless. In a hydrogen gas discharge tube that is not energized by an appropriate high voltage source, the hydrogen gas is colorless, as shown in my photograph below:
Application of high voltage, via an appropriate high voltage power supply, results in a mixture of excited hydrogen atoms and molecules. Both exist in the discharge and both emit light as they continually cycle around their respective excitation and de-excitation pathways.
This figure shows my photographs of the energized hydrogen discharge tube.
On the left is the raw output. To my eyes, it is quite noticeably red hued. On the right, a filter has been used to attenuate light at wavelengths longer than 600 nm. The light transmitted through the filter is, to my eyes, cyan hued.
Others may see the colors differently, but no matter: we have spectrometers, spectrographs, and the like. So, collecting light from the energized hydrogen discharge tube, and dispersing it using one of my homemade echelle spectrographs results in the following two dimensional spectrum (an echellogram):
I have annotated the echellogram to show grating orders, the Balmer line wavelengths, in angstroms, and a little about the acquisition of the echellogram. N.B. 1 angstrom = 0.1 nm.
As noted above, the echellogram shows the four Balmer series lines in the visible spectrum. Each is present twice, in consecutive echelle grating orders, as a consequence of how echelle spectrographs may operate. The Balmer lines are from excited hydrogen atoms emitting light as they de-excite. The other light is from other excited species, e.g., excited hydrogen molecules.
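For reference, the four visible Balmer wavelengths annotated in the echellogram follow from the Rydberg formula, $1/\lambda = R_H\,(1/2^2 - 1/n^2)$. A quick numerical check (the constant value below is approximate):

```python
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in 1/m (approximate)

def balmer_nm(n):
    """Wavelength (nm) of the Balmer line for the transition n -> 2, n >= 3."""
    inv_wavelength = R_H * (1.0 / 2**2 - 1.0 / n**2)  # in 1/m
    return 1e9 / inv_wavelength

# n = 3, 4, 5, 6 give roughly 656, 486, 434, 410 nm: the H-alpha, H-beta,
# H-gamma, and H-delta lines visible in the echellogram.
```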
Note particularly the Fulcher alpha bands in the longer wavelength region of the echellogram. These have been known for many years and are useful for various purposes, e.g., in plasma diagnostics. A spectrum in conventional format, i.e., intensity versus wavelength, is shown below:
So it is a bit more complex than just those nice Balmer lines. The neat thing about spectroscopy is that the more you look, the more you see. | {
"domain": "physics.stackexchange",
"id": 96469,
"tags": "quantum-mechanics, electromagnetic-radiation, visible-light, atomic-physics, material-science"
} |
How do we create logical qubits in the surface code: understanding check | Question: I am learning the basics of surface code theory through this paper
I am trying to understand how we create multiple logical qubits. The goal of my post is to check my understanding.
What I understood so far:
One possible way to create a logical qubit is to create holes in the array, i.e. we remove some Z and X stabilizers. One requirement we would like to have is to avoid logical operators that require applying Pauli operators from one edge of the surface to the holes. For this reason, we usually create a logical qubit by creating a pair of holes in the surface, such as in the image below.
On the left, we removed two $Z$ stabilizer measurements (this is called a double $Z$-cut). In principle we then removed two constraints and hence added two logical qubits (on a conceptual level, though as written below we will not use both of them). The total dimension of the stabilized space is then $2^3$ (as the surface without holes could contain one logical qubit, and we have now conceptually added two extra logical qubits).
Now, in practice we will consider storing only one logical qubit on this surface, whose logical Paulis are the $X_L$ and $Z_L$ represented.
In this example we then only use $2$ dimensions out of the $2^3$ conceptually available (one logical qubit out of the three conceptually available). Is that correct?
If we want to add logical qubits, we create more pairs of holes. Hence for $N$ logical qubits that we want to store, we will create $N$ pairs of holes on the surface. Conceptually we could store $2N+1$ logical qubits with this construction, but we will only manipulate $N$ of them. Is that correct?
Answer: This answer provides a "combinatorial" approach to computing the dimension of the logical subspace and is meant to complement the "topological" approach described in Craig Gidney's answer.
Importance of generator independence
Our starting point is the fact that a stabilizer group $S$ on $n$ physical qubits generated by $g$ independent generators stabilizes a $2^{n-g}$-dimensional subspace. See for example proposition $10.5$ on page $458$ in Nielsen & Chuang. This fact supports the general rule alluded to in the question that removing one of the independent generators adds a logical qubit.
The surface code is a stabilizer code so the above rule applies. However, careless application of the rule may lead to the incorrect conclusion that punching a hole in the surface code always adds a logical qubit. This is only true if the generator removed is independent of the other generators. For example, the $Z_c$ generator at the center of the code pictured below
is easily seen to be equal to the product of all the other $Z$ (dark) generators and therefore is not independent. Consequently, removing $Z_c$ fails to introduce a degree of freedom necessary for the formation of a logical qubit.
Topological interpretation
This fact has a topological interpretation. Note that the hole formed by the removal of $Z_c$ has boundary type opposite to the outer boundary of the code. Therefore, it is impossible to find two anti-commuting logical observables, as described in Craig's answer.
Planar codes with two or more corners
It turns out that in a planar hole-free surface code all plaquettes correspond to independent generators if and only if the code has at least two corners defined here as data qubits incident on exactly two stabilizer generators (one of each type). Such a code possesses at least one boundary of each type. Introducing a pair of corners is associated with the net loss of one independent stabilizer generator. Therefore, a planar hole-free surface code with $c\ge 2$ corners encodes
$$
k=\frac{c-2}{2}\tag1
$$
logical qubits. Moreover, removal of any generator from the bulk (so that the number of corners is unaffected) is associated with the loss of a single independent generator. Therefore, a planar surface code with $c\ge 2$ corners and $h$ bulk generators removed encodes
$$
k = h + \frac{c-2}{2}\tag2
$$
logical qubits. The effect of punching a hole larger than a single plaquette depends on the number of removed generators and data qubits, but it is possible to punch a hole of any size while removing $q$ data qubits and $q+1$ generators leading to the introduction of exactly one degree of freedom.
Dimension of the logical subspace
In this example we then only use $2$ dimensions out of the $2^3$ conceptually available (one logical qubit out of the three conceptually available). Is that correct?
Depends on the boundary. By equation $(2)$, the answer is yes if we assume for example that the boundary includes four corners, i.e. the code has two boundaries of each type.
If we want to add logical qubits, we create more pairs of holes. Hence for $N$ logical qubits that we want to store, we will create $N$ pairs of holes on the surface. Conceptually we could store $2N+1$ logical qubits with this construction, but we will only manipulate $N$ of them. Is that correct?
Yes, but this is wasteful. By equation $(2)$, if the boundary has two corners, $N$ holes are sufficient. The holes may be of arbitrary type.
If the boundary has no corners, $N$ holes turn out to be sufficient as long as they match the boundary type. Finally, with $N+1$ holes of the same type, all logical operators on $N$ logical qubits may be obtained from chains going around the holes and linking holes to each other so the boundary may be ignored. This is a consequence of the fact that pair-of-holes encoding allows different logical qubits to share one of the holes. | {
"domain": "quantumcomputing.stackexchange",
"id": 3418,
"tags": "error-correction, surface-code"
} |
Why is Earth's inner core solid? | Question: I have never understood why earth's inner core is solid. Considering that the inner core is made of an iron-nickel alloy (melting point around 1350 C to 1600 C) and the temperature of the inner core is approximately 5430 C (about the temperature of the surface of the sun). Since Earth's core is nearly 3-4 times the melting point of iron-nickel alloys how can it possibly be solid?
Answer: Earth's inner core is solid even though the temperature is so high because the pressure is also very high. According to the Wikipedia article on the Earth's inner core, the temperature at the center is $5,700\ \text{K}$ and the pressure is estimated to be $330$ to $360\ \text{GPa}$ ($\sim3\cdot10^{6}\ \text{atm}$).
The phase diagram shown below (taken from this paper) shows the liquid/solid transition, where fcc and hcp are two different crystalline forms of solid iron. You can see clearly from the slope of the line going off toward the upper right that iron should be solid at this temperature and pressure. | {
"domain": "earthscience.stackexchange",
"id": 65,
"tags": "geophysics, core"
} |
Find mobile phone contact according to number typed | Question: I am using the code below trying to replicate the mobile phone feature that displays one or more contact names according to the numbers typed.
This code is intended to be used on a JavaScript course I am teaching, so I try to keep it as simple as possible in order to walk the students through a possible scenario and the required steps to implement a solution programmatically.
I would like to hear any suggestions for changes or comments before moving this example to class.
Thank you very much!
(You can find the full code, along with some pretty styling at this codepen link)
HTML:
<div class="tryout">
<p>Numbers to try:</p>
<p><span></span></p>
</div>
<div class="mobile">
<label for="number">Send SMS</label>
<input type="text" id="number" name="number" placeholder="Enter telephone number">
</div>
<div class="found"></div>
JAVASCRIPT:
const contacts = {
kostas: 6986100100,
maria: 6986100200,
george: 6986300300,
sofia: 6986300400,
chris: 6987500500,
marina: 6944600600
}
const input = document.querySelector("#number");
const found = document.querySelector(".found");
input.addEventListener( "keyup", handleKeyUp );
function handleKeyUp( e ){
found.innerHTML = "";
Object.entries(contacts).map((contact)=>{
let [ key, value ] = contact;
let inputValue = e.target.value;
value = String( value );
if ( value.indexOf( inputValue ) === 0 ){
found.innerHTML += `
<p id="${key}">
<strong>${key}</strong>:
${value.replace( inputValue, `<span>${ inputValue }</span>` )}
</p>
`;
}
})
}
NOTES: Since this is intended for an audience of beginners, best practices, code organization and probably performance considerations are of great importance here.
Answer: Here's a few suggestions:
I would model the contacts list as an array of objects, like this:
const contacts = [
{ name: 'kostas', number: '6986100100' },
{ name: 'maria', number: '6986100200' },
{ name: 'george', number: '6986300300' },
{ name: 'sofia', number: '6986300400' },
{ name: 'chris', number: '6987500500' },
{ name: 'marina', number: '6944600600' }
];
If you use your contacts' first name as object keys, you can't have two contacts with the same name. Moreover, if each contact is represented by an object, you can add additional information to the contact in the future. I would also write the phone numbers as a string, not a number, since phone numbers can contain non-digit characters (like '+') or leading zeros.
The search returns a list of corresponding contacts, so you might as well replace the results div with a list, a <ul class="found"></ul> for example.
Searching for and displaying the matches becomes easy now:
function handleKeyUp(e) {
const inputValue = e.target.value.trim();
if (inputValue) {
found.innerHTML = contacts
.filter(contact => contact.number.startsWith(inputValue))
.map(
contact =>
`<li><strong>${contact.name}</strong>: ${contact.number.replace(
inputValue,
`<span>${inputValue}</span>`
)}</li>`
)
.join('');
} else {
found.innerHTML = '';
}
}
No need to convert the input value to a string; it's a string already. Notice how there's only one update of found.innerHTML per keystroke. Your code updated the DOM once for each found match, but DOM operations are expensive and should be minimized.
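The filtering step can also be exercised on its own, without any DOM (a sketch reusing the array-of-objects model; the helper name is illustrative):

```javascript
const sampleContacts = [
  { name: 'kostas', number: '6986100100' },
  { name: 'maria', number: '6986100200' },
  { name: 'marina', number: '6944600600' }
];

// Return the contacts whose number starts with the typed prefix.
function findByPrefix(contacts, prefix) {
  return contacts.filter(contact => contact.number.startsWith(prefix));
}
```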
"domain": "codereview.stackexchange",
"id": 36332,
"tags": "javascript, autocomplete"
} |
Does electrically heated water have an adverse effect on hair? | Question: I know I should have asked this question on a different site, but this was the most suitable site available right now for my question. Perhaps after the proposed site goes into beta, we can move it there.
Coming to the question,
Some of my well-wishers always say that electrically heated water (specifically, water heated using a coil heater), when used to wash hair, affects the hair adversely. Though I always thought it was a myth, lately I remembered the concept of electrolysis that I had studied in school.
According to what's running through my mind, in this setup the electric heater may act as an anode and the water container as a cathode (or vice versa), and the minerals in the water may act as an electrolyte. So my fear is that there may be some chemical reactions going on that result in different compounds which affect the hair.
How far is my fear justified?
Answer: No, it cannot. With coil heaters, the electricity flow is insulated from the water. If there was any way for electricity to flow from the coil to the water, the coil would be shorted, electricity would no longer flow through the coil, and you wouldn't have any heating.
Electrically heating water has the same effects as heating it over a stove. There may be a slight change in dissolved gas content, but that's it. Nothing to be afraid of. And there shouldn't be any chemical changes going on, except possibly a slight formation of verdigris (the copper analogue of rust) on a copper coil. (This would happen anyway while heating if the pot were copper, and it is of a negligible extent.)
"domain": "physics.stackexchange",
"id": 7375,
"tags": "heat, water"
} |
Software-based sound card OFDM | Question: I'm trying to implement a sound-card-based OFDM modem. A paper says:
...A serial to parallel converter is applied and the IFFT operation is
performed on the parallel complex data.
Say I have a stream $S$ of complex numbers; then after serial-to-parallel (S/P) conversion, it becomes $S_1$ and $S_2$. But as far as I understand, OFDM requires the sub-carriers to be orthogonal to each other. So can I just perform the IFFT twice directly on the streams without defining the frequencies of the sub-carriers? I'm quite puzzled. Any help on this will be greatly appreciated. Thanks.
Answer: The serial/parallel converter shown in most OFDM transmitter block diagrams is required in a hardware implementation but not very meaningful for a software implementation: in software there is no IFFT block that actually works with parallel inputs; it's all serial processing in reality.
Let $S(k)$ be your input stream of complex numbers and $N$ be the number of subcarriers, i.e. the IFFT size. Then you have to calculate the IFFT of every $N$ samples in $S(k)$, i.e. of the vectors $[S(iN),\ldots,S((i+1)N - 1)]$, with symbol index $i=0,1,2,\ldots$. The result is the OFDM modulated time domain signal. If I understand your example correctly you have to apply the IFFT to both $S_1$ and $S_2$, yes.
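In code, the "serial-to-parallel plus IFFT" step reduces to reshaping the symbol stream into blocks of $N$ and running one IFFT per block. A NumPy sketch (the function name is hypothetical):

```python
import numpy as np

def ofdm_modulate(symbols, n_sub):
    """Group the serial complex-symbol stream into blocks of n_sub
    (the 'serial-to-parallel' step) and IFFT each block, producing the
    OFDM-modulated time-domain signal. Trailing symbols that do not fill
    a complete block are dropped in this sketch."""
    n_blocks = len(symbols) // n_sub
    blocks = np.reshape(symbols[:n_blocks * n_sub], (n_blocks, n_sub))
    return np.fft.ifft(blocks, axis=1).ravel()
```

An FFT of any length-$N$ output block recovers the original symbols, which is exactly what an OFDM receiver does after synchronization.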
The output of the IFFT operation is also complex, in general. Unless you use two audio channels followed by an in-phase/quadrature (I/Q) mixer that shifts the signal to a carrier frequency, the vectors defined above have to exhibit a complex conjugate symmetry in order to obtain a real-valued time domain signal (so-called discrete multitone, DMT).
The allocation of data to subcarriers happens through the IFFT algorithm you're using. The FFTW library, for example, allocates $S(0)$ to frequency zero and $S(N-1)$ to "frequency" $-N/2$. What physical frequency this corresponds to in turn depends on the sampling frequency of the digital-to-analog converter you're using. Orthogonality is inherent to the (I)FFT algorithm, you don't have to apply any further processing to obtain it. | {
"domain": "dsp.stackexchange",
"id": 1457,
"tags": "modulation, software-implementation, ofdm"
} |
How to simply model fluid flow over a heightmap? | Question: My input is a height map, i.e. the ground of the 3D world, represented by a function defined on an $N$-sized grid, $h:\{0,\dots,N\}^2 \rightarrow \mathbb{R}$. The height of the ground over $(x,y,0)$ is $h(x,y)$.
I would like to model an approximate fluid flow over this height map. As a first attempt, I assume the fluid is perfect, incompressible, and constrained to the ground. A solution to the problem can then be another function $f:\{0,N\}^2 \rightarrow \mathbb{R}$ where $f(x,y)$ is the height of fluid over the ground point $(x,y,h(x,y))$. The fluid top surface is free.
With the perfect and incompressible fluid hypotheses, the Navier-Stokes equations are as follows, with $\vec{V}$ the velocity vector, $P$ the pressure, and $\vec{F}$ all other mass forces ($f$ being already taken by the fluid height):
$$\nabla\cdot \vec{V} = 0$$
$$\frac{\partial\vec{V}}{\partial t} + (\vec{V}\cdot\nabla)\vec{V}= \vec{F} - \frac{1}{\rho} \nabla P~~(\text{Eq. }1)$$
But I think I can simplify even more these equations due to the last hypothesis, yet I do not know how.
EDIT:
Maybe I could assume
$$\frac{dV_z}{dt} \ll 1$$
and use the projection of the previous equation onto the $z$ axis to derive, for any $x,y,z$ such that the point $(x,y,z)$ is in the fluid:
$$P(x,y,z) = P_0 + \rho g \,(z_{\text{fluid surface}} - z)$$
I can then integrate this pressure over the height of the fluid column for a given $x,y$ and find the mean pressure of this column, then use this value in the $x$ and $y$ axis projections of equation 1 in order to find the velocities.
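Carrying out that integration explicitly (writing $f$ for the fluid depth at $(x,y)$, so that $z_{\text{fluid surface}} = h + f$):

```latex
\bar{P}(x,y)
  = \frac{1}{f}\int_{h}^{h+f}\Big[P_0 + \rho g\,(h + f - z)\Big]\,dz
  = P_0 + \frac{\rho g f}{2}
```

so the column-mean pressure exceeds the surface pressure by $\rho g f/2$, i.e. the hydrostatic pressure at mid-depth.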
Would this be a correct reasoning?
Answer: I assume you want to solve the shallow water equations, which are the depth-averaged versions of the Euler equations. They can be written as
$\frac{\partial h}{\partial t}+\frac{\partial (uh)}{\partial x}+\frac{\partial (vh)}{\partial y}=0$
$\frac{\partial (uh)}{\partial t}+\frac{\partial}{\partial x}\big(u^2h+\frac{1}{2}gh^2\big)+\frac{\partial}{\partial y}(uvh)=0$
$\frac{\partial (vh)}{\partial t}+\frac{\partial}{\partial y}\big(v^2h+\frac{1}{2}gh^2\big)+\frac{\partial}{\partial x}(uvh)=0$
where $h=h(x,y,t)$ is the water depth field, and $u$ and $v$ are the velocity component fields in the $x$ and $y$ directions.
These equations can be solved with many numerical schemes for instance finite difference methods (e.g. using the Lax-Wendroff scheme).
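As an illustration of the Lax-Wendroff idea, here is one update step for the simpler linear advection equation $u_t + a\,u_x = 0$ on a periodic grid (a sketch only, not the full shallow-water solver):

```python
import numpy as np

def lax_wendroff_step(u, c):
    """One Lax-Wendroff update for u_t + a u_x = 0 on a periodic grid.
    c = a*dt/dx is the Courant number; the scheme is stable for |c| <= 1.
    The same predictor/corrector structure generalizes to the SWE system."""
    up = np.roll(u, -1)  # u_{i+1}, with periodic wrap-around
    um = np.roll(u, 1)   # u_{i-1}
    return u - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2.0 * u + um)
```

On a periodic grid the difference terms telescope, so the scheme conserves the total of $u$ exactly (up to floating-point error).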
However, your concept of using particles seems to require a Lagrangian approach. From this point of view, the smoothed particle hydrodynamics (SPH) method is a possible choice. Despite certain difficulties, the shallow water equations above have fortunately already been solved successfully with SPH. I would suggest reading the papers below (if you have access to them) for the technical details of the solution.
[*] Giulia Rossi, Michael Dumbser, Aronne Armanini, "A well-balanced path conservative SPH scheme for nonconservative hyperbolic systems with applications to shallow water and multi-phase flows," Computers & Fluids, vol. 154, 2017, pp. 102-122.
[**] Xilin Xia, Qiuhua Liang, Manuel Pastor, Weilie Zou, Yan-Feng Zhuang, "Balancing the source terms in a SPH model for solving the shallow water equations," Advances in Water Resources, vol. 59, 2013, pp. 25-38.
Edit:
Although the implementation of SWE-SPH is easy compared to finite element or finite volume solvers, I think it is better to use an existing solver like SWE-SPHysics, which is an open source tool and a little brother of DualSPHysics. If you desire a little more programming and higher configurability, there are further options like Aboria or Nauticle (my own project), which are general purpose particle simulation tools.
"domain": "physics.stackexchange",
"id": 47390,
"tags": "fluid-dynamics, pressure, simulations"
} |
How is electromagnetic induction analogous to gravitational frame dragging? | Question: This wiki says: https://en.wikipedia.org/wiki/Frame-dragging
Qualitatively, frame-dragging can be viewed as the gravitational
analog of electromagnetic induction.
I was wondering what exactly this means, and the wiki for electromagnetic induction doesn't seem to go into it. I wasn't able to Google anything that shed light on this analogy either.
What does this sentence mean exactly? How is gravitational frame dragging, analogous to electromagnetic induction?
Answer: The common factor is that the complete interaction includes velocity-dependent effects.
In the case of electromagnetism: the Lorentz force is velocity-dependent.
Newtonian gravity is purely position dependent, and that has implications for what has to be assumed about the speed of gravity.
A theory of gravity in which gravity is purely position dependent, and with a finite speed of gravity, is a theory that does not comply with the principle of conservation of momentum. (The problem is often referred to as 'force aberration'; with a finite speed of propagation (and pure position dependence) the force does not point instantaneously to the point where the attracting body is.)
Maxwell's equations for the electromagnetic field imply a particular speed for the propagation of the electromagnetic interaction: the speed of light.
Given that it was recognized that electromagnetic interaction propagates at a finite speed physicists began to explore possibilities of formulating a theory of gravity with a finite speed for the propagation of the gravitatitional interaction.
The physicists of the time recognized that a theory of gravity with a finite speed of propagation would have the potential of accounting for the anomalous precession of Mercury.
In order for such a theory to comply with conservation of momentum the interaction must include velocity-dependent effects, in just the right way.
The constraint of compliance with the principle of conservation of momentum narrows down the possibilities. In that sense it is very much a guiding constraint.
Einstein's 1915 General Relativity rendered all previous attempts at formulating finite-speed theory of gravity obsolete.
For more information:
1999 article by Steve Carlip:
'Aberration and the Speed of Gravity'
Carlip discusses the interconnections between speed of propagation of an interaction, velocity-dependent effects of the interaction, and the emission of waves. | {
"domain": "physics.stackexchange",
"id": 84197,
"tags": "electromagnetism, special-relativity, electromagnetic-induction, frame-dragging"
} |
Ros Occupancy Grids | Question:
Hello guys,
A partner and I are working on a Turtlebot project at the Karlsruhe College in Germany.
Our project aims to make the Turtlebot decide whether it can, for example, drive under a table or not.
We thought that the Turtlebot could somehow measure the height of the obstacle in front of it and then
decide (it has an XTion Pro Live camera). During this, the Turtlebot doesn't have to move (rotation may be allowed).
Our idea was to record a point cloud (we already tried out Rtabmap, which worked pretty well) and export
the data (.pcd / .ply) for further use. We didn't know how to proceed after this, since neither of us is a
real computer expert (we both study Mechatronics).
We also thought we would have to include some libraries into a Cpp file (like the Pcl Library), but we
also found the Octomap library and Gridmap (whose description seem pretty useful to us).
We're kinda lost. If you know any detailed way or package to build that function, please let us know!
Originally posted by fatstudentsweating on ROS Answers with karma: 5 on 2015-08-01
Post score: 0
Answer:
I think you are on the right path. I would use the octomap_server and, if you set the octomap filters appropriately (which can be done in the launch file), you can filter out those obstacles which are higher than the robot.
Another option is to use the pass-through filter of PCL, but using the octomap server is a much easier option.
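As a conceptual sketch of the height-thresholding idea (plain numpy on made-up points, not the actual PCL or octomap API; the 0.42 m robot height is an assumption):

```python
import numpy as np

# Hypothetical point cloud: N x 3 array of (x, y, z) points, z up, metres.
cloud = np.array([
    [1.0,  0.0, 0.10],  # low obstacle
    [1.2,  0.1, 0.85],  # table top -- above the robot, can drive under it
    [0.8, -0.2, 0.30],  # within the robot's height band
])

robot_height = 0.42  # assumed Turtlebot height in metres

# Keep only points in the height band the robot could actually hit;
# this is the essence of a pass-through / octomap height filter.
blocking = cloud[(cloud[:, 2] >= 0.0) & (cloud[:, 2] <= robot_height)]
print(len(blocking))  # 2
```

The octomap_server parameters mentioned in the comments below apply the same kind of z-thresholding before building the occupancy map.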
Originally posted by Javier V. Gómez with karma: 1305 on 2015-08-03
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by fatstudentsweating on 2015-08-03:
Hello Javi,
thank you very much. I was browsing the documentation of the octomap server; as you said, I could
use ~pointcloud_[min|max]_z (float) or ~occupancy_[min|max]_z (float) to threshold the height. But how can I implement something like: if (is there anything straight ahead at 80 cm?) {...}?
Comment by Javier V. Gómez on 2015-08-04:
I would open a new question for that, as the solution is not that easy. Please accept and/or upvote the answer if it was useful for you. | {
"domain": "robotics.stackexchange",
"id": 22351,
"tags": "ros, navigation, octomap, asus-xtion-pro-live, costmap"
} |
Compatible partial permutations | Question: Please, correct my terminology as I am not a combinatorician
(I am using http://en.wikipedia.org/wiki/Partial_permutation). Please, refer me to the solution if this is a solved problem.
Let $P_k$ be partial permutations of the set $\{1, 2, \ldots, n\}$, e.g. $n=5$, $P_1=(1,\cdot,3,2,\cdot)$, and $P_2=(\cdot,2,3,1,5)$. Let $T_k$ be permutations from the same set, e.g. $T_1=(1,3,4,2,5)$. I will write $P_1[4]=2$, $P_2[4]=1$, and $T_1[4]=2$. I will say, that $P_1[2]$, $P_1[5]$, and $P_2[1]$ do not exist.
Let us define that a given partial permutation $P$ is included in a given permutation $T$ if for all $i,j$ for which $P[i]$ and $P[j]$ exist: if $P[i] < P[j]$ then $T[i] < T[j]$. For example, the given $P_1$ and $P_2$ are both included in the given $T_1$.
Let us define that two given partial permutations are compatible if there exists a permutation such that both partial permutations are included in it. For example, the given $P_1$ and $P_2$ are compatible, because they are both included in $T_1$.
QUESTION: Propose an algorithm to check whether two given partial permutations are compatible. Please also propose a suitable data structure for representing partial orders that supports an efficient algorithm.
(I am tagging this question with BDD because the solution may help me to represent BDDs with different variable ordering more concisely)
Answer: We can reduce it to the following problem:
Given a directed graph on vertex set $\{1,2,\dots,n\}$, we want to know if there exists a permutation $T$ such that for each edge $(i,j)$ in the graph, $T[i]<T[j]$.
The reduction: add an edge to the graph for each pair $i,j$ where both $P_b[i]$ and $P_b[j]$ exist for some $b \in \{1,2\}$; the orientation of the edge is determined by whether $P_b[i]<P_b[j]$ or not. This yields a directed graph that contains the union of the edges from $P_1$ and the edges from $P_2$.
Conveniently, the new problem is easy. Of course, if there are any cycles in the graph, then there is no such permutation $T$ (i.e., the two partial permutations $P_1,P_2$ are not compatible). Conversely, if there are no cycles in the graph, then there is such a permutation $T$ (i.e., the partial permutations $P_1,P_2$ are compatible). Do you see why? We can simply apply a topological sort to the graph and use it to construct an example of such a permutation $T$, where we assign $T[i]=j$ if vertex $i$ is the $j$th vertex to be visited in the topologically sorted order.
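The whole check fits in a few lines of Python (a hedged sketch using the standard library's graphlib; the dict representation of partial permutations is chosen here purely for illustration):

```python
from graphlib import TopologicalSorter, CycleError

def compatible(p1, p2, n):
    """Check whether two partial permutations of {1..n} are compatible.

    Partial permutations are given as dicts mapping position i -> value P[i];
    undefined positions are simply absent.  Returns a witness permutation T
    as a dict, or None if the two are incompatible.
    """
    # graphlib's TopologicalSorter takes node -> set of predecessor nodes;
    # i is a predecessor of j whenever some P has P[i] < P[j].
    preds = {i: set() for i in range(1, n + 1)}
    for p in (p1, p2):
        for i in p:
            for j in p:
                if p[i] < p[j]:
                    preds[j].add(i)
    try:
        order = list(TopologicalSorter(preds).static_order())
    except CycleError:
        return None  # a cycle means the partial permutations are incompatible
    # Positions earlier in the topological order get smaller values of T.
    return {pos: rank for rank, pos in enumerate(order, start=1)}

# P1 = (1,.,3,2,.) and P2 = (.,2,3,1,5) from the question, n = 5
P1 = {1: 1, 3: 3, 4: 2}
P2 = {2: 2, 3: 3, 4: 1, 5: 5}
T = compatible(P1, P2, 5)
print(T is not None)  # True: compatible, and T is a witness
```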
I'm not sure what BDDs have to do with it. This does not seem like a research-level question. | {
"domain": "cstheory.stackexchange",
"id": 2441,
"tags": "sorting, permutations, partial-order, binary-decision-diagrams"
} |
How to invert coordinate frames of a Transform | Question:
I have a fixed transform coming from the ar_track_alvar pkg that gives me the marker's pose in the camera_link frame. I want to get the inverse of it and save the exact data but reversed. The problem is that, since the marker frame is reversed, the inverse gives me the wrong result.
How can I make the marker's x and y coordinates consistent with the camera_link ones? What is the easiest way to achieve this?
I want to have x pointing forward (red) and y on the left (green).
Thanks for anyone helping me out!
Cheers,
Simone.
Originally posted by simff on ROS Answers with karma: 98 on 2018-02-27
Post score: 1
Original comments
Comment by tuandl on 2018-02-27:
Since you already have a transform from your markers to camera_link, have you tried to apply a rotation with respect to camera_link in your marker's frame?
Comment by simff on 2018-02-27:
Hi @tuandl thanks for the comment! Not yet, I do not know how to code it; I am new to ROS and tf. How would you do that? Can you append here a simple code snippet or the steps I have to take?
I appreciate your help!
Answer:
This is a basic tf tutorial
You want to do something like
try{
  listener.lookupTransform("camera_link", "ar_marker", ros::Time(0), transform);
}
catch (tf::TransformException &ex){
  ROS_ERROR("%s", ex.what());
}
Follow the tutorial to get the marker's positions.
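On the inversion itself: a rigid transform stored as a 4x4 homogeneous matrix inverts as $T^{-1} = [R^\top \mid -R^\top t]$, which is essentially what tf's inverse() computes. A small numpy sketch, independent of ROS and with a made-up marker pose:

```python
import numpy as np

def invert_transform(T):
    """Invert a 4x4 homogeneous transform: T^-1 = [R^T | -R^T t]."""
    R, t = T[:3, :3], T[:3, 3]
    Tinv = np.eye(4)
    Tinv[:3, :3] = R.T
    Tinv[:3, 3] = -R.T @ t
    return Tinv

# Made-up marker pose in camera_link: 90 deg about z, 1 m ahead in x.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
T_cam_marker = np.array([[c, -s, 0, 1.0],
                         [s,  c, 0, 0.0],
                         [0,  0, 1, 0.0],
                         [0,  0, 0, 1.0]])

# camera pose expressed in the marker frame
T_marker_cam = invert_transform(T_cam_marker)
```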
Originally posted by tuandl with karma: 358 on 2018-02-27
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 30159,
"tags": "ros, ros-kinetic, ar-track-alvar, ar-kinect, transform"
} |
How should this FFT output be interpreted? | Question: I am currently reading some code, so I can understand how this FFT output is actually stored.
I have a function which computes the FFT of an audio file and stores it in the format
[real0, realN/2-1, real1, im1, real2, im2, ...]
which kind of confuses me. As I remember, the DFT/FFT converts from the time domain to the frequency domain, in which the real part is the amplitude, and the complex part is the phase, but aren't I missing some information regarding which frequency has which amplitude and which phase?
Am I missing something?
Or am I right when I say that I can't extract the amplitude for a certain frequency from these complex numbers?
I am currently using the Kaldi framework, in which I am trying to compute the spectrogram of an audio file...
The output file it has generated looks like this.
This output was generated from a 1 second audio file.
sampled with 16 kHz, and frame length = 25 ms and hop_length = 10ms.
The number of data points the function has generated is 25816.
I am not sure how I should interpret it; I mean, is each value supposed to be the power for a given frequency, and if so, which frequency?
Answer: "[...] the real part is the amplitude, and the complex part is the phase [...]" No, complex numbers (and that's what the output of a DFT/FFT is) can be represented either by their real and imaginary parts (as in the output vector of your routine), or by their magnitudes and phases. Please refer to the linked page on how to convert one representation to the other.
The indices $i=0,1,\ldots N-1$ of the output vector of a length $N$ DFT/FFT are related to actual frequencies via the sampling frequency $f_s$:
$$f_i=\frac{if_s}{N}$$
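To make this concrete, here is a small numpy sketch using the question's 16 kHz / 25 ms framing (note numpy's rfft returns ordinary complex bins, not Kaldi's packed real/imaginary layout):

```python
import numpy as np

fs = 16000                # sampling rate in Hz (as in the question)
N = 400                   # 25 ms frame at 16 kHz
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 1000 * t)    # a 1 kHz test tone

X = np.fft.rfft(x)                  # N/2 + 1 complex bins for real input
mags = np.abs(X)                    # magnitude = sqrt(re^2 + im^2)
phases = np.angle(X)                # phase = atan2(im, re)
freqs = np.arange(len(X)) * fs / N  # f_i = i * fs / N

peak = freqs[np.argmax(mags)]
print(peak)  # 1000.0
```

The magnitude at bin $i$ is the amplitude at frequency $f_i$, so nothing is missing: amplitude and phase are both encoded in each complex value.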
If the input vector is real-valued, the first $N/2+1$ values (assuming $N$ is even) contain all information about the signal. | {
"domain": "dsp.stackexchange",
"id": 4680,
"tags": "fft, power-spectral-density, speech-recognition, frequency-domain, complex"
} |
Why is the singlet state for two spin 1/2 particles anti-symmetric? | Question: For two spin 1/2 particles I understand that the triplet states ($S = 1$) are:
$\newcommand\ket[1]{\left|{#1}\right>}
\newcommand\up\uparrow
\newcommand\dn\downarrow
\newcommand\lf\leftarrow
\newcommand\rt\rightarrow
$
\begin{align}
\ket{1,1} &= \ket{\up\up}
\\
\ket{1,0} &= \frac{\ket{\up\dn} + \ket{\dn\up}}{\sqrt2}
\\
\ket{1,-1} &= \ket{\dn\dn}
\end{align}
And that the singlet state ($S = 0$) is:
$$
\ket{0,0} = \frac{\ket{\up\dn} - \ket{\dn\up}}{\sqrt2}
$$
What I'm not too sure about is why the singlet state cannot be $\ket{0,0}=(\ket{↑↓} + \ket{↓↑})/\sqrt2$ while one of the triplet states can then be $(\ket{↑↓} - \ket{↓↑})/\sqrt2$. I know they must be orthogonal, but why are they defined the way they are?
Answer: Let's temporarily forget that the two $m=0$ states exist, and consider just the two completely aligned triplet states,
$\newcommand\ket[1]{\left|{#1}\right>}
\newcommand\up\uparrow
\newcommand\dn\downarrow
\newcommand\lf\leftarrow
\newcommand\rt\rightarrow
$
$\ket{\up\up}$ and $\ket{\dn\dn}$.
There's not any physical difference between these: you can "transform" your state from one to the other by changing your coordinate system, or by standing on your head. So any physical observable between them must also be the same.
Either of the single-particle states are eigenstates of the spin operator on the $z$-axis,
$$\sigma_z = \frac\hbar 2\left(\begin{array}{cc}1&\\&-1\end{array}\right),$$
and "standing on your head," or reversing the $z$-axis, is just the same as disagreeing about the sign of this operator.
But let's suppose that, on your way to reversing the $z$-axis, you get interrupted midway. Now I have a system which I think has two spins along the $z$-axis, but you are lying on your side and think that my spins are aligned along the $x$-axis. The $x$-axis spin operator is usually
$$\sigma_x = \frac\hbar 2\left(\begin{array}{cc}&1\\1\end{array}\right).$$
Where I see my single-particle spins are the eigenstates of $\sigma_z$,
$$\ket\up = {1\choose0} \quad\text{and}\quad \ket\dn = {0\choose1},$$ you see those single-particle states as eigenstates of $\sigma_x$,
\begin{align}
\ket\rt &= \frac1{\sqrt2}{1\choose1}
= \frac{\ket\up + \ket\dn}{\sqrt2}
\\
\ket\lf &= \frac1{\sqrt2}{1\choose-1}
= \frac{\ket\up - \ket\dn}{\sqrt2}
\end{align}
If you are a $z$-axis chauvinist and insist on analyzing my carefully prepared $\ket{\rt\rt}$ state in your $\up\dn$ basis, you'll find this mess:
\begin{align}
\ket{\rt\rt} = \ket\rt \otimes \ket\rt
&= \frac{\ket\up + \ket\dn}{\sqrt2} \otimes \frac{\ket\up + \ket\dn}{\sqrt2}
\\
&= \frac{\ket{\up\up}}2 + \frac{\ket{\up\dn} + \ket{\dn\up}}2 + \frac{\ket{\dn\dn}}2
\end{align}
This state, which has a clearly defined $m=1$ in my coordinate system, does not have a well-defined $m$ in your coordinate system: by turning your head and disagreeing about which way is up, you've introduced both $\ket{\up\up}$ and $\ket{\dn\dn}$ into your model. You've also introduced the symmetric combination $\ket{\up\dn} + \ket{\dn\up}$.
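This decomposition is easy to check numerically; the following numpy sketch uses the basis order |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩ (a convention chosen here) and also confirms that the singlet is left alone by a common rotation of both spins:

```python
import numpy as np

up = np.array([1.0, 0.0])
dn = np.array([0.0, 1.0])
rt = (up + dn) / np.sqrt(2)

# Two-particle basis order: |uu>, |ud>, |du>, |dd>
rr = np.kron(rt, rt)
triplet0 = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)

print(rr @ triplet0)  # ~0.707: |rr> contains the symmetric m=0 combination
print(rr @ singlet)   # ~0: no singlet component, however you tilt your head

# The singlet is invariant under any common rotation R (x) R of both spins.
theta = 0.73  # arbitrary angle
R = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2),  np.cos(theta / 2)]])
print(np.allclose(np.kron(R, R) @ singlet, singlet))  # True
```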
And this is where the symmetry argument comes in. The triplet and singlet states are distinguishable because they have different energies.
If you propose that the symmetric combination $\ket{\up\dn} + \ket{\dn\up}$ is the singlet state, then you and I will predict different energies for the system based only on how we have chosen to tilt our heads. Any model that says the energy of a system should depend on how I tilt my head when I look at it is wrong. So the $m=0$ projection of the triplet state must be symmetric, in order to have the same symmetry under exchange as the $m=\pm1$ projections. | {
"domain": "physics.stackexchange",
"id": 65100,
"tags": "quantum-mechanics, particle-physics, statistical-mechanics, nuclear-physics, quantum-spin"
} |
How reproducible should CNN models be? | Question: I want to train several CNN architectures with Google Colab (GPU), Keras and Tensorflow.
Since the trained models are not exactly reproducible when training on a GPU, I would like to train the models several times and determine the mean and the standard deviation of the results.
I'm totally unsure whether I should at least try to make the models minimally reproducible, for example with the following code at the beginning of the program:
import numpy as np
import tensorflow as tf
import random as rn
import os
os.environ['PYTHONHASHSEED']='0'
np.random.seed(1)
rn.seed(1)
tf.set_random_seed(1)
from keras import backend as K
if 'tensorflow' == K.backend():
    import tensorflow as tf
    tf.set_random_seed(1)
    from keras.backend.tensorflow_backend import set_session
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    config.gpu_options.visible_device_list = "0"
    set_session(tf.Session(config=config))
I don't know if this makes sense. Would it be better if I did not use seeds at all?
What do you think?
Answer: Getting slightly different results is natural and should not be a problem. How to minimize the instabilities due to several contributing factors is discussed at length in the post linked below, for Keras using different backends including TensorFlow:
https://machinelearningmastery.com/reproducible-results-neural-networks-keras/ | {
"domain": "datascience.stackexchange",
"id": 6303,
"tags": "keras, tensorflow, cnn, gpu, colab"
} |
Static pressure intuition-Why does the local static pressure change as the local flow rate changes? | Question: From Bernoulli's theorem, we know that the local static pressure changes as the local flow rate does. But why should the static pressure change anyway, intuitively? What really makes that happen? Consider incompressible and isentropic flows.
Answer: As Munson puts it in his book "Fundamentals of Fluid Mechanics", the work done on a particle is equal to the change of its kinetic energy. It's the same principle used in Newtonian mechanics in high school! In the absence of non-conservative forces(say friction), energy is conserved. It's just that in the analysis of fluids you go from a microscopic analysis of each particle to a more macroscopic picture of parts of a fluid.
So, applying the same logic, say you have a cylindrical, horizontal pipe with frictionless walls:
[Image from Khan academy]
Along the streamline that runs across the middle of the pipe, the equation that is one step away from Bernoulli's formula reads (gravitational potential energy is the same along the middle streamline):
$\frac{dP}{dx}=-\frac{1}{2}\rho\frac{d(u^2)}{dx}=-\rho u\frac{du}{dx}$ where $x$ is the parameter that runs along the streamline.
Since $u>0$ (it's the magnitude of $\vec{u}$, the velocity vector), the above equation tells us that the fluid accelerates in the direction in which the pressure decreases. This is a restatement of Newton's second law in "fluid language"!
Thus, if the fluid has pressure $P_1$ and velocity $u_1$ on the left cross-section and $P_2$ and $u_2$ on the right cross-section, intuition (and the continuity equation) tells us that $u_2>u_1$, and so we expect $P_1>P_2$, since (from the above logic) the net force that accelerates the fluid from $u_1$ to $u_2$ must point to the right.
So, this is how the local pressure and local fluid velocity are connected. It's a "simple" relation of cause and effect, the same relation found in Newton's mathematical statement of his second law.
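A quick numeric illustration (all values made up: water flowing through a horizontal pipe that narrows to half its area):

```python
rho = 1000.0          # water, kg/m^3
A1, A2 = 0.02, 0.01   # cross-sectional areas in m^2 (hypothetical)
u1 = 1.0              # m/s at the wide section
P1 = 120_000.0        # Pa at the wide section

u2 = u1 * A1 / A2                      # continuity: A1*u1 = A2*u2
P2 = P1 + 0.5 * rho * (u1**2 - u2**2)  # Bernoulli along the middle streamline

print(u2)  # 2.0
print(P2)  # 118500.0 -> pressure is lower where the fluid is faster
```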
To answer your question more directly: if the velocity at the right cross-section changes, then the pressure there must also change in order to produce the net force that drives the fluid from $u_1$ to (the new) $u_2$. | {
"domain": "physics.stackexchange",
"id": 39494,
"tags": "fluid-dynamics"
} |
How do I update ROS? | Question:
I just received my Kinect. I am in the process of installing ubuntu 11.04. I had a prior version of ROS installed... Is there a simple way to upgrade the current installation?
Sorry, this is a very newbie question. I've spent a little time searching for the answer but it hasn't jumped out at me.
-Mike
Originally posted by mbahr on ROS Answers with karma: 31 on 2011-06-27
Post score: 2
Answer:
Hi,
If you got ROS from package manager, try
sudo apt-get update
sudo apt-get dist-upgrade
Huan
Originally posted by liuhuanjim013 with karma: 152 on 2011-06-27
This answer was ACCEPTED on the original site
Post score: 6
Original comments
Comment by Martin Günther on 2011-07-20:
Simple: use Synaptic or aptitude or whatever to remove the ros-cturtle-* packages and install the ros-diamondback-* packages you need.
Comment by sam on 2011-07-20:
My previous comment may be incorrect. I have tried dist-upgrade on cturtle; it doesn't upgrade to diamondback, but only upgrades all the cturtle packages. So how do I upgrade cturtle to diamondback? Thank you~
Comment by sam on 2011-07-20:
I figured out that dist-upgrade is not only for ROS. Why can dist-upgrade upgrade the ROS version rather than just upgrading all the ROS packages?
Comment by Martin Günther on 2011-07-20:
No, 'dist-upgrade' will not upgrade ubuntu. You can also just do a normal 'apt-get upgrade' instead. (The difference between 'upgrade' and 'dist-upgrade' is that 'dist-upgrade' does some stuff to handle changed dependencies etc. correctly.)
Comment by sam on 2011-07-20:
Is 'apt-get dist-upgrade' only for upgrade ROS version or it will also upgrade ubuntu? And how to only update ROS packages?
Comment by tfoote on 2011-06-28:
Also make sure they read packages.ros.org not code.ros.org. The code.ros.org debian repo is deprecated. http://www.ros.org/wiki/diamondback/Installation/Ubuntu#diamondback.2BAC8-Installation.2BAC8-Sources.Setup_your_sources.list
Comment by Martin Günther on 2011-06-28:
Exactly. Before doing that however, you should have a look at the file /etc/apt/sources.list.d/ros-latest.list and uncomment the lines starting with "deb" (remove the #). They are commented out automatically during upgrade. (I am assuming you are upgrading from Ubuntu 10.10 to 11.04?) | {
"domain": "robotics.stackexchange",
"id": 5984,
"tags": "ros"
} |
Python Script to Calculate Historical S&P 500 Returns over Requested Time Span | Question: I have a Python script that calculates the historical S&P 500 returns from a starting balance and annual contribution. The script also outputs interesting statistics (mean/min/max/stddev/confidence intervals) related to the historical returns. I'm seeking feedback on how the code could be refactored more efficiently.
sp500_time_machine.py
import os, sys, time
import datetime
import argparse
import statistics
import math
from sp500_data import growth
MONTHS_PER_YEAR = 12
FIRST_YEAR = 1928 # This is the first year of data from the dataset
# Given an initial investment, an annual contribution and a span in years, show how the investment would mature
# based on historical trends of the S&P 500
def sp500_time_machine(starting_balance, span, annual_contribution):
realized_gain_list = []
average_gain = 0
current_year = datetime.date.today().year # Grab this from the OS
total_spans = (current_year - FIRST_YEAR - span) + 1
# Adjust the starting year for each span
for base_year in range(total_spans):
realized_gains = starting_balance
# Loop through each span, month by month
for month in range(span * MONTHS_PER_YEAR):
realized_gains = (realized_gains + (annual_contribution / MONTHS_PER_YEAR)) * (1 + growth[month + base_year] / 100)
# Store each realized gain over the requested span in a list for later processing
realized_gain_list.append(realized_gains)
print("S&P realized gains plus principle from %s to %s for %s starting balance = %s" % ((FIRST_YEAR + base_year), (FIRST_YEAR + base_year + span), f'{starting_balance:,}', f'{int(realized_gains):,}'))
average_gain = average_gain + realized_gains
# Display the average, minimum and maximum gain over the requested time span
mean = int(average_gain / total_spans)
print("Average %s year realized gains plus principle over %d years is %s" % (span, total_spans, f'{mean:,}'))
# Calculate the standard deviation
std_dev = statistics.stdev(realized_gain_list)
print("Standard Deviation = %s" % f'{int(std_dev):,}')
# Determine the 99% confidence interval
#
# Stock market returns are not normally distributed, so this is a simplification of actual real-world data
# https://klementoninvesting.substack.com/p/the-distribution-of-stock-market
#
# z-score values are based on normal distributions
# The value of 1.96 is based on the fact that 95% of the area of a normal distribution is within 1.96 standard deviations of the mean
# Likewise, 2.58 standard deviations contain 99% of the area of a normal distribution
# 90% confidence z-value = 1.65
# 95% confidence z-value = 1.96
# 99% confidence z-value = 2.58
upper_interval = mean + 2.58 * (std_dev / math.sqrt(total_spans))
print("99%% Confidence Interval (Upper) = %s" % f'{int(upper_interval):,}')
lower_interval = mean - 2.58 * (std_dev / math.sqrt(total_spans))
print("99%% Confidence Interval (Lower) = %s" % f'{int(lower_interval):,}')
# Find the min/max values
min_gain = min(realized_gain_list)
min_gain_index = realized_gain_list.index(min_gain)
print("Minimum realized gain plus principle over %d years occurred from %s to %s with a final balance of %s" %
(span, min_gain_index + FIRST_YEAR, min_gain_index + FIRST_YEAR + span, f'{int(min_gain):,}'))
max_gain = max(realized_gain_list)
man_gain_index = realized_gain_list.index(max_gain)
print("Maximum realized gain plus principle over %d years occurred from %s to %s with a final balance of %s" %
(span, man_gain_index + FIRST_YEAR, man_gain_index + FIRST_YEAR + span, f'{int(max_gain):,}'))
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("-s", "--span", help="The number of consecutive years (span) to iterate over")
parser.add_argument("-p", "--principle", help="The initial investment amount")
parser.add_argument("-a", "--annual", help="The annual contribution amount")
args = parser.parse_args()
sp500_time_machine(int(args.principle), int(args.span), int(args.annual))
sp500_data.py
# Monthly growth data taken from https://www.officialdata.org/us/stocks/s-p-500/ and based upon http://www.econ.yale.edu/~shiller/data.htm
# This data contains reinvested dividends
# This data does NOT adjust for inflation!
growth = [-0.83, 5.75, 6.66, 3.44, -4.57, 1.09, 3.59, 7.37, 2.36, 7.08, 0.70, 7.69, 0.81, 2.05, -0.30, 1.80, 2.20, 9.20, 5.96, 4.24, -10.32, -26.19, 4.37, 1.83, 6.64, 4.12, 6.69, -5.65, -9.77, -1.76, -0.90, 0.34, -13.37, -6.80, -6.19, 3.56, 8.14, 2.38, -9.08, -9.16, -2.68, 3.86, -2.49, -14.37, -12.75, 2.05, -18.10, -0.85, -0.05, 1.14, -23.22, -11.31, -12.39, 6.18, 51.35, 10.37, -13.22, -0.34, -2.64, 4.57, -11.27, 0.33, 11.24, 29.32, 17.58, 8.46, -4.64, -0.48, -9.38, 2.80, 2.32, 6.08, 7.75, -4.80, 2.02, -9.83, 1.70, -4.36, -3.51, -2.01, 1.21, 3.21, 1.06, 0.40, -2.62, -5.93, 7.94, 8.27, 4.17, 5.60, 7.10, 2.43, 2.99, 9.71, 0.29, 5.82, 6.03, 2.41, 0.41, -5.02, 4.57, 6.23, 2.30, 1.44, 5.55, 3.10, -1.40, 3.46, 3.30, 0.23, -5.62, -4.09, -3.34, 6.39, 1.44, -13.76, -14.10, -8.27, -1.02, 3.24, -1.80, -6.02, -3.44, 1.56, 2.93, 20.49, 1.06, -4.08, 11.62, 0.47, -2.55, -1.16, -0.46, 0.27, -12.24, 4.10, 2.17, 2.84, -1.07, 11.06, 1.38, -1.41, -1.97, -0.15, -0.23, -0.15, 1.42, -13.34, -8.09, 3.87, 2.65, 4.76, 1.47, 2.85, -3.59, 0.72, -5.72, 1.18, -2.55, -1.59, 4.11, 5.71, 0.08, 0.86, -3.43, -4.08, -5.88, 2.62, -2.48, -4.76, -3.45, 1.87, 5.75, 4.38, 0.05, 1.66, 7.97, 2.15, 1.06, 6.50, 6.43, 4.01, 3.79, 4.36, 2.18, 2.47, -4.54, 2.55, -0.50, -4.21, 1.77, 3.67, -0.24, 3.24, -1.31, 2.20, 5.14, 3.02, -1.06, -1.23, 2.88, -0.28, 2.60, 3.38, 3.73, 0.31, 2.90, 4.16, 2.19, -1.70, 0.71, 7.18, 4.51, 3.61, 2.02, 4.30, 0.59, -2.68, 6.77, 0.52, -0.34, -2.55, -1.62, -14.42, -1.87, -0.01, 3.39, 0.92, 4.27, -3.67, -3.30, -1.36, 3.92, 6.69, -1.56, -2.17, 3.03, -0.73, -1.12, -0.86, -4.45, 1.92, 8.19, 5.33, 4.59, -1.96, -2.49, -0.68, 3.19, -5.10, -0.16, 1.63, -3.33, 1.49, 0.41, -0.18, -4.91, 6.26, 4.17, 1.87, 3.14, 1.95, 3.24, 2.63, 2.52, 1.38, 3.39, 3.91, 2.16, -6.72, 6.64, 4.11, 4.72, 0.38, 0.19, 8.01, 4.31, -1.11, 1.93, 0.63, -1.15, 2.37, 4.97, 3.14, 0.03, -2.25, 3.61, 3.83, -1.33, 0.75, 0.20, 0.46, 3.24, 3.37, 0.88, -1.11, -1.61, 3.67, 4.51, 0.99, -0.77, 0.96, -4.47, 1.00, -3.11, 1.91, 0.90, -4.11, 
3.52, 2.71, 1.84, 3.02, 2.68, 2.58, 4.45, 4.42, 1.22, 4.46, 2.39, 2.74, 2.71, 4.30, 4.95, 2.17, 3.70, -0.44, 3.81, -0.08, 6.15, 7.64, -0.30, 4.82, -4.72, 7.07, 1.24, -2.39, 0.95, 7.21, 1.48, -2.84, -0.26, 5.75, -0.28, -3.09, -0.95, -0.71, 1.81, -1.86, -4.00, 1.62, 2.64, 4.16, 1.95, 2.32, -5.21, -3.74, -5.90, -1.80, 0.32, 2.33, 0.70, 2.42, 0.90, 3.56, 2.74, 3.07, 4.05, 2.94, 4.36, 3.33, 2.16, 4.25, -1.27, 2.81, 1.94, 1.77, -0.61, 4.23, -0.32, -3.70, 0.18, 0.67, 3.46, -1.49, -3.61, -1.08, 1.58, -0.62, 3.99, -2.20, 1.49, -2.72, -1.67, 3.54, 2.69, 5.43, 4.37, 3.40, 2.92, 1.26, -1.08, -0.03, 3.84, -0.54, 1.34, 4.77, 1.16, -3.49, 1.91, 0.34, -2.94, -7.19, -11.41, 2.72, 3.02, -0.59, -2.86, 7.20, 4.62, 4.15, 1.60, -0.11, 4.98, 2.27, 0.22, -1.22, 3.03, 2.89, 0.50, -0.31, 2.39, 3.33, 1.48, 2.07, 1.69, 1.22, -0.35, 3.96, -1.23, 1.97, 1.97, 0.94, -1.49, 2.82, 0.98, 0.34, 1.56, 1.73, -4.51, 0.10, 2.12, 3.60, 2.50, 1.08, -0.21, 1.98, -0.43, -3.86, 3.32, -5.01, -0.56, 0.02, -5.77, -3.22, -0.56, 5.32, 0.72, 4.13, 3.73, 2.63, 1.99, 2.06, -0.99, 1.99, 1.85, 1.65, 0.10, -2.88, 3.11, -0.02, -4.26, -1.56, 7.66, 2.56, 2.94, 0.05, -1.93, 3.51, 2.72, 1.79, 1.29, -3.99, -0.24, -1.91, 2.27, 3.51, -4.97, -4.21, -0.28, 0.63, 1.35, 1.00, -5.03, -0.59, -3.20, 2.01, -2.75, -11.20, -0.27, 0.52, 3.26, 6.32, 2.49, 0.21, 7.16, 4.11, 4.15, 2.83, 3.67, -1.11, -1.60, -0.46, -1.52, 2.49, -1.86, -4.37, 7.16, 4.42, 2.09, 2.62, 1.26, -0.78, 0.52, -0.50, 3.78, -1.21, 0.42, 5.25, 2.31, 0.99, -3.33, -1.35, -1.63, -2.57, -1.99, 1.21, -1.64, 2.00, 4.24, -6.85, -6.81, 1.70, -2.47, 4.57, -4.82, -2.71, 0.46, -11.35, -3.76, -10.01, 2.38, 3.74, -6.09, 8.63, 10.81, 4.97, 1.49, 6.71, 2.89, 0.43, -7.00, -0.85, 4.97, 2.04, -1.18, 9.55, 4.18, 0.80, 1.10, -0.38, 0.90, 2.67, -0.56, 2.44, -3.11, -0.37, 3.79, -0.54, -2.37, -0.05, -1.19, 0.06, 0.90, 1.28, -2.08, -1.18, -2.20, 0.98, -0.08, -3.39, -0.97, 0.27, 4.83, 5.50, 0.67, -0.06, 7.33, 0.40, -2.77, -5.44, 1.92, 4.19, -1.06, 2.34, 2.43, -1.89, 2.42, 1.42, 5.01, 1.54, -3.35, 
-0.32, 4.40, 3.31, 4.40, -8.78, -1.16, 5.04, 6.86, 4.97, 3.50, 2.84, 3.32, 4.61, -1.24, 0.01, -3.07, 4.14, 1.29, -1.62, 0.86, -2.02, 0.80, -8.30, 1.73, 3.04, 1.18, -4.80, -1.91, -2.74, 5.47, 0.57, -5.27, 0.24, 0.79, 12.10, 8.88, 4.50, 1.36, 3.93, 2.13, 3.87, 4.20, 4.42, 1.75, 0.71, -2.41, 3.31, 0.65, -1.14, -0.13, 1.58, -5.11, 0.44, 0.51, -0.25, -1.85, -0.91, 9.21, 1.41, -0.41, 1.29, -0.71, 4.70, 5.79, -0.48, 1.02, 2.74, 2.51, 2.25, -1.85, -1.88, 1.50, 6.42, 5.29, 0.75, 5.70, 6.18, 2.74, 0.49, 3.13, -1.80, 2.28, -2.46, -0.09, 3.53, 1.71, 6.67, 6.46, 4.38, -0.86, 0.17, 4.50, 3.12, 6.45, -3.03, -11.85, -12.30, -1.33, 4.25, 3.33, 3.23, -0.89, -2.19, 6.00, -0.31, -1.72, 1.93, 3.80, -2.02, 2.33, 3.51, 3.30, -0.16, 3.56, 4.12, 3.39, 2.80, 4.69, 0.46, 0.29, -1.81, 2.74, -2.21, -2.53, 2.71, 0.20, 3.85, 3.17, 0.17, -7.86, -4.34, -2.32, 2.98, 4.59, -0.69, 11.61, 3.04, 2.26, -0.18, 0.35, 0.78, 2.68, -0.30, 0.18, 0.02, 0.94, 7.36, -0.60, -1.01, 0.26, 2.07, -1.33, 1.91, 0.94, 0.38, -1.18, 2.76, 3.27, 0.14, 1.72, 2.15, -1.34, 0.72, 0.87, 0.06, 1.76, 1.35, 1.24, 0.01, 0.89, 1.74, -0.08, -1.42, -3.35, 1.06, 1.11, -0.52, 3.08, 0.82, -0.44, -0.37, -1.03, 2.45, 3.82, 2.56, 3.22, 3.35, 3.18, 3.55, 0.51, 3.72, 0.91, 2.36, 3.39, 0.16, 5.90, -0.20, 0.20, 2.35, 1.28, -3.48, 3.08, 2.02, 4.12, 5.05, 1.20, 3.26, 4.36, -0.62, -3.41, 9.22, 5.34, 5.74, 0.35, 1.19, 1.65, -1.15, 2.63, 0.24, 6.40, 5.31, 3.41, -0.22, 0.12, 4.47, -6.97, -4.90, 1.29, 10.97, 4.10, 5.05, -0.07, 2.92, 4.25, -0.10, -0.61, 4.52, -3.77, -0.60, -1.27, 7.11, 2.81, -0.12, -2.48, 3.94, 1.42, -2.84, 3.16, 0.85, 0.94, -1.08, -5.21, -0.77, -3.32, 0.46, -2.14, -9.08, 0.45, 6.88, -2.39, -2.66, -2.05, -11.25, 3.18, 5.05, 1.47, -0.30, -3.35, 4.95, -3.51, -2.82, -5.92, -10.76, 1.14, -4.76, -1.37, 6.63, -1.04, -0.22, -6.41, 1.31, 5.29, 5.31, 5.70, 0.60, -0.17, 3.16, 2.03, 1.21, 3.06, 4.93, 1.09, -1.57, 0.97, -2.56, 2.86, -2.24, -1.39, 2.78, 0.10, 4.77, 2.73, -1.35, 1.68, -0.26, -2.41, 1.34, 2.18, 1.81, 0.31, 0.28, -2.62, 3.96, 2.14, 
1.47, -0.02, 1.49, 0.80, -0.79, -2.71, 0.72, 2.29, 2.53, 3.62, 2.00, 2.15, 0.69, 1.60, -2.47, 4.18, 3.39, 0.34, 0.57, -4.20, 3.07, 2.99, -4.81, 1.24, -6.64, -1.56, -2.63, 4.24, 2.56, -4.25, -6.08, 2.11, -4.85, -20.19, -8.61, -0.35, -1.10, -6.70, -5.69, 12.32, 6.66, 2.87, 1.28, 8.12, 3.65, 2.40, 2.09, 2.23, 1.36, -2.90, 5.94, 4.09, -5.88, -3.54, -0.16, 0.86, 3.37, 4.58, 2.49, 3.71, 3.46, 3.15, -1.11, 2.22, 0.66, -3.66, 3.10, -10.40, -0.79, 3.02, 1.77, 1.55, 4.78, 4.16, 2.88, -0.04, -3.09, -1.15, 2.92, 3.39, 3.02, -0.22, -2.84, 2.18, 4.27, 2.33, 2.72, 1.45, 4.57, -1.12, 3.25, 0.25, 1.19, 2.12, 3.86, 1.52, 0.97, -0.13, 2.72, 0.20, 1.53, 3.20, 1.50, -0.43, 1.78, -2.65, 5.71, 0.63, -1.11, 2.83, 0.06, 0.88, 0.98, -0.44, -0.08, -2.42, -4.51, 4.32, 2.93, -1.10, -6.42, -0.55, 6.36, 2.83, -0.30, 1.07, 3.30, 1.20, -0.44, -0.51, 1.20, 3.95, 1.44, 2.58, 1.75, -0.15, 1.69, 1.78, 0.99, 0.25, 1.65, 2.73, 1.59, 2.88, 4.86, -2.89, 0.06, -1.66, 1.96, 2.11, 1.58, 2.45, 1.68, -3.85, -2.08, -5.56, 1.74, 5.83, 1.95, 3.72, -1.53, 1.40, 3.83, -3.13, 3.09, 0.01, 4.43, 2.47, 3.35, 0.12, -18.92, 4.32, 5.89, 6.51, 3.48, 5.89, -0.63, 1.73, 3.95, 4.26, 2.80, 2.49, 0.82, 6.02, 0.76, 1.81, 3.07, 2.19, -0.08, 0.45, 4.74, 0.27, -2.05, -2.90, -0.89, 0.12, -7.87, -3.37, 0.46, 6.45, -7.28, -3.09, 5.29, 0.01, 1.38, 3.15, -2.59, 4.00, 0.60, 4.80, 2.54]
Example usage:
python3 sp500_time_machine.py -s20 -p100000 -a0
Answer: lint
minor pep-8 nit:
It would be useful to run
isort on this.
docstrings where appropriate
# Given an initial investment ...
This is a helpful comment, and I thank you for it.
It would be more helpful as a """docstring""".
returning numeric results
def sp500_time_machine(starting_balance, span, annual_contribution):
This is an OK signature.
Consider adding float, int, float type hints to it.
span has units of years,
but that only becomes apparent upon reading the code.
We could introduce it a little more clearly.
If we did add type hinting, the signature would end with
def ... ) -> None:
which makes me sad.
It fits with the verb "show" in the introductory comment.
At fifty-ish lines this function is not too long.
But it does do more than
one thing.
Consider breaking out the initial loop as a helper function,
which returns a list of results.
More generally, consider making the computation of figures
separate from the display of figures.
This aids composition, and allows a
test suite
to verify specific calculations.
globals and testability
current_year = datetime.date.today().year
This is nice enough, but it relies on global state (the system clock).
It would be much nicer if we saw def ... , current_year=None):
in the signature, which defaults to current year:
current_year = current_year or datetime.date.today().year
That way a unit test could specify e.g. 2022
to "freeze" an historic result in time.
And the test would continue pass in 2024, 2025, and following years.
units, & parallel structure to names
The meaning of this manifest constant is very clear:
FIRST_YEAR = 1928
This seems to be a similar quantity, but it is zero-origin:
for base_year in range(total_spans):
Ultimately it combines with
for month in range(span * MONTHS_PER_YEAR)
to form an anonymous month + base_year index.
Going back and forth on the units is a bit jarring.
Consider incrementing a datetime.date
(or datetime.datetime) in the loop.
Consider breaking out that growth[]
de-reference so a helper function "knows"
how to turn a point-in-time into
the correct array index.
# Store each realized gain over the requested span in a list for later processing
Elide the obvious comment -- the code eloquently said that already.
realized_gain_list.append(realized_gains)
Consider switching to a dict so the year is apparent:
realized_gain[FIRST_YEAR + base_year] = realized_gains.
The idea is to produce a results datastructure
which a maintenance engineer could not possibly misinterpret.
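A sketch of the dict version (the gain figures here are made-up placeholders, not real results):

```python
FIRST_YEAR = 1928  # manifest constant from the original script

realized_gain = {}
for base_year, realized_gains in enumerate([1.50, 2.25, 0.90]):  # placeholder data
    realized_gain[FIRST_YEAR + base_year] = realized_gains

# Now each result is keyed by the calendar year it belongs to:
assert realized_gain[1928] == 1.50
assert sorted(realized_gain) == [1928, 1929, 1930]
```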
... starting balance = %s" % ...
Outputting an explicit $ dollar sign wouldn't hurt.
Consider using an
f-string
rather than the % percent operator.
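For comparison, assuming a balance variable like the script's:

```python
starting_balance = 100_000

old_style = "starting balance = %s" % starting_balance
new_style = f"starting balance = ${starting_balance:,}"  # explicit $ plus thousands separators

assert old_style == "starting balance = 100000"
assert new_style == "starting balance = $100,000"
```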
extract helper
# Display the average, ...
To keep computation and display separate,
consider breaking out this section into a helper function.
Thank you for the Klement citation
and the reminder of Gaussian facts,
so the magic numbers are well-explained.
wrangling inputs
Nice arg parser.
Aha! "--span", help="The number of consecutive years (span) ... --
that's what I'd been looking for, perfect.
... int(args.principle), int(args.span), int(args.annual)
Consider asking
argparse
to call int() for you,
by specifying type=int.
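A sketch of that, with hypothetical flag names mirroring the script's -s/-p/-a usage:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-s", "--span", type=int, default=20)            # argparse calls int() for us
parser.add_argument("-p", "--principle", type=int, default=100_000)
parser.add_argument("-a", "--annual", type=int, default=0)

args = parser.parse_args(["-s20", "-p100000", "-a0"])
assert (args.span, args.principle, args.annual) == (20, 100_000, 0)
```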
docstrings on data
In sp500_data.py,
thank you for the pair of URL citations,
and for the "not in constant dollars!" warning.
These comments would work nicely as a """docstring"""
on the growth vector.
Then a maintenance engineer could import it and use
help(growth) to better understand how to correctly
interpret those figures.
The list of pasted numbers is straightforward,
and it's fine that we wind up with a very long line.
It's too bad that we don't see code or comments
explaining how that list was downloaded / cleaned / computed,
in case we want to reproduce results or incorporate
recent data a few years from now.
This function achieves its design goals.
It would benefit from unit tests,
and from pushing print() statements
into separate helpers.
I would be willing to delegate or accept
maintenance tasks on this codebase. | {
"domain": "codereview.stackexchange",
"id": 44954,
"tags": "python, performance"
} |
Defining normalization with respect to judgmental equality instead of reduction | Question: In type theory with a type $\mathbb{N}$ of natural numbers (or some other base type such as booleans) and judgmental equality instead of reductions, canonicity is a meta-theoretical statement claiming that a closed term of type $\mathbb{N}$ is judgmentally equal to a unique numeral (and hopefully the metatheory is constructive or proves the existence of untyped lambda calculus expression that computes such a numeral).
Most presentations of strong normalization on the other hand require picking a directed beta reduction relation, not just an undirected judgmental equality, such as the $\rhd$ in Definition 2.1.2 in An Extended Calculus of Constructions by Luo.
Is there an equivalent characterization of strong normalization that can be stated without picking a notion of reduction for CoC in a similar manner to canonicity, i.e. can it be stated purely in terms of seemingly undirected judgmental equality?
Answer: You could define a predicate $N(t)$ whose intuitive meaning is “term $t$ is in normal form”, and prove a theorem stating that for every closed term $t$ there is precisely one term $t'$ such that $N(t')$ and $t \equiv t'$. This way you capture the notion of "normal form". You cannot really capture the "strong" in "strong normalization" because that specifically refers to sequences of reductions.
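One way to state that theorem formally (a sketch; uniqueness is up to syntactic identity of terms):

```latex
\forall t.\ \mathrm{closed}(t) \;\Longrightarrow\;
  \exists!\, t'.\ N(t') \,\wedge\, t \equiv t'
```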
Of course, we could just appeal to the axiom of choice and pick one term from each equivalence class, and declare it to be "normal". The real work begins once we start asking how the predicate $N$ is given, precisely. In lucky cases, one can specify $N(t)$ by a simple syntactic criterion, such as "$t$ does not contain any redexes".
"domain": "cstheory.stackexchange",
"id": 5543,
"tags": "type-theory, calculus-of-constructions, normalization"
} |
Finding the direction of several radio beacons using a faraday cage and a radio receiver | Question: I'm trying to build a system that can detect the direction of a set of radio beacons operating on the same frequency.
In order to do this I have planned to use a normal radio antenna placed inside a Faraday cage that has an open side. Could this work?
I have tried to search for directional radio receivers, but they tend to also receive some signal from the sides, which will interfere when multiple beacons operate on the same frequency, or they use multiple antennas/the Doppler effect. Both will also interfere with the frequency of the antenna.
So, can we make an effective directional radio detector using a radio receiver inside a Faraday cage with one side open, mounted on top of a stepper motor?
Answer: Your ideal receiver would be a parabolic reflector with your detector at the focus. In other words a small version of the detectors used in radio astronomy. Your idea of an open cube with the detector at the centre would be an approximation to this, though I suspect rather a poor one with a rather limited angular resolution. | {
"domain": "physics.stackexchange",
"id": 16339,
"tags": "electromagnetism, radio"
} |
What prevents $K^0$ from decaying into $K^+ e^- \overline{\nu}_e$? | Question: Energetically it is just possible for a neutral kaon to decay into a charged kaon, electron and antineutrino, namely:
$$K^0 \rightarrow K^+ e^- \overline{\nu}_e$$
Indeed a mass difference of a few MeVs exists between neutral and charged kaons. This decay would be similar to the usual semileptonic decay, which involves a $\pi^+$ instead of $K^+$. Correct me if I'm wrong. What does prevent this decay from being observed?
Answer: $K_{long}$ decays weakly to
The mass difference with the charged kaon is of order 4 MeV. I think it is a matter of very small phase space, which cannot give a measurable probability for the channel. Certainly 4 MeV of energy shared with an antineutrino would not be detectable in an experiment measuring $K_{long}$ decays. One would have to identify a $K^+$, plus an electron of very small momentum associated with it. A specialized experiment would have to be devised, since the phase space of the interaction is very small.
"domain": "physics.stackexchange",
"id": 87534,
"tags": "particle-physics, standard-model, mesons"
} |
Minimal `printf` for integer types in x86 assembly | Question: I'm writing a minimal C runtime targeting an old 32-bit Windows XP machine as a personal project. The C runtime provided by compilers is quite bloated. I wouldn't mind a library bloating up to several megabytes if this were a paid project, since even a very old PC would load it very fast anyway, but as a personal project, I'm just doing whatever comforts me. (1)
This printf can currently only handle %d, %lld, %u, and %llu.
The routine is optimized for size, not for speed. IO doesn't happen in the middle of a hot loop (if it does, it is not a hot loop), so it makes more sense to take up the minimal amount of space in the executable.
div with a constant divisor is preferred over multiply and shift with the multiplicative inverse.
mov xl, byte [] instead of movsx exx, byte []; saves a byte
packed code, unaligned jump targets
Code duplication is avoided whenever possible.
Non-variadic functions follow the regparm(3) calling convention. The arguments are passed to eax, edx, and ecx in order, and the return value is stored in eax and edx. A local function divq10 disobeys the rule by also returning with ecx.
printf.s
section .bss
stdout:
resb 4
section .text
extern _GetStdHandle@4
extern _WriteFile@20
err:
ud2
global _initstdout
_initstdout:
push -11
call _GetStdHandle@4
cmp eax, -1
je err
mov [stdout], eax
ret
divq10: ; edx:eax <- edx:eax / 10, ecx <- remainder
push ebx
mov ecx, eax
mov eax, edx
xor edx, edx
mov ebx, 10
div ebx
mov ebx, eax
mov eax, ecx
mov ecx, 10
div ecx
mov ecx, edx
mov edx, ebx
pop ebx
ret
llu2str: ; edx:eax -> *ecx (string), eax <- count
push ebx
push edi
push esi
push ebp
mov edi, eax
mov esi, edx
mov ebp, ecx
xor ebx, ebx
.0:
inc ebx
call divq10
mov ecx, eax
or ecx, edx
jnz .0
mov eax, edi
mov edx, esi
mov edi, ebx
.1:
call divq10
add ecx, '0'
dec ebx
mov [ebp + ebx], cl
jnz .1
mov eax, edi
pop ebp
pop esi
pop edi
pop ebx
ret
global _printf
_printf:
push ebx
push edi
push esi
push ebp
lea ebp, [esp + 24]
mov esi, [ebp - 4]
sub esp, 1024
mov edi, esp
.start:
mov bl, [esi]
test bl, bl
jz .end
cmp bl, '%'
jne .copy
inc esi
mov bl, [esi]
cmp bl, 'u'
jne .d0
mov eax, [ebp]
add ebp, 4
xor edx, edx
.u1:
mov ecx, edi
call llu2str
add edi, eax
jmp .next
.d0:
mov bl, [esi]
cmp bl, 'd'
jne .ll
mov eax, [ebp]
add ebp, 4
cdq
.d1:
mov ecx, edx
shr ecx, 31
jz .u1
mov byte [edi], '-'
inc edi
neg eax
adc edx, 0
neg edx
jmp .u1
.ll:
mov bl, [esi]
cmp bl, 'l'
jne err
inc esi
mov bl, [esi]
cmp bl, 'l'
jne err
inc esi
mov eax, [ebp]
mov edx, [ebp + 4]
add ebp, 8
mov bl, [esi]
cmp bl, 'u'
je .u1
cmp bl, 'd'
je .d1
jmp err
.copy:
mov [edi], bl
inc edi
.next:
inc esi
jmp .start
.end:
mov eax, esp
push 0
push edi
sub edi, eax
push edi
push eax
push dword [stdout]
call _WriteFile@20
test eax, eax
jz err
add esp, 1024
pop ebp
pop esi
pop edi
pop ebx
ret
test.c
void initstdout(void);
void printf();
void start() {
initstdout();
printf("Hello, world!\n");
printf("%d %u %lld %llu\n", 0, 0, 0, 0);
int dm = 1u << 31;
int dx = (1u << 31) - 1;
int ux = -1;
long long lldm = 1llu << 63;
long long lldx = (1llu << 63) - 1;
long long llux = -1;
printf("%d %d %d\n%lld %lld %lld\n", dm, dx, ux, lldm, lldx, llux);
printf("%u %u %u\n%llu %llu %llu\n", dm, dx, ux, lldm, lldx, llux);
}
build.sh
O="-O3 -msse2 -fno-builtin -fno-asynchronous-unwind-tables"
S="-std=c11 -pedantic -masm=intel"
F="$O $S"
LF="--entry=_start --subsystem=console --enable-stdcall-fixup"
SYS="/c/Windows/SysWOW64"
gcc -c $F test.c
nasm -fwin32 printf.s
ld -or.exe $LF *.o *.obj $SYS/kernel32.dll
I used gcc and ld, but MSVC's cl and link should also work fine. I had to put --enable-stdcall-fixup to shut up warnings, but I currently don't know why those warnings are happening. AFAIK Windows API functions have _@ decorations on 32-bit, but the linker is complaining that I shouldn't have put those decorations.
output
Hello, world!
0 0 0 0
-2147483648 2147483647 -1
-9223372036854775808 9223372036854775807 -1
2147483648 2147483647 4294967295
9223372036854775808 9223372036854775807 18446744073709551615
(1) MinGW GCC creates a 100KB executable for a single call to printf, including all the initialization code, and its own fix-up code to patch the default Windows C runtime. MSVC provides a several-hundred-KB DLL runtime, which should always be provided as-is to distribute freely.
Answer:
If, for the sake of this exercise, code size is the only thing that you care about, then I would dare suggest the following:
For divq10, not having to reload the constant will save 5 bytes, and using xchg with the EAX register is shorter as it is a 1-byte instruction. Total savings 12 bytes.
divq10: ; edx:eax <- edx:eax / 10, ecx <- remainder
push ebx push ebx
mov ecx, eax xor ecx, ecx
mov eax, edx xchg eax, ecx
xor edx, edx xchg eax, edx
mov ebx, 10 mov ebx, 10
div ebx div ebx
mov ebx, eax xchg eax, ecx
mov eax, ecx div ebx
mov ecx, 10 xchg ecx, edx
div ecx pop ebx
mov ecx, edx ret
mov edx, ebx
pop ebx
ret
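The chained-division trick in divq10 (divide the high dword first, then feed its remainder into the low dword's divide) is easy to model in a few lines of Python:

```python
def divq10(lo, hi):
    """Model of the 32-bit chained division: quotient and remainder of hi:lo // 10."""
    q_hi, r = divmod(hi, 10)
    # r < 10, so (r << 32) | lo is below 10 * 2**32 and its quotient fits in 32 bits
    q_lo, rem = divmod((r << 32) | lo, 10)
    return q_lo, q_hi, rem

value = 12345678901234567890                     # any 64-bit test value
q_lo, q_hi, rem = divq10(value & 0xFFFFFFFF, value >> 32)
assert (q_hi << 32) | q_lo == value // 10
assert rem == value % 10
```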
In llu2str you essentially use the EDI and ESI registers for preservation. Why not simply push/pop the values so you can omit preserving these registers themselves? Total savings 10 bytes.
llu2str: ; edx:eax -> *ecx (string), eax <- count
push ebx push ebx
push edi
push esi
push ebp push ebp
mov edi, eax push eax ; (1)
mov esi, edx push edx ; (2)
mov ebp, ecx mov ebp, ecx
xor ebx, ebx xor ebx, ebx
.0: .0:
inc ebx inc ebx
call divq10 call divq10
mov ecx, eax mov ecx, eax
or ecx, edx or ecx, edx
jnz .0 jnz .0
mov eax, edi pop eax ; (2)
mov edx, esi pop edx ; (1)
mov edi, ebx push ebx ; (3)
.1: .1:
call divq10 call divq10
add ecx, '0' add ecx, '0'
dec ebx dec ebx
mov [ebp + ebx], cl mov [ebp + ebx], cl
jnz .1 jnz .1
mov eax, edi pop eax ; (3)
pop ebp pop ebp
pop esi
pop edi
pop ebx pop ebx
ret ret
In _printf, when hopping to .d0, the BL register did not change and when branching to .ll it didn't change either. You can omit mov bl, [esi] that reloads it.
The code to see if the number is negative mov ecx, edx shr ecx, 31 jz .u1 can be replaced by the shorter test edx, edx jns .u1.
Checking for the double 'll' can happen in one go:
.ll: .ll:
mov bl, [esi] cmp word [esi], 'll'
cmp bl, 'l' jne err
jne err inc esi
inc esi inc esi
mov bl, [esi]
cmp bl, 'l'
jne err
inc esi | {
"domain": "codereview.stackexchange",
"id": 42967,
"tags": "c, formatting, integer, assembly, x86"
} |
Is there a direct relationship between the adsorption energy and the catalytic activity | Question: I have seen several papers where people correlate adsorption energy with catalytic activity; the greater the adsorption energy, the greater the catalytic activity would be.
I understand that in order for a catalytic reaction to take place, the adsorbates should get adsorbed on the surface of the catalyst. But what if the adsorbates are too strongly adsorbed on the catalyst? This is unfavorable for the reaction, isn't it?
Therefore, I don't understand how one can correlate higher adsorption energies with greater catalytic activity.
Answer: Adsorption energy is indeed related to catalytic activity, but not "greater energy means greater activity".
As you were guessing, there is an optimum window in adsorption energies for which catalytic activity is maximum, this is explained by the Sabatier principle:
The Sabatier principle is a qualitative concept in chemical catalysis named after the French chemist Paul Sabatier. It states that the interactions between the catalyst and the substrate should be "just right"; that is, neither too strong nor too weak. If the interaction is too weak, the substrate will fail to bind to the catalyst and no reaction will take place. On the other hand, if the interaction is too strong, the product fails to dissociate.
There is a nice recent paper in which you can find much more mechanistic detail.
"domain": "chemistry.stackexchange",
"id": 12584,
"tags": "catalysis, adsorption"
} |
My custom implementation of MATLAB's imdilate | Question: I have been assigned to implement the morphological dilation operation without using MATLAB's imdilate, and I came up with the following solution:
% -------------------------------------------------------------------------
% -> Morphological Operations
% -> Image Dilation
% -------------------------------------------------------------------------
% -------------------------------------------------------------------------
% - [Pre-Operation Code: Getting Everything Ready] -
% -------------------------------------------------------------------------
% Starting With A Clean (Workspace) And (Command Window)
clear;
clc;
% Creating A Matrix Consisting Of 0s And 1s Representing A Binary Image.
binaryImage = [ ...
0 0 0 0 0 0
0 0 1 1 0 0
0 0 1 1 0 0
0 0 0 0 0 0];
% Creating A Matrix Representing Our Structuring Element
structuringElement = [ ...
0 1 0
1 1 1
0 1 0];
% Getting The Number Of Rows (Height) And The Number Of Columns (Width) Of The Binary Image
[imageRows, imageColumns] = size(binaryImage);
% Getting The Number Of Rows (Height) And The Number Of Columns (Width) Of The Structuring Element
[structuringRows, structuringColumns] = size(structuringElement);
% Creating An Empty Matrix That Will Be Used To Store The Final Processed Image
dilatedImage = zeros(imageRows, imageColumns);
ref = imdilate(binaryImage, structuringElement);
% -------------------------------------------------------------------------
% - [Dilation Operation] -
% -------------------------------------------------------------------------
% Going Over Each Row In The Binary Image
for i = 1:imageRows
% Going Over Each Column In The Binary Image
for j = 1:imageColumns
% If The Current Pixel Is A Foreground Pixel (1)
if (binaryImage(i, j) == 1)
% Going Over Each Row In The Structuring Element
for k = 1:structuringRows
% Going Over Each Column In The Structuring Element
for l = 1:structuringColumns
% If The Current Pixel In The Structuring Element Is A Foreground Pixel (1)
if (structuringElement(k, l) == 1)
dilatedImage(i + k - 2, j + l - 2) = 1;
end
end
end
end
end
end
subplot(1, 3, 1), imshow(binaryImage), title('Original Image');
subplot(1, 3, 2), imshow(dilatedImage), title('Dilated Image');
subplot(1, 3, 3), imshow(ref), title('MATLAB imdilate');
I am open to any suggestions! Thank you in advance.
Answer: I have a few different comments.
Firstly, for 2D arrays like in your example, you can simply do conv2(binaryImage,structuringElement,'same'), it's equivalent to imdilate (and about 4 times faster). That might not be acceptable for your case if you are supposed to write the code yourself though.
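That equivalence is easy to sanity-check outside MATLAB; here is a pure-Python sketch (plain nested lists stand in for matrices, and correlation is used, which coincides with conv2 for a point-symmetric structuring element like the cross here):

```python
def dilate_stamp(image, se):
    """The question's approach: every white pixel stamps the SE around itself."""
    rows, cols = len(image), len(image[0])
    cr, cc = len(se) // 2, len(se[0]) // 2
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if image[i][j]:
                for k in range(len(se)):
                    for l in range(len(se[0])):
                        r, c = i + k - cr, j + l - cc
                        if se[k][l] and 0 <= r < rows and 0 <= c < cols:
                            out[r][c] = 1
    return out

def dilate_conv(image, se):
    """conv2(image, se, 'same') followed by thresholding > 0."""
    rows, cols = len(image), len(image[0])
    cr, cc = len(se) // 2, len(se[0]) // 2
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            total = sum(
                se[k][l] * image[i + k - cr][j + l - cc]
                for k in range(len(se)) for l in range(len(se[0]))
                if 0 <= i + k - cr < rows and 0 <= j + l - cc < cols
            )
            out[i][j] = 1 if total > 0 else 0
    return out

image = [[0, 0, 0, 0, 0, 0],
         [0, 0, 1, 1, 0, 0],
         [0, 0, 1, 1, 0, 0],
         [0, 0, 0, 0, 0, 0]]
cross = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]

assert dilate_stamp(image, cross) == dilate_conv(image, cross)
```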
Next, and importantly, your code has a logic error. If your binary image has a white pixel along any edge, the results are incorrect or you get an error (if it's in the top row). This is due to your indexing into dilatedImage where i+k-2 can be less than 1, or can be larger than the number of rows in dilatedImage.
You have a lot of comments, which is better than none, but I think it's excessive here. It is very clear what this line of code does [imageRows, imageColumns] = size(binaryImage);, for example, even for people who don't know Matlab. Some of your comments make the lines very long as well.
A minor issue, I'd move the imdilate command somewhere else, that part of the code is where you are re-creating imdilate, I think it's nicer to separate the verification part.
Another minor issue is in using i and j as loop indices. Since i and j can be used for complex numbers in Matlab, by convention they are avoided as loop indices (although I do not usually worry about this).
I would put the code that does the work into its own function that takes binaryImage and structuringElement as inputs and returns dilatedImage.
The big thing here is your 6 nested for/if statements, which are hard to read and hard to code. Historically, such nested loops would have been very slow in Matlab, but nowadays it's not such a concern. One thing you should do is swap the order of the first two for loops. Matlab stores arrays in a column-major format, so loop over the columns first, then the rows. This gives roughly a 20-25% speedup in my tests. Additionally, checking to remove the out-of-bounds error is also a speedup, since we only set pixels to one if they are inside the image, regardless of the value of the structuring array. Check out this version of your loop:
for j = 1:imageColumns
for i = 1:imageRows
if binaryImage(i, j) == 1
% Loop through the entries of the structuring element
for l = 1:structuringColumns
for k = 1:structuringRows
% make sure the new pixel is inside the image
if i+k-2>0 && j+l-2>0 && i+k-2<=imageRows && j+l-2<=imageColumns
% make a white pixel if necessary
if structuringElement(k, l)==1
dilatedImage(i + k - 2, j + l - 2) = 1;
end
end
end
end
end
end
end
Of course it would be nice to avoid writing all these loops, and that can be done. This next method is only slightly faster, but to me is cleaner. The idea is to first find the white pixels, and just loop over them, for each one working out which surrounding pixels should be changed.
[rowIdx,colIdx] = find(binaryImage); % the indices of white pixels in the image
[maskRowIdx,maskColIdx] = find(structuringElement); % indices of structuring elements
% now get indices of structuring elements relative to the centre of the array
maskRowIdx = maskRowIdx - floor(structuringRows/2) - 1;
maskColIdx = maskColIdx - floor(structuringColumns/2) - 1;
dilatedImage = zeros(imageRows, imageColumns);
% loop over each white pixel in the image
for i = 1:numel(rowIdx)
% these are just the indices of the pixel
rI = rowIdx(i);
cI = colIdx(i);
% now loop over each non-zero element of the structuring array
for j = 1:numel(maskRowIdx)
% the position of the pixel to change is (r,c)
r = rI+maskRowIdx(j);
c = cI+maskColIdx(j);
% if the pixel is inside the image, we make it a 1
if r>0 && c>0 && r<=imageRows && c<=imageColumns
dilatedImage(r,c) = 1;
end
end
end | {
"domain": "codereview.stackexchange",
"id": 36709,
"tags": "image, matrix, matlab"
} |
Antarctic and arctic meltwater is "bad" because it's dark, but why is transparent liquid on white stuff so dark? | Question: The Washington Post's Antarctic heat wave melted 20 percent of an island’s snow cover in days, caused melt ponds to proliferate includes the figure below of meltwater ponds on top of snow/ice.
The article is a good read, but my question is one of optics, namely what makes a pond of meltwater (essentially pure water) on top of something so white become so dark?
It increases sunlight absorption and heating, which in this case is "bad" but what's the physics or phenomena behind this that makes these so dark in the first place?
If I understand correctly, even shallow ponds or puddles of water can significantly reduce albedo, increase absorption and induce further melting, sometimes in a "runaway" manner. So I don't think it's bulk absorption by the water or contaminants, I think there's something more interesting going on.
Melt ponds on the Antarctic Peninsula’s George VI Ice Shelf on Jan. 19, 2020. (NASA)
Answer: Water has its lowest EM absorption in the blue part of the visible spectrum, and absorption increases rapidly towards both the UV and red ends of the spectrum.
As a result, in visible light water is blue. The same goes for ice, as it has a very similar absorption spectrum.
While there is a lot of white in the picture, all of it is a thin snow cover on top of blue ice. Once the snow melts, you get blue water on top of a thick layer of transparent blue ice, which together absorb most of the light, making the ponds look exceptionally dark. The water would look much lighter in colour if, for example, instead of transparent ice you had sand on the bottom.
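The magnitude of the effect follows from the Beer-Lambert law. The absorption coefficients below are rough illustrative figures for pure water (an assumption for the sake of arithmetic, not measurements from these ponds):

```python
import math

# Very approximate absorption coefficients of pure water, in 1/m (illustrative values)
alpha = {"blue (~450 nm)": 0.01, "red (~650 nm)": 0.35}
path_length = 2.0  # metres of water plus clear ice traversed (hypothetical)

for colour, a in alpha.items():
    transmitted = math.exp(-a * path_length)  # Beer-Lambert: I/I0 = exp(-alpha * L)
    print(f"{colour}: {transmitted:.0%} of the light survives the path")
```

Red light is strongly depleted over a couple of metres while blue barely is, which is why deep water over a dark or transparent bottom looks blue and, overall, dark.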
"domain": "earthscience.stackexchange",
"id": 2006,
"tags": "water, antarctic, ice, arctic, radiative-transfer"
} |
Many-worlds interpretation: simple physics explanation? | Question: We went a little off topic in physics class today and my teacher was explaining how quantum physics explains the idea of parallel universes. She believes in an idea called the many-worlds interpretation, where the universe consists of timelines that continuously split, leading to an $\infty$ of parallel universes. She says that it is because particles have wave properties and waves can be in two places at once. So all particles appear in multiple places, but each in its own universe. I want to know: how does physics explain this theory of parallel universes? I understand the timeline concept, but not the physics. Any explanation will be very much appreciated!
If we are made out of particles, then we exhibit wavelike characteristics, right? So then could there theoretically (if quantum theory is proven to be true) be an infinite number of particles that are us, just in different parallel universes which we can't observe?
Can anyone explain to me the physics behind this MVI? I will try to follow each and every answer to the best of my ability.
Below is a picture of the timeline! It keeps on splitting, and we remain on one while other versions of ourselves remain on the others:
Side note: I looked at Wikipedia, but it only refers to a wave function and doesn't give a concrete understanding.
Answer: You essentially answered the question yourself when you said it is because quantum particles are waves and can be in more than one place at once. The physics, or mathematics, of this is the mathematics of waves, superposition, linear algebra. It's essentially quantum mechanics. So study quantum mechanics. I understand the literature on quantum mechanics isn't exactly enlightening; it requires deep thought. I'm working on presenting these sorts of ideas in a more intuitive way. However, for the time being, I suggest learning more quantum theory if you are mathematically inclined. If not, then you'll have to wait.
"domain": "physics.stackexchange",
"id": 27054,
"tags": "quantum-mechanics, universe, quantum-interpretations, observable-universe"
} |
How do Hard Drives Send and Receive Data? | Question: Ok. I know this is probably a commonly asked question. Before you refer me to a site or answer this yourself, I ask for you to please read this through. I have been into computers and electronics for quite a while. I have a pretty good understanding of electronics, and would like to become a computer hardware engineer eventually. For quite a while now, I have been searching for an answer to my question: how do hard drives send and receive data, and in general, how do they work? Every site I find simply explains the basics, with which I am already quite familiar.
An example of what I am trying to figure out: Suppose you want to save a very small file (less than 1KB for simplicity) to a hard drive. Is there a site that explains exactly what information is exchanged between the hard drive and the rest of the computer? I would like to understand what is sent, and when it is sent ( down to ones and zeros ). I would eventually like to know enough information that, in theory, I could build a small storage device compatible with a modern computer. The more information, the better.
Answer: As it stands, this isn't a Computer Science question. Actually, it's several questions, some of which could be phrased as computer science questions.
Disk-like devices (these days we'd include flash memory) do not understand the concept of a "file" or "directory". You can think of a disk as a big array of fixed-size blocks. The two basic operations are "read a block" and "write a block". Files are an abstraction on top of this implemented by the operating system.
All of this is explained in any operating systems course. But if you just want the short version, I recommend Maurice Bach's classic book The Design of the UNIX Operating System. It doesn't reflect modern operating systems or filesystems that well, but it will give you a good idea about what operating systems do and what they expect of a disk device. If that wasn't enough for you, Practical File System Design by Dominic Giampaolo is a whole book which gives an in-depth look at a more modern filesystem design.
The hardware that sits between there and the disk is a topic all by itself. On a modern system, disks can be connected to a computer using a variety of different hardware connections (e.g. USB, SATA, SCSI), and a modern operating system needs both to be able to talk to the host controller for this bus and to be able to tell what is attached to the bus. For some buses (e.g. USB) there may be a complex topology for the operating system to discover; the disk may be connected via a hub, for example.
The final link in the chain is what lies between the bus and the storage medium itself, and this is presumably what you're interested in.
My suggestion, if hardware engineering is what you're most interested in, is to hook a simple computer (e.g. a microcontroller) up to a simple storage device (e.g. MMC) and see what you can do. You will learn a lot this way.
As a final comment, there is a book by Jan Axelson called USB Mass Storage: Designing and Programming Devices and Embedded Hosts. I can't recommend it because I haven't read it, but it looks like the sort of thing you might want to know. If I'm reading the table of contents correctly, it looks like it gives you everything you need to know to build a USB flash drive. That sounds like a fun project to me. | {
"domain": "cs.stackexchange",
"id": 9740,
"tags": "memory-hardware, memory-access"
} |