Questions on the semi-classical interpretation of the (normal) Zeeman effect | Question: I'll put pictures from the book (Introduction to the Structure of Matter: A Course in Modern Physics by John J. Brehm and William J. Mullins) as I think they are relevant to understanding my problem:
I have trouble understanding the case where the observer watches the source in a direction perpendicular to the magnetic field. The electron will rotate around the B axis, so the observer will only see a linear oscillation of the electron, hence linearly polarized light.
But how can the Lorentz force explain the splitting of spectral lines (i.e. the change of the frequency of the electron)? The book suggests to view the linear oscillation as a combination of two counter-rotating motions like this:
But if this is the case, the Lorentz force would act in a plane perpendicular to the image so it won't explain the change of the frequency of the circular motion of the electron (and so the Zeeman splitting, classically).
Instead the situation is clear when we observe along the direction of B, as in that case Lorentz force would act radially.
Answer: Perhaps the key issue is that the two possible orbits which add up to the vertical oscillation, as drawn in the final picture, should be considered as viewed from the on-axis observer, not from the side observer.
Then, if you think carefully about it, a radial force is exactly what's needed to change the frequency of a circular oscillation at a fixed radius $r$. Equating the total inward radial force magnitude $F$ with $mv^2/r = m\omega^2 r$, you can see that changing the total $F$ will change $\omega$.
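Working this estimate out (a standard classical derivation, not taken from the book excerpt): treat the Lorentz term as a small correction to the binding force $F_0 = m\omega_0^2 r$ for the two counter-rotating circular components,

$$m\omega_\pm^2 r = F_0 \pm e\omega_\pm B r \quad\Longrightarrow\quad \omega_\pm \approx \omega_0 \pm \frac{eB}{2m},$$

so the two components are shifted up and down by the Larmor frequency $eB/2m$, which is the classical Zeeman splitting.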
The only other step you need is to imagine that the radial force on the electron orbit is dominated by the nucleus of its atom, and the Lorentz force is just a small perturbation, either radially inward or outward, depending on which way the electron is orbiting. One of these will have a slightly higher orbital frequency, and one will have a slightly lower frequency, due to the Lorentz force. That gets you a classical analog of Zeeman splitting. | {
"domain": "physics.stackexchange",
"id": 97280,
"tags": "quantum-mechanics, homework-and-exercises"
} |
All types of dependencies in ROS Fuerte | Question:
This question is not about a concrete problem, but should help explain the different types of dependencies in ROS Fuerte. (I'm not sure if this is the ideal place for such a reference, but I am making this question public and let's see where it takes us.)
As far as I understand, there are two main types of dependencies: ROS dependencies and system dependencies. The first type comprises ROS packages that are: 1) already installed on your system, 2) available from ROS, and 3) available from other sources (e.g. project partners, university labs). I believe that there are also 4) ROS stack dependencies (to be confirmed). Finally, system dependencies are typically libraries, such as Boost and Qt.
Below, I would like to explain how to deal with each type of dependency. Ideally, the process should be automatable so that anyone wishing to use a package with dependencies can do it with the smallest number of steps and the least amount of ROS knowledge possible.
Originally posted by Benoit Larochelle on ROS Answers with karma: 867 on 2013-01-20
Post score: 1
Answer:
ROS dependency, already installed
Simply add <depend package="OtherPackage"> in your manifest.xml file
ROS dependency, available from ROS
[to be confirmed and improved]
If the package has a released binary package for your system, you can do something like sudo apt-get install ros-fuerte-other-package and you are now in case #1 above. However, this solution is not nice because it is not automated.
Step 1) Create a yaml file so that ROS can find the source and download it
Step 2) Add <depend package="OtherPackage"> in your manifest.xml file
The user of your package must first do rosdep install MyPackage and then rosmake MyPackage
ROS Q&A that may help further:
http://answers.ros.org/question/9201/how-do-i-install-a-missing-ros-package/
http://answers.ros.org/question/9197/for-new-package-downloading/
http://answers.ros.org/question/9740/automatic-installation-of-dependencies/
http://answers.ros.org/question/9880/satisfying-package-dependencies/
http://answers.ros.org/question/10830/which-deb-package-contains-ros-package-x/
http://answers.ros.org/question/12466/download-a-package-with-rosmake/
http://answers.ros.org/question/34984/using-rosdep-to-install-wg-maintained-stacks/
http://answers.ros.org/question/39571/rosmake-error-no-such-option-rosdep-install/
http://answers.ros.org/question/48301/rosdep-install-sr_control_gui/
http://answers.ros.org/question/50197/bloom-creating-debs-rosdep-cant-resolve-key/
http://answers.ros.org/question/52273/rosdep-and-ros-dependencies/
http://answers.ros.org/question/52547/cannot-locate-rosdep-definition-for-geographicinfo/
ROS dependency, available from other sources
[to be confirmed and improved]
Step 1) Create a yaml file so that ROS can find the source and download it
Step 2) Add <depend package="OtherPackage"> in your manifest.xml file
The user of your package must first do rosdep install MyPackage and then rosmake MyPackage
ROS Q&A that may help further:
http://answers.ros.org/question/36845/cannot-locate-rosdep-definition-for-qt4/
http://answers.ros.org/question/40337/rosdep-doesnt-read-rosdepyaml/
http://answers.ros.org/question/50660/custom-rosdep-rules/
ROS dependency, stack
Simply add <depend stack="OtherStack"> in your stack.xml file
System dependency
Add <rosdep name="OtherLibrary"> in your manifest.xml file
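For reference, a rule in the rosdep yaml file maps that dependency key to a package-manager entry per OS, roughly like this (package names are placeholders; see the rosdep yaml format documentation linked at the end of this answer):

```yaml
# hypothetical rosdep rule: key used in <rosdep name="..."> -> system package per OS
otherlibrary:
  ubuntu: [libotherlibrary-dev]
  fedora: [otherlibrary-devel]
```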
ROS Q&A that may help further:
http://answers.ros.org/question/9430/how-to-use-external-libraries-in-ros-code/
http://answers.ros.org/question/11879/best-practice-for-rosdep-ubuntu-packages-and-others/
http://answers.ros.org/question/34199/what-is-the-correct-way-to-add-external-library/
In addition, you can look at this documentation:
rosinstall: http://www.ros.org/wiki/rosinstall
rosdep: http://ros.org/wiki/rosdep, http://ros.org/doc/api/rosdep2/html/rosdep_yaml_format.html, http://ros.org/reps/rep-0125.html
ROS overlays: http://ros.org/wiki/fuerte/Installation/Overlays
Originally posted by Benoit Larochelle with karma: 867 on 2013-01-20
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by KruseT on 2013-01-21:
It may be more accurate to rephrase your question as "Dependencies with rosbuild", which spans fuerte and earlier ros releases. | {
"domain": "robotics.stackexchange",
"id": 12514,
"tags": "rosdep, ros-fuerte, rosinstall"
} |
Updating large and heavily nested JSON properties based on existing properties using nested forEach loops | Question: The structure of the JSON is as such:
Each parent record can have many DataGroups and each DataGroup can have many DataSets and each DataSet can have many DataFields.
What I've done is add a new property called columns on each DataSet. columns is a mapped array of simplified objects based on the DataFields of each DataSet.
I finally stringify the JSON and write to file.
My code works but I'm wondering if there is a better/more performant way of doing this.
getMetadata()
.then(async (response: any) => {
const data = response?.data?.Data;
data.forEach((parentRecord: any) => {
parentRecord.DataGroups.forEach((datagroup: any) => {
datagroup.DataSets.forEach(
(dataset: { DataFields: any; columns?: any }) => {
const fields = dataset.DataFields;
const columns = fields?.map((x: any) => {
return {
Header:
x.UIPrettyName || x.OneReportPrettyName || x.FieldName,
id: x.Id,
};
});
dataset.columns = columns;
}
);
});
});
appendFileSync(
path.join(__dirname, "_metadata.json"),
JSON.stringify(data)
);
Answer: I don't think there is a more 'performant' way of doing it, as you need to transform all elements. There might be slight trade-offs over whether forEach is better or worse than for() (Stack Overflow has posts on the topic), but I wouldn't be concerned by that unless you know the amount of data is large enough to justify trying to squeeze out performance.
It's possible to reduce the verbosity of the code, although it's arguable whether that helps with readability.
For clarity, I've extracted the handler function out.
function metadataHandler(response: any) {
const data = response?.data?.Data;
data?.forEach((parentRecord: any) => {
parentRecord?.DataGroups?.forEach((datagroup: any) => {
datagroup?.DataSets?.forEach(
(dataset: any) => {
dataset.columns = dataset?.DataFields?.map((x: any) => ({
Header: x.UIPrettyName || x.OneReportPrettyName || x.FieldName,
id: x.Id,
}));
}
);
});
});
}
async is not necessary as the handler does nothing async.
const data = response?.data?.Data. Optional chaining can result in undefined being assigned to data, which will lead to an exception on data.forEach. data?.forEach... will prevent an exception if that's the case.
Some of the optional chaining can be removed if you are certain that the response is in fact the correct shape. Whilst any will allow property access we could easily introduce typos into the remainder of the loops and end up with the wrong result. If you know what your data looks like, it might be worth defining types to represent it (I'm guessing at the data types string and number here).
type DataField = { UIPrettyName: string, OneReportPrettyName: string, FieldName: string, Id: number }
type DataSet = { DataFields: DataField[], columns?: { Header: string, id: number }[] }
type DataGroup = { DataSets: DataSet[] }
type Data = { parentRecord: { DataGroups: DataGroup[] } }[]
type MetadataResponse = { data: { Data: Data } }
We can then use a type guard to narrow the type of response
function isMetadataResponse(obj: any): obj is MetadataResponse {
if (typeof obj === 'object'
&& Array.isArray(obj.data?.Data)) {
// let's assume it's enough, but we could add additional checks.
return true
}
return false
}
This gives us a cleaner handler implementation which provides type safety.
function metadataHandler(response: any) {
if (isMetadataResponse(response)) {
const data = response.data.Data;
data.forEach(item => {
item.parentRecord.DataGroups.forEach(datagroup => {
datagroup.DataSets.forEach(dataset => {
dataset.columns = dataset.DataFields.map(x => ({
Header: x.UIPrettyName || x.OneReportPrettyName || x.FieldName,
id: x.Id,
}));
}
);
});
});
}
}
Use of forEach is modifying response by virtue of 'pass by reference'. This may be intentional, but could lead to unintended side effects if the response is handled later, for example because this handler is part of a middleware chain. You might want instead to convert this to use Array.map so that you are returning a transformed object graph.
Not sure if it's intended to leave DataFields alongside the mapped columns, as that seems redundant. Again, map might be the solution here.
Apologies if there are typos or errors in the above; it's not easy to test without some expected input and intended output. | {
"domain": "codereview.stackexchange",
"id": 43476,
"tags": "javascript, node.js, json, typescript"
} |
Remineralization of RO water | Question: Hello, we are trying to prepare a solvent-based solution of different kinds of minerals which can be added to reverse-osmosis permeate water so that some of the essential minerals can be added back to the water. Does anyone know which solvent we should use?
Answer: I think water is the best solvent, assuming you want to remineralise the RO water for drinking it. Your solution might become a slurry depending on the required volume. Have you considered simply using a solid mix of these minerals? A solid mix would allow for a more homogeneous mix of different minerals than a low volume liquid. | {
"domain": "chemistry.stackexchange",
"id": 12381,
"tags": "solvents, minerals"
} |
Magnetic field lines and knots | Question: As I was reading the book The Trouble With Physics, I encountered a small paragraph which seemed bit confusing. The paragraph goes as follows:
Picture field lines, like the lines of magnetic field running from the north to south pole of a magnet. The field lines can never end, unless they end on the pole of a magnet, this is one of the Maxwell's laws. But they can make closed circles, and those circles can tie themselves in knots. So perhaps atoms are knots in magnetic field lines.
My questions are:
What exactly is the knot being described here?
And how are atoms related to such knots in magnetic field lines?
Answer: A magnetic field configuration corresponds to a knot when for two magnetic field lines given by the parametric curves: $\mathbf{x}_1(s)$
and $\mathbf{x}_2(s)$, the Gauss linking number
$$L\{x_1, x_2\} = \frac{1}{4\pi}\oint ds_1 \oint ds_2\; \frac{d \mathbf{x}_1(s_1)}{ds_1} \cdot \left( \frac{\mathbf{x}_1(s_1) - \mathbf{x}_2(s_2)}{|\mathbf{x}_1(s_1) - \mathbf{x}_2(s_2)|^3} \times \frac{d\mathbf{x}_2(s_2)}{ds_2} \right)$$
is nonvanishing. This integral is a knot invariant: it does not change under smooth deformation of the magnetic field lines (i.e., deformation without cutting and reconnection of the field lines). Knotted configuration solutions of the Maxwell equations are described by Irvine and Bouwmeester in the following review article, based on previous work by Rañada.
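As an illustrative numerical check (not part of the original answer), the Gauss integral above can be evaluated for two unit circles forming a Hopf link, which should give linking number $\pm 1$:

```python
import numpy as np

def linking_number(c1, c2, n=400):
    """Numerically approximate the Gauss linking integral for two closed
    curves given as functions t in [0, 2*pi) -> R^3 (vectorised over t)."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    dt = 2 * np.pi / n
    r1, r2 = c1(t), c2(t)                       # shape (n, 3)
    # central-difference tangents (the curves are periodic)
    dr1 = (np.roll(r1, -1, axis=0) - np.roll(r1, 1, axis=0)) / (2 * dt)
    dr2 = (np.roll(r2, -1, axis=0) - np.roll(r2, 1, axis=0)) / (2 * dt)
    diff = r1[:, None, :] - r2[None, :, :]      # x1(s1) - x2(s2), shape (n, n, 3)
    dist3 = np.linalg.norm(diff, axis=-1) ** 3
    cross = np.cross(diff / dist3[..., None], dr2[None, :, :])
    integrand = np.einsum('ik,ijk->ij', dr1, cross)
    return integrand.sum() * dt * dt / (4 * np.pi)

# Two unit circles forming a Hopf link (circle2 threads circle1's disk)
circle1 = lambda t: np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=-1)
circle2 = lambda t: np.stack([1 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=-1)
print(abs(linking_number(circle1, circle2)))    # close to 1
```

Smoothly deforming either circle (without letting them touch) leaves the computed value essentially unchanged, which is the invariance claimed above.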
In order to answer your second question, let me describe a brief history of the application of knots to physics:
The possible connection of knots to elementary particles was originally
suggested by Lord Kelvin in 1867 who speculated that atoms might be knotted vortex tubes in ether. This suggestion was the motivation of the mathematical theory of knot theory treating the analysis and classification of knots. Physicists returned to consider knots and knot invariants in the 1980s. Let me mention the two seminal works by
Polyakov and Witten. Both works treat the relation of knots to the Chern-Simons theory. This subject has applications in string theory and condensed matter physics but not directly in particle physics.
However, the situation significantly changed due to the discovery of knotted stable and finite energy solutions in many models of classical field theories used in particle physics. This direction was initiated by Faddeev and Niemi, where they describe a knotted solution in the $O(N)$ sigma model in $3+1$ dimensions. Please, see the following
review by Faddeev. Later, they argued that such solutions might also play an important role in low energy QCD. There are many works following Faddeev and Niemi's pioneering work.
Now, as very well known, stable, finite energy solutions of
nonlinear classical field theories are called solitons. The most famous types of solitons in gauge field theories are monopoles and instantons.
The soliton solutions are not unique; for example, a translation of a
soliton in a translation-invariant theory is also a soliton (it remains a solution of the field equation). Similarly, rotating a solution in space, or about certain directions, yields further solutions, and there are also internal degrees of freedom (which correspond, for example, to isospin). The collection of these degrees of freedom is called the moduli space of the soliton. Thus the soliton can move, rotate and change its internal state, which is why it corresponds to a particle.
These degrees of freedom (moduli) can be quantized, and solitons, after
quantization, can describe elementary and, more generally, non-elementary particles. A wide class of solitons is associated with topological invariants (topological quantum numbers) which are responsible for their stability. One of the successful soliton models is known as the Skyrme model; its solitons approximate the proton and the neutron, and also heavier nuclei.
Thus, in summary, these knotted solutions correspond to particles because they are solitons. | {
"domain": "physics.stackexchange",
"id": 10005,
"tags": "magnetic-fields, topology, solitons"
} |
Siciliano et al. Rotation Matrix Notation | Question: I'm reading Siciliano et al.'s Robotics: Modeling, Planning and Control, and I'm confused about the notation used in defining rotation matrices.
On page 46, they state
If $\textbf{R}_i^j$ denotes the rotation matrix of Frame $i$ with respect to Frame $j$, it is
$$
\begin{equation}
\textbf{p}^1 = \textbf{R}_2^1\textbf{p}^2.
\end{equation}
$$
To me, this notation says, "$\textbf{R}_2^1$ 'rotates' a vector from frame 2 to frame 1." However, in the discussion of fixed frames on page 47, they state that
$$
\begin{equation}
\bar{\textbf{R}}_2^0 = \textbf{R}_1^0\textbf{R}_0^1\bar{\textbf{R}}_2^1 \textbf{R}_1^0 = \bar{\textbf{R}}_2^1 \textbf{R}_1^0.
\end{equation}
$$
If I try to apply my original interpretation, it would say that $\textbf{R}_1^0$ rotates a vector from frame 1 to 0, and then $\bar{\textbf{R}}_1^2$ rotates that vector from its frame 1 to frame 2, which doesn't make sense.
If I instead interpret it as, " $\textbf{R}_1^0$ rotates a vector from frame 0 to frame 1, and then $\bar{\textbf{R}}_2^1$ rotates that vector from frame 1 to frame 2," then that make sense too.
But then the first equation from page 46 doesn't make sense, since it would say, "rotate $\textbf{p}^2$ from frame 1 to frame 2."
Any suggestions on the proper way to interpret these expressions? Thank you!
Answer: I'm building on Raghav's helpful answer to get to my original confusion. In short, I believe reading these matrix compositions comes down to how you interpret the transformations.
A current frame rotation like
$$
\begin{equation}
R_2^0 = R_1^0R_2^1
\end{equation}
$$
Can be understood two ways.
Way 1:
we rotate frame 0 $(F_0)$ to $F_2$ by rotating $F_0$ to $F_1$ and then $F_1$ to $F_2$. In this interpretation, we read the numbers on the matrices top-to-bottom.
Way 2:
we can express a vector currently in $F_2$ in terms of $F_0$ by rotating it from $F_2$ to $F_1$ and then from $F_1$ to $F_0$. In this interpretation, we read the numbers in the matrices from bottom to top.
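A quick numerical check of both readings in two dimensions (an illustrative sketch; the angles are arbitrary):

```python
import numpy as np

def R(theta):
    # 2-D rotation matrix
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

alpha, beta = 0.3, 0.5
R_1_0 = R(alpha)          # frame 1 expressed relative to frame 0
R_2_1 = R(beta)           # frame 2 expressed relative to frame 1
R_2_0 = R_1_0 @ R_2_1     # current-frame composition (post-multiplication)

# Way 1: composing the two frame rotations is one rotation by alpha + beta
assert np.allclose(R_2_0, R(alpha + beta))

# Way 2: a vector known in frame-2 coordinates, expressed in frame 0
p2 = np.array([1.0, 0.0])
p0 = R_2_0 @ p2
```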
Rotation Matrix Composition
At least with Siciliano's notation, we interpret compositions of current frame rotations with Way 1 because the focus is on how frames are being shifted at each step with post-multiplication. | {
"domain": "robotics.stackexchange",
"id": 2639,
"tags": "rotation, frame, matrix"
} |
Why is blood collected at crime scenes? | Question: I have read that mature red blood cells (MRBs) do not have DNA. So I am curious why crime scene technicians collect blood. Is it to collect and amplify DNA segments from white blood cells?
Answer: There are still white blood cells with DNA.
Some forensic blood tests are:
Conventional serological analysis:
Analysis of the proteins, enzymes, and antigens present in the blood, for general doctor tests: (black/white/drunk/stoned/heroin addict/polio vaccinated/hiv pos...)
Restriction Fragment Length Polymorphism (RFLP) DNA :
Direct analysis of certain DNA sequences present in the white blood cells. This method also usually requires a "large" sample size to obtain significant results.
Polymerase Chain Reaction (PCR) DNA :
Analysis of certain DNA sequences that have been copied multiple times to a detectable level. | {
"domain": "biology.stackexchange",
"id": 8218,
"tags": "dna, hematology"
} |
Can electrons be present outside orbitals? If yes, how does this affect chemical reactions | Question: In my physical chemistry textbook it is written (or typed) that orbitals are regions where the probability of finding an electron is high (90%-100%). But since orbitals are regions of probability, is it possible for electrons to be present outside orbitals for just a small number of atoms in a reaction? If so, how does this affect reactions that take place with this small number of atoms?
Answer: I think that your textbook is essentially correct. Typically for illustrative purposes orbitals are drawn with a 90% or 95% radius meaning that this is the region where that much of the electron density can be found at any time.
The wavefunction does extend to infinity, but its decrease with distance is very steep, typically $\approx \exp(-Zr/(na_0))$, where $Z$ is the atomic number, $n$ the principal quantum number, $r$ the distance from the nucleus and $a_0$ the Bohr radius, $5.29\times 10^{-11}$ m. Thus for, say, a 3s H-atom orbital at 1 nm the wavefunction has decayed to 0.0018 of its maximum value; the probability of being between $r$ and $r+dr$ is proportional to the square of this value, i.e. the chance of being at a larger distance is tiny.
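The quoted figure is easy to reproduce with the numbers above (a small sketch):

```python
import math

a0 = 5.29e-11      # Bohr radius in metres
Z, n = 1, 3        # hydrogen, 3s orbital
r = 1e-9           # 1 nm from the nucleus

amplitude = math.exp(-Z * r / (n * a0))
print(f"{amplitude:.4f}")      # ~0.0018, matching the figure in the text
print(amplitude ** 2)          # the probability factor is smaller still
```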
Thus we can assume that molecules can be reasonably described by orbitals with a cut-off value, i.e. atoms are discrete. Recent experiments using atomic force microscopes show that the picture of small discrete atoms is correct, as have numerous x-ray diffraction crystallographic experiments over the last century. Thus although an electron in an atom in its ground state could 'in principle' be anywhere in space the chance that it is any significant distance, i.e. many nanometers from its nucleus is effectively zero. Thus we have well defined molecules, whose shape as I have mentioned can be determined experimentally. This means that molecules exist as identifiable entities!
It also means that most molecules have effectively to collide for a reaction to occur, otherwise electrons in one molecule do not feel the influence of those in the other one sufficiently strongly to react. Thus the number of atoms that you mention might react, when the electron is at a very large separation (compared to average separation), is so small as to have no significant effect on a reaction. (Intermolecular forces do extend for some distance (nm) but their energy is small compared to bond strength and generally do not lead to chemical reaction.)
Two additional points should be mentioned: (a) in metals conduction electrons are not localised on any atom, although core electrons are, and (b) in the gas phase highly excited atoms/molecules (called Rydberg atoms/molecules) can be produced by narrow-band laser excitation to have electrons in highly excited levels only a few wavenumbers below dissociation. In this case the mean distance of the electron's wavefunction from the nucleus can be microns ($10^{-6}$ m), as large as a bacterium, which is quite remarkable. | {
"domain": "chemistry.stackexchange",
"id": 6317,
"tags": "physical-chemistry, reaction-mechanism, electrons, orbitals"
} |
Does validation_split in tf.keras.preprocessing.image_dataset_from_directory result in Data Leakage? | Question: For a binary image classification problem (CNN using tf.keras). My image data is separated into folders (train, validation, test) each with subfolders for two balanced classes. Borrowing code from this tutorial, I initially loaded my training and validation sets this way:
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
train_path,
validation_split=0.2,
subset="training",
seed=42,
image_size=image_size,
batch_size=batch_size,
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
train_path,
validation_split=0.2,
subset="validation",
seed=42,
image_size=image_size,
batch_size=batch_size,
)
Note that I am loading both training and validation from the same folder and then using validation_split (because I wanted to play around before using the real validation set). My model was performing quite well, achieving a validation accuracy of ~0.95.
Then I decided to update my code to load the real validation set:
train_ds = image_dataset_from_directory(
train_path,
seed=42,
image_size=image_size,
batch_size=batch_size,
)
val_ds = image_dataset_from_directory(
val_path,
seed=42,
image_size=image_size,
batch_size=batch_size,
)
Now my model is performing substantially worse (~0.75 accuracy). I'm trying to understand why. I suspect my initial code was causing some data leakage. Now that I look at it, I can't tell how the second call of image_dataset_from_directory (for val_ds) knows not to load images that were already loaded for the first call (for train_ds) (unless having the same random seed prevents this). I would be certain this is the issue, except for the fact that I pulled this code directly from a keras.io tutorial - surely they wouldn't make such a basic mistake?
Main question: Given the way that validation_split and subset interact with image_dataset_from_directory(), is the first version of my code resulting in data leakage?
If it should not be resulting in data leakage between training and validation sets, then I will need to consider other possibilities, such as:
There are actual differences between images in the train and validation set folders. I could combine and reshuffle them.
The order of images in the training folder is such that given my random seed "easier" images were getting pulled for the validation set.
Answer: A possible issue is that Keras validation_split uses the "last $x$ percent" of data as validation data without shuffling the data. So if your data has a certain stratification, this stratification will affect the validation set.
I further understand from the docs that the shuffle argument in .fit() does not shuffle data before assigning the validation data. It shuffles training data before each epoch.
As far as I remember I had a similar problem and needed to "manually" shuffle my data before feeding it to the NN in order to avoid problematic bunching of classes in the validation set (defined by validation_split).
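This failure mode is easy to reproduce without Keras (an illustrative numpy sketch; the arrays are made up):

```python
import numpy as np

# Labels sorted by class, as happens when data is loaded class-folder by class-folder
y = np.array([0] * 50 + [1] * 50)
x = np.arange(100).reshape(-1, 1)

split = int(len(y) * 0.8)
print(np.unique(y[split:]))    # [1] -- the "last 20%" is a single class

# Shuffle features and labels together BEFORE calling model.fit(..., validation_split=0.2)
rng = np.random.default_rng(seed=42)
idx = rng.permutation(len(y))
x, y = x[idx], y[idx]
print(np.unique(y[split:]))    # both classes now appear in the would-be validation slice
```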
From the docs:
validation_split
Float between 0 and 1. Fraction of the training data
to be used as validation data. The model will set apart this fraction
of the training data, will not train on it, and will evaluate the loss
and any model metrics on this data at the end of each epoch. The
validation data is selected from the last samples in the x and y data
provided, before shuffling.
shuffle
Logical (whether to shuffle the training data before
each epoch) or string (for "batch"). "batch" is a special option for
dealing with the limitations of HDF5 data; it shuffles in batch-sized
chunks. Has no effect when steps_per_epoch is not NULL. | {
"domain": "datascience.stackexchange",
"id": 10310,
"tags": "machine-learning, neural-network, keras, tensorflow, cnn"
} |
--ros-args argument is always present in launched Node | Question:
I am trying to launch a node that requires common command line arguments.
In my launch file, I added a Node() instance in my LaunchDescription which contains the package, executable and arguments attributes. However the launched program complains about not knowing about argument --ros-args.
I thought it was the ros_arguments attribute in the Node that added the --ros-args argument to the launch command so I don't understand why I am getting this error.
I am running Ubuntu 22.04 and ROS2 Humble.
How can we launch a program from a launch file without the launch file adding the --ros-args argument to the launch command?
Originally posted by Sam_Prt on ROS Answers with karma: 28 on 2023-02-14
Post score: 0
Original comments
Comment by christophebedard on 2023-02-14:
Can you share your launch file and the code that reads command line arguments?
Comment by christophebedard on 2023-02-14:
Also, can you clarify what exactly is complaining about the --ros-args? Is it your own code, or is ROS 2 itself complaining about it?
Comment by christophebedard on 2023-02-14:
In short, --ros-args is always added to the commandline arguments, but at the end after any arguments: https://github.com/ros2/launch_ros/blob/6daacbce4bade7ed40f86f16a30a08b4d7ee9272/launch_ros/launch_ros/actions/node.py#L209. If you write code to read these commandline arguments, you might need to filter it out.
Comment by Sam_Prt on 2023-02-20:
Oh ok, I thought it was more of an either/or logic. Thanks for your help ! Would you like to post an answer so I can accept it ?
Comment by christophebedard on 2023-02-20:
sure, done!
Comment by christophebedard on 2023-02-21:
Note that I edited my answer to mention rclcpp::init_and_remove_ros_arguments(). It just popped into my head.
Answer:
--ros-args is always added to the commandline arguments, but at the end, after any arguments: https://github.com/ros2/launch_ros/blob/6daacbce4bade7ed40f86f16a30a08b4d7ee9272/launch_ros/launch_ros/actions/node.py#L209. If you write code to read these commandline arguments, you might need to filter it out.
You can also ask rclcpp to remove ROS arguments from the arguments vector (argv) when initializing. Then I imagine you wouldn't need to filter it out. See rclcpp::init_and_remove_ros_arguments(): https://github.com/ros2/rclcpp/blob/28e4b1bd738c23e3ede2c70bf35786ce829ae910/rclcpp/include/rclcpp/utilities.hpp#L140
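If you read argv yourself (e.g. in a Python node, before handing the remainder to your own parser), the filtering might look like this sketch, which assumes the usual ROS 2 convention that ROS-specific arguments run from --ros-args up to an optional standalone --:

```python
def strip_ros_args(argv):
    """Remove ROS-specific arguments: everything from '--ros-args'
    up to (and including) the next standalone '--', or to the end."""
    out, i = [], 0
    while i < len(argv):
        if argv[i] == "--ros-args":
            i += 1
            while i < len(argv) and argv[i] != "--":
                i += 1
            i += 1  # also skip the terminating '--' (a no-op at end of list)
        else:
            out.append(argv[i])
            i += 1
    return out

print(strip_ros_args(["node", "--my-arg", "1", "--ros-args", "-r", "a:=b"]))
# ['node', '--my-arg', '1']
```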
Originally posted by christophebedard with karma: 641 on 2023-02-20
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 38275,
"tags": "ros2"
} |
Designing context free grammar for a language with range restriction on repetition of alphabets | Question: I am having issue with designing contex free grammar for the following language:
$L = \{0^n 1^m \, | \, 2n \leq m \leq 3n \}$
I can design for the individual cases, i.e. for $m \geq 2n$ and $m \leq 3n$, but I don't know how I should combine them. Or is it a different approach altogether?
Answer: It has been a while, since I have done context-free grammars, but I think this should be the answer:
$\qquad \displaystyle A \to 0A11 \mid 0A111 \mid \varepsilon$
I am assuming that empty string is in $L$.
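As a quick sanity check (illustrative, not part of the original answer), one can enumerate the strings the grammar derives and compare them with the definition of $L$:

```python
def derive(max_n):
    """All strings derivable from A -> 0A11 | 0A111 | eps using at most max_n zeros."""
    level, out = {""}, {""}
    for _ in range(max_n):
        # each rule application prepends one 0 and appends two or three 1s
        level = {"0" + s + ones for s in level for ones in ("11", "111")}
        out |= level
    return out

def language(max_n):
    return {"0" * n + "1" * m for n in range(max_n + 1) for m in range(2 * n, 3 * n + 1)}

print(derive(4) == language(4))  # True
```

Each derivation step contributes either two or three $1$'s per $0$, so $n$ steps give every $m$ with $2n \le m \le 3n$.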
The idea behind it is that you work your way from the outside. Since you need at least twice as many $1$'s as $0$'s, the first rule is needed; the upper limit is handled by the second rule, and all cases in between are also covered. The $\varepsilon$ production is needed for termination. | {
"domain": "cs.stackexchange",
"id": 565,
"tags": "context-free"
} |
Validation default value | Question: I want to check if all items of a list meet a set of criteria:
bool AreValid(List<string> vals)
{
    bool allValid = true;
    foreach (string val in vals)
    {
        if (!condition1)
        {
            // ...
            allValid = false;
        }
        else if (!condition2)
        {
            // ...
            allValid = false;
        }
    }
    return allValid;
}
The reason I can't use LINQ's All() is that I do some work for each failed condition.
Is it safe to set allValid to true by default?
Answer: Yes, it is fine. It definitely works, and there is nothing unsafe about assigning a default value of true to allValid.
My concern is the design. The name of your method, AreValid, suggests that it just checks whether objects meet specific criteria. You've mentioned that something is also being done to every object that fails a condition. That doesn't seem like the greatest approach.
I would suggest you consider separating checking whether objects meet the conditions from changing the state of the objects. It will make your code simpler and easier for others to read and understand. Is changing object state really what you expect from a method named AreValid? It would be quite a surprise for me to find a similar method in a code base.
The other thing is that you could use LINQ to select all the objects that do not meet the criteria and then perform the actions on those objects. | {
"domain": "codereview.stackexchange",
"id": 2224,
"tags": "c#, asp.net"
} |
Oldham coupling vs Universal joint | Question: I've heard that both of these coupling types introduce a degree of kinematic error in the driven shaft. What does that actually mean in practice? Is it that the driven shaft doesn't turn uniformly even when the input shaft does? And is this error arbitrary, or is the same amount of error experienced on every revolution?
In fact, in what sort of application should one be used but not the other?
Any thoughts appreciated.
Answer: Both of these types of joints do not provide a constant velocity to the output shaft because the resistance to rotation of the joint varies throughout one rotation. This is an inherent feature of the geometry rather than a materials or manufacturing issue.
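For a single universal joint the fluctuation is periodic, not arbitrary. A standard kinematic result (quoted here for reference; it is not stated in the original answer) for a joint bent at angle $\beta$ relates output to input speed as

$$\omega_2 = \frac{\omega_1 \cos\beta}{1 - \sin^2\beta\,\cos^2\theta_1},$$

where $\theta_1$ is the input shaft angle. The output speed therefore oscillates between $\omega_1\cos\beta$ and $\omega_1/\cos\beta$ twice per revolution, repeating exactly the same pattern every turn.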
However a joint composed of two universal joints back to back will give a much better approximation of constant velocity, these are often found in steering columns. There are also designs which stack two universal joints concentrically.
Universal joints do have the advantage that they are simple to manufacture and tend to be quite rugged.
There are a variety of designs of constant-velocity (CV) joints used for different applications. For example, the CV joints used in front-wheel-drive vehicles often consist of a spherical inner and outer shell joined by ball bearings located in grooves in both shells.
Key design considerations for selecting a particular joint type include:
The angle, or range of angles at which the joint needs to work
The speeds and loads of the system
Whether the joint will be subject to shock loading
Acceptable levels of vibration | {
"domain": "engineering.stackexchange",
"id": 726,
"tags": "mechanical-engineering"
} |
All your bases are belong to Dijkstra | Question:
CATS, willing to get a hold of all the bases, asked Dijkstra to give him a hand designing the most powerful algorithm to reach all bases with the least distance travelled.
I tried to implement Dijkstra's algorithm using TDD. This algorithm is designed to find the fastest route to reach all nodes of a graph. Right now, I'm concerned about whether I approached the algorithm from the right angle. I tested it (obviously) and all my tests pass, so I'm pretty confident the algorithm works; I'm just not sure if it's the way it should be written.
Those are the two classes I used to represent the graphs :
[DebuggerDisplay("{Name}")]
public class Node
{
public string Name { get; }
public ICollection<Link> Links { get; }
public Node(string name)
{
if (String.IsNullOrWhiteSpace(name)) throw new ArgumentNullException(nameof(name));
Name = name;
Links = new Collection<Link>();
}
public override bool Equals(object obj)
{
Node node = obj as Node;
if (node == null) return false;
return Name.Equals(node.Name);
}
public override int GetHashCode()
{
return Name.GetHashCode();
}
/// <summary>
/// Creates links between two nodes
/// </summary>
/// <param name="a">First node</param>
/// <param name="b">Second node</param>
/// <param name="distance">Distance between nodes</param>
/// <remarks>There is no order in the nodes</remarks>
public static void Join(Node a, Node b, int distance)
{
if (a == null) throw new ArgumentNullException("a");
if (b == null) throw new ArgumentNullException("b");
Link linkAToB = new Link(a, b, distance);
Link linkBToA = new Link(b, a, distance);
a.Links.Add(linkAToB);
b.Links.Add(linkBToA);
}
}
[DebuggerDisplay("({From.Name}) to ({To.Name}), Distance : {Distance}")]
public class Link
{
public Guid Id { get; } = Guid.NewGuid();
public Node From { get; }
public Node To { get; }
public int Distance { get; }
public Link(Node from, Node to, int distance)
{
if (from == null) throw new ArgumentNullException("from");
if (to == null) throw new ArgumentNullException("to");
From = from;
To = to;
Distance = distance;
}
public bool ConnectsSameNodes(Link other)
{
if (other == null) throw new ArgumentNullException("other");
bool connectsSameFrom = other.From.Equals(From) || other.To.Equals(From);
return connectsSameFrom && (other.From.Equals(To) || other.To.Equals(To));
}
public override bool Equals(object obj)
{
Link link = obj as Link;
if (link == null) return false;
return Id == link.Id;
}
public override int GetHashCode()
{
return Id.GetHashCode();
}
}
This is the algorithm's implementation :
public interface IGraphSolverStrategy
{
IEnumerable<Link> Solve(Node head);
}
class LinkDistanceComparer : IComparer<Link>
{
public int Compare(Link x, Link y)
{
if (y == null) throw new ArgumentNullException("y");
if (x == null) throw new ArgumentNullException("x");
return Math.Sign(x.Distance - y.Distance);
}
}
public class DijkstraSolverStrategy : IGraphSolverStrategy
{
public IEnumerable<Link> Solve(Node head)
{
if (head == null) throw new ArgumentNullException(nameof(head));
var orderedLinks = new SortedSet<Link>(new LinkDistanceComparer());
AddLinksToSet(orderedLinks, head.Links);
var traveledLinks = new List<Link>();
while (orderedLinks.Count != 0)
{
var link = orderedLinks.ElementAt(0);
orderedLinks.Remove(link);
if (traveledLinks.Any(l => l.To.Equals(link.To))) continue;
traveledLinks.Add(link);
var linksToAdd = link.To.Links.Where(l => !l.ConnectsSameNodes(link));
AddLinksToSet(orderedLinks, linksToAdd);
}
return traveledLinks;
}
private static void AddLinksToSet(SortedSet<Link> linkSet, IEnumerable<Link> linksToAdd)
{
foreach (var item in linksToAdd)
{
linkSet.Add(item);
}
}
}
Basically, I start with the head node, add the links from this node to a SortedSet, then pick the smallest link. Then I remove the current link from the set, add the links of the new node, pick the smallest link, and repeat. I make sure there are no closed loops by ensuring that a link with the same To node cannot be travelled twice.
Since I used TDD, I thought including my unit tests would be a good move :
[TestFixture]
public class DijsktraSolverStrategyTest
{
private DijkstraSolverStrategy solver = new DijkstraSolverStrategy();
#region Helpers
private static Node CreateHeadWithChilds(params int[] nodesDistance)
{
var head = new Node("head");
for (int i = 0; i < nodesDistance.Length; i++)
{
var distance = nodesDistance[i];
var child = new Node($"child {i + 1}");
Node.Join(head, child, distance);
}
return head;
}
private static Node CreateHeadWithTriangleLink(int triangleDistance, params int[] nodesDistance)
{
var head = CreateHeadWithChilds(nodesDistance);
var otherNodes = head.Links.Select(l => l.To);
Node.Join(otherNodes.ElementAt(0), otherNodes.ElementAt(1), triangleDistance);
return head;
}
#endregion
[Test]
public void Solve_NullHead()
{
Assert.Throws<ArgumentNullException>(() => solver.Solve(null));
}
[Test]
public void Solve_OneNode_ReturnsEmptyList()
{
//Build
var head = new Node("Node");
var expected = new List<Link>();
//Test
var actual = solver.Solve(head);
//Assert
CollectionAssert.AreEqual(expected, actual);
}
[Test]
public void Solve_TwoNodes_ReturnsLinkBetweenNodes()
{
//Build
var head = CreateHeadWithChilds(1);
var expected = new []{ head.Links.Single() };
//Test
var actual = solver.Solve(head);
//Assert
CollectionAssert.AreEqual(expected, actual);
}
[Test]
public void Solve_TwoNodesWithTwoLinks_PicksFastestLink()
{
//Build
const int smallestDistance = 2;
var head = CreateHeadWithChilds(smallestDistance);
Node.Join(head, head.Links.Single().To, smallestDistance + 1);
var expected = head.Links.Where(l => l.Distance == smallestDistance);
//Test
var actual = solver.Solve(head);
//Assert
CollectionAssert.AreEqual(expected, actual);
}
[Test]
public void Solve_HeadWithMultipleChilds_TravelsByOrder()
{
//Build
var distances = new []{ 5, 7, 4 };
var head = CreateHeadWithChilds(distances);
var expected = head.Links.OrderBy(l => l.Distance).ToList();
//Test
var actual = solver.Solve(head);
//Assert
CollectionAssert.AreEqual(expected, actual);
}
[Test]
public void Solve_TriangleNodes_DoesntCloseTheLoop()
{
//Build
var distances = new []{ 5, 7 };
var head = CreateHeadWithTriangleLink(3, distances);
var unexpected = head.Links.Single(l => l.Distance == 7);
//Test
var links = solver.Solve(head);
//Assert
CollectionAssert.DoesNotContain(links, unexpected);
}
[Test]
public void Solve_ThreeLevelHierarchyWithPossibleLoop()
{
var distances = new int[]{ 1, 700000 };
var head = CreateHeadWithTriangleLink(2, distances);
var thirdLevelNode = new Node("3rd child");
Node.Join(head.Links.First().To, thirdLevelNode, 3);
var expected = new List<Link>
{
head.Links.Single(l => l.Distance == 1),
head.Links.First().To.Links.Single(l => l.Distance == 2),
head.Links.First().To.Links.Single(l => l.Distance == 3),
};
var actual = solver.Solve(head);
CollectionAssert.AreEqual(expected, actual);
}
[Test]
public void Solve_InvertedFromTo_TravelIsNonDirectional()
{
//Build
var head = CreateHeadWithChilds(10);
var otherNode = head.Links.Single().To;
var thirdNode = new Node("case");
Node.Join(thirdNode, otherNode, 5);
var expected = new List<Link>(){ head.Links.Single(), otherNode.Links.Single(l => l.Distance == 5) };
//Test
var actual = solver.Solve(head);
//Assert
CollectionAssert.AreEqual(expected, actual);
}
}
Answer: I've been thinking this over for a while now and I'm pretty sure you're not implementing Dijkstra's algorithm (which finds the shortest path between two nodes in a graph). It looks like you're computing a minimum spanning tree (a set of edges connecting all nodes with minimal total cost).
Some additional remarks:
Link is an odd name - more generally the term Edge is used for a connection of two nodes in a graph.
I think your abstraction is a bit too leaky - the algorithm needs to know quite a bit about the internals of the nodes and links (I'm referring to things like link.To.Links.Where(l => !l.ConnectsSameNodes(link))).
I would stipulate you could implement a Graph object with the following public interface and still create independent graph algorithms:
class Graph<TNode> : IEnumerable<TNode>
{
// adds an edge between two nodes with the provided cost
// creates the nodes if not present
public Graph<TNode> AddEdge(TNode from, TNode to, int cost)
{
}
// returns a list of nodes connected to the source via an edge and the associated cost
public IEnumerable<Tuple<TNode, int>> GetEdges(TNode source)
{
}
// plus IEnumerable<TNode> implementation - enumerate all nodes in the graph
}
This has the advantages that:
Users can define their own node types and associate any meta data they like with it.
The internal implementation of how nodes and edges are stored is hidden from the user as they do not have to concern themselves with it - neither should they.
Update:
To make it clear, and to summarize the comments left by @BenAaronson as well: Dijkstra's algorithm as explained on Wikipedia finds the shortest path between two nodes in a graph, in which case you would expect to have to provide a start and an end node for the algorithm. If your goal was to find all shortest paths from a given node to every other node via Dijkstra, then that still isn't what you're doing.
A simple example would be this graph:
A -6-> D
| ^
5 |
| 1
v |
B -2-> C
Your algorithm yields the sequence A -> B, B -> C, C -> D with a total cost of 8, which is indeed the set of edges connecting all nodes with minimal cost. If you were to use Dijkstra's algorithm to compute all shortest paths from A to every other node you would get: A -> B, A -> B -> C, A -> D at cost 13 (if you do not count A -> B twice), since the path with the minimal cost from A to D is indeed the direct edge with cost 6.
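To see the difference concretely, here is a minimal Python sketch (the question's code is C#; edges are treated as undirected, as the question's Node.Join does) comparing shortest-path distances with the cost of a minimum spanning tree, using a heap-based Dijkstra and Prim's algorithm for the tree:

```python
import heapq

# undirected version of the example graph above
edges = {"A": [("B", 5), ("D", 6)], "B": [("A", 5), ("C", 2)],
         "C": [("B", 2), ("D", 1)], "D": [("A", 6), ("C", 1)]}

def dijkstra(src):
    """Shortest-path distances from src to every node (what Dijkstra computes)."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def prim(src):
    """Total cost of a minimum spanning tree (what the question's code computes)."""
    seen, cost = {src}, 0
    pq = [(w, v) for v, w in edges[src]]
    heapq.heapify(pq)
    while pq:
        w, v = heapq.heappop(pq)
        if v in seen:
            continue
        seen.add(v)
        cost += w
        for u, wu in edges[v]:
            if u not in seen:
                heapq.heappush(pq, (wu, u))
    return cost

assert dijkstra("A") == {"A": 0, "B": 5, "C": 7, "D": 6}  # A->D: direct edge wins
assert prim("A") == 8                                      # tree edges A-B, B-C, C-D
```

The two answers differ precisely because the shortest path to D bypasses the cheap spanning-tree edges.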
So your implementation finds the set of all edges so that all nodes are connected with the minimal cost. This, as mentioned above, is typically called a minimum spanning tree for which several algorithms exist, most notably Prim's algorithm and Kruskal's algorithm.
I haven't checked it in detail but it looks like your algorithm essentially is an implementation of Kruskal's algorithm. So it's not entirely wrong - it just isn't the algorithm you set out to implement.
You could rename your implementation to KruskalSolverStrategy and try again with Dijkstra (if you need help with that then Stackoverflow would be the better place to ask since CodeReview is about reviewing existing code). | {
"domain": "codereview.stackexchange",
"id": 17650,
"tags": "c#, unit-testing, graph"
} |
What is 'definite' variable in QM? | Question: I have gone through a few of the questions on the website regarding this particular query, but I have not understood what they meant.
When a question says that a particle has definite momentum, is it neglecting the limits implied by the uncertainty principle?
Because in QM, as far as I know, nothing can be known beyond a certain precision.
P.S. I have a high school math background.
Answer: I have understood your question as two separate questions.
What does it mean that a state has definite momentum?
A general state $|\psi \rangle$ in quantum mechanics is a superposition of different eigenstates $|p\rangle$ of the momentum operator:
$$|\psi \rangle=\int \psi(p)|p \rangle dp$$
I have written this for 1 dimension, since the concept does not change in 3d.
However, you can consider the case that $|\psi \rangle=|p \rangle$. Actually there are some issues with this, as $|p\rangle$ is not normalizable, but it works to some approximation. The state $|p \rangle $ is a state with definite momentum in the sense that it has a 100% probability to give momentum $p$, and 0% probability to give any other momentum.
Doesn't such a state violate the uncertainty principle?
Good question, and I see why you'd think so, but it does not! We wish for
$$\sigma_x \sigma_p \geq \hbar/2$$
Where $\sigma_x , \sigma_p$ are the standard deviations in the x and p probability distributions, respectively.
And if there is no momentum uncertainty, as in the case $|\psi \rangle = |p \rangle$, then $\sigma _p =0$, so you'd think that the product is also 0. However, for a momentum eigenstate, the position wave function is proportional to $e^{ipx}$. If you try to calculate $\sigma_x$, you will get infinity. So you have
$$\sigma_x \sigma_p = \infty \cdot 0 \geq \hbar/2\ ?$$
This is an indefinite form and a priori can equal any number. However you can prove that the inequality still holds (maybe lacking a bit of mathematical rigor): $\psi (p)$ will be a delta function $\delta (p'-p)$, and you can write this as a limit of gaussian functions. For any of the gaussians in the limit the uncertainty principle holds.
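That limiting argument can be checked numerically with a sketch (not part of the original answer; units with $\hbar = 1$, pure Python on a coarse grid): for Gaussian packets of increasing width, $\sigma_x$ grows, $\sigma_p$ shrinks, and the product stays pinned at $\hbar/2$, never below it.

```python
import math

hbar = 1.0
N, L = 40_000, 200.0
dx = 2 * L / N
xs = [-L + i * dx for i in range(N + 1)]

for sigma in (1.0, 5.0, 20.0):
    # real Gaussian wave packet, psi(x) ~ exp(-x^2 / (4 sigma^2))
    psi = [math.exp(-x * x / (4 * sigma * sigma)) for x in xs]
    norm = math.sqrt(sum(p * p for p in psi) * dx)
    psi = [p / norm for p in psi]
    # <x> = 0 by symmetry, so sigma_x^2 = <x^2>
    sx = math.sqrt(sum(x * x * p * p for x, p in zip(xs, psi)) * dx)
    # for a real psi, <p> = 0 and <p^2> = hbar^2 * integral of |psi'|^2 dx
    dpsi = [(psi[i + 1] - psi[i - 1]) / (2 * dx) for i in range(1, N)]
    sp = hbar * math.sqrt(sum(d * d for d in dpsi) * dx)
    # sigma_x grows with the width, sigma_p shrinks, product stays at hbar/2
    assert abs(sx * sp - hbar / 2) < 1e-2
```

As the width is taken to infinity this is exactly the sequence of Gaussians approximating the delta function mentioned above.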
Lastly: Since (according to your edit) you're looking for an answer with high school math background, I'll just summarize without any technical stuff.
Summary (no math background needed)
Mathematically, in quantum mechanics you can conceive of a particle with definite momentum, i.e. with 100% chance that the momentum takes a particular value $p$. But that on its own does not violate the Heisenberg uncertainty principle: The HUP says that the error in position and momentum cannot both be arbitrarily small at the same time. However, having 0 error in one of the variables does not mean the HUP is violated, it just means you have to have large (actually infinite) uncertainty in the other variable.
There is a caveat to this. Mathematically you can conceive of a state that has a definite momentum, and this state is very useful in quantum mechanics. But there are reasons to believe that such a state would never exist physically: you could get states which approximate this state very well, but you'd never quite get there. So if you like, you can think of the definite momentum state as an almost definite momentum state. A state with almost definite momentum has almost a 100% chance to give a momentum very close to $p$, and that chance is so close to 100% that you can just consider it exactly 100% for most purposes. | {
"domain": "physics.stackexchange",
"id": 54675,
"tags": "quantum-mechanics, operators, terminology, heisenberg-uncertainty-principle, observables"
} |
Loop to streamline pandas dataframe to_sql | Question: This code gives me what I am looking for. But I'm just wondering how I can streamline the if statements, because I would be repeating myself a couple of times, and that's not really good, is it?
import requests
import pandas
from sqlalchemy import create_engine
import os
import numpy
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.common.by import By
def _format_data_frame(dataframe_source, df_id):
"""Format the dataframe source that is retrieved from 'my_function()' and put into SQL"""
for each_dataframe in dataframe_source: # Formatting the columns
if "Unnamed: 2" in each_dataframe.columns:
each_dataframe.drop(each_dataframe.index[0], inplace=True)
each_dataframe.rename(columns={"Fare Per Ride (cent)": "Card", "Unnamed: 2": "Cash"}, inplace=True)
if "Card Fare Per Ride (cent)" in each_dataframe.columns:
each_dataframe.rename(columns={"Card Fare Per Ride (cent)": "Card"}, inplace=True)
if "Card Fare (cent)" in each_dataframe.columns:
each_dataframe.rename(columns={"Card Fare (cent)": "Card"}, inplace=True)
if "Description" in each_dataframe.columns:
each_dataframe.rename(columns={"Description": "Distance"}, inplace=True)
# Each dataframe_source has a total of 5 dataframes extracted.
# I don't need the last dataframe, and this portion is just to separate the dataframes out.
truck_services = dataframe_source[0]
feeder_services = dataframe_source[1]
express_services = dataframe_source[2]
other_services = dataframe_source[3]
### How can I streamline the below code? ###
if df_id == "df1":
engine = create_engine("sqlite:///abc.db", echo=False)
connection = engine.connect()
pandas.DataFrame.to_sql(truck_services, name="Truck Services", con=engine, if_exists="append")
pandas.DataFrame.to_sql(feeder_services, name="Feeder Services", con=engine, if_exists="append")
pandas.DataFrame.to_sql(express_services, name="Express Services", con=engine, if_exists="append")
pandas.DataFrame.to_sql(other_services, name="Other Services", con=engine, if_exists="append")
connection.close()
if df_id == "df2":
engine = create_engine("sqlite:///defg_Fares.db", echo=False)
connection = engine.connect()
pandas.DataFrame.to_sql(truck_services, name="Truck Services", con=engine, if_exists="append")
pandas.DataFrame.to_sql(feeder_services, name="Feeder Services", con=engine, if_exists="append")
pandas.DataFrame.to_sql(express_services, name="Express Services", con=engine, if_exists="append")
pandas.DataFrame.to_sql(other_services, name="Other Services", con=engine, if_exists="append")
connection.close()
if df_id == "df3":
engine = create_engine("sqlite:///hijk_Fares.db", echo=False)
## Same thing
connection = engine.connect()
if df_id == "df4":
engine = create_engine("sqlite:///lmno_Fares.db", echo=False)
## Same thing
connection = engine.connect()
if df_id == "df5":
engine = create_engine("sqlite:///pqr_Fares.db", echo=False)
## Same thing
connection = engine.connect()
Answer: As far as I can see the only thing changing is the URL to the DB file.
You can just define a dictionary for this:
DB_URL = {"df1": "sqlite:///abc_Fares.db",
"df2": "sqlite:///defg_Fares.db",
...}
I would also define the titles of the dataframes as a list:
titles = ["Truck Services", ...]
Which you can then easily use:
engine = create_engine(DB_URL[df_id])
connection = engine.connect()
for df, title in zip(dataframe_source, titles):
df.to_sql(title, engine, if_exists="append")
connection.close()
Note that this calls to_sql directly on the dataframes, so there is no need for the pandas.DataFrame.to_sql(df, ...) form.
Also note that zip will stop after the shorter iterable is exhausted. So if the list of titles only contains four titles, the fifth dataframe will not be written to the DB.
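A quick illustration of that truncation, with strings standing in for the five dataframes:

```python
titles = ["Truck Services", "Feeder Services", "Express Services", "Other Services"]
frames = ["trucks", "feeders", "express", "others", "unwanted_fifth"]

pairs = list(zip(frames, titles))
assert len(pairs) == 4                        # the fifth frame is silently dropped
assert pairs[0] == ("trucks", "Truck Services")
```

If silent truncation is ever unwanted, Python 3.10+ supports zip(..., strict=True), which raises ValueError on a length mismatch.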
Final code:
DB_URL = {"df1": "sqlite:///abc_Fares.db",
"df2": "sqlite:///defg_Fares.db",
...}
def _format_data_frame(dataframe_source, df_id):
"""Format the dataframe source that is retrieved from 'my_function()' and put into SQL"""
column_rename = {"Fare Per Ride (cent)": "Card",
"Unnamed: 2": "Cash",
"Card Fare Per Ride (cent)": "Card",
"Card Fare (cent)": "Card",
"Description": "Distance"}
titles = ["Truck Services", "Feeder Services", "Express Services", "Other Services"]
engine = create_engine(DB_URL[df_id])
connection = engine.connect()
# Since zip stops after the shorter iterable is exhausted, this
# leaves the fifth df out
for df, title in zip(dataframe_source, titles):
if "Unnamed: 2" in df.columns:
df.drop("Unnamed: 2", axis=1, inplace=True)
df.rename(columns=column_rename, inplace=True)
df.to_sql(title, engine, if_exists="append")
connection.close()
Note that I also made the if "xxx" in each_dataframe.columns faster by storing the columns once per dataframe in a set, for which membership testing is \$\mathcal{O}(1)\$.
I also made the column renaming a lot easier. The dataframe will ignore all keys in the translation dictionary for which no columns exist, so we can use one common dictionary.
You should also include your imports; right now it is not clear where create_engine comes from. You should have a look at whether they implement context managers, so you could do:
with engine.connect() as connection:
for df, title in zip(dataframe_source, titles):
df.to_sql(title, engine, if_exists="append")
Where the connection.close() is done automatically. | {
"domain": "codereview.stackexchange",
"id": 24036,
"tags": "python, sqlite"
} |
Has a near earth object in heliocentric orbit ever been bright enough to be visible to the unaided eye? | Question: The question Can we see asteroid 1998 OR2 with unaided eye? got me thinking. Space.com's Vesta: Facts About the Brightest Asteroid says:
Vesta is the second most massive body in the asteroid belt, surpassed only by Ceres, which is classified as a dwarf planet. The brightest asteroid in the sky, Vesta is occasionally visible from Earth with the naked eye. It is the first of the four largest asteroids (Ceres, Vesta, Pallas and Hygiea) to be visited by a spacecraft. The Dawn mission orbited Vesta in 2011, providing new insights into this rocky world.
The brightness of an object seen from Earth is proportional to (among other things) $1/r^2$ so an object which occasionally passes very close to Earth will occasionally be far brighter than normal; an object normally of order 200 million km away that comes to within 200 thousand km will be a million times brighter, and at 2.5 magnitudes per power of ten that means that for a brief time it could be 15 magnitudes brighter than average.
Questions:
Has a near earth object in heliocentric orbit ever been bright enough to be visible to the unaided eye?
Are there any predictions of events in the foreseeable future when this will happen?
Please exclude comets, which are themselves invisible (it's the giant clouds of dust and gas they produce that we see), and planets (and dwarf planets) whose heliocentric orbits make them regularly visible.
The following may provide some helpful definitions:
Is there a distinction between NEOs and near-Earth asteroids? Is there a difference?
What (actually) defines an Aten-class near-earth asteroid?
Has Hubble ever been used to try to image a near Earth asteroid?
Is the passage of three asteroids near Earth today just coincidental?
https://space.meta.stackexchange.com/q/1459/12102
Answer: The near-Earth close approaches website shows close approaches to the Earth by near-Earth objects (NEOs). The table showing all close encounters indicates the absolute magnitude.
The data can be exported to a CSV file to estimate the apparent magnitude for each object, using the following equation.
$$
m = H + 5 \log_{10} \bigg( \frac{d_{BS}d_{BO}}{d_0^2} \bigg) - q(\alpha)
$$
where $H$ is the absolute magnitude, $m$ is the apparent magnitude, $d_{BS}$ and $d_{BO}$ are the Sun-body and body-observer distances, $d_0 = 1\,\text{AU}$, and $q(\alpha)$ accounts for the reflected light as a function of the phase angle. $q(\alpha)$ is a number between 0 and 1.
I only want to know what happens when the object is closest to the Earth, so I use the approximation that the distance from the Sun to the NEO is 1 AU.
$q(\alpha)$ is complicated to compute, so I just compute $m$ using $q=0$ and $q=1$. This leads to
min value $= H + 5 \log_{10} (d_{BO}) - 1 < m < H + 5 \log_{10} (d_{BO}) =$ max value
with $d_{BO}$ the distance between the Earth and the NEO expressed in astronomical units (AU).
The server is unhappy when I try to get the entire database, so I limited my export to the objects that come reasonably close to Earth (d<0.05 AU), with no time limit.
Among these 24588 objects, 4 have a maximal magnitude less than 6, and 16 have a minimal magnitude less than 6. So between 1900 and 2200, no more than 16 NEOs get bright enough to be visible to the naked eye.
In particular, 99942 Apophis (2004 MN4) has an apparent magnitude between 1.7 and 2.7 based on these estimates. Its close approach date is April 13 2029.
But this doesn't say anything on NEOs from before 1900 or after 2200. | {
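As a worked example of that bracket (a hypothetical object, not one from the table; an absolute magnitude of H = 22 roughly corresponds to a body on the order of 100 m across):

```python
import math

H = 22.0        # absolute magnitude (hypothetical NEO, not from the CNEOS table)
d_bo = 0.001    # Earth-object distance at closest approach, in AU

m_max = H + 5 * math.log10(d_bo)   # q(alpha) = 0
m_min = m_max - 1.0                # q(alpha) = 1

assert abs(m_max - 7.0) < 1e-9     # just past the naked-eye limit of ~6
assert abs(m_min - 6.0) < 1e-9     # right at the limit
```

So even a fairly large NEO has to come within roughly a lunar-distance-scale pass before the bracket dips into naked-eye territory.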
"domain": "astronomy.stackexchange",
"id": 4449,
"tags": "solar-system, amateur-observing, asteroids, near-earth-object"
} |
Where does this formula for effective thickness of air come from? | Question: I have a formula for effective thickness of air for alpha particles, i.e.
$$d(p,T)=\left[\frac{273.15\,\mathrm{K}}{T}\cdot\frac{p}{100\,\mathrm{kPa}}\right]\cdot 16\,\mathrm{mm}$$
where $p$, $T$ are absolute values of pressure and temperature, and 16 mm is the thickness of the air layer.
Where does this formula come from? Here is the source of this formula: formula
Answer: This seems to be the thickness compared to "normal conditions" at 0 °C and an atmospheric pressure of 100 kPa. The density of air is proportional to pressure and inversely proportional to temperature.
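In code, the formula just rescales the 16 mm layer by the density ratio relative to 0 °C and 100 kPa (a sketch; the 16 mm is the apparatus-specific layer thickness from the question, and the function name is made up):

```python
def effective_thickness_mm(p_kpa, t_kelvin, layer_mm=16.0):
    """Air layer rescaled to its equivalent thickness at 0 degC and 100 kPa."""
    return (273.15 / t_kelvin) * (p_kpa / 100.0) * layer_mm

# at the reference conditions the formula returns the layer itself
assert abs(effective_thickness_mm(100.0, 273.15) - 16.0) < 1e-12
# warm, slightly pressurised lab air (20 degC, 101.3 kPa) acts like ~15.1 mm
assert abs(effective_thickness_mm(101.3, 293.15) - 15.1) < 0.05
```

Warmer air is less dense, so the same 16 mm gap stops the alpha particles slightly less than reference-condition air would.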
"domain": "physics.stackexchange",
"id": 84693,
"tags": "thermodynamics, statistical-mechanics"
} |
Will gravity pull together two bodies from the other side of an empty universe? | Question: Let's say that there are only two bodies in the universe, 65 kg each. Other than that, the universe is completely empty: no neutrons, no photons, no dark energy/matter, not even neutrinos (that is to make things less complicated; if the loss of other things leads to something like the universe exploding like a bubble at the speed of light, you can change these parameters, as I'm mainly concerned about gravity here). Those two bodies are placed apart from each other at the distance of the observable universe. Will they start moving towards each other? Will they collide? (Optional question: if so, at what speed will they collide?)
Answer: I assume a steady-state universe and that the bodies have no velocity relative to each other.
Yes, they will eventually collide. Gravity has an effect over any distance, including the ~46 billion light-year radius that constitutes the spherical observable universe (the actual size of the universe may be much larger). Of course, the force will not be very strong over a 100 billion light-year separation, so the bodies would not collide for a very long time. A rough estimate of the time taken would be on the order of billions of years.
EDIT: As pointed out in the comments, the above time estimate was wrong by over a factor of $10^{20}$. The amount of time taken would be around $10^{38}$ years (100 undecillion years on the short scale, or 100 sextillion years on the long scale). The equation used to find this number can be found here.
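The number comes from the two-body free-fall time from rest, $t = \frac{\pi}{2}\sqrt{r^3/(2G(m_1+m_2))}$ (a degenerate Kepler orbit). A quick sketch, ignoring cosmic expansion and relativity as the answer does, with the separation taken to be roughly the observable-universe diameter (an assumption):

```python
import math

G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
M = 2 * 65.0                 # total mass of the two bodies, kg
r = 93e9 * 9.461e15          # ~93 billion light-years in metres

t_s = (math.pi / 2) * math.sqrt(r**3 / (2 * G * M))
t_years = t_s / 3.156e7

assert 1e36 < t_years < 1e39   # ~10^37 years with these inputs
```

With these inputs the sketch gives roughly $10^{37}$ years, within an order of magnitude of the answer's figure; the exact power depends on the separation assumed.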
"domain": "physics.stackexchange",
"id": 15955,
"tags": "homework-and-exercises, forces, newtonian-gravity, time, estimation"
} |
Any proof of correctness of the Toom-Cook algorithm? | Question: I found the toom-cook algorithm here: http://www.cs.cmu.edu/~ab/Desktop/15-211%20Archive/res00037/Multiplication_1_print.pdf
and have been trying to chase down a proof that it is correct, but I can't find anything. Does anyone know how to go about proving the algorithm correct? I was thinking induction, but I can't even figure out how to start.
This is just for the three-way splitting, page 6. Any ideas?
Answer: If you're only interested in the three-way splitting…
We know that any natural number is a linear combination of powers of ten, where the coefficients are in the range [0,9]. This is our writing system for numbers: the coefficients are simply the digits of the number.
Let's assume that our number has at least six digits (just for simplicity). If it has less than that, we don't need Toom-Cook: $n$ is bounded by a constant, so we can do the multiplication in $O(1)$.
So our number can now be written as $a_0 10^0 + a_1 10^1 + a_2 10^2 + \cdots + a_n 10^n$, where all $a_i$ are single-digit non-negative integers. And since the number has $n+1$ digits, $n$ is at least 5.
We also know that we can find $p$ such that $|(n - 2p) - p| \leq 1$. (If $n$ is divisible by 3, then $p = \frac{n}{3}$. If $n-1$ is divisible by 3, then we let $(n-2p)$ be larger by 1. And if $n-2$ is divisible by 3, then we increase $p$ by 1.)
Now, through the laws of multiplication and exponents, it's clear that our number is equal to $(a_0 10^0 + a_1 10^1 + \cdots) + (a_p 10^0 + a_{p+1} 10^1 + \cdots) 10^p + (a_{2p} 10^0 + a_{2p+1} 10^1 + \cdots) 10^{2p}$.
Thus, the number can be divided into three generally-close-to-equal parts, as the paper says. | {
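The digit-level splitting can be sketched in Python (a toy illustration of the decomposition above, not the full Toom-3 multiplication; the helper name split3 is made up):

```python
def split3(n):
    """Split n into (low, mid, high) so that n = low + mid*10**p + high*10**(2p)."""
    s = str(n)
    p = (len(s) + 2) // 3              # limb width in digits; the high limb may be shorter
    low = int(s[-p:])
    mid = int(s[-2 * p:-p]) if len(s) > p else 0
    high = int(s[:-2 * p]) if len(s) > 2 * p else 0
    return low, mid, high, p

low, mid, high, p = split3(123456789)
assert (low, mid, high, p) == (789, 456, 123, 3)
# the three limbs reassemble into the original number, as in the identity above
assert low + mid * 10**p + high * 10**(2 * p) == 123456789
```

Toom-3 then multiplies two such triples as degree-2 polynomials in $x = 10^p$, which is where the evaluation-interpolation machinery of the lecture notes comes in.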
"domain": "cs.stackexchange",
"id": 11521,
"tags": "algorithms, correctness-proof"
} |
Bead on a rotating wire - Conservation of angular momentum, fixed points | Question: Let's consider a wire in the x-y plane which rotates with constant angular velocity $\omega$. The coordinates of a bead, which is forced to stay on this wire, can then be expressed as $$x=r \cos(\phi) \\ y=r \sin(\phi)$$ with the rheonomous constraint $\phi=\omega t$. The Lagrangian $L$ is then simply $$L=\frac{m}{2}(\dot{r}^2+r^2\omega^2)$$ and the equation of motion is just $$\ddot{r}-\omega^2r=0$$ with the solution $$r(t)=r_0\cosh(\omega t) +\frac{v_0}{\omega} \sinh(\omega t).$$
Now it follows immediately that neither the energy nor the angular momentum is conserved. I think the reason is that we are dealing with rheonomous constraints. Is that correct? But intuitively I would have guessed that a constraint like $\phi=\omega t$ does not necessarily lead to an increasing $r$ (if $v_0\geq0$). So what drives the bead outwards? The constraint only demands that $\phi$ increase linearly. This would also be fulfilled by a simple circular motion of the bead with constant $r$. Of course, that is not a solution of the differential equation given above. But why not? Is there something more included in the Lagrangian which makes the bead go outwards (something like a centrifugal force)?
The maths are absolutely clear, I am just wondering why and how the bead is driven outwards.
Answer: In a coordinate system rotating at constant angular rate $\omega$, neither energy nor angular momentum are conserved and one has coriolis and centrifugal forces. The bead is forced outwards by the centrifugal force: the energy increases by the work done on the bead by the rotating system.
In fact, since the Hamiltonian $$H=\frac{p^2}{2m} - \frac{1}{2}m\omega^2r^2$$
satisfies $$\frac{\partial H}{\partial t}=0$$ it is conserved (recall that $dH/dt=\partial H/\partial t$). It is related to energy and angular momentum via $$H=E-\omega J,$$ also known as Jacobi energy.
This is still conserved if there is a conservative force that is time-independent in the rotating frame, e.g. if the bead on the wire were attached to the origin by a spring.
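A numerical sanity check of the solution and of the conserved Jacobi energy (a sketch in arbitrary units; note $E - \omega J = \frac{m}{2}(\dot r^2 - \omega^2 r^2)$, which is constant along the motion):

```python
import math

m, w = 1.0, 2.0            # mass and wire angular velocity (arbitrary units)
r0, v0 = 1.0, 0.5          # initial radius and radial velocity

def r(t):
    return r0 * math.cosh(w * t) + (v0 / w) * math.sinh(w * t)

def rdot(t):
    return r0 * w * math.sinh(w * t) + v0 * math.cosh(w * t)

def jacobi(t):
    """H = E - w*J = (m/2)(rdot^2 - w^2 r^2), the conserved Jacobi energy."""
    E = 0.5 * m * (rdot(t) ** 2 + (w * r(t)) ** 2)  # inertial-frame kinetic energy
    J = m * r(t) ** 2 * w                           # angular momentum
    return E - w * J

# E and J both grow as the bead flies outwards, but the Jacobi energy does not:
assert abs(jacobi(0.0) - jacobi(1.3)) < 1e-9
# the solution satisfies the equation of motion r'' = w^2 r:
h = 1e-4
rdd = (r(1.0 + h) - 2 * r(1.0) + r(1.0 - h)) / h ** 2
assert abs(rdd - w ** 2 * r(1.0)) < 1e-3
```

The growth of $E$ at fixed $H$ is exactly the work the wire does on the bead.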
"domain": "physics.stackexchange",
"id": 22697,
"tags": "lagrangian-formalism, energy-conservation"
} |
Observer pattern implementation without subclassing | Question: I have not yet seen an implementation of the observer pattern in Python which satisfies the following criteria:
A thing which is observed should not keep its observers alive if all other references to those observers disappear.
Adding and removing observers should be pythonic. For example, if I have an object foo with a bound method .bar, I should be able to add an observer by calling a method on the method, like this: foo.bar.addObserver(observer).
I should be able to make any method observable without subclassing. For example, I should be able to make a method observable just by decorating it.
This must work for types which are unhashable, and must be able to be used on an arbitrary number of methods per class.
The implementation should be comprehensible to other developers.
Here is my attempt (now on github), which allows bound methods to observe other bound methods (see the tests below for example usage):
import weakref
import functools
class ObservableMethod(object):
"""
A proxy for a bound method which can be observed.
I behave like a bound method, but other bound methods can subscribe to be
called whenever I am called.
"""
def __init__(self, obj, func):
self.func = func
functools.update_wrapper(self, func)
self.objectWeakRef = weakref.ref(obj)
self.callbacks = {} #observing object ID -> weak ref, methodNames
def addObserver(self, boundMethod):
"""
Register a bound method to observe this ObservableMethod.
The observing method will be called whenever this ObservableMethod is
called, and with the same arguments and keyword arguments. If a
boundMethod has already been registered as a callback, trying to add
it again does nothing. In other words, there is no way to sign up an
observer to be called back multiple times.
"""
obj = boundMethod.__self__
ID = id(obj)
if ID in self.callbacks:
s = self.callbacks[ID][1]
else:
wr = weakref.ref(obj, Cleanup(ID, self.callbacks))
s = set()
self.callbacks[ID] = (wr, s)
s.add(boundMethod.__name__)
def discardObserver(self, boundMethod):
"""
Un-register a bound method.
"""
obj = boundMethod.__self__
if id(obj) in self.callbacks:
self.callbacks[id(obj)][1].discard(boundMethod.__name__)
def __call__(self, *arg, **kw):
"""
Invoke the method which I proxy, and all of its callbacks.
The callbacks are called with the same *args and **kw as the main
method.
"""
result = self.func(self.objectWeakRef(), *arg, **kw)
for ID in self.callbacks:
wr, methodNames = self.callbacks[ID]
obj = wr()
for methodName in methodNames:
getattr(obj, methodName)(*arg, **kw)
return result
@property
def __self__(self):
"""
Get a strong reference to the object owning this ObservableMethod
This is needed so that ObservableMethod instances can observe other
ObservableMethod instances.
"""
return self.objectWeakRef()
class ObservableMethodDescriptor(object):
def __init__(self, func):
"""
To each instance of the class using this descriptor, I associate an
ObservableMethod.
"""
self.instances = {} # Instance id -> (weak ref, Observablemethod)
self._func = func
def __get__(self, inst, cls):
if inst is None:
return self
ID = id(inst)
if ID in self.instances:
wr, om = self.instances[ID]
if not wr():
msg = "Object id %d should have been cleaned up"%(ID,)
raise RuntimeError(msg)
else:
wr = weakref.ref(inst, Cleanup(ID, self.instances))
om = ObservableMethod(inst, self._func)
self.instances[ID] = (wr, om)
return om
def __set__(self, inst, val):
raise RuntimeError("Assigning to ObservableMethod not supported")
def event(func):
return ObservableMethodDescriptor(func)
class Cleanup(object):
"""
I remove an element from a dict whenever I'm called.
Use me as a weakref.ref callback to remove an object's id from a dict
when that object is garbage collected.
"""
def __init__(self, key, d):
self.key = key
self.d = d
def __call__(self, wr):
del self.d[self.key]
Here is a test routine, which also serves to illustrate use of the code:
def test():
buf = []
class Foo(object):
def __init__(self, name):
self.name = name
@event
def bar(self):
buf.append("%sbar"%(self.name,))
def baz(self):
buf.append("%sbaz"%(self.name,))
a = Foo('a')
assert len(Foo.bar.instances) == 0
# Calling an observed method adds the calling instance to the descriptor's
# instances dict.
a.bar()
assert buf == ['abar']
buf = []
assert len(Foo.bar.instances) == 1
assert Foo.bar.instances.keys() == [id(a)]
assert len(a.bar.callbacks) == 0
b = Foo('b')
assert len(Foo.bar.instances) == 1
b.bar()
assert buf == ['bbar']
buf = []
assert len(Foo.bar.instances) == 2
# Methods added as observers are called when the observed method runs
a.bar.addObserver(b.baz)
assert len(a.bar.callbacks) == 1
assert id(b) in a.bar.callbacks
a.bar()
assert buf == ['abar','bbaz']
buf = []
# Observable methods can sign up as observers
mn = a.bar.callbacks[id(b)][1]
a.bar.addObserver(b.bar)
assert len(a.bar.callbacks) == 1
assert len(mn) == 2
assert 'bar' in mn
assert 'baz' in mn
a.bar()
buf.sort()
assert buf == ['abar','bbar','bbaz']
buf = []
# When an object is destroyed it is unregistered from any methods it was
# observing, and is removed from the descriptor's instances dict.
del b
assert len(a.bar.callbacks) == 0
a.bar()
assert buf == ['abar']
buf = []
assert len(Foo.bar.instances) == 1
del a
assert len(Foo.bar.instances) == 0
Answer: I like your code: it's quite interesting. While reviewing, I was hoping to see if there was a better way of keeping track of the instances of each ObservableMethod, or if we could combine the ObservableMethod and ObservableMethodDescriptor classes. However, I could not think of a better way to store the instances of each ObservableMethod.
With that being said, I have a few recommendations, mostly centered around naming:
Rename your event function and Cleanup class.
Currently event is quite generic. Plus, it doesn't follow the 'function-names-start-with-verbs' convention. I would recommend renaming it to make_observable. This conveys much better what it (and its decorator) actually does.
As for renaming Cleanup: because of its verb-based name, it feels like a function. Classes are things, thus it makes sense to have a noun-based name; maybe CleanupHandler.
Improve your variable names.
Currently you have several 1-letter or 2-letter variable names. Make those names more descriptive:
def __get__(self, inst, cls):
. . .
if ID in self.instances:
# World record, organic matter?
wr, om = self.instances[ID]
Yes, in context we can deduce the meaning of the variable names. However, it pays to be more explicit rather than trusting that everyone understands what is going on:
def __get__(self, inst, cls):
. . .
if ID in self.instances:
# Much better.
weak_ref, observable_method = self.instances[ID]
In the example above is another point I want to make: ID is not a constant, although its all-caps name suggests it is by convention. I would recommend renaming it to obj_id or something of the like. You could just use id, but that may be a little confusing (and possibly dangerous) since id is a built-in Python function.
My final point about variable names deals with multiple-word names. You used 'camelCase' in your code. Pythonic convention says that underscores_in_names is preferred.
Spacing
Insert blank lines to help group logical sections of code. Looking at your __get__ code, inserting blank lines helps the visual flow of the method:
def __get__(self, inst, cls):
if inst is None:
return self
ID = id(inst)
if ID in self.instances:
wr, om = self.instances[ID]
if not wr():
msg = "Object id %d should have been cleaned up"%(ID,)
raise RuntimeError(msg)
else:
wr = weakref.ref(inst, Cleanup(ID, self.instances))
om = ObservableMethod(inst, self._func)
self.instances[ID] = (wr, om)
return om
format() vs. %-Notation
As this link says, %-notation isn't becoming obsolete. However, it says that using format() is the preferred method, especially if you are concerned about future compatibility.
# Your way
>>> print 'Hello %s!'%('Darin',)
# New way
>>> print 'Hello {}!'.format('Darin')
This also saves you from having to create temporary tuples just to print information.
Outside of these comments, the code looks neat and Pythonic. | {
"domain": "codereview.stackexchange",
"id": 7652,
"tags": "python, design-patterns"
} |
Carnot Engine Work from Heat Exchanger between Two Gas Streams | Question: To be clear, this is not a homework question, but it is something I am studying for an exam. The question is about a hilsch tube which separates a stream of high pressure air to a high temperature and cold temperature stream. After this, a carnot engine is placed between the streams to convert some of the heat from the hot stream to work and the rest of the heat is given to the cold stream such that the two streams are exiting at the same temperature.
I have given the full problem diagram at the end of the question.
The main question I have is this:
Most definitions of Carnot engine work have isothermal temperature reservoirs, cold and hot, at temperatures $T_{hot}$ and $T_{cold}$ such that
$\eta=W/Q_{hot}=1-T_{cold}/T_{hot}$
However, since the temperatures in that definition are constant, how does this relationship change if they continually change as energy is removed or added, as in this heat exchanger between two streams?
The manner in which I have approached this problem is:
a) Apply an Energy Balance in which the Hilsch vortex tube is isenthalpic:
$H_{in} = H_{out}$
$\implies$
$n_AC_p(T_A-T_{ref})=n_BC_p(T_B-T_{ref})+n_CC_p(T_C-T_{ref})$
Since $C_p$ drops out, $n_A$ is given, and $T_{ref}$ is arbitrary (it can be chosen to be 0), $n_B$ and $n_C$ can be found using $n_A=n_B+n_C$.
Which for this specific example ends up being
$n_A=1mol/s$, $n_B=0.833mol/s$, $n_C=0.166mol/s$.
b) This is where the problem originates. If we use the equation for efficiency above, we can end up using another energy balance where we take advantage of the equation $Q=mC_p\Delta T$, so we have:
$Q_{hot}=n_CC_p(T_C-T_D)$
where $Q_{hot}$ is the heat being removed from the hot source, or stream $C$.
So, similarly,
$Q_{cold}=n_BC_p(T_B-T_D)$
and
$W=Q_{hot}(1-T_{cold}/T_{hot})$
This would (if the temperatures were constant as reservoirs) be:
$W=Q_{hot}(1-T_B/T_C)=n_CC_p(T_C-T_D)(1-T_B/T_C)$
So, The energy balance would be:
$W = Q_{cold} + Q_{hot}$
$\implies$
$n_CC_p(T_C-T_D)(1-T_B/T_C)=n_BC_p(T_B-T_D)+n_CC_p(T_C-T_D)$
$\implies$
$n_C(T_C-T_D)(1-T_B/T_C)=n_B(T_B-T_D)+n_C(T_C-T_D)$
All variables in this equation are known except $T_D$, which allows us to calculate it, but I don't know if the assumptions are right to get here.
Answer: Well, I don't quite know the details of the Hilsch tube, but I can see in which direction the question is heading.
Here is what I think would solve the problem:
The amount of heat extracted from the source $C$ is
$$
Q_{out} = n_C C_p(T_C - T_D)
$$
Also the amount of heat rejected to sink $B$ is
$$
Q_{in} = n_B C_p(T_D - T_B)
$$
where $n_B$ and $n_C$ are $\frac{n_A(T_C - T_A)}{T_C-T_B}$ and $\frac{n_A(T_A - T_B)}{T_C - T_B}$ respectively.
Now for source $C$ change in entropy is given by
$$
\Delta S_C = \int_{T_C}^{T_D}n_CC_p \frac{dT}{T} = n_C C_p \ln{\frac{T_D}{T_C}}
$$
Similarly the entropy change in sink $B$ is
$$
\Delta S_B = \int_{T_B}^{T_D}n_B C_p \frac{dT}{T} = n_B C_p \ln{\frac{T_D}{T_B}}
$$
Since it's a Carnot engine, the change in entropy of the working fluid is zero.
Hence
$$
\Delta S_{total} = 0
$$
$$
n_C C_p \ln{\frac{T_D}{T_C}} + n_B C_p \ln{\frac{T_D}{T_B}} = 0
$$
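Because both entropy terms are logarithmic in $T_D$, setting their sum to zero has a closed-form solution: $T_D$ is a flow-weighted geometric mean of $T_B$ and $T_C$. A hedged Python sketch of this balance — the molar flows are the question's values, but the temperatures here are hypothetical, and the function name is mine:

```python
import math

def mixed_outlet_temperature(n_B, T_B, n_C, T_C):
    """Solve n_C*ln(T_D/T_C) + n_B*ln(T_D/T_B) = 0 for T_D.
    The log-linear form gives a flow-weighted geometric mean."""
    return math.exp((n_C * math.log(T_C) + n_B * math.log(T_B)) / (n_B + n_C))

# molar flows (mol/s) from the question; temperatures (K) are illustrative
T_D = mixed_outlet_temperature(n_B=0.833, T_B=250.0, n_C=0.166, T_C=400.0)

# the total entropy change should close to ~0 at this T_D
residual = 0.166 * math.log(T_D / 400.0) + 0.833 * math.log(T_D / 250.0)
```

With equal flows ($n_B=n_C$) the same function reduces to $\sqrt{T_B T_C}$.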
Using this you can find $T_D$. If the flows in the source and sink had been the same, i.e. $n_B = n_C$, then this equation reduces to
$$
T_D = \sqrt{T_B T_C}
$$ | {
"domain": "physics.stackexchange",
"id": 12784,
"tags": "thermodynamics"
} |
How come metal isn't considered a state of matter? | Question: I know in chemistry metals are a class of elements on the periodic table, but in physics metal is more like a state of matter. All of the elements that are called metals on the periodic table are metals under normal conditions.
Apart from the 4 classical states of matter, Wikipedia lists a lot more, not only extreme ones, like quark-gluon plasma, BEC, superfluids, supersolids, but also liquid crystals, states that just have different magnetic properties and even glass is considered as a unique state. Metal isn't on the list, nor is it included as a state of matter in any other literature I've managed to find.
The reason I'm confused about this is because elements that are nonmetallic actually have metallic states at high pressures, including hydrogen, helium, oxygen, carbon, and other heavier nonmetals. Even water, a compound, has a metallic state at 4000 K and 100 GPa. Metals have nonmetallic allotropes too.
Also, the primary difference between a metal and a nonmetal is analogous to the difference between a gas and plasma, which are different states of matter.
Another state of matter was discovered not long ago called the Jahn-Teller Metal. If this is classified as a state of matter, how come metal can't be classified as one too?
Answer: The problem is just that there are two different meanings of "state of matter"- the commonplace meaning, and the meaning used by physicists.
The commonplace meaning is that a state of matter is either solid, liquid, or gas. For some reason, plasma is now also often included. The definitions given usually involve rigidity and whether the substance fills its container.
The physicist meaning of state of matter is more like any substance with a given set of properties*, and so it has many more possibilities. For example, certain solids can change between being magnetic and being non-magnetic at a critical temperature, called the Curie temperature. Even though both are solids, they are still considered different states of matter in the physicist sense. Furthermore, in some cases the transition from liquid to gas is really a smooth crossover without any sharp transition, and in this case the two aren't really considered to be separate phases in the physicist sense. So the definitions are really quite different- one isn't just a generalization of the other. Unfortunately, popular descriptions almost always mix the two up, especially in a list like the Wikipedia list of phases.
So, then, is a metal a phase of matter? In the commonplace sense, it is not- the possible phases are solid, liquid, and gas, and metals can be either solid or liquid. However, in the physicist sense metals are indeed a phase, or depending on the system there may be multiple metallic phases. The study of metal-insulator transitions of various kinds is a major focus of condensed matter physics. The defining characteristic of a metal has varied somewhat over time and depending on context, but it is often either (1) A decreasing electric resistivity with decreasing temperature, (2) Extended (as opposed to localized) single-electron wavefunctions, or (3) A gapless (as opposed to gapped) energy spectrum.
*the formal definition generally involves some parameter that exists only within one of the phases, called an "order parameter" | {
"domain": "physics.stackexchange",
"id": 46291,
"tags": "condensed-matter, phase-transition, metals, states-of-matter"
} |
UniqueList Class in VBA | Question: I often find myself abusing dictionary objects just for the exist method like this,
For Each x in Something
If Not dict.Exists(x) Then dict.Add x, False
Next x
Then just exploit the .Exists(x) method. It happens often enough that I thought it merited its own class.
Instead of composing Scripting.Dictionary, I decided to just maintain a sorted list of unique items. Usage is simple; unique items are inserted into the collection sorted. Non-unique items are not added.
Attributes
Let's just get this out of the way. Attribute VB_PredeclaredId must be set to True and NewEnum allows the class to be iterable. I still haven't found a way to replicate this using an array instead of a collection. With an array I could offload some of my code to my standard library, but then this wouldn't be standalone or portable.
VERSION 1.0 CLASS
BEGIN
MultiUse = -1
END
Attribute VB_Name = "UniqueList"
Attribute VB_GlobalNameSpace = False
Attribute VB_Creatable = False
Attribute VB_PredeclaredId = True
Attribute VB_Exposed = False
Option Explicit
Private list As Collection
Public Sub Class_Initialize()
Set list = New Collection
End Sub
Public Sub Class_Terminate()
Set list = Nothing
End Sub
Public Property Get NewEnum() As IUnknown
Attribute NewEnum.VB_UserMemId = -4
Set NewEnum = list.[_NewEnum]
End Property
Data Manipulation
Public Sub Add(ByVal element As Variant)
If IsEmpty Then
list.Add element
Exit Sub
End If
Dim index As Long
index = LocateIndex(element)
If (list(index) > element) Then
list.Add element, Before:=index
ElseIf list(index) < element Then
list.Add element, After:=index
End If
End Sub
Public Sub Merge(ByVal list_ As Variant)
Dim element As Variant
For Each element In list_
Add element
Next element
End Sub
Public Sub Remove(ByVal element As Variant)
Dim index As Long
index = LocateIndex(element)
If (list(index) = element) Then list.Remove index
End Sub
Public Sub Clear()
Set list = Nothing
Set list = New Collection
End Sub
Introspection
Note I don't implement an Item(x) method. I don't want users to be able to scramble the data.
Public Function Exists(ByVal element As Variant) As Boolean
Exists = (element = list(LocateIndex(element)))
End Function
Public Function Count() As Long
Count = list.Count
End Function
Public Function IsEmpty() As Boolean
IsEmpty = (Count = 0)
End Function
Searching
The only private method is a binary search that returns where the element is or where it should be.
Private Function LocateIndex(ByVal element As Variant) As Long
Dim upper As Long
upper = Count
Dim lower As Long
lower = 1
While lower < upper
Dim middle As Long
middle = (lower + upper) \ 2
If list(middle) >= element Then
upper = middle
Else
lower = middle + 1
End If
Wend
LocateIndex = upper
End Function
Answer:
Note I don't implement an Item(x) method. I don't want users to be able to scramble the data.
Ok. But then why expose a NewEnum to enable For Each loops? Without an Item getter (with VB_UserMemId set to 0 to identify the type's default property), it's not clear how a For Each loop might work. An Item property won't let the client code scramble the data if you don't implement a setter for it.
Searching
I like the idea and I like the implementation, but I would have called it IndexOf. Just a couple of points:
If list(middle) >= element Then
If list is empty, this method explodes. You may want to consider raising a custom error in this case, and gracefully handle an empty collection in the public methods.
Also I like that you're declaring variables close to their usage, but this:
While lower < upper
Dim middle As Long
middle = (lower + upper) \ 2
Is potentially confusing. Consider moving the declaration outside the loop. It's not like the While block defined a scope anyway!
Dim middle As Long
While lower < upper
middle = (lower + upper) \ 2
The binary search and sorted nature of this UniqueList are nice, but given the goal:
I often find myself abusing dictionary objects just for the exist method
You wrote that code to implement your own Exists method, and you're using Variant which tells me you're using it for more than just Integer items.
Objects
It would have been nice to handle object items, since a Variant can also be an Object; your code is silent about how objects are dealt with.
Actually, if I read your code correctly, this UniqueList will happily allow me to Add a New ADODB.Connection, and then will blow up when I try to add a 2nd item... or if I try to remove it with the Remove method - the Clear method will work.
Because we can't override operators (like >=) in VBA, I'd handle objects with an $O(n)$ linear search that performs an equality check on each IsObject(item) item until it finds the existing object reference (or not). You could mitigate this a little by introducing an IComparable interface that custom classes could implement; then you could call comparable.CompareTo(element) and handle custom classes with a binary search, if they specify how they compare to items of a given type.
There should also be a mechanism for ensuring all items are of the same type: as it stands I could add an Integer, a Date, a Short and a Long to an instance of a UniqueList, and VBA would find a way to sort them, and that's implicit and not pretty.
I think your class is more like some kind of a SortedList<T> (to borrow from c# notation), that doesn't allow duplicates: all items in a SortedList<T> are of type T, whatever that type is. | {
"domain": "codereview.stackexchange",
"id": 9214,
"tags": "sorting, vba, collections, binary-search"
} |
Do TADs derive from operons? | Question: TADs (Topologically associated domains) are DNA sequences in eukaryotic genomes (except plants) that lie between two sequences named "insulators". The genes in a TAD are affected only by enhancers and suppressors that are in the same TAD.
This structure can be seen in the operons that exist in bacteria. Is there evidence to say that TADs are derived from operons?
Answer: The short answer is "no".
Operons have very closely coupled regulation, even in eukaryotes. The classic operons are polycistronic, meaning that transcription of adjoining genes occurs as part of the same process.
TADs on the other hand are merely (not always very well defined) regions in between insulators, as you say. They probably have some common regulatory context, but it is nothing like the direct coupling observed in operons. | {
"domain": "biology.stackexchange",
"id": 11532,
"tags": "genetics, molecular-evolution, operons, tad"
} |
Binary classifier for a small range of the audio spectrum | Question: I am trying to detect a pitch in a narrow range of the audio spectrum with minimal samples. This corresponds to the rattling of certain mechanical systems. For example, the hum of an engine.
To do this, I am hoping to identify if there is a frequency in a certain range, such as 550 Hz to 555 Hz.
I spoke with a learned fellow about this problem and he mentioned that there is a variation of the FFT that only targets the desired frequency range. What is this called?
I was hoping to avoid an amplitude (volume) training period by distinguishing peaks from white-noise in the frequency domain. Can anybody point me to a tunable parameter that will enable me to identify if a certain range has a peak? I was thinking of comparing the max height to the average.
I was wondering if anybody knew of a good method to do this.
Answer: A few points
I'd carefully verify the underlying assumptions: most engine noises, hums or buzzes, have lots of harmonics and therefore a very wide spectrum. The fundamental is rarely the strongest component in there.
5 Hz seems like an awfully narrow bandwidth. You can only get 10 independent samples per second out of a signal that narrow. So if you need 50 samples to make a decision, it'll take you 5 seconds.
You may be more interested in the "transient event", i.e. the hum starting or stopping. The transient has a much higher bandwidth (and is probably easier to detect) than the steady-state noise.
One way to do this is "down-mixing": multiply the signal by a sine wave of, say, 550 Hz and then lowpass filter with the desired bandwidth, 5 Hz for example. You can then downsample if you want.
One suggestion for an algorithm
Multiply signal with 550Hz
Form x[n] by low pass filtering with 5 Hz, measure energy
Form y[n] by low pass filter with 50 Hz, measure energy
If the energies of x[n] and y[n] are about the same, then your engine is on, otherwise it's off
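The algorithm above can be written with nothing but a complex down-mix and two one-pole low-pass filters — a hedged, stdlib-only Python sketch, where the sample rate, cutoffs and threshold are illustrative and the function name is mine:

```python
import math

def band_energy_ratio(x, fs, f0=550.0, bw_narrow=5.0, bw_wide=50.0):
    """Down-mix x to f0, low-pass at two bandwidths, and return the
    wide/narrow energy ratio. Near bw_wide/bw_narrow (~10 here) means
    broadband noise; near 1 means a strong tone at f0."""
    def lowpass_energy(cutoff):
        # one-pole IIR low-pass applied to the complex down-mixed signal
        alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / fs)
        w = 2.0 * math.pi * f0 / fs
        yr = yi = energy = 0.0
        for n, s in enumerate(x):
            br = s * math.cos(w * n)    # real part of x[n] * e^{-j w n}
            bi = -s * math.sin(w * n)   # imaginary part
            yr += alpha * (br - yr)
            yi += alpha * (bi - yi)
            energy += yr * yr + yi * yi
        return energy / len(x)
    return lowpass_energy(bw_wide) / lowpass_energy(bw_narrow)
```

A decision threshold around 3 on this ratio then separates "tone present" (ratio near 1) from "noise only" (ratio near 10).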
Here is how it works: you look at the spectrum of your signal with a bandwidth of 10 Hz and 100 Hz around the center frequency. If the spectrum is mostly white (or pink, as audio typically is), then the energy in the 100 Hz band should be about 10 times larger than the energy in the 10 Hz band. Now if you have a very narrow but strong spectral component in the 10 Hz band, then this dominates the energy in the 100 Hz band as well, and therefore the energies in both bands will be roughly the same. | {
"domain": "dsp.stackexchange",
"id": 418,
"tags": "pitch"
} |
Use gdb for ros control plugin | Question:
Hi all,
Could somebody who has used gdb to debug a ros_control plugin please share some experience with me? :)
I added the "launch-prefix="gdb -ex run --args" " attribute in the node tag of my launch file.
But when roslaunch the launch file here is what I got
"/opt/ros/indigo/lib/controller_manager/spawner": not in executable format: File format not recognized
Starting program: joint1_effort_controller __name:=controller_spawner __log:=/home/lc/.ros/log/7a02b5fc-3c75-11e6-8071-0023248137eb/rrbot-controller_spawner-1.log
No executable file specified.
My gdb version is 7.7.1 and my ubuntu version is 14.04, my ros version is indigo.
And by the way, if I don't use gdb, are there any other debug tools for debugging a plugin?
Thanks in advance
Erli
Originally posted by cangjiaxuan on ROS Answers with karma: 20 on 2016-06-27
Post score: 0
Answer:
The spawner script is a Python script that invokes some services to load / start / stop / unload your controllers. If you want to debug a controller, I think you'd need to run the process that contains your controller_manager and possibly your hardware_interface in gdb.
Originally posted by gvdhoorn with karma: 86574 on 2016-06-28
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by cangjiaxuan on 2016-06-28:
Thank you for your comment, now I know why I have such error.
For debugging a controller in gdb, would you mind to describe it in more detail or provides me with some link solving similar question?
Thanks again!
Erli
Comment by gvdhoorn on 2016-06-29:
It should be a matter of determining which node is running your controller_manager and then loading that node in gdb using the launch-prefix approach you already found.
Comment by cangjiaxuan on 2016-06-29:
Great! I think I got your idea, I would try it later.
Comment by cangjiaxuan on 2016-07-09:
Hi, I tried several ways and finally I can use gdb to debug my plugin. I write my experience below.
"domain": "robotics.stackexchange",
"id": 25076,
"tags": "ros, roslaunch, gdb, plugin"
} |
Array Signal Processing: What are snapshots in Direction of Arrival estimation? | Question: I do have an overview of basic radar signal processing chain [Data cube, 3 stages of FFT, Range, Range-Doppler map, coherent/non-coherent integration, coarse DoA etc.].
I am now trying to understand an advanced DoA estimation technique, MUSIC, and one of the roadblocks is the snapshot concept. Nearly every paper retrieved from Google assumes a prior understanding of snapshots, and I am yet to find one that tells me what exactly it is.
So, what are snapshots in DoA estimation?
Answer: A snapshot is simply a data capture (sample window) of ADC samples from all receivers, that we know holds all the information to resolve what we need (distance and/or direction) given the various dimensions involved (transmit pulse length, nearest possible target distance, farthest possible distance, maximum array dimension, and the echo wavelet length). This defines the delay (from start of transmit time) of the window and the length of the window. The literature refers to a "snapshot" because many systems can repeatedly take snapshots to integrate-out noise, etc., but other cheaper systems can't. | {
"domain": "dsp.stackexchange",
"id": 5510,
"tags": "signal-analysis, signal-detection, correlation, music, radar"
} |
LU decomposition with pivoting | Question: I have to solve system of linear algebraic equations $AX=B$, where $A$ is a two-dimensional matrix with all elements of main diagonal equal to zero.
How to solve this problem? Iterative methods are not applicable in this case.
One way is LU Decomposition method with reordering rows of $A$ to get entries in the main diagonal that are not zero, using permutation matrix. How can we quickly reorder the rows of the matrix or find the permutation matrix?
Note that the matrix dimensions are large and I have to write a program to solve SLAE in C# language, so I do not need any Matlab or Mathematica functions. Thanks!
Answer: This is a bipartite matching problem, which has several known algorithms. As a bonus, you get a combinatorial criterion for when it is even possible (namely, Hall's theorem).
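For finding such a permutation in code, a simple augmenting-path matching (Kuhn's algorithm) is enough for moderate sizes and ports directly to C#. A hedged sketch, shown in Python for brevity; the function and variable names are mine:

```python
def zero_free_diagonal_permutation(A):
    """Return pi with A[i][pi[i]] != 0 for all i, or None if no such
    permutation exists (i.e. Hall's condition fails)."""
    n = len(A)
    match_col = [-1] * n  # column j -> row currently matched to it

    def try_row(i, seen):
        # try to match row i, re-matching earlier rows if needed
        for j in range(n):
            if A[i][j] != 0 and not seen[j]:
                seen[j] = True
                if match_col[j] == -1 or try_row(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    for i in range(n):
        if not try_row(i, [False] * n):
            return None
    pi = [0] * n
    for j, i in enumerate(match_col):
        pi[i] = j
    return pi
```

For large sparse matrices, Hopcroft–Karp gives a better asymptotic bound, but the augmenting-path version is the easiest to verify.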
Construct a bipartite graph with $n$ vertices on each side $x_i,y_j$. Connect $x_i$ to $y_j$ if $A_{ij} \neq 0$. A maximum matching in this graph, if it involves all the vertices, gives you a permutation $\pi$ such that $A_{i\pi(i)} \neq 0$ for all $i$. | {
"domain": "cs.stackexchange",
"id": 848,
"tags": "matrices, linear-algebra"
} |
Inverse Square Law of Radiation | Question: I am conducting an experiment in which I investigate the relationship between the counts per second detected by a Geiger Counter and the distance between said Geiger Counter and the source of radiation. Attached is a graph of the counts per second over time at a fixed distance. My question is: why so much variation between the data points? Is it by virtue of systematic errors?
Answer: It is difficult to provide a complete answer as you have not provided enough information about the experiment.
Your graph, which clearly shows the variation in values of the count rate, exaggerates the variation by having a false origin.
It is not clear as to whether or not a count rate of say $249$ is the result of a reading taken over a period of one second or an average of a reading taken over, say, $10$ seconds.
This matters because radioactive decay is a random process in that one cannot precisely state when a particular nucleus will decay. Statistically, with such a process, if $N$ is the count in a given time, the error in the count is $\pm \sqrt N$. So your one-off reading of $249$ has an error of $\pm\sqrt{249} \approx \pm 16$, and it could be shown as an error bar on your graph.
Another thing which could have been important is whether or not your readings are corrected for background radiation, as the background count rate can fluctuate during the course of a day. However, as the background count rate was probably less than 1 per second, that will not be a significant factor.
I have read off approximately 40 values of count rate from your graph and found the mean count rate to be $236\pm22$ per second.
As you have quite a number of readings the count rate should follow a Normal distribution which means that approximately $68\%$ of the readings should be in the interval $236\pm22$ (range $214-258$), $95\%$ in the interval $236\pm2 \times 22$ (range $192-280$) and $99\%$ in the interval $236\pm3 \times 22$ (range $170-302$).
A cursory glance at your data shows that the fluctuations you have observed are as to be expected.
There are also tests which could be used to test whether the distribution of count rates is Normal. There are many ways of getting this done including the use of Microsoft Excel.
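The $\sqrt N$ picture can also be checked with a short simulation rather than Excel — a hedged, stdlib-only Python sketch drawing Poisson counts at the quoted mean of $236$ per second and measuring the $1\sigma$/$2\sigma$/$3\sigma$ interval fractions; the helper names are mine:

```python
import math
import random

def poisson(lam):
    """Poisson sample via unit-rate exponential inter-arrival gaps."""
    t, n = 0.0, -1
    while t < lam:
        t += random.expovariate(1.0)
        n += 1
    return n

def interval_fractions(counts, mean, sigma):
    """Fraction of readings inside mean +/- k*sigma for k = 1, 2, 3."""
    return [sum(abs(c - mean) <= k * sigma for c in counts) / len(counts)
            for k in (1, 2, 3)]

random.seed(1)
mean = 236
counts = [poisson(mean) for _ in range(1000)]
f1, f2, f3 = interval_fractions(counts, mean, math.sqrt(mean))
# expect roughly 0.68, 0.95, 0.997 if the Normal approximation holds
```

At this mean the Poisson distribution is very close to Normal, so the simulated fractions land near the textbook 68/95/99.7% values.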
Here is a visual comparison.
Using the experimental mean and standard deviation, the blue dots show what the data points would be if the distribution were Normal, and the orange dots are the data points from your experiment. It looks very much as though your data follows a Normal distribution.
Is it by virtue of systematic errors?
With radiation detectors, one source of systematic error is the possibility of the detector not being able to count all the particles which reach it, due to the detector having a dead time; this error increases as the count rate increases. Given that your range of count rates is relatively small, this is unlikely to be a source of error in your experiment.
"domain": "physics.stackexchange",
"id": 83642,
"tags": "homework-and-exercises, experimental-physics, radiation"
} |
catkin work space initializing | Question:
I installed ROS version Lunar on ubuntu 16.04 . I was trying to initializing the catkin work space that this error appeared :
Could neither symlink nor copy file "/opt/ros/lunar/share/catkin/cmake/toplevel.cmake" to "/opt/ros/lunar/catkin_ws/src/CMakeLists.txt":
- [Errno 13] Permission denied
- [Errno 13] Permission denied: '/opt/ros/lunar/catkin_ws/src/CMakeLists.txt'
what should I do???
I prefer not to reinstall it, and when I use sudo, it doesn't work.
I also create directory in $HOME , but :
negar@negar:/home/ros_working/catkin_ws/src$ catkin_init_workspace
Could neither symlink nor copy file "/opt/ros/lunar/share/catkin/cmake/toplevel.cmake" to "/home/ros_working/catkin_ws/src/CMakeLists.txt":
[Errno 13] Permission denied: '/home/ros_working/catkin_ws/src/CMakeLists.txt'
And when I use sudo : sudo: catkin_init_workspace: command not found
Originally posted by NegarL on ROS Answers with karma: 3 on 2017-11-27
Post score: 0
Answer:
what should I do???
not try to make your Catkin workspace in a location your user cannot write to. /opt/ros/lunar/catkin_ws is only writable by root (or users with similar permissions).
Try to create your workspace in your $HOME directory.
Edit: please add how you created your workspace (ie: the /home/ros_working/catkin_ws directory).
Also include the output of ls -al /home/ros_working/catkin_ws in your update.
And when I use sudo , sudo: catkin_init_workspace: command not found
never use sudo with ROS. It is hardly ever really necessary, and there are almost always other ways to achieve what you want to do.
It's certainly not needed to be able to create, init and build a workspace.
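For reference, a typical workspace setup that avoids the permission problem entirely — a hedged sketch assuming the standard /opt/ros/lunar install location; adjust paths for your distro:

```shell
# Create and initialise a catkin workspace in a user-writable location.
mkdir -p "$HOME/catkin_ws/src"
cd "$HOME/catkin_ws/src"
source /opt/ros/lunar/setup.bash   # puts catkin_init_workspace on the PATH
catkin_init_workspace              # symlinks toplevel.cmake as CMakeLists.txt
cd "$HOME/catkin_ws"
catkin_make                        # builds the (empty) workspace; no sudo anywhere
```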
Originally posted by gvdhoorn with karma: 86574 on 2017-11-27
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by NegarL on 2017-11-27:
Thanks but It's ok to do that ?? to build it in another directory ?? no problem will occur ?
Comment by gvdhoorn on 2017-11-27:
I would say it's best practice to not build in /opt/ros/...
Comment by NegarL on 2017-11-27:
I made my catkin work space in $HOME but I still have the problem !
Comment by gvdhoorn on 2017-11-27:
I made my catkin work space in $HOME but I still have the problem !
then please tell us exactly which commands you used.
Comment by gvdhoorn on 2017-11-27:
Don't use comments for this kind of update, edit your original question to add new info.
Comment by NegarL on 2017-11-28:
thanks, It was fixed
Comment by gvdhoorn on 2017-11-28:
You did not include the output of ls -al /home/ros_working/catkin_ws in your update. | {
"domain": "robotics.stackexchange",
"id": 29459,
"tags": "ros, catkin-workspace"
} |
Can all types of particle be created in quantum fluctuation? | Question: In quantum / vacuum fluctuation, a pair of virtual particles is formed. But can all different types of particles be created, both virtual fermions and virtual bosons? For example electrons, quarks, photons, W boson...?
Answer: Virtual particles don't exist. They are a computational device used in calculating interactions between quantum fields. For more on this see What actually are virtual particles? and Do virtual particles actually physically exist?
Furthermore, vacuum fluctuations also don't exist, or at least not in the sense of pairs of (non-existent) virtual particles appearing and disappearing. For more on this see Are vacuum fluctuations really happening all the time?
However, we do perform calculations that involve virtual particles when calculating the properties of a field theory vacuum, and your question is reasonably interpreted as asking whether we need to consider all possible types of particles when calculating the vacuum properties. And the answer is that yes, we do: calculating the properties of the Standard Model vacuum requires consideration of all the particles in the Standard Model. | {
"domain": "physics.stackexchange",
"id": 35447,
"tags": "vacuum, fermions, virtual-particles, bosons"
} |
Rectangle Coverage by Sweep Line | Question: I am given an exercise unfortunately I didn't succeed by myself.
There is a set of rectangles $R_{1}..R_{n}$ and a rectangle $R_{0}$. Using plane sweeping algorithm determine if $R_{0}$ is completely covered by the set of $R_{1}..R_{n}$.
For more details about the principle of sweep line algorithms see here.
Let's start from the beginning. Initially, we know the sweep line algorithm as the algorithm for finding line segment intersections, which requires two data structures:
a set $Q$ of event points (it stores endpoints of segments and intersections points)
a status $T$ (dynamic structure for the set of segments the sweep line intersecting)
The General Idea: assume that the sweep line $l$ is a vertical line that approaches the set of rectangles from the left. Sort all $x$ coordinates of the rectangles and store them in $Q$ in increasing order; this takes $O(n\log n)$. Starting from the first event point, for every point determine the set of rectangles that intersect at the given $x$ coordinate, identify continuous segments of intersecting rectangles, and check whether they cover $R_{0}$ completely at the current $x$ coordinate. With $T$ as a binary tree, this takes $O(\log n)$. If any part of $R_{0}$ remains uncovered, then $R_{0}$ is not completely covered.
Details: The idea of segment intersection algorithm was that only adjacent segments intersect. Based on this fact we built status $T$ and maintained it throughout the algorithm. I tried to find a similar idea in this case and so far with no success, the only thing I can say is two rectangles intersect if their corresponding $x$ and $y$ coordinates overlap.
The problem is how to build and maintain $T$, and what the complexity of building and maintaining $T$ is. I assume that R-trees can be very useful in this case, but as I found, it is very difficult to determine the minimum bounding rectangle using R-trees.
Do you have any idea about how to solve this problem, and particularly how to build $T$?
Answer: Let's start with $n$ axis-aligned rectangles, since there is a kind of easy direct argument. We'll sweep a vertical line. The events are the endpoints of horizontal edges of the rectangles. As we sweep we maintain a set of intervals on the sweep line that are "uncovered" by $R_i$, $i\ge 1$:
Add the vertical interval covered by the rectangle $R_i$ to the sweep line when we first encounter $R_i$
Remove the vertical interval covered by the rectangle $R_i$ from the sweep line when it moves past $R_i$
It's easy to do this with a binary tree so that updates take $O(\log n)$ time. (The problem is, essentially, 1-dimensional. You figure out if the endpoints are in an uncovered interval and extend/merge appropriately when adding and lengthen them when removing.)
Then you just check that, within the horizontal span of $R_0$, none of the uncovered intervals ever intersects the vertical span of $R_0$. The whole thing is $O(n\log n)$ time and $O(n)$ space.
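As an illustration, here is a hedged Python sketch of the axis-aligned check. For simplicity it recomputes the union of active y-intervals in every x-slab, so it runs in $O(n^2\log n)$ rather than the $O(n\log n)$ achievable with a balanced tree, but it shows the sweep structure:

```python
def covers(r0, rects):
    """Return True iff the axis-aligned rectangle r0 = (x1, y1, x2, y2)
    is fully covered by the union of the axis-aligned rectangles in rects."""
    x1, y1, x2, y2 = r0
    # Event x-coordinates: r0's own sides plus every rectangle side
    # strictly inside r0's horizontal span.
    xs = sorted({x1, x2} | {x for r in rects for x in (r[0], r[2]) if x1 < x < x2})
    for xa, xb in zip(xs, xs[1:]):
        # y-intervals of the rectangles active throughout the slab (xa, xb)
        ivals = sorted((r[1], r[3]) for r in rects if r[0] <= xa and r[2] >= xb)
        reach = y1  # top of the contiguously covered region so far
        for lo, hi in ivals:
            if lo > reach:
                return False  # uncovered gap inside the slab
            reach = max(reach, hi)
            if reach >= y2:
                break
        if reach < y2:
            return False
    return True
```

The function name and tuple representation are illustrative choices, not part of the original exercise.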
For the general case, the obvious trick is not quite so fast. Use the standard sweep line algorithm to compute the whole planar subdivision induced by the rectangles.
Clearly some disk-like set $F'$ of the faces covers $R_0$. By itself, this doesn't tell us enough, since what we are interested in is whether any of these faces is inside $R_0$ and outside the other rectangles. To do this, we modify the construction a little bit, so that when we add an edge, we tag one side with the identity of the rectangle it's inside. This adds $O(1)$ overhead, so the construction is $O(n^2\log n)$ time; with no assumptions on the rectangles, the output can be $\Omega(n^2)$ in size, so we are using that much space in the worst case, so the time is, "existentially optimal" though not "output sensitive".
Finally, $R_0$ is covered so long as none of the faces in $F'$ have only edges not tagged as being in one of the $R_i$. The point is that if an edge of $f$ is in $R_i$, then the whole of $f$ is as well. Imagine sweeping a line over $f$ orthogonally along this edge: it can only leave $R_i$ either outside of $f$ or $f$ is bounded by more than one edge of $R_i$.
So the conclusion is that the special case is $O(n\log n)$ and the general one is $O(n^2\log n)$ at least, but I suspect it can be improved. | {
"domain": "cs.stackexchange",
"id": 156,
"tags": "algorithms, computational-geometry"
} |
Question on turtlesim tutorial: Go to Goal | Question:
Hello,
I am new to ROS and I am trying to implement the Go to Goal tutorial in C++ (the tutorial itself is in Python, but I wanted to implement it in C++). The part I am having trouble understanding is: in the function move2goal() there is a while loop whose condition depends on the magnitude of the error between the current pose and the goal pose, and it is in this loop that the commanded velocity is calculated. I ran the script and it works fine.
When I try to implement the same code in C++ I too have a function, velocity_calc(), which has a while loop based on the error magnitude. It is in this loop that I calculate the commanded velocity (same as the tutorial). The problem I am facing is that when I call this function in my main() (like they did in the tutorial), the program gets stuck in the while loop and is not able to update the current pose, i.e. go to the callback function. I did overcome this by replacing the while loop with an if condition in my callback and got it to work. But I am still curious as to why the while loop does not work for me. How is the Python script able to update the current pose in the while loop?
turtle_cl.h
#ifndef TURTLECL_H
#define TURTLECL_H
#include <ros/ros.h>
#include <turtlesim/Pose.h>
#include <geometry_msgs/Twist.h>
#include <cmath>
namespace turtle_cl {
class turtleCL {
public:
turtleCL();
void velocity_calc();
private:
ros::NodeHandle nh_;
ros::NodeHandle nh_private_;
ros::Subscriber pose_subscriber_;
ros::Publisher vel_publisher_;
void poseCallback(const turtlesim::PoseConstPtr &msg);
turtlesim::Pose current_pose;
geometry_msgs::Twist cmd_vel;
float k_p_lin, k_p_ang, tol, goal_x, goal_y, mag; };
}
#endif
turtle_cl.cpp
#include "turtle_cl/turtlecl.h"
namespace turtle_cl
{
turtleCL::turtleCL() :
nh_(ros::NodeHandle()),
nh_private_(ros::NodeHandle("~"))
{
pose_subscriber_ = nh_.subscribe("/turtle1/pose", 10, &turtleCL::poseCallback, this);
vel_publisher_ = nh_.advertise<geometry_msgs::Twist>("/turtle1/cmd_vel", 10);
nh_private_.param("p_gain_linear", k_p_lin, 0.1f);
nh_private_.param("p_gain_ang", k_p_ang, 0.1f);
nh_private_.param("tolerance", tol, 0.1f);
nh_private_.param("xGoal", goal_x, 1.0f);
nh_private_.param("yGoal", goal_y, 1.0f);
}
void turtleCL::poseCallback(const turtlesim::PoseConstPtr &msg)
{
current_pose = *msg;
float mag1, mag2;
mag1 = pow((goal_x - msg->x),2);
mag2 = pow((goal_y - msg->y),2);
mag = sqrt(mag1 + mag2);
// ROS_INFO("in callback");
// if(mag >= tol)
// velocity_calc();
}
void turtleCL::velocity_calc()
{
//ROS_INFO("in velocity_cal %f",mag);
while(mag >= tol)
{
float velx,angx,angy;
angx = goal_x - current_pose.x;
angy = goal_y - current_pose.y;
cmd_vel.linear.x = k_p_lin * mag;
cmd_vel.angular.z = k_p_ang * atan2(angy,angx) - current_pose.theta;
vel_publisher_.publish(cmd_vel);
}
//ROS_INFO("in while %f",current_pose.x);
}
}
turtle_cl_node.cpp
#include <ros/ros.h>
#include "turtle_cl/turtlecl.h"
int main(int argc, char** argv)
{
ros::init(argc,argv,"turtle_cl_node");
ros::NodeHandle nh;
turtle_cl::turtleCL hello;
hello.velocity_calc();
ros::spin();
}
Originally posted by prasgane on ROS Answers with karma: 16 on 2017-09-21
Post score: 0
Original comments
Comment by jayess on 2017-09-21:
Can you please post the relevant code here? If your code gets removed from that site this question won't be able to help anyone in the future.
Answer:
Adding ros::spinOnce() in velocity_calc() fixes this. I also learnt that rospy uses multiple threads, which lets the subscriber callback update the current pose even while the script is inside the while loop.
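To see why the pose refresh matters, here is a hedged ROS-free Python simulation of the same proportional go-to-goal law on a unicycle model; the pose integration inside the loop plays the role that the subscriber callback (driven by ros::spinOnce()) plays in the C++ node. Gains and step size are illustrative assumptions:

```python
import math

def go_to_goal(x, y, theta, gx, gy, k_lin=0.5, k_ang=4.0, dt=0.05, tol=0.1):
    """Proportional go-to-goal control on a unicycle model.

    The pose (x, y, theta) is re-integrated on every iteration before the
    next command is computed; in the C++ node that refresh only happens
    when ros::spinOnce() lets the pose callback run.
    """
    for _ in range(2000):
        dist = math.hypot(gx - x, gy - y)
        if dist < tol:
            return x, y, True
        v = k_lin * dist                                  # linear command
        w = k_ang * (math.atan2(gy - y, gx - x) - theta)  # angular command
        x += v * math.cos(theta) * dt                     # the "callback": pose update
        y += v * math.sin(theta) * dt
        theta += w * dt
    return x, y, False
```

If the three pose-update lines are removed, the loop never terminates: that is exactly the C++ symptom described in the question.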
Originally posted by prasgane with karma: 16 on 2017-09-25
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 28900,
"tags": "ros, rospy, turtlesim, tutorials"
} |
Looping Strategy: Change behaviour after first iteration | Question: I have often encountered the following situation: I want to call a method in a loop, but want to skip the call on the first run. It does not have to be a method; it can also be some lines of code that I want to skip. Here is one example:
I have a file which looks like that.
headline:headline1
content:lorem ipsum dolor sit amet
content2:lorem ipsum dolor sit amet
content3:lorem ipsum dolor sit amet
headline:headline2
content1:lorem ipsum dolor sit amet
.
.
.
I iterate through the lines and generate following html:
<div> headline1 </div>
<ul>
<li> lorem ipsum dolor sit amet </li>
<li> lorem ipsum dolor sit amet </li>
<li> lorem ipsum dolor sit amet </li>
</ul>
<div> headline2 </div>
<ul>
<li> lorem ipsum dolor sit amet </li>
</ul>
.
.
.
My Code looks like that:
private String createList(String file) {
StringBuilder output = new StringBuilder();
String[] lines = file.split("\n");
int headlineCounter = 0;
for (String line : lines) {
String[] fields = line.split(":");
if (fields[0].equals("headline")) {
if (headlineCounter > 0) {
output.append("</ul>");
}
output.append("<div>").append(fields[1]).append("</div>");
output.append("<ul>");
headlineCounter++;
} else {
output.append("<li>").append(fields[1]).append("</li>");
}
}
output.append("</ul>");
return output.toString();
}
The content is always wrapped in a <ul> tag. When a new headline appears, the content of the previous headline has to be closed with </ul>, unless it's the first headline, of course, because then there is no earlier content that has to be closed.
In my example I am using a counter. It works this way, but I think there are other and better ways. How would you solve this problem? This is just an example; there are of course other and better ways to generate HTML. I just wanted to show an example for my question.
Answer: One elegant way to handle this is to parse the input into a more usable structure before transforming it to HTML. For example, each section could be represented as:
private class Section {
final String headline;
final List<String> contents = new ArrayList<String>();
Section(String headline) {
this.headline = headline;
}
String asHtml() {
StringBuilder sb = new StringBuilder();
sb.append("<div>").append(headline).append("</div>");
sb.append("<ul>");
for (String item : contents) {
sb.append("<li>").append(item).append("</li>");
}
sb.append("</ul>");
return sb.toString();
}
}
Note how the code clearly shows the structure of the resulting HTML.
The parsing code now has a more focused responsibility: parsing and validating the input. It has nothing to do with the output format any more.
private Iterable<Section> parseList(String fileContents) {
LinkedList<Section> sections = new LinkedList<Section>();
for (String line : fileContents.split("\n")) {
String[] fields = line.split(":");
if (fields.length != 2) {
// throw some exception
}
if ("headline".equals(fields[0])) {
sections.add(new Section(fields[1]));
continue;
}
if (sections.isEmpty()) {
// throw some error because "content" came before "headline"
}
sections.getLast().contents.add(fields[1]);
}
return sections;
}
I pointed out some lines with comments where you should do input validation. If you don't do this, you might get surprised by ArrayIndexOutOfBoundsExceptions if the input string is invalid.
Building an object that represents each section has the advantage that invalid HTML cannot be produced; your current code could emit a <li> outside of an <ul>.
Then in your main code, the sections are simply concatenated together:
StringBuilder output = new StringBuilder();
for (Section section : parseList(...)) {
output.append(section.asHtml());
}
The advantage of separating the responsibilities of parsing and formatting is that it is now much easier to adapt a single part (e.g. to a new input or output format). My suggestion for the Section class has to be criticized here, because it does not follow the Single Responsibility Principle: it both represents a section, and formats the output. Those should ideally be separated into two different classes, so that the formatting can vary without Section having to change. | {
"domain": "codereview.stackexchange",
"id": 5316,
"tags": "java, parsing, iteration"
} |
Generate Synthetic Data Indicating Original data's Trend | Question: I am using timeGAN from ydata-synthetic repo.
Given a trained model synth, we generate synthetic data by:
synth_data = synth.sample(2000)
This will generate 2000 sequences randomly.
My question is, what if the original data has trend, and we wish to generate synthetic data which indicates the trend (similar size as original data)?
For example, suppose original data looks like below
and somehow we wish to generate synthetic data which also indicates the trend. Is it possible to do it? What I can think of is to increase seq_len to properly cover the trend.
Please help. Thanks.
Answer: To the best of my knowledge, all generally used synthetic data generation methods scale their data to reside in $[0, 1]$ or $[-1, 1]$. This is also done in TimeGAN & RCGAN.
If your data has a significant but regular downward trend, you probably want to reduce the trend in a data preprocessing step.
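For example, one simple preprocessing step (a sketch, not part of the ydata-synthetic API; the function name is mine) is to fit and remove a least-squares linear trend before scaling, and re-add it to the generated sequences afterwards:

```python
def detrend_linear(series):
    """Fit a least-squares line to a 1-D sequence and remove it.

    Returns (residuals, slope, intercept) so that the trend can be
    re-added to generated synthetic sequences afterwards.
    """
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    residuals = [y - (intercept + slope * x) for x, y in zip(xs, series)]
    return residuals, slope, intercept
```

Training the GAN on the residuals and adding `intercept + slope * t` back to the samples preserves the overall trend by construction.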
If your data has significant and highly varying trends (one going upwards, the other going downwards), then you simply stumbled across a limitation in the architecture. These models work best on somewhat normally distributed data. If your time-series goes all over the place, the model will have a hard time converging. More research still has to be done into time-series generative networks to be able to predict such trends. | {
"domain": "datascience.stackexchange",
"id": 11746,
"tags": "time-series, gan"
} |
resolving clipping audio issues | Question: I have implemented a pre-emphasis filter by the following (pseudo) code:
a = 0.5;
s1[0] = s[0];
for (n = 1; n < N; n++) {
s1[n] = (s[n-1] * a) + s[n]
}
The problem I am finding is that due to the summing, clipping is happening. I thought the solution would be to find the difference between the max value in s1[n] and 1.0 and then subtract that amount from s1[n]...
However, that results in my audio turning into complete garbage, which confuses me. Subtracting a constant amount from an entire signal should simply lower its amplitude by that amount, correct?
Answer: Subtracting a constant value from all of your samples will just cause all the clipping to happen at -1, and will give you some nasty distortion. The proper way to remove clipping is to normalize the samples.
Keep track of the mean of s1 in your filtering loop, then find the peak deviation from the mean, and in a final loop divide all of your points by it, i.e.
a = 0.5;
s1[0] = s[0];
mean = s1[0];
max_s1 = 0;
for (n = 1; n < N; n++) {
    s1[n] = (s[n-1] * a) + s[n];
    mean += s1[n];
}
mean /= N;
for (n = 0; n < N; n++) {
    if (abs(s1[n] - mean) > max_s1) {
        max_s1 = abs(s1[n] - mean);
    }
}
for (n = 0; n < N; n++) {
    s1[n] = (s1[n] - mean) / max_s1;
}
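The same idea in a hedged Python sketch (the function name and list-based processing are illustrative, not from the original answer):

```python
def pre_emphasize_and_normalize(s, a=0.5):
    """Apply the questioner's filter s1[n] = a*s[n-1] + s[n], then remove
    the DC offset and peak-normalize into [-1, 1].

    Assumes a non-constant input so the peak deviation is non-zero.
    """
    s1 = [s[0]] + [s[n - 1] * a + s[n] for n in range(1, len(s))]
    mean = sum(s1) / len(s1)
    peak = max(abs(x - mean) for x in s1)
    return [(x - mean) / peak for x in s1]
```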
EDIT: For bonus points you can also subtract the mean from your filtered signal prior to doing the normalization. This will ensure you get maximum level without distortion from clipping. | {
"domain": "dsp.stackexchange",
"id": 2239,
"tags": "audio"
} |
What are the effects of underwater "windmills"? | Question: Most early examples of tidal energy generators used barrages (dams) to generate electricity (La Rance, Kislaya Guba, Annapolis Royal). Meanwhile, a lot of newer projects use rotary blades in the tidal flow (like windmill blades), resulting in an installation that looks like an underwater wind farm. Examples of such projects are: FORCE (Fundy Ocean Research Center for Energy), SeaGen, TidalStream...
While some have claimed that the environmental effects of these new systems are much less severe, the fact is that tidal current technology is relatively new and has been applied in only a few sites. The environmental impacts seem to be based on hypothesis, modeling and simplified lab experiments, but my questions are:
What are these environmental effects?
How much are the local currents affected by these structures?
Answer: As you have noted, this technology is new, and so far only small numbers of experimental tidal energy converters (TECs) have been deployed. For this reason, little has been possible in the way of measurement, and so as you note, all estimates are based on models or other means of prediction.
To answer the second question first - how much the currents are affected depends how much of the energy you remove. Since the whole point of TECs is to remove energy from the flow, there must be some effect on currents. Garrett & Cummins (2005) built a simple analytic model of a channel between two large basins (which reflects many tidal energy scenarios) and showed that if all other considerations (e.g. navigation, engineering practicality) are ignored then the maximum power that may be extracted from the channel is obtained when its flow rate is reduced by approximately two thirds. However, this scenario is unlikely to ever be obtained, and the real limit on energy extraction at a given site (if not an economic limit) will probably be determined by what level of environmental effects are deemed to be acceptable. The relationship between power extracted and effect on the flow is not a simple one, and in most cases a significant proportion of the available power could be obtained with a relatively small change to the local currents.
A number of modelling studies have been made of more realistic scenarios for early deployment (e.g. Admadien et al 2012). These typically predict local changes to current speeds of up to 30%, which fade after a few km. Typically there is a decrease in the flow speed in line with the TECs, and an increase to either side of the farm/array, as some of the flow diverts around the added impedance. Effects on residual velocity (i.e. that which is left in the long term when the tidal cycles are averaged out over a period of time), which is relevant to sediment transport processes, are predicted up to at least 15km away.
Some baroclinic modelling (Yang & Wang 2013) has suggested increased mixing, and thus decreased stratification, as a result of the turbulence introduced by TECs. It is also conceivable that in other scenarios, stratification might be increased as a result of reduced flow speeds.
Physical effects, then, are likely to include direct effects on current speed, sediment, and stratification.
The obvious possible biological effect is from collisions. This is not my field, but as I understand it no effect is likely on small fish populations from collisions, although individuals may be affected. Collision risk for large animals (e.g. sharks and marine mammals) and for diving birds is a topic of active research, and is likely (especially for mammals) to depend on their behaviour around the devices. No large animal collisions have been reported on any of the prototypes undergoing testing so far.
A good review of possible effects on benthic organisms is provided by Shields et al (2011). These may include,
Direct disruption of seabed habitats by physical interference, e.g. from moorings
Disruption of ecological niches: Some organisms have evolved to survive in areas where others cannot - e.g. high current speed environments. Changes in seabed conditions, e.g. from greater or lesser current speeds, may cause them to be out-competed by other species that can then settle there.
Similarly, changes to sediment distribution represent changes to seabed habitats.
Alteration of flow patterns could have implications for species with a dispersive juvenile stage (e.g. larvae that rely on currents to spread) or those that rely on current flow for nutrient or waste transport. | {
"domain": "earthscience.stackexchange",
"id": 670,
"tags": "ocean, ocean-currents, tides"
} |
Find all the positive divisors of a positive integer | Question: This came from this question.
Find all the positive divisors of an integer >= 2.
Can stop when i * i >= number.
Please review for speed and style.
public class IntWithDivisors
{
public override int GetHashCode()
{
return Number;
}
public override bool Equals(object obj)
{
if(obj is IntWithDivisors)
{
return ((IntWithDivisors)obj).Number == this.Number;
}
return false;
}
public override string ToString()
{
return $"number {Number} Divisors " + string.Join(", ", Divisors);
}
public List<int> Divisors { get; } = new List<int>();
public int Count { get { return Divisors.Count(); } }
public int Number { get; }
public IntWithDivisors(int number)
{
if (number < 2)
{
throw new ArgumentOutOfRangeException();
}
Number = number;
Divisors = IntDivisors(number);
}
}
public static List<int> IntDivisors(int num)
{
//Debug.WriteLine($"\nIntDivisors {num}");
List<int> intDivisors = new List<int>();
intDivisors.Add(1);
intDivisors.Add(num);
int i;
int incr;
if(num / 2 * 2 == num)
{
i = 2;
incr = 1;
}
else
{
i = 3;
incr = 2;
}
for(; i*i < num; i += incr)
{
int numOveri = num / i;
if (numOveri * i == num)
{
//Debug.WriteLine(i);
intDivisors.Add(i);
intDivisors.Add(numOveri);
}
}
if(i*i == num)
{
intDivisors.Add(i);
}
intDivisors.Sort();
return intDivisors;
}
Answer: Why is the implementation of the algorithm a static method outside of the class IntWithDivisors? Is it in class Program? It shouldn't be there.
It is unusual to do complex work in the constructor. A constructor should only perform initialization. See Telastyn's answer on how complex a constructor should be. There are different possibilities to solve this.
E.g. lazy evaluation:
private List<int> _divisors;
public List<int> Divisors
{
get {
if (_divisors == null) {
_divisors = IntDivisors(Number);
}
return _divisors;
}
}
Another option is to simply call a method to get the result.
This leads us to the next question: what is the task of the class IntWithDivisors? Do we really need to store the input number together with the output? Do we really need to override Equals and GetHashCode?
I would rather opt for a minimalist but reusable and flexible approach in the LINQ style: an extension method implemented as an iterator.
I observed that the starting i is one bigger than incr. We can use this fact to simplify the initialization.
Testing if a number is even is usually done with the modulo operator which yields the remainder of the division num % 2 == 0.
numOveri is a strange name. I renamed it to quotient and i to divisor.
The divisors are tested in ascending order; however, the quotients accumulate in descending order. Therefore, we can return the divisors immediately with yield return and store the quotients in a list. We then need to reverse this list before we return its items.
public static class IntExtensions
{
public static IEnumerable<int> SelectDivisors(this int num)
{
yield return 1;
int incr = num % 2 == 0 ? 1 : 2;
var largeDivisors = new List<int>();
for (int divisor = incr + 1; divisor * divisor <= num; divisor += incr) {
int quotient = num / divisor;
if (quotient * divisor == num) {
yield return divisor;
if (quotient != divisor) {
largeDivisors.Add(quotient);
}
}
}
largeDivisors.Reverse();
for (int k = 0; k < largeDivisors.Count; k++) {
yield return largeDivisors[k];
}
yield return num;
}
}
We can use this extension method like this (shown in a little test routine):
int[] numbers = new[] { 9, 12, 15, 16, 17, 27, 54 };
foreach (int num in numbers) {
Console.Write($"Number {num} has divisors ");
foreach (int n in num.SelectDivisors()) {
Console.Write(n + " ");
}
Console.WriteLine();
} | {
"domain": "codereview.stackexchange",
"id": 30859,
"tags": "c#, algorithm, .net, factors"
} |
map_server can't load map | Question:
I have this launch file in order to use nagigation stack.
I followed a tutorial on Robot Ignite Academy path planning, but it used Husky robots to simulate path planning in Gazebo. I want to do the same thing with TurtleBot3 in a Gazebo simulation, but the TurtleBot3 manual does not cover simulating path planning (only with a real robot).
<launch>
<!-- Turtlebot3 -->
<include file="$(find turtlebot3_bringup)/launch/turtlebot3_remote.launch" />
<!-- Run the map server -->
<arg name="map_file" default="$(find my_move_base_launcher)/maps/map.yaml"/>
<node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)" />
<!--- Run AMCL -->
<include file="$(find my_move_base_launcher)/launch/amcl.launch.xml" />
<!--- Run Move Base -->
<include file="$(find my_move_base_launcher)/launch/turtlebot3_navigation_path_planning_2.launch" />
</launch>
When I launch this file I get this error:
NODES
/
amcl (amcl/amcl)
map_server (map_server/map_server)
move_base (move_base/move_base)
robot_state_publisher (robot_state_publisher/robot_state_publisher)
ROS_MASTER_URI=http://localhost:11311
process[robot_state_publisher-1]: started with pid [25865]
process[map_server-2]: started with pid [25866]
process[amcl-3]: started with pid [25868]
process[move_base-4]: started with pid [25880]
[ INFO] [1526983048.924109442]: Subscribed to map topic.
[map_server-2] process has died [pid 25866, exit code 255, cmd /opt/ros/kinetic/lib/map_server/map_server /home/bera/catkin_ws/src/my_move_base_launcher/maps/map.yaml __name:=map_server __log:=/home/bera/.ros/log/17326908-5da3-11e8-afab-9c2a7033abfa/map_server-2.log].
log file: /home/bera/.ros/log/17326908-5da3-11e8-afab-9c2a7033abfa/map_server-2*.log
[ WARN] [1526983054.004897998]: Timed out waiting for transform from base_link to map to become available before running costmap, tf error: canTransform: target_frame map does not exist.. canTransform returned after 0.1015 timeout was 0.1.
Obviously I have a maps folder with the map.pgm and map.yaml files, and inside the yaml file I have the right path to the image file.
Suggestion?
------------EDIT:UPDATE --------------
this is my amcl.launch.xml that i call inside turtlebot3_navigation_path_planning_1.launch
<param name="use_map_topic" value="$(arg use_map_topic)"/>
<param name="min_particles" value="500"/>
<param name="max_particles" value="3000"/>
<param name="kld_err" value="0.02"/>
<param name="kld_z" value="0.99"/>
<param name="update_min_d" value="0.20"/>
<param name="update_min_a" value="0.20"/>
<param name="resample_interval" value="1"/>
<param name="transform_tolerance" value="0.5"/>
<param name="recovery_alpha_slow" value="0.00"/>
<param name="recovery_alpha_fast" value="0.00"/>
<param name="initial_pose_x" value="$(arg initial_pose_x)"/>
<param name="initial_pose_y" value="$(arg initial_pose_y)"/>
<param name="initial_pose_a" value="$(arg initial_pose_a)"/>
<param name="gui_publish_rate" value="50.0"/>
<remap from="scan" to="$(arg scan_topic)"/>
<param name="laser_max_range" value="3.5"/>
<param name="laser_max_beams" value="180"/>
<param name="laser_z_hit" value="0.5"/>
<param name="laser_z_short" value="0.05"/>
<param name="laser_z_max" value="0.05"/>
<param name="laser_z_rand" value="0.5"/>
<param name="laser_sigma_hit" value="0.2"/>
<param name="laser_lambda_short" value="0.1"/>
<param name="laser_likelihood_max_dist" value="2.0"/>
<param name="laser_model_type" value="likelihood_field"/>
<param name="odom_model_type" value="diff"/>
<param name="odom_alpha1" value="0.1"/>
<param name="odom_alpha2" value="0.1"/>
<param name="odom_alpha3" value="0.1"/>
<param name="odom_alpha4" value="0.1"/>
<param name="odom_frame_id" value="odom"/>
<param name="base_frame_id" value="base_footprint"/>
<remap from="scan" to="$(arg scan_topic)"/>
This is my turtlebot3_navigation_path_planning_2.launch file called inside the first launch file
<!-- controllare qui se mettere default="odom" oppure default="/odom" -->
<rosparam file="$(find my_move_base_launcher)/params/my_move_base_params.yaml" command="load"/>
<rosparam file="$(find my_move_base_launcher)/params/costmap_common_params_$(arg model).yaml" command="load" ns="global_costmap" />
<rosparam file="$(find my_move_base_launcher)/params/costmap_common_params_$(arg model).yaml" command="load" ns="local_costmap" />
<rosparam file="$(find my_move_base_launcher)/params/local_costmap_params.yaml" command="load" ns="local_costmap" />
<rosparam file="$(find my_move_base_launcher)/params/global_costmap_params.yaml" command="load" ns="global_costmap" unless="$(arg no_static_map)" />
<!-- dwa_local_planner_params.yaml sarebbe il mio my_move_base_params.yaml per cui devo toglierlo -->
<!-- <rosparam file="$(find turtlebot3_navigation)/param/dwa_local_planner_params.yaml" command="load" /> -->
<rosparam file="$(find my_move_base_launcher)/params/costmap_global_laser.yaml" command="load" ns="global_costmap" if="$(arg no_static_map)" />
<param name="global_costmap/width" value="100.0" if="$(arg no_static_map)"/>
<param name="global_costmap/height" value="100.0" if="$(arg no_static_map)"/>
<remap from="cmd_vel" to="$(arg cmd_vel_topic)"/>
<remap from="odom" to="$(arg odom_topic)"/>
Originally posted by kenhero on ROS Answers with karma: 31 on 2018-05-22
Post score: 0
Original comments
Comment by R. Tellez on 2018-05-22:
Just to let you know, as a subscriber of Robot Ignite Academy, you can do the course on programming T3 which explains how to do navigation: http://www.theconstructsim.com/construct-learn-develop-robots-using-ros/robotigniteacademy_learnros/ros-courses-library/mastering-with-ros-turtl
Comment by kenhero on 2018-05-22:
link is corrupted.
Btw i just followed the navigation course in the Academy and i just want to simulate path planning with turtlebot3 because it's the robot that i use to program in C++
Comment by R. Tellez on 2018-05-22:
The link was cutted: http://www.theconstructsim.com/construct-learn-develop-robots-using-ros/robotigniteacademy_learnros/ros-courses-library/mastering-with-ros-turtlebot3/
Answer:
OK, I solved it.
I made several errors:
1) I didn't generate the map with the gmapping node.
2) Localization error: in the amcl.launch.xml file there was an error in the initial pose of turtlebot3; it was different from the initial coordinates in turtlebot3_house.launch, so the local map wasn't around my turtlebot.
3) I didn't kill the gmapping node before starting the simulation, so I wasn't able to see the entire map in rviz.
I'm using the relaxed A* algorithm to find the best path (plugin found on GitHub).
There are 2 issues now with this algorithm:
1) When turtlebot3 is close to the goal, it starts to call the rotate recovery behaviour (basically the turtlebot starts to rotate in a loop).
2) It's not able to find a path in the Gazebo house world, probably because the map is too big.
It works only for short distances.
It says:
"The planner failed to find a path, choose other goal position" or
"Not valid start or goal"
"Clearing costmap to unstuck robot (0.100000m)"
"Rotate recovery behavior started"
Is there a way to set some parameters, or a way to test the C++ code?
I'd like to find out if this package has some type of limitation, like memory or stuff like that.
Thanks
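For reference, a hedged sketch of parameters that commonly affect the rotate-recovery loop (parameter names are from the standard move_base / DWA planner documentation; the values are illustrative guesses, not tested with this setup):

```yaml
# move_base parameter: switch off recovery behaviours while debugging
recovery_behavior_enabled: false

# DWAPlannerROS parameters: loosen goal tolerances so "close to the goal"
# already counts as reached, avoiding endless rotate recovery near the goal
DWAPlannerROS:
  xy_goal_tolerance: 0.2    # metres
  yaw_goal_tolerance: 0.3   # radians
```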
Originally posted by kenhero with karma: 31 on 2018-05-25
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 30875,
"tags": "navigation, mapping, map-server, ros-kinetic"
} |
Natural language text fast tokenizer (Rev.5) | Question: This is the next iteration of the Natural language text fast tokenizer code review. Special thanks goes to G. Sliepen, Toby Speight and uli who conducted previous reviews and to Matthieu M. and Adrian McCarthy who participated with important findings.
Functional specification
Implement a class for fast text tokenization handling some natural language specifics below:
Consider ' ' (space) as a delimiter, keeping a way to extend the list of delimiters later; delimiters cannot be part of other constructs.
Extract stable collocations like "i.e.", "etc.", "..." as a single lexem.
In case a word contains "inword" characters like '-' and "'" (examples: semi-column, half-, cat's), return the whole construct as a single lexem.
Treat all other non-alphanumeric characters as separate lexems.
Return sequences of numbers (integers without signs) as a single lexem.
Consider out of scope paired quotes and other lexical parsing level issues.
Performance is critical, since the amount of data is huge. The function should be thread-safe.
Changes
The code has been reworked according to all code review points.
Reservations
Methods implementation inside of the class definition done only to the sake of brevity; production code will have them implemented separately.
I am not sure that using value_type = std::string_view; is correct in TokenRange::Iterator; most likely I should store a nested struct for data and lexem and make that the value_type, but I pretend that lexem is the value to be stored and data is a proxy of TokenRange::data, which makes the iterator safer in case of TokenRange changes. At the end of the day, both TokenRange::data and TokenRange::Iterator::data are just proxies to the original std::string / std::string_view passed to TokenRange.
The code
Here is the updated code for the code review; could you please take a look and suggest further ways to improve or confirm that this is ready to go code?
Fully functional demo.
#include <algorithm>
#include <cassert>
#include <chrono>
#include <cstring>
#include <functional>
#include <iostream>
#include <limits>
#include <locale>
#include <numeric>
#include <random>
#include <ranges>
#include <vector>
namespace fast {
template <typename Fn>
class CharacterClass
{
std::array<char, std::numeric_limits<char>::max() + 1> cache = {};
public:
explicit CharacterClass(Fn fn, const std::locale& locale = {})
{
set_locale(fn, locale);
}
void set_locale(Fn fn, const std::locale& locale)
{
auto const func = [&locale, &fn](char c) { return fn(c, locale); };
std::ranges::copy(std::views::iota(0u, cache.size())
| std::views::transform(func),
cache.begin());
}
bool operator()(char c) const { return cache[c]; }
};
CharacterClass isalpha(std::isalpha<char>);
CharacterClass isdigit(std::isdigit<char>);
CharacterClass isalnum(std::isalnum<char>);
}
class TokenRange {
std::string_view data;
public:
class Iterator {
const std::string_view delimiters = " ";
const std::vector<std::string_view> stable_lexems = { "...", "i.e.", "etc.", "etc..." };
const std::string_view inword_symbols = "-\'";
std::string_view data;
std::string_view lexem;
public:
using iterator_category = std::input_iterator_tag;
using value_type = std::string_view;
using difference_type = std::ptrdiff_t;
Iterator() {}
Iterator(std::string_view data) : data(data) { extract_lexem(); }
std::string_view operator*() const { return lexem; }
Iterator& operator++();
Iterator operator++(int);
friend bool operator==(const Iterator& it1, const Iterator& it2) { return it1.lexem == it2.lexem; }
friend bool operator!=(const Iterator& it1, const Iterator& it2) { return it1.lexem != it2.lexem; }
Iterator& operator=(const Iterator& it) { data = it.data; lexem = it.lexem; return *this; }
private:
void extract_lexem();
void skip_delimiters();
bool check_for_stable_lexems();
};
TokenRange(std::string_view data) : data(data) {}
Iterator begin() const {
return Iterator(data);
}
Iterator end() const {
return {};
}
};
void TokenRange::Iterator::skip_delimiters()
{
while (!data.empty() && std::ranges::contains(delimiters, data.front())) {
data.remove_prefix(1);
}
}
bool TokenRange::Iterator::check_for_stable_lexems()
{
auto it = std::ranges::max_element(stable_lexems, std::less<size_t>(),
[&](auto stable_lexem) {
return data.starts_with(stable_lexem) ? stable_lexem.size() : 0;
}
);
if (it != stable_lexems.end() && data.starts_with(it->data()) ) {
lexem = data.substr(0, it->size());
data = data.substr(it->size());
return true;
}
return false;
}
void TokenRange::Iterator::extract_lexem()
{
skip_delimiters();
if (check_for_stable_lexems()) {
return;
}
std::size_t index = 0;
while (index < data.size())
{
if (std::ranges::contains(delimiters, data[index])) {
break;
}
if (!fast::isalnum(data[index])) {
if (index == 0) {
++index;
}
break;
}
const bool is_next_char_inword_symbol = (index+1) < data.size() ? std::ranges::contains(inword_symbols, data[index+1]) : false;
if (is_next_char_inword_symbol) {
++index;
}
++index;
}
lexem = data.substr(0, index);
data = data.substr(index);
}
TokenRange::Iterator& TokenRange::Iterator::operator++()
{
extract_lexem();
return *this;
}
TokenRange::Iterator TokenRange::Iterator::operator++(int)
{
Iterator temp(data);
extract_lexem();
return temp;
}
int main()
{
{
std::string sample = "Let's consider, this cats' semi-simple sample, i.e. test data with ints: 100 and 0x20u, etc. For ... some testing...";
for (auto token : TokenRange(sample)) {
std::cout << token << " | ";
}
}
#define TEST_SUITE
#ifdef TEST_SUITE
struct {
std::string input;
std::vector<std::string> expected;
} samples[] = {
{ "", {} },
{ " ", {} },
{ " ", {} },
{ "etc.", { "etc." } },
{ "etc.i.e.", { "etc.", "i.e."} },
{ "...etc...", { "...", "etc..."} },
{ "......", { "...", "..." } },
{ "....", { "...", "." } },
{ ".,:", { ".", ",", ":"}},
{ "cat\'s cats\' 'cats'", { "cat\'s", "cats\'", "\'", "cats\'"}},
{ "semi-semi-column", { "semi-semi-column" } },
{ "Let's consider, this cats' semi-simple sample, i.e. test data with ints: 100 and 0x20u, etc. For ... some testing...",
{ "Let\'s", "consider", ",", "this", "cats\'", "semi-simple", "sample", ",", "i.e.", "test", "data", "with", "ints", ":", "100", "and", "0x20u", ",", "etc.", "For", "...", "some", "testing", "..." } },
};
for (auto& sample : samples) {
assert(std::ranges::equal(TokenRange(sample.input), sample.expected));
}
#endif // TEST_SUITE
}
Some final thoughts
I started with 48 lines of very simple (in terms of used language techniques) code which fit a C-style function and ended up with two sophisticated classes (well, at least one of them) of 110 lines of code whose behaviour is not so obvious and transparent for a newbie developer or another person in support.
The code was improved and some subtle defects were fixed, but all this could have been done within the scope of the original function and would even have reduced its size.
I have some personal profits:
I learned a lot about ranges, concepts and compiler warnings for them.
I learned a technique to distribute work between ‘operator++()’ and ‘operator*()’ in iterators.
I got proof that std::string_view and the other tools used here are by no means slower than traditional C-style pointers.
I got another confirmation that "working" code without unit tests doesn't fly.
So, all this comes as experience and I am thankful for all people involved here.
On the other hand, if we consider the task “as is”, here are my thoughts.
Profits
Now the code supports (at least preliminarily and partially) a new language technique, namely std::ranges.
The API became much safer without char * and friendlier to the user, with support for iterators and ranges.
Drawbacks
The code became longer and harder to maintain. This is especially true of the data / lexem state machine.
It is very hard to prove that the code is still correct, not only in terms of tokenizing, but even in terms of C++ iterator usage; many things are done as "well, this works" and require very detailed reading of the standard on iterators, ranges, concepts, requires clauses, etc. to make sure that this code meets all those requirements. So, this again raises the bar for the personnel who will develop and support the code, while the original technical specification never required this.
The code is no longer quickly portable to C, if needed. I remember that I myself asked in the first post to use std::ranges, but I meant to use some functions which could easily be replaced back with C-style functions, not to implement this object as a std::range itself.
This is always a design choice: whether to solve the specific task in the existing code or to improve it with continuous refactoring. In my personal view, the main programmer's mistake is to solve the wrong task; if the initial functional specification said "Implement a function" and the programmer started with expensive refactoring to iterators, ranges, etc., I would consider this overengineering and aimless spending of effort. If instead of fixing some lines of code in a function they developed two classes, I would consider this a mistake, since the technical requirements never asked to implement something different and the client could be constrained to function usage (although I would agree that the code review title could give such freedom). And the key point is that after the first round of code review I agreed with the proposal, so accepting the design decision is on me and I am totally responsible here. So, I am considering my choice in retrospect in order to learn.
And please, don't get me wrong: I am very thankful to G. Sliepen who suggested this way and helped me learn a lot in practice, and I am not saying that he suggested a wrong design decision; I believe his circumstances and goals were slightly different, namely to show on this simple example how to develop good modern C++ software, and he succeeded here. My only concern is whether this specific small task is a good candidate for that, since it mixes two questions: good language style and a very specific task. My point is that the technique is great, but it is arguable whether this particular task benefits from it, taking the drawbacks into account.
To be precise in wording:
Should the proposed style be used instead of the old one? Yes, in most cases.
Should the developer, given the original task, start this refactoring instead of fixing defects in the function? I am not sure; it totally depends on the context and how the code will be used.
So, to put it in a nutshell, I am still in two minds about this refactoring for this specific task, although I consider it a "must learn to use when needed".
Please don't consider this as concerns, complaints or disagreements; these are just thoughts to share. Thank you all who helped me here!
Answer: Enable compiler warnings and fix them
When developing code, always enable rather strict compiler warnings (for example, using -Wall -W -pedantic for Clang and GCC, but you could go even stricter at the risk of getting false positives). Whenever the compiler warns about something, don't ignore it but fix it. Both Clang and GCC complain about this:
In member function 'TokenRange::Iterator TokenRange::Iterator::operator++(int)':
warning: implicitly-declared 'constexpr TokenRange::Iterator::Iterator(const TokenRange::Iterator&)'
is deprecated [-Wdeprecated-copy]
note: because 'TokenRange::Iterator' has user-provided
'TokenRange::Iterator& TokenRange::Iterator::operator=(const TokenRange::Iterator&)'
So this means, at some future point in time, the copy constructor will not be generated automatically anymore because you provided a custom assignment operator. So a possible fix is to just add a copy constructor as well.
Even better would be to not need to add copy and assignment operators, as at first glance it should not be necessary at all: the compiler should be able to automatically generate default functions that copy the std::string_views. The problem, as you found out yourself, is because of the const variables delimiters, stable_lexems and inword_symbols. Make them static instead.
About your final thoughts
It's great that you yourself realize that all this effort has made you learn a great deal, and that the code has improved because of it! You are also right about it being hard to get the iterator version correct, and it indeed requires quite a bit more code. Ideally, you would just want to write something as close as possible to the first revision, and still get the benefit of using it in a range-for loop. Since C++23 you can, by using std::generator<>. Your code would then look like:
std::generator<std::string_view> tokenize(std::string_view data) {
…
while (/* not done yet */) {
…
std::string_view token = …;
co_yield token;
…
}
}
I did not mention this before since this is a very recent addition to the language, and you thus also need a very recent compiler.
What should you use in your actual project? That's indeed up to you. Sometimes a quick and dirty hack is all you need. In larger projects, refactoring the code to make it more generic and more standards compliant will pay off though. If you don't know when to do what, consider applying the YAGNI principle.
When and how to decompose functions
There is this pervasive notion that more lines of code is bad. There is some truth to that: more lines means more possibilities of bugs, more to maintain, and more to document. However, the real problem is actually complex code. If you have one function of 100 lines of complex code, or ten simple functions of only 20 lines each, then despite the latter being 200 lines of code, it will actually have less chance of bugs, easier to maintain, and more self-documenting.
Of course, you should only decompose when it makes sense: if you can cleanly move some lines of code into a separate function, and that function then does something clear and simple (and thus can be given a clear and simple name), and it makes the original function less complex. Even skip_delimiters(), despite being just a few lines of code, is a great example of this.
It's hard to say what the right number of functions is, or what the maximum size of a function should be, that depends on the nature of the code of course. However, I can tell you that most people, including myself, don't decompose as much as they should.
The helper member functions you are creating should be private. They don't change the public API nor the ABI, so this is very safe to do. If you really want to avoid complicating your class declaration, you could consider instead to make them out-of-class functions, and then pass references to any member variables you want to modify. For example:
static void skip_delimiters(std::string_view& data, std::string_view delimiters)
{
while (!data.empty() && std::ranges::contains(delimiters, data.front())) {
data.remove_prefix(1);
}
}
void TokenRange::Iterator::extract_lexem()
{
skip_delimiters(data, delimiters);
…
}
There might also be some other ways to approach this problem. | {
"domain": "codereview.stackexchange",
"id": 45493,
"tags": "c++, parsing"
} |
How to calculate the movement of an object passing near another object in space? | Question: Assume object A is moving through space and passing near another object (B). Assume the gravitational influence of other objects can be ignored. How can we find the equation describing the movement of object B?
There are 2 cases: object A is moving in a straight line, or it is moving in an orbit (around another object).
I think the problem is quite elementary, but I couldn't find anything that could help solve it using physics at the level of a basic university course (I've studied computer science, so I've had only 1 semester of physics, and basic mathematical knowledge: integrals, algebra, etc.).
I know the problem can be solved numerically, but I'm interested in finding the equation describing the movement.
Answer: Since you're interested in the equations of motion, I would solve this problem by using Lagrangian mechanics. Essentially, find the kinetic and potential energies for these two bodies, A and B.
Construct the Lagrangian:
$$L = T - V$$
where T is the kinetic energy and V is the potential energy. Then use the Euler-Lagrange equation to obtain the equations of motion (I would add it here, but I'm not sure of the specifics of your problem).
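For the generic gravitational two-body case (the symbols $m_A$, $m_B$, $\mathbf r_A$, $\mathbf r_B$ are introduced here for illustration, not taken from the question), this works out to:

```latex
L = \tfrac{1}{2} m_A \dot{\mathbf r}_A^{\,2}
  + \tfrac{1}{2} m_B \dot{\mathbf r}_B^{\,2}
  + \frac{G m_A m_B}{|\mathbf r_A - \mathbf r_B|},
\qquad
\frac{d}{dt}\frac{\partial L}{\partial \dot{\mathbf r}_B}
  = \frac{\partial L}{\partial \mathbf r_B}
\;\Rightarrow\;
m_B \ddot{\mathbf r}_B
  = \frac{G m_A m_B\,(\mathbf r_A - \mathbf r_B)}{|\mathbf r_A - \mathbf r_B|^{3}}
```

The same derivation with respect to $\mathbf r_A$ gives the mirror-image equation; introducing the relative coordinate $\mathbf r = \mathbf r_A - \mathbf r_B$ then reduces the problem to a single Kepler orbit.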
Two-body motion can always be constrained to a plane, so you may have to throw a constraint in there through the use of a Lagrange multiplier. | {
"domain": "astronomy.stackexchange",
"id": 21,
"tags": "orbit"
} |
Diffraction from the Earth's edge | Question: I recorded an eight hour time lapse video of the afternoon as the sun set. I was surprised to see, at about the halfway point of the video, the light begin to cycle through lighter and darker phases, with the pattern getting closer and closer (or faster and faster). This resembles the fringe pattern of a single edge, which I have written about at billalsept.com "Single Edge Certainty". I searched for information on viewing an occultation effect like this but found nothing so far. Has this ever been recorded before? I knew that occultation could be observed from distant objects like the moon passing in front of stars and planets, but didn't realize we could see this phenomenon standing right in the middle of the Earth's own shadow.
https://www.dropbox.com/s/54j7bh9rqy3l2ha/Sunset%20Edge%20Diffraction.mov?dl=0
Answer: This is simply speculation together with a proposed experiment to test the speculation.
I believe what you are seeing is your camera autogaining as the light fades. The light's probably fading at a rate that the controller is not really designed to cope with and poor design has left the control loop with a tendency to hunt when presented with such a rate. You could try the experiment again with:
1. A different camera;
2. More precisely, you could use a camera with a very high sensitivity chip and use the software controls to hold the f-number and exposure time steady. That is, you don't give the autogain control a chance to taint the results, and you can impart your own smooth gain versus time profile to the sequence afterwards for clearer viewing.
If my theory is right, 2. in particular should get rid of the effect. | {
"domain": "physics.stackexchange",
"id": 42486,
"tags": "visible-light, photons, interference, diffraction"
} |
Number of accepting paths of a non-deterministic automaton | Question: I have a question that seems really natural to me and has probably already been studied. But keyword searches on this site or Google do not seem to help me find any relevant paper.
I have a finite non-deterministic automaton $A$ over an alphabet $\alpha$ without epsilon-transitions.
What can I tell about the number of different paths the automaton could take to accept a word? In particular, I want to know if this number is bounded, or if for every $c$ I can find a word $w_c$ that is accepted in at least $c$ different ways by the automaton.
Right now, I can find some necessary and some sufficient conditions, but not a necessary and sufficient condition, for the number to be unbounded.
For clarity, I'll define the way I count the number of accepting paths. Let $w\in\alpha^*$ and $q$ a state; I can define the number of paths to $q$ by induction on $|w|$: $N(\epsilon,q)=1$ if $q\in I$ else $0$, where $I$ is the set of initial states and $F$ the set of final states. $N(ws,q)=\sum_{q'\in Q\atop \delta(q',s)=q}N(w,q')$.
Then the number of accepting paths is $\sum_{q \in F}N(w,q)$.
Answer: This concept is called the ambiguity of the NFA.
Typically, there are 3 classes of ambiguity in this context: Bounded, polynomially bounded, and exponentially bounded.
Every NFA has at most an exponential number of runs on a given word (this is easy to see).
Interestingly, there is a simple syntactic characterization of polynomially bounded NFAs:
An NFA has a polynomial number of runs on a word $w$ iff for every state $q$, there is at most one cycle from $q$ to itself on every word $x\in \Sigma^*$. See this for details.
Testing for bounded ambiguity is PSPACE-complete. A good starting point is this paper. | {
"domain": "cstheory.stackexchange",
"id": 2088,
"tags": "automata-theory, nondeterminism"
} |
What phenomena occur in a low voltage arc between copper and graphite electrodes, and why is the result dependent on electrode polarity? | Question: I was playing around with a laboratory power supply, drawing arcs between electrodes of various materials. I noticed phenomena that I found interesting, and couldn't really explain myself:
The circular electrode is a 5 euro cent coin, which is composed of steel with a rather thick copper plating. The long, thin electrode is a 0.7 mm diameter mechanical pencil lead (mainly composed of graphite) which has been previously slowly heated until red hot in order to drive off any volatile constituents that would otherwise rapidly vapourize and split it apart.
The power supply is a 30 V, 10 A switching mode laboratory power supply with configurable voltage and current limits. Both limits are set to their maximum values.
Graphite anode, copper cathode
When the positive lead is connected to the graphite electrode, it gets quickly consumed in a steady arc after contact is made. A black, brittle, hard and flaky residue is left on the copper surface, presumably graphite which has either melted or has undergone plastic deformation.
CH1 is the arc current, 1 A = 23 mV. CH2 is the voltage drop, measured at a less than ideal location, the power supply terminals. A zoomed in portion of the complete waveform is pictured on the right.
I find this surprising considering the high melting point of graphite and the presence of oxygen in the atmosphere. The copper plating suffers surprisingly little damage.
Graphite cathode, copper anode
When the negative lead is connected to the graphite electrode, an arc is difficult to ignite. An ohmic contact forms instead, and the electrode heats up extremely rapidly to incandescent temperatures. When an arc is finally struck by gently pulling away the cathode, the graphite is hardly consumed at all, but the copper experiences heavy pitting.
CH1 is the arc current, 1 A = 23 mV. CH2 is the voltage drop, measured at a less than ideal location, the power supply terminals. A zoomed in portion of the complete waveform is pictured on the right.
Why does the system behave so differently when the polarity is inverted? What exactly is the black residue composed of, and how is it deposited?
Answer: I'll put this out here for now - it's not a complete answer yet, but it's longer than a comment will hold. Nice experiment!
The physics of carbon arcs is interesting for many reasons - one is the production of nanoparticles and nanomaterials. Fullerenes such as "Buckeyballs" and Carbon nanotubes (CNTs) are often produced using Carbon arcs, and the physics of the process is currently an active field of research.
While your experiment is in air, it's possible that a small amount of carbon combines with (uses up) the oxygen in the arc region, such that other processes still take place. You might try the experiment with some relatively inert gas like nitrogen or helium, or even the standard trick of using a match or candle to remove most of the oxygen first. But be careful.
You shouldn't breath the air around your experiment either - please do it in an exhaust hood!
For example, an abstract from a google search:
Abstract: The atmospheric pressure carbon arc in inert gases such as helium is an important method for the production of nanomaterials. It has recently been shown that the formation of the carbon deposit on the cathode from gaseous carbon plays a crucial role in the operation of the arc, reaching the high temperatures necessary for thermionic emission to take place even with low melting point cathodes. Based on observed ablation and deposition rates, we explore the implications of deposit formation on the energy balance at the cathode surface and show how the operation of the arc is a self-organised process. Our results suggest that the arc can operate in two different ablation-deposition regimes, one of which has an important contribution from latent heat to the cathode energy balance. This regime is characterised by the enhanced ablation rate, which may be favourable for high yield synthesis of nanomaterials. The second regime has a small and approximately constant ablation rate with a negligible contribution from latent heat.
From: Self-organisation processes in the carbon arc for nanosynthesis, J. Ng and Y. Raitses, J. Appl. Phys. 117, 063303 (2015); http://dx.doi.org/10.1063/1.4906784 | {
"domain": "physics.stackexchange",
"id": 29023,
"tags": "electromagnetism, thermodynamics, electricity, plasma-physics"
} |
What is the time complexity for division by repeated subtraction? | Question: Given the following algorithm:
input : a (integer), b (integer != 0)
result = 0;
while(a >= b)
{
a = a - b;
result = result + 1;
}
return result;
How can I find the number of instructions and the time complexity of this kind of algorithm, since we know neither a nor b nor the number of iterations in advance?
Answer: Your algorithm makes exactly $\lfloor\frac{a}{b}\rfloor$ iterations of while loop.
If we suppose that the size of the inputs is $n=\max\{a,b\}$, then your algorithm will make at most $n$ iterations: when $a=n$ and $b=1$.
Therefore, with this notation, your algorithm (on a high level) runs in $O(n)$ time complexity.
If you want to analyze it more deeply, including the complexity of addition and subtraction, which are $\Theta(n)$ for $n$-digit numbers, things get more complicated, because the $n$ we defined above was not the number of digits (decimal representation) but a value (unary representation).
As can be shown, the base in which we represent numbers is not relevant, so the overall analyzed algorithm, keeping in mind the properties of big-O and big-Theta, runs in $O(n^2)$, where $n$ is the number of digits (in any base). | {
"domain": "cs.stackexchange",
"id": 12316,
"tags": "algorithm-analysis, time-complexity"
} |
Questions About Quantum Delta Function Potentials | Question:
I didn't think that it would be possible for a wave function to get through the delta function, because there is no "leakage" of the wave function through an infinite potential barrier. I can understand why a particle could get through a non-infinite barrier because of this exponential leakage, but I don't understand how a particle could get through an infinite delta potential. My thinking right now is that maybe there is some imaginary (as opposed to real) leakage, similar to how a quadratic that doesn't cross the $x$ axis in the real plane may cross the $x$ axis in the imaginary plane. Is this the reason?
I tried to solve for the reflection and transmission coefficients by solving for the two solutions outside the delta function and you obviously get 4 different exponentials with imaginary exponents. I know that these two functions have to touch so you equate them at $x=0$ and get $A+B=C+D$. Then I did the finding the change in slope trick and I get another relation between these 4 coefficients. The problem is that this gives me 2 equations and 4 coefficients to solve for, so I'm not sure what to do now (also a k value so 5 variables I guess). For the bound solution the coefficients were much nicer and it was easy to solve, but I'm not sure how to solve this one. My initial thinking is that I can get rid of the D coefficient since I only want the transmitted wave to be right-moving, but that only cuts me down to 3 variables when I need either 2 or another equation.
Answer: Even with a delta function potential, continuity of the wave function is still required. (Please see comment from ACuriousMind below on this).
The derivative of the wavefunction is obviously not continuous, however. You can find the discontinuity by integrating the Schrödinger equation across the delta function from $-s$ to $+s$, where $s$ is a small parameter.
You then let s go to 0 and check the behaviour of the derivative. You don't state the wavefunction(s) you are using, but here is a plot of a typical one.
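For concreteness, with a potential $V(x) = -\alpha\,\delta(x)$ (the attractive sign and the symbol $\alpha$ are conventions assumed here, not taken from the question), that integration and the limit $s \to 0$ give:

```latex
-\frac{\hbar^2}{2m}\int_{-s}^{+s}\psi''(x)\,dx
  \;-\; \alpha \int_{-s}^{+s}\delta(x)\,\psi(x)\,dx
  \;=\; E\int_{-s}^{+s}\psi(x)\,dx
\;\xrightarrow{\;s\to 0\;}\;
\psi'(0^+) - \psi'(0^-) = -\frac{2m\alpha}{\hbar^2}\,\psi(0)
```

The right-hand integral vanishes as $s \to 0$ because $\psi$ is finite, so the kink in $\psi'$ is finite and $\psi$ itself stays continuous and nonzero at the spike, which is why transmission through an "infinite" delta barrier is possible.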
You can find the actual calculations here:
Delta Function Wikipedia | {
"domain": "physics.stackexchange",
"id": 32329,
"tags": "quantum-mechanics, homework-and-exercises, schroedinger-equation, potential, dirac-delta-distributions"
} |
Quantum operator calculations | Question:
We define the quantum operator
$$
P^\mu=\int{\frac{d^3p}{(2\pi)^3}}p^\mu a_p^\dagger a_p
$$
Now how can I calculate
$$
\langle p_2|P^\mu|p_1\rangle~?
$$
My attempt:
$$
\langle p_2|P^\mu|p_1\rangle =\int{\frac{d^3p}{(2\pi)^3}}\langle 0|a_{p_2} p^\mu a_p^\dagger a_p a_{p_1}^\dagger|0\rangle.
$$
Now we know that $\langle0|a_p a_q^\dagger|0\rangle =\delta^{(3)}(p-q)$ but I'm not quite sure how it works with multiple states in the bra-ket.
Answer: Use the canonical commutation relation $[a_p, a^{\dagger}_q] = (2\pi)^3\delta^3(p-q)$:
We have that $a_{p_2}p^{\mu}a^{\dagger}_pa_pa^{\dagger}_{p_1} =p^{\mu}a_{p_2}a^{\dagger}_pa^{\dagger}_{p_1}a_p + p^{\mu}a_{p_2}a^{\dagger}_p[a_p, a^{\dagger}_{p_1}]$
The first term is ignored, because when considering
$\langle 0|p^{\mu}a_{p_2}a^{\dagger}_pa^{\dagger}_{p_1}a_p|0\rangle$
We have an annihilation operator hitting the vacuum state, so this term in the integrand must vanish. Plugging in the commutator, your integral is equal to
$\int\frac{d^3p}{(2\pi)^3}p^{\mu}(2\pi)^3\delta^3(p-p_1)\langle 0|a_{p_2}a^{\dagger}_p|0\rangle$
You can now integrate out the delta function:
$p_1^{\mu}\langle 0|a_{p_2}a^{\dagger}_{p_1}|0\rangle$
Repeating the same trick of commuting the operators and using $\langle 0|0\rangle=1$, we have
$p_1^{\mu}(2\pi)^3\delta^3(p_1-p_2)$
Pretty much as expected. The much easier route is to recognize that states $|p_1\rangle$ are eigenstates of the operator $P^\mu$ with eigenvalue $p_1^\mu$. Using this,
$\langle p_2|P^\mu|p_1\rangle = p_1^\mu\langle p_2|p_1\rangle$
and using the normalization of singly excited states,
$=p_1^\mu(2\pi)^3\delta^3(p_1-p_2)$ | {
"domain": "physics.stackexchange",
"id": 63875,
"tags": "homework-and-exercises, quantum-field-theory, hilbert-space, operators, momentum"
} |
Is there a way to precisely quantify entanglement in general? | Question: Entanglement is among the most remarkable features of quantum mechanics. It is pointed out by many as the responsible for breaking Bell inequalities and numerous other surprising aspects of quantum theory. My issue is that I do not understand how to quantify entanglement in general quantum systems.
I know that the Entanglement Entropy is a very good quantifier for pure states. However, most physical states are not pure. For mixed states in bipartite two-level systems, I know that the Negativity (or Concurrence) are good entanglement quantifiers and are equivalent, making it seem that there is a "unique" way of quantifying entanglement in these systems.
My question is: What quantifiers can be used for mixed states in more general setups with multipartite systems and general Hilbert spaces? Is there a "unique", or best way of quantifying entanglement in these setups?
Answer: Determining whether or not a given mixed state is entangled was proven to be an NP-hard problem by Gurvits. Given that, it's quite challenging to have an easy formula for quantifying entanglement, even for bipartite systems! It is worth recalling that nonzero negativity is only a sufficient condition for bipartite entanglement in most dimensions: there are entangled states with zero negativity.
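For concreteness, here is a small sketch (assuming the common convention where negativity is the absolute sum of the negative eigenvalues of the partial transpose, so a two-qubit Bell state gives 1/2) computing it for a Bell state and a product state:

```python
import numpy as np

def partial_transpose(rho, dA=2, dB=2):
    # transpose subsystem B of a density matrix on C^dA (x) C^dB
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def negativity(rho, dA=2, dB=2):
    # absolute sum of the negative eigenvalues of rho^{T_B}
    evals = np.linalg.eigvalsh(partial_transpose(rho, dA, dB))
    return float(-evals[evals < 0].sum())

# maximally entangled Bell state (|00> + |11>)/sqrt(2): negativity 1/2
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(negativity(np.outer(bell, bell)))       # 0.5

# a product state |00><00|: negativity 0
print(negativity(np.diag([1.0, 0, 0, 0])))    # 0.0
```

Note this only certifies entanglement when it is nonzero; as stated above, PPT entangled states with zero negativity exist in higher dimensions.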
In general, there are infinitely many witnesses of entanglement, each of which gives a sufficient condition for the existence of entanglement; to prove entanglement/separability, one has to check many different witnesses. So, in that sense, there is neither a unique nor a best way of quantifying entanglement in general. There may be specific purposes for which specific types of entanglement are useful and so that can always inform your choice of an entanglement quantifier. | {
"domain": "physics.stackexchange",
"id": 81893,
"tags": "quantum-mechanics, quantum-information, quantum-entanglement"
} |
What are the theoretical / mathematical problems in discarding negative solutions of Dirac equation? | Question: I read some Q&A about it, but my question is why Dirac was so sure that he could not discard negative energy solutions.
It seems so natural that energy must be positive that I suppose using only the positive solutions must lead to some theoretical problems. The plane wave $\psi = e^{-ip_{\mu}x^{\mu}}$ is a solution of the Dirac equation if
$p_0^2 = E^2 = |\mathbf p|^2 + m^2$. This comes from the relativistic invariance of the mass: $E^2 - P^2 = m^2$. And nobody thinks of negative energies when looking at that equation in special relativity. Moreover, by keeping them, Dirac had to deal with the strange notion of an infinite sea of electrons.
Of course, positrons were discovered soon after his work, and gave experimental support to not discard them.
But besides the experimental confirmation, are there any theoretical problems if we discard them?
Answer: The problem is that the Dirac equation can't be written as two equations, where one would only refer to the positive-energy components and the other to the negative ones. E.g., the equation for $\partial_t\psi_1$ involves $\psi_3$ and $\psi_4$. The result is that, if you find the general solution of the equation, you'll see that for nonzero momenta the components are intermixed, and you only get pure positive/negative solutions for a particle at rest (see this post for explicit solutions).
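This mixing is easy to see numerically. A sketch in the Dirac representation, $H = \boldsymbol\alpha\cdot\mathbf p + \beta m$, checking the upper/lower component content of a positive-energy eigenvector at rest and at nonzero momentum (the units with $m=1$ and the momentum value are arbitrary choices):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2), np.zeros((2, 2))

def dirac_H(p, m=1.0):
    # H = alpha . p + beta m in the Dirac representation
    alpha = [np.block([[Z2, s], [s, Z2]]) for s in (sx, sy, sz)]
    beta = np.block([[I2, Z2], [Z2, -I2]])
    return sum(pi * ai for pi, ai in zip(p, alpha)) + m * beta

def component_norms(p, m=1.0):
    # (|upper|, |lower|) norms of one positive-energy eigenvector
    E, V = np.linalg.eigh(dirac_H(p, m))
    u = V[:, np.argmax(E)]
    return np.linalg.norm(u[:2]), np.linalg.norm(u[2:])

print(component_norms([0, 0, 0]))    # lower components vanish only at rest
print(component_norms([0, 0, 0.5]))  # nonzero lower components for p != 0
```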
All this makes rejection of negative energy solutions not only "physically unwanted", but mathematically impossible. | {
"domain": "physics.stackexchange",
"id": 65697,
"tags": "quantum-mechanics, special-relativity, antimatter, dirac-equation, unitarity"
} |
FIR filters: is it possible to manipulate phase without change in magnitude response | Question: Here's response of FIR design:
which shows an SPL of 0 dB, and here's a wave file exported from the (DRC) FIR design software: FIR48kHz.wav.
The audio software (which internally uses FFTW routines) where this filter is then used/measured reports a +2 dB peak gain for the full-range 0-24 kHz filter and +4 dB for the reduced-range 0-20 kHz filter (the software measures the maximum peak for the filter only).
Audacity's "mark sounds" tool reports that the file has a short sound area from 0.02 s to 0.04 s.
Could it be that the phase change affects the magnitude response at some location not seen in the plot (i.e., there's something left below 20 Hz or above 20 kHz)?
Answer: It is possible to manipulate phase while maintaining constant amplitude over a portion of the Nyquist bandwidth with an FIR filter, but not over the full Nyquist bandwidth (DC to $f_s/2$, where $f_s$ is the sampling rate). The subset of FIR filters that are linear phase are composed of symmetric or antisymmetric coefficients. Under any other condition the phase can be non-linear, so we can intuitively see that it is feasible to manipulate the phase within the passband of the filter while the amplitude within that passband remains flat (within a ripple constraint).
In most applications, this is sufficient since the percentage of bandwidth can typically be 85% or 90% depending on the allowable filter complexity. This won't provide an exact match but the error can be minimized based on the target response and filter length used.
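As a sketch of the general idea (not the code from the blog post linked below), a real-coefficient FIR can be least-squares fit to a complex target response that is flat in magnitude but has a deliberately nonlinear phase over roughly 85% of the Nyquist bandwidth; the filter length, grid density, and phase perturbation are arbitrary choices here:

```python
import numpy as np

N = 64                                  # filter length (arbitrary)
w = np.linspace(0, np.pi, 512)          # grid from DC to Nyquist
band = w <= 0.85 * np.pi                # constrain only ~85% of the band

# target: unit magnitude, linear phase plus a nonlinear perturbation
phase = -w * (N - 1) / 2 + 0.5 * np.sin(3 * w)
d = np.exp(1j * phase)

# least-squares fit of real taps h: sum_n h[n] e^{-j w n} ~ d(w) in band
n = np.arange(N)
A = np.exp(-1j * np.outer(w[band], n))
M = np.vstack([A.real, A.imag])         # stack Re/Im parts to keep h real
rhs = np.concatenate([d[band].real, d[band].imag])
h, *_ = np.linalg.lstsq(M, rhs, rcond=None)

H = np.exp(-1j * np.outer(w, n)) @ h    # achieved frequency response
err = np.max(np.abs(np.abs(H[band]) - 1))
print(err)                              # small in-band magnitude ripple
```

The don't-care region above 0.85π is left unconstrained, which is what makes the in-band fit accurate without a long filter.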
The approach I would use to do this is a least squares solution based on the desired frequency response and has been detailed by our own MattL including example MATLAB code at his blog post copied here:
https://mattsdsp.blogspot.com/2022/10/fir-filters-with-prescribed-magnitude.html | {
"domain": "dsp.stackexchange",
"id": 11914,
"tags": "finite-impulse-response, phase, magnitude"
} |
Tooltip popup plugin | Question: This plugin displays a tooltip popup with the data obtained via Ajax. I am sure there are better ones out there, but my objective is to learn how to correctly build a plugin, not find the best one available. I would appreciate any comments, suggestions, criticism from a best practices and design pattern usage perspective.
A live demo is located here.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1" />
<title>screenshot</title>
<script src="http://code.jquery.com/jquery-latest.js" type="text/javascript"></script>
<script src="jquery.ajaxTip.js" type="text/javascript"></script>
<style type="text/css">
.myElement{margin:100px;}
.ajaxToolActive{color:blue;}
.myAjaxTip {
border:1px solid #CECECE;
background:white;
padding:10px;
display:none;
color:black;
font-size:11px;-moz-border-radius:4px;
box-shadow: 3px 1px 6px #505050;
-khtml-border-radius:4px;
-webkit-border-radius:4px;
border-radius:4px;
}
</style>
<script type="text/javascript">
$(function(){
$('.myElement').ajaxTip({
display: function(d){return '<p>'+d.name+'</p><p>'+d.address+'</p><p>'+d.city+', '+d.state+'</p>';},
getData:function(){return {id:this.data('id')}},
'class':'myAjaxTip'
});
$('.destroy').click(function(){$('.myElement').ajaxTip('destroy');});
});
</script>
</head>
<body>
<p class="myElement" data-id="1" title="ajaxTip Popup">John Doe</p>
<p class="myElement" data-id="2" title="ajaxTip Popup">Jane Doe</p>
<p class="myElement" data-id="3" title="ajaxTip Popup">Baby Doe</p>
<p class="destroy">Destroy</p>
</body>
</html>
/*
* jQuery ajaxTip
* Copyright 2013 Michael Reed
* Dual licensed under the MIT and GPL licenses.
*/
(function( $ ){
var methods = {
init : function( options ) {
// Create some defaults, extending them with any options that were provided
var settings = $.extend({
'url' : 'getAjaxTip.php', //To include extra data sent to the server, included it in the url
'class' : '', //Class to be added to tooltip (along with class standardAjaxTip)
'mouseMove': true, //Whether to move tooltip with mouse
'speed' : 'fast', //fadeIn speed
'delay' : 250, //milliseconds to delay before requesting data from server
'xOffset' : 20,
'yOffset' : 10,
'dataType' : 'json', //Returned data. Options are json, text, etc
'getData' : function(){return {}}, //Use to set additional data to the server
'display' : function(data){ //User function must include function(data) {... return string}
var string='';
for (var key in data) {string+='<p>'+data[key]+'</p>';}
return string;
}
}, options || {}); //Just in case user doesn't provide options
return this.each(function(){
var showing,title,timeoutID,ajax,$t=$(this).wrapInner('<span />'),ajaxTip;
$t.children('span').hover(function(e) {
if(!showing){
title = $t.attr('title');$t.attr('title','');//Prevent title from being displayed,and save for later to put back
timeoutID=window.setTimeout(function() {
ajax=$.get( settings.url,settings.getData.call($t),function(data){
ajaxTip=$('<div />')
.addClass('standardAjaxTip '+settings.class)
.html(((title != '')?'<h3>'+title+'</h3>':'')+settings.display(data))
.css("top",(e.pageY - settings.yOffset) + "px")
.css("left",(e.pageX + settings.xOffset) + "px")
.css("position","absolute")
.appendTo('body').fadeIn(settings.speed);
showing = true;
$t.addClass('ajaxToolActive');
}, settings.dataType);
},settings.delay); //Delay before requesting data from server
}
},
function()
{
//When not hover
if (typeof ajax == 'object') {ajax.abort();}
window.clearTimeout(timeoutID);
$t.attr('title',title);
$t.removeClass('ajaxToolActive');
if(showing){ajaxTip.remove();}
showing = false;
});
$t.mousemove(function(e) {
if(settings.mouseMove && showing) {ajaxTip.css("top",(e.pageY - settings.yOffset) + "px").css("left",(e.pageX + settings.xOffset) + "px");}
});
});
},
//Add additional methods as needed
destroy : function() {
//console.log('destroy');
return this.each(function(){
var $e = $(this);
$e.html($e.children('span').html());
})
},
};
$.fn.ajaxTip = function(method) {
if ( methods[method] ) {
return methods[method].apply( this, Array.prototype.slice.call( arguments, 1 ));
} else if ( typeof method === 'object' || ! method ) {
return methods.init.apply( this, arguments );
} else {
$.error( 'Method ' + method + ' does not exist on jQuery.ajaxTip' );
}
};
})( jQuery );
Answer: Here's the code mostly the same with some changes to style and comments:
(function($){
var defaults = {
'url' : 'getAjaxTip.php', // The url used to get the tooltip data.
'class' : '', // Css class(es) to add to tooltip (along with standardAjaxTip).
'mouseMove': true, // A flag indicating whether to move tooltip with mouse.
'speed' : 'fast', // The speed at which to fade in the tool tip.
'delay' : 250, // Delay (in ms) before requesting data from server.
'xOffset' : 20,
'yOffset' : 10,
'dataType' : 'json',
'getData' : function () {
return {};
},
// A function to transform the data from the server into an html fragment.
'display' : function(data) {
var htmlString = '';
$.each(data, function (key, val) {
htmlString += '<p>' + val + '</p>';
});
return htmlString;
}
};
var methods = {
init : function (options) {
// Create settings using the defaults extended with any options provided.
var settings = $.extend(defaults, options || {});
return this.each(function () {
var title,
timeoutID,
ajax,
$t,
ajaxTip;
// Wrap the content of the current element in a span.
$t = $(this).wrapInner('<span />');
$t.children('span').hover(function(e) {
if(!$t.hasClass('ajaxToolActive')) {
title = $t.attr('title');
$t.attr('title',''); // Remove the title so that it doesn't show on hover.
timeoutID = window.setTimeout(function () {
ajax = $.get(settings.url, settings.getData.call($t), function (data) {
// Create a div to be the tooltip pop up, add the styling as well as
// the html (from the display function) to it and then fade the element in
// using the speed specified in the settings.
ajaxTip = $('<div />')
.addClass('standardAjaxTip ' + settings['class'])
.html(((title !== '') ? '<h3>' + title + '</h3>' : '') + settings.display(data))
.css('top', (e.pageY - settings.yOffset) + 'px')
.css('left', (e.pageX + settings.xOffset) + 'px')
.css('position', 'absolute')
.appendTo('body')
.fadeIn(settings.speed);
$t.addClass('ajaxToolActive');
},
settings.dataType);
}, settings.delay);
}
},
function () {
// User is no longer hovering so cancel the call to the server and hide the tooltip.
if (typeof ajax === 'object') {
ajax.abort();
}
window.clearTimeout(timeoutID);
$t.attr('title', title);
if ($t.hasClass('ajaxToolActive')) {
ajaxTip.remove();
$t.removeClass('ajaxToolActive');
}
});
$t.mousemove(function (e) {
if (settings.mouseMove && $t.hasClass('ajaxToolActive')) {
ajaxTip.css('top', (e.pageY - settings.yOffset) + 'px')
.css('left', (e.pageX + settings.xOffset) + 'px');
}
});
});
},
destroy : function () {
return this.each(function () {
var $e = $(this);
$e.html($e.children('span').html());
});
}
};
$.fn.ajaxTip = function(method) {
if (methods[method]) {
return methods[method].apply(this, Array.prototype.slice.call(arguments, 1));
} else if (typeof method === 'object' || ! method) {
return methods.init.apply(this, arguments);
} else {
$.error('Method ' + method + ' does not exist on jQuery.ajaxTip');
}
};
}(jQuery));
I think it would also be a good idea to keep lines to 80 characters. I also removed the showing variable and checked whether the element had the active class instead. The other main thing I changed was settings.class to settings['class'], as class is a future reserved word.
"domain": "codereview.stackexchange",
"id": 3437,
"tags": "javascript, jquery, ajax, plugin"
} |
Is picking variants belonging to a specific group of genomes from a VCF file possible? | Question: GnomAD has publicly available VCF files that have variant data gathered from over 15k individuals. For a university project, I want to randomly select 1k of these individuals and get the variant data belonging to them only.
I am very new to bioinformatics in general, and upon checking how VCF files work, my current conclusion is that this task is not achievable by filtering the VCF file in any way, since variants in these files have no direct connection to the genomes they were found in (no array of individual IDs in a variant entry showing which individuals had the variant).
But this task was given to me by a professor who, I believe, is competent enough to have realized if the task were impossible as I thought, so is there something I am missing?
How can I go about randomly selecting a 1000 individuals from this 15 000 pool, and then generating a new VCF file with the data from these selected 1000?
Are there any fields that show which individuals' genome samples contained a specific variant entry in the VCF file?
Answer: It is. A VCF ought to be sub-settable to a specific list of samples. The only thing you have to come up with for your specific case is a list of 1K random sample IDs within the original VCF.
Generally speaking, knowing that you can do this, and how, involves getting familiar with the VCF specification as well as the main bioinformatics tool used to manipulate and work with this format: BCFtools
Let's use 1000 genomes data as an example.
You can use a bit of UNIX together with BCFtools in order to get a list of 1K random IDs from individuals:
(sort -R instead of shuf might work too)
bcftools query -l ALL.chr22.phase3_shapeit2_mvncall_integrated_v5a.20130502.genotypes.vcf.gz | shuf | head -1000 > myRandomIDs.txt
Then again BCFtools to subset the list of individuals from the original VCF and into a new VCF file:
bcftools view --samples-file myRandomIDs.txt ALL.chr22.phase3_shapeit2_mvncall_integrated_v5a.20130502.genotypes.vcf.gz -o myNewVCF.vcf
If you dig a bit deeper into the BCFtools documentation, you will find many things you can do extra if you wanted, like re-calculating the allele frequencies based on this new cohort of individuals, if you wanted that too. Or subset for positions/variants, instead of individuals.
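Conceptually, the subsetting step just drops genotype columns. A toy pure-Python sketch of the same idea on a small in-memory text VCF (for real data, stick with BCFtools; this ignores compression, indexing, INFO recalculation, and many edge cases):

```python
import random

def subset_vcf_samples(vcf_lines, k, seed=0):
    # Toy illustration of subsetting sample columns from a text VCF.
    # Columns 1-9 are fixed (CHROM..FORMAT); columns 10+ are samples.
    out, keep = [], None
    rng = random.Random(seed)
    for line in vcf_lines:
        if line.startswith("##"):
            out.append(line)                      # meta lines pass through
        elif line.startswith("#CHROM"):
            cols = line.split("\t")
            fixed, samples = cols[:9], cols[9:]
            keep = sorted(rng.sample(range(len(samples)), k))
            out.append("\t".join(fixed + [samples[i] for i in keep]))
        else:
            cols = line.split("\t")
            out.append("\t".join(cols[:9] + [cols[9 + i] for i in keep]))
    return out

# tiny synthetic VCF with 4 samples; keep 2 chosen at random
toy = [
    "##fileformat=VCFv4.2",
    "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO\tFORMAT\tS1\tS2\tS3\tS4",
    "1\t100\t.\tA\tG\t50\tPASS\t.\tGT\t0/0\t0/1\t1/1\t0/0",
]
for line in subset_vcf_samples(toy, k=2):
    print(line)
```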
GnomAD data
Using GnomAD data, selecting 1K random samples is impossible since the data is aggregated: instead of genotypes per individual (and thus individuals you could select), we only have the frequency of the variants across various sub-cohorts within the GnomAD project. There is no individual-level data.
The most you can do is select one such sub-cohort and filter for variants with a frequency > 0 (or within some range) in that cohort. You could also get variants present in one and only one specific cohort, by using either frequency information or other info such as allele counts.
The example below filters for variants with a frequency above 0 in the South Asian cohort and 0 frequency in the Southern European cohort. You can have a look at the header of the VCF in order to get to know all the cohorts included and the meaning of all the information contained within the INFO field of the VCF:
bcftools filter -i "AF_sas > 0 & AF_nfe_seu = 0" gnomad.exomes.r2.1.1.sites.21.vcf.bgz
But I'm afraid that's not exactly what you wanted to do in the first place. | {
"domain": "bioinformatics.stackexchange",
"id": 2284,
"tags": "vcf, gnomad"
} |
Mechanism of Myosin Head Bending in Cross Bridge Cycle Power Stroke Phase | Question: What is the mechanism of bending of myosin head during the power stroke of the cross-bridge cycle of the muscle contraction? Does this have anything to do with the protein's 3-D structure i.e. folding of protein in space? I would prefer a physics explanation down to either the classical electrical dipole or quantum mechanical interaction amongst the proteins. I want to know what exactly produces the force that powers the bending. I am pretty sure that the force can only be electromagnetic. The question is how the force is manifested in the bending.
Any references for further reading are appreciated.
Answer: Introduction:
This is going to be quite a long answer. To have an introduction to the topic, you can have a look at articles from Wikipedia and RCSB Protein Data Bank.
The exact mechanism of the physical interactions in the myosin head during the powerstroke cycle is not yet known. The only thing we definitely know is how the release of Pi from myosin causes conformational changes (and hence force) in it, as given at the MJC website:
This is a very basic description of the powerstroke cycle, so we'll move ahead for more detailed explanation of physical interactions during the cycle.
Details:
I have found three theories regarding physical interactions during the powerstroke cycle. Let's discuss them one by one.
THEORY 1: I could not find much explanation about physics of the cycle, the only thing I could find was at the JBC website (emphasis mine):
Mutation R759E in the myosin converter domain results in biochemical and biophysical defects as well as aberrant muscle structural and physiological properties. The central portion of the converter domain is encoded by exon 11e in indirect flight muscle (Fig., green), and the converter interfaces with the exon 9a-encoded relay domain (Fig., blue). Molecular modeling indicated that residues 508–511 in the relay loop are located near the converter residue 759 and defined weak and strong interactions of residues 509 and 511 during the rearrangements of the relay loop that are affiliated with the mechanochemical cycle. Furthermore, Ile508 can be cross-linked to Arg759 in Dictyostelium non-muscle myosin II when they are each substituted by cysteine. Therefore, we hypothesized that specific amino acids in the relay domain interact with converter residue 759 and that second site mutations in the relay residues may suppress the defects associated with converter mutation R759E.
They couldn't give much explanation of the physics involved in it. Maybe this is because I missed it or because there hasn't been much research on this because Dissecting the molecular mechanism of muscle myosin function in vivo has proved difficult. as said at JBC website.
THEORY 2: This theory, given at PNAS, provides a more detailed view of the physics, so we'll have a more thorough explanation of it:
Molecular motors produce force when they interact with their cellular tracks. For myosin motors, the primary force-generating state has MgADP tightly bound, whereas myosin is strongly bound to actin. We have generated an 8-Å cryoEM reconstruction of this state for myosin V and used molecular dynamics flexed fitting for model building. We compare this state to the subsequent state on actin (Rigor). The ADP-bound structure reveals that the actin-binding cleft is closed, even though MgADP is tightly bound. This state is accomplished by a previously unseen conformation of the $\beta$-sheet underlying the nucleotide pocket. The transition from the force-generating ADP state to Rigor requires a 9.5° rotation of the myosin lever arm, coupled to a $\beta$-sheet rearrangement. Thus, the structure reveals the detailed rearrangements underlying myosin force generation as well as the basis of strain-dependent ADP release that is essential for processive myosins, such as myosin V.
Just so that you know what it is, let me add a few points about Molecular Dynamics Flexible Fitting:
The molecular dynamics flexible fitting (MDFF) method can be used to flexibly fit atomic structures into density maps. The method consists of adding external forces proportional to the gradient of the density map into a molecular dynamics (MD) simulation of the atomic structure. For examples of MDFF applications, visit the websites on Mechanisms of Protein Synthesis by the Ribosome, Dynamics of Protein Translocation, Molecular Dynamics of Viruses, and Intrinsic Curvature Properties of Photosynthetic Proteins in Chromatophore.
Now, returning to the main point (results & discussion sections):
The equilibrium and rates of transition between the Strong-ADP and the Rigor states vary greatly among different myosin isoforms and predominantly determine how long a myosin motor can remain bound to actin in the absence of load. This kinetic tuning must be achieved by structural differences in the regions that we have seen to change in our Strong-ADP structure as well as regions involved in stabilizing the lever arm position...Our structures show that the Loop 1 conformation alters in the transition from Strong-ADP to Rigor. Thus, different sequences likely favor one conformation over the other, or promote the transition from the ADP-bound conformation to the Rigor conformation, providing a structural basis for this kinetic tuning.
For myosins, such as myosin V, that function in a cell as two-headed, processive motors, the length of processive runs and the initiation of processive runs are both enhanced by “gating” of the heads. For a two-headed molecule with both heads simultaneously attached to actin, gating refers to the fact that a lead head is essentially stalled in an ADP state strongly bound to actin, until the rear head is detached from actin by binding MgATP. This gating is attributable to the strain dependence of MgADP release. Although we do not know whether some of the subdomains of myosin may be deformed by strain, our Strong-MgADP actomyosin structure clearly reveals that strain must prevent the rearrangement of the $\beta$-sheet from the Strong-MgADP conformation to the Rigor conformation, based on the data presented in Results. Preventing this rearrangement is thus the basis of gating.
For better understanding, you should also see this video (same website) which shows animation of different conformations of myosin in different stages of the cycle:
The transitions are animated by direct morphing between the three structures Pre-Power Stroke (PPS), ADP state, and Rigor. Starting with the PPS conformation, myosin first rearranges to allow phosphate release without much change in its lever arm. Then, the first step of the powerstroke consists of a large swing of ∼58° of the lever arm toward the ADP state. The actin-binding cleft [between the U50 (blue) and L50 (white) subdomains] is closing for this transition, leading to the only state of the actomyosin cycle, which exhibits high affinity for both actin and the nucleotide. The second step of the powerstroke occurs upon ADP release and ends in the Rigor state after an additional lever arm swing of 9.5°.
THEORY 3: This theory gives even more detailed, but a bit different from the previous one, view about how the energy, released by ATP hydrolysis, is stored in myosin head. It is called Rotation-Twist-Tilt (RTT) energy storage mechanism. See this:
According to the mechanism, ATP hydrolysis in the catalytic site rotates the top of the regulatory domain, which, being connected to the coiled coils of the S-2 region, causes twist between them to increase, and leads to the twist in the myosin head. Since, at the time of hydrolysis, the myosin molecule is not bound to the actin filament, the head is free to rotate and tilt. The increase in twist is instrumental in storing the energy of ATP hydrolysis, while the rotation and tilt of the myosin head brings it sufficiently close to the actin so as to form the actomyosin complex. Untwisting of the coiled coils and the subsequent untilting and constrained reversal of rotation of the head cause the power stroke. This strain decreases the energy of interaction between actin and myosin and thus enables ATP to dissociate myosin from actin. The system is now in such a state that after the next ATP hydrolysis event, the myosin head can bind to actin, and thus, a new contractile cycle can be initiated.
The original paper of RTT mechanism (by Nath and Khurana, 2001) is available here. Since the detailed mechanism is too long, I am not posting it here. You can read the full process, just remember to have a pen and paper with you!
Conclusion:
All of the above theories provide an in-depth view of the powerstroke cycle. However, I'd still conclude that the exact physics behind the powerstroke cycle is not yet fully known, only the transitions in conformation of myosin head have been observed and interpreted to some extent.
References:
Myosin head: Wikipedia
Myosin: RCSB Protein Data Bank
Journal of Biological Chemistry: Mapping Interactions between Myosin Relay and Converter Domains That Power Muscle Function
Proceedings of the National Academy of Sciences of the United States of America: Force-producing ADP state of myosin bound to actin
Theoretical and Computational Biophysics Group: Molecular Dynamics Flexible Fitting
Cell Movements: From Molecules to Motility
Molecular mechanisms of energy transduction in cells: Biotechnology in India II
Molecular mechanism of the contractile cycle of muscle; Sunil Nath and Divya Khurana, March 2001 | {
"domain": "biology.stackexchange",
"id": 6019,
"tags": "biochemistry, biophysics, muscles, protein-folding"
} |
Accurate surveys of urban pigeon and bird populations | Question: I've been working on an assignment to develop mathematical models of urban pigeon populations. (As a disclaimer, this is part of a math course, not a biology course). I already understand the mathematics behind this and have developed several plausible models, but I'm at the stage where I need more detailed data in order to validate my models.
It's surprisingly difficult to find accurate population dynamics data for urban pigeons (especially given my inexperience with this kind of a literature search and general "unsophisticated" understanding of biology).
I do have access to some journals (e.g., via my university library, Google Scholar and Amazon.com), but the best resource I've been able to come up with so far is Illinois Birds: A Century of Change (published by the University of Illinois). The books on urban bird populations I've found so far all seem to be targeted to bird watchers (which I'm not).
Is anyone aware of a detailed/accurate data set on population dynamics (either of urban pigeon populations in particular or of urban bird populations in general)?
Answer: You can find population data for lots of species at The Global Population Dynamics Database, which can be used to test and fit population models. When searching for Columbidae (the dove family) I find 16 datasets of different lengths, but none for Columba livia though.
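Once you have a census time series (from the GPDD or elsewhere), validating a simple model can look like the sketch below, which fits a discrete logistic model by ordinary least squares; all numbers are synthetic, not real pigeon counts:

```python
import numpy as np

# Discrete logistic model: N_{t+1} = N_t + r N_t (1 - N_t / K)
r_true, K_true = 0.3, 1000.0
N = [50.0]
for _ in range(40):
    N.append(N[-1] + r_true * N[-1] * (1 - N[-1] / K_true))
N = np.array(N)

# Per-capita growth is linear in N_t: (N_{t+1} - N_t)/N_t = r - (r/K) N_t,
# so r and K fall out of an ordinary least-squares line fit.
g = (N[1:] - N[:-1]) / N[:-1]
A = np.vstack([np.ones_like(N[:-1]), N[:-1]]).T
(r_hat, slope), *_ = np.linalg.lstsq(A, g, rcond=None)
K_hat = -r_hat / slope
print(round(r_hat, 3), round(K_hat, 1))   # recovers 0.3 and 1000.0
```

With real (noisy) census data the same regression gives estimates rather than exact recovery, and residuals indicate how well the model form fits.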
To search and download data you first need to register for a user account. | {
"domain": "biology.stackexchange",
"id": 6639,
"tags": "homework, theoretical-biology, ornithology, population-dynamics, population-biology"
} |
Responsive search bar implementation using OOCSS | Question: I am implementing a search bar here. I have finished the basic UI. I have recently developed a taste for scalable and robust HTML/CSS, so I am closely following things like BEM and OOCSS
Am I on the right path? Also, how can I make the UI responsive?
index.html
<div id="container">
<form class="search-box">
<span class="search-box-icons">
<i class="fa fa-search icon-search"></i>
</span>
<input class="search-box-input" type="text" placeholder="Search"/>
<div class="search-box-autocomplete" style="display: none;">
Foo Bar
</div>
</form>
</div>
app.css
* {
margin: 0;
box-sizing: border-box;
font-family: 'Roboto', sans-serif;
}
body {
background: #343d46;
}
#container {
width: 500px;
margin: auto;
margin-top: 30px;
background: gray;
color: black;
}
.search-box {
width: 100%;
position: relative;
background: inherit;
color: inherit;
height: auto;
}
.search-box:hover .icon-search {
opacity: 1;
}
.search-box-input {
background: inherit;
color: inherit;
height: 50px;
font-size: 18px;
width: 100%;
padding: 10px;
padding-left: 40px;
display: inline-block;
border: none;
}
.search-box-input:focus {
outline: none;
}
.search-box-icons {
position: absolute;
display: inline-block;
margin: auto;
top: 16px;
left: 10px;
width: auto;
}
.icon-search {
color: inherit;
font-size: 16px;
display: inline;
cursor: pointer;
opacity: 0.4;
}
.search-box-autocomplete {
position: absolute;
background: #fff;
width: 100%;
margin: 0;
border-top: 1px solid gray;
}
Answer: Markup
You're using way more than you need. You have an unnecessary container element, and 2 empty elements for displaying a purely decorative icon. This should be all the markup you need to get the same effect:
<form class="search-box">
<input class="search-box-input" type="search" placeholder="Search"/>
<div class="search-box-autocomplete" style="display: none;">
Foo Bar
</div>
</form>
Placeholder text is not a replacement for label text. It's supposed to be for providing an example of the type of content you're looking for.
It's strange that you're using the HTML5 placeholder attribute, but not the HTML5 input type of search.
CSS
I do not subscribe to the BEM methodology, so I cannot comment as to how well you've followed it; it always looks overly verbose to me.
As I've already stated, your icon is purely decorative. It has no place in the markup. The most appropriate location for it is as a pseudo element on the label for the search field (which doesn't exist here) or the form element.
The following CSS:
#container {
width: 500px;
margin: auto;
margin-top: 30px;
background: gray;
color: black;
}
.search-box {
width: 100%;
position: relative;
background: inherit;
color: inherit;
height: auto;
}
Can be reduced to this:
.search-box {
width: 500px;
margin: 30px auto 0 auto;
background: gray;
color: black;
position: relative;
}
Usability
The entire color scheme has extremely low contrast.
The styling that browsers give when the input element has focus is considered a usability feature. If it doesn't fit into your design, you're supposed to adjust it to fit, not remove it.
There's no submit button. There are users who don't understand that they can just hit enter to submit a form. | {
"domain": "codereview.stackexchange",
"id": 11993,
"tags": "html, css"
} |
Problem involving reaction force between two masses | Question: Suppose there are two masses $m$ and $M$. Both of them are cubic and in contact with each other. They are at rest on a frictionless surface. Now if I apply a constant force $F$ on $m$ then the force felt by $m$ will be $F$. But $m$ will also push $M$ with $F$ force. And by Newton's third law $M$ will also exert a force $-F$ on $m$. So the net force on $m$ will be zero.
But I have a feeling that this isn't right. So exactly where am I wrong? Will $m$ and $M$ both accelerate?
Answer: The force exerted by $m$ on $M$ won't be equal to $F$ - it will be a smaller force, $f$.
We can find $f$, if we consider that both masses will move with the same acceleration, i.e., $a=(F-f)/m=f/M$.
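Plugging arbitrary numbers into those two relations makes the point concrete:

```python
# Arbitrary example values: F pushes m, which pushes M via contact force f.
m, M, F = 2.0, 3.0, 10.0

a = F / (m + M)   # both blocks share one acceleration: F = (m + M) a
f = M * a         # the contact force is what accelerates M alone

print(a)          # 2.0
print(f)          # 6.0  (smaller than F, as claimed)

# consistency with the relation a = (F - f)/m = f/M
assert abs((F - f) / m - f / M) < 1e-12
```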
Adding some details to address additional questions in the comments.
Regarding the interpretation of the Newton's third law.
According to Wikipedia, the law says:
When one body exerts a force on a second body, the second body
simultaneously exerts a force equal in magnitude and opposite in
direction on the first body.
So we can say that when body A exerts a force on body B, body B exerts an equal in magnitude and opposite in direction force on body A. So there are only two bodies involved and the equal in magnitude and opposite in direction forces are applied to the same point of contact or, more realistically, if the bodies interact over a finite surface, to the same surface of contact.
The diagram below shows how this is applicable to the problem at hand.
A finger pushes $m$ with force $F$ and, therefore, $m$ pushes the finger with force $-F$. Similarly, $m$ pushes $M$ with force $f$ and $M$ pushes back with force $-f$.
If $M$ was immovable and, as a result, $m$ could not move either, we could conclude, based on Newton's second law, that, since the acceleration of $m$ is zero, the sum of forces acting on it must be zero as well and, therefore, $f$ must equal $F$.
Otherwise, there is no law that says that $f$ must be equal to $F$, since these forces act between different pairs of objects.
You are asking why $m$ and $M$ are moving together.
The only reason $M$ starts moving in the first place is because it is pushed by $m$. Let's say, $M$, somehow, was able to gain an extra speed and get separated from $m$. As soon as that happened, $f$ would become zero and, therefore, $M$ would stop accelerating. On the other hand, the force acting on $m$, $F-f$, would increase and, therefore, its acceleration would increase, so it would immediately catch up with $M$.
Based on that logic, as long as we keep pushing $m$, even if the force is decreasing, $m$ and $M$ will move together. | {
"domain": "physics.stackexchange",
"id": 49745,
"tags": "newtonian-mechanics, forces, acceleration, free-body-diagram"
} |
Deferred log file close | Question: My code works in that it compiles, and when executed writes out a log file using a buffered writer.
But I wonder whether:
1. I am correct in assuming that this actually winds up deferring a .Close that is always called
2. a package variable is the idiomatic way of doing this in Go
I realise that I could just have createLogger return the file pointer, and defer on that ... but that seems even weirder, to have createLogger set a package variable on log, then return the file pointer to be treated as a local ...
I'm still struggling with an idiom here.
package main
import (
"bufio"
"log"
"os"
)
var logFile *os.File
func main() {
createLogger()
defer logFile.Close()
// do some stuff
}
func createLogger() {
logFile, err := os.OpenFile("log/app.log", os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
if err != nil {
log.Fatalf("error opening file: %v", err)
os.Exit(-1)
}
log.SetOutput(bufio.NewWriter(logFile))
}
Answer: Yes on both counts (although putting the variable in the main package is not idiomatic).
Typically I would return a logFile from createLogger (calling it newLogger is idiomatic), and defer closing the result.
If you're building a logger to work across your whole app, I'd look at how the standard library does it.
Essentially, create a new package my_app_logger with an initializer which sets up the logger. That way, every package that includes my_app_logger gets the same logger instance. | {
"domain": "codereview.stackexchange",
"id": 9014,
"tags": "go, logging"
} |
Total orbital and spin angular momentum for a closed shell | Question: I read one Phys.SE question similar to mine, in
Total angular momentum in a full shell
but the question was so confusing and vague. The answer, though, was helpful for me to understand a part of my question.
It's said that the total orbital and spin angular momentum for a closed shell is zero. I understand, as explained in the link above, that paired electrons in a closed shell have zero net spin. That's because each pair has one up electron and one down. But I can't realize why this holds for total orbital angular momentum too.
Answer: The total orbital angular momentum of a closed shell is zero because, for fixed $l$, the possible states are labeled by the eigenvalues $m_l$ of $L_z$, with $m_l = l,\dots,0,\dots,-l$ in integer steps. The sum of all $m_l$ inside a filled shell is always zero, so the total orbital angular momentum of the shell is zero.
This is just the generalization of the argument with "up/down" for spin, which is the spin-$\frac{1}{2}$ case.
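As a concrete check (an illustrative example, not from the original answer), take a filled $p$ shell ($l=1$) holding six electrons, two per $m_l$ value:

$$M_L=\sum_{\text{electrons}} m_l = 2\big[(-1)+0+(+1)\big]=0,\qquad M_S=3\left(+\tfrac{1}{2}\right)+3\left(-\tfrac{1}{2}\right)=0.$$

Since the filled shell is the unique state compatible with these quantum numbers, it must be the $L=0$, $S=0$ state.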
"domain": "physics.stackexchange",
"id": 23629,
"tags": "quantum-mechanics, angular-momentum, atomic-physics"
} |
In quantum weak measurement, what kind of theory replaces the Copenhagen interpretation? | Question: Here, I denote the initial states of the device and the quantum system as $|\Phi_\textrm{in}\rangle$ and $|\Psi_\textrm{in}\rangle$.
The measurement interval is $[t_i,t_f]$; after the measurement, the device and quantum system will evolve to
$$\exp\left(-\mathrm i\int_{t_i}^{t_f} \hat{H} \mathrm dt\right)|\Psi_\textrm{in}\rangle|\Phi_\textrm{in}\rangle,$$ where $H$ is the coupling Hamiltonian, and then the device gives the results at time $t_m$ ($t_m\geq t_f$).
We make a postselection (a strong measurement) of the state of the quantum system at $t_s\geq t_f$. We mark the particles in a definite state and look at the weak measurement results of these particles.
The state of the device at $t_m$ is $\langle \Psi_f|\exp\left(-\mathrm i\displaystyle\int_{t_i}^{t_f} \hat{H} \mathrm dt\right)|\Psi_\textrm{in}\rangle|\Phi_\textrm{in}\rangle$.
My problem is, according to the Copenhagen interpretation, the above expression is valid only when $t_s<t_m$, but it seems it is valid too when $t_s>t_m$. I wonder what replaces the Copenhagen interpretation here?
What's more, one of the papers I read indicates that both $t_m<t_s$ and $t_m>t_s$ are right. The only difference is that if $t_s<t_m$, we only have to measure the ensemble corresponding to the specified state; if $t_s>t_m$, we have to measure the whole ensemble and then select the result corresponding to the subensemble in the specified state.
I want to make my question clearer below:
Denote the state at time $t_m$ of the whole ensemble and of the subensemble as $|\Psi\rangle$ and $|\Psi'\rangle$. Assume when $t_f<t<t_s$, the state of the whole ensemble is $|\Psi\rangle=1/2 |\Psi_f\rangle +1/2{|\Psi_f\rangle}^{\perp}$, where $|\Psi_f\rangle$ is the state we want to postselect. If $t_s<t_m$, then after the strong measurement the subensemble collapses to a pure state $|\Psi_f\rangle$, and the state of the device at the later time $t_m$ can be expressed as $\langle \Psi_f|\exp\left(-\mathrm i\displaystyle\int_{t_i}^{t_f} \hat{H} \mathrm dt\right)|\Psi_\textrm{in}\rangle|\Phi_\textrm{in}\rangle$.
If $t_m<t_s$, we measure the whole ensemble and then select the result of the subensemble according to the result of postselection. In this case, the state of the whole ensemble and of the subensemble at $t_m$ should not be influenced by the postselection according to the Copenhagen interpretation. Neither the whole ensemble nor the subensemble is in the state $|\Psi_f\rangle$ at the earlier time $t_m$; how could we get the same result as in the case $t_s<t_m$ by choosing the weak measurement results of the subensemble?
How can the result of postselection influence the result of the subensemble? If before the postselection none of the members of the ensemble is in the state $|\Psi_f\rangle$, then after postselection and selection of the subensemble which collapses to $|\Psi_f\rangle$, the result after data processing is different from the result of the case where the subensemble is in a definite state $|\Psi_f\rangle$. In the extreme case, if at an earlier time the ensemble is a pure ensemble in a state $|\Psi\rangle=a |\Psi_f\rangle +b{|\Psi_f\rangle}^{\perp}$, then the result after data processing is the same as the raw result.
Answer: By "Copenhagen interpretation", I assume that you mean the interpretation with instantaneous "collapse" one usually encounters in an introductory quantum theory course. Such collapse is a useful rule to do calculation but it is only a fiction. What typically happens is that the quantum system is correlated with the macroscopic measurement device and other environmental degrees of freedom that we practically can't and don't keep track of. This process called decoherence, which is not instantaneous but can be very fast, creates the appearance of irreversible collapse while the global time evolution of the system + device + environment is still governed by the Schrödinger equation. Decoherence is a part of quantum theory. No new interpretation is introduced. (If you want to learn more about decoherence, you can learn about density operators first http://pages.uoregon.edu/svanenk/solutions/Mixed_states.pdf and then move on to https://arxiv.org/abs/quant-ph/0612118)
What this means for your question is that the process of measurement for all practical purposes can be described by the Schrödinger equation on a larger system and that is exactly what your equation
$$ \exp\left( -i \int_{t_i}^{t_f} dt H \right) |\psi_{in}\rangle |\phi_{in}\rangle $$ describes. The weak measurement doesn't have to "finish" before we are able to do something else with the state. You don't need a new rule to perform a measurement during another measurement.
After the question has been edited and clarified:
In the reference you have added (http://arxiv.org/abs/1109.6315) $t_m$ is when a strong measurement is performed on the device. If both $ t_s, t_m \ge t_f$ (i.e. the weak measurement has already finished), there is no difference whether a measurement is made on the system (at time $ t_s $) or on the device first because the two measurements commute. (Again, no new interpretation is needed.) Now specific to the weak value analysis is that you only care about a subensemble corresponding to a certain post-selected system state $ | \psi_f \rangle $. If $ t_m \ge t_s $, at $ t_m $ you already have the information of which subensemble you are supposed to post-select. If $ t_m < t_s $, at $ t_m $ you have to wait until $ t_s $ to learn the subensemble that you are supposed to post-select. There is no physical difference. The only difference is in how to post-process the data.
What selecting a subensemble means explicitly
Denote the state at time $t_m$ of the whole ensemble and of the subensemble as $|\Psi\rangle$ and $|\Psi'\rangle$. Assume when $t_f<t<t_s$, the state of the whole ensemble is $|\Psi\rangle=1/2 |\Psi_f\rangle +1/2{|\Psi_f\rangle}^{\perp}$, where $|\Psi_f\rangle$ is the state we want to postselect. If $t_s<t_m$, then after strong measurement, the subensemble collapses to a pure state $|\Psi_f\rangle$...
You seem to think that a subsystem of a pure entangled system is in a definite superposition state and are confused when trying to apply collapse to this superposition state. So let's make this clear.
The situation at hand is that we have two subsystems, the "system" $s$ and the "measurement device" $m$, evolving unitarily (i.e. by a Hamiltonian). The combined system started in a product pure state, so the final state is pure but entangled. This means that the complete statistical description of $s$ or $m$ alone is given by a mixed density operator $\rho$; the expectation value of any observable $A$ (strong measurement) can be calculated from $\rho$ by the Born rule $\text{Tr}(\rho A)$. Any measurement statistics for $s$ is determined by the density operator $\rho_s$ and $\rho_s$ alone. It is not affected by $\rho_m$ and measurements on $m$.
We then make measurements on $s$ and $m$ separately, at times $t_s$ and $t_m$ respectively. (Yes, we are making a strong measurement on a "measurement device." The "measurement device" is treated as a quantum system and has its own observables. To read out the desired result we have to measure an observable of $m$ whose values are correlated with the values of an observable of $s$.)
(If you want more elaboration on the above two paragraphs, you might want to look at the principle of deferred measurement and the principle of implicit measurement in Nielsen & Chuang pp. 186-187.)
Now, here is what selecting a subensemble means. A system described by a mixed density operator can be interpreted as being in an ensemble of different pure states $|\psi_i \rangle$, each occurring with probability $p_i$. Nevertheless $\rho$ is not the same as a classical ensemble because the ensemble is not unique; there are infinitely many ensembles that give rise to a particular $\rho$. Choosing an observable and applying collapse is picking a preferred ensemble interpretation of $\rho$, thus making it a classical ensemble. (Note that the collapse is not applied to a superposition of the $|\psi_i \rangle$.) Then, only by conditioning on picking a subensemble $|\psi_i \rangle$ for a particular $i$, we are able to say that the combined system is in a product state $|\psi_i \rangle |\phi \rangle$ for some state $|\phi \rangle$ of $m$.
To recapitulate, after $t_f$, no matter if $t_s>t_m$ or $t_s \le t_m$, we can pretend that $s$ is always in a classical ensemble chosen by the observable of the strong measurement on $s$. Then post-selection is just a classical data processing to which no postulate of collapse enters. | {
"domain": "physics.stackexchange",
"id": 31413,
"tags": "quantum-mechanics, quantum-information, measurement-problem"
} |
$\mathbb Z_2$ or $\mathbb Z$ invariant for the Su-Schrieffer-Heeger (SSH) model | Question: I am trying to understand topological insulators and topological invariant.
The Su-Schrieffer-Heeger (SSH) model is often invoked as a prototypical topological insulator in 1D that carries localized zero modes at the edge.
In every single treatment I could find, people compute winding numbers or Zak phases that can take one of two possible values. Thus, they are $\mathbb Z_2$ invariants, right?
Then the classification of topological insulators from symmetries is often discussed, and a "periodic table" is presented (for instance: https://topocondmat.org/w8_general/classification.html). The SSH model falls into class AIII or BDI, depending on whether one considers the electronic or the mechanical case (as in Kane & Lubensky 2013, Topological Boundary Modes in Isostatic Lattices).
However, in $d=1$, these periodic tables predict a $\mathbb Z$ invariant, not a $\mathbb Z_2$ one!
So what is it that I am not understanding here? Is the invariant from the periodic table a different one? What is the $\mathbb Z$ invariant for the SSH model then? Or am I reading the table wrong?
Answer: The short answer is that it depends on which of the symmetries you enforce. More precisely, the simple SSH model has many symmetries, and it is a priori not clear which of these symmetries you consider as 'accidental' and which you consider as 'enforced'. This is a matter of choice. Depending on this choice, the model lands up in different possible symmetry classes (possible choices are A, AIII, AI, BDI and D, as I will explain; the respective invariants are $0$, $\mathbb Z$, $0$, $\mathbb Z$, $\mathbb Z_2$).
Let me give some more detail. Consider the SSH model
$$ H_\textrm{SSH} = - \sum_n \left( t_{AB} \; c^\dagger_{n,A} c_{n,B} + t_{BA} \; c_{n,B}^\dagger c_{n+1,A} + h.c. \right). $$
In the limit $t_{BA} = 1$ and $t_{AB} = 0$, we obtain the fixed point limit where the coupling is purely between unit cells, giving rise to a decoupled zero-energy fermion at each end of the chain.
It is conventional to define a single-particle Hamiltonian $\mathcal H_k$ through
$$ H = \sum_k \left( c^\dagger_{k,A}, c^\dagger_{k,B} \right) \mathcal H_k \left( \begin{array}{c} c_{k,A} \\ c_{k,B} \end{array} \right).$$
Hence for $H = H_\textrm{SSH}$, we have that $\mathcal H_{\textrm{SSH},k} = -\left[t_{AB} + t_{BA} \cos(k) \right] \sigma_x - t_{BA} \sin(k) \sigma_y$.
This model has a lot of symmetries. Let me go through them, focusing on how they act on the single-particle Hamiltonian (*):
A commuting anti-unitary 'time-reversal' symmetry $\mathcal T$ defined by $\mathcal T \mathcal H_k \mathcal T := \mathcal H_{-k}^*$ and $\mathcal T^2 = +1$. We see that $[\mathcal T, \mathcal H_{\textrm{SSH},k}] = 0$.
An anti-commuting unitary 'sublattice' symmetry $\mathcal S$ defined by $\mathcal S \mathcal H_k \mathcal S := \sigma_z \mathcal H_k \sigma_z$ and $\mathcal S^2 = +1$. We see that $\{ \mathcal S, \mathcal H_{\textrm{SSH},k} \} = 0$.
An anti-commuting anti-unitary `particle-hole' symmetry $\mathcal C$. We can simply define $\mathcal C = \mathcal S \mathcal T$. We have that $\mathcal C^2 = +1$ and $\{ \mathcal C, \mathcal H_{\textrm{SSH},k} \} = 0$.
Hence we see that the SSH model has all the three symmetries $\mathcal T,\mathcal C,\mathcal S$ that enter the periodic table of topological insulators/superconductors! We thus get to choose which class we put it in. You might think 'if it has all symmetries, then we must put it in the class BDI, which has all three symmetries'. That is not quite true: the class is not defined by 'which symmetries does our model have?' but rather 'what kind of arbitrary symmetric terms do we allow to add to our model?'. Let me give some more detail.
"The SSH model is in the class AIII": if we say this, we mean that we allow any perturbations that respect $\mathcal S$, but they need not obey $\mathcal T$ and $\mathcal C$. The table tells us there are infinitely many distinct gapped phases, labeled by an integer $\mathbb Z$. This is easy to understand: the $\mathcal S$-symmetry above tells us that $\mathcal H_k$ has to anticommute with $\sigma_z$, hence $\mathcal H_k = h_x(k) \sigma_x + h_y(k) \sigma_y$. Since our model is gapped, we have a well-defined map $$S^1 \to \mathbb R^2 - \{0\}: k \to (h_x(k),h_y(k)).$$
This is an embedding of the circle into the punctured plane, which has a well-defined winding number around the origin. One can prove that the winding number is equivalent to
$$ \nu = \frac{1}{\pi} \int_{-\pi}^\pi \mathrm dk\ \; \langle \psi_k| \sigma_z i \partial_k |\psi_k \rangle. $$
It is straightforward to derive that for the SSH model, we have $\nu = 0$ if $t_{AB} > t_{BA}$ (trivial phase) and $\nu = 1$ for $t_{AB} < t_{BA}$ (topological phase). The classification tells us that no matter what $\mathcal S$-symmetric term we add, we cannot adiabatically connect these two gapped phases.
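To make the winding explicit (a short check consistent with the single-particle Hamiltonian written above, assuming positive hoppings): for the SSH model the curve is

$$\big(h_x(k),h_y(k)\big)=\big({-t_{AB}-t_{BA}\cos k},\;{-t_{BA}\sin k}\big),$$

a circle of radius $t_{BA}$ centered at $(-t_{AB},0)$. It encloses the origin exactly when $t_{BA}>t_{AB}$, giving winding number $\nu=1$, and misses it when $t_{BA}<t_{AB}$, giving $\nu=0$.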
"The SSH model is in the class BDI": this means we enforce all three symmetries. Since we saw that $\mathcal S$ by itself was already enough to protect $\mathbb Z$ distinct phases, it is trivial to observe that with extra symmetries, our classification does not get smaller.
"The SSH model is in the class D": this means we allow all perturbations that respect $\mathcal C$, but they can break $\mathcal T$ and/or $\mathcal S$. One can show that one can now connect a model that has $\nu = 2$ to one that has $\nu =0$. In the class AIII we could not do this. More generally, it turns out only $\nu \mod 2$ is a well-defined invariant (i.e. this number cannot change without a phase transition). Since the SSH model had $\nu = 1$ in the topological phase, we see that it is still a non-trivial phase in the class D. Equivalently, this $\mathbb Z_2$ invariant can be measured by
$$ \gamma = \frac{1}{\pi} \int_{-\pi}^\pi \mathrm dk\ \; \langle \psi_k| i \partial_k |\psi_k \rangle. $$
Indeed, one can show that $\gamma \equiv \nu \mod 2$.
"The SSH model is in the class A or AI": now we allow all possible terms (class A) or all $\mathcal T$-preserving terms (class AI). The classification tells us that in either case, we can smoothly connect all gapped models. Indeed, nothing prevents us from adding an on-site potential to the SSH model, which can be used to smoothly connect the limit $t_{AB}=0,t_{BA}=1$ to the limit $t_{AB}=1,t_{BA}=0$. Hence, we are allowed to say that the SSH model is in one of these two classes, but if we do so, its edge modes are no longer topologically protected.
(*) Note that it is actually more natural (but, alas, less conventional) to consider how the symmetries act on Fock space, i.e. how they act on the 'actual' hamiltonian $H$. Then the three symmetries $T$, $C$ and $S$ are all commuting, as one would desire of a symmetry! It is only when one considers their effective action on the single-particle Hamiltonian $\mathcal H_k$ that some become anti-commuting, in an affront to our physical intuition. More precisely, $T$ is defined to be complex conjugation in the physical occupation basis. However, $C$ is defined to be a unitary (commuting) symmetry via $c_{n,A} \leftrightarrow c^\dagger_{n,A}$ and $c_{n,B} \leftrightarrow -c^\dagger_{n,B}$. Note that this naturally explains its name as a 'particle-hole' transformation. The reason it seems to act as an anti-commuting anti-unitary symmetry $\mathcal C$ on the single-particle Hamiltonian has to do with it interchanging daggers. The latter can be rewritten as a transpose, up to a sign. Using hermiticity, the transpose can be replaced by complex-conjugation.
"domain": "physics.stackexchange",
"id": 49933,
"tags": "condensed-matter, topology, topological-insulators"
} |
Changed ROS_PACKAGE_PATH - how do I repair this? | Question:
Short version: I broke my ROS_PACKAGE_PATH, and now I don't know how to get my workspace to work again. I'm using rosbuild, not catkin, and jade; on ubuntu.
Long version:
I'm pretty much completely new to ros, and I have been working on an existing project from a coworker.
This is using rosbuild instead of catkin, and the workspace consists of a number of different packages.
Now, I have been having a lot of trouble getting another package to work with this (which was by another coworker and was built using catkin...). I've been googling and trying things and ... somehow I think I changed my ROS_PACKAGE_PATH.
I think what I did is that I ran a source command that's intended for catkin, which broke the whole thing.
When I try to build any of the packages that worked before, it doesn't work since it tells me that this is not a package. When I echo ROS_PACKAGE_PATH I get /opt/ros/jade/share:/opt/ros/jade/stacks.
Now, I found this answer http://answers.ros.org/question/70435/confusion-with-ros_package_path/, which has been helpful in at least understanding what the problem was. So I tried export ROS_PACKAGE_PATH=/user/this_project/this_workspace:/opt/ros/jade/share:/opt/ros/jade/stacks. This did work (in that it changed the package path), but I still can't compile anything, since it doesn't recognize any of the packages as packages. Which makes sense I guess, because the path I added is not the path of one package, but the path of the directory where all my packages are!
But how do I add the directory with all the packages? And how do I make it so that it recognizes new packages again in the future?
I also found a number of answers that mentioned adding the path of the workspace to a bashrc file. What is that? Where do I find it?
The folder that should be the workspace only has other folders, which are the different packages, and no files at all. I'm also not sure what exactly I should add to it if I could find it.
Bottom line: I have no idea what I'm doing here, and I'm getting a bit panicked.
I also don't have anybody I could ask (yesterday I asked another coworker to help me with the previous problem, who was involved in setting this whole project up - and we ended up breaking the whole thing altogether, so that I had to reset everything to the last git commit. And now I fear I've broken the whole ros installation!)
Anyway - I'd be very, very grateful if anybody can help me repair this mess!
Originally posted by Zaunkönig on ROS Answers with karma: 3 on 2016-09-22
Post score: 0
Original comments
Comment by sloretz on 2016-09-22:
~/.bashrc is a file that bash runs every time you open a bash shell. It means adding the "export ROS_PACKAGE_PATH=..." command to ~/.bashrc, to make it run every time a terminal is opened. If setting ROS_PACKAGE_PATH didn't fix your problem, adding it to your bashrc certainly won't fix it.
Answer:
In general you first source the "global" setup, then the "workspace" setup:
source /opt/ros/indigo/setup.bash
source ~/your_catkin_ws/devel/setup.bash
The first command (re)sets the ROS_PACKAGE_PATH to your ROS distribution.
The latter command will prepend the "src" directory of the catkin workspace to the ROS_PACKAGE_PATH
As sloretz writes, if you put these commands in your ~/.bashrc, they are executed in every bash you start.
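Concretely, "adding it to your bashrc" just means appending the source lines to the hidden file ~/.bashrc in your home directory (the paths below are examples; substitute your own distro and workspace):

```sh
# ~/.bashrc -- executed by every new bash shell
source /opt/ros/jade/setup.bash           # global ROS setup for your distro
source ~/your_catkin_ws/devel/setup.bash  # workspace overlay, if you have one
```

After editing, open a new terminal (or run `source ~/.bashrc`) for the change to take effect.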
Originally posted by Felix Endres with karma: 6468 on 2016-09-22
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 25821,
"tags": "ros-package-path, ros-jade, rosbuild"
} |
Is there a limit on transition function compositions in NFAs? | Question: I want to prove regularity of a language that contains a known regular language $L$ using an NFA.
Say the transition function of the automaton that accepts $L$ is $f$.
In my new transition function, is there a limitation on composition of $f$?
For example, can I make the following transition in the new function?
$$f'(q_i ,\sigma)=f(f(f(q_i, \sigma)))$$
If not, why?
Answer: You can define the transition function of an automaton to be whatever you want, as long as it's a function of the appropriate type, i.e., $Q\times\Sigma\to Q$ for a DFA and $Q\times\Sigma\to 2^Q$ for an NFA.
However, the composition you've attempted doesn't actually make sense. Since $f$ is the transition function of an NFA, it has type $Q\times\Sigma\to 2^Q$. As such, you can't write $f(f(q_i,\sigma))$, since the argument of the outer application of $f$ is some subset of $Q$, whereas there are supposed to be two arguments: an element of $Q$ and a symbol from the alphabet. So, for a double-composition, you probably need to write something like
$$\bigcup\{f(q,\sigma)\mid q\in f(q_i,\sigma)\}\,.$$
You can try to work out the triple-composition yourself.
But are you sure you want to do that? You're defining an automaton that says "Every time I read the character $\sigma$, I will do whatever the original automaton would have done if it read three $\sigma$s." As I said, you can make your automaton do whatever you want, but I'd be interested to know the context that makes this a sensible thing to do. | {
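For reference, one way to write the answer's unrolling compactly (this notation is ours, not the answer's) is to lift $f$ to sets of states, $F(S,\sigma)=\bigcup_{q\in S}f(q,\sigma)$; then the intended triple application becomes

$$f'(q_i,\sigma)=F\big(F\big(f(q_i,\sigma),\sigma\big),\sigma\big),$$

which is what $f(f(f(q_i,\sigma)))$ would have to mean once the types are respected.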
"domain": "cs.stackexchange",
"id": 10161,
"tags": "regular-languages, automata, finite-automata, computation-models"
} |
How is HIV evolutionarily viable despite its extreme virulence? | Question: How does HIV survive natural selection? And how has it managed to kill far more than any non-airborne virus in recorded history?
Answer: Human Immunodeficiency Virus is a mutated form of Simian Immunodeficiency Virus. In simians (apes and monkeys, not including humans), SIV is in most cases not pathogenic; however, when the mutated form made the jump to humans, it became highly contagious and virulent. You can find a basic description in the Wikipedia article here: Simian Immunodeficiency virus.
As for your second question, I would likely say that influenza has killed more humans in total than HIV, so you would have to provide a reference for your claim.
However, because HIV is a retrovirus that incorporates into the genome of the immune cells of its host, it can survive in the body for long periods of time in a dormant state. A person may not know they are infected for months or years, and they can pass the virus on to other people. Slow killing viruses such as HIV are very dangerous, as they can be transmitted to large swaths of the population before people are aware they are sick.
Update:
Based on comments, there may be a misunderstanding of the term virulence. From The Karolinska Institutet medical subject heading (MeSH) definition:
The degree of pathogenicity within a group or species of microorganisms or viruses as indicated by case fatality rates and/or the ability of the organism to invade the tissues of the host. The pathogenic capacity of an organism is determined by its VIRULENCE FACTORS.
-The Karolinska Institutet; Virulence
The emphasis is mine, but virulence also refers to the ability of the virus to infect its host, and does not necessarily lead to the host's death. The virus that causes the common cold is highly virulent, but for almost the entire human population it causes nothing more than discomfort.
A virus like Ebola on the other hand is highly virulent in both senses of the term. The reason that there are far fewer cases of Ebola is that it kills the human host so quickly that an afflicted person will on average only go on to infect about two other individuals, mainly because victims 1) die quickly, 2) are only contagious after symptoms manifest themselves, and 3) transmission requires direct contact with the infected person's bodily fluids.
Contrast that with influenza, where a single person can infect tens to hundreds of people, is contagious before displaying symptoms, and spreads the virus through casual contact; or, as stated above, with HIV, where a person can be contagious for weeks, months, or even years before they are aware that they are infected, and you see why HIV is very sustainable in the human population.
HIV also has the added weapon in its arsenal that it can lie dormant in cells for long periods of time so as to escape the detection of the host immune system, and as the disease attacks and kills immune cells, when the dormant virus activates again there are far fewer specialized cells to address the active infection.
Another factor is that it is often comorbidity with other diseases that results in the death of a patient with an HIV infection. AIDS results from other diseases having the opportunity to go unchecked after HIV has decimated the host immune system.
"domain": "biology.stackexchange",
"id": 5046,
"tags": "epidemiology, hiv"
} |
Calculating acceleration and velocity | Question: I'm writing some Quad Copter software and beginning to implement an altitude hold mode.
To enable me to do this I need to get an accurate reading for vertical velocity. I plan to use a Kalman filter for this but first I need to ensure that I'm getting the correct velocity from each individual sensor.
I have done this but I'm not 100% sure it's correct, so I was hoping to get some confirmation on here.
My first sensor is a Lidar distance sensor, I calculated acceleration and velocity using the following code:
float LidarLitePwm::getDisplacement()
{
int currentAltitude = read();
float displacement = currentAltitude - _oldAltitude;
_oldAltitude = currentAltitude;
return displacement; //cm
}
//Time since last update
float time = (1.0 / ((float)FLIGHT_CONTROLLER_FREQUENCY / 10.00)); // 50Hz, 0.02s
float lidarDisplacement = _lidar->getDisplacement();
_currentLidarVelocity = lidarDisplacement / time;
The second sensor is an accelerometer. I calculated acceleration and velocity using the following code:
Imu::Acceleration Imu::getAcceleration()
{
//Get quaternion
float q[4];
_freeImu.getQ(q);
//Get raw data
float values[9];
_freeImu.getValues(values);
//Extract accelerometer data
float acc[3];
acc[0]= values[0]; //x
acc[1]= values[1]; //y
acc[2]= values[2]; //z
//Gravity compensate
_freeImu.gravityCompensateAcc(acc, q);
//Convert acceleration from G to cm/s/s
_acceleration.x = acc[0] * 9.8 * 100;
_acceleration.y = acc[1] * 9.8 * 100;
_acceleration.z = acc[2] * 9.8 * 100;
return _acceleration; //cm/s/s
}
//Time since last update
float time = (1.0 / ((float)FLIGHT_CONTROLLER_FREQUENCY / 10.00)); // 50Hz, 0.02s
//Get accel
Imu::Acceleration imuAcceleration = _imu->getAcceleration();
//Get velocity
currentZVelocity += imuAcceleration.z * time; //cm/s
It would be great if someone could confirm if this is correct (or not)
Thanks
Joe
Answer: I noticed a weird thing in your code -- it looks like you are calculating a first derivative of something labeled altitude in LidarLitePwm::getAcceleration() (which would be velocity, and not acceleration) and later integrating it to get something labeled velocity (which would in fact be altitude).
So either your variable and method names are wrong, or you are missing one more derivative in LidarLitePwm::getAcceleration(). Also it is a little weird that you are using the time when integrating acceleration and not when differentiating it. It should be the same in both directions. If you include the time in seconds, your units will be cm/s/s as stated in the comments; if you don't, they will be cm/tick/tick.
Since it seems that you want to reach velocity as the final product anyway, I would skip calculating the accelerations, rename LidarLitePwm::getAcceleration() to LidarLitePwm::getVelocity() and change it to something like this:
float LidarLitePwm::getVelocity(float time)
{
int currentAltitude = read();
float velocity = (currentAltitude - _oldAltitude) / time;
_oldAltitude = currentAltitude;
return velocity; //cm/s
}
... all this assuming that the read() function returns distance from ground in cm.
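To make the units point concrete, here is a small sketch (Python, illustrative only; the 0.02 s step and helper names are assumptions, not part of the original code):

```python
# Differentiating a distance signal and integrating an acceleration signal
# must both use the same time step (in seconds) to keep the units in cm/s.
DT = 0.02  # 50 Hz update rate, as in the question

def lidar_velocity(current_altitude_cm, old_altitude_cm):
    """First derivative of altitude -> velocity, in cm/s."""
    return (current_altitude_cm - old_altitude_cm) / DT

def integrate_accel(velocity_cm_s, accel_cm_s2):
    """One Euler integration step: acceleration -> velocity, in cm/s."""
    return velocity_cm_s + accel_cm_s2 * DT

# A climb of 4 cm in one 0.02 s tick is a velocity of 200 cm/s,
# and zero acceleration leaves that velocity unchanged:
v = lidar_velocity(104.0, 100.0)
v = integrate_accel(v, 0.0)
```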
The accelerometer stuff seems ok.
Btw if you are holding altitude with PID don't you need the altitude rather than velocity? | {
"domain": "robotics.stackexchange",
"id": 633,
"tags": "sensors, accelerometer, lidar"
} |
Northern Europe tree identification | Question: I've seen it in Nordic countries (Sweden, Latvia). It's a tree that reaches maybe 5 or more meters high and its flowers have a beautiful scent.
Answer: It's the common Prunus padus, which flowers with its strong scent now in many parts of Sweden (including my yard). At least if you are referring to the common species found in the wild (there are planted domesticated varieties as well). It is common across many parts of Europe though, and not only the Nordic countries, and can be found in Asia as well. | {
"domain": "biology.stackexchange",
"id": 8682,
"tags": "species-identification, botany"
} |
Can acidified or neutral KMnO4 oxidise toluene to benzoic acid? | Question: Why is alkaline $\ce{KMnO4}$ used in the oxidation of toluene to benzoic acid? Can acidified or neutral $\ce{KMnO4}$ be used in this conversion?
Answer: Here are the three equations describing the reduction of manganese (and concurrent oxidation of whatever substrate may be present) under basic, neutral and acidic conditions respectively.
$$\ce{Mn^{+7}O4- +e- ->~ Mn^{+6}O4^2-~~~~[basic]}$$
$$\ce{2H2O + Mn^{+7}O4- + 3e- ->~ Mn^{+4}O2 + 4OH-~~~~[neutral]}$$
$$\ce{8H+ + Mn^{+7}O4^{-} + 5e- ->~ Mn^{+2} + 4H2O~~~~[acidic]}$$
Acidic conditions are the most economical as 5 electrons are transferred per mole of manganese, but oxidation can be achieved under all 3 conditions. A problem with neutral conditions is the precipitation of insoluble $\ce{MnO2}$ and the need for its subsequent separation. A problem with all 3 conditions is that oxidation will only occur if the material to be oxidized is somewhat soluble in the reaction medium.
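The electron-economy point can be made numeric with a back-of-the-envelope sketch (stoichiometry only, from the half-reactions above):

```python
# Electrons accepted per mole of Mn in each half-reaction above, and hence
# the moles of KMnO4 needed per mole of electrons taken from the substrate.
electrons_per_mn = {"basic": 1, "neutral": 3, "acidic": 5}
kmno4_per_electron_mole = {cond: 1 / n for cond, n in electrons_per_mn.items()}
# Acidic conditions consume 5x less permanganate than basic ones.
```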
The milder basic conditions are generally preferred for the oxidation of aromatic alkyl side chains because they allow a water-miscible organic co-solvent to be employed to facilitate solubility of the organic substrate.
"domain": "chemistry.stackexchange",
"id": 2056,
"tags": "organic-chemistry, acid-base, organic-oxidation"
} |
Parsing RFC 4180 CSV with GOTOs | Question: One of my data-import tools needs to support CSV files. I thought that parsing CSV is such a simple task that I didn't want to use any external libraries for it. So here is one more RFC 4180 CSV parser. This one, however, works with two gotos.
I don't preach "never use goto" because I find there are situations in which it's useful. In this implementation it allows me to reduce code repetition by having only a single yield return and resetting all variables before parsing each line. Without the goto it would require one yield return inside the loop and another one at the end for the last line. Resetting flags would also need to be done twice - initialization before the loop and then again after each line.
The parser does not use any continues or else-ifs. I find them confusing, so I'd rather nest one more if/else than break the flow multiple times with a continue or seemingly equal conditions.
Everything it needs to be able to do is to parse a CSV into lines and columns. Reading files, verifying equal column count in each line or using headers for DataTables are jobs that other modules will take care of.
The interface might look unnecessary but I need it for dependency injection and mocking/testing.
public interface ICsvParser
{
IEnumerable<List<string>> Parse(string csv, char separator = ';');
}
public class CsvParser : ICsvParser
{
public IEnumerable<List<string>> Parse(string csv, char separator = ';')
{
if (csv == null) { throw new ArgumentNullException(nameof(csv)); }
if (string.IsNullOrEmpty(csv)) { yield break; }
var doubleQuote = '"';
var carriageReturn = '\r';
var lineFeed = '\n';
var eof = false;
var i = 0;
resume:
var isQuote = false;
var isEscapeSequence = false;
var isLineBreak = false;
var buffer = new StringBuilder();
var line = new List<string>();
for (; i < csv.Length; i++)
{
var current = csv[i];
if (isLineBreak)
{
if (current == lineFeed)
{
i++; // Skip the line-feed.
goto yield;
}
throw new ArgumentException($"Invalid character at {i}. Expected '\\n' but found '{current}'.");
}
else
{
if (isEscapeSequence)
{
if (current == doubleQuote)
{
buffer.Append(current);
}
else
{
isQuote = !isQuote;
if (current == separator)
{
line.Add(buffer.ToString());
buffer.Clear();
}
else
{
buffer.Append(current);
}
}
isEscapeSequence = false;
}
else
{
if (current == doubleQuote)
{
isEscapeSequence = true;
}
else
{
if (current == separator && !isQuote)
{
line.Add(buffer.ToString());
buffer.Clear();
}
else
{
if (current == carriageReturn)
{
isLineBreak = true;
}
else
{
buffer.Append(current);
}
}
}
}
}
}
eof = true;
yield:
// Current buffer is not added yet.
line.Add(buffer.ToString());
yield return line;
if (!eof)
{
goto resume;
}
}
}
Example
// test data
var csv = new[]
{
"foo;bar",
"baz;qux",
"\"foo;foo\";qux",
"foo\"\";\"\"bar",
"\"foo;\"\"foo\";qux",
";",
}
.Join("\r\n"); // my helper extension
var csvParser = new CsvParser();
csvParser.Parse(csv).Dump();
csvParser.Parse("").Dump();
Output:
foo
bar
baz
qux
foo;foo
qux
foo"
"bar
foo;"foo
qux
<empty>
<empty>
<empty> is just a placeholder I used here to indicate empty strings.
Answer: 1) I would save the constants (doubleQuote, etc.) as fields, so they don't take up extra space in an already fairly large method body.
2) I think your use of goto is fine. However you can also rewrite it without goto. At first glance it boils down to:
var buffer = new StringBuilder();
var line = new List<string>();
foreach(var ch in csv)
{
var newLine = IsNewLine(ch);
if (!newLine && TryAppend(ch, buffer, ...)) continue;
line.Add(buffer.ToString());
buffer.Clear();
if (newLine)
{
yield return line;
line = new List<string>();
}
}
if (line.Any()) yield return line;
which also looks fine and is a bit easier to read if you ask me.
3) Alternatively you can go further with gotos and use them as a full-fledged state machine. It will allow you to easily remove common sections such as
line.Add(buffer.ToString());
buffer.Clear();
and deep if-else nests will probably go away as well. | {
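To illustrate the explicit state-machine idea language-agnostically, here is a minimal sketch in Python (illustrative only - not the reviewed C#, and it covers only the RFC 4180 basics: quoted fields, escaped quotes, and CRLF record breaks; quotes inside unquoted fields are kept literally):

```python
def parse_csv(text, sep=';'):
    """Parse CSV text into rows of string fields with a 3-state machine."""
    rows, row, field = [], [], []
    state = 'FIELD'  # FIELD | QUOTED | QUOTE_SEEN
    i = 0
    while i < len(text):
        c = text[i]
        if state == 'FIELD':
            if c == '"' and not field:
                state = 'QUOTED'            # opening quote of a quoted field
            elif c == sep:
                row.append(''.join(field)); field = []
            elif c == '\r' and i + 1 < len(text) and text[i + 1] == '\n':
                row.append(''.join(field)); field = []
                rows.append(row); row = []
                i += 1                      # skip the line feed
            else:
                field.append(c)
        elif state == 'QUOTED':
            if c == '"':
                state = 'QUOTE_SEEN'        # escaped quote or closing quote?
            else:
                field.append(c)
        else:  # QUOTE_SEEN
            if c == '"':
                field.append('"'); state = 'QUOTED'  # "" -> literal quote
            else:
                state = 'FIELD'
                continue                    # re-handle c in FIELD state
        i += 1
    row.append(''.join(field))              # flush the last field and row
    rows.append(row)
    return rows
```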
"domain": "codereview.stackexchange",
"id": 28934,
"tags": "c#, parsing, csv"
} |
How does the HCl-KCl Buffer work? | Question: I have just been studying the $\ce{HCl}$-$\ce{KCl}$ 'buffer', but there are still quite a few things I am uncertain about. I would appreciate any help in clearing up some questions I have.
What I understand (possibly incorrectly) from my research so far:
Water is the weak acid acting as a buffer in this system. The $\ce{HCl}$ and $\ce{KCl}$ in the system are there to increase the solution's ionic strength, which somehow improves water's buffer capacity.
In high ionic strength solutions, the 'standard' equilibrium equation has to be changed to include the activity coefficients. For water's equilibrium, as the ionic strength increases, the activity coefficients decrease and so the K value decreases.
If anything above is incorrect, please point out my misunderstanding. There are still a few questions I can't find answers to:
Why does the increased ionic strength of the solution improve water's buffer capacity?
Isn't a higher equilibrium constant required to make water a better buffer? Then isn't it bad that the ionic strength decreases K?
Could any other salt have been used to increase the ionic strength, or is there some specific reason $\ce{HCl}$-$\ce{KCl}$ is used?
I'm finding this 'buffer' system to be extremely confusing, so any help answering these questions would be greatly appreciated.
Answer: Introduction
Let's define buffer capacity quantitatively as
$$\beta=\cfrac{\mathrm{d}c_\mathrm{b}}{\mathrm{d}(\mathrm{pH})}=-\cfrac{\mathrm{d}c_\mathrm{a}}{\mathrm{d}(\mathrm{pH})}$$
that is, the relationship between concentration (in equivalents) of strong base ($c_\mathrm{b}$) or acid ($c_\mathrm{a}$) added to a solution and its change in $\mathrm{pH}$. From now on I'll assume we're adding a monoprotic base or acid, so "equivalents" and "moles" (and their concentrations) can be used interchangeably.
Pure water
Pure water has a very low buffering capacity - its $\mathrm{pH}$ is very sensitive to the addition of acids or bases. For instance, imagine we add a concentration $c_\mathrm{b}$ of base to pure water:
$\ce{H2O + B- <=> OH- + BH}$
The self-ionisation equilibrium of water will be displaced:
$\ce{[OH-]} \approx c_\mathrm{b}$
and therefore
$ c_b \approx \ce{[OH-]} = \cfrac{K_\mathrm{w}}{\ce{[H3O+]}} = 10^{\mathrm{pH}-\mathrm{p}K_\mathrm{w}}$
so, taking the derivative,
$\beta_{\ce{OH-}} = \cfrac{\mathrm{d}c_\mathrm{b}}{\mathrm{d}(\mathrm{pH})} = \cfrac{\mathrm{d}}{\mathrm{d}(\mathrm{pH})} 10^{\mathrm{pH}-\mathrm{p}K_\mathrm{w}} = 10^{\mathrm{pH}-\mathrm{p}K_\mathrm{w}} \ln{10}$
it's easy to show that, similarly, for the addition of acids,
$\beta_{\ce{H+}} = -\cfrac{\mathrm{d}c_\mathrm{a}}{\mathrm{d}(\mathrm{pH})} = 10^{-\mathrm{pH}} \ln{10}$
and, combining the buffer effect of both semi-systems, we get the total buffer capacity of water:
$\beta_{\ce{H2O}} = \left( 10^{-\mathrm{pH}} + 10^{\mathrm{pH}-\mathrm{p}K_\mathrm{w}} \right) \ln{10}$
Weak acid/base pair
1:1 solutions of weak acids/bases are a typical buffer system around $\mathrm{pH}=\mathrm{p}K_\mathrm{a}$ for the buffer. To understand why, let's imagine we add a weak acid/base pair, $\ce{HA/KA}$, with total concentration $C_\mathrm{A}$, to which we later add a certain concentration of base, $c_\mathrm{b}$. This weak acid/base pair will be described by the equilibrium
$K_A = \cfrac{\ce{[H3O+] [A-]}}{\ce{[HA]}}$
which, taking into account that $C_\mathrm{A} = \ce{[HA]} + \ce{[A-]}$, implies
$\ce{[A-]} = C_\mathrm{A} \cfrac{K_\mathrm{A}}{\ce{[H3O+]} + K_\mathrm{A}}$
From charge balance:
$\ce{[H3O+]} + \ce{[K+]} = \ce{[OH-]} + \ce{[A-]}$
Taking into account that $\ce{[K+]}$ is equal to the formal concentration of $\ce{KA}$ we added, so $c_\mathrm{b} = \ce{[K+]}$, and the previous expression for $\ce{[A-]}$:
$c_b = \cfrac{K_\mathrm{w}}{\ce{[H3O+]}} - \ce{[H3O+]} + C_\mathrm{A} \cfrac{K_A}{\ce{[H3O+]} + K_\mathrm{A}} = 10^{\mathrm{pH}-\mathrm{p}K_\mathrm{w}} - 10^{-\mathrm{pH}} + C_\mathrm{A} \cfrac{10^{-pK_A}}{10^{-\mathrm{pH}} + 10^{-\mathrm{p}K_\mathrm{A}}}$
So the buffer capacity of the solution will be:
$\beta = \cfrac{\mathrm{d}c_\mathrm{b}}{\mathrm{d}(\mathrm{pH})}= \left(10^{\mathrm{pH}-\mathrm{p}K_w} + 10^{-\mathrm{pH}} + C_A \cfrac{10^{-\mathrm{pH}-\mathrm{p}K_\mathrm{A}}}{\left( 10^{-\mathrm{pH}} + 10^{-\mathrm{p}K_\mathrm{A}} \right)^2} \right) \ln{10}$
Note that the first two terms are the buffer capacity of water, so the contribution of the acid/base pair is
$\beta_{\ce{HA/A-}} = C_\mathrm{A} \cfrac{10^{-\mathrm{pH}-\mathrm{p}K_\mathrm{A}}}{\left( 10^{-\mathrm{pH}} + 10^{-\mathrm{p}K_\mathrm{A}} \right)^2} \ln{10}$
Additive buffer capacity contributions can be calculated this way.
For instance, for a typical acetate buffer ($\ce{HAc} \ 0.2 \mathrm{M} $, $ \ce{NaAc} \ 0.2 \mathrm{M} $; $\mathrm{p}K_\mathrm{a} = 4.76$, $C_\mathrm{A} = 0.4\mathrm{M}$), the buffering capacity looks like this:
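The expressions above are easy to evaluate numerically. A quick sketch (Python; activity coefficients taken as 1):

```python
import math

LN10, PKW = math.log(10), 14.0

def beta(pH, pKa, Ca):
    """Total buffer capacity: the two water terms plus the pair term."""
    water = (10 ** (pH - PKW) + 10 ** (-pH)) * LN10
    pair = Ca * 10 ** (-pH - pKa) / (10 ** (-pH) + 10 ** (-pKa)) ** 2 * LN10
    return water + pair

# For the acetate buffer above, the pair term peaks at pH = pKa,
# where it equals Ca * ln(10) / 4 ~ 0.23 for Ca = 0.4 M.
b_peak = beta(4.76, 4.76, 0.4)
```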
Ionic strength
Since buffers are moderately concentrated electrolyte solutions, the assumption that activity coefficients in them are approx. 1 shouldn't be automatic. How does this affect the calculations of buffer capacity we've performed until now?
Let's define an effective equilibrium pseudo-constant, $K'$, operating on concentrations, that is mathematically equivalent to the actual equilibrium constant, $K$, operating on activities:
$K_\mathrm{A} = \cfrac{a_{\ce{H3O+}} a_{\ce{A-}}}{a_{\ce{HA}}} \iff K'_\mathrm{A} = K_\mathrm{A} \cfrac{\gamma_{\ce{H3O+}} \gamma_{\ce{A-}}}{\gamma_{\ce{HA}}} = \cfrac{\ce{[H3O+]} \ce{[A-]}}{\ce{[HA]}}$
So reflecting the impact of ionic strength on buffer capacity becomes a question of evaluating $K'_\mathrm{A}$ and using it to replace $K_\mathrm{A}$ in our previous expressions. For instance, we can use the Debye-Hückel relationship:
$\mathrm{p}K'_\mathrm{A} = \mathrm{p}K_\mathrm{A} + A \left( 2 z_{\ce{HA}} - 1 \right) \left( \cfrac{\sqrt{I}}{1 + \sqrt{I}} - 0.1 I\right)$
where $A$ is a constant ($\approx 0.51$ at room temperature), $z_{\ce{HA}}$ is the charge of the conjugate acid and $I=\frac{1}{2}\sum c_i z^2_i$ is the ionic strength of the solution. Note that $\mathrm{p}K'_\mathrm{A}$ has to be solved recursively, however, as it depends on $I$, which depends on the concentration of the different ions, which depend on $\mathrm{p}K'_\mathrm{A}$.
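As a sketch, a single evaluation step of this correction (Python; note that the self-consistent dependence on $I$ is deliberately not iterated here):

```python
import math

def pK_prime(pK, z_HA, I, A=0.51):
    """One step of the extended Debye-Hueckel correction given above."""
    s = math.sqrt(I)
    return pK + A * (2 * z_HA - 1) * (s / (1 + s) - 0.1 * I)

# For a neutral conjugate acid (z_HA = 0) the prefactor is -A, so
# moderate ionic strength lowers pK' relative to the thermodynamic pK.
```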
A similar correction can be applied to $K_\mathrm{w}$ to obtain $K'_\mathrm{w}$.
Applying this ionic strength correction to our acetic/acetate $\pu{0.4M}$ buffer above, we can see the impact of ionic strength:
The $\ce{HCl/KCl}$ buffer
So what about the $\ce{HCl/KCl}$ buffer? Unlike the example of acetic/acetate, $\ce{HCl}$ is a strong electrolyte - so it dissociates completely at all values of $\mathrm{pH}$. So we can treat it in two ways: as simply a water buffer with a different starting $\mathrm{pH}$, or as an acid/base conjugate pair buffer with a $\mathrm{p}K_\mathrm{A} < 0$. Both are equivalent, as the acid/base contribution is negligible compared to the water contribution due to the $\mathrm{p}K_\mathrm{A}$ value.
For instance, for a typical $\ce{HCl}\ 0.2\mathrm{M}$, $\ce{KCl}\ 0.2\mathrm{M}$ buffer ($\mathrm{p}K_\mathrm{A} = -6.3$, $\mathrm{pH} \approx 0.7$, $I \approx 0.8$), the contributions to buffer capacity are as follows:
$\beta_{\ce{H2O}} = \left( 10^{\mathrm{pH}-\mathrm{p}K'_\mathrm{w}} + 10^{-\mathrm{pH}} \right) \ln{10} \approx 10^{-\mathrm{pH}} \ln{10} = 0.4594$
$\beta_{\ce{HCl/Cl-}} = C_A \cfrac{10^{-\mathrm{pH}-\mathrm{p}K'_\mathrm{A}}}{\left( 10^{-\mathrm{pH}} + 10^{-\mathrm{p}K'_\mathrm{A}} \right)^2} \ln{10} \approx C_\mathrm{A} 10^{\mathrm{p}K'_\mathrm{A}-\mathrm{pH}} \ln{10} = 5.865 · 10^{-8}$
So, for all intents and purposes we have a buffer that behaves just like water. It is, however, a buffer that behaves just like water in a region of the $\mathrm{pH}$ scale which has a relatively large buffer capacity by virtue of being close to $\mathrm{pH}=0$. For instance, this is what buffer capacity looks like around our typical $\ce{HCl}\ 0.2\mathrm{M}$, $\ce{KCl}\ 0.2\mathrm{M}$ buffer (the red dashed line marks the initial $\mathrm{pH}$):
Note that buffer capacity is highly asymmetrical around this point - adding acid increases the buffer capacity of this system, requiring increasingly more acid to decrease $\mathrm{pH}$, while adding base decreases the buffer capacity, making the system's $\mathrm{pH}$ more sensitive to further base additions. This can be easily visualised if we realise that the area under the $\beta$ curve represents the concentration that needs to be added to move in the $\mathrm{pH}$ scale.
Note also that, unlike in weak acid/base pairs, we cannot simply increase concentration to increase buffer capacity while retaining the same $\mathrm{pH}$ - as the concentration-dependent term, corresponding to the $\ce{HCl/KCl}$ base, is negligible. Ionic strength is irrelevant for $\beta$ as well, as $K'_\mathrm{w}$ and $K'_\mathrm{A}$ are not involved - $\beta_{\ce{H3O+}}$ is the only relevant contribution to $\beta$ in this very acidic region, and, by definition, it is not affected by activity concerns (unlike $\beta_{\ce{OH-}}$ or $\beta_{\ce{HCl/Cl-}}$).
Does that mean that $\ce{HCl/KCl}$ buffers with different concentrations all have the same buffer capacity? Not at all: but their different buffer capacity is mediated by $\mathrm{pH}$. In other words, different concentrations produce different $\mathrm{pH}$ which, in turn, produces a different $\beta$ - which is unresponsive to changes in $I$ that do not change $\mathrm{pH}$.
Two final plots to illustrate this. Imagine we prepare solutions of $\ce{HCl/KCl}$ (1:1) at different total concentrations $C_{\ce{Cl-}}=c_{\ce{HCl}}+c_{\ce{KCl}}$. If we compare the two contributions to $\beta$, it's clear that $\ce{HCl/KCl}$ increases buffer capacity almost exclusively through $\mathrm{pH}$; $\beta_{\ce{H3O+}}$ is proportional to $\ce{[H3O+]}$ which increases linearly with $c_{\ce{HCl}}$ as $\ce{HCl}$ is a strong electrolyte:
Conversely, what if we control for $\mathrm{pH}$ and change only $I$? Let's imagine a series of solutions containing a fixed concentration of $\ce{HCl}$ and varying amounts of a non-reacting, strong electrolyte without common ions - such as $\ce{NaBr}$ - so we can change the ionic strength of the solution without affecting either the $\mathrm{pH}$ or the $\ce{HCl/Cl-}$ equilibrium. The impact of $I$ on $\beta$ would look like this (note that the minimum $I$ corresponds to a solution with $\ce{HCl}$ and no $\ce{NaBr}$):
This is because although the $\beta_{\ce{HCl/Cl-}}$ and $\beta_{\ce{OH-}}$ terms do depend on $I$, these terms are many, many orders of magnitude lower than the $\beta_{\ce{H3O+}}$ term, which doesn't.
So, after this discussion, we can now directly respond to your questions.
TL;DR
1) Why does the increased ionic strength of the solution improve water's buffer capacity?
It doesn't. The buffer capacity of solutions of strong acids are dominated by the $\beta_{\ce{H3O+}}$ term, which (by definition of $\mathrm{pH}$) is not affected by ionic strength.
There are, however, other reasons to desire an ionic strength in that range - in biological systems, many molecules, particularly proteins, have a privileged ionic strength stability range - above or below which they tend to denature or precipitate, and outside of which their biological activity can be inhibited or poisoned. Although different systems will require different ionic strengths, buffers are typically prepared with ionic strengths in the range $0.1-1\mathrm{M}$ for that reason.
2) Isn't a higher equilibrium constant required to make water a better buffer? Then isn't it bad that the ionic strength decreases $K$?
The $\beta_{\ce{H3O+}}$ term, by construction, does not depend on $K'_\mathrm{w}$, so it is unaffected by ionic strength. The $\beta_{\ce{OH-}}$ term does, and as you mention, it is decreased at moderate ionic strength - although it is increased at higher $I$. In general, the impact of ionic strength on buffers is as follows: increasing ionic strength begins by lowering $\beta$; then, as $I$ keeps increasing, $\beta$ rises again.
3) Could any other salt have been used to increase the ionic strength, or is there some specific reason $\ce{HCl-KCl}$ is used?
Yes, any other salt could have been used to increase the ionic strength. Adding $\ce{KCl}$ affects the acid-base pair contribution to $\beta$, but as we've seen, that contribution is completely negligible - as will be the case with strong acids, which operate mainly through the $\ce{H3O+}$ contribution. | {
"domain": "chemistry.stackexchange",
"id": 9931,
"tags": "acid-base, reaction-mechanism, aqueous-solution, ions"
} |
Is $(NP^{NP})^{NP} = NP^{(NP^{NP})}$? | Question: In the "last paragraph" of the "first page" of the following paper:
Vikraman Arvind, Johannes Köbler, Uwe Schöning, Rainer Schuler, "If NP Has Polynomial-Size Circuits, then MA = AM," Theoretical Computer Science, 1995.
I encountered a somewhat counter-intuitive claim:
$(\Sigma^P_2 \cap \Pi^P_2)^{NP} = \Sigma^P_3 \cap \Pi^P_3$
I think the identity above is deduced from the following:
$(\Sigma^P_2)^{NP} = \Sigma^P_3$
and
$(\Pi^P_2)^{NP} = \Pi^P_3$
The former is more simply written as $(NP^{NP})^{NP} = NP^{NP^{NP}}$, which is quite odd!
Edit: In light of Kristoffer's comment below, I'd like to add the following inspiring remark from Goldreich's complexity book (pp. 118-119):
It should be clear that the class $C_1^{C_2}$ can be defined for two complexity classes $C_1$ and $C_2$, provided that $C_1$ is associated with a class of standard machines that generalizes naturally to a class of oracle machines. Actually, the class $C_1^{C_2}$ is not defined based on the class $C_1$ but rather by analogy to it. Specifically, suppose that $C_1$ is the class of sets that are recognizable (or rather accepted) by machines of a certain type (e.g., deterministic or non-deterministic) with certain resource bounds (e.g., time and/or space bounds). Then, we consider analogous oracle machines (i.e., of the same type and with the same resource bounds), and say that $S \in C_1^{C_2}$ if there exists an adequate oracle machine $M_1$ (i.e., of this type and resource bounds) and a set $S_2 \in C_2$ such that $M_1^{S_2}$ accepts the set $S$.
Answer: ${\Sigma_2^P}^{NP}$ is the set of languages decided by an alternating Turing machine that starts in an existential state and then switches to a universal state, with an oracle in NP. Both the universal and the existential parts can query the NP oracle.
Hence, if in this case you decide to write this as $(NP^{NP})^{A}$, then the way you should think of it is as $NP^{NP^A\cup A}$ (by $\cup$ I mean an oracle for either $A$ or an $NP^A$ language).
Hence ${\Sigma_2^P}^{NP}$ is equal to $(NP^{(NP^{NP})})^{NP}$, which is certainly equal to $NP^{NP^{NP}}$, since every query you could make to the $NP$ oracle can also be made to the $NP^{NP}$ oracle.
"domain": "cstheory.stackexchange",
"id": 91,
"tags": "cc.complexity-theory, complexity-classes"
} |
Can there be a single ray of light? | Question: My physics teacher told me that a beam of light is a collection of rays of light and there cannot be a single absolute ray of light. Is this true?
Answer: This is how an optical ray is defined:
In optics a ray is an idealized geometrical model of light, obtained by choosing a curve that is perpendicular to the wavefronts of the actual light, and that points in the direction of energy flow
The mathematical function that describes the classical propagation of light depends on the wave equations of Maxwell.
Here is what a wavefront starting from a point source looks like.
So the ray is the line perpendicular to the front, which gives the direction of the energy flow from this single point.
Light is built up from many wavefronts next to each other so there are many optical rays, as many as the wavefronts.
To answer the title
Can there be a single ray of light?
To have a single ray of light you would have to have a single point source for the light. In nature, the source of a classical beam will not be a true point, so there can be no single ray. There will be many atoms radiating photons that build up the wavefront, so there cannot really be a single ray of light, because there will be many point sources building it up. (Photons and atoms are another level of complexity in how classical light beams are made, and need a background in quantum mechanics.)
"domain": "physics.stackexchange",
"id": 86916,
"tags": "optics, electromagnetic-radiation, visible-light, photons, geometric-optics"
} |
Virtual Memory vs Cache for block identification | Question: Both are based on the principle of locality. Then why does virtual memory use table lookup while cache memory uses associative memory for block identification?
Answer: I think it's because when using virtual memory, you have access to disk and thus have vast amounts of memory to store extra data structures that can help you identify each page. In cache, you don't have that much memory, so the way to do things is to add identification (tag and set) bits to each cache line so that you don't need another data structure to identify each cache block, because that would be expensive memory-wise. You just identify the set you're looking for, then iterate over each cache line in that set, identify the cache line you're looking for, and then extract the desired bytes. Also, iterating over all cache lines in a set is not that expensive because there aren't a lot of them, but applying this technique to pages, which can number in the thousands, is really inefficient.
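As a sketch of those per-line identification bits (Python; the cache geometry numbers here are hypothetical, chosen only for illustration):

```python
# How an address splits into tag / set-index / offset bits for a
# set-associative cache -- the identification bits described above.
# Geometry (32-bit address, 32 KiB, 64-byte lines, 4-way) is an assumption.
import math

def address_split(addr_bits=32, cache_bytes=32 * 1024, line_bytes=64, ways=4):
    n_sets = cache_bytes // (line_bytes * ways)      # here: 128 sets
    offset_bits = int(math.log2(line_bytes))         # selects byte in line
    set_bits = int(math.log2(n_sets))                # selects the set
    tag_bits = addr_bits - set_bits - offset_bits    # stored per cache line
    return tag_bits, set_bits, offset_bits
```

Only the tag has to be stored alongside each line; a page table, by contrast, is a separate in-memory structure looked up per page.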
"domain": "cs.stackexchange",
"id": 16222,
"tags": "cpu-cache, virtual-memory"
} |
How does using a Bell state lead to a $\cos^2 \left(\frac{1}{8}\pi \right)$ probability of winning in the CHSH game? | Question: I have trouble understanding how the CHSH (which stands for John Clauser, Michael Horne, Abner Shimony, and Richard Holt) game, as described in this paper (and shortly explained in this post), works.
I understand that $75\%$ is the maximum probability of winning in a classical system.
The following Bell state
$$\frac{\left| 00 \right> + \left| 11 \right>}{\sqrt{2}}$$
can be interpreted as having $50\%$ chance that both qubits are $\left| 0\right>$ and $50\%$ chance that they are both $\left| 1 \right>$.
This state can be prepared using the following gate
However, it is unclear to me how the Bell state (above) leads to a $\cos^2\left(\frac{1}{8}\pi\right)\approx0.85$ probability of winning in the CHSH game.
I have made a visual representation of the Bloch sphere from one side in Desmos, and I see how a certain angle corresponds to a certain probability. This might be an incorrect interpretation, but this is how I picture a qubit.
So, how does the CHSH game conclude that the probability to 'win' in a quantum system is $\cos^2(\frac{1}{8}\pi)$, or an angle of $45°$ in my Desmos example?
Answer: $\newcommand{\ket}[1]{\left|#1\right>}$
Indeed, the story behind $\cos^2\frac{\pi}{8}$ is not often told, except in quantum mechanics lessons, and even there it is often left as an "exercise left to the reader". This number is not obvious at first sight¹, but is the result of an optimization and a straightforward application of quantum mechanics computation rules.
What is apparently lacking in your description to find it is the description of the measurements.
To keep things (relatively) simple, I will assume that we deal with single photons entangled in polarization, and I will only consider linear polarization. Let us denote by $α$ (resp. $β$) the angle defining Alice's (resp. Bob's) measurement. Measuring the polarization of a single photon in a direction $α$ is a binary measurement, giving $0$ if the photon is oriented along $α$, and $1$ if it is oriented along $α+\frac{π}{2}$. Measuring along this direction is equivalent to first rotating the photon by an angle $-α$, and then measuring it in the vertical-horizontal (a.k.a. $α=0$) basis. This rotation is a linear transformation, transforming Alice's state as follows:
$$\ket{0}:↦\cosα\ket0 - \sinα \ket1$$
$$\ket{1}:↦\sinα\ket0 + \cosα \ket1$$
Bob’s states transform in a similar way, leading, for the global state, to
$$
\frac{\ket{00}+\ket{11}}{\sqrt2}:↦\\(\cosα\cosβ+\sinα\sinβ)\frac{\ket{00}+\ket{11}}{\sqrt2}+(-\cosα\sinβ+\sinα\cosβ)\frac{\ket{01}-\ket{10}}{\sqrt2}$$
To find the optimal measurement for the CHSH game, you need then to optimize over the possible sets of angles $(α,α',β,β')$. Of course, being clever and knowing trigonometric formulæ helps in this optimization. (Knowing that the answer is $(0,\frac{π}{4},\frac{π}{8},\frac{3π}{8})$ helps too !).
Following the habits of the field, I leave the complete computation as an exercise to the reader ;-).
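For readers who would rather check numerically than do the trigonometry, here is a small sketch (Python). The measurement angles used are one conventional optimal set, written with a sign convention that differs from the $(0,\frac{π}{4},\frac{π}{8},\frac{3π}{8})$ quoted above:

```python
import math

def p_same(alpha, beta):
    # For (|00> + |11>)/sqrt(2), measuring linear polarization at angles
    # (alpha, beta) yields equal outcomes with probability cos^2(alpha - beta),
    # per the rotated state written above.
    return math.cos(alpha - beta) ** 2

A = {0: 0.0, 1: math.pi / 4}           # Alice's setting for question bit x
B = {0: math.pi / 8, 1: -math.pi / 8}  # Bob's setting for question bit y

# CHSH win condition: the answers must be equal unless x = y = 1,
# in which case they must differ. Questions are uniform, hence the 0.25.
win = 0.0
for x in (0, 1):
    for y in (0, 1):
        p = p_same(A[x], B[y])
        win += 0.25 * ((1 - p) if x == y == 1 else p)
# win equals cos^2(pi/8), approximately 0.854
```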
By the way, this only shows that the CHSH game can be won with an 85% success rate using quantum entanglement. The fact that one cannot do better is known as Tsirelson's bound, and involves linear algebra.
Footnotes
¹: *If it were more obvious, many discussions on the nature of entanglement might have happened much before the 1960s.*
"domain": "physics.stackexchange",
"id": 36114,
"tags": "quantum-mechanics, quantum-information, quantum-entanglement, bells-inequality"
} |
Creating the start of a Caesar cipher | Question: My code takes an input, uses the input to take a number of letters from the start of the alphabet and put them at the end. I intend to use this to create a Caesar style cipher.
Is there a way to do this more elegantly?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ConsoleApplication
{
class Program
{
static void Main()
{
GettingInput gi = new GettingInput();
var input = gi.GetInput();
Console.WriteLine("Your input:\n{0}", input);
List<char> alphabet = new List<char>();
for (char c = 'a'; c <= 'z'; c++)
{
alphabet.Add(c);
}
for (int i = 0; i < input; i++)
{
alphabet.Add(alphabet[i]);
}
for (int i = input ; i > 0; i--)
{
alphabet.Remove(alphabet[i-1]);
}
for (int i = 0; i < 26; i++)
{
Console.WriteLine(alphabet[i]);
}
Console.ReadLine();
}
}
}
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ConsoleApplication
{
public class GettingInput
{
public int GetInput()
{
Console.WriteLine("Please input an int:");
while (true)
{
try
{
int input = Convert.ToInt32(Console.ReadLine());
return input;
}
catch (Exception ex)
{
Console.WriteLine(ex);
Console.WriteLine("Please input a valid int:");
}
}
}
}
}
All feedback welcome!
Answer:
Instead of creating the whole list and modifying it subsequently, you could calculate each value directly.
The code could look like this:
static void Main()
{
GettingInput gi = new GettingInput();
var input = gi.GetInput();
Console.WriteLine("Your input:\n{0}", input);
var first = (int)'a';
var last = (int)'z';
var length = last - first + 1; // 26 letters, inclusive range
var query = Enumerable
.Range(0, length)
.Select(i => (char)(first + ((input + i) % length)));
foreach (var number in query)
Console.WriteLine(number);
Console.ReadLine();
}
You could use int.TryParse(Console.ReadLine(), out input) instead of catching the exception | {
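For comparison, the same modular-arithmetic idea sketched in Python (illustrative only; the helper name is made up):

```python
def shifted_alphabet(shift):
    # Rotate the 26-letter lowercase alphabet left by `shift` positions,
    # mirroring the (first + (input + i) % length) expression above.
    return ''.join(chr(ord('a') + (shift + i) % 26) for i in range(26))
```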
"domain": "codereview.stackexchange",
"id": 21026,
"tags": "c#, beginner, caesar-cipher"
} |
Why do medium stars collapse to form supernovae while big stars collapse to form black holes? | Question: I understand that a star runs out of fuel slowly and gradually by fusing heavier and heavier particles (because of pressure by gravity) until the last heaviest particle is not able to fuse at the core of the star. This is where gravity takes over, and mass collapses (emitting gases out) to become a smaller dense star (with no energy left to give, just the gravity to absorb).
What I don't understand is why medium stars collapse to become a supernova which emits light while big stars collapse to form black holes which absorb all light.
Answer: There are 2 main kinds of supernova. The kind you are referring to is a type 2 supernova. There are also type 1a, type 1b, and type 1c supernovae. Type 1a supernovae are not caused by the implosion of huge stars but rather by a white dwarf accumulating ever greater amounts of mass from a companion star until the Chandrasekhar mass limit is reached. Then we get the type 1a supernova, which leaves little core remnant and is a more or less constant-sized explosion with a standard luminosity. This is the reason type 1a supernovae are used as standard candles to gauge the distance of objects in the universe.
As David Hammen's answer has pointed out, type 2 supernova endings occur only for stars that are more than 8x our sun's mass. Neutron stars and black holes are remnants left post-supernova, after the outer layers and the majority of the star have blown off. The size of the remnant is predictable to an extent pre-supernova and depends on the mass of the star. However, this is only an approximation and there are always exceptions, as the physics is complex and depends on too many variables. Type 1 supernovae leave little or no remnant.
Cause of Type 2 Supernova:
So what causes a dying star of said mass or greater to blow itself to smithereens in such a way that more energy is released in the short span of the explosion than by an entire galaxy over the same time period? This is in contrast to the much more docile, gradual, and protracted conclusion of stars whose mass is less than 8x the sun's: a planetary nebula after a red-giant collapse, which leaves a white dwarf as remnant.
Massive Star Pre-Supernova(type 2) and implosion:
The core of a massive star will accumulate iron and heavier elements, which are not exothermically fusible. Iron is the end of the exothermic fusion chain; any fusion to heavier nuclei will be endothermic. Endothermic fusion absorbs energy from the surrounding layer, causing it to cool down and condense further around the core. This added inward pressure is in addition to the inward pressure of gravity and causes a gradient where the core continues to collapse and get denser while exothermic fusion maintains expansive outward pressure above the gradient.
When the density of the inner core reaches a point where the pressure overcomes electron degeneracy pressure, another gradient is formed inside which atoms no longer exist, because the electrons join their nuclei. The protons in the nuclei combine with the electrons to become neutrons, and release neutrinos. There will be no more individual atoms inside this gradient. The sudden collapse inside the electron degeneracy pressure gradient of the core triggers the initial supernova implosion, as said core collapses in a few seconds to a tiny object of nuclear density (3×10^17 kg/m^3) no bigger than Manhattan.
Supernova Explosion:
The sudden in-rush of both the heavy-element core outside the electron degeneracy gradient and the exothermically fusible outer layers of the star causes immense heat and pressure within a few seconds, making much of the lighter-than-iron material fuse all at once.
Anti-Matter:
Also, the extremely dense and energetic neutrino flux will be rushing outward from the electron-degeneracy core collapse mentioned above. When the neutrinos encounter head-on the equally energetic atomic nuclei rushing inwards, the following occurs:
When a neutron absorbs a neutrino, antimatter is formed as neutrons become anti-protons and positrons.
When this Anti-Matter encounters normal matter, we have the most efficient and complete mass to energy conversion possible:
Matter-AntiMatter Annihilation (MAMA).
The combination of the simultaneous fusion and the MAMA causes all hell to break loose in the greatest release of energy in the shortest time known in the universe, second only to the gamma-ray burst.
Remnant Post-Supernova:
What is left behind after most of the star has blown off is a solid, extremely dense mass of neutrons around the size of Manhattan. Thus the term neutron star.
There is nothing denser than a neutron star. (See note below.) It is just pure mass with no empty space left. To get a perspective on what we are talking about here, a normal atom on earth is almost all empty space. If we take the simplest atom, hydrogen, and the nucleus (one proton) were scaled up to the size of a pea-sized marble in the middle of a football stadium, the orbital of the single electron would extend all the way to the outer edge of the stadium, the electron no bigger than a grain of sand; almost all empty space.
Now just imagine a sphere of solid mass of such marbles filling the stadium with no space left. That is how we can picture the density of a neutron star, and it is the density limit of the universe. A black hole has this same density, just more massive.
"A neutron star is so dense that one teaspoon (5 milliliters) of its material would have a mass over 5.5×1012 kg (that is 1100 tonnes per 1 nanolitre), about 900 times the mass of the Great Pyramid of Giza."
With the density limit constant, a neutron star can only get larger, occupy a greater volume, as it gains more mass. The mass and surface area determine the escape velocity, the velocity needed for an object to escape the host's gravity from the surface. The immense gravity causes the smallest neutron star to have an escape velocity of 100,000 km/s, or one third of the speed of light.
When the mass of the neutron star surpasses the level at which its escape velocity exceeds the speed of light, even light can no longer escape its gravitation, and it becomes a black hole.
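As a rough check of these numbers, the Newtonian escape-velocity formula v = sqrt(2GM/r) can be applied to an illustrative neutron star. The 1.4-solar-mass, 10 km figures below are assumptions for the sketch, and a proper treatment near the black-hole limit needs general relativity:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def escape_velocity(mass_kg, radius_m):
    """Newtonian escape velocity sqrt(2GM/r) from the surface."""
    return math.sqrt(2 * G * mass_kg / radius_m)

def schwarzschild_radius(mass_kg):
    """Radius at which the escape velocity formally reaches c."""
    return 2 * G * mass_kg / C**2

# Illustrative neutron star: 1.4 solar masses, 10 km radius
m = 1.4 * M_SUN
v = escape_velocity(m, 10e3)
print(f"escape velocity: {v / 1e3:.0f} km/s ({v / C:.2f} c)")
print(f"Schwarzschild radius: {schwarzschild_radius(m) / 1e3:.1f} km")
```

With these assumed numbers the surface escape velocity comes out as a sizeable fraction of c, and shrinking the same mass to its Schwarzschild radius (a few km) pushes the formal escape velocity to c, which is the black-hole condition described above.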
Note: There is theoretically a point where the gravity and density of a neutron star will reach a threshold at which even neutron degeneracy pressure is overcome. This was hypothesized by Tolman, Oppenheimer, and Volkoff, and the limit bears their names. This would result in a greater density called baryonic density; then, if we go denser, we can get quark density, and so on. These are all speculative theories, since the sub-nuclear interactions are not well understood and we can't look inside a black hole to see the evidence. | {
"domain": "astronomy.stackexchange",
"id": 2366,
"tags": "black-hole, supernova"
} |
Principal focus question | Question: In my notes I have written that the definition of principal focus is: the point on the principal axis where rays appear to diverge or rays actually converge.
However, I was looking at this diagram of a concave mirror.
The rays do not converge at the principal focus.
So is the definition wrong?
As you can see, the rays do not converge at point F (the principal focus), which contradicts the definition.
The image is not formed at the principal focus, which, from the definition, is what should happen.
Why is this?
Update: From the answers given I understand this now; however, I don't know why many websites do not mention that it holds for parallel rays only.
Answer:
In my notes I have written that the definition of principal focus is: the point on the principal axis where rays appear to diverge or rays actually converge.
should read
In my notes I have written that the definition of principal focus is: the point on the principal axis where rays initially parallel to the principal axis appear to diverge or rays actually converge.
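The role of the "initially parallel" clause can also be seen from the mirror equation 1/v + 1/u = 1/f (real-is-positive convention for a concave mirror): the image distance v only tends to the focal length f as the object distance u grows, i.e. as the incoming rays become parallel. A small Python sketch, where the 10 cm focal length is an arbitrary assumption:

```python
def image_distance(u, f):
    """Mirror equation 1/v + 1/u = 1/f (real-is-positive), solved for v."""
    return 1.0 / (1.0 / f - 1.0 / u)

f = 10.0  # assumed focal length in cm
for u in (30.0, 100.0, 1e6):
    print(f"object at {u:g} cm -> image at {image_distance(u, f):.4f} cm")
# Only as u grows (incoming rays become parallel) does the image approach F;
# for a nearby object the image forms beyond the focal point, as in the diagram.
```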
In your diagram ray 1 passes through the focal point.
Also ray 2 does the same in reverse. | {
"domain": "physics.stackexchange",
"id": 51850,
"tags": "optics, visible-light, lenses"
} |
RVIZ: URDF Model disappears when loaded | Question:
I'm trying to load a simple URDF model into rviz. However, when rviz loads, the model is there for a split second, then disappears. All the statuses read OK, and rviz shows no problems. I can make the model appear, but only if I change the fixed frame to something non-existent, which then produces many issues. The terminal reports no issues either. I've already tried a common solution of exporting LC_NUMERIC to en_US.UTF-8, but this doesn't fix the issue.
I'm running ROS Melodic on Ubuntu 18.04.5 LTS through a Virtualbox VM on my Lenovo ideapad 330S (Windows 10).
RVIZ Config File:
Panels:
- Class: rviz/Displays
Help Height: 78
Name: Displays
Property Tree Widget:
Expanded:
- /Status1
- /RobotModel1/Links1
- /TF1
Splitter Ratio: 0.5
Tree Height: 416
- Class: rviz/Selection
Name: Selection
- Class: rviz/Tool Properties
Expanded:
- /2D Pose Estimate1
- /2D Nav Goal1
- /Publish Point1
Name: Tool Properties
Splitter Ratio: 0.5886790156364441
- Class: rviz/Views
Expanded:
- /Current View1
Name: Views
Splitter Ratio: 0.5
- Class: rviz/Time
Experimental: false
Name: Time
SyncMode: 0
SyncSource: ""
Preferences:
PromptSaveOnExit: true
Toolbars:
toolButtonStyle: 2
Visualization Manager:
Class: ""
Displays:
- Alpha: 0.5
Cell Size: 1
Class: rviz/Grid
Color: 160; 160; 164
Enabled: true
Line Style:
Line Width: 0.029999999329447746
Value: Lines
Name: Grid
Normal Cell Count: 0
Offset:
X: 0
Y: 0
Z: 0
Plane: XY
Plane Cell Count: 10
Reference Frame: <Fixed_Frame>
Value: true
- Alpha: 1
Class: rviz/RobotModel
Collision Enabled: false
Enabled: true
Links:
All Links Enabled: true
Expand Joint Details: false
Expand Link Details: false
Expand Tree: false
Link Tree Style: Links in Alphabetic Order
back_left:
Alpha: 1
Show Axes: false
Show Trail: false
Value: true
back_right:
Alpha: 1
Show Axes: false
Show Trail: false
Value: true
chassis:
Alpha: 1
Show Axes: false
Show Trail: false
Value: true
front_left:
Alpha: 1
Show Axes: false
Show Trail: false
Value: true
front_right:
Alpha: 1
Show Axes: false
Show Trail: false
Value: true
Name: RobotModel
Robot Description: robot_description
TF Prefix: ""
Update Interval: 0
Value: true
Visual Enabled: true
- Class: rviz/TF
Enabled: true
Frame Timeout: 15
Frames:
All Enabled: true
back_left:
Value: true
back_right:
Value: true
base_link:
Value: true
chassis:
Value: true
front_left:
Value: true
front_right:
Value: true
Marker Scale: 1
Name: TF
Show Arrows: true
Show Axes: true
Show Names: true
Tree:
base_link:
chassis:
back_left:
{}
back_right:
{}
front_left:
{}
front_right:
{}
Update Interval: 0
Value: true
Enabled: true
Global Options:
Background Color: 48; 48; 48
Default Light: true
Fixed Frame: chassis
Frame Rate: 30
Name: root
Tools:
- Class: rviz/Interact
Hide Inactive Objects: true
- Class: rviz/MoveCamera
- Class: rviz/Select
- Class: rviz/FocusCamera
- Class: rviz/Measure
- Class: rviz/SetInitialPose
Theta std deviation: 0.2617993950843811
Topic: /initialpose
X std deviation: 0.5
Y std deviation: 0.5
- Class: rviz/SetGoal
Topic: /move_base_simple/goal
- Class: rviz/PublishPoint
Single click: true
Topic: /clicked_point
Value: true
Views:
Current:
Class: rviz/Orbit
Distance: 2.1567115783691406
Enable Stereo Rendering:
Stereo Eye Separation: 0.05999999865889549
Stereo Focal Distance: 1
Swap Stereo Eyes: false
Value: false
Focal Point:
X: 0
Y: 0
Z: 0
Focal Shape Fixed Size: true
Focal Shape Size: 0.05000000074505806
Invert Z Axis: false
Name: Current View
Near Clip Distance: 0.009999999776482582
Pitch: 0.17539839446544647
Target Frame: <Fixed Frame>
Value: Orbit (rviz)
Yaw: 0.8203979730606079
Saved: ~
Window Geometry:
Displays:
collapsed: false
Height: 713
Hide Left Dock: false
Hide Right Dock: false
QMainWindow State: 000000ff00000000fd0000000400000000000001560000022bfc0200000008fb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d0000022b000000c900fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261000000010000010f0000022bfc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073010000003d0000022b000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b200000000000000000000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000003bd0000003efc0100000002fb0000000800540069006d00650100000000000003bd000002eb00fffffffb0000000800540069006d006501000000000000045000000000000000000000014c0000022b00000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
Selection:
collapsed: false
Time:
collapsed: false
Tool Properties:
collapsed: false
Views:
collapsed: false
Width: 957
X: 67
Y: 27
Originally posted by JoelB on ROS Answers with karma: 31 on 2021-05-26
Post score: 0
Answer:
Stupid Problem. Had alpha on material set to zero, so everything was just invisible. That was fun...
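For anyone hitting the same symptom, a minimal Python sketch that flags links whose visual material has an rgba alpha of zero, i.e. the condition that made the model invisible here. The URDF fragment and link name below are invented for illustration, not taken from the original robot:

```python
import xml.etree.ElementTree as ET

# Hypothetical URDF fragment with the kind of mistake described above:
# the material's rgba alpha channel is 0, so the link renders invisible.
URDF = """
<robot name="demo">
  <link name="chassis">
    <visual>
      <geometry><box size="1 0.5 0.25"/></geometry>
      <material name="red"><color rgba="1 0 0 0"/></material>
    </visual>
  </link>
</robot>
"""

def invisible_links(urdf_string):
    """Return names of links whose visual material has alpha == 0."""
    root = ET.fromstring(urdf_string)
    bad = []
    for link in root.iter("link"):
        for color in link.iter("color"):
            alpha = float(color.get("rgba", "1 1 1 1").split()[-1])
            if alpha == 0.0:
                bad.append(link.get("name"))
    return bad

print(invisible_links(URDF))  # -> ['chassis']
```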
Originally posted by JoelB with karma: 31 on 2021-05-26
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 36464,
"tags": "rviz, urdf, ros-melodic"
} |
Boundary conditions in method of images for an infinite grounded plane | Question: We have a charge $q$ situated above a grounded infinite plane at some distance. This $q$ will induce some charge on the plane. But I've read that one of the boundary conditions for potential is that the potential that we need to find just above the plane, goes to zero at infinity. How is this true as we have an infinite plane of induced charge?
Answer: We do have an infinite plane of induced charge, but a finite charge is induced (unlike what you probably assumed). Also, the induced charge does not have a uniform charge density. | {
"domain": "physics.stackexchange",
"id": 75663,
"tags": "electrostatics, electric-fields, potential, boundary-conditions"
} |
Equilibrium constant of a reaction whose order of forward and reverse reaction is not same | Question: Suppose, a reaction is like,
$$\ce{aA + bB <=> cC + dD}$$
and that A, B, C, D all are gas.
Now it is known that:
$$K_c = \dfrac{[A]^a[B]^b}{[C]^c[D]^d}$$
Now if the forward reaction is second-order and the reverse reaction is somehow first-order, how will this change?
I read some articles online, but it was not actually clear how $K_c$ or $K_p$ (for gases) will change depending on the rate of reaction, or whether they depend on it at all, when all the reactants and products are in the same state, like gas or aqueous.
It will be helpful if one example is provided.
Answer: I'll try and answer your question, but a "full" answer would take a book.
Given the reaction:
$$\ce{aA + bB <=> cC + dD}$$
Then assuming an elementary reaction in the gaseous state the concentration equilibrium constant always has products over reactants and will be:
$$K_c = \frac{[C]^c[D]^d}{[A]^a[B]^b}$$
The above equation relies on two very specific assumptions:
The reaction is an elementary reaction which often isn't true and the actual coefficients must be determined experimentally since the reaction occurs in steps.
That concentrations can be used instead of activities.
Formally using activities explains part of your question immediately. For instance if A, B and C are gases and D is a liquid or solid, then the activity of D would be unity by definition.
Now breaking down the equilibrium expression into two rate equations we have:
$r_\mathrm{f}$ - Forward reaction rate
$r_\mathrm{r}$ - Reverse reaction rate
$\ce{a^*, b^*, c^*}$ and $\ce{d^*}$ = experimentally determined coefficients which may or may not be equal to the stoichiometric coefficients
$k_\mathrm{f}$ and $k_\mathrm{r}$ are constants for the forward and reverse reactions respectively.
Now using concentrations instead of activities:
\begin{align}
r_\mathrm{f} &= k_\mathrm{f} \ce{[A]^{a^*}[B]^{b^*}}\\
r_\mathrm{r} &= k_\mathrm{r} \ce{[C]^{c^*}[D]^{d^*}}
\end{align}
and at equilibrium by definition:
\begin{align}
r_\mathrm{f} &= r_\mathrm{r}\\
k_\mathrm{f} \ce{[A]^{a^*}[B]^{b^*}} &= k_\mathrm{r} \ce{[C]^{c^*}[D]^{d^*}}
\end{align}
so:
$$K_c = \frac{k_\mathrm{f}}{k_\mathrm{r}}= \frac{\ce{[C]^{c^*}[D]^{d^*}}}{\ce{[A]^{a^*}[B]^{b^*}}}$$
To determine the coefficients, the experimentalist can manipulate the experiment to simplify the kinetic expression. For example, let $\ce{[A] \gg [B]}$; then $\ce{[A]^{a^*}}$ is essentially a constant.
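As a numerical sanity check of $K_c = k_\mathrm{f}/k_\mathrm{r}$ for an elementary reaction, one can integrate the rate equations until the forward and reverse rates balance. A minimal Python sketch for A + B ⇌ C, where the rate constants and initial concentrations are arbitrary choices:

```python
def equilibrate(kf, kr, A=1.0, B=1.0, C=0.0, dt=1e-3, steps=200000):
    """Integrate A + B <=> C with r_f = kf*[A][B] and r_r = kr*[C]
    by simple forward-Euler stepping until the system is steady."""
    for _ in range(steps):
        net = kf * A * B - kr * C   # net forward rate
        A -= net * dt
        B -= net * dt
        C += net * dt
    return A, B, C

kf, kr = 2.0, 0.5                   # arbitrary rate constants
A, B, C = equilibrate(kf, kr)
print(f"Kc from concentrations: {C / (A * B):.4f}")
print(f"kf/kr:                  {kf / kr:.4f}")
```

The two printed values agree, which is exactly the statement $K_c = k_\mathrm{f}/k_\mathrm{r}$ derived above for an elementary step.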
Does this answer your question? | {
"domain": "chemistry.stackexchange",
"id": 15120,
"tags": "equilibrium, kinetics"
} |
Are the radial spokes in Saturn's rings reliably visible via ground-based telescopes | Question: In the mid 1970's, Franklin and O'Meara saw persistent "radial spoke-like features" in the rings of Saturn, which should not have existed due to the differential rotation of the rings. A publication on this observation was rejected by a journal, apparently on the grounds that the phenomenon was considered to be illusory (cf. Schiaparelli/Lowell's Martian canals?).
From "Seeing in the Dark: How
Amateur Astronomers Are Discovering the Wonder" By Timothy Ferris.
My question is, how reliably were these features visible from the ground-based telescopes of the day? Could sufficiently strong evidence for their existence have been obtained before images from space probes? Was the rejection of the paper reasonable in the historical context in which it was submitted?
Answer: Bryan (2007) gives a number of reasons why O'Meara's discovery was largely discounted:
As you stated, the behavior was entirely inconsistent with Keplerian predictions of motion.
While O'Meara was able to reproduce his findings, no other independent observers could.
The detections were done entirely visually, rather than with numerical measurements.
There had been previous observations (i.e. in the 19th century), but in different places in the rings (the A and C rings, not the B ring). These have never been confirmed, and it is still believed that these were actually illusions.
Reliable observations from ground-based telescopes, especially those of amateurs, have arisen within the past decade or so. Since 2007, 67 "candidate" observations have been made of the spokes, many during the 2009 Saturnian equinox (keep in mind that the spokes may be a seasonal phenomenon). One additional factor that made observations difficult was that the spokes seemed to have vanished, even from space probes, in the years after the Voyager observations, making it impossible for ground-based telescopes to see them. | {
"domain": "astronomy.stackexchange",
"id": 2223,
"tags": "amateur-observing, saturn, planetary-ring"
} |
Most stable conformational isomer of 3-methoxycyclohexan-1-ol | Question:
Among the following, the most stable isomer is?
I am aware of the fact that equatorial substituents are more stable than axial substituents but couldn't proceed to apply it here. However the answer key gives the answer as d) in which both substituents are in axial position.
Answer: I would say that the answer to the question depends strongly on the solvent used.
In case anybody still doesn’t see it: in the orientation (d), the compound can form an intramolecular hydrogen bond from the hydroxy group to the methoxy group. This is especially favourable in solvents that cannot participate in hydrogen bonding, e.g. dichloromethane.
Dissolving the same molecule in methanol, however, could change the entire story. The intramolecular hydrogen bond is favoured in the absence of other hydrogen bond donors or acceptors, but in a hydrogen bonding solvent there is absolutely no shortage of donors and acceptors and it can be assumed that all sites that can participate in hydrogen bonding in any way are saturated with hydrogen bonds. At this point, the steric interaction probably becomes more important and I would assume that the molecule preferentially assumes a diequatorial configuration. | {
"domain": "chemistry.stackexchange",
"id": 7119,
"tags": "organic-chemistry, alcohols, isomers, cyclohexane, ethers"
} |
Tilt on hover image | Question: I am asking for a review of this code to see if I am following common best practices or if there is a better way to accomplish the goal set for this code.
My purpose was to create an image that would CSS3 rotateY() to your mouse's position over it with as close to a semantic approach as I could get and scalable to any size.
My approach was a simple Div->Img tag structure, the div's size is indirectly the image's size due to the design limitation of listeners in JS being attached to the div (this is so the rotation's effect on size doesn't jitter the image around).
Anyway, if I could get some critique I would be very appreciative. Thank you for reading!
Live Version
HTML index.html
<!DOCTYPE html>
<html lang="en">
<head>
<title>Hover Card</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="css/style.css">
</head>
<body>
<div class="container">
<div class="hover-card">
<img src="images/me.jpg" alt="A picture of Torben Leif.">
</div>
</div>
<script src="scripts/main.js"></script>
</body>
</html>
css/style.css
.container {
margin: calc(50vh - 125px) auto;
width: 250px;
height: 250px;
}
.hover-card {
width: 230px;
height: 230px;
}
.hover-card img {
width: 100%;
height: 100%;
border-radius: 20px;
overflow: hidden;
object-fit: cover;
object-position: center 20%; /* Specific To Avatar Image */
transition: .2s linear;
transform-style: preserve-3d;
transform: perspective(600px) rotateY(var(--js-hover-rotate-val));
}
scripts/main.js
var card = document.querySelector('.hover-card');
var img = document.querySelector('.hover-card img');
var hoverComplete = true; // Used For Smooth Transitions While Rotating.
var degrees = 0; // External For Freeze Fix.
img.addEventListener('transitionend', (e) => {
if(e.propertyName == 'transform')
hoverComplete = true;
});
card.addEventListener('mousemove', (e) => {
if(hoverComplete) {
let newDegrees = Math.floor((1 - (e.pageX - card.getBoundingClientRect().left) / card.offsetWidth) * 90 - 45) * -1;
if(newDegrees !== degrees) { // Freeze Fix
degrees = newDegrees;
img.style.setProperty('--js-hover-rotate-val', degrees + 'deg');
hoverComplete = false;
}
}
});
card.addEventListener('mouseleave', (e) => {
img.style.setProperty('--js-hover-rotate-val', '0deg')
hoverComplete = false;
});
Answer: I find the calculation of tilt a bit complicated. If you know maximum tilt in degrees that shall happen on the left and right edges, you could declare it and a calculation function as
const MAX_INCLINE = 28; // <- just a guess
const shift = (w, x, limit = MAX_INCLINE) => (x / w * 2 - 1) * limit;
Next, Element#getBoundingClientRect, in my opinion, is unnecessary, because you receive all the necessary information in the instance of Event passed to your event listener function:
event.target.width is the width of target element that listens to mouse event, meaning the image in your case, and
event.offsetX is the position of the mouse on the X-axis relative to the element, meaning it's closer to 0 when the cursor is closer to the left edge, and closer to X when the cursor is closer to the right edge, where X is the width of the element (image).
Considering this, the event listener function would look as simple as
(event) => {
const { width } = event.target;
const { offsetX } = event;
image.style.setProperty('--js-hover-rotate-val', `${shift(width, offsetX)}deg`);
};
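If it helps to verify the mapping, the same shift formula transcribed into Python sends x = 0, w/2, and w to -limit, 0, and +limit respectively (MAX_INCLINE = 28 is the same guess as in the JavaScript above):

```python
MAX_INCLINE = 28  # degrees; same illustrative guess as above

def shift(w, x, limit=MAX_INCLINE):
    """Map cursor position x in [0, w] linearly onto [-limit, +limit]."""
    return (x / w * 2 - 1) * limit

for x in (0, 125, 250):        # left edge, centre, right edge of a 250px card
    print(x, shift(250, x))    # values: -28.0, 0.0, 28.0
```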
A codepen with the final code might also be useful. | {
"domain": "codereview.stackexchange",
"id": 29367,
"tags": "javascript, css, image, html5"
} |
Shape of biases in Transformer's Feedforward Network | Question: In transformer network (Vaswani et al., 2017), the feedforward networks have equation:
$$\mathrm{FNN}(x) = \max(0, xW_1 + b_1) W_2 + b_2$$
where $x \in \mathbb{R}^{n \times d_\mathrm{model}}$, $W_1 \in\mathbb{R}^{d_\mathrm{model} \times d_{ff}}$, $W_2 \in\mathbb{R}^{d_{ff} \times d_\mathrm{model}}$.
We know that the biases $b_1$ and $b_2$ are vectors.
But, for the equation to work the shape of $b_1$ and $b_2$ must agree, i.e., $b_1 \in\mathbb{R}^{n \times d_{ff}}$ and $b_2 \in\mathbb{R}^{n \times d_\mathrm{model}}$.
My question: is it true that
$b_1 = \begin{bmatrix} (b_1)_{1} & (b_1)_{2} & \dots & (b_1)_{d_{ff}}\\ (b_1)_{1} & (b_1)_{2} & \dots & (b_1)_{d_{ff}} \\ \vdots & \vdots & & \vdots \\ (b_1)_{1} & (b_1)_{2} & \dots & (b_1)_{d_{ff}} \end{bmatrix}$
and
$b_2 = \begin{bmatrix} (b_2)_{1} & (b_2)_{2} & \dots & (b_2)_{d_\mathrm{model}}\\ (b_2)_{1} & (b_2)_{2} & \dots & (b_2)_{d_\mathrm{model}} \\ \vdots & \vdots & & \vdots \\ (b_2)_{1} & (b_2)_{2} & \dots & (b_2)_{d_\mathrm{model}} \end{bmatrix}$ ?
Answer: The biases are vectors, and their shape should be $b_1\in \mathbb R^{d_{ff}}$ and $b_2\in \mathbb R^{d_\text{model}}$.
To verify that let's compute the shapes of the above formula (for simplicity we can exclude the $\max$):
$$\begin{align}
xW_1 &= (n\times d_\text{model})\times (d_\text{model}\times d_{ff}) = n\times d_{ff} \\
(xW_1)W_2 &= (n\times d_{ff})\times (d_{ff}\times d_\text{model}) = n\times d_\text{model}
\end{align}
$$
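A quick way to see this concretely is a NumPy sketch, where the biases are stored as vectors and broadcasting replicates them across the $n$ rows automatically (the sizes below are arbitrary toy values):

```python
import numpy as np

n, d_model, d_ff = 4, 8, 32          # arbitrary toy sizes
rng = np.random.default_rng(0)

x  = rng.normal(size=(n, d_model))
W1 = rng.normal(size=(d_model, d_ff))
b1 = rng.normal(size=(d_ff,))        # a vector, not an (n, d_ff) matrix
W2 = rng.normal(size=(d_ff, d_model))
b2 = rng.normal(size=(d_model,))

def ffn(x):
    # b1 and b2 broadcast along the first (token) dimension automatically.
    return np.maximum(0, x @ W1 + b1) @ W2 + b2

out = ffn(x)
print(out.shape)                     # -> (4, 8)

# Broadcasting is numerically equivalent to explicitly tiling the bias n times:
tiled = np.maximum(0, x @ W1 + np.tile(b1, (n, 1))) @ W2 + np.tile(b2, (n, 1))
assert np.allclose(out, tiled)
```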
In practice, depending on the underlying implementation, the biases can be either:
expanded to match the batch dimensions, i.e. the vectors can be replicated $n$ times (equal to the batch size) - as you wrote;
summed element by element via broadcasting;
or added directly to the weight matrix: this requires adding an extra dimension to both $x$ (the inputs) and $W$, like $xW = (n\times (d_\text{model}+1))\times ((d_\text{model}+1)\times d_{ff})$.
In the first two cases, the biases are stored as vectors. I'm not aware of implementations that keep the biases as matrices (i.e. replicated vectors), also because the batch size $n$ is not guaranteed to remain fixed: e.g. at inference it's typically one. | {
"domain": "ai.stackexchange",
"id": 3831,
"tags": "transformer, feedforward-neural-networks, linear-algebra"
} |
Why is the buckminsterfullerene the purest form of carbon? | Question: Other websites say that $\ce{C60}$ doesn't have surface bonds that are attracted by other atoms as in graphite and diamond.
I understand that graphite may be attracted by other atoms because of its dangling electron. But why diamond? Each carbon in diamond is covalently bonded to $4$ other carbon atoms in a tetrahedral fashion.
Answer: Diamond has dangling bonds on the outer surface of the crystal for pretty much the same reason as graphite. If you understood graphite differently, then you understood it wrong.
See, a molecule of oxygen contains 2 atoms, a molecule of sulfur has 8; but how many atoms are there in a "molecule" of diamond or graphite? Try drawing one to the end, so as to count them. You won't be able to do that. There is no end. The thing is infinite. But the real-world objects are finite, which means that at some point you have to say "Enough" and crop your ideal structure, and in doing so, you leave dangling bonds which attract other atoms. Fullerene lacks those, and hence is "more pure".
There is an altogether different dimension to the problem. Our thought experiment implied that we are able to produce a huge crystal without defects except maybe some on the surface. This is not true. Real-world compounds always contain impurities, and once you have a wrong atom built into the crystal lattice of graphite or diamond, it is stuck there forever. You'll never remove it, short of destroying the entire crystal. Fullerenes, on the other hand, are molecular compounds. They can be dissolved. They can be put through chromatography, sublimation, and other purification techniques. We can always remove any impurity (not that we can remove all of them, because nothing is ideal).
Either way, fullerenes win. | {
"domain": "chemistry.stackexchange",
"id": 8265,
"tags": "crystal-structure, carbon-allotropes"
} |
Improving performance of a subroutine that checks for a vacancy in a lattice | Question: I use the following subroutine to check whether a small subsection of a 3D Int array is all equal to zero. If any value in the particular subsection is non-zero, I exit and return .false.. Within CheckForVacancy, subroutine bc gets called which takes care of out of bounds array indices.
This subroutine gets called millions of times during a particular run and it is responsible for about half the running time of the entire program. In the distant past, I tried to optimize it by changing the order in which I access the Lattice array and by using the any built-in with array slices but I never saw much improvement. Furthermore, the "array slices" solution can get complicated when dealing with the boundaries of the array. I also tried fiddling with optimization flags but I was largely experimenting blindly at that point.
The code is as follows:
LOGICAL FUNCTION CheckForVacancy (x, y, z)
! Checks for vacancy for a site [x,y,z]
use atrpmodule
IMPLICIT NONE
INTEGER, INTENT(IN) :: x, y, z
INTEGER :: Sx, Sy, Sz, i
INTEGER, DIMENSION(1:26) :: SpaceX = (/1,1,1,0,-1,-1,-1,0,1,1,1,0,-1,-1,-1,0,0,1,1,1,0,-1,-1,-1,0,0/)
INTEGER, DIMENSION(1:26) :: SpaceY = (/0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,-1,-1,-1,-1,-1,-1,-1,-1,-1/)
INTEGER, DIMENSION(1:26) :: SpaceZ = (/1,0,-1,-1,-1,0,1,1,1,0,-1,-1,-1,0,1,1,0,1,0,-1,-1,-1,0,1,1,0/)
!---------------------------------------------------------------------------
checkforvacancy=.true.
do i=1,26
Sx = x + SpaceX(i)
Sy = y + SpaceY(i)
Sz = z + SpaceZ(i)
call bc(Sx,Sy,Sz)
if (lattice(Sx,Sy,Sz)/=0)then
CheckForVacancy=.false.
exit
endif
enddo
END FUNCTION CheckForVacancy
SUBROUTINE bc (x, y, z)
! Takes case of boundary conditions
USE atrpmodule
IMPLICIT NONE
INTEGER :: x, y, z
!---------------------------------------------------------------------------
IF (x < 1) then
x = x + LattXDimm
elseIF(x > LattXDimm) then
x = x - LattXDimm
endif
IF (y < 1) then
y = y + LattYDimm
elseIF (y > LattYDimm) then
y = y - LattYDimm
endif
IF (z < 1) then
z = z + LattZDimm
elseIF (z > LattZDimm) then
z = z - LattZDimm
endif
END SUBROUTINE bc
For reference, Lattice is defined like this:
INTEGER, DIMENSION(:,:,:), ALLOCATABLE :: Lattice
It is allocated depending on program input and then set equal to zero. Also for reference, I am using the GNU compiler.
I suspect that the answer to this questions may be that I cannot do better, but I wanted to see if any Fortran programmers can spot something that I am missing or may want to try.
Answer: Three thoughts about this.
First is, unless the lattice is small, you won't actually run into a boundary condition most times. Check once whether a boundary condition is possible and have two CheckForVacancy functions, one with and one without boundary checks.
Second thought is remove the boundary check altogether by adding duplicate elements. In one dimension if you had the following
Index:1,2,3,4,5
Value:3,4,3,2,1
Make the array
Index:0,1,2,3,4,5,6
Value:1,3,4,3,2,1,3
So you trade extra storage to remove the bounds checks.
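The duplicate-element (ghost cell) idea can be sketched quickly in NumPy rather than Fortran, for brevity: padding one periodic layer on every face means the 26-neighbour check never needs a bc() call. The 6×6×6 lattice and single occupied site below are illustrative assumptions:

```python
import numpy as np

lattice = np.zeros((6, 6, 6), dtype=int)
lattice[0, 0, 0] = 1                   # one occupied site, in a corner

# Pad one ghost cell on every face with periodic ("wrap") copies, so
# neighbour lookups near an edge never need an explicit bounds check.
padded = np.pad(lattice, 1, mode="wrap")

def check_for_vacancy(x, y, z):
    """True if all 26 neighbours of 0-based site (x, y, z) are empty."""
    block = padded[x:x + 3, y:y + 3, z:z + 3].copy()  # centre is padded[x+1, y+1, z+1]
    block[1, 1, 1] = 0                                # ignore the site itself
    return not block.any()

print(check_for_vacancy(3, 3, 3))  # True:  interior, far from the occupied site
print(check_for_vacancy(5, 5, 5))  # False: (0,0,0) is a neighbour via wrap-around
```

As noted below, if the lattice is modified during the run, the ghost layer has to be kept in sync (updated alongside each write, or re-padded), which is the extra bookkeeping this trade buys.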
Finally, you still have a lot of no_ops in the for loop. All those add zero statements do nothing. You could unroll the loop to eliminate them at the expense of more code. You removed the 0,0,0 option already I think.
Your comment indicates that the lattice gets modified during execution, which makes removing the bounds check more complex.
Depending on the size of the lattice and the number of times you call CheckForVacancy for an individual cell you could either memoize the cells adjacent to a cell or pre-compute them. This would work if you have enough memory and if you call the function with the same values multiple times. | {
"domain": "codereview.stackexchange",
"id": 29916,
"tags": "performance, fortran"
} |
Video of light passing through water | Question: How is this possible?
https://www.youtube.com/watch?v=EtsXgODHMWk
Video shows beam of light travelling through water.
I was under the impression that Einstein's equations showed that light speed is relative to everything else, so if I run next to a light beam I still won't be able to see it, because it will be going away from me at the speed of light no matter my speed.
Answer: This is a process called Femto-photography. It works more like stop motion than a normal video. It basically works like this:
You flash the light and take a very short exposure picture 1e-9 seconds later.
Then you flash the light and take a very short exposure picture 2e-9 seconds later.
Then you flash the light and take a very short exposure picture 3e-9 seconds later.
etc. etc.
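The arithmetic behind the apparent slow motion is straightforward; a rough Python sketch, where the 1 ns step matches the sequence above and the 30 fps playback rate is an assumption:

```python
C = 2.998e8          # speed of light, m/s
DT = 1e-9            # delay increment between successive exposures (as above)
PLAYBACK_FPS = 30    # assumed playback rate

per_frame = C * DT                      # distance light advances between frames
apparent_speed = per_frame * PLAYBACK_FPS
print(f"light moves {per_frame * 100:.0f} cm per frame")
print(f"played back at {PLAYBACK_FPS} fps it appears to move {apparent_speed:.0f} m/s")
```

So each frame catches the pulse about 30 cm further along, and at ordinary playback speeds that looks like a beam crawling at a few metres per second.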
When you put all these pictures together it appears as if the light is travelling slowly. | {
"domain": "physics.stackexchange",
"id": 24119,
"tags": "special-relativity, optics, photons, camera"
} |
Is this spacetime diagram for the twins paradox correct? | Question: The link at question is here: http://www.einsteins-theory-of-relativity-4engineers.com/twin-paradox-graphical-solution.html
My question in specific is about the loop in Jim's worldline. I know that typically in the twins paradox with an instantaneous acceleration or inertial-frame jump, the difference in ages is explained by the gap in time that the travelling twin doesn't experience due to this changing of frames. But is this loop how it would look with a constant acceleration, or is the author of this picture incorrect? Is a closed loop like this even possible without going faster than the speed of light?
Thanks for any help.
Answer: Even though the author doesn't specify what math they did, it's pretty straightforward to tell that they don't know what they're doing. Relativity is extremely permissive about what coordinate system we use, but when we have an acceptable coordinate system and then do a change of coordinates to get a different one, there are certain requirements. The functions expressing the new coordinates in terms of the old ones must be smooth, and they must also be one-to-one. The fact that Jim's world-line crosses itself tells us that whatever coordinate system the author used, it wasn't one-to-one. So whatever they did was just plain wrong.
It's fine to try to do treatments of the twin paradox in this style. Special relativity can handle accelerating frames of reference (contrary to what some people say). However, it can be a little tricky to get it right; counterintuitive things can happen; you have to be careful about your mathematical assumptions; the description can be nonunique; and there is no guarantee that you will end up with a single coordinate chart that covers all of spacetime. The most common description is referred to as the Rindler coordinates. If someone wanted to do a better presentation in this style, probably a nicer way to do it would be to let Pam have constant proper acceleration. Then the transformation would simply be the transformation from Minkowski coordinates to Rindler coordinates. There is also a treatment in this style in Hewitt, Conceptual Physics.
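For concreteness, one standard form of the Minkowski-to-Rindler transformation (a sketch for reference; conventions vary between textbooks) is

```latex
% Minkowski (t,x) in terms of Rindler (\tau,\xi), for constant proper
% acceleration a:
t = \frac{1}{a}\, e^{a\xi} \sinh(a\tau), \qquad
x = \frac{1}{a}\, e^{a\xi} \cosh(a\tau)
```

This map is smooth and one-to-one on the right wedge $x > |t|$, which is exactly the requirement a self-crossing world-line violates.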
The danger in this style is that impressionable people will get the idea that there's only one way to do it, or that all kinds of presentation-dependent facts are "real." We can never say whether a certain event for Jim and a certain event for Pam are "really" simultaneous. At best they are simultaneous according to a certain convention defining simultaneity. This was in fact one of the basic insights leading to Einstein's 1905 formulation of relativity: that simultaneity is a matter of convention. | {
"domain": "physics.stackexchange",
"id": 45530,
"tags": "special-relativity, spacetime, reference-frames, coordinate-systems, time-dilation"
} |
Sort half-edges around common vertex in 3d | Question: I've been trying to figure out this problem for a very long time and am getting nowhere. I'm working on a simple 3d modeler that uses a half-edge data structure.
Say I have non-manifold geometry where two triangles share a common vertex, as shown in the image below. And I want to add another triangle such that now three triangles share a common vertex. Once we add the new triangle we need to reorder the half-edges around the common vertex. In 2d this ordering is done by sorting the half-edges from the common vertex clockwise, as explained in this post.
However, in 3d this becomes a nightmare: if the same three triangles share the common vertex but have an arbitrary orientation in 3d space and are not coplanar, how can one possibly sort the half-edges?
I experimented with using the common vertex normal to construct a plane, and project all the half-edges around the vertex to that plane. After which we could sort them clockwise relative to the plane. But I've found this approach to have a lot of issues. And now I'm all out of ideas.
Answer: It seems that the problem here is that you try to determine something similar to a rotation system from a skeleton[1] embedded in $\mathbb{R}^3$. The problem here is that such a skeleton is not enough to uniquely define a rotation system, because rotation systems encode embeddings of a graph onto a surface. Without having access to a given surface as a reference, almost any rotation around a vertex is possible, and you will not be able to find the "right" one.
So what can you do? This likely depends a bit on what it is you are actually modelling. Since the rotation around a vertex is a concept that depends on a surface, to determine it you will have to decide what your surface looks like, at least locally around the vertex you're inserting a triangle in. Likely, the edges that are part of a triangle should always be consecutive in the ordering, but that doesn't give all the information.
One possible approach on how to decide where a new triangle fits into the ordering, is to look at all consecutive pairs of triangles in the existing ordering, and insert the new triangle in between the pair with smallest distance to the new triangle. (Note that you have to store the ordering around the vertex explicitly, you cannot compute this on the fly)
More precisely, to determine where to place a new triangle incident to vertex $v$ in the ordering around $v$, consider all pairs of consecutive edges $(e_i,e_{i+1})$ that do not form a triangle with vertices $(a,b)$ being their endpoints not equal to $v$. Let $(x,y)$ be the new vertices of the triangle. Place the edges of the new triangle in between the pair of edges $(e_i,e_{i+1})$ such that $\min(\|a-x\|+\|b-y\|, \|a-y\|+\|b-x\|)$ (or another similarity measure of the pairs $(x,y)$ and $(a,b)$) is minimized.
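In code, the pair-selection rule might look like the following sketch (function names are mine; for brevity it does not skip consecutive pairs that already form a triangle, which the rule above requires):

```python
import math

def insertion_index(ring, x, y):
    """ring: the far endpoints (not v) of the edges around v, in their
    stored cyclic order; x, y: the new triangle's far endpoints.
    Returns i such that the new edges go between ring[i] and ring[i+1]."""
    best_i, best_cost = 0, float("inf")
    n = len(ring)
    for i in range(n):
        a, b = ring[i], ring[(i + 1) % n]
        # similarity of the consecutive pair (a, b) to the new pair (x, y)
        cost = min(math.dist(a, x) + math.dist(b, y),
                   math.dist(a, y) + math.dist(b, x))
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i
```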
For example, in the figure below, vertices with the same colors are the pairs we'd compare with our new vertices $x,y$.
[1]: I will assume that you just have a graph for now, but perhaps a simplicial complex might be a better description? I don't think it matters much for the discussion here, though. | {
"domain": "cs.stackexchange",
"id": 16942,
"tags": "computational-geometry, doubly-connected-edge-list"
} |
Finding the carry out of the "+" operator in SystemVerilog | Question: I'm trying to learn digital design this summer and currently going through this excercise of creating a 32-bit ALU based on this schematic:
I'm using the + operator to create an adder, but I need to grab the carry out for use in my carry flag. Should I just hard-code an adder instead of using the + operator? I pretty much hard-coded the equation for the carry out of a full adder in my else statement provided below. I'm pretty new to this, and any type of help would be appreciated. I'm also not sure if my code is well designed/structured. Any tips would be awesome!!
module alu(input logic [31:0] a, b,
input logic [1:0] alu_operation,
output logic [31:0] out,
output logic [3:0] alu_flag);
logic [31:0] sum, b_out;
assign b_out = alu_operation[0] ? ~b : b; // if 0 bit of op = 1, then we invert b to subtract, otherwise we keep b and add.
assign sum = a + b_out + alu_operation[0];
always @ (alu_operation, a, b)
case (alu_operation)
2'b00: out <= sum; // ADD
2'b01: out <= sum; // SUBTRACT
2'b10: out <= a & b; // AND
2'b11: out <= a | b; // OR
endcase
wire logic negative, zero, carry, overflow;
assign negative = alu_flag[3];
assign zero = alu_flag[2];
assign carry = alu_flag[1];
assign overflow = alu_flag[0];
always @ (out, a, b)
begin
if (out[31] == 1)
alu_flag[3] <= 1; // Negative
if (out == 0)
alu_flag[2] <= 1; // Zero
if ( ~alu_operation[1] & ((a & b) | (alu_operation[0] & (a ^ b)))) // Carry = AB + Cin(A xor B)
alu_flag[1] <= 1; // Cin = alu_operation[0]
if ((sum[31] & a[31]) & (~alu_operation[1]) & (b[31] ^~ a[31] ^~ alu_operation[0]))
alu_flag[0] <= 1; // Overflow
end
endmodule
Answer: The layout of your code is good, and you chose meaningful names for your signals. There are some improvements you can make, however.
The following signals are essentially unused:
wire logic negative, zero, carry, overflow;
You do make assignments to them, but you never use them otherwise. If you synthesize your code, they will be optimized away. Therefore, you can simply delete them and their assignments.
Good coding practices recommend that you should use blocking assignments (=) for combinational logic. Since you do not have any sequential logic, you should not use nonblocking assignments (<=) in your always blocks.
Your always blocks have incomplete sensitivity lists. This will lead to simulation mismatches before and after synthesis. You should use the implicit sensitivity list syntax:
always @*
Your second always block will infer latches because you do not make assignments to the signals under all conditions. One way to avoid latches is to use an else clause for each if statement.
Here is the new code, taking all of the above into account. I also made some minor changes regarding indentation and begin/end usage:
module alu (
input logic [31:0] a, b,
input logic [1:0] alu_operation,
output logic [31:0] out,
output logic [3:0] alu_flag
);
logic [31:0] sum, b_out;
assign b_out = alu_operation[0] ? ~b : b; // if 0 bit of op = 1, then we invert b to subtract, otherwise we keep b and add.
assign sum = a + b_out + alu_operation[0];
always @* begin
case (alu_operation)
2'b00: out = sum; // ADD
2'b01: out = sum; // SUBTRACT
2'b10: out = a & b; // AND
2'b11: out = a | b; // OR
endcase
end
always @* begin
if (out[31]) begin
alu_flag[3] = 1; // Negative
end else begin
alu_flag[3] = 0;
end
if (out == 0) begin
alu_flag[2] = 1; // Zero
end else begin
alu_flag[2] = 0;
end
// Carry = AB + Cin(A xor B)
if ( ~alu_operation[1] & ((a & b) | (alu_operation[0] & (a ^ b)))) begin
alu_flag[1] = 1;
end else begin
alu_flag[1] = 0;
end
if ((sum[31] & a[31]) & (~alu_operation[1]) & (b[31] ^~ a[31] ^~ alu_operation[0])) begin
alu_flag[0] = 1; // Overflow
end else begin
alu_flag[0] = 0;
end
end
endmodule
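On the original question of grabbing the carry from the + operator: rather than hand-coding the carry equation, a common idiom (a sketch, not part of the code above) is to make the assignment target one bit wider than the operands, so the adder's carry-out lands in the extra bit:

```verilog
logic [31:0] sum;
logic        carry_out;

// The LHS is 33 bits wide, so the operands are zero-extended to 33 bits
// and the addition's carry-out is captured in carry_out.
assign {carry_out, sum} = a + b_out + alu_operation[0];
```

This keeps the high-level + while still exposing the carry for the flag logic.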
Regarding your question about the + operator, I think you should keep it because it is good to code at a high level of abstraction. Synthesis tools will optimize your code. If your synthesized design does not meet your goals (timing, power, area, etc.), then you can revisit the implementation. | {
"domain": "codereview.stackexchange",
"id": 41599,
"tags": "verilog"
} |
Magnetic field line experiment in space | Question: I was wondering what would happen if you take a bar magnet (maybe in a long cylindrical form so that there are no edges) to space, let it float and carefully and slowly sprinkle iron filings around it.
What would the result look like? Would I get a 3-dimensional representation of the magnetic field lines? Or just a big mess?
I couldn't really find any answer to this question on the internet.
Answer: The magnetic forces involved are stronger than the gravitational forces on iron particles, so it will be much like on Earth. In the lecture room, I show my students this demo: http://chinalabsupplies.com/magnetic_demo/1257-1.jpg
We also have iron filings in a gel. The poster came up with this nice video, where one sees how the iron particles in oil get attracted also to each other and orient themselves - this gives the impression of field lines. https://www.youtube.com/watch?v=KKyFHDJL_1s | {
"domain": "physics.stackexchange",
"id": 36689,
"tags": "magnetic-fields, space"
} |
Sequence and Series Solver | Question: I made a sequence and series solver just for helping me solve homework, and I'm in dire need of ways to make it further compact and efficient, since I used brute force.
If you want to see what this is supposed to do, please see my github.
This Python code is meant for the fx-cg50 calculator's MicroPython, where a lot of functions don't work, including fractions and some mathematical functions such as math.gcd and math.isclose. So I really require some advice or coding tricks to simplify my program.
Disclaimer: I'm only an A-Level student of 16 years; consider me a beginner. I know eval is insecure, but I'm not planning on uploading this online; it's only for my personal use.
# just to make the input_checker function smaller and to eliminate repeated code, responsible for iterating through a list of inputs
def three_variables_looper_arithmetic():
count_list = [input("enter a1: "), input("enter n: "), input("enter d: ")]
count_list = [float(eval(count)) for count in count_list if isinstance(count, str)]
return count_list
# just to make the input_checker function smaller and to eliminate repeated code, responsible for iterating through a list of inputs
def three_variables_looper_geometric():
count_list = [input("enter a1: "), input("enter r: "), input("enter n: ")]
count_list = [float(eval(count)) for count in count_list if isinstance(count, str)]
return count_list
# loops through all the inputs of a given situation based on whether its arithmetic
# or not, and checks whether the input is string like "6/2" so it could evaluate it, allows input of fractions
def input_checker(choice_main, choice_sub, L):
if choice_main == 'arithmetic':
if choice_sub == 'a_nth':
return three_variables_looper_arithmetic()
elif choice_sub == 'sum_to_nth_without_L':
return three_variables_looper_arithmetic()
elif choice_sub == 'sum_to_nth_with_L':
count_list = [input("enter a1: "), input("enter n: "), L]
count_list = [float(eval(count)) for count in count_list if isinstance(count, str)]
return count_list
elif choice_sub == "a_nth_exceed":
count_list = [input("enter a1: "), input("enter r/d: ")]
count_list = [float(eval(count)) for count in count_list if isinstance(count, str)]
return count_list
elif choice_sub == "sum_to_nth_without_L_exceed":
count_list = [input("enter a1: "), input("enter r/d: ")]
count_list = [float(eval(count)) for count in count_list if isinstance(count, str)]
return count_list
elif choice_sub == "sum_to_nth_with_L_exceed":
count_list = [input("enter a1: "), L]
count_list = [float(eval(count)) for count in count_list if isinstance(count, str)]
return count_list
elif choice_main == 'geometric':
if choice_sub == 'a_nth':
return three_variables_looper_geometric()
elif choice_sub == 'sum_to_nth':
return three_variables_looper_geometric()
elif choice_sub == 'sum_to_infinity':
count_list = [input("enter a1: "), input("enter r: ")]
count_list = [float(eval(count)) for count in count_list if isinstance(count, str)]
return count_list
elif choice_sub == "a_nth_exceed":
count_list = [input("enter a1: "), input("enter r/d: ")]
count_list = [float(eval(count)) for count in count_list if isinstance(count, str)]
return count_list
elif choice_sub == "sum_to_nth_without_L_exceed":
count_list = [input("enter a1: "), input("enter r/d: ")]
count_list = [float(eval(count)) for count in count_list if isinstance(count, str)]
return count_list
# checks if L is an x or not, also based on whether its on the exceed or normal path, and
# an x means L is not present, while a value of L represents it is present and used in calculation
def L_evaluator(L, option, choice_n, value):
if option == "normal":
if L == "x":
a1, n, d = input_checker(choice_main, choice_n, L)
result = (n/2)*(2*a1+(n-1)*d)
return result
else:
choice_n = choice_map_sub['x']
a1, n, L = input_checker(choice_main, choice_n, L)
result = (n/2)*(a1+L)
return result
if option == "exceed":
if L == "x":
a1, d = input_checker(choice_main, choice_n, 0)
a1, d = float(a1), float(d)
n = 1
while True:
result = (n/2)*(2*a1+(n-1)*d)
if (result >= float(value)):
break
n += 1
return n
else:
choice_n = choice_map_exceed['c']
a1, L = input_checker(choice_main, choice_n, L)
n = 1
while True:
result = (n/2)*(a1+L)
if (result >= float(value)):
break
n += 1
return n
# finds the first n to exceed a certain value, by using brute force method
def minimum_n_finder(choice_main, choice_map_exceed):
choice_n_input = None
if choice_main == "arithmetic":
while choice_n_input not in ['a', 'b']:
choice_n_input = input("Enter a for nth\nEnter b for sum\n>> ")
choice_n = choice_map_exceed[choice_n_input]
print("enter x in n")
if choice_n == "a_nth_exceed":
print("a1+(n-1)d > Value")
a1, d = input_checker(choice_main, choice_n, 0)
n = 1
value = input("Enter the value to exceed: ")
while True:
result = a1+(n-1)*d
if (result >= float(value)):
break
n += 1
print("The minimum n to exceed is " + str(n))
if choice_n == "sum_to_nth_without_L_exceed":
n = 1
print("Sn=(n/2)(2a1+(n-1)d)>Value\nSn=(n/2)(a1+L)>Value\nEnter x if L is unknown")
L = input("Enter L: ")
value = input("Enter the value to exceed: ")
result = L_evaluator(L, "exceed", choice_n, value)
print("The minimum n to exceed is " + str(result))
elif choice_main == 'geometric':
while choice_n_input not in ['a', 'b']:
choice_n_input = input("Enter a for nth\nEnter b for sum_to_nth\n>> ")
choice_n = choice_map_exceed[choice_n_input]
if choice_n == "a_nth_exceed":
print("a1(r)^(n-1)>Value")
a1, r = input_checker(choice_main, choice_n, 0)
if a1 == 0:
print("a cannot be 0")
raise SystemExit
n = 1
value = input("Enter the value to exceed: ")
while True:
result = a1*(r)**(n-1)
if (result >= float(value)):
break
n += 1
print("The minimum n to exceed is " + str(n))
elif choice_n == "sum_to_nth_without_L_exceed":
print("Sn=(a1(1-(r)^n))/(1-r)")
a1, r = input_checker(choice_main, choice_n, 0)
if a1 == 0:
print("a cannot be 0")
raise SystemExit
n = 1
value = input("Enter the value to exceed: ")
while True:
result = (a1*(1-(r)**n))/(1-r)
if (result >= float(value)):
break
n += 1
print("The minimum n to exceed is " + str(n))
# as this code is for a calculator the x button is very easily accessible to shut the whole program.
def stopper():
stop_or_continue = input("Stop?: enter x then\n>>> ")
if stop_or_continue == "x":
raise SystemExit
print("Sequence & Series Solver")
# asks whether you want to solve arithmetically or geometrically, depends on the sequence/series
while True:
choice_main , choice_input_main = None, None
choices_main_options = ['a','b']
choice_map_main ={"a": 'arithmetic', "b": 'geometric'}
while choice_input_main not in choices_main_options:
choice_input_main = input("a for arithmetic\nb for geometric\n>> ")
choice_main = choice_map_main[choice_input_main]
if choice_main == "arithmetic":
print("Arithmetic: ")
choice_sub, choice_input_sub = None, None
choices_sub_options = ['a', 'b', 'c']
choice_map_sub = {'a': 'a_nth', 'b': 'sum_to_nth_without_L', 'x': 'sum_to_nth_with_L', 'c':'minimum_number_of_terms_to_exceed'}
while choice_input_sub not in choices_sub_options:
choice_input_sub = input("a for a_nth term\nb for sum\nc for min_term_to_exceed\n>> ")
choice_sub = choice_map_sub[choice_input_sub]
# the variable choice_main refers to whether the choice is arithmetic or geometric
# choice_sub refers to the types of formulas you'll use in sequences/series
if choice_sub == "a_nth":
print("a_nth=a1+(n-1)d")
a1, n, d = input_checker(choice_main, choice_sub, 0)
result = a1+(n-1)*d
print(round(result,4))
elif choice_sub == "sum_to_nth_without_L":
print("Sn=(n/2)(2a1+(n-1)d)\nSn=(n/2)(a1+L)\nEnter x if L is unknown")
L = input("Enter L: ")
print(round(L_evaluator(L, "normal", choice_sub, 0), 4))
elif choice_sub == "minimum_number_of_terms_to_exceed":
choice_map_exceed = {'a': 'a_nth_exceed', 'b': 'sum_to_nth_without_L_exceed', 'c': 'sum_to_nth_with_L_exceed'}
minimum_n_finder("arithmetic", choice_map_exceed)
elif choice_main == "geometric":
print("Geometric: ")
choice_sub, choice_input_sub = None, None
choices_sub_options = ['a', 'b', 'c', 'd']
choice_map_sub = {'a': 'a_nth', 'b': 'sum_to_nth', 'c': 'sum_to_infinity', 'd': 'minimum_number_of_terms_to_exceed'}
while choice_input_sub not in choices_sub_options:
choice_input_sub = input("a for a_nth term\nb for sum\nc for sum to infinity\nd for min_terms_exceed\n>> ")
choice_sub = choice_map_sub[choice_input_sub]
if choice_sub == "a_nth":
print("a_nth=a1(r)^(n-1)")
a1, r, n = input_checker(choice_main, choice_sub, 0)
result = a1*(r)**(n-1)
print(round(result,4))
elif choice_sub == "sum_to_nth":
print("Sn=(a1(1-(r)^n))/(1-r)")
a1, r, n = input_checker(choice_main, choice_sub, 0)
try:
result = (a1*(1-(r)**n))/(1-r)
print(round(result,4))
except (ZeroDivisionError, NameError):
print("r cannot be 1!")
elif choice_sub == "sum_to_infinity":
print("S_inf=a1/(1-r)")
a1, r = input_checker(choice_main, choice_sub, 0)
if (r > 1):
print("r cannot be greater than 1")
raise SystemExit
try:
result = a1/(1-r)
print(round(result,4))
except (ZeroDivisionError, NameError):
print("r cannot be 1!")
elif choice_sub == "minimum_number_of_terms_to_exceed":
choice_map_exceed = {'a': 'a_nth_exceed', 'b': 'sum_to_nth_without_L_exceed', 'c': 'sum_to_nth_with_L_exceed'}
minimum_n_finder("geometric", choice_map_exceed)
stopper()
Answer: First, you can just return the list comprehension instead of first overwriting count_list. That should save you a few lines without affecting readability.
For all of your if-cases, you could use a dictionary instead, so that the dispatch becomes if choice_sub in mapping: mapping[choice_sub] (there is no need to call .keys(); the in operator checks a dict's keys directly).
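A minimal sketch of that dictionary-based dispatch (the PROMPTS table and its keys here are made up to mirror a few of the cases in the question):

```python
def read_values(prompts):
    """Prompt for each value; eval allows fraction-style input like "6/2"."""
    return [float(eval(input(p))) for p in prompts]

PROMPTS = {
    ("arithmetic", "a_nth"): ["enter a1: ", "enter n: ", "enter d: "],
    ("arithmetic", "a_nth_exceed"): ["enter a1: ", "enter r/d: "],
    ("geometric", "a_nth"): ["enter a1: ", "enter r: ", "enter n: "],
    ("geometric", "sum_to_infinity"): ["enter a1: ", "enter r: "],
}

def input_checker(choice_main, choice_sub):
    # one table lookup replaces the whole if/elif ladder
    return read_values(PROMPTS[(choice_main, choice_sub)])
```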
Generally, when functions are getting as long as yours are, it's a good idea to start thinking about OOP and classes. And you could separate the "front-end" (all of your prints and inputs) from your "back-end" (the functions/methods performing the calculations). This would also make it easier on yourself if you want to migrate away from a CLI app to something with a graphical interface (which could help make the formatting of the formulas more readable). | {
"domain": "codereview.stackexchange",
"id": 37679,
"tags": "python, python-3.x, calculator, math-expression-eval"
} |
How are gravitons compatible with general relativity? | Question: I have been reading about how gravity has 2 equivalents descriptions:
General Relativity.
Explained by the graviton.
How are these two things compatible?
How can it be that gravity is explained perfectly through curving spacetime, and at the same time we want to understand it by thinking of a particle that mediates its interaction? Isn't this confusing?
Answer: There is no working theory that has been, with complete consistency, started from gravitons, and ended up at 4D General relativity in its low energy limit.
But the idea of the whole endeavor is the same thing as Electricity and magnetism. You have the classical picture of E&M that is based around electric and magnetic fields, and then you have the photon-based version of that theory, given by QED, where perturbations of those fields get treated as "photons", and in a limit, a superposition of many of these states average out to the fields.
It's the same idea with some theory of gravity -- perturbations of the metric tensor are gravitational waves, and you'd conceptualize "small" perturbations as "gravitons", and in some low-energy macroscopic limit, a superposition of many graviton states would look like the macroscopic metric tensor. | {
"domain": "physics.stackexchange",
"id": 78446,
"tags": "general-relativity, spacetime, curvature, quantum-gravity, carrier-particles"
} |
SICK LMS1xx in Husky (Hydro) | Question:
Hi, does anyone have any idea how to use SICK LMS100 laser scanner in Husky?
I have been trying to use the package provided here - https://github.com/clearpathrobotics/LMS1xx - but could hardly make it work. (Error messages I got: connection to device failed; changing the IP does not work.)
Ubuntu 12.04.4 + Hydro.
Thanks!
Originally posted by DavidSuh on ROS Answers with karma: 1 on 2014-08-26
Post score: 0
Original comments
Comment by DavidSuh on 2014-08-26:
Thanks Murilo. You were right, it was indeed IP problem. Someone else changed the IP in LMS while I was on my vacation last month. LMS now works nicely again. Thanks a lot!
Answer:
I'm also using a Husky with LMS100 on 12.04 + Hydro. I forked the repo from clearpath and made very minor changes. Have you checked the IP configured on the LMS? Can you show your launch file?
Originally posted by Murilo F. M. with karma: 806 on 2014-08-26
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 19194,
"tags": "husky, ros-hydro, sicklms"
} |
SWAP Test as a Projective Measurement | Question: In a much cited paper by Lloyd et al Quantum Algorithm for Supervised and Unsupervised Machine Learning, they proposed a rather cute quantum algorithm to evaluate the distance between an input feature vector $\vec{u}$ and the centroid of a set of $M$ training vectors $\{\vec{v}_{j}\}$, i.e. an average $(1/M)\sum_{j=1}^{M}\vec{v}_{j}$. On page 3 of this paper, after they have constructed state $|\psi\rangle$ for "system and ancilla", they proceeded to "... use a swap test to perform a projective measurement on the ancilla alone to see if it is in the state $|\phi\rangle$ ...". I have three closely related questions.
Which part of state $|\psi\rangle$ is due to the ancilla? If state $|0\rangle$ there represents part of the ancilla, it seems this ancilla state in $|\psi\rangle$ is not a single qubit, as it must have the same length as $|j\rangle$.
What states are involved in this swap test? My wild guess is state $|\psi\rangle$ and state $|\phi\rangle$, and that the projection $\langle\phi|\psi\rangle$ is a quantity whose norm is the desired distance.
How is this swap test performed in this setting? I guess an extra single qubit is needed to be the ancilla.
Your wisdom will be highly appreciated.
Answer: There are two different ancillas floating around, one used in $|\psi\rangle$ and another to conduct the swap test later on:
In the above picture with subsystems explicitly labeled we have
\begin{align}
|\psi \rangle_{A_2 B} &= \frac{1}{2} \left( |0\rangle_{A_2} |u\rangle_B + \frac{1}{\sqrt{M}} \sum_{j=1}^M |j\rangle_{A_2} |v_j\rangle_B\right) \\
|\phi\rangle_C &= \frac{1}{\sqrt{Z}} \left( ||\vec{u}|| |0\rangle_C - \frac{1}{\sqrt{M}} \sum_{j=1}^M ||\vec{v}_j|| |j\rangle_C \right)
\end{align}
Its tedious to work out the SWAP test in this case, but you can verify that this makes sense by just choosing $M=1$ corresponding to a single vector $\vec{v}$ that you're comparing to $\vec{u}$. After the swap and final $H$ gate are performed your goal is to rewrite the state as
$$|0\rangle_{A_1} \left(|\psi_{A_2 B}\rangle |\phi\rangle_C + |\phi\rangle_{A_2} |\psi_{BC}\rangle \right) + \dots
$$
which you will recognize from a typical SWAP test as having the readout probability of "0" in register $A_1$ being a function of $\langle \psi |\phi\rangle$. However the $|\psi_{BC}\rangle$ will have its registers out of order and will therefore appear "backwards", so you'll need to shuffle some subsystems to finish the derivation.
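As a numerical sanity check of the SWAP-test statistics used here, the textbook single-qubit case can be simulated directly (a sketch assuming numpy; qubit ordering is ancilla first, then the two state registers):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Controlled-SWAP on 3 qubits; basis index = 4*ancilla + 2*q1 + q2,
# so conditioning on ancilla=1 swaps |101> and |110> (indices 5 and 6).
CSWAP = np.eye(8)
CSWAP[[5, 6]] = CSWAP[[6, 5]]

def swap_test_p0(psi, phi):
    """P(ancilla reads 0) after H, CSWAP, H; should equal
    1/2 + |<psi|phi>|^2 / 2."""
    state = np.kron([1.0, 0.0], np.kron(psi, phi))
    H_anc = np.kron(H, np.eye(4))
    out = H_anc @ CSWAP @ H_anc @ state
    return float(np.sum(np.abs(out[:4]) ** 2))

p_same = swap_test_p0(np.array([1.0, 0.0]), np.array([1.0, 0.0]))
p_orth = swap_test_p0(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
# p_same is ~1.0 (identical states), p_orth is ~0.5 (orthogonal states)
```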
Answering your question:
Yes, systems $A_2$ and $C$ are both $(M+1)$-dimensional, so you need $\lceil{\log_2 (M+1)}\rceil$ qubits in those registers
Correct, you're trying to evaluate $\langle \psi | \phi \rangle$, which corresponds to $\langle \vec{u}, \vec{V}\rangle$ where $\vec{V} = \frac{1}{M} \sum_j \vec{v}_j$. However you will also need to estimate $Z$ to find $|| \vec{u} - \vec{V} ||^2$ as discussed in the paper.
See diagram above | {
"domain": "quantumcomputing.stackexchange",
"id": 2415,
"tags": "quantum-gate, measurement, quantum-enhanced-machine-learning"
} |
what is inside your $PYTHONPATH? | Question:
Hello all,
I am trying to find a solution for the problem I have with this error message:
import rospkg
ImportError: No module named rospkg
Can somebody tell me what it should be inside the $PYTHONPATH exactly?
For me it is only : /opt/ros/fuerte/lib/python2.7/dist-packages
Thanks,
Originally posted by Zara on ROS Answers with karma: 99 on 2012-07-30
Post score: 0
Answer:
You didn't provide any information about the platform you are using, how you installed ROS and which version you are using (see the support guidelines), so I assume you installed from Debian packages and you use Fuerte.
The debian package that contains rospkg is python-rospkg and the corresponding files are put into /usr/share/pyshared which is a system directory. That means that you don't need to add it to the python path since that's one of the default locations python searches for modules.
Did you install python-rospkg:
sudo apt-get install python-rospkg
Normally, the package should be installed automatically since it is a dependency of ROS. If it's not there, I'm sure something went wrong with your ROS installation.
Originally posted by Lorenz with karma: 22731 on 2012-07-30
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Zara on 2012-08-01:
I checked that one too. Finally I erased my Ubuntu 12.0.4 and re-installed Ubuntu 12.0.4 and ROS again. Everything works fine now. Knocking the wood!! | {
"domain": "robotics.stackexchange",
"id": 10419,
"tags": "ros, gazebo, python, gazebo-simulator, rospkg"
} |
What is the right way to keep track of the different things we try? | Question: In Machine Learning we usually try many combinations of different features, filters we apply to the data, transformations on the features or the target variables and different versions of hyperparameters.
This fact makes it difficult to keep track of what works and what doesn't if we are not exhaustive with how we keep track of the different combinations we try.
I am wondering if there are any best practices around this problem. My current approach is to keep track of the different combinations by naming files after the parts that compose them; for example, a hyperparameters pickle file I would name 'booster_params_{}_{}_{}_{}.pickle'.format(filter_name, features_name, model_target, params_iteration)
where filter name is the set of filters I'm applying to the data, features name refers to the set of features used, model target to the target I'm modeling and params iteration refers to the version of the hyperparameters.
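For reference, that naming scheme plus a small side-car manifest might look like the following sketch (the helper function and manifest file names are made up):

```python
import json
import pickle

def save_run(params, filter_name, features_name, model_target, params_iteration):
    """Pickle the hyperparameters and append one line describing the run
    to a JSON-lines manifest, so each file's meaning is recorded."""
    name = "booster_params_{}_{}_{}_{}.pickle".format(
        filter_name, features_name, model_target, params_iteration)
    with open(name, "wb") as f:
        pickle.dump(params, f)
    entry = {"file": name, "filter": filter_name, "features": features_name,
             "target": model_target, "iteration": params_iteration}
    with open("runs_manifest.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
    return name
```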
This seems like an overkill, and that is why I'm looking for ideas on how to tackle this problem.
Answer: You can maintain multiple versions of booster_params.pickle either:
1) through a version control system
or
2) manually, e.g. booster_params_v1.pickle, ...v2, etc. and a separate file where you would describe each version | {
"domain": "datascience.stackexchange",
"id": 5907,
"tags": "machine-learning"
} |
Boulware and Unruh vacuum in Schwarzschild spacetime | Question: I am studying quantization in Schwarzschild spacetime. In class the Boulware vacuum $\left| B \right>$ has been defined using the o.n. modes $u_I(x) = \frac{1}{4\pi \sqrt{\omega}}e^{-i\omega v}$, $u_R(x) = \frac{1}{4\pi \sqrt{\omega}}e^{-i\omega u} \Theta(r - 2m)$ and $u_L(x) = \frac{1}{4\pi \sqrt{\omega}} e^{i\omega u}\Theta(2m-r)$ (the $u_R$ are modes outside the horizon with positive norm and frequency, while the $u_L$ are modes inside the horizon with positive norm and negative frequency, corresponding to partner states). Then one clearly sees the modes are singular on the horizon, since there $u = \infty$, and thus the Boulware vacuum is singular on the horizon; however, it can still be used to describe the polarized vacuum outside of a star, since then the horizon does not really exist, as the interior metric will not be Schwarzschild.
The Unruh vacuum $\left|U\right>$ has then been introduced; the o.n. modes are the same for the ingoing sector, while for the outgoing one they are $\frac{1}{4\pi\sqrt{\omega_K}}e^{-i\omega_K U}$, with $U = \pm \frac{1}{k}e^{-ku}$ the Kruskal coordinate and $k$ the surface gravity.
Now I know the outgoing modes at late time in a gravitational collapse behave in general as the outgoing modes of the Unruh vacuum. To derive Hawking radiation the following is computed (in the s-wave and no-backscattering approximation): $\left<U\right|N_{\omega}^R\left|U\right> = \frac{1}{e^{8\pi m \omega} - 1}$, with $N^R_\omega$ the number operator associated to the modes $u_R$.
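For reference, the Planck factor quoted above follows from the surface gravity of the Schwarzschild horizon (standard identifications, in units $G = c = \hbar = k_B = 1$):

```latex
k = \frac{1}{4m}, \qquad T_H = \frac{k}{2\pi} = \frac{1}{8\pi m}
\quad\Longrightarrow\quad
\left< U \right| N^R_\omega \left| U \right>
= \frac{1}{e^{\omega/T_H} - 1}
= \frac{1}{e^{8\pi m \omega} - 1}
```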
There are multiple reasons why this is not clear to me (please validate the italic statements):
I understand the outgoing modes of the Unruh vacuum and of a late-time gravitational collapse are the same, so it makes sense that the Unruh vacuum is the vacuum at late time in a gravitational collapse; what is not clear to me here is with respect to which observer this is the vacuum: I suppose the vacuum of an inertial observer at the horizon, since $U$ is the local inertial coordinate on the future horizon.
why do we use $N_\omega^R$? Here I suppose it is because the Boulware vacuum is the vacuum of an inertial observer at infinity, since the modes become Minkowski modes; however, the modes are singular on the horizon, which does exist in the case of gravitational collapse (unlike a static star). I feel like I can neglect this because the observer is at infinity, even though it seems odd.
if my above statements are correct, the vacuum here depends on the position of the observer (different on the horizon and at infinity).
Answer: Your definition of the word "vacuum" seems to be "the state in which a given observer sees no particles". This is not a common definition within QFTCS (in fact, we typically use "vacuum" to mean any pure Gaussian state, regardless of observers). With this in mind, let us move on to your questions.
I suppose the vacuum of an inertial observer at the horizon since $U$ is the local inertial coordinate on the future horizon.
Any static observer on Schwarzschild spacetime will observe particles in the Unruh vacuum at late times. Even if the observer is at the horizon. In a gravitational collapse spacetime, observers in the far past (before the formation of the black hole) see no particles. After the star collapses, they see particles appearing. Hence, in Schwarzschild spacetime, static observers in the far past will see no particles.
I suppose because the Boulware vacuum is the vacuum of an inertial observer at infinity since the modes become Minkowski modes
In the Boulware vacuum, no static observer sees any particles. It does not matter where they are in spacetime, as long as they are not on the horizon (which is nonphysical). The Boulware vacuum does not make sense in a gravitational collapse spacetime because it is a nonphysical state. It only makes sense in a star or planetary spacetime. This is also the reason why we have no Hawking effect on star or planetary spacetimes, as I discussed in this answer.
if my above statements are correct here the vacuum depends on the position of the observer (different on the horizon and at infinity).
The notion of particle does depend on the observer. The physical state of the field does not. The correct state to work with depends on the physical situation at hand, just like in a laboratory you should work with the state in which the system is prepared. In a planetary spacetime, for example, the Unruh vacuum is nonphysical because it predicts modes falling into a horizon that does not exist in the spacetime you are considering. Hence, it cannot be taken seriously as a good model for the quantum field. Similarly, in a gravitational collapse spacetime, the Boulware vacuum is not a physically acceptable model for the state of the quantum field because it is singular at the horizon. I discussed this in more length at the answer I previously mentioned.
The state does not depend on the observer. The particle interpretation does. | {
"domain": "physics.stackexchange",
"id": 94022,
"tags": "quantum-field-theory, hawking-radiation, qft-in-curved-spacetime, unruh-effect"
} |
Diagrammatic Quantum Reasoning: Proving the loop equation using yanking equations | Question: I'm trying to study the book: Picturing Quantum Processes: A First Course in Quantum Theory and Diagrammatic Reasoning, and would like some help with Exercise 4.12:
The relevant equations are as follows:
As an aside, I would really appreciate it if anyone knows where to find the solutions of this book.
Thank you!
Answer: Here is the solution. The trick is to use "the only connectivity matters" rule. The swap rule of 4.9 helps us reorder the inputs, which then makes it topologically equivalent to the next diagram (match the first and second wires of the states). | {
"domain": "quantumcomputing.stackexchange",
"id": 1191,
"tags": "mathematics, zx-calculus"
} |
Optimizing a nested for loop over JS array of objects | Question: NOTE: I'm bringing this question up in code review rather than stack overflow since I already have a working solution. I just am looking for ways to do it better.
I have two arrays. One is an array of simple strings (call it A1), while the other is an array of objects (call it A2). I need to pluck only those objects from A2 whose value is present in A1.
Here is my implementation using a double for loop. It works, but I feel it is not elegant or efficient. How do I make this run better? Use of external libraries such as underscore JS is allowed.
var A1 = ["1","2","3","4"];
var A2 = [
{label:"one", value:"1"},
{label:"two", value:"2"},
{label:"three", value:"3"},
{label:"four", value:"4"},
{label:"five", value:"5"},
{label:"six", value:"6"},
];
var result = [];
// Fixed: the original compared A1[i] with A2[j].value, but i indexes A2
// and j indexes A1, so the indices were swapped.
for (var i = 0; i < A2.length; i++) {
    for (var j = 0; j < A1.length; j++) {
        if (A1[j] == A2[i].value) {
            result.push(A2[i]);
        }
    }
}
The output of the above is :
result = [
{label:"one", value:"1"},
{label:"two", value:"2"},
{label:"three", value:"3"},
{label:"four", value:"4"},
]
Answer: As others have already mentioned, Array.prototype.filter() might be the simplest approach (or Array.prototype.reduce() could also be used but would require more conditional logic). It would typically be slower than the nested for loops because it would be adding additional function calls, but for small data sets it typically wouldn't be noticeable. For example, I did a search on Google for "jsperf filter nested loop" and found this jsPerf test.
Using Array.prototype.filter() on A2, pass a callback function that returns true when the value at property value is included in A1 by checking A1.indexOf() for a value greater than -1.
const result = A2.filter(function(o) {
return A1.indexOf(o.value) > -1;
});
This can be simplified to a single line using an ES-6 arrow function and Array.prototype.includes() (Not supported by IE):
const result = A2.filter(o => A1.includes(o.value));
Try it in this snippet:
var A1 = ["1","2","3","4"];
var A2 = [
{label:"one", value:"1"},
{label:"two", value:"2"},
{label:"three", value:"3"},
{label:"four", value:"4"},
{label:"five", value:"5"},
{label:"six", value:"6"},
];
const result = A2.filter(o => A1.includes(o.value));
console.log('result', result);
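As an aside not covered in the original answer: when the arrays grow large, every `indexOf`/`includes` call scans A1 from the start, making the filter O(n·m) overall. Converting A1 to a `Set` once makes each membership check O(1). A minimal sketch:

```javascript
const A1 = ["1", "2", "3", "4"];
const A2 = [
  { label: "one", value: "1" },
  { label: "two", value: "2" },
  { label: "three", value: "3" },
  { label: "four", value: "4" },
  { label: "five", value: "5" },
  { label: "six", value: "6" },
];

// Build the lookup structure once, then filter with O(1) membership checks.
const wanted = new Set(A1);
const result = A2.filter(o => wanted.has(o.value));

console.log('result', result); // the first four objects of A2
```

For the small arrays in the question this makes no measurable difference, but it keeps the code a one-liner while avoiding the quadratic scan.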
If you wanted to use Underscore.js, _.filter() and _.includes() could be used to filter out any object in A2 without a value for the value property contained in A1. Expand the snippet below for a demonstration.
var A1 = ["1","2","3","4"];
var A2 = [
{label:"one", value:"1"},
{label:"two", value:"2"},
{label:"three", value:"3"},
{label:"four", value:"4"},
{label:"five", value:"5"},
{label:"six", value:"6"},
];
const result = _.filter(A2, function(o) { return _.includes(A1, o.value);});
console.log('result', result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.8.3/underscore-min.js"></script>
There is an Underscore helper _.pluck(), but that is used to collect a value from each item in a collection at a given property (similar to Array.prototype.map()).
Lodash also has the same helpers: _.filter() and _.includes().
var A1 = ["1","2","3","4"];
var A2 = [
{label:"one", value:"1"},
{label:"two", value:"2"},
{label:"three", value:"3"},
{label:"four", value:"4"},
{label:"five", value:"5"},
{label:"six", value:"6"},
];
const result = _.filter(A2, function(o) { return _.includes(A1, o.value);});
console.log('result', result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.4/lodash.min.js"></script>
Though some question whether libraries like lodash and underscore are really needed anymore. For a discussion on that, check out this article. | {
"domain": "codereview.stackexchange",
"id": 28075,
"tags": "javascript, array, underscore.js"
} |