anchor | positive | source |
|---|---|---|
Chain rule for mutual information | Question: I am a bit confused about the following definition:
What does the comma mean? I know $I(X;Y)$ is the mutual information between $X$ and $Y$, but I am not sure how to interpret $I(X_1,X_2;Y)$.
Answer: $I(X_1,X_2;Y)$ is $I(X;Y)$, where $X = (X_1,X_2)$.
For example, suppose that $X_1,Y$ are two independent uniformly random bits, and $X_2 = X_1 \oplus Y$. Then $I(X_1;Y) = 0$ while $I(X_1,X_2;Y) = 1$.
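This can be checked numerically. A quick sketch in Python (the helper below just computes mutual information in bits from an explicit joint distribution):

```python
from math import log2
from collections import Counter

# Joint distribution of (X1, X2, Y): uniform over the four triples below.
joint = {(0, 0, 0): 0.25, (0, 1, 1): 0.25, (1, 0, 1): 0.25, (1, 1, 0): 0.25}

def mutual_information(p, idx_a, idx_b):
    """I(A;B) in bits, where idx_a/idx_b select coordinates of each outcome."""
    pa, pb, pab = Counter(), Counter(), Counter()
    for outcome, pr in p.items():
        a = tuple(outcome[i] for i in idx_a)
        b = tuple(outcome[i] for i in idx_b)
        pa[a] += pr
        pb[b] += pr
        pab[a, b] += pr
    return sum(pr * log2(pr / (pa[a] * pb[b])) for (a, b), pr in pab.items())

print(mutual_information(joint, (0,), (2,)))    # I(X1;Y)      -> 0.0
print(mutual_information(joint, (0, 1), (2,)))  # I(X1,X2;Y)   -> 1.0
```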
In more detail, the joint distribution of $(X_1,X_2,Y)$ is the uniform distribution over the vectors $(0,0,0),(0,1,1),(1,0,1),(1,1,0)$. The variables $X_1$ and $Y$ are independent, and so $I(X_1;Y) = 0$. In contrast, the variable $(X_1,X_2)$ (which ranges over $\{0,1\}^2$) determines $Y$, and so $H(Y|X_1,X_2) = 0$. Consequently, $$I(X_1,X_2;Y) = H(Y) - H(Y|X_1,X_2) = H(Y) = 1.$$ | {
"domain": "cs.stackexchange",
"id": 20012,
"tags": "information-theory"
} |
Why wouldn't my robot stop? | Question: I am working on an Arduino based robot which engages a braking mechanism upon detecting anything in front. I was using an ultrasonic sensor to detect obstacles, which worked well while the robot was on my table (i.e. under construction). But when I ran it on the ground, it didn't stop and crashed.
The robot is programmed so that if anything is detected 50 cm ahead of the robot, the braking mechanism stops the wheels. But when testing, the robot just wouldn't stop.
My robot is running at an average 7.5 m/s. Thinking that the Doppler effect might have rendered my sensor useless, I tried a little IR sensor I had lying around (range approx. 25 cm), but that didn't work either.
What am I doing wrong here?
Answer: 7.5 m/s is pretty fast for a robot, but still not that much compared to the speed of sound (approx. 330 m/s), so the Doppler effect may introduce some error into the measurements, but nothing critical. The point is that if you set the detection distance to only 50 cm, your robot has only some 66 ms to:
perform the measurement
communicate with the sensor, or make an AD conversion
run the algorithm
send a command to the motor driver
stop the motors and decelerate
Even if it starts braking immediately, its inertia may be too big to decelerate completely before crashing into the obstacle. Would a car driving 7.5 m/s (27 km/h) stop in a distance of 0.5 m?
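The timing budget and the braking distances can be sanity-checked in a few lines; the speed and detection distance come from the question, while the decelerations are assumed values for illustration:

```python
speed = 7.5          # m/s, from the question
detect_dist = 0.50   # m, detection distance

# Time from detection to impact if nothing slows the robot down.
print(f"time to impact: {detect_dist / speed * 1000:.0f} ms")

# Minimum stopping distance under a constant deceleration a: v^2 / (2a).
for a in (2.0, 5.0, 10.0):   # m/s^2, assumed braking decelerations
    print(f"a = {a:4.1f} m/s^2 -> needs {speed**2 / (2 * a):5.2f} m to stop")
```

All of these stopping distances are far more than 0.5 m, which is the point above.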
Suggested solutions:
Try increasing the detection distance
Try using an IR sensor with a digital interface - their measurement times are much lower than those of ultrasonic sensors
If you have to brake really quickly, think of using additional mechanical brakes | {
"domain": "robotics.stackexchange",
"id": 1090,
"tags": "arduino, motor, ultrasonic-sensors"
} |
Trying to understand the big picture behind DNA sequencing, alignment and searching | Question: I'm about to start a bioinformatics research project but I haven't any biological background.
I know my project is in regards to a performance analysis of DNA sequencing and searching "weapons" like Hadoop, Apache Spark and Apache Flink - so I've spent the last couple of days trying to put together the "DNA picture" before I get started with the programming stuff.
My understanding of the situation is that:
Next-generation sequencing (NGS) techniques are used to efficiently provide reads of DNA (conversions from real, physical DNA to something that can be read and analysed); however, today's most practical methods provide reads that are short and out of order.
Reads tell us which nucleotides, labeled one of ACGT, occur in sequence. Different nucleotides or values to take their place may exist, like N or X. Reads could range from 50 to 1000s of nucleotides in length depending on the method of sequencing.
You can find historical raw read data in various places online, including the Sequence Read Archive (SRA). The same website contains lots of other DNA/bio related information. Reads are commonly stored in .fasta files which follow a simple and practical standard. A single file could contain a very small or very large read or sequence.
Reads are then provided to DNA alignment programs like bowtie which will place them back in the correct order. The algorithms could use a template sequence to align them, or run "de novo" (without a template). The result of these alignments have also been indexed online however for the purpose of my studies I'll probably be aligning them myself.
Once aligned (or maybe during alignment), nucleotide differences from a template sequence can be programmatically found or searched for, with crossbow for instance. Note that many other tasks can also be performed - not only this search. If a particular substitution occurs in more than 1% of what I think is called a genome's population, then it is called a Single-nucleotide Polymorphism (SNP or "snip"). Most SNPs have two alleles, or two different recorded nucleotides (like either G or C), but more than two is possible.
SNPs can be studied and mapped to various conditions or characteristics. A particular nucleotide could be responsible for part of one's emotional tendencies, reaction to particular medicines, or maybe anything, so a particular SNP could make a big difference.
What am I missing/what did I get wrong?
Answer: Here's a quick summary of a few mis-hits in your otherwise good analysis:
Not many bioinformatics applications use Hadoop, Apache Spark or Apache Flink. In fact, I have never heard of the Apache Spark and Flink tools, and I've seen only 2 people use Hadoop to process alignment files.
Reads are not "converted" from real, physical DNA. They are representations of the signals from the molecules making up the DNA, as read by sequencing machines.
A,T,C,G are molecules making up DNA. N refers to "any of A,T,C,G", which translates to either unknown or ambiguous.
Reads are stored in FASTQ files. Sequences are stored in FASTA files. Reads include sequence as well as quality information, so, FASTA+QUAL=FASTQ
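The structural difference can be illustrated with made-up records (the read name, bases, and quality string here are invented for the example):

```python
# FASTA: a header line starting with '>' followed by the sequence.
fasta = """>read_1
ACGTNACGT
"""

# FASTQ: name, sequence, '+' separator, and one quality character per base.
fastq = """@read_1
ACGTNACGT
+
IIIIIFFFF
"""

def parse_fastq(text):
    """Minimal 4-lines-per-record FASTQ parser (no wrapped lines)."""
    lines = text.strip().split("\n")
    return [(lines[i][1:], lines[i + 1], lines[i + 3])
            for i in range(0, len(lines), 4)]

print(parse_fastq(fastq))   # [('read_1', 'ACGTNACGT', 'IIIIIFFFF')]
```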
Re-ordering is a crude but approximate way to think about it. Remember, this process involves overlaps while reordering seldom does. Overlaps are crucial to the process of assembly/alignment. You are correct about the alignment to reference sequence and de novo assembly parts.
This is correct, though statistical models are applied to account for differences that are not necessarily significant variants (such as for sequencing errors)
Yes, variants, which include SNPs, can be correlated to differences in phenotype (traits). Emotional tendencies are a bit too advanced; think of something more basic like diabetes or eye color.
The picture you have here is of a regular NGS analysis pipeline. This involves alignment/assembly, variant calling and biological analysis with relevant hypotheses. The assembly/alignment is the most computationally expensive part, and we use either HPC clusters or scalable cloud services such as AWS to get this done.
You should definitely talk to a biologist that has some computational experience to gain insight into the reason behind our analyses. Once you understand the motivations, your contribution will be more relevant and helpful to the community. | {
"domain": "biology.stackexchange",
"id": 3907,
"tags": "bioinformatics, dna-sequencing, snp"
} |
Is it correct to talk about an empty orbital? | Question: Professor A. J. Kirby mentions:
The properties of an orbital are those of an electron contained in it. It is normal practice, illogical though it may sound, to talk of 'vacant orbitals'. The properties of vacant orbitals are those calculated for electrons occupying them.
Since an orbital isn't defined until it is occupied by an electron, would it still be correct to say that an empty orbital (such as a LUMO) can interact with other filled orbitals?
Answer:
"The properties of an orbital are those of an electron contained in it. It is normal practice, illogical though it may sound, to talk of 'vacant orbitals'. The properties of vacant orbitals are those calculated for electrons occupying them."
I consider this a bit of a truism, especially the last sentence. IUPAC defines an orbital as:
Wavefunction depending explicitly on the spatial coordinates of only one electron.
which as noted in this answer falls short of considering spin. So let's use that definition for spatial orbitals. (For a spin-orbital, the analogous definition would be: depending explicitly on the spatial coordinates and spin coordinate ...) At this point, let us point out that a spin-orbital is a one-electron wavefunction. It is not an observable. One can observe the density or even the spin-density by means of X-ray diffraction, indirectly by NMR, ESR etc. (Another truism: orbitals of single-electron systems, such as the hydrogen atom, are an important, but also somewhat trivial exception.) The density can be calculated from a many-electron wavefunction (WF) and a popular way of obtaining such a WF is combining several orbitals. This involves a lot more theory and mathematics than introductory chemistry courses can show.
So one can decide that if no electron is present, there is no orbital and stand on sound mathematical ground. Then again, I can just put an electron there by means of mathematics. Depending on the "computation-chemistry-method", for instance when employing a basis set (which is by far the most common approach), one can calculate properties even for empty orbitals such as the orbital energy (not an observable), which can be used to (approximately) describe electronic excitations (an observable). Computational chemists have resolved to call the empty orbitals "virtual" to bridge the gap between the two opposing views and use them as mathematical tools to describe excited states or to improve the quality of the description of the ground state.
Since an orbital isn't defined until it is occupied by an electron, would it still be correct to say that an empty orbital (such as a LUMO) can interact with other filled orbitals?
Let's consider the formation of a bond between two hydrogen atoms in a special way: reversing a heterolytic splitting, which we will briefly compare with one reversing a homolytic splitting.
In order to decide this question, one needs to split the non-existent hair dividing the following two views: a) Since there are no empty orbitals, in the heterolytic case, the proton will distort the orbitals on the anion until the bond is formed. Does the orbital on the former proton now magically appear? b) There is an empty orbital on the proton. However, at large distance, its effect on the anion can be simulated by an electric field, which should not carry orbitals (one can assume that the metal plates creating the field are far away and crank up the charge). If a situation involving an empty orbital cannot be distinguished from one where there isn't one, is it really there?
Of course, the end result of either way of looking at things is the same hydrogen molecule that we also get from combining two hydrogen atoms the standard way. I thus suggest abandoning the view of orbitals carried around by atoms (except in the computational chemist's way, as will be outlined below). Rather, I suggest thinking of the effective potential felt by a newly added electron - where would it go? Regardless of how the nuclei got to where they are now, where do the electrons go? "Unoccupied orbital" is then a useful shorthand for the relevant regions.
(At this point, I consider the original question answered. I will elaborate a bit on how I got here.)
The last question of the previous paragraph is one way of looking at the algorithm of the Hartree-Fock procedure (the first step of wavefunction-based quantum chemistry, which is comparable to density functional theory, DFT, in this regard). We have some "basis set" (and we do not care what it looks like right now) containing candidates for orbitals, nuclei (i.e. charges and positions) and a number of electrons. The first step is to evaluate the potential/the forces acting on the candidate orbitals, which comes from the nuclei only at this point$^1$. One then linearly combines the candidate orbitals (that's LCAO right there) to form the best choice one can make. Obeying the aufbau principle, one fills the candidate orbitals until all electrons have got an orbital. One then updates the potential (which now also considers the electrons), combines again, fills the new candidates, updates the potential again and so forth until the changes between iterations are small. The result is then evaluated in terms of energy, and possibly electron density or other observables. By doing it this way, one arrives at the familiar MO picture without assuming an electron distribution or specific orbitals filled in a certain way on certain atoms.
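To make the "combine candidate orbitals, then fill by aufbau" step concrete, here is a deliberately tiny toy: H2 in a basis of one candidate orbital per atom, with an assumed Hückel-style on-site energy alpha and coupling beta standing in for real integrals, and no self-consistency loop:

```python
import math

alpha, beta = -13.6, -3.0   # eV; illustrative parameters, not computed values

# For the symmetric 2x2 Hamiltonian [[alpha, beta], [beta, alpha]], the best
# linear combinations (LCAO) are the in-phase and out-of-phase sums.
c = 1 / math.sqrt(2)
bonding, antibonding = (c, c), (c, -c)   # coefficients on the two atoms
e_bonding = alpha + beta                 # lower energy: both electrons go here
e_virtual = alpha - beta                 # stays empty: a "virtual" orbital

print(e_bonding, e_virtual)
```

The empty out-of-phase combination is still perfectly well defined mathematically - it is exactly the kind of "virtual" orbital discussed above.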
Let's get back to the basis set. In the previous paragraph, no assumption was made about its nature. The initial candidate orbitals could be any shape (as long as they are reasonable from a mathematical point of view; for instance, they must be twice differentiable, square-integrable etc.) and many different choices exist. Experience shows that using functions resembling the exact orbitals of hydrogen is very useful for molecular chemistry$^2$. This is because they combine results of decent accuracy with computational efficiency and chemist-friendly starting points of interpretation - such as HOMOs and LUMOs that can be related to individual atoms, flawed though that line of reasoning may be (as I have hoped to show with this wall of text).
Footnotes:
$^1$ A bit of a white lie: this is only the special situation of the so-called core guess. But done here for illustration.
$^2$ Solid state chemistry is a different matter, where standing waves are the norm. | {
"domain": "chemistry.stackexchange",
"id": 9238,
"tags": "quantum-chemistry, orbitals, theoretical-chemistry"
} |
Data reduction program | Question: This is my first program in Python and the first time I'm writing in a dynamically typed language. I have substituted the real detectPeaks function with a simple placeholder. I'm aware of several problems with this program.
I still don't quite know how to 'live with' Python scoping, hence the qFix function. I didn't bother to find a better name since I want to rewrite the whole program.
Doing anything outside of any function seems unclean to me, but it seems that it is OK in Python. Am I right? It did allow me to have a global cmdArgs object, which makes the logic connected to command-line arguments clean.
I'm also concerned with the naming of functions and variables and the structure of the program. The program does not have to be very flexible - I could get away with hardcoding most things, but I prefer not to unless avoiding it would complicate the code significantly.
__version__ = "dirt3"
import sys, os.path, argparse, glob
import numpy as np

def detectPeaks(in_list):
    in_list = [s.replace(',', '.') for s in in_list]
    in_list.pop() # remove "\r\n"
    X = np.array(map(float, in_list[:2]))
    Y = np.array(map(float, in_list[2:]))
    XminIndex = np.argmin(X)
    YminIndex = np.argmin(Y)
    val = X[XminIndex] + Y[YminIndex]
    return (XminIndex, YminIndex, val)

def processFileLine(line):
    No, time, A, B = line.split('\t')[0], line.split('\t')[1], line.split("\t")[2::2], line.split("\t")[3::2]
    if cmdArgs.A:
        retA = detectPeaks(A)
        ansA = No + '\t' + time + '\t' + str(retA[0]) + '\t' + str(retA[1]) + '\t' + str(retA[2]) + '\n'
    else:
        ansA = "" # to get rid of an UnboundLocalError
    if cmdArgs.B:
        retB = detectPeaks(B)
        ansB = No + '\t' + time + '\t' + str(retB[0]) + '\t' + str(retB[1]) + '\t' + str(retB[2]) + '\n'
    else:
        ansB = ""
    return ansA + ansB

def mkOutFilePath(outDirPath, inFilePath):
    inFilename = os.path.basename(inFilePath)
    return os.path.join(outDirPath, inFilename + ".peak")

def qFix(inFilePath):
    if inFilePath == '-':
        return sys.stdin
    else:
        return open(inFilePath, "r")

def processFile(inFilePath):
    '''
    if inFilePath == '-':
        inFile = sys.stdin
    else:
        inFIle = open(inFilePath, "r")
    '''
    inFile = qFix(inFilePath)
    if cmdArgs.outFiles == '-':
        outFile = sys.stdout
    elif os.path.isfile(cmdArgs.outFiles):
        outFile = open(cmdArgs.outFiles, "a")
    else:
        outFile = open(mkOutFilePath(cmdArgs.outFiles, inFilePath), "w")
    for line in inFile.readlines():
        if line[0] == '#':
            outFile.write(line)
        elif line[0] == 'N' and line[1] == 'r':
            outFile.write("Nr\tTime\tX\tY\tValue\n")
        else:
            outFile.write(processFileLine(line))

def main():
    if cmdArgs.inFiles == '-':
        processFile('-')
    else:
        if os.path.isfile(cmdArgs.inFiles):
            filesList = [cmdArgs.inFiles]
        else:
            if cmdArgs.recuresive:
                filesList = [y for x in os.walk(cmdArgs.inFiles) for y in glob.glob(os.path.join(x[0], '*.dat'))]
            else:
                filesList = glob.glob(os.path.join(cmdArgs.inFiles, '*.dat'))
        for filePath in filesList:
            processFile(filePath)

def checkPath(path):
    if (path != '-') and not os.path.exists(path):
        exit(2)
    return path

if __name__ == "__main__":
    cmdArgsParser = argparse.ArgumentParser()
    cmdArgsParser.add_argument('inFiles', nargs='?', default='-', type=checkPath, help="Path to input file(s).")
    cmdArgsParser.add_argument('outFiles', nargs='?', default='-', type=checkPath, help="Path to output file(s).")
    cmdArgsParser.add_argument("-d", "--delete", action="store_true", help="Delete input files after procesing.")
    cmdArgsParser.add_argument("-r", "--recuresive", action="store_true", help="Operate recuresively.")
    cmdArgsParser.add_argument("-A", action="store_true", help="Use data from A set.")
    cmdArgsParser.add_argument("-B", action="store_true", help="Use data from B set.")
    cmdArgsParser.add_argument("-V", "--version", action="version", version=__version__)
    cmdArgs = cmdArgsParser.parse_args()
    main()
Files read by the program have comments that are to be copied to the output.
Example file (simplified; there is lots of data in real ones):
# Comments
Nr: Time 1A 1B 2A 2B 3A 3B 4A 4B
1 2015 0,10 0,10 0,10 0,10 0,10 0,10 0,10 0,10
2 2015 0,10 0,10 0,10 0,10 0,10 0,10 0,10 0,10
3 2015 0,10 0,10 0,10 0,10 0,10 0,10 0,10 0,10
4 2015 0,10 0,10 0,10 -0,10 0,10 0,10 0,10 0,10
5 2015 0,10 0,10 0,10 0,10 0,10 0,10 0,10 0,10
6 2015 0,10 0,10 0,10 0,15 0,10 0,10 0,10 0,10
7 2015 0,10 0,20 0,30 0,20 0,10 0,10 0,10 0,10
8 2015 -0,10 0,10 0,10 0,10 -0,10 0,10 0,10 0,10
Output for sample file:
# Comments
Nr Time X Y Value
1 2015 0 0 0.2
2 2015 0 0 0.2
3 2015 0 0 0.2
4 2015 0 0 0.2
5 2015 0 0 0.2
6 2015 0 0 0.2
7 2015 0 0 0.2
8 2015 0 0 -0.2
Answer: Some suggestions:
Follow the pep8 style guide.
In detectPeaks, you should slice in_list initially, rather than popping. So in_list[:-1].
In detectPeaks, you should convert to float in the initial list comprehension.
In detectPeaks, you should convert to a single numpy array, then slice that to get X and Y.
In detectPeaks, you don't need to wrap the return in ( )
In processFileLine, don't put multiple operations on a single line like that.
In processFileLine, you should do the splitting once, then get the items from that.
In processFileLine, you should use string formatting. So, for example, ansA = '{}\t{}\t{}\t{}\t{}\n'.format(No, time, *retA). Or better yet you can define a string ansform = '{}\t{}\t{}\t{}\t{}\n' at the beginning, then apply the format to it in each case, so ansA = ansform.format(No, time, *retA) and ansB = ansform.format(No, time, *retB).
Use with to automatically open and close files when you are done with them.
When looping over a file object, you don't need readlines. Just do, for example, for line in inFile. That will automatically loop over all the lines in the file.
In processFile, you should test for a slice of a string. So elif line[:2] == 'Nr':
In the loop of processFile, it would be better to define a string in the if...elif...else block, and then after the block write that string.
Never, ever, under any circumstances call exit. If you need to exit, either allow the program to exit normally or throw an exception. In your case, raise an exception.
You should put the argument parsing in a function.
In main, you should return at the end of the if cmdArgs.inFiles == '-': block, which will allow you to avoid the else case. Or better yet, do if cmdArgs.inFiles == '-' or os.path.isfile(cmdArgs.inFiles):, since filesList = [cmdArgs.inFiles] will work in both cases.
In main, you should do elif cmdArgs.recuresive (which is misspelled, by the way).
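For illustration, several of these suggestions combined might give a processFileLine like the sketch below; detectPeaks is stubbed out, and the A/B switches are passed as parameters instead of read from the global cmdArgs, just to keep the example self-contained:

```python
ansform = '{}\t{}\t{}\t{}\t{}\n'

def detectPeaks(values):            # stand-in for the real function
    return (0, 0, 0.2)

def processFileLine(line, use_a, use_b):
    fields = line.split('\t')       # split once, then index into the result
    no, time = fields[0], fields[1]
    a, b = fields[2::2], fields[3::2]
    ans = ''
    if use_a:
        ans += ansform.format(no, time, *detectPeaks(a))
    if use_b:
        ans += ansform.format(no, time, *detectPeaks(b))
    return ans

print(repr(processFileLine('1\t2015\t0,10\t0,20\t0,30\t0,40\n', True, False)))
```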
In main, the list comprehension should be a generator expression. This will allow you to avoid having to store all the filenames. | {
"domain": "codereview.stackexchange",
"id": 14761,
"tags": "python, beginner, python-2.x"
} |
shimming the c open function on linux and logging its usage | Question: I'm currently trying to implement a C shim which sits between the open function from the C standard library and a program.
The shim should transparently write all file paths being opened to a log within a directory defined by the environment variable IO_SHIM_PREFIX.
I have this working relatively well, but in order to achieve it I had to fake the fcntl.h header guards, and I think that there must be a better way.
If I include fcntl.h directly I get the following error
gcc -shared -fPIC -o shim.so src/shim.c -ldl
src/shim.c:47:5: error: conflicting types for 'open'
47 | int open(const char *pathname, int flags){
| ^~~~
In file included from src/shim.c:16:
/usr/include/fcntl.h:168:12: note: previous declaration of 'open' was here
168 | extern int open (const char *__file, int __oflag, ...) __nonnull ((1));
| ^~~~
make: *** [Makefile:3: all] Error 1
I'm guessing it has something to do with the __nonnull ((1)) part at the end, but I'm not much of a C programmer and I don't understand it.
Makefile
CC=gcc
all: src/shim.c
$(CC) -shared -fPIC -o shim.so src/shim.c -ldl
src/shim.c
// required for RTLD_NEXT
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>
#include <unistd.h>
#include <math.h>
#include <string.h>
// this bypasses the header guard in bits/fcntl.h
// its a terrible idea, but I don't know the right way to do this.
#define _FCNTL_H
#include <bits/fcntl.h>
#undef _FCNTL_H
typedef int (*open_fn_ptr)(const char*, int);
void write_to_log(const char* prefix, const char* pathname, open_fn_ptr original_open){
    pid_t pid = getpid();
    pid_t tid = gettid();
    char* pattern;
    if (prefix[strlen(prefix) - 1] == '/'){
        pattern = "%s%d_%d.log";
    }
    else{
        pattern = "%s/%d_%d.log";
    }
    int path_len = snprintf(NULL, 0, pattern, prefix, pid, tid);
    char* log_filepath = (char*) malloc(sizeof(char) * path_len);
    sprintf(log_filepath, pattern, prefix, pid, tid);
    int fd = original_open(log_filepath, O_WRONLY | O_APPEND | O_CREAT);
    write(fd, pathname, strlen(pathname));
    write(fd, "\n", sizeof(char));
    close(fd);
    free(log_filepath);
}

int open(const char *pathname, int flags){
    // acquire a pointer to the original implementation of open.
    open_fn_ptr original_open = dlsym(RTLD_NEXT, "open");
    const char* prefix = getenv("IO_SHIM_PREFIX");
    if (prefix != NULL) {
        write_to_log(prefix, pathname, original_open);
    }
    return original_open(pathname, flags);
}
Usage example
IO_SHIM_PREFIX=`realpath .` LD_PRELOAD=./shim.so brave-browser
Answer:
char* log_filepath = (char*) malloc(sizeof(char) * path_len);
malloc() returns a void*, which in C converts to any object-pointer type (unlike in C++, if you're used to that). So the cast is unnecessary (it's slightly harmful, in that it distracts attention from more dangerous casts). Also, because char is the unit of size, sizeof (char) can only be 1, so the multiplication is pointless. That line should be simply
char* log_filepath = malloc(path_len + 1);
Note the +1 there - the code had a bug because we forgot to allocate space for the null character that ends the string.
There are other uses of sizeof (char) where plain old 1 would be more appropriate and easier to read.
Here, we assign a string literal to a char*:
pattern = "%s%d_%d.log";
That's poor practice, as writes to *pattern are undefined behaviour. The best fix is to declare pattern to point to const char:
const char *pattern;
I don't think it's necessary to choose between the two patterns - the filesystem interface will ignore consecutive directory separators, so it's safe to always use "%s/%d_%d.log".
// this bypasses the header guard in bits/fcntl.h
// its a terrible idea, but I don't know the right way to do this.
#define _FCNTL_H
#include <bits/fcntl.h>
#undef _FCNTL_H
Your intuition here is correct; we are relying on the compiler/platform innards here rather than the public interface. Instead, you should give your open (and the open_fn_ptr typedef) the same variadic signature as the library's declaration:
int open(const char *pathname, int flags, ...)
{
Then we can simply include the documented, supported POSIX header:
#include <fcntl.h>
// acquire a pointer to the original implementation of open.
open_fn_ptr original_open = dlsym(RTLD_NEXT, "open");
That assignment is an invalid conversion in C - although void* can be assigned to any object pointer, that's not true for function pointers. It might be worth drawing attention here using an explicit cast, though GCC will still emit a warning - see Casting when using dlsym(). | {
"domain": "codereview.stackexchange",
"id": 41693,
"tags": "c, linux"
} |
Does any colour appear white to our eyes if its emitted power is extremely large? | Question: Let's consider an ideal monochromatic source (for instance red) and let's assume you can regulate its emitted power without compromising its spectral "finesse".
Start from 0 emitted W/sr. It appears black to our eyes.
Increase the emitted power just a tiny bit, still lower than the minimum detectable power of our eye and nervous system. It still appears black despite being physically red (it has a specific red wavelength).
Now, increase the emitted power to a normal level and it will appear red.
Finally, increase it to an exaggerated value that will almost blind our eyes (like when looking at the sun). Will it appear white?
My question starts from the following colour representation which is very common in graphic design:
Let's consider the maximum saturation (i.e. the boundary of the cylinder). Any wavelength (i.e. hue) becomes white if its lightness increases at maximum level (it is a relative lightness compared to a reference white, if I'm not wrong).
I guess that this kind of representation agrees with physics, otherwise it wouldn't make sense to me :( So I was wondering if any colour appears white if its luminance becomes extremely high.
Answer: Wikipedia says, "The HSL [hue, saturation, lightness] representation models the way different paints mix together to create color in the real world." (https://en.wikipedia.org/wiki/HSL_and_HSV) That is to say, it models subtractive color mixing, which is different from additive color mixing as in computer displays and, different from how humans perceive colors.
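Whatever the physical interpretation, the cylinder in the question does behave as described: at maximum lightness every hue maps to white. This can be checked with Python's standard colorsys module (note it uses the HLS argument order - hue, lightness, saturation - all in [0, 1]):

```python
import colorsys

for hue in (0.0, 0.33, 0.66):   # red-ish, green-ish, blue-ish hues
    print(colorsys.hls_to_rgb(hue, 1.0, 1.0))   # -> (1.0, 1.0, 1.0) every time
```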
"Lightness" in the HSL model has nothing to do with emitted power or luminance. In subtractive color mixing, "lighter" colors mean less ink is applied, and the lightest possible color is the color that you get by applying no ink at all to the presumed-to-be-white paper. | {
"domain": "physics.stackexchange",
"id": 91080,
"tags": "optics, visible-light, photons, vision, lightning"
} |
How to reconstruct information from a graph of an oscillation? | Question: We are given a graph of the position of a wave (amplitude).
How can we calculate the wavelength, frequency and the maximum speed of a particle attached to that wave?
We have
Speed = wavelength $\times$ frequency,
$W=2 \pi \times$ frequency ,
$V_{max}=A\times W$.
So how to calculate A?
Answer: $2\pi\omega \cdot A = v_{max}$ so try $v_{max}=\lambda \omega=A \cdot 2\pi\omega$. Solve for what you want, $A=\frac{\lambda}{2\pi}$ where $\lambda$ is different for each wave, as I enumerated in the comments, $\lambda_A=2$m and $\lambda_B=4$m.
I hope I answered your question right from what you're telling me. I will stress that in order to get a solid response, post your FULL question in clear terms and make sure you post everything you know and tried. Try to focus it down to conceptual questions. We want to help, but we also don't want to explicitly do your homework. Hope this helps, cheers. | {
"domain": "physics.stackexchange",
"id": 5323,
"tags": "homework-and-exercises, waves, frequency, wavelength"
} |
Energy eigenvalues of a Q.H.Oscillator with $[\hat{H},\hat{a}] = -\hbar \omega \hat{a}$ and $[\hat{H},\hat{a}^\dagger] = \hbar \omega \hat{a}^\dagger$ | Question: I just finished deriving the commutators:
\begin{align}
[\hat{H}, \hat{a}] &= -\hbar \omega \hat{a}\\
[\hat{H}, \hat{a}^\dagger] &= \hbar \omega \hat{a}^\dagger\\
\end{align}
On Wikipedia it is said that these commutators can be used to find the energy eigenstates of the quantum harmonic oscillator, but the explanation is a bit too fast there. Anyway, I strive to be able to derive the equation $W_n = \hbar \omega \left(n + \tfrac{1}{2}\right)$ in full, but first I need to clarify why these two relations hold:
\begin{align}
\hat{H}\hat{a} \psi_n &= (W_n - \hbar \omega) \hat{a} \psi_n\\
\hat{H}\hat{a}^\dagger \psi_n &= (W_n + \hbar \omega) \hat{a}^\dagger \psi_n
\end{align}
I can't see any commutators in the above relations, so how do the commutators I just calculated help us to get and solve these two relations?
I am sorry for asking such a basic question. I am self-taught and a real freshman at commutator algebra.
Answer: The commutators in the above expressions are used to change the order of the Hamiltonian and the annihilation or creation operators. I'll show you the first one in some detail; the second one should not give you problems afterwards.
We start from $\hat{H}\hat{a}\psi_n$. Using the commutator $[\hat{H},\hat{a}] = \hat{H}\hat{a}-\hat{a}\hat{H} = -\hbar\omega\hat{a}$, we can write $\hat{H}\hat{a}\psi_n = (\hat{a}\hat{H}-\hbar\omega\hat{a})\psi_n$. Because we have $\hat{H}\psi_n = W_n\psi_n$, we get $(\hat{a}\hat{H}-\hbar\omega\hat{a})\psi_n = (\hat{a}W_n-\hbar\omega\hat{a})\psi_n = (W_n-\hbar\omega)\hat{a}\psi_n$ (note that we can change the order of the annihilation operator and c-numbers $W_n$ and $\hbar\omega$). Therefore, we have $\hat{H}\hat{a}\psi_n = (W_n-\hbar\omega)\hat{a}\psi_n$ and we conclude that $\hat{a}\psi_n$ is an eigenstate of the Hamiltonian with eigenvalue $W_n-\hbar\omega$. | {
"domain": "physics.stackexchange",
"id": 7614,
"tags": "quantum-mechanics, homework-and-exercises, operators, harmonic-oscillator, commutator"
} |
Pattern in string count for C | Question: I am trying to write a function that will count the occurrences of a pattern in a string without overlap. This is what I have right now.
size_t count(char *string, char *pat) {
    size_t patternLength = strlen(pat);
    size_t stringLength = strlen(string);
    size_t count = 0;
    char *compareString = malloc(patternLength + 1);
    for (int i = 0; i < stringLength; i++) {
        if (i + patternLength > stringLength) return count;
        strcpy(compareString, &string[i], i+patternLength);
        if (strcmp(compareString, pat) == 0) {
            count++;
            i--;
            i+=patternLength; /* non overlapping find */
        }
    }
    free(compareString);
    return count;
}
Answer: Drop the Intermediate String
You're allocating compareString and strcpying into it just so that you can use strcmp. But instead, there's also memcmp, which lets you compare against the original string directly.
Fix the loop condition
You're not really looping from 0 to stringLength, you're looping from 0 to stringLength - patternLength. Splitting up those concerns in two is confusing.
Adjusting i
You have:
i--;
i+=patternLength; /* non overlapping find */
We can do both in one:
i += patternLength - 1;
Better solution:
size_t count(char *string, char *pat) {
    size_t patternLength = strlen(pat);
    size_t stringLength = strlen(string);
    size_t count = 0;
    for (size_t i = 0; i + patternLength <= stringLength; i++) { /* <= so a match at the very end counts */
        if (memcmp(string + i, pat, patternLength) == 0) {
            count++;
            i += patternLength - 1;
        }
    }
    return count;
} | {
"domain": "codereview.stackexchange",
"id": 17148,
"tags": "c, strings"
} |
What exactly happens in inelastic collisions between electrons and nucleus (Franck-Hertz experiment)? | Question: In the Franck-Hertz experiment, electrons inelastically hit atoms and excite their energy levels. What happens to the electron after it has hit (what I believe is) the nucleus? I imagine that if the kinetic energy of the incoming electron is just enough to make one of the atom's electrons jump from one level to another, then the incoming electron ends up at near zero speed near/on the nucleus; at this point we'd have an electron stuck to the nucleus, but that isn't permitted. So what's going on?
Answer: An atom is $\sim 10^5$ times bigger than its nucleus, meaning that the electron cloud surrounding the atom extends much further out than the nucleus itself. When an incoming electron approaches the atom, it will first encounter this electron cloud and interact with it through a repulsive force. The nucleus will be much further away, so its force on the electron is much smaller. When you also consider that the interaction is actually happening with the atom's outermost electron, while the other electrons are shielding most of the positive charge of the nucleus, then it's even clearer that the electron is not interacting with the nucleus. | {
"domain": "physics.stackexchange",
"id": 53274,
"tags": "quantum-mechanics, experimental-physics, atomic-physics"
} |
For stabilizer codes, is a certain logical operation unique? | Question: Suppose we have a $[[n, 1]]$ stabilizer code $Q$ and a single-qubit unitary $U$. We define the logical counterpart of $U$ as $\bar{U}$. My question is: Is there just one $\bar{U}$ up to stabilizers of $Q$?
I am asking this because I have seen the definition of transversal gate to be like $\bar{U}=U^{\otimes n}$, while the right side seems to be unique if $U$ is given.
Answer: The answer turns out to be NO, as noted in this answer. There seem to be an infinite number of operations that preserve the codespace of $Q$ and act as $\bar{U}$, for any given single-qubit unitary $U$.
As an example, we can define $Q$ to be a two-qubit trivial code with stabilizer $IZ$ and logical operators $X_L = XI$, $Z_L=ZI$. If $U=H$ is a Hadamard gate, then any operation of the following form
$$
H\otimes |0\rangle\langle0| + A\otimes |1\rangle\langle1|
$$
is a logical $\bar{H}$, where $A$ is an arbitrary single-qubit unitary. | {
"domain": "quantumcomputing.stackexchange",
"id": 5584,
"tags": "stabilizer-code, logical-gates"
} |
Fourier transform of unit step | Question: I was reading a PDF by Caltech, and in one of its sections the Fourier transform of the unit step signal is calculated, but I am confused: how can this be possible if the region of convergence for the Laplace transform ($1/s$) of the unit step signal does not contain the imaginary axis?
And if the above case is possible, then if it is given that the impulse response of a system is the unit step, its frequency response should also exist and equal $H(\omega)= \pi\delta(\omega) + 1/(j\omega)$; and then can we
calculate the Fourier transform of the output by computing $H(\omega)X(\omega)$, where $X(\omega)$ is the Fourier transform of the input?
Answer: The Fourier transform can be generalized for functions that are not absolutely integrable. We can define a Fourier transform for functions with a constant envelope (e.g., sine, cosine, complex exponential), and even for functions with polynomial growth (but not with exponential growth). In these cases we must be prepared to deal with generalized functions in the expression of the Fourier transform, such as the Dirac delta impulse or its derivatives. This is also true for the Fourier transform of the step function.
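For the unit step specifically, one standard route (using the known generalized transforms of the constant and of the sign function) is:

```latex
u(t) = \tfrac{1}{2} + \tfrac{1}{2}\operatorname{sgn}(t)
\;\xrightarrow{\ \mathcal{F}\ }\;
U(\omega) = \tfrac{1}{2}\cdot 2\pi\delta(\omega)
          + \tfrac{1}{2}\cdot\frac{2}{j\omega}
          = \pi\delta(\omega) + \frac{1}{j\omega}.
```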
The multiplication property of the Fourier transform remains true, albeit with certain restrictions, so we can generally compute the Fourier transform of the convolution of two functions by multiplying their Fourier transforms, provided that the convolution exists. | {
"domain": "dsp.stackexchange",
"id": 8916,
"tags": "fourier-transform, frequency-response, laplace-transform"
} |
Graph implementation adjacency list 2.0 | Question: Version 1:
Graph implementation adjacency list 1.0
Please note the following:
All edges are directed
A user of the class Graph shall never be able to access an object of type Edge or Vertex
Only class Vertex can create an object Edge and only class Graph can create an object Vertex
No loops are allowed
At most one edge in the same direction from one vertex to another
Edge.h
#ifndef EDGE_H
#define EDGE_H
#include <string>
class Edge
{
public:
class ConstructionToken //Only class Vertex can create an object Edge
{
private:
ConstructionToken();
friend class Vertex;
};
Edge( const Edge &);
Edge( const ConstructionToken & );
private:
//weight, etc...
};
#endif /* EDGE_H */
Edge.cpp
#include "Edge.h"
Edge::ConstructionToken::ConstructionToken() = default;
Edge::Edge( const Edge & ) = default;
Edge::Edge( const ConstructionToken & )
{
}
Vertex.h
#ifndef VERTEX_H
#define VERTEX_H
#include <iterator>
#include <map>
#include <vector>
#include "Edge.h"
class Vertex
{
public:
class ConstructionToken //Only Graph can create an object of type Vertex
{
private:
ConstructionToken() = default;
friend class Graph;
};
Vertex( const ConstructionToken & );
const std::vector<std::string> copy_edges() const;
void insert_edge( const std::string & );
void remove_edge( const std::string & );
private:
std::map<std::string, Edge> edges;
//weight, visited, etc...
};
#endif /* VERTEX_H */
Vertex.cpp
#include <vector>
#include <utility>
#include "Vertex.h"
#include "Edge.h"
using edge_pair = std::pair<std::string, Edge>;
Vertex::Vertex( const ConstructionToken & ){}
void
Vertex::insert_edge( const std::string & end_point )
{
Edge new_edge{ Edge::ConstructionToken{} };
edge_pair temp( end_point, new_edge );
edges.insert( temp );
}
void
Vertex::remove_edge( const std::string & edge )
{
edges.erase( edge );
}
const std::vector<std::string>
Vertex::copy_edges() const
{
std::vector<std::string> keys;
for( auto& pair : edges )
{
keys.push_back( pair.first );
}
return keys;
}
Graph.h
#ifndef GRAPH_H
#define GRAPH_H
#include <map>
#include <string>
#include "Vertex.h"
class Graph
{
public:
Graph() = default;;
void insert_vertex( std::string);
void insert_edge( std::string, std::string);
void remove_edge( std::string, std::string );
Graph transpose() const;
Graph merge( const Graph & ) const;
Graph inverse() const;
void print_graph() const;
protected:
void insert_vertex( std::string, Vertex);
void insert_edge( std::string, Edge);
private:
std::map<std::string,Vertex> vertexes;
};
void print_graph( Graph );
#endif /* GRAPH_H */
Graph.cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include "Graph.h"
#include "Vertex.h"
#include "Edge.h"
void
Graph::insert_vertex( std::string name)
{
//Check constructor for Vertex if not understandable
Vertex::ConstructionToken c;
Vertex v{ c };
insert_vertex( name, v );
}
void
Graph::insert_vertex( std::string name, Vertex v )
{
std::pair<std::string, Vertex> temp (name, v );
vertexes.insert( temp );
}
void
Graph::insert_edge( std::string node, std::string new_edge )
{
if( node == new_edge ) //No loops are allowed
{
return;
}
//Check that the node exists
auto it = vertexes.find( node );
if( it == vertexes.end() )
{
return;
}
it -> second.insert_edge( new_edge );
}
void
Graph::remove_edge( std::string node, std::string edge )
{
auto it = vertexes.find( node );
if( it == vertexes.end() )
{
return;
}
it -> second.remove_edge( edge );
}
Graph
Graph::transpose() const
{
Graph Graph_T;
//Vertex
for( auto& pair : vertexes )
{
Graph_T.insert_vertex( pair.first );
}
//Edges
std::vector<std::string> end_points;
for( auto& pair : vertexes )
{
end_points = pair.second.copy_edges();
for( auto & edge : end_points )
{
Graph_T.insert_edge( edge, pair.first );
}
}
return Graph_T;
}
Graph
Graph::merge( const Graph & G2 ) const
{
Graph merge_graphs;
//Merge vertexes
for( auto& pair : vertexes)
{
merge_graphs.insert_vertex( pair.first );
}
for( auto& pair : G2.vertexes )
{
merge_graphs.insert_vertex( pair.first );
}
//Merge edges
std::vector<std::string> end_points;
for( auto& pair : vertexes )
{
end_points = pair.second.copy_edges();
for( auto & edge : end_points )
{
merge_graphs.insert_edge( pair.first, edge );
}
}
for( auto& pair : G2.vertexes )
{
end_points = pair.second.copy_edges();
for( auto & edge : end_points )
{
merge_graphs.insert_edge( pair.first, edge );
}
}
return merge_graphs;
}
Graph
Graph::inverse() const
{
//Create a Graph temp which is complete
Graph temp;
for( auto& pair : vertexes )
{
temp.insert_vertex( pair.first );
}
for( auto& vertex1 : vertexes )
{
for( auto vertex2 : vertexes )
{
temp.insert_edge( vertex1.first, vertex2.first );
}
}
//Remove all edges in temp that also are in (*this)
std::vector<std::string> end_points;
for( auto& pair : vertexes )
{
end_points = pair.second.copy_edges();
for( auto edge : end_points )
{
temp.remove_edge( pair.first, edge );
}
}
return temp;
}
void
Graph::print_graph() const
{
std::vector<std::string> end_points;
for( auto& pair : vertexes )
{
end_points = pair.second.copy_edges();
std::cout << pair.first << " : ";
for( auto& edge : end_points )
{
std::cout << " -> " << edge;
}
std::cout << std::endl;
}
}
void print_graph( Graph G )
{
G.print_graph();
}
Answer: In all, this is a well constructed set of classes. Here are a few things I see that could help you improve it further.
Put everything in your own namespace
Avoid polluting the global namespace by wrapping your headers in your own namespace. This will save headaches later if you attempt to use your classes with some other library.
Pass const reference to free-standing print_graph
The compiler will create a copy of the passed Graph unless you declare it
void print_graph( const Graph& );
Eliminate the spurious semicolon
Within the Graph.h file is this line:
Graph() = default;;
It's not technically an error, but there should only be a single semicolon at the end of that line.
Consider adding other defaults
While it may be useful to specifically point out that the constructor for Graph is the default, the default copy and default move constructors are not specifically listed. It's not necessary, but it's not obvious to me (or probably to other readers of this class) why only one is listed.
Consider deleting unneeded constructors
The Edge and Vertex classes do not need or use the default constructors, so it may be prudent to explicitly delete them as:
Edge() = delete;
Consider changing print_graph to a stream inserter
The print_graph member function and freestanding function could both be replaced with a more flexible ostream inserter:
friend std::ostream& operator<<(std::ostream& out, const Graph& g)
{
std::vector<std::string> end_points;
for( auto& pair : g.vertexes )
{
end_points = pair.second.copy_edges();
out << pair.first << " : ";
for( auto& edge : end_points )
{
out << " -> " << edge;
}
out << std::endl;
}
return out;
} | {
"domain": "codereview.stackexchange",
"id": 13040,
"tags": "c++, c++11, reinventing-the-wheel, graph"
} |
Is the powerset of a regular set also a regular set? | Question: If so, where can I find a proof of it? If not, is there a counterexample?
By powerset of a regular language I mean the set of all subsets of a regular language.
Thank you, Marcus.
Answer: No. In computer science, a language is normally defined to be a subset of $\{0,1\}^*$. If $L$ is a language, then the powerset $2^L$ is not a subset of $\{0,1\}^*$, so it is not a language.
(See e.g., https://en.wikipedia.org/wiki/Formal_language#Definition.)
If $L$ is finite, then the powerset $2^L$ is finite, so as Emil Jeřábek points out, there is a way you can encode it so its encoding is a regular language, since all finite languages are regular.
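For the finite case the counting is easy to check directly (a sketch; the small language `L` here is made up):

```python
from itertools import chain, combinations

def powerset(L):
    """All subsets of a finite language L."""
    s = sorted(L)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

L = {"0", "10", "110"}            # a finite (hence regular) language
P = powerset(L)
assert len(P) == 2 ** len(L)      # 2^3 = 8 subsets, each itself a finite language
```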
If $L$ is infinite, then as Yuval Filmus points out, the powerset $2^L$ is not countable, so there is no way to encode it as a language. Thus, when $L$ is an infinite regular language, then the answer to your question is "no, not even if you're allowed to choose a reasonable encoding". | {
"domain": "cs.stackexchange",
"id": 15849,
"tags": "regular-languages, sets"
} |
Are Toffoli gates actually used in designing quantum circuits? | Question: In an actual quantum computer, are we designing circuits with Toffoli Gates and then using compilers or optimizers to remove redundancies so that we can use fewer qubits than a full Toffoli gates would require? Or are multiple gates composed by the compilers into a single quantum circuit?
Are these operations mathematically equivalent if you use Toffoli Gates or Fredkin gates, or is one gate easier for optimizers to work with?
Answer: I guess you assume that you can implement any quantum circuit using Toffoli gates only; this is not true.
The Toffoli gate is classically universal, but not quantum universal. Its importance in quantum information science rests on two facts:
Toffoli gate is classically universal, that is you can implement any classical circuit using Toffoli gates only;
Toffoli gate is reversible, so it is also a quantum gate.
The existence of the Toffoli gate is a proof that you can implement any classical circuit as a quantum circuit.
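A minimal illustration of the classical side of this (a sketch, on computational basis states only): with the target bit preset to 0 the Toffoli gate computes AND of its controls, and preset to 1 it computes NAND, which is classically universal:

```python
def toffoli(a, b, c):
    """Toffoli (CCNOT): flips c iff a and b are both 1. Reversible."""
    return a, b, c ^ (a & b)

# target preset to 0 -> Toffoli computes AND of the controls:
for a in (0, 1):
    for b in (0, 1):
        assert toffoli(a, b, 0)[2] == (a & b)

# target preset to 1 -> NAND, which is classically universal:
assert [toffoli(a, b, 1)[2]
        for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 1, 1, 0]

# reversibility: applying the gate twice is the identity
assert toffoli(*toffoli(1, 1, 0)) == (1, 1, 0)
```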
The notion of universal quantum set of gates is more subtle compared with classical case. A set of quantum gates is said to be computationally universal if it can simulate any quantum circuit with arbitrarily small error, and the simplest computationally universal set of quantum gates is Toffoli and Hadamard gates; see A Simple Proof that Toffoli and Hadamard are Quantum Universal | {
"domain": "quantumcomputing.stackexchange",
"id": 1527,
"tags": "gate-synthesis, universal-gates"
} |
What curves spacetime in Schwarzschild metric? | Question: I understand that the Schwarzschild solution is valid in the outside region of a massive object, with no other masses involved. Therefore the energy-momentum tensor is 0. But then: what curves space? In other words, in a vacuum without the presence of a massive object, the energy-momentum tensor is also 0, but that space is not curved.
Sorry if the question seems trivial, but I just don't understand.
Answer: Stationary space-times like the Schwarzschild metric have a time-like Killing vector field $\xi^a$. A Killing vector field is a generator of isometries of the metric, so you can think of a time-like Killing vector field as generating the symmetry that corresponds to energy conservation. Every space-time where the coefficients of the metric are not explicitly time dependent has a time-like Killing vector field.
The so-called Komar integral gives rise to the total energy, i.e. mass within a stationary space-time and is given by
$$M=-\frac{1}{8\pi G}\int_S\epsilon_{abcd}\nabla^c\xi^d,$$
where you integrate the so-called Hodge dual of the derivative of $\xi^a$ over the 2-sphere $S$ at space-like infinity ($r\to\infty$). (Note that there is no mess up with the indices, but the integrand is a two-form integrated over a 2-sphere.) The derivation of this can for example be found in GR by Wald from page 285 on.
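For the Schwarzschild metric this integral can be evaluated explicitly (a sketch; overall signs depend on orientation conventions). With $\xi = \partial_t$, the associated one-form is $\xi_a\,dx^a = -(1-2GM/r)\,dt$, so

```latex
d\xi = \frac{2GM}{r^2}\,dt\wedge dr,
\qquad
\star\, d\xi = -2GM\sin\theta\,d\theta\wedge d\phi,
\qquad
\oint_S \star\, d\xi = -8\pi G M,
```

and the Komar formula returns exactly the mass parameter $M$, independently of the radius of the sphere $S$.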
Although in the Schwarzschild metric the energy-momentum tensor is zero everywhere, it being a vacuum solution, the Komar integral is not zero, showing that with this interpretation mass is a property of the geometry itself. As a result we get the parameter from the metric which is usually interpreted as the mass of the black hole whose surroundings the metric models (or the mass of a star, if the metric is applied to one, but then it is only considered valid for $r$ larger than the star's radius) | {
"domain": "physics.stackexchange",
"id": 87108,
"tags": "general-relativity, black-holes, curvature, stress-energy-momentum-tensor, singularities"
} |
Why do some reactions require specific pressures to happen? | Question: For example, carbon monoxide reacts with hydrogen to synthesize methanol in the presence of some catalysts, but the pressure needs to be $\pu{50 atm}$ and the temperature needs to be $\pu{523 K}$.
Why the pressure? What does the pressure adds to the reaction?
Answer: ChemGuide has a good introductory article here.
The effects of increasing pressure and temperature are, to an extent, equivalent. Increased pressure leads to increased collisions and increased collision strength between molecules, allowing the (usually high) activation energy barrier to be overcome at a noticeable rate; at standard temperatures and pressures, for example, collisions between $\ce{CO}$ and $\ce{H2}$ are far too infrequent and involve far too little energy for the formation of $\ce{CH3OH}$.
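To get a feel for the numbers, here is a rough ideal-gas estimate (my own sketch, not from the answer): at fixed temperature the number density of molecules, and with it the collision frequency, scales linearly with pressure:

```python
# Ideal-gas estimate: n = P / (k_B * T), so at fixed T the number density
# (and hence the CO/H2 collision rate) grows linearly with pressure.
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 523.0                 # K, the synthesis temperature from the question
atm = 101325.0            # Pa

def number_density(p_pa, temperature):
    return p_pa / (k_B * temperature)

n_1  = number_density(1 * atm, T)
n_50 = number_density(50 * atm, T)
assert abs(n_50 / n_1 - 50) < 1e-9    # 50x the pressure -> 50x the density
```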
Remark. How do we achieve high pressures and temperatures? We could pressurize a steel cylinder and use it as a reaction vessel, for one, but a novel approach involves sonochemistry. Ultrasound waves in solution lead to cavitation, the formation and rapid collapse of bubbles, which are highly pressurized and at high temperatures. The use of ultrasound can even open up new reaction pathways not usually available; for example, radical species are often transient, but the forcing conditions in these bubbles allow them to survive long enough to react. | {
"domain": "chemistry.stackexchange",
"id": 7865,
"tags": "pressure, reaction-control"
} |
moveit pick contact between object and robot attached object | Question:
Hi, I'm trying to implement the moveit pick and place tutorial with eDO robotic arm.
The execution of the pick is ok until the retreat phase that doesn't happen. At this point it complains about the robot attached object colliding with the version of itself not attached to the robot. It seems like the planning scene is not updated correctly by the pick pipeline. If I try to plan any movement after the execution of the pick it works: the object is attached to the robot, collisions are checked also for the object and everything is as expected.
Any idea on how to solve this? If screenshots or other information can help please ask! Thanks in advance.
I get this warning in the node executing the tutorial code (where I just changed what was necessary to adapt it to a different robot):
[ WARN] [1580292540.888764909, 2094.339000000]: Fail: ABORTED: Solution found but the environment changed during execution and the path was aborted
This is part of the log for edo_moveit_planning_execution.launch (complete log here):
[ INFO] [1580292525.613987951, 2079.238000000]: Planning attempt 1 of at most 1
[ INFO] [1580292525.627931453, 2079.252000000]: Added plan for pipeline 'pick'. Queue is now of size 1
[ INFO] [1580292525.638283054, 2079.262000000]: Planner configuration 'edo' will use planner 'geometric::RRTConnect'. Additional configuration parameters will be set when the planner is constructed.
[ INFO] [1580292525.638923972, 2079.262000000]: edo/edo: Starting planning with 1 states already in datastructure
[ INFO] [1580292525.656668612, 2079.280000000]: edo/edo: Created 5 states (2 start + 3 goal)
[ INFO] [1580292525.656751369, 2079.280000000]: Solution found in 0.018063 seconds
[ INFO] [1580292525.707741909, 2079.330000000]: SimpleSetup: Path simplification took 0.050783 seconds and changed from 4 to 2 states
[ INFO] [1580292525.714201018, 2079.336000000]: Found successful manipulation plan!
[ INFO] [1580292525.714473434, 2079.337000000]: Pickup planning completed after 0.100263 seconds
[ INFO] [1580292525.715957646, 2079.338000000]: Disabling trajectory recording
[ INFO] [1580292533.718085292, 2087.242000000]: Controller successfully finished
[ INFO] [1580292537.356226829, 2090.847000000]: Controller successfully finished
[ INFO] [1580292538.876601021, 2092.348000000]: Found a contact between 'object' (type 'Object') and 'object' (type 'Robot attached'), which constitutes a collision. Contact information is not stored.
[ INFO] [1580292538.876921946, 2092.349000000]: Collision checking is considered complete (collision was found and 0 contacts are stored)
[ INFO] [1580292538.877004243, 2092.349000000]: Upcoming trajectory component 'retreat' is invalid
[ INFO] [1580292538.886717594, 2092.358000000]: Found a contact between 'object' (type 'Object') and 'object' (type 'Robot attached'), which constitutes a collision. Contact information is not stored.
[ INFO] [1580292538.886811120, 2092.358000000]: Collision checking is considered complete (collision was found and 0 contacts are stored)
[ INFO] [1580292538.886864355, 2092.358000000]: Trajectory component 'retreat' is invalid after scene update
[ INFO] [1580292538.886909123, 2092.358000000]: Stopping execution because the path to execute became invalid(probably the environment changed)
[ INFO] [1580292538.886967748, 2092.358000000]: Cancelling execution for
[ INFO] [1580292538.887087779, 2092.358000000]: Stopped trajectory execution.
[ INFO] [1580292538.887396084, 2092.358000000]: Controller successfully finished
[ INFO] [1580292538.887473120, 2092.358000000]: Completed trajectory execution with status PREEMPTED ...
[ INFO] [1580292538.887646527, 2092.358000000]: Waiting for a 2.000000 seconds before attempting a new plan ...
[ INFO] [1580292540.887836066, 2094.338000000]: Done waiting
I'm using ROS Melodic on Ubuntu with MoveIt master branch.
Originally posted by lucarinelli on ROS Answers with karma: 13 on 2020-01-29
Post score: 1
Original comments
Comment by lucarinelli on 2020-01-29:
I think it might be related to https://github.com/ros-planning/moveit/issues/1835
Answer:
I had same issue. I solved it using workaround as mentioned here #122 (in the update). Additionally, you may also have a look at this #1835.
Originally posted by Rajendra with karma: 26 on 2020-02-20
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by lucarinelli on 2020-02-20:
Thank you!
The 'add hacky object name hack' seems to be a good workaround for now.
Comment by nicola-sysdesign on 2021-10-14:
Hi, I'm facing the same issue. Unfortunately the link mentioned in #122 seems broken now. Can you describe the workaround that you used to solve this issue? | {
"domain": "robotics.stackexchange",
"id": 34343,
"tags": "ros, moveit, ros-melodic, collision"
} |
Function logic of PR2 Odometry iterativeLeastSquares | Question:
Hi all,
Can someone tell what happens in "iterativeLeastSquares" method in http://mirror.umd.edu/roswiki/doc/api/pr2_mechanism_controllers/html/classcontroller_1_1Pr2Odometry.html#af876e6a197abaf3f511d41389a1794d6
According to its description the function is used to compute the most likely solution to the odometry using iterative least squares. And, I tried understanding the code but with no luck.
If someone can give a rough idea, then I would be able to follow the code.
Thank you
CS
Originally posted by ChickenSoup on ROS Answers with karma: 387 on 2012-12-03
Post score: 0
Original comments
Comment by ChickenSoup on 2012-12-06:
what I need to know is what the algorithm does to the individual wheel velocities and steering angles of casters to come up with the final velocity of the base. If someone can give a pointer to a research paper that would be really appreciated.
Answer:
The "iterativeLeastSquares" function implements an iterative least squares technique to compute odometry.
It actually computes and returns x where A*x = b
x: [Vx Vy W]^T of the whole robot base i.e. Vx = odom_vel_.linear.x; Vy = odom_vel_.linear.y; W = odom_vel_.angular.z
b: [v0 0 v1 0 v2 0 v3 0]^T, where v0 = wheel 0's velocity along its steering direction and the following 0 = wheel 0's velocity in the direction perpendicular to its steering direction, and so on.
A: the matrix that transforms x to individual wheel velocities b.
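The A*x = b structure is easy to reproduce with made-up numbers (this is only a sketch of the idea; the two-wheel layout and the plain `lstsq` call are my assumptions, not the PR2 code, which iterates the least squares, presumably to down-weight inconsistent wheel readings):

```python
import numpy as np

# x = [Vx, Vy, W]; a wheel contact point at (px, py) on a rigid base moves
# with velocity (Vx - W*py, Vy + W*px), which we project onto the steering
# direction d (measured wheel speed) and onto its perpendicular n.
def row(u, p):
    (ux, uy), (px, py) = u, p
    return [ux, uy, -ux * py + uy * px]

wheels = [((0.2, 0.3), (1.0, 0.0)),     # (position, steering direction)
          ((-0.2, -0.3), (0.0, 1.0))]

A = []
for p, d in wheels:
    n = (-d[1], d[0])                   # perpendicular to steering direction
    A += [row(d, p), row(n, p)]
A = np.array(A)                          # 4 equations, 3 unknowns

x_true = np.array([0.5, -0.1, 0.2])      # base twist [Vx, Vy, W] to recover
b = A @ x_true                           # synthetic wheel measurements
x_est, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_est, x_true)        # least squares recovers the twist
```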
Originally posted by ChickenSoup with karma: 387 on 2013-02-08
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 11973,
"tags": "ros"
} |
Faraday's Law and Lenz's Law: Is there any theoretical explanations on why changing magnetic field induces an electric field? | Question: This is a more specific extension to this question I came across today
One certain aspect of Faraday's Law always stumped me (other than it is an experimental observation back in the 19th century)
The Maxwell-Faraday Equation reads:
$$\nabla \times \mathbf{E}=-\frac{\partial \mathbf{B}}{\partial t}$$
I am also briefly aware that in special relativity, magnetic fields in one frame are basically electric fields in another, but
Q1 How exactly does a changing magnetic field induce an electric field? Is there any theoretical explanation in the literature, using more fundamental theories such as QED and relativity, that explains how it happens?
Q2 Is there a theoretical reason why the electric field is produced in a way that opposes the change in the magnetic field?
Answer: Indeed, this observation remains mysterious from a 19th century viewpoint.
Since we know special relativity, though, it is natural in the covariant formulation of electromagnetism that spatial and temporal changes of fields are interrelated. More specifically, we need to express the three-vectors $\vec E$ and $\vec B$ in a covariant way, which is done by defining the field strength tensor $F$ component-wise as
$$ F^{i0} := E^i \quad \text{and} \quad F^{ij} = \epsilon^{ijk}B_k$$
This object now behaves properly (as a 2-tensor) under Lorentz transformations, in contrast to the three-vectors $\vec E$ and $\vec B$ whose components mix.
Now, Maxwell's equations of course must also be written covariantly,
$$ \partial_\mu F^{\mu\nu} = j^\nu \quad \text{and} \quad \partial_\mu\left(\frac{1}{2}\epsilon^{\mu\nu\sigma\rho}F_{\sigma\rho}\right) = 0$$
and if you write this out with $\partial_t$ and $\vec \nabla$ and so on again, you get back, among others, the Maxwell-Faraday equation.
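A concrete numerical check of this mixing (a sketch in units with $c=1$, using the standard field-transformation law for a boost along $x$): a frame containing only a magnetic field acquires an electric field for a boosted observer.

```python
import math

# Transformation of the field components under a boost with speed v along x
# (units c = 1, g = gamma = 1/sqrt(1 - v^2)):
#   E'_y = g*(E_y - v*B_z),  E'_z = g*(E_z + v*B_y)
#   B'_y = g*(B_y + v*E_z),  B'_z = g*(B_z - v*E_y)
def boost_x(E, B, v):
    g = 1.0 / math.sqrt(1.0 - v * v)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return ((Ex, g * (Ey - v * Bz), g * (Ez + v * By)),
            (Bx, g * (By + v * Ez), g * (Bz - v * Ey)))

# Pure magnetic field B = (0, 0, 1), no electric field, boosted at v = 0.6:
E2, B2 = boost_x((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), v=0.6)
assert abs(E2[1] + 0.75) < 1e-12    # E'_y = -gamma*v*B_z = -1.25*0.6
assert abs(B2[2] - 1.25) < 1e-12    # B'_z =  gamma*B_z
```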
So, essentially, the mixture of electric and magnetic fields, and their spatial and temporal changes, is a direct consequence of the fact that the world is not Galilean, but relativistic. | {
"domain": "physics.stackexchange",
"id": 20241,
"tags": "electromagnetism"
} |
catkin command not found | Question:
Hi Everyone, total noob here...
I've run into a problem while trying to use ROS on Ubuntu 12.04. I installed it according to the instructions on the website and the installation seemed to have gone well, but I am unable to build a catkin workspace. I tried rebooting and even installing catkin again but nothing. Terminal (over ssh to my Macbook Pro) continuously gives me the error:
administrator@ROS:~/catkin_ws/src$ catkin_init_workspace
Could neither symlink nor copy file "/opt/ros/hydro/share/catkin/cmake/toplevel.cmake" to "/home/administrator/catkin_ws/src/CMakeLists.txt":
[Errno 13] Permission denied
[Errno 13] Permission denied: '/home/administrator/catkin_ws/src/CMakeLists.txt
and if I login as root and attempt, I get:
root@ROS:~# cd ~/catkin_ws/src
root@ROS:~/catkin_ws/src# catkin_init_workspace
catkin_init_workspace: command not found
root@ROS:~/catkin_ws/src#
(ROS is the name of my Ubuntu machine).
I'm pretty sure it's something really simple that I'm missing (first time with ROS), but any help is appreciated.
Thanks in Advance!
Mr_E
Originally posted by mr_electric on ROS Answers with karma: 1 on 2013-12-29
Post score: 0
Original comments
Comment by mr_electric on 2013-12-29:
I first tried as a regular user, but it still did not allow me to access the file. Would deleting the directory I created be sufficient to fix the permissions, or will I have to do a full reinstall?
Thanks for your help!!
Mr_E
Answer:
You should do everything in the tutorials as a non-root user. If you use the root user for some operations, the generated files will be owned by root and other users will not have permission to access them, which will give you Permission denied errors like the ones above.
You should clear the workspace and do everything as a non root user.
Originally posted by tfoote with karma: 58457 on 2013-12-29
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by tfoote on 2013-12-29:
Please use comments for questions. Yes you need to delete the whole directory as root. (~/catkin_ws) | {
"domain": "robotics.stackexchange",
"id": 16544,
"tags": "ros, catkin, ros-hydro, ubuntu"
} |
Finding k shortest Paths with Eppstein's Algorithm | Question: I'm trying to figure out how the Path Graph $P(G)$ according to Eppstein's Algorithm in this paper works
and how I can reconstruct the $k$ shortest paths from $s$ to $t$ with the corresponding heap construction $H(G)$.
So far:
$out(v)$ contains all edges leaving a vertex $v$ in a graph $G$ which are not part of a shortest path in $G$. They are heap-ordered by the "waste of time" called $\delta(e)$ when using this edge instead of the one on a shortest paths. By applying Dijkstra I find the shortest paths to every vertex from $t$.
I can calculate this as the length of the edge plus the value of the head vertex (where the directed edge is pointing to) minus the value of the tail vertex (where the directed edge is starting). If this is $> 0$ the edge is not on a shortest path; if it is $= 0$ it is on a shortest path.
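This $\delta$ computation can be sketched directly (my own sketch, not from the paper): run Dijkstra from $t$ on the reversed graph to get the distances $d(v)$ to $t$, then $\delta(e) = l(e) + d(\text{head}) - d(\text{tail})$ for every edge; shortest-path-tree edges come out with $\delta = 0$.

```python
import heapq

def dist_to(t, edges, nodes):
    """Shortest distance from every node to t: Dijkstra on reversed edges."""
    rev = {v: [] for v in nodes}
    for u, v, w in edges:
        rev[v].append((u, w))
    d = {v: float("inf") for v in nodes}
    d[t] = 0.0
    pq = [(0.0, t)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:
            continue
        for v, w in rev[u]:
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

edges = [("s", "a", 1), ("a", "t", 1), ("s", "b", 2), ("b", "t", 3)]
d = dist_to("t", edges, {"s", "a", "b", "t"})
delta = {(u, v): w + d[v] - d[u] for u, v, w in edges}
assert delta[("s", "a")] == 0 and delta[("a", "t")] == 0  # on the s-t shortest path
assert delta[("s", "b")] == 3          # sidetrack: wastes 3 units of length
assert delta[("b", "t")] == 0          # b -> t is b's own shortest-path edge
```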
Now I build a 2-Min-Heap $H_{out}(v)$ by heapifying the set of edges $out(v)$ according to their $\delta(e)$ for any $v \in V$, where the root $outroot(v)$ has only one child (= subtree).
In order to build $H_T(v)$ I insert $outroot(v)$ in $H_T(next_T(v))$ beginning at the terminal vertex $t$. Every time a vertex is somehow touched while inserting, it is marked with a $*$.
Now I can build $H_G(v)$ by inserting the rest of $H_{out}(w)$ in $H_T(v)$. Every vertex in $H_G(v)$ contains either $2$ children from $H_T(v)$ and $1$ from $H_{out}(w)$ or $0$ from the first and $2$ from the second and is a 3-heap.
With $H_G(v)$ I can build a DAG called $D(G)$ containing a vertex for each $*$-marked vertex from $H_T(v)$ and for each non-root vertex from $H_{out}(v)$.
The roots of $H_G(v)$ in $D(G)$ are called $h(v)$ and they are connected to the vertices they belong to according to $out(v)$ by a "mapping".
So far, so good.
The paper says I can build $P(G)$ by inserting a root $r = r(s)$ and connecting it to $h(s)$ by an initial edge with weight $\delta(h(s))$. The vertices of $D(G)$ are the same in $P(G)$, but they are not weighted; instead the edges have lengths. Then for each directed edge $(u,v) \in D(G)$ the corresponding edges in $P(G)$ are created and weighted by $\delta(v) - \delta(u)$. They are called Heap Edges. Then for each vertex $v \in P(G)$ which represents an edge not on a shortest path, connecting a pair of vertices $u$ and $w$, a "cross edge" is created from $v$ to $h(w)$ in $P(G)$ with length $\delta(h(w))$. Every vertex in $P(G)$ has an out-degree of at most $4$.
The paths in $P(G)$ starting from $r$ are supposed to be in a one-to-one, length-preserving correspondence with the $s$-$t$-paths in $G$.
In the end a new heap-ordered 4-heap $H(G)$ is built. Each vertex corresponds to a path in $P(G)$ rooted at $r$. The parent of any vertex has one fewer edge. The weight of a vertex is the length of the corresponding path.
To find the $k$ shortest paths I use BFS to $P(G)$ and "translate" the search result to paths by using $H(G)$.
Unfortunately, I don't understand how I can "read" $P(G)$ and then "translate" it through $H(G)$ to receive the $k$ shortest paths.
Answer: It's been long enough since I wrote that, that by now my interpretation of what's in there is probably not much more informed than any other reader's. Nevertheless:
I believe that the description you're looking for is the last paragraph of the proof of Lemma 5. Basically, some of the edges in P(G) (the "cross edges") correspond to sidetracks in G (that is, edges that diverge from the shortest path tree). The path in G is formed by following the shortest path tree to the starting vertex of the first sidetrack, following the sidetrack edge itself, following the shortest path tree again to the starting vertex of the next sidetrack, etc. | {
"domain": "cstheory.stackexchange",
"id": 4115,
"tags": "ds.algorithms, graph-theory, graph-algorithms, directed-acyclic-graph, shortest-path"
} |
Why is the plasma density within a Debye sphere at odds with overall plasma density? | Question: A strongly coupled plasma is characterized by the following attributes:
higher number density
lower particle speeds (lower temperature)
smaller Debye length
continuous electrostatic influence throughout, stronger long range interaction
sparsely populated Debye sphere (lower Debye Number)
Likewise, weakly coupled plasmas are characterized by the inverse attributes:
lower number density
higher particle speeds
larger Debye length
only occasional electrostatic influence, weaker long range interaction
densely populated Debye sphere (higher Debye number)
The number density within the Debye sphere directly contrasts with the overall number density. Does this imply that a weakly coupled plasma has Debye-sphere-sized pockets of high density plasma within the greater, low-density plasma medium? Likewise, does this imply that a strongly coupled plasma has Debye-sphere-sized pockets of low density plasma within the greater, high-density plasma medium?
At first I thought it could be up to the size of the Debye sphere but sources clearly state density not just population.
Sources:
https://farside.ph.utexas.edu/teaching/plasma/Plasma/node7.html
https://en.wikipedia.org/wiki/Plasma_parameter
https://www.chemeurope.com/en/encyclopedia/Plasma_parameter.html
Similar question but without a direct answer: How is it possible that a collisionless plasma has a more densely populated Debye sphere?
Answer: I would not go as far as you do in your conclusions. I think the confusion comes from the wording "densely populated", which does not mean "high density". The constraint on the population of the Debye sphere just adds a compatible constraint on the densities.
Let's take:$ \rho _{s};\lambda _{s} $ for strongly coupled plasma and :$ \rho _{w};\lambda _{w} $ for a weakly coupled one.
The population of the Debye Sphere is given by:
$$ N_{D} = \rho \cdot V \sim \rho \cdot \lambda_{D}^{3} $$
And we are given: $$\begin{cases}\lambda _{s} << \lambda _{w}\\\rho_{s} >> \rho_{w}\\ N_{w} >> N_{s}\end{cases}$$
The question is whether all these constraints are compatible. We get:
$$ \frac{ N_{s} }{ N_{w} } = \frac{ \rho _{s} }{ \rho _{w} } \big( \frac{ \lambda _{s} }{ \lambda _{w} } \big)^{3}$$
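For instance, plugging in illustrative numbers (my own, not taken from the sources above) $\rho_s/\rho_w = 10^{6}$ and $\lambda_s/\lambda_w = 10^{-3}$:
$$\frac{ N_{s} }{ N_{w} } = 10^{6} \cdot \left( 10^{-3} \right)^{3} = 10^{-3} \ll 1,$$
so a plasma can be a million times denser and still have a far more sparsely populated Debye sphere, because the population scales with the cube of the (much smaller) Debye length.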
But: $\begin{cases}\frac{ \rho _{s} }{ \rho _{w} } \gg 1\\\frac{ \lambda _{s} }{ \lambda _{w} } \ll 1\end{cases} $ is not incompatible with:$ N_{s} \ll N_{w} $ | {
"domain": "physics.stackexchange",
"id": 90421,
"tags": "plasma-physics, debye-length"
} |
How exactly does polarization by scattering work? | Question: Consider an electron sitting at the origin of a coordinate system. Let an unpolarized light travelling in the $z$-direction excite the electron at the origin. The motion of the electron can be thought of as two independent oscillatory motions, one along $x$-axis and the other along $y$-axis.
If we look at the scattered radiation along $y$-axis, there will be none due to the motion along the $y$-axis. The scattered radiation that reaches the eye when viewed along $y$-axis is due to the motion along $x$-axis. It is true that an electron oscillating in the $x$-axis will give rise to maximum intensity when viewed along $y$-axis.
I cannot understand why an electron oscillating along $x$-axis will produce electric field polarized along $x$-axis.
Answer: Because the oscillating scattering electron behaves like an oscillating electric dipole in the sense that both can be represented as a small oscillating source of current.
The radiation fields due to such a system are described in any Electromagnetism textbook.
The oscillating charge acts like an oscillating current, backwards and forwards in the direction of oscillation. One then solves the inhomogeneous wave equation using its general solution, which tells us that the magnetic vector potential generated by the oscillating current is in the same direction as that current. The magnetic field is the curl of this vector potential and so is directed azimuthally, curling around the oscillating current. The electric field of the transverse waves is then perpendicular to the magnetic field and also to the radial vector pointing away from the oscillating dipole - i.e. in a poloidal direction (the $\theta$ direction in spherical coordinates).
Thus whatever the viewing direction, the electric field lines up with the projected oscillation direction, with no component perpendicular to it. It is therefore linearly polarised and the polarization direction is $\hat{\theta}$.
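Concretely, the radiation-zone fields of an oscillating dipole $p(t) = p_0 \cos(\omega t)\,\hat{z}$ (a standard result found in the textbooks mentioned) are
$$\vec{E} = -\frac{\mu_0 p_0 \omega^2}{4\pi}\,\frac{\sin\theta}{r}\,\cos\big[\omega(t - r/c)\big]\,\hat{\theta}, \qquad \vec{B} = \frac{1}{c}\,\hat{r}\times\vec{E},$$
so $\vec{E}$ has only a $\hat{\theta}$ component: the radiation is linearly polarised along the projected oscillation direction.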
Now in the case of the scattering example, if you have an oscillation along the x-axis and view that radiation along the y-axis, then the $\hat{\theta}$ direction is the same as the $\hat{x}$ direction.
This is a wordy explanation. The maths is more elegant, but can be found in most Electromagnetism texts. | {
"domain": "physics.stackexchange",
"id": 63847,
"tags": "electromagnetism, electromagnetic-radiation, everyday-life, scattering, polarization"
} |
Do I need a license or certificate to practice chemistry or what are the advantages if did have one? | Question: What can I do as a non-professional chemist that is legal?
If a license is not required, then what are the advantages to having one, such as ordering compounds, getting a job, etc.?
Answer: Note that being a chemist doesn't require any kind of state license or the like in Germany, either.
But buying dangerous* substances and running a chemical lab requires that you** have certain licenses and implement safety standards.
A chemist has to pass some of these exams during their studies (e.g. on toxicology and chemical substances legislation), so having a Master/Diplom in chemistry automatically means that you have those licenses.
Having this license (Chemikaliensachkunde), being reliable (in the legal sense) and being of age are the legal requirements you must meet in order to be allowed to buy dangerous substances. In practice, however, the suppliers impose a further, very practical requirement: they usually do mail orders only B2B and require you to have a VAT/USt number.
So if I, as a small-scale freelancer who fulfills all the legal requirements, want to buy from, say, Sigma, I don't get anything because I don't have the tax number (for my size of side-business it would be too much hassle to go through this, so I use the small-business exemption law, and for the supplier it would be too much hassle to make sure the buyer fulfills the legal requirements unless it is at least a certain business size).
As a private person, you can buy chemicals:
at the pharmacy
at specialized stores, e.g.
photography chemicals at a good photography stores
carbide at a speleology outfitter's
gases (possibly including dry ice) at a gas merchant's, etc.
you basically cannot buy dangerous substances by mail/online order (too much burocratic hassle: the seller would need to make sure of your identity and Chemikaliensachkunde etc.)
I've never tried to buy privately anything that requires the Chemikaliensachkunde e.g. at a pharmacy so I cannot tell how difficult that is in practice.
* As a rule of thumb, there are restrictions for substances (or mixtures) that are extremely flammable, oxidizing, toxic or very toxic, (suspected of being) carcinogenic, mutagenic or teratogenic (plus a bunch of listed things)
** or some employee of yours
However, you can of course work as a chemist without (much or even any) need to handle dangerous substances. For example, I'm an analytical chemist / spectroscopist / chemometrician. About 90 % of my work is in front of a computer analysing measurement data, developing improved ways of doing so and writing reports/papers. At least half of the rest is in front of a computer programming instruments. I've handled exactly one dangerous substance according to chemicals legislation, on two days (actually a sample of a few μg I got for analysis), during the last year. The everyday stuff I handle (methanol, ethanol, isopropanol, disinfectant solution and cyclohexane and paracetamol = acetaminophen as calibration standards) are not legally restricted dangerous chemicals.
I do handle biohazard material far more often - but that is not what the Chemikaliensachkunde is good for. So for my work, the more relevant safety issues are biological/human material safety and laser safety... It would be quite easy not to have any chemical lab exposure (is seldom enough as it is... - but hey, what did I become chemist for?)
Also I have some colleagues doing the same type of work who are not chemists but e.g. physicists and thus do not have the Chemikaliensachkunde.
For getting a job it is obviously of advantage if you can show some kind of education/training that makes you fit for the job. For jobs in chemical industry/academia that is obviously a chemistry degree or training as chemical lab technician, but also related professions such as physics/biology/pharmacy/toxicology depending on what the job in question is. | {
"domain": "chemistry.stackexchange",
"id": 2018,
"tags": "organic-chemistry"
} |
Is charge energetic? | Question: The mass of a body is known to have two important features.
It responds to and can be the source of a field (gravity).
It is energetic ($E=mc^2$).
The charge of body is known to have at least one of these properties,
namely:
It responds to and can be the source of a field (the EM-field).
I am curious if it could also have the other property as well. That is, could there be a charge equivalent of $E=mc^2$? Perhaps something like
$$ E=Q\sqrt{\dfrac{c^4}{4 \pi G \epsilon_{0}}}~? $$
Answer: I believe it should be
2) it is charged
Because charge itself is the analogue to energy for the EM field. Both energy and charge are conserved because of a deeper symmetry in the universe. For energy, this is the time symmetry of the laws of nature. For charge, this is the underlying symmetry of the EM field equations. See Noether's Theorem for more about this. In other words, charge itself is not energetic, it is another fundamental property.
$E=mc^2$ relates energy and mass. So to have something similar for EM, you'd be looking for an "electric mass", basically $q/c^2$. That's a very interesting idea. My best guess is that the EM field is too simple to allow something like that. | {
"domain": "physics.stackexchange",
"id": 34805,
"tags": "electromagnetism, energy, charge, quantum-electrodynamics"
} |
Trouble starting gazebo_experimental - Failed to load plugin [gazeboGuiDisplayImage] | Question:
Hello. I'd like to contribute to gazebo_experimental, and have successfully built everything from source on a clean Ubuntu 16.04 desktop. I followed this tutorial to build all dependencies from source (in debug mode). Then I followed instructions here to attempt to build (in debug mode) and run. The build was clean, and all tests pass. But when I run gazebo -v 4, I get the following errors:
[GUI] [Msg] Init app
[GUI] [Err] [PluginLoader.cc:89] Library[] does not exist!
[GUI] [Err] [Iface.cc:244] Failed to load plugin [gazeboGuiDisplayImage]
[GUI] [Err] [PluginLoader.cc:89] Library[] does not exist!
[GUI] [Err] [Iface.cc:244] Failed to load plugin [gazeboGuiDiagnostics]
[GUI] [Msg] Create main window
[GUI] [Msg] Run main window
QMetaObject::invokeMethod: No such method QMenuBar::aboutToShow()
My ENV variables are set as follows:
vagrant@vagrant:~/dev/gazebo/gazebo_experimental/build$ echo $LD_LIBRARY_PATH
/usr/local/lib
vagrant@vagrant:~/dev/gazebo/gazebo_experimental/build$ echo $GAZEBO_PLUGIN_PATH
/usr/local/lib:examples/dummy_demo/systems/
Any help would be appreciated.
Originally posted by jakelevirne on Gazebo Answers with karma: 26 on 2017-06-18
Post score: 0
Answer:
I answered my own question. I was missing the IGN_GUI_PLUGIN_PATH environment variable. I set it to the full path of my gazebo_experimental build src/gui directory and was able to run the dummy_demo.
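For reference, a minimal shell sketch of that fix (the path mirrors the checkout shown in the question and will differ per machine):

```shell
# Hypothetical path: matches the ~/dev/gazebo/gazebo_experimental checkout
# from the question; adjust to your own build's src/gui directory.
export IGN_GUI_PLUGIN_PATH="$HOME/dev/gazebo/gazebo_experimental/build/src/gui"
```

With the variable exported in the same shell, running gazebo -v 4 again should let the GUI find its plugins.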
Originally posted by jakelevirne with karma: 26 on 2017-06-18
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 4131,
"tags": "gazebo"
} |
How long does the contact between a free-falling rigid sphere and the ground last in a perfectly elastic collision? | Question: It seems to me that this time is finite, although it seems infinitely small, but if it is finite, is it also identical for any perfectly elastic collision?
What should I know about this time?
Answer: You can't deform a real material without losing some energy to heat (this is known as internal friction or mechanical hysteresis). In the ideal case of a perfectly elastic collision, if you're allowing this internal friction to exist, then zero deformation can occur, implying that the idealized materials are perfectly rigidβthat is, that their elastic moduli are infinite. This in turn requires a contact time of zero, which is typical for introductory physics treatment of kinematics and collisions.
Alternatively, if you posit that the internal friction is zero, then you can have a nonzero contact time in which the materials squish together, storing strain energy, and then rebound. In fact, for very compliant materials, the contact time could be quite long. This problem is treated in the field of impact mechanics. Note, however, that compliant does not mean soft; no permanent deformation can occur (this is the soft–hard dichotomy), only recoverable deformation (this is the compliant–stiff dichotomy). Nonrecoverable deformation would preclude an elastic collision. Does this make sense?
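As a rough quantitative sketch (my addition, from Hertzian contact theory, the baseline model in impact mechanics for frictionless elastic spheres), the contact duration is approximately
$$\tau \approx 2.87 \left( \frac{m^{2}}{R\, E^{*2}\, v_{0}} \right)^{1/5},$$
where $m$ and $R$ are the effective mass and radius of the pair, $v_0$ is the impact speed, and $E^{*}$ is the effective elastic modulus. The scaling $\tau \propto E^{*-2/5}$ makes the point explicit: the more compliant the materials (smaller $E^{*}$), the longer the contact, and in the perfectly rigid limit $E^{*} \to \infty$ the contact time goes to zero.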
As examples, rubber and steel balls both bounce quite well off a relatively stiff surface, which could be surprising because their stiffnesses differ by about six orders of magnitude. Elastomers such as rubber are compliant (with Young's modulus under 1 MPa, for example) but not soft (from a strain point of view; they are soft from an applied-force point of view). Thus, they provide a mostly-elastic collision (with a relatively long contact time) because they don't permanently deform much (compare with Silly Putty, for example) and don't waste a lot of deformation through friction. In contrast, the bounciness of steel balls arises from their relatively high stiffness and strength (which preclude the lattice flexing and dislocation movement that would lead to hysteretic and plastic deformation losses), and the corresponding contact time is relatively short. | {
"domain": "physics.stackexchange",
"id": 80200,
"tags": "collision"
} |
Bisimilarity and Trace Equivalence in Labelled Transition Systems | Question: I'm a bit confused regarding the relation between trace equivalence and bisimilarity. These lecture notes I found and a few others documents I've read state that "if an LTS is deterministic then two states are bisimilar if they are trace equivalent".
When reading around the topic I found this page, which shows the following image:
These LTSs are trace equivalent and deterministic(?), so why does the rule that they are then bisimilar not hold?
Answer: The answer is that the LTS on the left isn't deterministic, as the label (or action) open_door doesn't go to a single state and hence that action is non-deterministic.
This example shows that determinism is indeed required for trace equivalence and bisimilarity to be equivalent. | {
"domain": "cs.stackexchange",
"id": 11248,
"tags": "terminology"
} |
Does SWT/ISWT require intermediate approximation coefficients to represent/reconstruct the original signal? | Question: Taking into account that Stationary Wavelet Transform (Algorithme Γ trous) is not an orthogonal transform do we need intermediate approximation coefficients for signal decomposition?
For example, in maximally decimated DWT we have the following decomposition tree:
And according to the conclusions of MRA (multiresolution analysis), to fully represent (and reconstruct) the signal it suffices to have only the last approximation subband plus all of the difference subbands.
Does this also hold true for SWT?
Answer: If I understand correctly, no, you don't need intermediate approximation coefficients, as long as you have (not) sampled properly. So yes, having the last approximation subband plus all of the difference subbands suffices.
Moreover, since the transform is redundant, there are several inverses, some of which can behave better than the obvious one. Note that a stationary wavelet packet transformation would require intermediate approximation coefficients.
If I may, orthogonality is not the question here: you could perform SWT with non-orthogonal wavelet filters. With insight from your other question Additive white gaussian noise and undecimated DWT, I propose to recall that:
a filter-bank is called critically sampled when the number of samples and coefficients is the same (borders aside). It does not need to be orthogonal
such a linear transform can generally be implemented with a square matrix
in some cases, the matrix is orthogonal, a subset of critically-sampled transforms.
In the literature, one sometimes finds "orthogonal" used as a proxy for "square" and orthogonal, which is correct, but with "square" being the more important property in that case, as compared to redundant schemes. | {
"domain": "dsp.stackexchange",
"id": 4846,
"tags": "wavelet"
} |
ROS Answers SE migration: ROSJOY help | Question:
Hi, I'm running hydro on ubuntu 12.04
I have been trying to go through the tutorial on writing a node to use a PS3 controller with ROS here. After many tries I think it failed because hydro doesn't have turtlesim/Velocity.h. I'm a beginner with CS and am not sure how to build this controller node from scratch. In particular, what would I do to this code to get it to work on hydro?
Here's the code
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <sensor_msgs/Joy.h>
class TeleopTurtle
{
public:
TeleopTurtle();
private:
void joyCallback(const sensor_msgs::Joy::ConstPtr& joy);
ros::NodeHandle nh_;
int linear_, angular_;
double l_scale_, a_scale_;
ros::Publisher vel_pub_;
ros::Subscriber joy_sub_;
};
TeleopTurtle::TeleopTurtle():
linear_(1),
angular_(2)
{
nh_.param("axis_linear", linear_, linear_);
nh_.param("axis_angular", angular_, angular_);
nh_.param("scale_angular", a_scale_, a_scale_);
nh_.param("scale_linear", l_scale_, l_scale_);
vel_pub_ = nh_.advertise<turtlesim::Velocity>("turtle1/turtle1/cmd_vel", 1);
joy_sub_ = nh_.subscribe<sensor_msgs::Joy>("joy", 10, &TeleopTurtle::joyCallback, this);
}
void TeleopTurtle::joyCallback(const sensor_msgs::Joy::ConstPtr& joy)
{
turtlesim::Velocity vel;
vel.angular = a_scale_*joy->axes[angular_];
vel.linear = l_scale_*joy->axes[linear_];
vel_pub_.publish(vel);
}
int main(int argc, char** argv)
{
ros::init(argc, argv, "teleop_turtle");
TeleopTurtle teleop_turtle;
ros::spin();
}
After making two changes,
Changing turtlesim/Velocity to geometry_msgs/Twist
Changing Command_Velocity to cmd_vel
I get the following errors when running rosmake on the package.
/home/donni/catkin_ws/src/beginner_tutorials/learning_ps3joy/src/turtle_teleop_ps3joy.cpp:44:3: error: 'turtlesim' has not been declared
/home/donni/catkin_ws/src/beginner_tutorials/learning_ps3joy/src/turtle_teleop_ps3joy.cpp:44:23: error: expected ';' before 'vel'
/home/donni/catkin_ws/src/beginner_tutorials/learning_ps3joy/src/turtle_teleop_ps3joy.cpp:45:3: error: 'vel' was not declared in this scope
Can someone interpret them for me?
Originally posted by dshimano on ROS Answers with karma: 129 on 2014-07-31
Post score: 1
Answer:
Hi,
As of Hydro, turtlesim uses the geometry_msgs/Twist message instead of turtlesim/Velocity.
Change to cmd_vel instead of command_velocity.
So in your code use geometry_msgs/Twist instead of turtlesim/Velocity. Also, publish it to cmd_vel and it should work then.
Originally posted by adreno with karma: 253 on 2014-07-31
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by dshimano on 2014-07-31:
Hi, thanks for your help. I made those changes, and got some more errors. I am adding them to my question if you can help me out some more. | {
"domain": "robotics.stackexchange",
"id": 18846,
"tags": "joystick"
} |
2D Maze Game with Monsters | Question: Introduction
I've started to learn C programming a bit and wanted to create a simple 2D console game. Let me first introduce you to the game level/map structure:
(1) ################# (2) #################
# # # #
# v S ^# # S #
# # # #
# < ####### # v #
# > # # > < #
# A # # ^ #
# # # A#
########### # A #
# #
#################
The different symbols represent the following game objects:
# = wall
S = player
A = goal
^,v,<,> monster looking up/down/left/right, respectively.
The goal is to get to one of the goal cells without touching a monster.
There can be only one player S, but multiple monsters and goals on the map. On each game tick, the player enters w, a, s, or d and moves up/left/down/right one cell, respectively. Then, all the monster entities move one unit in their corresponding direction.
If the player moves "into" a wall, the player's position is not updated. Monsters, however, bounce back 180 deg from the wall.
The goals (A) and walls (#) do not ever move. However, # acts as a boundary for both the player and monsters. The player can move onto the A but monsters treat the goal cells as walls as well.
One caveat is that monsters may overlap (see Level 2), in which case only the monster first read from the level file is displayed in the overlapping cell. If the player runs into a monster, the monster is displayed on top and the game ends, printing out a losing message. If the player reaches the goal, the goal symbol stays on top and the game ends, printing out a winning message.
Internally, I read in the level file into a 2D char array and save all dynamic entities (player and monsters) in a separate data structure. I then remove all dynamic entities from the 2D array to use it as a "canvas" for drawing. That way I can update all entities' locations and then decide how they're going to be painted onto the canvas, but am still able to use static elements (# and A) for collision detection.
Code
Note that I'm only allowed to use the C99 standard.
common.h
#ifndef COMMON_H
#define COMMON_H
#include <stdio.h>
typedef enum error_code {
OK = 0,
COULD_NOT_OPEN_FILE = 1,
COULD_NOT_READ_FILE = 2,
INVALID_OPTIONS = 3,
ALLOC_FAILED = 4
} t_error_code;
typedef struct error_object {
char msg[100];
t_error_code error_code;
} t_error_object;
t_error_object make_error(const char *message, t_error_code error_code);
int get_file_size(FILE *f);
int get_line_count(FILE *f);
char* strdup_(const char* src);
#endif
common.c
#include "common.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
t_error_object make_error(const char *message, t_error_code error_code) {
t_error_object error_obj;
strncpy(error_obj.msg, message, 100);
error_obj.error_code = error_code;
return error_obj;
}
int get_file_size(FILE *f) {
fseek(f, 0, SEEK_END);
int len = ftell(f);
fseek(f, 0, SEEK_SET);
return len;
}
char *strdup_(const char *src) {
char *dst = malloc(strlen (src) + 1);
if (dst == NULL) return NULL;
strcpy(dst, src);
return dst;
}
game_params.h
#ifndef GAME_PARAMS_H
#define GAME_PARAMS_H
#include <stdio.h>
#include "common.h"
typedef struct game_params {
FILE *level_file;
FILE *input_file;
FILE *output_file;
} t_game_params;
void cleanup_game_files(t_game_params *params);
t_error_object open_game_files(t_game_params *params,
const char* level_file_name,
const char* input_file_name,
const char* output_file_name);
t_error_object parse_game_parameters(t_game_params *params_out, int argc, char **argv);
#endif
game_params.c
#include "game_params.h"
#include "common.h"
#include <getopt.h>
#include <errno.h>
#include <string.h>
void cleanup_game_files(t_game_params *params) {
if (params->level_file != NULL)
fclose(params->level_file);
params->level_file = NULL;
if (params->input_file != NULL)
fclose(params->input_file);
params->input_file = NULL;
if (params->output_file != NULL)
fclose(params->output_file);
params->output_file = NULL;
}
t_error_object open_game_files(t_game_params *params,
const char *level_file_name,
const char *input_file_name,
const char *output_file_name) {
params->input_file = stdin;
params->output_file = stdout;
if (input_file_name != NULL) {
if (strstr(input_file_name, ".txt") == NULL) {
return make_error("Eingabe-Datei kann nicht gelesen werden", COULD_NOT_READ_FILE);
}
params->input_file = fopen(input_file_name, "r");
if (params->input_file == NULL) {
return make_error("Eingabe-Datei konnte nicht geΓΆffnet werden", COULD_NOT_OPEN_FILE);
}
}
if (output_file_name != NULL) {
params->output_file = fopen(output_file_name, "w");
if (params->output_file == NULL) {
return make_error("Ausgabe-Datei konnte nicht geΓΆffnet werden", COULD_NOT_OPEN_FILE);
}
}
if (level_file_name == NULL) {
return make_error("Level-Datei muss angegeben werden", COULD_NOT_OPEN_FILE);
}
params->level_file = fopen(level_file_name, "r");
if (params->level_file == NULL) {
return make_error("Level-Datei konnte nicht geΓΆffnet werden", COULD_NOT_OPEN_FILE);
}
return make_error("", OK);
}
t_error_object parse_game_parameters(t_game_params *params_out, int argc, char **argv) {
char *level_file_name = NULL;
char *input_file_name = NULL;
char *output_file_name = NULL;
while (optind < argc) {
if (argv[optind][0] != '-') {
if (level_file_name != NULL)
return make_error("Level-Datei darf nur einmal angegeben werden", INVALID_OPTIONS);
level_file_name = argv[optind];
optind++;
}
int opt;
if ((opt = getopt(argc, argv, "i:o:")) != -1) {
switch (opt) {
case 'i':
if (input_file_name != NULL)
return make_error("Eingabe-Datei darf nur einmal angegeben werden", INVALID_OPTIONS);
input_file_name = optarg;
break;
case 'o':
if (output_file_name != NULL)
return make_error("Ausgabe-Datei darf nur einmal angegeben werden", INVALID_OPTIONS);
output_file_name = optarg;
break;
default:
return make_error("Falsche Optionen ΓΌbergeben", INVALID_OPTIONS);
}
}
}
// Open the files for reading/writing and store the file handles in `params`
t_error_object ret = open_game_files(params_out, level_file_name, input_file_name, output_file_name);
return ret;
}
entity.h
#ifndef ENTITY_H
#define ENTITY_H
#include "board.h"
typedef enum entity_type {
PLAYER,
MONSTER,
NO_ENT
} t_entity_type;
typedef struct position {
int x;
int y;
} t_position;
typedef struct entity {
t_position pos;
t_direction facing_dir;
t_entity_type type;
} t_entity;
t_entity_type get_entity_type(char c);
t_entity create_entity(t_entity_type type, int x, int y, t_direction dir);
int compare_positions(t_position *pos1, t_position* pos2);
void handle_collision(t_board *board, t_entity *entity, t_position *new_pos);
int check_wall(t_board *board, t_position *new_pos);
int check_valid_move(t_board *board, t_position *new_pos);
void move_entity(t_board *board, t_entity *entity, t_direction dir);
#endif
entity.c
#include "common.h"
#include "direction.h"
#include "entity.h"
#include "board.h"
t_entity_type get_entity_type(char c) {
int is_player = (c == 'S');
int is_monster = (map_char_to_direction(c) != NONE);
if (is_player)
return PLAYER;
if (is_monster)
return MONSTER;
return NO_ENT;
}
t_entity create_entity(t_entity_type type, int x, int y, t_direction dir) {
t_entity entity;
t_position pos;
pos.x = x;
pos.y = y;
entity.type = type;
entity.pos = pos;
entity.facing_dir = dir;
return entity;
}
int check_wall(t_board *board, t_position *new_pos) {
if (board->cells[new_pos->y][new_pos->x] == '#')
return 1;
return 0;
}
int check_valid_move(t_board *board, t_position *new_pos) {
if (new_pos->x >= board->col_size || new_pos->x < 0)
return 0;
if (new_pos->y >= board->num_rows || new_pos->y < 0)
return 0;
return 1;
}
void handle_collision(t_board *board, t_entity *entity, t_position *new_pos) {
t_position old_pos = entity->pos;
if (!check_valid_move(board, new_pos)) {
*new_pos = old_pos;
return;
}
int collided_with_wall = check_wall(board, new_pos);
if (entity->type == MONSTER) {
if (collided_with_wall || get_cell_at(board, new_pos->x, new_pos->y) == 'A') {
entity->facing_dir = get_opposite_direction(entity->facing_dir);
*new_pos = old_pos;
char c = map_direction_to_char(entity->facing_dir);
set_cell_at(board, new_pos->x, new_pos->y, c);
return;
}
}
if (entity->type == PLAYER && collided_with_wall) {
*new_pos = old_pos;
}
}
int compare_positions(t_position *pos1, t_position* pos2) {
if (pos1->y < pos2->y)
return -1;
if (pos1->y > pos2->y)
return 2;
if (pos1->x < pos2->x)
return -1;
if (pos1->x > pos2->x)
return 1;
return 0;
}
void move_entity(t_board *board, t_entity *entity, t_direction dir) {
t_position old_pos;
old_pos.x = entity->pos.x;
old_pos.y = entity->pos.y;
t_position new_pos = old_pos;
t_direction new_dir = dir;
if (entity->type == MONSTER) {
new_dir = entity->facing_dir;
}
switch (new_dir) {
case UPWARDS:
new_pos.y--;
break;
case LEFT:
new_pos.x--;
break;
case DOWNWARDS:
new_pos.y++;
break;
case RIGHT:
new_pos.x++;
break;
case NONE:
break;
}
handle_collision(board, entity, &new_pos);
entity->pos = new_pos;
}
board.h
#ifndef BOARD_H
#define BOARD_H
#include "game_params.h"
#include "direction.h"
#include <stdio.h>
typedef struct position t_position;
typedef struct entity t_entity;
typedef enum cell_type {
ENTITY,
WALL,
EMPTY
} t_cell_type;
typedef struct board {
int num_rows;
int col_size;
char **cells;
int num_entities;
int player_index;
t_entity *entities;
t_position *goal_positions;
} t_board;
void cleanup_board(t_board *board);
char get_cell_at(t_board *board, int x, int y);
void set_cell_at(t_board *board, int x, int y, char c);
t_cell_type get_cell_type(char c);
void clear_entities_from_board(t_board *board);
void place_entities_on_board(t_board *board);
void print_board(t_board *b, FILE *output);
void get_board_dims(char *buf, int *num_rows, int *col_size);
t_error_object fill_board(t_board *board, char *board_data, int len);
t_error_object handle_entity_alloc(t_board *board, const t_entity *entity,
int *actual_entity_count, int *expected_entity_count);
t_error_object set_initial_positions(t_board *board);
t_error_object initialize_board(t_board* board, const t_game_params *params);
#endif
board.c
#include "common.h"
#include "entity.h"
#include "board.h"
#include <string.h>
#include <stdlib.h>
void cleanup_board(t_board *board) {
for (int i = 0; i < board->num_rows; i++) {
if (board->cells[i] != NULL)
free(board->cells[i]);
}
if (board->cells != NULL)
free(board->cells);
if (board->entities)
free(board->entities);
}
char get_cell_at(t_board* board, int x, int y) {
return board->cells[y][x];
}
void set_cell_at(t_board *board, int x, int y, char c) {
board->cells[y][x] = c;
}
void clear_entities_from_board(t_board *board) {
for (int i = 0; i < board->num_entities; i++) {
t_entity ent = board->entities[i];
set_cell_at(board, ent.pos.x, ent.pos.y, ' ');
}
}
void place_entities_on_board(t_board *board) {
// First draw Player (S)
t_entity *player = &board->entities[board->player_index];
// 'A' always stays on top of 'S' when they overlap
if (get_cell_at(board, player->pos.x, player->pos.y) != 'A')
set_cell_at(board, player->pos.x, player->pos.y, 'S');
// Then draw Monsters (M) in reverse (right-to-left)
// to satisfy the condition that monsters seen earlier
// should appear before monsters seen at a later point
// in case some monsters overlap at a single position
for (int i = board->num_entities - 1; i >= 0; i--) {
t_entity ent = board->entities[i];
char symbol = ' ';
if (ent.type != MONSTER)
continue;
symbol = map_direction_to_char(ent.facing_dir);
set_cell_at(board, ent.pos.x, ent.pos.y, symbol);
}
}
void print_board(t_board *board, FILE *output) {
place_entities_on_board(board);
for (int row = 0; row < board->num_rows; row++) {
for (int col = 0; col < board->col_size; col++) {
char c = board->cells[row][col];
if (c != 0)
fputc(c, output);
}
fputc('\n', output);
}
clear_entities_from_board(board);
}
void get_board_dims(char *buf, int *num_rows, int *col_size) {
int num_lines = 0;
int longest_line_len = 0;
char* buf_copy = strdup_(buf);
char* pch = strtok(buf_copy, "\n");
while (pch != NULL) {
num_lines++;
if (strlen(pch) > longest_line_len)
longest_line_len = strlen(pch);
pch = strtok(NULL, "\n");
}
free(buf_copy);
buf_copy = NULL;
*num_rows = num_lines;
*col_size = longest_line_len;
}
t_error_object fill_board(t_board *board, char *board_data, int len) {
int cur_row = 0;
int cur_col = 0;
char **b = calloc(board->num_rows, sizeof(char*));
if (b == NULL) {
return make_error("Konnte keinen Speicherplatz fΓΌr das Gameboard allozieren", ALLOC_FAILED);
}
for (int i = 0; i < board->num_rows; i++) {
b[i] = calloc(board->col_size, sizeof(char));
if (b[i] == NULL) {
return make_error("Konnte keinen Speicherplatz fΓΌr das Gameboard allozieren", ALLOC_FAILED);
}
}
for (int i = 0; i < len; i++) {
if (board_data[i] == '\n') {
cur_row++;
cur_col = 0;
continue;
}
b[cur_row][cur_col] = board_data[i];
cur_col++;
}
free(board_data);
board->cells = b;
return make_error("", OK);
}
t_error_object handle_entity_alloc(t_board *board, const t_entity *entity,
int *actual_entity_count, int *expected_entity_count) {
*actual_entity_count += 1;
if (*actual_entity_count > *expected_entity_count) {
*expected_entity_count = *expected_entity_count * 2 + 1;
board->entities = realloc(board->entities, *expected_entity_count * sizeof(t_entity));
}
if (board->entities == NULL) {
return make_error("Konnte keinen Speicherplatz für die Entitäten allozieren", ALLOC_FAILED);
}
board->entities[*actual_entity_count - 1] = *entity;
return make_error("", OK);
}
t_cell_type get_cell_type(char c) {
t_entity_type ent_type = get_entity_type(c);
int is_wall = (c == '#');
int is_empty = (c == ' ');
if (ent_type != NO_ENT)
return ENTITY;
if (is_wall)
return WALL;
if (is_empty)
return EMPTY;
return EMPTY;
}
t_error_object set_initial_positions(t_board *board) {
int expected_entity_count = 1;
int actual_entity_count = 0;
board->entities = calloc(expected_entity_count, sizeof(t_entity));
if(board->entities == NULL) {
return make_error("Konnte keinen Speicherplatz für die Entitäten allozieren", ALLOC_FAILED);
}
for (int y = 0; y < board->num_rows; y++) {
for (int x = 0; x < board->col_size; x++) {
int c = board->cells[y][x];
t_cell_type type = get_cell_type(c);
if (type != ENTITY)
continue;
t_entity_type ent_type = get_entity_type(c);
t_direction ent_dir = map_char_to_direction(c);
t_entity ent = create_entity(ent_type, x, y, ent_dir);
t_error_object ret = handle_entity_alloc(board, &ent, &actual_entity_count, &expected_entity_count);
if (ret.error_code != OK)
return ret;
if (ent_type == PLAYER)
board->player_index = actual_entity_count - 1;
}
}
board->num_entities = actual_entity_count;
return make_error("", OK);
}
t_error_object initialize_board(t_board *board, const t_game_params *params) {
int num_rows;
int col_size;
int file_size = get_file_size(params->level_file);
char *level_data = calloc(file_size + 1, sizeof(char));
if (level_data == NULL) {
return make_error("Konnte keinen Speicherplatz für das Gameboard allozieren", ALLOC_FAILED);
}
fread(level_data, file_size, 1, params->level_file);
if (ferror(params->level_file) != 0) {
return make_error("Konnte Level-Datei nicht lesen", COULD_NOT_READ_FILE);
}
get_board_dims(level_data, &num_rows, &col_size);
board->num_rows = num_rows;
board->col_size = col_size;
fill_board(board, level_data, file_size);
set_initial_positions(board);
return make_error("", OK);
}
direction.h
#ifndef DIRECTION_H
#define DIRECTION_H
typedef enum direction {
UPWARDS,
LEFT,
DOWNWARDS,
RIGHT,
NONE
} t_direction;
char map_direction_to_char(t_direction dir);
t_direction map_char_to_direction(char dir);
t_direction get_opposite_direction(t_direction dir);
#endif
direction.c
#include "direction.h"
char map_direction_to_char(t_direction dir) {
switch (dir) {
case UPWARDS:
return '^';
case LEFT:
return '<';
case DOWNWARDS:
return 'v';
case RIGHT:
return '>';
case NONE:
return 0;
}
return 0;
}
t_direction map_char_to_direction(char dir) {
switch (dir) {
case '^':
case 'w':
return UPWARDS;
case '<':
case 'a':
return LEFT;
case 'v':
case 's':
return DOWNWARDS;
case '>':
case 'd':
return RIGHT;
}
return NONE;
}
t_direction get_opposite_direction(t_direction dir) {
switch (dir) {
case UPWARDS:
return DOWNWARDS;
case LEFT:
return RIGHT;
case DOWNWARDS:
return UPWARDS;
case RIGHT:
return LEFT;
case NONE:
return NONE;
}
return NONE;
}
dungeon.h
#ifndef DUNGEON_H
#define DUNGEON_H
#include "game_params.h"
#include "board.h"
typedef enum game_status {
RUNNING,
WON,
LOST
} t_game_status;
void cleanup(t_game_params *params, t_board *board);
t_game_status check_win_or_death(t_board *board);
void game_loop(t_board *board, t_game_params *params);
int main(int argc, char **argv);
#endif
dungeon.c
#include "common.h"
#include "direction.h"
#include "entity.h"
#include "board.h"
#include "dungeon.h"
#include <string.h>
#include <stdlib.h>
void cleanup(t_game_params *params, t_board *board) {
cleanup_game_files(params);
cleanup_board(board);
}
t_game_status check_win_or_death(t_board *board) {
t_entity *player = &board->entities[board->player_index];
if (get_cell_at(board, player->pos.x, player->pos.y) == 'A')
return WON;
for (int i = 0; i < board->num_entities; i++) {
t_entity *ent = &board->entities[i];
if (ent->type == PLAYER)
continue;
int positions_match = compare_positions(&player->pos, &ent->pos) == 0;
if (positions_match && ent->type == MONSTER)
return LOST;
}
return RUNNING;
}
void game_loop(t_board *board, t_game_params *params) {
FILE *input_stream = params->input_file;
FILE *output_stream = params->output_file;
int step = 1;
char command = 0;
t_game_status game_status = RUNNING;
while (1) {
fprintf(output_stream, "%d ", step);
fscanf(input_stream, " %c", &command);
if (input_stream != stdin) {
fprintf(output_stream, "%c", command);
fprintf(output_stream, "\n");
}
t_direction dir = map_char_to_direction(command);
for (int i = 0; i < board->num_entities; i++) {
t_entity *ent = &board->entities[i];
move_entity(board, ent, dir);
}
game_status = check_win_or_death(board);
print_board(board, params->output_file);
if (game_status != RUNNING)
break;
step++;
}
if (game_status == LOST)
fprintf(output_stream, "Du wurdest von einem Monster gefressen.\n");
else if (game_status == WON)
fprintf(output_stream, "Gewonnen!\n");
}
int main(int argc, char **argv) {
t_game_params params = {NULL, NULL, NULL};
t_board board = {0, 0, NULL, 0, 0, NULL, NULL};
t_error_object err;
err = parse_game_parameters(&params, argc, argv);
if (err.error_code != OK) {
cleanup(&params, &board);
fprintf(stderr, "%s, error_code: %d\n", err.msg, err.error_code);
return err.error_code;
}
err = initialize_board(&board, &params);
if (err.error_code != OK) {
cleanup(&params, &board);
fprintf(stderr, "%s, error_code: %d\n", err.msg, err.error_code);
return err.error_code;
}
print_board(&board, params.output_file);
game_loop(&board, &params);
cleanup(&params, &board);
return 0;
}
Questions
How can I deal with cleaning up resources in a more concise way? As of now, I'm trying to emulate exception handling by letting errors bubble up to main and doing general cleanup there. I thought about passing around a structure (allocator pattern) to error-throwing functions.
Better error handling
Instead of "abusing" the game board for both drawing and collision checking, should I wrap cells in a custom data structure?
I'm still working on fixing const correctness here and there.
Is the direction abstraction a good pattern or uselessly bloating my codebase?
Is there a better data structure to represent my game board and the dynamic entities?
Unifying collision and win checks. I use the canvas state to check for collisions and the win, but compare the "virtual" player and monster positions to check for a loss.
Answer: Answers to your questions
How can I deal with cleaning up resources in a more concise way? As of now, I'm trying to emulate exception handling by letting errors bubble up to main and doing general cleanup there. I thought about passing around a structure (allocator pattern) to error-throwing functions.
C++ makes this a lot easier, with RAII and language support for exceptions. In C, letting errors "bubble up" and letting main() do the cleanup only works if main() did all the allocations, or can somehow see allocations done by other functions. In larger programs, that is usually not a good strategy.
Split errors into two categories:
Unrecoverable errors, like failing to allocate memory or failing to read a required file. In this case, just print the error message to stderr and call exit(EXIT_FAILURE).
Recoverable errors. Use a return type that can indicate an error status, like a bool representing success or failure, an integer or enum with an error code, or if you return a pointer to an object, NULL might represent failure. Then the caller can decide how to recover from that error.
Instead of "abusing" the game board for both drawing and collision checking, should I wrap cells in a custom data structure?
Having a dedicated type for cells is indeed a good idea.
I'm still working on fixing const correctness here and there.
Yes, a lot of function arguments can be made const.
Is the direction abstraction a good pattern or uselessly bloating my codebase?
I don't think it adds that much bloat, but there are other ways it could have been handled. Consider creating a struct that stores a direction as x and y coordinates, like so:
typedef struct direction {
int8_t dx;
int8_t dy;
} t_direction;
Then for example in move_entity(), you no longer need the switch-statement, but can just write:
new_pos.x = old_pos.x + new_dir.dx;
new_pos.y = old_pos.y + new_dir.dy;
Is there a better data structure to represent my game board and the dynamic entities?
There are many ways you can store the board and the entities, each with its own pros and cons. Yours has the advantage that both printing the board and looking up what is at a given position are very easy. If you treat the goal as a dynamic entity, then the only static things remaining are the walls. So you could use a bit array to store the board in only one eighth of the memory you are currently using, and still do fast wall collision detection. Printing the board would be a bit more complex though.
Unifying collision and win checks. I use the canvas state to check for collisions and win but compare the "virtual" player and monster positions to check for a lose.
Yes, ideally when updating the enemies and the player, set game_status if they collide with each other, or if the player collides with the goal.
Unsafe use of strncpy()
When calling make_error(), you copy the string message into the array msg[100] using strncpy(). However, if the length of message was 100 or more characters, then strncpy() will not have written a NUL-byte at the end of msg[]. Either write a NUL-byte to msg[sizeof(msg) - 1] unconditionally, or use a safer function to write into msg[], like snprintf().
Alternatively, don't make a copy at all. You are only ever calling make_error() with a string literal, so you could just store a pointer to the string in t_error_object. This would also make this object much more light-weight.
Misleading error message
If the input filename does not contain ".txt" anywhere in the filename, you return an error that translates to "input file cannot be read". However, the file might be perfectly fine. Either don't restrict the filename, or return an error message saying that the filename should end in ".txt".
Prefer bool for true/false results
A function like check_valid_move() should return a bool to indicate true or false values. | {
"domain": "codereview.stackexchange",
"id": 42074,
"tags": "c, game, error-handling, memory-management"
} |
How to name Hg[Co(SCN)4]? | Question: I have basically written the entire name except for the oxidation state of $\ce{Co}$.
Mercury tetrathiocyanatocobaltate( )
To know the oxidation state of cobalt, I must know the oxidation state of the mercury cation. But mercury can have $+1$ and $+2$ oxidation states. How do I know which one it is in this compound?
Answer: TL;DR: There is no inner- or outer coordination spheres in this complex. Both cobalt(II) and mercury(II) have nearly ideal tetrahedral coordination environment with $4$ $\ce{N}$ and $4$ $\ce{S}$ atoms, respectively, hence the proper name would be cobalt(II) mercury(II) tetrathiocyanate. Mercury(I) readily undergoes disproportionation, so assuming mercury(II) is the safest option.
Actually, denoting an inner coordination sphere is not correct for this compound. The first crystal structure [1] was assigned as cobalt(II) tetrathiocyanatomercurate(II), $\ce{Co[Hg(SCN)4]}$:
It appears reasonably certain that the $\ce{S}$ atom is attached to the $\ce{Hg}$ in tetrahedral co-ordination, with $\ce{S-Hg-S}$ angles of 120° and 104°.
But two decades later the structure was refined again, by a group including the author of the original publication. This time the geometry was established more precisely (ICSD #36062), and it turned out that there is an infinite network of $\ce{Co^2+}$ and $\ce{Hg^2+}$ cations cross-linked via thiocyanate ligands; the new suggested name was cobalt mercury thiocyanate, $\ce{Co(SCN)4Hg}$ [2]:
The combination of two tetrahedrally coordinated atoms, $\ce{Hg}$ and $\ce{Co}$, has produced a most unusual arrangement in which the $\ce{Hg}$ and $\ce{Co}$ atoms are held apart by four spirals, each containing $4$ $\ce{SCN}$ bridges, which are interlinked so that any one $\ce{SCN}$ bridge takes part in eight spirals.
[...] Each such spiral is a spring holding the $\ce{Hg}$ and $\ce{Co}$ atoms apart and straining the bonds in the process. It is almost certainly this strain which flattens the tetrahedral coordination round $\ce{Hg}$ and $\ce{Co}$ in the $c$ direction. If the arrangement could be reproduced mechanically it would probably provide the ideal spring mattress!
References
Jeffery, J. W. Nature 1947, 159 (4044), 610. DOI: 10.1038/159610a0.
Jeffery, J. W.; Rose, K. M. Acta Cryst B 1968, 24 (5), 653–662. DOI: 10.1107/S0567740868002980. | {
"domain": "chemistry.stackexchange",
"id": 14978,
"tags": "inorganic-chemistry, nomenclature, coordination-compounds"
} |
Ring expansion from a given cyclic carbocation | Question: How will the cyclobutane ring behave in the case of cyclobutylmethylium (cyclobutylmethyl cation)?
I initially thought there would be ring expansion to a five membered ring so that there may be less angle strain and a secondary carbocation instead of a primary one. But I have also been told that there will be ring contraction for stability purposes. I would be highly obliged if someone could explain the mechanism involved here.
Answer: I think your friend is thinking of the cyclobutyl carbocation which does ring contract to the cyclopropyl carbinyl carbocation (and also equilibrates with the methallyl carbocation).
However, just as you thought, the cyclobutyl carbinyl carbocation does ring expand to the cyclopentyl carbocation (ref_1, ref_2, ref_3). This rearrangement is driven by carbocation stability (primary to secondary) and relief of the steric strain present in the 4-membered ring, again, just as you suggested.
Note: this question has been asked previously on SE Chem, however I believe the accepted answer is incorrect, as it seems to primarily address the cyclopropyl carbinyl case, and what is said about the cyclobutyl carbinyl carbocation (little to no ring expansion) is incorrect. | {
"domain": "chemistry.stackexchange",
"id": 12095,
"tags": "organic-chemistry, reaction-mechanism, stability, carbocation, rearrangements"
} |
How to compute entropy of networks? (Boltzmann microstates and Shannon entropy) | Question: I also asked in SO here a few days ago, thought it may be also interesting for physics-related answers.
I would like to model a network as a system.
A particular topology (configuration of edges between vertices) is a state-of-order of the system (a micro-state).
I am trying to compute the entropy of a specific topology as a measure of the complexity of information embedded in that topological structure.
I don't have a degree in physics, so I would like answers that can help in creating a concept of entropy applied to networks (particularly small-world networks), as systems embedding information in their topology.
Below, I share my reasoning and doubts.
I first thought to make an analogy with Shannon entropy applied to strings: here entropy is a measure of the randomness of a string, computed as a sum over the probabilities of certain digits occurring.
Similarly, I then thought that entropy may hold for an Erdős–Rényi random network, and the measure could reflect the randomness of an edge between a pair of vertices.
Does Shannon entropy hold for non-random types of networks?
As a second approach, I thought that according to Boltzmannβs definition, entropy is the multiplicity of equivalent states.
How could equivalent topologies be modelled (or how can we compute similarity between two networks)?
How to measure how much a state of order of a particular topology is "uncommon", with respect to all other possible configurations?
Should I attempt to model a topology as a probability over all possible distributions of edges (complete network)?
Answer: For all definitions of entropy, you have to define an ensemble of states to define the respective probabilities (this is related, if not equivalent to the macrostate). For example when you calculate the Shannon entropy of a string, you assume an ensemble of possible strings (and their likelihood) given by the probability of certain letters in your language of choice. For a sufficiently long string, you can estimate those probabilities from the string itself and thus βbootstrapβ your ensemble.
So, to do something similar for networks, you first have to define an appropriate ensemble of networks that you want to consider. These would be your "equivalent topologies". What makes sense here depends on how you want to interpret your entropy or, from another point of view, what properties of the network you consider variable for the purpose of encoding information. One option you may want to consider is network null models, a.k.a. network surrogates. There are several methods available for obtaining such¹, but note that the properties of the underlying ensembles differ and are not always obvious.
Some further remarks:
Assuming that each network is its own microstate, the Shannon and the Gibbs entropy should be equivalent, except for a constant factor.
You may want to take a look at Phys. Rev. Lett. 102, 038701, which applies thermodynamic concepts to networks, though I never found out what ensemble they are considering.
how can we compute similarity between two networks
There are several proposals for distance metrics between networks, starting with the Euclidean distance of the adjacency matrices. Unfortunately, I do not have a good citation at hand.
For any reasonable ensemble, you end up with a gargantuan amount of networks/microstates. Therefore it is usually not feasible to empirically estimate the probability of all microstates or even a given microstate. Instead, you have to estimate the probability density for a neighbourhood of microstates. For this you need the above distance.
¹ I tried to cite all of them in the introduction of this paper of mine, but that's not entirely up to date. | {
"domain": "physics.stackexchange",
"id": 35940,
"tags": "statistical-mechanics, entropy, information, complex-systems, network"
} |
ADO.NET Wrapper | Question: I end up writing a lot of the same code when I query a database I thought I would try to encapsulate that all in a class that could be used by any provider that implemented IDbConnection. i'm not looking to do much, return a DataTable when I want a result set, things like that. Here is my first attempt. I've tested that it works with SQL Server, but would like to know if there is any way to improve it or any issues I may not be aware of.
public class DataQuery<TConnection> where TConnection : IDbConnection, new()
{
private string connectionString;
private TConnection cnn;
private TConnection NewConnection()
{
cnn = new TConnection();
cnn.ConnectionString = connectionString;
return cnn;
}
public DataQuery(string connectionString)
{
this.connectionString = connectionString;
}
public int Execute(IDbCommand cmd)
{
using (cnn = NewConnection())
using (cmd)
{
cmd.Connection = cnn;
cnn.Open();
return cmd.ExecuteNonQuery();
}
}
public DataTable QueryDataTable(IDbCommand cmd)
{
using (cnn = NewConnection())
using (cmd)
{
cmd.Connection = cnn;
cnn.Open();
var t = new DataTable();
t.Load(cmd.ExecuteReader());
return t;
}
}
public T QueryValue<T>(IDbCommand cmd)
{
using (cnn = NewConnection())
using (cmd)
{
cmd.Connection = cnn;
cnn.Open();
return (T)cmd.ExecuteScalar();
}
}
public IEnumerable<IDataRecord> QueryDataRecord(IDbCommand cmd)
{
using (cnn = NewConnection())
using (cmd)
{
cmd.Connection = cnn;
cnn.Open();
using (var reader = cmd.ExecuteReader())
{
while (reader.Read())
{
yield return reader;
}
}
}
}
public IEnumerable<string[]> QueryStringArray(IDbCommand cmd)
{
using (cnn = NewConnection())
using (cmd)
{
cmd.Connection = cnn;
cnn.Open();
using (var reader = cmd.ExecuteReader())
{
while (reader.Read())
{
string[] vals = new string[reader.FieldCount];
for (int i = 0; i < reader.FieldCount; i++)
{
vals[i] = reader.IsDBNull(i) ? "" : reader[i].ToString();
}
yield return vals;
}
}
}
}
public IEnumerable<string> QueryString(IDbCommand cmd)
{
using (cnn = NewConnection())
using (cmd)
{
cmd.Connection = cnn;
cnn.Open();
var reader = cmd.ExecuteReader();
while (reader.Read())
{
string[] vals = new string[reader.FieldCount];
for (int i = 0; i < reader.FieldCount; i++)
{
yield return reader.IsDBNull(0) ? "" : reader[0].ToString();
}
}
}
}
public bool Exists(IDbCommand cmd)
{
using (cnn = NewConnection())
using (cmd)
{
cmd.Connection = cnn;
cnn.Open();
using (var reader = cmd.ExecuteReader())
{
return reader.Read();
}
}
}
}
Sample usage
var db = new Sandbox.DataQuery<SqlConnection>(@"Server=xps13\sqlexpress;Database=AdventureWorks2014;Trusted_Connection=True;");
var execute = db.Execute(new SqlCommand("update Person.Address set PostalCode = PostalCode"));
Console.WriteLine("db.Execute: {0}", execute.ToString());
var results = db.QueryString(new SqlCommand("select * from Person.Address")).ToList();
Console.WriteLine("db.QueryString: {0}", results.Count());
var exists = db.Exists(new SqlCommand("select * from Person.Address where 1=0"));
Console.WriteLine("db.Exists: {0}", exists ? "exists" : "does not exist");
//Output
//db.Execute: 19614
//db.QueryString: 176526
//db.Exists: does not exist
Answer:
using (cnn = NewConnection()) {}
This is a very dangerous design. Especially the private TConnection cnn field that is shared by each method.
If you ever use it in parallel, then those methods will overwrite each other's connections. You should use them locally only:
using (var cnn = NewConnection()) {}
using (cmd)
This is also a no-go. I'd be really surprised if I found that my command had been disposed by the method using it. It is, however, a good idea to let the method manage the command. It would be better if you didn't have to create it outside.
I suggest this design instead:
public int Execute(Action<IDbCommand> configureCommand)
{
using (var cnn = NewConnection())
using (var cmd = cnn.CreateCommand())
{
configureCommand(cmd);
cmd.Connection = cnn;
cnn.Open();
return cmd.ExecuteNonQuery();
}
}
then you can use it like this:
var execute = db.Execute(cmd => cmd.CommandText = "update Person.Address set PostalCode = PostalCode");
This way you don't have to think about which IDbCommand you need to create.
or with parameters:
var execute = db.Execute(cmd =>
{
cmd.CommandText = "update Person.Address set PostalCode = PostalCode";
var p1 = new SqlParameter("@p1", SqlDbType.VarChar);
p1.Value = "abc";
cmd.Parameters.Add(p1);
}); | {
"domain": "codereview.stackexchange",
"id": 22183,
"tags": "c#, ado.net"
} |
An object viewed through a red glass would appear red in colour?
Explain, why in daylight an object appears red when seen through a red glass and black when seen through a blue glass?
My understanding according to what is given in my textbook was like that in daylight blue colour is almost absent in the rays reaching us (Ques: 1 Is it true? Why so?).
Hence, when the rays strike the object and head to our eyes, they will carry all seven VIBGYOR colours in increasing order of their presence: violet will be present least, blue only a little, and red the most.
(Ques 2: Am I correct? Why so? A broader explanation is expected)
So now on striking the red glass , it absorbs all other colours but reflect red colour, so if the red colour if reflected then how do we view the object red the red light is reflected na? Is it because some part of red light gets refracted through the red glass too? So the light that is reflected or refracted (Refracted also or reflected only?) is the same colour as of the mirror (or any other coloured object) in general?
So now, for the blue glass, if my intuition above is correct, then it is easy to deduce that the object will be almost black (almost, or fully black?).
So now overall do I stand correct?
Note: Wherever you find it wrong or a bit doubtful feel free to demonstrate and point those things to me.
Answer:
My understanding according to what is given in my textbook was like that in daylight blue colour is almost absent in the rays reaching us
I'm not sure why your textbook would say that. It certainly isn't true. Here's a graph showing the spectrum of sunlight (source Wikimedia commons)
That pretty clearly shows (to the left end of the region labeled as "Visible" that while there is less blue in sunlight than (say) yellow, the blue is certainly not entirely absent. So, if your textbook says this, it is time to stop trusting that textbook!!
So now on striking the red glass , it absorbs all other colours but reflect red colour, so if the red colour if reflected then how do we view the object red the red light is reflected na? Is it because some part of red light gets refracted through the red glass too? So the light that is reflected or refracted (Refracted also or reflected only?) is the same colour as of the mirror (or any other coloured object) in general?
I think the question was asking about looking at an object "through" the glass. So we are not really concerned with reflection, we are concerned with the light which transmits through the glass (that light will refract, but that has nothing to do with the colour we will see, so let's just focus on the light passing through the glass, without worrying about the fact that refraction causes the light to somewhat change its direction of travel). The glass looks red because it absorbs light at shorter wavelengths. But this doesn't mean that only red passes through. For any real "red glass" it is just that much more red light is transmitted than other colours. Try it out. If you look at a white object through red glass it will certainly look red. But if you look at any other object you are going to see other colours but everything will be tinted "towards red", which is a rather complicated effect.
But now back to the original question.
Explain, why in daylight an object appears red when seen through a red glass and black when seen through a blue glass?
Is this even true? I've certainly never noticed it. Finding a blue glass and looking out the window I can most certainly report that it is not true. If I look at a white object then it looks blue. If I look at any other object I see multiple colours tinted towards blue. Where did this question come from? The question seems to be talking about a somewhat fictional reality.
If you look at a red object through blue glass, it could appear nearly black if the glass is "very blue" in the sense that it only transmits well over a very narrow wavelength range (rare in practice...). Very little blue light is reflected from the red object, and so most of the light arriving from the object at the glass is red. This hypothetical "very blue" glass transmits the red very poorly, and so you will see hardly any light arriving from the object. The object looks black because there is too little light arriving from it for your eyes to detect. But that's a very specific case of a red object viewed through a blue glass. This would be true if viewed in sunlight, but would also be true if viewed in any other "roughly white light" such as light from typical lightbulbs. | {
"domain": "physics.stackexchange",
"id": 98164,
"tags": "homework-and-exercises, visible-light, geometric-optics"
} |
Is there a way to modify Kadane's Algorithm such that we know the resulting subarray? | Question: Kadane's Algorithm is an algorithm that solves the maximum subarray problem by clever dynamic programming. Is there a way to further modify the algorithm so that we would get to know the resulting subarray that produces the corresponding maximum sum?
PS: I don't know whether I should post this here, or Stack Overflow, or both.
Answer: Yes, you can.
Kadane's algorithm keeps the value of the best subarray in a variable; let's call it best_sum.
Notice the invariant in the algorithm.
best_sum will always (after each iteration) contain the value of the maximum subarray of the prefix of the array that you have already visited.
Additionally you know the best suffix sum of the current prefix, let's call it current_sum, which you use to update best_sum.
You just need to do the same thing with the position.
Introduce three more variables current_startindex, best_startindex and best_endindex. current_startindex tells you, at which position the current best suffix starts. best_startindex and best_endindex indicate the start and end of the best subarray from the visited prefix.
Keep those variables valid in each iteration. E.g.
Whenever you update current_sum, also update current_startindex.
And whenever you update best_sum, also update best_startindex and best_endindex, so that at the end of each iteration the values are correct.
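To make the bookkeeping concrete, here is a minimal Python sketch of this index-tracking variant (an illustration of the idea; the shortened variable names are mine):

```python
def kadane_with_indices(arr):
    # best: value of the maximum subarray of the prefix visited so far;
    # cur: best sum of a subarray ending at the current position.
    # Assumes a non-empty input array.
    best = cur = arr[0]
    best_start = best_end = cur_start = 0
    for i in range(1, len(arr)):
        if cur < 0:
            # starting fresh at i beats extending a negative suffix
            cur, cur_start = arr[i], i
        else:
            cur += arr[i]
        if cur > best:
            best, best_start, best_end = cur, cur_start, i
    return best, best_start, best_end
```

For the classic example [-2, 1, -3, 4, -1, 2, 1, -5, 4] this returns (6, 3, 6), i.e. the subarray [4, -1, 2, 1].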
After iterating over the complete array, best_sum will contain the sum of values of the maximum subarray, and best_startindex and best_endindex will tell you the positions. | {
"domain": "cs.stackexchange",
"id": 14498,
"tags": "algorithms, maximum-subarray"
} |
Every graph with $\delta(G) \ge 2$ has a cycle of length at least $\delta(G)+1$? | Question: I'm reading up on graph theory using Diestel's book. Right on the outset I got confused though over proposition 1.3.1 on page 8 which reads:
Proposition 1.3.1. Every graph G contains a path of length $\delta(G)$ and a cycle of length at least $\delta(G)+1$ (provided that $\delta(G) \ge 2$).
Following the proof I can see why this would be true if G actually contains a cycle, but I keep thinking there are many graphs, like the path graph itself and connected trees, with $\delta(G) \ge 2$ but which don't have any cycles.
I found this question on the same proposition, asking to prove it. The accepted answers there seem to quote Diestel's proof verbatim, assuming G just has a cycle.
I'm pretty sure I'm missing something, so I wonder why one would choose this formulation or whether I'm simply misunderstanding the proposition. Is it assumed that graphs are cyclic unless stated otherwise? Might this be specific to the context in a way I managed to overlook?
As a reminder, $\delta(G)$ is the minimum degree, taken over all vertices of $G$.
Answer: If $\delta(G) \geq 2$ every vertex is connected to at least 2 others. This invariably leads to a cycle in any finite graph.
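This can even be made constructive. The following Python sketch (my own, not part of the original answer) grows a path until it cannot be extended, so that every neighbour of the endpoint lies on the path, and then jumps back to the earliest such neighbour, closing a cycle of length at least $\delta(G)+1$:

```python
def long_cycle(adj):
    # adj: {vertex: list of neighbours} for a finite graph
    # with minimum degree delta >= 2.
    start = next(iter(adj))
    path = [start]
    on_path = {start}
    # extend the path while the endpoint has a neighbour off the path
    while True:
        nxt = next((u for u in adj[path[-1]] if u not in on_path), None)
        if nxt is None:
            break
        path.append(nxt)
        on_path.add(nxt)
    # all neighbours of the endpoint now lie on the path; jumping back to
    # the earliest one closes a cycle with >= deg(endpoint) + 1 vertices
    pos = {v: i for i, v in enumerate(path)}
    first = min(pos[u] for u in adj[path[-1]])
    return path[first:]
```

On the 5-cycle this returns all five vertices; on $K_4$ it returns a 4-cycle, matching the $\delta(G)+1$ bound.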
To see why, try to construct a path without a cycle from a graph with $\delta(G) \geq 2$. Every vertex you add is connected to either a previously added vertex (forming a cycle) or another vertex. However, in turn, that vertex is connected... Since the graph is finite you will at some point run out of vertices to add which you haven't seen already, forcing you to form a cycle. | {
"domain": "cs.stackexchange",
"id": 5226,
"tags": "graphs"
} |
Mass swinging in a horizontal circle | Question:
A mass of $100$ grams is tied to a $50$ cm long string (secured to the ceiling). The mass swings around in a horizontal circle with constant speed, performing a quarter of a circle every second. What is the tension of the string and its angle with respect to the horizontal axis?
So the problem gives me this data:
mass: 0.1 kg
string length: 0.5 m
angular speed: $\frac{\pi}{2}\ \mathrm{rad\,s^{-1}}$
I can find the weight ($g = 9.8\ \mathrm{m\,s^{-2}}$):
weight: 0.98 N
Now I don't know how to find the tension and angle using only this data. I've seen similar problems, and they usually already give you the tension or the angle of the string.
Answer: Let the tension in the string be T. Then clearly for the horizontal plane, the component of T along the radius provides the required inward (centripetal) force.
If you assume the angle marked in the figure to be $\theta$, then
$$T\cos\theta = m\omega^2 r$$
Here, $r = l\cos\theta$, with $l$ the string length.
Hence, $$T = m\omega^2 l.$$ I hope you can take it from here. | {
"domain": "physics.stackexchange",
"id": 48350,
"tags": "homework-and-exercises, newtonian-mechanics, centripetal-force"
} |
Do "procedurally generated" images use a set of base images to generate new images (as AI generated images do)? | Question: I am new here, and apologize if this question is off-topic.
I know that AI generated images are based on a set or database of real images created by real artists.
In game development, I have heard of the term "procedurally generated" images.
My questions are :
Are "procedurally generated" images generated from a set of real
images created by real artists (as AI generated images are) ?
What are the main differences between "procedurally generated"
images and AI generated images ?
Answer: In Procedural Content Generation (PCG) in computer games, you can procedurally generate a variety of content: characters, weapons, songs, maps, dungeons, stories, and even entire NPCs. In a few words, you start from a set of base images and rules, then apply these rules to a random subset of images to create new images (note that the rules are usually based on some random generator, so there is variability and perturbation in them too, though this depends mostly on the role of PCG and what type of content you want to design). You can repeat this process at each playthrough, for example, to generate an entirely new experience for the player, or use the method to generate candidates which are then refined by hand, picking the very best generated ones.
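As a concrete (and entirely hypothetical) illustration of rule-driven PCG, the sketch below generates a small ASCII map from a few hand-authored tiles and two explicit designer rules; the seeded randomness supplies the variability mentioned above.

```python
import random

# Hand-authored base "tiles" and explicit designer rules drive the generation.
TILES = {"wall": "#", "floor": ".", "treasure": "$"}

def generate_map(width, height, seed=None, treasure_chance=0.05):
    rng = random.Random(seed)            # seeded randomness: reproducible variety
    grid = [["wall"] * width for _ in range(height)]
    # Rule 1: carve floor with a random walk from the centre, staying off the border.
    x, y = width // 2, height // 2
    for _ in range(width * height * 2):
        grid[y][x] = "floor"
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 1), width - 2)
        y = min(max(y + dy, 1), height - 2)
    # Rule 2: sprinkle treasure, but only on already-carved floor tiles.
    for row in grid:
        for i, cell in enumerate(row):
            if cell == "floor" and rng.random() < treasure_chance:
                row[i] = "treasure"
    return ["".join(TILES[c] for c in row) for row in grid]

for line in generate_map(24, 10, seed=42):
    print(line)
```

Running it twice with the same seed reproduces the same map; the designer's rules, not a trained model, fully control the output.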
In contrast, generative AI (or deep generative models) starts by training a (usually large) model on a (large) dataset, such that when noise is injected (or, as in some other approach, the noise is the input) you can generate (i.e., synthesize) a new sample that looks like a plausible image from that dataset. In this regard, there are multiple effective methods like generative adversarial networks and diffusion models, but in some cases also variational autoencoders and normalizing flows are fine.
The difference between PCG and GenAI is that in the former case you design the base images and rules by hand, whereas in the latter case you design the model and, in some cases, also curate the dataset. So, in PCG the generation is controlled by your rules, whereas in GenAI it is determined by what the model has successfully learned. | {
"domain": "ai.stackexchange",
"id": 4004,
"tags": "machine-learning, ai-design, game-ai"
} |
The treatment of infinitesimal quantities | Question: Please be advised that my question is different from some of the existing threads like this one.
I have long been convinced that if we are to question the value of something of which we ultimately are going to take the derivative, then second order quantities are of no importance to us. For example, in the extremisation of an action. Moreover, in perturbation theory we are entitled to ignore higher order terms, the exact order of which we decide based on the precision we would like to have.
I am less convinced, however, that this particular treatment of the translation operation by J.J. Sakurai on page 40 of his book Modern Quantum Mechanics, third edition, is correct:
Suppose for one particular position state $|x\rangle$ we define a translation operator $\mathcal{X(dx)}:\mathcal{H}\to\mathcal{H}$ such that its effect on state $|x\rangle$ is $$\mathcal{X}(dx)|x\rangle:=|x+dx\rangle$$
Among many properties we would like such translation operator to have, one of them is the conservation of probability which demands $\mathcal{X}^\dagger(dx)\mathcal{X}(dx)=id$. Asserted by J.J. Sakurai, the translation operator is of the form
$$\mathcal{X}(dx)=id-iA\cdot dx$$ for some Hermitian operator $A$. Calculation shows that
$$\mathcal{X}^\dagger(dx)\mathcal{X}(dx)=id-i(A-A^\dagger)\cdot dx+O(dx^2)$$where the second order terms do not have zero coefficients. So why are we allowed to ignore these second order terms in this specific case?
Answer: Strictly speaking you are right, but at the sacrifice of the relaxed good-faith spirit of the introduction involved in the book!
Indeed, to satisfy your exceptionally literalist meaning, you'd need the full translation group element to be
$$\mathcal{X}(dx)={\mathbb I}-iA\cdot dx+ O(dx^2),$$ for a Hermitian Lie group algebra element $A$.
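This is quick to verify symbolically. The sketch below (sympy, with the Pauli matrix $\sigma_x$ standing in for a generic Hermitian $A$, my choice rather than the book's) shows the truncated operator failing unitarity precisely at order $dx^2$, while the full exponential is exactly unitary.

```python
from sympy import I, Matrix, eye, simplify, symbols

dx = symbols('dx', real=True)
A = Matrix([[0, 1], [1, 0]])    # a concrete Hermitian generator (Pauli sigma_x)

# Truncated, first-order operator: unitarity fails, but only at order dx^2.
X1 = eye(2) - I * A * dx
print(simplify(X1.H * X1))      # diagonal matrix with entries 1 + dx**2

# Full exponential: exactly unitary for every finite dx.
U = (-I * A * dx).exp()
print(simplify(U.H * U))        # the 2x2 identity matrix
```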
The reader is invited to implicitly ignore/blur-out all such higher order infinitesimals, which, complacently, the authors assume the readers have already internalized at the sight of shift variables written as dx.
In practice/reality, of course, a reader should appreciate quite soon that this is seat-of-the-pants shorthand for
$$\mathcal{X}(dx)= \exp(-iA\cdot dx),$$
the standard Lagrange translation operator, exponentiation of the gradient operator. Take extra care to appreciate the correctness of the signs involved! | {
"domain": "physics.stackexchange",
"id": 92411,
"tags": "quantum-mechanics, operators, differentiation, calculus, unitarity"
} |
Intuition on Different SI Units and Squaring Decimals (Beginner Question) | Question: So this is a very basic question. I am certainly not in grasp of some key aspect of physics equations here. I hope someone will be able to help me out.
Here is the equation for drag force:
$$F_{D}=\frac{1}{2} \rho v^{2} C_{D} A$$
If $\rho$, $C_{D}$, $A$ = 1, the equation changes to $F_{D} = \frac{1}{2}v^{2}$
If I plug in $v$ = 1 $m/s$
The equation evaluates to $1/2 \times 1$
But instead, if I change the SI unit to cm, the equation suddenly evaluates to $1/2 \times 100 \times 100$ (with $v = 100\ \mathrm{cm/s}$), which is a different and much bigger value compared to the first result ($v = 1\ \mathrm{m/s}$). Is this move invalid in physics equations? Are the equations strictly restricted to the primary SI units, and does converting them to other forms somehow make the equation invalid?
Another related question:
Why does drag force, or any force with $F \propto v^{2}$, go on to decrease below $v = 1$, while for $v > 1$ it increases exponentially?
Ex: If $v = 0.5$,
$0.5 * 0.5 = 0.25$ (Which is way smaller, almost half the size of the input)
and If $v = 5$,
$5 * 5 = 25$ (Which is way larger than the input)
If this maps onto the real world, it feels to me like the force detects if the velocity is < 1, making the force way smaller, and when it is greater than 1, way bigger. However, again, if I switch these to a completely different SI unit like cm (assuming I was talking about m/s), suddenly the forces are exponentially bigger. I know this sounds kinda silly to an expert, but what is wrong with this formulation? I can't seem to put my finger on it.
Answer: Well, few problems here, which all seem to come down to a disregard for units of measurement.
First, the quantities you take to be equal to $1$ in the beginning still have dimensions (apart from the drag coefficient, let's keep that at $1$ for simplicity), so they're not just "$1$". In fact, you see that shortly after this, you're writing results for a force in units of velocity, which is nonsense.
If $\rho=1\, kg/m^3$ and $A=1\, m^2$ then, for $v=1\, m/s$, you get $F_D=0.5\, N$, where $N$ is the force unit Newton given by $1\, N=1\,kg\cdot m/s^2$.
Now let's use $v=100\, cm/s$. Then $A=10^4\, cm^2$ and $\rho=10^{-6}\, kg/cm^3$. Substituting all of these, you get $F_D=50\,kg\cdot cm/s^2=0.5\,kg\cdot m/s^2$, which is the same as what we got above.
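The same bookkeeping in plain Python (a sketch with hand-tracked conversion factors, no units library):

```python
# Drag F_D = 0.5 * rho * v^2 * C_D * A, computed in two unit systems.
C_D = 1.0

# SI units: kg, m, s
rho_si, v_si, A_si = 1.0, 1.0, 1.0          # kg/m^3, m/s, m^2
F_si = 0.5 * rho_si * v_si**2 * C_D * A_si  # 0.5 N = 0.5 kg*m/s^2

# cm-based units: kg, cm, s (1 m = 100 cm)
rho_cm = rho_si / 100**3                    # kg/cm^3 = 1e-6
v_cm = v_si * 100                           # cm/s    = 100
A_cm = A_si * 100**2                        # cm^2    = 1e4
F_cm = 0.5 * rho_cm * v_cm**2 * C_D * A_cm  # 50 kg*cm/s^2

# Converting the result back: 1 kg*cm/s^2 = 0.01 kg*m/s^2
print(F_si, F_cm * 0.01)                    # both 0.5: same force either way
```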
As for your second question, again, you have to keep track of units. It doesn't make sense to say that $v^2$ is "larger" or "smaller" than $v$. They have different units of measurement, so that comparison does not make sense.
In fact, notice how, if $v=5\, cm/s$, then $v^2=25\, cm^2/s^2$, but we can also write these as $v=0.05\,m/s$ and $v^2=0.0025\,m^2/s^2$.
This would be the equivalent of saying that the area of a square is somehow smaller or bigger than the lengths of the sides. | {
"domain": "physics.stackexchange",
"id": 76762,
"tags": "newtonian-mechanics, dimensional-analysis, drag, si-units, intuition"
} |
Is a lion a bony fish? | Question: If you ask Wikidata "Does the species lion (Q140) have a parent taxon line up to the Osteichthyes (Q27207, bony fishes)?", it answers yes:
SELECT ?item1
WHERE {
wd:Q140 wdt:P171* ?item1.
?item1 wdt:P171 wd:Q27207.
}
Here's a direct link.
Now, I assume that this must be wrong at some level (I'm no biologist, so please correct me if I'm wrong), so I tried to find the error.
This query displays that path more explicitly, starting with mammals:
Mammals (Q7377) have Tetrapoda (Q19159) as a parent taxon.
Tetrapoda (Q19159) have Tetrapodomorph (Q1209254) as a parent taxon.
Tetrapodomorph (Q1209254) have Rhipidistia (Q150598) as a parent taxon.
Rhipidistia (Q150598) have Sarcopterygii (Q160830) as a parent taxon.
Sarcopterygii (Q160830) have Osteichthyes (Q27207) as a parent taxon.
The third point seems strange, because Rhipidistia are described as a taxon of fish, which would mean that all mammals are fish.
Maybe this comes from the fact that tetrapods (and therefore mammals) evolved from Sarcopterygii 390 million years ago, as described here.
Is "having evolved from" considered a parent taxon in biology? If not, which of the five statements above is wrong?
Answer: The path is correct. The safest reading is to say that the lion shares a set of characteristics with the lungfish. You can also say that lions and carps are bony vertebrates (Euteleostomes).
In evolutionary taxonomy, each taxon does not need to consist of a single ancestral node and all its descendants: it allows for groups to be excluded from their parent taxa (in other words, not "being part of" but, as you said, "having evolved from"). Thus, lungfishes are closer to the lions than they are to, e.g., sharks or trout. In other words (even if this is too simplistic to say so), their "common ancestor" is more "recent".
In the Linnaean taxonomic system, all fishes were in the same class. This corresponds to our "everyday life" classification of fishes but not to the most recent findings about the "common ancestors".
To make things clearer, you need to understand the concept of paraphyly. The bony fishes group is paraphyletic with respect to the tetrapoda. In other words, it consists of the group's last common ancestor and all descendants of that ancestor excluding the tetrapoda. "Bony fish" is therefore not a clade, but "Osteichthyes/Euteleostomes" is a clade (= a monophyletic group of the bony vertebrate with a common ancestor that looked like our "modern" fishes). | {
"domain": "biology.stackexchange",
"id": 10145,
"tags": "taxonomy, classification"
} |
robot_localization with GPS map frame won't stay fixed | Question:
Hey guys,
I'm running a dual EKF setup of robot_localization to fuse GPS, alongside navsat_transform_node which provides the map->utm transform. I have been trying to figure this out for days now but I can't get this map frame's orientation to stay aligned with the odom frame.
I am running a GNSS heading receiver which provides the true earth-referenced heading, which I fuse in both state estimation nodes as a pose message. My understanding is, since the only source of orientation is coming from this one message for both nodes, they should both always have the same rotation? The map frame is initially the same as odom, but after some driving around and turning the robot it goes wild and becomes very inaccurate. My GPS is an RTK GPS setup; I have confirmed accuracy down to 5 mm and a heading accuracy of 0.1 degree, so I know there is no issue with my GPS.
Below is my robot_localization configuration:
ekf_se_odom:
frequency: 20
two_d_mode: true
sensor_timeout: 0.15
transform_time_offset: 0.0
transform_timeout: 0.0
print_diagnostics: true
debug: false
map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: odom
odom0: /warthog_velocity_controller/odom
odom0_config: [false, false, false,
false, false, false,
true, true, false,
false, false, true,
false, false, false]
odom0_queue_size: 10
odom0_nodelay: true
odom0_differential: false
odom0_relative: false
pose0: /gps/odometry
pose0_config: [false, false, false,
false, false, true,
false, false, false,
false, false, false,
false, false, false]
pose0_queue_size: 10
pose0_nodelay: true
pose0_differential: false
pose0_relative: false
imu0: /mcu_imu/data
imu0_config: [false, false, false,
false, false, false,
false, false, false,
false, false, true,
true, true, false]
imu0_differential: false
imu0_nodelay: false
imu0_relative: false
imu0_queue_size: 10
use_control: false
ekf_se_map:
frequency: 20
sensor_timeout: 0.15
two_d_mode: true
transform_time_offset: 0.0
transform_timeout: 0.0
print_diagnostics: true
debug: false
debug_out_file: "/home/alec/debug_ekf.txt"
map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: map
odom0: /warthog_velocity_controller/odom
odom0_config: [false, false, false,
false, false, false,
true, true, false,
false, false, true,
false, false, false]
odom0_queue_size: 10
odom0_nodelay: true
odom0_differential: false
odom0_relative: false
pose0: /gps/odometry
pose0_config: [false, false, false,
false, false, true,
false, false, false,
false, false, false,
false, false, false]
pose0_queue_size: 10
pose0_nodelay: true
pose0_differential: false
pose0_relative: false
odom2: /bunkbot_localization/odometry/gps
odom2_config: [true, true, false,
false, false, false,
false, false, false,
false, false, false,
false, false, false]
odom2_queue_size: 10
odom2_nodelay: true
odom2_differential: false
odom2_relative: false
imu0: /mcu_imu/data
imu0_config: [false, false, false,
false, false, false,
false, false, false,
false, false, true,
true, true, false]
imu0_differential: false
imu0_nodelay: true
imu0_relative: false
imu0_queue_size: 10
use_control: false
Below is my navsat_transform_node configuration:
navsat_transform:
broadcast_utm_transform: true
delay: 3.0
frequency: 20
magnetic_declination_radians: 0.0 # Set this depending on origin location in the world
publish_filtered_gps: true
use_odometry_yaw: true
wait_for_datum: false
yaw_offset: 0.0
zero_altitude: true
I set use_odometry_yaw to true as my heading is fused into the odometry and not from the IMU; I also set yaw_offset to 0 as the heading is 0 when facing east.
Here are some GIFs of the current behaviour:
And below is a GIF of my heading receiver odometry, which is zero when pointing EAST; I do a full rotation in this example:
Originally posted by agurman on ROS Answers with karma: 111 on 2019-04-14
Post score: 0
Answer:
I believe I have solved this issue by duplicating my GNSS heading pose message, one for the first EKF node and one for the second EKF node, and setting the frame_id of each message to be respective of the node, e.g. odom or map.
The map frame now stays right on top of the odom frame and retains the same orientation. I am not exactly sure why this works, if anyone could elaborate that would be great.
Originally posted by agurman with karma: 111 on 2019-04-16
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by tseco on 2020-03-24:
Hello,
We have a similar problem but we cannot solve it with your proposal. Did you find the reason why changing the frame_id of the heading pose message worked?
Thanks in advance! | {
"domain": "robotics.stackexchange",
"id": 32872,
"tags": "navigation, mapping, gps, ros-kinetic, navsat-transform-node"
} |
Normalization for QFT single particle destruction operator | Question: I don't understand a particular statement in the QFT book by Klauber. The particular page I'm having difficulty on is page 67 of chapter 3 (PDF link).
The big picture is that the author wishes to investigate what the (operator) solutions to the Klein-Gordon equation, $\phi(x)$ and $\phi^\dagger(x)$, do when acting on the vacuum state $|0\rangle$. As prep for this, he creates a "general single particle state" ("general" meaning non $\mathbf{k}$-eigenstate) by operating on the vacuum with the operator
$$C\equiv\sum_\mathbf{k}A_\mathbf{k}a_\mathbf{k}^\dagger,\tag{3-108}$$
$$C|0\rangle=\sum_\mathbf{k}A_\mathbf{k}a_\mathbf{k}^\dagger|0\rangle=A_1|\phi_1\rangle+A_2|\phi_2\rangle+\cdots\equiv|\phi\rangle\tag{3-109}$$
Each $A_\mathbf{k}$ is just a number, the absolute value square of which represents the probability of finding the $\mathbf{k}$ eigenstate for the single particle.
The new state $C|0\rangle=|\phi\rangle$ is interpreted as a single particle state in a superposition of $\mathbf{k}$-eigenstates $|\phi_k\rangle$. The subscript $\mathbf{k}$ represents different momenta.
For probability/normalization arguments, the numbers $A_\mathbf{k}$ should obey
$$\sum_\mathbf{k}\left|A_\mathbf{k}\right|^2=1.\tag{3-110}$$
I feel like I understand the above statements.
The author then introduces the "general single particle destruction operator"
$$D\equiv\sum_\mathbf{k}a_k,\tag{3-111}$$
and shows that when applied to our general single particle state $|\phi\rangle$ above, the vacuum is (re)produced:
$$\begin{eqnarray}
D|\phi\rangle&=&\left(\sum_\mathbf{k}a_k\right)A_1|\phi_1\rangle+\left(\sum_\mathbf{k}a_k\right)A_2|\phi_2\rangle+\cdots\\
&=&A_1\underbrace{a_1|\phi_1\rangle}_{=|0\rangle}+A_1\underbrace{a_2|\phi_1\rangle}_{=0}+A_1\underbrace{a_3|\phi_1\rangle}_{=0}+\cdots+\\
&\ &+A_2\underbrace{a_1|\phi_2\rangle}_{=0}+A_2\underbrace{a_2|\phi_2\rangle}_{=|0\rangle}+ A_2\underbrace{a_3|\phi_2\rangle}_{=0}+\cdots+\\
&\ &+\cdots\\
&=&\underbrace{\left(A_1 + A_2 + \cdots\right)}_\text{can normalize = 1}|0\rangle.
\end{eqnarray}\tag{3-112}$$
(Note the subtle but important differences in the underbraces; some are $0$ while others are $|0\rangle$.)
The part I am struggling with is understanding how the underbrace "can normalize = 1" at the end of $\text{(3-112)}$ can be true given $\text{(3-110)}$. It seems to me that the $A$ terms appearing at the end of $\text{(3-112)}$ are the same ones defined in the construction operator $C$ and normalized so that their absolute values squared sum to $1$. How can their just-plain sum also be of magnitude $1$? I know that one would *like* the underbraced term to sum to one, but I don't see how that can be.
It was suggested I consider the quantity $\langle\phi|D^\dagger D|\phi\rangle$. Here is my attempt to calculate it.
$$
\begin{eqnarray}
\langle\phi|D^\dagger D|\phi\rangle&=&\langle0|(A_1^\dagger+A_2^\dagger+\cdots)(A_1+A_2+\cdots)|0\rangle=\langle0|\sum_\mathbf{j}\sum_\mathbf{k}A_\mathbf{j}^\dagger A_\mathbf{k}|0\rangle\\
&=&\sum_\mathbf{j}\sum_\mathbf{k}A_\mathbf{j}^\dagger A_\mathbf{k}\underbrace{\langle0|0\rangle}_{=1}=\underbrace{\sum_\mathbf{j}\sum_\mathbf{k}A_\mathbf{j}^\dagger A_\mathbf{k}}_\text{Can't simplify}\ne1
\end{eqnarray}
$$
Answer: Paragraphs "Creating a General Single Particle State (Discrete Solution Form)" ($3.108 \to 3.110$) and "Destroying a General Single Particle State (Discrete)" ($3.111 \to 3.112$) are two independent paragraphs, and should not be mixed.
It is not possible to start with a normalized state $C|0\rangle$, where $C\equiv\sum_\mathbf{k}A_\mathbf{k}a_\mathbf{k}^\dagger$ with $\sum_\mathbf{k}\left|A_\mathbf{k}\right|^2=1$, then apply the operator $D\equiv\sum_\mathbf{k}a_k$, and find that the resulting state $\sum\limits_i A_i|0\rangle$ is also normalized (except in the trivial case where there is only one term in the sum).
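Two concrete coefficient sets (a small numpy sketch with values of my own choosing) make the point numerically:

```python
import numpy as np

# Two explicit coefficient sets, each normalized so that sum |A_k|^2 = 1:
A_equal = np.array([0.5, 0.5, 0.5, 0.5])   # plain sum = 2,    |sum|^2 = 4
A_partial = np.array([0.6, -0.8])          # plain sum = -0.2, |sum|^2 = 0.04

for A in (A_equal, A_partial):
    # Normalizing sum |A_k|^2 says nothing about the plain (unsquared) sum.
    print(np.sum(np.abs(A)**2), np.abs(np.sum(A))**2)
```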
The main reason is that the operator $D$ is not unitary, so there is no reason why it should transform a normalized state into an other normalized state. Or said, differently :
$\sum_\mathbf{k}\left|A_\mathbf{k}\right|^2 \neq |\sum\limits_{k} A_\mathbf{k} |^2$ | {
"domain": "physics.stackexchange",
"id": 11091,
"tags": "quantum-field-theory"
} |
The maximum air drag force doesn't coincide with the maximum velocity | Question: I'm trying to decode my data from an experiment conducted today. We wanted to calculate the air drag acting on a pendulum. In order do to this, we first created a model for our frictional force F:
Using some mechanics, we derived that $$ml^2\ddot{\theta} = F l - mgl\sin(\theta)$$
Which yielded us an equation for the force $$F = ml\ddot{\theta} + mg\sin(\theta)$$
The angular acceleration and the angle of displacement could be retrieved from our experiment using a set of cameras. This data was then fed into the equation derived above; mysteriously, however, when the force is plotted against time alongside the velocity of the blob, we find that they're not in phase. What is expected is that the force exerted by air drag should be maximum at the bottom, where the velocity is at its maximum; instead, the force is 0.
We used a pendulum of length 36 cm, and a mass of 67 grams. Here's what the plot looks like:
As you can see, the force is at a maximum when the blob is stationary.
We know that the force is proportional to the velocity, using some facts about the Reynolds number and so forth (or to the derivative of our angle with respect to time), so that would yield us a sine wave (supposing the angle is represented by a cosine wave). The force is then a sine wave, but according to our equation, the RHS must be a sum of two cosine waves, since the second derivative of the angle with respect to time gives us a cosine again, so the equation doesn't seem to hold. What can possibly create this fault?
Answer: The position data clearly shows a beating frequency, which is not possible with a single-mass pendulum. This is therefore most likely a sampling artifact. Since according to the comments the framerate of the camera is already fairly high, I would suggest repeating the experiment with a slower pendulum. Increasing the pendulum length by a factor of four should decrease the frequency by a factor of two, which should lead to far more accurate position sensing.
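As a cross-check on the model itself (a scipy sketch with a made-up drag coefficient): integrating $\ddot{\theta} = -(g/l)\sin\theta - c\,\dot{\theta}|\dot{\theta}|$ and then forming the question's estimator $F = ml\ddot{\theta} + mg\sin\theta$ yields a force exactly in (anti-)phase with the velocity, so an ideal single pendulum cannot reproduce the observed phase offset.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Pendulum from the question; c is a made-up quadratic-drag coefficient.
g, l, m, c = 9.81, 0.36, 0.067, 0.4

def rhs(t, y):
    th, w = y
    return [w, -(g / l) * np.sin(th) - c * w * np.abs(w)]

sol = solve_ivp(rhs, (0.0, 10.0), [0.5, 0.0], dense_output=True,
                rtol=1e-9, atol=1e-12)
t = np.linspace(0.0, 10.0, 2000)
th, w = sol.sol(t)
wdot = np.array([rhs(0.0, y)[1] for y in zip(th, w)])

# The question's estimator F = m*l*thdd + m*g*sin(th); for this model it
# collapses to -m*l*c*w|w|, i.e. exactly (anti-)in phase with the velocity.
F = m * l * wdot + m * g * np.sin(th)
corr = np.corrcoef(F, w * np.abs(w))[0, 1]
print(round(corr, 6))   # -1.0: perfectly anti-correlated
```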
Having said this, it just occurred to me that there is another possible source for the beating: the structure that the pendulum is attached to. If it is not rigid enough, then it will couple to the motion of the pendulum, and the coupled system can produce an actual physical deviation from simple (almost) harmonic motion that would look similar to the observed motion. I would therefore probably check the rigidity of the attachment point first before modifying the geometry. | {
"domain": "physics.stackexchange",
"id": 91835,
"tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics, experimental-physics, drag"
} |
Why isn't auto ionisation of water considered when we find the pH during salt hydrolysis? | Question: $\ce{H+}$ concentration from the auto-ionisation of water is $10^{-7}$
If we have $\ce{HA}$ as a weak acid and $\ce{BOH}$ as a weak base, having $\mathrm pK_\mathrm{a} = 3.2$ and $\mathrm pK_\mathrm{b} =3.4$ respectively, we get a salt $\ce{AB}$.
When we find $\ce{H+}$ concentration for this salt using the formula,
$$\ce{H+} = \sqrt{\frac{K_\mathrm wK_\mathrm a}{K_\mathrm b}}$$
We get $\ce{H+}$ concentration in the solution to be $10^{-8}$.
In such a scenario, why don't we add $\ce{H+}$ concentration from the auto-ionisation of water to the $\ce{H+}$ concentration?
Answer: What happens in salt hydrolysis is enhancement of water ionization with salts. Try asking yourself where do $\ce{H+}$ and $\ce{OH-}$ come from in the solution of $\ce{AB}$, and you'll find that they come from $\ce{H2O}$. Or equivalently, hydrolysis of $\ce{A-}$ can be written as
$$
\ce{A- + H+ <=> HA,H2O <=> H+ + OH-}\stackrel{\text{add}}{\Longrightarrow}\ce{A- + H2O <=> HA + OH-}.
$$
In a word, ionization has been taken into consideration in your calculation. There are multiple equilibria in the system, so you cannot simply add $10^{-7}$. | {
"domain": "chemistry.stackexchange",
"id": 14444,
"tags": "physical-chemistry, acid-base, equilibrium, hydrolysis, salt"
} |
Encoding Categorical Data Without Increasing the Dimension | Question: I've been exploring methods for encoding categorical data. I was hoping to find a good method that does not increase the dimension of the dataset, similar to the one used on this dataset about drug use: Drug consumption (quantified) Data Set
Each piece of categorical data in this dataset was converted to some real number, but yet the dimension of the dataset was not increased. Instead of just randomly replacing values with numbers, there appears to be some thought out method behind this. Can anyone shed some light on this matter?
Answer: There are several different types of categorical data. For example, the severity of trauma or a psychological scale is not categorical by nature: there is a latent continuous feature that was converted to discrete. In such a case the described quantification is absolutely reasonable.
For the non-ordered (nominal) attributes (for example, country or ethnicity), any quantification is mostly meaningless and can really create bias and introduce artificial order.
For the discussed database most of the attributes were ordinal. Two nominal attributes were not helped by any coding: we tested dummy coding and CatPCA-based coding.
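For the ordinal attributes, the standard way to quantify without adding columns is an explicit ordered mapping; below is a sketch with scikit-learn's OrdinalEncoder on hypothetical severity data (my example, not the drug dataset's actual coding):

```python
from sklearn.preprocessing import OrdinalEncoder

# One column in, one column out: the explicit category order encodes the
# latent "more severe than" relation as increasing numbers.
severity = [["none"], ["mild"], ["severe"], ["moderate"], ["mild"]]
enc = OrdinalEncoder(categories=[["none", "mild", "moderate", "severe"]])
print(enc.fit_transform(severity).ravel())   # [0. 1. 3. 2. 1.]
```

Passing the category order explicitly matters: the default alphabetical order would scramble the latent scale.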
Really, as Erwan wrote, each time it is necessary to analyze the variables and then decide how to encode them. | {
"domain": "datascience.stackexchange",
"id": 6442,
"tags": "categorical-data, encoding"
} |
Is $(L^2, L_z)$ a complete set of commuting observables? | Question: According to the main definition we define a (C.S.C.O.) complete set of commuting observables $(A,B,C, \dots)$ if:
Every commutator between the operators of the list is $0$
If we fix the eigenvalues of the operators there exists a unique eigenvector with these eigenvalues.
(Anyway, is there a reference for the exact formal definition of this concept? In every textbook I have, this concept is introduced with just a brief discussion of the subject.)
If I follow blindly this definition I conclude that ($L^2$, $L_z$) is a CSCO, because if I fix a value of $l$ and a value of $m$ there exists a unique eigenvector (namely a unique spherical harmonic for every fixed value of $l$ and $m$).
But if this set is complete, why, in the study of the hydrogen atom, can I add the Hamiltonian $H$ to the set?
To me, the set must not be complete, because if I fix just one value of $l$ or of $m$, I can clearly notice the degeneracy.
I even think that the latter reasoning may serve as a method to find that the set of observables is not complete, but I haven't found any reference in the literature.
So, what parts of my reasoning are wrong ?
Answer: It depends on the Hilbert space.
If I follow blindly this definition I conclude that $(L^2, L_z)$ is a CSCO, because if I fix a value of $l$ and a value of $m$ there exists a unique eigenvector (namely a unique spherical harmonic for every fixed value of $l$ and $m$)
This is true if you're considering $L^2(S^2)$, the natural Hilbert space for particles confined to the surface of a sphere. However, the Hydrogen atom lives in $L^2(\mathbb R^3) \simeq L^2(\mathbb R \times S^2)$. In the latter space, the eigenstates of fixed $l$ and $m$ are degenerate; since the hydrogen atom wavefunctions can be written $\psi_{nlm}$, clearly for a fixed $l$ and $m$ we can have many different states corresponding to an infinity of possible $n$'s.
The addition of the hydrogen atom Hamiltonian as a third commuting observable breaks this degeneracy, and so $(H,L^2,L_z)$ are a complete set of commuting observables for $L^2(\mathbb R^3)$.
Note also that if we consider the spin of the electron as well, our Hilbert space becomes $L^2(\mathbb R^3) \otimes \mathbb C^2$, and the states of fixed $n,l,m$ are now doubly degenerate. To break this degeneracy, we need to add another mutually-commuting observable such as $S_z$.
In the latter case if I add the $S_z$ operator, now the states with $(n,l,m,s_z)$ are degenerate and I can lift this degeneracy by adding $S$ resulting at the end with a C.S.C.O ? And In general I can state that the degeneracy is equal to the dimension of the Hilbert space minus the number of operators ?
The answer to both questions is no. If your Hilbert space is $L^2(\mathbb R^3)\otimes \mathbb C^2$ and you consider the observables $(H,L^2,L_z)$, then the eigenspace corresponding to some $(n,l,m)$ is two-dimensional, because a general eigenstate of $H,L^2,$ and $L_z$ would be of the form
$$\Psi_{nlm} = \psi_{nlm}(\mathbf x) \otimes\pmatrix{\alpha \\ \beta}$$
for some arbitrary $\alpha,\beta\in \mathbb C$. To lift this degeneracy, we add $S_z$ to the set. Now the most general state corresponding to e.g. $(n,l,m,+1/2)$ would be of the form
$$\Psi_{nlm\uparrow} = \psi_{nlm}(\mathbf x) \otimes \pmatrix{\alpha \\ 0 }$$
for arbitrary $\alpha\in\mathbb C$, so the corresponding eigenspace is one-dimensional. This is what we mean by non-degeneracy in this context.
The answer to your second follow-up question is also no. There's no connection between the number of operators and the dimensionality of the Hilbert space. A simple example would be the infinite dimensional Hilbert space $L^2(\mathbb R)$ equipped with harmonic oscillator Hamiltonian $H_{QHO}$. Because $H_{QHO}$ has no degeneracy, it comprises a CSCO all by itself. | {
"domain": "physics.stackexchange",
"id": 73907,
"tags": "quantum-mechanics, hilbert-space, operators, angular-momentum, observables"
} |
If a satellite is in orbit around a two-body system, what is its shape? | Question: For example, if something was orbiting the sun only slightly further out than the earth, it would risk capture by the earth when the earth caught up to it in its orbit. If it was orbiting at a great distance, then it would have an almost circular orbit around the barycenter of the sun-earth system (ignoring other planets for now). I understand the Lagrange points, but I'm wondering what happens when things are orbiting slightly further out.
Answer: This is called a "restricted three-body" problem. It is slightly simpler than a general three-body problem since one of the bodies can be assumed to be "light". However, even this simplified problem doesn't have an analytical solution.
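The restricted problem is nonetheless easy to explore numerically. The sketch below (scipy, with an arbitrary mass ratio and initial condition of my choosing) integrates the planar circular restricted three-body equations in the rotating frame; the conserved Jacobi constant provides a built-in accuracy check.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 0.01  # secondary-to-total mass ratio (an arbitrary choice)

def crtbp(t, s):
    """Planar circular restricted three-body problem in the rotating frame.

    The primaries sit fixed at (-mu, 0) and (1 - mu, 0); lengths and times
    are nondimensionalized so the frame rotates with unit angular velocity.
    """
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y) ** 3
    r2 = np.hypot(x - 1 + mu, y) ** 3
    ax = 2 * vy + x - (1 - mu) * (x + mu) / r1 - mu * (x - 1 + mu) / r2
    ay = -2 * vx + y - (1 - mu) * y / r1 - mu * y / r2
    return [vx, vy, ax, ay]

def jacobi(s):
    """Jacobi constant: the one conserved quantity, used as an accuracy check."""
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y)
    r2 = np.hypot(x - 1 + mu, y)
    return x**2 + y**2 + 2 * (1 - mu) / r1 + 2 * mu / r2 - vx**2 - vy**2

# An orbit around the primary, perturbed by the secondary (made-up state).
s0 = np.array([0.5, 0.0, 0.0, 0.9])
sol = solve_ivp(crtbp, (0.0, 50.0), s0, rtol=1e-11, atol=1e-11)
drift = abs(jacobi(sol.y[:, -1]) - jacobi(s0))
print(f"Jacobi constant drift: {drift:.2e}")  # small drift = trustworthy run
```

Plotting `sol.y[0]` against `sol.y[1]` shows the rosette-like perturbed ellipse described above; other initial conditions wander chaotically.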
The orbit of the satellite doesn't have a particular shape, and can be chaotic. In some situations it can usefully be approximated as a perturbed ellipse: that is, as a Keplerian orbit but with non-constant orbital parameters, which nevertheless vary in a fairly regular way: for example, the orientation of the ellipse might rotate, or precess, or the values of the inclination and eccentricity might oscillate. | {
"domain": "astronomy.stackexchange",
"id": 5939,
"tags": "orbital-mechanics"
} |
Are rotation matrices faithful representations of the rotation group? | Question: I would like to use rotation matrices as representations of the rotation group. I would like to know if these representations are faithful, i.e. isomorphic to the rotational group elements.
I read on the bottom of p. 61 in Ref. 1 that
"Only the $j = 1$ representation is isomorphic to the rotation group itself."
Can someone explain to my why this is the case?
Note: $j=1$ means that the eigenvalue of $J^2$ is $j(j+1)$, where $J^2=J_x^2+J_y^2+J_z^2$, where $J_i$ is the generator of rotation about the $i$-axis.
References:
J. Tseng, Symmetry and Relativity, lecture notes, 2017. The PDF file is available here.
Answer: Given a non-negative integer $j\in\mathbb{N}_0$, the spin-$j$ group representation/homomorphism $$\rho: SO(3)~\to~ GL(2j+1,\mathbb{R}) $$
is faithful/injective iff $j>0$, but technically speaking, never a group isomorphism, since it is never surjective, $${\rm Im}(\rho)~\subsetneq~ GL(2j+1,\mathbb{R}) .$$ | {
"domain": "physics.stackexchange",
"id": 50218,
"tags": "group-theory, rotation, representation-theory, group-representations"
} |
Root Locus and Routh-Hurwitz stability criterion | Question: A satellite launcher has a unit feedback system, whose global open-loop TF is given by:
$$G_c(s)G(s) = \frac{K(s^2-4s+18)(s+2)}{(s^2-2)(s+12)} $$
a) Draw the root locus for this function
b) Determine the range values of $K$ that make this system stable.
I don't know where to start to evaluate a), because the TF has the same number of zeros and poles, so in this case are there no branches and asymptotes?
For item b), I obtained the following expression to evaluate with the Routh-Hurwitz criterion:
$$s^3(1+K)+s^2(12-2K)+s(-2+10K) -24 + 36K = 0$$
However, when I finished the Routh-Hurwitz table and evaluated the inequalities, the result doesn't seem consistent with the root locus provided by MATLAB.
Answer: Suppose we have a third order polynomial in the form :
$$ s^3+a_2s^2+a_1s+a_0 = 0$$
There is a nice caveat for third order systems which is derived from the Routh-Hurwitz stability criterion. In order for this polynomial to be stable the following three conditions have to be met (trying to derive the Routh-Hurwitz table will be a total mess for this particular system):
$a_2 > 0$
$a_0 > 0$
$a_2a_1 > a_0$
The characteristic polynomial of the third order system is:
$$ (K+1)s^3+(12-2K)s^2+(10K-2)s+36K-24=0 $$
which by considering the fact that $K>0 \ (\Rightarrow K+1>1>0)$ can be rewritten:
$$ s^3+\frac{12-2K}{K+1}s^2+\frac{10K-2}{K+1}s+\frac{36K-24}{K+1}=0 $$
The above requirements for this particular polynomial are:
$\frac{36K-24}{K+1} > 0 \ \Rightarrow \ K > 0.6667 $
$\frac{12-2K}{K+1} > 0 \ \Rightarrow \ K < 6$
$\frac{(12-2K)(10K-2)}{(K+1)^2} > \frac{36K-24}{K+1} \ \Rightarrow \ K\in(0,\ 2)$
Taking these into consideration we conclude that the gain $K$ should lie somewhere in between the interval:
$$ 0.6667 \ \le \ K \ \le \ 2 $$
If you indeed try the values $0.6666$ or $2.01$ for $K$ you will see that your system goes unstable. For your information, there is a same caveat for the second order polynomials of the form:
$$ s^2+a_1s+a_0 = 0$$
This polynomial is stable if and only if $a_1,a_0 > 0$.
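These conditions are easy to check numerically. A small sketch (plain Python, with the gain values chosen just to probe the interval) applying the third-order caveat to the closed-loop coefficients derived above:

```python
def third_order_stable(a2, a1, a0):
    # Routh-Hurwitz caveat for s^3 + a2*s^2 + a1*s + a0
    return a2 > 0 and a0 > 0 and a2 * a1 > a0

def stable_for(K):
    # coefficients after dividing the characteristic polynomial by (K + 1)
    a2 = (12 - 2 * K) / (K + 1)
    a1 = (10 * K - 2) / (K + 1)
    a0 = (36 * K - 24) / (K + 1)
    return third_order_stable(a2, a1, a0)

print([K for K in (0.5, 1.0, 1.9, 2.0, 3.0) if stable_for(K)])  # [1.0, 1.9]
```

Only the sample gains strictly inside the interval pass, while the boundary value $K=2$ and values outside fail, matching the analysis.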
Now, regarding the root locus of your open loop function, it is somewhat challenging to derive since there is some complexity going on. You can always use some software to obtain it. This is the root locus from MATLAB:
Below is the root locus of the closed loop system for a specific value of $K=2$. Notice that the marks for the closed loop poles are indeed located on the imaginary axis which means that the system is critically stable (not strictly stable).
The system becomes strictly stable for values of the gain $K$ which lie in the interval: $(0.6667 \ 2)$. For $K=1$ the root locus of the closed loop system becomes:
And as a last test the root locus for $K=0.6667$ of the closed loop system also includes one pole of the closed loop system on the imaginary axis which implies again that the system is critically stable and not strictly stable:
As you can see, the stability of the system is very well captured by the Routh-Hurwitz criterion. There are some rules of thumb for obtaining the root locus of a system, such as that the poles of the system "go" towards the zeros of the system. However, I encourage you to try and obtain some on your own and have some software package to check them. Check also these series on how to draw them by hand. They are really good.
https://www.youtube.com/playlist?list=PLUMWjy5jgHK3-ca6GP6PL0AgcNGHqn33f | {
"domain": "engineering.stackexchange",
"id": 3277,
"tags": "control-engineering, control-theory, matlab"
} |
Algorithm to compute the n-th derivative of a polynomial in Python | Question: As a personal exercise, I'm trying to write an algorithm to compute the n-th derivative of an ordered, simplified polynomial (i.e., all like terms have been combined). The polynomial is passed as an ordered list where the i-th index corresponds (though is not equivalent) to the coefficient of x to the n-th power.
Example:
Take the derivative of: \$3x^3 + 5x^2 + 2x + 2\$ -> [3,5,2,2]
1st derivative: \$9x^2 + 10x + 2\$
2nd derivative: \$18x + 10\$
3rd derivative: \$18\$
4th...n-th derivative: \$0\$
Implementation in Python:
def poly_dev(poly, dev):
"""
:param poly: a polynomial
:param dev: derivative desired, e.g., 2nd
"""
c = 0 # correction
r = dev # to remove
while dev > 0:
for i in range(1, len(poly)-c):
poly[i-1] = (len(poly)-(i+c))*poly[i-1]
dev -= 1 # I suspect this
c += 1 # this can be simplified
return poly[:-r]
E.g., print(poly_dev(poly = [3,5,2,2], dev = 2))
I have a math background, but I'm only starting to learn about computer science concepts like Complexity Theory.
I've intentionally tried to avoid reversing a list, as I know that can be expensive. What other steps can I change to decrease this procedure's run time?
Answer: You are going one by one derivative, when you could easily do them all at once. With a little multiply-and-divide trick, you could save on complexity.
Also, your function will return wrong results for negative dev (it should raise an exception instead).
Further, your function will mess up the original list:
>>> poly = [3, 5, 2, 2]
>>> print(poly_dev(poly, 2))
[18, 10]
>>> print(poly)
[18, 10, 2, 2]
Here is my take on it:
def poly_dev(poly, dev):
if dev == 0:
return poly[:]
if dev > len(poly):
return list()
if dev < 0:
raise ValueError("negative derivative")
p = 1
for k in range(2, dev+1):
p *= k
poly = poly[:-dev]
n = len(poly)-1
for i in range(len(poly)):
poly[n-i] *= p
p = p * (i+dev+1) // (i+1)
return poly
So, the first thing I do after handling trivial cases is computing the multiplication factor for the first coefficient and take only a copy of the part of the list that I need (to avoid messing up the original).
Then I multiply each coefficient with the multiplication factor p, followed by computing the next one, meaning "divide by the smallest member of the product that defines p and multiply by the one bigger then the biggest one".
Notice that the division is //, which is integer division. That way you don't lose precision (and I think it's a bit quicker than the floating point one, but I'm not sure right now).
It may look redundant to multiply and later divide by the same number, but it's less work than multiplying dev numbers in each go.
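As a quick sanity check, here is the function again (restated so the snippet runs on its own), run against the example from the question:

```python
def poly_dev(poly, dev):
    if dev == 0:
        return poly[:]
    if dev > len(poly):
        return list()
    if dev < 0:
        raise ValueError("negative derivative")
    p = 1
    for k in range(2, dev + 1):
        p *= k
    poly = poly[:-dev]
    n = len(poly) - 1
    for i in range(len(poly)):
        poly[n - i] *= p
        p = p * (i + dev + 1) // (i + 1)
    return poly

poly = [3, 5, 2, 2]          # 3x^3 + 5x^2 + 2x + 2
print(poly_dev(poly, 1))     # [9, 10, 2]
print(poly_dev(poly, 2))     # [18, 10]
print(poly_dev(poly, 3))     # [18]
print(poly_dev(poly, 4))     # []
print(poly)                  # [3, 5, 2, 2] -- the original list is untouched
```

This reproduces all the derivatives listed in the question in a single pass each, without mutating the caller's list.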
Also, it would be more Pythonic to have enumerate(reversed(poly)) instead of range(len(poly)) in the for loop, but then we'd still need n-i somewhere, so this seemed cleaner to me for this particular case. Maybe I'm missing something obvious here. | {
"domain": "codereview.stackexchange",
"id": 20413,
"tags": "python, algorithm, complexity, symbolic-math"
} |
ML estimation - solve for x | Question: I'm trying to solve the following maximum likelihood estimation but for multiplicative noise instead of additive noise:
So the goal is to do ML-estimation for a scalar constant $x$, which is multiplied with noise:
$$\tilde{d}[n] = a[n]x$$
The noise $a[n]$ is a realization of the i.i.d. random variable $A \sim \mathcal{N}(\mu_A, \sigma_A)$.
The probability density function is given by
$$ \pi(x,\mathbf{\tilde{d}}) = \frac{1}{x^N} \pi_A\left(\frac{\tilde{\mathbf{d}}}{x}\right)$$
with $\pi_A$ being the pdf of $A$.
Now I took the natural logarithm of this expression and differentiated with respect to $x$, which led to the following term:
$$\frac{\partial}{\partial x} \ln\left(\pi(x,\mathbf{\tilde{d}})\right) = -\frac{N}{x} + \frac{1}{\sigma_A^2} \sum_{n=0}^{N-1} \left[ \frac{\tilde{d}[n](\tilde{d}[n]-\mu_A x)}{x^3} \right] \overset{!}{=}0$$
But how do I continue from that? My idea was to solve this equation for $x$, but I can't figure out a closed form solution...
Answer: It looks okay to me. If you define empirical mean $\hat{\mu}_d = \frac 1N \sum \tilde{d}[n]$ and empirical second moment $\hat{\gamma}_d = \frac 1N \sum \tilde{d}^2[n]$ then you effectively have an equation of the form $$-N/x + A/x^3 + B/x^2 \stackrel{!}{=} 0,$$ where $A$ and $B$ directly depend on $\hat{\mu}_d, \hat{\gamma}_d, \mu_A, \sigma_A$. If you multiply this equation with $x^3$ it becomes a quadratic equation in $x$, which you can solve for $x$...
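Whatever closed form comes out of the quadratic, you can sanity-check it by maximizing the log-likelihood numerically on simulated data. A sketch with made-up parameters ($\mu_A = 2$, $\sigma_A = 0.5$, true $x = 3$), restricted to candidates $x > 0$ for simplicity:

```python
import random
from math import log

random.seed(0)
mu_A, sigma_A, x_true, N = 2.0, 0.5, 3.0, 2000
d = [random.gauss(mu_A, sigma_A) * x_true for _ in range(N)]

def log_lik(x):
    # log pi(x, d) up to an additive constant, from the pdf in the question
    return -N * log(x) - sum((dn / x - mu_A) ** 2 for dn in d) / (2 * sigma_A ** 2)

# crude grid search over candidate values x in [1, 5]
grid = [1.0 + 0.01 * k for k in range(401)]
x_ml = max(grid, key=log_lik)
print(x_ml)  # close to the true value 3.0
```

Any candidate closed-form root should agree with this numerical maximizer up to the grid resolution.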
I get something like $$x^2 - \frac{\hat{\mu}_d \mu_A}{\sigma_A^2} x + \frac{\gamma_d}{\sigma_A^2} = 0$$ but that was scribbled down in a rush so I would have to check it again later. Can you confirm or correct? | {
"domain": "dsp.stackexchange",
"id": 9431,
"tags": "signal-analysis, estimation, statistics, parameter-estimation, maximum-likelihood-estimation"
} |
Is the free Bose field a conformal field? | Question: Let's consider the free Bose field
$$S=\frac{1}{2}g\int d^2x\ \partial_\mu\varphi\partial^\mu\varphi.$$
The action apparently shows that the system has conformal invariance (at least at the classical level). And we can read off that the Bose field has conformal dimension $0$. But now, I read in a book (An introduction to Conformal Field Theory: With Application to String Theory, R. Blumenhagen & E. Plauschinn, p. 50, below Eq.(2.92)) that:
As we have mentioned earlier, the free boson $\varphi(z, \bar{z})$ is not a conformal field since its conformal dimensions vanish $(h, \bar{h}) = (0, 0)$.
My question is, does the vanishing conformal dimension indicate that the field itself is not a conformal field? How exactly do we define a conformal field?
Furthermore, the two point correlator of a (quasi-)primary field with vanishing conformal dimension should have the form $\langle\varphi(x)\varphi(y)\rangle= \frac{C_{12}}{|x_1-x_2|}$, but for the free Bose field, on the other hand, we have $\langle\varphi(x)\varphi(y)\rangle= -\frac{1}{4\pi g}\ln (x-y)^2$. Why are they different? Maybe you will say, it is the difference for the correlator forms that indicates the free Bose field is not a conformal field. But I would like to know is there any deeper reason to explain why its correlator is different from that of a primary field and hence itself is not a conformal field?
Answer: Indeed, the most obvious reason a 2D free boson field is not a proper field of a conformal field theory is that its two-point function is "wrong". This, however, should not itself be surprising - if we just start from a field invariant under conformal symmetry (which $\phi$ is), then we would from the "pure" CFT standpoint expect the theory to be trivial too. However, if we switch to a better description of the theory it becomes apparent how to read it as a proper CFT: The idea is that of usual bosonization, and we declare the "true" conformal field to be $V_\alpha = \, :\mathrm{e}^{\mathrm{i}\alpha\varphi}:$.
How do we know there's a "true" CFT hidden here? Well, we know that the space of states is not trivial! We have the currents $j(z) = \mathrm{i}\partial\varphi$ and its antiholomorphic counterpart. Expanding these in modes (since they fulfill $\partial j = \bar{\partial}\bar{j} = 0$) and integrating yields
$$ \varphi(z,\bar{z}) = \phi_0 - \mathrm{i}(a_0z + \bar{a}_0\bar{z}) - \mathrm{i}\sum_{n\neq 0}\frac{1}{n}\left(\mathrm{e}^{-nz}a_n + \mathrm{e}^{-n\bar{z}}\bar{a}_n\right)$$
Examining the commutation relations leads us to interpret $a_n$ as creation and $a_{-n}$ as annihilation operator for $n > 0$. The Hilbert space then must carry a representation of this operator algebra, and in particular it must decompose as the direct sum of Fock spaces built from an eigenvector of $a_0$.
Now, the exponentials in the mode expansion look rather "un-conformal". We usually want mode expansions like $a_nz^{-n-h}$. So we apply the cylinder-to-plane map $z\mapsto \mathrm{e}^{-z}$. This leads us to consider the theory of the conformal fields $V_\alpha$, and one could find that these obey the proper n-point functions for conformal fields. That is, the theory of 2D bosons on a cylinder is itself not naturally interpreted as a pure CFT, but its equivalent theory on a plane (with the usual conceit of CFT where we remove the origin and think of it as the infinite past) does possess such an interpretation. | {
"domain": "physics.stackexchange",
"id": 80009,
"tags": "quantum-field-theory, string-theory, conformal-field-theory"
} |
Where, exactly, does the boundary lie between 'entangled' particles and merely 'interacting' or 'coupled' ones? | Question: Have scientists discovered, yet, the precise amount of interaction needed to actually 'entangle' a pair or more of particles, beyond mere interaction or even coupling?
Is there a distinct difference, or is it merely a matter of degree?
Answer: Let us begin by examining bar magnets (with some distance between them). Once they are aligned with each other, they are in a coupled state for exactly as long as no other external interaction disturbs their common state. Our measuring instruments, with which we can see the coupling, do not destroy this coupling. The simplest measuring device is simply our eyes.
Now we want to align the bar magnets in a black box. The alignment, which is to be random in the experiment, is not visible to us. If we place a small bar magnet on the top of the box, this magnet will move and be aligned by the magnets inside the box. We are able to find the current positions of the magnets in the box.
Now we reduce the dimensions of the two bar magnets in the box more and more. At some point, the measuring magnet on the black box (being "stronger") will rotate the magnets in the box. The experiment will be successful only as long as the measuring magnet on the box is smaller than the magnets in the box.
And now we have two electrons in the box (let's assume we can hold them with optical tweezers). And through their magnetic dipole moments, they align each other, just like the bar magnets used before. Do we at least have the possibility to see that they are in a coupled state?
Perhaps a moving electron on the top of the box is deflected by the electrons in the box under the influence of the magnetic field. However, the electron on the top could influence the electrons in the box and bring them out of alignment. However, by running the experiment several times, we can statistically prove that the two electrons in the box are aligned.
This is a contrived example. In principle, however, experiments with entangled photons (Spontaneous parametric down-conversion) work in exactly the same way. Only statistically can we prove that the photons generated in this way are entangled.
Look at point 2 in the sketch. We believe that the particles are in a common intermediate state described by a common function. Does this mean that when one of the particles is measured (which is only statistically possible over several pairs of particles!) the second particle is transferred from the common state to its individual state? Or does only our lack of knowledge about the actual states collapse?
Conclusion
The boundary between "entangled" particles and merely "interacting" or "coupled" particles is thus whether their states can be measured without being destroyed (coupled particles), or whether this is only possible with statistical methods over many pairs (entangled particles). | {
"domain": "physics.stackexchange",
"id": 79948,
"tags": "quantum-mechanics, quantum-information, quantum-spin, quantum-electrodynamics, quantum-entanglement"
} |
Does matter follow the curvature of space-time? | Question: How is matter affected by the warping of space-time? Does it expand and contract, follow the curvature of space? What happens the shape/volume/density of matter when it enters a gravity well, or washed over by gravitational waves?
Note: I dont have a background in physics. Just want a basic layman's explanation, so I can visualize.
Answer: If the matter consists of a loose collection of falling objects, say raindrops, or a group of rocks, then each individual piece of matter will not expand or contract owing to curvature of spacetime as it falls (except a tiny bit as I will explain in a moment), but the distances between the objects will expand or contract, depending on what spacetime is locally doing. Notice that this prediction is the same as the one made by the more familiar Newtonian picture of gravity: particles falling to Earth are each attracted to the centre of the Earth so two particles at the same height but separated by some horizontal distance will get gradually closer together as they fall. Two particles starting out from different heights get further apart because the lower one accelerates a bit more than the upper one.
The curvature of spacetime is also inviting each individual raindrop or rock to be squeezed or stretched in the same way, but the internal electromagnetic forces between the atoms are resisting this, so the individual raindrops or rocks remain of almost constant size.
When a gravitational wave passes by, the story is similar: separate freely falling rocks will be moved closer or further apart, but the size and shape of each individual rock is hardly affected because the gravitational effect is small compared to the internal electromagnetic forces. However, in an extreme case such as near a neutron star or a black hole then the gravitational effects will overwhelm everything else so that even rocks are crushed in one direction and pulled apart in the other. | {
"domain": "physics.stackexchange",
"id": 70526,
"tags": "general-relativity, spacetime, curvature, gravitational-waves, density"
} |
General method of deriving the mean field theory of a microscopic theory | Question: What's the most general way of obtaining the mean field theory of a microscopic Hamiltonian/action ? Is the Hubbard-Stratonovich transformation the only systematic method? If the answer is yes then what does necessitate our mean field parameter to be a Bosonic quantity ? Is the reason that all of directly physical observable quantities should commute?
Answer: Actually Wikipedia has an answer for your question,
https://en.wikipedia.org/wiki/Mean_field_theory
which will tell you how to build a mean field approximation self-consistently based on the Bogoliubov inequality.
If you want to know more details about the fundamental inequality, you can go through the book Statistical Mechanics: A Set of Lectures, written by Feynman.
Hope it helps. | {
"domain": "physics.stackexchange",
"id": 35525,
"tags": "condensed-matter, field-theory, phase-transition"
} |
Problem in RViz? | Question:
When I run RViz this message appears and I cannot solve it:
/home/eng/.rviz/display_config does not exist!
I use Fuerte.
Can anyone help me please?
Thank you.
Originally posted by M Samir on ROS Answers with karma: 1 on 2014-03-04
Post score: 0
Answer:
This warning only says that you have no saved RViz config. You can save your current config with File -> Save Config.
Originally posted by fivef with karma: 2756 on 2014-03-04
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 17162,
"tags": "rviz, urdf"
} |
How can blank be written on the tape if it is not part of input alphabet? | Question: I have read that the input alphabet of a Turing machine is a subset of the tape symbols because the input alphabet does not contain the blank symbol. But whenever there is a transition to the final state, the transition is written as (Blank, Blank, R/L) (the first parameter is the input, the second is what the machine writes on the tape, the third is the direction). This means that once we have processed the input, we read a Blank, replace the Blank by a Blank, and go to the final state. But how can a Blank be written on the tape if it is not part of the input alphabet?
With this knowledge I am trying to answer the following statement
"It is decidable whether a Turing machine will print some non-blank character."
Because the Turing machine will always write something on the tape, it should be decidable. But is it possible that there are only two states and only one transition, "(Blank, Blank, R/L)"?
Answer: The formal definition of a turing machine is a 7-tuple $(Q, \Sigma, \Gamma, \delta, q_0, q_{\text{accept}}, q_{\text{reject}})$, where:
$Q$ is a finite set of states.
$\Sigma$ is a finite set of input symbols (input alphabet).
$\Sigma$ does not contain the blank symbol ($\sqcup$).
$\Gamma$ is a finite set of tape symbols (tape alphabet).
$\sqcup \in \Gamma$ and $ \Sigma \subseteq \Gamma$.
$\delta : Q^\prime \times \Gamma \rightarrow Q \times \Gamma \times \{L,R\} $ is the transition function.
$Q^\prime = Q - \{q_{\text{accept}}, q_{\text{reject}}\}$
$q_0 \in Q$ is the start state.
$q_{\text{accept}} \in Q$ is the accept state.
$q_{\text{reject}} \in Q$ is the reject state.
The part you want to pay attention to is the definition of the transition function:
$\delta : Q^\prime \times \Gamma \rightarrow Q \times \Gamma \times \{L,R\} $
Note that the transition function accepts inputs that are of the tape alphabet, and writes output that is also of the tape alphabet.
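To see concretely how a blank gets written even though it is not in $\Sigma$, here is a minimal simulator sketch (a hypothetical toy machine, not from the question): the input alphabet is {'1'}, while the tape alphabet additionally contains the blank '_', which the machine both reads past the input and writes over it.

```python
# Minimal TM simulator: Sigma = {'1'}, Gamma = {'1', BLANK}.
BLANK = "_"

def run(delta, start, accept, tape_input):
    tape = dict(enumerate(tape_input))   # sparse tape; absent cells read as blank
    state, head = start, 0
    while state != accept:
        symbol = tape.get(head, BLANK)
        state, write, move = delta[(state, symbol)]
        tape[head] = write               # writing BLANK is perfectly legal
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Toy machine: erase every '1' by overwriting it with a blank,
# then accept on reading the first blank past the input.
delta = {
    ("q0", "1"): ("q0", BLANK, "R"),
    ("q0", BLANK): ("qa", BLANK, "R"),
}
print(run(delta, "q0", "qa", "111"))  # -> ____
```

The transition function here is typed over the tape alphabet on both sides, exactly as in the 7-tuple definition: the blank never appears in the input, but the machine is free to read and write it.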
A Turing machine is a decider only if it recognizes some language L and halts for every input. | {
"domain": "cs.stackexchange",
"id": 11125,
"tags": "turing-machines, computation-models"
} |
Algorithm to check that one sum is less than another | Question: Suppose that we have two sums:
a1+a2+...+an,
b1+b2+...+bm
We can perform only two binary operations on operands of these sums:
lt (less than)
eq (equal)
These operations can return 3 possible results: true, false, unknown. Other operations like summation, subtraction and etc. are not allowed.
The question is how to implement an algorithm for the function lt(a1+a2+...+an, b1+b2+...+bm) that also returns true, false or unknown? But unknown can be returned only if there is not enough information to return true or false.
For example if we know that (n=m=3 && a1 < b3 && a2 = b2 && a3 < b1) the algorithm has to return true.
Or if we know that (n=3 && m=2 && a1 < b1 && a2 < b1 && a3 < b2 && b1 < b2) the algorithm has to return unknown.
Answer: I will assume your variables can take on arbitrary real numbers. Build a weighted directed graph that represents all known inequalities, with one vertex per element (i.e., $n+m$ vertices in total). In particular, apply 'eq' and 'lt' to all ${n+m \choose 2}$ pairs of elements. Then:
If $x < y$ (i.e., $\textsf{lt}(x,y)$ returns true), where $x,y$ are two elements, add a directed edge $x \to y$ with length $-1$.
If $x \le y$ (i.e., $\textsf{lt}(y,x)$ returns false), add a directed edge $x \to y$ with length 0.
If $x=y$ (i.e., $\textsf{eq}(x,y)$ returns true), then add directed edges $x \to y$ and $y \to x$ with length 0.
If $x\ne y$ (i.e., $\textsf{eq}(x,y)$ returns false), do nothing and add no edges.
Let $d(x,y)$ be the length of the shortest path from $x$ to $y$ in this graph. If $d(x,y)=c$, then we will be guaranteed that $x \le y + c$, and this is the optimal $c$ for which this is true (i.e., it is the smallest $c$ for which this is true).
Compute a new bipartite graph $G$ with an edge $a_i\to b_j$ of length $d(a_i,b_j)$ for each $a_i,b_j$. This can be computed with the Floyd-Warshall algorithm, for example.
Find the minimum-weight perfect matching in this bipartite graph. This is an instance of the assignment problem, and hence can be solved in polynomial time with standard algorithms. If the total weight of the matching is $w$, then we can conclude that
$$a_1+\dots+a_n \le b_1+\dots+b_m + w.$$
Consequently, if $w \le 0$, then we can conclude that $a_1+\dots+a_n \le b_1+\dots+b_m$ and we can return "true" to the original question.
Solve another assignment problem for the bipartite graph with edges $b_j \to a_i$ of length $d(a_i,b_j)$ for each $a_i,b_j$. In this way, we obtain $w'$ such that
$$b_1+\dots+b_m \le a_1+\dots+a_n + w'.$$
If $w' \le 0$, return "false" to the original question.
Otherwise, return "unknown".
This takes $O(nm)$ calls to the binary operations and $O((nm)^{2.5} \log n)$ running time.
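A sketch of the first steps (constraint graph plus Floyd-Warshall), with the assignment step brute-forced over permutations instead of a proper Hungarian algorithm, so it assumes tiny inputs with $n = m$; the helper name and edge encoding are mine, for illustration only:

```python
from itertools import permutations

INF = float("inf")

def provable_bound(known, xs, ys):
    """Smallest w provable from `known` with sum(xs) <= sum(ys) + w.

    `known` maps ordered pairs (u, v) to an edge length: -1 encodes
    u < v, 0 encodes u <= v (an equality contributes both directions).
    Returns +inf when nothing is provable.
    """
    nodes = list(dict.fromkeys(list(xs) + list(ys)
                               + [v for pair in known for v in pair]))
    d = {(u, v): (0 if u == v else INF) for u in nodes for v in nodes}
    for (u, v), w in known.items():
        d[u, v] = min(d[u, v], w)
    for k in nodes:                      # Floyd-Warshall closure
        for u in nodes:
            for v in nodes:
                if d[u, k] + d[k, v] < d[u, v]:
                    d[u, v] = d[u, k] + d[k, v]
    # minimum-weight perfect matching, brute force instead of Hungarian
    return min(sum(d[x, y] for x, y in zip(xs, p))
               for p in permutations(ys))

# First example from the question: a1 < b3, a2 = b2, a3 < b1
known = {("a1", "b3"): -1, ("a2", "b2"): 0, ("b2", "a2"): 0,
         ("a3", "b1"): -1}
w = provable_bound(known, ["a1", "a2", "a3"], ["b1", "b2", "b3"])
print(w)  # -2, so sum(a) <= sum(b) - 2 and lt returns true
```

The full procedure would call this in both directions: a non-positive bound for (xs, ys) yields "true", a non-positive bound for (ys, xs) yields "false", and otherwise "unknown".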
This works for this very specific problem. A more general approach is to use linear programming: if you want to check whether $(I_1 \land \cdots \land I_k) \implies J$, where the $I$'s and $J$ are linear inequalities, then check whether the system of inequalities $I_1 \land \cdots \land I_k \land \neg J$ is satisfiable using linear programming. This will let you support more general mathematical expressions and more general relationships between the variables, as long as they are all linear functions/inequalities of the variables. In your problem, after you call 'lt' and 'eq' on all pairs of elements, each operation that returns something other than "unknown" gives you an inequality or equality $I_i$ on two variables. Here $J$ is the inequality $a_1+\dots+a_n \le b_1+\dots+b_m$. So, you could apply linear programming directly to your problem. This approach also supports more general operations, as long as they are all linear.
My answer above essentially solved the linear programming problem for the special case where $I_1,\dots,I_k$ are all inequalities with a difference of two variables, and $J$ has the form $a_1+\dots+a_n \le b_1+\dots+b_m$. I used a standard data structure for representing differences of two variables (the graph), and then combined it with an algorithm for the assignment problem to capture the one additional inequality with multiple variables. | {
"domain": "cs.stackexchange",
"id": 17752,
"tags": "algorithms"
} |
Pure state vs mixed state in this example | Question: Consider, I have a quantum state $|\Psi\rangle$, such that :
$$|\Psi\rangle=c_1|\psi_1\rangle+c_2|\psi_2\rangle$$
This is defined as a pure state, since I have complete information about the system. Before measurement (collapse), the system is in state $|\Psi\rangle$. After measurement, there would be a $|c_1|^2$ probability that the state is now $|\psi_1\rangle,$ and a $|c_2|^2$ probability that the system is now in state $|\psi_2\rangle$. Let us have some operator $\hat{H}$ such that $\hat{H}|\psi_n\rangle=\lambda_n|\psi_n\rangle$.
Suppose I have a friend, who performs this measurement. If he gets $\lambda_1$, it means the state has now become $\psi_1$ and vice-versa.
However, imagine now, that my friend was coming over to tell me the results. However, due to some weird phenomena, he disappeared off the face of the earth, before he could tell me the result.
So, all I know now is that before the measurement, the state was $\Psi$, and an experiment was conducted, and the wavefunction collapsed to either $\psi_1$ or $\psi_2$. Since my friend couldn't tell me the results, I don't know which state is the system actually in. From the initial state, I can vaguely say that there is a $|c_1|^2$ probability of the final state being $\psi_1$ and a $|c_2|^2$ probability of it being $\psi_2$.
I cannot say that the system is in superposition since I know that a measurement was carried out, and one of the two values was obtained.
Since I no longer have complete information about my system, can I consider this an example of a mixed state? So, can I use a density matrix to describe this final state, since this seems like an example of mixed state?
Answer: Yes your reasoning is entirely correct. You would now describe the state at hand with the density operator
$$ \rho = |c_1|^2 |\psi_1 \rangle \langle \psi_1| + |c_2|^2 |\psi_2 \rangle \langle \psi_2|$$
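One can check numerically that such a $\rho$ is a valid but mixed state. A small sketch with hypothetical weights $|c_1|^2 = 0.3$, $|c_2|^2 = 0.7$ (not from the question itself) and orthonormal basis states:

```python
def outer(v):                                    # |v><v| for a real vector
    return [[a * b for b in v] for a in v]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

p1, p2 = 0.3, 0.7
psi1, psi2 = [1.0, 0.0], [0.0, 1.0]
rho = [[p1 * outer(psi1)[i][j] + p2 * outer(psi2)[i][j]
        for j in range(2)] for i in range(2)]

print(trace(rho))               # 1.0 -> unit trace, a valid state
print(trace(matmul(rho, rho)))  # ~0.58 < 1 -> mixed, not pure
```

The purity $\mathrm{Tr}(\rho^2) < 1$ is exactly what distinguishes this classical ignorance from the pure superposition before the measurement, for which $\mathrm{Tr}(\rho^2) = 1$.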
Note that when dealing with mixed states, the description of the particle depends on the knowledge of the observer. Your friend would have described the state as $ |\psi_1 \rangle \langle \psi_1| $ or $|\psi_2 \rangle \langle \psi_2|$. This might seem really weird, but the point is that if you would repeat the whole experiment (including your friend and his disappearance) then your description would yield the correct statistical outcomes. | {
"domain": "physics.stackexchange",
"id": 88455,
"tags": "quantum-mechanics, statistical-mechanics, wavefunction, density-operator, quantum-states"
} |
Combinations of elements in array | Question: I wrote this code to get all possible arrangements for an array containing 3 elements:
let a = ["A", "B", "C"];
let b = [];
function change(A) {
let x = [];
for (let i = 0; i < A.length; i++) {
x.push(A[i]);
}
for (let i = 0; i < x.length; i++) {
A[i] = x[i + 1];
if (i == (x.length - 1)) {
A[i] = x[0];
}
}
}
function combinations() {
for (let i = 0; i < a.length; i++) {
b.push([a[0], a[1], a[2]]);
b.push([a[0], a[2], a[1]]);
change(a);
}
console.log(b);
}
combinations();
How can I write this more concisely, and how is it possible to get all possible arrangements of an array consisting of more than 3 elements?
Answer: Firstly, style looks good. Indentation, spacing, etc. is good. Common mistakes include indenting code inconsistently or in a way that does not match the scope of the code, your code does not have those issues.
change()
All this appears to do is move the first element to the last place and shift all the other elements down by 1. You could do this without creating a separate array:
function change(A) {
let tmp = A[0];
for (let i = 0; i < A.length - 1;) {
A[i] = A[++i];
}
A[A.length - 1] = tmp;
}
Remove the if in the loop in change() regardless
But if you wanted to use the method that creates a temporary copy of the original array, instead of checking for the last iteration of the loop:
for (let i = 0; i < x.length; i++) {
A[i] = x[i + 1];
if (i == (x.length - 1)) {
A[i] = x[0];
}
}
Just move the statement outside the loop (and make the loop iterate one less time because you will overwrite the value anyway):
for (let i = 0; i < x.length - 1; i++) {
A[i] = x[i + 1];
}
A[x.length - 1] = x[0];
combinations() and global variables
combinations() simply accesses a global array hardcoded in the function. Defining a function that takes no arguments and immediately calling it once is not really better than just having the function body by itself. For better practice and code reuse, you should make combinations() accept an array parameter:
function combinations(A) {
let b = [];
for (let i = 0; i < A.length; i++) {
b.push([A[0], A[1], A[2]]);
b.push([A[0], A[2], A[1]]);
change(A);
}
console.log(b);
}
Additionally, b was not used outside combinations() so should be a local variable instead.
Revised:
function change(A) {
let tmp = A[0];
for (let i = 0; i < A.length - 1;) {
A[i] = A[++i];
}
A[A.length - 1] = tmp;
}
function combinations(A) {
let b = [];
for (let i = 0; i < A.length; i++) {
b.push([A[0], A[1], A[2]]);
b.push([A[0], A[2], A[1]]);
change(A);
}
console.log(b);
}
combinations(["A", "B", "C"]);
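For arrays with more than 3 elements, one option is a recursive permutation generator — a minimal sketch (a hypothetical helper, not part of the revised code above):

```javascript
function permutations(A) {
    if (A.length <= 1) return [A.slice()];
    const result = [];
    for (let i = 0; i < A.length; i++) {
        // fix A[i] in front, permute the remaining elements recursively
        const rest = A.slice(0, i).concat(A.slice(i + 1));
        for (const p of permutations(rest)) {
            result.push([A[i]].concat(p));
        }
    }
    return result;
}

console.log(permutations(["A", "B", "C"]).length); // 6 arrangements
```

This produces all n! arrangements of an n-element array and leaves the input untouched, so combinations() could simply delegate to it.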
As for finding the combinations of arrays larger than 3, you already have a way to do this with arrays of 3, so for one additional element, you could insert that in each one of the possible positions in each one of the existing combinations. You could implement a recursive approach to handle an arbitrary number of elements. | {
"domain": "codereview.stackexchange",
"id": 45578,
"tags": "javascript, mathematics, combinatorics"
} |
How high into the air are spores of molds commonly occurring? | Question: I understand so far that most of the air surrounding us contains mold spores so that given the right environment and a food source they start to grow. I assume this is true for air close to the earth. But how high mold spores are still common?
To clarify: I don't want to grow mold, I want to know how high into the air on earth mold spores are usually common.
Answer: Air currents can carry bacteria and mold spores into stratospheric altitudes. Early balloon collections found bacteria and mold in air samples at a little above 71 thousand feet altitude. There were only a few colonies, but we also know that our bacterial media often do not allow many microorganisms to grow.
More recent studies of the atmospheric microbiome have indicated that a wide variety of bacteria and fungi can routinely be found in the atmosphere thousands of feet up. Modern measurements find about 10 million living microbes per cubic meter.
Sandstorms or jet streams are known to be able to move living bacteria long distances. Spores are often of about the same size and I see no reason why we will not find that they do the same. | {
"domain": "biology.stackexchange",
"id": 4121,
"tags": "mycology, species-distribution"
} |
Does Law of Reflection gets violated? | Question: I was curious as to why the "Law of Reflection" is only a law and not a principle. Are there any specific conditions or circumstances where it is not followed by chance? If so, how so?
Note: I am not discussing about or referring to anything in special theory of relativity, however the reader may offer information on it as well.
Answer: Yes it can be violated. Any surface with a dispersive character, for example a periodic series of reflectors (aka a grating) violates this principle. | {
"domain": "physics.stackexchange",
"id": 98314,
"tags": "optics, visible-light, reflection"
} |
Correct way to use arguments | Question:
I have a launch file that plays rosbag files among other things. Right now it is something like this
<launch>
<node name="rosbag" type="play" pkg="rosbag" args="--pause $(find my_ros_package)/bags/this_bag01.bag" >
</node>
....
</launch>
So, when launched it plays the rosbag file this_bag01.bag
I want to make it more flexible so that I can call it with any rosbag file. What is the most appropriate way to do this?
The way I am thinking is
Create a shell script file that I can call with an argument
In this shell I can put
export LAUNCH_FILE= (and here the argument)
roslaunch my_ros_package thelaunch_file.launch
Inside the launch file put $(env LAUNCH_FILE)
<launch>
<node name="rosbag" type="play" pkg="rosbag" args="--pause $(find my_ros_package)/bags/$(env LAUNCH_FILE)" ></node>
</launch>
Originally posted by Kansai on ROS Answers with karma: 170 on 2021-04-07
Post score: 0
Answer:
Well, if it's only for changing the file to play, this seems pretty overkill to me. And it still limits you to having the file located at my_ros_package/bags/, which again isn't very generic...
A few points to consider:
Wrapping the whole thing in a shell script is not required when all you do is export the environment variable and then call the launch file. You could simply export the variable manually. I see no benefit in adding the additional step there.
You could also use a launch arg, instead of the environment variable substitution. Basically change the launch file to
<launch>
<arg name="file" default="this_bag01.bag"/>
<node name="rosbag" type="play" pkg="rosbag" args="--pause $(find my_ros_package)/bags/$(arg file)"/>
</launch>
and call it with roslaunch my_ros_package thelaunch_file.launch file:=otherfile.bag.
Check out the docs about the various possibilities.
Personally, I'd definitely go for option two (actually, I'll manually launch the rosbag executable, but then again, you might need the filename somewhere else). Option one seems to me to try to accomplish the same thing, at the expense of adding another file that you'd have to change every time (which is not good for any VCS)...
Originally posted by mgruhler with karma: 12390 on 2021-04-08
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2021-04-08:
Why even consider using anything but roslaunch args for this?
The advantage of the shell script approach is unclear to me and it seems like this is a textbook example of a situation where args are meant to be used. | {
"domain": "robotics.stackexchange",
"id": 36293,
"tags": "ros, roslaunch"
} |
Array even and odd indexes sorting and printing its values | Question: I'm looking for a more elegant solution to this task.
Task:
Given a string, print its even-indexed and odd-indexed characters as space-separated strings on a single line.
Input Format:
The first line contains an integer (the number of test cases).
Each line of the subsequent lines contain a string.
puts "Input a number of test cases"
t = gets.to_i
t.times do
puts "Input a string please"
s = gets.strip
z = s.split(//)
b = z.each_with_index.sort_by { |i, x| [x.even? ? 0 : 1, x] }
b.map { |i, x| print i if x.even? }
print " "
b.map { |i, x| print i if x.odd? }
puts
end
Answer: It's a good idea to become familiar with all the methods on Array and Enumerable. It just so happens that Enumerable has a method called partition whose purpose is to divide an array into two parts (e.g. even and odd). It'll help tidy up your code.
So let's use partition on the array of chars and partition based on index.
parts = s.chars.partition.with_index{|_,i| i.even?}
Note since we are only using the index, the actual character is unimportant so I used _ instead of a variable name in the partition block.
Now you have two arrays of characters--let's join each array so that you have a single array with two strings:
strings = parts.map(&:join)
If you don't know about &:join it is a shortcut for {|a| a.join} (using Symbol#to_proc if you want to read up on it).
Then, finally print it. Note that in your code, you intermingle printing with manipulation. This is best avoided if you can; print only once you have things in final form.
print strings.join(' ')
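Putting the three steps together on a sample input (the string "Hacker" below is just an assumed example, not from the original task):

```ruby
# Sketch of the whole approach on an assumed sample input.
s = "Hacker"
parts = s.chars.partition.with_index { |_, i| i.even? }
# parts   => [["H", "c", "e"], ["a", "k", "r"]]
strings = parts.map(&:join)
# strings => ["Hce", "akr"]
puts strings.join(' ')
# prints: Hce akr
```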
The above could all be put together in one line if you'd rather:
print s.chars.partition.with_index{|_,i| i.even?}.map(&:join).join(' ') | {
"domain": "codereview.stackexchange",
"id": 25851,
"tags": "beginner, ruby"
} |
Why is the sun so dense? | Question: Looking at the radius and mass of stars on Wikipedia, I see that the Sun is the densest of all, often many times denser than other stars. Is that because only non-dense stars are easily seen from a distance? Are there any stars of comparable luminosity to the sun that can be seen with the naked eye, and do they have a similar density as the sun?
If needed I can copy-paste the mass and radius of other stars here for reference.
Answer: The answer lies in the selection bias towards brighter stars. There are two reasons this makes the Sun look relatively dense.
The first is in Martin's answer. Looking at a list of brightest stars, many (e.g. Betelgeuse, Aldebaran, Antares) are red giants. These are stars that have finished burning hydrogen into helium in their cores and are much larger in size than main-sequence stars like the Sun. As a result, their mean densities are small.
The second effect is that the more massive a main-sequence star is, the smaller its mean density but the greater its luminosity. So again, more massive stars on the main-sequence (e.g. Rigel) are easier to see but also have lower mean densities.
If you compare the Sun to stars from a list of Sun-like stars, you'll find it isn't unusual. | {
"domain": "physics.stackexchange",
"id": 1749,
"tags": "sun"
} |
What are the "minimum requirements" for a single cell? | Question: I saw a description of the "minimum requirements" for a cell at http://creation.com/origin-of-life in the section called "What are the minimum requirements for a cell to live?" and I'm wondering if this is scientifically accurate - and if not - what are the real requirements?
[note/warning - the link above is to a creationist site - I'm only quoting this source because I'm trying to find the science involved and I didn't find other sources talking about all the necessary pieces. I'm not trying to promote creationism with this question so please don't attack the source or me for bringing it. If you can find a non-creationist source which outlines these requirements, I'll be happy to update the question and remove this source.]
In summary it lists the requirements as follows:
cell membrane
way of storing information (DNA)
way of reading 2. to make components needed
RNA polymerase
gyrases to untwist DNA
ribosomes to make proteins
(a few others I omitted b/c I don't know if they're really important)
means of creating fuel (ATP synthase)
a means of copying the information for reproduction
The context of the question is similar to Can scientists create totally synthetic life? and a question I wrote What is the most complex biological organism (or precursors) that we have been able to synthesize from raw materials?.
I'm trying to understand what would be involved in making a cell from scratch. Somehow I found this source but I don't know if it's accurate.
Answer: So, to make cells from scratch you would need: amphiphilic molecules to form a membrane, a decent mix of simple molecules (sugars, nucleic acids, peptides) that serve as reactants and building blocks for more complex things, some simple catalyzer (metals, minerals, peptides, aptamers, etc..) to run the reactions, enough energy to maintain lots of reactions running and keep them far from equilibrium. Of course, lots of time...
An even more minimalistic approach would use a single molecule to do the enzymatic job while storing evolvable information. In this case RNA alone has been proposed as a substitute for proteins and DNA in a very minimal cell. In theory DNA alone could do the same. However, the minimal tasks that need to be done are: to maintain the self (to be compartmentalized), to grow (to have a flux of molecules not in equilibrium), to divide, and to maintain the information stably enough to be useful for the next generation while at the same time being mutable enough to evolve.
One more point. Life itself doesn't strictly require compartmentalization.
You can think of a network of chemical reactions that have the ability to grow, to replicate its components etc but without a cell membrane. However I think that a cell-like form of life is somewhat more likely to happen. | {
"domain": "biology.stackexchange",
"id": 3981,
"tags": "molecular-biology"
} |
Detecting Calcified stones and cracks in tooth | Question: I had a dental problem when my first molar broke some months ago and after some decay and pain I am undergoing a dental treatment. The doctor was performing a root canal and kept on talking about calcified stones being formed and looking for a 'tak' sound inspite of having x-rays. Doctor did not point to the calcified stones earlier and suddenly came out with this new issue. In the mean time when looking for stones doctor actually managed to break my tooth and has blamed grinding and clenching for a crack that caused the tooth to break.
I am shocked that neither the calcified stone nor the crack showed up in the x-ray and the preliminary examination. With current technology, can cracks in teeth and calcified stones not be detected?
Answer: Cracks in your teeth are hard to detect through x-rays if they are on the mesial or distal surfaces (the surfaces of the tooth closest to and farthest from the midline, respectively). Please check this link for illustrations and very good explanations from the University of Maryland dentistry department. The different methods for the detection of cracks, as well as the uses of radiographic evidence, can be found in this link.
Now pulp stones can be identified through x-rays unless they are too small or not dense enough. There has been research done specifically using radiographs for detection, like "A radiographic assessment of the prevalence of pulp stones in Australians". It could also be human error that your doctor was unable to detect it. A small explanation on pulp stones can be found here. | {
"domain": "biology.stackexchange",
"id": 2254,
"tags": "teeth, hygiene"
} |
$\tau$ pair production question | Question: There's a question on my homework about the process $e^{-} e^{+} \rightarrow \tau^{+} \tau^{-}$. Specifically, it is claimed that the minimum energy required of the colliding positron and electron beams is slightly less than twice the $\tau$ mass, and I am asked to explain this deviation and compute it. In the context of this course, we have only discussed the relativistic kinematics of such processes (which would predict a minimum energy of twice the $\tau$ mass), so I am not sure what might be responsible. I'm grasping at straws here, but might it have something to do with the Coulomb interaction between the opposite charges? Thanks in advance.
Answer: Assuming that $\tau^+\tau^-$ can form a bound state similarly to positronium ($e^-e^+$), all we need is the fact that the ground-state energy is proportional to the reduced mass of the pair: $\mu = \frac{m_1m_2}{m_1+m_2}$.
Knowing that positronium's ground state is $(-13.6/2)= -6.8$ eV and that the new reduced mass for the Tau particle bound state is simply $\frac{m_{Tau}}{2} = \frac{m_{Tau}}{m_e}\mu_{e^-e^+}$
It follows that the energy of the Tau particle bound state is this factor, $\frac{m_{Tau}}{m_e}$ multiplied by $-6.8$ eV.
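As a quick numerical check (a sketch; the particle masses plugged in below are the standard values and are my addition):

```python
# Scale the positronium ground-state energy by the tau/electron mass ratio.
m_e = 0.511e6        # electron mass, eV/c^2
m_tau = 1776.86e6    # tau mass, eV/c^2
E_positronium = -13.6 / 2             # = -6.8 eV
E_tauonium = E_positronium * (m_tau / m_e)
print(E_tauonium)    # roughly -2.36e4 eV
```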
$(-6.8)(3477) = -2.36\times10^4$ eV | {
"domain": "physics.stackexchange",
"id": 12050,
"tags": "homework-and-exercises, special-relativity, particle-physics, pair-production"
} |
Why do Grignard reagents act as sources of nucleophilic R⁻ and not Br⁻? | Question: In reactions with Grignard reagents, I've always seen the alkyl group being the nucleophilic component, with the R–MgBr bond being broken. This is always rationalised by the fact that the alkyl carbon has a partial negative charge, whereas the electrophile (e.g. a carbonyl group) has a partial positive charge.
However, the halogen atom should also have a partial negative charge. Why doesn't it attack the electrophile instead, breaking the RMg–Br bond?
Answer: In all likelihood the halide ion does attack the substrate. But, the product such a reaction mode would form does not accumulate whereas the alkylated product does.
Recall that in a nucleophilic reaction halide ions are good leaving groups, at least if you stay away from fluorides which are rarely used for Grignard reagents. So while the halide ion may add to the substrate, it may also be displaced if an alkyl-anion moiety from another Grignard molecule comes in, or possibly the halide ion may just launch on its own to attach to a magnesium atom (which, in a Grignard reagent, acts as an electrophilic center, for example by adding to the oxygen end of a carbonyl group).
In contrast, when the alkyl-anion moiety attacks, the carbon-carbon bond it forms is strongly favored thermodynamically and a lot harder to break; here we do not have a good leaving group. So the alkylated product rather than the halogenated product is what accumulates. | {
"domain": "chemistry.stackexchange",
"id": 14246,
"tags": "organic-chemistry, grignard-reagent"
} |
Do singularities have a "real" as opposed to mathematical or idealized existence? | Question: I was thinking of, for example a Schwarzchild metric at r=0, i.e. the gravitational singularity, a point of infinite density. I realise that there are different types of singularities--timelike, spacelike, co-ordinate singularities etc. In a short discussion with Lubos, I was a bit surprised when I assumed they are idealized and I believe he feels they exist. I am not a string theorist, so am not familiar with how singularities are dealt with in it. In GR, I know the Penrose-Hawking singularity theorems, but I also know that Hawking has introduced his no-boundary, imaginary time model for the Big Bang, eliminating the need for that singularity. Are cosmic strings and other topological defects singularities or approximations of them (if they exist). In what sense does a singularity exist in our universe? --as a real entity, as a mathematical or asymptotic idealization, as a pathology in equations to be renormalized or otherwise ignored, as not real as in LQG, or as real in Max Tegmark's over-the-top "all mathematical structures are real"?
Answer: Dear Gordon, I hope that other QG people will write their answers, but let me write mine, anyway.
Indeed, you need to distinguish the types of singularities because their character and fate are very different, depending on the type. You rightfully mentioned timelike, spacelike, and coordinate singularities. I will divide the text accordingly.
Coordinate singularities
Coordinate singularities depend on the choice of coordinates and they go away if one uses more well-behaved coordinates. So for example, there seems to be a singularity on the event horizon in the Schwarzschild coordinates - because $g_{00}$ goes to zero, and so on. However, this singularity is fake. It's just the artifact of using coordinates that differ from the "natural ones" - where the solution is smooth - by a singular coordinate transformation.
As long as the diffeomorphism symmetry is preserved, one is always allowed to perform any coordinate transformation. For a singular one, any configuration may start to look singular. This was case in classical general relativity and it is the case for any theory that respects the symmetry structure of general relativity.
The conclusion is that coordinate singularities can never go away. One is always free to choose or end up with coordinate systems where these fake singularities appear. And some of these coordinate systems are useful - and will remain useful: for example, the Schwarzschild coordinates are great because they make it manifest that the black hole solution is static. Physics will never stop using such singularities. What about the other types of the singularities?
Spacelike singularities
Most famously, these include the singularity inside the Schwarzschild black hole and the initial Big Bang singularity.
Despite lots of efforts by quantum cosmologists (meaning string theorists working on cosmology), especially since 1999 or so, the spacelike singularities remain badly understood. It's mainly because they inevitably break all supersymmetry. The existence of supersymmetry implies the existence of time-translational symmetry - generated by a Hamiltonian, the anticommutator of two supercharges. However, this symmetry is brutally broken by a spacelike singularity.
So physics as of 2011 doesn't really know what's happening near the very singular center of the Schwarzschild black hole; and near the initial Big Bang singularity. We don't even know whether these questions may be sharply defined - and many people guess that the answer is No. The latter problem - the initial Big Bang singularity - is almost certainly linked to the important topics of the vacuum selection. The eternal inflation answers that nothing special is happening near the initial point. A new Universe may emerge out of its parent; one should quickly skip the initial point because nothing interesting is going on at this singular place, and try to evolve the Universe. The inflationary era will make the initial conditions "largely" irrelevant, anyway. However, no well-defined framework to calculate in what state (the probabilities...) the new Universe is created is available at this moment.
You mentioned the no-boundary initial conditions. I am a big fan of it but it is not a part of the mainstream description of the initial singularity as of 2011 - which is eternal inflation. In eternal inflation, the initial point is indeed as singular as it can get - surely the curvatures can get Planckian and maybe arbitrarily higher - however, it's believed by the eternal inflationary cosmologists that the Universe cannot really start at this point, so they think it's incorrect to imagine that the boundary conditions are smooth near this point in any sense, especially in the Hartle-Hawking sense.
The Schwarzschild singularity is different - because it is the "final" spacelike singularity, not an initial condition - and it's why no one has been talking about smooth boundary conditions over there. Well, there's a paper about the "black hole final state" but even this paper has to assume that the final state is extremely convoluted, otherwise one would macroscopically violate the predictions of general relativity and the arrow of time near the singularity.
While the spacelike singularities remain badly understood, there exists no solid evidence that they are completely avoided in Nature. What quantum gravity really has to do is to preserve the consistency and predictivity of the physical theory. But it is not true that a "visible" suppression of the singularities is the only possible way to do so - even though this is what people used to believe in the naive times (and people unfamiliar with theoretical physics of the last 20 years still believe so).
Timelike singularities
The timelike singularities are the best understood ones because they may be viewed as "classical static objects" and many of them are compatible with supersymmetry which allowed the physicists to study them very accurately, using the protection that supersymmetry offers.
And again, it's true that most of them, at least in the limit of unbroken supersymmetry and from the viewpoint of various probes, remained very real. The most accurate description of their geometry is singular - the spacetime fails to be a manifold, i.e. diffeomorphic to an open set near these singularities. However, this fact doesn't lead to any loss of predictivity or any inconsistency.
The simplest examples are orbifold singularities. Locally, the space looks like $R^d/\Gamma$ where $\Gamma$ is a discrete group. It's clear by now that such loci in spacetime are not only allowed in string theory but they're omnipresent and very important in the scheme of things. The very "vacuum configuration" typically makes spacetime literally equal to the $R^d/\Gamma$ (locally) and there are no corrections to the shape, not even close to the orbifold point. Again, this fact leads to no physical problems, divergences, or inconsistencies.
Some of the string vacua compactified on spaces with orbifold singularities are equivalent - dual - to other string/M-theory vacua on smooth manifolds. For example, type IIA string theory or M-theory on a singular K3 manifold is equivalent to heterotic strings on tori with Wilson lines added. The latter is non-singular throughout the moduli space - and this fact proves that the K3 compactifications are also non-singular from a physics viewpoint - they're equivalent to another well-defined theory - even at places of the moduli spaces where the spacetime becomes geometrically singular.
The same discussion applies to the conifold singularities; in fact, orbifold points are a simple special example of cones. Conifolds are singular manifolds that include points whose vicinity is geometrically a cone, usually something like a cone whose base is $S^2\times S^3$. Many components of the Riemann curvature tensor diverge. Nevertheless, physics near this point on the moduli space that exhibits a singular spacetime manifold - and physics near the singularity on the "manifold" itself - remains totally well-defined.
This fact is most strikingly seen using mirror symmetry. Mirror symmetry transforms one Calabi-Yau manifold into another. Type IIA string theory on the first is equivalent to type IIB string theory on the second. One of them may have a conifold singularity but the other one is smooth. The two vacua are totally equivalent, proving that there is absolutely nothing physically wrong about the geometrically singular compactification. We may be living on one. The equivalence of the singular compactifications and non-singular compactifications may be interpreted as a generalized type of a "coordinate singularity" except that we have to use new coordinates on the whole "configuration space" of the physical theory (those related by the duality) and not just new spacetime coordinates.
It's very clear by now that some singularities will certainly stay with us and that the old notion that all singularities have to be "disappeared" from physics was just naive and wrong. Singularities as a concept will survive and singular points at various moduli spaces of possibilities will remain there and will remain important. Physics has many ways to keep itself consistent than to ban all points that look singular. That's surely one of the lessons physics has learned in the duality revolution started in the mid 1990s. Whenever physics near/of a singularity is understood, we may interpret the singularity type as a generalization of the coordinate singularities.
At this point, one should discuss lots of exciting physics that was found near singularities - especially new massless particles and extended objects (that help to make singularities innocent while preserving their singular geometry) or world sheet instantons wrapped on singularities (that usually modify them and make them smooth). All these insights - that are cute and very important - contradict the belief that there's no "valid physics near singularities because singularities don't exist". Spacetime manifolds with singularities do exist in the configuration space of quantum gravity, they are important, and they lead to new, interesting, and internally consistent phenomena and alternative dual descriptions of other compactifications that may be geometrically non-singular. | {
"domain": "physics.stackexchange",
"id": 410,
"tags": "general-relativity, cosmology, black-holes, big-bang, singularities"
} |
Examples of context sensitive syntactic constructs (statements) | Question: So, I am implementing a context sensitive syntactic analyzer. It's kind of an experimental thing and one of the things I need is usable syntactic constructs to test it on.
For example, the following construct isn't possible to parse using a standard CFG (context-free grammar). Basically, it allows declaring multiple variables of unrelated data types and simultaneously initializing them.
int bool string number flag str = 1 true "Hello";
If I omit a few details, the language used can be formally described like this:
L = {a^n b^n c^n | n >= 1}
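For reference, membership in this abstract language is easy to check programmatically; a quick sketch (the helper name is my own, and it uses literal a/b/c symbols):

```python
import re

def in_anbncn(s):
    # True iff s = a^n b^n c^n for some n >= 1.
    m = re.fullmatch(r"(a+)(b+)(c+)", s)
    return bool(m) and len(m.group(1)) == len(m.group(2)) == len(m.group(3))

print(in_anbncn("aabbcc"))  # True
print(in_anbncn("aabbc"))   # False
```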
So, I would appreciate as many similar examples as you can think of, ideally something from the area of programming languages.
Also, I am aware that current programming languages and their compilers are context sensitive, mainly thanks to semantic analysis, so I would like to state I am not looking for things like:
type checking
is variable declared?
I would prefer the examples to be actual syntactic constructs like the declaration example shown above.
Thanks in advance ;).
Answer: Here are three context-sensitive syntaxes actually found in programming languages. I don't believe I've ever seen a language which has types, names and values distributed as per your example, but it could certainly exist, and I'm sure there are even less readable syntaxes which are possible. The following are at least somewhat readable:
Syntactic whitespace, as per Python or Haskell. This is usually handled with a context-sensitive lexical scanner rather than a context-sensitive grammar, but it is certainly context-sensitive, and it could be handled with a context-sensitive grammar if you had the machinery available. (In fact, it could be cleaner to handle it in the parser, especially for languages like Haskell in which layout-sensitive parsing is optional. [Note 1])
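A quick way to see this from Python itself (my own sketch, using the standard-library ast module): two snippets that differ only in indentation parse to different programs.

```python
import ast

src_a = "if True:\n    x = 1\n    y = 2\n"   # both assignments inside the if
src_b = "if True:\n    x = 1\ny = 2\n"       # second assignment outside

# The body of the `if` holds two statements in src_a, one in src_b.
if_a = ast.parse(src_a).body[0]
if_b = ast.parse(src_b).body[0]
print(len(if_a.body), len(if_b.body))  # 2 1
```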
Multi-dimensional array literals. Here, I'm not talking about languages which implement heterogeneous one-dimensional arrays as first-class types, so that an array can be an element of another array; in that case, there is no requirement that a multi-dimensional array be regular. Rather, I'm talking about languages in which multi-dimensional arrays must be regular, and so an irregular literal is a syntax error:
julia> [2 3 4;5 6 8]
2x3 Array{Int64,2}:
2 3 4
5 6 8
julia> [2 3 4;5 6]
ERROR: hvcat: row 2 has mismatched number of columns
in hvcat at abstractarray.jl:993
That's a bit of a cheat, because the Julia syntax above is really syntactic sugar for a call to hvcat, as indicated in the error message. But it could have been syntactic. Fortress -- the syntactic collector's favourite vapourware language -- proposed syntactic array literals:
The parts of higher-dimensional matrices are separated by repeated-semicolons, where the dimensionality of the result is equal to one plus the number of repeated semicolons. Here is a 3 x 3 x 3 x 2 matrix:
[ 1 0 0
0 1 0
0 0 1 ;; 0 1 0
1 0 1
0 1 0 ;; 1 0 1
0 1 0
1 0 1
;;;
1 0 0
0 1 0
0 0 1 ;; 0 1 0
1 0 1
0 1 0 ;; 1 0 1
0 1 0
1 0 1 ]
The elements in a matrix expression may be either scalars or matrices themselves. If they are matrices, then the elements along a row (or column) must have the same number of columns (or rows), though two elements in different rows (columns) need not have the same number of columns (rows). A scalar is treated as a one by one matrix. (Quoted from The Fortress Language Specification by Guy L. Steele et al, Β§2.3.19, p. 21)
Agreement between parameter count in function prototypes and number of arguments in function calls. Perhaps this fits in your concept of "type checking", and I don't think it adds anything interesting to your problem set. But it is certainly context-sensitive.
Notes
I stumbled upon Layout-sensitive Generalized Parsing by Sebastian Erdweg, Tillmann Rendel, Christian KΓ€stner, and Klaus Ostermann. while writing this answer, but I haven't read it. It seems to propose a usable formalism for layout-aware parsing. | {
"domain": "cs.stackexchange",
"id": 6545,
"tags": "programming-languages, context-sensitive, language-design, syntax"
} |
Is the Cross Product of two vectors in General Relativity on a 3-Space the same as in "Non-Relativistic" Physics? | Question: Considering the covariant components $$C_i = e_{ijk}A^j B^k, \qquad C = A \times B$$
in a 3-dimensional spherical space with a diagonal metric $$ds^2 = \sum_i g_{ii}\, dx^{i} dx^i$$
Isn't $C$ the same as "Classical/Newtonian" physics? (Meaning $C_r = A_{\theta} \cdot B_{\phi} - B_{\theta} \cdot A_{\phi} , C_\theta = .... $ )
Answer: If $\pi_{ijk}$ is the ordinary Levi-Civita symbol, then the Levi-Civita tensor has components $\epsilon_{ijk}=\sqrt{g}\pi_{ijk}$ (I am assuming the metric is positive definite, but if not then replace $g$ with $|g|$), where $g$ is the determinant of the metric tensor.
Therefore, we have for $C_{i}=\epsilon_{ijk}A^j B^k$ $$ C_1=\sqrt g(A^2B^3-A^3B^2) \\ C_2=\sqrt g(A^3B^1-A^1B^3) \\ C_3=\sqrt g(A^1B^2-A^2B^1). $$
These are the covariant components however. If one wants the contravariant components, then one must raise the indices. If the metric is non-diagonal, this will cause a mixing of the components, but if the metric is diagonal, then we will simply have $C^i=g^{ii}C_i$ (no summation).
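As a numerical sanity check of the component formulas above (the function names here are my own; for a flat metric the result reduces to the ordinary cross product):

```python
import math

def eps(i, j, k):
    # Levi-Civita symbol for indices 0..2: sign of the permutation,
    # 0 if any index repeats.
    return (j - i) * (k - j) * (k - i) // 2

def covariant_cross(A, B, g_diag):
    # C_i = sqrt(det g) * eps_ijk A^j B^k for a diagonal 3-metric,
    # where det g is the product of the diagonal entries.
    sqrt_g = math.sqrt(g_diag[0] * g_diag[1] * g_diag[2])
    return [sqrt_g * sum(eps(i, j, k) * A[j] * B[k]
                         for j in range(3) for k in range(3))
            for i in range(3)]

# Flat metric: recovers e1 cross e2 = e3.
print(covariant_cross([1, 0, 0], [0, 1, 0], [1, 1, 1]))  # [0.0, 0.0, 1.0]
```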
Also note that in vector calculus, the components of vectors in say spherical coordinates are usually taken for the orthonormal frame/dreibein $e_r,e_\vartheta,e_\varphi$ rather than the holonomic frame $\partial_r,\partial_\vartheta,\partial_\varphi$. Since the orthonormal basis vector fields are, well, orthonormal, they satisfy $$ e_r\times e_\vartheta=e_\varphi \\ e_\vartheta\times e_\varphi=e_r \\ e_\varphi\times e_r=e_\vartheta, $$ essentially the same relation as the cartesian basis vectors, thus in the orthonormal frame, cross products are calculated the same. | {
"domain": "physics.stackexchange",
"id": 65970,
"tags": "general-relativity, differential-geometry, metric-tensor, vectors"
} |
My implementation of the repository pattern | Question: I've been using this pattern for a few months and I was wondering if I can make it any better.
The one thing I am not satisfied about is the dispose method. In every repository I have to add a dispose method that calls the base dispose method. Is there a way to get rid of the dispose method in every repository and just let it (automatically) call the base dispose method?
I'm using Linq2Sql.
Interface:
public interface IRepository<TEntityType> where TEntityType : class
{
TEntityType GetById(int id);
IQueryable<TEntityType> GetAll();
void Delete(TEntityType item);
void Add(TEntityType item);
}
Base:
public class BaseRepository<TEntityType> where TEntityType : class
{
private readonly DataClassesDataContext _dataContext;
protected BaseRepository()
{
_dataContext = new DataClassesDataContext();
}
protected Table<TEntityType> GetTable()
{
return _dataContext.GetTable<TEntityType>();
}
protected void SaveChanges()
{
_dataContext.SubmitChanges();
}
protected virtual void Dispose()
{
if (_dataContext != null)
{
_dataContext.Dispose();
}
}
}
Repository class:
public class FirstClassRepository : BaseRepository<FirstTable>, IRepository<FirstTable>, IDisposable
{
public FirstTable GetById(int id)
{
return GetTable().FirstOrDefault(x => x.Test == id);
}
public IQueryable<FirstTable> GetAll()
{
return GetTable();
}
public void Delete(FirstTable item)
{
GetTable().DeleteOnSubmit(item);
}
public void Add(FirstTable item)
{
GetTable().InsertOnSubmit(item);
}
public new void Dispose()
{
base.Dispose();
}
}
Answer:
The one thing I am not satisfied about is the dispose method. In every repository I have to add a dispose method that calls the base dispose method.
IDisposable
The IDisposable interface should be implemented in the base class. By using the standard dispose pattern, you get this result:
public class BaseRepository<TEntityType> : IDisposable where TEntityType : class
{
private readonly DataClassesDataContext _dataContext;
protected BaseRepository()
{
_dataContext = new DataClassesDataContext();
}
protected Table<TEntityType> GetTable()
{
return _dataContext.GetTable<TEntityType>();
}
protected void SaveChanges()
{
_dataContext.SubmitChanges();
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
~BaseRepository()
{
Dispose(false);
}
protected virtual void Dispose(Boolean disposing)
{
// free unmanaged resources here
if (disposing)
{
// This method is called from Dispose() so it is safe to
// free managed resources here
if (_dataContext != null)
{
_dataContext.Dispose();
}
}
}
}
But wait: what if you also need to dispose resources of the FirstClassRepository object, which derives from the base class? Then the FirstClassRepository object also needs to implement the IDisposable interface, following the pattern below.
public class FirstClassRepository : BaseRepository<FirstTable>, IRepository<FirstTable>, IDisposable
{
public FirstTable GetById(int id)
{
return GetTable().FirstOrDefault(x => x.Test == id);
}
public IQueryable<FirstTable> GetAll()
{
return GetTable();
}
public void Delete(FirstTable item)
{
GetTable().DeleteOnSubmit(item);
}
public void Add(FirstTable item)
{
GetTable().InsertOnSubmit(item);
}
public void Dispose()
{
try
{
Dispose(true); //true: safe to free managed resources
GC.SuppressFinalize(this);
}
finally
{
base.Dispose();
}
}
~FirstClassRepository()
{
Dispose(false);
}
protected void Dispose(Boolean disposing)
{
// free unmanaged resources here
if (disposing)
{
// This method is called from Dispose() so it is safe to
// free managed resources here
}
}
}
If the FirstClassRepository object doesn't need to dispose resources, you can omit the implementation of IDisposable in it.
EDIT based on comment:
The ~FirstClassRepository() method is what the compiler treats as the Finalize() method of the object. If you forget to call the Dispose() method of that object, this method will be called when the object is finalized by the garbage collector.
Since the method is implemented by calling the overloaded Dispose(Boolean) method with the parameter set to false, only unmanaged resources will be freed (where the comment is written), as the GC takes care of the managed objects.
Otherwise your implementation looks fine for me.
Some SO links
proper-use-of-the-idisposable-interface
when-should-i-use-gc-suppressfinalize | {
"domain": "codereview.stackexchange",
"id": 8900,
"tags": "c#, repository"
} |
Decimation and filtering in the frequency domain | Question: I'm working on a Software Defined Radio project where I'd like to low-pass filter and decimate an analytical signal (IQ) sampled at 96ksps. Let's say the low-pass filter has a cutoff at 5kHz and I'd like to decimate by a factor 4 so that I have 24ksps out.
The idea is to perform the filtering with fast convolution, using the overlap-save method as described in this article [pdf]:
http://www.3db-labs.com/01598092_MultibandFilterbank.pdf
I'm wondering if there are any pitfalls to my approach:
Performing an N length FFT.
Then doing an N length circular convolution (by multiplying with the FFT of my filter of length P).
Then performing an N/4 IFFT back to decimate by 4 using the N/4 center taps of the forward FFT. Since my filter is a low-pass with a cutoff at 5kHz there should be very little energy outside the N/4 center taps of the FFT, and the P - 1 samples I need to discard should also lie outside the IFFT (if my filter is not too long).
EDIT:
This specific application is on a Raspberry Pi 3. After having given this some more thought, I've realized it's not as clever as I first thought. I've stared too long at zero-centered FFTs and briefly forgot that this is not the case here. I would have to "remove" zeros in the middle of my FFT to make it shorter and then perform the IFFT.
What I will do is to do the fast convolution with an N length FFT and N length IFFT, and decimate when I copy samples to the output buffer.
Answer: I do a lot of decimation in the frequency domain. Little details are important.
I assume you already know the basic rules for fast convolution: the FFT length N is equal to the data blocksize L plus the length of the filter impulse response M minus 1. Each operation uses L samples of new data plus M-1 samples of data from the old block.
Ensure that the impulse response of your lowpass filter is shifted to the front of your time domain buffer AND properly windowed to M samples before you take the forward FFT to get the frequency domain representation of your filter. This keeps the result from wrapping around in the time domain when you take the inverse FFT. (Remember you're actually doing circular convolution when you want linear convolution.)
Kaiser is by far my favorite window because of its tuning knob. Use a large enough value to push the sidelobes down, as they will alias into your output. I typically construct a zero-phase brick wall in the frequency domain, take the inverse transform, apply the window and shift to the front of the buffer, then take the forward FFT.
Also make sure that the main lobe of your frequency response is still essentially zero past the Nyquist limit of your decimated output. I.e., don't make the Kaiser parameter too big.
If you're also doing frequency shifting by rotating the frequency bins, remember that you have to shift by a number of bins that corresponds to one data block size. For example, if L=M-1, then you can translate by any even number of bins.
Don't worry about FFT blocksizes that aren't powers of two. Just pick a convenient size that doesn't have any large prime factors and FFTW3 will perform well. I use L=M-1=3840 to decimate 192 kHz by 4:1 to 48 kHz with a block time of 20 ms. That's a FFT blocksize of L+M-1 = 7680 = 2^9 * 3 * 5. | {
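The block bookkeeping described above can be sketched in a few lines. This is a minimal NumPy illustration I've added, not the answerer's actual code; it assumes L is a multiple of the decimation factor so the output phase lines up across blocks, and it only processes full blocks:

```python
import numpy as np

def overlap_save_decimate(x, h, L, D):
    """Overlap-save fast convolution of x with FIR h, then decimate by D.

    L is the number of new samples per block (must be a multiple of D),
    so the FFT size is N = L + M - 1 with M = len(h). Only full blocks
    of x are processed in this sketch.
    """
    M = len(h)
    N = L + M - 1
    H = np.fft.fft(h, N)                 # filter zero-padded to the FFT size
    tail = np.zeros(M - 1)               # M-1 samples carried between blocks
    out = []
    for start in range(0, len(x) - L + 1, L):
        block = np.concatenate([tail, x[start:start + L]])  # length N
        tail = block[-(M - 1):]          # overlap saved for the next block
        y = np.fft.ifft(np.fft.fft(block) * H).real
        out.append(y[M - 1:][::D])       # drop the M-1 wrapped samples, decimate
    return np.concatenate(out)
```

Decimating while copying the valid region out to the output buffer, as the question's edit concludes, avoids any mid-spectrum surgery on a shorter IFFT.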
"domain": "dsp.stackexchange",
"id": 7166,
"tags": "fft, decimation, software-defined-radio, fast-convolution"
} |
Is a leaky integrator the same thing as a low pass filter? | Question: The equation governing a leaky integrator (according to Wikipedia at least) is
$\frac{d\mathcal{O}}{dt} + A\mathcal{O}(t) = \mathcal{I}(t)$.
Is a continuous-time leaky integrator thus the same thing as a low pass filter with time-constant $A$, up to some scaling of the input?
Answer: A so-called leaky integrator is a first-order filter with feedback. Let's find its transfer function, assuming that the input is $x(t)$ and the output $y(t)$:
$$
\frac{dy(t)}{dt} + Ay(t) = x(t)
$$
$$
\mathcal{L}\left\{\frac{dy(t)}{dt} + Ay(t)\right\} = \mathcal{L}\left\{x(t)\right\}
$$
where $\mathcal{L}$ denotes application of the Laplace transform. Moving forward:
$$
sY(s) + AY(s) = X(s)
$$
$$
H(s) = \frac{Y(s)}{X(s)} = \frac{1}{s + A}
$$
(taking advantage of the Laplace transform's property that $\frac{dy(t)}{dt} \Leftrightarrow sY(s)$, assuming that $y(0) = 0$).
This system, with transfer function $H(s)$, has a single pole at $s = -A$. Remember that its frequency response at frequency $\omega$ can be found by letting $s=j\omega$:
$$
H(j\omega) = \frac{1}{j\omega + A}
$$
To get a rough view of this response, first let $\omega \to 0$:
$$
\lim_{\omega \to 0} H(j\omega) = \frac{1}{A}
$$
So the system's DC gain is inversely proportional to the feedback factor $A$. Next, let $\omega \to \infty$:
$$
\lim_{\omega \to \infty} H(j\omega) = 0
$$
The system's frequency response therefore goes to zero for high frequencies. This follows the rough prototype of a lowpass filter. To answer your other question with respect to its time constant, it's worth checking out the system's time-domain response. Its impulse response can be found by inverse-transforming the transfer function:
$$
H(s) = \frac{1}{s+A} \Leftrightarrow e^{-At}u(t) = h(t)
$$
where $u(t)$ is the Heaviside step function. This is a very common transform that can often be found in tables of Laplace transforms. This impulse response is an exponential decay function, which is usually written in the following format:
$$
h(t) = e^{-\frac{t}{\tau}}u(t)
$$
where $\tau$ is defined to be the function's time constant. So, in your example, the system's time constant is $\tau = \frac{1}{A}$. | {
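As a quick numeric check of these results (a sketch I've added, using only NumPy): the DC gain is $1/A$, and the magnitude at $\omega = A = 1/\tau$ is $1/\sqrt{2}$ of that, i.e. the corner frequency of the lowpass sits at the reciprocal of the time constant.

```python
import numpy as np

A = 5.0                                  # feedback factor in dy/dt + A*y = x
H = lambda w: 1.0 / (1j * w + A)         # frequency response H(jw) = 1/(jw + A)

dc_gain = abs(H(0.0))                    # DC gain: 1/A
corner = abs(H(A))                       # magnitude at w = A = 1/tau

print(dc_gain)                           # 0.2 = 1/A
print(corner / dc_gain)                  # ~0.7071 = 1/sqrt(2), the -3 dB point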
"domain": "dsp.stackexchange",
"id": 2614,
"tags": "filters"
} |
PowerShell script to automate the search of DLL files | Question: I often write different applications in C++ using different libraries, and sometimes it takes a lot of time to find where the *.dll files to distribute are. I use Dependency Walker but it still takes a lot of time to look in the GUI. I wrote a PowerShell script that will find all used libraries and do something with them. Please review my code.
Param
(
[Parameter(Mandatory=$true, Position=0)]
[ValidateScript({Test-Path $_ -PathType Leaf})]
[String]$Path,
[Parameter()]
[ValidateScript({Test-Path $_ -PathType Container})]
[String[]]$Exclude
)
function likeAnyOf($obj,$array)
{
foreach($item in $array)
{
if($obj -like $item)
{
return $item
}
}
return $null
}
Set-StrictMode -Version Latest
$systemPath = [System.Environment]::SystemDirectory
$tempName = [System.IO.Path]::GetTempFileName() # generate a temporary filename
Start-Process depends.exe -ArgumentList '/c','/f:1',"/oc:$tempName",$Path -Wait -NoNewWindow # wait until it writes all the data
$modules = Import-Csv $tempName -Encoding Default | Select-Object -Property Module -Unique # read the output of depends.exe and select all unique DLL paths
Remove-Item -Path $tempName -Force # delete the temporary file
$dllsToCopy = @{}
foreach($module in $modules)
{
$modulePath = $module.Module
$moduleParent = Split-Path $modulePath -Parent
$moduleName = Split-Path $modulePath -Leaf
if( $moduleParent -like $systemPath )
{
Write-Verbose "Skipped $moduleName as a system module ($moduleParent)"
continue
}
$excluded = likeAnyOf $moduleParent (Resolve-Path $Exclude)
if($excluded -ne $null)
{
Write-Host "Skipped $moduleName as explicitly excluded ($excluded)"
continue
}
$exists = Test-Path $modulePath -PathType Leaf
if($exists)
{
$dllsToCopy.Add($moduleName, $modulePath)
Write-Host "Added $moduleName found at $moduleParent"
}
else
{
Write-Error "Couldn't find $moduleName"
}
}
Answer: You can replace all this:
function likeAnyOf($obj,$array)
{
foreach($item in $array)
{
if($obj -like $item)
{
return $item
}
}
return $null
}
# ...
$excluded = likeAnyOf $moduleParent (Resolve-Path $Exclude)
With this:
$excluded = Resolve-Path $Exclude | ? { $moduleParent -like $_ }
The question mark (?) is an alias for Where-Object, so read up on that for an explanation.
Instead of this:
if ($excluded -ne $null)
{
# ...
}
you can do this:
if ($excluded)
{
# ...
}
It's just a little bit easier to read. | {
"domain": "codereview.stackexchange",
"id": 8029,
"tags": "powershell"
} |
Why can no other chemical elements be used for making a light bulb filament? | Question: As far as I can see from this Wikipedia article on the incandescent light bulb, there have been only four types of light bulb filaments: those made of carbon, those made of osmium, those made of tantalum and those made of tungsten (wolfram). I wonder why it is impossible to use any other chemical element for that purpose. Is there a simple explanation of that reason?
Answer: Essentially: carbon, osmium, tantalum, and tungsten, along with rhenium, are the only (known, stable) elements that have melting points high enough to remain solid at the high temperatures required to achieve the colors of standard incandescent light bulbs.
Why? Incandescent light bulbs produce light by heating a filament, which gets so hot that it emits enough radiation to light up the room. We can model the color of the light bulb well by approximating it as a blackbody at thermal equilibrium, governed by Planck's law. When electricity is passed through it, the light bulb filament heats up until it reaches, at equilibrium, the temperature referred to on the bulb label. The colder the temperature, the redder the bulb, and the higher the temperature, the bluer the bulb.
Standard incandescent light bulbs have temperatures between 2700 K and 3000 K. This is because the temperature that produces a peak at the red-most end of the visible spectrum (with $\lambda=750$ nm) is ~3800 K. Thus, as the filament temperature is reduced below 3800 K, the peak will shift further out of the visible range (per Wien's displacement law) and the light produced will appear dimmer, so colder temperatures appear too dim to function as a light bulb.
As the filament temperature is increased, however, the operational temperature of the light bulb approaches the melting point of the filament. Since temperature is proportional to the mean kinetic energy of the molecules, many of the molecules in the filament will individually have more energy than that mean, severely reducing the lifetime of a light bulb with a filament whose operational temperature is too close to its melting point. This is why incandescent bulbs are rarely rated above 3000 K (at least those without some kind of special coating that makes them bluer).
However, the only elements with melting points above 3000 K are the elements you mention: carbon, osmium, tantalum, and tungsten, with the exception of rhenium, one of the rarest elements on earth. See the elements in red (which have melting points at or above 3000 K) in the following periodic table (from ptable.com):
That doesn't mean there aren't other materials with high melting points that might function well as filaments for an incandescent bulb, such as tantalum carbide (which can melt at around 3900 K), for example. Here's a 1935 patent for a tantalum carbide lamp. Even rhenium has been considered. Here's a 2001 patent for a tungsten-rhenium alloy filament, although given the rarity of rhenium, it is likely not economical to use as a filament for consumer-grade light bulbs.
Non-incandescent light bulbs produce light through entirely different mechanisms, or else don't use a solid filament, which is why LED light bulbs don't require tungsten et al. at all. | {
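To make the temperature/color argument concrete, here is a small back-of-the-envelope check (my addition) using Wien's displacement law, $\lambda_{peak} = b/T$. At 2700-3000 K the blackbody peak is actually in the near infrared (~1000 nm); only around 3800 K does it reach the red edge of the visible range (~760 nm), consistent with the figures quoted above.

```python
# Wien's displacement law: lambda_peak = b / T.
b = 2.898e-3  # Wien's displacement constant, in m*K

for T in (2700, 3000, 3800):             # bulb temperatures in kelvin
    peak_nm = b / T * 1e9                # peak emission wavelength in nm
    print(f"{T} K -> peak at {peak_nm:.0f} nm")
```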
"domain": "physics.stackexchange",
"id": 92641,
"tags": "electricity, material-science, physical-chemistry"
} |
Chemical reaction that can produce lots of heat from 2-3 simple liquid ingredients? | Question: Have an experiment I want to try out utilizing thermoelectric power and I want to generate the heat via a simple chemical reaction. Ideally I'd like to mix 2 liquids together (one being water or alcohol would be great) that can get to around boiling water temperature for a few minutes or longer. Byproducts need to be only other liquids that can be drained easily or gases that aren't dangerous to humans. No solid wastes, and nothing corrosive / carcinogenic / caustic.
So far I've found one that's close to my needs but still produces a solid waste product and isn't particularly safe, which is Calcium Oxide + Water -> Calcium Hydroxide + Heat. This might work if there's another liquid I could add after the main reaction to turn the Calcium Hydroxide into a liquid for disposal.
Answer: The first thing that comes to mind, if you have access to a stock-room, is mixing two solutions: (1) NaOH; (2) HCl. This can release a lot of heat if your solutions are concentrated enough, and it forms salt-water if your NaOH and HCl are of the same molarity.
Edit: I'm on my lunch break, so I did some of the math...
The chemical reaction of interest in this case is:
$$\ce{OH- + H+ -> H2O}$$
The standard enthalpies of formation ($\Delta H_\mathrm f^\circ$) of these species are:
$$\begin{array}{lr}
\hline
\text{Species} & \Delta H_\mathrm f^\circ/\pu{kJ mol-1} \\
\hline
\ce{OH-} & -229.99 \\
\ce{H+} & 0.00 \\
\ce{H2O} & -288.83 \\
\hline
\end{array}$$
Thus, the change in enthalpy for the reaction is:
$$\Delta H^\circ = -288.83\ \mathrm{kJ/mol} - (-229.99\ \mathrm{kJ/mol}+0\ \mathrm{kJ/mol}) = -58.84\ \mathrm{kJ/mol}$$
Therefore, for $1\ \mathrm{mol}$ of $\ce{NaOH}$ + $1\ \mathrm{mol}$ of $\ce{HCl}$, you get $58.84\ \mathrm{kJ}$ of heat. Say you want to release enough heat to get the net solution up to $100\ \mathrm{^\circ C}$.
Water has a heat capacity of $4.18\ \mathrm{J/(g\ ^\circ C)}$. Say you have $1\ \mathrm L$ of $\ce{NaOH}$ + $1\ \mathrm L$ of $\ce{HCl}$, you'll need enough heat to raise the temperature of $2\ \mathrm L$ of water to $100\ \mathrm{^\circ C}$. I'll assume the water starts off at $25\ \mathrm{^\circ C}$, so you have $2\,000\ \mathrm g$ and $75\ \mathrm{^\circ C}$ to go.
$$4.18\ \mathrm{J/(g\ ^\circ C)} \cdot 2\,000\ \mathrm g \cdot 75\ \mathrm{^\circ C} = 627\,000\ \mathrm J = 627\ \mathrm{kJ}$$
How many moles of $\ce{NaOH}$ + $\ce{HCl}$ do you need for that much heat?
$$\frac{q}{\Delta H^\circ} = \frac{627\ \mathrm{kJ}}{58.84\ \mathrm{kJ/mol}} = 10.66\ \mathrm{mol}$$
That would mean that you can mix $1\ \mathrm L$ of $10.66\ \mathrm M$ $\ce{NaOH}$ + $1\ \mathrm L$ of $10.66\ \mathrm M$ $\ce{HCl}$, and would theoretically expect to get a temperature close to $100\ \mathrm{^\circ C}$.
This might be off a bit because I've made some assumptions:
$\Delta H^\circ$ is constant with respect to temperature from $25\ \mathrm{^\circ C}$ to $100\ \mathrm{^\circ C}$. This may not be true.
the dissolved salts in water don't significantly affect its heat capacity
You have $2\,000\ \mathrm g$ of water in $1.065\ \mathrm M$ $\ce{NaOH}$ + $1.065\ \mathrm M$ $\ce{HCl}$
I think this could get you close though. It's a starting point at least.
Disclaimer of course: be careful with the $\ce{NaOH}$ + $\ce{HCl}$ solutions, they can be dangerous. Use proper chemistry hygiene protocols. The mixture should be benign, but you should confirm this with pH paper.
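The arithmetic above is easy to sanity-check. This is my addition; it just re-runs the numbers from the tabulated enthalpies of formation and the stated heat-capacity assumptions:

```python
# Reaction enthalpy for OH- + H+ -> H2O, from the standard enthalpies of
# formation quoted above (kJ/mol).
dH = -288.83 - (-229.99 + 0.0)           # about -58.84 kJ per mole of water formed

# Heat needed to warm 2000 g of water from 25 C to 100 C.
q = 4.18 * 2000 * 75 / 1000              # in kJ

moles = q / abs(dH)                      # moles of NaOH (and HCl) required
print(dH, q, moles)
```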
"domain": "chemistry.stackexchange",
"id": 7063,
"tags": "thermodynamics, heat"
} |
.Net core string symmetric encryption | Question: I need to store sensitive strings in DB, so decided to encrypt it in a DB and decrypt it at the application layer. And noticed that it is not so easy to find ready to use an example. Please check my code below. It is based on this damienbod's topic, but I included IV to the string itself and also want to know if it is production-ready. And a somewhat unrelated question: probably I should use an old Rijndael solution like this?
public class SymmetricEncryptDecrypt
{
private const int ivBytes = 128;
public (string Key, string IVBase64) InitSymmetricEncryptionKeyIV()
{
var key = GetEncodedRandomString(32); // 256
using (Aes cipher = CreateCipher(key))
{
cipher.GenerateIV();
var IVBase64 = Convert.ToBase64String(cipher.IV);
return (key, IVBase64);
}
}
private byte[] GetNewIv()
{
using (Aes cipher = CreateCipher(GetEncodedRandomString(32)))
{
cipher.GenerateIV();
return cipher.IV;
}
}
/// <summary>
/// Encrypt using AES
/// </summary>
/// <param name="text">any text</param>
/// <param name="key">Base64 key</param>
/// <returns>Returns an encrypted string</returns>
public string Encrypt(string text, string key)
{
var iv = this.GetNewIv();
using (Aes cipher = CreateCipher(key))
{
cipher.IV = iv;
ICryptoTransform cryptTransform = cipher.CreateEncryptor();
byte[] plaintext = Encoding.UTF8.GetBytes(text);
byte[] cipherText = cryptTransform.TransformFinalBlock(plaintext, 0, plaintext.Length);
return Convert.ToBase64String(iv.Concat(cipherText).ToArray());
}
}
/// <summary>
/// Decrypt using AES
/// </summary>
/// <param name="text">Base64 string for an AES encryption</param>
/// <param name="key">Base64 key</param>
/// <returns>Returns a string</returns>
public string Decrypt(string encryptedText, string key)
{
var cipherTextBytesWithSaltAndIv = Convert.FromBase64String(encryptedText);
var ivStringBytes = cipherTextBytesWithSaltAndIv.Take(ivBytes / 8).ToArray();
// Get the actual cipher text bytes by removing the first 16 bytes (the 128-bit IV) from the cipherText string.
var cipherTextBytes = cipherTextBytesWithSaltAndIv.Skip(ivBytes / 8).Take(cipherTextBytesWithSaltAndIv.Length - (ivBytes / 8)).ToArray();
using (Aes cipher = CreateCipher(key))
{
cipher.IV = ivStringBytes;
ICryptoTransform cryptTransform = cipher.CreateDecryptor();
byte[] plainBytes = cryptTransform.TransformFinalBlock(cipherTextBytes, 0, cipherTextBytes.Length);
return Encoding.UTF8.GetString(plainBytes);
}
}
private string GetEncodedRandomString(int length)
{
var base64 = Convert.ToBase64String(GenerateRandomBytes(length));
return base64;
}
/// <summary>
/// Create an AES Cipher using a base64 key
/// </summary>
/// <param name="key"></param>
/// <returns>AES</returns>
private Aes CreateCipher(string keyBase64)
{
// Default values: Keysize 256, Padding PKCS7
Aes cipher = Aes.Create();
cipher.Mode = CipherMode.CBC; // CBC (the default); note that CBC by itself does not ensure integrity of the ciphertext
cipher.Padding = PaddingMode.ISO10126;
cipher.Key = Convert.FromBase64String(keyBase64);
return cipher;
}
private byte[] GenerateRandomBytes(int length)
{
var byteArray = new byte[length];
RandomNumberGenerator.Fill(byteArray);
return byteArray;
}
}
Answer: This code looks complicated to me. Not because I don't know how to use an AES encrypter, but because it contains a lot of redundancy: from setting CipherMode.CBC, which is already the default, to GetNewIv(), when the IV is already random by default.
Is much of this code targeting a security improvement? If it is, you have a weakness: storing the key in a string. I can easily get the key from the app's memory even if it is not currently in use, because a string is immutable and can remain in memory for an undefined period of time. Storing the key in a byte[] array allows you to clean up the array at any moment.
If security on the DB side is enough, then you can simplify the code to the following two methods.
I use streams because they are friendlier to me. As a bonus, the code can easily be ported to stream-based use.
private static string Encrypt(string text, byte[] key)
{
using AesManaged aes = new AesManaged() { Key = key };
using MemoryStream ms = new MemoryStream();
ms.Write(aes.IV);
using (CryptoStream cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write, true))
{
cs.Write(Encoding.UTF8.GetBytes(text));
}
return Convert.ToBase64String(ms.ToArray());
}
private static string Decrypt(string base64, byte[] key)
{
using MemoryStream ms = new MemoryStream(Convert.FromBase64String(base64));
byte[] iv = new byte[16];
ms.Read(iv);
using AesManaged aes = new AesManaged() { Key = key, IV = iv };
using CryptoStream cs = new CryptoStream(ms, aes.CreateDecryptor(), CryptoStreamMode.Read, true);
using MemoryStream output = new MemoryStream();
cs.CopyTo(output);
return Encoding.UTF8.GetString(output.ToArray());
}
Demo
string text = "Hello world";
byte[] key = Enumerable.Range(0, 32).Select(x => (byte)x).ToArray(); // just for example :)
string base64 = Encrypt(text, key);
Console.WriteLine(base64);
Console.WriteLine(Decrypt(base64, key));
Output
Qh+XfnIuIdgllOiKFgzCpTURW+bUuj91S91zA1przRQ=
Hello world
I think that's enough to keep it secure on the DB side. Btw, you may keep the key generator methods if you need them.
P.S. Regarding the old Rijndael solution: Rijndael was superseded by Aes in .NET. AES is the standardized subset of Rijndael with a fixed 128-bit block size, so Aes is the class to prefer.
One more tip. If you're on Windows and the encrypted data is allowed to be lost (in case of emergency), you can use DPAPI to protect the key (MS NuGet package exists, Current User/Local Machine protecting modes available) and store protected key in DB in the same sequence as IV. Then you can use totally random key for each Encrypt() call and store it with data. The data may be lost on Windows reinstall or moving the ASP.NET server to other machine, because DPAPI uses key associated with Current User credentials or Local Machine ID. But protected data can't be restored if DB was stolen or in any other way on other PC. Anyway learn how DPAPI works, it may be useful if the server runs under Windows. | {
"domain": "codereview.stackexchange",
"id": 41089,
"tags": "c#, .net, asp.net-core, encryption"
} |
C# Chat - Part 2: Client | Question: This is the second part of a multi-part review. The first part, the server for this client, can be found here.
I've been building a simple C# server-client chat-style app as a test of my C#. I've picked up code from a few tutorials, and extended what's there to come up with my own spec.
In this second part, I'd like to get some feedback on my client. It feels leaner and more efficient than the server, but I don't doubt that there are plenty of problems in here.
Program.cs
using System;
using System.Collections.Generic;
using System.Threading;
using System.Runtime.InteropServices;
using System.Windows.Forms;
namespace MessengerClient
{
class Program
{
private static Thread receiverThread;
private static bool FirstRun = true;
static void Main(string[] args)
{
if (FirstRun)
{
Console.BackgroundColor = ConsoleColor.White;
Console.ForegroundColor = ConsoleColor.Black;
Console.Clear();
Application.ApplicationExit += new EventHandler(QuitClient);
FirstRun = false;
}
if (args.Length == 1 && args[0] == "--debug")
{
Console.WriteLine("<DEBUG> Setting debug mode ON...");
Output.DebugMode = true;
}
Console.WriteLine("Enter the IP to connect to, including the port:");
string address = Console.ReadLine();
try
{
string[] parts = address.Split(':');
receiverThread = new Thread(new ParameterizedThreadStart(Receiver.Start));
receiverThread.Start(address);
Client.Start(parts[0], Int32.Parse(parts[1]));
}
catch (Exception e)
{
Console.Clear();
Output.Message(ConsoleColor.DarkRed, "Could not connect: " + e.Message);
Main(new string[1]);
}
}
private static void QuitClient(object sender, EventArgs e)
{
Client.Disconnect();
while (!Commands.ExitHandlingFinished)
{
Thread.Sleep(100);
}
}
}
}
Client.cs
using System;
using System.Collections.Generic;
using System.Text;
using System.Net;
using System.Net.Sockets;
using System.Threading;
namespace MessengerClient
{
class Client
{
private static ASCIIEncoding encoder = new ASCIIEncoding();
private static int clientId = 0;
public static int GetClientId()
{
return clientId;
}
public static TcpClient client = new TcpClient();
private static IPEndPoint serverEndPoint;
public static void Start(string ip, int port)
{
serverEndPoint = new IPEndPoint(IPAddress.Parse(ip), port);
try
{
client.Connect(serverEndPoint);
}
catch (Exception e)
{
throw new Exception("No connection was made: " + e.Message);
}
while (true)
{
Output.Write(ConsoleColor.DarkBlue, "Me: ");
Console.ForegroundColor = ConsoleColor.DarkBlue;
string message = Console.ReadLine();
Console.ForegroundColor = ConsoleColor.Black;
if (Commands.IsCommand(message))
{
Commands.HandleCommand(client, message);
continue;
}
SendMessage(message);
}
}
public static void SendMessage(string message)
{
NetworkStream clientStream = client.GetStream();
byte[] buffer;
if (message.StartsWith("[Disconnect]") || message.StartsWith("[Command]"))
{
buffer = encoder.GetBytes(message);
}
else
{
buffer = encoder.GetBytes("[Send]" + message);
}
clientStream.Write(buffer, 0, buffer.Length);
clientStream.Flush();
}
public static void HandleResponse(ResponseCode code)
{
switch (code)
{
case ResponseCode.Success:
return;
case ResponseCode.ServerError:
Output.Message(ConsoleColor.DarkRed, "The server could not process your message. (100)");
break;
case ResponseCode.NoDateFound:
Output.Message(ConsoleColor.DarkRed, "Could not retrieve messages from the server. (200)");
break;
case ResponseCode.BadDateFormat:
Output.Message(ConsoleColor.DarkRed, "Could not retrieve messages from the server. (201)");
break;
case ResponseCode.NoMessageFound:
Output.Message(ConsoleColor.DarkRed, "The server could not process your message. (300)");
break;
case ResponseCode.NoHandlingProtocol:
Output.Message(ConsoleColor.DarkRed, "The server could not process your message. (400)");
break;
case ResponseCode.NoCode:
Output.Message(ConsoleColor.DarkRed, "Could not process the server's response. (NoCode)");
break;
default:
return;
}
}
public static void ParseClientId(string id)
{
clientId = Int32.Parse(id);
}
public static void Disconnect()
{
SendMessage("[Disconnect]");
Commands.EndRcvThread = true;
Output.Debug("Requested receive thread termination.");
Output.Message(ConsoleColor.DarkGreen, "Shutting down...");
}
}
}
Receiver.cs
using System;
using System.Collections.Generic;
using System.Text;
using System.Net;
using System.Net.Sockets;
using System.Threading;
namespace MessengerClient
{
class Receiver
{
private static TcpClient client = new TcpClient();
private static IPEndPoint serverEndPoint;
public static void Start(object address)
{
string[] parts = ((string) address).Split(':');
try
{
serverEndPoint = new IPEndPoint(IPAddress.Parse(parts[0]), Int32.Parse(parts[1]));
}
catch (Exception e)
{
Output.Message(ConsoleColor.DarkRed, "Could not connect: " + e.Message);
return;
}
try
{
client.Connect(serverEndPoint);
client.ReceiveTimeout = 500;
}
catch (Exception e)
{
Output.Message(ConsoleColor.DarkRed, "Could not connect: " + e.Message);
return;
}
NetworkStream stream = client.GetStream();
string data = "";
byte[] received = new byte[4096];
while (true)
{
if (Commands.EndRcvThread)
{
Output.Debug("Ending receiver thread");
client.Close();
Output.Debug("Cleaned up receive client");
Commands.RcvThreadEnded = true;
Commands.HandleResponse("[DisconnectAcknowledge]");
Output.Debug("Notified Commands handler of thread abortion");
Thread.CurrentThread.Abort();
return;
}
data = "";
received = new byte[4096];
int bytesRead = 0;
try
{
bytesRead = stream.Read(received, 0, 4096);
}
catch (Exception e)
{
continue;
}
if (bytesRead == 0)
{
break;
}
int endIndex = received.Length - 1;
while (endIndex >= 0 && received[endIndex] == 0)
{
endIndex--;
}
byte[] finalMessage = new byte[endIndex + 1];
Array.Copy(received, 0, finalMessage, 0, endIndex + 1);
data = Encoding.ASCII.GetString(finalMessage);
Output.Debug("Server message: " + data);
try
{
ProcessMessage(data);
}
catch (Exception e)
{
Output.Message(ConsoleColor.DarkRed, "Could not process the server's response (" + data + "): " + e.Message);
}
}
}
public static void ProcessMessage(string response)
{
Output.Debug("Processing message: " + response);
response = response.Trim();
if (response.StartsWith("[Message]"))
{
Output.Debug("Starts with [Message], trying to find ID");
response = response.Substring(9);
int openIndex = response.IndexOf("<");
int closeIndex = response.IndexOf(">");
if (openIndex < 0 || closeIndex < 0 || closeIndex < openIndex)
{
Output.Debug("No ID tag? ( <ID-#-HERE> )");
throw new FormatException("Could not find ID tag in message");
}
int diff = closeIndex - openIndex;
int id = Int32.Parse(response.Substring(openIndex + 1, diff - 1));
if (id != Client.GetClientId())
{
string message = response.Substring(closeIndex + 1);
Console.WriteLine();
Output.Message(ConsoleColor.DarkYellow, "<Stranger> " + message);
Output.Write(ConsoleColor.DarkBlue, "Me: ");
}
else
{
Output.Debug("ID is client ID, not displaying.");
}
}
else if (response == "[DisconnectAcknowledge]" || response == "[CommandInvalid]")
{
Output.Debug("Sending response to Commands handler: " + response);
Commands.HandleResponse(response);
}
else if (response.Length == 5 && response.StartsWith("[") && response.EndsWith("]"))
{
Client.HandleResponse(ResponseCodes.GetResponse(response));
}
else
{
Output.Debug("Figuring out what to do with server message: " + response);
try
{
Int32.Parse(response);
Output.Debug("Int32.Parse has not failed, assume client ID sent.");
Client.ParseClientId(response);
return;
}
catch (Exception e) {
Output.Debug("Could not process client ID: " + e.Message);
}
Output.Debug("Could not identify what to do with message.");
Output.Message(ConsoleColor.DarkCyan, "<Server> " + response);
}
}
}
}
ResponseCodes.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace MessengerClient
{
public enum ResponseCode
{
Success,
ServerError,
NoDateFound,
BadDateFormat,
NoMessageFound,
NoHandlingProtocol,
NoCode,
NoResponse
}
class ResponseCodes
{
public static Dictionary<string, ResponseCode> CodeStrings = new Dictionary<string, ResponseCode>
{
{"[600]", ResponseCode.Success},
{"[100]", ResponseCode.ServerError},
{"[200]", ResponseCode.NoDateFound},
{"[201]", ResponseCode.BadDateFormat},
{"[300]", ResponseCode.NoMessageFound},
{"[400]", ResponseCode.NoHandlingProtocol},
};
public static ResponseCode GetResponse(string code)
{
if (CodeStrings.ContainsKey(code))
{
return CodeStrings[code];
}
else
{
return ResponseCode.NoCode;
}
}
}
}
Commands.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Net.Sockets;
namespace MessengerClient
{
class Commands
{
public static volatile bool EndRcvThread = false;
public static volatile bool RcvThreadEnded = false;
public static bool ExitHandlingFinished = false;
public static bool IsCommand(string command)
{
if (command.StartsWith("/"))
{
return true;
}
else
{
return false;
}
}
public static void HandleCommand(TcpClient client, string command)
{
string[] args = command.Split(' ');
switch (args[0].ToLower())
{
case "/server":
if (args.Length >= 2)
{
int startIndex = args[0].Length;
string commandArgs = command.Substring(startIndex + 1);
Client.SendMessage("[Command]" + commandArgs);
}
else
{
Output.Message(ConsoleColor.DarkRed, "Not enough arguments");
return;
}
break;
case "/exit":
Client.Disconnect();
break;
default:
Output.Message(ConsoleColor.DarkRed, "Unknown command.");
return;
}
}
public static void HandleResponse(string response)
{
// Command was sent; server did not recognise
if (response == "[CommandInvalid]")
{
Output.Message(ConsoleColor.DarkRed, "The command was not recognised by the server.");
return;
}
// Disconnect was sent; server acknowledges
if (response == "[DisconnectAcknowledge]")
{
EndRcvThread = true;
Output.Debug("Waiting for thread termination");
while (!RcvThreadEnded)
{
Thread.Sleep(100);
}
Output.Debug("Thread terminated, cleaning send client");
Client.SendMessage("");
Client.client.Close();
Output.Debug("Cleaned up send client");
if (Output.DebugMode)
{
Console.WriteLine();
Output.Debug("Press any key to exit");
Console.ReadKey();
}
Environment.Exit(0);
}
// Fallback for neither case: pass it off to the client
ResponseCode code = ResponseCodes.GetResponse(response);
Client.HandleResponse(code);
}
}
}
The final class, Output.cs, is the same class as in the last post, and I'm still happy with it so am not putting it up for review. Please also note, I do have XML documentation comments in the code but to save characters have excluded them here.
Answer: private static bool FirstRun = true;
private fields are lowerCamelCase or _lowerCamelCase, the latter receiving much preference.
if (args.Length == 1 && args[0] == "--debug")
This is fine if you only have one argument but as soon as you want multiple you'll have issues: people expect arguments to be swappable so you might want to look into making this more generic if you go in that direction.
I would also use args.Any() to make it more expressive.
Output.DebugMode = true;
I don't like the supposedly singleton instance of Output. You might just as well create a normal instance and pass it along to your client, no? Perhaps some dependency injection?
I would parse the Uri before you pass it to the client. A trick to do it with IP + Port could be this:
Uri uri = new Uri("http://" + "192.168.11.11:8080");
Console.WriteLine (uri.Host);
Console.WriteLine (uri.Port);
Keep it in a try-catch though because it will throw an exception if it's badly formatted.
This also solves the problem that you might have in case no port is specified (ArrayIndexOutOfRangeException) or that the IP can't be parsed into an IPEndPoint (FormatException).
On top of that it also keeps the responsibility of validation inside your main() block instead of passing exceptions through threads and all that stuff.
catch (Exception e)
Console.Clear();
Don't clear my console! I use that to retrace my steps and perhaps contact support.
Main(new string[1]);
What's the point of setting its length to 1? I do like the approach you used here to re-call the Main method.
Client.Disconnect();
while (!Commands.ExitHandlingFinished)
{
Thread.Sleep(100);
}
This sort of polling should have a timeout in case something isn't going as expected. An indication to the user that the program is quitting is advised as well.
I'd advise you to group your members by their type so you know exactly where you can find something. group private fields, private static fields, public static fields, methods, etc.
public static TcpClient client = new TcpClient();
We don't do public fields in C#. This should be a property (why does the outside world even need to know about this inner detail?)
Too much static. This is very hard to test and limits scalability.
catch (Exception e)
{
throw new Exception("No connection was made: " + e.Message);
}
Pass in the original exception as the new inner exception.
The only way to interrupt your chat program is by exiting the application. That isn't very nice -- I might want to keep the program open! Perhaps provide interruptability?
if (message.StartsWith("[Disconnect]") || message.StartsWith("[Command]"))
This is a simple approach and its purposes are clear but I would consider a custom object that holds a property Message and something like MessageKind which could be an enum of Message and ConnectionStatus, that sort of stuff. It allows you to add other variations more easily and doesn't restrict you to an exact string to work with.
public static void ParseClientId(string id)
{
clientId = Int32.Parse(id);
}
Seems a little pointless -- You're even adding characters. I would also just use int instead of Int32 to retain conformity with the rest of the code.
Thread.CurrentThread.Abort();
You shouldn't have to abort the thread since that is considered unreliable. Just using return; should do the trick.
int endIndex = received.Length - 1;
while (endIndex >= 0 && received[endIndex] == 0)
{
endIndex--;
}
This is a curious piece of code to me. Maybe I'm misinterpreting it so perhaps you can clarify: are you expecting 0 values being sent? Why would it do this?
You should probably add a comment to specify what you're doing (e.g.: // trimming useless data).
response = response.Substring(9);
Make clear why you're using 9. The above debug statement isn't adequate documentation (it's not explicitly linked to the line of code so people might remove it). It also strikes me more as a comment than debug output, really.
int openIndex = response.IndexOf("<");
This is the first I see of these fishbrackets. What are they used for? Commentate it!
Output.Debug("ID is client ID, not displaying.");
Should a client be able to talk to himself? If yes: you're not doing that. If no: this indicates something is fundamentally wrong! Throw an exception and let the user know -- don't just hide it in the logs.
Int32.Parse(response);
Output.Debug("Int32.Parse has not failed, assume client ID sent.");
Client.ParseClientId(response);
Double work for no reason. Consider using int.TryParse() instead.
EndRcvThread
RcvThreadEnded
Only a very select few abbreviations are recommended (db, app, etc). These aren't amongst them.
Again note that these are fields and not properties!
if (command.StartsWith("/"))
{
return true;
}
else
{
return false;
}
Also known as
return command.StartsWith("/");
args[0].ToLower()
String comparison should never be done like this for two main reasons:
Performance impact -- you create a new string. What if that first string barely fitted in your memory?
It's not a correct comparison. This comment makes it clear but I suggest reading the entire post as well.
Overall the code can be followed pretty well.
Two things I would definitely look into if I were you: threading and static-ness. I'm not versed enough in threading to give a meaningful review but certain static fields and thread handling raised some eyebrows.
The static-ness of your code is something you really should address though: It's very hard to test and your classes are very tightly coupled. I'd rather see instances being passed around where needed.
While on the note of testing: all your external dependencies are hardcoded in it -- look into dependency injection if you want to start unit-testing some things! | {
"domain": "codereview.stackexchange",
"id": 12927,
"tags": "c#, client"
} |
Thought experiment with specific heat | Question: I have been thinking about the following thought experiment regarding a material whose specific heat decreases with external electric field (see this article for an example of such a material). Say that we put such a material outside under no electric field and let it equilibrate to the ambient temperature of 300 K. We then bring over a parallel-plate capacitor that we have precharged, which generates a strong electric field between the plates, and place the material inside of it. Now, because the specific heat capacity of the material has been lowered, it can no longer hold all the heat that it previously contained, and so it will radiate heat into the environment. In theory, one could use this heat transfer to power a heat engine or drive an electric power plant. Once equilibrated, we remove the material from the capacitor, and it will start to suck in heat from the air, which can once again be used to generate power - and in theory, it seems like this cycle can be repeated ad infinitum. Clearly, this can't be the case, because what I am describing is an infinite source of free power, but can anyone point out the flaw in my above argument?
Answer:
Now, because the specific heat capacity of the material has been lowered, it can no longer hold all the heat that it previously contained
Heat capacity tells us how much energy is needed to produce a change in temperature. It is not a measure of the total energy something can "hold". A change in heat capacity does not result in spontaneous energy absorption or emission "for free". | {
"domain": "physics.stackexchange",
"id": 94470,
"tags": "thermodynamics, temperature, energy-conservation, thought-experiment"
} |
What are the maximum and minimum impact speeds for an asteroid that would strike the Earth? | Question: I gather that the impact speed depends on the radius of the orbit of the asteroid.
Is the orbit of asteroids in the same direction around the sun, or can they move in the "opposite" direction?
Answer: Although most objects orbit the Sun in the same direction – having emerged out of the same rotating gas cloud that spawned the Solar System – some asteroids and other minor planets do move in opposite, or retrograde, orbits (see this Wikipedia article for a list of such objects).
The minimum speed for an asteroid is achieved if it has more or less the same velocity around the Sun as the Earth. In this case the gravitational attraction of Earth will accelerate the object to the escape velocity of Earth, i.e.
$$
v_\mathrm{min} = v_{\mathrm{esc,}\oplus} = \sqrt{\frac{2GM_\oplus}{R_\oplus}} \simeq 11\,\mathrm{km}\,\mathrm{s}^{-1}.
$$
Here, $G$, $M_\oplus$, and $R_\oplus$ are the gravitational constant and the mass and radius of Earth, respectively.
The maximum speed is achieved at a "head-on" collision. Earth's speed around the Sun is
$$
v_{\mathrm{orb,}\oplus} = \sqrt{\frac{GM_\odot}{d}} \simeq 30\,\mathrm{km}\,\mathrm{s}^{-1},
$$
where $M_\odot$ and $d$ are the mass of the Sun and the distance from Sun to Earth (1 AU).
If the asteroid travels on the same orbit, but in the opposite direction, the impact will then be at 60 km/s. However, if the asteroid comes from far away (e.g. the Oort Cloud), it will be accelerated by the Sun and achieve a velocity equal to the escape velocity from the Sun at the location of Earth. As is seen from the two equations above, the orbital speed and the escape velocity differ by a factor of $\sqrt{2}$. That is, an object falling from infinity toward the Sun, will have a speed equal to $30\,\mathrm{km}\,\mathrm{s}^{-1}\times\sqrt{2}=42\,\mathrm{km}\,\mathrm{s}^{-1}$ when it reaches Earth.
Hence, the maximum impact velocity is
$$
v_\mathrm{max} = 30\,\mathrm{km}\,\mathrm{s}^{-1} + 42\,\mathrm{km}\,\mathrm{s}^{-1} = 72\,\mathrm{km}\,\mathrm{s}^{-1}.
$$ | {
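As a quick sanity check, these numbers can be reproduced in a few lines of Python (the physical constants below are standard rounded reference values):

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_earth = 5.972e24   # mass of Earth [kg]
R_earth = 6.371e6    # mean radius of Earth [m]
M_sun = 1.989e30     # mass of the Sun [kg]
AU = 1.496e11        # Sun-Earth distance [m]

# Minimum impact speed: Earth's escape velocity (~11 km/s)
v_min = math.sqrt(2 * G * M_earth / R_earth)

# Earth's orbital speed around the Sun (~30 km/s)
v_orb = math.sqrt(G * M_sun / AU)

# Speed at Earth's distance of an object falling from far away (~42 km/s)
v_fall = math.sqrt(2) * v_orb

# Maximum impact speed: head-on collision with such an object (~72 km/s)
v_max = v_orb + v_fall

print(round(v_min / 1000, 1), round(v_orb / 1000, 1), round(v_max / 1000, 1))
```

Running it gives roughly 11.2, 29.8, and 71.9 km/s, matching the figures quoted above.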
"domain": "astronomy.stackexchange",
"id": 5113,
"tags": "asteroids"
} |
Genome and Proteome | Question: Due to the dynamism of protein expression, a genome can give rise to different proteomes, but could we say that a proteome comes from different genomes?
Answer: Yes, different genomes can produce the same proteome.
Imagine a genome that only has a single protein-coding sequence (without splicing isoforms), the rest of the genome is simply regulatory sequences. Whatever those regulatory sequences may be, as long as that single protein is expressed, it'll be the same proteome.
If you consider a single nucleotide difference enough to say two genomes are different, then there are probably quite a lot of different genomes on earth that produce proteomes identical with at least one other genome. | {
"domain": "biology.stackexchange",
"id": 5898,
"tags": "genomics, proteomics"
} |
Why are there exactly 207 morpho-electrical types of neurons? | Question: I'm taking an introductory neuroscience course online, and it mentions that of the 55 morphological types and the 11 electrical types, there are 207 morpho-electrical types. How does this work? 55 times 11 is 605, so it's not a simple mapping of 1 to 1. 207 isn't a factor of and doesn't share any proper divisors with 605, but it's roughly 1/3 (but not exactly). I don't see how a third makes sense though.
What causes exactly 207 morpho-electrical types?
Feel free to let me know if I should move this to Health or Psychology.SE, I'm not quite sure where to post this.
Answer: This is just that one author's opinion - it is telling that both sources you have for the number come from the same author. Henry Markram has a leadership role in the Blue Brain Project, a project to create a very detailed computer simulation of a chunk of neocortex to answer various questions. In the context of this project, it is somewhat necessary to make some decisions about numbers of cell types so that they can be incorporated in the model. People can bicker all day long about which types are actually unique types, though.
As far as the mathematics of having 55 morphological types and 11 electrical types, yet only 207 morpho-electrical types rather than 605, the answer is that the matrix is sparse. To get 605 types, you would need to observe all 11 electrical types within each of the 55 morphological categories. If not every electrical type is observed for a given morphological type, you will have less than 605 total. In some cases morphological types are specifically associated with particular electrical types.
You should also read about the concept of lumpers and splitters - in summary, in any context of classification without clear distinctions, different people will come up with different boundaries. It isn't really possible to argue "right" versus "wrong" in this context, because doing so requires weighting different values. Lumping may be beneficial for the sake of simplicity and generalization but miss some details; splitting may allow for more nuance but also risks not seeing "the forest for the trees." | {
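To make the sparse-matrix point concrete, here is a toy sketch in Python (the type labels and the choice of which 207 cells are occupied are made up for illustration; they are not the actual classification):

```python
import itertools
import random

morph_types = [f"m{i}" for i in range(55)]   # 55 morphological types (placeholder labels)
elec_types = [f"e{j}" for j in range(11)]    # 11 electrical types (placeholder labels)

# The full cross product: every electrical type within every morphological type
all_pairs = list(itertools.product(morph_types, elec_types))
print(len(all_pairs))  # 605 possible combinations

# In practice only some combinations are ever observed; pretend it was 207 of them
random.seed(0)
observed = set(random.sample(all_pairs, 207))
print(len(observed))   # 207 morpho-electrical types
```

The count of morpho-electrical types is just the number of occupied cells in the 55 × 11 matrix, which could be anywhere from 55 (each morphology exhibiting exactly one electrical type) up to 605 (every combination observed).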
"domain": "biology.stackexchange",
"id": 8178,
"tags": "neuroscience, neurophysiology, neuroanatomy, morphology"
} |
$Y-\Delta$ transformation | Question: I'm working on electrical circuits and the $Y-\Delta$ transformation. Since I thought it was tough for me, I wanted to challenge myself with more problems, and I found this one on the web. However, I can't really see any obvious $T$'s or $\pi$'s for which I can use this transformation. Maybe there's some clever equivalent circuit for which these $T$'s and $\pi$'s become more obvious. If someone can hint me with such a circuit, without presenting the full solution since I want to try by myself, I'd be glad.
Answer: You can reduce the 2 sets of 3 resistors, that are in parallel configuration, to get two equivalent resistors with resistance $10/3 \, \Omega$ and $20/3 \, \Omega$. In this way, you have an equivalent circuit with 4 resistors in series. | {
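If you want to double-check the arithmetic on the parallel step (assuming, as the fractions suggest, two groups of three equal 10 Ω and three equal 20 Ω resistors), a small helper makes it explicit:

```python
def parallel(*resistors):
    # 1/R_eq = sum of 1/R_i for resistors in parallel
    return 1.0 / sum(1.0 / r for r in resistors)

r1 = parallel(10, 10, 10)  # 10/3 ohms
r2 = parallel(20, 20, 20)  # 20/3 ohms
print(r1, r2)
```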
"domain": "physics.stackexchange",
"id": 90625,
"tags": "homework-and-exercises, electric-circuits, electrical-resistance"
} |
Calculating the surface integral over a charged box | Question: In a question that I'm trying to solve, we're told that there is a box with dimensions 20 cm long, 4 cm high, and 3 cm deep. We're given that the electric field on every surface of the box is pointing vertically upwards, and that the strength on the bottom is 1500 V/m, over all the sides it's 1000 V/m, and over the top it's 600 V/m.
We are asked to find the total charge in the box.
I'm assuming I need to use Gauss's law and find the flux through a surface. Am I allowed to just use a plane above the box for this or does it need to be a surface that is closed and fully encompasses the box?
In my calculations I assumed I could just use a plane above the box (since that's where all the field lines are pointing). so I got: $$\oint \overset{\rightarrow }{E}dA = \frac{q}{\varepsilon _{0}}$$
and since the electric field is perpendicular to the plane its just $$\left |\overset{\rightarrow }{E} \right |A = \frac{q}{\varepsilon _{0}}$$ where $E = 1000+1500+600= 3100$ and $A = 20cm *3cm = 0.6m$
giving: $$q = 3100\times 0.6\times\left ( 8.85\times10^{-12}\right )$$ (the 8.85 is the value I'm using for epsilon naught, which I'm not sure is right?) which gives $1.6461\times10^{-8}$, which is coming back as incorrect.
I have also tried setting the field strength on the bottom of the box as negative, since the lines are heading into the box, which I think is actually correct, and used the same exact method as above (only doing it twice and subtracting the result for the bottom). Still wrong, so I'm guessing there is something more fundamentally wrong with my working? If anyone can shed light on where I'm making my mistake I would be very grateful. Thanks.
Answer: Alright, one issue I noticed is the incorrect Gauss' Law expression. It is actually:
$$ \oint {\vec E}\cdot \mathrm{d}{\vec A} = \frac{q}{\epsilon_0} $$
So only the component of the field perpendicular to each face matters. Since the electric field vectors for all the field strengths you've given across each surface point upwards, only the top and bottom faces will contribute to the Gauss' Law expression.
The error as I see it is that you've included the side faces in your evaluation, when there is no electric field acting perpendicular to the respective faces.
The convention here is such that all vectors leaving the Gaussian surface (a cube in this case) are positive and vice versa.
The area of the top and bottom faces is: $ 0.2 \hspace{2 pt}m \times 0.03\hspace{2 pt} m = 0.006 \hspace{2 pt}m^2 $
Applying the values you've given:
$$ q = \epsilon_0[(-1500+600)\times 0.006] = -4.779 \times 10^{-11} \hspace{3 pt} C$$
I hope this is the answer given, otherwise you could directly post the question and I can give it another go. | {
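For completeness, here is the same calculation as a short Python sketch (using the same value of $\epsilon_0$ as the question):

```python
eps0 = 8.85e-12          # vacuum permittivity [F/m]

# Only faces with a perpendicular field component contribute.
# Top and bottom faces: 20 cm x 3 cm
area = 0.20 * 0.03       # [m^2]

E_top = 600.0            # field leaving through the top [V/m] -> positive flux
E_bottom = 1500.0        # field entering through the bottom [V/m] -> negative flux

flux = (E_top - E_bottom) * area   # net outward flux; the side faces contribute zero
q = eps0 * flux

print(q)  # about -4.78e-11 C
```

Note the sign: more flux enters through the bottom than leaves through the top, so the enclosed charge comes out negative.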
"domain": "physics.stackexchange",
"id": 30431,
"tags": "homework-and-exercises, electrostatics, gauss-law"
} |
Minimum Uncertainty Wavefunction derivation | Question: Can anyone point me to a reference (preferably either something online or something a small liberal arts school would be likely to have in its library) that goes through a derivation of the minimum uncertainty wavefunction in more detail than in the Griffiths?
Edit I've moved the second part of my original question to a separate post: 3D Minimum uncertainty wavepackets
Answer: First some preliminaries. Suppose you have hermitian operators $A$ and $B$ and some state $\left| \psi \right>$. Denote by $\left<X\right>$ the expectation of $X$ in the state $\psi$, i.e. $\left<\psi\right| X \left| \psi\right>$. Denote by $\bar A := A - \left< A \right>$ and $\bar B := B - \left< B \right>$ the part of of $A$, resp. $B$ with vanishing expectation.
So, let's compute $\left< \bar A^2 \right> \left<\bar B^2 \right>$. By the Cauchy-Schwarz inequality, this is always greater than or equal to
$\left|\left< \bar A \bar B \right>\right|^2$ (just plug in $\psi$ and interpret these expressions as scalar products). Now, we can express the product as a sum of hermitian and antihermitian components
$$\left|\left< \bar A \bar B \right>\right|^2 = {1\over4}\left< [A,B]/i \right>^2 + {1\over 4}\left< \{\bar A, \bar B\} \right>^2$$
(here we used the fact that $[\bar A, \bar B] = [A,B]$).
If the commutator is just a number times the identity operator then we can discard the expectations, and after removing the anticommutator term (because it doesn't have any important interpretation and it doesn't spoil the inequality) we are left with the Heisenberg uncertainty principle. But we're not interested in this application right now. Instead, we want to minimize the error term and that means we want equalities everywhere (it's not clear that it's possible to attain them, but let's assume this for a while). First, the Cauchy-Schwarz inequality becomes an equality if the vectors in the scalar product are collinear $$\bar B \left| \psi \right> = c \bar A \left| \psi \right>$$ Second, we want the expectation of the anticommutator to vanish $$\left<\psi\right| \{\bar A, \bar B\} \left| \psi \right> = 0$$ So this gives us two equations for $\psi$. Let's see what we can get from them for $A = x$ and $B = p$. For simplicity let's assume that $\left<x\right> = \left<p\right> = 0$ (the general solution doesn't change anything much).
From first condition we obtain
$$(p - cx) \left | \psi \right> = 0$$ which is a differential equation
$$ (i \partial_x + cx) \psi(x) = 0$$
with a solution $\psi(x) = K \exp(-\alpha x^2)$ with ${\rm Re} \alpha > 0$ (so that this is indeed a vector from our Hilbert space) and $K$ being just a normalization constant. Finally from the anticommutator relation we get $\alpha = {1 \over 4(\Delta x)^2}$ and we're done. | {
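As a numerical cross-check (not part of the derivation), one can sample this Gaussian on a grid and verify that it saturates the uncertainty bound $\Delta x\, \Delta p = \hbar/2$. The sketch below sets $\hbar = 1$ and picks an arbitrary real $\alpha > 0$; the momentum-space width is obtained with a discrete Fourier transform:

```python
import numpy as np

hbar = 1.0
alpha = 0.7                      # arbitrary real width parameter, Re(alpha) > 0
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]

psi = np.exp(-alpha * x**2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

# Position uncertainty from |psi|^2
px = np.abs(psi)**2
mean_x = np.sum(x * px) * dx
delta_x = np.sqrt(np.sum((x - mean_x)**2 * px) * dx)

# Momentum-space wavefunction via FFT (continuum normalization;
# the phase from the grid offset drops out of |phi|^2)
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
dk = 2 * np.pi / (x.size * dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)
pk = np.abs(phi)**2
mean_k = np.sum(k * pk) * dk
delta_p = hbar * np.sqrt(np.sum((k - mean_k)**2 * pk) * dk)

print(delta_x * delta_p)  # ~ 0.5, i.e. hbar/2
```

Changing $\alpha$ moves $\Delta x$ and $\Delta p$ individually, but their product stays pinned at the minimum, consistent with the analytic result above.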
"domain": "physics.stackexchange",
"id": 695,
"tags": "quantum-mechanics, heisenberg-uncertainty-principle"
} |
Bash program that sets up and configures the environment for new Debian installs | Question: Here is my second version of a bash program I'm writing; as per @hjpotter92's advice, I've updated and cleaned it since the last version:
This script is run directly after a fresh install of Debian, to do:
sets up LS_COLORS (colors in /bin/ls output)
sets up Portsentry (security for ssh and defence for ports)
sets up syntax highlighting in nano,
sets up iptables,
sets up ssh,
sets up custom bashrc files
creates users on the system if needed,
checks if user has a password set and sets it if not,
installs non-free firmware and sets up apt with virtualbox deb file and multimedia deb.
This is from my previous post:
You have a lot of stub code in the script. Clean it up; remove unused
statements/declarations/tasks and post another question with the
updated code.
QUESTIONS:
Can you give me pointers on what could be better or different?
Do you have any ideas about new/other features for a setup program for a developer on Debian Stretch(9)?
Here is the code:
#!/bin/bash -x
# redirect all errors to a file
exec 2>debianConfigVersion3.1ERRORS.txt
##################################################################################################### exec 3>cpSuccessCodes.txt ##
SCRIPTNAME=$(basename "$0")
if [ "$UID" != 0 ]
then
echo "This program should be run as root, exiting! now....."
exit 1
fi
if [ "$#" -eq 0 ]
then
echo "RUN AS ROOT...Usage if you want to create users:...$SCRIPTNAME USER_1 USER_2 USER_3 etc."
echo "If you create users they will be set with a semi strong password which you need to change later as root with passwd"
echo
echo
echo "#################### ----------- OR ---------- #############################"
echo
echo
echo "RUN AS ROOT...Usage without creating users: $SCRIPTNAME"
echo
sleep 10
fi
echo "Here starts the party!"
echo "Setting up server..........please wait!!!!!"
sleep 3
### ---- NEXT TIME USE "declare VARIABLE" ---------- #####
OAUTH_TOKEN=d6637f7ccf109a0171a2f55d21b6ca43ff053616
CURRENTDIR=/tmp/svaka
BASHRC=.bashrc
NANORC=.nanorc
BASHRCROOT=.bashrcroot
SOURCE=sources.list
SSHD_CONFIG=sshd_config
#-----------------------------------------------------------------------
export DEBIAN_FRONTEND=noninteractive
#-----------------------------------------------------------------------
if grep "Port 22" /etc/ssh/sshd_config
then
echo -n "Please select/provide the port-number for ssh in iptables and sshd_config:"
read port ### when using the "-p" option then the value is stored in $REPLY
PORT=$port
fi
############################### make all files writable, executable and readable in the working directory#########
if /bin/chmod -R 777 "$CURRENTDIR"
then
continue
else
echo "chmod CURRENTDIR failed"
sleep 3
exit 127
fi
################ Creating new users #####################1
checkIfUser()
{
for name in "$@"
do
if /usr/bin/id -u "$name" #>/dev/null 2>&1
then
echo "User: $name exists....setting up now!"
else
echo "User: $name does not exists....creating now!"
/usr/sbin/useradd -m -s /bin/bash "$name" #>/dev/null 2>&1
fi
done
}
###########################################################################3
################# GET USERS ON THE SYSTEM ###################################
prepare_USERS()
{
checkIfUser "$@"
/usr/bin/awk -F: '$3 >= 1000 { print $1 }' /etc/passwd > "$CURRENTDIR"/USERS.txt
/bin/chmod 777 "$CURRENTDIR"/USERS.txt
if [[ ! -f "$CURRENTDIR"/USERS.txt && ! -w "$CURRENTDIR"/USERS.txt ]]
then
echo "USERS.txt doesn't exist or is not writable..exiting!"
exit 127
fi
for user in "$@"
do
echo "$user" >> /tmp/svaka/USERS.txt || { echo "writing to USERS.txt failed"; exit 127; }
done
}
###########################################################################33
################33 user passwords2
userPass()
{
if [[ ! -f "$CURRENTDIR"/USERS.txt && ! -w "$CURRENTDIR"/USERS.txt ]]
then
echo "USERS.txt doesn't exist or is not writable..exiting!"
exit 127
fi
while read i
do
if [ "$i" = root ]
then
continue
fi
if [[ $(/usr/bin/passwd --status "$i" | /usr/bin/awk '{print $2}') = NP ]] || [[ $(/usr/bin/passwd --status "$i" | /usr/bin/awk '{print $2}') = L ]]
then
echo "$i doesn't have a password."
echo "Changing password for $i:"
echo $i:$i"YOURSTRONGPASSWORDHERE12345" | /usr/sbin/chpasswd
if [ "$?" = 0 ]
then
echo "Password for user $i changed successfully"
sleep 5
fi
fi
done < "$CURRENTDIR"/USERS.txt
}
################################################ setting up iptables ####################3
setUPiptables()
{
#if ! grep -e '-A INPUT -p tcp --dport 80 -j ACCEPT' /etc/iptables.test.rules
if [[ `/sbin/iptables-save | grep '^\-' | wc -l` > 0 ]]
then
echo "Iptables already set, skipping..........!"
else
if [ "$PORT" = "" ]
then
echo "Port not set for iptables exiting"
echo -n "Setting port now, insert portnumber: "
read port
PORT=$port
fi
if [ ! -f /etc/iptables.test.rules ]
then
/usr/bin/touch /etc/iptables.test.rules
else
/bin/cat /dev/null > /etc/iptables.test.rules
fi
/bin/cat << EOT >> /etc/iptables.test.rules
*filter
# Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
-A INPUT -i lo -j ACCEPT
-A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT
# Accepts all established inbound connections
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allows all outbound traffic
# You could modify this to only allow certain traffic
-A OUTPUT -j ACCEPT
# Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
# Allows SSH connections
# The --dport number is the same as in /etc/ssh/sshd_config
-A INPUT -p tcp -m state --state NEW --dport $PORT -j ACCEPT
# Now you should read up on iptables rules and consider whether ssh access
# for everyone is really desired. Most likely you will only allow access from certain IPs.
# Allow ping
# note that blocking other types of icmp packets is considered a bad idea by some
# remove -m icmp --icmp-type 8 from this line to allow all kinds of icmp:
# https://security.stackexchange.com/questions/22711
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
# log iptables denied calls (access via dmesg command)
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
# Reject all other inbound - default deny unless explicitly allowed policy:
-A INPUT -j REJECT
-A FORWARD -j REJECT
COMMIT
EOT
sed "s/^[ \t]*//" -i /etc/iptables.test.rules ## remove tabs and spaces
/sbin/iptables-restore < /etc/iptables.test.rules || { echo "iptables-restore failed"; exit 127; }
/sbin/iptables-save > /etc/iptables.up.rules || { echo "iptables-save failed"; exit 127; }
/usr/bin/printf "#!/bin/bash\n/sbin/iptables-restore < /etc/iptables.up.rules" > /etc/network/if-pre-up.d/iptables ## create a script to run iptables on startup
/bin/chmod +x /etc/network/if-pre-up.d/iptables || { echo "chmod +x failed"; exit 127; }
fi
}
###################################################33 sshd_config4
setUPsshd()
{
if grep "Port $PORT" /etc/ssh/sshd_config
then
echo "sshd already set, skipping!"
else
if [ "$PORT" = "" ]
then
echo "Port not set"
exit 12
fi
users=""
/bin/cp -f "$CURRENTDIR"/sshd_config /etc/ssh/sshd_config
sed -i "s/Port 34504/Port $PORT/" /etc/ssh/sshd_config
for user in `awk -F: '$3 >= 1000 { print $1 }' /etc/passwd`
do
users+="${user} "
done
if grep "AllowUsers" /etc/ssh/sshd_config
then
sed -i "/AllowUsers/c\AllowUsers $users" /etc/ssh/sshd_config
else
sed -i "6 a \
AllowUsers $users" /etc/ssh/sshd_config
fi
/bin/chmod 644 /etc/ssh/sshd_config
/etc/init.d/ssh restart
fi
}
#################################################3333 Remove or comment out DVD/cd line from sources.list5
editSources()
{
if grep '^# *deb cdrom:\[Debian' /etc/apt/sources.list
then
echo "cd already commented out, skipping!"
else
sed -i '/deb cdrom:\[Debian GNU\/Linux/s/^/#/' /etc/apt/sources.list
fi
}
####################################################33 update system6
updateSystem()
{
/usr/bin/apt update && /usr/bin/apt upgrade -y
}
###############################################################7
############################# check if programs installed and/or install
checkPrograms()
{
if [ ! -x /usr/bin/git ] || [ ! -x /usr/bin/wget ] || [ ! -x /usr/bin/curl ] || [ ! -x /usr/bin/gcc ] || [ ! -x /usr/bin/make ]
then
echo "Some tools with which to work with data not found installing now......................"
/usr/bin/apt install -y git wget curl gcc make
fi
}
#####################################################3 update sources.list8
updateSources()
{
if grep "deb http://www.deb-multimedia.org" /etc/apt/sources.list
then
echo "Sources are setup already, skipping!"
else
sudo /bin/cp -f "$CURRENTDIR"/"$SOURCE" /etc/apt/sources.list || { echo "sudo cp failed"; exit 127; }
/bin/chmod 644 /etc/apt/sources.list
/usr/bin/wget http://www.deb-multimedia.org/pool/main/d/deb-multimedia-keyring/deb-multimedia-keyring_2016.8.1_all.deb || { echo "wget failed"; exit 127; }
/usr/bin/dpkg -i deb-multimedia-keyring_2016.8.1_all.deb
/usr/bin/wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
updateSystem || { echo "update system failed"; exit 127; }
/usr/bin/apt install -y vlc vlc-data browser-plugin-vlc mplayer youtube-dl libdvdcss2 libdvdnav4 libdvdread4 smplayer mencoder build-essential
sleep 3
updateSystem || { echo "update system failed"; exit 127; }
sleep 3
fi
}
###############################################33 SETUP PORTSENTRY ############################################################
##############################################3 ############################################################33
setup_portsentry()
{
if ! grep -q '^TCP_PORTS="1,7,9,11,15,70,79' /etc/portsentry/portsentry.conf || [[ ! -f /etc/portsentry/portsentry.conf ]]
then
/usr/bin/apt install -y portsentry logcheck
/bin/cp -f "$CURRENTDIR"/portsentry.conf /etc/portsentry/portsentry.conf || { echo "cp portsentry failed"; exit 127; }
/usr/sbin/service portsentry restart || { echo "service portsentry restart failed"; exit 127; }
fi
}
###############################################################################################################################33
#####################################################3 run methods here ###################################################3
##################################################### ###################################################
prepare_USERS "$@"
userPass "$@"
setUPiptables
setUPsshd
editSources
updateSystem
setup_portsentry
checkPrograms
updateSources
########################################################################################################### #####3##
##############################################################################################################3Methods
##########################################3 Disable login www #########
passwd -l www-data
#################################### firmware
apt install -y firmware-linux-nonfree firmware-linux
sleep 5
################ NANO SYNTAX-HIGHLIGHTING #####################3
if [ ! -d "$CURRENTDIR"/nanorc ]
then
if [ "$UID" != 0 ]
then
echo "This program should be run as root, goodbye!"
exit 127
else
echo "Doing user: $USER....please, wait\!"
/usr/bin/git clone https://$OAUTH_TOKEN:x-auth-basic@github.com/gnihtemoSgnihtemos/nanorc || { echo "git failed"; exit 127; }
cd "$CURRENTDIR"/nanorc || { echo "cd failed"; exit 127; }
/usr/bin/make install-global || { echo "make failed"; exit 127; }
/bin/cp -f "$CURRENTDIR/$NANORC" /etc/nanorc >&3 || { echo "cp failed"; exit 127; }
/bin/chown root:root /etc/nanorc || { echo "chown failed"; exit 127; }
/bin/chmod 644 /etc/nanorc || { echo "chmod failed"; exit 127; }
if [ "$?" = 0 ]
then
echo "Implementing a custom nanorc file succeeded!"
else
echo "Nano setup DID NOT SUCCEED\!"
exit 127
fi
echo "Finished setting up nano!"
fi
fi
################ LS_COLORS SETTINGS #############################
if ! grep 'eval $(dircolors -b $HOME/.dircolors)' /root/.bashrc
then
echo "Setting root bashrc file....please wait!!!!"
if /bin/cp -f "$CURRENTDIR/$BASHRCROOT" "$HOME"/.bashrc
then
echo "Root bashrc copy succeeded!"
else
echo "Root bashrc cp failed, exiting now!"
exit 127
fi
/bin/chown root:root "$HOME/.bashrc" || { echo "chown failed"; exit 127; }
/bin/chmod 644 "$HOME/.bashrc" || { echo "failed to chmod"; exit 127; }
/usr/bin/wget https://raw.github.com/trapd00r/LS_COLORS/master/LS_COLORS -O "$HOME"/.dircolors || { echo "wget failed"; exit 127; }
echo 'eval $(dircolors -b $HOME/.dircolors)' >> "$HOME"/.bashrc
fi
while read user
do
if [ "$user" = root ]
then
continue
fi
sudo -i -u "$user" user="$user" CURRENTDIR="$CURRENTDIR" BASHRC="$BASHRC" bash <<'EOF'
if grep 'eval $(dircolors -b $HOME/.dircolors)' "$HOME"/.bashrc
then
:
else
echo "Setting users=Bashrc files!"
if /bin/cp -f "$CURRENTDIR"/"$BASHRC" "$HOME/.bashrc"
then
echo "Copy for $user (bashrc) succeeded!"
sleep 3
else
echo "Couldn't cp .bashrc for user $user"
exit 127
fi
/bin/chown $user:$user "$HOME/.bashrc" || { echo "chown failed"; exit 127; }
/bin/chmod 644 "$HOME/.bashrc" || { echo "chmod failed"; exit 127; }
/usr/bin/wget https://raw.github.com/trapd00r/LS_COLORS/master/LS_COLORS -O "$HOME"/.dircolors || { echo "wget failed"; exit 127; }
echo 'eval $(dircolors -b $HOME/.dircolors)' >> "$HOME"/.bashrc
fi
EOF
done < "$CURRENTDIR"/USERS.txt
echo "Finished setting up your system!"
cd ~/ || { echo "cd ~/ failed"; exit 155; }
rm -rf /tmp/svaka || { echo "Failed to remove the install directory!!!!!!!!"; exit 155; }
I've checked the program with https://www.shellcheck.net/
Answer: It's good you're taking advices and improving your script.
There's still much work to do, so keep it up!
Never do chmod 777
I haven't seen a single valid use case of permission 777 in recent memory.
And there's no valid use case of it in this script.
Don't take permission values lightly.
Use precisely the permission bits you really need.
Don't specify absolute path of common commands
Unless you have a good specific reason to do otherwise,
don't specify the absolute path of commands such as /usr/bin/awk and /bin/chmod.
Let the shell find awk and chmod in PATH.
Using absolute paths reduces the portability and usability of scripts.
This script will only work if I have those binaries at those exact locations.
Don't litter the filesystem with log files
Because of this:
# redirect all errors to a file
exec 2>debianConfigVersion3.1ERRORS.txt
A log file will be created in the current working directory of the user calling the script.
If you call this script from many different working directories,
all of those directories will have such file.
This is not good behavior from scripts.
If you want to collect logs from the script,
it would be better to use a dedicated directory that is independent from the current working directory of the calling user.
Also, the comment is a bit misleading.
It redirects stderr, which is not necessarily errors.
In this example, the output due to the -x flag goes there,
but I wouldn't call that "errors".
Avoid unnecessary sleep
echo "Setting up server..........please wait!!!!!"
sleep 3
After the echo here, a user may assume that the script is busy working.
But it's not, it's just sleeping!
I don't see the point of this sleep.
In fact all sleep commands in this script look unnecessary and rather annoying,
from a user's point of view.
Unclear logic
The motivation of this code is not clear to me:
if grep "Port 22" /etc/ssh/sshd_config
then
echo -n "Please select/provide the port-number for ssh in iptables and sshd_config:"
read port ### when using the "-p" option then the value is stored in $REPLY
PORT=$port
fi
Why ask the user for a port number if /etc/ssh/sshd_config contains "Port 22"?
What is even the importance of /etc/ssh/sshd_config containing "Port 22"?
A line such as # Port 2222 would match, and then what?
Why should that affect the decisions made by the script?
The global PORT variable is used and modified in multiple places in the program,
and it's hard to follow what happens to it.
I have a couple of tips to clean this up:
Since we're talking specifically about SSHD port, call the variable sshd_port, not just PORT. (And avoid uppercase variable names, which are intended for system variables only, such as PATH.)
Do you really need to support non-default SSHD port? Probably not. In that case, just set sshd_port=22 at the top of the script, and do not ask the user to enter it. Nice and simple.
Just for the record, the original code would have been better written like this:
if grep -q "Port 22" /etc/ssh/sshd_config
then
read -p "Please select/provide the port-number for ssh in iptables and sshd_config:" PORT
fi
The improvements:
Adding the -q flag for grep make the search terminate immediately when a match is found, and it also suppresses unnecessary output
It's possible to read directly into the variable PORT
But as I said earlier, this is not the most important problem here.
If you simplify your logic, I think this piece of code can completely disappear.
Overengineering
I jumped into reviewing the script from top to bottom.
Now I see that was a mistake,
it would have been better to get an overall view first.
Because the biggest problem is not all the above stuff,
but that the script is overengineered:
it contains a lot of stuff that's probably unnecessary.
So my first and foremost suggestion is to start by trimming it down:
Drop features you don't really need
Simplify as much as possible
Is there a good reason to support non-default SSHD port?
In setUPsshd, why append AllowUsers after line 6? Why not simply the end of the file?
Why use absolute paths instead of simple command names?
Why do if [[ `/sbin/iptables-save | grep '^\-' | wc -l` > 0 ]] when you already know better techniques exist: if /sbin/iptables-save | grep -q '^-'
... The above are just examples. Question everything that looks complicated.
Next, review the function and variable names. For example:
the name checkIfUser doesn't describe well what it does. (The comment "Creating new users" does -> that should have been the function name.)
the name CURRENTDIR is really poor. You probably meant workdir.
the name i is very poor to represent usernames
... and so on, I suggest to review all
Next:
Avoid code duplication. Extract common logic to functions. Each function with a single purpose, and with a good name that describes that purpose.
Use consistent techniques: for example you used if grep ... in most places, but sometimes you used a different, far worse technique. Use the better technique, consistently everywhere.
Finally, there are some obvious quality issues such as sometimes using "$CURRENTDIR"/USERS.txt and other times using /tmp/svaka/USERS.txt.
Bugs
In setUPsshd this copies from "$CURRENTDIR"/sshd_config:
/bin/cp -f "$CURRENTDIR"/sshd_config /etc/ssh/sshd_config
But I don't see evidence that such file exists. CURRENTDIR is set to /tmp/svaka at the beginning of the script, and this directory is deleted at the end after every run,
so in all likelihood the referenced source file doesn't exist.
In prepare_USERS, you extract usernames into the "$CURRENTDIR"/USERS.txt,
and then append some more usernames to it.
This appending step will probably append usernames that are already there,
resulting in duplicates. | {
"domain": "codereview.stackexchange",
"id": 32343,
"tags": "beginner, bash, linux, shell, installer"
} |
Is this double "double slit experiment" involving entanglement possible? | Question: The experiment goes as follows:
Put a particle emitter (photon, electron etc.) between a pair of double slits. The emitter launches pairs of particles that are entangled in such a way that if one goes through slit A the other goes through slit 2, if one goes through slit B the other goes through slit 1.
My prediction for this experiment is that if we put a detector on slit A (or any other slit) that can detect which slit one of the particles went through, then no interference pattern will form on either side of the emitter; if we don't place a detector we should see interference patterns emerge on both sides.
Is this experiment possible?
If so, was this or an equivalent experiment ever done?
Answer: There is no reason in principle, that I can think of, for this experiment to be impossible. The entanglement could be achieved by aligning the source and the slits so that the upper slit on one side, the source and the lower slit on the other side are in a line. If the particles are produced in pairs with no total momentum then they will be emitted in opposite directions, so if one goes through the upper slit, the other must go through the lower slit.
I doubt this set up has ever been tested, but equivalent experiments have been done using entangled electrons. In these electron experiments the role of "goes through the upper/lower slit" is played by the electron's spin being up or down in some particular direction. It turns out that measurements of the electron's spin in a direction at $90^\circ$ to your chosen direction can be understood in terms of interference between the spin up and spin down states.
In terms of the result of the experiment I don't think you will observe interference in either case. An intuitive way to see that this has to be true is to imagine we set up the two slits a light year apart and the source sends a pair of pulses containing a large number of entangled photons. If I am waiting at one screen I can wait until just before the photons arrive to decide whether or not to measure which slit they pass through. If you are waiting at the other screen then if the result you observe depends on whether I measured my photons or not, then we could use this to send a message faster than light. Since we can't do that and since if I measure my photon then we know which slit yours went through, it must be that you never observe an interference pattern.
This isn't as weird as it first seems (at least once you are used to the regular double slit experiment anyway). Effectively all we have done is measure which slit the photon passes through when it is first created, by creating its entangled partner, rather than doing it when the photon actually passes through the slits. | {
"domain": "physics.stackexchange",
"id": 74543,
"tags": "quantum-mechanics, experimental-physics, quantum-entanglement, double-slit-experiment"
} |
What is the form of the wave packet in terms of momentum? | Question: The wave packet in terms of the wave number $k$ is:
\begin{equation}
\Psi(x, t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} \mathrm{d}k \ A(k) \ e^{-i(kx-\omega t)}
\tag{1}
\end{equation}
Knowing that $p = \hbar k$ and $E = \hbar \omega$ we can replace $k$ with $p$, and Eq. (1) becomes:
\begin{equation}
\Psi(x, t) = \frac{1}{\hbar \sqrt{2\pi}} \int_{-\infty}^{+\infty} \mathrm{d}p \ A\left(\frac{p}{\hbar}\right) \ e^{-i(px- Et)/\hbar} = \frac{1}{\hbar \sqrt{2\pi}} \int_{-\infty}^{+\infty} \mathrm{d}p \ \phi(p) \ e^{-i(px- Et)/\hbar}
\tag{2}
\end{equation}
However, this appears to be wrong, and the equation is found in the literature as:
\begin{equation}
\Psi(x, t) = \frac{1}{\sqrt{2 \pi \hbar}} \int_{-\infty}^{+\infty} \mathrm{d}p \ \phi(p) \ e^{-i(px- Et)/\hbar}
\tag{3}
\end{equation}
with the $\hbar$ under the square root. How does this happen? Shouldn't $\mathrm{d}p = \hbar \ \mathrm{d}k$?
Answer: In your equation 2 you implicitly make the following definition of $\phi(p)$
\begin{equation}
A\left(\frac{p}{\hbar}\right) = \phi(p)
\end{equation}
Let's think a bit more about how $\phi(p)$ should be defined.
We want $\phi(p)$ to be a properly normalized momentum space wavefunction, meaning that $|\phi(p)|^2 dp$ should be a dimensionless number, corresponding to the probability of finding the particle's momentum in an interval from $p$ to $p+dp$. Therefore, $\phi(p)$ should have dimensions of $p^{-1/2}$.
Now look at $A(k)$. From the same argument, we know that $A$ has dimensions of $k^{-1/2}$. However, then your implicit definition equating $A$ and $\phi$ above cannot be correct by dimensional analysis, because it equates two quantities with different dimensions.
Therefore, in order to relate $A$ and $\phi$, we need a factor of $\sqrt{\hbar}$, purely for dimensional reasons, leading to the correct transformation
\begin{equation}
A\left(\frac{p}{\hbar}\right) = \sqrt{\hbar} \ \phi(p)
\end{equation}
Carrying this through leads to the usual expression.
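As a quick numerical sanity check (my own sketch, not part of the original answer — it assumes a Gaussian profile for $A(k)$), one can verify that the $1/\sqrt{\hbar}$ factor is exactly what keeps $\phi(p)$ normalized when the integration variable changes from $k$ to $p = \hbar k$:

```python
import numpy as np

hbar = 1.0545718e-34  # reduced Planck constant (J*s)

# A(k): a normalized Gaussian profile, so that integral of |A(k)|^2 dk = 1
sigma_k = 1.0e9
k = np.linspace(-8 * sigma_k, 8 * sigma_k, 20001)
dk = k[1] - k[0]
A = (1.0 / (np.pi * sigma_k**2)) ** 0.25 * np.exp(-(k**2) / (2 * sigma_k**2))

# phi(p) = A(p/hbar) / sqrt(hbar), sampled on the grid p = hbar * k
p = hbar * k
dp = p[1] - p[0]
phi = A / np.sqrt(hbar)

norm_k = np.sum(np.abs(A) ** 2) * dk    # ~ 1 by construction
norm_p = np.sum(np.abs(phi) ** 2) * dp  # ~ 1 only because of the 1/sqrt(hbar)
print(norm_k, norm_p)
```

Dropping the `np.sqrt(hbar)` makes `norm_p` come out a factor of $\hbar$ too large, which is the unconventional normalization the question stumbled into.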
To state this somewhat differently, if you impose standard normalization conditions on $A$ and $\phi$ as wavefunctions in $k$ and $p$ space, respectively:
\begin{eqnarray}
\int_{-\infty}^\infty dk |A(k)|^2 &=& 1 \\
\int_{-\infty}^\infty dp |\phi(p)|^2 &=& 1
\end{eqnarray}
you will find that $A$ and $\phi$ are related by a factor of $\sqrt{\hbar}$. By defining $\phi=A$, without the $\sqrt{\hbar}$ factor, you implicitly fixed an unconventional normalization of $\phi(p)$, which then explains why your final expression has a different overall normalization than the standard one. | {
"domain": "physics.stackexchange",
"id": 89128,
"tags": "quantum-mechanics, fourier-transform"
} |
Loading information about a competition every time the page is loaded | Question: I am using KeystoneJS to manage data and my APIs (which includes Node.js, MongoDB, and Express.js). It uses Mongoose to connect with MongoDB; Keystone queries are essentially the same as Mongoose queries, just prefixed with keystone.list('listname').model instead of mongoose.model('name').
My use case is loading information about a competition when a relevant page is being visited by a user. I'm using Express.js to identify the competition we're looking for (stored on the key path of my Competition model, and identified in Express as req.params.cid (Competition ID). Other Express.js routes will use the information from the res.locals.competition path (name, date, location, etc.)
What I am currently doing is running one query to get information about the current competition every time the page is visited. This is stored as a middleware function, loadCompetition, which is called for every Express route that matches /competition/cid*.
app.all('/competition/:cid*', middleware.loadCompetition);
My code looks like this:
exports.loadCompetition = function (req, res, next) {
keystone.list('Competition').model.findOne({key: req.params.cid}).exec(function (err, competition) {
if (competition && !err) {
res.locals.competition = competition;
return next();
}
else {
if (!competition) {
req.flash("error", "No competition could be found at this page.");
return res.redirect("/");
}
else {
req.flash("error", "An unexpected error occurred. Please try again! If this error persists, email admin@ezratech.us");
return res.redirect("/");
}
}
});
};
Is this the most effective way of loading a competition like this? Is it feasible to load information about the competition once, and only reload it if something changes? Or is what I'm doing the only realistic way of loading information on a competition? I think I would need to query the competition anyway to see if the data has changed (Keystone can track the date a document was modified, on the updatedAt path), and that would make this entirely redundant. But I don't know if I am missing something. If anything else looks awry, please let me know as well.
Answer: Is this slowing down your app by a significant amount? If not, I wouldn't worry about it.
Other than that I would consider removing the custom error handling, and rather pass it on to some other more general error handler at the end of your middleware stack (just call next(err)), and maybe make the Not Found error a 404 status code. | {
"domain": "codereview.stackexchange",
"id": 25207,
"tags": "javascript, node.js, mongodb, express.js, mongoose"
} |
In what sense does the scale factor $a(t)\to 0$ lead to the big bang singularity? | Question: How do we understand that the limit $a(t)\to 0$ (where $a(t)$ is the scale factor) leads to a spacetime singularity? Is it by substituting $a(t)$ in the FRW metric and concluding that space makes no sense? But the FRW metric may not be valid right back to the earliest time. In what sense does $a(t)\to 0$ lead to a breakdown of conventional cosmology?
Answer: It's not really about the FLRW coordinates, because you can always introduce a coordinate singularity anywhere you want. Instead it's about the fact that the temperature, density, etc. would diverge as $a(t) \to 0$. So if nothing else intervened, we would have a physical singularity due to the extreme energy densities and gravitational fields. | {
"domain": "physics.stackexchange",
"id": 50799,
"tags": "general-relativity, cosmology, space-expansion, big-bang, singularities"
} |
How can we efficiently and unbiasedly decide which children to generate in the expansion phase of MCTS? | Question: When executing MCTS' expansion phase, where you create a number of child nodes, select one of the numbers, and simulate from that child, how can you efficiently and unbiasedly decide which child(ren) to generate?
One strategy is to always generate all possible children. I believe that this answer says that AlphaZero always generates all possible ($\sim 300$) children. If it were expensive to compute the children or if there were many of them, this might not be efficient.
One strategy is to generate a lazy stream of possible children. That is, generate one child and a promise to generate the rest. You could then randomly select one by flipping a coin: heads you take the first child, tails you keep going. This is clearly biased in favor of children earlier in the stream.
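To see how strong that bias is (my own illustrative sketch, not from the question): the coin-flip walk down a stream picks child $i$ with probability $2^{-(i+1)}$ (with the leftover mass going to the last child), which a quick simulation confirms:

```python
import random
from collections import Counter

def pick_from_stream(n_children, rng):
    """Flip a coin at each child: heads -> take it, tails -> move on.
    If we fall off the end of the stream, take the last child."""
    for i in range(n_children - 1):
        if rng.random() < 0.5:
            return i
    return n_children - 1

rng = random.Random(0)
counts = Counter(pick_from_stream(4, rng) for _ in range(100_000))
# Expected roughly 1/2, 1/4, 1/8, 1/8 of picks for children 0..3.
print({i: counts[i] / 100_000 for i in range(4)})
```

So the first child is picked about four times as often as the third — far from uniform.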
Another strategy is to compute how many $N$ children there are and provide a function to generate child $X < N$ (of type Nat -> State). You could then randomly select one by choosing uniformly in the range $[0, N)$. This may be harder to implement than the previous version because computing the number of children may be as hard as computing the children themselves. Alternatively, you could compute an upper-bound on the number of children and the function is partial (of type Nat -> Maybe State), but you'd be doing something like rejection sampling.
I believe that if the number of iterations of MCTS remaining, $X_t$, is larger than the number of children, $N$, then it doesn't matter what you do, because you'll find this node again the next iteration and expand one of the children. This seems to suggest that the only time it matters is when $X_t < N$ and in situations like AlphaZero, $N$ is so much smaller than $X_0$, that this basically never matters.
In cases where $X_0$ and $N$ are of similar size, then it seems like the number of iterations really needs to be changed into something like an amount of time and sometimes you spend your time doing playouts while other times you spend your time computing children.
Have I thought about this correctly?
Answer: The first thing to consider in this question is: what do we mean when we talk about "generating a child/node". Just creating a node for a tree data structure, and allocating some memory (initialised to nulls / zeros) for data like deeper children, visit counts, backpropagated scores, etc., is rarely a problem in terms of efficiency.
If you also include generating a game state to store in that node when you say "generating a node", that can be a whole lot more expensive, since it requires applying the effects of a move to the previous game state to generate the new game state (and, depending on implementation, probably also requires first copying that previous game state). But you don't have to do this generally. You can just generate nodes, and only actually put a game state in them if you later on reach them again through the MCTS Selection phase.
For example, you could say that AlphaZero does indeed generate all the nodes for all actions immediately, but they're generally "empty" nodes without game states. They do get "primed" with probabilities computed by the policy network, but that policy network doesn't require successor states inside those nodes; it's a function $\pi(s, a)$ of the current state $s$ (inside the previous node), and the action $a$ leading to the newly-generated node.
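A minimal sketch of that idea (my own illustration, not AlphaZero's actual code — the names `LazyNode` and `apply_move` are made up for this example): nodes are allocated cheaply up front, and the expensive successor state is only computed the first time Selection actually reaches the node:

```python
class LazyNode:
    def __init__(self, parent_state, action, apply_move):
        self.action = action
        self.visits = 0
        self.total_value = 0.0
        self._parent_state = parent_state
        self._apply_move = apply_move   # expensive: copies + mutates a state
        self._state = None              # not generated yet

    @property
    def state(self):
        # Materialize the game state only on the first real visit; cache it.
        if self._state is None:
            self._state = self._apply_move(self._parent_state, self.action)
        return self._state

calls = []
def apply_move(state, action):
    calls.append(action)            # track how often the expensive step runs
    return state + (action,)

# Allocating all children is cheap: no game states are generated here.
children = [LazyNode((), a, apply_move) for a in range(300)]
print(len(calls))        # 0
_ = children[42].state   # Selection reaches one child: state materialized
_ = children[42].state   # cached: no second expensive call
print(len(calls))        # 1
```

With this structure, "generate all children" only means allocating 300 small objects, not computing 300 successor states.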
But if you're really sure that, for your particular problem domain, the generation of nodes itself already is inefficient, then...
[...] This is clearly biased in favor of children earlier in the stream.
Yes, you would get a significant bias with such a stream-based approach, probably wouldn't work well.
[...] This may be harder to implement than the previous version because computing the number of children may be as hard as computing the children themselves. [...]
Again I agree with your observation, I don't think there are many problems where this would be a feasible solution.
I believe that if the number of iterations of MCTS remaining, X_t, is larger than the number of children, N, then it doesn't matter what you do, because you'll find this node again the next iteration and expand one of the children.
This would only be correct for the children of the root node. For any nodes deeper in the tree, it is possible that MCTS never reaches them again even if $X_t > N$, because it could dedicate most of it search effort to different subtrees.
I think your solution would have to involve some sort of learned function (like the policy network in AlphaZero) which can efficiently compute a recommendation for a node to generate, only using the inputs that are already available before you pick a node to generate. In AlphaZero's policy network, those inputs would be the state $s$ in your current node, and the outward actions $a$ (each of which could lead to a node to be generated). This would often actually be very far from unbiased, but I imagine a strong, learned bias would likely be desirable anyway if you're in a situation where the mere generation of nodes is a legitimate concern for performance. | {
"domain": "ai.stackexchange",
"id": 1300,
"tags": "monte-carlo-tree-search"
} |
If we copy-paste the universe, would it follow the same trajectory? | Question: If we copy-pasted the universe in a single instant, would both copies follow the same trajectory? If yes, would this mean that the trajectory of our universe (and ourselves) is set in stone? If no, what would make it differ?
Answer: We cannot tell from current experiments. It depends on the nature of quantum mechanics. Are quantum mechanical results truly random? The standard Copenhagen interpretation would say yes, but other interpretations would disagree. The problem of the right interpretation is still open. By "copy-pasting" the universe it is unclear whether you also copy all the possible outcomes, or some hidden variables that decide the results of quantum experiments.
Any small difference in outcomes at the quantum level may lead to significant differences with respect to the original universe due to chaos. | {
"domain": "physics.stackexchange",
"id": 94188,
"tags": "universe, determinism"
} |
Collate Votes by District | Question: I have a function that is taking in a list where each element is a vote in JSON format. I am then building and returning a dictionary that looks like this:
{ "District A" : {
    "Match A" : {
        "Candidate A" : {
            0 : 10,
            1 : 20
        }
    }
  }
}
Each element in the list could look something like this:
{
"district": "district a",
"content": {
"matches": [
{
"match": "King",
"content": {
"candidates": [
{
"match": "King",
"candidate": "Candidate 1",
"ranking": 0
},
{
"match": "King",
"candidate": "Candidate 2",
"ranking": 1
},
{
"match": "King",
"candidate": "Candidate 3",
"ranking": 2
},
{
"match": "King",
"candidate": "Candidate 4",
"ranking": 3
},
{
"match": "King",
"candidate": "Candidate 5",
"ranking": 4
}
]
}
},
{
"match": "Queen",
"content": {
"candidates": [
{
"match": "Queen",
"candidate": "Candidate 1",
"ranking": 2
},
{
"match": "Queen",
"candidate": "Candidate 2",
"ranking": 0
},
{
"match": "Queen",
"candidate": "Candidate 3",
"ranking": 0
},
{
"match": "Queen",
"candidate": "Candidate 4",
"ranking": 1
},
{
"match": "Queen",
"candidate": "Candidate 5",
"ranking": 0
}
]
}
}
]
}
}
Here is my code that works:
def collate_by_district(votes):
collated_votes = {}
for vote in votes:
# If the district is in collated_votes
if vote['district'] in collated_votes:
for bout in vote['content']['matches']:
# If the matches is in collated_votes
if bout['match'] in collated_votes[vote['district']]:
# Check if the candidate is in collated_votes
for candidate in bout['content']['candidates']:
# If the candidate is already in collated_votes update their ranking
if candidate['candidate'] in collated_votes[vote['district']][bout['match']]:
if candidate['ranking'] in collated_votes[vote['district']][bout['match']][candidate['candidate']]:
collated_votes[vote['district']][bout['match']][candidate['candidate']][candidate['ranking']] += 1
else:
collated_votes[vote['district']][bout['match']][candidate['candidate']][candidate['ranking']] = 1
else:
rankings = {}
if candidate['ranking'] in rankings:
rankings[candidate['ranking']] += 1
else:
rankings[candidate['ranking']] = 1
collated_votes[vote['district']][bout['match']][candidate['candidate']] = rankings
else:
match = {}
for candidate in bout['content']['candidates']:
rankings = {}
if candidate['ranking'] in rankings:
rankings[candidate['ranking']] += 1
else:
rankings[candidate['ranking']] = 1
match[candidate['candidate']] = rankings
collated_votes[vote['district']][bout['match']] = match
else:
match = {}
for bout in vote['content']['matches']:
candidates = {}
for candidate in bout['content']['candidates']:
rankings = {}
candidates[candidate['candidate']] = rankings
if candidate['ranking'] in rankings:
rankings[candidate['ranking']] += 1
else:
rankings[candidate['ranking']] = 1
match[bout['match']] = candidates
collated_votes[vote['district']] = match
return collated_votes
It works, and is fast enough, but it just seems rather unwieldy and hard to read. Is there a better way that all these nested if/then statements?
Answer: Is there a better way? Yes, absolutely!
There are two tools that ship standard with Python that you need to investigate: collections.Counter and collections.defaultdict.
For your code that looks like this:
if vote['district'] in collated_votes:
if bout['match'] in collated_votes[vote['district']]:
if candidate['candidate'] in collated_votes[vote['district']][bout['match']]:
if candidate['ranking'] in collated_votes[vote['district']][bout['match']][candidate['candidate']]:
else:
match = {}
rankings = {}
All this stuff is exactly what defaultdict is intended to handle: provide a dictionary that, when accessed with a key not currently present in the dictionary, returns a "default" value generated from a no-args factory function.
You can use this with dict as the factory function, or list or even something clever like a functools.partial of defaultdict that generates nested defaultdicts!
You can also pass int as the factory, which constructs an integer (default value = 0) as the default value.
But wait! Instead of using defaultdict(int) you could also use collections.Counter. This implements a bag that counts the number of times each element is added. (It's much like defaultdict(int) but with slightly different semantics.)
Which one you use (defaultdict(int) or Counter()) will depend on your code structure. If you can structure the items as a sequence or generator, the Counter might be the best option - it can take a constructor parameter that slurps them all up. If you have to bounce around from one collection to another because of the way your data is structured, the defaultdict approach might be best.
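For instance (a small illustrative sketch of my own, not from the original post — the candidate names mirror the question's data), if you can flatten the ballots into a stream of (candidate, ranking) pairs, Counter tallies them in one pass:

```python
from collections import Counter

# Hypothetical flattened ballots: (candidate, ranking) pairs for one match.
ballots = [
    ("Candidate 1", 0), ("Candidate 2", 1),
    ("Candidate 1", 0), ("Candidate 1", 1),
]

# Counter counts how many times each (candidate, ranking) pair occurs.
tally = Counter(ballots)
print(tally[("Candidate 1", 0)])  # 2
```

All the `if key in dict: ... += 1 else: ... = 1` branches collapse into that single constructor call.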
Something like this:
import functools
from collections import defaultdict
Candidate_factory = functools.partial(defaultdict, int)
Matches_factory = functools.partial(defaultdict, Candidate_factory)
Districts_factory = functools.partial(defaultdict, Matches_factory)
Districts = Districts_factory()
for district in ("District A",):
for match in ("Match 1",):
for candidate in ("Leroy",):
Districts[district][match][candidate] += 1
import pprint
pprint.pprint(Districts)
Which outputs:
$ python test.py
defaultdict(functools.partial(<class 'collections.defaultdict'>, functools.partial(<class 'collections.defaultdict'>, <class 'int'>)),
{'District A': defaultdict(functools.partial(<class 'collections.defaultdict'>, <class 'int'>),
{'Match 1': defaultdict(<class 'int'>,
{'Leroy': 1})})}) | {
"domain": "codereview.stackexchange",
"id": 28582,
"tags": "python, python-3.x"
} |