| anchor | positive | source |
|---|---|---|
Has any research been done on DNN Music? | Question: DNNs are typically used to classify things (of course) but can we let them go wild with sounds and then tell them if we think it sounds good or not? I'd like to think after a training class has been made (perhaps comparing the output to an existing song) we could get an NN that has a basic concept of music.
Timing would be an issue; I'm not sure how feasible this is. A strongly weighted input attached to all hidden layers perhaps? Use it as the bias?
Is this even slightly feasible?
Answer: The first thing is to define what a «good» and a «bad» sound is. This is an extremely tricky issue, since the networks need numeric inputs, and music is a whole bunch of numbers.
I know of people doing research on identifying how similar two sounds are, and on imitation, say: you hear a sound and try to make another that sounds like it, like when you hum a song. That is by no means easy. These guys are using something similar to feature extraction, with Fourier transforms, energy and such things. They feed the networks with the (selected) features and... train.
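That feature-extraction step can be sketched as follows (a toy example, not the actual pipeline of the researchers mentioned; the band count and sample rate are arbitrary choices):

```python
import numpy as np

def spectral_features(frame, n_bands=8):
    """Summarize a short audio frame as normalized energies of coarse frequency bands."""
    power = np.abs(np.fft.rfft(frame)) ** 2          # power spectrum of the frame
    bands = np.array_split(power, n_bands)           # split the bins into coarse bands
    energies = np.array([b.sum() for b in bands])
    return energies / energies.sum()                 # normalized feature vector

# A 440 Hz tone sampled at 8 kHz: the energy lands in the lowest band.
sample_rate = 8000
t = np.arange(0, 0.5, 1 / sample_rate)
features = spectral_features(np.sin(2 * np.pi * 440 * t))
```

These per-frame vectors are what would then be fed to the network, one frame at a time or stacked.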
Now, to return to your original question: *What do you present as target during training?* You can present different types of music as categories and classify (I couldn't help but think of this research with fish). Or you define categories of music you like and see if the network can classify them ;)
One basic decision here is how long a piece of sound you take. Since you need to analyse frequency, this is a key issue. Since you talked about DNNs, I was wondering if you wanted to do it online, as a stream, in which case I don't have the slightest idea where to begin, other than doing it after a little while.
Other idea: I remember a little sketch in this series about a researcher that makes use of the relations between peaks in the Fourier spectrum in order to differentiate noise from music. | {
"domain": "ai.stackexchange",
"id": 34,
"tags": "deep-neural-networks, machine-learning"
} |
How do you perforate a teflon sheet? | Question: I have a sheet of thin unperforated teflon that I need to perforate by tomorrow for a lab I work in. None of my coworkers know how to perforate the sheet. Does anyone know how I can do this using basic tools from my house?
Answer: You should contact a specialized company; you should also provide more info about the diameter and the number of holes you want. This is an excerpt from FedTech:
laser cutting is the method of choice for small perforation, as laser
cutting has a smaller, tighter cutting tolerance and thus has the
capability to create tiny, intricate perforations. However, if you
prefer, abrasive waterjet cutting can be used as well but for larger
perforation (.030" and up, respectively). If you prefer to use
non-abrasive waterjet cutting, you can create perforations up to .004"
in diameter, however this form of waterjet cutting is restricted in
cutting certain types of material. With non-abrasive waterjet cutting,
you can cut up to about 12" thick from the following: plastic, rubber,
foam, composites, silicone, foam, felt, cork, Teflon (PTFE), neoprene,
and other soft materials | {
"domain": "chemistry.stackexchange",
"id": 1395,
"tags": "materials, experimental-chemistry"
} |
Why does air remain a mixture? | Question: As we all know, air consists of many gases including oxygen and carbon dioxide. I found that carbon dioxide is heavier than O2. Does the volume difference neglect the mass difference? Is it same for all other gases in air or is there another force that keeps all of these gases together?
If I take a breath of a fresh air, will the exhaled air be heavier because of its higher CO2 content? Will it fall on the floor?
Answer: CO2 will, on average, equilibrate slightly lower than O2 in a gravitational field. But the difference in the force of gravity is very small compared to the random thermal motion of the molecules, thus the effect is effectively negligible in day to day life.
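The size of this effect can be estimated from the isothermal scale height $k_BT/mg$, the altitude over which a gas settling on its own would thin by a factor of $e$ (a rough sketch; constants are approximate):

```python
K_B = 1.381e-23    # Boltzmann constant, J/K
G = 9.81           # gravitational acceleration, m/s^2
AMU = 1.661e-27    # atomic mass unit, kg
T = 288.0          # K, roughly surface temperature

def scale_height(molar_mass_amu):
    """Isothermal scale height k_B*T / (m*g), in metres."""
    return K_B * T / (molar_mass_amu * AMU * G)

h_o2 = scale_height(32.0)    # O2: roughly 7.6 km
h_co2 = scale_height(44.0)   # CO2: roughly 5.5 km, lower because it is heavier
```

Both heights are kilometres, enormously larger than any room, which is why thermal mixing wins on everyday scales.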
In the context of the atmosphere as a whole this can be a non-negligible effect (e.g. this link), and in astrophysical contexts, this can be very important (e.g. this paper or this one). | {
"domain": "physics.stackexchange",
"id": 4382,
"tags": "thermodynamics, atmospheric-science, density, air"
} |
Can some amount of gravity be caused by virtual particles? | Question: On the scale of a galaxy, can vacuum-based pairs of virtual particles be noticed gravitationally? Even though they are short-lived, shouldn't their mass and numbers be enough to be measured/felt?
Answer: Yes, as far as anyone can tell, virtual particles in the vacuum should gravitate, and generate a huge spacetime curvature everywhere, which isn't observed. This is the famous cosmological constant problem. | {
"domain": "physics.stackexchange",
"id": 15581,
"tags": "gravity, virtual-particles"
} |
Can air inside a high temperature (1300C) Kiln cause an explosion? | Question: I saw many kiln designs that lack an opening for hot/pressurized air to come out, or any pressure valve.
But when air is heated to such a high temperature (1300C) inside a closed chamber (the kiln), won't the pressure inside increase enough to explode the kiln?
(These kilns usually operate for several hours under this high temperature)
Answer: I think in general the physics make it unlikely that the air pressure would build up faster than it leaks out; sealing a furnace well enough to matter is unlikely to happen by accident.
A bomb calorimeter or pressure vessel made specifically to seal the inside from the outside does not have a loose-fitting lid over a cast refractory edge; it is made to much finer tolerances.
Chemical reactions are not likely unless you have a significant amount of vaporisable fuel in the charge. This stays contained until at some point it lifts the lid and lets vapour out and air in, with glowing elements to ignite the mixture; a flame may result, but an explosion is still unlikely, as the volume is small and no longer contained.
In practice most kilns have a hole for temperature sensor that is not airtight and would let pressure equalise as they are heated.
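For scale, if a kiln really were airtight, Gay-Lussac's law ($P/T$ constant at fixed volume) gives the pressure the trapped air would reach; a quick sketch:

```python
# Heating sealed air at constant volume: P2 = P1 * T2 / T1 (ideal gas).
T_room = 293.15      # K (20 C)
T_kiln = 1573.15     # K (1300 C)
P_room = 1.0         # atm

P_sealed = P_room * T_kiln / T_room   # roughly 5.4 atm if there were no leaks at all
```

About 5.4 atm is the worst case, and any leak path such as a sensor hole or a loose lid relieves it long before it builds up.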
Unless a specific atmosphere (inert, reducing, oxidising, carburising) is required in a kiln there is no real need to make them air tight, and they rarely are; even then it is often achieved within a protective wrapping of the load, with supporting chemistry inside the wrapping (like charcoal, cyanides, etc.). | {
"domain": "chemistry.stackexchange",
"id": 10218,
"tags": "physical-chemistry, heat, temperature, vapor-pressure, pressure"
} |
How can promoter binding sites be determined? | Question: I have been trying to find out which sigma factor is responsible for the transcription of RNA polymerase subunits $\alpha$ (rpoA) and $\beta ^{\prime}$ (rpoC) in Bacillus subtilis. I would expect it to be the housekeeping $\sigma^A$. However I have found out from the Bacillus subtilis transcription factor database, that $\sigma^A$ binding sites have been found in the promoters of $\beta$ (rpoB) and $\delta$ (rpoE) subunits, and a whole bunch of alternative $\sigma$ factors, but it says nothing about $\alpha$ or $\beta ^{\prime}$.
So I want to know if there is a way for me to find out, without entering a wet lab!
Firstly; how do we know that something is a transcription/sigma factor binding site? What is it about the region? And how do we know which TF/SF it is a binding site for? Could the same transcription/sigma factor have multiple different binding site motifs?
Secondly; Since the genome for B. subtilis is available, would I be able to find out binding site motifs for all the sigma factors, and then determine which is/are contained within the promoter for the $\alpha$ subunit? I.e. if I know the TF/SF sequence, is there a way of finding out which promoters it can bind to?
Finally; the main purpose of this question is to find out how rpoA is transcribed, and I know that in general sigma factors are required in bacteria for specific transcription initiation, but could it be that rpoA transcription is non-specific, and that the sigma factor is just not involved in its transcription?
Please let me know if there is anything I can do to improve my question, I can provide references if it would be a good idea. Thanks!
Answer: I don't have a definitive answer, but I can perhaps offer some insight. Given the necessary function of rpoA, I would be willing to bet that SigA is the factor responsible for its transcription, so I will focus my discussion there.
Predicting promoters without experimentation can be very challenging given their immense variability. The idealized core promoter consists of a -10 region, a spacer and a -35 region. Recognition of the promoter by a sigma factor happens at the -10 and -35 regions: each sigma factor recognizes a different consensus sequence. Bacillus subtilis SigA recognizes the consensus sequences TTGACA (-35) and TATAAT (-10). The key word there is consensus, however, as it is rare that a promoter will actually contain these exact sequences. In reality, 3/6 nucleotides matching could be considered reasonable.
Between the -10 and -35 regions is a spacer with conserved length. In the absence of transcription activators, SigA requires a 17 +/- 1 nucleotide spacer. This is due to thermodynamic constraints. Very briefly, promoter melting (at -10) depends on the angle between the -10 and -35 regions. Given the helical nature of DNA, this angle is in turn dependent on the distance between the two regions. If the angle is just right, the untwisting of the DNA to align the promoter elements, and thus allow sigma binding, provides the energy to melt the promoter. If the angle is too large, the regions can never align. If the angle is too small, alignment will not provide enough energy for melting. Complicating this are non-standard core promoters and various flanking sequences that all play a role in transcription initiation. As a direct answer to one of your questions: a single sigma factor can bind to many different DNA sequences sharing some conserved regions.
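A naive version of that consensus search can be sketched as follows (illustrative only: real predictors use position weight matrices and much more context, and the demo sequence here is invented):

```python
def mismatches(a, b):
    """Count positions where two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def find_siga_promoters(seq, max_mm=3):
    """Scan for TTGACA (-35) and TATAAT (-10) separated by a 16-18 nt spacer."""
    hits = []
    for i in range(len(seq) - 5):
        if mismatches(seq[i:i + 6], "TTGACA") <= max_mm:
            for spacer in (16, 17, 18):
                j = i + 6 + spacer
                if j + 6 <= len(seq) and mismatches(seq[j:j + 6], "TATAAT") <= max_mm:
                    hits.append((i, j, spacer))
    return hits

# Invented sequence with an exact consensus promoter and a 17 nt spacer embedded.
demo = "GG" + "TTGACA" + "A" * 17 + "TATAAT" + "GG"
found = find_siga_promoters(demo, max_mm=1)   # [(2, 25, 17)]
```

With the realistic 3/6-mismatch threshold mentioned above, such a scan produces many spurious hits on genomic sequence, which is exactly why inspection alone cannot confirm a promoter.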
Fortunately, the B. subtilis genome has been sequenced. Your search for the promoter might involve looking for the -10 and -35 regions 5' to the rpoA gene. This however doesn't take into account that rpoA might be part of a polycistronic operon and thus under the control of a promoter further upstream. Suh et al. (1986) did basically this and reported:
Our DNA sequence analysis suggests further experiments to study regulation within the rpoA region. Because inhibition of polycistronic message translation by free protein S4 is one of the chief controls of alpha operon expression in E. coli (16, 23), the apparent absence of S4 from the B. subtilis alpha region suggests different regulation. Although the close coupling of the rpsM and rpsK genes in our sequence (Fig. 3) supports the notion of a translational coupling similar to E. coli, the putative rpoA promoters in the relatively large intercistronic distance following rpsK are also different (28, 29). However, the apparent absence of a rho-independent terminator in this region could indicate that significant transcription occurs from additional upstream promoters in B. subtilis as well as in E. coli (28).
This is an old paper, but I was unable to find more recent studies in my quick search. Basically they compared the B. subtilis rpoA gene to that of Escherichia coli, where it is located in an operon with ribosomal proteins rpsM, rpsK, rpsD and rplQ (under the control of a single promoter). They found both similarities and differences. They were able to find two possible SigA core promoters in the region between rpoA and rpsK (the next gene 5' to rpoA in B. subtilis). There was also a relatively large distance between the two genes, suggesting that they are not part of the same operon. Furthermore, they were able to find a ribosome binding site (Shine-Dalgarno sequence) between the hypothetical promoters and the translation start site. On the other hand, there is no transcription terminator between rpsK and rpoA, which suggested that a promoter further upstream regulated rpoA expression. After their sequence analysis, they suggested that further experimentation was required.
In conclusion, it's hard to find promoters just by looking at the sequence. Even if you do find a possible promoter, it's impossible to say whether or not it is utilized without experimentation. That said, given that the alpha subunit is necessary for cell function, that the rpoA gene appears to be transcriptionally tied to ribosomal proteins (which are also necessary for cell function) and that SigA is the predominant factor in log-phase B. subtilis, I feel that it is very likely that SigA is ultimately the factor responsible for rpoA expression.
I also thought I'd mention that there are various online bioinformatics tools that attempt to predict promoters, but I will not comment more on them because I have not used them. | {
"domain": "biology.stackexchange",
"id": 2098,
"tags": "bioinformatics, bacteriology, transcription, binding-sites"
} |
Interpretation of correlation functions that are higher than 2 point | Question: I am studying QFT from Peskin & Schroeder. There I found a physical interpretation of a 2-point correlation function. According to Peskin & Schroeder, the 2-point correlation function is nothing but a propagator between two points and I am happy with that. But the question that arises immediately is, what are the n-point correlation functions?
e.g.- does the 3-point correlation function given by, $\langle\omega|T[\phi(x)\phi(y)\phi(z)]|\omega\rangle$ imply the amplitude of propagation from $x$ to $z$ via $y$?
Is this understanding correct?
Answer: For a free field, Wick's theorem tells you that the expectation value of every higher-point correlation function can be expressed in terms of two-point correlation functions. Essentially, a $2n$-point correlation function tells you about $n$ particles propagating between the points $x_1, \dots, x_{2n}$.
For example, the 4-point function looks like $$\langle\Omega|T\left\{\phi(x_1)\phi(x_2)\phi(x_3)\phi(x_4)\right\}|\Omega\rangle = \langle\Omega|T\left\{\phi(x_1)\phi(x_2)\right\}|\Omega\rangle\langle\Omega|T\left\{\phi(x_3)\phi(x_4)\right\}|\Omega\rangle + \langle\Omega|T\left\{\phi(x_1)\phi(x_3)\right\}|\Omega\rangle\langle\Omega|T\left\{\phi(x_2)\phi(x_4)\right\}|\Omega\rangle + \langle\Omega|T\left\{\phi(x_1)\phi(x_4)\right\}|\Omega\rangle\langle\Omega|T\left\{\phi(x_2)\phi(x_3)\right\}|\Omega\rangle $$
This corresponds to one particle going from $x_1$ to $x_2$ and one going from $x_3$ to $x_4$, plus all the other possible pairings. Diagrammatically, each term is a pair of disconnected propagator lines.
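The combinatorics behind this can be checked directly: Wick's theorem produces one term per perfect pairing of the fields, $(2n-1)!!$ terms for a $2n$-point function. A small sketch:

```python
def pairings(points):
    """Enumerate all perfect pairings (Wick contractions) of a list of points."""
    if not points:
        return [[]]
    first, rest = points[0], points[1:]
    result = []
    for k, partner in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        for sub in pairings(remaining):
            result.append([(first, partner)] + sub)
    return result

four_point = pairings([1, 2, 3, 4])        # 3 terms, as in the 4-point function
six_point = pairings([1, 2, 3, 4, 5, 6])   # 15 terms
three_point = pairings([1, 2, 3])          # odd number of fields: no pairings at all
```

The empty result for an odd number of fields is the combinatorial counterpart of odd correlators vanishing.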
You will notice that your example of a three-point correlation function is zero according to this theorem. | {
"domain": "physics.stackexchange",
"id": 47665,
"tags": "quantum-field-theory, many-body, correlation-functions"
} |
ROSLaunch: [] is neither a launch file in package [] nor is [] a launch file name | Question:
Hello,
I'm a new ROS user. I've already installed the OS, but when I type "roslaunch kobuki_node minimal.launch --screen" it gives me an error.
I'm using Ubuntu 17.10 Artful Aardvark (amd64)
This is copied from my terminal:
sgzhelev@Lenovo-Legion-Y520-15IKBN:~$ roslaunch kobuki_node minimal.launch --screen
RLException: [minimal.launch] is neither a launch file in package [kobuki_node] nor is [kobuki_node] a launch file name
The traceback for the exception was written to the log file
Thank you for any advice!
-- Steve
Originally posted by sgzhelev on ROS Answers with karma: 13 on 2018-10-27
Post score: 1
Original comments
Comment by gvdhoorn on 2018-10-28:
First thing to check: have you sourced the correct setup.bash before trying to run roslaunch like this?
Additionally: did you install the kobuki packages from sources (ie: did you have to run catkin_make first) or did you use apt-get install ..?
Comment by sgzhelev on 2018-10-28:
Yes, I have sourced the bash file and I used apt-get install, not catkin_make.
Comment by gvdhoorn on 2018-10-28:
Then do you have the kobuki packages installed? They do not come with a default installation.
What is the output of dpkg -l | grep kobuki (add that to your original question, use the edit button/link)?
Comment by sgzhelev on 2018-10-28:
I will try that later because now I'm trying with Ubuntu 16.04 Xenial and ROS Kinetic.
Comment by sgzhelev on 2018-10-28:
When I type dpkg -l | grep kobuki the terminal doesn't show anything. It just goes to a new line and waits for a new command.
Answer:
I should've paid more attention:
I'm using Ubuntu 17.10 Artful Aardvark (amd64)
together with the Melodic tag and:
I used apt-get install [..]
is the cause here.
None of the Kobuki packages have been released as binaries for ROS Melodic yet.
Unless things have changed very recently, you'll need to build all of the turtlebot (2) stack from source on Melodic. That is possible, but you can/may run into some issues while doing that (see #q305453 for a recent Q&A about that).
The simple packages (such as kobuki_description) will build without issues, but as ROS Melodic uses Gazebo 9 (instead of Gazebo 7), you'll run into issues getting things like kobuki_gazebo_plugins to build.
Originally posted by gvdhoorn with karma: 86574 on 2018-10-28
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by sgzhelev on 2018-10-28:
Yesterday, I have tried to install ROS from source, but Ubuntu freezed, maybe is becouse, like you said Melodic is not finished yet. Now I'm trying with Ubuntu 16.04 Xenial and ROS Kinetic, because at school I have successfully installed it on 16.04.
Comment by gvdhoorn on 2018-10-28:\
I have tried to install ROS from source, but Ubuntu freezed, maybe is becouse, like you said Melodic is not finished yet.
No, that is most likely not the cause.
The base Melodic release is "finished", it's just that not all packages available in Kinetic are available on Melodic.
Comment by gvdhoorn on 2018-10-28:
Now I'm trying with Ubuntu 16.04 Xenial and ROS Kinetic
So do you have Melodic or Kinetic installed?
Comment by sgzhelev on 2018-10-28:
I've got two laptops. The first is with Ubuntu 17.10 and Melodic and the second is with 16.04 and Kinetic.
Comment by gvdhoorn on 2018-10-28:
Final comment: try to always use binary packages when installing ROS. Installing from source is almost never necessary or desirable.
Comment by sgzhelev on 2018-10-28:
Thanks for the answers but do you know where is located the log file?
Comment by gvdhoorn on 2018-10-28:
Which log file?
Comment by sgzhelev on 2018-10-28:
Never mind... It all works now. I just had to install node... Thanks for your help and have a nice day! | {
"domain": "robotics.stackexchange",
"id": 31972,
"tags": "roslaunch, ros-melodic"
} |
Are the particle-antiparticle pairs produced in vacuum virtual particles, and can they interact with normal particles? | Question: If it is true that due to energy fluctuations of a vacuum being able to produce a particle-antiparticle pair that shortly annihilate with each other and disappear again, is the following circumstance possible?
I was then thinking that if this was the case, there would be some inherent uncertainty in any particle's position: if the particle/antiparticle production in vacuum is random, then if I leave an electron in a specified place for a short period of time, the pair production could have caused the electron to move. This seemed to fit in really well with the concept of a wave function, and with the fact that we can never predict a particle's position with 100% accuracy; to me this seemed to offer some sort of explanation as to why this is the case.
The only knowledge I have on this topic is knowledge gained through pop-science, and so I don't really understand much of this at all, so any help understanding would be great. I talked to my friend about this, and he said something along the lines of "the pair production particles aren't 'real' particles, and they can't really interact with anything", though I'm not too sure I understand what that means.
Answer:
I talked to my friend about this, and he said something along the lines of "the pair production particles aren't 'real' particles, and they can't really interact with anything", though I'm not too sure I understand what that means.
Your friend is correct.
A virtual particle is a mathematical construct; it is a creation of the mathematical model used to fit elementary particle physics. This model has an iconal representation for the mathematical formulas that build up the functions giving the probability of interaction for elementary particles: these are the Feynman diagrams, where all the lines correspond to functions and the vertices to the strength of the interactions.
The particles in the table of elementary particles are real: their properties have been measured in the lab, and the theoretical model fitting them is the Standard Model. They are represented in a Feynman diagram by the lines with arrows. The exchanged wiggly line is a virtual photon, not a real one. A virtual particle has a varying mass, not the mass in the table, because it represents an integral over the variables. The reason it keeps the name (in this case, photon) is that it carries all the relevant quantum numbers of that name.
A virtual particle can never be measured, by construction of the integrals.
Thus in pair production from the vacuum the pair cannot become real unless there exists an interaction where energy is supplied, so that one of the particles can become real, as with the Hawking radiation effect, where the gravitational potential of the black hole supplies the energy. | {
"domain": "physics.stackexchange",
"id": 24527,
"tags": "quantum-field-theory, pair-production"
} |
Least action principle and uniform motion | Question: I'm trying to apply the principle of least action to the case of a uniform motion under no potential. Assume the object starts with initial velocity $v_0$, moving from point $A$ to point $B$. We know that the motion is a uniform one with the object reaching the point $B$ after a time $T=\frac{AB}{v_0}$. The associated action is $S_1=\int_0^T\frac{mv^2}{2}dt=\frac{mv_0}{2}AB$.
Now consider a path where the velocity is reduced suddenly from $v_0$ to $v_1<v_0$ near the initial point $A$. The associated action would be $S_2\approx\frac{mv_1}{2}AB$ which is smaller than $S_1$.
What is the problem here?
Answer: Least action (or stationary action) means least along all those paths which begin and end at the given points, and that means the given times as well as locations. If a path sets out with a smaller speed then in order to arrive at the final location at the required time, it will have to go faster at some stage. The integral of kinetic energy over that path is larger than the integral over the constant-velocity path. | {
"domain": "physics.stackexchange",
"id": 97278,
"tags": "classical-mechanics, lagrangian-formalism, boundary-conditions, variational-principle"
} |
Kolmogorov backward equations in active biological systems | Question: I am trying to understand the following equations from a paper by Wang et al. [1, p. 5 at the SI]:
Michaelis–Menten Representation of the Kinesin Cycle
We assumed that there is a strong coupling between ATP turnover rate of kinesin and kinesin stepping on an MT. Each chemical step in the chemomechanical cycle of kinesin was assumed to be irreversible, except ATP binding and releasing (Fig. 1, step i). Consider the following scheme:
$$\large\ce{0 <->[$k_\mathrm{ATP}\lbrack\ce{S}\rbrack$][$k_\mathrm{-ATP}$] 1 ->[$k_\mathrm{s}$] 2 ->[$k_\mathrm{attach}$] 3 ->[$k_\mathrm{ADP}$] 4 ->[$k_\mathrm{h}$] 5 ->[$k_\mathrm{-P}$] 6}$$
Assume that $F_i(t)$ is the probability for the system to reach state $6$ if, at time $t = 0$, the system is in state $i.$ This function is governed by the backward master equations:
$$\frac{\mathrm{d}F_0(t)}{\mathrm{d}t} = k_\mathrm{ATP}[\ce{S}]F_1(t) - k_\mathrm{ATP}[\ce{S}]F_0(t)\label{eqn:1}\tag{S1}$$
Why does equation \eqref{eqn:1} read as
$$\frac{\mathrm{d}F_0(t)}{\mathrm{d}t} = k_\mathrm{ATP}[\ce{S}]F_1(t) - k_\mathrm{ATP}[\ce{S}]F_0(t)$$
and not
$$\frac{\mathrm{d}F_0(t)}{\mathrm{d}t} = k_\mathrm{-ATP}[\ce{S}]F_1(t) - k_\mathrm{ATP}[\ce{S}]F_0(t)?$$
I would like to change/modulate $S(t)=\cos(t)$ in time and see how it changes the states $F_i(t)$
References
1 Wang, Q.; Diehl, M. R.; Jana, B.; Cheung, M. S.; Kolomeisky, A. B.; Onuchic, J. N. Molecular Origin of the Weak Susceptibility of Kinesin Velocity to Loads and Its Relation to the Collective Behavior of Kinesins. PNAS 2017, 114 (41), E8611–E8617. https://doi.org/10/gdj4h5.
Answer: UPDATED: I wrote a first answer assuming that $t>0$ which got close to what's in the paper, but not quite the same. Thanks go to Karsten Theis for pointing out that $t<0$ in these "backward equations". Here's a corrected explanation:
The scenario is that we have a system that can be in any of seven consecutive states (numbered 0 through 6). At some time $t<0$, we want to know something about the probability of a system in each state reaching state 6 by time $t=0$, which is expressed as $F_i(t)$ where $i$ indicates the state number.
We start with a system in state 6 at time $t$. Since reversal back to state 5 is not permitted and there is no state beyond state 6, any system that is in state 6 will remain in state 6 up to (and past) time $t=0$. Thus the probability $F_6(t)=1$. In the paper, they use a Dirac $\delta$ function to indicate this, because that is necessary when $F_6(t)$ is used in the other equations.
For the other five states, the equations do not give explicit expressions for the probabilities, but instead express the change in probability over time as a function of the probability in a differential equation.
Let's consider a system that at time $t$ is in state $5$. During a time interval $\Delta t$, two things can happen: Either (1) the system remains in state $5$ or (2) the system switches to state $6$.
One assumption they seem to make is that the only way the probability can change is if the system changes state. So if the system remains in state 5, its probability $F_5(t+\Delta t)$ remains equal to $F_5(t)$. If it changes to state 6, then its probability increases to $F_6(t)$. The change is thus a weighted combination of these two.
In more detail, if we consider the probability of switching from state 5 to state 6 during the interval $\Delta t$ to be $P_{56}$, we can say that our new probability is the weighted sum
$F_5(t+\Delta t)=P_{56}F_6(t)+(1-P_{56})F_5(t)$,
so our change in probability is
$\Delta F_5=P_{56}F_6(t)+(1-P_{56})F_5(t)-F_5(t)=P_{56}[F_6(t)-F_5(t)]$.
Our last step is to figure out what $P_{56}$ is, and it is given by the rate constant for the change from state $5$ to state $6$. The rate constant is the average frequency of the reaction, so the probability of the reaction occurring in a given time interval is the rate constant times the length of time. So we replace $P_{56}$ with $k_{-P}\Delta t$ and get
$\Delta F_5(t)=k_{-P}\Delta t[F_6(t)-F_5(t)]$.
Dividing by $\Delta t$, we have
$\dfrac{\Delta F_5(t)}{\Delta t}=k_{-P}[F_6(t)-F_5(t)]$.
As $\Delta t$ goes to $0$, this becomes the derivative:
$\dfrac{dF_5}{dt}=k_{-P}[F_6(t)-F_5(t)]$.
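As a numerical sanity check of this result, the single irreversible step $5 \to 6$ can be simulated directly: the fraction of trajectories that have jumped within an elapsed time $\tau$ should match $1-e^{-k_{-P}\tau}$, the solution of the differential equation above (the rate and time values here are made up):

```python
import math
import random

random.seed(0)

k_minus_p = 2.0   # hypothetical rate constant for the 5 -> 6 step
tau = 0.7         # elapsed time window (also hypothetical)
n_traj = 20000

# The waiting time of a single first-order step is exponentially distributed,
# so F5 is the fraction of trajectories whose jump happens within tau.
jumped = sum(random.expovariate(k_minus_p) <= tau for _ in range(n_traj))
mc_estimate = jumped / n_traj

# Solution of dF5/dt = k(F6 - F5) with F6 = 1 and no jumps at zero elapsed time.
analytic = 1 - math.exp(-k_minus_p * tau)
```

The two numbers agree to within Monte Carlo noise, supporting the weighted-sum derivation.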
You can repeat this for the other five states to generate the other equations in the paper. The key point is that even when back reactions are permitted (such as from state 1 to state 0), the probability function is only for the systems that began in the indicated state, so the rate constant for a system coming back to that state does not enter into the equation. An implicit assumption is that the system does not make two moves during the time interval $dt$. That is, it cannot move from state 0 to state 1 and back to state 0. | {
"domain": "chemistry.stackexchange",
"id": 12239,
"tags": "physical-chemistry, reaction-mechanism, kinetics, theoretical-chemistry, structural-biology"
} |
Renormalization condition for field strength renormalization | Question: I am studying $\phi^4$ theory and so far I understand the mass and coupling constant renormalizations. In these theories, once we expand a diagram in perturbation theory we "cancel" the divergences by demanding the value we get from the diagram equals a measured quantity such as the physical mass or physical coupling.
For field strength renormalization I understand the need for it and how it appears, but because it is not physical like mass or the coupling constant are I am less sure about the corresponding renormalization condition. My guess is that if we define our physical field $\phi$ as $\phi = Z^{-1/2}\phi_0$ where $\phi_0$ is the bare field, then we require that in the UV limit the physical field has coefficient 1. This places a condition on $Z$ so that the physical fields have no dependence on the momentum cutoff.
Is this correct or is there more to it?
Answer: At any loop level, the general form of the dressed propagator of the $\phi^4$ theory is
$$\tilde{D}_\text{resummed}(p)=\frac{i}{p^2-m_0^2(\Lambda)-\Sigma(p^2,\Lambda)}\tag{1}\label{1},$$
where $\Sigma$ is the self-energy, i.e. the sum of 1PI diagrams. Although at one-loop level the self-energy depends merely on the cutoff scale $\Lambda$ and displays no $p$ dependence, it does at higher loop levels. The effect is that not only is the pole of the propagator shifted, giving rise, as OP knows, to the renormalized mass in the on-shell renormalization scheme, but the behaviour around the pole, i.e. the residue, changes as well. In fact, if you define the renormalized mass $m_R$ by
$$p^2-m_0^2(\Lambda)-\Sigma(p^2,\Lambda)\lvert_{p^2=m_R}=0$$
The expansion of the propagator around the pole yields
$$\frac{i}{p^2-m_0^2(\Lambda)-\Sigma(p^2,\Lambda)}\approx\frac{iZ}{p^2-m_R}+\text{(finite terms)},\tag{2}\label{2}$$
where
$$Z=\left(1-\frac{d\Sigma}{dp^2}\bigg\lvert_{p^2=m_R}\right)^{-1}\tag{3}\label{3}$$ As a side note, OP may have recognized that \eqref{2} is the Källén-Lehmann spectral representation of the propagator. Rescaling the fields as OP mentions removes $Z$ from the numerator of \eqref{2}. | {
"domain": "physics.stackexchange",
"id": 100516,
"tags": "quantum-field-theory, renormalization, greens-functions, propagator, self-energy"
} |
Karatsuba multiplication in Rust | Question: This is an implementation of the Karatsuba algorithm for multiplication:
use std::cmp::max;
/// Multiplies two numbers using the Karatsuba algorithm
fn karatsuba(a: isize, b: isize) -> isize {
// Single digit multiplication: no need for Karatsuba
if a < 10 && b < 10 {
a * b
} else {
let nr_of_digits = max(get_nr_of_digits(a), get_nr_of_digits(b));
let half_nr_of_digits = nr_of_digits / 2;
let (p, q) = split_at(half_nr_of_digits, a);
let (r, s) = split_at(half_nr_of_digits, b);
let u = karatsuba(p, r);
let w = karatsuba(q, s);
let v = karatsuba(p + q, r + s);
// Since we used integer division for half_nr_of_digits,
// half_nr_of_digits * 2 is not always equal to nr_of_digits.
// For example when nr_of_digits is 9.
let raised_u = u * 10_isize.pow(half_nr_of_digits * 2);
let raised_v_w_u = (v - w - u) * 10_isize.pow(half_nr_of_digits);
// That's the product of a and b
raised_u + raised_v_w_u + w
}
}
/// Gets the number of digits in a number. For example:
/// get_nr_of_digits(12345) == 5
fn get_nr_of_digits(x: isize) -> u32 {
let mut nr_of_digits = 1;
let mut copy = x;
while copy > 9 {
copy /= 10;
nr_of_digits += 1;
}
nr_of_digits
}
/// Splits a number at a position. For example:
/// split_at(2, 1234) == (12, 34)
fn split_at(pos: u32, x: isize) -> (isize, isize) {
let power = 10_isize.pow(pos);
let high = x / power;
let low = x % power;
(high, low)
}
#[test]
fn split_at_works() {
assert_eq!(split_at(2, 1234), (12, 34));
assert_eq!(split_at(1, 67), (6, 7));
assert_eq!(split_at(2, 67), (0, 67));
assert_eq!(split_at(2, 674), (6,74));
assert_eq!(split_at(2, 67461), (674, 61));
assert_eq!(split_at(3, 674610), (674, 610));
}
#[test]
fn karatsuba_works() {
// Positive numbers
assert_eq!(karatsuba(12, 34), 12 * 34);
assert_eq!(karatsuba(3, 4), 3 * 4);
assert_eq!(karatsuba(5678, 4321), 5678 * 4321);
assert_eq!(karatsuba(678, 4321), 678 * 4321);
assert_eq!(karatsuba(67, 65432), 67 * 65432);
assert_eq!(karatsuba(671, 654), 671 * 654);
assert_eq!(karatsuba(6781001, 6542001), 6781001 * 6542001);
assert_eq!(karatsuba(671, 654), 671 * 654);
assert_eq!(karatsuba(67, 654321), 67 * 654321);
assert_eq!(karatsuba(678032, 432132012), 678032 * 432132012);
// Negative numbers
assert_eq!(karatsuba(-678, 432), -678 * 432);
assert_eq!(karatsuba(678032, -232132012), 678032 * -232132012);
assert_eq!(karatsuba(571, -654), 571 * -654);
}
#[test]
fn get_nr_of_digits_works() {
assert_eq!(get_nr_of_digits(0), 1);
assert_eq!(get_nr_of_digits(10), 2);
assert_eq!(get_nr_of_digits(12345), 5);
assert_eq!(get_nr_of_digits(87654321), 8);
}
I'd love to know how to make this faster and rustier.
Answer: I can only think of two good reasons for using base 10:
The purpose of the code is to teach, and the person (could be yourself) or people being taught are more comfortable with base 10 than base 2.
The underlying representation of the integers is in base 10.
Note that case 2 is extremely unlikely. And since you're asking about speed, I assume that case 1 doesn't apply either (although see below). Therefore you should probably be using a base which is a power of 2. The only case in which this isn't true is where profiling showed that the bottleneck was converting your final output to base 10 for printing and so you chose to write a custom big integer implementation which uses a base which is a power of 10: I have personally encountered this situation, and chose to work in base 1000000000 so that the base would fit in a 32-bit integer and the product of two digits would fit within a 64-bit integer.
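Those size claims are quick to verify (a Python sketch of my own, not part of the original answer):

```python
# Base 10**9: one "digit" is in [0, 10**9), so it fits a 32-bit integer,
# and the product of two digits fits a 64-bit integer.
base = 10**9
assert base - 1 < 2**31        # a digit fits even a signed 32-bit int
assert (base - 1)**2 < 2**63   # a digit product fits a signed 64-bit int
print("digit bits:", (base - 1).bit_length())         # 30
print("product bits:", ((base - 1)**2).bit_length())  # 60
```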
Note that working with a base which is a power of 2 means that all of the power operations can be done with bitshifts, or if you're able to access the internal data structure of a big integer then by index adjustments.
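For illustration, here is the same split done in base 2 (a Python sketch of my own, for non-negative integers only; not a drop-in replacement for the Rust code): every division and `pow` call from the base-10 version becomes a shift or a mask.

```python
def karatsuba_base2(a: int, b: int) -> int:
    """Karatsuba for non-negative integers, splitting in base 2."""
    # Base case: operands of at most one bit are multiplied directly.
    if a < 2 or b < 2:
        return a * b
    # Split at half the larger bit length; the 10**k operations from the
    # base-10 version become shifts and masks here.
    half = max(a.bit_length(), b.bit_length()) // 2
    mask = (1 << half) - 1
    p, q = a >> half, a & mask  # high and low halves of a
    r, s = b >> half, b & mask  # high and low halves of b
    u = karatsuba_base2(p, r)
    w = karatsuba_base2(q, s)
    v = karatsuba_base2(p + q, r + s)
    # Recombine: a*b = u*2**(2*half) + (v - w - u)*2**half + w
    return (u << (2 * half)) + ((v - w - u) << half) + w
```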
Note also the reference to big integers above. Working with (isize, isize) -> isize, there's no point using Karatsuba except for teaching purposes. It would be faster to use *. | {
"domain": "codereview.stackexchange",
"id": 31296,
"tags": "algorithm, rust"
} |
Why is an HIV infection considered "incurable"? | Question: My biology teacher told me that if one catches HIV, they cannot be cured, because it is nearly impossible to become completely virus-free. She said this is because HIV keeps on changing its glycoprotein coat.
Can someone please explain to me what she meant by "keeps on changing its glycoprotein coat"?
Answer: The reasons why HIV is "incurable" (a misnomer) are legion:
HIV is a retrovirus, which means it inserts its own genome into the host cell's genome. You must therefore kill each and every infected cell to rid the body of the virus.
HIV is a lentivirus, which means it has a long incubation period, so it can "lay low" before symptoms are readily detected.
HIV infects CD4+ helper T-cells, macrophages, and dendritic cells, which are responsible for mediating the host immune response. Thus, it infects specifically the cells we need to fight an infection.
HIV drastically reduces the number of CD4+ T-cells.
HIV has a number of viral proteins that counteract some of the body's antiviral mechanisms.
HIV can infect via cell-free (large number of particles, low infection rate) or cell-to-cell (low number of particles, high infection rate) routes.
HIV is wildly variable. It has a small RNA genome (about 10kb) that mutates very rapidly. Given the number of viral particles produced each day during infection — well over 1 billion — every single base is mutated every day.
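The last point is easy to sanity-check with rough numbers (both figures below are my own order-of-magnitude assumptions, not data from the answer):

```python
# Back-of-the-envelope check of the "every base mutates daily" claim.
# Both numbers below are assumed order-of-magnitude figures.
mutation_rate = 3e-5     # mutations per base per replication (typical RT error rate)
virions_per_day = 1e9    # new particles produced per day during infection

# Expected number of times any given base is mutated across one day's virions:
hits_per_site_per_day = mutation_rate * virions_per_day
print(hits_per_site_per_day)  # on the order of 10**4: every site is hit many times
```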
I'm simplifying matters a bit — you should just read the Wikipedia article for your own research — but that's a rundown of some of the reasons why. | {
"domain": "biology.stackexchange",
"id": 7014,
"tags": "genetics, immunology, pathology, hiv"
} |
Texture managing with smart pointers | Question: I have an SDL_Wrapper which is working perfectly (it is not broken)! Please suggest how I could improve performance, and how I could make my unique_ptr(s) dispose automatically.
So far, I call my class like this:
CWindow window = std::make_unique<CWindowWrap>("title", 0, 0, 140, 100, 0);
Is it the best way to do it? Will my unique_ptr dispose automatically?
Here's the SDL_Wrapper class:
#include "LogManager.h"
#include <memory>
#include <SDL.h>
//SDL_Window wrapper
class CWindowWrap
{
public:
CWindowWrap(const char* title, int xpos, int ypos, int width, int height, int flags)
: ptr_window(SDL_CreateWindow(title, xpos, ypos, width, height, flags))
{
LOG("SDL_Window Wrapper", "Constructed Window !");
}
virtual ~CWindowWrap()
{
if (ptr_window != nullptr){
SDL_DestroyWindow(ptr_window);
LOG("SDL_Window Wrapper", "Destroyed Window !");
}
else
LOG("SDL_Window Wrapper", "Window doesn't need to be destroyed !");
}
//disable copy constructor
CWindowWrap(CWindowWrap const&) = delete;
CWindowWrap& operator=(CWindowWrap const&) = delete;
//allow move
CWindowWrap(CWindowWrap&& move)
: ptr_window(nullptr)
{
using std::swap;
swap(ptr_window, move.ptr_window);
}
CWindowWrap& operator=(CWindowWrap&& move){
using std::swap;
swap(ptr_window, move.ptr_window);
}
operator SDL_Window*() { return ptr_window; } // implicit conversion from CWindowWrap to SDL_Window*
private:
SDL_Window* ptr_window = nullptr;
};
typedef std::unique_ptr<CWindowWrap> CWindow;
class CRendererWrap
{
public:
CRendererWrap(CWindowWrap window, int x, int y)
: ptr_renderer(SDL_CreateRenderer(window, x, y))
{}
~CRendererWrap()
{
if (ptr_renderer != nullptr){
SDL_DestroyRenderer(ptr_renderer);
}
}
//disable copy constructor
CRendererWrap(CRendererWrap const&) = delete;
CRendererWrap& operator=(CRendererWrap const&) = delete;
//allow move
CRendererWrap(CRendererWrap&& move)
: ptr_renderer(nullptr)
{
using std::swap;
swap(ptr_renderer, move.ptr_renderer);
}
CRendererWrap& operator=(CRendererWrap&& move){
using std::swap;
swap(ptr_renderer, move.ptr_renderer);
}
operator SDL_Renderer*() { return ptr_renderer; } // implicit conversion from CRendererWrap to SDL_Renderer*
private:
SDL_Renderer* ptr_renderer = nullptr;
};
typedef std::unique_ptr<CRendererWrap> CRenderer;
struct Window_param
{
const char* title;
const int width;
const int height;
const int xPos;
const int yPos;
const int flags;
};
Answer: I think that conceptually you're fine. There are some details that need attention.
Hide the implementation
Your creation syntax:
CWindow window = std::make_unique<CWindowWrap>("title", 0, 0, 140, 100, 0);
requires the user to know that CWindow is a unique_ptr<CWindowWrap>; this isn't ideal. I believe that it's a good idea to hide the implementation class.
You should put your CWindowWrap class in an implementation namespace:
namespace detail{
class CWindowWrap{
...
};
}
using CWindow = std::unique_ptr<detail::CWindowWrap>;
Then you should provide a creator function like this:
CWindow make_window(const char* title, int xpos, int ypos, int width, int height, Uint32 flags){
return std::make_unique<detail::CWindowWrap>(title, xpos, ypos, width, height, flags);
}
Don't use virtual if you don't need it
CWindowWrap doesn't need a virtual destructor as far as I can tell and I can't imagine why you would inherit from this. So simply remove virtual.
Wrapper classes should match types exactly
You're not matching the types properly for the constructor of CWindowWrap (flags should be Uint32; see here).
Move assignment/construction
As you have already initialized ptr_window in the in-class declaration, you do not need : ptr_window(nullptr) in your move constructor.
Your move assignment operator is missing a return statement.
That said, as you are not supposed to instantiate CWindowWrap directly, you could simply = delete the move assignment operator and constructor. They won't be used.
The same comments go for the other wrapper. | {
"domain": "codereview.stackexchange",
"id": 13272,
"tags": "c++, memory-management, sdl"
} |
Why can lipophilic molecules pass the phospholipid bilayer, in spite of its 2 hydrophilic layers? | Question: It is commonly said that hydrophobic/lipophilic/nonpolar molecules can pass the phospholipid bilayer quite easily, while hydrophilic (polar or ionic) molecules can't (when no protein aids them), because of the hydrophobic nature of the lipid.
But by the same logic, hydrophobic molecules shouldn't pass through the bilayer either, because there are 2 hydrophilic layers in the membrane, i.e.
A and A'.
How, then, can hydrophobic molecules pass through A and A'?
I guess it is due to the thinness of A and A'. Is that right?
Answer: Good question. This is my take.
It's not just the surface of the membrane that's polar. There is water (polar) on both sides of the membrane. In most animal cells there is also an unequal distribution of charges across the membrane. The environment outside of the cell is typically positive due to an excess of positive ions, especially sodium. The inside of the cell is typically negative due to an excess of negative ions such as phosphate.
This means the hydrophobic molecules aren't any more at home in the environment outside, or inside, the membrane than they are at the surface. There's no reason to suppose any more repulsion at the surface. So, just due to their random kinetic motion they will find themselves at the membrane's surface, some with the necessary kinetic energy to cross.
There's another way to view this. We shouldn't think of the membrane as allowing hydrophobic substances to enter. We should think of it as NOT allowing hydrophilic substances to enter without a proper ID check by proteins in the membrane. | {
"domain": "biology.stackexchange",
"id": 6126,
"tags": "biophysics, cell-membrane"
} |
TLC: Why is a more polar eluent more effective at moving the spots? | Question: If the spots are non-polar, I don't see how making the eluent more polar will make them travel faster. Anybody care to explain?
Answer: Most likely you are using "normal phase" TLC plates. The stationary phase is silica gel, which has free hydroxyl groups making it a polar surface. In TLC, there is a competition for the plate between the analytes and the eluent. As you make the eluent more polar, the eluent will interact more with the plate leaving the analytes to be carried with the mobile phase. | {
"domain": "chemistry.stackexchange",
"id": 1771,
"tags": "organic-chemistry, polarity, teaching-lab"
} |
Why is the time coordinate different in the metric? | Question: I have been using the metric for quite a while now and I never thought about it. Why does the time coordinate always have an opposite sign to the space one?
In other words, why does the metric have this form (in free space)?
$$
ds^2= -dt^2+d\vec{x}^2
$$
Answer: The short answer is that the universality of the speed of light requires it. A light pulse satisfies $|d\vec{x}/dt| = c = 1$ in every inertial frame, so the condition $ds^2 = -dt^2 + d\vec{x}^2 = 0$ must hold in all frames at once; only a relative sign between the time and space terms makes $ds^2 = 0$ single out exactly the light cone (with an all-plus signature, $ds^2 = 0$ would force $dt = d\vec{x} = 0$). The detailed reasoning is in the derivation of the Lorentz transformation. | {
"domain": "physics.stackexchange",
"id": 64178,
"tags": "metric-tensor, metric-space"
} |
Dynamic programming tree algorithm | Question:
We have a network of sensors organized in tree form, where the sensors
occupy the nodes. Most of the time the sensors are turned off, until
the root sensor wakes up the rest of the sensors. This process of
awakening the nodes requires a certain amount of time since, in a
unit of time, a sensor can awaken only one of its children. Obviously, the total wake-up time for all sensors depends on the order in which each sensor wakes up its children.
It is asked to design an efficient algorithm that gives a sorting of
the children of each node in the tree providing the minimum time to
wake up all the sensors.
The solution must use dynamic programming.
I have got no clue where to start; I know dynamic programming exercises are based on optimal substructure and overlapping subproblems, and in this case, at least, the subproblem structure is quite obvious given the tree.
Have you got any clue about the underlying recurrence? -- I am not asking you for the solution, since this is my homework :)
Thanks.
Answer: Suppose that the root of the tree has subtrees $T_1,\ldots,T_m$, and that you found the optimal wake-up times $t_1,\ldots,t_m$ for each of the subtrees (that's the "dynamic programming" part, though I'd just call it recursion). In which order should you wake up the subtrees? If $t_i > t_j$, should you wake $T_i$ or $T_j$ first? Use this to come up with a recursive formula for the optimal wake-up time of the tree. | {
"domain": "cs.stackexchange",
"id": 10266,
"tags": "algorithms, trees, dynamic-programming"
} |
Minimax implementation of tic tac toe | Question: I have the following working tic tac toe program. Someone said it's a convoluted mess and I'm looking for pointers on how to clean it up.
import java.util.Scanner;
import java.util.ArrayList;
import java.util.Random;
import java.util.Arrays;
public class Shortver{
private static final int boardRowDim = 3;
private static final int boardColDim = 3;
private String[][] board;
private String playerName;
private String playerMark;
private String computerMark;
private boolean humanGoes;
private boolean winner;
private boolean draw;
private int gameTargetScore;
private boolean output = false;
private boolean toSeed = false;
private ArrayList<Integer> availableMoves;
public Shortver(String name, boolean whoGoesFirst){
availableMoves = new ArrayList<Integer>();
board = new String[boardRowDim][boardColDim];
for (int i = 0; i < board.length; i++){
for(int j = 0; j < board[0].length; j++){
board[i][j] = ((Integer)(double2single(i,j))).toString();
availableMoves.add(double2single(i,j));
}
}
playerName = name;
humanGoes = whoGoesFirst;
playerMark = "X";
computerMark = "O";
gameTargetScore = 15;
if(!humanGoes){
playerMark = "O";
computerMark = "X";
gameTargetScore = - 15;
}
winner = false;
draw = false;
}
public static void main(String[] args)throws Exception{
System.out.println("\u000C");
Scanner kboard = new Scanner(System.in);
printHeader();
System.out.print(" Please enter your name ; ");
String name = kboard.next();
name = capitalize(name);
System.out.print("\n\n X's go first. " + name + ", please enter your mark ('X' or 'O')");
String mark = kboard.next().toUpperCase();
boolean whoPlaysFirst = (mark.equals("X")) ? true : false;
Shortver myGame = new Shortver(name,whoPlaysFirst);
myGame.playGame(kboard);
}
public void playGame(Scanner kboard)throws Exception{
Integer move = null;
boolean goodMove;
String kboardInput = null;
Scanner input;
int[] cell2D = new int[2];
Random random = new Random();
int nextComputerMove;
if(toSeed){
board = seedBoard();
availableMoves = seedAvailable(board);
int x = 0;
int o = 0;
for(int i = 0; i < 3;i++){
for(int j = 0;j < 3;j++){
if(board[i][j].equals("X"))x++;
else if(board[i][j].equals("O"))o++;
}
}
if((x - o) == 1) humanGoes = true;
else if((x - o) == 0) humanGoes = false;
else{
System.out.println("Fatal Error: seed bad");
System.exit(0);
}
System.out.println("humangoes = " + humanGoes + x + o);
}
while(!winner && !draw){
printHeader();
goodMove = false;
drawBoard(board);
if(!humanGoes && availableMoves.size() < 9){
System.out.println("That's a great move, I'll have to think about this");
Thread.sleep(2000);
}
if(humanGoes){
while(!goodMove){
System.out.print("\n\n Please enter a number for your move : ");
kboardInput = kboard.next();
input = new Scanner(kboardInput);
if(input.hasNextInt()){
move = input.nextInt();
if(move == 99){
System.out.println("You found the secret exit code");
Thread.sleep(2000);
printHeader();
System.out.println("bye");
System.exit(0);
}
goodMove = checkMove(move);
if(!goodMove)System.out.println(" WARNING: Incorrect input, try again");
}else{
System.out.println(" WARNING: Incorrect input, try again");
}
}
cell2D = single2Double(move);
board[cell2D[0]][cell2D[1]] = playerMark;
}else{
String[][] currentBoard = new String[boardRowDim][boardColDim];
currentBoard = copyBoard(board);
ArrayList<Integer> currentAvailableMoves= new ArrayList<Integer>();
currentAvailableMoves = copyAvailableMoves(availableMoves);
//System.out.println(System.identityHashCode(currentAvailableMoves));
int[] bestScoreMove = new int[2];
bestScoreMove = findBestMove(currentBoard,currentAvailableMoves,true,0,kboard);
move = availableMoves.get(availableMoves.indexOf(bestScoreMove[1]));
cell2D = single2Double(move);
board[cell2D[0]][cell2D[1]] = computerMark;
}
humanGoes = humanGoes ? false:true;
availableMoves = updateAvailableMoves(move,availableMoves);
if (Math.abs(score(board)) == 15) winner = true;
if (availableMoves.size() == 0) draw = true;
if(winner || draw){
printHeader();
drawBoard(board);
}
if(score(board) == gameTargetScore)System.out.println(playerName + " you are too good for me. \n" +
"Congratulations you won!!\n\n");
else if(score(board) == -gameTargetScore)System.out.println("IWONIWONIWONohboyIWONIWONIWON");
else if(draw)System.out.println("Good game. It's a draw!");
}
}
public void drawBoard(String[][] someBoard){
String mark = " ";
Integer row,col;
String type;
for( int i = 0;i < 15; i++){
System.out.print(" ");
for (int j = 0; j < 27; j++){
mark = " ";
if(i==5 || i == 10)mark = "-";
if(j==8 || j == 17)mark = "|";
row = i/5;
col = j/9;
type = someBoard[row][col];
if(type == "X"){
if( ((i%5 == 1 || i%5 == 3) &&
(j%9 == 3 || j%9 == 5)) ||
(i%5 == 2 &&
j%9 == 4))mark = "X";
}else if(type == "O"){
if( ((i%5 == 1 || i%5 == 3) &&
(j%9 == 3 || j%9 == 4 || j%9 == 5)) ||
((i%5 == 2) &&
(j%9 == 3 || j%9 == 5))) mark = "O";
}else{
if( i%5 == 2 && j%9 == 4){
mark = ((Integer)(row * 3 + col)).toString();
}
}
System.out.print(mark);
}
System.out.println();
}
System.out.println("\n\n\n");
}
public boolean checkMove(Integer move){
boolean goodMove = false;
for(Integer available : availableMoves){
if (available == move) goodMove = true;
}
return goodMove;
}
public int score(String[][] newBoard){
int row;
int newCol;
int score = 0;
for (int strategy = 0; strategy < 8; strategy++){
score = 0;
for (int col = 0; col < 3; col++){
if(strategy < 3){ //rows
row = strategy ;
newCol = col;
}else if (strategy < 6){ //cols
row = col;
newCol = strategy - 3;
}else{//diag
int diag = strategy - 6;
row = col - 2 * diag * (col - 1);
newCol = col;
}
if(newBoard[row][newCol].equals("X")){
score+=5;
}else if(newBoard[row][newCol].equals("O")){
score+=-5;
}
}
score = (Math.abs(score)== 15) ? score : 0;
if(Math.abs(score) == 15) break;
}
return score;
}
public String[][] copyBoard(String[][] originalBoard){
String[][] duplicateBoard = new String[boardRowDim][boardColDim];
for (int i = 0;i < boardRowDim; i++){
for(int j = 0; j < boardColDim; j++){
duplicateBoard[i][j] = originalBoard[i][j];
}
}
return duplicateBoard;
}
public String[][] updateBoard(Integer move, String mark, String[][]oldBoard){
String[][] currentBoard = new String[boardRowDim][boardColDim];
int[] cell2D = new int[2];
currentBoard = copyBoard(oldBoard);
cell2D = single2Double(move);
currentBoard[cell2D[0]][cell2D[1]] = mark;
return currentBoard;
}
public ArrayList<Integer> copyAvailableMoves(ArrayList<Integer> originalAvailableMoves){
ArrayList<Integer> duplicateAvailableMoves = new ArrayList<Integer>();
for(int i = 0; i < originalAvailableMoves.size();i++){
duplicateAvailableMoves.add(originalAvailableMoves.get(i));
}
return duplicateAvailableMoves;
}
public ArrayList<Integer> updateAvailableMoves(Integer move, ArrayList<Integer> oldAvailableMoves){
ArrayList<Integer> currentAvailableMoves = new ArrayList<Integer>();
currentAvailableMoves = copyAvailableMoves(oldAvailableMoves);
currentAvailableMoves.remove(move);
return currentAvailableMoves;
}
public String[][] seedBoard(){
String[][] sampleBoard ={{"0","O","X"},{"X","4","O"},{"6","7","X"}};
//String[][] sampleBoard ={{"X","O","O"},{"3","4","X"},{"6","7","8"}};
return sampleBoard;
}
public ArrayList<Integer> seedAvailable(String[][] seedBoard){
ArrayList seedMoves = new ArrayList<Integer>();
int index = -1;
for(int i = 0; i < 3;i++){
for (int j = 0; j < 3; j++){
if(!seedBoard[i][j].equals("X") && !seedBoard[i][j].equals("O")){
index = i*3 + j;
seedMoves.add(index);
}
}
}
return seedMoves;
}
public int[] findBestMove(String[][] currentBoard, ArrayList<Integer> currentAvailableMoves,boolean currentComputerMoves,int depth,Scanner kboard){
ArrayList<Integer> simulateAvailableMoves = new ArrayList<Integer>();
String[][] simulateBoard = new String[boardRowDim][boardColDim];
int[] scoreMove = new int[2]; //return array with score and associated move
int[] cell2D = new int[2]; //array holding i and j of board to place Mark (X or O)
int computerTargetScore = (computerMark.equals("X")) ? 15:-15;
int[][] scoreMoveAvailable = new int[currentAvailableMoves.size()][2];
Integer simulateMove = null; //current move inside loop
Boolean simulateComputerMoves = null;
for(int i = 0; i < currentAvailableMoves.size(); i++){
scoreMoveAvailable[i][0] = 0; //score
scoreMoveAvailable[i][1] = -1; // square 0 - 8
}
for (int i = 0; i < currentAvailableMoves.size() ;i++){
simulateAvailableMoves = copyAvailableMoves(currentAvailableMoves);
simulateBoard = copyBoard(currentBoard);
simulateComputerMoves = currentComputerMoves;
simulateMove = simulateAvailableMoves.get(i);
simulateAvailableMoves = updateAvailableMoves(simulateMove,simulateAvailableMoves);
cell2D = single2Double(simulateMove);
if(simulateComputerMoves){
simulateBoard[cell2D[0]][cell2D[1]] = computerMark;
simulateComputerMoves = false;
if(score(simulateBoard) == computerTargetScore || simulateAvailableMoves.size() == 0){
scoreMove[0] = score(simulateBoard);
scoreMove[1] = simulateMove;
}else{
depth++;
scoreMove = findBestMove(simulateBoard,simulateAvailableMoves,simulateComputerMoves,depth,kboard);
}
}else{
simulateBoard[cell2D[0]][cell2D[1]] = playerMark;
simulateComputerMoves = true;
if(score(simulateBoard) == (-computerTargetScore) || simulateAvailableMoves.size() == 0){
scoreMove[0] = score(simulateBoard);
scoreMove[1] = simulateMove;
}else{
depth++;
scoreMove = findBestMove(simulateBoard,simulateAvailableMoves,simulateComputerMoves,depth,kboard);
}
}
scoreMoveAvailable[i][0] = scoreMove[0] ;
scoreMoveAvailable[i][1] = simulateMove;
}
int[] bestScoreMove = new int[2];
bestScoreMove[0] = scoreMoveAvailable[0][0]; //set bestScoreMove to first element in arraylist
bestScoreMove[1] = scoreMoveAvailable[0][1];
if( (currentComputerMoves && computerMark.equals("X") ) || (!currentComputerMoves && computerMark.equals("O") ) ) {
for (int i = 0; i < scoreMoveAvailable.length;i++){
if(scoreMoveAvailable[i][0] > bestScoreMove[0]){
bestScoreMove[0] = scoreMoveAvailable[i][0] ;
bestScoreMove[1] = scoreMoveAvailable[i][1];
}
}
}else{
for (int i = 0; i < scoreMoveAvailable.length;i++){
if(scoreMoveAvailable[i][0] < bestScoreMove[0]){
bestScoreMove[0] = scoreMoveAvailable[i][0] ;
bestScoreMove[1] = scoreMoveAvailable[i][1];
}
}
}
return bestScoreMove;
}
/*
* just some static methods to help make things easy
*/
public static void printHeader(){
        System.out.println("\u000C Welcome to TicTacToe\n" +
" where you can match wits\n" +
" against the computer\n" +
"(the real challenge is making it a draw)\n");
}
/*
* the next 2 methods convert the index of a double array to a single array
* and the index of a single array to a double array
*/
public static int double2single(int row, int col){
int singleCell = 0;
singleCell = boardRowDim * row + col;
return singleCell;
}
public static int[] single2Double(int cell){
int[] cell2D = new int[2];
cell2D[0] = cell / boardColDim;
cell2D[1] = cell % boardColDim;
return cell2D;
}
public static String capitalize(String word){
word = word.substring(0,1).toUpperCase() + word.substring(1);
return word;
}
}
Answer: Sorry, but that someone is probably right. At the same time, thanks for being brave enough to ask how to improve here.
It's not so simple to give feedback in such a scenario, since there are so many details that go "wrong". Since you tagged this for beginner, I'll focus on things that will benefit you most as well as things you can easily adapt. You might want to apply some of these to see how your code becomes more and more readable.
Designing
When thinking about the software (don't forget to do that before coding), write down some sentences in prose English that describe the game. Let me give an example:
TicTacToe is a game played by two players. They play on a square 3x3 sized board. In an alternating manner, player 1 puts an X onto a cell and player 2 puts an O onto a cell. Only one sign is allowed per cell. The game ends when a player first has 3 contiguous signs of their own in any direction (horizontal, vertical, diagonal).
Find the subjects in these sentences. Each one is a candidate for a class. We have Game, Player, Board, Cell, Sign, Direction.
Next, find out what each of these could be doing (methods) and what data it needs to do that (members). Game might hold the rules, e.g. alternating the players and ending the game. Player could have the name of the player and the sign and perhaps a statistic of wins versus losses. Board might not do a lot, but it needs to hold the data (empty cells, full cells, size). Sign is just X or O - perhaps not enough to justify a class. Direction could hold masks for all 8 ways to get a win.
It'll be a long way to go from the current code to 4 classes. This really is a hint for your next project or a complete rewrite.
IDE
Use a real IDE. It's clearly visible that your IDE (BlueJ) did not help you write good code.
A good IDE will give you hints about
unused variables
unnecessary imports
typos
redundant initializers
simplification of boolean expressions
invalid String comparisons
unnecessary public access to methods
prefer primitive types
move assignments to declaration
join declaration and assignment
replace for loops by foreach loops
I'll not go into details with any of these, because it's usually not necessary to do a review on those, because the IDE does the review for me (or you).
You can learn a lot from the hints of the IDE alone. And it will make review easier for us.
In this review, I'll tell you a bit about IntelliJ.
Size of the class
Your code has 450 lines. Some people would say it's ok and fits the rule of 30. Others, including me, would like to see classes with about 200 lines. Assuming that this code would split up evenly with the 4 classes mentioned in the designing chapter, that's ~ 110 lines each. That would be great!
Why does size matter? If a method is very long, it does probably more than one thing. If a class is too big, it likely has too many reasons for change.
One file (which is one class in Java) is often the smallest unit a developer needs to read in order to understand something. Reading and understanding 450 lines is a lot and I'd better not be interrupted during that time.
What can easily be separated here?
A Main class which only contains the main() method. Some call it Application or Program. You could also name it TicTacToe. That main() method will wire up all other parts, so it does integration work.
How would you do that? Don't do it manually. Assuming IntelliJ as the IDE, right click the main method and choose Refactor / Move. Then enter Main as the class name and ignore the fact that it's red. The class will be created when you click Refactor.
The method drawBoard() seems to do drawing only. You could move it to a Board class.
The method capitalize() is used in main() only. It can be moved to the Main class.
Remove dead code
Applying all the IDE suggestions will reveal dead code at this point:
boolean toSeed = false;
if(toSeed){
...
}
You can get rid of 22 lines (5%) immediately.
How would you do that? Don't do it manually. Click on the condition. Press Alt+Enter to access the quick tip light bulb. Choose Remove if statement.
You'll then find that updateBoard() and seedBoard() and seedAvailable() are unused. Similarly, use Remove unused method. Again 30 lines (6%) less reading.
Also: delete all commented code without thinking.
Naming
What is Shortver? Is that in contrast to Longver?
Do you see how class names TicTacToe, Game, Player and Board tell me so much more about what the program is about in comparison to Shortver?
Example: at what time do I figure out what the code is about? In line 440, the code mentions the term "TicTacToe" for the first time. Usually people read top to bottom, so that's very late.
How would you rename that? Don't do it manually. Right click the class name Shortver, choose Refactor / Rename and give it at least a slightly better name, following the 6 steps of naming.
Too many empty lines
Use empty lines for separating things. Using empty lines you can create paragraphs. Paragraphs will help the reader understand what code belongs together and where something new starts.
Paragraphs will help you find methods to extract (example later).
Remove nonsense comments
Like
/*
* just some static methods to help make things easy
*/
Hopefully every method in your code does something useful and makes things easier.
Size of methods
You can reduce the size of methods by extracting smaller methods. Example:
if(score(board) == gameTargetScore) {
System.out.println(playerName + " you are too good for me. \n" +
"Congratulations you won!!\n\n");
} else if(score(board) == -gameTargetScore) {
System.out.println("IWONIWONIWONohboyIWONIWONIWON");
} else if(draw) {
System.out.println("Good game. It's a draw!");
}
That would make an excellent method printGameEndMessage().
How would you do that? Don't do it manually. Mark all of these lines, right click, choose Refactor / Extract / Method.
Another example:
if(humanGoes){
...
}else{
...
}
The code inside the if block would make up a method humanMove() and the code in the else block goes into computerMove().
That way, you end up with a short 30-line method playGame().
Bugs
In drawBoard(), you're doing string comparison with the == operator. IMHO this only accidentally works due to string interning. The correct way is to use .equals().
To me that was an indicator that you might be a beginner in Java and have probably worked before with a language that allows string comparisons with ==. (I asked both questions in the comments)
Magic numbers
When we find numbers in code that don't have a name, we call them "magic numbers", because they don't have an explanation.
If the number 3.14 appears in your code without the name pi, do you know whether it should be pi or just 3.14000?
One of these methods is drawBoard(). All that i and j and numbers... Which one is a column, which one is a line? But then there is row and col, argh ...!
Rename i to consoleRow, j to consoleColumn, row to boardRow, col to boardColumn.
Change 15 to 3*5. Change 27 to 3*9. This will make it more clear that we still have a 3*3 board. Change 10 to 2*5. Change 8 to 9-1. Change 17 to 2*9-1.
That way you have less different numbers and it's easier to guess their meaning.
Conclusion
After about 2 hours of working on your code, I slowly begin to understand what it does.
I reduced from 460 lines of code to 28 + 53 + 275 = 356 lines (in 3 classes).
At this point I would need a few more advanced changes, since I need to remove duplicate code. I still don't understand the 80-line method findBestMove().
So, that's pretty bad for a simple game like TicTacToe - but hey, I probably wrote worse code when I was your age. Nothing to worry about. Keep on learning. Keep on asking. Embrace feedback. Do pair programming. | {
"domain": "codereview.stackexchange",
"id": 34830,
"tags": "java, beginner, tic-tac-toe, ai"
} |
How do ripples form and why do they spread out? | Question: When I throw a rock into the water, why does only a small circular ring around the rock rise, instead of the whole water body? And why does the water fall outwards rather than inwards, or in any direction at all, instead of just going back down like a ball that falls back to its initial position after being thrown up?
Answer: The lunar craters result from the impact of rocks on a solid surface. As the body opens its way down, the displaced material has to go somewhere, and that is sideways and upwards. The boundary of the crater is stable if its slope is not so steep that fragments can roll down.
The initial effect is similar on water, but liquids cannot hold shear stresses: any slope is too steep. So, once formed, the higher portions start to roll down the initial slope, inwards and outwards. The inward part produces turbulence in the region of the impact.
The effect of the outward movement on the surroundings is like a second impact, but now on a circle around the boundary of the initial crater.
The repetition of that process produces the typical circular waves, with decreasing amplitudes. | {
"domain": "physics.stackexchange",
"id": 67813,
"tags": "waves"
} |
Measure a frequency response function | Question: I have a slide here which says the following:
For the control loop
...apply $d$, measure $u$ and $y$, calculate
$\hat{H}(f) = \frac{S_{yu}(f)}{S_{uu}(f)}$
but then typically
$E\{\hat{H}(f)\} \ne H(f)$
Can someone explain to me the meaning of the last equation? Is it correct to read this line as: "The expected value of $\hat{H}(f)$ is not equal to $H(f)$", and if yes, why is it like this?
Answer: In the context depicted in the diagram, the transfer function estimate can be shown to be biased (wrong in expectation) due to the feedback loop. The input will, due to the feedback, be correlated with the noise n that appears in the output, causing the bias.
See, e.g., "System modeling and identification" by Rolf Johansson, chapter 4, for more details.
In fact, several classical identification methods are biased when operating on closed-loop data. This includes several common subspace-based identification methods. Notably, the prediction-error method (PEM) is unbiased also for closed-loop data due to it explicitly taking causality into account (subspace id typically does not). | {
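A static toy model (not from the answer; the plant gain g, feedback gain k, and all numeric values below are hypothetical) makes the bias concrete: with plant y = g·u + n and feedback u = d − k·y, the least-squares gain estimate E[yu]/E[u²] converges to (g·σ_d² − k·σ_n²)/(σ_d² + k²·σ_n²) instead of g, precisely because the feedback makes u correlated with n.

```python
import random

random.seed(0)

g, k = 2.0, 0.5        # plant gain and feedback gain (hypothetical values)
sd, sn = 1.0, 1.0      # standard deviations of the excitation d and the noise n
N = 200_000

num = den = 0.0
for _ in range(N):
    d = random.gauss(0.0, sd)
    n = random.gauss(0.0, sn)
    u = (d - k * n) / (1 + k * g)   # closed-loop input: correlated with n
    y = g * u + n                   # measured output
    num += y * u
    den += u * u

g_hat = num / den                                            # biased estimate
g_limit = (g * sd**2 - k * sn**2) / (sd**2 + k**2 * sn**2)   # its limit, 1.2 != g
```

The same mechanism biases the spectral estimate $S_{yu}(f)/S_{uu}(f)$ frequency by frequency.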
"domain": "engineering.stackexchange",
"id": 4784,
"tags": "control-engineering, system-identification"
} |
Gravitational potential due to uniform ring | Question: So i want to find the Gravitational potential caused by a uniform ring with radius $R$ at any point in space. I know the solution of the field should not have any dependence on the azimuthial angle due to symmetry, so the potential should be of this form:
$V=V(r,\theta)$
I know that you can solve this using Laplace's equation on the symmetry axis and then expand it for all of space, and this indeed will give you a function that does not depend on $\phi$. I wanted to solve this using a different approach. Let's say the ring is made out of little masses $dm_i$, and every one of those little masses creates a gravitational field. The position vector of each tiny mass is then $\vec{r}_i=R\cos\phi_i\,\hat{i}+R\sin\phi_i\,\hat{j}$; according to this, the potential created by a tiny mass at any point with position vector $\vec{r}$ should be:
$dV=-G\frac{dm}{|\vec{r}-\vec{r}_i|}\Rightarrow V=-G\rho \int_{0}^{2\pi}\frac{d\phi_i}{|\vec{r}-\vec{r}_i|}\Rightarrow V=-G\rho \int_{0}^{2\pi}\frac{d\phi_i}{\sqrt{R^2-2rR\sin\theta \cos(\phi_i-\phi)+r^2}}$
My problem isn't that I can't solve the integral (which I can't); my problem is that, whatever its solution, at the end the potential will have a dependence on $\phi$, which shouldn't be happening. I can't understand what I am doing wrong; any tips will be appreciated.
Answer: Change the variable of integration to $\phi’_i=\phi_i-\phi$ and the dependence on $\phi$ will go away.
The new limits of integration will look different but you are still integrating over all 360 degrees of the ring so you can change them back to $0$ to $2\pi$. | {
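A quick numerical check (my own, not from the answer; units chosen so that $G\rho = 1$, and the values of $r$, $\theta$, $R$ are arbitrary) confirms that the integral indeed gives the same value for different $\phi$:

```python
import math

R = 1.0  # ring radius; units chosen so that G*rho = 1

def ring_potential(r, theta, phi, n=4096):
    """Midpoint-rule evaluation of V = -G*rho * integral of dphi_i / |r - r_i|."""
    total = 0.0
    dphi = 2 * math.pi / n
    for i in range(n):
        phi_i = (i + 0.5) * dphi
        dist = math.sqrt(R * R - 2 * r * R * math.sin(theta) * math.cos(phi_i - phi)
                         + r * r)
        total += dphi / dist
    return -total

v1 = ring_potential(2.0, 0.8, phi=0.3)
v2 = ring_potential(2.0, 0.8, phi=1.7)   # same (r, theta), different phi
```

The shift of variable moves $\phi$ into the sampling offset of a periodic integrand, so the value cannot change.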
"domain": "physics.stackexchange",
"id": 64539,
"tags": "homework-and-exercises, gravity, newtonian-gravity, symmetry, potential"
} |
Algorithm for finding the largest subgraph without a directed triangle | Question: I would like to find the largest set of vertices in a directed graph. This set should not contain a cycle with exactly three vertices. Cycles with less vertices aren't possible with the given graphs; larger cycles can be in the set.
Do you know an algorithm for that?
Answer: The problem of finding a maximum set with no induced directed triangles is NP-complete via a reduction from maximum independent set.
Let G=(V,E) be an undirected graph and k be an integer, for which we wish to know whether there is an independent set of at least k vertices. Let |V|=n. From G construct a directed graph G'=(V',A'), where V' consists of the disjoint union of V and n additional vertices. For each undirected edge in G create two directed edges between the same vertices in G'. In addition, connect each of the n additional vertices in G' by edges in both directions to the vertices in V, but do not add edges between any two of the n additional vertices.
Then, a triangle-free set of vertices in G' consists either of a subset of V only (with at most n vertices) or it contains some vertices in the set of additional vertices together with an independent set of vertices in G. Therefore, there exists a triangle-free set of size n+k in G' iff there exists an independent set of size k in G.
However, this reduction produces many 2-cycles, so it leaves open the possibility that the problem might be easier in the case that the input graph has no 2-cycles, which you say is the case for your inputs. Unfortunately this special case is still hard: a more complicated reduction based on the same idea shows that the problem remains NP-complete even for 2-cycle-free graphs. In the more complicated reduction, add mn additional vertices rather than simply n. Replace each undirected edge in G by a directed edge, oriented arbitrarily. For each directed edge uv connecting two vertices of V, make n additional vertices in V', each of them connected to u and v to form a directed triangle. Then the resulting graph has mn+k vertices in a triangle-free set iff the original graph has k vertices in an independent set. | {
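The first reduction is easy to check by brute force on a toy instance (my own example, not from the answer): take G to be the path a-b-c, so n = 3 and the maximum independent set has k = 2 vertices, and verify that the largest directed-triangle-free set in G' has exactly n + k = 5 vertices.

```python
from itertools import combinations, permutations

n, k = 3, 2                          # |V| and the max independent set size of G
undirected_edges = [(0, 1), (1, 2)]  # G is the path 0-1-2

# Build G': every undirected edge becomes a 2-cycle, and n extra vertices
# (3, 4, 5) are connected both ways to all of V but not to each other.
arcs = set()
for u, v in undirected_edges:
    arcs |= {(u, v), (v, u)}
for extra in range(n, 2 * n):
    for v in range(n):
        arcs |= {(extra, v), (v, extra)}

def triangle_free(S):
    """True iff S induces no directed 3-cycle a -> b -> c -> a."""
    return not any((a, b) in arcs and (b, c) in arcs and (c, a) in arcs
                   for trio in combinations(S, 3)
                   for a, b, c in permutations(trio))

best = max(sum(1 for v in range(2 * n) if m >> v & 1)
           for m in range(1 << (2 * n))
           if triangle_free([v for v in range(2 * n) if m >> v & 1]))
```

Here the optimum, e.g. {0, 2} plus all three extra vertices, matches n + k as the reduction predicts.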
"domain": "cstheory.stackexchange",
"id": 764,
"tags": "ds.algorithms, graph-algorithms"
} |
Meaning of time derivative of an operator | Question: Today when my professor was deriving this equation:
$$\frac{\mathrm d\langle A\rangle}{\mathrm dt}=\frac{i}{\hbar}\langle\left[H,\,A\right]\rangle+\left\langle\frac{\partial A}{\partial t}\right\rangle,
$$
he treated the corresponding operator of the observable $A$ as if it was a function and took its time derivative and it shows up in the last term of the equation as can be seen.
What I don't get is what does it mean to take the time derivative of an operator? If the operator is a multiplicative function, then I can understand it, but suppose the operator was $\mathrm d/\mathrm dx$, what does it mean to take the time derivative of a $\mathrm d/\mathrm dx$ operator?
Answer: One comment pointed out that the derivative $\frac{\partial}{\partial t} \frac{d}{dx} = 0$. In a certain sense however $ \frac{d}{dt} \frac{d}{dx} \neq 0$. I will try to explain this further.
Let's think first about kinematics. Then we have a Hilbert space $\mathscr{H}$ and some operator $A$ on it. For example the Hilbert space could be the space of square-integrable functions and $A$ the derivative, $\mathscr{H} = L^2(\mathbb{R})$ and $A = \frac{d}{dx}$.
If we want to study time evolution, we have to prescribe a one-parameter group of unitaries, $U_t$. We may now ask for the orbits of operators under this group of unitaries, that is, we might be interested in
$$ A(t) := U_t^{-1} A U_t \ . $$
We might additionally study observables which we let parametrically depend on $t$; for example we may multiply an operator with a function $f(t)$. I will ignore this in the following, but it is not hard to take into account. Take a time evolution operator
$$U_t = \exp(- i H t) \ .$$
Then we may compute the time derivative of $A(t)$:
$$ \frac{d A(t)}{dt} = \frac{d }{dt} \left(e^{ i t H} A e^{-i t H}\right) = (i H) e^{ i t H} A e^{-i t H} + e^{ i t H} A e^{-i t H} ( - i H) = i [H,A(t)] \ . $$
Let's consider the case of $A = \frac{d}{dx}$ in more detail, and let's choose
$$H = - \frac{1}{2} \frac{d^2}{d x^2} + \frac{1}{2} x^2 \ ,$$
which is of course the harmonic oscillator. Now we want to see what
$$A(t) = \left(\frac{d}{dx}\right)(t)$$
is. Note that this is no longer the derivative! The connection it has to the derivative is that
$$ \left(\frac{d}{dx}\right)(0) = \frac{d}{dx} \ .$$
The equation of motion for $A$ is
$$ \frac{d A(t)}{dt} = i [ H(t), A(t)] = \frac{i}{2} \left[ x(t)^2, \left(\frac{d}{dx}\right)(t) \right] \ . $$
Here I used that $H(t) = H$. In this formula we may now use that for two operators $B(t),C(t)$ that are obtained from operators $B,C$ by conjugating with a unitary $U_t$, it holds that
$$ [B(t),C(t)] = ([B,C])(t) \ .$$
Hence
$$ \left[ x(t)^2, \left(\frac{d}{dx}\right)(t) \right] = \left(\left[ x^2, \frac{d}{dx}\right] \right)(t) = -2 x(t) \ , $$
since $\frac{d}{dx}(x^2 f) = 2xf + x^2 f'$ gives $\left[ x^2, \frac{d}{dx}\right] = -2x$. Playing the same game with $x(t)$, we get the set of coupled equations
$$ \frac{d \left(\frac{d}{dx}\right)\!(t)}{d t} = -i x(t) \ , \\
\frac{d x(t)}{dt} = -i \left(\frac{d}{dx}\right)\!(t) \ , $$
which may be easily solved to give
$$ \left(\frac{d}{dx}\right)\!(t) = \cos(t) \frac{d}{dx} - i \sin(t) x \ , \\
x(t) = \cos(t) x - i \sin(t) \frac{d}{dx} \ . $$
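As a numerical sanity check (my own, not part of the original answer), one can verify $e^{iHt}\,\frac{d}{dx}\,e^{-iHt} = \cos t\,\frac{d}{dx} - i\sin t\, x$ directly: with the standard ladder operators $a = (x + \frac{d}{dx})/\sqrt{2}$, this Hamiltonian is $H = a^{\dagger}a + \frac12$, diagonal in the Fock basis, so conjugation by $e^{\pm iHt}$ is an elementwise phase $e^{i(m-n)t}$, which remains exact for the matrix elements retained after truncation.

```python
import cmath

N = 8  # truncated Fock-space dimension (any N >= 2 works for this check)

# Ladder operators in the Fock basis: <m|a|n> = sqrt(n) for m = n - 1.
a  = [[n**0.5 if n == m + 1 else 0.0 for n in range(N)] for m in range(N)]
ad = [[m**0.5 if m == n + 1 else 0.0 for n in range(N)] for m in range(N)]

s = 2 ** -0.5
x = [[s * (a[m][n] + ad[m][n]) for n in range(N)] for m in range(N)]
p = [[s * (a[m][n] - ad[m][n]) for n in range(N)] for m in range(N)]  # p = d/dx

t = 0.7
# H = diag(n + 1/2)  =>  (e^{iHt} A e^{-iHt})_{mn} = e^{i(m-n)t} A_{mn}.
p_t = [[cmath.exp(1j * (m - n) * t) * p[m][n] for n in range(N)] for m in range(N)]

closed_form = [[cmath.cos(t) * p[m][n] - 1j * cmath.sin(t) * x[m][n]
                for n in range(N)] for m in range(N)]

err = max(abs(p_t[m][n] - closed_form[m][n]) for m in range(N) for n in range(N))
```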
"domain": "physics.stackexchange",
"id": 55527,
"tags": "quantum-mechanics, operators, time, differentiation, observables"
} |
Equivalent viscous damper for friction damping | Question: I have a friction damping system which is exited by a harmonic force FE (depicted on the left side).
Is there a way to convert the friction damper to a linear or nonlinear viscous damper, such that the damping at a given excitation frequency is equal?
I am only considering sliding friction.
A reasonable approximation would be sufficient as well.
Any papers or articles on the topic would also be highly appreciated.
Answer: In general this is impossible, because for a given value of $F_e$, the friction damper dissipates a fixed amount of energy per cycle of vibration independent of the vibration frequency.
This is (nonlinear) hysteretic damping, not (nonlinear) viscous damping.
The only way to approximate this with a viscous damper would be to make $C$ a function of both the $F_e$ and the frequency $\omega$, which won't produce a useful equation of motion except in the special case where the machine only operates at one fixed frequency $\omega$.
Aside from that issue, a general way to make the approximation is to model one cycle of the stick-slip motion of the friction damper and find the energy dissipated during the cycle. Then choose $C$ to dissipate the same amount of energy.
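For the simplest case of harmonic motion x(t) = X·sin(ωt) against a constant sliding-friction force F_f, this energy matching can be done in closed form: friction dissipates 4·F_f·X per cycle while a viscous damper dissipates π·C·ω·X², giving C_eq = 4F_f/(πωX), which indeed depends on both amplitude and frequency. A short numerical sketch (all parameter values hypothetical):

```python
import math

F_f, X, w = 5.0, 0.01, 30.0   # friction force [N], amplitude [m], frequency [rad/s]

# Energy dissipated by Coulomb friction over one cycle: integral of F_f * |v| dt.
n = 100_000
dt = (2 * math.pi / w) / n
E_friction = sum(F_f * abs(X * w * math.cos(w * (i + 0.5) * dt)) * dt
                 for i in range(n))

C_eq = 4 * F_f / (math.pi * w * X)     # equivalent viscous coefficient
E_viscous = math.pi * C_eq * w * X**2  # energy this viscous damper dissipates per cycle
```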
For a simple slip-stick damper you can do this from first principles, though the details are messy, and you need the complete equation of motion of the system - you haven't specified how the mass and/or stiffness are connected to the damper.
A more general approach is to use the so-called Harmonic Balance Method to produce a numerical approximation. There are many variations on the basic idea (and many research papers describing them!) but one implementation is the NLVib function in Matlab. | {
"domain": "engineering.stackexchange",
"id": 4332,
"tags": "mechanical-engineering, friction"
} |
Another Inclined plane question | Question:
I did the FBD and found too many variables which are not eliminating... Moreover, I believe this question is based on kinetic and static friction, but $\mu$ here is ambiguously defined... How do I get the integral value?
Answer: For pushing it up, we have to overcome friction (acting downwards) as well as $mg\sin\theta$. So, $$3N=f+mg\sin\theta$$
Now the block is just slipping, so friction is acting upwards, and so is the externally applied force. So,
$$N+f=mg\sin\theta$$
Eliminate $N$ and use $f=\mu mg\cos\theta$.
Solve for $\mu$ and you get your answer.
"domain": "physics.stackexchange",
"id": 7771,
"tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics, forces"
} |
How to construct NMR spectra from chemical shift tensors? | Question: If I know the chemical shift tensor (from ab initio methods), $\sigma$, for each atom in my structure, is it possible to construct an NMR spectra for my system? If yes, how and what mathematical implementations do I need?
Answer: Yes, it is possible; it is done often. The solution depends on the dynamics and orientation(s) of the spins in the ensemble (as will be shown below). I will assume you want to calculate the spectrum in frequency space.
The chemical-shift contribution to the resonance frequency, in the laboratory frame (LF), is
$$
\omega_{cs} = \sigma_{zz}^{LF}(-\gamma \vec{B}_{0})
$$
We can hide a few variables and rewrite this in a much more useful way as:
$$
\omega_{CS} = (\vec{b}^t_0 \overset {\leftrightarrow}{\omega}\vec{b}_0)^{LF} = (\vec{b}^t_0 \overset {\leftrightarrow}{\omega}\vec{b}_0)^{PAS} =(\vec{b}^t_0 \overset {\leftrightarrow}{\omega}\vec{b}_0)^{RF}
$$
where,
$$
\vec{b_0} = \frac{\vec{B_0}}{B_0}.
$$
The above allows us to calculate the chemical shift in many reference frames. The frame where $\omega$ is diagonal (the PAS) is often convenient.
$$
\omega_{CS} = \omega_{xx}^{PAS}(\vec{b}^{PAS}_{0,x})^2 + \omega_{yy}^{PAS}(\vec{b}^{PAS}_{0,y})^2 + \omega_{zz}^{PAS}(\vec{b}^{PAS}_{0,z})^2
$$
We can define $\theta$ as the angle between $Z^{PAS}$ and $\vec{b}_0$, and $\phi$ as the angle between $X^{PAS}$ and the projection of $\vec{b}_0$ onto the $XY^{PAS}$ plane. This lets us write:
$$
\omega_{CS} = \omega_{xx}^{PAS} \cos^2\phi \sin^2\theta + \omega_{yy}\sin^2\phi \sin^2\theta+ \omega_{zz}^{PAS}\cos^2\theta
$$
The above is the main equation you will need to calculate a chemical shift spectrum in frequency space. Numerically you will sum over all the angles in your ensemble. If your sample is a powder, you could for example use orientations sampled uniformly on a sphere. Typically people then apply a convolution with a Lorentzian function (to account for relaxation and to smooth the numerical artifacts). If your sample is a solution and the molecular reorientation time is fast compared to the inverse of the anisotropy, everything but the isotropic component will be averaged away. To put this in context (and to get to some possibly familiar equations) it will be useful to introduce some convention. In the Haeberlen convention:
$$|\delta_{zz} - \delta_{iso}| \ge |\delta_{xx} - \delta_{iso}| \ge |\delta_{yy} - \delta_{iso}|$$
$$\delta_{iso} = \frac{1}{3}(\delta_{xx} + \delta_{yy}+ \delta_{zz})$$
$$\delta = \delta_{zz} - \delta_{iso}$$
$$\eta = (\delta_{yy}-\delta_{xx})/\delta$$
where $\delta$ is called the anisotropy and $\eta$ is called the asymmetry.
Using this convention and some geometry you can get to:
$$
\omega _{CS} = \frac{\delta}{2} (3\cos ^2\theta -1 + \eta \sin^2 \theta \cos 2\phi) + \delta_{iso}
$$
Also, ab initio chemical shift calculations are normally reported as shielding values; you will probably want to convert them to shift values using a suitable reference.
Reference:
Multidimensional Solid-State NMR and Polymers, by Klaus Schmidt-Rohr and Hans Wolfgang Spiess, chapter 2 | {
"domain": "chemistry.stackexchange",
"id": 1339,
"tags": "nmr-spectroscopy"
} |
What are the coordinates in robot_localization | Question:
Hi Tom/all:
As suggested in the robot_localization documentation and REP 103, the IMU data is required to adhere to a frame like "x forward, y left, z up", and odom data should adhere to a frame like "x east, y north, z up" (i.e., ENU coordinates).
Is that so? Should the relevant input data come in the coordinates mentioned above, or do we need to broadcast transforms from imuMsg->frame_id to base_link and from odomMsg->frame_id to odom?
By the way, what coordinate frame should GPS data adhere to?
I have no idea whether I'm right. I would appreciate it if someone could enlighten me.
Originally posted by DaDaLee on ROS Answers with karma: 113 on 2017-06-15
Post score: 0
Answer:
All the data is transformed into its target frame before being fused. Pose data will get transformed into the frame dictated by the world_frame parameter, and twist data will get transformed into the frame dictated by the base_link_frame parameter.
GPS data should adhere to the standards in the sensor_msgs/NavSatFix message.
Originally posted by Tom Moore with karma: 13689 on 2017-09-22
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 28127,
"tags": "ros, navigation, robot-localization, coordinates"
} |
Understanding conclusions that functional regions are under negative selection? | Question: I am reading in notes for a comparative annotation lecture that :
all DNA is subject to mutations
most functional regions are under negative selection (ie., mutations are often deleterious)
The conclusion was:
that pieces of DNA with specific functions (especially genes) tend to be conserved against mutation more strongly than a DNA region with no specific function.
So if I understand it properly: since all DNA can accumulate mutations, the regions of genes that provide some fitness benefit are subject to negative selection against random mutations and are thereby preserved (conserved?).
Also, the author placed "(especially genes)" in parentheses; are there therefore non-coding regions that can provide benefits? Is he referring to "selfish DNA" like transposable elements, which can have regulatory functions (or so I have heard)?
Answer: Let's assume for simplicity that DNA is globally subjected to the same mutation rate (which is probably not a fully correct assumption).
Now take a DNA region which is functional (what you meant by "giving a fitness benefit"). Mutations in this region will occur as anywhere else in the genome. Some mutations will be deleterious and reduce the fitness of (or kill) the cell; therefore, in the long run, cells that do not carry those deleterious mutations will propagate. This means that if you now look at the genome of the cells that propagated, certain genomic regions will contain fewer mutations than others because of the negative selection. They were "protected" against deleterious mutations.
Similarly, in multicellular organisms any mutation that reduces the fitness of the germline cells will be negatively selected (again, functional regions will be more conserved).
The author was likely referring to regulatory regions. These are regions that do not encode proteins but are very important for gene regulations. Regulatory regions include enhancers, silencers, insulators and locus control regions (Maston GA, 2006). These regions will also be more conserved as deleterious mutations in regulatory elements could reduce the fitness of the cell/germline. | {
"domain": "biology.stackexchange",
"id": 3681,
"tags": "genetics, genomics, mutations"
} |
Impulse response rest conditions of LTI systems | Question: Why are the impulse response initial-rest conditions taken as $h(0)=0$ and $h'(0)=1$ if the order of the differential equation is $2$?
Answer: Consider the second order LCCDE which defines the input-output relationship of a causal & LTI system, assuming $a_0=1$:
$$a_0 y^{''}(t) + a_1 y^{'}(t) + a_2 y(t) = x(t). \tag{1}$$
When the input is a unit impulse, $x(t) = \delta(t)$, the solution of the DE is $y(t) = h(t)$, which is called the impulse response of the LTI system. The general solution of Eq. 1 splits the output into a homogeneous part, $y_h(t)$, and a particular part, $y_p(t)$, whose superposition then yields $$h(t) = y_h(t) + y_p(t). \tag{2}$$
Since the particular input is an impulse, the particular solution $y_p(t)$ is identically zero for $t > 0$. Indeed, if $y_p(t)$ included an impulse, then derivatives of $\delta(t)$ would appear on the LHS of the DE, which could not be balanced by the RHS, contradicting the assumption. And this will always be the case whenever the order of the LHS is higher than that of the RHS. Therefore $$h(t) = y_h(t). \tag{3}$$
For this second order DE, one needs two auxiliary (initial) conditions, derived based on the initial-rest property of LTI systems, yielding:
$$y_h(0) = 0 \tag{4}$$ and $$y_h^{'}(0) = 1/a_0 = 1. \tag{5}$$
Eq. 4 is a direct consequence of the initial-rest property, and Eq. 5 can be seen by integrating both sides of the DE from $t=0^{-}$ to $t=0^{+}$, with input $x(t) = \delta(t)$, and recognising that the output, $y_h(t)$, for this specific DE, cannot contain any impulses. Hence,
$$\int_{0^{-}}^{0^{+}} \left( a_0 y_h^{''}(t) + a_1 y_h^{'}(t) + a_2 y_h(t) \right)dt = \int_{0^{-}}^{0^{+}} \delta(t)\,dt \tag{6}$$
$$a_0 \int_{0^{-}}^{0^{+}} y_h^{''}(t) dt + a_1 \int_{0^{-}}^{0^{+}} y_h^{'}(t)dt + a_2 \int_{0^{-}}^{0^{+}} y_h(t)dt = 1 \tag{7}
$$
The second and third integrals in Eq. 7 vanish since, as explained in the previous paragraphs, their integrands do not include any impulses. But $y_h^{''}(t)$ includes an impulse, and the first integral is therefore nonzero:
$$a_0 \int_{0^{-}}^{0^{+}} y_h^{''}(t) dt = a_0 \left( y_h^{'}(0^{+}) - y_h^{'}(0^{-}) \right) = 1. \tag{8}
$$
It follows again by initial rest that $y_h^{'}(0^-) = 0$, and thus we have:
$$y_h^{'}(0^{+}) = \frac{1}{a_0}. \tag{9}$$
Since $y_h(t) = h(t)$, and since the impulses are gone, we can now replace $0^{+}$ with $0$, and then it follows that:
$$ \boxed{ h'(0) = \frac{1}{a_0} = 1 } \tag{10} $$
yielding the required initial condition for solving the LCCDE to obtain the impulse response $h(t)$ of the associated LTI system.
Similar argumentation follows for systems of higher orders. | {
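As a concrete check (my own example; the coefficients $a_1 = 3$, $a_2 = 2$ are hypothetical), integrating the homogeneous DE $y'' + 3y' + 2y = 0$ from exactly these initial conditions, $h(0) = 0$ and $h'(0) = 1$, reproduces the known impulse response $h(t) = e^{-t} - e^{-2t}$ of that system:

```python
import math

def rk4_impulse_response(t_end, steps=10_000):
    """Integrate y'' = -3 y' - 2 y from h(0) = 0, h'(0) = 1/a0 = 1 with RK4."""
    h, hp = 0.0, 1.0        # the initial-rest conditions derived above
    dt = t_end / steps
    f = lambda y, yp: (yp, -3.0 * yp - 2.0 * y)
    for _ in range(steps):
        k1 = f(h, hp)
        k2 = f(h + dt / 2 * k1[0], hp + dt / 2 * k1[1])
        k3 = f(h + dt / 2 * k2[0], hp + dt / 2 * k2[1])
        k4 = f(h + dt * k3[0], hp + dt * k3[1])
        h += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        hp += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return h

numeric = rk4_impulse_response(1.0)
analytic = math.exp(-1.0) - math.exp(-2.0)   # h(1) for this system
```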
"domain": "dsp.stackexchange",
"id": 5789,
"tags": "continuous-signals"
} |
Printing Brown numbers | Question: I have written a program to print Brown numbers from Brocard's problem.
I want to improve the code.
Brocard's problem is a problem in mathematics that asks to find
integer values of n and m for which
n ! + 1 = m^2
where n! is the factorial.
Pairs of the numbers (n, m) that solve Brocard's problem are called
Brown numbers. There are only three known pairs of Brown numbers:
(4,5), (5,11), and (7,71).
#include <iostream>
unsigned int factorial(unsigned int num)
{
unsigned int result = 1;
for(unsigned int i = 1; i <= num; i++)
{
result = result*i;
}
return result;
}
void print_brown_numbers(int limit)
{
for(unsigned int i = 2; i <= limit; i++)
{
for(unsigned int j = 1; j < i; j++)
{
if((i * i) == (factorial(j) + 1))
{
std::cout << i << " " << j << " are brown numbers\n";
}
}
}
}
int main()
{
unsigned int max_limit;
std::cout << "Enter the maximum limit for you want to test \n";
std::cin >> max_limit;
print_brown_numbers(max_limit);
}
Answer: This is a pretty good brute-force way to find these numbers. It will obviously be slow because it's brute force, but it's easy to read and understand, which is always a plus! Here are some ways you could improve it.
Don't Try All Combinations
Once you've calculated a factorial, it's pretty easy to add 1 to it and determine if it's a perfect square. I would write a function called is_perfect_square() that looks like this:
#include <cmath> // for std::sqrt and std::round
bool is_perfect_square(const int x)
{
double square = static_cast<double>(x);
double root = std::sqrt(square);
return root == std::round(root);
}
Now you can simply do this:
void print_brown_numbers(int limit)
{
for (unsigned int i = 2; i <= limit; ++i)
{
unsigned int fact = factorial(i);
if (is_perfect_square (fact + 1))
{
std::cout << sqrt(fact + 1) << " " << i << " are brown numbers\n";
}
}
}
Note that in my version, the limit is the limit of the factorial, not the square, so it works slightly differently.
Error Handling
Most compilers will use 32-bits for an int or an unsigned int these days. Factorials can get very large very quickly. 12! is the largest factorial a 32-bit int or unsigned int can hold. You should check your inputs to make sure you won't get an overflow and let the user enter a different one if it would.
Types
You're mixing signed and unsigned types. While there's nothing wrong with your math, you're leaving 1 bit (so half the range - 2 billion values) on the table for the square. It will never be negative, so you should just make it unsigned to begin with. (Or make the factorial signed. Either way is fine given the maximum values.) | {
"domain": "codereview.stackexchange",
"id": 29179,
"tags": "c++"
} |
Tension in wire rotating with a drum | Question: A wire of length $L$ and mass $M$ is fastened around a circular drum and the drum is set into rotation about its centre with constant angular velocity $\omega$. I wanted to find the tension in the string. (Question taken up from Kleppner and Kolenkow Mechanics).
Here is a solution to this question, with a diagram of the $\triangle \theta$ portion of the curve. One thing that baffles me here is why the normal reaction on the wire has not been taken into account.
Because then the equations would become
$$T \triangle \theta - \triangle N= \triangle m r {\omega}^ 2$$
Answer: Check the conditions of the problem. Probably it is assumed that the wire rotates so fast that it expands and leaves contact with the drum. The drum is not really needed. K&K only want to find the tension in the hoop that is due to its rotation, not the component due to the normal reaction from the drum.
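For completeness: keeping only the tension term, with $\Delta m = \frac{M}{2\pi}\Delta\theta$ and hoop radius $r = \frac{L}{2\pi}$, the force balance on the element gives the textbook result:

```latex
T\,\Delta\theta = \Delta m\, r\,\omega^{2}
  = \frac{M}{2\pi}\,\Delta\theta\,\frac{L}{2\pi}\,\omega^{2}
\quad\Longrightarrow\quad
T = \frac{M L\,\omega^{2}}{4\pi^{2}} \ .
```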
"domain": "physics.stackexchange",
"id": 34789,
"tags": "homework-and-exercises, newtonian-mechanics, forces, free-body-diagram"
} |
netpbm fails to install when installing with Homebrew on OS X | Question:
When building packages that depend on netpbm on OS X using Homebrew, netpbm fails to install due to a libpng version mismatch. This only seems to occur on Xcode 4.2.1 but could affect others.
Here is the error output:
https://gist.github.com/1456449
I have opened a ticket here:
https://github.com/mxcl/homebrew/issues/9079
Originally posted by William on ROS Answers with karma: 17335 on 2011-12-10
Post score: 1
Answer:
There is a work around until it is fixed upstream within Homebrew:
brew install netpbm --HEAD
And then resume using normal ROS commands.
Originally posted by William with karma: 17335 on 2011-12-10
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 7601,
"tags": "ros, macos, homebrew, macos-lion, osx"
} |
Userdata change type | Question:
Hi,
I am trying to do a small smach tree and pass a class between the states with userdata. The code looks like this:
class Auto_move_info(object):
__pallet_position = None
__pallet_place_position = None
def __init__(self, pallet_position=None, pallet_place_position=None):
self.__pallet_position = pallet_position
self.__pallet_place_position = pallet_place_position
def set_pallet_position(self, pallet_position):
self.__pallet_position = pallet_position
def get_pallet_position(self):
return self.__pallet_position
def set_pallet_place_position(self, pallet_place_position):
self.__pallet_place_position = pallet_place_position
def get_pallet_place_position(self):
return self.__pallet_place_position
class Get_pallet_position(smach.State):
def __init__(self):
smach.State.__init__(self, outcomes=['pallet_position_found'],
output_keys=['get_pallet_position_op'])
self.move_pallet_A_to_X = Auto_move_info('A','X')
def execute(self, userdata):
time.sleep(2)
userdata.get_pallet_position_op = self.move_pallet_A_to_X
print ("1")
print(self.move_pallet_A_to_X)
print ("2")
rospy.loginfo('Get_pallet_position')
return 'pallet_position_found'
class Calculate_pallet_position(smach.State):
def __init__(self):
smach.State.__init__(self, outcomes=['calculation_pallet_done'],
input_keys=['calculate_pallet_position_ip'],
output_keys=['calculate_pallet_position_op'])
def execute(self, userdata):
time.sleep(2)
print ("3")
print(userdata.calculate_pallet_position_ip)
print ("4")
rospy.loginfo('Executing state Calculate_pallet_position')
return 'calculation_pallet_done'
The result becomes:
[INFO] [WallTime: 1476091443.061490] State machine starting in initial state 'Get_pallet_position' with userdata:
[]
1
<class_definition.Auto_move_info object at 0xb59ced2c>
2
[INFO] [WallTime: 1476091445.064946] Get_pallet_position
[INFO] [WallTime: 1476091445.065734] State machine transitioning 'Get_pallet_position':'pallet_position_found'-->'Calculate_pallet_position'
3
<smach.user_data.Const object at 0xb6158b4c>
4
[INFO] [WallTime: 1476091447.070927] Executing state Calculate_pallet_position
Why does the object move_pallet_A_to_X change from an Auto_move_info object to a Const?
Originally posted by pv on ROS Answers with karma: 16 on 2016-10-10
Post score: 0
Answer:
It works, but I wounder if I can have any problem with that the input_key and the output_key has the same name? Should I do in some other way? Right now I have declared both like input and output but only one of them is remapping with another state.
class Calculate_pallet_position(smach.State):
def __init__(self):
smach.State.__init__(self, outcomes=['calculation_pallet_done'],
input_keys=['calculate_pallet_position_ip','calculate_pallet_position_op'],
output_keys=['calculate_pallet_position_ip','calculate_pallet_position_op'])
#-------------------------------------------------------------------------------
# Create the sub SMACH state machine
sm_go_to_pallet = smach.StateMachine(outcomes=['go_to_pallet_done'])
# Open the container
with sm_go_to_pallet:
# Add states to the container
smach.StateMachine.add('Get_pallet_position', Get_pallet_position(),
transitions={'pallet_position_found':'Calculate_pallet_position'},
remapping={'get_pallet_position_op':'calculate_pallet_position_ip'})
smach.StateMachine.add('Calculate_pallet_position', Calculate_pallet_position(),
transitions={'calculation_pallet_done':'Move_forklift_to_pallet'},
remapping={'calculate_pallet_position_op':'move_forklift_to_pallet_ip'})
Originally posted by pv with karma: 16 on 2016-10-12
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Philipp Schillinger on 2016-10-12:
It is completely fine to use a userdata key as input and output. If you need to use a custom class instantiation in userdata, this reflects the fact that calling methods on this object might have side effects.
Comment by pv on 2016-10-13:
Thanks for the answers! | {
"domain": "robotics.stackexchange",
"id": 25945,
"tags": "smach"
} |
Conservation of Energy in Capillary Tube | Question: The capillary action lets a liquid rise in a narrow tube to a certain height. In this, the liquid gains some potential energy. According to the Conservation of Energy, the energy must come from somewhere. So where does the energy come from?
One possible solution that I thought was that the molecules of the capillary exert forces on the liquid molecules. These forces are responsible for potential energy being stored in the liquid molecules. The molecules could consume this potential energy and rise in the tube. But I don't know whether this is correct or not. Could anyone tell me if this is correct or some other solution exists?
Answer: I had the same question and found the following excellent answer, which I've restated here with a correction:
Capillary action is minimizing the Helmholtz free energy F of the system.
We can actually find a formula for the capillary height that does this once we have accounted for the three components of energy that change under capillary action:
With $\gamma_{L}$ and $\gamma_{G}$ as the surface tensions at the liquid-solid and gas-solid interfaces respectively (defined by $\gamma = \frac{\delta F}{\delta A}$), then:
If the liquid rises by an amount δh, then we have:
$\gamma_{L}$ 2πr δh increase in the surface energy due to the liquid-solid contact
$\gamma_{G}$ 2πr δh decrease in the surface energy due to the gas-solid contact
$\rho g$ $\pi r^2$ h δh increase in the gravitational potential energy of the system where ρ is the difference between the liquid and gas densities, because we're also displacing air and changing the potential energy of that little parcel of air as well.
Overall change in energy $\delta F = \delta h(\pi r^2 \rho g h + 2\pi r(\gamma_L - \gamma_G))$.
At equilibrium $\delta F = 0$, so $r \rho g h + 2(\gamma_L - \gamma_G) = 0$ and we have
$$h = \frac{2(\gamma_G - \gamma_L)}{\rho g r}$$
"domain": "physics.stackexchange",
"id": 79550,
"tags": "energy-conservation, conservation-laws, potential-energy, capillary-action"
} |
Strain field rotational invariance | Question: How it can be seen that the quantity
$$u_{ij}={1\over2}(\partial_iu_j+\partial_ju_i)$$
is rotationally invariant?
I've tried to use
$$u_i=R_{ij}u_j',$$
where $R_{ij}$ is the rotation matrix.
Here is my attempt:
$$ {{\partial u_j} \over{ \partial x_i}}={{\partial u_j} \over{ \partial x'_k}}{{\partial x'_k} \over{ \partial x_i}}= {{\partial R_{jl}u'_l} \over{ \partial x'_k}}R^{-1}_{ki}= R_{jl}{{\partial u'_l} \over{ \partial x'_k}}R^{-1}_{ki} $$
where I have used $x'_k=R^{-1}_{km}x_m$.
So for the symmetric combination:
$$2u_{ij}= R_{jl}{{\partial u'_l} \over{ \partial x'_k}}R^{-1}_{ki}+R_{il}{{\partial u'_l} \over{ \partial x'_k}}R^{-1}_{kj}$$
which looks right, but I don't know how to proceed.
I guess the statement I'm trying to prove is simply incorrect, because it is obviously not an isotropic tensor in the general case. It works only for
$${{\partial u'_l} \over{ \partial x'_k}}= \lambda\delta_{lk}$$ with $u_{ij}$ proportional to $\delta_{ij}$.
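A quick numerical check of the transformation law derived above (the gradient entries and the rotation angle are arbitrary example values, not from the original post):

```python
import numpy as np

theta = 0.7                                    # arbitrary rotation angle (example)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

grad_prime = np.array([[0.3, 1.0],             # hypothetical du'_l / dx'_k
                       [0.2, -0.5]])
eps_prime = 0.5 * (grad_prime + grad_prime.T)  # strain tensor in the primed frame

grad = R @ grad_prime @ R.T                    # du_j/dx_i = R (du'/dx') R^{-1}, R^{-1} = R^T
eps = 0.5 * (grad + grad.T)

print(np.allclose(eps, R @ eps_prime @ R.T))   # True: transforms as a rank-2 tensor...
print(np.allclose(eps, eps_prime))             # False: ...and is therefore not invariant
```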
Answer: It's not rotationally invariant. It's a tensor after all, not a scalar. | {
"domain": "physics.stackexchange",
"id": 40939,
"tags": "homework-and-exercises, condensed-matter, elasticity, continuum-mechanics"
} |
ROS Answers SE migration: Ros Networking | Question:
My setup:
export ROS_HOSTNAME=alpa
export ROS_MASTER_URI=http://alpha:11311
export ROS_HOSTNAME=betha
export ROS_MASTER_URI=http://alpha:11311
I ran turtlesim_node on the first machine and turtle_teleop_key on the second machine. I have already launched roscore on the master machine; turtlesim and turtle_teleop_key both launch well, but I have no remote control when I press a key to move the turtle.
Does someone have an idea?
Thanks
Edit: each hostname is attached to a different IP address. When I ping r2d2 from c3po I get replies. The connection between the two ROS machines is good, because I can start the master and the processes start on each PC without error, but there is no control of the turtle, which is what I want to do with the teleop_key.
export ROS_HOSTNAME=r2d2
export ROS_MASTER_URI=http://c3po:11311.
I run turtlesim in this machine
export ROS_HOSTNAME=c3po
export ROS_MASTER_URI=http://c3po:11311
in this one I first ran roscore and turtle_teleop_key; all ran well, but there is no remote control of turtlesim by turtle_teleop_key.
Help, please
Originally posted by polin on ROS Answers with karma: 1 on 2017-05-22
Post score: 0
Answer:
Please see wiki/ROS/MultipleMachines.
You only run a single master, so both ROS_MASTER_URI variables should point to the same IP (ie: the one running your roscore).
Additionally: the value of ROS_HOSTNAME is used by ROS nodes to identify who is 'talking' to them. But localhost is a special hostname, that will (should?) always resolve to the local machine (at 127.0.0.1). It is therefore not usable to configure a distributed (ie: networked) ROS nodegraph.
Make sure to use either globally resolvable hostnames or don't set ROS_HOSTNAME, but set ROS_IP to the IP of the network interface that you use to connect both computers to the same network.
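For example, a minimal environment setup matching this advice might look as follows (hostnames taken from the question; the IP addresses are placeholders for your actual network interfaces):

```shell
# On the machine running roscore (c3po in this question)
export ROS_MASTER_URI=http://c3po:11311
export ROS_IP=192.168.1.10        # IP of the interface both machines share

# On the other machine (r2d2) -- note the master URI is identical
export ROS_MASTER_URI=http://c3po:11311
export ROS_IP=192.168.1.11
```

Both machines point ROS_MASTER_URI at the single master; only ROS_IP differs per machine.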
Edit: the first version of your question used localhost everywhere, the second c3po and r2d2 and your last version now uses alpha and betha. Can you confirm that those hostnames actually resolve to IPs? The values you set for ROS_HOSTNAME have to actually be existing hosts on your network, you cannot just make something up.
On alpha, what is the output of ping betha (or on c3po, what is the output of ping r2d2)?
export ROS_HOSTNAME=r2d2
export ROS_MASTER_URI=http://c3po:11311
Edit 2: just making sure: you are exporting those variables in all terminals that you start ROS programs in? Or do you have those export lines added to your .bashrc?
If you haven't, add the export lines to your .bashrc on both machines (for now, at least). Close all terminals.
Now on c3po:
open a new terminal: start roscore in it
open a new terminal: run the following:
rostopic pub -r1 /chatter std_msgs/String "something"
Now on r2d2:
open a new terminal: run the following:
rostopic echo /chatter
you should now see rostopic printing "something" once per second.
To make sure everything is working, do the reverse (so start rostopic pub -r1 .. on r2d2 and rostopic echo .. on c3po).
If you don't get any output from rostopic echo .. on r2d2, your network setup is not correct. In that case, try a rosnode ping /rosout on r2d2.
Originally posted by gvdhoorn with karma: 86574 on 2017-05-22
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by polin on 2017-05-23:
See the change
export ROS_HOSTNAME=r2d2 export ROS_MASTER_URI=http://c3po:11311. I run turtlesim in this machine
export ROS_HOSTNAME=c3po export ROS_MASTER_URI=http://c3po:11311 in this on I ran roscore and turtle_teleop_key , all ran well but no romote of turtlesim by turtle_teleop_ke
Comment by gvdhoorn on 2017-05-23:
Please edit (add, not overwrite) your original question with this new information. The comments are not really suited for these things.
Also: please clarify what you mean with "all ran well". | {
"domain": "robotics.stackexchange",
"id": 27966,
"tags": "ros"
} |
Any experiences for plotting a stationary wavelet transform? | Question: I am experimenting with wavelets for my thesis and am currently working with the stationary WT pywavelets provides.
There are very nice plots for CWTs, but does anyone know a technique for producing a plot that gives a good overall understanding of the produced SWT?
Right now I am basically just producing a list of details coefficient plots for each level, and not regarding the averages at all.
I hope this question belongs here.
Cheers!
EDIT:
I really just iterate through the transform and plot each details coefficient vector.
(I'm aware the example uses DWT and not SWT, but it should be analogous)
import numpy as np, matplotlib.pyplot as plt, pywt

data1 = np.random.randn(512)  # placeholder signal; the post's actual data is not shown
fig, axes = plt.subplots(3, 5, figsize=[14, 8])
c = pywt.wavedec(data1, 'haar', mode='periodization')
for i in range(0, 5):
    axes[0, i].plot(c[i], c="r")
The result then looks like this:
Answer: I don't do Python, I'm an old person sticking to his old Matlab (codes and) habits. However, up to extension/wavelet/border issues, SWT is a discrete equivalent to CWT. And in most versions, the number of samples in approximations or details is the same (which is not the case for the DWT). Hence, you can concatenate 1D rows of details (or approximations) into 2D images, akin to traditional scalograms. The following images and code show the process. I have generated a random piecewise polynomial signal with increasing degrees. Then rows of details, and an image of the rows concatenated. The motivation is to address the impact of vanishing moments on piece-wise polynomials, with border effects.
Here are two different realizations.
The Matlab code, that may be reproduced in Python:
dataLength = 512;
data = zeros(dataLength,1);
nChunk = 4;
lChunk = dataLength/nChunk;
for iChunk = 0:nChunk-1
idxChunk = iChunk*lChunk+(1:lChunk)';
polyChunk = rand(iChunk+1,1)-0.5;
dataChunk = polyval(polyChunk,linspace(-0.5,0.5,lChunk));
data(idxChunk) = dataChunk;
end
nLevel = 4;
[swa,swd] = swt(data,nLevel,'db2');
figure(1);clf
subplot(1,3,1)
plot(data,'x');axis tight
for iPlot = 1:nLevel
subplot(nLevel,3,3*(iPlot-1)+2)
plot(swd(iPlot,:));;axis tight
end
subplot(1,3,3)
imagesc(swd)
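For what it's worth, here is a minimal Python sketch of the same row-concatenation idea using pywt's swt (the signal is a random placeholder rather than a port of the piecewise-polynomial generator above):

```python
import numpy as np
import pywt

data = np.random.randn(512)                    # placeholder; length must be a multiple of 2**level
n_level = 4
coeffs = pywt.swt(data, 'db2', level=n_level)  # list of (cA, cD) pairs, one per level
details = np.array([cD for _, cD in coeffs])   # undecimated: every row has len(data) samples

print(details.shape)                           # (4, 512)
```

Because the SWT is undecimated, `details` is a rectangular array that can be displayed directly with `plt.imshow(details, aspect='auto')`, the equivalent of Matlab's `imagesc(swd)`.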
Other related questions/answers:
STFT and DWT (Wavelets)
Does SWT/ISWT require intermediate approximation coefficients to represent/reconstruct the original signal?
Extract approximation and detail coefficients
Undecimated DWT vs. CWT. In what cases is one preferable over another?
Additive white Gaussian noise and undecimated DWT
Reproducing paper results about a wavelet transformation using python | {
"domain": "dsp.stackexchange",
"id": 7761,
"tags": "python, wavelet, transform, stationary"
} |
What is the meaning of the *fractional* (not integer) number $x$ in $Ga_xIn_{1-x}As$ for a semiconductor composite? | Question: So far, for molecules, I had learned the index parameter $x$ as an integer number.
But what is the meaning of the fractional (not integer) number $x$ in $Ga_xIn_{1-x}As$ for a semiconductor composite?
It is a fraction, but of what? Of which ensemble?
Should $Ga_xIn_{1-x}As$ be interpreted as a molecule or not at all ?
Answer: Indium gallium arsenide is an alloy of indium arsenide and gallium arsenide. The notation $\mathrm{Ga}_x\mathrm{In}_{1-x}\mathrm{As}$ denotes the alloy obtained from alloying the two compounds in a ratio $x:1-x$; it is not a formula for a molecule. As an alloy, the compound in solid form has a crystal lattice structure, and does not present as discrete molecules of any kind. | {
"domain": "physics.stackexchange",
"id": 66015,
"tags": "solid-state-physics, semiconductor-physics"
} |
Why is this transformation not regarded as a symmetry? | Question: Problem 4.3 in Mathematical Methods for Physics and Engineering by M. Blennow asks us to find the symmetries of a parallelogram tiling of the plane:
The solutions manual lists the following types of symmetry transformations: translations along either edge direction by a multiple of the side length, and 180° rotations about a vertex, the midpoint of an edge, or about the midpoint of a parallelogram. The different types of rotations can all be related by translations, so all of the above can be generated by the transformations $T_1$, $T_2$, and $c$ illustrated below:
However, I think that there should be one more. To get a non-ambiguous characterization of it, let us pick some vertex as the origin and introduce a basis as below:
Then, of course, each point in the plane can be uniquely identified as $x^1 \vec e_1 + x^2 \vec e_2$, for some $x^1, x^2 \in \mathbb R$, and we can define a transformation $\sigma$ by its action
$$x^1 \vec e_1 + x^2 \vec e_2 \mapsto -x^1 \vec e_1 + x^2 \vec e_2.$$
It seems to me that $\sigma$ is a symmetry of the tiling. Moreover, it cannot be constructed from the generators already listed, because it is not parity conserving (i.e., as a linear transformation it has negative determinant).
So, why is $\sigma$ not regarded as a symmetry transformation? Is there some conventional restriction on the types of transformations that we regard as symmetry transformations that forbids $\sigma$?
One commenter did not believe that the pattern is invariant under $\sigma$, so here is a short proof. In the introduced basis each horizontal line can be written as
$$H_n=\{r\vec e_1+n\vec e_2:r\in\mathbb R\}$$
for some integer $n$. Similarly each "skew-vertical" line can be written
$$V_n=\{n\vec e_1+r\vec e_2:r\in\mathbb R\}.$$
Hence
$$\sigma:H_n\mapsto\{-r\vec e_1+n\vec e_2:r\in\mathbb R\}=H_n$$
and
$$\sigma:V_n\mapsto\{-n\vec e_1+r\vec e_2:r\in\mathbb R\}=V_{-n}.$$
Answer: Check wallpaper group. "The types of transformations that are relevant here are called Euclidean plane isometries". Your transformation is not an isometry because the angle θ between $e_1$ and $e_2$ changes to 180−θ. Isometries must preserve length and angle.
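A quick numerical illustration of why $\sigma$ fails to be an isometry (a basis angle of $\theta = 60°$ is chosen as an example):

```python
import numpy as np

theta = np.pi / 3                    # 60 degrees between e1 and e2 (example value)
e1 = np.array([1.0, 0.0])
e2 = np.array([np.cos(theta), np.sin(theta)])

p       = 1 * e1 + 1 * e2            # point with coordinates (x1, x2) = (1, 1)
sigma_p = -1 * e1 + 1 * e2           # sigma: (x1, x2) -> (-x1, x2)

print(np.linalg.norm(p), np.linalg.norm(sigma_p))  # sqrt(3) vs 1: lengths differ
```

Since the Euclidean length of the point changes under $\sigma$, it cannot belong to the wallpaper group of the tiling.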
Mathematically we can (and physicists often do) consider larger symmetry groups than isometry groups. For example, transformations which preserve angle but not length form the Conformal Group; smooth reversible transformations which preserve neither angle nor length form the Diffeomorphism Group.
We can also consider isometry groups which preserve length and angle with respect to a metric other than the Euclidean metric. For example in Special Relativity the appropriate isometry group is called the Lorentz Group. | {
"domain": "physics.stackexchange",
"id": 78813,
"tags": "symmetry, group-theory"
} |
Test if two IDictionary objects contain the same values | Question: I'm adding a function to my test library to assist in testing if two IDictionary objects contain the same keys/values.
I need the method to be generic and support a dictionary that has collections as values. So I thought I'd try to do it without using class types.
It appears to be working, but I would like to clarify that I didn't make any mistakes that would impact my tests later.
/// <summary>
/// Checks if two dictionaries contain the same values. Supports recursive
/// dictionaries and collections as values.
/// </summary>
/// <param name="pExpect">Expected value</param>
/// <param name="pActual">Actual value</param>
// ReSharper disable CanBeReplacedWithTryCastAndCheckForNull
public static void dictionary(IDictionary pExpect, IDictionary pActual)
{
Assert.IsNotNull(pExpect);
Assert.IsNotNull(pActual);
if (pExpect.Keys.Count != pActual.Keys.Count)
{
Assert.Fail("Expected {0} keys, but contains {1} keys.", pExpect.Keys.Count, pActual.Keys.Count);
}
object[] expectKeys = new object[pExpect.Keys.Count];
object[] actualKeys = new object[pExpect.Keys.Count];
pExpect.Keys.CopyTo(expectKeys, 0);
pActual.Keys.CopyTo(actualKeys, 0);
// check if the two key sets are the same
CollectionAssert.AreEquivalent(expectKeys, actualKeys);
for (int i = 0, c = expectKeys.Length; i < c; i++)
{
object expect = pExpect[expectKeys[i]];
object actual = pActual[actualKeys[i]];
// both can be null
if (expect == null && actual == null)
{
continue;
}
// both must be assigned a value
Assert.IsNotNull(expect);
Assert.IsNotNull(actual);
// must be same types
Assert.AreEqual(expect.GetType(), actual.GetType());
if (expect is IDictionary)
{
// support recursive dictionary checks
dictionary((IDictionary)expect, (IDictionary)actual);
}
else if (expect is ICollection)
{
CollectionAssert.AreEquivalent((ICollection)expect, (ICollection)actual);
}
else
{
Assert.AreEqual(expect, actual);
}
}
}
I am wondering if my last check Assert.AreEqual(expect, actual) will cover most remaining test cases.
EDIT: Fixed testing dictionaries that have keys in different orders.
Answer: Your code doesn't work, because items in a dictionary aren't ordered in any way.
Example code that fails your test:
dictionary(
new Dictionary<int, int> { { 0, 0 }, { 1, 1 } },
new Dictionary<int, int> { { 1, 1 }, { 0, 0 } });
Some more notes about your code:
public static void dictionary(IDictionary pExpect, IDictionary pActual)
dictionary is a bad name for this method, because it doesn't really explain what it's supposed to do. Something like DictionaryEqualityTest would be better.
Also, I would avoid using the non-generic IDictionary if possible, IDictionary<TKey, TValue> would be better (though it would complicate the recursion). | {
"domain": "codereview.stackexchange",
"id": 5223,
"tags": "c#, unit-testing, hash-map"
} |
Why is electromotive force in magnetohydrodynamics a vector quantity? | Question: In the mean-field dynamo theory in magnetohydrodynamics, I frequently came across a quantity;
$\langle v'\times B' \rangle$, which is termed the mean electromotive force. I want to know why it is termed an electromotive force if it is a vector.
Everywhere else I have seen, emf is just a potential difference and hence a scalar. Is this emf different from the emf used in mean-field dynamo theory?
Answer: $\left\langle \mathbf{v}' \times \mathbf{B}' \right\rangle$ has dimensions of electric field, rather than potential. Therefore, it is different from the standard definition of electromotive force. In a highly conductive fluid it would be equal to $-\left\langle \mathbf{E}' \right\rangle$ (by Ohm's law). It could be considered the electromotive force per unit length in the direction parallel to the vector resulting from the motion of the fluid. | {
"domain": "physics.stackexchange",
"id": 94038,
"tags": "electromagnetism, forces, vectors, terminology, magnetohydrodynamics"
} |
my Butterworth lowpass formulas do not agree with Fisher webpage | Question: I want to implement a Butterworth low-pass filter. Thanks to this question, I have found out that the filter coefficients can be generated using Tony Fisher's web-site or using his code. But the problem arose when I tried to verify his formulas myself.
Wikipedia says that the derivation of low-pass formulas is simple: we start with the Butterworth polynomial of order n. I took n=1
$$B_1\left(\frac{s}{\omega_{cutoff}}\right)=1+\frac{s}{\omega_{cutoff}}$$
(note that $\omega_{cutoff}$ is angular frequency, not usual one) then do bilinear transform
$$s = 2f_{sampling}\cdot\frac{z-1}{z+1}$$
($f_{sampling}$ is usual frequency, not angular) and rewrite the resulting fraction in the form with non-positive powers of $z$.
To make the story short, my final formula for transfer function is
$$H(z) = \frac{Y(z)}{X(z)}$$
where
$$Y(z) = \frac{a}{1+a}+\frac{a}{1+a}z^{-1}$$
$$X(z) = 1-\frac{1-a}{1+a}z^{-1}$$
and
$$a = \frac{\omega_{cutoff}}{2f_{sampling}}$$
and the resulting formula is
$$y[n] = \frac{a}{1+a}x[n]+\frac{a}{1+a}x[n-1]+\frac{1-a}{1+a}y[n-1]$$
For testing I use $f_{cutoff}=1$ (which is $\omega_{cutoff}=2\pi$) and $f_{sampling}=30$.
We should not worry about the coefficients in front of $x[]$, since Fisher multiplies them by a gain factor anyway, but the coefficient in front of $y[n-1]$ equals
$$\frac{1-a}{1+a} = 0.8104139027$$
while Fisher's web-page for $f_{cutoff}=1$ and $f_{sampling}=30$ gives $0.8097840332$.
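For reference, the value above is easy to reproduce numerically from the formulas derived in the question:

```python
import math

f_cut, f_samp = 1.0, 30.0
a = 2 * math.pi * f_cut / (2 * f_samp)  # a = omega_cutoff / (2 * f_sampling)
print((1 - a) / (1 + a))                # 0.8104139..., not Fisher's 0.8097840332
```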
If you had the patience to finish reading this, maybe you can explain where I (or Fisher?) am wrong.
Answer: The problem is in the way you apply the bilinear transform. You have to use the appropriate (pre-)warping of the frequencies. Since the bilinear transform warps the frequency axis, you have to make sure that the corner frequency of the discrete-time filter is correct. One way to do that is as follows. The bilinear transform is defined as
$$s=k\frac{z-1}{z+1}\tag{1}$$
with some constant $k$ yet to be defined. If we denote the analog frequency by $\Omega$, and the normalized discrete-time frequency by $\omega$, Eq. (1) becomes for $s=j\Omega$ and $z=e^{j\omega}$
$$j\Omega=k\frac{e^{j\omega}-1}{e^{j\omega}+1}=k\frac{e^{j\omega/2}-e^{-j\omega/2}}{e^{j\omega/2}+e^{-j\omega/2}}=jk\tan(\omega/2)\tag{2}$$
Eq. (2) describes the frequency warping caused by the bilinear transform. If we use an analog lowpass filter with a normalized corner frequency $\Omega_0=1$, we must choose the constant $k$ such that for the desired discrete-time corner frequency $\omega_0$ the term on the right-hand side of Eq. (2) becomes $1$:
$$k=\frac{1}{\tan(\omega_0/2)}\tag{3}$$
where $\omega_0$ is the desired corner frequency of the digital filter. Eq. (3) and Eq. (1) define the appropriately normalized bilinear transform that you must use.
So for your example, the normalized first-order analog Butterworth lowpass transfer function is given by
$$H(s)=\frac{1}{1+s}\tag{4}$$
Applying the bilinear transform gives
$$H(z)=\frac{1}{1+k\frac{z-1}{z+1}}=\frac{z+1}{z(1+k)+1-k}=\frac{1}{1+k}\cdot\frac{1+z^{-1}}{1+\frac{1-k}{1+k}z^{-1}}\tag{5}$$
Ignoring the gain term $1/(1+k)$, the corresponding difference equation is
$$y[n]=x[n]+x[n-1]-\frac{1-k}{1+k}y[n-1]\tag{6}$$
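A quick numeric check of Eqs. (3), (5) and (6) with the values from the question:

```python
import math

f_samp = 30.0
w0 = 2 * math.pi * 1.0 / f_samp   # desired corner frequency, omega_0 = 2*pi/30
k = 1 / math.tan(w0 / 2)          # Eq. (3): pre-warped bilinear constant
print(k)                          # ~9.5144
print((1 - k) / (1 + k))          # ~-0.8097840332, matching Fisher's table
```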
With a desired corner frequency $\omega_0=2\pi/30$ you get from (3) $k=9.5144$ and $(1-k)/(1+k)=-0.80978$, just like on Fisher's website. | {
"domain": "dsp.stackexchange",
"id": 3083,
"tags": "discrete-signals, filter-design, lowpass-filter, bilinear-transform, derivation"
} |
In microarray normalization, why is the normalization factor this? | Question: I am working on analysis of a huge number of microarray files. I was trying to understand the need for normalization in microarray data and was going through this paper by John Quackenbush(2002). In the paper, the author mentions that
There are a number of reasons why data must be normalized, including
unequal quantities of starting RNA, differences in labeling or
detection efficiencies between the fluorescent dyes used, and
systematic biases in the measured expression levels
Then he talks about a simple normalization technique. Assuming that the total hybridization intensities summed over all elements in the arrays should be the same for each sample, he defines a normalization factor calculated by
summing the measured intensities in both channels -
where $G_i$ and $R_i$ are the measured intensities for the $i$-th array element (for example, the green and red (or experimental and control) intensities in a two-color microarray assay) and $N_{array}$ is the total number of elements represented in the microarray.
Then the author says this -
This is the part I don't understand. What is the need to introduce $G_k^{'}$, $R_k^{'}$ and why are they what they are? Most importantly, why is $T_i$ equal to $R_i/G_i$ first and then ($1/N_{total}$)*($R_i/G_i$)?
Any ideas?
Answer: $G_k^{'}$ and $R_k^{'}$ are normalized values of $G_k$ and $R_k$.
Take, say, G as $[1,2,3,4]$ and R as $[100,150,200,400]$ as your values, and suppose you want to normalize them. This means scaling one of them onto the other, bringing them to an equal level for comparison. In your case the factor is $85$ units, so a unit of G amounts to $85$ units in R.
So to scale G to R, multiply G by $85$ or you can scale R to the level of G by dividing R by $85$. So the values are either $[85,170,..]$ and $[100,150,200,400]$ or $[1,2,3,4]$ and $[1.17,1.76,..]$ from our example.
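A small Python sketch of this scaling, using the numbers from the example above:

```python
import numpy as np

G = np.array([1.0, 2.0, 3.0, 4.0])
R = np.array([100.0, 150.0, 200.0, 400.0])

factor = R.sum() / G.sum()     # 850 / 10 = 85: one unit of G per 85 units of R
G_up   = G * factor            # scale G up to R's level: [85, 170, 255, 340]
R_down = R / factor            # or scale R down to G's level: [1.18, 1.76, ...]
print(factor, G_up[:2], R_down[:2])
```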
I think it should be mentioned as $T^{'} = R_i/(G_i*N_{total})$ as in log ratio from the statement in your question. | {
"domain": "biology.stackexchange",
"id": 6827,
"tags": "genetics, gene-expression, microarray"
} |
Can I use LLM to explain codebase? | Question: I am a Data Engineer, and I am currently assigned a task to refactor an outdated code and rectify any bugs present. However, I am unable to comprehend the code written in the existing codebase. Furthermore, the developers who worked on this codebase did not provide any documentation. Consequently, I am inquiring if there is a feasible method to convert the entire codebase into an extensive text document. Subsequently, I would like to utilize ChatGPT to translate the codebase into a comprehensive document(very long text, with folder structure tree and code inside src) that I can use to embedding. I do not require an in-depth explanation of the code; rather, I am seeking a more abstract-level understanding, such as the purpose of specific files, the functionality of particular folders, etc.
Answer: Sure, many people have done that.
You can also ask it to add comments or try to find bugs.
Just take into account that LLMs are known for generating bullshit, so the explanations could be mere fabrications and the generated code may not work (in evident or subtle ways).
I myself have tried chatGPT for generating code, but I had to iterate a few times until I got it working. I suggest you prepare some unit tests and integration tests to ensure that everything is working as before chatGPT's suggested changes.
Take into account that the amount of text/code an LLM can use as context is not that large, so you may need to ask multiple times regarding different parts of the code base.
There may also be privacy concerns regarding the fact that you are basically sending the source code of your company to a third party, which is something many employers would frown upon. | {
"domain": "datascience.stackexchange",
"id": 11626,
"tags": "nlp, data-mining, word-embeddings, language-model, gpt"
} |
Can you take components of tension in a string? | Question: Imagine there is a string, not a massless one but a heavy one, attached to the ceiling such that the angle between the ceiling and the tangent at the end of the string is $\theta$. (First visualize it, or draw it somewhere.) Let the tension in the string at any point be T. Now obviously there are components of forces acting, as T$\sin\theta$ balances its weight. But if I nudge the string at that point very lightly, I actually do not experience a resistive force (I've tried it several times). But I know T$\cos\theta$ is acting towards me.
So my question is, why do not I experience a resistive force?
Also, when we derive the expression for centripetal force, we do not take components of Tension but of its weight, because it "sounds" meaningless.
Is it logical to take components of T?
Answer: Look up the Catenary Curve properties and notice that the weight of the cable causes a reaction along both the $x$ and $y$ axes, as you noted. The vertical components of the reactions sum up to the weight of the cable, and the horizontal components are such that the tension is tangent to the shape of the cable.
Now what you "feel" when you move the support is the inertial force of moving the cable, which I suppose is too small for you to feel. If you try with a heavy steel cable and you accelerate at the same order of magnitude as gravity, then you will feel inertial resistance.
Notice that you also feel an effective stiffness as you pull on the cable as the sag decreases and the incident angle creates higher horizontal reactions. In addition you might have elastic deformation also decreasing the above stiffness by $1/k_\mathrm{eff} = 1/k_\mathrm{elastic} + 1/k_\mathrm{geometric}$. | {
"domain": "physics.stackexchange",
"id": 9649,
"tags": "homework-and-exercises"
} |
quantum fluctuations at the horizon | Question: So suppose we have a black hole with hair, that is, a background solution in our field theory that describes a black hole spacetime and in which a field coupled to gravity has a non-zero configuration.
For the purpose of the discussion suppose that the metric is Schwarzschild, and without loss of generality let us choose the Schwarzschild coordinates. So the metric and its inverse have divergent components at the horizon:$$g^{00}=g_{rr}=\frac{1}{f(r)}=\frac{1}{1-\frac{a}{r}}$$
Now when we consider the quantum fluctuations around this background, don't we have problems defining the perturbative theory near the horizon?
My impression is that if we write the operators of the theory for fluctuations, some of them will a priori diverge.
Indeed we write the operators for the fluctuations by substituting some (from 1 to all) background fields with a fluctuation of that field in the monomials of the original Lagrangian. Even if these monomials will be bounded (this will usually be the case if there are no naked singularities), when we substitute with a fluctuation one of the fields that in these monomials approach zero at the horizon (and that tame the divergence of another field in the same monomial, on the background solution), we get a fluctuation operator which has fluctuation times divergent term. Therefore this operator at the horizon has an infinite strong coupling with respect to some of the others, in particular with respect to the kinetic term for fluctuations of the field that we are coupling to gravity.
This disparity will be present even among operators with the same number of fluctuations: already by substituting one background field with one fluctuation we can get operators which diverge or which go to zero at the horizon (depending if we substitute the zero or the divergent background field).
Another problem of this setting is that now there are operators with many fluctuations that can diverge and therefore be more important with respect to the ones of order one with less fluctuations (for instance the kinetic terms).
So in conclusion it would seem that the presence of the horizon causes problems in defining quantum fluctuations around it, unless one imposes conditions on these fluctuations: going like the respective background for instance. Is this a meaningful requirement?
Of course I am doing everything in the sad Schwarzschild coordinate system. But the Lagrangian of the fluctuations is a scalar and the various operators should be too, if we define the fluctuations of the fields to be tensors (always possible I would say). So I would expect that these divergences are indeed independent of the coordinates.
Maybe I presented the question in a somewhat involved way (I tried my best not to do so); in particular I am not sure I need some other field in addition to gravity to get this puzzle. I hope somebody already debunked this! Thanks for any input.
Answer: I think I found the answer by myself: there are actually some conditions on the fluctuations.
The fact is that independently of the coordinate system we choose, we want the fluctuations to be bounded, or at least to be way smaller than the respective background fields. If we do so, then when we go to the Schwarzschild coordinates the fluctuations will be zero. If they were non-zero, then by changing to smooth (Kruskal) coordinates they would become infinite.
So the strong coupling in Schwarzschild coordinates is actually due to having infinite fluctuations in other coordinates.
Therefore in the Schwarzschild coordinates the fluctuations will have to go like their respective fields at the horizon. | {
"domain": "physics.stackexchange",
"id": 57616,
"tags": "quantum-field-theory, general-relativity, black-holes, event-horizon, qft-in-curved-spacetime"
} |
Airflow selection mechanism | Question: In a sensor project I'm involved with, we have a requirement to bypass the airflow from one sensor while it does some further measurements. To do this, I implemented a flap driven by a standard servo which rotates the flap through approximately 90 degrees, opening one port and closing another.
I'd like to try and improve this with a better, less leaky, mechanism. The first thing I've thought of is a cylinder rotating within another:
The green outer cylinder contains three ports to which the inlet pipe and two outlet pipes (~20mm dia) are connected. The inner cylinder (~30mm dia) rotates to select one or other of the outlet pipes - thus it has either a really elongated inlet port or four ports. I figure I can place the servo in the centre - this dictates the inner cylinder diameter.
Is there an existing off-the-shelf product that would do this for me or will I have to try and 3-D print it (or find two waste pipes that will fit appropriately)?
Is this a reasonable way to do airflow selection, or will it turn out to be even leakier than the air flap (which wasn't particularly well done as it happens - it can be improved)?
Edit: As an addition in the OP rather than as an answer/comment: This is another way of doing this. Once you know "pneumatic" is the term you can add terms like "servo valve" to google image search and get all sorts of neat ideas (including little lego valves!):
Answer: Are you bound to the shape/configuration of the part you constructed in your question? If not, a simple 3-way pneumatic solenoid valve would make this very simple and leak-proof (relatively speaking). They are inexpensive too (eBay). You would have an input, and the valve will allow flow through one outlet; energize the solenoid and you will get flow through the second outlet. | {
"domain": "engineering.stackexchange",
"id": 353,
"tags": "mechanical-engineering, design, airflow"
} |
Does a geodesic exist that will take someone across the event horizon? | Question: I saw the movie "Interstellar" a few years back, and was amazed that Cooper was able to fall from 1 AU into a black hole before his daughter turned 110. Intuitively, I would think that there are no paths in our spacetime that cross over the event horizon.
Does a geodesic exist that can take one from normal space to the other side of an event horizon before the end of time?
Answer:
Does a geodesic exist that can take one from normal space to the other side of an event horizon before the end of time?
TLDR answer
Yes.
More complete answer
There are geodesics that do cross the event horizon. Both the geodesics themselves and the horizon are coordinate independent features of the spacetime, so this statement is fully coordinate independent. These geodesics include timelike, null, and spacelike geodesics. In principle, Cooper could follow a timelike geodesic across the horizon, so all further discussion of geodesics will specifically assume timelike geodesics.
In the Schwarzschild spacetime all timelike geodesics pass through the event horizon and then go to the singularity. So that clearly answers the question "Does a geodesic exist that can take one from normal space to the other side of an event horizon" in the affirmative, consistent with the TLDR answer.
The only tricky or complicated part is the final part of the question "before the end of time". That phrase is ambiguous, so there are multiple ways to interpret it. Here is a list of ways to interpret that phrase and the consequence to the TLDR answer.
"before the end of time" could mean:
Before the end of proper time along the geodesic. This is my preferred interpretation because proper time is coordinate independent, so this is the meaning that makes the whole question one about the physics rather than about some arbitrary choice of coordinates. In this case, the answer is Yes. Proper time ends as the geodesic approaches the singularity, and the crossing of the event horizon is an earlier event.
Before the end of coordinate time in $X$ coordinates where $X \in \{\text{Schwarzschild, isotropic, ...} \}$. Since this is a coordinate time this will only be true when using $X$ coordinates. The $X$ coordinates do not cover the region of spacetime containing the event horizon, so the $t$ coordinate ends before the horizon. Therefore, in this case the answer is No. The event where the geodesic crosses the horizon is after the end of $X$ coordinate time.
Before the end of coordinate time in $X$ coordinates where $X \in \{\text{Kruskal–Szekeres, Lemaître, Gullstrand–Painlevé, ...} \}$. Since this is a coordinate time this will only be true when using $X$ coordinates. The $t$ coordinate in $X$ coordinates is finite at the horizon, so the horizon is in the region covered by these coordinates. Therefore, in this case the answer is Yes. The event where the geodesic crosses the horizon is before the end of $X$ coordinate time.
Before the end of coordinate time in $X$ coordinates where $X \in \{\text{Eddington-Finkelstein, ...} \}$. This will only be true when using $X$ coordinates. There is no coordinate time in $X$ coordinates. None of the coordinates are timelike, there are only spacelike and null coordinates. The coordinate chart covers the event horizon, so the event where the geodesic crosses the horizon has perfectly valid and finite coordinates in $X$ coordinates. But none of those coordinates are "time". Therefore, in this case the answer is undefined. There is no time so there is no end of time so there is no way to compare the perfectly valid event of crossing horizon with the end of time to determine if it was before or after. | {
"domain": "physics.stackexchange",
"id": 98508,
"tags": "black-holes, observers, event-horizon, estimation, causality"
} |
How to calculate how many photons are in the universe? | Question: The "universe" is a sphere with a radius of $10^{25}$ m
the medium temperature is 3K,
how many photons there are in the universe?
$$n_\gamma = \int_{0}^{\infty} \frac{8\pi\nu^2}{c^3} \frac{1}{e^{h\nu/kT} - 1}\,d\nu = 2.4\,\frac{8\pi}{c^3} \left(\frac{kT}{h}\right)^3 \simeq 1.64\times 10^{17}\ \text{photons}$$
but according to previous answers and other references...
e.g.
https://www.quora.com/How-many-photons-are-there-in-the-universe
the number is much bigger $$10^{89} $$
where is the problem in my tentative?
As you can see from my attempt, it would be better if the answer is based on "classical thermodynamics", using the Planck distribution and a Boltzmann-like point of view.
Estimation based on cosmological facts are also welcome.
When you solve the integral, after a substitution you have to evaluate an integral like this: $$ \int_{0}^{\infty} \frac{x^2}{e^x - 1}\, dx \simeq 2.4$$
Answer: Besides using a too small value for the radius of the Universe — which is in fact some 46 billion lightyears, or $R_\mathrm{Uni}\sim4\times10^{28}\,\mathrm{cm}$ — I think you've just made a simple calculation error (my guess is mixing SI and cgs units):
Your result for $n_\gamma$ ("$1.64\times10^{17}$ photons") is a number, whereas in fact the result should be a number density, measured e.g. in $\mathrm{cm}^{-3}$. This value should then be multiplied by the volume of the (observable) Universe.
Photons $\simeq$ CMB photons
The number of photons in the Universe is dominated by the CMB photons, by over two orders of magnitude (see this answer for a discussion of the Universal photon background). Each $\mathrm{cm}^3$ of space holds roughly $n_\gamma = 410$ CMB photons, which I estimate from observations in that answer, but which is indeed also what I get with your own calculation.
If you include all photons — not just CMB photons, but also radio, IR, optical, etc. — the result is $n_\gamma\simeq413\,\mathrm{cm}^{-3}$.
Hence, with a volume of $V_\mathrm{Uni} = 4\pi R_\mathrm{Uni}^3/3 = 3.5\times10^{86}\,\mathrm{cm}^3$ — the total number of photons is
$$
N = n_\gamma V_\mathrm{Uni} = 1.4\times10^{89}\,\mathrm{photons}.
$$
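As a sanity check, the closed form of the number-density integral, $n_\gamma = 16\pi\zeta(3)\,(kT/hc)^3$ (equivalent to the $2.4\cdot 8\pi(kT/hc)^3$ expression above), can be evaluated numerically. A sketch in SI units; the radius is the $\sim46$ billion lightyear value assumed above:

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s

def photon_density(T):
    """Blackbody photon number density (m^-3).
    The integral of x^2/(e^x - 1) equals 2*zeta(3) ~ 2.404, so
    n = 16 * pi * zeta(3) * (k_B*T / (h*c))**3."""
    zeta3 = 1.2020569  # Apery's constant
    return 16 * math.pi * zeta3 * (k_B * T / (h * c))**3

n = photon_density(2.725)            # ~4.1e8 m^-3, i.e. ~410 photons per cm^3
R = 4.4e26                           # comoving radius of observable universe, m
N = n * (4 / 3) * math.pi * R**3     # ~1.4e89 photons
```

Keeping track of units consistently (pure SI here) is exactly what avoids the kind of SI/cgs mix-up suspected in the original attempt.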
As has been commented above, the number of photons is not really conserved. However, the amount of CMB photons that has been absorbed since they were emitted is actually negligible. The only interaction of these photons that alters their state is scattering on free electrons after the Universe was reionized (which happened 0.5 to 1 billion years after they were emitted). The optical depth to this so-called Thomson scattering is $\tau = 0.066$ (Planck collaboration 2015), so the fraction of CMB photons that have scattered is $1 - e^{-0.066} = 0.06$. But this process doesn't remove any photons from the budget, it only polarizes them. | {
"domain": "physics.stackexchange",
"id": 33338,
"tags": "homework-and-exercises, thermodynamics, cosmology, thermal-radiation, cosmic-microwave-background"
} |
Condensing multiple conditionals into function | Question: I feel like I'm repeating a ton of code here and there's got to be a better way to do this but am just completely spacing right now. I basically have an else if clause that, if their conditions are met, repeat the same code, but I'm also repeating the same code in the else statement along with a few other conditions.
What would be the best way to condense this? Could I create one master browser check function?
var searchEngine = document.referrer;
if ($('#userName').length){
if (/Chrome/.test(searchEngine)){
browser = "Chrome";
} else if (/Firefox/.test(searchEngine)){
browser = "FF";
} else if (/Safari/.test(searchEngine)){
browser = "Safari";
}
} else if (/ip(hone|od|ad)/i.test(searchEngine)){
if (/Chrome/.test(searchEngine)){
browser = "Chrome";
} else if (/Firefox/.test(searchEngine)){
browser = "FF";
} else if (/Safari/.test(searchEngine)){
browser = "Safari";
}
} else {
if (/Chrome/.test(searchEngine)){
browser = "Chrome";
} else if (/Firefox/.test(searchEngine)){
browser = "FF";
} else if (/Safari/.test(searchEngine)){
browser = "Safari";
} else if (/ip(hone|od|ad)/i.test(searchEngine)){
browser = "iOS";
} else if (/(?:(compatible;.*)?Trident\/7.0)/ig.test(searchEngine)){
version = 11;
browser = "IE";
} else if (/MSIE (\d+\.\d+);/.test(searchEngine)){
browser = "IE";
}
}
Answer: The Chrome, Firefox, and Safari duplicated code exists as the first part of every conditional and those three can be moved outside. Then have all of the other logic in an else. You'll end up with the $('#userName').length and /ip(hone|od|ad)/i.test(searchEngine) being empty. The reason I haven't provided a code example is because only you can decide if that's a problem or not. | {
"domain": "codereview.stackexchange",
"id": 11316,
"tags": "javascript, jquery, beginner"
} |
On APX problems | Question: Given optimization problems $P1$ and $P2$. The slides below say that, $P1 \leq_L P2$ does not imply $P2 ∈ APX$, then $P1 ∈ APX$. Why is this so?
http://www.di.univr.it/documenti/AttDidAva/allegato/allegato581250.pdf
$MAX INDEP SET$ is not in APX while $MAX CUT$ is.
Is an L reduction, $MAX INDEP SET$ $\leq_L$ $MAX CUT$ possible? The above implication does not say this is impossible.
Is there some intuition for this, given that $MAX INDEP SET$ is not in APX while $MAX CUT$ is?
Answer: Suppose you have a reduction $f$ from problem $A$ to problem $B$. So if $f(x) \in B$ then $x \in A$ and if $f(x) \notin B$ then $x \notin A$ (here $x \in A$ means $x$ is a YES instance of $A$).
Now suppose that $A$ and $B$ are in fact optimization problems, say both are maximization problems. Now $f$ maps an instance $(x,a)$ of $A$ to an instance $(y,b)$ of $B$ such that $B(y) \geq b$ implies $A(x) \geq a$ and $B(y) < b$ implies $A(x) < a$ (here $A(x)$ is the optimal value of $x$ for problem $A$).
Suppose you had an approximation algorithm $O$ for $B$, say with approximation ratio $\rho < 1$. Then $O(y) \geq \rho B(y)$. How would you convert this to an approximation algorithm for $A$? There are two problems:
$O$ only gives a solution for $B$ problems. You will need another procedure $P$ that converts a solution for $y$ to a solution for $x$.
You need a promise that if the solution produced by $O$ is good as a solution for $y$, then the solution produced by applying $P$ is good for $x$.
When these two conditions are satisfied, the reduction preserves approximability, and then the implication is correct. But they need not hold in general.
Here is a concrete example. A set of vertices in a graph is a vertex cover iff its complement is an independent set. While vertex cover has a $2$-approximation algorithm, independent set is (probably) not approximable. Here the reductions $f$ and $P$ are trivial - given a graph $G$ with $n$ vertices and a threshold $a$ for independent set, $f$ outputs the same graph $G$ and the threshold $b=n-a$ for vertex cover. The conversion procedure $P$ simply complements the solution.
Consider the $2$-approximation algorithm for vertex cover (which is a minimization problem). Thus, if the graph $G$ has a vertex cover of size $T$, then it returns a vertex cover of size $2T$. In terms of the optimal independent set, we go from $n-T$ to $n-2T$. This could be a big difference for some values of $T$: for example, if $T = n/2$, then there is an independent set containing half the vertices, but this reduction only gives us an independent set of size $0$!
Finally, there are logspace reductions among most well-known NP-complete problems, including probably independent set and MAX-CUT. Since logspace reductions compose, all you need to do is follow the chain of reductions and verify that all of them are logspace. | {
"domain": "cstheory.stackexchange",
"id": 2250,
"tags": "cc.complexity-theory, apx"
} |
Understanding the strategy of Sanger DNA sequencing | Question: The Sanger sequencing method creates large numbers of sequences of all possible lengths, ending with a specific nucleotide, by terminating with a tagged (fluorescent) nucleotide at the end.
But if you already have fluorescent nucleotides of a specific base, why not just do regular PCR with them, create a huge number of full length copies of the original sequence, and then simply see the locations where the nucleotide fluoresces to determine all the locations of that base. Why do we need a large number of copies ending at each possible location, as with the Sanger method?
Answer: If you mark the full length strand of the DNA with the fluorescent labels, you will get a lot of signals from the same nucleotide without the possibility to discriminate where the actual nucleotide is located on the strand.
Sanger sequencing doesn't end with the preparation of the terminated and labelled DNA strands, the following step is crucial in discriminating where the labelled base actually is. After labelling, the sample is run through high resolution capillary gel electrophoresis to sort them by size. The smallest fragment comes out first, then the one with one base more and so on. The detector which then identifies which fragment is coming through is located at the end of the capillary.
This works like shown in the schematic image (from here):
If you would mark a complete strand, no discrimination by size is possible (ideally, all DNA would run at the same height of a gel) and all fluorescent signals for all positions would come at the same spot. Not very helpful. | {
"domain": "biology.stackexchange",
"id": 10083,
"tags": "dna-sequencing"
} |
Is there a comprehensive database of fossils (with images) online? | Question: Not sure if this is the best stackexchange to ask...
I have not been able to find a decent database of fossils on the web, does one exist?
Here are some of the links I have found through Wikipedia and Google:
http://www.paleoportal.org/index.php?globalnav=doing_paleo§ionnav=more_submissions&state_id=0&submission_type_id=27&type_id=5
http://www.morphobank.org/index.php/Projects/Index
http://research.amnh.org/paleontology/
http://en.wikipedia.org/wiki/List_of_fossil_sites
http://www.fossilsites.com/STATES/AB.HTM
http://inyo.coffeecup.com/site/ammonoids/ammonoids.html
http://www.historicalclimatology.com/
http://geonames.usgs.gov/pls/gnispublic/f?p=gnispq:3:4473291015495555::NO::P3_FID:354414
http://www.scotese.com/earth.htm
http://www.trilobites.info/biostratigraphy.htm
http://burgess-shale.rom.on.ca/en/
http://www.fossilmuseum.net/FossilGalleries.htm
http://www.kumip.ku.edu/cambrianlife/Utah-Online-Fossil-Exhibits&Collections.html
http://www.ucmp.berkeley.edu/collections/catalogs.html
http://peabody.yale.edu/collections/search-collections?vp
http://books.google.com/books?id=Ezm1OA_s6isC&printsec=frontcover&dq=hartwig+primate&cd=1&hl=en#v=onepage&q&f=false
What are the best places to get fossil data and images of the fossils? Or are museums and textbooks the only alternative?
Answer: No. Museums are the traditional repository for fossils, and the process of "digitizing" their collections is slow and labor intensive. Often, the museums only aim to digitize what we might call the "meta-data" associated with the fossil, as was done here:
http://ucmpdb.berkeley.edu/cgi/ucmp_query2?admin=&query_src=ucmp_index&table=ucmp2&spec_id=V8111&one=T
A truly comprehensive database is not feasible in the near future. A single photograph likely would not be sufficient to characterize the fossil -- the interesting components of fossils are often microscopic. For example:
http://www.pnas.org/content/99/14/9117.full.pdf
Even taking a single photograph can be very labor intensive, and fossils can be fragile. Proper photo-documentation would probably require multiple images. More generally, a comprehensive database would probably need to include non-photographic data relating to the fossil -- such as chemical composition or non-visual imaging techniques (X-ray, IR, UV, etc.).
For the foreseeable future, "comprehensive" collections will be housed in museums without full digital representation. The only way to know how comprehensive these collections are is to ask the museum curator, who will be aware of the scope and limitations of the collection. | {
"domain": "biology.stackexchange",
"id": 1579,
"tags": "evolution, bioinformatics"
} |
Octomap slows down with increase in map size | Question:
I'm trying to use octomap with a velodyne HDL-32E sensors point cloud. I move around in a large complex (Approx 1 km by 1 km) and generate the map. The map update rate is really good when the total map size is small (I can see moving cars and people being classified into obstacles), but as the map size increases, the update rate slows quite a lot. Can someone suggest me ways to overcome this problem? I'm currently considering just running the map in the local frame to get immediate obstacles, but this will defeat the purpose (global planning) for which I had switched to octomap.
Originally posted by raskolnikov_reborn on ROS Answers with karma: 53 on 2015-12-19
Post score: 3
Original comments
Comment by 2ROS0 on 2015-12-20:
This obstacle detection of people and cars is running separately? Are you sure it is the Octomap update that's the bottleneck?
Comment by raskolnikov_reborn on 2015-12-21:
I meant those as examples to illustrate that moving obstacles are detected nearly instantly at the beginning but not at the end. Static fixtures such as walls and trees etc are detected properly all the way to the end but the updates get slower and slower with the increase in map size.
Comment by raskolnikov_reborn on 2015-12-25:
I can confirm that the problem is related to map size more than time of operation. If I create a local window map without a global frame, that is, both map and base frame are the sensor, the problem is greatly reduced. With a finite map size, octomap updates do not get slower over time.
Comment by krishnaece1505 on 2016-03-01:
I am trying to use octomap to create a 3D map from point cloud input in PointCloud2 format. I have built the libraries. But when I launch the octomap_mapping.launch file, I see no output in RViz in /map. Since you have got it working, may I know the procedure to use this library?
Comment by chukcha2 on 2016-03-02:
I was told that octomap is used to create a map from already registered point clouds (or a set of point clouds and transforms). I haven't found a good way to register point clouds from a velodyne lidar (VLP-16 in my case). Can you please share how you are able to register point clouds?
Comment by raskolnikov_reborn on 2016-04-19:
Just start publishing a transform between your laser frame and your global frame. Specify the sensor frame and the global frame in the launch file and you should be done. Make sure you provide the point cloud in the sensor frame if you enable ground filtering, otherwise you get out-of-range errors.
Comment by chukcha2 on 2016-04-19:
Thanks, but what I am asking is how to get that transform (between laser frame and global frame). What method are you using? Thanks.
Answer:
I would expect that map updates themselves do not slow down over time, as the computation of the voxels to "touch"/modify should take similar average computation time no matter where in the octomap the sensor is placed. Of course, if the average distance of range measurements increases over the course of your datasets, an increase in processing time is to be expected as the raytracing takes longer then. Of course, there are other factors at play that might have adverse effects, like allocation and placement of voxel information in memory.
What definitely takes longer the larger the map grows is iterating over all cells, for instance for visualization purposes. Are you possibly doing that somewhere in your code? If you are using the standard octomap_server, the publishAll() method is called after every map update. This method iterates over all cells, so it would explain a slow-down with increasing map size nicely.
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2016-04-20
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by raskolnikov_reborn on 2016-04-20:
Thanks. That may be it. I sort of worked around the issue by only using a local map to navigate obstacles and clearing everything outside a bounding box every second. But maybe if I could just get the map server to publish changes in cell state rather than all the cells, it would be faster.
Comment by raskolnikov_reborn on 2016-04-22:
Hi. In line with my earlier comment. Is there a way I can configure octomap_server to not publish/iterate the entire map again and only publish the updates.
Comment by Stefan Kohlbrecher on 2016-04-22:
I don't think so, but you could easily fork it and then do whatever you want. Alternatively, you could use the octomap library in your own node and this way have full control over what happens. | {
"domain": "robotics.stackexchange",
"id": 23263,
"tags": "ros, navigation, mapping, octomap"
} |
Gazebo 9 Contact Plugin Not Working | Question:
I have followed the tutorial in this link http://gazebosim.org/tutorials?tut=contact_sensor for the contact sensor plugin. However, it seems that not just the plugin but the sensor itself is not working in Gazebo 9.7. Has anyone managed to get it running in Gazebo 9?
Originally posted by dogukan_altay on Gazebo Answers with karma: 3 on 2019-05-21
Post score: 0
Answer:
For anyone who encounters the same issue: the bumper sensor should be attached to the link with the desired collision; it should not be attached to a dedicated link.
Originally posted by dogukan_altay with karma: 3 on 2019-05-21
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 4406,
"tags": "gazebo"
} |
How left out down-sampling in 3D-DWT just on the Z direction | Question: I am working on image processing by a 3D wavelet transform.
I have a problem with the classification size of the wavelet coefficients. As you know, when we apply 3D_DWT on the image (for example: Hyperion of the area by 256*256*128 size) the sub-band size would be decreased (for LLL in 1l level the size is 128*128*64).
How I could apply down sampling just for the Z direction (spectral direction)?
It means this size of the sub-band should be 256*256*62 after one level wavelet applying. I want to do it in the Matlab script (dwt3.m & wavedec3.m), but when I removed the down sampling, it was applied in all three directions.
Answer: You are applying a 3D DWT scheme whose down-sampling is coherent across the three directions. Nothing forbids you from decomposing bands the way you want, and the literature is full of so-called isotropic, Tensor, Separable, Square or Standard implementations (vocabulary: separable/non-separable,
standard/non-standard, S-form/NS-form, rectangular/square, anisotropic/isotropic, tensor/Mallat, hyperbolic/isotropic, separated/combined wavelet transform)
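For illustration, a single-axis pass is easy to write directly; here is a sketch of a one-level Haar transform applied only along the spectral axis (a hypothetical Python/NumPy analogue of the Matlab scripts mentioned in the question; PyWavelets' `pywt.dwt` also accepts an `axis` argument for the same purpose):

```python
import numpy as np

def haar_dwt_axis(x, axis):
    """One-level Haar DWT along a single axis (axis length must be even):
    approximation = (even + odd)/sqrt(2), detail = (even - odd)/sqrt(2).
    Only the chosen axis is halved; the other axes are untouched."""
    even = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

cube = np.random.rand(256, 256, 128)   # hypothetical hyperspectral cube (x, y, z)
cA, cD = haar_dwt_axis(cube, axis=2)   # Z-only pass: shapes become (256, 256, 64)
```

An isotropic level would instead chain such passes over all three axes; running the pass over a single axis gives exactly the anisotropic (separable, "rectangular") decomposition described above.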
To do that, it is more appropriate to perform 1D wavedec passes your own way on each direction separately. | {
"domain": "dsp.stackexchange",
"id": 5717,
"tags": "wavelet, downsampling, 3d"
} |
Dissociation of diatomic molecules | Question: At high enough temperature, diatomic molecules in a dilute gas should dissociate. Where do their electrons go? Will they dissociate into neutral atoms or into ions? Does it depend on the substance? What for $\ce{H2}$, $\ce{O2}$, $\ce{HCl}$, $\ce{HF}$? Which quantum mechanical technique can decide the question - Hartree-Fock?
Answer:
Where do their electrons go?
IIRC, in the gas phase, the energy required for heterolysis is usually much greater than that required for homolysis. Consequently, usually, each of the atoms of a diatomic molecule retains one of the originally bonded electrons on dissociation.
Which quantum mechanical technique can decide the question -
Hartree-Fock?
In principle, one could scan the potential energy surface1) by elongating the bond using some quantum chemical method to look at what actually happens. Be aware though that the most commonly used variant of the Hartree-Fock (HF) method, the so-called restricted Hartree-Fock (RHF) method, is known to be ill-suited for such calculations: it is known to fail completely when the homolytical dissociation of a closed-shell molecule into two open-shell fragments is considered.
Such behaviour is usually referred to as the Hartree-Fock "dissociation catastrophe", although, strictly speaking, only the restricted variant of the Hartree-Fock method is subject to it. The unrestricted Hartree-Fock (UHF) method does not suffer from the "dissociation catastrophe" and tends to give at least a qualitatively correct dissociation limit. So, use UHF at least. Even better, use some correlated method.
1) Simply a curve in this case, since there is only single degree of freedom (bond length). | {
"domain": "chemistry.stackexchange",
"id": 6680,
"tags": "quantum-chemistry"
} |
Are we seeing everything in a delayed manner? | Question: If light is faster in vacuum medium than in air medium,
does it mean that we are seeing everything in a delayed manner since we live in air medium?
Is there any way to see things in actual speed i.e. in vacuum?
P.s. I'm not a physics grad, so I'm sorry if my question is trivial.
Answer: If you mean "do we see things in slow motion", the answer is "no". We see things with a slight delay, but at the same speed as if the medium was a vacuum.
The easiest way to see this is to think about what would happen over time. Let's assume we are looking at a clock, and the light from the clock gets to us slowly - say it takes a second longer than it would in a vacuum. Then when the second hand reaches "1 second past the hour", I see it at the top of the hour. But a second later, the information "it is now one second later" must reach me. Otherwise, all that information will end up piled up between the clock and me - and a person who just walks into the room would either see a different time than I see (they see the one second delay), or for them the situation would be different than it was for me when I walked into the room. Neither of those things make sense.
So - constant delay due to the extra time the signal takes; but other than that, no difference in speed with which observed events unfold.
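The size of that constant delay follows from Δt = (n − 1)·d/c. A minimal sketch (the refractive index is the usual approximate value for air at STP):

```python
c = 299_792_458.0      # speed of light in vacuum, m/s
n_air = 1.0003         # approximate refractive index of air at STP

def extra_delay(d):
    """Extra travel time (s) over distance d (m) in air compared to vacuum."""
    return (n_air - 1) * d / c

dt = extra_delay(1.0)  # roughly a picosecond per metre of air
```

So for everyday distances the delay is far below anything a human (or most instruments) could notice.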
As was pointed out by @hobbs, the actual difference in speed between light in vacuum and in air is tiny. With the refractive index of air at STP around 1.0003, the difference is not something you would normally notice. Light travels 1 meter in about 3 nanoseconds; on that scale, an extra 0.03% adds about 1 picosecond. | {
"domain": "physics.stackexchange",
"id": 34281,
"tags": "visible-light, speed-of-light, refraction"
} |
Does lower the temperature always mean more stable structure and larger density? | Question: In solid states, the lower temperature meant more stable structure and transition fro bcc(packing fraction 0.74) fcc(packing fraction 0.74) to hcp(packing fraction 0.68).
1.a. Does this mean atom, such as Fe, at bcc had larger lattice parameter and lower density than Fe at fcc (At same temperature and pressure)?
1.b. Fe went through three different states (bcc-fcc-bcc) at very high temperature to standard temperature and pressure, but iron expand in summer(a bit of high temperature). Does this mean Fe would expand(bcc-fcc) then shrink(fcc-bcc) with the increase of temperature?
2.a. Does lower temperature always mean a more stable structure? However, I have heard people say that tin (Sn) becomes powder-like dust at low temperature, a clear contradiction to the idea of "stable".
2.b. A side question: Though water is not a metal, it also becomes more organized and thus expands during the transition from liquid to solid. Has the ice become more stable or less stable?
Answer: Iron is a particularly bad crystal to investigate phase stability vs temperature. The reason is the large influence of magnetic energy on the Gibbs free energy. The place to see this is in Dinsdale's compilation of elemental free energies.
So there are several interesting things to see. First, without the magnetic contributions, the stable crystal structure at room temperature would actually be hcp, transitioning to fcc, then to bcc as the temperature increases. This is a not uncommon series of transitions for various elements.
Further, Dinsdale has a plot of the P-T phase diagram, where one sees that as the pressure is increased into the GPa range the bcc phase is no longer stable. Then hcp is actually stable at room temperature and fcc is stable at higher temperatures. (To some extent this is reasonable because of the higher packing fraction of (hcp, fcc) vs bcc.)
(Tin going to 'powder' has more to do with the accumulated stresses in a solid-solid phase transition. Don't confuse that with actual phase stability.) | {
"domain": "physics.stackexchange",
"id": 71275,
"tags": "solid-state-physics, crystals"
} |
why potential becomes equal when capacitor are connected | Question: If we have a circuit as shown in which two capacitor are connected of unequal potential.
Now if we close the switch the charge will flow till potential becomes equal.
But as we know $Q=CV$,
$V=Ed$; in this, $V$ would be constant, as $E$ is constant in a capacitor. Then how will the potentials become equal?
Does the distance between the capacitor plates change, or does the capacitance change?
Answer: Charges flow until the potential difference across each capacitor is the same.
In each capacitor the product $Ed$ will be the same.
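If the two capacitors are joined plus-to-plus, the common final voltage follows from charge conservation: $V = (Q_1+Q_2)/(C_1+C_2)$. A quick numeric sketch (the standard textbook result; the values are made up):

```python
def final_voltage(C1, V1, C2, V2):
    """Common voltage after connecting two charged capacitors in parallel,
    plus-to-plus: the total charge C1*V1 + C2*V2 is conserved and
    redistributes over the combined capacitance C1 + C2."""
    return (C1 * V1 + C2 * V2) / (C1 + C2)

V = final_voltage(1e-6, 10.0, 2e-6, 4.0)   # -> 6.0 V
```

Note the final voltage lies between the two initial voltages, which is exactly why charge must flow until they match.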
The electric field in each capacitor will change because the charge density on the capacitor plates changes. | {
"domain": "physics.stackexchange",
"id": 31226,
"tags": "homework-and-exercises, electrostatics, potential, capacitance"
} |
ROS2 environment variables | Question:
Hi, I was building ROS2 from source and I already had kinetic installed on my system.
After following all the instructions here, I tried to see my environment variables to check if everything was fine(as I did with each distro of ROS1) but I got this output :
ROS_ROOT=/opt/ros/kinetic/share/ros
ROS_PACKAGE_PATH=/home/batman/rosbuild-packages:/home/batman/catkin_ws/install/share:/opt/ros/kinetic/share
ROS_MASTER_URI=http://192.168.88.10:11311
ROS_VERSION=2
ROSCONSOLE_FORMAT=${message}
ROS_HOSTNAME=192.168.88.223
ROSLISP_PACKAGE_DIRECTORIES=
ROS_DISTRO=ardent
ROS_IP=192.168.88.223
ROS_ETC_DIR=/opt/ros/kinetic/etc/ros
Here I can see ROS_VERSION as 2 and the distro variable as ardent, but ROS_PACKAGE_PATH and ROS_ROOT are still set according to kinetic. Do I need to change that, and if so, how?
Originally posted by aarushg on ROS Answers with karma: 65 on 2018-05-11
Post score: 0
Answer:
First of all you should not source both setup files if you don't have to (e.g. for building something like the ros1_bridge). Commonly you should only source one of the two. Otherwise you will likely run into problems where some packages (e.g. std_msgs) exist in both ROS versions.
Regarding the questions about ROS_PACKAGE_PATH and ROS_ROOT (and several others like ROS_MASTER_URI, ROS_HOSTNAME, ROS_IP): ROS 2 doesn't use either of these variables.
Originally posted by Dirk Thomas with karma: 16276 on 2018-05-11
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by aarushg on 2018-05-11:
Thanks!
But since I have been using kinetic for a long time, it was already sourced. Is there a way to unsource it (reset every variable) before sourcing ros2, or is the only way to unset/remove the unrequired ros1 variables individually?
Comment by Dirk Thomas on 2018-05-11:
Rather than trying to unset variables / modify the ones which have been changed by the setup script you might want to open a fresh shell (assuming that you don't source the setup file in your bashrc). If you do I would recommend to define an alias instead and explicitly call the alias to source ROS. | {
"domain": "robotics.stackexchange",
"id": 30804,
"tags": "ros, ros2, ardent"
} |
checking if string is generated by regular grammar | Question: How do I check wether a string is generated by given regular grammar?
I know you can check for it in O(N), what is the algorithm called?
Answer: A regular grammar corresponds quite closely to a non-deterministic finite automaton, reading the word in the usual way (right regular grammar) or backwards (left regular grammar). You can run an $m$-state NFA on an input of length $n$ in $O(m^2 n)$ time by tracking the set of reachable states (or in $O(n)$ time after an $O(2^m)$ determinization), which is linear if you fix the NFA. | {
"domain": "cs.stackexchange",
"id": 5919,
"tags": "regular-languages"
} |
Why does, at one time, tension balances the weight but at the other time don't balance it? | Question: Everyone is accustomed to many set-ups of Atwood machine; sometimes the question asks about acceleration of the blocks or so; sometimes they ask about tension etcetera.
But one thing I couldn't understand is how sometimes tension balances the weight of a mass, while at other times it can't.
I was viewing some worked-out problems of my previous-year class & saw this one:
The mass of the part of the string below a certain point $A$ is $m$. A block of mass $M$ is attached to the lower end of the string. Find the tension in the string at the lower end & at $A$ when
$\bullet$ $M$ is at rest.
$\bullet$ $M$ descends with acceleration $a$.
The answers are quite simple; for first part the tensions at $A$ & at the lower end are $(M+m)g$ & $Mg$ respectively & for the second case tension at $A$ is $(M + m)(g -a)$ & at the lower point is $M(g-a)$.
The author wrote for the first case as the block is in equilibrium, using the 1st law & for the later case, as the block descends with acceleration, using Newton's 2nd law. Okay, he is right.
But why is the tension different for the two cases? One might say that in the latter case an additional downward force is applied on the block so that it descends with acceleration $a$, but why can't the rope then balance that extra force? When the extra force is applied on the block, it definitely stretched the rope, which would create tension; can't that balance the extra force on the block & prevent it from accelerating?
So why does tension balance $M$ in one case but not in the other?
What I am missing. Could anyone explain?
Answer:
But why is the tension different for the two cases? One might say that in the latter case an additional downward force is applied on the block so that it descends with acceleration a, but why can't the rope then balance that extra force?
I believe you are thinking "backwards". The acceleration actually tells us what is going on. The downward force on the block (weight and/or pull from a different rope) is a different interaction than the rope tension pulling up. There is no reason to expect them to be the same because they are from different sources. The acceleration is what actually happens when the two forces are acting on the same object. The acceleration tells us that the forces cannot be equal at that moment.
When the extra force is applied on the block, it definitely stretched the rope, which would create tension; can't that balance the extra force on the block & prevent it from accelerating?
Yes, it can if something is done to change the upward tension force to become equal to the downward force. Consider this based on your statement: A rope holds a 1 kg object in equilibrium. The tension in the rope equals the weight of the object. A 500 g object is added to the 1 kg object. The rope stretches, the tension increases, then the system returns to equilibrium. In order for the rope to stretch, the system briefly accelerated downward because the downward force was greater than the tension force at the moment the 500 g was added. After accelerating downward, the tension force changed and began pulling up with a greater force and stopped the downward motion.
On the other hand, imagine the rope is a piece of stretchy plastic or gum. It holds the 1 kg object in equilibrium, but when the 500 g is added, the objects accelerate downward continually because the stretching doesn't increase the tension; in fact, it might decrease the tension, depending on the gum. From the instantaneous acceleration we can calculate what the tension in the gum is, but by observation it can't be equal to the weight.
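As a numeric check of that scenario (assuming g = 9.81 m/s² and that, at the instant the 500 g is added, the rope tension momentarily still equals the original 1 kg weight):

```python
g = 9.81            # m/s^2, assumed value
m_total = 1.5       # kg: the 1 kg object plus the added 500 g
tension = 1.0 * g   # N: rope still pulling with the original 1 kg weight

# Net downward force at that instant, and the resulting brief acceleration
net_force = m_total * g - tension   # 0.5 * g ≈ 4.9 N
a = net_force / m_total             # g / 3 ≈ 3.27 m/s^2, downward
print(round(a, 2))
```

The nonzero acceleration is exactly the "brief downward acceleration" the answer describes before the stretched rope's tension catches up.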
The tension is NOT a reaction or Newton's 3rd law pair of gravity. It is a separate force. | {
"domain": "physics.stackexchange",
"id": 24934,
"tags": "newtonian-mechanics, string"
} |
Performance Effects of Dropping ADC Least Significant Bits before DSP processing (Besides the Obvious) | Question: What would be the effect of dropping LSBs of an ADC on the performance of a DSP system, apart from the obvious reduction in dynamic range? For example, if there is a 14-bit ADC and only 10 bits are used, will it affect any other area of performance, especially if the ENOB of the ADC in the system was 10 bits to begin with?
Would it not affect the processing gain being achieved by the DSP (e.g. FFT processing gain)?
Answer:
What would be the effect of dropping LSBs of an ADC on the performance of a DSP system apart from the obvious reduction in dynamic range
None really. Technically it increases the level of quantization noise so it primarily affects your SNR.
However, if your ADC has only 10 effective bits to start with, the impact on the SNR is very small.
There is a difference between dropping the word length to 10 bits and staying with 14 bits but zeroing out the 4 LSBs: the former does indeed reduce dynamic range, while the latter only reduces SNR.
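For a rough sense of scale, the standard rule of thumb for the ideal quantization SNR of a full-scale sine is about 6.02·N + 1.76 dB per N bits, so the ideal difference between 14 and 10 bits is:

```python
def ideal_sine_sqnr_db(bits: int) -> float:
    """Ideal quantization SNR (dB) for a full-scale sine wave."""
    return 6.02 * bits + 1.76

snr_14 = ideal_sine_sqnr_db(14)   # ≈ 86.0 dB
snr_10 = ideal_sine_sqnr_db(10)   # ≈ 62.0 dB
print(snr_14 - snr_10)            # ≈ 24.1 dB of *ideal* headroom
```

Of course, with an ENOB of only 10 bits the real SNR was already near the 10-bit figure, which is the answer's point: the practical impact is small.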
Would it not affect the processing gain being achieved by the DSP (e.g. FFT processing gain)?
Not directly. However with a lower word length you may have to use different scaling management to prevent overflow and clipping. That depends on the specific algorithm, the data and the word length. | {
"domain": "dsp.stackexchange",
"id": 11564,
"tags": "fft, signal-detection, snr, analog-to-digital, quantization"
} |
Gazebo Transport Messages: No standard messages for double, uint32, string, etc? | Question:
It seems like the transport message types that come built-in with gazebo don't have any standard messages for things like a double, uint32, string, etc. I did manage to find one for an int32 under gazebo::msgs, but that's it. Am I somehow missing the rest?
Originally posted by pcdangio on Gazebo Answers with karma: 207 on 2016-04-21
Post score: 0
Answer:
For string there is GzString, and as you said there is one for int32. But I guess that's all.
For the other types you'd need to define custom messages, as explained here.
Originally posted by chapulina with karma: 7504 on 2016-04-21
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by pcdangio on 2016-04-21:
Yea I figured I'd have to make my own custom messages for the basic types. If I can summon the energy, maybe I'll integrate them into gazebo::msgs and do a pull request. | {
"domain": "robotics.stackexchange",
"id": 3906,
"tags": "gazebo"
} |
Load plugin where base class exists in another project | Question:
I'm having some trouble loading a plugin and decided to open an issue to see if it's even possible. I am creating package_B, where I'm creating a class that implements a plugin base class that exists in package_A.
Also, somewhere in package_A, they call loader->createInstance(plugin_type). When plugin_type equals the plugin I've created in package_B, it fails, saying the plugin doesn't exist. But when the plugin type is one of the plugins created/exported from package_A, it works just fine. I feel like maybe pluginlib isn't built to do what I'm trying to make it do. Thoughts?
<?xml version="1.0"?>
<package format="2">
<name>test</name>
<version>0.2.0</version>
<description>The test package</description>
<maintainer email="me@domain.com">Jordan Lack</maintainer>
<license>TODO</license>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>cmake_modules</build_depend>
<depend>eigen</depend>
<depend>hardware_interface</depend>
<depend>roscpp</depend>
<depend>pluginlib</depend>
<depend>tinyxml</depend>
<depend>boost</depend>
<depend>transmission_interface</depend>
<export>
<test plugin="${prefix}/plugins.xml" />
</export>
</package>
And here's my CMakeLists.txt
cmake_minimum_required(VERSION 2.8.3)
project(test)
find_package(catkin REQUIRED hmi_cmake hardware_interface roscpp cmake_modules pluginlib
transmission_interface)
find_package(TinyXML REQUIRED)
find_package(Boost REQUIRED)
find_package(Eigen3 REQUIRED)
# check c++11 / c++0x
include(CheckCXXCompilerFlag)
check_cxx_compiler_flag("-std=c++0x" COMPILER_SUPPORTS_CXX0X)
if (COMPILER_SUPPORTS_CXX0X)
set(CMAKE_CXX_FLAGS "-std=c++0x")
else ()
message(FATAL_ERROR "The compiler ${CMAKE_CXX_COMPILER} has no C++11 support. Please use a
different C++ compiler.")
endif ()
catkin_package(
INCLUDE_DIRS include
LIBRARIES ${PROJECT_NAME}
CATKIN_DEPENDS hardware_interface roscpp pluginlib transmission_interface
DEPENDS TinyXML Boost Eigen3
)
include_directories(include ${catkin_INCLUDE_DIRS} ${TinyXML_INCLUDE_DIRS} ${Boost_INCLUDE_DIRS}
${EIGEN3_INCLUDE_DIRS})
add_library(${PROJECT_NAME} src/sensor_parser.cpp src/transmission_interface_loader.cpp)
target_link_libraries(${PROJECT_NAME} ${catkin_LIBRARIES} ${TinyXML_LIBRARIES} ${Boost_LIBRARIES})
install(DIRECTORY include/${PROJECT_NAME}/
DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION})
install(TARGETS ${PROJECT_NAME}
ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
install(FILES plugins.xml
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION})
add_library(test_plugins test/test_plugins.cpp)
target_link_libraries(test_plugins ${catkin_LIBRARIES} ${PROJECT_NAME})
install(TARGETS test_plugins
ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
And finally my plugins.xml
<library path="lib/libtest_plugins">
<class type="test_namespace::TestActuatorPlugin" base_class_type="test_namespace::Actuator">
<description>
An actuator plugin for unit testing
</description>
</class>
<class type="test_namespace::TestImuPlugin" base_class_type="test_namespace::Imu">
<description>
An imu plugin for unit testing
</description>
</class>
<class type="test_namespace::TestForceTorquePlugin" base_class_type="test_namespace::ForceTorque">
<description>
A force/torque plugin for unit testing
</description>
</class>
<!--This next plugin is the one that ros_control can't load-->
<class type="test_namespace::MyRequisiteProvider" base_class_type="transmission_interface::RequisiteProvider">
<description>
A force/torque plugin for unit testing
</description>
</class>
</library>
Originally posted by jlack on ROS Answers with karma: 78 on 2017-06-27
Post score: 0
Original comments
Comment by jlack on 2017-06-27:
I can confirm that if I build ros_control from source and put my plugin in ros_control's ros_control_plugins.xml, it loads successfully. I feel like it has to be possible with my plugin in a different package. Isn't that how Gazebo loads other people's Gazebo plugins? Oh, package_A is ros_control.
Comment by gvdhoorn on 2017-06-28:
If I understand the description of your setup correctly then this should be fully supported, but whether it'll be succesful depends on your package.xml and CMakeLists.txt being correct. Could you edit your question and include both? Use the Preformatted Text (101010) button.
Comment by jlack on 2017-06-28:
ok @gvdhoorn I have added the requested bits.
Comment by gvdhoorn on 2017-06-28:
In order to be considered a plugin for something, your plugin needs a run_depend on that something. I can't really say whether that is the case here. See #q163476 for a related question.
Comment by jlack on 2017-06-28:
Since this is package.xml format 2, the <depend> tag should mean both <build_depend> and <run_depend> I believe. However your link raised a different question. In my package.xml, should my export line be<transmission_interface plugin="${prefix}/plugins.xml" />?
Answer:
Ok, I finally figured out what's going on. The problem is in the package.xml: since the base class for my plugin lives in transmission_interface, the export line in the package.xml needs to read
<export>
<transmission_interface plugin="${prefix}/plugins.xml" />
</export>
The pluginlib documentation does state this is the case. I just hadn't read that part.
Originally posted by jlack with karma: 78 on 2017-06-28
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2017-06-28:
re: depend being both run_ and build_ depend: you're right, that is correct.
re: export line: that was going to be my next comment.
Glad you got things to work. | {
"domain": "robotics.stackexchange",
"id": 28228,
"tags": "ros, pluginlib"
} |
Which decoupled first? The dark matter or the CMB photons? | Question: According to the theories that work best for structure formation, which happened first in the history of the evolution of the Universe: the decoupling of dark matter or the decoupling of photons? How sure are we about it?
Answer: This is the Big Bang time line:
In this review the freeze-out of dark matter candidates happens in the nucleosynthesis era, shown above at less than a second. As the photons decouple at 380,000 years, there can be no doubt that they come last. It would need a drastically different model to doubt the sequence. | {
"domain": "physics.stackexchange",
"id": 57313,
"tags": "particle-physics, cosmology, dark-matter, cosmic-microwave-background, structure-formation"
} |
Restrictions on theories which describe a particle which is a dark matter candidate | Question: Let's have a theory which describes a (cold) dark matter candidate. I know two cosmological (not astrophysical) restrictions for the particle: its lifetime has to be larger than the lifetime of the Universe, and its energy density has to correspond to the CDM density. What are the other restrictions?
Answer: Good question!
I agree with the two restrictions on dark matter (DM) that you mentioned. In total I would mention four main restrictions:
It must be non-luminous:
In practice this means no coupling (or an extremely weak one) to $U(1)_{em}$ and no coupling to $SU(3)_c$. We know it cannot interact with the strong force because, e.g., radiation of gluons would give rise, among other things, to neutral pions that decay to photons.
It must have a very weak self interaction:
Many observations constrain the self interaction properties of DM. A velocity dependent interaction however could get around the strongest constraints on some scales to give a significant self interaction on other scales.
It must be cold:
DM has to be non-relativistic during structure formation; this means it must have a mass larger than $m_\chi \gtrsim 1$ keV.
It must be stable:
If DM had a decay rate comparable to the age of the universe it would affect cosmology significantly, something we do not see.
In addition to these restrictions there are a lot of other restrictions that depends on assumptions of your theory. An important example is that of thermally produced dark matter. If you assume that DM was, at some point, in thermal equilibrium with the standard model particles, then we can use the fact that we know the current DM density to find the annihilation rate of DM. The result you end up with looks something like this:
$$ \langle \sigma v \rangle_{\text{ann}} \approx 2.5\cdot10^{-9} \text{GeV}^{-2} \approx 10^{-36} \text{cm}^2 \approx 3 \cdot10^{-26} \text{cm}^3/\text{s} \approx 1 \, \text{pb},$$
where $\langle \sigma v \rangle_{\text{ann}}$ is the thermally averaged annihilation cross section. The exact numerical value depends somewhat on the model you are looking at, but always gives about the same order of magnitude as this.
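The chain of unit conversions in the estimate above can be checked numerically using ħc ≈ 0.1973 GeV·fm and c ≈ 3×10¹⁰ cm/s (a quick sketch, not from the original answer):

```python
HBARC_GEV_CM = 0.1973e-13   # hbar*c in GeV*cm (0.1973 GeV*fm)
C_CM_PER_S = 2.998e10       # speed of light in cm/s

sigma_v_natural = 2.5e-9    # GeV^-2, the quoted <sigma v>

# Natural units -> area: 1 GeV^-2 = (hbar*c)^2 cm^2
sigma_cm2 = sigma_v_natural * HBARC_GEV_CM**2   # ~1e-36 cm^2
sigma_v_cm3_s = sigma_cm2 * C_CM_PER_S          # ~3e-26 cm^3/s
sigma_pb = sigma_cm2 / 1e-36                    # 1 pb = 1e-36 cm^2
print(sigma_cm2, sigma_v_cm3_s, sigma_pb)
```

All three values land on the quoted orders of magnitude (~10⁻³⁶ cm², ~3·10⁻²⁶ cm³/s, ~1 pb).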
The constraint from thermal production is more useful than the others because it is a specific result that your model must reproduce. That means that it doesn't just eliminate some parts of parameter space, but it (usually) lets you fix one of the parameters of your theory completely in terms of the other parameters. | {
"domain": "physics.stackexchange",
"id": 24072,
"tags": "cosmology, dark-matter"
} |
GPG error on Debian | Question:
Hi
I followed the link below:
http://wiki.ros.org/lunar/Installation/Debian
But when I run the command below:
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
This error appears:
mucip@debian-ev:~$ sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
Executing: /tmp/apt-key-gpghome.YxA1unHkRT/gpg.1.sh --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
gpg: connecting dirmngr at '/tmp/apt-key-gpghome.YxA1unHkRT/S.dirmngr' failed: IPC connect call failed
gpg: keyserver receive failed: No dirmngr
mucip@debian-ev:~$
What is the problem?...
Regards,
Mucip:)
Originally posted by Mucip on ROS Answers with karma: 26 on 2017-06-24
Post score: 0
Original comments
Comment by gvdhoorn on 2017-06-24:
Please use the Preformatted Text button (the one with 101010 on it) to format console copy-pastes. That makes them much easier to read.
Thanks.
Comment by Mucip on 2017-06-24:
Hi,
You're right. Sorry. I will use this next time...
Comment by Mucip on 2017-06-24:
Hi,
I applied format. Thanks. I am learning... ;-)
Answer:
Hi,
Following gvdhoorn's answer, I googled a little bit more and installed the packages below:
su
sudo apt-get install gnupg
sudo apt-get install dirmngr
Then I re-ran the command below:
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
And now everything is fine again... ;-)
Regards,
Mucip:)
Originally posted by Mucip with karma: 26 on 2017-06-27
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 28201,
"tags": "debian"
} |
how to represent the effect of linking rigid-bodies together? | Question: I have 2 rigid bodies (b1, b2). If I link one to the other (as if they are conjoined together), how do I represent b1's effect on b2 and b2's effect on b1?
Is there any law that affects the position/orientation of the other body?
notes :
I am using quaternions for orientations.
I don't want to treat them as one body.
I have only primitive shapes (box, sphere, ...) to link.
Answer: The open-source physics engine ODE allows you to connect two bodies using any of a number of different joints. One of those joints is the "Fixed" joint. It's much more stable, in the physics engine, to represent the two bodies as a single body but maintain two separate geometries for collision purposes. However, ODE probably handles collision detection/resolution differently from what you have in mind. It only detects collision after one frame of interpenetration and then constrains the relative velocity of the colliding bodies in such a way as to force them apart on the next time step. That type of constraint is much easier to satisfy for a single rigid body than for two, but perhaps you're actually preventing penetration and so need a different technique.
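Outside a physics engine, the kinematic effect of such a rigid link can be written directly: given b1's pose, b2's pose follows from a constant local offset and a constant relative quaternion. A minimal sketch (the helper names and example values are made up for illustration):

```python
def q_mul(q, r):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    """Rotate vector v by unit quaternion q: q * (0, v) * q^-1."""
    q_conj = (q[0], -q[1], -q[2], -q[3])
    return q_mul(q_mul(q, (0.0, *v)), q_conj)[1:]

def linked_pose(p1, q1, offset, q_rel):
    """Pose of b2 rigidly attached to b1 at a fixed local offset/orientation."""
    d = rotate(q1, offset)
    p2 = tuple(a + b for a, b in zip(p1, d))
    return p2, q_mul(q1, q_rel)

# b1 at the origin with identity orientation; b2 one unit along b1's x-axis
p2, q2 = linked_pose((0, 0, 0), (1, 0, 0, 0), (1, 0, 0), (1, 0, 0, 0))
print(p2, q2)
```

This is only the kinematic half of what a fixed joint enforces; a dynamics engine additionally couples forces and torques (or, as the answer suggests, merges the bodies' mass properties into one).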
The fixed joint simply constrains the two bodies to have zero relative angular velocity and zero relative linear velocity (and also has an error correction term to eliminate small numerical drift). After that, the LCP solver handles the rest. | {
"domain": "physics.stackexchange",
"id": 10720,
"tags": "classical-mechanics, rotational-dynamics, rigid-body-dynamics"
} |
Difference Equations, why M<=N for causality? | Question: in my notes for DSP they have the difference equation general form as:
y(k) + a1*y(k-1) + a2*y(k-2) + ... + an*y(k-N) = b0*x(k) + b1*x(k-1) + ... + bm*x(k-M)
with the claim that, for the output not to depend on future values of the input, M <= N must hold.
Why is this the case?
In fact, if I'm not missing something here I think I can provide a counterexample:
y(k) = x(k-1)
which has N = 0 and M = 1.
But this is just a unit delay, the output is just the previous value of the input.
What's going on here?
Answer: You're right, it's simply not true that $M\le N$ is necessary for causality. The difference equation in your question can always be implemented by a causal system.
However, note that the difference equation is not uniquely related to a causal system. A simple example is
$$y[n]=ay[n-1]+bx[n]\tag{1}$$
This can obviously be implemented by a causal system. But for $a\neq 0$ you can also rewrite (1) as
$$y[n-1]=\frac{1}{a}(y[n]-bx[n])\tag{2}$$
which suggests an anti-causal system, even though (1) and (2) are completely equivalent. This difference is reflected by the transfer functions ($\mathcal{Z}$-transforms) of the corresponding systems. For a causal system the region of convergence (ROC) is outside a circle enclosing all poles, whereas for an anti-causal system the ROC is inside a circle outside of which all poles are located. | {
"domain": "dsp.stackexchange",
"id": 2837,
"tags": "discrete-signals"
} |
EKF rejects timestamp more than the cache length earlier than newest transform cache | Question:
Hi,
Having a px4flow optical flow sensor, I want to convert its opt_flow_rad message into a TwistWithCovarianceStamped, which I can use in my EKF localization node.
However, the ekf node doesn't accept my twists. It warns (I configured my ros log to show nodes instead of time):
[ WARN] [/ekf_localization_node]:
WARNING: failed to transform from
/px4flow->base_link for twist0 message
received at 3370.342710000. The
timestamp on the message is more than
the cache length earlier than the
newest data in the transform cache.
The message conversion (optflow_odometry) works like this:
#include <ros/ros.h>
#include <geometry_msgs/TwistWithCovarianceStamped.h>
#include <px_comm/OpticalFlowRad.h>
ros::Publisher twist_publisher;
void flow_callback (const px_comm::OpticalFlowRad::ConstPtr& opt_flow) {
// Don't publish garbage data
if(opt_flow->quality == 0){ return; }
geometry_msgs::TwistWithCovarianceStamped twist;
twist.header = opt_flow->header;
// translation from optical flow, in m/s
twist.twist.twist.linear.x = (opt_flow->integrated_x/opt_flow->integration_time_us)/opt_flow->distance;
twist.twist.twist.linear.y = (opt_flow->integrated_y/opt_flow->integration_time_us)/opt_flow->distance;
twist.twist.twist.linear.z = 0;
// rotation from integrated gyro, in rad/s
twist.twist.twist.angular.x = opt_flow->integrated_xgyro/opt_flow->integration_time_us;
twist.twist.twist.angular.y = opt_flow->integrated_ygyro/opt_flow->integration_time_us;
twist.twist.twist.angular.z = opt_flow->integrated_zgyro/opt_flow->integration_time_us;
// Populate covariance matrix with uncertainty values
twist.twist.covariance.assign(0.0); // We say that generally, our data is uncorrelated to each other
// However, we have uncertainties for x, y, z, rotation about X axis, rotation about Y axis, rotation about Z axis
double uncertainty = pow(10, -1.0 * opt_flow->quality / (255.0/6.0));
for (int i=0; i<36; i+=7)
twist.twist.covariance[i] = uncertainty;
twist_publisher.publish(twist);
}
int main(int argc, char** argv) {
ros::init(argc, argv, "optflow_odometry");
ros::NodeHandle n;
ros::Subscriber flow_subscriber = n.subscribe("/px4flow/opt_flow_rad", 100, flow_callback);
twist_publisher = n.advertise<geometry_msgs::TwistWithCovarianceStamped>("visual_odom", 50);
ros::spin();
return 0;
}
My launchfile looks like this:
<?xml version="1.0"?>
<launch>
<!-- Optical Flow Sensor -->
<node pkg="tf" type="static_transform_publisher" name="base_link_to_px4flow" args="0.14 0 0 0 0 0 /base_link /px4flow 50" />
<node name="px4flow" pkg="px4flow" type="px4flow_node" respawn="true" output="screen">
<param name="serial_port" value="/dev/serial/by-id/usb-3D_Robotics_PX4Flow_v1.3_000000000000-if00"/>
</node>
<node pkg="tas_odometry" type="optflow_odometry" name="optflow_odometry" output="screen"/>
<!-- EKF for odom->base_link -->
<node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization_node" output="screen">
<param name="map_frame" value="map"/>
<param name="world_frame" value="odom"/>
<param name="odom_frame" value="odom"/>
<param name="base_link_frame" value="base_link"/>
<param name="print_diagnostics" value="true"/>
<param name="frequency" value="100"/>
<param name="two_d_mode" value="true"/>
<param name="twist0" value="/visual_odom"/>
<rosparam param="twist0_config">
[false, false, false, # x, y, z,
false, false, false, # roll, pitch, yaw,
true, true, false, # vx, vy, vz, ---> vx, vy from optical flow
false, false, true, # vroll, vpitch, vyaw, ---> use px4flow gyro
false, false, false] # ax, ay, az
</rosparam>
</node>
</launch>
Lastly, here are my TF Tree and Node Graph:
What do I have to change to get this working?
Cheers
Laurenz
Originally posted by lalten on ROS Answers with karma: 102 on 2016-01-12
Post score: 1
Original comments
Comment by sloretz on 2016-01-13:
How did you come up with the formula converting uncertainty to covariance?
Comment by lalten on 2016-01-14:
opt_flow->quality is 0 at min and 255 max. I wanted high uncertainty (~1) for low quality and low uncertainty for high quality (1e-6). The exponential curve puts emphasis on higher quality.
Answer:
It works when you change the header construction of the converted message. The timestamp must be reset and the leading / in the frame_id must be removed.
I'm not sure changing the timestamps is a good idea, but it works like this.
// Build new message from old header
geometry_msgs::TwistWithCovarianceStamped twist;
twist.header = opt_flow->header;
twist.header.stamp = ros::Time::now(); // Otherwise the timestamp on the message is more than the cache length earlier than the newest data in the transform cache
if(twist.header.frame_id.substr(0,1) == "/")
twist.header.frame_id = twist.header.frame_id.erase(0,1); // Otherwise: error, because tf2 frame_ids cannot start with a '/'
Originally posted by lalten with karma: 102 on 2016-01-12
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by tfoote on 2016-01-12:
The timestamp in the range of 3000 suggests that it's not time since epoch. Maybe time since startup. For a more accurate timestamp you need to calibrate the offset between the sensors timestamps and ros time, and then apply that correction instead of stamping it with the current time of receipt.
Comment by lalten on 2016-01-14:
Good idea, that's probably the reason! Thanks :) | {
"domain": "robotics.stackexchange",
"id": 23411,
"tags": "ros, ekf, static-transform-publisher, ekf-localization-node, ros-indigo"
} |
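As an aside, the quality-to-variance mapping used in the conversion node above can be sanity-checked: per the comment in the question, quality 0 should map to variance 1 and quality 255 to 1e-6.

```python
def quality_to_variance(quality: int) -> float:
    """Same mapping as the C++ callback: 10 ** (-quality / (255/6))."""
    return 10 ** (-1.0 * quality / (255.0 / 6.0))

print(quality_to_variance(0), quality_to_variance(255))  # 1.0 and ~1e-6
```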
How can the amplitude of a scattered wave be greater than the amplitude of the incident wave (at resonance)? | Question: When waves are scattered at resonance, the amplitude of the scattered wave can be much greater than the incoming wave. But where is the energy coming from that allows the increased amplitude of the scattered wave?
Answer: Without a specific example it's hard to answer this question, but generally speaking, resonance, when it exists in a system, tends to amplify energy at the specific resonant frequency. Conservation of energy applies, so what you may see is that energy is drawn from other frequency bands and/or locations in the system to concentrate energy within the space where resonance occurs. | {
"domain": "physics.stackexchange",
"id": 41359,
"tags": "electromagnetism, energy, waves, acoustics, scattering"
} |
Catkin invoke script built in other package | Question:
I am trying to use add_custom_target to invoke a binary during the build process. This is tricky because the binary is built by another Catkin package and is installed into its libexec directory. I can't figure out how to make this work when both packages are built in the same Catkin workspace. I am also not sure how to call the binary in a way that is portable between the devel and install spaces.
These seem to be two separate issues:
How do I make my custom target depend on the binary built by the other package? I can't simply add it as a dependency, because that would fail when the other package is in a chained workspace.
What is the correct way of resolving the path to the binary? Should I use rosrun, rospack, or catkin_find? Or do I need to somehow export a CMake variable that contains the path?
Originally posted by mkoval on ROS Answers with karma: 524 on 2014-06-15
Post score: 1
Answer:
I found a few other answers that hint at the solution. This is what I finally came up with that works:
I created a file my_package-extras.cmake.em that exports CMake variables that point to all of the binary files that I plan to use during the build of another Catkin project. This file detects whether we are doing a devel build or an install build---and sets the paths appropriately---by checking whether my_package_SOURCE_DIR is defined.
Finally, I added the option CFG_EXTRAS my_package-extras.cmake to the catkin_package() call. This file will be included in any CMake files that find_package(my_package). Therefore, those CMake files can use the variables defined in that file to depend on (to ensure that it's built) and call the binaries exported by my_package.
Originally posted by mkoval with karma: 524 on 2014-06-19
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 18277,
"tags": "catkin, cmake"
} |
Why is it an advantage "that Markov chains are never needed" to obtain gradients? | Question: In the original GAN (Generative Adversarial Network) paper, Generative adversarial networks by I. Goodfellow, J. Pouget-Abadie, M. Mirza et. al. they state an advantage of the GAN is "that Markov chains are never needed, only backprop is used to obtain gradients, no inference is needed during learning" (Section 6 of paper).
I don't understand why this is an advantage? If we look at this statement from the other way around, why would using Markov chains be a disadvantage?
Answer: Implementation Considerations
Markov chains are sequential, because they describe one state t_0 based on the previous state t_-1. When you have long Markov chains, you basically have a long sequence of calculations, each relying on the previous state to be calculated. Due to this sequential nature, parallelizing the computation, as you can do with the gradients and inference in a neural net, is not possible.
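A toy illustration of that sequential dependency (the two-state chain below is made up): each draw needs the previous state, so the loop cannot be parallelized across steps the way a feed-forward generator pass can.

```python
import random

# Made-up 2-state chain: P[s][t] = probability of moving from state s to state t
P = {0: [0.9, 0.1], 1: [0.5, 0.5]}

def sample_chain(n_steps, seed=0, start=0):
    rng = random.Random(seed)
    state, states = start, []
    for _ in range(n_steps):
        # Each draw depends on the current state -> inherently sequential
        state = 0 if rng.random() < P[state][0] else 1
        states.append(state)
    return states

print(sample_chain(10))
```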
Architecture Considerations
In the paper they state several issues with Markov chains:
"[...] methods based on Markov chains require that the distribution be somewhat blurry in order for the chains to be able to mix between modes."
In addition, in Section 6 they state that Markov chains require "blurry" data distributions as opposed to GANs, which can represent "sharp" distributions as well.
Section 2, first paragraph: They talk about Markov chains as a means for approximating the partition function of Deep/Restricted Boltzman Machines, which would otherwise be intractable. However, they state that mixing is a problem here. I am not sure what they mean by mixing here.
As I understand the caption of Figure 2, they state that Markov chain mixing leads to correlated samples. This might be due to the seed, i.e., the initial state, that you must provide in order to sample from a Markov chain. | {
"domain": "datascience.stackexchange",
"id": 11194,
"tags": "gradient-descent, backpropagation, gan, generative-models, markov"
} |
Why does sending $T\rightarrow \infty(1-i\epsilon)$ in the slightly imaginary direction cause the $n=0$ term to decay slower? | Question: This is in reference to equation 4.27 in Peskin and Schroeder. To derive a formula for the interacting vacuum in terms of the free vacuum we evolve the free vacuum in time with the full Hamiltonian and then take the limit as $T\rightarrow \infty(1-i\epsilon)$. We are taking the limit in a "slightly imaginary direction" so that the exponential factor $e^{-iE_nT}$ dies slowest for $n=0$. My question is why this is the case.
The equation for reference: $$e^{-iHT}|0\rangle=e^{-iE_0T}|\Omega\rangle\langle\Omega|0\rangle+\sum_{n\neq 0}e^{-iE_nT}|n\rangle\langle n|0\rangle. \tag{p.86}$$ In which $|0\rangle$ is the free vacuum and $|\Omega\rangle$ is the interacting vacuum and $|n\rangle$ are eigenstates of the full Hamiltonian, $H$.
Answer: From the standard mathematics of complex exponentials:
$$\begin{align}
e^{-iE\,t(1-i\epsilon)} &= e^{-iE\,t}e^{-E\,\epsilon\, t} \\
&= e^{-E\,\epsilon\,t}\left(\cos(Et) - i \sin(Et)\right)
\end{align}$$
Since, definitionally, $n=0$ is the lowest possible value for $E$, and it appears in a negative exponential in front of a term of magnitude 1, the $n=0$ state falls off most slowly for all $\epsilon > 0$. | {
"domain": "physics.stackexchange",
"id": 72035,
"tags": "quantum-field-theory, vacuum, interactions, regularization, s-matrix-theory"
} |
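Numerically, the magnitude of $e^{-iE\,t(1-i\epsilon)}$ is $e^{-E\epsilon t}$, so the ratio of the lowest-energy term to any higher-energy term grows without bound as $t$ increases (toy values below, not from the source):

```python
import cmath

def magnitude(E, t, eps):
    """|exp(-i*E*t*(1 - i*eps))| = exp(-E*eps*t)"""
    return abs(cmath.exp(-1j * E * t * (1 - 1j * eps)))

E0, E1, eps = 1.0, 2.0, 0.01  # assumed toy energies and epsilon
for t in (10.0, 100.0, 1000.0):
    # Ratio of the n=0 term to an n>0 term keeps growing with t
    print(t, magnitude(E0, t, eps) / magnitude(E1, t, eps))
```

With $\epsilon = 0$ every term has magnitude 1 and nothing is projected out, which is exactly why the slightly imaginary direction is needed.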
Unary in $P$, binary not in $P$ | Question: I would like to know if there is a known decision problem with the following characteristics:
Represented in unary, the problem is decidable in polynomial time.
Represented in binary, the problem is not decidable in polynomial time (and this fact has been proved, not just conjectured).
For example, Subset-Sum is in $P$ when represented in unary, but it is $NP$-complete in binary. However, this problem does not satisfy my second requirement because we do not know whether $P=NP$.
Answer: Every single-exponential (i.e. known to be solvable in $O(c^n)$ for some constant $c$) EXPTIME-complete problem will meet your requirements.
For example, see checking thorough refinement on finite modal transition systems. | {
"domain": "cs.stackexchange",
"id": 2990,
"tags": "complexity-theory, polynomial-time"
} |
Returning MIME types according to input filenames' extensions | Question: I am coding on codingame.com, where one of the 'easy' challenges is to assign MIME type strings to input strings of file names based on their extensions.
My code passes four out of five of the test cases, and the last one is an optimization test. A big data set is inputted, and my approach times out.
I started coding in python under a year ago, and I haven't had much practice. I would like to know what I could do to improve my code.
I have seen really short solutions to other puzzles in the 'other solutions' tab, so I am not surprised if there is a clever workaround to make this shorter.
import sys
import math
n = int(input()) # Number of elements which make up the association table.
q = int(input()) # Number Q of file names to be analyzed.
MIMETable = {}
fileNames = {}
extensions = {}
answer = ''
#extract association input to table
for i in range(n):
# ext: file extension
# mt: MIME type.
ext, mt = input().split()
MIMETable [ext.lower()] = mt
#print(str(ext) + '\t| ' + str(mt), file=sys.stderr)
#print('\n', file=sys.stderr)
#extract filename input
for i in range(q):
fname = input() # One file name per line.
fileNames[i] = fname
#print(str(i) + ' ' + fname, file=sys.stderr)
# find the extensions of the filenames
# and add them to extensions{} in lowercase as they are in MIMETable
for index, name in fileNames.items():
if '.' in name:
try:
extensions[index] = name.split('.')[-1].lower()
except IndexError:
extensions[index] = 'unknown'
else:
extensions[index] = 'unknown'
#print(extensions[index], file=sys.stderr)
#print('', file=sys.stderr)
#if there is an extension, find the corresponding MIME type
for fileIndex in range(q):
extensionFound = False
for mimeExtension, mimetype in MIMETable.items():
if extensions[fileIndex] == mimeExtension:
answer += mimetype
extensionFound = True
#print('extension found: ' + extensions[fileIndex] + '\t' + mimetype, file=sys.stderr)
if not extensionFound:
answer += 'UNKNOWN'
answer += '\n'
print('\n', file=sys.stderr)
# For each of the Q filenames, display on a line the corresponding MIME type. If there is no corresponding type, then display UNKNOWN.
print(answer)
Answer:
You should use a list rather than a dictionary for fileNames or extensions.
You don't get any benefit from using a dictionary,
and it actually makes the code slightly harder to use.
You should utilize a dictionary's \$O(1)\$ key lookup, rather than re-implement an \$O(n)\$ look-up.
This is because extensions[fileIndex] in MIMETable is already fast.
Utilize dictionary functions such as get(key, default).
You don't need three separate loops, as they are all just different sections of transforming from a file name to a MIME type.
It would also be easier to read it without the loops too.
You should follow PEP 8; this means your names should be MIME_table rather than MIMETable.
Also MIME_table looks odd, so I'd change it to mime_table.
You don't need math or sys, so I'd remove those imports.
Your commented-out prints are leftovers from debugging;
instead of commenting out prints when you're debugging,
I'd recommend that you use Python's logging library.
Without merging the loops together, you can change your code to:
amount_mime_types = int(input())
amount_file_names = int(input())
mime_table = {}
for i in range(amount_mime_types):
ext, mt = input().split()
mime_table[ext.lower()] = mt
file_names = []
for _ in range(amount_file_names):
file_names.append(input())
extensions = []
for name in file_names:
if '.' in name:
extensions.append(name.split('.')[-1].lower())
else:
extensions.append('unknown')
mime_types = []
for extension in extensions:
mime_types.append(mime_table.get(extension, 'UNKNOWN'))
print('\n'.join(mime_types))
Where if you join the file_names, extensions and mime_types loops together you get:
amount_mime_types = int(input())
amount_file_names = int(input())
mime_table = {}
for i in range(amount_mime_types):
ext, mt = input().split()
mime_table[ext.lower()] = mt
mime_types = []
for _ in range(amount_file_names):
name = input()
if '.' in name:
extension = name.split('.')[-1].lower()
else:
extension = 'unknown'
mime_type = mime_table.get(extension, 'UNKNOWN')
mime_types.append(mime_type)
print('\n'.join(mime_types))
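Going a step further (the function name and signature here are my own suggestion, not part of the puzzle), the whole transformation fits into one small, testable function; reading input and printing stays in the caller:

```python
def to_mime_types(mime_pairs, file_names):
    """Map each file name to its MIME type, or 'UNKNOWN' when unmatched."""
    table = {ext.lower(): mime for ext, mime in mime_pairs}
    mime_types = []
    for name in file_names:
        # the extension is whatever follows the last dot, compared case-insensitively
        extension = name.rsplit('.', 1)[-1].lower() if '.' in name else ''
        mime_types.append(table.get(extension, 'UNKNOWN'))
    return mime_types
```

Because local variable access is cheaper than global access in CPython, moving the loop into a function also tends to run slightly faster.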
There are still a couple ways to improve this:
You can write all this in a function.
This also improves the speed of the program.
Pre-define mime_types to an 'array' of None. mime_types = [None] * amount_file_names.
This is as lists sometimes have to move and is more a 'just in case' optimisation. | {
"domain": "codereview.stackexchange",
"id": 22530,
"tags": "python, performance, beginner, python-3.x"
} |
Reconstruction of Audio Signal from its Absolute Spectrogram | Question: I have the absolute spectrogram of an audio signal.
I lost the phase data of the spectrogram because of various processing applied to the original spectrogram of the signal.
I'm trying to reconstruct the audio signal in a meaningful (audible) manner from the absolute value of the spectrogram only.
The obvious inverse won't work (the DFT inverse of the absolute value), since the phase is significant.
The spectrogram is the result of fusing a few audio signals, as I'm trying to create a smooth transition between audio signals.
Does anyone have experience with this problem or procedure?
Could anyone refer me to code, an article, etc.?
Thanks.
Answer: One thing commonly done (for example in the source separation community) is to use the phase data of the original signal (before transformations were applied to it) - the result is much better than null or random phase, and not so far from algorithms aiming at reconstructing the phase information from scratch.
A classic reconstruction algorithm is Griffin&Lim's, described in the paper "Signal estimation from modified short-time Fourier transform". This is an iterative algorithm, each iteration requires a full STFT / inverse STFT, which makes it quite costly.
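A minimal sketch of the Griffin & Lim iteration using SciPy (the parameters and the random initial phase are arbitrary choices of mine; production implementations add momentum and smarter initialization):

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(magnitude, n_iter=32, nperseg=256, seed=0):
    """Estimate a time signal whose STFT magnitude approximates `magnitude`."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(magnitude.shape))
    for _ in range(n_iter):
        # synthesize with the current phase guess, then re-analyse
        _, x = istft(magnitude * phase, nperseg=nperseg)
        _, _, spec = stft(x, nperseg=nperseg)
        phase = np.exp(1j * np.angle(spec))  # keep the phase, discard magnitude
    _, x = istft(magnitude * phase, nperseg=nperseg)
    return x
```

Note each iteration pays for one full STFT/inverse-STFT pair, which is exactly the cost mentioned above; recent versions of librosa also ship a ready-made `griffinlim` if a maintained implementation is preferred.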
This problem is indeed an active area of research, a search for STFT + reconstruction + magnitude will yield plenty of papers aiming at improving on Griffin&Lim in terms of signal quality and/or computational efficiency. | {
"domain": "dsp.stackexchange",
"id": 456,
"tags": "audio, spectrogram"
} |
What are the differences between a liposome and simple cell membrane | Question: I'm learning about chemical evolution and the transition from RNA to protocells to cells. I understand that the protocells have a phospholipid structure called a liposome.
I recently saw a video showing CG animations of the workings of a cell, and the cell membrane seemed much more complex than what a liposome appears to be.
I'm looking for information about the differences between a liposome and a basic cell membrane (I'm not sure how this differs across cell types,
but I guess prokaryotes would be the place to start, since they came first).
Answer: By looking on wikipedia I found:
About liposome
A liposome is a spherical vesicle having at least one lipid bilayer. The liposome can be used as a vehicle for administration of nutrients and pharmaceutical drugs. Liposomes can be prepared by disrupting biological membranes (such as by sonication).
Liposomes are most often composed of phospholipids, especially
phosphatidylcholine, but may also include other lipids, such as egg
phosphatidylethanolamine, so long as they are compatible with lipid
bilayer structure. A liposome design may employ surface ligands for
attaching to unhealthy tissue.
About cell membrane
The cell membrane (also known as the plasma membrane or cytoplasmic
membrane) is a biological membrane that separates the interior of all
cells from the outside environment. The cell membrane is selectively
permeable to ions and organic molecules and controls the movement of
substances in and out of cells. The basic function of the cell
membrane is to protect the cell from its surroundings.
It consists of the lipid bilayer with embedded proteins. Cell
membranes are involved in a variety of cellular processes such as cell
adhesion, ion conductivity and cell signalling and serve as the
attachment surface for several extracellular structures, including the
cell wall, glycocalyx, and intracellular cytoskeleton. Cell membranes
can be artificially reassembled.
So the major difference seems to be the presence of "embedded proteins" inside the cell membrane. | {
"domain": "biology.stackexchange",
"id": 7656,
"tags": "molecular-biology, cell-membrane"
} |
Google Foobar challenge: Exiting a space station maze, where one wall may be removed | Question: I'm solving a Google Foobar challenge. My code runs perfectly in Eclipse, but when I verify it on Foobar it says
Execution took too long.
The question is - You have maps of parts of the space station, each starting at a prison exit and ending at the door to an escape pod. The map is represented as a matrix of 0s and 1s, where 0s are passable space and 1s are impassable walls. The door out of the prison is at the top left (0,0) and the door into an escape pod is at the bottom right (w-1,h-1).
Write a function answer(map) that generates the length of the shortest path from the prison door to the escape pod, where you are allowed to remove one wall as part of your remodeling plans. The path length is the total number of nodes you pass through, counting both the entrance and exit nodes. The starting and ending positions are always passable (0). The map will always be solvable, though you may or may not need to remove a wall. The height and width of the map can be from 2 to 20. Moves can only be made in cardinal directions; no diagonal moves are allowed.
Test cases
Inputs:(int) maze = [[0, 1, 1, 0], [0, 0, 0, 1], [1, 1, 0, 0], [1, 1,
1, 0]]
Output: (int) 7
Inputs: (int) maze = [[0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 0], [0, 0,
0, 0, 0, 0], [0, 1, 1, 1, 1, 1], [0, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0,
0]] Output: (int) 11
My code :
class Maze{
Maze(int i,int j){
this.flag=false;
this.distance=0;
this.x=i;
this.y=j;
}
boolean flag;
int distance;
int x;
int y;
}
public class Answer{
public static boolean isPresent(int x,int y,int r,int c)
{
if((x>=0&&x<r)&&(y>=0&&y<c))
return true;
else
return false;
}
public static int solveMaze(int[][] m,int x,int y,int loop)
{
int r=m.length;
int c=m[0].length;
int result=r*c;
int min=r*c;
Maze[][] maze=new Maze[r][c];//Array of objects
for(int i=0;i<r;i++)
{
for(int j=0;j<c;j++)
{
maze[i][j]=new Maze(i,j);
}
}
Queue<Maze> q=new LinkedList<Maze>();
Maze start=maze[x][y];
Maze[][] spare=new Maze[r][c];
q.add(start);//Adding source to queue
int i=start.x,j=start.y;
while(!q.isEmpty())
{
Maze temp=q.remove();
i=temp.x;j=temp.y;
int d=temp.distance;//distance of a cell from source
if(i==r-1 &&j==c-1)
{
result=maze[i][j].distance+1;
break;
}
maze[i][j].flag=true;
if(isPresent(i+1,j,r,c)&&maze[i+1][j].flag!=true)//check down of current cell
{
if(m[i+1][j]==0)//if there is path, add it to queue
{
maze[i+1][j].distance+=1+d;
q.add(maze[i+1][j]);
maze[i][j].flag=true;
}
if(m[i+1][j]==1 && maze[i+1][j].flag==false && loop==0)//if there is no path, see if breaking the wall gives a path.
{
int test=solveMaze(m,i+1,j,1);
if(test>0)
{
test+=d+1;
min=(test<min)?test:min;
}
// maze[i+1][j].flag=true;
}
}
if(isPresent(i,j+1,r,c)&&maze[i][j+1].flag!=true)//check right of current cell
{
if(m[i][j+1]==0)
{
maze[i][j+1].distance+=1+d;
q.add(maze[i][j+1]);
}
if(m[i][j+1]==1 && maze[i][j+1].flag==false && loop==0)
{
int test=solveMaze(m,i,j+1,1);
if(test>0)
{
test+=d+1;
min=(test<min)?test:min;
}
maze[i][j+1].flag=true;
}
}
if(isPresent(i-1,j,r,c)&&maze[i-1][j].flag!=true)//check up of current cell
{
if(m[i-1][j]==0)
{
maze[i-1][j].distance+=1+d;
q.add(maze[i-1][j]);
}
if(m[i-1][j]==1 && maze[i-1][j].flag==false && loop==0)
{
int test=solveMaze(m,i-1,j,1);
if(test>0)
{
test+=d+1;
min=(test<min)?test:min;
}
maze[i-1][j].flag=true;
}
}
if(isPresent(i,j-1,r,c)&&maze[i][j-1].flag!=true)//check left of current cell
{
if(m[i][j-1]==0)
{
maze[i][j-1].distance+=1+d;
q.add(maze[i][j-1]);
}
if(m[i][j-1]==1 && maze[i][j-1].flag==false && loop==0)
{
int test=solveMaze(m,i,j-1,1);
if(test>0)
{
test+=d+1;
min=(test<min)?test:min;
}
maze[i][j-1].flag=true;
}
}
}
return ((result<min)?result:min);
}
public static int answer(int[][] m)
{
int count;
int r=m.length;
int c=m[0].length;
count=solveMaze(m,0,0,0);
return count;
}
public static void main(String[] args)
{
Scanner sc=new Scanner(System.in);
System.out.println("enter row size ");
int m=sc.nextInt();
System.out.println("enter column size ");
int n=sc.nextInt();
int[][] maze=new int[m][n];
System.out.println("Please enter values for maze");
for(int i=0;i<m;i++)
{
for(int j=0;j<n;j++)
{
maze[i][j]=sc.nextInt();
}
}
int d=answer(maze);
System.out.println("The maze can be solved in "+d+" steps");
}
}
Answer: It looks like your current implementation is wrong. Running it on a simple 2x2 square without walls should give a path length of 3 (three nodes: start, move right, move down), but your program answers with 4.
Because it's wrong, we're not supposed to even answer this question, but let me give you some tips to get started anyway (without actually fixing your problem, that's still your own job).
Test without manual input
Instead of inputting the numbers each time, it's easier to test if you have some mazes hard coded. This can be easily achieved like this:
public static void main(String[] args) {
int[][] maze = new int[][]{
{0, 0},
{0, 0}};
// int[][] maze = new int[][]{
// {0, 1, 1, 0},
// {0, 0, 0, 1},
// {1, 1, 0, 0},
// {1, 1, 1, 0}};
// int[][] maze = new int[][]{
// {0,1,0},
// {1,0,0},
// {0,0,0}};
int d = answer(maze);
System.out.println("The maze can be solved in " + d + " steps");
}
You can comment/uncomment to use another maze.
Names
It's generally discouraged to use single letter variables. This makes your code hard to read. The only exceptions are i, j in for loop indices and some really specific cases.
So rename the variables as follows:
r -> maxRow
c -> maxCol
i -> row
j -> col
q -> queue
spare -> (removed completely as it is unused)
class Maze
Your Maze class does not actually represent a maze. It represents a single square in a maze. So I would rename it to Square or Tile to better convey its purpose.
Copy paste
You have this piece of code exactly 4 times:
if (isPresent(row + 1, col, maxRow, maxCol) && maze[row + 1][col].flag != true)//check down of current cell
{
if (m[row + 1][col] == 0)//if there is path, add it to queue
{
maze[row + 1][col].distance += 1 + d;
queue.add(maze[row + 1][col]);
maze[row][col].flag = true;
}
if (m[row + 1][col] == 1 && maze[row + 1][col].flag == false && loop == 0)//if there is no path, see if breaking the wall gives a path.
{
int test = solveMaze(m, row + 1, col, 1);
if (test > 0) {
test += d + 1;
min = (test < min) ? test : min;
}
// maze[i+1][j].flag=true;
}
}
The only difference are the indices. Wouldn't it be nice if you could extract this into a method instead?
To do this cleanly (without passing everything as parameters) you might want to refactor the entire class. I think it would be a decent idea to have some fields inside a MazeSolver that represent the current state of the solution: for example the queue, the maze, and perhaps even the current minimum distance.
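As a side note on the algorithm itself: the standard way to handle the single wall removal is one breadth-first search whose state is (row, col, walls removed) instead of just a cell, which avoids the nested solveMaze calls entirely. A hedged sketch (class and variable names are mine, not from the original code):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class MazeBfs {

    /** Shortest path length counting both ends, allowing one wall removal. */
    public static int answer(int[][] map) {
        int h = map.length, w = map[0].length;
        // seen[row][col][wallsRemoved] marks states already visited
        boolean[][][] seen = new boolean[h][w][2];
        Queue<int[]> queue = new ArrayDeque<>();
        queue.add(new int[]{0, 0, 0, 1}); // row, col, wallsRemoved, pathLength
        seen[0][0][0] = true;
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {
            int[] cur = queue.remove();
            if (cur[0] == h - 1 && cur[1] == w - 1) {
                return cur[3]; // BFS guarantees the first arrival is shortest
            }
            for (int[] m : moves) {
                int r = cur[0] + m[0], c = cur[1] + m[1];
                if (r < 0 || r >= h || c < 0 || c >= w) {
                    continue;
                }
                int walls = cur[2] + map[r][c]; // stepping on a 1 spends the removal
                if (walls > 1 || seen[r][c][walls]) {
                    continue;
                }
                seen[r][c][walls] = true;
                queue.add(new int[]{r, c, walls, cur[3] + 1});
            }
        }
        return -1; // unreachable per the problem statement
    }
}
```

Each of the 2·h·w states is expanded at most once, so this runs in O(h·w) even on the 20×20 worst case.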
Result variable?
If I understood most of your code correctly your Maze[][] array already keeps track of the distance to reach that square. Doesn't this mean you could just return maze[maxRow-1][maxCol-1].distance?
Perhaps some logic needs to be changed to also handle passing through a wall though. I didn't fully understand how your code handled this cleanly (mostly because it didn't give the correct answer either). | {
"domain": "codereview.stackexchange",
"id": 25994,
"tags": "java, programming-challenge, time-limit-exceeded, pathfinding"
} |
Secure logout: session termination | Question: I've been reading the security issue on logging out from a website system written in PHP, using sessions.
My current code is:
session_start();
if (isset($_SESSION["logged_in"])) {
unset($_SESSION["logged_in"]);
unset($_SESSION["ss_fprint"]);
unset($_SESSION["alive"]);
session_destroy();
session_regenerate_id(true);
}
// NEW MODIFIED CODE
session_start();
if (isset($_SESSION["logged_in"])) {
$_SESSION = array();
if (ini_get("session.use_cookies")) {
$params = session_get_cookie_params();
setcookie(session_name(), '', time() - 42000,
$params["path"], $params["domain"],
$params["secure"], $params["httponly"]
);
}
session_destroy();
header("Location: ../index.php");
die();
} else {
header("Location: ../online.php");
die();
}
I use this class.
The code from the class should ensure and protect against hijacking and capture and fixation.
I have generated a session with this code from the above link, and I want to logout properly.
I tried print_r() out all $_SESSION data, and it was empty after I ran my logout code.
Is my logout secure enough?
OBS:: This system I'm making is not for some big company with a huge big mega need for security, but the basics should be implemented.
Answer: Looks alright. One thing I would change is to replace all those unset() lines with a single $_SESSION = array(); statement.
Also check the manual; it has a sample showing how to clear your session cookies if you have them enabled. | {
"domain": "codereview.stackexchange",
"id": 4426,
"tags": "php, security, session"
} |
Parity of Photons | Question: In nuclear physics, while studying gamma decay (Nuclear physics, Roy and Nigam, 1st ed, pp 450) I have read that the parity of photons depends on the type of multipole radiation they represent. Means for electric type parity is $(-1)^L$. For magnetic type parity is $(-1)^{L+1}$. So, for E1 type radiation, parity is negative, and for M1 type radiation, parity is positive.
But in particle physics, I am reading (Intro. to elementary particles, Griffiths, revised 2nd ed, pp 141) that photons are vector particles with intrinsic parity -1.
So, are we considering only E1 type radiation here? Or these parities are completely unrelated?
Answer: The parities of E1 and M1 type radiations are the parities of radiation states. These are inferred from various atomic and nuclear transition selection rules.
They have nothing to do with parity of a photon vector field $A^\mu$, which is of course negative. | {
"domain": "physics.stackexchange",
"id": 99247,
"tags": "particle-physics, photons, nuclear-physics, parity, multipole-expansion"
} |
Pi in place of binary | Question: Some time ago, I asked this question but no one quite understood it or was able to answer. I deleted the original question and have since decided to try it again.
As I understand it, all things digital are originally based on ones and zeros - binary code.
However, I have wondered for some time if it would be possible (now or in the future) to use the digits of pi (22/7) in place of the ones and zeros.
So, my question, is it possible? Could it ever be?
Answer: No it cannot happen for many reasons.
What would logic look like?
How would you add two numbers?
There is a big problem with telling apart $0$ and $1$ at high frequencies, so adding a third symbol would be harder to manufacture. And encoding with a non-natural base makes things harder still.
With a non-natural base, all the operations that we do are doomed.
To see this, try an easy example: convert $4$ and $6.5$ into base $\pi$, add them and write down the result.
Once finite precision kicks in, your basic addition fails. The only operation that would benefit from such a base is $\pi + \pi$.
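To make the "easy example" concrete, here is a hedged greedy-expansion sketch (the helper names are mine); since the base-$\pi$ expansion of an integer never terminates, any fixed number of digits leaves a residual error:

```python
import math

def to_base(x, base, digits=30):
    """Greedy expansion of x > 0 in an arbitrary (even irrational) base."""
    top = int(math.floor(math.log(x, base)))
    coeffs, rest = [], x
    for p in range(top, top - digits, -1):
        d = int(rest / base ** p)   # digit for this power of the base
        coeffs.append(d)
        rest -= d * base ** p
    return coeffs, top

def from_base(coeffs, top, base):
    return sum(d * base ** (top - i) for i, d in enumerate(coeffs))

digits4, top4 = to_base(4.0, math.pi)
# even 30 base-pi digits only approximate the integer 4
assert abs(from_base(digits4, top4, math.pi) - 4.0) < 1e-8
```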
And the most efficient base is e. | {
"domain": "cs.stackexchange",
"id": 6103,
"tags": "computational-complexity"
} |
What does a torque flange measure? | Question: If I mount a torque flange so that it connects 2 axles, does it measure the difference between the torques applied to each axle?
Answer: I assume you refer to something like the following:
This is a torque measurement device that measures the torque which is carried by each shaft.
I.e. assuming
you have two shafts connected with the torque flange and
if one shaft is driven by a motor with torque M on one end
and the other shaft is driving a generator (or a brake)
Then if the system is not accelerating or decelerating rotationally, the torque sensor should read torque M (i.e. all torque supplied by the motor will "go through" the torque flange -- and the shafts for that matter-- and will be consumed at the generator/brake).
If on the other hand the system is increasing or decreasing the rpm, then the torque sensor will only read part of the torque (and with a lot of oscillations) -- the part of the torque that will go through the flange has to do with the acceleration of the rotational masses past the torque flange.
In general, torque flanges use a strain-gauge principle: when torque passes through the flange it creates a twisting angle. That angle can be measured, and thanks to the calibration of the torque flange sensor it is possible to know the torque "passing through" the sensor. | {
"domain": "engineering.stackexchange",
"id": 5059,
"tags": "torque"
} |
Why k- Vertex Cover is not in PTIME when it can be expressed in FO-logic | Question: We can express property that graph has vertex cover of size at most k with first order formula:
$$\exists x_1 \exists x_2...\exists x_k (\forall y \forall z (E(z,y) \ \rightarrow \ \bigvee_{1 \leq i \leq k} y=x_i \ \vee \ \bigvee_{1 \leq i \leq k} z=x_i))$$
But there is also a theorem (proven for example in Libkin's book Finite Model Theory) that for every given FO sentence $\varphi$ and finite model $A$ one can decide whether:
$$A \models \varphi$$
in PTIME.
Why can't we use this for the sentence expressing the k-VC property written above, and decide whether any given graph has this property?
Answer: Your argument shows that for each fixed $k$, the problem $k$-VC can be solved in polynomial time (indeed, the algorithm enumerates all sets of size $k$ and checks whether they are vertex covers, all of which can be accomplished in $O(n^{k+1})$ or so). However, the vertex cover problem is different – it accepts $k$ as an input. This version of the vertex cover problem cannot be expressed in first-order logic. | {
"domain": "cs.stackexchange",
"id": 9808,
"tags": "complexity-theory, graphs, time-complexity, first-order-logic"
} |
Model-based RL algorithms for continuous state space and finite action space | Question: At the beginning, if I have a complete model $p(s' \mid s, a)$ (an assumed true model that describes the environment well enough) and the reward function $r(s,a,s')$. How can I exploit the model and learn a good policy in this situation? Assume that the state space is continuous, and the action space is finite.
Traditional dynamic programming/planning methods like policy iteration or value iteration cannot be directly applied since there are infinitely many states. If I try to get samples from the model and apply an algorithm like DQN or any non-linear function approximation, it seems I am using a model-free approach and not getting the full advantage of the model.
May I ask if there are any reinforcement learning/planning methods that use the explicit model given at the beginning to solve the MDP? (in case of continuous state space and finite action space)
Answer: In the optimal control field, especially in process industries minimizing well-defined costs, model predictive control (MPC) is a common decision-time planning/control method for continuous state spaces. It can handle both linear and nonlinear models, and unlike data-hungry model-free RL methods it is much more data efficient, fast, online, and good (though perhaps not optimal):
MPC is based on iterative, finite-horizon optimization of a plant model. At time
$t$ the current plant state is sampled and a cost minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future:$[t,t+T]$. Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and find (via the solution of Euler–Lagrange equations) a cost-minimizing control strategy until time $t+T$. Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler–Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method.
If your model is linear (most mechanical systems) and cost is quadratic, LQR is also a common optimal feedback control method.
The main differences between MPC and LQR are that LQR optimizes across the entire time window (horizon) whereas MPC optimizes in a receding time window,[4] and that with MPC a new solution is computed often whereas LQR uses the same single (optimal) solution for the whole time horizon.
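For a feel of the LQR route (the double-integrator plant and unit costs below are arbitrary choices of mine), the discrete-time gain falls out of the algebraic Riccati equation:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# hypothetical plant: discrete-time double integrator, x' = A x + B u
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)              # state cost
R = np.array([[1.0]])      # control cost

P = solve_discrete_are(A, B, Q, R)
# optimal feedback u = -K x with K = (R + B'PB)^{-1} B'PA
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

closed_loop = np.linalg.eigvals(A - B @ K)
assert np.all(np.abs(closed_loop) < 1.0)  # closed loop is stable
```

MPC would instead re-solve a finite-horizon, possibly constrained version of this problem at every time step, rather than apply this single fixed gain.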
Finally, similar to the Dyna-Q learning architecture with background learning & planning introduced in Sutton and Barto's Reinforcement Learning: An Introduction, especially if your initially known model is inaccurate or keeps evolving, you can combine model-based and model-free RL methods: just replace the tabular Q-planning there with a planning version built on your linear or nonlinear function approximation, such as the one described in Jong and Stone's paper Model-Based Exploration in Continuous State Spaces:
This paper develops a method
for approximating continuous models by fitting data to a finite sample of states, leading to finite representations compatible with existing
model-based exploration mechanisms. Experiments with the resulting
family of fitted-model reinforcement learning algorithms reveals the critical importance of how the continuous model is generalized from finite
data. This paper demonstrates instantiations of fitted-model algorithms
that lead to faster learning on benchmark problems than contemporary
model-free RL algorithms that only apply generalization in estimating
action values. Finally, the paper concludes that in continuous problems,
the exploration-exploitation tradeoff is better construed as a balance between exploration and generalization. | {
"domain": "ai.stackexchange",
"id": 3772,
"tags": "reinforcement-learning, reference-request, model-based-methods, continuous-state-spaces, discrete-action-spaces"
} |
Dynamic network messaging | Question: I'm building a client and server for a game but wanted a generic messaging system in a shared library that let me focus on application logic and was largely separate form the underlying networking I/O details.
I'm mostly looking for suggestions or possible improvements. Performance on the server is a concern with all the casting. I'm stuck on .Net 3.5 for the client (Unity3D) but if there's decent performance improvements to be found for the server I'd consider splitting the library. I'm also using Protobuf-net for serializing messages being sent across the network. My network protocol uses two 4 byte prefixes, one for the message byte length and the 2nd is an int32 that maps to a dictionary of types so I can deserialize messages to their exact type upon arrival. The typemap is built at startup reflecting through all the message subtypes in the same library.
I'll not post the client/server code because I see those as separate concerns (Network I/O, handshaking, serialization etc can be done a bunch of different ways). Lastly, I've cut out most the threading code to make it easier to read.
My approach breaks down messaging into three simple Message, Request and Response types with the following rules:
All network messages will derive from the Message class.
Any message that derives from Request will expect a Response in reply.
Request and Response are both sub-classes of Message.
Message.cs
public partial class Message
{
public virtual bool IsResponse { get { return false; } }
public int id { get; }
public string description { get; }
static int idCounter = 0;
public Message()
{
id = idCounter++;
}
public Message(string description)
{
id = idCounter++;
this.description = description;
}
}
Response.cs
public class Response : Message
{
public MessageStatus status { get; }
public bool Success { get { return status == MessageStatus.Ok; } }
public override bool IsResponse { get { return true; } }
public Response(int id, string description)
{
this.id = id;
this.status = MessageStatus.Ok;
this.description = description;
}
public Response(int id, MessageStatus status, string description)
{
this.id = id;
this.status = status;
this.description = description;
}
}
Request is not shown, but it's basically Message with an additional type field so the application knows what kind of Response type is expected. Game/application logic can register to listen for the arrival of specific message subclasses. All the plumbing is taken care of via two singletons: a MessageRouter and a ResponseRouter (I couldn't think of better names). As the server or client receives data and the protocol handler deserializes it into a Message instance, it is passed off to the MessageRouter via the Dispatch method:
MessageRouter.cs
class MessageRouter
{
private static MessageRouter instance;
protected Dictionary<Type, object> handlers;
public static MessageRouter Instance
{
get
{
if (instance == null) { instance = new MessageRouter(); }
return instance;
}
}
protected MessageRouter()
{
handlers = new Dictionary<Type, object>();
}
public bool AddHandler<T>(Action<T> handler) where T : Message
{
if (handler == null) { return false; }
var messageType = typeof(T);
object outHandler;
if (handlers.TryGetValue(messageType, out outHandler))
{
var typedHandler = outHandler as Action<T>;
handlers[messageType] = typedHandler + handler;
return true;
}
handlers.Add(messageType, handler);
return true;
}
public bool RemoveHandler<T>(Action<T> handler) where T : Message
{
if (handler == null) { return false; }
var messageType = typeof(T);
return handlers.Remove(messageType);
}
public bool Dispatch<T>(T message) where T : Message
{
if (message == null) { return false; }
if (message.IsResponse)
{
if (ResponseRouter.Instance.Dispatch(message)) { return true; }
}
var messageType = message.GetType();
object handler;
if (handlers.TryGetValue(messageType, out handler))
{
if (handler == null) { return false; }
(handler as Action<T>)(message);
}
return false;
}
}
As you may have noticed, if the message has an IsResponse property of true it is passed off to the ResponseRouter instead as I assume no code should be registering to listen for response types via the MessageRouter (I should probably check for that and log a warning). The ResponseRouter uses a similar pattern but they're more one-shot events, as soon as they're triggered the handler is removed from the response router. Response handlers are passed into the client/server's send method:
public delegate void ResponseCallback<T>(T response) where T : Message;
void Send<TMessage>(TMessage message)
    where TMessage : Message
{ ... }
void Send<TRequest, TResponse>(TRequest request, ResponseCallback<TResponse> responseCallback)
    where TRequest : Request
    where TResponse : Response
{
...
ResponseRouter.Instance.RegisterCallback(request, responseCallback);
...
}
ResponseRouter.cs
public class ResponseRouter
{
private static ResponseRouter instance;
private Dictionary<int, object> responseMap;
public static ResponseRouter Instance
{
get
{
if (instance == null) { instance = new ResponseRouter(); }
return instance;
}
}
protected ResponseRouter()
{
responseMap = new Dictionary<int, object>();
}
public bool RegisterCallback<TRequest,TResponse>(TRequest request, ResponseCallback<TResponse> callback)
where TRequest : Request
where TResponse : Response
{
if (responseMap.ContainsKey(request.id)) { return false; }
responseMap.Add(request.id, callback);
return true;
}
public bool Dispatch<T>(T response) where T : Message
{
if (response == null) { return false; }
object callback = null;
if (responseMap.TryGetValue(response.id, out callback))
{
responseMap.Remove(response.id);
}
if (callback == null) { return false; }
(callback as ResponseCallback<T>)(response);
return true;
}
}
To support a new message type I simply have to subclass Message or Response and write the application code to hook into handling them. The only messaging special case is authentication. Establishing or restoring client sessions (disconnect recovery) is all logic that can exist far away from my core server/client code.
Answer: Constructor chaining
Calling one constructor from another constructor is called constructor chaining and helps to DRY (Don't Repeat Yourself) your code.
For instance the Message class would benefit like so
public partial class Message
{
public virtual bool IsResponse { get { return false; } }
public int id { get; }
public string description { get; }
static int idCounter = 0;
public Message()
{
id = idCounter++;
}
public Message(string description)
:this()
{
this.description = description;
}
}
and the Response class like so
public class Response : Message
{
public MessageStatus status { get; }
public bool Success { get { return status == MessageStatus.Ok; } }
public override bool IsResponse { get { return true; } }
public Response(int id, string description)
:this(id, MessageStatus.Ok, description)
{
}
public Response(int id, MessageStatus status, string description)
{
this.id = id;
this.status = status;
this.description = description;
}
}
MessageRouter
By using the null-coalescing operator ?? you can clean the Instance property like so
public static MessageRouter Instance
{
get
{
return instance ?? (instance = new MessageRouter());
}
}
Using a stateful Singleton prevents the class and its callers from being tested. Assume you have one test to check that a handler is added successfully and a test to remove a handler. The first one is easy, but for the second one you will first have to add a handler and then remove it, so you are basically testing 2 different/independent parts of your code, but tests should only test 1 part of your code.
So you could say: fine, I will just remove the handler which was added by the first test. This is also bad, because one test should not depend on another test. But you say: hey, I can live with that problem here. Then I would answer: if you change the order in which the tests are called, they won't assert the same way. If the remove-handler test is executed first, it will return false.
So a much better way would be to use an interface which is then injected into the callers' constructors and stored in a private field.
Taking a look at this
public bool AddHandler<T>(Action<T> handler) where T : Message
{
if (handler == null) { return false; }
var messageType = typeof(T);
object outHandler;
if (handlers.TryGetValue(messageType, out outHandler))
{
var typedHandler = outHandler as Action<T>;
handlers[messageType] = typedHandler + handler;
return true;
}
handlers.Add(messageType, handler);
return true;
}
I see a few points to mention. First, if a passed-in argument is null I would expect the method to throw an ArgumentNullException instead of just returning false. Second, you really should add some vertical space to group related code together / distinguish unrelated code, like so
public bool AddHandler<T>(Action<T> handler) where T : Message
{
if (handler == null) { throw new ArgumentNullException("handler"); }
// if you use C# 6.0 you can use the nameof operator like so
// if (handler == null) { throw new ArgumentNullException(nameof(handler)); }
var messageType = typeof(T);
object outHandler;
if (handlers.TryGetValue(messageType, out outHandler))
{
var typedHandler = outHandler as Action<T>;
handlers[messageType] = typedHandler + handler;
return true;
}
handlers.Add(messageType, handler);
return true;
}
This does not cost you any more memory or decrease execution speed, but it makes the logic easier to grasp at first glance, which makes the code a lot easier to maintain.
public bool RemoveHandler<T>(Action<T> handler) where T : Message
{
if (handler == null) { return false; }
var messageType = typeof(T);
return handlers.Remove(messageType);
}
here the messageType variable is only used to remove the entry from the dictionary, so it can just be omitted like so
public bool RemoveHandler<T>(Action<T> handler) where T : Message
{
if (handler == null) { return false; }
return handlers.Remove(typeof(T));
}
Some of the mentioned points apply to the ResponseRouter too. | {
"domain": "codereview.stackexchange",
"id": 15818,
"tags": "c#, .net, event-handling, networking, unity3d"
} |
Why is it easier to remove stains with water? | Question: And I don't mean stain removers, but literally tap water.
See here's the thing: you can't remove a stain without some sort of friction, and water reduces friction. Yet if I, say, spill some coffee and later remove the coffee stain, if I were to wipe it with a cloth, very little happens. If I wet the cloth, it's stupidly easier.
Answer: As @Tweej suggests, it's because of water solubility. Because water molecules are quite polar, most things that are charged or polar are soluble in it (i.e. "hydrophilic"). When a coffee stain dries up, the residue sticks to the surface. But when water is applied, the residue readily mixes with the water and is more easily removed.
Fats and oils are hydrophobic, and tend to separate from water --- which is why oil stains are so much harder to remove (with water, and water-based cleaners). | {
"domain": "physics.stackexchange",
"id": 30720,
"tags": "fluid-dynamics, everyday-life, water, charge, physical-chemistry"
} |
launching image_view for a topic | Question:
I am trying to view the images my robot's cameras are receiving remotely.
The following in the command line will bring up a window with the video stream
$ rosrun image_view image_view image:=/stereo/right/image_raw
However I cannot find any information on how to put the "image:=/stereo/right/image_raw" part into a launch file
Alternatively, how can I use image_view in a python node to embed the video in a gui of my design?
I'm using:
ROS electric
Ubuntu 11.10
uvc_camera
Tkinter
Originally posted by DocSmiley on ROS Answers with karma: 127 on 2012-08-17
Post score: 2
Answer:
This is called remapping, and can be done using the remap tag.
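A minimal launch file for this case might look like the following (the node name is an arbitrary choice; the remap maps image_view's expected `image` topic onto yours):

```xml
<launch>
  <!-- run image_view with its "image" topic remapped to the stereo camera -->
  <node name="right_image_view" pkg="image_view" type="image_view">
    <remap from="image" to="/stereo/right/image_raw"/>
  </node>
</launch>
```

Placing the remap tag inside the node tag scopes the remapping to image_view only.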
You can display images from Python using OpenCV's imshow function.
Originally posted by Dan Lazewatsky with karma: 9115 on 2012-08-17
This answer was ACCEPTED on the original site
Post score: 5 | {
"domain": "robotics.stackexchange",
"id": 10658,
"tags": "roslaunch, image-view, camera"
} |
How does one build a waterproof basement? | Question: I have been doing a bunch of research online into how to build a basement that is below the water table. The common wisdom is to "not waste the money doing that" but as someone who doesn't like to be told no and always likes to fully understand their options, I am curious how I could go about building a basement that is below the water table.
I know it is possible to build a waterproof basement because submarines exist, marinas often build underwater sections of buildings, pools are a thing, etc. The ideal would be to have the primary structure for the basement be poured concrete with no windows or other holes in the walls (plumbing/electricity/air would come in through the ceiling from the higher floors).
For simplicity, lets assume that I don't need sewer out of the basement (perhaps by using an up-flush system to pump sewage/water out). Lets also imagine that the basement is on a flood plain so I can build it while it is dry, but it will be completely submersed in water later in the year.
Finally, again for simplicity lets assume that I am building in a location that has no building codes I need to abide by (so any option is on the table), but I do care about the structure being sound for a long time (e.g., 100 years).
What are the techniques I could use to water PROOF the basement so that it can be fully submerged under water for extended periods of time without leaking?
Answer: It's certainly possible to build a cellar below the groundwater table. You need to ensure two things:
waterproofing
countering buoyancy due to the groundwater
As for point one, waterproof concretes exist; it's a matter of concrete recipe and care in execution. A bit of information on this can be found here. Coatings, e.g. waterproof PE liners that can be applied to the concrete, also exist. Again, this requires care in execution.
As for point 2, the buoyancy can be countered by the weight of the building, the weight of the contents (water tanks don't tend to float when full; oil tanks may be an issue!), by building a larger baseplate (so a bit of ground would have to be lifted as well), or even by friction with the ground. This depends on the size, depth, and shape of your cellar and needs to be designed together with a structural engineer.
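A rough back-of-the-envelope version of this buoyancy check can be sketched as below (hypothetical helper and numbers; it compares only the bare concrete shell's weight against the displaced water, ignoring the building and contents above):

```python
# Rough buoyancy check for a box-shaped cellar fully below the water table.
# All numbers are illustrative assumptions, not design values.
RHO_WATER = 1000.0     # kg/m^3
RHO_CONCRETE = 2400.0  # kg/m^3

def floats(outer_w, outer_l, outer_h, wall_t):
    """True if buoyancy exceeds the bare shell's weight (open-top box,
    floor slab of thickness wall_t; building/contents weight ignored)."""
    displaced = outer_w * outer_l * outer_h
    inner = (outer_w - 2 * wall_t) * (outer_l - 2 * wall_t) * (outer_h - wall_t)
    concrete_vol = displaced - inner
    buoyant_mass = RHO_WATER * displaced       # mass of displaced water
    structure_mass = RHO_CONCRETE * concrete_vol
    return buoyant_mass > structure_mass
```

For a 10 m x 10 m x 3 m box with 0.3 m walls the shell alone would float, which is why the measures above are needed.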
During construction you need to manage the groundwater some way.
Buildings below ground water level are built all the time when unavoidable, usually when you need the depth for some reason. | {
"domain": "engineering.stackexchange",
"id": 2551,
"tags": "structural-engineering, civil-engineering, structures"
} |
Is there any planet bigger than a star? | Question: Or a star smaller than a planet?
Which star and planet would be an example of this?
Answer: The answer depends on whether you mean is any planet bigger than any star, or whether the planet and star have to be in the same system and have been discovered/measured, rather than just that they could exist in principle.
There are a few known planets with measured radii that are bigger than the lowest mass stars.
Here is a plot from Chabrier et al. (2008) (and plenty more data will have been added since), which shows the basic picture. This is the mass-radius plot for both stars and exoplanets.
It turns out that there are some hot Jupiters that have radii about twice that of Jupiter in our Solar System. You can find examples at exoplanets.org, such as HAT P-67b and XO-6b. These planets are bigger than theory suggests for a "cold" exoplanet, probably because of "insolation" (heating by their parent star) - e.g. Enoch et al. (2012).
On the other hand, the smallest stars, those just above the brown dwarf limit of $\sim 0.075M_\odot$, are predicted to have radii (at least once they are a billion years old and have reached the main sequence) of about 1.3 times that of Jupiter. At older ages they can become even smaller - about the size of Saturn (black dashed line).
In terms of measurements, there are low-mass objects in eclipsing binaries and also a handful of very low-mass stars that have interferometric radii. For example Proxima Cen (the nearest star to the Sun) is reported to have an interferometric radius of $(0.145 \pm 0.011)R_\odot$ (or 1.44 Jupiter radii) by Demory et al. (2009) and so this is clearly smaller than the biggest exoplanets.
If one demands that the exoplanet and star are part of the same system, then although they could exist in principle (as per the discussion above), there aren't any examples (yet). The curves in the plot above are not dependent on the type of star a planet orbits. Therefore, in principle, it might be possible for a $>1M_J$ planet to be found orbiting a (only just) smaller $<0.1 M_\odot$ star, even if it receives negligible insolation.
In practice, giant exoplanets are rare around low mass stars, so it could be some time before an example is found. However, a close candidate might be GJ3512b which is an exoplanet with $M\sin i = 0.46 M_{\rm Jup}$ (i.e. this is a minimum mass, since the orbital inclination $i<90^{\circ}$) that orbits an M5.5V star quite similar to Proxima Cen (Morales et al. 2019). The star has an estimated radius of $(0.139 \pm 0.005) R_\odot$ and the age is thought to be a few billion years. Looking at the curves in the plot then a cold exoplanet with $i \sim 30^{\circ}$ might be comparable in size to the star. Unfortunately, the exoplanet doesn't transit so no radius measurement is available and it is unlikely to be inflated by stellar insolation because it is in a relatively wide orbit around a faint star
An interesting suggestion is that a young exoplanet might offer the best chance of being bigger than its host star. This is because the contraction timescale of a giant planet is longer than the pre main sequence contraction timescale of its star. The curves in the plot above for 1 Gyr and 10 Gyr show this effect, but it is even more extreme for ages $0.1$ Gyr. Thus the best chance of finding planets bigger than their host stars is to look at young systems in star forming regions. Some of these may already have been found using direct imaging, though in my opinion these quite high-mass "exoplanets" ($>5$ Jupiter masses) orbiting at very large distances ($>100$ au) are more like binary brown dwarfs. | {
"domain": "astronomy.stackexchange",
"id": 5994,
"tags": "star, exoplanet, size"
} |
What causes this pattern of sunlight reflected off a table leg? | Question:
My friend noticed an interference-like pattern around the table leg. However, we do know that interference patterns of sunlight produces rainbow colours. What seems to be happening here?
Answer: These are probably caused by minute, periodic variations in the diameter of the table leg, formed by drawing through a die. Any vibration in the process would end up being circumferential waves in the surface of the tube. Changes in the diameter mean changes in the slope of the surface, and thus focus the reflected light to different rings around the base of the leg.
You could probably confirm this by shining a laser pointer at the leg and slowly moving it down along the leg; the reflected point on the ground will move periodically, pausing when it's moving down across a concave (along the axis) portion of the leg, and moving quickly when it's moving down along a convex portion.
Where the reflected laser pauses is where a broad beam of light would be focused and brighter; where the reflection moves quickly is where the beam of light would be diffused and darker. | {
"domain": "physics.stackexchange",
"id": 31161,
"tags": "optics, everyday-life, reflection, home-experiment, interference"
} |
SpaceSort - A new sorting algorithm | Question: A sorting algorithm I have written.
Pros
- Gives each element a position once.
- Few comparisons needed.
Cons
- For certain sets of data, large amount of extra storage is needed.
- Only works with integers.
The main reason I post the algorithm here is to learn whether it already exists or not. From the research I have done I can't find anything similar. Though I am sceptical, it would be very fun to know if it is new or not. What I mostly want to know, however, is whether the algorithm could be useful and if my implementation of it is reasonable.
Algorithm:
Basic idea: When you divide the number you want to sort with the largest number of the set you get a percentage. This percentage gives a rough position where in the sorted list that number should be.
Because it is difficult to know how the data is distributed the mean/average is used. By dividing the mean-value with the amount of unique values in the set, you can decide how much of the data set is above and below the mean-value.
Then by using the above idea on all the elements, each element can get a position and be put in a new sorted list.
Some lists with a high amount of clustering, or a small number of elements with large values, can have the same position calculated from different elements. Therefore all the elements are first put in a larger array to lessen the amount of collisions, and if a collision occurs the element is moved slightly up/down. Because the new array is larger, elements can be moved up and down without causing problems with insertion of new ones.
Duplicates are handled by counting the collisions of each element.
When all elements have been placed in the larger array it is then collapsed into an array of the correct size.
Example:
[4, 20, 6, 4, 4]
min = 4, max = 20, mean = 8
Calculating the possible amount of unique elements:
(max - min + 1) = (20 - 4 + 1) = 17
Calculating a percentage for the mean:
(1 - (mean-min)/unique) = (1 - (8-4)/17) = 0.76
This tells us roughly how large a portion of the elements is below the mean. This makes sense because we have 4 out of 5 elements lower than 8.
If we for this example create a 20 times larger array (100 elements), we know that the first 80 elements are for the 4 elements under 8. Thus each element has 20 places to use for collisions.
First element:
4 => ((element - min)*places for each element) = (4-4)*20 = 0
6 => (6-4)*20 = 40
Here 4 gets position 0 and 6 gets position 40. Just as 6 is between 4 and 8, 40 is between 0 and 80. After doing this for all elements and then removing all the empty space in the large array, the array has been sorted.
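The position calculation in this example can be sketched in a few lines (hypothetical helper, covering only elements at or below the mean; the real algorithm also handles the upper range and collisions):

```python
# Hypothetical helper reproducing the worked example above: it computes the
# initial position in the enlarged "space" array for elements at or below
# the mean. Each such element gets `size` slots, counted from index 0.
def initial_positions(data, size):
    lo = min(data)
    mean = round(sum(data) / len(data))
    return {x: (x - lo) * size for x in set(data) if x <= mean}
```

For [4, 20, 6, 4, 4] with size = 20 this yields 4 -> 0 and 6 -> 40, matching the example.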
Testing:
I have timed a Python version of this code by hard-coding different scenarios and comparing times with quicksort and countingsort. Out of a lack of experience I can't draw any real conclusions, but these are my results:
With a set of data larger than roughly 500 elements spacesort took half the time quicksort did.
Generally, countingsort is faster, but certain sets of data, such as a completely homogeneous set, can be done faster with spacesort after a few code changes. And of course sets with a large range of data.
Good to know:
The large empty array created is called "space" in the code.
Except for a data array, the function needs an argument "size"; this constant multiplied by the length of the data array gives the length of the space array, e.g. the example above used size = 20.
How large the empty array created needs to be depends a lot on the set of data. The smaller the size, the faster the algorithm and the less memory usage. A completely homogeneous set needs only the same size as the original data set. The less homogeneous, the larger it needs to be. A good starting point might be 10 times larger.
struct spaceElement {
    public int data;
    public int count; // number of duplicates collided onto this element (used by insertElements/collapseSpace).
    public bool set; // true if a value has been set to this element.
}
/* Takes two arguments, data = set of data to be sorted and size then returns the sorted list.*/
static int[] spaceSort (int[]data, int size) {
int min, max, mean;
minMaxMean(data, out min, out max, out mean);
spaceElement[] space = insertElements(data, size, min, max, mean);
return collapseSpace(space, data.Length);
}
// Arguments: list of data, size constant for space, min-value, max-value, mean-value.
// Returns the space array with sorted elements.
static spaceElement[] insertElements(int[] data, int size, int min, int max, int mean){
int lowMax, highMin;
lowMaxhighMin(data, min, max, mean, out lowMax, out highMin);
int numDiffElements = max - min + 1; // Largest possible amount of different elements. e.g 4,20,6,4,4 has 20-4+1 = 17 elements.
double meanPercent = 1 - (mean - min) / (double)numDiffElements; // Determines what percentage of elements are below the mean. e.g 4,5,6,7,20 => 1 - (7 - 4)/17 = 0.82.
// Creates a large array where the sorted elements can be inserted according to their size.
int spaceLength = data.Length * size;
spaceElement[] space = new spaceElement[spaceLength + size * 2 + 1];
// Determine how much space each element gets if it lies below or above the mean.
double lowMult = meanPercent * data.Length * size / (lowMax - min + 1);
double highMult = (1 - meanPercent) * data.Length * size / (max - highMin + 1);
foreach ( int x in data) {
bool done = false;
int index = 0;
int lastStep = size;
// Determine the initial position of the element in the space array.
if ( x <= mean) {
index = (int)Math.Round( (x - min) * lowMult + size/2 );
lastStep = (int)Math.Round(lowMult);
}
else {
index = (int)Math.Round( (x - highMin) * highMult + size + meanPercent*spaceLength );
lastStep = (int)Math.Round(highMult);
}
/* First checks if the position is empty, if it is put the element there.
Else if value is the same as the already stored value increment count on that element.
If the value is not equal to the already stored value jump up/down half of the lowMult/highMult. */
while (!done) {
if (!space[index].set){
space[index].data = x;
space[index].set = true;
done = true;
}
else if (space[index].data == x) {
space[index].count += 1;
done = true;
}
else if ( space[index].data > x ) {
lastStep = (int)Math.Round(lastStep / 2.0d);
index -= lastStep;
}
else {
lastStep = (int)Math.Round(lastStep / 2.0d);
index += lastStep;
}
}
}
return space;
}
/* Arguments: list of data.
Out: min-value, max-value and mean/average-value */
static void minMaxMean ( int[] data, out int min, out int max, out int mean ) {
min = data[0];
max = data[0];
long sum = 0;
foreach ( int x in data ) {
sum += x;
if ( x < min )
min = x;
if ( x > max )
max = x;
}
mean = (int)Math.Round(sum / (double)data.Length);
}
/* Arguments : list of data, min-value, max-value, mean/average-value.
Returns the highest value less than or equal to the mean-value and
the lowest value greater than the mean-value. */
static void lowMaxhighMin( int[] data, int min, int max, int mean, out int lowMax, out int highMin) {
lowMax = min;
highMin = max;
foreach ( int x in data ) {
if ( x > lowMax && x <= mean )
lowMax = x;
if ( x < highMin && x > mean )
highMin = x;
}
}
// Creates an array of the same size as the original data array and populates it with the sorted values.
static int[] collapseSpace( spaceElement[] space, int length ) {
int[] newArray = new int[length];
int count = 0;
foreach ( spaceElement x in space) {
if ( x.set) {
for (int y = 0; y <= x.count; y++) {
newArray[count] = x.data;
count++;
}
}
}
return newArray;
}
Answer: So you are using a function mapping each element to the interval (0, 1) to sort them. If you wanted to make this fully generic, here is the way to do that:
First, assume arbitrary precision. Your doubles only allow about 2^53 different values to be represented, which means that you will probably run into problems with maliciously-constructed 64-bit integer inputs which end up mapping a large range of numbers to the same double. For example, an array of 10 elements where 5 of them are the integers 0-4, then 5 of them are maybe 2^61 to 2^61+4: probably only the first 5 get properly sorted, because for the second five the doubles are likely all equal even though the integers are not.
Second, array indexing is going to mess you up. It is not a constant-time operation with regard to the length of the array, so it probably creates a secret O(n^2) scaling, and the people who care about the speed of sorting algorithms usually have gigabytes or more of data to sort, they cannot easily just say "oh, I will allocate an array that is substantially bigger than the amount of data I need to sort."
I think when you look at both of these criteria together, you realize that what you're really interested in is a sort of high-fanout tree: and the first place to start with that is to start with a low-fanout tree (like a binary tree, each bit in the decimal expansion tells you to go into the left or right subtree) for "correctness" and then "optimize for speed".
At the end of this analysis you'll be taking N bits from the binary expansion at a time, let's say 6, and then we use that to index into a 64-element array. Then to handle equal elements we need stacks at the bottom and some way to determine if the arbitrary-precision numbers are equal etc. So for the pathological 5+5 example that you couldn't sort, we can just divide 64-bit ints into bitstrings with bit-shifts and masks the obvious way. In JSON (but not JS, which doesn't have 64-bit integers!) you could write the result without trailing nulls as:
[
[[[[[[[[[[
0,
1,
2,
3,
4
]]]]]]]]]],
null,
[[[[[[[[[[
2305843009213693952,
2305843009213693953,
2305843009213693954,
2305843009213693955,
2305843009213693956
]]]]]]]]]]
]
In fact it seems like you would do best to have a trie data structure to reduce this overhead even a little more; using base64 encoding this I think is the trie
{
"AAAAAAAAAAA": ["A", "B", "C", "D", "E"],
"CAAAAAAAAAA": ["A", "B", "C", "D", "E"]
}
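The take-N-bits-at-a-time idea can be made concrete with a small sketch (6 bits per pass, non-negative integers only; illustrative, untuned):

```python
def radix_sort(nums, bits=6):
    # Sort non-negative ints by repeatedly bucketing on 'bits'-wide digits,
    # least significant digit first. Each pass is stable, so earlier passes'
    # order is preserved within equal digits.
    mask = (1 << bits) - 1
    max_val = max(nums, default=0)
    shift = 0
    while shift == 0 or (max_val >> shift) > 0:
        buckets = [[] for _ in range(1 << bits)]
        for n in nums:
            buckets[(n >> shift) & mask].append(n)
        nums = [n for bucket in buckets for n in bucket]
        shift += bits
    return nums
```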
Now that you see that this is generally what you are doing, I can tell you that the name of this class of algorithms is called "Radix sort." Something similar can be read in Haskell code if you look at the implementation for the standard container Data.IntSet. The direct generalization where you say "wait, now that I see that I really need this tree structure, what happens if I just let the data direct the sorting of the tree?" is called a "self-balancing" search tree, so you might want to start with looking up the red-black tree and go from there. | {
"domain": "codereview.stackexchange",
"id": 24893,
"tags": "c#, algorithm, sorting"
} |
Least Mean Squares Algorithm confusion with Adaptive Line Enhancement | Question: I'm just a bit confused about the least mean squares algorithm to separate wideband and narrowband in an adaptive filter for voice conversation. I'm interested in the narrowband part and I'm confused about the LMS equation as follows:
Is h being iterated twice here with j and n? Wouldn't that mean I have two different sets of h values?
I'm having trouble putting this into MATLAB.
Answer: I recommend you read up on the LMS algorithm and try to understand it before you start implementing it, otherwise you won't be able to find any errors in your code. In that formula, the index $n$ is the iteration number, and the index $j$ is the vector index of the filter coefficients. So at any iteration $n$ you have $N+1$ filter coefficients (indexed by $j$) that need to be updated according to that equation. | {
"domain": "dsp.stackexchange",
"id": 6956,
"tags": "adaptive-filters"
} |
catkin_add_gtest linking error | Question:
Around this line in CMakeLists.txt,
if (CATKIN_ENABLE_TESTING)
catkin_add_gtest(test_mongo_roscpp test/test_mongo_ros.cpp)
target_link_libraries(test_mongo_roscpp warehouse_ros)
add_rostest(test/mongo_ros.test)
endif()
Running test results in to undefined reference error as follows.
$ catkin_make run_tests
:
#### Running command: "make run_tests -j8 -l8" in "/home/roslinguini/cws_planning/build"
####
[ 33%] Built target gtest
[ 66%] Built target gtest_main
Linking CXX executable /home/roslinguini/cws_planning/devel/lib/warehouse_ros/test_mongo_roscpp
CMakeFiles/test_mongo_roscpp.dir/test/test_mongo_ros.cpp.o: In function `MongoRos_MongoRos_Test::TestBody()':
test_mongo_ros.cpp:(.text+0x2c7): undefined reference to `mongo_ros::dropDatabase(std::string const&, std::string const&, unsigned int, float)'
test_mongo_ros.cpp:(.text+0xf42): undefined reference to `mongo::LT'
test_mongo_ros.cpp:(.text+0xf60): undefined reference to `mongo::GT'
test_mongo_ros.cpp:(.text+0x1839): undefined reference to `mongo::LT'
test_mongo_ros.cpp:(.text+0x22f7): undefined reference to `ros::console::g_initialized'
test_mongo_ros.cpp:(.text+0x2307): undefined reference to `ros::console::initialize()'
:
collect2: error: ld returned 1 exit status
make[3]: *** [/home/roslinguini/cws_planning/devel/lib/warehouse_ros/test_mongo_roscpp] Error 1
make[2]: *** [warehouse_ros/CMakeFiles/test_mongo_roscpp.dir/all] Error 2
make[1]: *** [CMakeFiles/run_tests.dir/rule] Error 2
make: *** [run_tests] Error 2
Invoking "make run_tests -j8 -l8" failed
Removing target_link_libraries (like following) doesn't seem to change the result. So I assume the error occurs at catkin_add_gtest (I have no idea whether linking should occur here though).
if (CATKIN_ENABLE_TESTING)
catkin_add_gtest(test_mongo_roscpp test/test_mongo_ros.cpp)
endif()
How can I fix this?
I've been also trying with catkin_tools (the most recently with version 0.4.1) on Travis CI but looks like the result is similar/the same. You can take a look at a result where the error starts around here.
And full code is in this PR.
Thank you.
Originally posted by 130s on ROS Answers with karma: 10937 on 2016-04-20
Post score: 0
Answer:
Your target test_mongo_roscpp is only linking against warehouse_ros but nothing else. At least roscpp is missing as well as something providing the symbols related to mongo.
If you have find_package()-ed all dependencies it might be sufficient to link against ${catkin_LIBRARIES}.
I commented on another CMake problem in your PR: https://github.com/ros-planning/warehouse_ros/pull/26/files#r60482049
Originally posted by Dirk Thomas with karma: 16276 on 2016-04-20
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by 130s on 2016-04-20:
Your comment in PR helps me to move forward, thanks!
Your target test_mongo_roscpp is only linking against warehouse_ros but nothing else.
I thought with target_link_libraries I can transitively link targets, no? ref
Comment by Dirk Thomas on 2016-04-20:
No, not across CMake projects. | {
"domain": "robotics.stackexchange",
"id": 24411,
"tags": "ros, catkin-make, catkin, catkin-tools"
} |
Catkin can't find move_base | Question:
I'm trying to rebuild a package I made in fuerte in groovy. Back in fuerte it depended on move_base and move_base_msgs. When I try to make it using catkin in groovy I get the following:
-- +++ processing catkin package: 'autonomous_navigation'
-- ==> add_subdirectory(autonomous_navigation)
CMake Error at /opt/ros/groovy/share/catkin/cmake/catkinConfig.cmake:71
(find_package):
Could not find a configuration file for package move_base.
Set move_base_DIR to the directory containing a CMake configuration file
for move_base. The file will have one of the following names:
move_baseConfig.cmake
move_base-config.cmake
Call Stack (most recent call first):
autonomouse_navigation/CMakeLists.txt:7 (find_package)
CMake Error at /opt/ros/groovy/share/catkin/cmake/catkinConfig.cmake:71
(find_package):
Could not find a configuration file for package move_base_msgs.
Set move_base_msgs_DIR to the directory containing a CMake configuration file
for move_base_msgs. The file will have one of the following names:
move_base_msgsConfig.cmake
move_base_msgs-config.cmake
Call Stack (most recent call first):
autonomouse_navigation/CMakeLists.txt:7 (find_package)
I get a similar error trying to follow the Simple Sending Goals tutorial under navigation. The error is just for move_base_msgs since move_base isn't a dependency of the package.
Did the move_base package change? What do I need to do to be able to use the move_base for setting up navigational goals?
Originally posted by mculp42 on ROS Answers with karma: 28 on 2013-03-07
Post score: 0
Answer:
move_base is part of the navigation stack, which is not catkinized. A catkin-based package cannot depend on a rosbuild based package, you should probably stick with rosbuild for now, or catkinize navigation and build from source.
Originally posted by fergs with karma: 13902 on 2013-03-07
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 13246,
"tags": "navigation, catkin, move-base"
} |
What is magnetic flux? | Question: I have so far derived the link between special relativity, the current in a straight wire, and the Biot-Savart law. Here's my problem: all of this is based on the fact that the magnetic field, for some mysterious reason, goes around the wire instead of straight down/up from/to the wire like the electric field. I understand why the electric field behaves as it does, but I don't understand what the magnetic field is. I only know the formulas we have been taught in school, but they only explain the connection between the electric field and the magnetic field, not why the magnetic field is the way it is. And then there is this magnetic flux term, which got described as the magnetic flux density moving through a given area. I think the problem lies in the fact that I fundamentally only understand the effects of magnetism, not what it is. (Btw I'm in high school, if it helps anyone to know at what level they should be explaining. I really hope someone can help me understand this.)
Answer:
I fundamentally only understand the effects of magnetism not what it is.
Uhmm. Do you understand what the electric field "is"? As a general note, it's not advisable to ask such questions in physics. Physics isn't about what things are, but rather about how they behave (in a general and variable meaning).
the magnetic field for some mysterious reason should go around the wire
Well, this is an experimental fact. The direction of the magnetic field is shown by a magnetic needle. | {
"domain": "physics.stackexchange",
"id": 57809,
"tags": "electromagnetism, magnetic-fields"
} |