Work done in space
Question: What is the work done if we apply a force on a body and the body is displaced through a distance, but the condition is that the above activity is performed in space, where no other force is present? Answer: Mathematically, the definition of work is, given some force ($\vec{F}$) and some path or distance ($C$) with its infinitesimal path element ($d\vec{s}$), all of which incorporate direction (i.e. their vector property): $$ W = \int_C \vec{F} \cdot d\vec{s} $$ That is, if there is no force, then there is no work. In your particular scenario there is no other force, so (as Alfred Centauri's comment notes) there is no work other than that done by the applied force. Your force vector acts, mathematically, like any other force vector. The amount of work done is still formulated in the same way, just with the force that you are talking about. The work equation always assumes the net force vector; in your case there is only one force vector, so the net force and your force are the same force vector.
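As a minimal numeric sketch (the numbers are hypothetical), when the force is constant along a straight path the line integral above reduces to a dot product:

```python
import numpy as np

# Hypothetical example: a single constant force applied to a body in empty space.
F = np.array([3.0, 0.0, 4.0])   # force in newtons
d = np.array([2.0, 0.0, 0.0])   # straight-line displacement in metres

# With a constant force along a straight path, W = integral of F . ds = F . d
W = np.dot(F, d)
print(W)  # 6.0 J: only the component of F along the displacement does work
```

Since this is the only force present, it is also the net force, so the same number is the total work done on the body.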
{ "domain": "physics.stackexchange", "id": 45781, "tags": "newtonian-mechanics, forces, work" }
How does a knife cut things at the atomic level?
Question: As the title says. It is common sense that sharp things cut, but how do they work at the atomic level? Answer: For organic matter, such as bread and human skin, cutting is a straightforward process because cells/tissues/proteins/etc can be broken apart with relatively little energy. This is because organic matter is much more flexible and the molecules bind through weak intermolecular interactions such as hydrogen bonding and van der Waals forces. For inorganic matter, however, it's much more complicated. It can be studied experimentally, e.g. via nanoindentation+AFM experiments, but much of the insight we have actually comes from computer simulations. For instance, here is an image taken from a molecular dynamics study where they cut copper (blue) with different shaped blades (red): In each case the blade penetrates the right side of the block and is dragged to the left. You can see the atoms amorphise in the immediate vicinity due to the high pressure and then deform around the blade. This is a basic answer to your question. But there are some more complicated mechanisms at play. For a material to deform it must be able to generate dislocations that can then propagate through the material. Here is a much larger-scale ($10^7$ atoms) molecular dynamics simulation of a blade being dragged (to the left) along the surface of copper. The blue regions show the dislocations: That blue ring that travels through the bulk along [10-1] is a dislocation loop. If these dislocations encounter a grain boundary then it takes more energy to move them, which makes the material harder. For this reason, many materials (such as metals, which are soft) are intentionally manufactured to be grainy. There can also be some rather exotic mechanisms involved. 
Here is an image from a recent Nature paper in which a nano-tip is forced into calcite (a very hard but brittle material): What's really interesting about it is that, initially, crystal twins form (visible in Stage 1) in order to dissipate the energy - this involves layers of the crystal changing their orientation to accommodate the strain - before cracking and ultimately amorphising. In short: it's complicated but very interesting!
{ "domain": "physics.stackexchange", "id": 16907, "tags": "solid-state-physics, material-science, atoms, molecules, molecular-dynamics" }
Finding the potential energy using a conservation of energy
Question: The force experienced by a particle of mass $m$ in a planet's gravitational field along the z-axis, where $G$ is a constant, is $$F = -\frac{GmM}{z^2}$$ What is the potential energy corresponding to this force, using energy conservation? So using conservation of energy: $$E(t) = T(t) + V(t) = \frac{1}{2}mv^2 + mgh$$ We have the acceleration vector: $$\vec{a}(t)=\begin{pmatrix}0 \\ -g\end{pmatrix}$$ thus we have a velocity vector: $$\vec{v}(t)=\begin{pmatrix}v_{0}\cos(\theta) \\ - gt\end{pmatrix}$$ integrating again we have a position vector: $$\vec{r}(t)=\begin{pmatrix}v_{0}\cos(\theta)\:t \\ -\frac{g}{2}t^{2}\end{pmatrix}$$ So the potential energy becomes: $$V(t)=mgh=mgs_{y}(t)=mg\left(-\frac{g}{2}t^{2}\right)$$ Am I on the right track? Any help will be appreciated. Answer: Conservation of energy tells us that the change in the kinetic energy of an object is equal to the work done on the object:$$K_{final}-K_{initial}=W.$$ We also know that we can define a total mechanical energy of a system to be the sum of the kinetic energy plus terms we call potential energy: $$E=K+U,$$ but the potential energies are due to force interactions inside the system. The energy of a system can be changed by doing work on the system, and only by doing work on the system. Energy is never spontaneously created or destroyed. That's the essence of conservation of energy. If gravity does work on an object, the kinetic energy of that object will change. 
If we include the gravitational interaction in the definition of the system, then the system has potential energy, and we might be able to treat the system as having a constant total mechanical energy if no outside force acts on the system, and therefore no work is done on the system (note that this is a special case of conservation of energy--energy is always conserved, but system energy is not always constant): $$E_{initial}=E_{final}$$ $$K_i+U_i=K_f+U_f$$ $$U_i-U_f=K_f-K_i=W$$ $$U_f-U_i=-W$$ A potential energy function can be found by calculating the negative of the work done by a force on an object as the object moves from some reference point (where the potential energy is defined to be zero) to some other point in space. Work done by a force $\vec{F}$ is defined as $$\int\vec{F}\cdot d\vec{r}$$ along some path. If the work is independent of the path but only depends on the beginning and ending locations, a potential energy function exists: $$U(\vec{R})=-\int_{ref}^{\vec{R}}\vec{F}\cdot d\vec{r}.$$ In your case, you have a force which is acting in the negative z direction. The reference point will be (as standardized in physics) $z=+\infty$ and your final location should be some finite $Z$: $$U(Z)=-\int_{\infty}^{Z}\frac{-GmM}{z^2} dz.$$ You can do the integral yourself. Yes, the function is always negative.
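The final integral can be checked symbolically; here is a small sympy sketch of it (my addition, using the same symbols as the answer):

```python
import sympy as sp

# Positive symbols let sympy evaluate the improper integral from infinity.
G, m, M, Z, z = sp.symbols('G m M Z z', positive=True)

# U(Z) = - integral from z=oo (reference point) to z=Z of F dz,
# with F = -G m M / z**2 acting in the negative z direction.
U = -sp.integrate(-G*m*M/z**2, (z, sp.oo, Z))
print(U)  # -G*M*m/Z: negative for every finite Z, and -> 0 as Z -> oo
```

This confirms the closing remark: the function is negative everywhere, approaching zero at the reference point $z=+\infty$.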
{ "domain": "physics.stackexchange", "id": 34282, "tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, potential-energy" }
Question on field strength tensor in YM
Question: Just a quick question on $F_{\mu\nu}^a$. I'm correct to think $F_{\mu}^{\mu,a}$ vanishes, aren't I? (Just want to make sure...) My reasoning is as follows: the derivative terms cancel anyway - that's obvious - so the only "critical" term of $F_{\mu}^{\mu,a}$ is $f^{abc}A_{\mu}^b A^{\mu,c}$, but this vanishes because the combination of A's is symmetric while the $f$ is totally antisymmetric. Am I right? Answer: Yes, $$\sum_{\mu} F_{\mu}{}^{\mu}~:=~\sum_{\mu,\nu}F_{\mu\nu} g^{\nu\mu}~=~0$$ vanishes because it is a trace of a product of a symmetric and an antisymmetric tensor. It is irrelevant for the argument that $F_{\mu\nu}$ is Lie algebra valued.
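The symmetric-times-antisymmetric argument is easy to check numerically; here is a small sketch with a random antisymmetric matrix standing in for $F_{\mu\nu}$ and the (symmetric) Minkowski metric:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

F = A - A.T                            # an arbitrary antisymmetric tensor
g = np.diag([1.0, -1.0, -1.0, -1.0])   # a symmetric metric (Minkowski)

# F_mu^mu = F_{mu nu} g^{nu mu} is the trace of (antisymmetric x symmetric),
# which vanishes term by term: F_ij g_ji = -F_ji g_ij sums to zero.
trace = np.trace(F @ g)
print(trace)  # 0.0
```

The same cancellation holds for any symmetric $g$, not just a diagonal one.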
{ "domain": "physics.stackexchange", "id": 9692, "tags": "yang-mills" }
"Blue Bumper" Stars
Question: I was recently overviewing various massive compact halo object studies (the Anglo-Australian MACHO collaboration and the French EROS I/II collaboration), and they frequently reference "blue bumper stars," irregular variable stars which produce light curves very similar to gravitational microlensing events. Further searches for more information about them were mostly fruitless, producing a few conference proceedings: The MACHO Project LMC variable star inventory: Aperiodic blue variables (code 1995llnl.reptR....P) The MACHO Project LMC Variable Star Inventory: Aperiodic Blue Variables (code 1995AAS...18710202P), but little else. Is there any more information on this type of variable star, especially a more rigorous and targeted study of them? Also, how is it possible to get the full text of either of those articles? Answer: I did some hunting and followed the paper trail to Cook et al. (1995). In Section 4, they identify a class of stars that brighten aperiodically. They reckon that these "blue bumpers" are Be stars: B-type stars that show strong emission lines. Then again, this is from one paper and I'm really not sure if this is a widely accepted view, but I imagine there would've been more fanfare if this were an exciting new class of object. I'm not massively learned on the subject of Be stars, but the current consensus seems to be that they are rotating very rapidly, to the extent that there is a substantial circumstellar disk of material around the equator. Be stars are themselves a type of shell star, all of which are known to show aperiodic variations, presumably because of the unstable nature of the system. In fact, the whole class seems to pose an observational problem because of their variable nature and light from the star getting mixed up with the disk.
{ "domain": "physics.stackexchange", "id": 3128, "tags": "astronomy, specific-reference, stars" }
Is ensemble learning a subset of meta learning?
Question: I'm studying ensemble learning methods, focusing on random forest and gradient boosting. I read this article about this topic and this one about meta-learning. Is it possible to say that ensemble learning is a subset of meta-learning? Answer: Meta-learning, also known as "learning to learn", is a subset of machine learning. It is used to improve the results and performance of a learning algorithm by changing some aspects of the learning algorithm based on experiment results. Meta-learning helps researchers understand which algorithm(s) generate the best/better predictions from datasets. Meta-learning algorithms use metadata of learning algorithms as input. Then, they make predictions and provide information about the performance of these learning algorithms as output. From What Is Meta-Learning in Machine Learning? More generally, meta-models for supervised learning are almost always ensemble learning algorithms, and any ensemble learning algorithm that uses another model to combine the predictions from ensemble members may be referred to as a meta-learning algorithm.
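As an illustration of that last point, here is a minimal stacking sketch (synthetic data and arbitrarily chosen base models, my own example) where a meta-model combines the predictions of ensemble members:

```python
# Stacking: the final_estimator is a "meta-learner" whose inputs are the
# base estimators' predictions, i.e. metadata about other models.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(),   # the meta-model
)
stack.fit(X_tr, y_tr)
print(stack.score(X_te, y_te))  # held-out accuracy of the stacked ensemble
```

Here the ensemble member (a random forest) is itself an ensemble, while the logistic regression on top plays the meta-learning role of combining predictions.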
{ "domain": "datascience.stackexchange", "id": 10908, "tags": "ensemble-modeling, meta-learning" }
LabView or another software for experiments
Question: I used LabVIEW for a lot of my BME undergrad, but the labs focused mainly on things dealing with electrical signals. In the real world I'm working mostly on ME projects, but we lack a good deal of experimentation equipment. What are the limitations of LabVIEW as it relates to ME experiments? Answer: LabVIEW can easily be used for ME-related experimentation. One such example would be to use an actuating mechanism to exercise a user interface on a mechanical DUT, which dispenses a specific material quantity into a holding container. The weight of the material is measured using a weighing scale which communicates with a computer or controller; the test system also includes a digital manometer. LabVIEW commands the actuator via RS232, and the weight of the material, captured by the scale, is communicated back to LabVIEW over RS232 serial communication. This is an example of using basic tools, both electrical and mechanical, to develop a fairly complex ME experiment. Limitations of LabVIEW: LabVIEW is a very capable tool for benchtop experimentation in either an ME or EE environment. For the most part, the capability of the tool is limited by the user's understanding and experience with the LabVIEW software and the other tools. LabVIEW's graphical programming environment doesn't blend with traditional structured or OOP programming environments, so maintenance and enhancement are a limitation in the traditional sense (National Instruments would argue against my opinion). LabVIEW offers basic program structures such as for loops, if-then-else, and while loops. This is sufficient for typical benchtop testing software, but LabVIEW is limited when implementing advanced structures like a binary search tree or recursion: it can be done, but not elegantly. As LabVIEW has grown, a fairly modern computer has become almost necessary to run it, but these days a good enough computer can be purchased for a bargain. 
In summary, the limitation is mostly the skill of the LabVIEW user. References: Mechatronics Cylinder Serial Communication Test (RS232) PS Scale Digital Manometers
{ "domain": "engineering.stackexchange", "id": 79, "tags": "mechanical-engineering, measurements, product-testing, labview" }
Why do we use the Lagrangian and Hamiltonian instead of other related functions?
Question: There are 4 main functions in mechanics: $L(q,\dot{q},t)$, $H(p,q,t)$, $K(\dot{p},\dot{q},t)$, $G(p,\dot{p},t)$. The first two are the Lagrangian and Hamiltonian. The second two are somewhat analogous to the first two, but we don't use them. Everywhere they write that using them can cause problems. But why? Answer: Here is one argument: Starting from Newton's 2nd law, the Lagrangian $L(q,v,t)$ is just one step away. A Legendre transformation $v\leftrightarrow p$ to the Hamiltonian $H(q,p,t)$ is well-defined for a wide class of systems because there is typically a bijective relation between velocity $v$ and momentum $p$. On the other hand, there is seldom a bijective relation between position $q$ and force $f$ (although Hooke's law is a notable exception). Therefore the Legendre transforms $K(f,v,t)$ and $G(f,p,t)$ are often ill-defined.
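The well-defined Legendre transform $v\leftrightarrow p$ can be sketched symbolically. Assuming the standard form $L = \frac{1}{2}mv^2 - V(q)$ (my choice, not given in the question), sympy recovers the familiar Hamiltonian:

```python
import sympy as sp

m, v, p, q = sp.symbols('m v p q', positive=True)
V = sp.Function('V')(q)

L = m*v**2/2 - V                  # Lagrangian L(q, v)
p_of_v = sp.diff(L, v)            # p = dL/dv = m*v, bijective in v
v_of_p = sp.solve(sp.Eq(p, p_of_v), v)[0]

H = sp.simplify((p*v - L).subs(v, v_of_p))   # Legendre transform
print(H)  # p**2/(2*m) + V(q)
```

The transform works precisely because $p = mv$ can be inverted for $v$; for $K$ and $G$ one would need to invert a relation between force and position, which generally has no unique solution.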
{ "domain": "physics.stackexchange", "id": 55334, "tags": "classical-mechanics, lagrangian-formalism, coordinate-systems, hamiltonian-formalism" }
Reference for the shortcomings of Google's PageRank algorithm?
Question: Sometimes, when using Google search, you don't immediately get quality results for your query. It seems that the PageRank algorithm gets distracted by widely used keywords that have different meanings and uses. Therefore, you need to spend extra time and possibly use different keywords to refine the search context. Is there a good reference that addresses the PageRank algorithm's shortcomings? Is there a contextual search algorithm? Answer: To answer your specific question, there are many papers that discuss PageRank mathematically, such as: Deeper Inside PageRank, (A. N. Langville and C. D. Meyer), Internet Mathematics (1), 335-400 (2004), and in each one you might find discussion of computational and operational issues (computing it faster, using memory more efficiently), but I don't know of any that stand out as addressing shortcomings of -results- specifically. To answer what you're getting at ("why does Google not give me what I want without trying?"): Google search does not equal PageRank (though PR is a major part of it), and PageRank itself doesn't address lexical ambiguity. Google's additions try to address multiple meanings of words (different meanings under different contexts) and synonyms (other strings that mean the same thing); they're not perfect, but more and more they are being addressed.
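For reference, the mathematical core of PageRank is just a damped power iteration over the link graph; here is a minimal sketch on a hypothetical four-page web (my own toy example, not Google's production system, which layers much more on top):

```python
import numpy as np

# links[i] lists the pages that page i links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, d = 4, 0.85                         # number of pages, damping factor

# Column-stochastic link matrix: M[j, i] = 1/outdegree(i) if i links to j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

r = np.full(n, 1.0 / n)                # start from a uniform rank vector
for _ in range(100):
    r = (1 - d) / n + d * M @ r        # damped power iteration
print(r)                               # ranks sum to 1; page 2 scores highest
```

Note that nothing in this computation looks at the words on a page, which is exactly why PageRank alone cannot resolve lexical ambiguity.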
{ "domain": "cstheory.stackexchange", "id": 83, "tags": "ds.algorithms, reference-request" }
How to interpret results of a Clustering Heart Failure Dataset?
Question: I am doing an analysis of this dataset: click In this dataset there are 13 features: 12 input features and 1 target variable, called "DEATH_EVENT". I tried to predict the survival of the patients in this dataset, using the features. However, now I am trying to do a cluster analysis to see if the patients are grouped in clusters. This is the code I have written. from sklearn.cluster import KMeans Features = ['ejection_fraction','serum_creatinine'] #the highest correlated features with death_event X = heart_data[Features] wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0) kmeans.fit(X) wcss.append(kmeans.inertia_) plt.plot(range(1, 11), wcss) plt.title('Elbow Method') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() From this graph I can observe that there are 2 clusters. Now kmeans = KMeans(n_clusters=2, init='k-means++', max_iter=300, n_init=10, random_state=0) pred_y = kmeans.fit_predict(X) plt.scatter(X["ejection_fraction"], X["serum_creatinine"]) plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=300, c='red') plt.show() And I obtained this chart: Now, what can I say from this chart? I think that it is not useful, right? I used only the features ejection_fraction and serum_creatinine because these are the only ones I used for the prediction. Or do I have to use all the variables except DEATH_EVENT? In this way: X = heart_data.iloc[:, :11] But in this case I obtain this: I am not able to understand these charts, I think that I am doing something wrong, but what? Where are the clusters? How to interpret these results? UPDATE: I am not able to use umap-learn; my Mac cannot install it, I get a lot of errors. However, I did something related to your advice. Here is all the code: https://pastebin.com/RdJb0ydu The first 2 parts are the code that you have written here. 
In the 3rd part I use kmeans(n_clusters=2) because from the silhouette I saw that 2 clusters was best. Then I did the prediction, concatenated the results to the original dataset, and printed out the column of DEATH_EVENT and the column with the results of clustering. From this column, what can I say? How can I understand if the 0 of the prediction refers to the patients who survived or to those who died? Answer: I would use all the features and see how the separateness of my clusters behaves according to some metric, for example, the silhouette score. Additionally, it is very important to scale your data prior to clustering since kmeans is a distance-based algorithm. heart_data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/00519/heart_failure_clinical_records_dataset.csv") from sklearn.cluster import KMeans from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.metrics import silhouette_score Features = heart_data.drop(["DEATH_EVENT"], axis = 1).columns X = heart_data[Features] sc = [] for i in range(2, 25): kmeans = Pipeline([("scaling",StandardScaler()),("clustering",KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0))]).fit(X) score = silhouette_score(X, kmeans["clustering"].labels_) sc.append(score) plt.plot(range(2, 25), sc, marker = "o") plt.title('Silhouette') plt.xlabel('Number of clusters') plt.ylabel('Score') plt.show() You could also try different combinations of features so that the score is maximized. For visualization purposes you can use a decomposition technique from sklearn.decomposition import PCA import matplotlib.pyplot as plt plt.style.use("seaborn-whitegrid") pca = Pipeline([("scaling",StandardScaler()),("decompositioning",PCA(n_components = 2))]).fit(X) X2D = pca.transform(X) plt.scatter(X2D[:,0],X2D[:,1], c = kmeans["clustering"].labels_, cmap = "RdYlBu") plt.colorbar(); Last but not least, I recommend using a manifold projection such as UMAP 
on your data. It might help with your task by generating "well-defined" clusters (it might, but not necessarily; nonetheless it is worth trying). Look, by using UMAP the results seem to improve: code: # pip install umap-learn heart_data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/00519/heart_failure_clinical_records_dataset.csv") from sklearn.cluster import KMeans, DBSCAN from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.metrics import silhouette_score from umap import UMAP Features = heart_data.drop(["DEATH_EVENT"], axis = 1).columns X = heart_data[Features] sc = [] for i in range(2, 25): kmeans = Pipeline([("scaling",StandardScaler()),("umap",UMAP()),("clustering",KMeans(n_clusters=i, init='k-means++',random_state=0))]).fit(X) score = silhouette_score(X, kmeans["clustering"].labels_) sc.append(score) plt.plot(range(2, 25), sc, marker = "o") plt.title('Silhouette') plt.xlabel('Number of clusters') plt.ylabel('Score') plt.show() from sklearn.decomposition import PCA import matplotlib.pyplot as plt plt.style.use("seaborn-whitegrid") kmeans = Pipeline([("scaling",StandardScaler()),("umap",UMAP()),("clustering",KMeans(n_clusters=3, init='k-means++',random_state=0))]).fit(X) pca = Pipeline([("scaling",StandardScaler()),("umap",UMAP()),("decompositioning",PCA(n_components = 2))]).fit(X) X2D = pca.transform(X) plt.scatter(X2D[:,0],X2D[:,1], c = kmeans["clustering"].labels_, cmap = "RdYlBu") plt.colorbar(); The plot shows the first and second principal components of the UMAP projection (it is simply a projection of how all the features would look in 2D space). Colours are the cluster ids, i.e. for every colour we see the cluster to which the algorithm (k-means) assigned each observation.
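On the question of which cluster id corresponds to which outcome: cluster labels are arbitrary, so a cross-tabulation against DEATH_EVENT shows which cluster is enriched for deaths. A sketch with made-up labels (since the real counts depend on your fitted model):

```python
import numpy as np
import pandas as pd

# Hypothetical outcomes and hypothetical k-means labels for 8 patients.
death_event = np.array([0, 0, 0, 1, 1, 0, 1, 0])
cluster = np.array([0, 0, 0, 1, 1, 0, 1, 1])

# Rows are cluster ids, columns are DEATH_EVENT values, cells are counts.
table = pd.crosstab(cluster, death_event,
                    rownames=["cluster"], colnames=["DEATH_EVENT"])
print(table)
# Here all deaths land in cluster 1, so cluster 1 maps to the "died" group;
# with a real fit, whichever cluster has the higher death rate plays that role.
```

Remember that k-means never saw DEATH_EVENT, so perfect alignment is not guaranteed; the table only tells you how well the unsupervised grouping happens to track the outcome.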
{ "domain": "datascience.stackexchange", "id": 8883, "tags": "python, clustering, k-means" }
Why is it a dissipating term, and why second order?
Question: (corresponding paper) Why is it said that $-\lambda \frac{|\dot{x}|\dot{x}}{2}$ is a small second-order dissipating term? For a linear second-order system, the velocity term is dissipating, because I know the equation is derived from the mass-damper-spring physical system. I am new to ordinary differential equations; does the first-derivative state term always play a dissipating role in a second-order system, no matter its power and its sign? Answer: The word dissipation here refers to the energy dissipated as heat in the canonical mass-spring-damper system. Usually this term is related to $\dot{x}$ in the form of $\ddot{x} = -b\dot{x}$, and in many cases we call this damping - but it is essentially just friction. Now, if we were concerned with our system's velocity and not position, we should clearly see that the above system is stable (assuming $b \gt 0$). This is the simple linear case, but what happens if we do not care about small velocities, but want to heavily "dampen" large velocities? In this case, we could look for damping of the form $\ddot{x} = -b\dot{x}^{2}$. But what happens when this system starts out with a negative velocity? Clearly it is unstable. So, we will have to look for a different method of damping. Consider the system described by $\ddot{x} = -b\operatorname{sgn}(\dot{x})\dot{x}^2$. Whenever our velocity is negative, acceleration will be positive. Whenever velocity is positive, acceleration will be negative. This is now a stable system. Finally, consider this system's behavior close to the origin. Our damping force decreases quadratically as we approach a velocity of zero. As this is the case, it makes sense to consider this a small dissipating term - as it will not dissipate all of our system's energy in a finite amount of time. As a final note, let $b = \frac{\lambda}{2}$ and note that $|\dot{x}|\dot{x} = \operatorname{sgn}(\dot{x})\dot{x}^{2}$.
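The stability argument can be checked with a quick simulation. This sketch (arbitrary $b$ and initial speeds, my own numbers) Euler-integrates $\dot{v} = -b\operatorname{sgn}(v)v^2$ and shows the speed decays for both signs of the initial velocity:

```python
import math

def simulate(v0, b=0.5, dt=1e-3, steps=5000):
    """Euler-integrate vdot = -b * sgn(v) * v**2 from v0 for steps*dt seconds."""
    v = v0
    for _ in range(steps):
        v += dt * (-b * math.copysign(1.0, v) * v**2)
    return v

print(simulate(2.0))   # positive start: decays toward 0 from above
print(simulate(-2.0))  # negative start: decays toward 0 (unlike vdot = -b*v**2)
```

Running the same loop with the unsigned law `v += dt * (-b * v**2)` from a negative start would instead drive `v` toward minus infinity, which is the instability the answer points out.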
{ "domain": "robotics.stackexchange", "id": 2542, "tags": "control" }
How to estimate the runoff time for my area?
Question: I want to use a rainfall-runoff model for pluvial flash floods and need to choose the appropriate duration for my synthetic rain event. The literature often says that the chosen duration should be as long as, or twice as long as, the runoff time from the beginning of the runoff until it leaves the catchment. How do I estimate the runoff time? Perhaps somehow by considering the catchment area or the topography? Answer: The suggestion above to check historical data to get an idea of the time to peak/time of concentration, and get a sense of the general storm response time, is a good one. There are also empirical methods to relate the catchment slope, runoff coefficient, etc. to the time to peak/time of concentration, which sounds like what you are looking for. You can find examples of them on the MTO Computational Methods webpage; more localized ones might exist wherever you are located. Look for the section on time to peak, although you may need other definitions in there to determine your inputs to that calculation (such as catchment slope). Keep in mind that 'runoff time' is not a specific term, and that the 'runoff time' for a full response may be extremely long, as nature tends to follow exponential response trends with long tails in recession. However, the time to peak and time of concentration are more specific terms with definitions you can find and apply. In terms of how long to run the simulation, it depends again on what you are doing. For a real storm, there may be subsequent rainfall events before the full response is seen. If you just need the peak, then 2-3x the estimated time to peak from the methods above would likely be fine, depending on how flashy the catchment is. If you are running a hydrologic model of sorts, then run it for a long time to see the full response, check the response hydrograph, and compare that to real data to help validate your model.
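As one concrete example of such an empirical method (my addition, not from the MTO page): the Kirpich formula relates flow-path length and slope to the time of concentration. The coefficients below are the commonly quoted SI form and should be checked against your local guidance before relying on them:

```python
def kirpich_tc(length_m, slope):
    """Kirpich (1940) time of concentration, SI form.

    length_m: longest flow-path length in metres
    slope:    average flow-path slope in m/m
    returns:  time of concentration in minutes
    """
    return 0.0195 * length_m**0.77 * slope**(-0.385)

# Hypothetical small catchment: 1.2 km flow path, 2% slope.
tc = kirpich_tc(length_m=1200.0, slope=0.02)
print(f"time of concentration ~ {tc:.1f} min")
```

A synthetic design-storm duration of roughly 1-2x this estimate would then be a reasonable starting point, per the guidance in the answer above.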
{ "domain": "earthscience.stackexchange", "id": 2086, "tags": "hydrology, models" }
Why are materials that are better at conducting electricity also proportionately better at conducting heat?
Question: It seems like among the electrical conductors there's a relationship between the ability to conduct heat as well as electricity. Eg: Copper is better than aluminum at conducting both electricity and heat, and silver is better yet at both. Is the reason for this known? Are there materials that are good at conducting electricity, but lousy at conducting heat? Answer: See http://en.wikipedia.org/wiki/Thermal_conductivity In metals, I think it generally has to do with the higher valence band electron mobility, but it gets more interesting elsewhere. In metals, thermal conductivity approximately tracks electrical conductivity according to the Wiedemann-Franz law, as freely moving valence electrons transfer not only electric current but also heat energy. However, the general correlation between electrical and thermal conductance does not hold for other materials, due to the increased importance of phonon carriers for heat in non-metals. As shown in the table below, highly electrically conductive silver is less thermally conductive than diamond, which is an electrical insulator.
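The Wiedemann-Franz relation can be checked numerically with handbook values for copper at room temperature (approximate numbers I am supplying, not from the answer):

```python
# Wiedemann-Franz: kappa / (sigma * T) should be close to the Lorenz number
# L0 ~ 2.44e-8 W*Ohm/K^2 for metals, since the same free electrons carry
# both charge and heat.
sigma = 5.96e7   # electrical conductivity of Cu, S/m (approximate)
kappa = 401.0    # thermal conductivity of Cu, W/(m*K) (approximate)
T = 300.0        # temperature, K

lorenz = kappa / (sigma * T)
print(lorenz)    # within ~10% of 2.44e-8, as the law predicts
```

Running the same check on diamond would fail badly: its huge thermal conductivity is carried by phonons, not free electrons, which is exactly the non-metal caveat in the answer.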
{ "domain": "physics.stackexchange", "id": 5020, "tags": "electricity, material-science, heat" }
When should I use cryofixation and chemical fixation?
Question: We know that the technique used in TEM sample preparation involves multiple steps, one of the most important of which is fixation. Fixation can be of two types: Cryofixation, in which the specimen is subjected to freezing temperatures to preserve the cell's living material before we observe it. Chemical fixation, which involves the use of certain chemicals, such as formalin, glutaraldehyde, etc., for almost the same purpose. The question is: when is it mandatory to use only one of them, such as cryofixation or chemical fixation? In some instances, have researchers ever needed to use both techniques at the same time? Answer: Cryofixation usually preserves the organelles inside the cells better, but is more work and usually results in TEM images that are lower contrast. Chemical fixation is easier and gives good contrast, but is more likely to destroy details in delicate samples. Chemical fixation will usually be used unless it is found, or expected, not to work well for a particular sample. It is possible to mix the methods, for example by including glutaraldehyde in freeze substitution solutions. Here is an article that describes everything you asked about in more detail, if you can access it: A comparison of cryo- versus chemical fixation in the soil green algae Jaagiella
{ "domain": "biology.stackexchange", "id": 11139, "tags": "biochemistry, zoology, microbiology, lab-techniques, fluorescent-microscopy" }
How to autonavigate in an unknown map
Question: I have a custom robot running on a Raspberry Pi (Raspbian) with ROS Kinetic. I have understood from this that for the robot to avoid obstacles, there must be a known map that can only be generated by manually controlling the robot using either a joystick or keyboard, etc. Is it possible for the robot to autonomously navigate through an unknown map and avoid obstacles? Where can I find such a package, and how do I integrate it with my custom robot? My robot does not have any LIDARs, btw. It has an ultrasonic sensor in front, 2 analog IR sensors on the front sides and 2 digital IR sensors in the back. Originally posted by Nelle on ROS Answers with karma: 21 on 2018-08-21 Post score: 1 Answer: Is it possible for the robot to autonomously navigate through an unknown map and avoid obstacles? Yes. This procedure is typically called autonomous mapping or exploration. There are quite a few packages that provide this sort of functionality. See #q187944, #q240011, #q270227 and quite a few other previous questions about this. Originally posted by gvdhoorn with karma: 86574 on 2018-08-23 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 31604, "tags": "ros-kinetic" }
Using preprocessing/alignment functions on the server
Question: I am new to bash and the processes behind cluster computing in general and need some help understanding some basics. After looking all over the internet and this forum (+ Ask Ubuntu) I found nothing that addresses this issue. I have a series of raw RNA sequencing files (.bam) and wish to use samtools (or other tools) to begin the data preprocessing steps defined in many workflows. I have installed samtools and many other programs necessary for the workflow; however, once I am on the remote server in bash and try to run a command over the files of interest, it gives me the error: -bash: samtools: command not found When I try to install the package, say samtools, by running sudo apt install samtools, I get an error stating I do not have permission to do so. What do I do? Must I bring this issue up with the server provider, or is there a way around it? Thank you for your time. Answer: The short answer is to use conda. In the bioconda channel we have most of the tools used in bioinformatics, such as samtools. You do not need administrator permissions to install conda or packages with it, so your lack of sudo ability is not an issue. "I have installed samtools and many other programs necessary for the workflow; however, once I am on the remote server in bash and try to run a command over the files of interest, it gives me the error" Installing packages on your local machine won't affect the remote server unless they share a file system, so this is expected. As an aside, BAM files aren't really "raw"; they're considered processed data (fastq/fast5 files are considered truly "raw" data in the sequencing world).
{ "domain": "bioinformatics.stackexchange", "id": 876, "tags": "rna-seq, scrnaseq, samtools, bash" }
How to implement transmission in tracked chassis with one motor?
Question: I see that in small robots a tracked chassis is implemented with 2 motors, each powering one side of the vehicle, like this: (image stolen from here) But in real-scale tanks I assume there is only one motor, so there must be some way of applying power to both sides independently. Answer: Yes there is, and the principle is the same as in wheeled vehicles. There is a part in a vehicle chassis called a differential, which transmits the raw power from the engine to the axle. Tanks and other tracked vehicles use a similar system to split a single engine output between both sides. Although going forward is pretty simple, there is a major problem that still persists. Since tanks don't have an additional mechanism for turning, they use 2 different methods as a solution. The first one is a basic solution: there are 2 different brake clutches for the individual tracks. When one of them is used, the corresponding track slows down and the vehicle turns in that direction by dragging the tracks. The second one is called neutral steering and is not that common. It was used in tanks such as the German Panther. This system has a hybrid gearbox to set different speeds in different directions for each track. Vehicles with this mechanism can do a stationary 360 degree turn. Neutral steering looks pretty, but trying to implement it in a robotic platform requires deep knowledge of mechanical engineering, as this kind of steering system is a very complex one. More information about tracked vehicle steering can be found here: Tracked Vehicle Steering
{ "domain": "robotics.stackexchange", "id": 951, "tags": "tracks, gearing, chassis" }
What was missing in Dirac's argument to come up with the modern interpretation of the positron?
Question: When Dirac found his equation for the electron $(-i\gamma^\mu\partial_\mu+m)\psi=0$ he famously discovered that it had negative energy solutions. In order to solve the problem of the stability of the ground state of the electron he invoked Pauli's exclusion principle and postulated that the negative energy states were already filled by a "sea" of electrons. This allowed him to predict the positron, viewed as a hole in the sea. This interpretation was ultimately discarded owing to its inapplicability to bosons and difficulties with explaining the invisibility of the infinite charge of the sea. According to my understanding, the modern argument goes something like this. There is a discrete symmetry of the lagrangian called charge conjugation $\psi \rightarrow \psi^c$ which allows the negative energy solutions to be interpreted as positive energy solutions for a second mode of excitation of the electron field with opposite charge, called positrons. The decay of electrons to positrons is then suppressed by the $U(1)$ gauge symmetry of the lagrangian forcing conservation of electrical charge. According to this interpretation, what Dirac would have missed was the lagrangian formalism. Is this historically and physically correct? Answer: Dirac's derivation of the existence of positrons that you described was a totally legitimate and solid argument and Dirac rightfully received a Nobel prize for this derivation. As you correctly say, the same "sea" argument depending on Pauli's exclusion principle isn't really working for bosons. Modern QFT textbooks want to present fermions and bosons in a unified language, which is why they mostly avoid the "Dirac sea" argument. But this fact doesn't make it invalid. The infinite potential charge of the Dirac sea is unphysical. In reality, one should admit that he doesn't know what the charge of the "true vacuum" is.
So there's an unknown additive shift in the quantity $Q$ and of course the right additive choice is the one that implies that the physical vacuum $|0\rangle$ (with the Dirac sea, i.e. with the negative-energy electron states fully occupied) carries $Q=0$. The right choice of the additive shift is a part of renormalization and the choice $Q=0$ is also one that respects the ${\mathbb Z}_2$ symmetry between electrons and positrons. It is bizarre to say that Dirac missed the Lagrangian formalism. Dirac was the main founding father of quantum mechanics who emphasized the role of the Lagrangian in quantum mechanics. That's also why Dirac was the author of the first steps that ultimately led to Feynman's path integrals, the approach to quantum mechanics that makes the importance of the Lagrangian in quantum mechanics manifest. It would be more accurate to say that Dirac didn't understand (and opposed) renormalization so he couldn't possibly formulate the right proof of the existence of the positrons etc. that would also correctly deal with the counterterms and similar things. Still, he had everything he needed to define a consistent theory at the level of precision that was available to him (ignoring renormalization of loop corrections): he just subtracted the right (infinite) additive constant from $Q$ by hand. Your sentence The decay of electrons to positrons is then suppressed by the U(1) gauge symmetry of the lagrangian forcing conservation of electrical charge. is strange. Since the beginning – in fact, since the 19th century – the U(1) gauge symmetry was a part of all formulations of electromagnetic theories. It has been a working part of Dirac's theory from the very beginning, too.
The additive shift in $Q$, $Q=Q_0+\dots$, doesn't change anything about the U(1) transformation rules for any fields because they're given by commutators of the fields with $Q$ and the commutator of a $c$-number such as $Q_0$ with anything vanishes: $Q_0$ is completely inconsequential for the U(1) transformation rules. All these facts were known to Dirac, too. The fact that the U(1) gauge symmetry was respected was the reason that there has never been anything such as a "decay of electrons to positrons" in Dirac's theory, not even in its earliest versions. An electron can't decay to a positron because that would violate charge conservation while the charge has always been conserved. For historical reasons, one could mention that unlike Dirac, some other physicists were confused about these elementary facts such as the separation of 1-electron state and 1-positron states in different superselection sectors. In particular, Schrödinger proposed a completely wrong theory of "Zitterbewegung" (trembling motion) which was supposed to be a very fast vibration caused by the interference between the positive-energy and negative-energy solutions. However, there's never such interference in the reality because the actual states corresponding to these solutions carry different values of the electric charge. Their belonging to different superselection sectors is the reason why the interference between them can't ever be physically observed. The "Zitterbewegung" is completely unphysical.
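In modern notation, the "right additive choice" is implemented by normal ordering the charge operator. The following is a standard-textbook sketch, not part of the original answer (and it suppresses the overall factor of the electron charge):

```latex
% Normal-ordered charge operator: the Dirac-sea contribution is subtracted,
% so the vacuum carries Q = 0 and the U(1) transformation rules are untouched.
Q = \int d^3x \, :\!\psi^\dagger \psi\!: \;
  = \sum_{s}\int \frac{d^3p}{(2\pi)^3}
    \left( b^\dagger_{\mathbf{p},s} b_{\mathbf{p},s}
         - d^\dagger_{\mathbf{p},s} d_{\mathbf{p},s} \right),
\qquad Q\,|0\rangle = 0 ,
```

so one-electron and one-positron states carry opposite $Q$, consistent with the ${\mathbb Z}_2$ symmetry between electrons and positrons mentioned in the answer.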
{ "domain": "physics.stackexchange", "id": 52859, "tags": "quantum-field-theory, electrons, antimatter, dirac-equation" }
Does $PV\propto T$ apply to a photon gas?
Question: For an ultrarelativistic ideal gas, I know that $p=\frac{u}{3}$; $VT^3 =$ constant; $pV\propto T$. For a photon gas, I know that the first two results apply as well. However, I am unsure if the third result also applies (or how to prove it). My biggest concern is that while for an ultrarelativistic gas, the proportionality constant $Nk_B$ makes physical sense, it does not for a photon gas as $N$ doesn’t make sense (chemical potential is zero). From what I recall, for the ideal gas, I employ result 3 to obtain result 2. How do I prove the relationship for the photon gas, and how do I tackle my concern? Answer: The confusion might come from the fact that the particle number is a function of temperature. Let us start with a photon dispersion $\epsilon_k=c k$. The grand canonical partition function for $\mu=0$ and two independent polarisations is then given by: $$ Z_G = \prod_k \frac{1}{(1-e^{-\beta \epsilon_k})^2} = e^{-\beta \Phi} $$ From here we can directly calculate $PV$ via the grand potential $\Phi$: $$ - PV = -k_B T \, \log Z_G= 2 k_B T \sum_k \log(1-e^{-\beta \epsilon_k} ) $$ Taking now the continuum limit of the sum: $$ 2 k_B T \sum_k \log(1-e^{-\beta \epsilon_k} ) = \frac{2k_B T V}{h^3} \int d^3k \, \log(1-e^{-\beta \epsilon_k} ) = \frac{8 \pi V}{(hc)^3 \beta^4} \int_0^\infty dx \, x^2 \log(1-e^{-x} ) $$ The remaining integral is known to be $-\frac{\pi^4}{45}$. So we find: $$ PV = \frac{\pi^2}{45} \frac{k_B^4}{(\hbar c)^3} V T^4 $$ To relate this to the particle number we also have to calculate the average particle number. Here we must consider the operator average, since the chemical potential is zero. $$ <N>= 2\sum_k <n_k> = \frac{2 V}{h^3} \int d^3k \, \frac{1}{e^{\beta \epsilon_k}-1}= \frac{V}{\pi^2 \beta^3 (\hbar c)^3} \int_0^\infty dx \, \frac{x^2}{e^{x}-1} = \frac{2 \zeta(3) \, k_B^3}{\pi^2 (\hbar c)^3} V T^3 $$ The integral appearing in here is $2 \zeta(3)$. With this we can now also calculate the relation between $PV$ and $<N>$.
$$ PV= \frac{\pi^4}{90 \zeta(3)} <N> k_B T $$
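The two dimensionless integrals quoted above can be checked numerically. This is a verification sketch in Python, not part of the original answer; the quadrature settings are arbitrary:

```python
import math

def simpson(f, a, b, n=20000):
    # Composite Simpson rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Pressure integral:  int_0^inf x^2 log(1 - e^{-x}) dx = -pi^4/45
I1 = simpson(lambda x: x * x * math.log(-math.expm1(-x)), 1e-9, 60.0)
# Particle-number integral:  int_0^inf x^2 / (e^x - 1) dx = 2*zeta(3)
I2 = simpson(lambda x: x * x / math.expm1(x), 1e-9, 60.0)

print(I1, -math.pi ** 4 / 45)      # both approximately -2.16465
print(I2, 2 * 1.2020569031595943)  # both approximately  2.40411
```

The upper limit 60 suffices because both integrands decay like $e^{-x}$; `expm1` keeps the small-$x$ endpoint numerically stable.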
{ "domain": "physics.stackexchange", "id": 56034, "tags": "thermodynamics, statistical-mechanics, photons, ideal-gas, gas" }
Stretching operator for quantum mechanics
Question: As a counterpart to the quantum mechanical translation operator (see for example this post), is there a unitary operator which describes the stretching of a line? That is, consider a chain of particles on a line, spaced at equal distances, $d$, from one another. The chain is assumed to be 1D and symmetric with respect to the origin. I want to describe the transformation of this chain into another such that the equal separation distance is instead $2d$. This will correspond to a stretching of the whole line about the origin. Can a unitary, similar to that used to describe the translation of individual particles, be used to describe this? Answer: By way of exemplifying @ACuriousMind 's succinct comment, first recall Lagrange's translation operator, $$ e^{b \frac{\partial}{\partial x}} f(x)= f(x+b). $$ Changes of variable produce arbitrary advective flows. For instance, for your dilation, $$ y\equiv e^x, \qquad \Longrightarrow \qquad x=\ln y . $$ Defining $g(y)=g(e^x)\equiv f(x)$, evaluate $$ e^{by \frac{\partial}{\partial y}} g(y)= g(e^{b+x})= g( e^b ~ y). $$ You have stretched $y$ by a factor of $e^b$, and in your case you wanted doubling, so $b=\ln 2$. In terms of QM operators, the dilation operator is $\exp(i\frac{b}{\hbar}\hat{y}\hat{p}_y )$, a rotation in phase space. You may well have used this in quantum optics without taking stock of it, since for the quantum harmonic oscillator $$ [a,a^\dagger ]=1, $$ combinatorially isomorphic to $$ [\frac{\partial}{\partial y}, y]=1, $$ so, then $$ e^{it\omega~ a^\dagger a} g(a^\dagger) = g(e^{it\omega} ~ a^\dagger). $$
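The identity $e^{by\,\partial_y} g(y) = g(e^b y)$ can be checked term by term on monomials, since $(y\,\partial_y)\,y^k = k\,y^k$. This is a numerical sketch, not from the answer; the values of $k$, $b$, $y$ are arbitrary:

```python
import math

# (y d/dy) y^k = k y^k, so the operator series exp(b y d/dy) acting on a
# monomial sums to exp(b k) y^k = (e^b y)^k, i.e. a dilation of y by e^b.
def dilated_monomial(k, b, y, terms=60):
    # Truncated operator exponential applied to g(y) = y^k.
    return sum((b * k) ** n / math.factorial(n) for n in range(terms)) * y ** k

b = math.log(2)          # doubling the spacing, as in the answer
k, y = 3, 1.7            # arbitrary test monomial and point
lhs = dilated_monomial(k, b, y)
rhs = (math.exp(b) * y) ** k      # g(e^b y)
print(lhs, rhs)          # both approximately 39.304
```

Since monomials span analytic functions, agreement on every $y^k$ is agreement on the whole identity.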
{ "domain": "physics.stackexchange", "id": 49576, "tags": "quantum-mechanics, operators" }
Sign of work done in an $L$-$R$ circuit
Question: Suppose we have an $L$-$R$ circuit with an inductor of inductance $L$, a resistor of resistance $R$, and a battery of emf $\epsilon$. Now initially, when the switch is closed, no current is present. Let the maximum current be $I_{max}$; then the energy stored in the inductor after sufficient time is $U=\frac{1}{2}L{I_{max}}^2$. It is also clear that the magnitudes of the work done by the battery on the inductor and of the work done on the magnetic field are both equal to $U$. But I don’t know which of the two does 'positive' and which does 'negative' work. Also, is power supplied or delivered to/from an inductor when the current through the inductor increases? Answer: Firstly, it is necessary to note that an $L$-$R$ circuit is not a conservative system - while the current in the inductor is brought to a steady level some energy is necessarily dissipated in the resistance. Moreover, with a steady current in the inductor the energy continues to be dissipated. Textbooks would normally distinguish work done by the system and work done on the system, which have opposite signs. Thus, one usually discusses only one of these (which @Dale refers to as positive/negative sign conventions - his/her answer is certainly correct). This is rather general in energy discussions - e.g., in thermodynamics one similarly distinguishes the work done by/on the gas. Thus, if we adopt the point of view of the battery - it spends energy, which goes into building the current in the inductor and producing Joule's heat in the resistance. Thus, the battery is doing positive work, while the resistor and the inductor are doing negative work on the battery. Conversely, if we look from the point of view of the inductor, its energy increases, i.e., it has work done on it, which means that the inductor itself is doing negative work. To summarize: the sign of the work is a matter of perspective: we do not say that work is positive/negative, but rather that A does positive/negative work on B, which means that A transfers energy to B. When the work is called simply positive/negative, it means that there is an agreed convention about who does the work.
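The energy bookkeeping can be checked numerically for the standard transient $i(t) = \frac{\epsilon}{R}\bigl(1 - e^{-Rt/L}\bigr)$: the battery's (positive) work equals the inductor's stored energy plus the heat dissipated in the resistor. This is a sketch, not part of the answer; the component values are illustrative:

```python
import math

# Series R-L transient driven by a battery of emf eps from t = 0:
#   i(t) = (eps/R) * (1 - exp(-R t / L))
# Check: (work done by battery) = (energy stored in L) + (heat in R).
eps, R, L = 12.0, 3.0, 0.5      # illustrative values (V, ohm, H)
T, n = 2.0, 200_000             # integration horizon and steps
dt = T / n

def i(t):
    return (eps / R) * (1.0 - math.exp(-R * t / L))

W_batt = sum(eps * i(k * dt) * dt for k in range(n))   # battery does positive work
Q_R = sum(R * i(k * dt) ** 2 * dt for k in range(n))   # dissipated in resistor
U_L = 0.5 * L * i(T) ** 2                              # stored in inductor

print(W_batt, Q_R + U_L)        # both approximately 88.0
```

Note that $Q_R$ keeps growing with $T$ while $U_L$ saturates at $\frac{1}{2}LI_{max}^2$, which is the non-conservative behaviour the answer describes.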
{ "domain": "physics.stackexchange", "id": 74308, "tags": "electric-circuits, work, electrical-resistance, inductance" }
Count the number of biggest numbers on the board
Question: I coded two answers to this challenge. The first one was a standard attempt, but after it timed out, I figured out the "trick" to solving the problem much more simply. However, I'd still appreciate comments and critiques on my first attempt, because there will be other problems that will need a long approach and I'd like to improve how I'd code the answer.

Challenge

Julia is playing a game on an infinite 2-dimensional grid with the bottom left cell referenced as (1, 1). All the cells contain a value of zero initially. The game consists of n steps. In each step, Julia is given two integers a and b. The value of each of the cells in the coordinate (u, v) satisfying 1 ≤ u ≤ a and 1 ≤ v ≤ b, is increased by 1. After n such steps, if x is the largest number in any cell on the board, how many instances of x are there on the board? Complete the function countX that has one parameter, a string array, steps, denoting the values of a and b for each of the steps of the game. The function should return the total number of occurrences of the greatest integer x in the grid after n steps.
Sample Input and Output

* Input: [ '18 29', '32 17', '34 9', '38 15', '36 22', '7 14', '5 100' ] Output: 2

First Attempt

function countX(stepArr){
    let board = []; //2 dimensional array
    let bigNum = 0;
    let bigNumCount = 0;
    const stepArrCount = stepArr.length;
    for(let k = 0; k < stepArrCount; k++){
        const sArr = stepArr[k].split(" ");
        let s1 = sArr[0]; //vertical steps
        let s2 = sArr[1]; //horiz steps
        for(let i = 0; i < s1; i++){
            let stopCompare = false;
            for(let j = 0; j < s2; j++){
                if(board[i] !== undefined){
                    if(board[i][j] !== undefined){
                        board[i][j]++;
                    }
                    else{
                        board[i][j] = 1;
                    }
                }
                else{
                    board[i] = [];
                    board[i][j] = 1;
                }
                if((k + 1 === stepArrCount) && !stopCompare){
                    if(board[i][j] > bigNum){
                        bigNum = board[i][j];
                        bigNumCount = 1;
                    }
                    else if(board[i][j] === bigNum){
                        bigNumCount++;
                    }
                    else{
                        stopCompare = true;
                    }
                }
            }
        }
    }
    return bigNumCount;
}

Second Attempt

function countX(stepArr){
    let sArr = stepArr[0].split(" ");
    let smallColA = parseInt(sArr[0]);
    let smallColB = parseInt(sArr[1]);
    if(sArr.length > 1){
        for(let i = 1; i < stepArr.length; i++){
            sArr = stepArr[i].split(" ");
            let s1 = parseInt(sArr[0]);
            let s2 = parseInt(sArr[1]);
            if(s1 < smallColA){
                smallColA = s1;
            }
            if(s2 < smallColB){
                smallColB = s2;
            }
        }
    }
    return smallColA * smallColB;
}

* Explanation of Input and Output (provided by challenger)

Assume that the following board corresponds to cells (i, j) where 1 ≤ i ≤ 4 and 1 ≤ j ≤ 7. At the beginning the board is in the following state:

0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0

After the first step we obtain:

0 0 0 0 0 0 0
0 0 0 0 0 0 0
1 1 1 0 0 0 0
1 1 1 0 0 0 0

After the second step we have:

0 0 0 0 0 0 0
1 1 1 1 1 1 1
2 2 2 1 1 1 1
2 2 2 1 1 1 1

Finally, after the last step the board will look like this:

1 0 0 0 0 0 0
2 1 1 1 1 1 1
3 2 2 1 1 1 1
3 2 2 1 1 1 1

So, the maximum number is 3 and there are exactly two cells which contain 3. Hence the answer is 2.
Answer: The second solution

If the input array is empty, the program will crash with an exception. It's recommended to specify the radix to parseInt calls:

    let smallColA = parseInt(sArr[0], 10);

The if(sArr.length > 1){ condition is unnecessary; the loop condition naturally takes care of that. Even better, there is no need to treat the first element specially: you can initialize smallColA and smallColB to Infinity. Not only will it be simpler, but it will work even when the input array is empty.

The first (naive) solution

You could write this simpler:

    if(board[i][j] !== undefined){
        board[i][j]++;
    }
    else{
        board[i][j] = 1;
    }

... using a technique you already used in your other recent question:

    board[i][j] = (board[i][j] || 0) + 1;
{ "domain": "codereview.stackexchange", "id": 23249, "tags": "javascript, programming-challenge" }
Heat Diffusion and Specific heat
Question: Consider two slabs - Slab A and Slab B, insulated on lateral faces as shown, initially at the same temperature, and having identical dimensions. The slabs at t = 0 are brought in contact with two heat reservoirs (on left and right) at temperatures $T_1$ and $T_2$. Slabs have the same thermal conductivity but different specific heats, with $c_A > c_B$. Since the specific heat of A is greater than that of B, I argue that the temperature profiles at any instant of time t would be as follows: i.e. since $c_A > c_B$, A will have a harder time raising its temperature than B. As a result the temperature gradients in A will be smaller (in magnitude) than in the case of B. This would mean that the heat transferred from the left reservoir in any time dt is smaller for A than for B. Furthermore, the rate of heat transfer in intermediate layers will also be lower in A than in B. I've often read that a higher specific heat restricts thermal diffusion; could this be one way of explaining why? Answer: You are right. What you are referring to may be the property called thermal diffusivity, $\alpha$: $$\alpha =\frac{k}{\rho C_p}$$ where

$C_p$ = specific heat
$k$ = thermal conductivity
$\rho$ = density

So thermal diffusivity is inversely related to specific heat. However, in your example, as soon as slab A reaches $T_2$ on the right side (a bit after slab B does), they both transfer heat at the same rate because they have the same thermal conductivity.
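The argument can be illustrated with an explicit finite-difference solution of $\partial T/\partial t = \alpha\,\partial^2 T/\partial x^2$ for two slabs with the same $k$ but different specific heats, so $\alpha_A < \alpha_B$. This is a sketch, not part of the answer; all numbers are illustrative:

```python
# Explicit FTCS solution of dT/dt = alpha * d2T/dx2 for a slab with fixed
# face temperatures T1 (left) and T2 (right), starting from T = 0 inside.
# Same k but c_A > c_B means alpha_A < alpha_B; all numbers illustrative.
def evolve(alpha, nx=51, length=1.0, t_end=0.02, T1=100.0, T2=0.0):
    dx = length / (nx - 1)
    dt = 0.2 * dx * dx / alpha    # respects stability limit dx^2/(2*alpha)
    T = [0.0] * nx
    T[0], T[-1] = T1, T2          # reservoir temperatures at the two faces
    t = 0.0
    while t < t_end:
        Tn = T[:]
        for i in range(1, nx - 1):
            Tn[i] = T[i] + alpha * dt / dx ** 2 * (T[i + 1] - 2 * T[i] + T[i - 1])
        T = Tn
        t += dt
    return T

T_A = evolve(alpha=0.25)   # larger specific heat -> smaller diffusivity
T_B = evolve(alpha=1.00)   # smaller specific heat -> larger diffusivity
print(T_A[25], T_B[25])    # slab B warms up faster at the midpoint
```

At any fixed time before steady state, slab B (smaller specific heat) is warmer throughout the interior, matching the question's sketch; at steady state both slabs reach the same linear profile and carry the same flux.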
{ "domain": "engineering.stackexchange", "id": 4559, "tags": "mechanical-engineering, heat-transfer, thermal-conduction" }
What is the difference between iai_kinect2 and libfreenect2?
Question: Hi, I bought a Kinect V2 (MISTAKE) and have to use it for gmapping because my university won't grant me more funds. I have already installed libfreenect2. Do I also need to install iai_kinect2? In case you have a better alternative for making a map of the environment than using gmapping then kindly post it. So my question is: what is the difference between the two? And what is OpenNI? Can I use it as an alternative to iai_kinect2, since I am unable to install that properly? Originally posted by pallavbakshi on ROS Answers with karma: 39 on 2017-01-20 Post score: 0 Answer: iai_kinect2 is a collection of ROS nodes that build on top of libfreenect2 to make the data (ie: point clouds) that a Kinect2 produces available through a set of publishers. In other words: it is a ROS driver for Kinect2. But it needs libfreenect2 to be able to communicate with the Kinect2 hardware. For what OpenNI is, see wikipedia/OpenNI. In the context of ROS it's basically an alternative to libfreenect. I have already installed libfreenect2. Do I also need to install iai_kinect2. If you want to have point clouds, RGB and IR images from your Kinect2 published as ROS msgs, then yes: libfreenect2 is not enough. There may be alternative drivers, but I can't suggest one (iai_kinect2 has always worked well for me). [..] iai_kinect2 [..] I am unable to install [..] properly? If you are having trouble installing it, you could either post a question on this board, or (possibly better) open an issue over at code-iai/iai_kinect2/issues. Originally posted by gvdhoorn with karma: 86574 on 2017-01-20 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by pallavbakshi on 2017-01-21: Hi, thanks for the answer. I have already opened an issue but haven't got any solution yet. I am reinstalling ubuntu 14.04 as a last resort. Comment by pallavbakshi on 2017-01-21: Can you shed some more light on libfreenect2 as well?
I have already looked at their page but couldn't get much about the function of libfreenect2. Comment by gvdhoorn on 2017-01-21: libfreenect2 is essentially what you would call a driver on Windows: it's a software library that on the one end 'speaks' a USB protocol that allows it to communicate with a Kinect2, and on the other end offers what information it receives (clouds, imgs) to the Operating System. Comment by gvdhoorn on 2017-01-21: The answer by @clungzta also covers this.
{ "domain": "robotics.stackexchange", "id": 26778, "tags": "navigation, iai-kinect2, turtlebot, libfreenect, gmapping" }
Which matrix represents the similarity between words when using SVD?
Question: Two words can be similar if they co-occur "a lot" together. They can also be similar if they have similar vectors. This similarity can be captured using cosine similarity. Let $A$ be an $n \times n$ matrix counting how often $w_i$ occurs with $w_k$ for $i,k = 1, \dots, n$. Since computing the cosine similarity between $w_i$ and $w_k$ might be expensive, we approximate $A$ using truncated SVD with $k$ components as: $$A \approx W_k \Sigma W^{T}_{k} = CD$$ where $$C = W_{k} \Sigma \\ D = W^{T}_{k}$$ Where are the cosine similarities between the words $w_i$ and $w_k$ captured? In the $C$ matrix or the $D$ matrix? Answer: You can find some material here and here but the idea (at least in this case) is the following: consider the full SVD decomposition of the symmetric matrix $A = W \Delta W^T$. We want to calculate the cosine similarity between the $i$-th column (aka word) $a_i$ and the $j$-th column $a_j$ of $A$. Then $a_k = A e_k$, where $e_k$ is the $k$-th vector of the canonical basis of $\mathbb{R}^n$. Let's call $\cos(a_i,a_j)$ the cosine between $a_i,a_j$. Then $$\cos(a_i,a_j) = \cos(Ae_i,Ae_j) = \cos(W \Delta W^T e_i,W \Delta W^T e_j) = \cos(\Delta W^T e_i,\Delta W^T e_j)$$ where the last equality holds because $W$ is an orthogonal matrix (and so $W$ is conformal, i.e. it preserves angles). So you can calculate the cosine similarity between the columns of $\Delta W^T$. A $k$-truncated SVD gives a good-enough approximation. In general, columns of $W \Delta$ and rows of $W$ have different meanings!
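The conformality argument can be checked on a small hand-built example: take $W$ a $2\times 2$ rotation and $\Delta = \mathrm{diag}(3,1)$, form $A = W\Delta W^T$, and compare column cosines. This is a numerical sketch, not part of the answer:

```python
import math

# Hand-built check of the claim: for symmetric A = W D W^T with W orthogonal,
# cosine similarities between columns of A equal those between the
# corresponding columns of D W^T. A 2x2 rotation plays the role of W.
t = 0.6
W = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
D = [3.0, 1.0]                       # eigenvalues (the diagonal of Delta)

Wt = [[W[j][i] for j in range(2)] for i in range(2)]           # W^T
DWt = [[D[i] * Wt[i][j] for j in range(2)] for i in range(2)]  # Delta W^T
A = [[sum(W[i][k] * DWt[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]                                        # W Delta W^T

def col(M, j):
    return [M[i][j] for i in range(2)]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

print(cosine(col(A, 0), col(A, 1)), cosine(col(DWt, 0), col(DWt, 1)))
```

The two printed cosines agree to machine precision, because each column of $A$ is the corresponding column of $\Delta W^T$ rotated by $W$.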
{ "domain": "ai.stackexchange", "id": 1021, "tags": "machine-learning, natural-language-processing, math" }
When to set a quantity to constant in a Lagrangian (of geodesic equation)?
Question: I've been wrestling with this problem for quite a while now, and I can't seem to understand what I am allowed to do or not. Let us consider the following action: $$S = \int \sqrt{-g_{\mu\nu}\frac{dx^{\nu}}{|ds|}\frac{dx^{\mu}}{|ds|}}|ds|$$ Now, I want to show that this gives the famous geodesic equation if we choose $|ds| = d\tau$, the proper time. I know how the proof goes with the variation of S, but I wanted to prove it with the Euler-Lagrange expression, and that is how I arrive at the root of my problem. I know that if we choose the parameter as stated before, we must have that $$g_{\mu\nu}\frac{dx^{\nu}}{|ds|}\frac{dx^{\mu}}{|ds|} = -1.$$ Obviously, we can't replace that directly in the initial equation for S, otherwise we will have a constant lagrangian. My question is therefore the following: when can I effectively set $g_{\mu\nu}\frac{dx^{\nu}}{|ds|}\frac{dx^{\mu}}{|ds|}$ to be $-1$, without coming up with a wrong result? Indeed, let me write the steps for the Euler-Lagrange equation: $$\frac{d}{d\tau}\left(\frac{\partial}{\partial U^{\alpha}}(\sqrt{-g_{\mu\nu}U^{\nu}U^{\mu}}) \right) = \frac{\partial}{\partial x^\alpha}(\sqrt{-g_{\mu\nu}U^{\nu}U^{\mu}})$$ $$\frac{d}{d\tau}\left(\frac{g_{\alpha \nu}U^{\nu}}{\sqrt{M}} \right) = \frac{\partial_{\alpha}g_{\mu\nu}}{2\sqrt{M}}U^{\mu}U^{\nu}$$ With $$U^{\nu} = \frac{dx^{\nu}}{|ds|}$$ and $$M = -g_{\mu\nu}U^{\nu}U^{\mu}.$$ Now, in the above equation, if I set $M = 1$, then I do find the geodesic equation. Let me rephrase the question: why can I set $M = 1$ now? If I had set $M = 1$ before doing any of the $\frac{\partial}{\partial U^\nu}$ or $\frac{\partial}{\partial x^\nu}$ derivatives, I would have gotten a wrong result. Why can I set $M=1$ inside the $\frac{d}{d\tau}$ derivative, but not inside the others? I hope I made myself clear! Answer: Because the square root action is reparametrization invariant, the solutions to Euler-Lagrange (EL) equation are geodesics with arbitrary parametrization.
As you already noted, it would be inconsistent to choose $$M~=~{\rm constant}$$ before performing the variation and before doing all the partial differentiations in the EL equation. By restricting to $$M~=~{\rm constant}$$ after the partial differentiations in the EL equation, you are restricting your parametrized geodesics solutions to only those which are affinely parametrized. (Note that the non-affinely parametrized geodesic equation has an additional term.) You are allowed to set $$M~=~{\rm constant}$$ before the final total parameter differentiation in the EL equation because this differentiation is along the very same curve rather than, say, a differentiation comparing neighboring curves in a variational process. See also my related Phys.SE answer here where all of this is explained in more details.
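For completeness, here is a sketch (standard textbook algebra, my own addition rather than part of the answer) of how the EL equation reduces to the affine geodesic equation once $M=1$ is imposed after the partial differentiations:

```latex
% EL equation with M = 1 imposed after the partial differentiations:
\frac{d}{d\tau}\bigl(g_{\alpha\nu}U^{\nu}\bigr)
   = \tfrac{1}{2}\,\partial_{\alpha}g_{\mu\nu}\,U^{\mu}U^{\nu}
% Expand the total derivative and symmetrize the middle term:
\;\Longrightarrow\;
g_{\alpha\nu}\frac{dU^{\nu}}{d\tau}
   + \tfrac{1}{2}\bigl(\partial_{\mu}g_{\alpha\nu}
   + \partial_{\nu}g_{\alpha\mu}
   - \partial_{\alpha}g_{\mu\nu}\bigr)U^{\mu}U^{\nu} = 0
% Contract with the inverse metric g^{\lambda\alpha}:
\;\Longrightarrow\;
\frac{dU^{\lambda}}{d\tau}
   + \Gamma^{\lambda}{}_{\mu\nu}\,U^{\mu}U^{\nu} = 0 .
```

The final line is exactly the affinely parametrized geodesic equation, with $\Gamma^{\lambda}{}_{\mu\nu}$ the Christoffel symbols of the metric.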
{ "domain": "physics.stackexchange", "id": 35047, "tags": "general-relativity, lagrangian-formalism, variational-principle, action, geodesics" }
Kobuki Left Bumper stuck
Question: My Kobuki left bumper has somehow gotten stuck. I could probably live with this, but it messes up the auto_docking node, which reports feedback: [dockdrive: bumped]: and my robot just backs up. I've verified the behavior by running the following commands in order.

roscore
rostopic echo /mobile_base/events/bumper
roslaunch kobuki_node minimal.launch --screen

As soon as the kobuki_node comes up, the echo reports the following. Also, the right and center bumpers work as expected.

bumper: 0
state: 1

Pressing the left bumper has no effect on the echo command above. I.e., the Kobuki thinks the left bumper is always pressed. So I have two options: Option 1: Find some documentation on how to take the bumper off and see why it's stuck. Option 2: Get the auto_docking node to work despite the left bumper being stuck. Any ideas? Thanks Originally posted by jseal on ROS Answers with karma: 258 on 2016-03-13 Post score: 0 Answer: If you take a flat head screwdriver and slide it into the opening on the side of the bumper, you can then pry up enough to see a white contact switch with a flashlight (be very careful). A second screwdriver can then be used to free the stuck bumper. With the bumper not stuck, the auto dock routine works perfectly. Originally posted by jseal with karma: 258 on 2016-03-22 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 24097, "tags": "ros, navigation, move-base, kobuki" }
Looking for a 'CITY, STATE' within a body of text (from a CITY-STATE database)
Question: I'm looking for an optimal way to search a large body of text for a combination of words that resembles any CITY, STATE combination I have in a separate CITY-STATE database. My only idea would be to do a separate search against the body of text for each CITY, STATE in the database, but that would require a lot of time considering the number of CITY, STATE combinations the database has in it. The desired result from this query would be to pull a single CITY, STATE for each body of text I am analyzing, to tell the geographical side of the story for this data subset. Does anyone know of an optimal way/process to do such a query? Answer: The only thing I can see would be to separate both city and state lists and treat the problem as an automaton: parse your text and run through the n-grams; whenever you detect a CITY token (meaning an n-gram present in your list of cities, or close to it in a similarity sense, as there might be misspellings), then look for a STATE token in its neighbourhood (similarly, by looking into a list of states, using an edit-distance metric to allow for misspellings). If you find one, then you can tag your text with that geographical location. Of course, allowing for misspellings will bring some false positives, but you could easily bypass that by doing a quick lookup through your corpus to see that "SALAMI, OREGANO" is different from "SALEM, OREGON" (because the frequency of the latter will hopefully be higher than the former).
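The automaton idea can be sketched as follows. This is a minimal illustration with exact matching only; real use would need n-grams for multi-word cities and an edit distance for misspellings, and the city/state lists here are tiny stand-ins for the database:

```python
import re

# Scan tokens; when a CITY token is found, look for a STATE token in its
# immediate neighbourhood and tag the text with the (city, state) pair.
CITIES = {"salem", "portland", "austin"}
STATES = {"oregon", "texas"}

def find_locations(text, window=2):
    tokens = [t.lower() for t in re.findall(r"[A-Za-z]+", text)]
    hits = []
    for i, tok in enumerate(tokens):
        if tok in CITIES:
            # Look for a STATE token among the next `window` tokens.
            for j in range(i + 1, min(i + 1 + window, len(tokens))):
                if tokens[j] in STATES:
                    hits.append((tok, tokens[j]))
                    break
    return hits

print(find_locations("We moved from Salem, Oregon to Austin, Texas last year."))
# [('salem', 'oregon'), ('austin', 'texas')]
```

Because the lookups are set (hash) membership tests, one pass over the text suffices regardless of how many CITY-STATE combinations the database holds.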
{ "domain": "datascience.stackexchange", "id": 454, "tags": "databases, search, parsing, data-indexing-techniques" }
Python - 'if name in dict:' OR 'try:/except:'
Question: I'm writing some OO Python and I would like to know which of these methods is more pythonic, or which is generally better, or faster. This is an accessor for an attribute which is a dictionary. Method 1: if/else def GetSlot(self, slot_name): if slot_name in self.Slots: return self.Slots[slot_name] else: return None Method 2: try/except def GetSlot(self, slot_name): try: return self.Slots[slot_name] except KeyError: return None I understand that using try/except is better use of duck typing, but will this make a positive impact on performance? Answer: It depends. Mostly, it depends on whether it'll generally be in the dictionary or not. If it's nearly always in the dictionary, then the try/except method would win out, while if it's in there as often as not, then checking would be somewhat faster. However, python already anticipated your need. There's a better option: def GetSlot(self, slot_name): return self.Slots.get(slot_name) All mappings support get, which has a default optional argument that defaults to None. This should be the best option. However, that said... GetSlot is not a PEP8-compliant name. It should be get_slot. self.Slots should probably be self.slots. NOTE: Since you confirmed in a comment that you are in fact using externally defined names, I think it is best practice to follow those names as well as their naming conventions when appropriate. This should probably not be a method at all. In python, accessor functions are somewhat frowned upon. If you really need to do something like this, use properties. However, anything wanting to get slot_name from self.Slots should just use self.Slots.get(slot_name). Even if you're reaching into the object from outside. Java and other languages that advocate getter/setter methods do so because it later allows you to change how it is gotten, but Python is better and doesn't require workarounds like this to influence how you access a variable. Read more here. 
A note on Python getters and setters

Python descriptors are a powerful tool. A descriptor is not actually an object; it's a protocol for how python retrieves variables when they're used as object attributes. The classic example is python's property, but in truth methods on a class are also descriptors. An object implements this protocol by having __get__, __set__ or __delete__ methods on it. Not all are required for every type of descriptor - please follow the link for in-depth classification and usages. What this all means in practice is that python objects can change how they are retrieved. This is impossible in languages like Java, which can cause engineering issues. Let's first figure out why people use getters and setters in Java, because the reasons they have are quite important. So I have a class with an attribute that I want to expose to the public for usage. (NOTE: Java users please don't take offense. I'm writing just Python here since I've honestly never used Java. All my knowledge of it is second-hand at best.)

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.distance_to_orgin = pythagoras(x, y)

# Later...
instance = Point(a, b)
instance.distance_to_orgin

Now, everyone can access it. So I publish my library, and everyone's code works fine. But then, I get a better idea - every point already knows its x and y, so I can always calculate the distance_to_orgin if I need it. In Java, I now have a problem. Because to calculate something on retrieval, I NEED a function - but everyone accesses it by attribute access. I cannot make this change in a backwards compatible manner. So, programmers learn to make getter and setter methods. Compare to before:

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.distance_to_orgin = pythagoras(x, y)

    def get_distance_to_orgin(self):
        return self.distance_to_orgin

# Later...
instance = Point(a, b)
instance.get_distance_to_orgin()

Now, if I want to change how it works, I can just write a different get_distance_to_orgin method and I can make the change - fully backwards compatible! So why isn't this a problem in Python? Because we have descriptors. In 99.9% of cases, the builtin property() does everything you want. I can just amend my class definition like this:

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
        # self._distance_to_orgin = pythagoras(x, y)  # No longer needed

    @property
    def distance_to_orgin(self):
        return pythagoras(self.x, self.y)

# Later...
instance = Point(a, b)
instance.distance_to_orgin

And the external usage is exactly the same as it was at the very start, back when we were building a class in a naive way! So that is why getters and setters are important for many languages, but why we don't need them in Python.
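As a quick sanity check (my own sketch, not part of the original answer), the three variants discussed above agree on both present and missing keys:

```python
# The three accessor variants checked against each other on a present and a
# missing key. The slot names here are made up for the demo.
slots = {"head": 1, "body": 2}

def get_slot_if(d, key):
    # Method 1: "look before you leap" membership test
    return d[key] if key in d else None

def get_slot_try(d, key):
    # Method 2: "easier to ask forgiveness" try/except
    try:
        return d[key]
    except KeyError:
        return None

for key in ("head", "body", "missing"):
    print(key, get_slot_if(slots, key), get_slot_try(slots, key), slots.get(key))
```

All three return the stored value on a hit and None on a miss; dict.get simply does it in one call.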
{ "domain": "codereview.stackexchange", "id": 36279, "tags": "python" }
Wavefunction of isomers
Question: In quantum chemistry, the wavefunction for a molecule can be viewed as the output of a function $\xi(m, n_1,..., n_k)$ with $m, n_i \in \mathbb{Z}^+$ that returns a $|\psi\rangle$ satisfying $H|\psi\rangle = E|\psi\rangle$. $H$ is the electrostatic Hamiltonian for $m$ electrons and $k$ nuclei with charges $\{+n_k\}$ (and the appropriate masses). I believe there is an injective mapping between $\xi$ and $|\psi\rangle$, right? In which case, how are chemical isomers explained? All isomers for a given chemical formula are stable and have the same $\xi$, but they have different energy eigenvalues and different structures.

Answer: Well, the problem is easy from the standpoint of chemistry. Yes, there are multiple isomers. So why are there different structures and different energy eigenvalues? The function you describe is high-dimensional. For even something "small" like water, we're talking about 3 atoms and 10 electrons. Even if we restrict ourselves to a Born-Oppenheimer picture and hold the nuclei fixed, we are still left with a 10-electron many-body problem (a 13-body problem without that simplification). It's clear that different chemical isomers are local minima on a highly multi-dimensional potential energy surface. So then the question from a physics standpoint would be whether it's possible to "hop" between different isomeric structures. Of course in the general case this should be possible, and of course chemistry knows of many such things (chirality, stereochemistry, cis-trans isomerization, etc.). And of course, given a specific structure/isomer, the eigenvalues and eigenvectors from quantum mechanics will differ, simply because of the different electrostatic and electron correlation effects.
{ "domain": "physics.stackexchange", "id": 17209, "tags": "quantum-mechanics, wavefunction, quantum-chemistry" }
About the standard derivation of the gravitational redshift
Question: The objective is to derive the gravitational redshift ONLY from Einstein's equivalence principle (E.E.P.), without using the whole theory of relativity. This is the standard "informal" derivation of the gravitational redshift (for example, Carroll follows this route in his book): Consider an emitter, $E$, e.g. a vibrating atom, at rest at a point near the Earth's surface, say, of gravitational potential $\phi$. Let it send light, or any other electromagnetic, signals to a receiver $R$ at rest directly above $E$ and distance $h$ from it; the gravitational potential at $R$ is $\phi+\Delta\phi$, where $\Delta\phi = gh$, $g$ being the acceleration due to gravity. Let $\nu_E$ be the frequency of the signal as measured at $E$, and $\nu_R$ the frequency of the signal received, and measured, at $R$. Then the relativistic Doppler effect, for the case where the receiver moves with constant velocity $V$ relative to the emitter, is used to show that: $$ \frac{\nu_R-\nu_E}{\nu_E}=-\frac{\Delta\phi}{c^2}=-\frac{gh}{c^2} $$ By the E.E.P., the gravitational redshift then follows easily.

And now my trouble: in the derivation of the basic formula for the classical Doppler shift (which, it may be recalled, is the first approximation in $\frac{V}{c}$ of the corresponding special relativistic formula), on which the standard argument is so decisively based, the emitter and the receiver move with constant velocities relative to an inertial frame, and $V$ is the constant velocity of the receiver relative to the emitter and away from it. That is, the velocity of the emitter is the same at the instant of the emission and, likewise, the velocity of the receiver is the same at the instant of the reception. This is not the case when $E$ and $R$ are accelerating relative to an inertial frame. So, should I conclude that the above argument is wrong?
Answer: I don't have Carroll's book, but I don't recognise the description you give of the derivation of the red shift, and in particular I don't see why the relativistic Doppler shift is relevant. The derivation I'm familiar with is to say that the change in potential energy is $mgd$, where $m$ is the effective mass given by $E = h\nu = mc^2$. So: $$ h\nu_e - h\nu_r = \frac{h\nu_e}{c^2} gd $$ and a quick rearrangement gives: $$ \frac{\nu_e - \nu_r}{\nu_e} = \frac{gd}{c^2} $$ No Doppler shift involved.
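For a sense of scale, the fractional shift $gd/c^2$ can be evaluated directly; the snippet below is a quick numerical illustration (the height of 22.5 m is my assumed value, roughly the tower height in the Pound-Rebka experiment, and is not taken from the answer):

```python
# Fractional frequency shift g*h/c^2 for a lab-scale height difference.
# h = 22.5 m (the Pound-Rebka tower) is an assumed, illustrative value.
g = 9.81          # m/s^2
h = 22.5          # m (assumed)
c = 299792458.0   # m/s

shift = g * h / c**2
print(shift)      # ~2.46e-15, i.e. a few parts in 10^15
```

Tiny as it is, this shift was measurable with the Mossbauer effect.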
{ "domain": "physics.stackexchange", "id": 10911, "tags": "general-relativity, special-relativity, doppler-effect, gravitational-redshift" }
What is the pH of 1M Glycerol?
Question: If the pKa of glycerol is 14.15, how do you calculate its pH? I assume that the Henderson-Hasselbalch equation that works for weak acids and bases is not applicable here.

Answer: The answer is approximately 6.88. Therefore a $1\ M$ solution of glycerol in water will be ever so slightly acidic (considering that the hydroxyls are much weaker bases than acids, i.e. that the equilibrium constant for the reaction $\ce{C3H7O2OH + H2O <=> C3H7O2O^- + H3O^+}$ is much larger than the constant for $\ce{C3H7O2OH + H2O <=> C3H7O2OH2^+ + OH^-}$, which may or may not be true. Inclusion of the second equilibrium would drive the true pH even closer to 7.)

For very weak or very dilute acids/bases, solving a dissociation problem is a bit trickier because the self-dissociation equilibrium of water must be taken into account. It is often a valid approximation to forget about it, as water has only a small tendency to self-dissociate and so its effect on the equilibrium concentrations of $\ce{H^+_{(aq)}}$ and $\ce{OH^{-}_{(aq)}}$ is often negligible, but this is not your case; glycerol in water seems to be about as weak an acid as water itself.

The best way to solve equilibrium problems is to set up a large equation including all factors by combining smaller equations. We need to make an equation that takes into account all the species in the medium ($\ce{H^+_{(aq)}}$, $\ce{OH^{-}_{(aq)}}$, $\ce{C3H7O2O^{-}_{(aq)}}$ and $\ce{C3H7O2OH_{(aq)}}$ {for convenience I shall label the latter two $\ce{A^-}$ and $\ce{HA}$}) and the related equilibrium constants ($K_a$ and $k_w$). The smaller equations are found by balancing charges (one equation) and the amount of matter of each substance (one or more equations depending on the problem).
Charge balance: $[H^+]=[OH^-]+[A^-]$ (I)

Matter balance: $[HA]+[A^-]=C_{acid}=1\ M$ (II)

The equations for the equilibrium constants are:

$K_a=\frac{[H^+][A^-]}{[HA]}=10^{-14.15}$ (III)

$k_w=[H^+][OH^-]=1\times 10^{-14}$ (IV)

To solve the problem, we need to play with the equalities until we get a single equation in a single variable. It's convenient to obtain an equation in $[H^+]$ because we can then find the pH directly by applying $-\log$ to the answer. Notice that

$[OH^-]=\frac{k_w}{[H^+]}$ (V)

$[HA]=\frac{[H^+][A^-]}{K_a}$ (VI)

Substituting (VI) in (II) yields, after some algebra:

$[A^-]=\frac{C_{acid}K_a}{[H^+]+K_a}$ (VII)

Now insert (V), (VI) and (VII) in (I) so that you get:

$[H^+]=\frac{k_w}{[H^+]}+\frac{C_{acid}K_a}{[H^+]+K_a}$

A little bit of persistence will get you to the following polynomial:

$[H^+]^3+K_a[H^+]^2-(C_{acid}K_a+k_w)[H^+]-K_ak_w=0$

Replacing the values of $K_a$, $k_w$ and $C_{acid}$, the only positive root of the equation is $[H^+]=1.30688\times 10^{-7}$, which after applying $-\log$ gives $pH = 6.88$.
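The cubic is easy to sanity-check numerically. A short pure-Python bisection (my own cross-check; the bracketing interval [1e-8, 1e-6] is chosen by inspection, since the root must lie near sqrt(C*Ka + kw)) reproduces the quoted root:

```python
# Solve [H+]^3 + Ka*[H+]^2 - (C*Ka + kw)*[H+] - Ka*kw = 0 by bisection.
import math

Ka = 10 ** -14.15   # glycerol, as given in the question
kw = 1e-14          # water self-dissociation constant
C = 1.0             # 1 M glycerol

def f(x):
    return x**3 + Ka * x**2 - (C * Ka + kw) * x - Ka * kw

lo, hi = 1e-8, 1e-6          # f(lo) < 0 < f(hi): brackets the positive root
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid

h_ion = 0.5 * (lo + hi)
pH = -math.log10(h_ion)
print(h_ion, pH)             # ~1.30688e-07, ~6.88
```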
{ "domain": "chemistry.stackexchange", "id": 844, "tags": "ph" }
Would a metal enclosure (such as a shipping container) protect its contents from the effects of an electromagnetic pulse?
Question: I was watching a program about disaster preparedness, and it was suggested that the metal enclosure of a common shipping container (of the intermodal variety) would be sufficient to protect its contents from a large electromagnetic pulse (the kind that could affect an entire region or continent). I have my doubts that this is true, as it seems like a misunderstanding of how electromagnetic pulses work, but I can't find any reliable resources on the subject. What does physics have to say about this? Would a metal enclosure (such as a shipping container) protect its contents from the effects of an electromagnetic pulse large enough to affect a large geographic region?

Answer: It looks like a metal enclosure would be OK, provided its seams and joints are electromagnetically closed (see http://www.wbdg.org/ccb/FEDMIL/std188_125_1.pdf). However, I am not sure this requirement is satisfied in off-the-shelf containers, so some extra electromagnetic hardening of seams and joints may be required.
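To put rough numbers on why a continuous steel wall shields so well, one can compare the wall thickness with the electromagnetic skin depth $\delta=\sqrt{2/(\mu\sigma\omega)}$. The material constants and wall thickness below are assumed, ballpark values (mild steel, 2 mm wall, 1 MHz), not measured container data; the point is the order of magnitude, and it holds only if the seams and joints are closed, as the answer stresses:

```python
# Skin depth of an assumed mild-steel wall at an EMP-like frequency.
import math

mu0 = 4e-7 * math.pi
mu_r = 100.0        # assumed relative permeability of mild steel
sigma = 6.0e6       # assumed conductivity of mild steel, S/m
f = 1.0e6           # 1 MHz, a representative high-altitude-EMP frequency
omega = 2 * math.pi * f

delta = math.sqrt(2 / (mu0 * mu_r * sigma * omega))
wall = 2e-3         # assumed 2 mm wall thickness
print(delta, wall / delta)   # ~2e-05 m, i.e. the wall is ~100 skin depths thick
```

A wall roughly a hundred skin depths thick attenuates the transmitted field enormously; the weak points are the seams, doors, and any penetrating cables.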
{ "domain": "physics.stackexchange", "id": 2438, "tags": "electromagnetism" }
How can we conclude from Maxwell's wave equation that the speed of light is the same regardless of the state of motion of the observers?
Question: I am reading a book titled "Relativity Demystified --- A self-teaching guide" by David McMahon. He explains the derivation of the electromagnetic wave equation. $$ \nabla^2 \, \begin{cases}\vec{E}\\\vec{B}\end{cases} =\mu_0\epsilon_0\,\frac{\partial^2}{\partial t^2}\,\begin{cases}\vec{E}\\\vec{B}\end{cases} $$ He then compares it with $$ \nabla^2 \, f =\frac{1}{v^2}\,\frac{\partial^2 f}{\partial t^2} $$ and finally finds $$ v=\frac{1}{\sqrt{\mu_0\epsilon_0}}=c $$ where $c$ is nothing more than the speed of light.

The key insight to gain from this derivation is that electromagnetic waves (light) always travel at one and the same speed in vacuum. It does not matter who you are or what your state of motion is; this is the speed you are going to find.

Now here is my confusion. The nabla operator $\nabla$ is defined with respect to a certain coordinate system, for example, $(x,y,z)$. So the result $v=c$ must be the speed with respect to the $(x,y,z)$ coordinate system. If another observer attached to $(x',y',z')$ is moving uniformly with respect to $(x,y,z)$, then there must be a transformation that relates both coordinate systems. As a result, they must observe a different speed of light.

Questions: Let's put aside the null result of the Michelson-Morley experiments, because they came decades after Maxwell's derivation of the electromagnetic wave equation. I don't know the history of whether Maxwell also concluded that the speed of light is invariant under inertial frames of reference. If yes, then on which part of his derivation was this conclusion based?

Answer: Your question is an excellent one, and you are right about the $\nabla$ operator. And you are also right about the insufficiency of the argument you report in the book you are reading. To make the argument more carefully, there are two options. The first would be to work out how the Maxwell equations themselves change as you go to another inertial frame.
That would take a lot of calculating if you start from first principles. (And by the way, they don't change---you get back the same equations but now in terms of ${\bf E}', {\bf B}', \rho', {\bf j}', {\bf \nabla}', \partial/\partial t'$). A second option, mathematically easier but still requiring some work if you are not familiar with it, is to show that the $\nabla$ operator and the $\partial/\partial t$ operator have a special property: when you combine them in the combination $$ \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} $$ then their effect is the same as $$ \nabla'^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t'^2} $$ All the changes when moving from unprimed to primed coordinates cancel out. If you are familiar with partial differentiation then you could try checking this. When you learn the subject more fully, it becomes an example that can be handled more easily using the language of 4-vectors. I think that McMahon might possibly have not thought carefully enough about what he was deriving and what he was assuming in his argument. He might for example have been taking it for granted that the Maxwell equations themselves take the same form in all inertial frames. But if he did not first prove that in his book then he ought not to claim that the derivation of waves of given speed from them proves that the wave speed will be independent of the motion of the source.
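A cheap numerical way to see the invariance at work is to Lorentz-transform the wave 4-vector $(\omega/c, k)$ of a light wave and confirm that the phase velocity $\omega'/k'$ is still $c$ in the boosted frame (the boost speed of 0.6c and the wavenumber below are arbitrary choices of mine):

```python
# Lorentz-transform the wave 4-vector of a light wave and check the
# phase velocity in the boosted frame.
c = 299792458.0          # speed of light, m/s
v = 0.6 * c              # boost speed of the primed frame (arbitrary)
gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5

k = 2.0e7                # wavenumber in the unprimed frame (arbitrary)
w = c * k                # a light wave: omega = c k

# Lorentz transformation of (omega, k) for a boost along the propagation axis:
w_prime = gamma * (w - v * k)
k_prime = gamma * (k - v * w / c**2)

print(w_prime / k_prime / c)   # -> 1.0 (up to rounding): phase speed is c in both frames
```

The $\gamma$ factors and the two Doppler-like shifts cancel in the ratio, which is the same cancellation that keeps the operator above form-invariant.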
{ "domain": "physics.stackexchange", "id": 69252, "tags": "special-relativity, electromagnetic-radiation, speed-of-light, inertial-frames, maxwell-equations" }
Haskell Dynamic Programming on a Tree
Question: Here is my shot at Homework 8 of CIS-194 (from Spring '13). The problem is about a hierarchy of employees at a company (a tree), each node being of type Employee, each described by a label (empName :: String) and an integer value (empFun :: Integer). We need to choose a subset of employees so that no two chosen employees have a parent-child relationship within the tree and the sum of empFuns of the chosen employees is maximised. Some of the outline and structure of this code is based on what was suggested by the document, but I am curious to know if the rest of it is consistent and idiomatic Haskell. I'm not sure what I'm expecting as feedback, but any is welcome. module Party where import Data.List ( sort ) import Data.Tree ( Tree(Node) ) import Employee ( Employee(empFun, empName), Fun, GuestList(..) ) instance Semigroup GuestList where (GL a b) <> (GL c d) = GL (a ++ c) (b + d) instance Monoid GuestList where mempty = GL [] 0 glCons :: Employee -> GuestList -> GuestList glCons emp (GL a b) = GL (a ++ [emp]) (empFun emp) moreFun :: GuestList -> GuestList -> GuestList moreFun = max treeFold :: (a -> [b] -> b) -> Tree a -> b treeFold f (Node label subtree) = f label (map (treeFold f) subtree) nextLevel :: Employee -> [(GuestList, GuestList)] -> (GuestList, GuestList) nextLevel emp subList = (glCons emp (foldMap snd subList), foldMap (uncurry max) subList) maxFun :: Tree Employee -> GuestList maxFun tree = uncurry moreFun $ treeFold nextLevel tree getFormattedGL :: GuestList -> String getFormattedGL gl = unlines (("Total fun " ++ show (fun gl)) : sort (map empName (emps gl))) where fun (GL _ fun) = fun emps (GL emps _) = emps work :: [Char] -> String work = getFormattedGL . maxFun . read main :: IO () main = readFile "company.txt" >>= putStrLn . work Answer: Unfortunately, there is a bug in glCons. 
The employee's fun should get added, not replace the current fun level: glCons :: Employee -> GuestList -> GuestList glCons emp (GL a b) = GL (a ++ [emp]) (empFun emp) -- ^ But let's stay on glCons. We can expect glCons to be used via foldr. However, the ++ operator leads to quadratic behaviour. When we add a single element, we should therefore use (:) (and reverse, if the order matters): glCons :: Employee -> GuestList -> GuestList glCons emp (GL a b) = GL (emp : a) (empFun emp + b) Other than that the code seems fine.
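The quadratic cost of repeated `++ [emp]` is language-independent; restated in Python terms (my own illustration, not part of the review), the asymptotics are easy to count explicitly:

```python
# `out = out + [i]` builds a fresh list every iteration, copying all
# existing elements: O(n^2) total work. `out.append(i)` is amortized O(1).
def concat_build(n):
    out, copied = [], 0
    for i in range(n):
        copied += len(out)      # elements copied by `out + [i]` this iteration
        out = out + [i]
    return out, copied

def append_build(n):
    out = []
    for i in range(n):
        out.append(i)
    return out

lst, copied = concat_build(1000)
print(copied)                   # -> 499500 = 1000*999/2 copies for 1000 inserts
assert lst == append_build(1000)
```

This is why the review suggests `(:)` plus a single `reverse` when order matters: one O(n) pass instead of O(n^2) copying.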
{ "domain": "codereview.stackexchange", "id": 40999, "tags": "beginner, haskell, tree, dynamic-programming" }
Where do I find the Gazebo message structure?
Question: I need to know what the msgs::SonarStamped message looks like, but I can't find it in the documentation. How can I find it?

Originally posted by kumpakri on Gazebo Answers with karma: 755 on 2018-12-11
Post score: 0

Answer: Here's the documentation for all messages: http://osrf-distributions.s3.amazonaws.com/gazebo/msg-api/9.0.0/classes.html

Originally posted by chapulina with karma: 7504 on 2018-12-11
This answer was ACCEPTED on the original site
Post score: 2

Original comments

Comment by kumpakri on 2018-12-11: Thank you. I can't really find the structure of the message. Does it really not exist anywhere? Can I get it some other way, like ROS messages with rosmsg show?

Comment by chapulina on 2018-12-11: Oh you're right, the documentation looks really bad: http://osrf-distributions.s3.amazonaws.com/gazebo/msg-api/9.0.0/sonar__stamped_8proto.html

Comment by chapulina on 2018-12-11: Here's the source code: https://bitbucket.org/osrf/gazebo/src/gazebo9/gazebo/msgs/sonar_stamped.proto

Comment by kumpakri on 2018-12-13: Thank you! The source code was sufficient to get the information I needed.
{ "domain": "robotics.stackexchange", "id": 4362, "tags": "gazebo-9" }
How can I make sense of black hole complementarity if the universe consists of one manifold and observers are not married to coordinates?
Question: I'm reaching way over my head here, so bear in mind that my knowledge base is at best upper undergraduate. This is, unfortunately, yet another byproduct of discussions on this page that is itself a byproduct of another question. User benrg says,

Judging from this and some other comments, safesphere believes that different observers occupy different "private universes," and it can be true in one observer's coordinate system that an object crosses the horizon and in another's that it never does. That simply isn't true.

Then commenter Accumulation replies,

If it is untrue, it being untrue certainly isn't simple. According to black hole complementarianism, to the outside observer, nothing ever crosses the event horizon. To an infalling observer, they do cross the event horizon.

I know black hole complementarity is only a rough postulate, but I'm not sure how to make sense of it. If the universe, black holes included, consists of one spacetime manifold, does that not mean there is only one possible account of what happens? Yes, the event horizon allows us to have events that are causally disconnected, but they lie on the same manifold, so they can't occupy different universes.

Suppose I throw a rock radially into the black hole. There are at least two different descriptions of what happens according to different coordinates. According to Schwarzschild coordinates, the rock moves inward slower and slower (in terms of the Schwarzschild $r$ and $t$) and reaches the event horizon only as $t\rightarrow\infty$. According to Kruskal-Szekeres coordinates, the rock passes the event horizon like nothing happened, at finite Kruskal-Szekeres coordinates. Now both descriptions are true, because the difference is a matter of coordinates. So far so good.

Here is my problem: observers are not coordinate systems. An outside observer does not occupy Schwarzschild coordinates, and an observer is not married to Schwarzschild coordinates.
An observer occupies a coordinate-independent universe modeled by a spacetime manifold. Moreover, an observer does not experience Schwarzschild coordinates. The only things an observer experiences are his worldline and all the photons that reach him. That's it. So it makes no sense to identify a person with any coordinate system.

Question: Are my statements correct so far?

Now, this leads me to the following conundrum: black hole complementarity says that to an outside distant observer, an infalling object never gets past the event horizon. What I don't understand is what it even means to say, "to an outside distant observer [this thing] happens." Like I said above, the only things an observer experiences are his worldline and all the photons that reach him. Moreover, can't a distant observer decide to use Kruskal-Szekeres coordinates in his or her calculations to conclude that the object does fall past the event horizon with no issues whatsoever?

Question: How can black hole complementarity make sense if a distant outside observer is not married to any particular coordinate system AND he or she can simply choose coordinates that extend past the EH?

Question: Is there a more precise phrasing of black hole complementarity that would clear up my confusion?

Answer: Black hole complementarity is supposed to be a duality like AdS/CFT. That is, the physics on the boundary (event horizon) and the physics in the bulk (black hole interior) are supposed to be the same physics described in different coordinate systems, though the coordinate systems are bases for something like a quantum Hilbert space rather than charts on a manifold.
I don't think it's impossible that a theory could be subjective in the sense that whether a black hole interior exists or not depends on whether or not you fall in – i.e., there are different worlds like those of many-worlds/relative-state QM, but you choose which one you end up in by your spacetime motion instead of being duplicated in all of them – but black hole complementarity isn't an idea of that kind. (And my other answer was about classical general relativity, where there isn't any horizon complementarity, much less private universes.) How can black hole complementarity make sense if a distant outside observer is not married to any particular coordinate system AND he or she can simply choose coordinates that extend past the EH? If complementarity is correct, then anyone, even if they've fallen through the horizon, is free to pick a boundary or bulk description of the interior, since they're equivalent.
{ "domain": "physics.stackexchange", "id": 85712, "tags": "general-relativity, black-holes, spacetime, event-horizon" }
Designing a Butterworth filter in Matlab and obtaining the filter [b,a] coefficients as integers for an online Verilog HDL code generator
Question: I've designed a very simple low-pass Butterworth filter using Matlab. The following code snippet demonstrates what I've done. fs = 2.1e6; flow = 44 * 1000; fNorm = flow / (fs / 2); [b,a] = butter(10, fNorm, 'low'); In [b,a] are stored the filter coefficients. I would like to obtain [b,a] as integers so that I can use an online HDL code generator to generate code in Verilog. The Matlab [b,a] values seem to be too small to use with the online code generator (the server-side Perl script refuses to generate code with the coefficients), and I am wondering if it would be possible to obtain [b,a] in a form that can be used as a proper input. The a coefficients that I get in Matlab are: 1.0000 -9.1585 37.7780 -92.4225 148.5066 -163.7596 125.5009 -66.0030 22.7969 -4.6694 0.4307 The b coefficients that I get in Matlab are: 1.0167e-012 1.0167e-011 4.5752e-011 1.2201e-010 2.1351e-010 2.5621e-010 2.1351e-010 1.2201e-010 4.5752e-011 1.0167e-011 1.0167e-012 Using the online generator, I would like to design a filter with a 12-bit bitwidth and I or II filter form. I don't know what is meant by the "fractional bits" at the above link. Running the code generator (http://www.spiral.net/hardware/filter.html) with the [b,a] coefficients listed above, with fractional bits set at 20 and a bitwidth of 12, I receive the following run error: Integer A constants: 1048576 -9603383 39613104 -96912015 155720456 -171714386 131597231 -69209161 23904282 -4896220 451621 Integer B constants: 0 0 0 0 0 0 0 0 0 0 0 Error: constants wider than 26 bits are not allowed, offending constant = -69209161, effective bitwidth = 7 mantissa + 20 fractional = 27 total. An error has occurred - please revise the input parameters. How might I change my design so that this error does not occur? 
UPDATE: Using Matlab to generate a 6th-order Butterworth filter, I get the following coefficients: For a: 1.0000 -5.4914 12.5848 -15.4051 10.6225 -3.9118 0.6010 for b: 0.0064e-005 0.0382e-005 0.0954e-005 0.1272e-005 0.0954e-005 0.0382e-005 0.0064e-005 Running the online code generator (http://www.spiral.net/hardware/filter.html), I now receive the following error (with fractional bits as 8 and bitwidth of 20): ./iirGen.pl -A 256 '-1405' '3221' '-3943' '2719' '-1001' '153' -B '0' '0' '0' '0' '0' '0' '0' -moduleName acm_filter -fractionalBits 8 -bitWidth 20 -inData inData -inReg -outReg -outData outData -clk clk -reset reset -reset_edge negedge -filterForm 1 -debug -outFile ../outputs/filter_1330617505.v 2>&1 At least 1 non-zero-valued constant is required. Please check the inputs and try again. Perhaps the b-coefficients are too small, or perhaps the code generator (http://www.spiral.net/hardware/filter.html) wants the [b,a] in another format? UPDATE: Perhaps what I need to do is scale the [b,a] coefficients by the number of fractional bits to obtain the coefficients as integers. a .* 2^12 b .* 2^12 However, I still think that the b coefficients are extremely small. What am I doing wrong here? Perhaps another type of filter (or filter design method) would be more suitable? Could anyone make a suggestion? UPDATE: As suggested by Jason R and Christopher Felton in the comments below, an SOS filter would be more suitable. I've now written some Matlab code to obtain an SOS filter. 
fs = 2.1e6; flow = 44 * 1000; fNorm = flow / (fs / 2); [A,B,C,D] = butter(10, fNorm, 'low'); [sos,g] = ss2sos(A,B,C,D); The SOS matrix that I get is: 1.0000 3.4724 3.1253 1.0000 -1.7551 0.7705 1.0000 2.5057 1.9919 1.0000 -1.7751 0.7906 1.0000 1.6873 1.0267 1.0000 -1.8143 0.8301 1.0000 1.2550 0.5137 1.0000 -1.8712 0.8875 1.0000 1.0795 0.3046 1.0000 -1.9428 0.9598 Is it possible to still use the Verilog code generation tool (http://www.spiral.net/hardware/filter.html) to implement this SOS filter, or should I simply write the Verilog by hand? Is a good reference available? I would wonder if an FIR filter would be better to use in this situation. MOREOVER: Recursive IIR filters can be implemented using integer math by expressing coefficients as fractions. (See Smith's excellent DSP signal processing book for further details: http://www.dspguide.com/ch19/5.htm) The following Matlab program converts Butterworth filter coefficients into fractional parts using the Matlab rat() function. Then as mentioned in the comments, second order sections can be used to numerically implement the filter (http://en.wikipedia.org/wiki/Digital_biquad_filter). 
% variables
fs = 2.1e6;        % sampling frequency
flow = 44 * 1000;  % lowpass filter

% pre-calculations
fNorm = flow / (fs / 2);  % normalized freq for lowpass filter

% uncomment this to look at the coefficients in fvtool
% compute [b,a] coefficients
% [b,a] = butter(7, fNorm, 'low');
% fvtool(b,a)

% compute SOS coefficients (7th order filter)
[z,p,k] = butter(7, fNorm, 'low');

% NOTE that we might have to scale things to make sure
% that everything works out well (see zp2sos help for 'up' and 'inf' options)
sos = zp2sos(z,p,k, 'up', 'inf');

[n,d] = rat(sos);
sos_check = n ./ d;  % this should be the same as SOS matrix

% by here, n is the numerator and d is the denominator coefficients

% as an example, write the coefficients into a C code header file
% for prototyping the implementation

% write the numerator and denominator matrices into a file
[rownum, colnum] = size(n);  % d should be the same
sections = rownum;  % the number of sections is the same as the number of rows

fid = fopen('IIR_coeff.h', 'w');
fprintf(fid, '#ifndef IIR_COEFF_H\n');
fprintf(fid, '#define IIR_COEFF_H\n\n\n');

for i = 1:rownum
    for j = 1:colnum
        if(j <= 3)  % b coefficients
            bn = ['b' num2str(j-1) num2str(i) 'n' ' = ' num2str(n(i,j))];
            bd = ['b' num2str(j-1) num2str(i) 'd' ' = ' num2str(d(i,j))];
            fprintf(fid, 'const int32_t %s;\n', bn);
            fprintf(fid, 'const int32_t %s;\n', bd);
        end
        if(j >= 5)  % a coefficients
            if(j == 5)
                colstr = '1';
            end
            if(j == 6)
                colstr = '2';
            end
            an = ['a' colstr num2str(i) 'n' ' = ' num2str(n(i,j))];
            ad = ['a' colstr num2str(i) 'd' ' = ' num2str(d(i,j))];
            fprintf(fid, 'const int32_t %s;\n', an);
            fprintf(fid, 'const int32_t %s;\n', ad);
        end
    end
end

% write the end of the file
fprintf(fid, '\n\n\n#endif');
fclose(fid);

Answer: As discussed, it is best to use second-order sections, that is, to break the higher-order filter into a cascade of 2nd-order filters. The updated question has the SOS matrix.
Using this code and an example here, the Python object can be used to generate the individual sections.

In Matlab:

save SOS

In Python:

import shutil
import numpy
from scipy.io import loadmat
from siir import SIIR

matfile = loadmat('SOS.mat')
SOS = matfile['SOS']
nsec = SOS.shape[0]  # number of second-order sections (5 rows for the 10th-order filter above)

b = numpy.zeros((nsec, 3))
a = numpy.zeros((nsec, 3))
section = [None for ii in range(nsec)]

for ii in range(nsec):
    b[ii] = SOS[ii, 0:3]
    a[ii] = SOS[ii, 3:6]
    section[ii] = SIIR(b=b[ii], a=a[ii], W=(24, 0))
    section[ii].Convert()  # Create the Verilog for the section
    shutil.copyfile('siir_hdl.v', 'iir_sos_section%d.v' % (ii))

Additional information on fixed-point can be found here
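For a sense of what each generated hardware section has to compute, here is a plain-Python sketch of a Direct Form II biquad cascade (my own illustration, not the generator's output), driven by SOS rows of the form [b0 b1 b2 a0 a1 a2] with a0 = 1; the fixed-point Verilog follows the same per-sample recurrence with scaled integer coefficients:

```python
# One Direct Form II biquad per SOS row; sections are chained in cascade.
def biquad_df2(b, a, xs):
    """One second-order section, Direct Form II (assumes a[0] == 1)."""
    w1 = w2 = 0.0
    ys = []
    for x in xs:
        w0 = x - a[1] * w1 - a[2] * w2                # feedback (denominator) half
        ys.append(b[0] * w0 + b[1] * w1 + b[2] * w2)  # feedforward (numerator) half
        w1, w2 = w0, w1                               # shift the two state registers
    return ys

def sos_filter(sos, xs):
    # Cascade: the output of each section feeds the next one.
    for row in sos:
        xs = biquad_df2(row[0:3], row[3:6], xs)
    return xs

# Sanity check with an identity section (b = [1,0,0], a = [1,0,0]):
out = sos_filter([[1, 0, 0, 1, 0, 0]], [1.0, 2.0, 3.0])
print(out)   # -> [1.0, 2.0, 3.0]
```

Each section needs only two delay registers, five multipliers and four adders, which is why cascaded biquads are the usual form for hand-written Verilog as well.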
{ "domain": "dsp.stackexchange", "id": 197, "tags": "matlab" }
Why does axial symmetry result in the same masses for the pion and sigma mesons?
Question: Under axial transformations, $\sigma$ and $\pi$ are rotated into each other: $\vec{\pi} \rightarrow \vec{\pi}+ \vec{\theta} \sigma $, $\sigma \rightarrow \sigma+ \vec{\theta}.\vec{\pi} $. In arXiv:nucl-th/9706075, page 12, it is written that if axial symmetry is a symmetry of the QCD Hamiltonian, then $\sigma$ and $\pi$ should have the same eigenvalues, i.e., the same masses. My question is: how does this symmetry lead us to expect the same masses for $\sigma$ and $\pi$? I know that these mesons have different masses and that this is a result of spontaneous symmetry breaking, but if the symmetry were not broken, why should we expect the same masses for these states? How can we prove this?

Answer: The mass terms for the $ \sigma $ and $ \vec{ \pi } $ fields are \begin{equation} m _\sigma \sigma \sigma + m _\pi \vec{ \pi } \cdot \vec{ \pi } \end{equation} You have two terms that are going to turn into each other under a symmetry transformation. Thus they need to have the same coefficient in order to remain invariant under the symmetry (feel free to explicitly stick in the transformations if you are not comfortable with this). If we do have $ m _\sigma = m _\pi \equiv m $, then the mass terms take the form \begin{equation} m \Pi \cdot \Pi \end{equation} where $ \Pi \equiv \left( \vec{ \pi } , \sigma \right) $ and \begin{equation} \Pi \rightarrow \left( \begin{array}{cc} 1 & \vec{ \theta } \\ - \vec{ \theta } & 1 \end{array} \right) \Pi \end{equation} (you missed a minus sign above) under the axial symmetry. Since this is a unitary transform, it leaves the mass term invariant as expected. Thus we conclude that for the system to be invariant under axial transformations, the masses of the $\sigma $ and $\vec{\pi}$ must be degenerate.
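A quick numeric check of the invariance (my own addition; the field values are arbitrary, and for simplicity the pion triplet is reduced to a single component) verifies the algebra to first order in $\theta$:

```python
# Under sigma -> sigma - theta*pi, pi -> pi + theta*sigma, the mass term
# sigma^2 + pi^2 changes only at order theta^2: algebraically,
# (sigma - theta*pi)^2 + (pi + theta*sigma)^2 = (1 + theta^2)(sigma^2 + pi^2),
# so the O(theta) pieces cancel exactly when the two masses are equal.
sigma, pi = 0.7, -1.3          # arbitrary field values
theta = 1e-4
sigma_p = sigma - theta * pi
pi_p = pi + theta * sigma
delta = (sigma_p**2 + pi_p**2) - (sigma**2 + pi**2)
print(delta)                   # equals theta^2 * (sigma^2 + pi^2): no O(theta) term
```

With unequal masses the cross terms $2\theta\sigma\pi$ would no longer cancel, which is the point of the answer.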
{ "domain": "physics.stackexchange", "id": 21674, "tags": "particle-physics, symmetry" }
Generalized Project Euler #4: Largest palindrome from product of two n-digit numbers in Python
Question: This solves Project Euler 4: Largest palindrome product using Python (not limited to 3-digit numbers). I need suggestions for improvements to either the Python code or the math/algorithm, since the time of execution increases exponentially as the number of digits in the multipliers increases.

A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99. Find the largest palindrome made from the product of two 3-digit numbers.

Here, I am trying to generalise the problem for n-digit numbers:

import sys
print(sys.version)
'''
A palindromic number reads the same both ways.
The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
This program intends to find the largest palindrome made from the product of two n-digit numbers.
'''
digits_str = input("Enter no. of digits in multipliers : ")
digits = int(digits_str)
min_plier = (10 ** (digits-1))  # Minimum n-digit number for eg. if digits = 3, min_plier = 100
max_plier = int(("9" * (digits+1))[:digits]);  # Maximum n-digit number for eg. if digits = 3, max_plier = 999
# Calculate product and get palindrome
pallindromes = []
for z in range (max_plier, min_plier , -1):
    max_plier = z  # To avoid repetitive calculations.
    for x in range(max_plier, min_plier, -1):
        global pallindromes
        product = z * x
        # Check if product obtained is palindrome and is greater than previously obtained palindrome.
        if (str(product) == str(product)[::-1]) :
            pallindromes.append(product)
print("Largest palindrome is : " + str(max(pallindromes)))

Here's the time required for execution as the number of digits increases:

No. of digits : 2, Largest palindrome : 9009, Time required for execution : 1.403140s
No. of digits : 3, Largest palindrome : 906609, Time required for execution : 1.649165s
No. of digits : 4, Largest palindrome : 99000099, Time required for execution : 39.202920s
No.
of digits : 5, Largest palindrome : 9966006699, Time required for execution : 1hr 3min 54.552400s

Answer: Instead of building a list of palindromes, track the maximum palindrome. When z * z is lower than max_palindrome, you can break out of the first for loop (all remaining palindromes will be lower). When the palindrome is lower than max_palindrome, you can break out of the second for loop (all remaining palindromes will be lower). Thanks to @Peter Taylor for fixing the location of the if statement. Converting an int to str is expensive; do it once, not twice. Based on @Peter Taylor's algorithm, set the step to -2. Based on Joe Wallis's algorithm, modify the second loop to range(max_plier, int((z * z) ** 0.5), -2).

Code:

min_plier = 10 ** (digits - 1)  # Minimum n-digit number for eg. if digits = 3, min_plier = 100
max_plier = 10 ** digits - 1  # Maximum n-digit number for eg. if digits = 3, max_plier = 999
max_palindrome = 0
for z in range(max_plier, min_plier, -2):
    if z * z < max_palindrome:
        break
    for x in range(max_plier, int((z * z) ** 0.5), -2):
        product = z * x
        # Check if product is greater than previously obtained palindrome.
        if product < max_palindrome:
            break
        sproduct = str(product)
        # Check if product obtained is palindrome.
        if sproduct == sproduct[::-1]:
            max_palindrome = product
print("Largest palindrome is : %s" % max_palindrome)

Testing your solution:

No. of digits : 4, Largest palindrome : 99000099, Time required for execution : 32s
No. of digits : 5, Largest palindrome : 9966006699, Time required for execution : 55min

Testing my solution:

No. of digits : 4, Largest palindrome : 99000099, Time required for execution : 1ms
No. of digits : 5, Largest palindrome : 9966006699, Time required for execution : 7ms
No. of digits : 6, Largest palindrome : 999000000999, Time required for execution : 67ms
No.
of digits : 7, Largest palindrome : 99956644665999, Time required for execution : 373ms Final solution and explanation Let's look at a simple example: mmax = 4 for i in range(mmax, 1 - 1, -1): for j in range(mmax, i - 1, -1): print(i, j) 4 4 3 4 3 3 2 4 2 3 2 2 1 4 1 3 1 2 1 1 These are combinations with replacement (not permutations). In this case the first palindrome found is the maximum palindrome. digits = int(input("Enter no. of digits in multipliers : ")) def largest_product_two(digits): min_plier = 10 ** (digits - 1) # Minimum n-digit number for eg. if digits = 3, min_plier = 100 max_plier = 10 ** digits - 1 # Maximum n-digit number for eg. if digits = 3, max_plier = 999 for z in range(max_plier, min_plier, -2): for x in range(max_plier, z - 1, -2): product = z * x sproduct = str(product) # Check if product obtained is palindrome. if sproduct == sproduct[::-1]: return product return None out = largest_product_two(digits) print("Largest palindrome is : %s" % out) Hint to check For size=3 a palindrome has the form: $100000x + 10000y + 1000z + 100z + 10y + x$ $= 100001x + 10010y + 1100z$ $= 11(9091x + 910y + 100z)$ For size=4 it is similar. The palindrome is divisible by 11: for z in range(max_plier, min_plier, -2): for x in range(max_plier, z - 1, -2): product = z * x if product % 11 != 0: continue or, more pythonically: products = (z * x for z in range(max_plier, min_plier - 1, -2) for x in range(max_plier, z - 1, -2) if z * x % 11 == 0) for product in products: sproduct = str(product) # Check if product obtained is palindrome. if sproduct == sproduct[::-1]: max_palindrome = product break No. of digits : 6, Largest palindrome : 999000000999, Time required for execution : 30ms No. of digits : 7, Largest palindrome : 99956644665999, Time required for execution : 166ms No. of digits : 8, Largest palindrome : 9999000000009999, Time required for execution : 51s
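A quick empirical check of the divisibility hint (my own sketch, not part of the original answer): mirroring any number produces an even-length palindrome, and every such palindrome is a multiple of 11, because its alternating digit sum is zero.

```python
def even_palindrome(half: int) -> int:
    """Mirror `half` to build an even-length palindrome, e.g. 123 -> 123321."""
    s = str(half)
    return int(s + s[::-1])

# Every even-length palindrome is divisible by 11, which justifies the
# `z * x % 11 == 0` filter used in the final solution above.
assert all(even_palindrome(h) % 11 == 0 for h in range(1, 10_000))
print("ok: every even-length palindrome tested is a multiple of 11")
```

The largest product of two 3-digit numbers, 906609 = even_palindrome(906), is one instance.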
{ "domain": "codereview.stackexchange", "id": 22663, "tags": "python, programming-challenge, palindrome" }
ligand binding fluorescent protein
Question: Does anyone know of a fluorescent protein whose fluorescence is activated upon binding of a substrate? I've been looking into this and perhaps my keywords are not the right ones. Thanks in advance. Answer: There are only 3 classes of fluorescent proteins known, I think. Two of them are maybe relevant to your needs. The first are usually called 'fluorescent proteins'. These proteins are usually derived from the A. victoria jellyfish. They are entirely passive in their fluorescence, but have been tuned to many levels of quantum yield and color. There was a new sort of fluorescent protein cloned recently from Japanese eels. It is activated by binding to bilirubin. It's so new that it's not likely to be available, but there you go. The third class would be the luciferases. They are luminescent, not fluorescent: producing light involves binding a cofactor called luciferin, which is oxidized to produce their light. This is not really fluorescence, since the electronic excitation that causes light production does not come from incident light, but since what you are looking for might not exist I wanted to remind you of the luciferases, which bind their redox substrates to activate. The luciferin molecule actually varies, as there are several classes of luciferase. I think flavin and NAD are the most common substrates.
{ "domain": "biology.stackexchange", "id": 1654, "tags": "fluorescent-microscopy" }
Leetcode 3 sum code optimisation
Question: I was working on 3sum problem on leetcode Question Given an array nums of n integers, are there elements a, b, c in nums such that a + b + c = 0? Find all unique triplets in the array which gives the sum of zero. Note: The solution set must not contain duplicate triplets. Example: Given array nums = [-1, 0, 1, 2, -1, -4] A solution set is: [ [-1, 0, 1], [-1, -1, 2] ] My Solution var threeSum = function(nums) { let out = [] let seen = {} for(let i = 0; i < nums.length; i++){ remainingArr = nums.filter((d,id) => id != i); //Calling twoSum on remaining array twoSum(remainingArr, -nums[i], out, seen) } //Return in expected format by splitting strings and forming arrays return out.map(d => d.split(',')).filter(d => d.length > 0) }; var twoSum = function(nums, target, out, seen){ let myMap = {} for(let i = 0; i < nums.length; i++){ if(myMap[target - nums[i]] != undefined){ //If match found convert it to string so that we can test for dupicates let val = [target - nums[i], nums[i], -target].sort((a,b) => a - b).join(','); //Test for duplicates if(!seen[val]) { out.push(val) seen[val] = true } } myMap[nums[i]] = i; } } The above solution fails for last 2 very large test cases. For the 2sum implementation I have used the hash map solution rather than 2 pointers. According to solutions on leetcode I can see the best possible time complexity here is \$O(N^2)\$. But isn't my solution also \$O(N^2)\$ (as i'm using seen map inside the inner loop). How can I optimize this further? Answer: Performance This is a performance only review and does not address any styling or composition. Code examples are focused on performance with no attention given to readability, naming, or re-usability Time complexity != performance Time complexity is not a measure of performance. It is a measure of how performance changes as the input size changes. Two functions can have the same time complexity but very different performance metrics. 
Improving performance Looking at your code I see some code that will negatively effect performance. Your original code cleaned up a little. Semicolons, spaces and the like. threeSum function threeSum(nums) { let out = []; let seen = {}; for (let i = 0; i < nums.length; i++) { remainingArr = nums.filter((d, id) => id != i); twoSum(remainingArr, -nums[i], out, seen); } return out.map(d => d.split(',')).filter(d => d.length > 0); }; function twoSum(nums, target, out, seen) { let myMap = {}; for (let i = 0; i < nums.length; i++) { if (myMap[target - nums[i]] != undefined) { let val = [target - nums[i], nums[i], -target].sort((a,b) => a - b).join(','); if (!seen[val]) { out.push(val) seen[val] = true } } myMap[nums[i]] = i; } } Major performance hits The biggest problem is the line ... if (myMap[target - nums[i]] != undefined) { where the most likely outcome is that myMap[target - nums[i]] is undefined. Set or Map rather than Object When JS sees a property name it needs to locate that property. First it looks at the objects own properties. If that property does not exist, it then starts a recursive search up the prototype chain. If the result is undefined it will have to have searched all the way up the prototype chain before it can return undefined. As this is the most likely outcome this line adds a lot of additional (under the hood) overhead. You can use a Set to avoid the need to traverse the prototype chain. Memory management There is also an incurred memory overhead because you create and release the object myMap each time the function twoSum is called. Because javascript does not free up memory until either forced to due to low memory, or when the code is at idle (Presumably in the leetcode environment that is after the function threeSum has exited and before the result is verified) All the created myMap slowly eat up memory and will incur a GC (garbage collection) overhead. 
On a shared environment such as leetcode's cloud processing network, memory allocated to a task can be rather small, meaning forced GC calls are much more likely. To avoid memory management overheads, reduce the amount of work by reducing the number of new objects created. In example threeSum1 I moved myMap to the first function and pass it to the second. I clear the map in the second function, which is less of a management hit than creating and destroying a new one. threeSum1 function threeSum1(nums) { const out = []; const seen = new Set(); const myMap = new Set(); for (let i = 0; i < nums.length; i++) { remainingArr = nums.filter((d,id) => id != i); twoSum1(remainingArr, -nums[i], out, seen, myMap); } return out.map(d => d.split(',')).filter(d => d.length > 0); }; function twoSum1(nums, target, out, seen, myMap) { const b = -target; myMap.clear(); for (let i = 0; i < nums.length; i++) { const a = nums[i], idx = target - a; if (myMap.has(idx)) { let val = [idx, a, b].sort((a,b) => a - b).join(','); if (!seen.has(val)) { out.push(val); seen.add(val); } } myMap.add(a); } } More info MDN Allocations MDN Garbage collection Minor improvements You use Array.sort to sort the 3 values and then Array.join them to get a unique key for the 3 values that sum to 0. let val = [target - nums[i], nums[i], -target].sort((a,b) => a - b).join(','); JavaScript's sort knows nothing about the array or why you are sorting it. For 3 items there are only 6 possible orderings, requiring at most 4 compares. You don't want the sorted array, you just want to know how to build the key. Building a small string using join is slower than building it manually using concatenation operators. Thus we can remove the sort and use a set of if/else statements and ternaries (?:) to build the key. No need to swap items in an unneeded array (an array that will use memory management just to exist). No need to use the slow join function to create the key. Additional improvements.
For the best performance, avoid: indexing into arrays, repeating calculations, iterating over arrays more often than needed, and manipulating strings. Final code Assuming that the order of items in each array in the returned array does not matter, and that the items can be Numbers (not Strings), we can remove the need to map and filter the result. We store items in vars rather than indexing into the array nums[i] each time we want the value, e.g. a = nums[i]. We calculate values only once, e.g. b = -target, idx = target - a. threeSum2 function threeSum2(nums) { var i = 0; const out = [], seen = new Set(), map = new Set(); while (i < nums.length) { twoSum2(nums.filter((d,id) => id != i), -nums[i++], out, seen, map); } return out; }; function twoSum2(nums, target, out, seen, map) { var val = "", i; const b = -target; map.clear(); for (i = 0; i < nums.length; i++) { const a = nums[i], idx = target - a; if (map.has(idx)) { if (a < b && a < idx) { val = idx < b ? "" + a + idx + b : "" + a + b + idx } else if (b < idx) { val = idx < a ? "" + b + idx + a : "" + b + a + idx } else { val = a < b ? "" + idx + a + b : "" + idx + b + a } if (!seen.has(val)) { out.push([a, b, idx]); seen.add(val); } } map.add(a); } } Results Below are the test results for the 3 functions above. threeSum Your original functions with changes unrelated to performance. threeSum1 Major performance changes threeSum2 Minor performance changes The first test is on a set of 100 arrays 100 items long with an evenly distributed random set of integers in the range -10000 to 10000. Note threeSum2 is 5 times faster than the original. Note threeSum1 is only marginally quicker as the optimizations target only the resulting output data. Name Mean time 1 Call per sec Rel performance Total time Calls threeSum2 1,455.175µs 687 100.00% 1,412ms 970 threeSum1 1,547.062µs 646 94.03% 1,624ms 1,050 threeSum 7,047.454µs 141 20.52% 6,907ms 980 I don't know the nature of the arrays that leetcode sends the function.
The following table shows the result if we focus on the code that creates the result. This is done by reducing the range of values of the input to increase the resulting out array length. Testing is on a set of 100 arrays 100 items long with an evenly distributed random set of integers in the range -100 to 100. Name Mean time 1 Call per sec Rel performance Total time Calls threeSum2 1,904.629µs 525 100.00% 2,000ms 1,050 threeSum1 3,219.081µs 310 59.05% 3,380ms 1,050 threeSum 6,522.878µs 153 29.14% 5,871ms 900 The results show that threeSum2 is the quickest, either by a small or large margin depending on the number of matches found in the input. Will it pass the leetcode test? Will it be fast enough to pass the tests? That I do not know, as I have not tried this example. I do know that leetcode test times can swing wildly (from best to worst for the very same code). Although I do not know why as a fact, I strongly suspect that run-time performance is affected by the number of users using the service. It is my experience (as an .au user) that the way to get the best results is to use the service in off-peak times. More As I wrote this answer I forgot to look into the array you filter: remainingArr = nums.filter((d,id) => id != i); There is opportunity for more optimization in this line, worth about a ~5% performance increase. Hints: use a Set and remove items using Set.delete, tracking removed items, then replace them for the next pass with Set.add. You can iterate a set using for (const v of remainingArr) { Or: all the filter does is remove the one element at the current index from the array, so you could pass that index to the second function rather than a filtered array. Test settings. Same for both tests Env: Chrome 89.0.4389.90 (64-bit). Laptop Passive cooling (ambient 17.9°) Test Cycles.......: 100 Groups per cycle..: 1 Calls per group...: 10 Cool down 2........: 1,000,000µs 1 In microseconds µs (One millionth second) 2 time between cycles
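For reference, the canonical O(n²) approach the question mentions (sort first, then walk two pointers inward) avoids both hash sets and string keys entirely. A sketch, shown in Python for brevity; the same idea ports directly to JavaScript:

```python
def three_sum(nums):
    """Sort + two-pointer 3-sum; returns the unique triplets summing to 0."""
    nums = sorted(nums)
    out = []
    for i in range(len(nums) - 2):
        if i > 0 and nums[i] == nums[i - 1]:
            continue  # skip duplicate anchor values
        lo, hi = i + 1, len(nums) - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s < 0:
                lo += 1
            elif s > 0:
                hi -= 1
            else:
                out.append([nums[i], nums[lo], nums[hi]])
                while lo < hi and nums[lo] == nums[lo + 1]:
                    lo += 1  # skip duplicate middle values
                while lo < hi and nums[hi] == nums[hi - 1]:
                    hi -= 1  # skip duplicate end values
                lo += 1
                hi -= 1
    return out

print(three_sum([-1, 0, 1, 2, -1, -4]))  # [[-1, -1, 2], [-1, 0, 1]]
```

Because duplicates are skipped by pointer movement, no `seen` set or key building is needed at all.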
{ "domain": "codereview.stackexchange", "id": 40961, "tags": "javascript, algorithm, programming-challenge, time-limit-exceeded, k-sum" }
MoveIt demo.launch error
Question: Hi, I have used the setup assistant of MoveIt to create the config package and it worked fine. But when I launched the demo.launch file, I got the following error : ... auto-starting new master process[master]: started with pid [43949] ROS_MASTER_URI=http://localhost:11311 setting /run_id to eb76e8ea-711b-11e6-8339-480fcf447f3c process[rosout-1]: started with pid [43963] started core service [/rosout] process[joint_state_publisher-2]: started with pid [43980] process[robot_state_publisher-3]: started with pid [43981] ERROR: cannot launch node of type [moveit_ros_move_group/move_group]: can't locate node [move_group] in package [moveit_ros_move_group] process[rviz_ipt_d_0342_43937_3307353756922455826-5]: started with pid [43982] [ WARN] [1472827541.765044325]: The root link base_link has an inertia specified in the URDF, but KDL does not support a root link with an inertia. As a workaround, you can add an extra dummy link to your URDF. The URDF works perfectly fine with RViZ and Joint State Publisher GUI sliders (without MoveIt) and after a while I get the interactive markers in RViZ (with MoveIt) and I am able to move the arm around but it says 'No Planning Library Loaded'. Also system dependencies seem to be satisfied. I'm unable to find out what the exact issue is. EDIT : I did binary installation of the MoveIt for Jade. I used this link to install MoveIt. Then I launched the setup assistant and created the config package following this video. Here are the screenshots of RViz and Terminal after launching demo.launch : Originally posted by bhavyadoshi26 on ROS Answers with karma: 95 on 2016-09-02 Post score: 1 Original comments Comment by gvdhoorn on 2016-09-03: If this is the output right after you've used the Setup Assistant (ie: you haven't changed anything in the generated launch files), could you please report this at the MoveIt issue tracker? Please mention how you installed MoveIt, and what happened. 
Comment by gvdhoorn on 2016-09-05: Issue: ros-planning/moveit#200. Comment by bhavyadoshi26 on 2016-09-05: @gvdhoorn I have reported this issue to the MoveIt Issue Tracker. I have mentioned there about the steps that I followed. I'll edit the question here as well. Comment by Ayush Sharma on 2017-03-04: I am facing similar error. Can anyone help me? Comment by gvdhoorn on 2017-03-04: No. Not without more information. Answer: The referenced ticket is already closed. If you still see the issue please take an action in this post. Originally posted by 130s with karma: 10937 on 2017-04-03 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 25667, "tags": "ros, moveit, move-group" }
What does Qubitization mean?
Question: I was listening to an advanced lecture about quantum computing, when the professor introduced a chapter called "Qubitization and the quantum singular value transform", but never really cared to define what qubitization means. After some research I came across the term in its first usage by Chuang in 2016, I believe, in this paper: https://arxiv.org/pdf/1610.06546.pdf. However, I can't seem to see where the authors actually give a proper definition in the above paper. Any help on what is commonly referred to as qubitization is much appreciated. Answer: It just means that you block encode the matrix $A$ into a larger unitary matrix, $U_A$. That is, $$ U_A = \begin{pmatrix} A & *\\ * & *\end{pmatrix}$$ where $*$ indicates arbitrary matrix elements. Note that each unitary matrix is already a trivial block encoding of itself. Once this is done, you can implement $$ U = \begin{pmatrix} P(A) & *\\ * & *\end{pmatrix}$$ where $P(A)$ is a polynomial (usually Chebyshev) approximation of some function you want to implement. For instance, if you want to find $A^{-1}$ so that you can solve the linear system of equations $Ax = b$, then $P(A)$ is the polynomial (usually Chebyshev) approximation of $f(x) = 1/x$. This implementation can be done through a quantum circuit that alternates applications of $U_A$ (and its inverse) with single-qubit phase rotations parameterized by angles $\{\varphi_i\}_{i=0}^d$. The key point is that the Quantum Signal Processing theorem tells us that such a sequence can approximate a rich space of functions by modifying the sequence $\{\varphi_i \}_{i=0}^d$. Furthermore, this can be done efficiently as well, as noted in the paper "Efficient phase-factor evaluation in quantum signal processing" by Dong et al.
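As a concrete toy illustration of block encoding (my own numerical sketch, not taken from the paper): a Hermitian matrix A with norm at most 1 can be embedded in the top-left block of a unitary twice its size.

```python
import numpy as np

def block_encode(A):
    """Unitary block encoding of a Hermitian A with ||A|| <= 1:
    U = [[A, B], [B, -A]] with B = sqrt(I - A^2).  U is unitary because
    A commutes with sqrt(I - A^2)."""
    w, V = np.linalg.eigh(A)                    # A = V diag(w) V^dagger
    B = (V * np.sqrt(1.0 - w**2)) @ V.conj().T  # sqrt(I - A^2)
    return np.block([[A, B], [B, -A]])

A = np.array([[0.3, 0.1],
              [0.1, -0.5]])                     # Hermitian with norm < 1
U = block_encode(A)
assert np.allclose(U @ U.conj().T, np.eye(4))   # U is unitary
assert np.allclose(U[:2, :2], A)                # A sits in the top-left block
```

This is the "trivial" reflection-style encoding; the paper's constructions are circuit-level, but the algebraic structure is the same.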
{ "domain": "quantumcomputing.stackexchange", "id": 5365, "tags": "hamiltonian-simulation" }
Electric field due to a dipole at large distances
Question: It says $$ E= \frac{1}{4\pi \epsilon_0} \frac{p}{r^3} $$ so by this equation, if we measure the electric field at a distant point then we can never find $q$ and $d$ separately ($q$ and $d$ are the charge and the separation between the charges of the dipole). Instead we can only find $p$. It is from Principles of Physics, Resnick, chapter on the electric field. I don't get that! Please explain! Answer: It comes from the derivation of the formula; there you can see that different values of $Q$ and $d$ can give the same result. For the field at a large distance $r$ along the line of the dipole $$E= \frac{1}{4\pi \epsilon_0} \frac{Q}{r^2} - \frac{1}{4\pi \epsilon_0} \frac{Q}{{(r+d)}^2}$$ Combining the fractions, $$E= \frac{1}{4\pi \epsilon_0} \frac{Q[{(r+d)}^2-r^2]}{r^2{(r+d)}^2}$$ If $d$ is small compared to $r$, then ${(r+d)}^2-r^2 = 2rd+d^2 \approx 2rd$ in the numerator and $r^2{(r+d)}^2 \approx r^4$ in the denominator, so $$E= \frac{1}{4\pi \epsilon_0} \frac{2Qd}{r^3}$$ So only the product of $Q$ and $d$ can be found. If we were to get data about the field closer to the charges, $d$ could no longer be considered small compared to $r$ and the values of $Q$ and $d$ could be separately determined.
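The point can also be checked numerically (a quick sketch with made-up charge values): two different (Q, d) pairs with the same product p = Qd produce essentially the same far field, and both match the dipole formula.

```python
import math

# Exact axial field of a finite dipole: +Q at distance r, -Q a further d behind.
k = 1.0 / (4 * math.pi * 8.8541878128e-12)   # Coulomb constant, N m^2 / C^2

def axial_field(Q, d, r):
    return k * Q * (1.0 / r**2 - 1.0 / (r + d)**2)

r = 10.0                                  # observation distance, metres
E1 = axial_field(Q=1e-6, d=1e-3, r=r)     # p = Q*d = 1e-9 C m
E2 = axial_field(Q=1e-7, d=1e-2, r=r)     # same p, very different Q and d
E_dipole = k * 2 * 1e-9 / r**3            # the approximate formula 2kp/r^3

# Both configurations agree with each other and with the dipole formula to ~0.1%.
assert abs(E1 - E2) / E_dipole < 1e-2
assert abs(E1 - E_dipole) / E_dipole < 1e-2
```

Repeating the comparison at small r (say r comparable to d) is exactly where the two configurations start to disagree, which is the answer's closing point.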
{ "domain": "physics.stackexchange", "id": 80925, "tags": "electrostatics, charge, dipole" }
Where can I find the standard molar entropy for ammonium bicarbonate?
Question: I'd like to know the standard molar entropy S° of solid ammonium bicarbonate, to use in an example in class to calculate ΔS° for the reaction NH4HCO3(s) → NH3(g) + H2O(g) + CO2(g). Already checked: Wikipedia, NIST Chemistry WebBook, Google. I don't have access to the CRC handbook. (Arthur and Chester Miller already found the standard enthalpy of formation for ammonium bicarbonate, in this forum, in 2016; and it's in the German Wikipedia.) Answer: "Standard Thermodynamic Properties of Chemical Substances", in CRC Handbook of Chemistry and Physics, 90th Edition (CD-ROM Version 2010), David R. Lide, ed., CRC Press/Taylor and Francis, Boca Raton, FL, gives values for the standard molar enthalpy of formation $\Delta_\mathrm fH^\circ$, the standard molar Gibbs energy of formation $\Delta_\mathrm fG^\circ$, and the standard molar entropy $S^\circ$ (all at $T=298.15\ \mathrm K$ and $p=100\ \mathrm{kPa}=1\ \mathrm{bar}$) of crystalline ammonium hydrogen carbonate as follows. $$\begin{align} \Delta_\mathrm fH^\circ&=-849.4\ \mathrm{kJ/mol}\\ \Delta_\mathrm fG^\circ&=-665.9\ \mathrm{kJ/mol}\\ S^\circ&=120.9\ \mathrm{J/(mol\ K)} \end{align}$$
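For the classroom example, the CRC value above can be combined with commonly tabulated gas-phase entropies. The gas values below are typical textbook numbers, not from the CRC quote, so double-check them against your own table:

```python
# Standard molar entropies at 298.15 K, in J/(mol K).  NH4HCO3(s) is the CRC
# value quoted above; the gas values are common textbook numbers (assumed).
S = {
    "NH4HCO3(s)": 120.9,
    "NH3(g)": 192.8,
    "H2O(g)": 188.8,
    "CO2(g)": 213.8,
}

# NH4HCO3(s) -> NH3(g) + H2O(g) + CO2(g)
dS = S["NH3(g)"] + S["H2O(g)"] + S["CO2(g)"] - S["NH4HCO3(s)"]
print(f"ΔS° ≈ {dS:.1f} J/(mol K)")  # large and positive: 1 mol solid -> 3 mol gas
```

The result, roughly +474 J/(mol K), illustrates nicely why the decomposition is entropically favored.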
{ "domain": "chemistry.stackexchange", "id": 10288, "tags": "thermodynamics, reference-request, entropy" }
how controlling the turtle on screen can help in controlling the robot in the real world
Question: how controlling the turtle on screen can help in controlling the robot in the real world Originally posted by viki on ROS Answers with karma: 41 on 2013-01-28 Post score: 0 Original comments Comment by Erwan R. on 2013-01-28: I don't understand your question. Please provide more details. Answer: When you control the turtle on your screen, you're publishing movement commands on a topic to which the turtle is subscribed. In a similar way you can control a real bot by publishing velocity commands on a topic to which the bot is subscribed. You can use the same code just by changing the topic name, provided the datatype of the topic is the same. In most cases the topic is of type "geometry_msgs::Twist". Originally posted by ayush_dewan with karma: 1610 on 2013-01-28 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by viki on 2013-01-28: ok. thank you for reply
{ "domain": "robotics.stackexchange", "id": 12614, "tags": "ros, robot" }
Can quantum disentanglement be triggered by time dilation?
Question: The question is really one question that leads to the final one: Is it possible to realize a qubit that naturally flips between two quantum states on a definite and fixed period without any ongoing external stimulation? If so, if such a qubit were forced into entanglement with another such oscillating qubit and then one of the qubits accelerated up to very near light-speed, would relativistic time dilation lead to disentanglement/desynchronization of these two "qubit clocks"? Answer: (1) Yes. A simple example is when the two states are not eigenstates of the energy operator. Imagine that the two lowest energy eigenstates are $|1\rangle$ and $|2\rangle$, with energy $E_1$ and $E_2$ respectively, and form a superposition state from them. A good example is the two lowest energy states of a 1D double well, where $|1\rangle$ would be the lowest energy symmetric and $|2\rangle$ the anti-symmetric state. The states $|+\rangle=(|1\rangle+|2\rangle)/\sqrt{2}$ and $|-\rangle=(|1\rangle-|2\rangle)/\sqrt{2}$ would then correspond to having the system in the left or right well respectively. But if placed in state $|+\rangle$ at $t=0$ the state will evolve as $ (|1\rangle e^{-iE_1t/\hbar}+|2\rangle e^{-iE_2t/\hbar})/\sqrt{2}$, wobbling back and forth between the two wells at a frequency set by the difference in energy between the two states. (2) Entanglement is not generally a Lorentz boost invariant property. There are papers discussing how spin and momentum degrees of freedom entangle, and this changes under the Lorentz transformation. Momentum-momentum entanglement does not change. In general, how entangled different observables are depends on what they are. So the answer seems to be that if one of the qubit clocks is boosted, and the degree of freedom is something like spin, then entanglement can remain although they would no longer be in perfect sync.
Remember that entanglement does not mean the two systems are perfectly the same or unchanging, but rather that describing one system will necessarily imply things about the other system in a nontrivial way.
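Part (1) of this answer can be made concrete with a small numerical sketch (arbitrary energies, ħ = 1): a state prepared in |+⟩ oscillates between the two wells at the frequency set by E₂ − E₁.

```python
import numpy as np

# Two energy eigenstates |1>, |2> with energies E1, E2 (units with hbar = 1).
E1, E2 = 0.0, 1.0
omega = E2 - E1                     # Bohr (angular) frequency of the wobble

def prob_plus(t):
    """Probability of finding the state in |+> = (|1>+|2>)/sqrt(2) at time t,
    for a system prepared in |+> at t = 0."""
    amp = 0.5 * (np.exp(-1j * E1 * t) + np.exp(-1j * E2 * t))  # <+|psi(t)>
    return abs(amp) ** 2

assert np.isclose(prob_plus(0.0), 1.0)                # starts fully in |+>
assert np.isclose(prob_plus(np.pi / omega), 0.0)      # half a period later: in |->
assert np.isclose(prob_plus(2 * np.pi / omega), 1.0)  # back in |+> after a period
```

The survival probability works out to (1 + cos ωt)/2, which is the "clock" the question asks about: it ticks with no ongoing external drive.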
{ "domain": "physics.stackexchange", "id": 60824, "tags": "quantum-information, quantum-entanglement, quantum-computer" }
What unitary gate produces these quantum states from the computational basis?
Question: Suppose that we have one-qubit unitary $U$ that maps $$ \left| 0 \right> \longmapsto \frac{1}{\sqrt{2}} \left| 0 \right> + {\frac{1+i}{2}} \left| 1\right> $$ and $$ \left| 1 \right> \longmapsto {\frac{1-i}{2}} \left| 0 \right> - \frac{1}{\sqrt{2}} \left| 1\right> $$ What is $U$? Answer: First, simply rewrite the probability amplitudes of the returned states as columns of a matrix: $$ U = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1-i}{2} \\ \frac{1+i}{2} & -\frac{1}{\sqrt{2}} \end{pmatrix} $$ Now do some algebra: $$ U = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & \frac{1-i}{\sqrt{2}} \\ \frac{1+i}{\sqrt{2}} & -1 \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & \mathrm{e}^{-i\frac{\pi}{4}} \\ \mathrm{e}^{i\frac{\pi}{4}} & -1 \end{pmatrix} $$ There is a quantum gate called $\mathrm{U2}$: $$ \mathrm{U2}(\phi,\lambda)= \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -\mathrm{e}^{i\lambda} \\ \mathrm{e}^{i\phi} & \mathrm{e}^{i(\phi+\lambda)} \end{pmatrix} $$ Setting $\phi=\frac{\pi}{4}$ and $\lambda = \frac{3}{4}\pi$ you have the result, since $\phi+\lambda =\pi$, so $\mathrm{e}^{i(\phi+\lambda)} = \mathrm{e}^{i\pi} = -1$ and $-\mathrm{e}^{i\lambda}=-\mathrm{e}^{i\frac{3}{4}\pi} = \frac{1-i}{\sqrt{2}}$. Conclusion: $U=\mathrm{U2}\big(\frac{\pi}{4},\frac{3}{4}\pi\big)$
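The identification is easy to verify numerically (a quick sketch):

```python
import numpy as np

def u2(phi, lam):
    """The U2 gate as defined in the answer."""
    return (1 / np.sqrt(2)) * np.array([
        [1.0,              -np.exp(1j * lam)],
        [np.exp(1j * phi),  np.exp(1j * (phi + lam))],
    ])

# The target unitary, read off column-by-column from the images of |0> and |1>.
U = np.array([
    [1 / np.sqrt(2),   (1 - 1j) / 2],
    [(1 + 1j) / 2,    -1 / np.sqrt(2)],
])

assert np.allclose(U.conj().T @ U, np.eye(2))        # U is indeed unitary
assert np.allclose(u2(np.pi / 4, 3 * np.pi / 4), U)  # U = U2(pi/4, 3*pi/4)
```

Checking unitarity first is a good habit: it confirms the two given columns really can come from a single unitary before hunting for gate parameters.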
{ "domain": "quantumcomputing.stackexchange", "id": 1202, "tags": "quantum-gate, quantum-state, unitarity" }
Converting a natural number to a permutation matrix in Python. How to speed it up perhaps avoiding messing around with lists, sets and dicts?
Question: It is something a bit complex to explain here, but the code below takes a 128-bit number and converts it into a permutation matrix, a beast which I have already faced before. The matrix is represented as a list of numbers. Each number is a row. The way I've found to do the mapping number->matrix was to convert the number through a multi-radix (or something that could be considered one) numeric system, so each digit corresponds to a row in the matrix. Since there can't be duplicate rows, some offset magic was needed (that is one of the uses of the map taboo used below). How could this code be improved regarding data structures and use of loops? More conceptually, what about my choice of conversion through a multi-radix system? Could it be simpler and still be a perfect mapping from naturals to permutation matrices? P.S. There are 2^128 numbers and 35! matrices. 2^128 < 35!, so all numbers can have a unique corresponding matrix. from operator import itemgetter from sortedcontainers import SortedSet, SortedDict def permutmatrix2int(m): """Convert permutation matrix 35x35 to number.""" taboo = SortedSet() digits = [] rowid = 34 for bit in m[:-1]: bitold = bit for f in taboo: if bitold >= f: bit -= 1 taboo.add(bitold) digits.append(bit) rowid -= 1 big_number = digits[0] pos = 0 base = b = 35 for digit in digits[1:]: big_number += b * digit pos += 1 base -= 1 b *= base return big_number def int2permutmatrix(big_number): """Convert number to permutation matrix 35x35.""" taboo = SortedDict() res = big_number base = 35 bit = 0 while base > 1: res, bit = divmod(res, base) if res + bit == 0: bit = 0 for ta in taboo: if bit >= ta: bit += 1 base -= 1 taboo[bit] = base for bit in range(35): if bit not in taboo: break taboo[bit] = base - 1 return list(map( itemgetter(0), sorted(taboo.items(), reverse=True, key=itemgetter(1)) )) Answer: With great help from a mathematician (at least in spirit) regarding permutations, factoradic and matrices, I could implement the following, which is 30 times faster.
I provide the opposite function as a bonus. def pmatrix2int(m): """Convert permutation matrix to number.""" return fac2int(pmatrix2fac(m)) def int2pmatrix(big_number): """Convert number to permutation matrix.""" return fac2pmatrix((int2fac(big_number))) def pmatrix2fac(matrix): """Convert permutation matrix to factoradic number.""" available = list(range(len(matrix))) digits = [] for row in matrix: idx = available.index(row) del available[idx] digits.append(idx) return list(reversed(digits)) def fac2pmatrix(digits): """Convert factoradic number to permutation matrix.""" available = list(range(len(digits))) mat = [] for digit in reversed(digits): # print(digit, available) mat.append(available.pop(digit)) return mat def int2fac(number): """Convert decimal into factorial numeric system. Left-most is LSB.""" i = 2 res = [0] while number > 0: number, r = divmod(number, i) res.append(r) i += 1 return res def fac2int(digits): """Convert factorial numeric system into decimal. Left-most is LSB.""" radix = 1 i = 1 res = 0 for digit in digits[1:]: res += digit * i radix += 1 i *= radix return res
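A compact round-trip check of the same factoradic (Lehmer-code) idea (my own condensed restatement, using math.factorial in place of the running products above) confirms the mapping is a bijection on permutations of 35 elements:

```python
import math
import random

def perm_to_int(perm):
    """Permutation (a list of distinct items) -> integer via its Lehmer code."""
    available = sorted(perm)
    n = 0
    for i, p in enumerate(perm):
        idx = available.index(p)
        del available[idx]
        n += idx * math.factorial(len(perm) - 1 - i)
    return n

def int_to_perm(n, size):
    """Integer in [0, size!) -> permutation of range(size); inverse of perm_to_int."""
    available = list(range(size))
    perm = []
    for i in range(size - 1, -1, -1):
        digit, n = divmod(n, math.factorial(i))
        perm.append(available.pop(digit))
    return perm

# Round trip over random permutations of 35 elements (the 35x35 matrix case),
# plus the range check mentioned in the question: 2**128 < 35!.
for _ in range(50):
    p = random.sample(range(35), 35)
    assert int_to_perm(perm_to_int(p), 35) == p
assert 2**128 < math.factorial(35)
```

This makes the "perfect mapping from naturals to permutation matrices" property easy to test mechanically before worrying about speed.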
{ "domain": "codereview.stackexchange", "id": 38162, "tags": "python, python-3.x, matrix, base64" }
Formula (or heuristic) for calculating force required for incremental change in angle of attack of an airfoil
Question: Assume I have a system comprising an airfoil A immersed in a fluid. Further assume that the airfoil is affixed via two bolts at one end, and is untethered at the other end. The system can be parametrised as follows: S is the span of the airfoil C is the chord of the airfoil V (vega) is the relative velocity of the fluid passing over the airfoil $\rho$ is the density of the fluid in which the airfoil is immersed $\theta$ is the current angle of attack of the airfoil (in radians) L the distance between the two bolts affixing the airfoil at one end (< C) [[Question]] Given a new angle $\theta_i$, what would be the formula for calculating the force required to be exerted on the two bolts to change the current angle of attack of the airfoil from $\theta$ to $\theta_i$? (assuming all other variables held constant) [[Notes]] From the lift characteristics of an airfoil, I think it's fair to assume that a greater force will be required to be exerted as the angle of attack increases (up until the stall angle). Additionally, since the force required is likely to be monotonically increasing (up until the stall angle), I would prefer if the function actually returned the supremum of the forces required to increase the angle of attack from: $\theta$ $\rightarrow$ $\theta$ + $\delta$$\omega$ where $\delta$$\omega$ is the change in radians divided into an infinitesimal number of steps. Ideally, the formula should be derived from first principles (or the answer be provided as pseudocode for an algorithm), so that I can follow the logic and apply it to an airfoil of non-rectangular shape. Answer: As per NASA (the National Aeronautics and Space Administration), the thin foil lift equation is $$L = Cl * A * .5 * r * V^2 $$ where Cl is the lift coefficient (in the small angle range it is directly related to the angle of attack, multiplied by other factors), A is the wing area, r the fluid density, and V the velocity. They have a Java app here: FoilSim app, which is similar to what you seem to be asking about.
You have to set your computer's Java security settings to let this app run; they offer help to set it up. As for your model of the wing and its attachment to the support via two bolts, it is not practical: the minimum number of bolts required would be three non-collinear bolts, to turn your hinge connection into a cantilever connection.
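A rough numerical sketch of the quoted lift equation, with the thin-airfoil small-angle approximation Cl ≈ 2πα substituted in. All the numbers below are made up for illustration, and this ignores stall, 3D effects and planform shape:

```python
import math

def lift(alpha_rad, span, chord, rho, v):
    """Lift from L = Cl * A * 0.5 * rho * V**2, with the thin-airfoil
    small-angle lift coefficient Cl ~= 2*pi*alpha (valid well below stall)."""
    cl = 2 * math.pi * alpha_rad
    area = span * chord              # rectangular planform assumed
    return cl * area * 0.5 * rho * v**2

# Example: a 1 m x 0.2 m foil at 5 degrees in water (rho ~ 1000 kg/m^3) at 2 m/s.
L = lift(math.radians(5), span=1.0, chord=0.2, rho=1000.0, v=2.0)
print(f"lift ≈ {L:.0f} N")
```

Evaluating this at θ and at θ + δω, and multiplying the change in lift by the moment arm about the bolt line, gives the kind of incremental holding-force estimate the question asks about.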
{ "domain": "engineering.stackexchange", "id": 1132, "tags": "mechanical-engineering, fluid-mechanics, applied-mechanics, aerodynamics" }
What happens to extra-galactic rays when they arrive at the solar system?
Question: Quasars send baryons from other galaxies towards us, which are deflected by the local magnetosphere. The early solar system probably picked up many millions of extragalactic cosmic rays for every tonne of local atoms. Humans probably contain some baryons which are sourced from >3 billion light years away. What happens to extragalactic cosmic rays when they arrive at the solar system/at a protoplanetary disk? Where do they go? Does a proton from that far away adopt a local electron and become hydrogen? Answer: What happens to extragalactic cosmic rays when they arrive at the solar system/at a protoplanetary disk? Note that cosmic rays are generally considered to have energies in excess of ~10^9 eV. In a 1 (0.01) nT magnetic field, a 1 GeV proton would have a ~5.7 (566) Gm gyroradius. Note that the gradient scale length of the heliospheric boundaries is on the order of 100s of km to several Mm, i.e., much smaller than the typical cosmic ray gyroradius. This is important because if the order were flipped, one might not expect many cosmic rays below a certain energy to penetrate far into the heliosphere. These are the lowest energy cosmic rays, whereas at higher energy the gyroradius can exceed an astronomical unit, e.g., a ~10^15 eV proton in a 1 nT magnetic field has a gyroradius of >20,000 AU, which is larger than the entire heliosphere (on the ram side with the interstellar medium, at least). So most cosmic rays above ~1 GeV that are ions and come from outside the heliosphere enter the heliosphere and begin to follow the standard single particle motions associated with the observed electric and magnetic fields of the system. Does a proton from that far away adopt a local electron and become hydrogen? No, this almost certainly does not happen. A proton with at least 1 GeV of kinetic energy is relativistic and so the recombination cross-section is going to be tiny.
Much lower energy protons can recombine and form low energy neutral atoms that can enter the heliosphere unaffected now by the magnetic field. We can detect these with spacecraft such as IBEX. The early solar system probably picked up many millions of extragalactic cosmic rays for every tonne of local atoms. I am not sure about that but one could presumably calculate such things by looking at the well documented and published cosmic ray spectrum (e.g., see figure below). Note that the roll-off at low energies is actually due to solar transients like coronal mass ejection (CME) which lead to a relationship called the Forbush decrease that tends to follow the solar cycle.
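The gyroradius numbers quoted in the answer are easy to reproduce from the relativistic relation r = p/(qB) (a quick sketch; the physical constants are standard, the helper name is mine):

```python
import math

C = 2.99792458e8        # speed of light, m/s
Q = 1.602176634e-19     # proton charge, C
M_P_EV = 938.272e6      # proton rest energy, eV

def gyroradius_m(kinetic_energy_eV, B_tesla):
    """Relativistic gyroradius r = p / (qB) for a proton."""
    e_total = kinetic_energy_eV + M_P_EV
    pc_eV = math.sqrt(e_total**2 - M_P_EV**2)  # momentum times c, in eV
    p = pc_eV * Q / C                          # momentum in kg m/s
    return p / (Q * B_tesla)

print(gyroradius_m(1e9, 1e-9) / 1e9)            # ~5.7 Gm in a 1 nT field
print(gyroradius_m(1e9, 1e-11) / 1e9)           # ~566 Gm in a 0.01 nT field
print(gyroradius_m(1e15, 1e-9) / 1.496e11)      # ~22,000 AU for a 10^15 eV proton
```

All three figures from the answer (5.7 Gm, 566 Gm, and a gyroradius exceeding 20,000 AU) fall out directly.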
{ "domain": "physics.stackexchange", "id": 80624, "tags": "astrophysics, atomic-physics, baryons, cosmic-rays" }
change sbpl_lattice_planner preference to reverse motion?
Question: Sbpl_lattice_planner's default for navigation seems to be to move forward with minimal reverse movements. It seems to choose inefficient routes in order to move forward rather than in reverse. For example, sometimes it will turn around prior to running a forward route rather than taking a simpler reverse route. 1-Is this true? 2-Is there any functionality in sbpl_lattice_planner to make the default to be to prefer moving in reverse with minimal forward motion? The robot has Ackermann-like steering and it needs to move predominately in the reverse direction. We could switch the forward direction to reverse however there are times where forward motion is needed. Originally posted by bk on ROS Answers with karma: 25 on 2014-01-21 Post score: 2 Original comments Comment by ahendrix on 2014-01-21: Out of curiosity, which local planner are you using for your robot? Comment by bk on 2014-01-21: We created a custom local planner Comment by ahendrix on 2014-01-21: Is the source available somewhere? There's a distinct lack of good local planners for ackermann-style vehicles. Comment by orion on 2014-01-21: I will second this. I will confirm the solution below works, as I have made primitives for ackermann for SBPL. I am working on making my own local planner, but would be up for working with some people on it. Have even been looking at converting "carlike planner" or seeing how hector is doing it. Comment by bk on 2014-01-22: The local planner is in a very preliminary stage. It does not adhere to the nav_core::BaseLocalPlanner interface, does not use cmd_vel nor the ackermann steering message types, and will unfortunately not be universally helpful in other ways. I will let you know if something changes Answer: Yes. Most of the default primitives prefer forward motion. Absolutely. You'll need to write your own motion primitives that don't have a penalty on reverse motion. The stock motion primitives are generated by matlab code in the sbpl/matlab/mprim directory. 
The primitive generation is fairly well documented, so it shouldn't be too hard to modify. @ben points out that there is a tutorial on editing SBPL motion primitives: http://sbpl.net/node/52 Originally posted by ahendrix with karma: 47576 on 2014-01-21 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by bk on 2014-01-21: That makes sense. I will validate this tomorrow. Thanks! Comment by ben on 2014-01-21: Within the last 6 months, we posted more tutorials on the sbpl website that you might find useful. Here's a really good one written by Victor Hwang - with really good pictures. It describes how to generate motion prims: http://sbpl.net/node/52 Comment by bk on 2014-01-22: Thanks for the tutorials. They have been very helpful! Comment by bk on 2014-01-22: Thanks ahendrix for the solution!
{ "domain": "robotics.stackexchange", "id": 16713, "tags": "ros, sbpl-lattice-planner, sbpl" }
How to classify a document by image?
Question: I need an open-source solution to classify a document. I do not want to use NLP; I need only to check the look and feel. I tried OpenCV. I have a template and I need to match it.

import cv2

template = cv2.imread(template_file, 0)
template = cv2.normalize(template, None, alpha=0, beta=1,
                         norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
img = img2.copy()

# Apply template matching
method = cv2.TM_CCOEFF_NORMED
res = cv2.matchTemplate(img, template, method)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
if max_val > threshold:
    print('match OK')

This method doesn't seem very robust and I get a lot of false positives. Answer: You might want to look at Siamese CNNs depending on the size of your dataset. A good introduction can be found here.
{ "domain": "datascience.stackexchange", "id": 5596, "tags": "python, similar-documents" }
Tools for manual disambiguation/editing of match results
Question: We are using a clustering approach to find data that is present in multiple datasets. E.g., if a product is sold on Barnes & Noble and Amazon, we want to know that it's the same product. The algorithm spits out 3 lists of data:
- High confidence that a pair of rows are the same
- Unsure if two rows are the same (low confidence either way)
- High confidence that two rows are not the same

We want a manual process to do the following:
1. Evaluate if our high confidence matches are correct (useful during development)
2. Disambiguate "unsure" results

I can't for the life of me work out (a) what this kind of process is called and (b) what tools are out there to help us do this. We've resorted to manually inspecting CSVs in Excel and labelling columns. At some point we can stop doing process 1 once our pipeline is good enough, but we will always have a need for process 2. NB this isn't really human-in-the-loop as commonly described, because the results of the human QA don't get pumped back into training data (there is no training data - it's clustering). Answer: I realised that this kind of problem is best solved by rolling your own solution. Here are the options I evaluated:
- Google Sheets. Simple, dirty. Can integrate with Databricks via a library.
- Streamlit. Nicer than Sheets, can create the app in Python.
- Retool. Simpler than Streamlit but not necessarily targeted at data apps.
{ "domain": "datascience.stackexchange", "id": 12132, "tags": "clustering, data-quality" }
What would be the effect of a large solar flare on motor vehicles?
Question: With the solar cycle now leading to an increase in flare activity, one naturally starts to think about the effects of a large solar flare heading this way. While major flares like the Carrington Event are well described, along with the impact in NE Canada in 1989, what I have had some trouble finding is the effect of a large solar flare on modern motor vehicles. Would a large solar flare cause the batteries in a Tesla to explode? Would it destroy the electronic ignition in a Honda Accord? Would we even be able to turn on and drive a modern car (use a 2015 Honda Accord as a reference)? I looked through some recent posts - here and here, but didn't find a good answer. Answer: Never drive a car which is larger than a football field during a solar storm. Otherwise, don't worry about it. The answer to Does a geomagnetic storm visibly deflect a compass? shows the plot below, which was a very large event. It looks like the fastest change was about 200 nT per minute, or about 3 × 10^-9 Tesla/second. For a 6 square meter car that's an induced EMF around the car's perimeter of about 20 nanovolts all the way around, inducing a minuscule current, probably below that due to galvanic effects (dissimilar metals, rain, salt, rust, dirt). You need a big antenna to pick it up. Since the area increases faster than the perimeter, if your car were 2,000 km by 3,000 km that would be about 20,000 volts! Of course a real circuit (a continental-sized power grid) presents substantial conductivity, so you wouldn't necessarily see 20 kV lightning, but you will see blown transformers and switching stations because of the induced current overloading. Source Of course, if your car has a magnetic compass stuck to the windshield (my grandfather's car did) it might show several degrees of deflection, but that's probably buried in the errors caused by the soft-iron effects of the car's magnetic frame itself. Would a digital compass work reliably when installed in a car?
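Both numbers in the answer follow from Faraday's law for a planar loop of fixed area, |EMF| = A · dB/dt (a quick sketch; the function name is mine, the areas and dB/dt are the ones in the answer):

```python
def induced_emf_volts(area_m2, dB_dt_tesla_per_s):
    # Faraday's law for a planar loop of fixed area: |EMF| = A * dB/dt
    return area_m2 * dB_dt_tesla_per_s

dB_dt = 3e-9                                 # ~200 nT per minute
print(induced_emf_volts(6.0, dB_dt))         # ~1.8e-8 V, i.e. ~20 nV for a car
print(induced_emf_volts(2e6 * 3e6, dB_dt))   # ~1.8e4 V, i.e. ~20 kV for a 2,000 km x 3,000 km loop
```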
{ "domain": "astronomy.stackexchange", "id": 5862, "tags": "the-sun, solar-flare, flare" }
On demand Publishing?
Question: I'm using ROS Kinetic with the Yumi package from KTH. I am getting some points from the camera, doing some processing, and sending the coordinates back to the robot arms to move. What I'm doing is putting the camera input and processing system on one node (publisher) that publishes the coordinates, and the control and movement of the robot on another node (subscriber). The problem is that the camera publishes a sequence of coordinates to the robot. My question is: how can I publish the first message only and then wait for a demand or flag to publish the second message, and so on? Thank you in advance. Originally posted by Ahmad on ROS Answers with karma: 41 on 2018-01-29 Post score: 0 Answer: Instead of using a publisher and subscriber you could use a service and client. The camera input and processing would be in the service, and the client would request the coordinate whenever it needs it. Here is a service/client tutorial for C++ or here is one for Python. Originally posted by Airuno2L with karma: 3460 on 2018-01-29 This answer was ACCEPTED on the original site Post score: 4
{ "domain": "robotics.stackexchange", "id": 29900, "tags": "ros, msg-subscriber, ros-kinetic" }
Print multiplication tables
Question: Write a function that given a max argument will print a nicely aligned multiplication table, for example, if max = 8

  1   2   3   4   5   6   7   8
  2   4   6   8  10  12  14  16
  3   6   9  12  15  18  21  24
  4   8  12  16  20  24  28  32
  5  10  15  20  25  30  35  40
  6  12  18  24  30  36  42  48
  7  14  21  28  35  42  49  56
  8  16  24  32  40  48  56  64

The code is pretty straightforward, but I am interested in any possible improvement.

def print_multiplication_table(max)
  pad = (1 + (max*max).to_s.length)
  puts (1..max)
    .to_a
    .product((1..max).to_a)
    .map{|a, b| a * b}
    .each_slice(max)
    .map{ |x| x.map(&:to_s).map{ |x| " " * (pad - x.length) + x}.join(' ')}
    .join("\n")
end
print_multiplication_table(8)

Answer:

def print_multiplication_table(max)

I think size would be a better name for the argument. max sounds like it's the largest number in the table.

pad = (1 + (max*max).to_s.length)

Rename to column_width. Padding is the spaces that need to be added to reach the column width. The parentheses aren't needed.

.map{ |x| x.map(&:to_s).map{ |x| " " * (pad - x.length) + x}.join(' ')}

join(' ') adds an extra space between columns. The 1 + in the calculation of the column width already guarantees there's at least one space between two numbers, so now there are at least two spaces between numbers. If this is intended, change the 1 to 2 in the calculation of the column width, so the minimum number of spaces is determined in a single place in the code. Since you only want to print the table, it is unnecessary to build all those temporary arrays and strings. A nested loop will be shorter and much easier to read than a chain of calls:

def print_multiplication_table(size)
  column_width = 2 + (size*size).to_s.length
  (1..size).each do |i|
    (1..size).each do |j|
      result = (i * j).to_s
      padding = " " * (column_width - result.length)
      print padding + result
    end
    print("\n")
  end
end
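For comparison, the same nested-loop idea can be written with String#rjust doing the padding (a sketch of my own variant, building the string rather than printing directly, and not part of the original review):

```ruby
def multiplication_table(size)
  width = (size * size).to_s.length + 1
  (1..size).map { |i|
    # right-justify each product to a fixed column width
    (1..size).map { |j| (i * j).to_s.rjust(width) }.join
  }.join("\n")
end

puts multiplication_table(8)
```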
{ "domain": "codereview.stackexchange", "id": 17051, "tags": "ruby, formatting" }
Distance times Velocity
Question: Is there a meaningful physical concept of $distance * velocity$? Came across something analogous in computer science and was wondering if there was any physical analogue. Answer: In diffusion equations, the diffusion coefficient typically has a dimensionality of $\mathrm{L^2T^{-1}}$. For instance, the heat equation is typically written in this form: $$ \frac{\partial u}{\partial t}-\alpha\nabla^2 u=0, $$ where $u$ is temperature and $\alpha$ is the thermal diffusivity. The thermal diffusivity is defined as: $$ \alpha=\frac{k}{\rho c_p}, $$ where $k$ is the thermal conductivity of the medium, $\rho$ is the mass density and $c_p$ is the specific heat capacity. The SI unit of thermal diffusivity is $\mathrm{m^2/s}$.
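A quick numeric check of the formula above, using approximate handbook values for copper (the numbers are my illustration, not from the original answer):

```python
def thermal_diffusivity(k, rho, c_p):
    # alpha = k / (rho * c_p): (W/m.K) / ((kg/m^3)(J/kg.K)) = m^2/s
    return k / (rho * c_p)

# copper, approximate room-temperature values
alpha_cu = thermal_diffusivity(401.0, 8960.0, 385.0)
print(alpha_cu)  # on the order of 1e-4 m^2/s
```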
{ "domain": "physics.stackexchange", "id": 97944, "tags": "kinematics, velocity, dimensional-analysis, distance" }
"Smart" (ish) pointer in C
Question: I am a programmer with a primarily Rust-based background who has nonetheless taken an interest in C lately due to its simplicity and minimalism. However, one of the things I miss most from Rust is safe memory management. So I created a "smart pointer" of sorts to replicate that experience:

#include <stdio.h>
#include <stdlib.h>  /* for free() */

/* This is a "smart" pointer that uses Option types for safe memory management. */

typedef enum option { Some, None } Option;

typedef struct OptionPointer {
    void *content;
    Option state;
    int is_malloc;
} OptionPointer;

union internal_PointerType {
    void *content;
    int err;
};

typedef struct PointerType {
    int choice;
    union internal_PointerType internal;
} PointerType;

OptionPointer create_ptr(void *content, int is_malloc) {
    OptionPointer p;
    p.content = content;
    p.state = Some;
    p.is_malloc = is_malloc;
    return p;
}

void set_ptr_invalid(OptionPointer *ptr) {
    ptr->state = None;
    // Only need to free memory if it was dynamically allocated,
    // otherwise we only need to worry about dangling pointers
    if (ptr->is_malloc) {
        free(ptr->content);
    }
}

PointerType get_ptr(OptionPointer *ptr) {
    PointerType res;
    res.choice = 0;
    if (ptr->state == None) {
        res.choice = 0;
        res.internal.err = 1;
    } else {
        res.choice = 1;
        res.internal.content = ptr->content;
    }
    return res;
}

// Example
int main() {
    char *a = "This is some testing text";
    OptionPointer ptr = create_ptr(a, 0);
    // Imaginary scenario where the pointer becomes invalid
    set_ptr_invalid(&ptr);
    PointerType res = get_ptr(&ptr);
    if (res.choice) {
        char *content = (char*)res.internal.content;
        printf("%s\n", content);
    } else {
        printf("Invalid pointer at %s:%d\n", __FILE__, __LINE__);
    }
}

I can already see some notable problems:
- I literally just started programming in C a few days ago so I probably am not using recommended best practices
- "Smart" only goes so far - you'd have to call set_ptr_invalid() every single time a pointer could become invalid as a result of an operation
- This (possibly?)
might not work well with malloc() and doesn't cover the case that malloc() wasn't successful

All feedback on this would be much appreciated!

Answer: Disclaimer: The next few lines are my opinion as a C programmer so take it with a grain of salt. As a C programmer, I will never use this. The language is verbose enough already due to its lack of features. You are now erasing the type of the pointer by stuffing it in void, making my job harder. Take the following example.

void do_something(PointerType * some_object);

What is the underlying type of some_object? There is no way of telling without documentation. If some idiot stuffs in the wrong underlying type, I have a nice time figuring out access violations. Let us say it is MyStruct.

typedef struct MyStruct {
    char character;
    int integer;
} MyStruct;

If I was implementing do_something, I could just do it like this with regular pointers.

void do_something(MyStruct * some_object){
    if(some_object == NULL){
        printf("Invalid pointer at %s:%d\n", __FILE__, __LINE__);
        return;
    }
    printf("Value of my struct is {character : %c, integer : %d}",
           some_object->character, some_object->integer);
}

With your code I have to first get the underlying type and then cast it to MyStruct*, which adds quite a few lines. OptionPointer also adds the overhead of an enum to my pointer, and is potentially going to confuse the compiler optimizer, generating suboptimal code.

Review
- Add a check for NULL in create_ptr. Someone might easily stuff a NULL in there. This would also mean you have to have an error mechanism if create_ptr fails.
- Use the opaque pointer paradigm to hide the internals of OptionPointer, otherwise there is no way of preventing a C programmer from just treating this as a struct and modifying state and contents directly. Using an opaque pointer also means that you will have to add a function to check the value of choice.
- There is only 1 way a pointer fails, aka it is NULL. So you do not need a separate err code for that.

This is how I would write the same thing.
#include <stdio.h>
#include <stdbool.h>

/*---------- Pointer.h --------------*/
typedef struct Pointer Pointer;
Pointer create_ptr(void * ptr);
void set_ptr_invalid(Pointer *ptr);
bool is_ptr_valid(Pointer* ptr);
void* get_ptr(Pointer* ptr);

/*---------- Pointer.c ------------*/
typedef struct Pointer {
    void* content;
} Pointer;

Pointer create_ptr(void * ptr){
    Pointer p;
    p.content = ptr;
    return p;
}

void set_ptr_invalid(Pointer *ptr) {
    ptr->content = NULL;
}

bool is_ptr_valid(Pointer* ptr){
    return ptr->content != NULL;
}

void* get_ptr(Pointer* ptr){
    return ptr->content;
}

/*------------ Main.c ---------- */
// Example
int main() {
    char const * a = "This is some testing text";
    Pointer ptr = create_ptr((void*)a);
    if (is_ptr_valid(&ptr)) {
        char *content = (char*)get_ptr(&ptr);
        printf("%s\n", content);
    } else {
        printf("Invalid pointer at %s:%d\n", __FILE__, __LINE__);
    }
    set_ptr_invalid(&ptr);
    if (is_ptr_valid(&ptr)) {
        char *content = (char*)get_ptr(&ptr);
        printf("%s\n", content);
    } else {
        printf("Invalid pointer at %s:%d\n", __FILE__, __LINE__);
    }
}
{ "domain": "codereview.stackexchange", "id": 45321, "tags": "c, pointers" }
filter coefficients to know high pass and low pass filter
Question: I looked at a few questions already on dsp.stackexchange and couldn't get a direct answer. Basically I have the kernel of a Gaussian separable filter.

float w[N*2+1] = {-0.00467444956f, -0.0531099625f, 0.0152470656f, 0.300147355f, 0.484779954f, 0.300147355f, 0.0152470656f, -0.0531099625f, -0.00467444956f};
float w2[N*2+1] = {-0.00899997167f, 0.0197822638f, -0.0300691985f, 0.0374629796f, 0.963647842f, 0.0374629796f, -0.0300691985f, 0.0197822638f, -0.00899997167f};

The coefficients in the first kernel increase and then decrease. The middle point is the strongest and it is close to 0.5. In the second, the coefficients are of alternating sign, with the middle being the strongest, close to 1. What properties do these coefficients signify? How can I relate them to a high-pass or low-pass filter? I am using the above kernels for smoothing the image with a Gaussian separable filter. Answer: A quick rule of thumb to quickly assess short FIR filters: if the sum of your coefficients is close to $1$, then the filter preserves constant signals (because it gives you the gain at frequency 0, or DC), and possibly preserves some other low frequencies too. So it may have a low-pass behavior. If the sum of the odd coefficients minus the even coefficients is close to zero, the filter tends to attenuate high frequencies. So it may have a high-cut behavior. What is in between is not known, but this provides you with a first approximation. Other computations tell you if it is somehow low-cut and high-pass. This is the case for your first filter: sum $1$, odd-even $-0.0119$. Adding that it is symmetric, with triangular shape (up/down), the low-pass is likely, like a weighted average with decaying weights away from the center. And this can be checked (see Figure below). For the second one, this is more difficult: coefficients sum to $1$, and the odd/even difference is quite high ($0.77$). But if you look carefully, only one coefficient is big, the others are very small.
So it looks like a unit pulse ("discrete Dirac"), hence an all-pass behavior is possible, with a tiny bit of smoothing at high frequencies, as shown below. But these are only rules of thumb, which turn out correct here. I can see no high-pass behavior in your filters. If you combine them in 2D (left in Figure below), you are likely to smooth mostly in the direction of the first filter, like a $\frac{1}{4}[1\,2\,1]^T$ 3-tap Gaussian filter (right in Figure below).
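The two rules of thumb are easy to apply numerically: the DC gain is the plain sum of the taps, and the gain at the Nyquist frequency is the alternating sum (a small sketch using the kernels from the question; function names are mine):

```python
w  = [-0.00467444956, -0.0531099625, 0.0152470656, 0.300147355, 0.484779954,
      0.300147355, 0.0152470656, -0.0531099625, -0.00467444956]
w2 = [-0.00899997167, 0.0197822638, -0.0300691985, 0.0374629796, 0.963647842,
      0.0374629796, -0.0300691985, 0.0197822638, -0.00899997167]

def dc_gain(h):
    # frequency response at omega = 0
    return sum(h)

def nyquist_gain(h):
    # frequency response at omega = pi: sum of (-1)^n * h[n]
    return sum((-1) ** n * c for n, c in enumerate(h))

print(dc_gain(w), nyquist_gain(w))    # ~1.0 and ~0.012 in magnitude: low-pass-like
print(dc_gain(w2), nyquist_gain(w2))  # ~1.0 and ~0.77: close to all-pass
```

Both kernels sum to 1 (DC preserved); the first nearly cancels at Nyquist while the second barely attenuates it, matching the $-0.0119$ and $0.77$ figures in the answer.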
{ "domain": "dsp.stackexchange", "id": 3387, "tags": "image-processing, lowpass-filter, gaussian, highpass-filter" }
What is an example of a hidden variable model that meets the bound of Bell's inequality?
Question: Following https://en.wikipedia.org/wiki/Bell%27s_theorem: The best possible local realist imitation (red) for the quantum correlation of two spins in the singlet state (blue), insisting on perfect anti-correlation at 0°, perfect correlation at 180°. Just for my own understanding and learning of math, let's pretend that indeed the experimental data was the red curve rather than the blue curve. Then following the same article, we would be able to model this using a local realist model: $$ {\displaystyle C_{h}(a,b)=E(A(\mathbf {a} ,\lambda )B(\mathbf {b} ,\lambda ))=\int _{\Lambda }A(\mathbf {a} ,\lambda )B(\mathbf {b} ,\lambda )p(\lambda )d\lambda .} $$ What choices of $A(\mathbf {a} ,\lambda )$, $B(\mathbf {b} ,\lambda )$, and $p(\lambda )$ would yield the red curve? Answer: Assuming $\mathbf {a}, \mathbf{b} \in \mathbb{R}^2$, with $\alpha, \beta$ being the corresponding angles, the following model yields the red graph: \begin{array}{l} \lambda \sim{\rm{Uniform}}\left[ {0,2\pi } \right] \to \mathop P\left( \lambda \right) = \frac{1}{{2\pi }} \\ A(\mathbf {a} ,\lambda ) = {\mathop{\rm sgn}} \cos (\alpha - \lambda) \\ B(\mathbf {b} ,\lambda ) = -{\mathop{\rm sgn}} \cos (\beta - \lambda) \end{array}
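The model above is easy to verify with a quick Monte Carlo estimate of the correlation; the triangular "red" curve, which is $-1 + 2|\alpha-\beta|/\pi$ for $|\alpha-\beta| \le \pi$, emerges directly (a sketch; the helper names are mine):

```python
import math, random

random.seed(0)

def A(alpha, lam):
    return 1 if math.cos(alpha - lam) >= 0 else -1

def B(beta, lam):
    return -1 if math.cos(beta - lam) >= 0 else 1

def correlation(alpha, beta, n=200_000):
    # estimate E[A(a, lam) * B(b, lam)] with lam ~ Uniform[0, 2*pi]
    total = 0
    for _ in range(n):
        lam = random.uniform(0.0, 2.0 * math.pi)
        total += A(alpha, lam) * B(beta, lam)
    return total / n

print(correlation(0.0, 0.0))          # ~ -1: perfect anti-correlation at 0 degrees
print(correlation(0.0, math.pi))      # ~ +1: perfect correlation at 180 degrees
print(correlation(0.0, math.pi / 2))  # ~ 0: the straight-line segment in between
```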
{ "domain": "physics.stackexchange", "id": 85538, "tags": "quantum-mechanics, probability, bells-inequality" }
To what extent is it possible to use genetic algorithms to make wind mill turbine blades more efficient?
Question: I recently watched this video on youtube. It featured someone explaining how he used genetic algorithms to improve the efficiency of wind mill turbines by finding the optimal shape for the blades. Eventually, the narrator found a non-trivial shape for the blades that seemed to be more efficient than windmills that have blades with a "standard" shape. He did, however, use the wrong viscosity for the substance in which the wind mills are situated. Therefore the preliminary questions are: Would the shape of the new blades be the same if the same procedure would be carried out in a substance with the correct viscosity? Would the blades still be more efficient than the "standard" blades? Now to the real question(s). The narrator in the video carries out this genetic algorithm procedure with two-dimensional wind turbine blades. I was wondering, though, if a similar procedure could also be carried out in a three-dimensional environment? I guess it would be more complicated, and perhaps would require more computing power/time, but if it can be used to enhance the efficiency of wind turbine blades, it would be a good idea to do so, right? Has this already been done? Why or why not? Last question: if the genetic algorithm would indeed create novel, more efficient wind turbine blades, would that mean that some company would actually pick up on the idea and start producing them? Or would the novel shape be too complicated and therefore too expensive to produce, rendering the design useless from a financial perspective? Thanks a lot in advance! Answer: Short answer, yes you could build a GA that worked on a 3D model of a turbine with the correct viscosity, and (probably) yes, it would still find good designs. Exactly what those designs would be is somewhat unpredictable -- they might be very conventional blade arrangements or they might be completely novel and weird. However, that's also true of many other search algorithms.
GAs are, at heart, just one of a class of hundreds of stochastic search algorithms. All you need for a GA to work is three things: (1) a way of describing what a possible solution looks like, (2) a way of trying to combine good solutions to get more good solutions, and (3) a way of determining when one solution is better than another. The first can be as simple as just a vector of numbers ([number of blades, distance between blades, angle of attack, ...]) or may be arbitrarily complex, allowing for each blade to have an independent length and position, etc. For number three, you need a model that tells you how good a turbine is. This is where you need to get the viscosity of air correct, embed the right equations governing the electrical outputs, etc. That just leaves the genetic operators -- how do you take good designs and produce more or better ones? Again, this can be very complex, but it can also be quite simple. For instance, you have a design with three blades, each five meters long. OK, make a small random change to that. Try blades that are 5.2 meters long. If the model tells you this is better, keep it. If not, throw it away and keep the original five meter blades. It's in this stage where domain knowledge is often invaluable. Having someone who can say, "the number of blades is really important -- taking a good five blade design and removing a blade isn't going to work unless you also change lots of other aspects of the design" allows you to simply say, "OK, my algorithm will avoid making that type of change in favor of making changes that are more likely to help." Like I said, GAs are just search algorithms. They tend to often perform a "broader" search, which can sometimes make them more likely to find these off-the-wall solutions than some other methods. On the flip side, they typically take much longer to find good solutions than some other methods. That's also an approximation; sometimes a GA works great, sometimes it works very poorly.
Understanding which cases will be which is one of the great unsolved problems for people working in optimization. Finally, would companies adopt them if better designs were found? As you say, that's a more complex question. It isn't even just a matter of "too complicated". Suppose you find a 5% better turbine design that's completely feasible. You'll still likely have to redesign manufacturing equipment and processes, perhaps find new suppliers, etc. In the real world, "good enough" is often just that. However, there are numerous examples of real-world problems where methods like GAs or other random search methods have found solutions that were adopted. A lot of them are tucked behind corporate walls, but there have been notable successes in things like antenna design and aircraft design, as well as non engineering optimization problems like vehicle routing problems, scheduling problems, etc.
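The "make a small random change and keep it if the model says it is better" loop described in the answer is, in its simplest form, a (1+1) evolutionary strategy. A toy sketch (the quadratic "aerodynamic model" and its optimum at 5.5 m are invented purely for illustration):

```python
import random

random.seed(1)

def efficiency(blade_length_m):
    # toy stand-in for an aerodynamic model; best at 5.5 m (made up)
    return -(blade_length_m - 5.5) ** 2

def one_plus_one_search(x=5.0, steps=500, sigma=0.1):
    best = efficiency(x)
    for _ in range(steps):
        candidate = x + random.gauss(0.0, sigma)
        score = efficiency(candidate)
        if score >= best:  # keep the change only if the model likes it
            x, best = candidate, score
    return x

print(one_plus_one_search())  # converges close to 5.5
```

A full GA would add a population and recombination on top of this mutate-and-select loop, but the acceptance logic is the same.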
{ "domain": "cstheory.stackexchange", "id": 1663, "tags": "optimization, ne.neural-evol, genetic-algorithms" }
What does it mean to say that "6 tons of dark energy would be found within the radius of Pluto's orbit"?
Question: What does it mean to say that "6 tons of dark energy would be found within the radius of Pluto's orbit"? Does it mean that the dark energy is orbiting the solar system? Or does it mean a flow of dark energy through a sphere? Or does it refers to an energy in a particular system of inertia in the solar system? Or what? Again on a mass–energy equivalence basis, the density of dark energy (6.91 × 10−27 kg/m3) is very low: in the solar system, it is estimated only 6 tons of dark energy would be found within the radius of Pluto's orbit. However, it comes to dominate the mass–energy of the universe because it is uniform across space. Wikipedia/Dark energy Answer: We don't really know what dark energy is, but the default description is that it's not really dynamical. It can't flow or orbit. It's just a fixed built-in energy that is possessed by every cubic centimeter of space. On a more technical level, the simplest way to accomodate dark energy in the Einstein field equations is by adding a cosmological constant term. Naive attempts to make it variable rather than constant cause the stress-energy tensor to have a nonvanishing divergence, which makes GR not self-consistent.
{ "domain": "physics.stackexchange", "id": 17790, "tags": "space, dark-energy" }
Class for Switch statements
Question: This is more to get clarity on an implementation that I did. I am working on a React, Node, and Electron application that essentially has a form into which a user inputs values that will update some content files. The application has to scale for a lot of different scenarios, so I found that I kept writing switch statements to incorporate all of these scenarios. So for the sake of time I wrote a class that:
- Takes in two arrays: an array of keys and an array of functions that correspond to those keys.
- The constructor function creates a table for the values.
- Then there is a method on the class that allows you to look up that value and returns it.

Here is the code:

Array.prototype.createObjectFromKeysandValues = function(array) {
  let table = {}
  for(let pointer = 0; pointer < this.length; pointer++){
    let key = this[pointer]
    let value = array[pointer]
    table[key] = value
  }
  return table
}

module.exports = class KeyFunctionMappingTable {
  constructor(keys, values){
    this.lookUpTable = Object.assign({}, keys.createObjectFromKeysandValues(values),
      { default: () => new Error("Value Not Found")})
  }
  findHandlerFunction(value){
    return this.lookUpTable[value] || this.lookUpTable.default
  }
}

So when the code is implemented it looks like this (this scenario is for React components):

const factory = new KeyFunctionMappingTable(['select', 'input'], [
  (options, labelText) => ( <Select options={options} labelText={labelText} />),
  (options, labelText) => ( <Input labelText={labelText} />)])

I am finding that I have about three of these "mappings" throughout the code. What I want to know is if this seems like a readable implementation and whether it makes sense to strangers. If you don't think so, please let me know your suggestion. Answer: You have just added an unneeded layer of complexity to the problem.
You can just create the factory as an Object:

const factory = {
  select(opts, text) {
    return (<select options={opts} labelText={text} />)
  },
  input(opts, text) {
    return (<input labelText={text} />)
  },
  default() {
    return new Error("Value Not Found")
  }
};

Then call the function with:

(factory[value] || factory.default)(opts, text);
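The lookup-with-fallback pattern is easy to exercise outside React by substituting plain functions for the components (the stand-ins and names below are mine, purely for illustration):

```javascript
const demoFactory = {
  select: (opts, text) => `select:${text}`,  // stand-in for the <Select> component
  input:  (opts, text) => `input:${text}`,   // stand-in for the <Input> component
  default: () => new Error("Value Not Found"),
};

// unknown keys fall through to the default handler
const findHandler = (value) => demoFactory[value] || demoFactory.default;

console.log(findHandler("select")([], "Name"));          // "select:Name"
console.log(findHandler("missing")() instanceof Error);  // true
```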
{ "domain": "codereview.stackexchange", "id": 36953, "tags": "javascript, object-oriented, node.js, react.js, electron" }
Calculating the pH given number of moles of HI and 500. ml of water
Question: Shouldn't we also add the volume of the HI to the volume of the water (500mL) when we calculate the molarity? My book simply divided the number of mol by 500 mL. Answer: Simple questions are often interesting. (1) Most importantly, volumes are rarely additive. So the notion of adding the volume of the HI and the volume of water just doesn't work. (2) In an answer to a question about hydroiodic acid, Curt F. showed a chart of concentration and density. For the $\pu{20 ^\circ C}$ values the liters of water in a liter of the acid solution can easily be calculated. So for the OP to get an error of 1% or less, the molarity would have to be about 0.2 molar or less. \begin{array}{|c|c|c|c|} \hline \text{Concentration, %}(w/w) & \text{Density, }\pu{kg/L}(\text{@ }\pu{20 ^\circ C}) & \text{Molarity} & \pu{L(water)/L(acid)} \\ \hline 5.2 & 1.0342 & 0.420 & 0.980 \\ \hline 10.8 & 1.0812 & 0.922 & 0.964\\ \hline 16.4 & 1.1226 & 1.44 & 0.938 \\ \hline 22.4 & 1.1765 & 2.06 & 0.913 \\ \hline 27.2 & 1.2333 & 2.62 & 0.898 \\ \hline 33.1 & 1.2918 & 3.34 & 0.864 \\ \hline 38.7 & 1.3605 & 4.12 & 0.834\\ \hline 42.9 & 1.4208 & 4.77 & 0.811 \\ \hline 48.7 & 1.5072 & 5.74 & 0.773 \\ \hline 53.0 & 1.5913 & 6.59 & 0.748 \\ \hline 57.0 & 1.6933 & 7.55 & 0.728 \\ \hline \hline \end{array} You can see from the red line that the data is not linear.
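The molarity and L(water)/L(acid) columns in the table can be reproduced from the mass fraction and density alone (a sketch; using 127.91 g/mol for HI and a water density of 1.00 kg/L, which are my assumptions):

```python
M_HI = 127.91  # molar mass of HI, g/mol

def molarity(mass_fraction, density_kg_per_L):
    # grams of HI per liter of solution, divided by the molar mass
    return mass_fraction * density_kg_per_L * 1000.0 / M_HI

def water_per_liter_acid(mass_fraction, density_kg_per_L, rho_water=1.0):
    # liters of water contained in one liter of acid solution
    return (1.0 - mass_fraction) * density_kg_per_L / rho_water

print(molarity(0.052, 1.0342))              # ~0.420, matching the first row
print(water_per_liter_acid(0.052, 1.0342))  # ~0.980
print(water_per_liter_acid(0.570, 1.6933))  # ~0.728, matching the last row
```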
{ "domain": "chemistry.stackexchange", "id": 13861, "tags": "ph" }
Detecting hallways using LaserScan data
Question: Hello everyone, I am trying to detect hallways (two solid parallel lines) using laser data (sensor_msgs/LaserScan). Does anyone know of any ready-to-use ROS packages to get this job done? If there are no ready-to-use packages, my other option is to use Hough transforms to detect straight lines and search for two parallel lines. To do this, I have converted sensor_msgs/LaserScan to sensor_msgs/PointCloud2. How do I access individual points so that I can discard the Z axis values and apply the Hough transform by voting in an accumulator? (I am assuming PointCloud2 is a message type and we can not access individual points.) Do I need to convert it into some other form? If so, how can I do that? Any help is much appreciated. Thank you. Originally posted by San on ROS Answers with karma: 61 on 2016-02-18 Post score: 0 Answer: Thanks for the quick reply. Yes, what you said seems valid. However, I have a question: how do I apply the Hough transform to laser data? For the equation r = x cos theta + y sin theta, I can get the r value (range in the laser message) and theta (theta increments from the laser message). I am still left with two unknowns, x and y. I used Hough transforms on images before as follows: I iterate over every pixel, and for every edge pixel I calculate theta using the gradient, solve for r in r = x cos theta + y sin theta, and vote for (r, theta) in an accumulator. The (r, theta) that gets the maximum votes gives my Hough parameters. So, when I have two unknowns (x, y), how can I solve the equation? Edit: I found the answer. The laser messages are already in polar form, so you have r and theta from them. Using r and theta, find x = r * cos theta and y = r * sin theta. Round off both x and y. Using OpenCV, create an empty (white) image - the image size depends on the accuracy/resolution you need - and for every calculated (x, y) set the pixel at (x, y) to black. By doing so, you will get a white image with black dots (walls/obstacles detected by the laser).
Now, use the built-in Hough transform from OpenCV and find the lines. In order to detect the hallway, select two strong lines whose slopes are equal. Originally posted by San with karma: 61 on 2016-02-18 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Arowana on 2016-03-01: I implemented the Hough transform before, and I am not sure OpenCV is required, as you only need it for visualization. I think you can produce the debugging output more easily using Python with matplotlib: you vote in a numpy 2D array and then show it. Comment by Arowana on 2016-03-01: I checked, and it is true OpenCV has a Hough line detector; however, I think it was designed for images instead of laser scans. Comment by doisyg on 2016-07-21: I would be very curious to see the code of how you implemented this and how efficient it is. Is that possible?
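The polar-to-Cartesian conversion plus accumulator voting that this thread describes can also be done without OpenCV at all. Below is a minimal sketch in plain numpy; the synthetic wall points, the bin counts, and the 2 m hallway width are all made-up stand-ins for a real sensor_msgs/LaserScan, so treat it as an illustration rather than a drop-in node:

```python
import numpy as np

# Synthetic "hallway": two walls at y = +1 m and y = -1 m, seen from the origin.
xs = np.linspace(0.5, 4.0, 60)
walls = np.vstack([np.column_stack([xs, np.full_like(xs, 1.0)]),
                   np.column_stack([xs, np.full_like(xs, -1.0)])])

# A LaserScan effectively stores these points in polar form (range, bearing)...
r = np.hypot(walls[:, 0], walls[:, 1])
bearing = np.arctan2(walls[:, 1], walls[:, 0])

# ...and the first step is converting back to Cartesian, as in the answer.
x, y = r * np.cos(bearing), r * np.sin(bearing)

# Hough voting: every point votes for each line rho = x*cos(a) + y*sin(a).
angles = np.linspace(0.0, np.pi, 180, endpoint=False)   # 1-degree angle bins
rho_max = r.max() + 1.0
rho_edges = np.linspace(-rho_max, rho_max, 201)         # 200 signed rho bins
acc = np.zeros((len(angles), len(rho_edges) - 1), dtype=int)
for xi, yi in zip(x, y):
    rho = xi * np.cos(angles) + yi * np.sin(angles)
    acc[np.arange(len(angles)), np.digitize(rho, rho_edges) - 1] += 1

# A hallway shows up as the two strongest peaks sharing the same angle bin.
a_idx, rho_idx = np.unravel_index(np.argsort(acc, axis=None)[-2:], acc.shape)
centers = (rho_edges[:-1] + rho_edges[1:]) / 2.0
is_hallway = a_idx[0] == a_idx[1]
width = abs(centers[rho_idx[0]] - centers[rho_idx[1]])  # roughly 2 m here
```

With real data you would fill `r` and `bearing` straight from `scan.ranges` and `scan.angle_min + i * scan.angle_increment`, and add some peak-neighbourhood suppression before reading off the two walls.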
{ "domain": "robotics.stackexchange", "id": 23831, "tags": "laser, pcl" }
Neural network architecture for multinomial logit model
Question: I'm working on a neural network model to predict the outcomes of horse races. To date, I've built a model in R using a multinomial logit model (similar to a logit model but with N outcomes, where N = no. of horses in a race). I've also carefully read Andrew Trask's excellent Grokking Deep Learning book and learned the basics of PyTorch. I've now got to the point where I'm able to build a very simple neural network using only starting prices (i.e., odds for horses at the start of a race) as input, and the net correctly works out that it should bet on the favourite. I'm using an architecture of 16 inputs (odds for up to 16 runners, set to zero if there are fewer than 16 runners in a race), 16 outputs (probability of each horse winning the race), 1 hidden layer with $\sqrt{16 \times 16}$ nodes, a ReLU activation function applied to the hidden layer, and a softmax activation function applied to the output layer. I apply argmax to the output layer to choose the winning horse. Based on my earlier analysis in R, betting on the favourite results in a win rate of 35.7%, whereas betting on the horse chosen by my multinomial logit model results in a (lower) win rate of 23.4%, i.e., for now my model underperforms backing the favourite. I've been able to replicate the 35.7% figure using the neural network with the architecture described above (actually I undershoot this figure, but I know how to change the architecture to exactly hit it). Surprisingly, however, when I swap out market price (which wouldn't really be available ahead of a race for betting purposes) and swap in exactly the same features I used in the multinomial logit model, I manage to achieve a win rate of only about 17%, even when I train the model for 500 epochs.
As I'm relatively new to the world of neural networks, I've no idea how to go about tweaking the architecture or hyperparameters of the neural network to improve its performance such that it's at least able to match the performance of the classical statistical model I built earlier. (I'm making the bold assumption that a neural network should be able to do at least as well as a classical statistical model, provided the net is architected correctly.) Any pointers would be greatly appreciated! (FYI, this is a personal project to help me learn deep learning, and not any commercial enterprise.) In the plot below, confused-dust and absurd-universe refer to versions of the model with market prices as sole inputs, whereas comic-wood and true-resonance refer to versions using the same set of features as in the multinomial model. Thank you! Answer: I eventually arrived at the solution below, which can be used to replicate the result of a multinomial logit regression:

import torch
import torch.nn as nn

class ParsLin(nn.Module):
    """ Parsimonious version of Linear """
    def __init__(self, input_layer_nodes, output_layer_nodes):
        super().__init__()
        # Check that input_layer_nodes is an integer multiple of output_layer_nodes
        if input_layer_nodes % output_layer_nodes != 0:
            raise ValueError("input_layer_nodes must be an integer multiple of output_layer_nodes")
        self.input_size = input_layer_nodes
        self.output_size = output_layer_nodes
        self.coefficient_size = input_layer_nodes // output_layer_nodes
        weights = torch.zeros(self.coefficient_size)
        self.weights = nn.Parameter(weights)  # nn.Parameter is a Tensor that's a module parameter.
        # Xavier (Glorot) initialization
        nn.init.xavier_uniform_(self.weights.view(1, self.coefficient_size))

    def forward(self, x):
        # Reshape races tensor to separate features and horses
        n = x.shape[0]
        reshaped_input = x.view(n, self.coefficient_size, self.output_size)
        # Transpose tensor to have each horse's features together
        transposed_input = reshaped_input.transpose(1, 2)
        # Multiply transposed tensor with coefficients tensor (broadcast along last dimension)
        marginal_utilities = transposed_input * self.weights
        # Sum multiplied tensor along last dimension
        utilities = marginal_utilities.sum(dim=-1)
        return utilities

class MLR(nn.Module):
    """ Parsimonious version of LinSoft intended to replicate a multinomial logit
    regression with alternative-specific variables and generic coefficients only """
    def __init__(self, input_layer_nodes, output_layer_nodes, bias=None):
        # bias is an unused argument and will be ignored
        super().__init__()
        self.neural_network = nn.Sequential(
            ParsLin(input_layer_nodes, output_layer_nodes),
            nn.Softmax(dim=1)
        )

    def forward(self, x):
        logits = self.neural_network(x)
        return logits
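To see what those reshapes are doing without the full class above, here is a stripped-down, hypothetical stand-in (made-up sizes: 5 features per horse, 16 horses, random untrained coefficients) showing the shared-coefficient-plus-softmax mechanics that make the network equivalent to a multinomial logit with generic coefficients:

```python
import torch
import torch.nn as nn

class TinyMLR(nn.Module):
    """One shared (generic) coefficient per feature, reused across all
    alternatives, followed by a softmax over the alternatives."""
    def __init__(self, n_features, n_alts):
        super().__init__()
        self.n_features, self.n_alts = n_features, n_alts
        self.beta = nn.Parameter(0.1 * torch.randn(n_features))

    def forward(self, x):                        # x: (batch, n_features * n_alts)
        x = x.view(-1, self.n_features, self.n_alts).transpose(1, 2)
        utilities = (x * self.beta).sum(dim=-1)  # one utility per alternative
        return torch.softmax(utilities, dim=1)   # win probabilities

model = TinyMLR(n_features=5, n_alts=16)
probs = model(torch.randn(8, 80))                # 8 races, 16 horses each
```

Each row of `probs` sums to one, exactly like the choice probabilities of a multinomial logit.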
{ "domain": "datascience.stackexchange", "id": 11689, "tags": "neural-network, logistic-regression" }
Will fuerte have an Ubuntu 10.04 installation?
Question: I am on Ubuntu Lucid (10.04 LTS) and I do not plan to move to Ubuntu 11.XX or Ubuntu 12.XX. I wished to enquire if the upcoming ROS release of fuerte will have an Ubuntu 10.04 installation ? Originally posted by Arkapravo on ROS Answers with karma: 1108 on 2012-01-14 Post score: 1 Answer: Yes, see: http://www.ros.org/reps/rep-0003.html Originally posted by dornhege with karma: 31395 on 2012-01-14 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Arkapravo on 2012-01-14: @dornhege Thank you
{ "domain": "robotics.stackexchange", "id": 7894, "tags": "ros, installation, ubuntu, ubuntu-lucid, ros-fuerte" }
Center of mass of constrained rigid body
Question: Say we have a rod with center of mass at its geometric center. Rigid bodies rotate around the center of mass. If you apply force $\mathbf{F}$ at distance $\mathbf{r}$ from the center of mass you generate torque $\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F}$, resulting in angular acceleration. If you apply force at the center of mass, this results only in linear acceleration. However, what happens if we constrain (hang) the rigid body from one of its ends? Why does applying force at the geometric center now generate torque? Does the center of mass shift? Answer: Does the center of mass shift? Absolutely not! The centre of mass can shift only if there is a change in the distribution of mass of the body. That means we are still applying force on the centre of mass of the body, and the value of $\vec{r}$ in $\vec{\tau}=\vec{r}\times\vec{F}$ is still zero. So we can be sure that the force being applied by us is not producing any torque. Then what could be the explanation of the torque we witness? Think about it again: what is required for producing a torque about a point? A force whose line of action does not pass through that point! This implies that there must be at least one such force in the above situation that we have been ignoring till now. Where could that force be? To find it, try comparing the motion of the body in the two different scenarios that you have presented and think about what is causing the motion to differ in the second case. When we say that the rigid body is constrained, what exactly is causing this constraint? As we can see in the above illustration, in the first case, the topmost point moves with the same acceleration as the rest of the body. But when we hinge this point (constrain the body), its motion stops. From here, we can infer that the hinge applies a force on the point opposing its original motion. (Please note that I have only shown a component of the hinge force. It can have another component along the rod too.)
Since the line of action of this force is not passing through the centre of mass of the rod, it is perfectly capable of producing a torque and causing rotation of the rod (which it indeed does). Why does applying force at the geometric center now generate torque? The torque is generated because your force at the geometric centre is causing a hinge reaction force (which is not at the geometric centre), and this reaction force is generating the torque.
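The argument is easy to check numerically. In the toy sketch below (all numbers made up: a rod of length 2 hanging from a hinge at the origin, a unit horizontal force at the geometric centre, and an illustrative hinge reaction R), the applied force has zero torque about the COM but a nonzero torque about the hinge, while the hinge reaction is what supplies the torque about the COM:

```python
import numpy as np

hinge = np.zeros(3)
com = np.array([0.0, -1.0, 0.0])   # centre of mass of the hanging rod
F = np.array([1.0, 0.0, 0.0])      # horizontal push applied *at* the COM

# About the COM the applied force has zero lever arm -> zero torque:
tau_about_com = np.cross(com - com, F)

# About the hinge the same force does have a lever arm -> nonzero torque:
tau_about_hinge = np.cross(com - hinge, F)

# An (illustrative) hinge reaction opposing the motion: applied at the hinge,
# its line of action misses the COM, so it *does* produce torque about the COM.
R = np.array([-0.5, 0.0, 0.0])
tau_reaction_about_com = np.cross(hinge - com, R)
```

Here `tau_about_com` is the zero vector, while `tau_about_hinge` and `tau_reaction_about_com` both come out along +z, which is exactly the rotation the answer describes.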
{ "domain": "physics.stackexchange", "id": 65464, "tags": "newtonian-mechanics, reference-frames, torque, rigid-body-dynamics" }
What is the definition of the charge conjugation?
Question: I seem to have troubles finding definitions of the charge conjugation operator that are independant of the theory considered. Weinberg defined it as the operator mapping particle types to antiparticles : $$\operatorname C \Psi^{\pm}_{p_1 \sigma_1 n_1;p_2 \sigma_2 n_2; ...} = \xi_{n_1} \xi_{n_2} ... \Psi^{\pm}_{p_1 \sigma_1 n_1^c;p_2 \sigma_2 n_2^c; ...}$$ He does not really seem to specify what he means by "antiparticles" around there, but I'm guessing this is the one-particle state that is conjugate to this one. This assumes that it is possible to decompose everything into one-particle states. Wightman seems to go with $C \gamma^\mu C^{-1} = \bar \gamma^\mu$, which isn't terribly satisfying and also only works for spinor fields. I've seen thrown around that the $C$ conjugation corresponds roughly to the notion of complex conjugation on the wavefunction but never really expanded upon. Is there a generic definition of charge conjugation that does not depend on how the theory is constructed? The CPT theorem in AQFT indeed seems to not have any of those extraneous constructions, but the action of the different symmetries is a bit hidden as $$(\Psi_0, \phi(x_1) ... \phi(x_n) \Psi_0) = (\Psi_0, \phi(-x_n) ... \phi(-x_1) \Psi_0)$$ Is the action of $C$ symmetry $\Psi' = C \Psi$ just a state such that for any operator $A$, $$(\Psi, A \Psi) = (\Psi', A^\dagger \Psi')$$ or something to that effect? From some parts seems like it may just be $C \phi C^{-1} = \phi^*$. Answer: All of your fields naturally lie in some representation of the group of all symmetries (these include gauge symmetries, global gauge transformations and global Lorentz transformations). Charge conjugation is simply passing to the conjugate representation of that group. E.g. complex scalars are 1d irreps of $U(1)$, and the conjugate object is $\phi^{*}$. The same logic also works for spinors, gauge fields, etc.
{ "domain": "physics.stackexchange", "id": 47317, "tags": "quantum-field-theory, charge, charge-conjugation, cpt-symmetry" }
Why can't the density difference between the liquid and solid be an appropriate order parameter for liquid-to-solid transition?
Question: The order parameter $\mathcal{O}$ in the case of a liquid-gas transition is the density difference $\mathcal{O}=\rho_{liq}-\rho_{gas}$. But in the case of a liquid-to-solid transition, the order parameter $\mathcal{O}$ is not taken as the density difference $\mathcal{O}=\rho_{sol}-\rho_{liq}$. What is the reason? Is it just because the density difference is too small to measure? Answer: There is no unique definition of "the" order parameter. However, we would like to use an order parameter that exhibits the full symmetry-breaking pattern of the transition. If the transition corresponds to some symmetry $G$ breaking to a smaller symmetry $H$, then we want the order parameter to transform non-trivially under $G$ and exhibit the residual symmetry $H$. In the liquid-gas transition there are no continuous symmetries, and using the density is fine. In the liquid-solid transition we break translational symmetry down to a crystallographic symmetry. This requires a more complicated order parameter, like the Fourier transform of the density correlator.
{ "domain": "physics.stackexchange", "id": 42213, "tags": "statistical-mechanics, phase-transition" }
PR2 Pick and Place Crashes in simple_pick_and_place_example.py
Question: Hi I'm new to ROS, I've downloaded and installed Electric, and now I'm trying to get the PR2 to pick up and place objects in the Gazebo simulator. I'm interested in writing my own pick-and-place package using Python, so I've been following the tutorial here. I've downloaded all the relevant packages and I execute the command below in that order. roslaunch pr2_gazebo pr2_empty_world.launch roslaunch gazebo_worlds table.launch roslaunch gazebo_worlds coffee_cup.launch export ROBOT=sim roslaunch pr2_tabletop_manipulation_launch pr2_tabletop_manipulation.launch stereo:=true rosrun pr2_pick_and_place_demos simple_pick_and_place_example.py Everything launches fine except for the last command, which gets stuck at the message: [INFO] [WallTime: 1333224769.117507] [1004.719000] ik_utilities: waiting for IK services to be there I'm not sure why it can't find IK services (or what they are, since I can't find it in the tutorial), so what is it and how do I get it so the PR2 can grasp objects? 
EDIT: For some reason it gets past this line now, but after it moves the arms and head it crashes with the output below:

[INFO] [WallTime: 1333234720.247776] [431.624000] ik_utilities: waiting for IK services to be there
[INFO] [WallTime: 1333234720.370728] [431.630000] ik_utilities: services found
[INFO] [WallTime: 1333234720.388736] [431.631000] getting the IK solver info
[INFO] [WallTime: 1333234720.440548] [431.632000] done getting the IK solver info
[INFO] [WallTime: 1333234720.514891] [431.636000] ik_utilities: done init
[INFO] [WallTime: 1333234720.527255] [431.637000] done creating IKUtilities class objects
Traceback (most recent call last):
  File "/opt/ros/electric/stacks/pr2_object_manipulation/applications/pr2_pick_and_place_demos/test/simple_pick_and_place_example.py", line 110, in <module>
    sppe = SimplePickAndPlaceExample()
  File "/opt/ros/electric/stacks/pr2_object_manipulation/applications/pr2_pick_and_place_demos/test/simple_pick_and_place_example.py", line 54, in __init__
    self.papm = PickAndPlaceManager()
  File "/opt/ros/electric/stacks/pr2_object_manipulation/applications/pr2_pick_and_place_demos/src/pr2_pick_and_place_demos/pick_and_place_manager.py", line 172, in __init__
    self.cms[0] = controller_manager.ControllerManager('r', self.tf_listener, use_slip_controller, use_slip_detection)
  File "/opt/ros/electric/stacks/pr2_object_manipulation/manipulation/pr2_gripper_reactive_approach/src/pr2_gripper_reactive_approach/controller_manager.py", line 243, in __init__
    self.cartesian_desired_pose = self.get_current_wrist_pose_stamped('/base_link')
  File "/opt/ros/electric/stacks/pr2_object_manipulation/manipulation/pr2_gripper_reactive_approach/src/pr2_gripper_reactive_approach/controller_manager.py", line 676, in get_current_wrist_pose_stamped
    (current_trans, current_rot) = self.return_cartesian_pose(frame)
  File "/opt/ros/electric/stacks/pr2_object_manipulation/manipulation/pr2_gripper_reactive_approach/src/pr2_gripper_reactive_approach/controller_manager.py", line 1429, in return_cartesian_pose
    (trans, rot) = self.tf_listener.lookupTransform(frame, self.whicharm+'_wrist_roll_link', rospy.Time(0))
tf.ExtrapolationException: Lookup would require extrapolation into the past. Requested time 431.621000000 but the earliest data is at time 431.960000000, when looking up transform from frame [/r_wrist_roll_link] to frame [/base_link]
Exception in thread Thread-32 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
  File "/usr/lib/python2.6/threading.py", line 484, in run
  File "/opt/ros/electric/stacks/pr2_object_manipulation/manipulation/pr2_gripper_reactive_approach/src/pr2_gripper_reactive_approach/joint_states_listener.py", line 80, in joint_states_listener
  File "/opt/ros/electric/stacks/ros_comm/clients/rospy/src/rospy/client.py", line 101, in spin
<type 'exceptions.AttributeError'>: 'NoneType' object has no attribute 'core'
Unhandled exception in thread started by
Error in sys.excepthook:
Original exception was:
Exception in thread /r_arm_controller/joint_trajectory_action/status (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
  File "/usr/lib/python2.6/threading.py", line 484, in run
  File "/opt/ros/electric/stacks/ros_comm/clients/rospy/src/rospy/impl/tcpros_pubsub.py", line 169, in robust_connect_subscriber
  File "/opt/ros/electric/stacks/ros_comm/clients/rospy/src/rospy/impl/tcpros_base.py", line 720, in receive_loop
<type 'exceptions.TypeError'>: 'NoneType' object is not callable

On a side note: one other error I noticed was that the manipulation pipeline could not connect to the database because I don't have it installed on my computer (I'm on a public machine and
don't have permissions to install the database locally). Is the database necessary? Because the manipulation pipeline seems to run without it. If it is necessary, is there a workaround? Originally posted by jker on ROS Answers with karma: 115 on 2012-03-31 Post score: 1 Answer: Thanks for bringing this issue to my attention--it's been awhile since I updated simple_pick_and_place_example.py . I've updated the tutorial on the wiki here: http://www.ros.org/wiki/pr2_pick_and_place_demos/Tutorials/A%20Simple%20Pick%20And%20Place%20Example%20Using%20The%20Pick%20And%20Place%20Manager There was also a minor bug in pr2_pick_and_place_manager with updating the place rectangle, which has now been fixed. If you check out our Electric branch of pr2_object_manipulation: svn co https://code.ros.org/svn/wg-ros-pkg/stacks/pr2_object_manipulation/branches/0.5-branch pr2_object_manipulation and then add that directory to your ROS_PACKAGE_PATH, you'll get fixed versions of both simple_pick_and_place_example.py and pr2_pick_and_place_manager.py . (You are indeed right about removing the take_static_collision_map = 1 argument, but you also want update_place_rectangle = 1.) (Those fixes will go out with the next release into debians.) As for the tf error... erm. Wow. You should really not have to put a half-second sleep in between those two things. I can't replicate that error on my computer; could you please try, instead of adding the 0.5 sec sleep, changing the line just under the lookupTransform that says except tf.Exception: and switch it to except (tf.Exception, tf.ExtrapolationException): and let me know if that helps? Also also, you may want to try launching your simulation with roslaunch manipulation_worlds pr2_table_object.launch instead of separately doing pr2 + table + coffee cup. (The coke can in that world has our grasping hack on it, which makes it easier for the robot to not drop it once it's picked up.) 
Originally posted by hsiao with karma: 741 on 2012-04-02 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 8814, "tags": "ros, ik, tabletop, ros-electric, services" }
How is my guitar cable picking up clear radio-signals?
Question: I've recently noticed radio signals coming through my guitar amplifier. I'm on the top floor of a building in a densely populated city, with a clear visual path to many other buildings. My guitar cable is plugged into the amp when I hear the radio station, and the guitar is not attached to the cable. I can no longer hear the signals when I unplug the cable, so I know the cable is almost certainly the culprit. The core of my question is: I know that an electromagnetic wave will induce oscillations in an amplifier circuit, but I would expect that to only produce a hum. How am I able to hear clear signals via the cable? I was horrified to hear that infernal Kars4Kids commercial playing through my beloved Marshall amp. I'm replacing this cable immediately. Answer: An electromagnetic wave will induce oscillations in an amplifier circuit, and if you vary the frequency of the wave, you will get more than just a steady hum. It's also indemnified by law. 8.2 FCC: "this device may not cause harmful interference, and this device must accept any interference received, including interference that may cause undesired operation." Interestingly, my Marshall amp also did this, but neither the Fender nor the Hartke amps ever did. The Peavey powered mixer was also subject to this, but to a much lesser extent: you could only hear it when cranked up to 11, while the Marshall needed to only be at about 2. I wonder if MOSFETs have anything to do with it. AFAIK, the Marshall is the only one that had them. Also, on the Peavey, it only happened if you touched a loose wire to the metal case. I would suspect that using the ground lift switch might help. But IME, that's usually just the thing keeping it from being worse. There has to be some pretty dirty incoming power before lifting the ground actually helps.
{ "domain": "physics.stackexchange", "id": 58676, "tags": "electromagnetism, radio, antennas" }
Are Point particles real or just used for simplicity?
Question: Wikipedia says A point particle is an idealization of particles heavily used in physics. Also In philosophy of science, idealization is the process by which scientific models assume facts about the phenomenon being modeled that are strictly false but make models easier to understand or solve. However, many pop-science books and videos claim particles are just points. Also, if particles are not really points, will we be able to discover their size eventually? Does it mean they have a specific shape? Answer: A particle is a point from our perspective in the same way that the Earth is a point from the perspective of galaxies. The Earth is not really a point, because it obviously does extend into space with non-zero size. But it is so tiny in comparison to anything relevant that it can be considered a point without losing relevant detail in analyses made at galactic scales. In other words, the Earth is modelled as a point. This is an idealisation that simplifies the calculations by ignoring unnecessary detail and information. A particle such as the electron, the photon, the phonon, the quantum bosons etc. at the quantum scale is - depending on the particle - typically considered a probability cloud, a concentration of energy, a vibration chunk or the like. These also do extend into space with non-zero size. But they are so, so tiny in comparison to anything else which is not also close to the same particle scale. In theory there does not exist any object in our world which is not 3D. Everything has some size, however small. No 0D point particles exist - only point-like particles, because we choose to consider them as if they were just points.
{ "domain": "physics.stackexchange", "id": 81961, "tags": "particle-physics, point-particles" }
Change Odom topic name in Navigation stack
Question: Hello, I have set up simple navigation following the default configuration from the navigation tutorial. Right now, I am publishing odometry on the odom topic, and it works fine; I can see move_base subscribing to the odom topic. But I want to implement some filters, so I am publishing a filtered odometry message on a second topic named odom_filtered. What I want to know is how I can tell the navigation stack to use the odom_filtered topic rather than the odom one. Thanks for your time. Originally posted by nalistic on ROS Answers with karma: 15 on 2020-08-25 Post score: 0 Original comments Comment by Humpelstilzchen on 2020-08-25: Need more details, what nodes of the navigation stack are you using? Most nodes are using the tf odom->base_link, not the nav_msgs/Odometry topic. Make sure you are publishing this transform only in one node. But if you really need to change a topic name for a node, try remap Comment by nalistic on 2020-08-25: Thanks for your response, I am using the move_base node, and that node was expecting the odometry to be published on the odom topic, so I could simply change the topic odom to odom_raw and use the remap property to change the odometry/filtered to odom. Thanks a lot for the response; change the comment to an answer so I can mark it as correct. Answer: Using remap as suggested by @humpelstilzchen did the trick. Originally posted by nalistic with karma: 15 on 2020-10-14 This answer was ACCEPTED on the original site Post score: 0
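For reference, the remap that did the trick can be written directly into the launch file; a minimal sketch (the package, node, and topic names below are the usual defaults and may differ in your setup):

```xml
<launch>
  <!-- move_base subscribes to "odom"; point it at the filtered topic instead -->
  <node pkg="move_base" type="move_base" name="move_base" output="screen">
    <remap from="odom" to="odom_filtered"/>
  </node>
</launch>
```

The same remapping can also be passed on the command line, e.g. `rosrun move_base move_base odom:=odom_filtered`.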
{ "domain": "robotics.stackexchange", "id": 35457, "tags": "ros, navigation, ros-kinetic, base-odometry" }
How does one calculate the chance that an asteroid hits the Earth and what does this chance mean?
Question: I read in this article (in "Independent"): An asteroid that is projected to come close to Earth later this year has a 0.41 percent chance of hitting the planet, according to Nasa data. The Center for Near-Earth Object Studies (CNEOS), from Nasa's Jet Propulsion Laboratory, said the celestial object, known as 2018VP1, is predicted to pass near Earth one day before the US presidential election on 2 November. As usual, I started wondering. How did the people involved at NASA calculate this 0.41% chance of an impact on the Earth? What does it even mean to say this? I had the impression that the trajectory of an object thought to come close to Earth within a few months could be calculated exactly. So how did they calculate this uncertainty? Which variables are involved, and why do they possess uncertainty? Answer: An object discovered recently (2018, I would assume from the name) will have some uncertainty in its position and 3D velocity. That uncertainty will translate into a bigger uncertainty in its position at some time in the future. Take a simple example of an object at the origin, measured to be moving along the x-axis. To keep things simple, assume there are no forces acting upon it, but there is an uncertainty in its three velocity components, i.e., it has velocity components $v_x \pm \Delta v_x$, $0 \pm \Delta v_y$ and $0\pm \Delta v_z$. Now you have a disc at position $x_0$ along the x-axis, oriented so the flat part is in the $yz$ plane and with some radius $r$, and you want to work out whether your object will hit it. Obviously if the velocity has its centrally estimated value it will hit it bang on. But after a time $t= x_0/v_x$, if the velocity components in the $y$ and $z$ directions did have a non-zero value, as suggested by their error bars, then the uncertainties in the y and z coordinates at time $t$ would be $\Delta v_y t$ and $\Delta v_z t$.
The probability of hitting the disc would be 1 minus the integral of the probability distribution of the $y,z$ position from a radius $r$ out to infinity. You can imagine this like a cone of possible positions that grows with time, and you are calculating what fraction of the cone's cross-sectional area is intercepted by the disc. The task is also complicated for small asteroids because they can be accelerated by non-gravitational effects. These include outgassing and mass loss (more important for comets), the Yarkovsky effect (anisotropic re-radiation of solar flux) and Poynting-Robertson drag (e.g. Broz et al. 2005), leading to rather unpredictable, gradual changes in the orbit. With this particular object I think it is just that the orbital parameters are rather imprecise. The closest approach is projected to be about 400,000 km, but with a 3-sigma uncertainty of about 4 million km (and the time of arrival is uncertain by a few days). Only a small fraction of that probability distribution ends up with an impact. Since the object is also about 1 m in size, it is unlikely to do much damage and would break up in the atmosphere. Because it is so faint, I doubt the parameters will be improved until a few weeks before the fly-by.
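That "fraction of the cone intercepted by the disc" picture is straightforward to simulate. The sketch below uses entirely made-up uncertainty numbers (only the two transverse velocity components are sampled, taken as equal Gaussians so that the Monte Carlo estimate can be checked against the closed-form Rayleigh result):

```python
import numpy as np

rng = np.random.default_rng(42)

x0 = 4.0e8       # m: distance still to travel (~400,000 km, as in the flyby)
vx = 1.0e4       # m/s: nominal closing speed (~10 km/s, illustrative)
r_disc = 6.4e6   # m: radius of the target disc (roughly Earth's radius)
sigma_v = 300.0  # m/s: made-up 1-sigma error on each transverse velocity

t = x0 / vx                            # time of flight at the nominal speed
n = 200_000
dy = rng.normal(0.0, sigma_v, n) * t   # lateral miss distances at time t
dz = rng.normal(0.0, sigma_v, n) * t
p_mc = np.mean(dy**2 + dz**2 < r_disc**2)

# With equal Gaussian transverse errors the miss distance is Rayleigh
# distributed, so the hit probability has a closed form to compare against:
s = sigma_v * t
p_exact = 1.0 - np.exp(-r_disc**2 / (2.0 * s**2))
```

For these made-up numbers both estimates come out near 0.13; shrinking `sigma_v` (better orbit determination) drives the probability towards 0 or 1, which is why impact probabilities get revised as more observations come in.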
{ "domain": "physics.stackexchange", "id": 70787, "tags": "newtonian-mechanics, orbital-motion, probability, estimation, asteroids" }
Why isn't all of the dust in a nebula used in the formation of a star?
Question: I was watching a show on discovery and according to it, in a nebula the dust and gases slowly come together and as the gravity increases and the pressure rises in the core the gases fuse together and a star is born and the rest of the left over dust and gases come together and form planets and moons. So my question is that isn't all of the dust and gases used in the formation of the star? Why is some of it left and used to form planets, or why it is not sucked by the newly born star due to it's gravity. Why does the left over turn into planets and not other stars? Answer: In my opinion that "collapsing nebula" image is somewhat misleading, because the trajectory of dust/gas particles would be (ignoring magnetic field) an orbit and not a free fall collapse. However collisions make particle change orbit, and the particles whose new orbits comes closer to the future sun experience more collisions (because of the higher density there) and eventually end up within the sun. But after the collision, if one of the particles ends up going towards the sun, the other ends up going away ; this ensures that some of the mass never reaches the sun. This process also allows other "higher density" regions to gather mass, and those end up being planets (or indeed other stars if the available mass is high enough). However when a star forms, the wind it produces blows most of the leftover out of the solar system. Only the leftover which has reached a high enough size & density stays in orbit around the sun.
{ "domain": "physics.stackexchange", "id": 17816, "tags": "gravity, astrophysics, planets, stars, nebulae" }
Lorentz force from potential- extra term?
Question: I'm trying to verify the E.M potential energy $U= \int{A_\mu J^\mu} = q(\phi - A_j v^j )$ by using the connection: $$ F= - \frac{\partial U}{\partial r} + \frac{d}{dt} \frac{\partial U}{\partial v} $$ with $F=q(E+v \times B)$. I seem to have some extra term. We work in units where $q=1$. The L.H.S: $$ F_i=E_i + (v \times B)_i = E_i + \epsilon_{ijk} v_j B_k = \\= - \frac{\partial \phi}{\partial r^i}-\dot{A}_i + \epsilon_{kij} \cdot v_j \cdot \epsilon_{klm}\partial_lA_m = \\ = - \frac{\partial \phi}{\partial r^i}-\dot{A}_i + v_j \partial_lA_m \cdot \left( \delta^l_i \delta^m_j - \delta^m_i \delta^l_j \right) = \\ = - \frac{\partial \phi}{\partial r^i}-\dot{A}_i + v_j \partial_i A_j - v_j \partial_jA_i . $$ Now, the last term is: $$ v_j \partial_jA_i= \frac{dr^j}{dt} \frac{ \partial A^i}{\partial r^j }= \frac{dA_i}{dt} = \dot{A_i} $$ So we get the L.H.S: $$ - \frac{\partial \phi}{\partial r^i}-\dot{A}_i + v_j \partial_i A_j - \dot{A_i} $$ The R.H.S (first term): $$ - \frac{\partial U}{\partial r^i} = - \frac{\partial (\phi-A_j v_j )}{\partial r^i} \\ = - \frac{\partial \phi}{\partial r^i} + v_j \partial_i A_j $$ The R.H.S (second term): $$ \frac{d}{dt} \frac{\partial U}{\partial v^i} = \frac{d}{dt} \frac{\partial }{\partial v^i} \left( -A_j v_j \right) = -\frac{d}{dt} \left( A_i \right) = -\dot{A}_i $$ So the R.H.S gives: $$ - \frac{\partial \phi}{\partial r^i} + v_j \partial_i A_j -\dot{A}_i $$ and there is a $-\dot{A}_i$ term difference. What am I missing? Answer: You're messing up partial vs total derivatives with the $\dot{A}_i$ term. The electric field is \begin{align} \mathbf{E}&=-\nabla\phi-\frac{\partial \mathbf{A}}{\partial t}. \end{align} Recall that these fields depend on $(t,x,y,z)$, on time and position; also when calculating $\frac{d}{dt}$, you need to keep in mind that you're evaluating along a particle's trajectory so you have $x,y,z$ as functions of time as well. 
So, in components, \begin{align} E_i=-\frac{\partial \phi}{\partial r^i}-\frac{\partial A_i}{\partial t}. \end{align} So, with this, the Lorentz force we expect is given in components by \begin{align} F_i=q\left[\left(-\frac{\partial \phi}{\partial r^i}-\frac{\partial A_i}{\partial t}\right) + \left(\frac{\partial A_j}{\partial r^i}v^j-\frac{\partial A_i}{\partial r^j}v^j\right)\right]. \end{align} You pretty much have this expression written down as your LHS, I just wanted to point out that instead of $\dot{A}_i$, you should have written $\frac{\partial A_i}{\partial t}$. Now, we go to the RHS . You actually have all the right expressions, but you're not using the chain rule correctly. We have \begin{align} -\frac{\partial U}{\partial r^i}+\frac{d}{dt}\left(\frac{\partial U}{\partial v^i}\right)&=-q\left(\frac{\partial \phi}{\partial r^i}-\frac{\partial A_j}{\partial r^i}v^j\right)+ (-q\dot{A_i})\\ &=-q\left(\frac{\partial \phi}{\partial r^i}-\frac{\partial A_j}{\partial r^i}v^j\right) -q\left(\frac{\partial A_i}{\partial t}+\frac{\partial A_i}{\partial r^j}v^j\right)\\ &=q\left[\left(-\frac{\partial \phi}{\partial r^i}-\frac{\partial A_i}{\partial t}\right) + \left(\frac{\partial A_j}{\partial r^i}v^j-\frac{\partial A_i}{\partial r^j}v^j\right)\right]\\ &=F_i, \end{align} where it is the second equal sign that the chain rule is used. So, your main error was the mis-definition of the electric field and in the following statement: Now, the last term is: \begin{align} v_j \partial_jA_i= \frac{dr^j}{dt} \frac{ \partial A^i}{\partial r^j }= \frac{dA_i}{dt} = \dot{A_i} \end{align} You're missing the $\frac{\partial A_i}{\partial t}$ term.
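The bookkeeping in this answer (the field $\mathbf{E}$ built from partial derivatives, and the total derivative $\frac{d}{dt} = \partial_t + \mathbf{v}\cdot\nabla$ taken along the trajectory) can be verified symbolically. A sketch with arbitrarily chosen smooth fields $\phi$ and $\mathbf{A}$ (any choice works; these are picked only to exercise every term):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
vx, vy, vz = sp.symbols('v_x v_y v_z')   # velocity components, independent symbols
q = sp.Symbol('q')
r = [x, y, z]
v = sp.Matrix([vx, vy, vz])

# Arbitrary smooth potentials, chosen only to exercise every term:
phi = x**2 * sp.sin(t) + y * z
A = sp.Matrix([y * t, z * x, x * y * sp.cos(t)])

# E_i = -d(phi)/dr_i - dA_i/dt (partial derivatives), B = curl A:
E = sp.Matrix([-sp.diff(phi, ri) - sp.diff(A[i], t) for i, ri in enumerate(r)])
B = sp.Matrix([sp.diff(A[2], y) - sp.diff(A[1], z),
               sp.diff(A[0], z) - sp.diff(A[2], x),
               sp.diff(A[1], x) - sp.diff(A[0], y)])
F_lorentz = q * (E + v.cross(B))

# Generalized potential U and the Euler-Lagrange force, with the *total*
# time derivative d/dt = partial_t + v.grad taken along the trajectory:
U = q * (phi - A.dot(v))

def total_dt(f):
    return sp.diff(f, t) + sum(v[i] * sp.diff(f, r[i]) for i in range(3))

F_lagr = sp.Matrix([-sp.diff(U, r[i]) + total_dt(sp.diff(U, v[i]))
                    for i in range(3)])
```

Here `sp.simplify(F_lorentz - F_lagr)` comes out as the zero vector, confirming that the missing piece in the question is exactly the $\frac{\partial A_i}{\partial t}$ term hiding inside the total derivative.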
{ "domain": "physics.stackexchange", "id": 89664, "tags": "homework-and-exercises, electromagnetism, forces, potential, potential-energy" }
Data type detection for a CSV file parser
Question: I am building a CSV file parser, and in order to get the appropriate Object to represent different data types found on the parsed files, I wrote the following function: public static Object stringToDataType(String valueAsString) throws ParseException{ // detections ordered by probability of occurrence in Buffer_Bank. String decimalPattern = detectDecimal(valueAsString); if(decimalPattern != null){ return stringToBigDecimal(valueAsString, decimalPattern); }else{ String integerPattern = detectInteger(valueAsString); if(integerPattern != null){ return stringToBigInteger(valueAsString); }else{ String datePattern = detectDate(valueAsString); if(datePattern != null){ return stringToDate(valueAsString, datePattern); }else{ // value is a String... nothing else to do! return valueAsString; } } } } In order to call the appropriate 'casting' methods (stringToDate, stringToBigInteger, stringToBigDecimal), I need to use the specific pattern string that gets returned when detecting if a particular value is of that type. Hence, the pattern value gets used for routing to the appropriate function, and also as an argument in that function call. To make this method logic 'flatter', I could avoid passing the pattern as an argument, and calling the pattern detection method (detectDecimal, detectInteger, detectDate) again from within the 'casting' method, but that would lead to parsing the same value two times, which seems wasteful at best. So, is this the best way this method can be written without getting to call the detection methods twice? EDIT Here are the detectDate and stringToDate methods, as requested (the methods for BigInteger and BigDecimal are very similar). 
Please let me know if I need to post anything else: private static String detectDate(String dateString) { for (String regexp: DATE_FORMAT_REGEXPS.keySet()) { if(dateString.toLowerCase().matches(regexp)) { return DATE_FORMAT_REGEXPS.get(regexp); } } return null; // value is not a date } public static Date stringToDate(String valueAsString, String format) throws ParseException{ SimpleDateFormat dateFormat = new SimpleDateFormat(format); Date date = dateFormat.parse(valueAsString); return date; } Answer: Yes, you can make that code a lot more readable by noticing a simple fact. When you have the code String decimalPattern = detectDecimal(valueAsString); if(decimalPattern != null){ return stringToBigDecimal(valueAsString, decimalPattern); }else{ // ... } The else part is actually unnecessary. If the test is true then we return with a value. This means that you exit early from the method, without going further down. So you can rewrite it like this: String decimalPattern = detectDecimal(valueAsString); if(decimalPattern != null){ return stringToBigDecimal(valueAsString, decimalPattern); } // ... As you found out, this has the advantage that the subsequent code is shifted by one indentation to the left, making it easier to read. So only with this change, the code now looks like: public static Object stringToDataType(String valueAsString) throws ParseException { // detections ordered by probability of occurrence in Buffer_Bank. String decimalPattern = detectDecimal(valueAsString); if (decimalPattern != null) { return stringToBigDecimal(valueAsString, decimalPattern); } String integerPattern = detectInteger(valueAsString); if (integerPattern != null) { return stringToBigInteger(valueAsString); } String datePattern = detectDate(valueAsString); if (datePattern != null) { return stringToDate(valueAsString, datePattern); } return valueAsString; } You posted an example of your detect* and stringTo* methods. In fact, you don't need two methods. 
You only need a single one that is specific to the current type you want to test. What you want is: Try to interpret the value as a decimal and return it; If not, try to interpret the value as an integer and return it; If not, try to interpret the value as a date and return it. Said like this, it is possible to realize that you only need 3 methods: tryDate, tryInteger and tryDecimal. Each of those will do what is necessary to try to interpret the String and return null if they can't. Basically, the code you currently have inside stringTo* should be merged into the part where you found the correct pattern in detect*. As an example: private static Object tryDate(String dateString) { for (Map.Entry<String, String> entry : DATE_FORMAT_REGEXPS.entrySet()) { if (dateString.toLowerCase().matches(entry.getKey())) { SimpleDateFormat dateFormat = new SimpleDateFormat(entry.getValue()); try { return dateFormat.parse(dateString); } catch (ParseException e) { // what to do in this case? Possibilities: throw an exception or return null } } } return null; // value is not a date } Note that I changed your code so that it loops directly over the entries instead of looping just with the keys. You should also decide what to do in case of a ParseException. You could rethrow a custom runtime exception, wrapping the original, or return null to signal that the attempt failed. The big issue is that there is some duplicated logic in this: each time we are detecting a pattern and, if it's not null, applying it to the value. If you were to add more detection algorithms, this method can very quickly become clumsy, copy/pasted code. We need to refactor that and make it generic. Using Java 8, you can define that as a list of Function. private static final List<Function<String, Object>> FUNCTIONS = Arrays.asList(s -> tryDecimal(s), s -> tryInteger(s), s -> tryDate(s)); The goal of those functions is to convert a String into an Object. Here, we define 3 which are the 3 methods mentioned above.
Then we can have public static Object stringToDataType(String valueAsString) { return FUNCTIONS.stream() .map(f -> f.apply(valueAsString)) .filter(Objects::nonNull) .findFirst() .orElse(valueAsString); } This creates a Stream pipeline over the functions. Each of them is applied to the given String, null elements are filtered out and only the first one is kept. Effectively, this will select the first non-null element, so it will select the first mapping that matched. If all are null then we return a default value with orElse which is the given value as String. If somehow you want to keep your current approach with two different methods (one for detecting the pattern and one for parsing), then what you want comes down to creating a custom class that will hold: A function that returns the pattern detected given the value as String; A function that returns the object given the value as String and a pattern. Simply, it could be: public final class Conversion { private final UnaryOperator<String> patternDetector; private final BiFunction<String, String, Object> toObjectFunction; public Conversion(UnaryOperator<String> patternDetector, BiFunction<String, String, Object> toObjectFunction) { this.patternDetector = patternDetector; this.toObjectFunction = toObjectFunction; } public UnaryOperator<String> getPatternDetector() { return patternDetector; } public BiFunction<String, String, Object> getToObjectFunction() { return toObjectFunction; } } Then we can make a static utility that applies a given Conversion to a given value: private static Object stringToDataType(String valueAsString, Conversion conversion) { String pattern = conversion.getPatternDetector().apply(valueAsString); if (pattern != null) { return conversion.getToObjectFunction().apply(valueAsString, pattern); } return null; } and finally, we can refactor the main method: private static final List<Conversion> CONVERSIONS = Arrays.asList( new Conversion(v -> detectDecimal(v), (v, p) -> stringToBigDecimal(v, p)),
new Conversion(v -> detectInteger(v), (v, p) -> stringToBigInteger(v)), new Conversion(v -> detectDate(v), (v, p) -> stringToDate(v, p)) ); public static Object stringToDataType(String valueAsString) throws ParseException { return CONVERSIONS.stream() .map(c -> stringToDataType(valueAsString, c)) .filter(Objects::nonNull) .findFirst() .orElse(valueAsString); } Now, if we want to add more conversions in the future, we just need to update this list, which, arguably, should be made a constant instead of being initialized each time. The Stream pipeline will return the first non-null element and will short-circuit once it has been found. If all are null, then we return the default value, which is the given value as a string.
{ "domain": "codereview.stackexchange", "id": 19767, "tags": "java, performance, beginner, parsing" }
Using vgg16 or inception with weights equal to None
Question: When using pre-trained models like vgg16 or inception, it seems that one of the benefits of using a pre-trained model is to save training time. Is there a reason to use these models without loading the weights (i.e. with random weights)? Answer: The advantage of using a pre-trained model without loading the weights (which means you are only using the model architecture, not a pre-trained version) is that you can easily reuse an existing model architecture and apply it to your problem. This can save you quite some time since you don't have to build the model architecture yourself in tensorflow/keras/pytorch and can go straight to applying the model to your data.
{ "domain": "datascience.stackexchange", "id": 9542, "tags": "deep-learning, transfer-learning, inception, vgg16, inceptionresnetv2" }
position of useful filtered data inside DFT based filtered output
Question: I'm trying to filter a 400-sample signal with various bandstop FIR filters (constant group delay), one at a time to see which one gives me the desired result. Every filter was built using the Kaiser Window method in Matlab. Each filter has a different number of coefficients. The smallest one has 782 and the biggest one 2460. I'm performing the filtering via fast convolution using the following Matlab code: temp = ifft( fft(filterCoeffs,ffsSize)' .* fft( signalSamples, ffsSize) ); FilteredSignal = temp(offset+1:offset+length(signalSamples)); I'm trying to identify where inside the FilteredSignal vector of ffsSize the 400 filtered samples are. So far, the offset I have to impose to recover the original 400 filtered samples depends on the value of ffsSize and the number of filter coefficients. Even though I already know the right offset (where the useful filtered signal starts), I want to know the theoretical explanation of why it is there. I also want to know why the group delay is 1/2 of the number of filter coefficients. By the way, if I use the Matlab function: filter(filterCoeffs,1,signalSamples) it returns 400 samples but not the right ones. Any idea why? It's like it doesn't identify the right offset. I have 2 cases: No time aliasing (circular convolution equivalent to linear convolution) ffsSize = length(filterCoeffs)+length(signalSamples) - 1; aliasingSamples = 0 offset = groupDelay(Filter)+ length(signalSamples); Time aliasing (circular convolution not equivalent to linear convolution) ffsSize = length(filterCoeffs); aliasingSamples = ffsSize - length(signalSamples); if(groupDelay(Filter) < aliasingSamples) %this only happens when I use the 782 coefficient filter offset = groupDelay(Filter); else offset = aliasingSamples; end Answer: Let's put forward the intuition behind the concept of group delay before discussing further how to find the delay of FIR filters.
Consider an input signal $x[n]$ of length $L_x$ which is nonzero between $n=0$ and $n=L_x-1$, and let a simplistic filter impulse response be $h[n] = \delta[n-d]$. The output is immediately shown to be $y[n] = x[n-d]$, which is the input shifted by $d$ samples to the right (for $d$ positive). Let's look at the frequency response of the filter: $H(e^{j\omega}) = e^{-j \omega d}$, which is nothing but a phase term $\phi(\omega) = - \omega d$. Consider a slightly more complex filter $h[n] = 0.1 \delta[n] + 0.8\delta[n-1] + 0.1 \delta[n-2]$. Its frequency response can be written as $H(e^{j\omega}) = e^{-j 1 \omega} B(\omega)$ where $B(\omega) = 0.2 \cos(\omega) + 0.8$ is a real valued function and has no phase term. The output of this filter can be written in terms of delayed inputs due to each distinct impulse such as: $$y[n] = 0.1 x[n] + 0.8 x[n-1] + 0.1 x[n-2]$$ It can be proposed that $y[n]$ is dominated by the middle term $x[n-1]$, with a shift of $1$ sample. (A more detailed analysis would say that low frequencies of $x[n]$ are dominated by the middle term while high frequencies tend to cancel, etc., which makes it a lowpass filter eventually.) In this latter example the phase function of the filter was $\phi(\omega) = -1\omega$, whereas the dominating component $x[n-1]$ at the output also had a delay of one sample to the right. This observation can be used as an intuitive basis for the claim that the delay (or the group delay) associated with an FIR filter is given by the negative of the derivative of its phase response wrt frequency $\omega$; i.e., $$\boxed{ \tau = - \frac{d \phi(\omega)}{d\omega} }$$ Let's restrict the discussion to LTI, FIR symmetric filters which have linear phase responses.
An LTI filter is described by its impulse response $h[n]$, which enables us to compute its response to an arbitrary input $x[n]$ via the discrete convolution sum: $$ y[n] = x[n] \star h[n] = \sum_{k=-\infty}^{\infty} x[k] h[n-k] $$ where the dummy index $k$ would actually run over the valid range of nonzero multiplicands. The filter also has a frequency response which is the DTFT of its impulse response: $$ H(e^{j\omega})= \mathcal{F} \{ h[n] \} =\sum_{n=-\infty}^{\infty} h[n]e^{-j\omega n} = |H(e^{j\omega})| e^{j \phi_h(\omega)}$$ Now we define the group delay (in samples) of the filter as: $$\tau = - \frac{d\phi_h(\omega)}{d\omega}$$ In general the group delay will be a function of frequency $\omega$; however, linear phase FIR filters have the property that their phase function is a linear function of frequency, such as $$\phi_h(\omega) = K - d \omega$$ from which we deduce that the group delay will be $\tau = d$, which is independent of frequency; i.e. all components of an applied signal $x[n]$ will shift the same amount when filtered by a linear phase filter. Given such a linear phase filter whose delay is $\tau = d$, assume that you apply a very narrowband signal $x[n]$ such as: $$ x[n] = \cos(\omega_0 n) \frac{ \sin(\omega_1 n)}{\pi n} $$ where $\omega_1 \ll 1$ and $0 < \omega_0 < \pi$. Then the associated output for such a signal will be of the form $$ y[n]=A(\omega_0) x[n-d] = A(\omega_0) \cos(\omega_0 (n-d)) \frac{ \sin(\omega_1 (n-d))}{\pi (n-d)}$$ Finally, for example, it can be shown that a causal and symmetric FIR filter of length $M = 2L + 1$ from $n=0$ to $n=2L$ such that $h[n] = h[2L-n]$ will have a linear-phase term of the form $\phi_h(\omega) = K - \omega L$, where $K$ can be zero. Hence its group delay will be $L$ samples.
Lets show it here: $$ \begin{align} H(e^{j\omega}) &= \sum_{n=0}^{2L} h[n] e^{-j \omega n} \\ &= \sum_{n=0}^{L-1} h[n] e^{-j \omega n} + \sum_{n=L+1}^{2L} h[n] e^{-j \omega n} + h[L] e^{-j \omega L} \\ &= \sum_{n=0}^{L-1} h[n] e^{-j \omega n} + \sum_{n=L+1}^{2L} h[2L-n] e^{-j \omega n} + h[L] e^{-j \omega L} \\ \end{align} $$ Substitute $2L-n = m$ in the second sum: $$ \begin{align} H(e^{j\omega}) &= \sum_{n=0}^{L-1} h[n] e^{-j \omega n} + \sum_{m=L-1}^{0} h[m] e^{-j \omega (2L-m)} + h[L] e^{-j \omega L} \\ &= \sum_{n=0}^{L-1} h[n] e^{-j \omega n} + e^{-j\omega 2L} \sum_{m=L-1}^{0} h[m] e^{j \omega m} + h[L] e^{-j \omega L} \\ &= \sum_{n=0}^{L-1} h[n] e^{-j \omega n} + e^{-j\omega 2L} \sum_{m=0}^{L-1} h[m] e^{j \omega m} + h[L] e^{-j \omega L} \\ \end{align} $$ Recognize the similarity of the first sum and second sum such that denoting the first sum as $A(\omega)$ then the second sum is $A(\omega)^*$ (assuming real $h[n]$), Hence $$ H(e^{j\omega}) = A(\omega) + e^{-j\omega 2L} A(\omega)^* + h[L] e^{-j \omega L} \\ $$ Taking all of them into the $e^{-j\omega L}$ parenthesis yields: $$ H(e^{j\omega}) = e^{-j \omega L} \left( A(\omega)e^{j \omega L} + A(\omega)^* e^{-j \omega L} + h[L] \right) \\ $$ Since $A(\omega)e^{j \omega L} + A(\omega)^* e^{-j \omega L} = 2 \mathcal{Re} \{ A(\omega)e^{j \omega L} \}$ We reduce the frequency response of the FIR filter to: $$H(e^{j\omega}) = e^{-j \omega L} ( 2 \mathcal{Re} \{ A(\omega)e^{j \omega L} \} + h[L] ) $$ Where the thing in the parenthesis is a real valued function and has zero phase term, therefore we call it $B(\omega)$ and deduce that the DTFT $H(e^{j\omega})$ of the causal, symmetric, real valued impulse response $h[n]$ is $$ \boxed{ H(e^{j\omega}) = B(\omega) e^{-j\omega L} }$$ from which we find that the associated phase response of the filter is: $$\phi_h(\omega) = -\omega L $$ and the associated group delay is therefore $$\boxed{ \tau = - \frac{d \phi_h(\omega)} {d\omega} = L}$$ samples... 
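As a small numerical illustration of this result (my own toy example, not from the original answer), a causal symmetric FIR filter of length $2L+1$ shifts an impulse by exactly $L$ samples:

```python
# Direct linear convolution, stdlib only
def conv(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

L = 3
h = [1, 2, 4, 8, 4, 2, 1]   # symmetric: h[n] == h[2L - n], length 2L + 1
x = [0] * 10
x[5] = 1                    # unit impulse at n = 5

y = conv(x, h)
peak = max(range(len(y)), key=lambda n: y[n])
print(peak)                 # 8 = 5 + L: the filter delays the input by L samples
```

The same shift applies to every narrowband component, which is why the useful part of the output of a linear-phase filter starts roughly half its length (e.g. about 782/2 samples for the smallest filter in the question) after the input does.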
% This script demonstrates the use of time-aliasing in an advantageous way % in obtaining the "central" portion of the circular convolution implemented % via the DFT approach... % % Let x[n] of length Lx and h[n] of length Lh = 2*L+1 be convolved linearly % % y[n] = x[n] * h[n] % % so that "full" length of y[n] is: Ly = Lx + Lh -1 % % However in some cases we are only interested in those samples of length Lx % that relate to input x[n] which are at the "central" portion of y[n], whereas % the initial and final L samples carry transient information. Hence we take % % yc[n] = y[L:L+Lx-1] % % When a linear convolution is implemented with DFT, the length of DFT must % satisfy the following; if we are to avoid time-aliasing: M >= Lz=Lx+Lh-1 % However this method computes ALL the full samples of y[n] in addition to the % central ones. Hence if we wish to compute only the central portion of % linear convolution y[n], then we can allow some aliasing in the DFT method % The amount of which is indicated in the length of DFT as: % % M = Lz - L % % Then this will produce a result ya[n] whose last Lx samples carry the % required information such that % % yc[n] = y[L:L+Lx-1] = ya[Lz-L-Lx:Lz-L-1] % % in matlab notation the indices are incremented by 1 as: % % yc(:) = y(L+1:L+Lx) = ya(Lz-L-Lx+1:Lz-L); % % ============================================================================== clc; clear all; close all; % S0 - Define the parameters: % --------------------------- Lx = 32; L = 15; Lh = 2*L+1; Lz = Lx+Lh-1; % S1 - Generate the signals: % -------------------------- x = randn(1,Lx); h = randn(1,Lh); % S2 - implement time-domain convolution first: % --------------------------------------------- yt = conv(x,h); y1 = yt(L+1:L+Lx); % get the central portion of the convolution % S3 - Implement frequency domain DFT based aliased-circular conv: % ---------------------------------------------------------------- M = Lz-L; % DFT length (with aliasing) yt = real( ifft( fft(x,M).*fft(h,M) ,
M) ); % compute aliased result y2 = yt(L+1:end); % S4 - compare the results % ------------------------ figure,stem(y1,'b') hold on stem(y2,'g') figure,stem(y1-y2);title('the error');
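Here is a stdlib-only Python re-check of the identity the script relies on (my own translation; it computes the circular convolution directly rather than via the FFT, which gives the same result): with DFT length M = Lz - L, the aliased result still contains the central portion y[L : L+Lx] in its last Lx samples.

```python
import random

random.seed(0)
Lx, L = 8, 3
Lh = 2 * L + 1
Lz = Lx + Lh - 1
M = Lz - L                  # deliberately shorter than Lz -> time aliasing

x = [random.randint(-5, 5) for _ in range(Lx)]
h = [random.randint(-5, 5) for _ in range(Lh)]

# Full linear convolution, length Lz
y = [sum(x[k] * h[n - k] for k in range(Lx) if 0 <= n - k < Lh)
     for n in range(Lz)]

# Length-M circular convolution (what ifft(fft .* fft) of size M computes)
xp = x + [0] * (M - Lx)
hp = h + [0] * (M - Lh)
ya = [sum(xp[k] * hp[(n - k) % M] for k in range(M)) for n in range(M)]

print(ya[M - Lx:] == y[L:L + Lx])   # True: the last Lx aliased samples are the central part
```

The first L samples of ya are corrupted by wrap-around (they receive y[n] + y[n+M]), but for n >= L the index n + M already falls past the end of y, so those samples come out clean.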
{ "domain": "dsp.stackexchange", "id": 5695, "tags": "dft, finite-impulse-response, filtering, group-delay, fast-convolution" }
Do human women smell different before they enter labour?
Question: Surfing youtube, I found a video where a cat shows a protective behaviour towards a pregnant woman soon to enter labour. In the myriad of average (dumb) comments I found another person stating that her cat felt precisely (via olfactory means I guess) that her (the woman's - not the cat's) day had come to give birth and started acting weirdly. Do mammals leave a clear olfactory footprint before labour? And is this footprint noticeable by cats and dogs? Is this definitely a myth? Is this something yet to be studied, or is this something well known? If the latter is the case: is it based on the sense of smell? Is there anything to read to study the topic? Answer: It is not a definite myth but is a subject of much debate. However, it may well be that it is from smell (olfaction). We certainly know that cats have a significantly larger number of scent receptors, being approximately 200 million, compared to the 5 million receptors humans have. Cats also have an organ called the Jacobson's organ, found in many animals, that helps to pick up scents even more, especially chemicals such as hormones. As an example, the primary hormone released to progress labour is oxytocin, which causes the body to perform homeostatic positive feedback (the initial stimulus is increased until an end result is reached). This begins when the baby's head puts pressure on the cervix, which triggers contractions. This in turn starts oxytocin release, which speeds up both the frequency and intensity of the contractions. This increased concentration of oxytocin could well be what the cat is smelling, as animals are well known to detect human hormones such as adrenaline in situations of fear. Although oxytocin is usually present in the body even apart from childbirth, in the moments leading up to labour and just afterwards the oxytocin concentration is at its highest.
The cat probably sensed the 'abnormally' strong smell and began to behave accordingly as it recognised it from instinct (cats release oxytocin in labour too). It's still all very hypothetical though so I don't know of any authoritative investigations for you to have a look at – it’s one of those anecdotal subjects which has yet to have any concrete evidence.
{ "domain": "biology.stackexchange", "id": 8041, "tags": "human-biology, perception, pregnancy, olfaction" }
Why is $Y=\beta_0 x^{\beta_1} e$ a linear model?
Question: Why is $Y=\beta_0 x^{\beta_1} e$ a linear model? When we apply the transform, it becomes $\ln Y = \ln\beta_0+\beta_1 \ln x +\ln e$, and why is it still linear when the $\beta_0$ part is under $\ln$? Answer: The term "linearity" is context-dependent, so a linear regression model is not necessarily the same as a linear function. A linear function is classified as such via the superposition principle, requiring both additivity and homogeneity. We can generalize linear maps to multivariate functions simply: Additivity: $f(\vec{x}_1+\vec{x}_2)=f(\vec{x}_1)+f(\vec{x}_2)\enspace \forall\ \vec{x}_1,\vec{x}_2\in \Bbb{R}^n$ Homogeneity: $f(\alpha \vec{x})=\alpha f(\vec{x})\enspace \forall\ \vec{x}\in \Bbb{R}^n$ and $\alpha\in \Bbb{R}$ So the equation $f(\beta_0,\beta_1)=\beta_0 x^{\beta_1}e$ is a nonlinear function because it is neither additive nor homogeneous. However, w.r.t. linear regression, linear models are of the form $Y=\beta_0+\beta_1f_1(x_1)+\beta_2f_2(x_2)+\cdots+\beta_nf_n(x_n)+c$, regardless of whether any $f_i$ is a nonlinear map. So, after a nonlinear transformation such as the natural logarithm, you can consider the resulting regression model as the form $Y'=\beta_0'+\beta_1\ln(x)+1$ where the primes denote the natural-log-transformed variables. This demonstrates linearity between $Y'$ and the parameters $\beta_0'$ and $\beta_1$.
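A small numerical sketch of the consequence (the parameter values are made up for illustration): because the transformed model is linear in $\ln\beta_0$ and $\beta_1$, ordinary least squares on the pairs $(\ln x, \ln y)$ recovers the parameters of noise-free data exactly.

```python
import math

# Noise-free data generated from Y = b0 * x**b1 (hypothetical values, not from the post)
b0_true, b1_true = 2.5, 0.7
xs = [1.0, 2.0, 3.0, 5.0, 8.0]
ys = [b0_true * x ** b1_true for x in xs]

# Ordinary least squares on the transformed pairs (ln x, ln y)
u = [math.log(x) for x in xs]
v = [math.log(y) for y in ys]
n = len(u)
ubar, vbar = sum(u) / n, sum(v) / n
b1_hat = sum((ui - ubar) * (vi - vbar) for ui, vi in zip(u, v)) / \
         sum((ui - ubar) ** 2 for ui in u)
b0_hat = math.exp(vbar - b1_hat * ubar)

print(round(b1_hat, 6), round(b0_hat, 6))   # 0.7 2.5 -- exact recovery
```

With multiplicative noise instead of the exact values, the same fit would return least-squares estimates; the point is only that the fitting problem after the transform is an ordinary linear one.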
{ "domain": "datascience.stackexchange", "id": 8349, "tags": "regression, linear-regression" }
Why do photons follow a specific path after reflection from a mirror surface if they can be emitted in any direction by the electrons of the mirror surface?
Question: The electron absorbs the energy of a photon (with a specific frequency) and re-emits the photon. The photon can be emitted in any direction. So why do photons get re-emitted in a specific direction after reflection? On hitting the surface along the normal, the photons travel back along the reverse normal, and on hitting at an angle they follow a V-shaped path. Answer: Although a single photon can only be absorbed and emitted by a single electron, it leaves that electron in exactly its original state. There is no record, and no way of knowing, which electron absorbed and emitted the photon. According to quantum theory, to calculate the result when any electron could have absorbed and emitted the photon, we must form a superposition of all the processes which could have taken place. The calculation in quantum mechanics takes the form of wave mechanics -- this does not have to mean that there is actually a wave, only that the mathematical theory behaves as though there was a wave. Since a wave would reflect at a particular angle, that is also what happens for photons. The reasons for this strange quantum behaviour are deep and subtle. Quantum mechanics is actually a theory of probabilities, not a theory of physical waves. The mathematics tells us that the probability of a particular angle of reflection is actually a certainty; we can even show that this mathematical behaviour is necessary for a consistent probabilistic interpretation of quantum theory. It is much harder, probably impossible, to conceptualise exactly what is physically taking place at the level of elementary particles leading to these kinds of results.
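A toy calculation, entirely my own construction rather than part of the answer, shows why the superposition singles out the specular direction: among all points on the mirror the photon could bounce from, the path length (and hence the phase) is stationary exactly where the angle of incidence equals the angle of reflection, so contributions near that point add in phase and dominate.

```python
import math

S = (-1.0, 1.0)   # hypothetical source position
D = (2.0, 2.0)    # hypothetical detector position; the mirror lies along y = 0

def path_length(x):
    # source -> mirror point (x, 0) -> detector
    return math.dist(S, (x, 0.0)) + math.dist((x, 0.0), D)

# Scan candidate bounce points and find where the path is stationary (here, minimal)
xs = [i * 0.001 for i in range(-3000, 3001)]
x_star = min(xs, key=path_length)

# Equal angles predict x = 0 for this geometry: reflect S in the mirror to get
# S' = (-1, -1); the straight line from S' to D crosses y = 0 at x = 0.
print(abs(x_star) < 0.01)   # True
```

This is the stationary-phase picture: paths far from the specular point contribute phases that rotate rapidly and cancel, which is the classical-wave limit of the quantum superposition the answer describes.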
{ "domain": "physics.stackexchange", "id": 66101, "tags": "visible-light, photons, reflection, wave-particle-duality" }
Why is the zero of standard enthalpy of formation a convention?
Question: The standard enthalpy of formation $\Delta H_f^°$ of pure elements is zero by definition. Why is that a convention? It is true that enthalpy is defined only up to a constant (like energy and entropy), but the enthalpy of formation is actually a change in enthalpy, so the constant shouldn't matter. Also, how could we possibly choose something other than zero if there's no heat transfer? Note: I'm using General Chemistry (Ralph Petrucci) Answer: It starts with this: The standard enthalpy of formation of any substance, $\Delta H^\circ_\mathrm{f}$, is (by convention) defined as the enthalpy change for the reaction at 1 bar and a specified temperature (usually $\pu{298.15 K}$), in which the product is 1 mole of that substance, and the reactants are its component elements in their respective standard states. For instance, $\Delta H^\circ_\mathrm{f}$ for $\ce{H2O_{(l)}}$ is equal to $\Delta H^\circ$ for the following reaction at standard state: $$\ce{H2_{(g)} + 1/2O2_{(g)}->H2O_{(l)}},$$ since $\ce{H2_{(g)}}$ and $\ce{O2_{(g)}}$ are the respective standard states for hydrogen and oxygen. Once you've accepted this definition as the convention for determining $\Delta H^\circ_\mathrm{f}$ for any substance, it directly follows that the value of $\Delta H^\circ_\mathrm{f}$ for any element in its standard state must be zero. For instance, $\Delta H^\circ_\mathrm{f}$ for $\ce{H2_{(g)}}$ is equal to $\Delta H^\circ$ for the following reaction: $$\ce{H2_{(g)} -> H2_{(g)}},$$ which is necessarily zero. To give an analogy: Suppose you define the "altitude of formation", $\Delta z_f$, of any location on earth as the change in altitude necessary to reach that location from sea level. Hence $\Delta z_f$ for the summit of Mt. Everest is $\Delta z$ for the altitude change associated with: $$\ce{sea level -> summit of Everest},$$ which is 29,029 feet.
It necessarily follows, from this convention, that $\Delta z_f$ for any location at sea level is zero, since that would be equal to $\Delta z$ for: $$\ce{sea level -> sea level}$$.
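To see the convention doing real work, here is a Hess's-law arithmetic check in code (the $\Delta H^\circ_\mathrm{f}$ values are standard tabulated numbers at 298 K and should be treated as approximate): because $\ce{O2_{(g)}}$ is assigned zero, the formation-enthalpy bookkeeping yields the reaction enthalpy directly.

```python
# Standard tabulated values of dHf at 298 K in kJ/mol (approximate, from common tables)
dHf = {
    "CH4(g)":  -74.8,
    "O2(g)":     0.0,   # element in its standard state: zero by the convention above
    "CO2(g)": -393.5,
    "H2O(l)": -285.8,
}

# CH4 + 2 O2 -> CO2 + 2 H2O
products  = dHf["CO2(g)"] + 2 * dHf["H2O(l)"]
reactants = dHf["CH4(g)"] + 2 * dHf["O2(g)"]
dH_rxn = products - reactants

print(round(dH_rxn, 1))   # -890.3, the familiar combustion enthalpy of methane
```

Any other consistent choice of reference values for the elements would cancel out of the sum in exactly the same way, which is why zero is simply the most convenient convention.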
{ "domain": "chemistry.stackexchange", "id": 14416, "tags": "enthalpy" }
A "natural" decidable problem not in $\mathsf{NP}$?
Question: Are there any "natural" examples of decidable problems that are definitively known not to be in NP? The decidable languages I know of that are not contained in NP are usually derived from the time hierarchy theorem, which produces "artificial" languages based on diagonalization. Answer: From an answer to a related question on NP-hard problems which are not contained in NP: probably the most natural example is determining whether two regular expressions (including the Kleene star for arbitrary repetition, and a squaring operation to allow compact expressions of very large fixed numbers of repetitions) are equivalent. The resulting problem is EXPSPACE-complete. Because EXPSPACE contains NEXP, which contains NP strictly (by the time hierarchy theorem), this is a decidable problem which is not in NP.
{ "domain": "cs.stackexchange", "id": 1159, "tags": "np" }
How does the equipartition theorem apply to vibrational modes?
Question: Specifically, how, based on the idea that energy will be shared equally among all degrees of freedom, does one come to the conclusion that each vibrational mode of a molecule will occupy $kT$ units of energy? When learning statistical thermodynamics, classes often spend a lot of time talking about the Maxwell-Boltzmann distribution and even deriving that the expectation value for the energy of a monoatomic ideal gas is $\frac{3}{2}kT$, where $\frac12 kT$ comes from energy distributed to each Cartesian coordinate direction. Now, I find that I learn a whole lot from mathematical derivations of these types of things, and from simply working through the math and understanding its physical meaning. So, I want to know, both on a conceptual level and through some sort of mathematical explanation, why does each vibrational mode of a molecule contribute $kT$ to the overall energy of the molecule when those modes are occupied? My only guess is that one degree of freedom is kinetic energy (associated with the moving atoms) and the other is potential energy (the analogy with a spring works here) but, if that is true, much clarity would be gained from a mathematical demonstration of this. Answer: As you probably already know, the equipartition theorem states that each quadratic degree of freedom will, on average, possess an energy $\frac{1}{2}kT$, where a 'quadratic degree of freedom' is one for which the energy depends on the square of some property. Now let's consider the kinetic and potential energies associated with translational, rotational and vibrational energy: Translational degrees of freedom: $~E = \frac{1}{2}mv^2$ Rotational degrees of freedom: $~E = \frac{1}{2}Iω^2$ Vibrational degrees of freedom: $~E = \frac{1}{2}mv^2 + \frac{1}{2}kx^2$ As you can see, for translational and rotational degrees of freedom there is only one quadratic term. Hence each translational and rotational state will contain $\frac{1}{2}kT$ energy.
However, for vibrational states the energy contains two quadratic contributions. Therefore each vibrational state will contain $kT$ energy. Now let's consider the reasoning behind this. You are right when you say that the vibrational degree of freedom depends on kinetic and potential energy and hence has 2 quadratic contributions. In the equation for the energy of the vibrational state, $\frac{1}{2}mv^2$ represents the kinetic energy and $\frac{1}{2}kx^2$ represents the potential energy. A molecule in the vibrational state acts like a harmonic oscillator. So let's consider $H_2$. You can imagine that the bond between the 2 hydrogen atoms is like a spring which contracts and expands in length. The average bond length can be thought of as the equilibrium length of the bond. When the bond contracts or expands, the displacement from this equilibrium length is known as 'x'. Now, using Hooke's Law, the potential energy of the system is $\frac{1}{2}kx^2$, where k is the spring constant, while the kinetic energy is $\frac{1}{2}mv^2$. Hence the total energy of the system is given by: $~E = \frac{1}{2}mv^2 + \frac{1}{2}kx^2$
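The claim that each quadratic term contributes $\frac{1}{2}kT$ can also be checked by evaluating the classical Boltzmann average numerically (a sketch of mine, in units where $k_\mathrm{B} = 1$ and with an arbitrary spring constant, both assumptions of this example):

```python
import math

T = 2.0    # temperature in units where k_B = 1 (an arbitrary choice of units)
k = 1.0    # "spring constant" in the same arbitrary units
dx = 1e-3
xs = [i * dx for i in range(-20000, 20001)]   # integrate over [-20, 20]

# Classical Boltzmann average of the quadratic term (1/2) k x^2:
# <E> = integral of E(x) exp(-E(x)/T) dx / integral of exp(-E(x)/T) dx
num = sum(0.5 * k * x * x * math.exp(-0.5 * k * x * x / T) for x in xs) * dx
den = sum(math.exp(-0.5 * k * x * x / T) for x in xs) * dx
avg_potential = num / den

print(round(avg_potential, 6))   # 1.0, i.e. T/2: half kT per quadratic term
```

The kinetic term $\frac{1}{2}mv^2$ has exactly the same quadratic form, so it contributes another $T/2$, giving $kT$ per vibrational mode in total.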
{ "domain": "chemistry.stackexchange", "id": 4140, "tags": "physical-chemistry, thermodynamics" }
Validating user access to a function
Question: I was reviewing the following code block and noticed there was a validation check at the top. The 3rd line in the function checks if the invoice belongs to the logged-in customer. I am wondering what the correct way of doing this might be. Some options that come to my head are: GetInvoice should take an overload and also take in the loggedInCustomerId. The GetInvoice method would then throw an exception if it was an invalid call. (Would the business logic be the best place for this?) Filter attribute should be created (I think this is overkill) What would the best approach to something like this be? public ActionResult ViewInvoice(Guid invnum) { int loggerInCustomerId = GetTheLoggedInCustomerId(); Invoice invoice = _invoiceLogic.GetInvoice(invnum); if (invoice.CustomerId != loggerInCustomerId) { //Invalid Action return RedirectToAction("Index", "MyInvoices"); } //do other stuff as normal } Answer: public ActionResult ViewInvoice(Guid invnum) Even though one could figure out that invnum means "invoice number", this name isn't good. First, it isn't a number but a globally unique identifier, and second, abbreviations shouldn't be used for variable naming because they reduce readability. -> how about renaming this to invoiceId ? int loggerInCustomerId = GetTheLoggedInCustomerId(); The variable name is misspelled; you should change it to loggedInCustomerId The "validation" part, whether an invoice belongs to the currently logged-in customer, should be done by querying the database (if one is used) to get the invoice which matches the invoiceId and the loggedInCustomerId. An overloaded GetInvoice(GUID, int) could be used in the business layer (will be _invoiceLogic) which shouldn't throw an exception but return either null or some Invoice.Empty constant. Throwing an exception doesn't buy you much but will slow down the application because you will also need to handle it.
Maybe a method like Dictionary<TKey, TValue>.TryGetValue() would be a good fit, so one could write something along these lines public ActionResult ViewInvoice(Guid invoiceId) { int loggedInCustomerId = GetTheLoggedInCustomerId(); Invoice invoice; if (!_invoiceLogic.TryGetInvoice(invoiceId, loggedInCustomerId, out invoice)) { return RedirectToAction("Index", "MyInvoices"); } //do other stuff as normal } A version without accessing the data layer could then look like public bool TryGetInvoice(Guid invoiceId, int customerId, out Invoice invoice) { invoice = GetInvoice(invoiceId); if (invoice.CustomerId == customerId) { return true; } invoice = null; return false; } A version with a fictitious data layer (because I don't know if you use one) could look like this public bool TryGetInvoice(Guid invoiceId, int customerId, out Invoice invoice) { invoice = _dataLayer.GetInvoice(invoiceId, customerId); return invoice != null; } If you use a TryGetXXX method, you need to make sure that such a method never throws an exception, because it is expected that it doesn't.
{ "domain": "codereview.stackexchange", "id": 18047, "tags": "c#, security, mvc, asp.net-mvc" }
Can we approximate the number of possible witnesses for an NP language?
Question: Assuming $L \in NP$, meaning that $L$ can be verified by some $V(x,y)$, such that there exists a polynomial $P(X)$ such that $|y| \leq P(|x|)$ for every $x \in L$. I was wondering - can I say something about the size of the possible witness space? My intuition says that this number is bounded by $O(2^{P(|X|)})$ (all possible subsets of a certain problem, all possible strings of length at most $P(|x|)$, etc.). Thanks Answer: The class of counting problems associated to languages in NP is called #P. More formally, $f:\mathbb{N}\rightarrow\mathbb{N}$ is in $\#P$ if there exists a nondeterministic polynomial time Turing machine $M$ such that for all $x\in\Sigma^*$, $f(x)$ is the number of accepting paths of $M$ on input $x$. This is equivalent to requiring the existence of a deterministic verifier $M(x,y)$ such that $f_M(x)=\big|\big\{y: M(x,y) \text{ accepts}\big\}\big|$. You ask, given a language in $NP$ with its verifier $M(x,y)$, can we approximate $f_M$? If the witnesses are of length $p(|x|)$ then obviously $0\le f_M(x)\le 2^{p(|x|)}$, but without knowing more about $M$ we can't really say anything else about $f_M$. Note that computing $f_M$ is harder than asking whether a witness exists (obviously if you could count then you could determine existence). In fact, counting is considered much harder, as demonstrated by Toda's theorem (given an oracle for counting, one could decide the entire polynomial hierarchy in polynomial time). So how hard is approximating such functions, or (allow me to focus on a specific $\#P$-complete language) how hard is it to approximate $\#SAT(\varphi)$, the number of satisfying assignments of a given CNF? In these lecture notes by Jonathan Katz, a constant factor approximation is considered, i.e. an algorithm producing a number inside the range $\left[2^{-k}\#SAT(\varphi),2^k\#SAT(\varphi)\right]$ for some fixed $k\in\mathbb{N}$. 
They show that with high probability, given an oracle for regular SAT, one can produce such an approximation in polynomial time (the algorithm can also be strengthened to yield a $1\pm p^{-1}(|\varphi|)$ approximation). So while exact counting is considered much harder than $NP$, an $NP$ oracle is enough to approximate $\#SAT$ with high probability.
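For concreteness, here is a small brute-force sketch (my own illustration, not from the lecture notes) of the exact counting function $\#SAT$; the exponential loop over all $2^n$ assignments is precisely the cost that the approximate-counting results with an NP oracle let you avoid:

```python
from itertools import product

def count_sat(clauses, n_vars):
    # Exact #SAT by brute force over all 2^n assignments.
    # Clauses use DIMACS-style literals: k means "variable k is true",
    # -k means "variable k is false".
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x3) has 4 satisfying assignments over 3 variables
phi = [[1, 2], [-1, 3]]
print(count_sat(phi, 3))  # -> 4
```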
{ "domain": "cs.stackexchange", "id": 11564, "tags": "time-complexity, np" }
Identify bug found eating wood table
Question: This bug came out of a hole in a wood table. I'm in south-west France, but since we bought that table one year ago, it was most probably in the table when we bought it, so it could be from anywhere. I'd like to know what it is. More context and pictures in https://diy.stackexchange.com/questions/232607/holes-in-wood-furniture-identifying-the-cause. Answer: I'm not an entomologist at all, or even in Europe, so take my answer with a grain of salt. I think this is one of the long-horn beetles, a family of beetles whose larval stage bores into wood. In particular I think this is in the genus Clytus and may be Clytus arietis, which is quite common in Europe and North America. There are other species in the genus that look pretty similar too (e.g. C. rhamni), so I can't be sure. C. arietis is pretty small - about 10-18 mm, which looks about right for your hole size, and it has very similar patterns on its elytra. It is also hairy, as yours is. Apparently it is called the "wasp beetle" as it is a Batesian mimic, and may even make a buzzing noise when disturbed.
{ "domain": "biology.stackexchange", "id": 11607, "tags": "species-identification, entomology" }
Is my electric kettle collecting old water during the week?
Question: I have an electric kettle at work. On Monday, it's empty and I pour in about 1.5 liters of water. I usually end up drinking about 1 liter per day and refilling 1 liter each morning. At the end of the week, I rinse it out because in my mind, there is .5 liters of "old" water from the beginning of the week on the bottom. Is that true? Or is the water constantly mixed around from being used and then refilled? Answer: Realistically, you're probably very close to having it fully mixed. There are at least three sources of mixing present. Diffusion is a very slow process, so you can ignore it. You will presumably be agitating the water a lot when you pour the new water in, and that might completely mix everything. If that doesn't fully mix the water, there will also be some convection due to the heating you apply when you boil the water for your tea. Here's what you get if you assume that it's fully mixed each day. On Monday, you add 1.5 L and drink 1.0 L, leaving 0.5 L. On Tuesday you add 1.0 L and mix everything up. You then drink 1.0 L of the mixture, leaving 0.3333 L of Tuesday's water and 0.1667 L of Monday's water. On Wednesday you add 1.0 L, mix everything up, and drink 1.0 L of the mixture. Etc. After the $n^{\mathrm{th}}$ day, you have $1.5/3^n$ L of Monday's water left, $1.0/3^{n-1}$ L of Tuesday's water left, and $1.0/3^{n-k}$ L of the $k^{\mathrm{th}}$ day's water left. For the particulars you gave, that leaves you with 6.17 mL of Monday's water 12.3 mL of Tuesday's water, 37.0 mL of Wednesday's water, 111 mL of Thursday's water, and 333 mL of Friday's water, left over after you have poured Friday's tea.
{ "domain": "physics.stackexchange", "id": 4010, "tags": "water" }
Gauss' law for a box without inner charges
Question: I would like to understand Gauss' law for the electric field without using the divergence theorem. I'm already aware of related questions like this. Consider a charge $q$ placed at the origin and a box as a Gaussian surface defined as: $$V = \{(x,y,z) : x \in [-a, a], y \in [b, c], z \in [-d, d]\},$$ with $a>0$, $c > b > 0$ and $d > 0$. As is known, the charge $q$ generates an electric field at a generic point $(x, y,z)$ defined as: $$\vec{E}(x,y,z) = \frac{q}{4\pi \varepsilon_0 (x^2+y^2+z^2)^{\frac{3}{2}}}\begin{bmatrix}x\\y\\z\end{bmatrix}.$$ To ease notation, let $\vec{n}_{x=-a}$ be the outgoing unit normal vector of the face with $x=-a$, and $\Phi_{x=-a}$ the electric flux through this face. It is straightforward that: $$\vec{n}_{x=a} = -\vec{n}_{x=-a} = \begin{bmatrix}1\\0\\0\end{bmatrix},\\ \vec{n}_{z=d} = -\vec{n}_{z=-d} = \begin{bmatrix}0\\0\\1\end{bmatrix},\\ \vec{n}_{y=c} = -\vec{n}_{y=b} = \begin{bmatrix}0\\1\\0\end{bmatrix}.$$ Gauss' theorem asserts that: $$\Phi_{x=-a} + \Phi_{x=a} + \Phi_{y = b} + \Phi_{y = c} + \Phi_{z=-d} + \Phi_{z = +d} = 0,$$ since no charge is present in this box. It's straightforward to show that $\Phi_{x=-a} =- \Phi_{x=a}$ and $\Phi_{z=-d} =- \Phi_{z=d}$. Hence: $$\Phi_{y = b} + \Phi_{y = c} = 0.$$ I get the following: $$\Phi_{y = b} = -\int_{-a}^{a} \int_{-d}^{d}\frac{qb}{4\pi \varepsilon_0 (x^2+b^2+z^2)^{\frac{3}{2}}}dx dz,\\ \Phi_{y = c} = +\int_{-a}^{a} \int_{-d}^{d}\frac{qc}{4\pi \varepsilon_0 (x^2+c^2+z^2)^{\frac{3}{2}}}dx dz.$$ By introducing $$\eta(s) = \int_{-a}^{a} \int_{-d}^{d}\frac{s}{(x^2+z^2+s^2)^{\frac{3}{2}}}dx dz,$$ we have: $$\Phi_{y = b} = -\frac{q}{4 \pi \varepsilon_0 }\eta(b)\\ \Phi_{y=c} = + \frac{q}{4 \pi \varepsilon_0 }\eta(c).$$ Therefore, in order to have $\Phi_{y = b} + \Phi_{y = c} = 0$, we need that $\eta(c) = \eta(b).$ But this sounds absurd, since it means that the function $\eta(s)$ should be constant. What's the problem with my thoughts? 
Answer: The contradiction stems from the following, and I will explain why. It's straightforward to show that $\Phi_{x=-a} =- \Phi_{x=a}$ and $\Phi_{z=-d} =- \Phi_{z=d}$. Hence: $$\Phi_{y = b} + \Phi_{y = c} = 0.$$ I am going to talk in a coordinate system with a vertical $y$-axis. If the sum of the fluxes through the sides of the box (through the faces $x=a,x=-a,z=d,z=-d$) were zero, then it would follow that the sum of the fluxes through the top and bottom of the box ($y=b,y=c$) should also be zero. However, it is not the case that the total flux through the sides is zero. At all points on the sides of the box, the electric field is pointing outwards, not inwards. Therefore the total flux through the sides must be positive. This means that the sum of the fluxes through the top and bottom of the box must be negative to make the total flux zero. That is to say that $\Phi_{y = b} + \Phi_{y = c} < 0$, so $\eta(s)$ is not constant. Furthermore, even if we didn't know that the flux out of the sides was positive, it would still be clear that the sum of the fluxes through the top and bottom surfaces is not zero: the areas of the top and bottom faces are the same (both are $2a \times 2d$ rectangles), but the electric field decays with distance from the charge at the origin, so it is weaker at the top $y=c$ surface than at the bottom $y=b$ surface. Therefore the flux flowing in at the bottom is greater than the flux flowing out at the top, hence their sum is negative, not zero. Here is a diagram of the situation. The red arrows are roughly what the electric field looks like at each of the corners of the box (note that they are not to scale though - I drew them myself).
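As a sanity check (my own sketch, not part of the original answer), one can evaluate $\eta(s)$ numerically. It is just the solid angle that the $2a\times 2d$ rectangle subtends at the charge, which has the standard on-axis closed form $4\arctan\!\big(ad/(s\sqrt{a^2+d^2+s^2})\big)$, and it clearly decreases with $s$:

```python
import math

def eta_numeric(s, a=1.0, d=1.0, n=200):
    # Midpoint-rule evaluation of the double integral defining eta(s)
    hx, hz = 2 * a / n, 2 * d / n
    total = 0.0
    for i in range(n):
        x = -a + (i + 0.5) * hx
        for j in range(n):
            z = -d + (j + 0.5) * hz
            total += s / (x * x + z * z + s * s) ** 1.5
    return total * hx * hz

def eta_closed(s, a=1.0, d=1.0):
    # Solid angle subtended by a 2a x 2d rectangle at on-axis distance s
    return 4.0 * math.atan(a * d / (s * math.sqrt(a * a + d * d + s * s)))

b, c = 0.5, 2.0
# eta(b) ≈ 3.71 while eta(c) ≈ 0.81 (with a = d = 1): eta is far from constant
print(eta_numeric(b), eta_closed(b))
print(eta_numeric(c), eta_closed(c))
```

The difference $\eta(b)-\eta(c)$ is exactly what the positive side fluxes must compensate for the closed-surface flux to vanish.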
{ "domain": "physics.stackexchange", "id": 67609, "tags": "electrostatics, electric-fields, potential, gauss-law" }
Global symmetries in type IIB string theory vs type IIB supergravity
Question: In the AdS/CFT correspondence I know that the mapping of global symmetries also involves the S-duality, which on the field theory side is $SL(2,Z)$. In type IIB supergravity this duality is $SL(2,R)$. I don't understand how to match these symmetries. Answer: The AdS/CFT correspondence (or this particular, most famous example of it) is the equivalence between the non-gravitational ${\mathcal N}=4$ supersymmetric Yang-Mills theory on one side ("the boundary CFT") and the type IIB superstring theory in $AdS_5\times S^5$ ("the bulk theory") on the other side. The equivalence is exact and the gauge theory is well-defined, so the dual bulk AdS description has to be consistent, too. String/M-theory is the only consistent theory of quantum gravity, so the bulk theory has to be a string theory and not just "supergravity". Supergravity is just a (nonrenormalizable, and for this reason and others, inconsistent) low-energy approximation (effective field theory) of the full theory, string theory. This particular one has the noncompact symmetry $SL(2,{\mathbb R})$. However, quantum effects break it to the discrete subgroup $SL(2,{\mathbb Z})$, the same symmetry that we see on the boundary CFT side. It's not hard to see how it happens. For example, the supergravity contains black 1-branes carrying one of the two string-like charges under the 2-form B-fields. One of them is the "fundamental (F1) string" charge and the other one is the "D1-brane" charge. These two charges transform as a doublet under the $SL(2)$ group. In supergravity used as a classical description, these 1-branes are seen as black-hole-like extended solutions (the black 1-branes), but the amount of charge these BPS objects carry is arbitrary, a continuous real number. The same holds for the two types of 5-branes, those with the NS5-brane charge and the D5-brane charge, another doublet. However, in the quantum theory, the charges cannot be continuous. 
One way to prove this assertion is to appreciate the Dirac quantization rule (which results from the requirement of single-valuedness of the wave function of one object when it orbits around the electromagnetically dual object's Dirac string). F1-strings are electromagnetically dual to the NS5-branes and D1-branes are electromagnetically dual to D5-branes. For the charges of mutually dual objects (just like for the electric charges and magnetic monopoles in 4D), the Dirac quantization rule has to hold: $$ Q_E Q_M \in 2\pi {\mathbb Z} $$ It follows that the allowed F1, NS5, D1, D5 charges are not continuous. They actually belong to a lattice that is "self-dual" in the sense of the Dirac quantization condition above. Such a lattice isn't quite unique. When the allowed minimum charge $Q_E$ is increased $k$ times and the minimum $Q_M$ is decreased $k$ times, the Dirac quantization rule continues to hold. Also, one may mix the electric and magnetic charges (in a way that fully matches the QCD-like theta-angle in the gauge theory). When these conditions are taken into account, one sees that there is a 2-real-dimensional family of possible charge lattices and it may be written as the usual moduli space $$ SL(2,{\mathbb Z}) \backslash SL(2,{\mathbb R}) / SO(2, {\mathbb R}) $$ It may be represented as the "fundamental domain" of the modular group. It's the same as the space parameterized by $g_{YM}$ and $\theta_{QCD}$ in the gauge theory, of course. The moduli space is a quotient on both sides. The left multiplication identifies the points with respect to the actual "U-duality group" $SL(2,{\mathbb Z})$ because elements of this group act in such a way that they map the lattice of allowed charges onto the same lattice. The approximation of the moduli space by "one point" is basically equivalent to assuming that the charges of all the brane-like objects are continuous, not quantized, and to ignoring the Dirac quantization rules and all similar quantum effects. 
This approximation is good enough at "low energies". When we consider the low-energy limit, the only objects that carry such charges are "large branes" that look like macroscopically large extended black holes. And those carry such huge charges, relative to the minimum units, that the charges may be considered continuous. I mentioned the quantization of the brane charges as a reason why the dual theory is the full string theory and not just supergravity. There exist other differences between string theory and supergravity. All of these discriminating features may be checked in AdS/CFT, and in all of them one may see that the gauge theory is dual to the full string theory and not just "supergravity". For example, one may see all the excited string states in terms of traces of long products in the gauge theory (complex composite operators), verify that they interact just like in string theory, see all the wrapped branes, check that string theory cures all the problems of SUGRA with its nonrenormalizability, and many other things. On the gauge theory side, the supergravity limit is obtained by focusing on the planar limit of simple enough single-trace operators, etc. But if one looks beyond this subset of observables and beyond the approximations used to compare the gauge theory with SUGRA, one may see all of string theory, too.
{ "domain": "physics.stackexchange", "id": 24311, "tags": "string-theory, ads-cft, discrete, supergravity, duality" }
Walter Lewin Lectures in HD
Question: I like the lectures by Walter Lewin (8.0x). However the quality of the videos is pretty bad. Is there any way (DVD, web, ...) to get the lecture videos in a good quality, ideally in HD? Answer: I have both very good news and very bad news for you. The good news: The original quality (at least the highest quality I have been able to find) seems to be here: Go to http://videolectures.net/mit801f99_physics_classical_mechanics/ then click on one of the lectures, e.g. http://videolectures.net/mit801f99_lewin_lec04/ then look at the right side of the playing video; there are several links: Download mit801f99_lewin_lec04_01.flv (Video 148.2 MB) Download Video Download mit801f99_lewin_lec04_01.m4v (Video 113.4 MB) Download Video Download mit801f99_lewin_lec04_01.rm (Video 113.4 MB) Download Video Download mit801f99_lewin_lec04_01.wmv (Video 466.0 MB) Download subtitles Download subtitles: TT/XML, RT, SRT As you see, the .wmv version weighs much more, and it is actually at a much better resolution. Now, the bad news: There seems to be no way to download all the .wmv lecture files in bulk. More than that, even if you decide to visit each single lecture page to download them one by one, you have to do it with your browser, not with kget / wget or any other download manager. Additionally, the .wmv (Windows Media Video) format is not 100% compatible with my codecs in Ubuntu. I cannot play the videos with VLC, but rather have to use other programs, and the aspect ratio has to be set manually every time... So please, if you find another possibility to download a higher-resolution version of these fun lectures, tell it here, or e-mail ngc6720.at.gmail.com (Note that there are a couple of isolated higher-resolution lectures on YouTube, but not the complete collection.) I hope this helps.
{ "domain": "physics.stackexchange", "id": 18838, "tags": "soft-question, specific-reference, education" }
What is the cutoff distance in this algorithm?
Question: In the following article: a fast and efficient Cα-based method for accurately assigning protein secondary structure elements What are the minimum and maximum cutoff distances in this algorithm? Also, what is meant by the (2x43)x3 features, i.e. where does (2x43)x3 come from? Answer: The 43 features are listed in the supplementary materials on page 4. For example:

Feature  Type      Atoms Involved
1        Distance  i, i-5
2        Distance  i, i-4
3        Distance  i, i-3
4        Distance  i, i-2
...
11       Angle     i-1, i, i+1
12       Dihedral  i, i+1, i+2, i+3

are some of them; so Feature 11 is the angle between atom(i-1), atom(i), and atom(i+1). Since it says these were calculated for both the C-alphas and pseudo-centers, this gives 2 * 43. Then this is multiplied by three, as mentioned in your quoted paragraph. These features (measurements, almost) are numbers, not booleans - so I do not think that they use distance cutoffs as such, although the random forest method is presumably learning various cutoffs and encoding those in the structure of the forest.
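To make the three feature types concrete, here is a small sketch (my own illustration, not code from the paper) of how distance, angle, and dihedral measurements between Cα-style coordinates are typically computed:

```python
import numpy as np

def distance(p, q):
    # Euclidean distance between two atoms
    return float(np.linalg.norm(p - q))

def angle(p, q, r):
    # Planar angle at q formed by the triple p-q-r, in radians
    u, v = p - q, r - q
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def dihedral(p1, p2, p3, p4):
    # Signed torsion angle between the planes (p1,p2,p3) and (p2,p3,p4),
    # in (-pi, pi]; the standard atan2 formulation
    b1, b2, b3 = p2 - p1, p3 - p2, p4 - p3
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    return float(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

ca = [np.array(v, dtype=float) for v in ([1, 0, 0], [0, 0, 0], [0, 1, 0], [1, 1, 0])]
print(angle(ca[0], ca[1], ca[2]))   # -> 1.5707963... (90 degrees)
```

Each such number becomes one entry of the feature vector fed to the random forest.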
{ "domain": "bioinformatics.stackexchange", "id": 2037, "tags": "proteins, algorithms" }
Is there experimental evidence of time order inversion for spacelike events?
Question: The title sums up the question. Given two events separated by a spacelike interval, say one takes place after the other in an inertial frame; then by a suitable boost we may invert the time order of the two events. (This of course is not possible for timelike separated events.) My question is hence: is there any direct experimental evidence of this? And if not direct, then somewhat indirect. Answer: Relativity of simultaneity is a logical consequence of the invariance of c. Time order reversals have not been tested directly as far as I know, but since the invariance of c, from which the relativity of simultaneity follows, has been tested in a lot of experiments, I would say that the thought experiments regarding this issue are solid.
{ "domain": "physics.stackexchange", "id": 19848, "tags": "special-relativity, experimental-physics, causality" }
Does a walk after a meal help with digestion?
Question: I am a bit confused about the notion that "walking after meals helps you in digestion". Some say that it helps, whereas others oppose it. Can someone come up with a valid explanation for this? Answer: Apparently walking helps in the movement of food through the stomach and improves digestion. It also helps in decreasing blood sugar after meals, which decreases cardiovascular risk and the potential of diabetes, by helping muscles absorb glucose from the blood. Here is the link to a study done comparing the results of walking after food and after a drink. Also, a study was conducted where, when people with Type 2 diabetes took a 20 minute walk just 15 minutes after eating, their post-meal blood sugar levels were lower than if they had walked before dinner or not walked at all. Here is the link to another similar study.
{ "domain": "biology.stackexchange", "id": 4155, "tags": "digestion" }
How to separate two liquids with both having the same properties?
Question: I was curious if it is possible to separate such liquids, e.g. two vegetable oils that happen to have different ingredients, so each is a different liquid, but with the same density etc. Answer: Typical vegetable oils are not just one chemical compound, but a mixture (after removing everything that is not) of glyceride triesters. The chance that there is some overlap between your two oils is large, so the answer to your immediate question is likely no, completely impossible. Even if there is no overlap in the chemical composition, you would probably still have to separate the oil mixture into all its components, and then know how to mix them together again to get the original two oils. The latter is possible, of course, with a reasonable analytical effort; the former is close to impossible to perform on a macroscopic sample. A GC(-MS) can easily separate (and identify) the compounds, but that's a few billion molecules or so. Doing liquid column chromatography would be very tricky; any reasonable setup would give you a few milligrams of each compound, and perhaps there are still a few you can't separate. Distillation is impossible (boiling points are high and too similar), fractionation by crystallisation would only work well for some of the triglycerides, etc. All in all it's a bad mess. Don't mix two oils and try to separate them afterwards. ;-)
{ "domain": "chemistry.stackexchange", "id": 10729, "tags": "mixtures, separation-techniques" }
What is the relation between speed and angular velocity and radius in the given problem?
Question: Can anyone help me in solving the above question? I have considered the bottom-most point, and used the conservation of angular momentum at that point. I am confused about what value of the moment of inertia should be taken: $I$=$MR^2$ or $I= MR^2 + MR^2$? Answer: There are usually 2 phases in the motion. First the object slides while possibly also rotating. Kinetic friction reduces linear velocity and may reduce or increase angular velocity. This phase continues until the no-slip condition is reached. In the second phase there is pure rolling motion. There is a kinetic friction force $F=\mu mg$ acting on the disk, which causes a linear deceleration of magnitude $a=\frac{F}{m}$. This force also exerts a torque $Fr=J\alpha$, where $J=\frac12 mr^2$ is the moment of inertia about the centre of the disk. The torque causes an angular deceleration of magnitude $\alpha=\frac{Fr}{J}$. The linear and angular velocities at time $t$ after the disk is released are $v=v_0-at$ and $\omega=\omega_0-\alpha t$. If the disk stops altogether before pure rolling motion commences, then the linear and angular velocities become zero at the same time $t$. This allows you to find the relation between $v_0, \omega_0$. Reference : Sliding and Rolling - The Physics of a Rolling Ball.
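The answer's equations can be sketched numerically. This is my own illustration (the original problem statement is not shown); the sign convention here assumes friction decelerates both $v$ and $\omega$, e.g. a disk thrown forward with backspin, as the answer's equations suggest:

```python
def slide_phase(v0, omega0, mu=0.3, g=9.81, r=0.1):
    # Sliding phase for a uniform disk (J = m r^2 / 2), in the convention
    # of the answer: kinetic friction mu*m*g decelerates BOTH the linear
    # velocity and the angular velocity.
    a = mu * g                # magnitude of linear deceleration, F/m
    alpha = 2.0 * mu * g / r  # magnitude of angular deceleration, F*r/J
    t_v = v0 / a              # time for v to reach zero
    t_w = omega0 / alpha      # time for omega to reach zero
    return t_v, t_w

# The disk stops dead (v and omega vanish together) exactly when
# omega0 = 2 * v0 / r:
t_v, t_w = slide_phase(v0=2.0, omega0=40.0, r=0.1)
print(t_v, t_w)  # both ≈ 0.68 s
```

Setting $t_v = t_\omega$ reproduces the relation between $v_0$ and $\omega_0$ that the answer asks you to find.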
{ "domain": "physics.stackexchange", "id": 39571, "tags": "homework-and-exercises, angular-momentum, moment-of-inertia" }